There was a time when people learned to program in more than one language. That’s because books on programming were designed around the core tenets of programming, rather than the syntax of specific languages. Nowadays, books are usually along the lines of “Learning to Program in X”, where X can be any language. This is okay, but in reality it’s not optimal. Why? Because so many “core” languages these days derive from C: C++, Java, Objective-C, Swift, and so on. There may be subtle differences, but the underlying structure is the same. If you learned to program in the 1970s and 80s, then the first language you learned was likely Pascal, followed by C, Fortran, Cobol, and Ada. There were really no scripting languages, so instead you learned their forebears: awk, sed, and the shell languages of Unix. But back then there was also less emphasis on “software engineering” type issues – those only really arrived in the late 1980s, and to be honest I don’t know that learning about waterfall models did much for anyone (and we can thank OO for making the whole system more complicated).
There is a school of thought (likely outdated, but who knows) that says that given a structure such as an if statement, it is possible to learn 2-3 languages at the same time. In fact you should be learning about a structure and *how* it can be used in the context of building a program. Far too much time is spent grappling with language-specific concepts, and students get bogged down in a single language. But then again, maybe the languages used to introduce programming are too complex these days?