Kenya Hara on technology

“Technology has no point unless it subtly awakens and activates the senses of its recipients. Looking around, I notice that on the contrary, people today have been gradually developing thick skins because of technology. They wear elasticized or fleece clothing, sit on comfortable sofas and eat potato chips while watching large-screen TVs. They don’t take lessons in cooking or the tea ceremony. They can’t even be bothered to arrange flowers in a vase. Thanks to the calculator, they’ve ceased doing math in their heads or on paper, and are losing the ability to think quickly.”

Kenya Hara

Tips for potential CS grad students

Every week university profs get unsolicited email from people looking for a grad school supervisor. In reality most of these emails are automatically deleted… there are just too few hours in the day. Here are some tips for anyone cold-emailing a professor.

  • Write a genuine email – Many people send emails that are less than personable. It almost seems like they come off a form-letter production line, with the prof’s research specialties “copied and pasted” from some university website. It basically shows that the potential student has little or no understanding of the person they are emailing. Sometimes the copy-and-pasted sections are even in different fonts, or in double quotes – things which are always red flags.
  • Find a prof with matching research interests – People make broad assumptions about what research topics the prof is interested in. Statements like “your research on this field perfectly aligns with my interests” are very vague, and honestly show how little research you have done.
  • Read the prof’s profile properly – My university profile clearly says that I am not taking on grads, undergrads, or postdoctoral fellows. Failure to read that basic information just indicates that you don’t read properly… not a great quality. My blog page also distinctly says I’m not taking on any students anymore. Nobody will take on students who can’t follow basic instructions.
  • Don’t use flowery prose when writing – Saying how wonderful a prof is when you don’t even know them is just off-putting. Don’t use words like esteemed, respected, revered, etc. – they just aren’t necessary.
  • Choose a department that matches your field – If you have a degree in computer engineering (hardware), it’s likely that you just won’t have the required skills for a postgraduate degree in computer science. Your undergrad has to somewhat align with your postgraduate interests. You can’t do an MSc in history without an undergrad in history, and it is similar in computing. There are exceptions to the rule, but try to align yourself to the proper degree.
  • Make sure you have the right skills – HTML is not a programming language, and knowing just Python/MATLAB is not enough programming skill to really excel in a postgrad degree in CS (and neither are MS certifications). You should have skills in C, Fortran, C++ etc. (even Java I guess). If not you will have to teach yourself, and quickly – a postgrad degree in CS assumes some level of proficiency.
  • Make sure your English skills are up to par – Dissertations in universities where the language is English require you to be proficient in reading and writing English. No professor wants to have to edit a thesis numerous times for basic language constructs. It’s honestly tedious. As I’ve mentioned before, improving your English skills means reading more English material (and I don’t mean journal articles), and actually writing things. A high mark on some standardized English test is usually not that meaningful (in fact most standardized tests are somewhat dubious).
  • GPAs aren’t that meaningful – Grades aren’t always a good signal. You might have a 3.9/4.0 GPA, but have taken mostly exam-heavy courses that you happen to do well in. Have you ever designed and implemented a large-scale software project? Can you explore problems and write?
  • Be self-motivated – As a grad student, you have to be able to run your research program. So if you’re sending someone an email requesting they supervise your PhD, you had better have some idea what you actually want to research. Just saying “your research aligns with mine” is meaningless.
  • Don’t bend the truth – Don’t pad a resume with things you aren’t knowledgeable about. Don’t write that you have “mastered” C or Python, as I very much doubt it. Ten or 20 years of writing code in C every day might make you a master; a couple of classes does not.

The reality is that not everyone is suited to doing a postgraduate degree, and having an undergraduate degree does not automatically mean you have enough knowledge to be admitted, let alone do well. Success in undergraduate coursework programs does not always equate to success in graduate research degrees. The bottom line is that if you are writing to someone asking them to be your supervisor, you should treat it just like you are applying for a job. If you can’t be bothered putting in the effort, then don’t be surprised if the prof can’t be bothered emailing you back.

It’s the end of social media as we know it, and I feel fine

It may be hard for some people to process the great shedding of employees that is happening, or will happen, at many social media companies. Recently Meta announced 11,000 layoffs, about 13% of their workforce – a substantial cut. Twitter is shedding employees as well. But to anyone with a concept of how software companies work, this is not exactly surprising.

Why is this happening? One of the possible reasons is that some of these social media companies are finding that interest in social media is somewhat waning, and not for the reason you would think. Social media had a lot to offer once, and many of the apps provided interesting means of communicating with the greater world. But a lot of what social media offers now is limited to increased ads to generate revenue. It’s also possible that these companies just have too many employees, especially in a time when some of the tedious work previously done by humans is now slowly being transitioned to AI (whether that’s good or bad is left for you to determine).

Finally, and most importantly, very few of these social media companies produce anything really tangible. They maintain a few apps, and mine large cohorts of data. They are in some ways just advertising platforms. Eventually this comes back to bite you. If enough people decide they are done with your platform and move on, well, (ad) revenue falls – and there are many reasons why people leave. Maybe they have had enough of social media, or are annoyed by some new feature. For example, Instagram was an excellent photo site… but adding reels has turned it into a bit of a gong show. YouTube was great before you had to watch a bazillion ads in every video – which becomes annoying quite quickly. There is also a growing amount of content which is less than appealing – stuff people should honestly keep to themselves.

Will more of these companies shed employees? Likely, and it’s really their own fault. They are mostly data companies, relying on people to join, mining their data, and making a good amount of revenue off advertising. But they don’t produce anything tangible, so eventually they will go into decline. It’s no different from what has happened in other industries. Who will survive? Content-sharing sites that provide tangible information people want will likely be less impacted than social media companies that just rely on human interaction. It is very likely, as Ian Bogost suggests, that people just aren’t meant to talk to one another as much as they do. Social media has become somewhat of a lifestyle, for some an obsession, and companies have taken advantage of that.

Or maybe it’s time to consider changing to another platform: Glass ($, but no ads), or perhaps VSCO (free/$) for photography; Patreon for video content ($).

Python, where art thou?

People always say Python is such a wonderful language, and to be quite honest it is great for gluing stuff together, and ad-hoc-ing concepts. But while Python currently sits on top of the stack, it doesn’t really have much to show in the way of successful large-scale applications, beyond a few large websites. In fact some commentary is starting to turn against Python, even suggesting avoiding it for large-scale projects. The main reason is that it’s too slow for many tasks. But there are deeper issues, and they have to do with the structure of the language, and what people learn when programming it.

People who learn Python invariably learn some things that aren’t really applicable to many other languages. In many ways learning Python is akin to learning shell programming, only with many more restrictions. It’s then much harder to transition to more expressive programming languages like C or Java. One of the issues is spacing. Python is one of the few modern languages which forces programmers to indent code, such that indentation performs the task of block structuring. Whitespace-sensitive languages impose a lot more restrictions, which is the reason most languages ditched the idea years ago.
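A minimal sketch of the whitespace point (the function names here are hypothetical): shifting one line’s indentation produces a different program, and nothing in the language flags it.

```python
# In Python, indentation alone defines block structure; there are
# no braces or begin/end keywords to fall back on.
def total(values):
    t = 0
    for v in values:
        t += v
    return t            # dedenting ends the loop body

# The same statements with one line indented differently:
def total_early(values):
    t = 0
    for v in values:
        t += v
        return t        # now inside the loop: returns after one item

print(total([1, 2, 3]))        # 6
print(total_early([1, 2, 3]))  # 1
```

Both versions are syntactically valid, so the bug only shows up at run-time, if at all.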

Another is Python’s approach to typing – for example, the idea that dynamic typing is better than static typing. If you write x = 5.6, the Python interpreter looks at the 5.6, decides it’s a float, and makes x a float storing 5.6. If the next line says x = 4, then because 4 is an integer, x becomes an integer. Other languages like C and Fortran use static typing, where x is declared to be a specific type that can’t change. Dynamic typing can lead to issues when code in a program becomes incorrect because the type of a variable changes, i.e. a lack of type safety.

x = 5.6
x = 4
x = "hello"

In Python you can easily invent variables as you go along. So in large programs it becomes easy to “reuse” a variable somewhere and break the program (and it can take a long time to find the bug). If you wanted to store the values shown above in Fortran, you would need three variables:

real :: x = 5.6
integer :: y = 4
character (len=5) :: str
str = "hello"

This leads to far fewer problems than in Python. That’s not to say that Python won’t allow a variable to be explicitly typed, but only experienced programmers might think that necessary. Python is a dynamically and strongly typed language, but because the type checking is only done at run-time, it is at best a semi-type-safe language.

Modules and libraries are also a bit of a mess. While external reusable code is supposed to make life easier, it’s easy for the novice programmer to get lost in the quagmire of dependencies created by all these add-ons, something Pythonistas term dependency hell. For instance a program could be dependent on modules A and B, but A could depend on modules C, D and E, and B on D and F, while D and E could depend on G, H, and I. Never mind circular dependencies. It often happens that when a user downloads a library X, they find on install that it requires specific versions of 10 other libraries. It’s enough to make your mind explode.
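A toy model of the dependency graph described above, using the same hypothetical module names: a program needing only A and B quietly drags in seven other modules.

```python
# Hypothetical dependency graph: each module lists what it depends on.
deps = {
    "A": ["C", "D", "E"],
    "B": ["D", "F"],
    "D": ["G", "H", "I"],
    "E": ["G", "H", "I"],
    "C": [], "F": [], "G": [], "H": [], "I": [],
}

def transitive(pkg, seen=None):
    """Collect every module reachable from pkg."""
    seen = set() if seen is None else seen
    for d in deps.get(pkg, []):
        if d not in seen:
            seen.add(d)
            transitive(d, seen)
    return seen

needed = sorted(transitive("A") | transitive("B"))
print(needed)   # ['C', 'D', 'E', 'F', 'G', 'H', 'I']
```

Real package managers do this resolution with version constraints attached to every edge, which is where the “specific versions of 10 other libraries” pain comes from.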

The ultimate problem is transitioning from Python to other programming languages. Yes, using Python is easy for the beginner. But it is arguably difficult to write large-scale software in an interpreted language, which really means that no stand-alone program is produced by Python – it interprets and runs the code every time. Compiling code produces a binary executable, much like the python interpreter itself. You will nearly always get faster, more efficient performance out of a compiled program.

I dislike OS upgrades because they break things

OS upgrades serve a purpose, but I’m always very hesitant to do them, mostly because I know that some things will end up broken. Mostly this means programs and compilers. When I administered a Windows NT network in the ’90s, when upgrades came we put them in the cupboard for 3 months (obviously it’s easier with online updates versus CDs). When I upgraded macOS to Ventura (13.0.1) everything seemed fine, until I tried to use the Free Pascal Compiler, which of course could no longer find a library… the ubiquitous:

ld: library not found for -lc

Finding a solution wasn’t that easy, but it didn’t take that long. Having said that, the documentation for many of these compilers is less than stellar. I ended up using the fpc.cfg file from Mark Damon Hughes’ post on FreePascal Building, and modifying some of the paths. Particularly:

-XR/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk

In the .zshrc file I added a line which specifies an explicit path to find the fpc.cfg.

export PPC_CONFIG_PATH=/etc

Now the compiler works again! But that of course is not the end of the issues: an OS upgrade does not mean that the Xcode Command Line Tools will be upgraded, so that has to be done too.

xcode-select --install

You can find out the current version of the CLT with xcode-select --version. Now this will work, and all my command-line compilers work, but Xcode itself is also not up to scratch, so it won’t even run. Best to also upgrade Xcode to version 14. You can determine the Xcode version with /usr/bin/xcodebuild -version.

That’s the trick of course, an OS upgrade does not always mean that things stay the same. And never, oh never upgrade an OS during a critical software development moment (upgrades are one of the biggest issues for outages on systems).

Where is my flying car?

For decades there were many fanciful ideas about what the future would hold. Read any copy of Popular Mechanics or Popular Science from before 1960, and there are numerous predictions of a technological future. Moving sidewalks, atomic trains and aircraft, flying cars, etc. I mean in 1957 Edwin J. Kirschner even published a book The Zeppelin in the Atomic Age, which explored the idea of atomic airships (now that would have been a really bad idea). Most of these things never really panned out, mainly because thinking up ideas is really easy, but putting them into reality is nearly always a lot of bother.

The only real predictions that likely came true, and in many respects were understated, were those related to “brains that click”. In the late 1940s, computers, or “thinking machines”, were starting to make their real debut in the form of electronic-type calculators. At the time ENIAC was cutting edge, a 30-ton behemoth with 18,000 vacuum tubes. Popular Mechanics predicted in 1949 that future computers would weigh only 1.5 tons and have 1,000 vacuum tubes. How wrong they would be. Star Trek even predicted the iPad: in ST: The Next Generation and beyond, Starfleet used touchscreen PADDs (Personal Access Display Devices) to control panels and computers on the Enterprise-D.

The cover of Popular Mechanics, July 1957, showing the “aerial sedan” being developed by Hiller Helicopters for introduction in 1967.

“Just one of these machines will do in a few hours what a human mathematician couldn’t do with a million pencils in a hundred lifetimes.”

Popular Mechanics, March 1949 (p.258)

Now while we have come a long way with computers, we have reached a point in technology where, like many fields, we are now in a state of stagnation. Ever wondered where the new concepts in computing are? iPhones are old technology, and computers, although faster and more efficient, aren’t much different from 20 years ago. My MacBook Pro running macOS Ventura is still just a faster version of Unix, which debuted some 50 years ago (and it’s still the best OS out there). There is very little to be excited about anymore, but it may signify that we are at the upper extent of what current technology will do for us. The iPad was an exceptional idea when it debuted in 2010, because touch is a natural interaction mode for humans – but after an initial exciting “technology phase”, they are now no more than portable streaming systems, and have not really evolved.

It’s not surprising: the excitement over futuristic ideas likely disappeared somewhere in the 1970s. Humans landed on the moon in 1969, and stopped going after 1972. The Space Shuttle was an incredible piece of technology, but only lasted three decades, with no replacement or evolution during that time period. Only now are we thinking about returning to the lunar surface (and going to Mars), more than half a century later (and frankly we would be better off first cleaning up this planet before we stomp all over another). It is emblematic of human society. Not to say there aren’t good ideas out there. Every day we see new ideas to deal with plastics, or improved clean energy. But society now takes far too long to embrace any new technology.

Of course flying cars were always a bit of a fantasy. For starters, would they even need to look like cars? Some ideas did come to fruition (well, sort of), like Hiller’s X-18, a transport aircraft with a tilting wing that allowed vertical take-off, the wing then moving to the normal position for flight. The concepts in this plane likely contributed to tilt-rotor aircraft such as the Bell Boeing V-22 Osprey. In reality it’s just as well we don’t have flying cars. I mean, skylanes on Coruscant always seem so neat, but that would never work here… some people can barely drive on terra firma.