Will computers take over the world?

People always worry that super-intelligent computers will take over the world, but we all know that won’t happen, right? Well guess what, they have already taken over the world, just not in the sense we see in the movies. Computers have already greatly disrupted our lives, via mobile devices and even email. Social media anyone?

One news article cited a study claiming we spend 8 hours and 41 minutes a day on electronic devices, more time than the average person spends sleeping. That may be why people are having problems sleeping. Apparently using technology before going to bed can overload the brain’s “working memory”, often leading to a reduced quality of sleep. See, they are already affecting our lives.

Yes, there are many good things about technology, yet it negatively impacts even the areas of our lives it was supposed to help. Email helps, and at the same time disrupts our working lives by reducing workplace productivity. Some estimates say email occupies 23% of the average employee’s workday, and that the average employee checks his or her email 36 times an hour. There is also the expectation of a rapid response. Time spent dealing with email is unproductive time, not to mention the time it takes to re-focus on the task at hand.

Yes, technology truly has taken over the world.

 

 

A.I. – what’s all the hype?

The computing community is always caught up in some sort of hype. In the early 70s it was “structured programming” (which thankfully had some substance to it), in the late 80s it was OO, and now it appears to be A.I. To be truly honest, though, there is nothing particularly artificial or intelligent about most software and products with this label attached. But it is a good label to attach to products to sell them. The Nest is a smart thermostat because they tell you it’s smart.

“Nest learns your habits and temperature preferences, and even learns when you’re at home and when you’re away.”

It’s almost like it’s your friend, helping you out. But it’s not smart, it’s just following an algorithm. The Ecobee3 is somewhat smarter, but that’s because it uses wireless remote sensors to determine where you are in your house. But it’s still only an algorithm:

  1. Check all motion sensors for activity, and remote temperatures.
  2. Adjust temperature based on occupancy of rooms, and remote temperatures.
  3. Wait a certain time and repeat.
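
In code, that might look something like the sketch below. This is just an illustrative loop of my own; the sensor and HVAC calls (sensor_occupied(), sensor_temperature(), hvac_drive_towards()) are made-up placeholder names, not anything from a real Ecobee:

#include <stdio.h>
#include <stdbool.h>
#include <unistd.h>

enum { NUM_ROOMS = 4, POLL_SECONDS = 300 };

/* Stub sensor/HVAC routines so the sketch is self-contained.
   In a real device these would talk to the hardware. */
bool sensor_occupied(int room)      { return room == 0; }      /* motion in this room?       */
double sensor_temperature(int room) { (void)room; return 19.5; } /* remote reading, in celsius */
void hvac_drive_towards(double target, double measured)
{
    printf("measured %.1f C, driving towards %.1f C\n", measured, target);
}

void thermostat_loop(double comfort, double setback)
{
    for (;;) {
        double sum = 0.0;
        int occupied = 0;

        /* 1. Check all motion sensors for activity, and remote temperatures. */
        for (int room = 0; room < NUM_ROOMS; room++) {
            if (sensor_occupied(room)) {
                sum += sensor_temperature(room);
                occupied++;
            }
        }

        /* 2. Adjust temperature based on occupancy of rooms, and the readings. */
        if (occupied > 0)
            hvac_drive_towards(comfort, sum / occupied);
        else
            hvac_drive_towards(setback, sensor_temperature(0));

        /* 3. Wait a certain time and repeat. */
        sleep(POLL_SECONDS);
    }
}

int main(void)
{
    thermostat_loop(21.0, 16.0);   /* comfort and away set points, in celsius */
    return 0;
}

No learning, no judgement, just a polling loop with a couple of set points.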

So the Ecobee3 might be the smartest of all the smart thermostats, but that’s only because it has more data to make decisions with. Dogs are smart too, because they can be taught to fetch balls. Dolphins are smart (maybe smarter than us if you’ve read “The Hitchhiker’s Guide to the Galaxy”). But devices? Products? They aren’t smart. I mean they do learn, but that’s because they have been told to. A thermostat doesn’t one day think something like… “I like the heat, feels like a beach in the tropics, so I’m not cooling the house today”. Smart appliances? They get marketed in the context of making your home smarter.

“Forget to turn on that load of dirty clothes in the washer? No problem. Choose a wash cycle and turn it on from virtually anywhere.”

Whoa. That’s not smart. That just means you are starting the washing machine remotely. A smart machine would remind you five minutes after loading it that you haven’t turned it on: “Hey, it’s been 5 minutes, are you going to wash this laundry?” Or maybe it would be smart enough to wash the load itself: determine the load size, figure out what’s in there (whites? colours?), select a cycle, and WASH. Run its own self-diagnosis, and clean itself whenever it needs to. Determine when detergent is running low, and order its own. Now *that’s* smart. Anything else is just programming.

e-books take over the world… NOT!

When e-readers arrived on our doorsteps a few years ago, it seemed like they were going to dominate the publishing world. Then something happened. They didn’t.

Of course some of us saw the writing on the wall: specific e-readers, specific book formats, and e-book prices that weren’t much lower than those of paper books. It turns out a growing share of Americans now read e-books on tablets and mobile devices rather than dedicated e-readers, but here’s the real truth – print books remain more popular than e-books. And the percentage of people reading books hasn’t increased in the past five years. Four in ten Americans read only print books, and only 6% read digital exclusively. Who thought it would end up this way?

Why? Because e-books bite. I dislike them, and I have a house littered with iPads (which are used mostly for travel, games, and web-browsing). I thought for a while I would get interested in digital magazines, and I still buy a couple from overseas, but only because they are hard to come by here (and often cheaper digitally). But I don’t like them much either; I prefer to have a physical magazine, and a stack of physical cookbooks. Books are tangible. They exist, and the stories within them make them come alive. They don’t need to be recharged. They can be read on a train, or on a beach. You can even read them while having a bath if you want.

It could also be because of something called digital fatigue. People may be tired of digital devices, and may prefer to spend their time reading a real book. When you are on devices for work all day, it is hard to pick up another digital device. And then there is sleep – studies have shown that people who read on electronic devices before bed produce around 50% less melatonin, the stuff that helps regulate sleep patterns.

 

The art of searching for knowledge

We live in a world that is inherently technological. It is hard to avoid the onslaught of data that pervades our everyday lives. However, knowledge is not all digital. In fact, it is often hard to find information on the web. Google, although a good search engine, does not guarantee that information on a subject will be found; sometimes the information sits inside a digitized book with only “snippet” access. Wikipedia is not the be-all and end-all of information. Sometimes the search for knowledge requires old-school methods, like looking in a library for real books. Yes, they still exist, and libraries are more than just places to sit and drink coffee.

who knew?

🙂

 

Grapheme clusters – YUM!

When you hear the words “grapheme clusters”, the first thing that springs to mind is a breakfast cereal. In fact a grapheme is the “smallest unit of a writing system of any given language”, so a cluster is, well, a group of them. Makes sense, right?

Now in the *old* days, a character set had 128 characters in it (think ASCII), which isn’t exactly a lot. Well, it was a lot at the time. Now everyone wants extra characters: characters from different languages, mathematical symbols, and so on. Enter Unicode. Part of the complication is that a character like ü can’t always be represented by a single code point. The Unicode description for this letter is “LATIN SMALL LETTER U WITH DIAERESIS” (U+00FC), which can be decomposed into “LATIN SMALL LETTER U” (U+0075) followed by “COMBINING DIAERESIS” (U+0308). Either way, what the reader sees as one character is a grapheme cluster.
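
For comparison, C has no notion of graphemes at all: a string is just a sequence of bytes. A quick sketch (assuming the compiler uses a UTF-8 execution character set, the default for gcc and clang on most systems) shows that the very same ü occupies a different number of bytes depending on which form it takes:

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *precomposed = "\u00FC";  /* U+00FC LATIN SMALL LETTER U WITH DIAERESIS    */
    const char *decomposed  = "u\u0308"; /* U+0075 followed by U+0308 COMBINING DIAERESIS */

    /* With a UTF-8 execution character set this prints 2, then 3. */
    printf("precomposed: %zu bytes\n", strlen(precomposed));
    printf("decomposed:  %zu bytes\n", strlen(decomposed));
    return 0;
}

To C they are simply different byte strings; treating them as one visible character is exactly the job that grapheme-aware string handling takes on.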

Some languages, such as Julia, provide a function like graphemes() to help iterate over the grapheme clusters in a string. But one has to question whether all of this complicates languages too much. Apart from being able to output these characters, which is obviously nice, do we need Unicode variable names like in Julia? In the Julia REPL, using δ instead of the word delta is as easy as typing the LaTeX code \delta and hitting the <tab> key. But is this practical? Having both the word pi and the symbol π associated with the same value is nice, admittedly.

Programming languages should sometimes be simpler than they are (like the good old days), and I have to think that adding extra stuff often makes them more complex than they have to be. Maybe that’s why people still like the simplicity of C – despite its idiosyncrasies.

 

The problem with evolving languages today

Unlike in the 1960s, very few genuinely new languages appear today. Those that do are often moulded from an existing form. In the earlier days of programming language design, languages were implemented once their complete specifications had been designed. Algol 60 evolved through such a specification, and although specific compilers always contained small variations, the core concept was the same. Radical changes to the underlying structures came at broad intervals, say every 5-7 years, which gave a language time to attract users.

One of the major problems today is the pace of language roll-out. Good examples are Julia and Swift. Swift was introduced in 2014, reached V1.2 in the spring of 2015, V2 in the autumn of 2015, V2.2 in early 2016, and Swift 3.0 in September 2016. All this in a little over two years. What’s wrong with this? The problem is that the development of these languages has become far too fluid. I understand that modern languages are often behemoths that naturally require tweaking, but sometimes features disappear or are radically altered between versions of a language – and that just shouldn’t happen. Why? Because radically modifying structures in a language on such a regular basis leads to painful code migration, and frustrated programmers who then have to spend time rewriting code.

Let’s look at a case in point: the Swift string. Strings in Swift are their own type; for example, an empty string can be created in the following manner:

var emptyString = ""

Easy, right? It’s also easy to do things with strings. For example:

var string1 = "darth"
var string2 = "vader"
string1 = string1 + string2
// string1 now contains darthvader

Many would argue this is much nicer than C. The problem lies in Swift’s evolution from V2 to V3, and involves naming conventions. Swift 2 had methods on string indices for moving through a string, like successor() and predecessor(). Okay, so they DO seem verbose. Some would likely argue that succ() and pred() would have been better… but that’s taking us off topic. In Swift V3 these changed, while the properties startIndex and endIndex remained the same. Okay, so let’s look at an example:

let sith = "vader"

// Swift 2
sith[sith.startIndex]               // "v"
sith[sith.startIndex.successor()]   // "a"

In V3, the methods successor() and predecessor() have been replaced by index(after:) and index(before:). So the above code now looks like this:

// Swift 3
sith[sith.startIndex]               // "v"
sith[sith.index(after: sith.startIndex)]   // "a"
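sith[sith.index(before: sith.endIndex)]    // "r" (index(before:) counts back from endIndex)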

Which maybe just doesn’t seem as intuitive anymore. Why change it? Anyway, the point I’m trying to make is that continuous changes to the structure of a language mean that programmers have to constantly migrate code, which is not ideal. The other issue is: when does a language become stable? Does it continuously evolve? And what about backwards compatibility?

The truth is, programmers will become very hesitant to write libraries for new languages if the structure of the language changes too often. It is hard to devote time to an endeavour, only to realize that the codebase will have to be deeply modified on a yearly basis.

Here’s a thought. Design a language. Implement the language. Release the language. Let people USE the language for 3-4 years, whilst reviewing what works and what doesn’t. Then make subtle changes, and allow for backwards compatibility.

 

 

Imagination versus skill

Contemporary LEGO consists predominantly of “kits” tied to particular themes, e.g. Star Wars, but LEGO is not the only company that has gone this way. Companies like Meccano, which once delivered creative building sets, have also moved to the “model” system. One of the few hold-outs in this realm is the Swiss company Stokys.

They still produce cumulative building sets, starting from set 00. Consider the following picture of a bulldozer.

[Image: Stokys construction drawing of a bulldozer]

When constructed, it would look like this:

[Image: the completed Stokys bulldozer model]

Building it requires an interpretation of the original picture; no parts list or multi-view drawings are given. It takes imagination to fill in the blanks.

 

Did old code have weird style?

Look at many pieces of code in old programming books (we’re talking pre-1975), and you might notice one thing… the code had an odd style to it. Consider the following piece of Algol 60 code:

[Image: a greatest-common-divisor routine written in Algol 60]

Note that the biggest style change here is likely the use of single lines of code to hold entire structures. This is most noticeable in things like the if statement, where begin-end pairs sit in linear form. As I mentioned in a previous post, indenting was rudimentary, with 3 spaces being used, and no real attempt to stretch out control structures. The code also looks odder because it is shown in a proportional font, as opposed to Courier.

Does this code look aesthetically pleasing? Is there an inherent problem with linear structures such as this? Maybe not. Maybe we need to rethink how we style code for certain structures? Consider the code above translated to C (yes, and leaving in the goto statements).

[Image: the GCD routine translated to C, keeping the one-line structures and the goto statements]
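
The figure isn’t reproduced here, but the flavour is easy to sketch. The following is my own reconstruction of a subtraction-based GCD in that linear, goto-laden style; it is not the exact code from the original image:

#include <stdio.h>

/* Greatest common divisor by repeated subtraction, written with the
   one-line control structures and goto statements of the Algol original. */
int gcd(int m, int n)
{
    int t;
rep1: if (m >= n) { m = m - n; goto rep1; }   /* reduce m until m < n      */
      if (m == 0) return n;                   /* n divides m: all done     */
      t = m; m = n; n = t; goto rep1;         /* swap the pair and repeat  */
}

int main(void)
{
    printf("%d\n", gcd(1071, 462));           /* prints 21 */
    return 0;
}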

You know what? It’s not terrible. The if statements would each take up 4 lines of code, were they coded to most normal standards, i.e.:

rep1: if (m >= n) {
          m = m - n;
          goto rep1;
      }

Maybe this just isn’t necessary for small control structures? Food for thought.

 

 

Where did the use of long variables in C come from?

One thing about C that I have always found intriguing is the data type specifiers. Take for example the simple int. Using the appropriate adjectives, it can be turned into integers of various widths:

short int
int 
long int
long long int
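
A quick way to see what your own compiler actually gives you for each of these is something like the sketch below (the sizes in the comments are simply what a typical 64-bit Linux/x86-64 system reports, not a guarantee):

#include <stdio.h>

int main(void)
{
    /* Sizes are implementation defined; your system may differ. */
    printf("short int      %zu bytes\n", sizeof(short int));      /* 2 */
    printf("int            %zu bytes\n", sizeof(int));            /* 4 */
    printf("long int       %zu bytes\n", sizeof(long int));       /* 8 */
    printf("long long int  %zu bytes\n", sizeof(long long int));  /* 8 */
    return 0;
}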

The frustrating thing has always been that the actual size of these types is system dependent. Where did these things come from? The short answer is Algol 68. That language’s specification allowed primitive modes which included long int and long long int, but also short int and short short int, with a similar scheme for reals. The benefit, however, was that their lengths were better mandated: there wasn’t a scenario (as in C) where int and long, or long and long long, ended up being the same length.

short short int     - 8 bits
short int           - 16 bits
int                 - 32 bits
long int            - 64 bits
long long int       - 96 bits
long long long int  - 128 bits

Yes, you read that right, it’s not a typo: long long long. At this stage you wonder why they didn’t come up with better adjectives, maybe very long int? I imagine, in most cases, compiler implementations used long long int as 128 bits and skipped the whole 96-bit thing. Of course C, in its mantra to condense code, did away with the suffix int, allowing simply long, or long long.

Maybe the crazier thing in C? The type specifiers may occur in any order and can be intermixed with the other declaration specifiers, so all of the following are valid and equivalent:

long long 
long long int 
long int long 
int long long

Maybe the use of long long long came from “Long, Long, Long”, a track on the Beatles album “The Beatles”, released in 1968. Algol 68? Coincidence? I think not!

Living languages

Fortran has been around now for over 60 years. It’s hard to imagine a piece of technology surviving for that length of time, but Cobol has done much the same. How have they achieved this? In part it is because they are living languages, going through periodic revisions in 5-10 year cycles, and evolving.

Fortran 77 evolved to Fortran 90, then to 95, 2003, 2008, etc. All these revisions updated the core language structures, whilst maintaining backwards compatibility. Some, such as Fortran 2003, added a new paradigm into the mix – OO. Features like free-form source were added, and the reliance on legacy jump constructs like the arithmetic IF was reduced, or removed altogether. Languages like C are also living languages, but the basic context of structured programming has changed little since its inception, so changes are usually of the incremental kind, and never really that radical. The evolution of languages should be a very organic process, with modifications reflecting the current needs of the programming community, and the inability of the language to perform what is required of it.