How to become a good writer

I have grown to loathe scientific writing. Why? Because it is boring. It provides an insight into what’s what, but it does so in an unimaginative and characterless way. It does not make me want to read these articles, but writing that way is what academia expects. Don’t dare deviate. As I have mentioned before, this was not always the case: scientific writing 50 years ago was much more interesting to read. Partly this stems from the fact that articles were succinct and to the point: 2-3 pages were all that was needed to convey the information (or an opinion about something, but nowadays few journals allow opinion pieces). Authors could also write, which was largely an artifact of not being constrained to a narrow field. Many people had interests outside their area of specialty. Modern academics are often too narrowly focused.

Look, I hate to say it, but the act of writing a thesis or an academic paper does not make you a good writer. Good, interesting writing comes with time, and a good whack of experience. Twenty-year-olds are rarely that good at writing, but neither necessarily are people who have spent a lifetime writing nothing but scientific articles.

So how does one become a good writer? First, one has to develop a writing style, which is not something that happens overnight. Putting words together to form readable sentences is something that improves with age and experience. It also has to do with reading. Reading a wide variety of things, from magazines to fiction and non-fiction on various topics, will do a great deal to improve one’s writing style. While reading, your brain subconsciously picks up on words and on the stylistic features of how sentences are constructed. If you don’t read, then I’m afraid becoming a good writer will be a challenge (if not impossible). If you are writing in English, and it is not your native language, it is extremely important to read English material.

Write outside the box. If you are writing scientific things, try writing a blog (or four) with a style that is interesting for others to read (because really, that’s the point). Try explaining your work in simple terms to others (i.e. people not from your microcosm of academia). Over time your blog posts will improve, and your writing style will develop.

Remove distractions. I get easily distracted on the net, and end up spending time researching things off on a tangent – in many ways I am somewhat of a knowledge junkie (i.e. I love reading non-fiction). One way of reducing the impact of digital noise is by turning it off. If you need to search for something online, focus the search on that one item, in one tab. Use a paper notebook to keep notes – it seems old-fashioned, but it works because you are interpreting information and writing it down, possibly adding sketches or visual interpretations. This lends itself to a different cognitive experience than the one where you are tempted to cut-and-paste the raw information.

Get some fresh air. Walk through a forest, and leave your digital paraphernalia at home. Go to a cafe, watch the people passing by. Sit on your porch, or under a tree. There are many things you can do to help you think of new ideas, or a new way to approach a writing task. My best writing happens in small snippets in cafes. You can use a small writing pad, or iPhone notes. Jot down some ideas.

P.S. If you want to broaden your reading skills, start with a good non-fiction book. I highly recommend a book by British author John Lewis-Stempel. Start with The Wild Life: A Year of Living on Wild Food.

Why is Julia not more popular?

I truly like Julia, both from the perspective of how the language is designed, and the speed at which it undertakes tasks like processing images. I like Python too, it’s just too slow, and I don’t want to have to vectorize code. Why then is Julia not more popular? Python is ranked in the top 5 most popular languages, despite the fact that it is often as slow as flowing molasses. Julia is lightning fast. It processes data so fast that Python is still thinking about starting to read it in.

Why are people not flocking to Julia?

  • Stability. Since it was launched in 2012, it has had numerous releases. Version 1.0 was finally released in August 2018, and the most recent release, in August 2020, is V1.5. Too many small changes and tweaks. It makes one nauseous just thinking about small things in your older code that potentially won’t work. Too many releases. V1.6 is due to be released on Sept. 30, 2020. Crazy.
  • Contributors. Open-source language development of the kind behind Julia is a neat idea. It has supposedly had contributions from over 870 developers worldwide. Ever tried to cook with more than 1-2 people in the kitchen? Just sayin’.
  • General-purpose. The term general-purpose is tricky. A general-purpose language is essentially one that does everything. Bad move. That’s like saying duct tape is a general-purpose fix-it. Although originally designed for numerical programming, it can apparently now do low-level systems programming, and web programming. Stop-the-madness.
  • Multi-paradigm. It’s procedural, it’s functional, it’s a bit of everything.
  • Too large. Although languages like C can be challenging for novice programmers, their core benefit is brevity. Julia is anything but small. Gargantuan is more likely the word. The core of the programming structures is simple and easy to learn, but there is more and more baggage with every version – this is what happens when features keep getting added. Remember what happened to C++.
  • No executable. Despite all its abilities, Julia does not generate an executable, which is a bummer.
  • Immature packages. Add-ons (yes, there is still a need for these) are not sufficiently mature, or even well maintained. This is related to the lack of users in the field. People aren’t willing to commit time to a library that will hardly be used.
  • Error messages are still horrible. I thought we might progress here, but for the novice programmer, dealing with the error messages is horrendous. It’s enough to send you running towards Python.
  • The name. Why call the language Julia? (there is no specific reason) Why not something more meaningful, like a tribute to one of the great women in computing – Hopper? Easley? Coombs?

Look, I really like Julia. I really love all the embedded math functions, which makes things easier than building them from scratch, but are things getting somewhat out of hand? Does a language need to do soooooo much? If I want a low-level systems language, I have C. Maybe the problem is that I’m getting old – I mean, I still enjoy coding in Fortran.

The Great Beyond – the future of computer science education

This year we had to begin reevaluating the way we live our lives, or at least that’s the theory. Computer science has evolved for 63 years since the development of the first Fortran compiler. So many things were anticipated from the development of the first computers – and in many ways our lives have changed incredibly since the 1950s, all thanks to computation and automation. But lately, things have slowed down, and real progress is stagnant.

An IBM 704 in 1959 (NASA)

Why do I say this? Because in the early years of computing, just like the early years of the space program, new discoveries were being made all the time. At some point though, we reached a nexus where limitations in technology impeded what we could do. Real space exploration is limited by our inability to get places in a reasonable amount of time (never mind the issues with artificial gravity, living in space etc). Real progress in computing is limited by our ability to write algorithms to do things. Speed here is not really an issue – machines are fast. Yes, bits of progress are made every year. The health-related apps and technology of the Apple Watch Series 6 are truly amazing. The internet gets faster, digital cameras get more pixels. Some of this is driven by AI, which isn’t the panacea it’s made out to be. However, software is still largely inspired by human thought.

But overall the field is quite stagnant. Maybe the last real push was the advent of mobile technology, but that too has become old. Who develops apps anymore? There are a bunch of cool ones out there that do neat things, but mobile devices have really just become texting devices and proxy cameras. Programming, such as it is, hasn’t really moved past the 1970s. Languages get bigger, but not smarter. Fortran, 63 years on, works extremely well, and frankly there is no need to replace it. It does what it does well, as do C, Python and the rest. The fundamental structures underpinning programming languages – decisions and repetitions – have not changed, or been augmented, over the years. Nothing more to add?

Technology is only as good as the problems it seeks to solve. Part of the stagnant nature of computing comes from the education students receive in university. There seems to be little in the way of teaching innovative ways of thinking, and this stems largely from a curriculum that is centred on 1980s computing. I do understand why, in some respects: unlike fields like physics and chemistry, computer science is like spray foam – an ever-expanding string of topics. Universities only have a certain amount of resources, and therefore many CS departments choose to focus on the core fundamentals. Listening to a lecture on self-balancing trees is okay, but what is more pertinent is actually applying the concept to solve a real-world problem.

What is needed? A colossal pivot in the way that CS education is delivered. A move away from 3-4 years of bricks-and-mortar education (even if it is interspersed with coop), towards a more hands-on experience for students. Here’s an idea… students should be formed into “start-ups” from their first semester. They will work in these groups each year, possibly changing cohorts every year. The first year will entail working as a group to learn the basics of software development, programming languages (note the “s” on the end), ethics, and methods of problem solving. They will build and analyze things not related to computer science as a way of broadening their minds. Classes will not be traditional, but rather hands-on experiences, interspersed with workshops on particular topics.

Years 2-3 will add more breadth, but start the process of actually creating software to solve a real-world problem – possibly involving something like robotics. 4th year – who needs it? Better for all students to take a one-year coop placement, or possibly have them commercialize a project from a previous year. Skills are learned as they are needed, and directed towards the needs and interests of the students. Workshop modules could be 3 days in length, allowing students to take on concepts as needed, and not have to do semester-long courses.

Fantasy right? Yes, because in universities changing the model of pedagogy is like trying to push an elephant up the stairs.

Recursion – Decimal to binary

Decimal integers can be converted to binary using successive divisions. For example, 57 becomes 111001 in binary.

57 ÷ 2 = 28 remainder 1
28 ÷ 2 = 14 remainder 0
14 ÷ 2 = 7 remainder 0
7 ÷ 2 = 3 remainder 1
3 ÷ 2 = 1 remainder 1
1 ÷ 2 = 0 remainder 1

The binary representation of 57 is the binary representation of 28 (11100) followed by the remainder of 57÷2 (1). A recursive definition of the process can therefore be derived:

binary(57) = concat(binary(28),remainder(57/2))

The trick with this is that the remainders are derived in reverse order: the last binary digit derived is actually the first (leftmost) digit of the binary number. So it is important to incorporate this into the algorithm. Here is a C function to perform the conversion in a recursive manner:

int dec2bin(int n)
{
   if (n <= 0)          /* base case: nothing left to convert */
      return 0;
   else                 /* convert the quotient first, then append the remainder as the last digit */
      return 10 * dec2bin(n/2) + (n%2);
}

On entry to dec2bin(), the value of n is tested to see if it has reached zero. If n is greater than or equal to 1, then a recursive call is made to dec2bin() with n/2 as the argument (in C this will apply integer division). The value of dec2bin(n/2) is then multiplied by 10, and the remainder of n/2 (n%2) is added to it. This has the effect of concatenating the values. Here is the effect of the recursive calls shown as a visual trace:

An example, converting 57 to binary
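
To see the function in action, a minimal test program (a sketch of my own, not part of the original example) could simply print the result for 57:

#include <stdio.h>

/* dec2bin() as defined above */
int dec2bin(int n)
{
   if (n <= 0)
      return 0;
   else
      return 10 * dec2bin(n/2) + (n%2);
}

int main(void)
{
   printf("%d\n", dec2bin(57));   /* prints 111001 */
   return 0;
}

Note that the result is an ordinary int whose decimal digits happen to be the binary digits, so this trick only works for fairly small values before the result overflows.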

Learning linked lists (ii) – adding nodes to a list

Let’s look at the simple task of adding nodes to a list. First we create two variables of type link. The entire list is accessed through an external pointer, list, which points to the first node in the list (i.e. the pointer is not itself stored within a node). The variable p is used to create an individual node before it is added to the linked list.

var list, p : link;

To illustrate the concepts, we will use the Pascal code segment below, which creates a list containing 5 items. The code uses a loop to read input from the user, and build a linked list with five nodes. In addition, the process of adding the first and second nodes is illustrated in Fig.1 – each step is marked by the letters A to E.

Using the variables defined above, we first create a new empty list by assigning list the value nil (line 1). Next, we create a new node (Fig.1A). This is done using the function new(), which creates a new instance of node in memory. The code on line 5 means that p is a pointer to a newly allocated node. The next line of code (6) assigns the value input by the user to the data portion of the node, p^.data (Fig.1B). For example, if the user enters 7, then p^.data will assume the value 7.

1   list := nil;
2   for i := 1 to 5 do
3   begin
4      read(s);
5      new(p);
6      p^.data := s;
7      p^.next := list;
8      list := p;
9   end

Now it is necessary to set the next portion of the node. When a node is inserted at the front of the list, the node that follows it should be the current first node on the list. Since the variable list contains the address of the first node, p can be added to the list by performing the operation outlined on line 7, where p^.next is set to list (Fig.1C). This has the effect of placing the value of list into the next field of node p. In the case of the first element, the list will be empty (i.e. pointing to nil). At this point, p points to the list (Fig.1D). However, since list is the external pointer to the list, its value must be modified to the address of the new first node of the list (Fig.1E). This is done in line 8.

Fig.1: Adding the first (left column) and second (right column) nodes to a list.

What happens to p? Well, in reality p is just an auxiliary variable, used to add a node to the list, but it really is irrelevant to the list. Once a node has been added in the way described above, the value of p may be changed without affecting the list.
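
For readers more comfortable with C, here is a rough equivalent of the Pascal fragment above – a sketch using my own naming, not part of the series itself (and with the error checking of malloc() omitted for brevity):

#include <stdio.h>
#include <stdlib.h>

/* a rough C counterpart of the Pascal link/node types */
struct node {
   int data;
   struct node *next;
};

int main(void)
{
   struct node *list = NULL;              /* list := nil     */
   struct node *p;
   int s, i;

   for (i = 1; i <= 5; i = i + 1) {
      if (scanf("%d", &s) != 1)           /* read(s)         */
         break;
      p = malloc(sizeof(struct node));    /* new(p)          */
      p->data = s;                        /* p^.data := s    */
      p->next = list;                     /* p^.next := list */
      list = p;                           /* list := p       */
   }

   /* walk the list to show that each new node was added at the front */
   for (p = list; p != NULL; p = p->next)
      printf("%d ", p->data);
   printf("\n");
   return 0;
}

Just as in the Pascal version, each new node is pushed onto the front of the list, so the values print out in the reverse of the order they were entered.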

Learning linked lists (i) – the low-down

Pointers are really not that useful for simple variables. Need a pointer to store a single integer? Why bother. Where pointers become useful is in the creation of both simple and complex data structures. In some languages such as C, dynamic data structures are an efficient way of storing objects like large images, or data sets. The simplest form of dynamic data structure is the linked list (sometimes called a linear list, or linear linked list). The dynamic nature of a list may be contrasted with the static nature of an array, whose size remains constant. There are of course many differing types of linked list, each of which is designed to do a particular thing in a particular situation.

  • Singly linked lists (SLL) – nodes only have a pointer to the next item in the list.
  • Circular SLL – same as SLL, but the last node points back to the first node in the list.
  • Doubly linked list (DLL) – Each node has a reference to both the next and previous nodes, i.e. a two-way relationship.
  • Circular DLL – Same as DLL, except that the next property of the last node points to the first node, and the previous property of the first node points to the last node.

Linked lists are useful because they can be used to create other structures such as dynamic stacks, and queues. We will only look at the singly linked list, because it covers the basic use of pointers to create dynamic data structures.

This series looks at the creation of basic operations for a linked list. It is done using Pascal, which may seem odd to many people, but in actuality, pointers and the creation of dynamic data structures are intuitive and easy in Pascal, as opposed to the arcane and complicated methods of C. There is also the syntax: Pascal uses ^, so ^integer means “pointer to integer”. Logical, right? This series is not meant to make you proficient in pointers as they relate to languages like C… merely to help you understand the mechanics of pointers and the structures they create.

Let’s start with the basics. A linked list looks like this:

[Figure: a singly linked list, with each node pointing to the next]

Each item on the list is called a node. The node contains two fields, a data field, and a next address field. The data field holds the actual element in the list. The next field contains the address of the next node in the list. Such an address, which is used to access a particular node, is known as a pointer. Here is how they are described as types in Pascal:

type link = ^node;
     node = record
              data : integer;
              next : link;
            end;

The type link is a pointer to the type node, which is a record structure containing the data and next fields. The next field of the last node in the list contains the value nil, indicating that no further nodes follow.
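
As an aside (and only for comparison, since this series deliberately sticks to Pascal), the same declaration might be spelled in C roughly as follows:

/* a rough C counterpart of the Pascal declarations above */
struct node {
   int data;            /* the data field                       */
   struct node *next;   /* pointer to the next node in the list */
};

typedef struct node *link;   /* link is a pointer to a node */

The idea is identical: a record (struct) holding the data, plus a pointer field that carries the address of the next node.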

The craft of programming

In the early days of programming, it was considered an art, largely because a special skill was required to create a program in a spooky notation – machine code. This machine code was a very low-level form of programming. The problem is that as computers became faster, and programs became more complex, there was less time for artistic flair. Next came programming with assembly language, which was marginally better. However, what was needed was a way of expressing solutions to problems in an easy manner – and in the late 1950s modern programming emerged. The programming languages which evolved provided a notation somewhere between cryptic machine code and standard mathematical notation (in the 1950s much of the computing power was dedicated to solving mathematical problems for engineering). Writing a program using a programming language allowed the computer itself to convert the human-readable program into machine code by means of a translating program (now called a compiler). The craft of programming was born.

Want to learn to program?

If you want to learn to program, the biggest tip I could give you is – teach yourself. If you are waiting for some magical thing to occur when you sit in a class on programming, you will be sorely disappointed. Why you ask? Because programming is a very logical skill to learn, and you will only really learn by doing. Information provided in lectures will go through the mechanics of programming, and show examples, but until you actually do it yourself, you will never truly understand the concepts.

The first language I learnt was Pascal, and I taught myself (it didn’t help that nobody could understand the ramblings of the Welsh lecturer). Second was C, again self-taught… in the 1980s, when resources consisted of K&R’s The C Programming Language, and online manuals. You will find that you either have a propensity for programming, or you hate it. Similar to how some of us hated first year chemistry. You can’t learn programming in the same way as taking a first year biology class, where most learning is of the rote kind. You have to design and write programs from scratch – yes, you can follow case studies to gain an understanding of how programming structures work, but at the end of the day, you must code.

You can likely pick up a basic book on programming and teach yourself the basics of a language. Once you know one language well, picking up another isn’t hard at all. Once you have written some basic programs, modify them to do other things. Leave things out to break them, just to see how a cohesive program is formed. There are plenty of resources on this blog to learn C, Python, Julia, even Fortran or Ada. The language doesn’t matter, although I wouldn’t start with C, pick a simple language like Julia or Python. Learn the basics, a bit at a time.

People seem to have this great belief that they will learn everything they need in life by attending university. Wrong. That all of what they have to learn will be provided. Wrong. Most of what you learn in life you will teach yourself… from baking a cake, to paving a patio, to changing a light switch. Life is not handed to you on a platter. And if you don’t like programming, then maybe you should rethink a career in computer science.

The pandemic will change the way technology companies operate

Computer science is ostensibly about technology, and most people doing a CS degree will probably work for a technology company in one form or another when they graduate. But the corporate office of the post-pandemic world may look vastly different from that of early 2020. It is no surprise that the world has changed, and it is no different from other events that have precipitated great change. The urbanization of society, the steam age, the advent of electricity, WW1, WW2, the Internet – events of a truly global nature, good or bad.

Change is not a bad thing, even if it is forced upon us. It won’t be the last thing we ever have to deal with (yeah, don’t forget that little thing called climate change). Many companies were likely already working on longer-term plans to have their employees work remotely… timeframes that were likely squeezed from 3-5 years to 3-5 days. The outcome? Many companies actually work quite well from the viewpoint of a remote workforce. The sudden change in the workplace environment has merely accelerated emerging trends such as flexible working, and re-skilling.

The workspace of the future for many companies may be as simple as a 50/50 split between remote working and office time. For some employees the time spent working remotely may be closer to 100%. Workplace culture will have to change, with more of an emphasis on the flexibility of working, and results delivered rather than an hours-based system. With a flexible workforce, a person can live anywhere they want, and possibly work something other than the traditional work-week. Pre-pandemic, many companies would have been hesitant to take the leap into remote working, but in most cases productivity has not diminished (maybe in part because the lack of commuting has provided greater lifestyle flexibility). There are benefits for companies as well – reduced operating costs, happier employees.

Technology companies are well poised to transition to a hybrid model for working. Some Canadian companies like Wysdom.AI have decided to transition permanently to a remote model. There is no inherent need for a physical office if you are developing software (obviously hardware design, or jobs involving specific technology is a little different).

Cities will have to change too, as companies have less need for commercial real-estate, and supporting businesses (e.g. food, clothing etc.) that once relied on towering buildings full of people will have to adapt. It is a brave new world, and we have a chance to reimagine how our lives work. There will no doubt be casualties, i.e. businesses that fold, but the reality is one must adapt in order to survive. There will be benefits to the environment and infrastructure as well. City cores could be adapted to become more livable as opposed to 9-to-5 places of work, less commuting will mean less emissions, and wear-and-tear on transport infrastructure. There will be a reduced need to add new transit to support huge rush-hours – money saved that could be better spent on making cities more livable.

What does this mean for people studying computer science? It means that in the workplace of the future you will likely spend some of your time working remotely. Not dissimilar to remote learning. People will have to change the way they work. While there is likely some benefit to collaboration in the workplace, working remotely may actually spur more innovative products. There are people who design products but likely never actually see them in the context of the real world. Good examples are the interfaces of washing machines, or self-checkouts. The office-centric culture is dead. It is time for us to embrace the world of the future, one which may provide us with better lives, and help reduce our impact on the planet.

Overloading of the mind

The new digital environment we live in has many different opportunities to overload the mind. The main one, and the one which influences the others, is information overload – there is just too much information. This eventually overwhelms our cognitive abilities and affects the way we think. Information overload can lead to sensory overload. Too much information from our environment, including such things as images, videos, sounds, smells, and physical sensations, can overwhelm the nervous system because it is unable to process all that input. A good example of this is being in a situation where you are surrounded by numerous stimuli, such as a mall, sporting game, or carnival (or too many tabs in a browser).

Cognitive load usually refers to the amount of information that the working memory can effectively manage. Too much information can lead to cognitive overload where the working memory is just not capable of dealing with the information. This happens when someone wants you to remember a series of numbers, without writing them down. Excessive complexity and clutter in a visual scene can interfere with how we effectively perceive, process, and make decisions about information in visual environments. Part of the cognitive overload bubble is decision overload – too much information leads to sensory overload, and eventually there are too many decisions to be made. The problem here is that the brain does not prioritize these decisions, and there are likely only so many decisions which can be made before quality is compromised.

The brain has two attention systems, a conscious one which allows us to focus on things, and an unconscious one that shifts our attention towards things our senses might pick up. This makes sense as a survival technique: if you are feeding on berries as a hunter-gatherer, you still want your peripheral senses to be wary of any threats. Did I just hear a twig snap nearby? Now while conscious attention focuses on the task at hand, the unconscious one doesn’t really shut down – it is always scanning for things in the sensory periphery. That’s why rhythmic noises, e.g. a tap dripping in the background, can be very distracting. Some people listen to music while focusing on a task – the music provides non-invasive noise that effectively neutralizes unconscious noise.

So we live in a world in which we are overstimulated by the things around us, and much of that comes in the form of information noise – TV, digital streaming, and the Internet all bombard us with too much information. The information is not always complete, and we triage it by skimming it, picking out relevant pieces of information, but often missing the complete picture. It is quite probable that the world of digital information is causing changes in our brains, and we may just not be ready for that.

Information noise is best dealt with by dispensing with it – that means turning off the digital stream, and stimulating your mind with a book, taking a long walk, walking through a forest, or taking up a hands-on hobby (spoon carving anyone?). A place where the senses are not overwhelmed allows the brain to rest, and perhaps think a little clearer.