Could software be the real problem with electric vehicles?

Everyone is supposed to get an EV now, because apparently switching to electric cars will help save us from the extremes of climate change. Hint – it won’t, at least not on its own (ever wondered how much GHG a cruise ship emits?). There are plenty of problems with EVs: they are expensive, the provenance of their materials is questionable, they are heavy, and they have short lifespans, short ranges, and some fire potential (just look at the two car carriers that have caught fire in the past year). But there are other issues as well, namely the code used to run the cars.

Now most new cars have a good deal of code in them, used to run much of the car, to the point where you need to plug the car into a diagnostic system to determine what’s wrong. They will get even more code once autonomous systems become more common (will they? … they don’t seem that robust at the moment, and you certainly can’t compare them to autonomous trains, which run on tracks). A modern car runs on roughly 100 million lines of code (LOC), which is a lot of code for a vehicle that doesn’t do anything out of the ordinary (I mean a human still has to drive it). An EV supposedly has up to 150 million LOC, and an autonomous vehicle supposedly has 1 billion LOC running it. To put that into perspective, Apollo 11 used about 145,000 LOC, written in AGC assembly (you can look at the code here if you like). Sure, autonomous vehicles likely have to do a lot more, and have many more sensor feeds, but 1,000,000,000 LOC is a lot.

A lot, considering the bug-plagued F-35 uses 8 million LOC to run the jet (everything from flight controls to fusing together sensor data), and another 24 million LOC for its maintenance and logistics software. So, what exactly is happening with car software? The roughly 100M LOC in a gas car is likely comparable to what is found in an EV, because the systems have a different type of complexity (managing electric motors and batteries versus a more intricate gas engine). But that’s still a lot of code that does nothing autonomously. Why is there so much more code in a car than in a jet plane? It certainly can’t be all the combat systems in a car.

One possible explanation is sub-par programming. Unlike planes, which aircraft manufacturers prefer to keep in the air, cars are a totally different animal. Aircraft software is held to much stricter quality standards, which is part of why aircraft incidents are so much rarer than car crashes. But if we set aside autonomous vehicles, and only look at manual-drive gas and electric vehicles, it still raises the question – why so much software? It’s just running the engine and a few ancillary systems… nothing special really. Most of the code for these systems is supposedly written in C and C++ (with a sprinkling of assembler, Ada, and Fortran), so perhaps some of the programmers are just writing sub-standard code? I mean C is a pretty no-nonsense language known for systems development… why would you need 100 million LOC to run a car? Perhaps the people doing the programming lack experience in embedded systems programming?

EV software is even more critical because, unlike gas-powered cars, where the software basically runs the engine and ancillary systems, there is much more of a safety issue. EVs require a battery management system (BMS) to keep the EV’s battery functioning properly, correcting issues or isolating malfunctioning battery cells immediately. Poor software can have far more dire consequences in a car with a 400-800 volt electrical system tied to a heavy battery pack (anywhere from 300-800 kg). So apart from the battery (which in my opinion is the biggest red-flag system in an EV), the next most important system is the software. There is no mechanical engine, which basically means the software, all 150M lines of it, runs the car.
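To make the BMS idea concrete, here is a toy sketch of the kind of per-cell check such a system performs. Everything here is hypothetical – the thresholds, the names, the data layout – and a real BMS is hard-real-time embedded code, not a Python script:

```python
# Toy illustration of a BMS-style per-cell check. All limits are
# hypothetical (though the voltage window is plausible for Li-ion).
SAFE_VOLTAGE = (2.5, 4.2)   # per-cell limits, in volts
SAFE_TEMP_C = 60.0          # hypothetical cutoff temperature

def check_pack(cells):
    """Return indices of cells that should be isolated.

    `cells` is a list of (voltage, temperature) tuples, one per cell.
    """
    faulty = []
    for i, (volts, temp) in enumerate(cells):
        if not (SAFE_VOLTAGE[0] <= volts <= SAFE_VOLTAGE[1]) or temp > SAFE_TEMP_C:
            faulty.append(i)
    return faulty

pack = [(3.7, 30.0), (4.5, 31.0), (3.6, 72.0), (3.7, 29.5)]
print(check_pack(pack))  # [1, 2]: one overvoltage cell, one overheated cell
```

The real thing also has to balance cell charge, estimate state of charge, and do all of this continuously with guaranteed response times – which is exactly where software quality starts to matter.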

I would strongly question why so much code is needed, and how much testing is actually performed on these systems. Will EVs become like so many artifacts of the software world, where the user does the testing by driving the car, similar to how user-based testing works with smartphones and the like? How many software upgrades will have to be performed? And of course, how cyber-safe are these cars? Lots of questions, but like many things in the world of software, not many answers.

Did you find this interesting? You might like: Are autonomous vehicles that important?

Median filtering using Quickselect

This is an add-on to a post from a while back comparing languages for median filtering an image. This is the Fortran program which uses the Quickselect algorithm as the means of selecting the median in each neighbourhood being filtered. The image (a simple rectangle of 8-bit integers) and its dimensions are hard-coded.

program medianfilter
   use selecting
   implicit none
   integer :: dx, dy
   integer, dimension(2144,6640) :: image, imageN
   integer, dimension(5,5) :: blk
   integer, dimension(0:24) :: mblk
   integer :: i, j, w

   dx = 2144
   dy = 6640
   w = 2
   open(unit=20,file='pano.txt',status='old',action='read')

   do i = 1,dx
      read(20,*) image(i,1:dy)
   end do
   close(20)

   imageN = image

   do i = w+1,dx-w
      do j = w+1,dy-w
          blk = image(i-w:i+w,j-w:j+w)
          mblk = reshape(blk, (/25/))
          imageN(i,j) = quickSelect(mblk,0,24,12)
      end do
   end do

   open(unit=20,file='pano_fortran2.txt',status='replace',action='write')
   do i = 1,dx
      write(20,*) imageN(i,:)
   end do
   close(20)

end program medianfilter

The heavy lifting is done by the module selecting:

module selecting

contains

! Hoare-style partition via recursion. The dummy arrays are declared
! 0:24 to match the 0-based indices used by the caller.
recursive function partitionHoareREC(a,left,right,pivot) result(r)
   integer, dimension(0:24), intent(inout) :: a
   integer, intent(in) :: left, right, pivot
   integer :: r, aux

   if (left > right) then
      r = right
   elseif (a(left) > pivot .and. a(right) <= pivot) then
      ! both elements on the wrong side: swap them and close in
      aux = a(left)
      a(left) = a(right)
      a(right) = aux
      r = partitionHoareREC(a,left+1,right-1,pivot)
   elseif (a(left) <= pivot) then
      ! left element already in place: advance the left pointer
      r = partitionHoareREC(a,left+1,right,pivot)
   else
      ! right element already in place: retreat the right pointer
      r = partitionHoareREC(a,left,right-1,pivot)
   end if
end function partitionHoareREC

integer function partitionHoareWrapper(a,lower,upper)
   integer, dimension(0:24), intent(inout) :: a
   integer, intent(in) :: lower, upper
   integer :: middle, pivot, right

   ! swap the middle element (the pivot) to the front, partition the
   ! rest, then drop the pivot into its final position
   middle = (lower + upper) / 2
   pivot = a(middle)
   a(middle) = a(lower)
   a(lower) = pivot
   right = partitionHoareREC(a,lower+1,upper,pivot)
   a(lower) = a(right)
   a(right) = pivot
   partitionHoareWrapper = right
end function partitionHoareWrapper

recursive function quickSelect(a,lower,upper,k) result(r)
   integer, dimension(0:24), intent(inout) :: a
   integer, intent(in) :: lower, upper, k
   integer :: r, pivotI
   if (lower == upper) then
      r = a(lower)
   else
      pivotI = partitionHoareWrapper(a, lower, upper)
      if (pivotI == k) then        ! k is a 0-based rank, so the median
         r = a(pivotI)             ! of 25 values is k = 12
      elseif (pivotI < k) then
         r = quickSelect(a,pivotI+1,upper,k)
      else
         r = quickSelect(a,lower,pivotI-1,k)
      end if
   end if
end function quickSelect

end module selecting
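For anyone who wants to sanity-check the selection step outside of Fortran, the same idea fits in a few lines of Python. This is a generic quickselect (three-way partition around a random pivot), not a line-for-line port of the Hoare-style module above:

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element of sequence a (k is 0-based)."""
    pivot = random.choice(a)
    lt = [x for x in a if x < pivot]   # strictly below the pivot
    eq = [x for x in a if x == pivot]  # equal to the pivot
    gt = [x for x in a if x > pivot]   # strictly above the pivot
    if k < len(lt):
        return quickselect(lt, k)
    if k < len(lt) + len(eq):
        return pivot
    return quickselect(gt, k - len(lt) - len(eq))

# a shuffled 5x5 neighbourhood holding the values 0..24
window = [7, 3, 9, 1, 5, 2, 8, 6, 4, 0,
          15, 11, 13, 10, 14, 12, 16, 18, 17, 19,
          21, 23, 20, 24, 22]
print(quickselect(window, 12))  # prints 12, the median
```

For a 5×5 neighbourhood the median is the 13th smallest of the 25 values, i.e. index 12 when counting from zero.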

Why Fortran and Cobol will likely be immortal

I find it funny when people call Fortran and Cobol “old” languages. It seems a little ageist. Fortran appeared in 1957, and C, which nobody really calls old at all, appeared in 1972… only 15 years later. Hardly enough to warrant such abuse. Of course Fortran is a Baby Boomer and C is Gen X, which is maybe why nobody messes with C, but somehow Fortran is fair game. Now the big problem is that nobody in higher ed teaches Fortran anymore, opting instead for something like C, or one of the Millennials – C++ or Java. None of these are great options for introductory programming, and most institutions flog them into the ground, teaching very little else in terms of languages, and making students almost monolingual in the process (learning a combination of C-based languages is really being monolingual, as very little differentiates the core structure of these languages).

This mantra has led to students having an inherent bias towards one or two languages. It may be the leading cause of the lack of Fortran programmers… oh, and Cobol programmers too. But people tend to shoot themselves in the foot using this approach: far too many people have skills in languages like C, so there is a much larger pool of people for companies to choose from. And despite what the internet says, neither Fortran nor Cobol is a dead language. We rely on them far too much, and reengineering code is a colossal nightmare, not to mention *very* costly. Think AI will do it? Maybe, but which financial institution is going to risk it? Even two minutes of outage is too much.

Besides which, there is nothing wrong with Fortran, because it is not the same language as the Fortran I of 1957 fame. Languages evolve, which I doubt many of these people actually take into consideration when they write their “dead language” lists. Sure, by looking at “jobs available” you may come to that conclusion, but you actually have to be clued into how software is used. There are an estimated 800 billion lines of Cobol code in daily use around the world, and the number continues to grow. Hardly the sign of a dying language, and that’s because Cobol does finance better than any other language – that’s what it was designed to do. Lots of science is still done in Fortran because, guess what, it’s good at performing calculations, especially on high-performance platforms.

Look, programming languages don’t age – if they dare to evolve. Sure, Fortran and Cobol aren’t multi-paradigm to the extent newer languages are, but with all that functionality comes key baggage – complex languages. Just because a language has structural simplicity doesn’t mean it can’t do everything other languages do.

Recursion – The Knight’s Tour (N.Wirth)

The Knight’s tour has captured the attention of mathematicians and puzzle enthusiasts since the beginning of the 18th century. The problem is to move the knight, beginning on any given square of the chess board, in such a manner that it travels successively to all 64 squares, touching each square once and only once. A solution is sometimes represented by placing the numbers 1, 2, …, 64 in the squares of the chess board, indicating the order in which the squares are reached.

Let’s consider an n×n board. A knight moves according to the rules of chess, i.e. in an “L” shape: two squares horizontally or vertically, and then one square in the perpendicular direction (illustrated in Figures 1 and 2). The knight is placed at an initial position (x₀,y₀). The problem requires finding a tour through the board such that every position on the board is visited exactly once. This problem has attracted the attention of mathematicians for over two centuries, although various forms of it date back over 1000 years (see further reading). The first to examine the problem in detail was the Swiss mathematician Leonhard Euler. In the latter half of the 18th century he presented works analyzing the movement of the knight on the basis of closed circuits, i.e. after a series of moves the knight returns to the starting point. This type of closed knight’s tour has been named after Euler.

Fig. 1: The eight positions that can be reached from the centre.
Fig. 2: Examples of the “L”-shaped moves the knight can perform.

A knight’s tour on an n×n board begins at some square and visits each square on the board exactly once, using only legal moves (a tour that also returns to its starting square is called a closed tour).

The algorithm described below is from the ultimate book on algorithms, Niklaus Wirth’s Algorithms + Data Structures = Programs. This solution is essentially one which uses a backtracking algorithm implemented via recursion. The algorithm is based around a procedure called try(), which attempts to find the next move. Here is the algorithm in pseudocode:

procedure try(next_move)
begin
   initialize selection of moves
   repeat
      select next candidate from the list of moves
      if (acceptable) then
      begin
         mark move
         if (board not full) then
            begin
               try(next_move)
               if (not successful) then backtrack (erase previous move)
            end
      end
   until (move was successful) or (no more candidates)
end

Wirth’s Pascal program has been adapted to Ada below.

procedure try(i: in integer; x, y: in integer; q: out boolean) is

   k, u, v : integer;
   q1 : boolean;

begin
   k := 0;
   loop
      k := k + 1;
      q1 := false;
      u := x + a(k);
      v := y + b(k);
      if u >= 1 and u <= 8 and v >= 1 and v <= 8 then
         if h(u,v) = 0 then
            h(u,v) := i;
            if i < nsq then
               try(i+1,u,v,q1);
               if not q1 then
                  h(u,v) := 0;
               end if;
            else
               q1 := true;
            end if;
         end if;
      end if;
      exit when q1 or k=8;
   end loop;
   q := q1;
end try;

Next we can define the main program:

with Ada.Text_IO; use Ada.Text_IO;
with Ada.Integer_Text_IO; use Ada.Integer_Text_IO;

procedure knightstour8 is
   n : constant := 8;
   nsq : constant := 64;
   a, b : array (1..8) of integer;    -- the eight knight move offsets
   q : boolean;
   h : array (1..8,1..8) of integer;  -- move number for each square (0 = unvisited)

   -- procedure try() goes here

begin
   a := (2, 1, -1, -2, -2, -1, 1, 2);
   b := (1, 2, 2, 1, -1, -2, -2, -1);

   for i in 1..n loop
      for j in 1..n loop
         h(i,j) := 0;
      end loop;
   end loop;

   h(1,1) := 1;

   try(2,1,1,q);

   if q then
      for i in 1..n loop
         for j in 1..n loop
            put(h(i,j), 4);
         end loop;
         new_line;
      end loop;
   else
      put_line("no solution");
   end if;

end knightstour8;

Here is the solution. Note that this program starts at position (1,1), and requires modification to start at an arbitrary square.

   1  60  39  34  31  18   9  64
  38  35  32  61  10  63  30  17
  59   2  37  40  33  28  19   8
  36  49  42  27  62  11  16  29
  43  58   3  50  41  24   7  20
  48  51  46  55  26  21  12  15
  57  44  53   4  23  14  25   6
  52  47  56  45  54   5  22  13
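The same backtracking scheme is easy to experiment with in a few lines of Python. The sketch below uses the same move table as the Ada program, but runs on a 5×5 board so the exhaustive search finishes instantly (open tours on a 5×5 board exist when starting from a corner):

```python
N = 5
# the eight knight moves, in the same order as the a/b arrays above
MOVES = [(2, 1), (1, 2), (-1, 2), (-2, 1),
         (-2, -1), (-1, -2), (1, -2), (2, -1)]

def tour(board, x, y, step):
    """Try to extend a partial tour whose latest square (x, y) was move `step`."""
    if step == N * N:
        return True
    for dx, dy in MOVES:
        u, v = x + dx, y + dy
        if 0 <= u < N and 0 <= v < N and board[u][v] == 0:
            board[u][v] = step + 1          # mark the move
            if tour(board, u, v, step + 1):
                return True
            board[u][v] = 0                 # backtrack: erase it
    return False

board = [[0] * N for _ in range(N)]
board[0][0] = 1                             # start in the corner
if tour(board, 0, 0, 1):
    for row in board:
        print("".join(f"{c:4d}" for c in row))
```

Scaling N back up to 8 reproduces the search the Ada program performs, though plain backtracking in Python is far slower than compiled code, so expect a wait.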

Further reading:

  • Knight’s Tour Notes (comprehensive guide to all things related to the Knight’s Tour, with an extensive bibliography, and notes on various approaches)

Is Gen-X tiring of technology?

I was born in 1970, part of Gen X (you know, the generation nobody talks about). We lived through the whole technology experience. In the 1970s there was little in the way of consumer technology, at least not in the home. If you were lucky you might have had an Odyssey video game console (1972), but few did (it sold for US$99.95). We grew up playing outside, oblivious to computers. There was more interest in space travel, aliens and the Bermuda Triangle than there was in technology – not surprising given 2001: A Space Odyssey (1968) and Star Wars: A New Hope (1977). By the late 1970s cassette tapes had become widespread, and in 1979 the Walkman appeared – music was now portable. Now *that* was a cool technology.

Then in the ’80s home computers arrived, *if* you could afford them. In the mid-’80s we had an Apple IIe which my father borrowed from work (along with a dot-matrix printer). That was lucky I guess, but it didn’t really have much in the way of software, so I wrote essays for school by writing BASIC programs made of PRINT statements. Oh, and the only real game that I can remember was Pong. There was no Internet per se, just a connection of computers from various country networks. No browsers, no nothing – basically just email, ftp, and remote logins. Some people took computing in high school, writing BASIC programs on TRS-80 machines (I was SOOOO not interested in that).

Most modern technology originated in the 1970s

From the ’80s onward we experienced every form of tech. Every new computer thing. We went from a world with no tech to one where tech is everywhere. A world where we once saw future technology on Star Trek, and then watched the props become reality. Nobody really thought it would happen so quickly. And somewhere along the way we lost something. Technology has become ubiquitous, maybe too ubiquitous, to the point where it has honestly become overwhelming. We are at the point where we need to question whether we need as much technology as we use. And as much as I like some aspects of the internet (it makes researching digitized historic books *so* easy), there is a lot of flotsam and jetsam on it.

Sometimes I, like many of my generation, long for a time before everyone became obsessed with social media and screens everywhere. (According to a Harris Poll survey, 77% of Americans aged 35-54 said they would like to return to a time before humanity was “plugged in”.) Are we Luddites? Perhaps. But there are times when it is more satisfying to play an actual CD than it is to play songs on my iPhone. The sound is certainly better. Somewhere I still have a Discman as well. Even vinyl has made a huge comeback. The other day I walked past a consumer electronics store in downtown Toronto, and was amazed at the cornucopia of turntables they had on display – like, a *lot*. Even vintage markets have plenty of vinyl for sale… not something you would have seen even five years ago.

Gen X has seen consumer technology evolve from nothing to all-encompassing. Some of us do long for a world where there was less of it, where it didn’t necessarily pervade every aspect of our lives. Even as someone who works in computing, I’m somewhat tired of modern technology – AI, smart-this, smart-that, intelligent light switches and fridges. None of it is really relevant. It often doesn’t make life easier, and it is more complicated to fix. Maybe going back to a more basic world will allow us to better understand what is happening to the environment, and be better local citizens.

In academia few dare to criticize the status quo

After over three decades in academia in one form or another, I have learned one thing: don’t bother having an opinion, because few if any people want to hear it. The vision of academia as a sort of utopia of openness is a lie. Sure, there might be places where things are very congenial, but on the whole it never really seems that way. Academia is generally a place where new ideas go to die. I’m not talking about research, because that is everyone’s individual utopia. I’m talking more about pedagogical ideas, and how to make a better teaching institution.

I remember countless years of effectively being bullied by colleagues in meetings. Whenever someone had an idea that contradicted the norm, they were subjected to the forces of “you’re wrong”. Even people whose research was pedagogy were mostly treated in a derogatory manner. It makes it hard to even think of pulling a curriculum out of the 1980s. These responses often look reasonable in their narrow contexts, but the net effect is that people’s legitimate experiences are diminished or downplayed, and the deeper issues go unacknowledged and unaddressed.

You could come up with thousands of new ideas, but the reality is that few people will want to hear them (even if they are backed up by best practices in the literature). Few people are interested in transformative change. Let’s take a simple introductory programming course, say one taught in C. If you have taught the course for over a decade, surely you have an understanding of why students often don’t understand concepts. What, you say? It could be the language? Right. C is not an appropriate language for teaching the concepts of introductory programming to novices (especially non-CS majors). Better to start with a language that teaches the fundamentals of programming without having to focus on the bizarre idiosyncrasies of C. But these days people just see “industry-based language”. The problem is that students end up learning a small amount about C, and very little about problem solving or even generic language structures. Don’t even get me started on pointers.

Transformative change at any level of education is hard, and few actually want to try. Often it doesn’t even have to be that metamorphic; it just requires some insight into the best practices of the community – small changes to make the learning experience better for students. Of course it’s easier to sit back and do nothing, because it’s comfortable, and who’s going to complain, right?

AI will make some people mindless

It’s funny how people worry about AI turning on humans the way it does in some movies. We’re probably nowhere near that sort of scenario. AI expert Rodney Brooks argues that most AI tools are stupider than we realize, and can’t really compete with humans on an intellectual basis. AI has been around since the mid-1950s; the only reason it now has the abilities it does is an abundance of data. So nothing to be afraid of, nothing to see here.

There is a bigger problem though, and that is the fact that AI is the latest contributor to the dumbing-down of humankind. This trend likely started with social media, and has been exacerbated by the lack of proper teaching in schools. AI in its various incarnations, such as ChatGPT, will only make things worse. How, you may ask? Well, we have somewhat trained the younger generation to pick the lowest-hanging fruit, to take the path of least resistance. When you grew up with encyclopedias and books, you were forced to go and find the information, and interpret what you found. When Wikipedia arrived on the scene, it provided a huge amount of information (not always accurate) for very little effort. It was now easy to cut-and-paste such material with very little consideration of its validity, source, or even content. Social media has made things worse by effectively teaching people that “life” is represented as small snippets of information (often with very little context).

People are being dumbed down mainly because they will use AI to think for them: offering a question, and having AI provide them with an answer. Providing a simple answer to a question might seem somewhat innocuous, but because no cognitive effort or process went into actually finding the answer, the brain is unlikely to retain the information. Once this sort of behaviour becomes ingrained, some people may stop thinking critically or questioning information. A good example is the student who uses AI to write a program for an assignment. I’m not talking about a complicated program, but something you might find in the first two years of a CS program. The AI builds the program, the student submits the program. But the student doesn’t understand how the program works, or why it was built the way it was. The student may get a good grade in the class, but is completely oblivious to the knowledge they failed to gain (and will likely falter later because of it). It’s similar with essays in the humanities, but there it is often more obvious that AI wrote the piece.

I’m not saying that all people will take this path, nor that AI isn’t useful in certain circumstances, but relying on it too much can lead to a reduction in cognitive skills. In addition, many people won’t ever question the answer AI provides (and it may be wrong). And before anyone says it will augment people’s way of thinking – it won’t. And don’t get me started on things like AI art and photography. Using AI to generate art dilutes the art that people with real talent produce – but maybe society just doesn’t care anymore. Making everyone homogeneous seems to be the real outcome of AI – clone-like, conforming, and unable to think outside the box.

Disclaimer: I really don’t think AI is necessary at this juncture in human existence. We should be teaching people to better use their minds instead of taking the easy route. In fact, I don’t really like AI.