Recursion – Sudan’s Function

Apart from Ackermann’s function, there is another function which is recursive but not primitive recursive – the Sudan function. Published in 1927, it was one of the first functions shown to have this property, derived by the Romanian mathematician Gabriel Sudan (1899-1977) [2]. In the mid 1920s, both Sudan and Wilhelm Ackermann were (PhD) students of David Hilbert at the University of Göttingen in Germany, studying the foundations of computation. Sudan was in Germany from 1922-1925, after which he returned to Romania. Both Ackermann and Sudan are credited with discovering recursive functions that are not primitive recursive.

While Ackermann’s function is well known, Sudan’s is not. At about the same time that Ackermann submitted his paper for publication, Sudan independently submitted his own work. Sudan cited the ideas of Ackermann contained in a paper by Hilbert [4], and Ackermann in turn acknowledged Sudan’s work [3]. However, Sudan’s paper remained relatively unknown, in part because of the obscurity of the journal in which it was published. Although it appeared in 1927 [2], it was not until 1979 that his contribution was presented at a conference by Calude and colleagues [1]. Here is Sudan’s function:

S(m, n, 0) = m + n
S(m, 0, k) = m
S(m, n, k) = S(S(m, n-1, k), S(m, n-1, k) + n, k-1)   for n ≥ 1, k ≥ 1

Here is the algorithm implemented in C:

int sudan(int m, int n, int k)
{
   if (k == 0)
      return m + n;
   else if (n == 0)
      return m;
   else {
      int s = sudan(m, n-1, k);     /* compute S(m,n-1,k) once, use it twice */
      return sudan(s, s + n, k-1);
   }
}

Table 1 shows some of the output for various values of m and n, when k=1.

n/m      0      1      2      3      4      5      6
0        0      1      2      3      4      5      6
1        1      3      5      7      9     11     13
2        4      8     12     16     20     24     28
3       11     19     27     35     43     51     59
4       26     42     58     74     90    106    122
5       57     89    121    153    185    217    249
6      120    184    248    312    376    440    504

Table 1: Some values of S(m,n,1) for m and n from 0 to 6
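
As a quick sanity check, the C implementation can be compared against Table 1. The small test harness below is my own addition, not part of the original code:

#include <stdio.h>

int sudan(int m, int n, int k);   /* the function defined above */

int main(void)
{
   printf("%d\n", sudan(3, 3, 1));   /* prints 35, matching Table 1 */
   printf("%d\n", sudan(6, 6, 1));   /* prints 504, matching Table 1 */
   return 0;
}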

Ackermann cited Sudan’s work in a footnote [3, p.119]:

Original: Eine Arbeit, die mit der vorliegenden manche Berührungspunkte hat, wird von Herrn G. Sudan publiziert werden. Es handelt sich bei ihr um die Definition von Zahlen der zweiten Zahlklasse, die man in ähnlicher Weise klassifizieren kann wie die Definitionen der reellen Zahlen.

Translation: A work that has some points of contact with the present one will be published by Mr. G. Sudan. It is about the definition of numbers of the second number class, which can be classified in a similar way to the definitions of real numbers.

  1. Calude, C., Marcus, S., Tevy, I., “The first example of a recursive function which is not primitive recursive”, Historia Mathematica, 6, pp.380-384 (1979).
  2. Sudan, G., “Sur le nombre transfini ω^ω”, Bulletin Mathématique de la Société Roumaine des Sciences, 30, pp.11-30 (1927).
  3. Ackermann, W., “Zum Hilbertschen Aufbau der reellen Zahlen”, Mathematische Annalen, 99, pp.118-133 (1928).
  4. Hilbert, D., “Sur l’infini”, Acta Mathematica, 48, pp.91-122 (1926).

Testing Julia for Speed – 2021 update

A few years ago I tested Julia for speed using a series of algorithms. Since Julia has evolved, I figured I would revisit the benchmarks to see how things have changed. The first was Ackermann’s function. The method of calculating Ackermann is by means of an iterative version of the algorithm which uses an explicit stack; this is partially because both Julia and Python have “issues” with deep recursion. All calculations were performed on a MacBook Pro with an M1 chip. All timings are in seconds.
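
For reference, the stack-based iterative approach looks roughly like the C sketch below. This is my own illustration of the technique, not the exact code used for the timings, and the stack size is an assumption (ample for Ackermann(4,1)):

#include <stdio.h>
#include <stdlib.h>

#define STACK_SIZE 1000000   /* assumed size; ackermann(4,1) needs roughly 65,000 entries */

int ackermann(int m, int n)
{
   int *stack = malloc(STACK_SIZE * sizeof(int));
   int top = 0;

   stack[top++] = m;
   while (top > 0) {
      m = stack[--top];
      if (m == 0)
         n = n + 1;                /* A(0,n) = n+1 */
      else if (n == 0) {
         stack[top++] = m - 1;     /* A(m,0) = A(m-1,1) */
         n = 1;
      } else {
         stack[top++] = m - 1;     /* outer call A(m-1,...) waits for the inner result */
         stack[top++] = m;         /* inner call A(m,n-1) */
         n = n - 1;
      }
   }
   free(stack);
   return n;
}

int main(void)
{
   printf("%d\n", ackermann(4, 1));   /* prints 65533 */
   return 0;
}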

Ackermann(4,1)                          2016      2021
Julia (1.5.3) (Int built-in stack)    121.79     23.08
C (gcc)                                32.77     25.51
Fortran (gfortran)                     37.51     17.2
Python3 (built-in stack)              878.1     334.7

Interestingly, the fastest time was provided by Fortran, followed by Julia, with C in third place. All times were faster than the 2016 measurements, even Python’s. Next we retested the Bubblesort.
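
For reference, the no-frills bubblesort being timed is essentially the textbook version; the C sketch below is my own, and the actual benchmark programs in each language may differ in minor details:

void bubblesort(int a[], int n)
{
   int i, j, temp;
   for (i = 0; i < n-1; i++)
      for (j = 0; j < n-1-i; j++)
         if (a[j] > a[j+1]) {      /* swap adjacent out-of-order elements */
            temp = a[j];
            a[j] = a[j+1];
            a[j+1] = temp;
         }
}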

Bubblesort (100,000 integers)           2016       2021
Julia (1.5.3)                        184.394      12.35
C (gcc)                               34.475      28.072
Fortran (gfortran)                    35.122      25.9
Python3                             3059.445    1038.44

Here Julia takes the clear lead, with C again in third place. The fact that Julia runs in less than half the time of C is quite miraculous. Finally, we retested the application of a Mean Filter to a grayscale image.

Mean Filter (image 2144×6640)           2016      2021
Julia (1.5.3)                          3.485      1.4
C (gcc)                                0.848      0.8
Fortran (gfortran)                     1.031      0.49
Python3                              245.2       66.03

Here Fortran actually holds on as the winner, with C second and Julia in third. Now some of these speed increases can obviously be attributed to improvements in chip design over the intervening 5 years. With Julia the improvements seen are likely efficiency improvements to the compiler itself. What is more interesting is that while C was the winner in 2016, this is not the case now. These programs are no frills, meaning that the coding for each was as similar as possible, without the use of fancy things like vectorization, which would likely have made Python more efficient, but would also make it hard to accurately compare the languages. In the case of the mean filter, it is not surprising that Fortran outpaces C, because Fortran allows array slicing, whereas in C this requires extra loops.
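
To illustrate the point about loops: a 3×3 mean filter in C needs explicit nested loops over both the image and the window, roughly as sketched below (my own sketch, assuming an int grayscale image; not the exact benchmark code). In Fortran the window sum can instead be written as a single expression over an array slice.

void meanfilter(int rows, int cols, int img[rows][cols], int out[rows][cols])
{
   int i, j, wi, wj, sum;
   for (i = 1; i < rows-1; i++)
      for (j = 1; j < cols-1; j++) {
         sum = 0;
         for (wi = -1; wi <= 1; wi++)      /* the extra loops over the 3x3 window */
            for (wj = -1; wj <= 1; wj++)
               sum = sum + img[i+wi][j+wj];
         out[i][j] = sum / 9;
      }
}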

Why were these three algorithms chosen? Largely because they exude slowness, or heavy use of resources, which is perfect for this kind of test.

The Wirth Trinity – Pascal

Algol was likely only minimally successful from the point of view of being used extensively in industry, but it was used in academic environments. There were many attempts to extend its applicability, which led to Algol 68 and Algol W. However, the complexity of Algol 68 ultimately led to that variant’s demise. PL/I was, in effect, an attempt to create a “Fortran VI” by combining features from Algol, Fortran and COBOL, but the product was once again an extremely large language. Both Algol 68 and PL/I exemplify the “Swiss army knife” approach to language design – providing every conceivable feature. What was required was a smaller, more compact language – enter Pascal.

Wirth began the design of Pascal (named after the French mathematician Blaise Pascal) in 1968, with a compiler written entirely in Pascal on a CDC 6000 series mainframe. The first compiler appeared in 1970. The language was tweaked in 1972, and became an ISO standard in 1982. In describing Pascal, Wirth remarked that “the guiding idea in the layout of the syntax of Pascal was simplicity, due to the recognition that structures difficult to process by compilers are also difficult to master by human readers and writers”. The highlights of Pascal [1] were:

  • Simple control structures for decisions, and repetitions.
  • Scalar data types: boolean, integer, real, char, and enumerations.
  • Ability to construct complex data structures using records, arrays, and sets.
  • Strict static typing – every constant, variable, function or parameter had a type.
  • Dynamic data structures built with the use of pointers.
  • Recursive procedures.

Influences?

Pascal was heavily influenced by Algol, and is often called Algol-like.

Why was Pascal developed?

Pascal was born out of what Wirth terms “an act of liberation”. Liberation from the prospect of using Algol or Fortran as languages to teach programming, and liberation from the design constraint imposed by committee work.

What did it actually do for programming?

Pascal was one of the first languages built from the ground up with the notion of structured programming.

  • With no commercial backing, Pascal succeeded on its own merits, and was implemented on systems ranging from Cray supercomputers to personal computers. Programmers who felt “straitjacketed” writing programs in BASIC flocked to Pascal.
  • It was an ideal language for teaching programming.
  • Pascal was drafted as the basis for the DOD’s Ada project.
  • It introduced records into scientific languages (although Algol-W really did this first).
  • It introduced a usable case statement.

Design considerations

The general idea dominating the design of Pascal was to provide a language appealing to systematic thinking, mirroring conventional mathematical notation, satisfying the needs of practical programming, and encouraging a structured approach. It should be simple, have the ability to handle non-numeric data, be suitable for teaching programming, and have the compile-time and runtime efficiency of Fortran.

Language features

  • Records, and variant records.
  • Algol-60 had blocks (local declarations + statements) and compound statements
    (statements only), whereas Pascal eliminated the block.
  • More, yet simpler control structures than Algol-60.
  • Use of a real assignment operator, :=
  • Strong type safety.
  • Case insensitive.
  • Native set operators.

Language deficiencies

  • Keeping the goto statement.
  • Syntactic ambiguities inherited from Algol – the lack of explicit closing symbols for
    nestable constructs, e.g. dangling-else.
  • Inability to support separate compilation of modules hindered the development of
    large programs.
  • The flawed case statement, which lacked an else clause.
  • Fixed array sizes precluded the use of general math and string libraries.
  • Fortran and Cobol programmers felt handcuffed by Pascal’s compulsory declaration of variables.
  • No exponentiation operator.

Criticisms of the language

  • Lack of block structures.
  • No dynamic arrays.
  • Lack of the “2nd” form of conditional (inline if).
  • Labels and the goto statement.
  • “Unnatural” unification of subranges, types and structures.
  • The difference between procedure and function is marginal.
  1. Wirth, N., “The development of procedural programming languages – personal contributions and perspectives”, in Modular Programming Languages, JMLC 2000, LNCS (1897).

Myths about becoming a programmer

I see a lot of people on sites like Quora asking about becoming a programmer. The reality of course is that programming is not easy, nor is it for everyone. Here are some common myths dispelled.

Myth 1: “I need to be super good at math.”

No, not really. Being great at calculus might be helpful in certain applications where you need to solve equations to derive an algorithm and implement a solution, but most universities put too much emphasis on esoteric math skills. Mathematical knowledge is useful in areas like image processing, but too often too much emphasis is placed on it at the expense of really important things like problem solving.

Myth 2: “I need a degree in computer science.”

Sure it helps, in the right context. Some of the most interesting people in computing never went to university, yet they achieved incredible things. Suppose you have two people: one went to university and got an A average, the other taught themselves programming and created an incredibly successful app which sold millions of copies. Who would I hire? The latter. Why? Because they have already proven themselves without the need for any academic hubris. That, and they have a portfolio of experience, and are self-motivated. There are many stories like this. Conversely, there are people who barely pass their courses and still get a degree. So do I want someone with a 55% average programming software for a nuclear reactor? Hardly. Oh, and remember, most of the people teaching computer science in institutions of higher learning don’t actually design software for a living.

Myth 3: “I need to be super brilliant.”

Define brilliance. The ability to get straight A’s in university? Hardly. You need to be a hard worker, and, more important than being brilliant, you need a good sense of exploration and a willingness to think outside the box. Clever algorithms come into existence from people who have the ability to think beyond current knowledge, into the great beyond. Brilliance comes in many forms, not just academic grades.

Myth 4: “I need to learn the best language.”

Define best. There is no best language, despite what anyone says. Every language has some inherent benefits and weaknesses, and is geared towards slightly different things. In reality, to become a good programmer you will need to learn about many different languages, and how they interact. Never have the attitude that “C is best”, or “I only code in Java”. Boring… everyone learns these languages. If you want to stand out, learn the languages that others don’t, like Fortran and Ada.

Myth 5: “I’m done learning.”

Many people seem to believe that once they have a degree they are done learning. Wrong. Computing, like many disciplines, continually evolves. You will need to learn new things all the time, and in fact maybe unlearn some of what you learned in university. University often doesn’t relate completely to the real world. Case in point: many years ago academia discarded teaching languages like Cobol because they thought it wasn’t relevant… news flash… it’s as relevant today as it was in 1970.

Myth 6: “Once I have mastered the syntax of a language, I can do anything.”

Mastering syntax is one thing; being able to actually implement an algorithm is another altogether. There are often many ways to implement an algorithm, and some might be more efficient than others. You have to have an innate understanding of how a language can be used to implement an algorithm. In some cases the language may not even be the best one for the job. For example, you can master Java syntax, but Java would not be the best language to implement a real-time control system for an autonomous train.

Myth 7: “I’m good at gaming, so I’ll be a great coder.”

Likely not. Gaming and actually designing and implementing software are worlds apart. If you don’t have any interests outside of the computer, I would imagine you aren’t really able to think outside the box… and I don’t want to hear any malarkey about having great hand-eye coordination and multitasking skills… it’s hyped-up baloney.

Myth 8: “I can master language X in a few weeks.”

🤣 Nope. Nada. Not likely. You may get a hang of the syntax, but master? That’s like saying you could become a Jedi in a few weeks.

Myth 9: “I learned HTML, and it was easy.”

Yeah, HTML may be easy, but that’s because it’s not a programming language, despite what people say. HTML is a language for marking up the structure of websites… and it doesn’t work well without CSS (also not a programming language), and things like Javascript (which is a language) to make things dynamic. Programming languages implement logic, like making decisions and repeating actions; HTML doesn’t do that.

Myth 10: “I’m a woman, programming isn’t for me.”

Why not? Just because there are so many guys in computing? Ignore that, and follow your interests. Women were as much at the forefront of computing in its formative years as men (it’s just often conveniently forgotten). Actually, some of the best programmers in my classes are women.

Myth 11: “Programmers sit in front of a machine all day.”

Programming isn’t all about machines, and it isn’t all about coding. It is just as much about coming up with designs, and new algorithms, as implementing them. Besides, these days you can work from just about anywhere. Some people find inspiration sitting in a cabin in Iceland, or on a beach. It’s what you make of it.

Myth 12: “The more tools I use the better programmer I am.”

No. Tools are fine, but sometimes the more tools you know, the less you understand about what is happening. A good example is programmers who eschew learning low-level stuff like the command line, instead opting only for interactive development environments. They don’t understand how things work at the lowest level, and so have less of an understanding of what is going on overall.

Myth 13: “I’m a cool programmer because I code everything on the fly.”

No, you’re not. All it proves is that you likely never followed instructions. You probably indent with two spaces, or worse, use tabs. Coding on the fly is okay for trying out small things, experimenting and the like, but it’s not good for large-scale projects because it’s easy to miss things. People usually code on the fly because they think they are cool. Big mistake. I see it when people try to translate code, and then wonder why they get in a mess… it’s usually because they have no clue what they are doing.

C++ – The Cobol of the 90s

A quote from The Unix-Haters Handbook (1994, pp.203-204):

… C++ misses the point of what being object-oriented was all about. Instead of simplifying things, C++ sets a new world record for complexity. Like Unix, C++ was never designed, it mutated as one goofy mistake after another became obvious. It’s just one big mess of afterthoughts. There is no grammar specifying the language, so you can’t even tell when a given line of code is legitimate or not. … Comparing C++ to Cobol is unfair to Cobol, which actually was a marvellous feat of engineering, given the technology of its day.

When you could read about the whole Internet in a book

When I was in university in the 1980s, our view of the internet was just a simple band of networks in sporadic countries, linked by undersea cables that were probably used more for other communications than the Internet. There was no web. There was email, USENET news, and far-distant sites you could ftp into, for various reasons, mostly to download freeware and shareware for MS-DOS. Sites like SIMTEL-20.

In 1990 O’Reilly & Associates launched a book, “!%@:: A Directory of Electronic Mail Addressing and Networks”, which basically outlined every network in the world and how they were connected. Places like BITNet, CDNnet, USENet, and Internet. Yes, Internet had its own entry, as “US Research and University Projects TCP/IP Network”. Its description mentions that it began in 1982 when a series of networks like ARPANET and MILNET were interconnected, and that by 1990 it connected over 40 countries. The book talked about issues like addressing for emails, architecture, and which other networks each network was connected to. At the end of the book was a list of second and third level domains by organization.

Every network had its own page.

You could email someone if you knew their email address. You could read USENet news to keep up with what was happening in the world in any number of weird and wonderful newsgroups. It was the place to go to post a technical problem, or just some question in the myriad of stupid newsgroups. It was the only real connection to the outside world. We spent time downloading shareware from SIMTEL-20, then housed at the White Sands Missile Range with ARPANet access. It may have been one of the busiest sites on the early Internet. Sometimes the submarine cables would be down, and the “net” would effectively shut down.

Why C and C++ are dangerous languages (for novice programmers)

The problem with languages such as C and C++ is that they are inherently dangerous for those who don’t know how to use them properly. It’s sort of like allowing a novice forestry worker to use a chainsaw without any protective gear: dangerous no matter how well they think they know how to use it. Chainsaws are amongst the most dangerous of tools, just as C and C++ are amongst the most dangerous of languages. Why? Because both languages allow things to happen that the novice may not anticipate. Take as an example the following C++ program, which just processes an integer array of 10 elements. The problem lies in the fact that the loop processes far more elements than exist in the array. And what happens? Nothing, except that the values it outputs are erroneous.

#include <iostream>
using namespace std;

int main() {
   int x[10] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
   int i, y;
   i = 10;
   while (i>-10){
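      // x[i] is read out of bounds when i is 10, and again for every negative i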
      y = x[i] * 67;
      std::cout << i << "->" << y << endl;
      i = i - 1;
   }
   return 0;
}

What happens when x[10] or x[-1] is accessed? Something crazy. The element x[10] exists outside the purview of the array, and the results obtained are both unpredictable and may change every time the program is run. Accessing x[10] means the program reads some random memory address and retrieves some garbage value. The program assumes the value obtained is a number, multiplies this random number by 67, and ultimately prints out the value without a hint that there is a problem. Ellis and Stroustrup even point this out in their classic book on C++ [1]: “… the C array concept is weak and beyond repair.” Of course arrays are C at its utmost worst (not that arrays in C are really arrays). Case in point is this snippet of code from The Unix-Haters Handbook (p.192). It suggests that given the code:

char *str = "bugy";

Then the following equivalences all hold:

0[str] == 'b'
*(str+1) == 'u'
*(2+str) == 'g'
str[3] == 'y'

Which is just madness for the novice programmer. Arrays in C are just treated like pointers (you can debate this all you like, but a spade is a spade), and are not at all transparent. Sure, the novice programmer can simply use str[x], but at some point they are going to encounter the other forms, and wonder why. This, coupled with C’s lack of bounds checking, makes arrays inherently problematic.

This can of course be problematic if a bad piece of code finds its way into software expected to perform some real-time function. Surprisingly, it’s not the only problem; there are many. For the novice programmer, C and C++ are poor choices as introductory languages. Here are some of the simple issues.

  • Knowledge of C/C++ requires far too much knowledge of the internal workings of memory, e.g. stacks versus heaps, pointers, etc. Learning to program should focus on how language constructs are used to implement an algorithm, not on low-level programming.
  • C and C++ are very permissive about what they allow to compile. Problems like out-of-bounds array accesses are not flagged, and the failure to indicate an error can lead to a level of complacency amongst novice programmers.
  • There is no inherent benefit to teaching OO to novice programmers. OO is an advanced methodology, and not a panacea for all programs.
  • Math is never simple. If you have two int variables, each of which has 2,000,000,000 assigned to it, the value returned when they are added together is not 4 billion. The code below prints -294967296 when executed, because the sum overflows a 32-bit signed integer. Do the same in Julia, and you get 4000000000. These languages require far too much knowledge about appropriate datatypes.
#include <stdio.h>

int main(void) {
   int x = 2000000000, y = 2000000000;
   int z = x + y;        /* overflows a 32-bit signed int */
   printf("%d\n", z);    /* typically prints -294967296 */
   return 0;
}

So why are these languages, and their descendants, used so often in introductory courses? Probably because somebody thought it was a good idea. In academia it is likely because of the pointers, which help weed out novice programmers who don’t have a great handle on memory straight out of the gate. There is no doubt that C is a powerful language, but it should likely be introduced at an appropriate stage, not as an introductory language. C was designed as a language to be used, to build Unix and beyond (it’s no coincidence that in earlier books on Unix, C appeared in the Unix Programming section). C++, on the other hand, should likely be left alone. It basically just overlays OO concepts onto C.

Mody put it best in his criticism of C in 1991. He said he was…

“appalled at the monstrous messes that computer scientists can produce under the name of `improvements’. It is to efforts such as C++ that I here refer. These artefacts are filled with frills and features but lack coherence, simplicity, understandability and implementability. If computer scientists could see that art is at the root of the best science, such ugly creatures could never take birth.”

  1. Ellis, M.A., Stroustrup, B., The Annotated C++ Reference Manual, Addison-Wesley (1990).
  2. Mody, R.P., “C in Education and Software Engineering”, ACM SIGCSE Bulletin, 23(3), pp.45-56 (1991).

    Has programming language design become dull and soulless?

    “Design and programming are human activities; forget that and all is lost.”

    ― Bjarne Stroustrup

We are a long way past the birth of computing somewhere in the 1940s. In many ways computing has come a long way; in others it has stagnated. There are few new revelations in technology, e.g. an iPhone can’t really get much better (except for better battery life). Even Moore’s Law doesn’t exactly hold true anymore; there are inherent limits to how much you can squeeze into a chip. Language design has suffered as well. Most language design occurred from the late 1950s to the early 1970s, which is when the core structures of programming languages were developed. Some were designed by committee (Algol 60), others by companies (Fortran), and others by individuals (Basic).

As chips got faster, the design of languages languished somewhat. Most newer languages in the 1980s and 90s were derived in some manner from C. C++ (1985) was influenced heavily by C, just as Java (1995) was heavily influenced by C++. Language design has evolved into taking the skeleton of an existing language and stuffing it full of features gleaned from other languages. Older languages have evolved, largely by removing passé features and streamlining language structures. We settled on a bunch of control structures, and never looked back. Is there something beyond decisions and repetition?

The increased speed of chips has had another effect on programming languages – we mindlessly program with little regard for the one thing early programming languages were designed around: efficiency. It doesn’t matter if we code in a slow language, because machines are fast. C was designed around a PDP-11 with little memory, and it was designed to be a sleek, minimalistic language. Move on to Python, and we get a somewhat easier to use (and learn) language, with a larger language footprint and slow execution. Flip the page to Julia, and we get a fast-ish language with a seemingly huge language footprint.

It could be partially attributed to the lack of discussion about programming languages anymore. In the 1960s and 70s, Dijkstra loved to poke a stick at languages, and people in general liked to debate them. There were whole magazines dedicated to programming, like Creative Computing and BYTE in its formative years, and journals like the “Journal of Pascal, Ada & Modula-2”. There were studies related to the psychology of programming, and how programming languages could be improved. People even seemed to care about things like the usability of languages. Journals used to actively publish interesting articles about language design instead of the esoteric articles they tend to publish now.

    Where is the discourse on how languages could be improved, made more usable? Language design has become the vocation of those who enjoy cramming as many features as possible into their bloatware, creating dull and soulless languages.

    The shell: Searching file contents with grep

I find OSX’s search tool Spotlight to be somewhat wanting. Sometimes I want to find something quickly in my coding folder, and what better way to do that than on the command line? But how? Is the best way using find, or a combination of find and grep? The find command is normally used for finding files, and there are large collections of examples of its use available online. Grep, on the other hand, searches for lines in a file that match a pattern; grep gets its name from the ed (editor) command g/regular-expression/p.

    In many cases I want to search the entire directory hierarchy, recursively. The following command uses find to find files, and grep to search for a pattern.

    find . -type f -exec grep -l 'GOTO' {} \;

This basically searches all files in all subdirectories of the current directory (“.”) for the pattern “GOTO”, and prints the names of the files that contain it (that is what the -l option does). Here is a sample output:

    ./closed_loop.for
    ./tictactoe/a1w18.for
    ./tictactoe/ttt.f95
    ./tictactoe/tictactoe.for
    ./legacy/overcommentF.for
    ./closed_loop2.for
    ./ttt.f95
    ./calc.for
    ./gotoWITHloop.f95
    ./loop_goto.for

This is useful, but it is somewhat of a handful to remember. An easier way is just to use grep recursively.

    grep -rl GOTO .

The “-rl” options tell grep to search recursively (-r) and output only “files-with-matches”, i.e. the names of the matching files (-l). It produces the same output as the combined find/grep command above. To search for files containing either of two patterns, one can use the “-E” option with the “or” symbol, |. The command below will search for either goto or GOTO.

    grep -rl -E 'goto|GOTO' .

    This can also be done using the “-i” option to ignore case.

    grep -irl goto .

To print out the line number of each match, use the “-n” option. For example:

    grep -nr GOTO .

    Here is a sample output:

    ./tictactoe/a1w18.for:47:      IF (TURN .EQ. 1) GOTO 16
    ./tictactoe/a1w18.for:50:      IF (OVER) GOTO 30
    ./tictactoe/a1w18.for:54:      GOTO 14
    ./tictactoe/a1w18.for:56:      IF (OVER) GOTO 30
    ./tictactoe/a1w18.for:57:      GOTO 10
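
If you only want to search particular file types, most versions of grep (including the BSD grep shipped with macOS) also support an “--include” option, which restricts the recursive search to files matching a glob. For example, to search only Fortran 95 files:

grep -nr --include='*.f95' GOTO .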

    The art of over-commenting

One of the most common questions asked by novice programmers is “How many comments should I add?”. Commenting is as much common sense as anything else. But how many comments are too many? In the following legacy Fortran program, which calculates the factorial n!, consider how many comments exist which don’t need to be there, primarily because they are self-explained by the code itself.

          PROGRAM FACTORIAL
    C     THIS PROGRAM CALCULATES A FACTORIAL, N!
          INTEGER N,M,FACT
          READ(*,*) N
    C     CHECK IF N IS LESS THAN ZERO, PRINT ERROR AND TERMINATE
          IF (N .LT. 0) THEN
             WRITE(*,*) N, " IS NEGATIVE"
             STOP
          END IF
    C     CHECK IF N IS EQUAL TO ZERO, PRINT 1 AND TERMINATE
          IF (N .EQ. 0) THEN
             WRITE(*,*) "FACTORIAL = 1"
             STOP
          END IF
    C     CHECK IF N IS GREATER THAN 15, PRINT ERROR AND TERMINATE
          IF (N .GT. 15) THEN
             WRITE(*,*) N, " IS GREATER THAN 15"
             STOP
          END IF
    C     CALCULATE THE FACTORIAL BACKWARDS FROM N
    C     ASSIGN N TO FACT
          FACT = N
    C     ASSIGN N TO M, THE VARIABLE WHICH DECREASES N->1
          M = N
    C     DETERMINE IF M IS LESS THAN OR EQUAL TO 1, IF TRUE JUMP TO LABEL 10
        5 IF (M .LE. 1) GOTO 10
    C     DECREMENT M
          M = M - 1
    C     CALCULATE THE FACTORIAL
          FACT = FACT * M
    C     JUMP TO LABEL 5
          GOTO 5
    C     PRINT OUT THE VALUE OF THE FACTORIAL AND TERMINATE
       10 WRITE(*,*) "FACTORIAL = ", FACT
          STOP
          END

Most of the comments are not really necessary. Below is the program, reengineered into a modern rendition with just enough comments for the reader to understand what is happening. Half of the original program dealt with out-of-range values of n; these checks have been condensed into a single statement. The goto statements, which were used to create a looping structure, have been replaced by a do loop. The statements are self-explanatory, and for the most part do not need comments.

    ! This program calculates a factorial, n!: n x (n-1) x (n-2) x ... x 1
    
    program factorial
       integer :: n,m,fact
       read(*,*) n
    
       ! Check the value of n, and terminate if not valid
       if (n .lt. 0 .or. n .gt. 15) then
          write(*,*) n, " is not in the bounds 0..15"
          stop
       end if
    
       ! If n=0, then 0!=1, else calculate n! backwards from n->2
       if (n .eq. 0) then
          fact = 1
       else
          fact = n
          do m = n-1,2,-1
             fact = fact * m
          end do
       end if
    
       write(*,*) "factorial = ", fact
    end