How old are programming languages

A short history of the most important programming languages.

Introduction

My essay "Development of Programming Languages" deals with this topic from the angle of ongoing development and the tendency to achieve more and more with a single language. This article covers the purely historical development. Of course, it can only include a few, but prominent, representatives of the programming languages. The article "Why C is not my favorite language" deals specifically with C and Pascal.

The first program you usually write is one that simply prints "Hello world!" followed by a line break on the screen. I will include this program in each of the programming languages so that you also get a feel for the syntax.

The beginnings: programmers as wire pluggers

For as long as computers have existed, they have had to be programmed. With the first computers this was only possible through hardware connections: the machines were programmed by plugging wires between the individual connection points, and only the data resided in memory. This was the case with the very first computers, ENIAC and MANIAC. A 1947 newsreel report about ENIAC says: "This computer can calculate the trajectory of a shell, which takes two and a half seconds to travel, in one and a half seconds. Programming it takes a day and a half." Unfortunately, it is not recorded whether this was meant satirically or seriously...

This was partly because it was not until 1949 that the architecture described by the mathematician von Neumann prevailed, the one still in use today: a memory that stores both data and programs. New wires no longer had to be laid; instead a program was stored in memory (which, however, was initially very limited). Such a program can be imagined as a sequence of 0s and 1s which in turn switched certain connections in the arithmetic unit so that it did what you had programmed. But as early as 1945/46 Zuse developed a high-level programming language which he called the Plankalkül ("plan calculus"). Since he was not only a European but also a German, he could not establish this language in the USA, where most computers were being built at the time. The Plankalkül was quite modern: it knew conditional branches, which were not possible on ENIAC because of the direct hardware wiring.

The hexadecimal system and machine language

The next step was to use the computer itself to simplify this input process. Instead of typing in 0s and 1s, whole groups of bits were entered, coded as digits: first groups of 3 bits, which could be coded with the digits 0 to 7 (octal system), and later groups of 4 bits, coded with the digits 0 to 9 and the letters A to F (hexadecimal system). The reason was that two such groups made up one byte. Until the early 1970s a byte was only 6 bits wide, which was enough for the capital letters, digits and some special characters (one reason why old programming languages such as COBOL and FORTRAN used commands in capital letters only); later 8 bits were used for a byte.

Hexadecimal numbers still exist today; depending on the language they have a special notation, in C for example the prefix "0x", in Pascal a "$", in some BASIC dialects an "&", and in assembler an "H" at the end. Older languages like C also know octal notation.
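The same value (6699 in decimal) written in these notations, for example:

    0x1A2B     (C)
    $1A2B      (Pascal)
    &1A2B      (BASIC dialects using "&")
    1A2BH      (assembler)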

With this system the input work could be simplified considerably: instead of eight switches, only two digits had to be entered. Nevertheless, this was not very user-friendly: who could tell what was behind the individual values?

Assembler

The very first programming language in today's sense (originating around 1952) was therefore assembler. An assembler is nothing more than a program that provides words a person can remember for the commands a computer understands, each of which has a certain numeric code. Assembler is therefore specific to each processor. This is perhaps no longer so important today, as only a few processors are widely used, but in the early days of the PC there were many computers with different processors. Each had its own assembly language. Sometimes these even differed although the processors were compatible with one another.

An example: The following bytes (all in hexadecimal)
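    21 00 10 11 00 20 19 22 00 30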

mean nothing to anyone. They represent the following commands of the Z80 processor:
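    LD   HL,1000H
    LD   DE,2000H
    ADD  HL,DE
    LD   (3000H),HL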

That is easier to understand; in plain language it means the following:

- Load the value 1000H into the processor register "HL"
- Load the value 2000H into the processor register "DE"
- Add the two registers HL and DE, the result is then in HL
- Store the result at memory location 3000H in memory.

So the whole thing is simply an integer addition; in BASIC you would probably have written "Result = &1000 + &2000". The addition requires 4 computing steps, but as a consolation, a Z80 processor (long since outdated) is so fast that at 4 MHz it can perform 250,000 of these computing steps per second. (H and & stand for the hexadecimal notation in assembler and BASIC respectively.) Furthermore, an assembler lets you assign names to memory cells and calculates the addresses for jumps, so it brought relief here as well.

Assembler persisted for a relatively long time, because with assembler you could get the maximum out of the computer. In times when storage capacity and computing power were still scarce, this was an important criterion. However, even with assembler one fundamental problem remained unsolved: testing the programs. For testing, you had to load the program under examination with a special program and set breakpoints at the point where you suspected the error, or follow the program flow step by step. This was not particularly comfortable. In addition, even then, single commands could crash the entire computer. Another disadvantage was and still is that there is no such thing as "the" assembler: it is specific to each processor. A Pentium has a different instruction set than a PowerPC processor or an Alpha chip. There are even small differences, important in certain areas, within one family, e.g. between the Pentium 4, Pentium III and Athlon processors in their floating-point instructions.

Hello world in Z80 assembler:
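A sketch, assuming a CP/M system, where BDOS function 9 (called via address 0005H) prints a string terminated by '$'; the exact directives vary slightly between assemblers:

            ORG  0100H              ; CP/M loads programs at address 0100H
    START:  LD   DE,MSG             ; DE points to the text
            LD   C,9                ; BDOS function 9: print '$'-terminated string
            CALL 0005H              ; call the BDOS entry point
            RET                     ; return to the operating system
    MSG:    DEFB 'Hello world!',13,10,'$'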

FORTRAN

The first high-level programming language was FORTRAN, developed in 1953-1954 under the direction of John Backus. FORTRAN is the abbreviation of "FORmula TRANslator", and that characterizes the language quite well. Back then, most computers were used for scientific purposes; it was all about mathematical calculations, and these could be entered easily in FORTRAN. But because it was the first programming language, it also had a few weaknesses. Nobody yet had an idea of what the "grammar" of a programming language should look like, and so FORTRAN's is very inconsistent and not easy to parse. It was particularly disastrous that variable names could contain spaces, which opened the door to misunderstandings. The first FORTRAN compiler consequently took 20 man-years to program.

The age of FORTRAN can also be recognized by its restrictions - only capital letters were allowed, variable names could be at most 6 characters long, and each line had a fixed structure:

Program text, for example, was only allowed in columns 7-72 of a line, jump labels only in columns 1-5. This was because programs were punched onto punched cards at the time: at the top an 80-column line of typewriter text (so you could read it), below it the holes that a punched card reader evaluated. One punched card was needed for each line of the program; for larger programs you needed a helper to carry the cards to the data center...
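Hello world in FORTRAN, here in the later FORTRAN 77 form that already allows character constants; note that the statements only begin in column 7:

      PROGRAM HELLO
      PRINT *, 'Hello world!'
      END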

FORTRAN initially had no means of structuring a program. Instead, one jumped wildly around the source code to line numbers with the infamous GOTO command. Furthermore, in FORTRAN you could calculate the explosion of a hydrogen bomb, but you could not work with text. Being the first programming language, it nevertheless became widely established. Then fate struck: work began on a standard that would allow programs written for computer A to run on computer B, i.e. everyone should understand the same FORTRAN. Whenever you do something like that, you can be sure it will take forever, because the representatives of the different manufacturers cannot agree. FORTRAN therefore continued to develop very slowly. The first standard, FORTRAN IV or FORTRAN 66, did not appear until 1966. The simplest elements of structured programming only arrived with the next standard in 1977. And only the FORTRAN 90 standard brought such progressive things as the departure from the rigid punched card format, upper and lower case, procedures, names instead of line numbers and CASE constructs. Considering that Pascal already had all this in 1970, this also shows how eagerly committees work...

The fundamental disadvantage of FORTRAN was that very confusing programs - so-called spaghetti code - were written in this language. Or, as one programmer put it: "C encourages people to write poorly structured programs, but FORTRAN reinforces this habit." This is probably one of the reasons why Backus decided he had to do the world some good and developed the Backus-Naur Form, still in use today, in which the grammar of high-level languages can be described. Starting with ALGOL, all languages used this form of grammar description, which also made it easier to build a translation program (a compiler). Soon there were tools for parsing source text and recognizing identifiers, numbers and so on.

COBOL

COBOL was the first language a committee came up with. It had some precursors: Grace Hopper had worked on the FLOW-MATIC language, whose principles also flowed into COBOL. Its development was very soon taken up by IBM and from 1959 carried forward by a joint committee. The first draft appeared in 1960. COBOL stands for COmmon Business Oriented Language. The goal was to create a language that was very easy to read - at least as far as that was possible with the rigid system of punched card columns with fixed meanings and UPPER CASE. COBOL has essentially one task: processing data - texts, bookings. It therefore quickly established itself in banking and administration. With COBOL you can, for example, format, output and sort numbers very easily.

But COBOL is extremely verbose. Before even one line of executable code appears, half a page of definitions has to be written. Mathematical operations are spelled out in words (MOVE ZERO TO XYZ; MULTIPLY XYZ BY 5 ...) - take a look at the example below! - and commands take countless parameters. Perhaps the aim was a language that was easy to read, or one that even business people could understand; in any case it was overdone.
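Hello world in COBOL might look roughly like this (a minimal sketch in COBOL-85 style; the division headers alone already make up most of the program):

           IDENTIFICATION DIVISION.
           PROGRAM-ID. HELLO.
           PROCEDURE DIVISION.
               DISPLAY "Hello world!".
               STOP RUN.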

As with FORTRAN, there were no constructs for structured programming in COBOL; the GOTO was needed even more often, because there were no real subroutines with a return to the caller.

COBOL was also standardized: the 1960 standard was followed by the 1968 standard. COBOL 74 and 85 expanded the language, but as with FORTRAN, COBOL remained limited to one application area compared to other languages. It can still be found in the banking sector today, and the millennium bug goes back to the COBOL habit of storing numbers as BCD digits rather than in binary - so the two century digits were simply left off to save space...

ALGOL

Both FORTRAN and COBOL were languages that led to confusing programs. Therefore, from 1958-1962, work began in Europe on a new language. The aim was to implement the principles of structured programming. What does that mean? A program should now look the way you would describe it colloquially. If you read in a file, you would say "Read data until you reach the end". In COBOL or FORTRAN this became an IF query with a jump to a line number - a number that tells you nothing. Algol offered, for the first time, the three basic loops you need to write a program without wild jumps: the FOR loop (counting), the WHILE loop (termination condition known before the loop starts) and the REPEAT loop (termination condition only checked at the end of the loop). Like FORTRAN, Algol had its strengths in mathematical operations but was weak in character processing.

Algol is an artificial word and stands for "Algorithmic Language". This was intended to express that, unlike in FORTRAN and COBOL, algorithms can easily be implemented in this language without the infamous GOTO. Algol itself was not very successful. After the first standard in 1960 there was a second, Algol 68, but after that the language quietly died out. One can speculate why; there were certainly two reasons. On the one hand, like FORTRAN, Algol was strong in the numerical field but weak in character processing; on the other hand, Algol came from Europe and was simply not taken seriously by the big computer manufacturers in America. Yet Algol introduced a great deal of what is an element of every procedural programming language today.

It was the first programming language whose grammar was defined in Backus-Naur Form. Thought was given not only to how readable a language is for humans, but also to how a compiler could implement it efficiently. Even so, it was easier to understand than FORTRAN or COBOL. It turned away from the punched card format and for the first time used lowercase letters and some special characters, which made source text more readable.

Pascal

One of those involved in the creation of Algol was Niklaus Wirth. He wanted to create a programming language that, unlike its predecessors, would educate students to program properly. That meant the language had a very logical, but also rigid, structure. Data had to be consistently separated from code, and blocks were marked with begin and end. Pascal programs are therefore very readable and can be checked efficiently by compilers, which in turn can point out programming errors. The first version of Pascal appeared in 1970 on the CDC 6600 supercomputer.
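Hello world in Pascal (the program header names the files used, here only the output):

    program Hello(output);
    begin
      WriteLn('Hello world!')
    end.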

Wirth took over a lot from Algol, but also added a few things; Pascal, for example, has full-fledged string routines. He introduced two concepts that turned out to be very important:

Structuring of data: until then there had only been the data types that the computer itself represents, such as characters, integers or floating-point numbers. In Pascal you could define composite data (records made up of various simple data types), subranges (only the letters A to Z) or sets (hearts, diamonds, spades, clubs) and thus move closer to the abstract problem domain.
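In Pascal such definitions look roughly like this (the type names are freely chosen for illustration):

    type
      Letter = 'A'..'Z';                           { subrange }
      Suit   = (Hearts, Diamonds, Spades, Clubs);  { enumeration }
      Hand   = set of Suit;                        { set }
      Person = record                              { composite data }
                 Initial : Letter;
                 Age     : integer
               end;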

The second was the concept of the stack machine, on which even the PDP-11 was later built and which is now part of every microcomputer. A stack solved the problem that all programming languages had until then when calling subroutines that were supposed to have their own variables: these are now created locally on the stack and released again when the procedure ends.

In addition, Wirth was one of the first to use the concept of the virtual machine - above all in order to be able to spread Pascal more quickly. The compiler translates Pascal into a simpler byte code, which is then interpreted in a machine-specific way. The advantage is that this bytecode is the same on all systems, so Pascal could be ported to a new system very quickly. Wirth wanted to avoid the fate that Algol had suffered as a European language. Pascal was indeed successful, but Wirth made a mistake: when he developed new concepts that made data even more abstract and allowed programs to be modularized, he put them into new languages - Modula-2, and a few years later, as an object-oriented approach, Oberon. He would have done better to improve Pascal, because neither of his two later children came anywhere close to Pascal's success. Pascal would have had what it takes to be what C is today: it is easier to learn and just as powerful (the supercomputers from CDC and Cray had Pascal compilers first; C followed later). But Niklaus Wirth had an academic teaching language in mind, not practical use. Two essential things are missing in Pascal:

  • Strings have a fixed length and are limited to 255 characters - doing word processing with them is difficult
  • There is no possibility of programming close to the machine - most manufacturers have retrofitted this, but then it is no longer standardized.

Instead, the manufacturers continued to extend Pascal. In the PC area, good old Pascal has become "Delphi", with the best elements from Modula, C++ and Smalltalk. Delphi is not that successful commercially, but it is the only Pascal dialect that is still intensively maintained today, and it was the first language in which graphical applications could be developed quickly (Rapid Application Development). Both Java and later C# borrowed from Delphi.

BASIC

Even before Niklaus Wirth, J. Kemeny and T. Kurtz set out to invent a programming language that was easy to learn. It was meant to prepare programmers for more powerful languages; in its syntax, BASIC immediately reminds you of FORTRAN. BASIC stands for Beginner's All-purpose Symbolic Instruction Code. The original language core was very simple and contained no elements of structured programming, no real procedures, no differentiated data types... in short, it lacked everything that Algol already offered. In return, BASIC was very easy to learn and very compact, but it was impossible to write larger programs in this language. BASIC was originally interpreted, which makes it easier for users to work with, as they can simply type in commands and try them out.
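Hello world in a classic line-numbered BASIC:

    10 PRINT "Hello world!"
    20 END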

For a long time BASIC led a niche existence. Pascal was used for teaching, because with it students learned the elements of structured programming and were not, as with BASIC, merely prepared for a switch to FORTRAN. It was not until 10 years after its invention that BASIC took off - thanks to the emergence of home computers. Home users, unlike students, were supposed to learn a simple language. Above all, however, the resources were limited: in the 1980s, home computers had 16-64 KBytes of memory. A simple BASIC interpreter fit into an 8 KByte ROM, a comfortable one into 16-32 KBytes - system requirements that no other programming language could meet.

When home computers died out 15 years later, BASIC also sank into oblivion. Only Microsoft continued to develop the language, but is now planning to let its development go dormant in favor of C#.

Smalltalk

In 1970, Xerox, which had become rich from its photocopiers, created the Xerox PARC research center in Palo Alto. In this research institute, young scientists were allowed to pursue their ideas without any obligation that something useful had to come out of it. And things were indeed invented there that would revolutionize computer technology. The most famous thing Xerox PARC produced was the graphical user interface, which Apple licensed for the Mac and Microsoft copied. Smalltalk was also developed there, starting in the early 1970s. Smalltalk is a programming language that is completely object-oriented.

If Pascal already took steps toward treating data more abstractly, object orientation is the logical next development: objects are data together with the methods (code) to process, convert or output them. Code can no longer be thought of separately from the data; both merge into a single unit. The Simula language had this earlier, but Smalltalk implemented it radically: everything is an object. There are no primitive data types, the type of a variable can even change, control structures are objects, and everything works by sending messages. The name arose from the fact that programs in Smalltalk require very little code, so you don't have to "talk much".

As with Pascal, the concept of the virtual machine was used so that code could be exchanged quickly. A virtual machine is an intermediate interpreter that converts Smalltalk code into machine code. If you want to implement Smalltalk on a new computer, you only have to adapt the virtual machine, not the whole language. After a first draft in 1972, the first standard came in 1980 with Smalltalk-80, which was fine-tuned in 1983.

Smalltalk's syntax was modeled on sentences in natural language. Unfortunately, it did not orient itself on existing programming languages, which did not exactly promote its spread. Nevertheless, Smalltalk was successful and was considered THE object-oriented programming language until Java emerged, which gave developers the advantage of not having to learn a new syntax but being able to carry over a lot from C.
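Hello world in Smalltalk is just a message to the Transcript object, the console-like window of the Smalltalk environment:

    Transcript show: 'Hello world!'; cr.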

The language would certainly have caught on better if it had not demanded such an elaborate, user-friendly system from its users. Smalltalk required a graphical interface with windows for the class definitions, the debugger and the Transcript window, in which you can start Smalltalk programs or simply evaluate expressions. In times when programming meant editing -> saving the program -> starting the compiler -> starting the linker -> running the program, each time with different programs from the command line, this was enormously progressive. But by the time computers with such interfaces prevailed and the hardware required for them had become correspondingly inexpensive, Java was just around the corner. Smalltalk was simply 10-15 years ahead of its time. There is a story that when Steve Jobs visited Xerox PARC and saw the interface developed in Smalltalk, he asked: "Can you scroll pixel by pixel instead of line by line?" The developer changed the Smalltalk code on the spot, and the system could then scroll pixel by pixel without being restarted. I am not aware of any other programming language in which this would be possible (in Java you can change a program while it is running, but not the Java core system).

C

The inventors of C, Kernighan and Ritchie, are arguably among the most hated personalities in the computer world. Around 1970 the two developed the language "C" from the language BCPL via the intermediate step "B". It was originally intended to make the development of UNIX portable and to provide a replacement for assembler.

As a result, C has features that are very close to the hardware. There are no data types that are not implemented on the machine itself; conventions already established at the time, such as call-by-reference interfaces, were omitted; instead a preprocessor was introduced that replaces text without any type checking; and the entire syntax was a significant step backwards compared to all previous attempts to make source text legible. Because of these shortcomings, many programmers refuse to regard C as a high-level programming language and see it more as a kind of "super assembler". If you look at C from this point of view - a replacement for assembler in systems programming - then its use makes sense, especially since various machine-level operators made it possible to generate fast code even with "stupid" compilers.

That wouldn't have been so bad if C had only been used, instead of assembler, to develop operating systems. But since UNIX was written in this language and UNIX established itself very quickly, C prevailed as well. The bigger systems get, the more pronounced C's shortcomings become: a lack of syntax and type checks, no modular design. The advantage of the cryptic notation, which was meant to relieve compilers of thinking, became less important once compilers optimized expressions on their own, but by then C was already very widespread. Furthermore, the language appeals to programmers who find Pascal long-winded and prefer very short but concise expressions. The same mindset can be found in UNIX, with system programs that take a large number of command-line parameters, and in scripting languages such as Perl, which have an even more operator-heavy syntax.

After the first version from 1971 there was a standard in 1978, the "K&R" C named after its inventors. An ANSI committee tried in 1989 to remedy the worst shortcomings of C, and since then there have been some improvements such as "Better C" or the new ANSI standard from 1999. However, because programs have to remain portable, the essential shortcomings of C have remained. The Windows Update function owes a large part of its work to C, because a fundamental security gap is that the C standard routines for string handling do not check whether a string you want to copy actually fits into the space you have reserved. If it does not, it overwrites code, and this is how malicious code can be smuggled into operating systems and executed. Today, compilers from Microsoft and the GCC try to paper over these disadvantages of C with a few tricks - but nobody considered abolishing these dangerous language properties. An assignment inside an IF condition can be just as dangerous. This, too, is allowed in C and means the program does not do what you think it does (usually a careless slip, confusing "=" for assignment with "==" for comparison).
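A small sketch of both pitfalls - the unchecked string copy and the assignment inside an IF condition (function and variable names are made up for illustration):

    #include <string.h>

    void example(const char *input, int x)
    {
        char buffer[8];
        strcpy(buffer, input);   /* no length check: a longer input overwrites adjacent memory */

        if (x = 5) {             /* assignment instead of comparison: the condition is always true */
            /* ... */
        }
    }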

C would certainly not have established itself had UNIX not been given away free of charge as an operating system for a long time and therefore been adopted at universities, which have always had little money for software. This began to put other programming languages at a disadvantage: system calls were geared towards the data types and functionality of C, and other programming languages first had to emulate them - which cost time and was a drawback. This trend continued with Windows, where all system calls are likewise aligned with C. And so C was able to spread through computer science like a cancerous tumor. Its syntax was "passed on" to C++, Java, C#, PHP, JavaScript...
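Hello world in C:

    #include <stdio.h>

    int main(void)
    {
        printf("Hello world!\n");
        return 0;
    }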

C++

Around 1982/83 Bjarne Stroustrup set about developing an object-oriented extension of C called "C with Classes"; this officially became C++ in 1987. He had learned from Niklaus Wirth's mistake and designed C++ so that C was included as a subset. This meant that a C programmer did not have to learn anything new and could compile old C programs with his C++ compiler. But it also meant that the fundamental problems of C - the hard-to-read syntax and the inadequate type checking - remained. For C++ he provided a number of additions that allowed most of these problems to be avoided.

So far so good; unfortunately C++ has become a language that suffers from featuritis. The entire syntax is so complex and there are so many ways to achieve one and the same thing - which even Bjarne Stroustrup admits - that you should, if possible, work with only a subset of the language instead of wasting years acquiring its full range. C++ shares this with other programming languages such as Ada, but unlike them, C++ is downward compatible with C. As a result there are development systems for graphical interfaces in C++ in which the code is bursting with macros and pointers, as in Visual Studio, and others whose object-oriented approach is more reminiscent of Java, such as the C++Builder.

The fundamental problem of error-prone constructs, however, remained. Like C, C++ has established itself in many areas of application development. C++ had a chance and wasted it: namely, to make a clean break. Instead of including C as a subset, C++ could have established itself as an alternative and simply thrown the flaws of C out of the language. The success Java has had shows that enough people were waiting for such a language.

Java / C#

In 1991, some programmers at Sun set out to create a language - supposedly to avoid implementation-dependent details in the programming of microcontrollers. The result was Java, named after the developers' favorite kind of coffee. Like C++, Java is based on the basic syntax of C - which wins over the large group of programmers who do not want to learn anything new. In contrast to C++, however, the language is purely object-oriented; the procedural elements of C were omitted entirely. Like Smalltalk or Wirth's first Pascal implementation, Java is based on a virtual machine, which means that all implementation details tied to hardware properties, such as the pointers that are so dangerous in C, disappear.
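Hello world in Java:

    public class Hello {
        public static void main(String[] args) {
            System.out.println("Hello world!");
        }
    }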

Java would surely have remained just one of many programming languages if it had not had two major advantages that became more important with the advent of the Internet:

  • It was platform independent, and the virtual machine allowed programs to run "safely" in an environment without access to the computer's resources - important for programs run inside a browser. Furthermore, Java can run on distributed systems - you can use it to connect computers together, and this property was unique when the Internet emerged.
  • It also contained routines for a graphical user interface. What this means becomes clear when you consider that the class libraries of C++ are powerful, but they end where even a cursor position has to be addressed: controlling text, let alone graphics, was previously completely implementation-dependent and could not be carried over between systems. In Java, by contrast, you can write a graphical user interface that runs on the internally completely different systems Mac OS, Windows and X11 (UNIX).

In 1995 the first version of Java appeared and soon experienced a hype. After the component model was adopted from Object Pascal, the Java 2 version followed in 2000. The boom has since shifted from desktop applications to server applications, because Java, as an interpreted programming language, is slower than C or Pascal. This is particularly noticeable with graphical interfaces, because here Java does not build directly on the operating system. For applications that only run on one system, Java is therefore also translated into machine code.

When Microsoft presented its own Java compiler for Windows, it "extended" it so that the generated code only ran under Windows. Sun, the creator of Java, licensed the language willingly but made sure that it remained portable, sued Microsoft - and won. As a result, Microsoft stopped developing Java and began developing its own language: C#. C# is more than a language, it is a system of its own. In addition to various detailed improvements over Java - some of which Java could later retrofit, e.g. by borrowing from Delphi - the main concern was to make C# attractive for developers and users. In C# you can write Windows applications that are fast and look like Windows applications, although the language is interpreted like Java. C# rests on its own .NET environment, which also supports other languages such as Delphi.NET, Visual Basic or Visual C++. In practice, C# is married to Windows; Mono, the only serious project trying to implement it on Linux, is years behind the Windows version.

Technically there is little difference between C# and Java today. C# prompted Java to upgrade considerably in versions 5 and 6 in terms of convenience. They are now essentially two very similar languages used by competitors in the computer market. The Hello World program already shows the similarity to Java:
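    using System;

    class Hello
    {
        static void Main()
        {
            Console.WriteLine("Hello world!");
        }
    }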

Java has lost ground in the past few years. Interactive websites today use multimedia content that can be programmed better with Flash, or they use JavaScript that is executed in the browser. In addition, according to recent reports (2012), Java increasingly seems to be becoming the main gateway for malware, so that computer magazines advise switching it off entirely.

Epilogue

When I became interested in Pascal as my second high-level programming language in 1986, I also looked into other programming languages. At that time there were around 100 programming languages; estimates today (I don't think anyone knows the exact number) put it at 1000. It is clear that this article can only shed light on a few languages. In addition to historical significance, the author's own knowledge of the languages naturally also played a role in the selection. LISP, PROLOG, PL/1 and Ada would certainly also be worth an entry, but the author knows these languages too little to write about them with any expertise.

In many areas it remains to be seen which language will prevail. Today a whole hodgepodge of languages and technologies is used for web programming: from universal languages (Java, Python, Ruby) through scripting languages (Perl, PHP, JavaScript) to Microsoft-specific technologies (ADO, DCOM, ActiveX, VBScript, C#). It is to be expected that many languages in this area will disappear within the next few years - just as today nobody talks any more about languages like PL/1, LOGO, FORTH and COMAL, which were current when the author, aged 15-19, gained his first experience with BASIC, assembler and Pascal.

Article last changed on 9/19/2012



© of the text: Bernd Leitenberger. Any publication of this text in whole or in part may only be made with the consent of the author.