Deficits of computer architecture

Guest contribution by Friedhelm Becker | 07.10.2019

It all started with a successful hacker attack on RSA in spring 2011. During the attack, the algorithm used to generate the number combinations on RSA tokens was stolen. RSA tokens are often used as a second factor for accessing IT systems. From then on, I analyzed computer systems for their vulnerability to hacker attacks, and my findings are sobering: practically all systems available on the market have exactly the same weak point! It can be found in current systems and even in the first programmable computers. And of course, hackers are still consistently exploiting this weak point today.

A common understanding of terms

Before I use some technical terms that might be misleading, I would like to briefly explain what I mean by them. This will (hopefully) give us a common understanding.

  • Computers, processors or controllers are in principle all the same: digital devices controlled by programs stored in the device’s memory.
  • A program in the sense of this article has nothing to do with events or broadcasting; it is a body of information created to make a computer do what its user expects it to do.
  • Instructions are the smallest subsets of programs. Changing a single instruction is often enough to change the function of a program significantly.
  • Data is also a body of information. It differs from programs in that it represents the information processed by the programs. The relationship of data to programs is – to borrow an image from a craftsman’s workshop – that of materials to tools.
  • Files are sets of data that can be identified as a unit. To stay with the example, a file corresponds to a workpiece.
  • Interfaces are connections between computers and their environment; they can lead to printers or scanners, but also to external storage and the Internet.
  • Hardware is the material part of a computer system. On its own, hardware is merely the machinery of a system; it needs the programs controlling it in order to be of any use.
  • Software is the collective term for data and programs. I try to avoid it in order to make clear whether I am writing about data or about programs. I make exceptions for established compound terms (like software configuration) and for the term malicious software, because there the distinction cannot always be drawn exactly – which, by the way, is entirely in the attackers’ interest.
  • Utility software is a fairly common term for the counterpart to malware; it refers to the data and programs installed by the user of a system for that user’s purposes.

 

The execution of programs

When computers were not yet networked, the weak point mentioned at the beginning was not yet relevant – at least not as a gateway to cybercrime. This changed when computers were networked with each other. At that time – about fifty years ago – computer manufacturers should have recognised and eliminated this vulnerability. But that didn’t happen.

The reason was that the PC (personal computer) had been invented shortly before and became a market hit. It has the same vulnerability, which is still maintained today for reasons of compatibility. This is why even modern computers, by design, cannot in principle distinguish whether the program they are currently running (a small sketch after the following list illustrates this)

  • is part of their intended use,
  • was put under their control by a hacker,
  • is not a program at all, but a data set that is mistakenly executed as if it were a program.
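
To make this concrete, here is a minimal sketch in C, assuming an x86-64 Linux machine; the byte values and the use of mmap are purely illustrative. A buffer is filled with bytes as if they were data, and the processor executes those very bytes as a program the moment it is pointed at them:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* Six bytes, stored like any other data: x86-64 machine code for
           "mov eax, 42; ret". */
        unsigned char payload[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

        /* Ask the operating system for a page that is writable AND executable. */
        void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) return 1;

        /* Copy the "data" into the page ... */
        memcpy(page, payload, sizeof payload);

        /* ... and execute it as a program. The hardware does not object. */
        int (*func)(void) = (int (*)(void))page;
        printf("The data returned: %d\n", func());

        munmap(page, 4096);
        return 0;
    }

That the sketch has to ask explicitly for an executable page is already a small concession to the problem: the NX bit of modern processors is a first, limited step in the direction argued for here – limited, because nothing prevents a page from being writable and executable at the same time.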

 

The long road to the antidote

With the rise of cyber attacks and their increasingly serious consequences, antidotes were devised – not against the actual cause, however, but against its possible effects.

If one disregards the rules of conduct for computer users, the antidotes are based on programs – or need programs in order to function. Their strategy is to identify malicious software by its static or dynamic properties, recognise it and then react (a small sketch of this signature approach follows the list below). This path is very long:

  1. A malicious software attack must be detected.
  2. The malicious software must be identified.
  3. An antidote – which in turn is only a program – has to be programmed.
  4. This antidote must be installed on the infected and threatened computers.
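
To illustrate steps 2 and 3, here is a very small sketch in C of the signature idea behind many of these antidotes; the byte patterns and the helper name looks_malicious are invented for the example. The point is that such a program can only recognise what it has already been told to look for – anything unknown passes silently:

    #include <stdio.h>
    #include <string.h>

    /* Invented example signatures of malware that has already been identified. */
    static const unsigned char SIGNATURES[][4] = {
        { 0xDE, 0xAD, 0xBE, 0xEF },
        { 0xCA, 0xFE, 0xBA, 0xBE },
    };

    /* Returns 1 if any known signature occurs in the buffer, 0 otherwise. */
    static int looks_malicious(const unsigned char *buf, size_t len) {
        for (size_t s = 0; s < sizeof SIGNATURES / sizeof SIGNATURES[0]; s++)
            for (size_t i = 0; i + 4 <= len; i++)
                if (memcmp(buf + i, SIGNATURES[s], 4) == 0)
                    return 1;
        return 0;  /* New, not-yet-identified malware is waved through. */
    }

    int main(void) {
        unsigned char file[] = { 0x00, 0xCA, 0xFE, 0xBA, 0xBE, 0x01 };
        printf("%s\n", looks_malicious(file, sizeof file) ? "match" : "no match");
        return 0;
    }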

There is a good chance that the hackers will be laughing up their sleeves, because they have already tapped the information they were after and developed new malware that exploits the same vulnerability in a different way.

The vulnerability

The vulnerability we are talking about is the architecture of the computer hardware. In fairness to the manufacturers, it must be said that before semiconductor memories, RAM used to be very expensive to produce. This led to solutions that made the most efficient use of the available storage space, among them the decision to establish as few large memory spaces as possible – ideally only one – that could be used as universally as possible. This in turn meant that programs – the tools of an IT system – were stored in the same memory as the data – the workpieces of the IT system; metaphorically speaking, tools and workpieces were kept on the same shelf of a workshop.

The consistent preservation of this weak point has over time led to a co-evolution of hardware and programs, the effects of which are intensified by the fact that the main players in the two fields are no longer the same. The result is a mutual blocking of major innovations, which has cemented the old hardware architecture.

A new computer architecture is needed

So what to do? We have to throw the old hardware architecture overboard! It is ultimately what makes the use of malicious software possible in the first place, because there is no separation between the different categories of data stored in the memories – neither in random access memories nor in permanent memories. The same separation is also missing at the interfaces: if programs and data can be introduced into a computer via the same interface, how are the two to be kept separate afterwards? This lack of separation makes it easy for hackers to disguise malware – and malware is a program! – as data and store it in memory. The next trick – and this is where the quality of the malicious software really comes into play – is to get the processor (or processors) to execute these foreign programs.

The easiest and probably most commonly used way is to wrap the malware in an email attachment and get the recipient of that email to open the attachment without first checking what it really contains. Hackers are now pretty good at giving an honest appearance and shamelessly exploiting users’ trust that “nothing will go wrong”.

If you throw the old hardware architecture overboard, a new architecture is inevitable. And this architecture should provide for the different categories of data to be stored inside the machine in separate storage areas, each with its own access attributes. That makes it technically impossible for data introduced into a computer system to be interpreted as instructions.
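
Today’s systems already offer a weak approximation of such access attributes in the form of page permissions. The following sketch – again C on Linux, purely illustrative – allocates a data area without the execute attribute, so that any attempt to jump into it would be stopped by the memory management unit:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* A data area: readable and writable, but deliberately NOT executable. */
        unsigned char *data = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (data == MAP_FAILED) return 1;

        /* The same bytes as in the earlier sketch may be stored here as data ... */
        unsigned char payload[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };
        memcpy(data, payload, sizeof payload);

        /* ... but jumping into this page would now raise a fault:
           int (*func)(void) = (int (*)(void))data;
           func();   // SIGSEGV - the page carries no execute attribute */

        puts("Stored as data, not executable.");
        munmap(data, 4096);
        return 0;
    }

The difference to the architecture demanded here is that these attributes must be set correctly by the operating system and can be combined at will; as the first sketch showed, a page that is writable and executable at the same time remains perfectly possible.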

Separating the data within the computer consequently also means storing it separately outside the computer. Here it is sufficient to split it into utility software – meaning the programs installed by the operator of the system – and the data that the system is to process.

The computer architecture of the future

Last but not least, a further step is necessary, one that fellow thinkers have certainly already recognised: in order to ensure that the programs cannot be contaminated, the system processor must be deprived of the possibility of changing the RAM intended for instructions. But how, then, are the programs to get into the computer’s memory?

This sounds revolutionary to users of workstation computers, but it is not at all: many computers used in industry or in vehicles already cope excellently with this situation. Their programs are stored in their memories during production – “burned in” is the common term for this.
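
Such controllers often follow a Harvard-style layout in which program memory (flash) and data memory are physically separate. A small AVR-style sketch – assuming avr-gcc and the <avr/pgmspace.h> header; the table contents and the function name read_entry are invented – shows that constants kept in program flash even have to be read with dedicated instructions:

    #include <avr/pgmspace.h>

    /* This lookup table is burned into the program flash together with the code;
       it occupies no RAM and cannot be overwritten at run time. */
    static const unsigned char table[4] PROGMEM = { 10, 20, 30, 40 };

    static unsigned char read_entry(unsigned char i) {
        /* Flash lives in a separate address space, so it needs its own
           read instruction (LPM) instead of a normal pointer dereference. */
        return pgm_read_byte(&table[i & 3]);
    }

    int main(void) {
        volatile unsigned char v = read_entry(2);  /* v == 30 */
        (void)v;
        for (;;) {}  /* typical embedded idle loop */
    }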

This procedure is of course not practical for workstations and other general-purpose computers. But even a simple additional processor that takes over the task of program installation solves the problem. This additional processor must, of course, have no access to the data to be processed. Such a computer architecture already exists – but so far, unfortunately, only on paper.

 

Notes:

Friedhelm Becker has published another post in the t2informatik Blog:

t2informatik Blog: What computer manufacturers can learn from farmers


Friedhelm Becker

Friedhelm Becker was born in 1952. He is married and has three daughters and four grandchildren. After successfully completing his studies in chemistry, he worked for three years in a building materials testing laboratory and then for eight years in the German Armed Forces. He is a retired naval officer.

After retiring from military service, Mr. Becker worked for well-known companies in the computer engineering (Univac, Sperry, UNISYS) and aerospace (Lockheed-Martin) sectors in various applications. Since 1974 he has been working in the field of computer-aided sensor-effector integration without any significant interruptions. Since 2013 he has been the owner of DCB Distribution & Consulting Becker.