How do you make a computer that reprograms itself?

This question occupies many designers of computer operating systems and creators of artificial intelligence applications. To answer it, many try to copy solutions from nature.

Computer scientists and microbiologists are slowly realizing that the DNA every living thing carries works much like a computer program.

Each cell is built from digital instructions stored in DNA. DNA has the same features as modern digital devices: multiple layers of digital encoding, decoding and data storage, error detection, error correction and repair. Beyond all that, DNA has the added ability to adapt to its environment, and that is what computer scientists would like to copy from nature.
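To make the digital side of the analogy concrete, here is a minimal sketch, in Python, of the simplest scheme that can both detect and correct a corrupted bit: a triple-repetition code. DNA's repair machinery is vastly more sophisticated; this only illustrates what "error detection, correction and repair" mean in digital terms.

```python
# A toy repetition code: each bit is stored three times, and a
# majority vote over each triple repairs any single flipped copy.
# This is only a digital illustration, not a model of DNA repair.

def encode(bits):
    return [b for b in bits for _ in range(3)]    # store each bit 3 times

def decode(coded):
    # Majority vote over each group of three copies.
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

msg = [1, 0, 1, 1]
stored = encode(msg)
stored[4] ^= 1                   # a "mutation": one stored copy is corrupted
assert decode(stored) == msg     # the error is detected and repaired
print(decode(stored))
```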

How living beings repair, heal and adapt to any situation puzzles many biologists. Computer calculations convinced them that random mutations could never remake cells on their own, so they came up with new theories that grant cells an intelligence able to determine the target of a mutation and deliberately implement it. According to the theory presented in the book Evolution 2.0, evolution is not a random event. Cells, on this theory, possess an evolutionary "Swiss Army knife" with five blades: changes are targeted, adaptive and conscious. Cells re-engineer themselves, in real time, within hours or even minutes. But nobody knows how the cells do it. If we knew the answer to this question, developers could do something similar when building new applications. So far, no human software does this: give software millions of chances and billions of years, and all it will do is crash.

The cells can do it.

They have DNA, which carries the genetic code to the next generation, and RNA, which produces proteins and other molecules under the direction of DNA, like a factory. The tasks of coding, message transmission and decoding are divided between these basic macromolecules. But how errors are fixed, and how new antibodies are devised to fight new diseases, remains far beyond the grasp of biologists and chemists, even though everything is written in the genetic code. Code is absolutely necessary for replication and for life. Code is needed so that cells have instructions for building themselves; code is required for reproduction. A code with the ability to rewrite itself is the key to any adaptation to new circumstances. This methodology of adaptation, written in the genetic code, is undeniable precisely in the creation of antibodies, whose task is to destroy foreign toxins, viruses or bacteria. When a bacterium or virus enters the body, its goal is to multiply, not to kill the infected organism. In doing so, some molecules in the bacterium or virus create toxins that harm the attacked organism. The attacked cells must therefore produce antibodies that destroy the unwanted toxins, but also the attackers' macromolecules that produce those toxins. If they destroy only the toxins and not the macromolecules that create them, the disease lasts longer. It is therefore more important to create antibodies that destroy the toxin-producing molecules than ones that destroy the toxins themselves. And since these harmful macromolecules sit on the bodies of viruses and bacteria, their destruction leads to the death of the attacker.

The process of antibody formation is possible only by gradually building a molecule capable of chemically destroying the undesirable foreign molecule. This requires small, chemically unstable molecules, free radicals, which the cell pushes toward the foreign body. When such a free radical manages to bind to the foreign macromolecule at some point, the attacked cell pushes out further free radicals and other molecules that chemically bind to the earlier free radical attached to the foreign macromolecule, while also trying to bond with the foreign macromolecule itself. As this new molecule grows and binds chemically to the foreign molecule, the foreign molecule becomes ever more chemically unstable. As a result, the foreign macromolecule may at some point disintegrate, leaving in its place the new molecule that destroyed it. This new molecule is a new antibody, which cellular RNA then begins to copy in millions of copies.
In a similar way, computer scientists could create applications to fix other applications.
When a computer program crashes, the computer halts and someone has to restart it. When an application enters an infinite loop, it can spin indefinitely and consume time, energy and other resources until someone shuts it down.
These two operating errors can be eliminated by dividing the system software into several levels, where the lower level supervises the higher one. If the higher level crashes, the lower level can reset it, and if the higher level works for a long time without producing results, the lower level can shut down the application that is spinning uselessly. Whether that happens depends on the developers and on how well they build the control commands into the programs themselves, as the sketch below suggests.
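As an illustration, here is a minimal watchdog sketch in Python, assuming a POSIX system and a hypothetical worker script worker.py that prints a heartbeat line whenever it makes progress. The lower level (the supervisor) restarts the higher level (the worker) when it crashes, and kills it when it stops reporting progress for too long.

```python
# A minimal watchdog sketch: the supervisor is the "lower level",
# the worker process is the "higher level". POSIX only, since it
# uses select() on a pipe. worker.py is a hypothetical script.
import select
import subprocess
import sys
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds of silence before the worker counts as hung

def run_supervised(cmd):
    while True:
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        last_beat = time.monotonic()
        while proc.poll() is None:
            # Wait up to one second for a heartbeat line from the worker.
            ready, _, _ = select.select([proc.stdout], [], [], 1.0)
            if ready and proc.stdout.readline():
                last_beat = time.monotonic()      # progress observed
            elif time.monotonic() - last_beat > HEARTBEAT_TIMEOUT:
                print("worker hung, killing it", file=sys.stderr)
                proc.kill()                       # end the useless loop
        print("worker exited, restarting it", file=sys.stderr)
        time.sleep(1)                             # brief pause, then restart

if __name__ == "__main__":
    run_supervised([sys.executable, "worker.py"])  # hypothetical worker
```
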
But what happens when the failure strikes the basic software itself, whose control commands did not anticipate all possible errors?
This problem could be solved by dividing a physical computer into at least three virtual computers connected in a single network on the same machine. Each of these virtual computers should run a different operating system, and each would have its own tasks. But each should also have a separate control subroutine whose task is to monitor the operation of the other two virtual computers and detect the problems they run into. If one computer shuts down, the control programs in the other two should analyze why it happened and then restart it. If one enters an infinite loop, the other two should detect it, and if both reach the same conclusion about the cause of the error, they should turn it off and on again. For this job, the control applications need tools for analyzing machine-code execution and detecting errors, tools for writing source code, tools for analyzing and verifying such a program, and tools for compiling source code and decompiling machine code back into source.
For both of these basic error types, it is important that the two other computers detect the same error and propose the same solution. Only when agreement is reached should action be taken.
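
The agreement rule itself is easy to sketch. In the toy example below, each "monitor" stands in for the control subroutine of one virtual computer; the diagnosis names are illustrative, not part of any real virtualization API.

```python
# A toy two-out-of-three agreement rule: act only when at least
# two monitors report the same diagnosis of a peer computer.
from collections import Counter

def diagnose(observations):
    """Return the agreed diagnosis, or None when the monitors disagree."""
    diagnosis, count = Counter(observations).most_common(1)[0]
    return diagnosis if count >= 2 else None

# Two monitors report a hang, one reports a crash: the hang wins.
print(diagnose(["infinite_loop", "infinite_loop", "crash"]))  # infinite_loop
# No two monitors agree, so no action is taken.
print(diagnose(["infinite_loop", "crash", "ok"]))             # None
```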

The real problem arises when they discover an error in an application that cannot be fixed by shutting the application down and starting it again. In that situation, the virtual computer running the application should, at the suggestion of the other two, give it a lower level of priority and reduced authority to read, write, send or delete data. In this way, even embedded malware such as viruses, worms, Trojans and similar applications would turn into harmless parasites that the user can easily detect and analyze to see what they were intended for.
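
On a Linux system, this "demote instead of delete" idea could look roughly like the sketch below: the suspect program is relaunched with the lowest CPU priority and tight resource limits, so even malware degrades into a slow, observable parasite. The program name suspect_app is a placeholder, not a real tool.

```python
# A hedged sketch of demoting a suspect application on Linux.
# "./suspect_app" is a hypothetical program name.
import os
import resource
import subprocess

def demote():
    os.nice(19)                                            # lowest CPU priority
    resource.setrlimit(resource.RLIMIT_FSIZE, (0, 0))      # forbid file writes
    resource.setrlimit(resource.RLIMIT_NPROC, (0, 0))      # forbid child processes

# The demoted process keeps running and can be observed safely.
proc = subprocess.Popen(["./suspect_app"], preexec_fn=demote)
```
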
At the same time, the other two virtual machines should minimally change the command that causes the problem in the source code of the failing application, and test the new version. To do this, they need to find out exactly where the program crashed and change only that command. If it is a functional command, the new command must perform the same function; if it is a control command, it must be modified, or deleted, or a new check must be added to prevent the failure. Testing should determine whether the new version of the application works, whether it retains all the features of the old version, and whether it has gained any new ones. After that, it should be tested whether the error that appeared in the old version recurs in the new one. If the tests show that the error will keep recurring, the command should be changed in some other way. Only when all tests show an improvement should the virtual machine that produced the corrected version send it to the other control virtual computer for testing. If that computer also confirms the improvement, the new version should be installed in place of the old one, and the old version compressed and saved as a backup in case the new version turns out to be worse. If that happens after a while, the new version should be shut down and the old version reinstalled.
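
Such a generate-and-validate repair loop can be sketched in miniature. In the toy below, the "application" is one buggy line of Python, the candidate patches are minimal single-token edits, and a patch is accepted only if every test passes. Real tooling would of course work on machine code and full source trees; this only shows the shape of the loop.

```python
# A toy generate-and-validate repair loop: try minimal edits to the
# faulty command and accept the first one that passes all tests.

BUGGY = "def clamp(x, lo, hi): return min(lo, min(x, hi))"   # fails the tests
PATCHES = [
    "def clamp(x, lo, hi): return max(lo, min(x, hi))",      # candidate 1
    "def clamp(x, lo, hi): return min(lo, max(x, hi))",      # candidate 2
]
TESTS = [((5, 0, 10), 5), ((-3, 0, 10), 0), ((42, 0, 10), 10)]

def passes(source):
    scope = {}
    exec(source, scope)                  # compile the candidate version
    return all(scope["clamp"](*args) == want for args, want in TESTS)

repaired = next((p for p in PATCHES if passes(p)), None)
print("accepted patch:", repaired)       # candidate 1 fixes the bug
```
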
In this way, dividing a computer into at least three mutually supervising virtual computers would bring much greater protection against crashes, lock-ups and minor bugs, and a better defense against malware. But to create completely new applications with completely new functions, an intelligent designer will still be needed, one who knows what the new application should do and how it should do it.

Other technical and technological analyses and innovations of mine can be found in this book.