Humans replicate themselves: autonomous biological robots

Continuation of evolution

Introduction

Even as a little boy, I was fascinated by artificial beings called robots. The idea that one day humanity will construct a being that can work and think independently is so fantastic that this topic has never let go of me and has shaped and influenced my entire life so far.

This far-reaching area allows the most freedom for conjectures and theories, especially today. One of the most interesting tasks of this work is therefore to compare and weigh contradictory statements and theories. Rummaging through such mountains of information is challenging, and finding the motivation to do so is not always easy. But when you then see how all the reports converge, how different scientists hold similar views of the future and everything comes down to a "common thread", you realize that the effort was worth it. The thought that we might exactly copy a system as ingenious as the brain - one created by nature, whose functioning we still do not fully understand - is one of the most fascinating aspects of this field.

The questions of whether there can be artificial intelligence at all, how we attempt to copy intelligence, and what the future of intelligent robots and their effects on humans might look like are what I would like to address in this paper.

A short cultural history of robotics and automata

Like the Greek titan Prometheus, who formed people from clay, humans throughout a long cultural history have also aspired to the creation of life. The fate of Prometheus is well known: when he brings fire to the people, Zeus has him chained to a rock, where an eagle devours his liver anew each day. And Zeus sends Pandora to the people, with her box of terrible plagues.

Or think of the golem of Jewish mysticism, that mute artificial man of clay built by pious masters. Golem is Hebrew for 'soulless matter'. With letter combinations from the 'Book of Creation' (Hebrew: Sefer Yezira) the golem can be programmed - to protect the Jewish people in times of persecution. But woe betide if it breaks free, violent and terrible: in the silent film classic 'The Golem', based on Meyrink's novel of the same name, this drama is impressively portrayed.

(In the Middle Ages, contemporaries feared that scholars who ventured too far into natural research and alchemy were producing human-like beings: in the Jewish tradition the High Rabbi Löw of Prague; in the Christian tradition Albertus Magnus, whose talking automaton his terrified pupil Thomas Aquinas - more theologian than naturalist - is said to have smashed with a stick.)

With the beginning of the modern era, the project is approached technically and scientifically. First with the means of mechanics: baroque music boxes as models for lifelike automatons. Then with electricity and light: in 1879 the novel 'L'Ève nouvelle' ('The New Eve') appeared, in which an artificial woman is brought to life by electricity and endowed with spirit - a female counterpart to Mary Shelley's 'Frankenstein'.

In 1921, the Czech writer Karel Čapek invented a family of robots that were to free mankind from hard labor. Eventually the robots are given emotions; as machine people they can no longer bear their slave existence and stage a revolt against their human masters. Right up to Stanley Kubrick's science fiction classic '2001: A Space Odyssey', the vision of an intelligent and sentient automaton coming into conflict with its human creator recurs - the Old Testament paradise myth, retold for computers.

Today, however, it is less about recreating an artificial person from sheet metal and steel. With bio-, genetic, information and computer technology we have long been in the process of creating new life, changing it, and continuing evolution with technical means. So it is about a long-term transformation of us humans, not about the threatening tin monsters of science fiction films. With bio- and computer technology we will intervene more and more in evolution. There are opportunities in this, but also dangers. From today's perspective, what are the foundations of this development?

Evolution and life

Let us first take a look at our current knowledge of the evolution of nature, which increasingly influences the development of computers and technology. The great time machine of the universe emerged roughly 14 billion years ago from a tiny initial state which, according to the laws of quantum mechanics, expanded in fractions of a second to cosmic scale. Gravitation then began to form the material structures of galaxies and the first generations of stars, which generated the chemical elements and perished again, letting new structures emerge - a process that continues today.

Life in the universe is not limited to the earth. In a prebiotic evolution, molecular systems under suitable planetary conditions independently develop the abilities of material and energy exchange (metabolism), self-replication and mutation. These properties are stored molecularly. Biochemistry is on the trail of the molecular programs that create life. Darwin's evolutionary tree of species on earth can be explained by the genetic programming of the DNA code. (Mutations are random changes in the DNA code that create branches in the evolutionary tree; selection is the driving force.)

In this tree, humans, flies and yeast are only a few branches apart. But evolution on earth was not steady. Random events such as meteorite impacts or long-term climate changes altered the ecological niches of species. Other forms of life that never appeared in the course of historical evolution would have been equally possible under the same natural laws.

The evolution of intelligent life is associated with the development of nervous systems and brains: nerve cells specialize, nervous systems enable learning processes and memory storage. Tools, languages and cultures emerge that are passed on independently of the dying individual. Evolution has thus developed, beyond DNA, a second mode of transmission. (Here too, other intelligent forms of life would have been possible under the same laws if living and environmental conditions had been different.)

The human brain is understood today as a complex system of neurons that interconnect in cell networks through neurochemical interactions. Computer-aided PET (positron emission tomography) recordings show flickering switching patterns of areas of the brain that are correlated with perceptions, movements, emotions, thoughts and consciousness. With this observation method, we see in real time that a patient thinks and feels, but not what he thinks and feels.

Evolution has thus formed a highly complex data and communication network. In the central nervous system, millions of neurons organize the complex signal and communication processes of the human organism. Firing and non-firing neurons produce a dense stream of binary signals that the brain decodes as information (perceptions, feelings, thoughts, etc.).

In evolution, however, data networks are not restricted to individual organisms. Sociobiology examines animal populations that organize their complex transport, signaling and communication systems through swarm intelligence. There is no central command and monitoring unit - no central processor operated by a single animal. The information is stored in a chemical diffusion field through which the individual animals communicate. Only the superorganism as a whole is capable of collective achievements, from laying branched ant trails to constructing complicated ant nests and termite mounds. In the same way, individual neurons cannot think and feel; only in collective ensembles do they generate the wondrous performance of the brain.

These stages of evolution from primordial matter to the development of brains and societies are now being used as models for computer architectures, computer programs and robots. But can robots ever bring about such evolutionary collective thinking?

The next section is therefore called:

Artificial Life and Artificial Intelligence

With quantum computing we have reached the smallest units of matter and the limits set by natural constants such as Planck's quantum of action and the speed of light - the ultima ratio of a computer. (In a conventional computer, a bit corresponds to exactly one of the two transistor states 'charged' (1) or 'uncharged' (0). A quantum bit corresponds only with a certain probability to one of the two quantum states 1 or 0 - for example, an electron of an atom occupying one of two energy levels, or an elementary particle being in one of two possible spin states ('up' or 'down'). Switches made of atoms or elementary particles interact according to the laws of quantum mechanics and allow superposition states. This makes parallel calculations of gigantic proportions conceivable at top speed.) Since matter can be reduced to the quantum states of elementary particles and thus to quantum bits, it represents, so to speak, 'coagulated' quantum information. The great time machine of the universe is thus at the same time a natural quantum computer; in principle, every piece of matter could be activated as a computer. For the technical construction of quantum computers, however, major problems remain today, because superposed (coherent) quantum states change through interaction with their environment, making stable storage of quantum information very difficult.
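The superposition described here can be illustrated with a minimal classical simulation of a single qubit: a pair of amplitudes whose squared magnitudes give the measurement probabilities. This is only an illustrative sketch; the function names are my own.

```python
import math

# A qubit's state is a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# Measuring yields 0 with probability |a|^2 and 1 with probability |b|^2.

def hadamard(state):
    """Apply the Hadamard gate, which puts a basis state into equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

# Start in the classical state 0 ...
qubit = (1.0, 0.0)
# ... and put it into superposition: both outcomes are now equally likely.
superposed = hadamard(qubit)
p0, p1 = probabilities(superposed)
print(round(p0, 3), round(p1, 3))  # 0.5 0.5
```

A real quantum computer gains its power from letting many such superposed qubits interact; simulating n qubits classically requires tracking 2^n amplitudes, which is exactly why the parallelism mentioned above is so enormous.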

In a microtransistor of a conventional computer, an external control voltage determines whether a current flows or not, so that bit sequences can be generated. In a molecular switch of molecular computing (e.g. benzene rings with attached atomic groups), a control voltage twists the molecules so that a current can or cannot flow. Molecular switches, conductors and memories allow greater packing density, speed and durability. Because of their small size, however, they would have to be manufactured with nano-tools or arranged by self-assembly.

Finally, molecular biology serves as the model for DNA computing: in electronic computers, data are encoded as bit sequences of 0 and 1; in DNA computers, as DNA sequences of four nucleotides (A, C, G, T). Because of the massive parallelism, in which billions of DNA strands are processed simultaneously in chemical reactions, and because of the high packing density and speed (e.g. 6 grams of DNA for 1 million tera-operations per second), highly complex problems could be tackled.
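The basic encoding idea can be sketched in a few lines: since there are four nucleotides, each one can carry two bits. The particular bit-to-base mapping below is an illustrative convention of mine, not a standard.

```python
# Sketch: encoding binary data as a DNA sequence, two bits per nucleotide.
BITS_TO_BASE = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(bits):
    """Turn an even-length bit string into a DNA sequence."""
    return ''.join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna):
    """Recover the bit string from a DNA sequence."""
    return ''.join(BASE_TO_BITS[base] for base in dna)

message = '0110001011'
dna = encode(message)
print(dna)                     # CGAGT
assert decode(dna) == message  # lossless round trip
```

An actual DNA computer would not read bases one at a time like this; it exploits the chemistry itself, letting billions of strands hybridize in parallel so that the 'answers' to a combinatorial problem bind while the rest wash away.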

We are already at the transition to artificial life. Incidentally, it was Leibniz's great vision to understand living beings as a kind of computer. "Thus every organic body of a living being," he writes in section 64 of his Monadology, "is a kind of divine machine or natural automaton, which infinitely surpasses all artificial automatons." John von Neumann's cellular automata are a mathematical specification of these ideas. (They consist of cells with a finite number of states, which can be visualized, for example, by coloring the cells of a chessboard. The state a cell is in depends on its environment and on local interaction rules. 'Growing' and 'passing away' patterns are reminiscent of organic forms.

Every computer (in the sense of a Turing machine) can be simulated by a suitable cellular automaton and vice versa. As early as the late 1950s, John von Neumann showed in a mathematical proof that cellular automata can reproduce themselves under certain conditions.)
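A famous and very compact example of such local rules is Conway's Game of Life, a later relative of von Neumann's automata; the sketch below shows the 'growing and passing away' in miniature, with a three-cell pattern that oscillates forever.

```python
# Minimal cellular automaton in the spirit of von Neumann / Conway:
# each cell's next state depends only on its neighbourhood and local rules.
def step(live, rules=({2, 3}, {3})):
    """One update of Conway's Game of Life on a set of live (x, y) cells."""
    survive, born = rules
    counts = {}
    for (x, y) in live:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    return {cell for cell, n in counts.items()
            if (n in survive and cell in live) or (n in born and cell not in live)}

# A 'blinker': three cells in a row oscillate between horizontal and vertical.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))  # [(1, 0), (1, 1), (1, 2)]
assert step(step(blinker)) == blinker
```

The same framework, with a different rule table, is all that von Neumann needed for his self-reproduction proof; the complexity lies entirely in the rules and the initial pattern, not in the machinery.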

In fact, cellular automata and genetic algorithms capture essential aspects of evolution. (It is less about simulating evolution than about applying its key mechanisms in programming. The genotype of a cellular automaton is encoded in a bit sequence that corresponds to its local rules for changing cell states. Mutation means random changes of individual bits and thus of rules. Bit sequences can be recombined in a kind of virtual genetic engineering. Selection is made according to how well the cellular automata perform at solving tasks. Genetic algorithms thus ensure that generations of cellular automata are optimized for certain tasks in a virtual evolution.)

Genetic algorithms are already used, for example, to find an optimal movement (between a start and a target position) for a robot arm. The first generation starts with random movement programs; each new generation is then tested for fitness, and the best programs are selected.
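The mechanism can be sketched in a few dozen lines. The setup below is hypothetical - a two-link planar arm with unit-length links, whose 'genes' are simply its two joint angles - but it shows the mutation-selection loop the text describes.

```python
import math, random

random.seed(0)

# Hypothetical setup: a two-link planar arm with unit-length links.
# An individual's 'genes' are its two joint angles; fitness measures how
# close the arm tip gets to the target position.
TARGET = (1.2, 0.8)

def tip(angles):
    """Forward kinematics: position of the arm tip for given joint angles."""
    a1, a2 = angles
    return (math.cos(a1) + math.cos(a1 + a2),
            math.sin(a1) + math.sin(a1 + a2))

def fitness(angles):
    x, y = tip(angles)
    return -math.hypot(x - TARGET[0], y - TARGET[1])  # closer is better

def mutate(angles, scale=0.1):
    """Random changes to the genes, the analogue of mutation."""
    return tuple(a + random.gauss(0, scale) for a in angles)

# Virtual evolution: keep the fittest individuals, refill with mutated copies.
population = [(random.uniform(0, math.pi), random.uniform(0, math.pi))
              for _ in range(30)]
for generation in range(60):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # selection
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]

best = max(population, key=fitness)
print(round(-fitness(best), 3))  # remaining distance to the target, near zero
```

Nothing in the loop 'knows' trigonometry or planning; random variation plus selection alone drives the arm toward the target, which is precisely the point of the virtual evolution described above.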

There are already toy worlds in which populations of simple machines can be assembled from given building blocks, rebuilt and optimized for certain tasks in subsequent generations. It's still a gimmick ...

In the next step, the results of brain and cognitive research will be used as a model for artificial intelligence and neurocomputing.

We are talking about artificial neural networks. As in the cortex of the human brain, several layers of firing and non-firing neurons can be stacked and networked with one another. (Learning algorithms follow the pattern of Hebb's learning rules: synaptic connections are strengthened or weakened towards a circuit pattern that has been learned and stored. When the change process is based on a given prototype, we speak of supervised learning. In unsupervised learning, the network spontaneously directs its attention to a feature or criterion and classifies accordingly, e.g. the objects of our perception.)
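Hebb's rule itself is remarkably simple and can be sketched directly; the toy network below (two inputs, one output, names of my own choosing) shows how only co-active connections grow.

```python
# A minimal sketch of Hebb's rule: connections between neurons that are
# active together are strengthened ('cells that fire together wire together').
def hebbian_update(weights, pre, post, rate=0.1):
    """Strengthen each weight by rate * presynaptic * postsynaptic activity."""
    return [[w + rate * pre[i] * post[j] for j, w in enumerate(row)]
            for i, row in enumerate(weights)]

# Two input neurons, one output neuron, all weights starting at zero.
weights = [[0.0], [0.0]]
# Present the same pattern repeatedly: input neuron 0 fires together with
# the output neuron, input neuron 1 stays silent.
for _ in range(5):
    weights = hebbian_update(weights, pre=[1.0, 0.0], post=[1.0])

print([[round(w, 2) for w in row] for row in weights])  # [[0.5], [0.0]]
```

Only the connection whose two neurons were repeatedly active together has been strengthened; the silent neuron's connection stays at zero. Real learning algorithms add decay and normalization on top of this, but the core idea is the one shown.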

Neural networks are already being used in robotics to learn movement in unfamiliar terrain. A simple stick insect organizes the movements of its six legs in a decentralized manner via feedback networks, constantly adapting them to the environmental conditions.

Not all learning algorithms in neurocomputing are modeled on the learning rules of nature; what matters is their technical usefulness. In prosthetic medicine, for example, adaptive neural network encoders are used to translate movement control signals into nerve impulses and to generate movement patterns. (Conversely, signals registered from nerve tissue are decoded by a neural network and used to control a movement prosthesis. Neural networks are strong in the pattern recognition of complex masses of data.) Typical cognitive performances are, for example, pattern recognition in perception or learning to read aloud (e.g. NETtalk).

In affective computing, neural networks are trained to recognize emotional reactions. The aim is to improve the interface and the interaction between network and user without mouse and keyboard (e.g. for the disabled). This is possible because emotions can be characterized by complex physiological patterns. The use of appropriate devices in medicine and psychiatry is obvious.

This brings us to the open questions and hypotheses of neurobiology, psychology and neuroinformatics. Can neural networks produce consciousness? In fact, there is no such thing as "the" consciousness. In brain research today, consciousness is understood as a spectrum of degrees of attention, self-referentiality, self-awareness and self-observation. We differentiate between visual, auditory, tactile and motor consciousness, meaning that we perceive ourselves during these physiological processes: we know that we are now seeing, hearing, feeling, etc. Finally, we think about ourselves and, with the storage of memories, develop self-awareness.

Simple pre-forms of self-monitoring have already been implemented in existing computer and information systems. In animals and humans, forms of consciousness of increasing complexity have formed over the course of evolution. In humans, historical, social, cultural and personal experiences are added, leading to individual self-awareness. In principle, a technical development of similar systems cannot be ruled out. However, it is an ethical question to what extent we should allow such developments.

As we have heard, evolution is concerned not so much with the individual organism as with the population. The next section is therefore:

Swarm intelligence and superorganisms

Imagine a population of simple robots which, like forklifts, can only push small objects (small tea lights, say) as long as the friction does not exceed a threshold value, and which avoid obstacles. Although the robots have not been programmed for it and cannot communicate with one another directly (much like individual insects), after a certain time they have pushed their objects together into certain ordered patterns. In robot soccer, swarm intelligence is eventually to be realized in a team through distributed artificial intelligence. The aim is for the soccer robots, like human players, to develop a common game strategy during the course of the game without any central controller - one that adapts independently to new situations. We are still a long way from that, but it can no longer be ruled out technically.
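Experiments of this kind can be sketched in simulation. The toy model below (my own simplification, with a single 'robot' on a ring of cells) uses the classic pick-up-isolated, drop-near-cluster heuristic: no map, no plan, no communication, yet order emerges.

```python
import random

random.seed(1)

# A sketch of stigmergic clustering: a simple 'robot' wanders a ring of
# cells, picks up isolated objects, and drops them next to other objects.
SIZE = 40
world = [False] * SIZE
for pos in random.sample(range(SIZE), 10):   # scatter 10 objects
    world[pos] = True

def neighbours(i):
    return world[(i - 1) % SIZE], world[(i + 1) % SIZE]

robot, carrying = 0, False
for _ in range(20000):
    robot = (robot + random.choice((-1, 1))) % SIZE
    left, right = neighbours(robot)
    if not carrying and world[robot] and not (left or right):
        world[robot], carrying = False, True   # pick up an isolated object
    elif carrying and not world[robot] and (left or right):
        world[robot], carrying = True, False   # drop next to a cluster

total = sum(world) + carrying
print(total)  # 10 - the objects are rearranged, never created or destroyed
```

Because isolated objects can be picked up but adjacent ones cannot, clumps can only grow: order appears as a side effect of two local rules, which is the essence of the swarm intelligence described above.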

The computer networks in which we humans communicate around the world are already a reality. In a technical evolution, a global communication network has emerged (the 'World Wide Web') whose nodes interconnect decentrally, like neurons in the brain, as data packets are forwarded. These analogies between brain, neural networks and World Wide Web are now being exploited specifically for technical innovations.

(In this way, learning algorithms can independently strengthen and dismantle connections frequently desired by the user, in order to endow the World Wide Web with 'synaptic plasticity'.) In 'soft computing', the adaptability, learning ability and fault tolerance of the brain become the model for evolutionary learning algorithms and fuzzy logic. The previous support for information searches on the Internet by inflexible search engines leads to hopeless information overload and disorientation.

In the future, the swarm intelligence of mobile agent populations will be indispensable in the network to cope with the flood of information. Agent populations will be bred with genetic algorithms as artificial life in a virtual evolution. (In subsequent generations they improve their fitness at finding interesting information matching the respective user's needs. First they learn user preferences from example documents; then they search the WWW for similar information; finally, they reproduce through mutation and recombination of their virtual genetic codes if they were successful problem solvers.) As with natural populations, cooperation, conflict and symbiosis among agents will arise in the wild on the Internet. With the help of game theory, a sociology of multi-agent systems ('socionics') is being investigated.

Yet swarm intelligence is by no means created only in virtual reality. We humans, evolved beings of 'flesh and blood', are not virtual agents. In the course of our technical and cultural history, we have created tools and technologies that support, strengthen and expand our abilities of perception, movement and work. The best technologies are those that recede into the background and become one with the processes they support. We follow traffic signs without being aware of the act of reading or of the traffic control systems behind them. We operate light switches, toasters, alarm clocks, radios, televisions and telephones without knowing anything about electrical engineering.

This is where ubiquitous information systems come in. Information technology only becomes 'ubiquitous' (i.e. present everywhere) when its confinement to standard computers such as PCs and notebooks is overcome and the bundled functions are relocated back into the actual applications.

Smart devices are tiny intelligent microprocessors built into alarm clocks, microwave ovens, televisions, stereos or children's toys. They can communicate telematically with each other, or with us via sensors. They do not need a computer interface with mouse and keyboard, just a user interface suited to the respective purpose, as we are used to from everyday objects. As 'information appliances' they are embedded in work and living environments. There is already talk of 'intelligent' households, offices and cars.

'Information appliances' therefore do not create a virtual reality inside the computer; they expand the possibilities of our everyday physical objects (augmented reality). In this case it is not we humans alone, nor the individual computer functions in things, that are intelligent; rather, the interaction between people and technology brings forth an intelligent system.

If this technology catches on, it will no longer be only millions of PC users who communicate virtually; billions of small and very small things in the physical world will have to be managed on the Internet. In doing so, they will acquire massive data shadows on the net. Data shadows are not just a technical problem; they raise fundamental social, legal and ethical questions. Fears of the transparent customer, patient or citizen must therefore be countered with suitable security procedures.

In sum, ubiquitous computing is an interdisciplinary task. Alongside the technical developments of computer science, microelectronics and materials science, and with the economy as the driving force, the human sciences are challenged in a special way to find information environments that correspond to the nature of man and do not violate it.

The telematically networked superorganism with its familiar technologies could prove to be a technical utopia in which we humans remain the benchmark of technology. But it could also end up as a collective ant colony whose swarm intelligence crashes in the jungle of virtual data shadows and viruses.

Evolution and Robotics - Quo vadis?

What are our prospects in these technology scenarios? Will people, with neuro- and biotechnical implants and prostheses, gradually transform into happy robots, as Marvin Minsky suspects? Computer-aided techniques are changing our relationship to our body and our environment ever more profoundly - not only through spectacular advances in genetic and neurotechnology, but also through new media such as virtual reality and intelligent computer networks, which connect to our body through ever new interfaces and link it ever more directly to technological systems. One speaks of cyborgs: hybrid beings of biological human bodies and the technical devices that complement them, human-machine systems that inseparably join the two components. Whatever the further development, the networking of the human body with external or implanted systems, the interactive representation of the body in virtual spaces, and the growing independence of virtual agents and "real" robots will lead to a new understanding of what is human and what a human body is. The body is already understood less and less as a substance and more and more as an interface with the world and with machines - something changeable that can be redesigned. The anchoring in the spatial and temporal "here" that our body provides through its sensory and motor restrictions is increasingly weakened. Even without remote robots or virtual puppets, we exceed our sensory and motor limitations through every new form of technology, through every new human-machine system. Our evolutionary wealth of experience, always dependent on our body, changes through these extensions and prostheses.

"At some point we will encounter cyborgs in every place on earth," says Prof. Cochrane. "People with artificial hearts, pacemakers, artificial eyes and ears, artificial joints, with all imaginable technical aids."

In 50 years it will be completely normal for us to have electronic or genetic components in our body. "They will enable us to live longer and happier lives. Maybe that way we can cheat aging."

We will have to come to terms with the fact that our body does not stop at our skin when we control a vehicle or even a robot at a distance, moving and acting virtually on another continent. Suddenly an eye or a hand can reach thousands of kilometers and really accomplish something without actually being there. Between preserving and perfecting the old body model and desiring a new one that adapts to the changes, we sway back and forth, knowing full well that a return to "naturalness" would only be a further step into artificiality.

Will humanity evolve into a superorganism with data networks and virtual bodies? Will we finally awaken as perfect copies on the Internet? Behind such excess fantasies, which detach themselves from technically foreseeable developments and discharge in science fiction scenarios, hide repressed fears and pseudo-religious expectations of salvation, disguised today in technical metaphors. Contrary to the colorful long-term prognoses from the USA, which in ever new bestseller editions preach to us a smart or shocking exponential faith in progress or final doom, we should reflect on our European tradition of enlightenment, in order to set the course here and now for the social development we want.

Only in this way do we have a chance to shape technological development rather than let it run loose in a quasi-natural evolution. The order of nature can by no means serve us as an unrestricted model.

For by human standards, nature as evolution works with enormous losses (from plant and animal species to human embryos): chaotically, blindly, and not successfully with every attempt. Cancer tumors and severe genetic and neural defects are examples that shock us to this day.

So if we want to develop AI, bio- and Internet technology as a service to people, in order to heal and help in the medical tradition of Hippocrates, then a value orientation is required that nature, science and technology alone cannot provide.

Which key competencies are necessary so that the virtual network worlds do not slip out of our hands in a quasi-natural evolution? In addition to professional competence, the ability to think in a networked and interdisciplinary manner is of paramount importance. For increasingly project-oriented work in business and research, this means concretely that I, as a computer scientist for example, must also understand the mindset of a media educator, business economist or lawyer in order to offer a product that fits the market and is accepted by the customer.

Networked thinking is what first makes team competence possible. (In the information and knowledge society, linguistic and communicative competences must be added, and the different cultures in the network must be met with intercultural competence. But what use is all knowledge if we have not learned to communicate it socially in a group and to implement it with the courage to make decisions and take responsibility - that is, to show decisiveness and social competence?)

However, toughness and smartness in the management-consultant sense are not enough. In the old European tradition I therefore speak of value orientation as the basis of personality development. In Greek philosophy, the connection of knowledge with value orientation is called wisdom (sophia). Should we fail at this connection, then despite all our knowledge and ability, despite all future and high-tech orientation, we would in the end be the real losers of evolution.

Philosophy is therefore more urgently needed than ever. It documents humanity's struggle for the standards of its own development. In the Greco-Roman and Judeo-Christian tradition, this development culminates in the Enlightenment's demand for the inviolability of the human being: the human being is an end in himself, or, as Article 1 of the German Basic Law puts it simply: "Human dignity is inviolable!" We would do well to hold on to this tradition, even if we no longer want to surrender to a blind evolutionary dynamic. We should use the fire of Prometheus without opening Pandora's box.

Future scenarios

By 2020 there will be computers that we may not yet carry within us, but which we carry close to us. The first computers filled an entire building; the computers of the 70s "only" a suite of rooms. The desktop, the notebook, palmtops and cell phones - the technology keeps getting smaller and handier until it disappears completely: computers become invisible. Computing merges into the material world itself.

Images will be projected directly onto our retina from our glasses and contact lenses, and sounds from mini devices will be fed directly into our ears. We will be continuously and wirelessly connected to the Internet through connections with very large bandwidths. The electronics for connecting to the Internet, the computing units and displays, will be so small that they can be embedded in our glasses and woven into our clothing; you will not be able to see them. We will be constantly connected to the Internet, and visiting a website will mean entering a virtual environment that includes at least sight and hearing. We are experiencing a very primitive version of visual virtual reality today. It is not total immersion and not high resolution, but that is yet to come. And there are experimental technologies today, which I have used, that are expensive and cumbersome but allow full immersion in visual virtual reality. It goes without saying that we have had auditory virtual reality for a hundred years - it is called the telephone. By the end of this decade, meeting virtually with the feeling of truly being together will be an everyday experience.

By 2040, with the help of nanobots - microscopic robots the size of blood cells that enter the bloodstream and travel through the capillaries of our brain - we will be able to immerse all of our sensory organs fully in a virtual reality. And that is actually a conservative scenario when you put together all the different trends I have described: electronics, miniaturization, high-performance computers, brain scanning, and so on. We will send billions of nanobots into the capillaries of the brain; they will occupy key positions in these capillaries, communicate with each other, and interact with our biological neural circuits directly and non-invasively. The underlying concept already exists today, though not yet in a sufficiently sophisticated form.

But there is already a prototype for this concept: the neuron transistor, which can communicate with neurons non-invasively. It does not need to plug a cable into the neuron. When the neuron fires, the electromagnetic impulse - the language of the brain, as it were - is registered by the neuron transistor; it reads the brain's communication signals. Conversely, the neuron transistor can make the neuron fire or keep it from firing: communication in the other direction. When nanobots take up positions next to selected nerve fibers, they can create a virtual reality from within by substituting the signals to which the brain reacts. Signals that seem to come from our real sense organs - our sense of touch, our eyes, our ears - are actually sent by the computer, which replaces them with the signals a person would receive if they were in the virtual environment calculated by all the collaborating computers. A person could walk around in that environment and take action. If he wants to move his arm in front of his face, his real arm would of course be prevented from moving while his virtual arm moves. People will have the impression of moving freely in this virtual environment.

And these will be shared environments: you can enter them with other people, see each other, touch each other, and have any kind of experience, from a sensual encounter to a business meeting. Nanobots can stimulate the senses and intensify, if not modify, our sensations. Like any other technology, this one will not arrive perfect overnight; initially there will be primitive versions. But eventually it will have as high a resolution and be as convincing as real reality, and it will have many advantages, since there will be millions of different virtual environments. Some will be emulations of real environments: you will be able to meet a friend in a virtual café in Hamburg, or go for a walk on a virtual beach. Others will be imaginary environments with no earthly counterpart. And of course, in virtual reality you do not have to be the same person: you do not have to look the same or have the same personality.

Our thinking takes place in the connections between neurons. We have a hundred trillion connections and they are all computing at the same time. It's a paradigm very different from the computers we're used to.

But because they can communicate wirelessly with one another, nanobots can also create new connections. In the future we will have a million or even a trillion times as many connections. In this way we can expand our memory and our thinking power and, in essence, create a cross between our biological intelligence and this nascent non-biological intelligence. Human intelligence will increase. And this non-biological intelligence, although built from different materials and based on different paradigms and methods than our biological thinking, is nonetheless derived from human thinking, because in many cases it is based on biologically inspired patterns.

We are already working on that today; in my research on speech recognition, for example, we use a model of the human hearing system, which is far more sophisticated than you might imagine. Detailed models have been created of the way the human hearing system processes sounds. They are based on the actual construction of the biological neurons and on real brain scans that show how these neurons are connected to one another. The brain is made up of hundreds of specialized regions.
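The passage above does not specify which auditory models are meant. One widely used, loosely cochlea-inspired front end in speech recognition is the mel-scale filterbank, whose filters mimic the ear's roughly logarithmic frequency resolution. The following numpy sketch is illustrative only; the parameter choices (16 kHz sampling, 26 filters, 512-point frames) are my assumptions, not from the text:

```python
import numpy as np

def hz_to_mel(f):
    # Mel scale: approximates the ear's logarithmic pitch perception
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sample_rate):
    # Triangular filters spaced evenly on the mel scale: narrow at low
    # frequencies, wide at high ones - a crude imitation of the
    # cochlea's frequency resolution.
    mels = np.linspace(hz_to_mel(0), hz_to_mel(sample_rate / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sample_rate).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        for j in range(lo, mid):
            fb[i, j] = (j - lo) / max(mid - lo, 1)
        for j in range(mid, hi):
            fb[i, j] = (hi - j) / max(hi - mid, 1)
    return fb

# Apply the filterbank to one frame of a synthetic 440 Hz tone
sr, n_fft = 16000, 512
t = np.arange(n_fft) / sr
frame = np.sin(2 * np.pi * 440 * t) * np.hamming(n_fft)
spectrum = np.abs(np.fft.rfft(frame)) ** 2   # power spectrum
energies = mel_filterbank(26, n_fft, sr) @ spectrum
print(energies.shape)  # (26,)
```

The energy of the 440 Hz tone lands in one of the narrow low-frequency filters, which is exactly the kind of resolution allocation the biological ear performs.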

Each region is organized very differently. The brain is basically a collection of hundreds of specialized organs that process information.

We have already worked out such models for several of these organs, and by 2030 we will have worked them out for them all. We will have very detailed scans of the whole brain and we will understand how the brain works. And we will create non-biological systems, inspired by biology, that are copies or emulations of biological systems. The main implication of this, in my opinion, is the expansion of human experience and human intelligence, so that we can have greater experiences and more complex thoughts - but in a human way, since all technology is derived from human intelligence.

Let me return for a moment to what is known as the Bill Joy question. The dangers he describes are real. In fact, he describes three nightmares. The nearest at hand lies in the biological sciences. In advanced laboratories it is already possible - and within four or five years it will be possible in any ordinary university biotech laboratory - to create designer pathogens. These are potentially more destructive than an atomic bomb, because an atomic bomb at least has relatively limited effects. This is of course a very great danger, and we already understand the biological mechanisms that make such scenarios possible.

The next scenario he describes is nanotechnology, which is very similar to the biological hazards but involves mechanisms and systems that are, strictly speaking, non-biological. Nanotechnology is the ability to create physical objects not just part by part but atom by atom - that is, as virtually exact copies. This means that any type of product could, at least in theory, be created. There are already many hybrid forms today: in the past few weeks there have been newspaper articles about small machines created by taking biological bacteria and adding a certain amount of electronics - hybrid beings between biological and non-biological systems. Other non-biological systems can potentially self-replicate. And then we face two dangers: on the one hand the fundamentally unintelligent self-replicating structures, on the other the very intelligent ones. The stupid self-replicating structures - sometimes called the "gray goo problem" - would become a kind of pathogen or cancer that is not limited to biological material. Since biological materials are actually not very strong and only function within a limited temperature range, nanotechnology is potentially more dangerous than biotechnology.

This is because nanobots can be physically stronger and more intelligent than protein-based entities.

Finally, there are the self-replicating intelligent entities, i.e. the robots. When they become much more intelligent than humans - and I mean "when", not "if" - will they be our loyal servants, or will they turn against us? Will they come to the conclusion that they have no use for people? Some observers sum up human behavior and how we have treated each other throughout history, and they do not like that thought in connection with intelligent entities. But the most complex machines we have today are still a million times simpler than humans. To us, machines are brittle, mechanical, predictable devices that have nothing human about them. But our concept of a machine will change: when a machine reaches, or even surpasses, the complexity of a human being, we will think differently. I see this as an expansion of our civilization. There will no longer be any clear difference between machine and human.

I come back to the closely related promises and dangers of these technologies. In my opinion, we must first realize that we cannot stop the advancement of technology. The only theoretical way to stop it would be to eliminate the free market economy, since we do not create these bogeymen as specific projects. They are simply the inevitable end result of other projects with very rewarding benefits. Projects to eradicate cancer, to overcome disease, to eliminate much of the human suffering that still exists because of our biological frailties are creating the very technologies that can lead to biological designer pathogens and to terrorist abuse of these technologies.

Nanotechnology is not a single, narrowly defined field; it is only part of the ubiquitous miniaturization of technology under way today. Today's miniaturization still works at scales roughly a hundred times larger than what is called nanotechnology, but if you follow the trend, within twenty-five years we will be working at scales where the key features are only a few nanometers in size. That, then, is nanotechnology.

Then the dangers described by Bill Joy will become real. However, there is no way to stop this development unless we turn off all technology.

I'll give you an example of the kind of projects going on today. Texas Instruments is creating a projector with a far higher resolution than the video-conferencing image on which you can see me now. Nobody would call that a dangerous technology. But it is one small step among the tens of thousands of steps driving technology forward. Bill Joy's own company, Sun Microsystems, is constantly making faster computers, and that, too, gradually brings us closer to dangers like the ones he described. But if Sun Microsystems stopped doing so, the company would go bankrupt. It is an economic imperative to drive progress. And all of these projects make sense because they meet market needs, i.e. human needs. Yet step by step they also create the greater dangers.

How can we deal with this new kind of problem, which did not exist a few decades ago? All of the problems Bill Joy addresses have to do with self-replicating technologies. It is self-replication that is dangerous, because it harbors the specter of something getting out of control - of something that runs amok, that carries on by itself, that cannot be undone. There already exists a kind of self-replicating, non-biological entity that is only a few decades old: the computer virus.

It is a man-made pathogen. It lives in a particular medium, that of computer networks.

When computer viruses first appeared a few decades ago, observers sounded the alarm: these first viruses were not very dangerous yet, still a bit primitive, but once they became more sophisticated they would shut down computer networks, destroy computers and cripple the Internet. How effectively did we deal with this threat? We still worry about computer viruses, and they cause billions of dollars in damage. Yet few would suggest that we get rid of networks, the Internet and the web because of them. Computer viruses are more of a nuisance: the damage they cause amounts to less than 0.1% of the benefits of the networks they live on. We coped very well - and how did we do it? First through self-regulation and ethical standards, then with a kind of technological immune system - anti-virus programs and the more sophisticated protective measures being developed today - and in certain cases with legal measures.

One might object that computer viruses do not have the deadly potential of biological viruses or of destructive nanotechnology. This actually supports my argument. The fact that computer viruses are typically not fatal to humans just means that more people are willing to create and spread them. There are hackers who spread computer viruses because they think it is just a prank; if they believed or knew that they would kill people, most of them would not do it, because they do not want to become murderers. Second, our response to computer viruses - our zeal in fighting them, the application of the law, the technological countermeasures - would be a hundred times greater if they were deadly. There is a certain appeal when Bill Joy says: let us classify this technology as dangerous and that one as not dangerous; nanotechnology has certain dangers, so let us give up nanotechnology. But that is not how technology works; it is not so easily divisible. Many projects that seem very innocuous - including the ones that I am doing and that Bill Joy is doing - drive technology forward and create such dangerous capabilities. We are approaching these dangerous capabilities at the same time as we are approaching many of the tremendous benefits of technology. Technology remains a double-edged sword.

Its power, its ability to act and be deployed in every possible area, expresses itself in a double way, with creative and destructive impulses. The only way we can deal with this power is through a combination of ethical guidelines, technological safeguards and appropriate legislation. Technology itself can strengthen all of these elements. Biotechnology, for example, is by no means unregulated; it is controlled extremely tightly.

And the nanotechnology community - for example the Foresight Institute founded by Eric Drexler - has formulated very detailed ethical guidelines modeled on those of biotechnology. The institute's basic rule, hardly known in Europe, names the most central of all dangers: all scientists working in nanotechnology should completely forego the development of physical entities that contain their own code for self-replication. There have also been many detailed proposals for technological safeguards. One is what the nanotechnologist Ralph Merkle calls the "broadcast architecture": self-replicating artifacts do not carry their own replication codes but obtain them from a centralized secure server, which protects against unwanted replication. The broadcast architecture is a way of making nanotechnology safer than biotechnology. In biological systems, when a virus or a bacterium replicates, the cell carries its entire program - its software - with it for the purpose of self-replication, so replication is an uncontrollable phenomenon. By contrast - and this is in fact part of the ethical guidelines proposed by the Foresight Institute - we can build self-replicating nanotechnological structures that do not have their own replication codes. Every time they want to replicate, they must turn to the centralized secure server for permission; only then can they download their codes and replicate. Should anything go wrong, the procedure can simply be stopped. There are thus methods that make the conditions for such uncontrollable phenomena extremely difficult to arise.
At some point it will even be feasible to create biological-nanotechnological hybrid systems, in which we replace the DNA with a nanotechnological structure that essentially carries the biological DNA code but implements this broadcast architecture: without the approval of the secure server, it would not release its codes for further self-replication.
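The control logic of the broadcast architecture can be sketched in a few lines of code. This is a toy model, not Merkle's actual proposal (which concerns molecular machines, not software); the class and method names are my own. The essential point it shows is that a replicator holds no local copy of its replication program, so denying the server request halts all replication at once:

```python
class SecureServer:
    """Central authority holding the only copy of the replication code."""
    def __init__(self):
        self.replication_enabled = True
        self.code = "BUILD-COPY-V1"  # stands in for the replication instructions

    def request_code(self):
        # Central kill switch: deny every request to stop replication
        return self.code if self.replication_enabled else None


class Replicator:
    """Carries no replication code of its own - only a server reference."""
    def __init__(self, server):
        self.server = server

    def replicate(self):
        code = self.server.request_code()
        if code is None:
            return None  # server withheld permission: no offspring
        return Replicator(self.server)


server = SecureServer()
parent = Replicator(server)
child = parent.replicate()
print(child is not None)     # True: replication permitted

server.replication_enabled = False
print(parent.replicate())    # None: replication halted centrally
```

Contrast this with a biological cell, which corresponds to a `Replicator` that stores `code` internally: once released, nothing external can revoke its ability to copy itself.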

There is no way to stop the advancement of technology. In my opinion, it is a mistake to look only at the negative side or only at the positive side. I think Bill Joy's description of the dangers is valuable as long as it provokes useful discussion and does not lead to a thoughtless, reflexively anti-technological public reaction, which would be destructive. We must also remember that there is still much suffering in the world.

I don't think the millions of cancer patients would be happy to hear that we are abolishing all cancer research because, while it could help them, it also creates technology that could be misused in the future. Nor do I think the public would react that way. The forces driving this cause are basically very positive, and the opportunity to further extend human life expectancy is ever-present. Let me mention just one more exponential trend here: human life expectancy itself. In the 18th century we increased human life expectancy by a few days every year; in the 19th century, by a few weeks every year; today we are adding about 120 days to human life expectancy every year. Because of the revolutions in genomics, proteomics, rational drug design and therapeutic cloning, many observers, myself included, expect that within ten years we will be extending human life expectancy by more than a year every year. So if you manage to be around for a few more years, we may all actually experience the closely related promises and dangers of this century. I believe we have the prospect of greatly advancing human well-being and of overcoming problems that have plagued humanity for centuries. And I also believe that we must not shy away from addressing these dangers and challenges. We need regulation, we need ethical standards, and we need to spend a lot of time working out the technological structures that will protect us from these dangers. In my opinion, this is the greatest challenge facing humanity in the 21st century. I am optimistic that we can and will meet it without major disasters - though I cannot prove that scientifically; my own optimistic nature tells me so.
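Taken at face value, the two figures in this passage (about 120 days of gain per year today, more than 365 days per year within ten years) imply a specific growth rate for the yearly gain. A back-of-the-envelope check, using only the numbers stated in the text:

```python
import math

# Figures from the text: the yearly gain in life expectancy is about
# 120 days today and is expected to exceed 365 days within ten years.
gain_now, gain_then, years = 120.0, 365.0, 10

# Implied average exponential growth rate of the yearly gain
rate = (gain_then / gain_now) ** (1 / years) - 1
print(round(rate * 100, 1))   # 11.8  (percent per year)

# Equivalent doubling time of the yearly gain
doubling = math.log(2) / math.log(1 + rate)
print(round(doubling, 1))     # 6.2  (years)
```

So the claim amounts to assuming the annual gain grows by roughly 12% per year, doubling about every six years - steep, but of the same order as the other exponential technology trends the speaker cites.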
