Read online: "Artificial Intelligence. Stages. Threats. Strategies" by Nick Bostrom

    02.07.2023

    What will happen if machines surpass humans in intelligence? Will they help us or destroy the human race? Can we afford to ignore the problem of artificial intelligence development today and feel completely safe?

    In his book, Nick Bostrom tries to make sense of the problem facing humanity given the prospect of the appearance of a superintelligence, and to analyze humanity's possible responses.

    Book details

    Date of writing: 2014
    Title: Artificial Intelligence. Stages. Threats. Strategies

    Volume: 760 pages, 69 illustrations
    ISBN: 978-5-00057-810-0
    Translator: Sergey Filin
    Copyright holder: Mann, Ivanov and Ferber

    Preface to the book "Artificial Intelligence"

    The author believes that a mortal threat is associated with the possibility of creating an artificial intelligence that surpasses the human mind. The catastrophe could break out either toward the end of the twenty-first century or within the coming decades. The whole history of mankind shows that when a member of our species, Homo sapiens, collides with any other inhabitant of our planet, the smarter one wins. Until now we have been the smartest, but we have no guarantee that this will last forever.

    Nick Bostrom writes that if smart computer algorithms learn to make even smarter algorithms on their own, and those in turn still smarter ones, there will be an explosive growth of artificial intelligence, beside which people will look roughly the way ants look beside people now (intellectually speaking, of course). A new, albeit artificial, superintelligent species will appear in the world. Whatever "occurs" to it, whether an attempt to make all people happy or a decision to stop the anthropogenic pollution of the world's oceans in the most effective way, that is, by destroying humanity, people will be unable to resist it. There is no chance of a Terminator-style confrontation, no gunfights with iron cyborgs. What awaits us is checkmate, as in a game between the Deep Blue chess computer and a first-grader.

    Over the past hundred or two hundred years, the achievements of science have inspired in some people the hope of solving all of humanity's problems, while in others they have caused, and continue to cause, unbridled fear. Both points of view, it must be said, seem quite justified. Thanks to science, terrible diseases have been defeated, humanity today can feed an unprecedented number of people, and one can travel from one point of the globe to the opposite one in less than a day. Yet by the grace of the same science, people armed with the latest military technology destroy each other with monstrous speed and efficiency.

    A similar tendency, in which the rapid development of technology not only opens up new opportunities but also creates unprecedented threats, can be observed in the field of information security. Our entire industry arose and exists solely because the creation and mass distribution of such wonderful things as computers and the Internet created problems that would have been unimaginable in the pre-computer era. The advent of information technology revolutionized human communications, and cybercriminals of every kind took advantage of that revolution. Only now is humanity gradually beginning to realize the new risks: more and more objects of the physical world are controlled by computers and software that are often imperfect, full of holes and vulnerable; an increasing number of such objects are connected to the Internet; and cyber threats are rapidly becoming matters of physical security, and potentially of life and death.

    That is why Nick Bostrom's book is so interesting. The first step toward preventing nightmarish scenarios (whether for a single computer network or for humanity as a whole) is to understand what those scenarios might consist of. Bostrom makes many reservations: the creation of an artificial intelligence comparable or superior to the human mind, an artificial intelligence capable of destroying humanity, is only a possible scenario that may never materialize. Of course, there are many options; the development of computer technology may not destroy humanity but instead give us the answer to "the ultimate question of life, the universe and everything" (perhaps it really will be the number 42, as in the novel The Hitchhiker's Guide to the Galaxy). There is hope, but the danger is very serious, Bostrom warns us. In my opinion, if the possibility of such an existential threat to humanity exists, it must be treated accordingly, and joint efforts on a global scale should be made to prevent it and protect against it.

    Introduction

    Inside our skull is a certain substance, thanks to which we can, for example, read. This substance - the human brain - is endowed with capabilities that are absent in other mammals. Actually, people owe their dominant position on the planet precisely to these characteristic features. Some animals are distinguished by the most powerful muscles and the sharpest fangs, but not a single living being, except man, is gifted with such a perfect mind. By virtue of a higher intellectual level, we have been able to create tools such as language, technology and complex social organization. Over time, our advantage only strengthened and expanded, as each new generation, relying on the achievements of its predecessors, moved forward.

    If ever an artificial intelligence is developed that surpasses the general level of development of the human mind, then a super-powerful intelligence will appear in the world. And then the fate of our species will be directly dependent on the actions of these intelligent technical systems - just as the current fate of gorillas is largely determined not by the primates themselves, but by human intentions.

    However, humanity really does have one undeniable advantage: we are the ones creating these intelligent systems. In principle, what prevents us from devising a superintelligence that takes universal human values under its protection? Of course, we have very good reasons to protect ourselves. In practical terms, we will have to deal with the extremely difficult problem of control: how to control the plans and actions of the superintelligence. And people will get only one chance to use. As soon as an unfriendly artificial intelligence (AI) is born, it will immediately begin to interfere with our efforts to get rid of it or at least correct its settings. And then the fate of mankind will be sealed.

    In my book, I try to make sense of the problem confronting people given the prospect of a superintelligence, and to analyze their possible responses. Perhaps this is the most serious and frightening challenge humanity has ever received. And regardless of whether we win or lose, it is possible that this challenge will be our last. I offer no arguments here for one version or another: whether we are on the verge of a great breakthrough in the creation of artificial intelligence, or whether the moment of a certain revolutionary event can be predicted with any accuracy. Most likely it will happen in this century, but no one is likely to name a more specific date.

    Artificial intelligence. Stages. Threats. Strategies - Nick Bostrom (download)

    (introductory fragment of the book)

    Published in Russian for the first time.

    * * *

    The following excerpt from the book Artificial Intelligence. Stages. Threats. Strategies (Nick Bostrom, 2014) is provided by our book partner, the company LitRes.

    Chapter Two

    Path to superintelligence

    Today, in terms of general intellectual development, machines are absolutely inferior to people. But one day, we assume, the mind of the machine will surpass the mind of man. What will our path be from here to there? This chapter describes several possible technological routes. First we will cover topics such as artificial intelligence, full brain emulation, human cognitive enhancement, brain-computer interfaces, and networks and organizations. Then we will evaluate these options in terms of probability, asking whether any of them can serve as a step on the ascent to superintelligence. The existence of multiple paths clearly increases the chance of eventually reaching the destination.

    Let us first define the concept of superintelligence. It is any intellect that significantly exceeds the cognitive capabilities of a human being in practically any domain(87). In the next chapter we will discuss in more detail what superintelligence is, decompose it into components, and differentiate its possible incarnations; for now let us confine ourselves to this general and superficial characterization. Notice that this description says nothing about how the superintelligence is implemented, nor about its qualia, that is, whether it will be endowed with subjective experiences and conscious awareness. In a certain sense, especially an ethical one, these questions are very important. However, for now, leaving this intellectual metaphysics aside(88), we will attend to two questions: the prerequisites for the emergence of superintelligence, and the consequences of this phenomenon.

    By our definition, the chess program Deep Fritz is not a superintelligence, since it is "strong" only in one very narrow area: playing chess. Nevertheless, it is important that a superintelligence may have its own domain specializations. Therefore, whenever superintelligent behavior limited to a particular subject area is at issue, I will state its specific field of activity separately. For example, an artificial intelligence that significantly exceeds human mental abilities in programming and design will be called an engineering superintelligence. But for systems that exceed the general level of human intelligence overall, unless otherwise indicated, the term superintelligence remains in use.

    How might we reach the point where a superintelligence can appear? Which path will we take? Let's look at some possible options.

    Artificial intelligence

    Dear reader, do not expect this chapter to explain how to create a universal, or strong, artificial intelligence. No blueprint for programming one exists. And even if I were the happy owner of such a plan, I certainly would not publish it in my book. (If the reasons for this are not yet obvious, I hope to make my position unambiguous in the following chapters.)

    However, even today it is possible to identify some mandatory characteristics of such an intelligent system. It is quite obvious that the ability to learn should be built into the design as an integral property of the system's core, not bolted on later as an afterthought in the form of an extension. The same goes for the ability to deal effectively with uncertain and probabilistic information. Most likely, among the main modules of a modern AI there should be means of extracting useful information from the data of external and internal sensors and converting the resulting concepts into flexible combinatorial representations for further use in thought processes based on logic and intuition.

    The first classical artificial intelligence systems were, for the most part, not aimed at learning, at working under uncertainty, or at concept formation, probably because the corresponding methods of analysis were insufficiently developed at the time. This is not to say that the underlying ideas of AI are fundamentally new. For example, the idea of using learning as a means of developing a simple system and bringing it to the human level was expressed by Alan Turing back in 1950 in his article "Computing Machinery and Intelligence", where he outlined his concept of a "child machine":

    Why don't we try to create a program that imitates the mind of an adult, instead of trying to create a program that imitates the mind of a child? After all, if the mind of a child receives an appropriate education, it becomes the mind of an adult (89) .

    Turing foresaw that creating a "child machine" would require an iterative process:

    It is unlikely that we will get a good "child machine" on the first try. We must conduct an experiment in teaching one such machine and find out how well it learns. Then carry out the same experiment with another machine and determine which of the two is better. There is an obvious connection between this process and evolution in living nature...

    Nevertheless, one can hope that this process will proceed faster than evolution. Survival of the fittest is too slow a method for measuring advantages. The experimenter, by applying the power of intellect, can speed up the evaluation process. Equally important, he is not limited to random mutations alone. If the experimenter can trace the cause of some deficiency, he is likely to be able to invent the kind of mutation that leads to the necessary improvement(90).

    We know that blind evolutionary processes can produce human-level general intelligence; it has already happened at least once. By foresightedly directing evolutionary processes, that is, by genetic programming in which the algorithms are designed and guided by an intelligent human programmer, we should be able to achieve similar results with far greater efficiency. Many scientists rely on this assumption, including the philosopher David Chalmers and the researcher Hans Moravec(91), who argue that human-level machine intelligence is not only theoretically possible but practically feasible within the twenty-first century. In their view, when we compare the relative capabilities of evolution and of human engineering in creating intelligence, we find that engineering greatly exceeds evolution in many areas and will most likely soon overtake it in the rest. Thus, since natural intelligence once emerged through evolutionary processes, human design and engineering efforts may soon lead us to artificial intelligence. For example, Moravec wrote back in 1976:

    The existence of a few examples of intelligence that has emerged under these kinds of constraints should give us confidence that we will be able to achieve the same very soon. The situation is analogous to the history of the creation of machines that can fly, although they are heavier than air: birds, bats and insects clearly demonstrated this ability long before man made flying machines (92).

    However, one should be careful with conclusions drawn from such a chain of reasoning. Of course, there is no doubt that heavier-than-air flight by non-human living beings arose through evolution long before people achieved it, though people achieved it by mechanical means. Other examples can be recalled in support: sonar systems; magnetometric navigation systems; chemical means of warfare; photosensors; and other devices with mechanical and kinetic performance characteristics. Yet with the same success one could list areas in which the effectiveness of human efforts still falls far short of evolution's: morphogenesis; self-repair mechanisms; immune defense. Thus, Moravec's argument does not "give us confidence" that human-level machine intelligence will be created "very soon". At best, the evolution of intelligent life on Earth sets an upper bound on the difficulty of creating intelligence, but that level remains beyond the reach of humanity's current technological capabilities.

    Another argument for developing artificial intelligence on the model of the evolutionary process is that genetic algorithms could be run on sufficiently powerful processors to achieve results commensurate with those of biological evolution. This version of the argument, then, proposes evolving AI by a particular method.
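    To make the mechanism concrete, here is a minimal genetic algorithm sketch. The toy "OneMax" fitness function (maximize the number of 1-bits), the population size, and the mutation rate are illustrative choices of ours, not anything from the book; real attempts to evolve intelligence would need vastly richer genomes and fitness evaluations, which is exactly what Box 3 below tries to cost out.

```python
import random

def fitness(genome):
    """OneMax toy fitness: count the 1-bits."""
    return sum(genome)

def mutate(genome, rate=0.02):
    """Flip each bit independently with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(length=50, pop_size=40, generations=60, seed=0):
    """Run a small elitist genetic algorithm and return the best fitness."""
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                # elitism: keep the parents
    return max(fitness(g) for g in pop)
```

    Even this trivial search reliably climbs toward the optimum; the hard question, addressed next, is how expensive the fitness evaluations become when the target is intelligence rather than a bit string.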

    How plausible is the claim that we will fairly soon have enough computing power to reproduce the evolutionary processes by which human intellect was formed? The answer depends on two conditions: first, whether computer technology makes significant progress over the next decades; second, how much computing power would be required for genetic algorithms to mimic the natural selection that led to the appearance of man. It must be said that the conclusions reached along this chain of reasoning are extremely uncertain; despite that discouraging fact, it still seems worthwhile to attempt at least a rough estimate (see Box 3). In the absence of other possibilities, even approximate calculations will draw attention to some curious unknown quantities.

    The bottom line is that the computing power needed merely to reproduce the evolutionary processes that led to the emergence of human intelligence is practically unattainable and will remain so for a long time, even if Moore's law holds for another century (see Fig. 3 below). However, there is a perfectly acceptable way out: we could gain enormously in efficiency if, instead of straightforwardly repeating natural evolution, we designed a search process aimed specifically at creating intelligence, exploiting our many obvious advantages over natural selection. Of course, it is very difficult to quantify the resulting gain in efficiency; we do not even know how many orders of magnitude it amounts to, five or twenty-five. Therefore, unless the evolutionary-model argument is developed further, we cannot know how difficult the road to human-level artificial intelligence really is, or how long we must wait for its appearance.

    Box 3. Assessing Evolutionary Replication Efforts

    Not every achievement of anthropogenesis related to the human mind is of value to modern specialists working on the evolutionary development of artificial intelligence. Only a small part of what natural selection accomplished on Earth is relevant. For example, problems that we can simply ignore absorbed a large share of evolution's effort. In particular, since we can power our computers with electricity, we do not need to reinvent the molecules of the cell's energy economy in order to create intelligent machines; yet the molecular evolution of the metabolic machinery may well have consumed a significant portion of the total selective power available to evolution over the history of the Earth(93).

    There is a view that the key to creating AI lies in the structure of the nervous system, which appeared less than a billion years ago(94). If we accept this position, the number of "experiments" required of evolution shrinks dramatically. There are approximately (4-6) × 10^30 prokaryotes in the world today, but only 10^19 insects and fewer than 10^10 representatives of the human race (and the population on the eve of the Neolithic revolution was orders of magnitude smaller)(95). These figures are not so frightening.

    However, evolutionary algorithms require not only variation but also a fitness evaluation of each variant, usually the most computationally expensive component. In the case of the evolution of artificial intelligence, evaluating fitness would seem to require modeling neural development, learning, and cognition. Therefore, rather than looking at the total number of organisms with complex nervous systems, it is better to estimate the number of neurons in the biological organisms we might need to model in order to compute evolution's objective function. A rough estimate can be made by looking at insects, which dominate terrestrial animal biomass (ants alone account for 15-20%)(96). Insect brain size depends on many factors: the larger and more social an insect (that is, the more social its way of life), the larger its brain. A bee has a little under 10^6 neurons, a fruit fly has 10^5, and an ant, with its 250,000 neurons, sits between them(97). The brains of most smaller insects contain only a few thousand neurons. I propose, with extreme caution, to settle on the Drosophila-like average of 10^5 and equate all insects (of which there are 10^19 in the world) to fruit flies; the total number of their neurons then comes to 10^24. Add another order of magnitude to account for crustaceans, birds, reptiles, mammals and so on, and we get 10^25. (Compare: before the advent of agriculture there were fewer than 10^7 people on the planet, each with about 10^11 neurons, so the sum of all human neurons was less than 10^18, although the human brain contained, and contains, far more synapses.)
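    The tally above is simple exponent arithmetic; the sketch below just restates the figures quoted in the text (10^19 insects at a Drosophila-scale 10^5 neurons each, plus one extra order of magnitude for other animal groups), using exact integers so nothing is lost to rounding:

```python
# Figures quoted in Box 3 of the text; Python's arbitrary-precision
# integers keep the exponent arithmetic exact.
insect_count = 10**19          # estimated insects alive in the world
neurons_per_insect = 10**5     # Drosophila-scale average
insect_neurons = insect_count * neurons_per_insect   # 10^24

# One extra order of magnitude for crustaceans, birds, reptiles,
# mammals and so on.
total_neurons = insect_neurons * 10                  # 10^25

# Pre-agricultural humanity, for comparison: under 10^7 people,
# about 10^11 neurons each.
human_neurons = 10**7 * 10**11                       # under 10^18
```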

    The computational cost of modeling one neuron depends on the required level of detail. An extremely simple real-time neuron model requires about 1,000 floating-point operations per second (FLOPS); an electrically and physiologically realistic Hodgkin-Huxley model requires 1,200,000 FLOPS. A more complex multicompartment model of a neuron would add two to three orders of magnitude, while a higher-level model operating on systems of neurons requires two to three orders of magnitude fewer operations per neuron than the simple model(98). If we need to simulate 10^25 neurons over a billion years of evolution (longer than nervous systems in their current form have existed) and we let computers work on the task for one year, the computing power required falls in the range of 10^31-10^44 FLOPS. For comparison, the most powerful computer in the world, the Chinese Tianhe-2 (as of September 2013), delivers only 3.39 × 10^16 FLOPS. In recent decades, conventional computers have gained about an order of magnitude in performance every 6.7 years. Even if computing power grew according to Moore's law for a whole century, it would not be enough to bridge the gap. Using more specialized computing systems or increasing computation time could reduce the power requirement by only a few orders of magnitude.
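    A quick back-of-the-envelope check, using only the figures quoted above (Tianhe-2 at 3.39 × 10^16 FLOPS, one order of magnitude of growth every 6.7 years), shows why a century of such growth barely reaches the optimistic end of the 10^31-10^44 FLOPS range and falls more than twelve orders of magnitude short of its pessimistic end:

```python
import math

current_flops = 3.39e16        # Tianhe-2, September 2013
years_per_order = 6.7          # historical growth rate quoted above

# Orders of magnitude gained over a century of such growth.
orders_gained = 100 / years_per_order          # roughly 15
future_flops = current_flops * 10**orders_gained

# Shortfall against the pessimistic end of the 10^31-10^44 range.
orders_short_of_upper = math.log10(1e44 / future_flops)
```

    The result (on the order of 10^31 FLOPS after a century) illustrates the text's point: straightforward extrapolation just touches the lower bound of the estimate while remaining hopelessly far from the upper one.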

    It is likely that eliminating this kind of inefficiency would save several orders of magnitude of the 10^31-10^44 FLOPS requirement calculated earlier. Unfortunately, it is hard to say exactly how many. It is difficult to give even a rough estimate; one can only guess whether it would be five orders of magnitude, ten, or twenty-five(101).

    Fig. 3. Performance of the world's fastest supercomputers. Strictly speaking, "Moore's law" is the observation that the number of transistors on an integrated circuit chip doubles approximately every two years. However, the law is often generalized to claim that other measures of computer performance also grow exponentially. Our graph shows the peak speed of the world's most powerful computers over time (on a logarithmic vertical scale). In recent years the speed of sequential computing has stopped growing, but thanks to the spread of parallel computing the total number of operations continues to increase at the former pace(102).


    There is another complication with the evolutionary considerations advanced in this last argument. The problem is that we cannot compute, even very roughly, an upper bound on the difficulty of obtaining intelligence by evolutionary means. Yes, intelligent life once appeared on Earth, but it does not follow from this fact that evolutionary processes lead to intelligence with high probability. Such a conclusion would be fundamentally mistaken, because it ignores the so-called observation selection effect: whatever the probability of intelligent life emerging on any given planet, all observers necessarily find themselves on a planet where it did emerge. Suppose that, beyond the systematic pressure of natural selection, the emergence of intelligent life required an enormous amount of luck, so much that intelligent life appeared on only one in 10^30 planets on which simple replicator genes exist. In that case, researchers running genetic algorithms in an attempt to reproduce what evolution created might face on the order of 10^30 iterations before finding a combination in which all the elements come together correctly, and this would still be entirely consistent with our observation that life originated and developed here on Earth. This epistemological barrier can be partly circumvented by careful and somewhat cumbersome reasoning: by analyzing cases of convergent evolution of traits related to intelligence, and by taking the observation selection effect into account. Unless scientists undertake such an analysis, no one can say by how much the estimated upper bound on the computing power required to reproduce the evolution of intelligence (see Box 3) might understate the true difficulty, perhaps by thirty orders of magnitude, or by some other equally large value(103).

    Let us move on to the next route to our goal. A further argument for the feasibility of artificial intelligence points to the working human brain, which can be taken as a basic model for AI. Versions of this approach differ in how closely they propose to imitate the biological brain. At one pole, a kind of "imitation game", lies the concept of full brain emulation, a full-scale simulation of the brain (we will return to it a little later). At the other pole are technologies for which the brain's functionality is only a starting point and no low-level modeling is planned. Ultimately we will come closer to understanding the general idea of the brain, helped by advances in neuroscience and cognitive psychology and by the steady improvement of tools and hardware, and this new knowledge will guide further work on AI. We already know one example of AI that emerged from modeling the brain: neural networks. Another idea borrowed from neuroscience and transferred to machine learning is the hierarchical organization of perception. The study of reinforcement learning was motivated (at least in part) by the important role this topic plays in psychological theories of animal behavior and learning, and reinforcement learning techniques (for example, the TD algorithm) are now widely used in AI systems(104). There will certainly be more such examples. Since the set of basic mechanisms underlying brain function is quite limited, there are in fact very few of them, all of these mechanisms will sooner or later be discovered through steady advances in neuroscience. It is possible, however, that a hybrid approach will get there even earlier, combining models built on the workings of the human brain with models built purely on artificial intelligence technologies. The resulting system need not resemble the brain in every respect, even if certain principles of the brain's operation are used in creating it.
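    As an illustration of the TD algorithm mentioned above, here is tabular TD(0) value learning on a deliberately trivial environment: a five-state chain with a single reward at the end. The chain itself, the learning rate, and the discount factor are our own illustrative choices, not an example from the book.

```python
def td0_chain(n_states=5, episodes=200, alpha=0.1, gamma=0.9):
    """Learn state values V(s) on a chain 0 -> 1 -> ... -> n-1,
    with reward 1.0 for entering the final state."""
    V = [0.0] * n_states
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            s_next = s + 1
            reward = 1.0 if s_next == n_states - 1 else 0.0
            # TD(0) update: nudge V(s) toward the bootstrapped
            # target reward + gamma * V(s_next).
            V[s] += alpha * (reward + gamma * V[s_next] - V[s])
            s = s_next
    return V
```

    After enough episodes the learned values approach the discounted returns counted backward from the reward (1, 0.9, 0.81, ...), so states closer to the reward are valued higher; this reward-prediction structure is precisely what connects the TD algorithm to psychological theories of animal learning.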

    The working human brain, taken as a basic model, is a strong argument for the feasibility of creating and developing artificial intelligence. But not even the strongest argument brings us closer to knowing the dates, since it is hard to predict when any particular discovery in neuroscience will occur. We can say only one thing: the deeper into the future we look, the more likely it is that the secrets of the brain's functioning will be uncovered fully enough to embody them in artificial intelligence systems.

    Researchers in artificial intelligence hold differing views on how promising the neuromorphic approach is compared with fully synthetic approaches. The flight of birds demonstrated the physical possibility of heavier-than-air flying mechanisms and eventually led to the construction of aircraft; yet even the first airplanes to take to the air did not flap their wings. Which way will the development of artificial intelligence go? Will it follow the example of aerodynamics, which keeps heavy metal machines in the air by learning from living nature without directly imitating it, or the example of the internal combustion engine, by directly copying the workings of natural mechanisms? The question remains open.

    Turing's concept of a program that acquires most of its knowledge through learning, rather than having it specified in advance, is applicable to the creation of artificial intelligence under both the neuromorphic and the synthetic approach.

    A variation on Turing's "child machine" concept is the idea of a seed AI(105). Whereas the "child machine", as Turing imagined it, was to have a relatively fixed architecture and develop its potential by accumulating content, a seed AI would be a more complex system, capable of improving its own architecture. In the early stages of its existence, a seed AI would develop mainly by collecting information, acting by trial and error and with the help of a programmer. On "growing up", it would have to learn to understand the principles of its own workings, in order to design new algorithms and computational structures that increase its cognitive efficiency. The required understanding is possible only when the seed AI has either reached a fairly high general level of intellectual development across many areas, or crossed a certain intellectual threshold in particular subject areas, say, computer science and mathematics.

    This brings us to another important concept, "recursive self-improvement". A successful seed AI must be capable of continuous self-development: the first version creates an improved version of itself that is much smarter than the original; the improved version in turn works on a still better version; and so on(106). Under certain conditions the process of recursive self-improvement can continue long enough to end in the explosive development of artificial intelligence: an event in which, over a short period of time, the system's overall intelligence grows from a relatively modest level (perhaps below the human level in most respects, except for programming and AI research) to a superintelligent level radically superior to the human one. In the fourth chapter we will return to this highly significant prospect and analyze the dynamics of such events in more detail.
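    The dynamics described here can be caricatured with a toy model in which each redesign cycle multiplies capability by a factor that itself grows with the current capability level. The numbers below are purely illustrative and carry no predictive weight; the sketch only shows the qualitative shape of self-reinforcing growth.

```python
def improvement_trajectory(initial=1.0, feedback=0.1, steps=10):
    """Toy recursive self-improvement: the size of each improvement
    step grows with the system's current capability level."""
    level = initial
    history = [level]
    for _ in range(steps):
        level *= 1.0 + feedback * level   # smarter system, bigger gain
        history.append(level)
    return history
```

    Early steps look almost linear; once the feedback term dominates, growth accelerates sharply, which is the qualitative point of the explosive-development scenario.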

    Note that this development model implies the possibility of surprises. Attempts to create a general artificial intelligence may fail completely, time after time, right up until the last missing critical component is supplied, after which the seed AI becomes capable of sustained recursive self-improvement.

    Before ending this section of the chapter, I would like to emphasize one more point: it is by no means necessary that an artificial intelligence resemble the human mind. I fully expect that AI will be utterly "alien"; most likely that is exactly what will happen. The cognitive architecture of an AI can be expected to differ greatly from the human cognitive system; in its early stages, for example, it will have very different strengths and weaknesses (although, as we shall see, an AI would be able to overcome its initial shortcomings). Above all, the goal system of an AI may have nothing in common with the goal system of humanity. There is no reason to expect a generic AI to be guided by human feelings such as love, hate or pride: such a complex adaptation would require a huge amount of expensive work, and, moreover, the appearance of such a capacity in an AI should be treated with great caution. This is both a big problem and a big opportunity. We will return to AI motivation in later chapters, but the idea is so important for this book that it is worth keeping in mind at all times.

    Full emulation of the human brain

    In full-scale brain simulation, which we will call "whole brain emulation" or "mind uploading", artificial intelligence is created by scanning and accurately reproducing the computational structure of a biological brain. Here one draws one's inspiration entirely from nature: an extreme case of outright plagiarism. For whole brain emulation to succeed, a number of specific steps are required.

    First stage. A sufficiently detailed scan of a human brain is made. This may involve stabilizing the brain of a deceased person by vitrification (a process that leaves the tissue hard as glass). Thin sections are then cut from the tissue with one machine and fed through another for scanning, perhaps using electron microscopes. At this stage the material is stained with special dyes to reveal its structural and chemical properties. Many scanning devices operate in parallel, simultaneously processing different tissue sections.

    Second stage. The raw data from the scanners is fed to a computer for automatic image processing, which reconstructs the three-dimensional neural network responsible for cognition in the biological brain. To reduce the number of high-resolution images that must be held in the buffer, this stage can be performed concurrently with the first. The resulting map is then combined with a library of neurocomputational models of different types of neurons or of different neuronal elements (synapses, for example, can differ). Some results of scanning and image processing achieved with current technology are shown in fig. 4.
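    The two stages form a pipeline, and the text notes that image processing can run concurrently with scanning precisely to limit how many high-resolution images must be buffered. A minimal producer/consumer sketch of that overlap follows; all names and data here are invented stand-ins, and a real emulation pipeline would of course be vastly more complex.

```python
import queue
import threading

def scanner(sections, buf):
    """Stage 1: scan tissue sections and hand each raw image to stage 2.
    The bounded queue makes scanning block whenever the buffer fills up,
    which is why concurrent processing reduces storage requirements."""
    for section in sections:
        buf.put(f"raw-image-of-{section}")  # stand-in for an electron-microscope scan
    buf.put(None)                           # sentinel: scanning is finished

def processor(buf, reconstructed):
    """Stage 2: consume raw images as they arrive, adding each section's
    reconstructed geometry to the growing three-dimensional map."""
    while (img := buf.get()) is not None:
        reconstructed.append(img.replace("raw-image", "3d-map"))

buf = queue.Queue(maxsize=4)  # only a few high-resolution images held at once
result = []
t1 = threading.Thread(target=scanner, args=(range(100), buf))
t2 = threading.Thread(target=processor, args=(buf, result))
t1.start(); t2.start()
t1.join(); t2.join()
```

    The bounded queue is the key design choice: with a buffer of four, the scanner can never get more than four images ahead of the reconstruction step, no matter how many sections are processed in total.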


    Nick Bostrom

    Superintelligence

    Paths, Dangers, Strategies

    Scientific editors M. S. Burtsev, E. D. Kazimirova, A. B. Lavrentiev

    Published with permission from Alexander Korzhenevski Agency

    Legal support of the publishing house is provided by the law firm "Vegas-Lex"

    This book was originally published in English in 2014. This translation is published by arrangement with Oxford University Press. The Publisher is solely responsible for this translation from the original work and Oxford University Press shall have no liability for any errors, omissions or inaccuracies or ambiguities in such translation or for any losses caused by reliance thereon.

    © Nick Bostrom, 2014

    © Translation into Russian, edition in Russian, design. LLC "Mann, Ivanov and Ferber", 2016

    * * *

    This book is well complemented by:

    Game Theory

    Avinash Dixit and Barry Nalebuff

    Brainiac

    Ken Jennings

    The Joy of x

    Steven Strogatz

    Partner's Preface

    "I have a friend," said Edik. "He claims that man is an intermediate link which nature needed in order to create the crown of creation: a glass of cognac with a slice of lemon."

    Arkady and Boris Strugatsky. Monday Begins on Saturday

    The author believes that a mortal threat is associated with the possibility of creating an artificial intelligence that surpasses the human mind. A catastrophe could break out either by the end of the twenty-first century or in the coming decades. The whole history of mankind shows that when a representative of our species, Homo sapiens, collides with any other inhabitant of our planet, the smarter one wins. Until now we have been the smartest, but we have no guarantee that this will last forever.

    Nick Bostrom writes that if smart computer algorithms learn to make even smarter algorithms on their own, and those in turn still smarter ones, there will be an explosive growth of artificial intelligence, next to which people will look roughly the way ants look next to people now, intellectually speaking, of course. A new, albeit artificial, superintelligent species will appear in the world. Whatever "occurs" to it, whether an attempt to make all people happy or a decision to stop the anthropogenic pollution of the world's oceans in the most effective way, that is, by destroying humanity, people will be unable to resist it. There will be no chance of a confrontation in the style of the Terminator films, no gunfights with iron cyborgs. What awaits us is check and mate, as in a game between the chess computer Deep Blue and a first-grader.

    Over the past hundred or two hundred years, the achievements of science have inspired in some people the hope of solving all of humanity's problems, while in others they have caused, and continue to cause, unbridled fear. At the same time, it must be said that both points of view seem quite justified. Thanks to science, terrible diseases have been defeated, mankind today is able to feed an unprecedented number of people, and one can travel from one point of the globe to the opposite one in less than a day. However, by the grace of the same science, people, using the latest military technology, destroy one another with monstrous speed and efficiency.

    We see a similar trend, in which the rapid development of technology not only creates new opportunities but also poses unprecedented threats, in the field of information security. Our entire industry arose and exists solely because the creation and mass distribution of such wonderful things as computers and the Internet created problems that would have been unimaginable in the pre-computer era. The advent of information technology brought about a revolution in human communications, and, among others, it has been exploited by cybercriminals of every kind. And only now is humanity gradually beginning to grasp the new risks: more and more objects in the physical world are controlled by computers and software, often imperfect, full of holes and vulnerable; more and more such objects are connected to the Internet; and cyber threats are rapidly becoming matters of physical security, and potentially of life and death.

    That is why Nick Bostrom's book seems so interesting. The first step toward preventing nightmare scenarios (whether for a single computer network or for humanity as a whole) is to understand what they might consist of. Bostrom makes many reservations that the creation of an artificial intelligence comparable to or superior to the human mind, an artificial intelligence capable of destroying humanity, is only a possible scenario that may never materialize. Of course, there are many possibilities, and the development of computer technology may not destroy humanity but instead give us the answer to "the ultimate question of life, the universe and everything" (perhaps it really will be the number 42, as in The Hitchhiker's Guide to the Galaxy). There is hope, but the danger is very serious, Bostrom warns us. In my opinion, if the possibility of such an existential threat to humanity exists, then it must be treated accordingly, and joint efforts should be made on a global scale to prevent it and protect against it.

    I would like to end my introduction with a quote from Mikhail Weller's book "Man in the System":

    When fantasy, that is, human thought framed in images and plots, repeats something for a long time and in detail - well, there is no smoke without fire. Banal Hollywood action films about the wars of people with the civilization of robots carry a bitter grain of truth under the husk of a commercial looking.

    When a transmissible program of instincts is built into robots, and the satisfaction of those instincts is built in as an unconditional and basic need, and this extends to the level of self-reproduction, then, guys, you can stop fighting smoking and alcohol, because it will be high time for all of us to drink and smoke, since we will all be done for.

    Eugene Kaspersky, CEO of Kaspersky Lab

    The Unfinished Fable of the Sparrows

    One day, in the midst of nesting, the sparrows, tired of many days of hard work, sat down to rest at sunset and chirp about this and that.

    "We are so small, so weak! Imagine how much easier life would be if we kept an owl as a helper," one sparrow chirped dreamily. "It could build nests for us…"

    "Yes!" agreed another. "And look after our elders and our chicks…"

    "And advise us and protect us from the neighbor's cat," added a third.

    Then Pastus, the eldest sparrow, suggested:

    - Let scouts fly off in different directions in search of an owlet fallen from its nest. Though an owl egg, a crow chick, or even a baby weasel would also do. Such a find would be the greatest stroke of luck for our flock, greater than the one when we found that backyard with a never-ending supply of grain!

    The sparrows, now thoroughly excited, chirped with all their might.

    And only the one-eyed Skronfinkle, a caustic sparrow with a heavy disposition, seemed to doubt the expediency of this enterprise.

    "We have chosen a disastrous path," he said with conviction. "Shouldn't we first seriously consider the questions of taming and domesticating owls before letting such a dangerous creature into our midst?"

    "It seems to me," Pastus answered him, "that the art of taming owls is no easy task either. Finding an owl egg will be hard enough as it is. So let us start with the search. If we manage to raise an owlet, then we can think about the problems of upbringing."

    "A wretched plan!" Skronfinkle chirped nervously.

    But no one listened to him. At the direction of Pastus, a flock of sparrows rose into the air and set off.

    Only a few sparrows stayed behind, resolving to work out how owls might be tamed. Pretty soon they realized Pastus had been right: the task was incredibly difficult, especially in the absence of an actual owl to practice on. Nevertheless the birds diligently continued to study the problem, for they feared that the flock would return with an owl egg before they had discovered the secret of controlling an owl's behavior.



    getAbstract review

    According to Oxford futurist Nick Bostrom, artificial intelligence could become a tool for ensuring security, economic prosperity and intellectual development, but humanity may prove unable to realize its full potential. Step by step, Bostrom examines the inherent flaws in human thinking about AI, and the impression emerges that humans lack the resources and imagination to make sense of the transition from a human-dominated world to one threatened, or already enslaved, by a superintelligent entity. Bostrom masterfully describes how such a superintelligence could arise, how it would evolve into an all-powerful "singleton", and what a threat it would pose. What will happen, he asks, if this AI develops to the point of forming a one-world government that is not guided by traditional ethical principles? Informative and full of references to a wide range of sources, the book encourages the reader to reflect on many unknowns. The author's reasoning rests on the analysis of a whole spectrum of possible future scenarios. This in-depth study is intended primarily for readers with a special interest in the topic. getAbstract recommends it to politicians, futurists, students, investors, philosophers and everyone who thinks about high technology.

    From the summary of the book you will learn:

    • How the technology known as "artificial intelligence" is evolving;
    • What scientists propose in order to use and control AI;
    • Why humanity is not ready to deal with AI.

    About the author

    Nick Bostrom is a professor at Oxford University and founding director of the Future of Humanity Institute.

    Prospects for the emergence of superintelligence

    In the summer of 1956, a group of scientists gathered at Dartmouth College to study the prospects for the development of machines. They were primarily interested in whether machines could replicate the functions of human intelligence. Research on the topic has continued ever since with varying degrees of success. The 1980s saw the rise of rule-based programs, or "expert systems", and it seemed that the technologies needed to build artificial intelligence were about to flourish. Then progress stalled and funding dried up. AI efforts received a new boost in the 1990s with the advent of "genetic algorithms" and "neural networks".

    One measure of AI progress is how well specially designed computers play games such as chess, bridge, Scrabble, Go, and trivia quizzes. According to one prediction, in about ten years a computer with improved algorithms will be able to defeat the world champion at Go. Beyond games, similar technologies are used in hearing aids, face and speech recognition, navigation, diagnostics, planning and logistics, as well as in industrial robots, the functionality of which...



    Information from the publisher

    Published in Russian for the first time


    Bostrom, Nick

    Artificial Intelligence. Stages. Threats. Strategies / Nick Bostrom; translated from English by S. Filin. - Moscow: Mann, Ivanov and Ferber, 2016.

    ISBN 978-5-00057-810-0

    What will happen if machines surpass humans in intelligence? Will they help us or destroy the human race? Can we today ignore the problem of the development of artificial intelligence and feel completely safe?

    In his book, Nick Bostrom tries to make sense of the problem facing humanity in connection with the prospect of the appearance of a superintelligence, and to analyze possible responses.

    All rights reserved.

    No part of this book may be reproduced in any form without the written permission of the copyright holders.




    Introduction

    Inside our skull is a certain substance, thanks to which we can, for example, read. This substance - the human brain - is endowed with capabilities that are absent in other mammals. Actually, people owe their dominant position on the planet precisely to these characteristic features. Some animals are distinguished by the most powerful muscles and the sharpest fangs, but not a single living being, except man, is gifted with such a perfect mind. By virtue of a higher intellectual level, we have been able to create tools such as language, technology and complex social organization. Over time, our advantage only strengthened and expanded, as each new generation, relying on the achievements of its predecessors, moved forward.

    If ever an artificial intelligence is developed that surpasses the general level of development of the human mind, then a super-powerful intelligence will appear in the world. And then the fate of our species will be directly dependent on the actions of these intelligent technical systems - just as the current fate of gorillas is largely determined not by the primates themselves, but by human intentions.

    However, humanity does have one undeniable advantage: it is humanity that builds these intelligent technical systems. In principle, what prevents us from designing a superintelligence that would take universal human values under its protection? Of course, we have very good reasons to try. In practical terms, we will have to solve a most difficult problem of control: how to govern the plans and actions of a superintelligence. And people will get only one chance. Once an unfriendly artificial intelligence (AI) comes into being, it will immediately begin to hinder our efforts to get rid of it, or at least to adjust its settings. If that happens, the fate of mankind will be sealed.

    In this book, I try to make sense of the problem that the prospect of superintelligence poses for humanity, and to analyze our possible responses. Perhaps the most serious and frightening challenge mankind has ever received awaits us. And whether we win or lose, it may well be the last challenge we ever face. I offer no arguments here for one view or another on whether we stand on the verge of a great breakthrough in the creation of artificial intelligence, or whether the timing of such a revolutionary event can be predicted with any accuracy. Most likely it will happen in this century, but it is unlikely that anyone can name a more specific date.

    In the first two chapters, I will survey different scientific fields and briefly touch on the pace of economic development. However, the main focus of the book is what will happen after superintelligence appears. We will discuss the following questions: the dynamics of an intelligence explosion; the forms and capabilities of a superintelligence; the strategic options it would have at its disposal, through which it could gain a decisive advantage. After that, we will analyze the problem of control and try to answer the most important question: can we engineer initial conditions that would allow us to preserve our own dominance and ultimately survive? In the final chapters, we will step back from the particulars and look at the problem more broadly, so as to take in the whole situation that emerges from our study. I will offer some recommendations on what should be done today to avert a future catastrophe that threatens the existence of mankind.

    Writing this book was not easy. I hope that the path I have traveled will benefit other researchers, who will reach the new frontiers without unnecessary obstacles and, full of energy, will be able to get to work quickly, fully aware of the complexity of the problem facing them. (If the road nevertheless seems to future analysts somewhat winding and pitted in places, I hope they will appreciate just how impassable the landscape was before.)

    Despite the difficulties of the work, I have tried to present the material in accessible language; now, however, I can see that I did not entirely succeed. Naturally, while writing I had a potential reader in mind, and for some reason I always imagined myself in that role, only somewhat younger than I am now; it appears I was making a book that would interest primarily my own earlier self, unburdened by the years. Perhaps this will limit its future readership. Still, I believe the contents of the book will be accessible to many people. One only needs to make some mental effort, stop rejecting new ideas out of hand, and resist the temptation to replace everything incomprehensible with convenient stereotypes fished out of our cultural reserves. Readers without specialist knowledge should not be discouraged by the occasional mathematics or unfamiliar terms, since the context always makes the main idea clear. (Readers who want more detail will find much of interest in the notes.)

    It is likely that much in this book is stated incorrectly(2). Perhaps I have overlooked important considerations, so that some of my conclusions, and maybe all of them, will prove erroneous. So as not to miss the smallest nuance, and to indicate the degree of uncertainty we are dealing with, I have had to resort to specific markers; my text is therefore loaded with such awkward verbal qualifiers as "possibly", "might", "may", "it seems", "probably", "very likely", "almost certainly". However, I use each such qualifier carefully and deliberately. Still, to indicate the general limits of our epistemic position, one such stylistic device is clearly not enough; the author must take a systematic approach to reasoning under uncertainty and directly acknowledge the possibility of error. This is in no way false modesty. I sincerely admit that my book may contain serious misconceptions and incorrect conclusions, but at the same time I am convinced that the alternative views presented in the literature are even worse. That applies, moreover, to the generally accepted "null hypothesis", according to which we can today, with complete justification, ignore the problem of the emergence of superintelligence and feel entirely safe.

    Chapter first

    Past Achievements and Today's Opportunities

    Let us start by looking at the distant past. In broad terms, history is a succession of different growth regimes, and the process has been progressively accelerating. This pattern suggests that another, even faster, period of growth is possible. However, it is hardly worth attaching too much weight to this consideration, since the topic of this book is not "technological acceleration", not "exponential growth", and not even the phenomena usually grouped under the concept of "singularity". Next, we will review the background: how research on artificial intelligence has developed. Then we will turn to the current situation: what is happening in the field today. Finally, we will consider some recent expert assessments and discuss our inability to predict the timeline of further developments.

    Growth patterns and human history

    Just a few million years ago, the ancestors of humans still lived in the crowns of African trees, swinging from branch to branch. The emergence of Homo sapiens, separated from our common ancestors with the great apes, was, from a geological and even an evolutionary point of view, very gradual. Early humans assumed an upright posture, and their thumbs became noticeably opposed to the other fingers. Most important, however, were relatively minor changes in brain volume and in the organization of the nervous system, which eventually produced a giant leap in human mental development. As a result, people gained the capacity for abstract thought. They began not only to express complex thoughts coherently, but also to create an information culture, that is, to accumulate information and knowledge and pass them on from generation to generation. It must be said that humans learned to do this far better than any other living beings on the planet.

    Using its newly acquired abilities, ancient humanity developed ever more efficient methods of production, which allowed it to migrate far beyond the jungles and savannas. Soon after the advent of agriculture, the size and density of the population began to grow rapidly. More people meant more ideas, and higher density fostered not only the rapid spread of new practices but also the emergence of various specialists, which meant a constant refinement of professional skills. These factors increased the pace of economic development and made possible growth in productivity and the formation of technological capacity. Later, progress of comparable importance, leading to the Industrial Revolution, caused a second historic leap in the rate of growth.

    This changing growth rate has important implications. At the dawn of humanity, when the Earth was inhabited by the forerunners of modern humans, the hominids, economic development was so slow that it took on the order of a million years for productive capacity to grow enough to sustain an additional million people living at subsistence level. After the Neolithic revolution, by 5000 BC, when humanity had moved from a hunter-gatherer society to an agricultural economy, the growth rate had increased so much that two hundred years sufficed for the same increment. Today, following the Industrial Revolution, the world economy grows by that amount on average every ninety minutes.

    The current growth rate, even if merely sustained for a relatively long time, will lead to impressive results. If the world economy continues to grow at the average pace of the last fifty years, the planet's population will be far richer than today: 4.8 times richer by 2050, and 34 times richer by 2100.
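These figures follow from simple compound-growth arithmetic. As a sketch (the 2014 baseline year and the resulting horizon lengths are assumptions of this illustration, not stated in the text), one can back out the constant annual growth rate that each wealth multiple implies:

```python
# Back out the constant annual growth rate g implied by reaching a given
# wealth multiple after a given number of years: (1 + g) ** years = multiple.

def implied_annual_rate(multiple: float, years: int) -> float:
    """Solve (1 + g) ** years = multiple for g."""
    return multiple ** (1.0 / years) - 1.0

# Assuming a 2014 baseline: 4.8x by 2050 (36 years), 34x by 2100 (86 years).
g_2050 = implied_annual_rate(4.8, 2050 - 2014)
g_2100 = implied_annual_rate(34.0, 2100 - 2014)

print(f"Implied rate to 2050: {g_2050:.2%}")  # roughly 4.5% per year
print(f"Implied rate to 2100: {g_2100:.2%}")  # roughly 4.2% per year
```

Both multiples thus correspond to a world economy compounding at a little over four percent a year, consistent with the "average pace of the last fifty years" the text refers to.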

    However, the prospects for sustained exponential growth pale in comparison with what could happen once the world undergoes its next leap, a change comparable in magnitude and impact to the Neolithic and Industrial Revolutions. Based on historical data on economic activity and population, the economist Robin Hanson estimated the doubling time of the economy at 224,000 years for a Pleistocene hunter-gatherer society, 909 years for an agrarian society, and 6.3 years for an industrial society. (In Hanson's framework, the modern economy, with its mixed agro-industrial structure, does not yet double every 6.3 years.) If a leap comparable in revolutionary significance to the previous two were to occur in world development, the economy would reach a new growth regime and double roughly every two weeks.
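A doubling time and an annual growth rate are interchangeable descriptions of the same exponential process. A minimal sketch of the conversion (the doubling times are Hanson's estimates quoted above; the two-week case is the hypothetical post-transition regime):

```python
# Convert a doubling time T (in years) into the equivalent compound
# annual growth rate g, using (1 + g) ** T = 2.

def annual_rate_from_doubling(doubling_years: float) -> float:
    """Solve (1 + g) ** T = 2 for g."""
    return 2.0 ** (1.0 / doubling_years) - 1.0

# Hanson's estimated doubling times for successive growth modes:
for label, t in [("hunter-gatherer", 224_000.0),
                 ("agrarian", 909.0),
                 ("industrial", 6.3)]:
    print(f"{label:>15}: {annual_rate_from_doubling(t):.4%} per year")

# The hypothetical post-transition regime: doubling every two weeks
# means 26 doublings a year, i.e. the economy multiplies by
# 2 ** 26 = 67,108,864 in a single year.
factor_per_year = 2.0 ** 26
print(f"two-week doubling -> x{factor_per_year:,.0f} per year")
```

Even the industrial mode's 6.3-year doubling corresponds to only about 12% annual growth; a two-week doubling would multiply output tens of millions of times per year, which is why the text calls such rates fantastic from today's vantage point.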

    From today's vantage point, such rates of development seem fantastic. But witnesses of past eras could hardly have imagined, either, that the growth rate of the world economy would one day double several times within a single generation. What seemed utterly unthinkable to them, we perceive as the norm.

    The idea that we are approaching the moment of a technological singularity became extremely popular after the pioneering work of Vernor Vinge, Ray Kurzweil and other researchers. However, the concept of the "singularity", which is used in a variety of senses, has acquired a stable meaning in the spirit of technological utopianism, along with an aura of something at once frightening and rather majestic. Since most definitions of the word "singularity" are not relevant to the subject of this book, we will gain clarity by dispensing with it in favor of more precise terms.

    The idea of interest to us, related to the notion of singularity, is the potential explosive development of intelligence, particularly the prospect of creating an artificial superintelligence. Perhaps the growth curves shown in Fig. 1 will persuade some readers that we are on the verge of a new sharp leap in the pace of development, a leap comparable to the Neolithic and Industrial Revolutions. Yet even those who trust the charts will likely find it difficult to imagine a scenario in which the doubling time of the world economy shrinks to weeks without the involvement of a super-powerful mind many times faster and more efficient than our familiar biological one. However, one need not practice drawing growth curves and extrapolating historical rates of economic development in order to begin taking seriously the prospect of a revolution through artificial intelligence. The problem is serious enough that it needs no argument of this kind. As we shall see, there are far more compelling reasons for caution.

    Fig. 1. Dynamics of world GDP over a long historical span. On a linear scale, the history of the world economy appears as a line at first almost merging with the horizontal axis and then shooting sharply upward. A. Even when the time window is widened to the last ten thousand years, the line jerks upward from a certain point at almost ninety degrees. B. The line visibly breaks away from the horizontal axis only within roughly the last hundred years. (The curves in the two diagrams differ because they are based on different data sets, so the indicators diverge somewhat.)

    High expectations

    Since the invention of the first electronic computers in the 1940s, people have continually predicted the appearance of a computer whose level of intelligence would be comparable to a human's: an intelligent technical system endowed with common sense, having the ability to learn and reason, able to plan and to comprehensively process information gathered from a variety of sources, practical and theoretical. In those days, many expected such machines to become reality within some twenty years. Since then, the horizon has been receding at a rate of one year per year; today, futurologists convinced of the feasibility of artificial intelligence still believe that "smart machines" will appear within a couple of decades.

    A horizon of twenty years is a favorite of prophets of fundamental change. On the one hand, it is not so distant that the subject loses the public's attention; on the other, it is not so near as to rule out dreaming of a string of important scientific breakthroughs, ideas about which are still very vague at the time of the forecast even though their eventual realization seems practically beyond doubt. Compare this with the shorter horizons assigned to technologies destined to have a significant impact on the world: five to ten years, when at the time of the forecast most of the technical solutions are already partially in use; or fifteen years, when those technologies already exist in laboratory form. In addition, twenty years is often close to the remaining span of a forecaster's professional career, which reduces the reputational risk of a bold prediction.

    Still, the inflated and unfulfilled expectations of past years do not warrant the conclusion that artificial intelligence is impossible in principle and that no one will ever develop it. The main reason progress has been slower than expected lies in the technical problems encountered in building intelligent machines. The pioneers did not foresee all the difficulties they would face. Moreover, the questions of how serious these obstacles are and how far we remain from overcoming them are still open. Sometimes problems that at first seem hopelessly difficult turn out to have surprisingly simple solutions (although more often, perhaps, the reverse is true).

    We will look at paths that could lead to artificial intelligence rivaling that of humans in the next chapter. But here I want to draw attention to one important point. There will be many stops between our current position and the future arrival of artificial intelligence, but that moment is by no means the final destination. Quite close by lies the next stop, "Superintelligence" station: artificial intelligence of a level that not only equals the human mind but vastly surpasses it. After that last stop, our train will accelerate to such a degree that at "Human" station it will be unable not merely to stop but even to slow down. Most likely, it will whistle straight past. The British mathematician Irving John Good, who served in Alan Turing's code-breaking team during World War II, was probably the first to spell out the crucial features of this scenario. In his oft-cited 1965 essay on the first ultraintelligent machines, he wrote:


    Let an ultraintelligent machine be defined as a machine that can far surpass the intellectual activities of any man, however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion", and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

    That the explosive development of artificial intelligence may pose one of the principal existential risks is nowadays treated almost as a commonplace; the prospect of such growth must therefore be taken with the utmost seriousness, even if the probability of the threat were known to be relatively low (which it is not). The pioneers of artificial intelligence, however, despite their faith in the imminent appearance of AI on a par with humans, mostly denied the possibility of a superintelligence surpassing the human mind. It is as if their imagination, straining to grasp the radical possibility of future machines matching humans in their thinking abilities, simply ran dry, and they overlooked the inevitable corollary: the birth of superintelligent machines would be the next step.

    Most of the pioneers did not share the nascent public anxiety, considering it utter nonsense that their projects could pose any risk to humanity. None of them, in word or deed, attempted to engage seriously with the safety concerns or ethical doubts raised by the creation of artificial intelligence and the potential dominance of computers; not a single serious study on the subject appeared. This is surprising even against the background of that era's not very high standards for evaluating new technologies. One can only hope that by the time their bold vision finally comes to fruition, we will have accumulated not only the scientific and engineering experience needed to defuse an explosive development of artificial intelligence, but also the highest level of professionalism, which will be anything but superfluous if humanity is to survive the advent of superintelligent machines.

    But before turning our eyes to the future, it would be useful to briefly recall the history of the creation of machine intelligence.

    The Path of Hope and Despair
