6.4 Issues in Artificial Intelligence

At this point, it is not known what limits there are, if any, to the building of artificially intelligent artifacts. Neither is it known whether these will be silicon based, carbon based, some combination of the two, or neither. Such artifacts may well turn out to be genetically modified living organisms, inorganic devices, or both. The issues raised in this chapter are more or less independent of such considerations, and would eventually surface regardless of the type of construction. Some are in the form of common objections to be considered and possibly answered; others are rather more speculative. Many are philosophical or even theological rather than social or strictly ethical. If the most enthusiastic projections of some in the AI community are correct, all these issues will have to be dealt with in this generation. Moreover, some issues will need to be addressed regardless of whether or not the artifacts are exact simulations of the human brain, whether or not downloading succeeds, or whether the only "intelligence" involved is that which PIEA and Metalibrary devices seem to exhibit due to their current users.


Playing God?

One objection raised against AI work (as well as against genetic engineering) is that the researchers are "playing God." On the surface this is a comprehensive and definitive moral objection to such work. However, its meaning and validity are very difficult to analyse. This objection could be taken casually, to mean simply that those involved in the research are overstepping safe bounds and playing with things they ought not to. It may also be regarded as a gut feeling, an emotional response, or a desire for the world forever to remain the same. It may be a statement that the desired action is contrary to nature or God's laws and therefore to be forbidden.

It may be intended and taken literally. If so, then at face value it appears to be a claim of acquaintance with God's agenda and to imply that the research in question is known to usurp rights and privileges God reserves for Himself. Similar assumptions are found in such arguments as: "if God had meant us to fly, we would have wings." One could reply that if God is the all-knowing one, then quests for knowledge about creation are, in general, good, because they are searches for something that is of God.

"But," the response could be "it is not knowledge at stake, but its application. Only God has the right to make intelligent beings." The first part of this statement is one few people would argue with, for whatever "good" and "evil" are defined to be, their meaning takes substance in the effects of applications, not in theory alone, even when theory seems to lead to action rather directly. However, the last part of the statement is a rather large presumption, for even if God alone has the power to create ex nihilo, it does not follow that human beings cannot create at all. One could even suggest that if human beings are created "in God's image," this would seem to imply the ability to create in a similar (if not identical) manner.

Those who make this objection have two practical obstacles to overcome. The first is to demonstrate that they do have the credentials to speak for God in such matters, and the second is to devise an effective means of preventing the research they propose to forbid. There does not, at this point, seem to be a way to do either. First, there seems little evidence to support the idea of modern divine appointments to prophetic office. Second, the control of research on an international scale is effectively impossible. Perhaps it is more practical to control the products that might emerge from such work (by taxation?) than attempt to ban it and thereby drive it into secrecy.

There may still be something to the objection that AI researchers are "playing God," for pronouncements made in the name of science can occasionally have a ring of divine revelation to them. Even so, this is a comment on the motivations of the people involved, and not directly on the legitimacy of the work. On the other hand, perhaps the objectors themselves might be accused of "playing God" if they cannot substantiate their claim to know the mind of God. It is possible that the spokesmen/objectors for God may be judging the motivations of researchers by their perceptions of the work in question. That is, the objection might focus upon what the objector supposes the products of the AI work are to be used for. However, it is not clear how applications that are only potential or theoretical (or even imagined) can invalidate the AI research itself.

The objection "playing God" may be well-taken if the goal of AI researchers is to become immortal or to become "like gods." Even such an avowed goal would not validate or invalidate all AI work, for there may well be other motivations and applications. However, these possibilities do suggest other questions about motivations and potential applications. Would the benefits of AI be confined to the creators and controllers, who would indeed become "gods" and "lords" over the rest of humanity? Or, would they be shared with all? History would seem to suggest that the former outcome is more likely than the latter, and therefore to urge a considerable degree of caution when it comes to the development and deployment of any new technology.

Perhaps the value-laden "playing God" objection is instructive, for if nothing else, it forces reflection once again on the interaction of motivation, technique, society, and history. It weakens the classical argument that pure knowledge and technique are morally neutral, for it illustrates that both have a context from which they cannot be extracted pure and value free.


What Does Success in AI Mean?

There are two kinds of questions to consider here. The first has to do with knowing how to apply the word "intelligent" to an artifact, whether organic or inorganic. For example, one would not consider that a motor-driven slide rule or calculator is intelligent. As previously remarked, even a device that passes Turing's test could not thereby be considered "human." Even if a machine could duplicate the results of human thinking exactly, it is not necessarily "intelligent" in the same way as a human being, because the process by which the results are obtained is different. For example, the ability to manipulate objects or symbols does not imply that the manipulator understands what they are. One does not regard a language translator that works by substituting one set of symbols for another as an understander or speaker of--much less a scholar in--either of the languages involved. It is simply a deterministic machine; it comprehends nothing. Neither is it clear that a mechanical device can have a dynamic memory, be able to learn or forget, be able to think associatively rather than strictly linearly, or be able to learn by making mistakes.
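
The point about symbol substitution can be made concrete. The following minimal sketch (in Python, with a three-word English-to-French lexicon invented purely for illustration) behaves exactly like such a "translator": it mechanically replaces one set of symbols with another, and plainly comprehends neither language:

    # A toy word-for-word "translator". It substitutes symbols using a
    # lookup table and comprehends nothing; the tiny lexicon is invented
    # for this example only.
    lexicon = {"the": "le", "cat": "chat", "sleeps": "dort"}

    def substitute(sentence):
        # Replace each known word; pass unknown words through unchanged.
        return " ".join(lexicon.get(word, word) for word in sentence.split())

    print(substitute("the cat sleeps"))  # prints: le chat dort

No matter how large the lexicon grows, nothing in this process assigns meaning to either the input or the output.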

In other words, there is an element in the processing of data to a form that can be called information that cannot (yet?) be ignored. The assigning of meaning to data is a uniquely human activity, and it may turn out to have been presumptuous to assume that anything analogous to this thing called "understanding," "knowing" or "perceiving" can be programmed into an artifact.

One could even question the appropriateness of using cognitive terminology, such as "intelligence," to refer in any way to manufactured things. After all, to do so seems to assume that such machines are already intelligent, or surely will become so, when the outcome is still very doubtful, to say the least. For instance, it is already common to speak of the "memory" of a computer, but this is no more analogous to human memory than is a printed page in a book, even though there may be a limited amount of shared functionality. Because machine manipulation of symbols does not require or imply human-like understanding in the machine, it may well be inappropriate to use the word "know" in connection with anything that it contains or does. On the other hand, if the duplication of functionality is the only criterion for calling a machine "intelligent," the use of this word may indeed someday (already?) be appropriate. Perhaps the issue hinges on whether or not logic and empiricism are adequate to express "knowing." If they are adequate, then a similar logic can surely be applied to a "thinking" artifact that is capable of "knowing" in every way equivalent to humankind. If they are not adequate, it may always be necessary to have one word for human intelligence and another for the manufactured kind.

It should also be noted that success in AI, however measured, does not bear on questions of ultimate origins or meaning for the human race. The great expenditure of time, money, energy, and planning needed to build artificially intelligent artifacts will not somehow prove that man's existence is itself an accident. Indeed, though it may suggest the opposite--that humanity too was planned--it will not prove that either. Neither will it provide any new meaning to human life or destiny, however great an achievement it may be. Knowledge about how the universe works, even when taken to the point of building something never before made, does not answer questions of ultimate origins or meaning. From an ethical point of view, it is far more important to consider the uses and effects of the proposed devices as they relate to people than to spin from them fanciful castles of philosophical implication about the origin, purpose, and destiny of humankind.


Who Makes Decisions?

Under this heading come several questions without ready answers. The first concerns the decisions about whether AI devices ought to be made and how they should be used if they are ever completed. As in all major scientific/technological projects, there is the question of funding. It is not clear at this point who should make funding decisions for such research. At present, it is the national science research councils of various countries, and the board members of various foundations and corporations, who decide which projects may go ahead. The public, and even their political representatives, have no direct input here, even though the implications for ordinary citizens could be very important.

Oddly enough, questions of appropriateness are seldom asked at the outset of actual research and development, and ordinary citizens are even less often consulted. That is, although all scientific and technological discoveries have consequences for society as a whole, the decisions to proceed are made by a very narrow group of specialists who do not, in general, have much training in either social theory or ethics. Although most scientists would not wish to involve what they would term "unqualified" people in such decisions, true democracy would seem to demand that those affected ought to have a voice in the process. In fact, since no technology is pure in itself and free from the context of motivations and social effects, all three of these, and not just the technology itself, require careful and informed consideration before proceeding with research, and again prior to deployment.

Given the relatively narrow education many specialists have had in the past, they may in fact be among the least well qualified to assess the appropriateness of developing and using a proposed technique. It therefore seems necessary to reform both current funding practices and technological decision making, and also to restructure the educational system to produce better qualified decision makers among those who work directly with technological development.

The second question concerns what artificially intelligent machines might be used for, assuming they are built. Their very nature suggests that they will make decisions or assist in making them. This raises the question: who ought to be making decisions for people, if not people themselves? If the AI devices are an extension of human intelligence and a decision-making aid, this objection goes away. But if they are to be independent decision makers, there is a serious potential conflict. If such devices are capable of making autonomous decisions, then in whose interests will they make them--humanity's, or their own? If such autonomy is ever achieved, there is no reason to suppose that these artifacts will be partners with humankind, or share any of the same interests or goals. It is therefore not clear that these ends of AI research are desirable. Those who speak in this context of designing the successor to mankind appear to be casually writing off the whole human race to extinction--hardly a helpful outcome to those extinguished.


What Will be the Nature of AI Devices?

The discussion above leads directly to questions about the attributes of AI devices and their relationship to humankind. Suppose for a moment that the most optimistic and generous assumptions turned out to be true, and truly autonomous AI artifacts were built that were cooperative, benign, and worked in human interests. Would they be thought of as "alive"? If they were biological entities, that is, if they had a carbon-based chemistry, the answer is probably, in some sense, "yes." But if they were silicon-based, or electrical and mechanical artifacts, then even if they had human-like attributes of "intelligence" so far as can be measured, the question returns. Should "living organisms" then be taken to include silicon-based ones? In either case, the matter can be taken a step further. Will such devices be self-aware? Some futurists are sure they will be, provided they have sufficient memory and computational complexity. Others are equally convinced otherwise. The only way to find out whether this is possible is actually to achieve it; but by then the question of whether the achievement was desirable would have been rendered irrelevant.

Other questions centre on whether such entities would be regarded as subordinate to humans, or as equals. If the latter, at what point in their development would they be given the status of persons? For instance, if it were considered that an AI device were alive, and had some semi-human status, would turning off or destroying one be murder? Would an AI device be more human than a child in the mother's womb, or less? No ready answers exist to these questions, and they only become more complex if the thinking machine is housed in an ambulatory robot body, for then its qualities become even more human-like. Moreover, if AI researchers are ever able to download themselves into an artificial brain and hope in that state to be regarded as human, it may be difficult to withhold the same label from one of the same devices that has been programmed but lacks a human download. This subject is already the focus of controversy, for whatever the resolution, it touches upon the definition of a human being, and the extent to which that definition is to change.

It should be noted that the definition of a "person" in Western civilization has changed several times in the last century and a half, first to include women, then blacks and orientals, and more recently to exclude unborn children. Will it be changed again?


Rights of AI Devices

This brings the discussion to the heart of the matter, for if devices are to be made that are intelligent in a meaningful human fashion--if they are to share with mankind the essence of thinking humanly--must they not then be accorded the rights of a human being? Should they not have freedom of speech, the right to liberty, and the right to own property? If so, could they, by virtue of superior computational ability, capture ownership of the entire economy--all the stocks, bonds, and properties--and place the original and slower humans entirely in their debt? There are other ways to dominate than by force.

What of the right to bear arms or, for that matter, to marry each other (or human beings) and to adopt or have offspring? Indeed, what about such artifacts as sexual partners for human beings? Futurist Frank Ogden, who goes under the name of Dr. Tomorrow, is fond of shocking his audiences with the notion of "live-in robot lovers"--an idea he presents as a technological fix for the problem of diseases such as AIDS. "If you own 'em, don't clone 'em or loan 'em" is one of his slogans.1

Given the amount of time and energy the average person spends on things sexual, it seems inevitable that various forms of technology will be applied to the satisfaction of sexual urges. While some will object to these scenarios as patently unnatural and deviant, such objections are unlikely to make much impression, for these practices would merely take their place in a long line of others against which similar protests have already failed. Indeed, the recent history of the Internet suggests that technologically inspired and assisted sexual commerce will only expand.

Objections to new sexual practices are often made with a view to supporting the ideal of a monogamous heterosexual marriage relationship as the sole focus of legitimate sexuality. It is not clear that this kind of relationship will be any harder to maintain in the future than it is now. In fact, it may be easier, for it provides a definitive answer to the problem of sexually transmitted diseases (STDs). What is more, if AI artifacts do turn out to be so much more capable, they may be as interested in cohabitation with humans as most humans would be with a grasshopper.

It should be emphasized at this point that these are not merely fanciful speculations; they are natural questions that arise out of following the logic of building human-like artificially intelligent artifacts. They lead the line of reasoning to its final set of questions.


Can Artificially Intelligent Devices be Moral Agents?

Any discussion of the potential human-like qualities of AI devices would be incomplete without inquiring whether such intelligence will be sufficient for them to be able to ask ethical questions and make moral decisions of their own. Human beings have always assumed that they were the only moral agents on earth, for it can readily be observed that plants and animals do not ordinarily appear to act out of such considerations.

Is it intelligence alone that confers the ability to ask questions about right and wrong and to act on the answers? This question could be answered when the first AI artifacts are programmed. At one extreme, if "downloading" were to succeed, some would say that the answer would have to be "yes," contending that the human being had simply transferred her residence into the device. Others would counter that the device is an animated dead creature--a zombie--and accuse all participants of murder if the original human body were destroyed after an apparently successful transfer. The tape containing the recording of such a human brain scan also has a doubtful status. It would manifestly not even be alive, yet it would contain the pattern not to grow a human body--as DNA does--but to reconstitute the essence of humanness supposedly residing in an artifact. Would such a tape, if it could ever be made, also have to be regarded as a moral agent? (What difference is there between a running program in a computer, and a copy of one on a disk?)

Even though downloading is one of the express goals of the most optimistic of AI researchers, it is still highly speculative, and there may be many more immediate goals with a greater likelihood of achievement. Some, however, lead to very similar questions. For example, suppose an AI device were programmed in such a way that it could make errors--not just errors due to data problems, but deliberate ones: it could choose to ignore what its program indicates is the right choice based on data analysis and act on the wrong choice instead. The question now becomes whether the choice is merely random, or whether it is volitional. That is, will AI devices ever be able knowingly to make wrong choices? The mere possibility of this, whether through software or hardware, is the single key issue in AI work, regardless of the nature of the device. For if the artifact is intelligent, and has volition by standards that are essentially human, then it follows that it is a moral agent, regardless of whether or not it is human. If so, then it is surely capable of choosing to do wrong when it "knows" the right, and this perhaps in nanoseconds rather than upon lengthy deliberation. Some fear AI devices might be capable of doing good or evil at billions of times the rate human beings can. If so, the eventual quick destruction of much of humanity by a berserk machine would seem to be assured, for human beings have done almost as much with a far slower time frame within which to decide and act.
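
The random half of that distinction is trivial to program; the volitional half is not. The following minimal sketch (with an invented scoring function and error rate) makes the gap visible. Nothing in it "knows" that its occasional deviation is wrong, which is precisely what separates randomness from volition:

    import random

    def best_choice(options, score):
        # Deterministic policy: always act on the highest-scoring option.
        return max(options, key=score)

    def erring_choice(options, score, error_rate=0.05):
        # Randomly deviant policy: usually act on the best option, but
        # occasionally act on a worse one. The deviation is mere chance;
        # the program does not choose to err, and comprehends nothing.
        best = best_choice(options, score)
        worse = [o for o in options if o != best]
        if worse and random.random() < error_rate:
            return random.choice(worse)
        return best

    print(erring_choice([1, 5, 3], score=lambda x: x))  # usually 5

A volitional device would instead have to deviate for reasons of its own, and no one at present knows how to program that, or whether it can be programmed at all.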

At the very least, this turns the tables on an earlier question, for it leads one to ask whether a house or office computer could be arrested for murdering its owner. Could not such a device, if possessed of free will, become jealous or angry if its tenants decided to move or its users wanted to replace it with a better model? A little judicious fiddling with the building climate controls or the air supply, or a small short circuit on the operator keyboard, would quickly eliminate the human inconvenience with no effect on the computer. This scenario too is not just speculation, but a probable outcome of any decision to engineer volitional machines. Thus, if they are to be devised, so must the mechanisms for protecting humans from the criminal element among the new devices and for dealing with it--including arrest procedures, the reading of rights, and the means to try and sentence such artifacts. When more ambulatory devices are made, would citizens need protection against mechanical muggers or bank robbers? Would AI machines discover some expensive electronic addiction to help them forget their troubles, and need physical and psychological treatment for their equivalent of alcoholism?

How would human beings respond to volitional machines? Would they riot and try to destroy them? Would protective societies come into being, label the termination of AI machines "genocide" and vow to establish legal rights for their mechanical "brothers and sisters"? Such questions should be considered at the time decisions are being made to build thinking machines, rather than after the fact.

Christian and other theologians would also face interesting problems with volitional machines. Granting that a device were regarded as "alive"--that is, had a soul, the breath of life--would it also be considered to have inherited a fallen nature from man, its creator? If so, could it be said to have a spirit, that is, the ability to relate to God? And if so, does salvation apply to it equally as to human beings?


Simple Answers?

The answers to most of the questions raised in this section are simple if AI devices are to remain mechanical expressions of subsets of human thinking patterns--not to be thought of as actually intelligent, or alive in the human sense. Given the present-day agenda of the AI research community, however, the difficult questions seem liable to resurface from time to time with increasing complexity. At some point, it seems likely that careful definitions of "human" and "intelligent" will have to be agreed upon, and it is not at this time clear what those definitions will eventually include, especially since human "intelligence" can scarcely be said to be well defined.


Profile On ... Issues


Machines and Understanding


understand verb 1 : be aware of the meaning of 2 : deduce 3 : assign a meaning to


What Does Understanding Involve?

o awareness: Understanding implies the existence of an understander, that is, of a personality that is at least self-aware.

o intentionality: There must be a capacity for mental states (beliefs, for example) that are directed at things or objects outside the understander, i.e., the ability to be aware of the things that are to be understood and to consciously direct understanding at them.

o meaning: There must be an aspect or an idea that is the subject of the understanding. As noted in Chapter 4, meaning requires both intelligent organization, and the capacity for meaning-preserving communication.


Will Machines Ever Understand?


YES

o Information is the basis of understanding, and it is just data that has been processed. Machines process data, so they can also be said to understand.

o Biological systems are finite and bounded, just as physical ones are. They can therefore be completely comprehended through the inputs and outputs of their own processes without reference to anything else--including the method of internal processing.

o The human mind can be completely explained by knowing how the brain works. This activity can be duplicated.

o All human activity results from the interaction of the electrical, chemical, and physical properties of the body (brain included). These constitute programs that determine all human activity. We need only learn how to write similar programs to duplicate human activity.

o The processes of the brain are nature's way of executing algorithms with a finite number of (possibly repeated) steps. Since every such algorithm is executable on a machine (Church's Thesis), everything the brain does can be duplicated artificially. Meaning is just something that is implicit in data and algorithms and is encoded along with them.

o Understanding is just a name for a certain kind of data processing. It too can be understood (from within the process, so to speak).

o Experience is another name for memory and social conditioning. Since all information can be encoded, experience too can be programmed.

o It is not even necessary to know what the mind is to be able to duplicate the results it can produce. Only the functionality of the human brain needs to be duplicated to produce an intelligent machine, not the exact way that it works.

o Understanding is something that is contained within programming. A simulation of the activity of neural circuits is sufficient to produce understanding equal to what humans have.

o For instance, a chess playing program is intelligent. It uses different techniques than human players do (a sketch of one such technique follows this column), but it yields the same results.


Understanding is a duplicable mechanical process.
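
To make the chess item above concrete, here is a minimal sketch of exhaustive game-tree search, applied (for brevity) to a trivial counter-taking game invented for the example. A chess program is enormously larger and adds pruning and heuristics, but the YES position rests on the same mechanical enumeration of possibilities:

    # Exhaustive (negamax) search of a toy game: players alternately
    # take 1-3 counters from a pile; whoever takes the last one wins.

    def moves(pile):
        # All legal moves in the current position.
        return [n for n in (1, 2, 3) if n <= pile]

    def search(pile):
        # Enumerate every line of play: +1 if the side to move can
        # force a win, -1 otherwise.
        if pile == 0:
            return -1  # the previous player took the last counter
        return max(-search(pile - m) for m in moves(pile))

    def best_move(pile):
        # The machine "chooses" the move with the best enumerated score.
        return max(moves(pile), key=lambda m: -search(pile - m))

    print(best_move(10))  # prints 2, leaving the opponent a losing pile of 8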

NO

o Understanding data so as to create useful information requires interpretative ideas, not just mechanical organization. Humans create ideas, machines do not.

o Biological systems are not just physical. The human mind, viewed as an information processor, is unbounded. The brain's biological state constitutes the environment within which its processes take place. This environment is different in a computing machine.

o The human mind is more than just the brain. The mind might understand the brain, but the brain cannot comprehend the mind.

o There is no known link between the brain's hardware (its constituents as a material and physical system) and its software (its constituents as an information processor). There are no known material elements that give it the ability to be a semantic processor, self-aware, or intentional.

o Formalizing the execution of algorithms on a machine (encoding data and its processing) requires only a syntactic notation (like a programming language). Semantic analysis (assigning meaning) is a different kind of activity altogether. It too can be described with a notation, but it can only be created by a human.

o The awareness, intentionality, and meaning that compose understanding are motivators for techniques, but are not themselves techniques. Thus no process or technique used by understanding, whether biological or physical, is sufficient to understand understanding. Nothing (including understanding) is the same when viewed from within itself and its processes as when viewed by an observer from outside.

o Interaction with the environment, including other human beings, is necessary for understanding.

o The process of understanding is at least as important as (if not more important than) the functionality, for the ways in which meanings are assigned and communicated are intimately attached to human experience, and that is something no machine can ever have.

o Consequently, understanding involves more than neural circuits, chemistry, or formal symbols, and is closely tied to the social and biological organism it serves. A simulation may to some degree approximate this, but it will not model it.

o The exhaustive search of all possible moves by a high speed computer is fundamentally different from the technique used by a human player. Even if the result on the board is the same, the mental outcomes are not comparable, and that is what matters.


Understanding is non-duplicable and uniquely human.
