Friday, January 15, 2016

A.I.: Terminator or nah?


Robotic Sentience and the Fall of Man


We are all familiar with some example of artificial intelligence from pop culture. Going as far back as Mary Shelley’s ‘Frankenstein’ in 1818, literature and entertainment have been full of references to human-created consciousness. Some show a hopeful, practical view of A.I., as in the lovable droids of ‘Star Wars’, the socially bumbling yet ever-useful Commander Data of “Star Trek: The Next Generation”, and even Pixar’s heart-warmingly adorable “WALL-E”. Others, like the ‘Terminator’ film franchise, Isaac Asimov’s ‘I, Robot’, and most recently ‘Avengers: Age of Ultron’ (Marvel’s 2015 superhero blockbuster), make A.I. out to be a danger so terrible and so far beyond our control that it would threaten all of existence. Within the last few years, artificial intelligence has become a popular topic of debate among some of the world’s greatest living minds. World-renowned scientists, entrepreneurs and thinkers like Elon Musk, Bill Gates, Steve Wozniak, Demis Hassabis and Stephen Hawking have warned that continued research into A.I. may pose an existential threat more dire than we as humans are capable of dealing with. At the same time, the scientific and technological communities have been experimenting with, and introducing, nascent forms of A.I. in a multitude of applications, such as military operations, space exploration, national security tactics, field medical devices, data collection, disability assistance, surgical procedures and language translation programs, with overwhelming success. These are only the beginnings of what can be accomplished with continued exploration and a deeper understanding of artificial intelligence. Therefore, despite the growing concerns voiced by the foremost scholars of our time, the vast array of beneficial possibilities exhibited by this controversial research undoubtedly outweighs the unsubstantiated, arguably histrionic claims about its risks.

Fictional speculation on the subject of AI has been so prevalent across the entire world that its origins are difficult to pinpoint. But the foundations of what we would now recognize as practical artificial intelligence were laid by the mathematician and computer scientist Alan Turing in his 1950 paper, “Computing Machinery and Intelligence.” He proposed that, since we have no scientific proof or knowledge of what consciousness really is or how it came to be, the only way to effectively measure a machine’s intelligence is to test whether its behavior is indistinguishable from a human’s. The Turing Test embodies this idea by putting a human judge into two separate, text-only conversations: one with another person, and one with a computer. Both the hidden person and the computer try to convince the judge that they are human; if the judge cannot reliably tell which is which, the machine has, by Turing’s standard, demonstrated intelligence. While the merits of this theory have been debated for more than half a century, it remains the foundation of our concept of artificial intelligence to this day. Other major moments in the history of AI include the coining of the term ‘artificial intelligence’ at the 1956 Dartmouth Conference, the 1997 victory of IBM’s ‘Deep Blue’ supercomputer over reigning world chess champion Garry Kasparov, and the overwhelming defeat of the two best contestants from the quiz show “Jeopardy!” by the question-answering system Watson in 2011. Although these were all significant milestones in the evolution of AI, the pattern suggested by Moore’s law (the observation that the processing capability of computers roughly doubles every two years) leads us to believe that we have only begun to scratch the surface.
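For the programmatically inclined, the structure of the test is simple enough to sketch in a few lines of code. The Python below is a toy illustration only; the canned questions and replies are hypothetical stand-ins, since a real test would pit a live person against an actual conversational program.

import random

# A toy sketch of Turing's "imitation game" (illustrative only).
# The canned replies below are hypothetical stand-ins; a real test
# would pit a live person against an actual conversational program.

questions = ["How are you today?", "Coffee or tea?", "What did you dream about?"]
human_replies = ["Pretty good, a bit tired.", "Coffee, always.", "Falling, oddly enough."]
machine_replies = ["I am functioning well.", "I do not drink.", "I do not dream."]

# Hide the two respondents behind anonymous channels A and B.
channels = {"A": human_replies, "B": machine_replies}
if random.random() < 0.5:
    channels = {"A": machine_replies, "B": human_replies}

# The judge sees only text: each question with both channels' replies.
for i, q in enumerate(questions):
    print(q)
    for label in ("A", "B"):
        print("  " + label + ": " + channels[label][i])

# The machine "passes" if the judge cannot reliably name it.
guess = input("Which channel is the machine, A or B? ").strip().upper()
correct = channels.get(guess) is machine_replies
print("The judge guessed correctly." if correct else "The judge was fooled.")

The point of the exercise is that nothing in the judge’s view reveals which channel is the machine; the verdict rests entirely on the quality of the conversation.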

While there may be some who fear the possible future implications of AI, few would deny that it is enormously useful. Because of their high-profile status and nearly limitless funding, national military programs are nearly always the first to utilize any new technology or scientific breakthrough. Artificial intelligence has proven to be no different. As stated in the open letter signed by scholars such as Stephen Hawking and Elon Musk (among others) and presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, just this past July, 2015:
“AI technology has reached a point where the deployment of [autonomous weapons] is - practically if not legally - feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

Many other members of the scientific and technological communities have echoed this sentiment in recent months. And while the general tone of the conference in Argentina was one of caution, others read the same developments as evidence of continued progress and evolution, the sort of advancement that in any other field would be greeted with encouragement and hopefulness.

Already, unmanned drones and even autonomous ground units have been used by the U.S. military in Iraq and Afghanistan, largely without incident. When these units are used in place of traditional enlisted soldiers, we see an immediate drop in the loss of human life. Instead of a young, healthy, highly trained man or woman being put in harm’s way, a piece of machinery with all the same capabilities, and no brothers and sisters or sons and daughters to worry about, completes the same dangerous task. For example, a ‘PackBot’, which weighs 42 pounds and can maneuver through any terrain a human could, can disarm an IED (Improvised Explosive Device) completely autonomously. Similarly, a ‘Raven’ drone, which reportedly costs about one thousand dollars to build and program, can fly over a target and collect information without any risk of human casualties, or even the loss of a multi-million-dollar military aircraft. As one military officer put it: “When a robot dies, you don’t have to write a letter to its mother.” This has effectively changed the cost of war. With the continued success of tactics like these, we have the opportunity to re-evaluate what it means to put a soldier’s life at risk. Whereas in the past we would begrudgingly accept the death of a soldier as a sad yet inevitable product of our martial objectives, now there is a far more morally palatable alternative. Further, all of these units are outfitted with video recording capabilities, which, if and when the videos are released to the general public, place a greater level of accountability on the officers making the decisions. (Some such videos have already made their way onto YouTube.) This changes the dynamic between civilians and our war efforts overseas.

Advances in AI in fields like factory production can be seen in machines that not only complete tasks efficiently and effectively, as they have since the Industrial Revolution, but can now be made aware of their surroundings. This allows humans and machines to work together as a more cohesive team, cutting down on worker injuries as well as mechanical malfunctions, as the sketch below suggests.
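As a rough illustration of what “aware of their surroundings” means in practice, here is a minimal sketch of the kind of proximity-based speed control a collaborative factory machine might use. The sensor reading and thresholds are hypothetical examples; real systems rely on certified safety controllers and far richer sensing.

# Minimal sketch of a surroundings-aware safety check (hypothetical
# thresholds; real collaborative robots use certified safety systems).

SLOW_ZONE_METERS = 2.0   # slow down when a person is this close
STOP_ZONE_METERS = 0.5   # halt entirely inside this radius

def choose_speed(distance_to_nearest_person, full_speed):
    """Scale the machine's operating speed based on human proximity."""
    if distance_to_nearest_person <= STOP_ZONE_METERS:
        return 0.0                 # a person is too close: stop
    if distance_to_nearest_person <= SLOW_ZONE_METERS:
        return full_speed * 0.25   # someone nearby: creep along safely
    return full_speed              # workspace clear: full speed

# Example: a worker steps within 1.2 meters of the machine.
print(choose_speed(1.2, full_speed=1.0))   # prints 0.25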

In the field of medicine there have been enormous discoveries, with the potential for even greater leaps in the coming years. There are machines that can identify and diagnose disease in a matter of seconds, whether in a hospital or in remote areas of the world where, before the invention of such devices, there would have been no medicine of any kind. In hospitals there are robots that perform menial, time-consuming tasks that would previously have been the responsibility of nurses, freeing them to spend more time listening and catering to the needs of their patients. Research into the very promising field of nanotechnology (yes, that’s right, we have ‘Borg nano-probes’) has such varied potential that it may eventually be possible to treat disease or injury on the molecular level without the need for invasive surgery; the U.S. government even maintains an entire website (nano.gov) dedicated to its vast array of conceivable applications.

Translation programs capable of picking up on the subtleties of a language’s colloquial lexicon and communicating them appropriately; data-collection software that compiles sensitive data without a predisposition to social biases and with a greater emphasis on discretion; robots able to perform space exploration for periods far exceeding a human lifespan while transmitting their findings back to Earth; the list goes on. But this research can only continue to blossom if it is not curtailed by our fear of the unknown.

After learning about all the wonderful things that have been and could be made possible by AI, one might wonder: if these particular people, whom we all know to be incredibly smart and worth listening to, think it’s not worth the risk, what is it they believe AI will do? Stephen Hawking (world-renowned theoretical physicist and ubiquitously regarded smart guy), who seems to be the most vocal in his opposition, said of AI:
One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.
(He goes on to add that he doesn’t know who struck first, but that it was, in fact, us that scorched the sky [see ‘The Matrix’].)

While Hawking seems like he may have watched one too many sci-fi movies, his concerns about AI are shared by Elon Musk, the entrepreneur behind Tesla Motors, SpaceX and PayPal (and, as far as I’m concerned, a real-life superhero). Musk has called AI our “biggest existential threat,” a fear Hawking has echoed. But Musk, while originally guilty of some similarly dramatic pronouncements, has begun addressing his apprehensions with an open mind, this past year donating ten million dollars to fund research on artificial intelligence safety. His hope is that such a substantial contribution will encourage researchers to err on the side of caution when developing these technologies, and that he and others will be afforded a certain level of oversight as the work continues. Not only does Elon Musk seem to be approaching this issue with equanimity, but the concerns he has presented as to why he believes there is a need for regulation are less melodramatic and have a stronger basis in fact than those of some of his colleagues. The hope for this investment is that it will establish a standard of culpability for the questions of morality and ethics that may be raised by this form of intelligence as it develops. For instance, say there is a robot equipped with an advanced form of artificial intelligence, assigned by the United States government to defend U.S. soldiers in a war zone. At some point this autonomous robot is presented with a morally ambiguous dilemma: in order to save the lives of the members of the platoon it has been designated to protect, its only remaining course of action is to destroy an enemy encampment, killing everyone inside, including a few civilians, some of them children. What will it do? Who is responsible for its actions? By whom, and with what ethical guidelines, was it programmed? What are the repercussions of whichever option it chooses? Clearly, these are questions that need to be broached before such a problem arises. Even on a much smaller scale, a similar crisis of morality would have huge implications. Without a doubt, there would need to be some level of policy in place, and an ongoing discussion to determine jurisdiction as further matters of contention emerged.

While it is apparent that certain considerations must be examined with great care, there is no doubt that research into the blossoming field of artificial intelligence is worth pursuing. Worries about sentient robots taking over the world and enslaving or exterminating the human race are reminiscent of Luddites throughout history. Whenever there has been a technological breakthrough, there has always been a voice of dissent attempting to dissuade us from adopting it. And while fear of new technology may raise questions that need to be addressed, it cannot be allowed to deter progress. In this case particularly, it is easy to succumb to apprehensions about the apocalyptic outcomes depicted in our pop-culture canon. But if we examine the evidence and don’t allow ourselves to get swept up in ideas based solely in fantasy, we have the opportunity to usher in a future of unimaginable knowledge and growth. In his TED Talk, Rodney Brooks, a respected researcher in the field of robotics, explains that creating a “bad robot” is extremely unlikely, because first we would have to make a “mildly bad robot,” and before that a “sort-of bad robot,” and that, basically, “We’re just not gonna let it go that way.” Ultimately, it is not the creation of an inherently evil AI that we should be worried about, but rather what standard of ethics and morality we one day hope to instill in the C-3POs and Arnold Schwarzeneggers of our future.




