Monday, June 20, 2016

Will Artificial Intelligence be Annoyed by Our Science Fiction?

Via mysteriousuniverse.org by Micah Hanks

Throughout countless novels, films, and other media over the years, artificial intelligence has been presented to audiences in virtually every way imaginable. From Metropolis to Ex Machina, our fiction has brought us stories of A.I. that sometimes act as our benevolent overseers; at other times, they are curious beings like ourselves, hunting for the meaning behind the “life” they are experiencing. In still other instances, they manifest as subservient companions to the humans around them, much like the endearing droids George Lucas gave us in Star Wars.

However, there are also stories that feature A.I. as a potentially grave threat to future humanity; such fictional representations of “evil” A.I. are probably among the influences driving many of today’s leading thinkers, who have warned that the development of A.I. capable of self-improvement, and even self-replication, might ultimately be a bad thing for us.

In an editorial that appeared in The Independent in 2014, physicist Stephen Hawking wrote: “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets.”

Hawking has more recently posited that humanity might easily be “superseded by A.I.,” since an intelligence capable of improving upon its own design could essentially take evolution into its own hands, leaving humans in a trail of dust… or perhaps much worse. As the physicist suggests, unless “we learn how to avoid the risks,” we might be dust… literally.

And yet, how might we “avoid” the risks presented by something that we, as a supposedly cautious civilization advancing its scientific prowess, nonetheless seem hell-bent on creating?

Hence, it is fair to ask: what might eventually cause A.I. to view humans as a threat? And following from that premise, another question springs to mind: what will artificial intelligence think about films like Terminator once it becomes self-aware?

A wide range of theories exist about the reasons A.I. might go full-blown Terminator on us: humanity may be seen as inconsequential to an advanced intelligence of this sort, or perhaps even as an impediment to the furtherance of its eventual objectives. However, would they be just as likely to find our constant portrayal of an A.I. “threat” an indication of our general distrust of them?

Alternatively, when we consider the ongoing controversies (particularly in America, but in other parts of the world as well) pertaining to xenophobia, sexual preference, and the social issues that continually arise from diversity in general, it is not hard to fathom similar issues arising in a future world shared between human and synthetic intelligences. Given the difficulties humankind already faces on its own, what kinds of problems might arise in a future where humans and A.I. must coexist despite each group’s suspicions of the other?

Or perhaps future artificial intelligences, while hoping to do good, may begin to apply their intelligence to a system of government over humanity, one that might be viewed as totalitarian despite having our best interests in mind. Armed with ever-growing intelligence, along with technologies they could use to enforce this imposed hierarchy, such an A.I., hoping to aid us through what it perceives as a fair system of governance, might provoke a schism born of humanity’s unrest, since we would view its rule as dictatorial.

Returning to the portrayal of A.I. in books and film, perhaps we should give further consideration to the many versions of A.I. that science fiction has offered over the years. Any number (or perhaps even combination) of these might play out as reality in the coming decades; however, there is at least some likelihood that our projections about the future of A.I. will actually influence the role it plays in our future reality.

Whether by virtue of a sort of “self-fulfilling prophecy” or, as suggested already, through A.I.’s own interpretation of our attitudes toward it, we must consider that our expectations of A.I. will affect the eventual outcome in this scenario. Hence, one key to understanding “how to avoid the risks” associated with A.I. may be to understand the role we ourselves could be playing in how it will behave and interact with us.

