Saturday, September 12, 2015

Is Human Technological Growth a Recipe For Future Disaster?

Via mysteriousuniverse.org by Micah Hanks

Transhumanism and its related concept, the technological singularity, are motifs that have been featured many times in Mysterious Universe articles over the years. In essence, transhumanism is the notion that humans can “evolve” beyond our current physical limitations, most likely through the implementation of technology. The technological singularity, while similar, often brings advanced nonhuman intelligences into the equation alongside humanity.

As far back as 2011, I wrote that “evidence of a technological Singularity expected within next several decades might already be visible,” and suggested that the technological precursors of such a “Singularity” might already begin to show in our existing technologies over the next few years.

More recently, I observed developments in the field of robotics where engineers have begun designing autonomous systems built not only to create robots themselves, but to employ a sort of “natural selection” to improve on existing designs as well. Viewed against the literature on transhumanism and the technological singularity, perhaps here again we see precursors, if only that, to an eventual artificial system capable not only of self-replication, but also of intelligent self-improvement, to the extent that evolution becomes, in essence, steerable.

As noted in my August 2015 article referenced above, mathematician and science fiction writer Vernor Vinge was the first to use the term “Singularity” in relation to a point beyond which foreseeable human expectations about technology and our own advancement begin to break down. The expression was borrowed from the event horizon of a black hole, beyond which known physical laws (and thus, human understanding) similarly seem to break down.

Vinge, of course, was not the earliest theorist to envision a concept similar to our modern ideas about transhumanism and the technological singularity. While many others offered similar ideas over the years, if we look a bit less than a decade prior to Vinge’s discussion of the Singularity, we find references to a remarkably similar concept in the work of computer scientist and aerial phenomena researcher Jacques Vallee.

A 1975 essay Vallee co-authored with his associate Francois Meyer appeared in the journal Technological Forecasting and Social Change, under the title “The Dynamics of Long-Term Growth”. Here, a handful of notions similar to those expressed by modern transhumanist advocates like Ray Kurzweil were already present, concerning the long-term growth of human technological systems at a greater-than-exponential rate. Though the term “Singularity” was never used specifically, Vallee, like Kurzweil, predicted that sometime in the first half of the twenty-first century the rate of growth of technology would begin to expand so quickly that a sort of “singular point” would be reached.

Vallee feared this could have grim consequences, however, especially when paired with steady increases in population over time. Specifically, Vallee and Meyer estimated that this transitional singularity would arrive in 2026, whereas in his book The Singularity Is Near, Kurzweil places his forecast for what he calls “the knee of the curve” just three years later, in 2029.

Unlike the more optimistic attitudes toward the Singularity expressed by Kurzweil and many in the transhumanist camp today, Vallee and Meyer’s paper took a more concerned and skeptical stance, stating that, “the forecast of infinite growth in a finite time interval is absurd. All we can expect of these developments is that some damping effect will take place very soon. The only question is whether this will be accomplished through ‘soft regulation’ or catastrophe.”
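The mathematics behind that remark is worth a brief sketch. What follows is a generic illustration of greater-than-exponential (hyperbolic) growth, not a reconstruction of Vallee and Meyer’s actual model: if a quantity x grows at a rate that rises faster than linearly with its own size, the extrapolated curve reaches infinity at some finite time t_c.

$$
\frac{dx}{dt} = a\,x^{k}, \quad k > 1
\;\Longrightarrow\;
x(t) = \big[(k-1)\,a\,(t_c - t)\big]^{-1/(k-1)} \to \infty \ \text{as } t \to t_c .
$$

Since no real quantity can actually diverge, the trend must break down before t_c arrives, which is precisely the “damping effect” the authors anticipated.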

One could speculate as to what the authors meant by terms like “soft regulation” and “catastrophe.” Was this a nod to an Orwellian system of micro-managing a populace, or perhaps to a long-term geo-economic collapse stemming from overpopulation combined with poor management of our banking systems? More likely, Vallee and Meyer were alluding to an eventual need to ration resources, and to examine population conditions in intelligent ways (however controversial the subject of “population control” may have been, and indeed still is today).

Catastrophe might also describe dangers originating outside the State: the harmful after-effects of an intense coronal mass ejection from the Sun, or even an asteroid impact, would qualify among the threats emanating from nature itself.

Vallee and Meyer’s final statement is perhaps the most cryptic: “It is clear that the rate of growth must eventually decrease. A discussion of the mechanism through which this decrease will take place is beyond the scope of the present study.” It is not difficult to imagine the moderate sorts of mechanisms the authors may have had in mind, particularly within the modern sphere of concern surrounding world population growth, climate change, and related topics.

Vernor Vinge was less nebulous when he outlined his ideas on the Singularity for Omni magazine back in January 1983, where he said, “To write a (science fiction) story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.”

It stands to reason that our long-term forecasts for growth may be nearly inseparable from certain dangers to humanity that are equally likely to arise in the coming years. Despite the technological innovations of the modern era, will such advances be able to keep up with problems like population growth, mass extinctions, and other plausible, if not inevitable, dangers?

Source
