Monday, December 3, 2018

The Human Brain Is a Time Traveler

Via nytimes.com by Steven Johnson

Randy Buckner was a graduate student at Washington University in St. Louis in 1991 when he stumbled across one of the most important discoveries of modern brain science. For Buckner — as for many of his peers during the early ’90s — the discovery was so counterintuitive that it took years to recognize its significance.

Buckner’s lab, run by the neuroscientists Marcus Raichle and Steven Petersen, was exploring what the new technology of PET scanning could show about the connection between language and memory in the human brain. The promise of the PET machine lay in how it measured blood flow to different parts of the brain, allowing researchers for the first time to see detailed neural activity, not just anatomy. In Buckner’s study, the subjects were asked to recall words from a memorized list; by tracking where the brain was consuming the most energy during the task, Buckner and his colleagues hoped to understand which parts of the brain were engaged in that kind of memory.

But there was a catch. Different regions of the brain vary widely in how much energy they consume no matter what the brain is doing; if you ask someone to do mental math while scanning her brain in a PET machine, you won’t learn anything from that scan on its own, because the subtle changes that reflect the mental math task will be drowned out by the broader patterns of blood flow throughout the brain. To see the specific regions activated by a specific task, researchers needed a baseline comparison, a control.

At first, this seemed simple enough: Put the subjects in the PET scanner, ask them to sit there and do nothing — what the researchers sometimes called a resting state — and then ask them to perform the task under study. The assumption was that by comparing the two images, the resting brain and the active brain, the researchers could discern which regions were consuming more energy while performing the task.

But something went strangely wrong when Buckner scanned his subjects in the resting state. “What happened is that we began putting people in scanners that can measure their brain activity,” Buckner recalls now, “and Mother Nature shouted back at us.” When people were told to sit and do nothing, the PET scans showed a distinct surge of mental energy in some regions. The resting state turned out to be more active than the active state.

The odd blast of activity during the resting state would be observed in dozens of other studies using a similar control structure during this period. To this first generation of scientists using PET scans, the active rest state was viewed, in Buckner’s words, as “a confound, as troublesome.” A confound is an errant variable that prevents a scientist from doing a proper control study. It’s noise, mere interference getting in the way of the signal that science is looking for. Buckner and his colleagues noted the strange activity in a paper submitted in 1993, but almost as an afterthought, or an apology.

But that passing nod to the strangely active “resting state” turned out to be one of the first hints of what would become a revolution in our understanding of human intelligence. Not long after Buckner’s paper was published, a brain scientist at the University of Iowa named Nancy Andreasen decided to invert the task/control structure that had dominated the early neuroimaging studies. Instead of battling the “troublesome” resting state, Andreasen and her team would make it the focus of their study.

Andreasen’s background outside neuroscience might have helped her perceive the value lurking in the rest state, where her peers saw only trouble. As a professor of Renaissance literature, she published a scholarly appraisal of John Donne’s “conservative revolutionary” poetics. After switching fields in her 30s, she eventually began exploring the mystery of creativity through the lens of brain imaging. “Although neither a Freudian nor a psychoanalyst, I knew enough about human mental activity to quickly perceive what a foolish ‘control task’ rest was,” she would later write. “Most investigators made the convenient assumption that the brain would be blank or neutral during ‘rest.’ From introspection I knew that my own brain is often at its most active when I stretch out on a bed or sofa and close my eyes.”

Andreasen’s study, the results of which were eventually published in The American Journal of Psychiatry in 1995, included a subtle dig at the way the existing community had demoted this state to a baseline control: She called this mode the REST state, for Random Episodic Silent Thought. The surge of activity that the PET scans revealed was not a confound, Andreasen argued. It was a clue. In our resting states, we do not rest. Left to its own devices, the human brain resorts to one of its most emblematic tricks, maybe one that helped make us human in the first place.

It time-travels.

Imagine it’s late evening on a workday and you’re taking your dog for a walk before bedtime. A few dozen paces from your front door, as you settle into your usual route through the neighborhood, your mind wanders to an important meeting scheduled for next week. You picture it going well — there’s a subtle rush of anticipatory pleasure as you imagine the scene — and you allow yourself to hope that this might set the stage for you to ask your boss for a raise. Not right away, mind you, but maybe in a few months. You imagine her saying yes, and what that salary bump would mean: Next year, you and your spouse might finally be able to get out of the rental market and buy a house in a nicer neighborhood nearby, the one with the better school district. But then your mind shifts to a problem you’ve been wrestling with lately: A member of your team is brilliant but temperamental. His emotional swings can be explosive; just today, perceiving a slight from a colleague, he started berating her in the middle of a meeting. He seems to have no sense of decorum, no ability to rein in his emotions.

As you walk, you remember the physical sense of unease in the room as your colleague ranted over the most meaningless offense. You imagine a meeting six months from now with a comparable eruption — only this time it’s happening in front of your boss. A small wave of stress washes over you. Perhaps he’s just not the right fit for the job, you think — which reminds you of the one time you fired an employee, five years ago. Your mind conjures the awkward intensity of that conversation, and then imagines how much more explosive a comparable conversation would be with your current employee. You feel a sensation close to physical fear as your mind runs through the scenario.

In just a few minutes of mental wandering, you have made several distinct round trips from past to future: forward a week to the important meeting, forward a year or more to the house in the new neighborhood, backward five hours to today’s meeting, forward six months, backward five years, forward a few weeks. You’ve built chains of cause and effect connecting those different moments; you’ve moved seamlessly from actual events to imagined ones. And as you’ve navigated through time, your brain and body’s emotional system has generated distinct responses to each situation, real and imagined. The whole sequence is a master class in temporal gymnastics. In these moments of unstructured thinking, our minds dart back and forth between past and future, like a film editor scrubbing through the frames of a movie.

The sequence of thoughts does not feel, subjectively, like hard work. It does not seem to require mental effort; the scenarios just flow out of your mind. Because these imagined futures come so easily to us, we have long underestimated the significance of the skill. The PET scanner allowed us to appreciate, for the first time, just how complex this kind of cognitive time travel actually is.

In her 1995 paper, Nancy Andreasen included two key observations that would grow in significance over the subsequent decades. When she interviewed the subjects afterward, they described their mental activity during the REST state as a kind of effortless shifting back and forth in time. “They think freely about a variety of things,” Andreasen wrote, “especially events of the past few days or future activities of the current or next several days.” Perhaps most intriguing, Andreasen noted that most of the REST activity took place in what are called the association cortices of the brain, the regions of the brain that are most pronounced in Homo sapiens compared with other primates and that are often the last to become fully operational as the human brain develops through adolescence and early adulthood. “Apparently, when the brain/mind thinks in a free and unencumbered fashion,” she wrote, “it uses its most human and complex parts.”

In the years that followed Andreasen’s pioneering work, in the late 1990s and early 2000s, a series of studies and papers mapped out the network of brain activity that she first identified. In 2001, Randy Buckner’s adviser at Washington University, Marcus Raichle, coined a new term for the phenomenon: the “default-mode network,” or just “the default network.” The phrase stuck. Today, Google Scholar lists thousands of academic studies that have investigated the default network. “It looks to me like this is the most important discovery of cognitive neuroscience,” says the University of Pennsylvania psychologist Martin Seligman. The seemingly trivial activity of mind-wandering is now believed to play a central role in the brain’s “deep learning,” the mind’s sifting through past experiences, imagining future prospects and assessing them with emotional judgments: that flash of shame or pride or anxiety that each scenario elicits.

A growing number of scholars, drawn from a wide swath of disciplines — neuroscience, philosophy, computer science — now argue that this aptitude for cognitive time travel, revealed by the discovery of the default network, may be the defining property of human intelligence. “What best distinguishes our species,” Seligman wrote in a Times Op-Ed with John Tierney, “is an ability that scientists are just beginning to appreciate: We contemplate the future.” He went on: “A more apt name for our species would be Homo prospectus, because we thrive by considering our prospects. The power of prospection is what makes us wise.”

It is unclear whether nonhuman animals have any real concept of the future at all. Some organisms display behavior that has long-term consequences, like a squirrel’s burying a nut for winter, but those behaviors are all instinctive. The latest studies of animal cognition suggest that some primates and birds may carry out deliberate preparations for events that will occur in the near future. But making decisions based on future prospects on the scale of months or years — even something as simple as planning a gathering of the tribe a week from now — would be unimaginable even to our closest primate relatives. If the Homo prospectus theory is correct, the limits of other animals’ time-traveling skills explain an important piece of the technological gap that separates humans from all other species on the planet. It’s a lot easier to invent a new tool if you can imagine a future where that tool might be useful. What gave flight to the human mind and all its inventiveness may not have been the usual culprits of our opposable thumbs or our gift for language. It may, instead, have been freeing our minds from the tyranny of the present.

The capacity for prospection has been reflected in, and amplified by, many of the social and scientific revolutions that shaped human history. Agriculture itself would have been unimaginable without a working model of the future: predicting seasonal changes, visualizing the long-term improvements possible from domesticating crops. Banking and credit systems require minds capable of sacrificing present-tense value for the possibility of greater gains in the future. For vaccines to work, we needed patients willing to introduce a weakened form of a pathogen into their bodies now in exchange for a lifetime of protection against disease. We are born with a singular gift for imagining the future, but we have been enhancing that gift since the dawn of civilization. Today, new enhancements are on the horizon, in the form of machine-learning algorithms that already outperform humans at certain kinds of forecasts. As A.I. stands poised to augment our most essential human talent, we are faced with a curious question: How will the future be different if we get much better at predicting it?

“Time travel feels like an ancient tradition, rooted in old mythologies, old as gods and dragons,” James Gleick observes in his 2017 book, “Time Travel: A History.” “It isn’t. Though the ancients imagined immortality and rebirth and lands of the dead, time machines were beyond their ken. Time travel is a fantasy of the modern era.” The idea of using technology to move through time as effortlessly as we move through space appears to have been first conceived by H.G. Wells at the end of the 19th century, eventually showcased in his pioneering work of science fiction, “The Time Machine.”

But machines have been soothsayers from the beginning. In 1900, sponge divers stranded after a storm in the Mediterranean discovered an underwater statuary on the shoals of the Greek island Antikythera. It turned out to be the wreck of a ship more than 2,000 years old. During the subsequent salvage operation, divers recovered the remnants of a puzzling clocklike contraption with precision-cut gears, annotated with cryptic symbols that were corroded beyond recognition. For years, the device lay unnoticed in a museum drawer, until a British historian named Derek de Solla Price rediscovered it in the early 1950s and began the laborious process of reconstructing it — an effort that scholars have continued into the 21st century. We now know that the device was capable of predicting the behavior of the sun, the moon and five of the planets. The device was so advanced that it could even predict, with meaningful accuracy, solar or lunar eclipses that wouldn’t occur for decades.

The Antikythera mechanism, as it has come to be known, is sometimes referred to as an ancient computer. The analogy is misleading: The underlying technology behind the device was much closer to a clock than a programmable computer. But at its essence, it was a prediction machine. A clock is there to tell you about the present. The mechanism was there to tell you about the future. That its creators went to such great lengths to predict eclipses seems telling: While some ancient societies did believe that eclipses harmed crops, knowing about them in advance wouldn’t have been of much use. What seems far more useful is the sense of magic and wonder that such a prediction could provide, and the power that could be acquired as a result. Imagine standing in front of the masses and announcing that tomorrow the sun will transform for more than a minute into a fire-tinged black orb. Then imagine the awe when the prophecy comes true.

Prediction machines have only multiplied since the days of the ancient Greeks. Where those original clockwork devices dealt with deterministic futures, like the motions of solar bodies, increasingly our time-traveling tools forecast probabilities and likelihoods, allowing us to imagine possible futures for more complex systems. In the late 1600s, thanks to improvements in public-health records and mathematical advances in statistics, the British astronomer Edmund Halley and the Dutch scientist Christiaan Huygens separately made the first rigorous estimates of average life expectancy. Around the same time, there was an explosion of insurance companies, their business made possible by this newfound ability to predict future risk. Initially, they focused on the commercial risk of new shipping ventures, but eventually insurance would come to offer protection against just about every future threat imaginable: fire, floods, disease. In the 20th century, randomized, controlled trials allowed us to predict the future effects of medical interventions, finally separating out the genuine cures from the snake oil. In the digital age, spreadsheet software took accounting tools that were originally designed to record the past activity of a business and transformed them into tools for projecting out forecasts, letting us click through alternate financial scenarios in much the way our minds wander through various possible futures.

But cognitive time travel has been enhanced by more than just science and technology. The invention of storytelling itself can be seen as a kind of augmentation of the default network’s gift for time travel. Stories do not just allow us to conjure imaginary worlds; they also free us from being mired in linear time. Analepsis and prolepsis — flashbacks and flash-forwards — constitute some of the oldest literary devices in the canon, deployed in ancient narratives like the “Odyssey” and the “Arabian Nights.” Time machines have obviously proliferated in the content of sci-fi narratives since “The Time Machine” was published, but time travel has also infiltrated the form of modern storytelling. A defining trick of recent popular narrative is the contorted timeline, with movies and TV shows embracing temporal schemes that would have baffled mainstream audiences just a few decades ago. The epic, often inscrutable plot of the TV show “Lost” veered among past, present and future with a reckless glee. The blockbuster 2016 movie “Arrival” featured a bewildering time scheme that skipped forward more than 50 times to future events, while intimating throughout that they were actually occurring in the past. The current hit series “This Is Us” reinvented the family-soap-opera genre by structuring each episode as a series of time-jumps, sometimes spanning more than 50 years. The final five minutes of the Season 3 opener, which aired earlier this fall, jump back and forth seven times among 1974, 2018 and some unspecified future that looks to be about 2028.

These narrative developments suggest an intriguing possibility: that popular entertainment is training our minds to get better at cognitive time travel. If you borrowed Wells’s time machine and jumped back to 1955, then asked typical viewers of “Gunsmoke” and “I Love Lucy” to watch “Arrival” or “Lost,” they would find the temporal high jinks deeply disorienting. Back then, even a single flashback required extra hand-holding — remember the rippling screen? — to signify the temporal leap. Only experimental narratives dared challenge the audience with more complex time schemes. Today’s popular narratives zip around their fictional timelines with the speed of the default network itself.

The elaborate timelines of popular narrative may be training our minds to contemplate more complex temporal schemes, but could new technology augment our skills more directly? We have long heard promises of “smart drugs” on the horizon that will enhance our memory, but if the Homo prospectus argument is correct, we should probably be looking for breakthroughs that will enhance our predictive powers as well.

In a way, those advances are already around us, but in the form of software, not pharmaceuticals. If you have ever found yourself mentally running through alternate possibilities for a coming outing — what happens if it rains? — based on a 10-day weather forecast, your prospective powers have been enhanced by the time-traveling skills of climate supercomputers that churn through billions of alternative atmospheric scenarios, drawn from the past and projecting out into the future. These visualizations are giving you, for the first time in human history, better-than-random predictions about what the weather will be like in a week’s time. Or say that dream neighborhood you’re thinking about moving to — the one you can finally afford if you manage to get that raise — happens to sit in a flood zone, and you think about what it might be like to live through a significant flood event 10 years from now, as the climate becomes increasingly unpredictable. That you’re even contemplating that possibility is almost entirely thanks to the long-term simulations of climate supercomputers, metabolizing the planet’s deep past into its distant future.

Accurate weather forecasting is merely one early triumph of software-based time travel: algorithms that allow us to peer into the future in ways that were impossible just a few decades ago, what a new book by a trio of University of Toronto economists calls “prediction machines.” In machine-learning systems, algorithms can be trained to generate remarkably accurate predictions of future events by combing through vast repositories of data from past events. An algorithm might be trained to predict future mortgage defaults by analyzing thousands of home purchases and the financial profiles of the buyers, testing its hypotheses by tracking which of those buyers ultimately defaulted. A result of that training would not be an infallible prediction, of course, but something similar to the predictions we rely on with weather forecasts: a range of probabilities. That time-traveling exercise, in which you imagine buying a house in the neighborhood with the great schools, could be augmented by a software prediction as well: The algorithm might warn you that there was a 20 percent chance that your home purchase would end catastrophically, because of a market crash or a hurricane. Or another algorithm, trained on a different data set, might suggest other neighborhoods where home values are also likely to increase.
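
To make that training loop concrete, here is a minimal sketch in Python with scikit-learn, following the same framing as the paragraph above: fit a model on past home purchases labeled by whether the buyer defaulted, then ask it for a probability about a hypothetical future purchase. The feature names and numbers are invented placeholders, not real mortgage data and not any particular system described in the book.

```python
# Minimal sketch of a "prediction machine": train on past outcomes,
# then output a probability (not a verdict) about an imagined future.
# All feature names and figures below are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Past purchases: [loan-to-value ratio, debt-to-income ratio, credit score],
# labeled 1 if the buyer ultimately defaulted, 0 otherwise.
past_purchases = np.array([
    [0.95, 0.45, 620],
    [0.60, 0.20, 780],
    [0.90, 0.50, 590],
    [0.70, 0.30, 700],
    [0.85, 0.40, 640],
    [0.50, 0.15, 800],
])
defaulted = np.array([1, 0, 1, 0, 1, 0])

# Fit on the past...
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(past_purchases, defaulted)

# ...then ask about a possible future: a prospective buyer in the
# neighborhood with the great schools. The answer comes back as a
# probability, closer to a weather forecast than a prophecy.
prospective_buyer = np.array([[0.80, 0.35, 680]])
p_default = model.predict_proba(prospective_buyer)[0, 1]
print(f"Estimated chance of default: {p_default:.0%}")
```

A real system would train on millions of records rather than six, but the shape of the exercise is the same: the machine metabolizes recorded pasts into a range of probabilities about possible futures.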

These algorithms can help correct a critical flaw in the default network: Human beings are famously bad at thinking probabilistically. The pioneering cognitive psychologist Amos Tversky once joked that where probability is concerned, humans have three default settings: “gonna happen,” “not gonna happen” and “maybe.” We are brilliant at floating imagined scenarios and evaluating how they might make us feel, were they to happen. But distinguishing between a 20 percent chance of something happening and a 40 percent chance doesn’t come naturally to us. Algorithms can help us compensate for that cognitive blind spot.

Machine-learning systems will also be immensely helpful when mulling decisions that potentially involve a large number of distinct options. Humans are remarkably adept at building imagined futures for a few competing timelines simultaneously: the one in which you take the new job, the one in which you turn it down. But our minds run up against a computational ceiling when they need to track dozens or hundreds of future trajectories. The prediction machines of A.I. do not have that limitation, which will make them tantalizingly adept at assisting with some meaningful subset of important life decisions in which there is rich training data and a high number of alternate futures to analyze.

Choosing where to go to college — a decision almost no human being had to make 200 years ago that more than a third of the planet now does — happens to be a decision that resides squarely in the machine-learning sweet spot. There are more than 5,000 colleges and universities in the United States. A great majority of them are obviously inappropriate for any individual candidate. But no matter where you are on the ladder of academic achievement — and economic privilege — there are undoubtedly more than a few dozen candidate colleges that might well lead to interesting outcomes for you. You can visit a handful of them, and listen to the wisdom of your advisers, and consult the college experts online or in their handbooks. But the algorithm would be scanning a much larger set of options: looking at data from millions of applications, college transcripts, dropout rates, all the information that can be gleaned from the social-media presence of college students (which is, today, just about everything). It would also scan a parallel data set that the typical college adviser rarely emphasizes: successful career paths that bypassed college. From that training set it could generate dozens of separate predictions for promising colleges, optimized to whatever rough goals the applicant defined: self-reported long-term happiness, financial security, social-justice impact, fame, health. To be clear, that data will be abused, sold off to advertisers or stolen by cyberthieves; it will generate a thousand appropriately angry op-eds. But it will also most likely work on some basic level, to the best that we’ll be able to measure. Some people will swear by it; others will renounce it. Either way, it’s coming.
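
Purely as a thought experiment, here is a sketch of how such a recommendation might be assembled: a model trained on past applicant-and-college records paired with a self-reported outcome, then used to score a few hypothetical candidate schools for one applicant. Every school name, feature and number is invented, and a real system would draw on far richer data and goals than a single satisfaction score.

```python
# Hypothetical sketch of the college "prediction machine" imagined above:
# score candidate schools for one applicant with a model trained on past
# (applicant, school) -> outcome records. All names and data are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Past records: [applicant GPA, school selectivity, school cost index],
# paired with self-reported long-term satisfaction on a 0-10 scale.
past_records = np.array([
    [3.9, 0.9, 0.8], [3.1, 0.4, 0.3], [3.5, 0.6, 0.5],
    [2.8, 0.3, 0.2], [3.7, 0.8, 0.9], [3.3, 0.5, 0.4],
])
satisfaction = np.array([8.5, 7.0, 7.5, 6.0, 6.5, 8.0])

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(past_records, satisfaction)

# One applicant weighing three made-up schools, each described by
# (selectivity, cost index). The model predicts an outcome for each pairing.
applicant_gpa = 3.4
candidates = {"Alpha College": (0.7, 0.6), "Beta State": (0.4, 0.3), "Gamma U": (0.9, 0.9)}
ranked = sorted(
    ((name, model.predict([[applicant_gpa, sel, cost]])[0])
     for name, (sel, cost) in candidates.items()),
    key=lambda pair: -pair[1],
)
for name, score in ranked:
    print(f"{name}: predicted satisfaction {score:.1f} / 10")
```

The point of the sketch is the branching: where a person can hold two or three imagined futures in mind at once, the model can score dozens of them side by side.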

In late 2017, the Crime Lab at the University of Chicago announced a new collaboration with the Chicago Police Department to build a machine-learning-based “officer support system,” designed specifically to predict which officers are likely to have an “adverse incident” on the job. The algorithm sifts through the prodigious repository of data generated by every cop on the beat in Chicago: arrest reports, gun confiscations, public complaints, supervisor reprimands and more. The algorithm uses archived data — coupled with actual cases of adverse incidents, like the shooting of an unarmed citizen or other excessive uses of force — as a training set, enabling it to detect patterns of information that can predict future problems.

This sort of predictive technology immediately conjures images of a “Minority Report”-style dystopia, in which the machines convict you of a precrime that by definition hasn’t happened yet. But the project lead, Jens Ludwig, points out that with a predictive system like the one currently in the works in Chicago, the immediate consequence would simply be an officer’s getting some additional support or counseling, to help avert a larger crisis. “People get understandably nervous about A.I. making the final decision,” Ludwig says. “But we don’t envision that A.I. would be making the decision.” Instead, he imagines it as a “decision-making aid” — an algorithm that “can help sergeants prioritize their attention.”

No matter how careful the Chicago P.D. is in deploying this particular technology, we shouldn’t sugarcoat the broader implications here: It seems inevitable that people will be fired thanks to the predictive insights of machine-learning algorithms, and something about that prospect is intuitively disturbing to many of us. Yet we’re already making consequential decisions about people — whom to hire, whom to fire, whom to listen to, whom to ignore — based on human biases that we know to be at best unreliable, at worst prejudiced. If it seems creepy to imagine that we would make them based on data-analyzing algorithms, the decision-making status quo, relying on our meanest instincts, may well be far creepier.

Whether you find the idea of augmenting the default network thrilling or terrifying, one thing should be clear: These tools are headed our way. In the coming decade, many of us will draw on the forecasts of machine learning to help us muddle through all kinds of life decisions: career changes, financial planning, hiring choices. These enhancements could well turn out to be the next leap forward in the evolution of Homo prospectus, allowing us to see into the future with more acuity — and with a more nuanced sense of probability — than we can do on our own. But even in that optimistic situation, the power embedded in these new algorithms will be extraordinary, which is why Ludwig and many other members of the A.I. community have begun arguing for the creation of open-source algorithms, not unlike the open protocols of the original internet and World Wide Web. Drawing on predictive algorithms to shape important personal or civic decisions will be challenging enough without the process’s potentially being compromised or subtly redirected by the dictates of advertisers. If you thought Russian troll farms were dangerous in our social-media feeds, imagine what will happen when they infiltrate our daydreams.

Today, it seems, mind-wandering is under attack from all sides. It’s a common complaint that our compulsive use of smartphones is destroying our ability to focus. But seen through the lens of Homo prospectus, ubiquitous computing poses a different kind of threat: Having a network-connected supercomputer in your pocket at all times gives you too much to focus on. It cuts into your mind-wandering time. The downtime between cognitively active tasks that once led to REST states can now be filled with Instagram, or Nasdaq updates, or podcasts. We have Twitter timelines instead of time travel. At the same time, a society-wide vogue for “mindfulness” encourages us to be in the moment, to think of nothing at all instead of letting our thoughts wander. Search YouTube, and there are hundreds of meditation videos teaching you how to stop your mind from doing what it does naturally. The Homo prospectus theory suggests that, if anything, we need to carve out time in our schedule — and perhaps even in our schools — to let minds drift.

According to Marcus Raichle at Washington University, it may not be too late to repair whatever damage we may have done to our prospective powers. A few early studies suggest that the neurons implicated in the default network have genetic profiles that are often associated with long-term brain plasticity, that most treasured of neural attributes. “The brain’s default-mode network appears to preserve the capacity for plasticity into adulthood,” he told me. Plasticity, of course, is just another way of saying that the network can learn new tricks. If these new studies pan out, our mind-wandering skills will not have been locked into place in our childhood. We can get better at daydreaming, if we give ourselves the time to do it.

What will happen to our own time-traveling powers as we come to rely more on the prediction machines of A.I.? The outcome may be terrifying, or liberating, or some strange hybrid of the two. Right now it seems inevitable that A.I. will transform our prospective powers in meaningful new ways, for better or for worse. But it would be nice to think that all the technology that helped us understand the default network in the first place also ended up pushing us back to our roots: giving our minds more time to wander, to slip the surly bonds of now, to be out of the moment.

