Tech Used in Netflix Show ‘Black Mirror’ That Exists or Is on the Horizon
There are things about your life today that you would have trouble explaining to yourself just five years ago. That’s where the sci-fi anthology series Black Mirror lives—in the blurry space between the present and future, often made indistinguishable by the rate at which innovations develop around us. While the show certainly preys on the fears of technophobes, Black Mirror is also a brilliant exercise in prediction.
At many points the series is prescient. Episodes, all of which are set in the not-too-distant future, are sometimes released only to have the tech or major plot points featured in them pop up in real life. The tech itself may, at first, seem unrealistic, but the show’s producers’ diligent attention to detail grounds the show with a hearty sense of plausibility.
RealClearLife thought it would be interesting to explore how far off some of these fictional technologies are. Black Mirror adapts innovations like augmented reality and artificial intelligence in their incipient forms. In some instances, these fictional advancements are decades away. In others, they’re already here, and you may not even know it. (Spoiler Alert: We included a few minor spoilers but didn’t reveal any major plot points.)
Although Black Mirror is an anthology, with each episode independent of the others, some tech appears multiple times. Brain-controlled contact lenses—basically, advanced bionic eyes—are featured in the episodes “White Christmas,” “The Entire History of You,” “Nosedive,” and “Men Against Fire.” The device imagined in the series takes the augmented reality features of Google Glass or Magic Leap and incorporates them into a contact lens, making it almost a synthetic eyeball. Characters use the lenses to livestream their current experiences; review old, recorded memories; peruse social media; see through walls via infrared imaging; or “block” people they don’t want to see. (See an example of the lens in the trailer for “White Christmas” below.)
Bionic eyes already exist at a rudimentary level. Second Sight, a biomedical tech company, currently produces the only FDA-approved device that uses a glasses-mounted camera to send shadows and outlines to an eye implant connected to the optic nerve. Kohitij Kar, a researcher at the MIT Department of Brain and Cognitive Sciences, predicts wearable tech like that in the show could be available to consumers in five years. Given the immense amount of external hardware needed to make that happen, such a device would have to work in conjunction with a smartphone or another device. Kar told The Ringer that the bionic eye would likely be used by the military before it became a consumer product. The MIT researcher also thinks mind-controlled contact lenses are feasible, but farther off than ones tethered to a smartphone.
Without giving too much away, insect drones are a major component of the plot of the episode “Hated in the Nation.” It’s set in a world where honeybees are dying off, and drones equipped with cameras and microphones have been introduced to fill the void as pollinators, operating out of hives just like real bees.
Currently, a bevy of animal-like micro-robots—butterflies, caterpillars, dragonflies, and ants—are in development. They’ll be used everywhere from the battlefield to scientific research laboratories. As far as the bee drones are concerned, hive-minded autonomous robots, such as the Zooids developed by a French and American research team (see video below), are already here. Even flying surveillance drones the size of bees, like the Black Hornet Nano, are already in use by the British and Norwegian militaries. Reporters at Robohub, a robotics news site, think micro-drones—in particular those that fly—need a significant amount of research and development before they get to market. You can read all their thoughts here.
Augmented Reality Brain Implant
In the series’ latest season, the episode “Playtest” depicts an American traveler in Europe who volunteers as a human guinea pig in trials for an augmented reality gaming platform that uses a brain implant. After a spinal injection, he is able to experience an augmented reality so immersive it’s almost indistinguishable from true reality. While this plot may sound like something from the distant future, gaming developers and other tech companies have been working on this exact type of experience for years.
The crucial difference between the tech featured in “Playtest” and something like Magic Leap, a groundbreaking AR headset set for release later this year, is how the visuals are produced. Both AR and VR employ externally created visuals; the implant in “Playtest” instead tricks the mind into seeing things that aren’t really there. Biomedical interventions—though nowhere near as small as those depicted in Black Mirror—have already been used to generate false sensations like touch and smell. Using a brain-machine interface that incorporates virtual reality, Duke University neuroscientists were able to return feeling and movement to paralyzed patients in August 2016. Later that year, a similar device was used by a paralysis patient on a day-to-day basis for the first time ever. As groundbreaking as these interfaces may be, they have yet to emulate the final frontier of false sensations: sight.
Noted futurist Ray Kurzweil predicts something like brain implants that generate false visuals will exist in the 2030s, and it would likely start with biomedical nanotechnology. “I see this starting with nanobots in our bodies and brains,” Kurzweil said in a discussion about his book, The Singularity Is Near. He continues:
“The nanobots will keep us healthy, provide full-immersion virtual reality from within the nervous system, provide direct brain-to-brain communication over the Internet, and otherwise greatly expand human intelligence.”
Some industry experts, like Intel scientist Dean Pomerleau, expect the resistance to embedded devices to ease in the next decade—to the point where many use brain-embedded microchips to casually browse the web.
Artificially ‘Resurrecting’ the Dead
The episode “Be Right Back” (trailer below) follows a woman (played by Hayley Atwell) struggling to overcome the death of her husband. Out of desperation, Atwell’s character basically brings her husband back to life in various iterations. First, she uses a chatbot that mimics her husband’s responses based on saved text messages and social media posts, right down to his usual pithy quips. However, left yearning for the real thing, Atwell’s character opts for the fictitious tech company’s high-end alternative—a humanoid robot powered by the AI that emulated her husband.
Resurrecting a person as an AI-driven chatbot has actually been done already. Eugenia Kuyda, one of the co-founders of a robot messaging service called Luka, “recreated” her best friend, Roman, as a chatbot using old messages from the Telegram app after he died in a traffic accident. “It’s still a shadow of a person—but that wasn’t possible just a year ago, and in the very close future we will be able to do a lot more,” Kuyda wrote. Currently, there are a host of startups, such as Replika, that offer personality-specific chatbots that mimic people for reasons just like this.
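One way to caricature this kind of memorial chatbot is as retrieval: answer a new message with the person’s own stored reply whose original context looks most similar. The sketch below is purely illustrative—the data, names, and bag-of-words similarity are invented for this example and are not Luka’s or Replika’s actual method:

```python
# Toy retrieval-based "memorial" chatbot: reply to a prompt with the
# archived response whose original incoming message is most similar.
from collections import Counter
import math

# Hypothetical archive of (incoming message, person's actual reply) pairs.
ARCHIVE = [
    ("how are you", "surviving, as usual"),
    ("want to grab dinner tonight", "only if you're buying"),
    ("did you see the game", "don't remind me, we got crushed"),
]

def _vec(text):
    # Crude bag-of-words vector; a real system would use learned embeddings.
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def reply(prompt):
    """Return the archived reply whose original context best matches the prompt."""
    scored = [(_cosine(_vec(prompt), _vec(p)), r) for p, r in ARCHIVE]
    return max(scored, key=lambda t: t[0])[1]

print(reply("hey, how are you doing"))  # echoes the person's own phrasing
```

The point of the sketch is the limitation Kuyda describes: the bot can only remix what the person already said, which is why she calls the result “a shadow of a person.”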
But there’s obviously a difference between raising the dead as a screen-bound AI and building a robot that can replicate a deceased person. There are two parts to this: the body and the brain. The latter arguably exists in the form of personal AI assistants like Apple’s Siri or Amazon’s Alexa. While they don’t have the most robust personalities, that’s likely because they’re designed to appeal to a mass market. Using technology like that of Replika or Luka, it’s easy to imagine a more personable AI. Replicating the body is less feasible, at least currently, since it requires major advancements in bio-fabrication (the science of manufacturing living tissue). That said, an incubation center for this very type of emerging tech will open at Metro North Hospital in Brisbane, Australia this year. A recent WEF poll of 800 industry experts predicted 3-D–printed organs will be mainstream by 2024. Once that threshold is crossed, people-mimicking bots become far more feasible.
Much like the bionic lenses, storing human consciousness on a computer—a concept called “mind uploading”—is an innovation that appears in numerous episodes of the show, from “White Christmas” to “San Junipero.” Whether the tech is used as a tool for imprisonment or liberation depends on the episode, but those who have their minds “uploaded” operate like a normal person, just within their own reality stored on a hard drive instead of a brain.
In general, this is a pretty far-off concept. The biggest hurdle to developing technology like this is mapping brain connectivity, which could be overcome by constructing an artificial brain for examination. In doing so, we’d gain a better understanding of what exactly defines consciousness. Even then, replicating human consciousness is a major feat that would likely be preceded by doing so for smaller organisms. Something like this, Duke neuroscientist Mikhail A. Lebedev thinks, will likely happen within a decade. Copying human brains, however, is much farther off—closer to 100 years away, by one George Mason economist’s prediction. Rest assured, though, startups like Humai are trying to overcome these obstacles so we can all live inside our computers forever (or more than we already do, anyway).
Hierarchical, Social Merit–Based Society
In “Nosedive,” the first episode of the latest season, Bryce Dallas Howard plays an insecure woman looking to impress at her friend’s upcoming wedding. The episode is set in a society where people are given scores based on their social media presence, which creates stratification and is tantamount to a rule of law. Scores are peer-determined, so it’s all about perception. Smile at somebody the wrong way, for example, and you may be docked a few points. If someone’s score drops too low, they’re incarcerated until they can behave more appropriately. It’s a truly fascinating exploration of how social media has been integrated into our daily lives.
Most aspects of this episode already exist in some form. Just think about how the Pokemon Go craze turned so many people into zombies shuffling along the street, glued to their smartphones. There’s even an app called “Peeple” that allows users to rate others and generates a score much like that in “Nosedive.”
The concept of “social merit”—judging an individual’s capabilities based on an aggregate of their personal information—is gaining popularity, especially in developing countries looking to use emerging technology to solve traditional problems caused by limited resources. China, for instance, is beginning to experiment with social credit scores, generating digital records that could be divided into ratings for financial, social, political, and legal credit. In this model, somebody applying for a job could have their chances of landing it hurt by a recent political rant they posted to Facebook. For now, though, that system is in its very early stages, hampered by the sheer scale of creating scores for China’s 1.4 billion people.
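As a toy illustration of the kind of aggregation the article describes—not any real system’s formula—a single rating could simply be a weighted blend of the four sub-scores; the weights and inputs below are invented:

```python
# Hypothetical weighting of financial, social, political, and legal
# sub-scores (each 0-100) into one aggregate "social credit" rating.
WEIGHTS = {"financial": 0.4, "social": 0.2, "political": 0.2, "legal": 0.2}

def aggregate_score(subscores):
    """Combine 0-100 sub-scores into a single weighted 0-100 rating."""
    return sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)

# A single bad sub-score (e.g. a political rant online) drags the whole
# rating down, which is the job-applicant scenario described above.
citizen = {"financial": 90, "social": 70, "political": 40, "legal": 100}
print(aggregate_score(citizen))
```

Even this trivial model shows why such systems are contentious: the weights are a policy choice, invisible to the person being scored.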
But the same notion is already being used by some banks in South America. Looking to serve the roughly two billion people worldwide who lack a bank account, phone carriers and banks there are starting to use utility and cell-phone data as metrics to generate credit scores, after seeing the approach succeed on a smaller scale with startups. At the end of 2016, two of those startups, Lenddo and EFL Global Ltd., partnered with Fair Isaac Corp., known for its FICO scores, to start assessing small business loans in Russia and India using phone data.