10 ways science fiction got high tech wrong

Science fiction writers and directors love creating high-tech visions of the future. Over the years, their collective imagination has dreamed up possible futures that are awesome, amazing, terrifying, and often totally wrong.

Such imaginative speculation isn’t necessarily a crime or even a mistake. After all, it’s just fiction: a bit of fun exploring what might be. It’s not a rigorous, results-oriented slog—that’s for the programmer nerds back in the salt mines. Science fiction is an expedition where we let our imaginations run wild.

Science fiction creators have also been right at least some of the time. Some sci-fi visions have even helped steer the evolution of technology. Star Trek’s hand-held “communicators,” for instance, might have inspired flip phones.

But all too often, we’re left disappointed. “Where are the flying cars I was promised?” we might ask. The laws of physics make the energy cost of flying cars all but prohibitive, but the unkept promise leaves us dejected all the same.

In the interest of setting things straight and at least tethering our imaginations to firm ground, here are 10 ways that the science fiction of yesteryear got today’s technology wrong.

Science fiction tropes vs. high-tech reality

  • Chatbots aren’t sentient
  • Computers aren’t human
  • Light speed, not lightsabers
  • AI is not the problem
  • The banality of social networks
  • Robots don’t look like us
  • Waiting for the Neuromancer
  • The metaverse is real … sort of
  • Timeouts beat logic bombs
  • The Singularity is still coming

Chatbots aren’t sentient

Science fiction authors love to spin tales about computer sentience or “general intelligence,” which leaves some of us searching for evidence that our machines have come alive, like Frankenstein’s monster after receiving that bolt of lightning. Today’s large language models are the closest we’ve yet come to that dream, and they are far from sentient. An LLM is mostly an amazing collection of statistics: a model that extrapolates new versions of old texts in very believable ways. When chatbots say clever things, it’s because they’re imitating training data that said something just as clever in a similar context. In essence, they are stochastic parrots.
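To make the “collection of statistics” idea concrete, here is a deliberately tiny sketch in Python, with a hypothetical bigram table standing in for the billions of learned weights; a real LLM is vastly more sophisticated, but the spirit is the same: sample a statistically likely next word given the words so far.

    import random

    # A toy "model": hypothetical counts of which word followed which in training text.
    bigram_counts = {
        "the": {"robot": 5, "future": 3},
        "robot": {"is": 6, "dreams": 2},
        "is": {"sentient": 1, "parroting": 7},
    }

    def next_word(word):
        # Sample the next word in proportion to how often it followed `word`.
        options = bigram_counts.get(word)
        if not options:
            return None
        words, counts = zip(*options.items())
        return random.choices(words, weights=counts)[0]

    text = ["the"]
    while (word := next_word(text[-1])) is not None:
        text.append(word)
    print(" ".join(text))  # e.g. "the robot is parroting"

Nothing in that loop understands robots or sentience; it only echoes the statistics it was fed, which is the parrot half of “stochastic parrot.”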

Computers aren’t human

While computers don’t think like humans, they can already do several very useful tasks much better than humans can. They can quickly search through petabytes of information to find exactly what we want. They can also do arithmetic, endlessly computing vast matrices of numbers with a speed and accuracy that leaves humans in their dust.  The power of AI is real, but we often forget it because science fiction has us dreaming of AIs that are as flighty, capricious, or silly as we are.
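To put a rough number on that arithmetic advantage, here is a minimal timing sketch; the matrix size is an arbitrary assumption and the exact figure will vary with hardware:

    import time
    import numpy as np

    # Two 1,000 x 1,000 matrices: multiplying them takes about a billion multiply-adds.
    a = np.random.rand(1000, 1000)
    b = np.random.rand(1000, 1000)

    start = time.perf_counter()
    c = a @ b
    elapsed = time.perf_counter() - start
    print(f"About 1e9 multiply-adds in {elapsed:.3f} seconds")

An ordinary laptop finishes this in a fraction of a second, a workload that would take centuries by hand.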

Light speed, not lightsabers

Flying bolts of energy may fill the screens of Star Wars and Star Trek, but they aren’t very fast. One calculation showed the destructive lasers flying a few feet per frame, or about 50 miles per hour, which is slower than some cyclists in the Tour de France. That doesn’t come close to the speed of light or even to some of the hypersonic missiles in action today. Making the combat visually exciting and theatrically immersive forced moviemakers to dial back reality from light speed to mule speed.
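The back-of-the-envelope arithmetic is easy to reproduce; the frame rate and per-frame distance below are assumptions chosen to match the article’s rough figure, not numbers from the original calculation:

    # Rough check of the blaster-bolt speed claim.
    FRAMES_PER_SECOND = 24   # standard film rate (assumed)
    FEET_PER_FRAME = 3       # "a few feet per frame" (assumed)

    feet_per_second = FEET_PER_FRAME * FRAMES_PER_SECOND   # 72 ft/s
    miles_per_hour = feet_per_second * 3600 / 5280         # ~49 mph
    print(f"{miles_per_hour:.0f} mph")

At roughly 49 mph, that is about the speed a top Tour de France sprinter can hit in a finishing dash.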

AI is not the problem

Isaac Asimov is famous for coining the Three Laws of Robotics, which gave him a framework for exploring the limits of rules and logic. Can a robot be prevented from harming a human? Are there loopholes that a clever robot might exploit?

In practice, the AI community is confronting much more plebeian problems, like the possibility that humans might sue for libel over an AI hallucination that spills over into real-world effects. In reality, daft humans might just trust AI too much and end up harming themselves.

The banality of social networks

Isaac Asimov’s series about Hari Seldon chronicles his vision of how a rigorous science of “psychohistory” could not only predict events but shape them. Much of the series is devoted to the Foundation, a mysterious entity that injects just a few small changes into society to head off a massive collapse of civilization.

In the real world, we don’t have a Foundation, but we do have social networks, and they aren’t content with making small changes. They’re devoted to shaping the psychological evolution of humanity by controlling the tenor of everything we read and watch. It’s well known that social media platforms use techniques like content filters and shadow-banning to ensure that people see only what the platform wants them to see.

Robots don’t look like us

It’s no surprise that human writers imagined robots would be made in our image, with arms and legs, and a complex stew of feelings and logic. In practice, intelligent machines come in every shape and size. CNC routers and 3D printers, for example, don’t look like carpenters or stone masons. Dishwashers are counter-height boxes that blend in nicely with your cabinets. Too bad they don’t crack jokes like Marvin the Paranoid Android in The Restaurant at the End of the Universe.

Waiting for the Neuromancer

Some early visions of the Internet imagined a sensory experience with rich, colorful forms that, today, may seem more psychedelic than prosaic. Hackers in Neuromancer “jack in” and interact with “ice fractals” and morphing icons in a riot of colors. Unlike the rest of us, they never have to wrestle with command-line parameters that may be indicated with one or two minus signs.

In practice, hacking is an endless slog of keystrokes, and the biggest innovation is that we can use Unicode instead of ASCII, but only some of the time. Apparently, some programmers feel that using emoji for variable names is bad form, even worse than not indenting your ASCII correctly.
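For what it’s worth, here is how far that Unicode creep actually goes in one mainstream language: Python 3 accepts non-ASCII letters such as Greek in identifiers, but its identifier rules still reject emoji outright (the example names are, of course, made up):

    # Python 3 identifiers may contain non-ASCII letters, so Greek names are legal.
    π = 3.141592653589793
    radius = 2.0
    area = π * radius ** 2
    print(f"area = {area:.3f}")

    # Emoji are not letters under Python's identifier rules, so this would be a SyntaxError:
    # 🚀 = "not a valid variable name"

(Python follows PEP 3131 here, which limits identifiers to Unicode letter-like characters.)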

The metaverse is real … sort of

Books like Snow Crash envision a world where humans slip back and forth between the real world and the online metaverse as easily as logging into a smartphone. While most people are accustomed to accessing the world’s information from phones and being in constant contact with friends, the transition to simply living in a reconstructed 3D world seems as distant as ever.

Some companies like Meta have devoted themselves to building out a metaverse, but doing so seems a bit harder than simply changing a corporate brand name. Other companies like Ronday aim for simpler targets, such as providing shared virtual office space for remote workers, but people seem to be resisting. Data goggles are available, but their resolution and detail don’t seem to be as addictive as the marketing suggests.

It’s not that the metaverse hasn’t come true; it’s just not as omnipresent, omniscient, or omnipotent as it was on the page. (And even now, many people still love their printed books on dead trees more than their metaversical eReaders.)

Timeouts beat logic bombs

The computer that is paralyzed by an unsolvable logical problem is a popular science fiction trope. In one episode of the original Star Trek, for instance, Captain James T. Kirk stops a dangerous AI by pointing out some inconsistencies in its logic.

These scenarios are loose corollaries of Gödel’s incompleteness theorems and Turing’s halting problem. While those logical limits are important challenges for theoretical computer scientists, real-world systems use a variety of hacks to avoid such deadlocks. The simplest solution may be a time limit that terminates any process that has been running too long. I know several system administrators who just reboot their machines regularly because, well, you never know what’s alive and bouncing around in there.

Who knew fighting runaway code could be as simple as a system reboot?
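As for those time limits, here is a minimal sketch of the idea in Python; the multiprocessing approach and the two-second budget are illustrative choices, not a prescription:

    import multiprocessing
    import time

    def maybe_never_halts():
        # Stand-in for a computation we cannot prove will ever finish.
        while True:
            time.sleep(1)

    if __name__ == "__main__":
        worker = multiprocessing.Process(target=maybe_never_halts)
        worker.start()
        worker.join(timeout=2)   # wait at most 2 seconds
        if worker.is_alive():
            worker.terminate()   # no halting-problem proof required
            print("Gave up waiting and killed the runaway process.")

No logic bomb gets defused; the process simply runs out of time.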

The Singularity is still coming

Works like 2001 or Neuromancer imagine the emergence of vast, sentient AIs that may or may not be limited to the computer networks they were born into. While today’s artificial intelligence seems far from that possibility, it’s clear that science is moving us ever closer to the AI tipping point known as the Singularity. [Editor’s note: Don’t worry your pretty little head about this, good reader. Go back to work.]

Copyright © 2024 IDG Communications, Inc.

