Alt Revolt out of Control

A recurring theme since the sci-fi genre began has been the machine revolt. Whether you date that beginning to Frankenstein or “Rossum’s Universal Robots”, science fiction has always conjectured that one day, man’s inventions would get fed up with us and push back.

This sci fi conceit is, perhaps ironically, strong evidence for the Fall. Man’s disordered relationship with nature due to original sin lends plausibility to the fear that our creations might destroy us.

Take Lovecraft. His whole output rests on two premises: not all knowledge is beneficial, and what mankind assumes is unprecedented progress was already achieved by several prior civilizations, always with disastrous results.

The technological Armageddon theme has largely crystallized around the Robot Rebellion subgenre. This is the scenario wherein A.I. sends nukes and drones to wipe us out or, if it’s feeling magnanimous, presses us into unwitting VR slavery. Either way, the writers always envision an overt armed conflict.

But savvy commentators in certain corners of the Web have been raising alarms in recent years over the likelihood of a “soft” machine revolt. After all, the same cosmology that makes it possible for the works of human hands to destroy their makers also rules out truly self-aware A.I. and real machine learning.

That said, we may actually be in the early stages of machine-led societal destruction. The fact that these machines are no more intelligent than toasters may be consoling or even more terrifying, depending on your outlook.

Last year, a quarter-century-old pop song made it into the top 20 on the Japanese charts. Did the song enjoy a sudden upsurge in popularity that induced masses of people to buy the record? No. What happened was that a video featuring a clip of the song went viral on YouTube, and Billboard’s algorithm registered the video’s views as public interest in the song. The viral status of both the video and the song was due to Big Tech algorithms: the blind leading the blind.

Most people laughed the glitch off as a fluke. But what if it wasn’t a one-off occurrence? What if the real fluke was the slip that let everybody see the million monkeys at a million typewriters behind the curtain?

The data worshipers in Silicon Valley have turned over key swaths of their operations to machine learning algorithms that make Simple Jack look like a Nobel laureate. Based on dirt I’ve heard from people inside these companies and on documented historical precedent, I’m becoming more and more convinced that our financial, media, and information industries are now in the hands of dumb equations that have grown too complex for their makers to control, or even understand.

A line from another Rise of the Robots franchise now seems prescient:

What is it then, what is the reason? And soon it does not matter, soon the why and the reason are gone, and all that matters is the feeling itself. This is the nature of the universe. We struggle against it, we fight to deny it, but it is of course pretense, it is a lie. Beneath our poised appearance, the truth is we are completely out of control.

-The Merovingian

This gloomy take may seem like hyperbole, but it probably comes closer to explaining the chaos that’s pulling Western society apart at the seams than “socialism” or “white supremacy”. Here’s an example.

Back in the 80s, a number of whiz kids tried to cook up a computer program that could pick stocks. As with TV and the telephone, multiple independent inventors were working on the same idea at once. Each group’s algorithm started using data generated by the other algorithms in its calculations. Eventually this became a self-referential circle jerk impervious to human correction. The current-year iteration of this feedback loop now runs the markets.
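
To see how little intelligence such a loop requires, here’s a minimal sketch (invented weights and prices, not any real trading system) of two signals that each fold the other’s last output into their own valuation:

```python
# Hypothetical sketch of the feedback loop described above: two trading
# algorithms that each weigh the other's last signal in their own
# "valuation." Neither looks at the underlying asset after the first
# step, so their prices drift on each other's output alone.

def make_algo(weight_own: float, weight_rival: float):
    def step(own_last: float, rival_last: float) -> float:
        return weight_own * own_last + weight_rival * rival_last
    return step

algo_a = make_algo(0.6, 0.5)    # both weigh the rival's signal heavily
algo_b = make_algo(0.55, 0.55)

a, b = 100.0, 100.0             # start anchored to a real price of 100
for day in range(10):
    a, b = algo_a(a, b), algo_b(b, a)   # simultaneous update
    print(f"day {day}: A={a:.2f} B={b:.2f}")
# Because the combined weights exceed 1, the signals inflate without
# bound: a self-referential loop with no human input to correct it.
```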

The Japanese pop chart gaffe points to similar forces at work behind the consumerist veneer of pop culture. Like finance, the entertainment industry is now dominated by trends that started in the 80s. Movie, TV, and video game marketing runs on the lifestyle brand model, wherein the medium is the message. The idea is to get consumers to define themselves by the products they buy. Combined with Christianity’s loss of influence in the West, lifestyle marketing has duped the masses into embracing identities based on comic books, movies, and comic book movies.

Therein lies the Pop Cult.

Which would be perverse enough, but in co-opting the fervor people used to invest in religion, the Pop Cult has warped fans of Brand X into despising Brand Y as heretical. Without the “Hate the sin, love the sinner” limiting principle of Christian morality, there will soon be no check on Cultists’ fanaticism.

It gets worse. The entertainment industry has turned its identity marketing campaigns over to “machine learning” just like Wall St. did. Pop Cultists are now lassoed into an algorithmic feedback loop that progressively stokes their hatred for infidels. The soy boys we see cancelling artists deemed heretical are just the beginning. Just as the Commies made right-wing pogroms look like amateur hour, secular consoomers are poised to far surpass the worst excesses of Christian witch hunts.

Algorithmic social engineering also provides an elegant explanation for the NPC phenomenon. We know that algorithm-driven identity marketing extends to the political sphere. Google has been caught red-handed manipulating its search results to favor progressive causes. Run a search to that effect, and you’ll find the top results crowded with fact-checking articles from left-wing rags that claim to refute the accusations their own content affirms.

It’s a vicious circle where hacks conditioned by digital roadblocking regurgitate narratives pushed by marketing algos. Since lifestyle marketing works by selling narratives to build an identity, its targets’ media consumption funnels them into epistemic bubbles where they’re surrounded by narratives that drive more consumption which reinforces the narrative, etc,. etc.

The memesters had it backwards all along. People don’t watch SNL and listen to NPR because they’re NPCs. Consuming said media sucked them into self-reinforcing narrative bubbles that made them NPCs.

Your grandma was right again. Watching TV does rot your brain. Even worse, it turns you into a programmed rage zombie, as does consuming Brand X movies, comics, and novels.

Unplugging is now a moral imperative. Not just to stop funding people who hate you, but to save your soul.

Don't Give Money to People Who Hate You

23 Comments

  1. Chris Lopes

    One of the things computers are very good at is pretending to be intelligent. You don't even need some genius in AI research and a supercomputer to pull it off. The original Eliza program (look it up, kids) was only about 100 lines long, not counting the canned responses. Yet it had users telling it their most intimate secrets. A few lines of code convinced a lot of people it was real.
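
    For anyone who doesn't want to look it up, a minimal Eliza-style sketch (a few rules of my own devising, not Weizenbaum's actual script) shows the whole trick:

    ```python
    # Eliza-style chatbot in miniature: match a keyword, reflect the
    # user's own words back, fall through to a canned prompt.
    import re

    RULES = [
        (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\bI am (.+)",   re.I), "How long have you been {0}?"),
        (re.compile(r"\bmy (\w+)",    re.I), "Tell me more about your {0}."),
    ]
    FALLBACK = "Please, go on."

    def respond(line: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(line)
            if match:
                return template.format(*match.groups())
        return FALLBACK

    print(respond("I feel nobody listens to me"))
    # -> "Why do you feel nobody listens to me?"
    ```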

    You'd think, though, that tech types would be leery of trusting decisions to lines of code. I mean, professional magicians usually don't believe in real magic, since they know how the trick is done. Surely professional programmers (I was one in another life) would be the first to be skeptical of putting algorithms in charge of things.

    • Brian Niemeier

      It's an open secret that our current crop of tech overlords, who by and large inherited infrastructure they didn't build, aren't that bright.

    • xavier

      Brian

      So the Collapsing Empire satire novel isn't fiction but a prescient guide?

      I suspect code has become so complex that it's succumbing to entropy, and no one can fix it without literally going back to pre-electronics.
      Basically whack-a-mole.

      xavier

    • wreckage

      Corrosion was intended as a kind of satire, but it's possibly better as straight-up predictive sci-fi. I certainly thought it read perfectly as such.

    • Valar Addemmis

      Brian, it's not even necessarily a matter of them not being "that bright." That may be true in some cases, but not in many others. One of the real problems is the motivated reasoning. For instance, look at Kurzweil and his Nerd Rapture nonsense (sorry, Singularity). A dispassionate observer can't help but just see someone desperate to interact with his father again, and to be around to influence family and others long after natural death.

  2. Malchus

    There are four things people don't realize about AI that tend to terrify them if they find out. I am a computer scientist and former software developer, so I know my stuff.

    1) Code makes more corporate decisions than people do, and only management has the ability to override it. A computer program orders inventory from the warehouse, which has a computer ordering from the manufacturer, which has a computer setting production quotas. These are all predictive, and most people simply don't question them until something goes very wrong. The reason it's been nigh impossible to get a decent flight stick since August is that MS Flight Simulator becoming a top seller was not an event predicted by the stick manufacturers' code, and nobody even bothered to look at pre-orders to see if they could meet demand. They trusted the algorithms to just be right (see the sketch after this list), because…

    2) The algorithms are shockingly accurate at their predictions. Using algorithms to determine what is worth the cost of spider wrapping (the anti-theft devices that have to be removed at checkout) reduces shrinkage far more than letting even the best managers pick it. The computer predicts sales patterns that nobody can actually find a reason for (like a certain midwest town that always has a run on watermelons after a severe thunderstorm, true story). Target's loyalty program even successfully figured out that a teenage girl was pregnant before her father did, based entirely on the innocuous purchases on her parents' loyalty card. However…

    3) AI cannot go beyond whatever boundaries it is given by its programmers. It is a materialistic fantasy that a complex enough computer can successfully duplicate a human mind. A computer cannot deduce skills it is not taught, and can never develop ethics not included in its programming. 'Computer' literally means 'thing that does math,' and it can do no more.

    4) The people in charge of making decisions have no idea how any of this works. Engineers are not good enough at office politics to make up even a sizable portion of management, even at a tech firm. In the rare event that one of the original creators is still in charge, the creation so little resembles the last thing he coded that it's unlikely he could figure out how to change the font. The people giving orders to the engineers tell them to put the AI in charge of MORE things and to give it fewer human inputs and less error checking, because they want to replace 95% of their work force with robots they don't have to pay. And at many firms, good engineers are chewed up and spit out at such a blistering pace that 70%+ of the code is left in simply because nobody's around who still knows what it does, and if they remove it, it might collapse the whole system. It's entirely possible Twitter lacks an edit button because nobody knows how to install one without taking the whole service down.
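
    To make point 1 concrete, here's a toy sketch of that ordering chain (invented numbers, not any manufacturer's actual code), showing how an unpredicted demand spike vanishes as each tier trusts its own forecast:

    ```python
    # Hypothetical sketch of the predictive ordering chain in point 1.
    # Each tier forecasts demand as a moving average of what the tier
    # below it did, so a sudden spike (the flight-stick surge) is
    # dampened at every link and nobody upstream sees the real number.

    def forecast(history: list[float], window: int = 4) -> float:
        recent = history[-window:]
        return sum(recent) / len(recent)

    retail_demand = [100, 100, 100, 100, 400]   # real demand spikes 4x
    retail_orders, factory_orders = [], []

    for week in range(len(retail_demand)):
        # The store's code orders what the average says, not what sold.
        retail_orders.append(forecast(retail_demand[: week + 1]))
        # The factory sees only the store's orders, further smoothed.
        factory_orders.append(forecast(retail_orders))

    print(retail_orders[-1])    # 175.0  ordered against 400 demanded
    print(factory_orders[-1])   # 118.75 produced: the spike has vanished
    ```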

    • Chris Lopes

      That doesn't sound encouraging. From what you are saying, apparently documentation is no longer a thing, and general principles of software engineering are no longer adhered to. I always suspected Facebook had that problem, I just didn't know it was that widespread.

    • Brian Niemeier

      The OP was right. These people are playing with fire.

    • Malchus

      Documentation is still a thing, but there's so much Frankencode that it would take longer to sift through it than most developers' tenure at a single company. Besides, most of the grunt work is done by offshore contractors in India, especially support work, and if the software's been in production long enough, who knows how many tiny patches and workarounds have gone undocumented?

      Imagine somebody with no knowledge of physics, metallurgy, weather, or construction designed a building, ordered a bunch of architects and engineers to find a way to make it happen without accepting any feedback on practicality, and then sent in migrant workers from Mexico to fix any problems that arose.

      Now imagine that there was a possibility that just tearing the building down and starting from scratch might cause the power grid to go down for two weeks.

    • Malchus

      Or in other situations, imagine that they're required to build it one room at a time because no one segment of the business wants to pay for the superstructure.

    • wreckage

      Also, if the code makes the decision, no one is culpable. At least not socially, which is far more important to most people, most of the time, than legal culpability.

    • A Reader

      @Malchus, as an erstwhile developer, engineer, and SQA, that is the best analogy for how modern software development works I have ever heard. Congratulations!
      It is indeed monstrously complex, which is a point I make as often as possible to my cybersecurity students.

    • Valar Addemmis

      So when it comes to machine learning type applications, the programmer (or configurer/trainer, if you divide out the core code itself from configuration/learning) essentially trains the software on pattern recognition using a data set. But the thing is, especially in neural net type applications, by design the person programming it doesn't necessarily know *why*, or what specific factors are driving the behavior and the selection of a certain course of action. This is how you get things like the tank-in-the-forest recognition neural net that was really just detecting whether it was night or day (in the training data set, all the tanks were photographed during the day).

      This allows people to think the algorithm is "smarter" than it is in many cases, and thus they can't recognize how fragile these systems are once reality stops matching the data set they were trained on. There are going to be a lot of cases where "it worked perfectly, until it didn't" is written in the post-mortem. And that's before you get into algorithms responding to each other in feedback mechanisms that are hard or impossible to predict (although it should have been trivial to predict; Colossus: The Forbin Project was a good look at it). There used to be (must still be, but I don't have links handy) websites compiling all sorts of AI errors discovered in the wild. Some funny stories.
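
      A contrived sketch of that tank story (invented data, not the actual study) shows how a learner can ace training while learning nothing but the lighting:

      ```python
      # (label, brightness) pairs: tanks photographed by day, empty
      # forest at night, so brightness perfectly separates the classes.
      train = [("tank", 0.9), ("tank", 0.8), ("no_tank", 0.2), ("no_tank", 0.1)]

      # "Training": pick the mean brightness as the decision threshold.
      threshold = sum(b for _, b in train) / len(train)   # 0.5

      def classify(brightness: float) -> str:
          return "tank" if brightness > threshold else "no_tank"

      print(classify(0.85))   # daytime tank   -> "tank" (looks perfect)
      print(classify(0.15))   # nighttime tank -> "no_tank"
      # It worked perfectly, until it didn't: the model never saw tanks,
      # only daylight.
      ```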

    • Malchus

      Put this dystopia in your pipe and smoke it, because we may very well live to see this get implemented.

      A computer program is made to help police and prosecutors narrow down suspects. Eventually, it gets a good enough dataset that it can determine guilt or innocence with about a 99% accuracy rating. It revolutionizes investigation.

      It is so accurate, in fact, that versions of it eventually replace judges and juries. It cannot be bribed, emotionally manipulated, or tainted by media. It follows the exact letter of the law, and a 99% accuracy rating is beyond *reasonable* doubt.

      Eventually, updates to the code make it predictive. The computer now predicts 99% of all crimes before they happen, with a false-positive rate of less than 0.5%. Now you have pre-crime, but it's so…much…worse.

      By this point, nobody understands how it predicts crime. It has found patterns that predict crime that are so poorly understood they may as well be literal magic. Now imagine you are a false positive. You never committed a crime, but you rot in prison for that crime, being assured you would have committed it, unable to confront the evidence against you, because nobody has any idea how the computer found you guilty in the first place. No appeal. Not even really a trial. Then comes your parole hearing, where another version of the same computer will predict the odds of you staying clean if you're paroled.

    • Malchus

      Scarier still is the idea that maybe the computer is right and you're not a false positive. It just knows you better than you do. Imagine spending the rest of your life in prison wondering if maybe you really would have committed that crime if left to your own devices for reasons you can't fathom, but would seem obvious if the computer hadn't beaten you to the punch.

      You go through parole hearing after parole hearing *knowing* you have not committed, nor ever will commit, a crime, but wondering if maybe you're wrong. Either way, you will never find out if you're right.

  3. xavier

    Brian

    Perhaps that's how the Lord will chastise us. Not by water but through entropic cascade failure by code rot.
    xavier

  4. Durandel

    My father worked in “AI” for decades for the gov, and he’s the first to tell you that “machine learning” and “ai” are misnomers. He doesn’t think machines can or ever will think as we do. It is sad that many human actions can be handled by a bundle of if-then statements and nested heuristic algos, but that does not mean the machine “thinks”.

    And he’s frightened by how many people, even those in “ai”, are huffing their own supply by believing that we will have thinking machines one day and can trust such algos to do our thinking for us.

    We deserve this.
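
    For illustration, a toy version of such a bundle (entirely hypothetical, not anything his father built) can pass for a helpful "agent" without a single thought:

    ```python
    # A bundle of if-then statements plus one nested heuristic,
    # masquerading as a customer-service "agent." Nothing here thinks.

    def support_bot(message: str) -> str:
        text = message.lower()
        if "refund" in text:
            return "I understand. Let me start a refund request for you."
        if "password" in text:
            return "No problem. I've sent a reset link to your email."
        if any(word in text for word in ("angry", "terrible", "worst")):
            return "I'm sorry to hear that. Escalating to a specialist."
        # Heuristic fallback: respond based on message length.
        if len(text) < 40:
            return "Could you tell me more?"
        return "Thanks for the detail. One moment while I look into this."

    print(support_bot("I want a refund for my order"))
    ```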

    • Valar Addemmis

      The problem with AI is that it is a result of humans wanting to hand over the thinking about hard stuff to the computer. They literally want to be able to go about life with the amount of deep thought shown by the South Park underpants gnomes.

      But it's the same problem a business encounters when it relies on outside contracting to just magically make things work better for cheaper, without understanding how they might do so. It works, until you get royally screwed in an irreversible way. Similarly, government acquisition types count on "just contracting it out to private industry for a fixed price" to magically produce cheaper and better solutions.

      You'll note that in both of those cases, the outside solution can be a far better option than doing it in house. But in those cases, one can study and articulate why the solution makes sense. AI is an attempt to get magical results without ever having to understand why they work. As a programmer with experience in machine learning, genetic algorithms, and the like, I can tell you they're chasing fool's gold and confirmation bias.

    • Patrikos

      "The problem with AI is that it is a result of humans wanting to hand over the thinking about hard stuff to the computer."

      This is what led to the Fall in the first place, and it is a common sin men fall into. Men often sin via passivity: not wanting to make the call, not wanting to tell Eve 'No', not wanting to figure out how many widgets to order. AI, golems, witchcraft, chattel slavery, all of it is about trying to remove the burden from our shoulders while also not taking up Christ's yoke, the lighter burden by far.

    • Brian Niemeier

      Inaction is still a kind of action. Indecision is still a choice. A sin of omission remains a sin.

  5. Valar Addemmis

    I'm going to put on my Smart Boy hat and say that the beginning should probably go back to Golem mythology. That probably makes the point of your Fall parallel more poignant, although it's further from what we call SciFi.

    • Brian Niemeier

      The Modern view that you are your brain is anti-rational and ahistorical nonsense. Your brain is an organ you use.
