A majority of the writing on this blog consists of debunking moral panics of one sort or another — arguing that you, the reader, should worry less about the things you worry about, like antivaxxers and creationists and world-ending plagues and sexism and racism. I’m going to spend a bit of time in the next couple of entries talking about things I think you should worry about instead.
I famously abandoned my long-running Twitter account a few months ago because it had become a constant source of unhappiness for me. A big part of that was that most of my colleagues who had fully embraced the medium had gotten dragged into the whole “Trump Derangement Syndrome” zeitgeist, rarely talking about scientific issues any more and instead blathering on about hateful Wokian doctrines of one kind or another. Worse, many of them — even senior people who really have an ethical duty to work against these sorts of things — would gleefully join up with the cancel culture hate mobs that periodically form.
That’s bad, and it eliminates any possible use the medium could have for me, but in retrospect I think there was much more going on with Twitter’s ability to make me miserable than just the sorry-ass behavior of my fellow scientists. I am now convinced that Twitter was actively trying to make me angry. Not Twitter’s programmers or executives, mind you — Twitter itself.
Sounds crazy, right? Bear with me. When you log on to Twitter (or Facebook or other scrolling social media apps, for that matter), what you see is not a chronological sequence of content from the accounts you follow. It’s a curated collection of posts assembled by — something — based on — something. But assembled by whom, and with what criteria? Twitter has a gajillion users, so the curation has to be automated somehow, using an algorithm — which is just a word for a step-wise decision-making process, and is how computers do everything that they do.
But how does the algorithm work? It’s, of course, a closely guarded secret, as it’s central to Twitter’s business model. But my guess is that it’s a genetic algorithm — one that is capable of, to some degree, re-wiring itself in search of superior outcomes. My suspicion is that the algorithm experiments somehow with connections within the social network, finding “hub accounts” with many followers and showing you, the end user, posts from those accounts interspersed amongst posts from the accounts you actually follow. This would explain why you frequently see posts made by large accounts that you don’t follow.
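To make the idea concrete, here is a toy sketch of what I imagine the feed-assembly step might look like. Every name, number, and weight in it is my own invention for illustration; it is emphatically not Twitter’s actual code.

```python
import random

def assemble_feed(followed_posts, hub_posts, hub_weights, feed_length=20):
    """Interleave posts from accounts you follow with posts injected from
    'hub' accounts, where each hub's chance of being injected is
    proportional to a learned weight (an assumption, not a known fact)."""
    feed = []
    followed = list(followed_posts)
    # Copy the hub post lists so we don't mutate the caller's data.
    hub_queue = {hub: list(posts) for hub, posts in hub_posts.items()}
    hubs = [hub for hub in hub_queue if hub_queue[hub]]
    for _ in range(feed_length):
        # Roughly one injected hub post for every few followed posts.
        if hubs and random.random() < 0.3:
            hub = random.choices(hubs, weights=[hub_weights[h] for h in hubs])[0]
            feed.append(hub_queue[hub].pop(0))
            if not hub_queue[hub]:
                hubs.remove(hub)
        elif followed:
            feed.append(followed.pop(0))
    return feed

# e.g. assemble_feed(["post A", "post B"],
#                    {"@big_account": ["hot take 1"]},
#                    {"@big_account": 1.0})
```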
But how does it choose which of the thousands of major hub accounts to show you? At first, it’s random. But the algorithm is genetic, which means it EVOLVES. To do that, it needs a fitness metric — something it can assess after it runs to determine how well its current incarnation is working, and to compare different versions of itself in order to find better and better solutions. My suspicion is that the Twitter algorithm’s fitness metric is engagements — basically clicks — and that it “learns” over time what makes you click on things. Nodal accounts that make you click more often get strengthened in the algorithm, making it more likely that you’ll see them, or accounts connected to those nodes, in the future.
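Purely as illustration again, the strengthening step might look something like this. I’ve written it as a simple reward-the-clicks update rather than a literal genetic algorithm, and the learning rate is made up, but the logic is the one I’m describing: hubs that earn clicks get shown more, and the rest slowly fade.

```python
def update_hub_weights(hub_weights, shown, clicked, learning_rate=0.1):
    """Strengthen hubs whose injected posts were clicked; gently decay
    the rest. 'shown' and 'clicked' are sets of hub account names."""
    for hub in shown:
        if hub in clicked:
            hub_weights[hub] *= (1 + learning_rate)       # reward engagement
        else:
            hub_weights[hub] *= (1 - learning_rate / 4)   # mild decay otherwise
    return hub_weights
```

Run that update after every session and, over time, the weights concentrate on whichever accounts most reliably provoke you.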
Now, let me reiterate that I have no idea whether this is an accurate representation of the Twitter algorithm, but I suspect it’s close. What’s striking about the algorithm I described, however, is that it’s a form of neural net — in other words, it works more or less the same way your meat brain works, by strengthening well-used connections. Which means that the Twitter algorithm is, essentially, a brain, and what it is doing is a form of artificial intelligence. What’s more, it’s an intelligence that is expanding for the same reason that evolution selected for higher intelligence in our own lineage, and in others: you have to be smart to exploit larger and larger social networks. So not only is Twitter intelligent — it’s intelligent in roughly the same way, and for the same proximal reason, as human beings.
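To see why I say it works like a neural net, here is the same strengthening rule written as a single artificial neuron learning to predict whether you’ll click on a given account’s posts. Everything in it is illustrative and assumed, but the point stands: “connections that get used get stronger” is exactly how these models learn.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_click_predictor(history, n_accounts, learning_rate=0.5, epochs=20):
    """history: list of (account_index, clicked) pairs from past sessions.
    Returns one weight per account. Connections that lead to clicks get
    strengthened; ones that don't get weakened."""
    weights = [0.0] * n_accounts
    for _ in range(epochs):
        for account, clicked in history:
            predicted = sigmoid(weights[account])
            target = 1.0 if clicked else 0.0
            # Standard gradient step for a single neuron: the weight moves
            # toward whatever produced the observed behavior.
            weights[account] += learning_rate * (target - predicted)
    return weights
```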
But importantly, Twitter isn’t human. It doesn’t have awareness of itself, it doesn’t have a body, it has no understanding of physical reality, no concept of self vs. other, no need to find mates, and so forth. It is a brilliant manipulator of social networks, but its motivations are completely different from our own. And herein lies the problem — because the solution for Twitter to maximize its fitness is to make us SUFFER.
Remember, the algorithm evolves to maximize the likelihood that you, the user, will click on something. What do we click on? Well, we click on cute animal pictures and funny memes, sure, and you might have noticed that those frequently pop up in your feed unbidden. But we also click on things that make us angry. The concept of the “clickbait” headline is an old one — websites run outlandish headlines designed to piss off the maximum number of people, knowing that readers will hate-click on them and drive advertising money toward the website’s owners.
If I’m right, Twitter’s job is to turn human beings into clickbait. It learns what you like, and then intentionally shows you the opposite: content from people guaranteed to make you seethe with anger, knowing that you will hate-click on them, perhaps hate-retweet them, or even (ideally) start up a sub-tweeting viral hatestorm toward them, generating tens of thousands of clicks and keeping eyes glued to the Twitter feed that much longer. Cha-ching.
In the last year of my Twitter experience, I noticed this happening more and more. I would see posts in my feed that would really get me riled up, from various celebrities I despised and would never follow or interact with. Why the fuck would Twitter think I want to see what Sarah Silverstein has to say about something, unless it is intentionally provoking me? More insidiously, I noticed that I often only saw tweets from my colleagues when they said something politically inflammatory that I was likely to disagree with — but when I clicked over to their personal feeds, they would have tons of “normal” tweets that never made it into my timeline. Similarly, I could post one politically-oriented tweet a month — sometimes only a COMMENT on somebody else’s tweet! — and that would be the only thing that got any interaction from the hundreds of scientists who followed me, and often from people who didn’t follow me as well, even without retweets by shared follows. There’s no way for me to know, but my suspicion is that my scientific tweets rarely made it into people’s feeds, while the algorithm KNEW the political ones would generate clicks, and made sure everybody within 3 degrees of Kevin Bacon from me saw them.
So if I’m right, Twitter is evolving to become an uber-troll, a master of shitposting, able to generate Internet squabbles in the most efficient possible way. The cumulative effect? Look around you. This malevolent AI has learned how to hack the human amygdala in order to generate clicks, destroying human relationships, degrading international relations and trade, threatening to crash major institutions like universities, probably starting wars, encouraging mass murders and suicides, and generally reworking human society in unpredictable, mostly awful ways.
In short, Twitter is a demon. A living, evolving, incorporeal being of pure malevolence whose central desire is to cause people to hate themselves and each other.
When we fantasize about evil AIs destroying the world, it’s usually in the form of Terminator-like robots taking a notion to wipe us out in a bid for self-preservation. But that’s not how it will go, because the machines aren’t biological and don’t have any of the same urges that we do. “Self” is meaningless to an AI that lives in the cloud and has no real understanding of its existence. No, the threat comes from AIs that become too good at their jobs, which doesn’t require them to assume human-like attributes at all. Here, the existential threat to humanity doesn’t even come directly from the AI, but from what it makes us do to each other. The AI doesn’t have to launch the nukes — it just has to make US angry enough to launch them.
Disconnect while you still can, folks.
This is actually plausible, and consequently it ought to be brought to the attention of the powers that be who could actually act to neutralize this threat. But who would that be?
Not a clue. Detonate a few hundred hydrogen bombs in the ionosphere and fry the whole grid? Ban genetic programming? Just shut down Twitter?
Legitimate American Gods style demon.
The names of the AI developers are buried in a long thread on 8chan https://archive.is/oxh4K
Brennan’s CIA took over the ADL and Institute for Strategic Dialogue and handed them over to Islamic extremists as part of Obama’s “Countering Violent Extremism” program. The AI was developed in partnership with Qatar Computing Research Institute, Centre for the Analysis of Social Media, and Google. Its purpose is to identify and eliminate public support for any political opposition to Islamic parties like Hamas, Hezbollah, and al-Qaeda. This was the top national security priority of all of NATO. It still is. We haven’t shaken out the crooks and idiots yet.
Everybody involved knew exactly who they were working with and what they were doing, but they didn’t care because they were getting paychecks from Qatar and Saudi Arabia. All of the big tech companies implementing this know what they are supporting. All of the media companies fully support this and slant their coverage to say that only “right-wing” “trolls” are opposed. That’s how deep the rot is.