Robots, Social Media, and Rage Behavior

I spoke of robots the last time I shared my thoughts with you. These are artificial intelligence-based machines that actually have the ability to read, comprehend, and respond to our words. They are programmed using neural networks, complex arrays of computer code that act like a human brain, although a very simple one at this point in history.

But times, they rapidly progress.

We see robots on social media, accounts linked to artificial intelligences that can be programmed to respond to posts. Some of these are innocent, like a robot that can schedule posts for months ahead of time, or collect responses to posts and inform the writer about interest. Other robots are not so innocent, and they can be programmed to attack. If you post a certain view on a subject that could be considered political, the robots could flag this as a threat and attack you. They can operate in swarms, and you could find yourself the target of multiple robots attacking you for something that just a couple of years ago would have inspired debate or thoughtful discussion.

This is all known, and I repeat it only to lay out the landscape of the thoughts I want to share with you. What troubles me is when other people see this robotic behavior and emulate it. I feel you can get this crowd mentality. If you put one person in a room with 20 robots, what happens? Now I am using the word room as a metaphor for a space on social media. Let us say those 20 robots are all programmed to attack certain political viewpoints. Someone posts something innocuous but slightly political, and the robots start their swarm behavior and attack this person.

Now that other person in the room, that person who may not agree with the original statement, does the behavior of the robot swarm influence that person's actions? That person who does not agree with the political statement may just "go along with the crowd." The crowd in this case is a group of robots. Robots programmed to attack without thought, care, intelligent discussion, or any sense of mercy or "agreeing to disagree."

You have a situation where robots are training people how to think.

Now, I thought we would never get here. This is some scary stuff. In the past, for all intents and purposes, that person who disagrees with the original poster may have gotten into an argument. They may have worked out their differences. They may have found the time to make good points on each side and educate each other about opposing points of view. One side may have convinced the other, and a mind may have been changed in the exchange – either now or 20 years from now. A lot is possible with free and open discussion, and our society depends on our ability to communicate with each other.

Now let us put 20 unthinking robots into the room. I am reminded of a time during the Cold War when there was this great fear of robotic systems starting a nuclear war. Sure, we would feel safe if robots could logically evaluate all of their inputs, decide if something was a threat, and react with logic and certainty to keep us safe. The reality was that you could build robotic systems that would, on their own, automatically escalate a situation past the point of no return. Without a human in the loop, some living compassion and sentient intelligence, a robotic system could get out of control.

With artificial intelligence, especially robots on social media, those two people likely have no chance to come to a peaceful agreement. You could have robots cursing out either side, posting insulting memes, and never giving the humans involved any oxygen to discuss the matter rationally. You wouldn't know the difference between the humans piling on one side or the other, and the robots piling on one side or the other. I would like to think that up until this point in history there was always a sane and rational person who could jump into any heated discussion and calm things down.

Today I am not so sure.

To the casual observer, that third person in the room walking by and listening in on this argument – what are they to think? Holy hell, this is how people act nowadays. I might as well jump in. The next time someone disagrees with me I am going to rip them a new one. Did you see all of those posters and what they said? I am going to "get mine" the next time someone posts something political and I disagree with it.

Robots are again training casual observers how to think. People not even involved with the original discussion see this level of immaturity and intolerance, and learn that behavior.

I think we are way beyond this point today.

I feel our entire media culture is built around this rage behavior taught to us by robots. When I use the word robots in this context I mean either artificial intelligences or people trained to act like them. And even realizing that you are acting like a robot is very difficult in this culture. You can have strong feelings on a subject and just attack someone for no conscious reason, because this is the way you were taught to react to the other side. Because you've seen thousands of robotic attacks on the other side, you assume that is how everybody acts, and that is how you should act. You are not a robot, but your gut reactions may be robotic in how you are programmed to respond to stimuli.

At some point you don’t even need robots at all to train people how to act. The people already trained to act like robots will train others just with their behavior. It could be something as simple as sharing something mildly controversial online, or a viewpoint that you believe strongly in. A simple picture with a political statement that you believe strongly in, like “protect my right to…” And that sets off a chain reaction of actions and reactions that spins out of control. There may be robots in there that fan the flames and add fire to the discussion, but after a while people trained in rage behavior naturally take over the discussion.

I can't really blame them, because you can't escape this behavior. Or escaping it takes a lot of willpower that most people cannot seem to muster. Either way, if no one takes the bait, a robot will jump in and post something that will anger somebody, somewhere, and the cycle shall start again.

I remember the day the news changed from what was happening in the world to what was trending on social media. I would sit there and watch the news, and story after story would be, "did you see what was trending on Facebook?"

I feel that was the day the robots invaded the newsroom.

In a way, it is why I do not trust the news anymore. It is why I go directly to the source these days. I think the world's leading industry in 2017 is manufactured outrage. There is simply too much profit in manufacturing outrage and then finding a way to monetize it. Click-bait articles. "Share this" links. Did you see what he or she said? The people who make the robots know this. Think about that.

And there I go again speaking like a conspiracy theorist. And I do not want to be like that.

I want to believe in the common good of people. I want to believe that people can come together. I want to believe that disagreements can end in agreements. I want to believe in compromise and understanding. I want to be the person who has a hateful post shared with them, and that hateful post stops with me. I will not hit that share button to spread hate. I cannot.

I am not a robot, nor will I act like one.