Public Shaming of AI Systems: The Case of AI Self-Driving Cars


The pillory is a device made of a wooden or metal framework erected on a post, with holes for securing the head and hands, formerly used for punishment by public humiliation and often further physical abuse.

By Lance Eliot, the AI Trends Insider

Do you remember this childhood ditty: Sticks and stones may break your bones, but names will never harm you. This famous children’s song dates back to the mid-1800s. That was a long time before the advent of today’s globally pervasive social media.

In our modern world, names can hurt you.

Indeed, as we enter 2019, we are on the precipice of sticks and stones being thrown at AI overall, and I'll use the recent protestations about AI self-driving cars as a use case, along with the "names" being lobbed at AI too. Qualms are being expressed on social media about AI replacing human workers and wiping out people's livelihoods, there are concerns about how AI is undercutting our right to privacy, and there is the overarching claim that AI will possibly override our free will and ultimately enslave or wipe out humanity.

That’s quite a bit of name calling.

In the case of AI self-driving cars, the New York Times (NYT) ran an eye-catching piece on December 31, 2018 about Arizonans who have apparently been attacking self-driving cars. This is the old-fashioned sticks-and-stones approach as a form of protest.

According to the NYT, a man slashed the tires of a Waymo self-driving car while the vehicle was stopped at an intersection in Chandler, Arizona, just outside of Phoenix. Others have reportedly thrown rocks at the Waymo self-driving cars, having decided they don't want these AI-embodied vehicles on their roadways. In one especially disconcerting instance, a man reportedly drove alongside a Waymo self-driving car and brandished a pistol, threatening the self-driving car and the human back-up driver inside it.

There are also reported cases of human drivers and pedestrians who have tried to trick the Waymo self-driving cars into coming to an abrupt halt, either by cutting in front of the self-driving car and hitting their brakes, or, as a pedestrian, by leaping into the path of the self-driving car to exploit the AI system's predisposed cautiousness about hitting someone or something.

I've been referring to these kinds of incidents as a form of "pranking" of AI self-driving cars and have predicted that the practice will continue to expand, whether as a type of sport, as a means to gain an advantage over a self-driving car, or as a form of protest in opposition to AI self-driving cars. Though you might at first consider these acts seemingly harmless and perhaps jovial, any time you play tricks on a multi-ton car in motion, there can be serious consequences. These are pranks that can produce life-or-death results. It's a bad trend.
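The cautiousness being exploited in these pranks comes down to conservative obstacle handling in the AI's driving logic. As a highly simplified, hypothetical sketch (the function name, parameters, and thresholds here are my own illustration, not any vendor's actual code), the hard-braking decision might look like this:

```python
def should_emergency_brake(obstacle_distance_m: float,
                           speed_mps: float,
                           max_decel_mps2: float = 6.0,
                           safety_margin_m: float = 2.0) -> bool:
    """Brake hard if the stopping distance at the current speed,
    plus a safety margin, would reach the detected obstacle."""
    # Kinematic stopping distance: v^2 / (2 * a)
    stopping_distance_m = speed_mps ** 2 / (2.0 * max_decel_mps2)
    return stopping_distance_m + safety_margin_m >= obstacle_distance_m

# A pedestrian leaping in 8 meters ahead of a car at 10 m/s (~22 mph)
# triggers braking; the same pedestrian 25 meters ahead does not.
print(should_emergency_brake(8.0, 10.0))   # True
print(should_emergency_brake(25.0, 10.0))  # False
```

Because a rule like this errs toward braking whenever in doubt, anyone who deliberately steps inside the stopping envelope can force an abrupt halt, which is exactly the behavior the pranksters are abusing.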

For my article about the pranking of AI self-driving cars, see:

For the dangers of pedestrians becoming roadkill by AI self-driving cars, see my article:

For the crossing of the Rubicon about AI self-driving cars, see my article:

For the purported AI singularity that we all face, see my article:


Here's something else from earlier in history that might ring a bell from your elementary school history books: the use of a pillory for public shaming.

You've probably seen a pillory without knowing that it was called a pillory. A pillory is a device, typically made of wood, into which a person's head and hands were placed through three holes and locked by a hinged wooden block, essentially putting the person on display. This was usually done in a public square for purposes of shaming the person. Some might call it the stocks, though technically this head-and-hands device is a pillory (the stocks confined the feet).

The overall societal notion was to force a person to be in the public eye and allow the public to see the person and know that they were being shamed. Public humiliation would presumably cause the person to realize they had done something wrong and they would want to avoid being publicly humiliated again, thus they would no longer undertake whatever transgression got them into the pillory to begin with.

At times, the pillory was placed up on a raised platform to make the shaming more prominent. These pillories would be set up wherever people might have a good chance of seeing the person, such as in the town square or at a key crossroads leading into town. A description of the crime that the person presumably committed was often listed on a sign near the pillory. In case you've never perchance been locked into a pillory, it is generally physically uncomfortable, making it both a mental shaming and a physically undesirable situation. Usually the subject would only need to be in the pillory for a few hours.

Whenever someone was confined to the pillory, word would often quickly spread, and people would go out of their way to come see who was in it and to taunt the person. Thus, it wasn't just that whoever wandered by the pillory would happen to see the person; it was actually a kind of notable event that attracted attention. This was an added "bonus" for those attempting to shame the person, since if no one actually came to see the offender, the punishment would not make as dramatic an impression on that offender or have the intended justice impact. In some cases, the audience would mock the person and even throw items at them, such as rotted fruit or, worse still, excrement from a horse or other animal.

At times, the crowd would get rowdy and go overboard in the foul treatment of the person in the pillory. Officials would at times turn a blind eye to this behavior and let it happen, figuring that they weren't meting out that kind of punishment themselves and that it was instead the wisdom of the crowd. If the crowd did harm the person, it would send a signal warning others that ending up in the pillory could have very adverse consequences. Sometimes, officials would overtly and intentionally provide rough treatment. The person's hair might be cut off, they might be whipped, they might be burned or scarred, and so on. In some instances, they might have a finger cut off or suffer other bodily maiming.

On a rare occasion, the crowd might actually be sympathetic toward the person in the pillory. Perhaps the public thought the person was innocent of the claimed transgression. Or, maybe the person committed the transgression but the people believed it was done accidentally or that the pillory was excessive punishment for the crime. In any case, the crowd might try to help the person be more comfortable, providing water or shade. The crowd might protect the person from others who wanted to toss things at them or otherwise shame them. The crowd might even toss flowers, or at least put flowers next to the pillory, to signal that they did not believe the person deserved the punishment. It could be that the person was a town hero who was politically out of favor with the town officials but whom the public at large supported.

You might be thinking that this whole aspect of the pillory is just history. Nobody does this anymore, you might be saying. It’s barbaric and we’d not do something like that anymore, you insist.

You are right to the degree that in today’s world we have other ways to shame people. The version of the pillory today is often found via the use of social media.

Social Media is the Modern Pillory

Social media has become a popular and effective modern-day pillory.

Without much cost or effort, you can put someone into a kind of virtual pillory. Make a catchy meme and it may go viral. Post something untoward about someone on your blog, and it might get a million hits. Put together a short video clip shaming the person, and it could become a big draw on YouTube. It's easy to do. And, whereas in the past the pillory would only allow a town's worth of people to participate in the shaming, nowadays thousands upon thousands of people from around the globe can add to it.

Admittedly, the person isn't confined to wooden blocks, so they aren't forced to sit there and take it. On the other hand, the continual drumming across social media can be just as mentally shaming as the pillory was. When it seems that half the planet is involved in shaming you, it can be pretty damaging to your psyche and ego. The pace at which the public shaming can happen, and its geographical spread across countries, languages, and cultures, can make it a harsh punishment for someone caught up in it.

At times, the online virtual pillory can become physical and real-world. A person shamed on social media might find themselves being confronted when they try to go eat in a restaurant. Other patrons might yell at them or threaten them. The restaurant might refuse to serve them a meal. The person can be "shamed" in all areas of life: when eating at a restaurant, when waiting for a flight at an airport, when walking down the street, when sitting in a park, and so on. Thus, social media is more than merely a mental punishment and can have true physical consequences.

Consider too the reputational damage that can be done. The person punished in the old-fashioned pillory might have been able to move to a new town and start their life over; others they encountered might not have known about the prior pillory experience. With today's social media, the odds are that the virtual pillory will spread far and wide, and the person will be judged everywhere. It also isn't going to readily go away: the physical pillory was over in a few hours, while the virtual pillory of social media might last for years on end (or forever!).

What does the pillory have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars, and we also keep up with the latest trends related to AI and to self-driving cars.

One prediction I’ve made is that we will soon see the AI self-driving car being placed into the virtual pillory on social media.

The New York Times article about people tossing rocks at AI self-driving cars in Arizona, brandishing a gun, slashing tires, and otherwise acting up is just the tip of the iceberg of what will likely emerge in the coming year. It's going to be more of the sticks-and-stones kind of effort, unfortunately.

This will also spur the name calling too.

Notice that I used the word "hooligans" just now. I refer to those who are physically threatening the self-driving cars as hooligans because they are performing illegal and highly dangerous acts in the way they have chosen to protest. I categorically and unequivocally condemn it.

If they want to protest the emergence of AI self-driving cars, I'm fine with their doing so in any legal manner they wish, perhaps lobbying their local elected officials, putting up signs, or taking to social media. Trying to showcase their protests by physically accosting the self-driving cars and potentially producing deadly outcomes is not the way to demonstrate their dismay or distrust.

Right now, AI self-driving cars are just starting to gain some popularity. We're slowly seeing them put to use on our public roadways. Until now, self-driving cars were on our roads as part of research and development efforts; the auto makers and tech firms were using time on the public roads to figure out how to make the AI work in the real world. At some point, each auto maker and tech firm decides it's time to put its self-driving car into actual use, such as ridesharing for the public, or perhaps as a means to deliver groceries or pizza to a customer who ordered online or by phone.

Let's clarify what it means to refer to an AI self-driving car. There are various levels of AI self-driving cars. The topmost level is Level 5. A Level 5 self-driving car is one in which the AI does all the driving. For a Level 5, there isn't a human driver; there usually isn't even any provision for a human driver (no pedals, no steering wheel, etc.). The notion is that the AI is supposed to be able to drive the car as a human could, without relying upon a human for any of the driving. Self-driving cars at less than Level 5 require a human driver, such that the AI and the human driver co-share the driving task. There are dangers associated with the less-than-Level-5 self-driving cars due to the co-sharing and the confusion that can arise.

For levels of the self-driving cars, see my article:

For issues associated with co-sharing the driving task, see my article:
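The levels described above form a simple ordered scale, and the key distinction (human driver required or not) falls out of a comparison against Level 5. Here is a minimal sketch; the enum and helper names are my own, loosely following the SAE-style 0-to-5 numbering, not any official API:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Driving automation levels, 0 (none) through 5 (full)."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def human_driver_required(level: AutonomyLevel) -> bool:
    """Below Level 5, a human driver co-shares the driving task."""
    return level < AutonomyLevel.FULL_AUTOMATION

print(human_driver_required(AutonomyLevel.FULL_AUTOMATION))         # False
print(human_driver_required(AutonomyLevel.CONDITIONAL_AUTOMATION))  # True
```

Using `IntEnum` keeps the levels comparable as ordinary integers, which matches how the industry talks about "less than Level 5" cars.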

With AI self-driving cars at less than Level 5, when those self-driving cars get into an accident of some kind, the odds are that the finger will be pointed at the human driver who was supposed to be responsible for the behavior of the car. The auto makers and tech firms certainly prefer to point the finger at the human driver. This absolves the auto maker or tech firm of any blame in the accident, or so they hope. Whether it does is still to be ascertained. The attempt to shift attention to the human driver might not work in all cases, and we'll eventually see court cases wherein the human driver is considered partially at fault and the AI system is also considered partially at fault.

Costly Lawsuits Could Slow Self-Driving Car Industry

Some are worried that if the auto makers and tech firms get pinned with costly lawsuits in which their AI is considered partially at fault, they'll become the cash cow that the lawsuits all go after. If this happens, it could dampen the efforts to create AI self-driving cars. Auto makers and tech firms might step back from their efforts out of concern over large lawsuit payouts. Others, though, say that this is perhaps an appropriate marketplace control on the auto makers and tech firms, forcing them to make sure their AI self-driving cars are sufficiently safe. If they can put unsafe self-driving cars onto our roadways and simply pin the responsibility on the human driver, there's presumably not much to stop them from fielding those unsafe vehicles.

For more about responsibility and AI self-driving cars, see my article:

For some of the legal aspects and lawsuits about AI self-driving cars, see my article:

For the bonanza of lawsuits and AI self-driving cars, see my article:

For a true Level 5 AI self-driving car, there's no way to pin an accident on a human driver inside the self-driving car, because there isn't one. Thus, the auto maker or tech firm is potentially exposed. Now, if the AI self-driving car gets into an accident with a conventional car or a less-than-Level-5 self-driving car, there's the chance of claiming that the other car caused the accident, and the finger can again be pointed at the human driver in that other car. Otherwise, though, the AI self-driving car itself will get the spotlight as to why it got into an accident.

For regulations about self-driving cars, see my article:

For analysis of a self-driving car accident, see my article:

For overall aspects about AI self-driving cars involved in accidents, see my article:

The regular news media has been very excited about covering AI self-driving car stories. Most of the time, the story is one of wonderment. AI self-driving cars are going to change society and offer many tremendous benefits. News media likes to tout this. The public at large is also eager to see the advent of AI self-driving cars. The public wants to know what’s going on. The news media wants to let them know.

This intense interest tends to magnify anything notable about AI self-driving cars. Sometimes a small story about some "breakthrough" in AI self-driving cars gets star treatment by the news media, even though it's not really much of a genuine breakthrough. On the other side of the coin, when an AI self-driving car gets into an accident, that too can gain headlines. Love it or hate it, that's the kind of news that garners eyeballs and attention.

For my article about fake news about AI self-driving cars, see:

I’m predicting that we’ll soon be reaching a stage of evolution of AI self-driving cars in the public eye that will lead to public shaming. The AI self-driving car will be placed into the online virtual pillory. The New York Times article provides an example of the kind of potential coverage that is going to gradually come forth this year.

When an AI self-driving car gets into an accident, particularly a Level 5, it will generate intense focus. If the public believes that the Level 5 self-driving car was out of control, there's a chance of a severe backlash against all AI self-driving cars. If the auto maker or tech firm is not ready to handle the crisis management aspects, they'll likely inadvertently contribute to the public shaming that arises. Suppose, for example, that the auto maker or tech firm seems to be stonewalling as to why the AI self-driving car did what it did; this could spark public outrage.

We're used to police today wearing body cams, and when a shooting occurs, there are public demands for the video. Some police departments won't release the video, or will only release it once an initial investigation has occurred. The news media and the public often don't want to wait; they want to know right away what happened, and holding back the video is a sure way to draw their ire. The police often say that the video can be misleading and that they want time to figure out what actually happened. The news media and the public, though, often see this as an excuse not to show what's there, or worse, as implying that the police were in the wrong and a cover-up is taking place.

It will be much the same for AI self-driving cars. Everyone knows that an AI self-driving car has cameras, radar, sonar, and other such sensory devices. The moment an accident occurs, the news media and the public will expect a release of the video and any other sensory data, so that we can all judge what actually happened. The odds are that the auto maker or tech firm will try to respond as the police departments have, namely saying that time is needed to first review the sensor data and that at some future point it will be released. The news media and the public aren't likely to go along with this notion. They'll assume that stonewalling is taking place, or worse.

For my article about conspiracy theories and AI self-driving cars, see:

If the public gets enraged, you can bet the virtual pillory will go into high gear. Social media will be flooded with public shaming. It will be a vicious cycle: some public shaming that no one notices, and other public shaming that hits a chord, which others will re-tweet or otherwise share. The clever memes and clever posts will go super-viral.

The ire might be aimed at AI self-driving cars overall. This is bound to be reflected in the stock prices of auto makers and tech firms that make AI self-driving cars; their stock prices could drop precipitously. If the public shaming is large enough, regulators might be activated and try to introduce new legislation that is onerous for AI self-driving cars. The whole thing can become a cascading mess and a loss of faith in AI self-driving cars.

There's a chance that the public shaming might be aimed at a specific auto maker or tech firm. In that case, the fallout among AI self-driving car companies overall might be somewhat lessened. The main brunt might harm the particular firm whose AI self-driving car was involved in the accident. It could force that firm to retreat from its AI self-driving car quest. It might need to freeze further roadway use. It might even stop its internal development efforts. It could cause the firm to go into a witch-hunt mode of self-discovery. And, if there are lawsuits, the firm will need to devote much of its resources to defending them, which could detract from any further attempts at moving its AI self-driving car efforts forward.

The public shaming bandwagon of AI self-driving cars might be powerful enough that it becomes nearly unstoppable. Rather than having just a momentary impact, such as for a day or a week, it could become more permanent and dominant. Some are worried that it would slow down innovations for AI self-driving cars. It might create such a stigma that AI developers refuse to get associated with an AI self-driving car effort. They might leave those efforts, seeking to apply their AI skills to something else such as for spacecraft, for airplanes, or the like.

As mentioned earlier, the pillory could have momentary impacts or longer-term impacts. For a human in a pillory, having a hand cut off or being physically scarred could last for life. For AI self-driving car companies that get into the virtual pillory, it could mean just some bad press of a momentary nature, or it could lead to a widespread and ongoing pillorying of the firm. There's even a chance that it could put them out of business. They might need to shut down their AI self-driving car efforts entirely. Perhaps they can save whatever else the firm does, though the tainting of their brand might have untoward impacts on the other parts of their enterprise.

AI self-driving car makers need to get ready for the virtual pillory. The auto makers and tech firms should put in place appropriate crisis management capabilities and be well-prepared to cope with claims that might range from genuine to bogus. Acting as though you were caught unawares is just not going to cut it. If you need to be forewarned to get prepared, consider yourself so informed.

Of course, the auto makers and tech firms should also be designing, developing, and fielding their AI self-driving cars in a sensible manner. If they are skirting the right kinds of protocols and safety measures, the public shaming will not only likely occur, it is likely to be well warranted.

For my article about the safety of AI self-driving cars, see:

For the boundaries of AI and self-driving cars, see my article:

For aspects about the coming public crisis of Level 3 self-driving cars, see my article:

For the future of AI self-driving cars and how the Gen Z will be instrumental, see my article:

Some of the people historically placed in the pillory deserved to be there; presumably, most did.

With social media, it is at times not so clear that someone placed into the virtual pillory really deserves the full extent of the punishment. Nonetheless, it is today's rapid-fire way to do a public shaming.

Let's try to avoid the public shaming of AI self-driving cars, which will require the auto makers and tech firms to prudently and wisely decide when and where to deploy their emerging AI self-driving cars. The auto makers and tech firms need to actively shape how AI self-driving cars are presented to the public and can no longer assume that acceptance will just happen because of the excitement of this new form of driving.

Sticks and stones are perhaps a precursor to a larger and global kind of name calling, and the future of AI self-driving cars might be hampered or stopped if it gets its head jammed into a pillory and cannot get itself out.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.
