Essentially, bots are programs designed to convey simple communication or gather information. Marketers and brands use them for customer service and advertising activities, and they can be quite impressive in the way they mimic human interaction.
Lately, they’ve been popping up on social media in the form of automated profiles. These bots are often more sinister in nature, being used to spread fake news and misinformation.
Typically, bot social media profiles on Facebook (FB) and Twitter (TWTR) look like real people at first glance, and may even have profile pictures and information that make them appear authentic. Don’t be fooled: a bot profile can usually be identified by its lack of any substantial personal information and its tendency to spread highly charged political messages.
I’ve visited many Facebook profiles while studying the phenomenon, and find that they usually have little, if any, real information on the person. Frankly, they just look fake, and if one takes a moment to inspect them, they can be easily spotted. I’ve even seen two bots on Facebook that have the exact same feed, “family photos”, and profile information. The only difference is the name and avatar image.
In the wake of the politicizing of Hurricane Harvey, bots have taken to social media condemning former President Obama for golfing during Hurricane Katrina. The effort apparently positions current President Trump’s visit to Houston against his predecessor, making Trump look good and former President Obama look bad.
First, let’s set the record straight. President Obama was not playing golf during Hurricane Katrina because he was not president at the time. George W. Bush served as president when Katrina hit New Orleans in 2005, three years before Obama was elected. Even then, serving as a senator from Illinois, Obama went to Houston to meet with Katrina evacuees.
Now, there was a controversy in August of 2016 when President Obama was on the golf course at Martha’s Vineyard during flooding in Louisiana. He was briefed on the situation and did not change his travel plans, which might be the original source of the rumor. However, the fact remains: President Obama did not play golf during Katrina because he was not president at the time.
Politics aside, the interesting aspect is the way this message spread on Tuesday, August 29, 2017. Romper.com reported on the bot-Obama-golfing phenomenon, and according to the website, the rumor went viral. The question I have is: why?
Because a real person, or persons, is actually behind the army of bots spreading this internet rumor on social media, we are getting insights into the scope and depth of these bots’ impact. It speaks to our gullibility as a species, and to the enthusiasm some people take in spreading lies, because it isn’t just bots spreading false information. Real people are picking up the story and sharing it, thinking it is true.
It makes one wonder, how much does the misinformation spread by bots influence our collective perceptions?
It is easy enough to disprove this bot story because it is factually false, no question about it. However, how can something so obviously fake spread and actually be believed?
A recent nine-country study (which includes Brazil, Canada, China, Germany, Poland, Ukraine, and the United States) from the Oxford Internet Institute’s Computational Propaganda Research Project finds that social media is being used by governments and individuals with political agendas to spread propaganda and lies at what I consider to be an alarming rate. According to The Guardian, which reported on the findings, 45% of highly active Twitter accounts in Russia are bots. That’s nearly half. The report also claims that, in Taiwan, there was a heavily coordinated effort from mainland China to spread propaganda to influence the island’s recent presidential election.
Philip Howard, Professor of Internet Studies at Oxford, says the tactic most commonly used in the United States by bots is “manufacturing consensus”. Meaning, an illusion of reality is created around the information the bots are spreading and supporting. By increasing likes, commenting on posts, and sharing the fictitious information, they make it look like a lot of individuals agree with the message. By nature, humans tend to “run with the pack” and will jump on board if they feel enough people agree with something.
Howard explains it this way, “Bots massively multiply the ability of one person to attempt to manipulate people. Picture your annoying friend on Facebook, who’s always picking political fights. If they had an army of 5,000 bots, that would be a lot worse, right?”
What can the average person do about bots and the spreading of false information? First, consider the source of any news story or bit of information that is highly politically charged. If it looks or sounds silly, chances are it is fake.
Second, don’t be swayed by the “bandwagon effect”. A common advertising tactic, bandwagoning is the art of getting others to agree to something because everyone else is. Think of kids on the school playground who convince a younger child to smoke a joint because “everyone else is doing it”. It is a tactic to influence consensus, and has no real substance.
Third, if you find something “juicy” that you want to share, do your own fact-checking. You don’t want to fall victim to a bot’s political agenda by spreading propaganda. Check sites like Snopes, FactCheck.org, PolitiFact, and OpenSecrets.org to find out if the information is true or false.
The most useful weapon we have against bots, and against a mentality that borders on cult-like thinking, is our brain. We can identify bot activity if we look. Whatever our political leanings, if a story is untrue, we need to do the right thing and not spread it, even if it aligns with what others are saying. In a world where information can be manufactured at a moment’s notice and spread to every corner of the globe in seconds, it is our responsibility to be smarter than the bots.