Social media bots. Are they a force for good? A manifestation of social media’s capacity to inflict enormous harm? Or simply a clever piece of technology that will do either, depending on the person behind the bot?
As usual, the answer lies somewhere in the middle and hinges on your perspective. For example, despite a general absence of solid evidence, many people credit social media bots with influencing recent election results in the US, UK, Germany, and France.
Bots (short for ‘robots’, and sometimes referred to as ‘automatons’) are also said to be a force behind the numerous international conspiracy theories circulating around the COVID-19 pandemic and its vaccines.
So what is this seemingly all-powerful technology capable of massive global influence and manipulation of public opinion?
Social media bot definition and meaning
A more in-depth overview of the technology
How chatbots can enhance your marketing
The insidious influence of chatbots on social media platforms
A force for good or bad? You decide
The US Government’s Office of Cyber and Infrastructure Analysis gives its definition of social media bots as: “Programs that vary in size depending on their function, capability, and design; and can be used on social media platforms to do various useful and malicious tasks while simulating human behaviour. These programs use artificial intelligence, big data analytics, and other programs or databases to imitate legitimate users posting content.”
Imperva, a leading cybersecurity company, gives a more succinct and balanced definition: “An internet bot is a software application that runs automated tasks over the internet. Tasks run by automated technology are typically simple and performed at a much higher rate compared to human Internet activity.”
Darius Kazemi, a computer programmer and self-proclaimed ‘internet artist’ who studies the nature and behaviour of these robots, distils the bot definition even further: “A computer that attempts to talk to [people] through technology that was designed for humans to talk to humans.” The mighty New York Times has its own characterisation: “Those little automatic programs that talk to us in the digital dimension as if they were human.”
Now that we have a basic definition of social bots, let’s take a more in-depth overview of these little programs found on the internet, on social media platforms, in the app environment, and in online gaming.
Certainly, there are malicious bots used with malevolent intent. But many are simply used for an array of clever, but largely unspectacular, day-to-day jobs that require automation tools in order to maximise efficiency and all-hours availability.
Professor Kathleen Carley of Carnegie Mellon University in the US emphasises that technology is neutral and people determine how it’s used. “[They] are just software. They are used for good things and they are used for bad things,” she says.
Did tweets created by fake accounts help determine the outcome of the 2016 US presidential election? The evidence is thin, but there were certainly instances of automatons hijacking COVID-19 hashtags with disinformation and conspiracy hashtags such as #greatawakening and #qanon. Similarly, an article in Digital Trends reported the spreading of conspiracy theories and disinformation around the Black Lives Matter protests and the movement’s hashtag.
Michael Kreil, a data journalist from Germany, agrees with Kazemi, saying there’s no proof of tweets, dubious accounts, or any similar activity determining the course of an election.
Indeed, some experts argue that the threat of excessive influence by such technology is vastly exaggerated. For example, a high volume of tweets generated on Twitter about a particular topic is often merely one set of spambots tweeting and retweeting specific hashtags to followers which are actually other spambots. If legitimate followers aren’t reading the tweets and hijacked hashtags, the influence on public opinion of those tweets and their dubious content is negligible.
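One way to picture this bot-to-bot amplification is to ask what share of a hashtag’s retweeters ever post anything original. Below is a minimal sketch of that idea; the account names, activity data, and the notion of scoring by original-post counts are all illustrative assumptions, not a real detection system.

```python
def ring_score(retweeters, original_posts):
    """Fraction of a hashtag's retweeters that look like pure amplifiers,
    i.e. accounts with no original posts of their own (toy heuristic)."""
    if not retweeters:
        return 0.0
    amplifiers = [u for u in retweeters if original_posts.get(u, 0) == 0]
    return len(amplifiers) / len(retweeters)

# Hypothetical activity data: account -> number of original posts
posts = {"newsfan42": 120, "amp_001": 0, "amp_002": 0, "amp_003": 0}
retweeters_of_tag = ["amp_001", "amp_002", "amp_003", "newsfan42"]

score = ring_score(retweeters_of_tag, posts)
print(f"amplifier share: {score:.2f}")  # 0.75 -> mostly bot-to-bot amplification
```

A score near 1.0 suggests the hashtag is mostly circulating among spambots rather than reaching legitimate readers.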
For marketers, a benefit of social media bots is that they can use them to engage with the customer and complete an end-to-end interaction – from enquiry to sale and payment – in one place and through one account. This applies to popular messaging apps such as:
There’s no need to fill out a contact form on a website and then wait for an employee to respond at some point in the future, or find a phone number on a website and then go offline to make the call, only to be told the person you need to speak to has gone for lunch. Instead, everything happens in one place and at one time, day or night, unless the customer specifically requests an interaction with a representative of the organisation.
By using ever-advancing artificial intelligence and voice-recognition technology, chatbots can deliver a surprisingly seamless customer experience – within parameters, of course. Companies can choose which routine functions to entrust to their talkbots.
But chatterbots aren’t people. So the science behind them is to anticipate what your customers are likely to want and the ways they will probably interact – and equip the AI-powered technology to respond accordingly.
There remain, however, plenty of things that chatterbots struggle to do.
When using chatbots, don’t turn the interaction into an excuse to send clients spammy messages thereafter. Marketers have pretty much ruined the call centre, SMS, and email as a meaningful way to interact with customers because of incessant spam. It's important to build a non-spammy automaton and avoid disgruntled clients reaching for the spam filter.
Each messaging app has its own T&Cs governing the behaviour of this technology. For example, on Messenger, a brand can send a message only if the customer prompted the conversation. You can use a Facebook Messenger bot to reply to that message; however, if the customer doesn’t find value and opt in to receive future notifications within the first 24 hours, no additional communication is allowed.
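The 24-hour rule described above boils down to a simple time-window check. Here is a minimal sketch of how a bot might enforce it; the function name and the timestamps are illustrative, not part of any Messenger SDK.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(hours=24)  # Messenger's standard messaging window

def may_send(last_customer_message: datetime, now: datetime, opted_in: bool) -> bool:
    """A brand may message only inside the 24-hour window opened by the
    customer's last message, unless the customer opted in to notifications."""
    return opted_in or (now - last_customer_message) <= WINDOW

first_contact = datetime(2021, 6, 1, 9, 0)
print(may_send(first_contact, datetime(2021, 6, 1, 20, 0), opted_in=False))  # True
print(may_send(first_contact, datetime(2021, 6, 3, 9, 0), opted_in=False))   # False
print(may_send(first_contact, datetime(2021, 6, 3, 9, 0), opted_in=True))    # True
```

Every reply from the customer would reset `last_customer_message` and reopen the window.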
We’ve seen how chatbot marketing can be a marketer’s friend. Now let’s examine how Facebook bots and other social bots can be a marketing nightmare through their insidious influence on social media platforms such as Instagram, where Instagram bots have posed a significant challenge.
The continued rise of influencer marketing has seen this practice dominate marketing budgets. Why? Because it's an effective way of building a customer base for businesses by aligning a brand with an online persona who then promotes the product or service to their legion of followers and fans.
On Instagram in particular, social media influencers whose accounts boast hundreds of millions of followers can command huge sums of money. Footballer Cristiano Ronaldo, for example, has almost 259 million Instagram followers, singer Ariana Grande has 219 million, and actor Dwayne (‘The Rock’) Johnson has 215 million.
Having a staggering number of followers means you can also command huge amounts of money for a post or re-post on your account. A Ronaldo Instagram post, for example, can cost up to US$777 000, while a post by Kim Kardashian could be worth up to US$607 000.
Below these top-tier stars are thousands of other influencers who command various sums to post on Instagram and other social media communities such as TikTok, Twitter, and YouTube. The money they request is largely determined by the number of followers they have, which is fair enough. But when these influencers have more fake followers than real ones, it becomes a significant problem. If, as a brand, you’re paying big money to have your Instagram stories seen by more Instagram bots than real followers, you’re being ripped off big time by the creators of those Instagram bots and their customers. With this in mind, check out our blog post "How to Quickly Spot Fake Influencers".
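One crude sanity check marketers use on a prospective influencer is engagement rate: an account with an enormous following but implausibly few likes and comments per post may be padded with bots. The sketch below illustrates the arithmetic; the 1% threshold and the sample figures are illustrative assumptions, not industry benchmarks.

```python
def engagement_rate(likes, comments, followers):
    """Average interactions per post as a share of followers."""
    return (likes + comments) / followers

def looks_inflated(likes, comments, followers, floor=0.01):
    """Flag accounts whose engagement is implausibly low for their size --
    a crude signal that many followers may be bots. Threshold is illustrative."""
    return engagement_rate(likes, comments, followers) < floor

# Hypothetical influencers: average likes and comments per post, follower count
print(looks_inflated(45_000, 1_200, 1_500_000))  # False: ~3% engagement
print(looks_inflated(900, 40, 1_500_000))        # True: ~0.06% engagement
```

It is only a first filter – genuine accounts can have quiet audiences too – but it is a cheap way to decide which followings deserve a closer look.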
Historically, Instagram bots have been a relatively inexpensive, effective, and easy-to-use tool that Instagram accounts could employ to rapidly increase the number of people who discover their Instagram profile and begin following them. For most, it was the only way to grow on Instagram without having to rely on other unsustainably expensive and often ineffective strategies.
As more people realised the moneymaking potential of Instagram growth, an entire industry dedicated to developing and selling Instagram bots sprung up. Fake followers, fake likes, and automated comments proliferated – thanks to an ever-growing army of Instagram bots manipulating the process. Fake accounts also increased, as did the phenomenon of new followers who almost immediately became non-followers.
Who was a real follower and who was an Instagram bot follower? Which so-called influencers had real Instagram followers that matched a brand’s target audience? And which ones merely had a following of Instagram bots? It was a minefield and, in many cases, brands were paying a lot of money yet not reaching their target audience. Word on the street was that it was all one big, expensive, marketing con, thanks to the multitude of Instagram bots.
Instagram had to act against Instagram bots. And it did. In 2017 it began to crack down on businesses that provided Instagram bots to clients for a price. Two years later, Instagram upped the ante against the dodgy Instagram bot brigade again.
“In 2019, Instagram began implementing measures that effectively curbed the use of automation within its user base,” observed Instagram expert Eduardo Morales in an article for the website Better Marketing. “In 2021, they are close to perfecting them.”
Although Instagram keeps its Instagram bot-detection strategies secret, Morales says we know from the recent experiences of Instagram users that the site has been successful in reducing Instagram bot activity by implementing a range of measures too lengthy to discuss here.
"This year automation is still effective, but finding [one] that can automate interactions for months, without being blocked or having to change your username, is no longer simple or easy,” he wrote. Could this be the end of Instagram bots? Unlikely. The developers of Instagram bots will keep trying to be one step ahead. But at least the influence of Instagram bots should be diminished.
Like Instagram, Twitter has proved fertile ground for social media influencers – whether they’re paid by businesses for posts, or are seeking to exert influence for political or social reasons. The result has been a proliferation of Twitter bots, fake tweets, followers, and bogus Twitter accounts. Twitter (and Instagram for that matter) has a host of other problems – such as using hashtags in a spammy way, including using unrelated hashtags in a tweet (aka ‘hashtag cramming’).
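‘Hashtag cramming’ is easy to spot mechanically: count the hashtags in a tweet and flag anything stuffed beyond a sensible limit. A minimal sketch follows; the cutoff of three tags and the sample tweets are illustrative assumptions.

```python
import re

HASHTAG = re.compile(r"#\w+")

def hashtags(tweet: str) -> list:
    """Return all hashtags found in a tweet."""
    return HASHTAG.findall(tweet)

def looks_crammed(tweet: str, max_tags: int = 3) -> bool:
    """Flag tweets stuffed with hashtags; the cutoff is illustrative."""
    return len(hashtags(tweet)) > max_tags

print(looks_crammed("Loving the new stadium #football"))              # False
print(looks_crammed("Win big! #crypto #love #food #cats #breaking"))  # True
```

A fuller check would also compare the tags against the tweet’s actual topic, since unrelated hashtags are the real tell.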
A 2018 New York Times article carried a claim that as many as 48 million of Twitter’s reported active users – nearly 15 percent – were automated accounts designed to simulate real people. The newspaper emphasised that Twitter asserts the number is far lower. But even if the figure was only half right, that’s still a lot of dodgy Twitterbot activity. The same article told of a company called Devumi that had an estimated stock of 3.5 million automated Twitter accounts, which it had used to provide customers with more than 200 million fake Twitter followers.
Significant effort goes into detecting Twitter bots. Much of this effort is by the site itself, but other detection measures come from external parties. Indiana University academics, for example, developed a free service called Botometer that checks the activity of Twitter accounts and gives them a score based on how likely they are to be Twitter bots.
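To give a flavour of how such scoring works, the toy function below combines a few behavioural signals into a bot-likeness score. To be clear, this is not Botometer’s actual model – the features, weights, and sample account are all invented for illustration.

```python
def bot_likeness(account: dict) -> float:
    """Toy bot-likeness score in [0, 1] from a few account features.
    NOT Botometer's model -- just an illustration of the general idea of
    scoring accounts on behavioural signals."""
    score = 0.0
    if account.get("default_avatar"):
        score += 0.3                                    # never personalised the profile
    if account.get("tweets_per_day", 0) > 100:          # inhuman posting rate
        score += 0.4
    followers = account.get("followers", 0)
    following = account.get("following", 0)
    if following > 10 * max(followers, 1):              # follow-spam pattern
        score += 0.3
    return min(score, 1.0)

suspicious = {"default_avatar": True, "tweets_per_day": 400,
              "followers": 12, "following": 4_800}
print(f"{bot_likeness(suspicious):.1f}")  # 1.0 -> almost certainly automated
```

Real detectors weigh hundreds of such features, but the principle – score the behaviour, not the content – is the same.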
There was even one Twitter bot called @stealthmountain. It would abuse Twitter users who used a word or phrase incorrectly – such as the word ‘peak’ when they actually meant ‘peek’. It is currently suspended for violating Twitter’s rules. Who would have guessed that promoting good word usage would create such a storm in a Twitter app…
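The mechanics of a bot like that are disarmingly simple: scan tweets for the offending phrase and reply when it matches. A minimal sketch of the matching step, using the famous ‘sneak peak’ slip as the example:

```python
import re

# Match the misspelling 'sneak peak' (should be 'sneak peek'), case-insensitively.
SNEAK_PEAK = re.compile(r"\bsneak\s+peak\b", re.IGNORECASE)

def needs_correction(tweet: str) -> bool:
    """True if the tweet contains the classic 'sneak peak' misspelling."""
    return bool(SNEAK_PEAK.search(tweet))

print(needs_correction("Here's a Sneak Peak of our new album!"))  # True
print(needs_correction("Enjoy the sneak peek tomorrow"))          # False
```

Hook that up to a streaming search of public tweets and a reply call, and you have a pedantry machine.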
Not all tweets are from Twitter bots, not all followers are fake, and not all Twitter accounts have been set up by would-be influencers-for-gain, foreign governments with opaque intentions, criminals intent on who-knows-what, conspiracy theorists, or climate denialists. And not all Twitter bots are bad news.
A certain degree of automation is intended by Twitter, which makes its API freely available. And ‘respectable’ robots provide services such as accurate time, earthquake warnings, stock market news, updates about train delays, and a whole lot more besides. The @tinycarebot, for example, encourages followers to practice self-care and care for others.
More than ever, people expect a response from brands on Twitter. According to its in-house research a few years back, 41% of consumers use Twitter to contact customer service, 37% to voice an opinion on a product or brand, and 25% want to make a product enquiry.
This is where Twitter chatbots can help. Research has shown consumers are accepting of some sort of talkbot interaction. Salesforce, the global CRM business, found 68% of consumers preferred a talkbot for quick communication with a brand. “And while you can't beat having a real person, chatbots can help perform a key role in everything from answering questions to ordering products,” Twitter says on its website.
And why shouldn’t there be YouTube bots as well? According to Gainchanger, an information technology company that specialises in outreach automation, YouTube view bots are being used by YouTubers struggling to get organic views, since audiences tend to focus on the most popular channels rather than newcomers.
To get around limited organic reach, YouTube creators use automated software that runs in the background, adding views to videos and subscribers to channels. Some automatons will also spam the comments section with automated messages to make the activity seem real. This is rarely successful, however, as the comments are most often generic and very similar. These bots are sometimes also known as YouTube sub bots – the ‘sub’ being short for ‘subscriber’.
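That ‘generic and very similar’ pattern is exactly what makes botted comment sections detectable. The toy heuristic below flags a comment section where most comment pairs are near-duplicates; the similarity thresholds and sample comments are illustrative assumptions.

```python
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    """Rough string similarity between two comments, ignoring case."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def looks_botted(comments, threshold=0.8):
    """Flag a comment section where most comment pairs are near-duplicates --
    the 'generic and very similar' pattern. Toy heuristic, illustrative cutoffs."""
    pairs = list(combinations(comments, 2))
    if not pairs:
        return False
    similar = sum(1 for a, b in pairs if similarity(a, b) > threshold)
    return similar / len(pairs) > 0.5

spam = ["Great video!", "Great video!!", "great video!", "Great video :)"]
real = ["Loved the editing at 2:10", "Great video!", "Where was this filmed?"]
print(looks_botted(spam))  # True
print(looks_botted(real))  # False
```

Genuine comment sections are noisy and varied; botted ones collapse into minor variations of the same line.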
Automatons have also made their mark, both positively and negatively, in the ever-growing online gaming and esports market. Twitch, which calls itself “the world's leading live-streaming platform for gamers and the things we love”, is one such platform, and it makes ongoing efforts to eradicate its ‘TwitchBot’ problem.
Founded in 2011, Twitch enables streamers to broadcast their gameplay or activity by sharing their screen with fans and subscribers who can hear and watch them live on the Twitch stream. As the esports phenomenon has grown, Twitch has amassed a huge audience and now boasts 30 million average daily visitors and more than 7 million unique streamers going live every month.
When you watch a live stream, a split-screen display enables you to see what the streamer sees on their monitor, while a smaller window on the edge of the stream lets you hear and see the streamer themselves. Users can purchase games through links on a stream and buy products associated with that stream using affiliate links, and subscriptions allow you to support your preferred streamer. According to Business Insider, the company has launched iOS and Android apps so users can get all the same content, alongside a host of mobile-friendly features, wherever they are.
Like the other social media channels discussed, the platform’s large audience and user numbers attract Twitch bots intent on manipulation by artificially inflating viewer numbers, creating fake Twitch chat activity, and falsely increasing follower counts. On its website, Twitch explains why these Twitch bots are undesirable and not in the interests of legitimate Twitch accounts.
“Artificial engagement and botting limit growth opportunities for legitimate broadcasters and are damaging to the community as a whole,” it says. “False viewer growth is not conducive to establishing a career in broadcasting because the ‘viewers’ do not contribute to a healthy, highly engaged community.” It adds: “As a reminder, fake engagement and artificial inflation of channel statistics are violations of our policies. Participating in, organising and/or running these services will lead to an enforcement issued on your account, up to and including indefinite suspension.”
But in-game automated technology (known as IG bots) does have its uses. For example, Twitch offers an Internet Relay Chat (IRC) interface for chat functionality on its website. Applying chatbots in this way allows you to interact programmatically with a Twitch chat feed using IRC standards, and the technology connects to the Twitch IRC network as a client to carry out these actions.
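Under IRC standards, each Twitch chat message arrives as a plain-text PRIVMSG line, which a bot parses before deciding how to respond. Here is a minimal sketch of that parsing step; the usernames, channel, and chat command in the example are invented for illustration.

```python
import re

# A Twitch chat message arrives as a standard IRC PRIVMSG line, e.g.:
#   ":nick!nick@nick.tmi.twitch.tv PRIVMSG #channel :message text"
PRIVMSG = re.compile(r"^:(\w+)!\S+ PRIVMSG #(\w+) :(.*)$")

def parse_chat_line(raw: str):
    """Return (user, channel, message) for a PRIVMSG line, else None."""
    m = PRIVMSG.match(raw)
    return m.groups() if m else None

line = (":mooviewer!mooviewer@mooviewer.tmi.twitch.tv "
        "PRIVMSG #somestreamer :!songrequest never gonna give you up")
print(parse_chat_line(line))
# ('mooviewer', 'somestreamer', '!songrequest never gonna give you up')
```

A real bot would hold an authenticated connection to Twitch’s IRC network and write replies back over the same socket; this sketch covers only the message-handling logic.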
A guide on the Twitch website presents an easy ‘good’ Twitch bot example to get you started, along with next steps for successfully using chatterbots and IRC on the platform.
One of the IG automatons that can enhance the Twitch viewing experience and increase community engagement is the Moobot. Despite its odd name, it has nothing to do with automated cows, but is a chat moderation and command bot that integrates broadcasting software with YouTube’s vast music collection to enable viewers of a Twitch channel to submit song requests.
Flowing from the gaming world is Discord, a group-chat platform originally built for gamers but now a general-use site for a range of communities. Discord allows users to voice chat, video chat, and livestream games from their computers. It is divided into Discord servers, each of which has its own members, topics and channels. According to Business Insider, automatons on the platform are helpful examples of artificial intelligence that can undertake several useful jobs on your server automatically. These include welcoming any new members, banning troublemakers, and moderating the discussion. Some can even add music or games to a server.
In summary, despite the bad press that social media bots tend to get due to a string of high-profile unethical and unsavoury behaviour surrounding some of them, they do have a legitimate place in the sun.
Chatbots (sometimes called ‘talkbots’ or ‘chatterbots’), in particular, can enhance an organisation’s social media presence and brand when used in the right way. They can be useful to consumers by, among other things, providing a rapid response and 24/7 presence. But overuse is counter-productive because social media channels are, ultimately, about people connecting with people rather than with technology.
So, steer clear of the unscrupulous automatons. Cheating, dishonest, duplicitous behaviour will never be eradicated from social media, but being part of the problem rather than the solution will come back to haunt you. Use the technology correctly and it could become an enormous marketing and communications asset.
The debate around social bots tends to be heated and characterised by strong opinions – as you would anticipate of such a controversial topic. Expect efforts to continue to negate the work of those developing and using malicious bots. But there is simply too much at stake for them to meekly disappear into the sunset without an epic technological battle.
If you’d like to add a considered opinion to the debate, tweet us @Meltwater.