Broadly speaking, social media bots are automated programs used to engage in social media. These bots behave in an either partially or fully autonomous fashion, and are often designed to mimic human users. While benevolent social media bots exist, many social media bots are used in dishonest and nefarious ways. Some estimates suggest that these malicious bots make up a sizable percentage of all accounts on social media.
Although the terms “chatbot” and “social media bot” are sometimes used interchangeably, they are not the same: chatbots are bots that can independently hold a conversation, while social media bots do not have to have that ability. Chatbots are able to respond to user input, but social media bots do not need to “know” how to converse. In fact, many social media bots don’t communicate using language at all; they only perform simpler interactions such as providing ‘follows’ and ‘likes’.
Social media bots also exist on a much larger scale than chatbots, because they require far less human management. A chatbot often requires a person or even a team of people to maintain its functionality. Social media bots, on the other hand, are much simpler to manage, and oftentimes hundreds or even thousands of them are operated by a single person.
Some social media bots provide useful services, such as weather updates and sports scores. These ‘good’ social media bots are clearly identified as such, and the people who interact with them know that they are bots. However, a large number of social media bots are malicious bots disguised as human users.
Malicious social media bots can be used for a number of purposes:
Twitter executives have testified before Congress that as many as 5% of Twitter accounts are operated by bots. Experts who have applied algorithms designed to spot bot behavior have found the number may be closer to 15%. That number likely applies to other social platforms as well.
It’s not easy to pinpoint exactly how many social media accounts are bot accounts, since so many of the bots are designed to mimic human accounts. In many cases, humans cannot tell bot accounts apart from legitimate human accounts.
While some social media bots very obviously exhibit non-human behavior, there is no surefire way to identify more sophisticated bot accounts. A study from the University of Reading School of Systems Engineering found that 30% of people in the study could be deceived into believing a social media bot account was run by a real person.
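To make the idea of algorithmic bot detection concrete, below is a minimal sketch of a heuristic scorer. The features (posting volume, follower ratio, profile photo, account age) and thresholds are illustrative assumptions, not any platform's actual detection logic; real detectors combine many more signals, often with machine learning.

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Hypothetical account features for illustration only.
    posts_per_day: float
    followers: int
    following: int
    has_profile_photo: bool
    account_age_days: int

def bot_score(acct: Account) -> float:
    """Return a 0.0-1.0 heuristic score; higher means more bot-like."""
    score = 0.0
    if acct.posts_per_day > 50:
        score += 0.3  # inhuman posting volume
    if acct.following > 0 and acct.followers / acct.following < 0.1:
        score += 0.2  # follows many accounts but is followed by few
    if not acct.has_profile_photo:
        score += 0.2  # default avatar is a weak bot signal
    if acct.account_age_days < 30:
        score += 0.3  # very new account
    return min(score, 1.0)

likely_bot = Account(posts_per_day=120, followers=3, following=900,
                     has_profile_photo=False, account_age_days=5)
likely_human = Account(posts_per_day=2, followers=250, following=180,
                       has_profile_photo=True, account_age_days=1400)

print(bot_score(likely_bot))    # high score
print(bot_score(likely_human))  # low score
```

Note that each individual signal is weak on its own (plenty of real people have new accounts or no profile photo), which is why combining several signals, as above, is the usual approach.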
In some cases it can be very hard to spot a bot. For example, some bots use real users' accounts that were previously hijacked by an attacker. These hijacked bot accounts have very convincing pictures, post histories, and social networks. In fact, even a non-hijacked account can create a real social network: A study found that one in five social media users always accept friend requests from strangers.
While some of the most advanced social media bots can be hard to spot even for experts, there are a few strategies to identify some of the less sophisticated bot accounts. These include:
There is no easy way to get rid of malicious social media bots. While some people are calling on social media platforms to apply more stringent requirements for account creation, social platforms are hesitant to do so because:
While social networks may enable bot management solutions to block some of the bots, users need to be vigilant on social media, as social media bots are an ongoing issue.