In 2018, bots became even more prevalent in the marketplace. According to a study by Distil Networks, a leading bot security company, 42.2% of web traffic in 2017 was not human. Though some may find this trend surprising or even alarming, bot traffic has grown consistently over the last five years as more companies add bots to their workflows. What has proven a growing concern, according to the media, is the rise of “bad bots.”
Bots can be incredibly helpful, handling mundane or repetitive tasks and freeing humans to do more creative, thoughtful work. They have been adopted for customer service, for curating product recommendations for individual users, and for other activities. But there are also bad bots, which first drew attention when they were used to buy tickets online and resell them at much higher prices. Bad bots are also responsible for stealing personal information, harassing users on social media, disrupting the marketplace, and, in the largest show of bot activity, potentially influencing the 2016 US presidential election. Their prevalence is increasing too: bad bot traffic rose 10% last year and now slightly outpaces good bot traffic (21.8% of total web traffic versus 20.4%).
What makes bots unique is how well they mimic human behavior. That is what makes bad bots particularly hard to battle: they are often very difficult to detect. The growing pervasiveness of bad bots adds to public concern about the implications of artificial intelligence and whether AI can “turn against humans.”
But, as with any technology, security and defense systems are being developed to thwart bad bots. The first legislation, the Better Online Ticket Sales (BOTS) Act, was passed in September 2016 to deal with the aforementioned ticket-buying bots (though the problem persists despite the law). An op-ed in Fortune earlier this year calls for both private security measures and government intervention: creating or updating legislation to levy heavy fines and penalties on the parties that create bad bots.
Some in the technology world are leading the charge against bad bots. Twitter, for example, has challenged 9.9 million accounts thought to be spam or bots, created more sophisticated authentication procedures, and prevented an average of 50,000 spam or bot accounts from being set up each day.
Though bad bots are a problem and a threat to the marketplace, they should not overshadow the good bots that increase efficiency, improve systems, and analyze data across a variety of industries. Headlines scream that bots are bad, but in reality roughly half of the bots out there are refining processes and enabling further creativity, development, and increased revenue.
As Harley Davis, Vice President, France Lab and Decision Management, IBM Hybrid Cloud, writes in a February blog post, “Businesses need solutions that assist in automation rather than simply fulfilling it, handle tasks intelligently and are highly autonomous. These solutions also must deliver customer-centric and personalized experiences, at enormous scale, without a massive back-end operation to prop them up.” The next generation of bots will not simply handle mundane, repetitive tasks; they will adapt as a company grows and changes, taking on each challenge intelligently. A system that can flex as goals and needs change is crucial to progress and advancement as the marketplace transforms.