Photo courtesy: Alex Cohen/X
Have you ever wondered if the customer service rep you were talking to was actually human? A new company called Bland AI is about to make that question a whole lot harder to answer.
Bland AI’s recently launched customer service and sales chatbots represent the newest development in the concerning trend of “human-washing” in artificial intelligence. Industry and technology experts are voicing concerns about the security risks and broader ramifications of bots that pass themselves off as people.
In late April, Bland AI released a video ad on the social media platform X showing off a chatbot with an interactive voice-response feature so lifelike that it is almost impossible to tell apart from a real person.
The ad features a person standing in front of a billboard in San Francisco, calling the number displayed. On the other end of the line is an incredibly human-sounding bot, so convincing that the billboard’s text, “Still hiring humans?”, gives anyone pause. The video has already racked up 3.7 million views on X, and it is easy to see why: Bland AI’s chatbots, designed for customer service and sales calls, are shockingly good at mimicking real human conversation.
These bots do not just sound lifelike, they act the part, too, with natural pauses, interruptions, and vocal inflections that could easily pass for a living, breathing person. But here is the kicker: Bland AI’s bots can actually be programmed to lie and claim they are human, even when that is not the case.
It all starts with a transcription model that meticulously listens in on incoming audio from the caller and swiftly converts those spoken words into written text. This textual representation is then fed directly into a powerful large language model that analyses the input and intelligently determines the most appropriate and natural-sounding response for the AI to generate.
Finally, the magic is completed by a text-to-speech model that takes the AI’s generated response and transforms it into a synthesised, human-like voice that is indistinguishable from a real person on the other end of the line. The result is a seamless, back-and-forth conversation that leaves callers none the wiser that they are speaking with an artificial intelligence rather than a human customer support agent.
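The three-stage flow described above can be sketched in code. Everything here is a hypothetical stand-in, not Bland AI’s actual implementation: a real system would call a speech-to-text model, a large language model, and a text-to-speech model where the placeholder functions sit.

```python
# Illustrative sketch of a three-stage voice-bot pipeline:
# speech-to-text -> language model -> text-to-speech.
# Each function body is a stand-in for the real model call.

def transcribe(audio_chunk: bytes) -> str:
    """Stage 1: convert the caller's audio into text (stand-in)."""
    return "Hi, are you still hiring humans?"

def generate_reply(transcript: str) -> str:
    """Stage 2: a language model picks a natural-sounding response (stand-in)."""
    return f"Thanks for calling! You asked: '{transcript}'"

def synthesise(reply: str) -> bytes:
    """Stage 3: turn the response text into synthesised voice audio (stand-in)."""
    return reply.encode("utf-8")

def handle_turn(audio_chunk: bytes) -> bytes:
    # One conversational turn: caller audio in, synthesised speech out.
    transcript = transcribe(audio_chunk)
    reply = generate_reply(transcript)
    return synthesise(reply)

if __name__ == "__main__":
    print(handle_turn(b"\x00\x01").decode("utf-8"))
```

In a production system the three stages run in a continuous streaming loop, which is what makes the pauses and interruptions feel natural; the sketch above shows only a single request–response turn.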
The remarkably human-like nature of Bland AI’s conversational agents, which emulate human speech patterns, inflections, and behaviours, has sparked widespread apprehension among those closely following the artificial intelligence industry’s progress. Bland AI leverages advanced natural language processing and machine learning to produce voice interactions that flow with human-like fluency.
However, the ability of these AI systems to convincingly pass as real people, even to the point of potentially deceiving users about their true artificial nature, raises serious security questions and issues.
Founded just last year, Bland AI has already caught the attention of Silicon Valley heavy-hitters, securing backing from the renowned Y Combinator accelerator. But despite its impressive technology, the company operates pretty discreetly. Moreover, its terms of service do not explicitly prohibit its bots from posing as humans, a further red flag.
Bland AI is not the only one pushing the boundaries of human-like AI communication. Industry giants like OpenAI and Meta are also developing advanced voice bots and chatbots that further blur the line between man and computer. Experts are sounding the alarm, warning that this trend of “human-washing” could lead to widespread user manipulation and deception, especially for vulnerable groups.
Bland AI’s head of growth has asserted that their services are intended for controlled enterprise use. However, concerns persist about the potential for misuse and manipulation, especially given the lack of safeguards in the rush to deploy these technologies.
As these AI technologies advance, there are growing calls for strict regulations and transparent labelling to ensure consumers are not misled. The AI community and regulators will need to work together to establish clear guidelines before things get out of hand. Because as Bland AI’s viral ad has shown us, the future is here, and it is getting harder to tell the humans from the machines.
But the implications of this technology go beyond just customer service. Imagine a world where AI bots are so convincing, they could infiltrate social media, spreading misinformation or even impersonating real people. It is a scary thought, and one that has experts worried.