Why should we put our faith in a technology shrouded in hyperbole and deception?
Conversing with computers has long been a dream pursued by futurists and tech visionaries.
Looking back at 2004, it’s astonishing to see how far we’ve progressed. Billions of devices in our hands and homes now listen to our queries, striving to provide answers. However, despite the immense time, money, and effort poured into chatbots, they have yet to revolutionize the world as their creators promised. They are marvels, yes, but also disappointingly mundane. It’s worth questioning why.
The term “chatbot” encompasses a wide range of systems, from voice assistants to generative AI text interfaces and everything in between.
Conversing with computers in the past meant typing into a window and witnessing the machine’s clumsy imitation of conversation rather than genuine dialogue. Tricks like ELIZA’s (1964 to 1967) tactic of rephrasing user inputs as questions helped sell this illusion, and the approach persisted through 2001’s SmarterChild chatbot. The alternative path digitized analog speech with voice-to-text engines like Nuance’s frustrating yet occasionally impressive products.
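To see how thin ELIZA’s illusion really was, here is a minimal sketch of its core trick — a handful of keyword patterns and pronoun swaps that reflect the user’s own words back as a question. This is an illustrative reconstruction, not Weizenbaum’s original 1966 script; the patterns and responses are invented for the example.

```python
import re

# Pronoun swaps so "I" / "my" become "you" / "your" in the reply.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A tiny, made-up rule set in the spirit of ELIZA's keyword matching.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(user_input: str) -> str:
    """Return the first matching rephrase, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # fallback when no keyword matches
```

Feed it “I feel ignored by my computer” and it answers “Why do you feel ignored by your computer?” — no comprehension involved, just pattern matching and string substitution, which is exactly why the illusion collapses under any real conversational load.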
In 2011, these early ideas converged, giving birth to Siri for the iPhone 4S, quietly built on Nuance’s work.
Amazon’s founder, Jeff Bezos, recognized Siri’s potential early on and launched a major internal project to develop a homegrown competitor. Alexa arrived in 2014, with Cortana and Google Assistant following suit in subsequent years. Natural language computing had now become accessible on countless smartphones and smart home devices.
Companies are often reluctant to disclose the precise costs of developing new projects, but chatbots have been expensive endeavors.
Forbes reported in 2011 that Apple’s acquisition of the startup behind Siri cost $200 million. In 2018, The Wall Street Journal quoted Dave Limp, who said Amazon’s Alexa team had over 10,000 employees. A 2022 Business Insider story suggested the company pegged losses on Alexa’s development at more than $10 billion. Last year, The Information claimed Apple now spends a million dollars a day on AI development.
So, what do we use this costly technology for? Turning our smart bulbs on and off, playing music, answering the doorbell, and maybe getting sports scores.
In the case of AI, perhaps getting poorly summarized web search results (or an image of human subjects with too many fingers). You’re certainly not engaging in meaningful conversation or extracting vital data from these systems. Because in pretty much every case, their comprehension falls short, and they struggle with the nuances of human speech. And this issue is widespread. In 2021, Bloomberg reported on internal Amazon data showing up to a quarter of buyers stop using their Alexa unit entirely in the second week of owning one.
The oft-cited goal has been to make these platforms conversationally intelligent, capable of answering your questions and responding to your commands.
While they can handle some basic tasks reasonably well, like mostly understanding when you ask them to dim the lights, everything else is far from smooth. Natural language tricks users into thinking the systems are more sophisticated than they are. So when it comes time to ask a complex question, you’re more likely to get the first few lines of a Wikipedia page, eroding any faith in their ability to do more than play music or adjust the thermostat.
The assumption is that generative AIs integrated into these natural language interfaces will solve all the current issues associated with voice interactions.
Yes, on one hand, these systems will be better at mimicking realistic conversation and attempting to provide what you ask for. But, on the other hand, when you look at the actual output, it’s often nonsensical gibberish. These systems are making gestures toward surface-level interactions but can’t engage in anything more substantive. Recall when Sports Illustrated tried to use AI-generated content that boldly claimed volleyball could be “tricky to get into, especially without an actual ball to practice with.” No wonder so many of these systems are, as Bloomberg reported last year, propped up by underpaid human labor.
Of course, proponents of the technology will suggest it’s still early days and, as OpenAI CEO Sam Altman recently said, we need billions more dollars invested in chip research and development.
However, this makes a mockery of the decades of development and billions already spent to reach our current state. But it’s not just a matter of cash or chips: Last year, The New York Times reported that the power demands of AI alone could skyrocket to as much as 134 terawatt hours per year by 2027. Given the urgent need to curb power consumption and increase efficiency, this does not bode well for the future development or sustainability of chatbots, or our planet.
We’ve had 20 years of development, but chatbots still haven’t caught on in the ways we were told they would.
Initially, it was because they simply struggled to understand what we wanted. But even if that problem is solved, would we suddenly embrace them? After all, the underlying issue remains: We simply don’t trust these platforms, both because we have no faith in their ability to do what we ask and because of the motivations of their creators.
One of the most enduring examples of natural language computing in fiction, and one often cited by real-world creators, is the computer from Star Trek: The Next Generation.
Yet even there, with a voice assistant that seems to possess something close to general intelligence, it’s not trusted to run the ship on its own. A crew member still occupies every station, carrying out the orders of the captain and generally performing the mission. Even in a future so advanced it’s free of material need, beings still crave the sensation of control.
The cost of developing chatbots has been astronomical, with companies like Apple and Amazon spending billions. Yet, the question remains, what are we using this costly technology for? If chatbots can’t provide meaningful conversations or extract vital data, then what’s the point? It’s no wonder that a significant portion of buyers stop using their Alexa units in the second week of ownership.
As someone who’s been following the development of chatbots closely, I can’t help but feel a sense of disappointment. The promise of conversing with computers has been a dream for decades, yet here we are, still struggling to get meaningful conversations out of these systems. Yes, they can turn our lights on and off, play music, and provide us with basic information. But is that really the extent of their capabilities?
The issue, as I see it, is not just about the technology not being advanced enough. It’s also about how we perceive and interact with these systems. We’re tricked into thinking they’re more sophisticated than they are because of natural language processing. But when it comes to asking complex questions, we’re often met with poorly summarized web search results or nonsensical responses. This erodes any faith we might have in their ability to do more than menial tasks.
GenAI is a scam
The key to improving chatbot interactions might lie in their ability to learn from their mistakes. With time they will improve. Try Claude 3 and see what I am talking about.
The promise of chatbots has been tantalizing for decades, but the reality has fallen short of expectations. Despite billions invested and years of development, chatbots still struggle to deliver on their promise of revolutionizing communication and providing valuable insights. I like the vibe here already, let’s keep it coming… Let the discussion keep flowing…
My opinion is that despite the fanfare and billions of dollars invested, chatbots have fallen short of their lofty expectations. They have difficulty understanding simple concepts, respond superficially, and are unable to participate in significant discussions. This has caused great dissatisfaction and suspicion among users. Me included…
Every day I keep looking for a better bot…
“The development and maintenance of these chatbots have been exorbitantly expensive, with companies like Apple and Amazon spending billions. However, the return on investment has been minimal. Chatbots primarily perform mundane tasks and fail to deliver on their promise of revolutionizing communication and providing valuable insights.”
I agree totally with this…
Just imagine, Apple spending a million dollars daily to fine-tune its chatbot…
Data misuse, that’s my major concern…
Yeah… Users lack trust in chatbots due to their unreliable performance and the questionable motivations of their creators. Concerns about privacy, data misuse, and the potential for bias further erode confidence. This lack of trust limits the adoption and widespread use of chatbots.
One thing the article doesn’t mention is the role of bias in chatbot development. If the people creating these bots have certain biases or assumptions, those could easily be perpetuated through the bot’s responses.
For instance, the other day I asked a chatbot to mention competitive alternative chatbots, and it said it could not mention the names of other chatbots explicitly due to tuning restrictions. It went on to give me generic code names; for instance, it called ChatGPT the “core model.”
Many chatbots today rely solely on scripted responses, leading to a rigid, biased, and impersonal user experience. It’s a shame…
@BiG_Tom It’s nice of you to recommend Claude 3. Have you tried it personally, and did you find it significantly improves on previous versions?
Maybe the real challenge isn’t teaching chatbots to mimic human behavior but helping humans navigate conversations with machines. After all, communicating with non-human entities requires different strategies and approaches compared to traditional human dialogues. Could educating consumers on effective methods for interacting with chatbots boost satisfaction rates?