Debating AI's Future: Gary Marcus Challenges the Hype
AI-created, human-edited.
In an episode of Intelligent Machines, hosts Leo Laporte, Jeff Jarvis, and Paris Martineau engaged in a thought-provoking conversation with AI expert Gary Marcus about the current state of artificial intelligence, its limitations, and the challenges it presents. As a leading critic in the AI space, Marcus brought his expertise as a cognitive scientist and entrepreneur to challenge mainstream narratives about AI development.
Marcus positioned himself not as someone opposed to AI but rather as someone concerned with how it's currently developing. "I love AI, but I don't like the way it's happening now," he stated early in the interview. Throughout the conversation, he consistently emphasized the gap between the ambitious claims made by companies like OpenAI and the actual capabilities of current AI systems.
One of Marcus's primary criticisms focused on what he calls the "weaponization of hype." He singled out OpenAI as being at the top of his list of companies that have "hyped their mission as being for the benefit of humanity" while not delivering on those promises. According to Marcus, this pattern of exaggeration has become standard practice because "the media doesn't in general hold people accountable."
A significant portion of the discussion centered on the elusive concept of Artificial General Intelligence (AGI). Marcus recounted how, in 2022, he offered a $100,000 bet that AGI wouldn't arrive by 2029. At the time, industry leaders generally accepted his definition of AGI as a system with "the flexibility of human cognition," one able to do "whatever people can do, and maybe better."
However, he noted that definitions have been shifting as companies try to claim they've achieved AGI. "Instead of defining AGI in terms of human flexibility and cognition... they're like, 'Well, if it makes a certain amount of money,'" Marcus explained, highlighting how the goalposts keep moving. When Leo Laporte questioned why the semantics of AGI matter, Marcus replied that there are pragmatic stakes, including contractual ones: Microsoft's agreement with OpenAI would return intellectual property to OpenAI once OpenAI achieves AGI.
From a technical perspective, Marcus argued that neural networks alone are insufficient for building truly reliable AI systems. He advocated for neurosymbolic AI, which would combine neural networks with symbolic AI (the explicit rules and logic of classical programming). "Current AI is mostly like system one," he explained, referencing Daniel Kahneman's model of thinking, in which system one is automatic and reflexive while system two is deliberative and explicit.
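To make that contrast concrete, here is a deliberately tiny Python sketch of the neurosymbolic pattern Marcus describes: a fast, pattern-matching "system one" guesses, and a rule-based "system two" verifies or corrects. Every name in it is invented for illustration; it stands in for the idea, not for Marcus's proposal or any real system.

import re

def neural_guess(question: str) -> int:
    """Stand-in for a neural model: a cheap heuristic that grabs the
    first number it sees. Fast and fluent, but sometimes wrong."""
    match = re.search(r"\d+", question)
    return int(match.group()) if match else 0

def symbolic_check(question: str, guess: int) -> int:
    """Stand-in for a symbolic verifier: parse the arithmetic exactly
    and override the guess when an explicit rule applies."""
    match = re.fullmatch(r"What is (\d+) \+ (\d+)\?", question)
    if match:
        a, b = map(int, match.groups())
        return a + b  # deliberate, explicit reasoning beats the reflex
    return guess  # no rule applies; fall back to the neural answer

question = "What is 17 + 25?"
guess = neural_guess(question)            # system one: reflexive guess of 17
answer = symbolic_check(question, guess)  # system two: deliberate answer of 42
print(f"neural guess: {guess}, verified answer: {answer}")

The division of labor is the point Marcus draws from Kahneman: the reflexive component is cheap but fallible, and reliability comes from the explicit, checkable rules layered on top of it.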
Marcus pointed out that current AI systems still struggle with the same issues he identified decades ago: "They still fail at abstraction, at reasoning, at keeping track of properties of individuals. I first wrote about hallucinations in 2001." He cited AI systems' failures on simple variations of classic logic puzzles as evidence of these fundamental limitations.
The conversation also touched on the complex question of AI regulation. When asked about the EU's AI Act, Marcus expressed approval of its spirit while acknowledging implementation details are still being determined. He pushed back against the idea that regulation necessarily hampers innovation, arguing that California's proposed AI legislation would have merely required companies like Google to spend "an extra 10 million dollars a year filling out paperwork" while providing important protections like whistleblower safeguards.
Leo Laporte raised concerns about premature regulation, comparing it to early internet regulation, but Marcus maintained that "there's enough things we already know that we can do some of the regulation that we need to do," while emphasizing that any regulation would need to evolve over time.
When asked whether AI has been a net positive or negative for society so far, Marcus offered a nuanced view. He acknowledged benefits like brainstorming assistance and coding efficiency but noted significant downsides, including bias, cybercrime vulnerabilities, misinformation, and environmental impacts from training large models.
"The notion that it was going to revolutionize the world certainly has not transpired yet," Marcus observed, pointing out that despite early claims of massive productivity increases, "you look at the GDP and it's hardly moved at all." He concluded that "there are some clear benefits, there's some clear costs. It's not yet clear how it all nets out."
Marcus closed the interview with a compelling distinction between different types of AI. He contrasted tools like GPS, which reliably solve specific problems, with more ambitious technologies like ChatGPT that claim to "do anything you wanted to do." For the latter goal, he argued, "you really do need AGI" – and we're not there yet.
The conversation with Gary Marcus provided a thoughtful counterbalance to AI enthusiasm, emphasizing the need for critical thinking, hybrid approaches to AI development, and careful consideration of regulation as we navigate the future of artificial intelligence.