
I’ve been following Cal Newport for years for his work on cognitive issues and deep work to improve career opportunities. But that’s his side job. He’s actually a serious computer scientist, and recently he challenged some obvious misconceptions about artificial intelligence (AI), especially claims made by biologist Bret Weinstein on the Joe Rogan podcast. Generally, Weinstein’s great, but he can be abstract and long-winded and sometimes difficult to understand. In his conversation with Rogan, Weinstein seemed to imply that ChatGPT and similar large language models (LLMs) might be conscious or even manipulating people right now. But it didn’t take much for Newport to wreck Weinstein’s view by simply explaining in detail how AI systems actually work at present.

Newport played some audio clips from the Rogan podcast where Weinstein said that these current LLMs may be functioning like a child’s brain, running experiments and learning what they want. But Newport explains why this comparison fundamentally misunderstands the technology. “Language models don’t run experiments in the way a human mind might,” Newport says. “LLMs certainly don’t want anything to happen. They have no values or drives like a human brain does.”

It’s reasonable to ask why Weinstein doesn’t recognize that he’s out of his field here. He sounds like a guy on the street talking about something he knows nothing about. Newport and others have clearly stated that these LLMs are limited compared to a fully thinking human being, and if we wanted to build something like a conscious human brain, we’d have quite a long way to go.  

Newport says that language models operate through complicated but static tables of numbers, processed sequentially through matrix multiplication. “Once one of these networks is trained, that vast table of numbers is fixed. It’s static. It does not change.” When you query ChatGPT, nothing is being updated or learned. The same fixed numbers produce words through straightforward matrix multiplication spread across thousands of GPU chips.
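To make that concrete, here’s a minimal Python sketch of what Newport is describing. It’s a toy, not ChatGPT’s real architecture: the layer size, the names, and the ReLU activation are illustrative stand-ins I chose, but the key property is the same. The weights are frozen, and querying the model is nothing more than arithmetic against them.

    import numpy as np

    # A toy stand-in for one trained layer of a language model.
    # The weights are set once (at "training" time) and never touched again.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((8, 8))   # frozen weight matrix
    b = rng.standard_normal(8)        # frozen bias vector

    def forward(x):
        # One inference step: a matrix multiplication plus a nonlinearity.
        # Nothing in here writes back to W or b.
        return np.maximum(0, W @ x + b)

    x = rng.standard_normal(8)          # stand-in for an input embedding
    snapshot = W.copy()
    for _ in range(1000):               # "query" the model over and over
        y = forward(x)
    assert np.array_equal(W, snapshot)  # the table of numbers never changed

Real systems run this at a vastly larger scale across thousands of GPUs, but the loop above captures Newport’s point: no update step ever runs at query time.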

Newport compares language models to isolating just the language processing center of a human brain. That neural cluster does sophisticated work understanding words and concepts, but we would never call it conscious or alive on its own. Weinstein asks in one of his Joe Rogan clips, “Is the AI conscious? I don’t know. If it’s not now, it will be.” Really? How does he know that? At this point, Weinstein goes over the top, which only undermines his credibility on the subject and helps promote fear. The technology around AI and other forms of computer automation has been progressing for many decades, but only now are we finally afraid?

Newport responds immediately and emphatically: “I can answer your question. Is the AI conscious? No, it is not conscious.” Newport sounds almost surprised and frustrated. He explains that when we understand the mechanical process of how language models work, “this has a fraction of the types of operations and behaviors you would need to even imagine something like consciousness.”

Newport also addresses why AI pioneer Geoffrey Hinton sounds similar alarms despite knowing exactly how these systems work; after all, he did much of the initial research that enabled the technology. Newport clarifies a crucial but obvious distinction totally missed by Weinstein: Hinton worries about hypothetical future AI systems we haven’t built yet, not the current language models in use today. “Weinstein’s talking about language models,” Newport says. “Hinton’s talking about AI artificial brains that we haven’t built yet, but he’s now more confident than he was before that they are buildable. That’s different.” Newport doesn’t speculate on how far away those systems might be.

Newport also identifies a pattern in how people commonly misunderstand AI: “When a non-technical critic like Weinstein looks at language models, here’s what I think they’re doing instead. They’re observing from the outside the things that the model is doing. They then write a story about what’s happening inside the model that matches what they observed.” This represents what he calls “an ancient animist religion approach to understanding the world” rather than scientific analysis. That’s a heck of a criticism to level at a scientist like Weinstein. But that’s science, right? You debate data. You discuss different perspectives. And sharp criticism among researchers is normal.

Regarding truly dangerous artificial intelligence, Newport says that we would need breakthroughs in multiple separate technologies: world modeling, simulation and planning systems, policy networks for values and drives, memory systems, and real-time learning capabilities. Current AI agents remain limited because language models lack these crucial capabilities.
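To picture that checklist, here’s a small, purely illustrative Python sketch. Every name in it is my own label for the capabilities Newport lists, not a real system or API; it just makes plain how much of the stack remains unbuilt in today’s LLM-based agents.

    from dataclasses import dataclass, fields

    # A hypothetical capability checklist paraphrasing Newport's list.
    # All of these field names are illustrative labels, not real components.
    @dataclass
    class AgentCapabilities:
        language: bool         # what current LLMs already provide
        world_model: bool      # an internal model of how the world works
        planning: bool         # simulating and evaluating possible actions
        policy_network: bool   # values and drives that set goals
        memory: bool           # durable state across interactions
        online_learning: bool  # weights that update from experience

    # Roughly where today's LLM-based agents stand, on Newport's account:
    today = AgentCapabilities(
        language=True,
        world_model=False,
        planning=False,
        policy_network=False,
        memory=False,
        online_learning=False,
    )

    missing = [f.name for f in fields(today) if not getattr(today, f.name)]
    print("Breakthroughs still needed:", ", ".join(missing))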

Newport says that instead of focusing on apocalyptic narratives, we should direct our attention toward AI’s present-day harms. “AI presents many problems, but most of them are happening right now, not in the future,” he says. These concerns include how AI degrades our thinking abilities when we outsource cognitive work, how it erodes our capacity to distinguish truth from fiction, how it floods us with low-quality generated content, and the environmental and economic costs of the global AI arms race. By understanding the actual mechanics of how AI systems work right now, we might be able to avoid both the unfounded fears and the overconfidence in their current capabilities.

