The AI industry is racing toward a singular vision: larger language models, more training data, exponentially greater computational power. But what if we're scaling in the wrong dimension entirely?
Dr. David Roberts, co-founder and CEO of Organic Intelligence Technologies and associate professor at NC State University, is asking the uncomfortable questions that challenge AI's fundamental assumptions. In a recent Dynamic Decisions Podcast conversation, he revealed why biological learning principles—refined over decades of animal behavior research—might hold the key to AI breakthroughs that massive data sets never will.
"The way that AI systems are trained and the way they operate is actually different than what we see in biology," Roberts explains. While neural networks and reinforcement learning borrow terminology from biological systems, the actual functions diverge significantly once you peel back the layers.
Consider this: A puppy masters complex behaviors through shaped reinforcement with surprisingly few examples. Meanwhile, AI systems require millions of labeled images to recognize similar patterns. The disconnect reveals something profound about our approach to artificial intelligence.
Roberts' research focuses on what behavioral science calls "antecedent manipulation": how changing the environment and the way a task is presented reveals what a learner has come to expect. "AI systems as a general rule are terrible at that," he notes. "There's no core principle in AI algorithms that operationalizes this concept as a first principle."
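To make the idea concrete, here is a minimal, hypothetical sketch (not drawn from Roberts' lab or any specific algorithm he describes) of what probing a learner through antecedent manipulation might look like: a toy contextual bandit is trained on two cues, then presented with a novel cue, and its unrewarded responses reveal which expectations it has actually formed.

```python
# Illustrative sketch only: a toy contextual bandit where the trainer steers and
# probes learning by manipulating the antecedent cue (what is presented before a
# choice) rather than the reward function. Cues, actions, and setup are invented.
import random
from collections import defaultdict

ACTIONS = ["sit", "stay", "come"]

class CueConditionedLearner:
    """Tabular learner whose value estimates are conditioned on the antecedent cue."""
    def __init__(self, epsilon=0.1, lr=0.5):
        self.q = defaultdict(float)   # (cue, action) -> estimated value
        self.epsilon = epsilon
        self.lr = lr

    def act(self, cue):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(cue, a)])

    def update(self, cue, action, reward):
        key = (cue, action)
        self.q[key] += self.lr * (reward - self.q[key])

def trainer_reward(cue, action):
    """The 'expected' behavior depends entirely on which cue was presented."""
    expected = {"hand_signal": "sit", "whistle": "come", "flat_palm": "stay"}
    return 1.0 if action == expected[cue] else 0.0

learner = CueConditionedLearner()

# Phase 1: train with only two cues ever presented.
for _ in range(500):
    cue = random.choice(["hand_signal", "whistle"])
    action = learner.act(cue)
    learner.update(cue, action, trainer_reward(cue, action))

# Phase 2 (antecedent manipulation): present a novel cue with no reward at all.
# The learner's responses expose the expectations it has formed so far.
novel_cue = "flat_palm"
probe = [learner.act(novel_cue) for _ in range(10)]
print("Responses to novel cue before any reward:", probe)
```

The point of the sketch is the probe in phase 2: nothing about the reward changes, only what is put in front of the learner, yet that change alone is informative about what it has learned.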
Roberts isn't theorizing in a vacuum. His lab works with guide dog organizations, explosives detection teams, and therapy dog programs—contexts where mistakes have severe consequences. This work forced his team to become relentlessly user-centered, even when users can't articulate their needs in human language.
"We have users that can't articulate, at least not in the same way, their needs and desires," Roberts says. "It's really forced us to put ourselves in the mindset of our users and build in lots of ways to objectively measure the experience."
This led to Pawmetric, his third startup, which uses wearable systems to predict the aptitude of service dogs. What began as academic research for nonprofit organizations revealed unexpected commercial potential—and demonstrated how biological principles translate into practical AI applications.
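Pawmetric's actual features, models, and data aren't described in the episode, but the general shape of such a system is easy to sketch. The toy example below, with invented feature names and fully synthetic labels, shows the basic pattern of mapping wearable-derived summaries to a predicted training outcome.

```python
# Hypothetical toy sketch of predicting service-dog aptitude from wearable data.
# The feature names, labels, and model choice here are assumptions for
# illustration, not a description of Pawmetric's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_dogs = 200

# Invented per-dog summary features: e.g., resting heart-rate variability,
# recovery time after a startle, and gait consistency during training walks.
X = rng.normal(size=(n_dogs, 3))
# Synthetic label standing in for "completed the training program".
signal = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2]
y = (signal + rng.normal(scale=0.5, size=n_dogs)) > 0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("Held-out accuracy on synthetic data:", model.score(X_test, y_test))
```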
Roberts' journey to decision intelligence began with a breakfast debate. He and Dr. Lorien Pratt "butted heads immediately" at a conference, engaging in the kind of intense intellectual exchange that makes everyone else at the table uncomfortable. They walked away with mutual respect and a collaboration that would shape both their careers.
What resonated with Roberts about decision intelligence was its fundamental departure from conventional AI thinking. "There has long been this tacit 'we have data, so let's make decisions' assumption in AI," he explains. "But you have to start with the scenario, the people, the stakeholders that are involved. Then from there, you look for opportunities to figure out how technology can impact that."
This user-centered approach mirrors what Roberts learned from working with animals: understand needs, measure objectively, iterate responsively, and stay humble about getting it wrong the first time.
Roberts admits he "detests being in the hype cycle." The more people become interested in what he does, the less interested he is in continuing. "I like to go find something new," he says.
This contrarian instinct isn't just personality—it's strategy. When everyone moves in one direction, blind spots emerge. Those gaps become opportunities for researchers willing to challenge orthodoxies around supervised learning paradigms, backpropagation dominance, and reward function design.
Academic research offers Roberts something startup founders rarely have: the luxury of taking big swings with long timelines. "It's very much okay if we spend 10 years working on something and it doesn't pan out," he notes. "In the startup world, you've got what? Two years max if you're really good at fundraising."
When asked for guidance for leaders building AI systems that make genuinely better decisions rather than just faster pattern recognition, Roberts' answer was immediate and unequivocal:
"Be user-centered. Maintain a laser focus on your key stakeholders, whether that's your employees working through processes or your customers or society as a whole."
This principle transcends technical architecture decisions. It shapes research priorities, commercial applications, and ethical frameworks. When you're building systems that affect guide dogs and the people who depend on them—or any high-stakes decision environment—you need confidence that you're creating tools that respect the dignity and intelligence of all parties involved.
Roberts' work suggests that the next major AI breakthrough won't come from scaling existing approaches. It will emerge from fundamentally rethinking how artificial systems learn, incorporating principles that biological organisms have refined over millennia.
The question isn't whether AI can process more data faster. It's whether we can build systems that learn efficiently, generalize effectively, and collaborate naturally with human decision-makers.
As the AI industry confronts the limitations of conventional approaches, researchers like Roberts are charting an alternative path—one that looks to biology not for metaphors, but for fundamental principles that could transform how machines learn.
Listen to the full Dynamic Decisions Podcast episode with Dr. David Roberts to explore the intersection of biological intelligence, decision-making, and the future of AI.