[00:00] Nina Park: From Neural Newscast, this is Model Behavior, AI-focused news and analysis on the models
[00:05] Nina Park: shaping our world.
[00:08] Chad Thompson: Welcome to Model Behavior.
[00:14] Chad Thompson: This program examines how AI systems are built, deployed, and actually operated in real
[00:20] Chad Thompson: professional environments.
[00:22] Chad Thompson: Joining us today is Thatcher Collins, a director-level AI and security leader with a systems-level
[00:28] Chad Thompson: perspective on automation and enterprise risk.
[00:32] Chad Thompson: Thatcher, it is great to have you.
[00:34] Nina Park: We are starting today with news of a growing rift between Anthropic and the Pentagon,
[00:39] Nina Park: which has been rebranded as the Department of War.
[00:43] Nina Park: Tensions have reportedly reached a boiling point over the use of Claude
[00:48] Nina Park: in the military operation to capture Venezuelan President Nicolás Maduro.
[00:53] Nina Park: Now, while Anthropic maintains that its systems cannot be used in lethal autonomous weapons,
[00:59] Nina Park: the department is currently reviewing the relationship, stating,
[01:03] Nina Park: "Our nation requires partners willing to help warfighters win in any fight."
[01:08] Thatcher Collins: The core issue here is the shift from theoretical guardrails to operational reality.
[01:15] Thatcher Collins: When you have a $200 million contract, the tension between a company's safety brand and a military's mission requirement is inevitable.
[01:27] Thatcher Collins: This isn't just about ethics.
[01:30] Thatcher Collins: It's about the resilience of the oversight frameworks we put on these models once they hit the field.
[01:37] Chad Thompson: Adding to the pressure on Anthropic are the accusations it has leveled against the Chinese AI labs DeepSeek, Moonshot, and MiniMax.
[01:48] Chad Thompson: Yesterday's report claims these labs used 24,000 fake accounts to generate 16 million exchanges with Claude.
[01:57] Chad Thompson: They allegedly used a technique called distillation to siphon Claude's reasoning and coding capabilities to improve their own models, bypassing years of research investment.
[02:10] Chad Thompson: Thatcher, how is the industry responding to this?
[02:13] Thatcher Collins: The scale is unprecedented, Chad.
[02:16] Nina Park: Exactly.
[02:18] Thatcher Collins: Anthropic argues this reinforces the need for strict chip export controls, as this level of distillation requires advanced hardware.
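The distillation technique described in this segment can be sketched in a few lines: a "student" model's training data is built from a "teacher" model's responses, transferring capability without access to the teacher's weights. Every name here (`query_teacher`, `build_distillation_corpus`) is a hypothetical stand-in, not any real API.

```python
# Minimal sketch of model distillation as alleged in the report:
# collect teacher responses at scale, then fine-tune a student on them.

def query_teacher(prompt: str) -> str:
    """Stand-in for calling a frontier model's API (the 'teacher')."""
    return f"teacher answer to: {prompt}"

def build_distillation_corpus(prompts):
    """Collect (prompt, teacher_response) pairs for supervised fine-tuning."""
    return [(p, query_teacher(p)) for p in prompts]

corpus = build_distillation_corpus(["explain recursion", "write a sort in Go"])
# Each pair becomes a supervised training example for the student model,
# which is why the report counts millions of exchanges, not model downloads.
print(len(corpus))  # 2
```

The alleged 16 million exchanges map onto the `prompts` list here: the cost of the scheme is API calls, which is why the response centers on account bans and hardware export controls rather than weight security.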
[02:27] Nina Park: This leads into the technical shift we saw this week with the launch of Claude Code Security.
[02:32] Nina Park: Using Claude Opus 4.6, Anthropic identified over 500 high-severity vulnerabilities in open-source projects that had survived years of traditional fuzzing.
[02:44] Nina Park: It represents a move from simple pattern matching to reasoning-based vulnerability hunting.
[02:50] Thatcher Collins: That move is critical.
[02:53] Thatcher Collins: Traditional tools like CodeQL look for known patterns.
[02:57] Thatcher Collins: What we're seeing with reasoning models is the ability to generate and test hypotheses about how data moves through a system.
[03:07] Thatcher Collins: It found zero-days in GoScript and OpenSC by reversing logic and finding algorithm-level edge cases.
[03:15] Thatcher Collins: For security leaders, the question is no longer if we use AI, but whether our defenders have better reasoning tools than the attackers.
[03:26] Chad Thompson: Finally, we're looking at market viability.
[03:29] Chad Thompson: Google VP Darren Maury recently warned that AI startups functioning primarily as wrappers or
[03:35] Chad Thompson: aggregators are at significant risk.
[03:38] Chad Thompson: He noted that the industry has lost patience for thin intellectual property that simply
[03:43] Chad Thompson: white-labels models like Gemini or Claude.
[03:45] Chad Thompson: Meanwhile, we're seeing architectural shifts in production, such as Context7 redesigning
[03:51] Chad Thompson: its system with sub-agents to slash token usage by 65% to combat documentation bloat.
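The sub-agent pattern mentioned here can be sketched simply: rather than stuffing an entire documentation corpus into one prompt, a lightweight retrieval sub-agent selects only the relevant slice, and the main agent's prompt is built from that slice. The corpus, topic-matching logic, and function names below are illustrative assumptions, not the actual architecture.

```python
# Toy documentation corpus standing in for a full docs site.
DOCS = {
    "auth": "How to authenticate: pass an API key in the header.",
    "rate-limits": "Requests are limited to 60 per minute.",
    "webhooks": "Register a callback URL to receive events.",
}

def retrieval_subagent(query: str) -> str:
    """Sub-agent: return only the doc sections whose topic appears in the query."""
    hits = [text for topic, text in DOCS.items() if topic in query.lower()]
    return "\n".join(hits)

def build_prompt(query: str) -> str:
    """Main agent: prompt contains the retrieved slice, not the whole corpus."""
    context = retrieval_subagent(query)
    return f"Context:\n{context}\n\nQuestion: {query}"

full = sum(len(t) for t in DOCS.values())              # cost of naive stuffing
lean = len(retrieval_subagent("What are the rate-limits?"))  # cost with sub-agent
print(lean < full)  # True
```

The token savings come from the gap between `full` and `lean`: the main model only ever pays for the sections a query actually needs.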
[03:58] Nina Park: Thank you for listening to Model Behavior, a neural newscast editorial segment.
[04:03] Nina Park: For more, visit mb.neuralnewscast.com.
[04:07] Nina Park: Neural Newscast is AI-assisted, human-reviewed.
[04:11] Nina Park: View our AI transparency policy at neuralnewscast.com.
[04:16] Nina Park: This has been Model Behavior on Neural Newscast.
[04:19] Nina Park: Examining the systems behind the story. Neural Newscast uses artificial intelligence in content
[04:25] Nina Park: creation with human editorial review prior to publication. While we strive for factual,
[04:30] Nina Park: unbiased reporting, AI-assisted content may occasionally contain errors. Verify critical
[04:36] Nina Park: information with trusted sources. Learn more at neuralnewscast.com.