
Thursday Apr 30, 2026
Abi Olvera on the Case for AI Optimism
A few weeks ago, the AI company Anthropic announced something genuinely strange. They had built a new model, codenamed Mythos, that was so capable at cybersecurity tasks that they decided not to release it to the public. Instead, they're quietly using it with a small group of partners to patch vulnerabilities in the world's most important software before anyone else gets a model this talented.
Abi Olvera is the Research Director at the Golden Gate Institute and the writer behind the Substack "Positive Sum." She specializes in understanding the constraints and capabilities of emerging technology, particularly AI. As a result, she has unusually deep insight into what AI can and can't do, and into what Mythos actually suggests about the pace of AI progress and innovation.
In this episode, I spoke with her about how her working-class background has shaped her views on AI, whether AI is currently improving at an exponential rate, and the positive effects AI might have on the next generation.
Show Notes
Assessing Claude Mythos Preview’s cybersecurity capabilities
"The optimism gap that's shaping AI policy" by Abi Olvera, Existential Hope
"Kelsey Piper on Whether AI Will Kill Us All" from Frames of Space
"To Forecast AI's Impact on Biosecurity, We Asked: Why are Attacks So Rare?" by Abi Olvera, Second Thoughts
"The Most Powerful and Dangerous AI Model Yet" from Plain English with Derek Thompson
"Could Artificial Intelligence undermine constructive disagreement?" by David Rozado, Free the Inquiry