About 10 years ago, Cecilia Aragon (BS ’82) lost the ability to type for extended periods. For a computer scientist who’d been writing code for decades, it was devastating. “I thought my coding days were over,” she said in the Techer Live session, “AI in the Wild: What Actually Works.” Using Claude to describe software architecture, logic, and data flows in natural language, Aragon is now shipping production code as CTO of Viata.ai. “AI didn’t optimize my workflow,” she said. “It gave me back my career.”
Her story set the tone for the evening: AI is genuinely useful right now, but it works because of human expertise, not in place of it. Five alumni with experience at Google, EigenBio, Apple, Microsoft, and NASA spent 75 minutes giving the Techer community a candid, evidence-based assessment of what’s working, what isn’t, and what’s worth paying attention to next.
The panel
Dan Liebling (BS ’02), moderator. AI Engineering Manager, Google Research.
Kristin Gleitsman (PhD ’10), Chief Science Officer, EigenBio; author, AI ∩ Bio newsletter.
Eugene Vinitsky (BS ’14), Assistant Professor, NYU Engineering; founding researcher, Percepta.
Vikram Dendi (MS ’02), former Chief Product Officer, Microsoft; executive advisor, Allen Institute and Gates Foundation.
Cecilia Aragon (BS ’82), Professor, University of Washington; CTO, Viata.ai; former NASA researcher, pilot, and flight instructor.
What panelists use every day
The session opened with each panelist naming the AI tools they’d used that day. The range was striking. Gleitsman pointed to Gamma, a slide-generation tool she’s found useful for synthesizing information into presentable formats at a new job. Vinitsky described using Claude as a controller for Notion, generating charts and schedules with minor manual cleanup. Dendi highlighted Alter, a desktop automation and model-routing tool that lets him bring multiple AI models into his workflow. And Aragon, as described above, uses Claude to write production software through natural-language collaboration.

For academic research, Vinitsky mentioned layering AI onto existing tools including Google Scholar, Semantic Scholar, and Scholar Inbox, a personalized paper-recommendation tool. On the enterprise side, Gleitsman noted that Microsoft Copilot wins adoption not because it has the best model, but because it’s embedded in Outlook, Teams, and SharePoint. “If you can have a tool that hooks up seamlessly to the sources of data that you use, that’s gonna win, even if the model is less good.”
“Forty years of software expertise didn’t become irrelevant. It became essential.”
Cecilia Aragon (BS ’82)
Where AI falls short, and where friction is a good thing
The panel was equally direct about limitations. This audience would expect nothing less.
Aragon was blunt about the AGI narrative: “The whole idea that AI is replacing human expertise—that it’s like a smart human—is just so incorrect.” She drew on aviation safety research, where automation complacency (trusting a system because it has been right the last 100 times) has led to fatal crashes. “We have major crashes that trace directly to automation complacency.”
Gleitsman offered a framework the scientists in the audience could appreciate: treat AI outputs as hypotheses, not conclusions. “These models are at the level of creating really great hypotheses. The challenge is that they present these ideas as if they’re facts.” She also flagged a structural problem with AI for scientific research: models trained on published literature inherit its biases, including the tendency to publish only positive results and the influence of funding trends on what gets studied.
On literature review, both Vinitsky and Aragon described AI as a marginal addition to their workflow, not a transformation. Vinitsky framed the gap precisely: “The thing that determines whether a paper enters the stack is a network of trust around who recommended it. I haven’t found a way yet to get that level of trust from the AI system.” Aragon agreed. AI can summarize, but it misses the creative spark that makes a paper relevant to a specific research question.
Gleitsman challenged the assumption that removing friction is always an improvement. An idea developed entirely through conversations with a chatbot may move faster initially, she argued, but will hit resistance when colleagues encounter it for the first time with no buy-in. “There are places where friction is good—it’s good for collaboration, it’s actually good for scientific advancement.”
Practical advice: start now, go deep, stay human
On choosing tools, the panel converged: prioritize integration over capability, and don’t waste time chasing the best model. “It’s a bit of a golden age,” Dendi said. “If you end up spending a lot of time trying to find the best tool, you’ll waste a lot of time.”
On AI literacy, the advice was unanimous: just get started on low-stakes projects where mistakes don’t matter. Vinitsky shared that he and a colleague spent winter break using LLMs to build a full social media platform that became his primary way of interacting with the internet. “Now I know exactly what it can’t do, because there are certain bugs it will make repeatedly.”
Should people still learn to code? Yes, unanimously. Aragon: “Because that’s how you can tell if the code you get back is wrong.” Dendi: “Learn to code because it makes you a better systems thinker.”
The deeper career advice pointed beyond tools entirely. Aragon urged Techers to invest in domain expertise, not prompting skills. “The one who’s most useful in the future is the one who knows their field deeply enough to collaborate with AI. Caltech graduates have an advantage—being trained to think from first principles. That’s the skill AI can’t yet replicate.”
Dendi refined the point: the enduring human advantage isn’t specialist knowledge alone but the ability to describe problems correctly—to interrogate a problem space, understand its edge cases, and see how it connects to other domains. “If you can get really good at understanding and describing a problem space, you can use all the AI in the world to do your work.”
As Aragon put it: “Forty years of software expertise didn’t become irrelevant. It became essential.”
Techer Live is a virtual event series for the Caltech alumni community. For upcoming events, visit alumni.caltech.edu/techer-live.