‘I Thought I Wasn’t Qualified’—How a Google Engineer Broke Into AI Safety Without a PhD


“I thought I wasn’t qualified.”

That’s what Emrick Donadei, a software engineer at Google, told himself when he first considered moving into one of tech’s hottest—and most intimidating—fields: AI safety.

No PhD in machine learning. No research papers. No prior experience with large language models (LLMs). Just a traditional backend engineer with curiosity and a willingness to show up.

Fast forward two years, and Donadei is now working on AI safety projects at Google—helping ensure that powerful AI systems behave as intended and don’t amplify harm. His secret weapon? Not a fancy degree, but something far more accessible: internal company hackathons.

His journey offers a rare, real-world playbook for professionals who feel “underqualified” but dream of entering the AI revolution. And the best part? You don’t need to work at Google to apply his strategy.


Who Is Emrick Donadei—and What Is AI Safety?

Emrick Donadei is a software engineer who joined Google to work on infrastructure systems—far removed from the bleeding edge of AI. But as Google doubled down on large language models like Gemini, he grew fascinated by the ethical and technical challenges they posed.

AI safety isn’t just about preventing “AI doom.” It’s a practical discipline focused on:

  • Ensuring AI systems are aligned with human values
  • Preventing harmful outputs (bias, misinformation, toxicity)
  • Building robustness against adversarial attacks
  • Making models interpretable and controllable

Traditionally, these roles went to researchers with advanced degrees. But as AI becomes embedded in products, companies now need engineers who can implement safety guardrails—not just theorize about them.

The AI Safety Career Mindset Shift

Donadei’s biggest hurdle wasn’t technical—it was psychological. “I kept thinking, ‘I’m not a researcher. I don’t have a math-heavy background. I don’t belong,’” he admitted in an interview.

But he realized something critical: AI safety needs builders, not just theorists. Someone has to write the code that enforces content filters, monitors model drift, and integrates human feedback loops. That’s engineering—not pure research.
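To make that concrete, here is a minimal sketch of what one such guardrail could look like: a toy output filter that screens a model’s response before it reaches the user. Everything here (the blocklist, the `moderate_output` function) is illustrative scaffolding, not Google’s actual safety code.

```python
# Toy output guardrail: screen a model response against a blocklist
# and a length sanity check before showing it to the user.
# The function name and policy list are illustrative, not production code.

BLOCKED_PHRASES = {"build a weapon", "steal credentials"}  # stand-in policy list

def moderate_output(response: str, max_chars: int = 4000) -> str:
    """Return the response if it passes basic checks, else a safe fallback."""
    lowered = response.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return "Sorry, I can't help with that."  # policy hit: refuse outright
    if len(response) > max_chars:
        return response[:max_chars] + " [truncated]"  # cap runaway generations
    return response

print(moderate_output("Here is a recipe for banana bread."))
```

Real systems use trained classifiers rather than string matching, but the engineering shape (intercept, check, decide) is the same, and it is ordinary software work.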

This mindset shift—from “I’m not qualified” to “I can contribute”—was the first step in his transformation.

How Hackathons Became His Breakthrough

Google runs frequent internal hackathons, often themed around emerging priorities—like AI safety or responsible innovation. Donadei decided to jump in, even though he felt out of his depth.

During one such event, he teamed up with researchers from Google’s AI Safety team. His task? Build a prototype tool that could visualize how small input changes can cause large, unpredictable shifts in an LLM’s output—a phenomenon known as “prompt sensitivity.”

Though he didn’t know PyTorch inside out, he leveraged his strengths:

  • Frontend skills to create an intuitive UI
  • System design knowledge to handle API rate limits
  • Debugging intuition to spot edge cases
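To make “prompt sensitivity” concrete, here is a minimal sketch of the kind of measurement such a tool might visualize: run a prompt and a slightly perturbed variant through the model, then score how far the outputs diverge. The `query_model` stub and its canned responses are hypothetical stand-ins so the sketch runs on its own; this is not Donadei’s actual prototype.

```python
import difflib

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM endpoint; canned answers
    # let the sketch run without any API access.
    canned = {
        "Summarize the report.": "The report covers Q3 revenue, hiring, and key risks.",
        "Summarize the report!": "Q3 looked fine overall. Nothing major to flag.",
    }
    return canned.get(prompt, "No answer available.")

def divergence(prompt_a: str, prompt_b: str) -> float:
    """0.0 = identical outputs, 1.0 = completely different outputs."""
    out_a, out_b = query_model(prompt_a), query_model(prompt_b)
    return 1.0 - difflib.SequenceMatcher(None, out_a, out_b).ratio()

base = "Summarize the report."
variant = "Summarize the report!"  # one-character perturbation
print(f"divergence = {divergence(base, variant):.2f}")
```

A UI layer on top of scores like this is exactly where his frontend strength came in: it turns a raw metric into something reviewers can explore.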

The prototype impressed senior leaders. More importantly, it got him noticed. Within months, he was invited to join a cross-functional AI safety sprint—and eventually transitioned into the team full-time.

3 Lessons From His Transition Into AI

Donadei’s story offers actionable takeaways for any engineer eyeing an AI safety career:

  1. Start where you are: You don’t need to master transformers overnight. Use your existing skills (testing, DevOps, UX) as a bridge.
  2. Seek visibility, not perfection: Hackathons reward bold ideas, not flawless code. Ship something—even if it’s rough.
  3. Build relationships, not just models: AI safety is interdisciplinary. Talk to ethicists, product managers, and policy experts. Your network opens doors.

As Donadei put it: “The gap between traditional engineering and AI isn’t a wall—it’s a hallway. You just have to walk through it.”

How You Can Replicate His Path (Even Outside Google)

You don’t need a FAANG job to follow this playbook:

  • Join public hackathons: Events like the NeurIPS AI Safety Hackathon or Hugging Face challenges welcome all skill levels.
  • Contribute to open-source AI safety tools: Projects like MLCommons or Partnership on AI need engineers.
  • Build in public: Share your learning journey on GitHub or LinkedIn. Many AI teams scout talent this way.
  • Take targeted courses: Free resources like DeepLearning.AI’s “AI For Everyone” or Anthropic’s Safety tutorials offer practical grounding.


Why AI Safety Needs Diverse Backgrounds

The myth that only PhDs can work on AI safety is not just exclusionary—it’s dangerous. As AI impacts healthcare, justice, and finance, we need engineers who understand real-world systems, user behavior, and edge cases.

Donadei’s infrastructure background, for example, helps him design monitoring systems that scale. A former teacher might spot educational bias. A journalist might catch misinformation patterns.

Diversity of experience = robustness in AI. That’s the real lesson here.

Conclusion

Emrick Donadei’s journey proves that an AI safety career isn’t reserved for the academic elite. With curiosity, proactive participation, and the courage to start before you feel “ready,” engineers from any background can make a meaningful impact. In a field racing toward unprecedented power, we don’t just need brilliant minds—we need grounded builders who can keep AI honest, safe, and human-centered. And that might just be you.

Sources

[1] “‘I thought I wasn’t qualified’: Google engineer reveals what helped his career,” Times of India
[2] Google AI Safety Team Public Resources, https://ai.google/safety
[3] MLCommons Open Source Projects, https://github.com/mlcommons
[4] Partnership on AI, https://www.partnershiponai.org/
[5] DeepLearning.AI “AI For Everyone” Course, Coursera
