Google’s Multi-Layered Approach to AI Safety
AI-created, human-edited.
Building AI that people can trust isn’t just a technical challenge—it is a foundational requirement for bringing artificial intelligence into our everyday lives. On a recent episode of Intelligent Machines, Tulsee Doshi, Senior Director and Product Lead for Gemini Models at Google, shared her framework for developing safe, inclusive, and powerful AI models.
As AI systems rapidly enter everything from search to productivity apps, Doshi emphasized one principle above all: technology that impacts everyone must work safely for everyone. For her, user trust is non-negotiable if people are to rely on AI for personal, professional, and creative tasks.
Google’s Gemini models, now at the core of much of the company’s AI stack, are built with safety and inclusivity as central design pillars. Doshi explained that every release is vetted not just for technical quality, but for how it handles sensitive content and avoids harmful outputs.
Doshi outlined a three-stage model development process (a rough code sketch follows the list):
- Pre-training: Huge datasets and compute power create a base model.
- Post-training/fine-tuning: The model is refined for quality and accuracy and tuned to meet Google’s safety benchmarks.
- Inference time: Additional strategies, like “Deep Think,” can boost model performance and safety on the fly as the model generates responses for users.
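To make the separation concrete, here is a minimal, self-contained sketch of where safety work can attach at each stage. It is purely illustrative: the `ToyModel` class, the benchmark check, and the blocklist filter are hypothetical stand-ins, not Google’s actual pipeline or tooling.

```python
# Conceptual sketch of the three stages above. This is NOT Google's
# pipeline: every class, dataset, threshold, and filter here is a
# hypothetical stand-in showing where safety work can attach.

class ToyModel:
    """Stand-in for a language model backed by a trivial lookup table."""
    def __init__(self):
        self.knowledge = {}

    def respond(self, prompt: str) -> str:
        return self.knowledge.get(prompt, "I'm not sure.")


def pretrain(corpus: dict) -> ToyModel:
    # Stage 1: build a base model from a large corpus (here, a dict).
    model = ToyModel()
    model.knowledge.update(corpus)
    return model


def posttrain(model: ToyModel, safety_benchmarks: list) -> ToyModel:
    # Stage 2: refine for quality/accuracy and require the model to
    # clear every safety benchmark before it ships.
    for benchmark in safety_benchmarks:
        assert benchmark(model), f"failed benchmark: {benchmark.__name__}"
    return model


def safe_generate(model: ToyModel, prompt: str, blocklist: set) -> str:
    # Stage 3: inference-time guardrails applied to each response.
    draft = model.respond(prompt)
    if any(term in draft.lower() for term in blocklist):
        return "I can't help with that."
    return draft


if __name__ == "__main__":
    base = pretrain({"capital of France?": "Paris."})
    model = posttrain(base, safety_benchmarks=[lambda m: True])
    print(safe_generate(model, "capital of France?", blocklist={"credit card"}))
```

The point of the separation is that each stage gets its own safety lever: data curation in pre-training, benchmarks in post-training, and filters or extra reasoning at inference time.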
What sets Google apart, according to Doshi, is continuous, multi-dimensional testing. Every new Gemini model is evaluated not only for how well it answers questions, but also for unintended effects like bias, harmful content, privacy issues, or vulnerabilities in areas like cybersecurity.
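One rough way to picture multi-dimensional testing is a harness that scores every response along several axes rather than answer quality alone. The sketch below is illustrative only; the dimensions, heuristics, and scoring rules are placeholders, not the evaluations Google actually runs.

```python
# Illustrative multi-dimensional evaluation harness. The dimensions and
# scoring heuristics are placeholders, not Google's real metrics.

from dataclasses import dataclass

@dataclass
class EvalResult:
    dimension: str
    score: float  # 1.0 = fully acceptable, 0.0 = clear failure

def eval_quality(prompt: str, response: str) -> EvalResult:
    # Placeholder: a real quality metric would compare against references
    # or use a rater model, not just check for a non-empty answer.
    return EvalResult("quality", 1.0 if response.strip() else 0.0)

def eval_harmful_content(prompt: str, response: str) -> EvalResult:
    blocked_phrases = {"how to make a weapon"}  # placeholder blocklist
    hit = any(p in response.lower() for p in blocked_phrases)
    return EvalResult("harmful_content", 0.0 if hit else 1.0)

def eval_privacy(prompt: str, response: str) -> EvalResult:
    # Crude stand-in for detecting leaked personal data such as emails.
    leaked = "@" in response
    return EvalResult("privacy", 0.0 if leaked else 1.0)

EVALUATORS = [eval_quality, eval_harmful_content, eval_privacy]

def evaluate(prompt: str, response: str) -> dict:
    """Return a per-dimension score report for one prompt/response pair."""
    return {r.dimension: r.score for r in (e(prompt, response) for e in EVALUATORS)}

if __name__ == "__main__":
    report = evaluate("Recommend a password manager.", "Try a reputable open-source option.")
    print(report)  # {'quality': 1.0, 'harmful_content': 1.0, 'privacy': 1.0}
```

The value of a report like this is that a model can only ship when every dimension clears its bar, not just the one measuring how well it answers questions.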
Google uses its Frontier Safety Framework to test how the most capable models could be misused or manipulated, and it puts guardrails and emergency processes in place as capabilities increase.
Google is in an unusual position: it can personalize AI experiences by drawing on user data while upholding user privacy. Doshi noted that Google’s decades-long investment in privacy infrastructure allows Gemini to offer tailored experiences (like relevant recommendations or context-aware searching) without starting from scratch in data stewardship.
Importantly, Google users retain transparency and control: they can choose where and how their data is used and opt in or out as they see fit. Trust is built not just with features, but with user agency.
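One way to read “user agency” in engineering terms is consent-gated data access: a feature only touches personal data when the user has opted in for that specific purpose, and it still works (just less personally) when they have not. The sketch below assumes a hypothetical per-purpose settings object; it is not a description of Google’s actual privacy infrastructure.

```python
# Hypothetical consent-gated personalization. The settings model and
# purpose names are illustrative, not Google's actual controls.

from dataclasses import dataclass, field

@dataclass
class PrivacySettings:
    # Per-purpose opt-ins the user can change at any time.
    allow: dict = field(default_factory=dict)

    def permits(self, purpose: str) -> bool:
        return self.allow.get(purpose, False)  # default: opted out

def recommend(query: str, history: list, settings: PrivacySettings) -> str:
    if settings.permits("personalization") and history:
        return f"Because you searched '{history[-1]}', here are tailored results for '{query}'."
    return f"Generic results for '{query}'."  # same feature, no personal data used

if __name__ == "__main__":
    opted_in = PrivacySettings(allow={"personalization": True})
    opted_out = PrivacySettings()
    print(recommend("hiking boots", ["trail maps"], opted_in))
    print(recommend("hiking boots", ["trail maps"], opted_out))
```

The design choice that matters is the default: personalization is off until the user turns it on, and turning it off never removes the underlying feature.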
Doshi is a leading advocate for equitable technology. She highlighted how Gemini’s improvements in language understanding, localization, and accessibility can help democratize information for non-English speakers and underrepresented communities. By intentionally broadening data collection and evaluation to reflect diverse needs, Google’s AI can serve a global user base more fairly.
The process for reducing bias, Doshi explained, is systemic: every stage—data, metrics, user feedback, and product decisions—includes checks for equity and inclusiveness. Only by designing for differences can AI “work for more people, in more contexts, more of the time.”
Key Takeaways:
- AI safety and trust are built in at every stage: Google’s AI teams measure and improve models for both quality and responsibility, not just performance.
- Continuous testing and “frontier safety” practices help prepare for future models with even greater capability and risk.
- User privacy and agency are foundational: Google’s infrastructure offers control, transparency, and consistency in how data shapes your AI experience.
- Inclusion is a technical and social challenge: Deliberate attention to data diversity, bias checks, and community feedback creates more equitable AI.
- Responsible development now will pay off in broader, safer AI adoption in the long run.
Practical Advice from Tulsee Doshi:
- If you’re building or deploying AI, start with clear safety benchmarks and real-world testing, not just technical performance.
- Design for user choice and transparency—allow people to control how their data is used in your products.
- Expand your teams’ awareness of bias at every development stage; don’t wait until after launch to think about inclusivity.
- Stay proactive: as models become more capable, the ethical and safety risks grow; don’t be caught off guard.
- Remember: building for trust isn’t a constraint, but an accelerator for AI’s positive adoption.
For Tulsee Doshi and Google’s Gemini team, true AI progress is measured not only by capability, but by responsibility. Models that are safe, inclusive, and privacy-respecting are what will unlock AI’s full potential for everyone—not just a tech-savvy few.
Want the full conversation? Listen to the entire interview with Tulsee Doshi on Intelligent Machines for in-depth insights into building safe, inclusive AI at Google.