Nearly All Coders Now Use AI—But Nobody Trusts It, Google Finds
Software developers have embraced artificial intelligence tools with the enthusiasm of kids discovering candy, yet they trust the output about as much as a politician’s promises.
Google Cloud’s 2025 DORA Report, released Wednesday, shows that 90% of developers now use AI in their daily work, up 14 percentage points from last year.
The report also found that only 24% of respondents actually trust the information these tools produce.
The annual research, which surveyed nearly 5,000 technology professionals worldwide, paints a picture of an industry that is trying to move fast without breaking things.
Developers spend a median of two hours daily working with AI assistants, integrating them into everything from code generation to security reviews. Yet 30% of these same professionals trust AI output either “a little” or “not at all.”
“If you are an engineer at Google, it is unavoidable that you will be using AI as part of your daily work,” Ryan Salva, who oversees Google’s coding tools, including Gemini Code Assist, told CNN.
The company’s own metrics show that more than a quarter of Google’s new code now springs from AI systems, with CEO Sundar Pichai claiming a 10% productivity boost across engineering teams.
Developers mostly use AI to write and modify new code. Other use cases include debugging, reviewing, and maintaining legacy code, alongside more educational purposes such as explaining concepts or writing documentation.
Despite the lack of trust, over 80% of surveyed developers reported that AI enhanced their work efficiency, while 59% noted improvements in code quality.
However, here’s where things get peculiar: 65% of respondents described themselves as at least moderately reliant on these tools, despite not fully trusting them. Within that group, 37% reported “moderate” reliance, 20% said “a lot,” and 8% admitted to “a great deal” of dependence.
This trust-productivity paradox aligns with findings from Stack Overflow’s 2025 survey, where distrust in AI accuracy jumped from 31% to 46% in a single year, even as adoption reached 84%.
Developers treat AI like a brilliant but unreliable coworker—useful for brainstorming and grunt work, but everything needs double-checking.
Google’s response involves more than just documenting the trend.
On Tuesday, the company unveiled its DORA AI Capabilities Model, a framework identifying seven practices designed to help organizations capture the value of AI while mitigating its risks.