Nearly All Coders Now Use AI—But Nobody Trusts It, Google Finds

Software developers have embraced artificial intelligence tools with the enthusiasm of kids discovering candy, yet they trust the output about as much as a politician’s promises.

Google Cloud’s 2025 DORA Report, released Wednesday, shows that 90% of developers now use AI in their daily work, up 14 percentage points from last year.

The report also found that only 24% of respondents say they have a high degree of trust in the information these tools produce.

The annual research, which surveyed nearly 5,000 technology professionals worldwide, paints a picture of an industry that is trying to move fast without breaking things.

Developers spend a median of two hours daily working with AI assistants, integrating them into everything from code generation to security reviews. Yet 30% of these same professionals trust AI output either “a little” or “not at all.”

“If you are an engineer at Google, it is unavoidable that you will be using AI as part of your daily work,” Ryan Salva, who oversees Google’s coding tools, including Gemini Code Assist, told CNN.

The company’s own metrics show that more than a quarter of Google’s new code now springs from AI systems, with CEO Sundar Pichai claiming a 10% productivity boost across engineering teams.

Developers mostly use AI to write and modify new code. Other use cases include debugging, code review, and maintaining legacy code, along with more educational purposes such as explaining concepts or writing documentation.

Image: Google

Despite the lack of trust, over 80% of surveyed developers reported that AI enhanced their work efficiency, while 59% noted improvements in code quality.

However, here’s where things get peculiar: 65% of respondents said they rely on these tools at least moderately, despite not fully trusting them.

That figure breaks down into 37% reporting “moderate” reliance, 20% saying “a lot,” and 8% admitting to “a great deal” of dependence.

This trust-productivity paradox aligns with findings from Stack Overflow’s 2025 survey, in which distrust in AI accuracy rose from 31% to 46% in a single year, even as adoption reached 84%.

Developers treat AI like a brilliant but unreliable coworker—useful for brainstorming and grunt work, but everything needs double-checking.

Image: Stack Overflow

Google’s response involves more than just documenting the trend.

On Tuesday, the company unveiled its DORA AI Capabilities Model, a framework that identifies seven practices designed to help organizations harness the value of AI without incurring risks.

The model advocates user-centric design, clear communication protocols, and what Google calls “small-batch workflows”: essentially, keeping AI-assisted changes small and reviewable rather than letting the tools operate unsupervised.

The report also introduces team archetypes ranging from “Harmonious high-achievers” to groups stuck in a “Legacy bottleneck.”

These profiles emerged from an analysis of how different organizations handle AI integration. Teams with strong existing processes saw AI amplify their strengths. Fragmented organizations watched AI expose every weakness in their workflow.

Image: Google

The full State of AI-assisted Software Development report and the companion DORA AI Capabilities Model documentation are available through Google Cloud’s research portal.

The materials include prescriptive guidance for teams looking to be more proactive in their adoption of AI technologies—assuming anyone trusts them enough to implement them.
