Baruch Sadogursky - Can We Trust AI-Generated Code? Maybe We've Been Asking the Wrong Question.
- Alexy Khrabrov
- Apr 7

Baruch Sadogursky is Head of Developer Relations at TuxCare. He did Java pre-generics and DevOps pre-Docker, joining JFrog's DevRel team when the company had ten people and helping lead it to a $6B IPO. He co-authored "Liquid Software" and "DevOps Tools for Java Developers," is a Java Champion and CNCF Ambassador alumnus, serves on various conference committees, and speaks at major industry events.
Can We Trust AI-Generated Code? Maybe We've Been Asking the Wrong Question
No one trusts AI-generated code. It looks right. It sounds confident. But does it actually do what we expect?
Having AI test its own work doesn't help. If we can't trust it to write the code, why would we trust it to write the tests after the fact? That's not verification; it's an echo chamber.
That leaves us manually checking everything. The safest bet is to assume it’s wrong and review every line yourself, which doesn’t exactly scream “productivity boost.”
So what’s the alternative?
Maybe we've been looking at this the wrong way. AI might be trustworthy, but only if we rethink how we guide it. What if there were a way to ensure it understands our intent before it writes a single line of code? A way to catch mistakes before they happen instead of fixing them afterward?
An excited AI developer advocate and a cynical senior engineering manager take the stage to debate whether AI-driven development is finally ready for prime time or just another way to get things wrong.