

There is no shortage of AI content right now. Unfortunately, a lot of it lives somewhere between breathless hype and thinly disguised marketing copy. Every vendor claims AI will transform software delivery, replace half your team, and solve all your operational problems if you just buy the platform and turn the crank.
That is not how real implementation works.
In this episode of Day Two DevOps, Kyler and I spoke with Enrico Teotti, an independent consultant with deep experience in software development, product management, and practical AI adoption. What I appreciated most about Enrico’s perspective is that it was grounded in actual delivery work. Not theory. Not investor-speak. Not “agents will do everything.” Just a clear-eyed look at where AI helps, where it creates risk, and what teams need to change in order to use it responsibly.
One of the big takeaways from the conversation is that AI does not have to do something flashy to be valuable.
Enrico described a workflow where he gave an AI coding tool access to a monolithic application, a production data replica, and supporting observability tools. That combination let him ask questions that would normally require a lot of manual stitching across systems.
That is not the kind of demo that gets applause on a keynote stage. But it is exactly the kind of thing that makes teams faster and more effective.
The interesting part is not just that the AI could generate a SQL query. Plenty of tools can help with that. The real value came from combining multiple sources of context: the application code itself, a replica of production data, and the observability tools that show how the system actually behaves.
When those pieces come together, AI becomes less of a toy and more of a force multiplier.
Another theme that came through loud and clear was the importance of guardrails.
Enrico was very explicit that he does not trust AI systems enough to let them operate freely in production environments. He talked about using read-only access to production replicas instead of write access, and keeping a human firmly in the loop when investigating bugs or performance issues.
That matches my own experience. If you are going to let AI interact with real systems, the safest starting point is read-only access to replicas rather than live systems, narrowly scoped credentials rather than your own, and a human reviewing every action before it happens.
This is one of those areas where teams get into trouble because the easy path is to hand the model your existing credentials and hope for the best. That is not a safety model. That is optimism wearing a trench coat.
The better approach is deliberate constraint. Give the tool enough access to be useful, but not enough to cause a disaster because it confidently made the wrong call.
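As a concrete illustration of that deliberate constraint, here is a minimal sketch of a gate that sits between an AI tool and a read-only replica. The `guard_query` function and its keyword list are hypothetical, not something discussed in the episode; real enforcement should live in database permissions (a read-only role), with a filter like this serving only as a second layer.

```python
import re

# Keywords that indicate a write or schema change; defense in depth only,
# since the database role itself should already be read-only.
FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|truncate|grant)\b",
    re.IGNORECASE,
)

def guard_query(sql: str) -> str:
    """Reject anything that is not a read-only statement before it
    reaches the production replica."""
    stripped = sql.strip().rstrip(";")
    if not stripped.lower().startswith(("select", "explain", "with")):
        raise PermissionError("only read-only statements are allowed")
    if FORBIDDEN.search(stripped):
        raise PermissionError("statement contains a write keyword")
    return stripped
```

The point is not that this filter is bulletproof (it is not), but that the AI tool never holds credentials capable of writing in the first place; the gate just makes a confident wrong call fail loudly instead of silently.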
A lot of AI discussion still treats software development as if the hard part is typing code.
It is not.
Enrico’s product background gave him a useful lens on the whole conversation. He kept returning to questions like: who is this for, what problem does it actually solve, and is it worth doing at all?
That last question is especially important.
One of the traps AI creates is that it lowers the cost of making changes, which means it becomes dangerously easy to change things just because you can. If a tool helps you identify an issue in minutes, the natural temptation is to go fix it immediately. But if no one uses the feature, or the issue is not tied to an important workflow, then it may not deserve the effort.
This is where product thinking becomes even more valuable. Faster code generation does not eliminate the need for judgment. It increases the need for judgment.
In other words: if AI makes implementation cheaper, prioritization becomes more important, not less.
Enrico made a blunt observation that I think will make some people uncomfortable: the people most at risk are not necessarily the people whose skills are obsolete, but the people who stop being curious.
That resonates. There’s a reason I use the title Curious Human. It encapsulates my outlook and reminds me to maintain my curiosity as I get older and more curmudgeonly.
The most effective use of AI I have seen does not come from blind trust or total resistance. It comes from people who are curious enough to experiment, skeptical enough to verify, and experienced enough to connect what the model says back to the actual business problem.
That combination is hard to automate, and it is probably the real skill set teams should be cultivating right now.
If you are trying to sort signal from noise in the AI conversation, this episode is worth your time.
Enrico brought a pragmatic perspective that I think more teams need to hear. AI can absolutely help with debugging, analysis, implementation, and discovery. But it works best when it is surrounded by good constraints, strong product thinking, testing discipline, cost awareness, and people who know enough to challenge the output.
That may not be as exciting as the promise of fully autonomous software development. But it sounds a lot more like the real world.
And in the real world, that is usually where the value lives.
If you want to listen to the full episode, check out D2DO301: Actually Implementing AI. You can also find more of Enrico’s writing on his blog, which is well worth reading if you care about practical AI implementation instead of hand-wavy futurism.
April 29, 2026

