Insights
What AI engineering teams are actually measuring
Frameworks, research, and practical guides for CTOs and engineering leaders navigating AI coding tools at scale.
What are DORA metrics? The engineering metric AI just broke
DORA metrics are four validated measures of software delivery performance. In 2026, 85% of developers regularly use AI coding tools, yet the four metrics were designed for a world where engineers wrote every line. That assumption no longer holds.
Prompt-to-PR attribution: what it is, why GitHub cannot track it, and how to evaluate it
Prompt-to-PR attribution connects the engineer's AI coding session to the pull request it produced. It links AI input quality, agent-authored code percentage, and review outcome in a single dataset. No engineering intelligence tool instruments at this layer today.
The four categories of enterprise AI and why engineering needs its own
Enterprise AI categories split into four types, and most purchasing conversations conflate all of them. A CTO who cannot tell these apart will spend budget on a tool that delivers seat usage dashboards when what the board actually needs is session-level attribution.
DORA metrics explained, and why they break for AI-first teams
DORA metrics are four engineering performance measures, validated across thousands of organisations as the industry standard for benchmarking software delivery. In 2026, they remain necessary but no longer sufficient.
AI code quality metrics beyond DORA: the CTO's guide
Your board is asking for AI ROI numbers. Your traditional DORA metrics cannot produce them. Here is why, and what a credible measurement framework looks like instead.