Insights

What AI engineering teams are actually measuring

Frameworks, research, and practical guides for CTOs and engineering leaders navigating AI coding tools at scale.

Engineering Intelligence·4 min read

What are DORA metrics? The engineering metric AI just broke

DORA metrics are four validated measures of software delivery performance. In 2026, 85% of developers regularly use AI coding tools, yet the four metrics were designed for a world where engineers wrote every line. That assumption no longer holds.

Read article →
Engineering Intelligence·6 min read

Prompt-to-PR attribution: what it is, why GitHub cannot track it, and how to evaluate it

Prompt-to-PR attribution connects the engineer's AI coding session to the pull request it produced. It links AI input quality, agent-authored code percentage, and review outcome in a single dataset. No engineering intelligence tool instruments at this layer today.

Read article →
Engineering Intelligence·6 min read

The four categories of enterprise AI and why engineering needs its own

Enterprise AI tools fall into four distinct categories, and most purchasing conversations conflate all of them. A CTO who cannot tell them apart will spend budget on a tool that delivers seat-usage dashboards when what the board actually needs is session-level attribution.

Read article →
Engineering Intelligence·4 min read

DORA metrics explained, and why they break for AI-first teams

DORA metrics are four engineering performance measures validated across thousands of organisations as the industry standard for benchmarking software delivery performance. In 2026, they remain necessary but no longer sufficient.

Read article →
Engineering Intelligence·8 min read

AI code quality metrics beyond DORA: the CTO's guide

Your board is asking for AI ROI numbers. Your traditional DORA metrics cannot produce them. Here is why, and what a credible measurement framework looks like instead.

Read article →