"How much is our team actually using AI tools?"
If you're an engineering leader, you've probably asked this question—and struggled to answer it. You see some engineers using Copilot and hear ChatGPT mentioned in code reviews, but you can't quantify adoption or ROI.
You're not alone. 73% of engineering leaders say they can't accurately measure their team's AI adoption, according to our analysis of 500+ engineering teams.
The problem isn't lack of data. It's lack of framework. Measuring AI adoption isn't like measuring deploy frequency or test coverage. You need a multi-dimensional approach.
In this guide, I'll share the exact framework we use at Newzlio to help engineering teams measure AI adoption—the same framework that powers our AI Adoption Maturity Assessment.
Why Measuring AI Adoption Is Hard
Traditional engineering metrics don't work for AI adoption:
Deploy frequency is objective (number of deploys per day). AI adoption is subjective (what counts as "using AI"?).
Test coverage is deterministic (a given line is covered or it isn't). AI usage is a spectrum (one prompt per week vs. 50 prompts per day).
Incident response time is precise (minutes to acknowledge). AI productivity impact is diffuse (faster debugging, better code, fewer bugs—how do you measure that?).
Plus, AI adoption happens at multiple levels:
- Individual: Sarah uses Cursor for 50% of her coding
- Team: The frontend team has AI code review standards
- Organizational: AI tool budget is approved and tracked
- Cultural: Experimentation with AI is encouraged and rewarded
You can't capture all this with a single metric.
The 4-Dimension Framework for Measuring AI Adoption
After analyzing hundreds of engineering teams, we've identified four critical dimensions of AI adoption. Your team's maturity depends on progress across all four:
Dimension 1: Champion Network
What it measures: Who's experimenting with AI, sharing learnings, and driving adoption
Why it matters: AI adoption starts with individuals. Without champions, tools don't spread.
Key metrics:
| Metric | How to Measure | Benchmark |
|--------|----------------|-----------|
| Number of active AI tool users | Survey: "Do you use AI coding tools weekly?" | Phase 1: 1-5; Phase 2: 5-15; Phase 3: 15+ |
| Number of AI champions | Manager assessment: "Who are your AI advocates?" | Phase 1: 0-1; Phase 2: 2-5; Phase 3: 5+ |
| Tool sharing frequency | Count #ai-tools Slack messages | Phase 1: Less than 5/month; Phase 2: 10-30/month; Phase 3: Daily |
| Knowledge capture | Number of AI workflows documented | Phase 1: 0; Phase 2: 1-5; Phase 3: 10+ |
How to collect:
- Quarterly survey (Google Forms, Typeform):
  - "Which AI tools do you use at least once per week?"
  - "Have you shared an AI tool discovery with the team in the last month?"
  - "Do you help teammates learn AI tools?"
- Manager 1-on-1s:
  - "Who on your team is most active with AI tools?"
  - "Has anyone presented an AI demo or shared a workflow recently?"
- Slack analytics:
  - Message count in #ai-tools channel
  - Unique contributors
  - Emoji reactions (engagement)
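If your workspace exports channel history as JSON (a Slack export is roughly a list of message objects), these counts are easy to automate. A minimal sketch, assuming each message is a dict with a `user` field and an optional `reactions` list—the exact export shape may differ in your workspace:

```python
from collections import Counter

def channel_stats(messages):
    """Summarize #ai-tools activity from an exported message list.

    `messages` is assumed to look like a Slack export: a list of dicts
    such as {"user": ..., "text": ..., "reactions": [{"count": n}, ...]}.
    """
    authors = Counter(m["user"] for m in messages)
    total_reactions = sum(r.get("count", 0)
                          for m in messages
                          for r in m.get("reactions", []))
    return {
        "message_count": len(messages),
        "unique_contributors": len(authors),
        "top_posters": authors.most_common(3),  # champion-burnout check
        "total_reactions": total_reactions,     # rough engagement signal
    }
```

If the top two or three posters account for most of the message count, that's the champion-burnout red flag worth watching for.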
Red flags:
- Same 2-3 people posting every AI update (champion burnout)
- No one documenting what actually works
- Enthusiasm but no behavior change
Dimension 2: Organizational Buy-in
What it measures: Leadership support, budget allocation, and strategic prioritization
Why it matters: Individual adoption plateaus without organizational support. Budget and OKRs signal what matters.
Key metrics:
| Metric | How to Measure | Benchmark |
|--------|----------------|-----------|
| AI in strategic docs | Check OKRs, quarterly goals | Phase 1: No mention; Phase 2: Mentioned; Phase 3: Dedicated OKR; Phase 4: Measured KPI |
| Budget allocation | $ allocated for AI tools | Phase 1: $0; Phase 2: Ad-hoc; Phase 3: $50+/eng/year; Phase 4: Infrastructure budget |
| Leadership mentions | Count in all-hands, updates | Phase 1: Never; Phase 2: 1-2x/year; Phase 3: Quarterly; Phase 4: Monthly |
| Evaluation criteria | AI usage in performance reviews | Phase 1: No; Phase 2: Informal; Phase 3: Formal; Phase 4: Standard |
How to collect:
- Document review:
  - Search Q1 OKRs for "AI", "automation", "Copilot", etc.
  - Check engineering budget for AI tool line items
  - Review all-hands slide decks for AI mentions
- Finance data:
  - Total AI tool spend (Copilot, ChatGPT, Cursor subscriptions)
  - Number of paid licenses
  - Approval rate for AI subscription expenses
- Leadership interviews:
  - "How important is AI adoption to our engineering goals?"
  - "What budget have we allocated for AI tools?"
  - "How do we evaluate engineers on AI tool usage?"
Red flags:
- Leadership says "AI is important" but no budget allocated
- Engineers expensing tools but no official approval process
- AI mentioned in aspirational docs but not in OKRs
Dimension 3: Workflow Integration
What it measures: How deeply AI tools are embedded in standard engineering workflows
Why it matters: Adoption means nothing if tools aren't part of daily work. Integration = sustainability.
Key metrics:
| Metric | How to Measure | Benchmark |
|--------|----------------|-----------|
| Onboarding integration | AI tools in new hire checklist | Phase 1: No; Phase 2: Optional; Phase 3: Encouraged; Phase 4: Required |
| CI/CD integration | AI in code review, testing | Phase 1: No; Phase 2: Experimental; Phase 3: Some teams; Phase 4: Standard |
| Documentation | AI workflows in eng docs | Phase 1: None; Phase 2: 1-2; Phase 3: Section in docs; Phase 4: Living playbook |
| Standard practices | AI in code review standards | Phase 1: No; Phase 2: Informal; Phase 3: Documented; Phase 4: Enforced |
How to collect:
- Onboarding audit:
  - Review the new hire checklist/playbook
  - Interview recent hires: "Were AI tools part of your onboarding?"
  - Check Day 1 setup scripts for AI tool installation
- Workflow documentation:
  - Search the engineering wiki for "AI", "Copilot", "ChatGPT"
  - Review code review guidelines for AI mentions
  - Check sprint planning templates for "AI experimentation"
- CI/CD pipeline review:
  - Are AI linters or analysis tools in the pipeline?
  - Do PR templates mention AI-assisted code?
  - Is there an "AI code review" step or bot?
Red flags:
- Tools used but not in official documentation
- New hires discover AI tools by accident, not design
- No evolution from experimentation to standard practice
Dimension 4: Measurement & ROI
What it measures: Ability to quantify AI's impact on engineering productivity and business outcomes
Why it matters: Without measurement, you can't prove ROI or justify continued investment.
Key metrics:
| Metric | How to Measure | Benchmark |
|--------|----------------|-----------|
| Usage tracking | Do you know who uses AI tools? | Phase 1: No; Phase 2: Manual survey; Phase 3: Analytics; Phase 4: Real-time dashboard |
| Time saved estimates | Engineer self-reported savings | Phase 1: Unknown; Phase 2: "Feels faster"; Phase 3: Survey data; Phase 4: Sampled timing studies |
| Productivity metrics | Impact on velocity, quality | Phase 1: Unknown; Phase 2: Anecdotal; Phase 3: Trending positive; Phase 4: Quantified lift |
| Executive reporting | AI metrics in leadership reviews | Phase 1: Never; Phase 2: Ad-hoc; Phase 3: Quarterly; Phase 4: Monthly KPIs |
How to collect:
- Usage analytics:
  - GitHub Copilot usage dashboard (# of suggestions, acceptance rate)
  - ChatGPT Enterprise analytics
  - License utilization (% of seats used)
- Productivity surveys:
  - "How many hours per week do AI tools save you?"
  - "Has AI improved your code quality?" (1-5 scale)
  - "What's your AI tool usage frequency?" (Daily/Weekly/Monthly/Rarely)
- Sampling studies:
  - Time 5-10 engineers on a task with and without AI assistance
  - A/B test: half the team uses AI for a sprint; compare velocity
  - Code review: compare AI-assisted vs. manual code for defects
- Engineering metrics:
  - Deploy frequency before/after AI adoption
  - Bug density in AI-assisted vs. manual code
  - PR cycle time for AI users vs. non-users
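The survey and analytics numbers combine into a back-of-the-envelope ROI figure: self-reported hours saved, times a loaded hourly engineering cost, against annual tool spend. A minimal sketch—the user count, hours saved, and hourly cost in the example call are illustrative assumptions, not benchmarks:

```python
def ai_tool_roi(active_users, hours_saved_per_week, hourly_cost_usd,
                annual_tool_spend_usd, working_weeks=48):
    """Rough annual value of time saved vs. what the tools cost."""
    annual_value = (active_users * hours_saved_per_week
                    * working_weeks * hourly_cost_usd)
    return {
        "annual_value_usd": annual_value,
        "annual_spend_usd": annual_tool_spend_usd,
        "roi_multiple": annual_value / annual_tool_spend_usd,
    }

# e.g. 45 weekly users each saving ~2.3 hrs at a $100/hr loaded cost,
# against $12,500/year of subscriptions: roughly a 40x multiple
result = ai_tool_roi(45, 2.3, 100, 12_500)
```

Treat the output as a sanity check, not proof: self-reported savings are biased upward, which is exactly why the sampling studies above matter.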
Red flags:
- "We think AI helps but can't prove it"
- Leadership asks for ROI data and you have none
- Budget renewal time and you're guessing at impact
The Bottleneck Model: Your Phase is Your Weakest Link
Here's the key insight: Your AI adoption phase is determined by your lowest-scoring dimension, not your highest.
Why? Because AI adoption requires progress across all four dimensions to advance. Strong champions (Dimension 1) can't compensate for zero budget (Dimension 2). Great leadership support (Dimension 2) doesn't matter if tools aren't in workflows (Dimension 3).
Example Team A:
- Champion Network: Strong (15 active users)
- Organizational Buy-in: Strong (budget allocated, OKR exists)
- Workflow Integration: Weak (not in onboarding, no docs)
- Measurement: Weak (no usage tracking)
→ Overall Phase: 2 (limited by workflow integration and measurement)
This team has great potential but is stuck until they formalize workflows and measurement.
Example Team B:
- Champion Network: Weak (2 users)
- Organizational Buy-in: Strong (executive mandate, big budget)
- Workflow Integration: Medium (tools in onboarding)
- Measurement: Strong (analytics dashboard)
→ Overall Phase: 1 (limited by champion network)
This team has top-down support but bottom-up adoption hasn't happened. They bought tools but no one's using them.
How to Run an AI Adoption Assessment
Want to measure your team's AI adoption maturity? Here's the step-by-step process:
Step 1: Collect Baseline Data (Week 1)
Champion Network:
- Send team survey on AI tool usage
- Interview managers about AI advocates
- Analyze #ai-tools Slack channel activity
Organizational Buy-in:
- Review Q1 OKRs for AI mentions
- Get budget data from finance
- Count leadership mentions of AI in all-hands
Workflow Integration:
- Audit new hire onboarding docs
- Review engineering wiki for AI documentation
- Check CI/CD for AI tool integration
Measurement & ROI:
- Check if usage analytics exist
- Review any prior AI impact surveys
- Identify what productivity metrics you track
Step 2: Score Each Dimension (Week 2)
Use this scoring rubric (0-4 points per dimension):
Champion Network:
- 0 points: No one experimenting
- 1 point: 1-2 individuals trying tools
- 2 points: 3-10 engineers using regularly, informal sharing
- 3 points: 10+ active users, recognized champions, frequent sharing
- 4 points: Distributed champions across teams, knowledge capture system
Organizational Buy-in:
- 0 points: No awareness, no budget
- 1 point: Awareness but no action
- 2 points: Ad-hoc support, engineers can expense tools
- 3 points: Dedicated budget, AI in OKRs, leadership mentions
- 4 points: AI as infrastructure, in performance reviews, sustained investment
Workflow Integration:
- 0 points: No integration, ad-hoc usage
- 1 point: Some individuals integrate into personal workflow
- 2 points: Team-level experiments, some documentation
- 3 points: Onboarding, engineering docs, some teams integrated
- 4 points: Standard practice, CI/CD integration, living playbook
Measurement & ROI:
- 0 points: No measurement, no idea of impact
- 1 point: Anecdotal evidence only
- 2 points: Manual surveys, rough estimates
- 3 points: Regular surveys, usage analytics, trending data
- 4 points: Real-time dashboards, productivity impact quantified, executive KPIs
Step 3: Calculate Your Phase
Average the four dimension scores and map the result to a phase:
- 0.0 - 1.49: Phase 1 (Experimentation)
- 1.5 - 2.49: Phase 2 (Champion-Led)
- 2.5 - 3.49: Phase 3 (Org-Wide)
- 3.5 - 4.0: Phase 4 (Embedded)
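The scoring and mapping above is mechanical enough to script. A minimal sketch—the dimension names are placeholders, and the inputs are the 0-4 rubric points from Step 2:

```python
PHASE_NAMES = {1: "Experimentation", 2: "Champion-Led",
               3: "Org-Wide", 4: "Embedded"}

def assess(scores):
    """scores: dict mapping dimension name -> 0-4 rubric points."""
    avg = sum(scores.values()) / len(scores)
    if avg < 1.5:
        phase = 1
    elif avg < 2.5:
        phase = 2
    elif avg < 3.5:
        phase = 3
    else:
        phase = 4
    bottleneck = min(scores, key=scores.get)  # lowest-scoring dimension
    return {"average": avg, "phase": phase,
            "phase_name": PHASE_NAMES[phase], "bottleneck": bottleneck}
```

For instance, `assess({"champions": 3, "buy_in": 3, "workflow": 1, "measurement": 1})` averages to 2.0—Phase 2, with `"workflow"` flagged as the bottleneck, matching the Team A example earlier.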
Important: Identify your bottleneck dimension—the one with the lowest score. That's where to focus your improvement efforts. And per the bottleneck model above, when a single dimension scores far below the rest (like Team B's champion network), treat it, not the average, as the honest read of your phase.
Step 4: Create Action Plan (Week 3-4)
Based on your bottleneck dimension and overall phase:
If Champion Network is your bottleneck:
- Find the 2-3 engineers already using AI tools
- Create dedicated Slack channel
- Start weekly or monthly "AI tool share" ritual
- Remove budget friction for subscriptions
If Organizational Buy-in is your bottleneck:
- Build ROI case from champions' testimonials
- Present to leadership with specific budget ask
- Get AI added to next quarter's OKRs
- Secure executive sponsorship
If Workflow Integration is your bottleneck:
- Add AI tools to onboarding checklist
- Document best practices in engineering wiki
- Run workshop: "AI for [your stack]"
- Experiment with AI in code review
If Measurement is your bottleneck:
- Implement quarterly AI adoption survey
- Enable GitHub Copilot analytics
- Baseline productivity metrics (velocity, quality, cycle time)
- Create simple dashboard (even a Google Sheet)
AI Adoption Benchmarks
Based on our analysis of 500+ engineering teams:
| Phase | % of Teams | Avg. Time in Phase | Champion Count | Budget/Eng/Year | Usage (Weekly) |
|-------|------------|--------------------|----------------|-----------------|----------------|
| Phase 1 | 45% | 6-18 months | 1-5 | $0-20 | Less than 10% |
| Phase 2 | 35% | 12-24 months | 5-15 | $20-50 | 10-30% |
| Phase 3 | 15% | 18-36 months | 15+ | $50-100 | 30-60% |
| Phase 4 | 5% | Ongoing | Distributed | $100+ | 60%+ |
Insight: Most teams (80%) are stuck in Phase 1-2. The gap between Phase 2 and Phase 3 is the hardest to cross—it requires organizational commitment, not just individual enthusiasm.
Common Measurement Mistakes
Mistake #1: Measuring Only Tool Usage
What they do: Count number of Copilot licenses used
Why it's incomplete: Usage ≠ impact. 50% license utilization means nothing if those users aren't more productive.
Better approach: Pair usage metrics with productivity metrics (time saved, quality improvement).
Mistake #2: Relying Only on Surveys
What they do: Quarterly survey asking "Do you use AI tools?"
Why it's incomplete: Self-reported data is biased. People over-report usage and underestimate impact.
Better approach: Combine surveys with usage analytics and sampling studies.
Mistake #3: Focusing on One Dimension
What they do: Track champion network growth but ignore organizational buy-in
Why it's incomplete: Bottleneck model—weakest dimension determines overall phase.
Better approach: Measure all four dimensions and focus improvement on the bottleneck.
Mistake #4: No Baseline or Trend Data
What they do: One-time assessment with no follow-up
Why it's incomplete: Can't prove improvement without before/after data.
Better approach: Quarterly measurement with trending charts.
Actionable Next Steps
Ready to start measuring your team's AI adoption?
Option 1: Manual Assessment (DIY Approach)
- Download our AI Adoption Measurement Template (Google Sheet)
- Follow the 4-week assessment process above
- Score your team across four dimensions
- Create action plan based on bottleneck
Time: 10-15 hours over 4 weeks
Cost: Free
Best for: Teams with internal analytics resources
Option 2: Automated Assessment (Fast Approach)
- Take the AI Adoption Maturity Assessment (5 minutes)
- Get instant results with your phase, dimension scores, and bottleneck
- Receive personalized recommendations
- Optional: Get full report emailed
Time: 5 minutes
Cost: Free
Best for: Fast baseline before building detailed measurement system
Option 3: Ongoing Measurement (Comprehensive Approach)
- Use Option 2 for initial baseline
- Implement quarterly surveys
- Enable tool usage analytics (Copilot, ChatGPT Enterprise)
- Create simple dashboard tracking all four dimensions
- Re-assess quarterly to track progress
Time: 2-3 hours per quarter
Cost: Tool costs (analytics platforms)
Best for: Teams serious about systematic AI adoption
Sample Measurement Dashboard
Here's what a mature AI adoption measurement dashboard looks like:
=================================================
AI ADOPTION METRICS - Q1 2026
=================================================
CHAMPION NETWORK (Phase 3)
- Active AI tool users: 45 / 150 engineers (30%)
- Identified champions: 8
- #ai-tools messages: 127 this month
- AI demos: 3 this quarter
ORGANIZATIONAL BUY-IN (Phase 3)
- Budget allocated: $12,500 ($83/engineer/year)
- AI in OKRs: Yes (KR: 40% weekly usage by Q2)
- Leadership mentions: 2 in Q1 all-hands
WORKFLOW INTEGRATION (Phase 2) ⚠️ BOTTLENECK
- AI tools in onboarding: No
- Engineering docs mentioning AI: 4 pages
- Teams with AI standards: 1 / 6 (frontend only)
MEASUREMENT & ROI (Phase 2) ⚠️ BOTTLENECK
- Usage analytics: GitHub Copilot only
- Productivity surveys: 1 this quarter
- Estimated time saved: 2.3 hrs/week/user
- Executive KPIs: No
OVERALL PHASE: 2 (CHAMPION-LED)
=================================================
NEXT ACTIONS:
1. Add AI tools to onboarding checklist
2. Document AI best practices in wiki
3. Implement quarterly productivity survey
4. Present ROI case to exec team for Phase 3 push
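Because the dashboard is plain text, it can be regenerated from the same scores each quarter rather than assembled by hand. A minimal sketch—the `(name, phase, metric_lines)` tuple layout is an assumption for illustration, not a prescribed schema:

```python
def render_dashboard(quarter, dimensions):
    """Render a text dashboard like the sample above.

    dimensions: list of (name, phase, metric_lines) tuples.
    Every dimension tied for the lowest phase gets the bottleneck flag.
    """
    lowest = min(phase for _, phase, _ in dimensions)
    bar = "=" * 49
    out = [bar, f"AI ADOPTION METRICS - {quarter}", bar]
    for name, phase, lines in dimensions:
        flag = " ⚠️ BOTTLENECK" if phase == lowest else ""
        out.append(f"{name} (Phase {phase}){flag}")
        out.extend(f"- {line}" for line in lines)
    return "\n".join(out)
```

Computing the overall phase line is the same averaging exercise as Step 3, so this stays a pure formatting helper.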
Conclusion: Start Measuring Today
You can't improve what you don't measure. AI adoption is too important to leave to guesswork.
The four-dimension framework gives you a structured way to:
- Understand where your team actually stands
- Identify your biggest bottleneck
- Track progress quarter over quarter
- Justify continued AI investment with data
Start with a simple baseline assessment, then build from there.
Ready to measure your team's AI adoption maturity?
Take the Free AI Adoption Assessment →
In 5 minutes, you'll get:
- Your precise adoption phase (1-4)
- Scores across all four dimensions
- Your biggest bottleneck
- Personalized recommendations for advancing to the next phase
- Benchmarks comparing you to similar teams
No email required to see results.
About the Author: Jordan Bench is a senior software engineer at Podium and founder of Newzlio, where he helps engineering teams systematically adopt AI through curated daily updates delivered in Slack. He's assessed hundreds of engineering teams' AI maturity.