DX, SPACE & DORA
For years, organizations focused solely on boosting developer productivity to accelerate business outcomes. However, measuring productivity with simple metrics like "lines of code" or "story points" often led to unintended consequences: burnout, gamed metrics, and lower retention. Modern frameworks like DORA, SPACE, and DevEx (DX) offer a more holistic approach to understanding and improving how software teams work.
"The best way to help developers achieve more is not by expecting more, but by improving their experience." — Nicole Forsgren, Founder of DORA metrics
What is DORA?
DevOps Research and Assessment (DORA) is a research program that identified four key metrics indicating software delivery performance. Founded by Dr. Nicole Forsgren, Jez Humble, and Gene Kim, DORA has conducted multi-year research across thousands of organizations, with findings published in the book Accelerate and the annual State of DevOps reports.
DORA metrics focus on outcomes rather than output—measuring what matters for delivering value to customers quickly and reliably.
The Four Key Metrics
Deployment Frequency
How often does your organization deploy code to production?
Why it matters:
Higher deployment frequency enables faster feedback loops, smaller batch sizes, and reduced risk per deployment.
Elite performance:
Multiple deploys per day, on-demand
How to improve:
- Automate your deployment pipeline
- Implement feature flags for safe releases
- Break down large changes into smaller increments
- Reduce manual approval bottlenecks
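To make the metric concrete, here is a minimal sketch of computing deploys per day from a list of deploy timestamps. All data here is hypothetical; in practice the timestamps would come from your CI/CD platform's logs or API.

```python
# Minimal sketch (hypothetical data): deploys per day from deploy timestamps.
from datetime import datetime

deploys = [
    datetime(2024, 3, 4, 9, 30),
    datetime(2024, 3, 4, 15, 10),
    datetime(2024, 3, 5, 11, 0),
    datetime(2024, 3, 7, 16, 45),
]

window_days = max((max(deploys) - min(deploys)).days, 1)  # avoid divide-by-zero
per_day = len(deploys) / window_days
print(f"{len(deploys)} deploys over {window_days} days = {per_day:.1f}/day")
```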
Lead Time for Changes
How long does it take to go from code commit to production?
Why it matters:
Shorter lead times mean faster value delivery, quicker response to market changes, and reduced work-in-progress.
Elite performance:
Less than one hour from commit to production
How to improve:
- Automate testing at every stage
- Streamline code review processes
- Reduce handoffs between teams
- Implement trunk-based development
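A minimal sketch of the calculation, assuming you can pair each change's commit timestamp with its production deploy timestamp (the sample data is hypothetical):

```python
# Minimal sketch (hypothetical data): lead time for changes, measured as the
# median hours from code commit to production deploy.
from datetime import datetime
from statistics import median

changes = [  # (commit_time, deploy_time) pairs from VCS + deployment logs
    (datetime(2024, 3, 4, 9, 0), datetime(2024, 3, 4, 10, 30)),
    (datetime(2024, 3, 4, 13, 0), datetime(2024, 3, 5, 9, 0)),
    (datetime(2024, 3, 5, 11, 0), datetime(2024, 3, 5, 11, 45)),
]

hours = [(deploy - commit).total_seconds() / 3600 for commit, deploy in changes]
print(f"Median lead time: {median(hours):.1f} hours")  # 1.5 hours here
```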
Time to Restore Service (MTTR)
How long does it take to restore service after an incident?
Why it matters:
Fast recovery minimizes customer impact and demonstrates system resilience. Once you accept that failures will happen, recovery speed becomes the critical capability.
Elite performance:
Less than one hour to restore service
How to improve:
- Implement robust monitoring and alerting
- Practice incident response through game days
- Build rollback capabilities into deployments
- Maintain runbooks and documentation
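A sketch of the MTTR calculation, assuming incident records with detected and restored timestamps exported from your incident tracker (sample data is hypothetical):

```python
# Minimal sketch (hypothetical data): mean time to restore (MTTR) from
# incident records with detected/restored timestamps.
from datetime import datetime
from statistics import mean

incidents = [  # (detected_at, restored_at)
    (datetime(2024, 3, 1, 14, 0), datetime(2024, 3, 1, 14, 40)),
    (datetime(2024, 3, 6, 2, 15), datetime(2024, 3, 6, 3, 5)),
]

minutes = [(restored - detected).total_seconds() / 60
           for detected, restored in incidents]
print(f"MTTR: {mean(minutes):.0f} minutes")  # 45 minutes here
```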
Change Failure Rate
What percentage of changes result in degraded service or require remediation?
Why it matters:
Low failure rates indicate quality throughout the pipeline and reduce the cost of deploying frequently.
Elite performance:
0-15% of changes cause failures
How to improve:
- Implement comprehensive automated testing
- Use canary deployments and progressive rollouts
- Conduct thorough code reviews
- Learn from post-incident reviews
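A sketch of the calculation, assuming each deployment record is tagged with whether it caused a failure (the data is hypothetical):

```python
# Minimal sketch (hypothetical data): change failure rate, i.e. the share of
# deployments that degraded service or needed remediation (rollback, hotfix).
deployments = [
    {"id": "d1", "failed": False},
    {"id": "d2", "failed": True},  # required a rollback
    {"id": "d3", "failed": False},
    {"id": "d4", "failed": False},
]

failures = sum(1 for d in deployments if d["failed"])
rate = 100 * failures / len(deployments)
print(f"Change failure rate: {rate:.0f}%")  # 25% here; elite range is 0-15%
```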
💡 Key Insight
DORA research shows these metrics are not trade-offs—elite performers achieve high scores across all four. Speed and stability reinforce each other through practices like automation, small batch sizes, and continuous improvement.
What is SPACE?
SPACE is a framework developed by researchers at GitHub and Microsoft Research that captures the multidimensional nature of developer productivity. Published in ACM Queue by Nicole Forsgren, Margaret-Anne Storey, Thomas Zimmermann, and colleagues, it challenges the myth that productivity can be measured with a single metric.
The framework recognizes that productivity is personal, context-dependent, and includes dimensions that traditional metrics miss entirely.
The Five Dimensions
Satisfaction and Well-being
How fulfilled developers feel with their work, team, tools, and culture, and how healthy and happy they are.
Why it matters:
Research shows productivity and satisfaction are correlated. Declining satisfaction can signal upcoming burnout and reduced productivity.
Example metrics:
- Developer satisfaction surveys
- Employee Net Promoter Score (eNPS)
- Burnout indicators
- Developer efficacy (having tools/resources needed)
- Retention rates
How to measure:
Primarily through surveys and qualitative feedback. Regular pulse surveys can detect trends before they become problems.
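Of these, eNPS has a simple, well-defined formula: the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). A minimal sketch with hypothetical responses:

```python
# Minimal sketch (hypothetical data): eNPS from 0-10 "would you recommend
# this team?" responses. Promoters score 9-10, detractors 0-6.
responses = [9, 10, 7, 8, 6, 9, 3, 10, 8, 9]

promoters = sum(1 for r in responses if r >= 9)
detractors = sum(1 for r in responses if r <= 6)
enps = 100 * (promoters - detractors) / len(responses)
print(f"eNPS: {enps:+.0f}")  # ranges from -100 to +100; +30 here
```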
Performance
The outcomes of a system or process—did the code reliably do what it was supposed to do?
Why it matters:
Performance focuses on outcomes rather than output. A developer who produces lots of code may not produce high-quality code that delivers customer value.
Example metrics:
- Code quality and reliability
- Absence of bugs in production
- Customer satisfaction scores
- Feature adoption rates
- Service health and uptime
Challenge:
Individual contributions are hard to tie directly to business outcomes, especially in team-based software development.
Activity
Counts of actions or outputs completed in the course of performing work.
Why it matters:
Activity metrics provide limited but valuable insights when used correctly. They should never be used alone to evaluate productivity.
Example metrics:
- Number of commits and pull requests
- Code reviews completed
- Deployments and releases
- Incidents responded to
- Documentation created
⚠️ Warning:
Activity metrics are easily gamed and miss essential work like mentoring, brainstorming, and helping teammates. Never use these alone to reward or penalize developers.
Communication and Collaboration
How well people and teams communicate and work together.
Why it matters:
Software development is collaborative. Effective teams rely on high transparency, awareness of each other's work, and inclusive practices.
Example metrics:
- Quality of code review feedback
- Documentation discoverability
- Onboarding time for new members
- Cross-team collaboration frequency
- Knowledge sharing sessions
Hidden cost:
Work that supports others' productivity may come at the expense of individual productivity. This "invisible work" needs recognition.
Efficiency and Flow
The ability to complete work with minimal interruptions or delays, whether individually or through a system.
Why it matters:
Developers talk about "getting into the flow"—achieving that productive state where complex work happens smoothly. System efficiency affects how quickly work moves from idea to customer.
Example metrics:
- Uninterrupted focus time
- Number of handoffs in processes
- Wait time vs. value-added time
- DORA metrics (lead time, deployment frequency)
- Meeting load and interruption frequency
Connection to DORA:
The DORA metrics fit within this dimension, measuring flow through the delivery system from commit to production.
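As one example, uninterrupted focus time can be approximated from calendar data. Here is a sketch that finds the longest meeting-free block in a workday, assuming meeting intervals pulled from a calendar API (all times hypothetical):

```python
# Minimal sketch (hypothetical data): longest uninterrupted focus block in a
# workday, given meeting intervals pulled from a calendar API.
from datetime import datetime

day_start = datetime(2024, 3, 4, 9, 0)
day_end = datetime(2024, 3, 4, 17, 0)
meetings = [  # (start, end), assumed sorted and non-overlapping
    (datetime(2024, 3, 4, 10, 0), datetime(2024, 3, 4, 10, 30)),
    (datetime(2024, 3, 4, 14, 0), datetime(2024, 3, 4, 15, 0)),
]

gaps, cursor = [], day_start
for start, end in meetings:
    gaps.append((start - cursor).total_seconds() / 3600)
    cursor = end
gaps.append((day_end - cursor).total_seconds() / 3600)
print(f"Longest focus block: {max(gaps):.1f} hours")  # 3.5 hours (10:30-14:00)
```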
💡 How to Use SPACE
Choose metrics from at least three dimensions. Include at least one perceptual measure (like surveys). Look for metrics in tension—this is by design, providing a balanced view. For example: commits (Activity) + perceived productivity (Satisfaction) + code review quality (Communication) + deployment frequency (Efficiency).
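A sketch of that example metric set held as data, so it can be reviewed as a whole rather than metric by metric; all names and values are hypothetical:

```python
# Minimal sketch (hypothetical data): a balanced SPACE metric set, mirroring
# the example above. Two entries are perceptual (survey-based) by design.
scorecard = {
    "satisfaction": {"perceived_productivity_survey": 3.9},  # 1-5, perceptual
    "activity": {"weekly_commits": 84},
    "communication": {"review_quality_survey": 4.2},  # 1-5, perceptual
    "efficiency": {"deploys_per_week": 12},
}

# Read the set as a whole: rising commits alongside falling satisfaction is a
# warning sign, not a win.
for dimension, metrics in scorecard.items():
    for name, value in metrics.items():
        print(f"{dimension}: {name} = {value}")
```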
What is Developer Experience?
Developer Experience (DevEx or DX) represents a paradigm shift from focusing solely on productivity outcomes to focusing on how developers experience their work. The premise: improving the developer experience leads to sustainable productivity gains without the negative side effects of pure productivity pressure.
Microsoft and GitHub established the Developer Experience Lab (DXL) to study developer work and well-being, publishing research that quantifies the business impact of good DevEx.
Research from the lab finds that developers are:
- more productive when they have a solid understanding of their codebase
- more innovative when their tools and work processes are intuitive
- less likely to accumulate tech debt when teams can answer questions quickly
The Three Core Dimensions of DevEx
Cognitive Load
The mental effort required to complete tasks. High cognitive load slows developers down and increases errors.
Factors that increase cognitive load:
- Complex, poorly documented codebases
- Frequent context switching
- Unclear requirements or processes
- Too many tools to learn and maintain
How to reduce:
- Maintain comprehensive, up-to-date documentation
- Standardize tools and processes across teams
- Create clear onboarding paths
- Reduce unnecessary complexity in systems
Feedback Loops
The speed at which developers can validate their work and learn from it. Faster feedback enables faster iteration.
Types of feedback:
- Build and test results
- Code review comments
- Production monitoring alerts
- Customer usage data
How to accelerate:
- Invest in fast CI/CD pipelines
- Implement real-time linting and type checking
- Set SLAs for code review turnaround
- Deploy feature flags for quick experimentation
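A review-turnaround SLA, for instance, is easy to track once you can pair each pull request's opened timestamp with its first review. A minimal sketch with hypothetical data:

```python
# Minimal sketch (hypothetical data): median review turnaround, from PR
# opened to first review, for tracking a code-review SLA.
from datetime import datetime
from statistics import median

pull_requests = [  # (opened_at, first_review_at) from your VCS API
    (datetime(2024, 3, 4, 9, 0), datetime(2024, 3, 4, 11, 0)),
    (datetime(2024, 3, 4, 16, 0), datetime(2024, 3, 5, 10, 0)),
    (datetime(2024, 3, 5, 13, 0), datetime(2024, 3, 5, 13, 40)),
]

hours = [(review - opened).total_seconds() / 3600
         for opened, review in pull_requests]
print(f"Median review turnaround: {median(hours):.1f} hours")  # 2.0 here
```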
Flow State
The ability to achieve and maintain focus on complex tasks without interruption. Flow is where deep work happens.
Flow blockers:
- Excessive meetings
- Frequent interruptions
- Waiting on dependencies or approvals
- Context switching between tasks
How to enable:
- Establish "focus time" blocks with no meetings
- Use asynchronous communication as default
- Reduce mandatory meetings
- Automate repetitive tasks
💡 DevEx vs. Productivity
DevEx is not anti-productivity—it's about achieving productivity sustainably. Organizations that focus only on productivity metrics often see short-term gains followed by burnout, turnover, and technical debt. DevEx focuses on the inputs (experience) rather than just the outputs (productivity), recognizing that happy, supported developers naturally produce better work.
A Unified View
DORA, SPACE, and DevEx are complementary frameworks that address different aspects of software team effectiveness. Understanding how they relate helps you choose the right approach for your goals.
DORA
Focus: Software delivery performance
Scope: Team/system level
Best for: Measuring and improving CI/CD pipeline effectiveness
Limitation: Doesn't capture developer satisfaction or experience
SPACE
Focus: Multidimensional productivity
Scope: Individual, team, and system levels
Best for: Holistic productivity measurement
Includes: DORA metrics within the Efficiency dimension
DevEx
Focus: Developer-centric improvement
Scope: Individual experience
Best for: Improving day-to-day developer work
Connection: Maps to SPACE's Satisfaction and Efficiency dimensions
Choosing the Right Framework
Use DORA when...
- You want to benchmark against industry standards
- Your focus is on improving deployment and delivery speed
- You need clear, quantifiable metrics for leadership
- You're implementing or improving CI/CD pipelines
Use SPACE when...
- You need a comprehensive view of productivity
- Simple metrics are causing unintended consequences
- You want to balance multiple dimensions
- You're designing a team health dashboard
Use DevEx when...
- Developer satisfaction and retention are priorities
- You're seeing signs of burnout or turnover
- You want to improve the day-to-day developer experience
- You're investing in tooling and infrastructure
💡 Recommendation
Don't choose just one—use them together. Start with DevEx to identify pain points, use SPACE to create a balanced measurement approach, and incorporate DORA metrics to track delivery performance. The frameworks were designed by the same research community and intentionally complement each other.
Practical Steps to Begin
Implementing these frameworks doesn't require expensive tools or massive organizational change. Start small, measure what matters, and iterate based on what you learn.
Step-by-Step Approach
Step 1: Assess the current state
Before implementing any metrics, understand what's working and what's painful. Anonymous surveys about tools, processes, and satisfaction provide baseline data and surface issues you might not know exist.
Questions to ask:
- How often do you feel productive at work?
- What's the biggest obstacle to getting work done?
- How easy is it to get help when you're stuck?
- Would you recommend this team to a friend?
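A sketch of aggregating such a pulse survey, assuming 1-5 Likert responses keyed by question (the question keys and data are hypothetical). Note that it reports only team-level aggregates, never individual answers:

```python
# Minimal sketch (hypothetical data): aggregating anonymous pulse-survey
# responses on a 1-5 Likert scale. Only team-level aggregates are reported.
from statistics import mean

responses = {
    "feel_productive": [4, 3, 5, 4, 2],
    "easy_to_get_help": [3, 3, 4, 2, 3],
    "recommend_team": [5, 4, 4, 5, 3],
}

for question, scores in responses.items():
    print(f"{question}: mean {mean(scores):.1f} (n={len(scores)})")
```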
Step 2: Select a balanced metric set
Choose metrics from at least three SPACE dimensions. Include at least one perceptual measure. Look for metrics that create productive tension rather than optimizing one thing at the expense of others.
Starter metric set:
- Satisfaction: Developer satisfaction score (survey)
- Performance: Change failure rate
- Activity: Deployment frequency
- Efficiency: Lead time for changes
Step 3: Instrument your pipeline
DORA metrics require data from your CI/CD pipeline. Most modern DevOps tools provide these metrics out of the box or with minimal configuration.
Data sources:
- Version control (commits, PRs, merge times)
- CI/CD platform (build times, deployment frequency)
- Incident management (MTTR, failure rates)
- Project tracking (cycle time, WIP)
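As one example, if your team records releases through GitHub's Deployments API, a sketch like the following could pull the raw records for DORA calculations; OWNER, REPO, and the token are placeholders, and other platforms expose similar endpoints:

```python
# Minimal sketch: listing deployment records via GitHub's REST API
# (GET /repos/{owner}/{repo}/deployments). OWNER, REPO, and YOUR_TOKEN are
# placeholders; requires the third-party "requests" package.
import requests

resp = requests.get(
    "https://api.github.com/repos/OWNER/REPO/deployments",
    headers={
        "Authorization": "Bearer YOUR_TOKEN",
        "Accept": "application/vnd.github+json",
    },
    timeout=10,
)
resp.raise_for_status()

for deployment in resp.json():
    # Each record carries a created_at timestamp usable for DORA calculations.
    print(deployment["created_at"], deployment.get("environment"))
```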
Step 4: Review metrics with the team
Metrics are for learning, not judging. Share data with the team, discuss what it means, and collaboratively identify improvements. Avoid using metrics to compare individuals or create competition.
Best practices:
- Share aggregate team metrics, not individual data
- Focus on trends over time, not absolute numbers
- Connect metrics to specific improvement actions
- Celebrate improvements as a team
Step 5: Iterate and evolve
Your metrics and focus should evolve as your organization matures. What matters today may be less important next year as you solve current problems and new challenges emerge.
Signs to adjust:
- Metrics are being gamed or causing negative behavior
- The metric no longer reflects what you care about
- You've achieved consistent good performance
- New strategic priorities emerge
Tools for Measuring and Improving
Many tools can help you implement these frameworks. Some specialize in specific metrics while others provide comprehensive platforms.
Measurement Tools
GitHub (built-in analytics)
Provides metrics on PRs, code reviews, deployment frequency, and team collaboration patterns. Includes GitHub Copilot metrics for AI-assisted development.
Azure DevOps (built-in analytics)
Offers cycle time, lead time, and deployment frequency metrics through Analytics views and dashboards.
DORA Quick Check
Free online assessment tool to benchmark your DORA metrics against the industry. Available at dora.dev.
Engineering intelligence
Third-party platforms that aggregate data across tools to provide DORA and SPACE metrics automatically.
DX (developer experience platform)
Founded by DevEx researchers, it combines survey data with system metrics for comprehensive DevEx measurement.
Engineering analytics
Platforms that provide engineering metrics dashboards with DORA and custom metric support.
Further Reading
Accelerate
The foundational book by Forsgren, Humble, and Kim that introduced DORA metrics and the research behind them.
The SPACE of Developer Productivity
The original ACM Queue paper introducing the SPACE framework. Available at queue.acm.org.
State of DevOps Reports
Annual research reports with updated benchmarks and findings. Available at dora.dev.
Developer Experience Lab
Ongoing research from Microsoft and GitHub on developer productivity and well-being. Visit microsoft.com/research/group/developer-experience-lab.
Lessons from the Research
Decades of research and practical experience have surfaced clear patterns for what works—and what doesn't—when measuring developer productivity and experience.
✅ Do This
- Combine quantitative and qualitative data. Numbers tell you what; surveys and conversations tell you why.
- Measure at multiple levels. Individual, team, and system metrics reveal different insights.
- Include perceptual measures. How developers feel about their productivity matters as much as what they produce.
- Look for metrics in tension. If one metric improves while another declines, you're seeing the full picture.
- Share metrics transparently with teams. People improve what they understand and own.
- Connect metrics to actions. A metric without a response plan is just trivia.
- Evolve your metrics over time. What matters changes as your organization matures.
- Protect developer privacy. Report aggregate data, not individual performance.
❌ Avoid This
- Don't rely on a single metric. "Lines of code" or "story points" alone cause gaming and dysfunction.
- Don't compare individuals. Productivity is personal and context-dependent.
- Don't use metrics punitively. Metrics for punishment drive fear, not improvement.
- Don't ignore invisible work. Mentoring, code reviews, and helping others are essential but often unmeasured.
- Don't expect instant results. Culture and process changes take time to reflect in metrics.
- Don't measure for measurement's sake. Every metric should connect to a decision or action.
- Don't assume correlation is causation. High performers have good metrics, but chasing metrics won't make you high performing.
- Don't forget wellbeing. Short-term productivity gains from overwork lead to long-term losses.
💡 The Golden Rule
"Metrics shape behavior." What you measure communicates what you value. Choose metrics carefully because teams will optimize for them—make sure that optimization leads somewhere good.