QEEPP Practical Assessment Framework
The QEEPP Practical Assessment Framework expands the structural integrity assessment into practical diagnostic guidance that can be used in workshops, interviews, and governance reviews.
It translates each of the five QEEPP dimensions into concrete assessment questions, observable checkpoints, practical assessment actions, and scoring guidance, improving consistency and reducing subjectivity in scoring.
The framework is intended to support both initial and recurring assessments of digital transformation initiatives, platforms, portfolios, and operating models.
How the practical framework is used
QEEPP maturity scale
| Score | Maturity State | Structural Meaning |
|---|---|---|
| 1 | Ad hoc | Reactive, unstable, and inconsistent practices |
| 2 | Emerging | Initial structures exist but remain inconsistent |
| 3 | Defined | Processes and controls are established and repeatable |
| 4 | Managed | Execution is measured, monitored, and actively governed |
| 5 | Institutionalized | Practices are embedded, standardized, and self-sustaining |
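The maturity scale above can be sketched as a simple lookup. The score names and meanings come directly from the table; the function and data structure are illustrative assumptions, not part of QEEPP itself:

```python
# Illustrative sketch: QEEPP maturity scores mapped to their states.
# Names and descriptions mirror the scale table above.
MATURITY_SCALE = {
    1: ("Ad hoc", "Reactive, unstable, and inconsistent practices"),
    2: ("Emerging", "Initial structures exist but remain inconsistent"),
    3: ("Defined", "Processes and controls are established and repeatable"),
    4: ("Managed", "Execution is measured, monitored, and actively governed"),
    5: ("Institutionalized", "Practices are embedded, standardized, and self-sustaining"),
}

def maturity_state(score: int) -> str:
    """Return the maturity state name for a 1-5 QEEPP score."""
    if score not in MATURITY_SCALE:
        raise ValueError(f"QEEPP scores must be integers 1-5, got {score!r}")
    return MATURITY_SCALE[score][0]
```

Keeping the scale as data rather than prose makes recurring assessments easier to compare across scoring rounds.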
QEEPP structural sequence used in the framework
The assessment follows the QEEPP structural sequence: Quality (Stabilize), Effectiveness (Align), Efficiency (Optimize), Performance (Measure), and Productivity (Scale). The dimensions are assessed in this order because each earlier dimension establishes structural conditions that the later ones depend on.
QEEPP Structural Control Matrix
The QEEPP framework is organized as a 5 × 5 structural governance matrix. Each transformation dimension is made operational through five structural controls that together determine the structural integrity of that dimension.
This matrix forms the control architecture of the framework. It shows how the five dimensions are translated into practical structural controls for assessment, governance, and recurring integrity evaluation.
| Dimension | Control 1 | Control 2 | Control 3 | Control 4 | Control 5 |
|---|---|---|---|---|---|
| Quality | Architecture | Security | Data | Reliability | Debt and risk |
| Effectiveness | Strategy | Capability | Value streams | Outcomes | Prioritization |
| Efficiency | Operating model | FinOps | Rationalization | DevSecOps | Automation |
| Performance | KPIs | Service metrics | Cadence | Risk reporting | Transparency |
| Productivity | Platforms | Self-service | Reuse | Enablement | Scale capacity |
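As a sketch, the 5 × 5 matrix can be held as plain data so tooling and assessments stay aligned with the control architecture. Dimension and control names are taken from the matrix above; the dictionary representation itself is an assumption for illustration:

```python
# Sketch of the QEEPP 5 x 5 structural control matrix as plain data.
# Names come from the matrix table above; the representation is illustrative.
QEEPP_MATRIX = {
    "Quality": ["Architecture", "Security", "Data", "Reliability", "Debt and risk"],
    "Effectiveness": ["Strategy", "Capability", "Value streams", "Outcomes", "Prioritization"],
    "Efficiency": ["Operating model", "FinOps", "Rationalization", "DevSecOps", "Automation"],
    "Performance": ["KPIs", "Service metrics", "Cadence", "Risk reporting", "Transparency"],
    "Productivity": ["Platforms", "Self-service", "Reuse", "Enablement", "Scale capacity"],
}

# Structural invariant of the framework: five controls per dimension.
assert all(len(controls) == 5 for controls in QEEPP_MATRIX.values())
```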
Each structural control is supported by three diagnostic prompts, which are not scored individually. They inform the overall score for the control and correspond to the diagnostic question types in the table below.
| Diagnostic focus | Typical question type |
|---|---|
| Existence | Does the structure or control actually exist? |
| Consistency | Is it applied consistently across the assessed scope? |
| Operational reality | Does it influence real operational decisions and behavior? |
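One way to capture this in assessment tooling is a record that stores the three prompt observations as evidence next to a single assessor-assigned score, since the prompts are not scored individually. A minimal sketch, with class and field names as assumptions:

```python
from dataclasses import dataclass

@dataclass
class ControlAssessment:
    """Illustrative record for one structural control (names are assumptions)."""
    control: str
    existence: str             # does the structure or control actually exist?
    consistency: str           # is it applied consistently across the scope?
    operational_reality: str   # does it influence real decisions and behavior?
    score: int                 # single 1-5 maturity score assigned by the assessor

    def __post_init__(self):
        # Control scores use the 1-5 QEEPP maturity scale.
        if not 1 <= self.score <= 5:
            raise ValueError("control scores use the 1-5 QEEPP maturity scale")
```

Keeping the prompt answers as narrative evidence, rather than sub-scores, matches the framework's intent that the prompts guide but do not mechanically determine the score.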
Quality | Stabilize
Structural integrity before velocity
| Structural control | Diagnostic questions | Observable checkpoints | Practical assessment actions | Scoring guidance |
|---|---|---|---|---|
| Architecture baseline | See diagnostic prompts above | Target architecture exists, standards are visible, review mechanisms are used, and platform sprawl is understood. | Review architecture documents, inspect standards, examine review decisions, and compare deployed patterns against the intended architecture baseline. | 1: no architecture discipline; 2: isolated or inconsistent standards; 3: documented and repeatable standards; 4: architecture is governed and monitored; 5: architecture discipline is embedded across the assessed scope |
| Security guardrails | See diagnostic prompts above | Security expectations are defined, pipeline controls exist, and vulnerabilities are monitored and acted on. | Inspect security policies, review pipeline controls, examine vulnerability dashboards, and confirm whether security reviews occur during delivery. | 1: security is reactive; 2: controls exist but are partial or informal; 3: security practices are defined and repeatable; 4: security is actively governed and measured; 5: security guardrails are embedded and self-sustaining |
| Data foundations | See diagnostic prompts above | Ownership is assigned, governance is defined, and data quality is treated as an operational discipline rather than an afterthought. | Review data ownership and stewardship, inspect governance artifacts, and verify whether quality issues are monitored and escalated consistently. | 1: fragmented data management; 2: limited governance with inconsistent ownership; 3: governance is defined and repeatable; 4: data quality and governance are actively managed; 5: data foundations are embedded at scale |
| Reliability engineering | See diagnostic prompts above | Reliability targets exist, resilience is designed intentionally, and operational learning is visible after incidents. | Review SLA or SLO records, inspect resilience patterns, evaluate outage history, and verify the presence of recovery testing and post-incident review. | 1: frequent or unmanaged instability; 2: reactive recovery with partial controls; 3: reliability practices are defined; 4: resilience is managed and monitored; 5: reliability is engineered and institutionally governed |
| Technical debt and risk | See diagnostic prompts above | Technical debt is not hidden, risk is visible, and structural remediation competes credibly with feature pressure. | Inspect risk registers, review backlog tagging, examine remediation plans, and confirm whether debt is discussed in governance conversations. | 1: debt and risk are unmanaged; 2: acknowledged but not governed; 3: tracked and reviewed periodically; 4: actively governed and prioritized; 5: embedded discipline with sustained structural control |
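The framework does not prescribe how the five control scores within a dimension combine into a dimension-level view. A plain average, paired with the weakest control as a separate signal, is one common convention; the sketch below is a hedged illustration under that assumption, using example Quality scores that are themselves hypothetical:

```python
from statistics import mean

def dimension_rollup(control_scores: dict) -> dict:
    """Roll five 1-5 control scores up into an illustrative dimension view.

    Averaging and reporting the minimum are assumptions for illustration;
    QEEPP does not mandate an aggregation rule.
    """
    if len(control_scores) != 5:
        raise ValueError("each QEEPP dimension has exactly five structural controls")
    return {
        "average": round(mean(control_scores.values()), 1),
        "weakest_control": min(control_scores.values()),
    }

# Hypothetical Quality scores for one assessed scope.
quality = {
    "Architecture baseline": 3,
    "Security guardrails": 4,
    "Data foundations": 2,
    "Reliability engineering": 3,
    "Technical debt and risk": 2,
}
# dimension_rollup(quality) -> {"average": 2.8, "weakest_control": 2}
```

Reporting the weakest control alongside the average prevents a strong control from masking a structural gap elsewhere in the dimension.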
Effectiveness | Align
Relevance before optimization
| Structural control | Diagnostic questions | Observable checkpoints | Practical assessment actions | Scoring guidance |
|---|---|---|---|---|
| Strategy alignment | See diagnostic prompts above | Strategy is visible in the assessment scope and transformation effort is not disconnected from business priorities. | Review strategic plans, roadmaps, and executive messaging. Test whether initiatives can be traced clearly to strategic objectives. | 1: disconnected initiative activity; 2: weak or inconsistent linkage to strategy; 3: strategy alignment is defined; 4: alignment is governed and reviewed; 5: transformation is consistently strategy-led |
| Capability mapping | See diagnostic prompts above | Capability language is used consistently and dependencies are visible enough to guide decisions. | Review capability maps, dependency views, and transformation roadmaps. Confirm whether capability strengthening is explicit and traceable. | 1: no meaningful capability mapping; 2: partial and inconsistent mapping; 3: capability mapping is defined; 4: capabilities drive planning and dependency decisions; 5: capability mapping is embedded in transformation governance |
| Value stream alignment | See diagnostic prompts above | Value streams are explicit enough to influence ownership, decision-making, and prioritization. | Review value stream maps, product ownership structures, and operating model decisions to see whether value flow shapes delivery. | 1: siloed work with little value visibility; 2: value streams discussed but weakly applied; 3: value stream alignment is defined; 4: value streams shape governance and delivery; 5: value flow is embedded across the assessed scope |
| Outcome definition | See diagnostic prompts above | Outcomes are explicit, measurable, and used to distinguish real business progress from mere activity. | Inspect KPI sets, OKRs, or outcome statements. Verify whether outcomes are reviewed and influence decisions. | 1: activity without outcome definition; 2: partial metrics with weak business meaning; 3: outcomes are defined and repeatable; 4: outcomes are reviewed and governed; 5: outcomes consistently drive transformation decisions |
| Initiative prioritization | See diagnostic prompts above | Prioritization is not purely political or reactive and can be explained through strategic and structural logic. | Review governance forums, funding discussions, and portfolio decisions to test how trade-offs are actually made. | 1: ad hoc prioritization; 2: inconsistent value-based prioritization; 3: prioritization is defined and repeatable; 4: prioritization is governed through evidence; 5: portfolio choice is disciplined and consistently value-led |
Efficiency | Optimize
Lean before expansion
| Structural control | Diagnostic questions | Observable checkpoints | Practical assessment actions | Scoring guidance |
|---|---|---|---|---|
| Operating model | See diagnostic prompts above | Roles, ownership, and governance pathways are clear enough to support efficient execution. | Review operating model artifacts, team boundaries, and decision forums. Test whether ownership is explicit and workable in practice. | 1: chaotic or unclear operating model; 2: early structure with frequent friction; 3: operating model is defined; 4: operating model is actively managed; 5: operating model is embedded and scalable |
| FinOps discipline | See diagnostic prompts above | Cost transparency exists and operational or platform decisions are influenced by cost insight. | Inspect cost dashboards, showback or chargeback logic, and optimization review cadence to verify whether cost discipline is real. | 1: limited cost visibility; 2: cost review is reactive or partial; 3: FinOps practices are defined; 4: optimization is governed and ongoing; 5: cost discipline is embedded in delivery behavior |
| Application rationalization | See diagnostic prompts above | Rationalization is more than inventory. There is visible reduction of duplication and improvement of coherence. | Review application inventories, rationalization plans, and retirement decisions. Confirm whether consolidation activity is real and governed. | 1: unmanaged sprawl; 2: inventory exists but action is weak; 3: rationalization is defined and progressing; 4: rationalization is managed through governance; 5: rationalization is embedded in structural decision-making |
| DevSecOps flow | See diagnostic prompts above | Delivery flow is visible, repeatable, and increasingly standardized across the assessment scope. | Review pipeline designs, cycle-time metrics, control gates, and exception handling to assess the integrity of delivery flow. | 1: highly manual and inconsistent flow; 2: partial automation with uneven discipline; 3: DevSecOps flow is defined and repeatable; 4: flow is measured and governed; 5: flow is embedded, standardized, and sustainable |
| Automation | See diagnostic prompts above | Automation is systematic enough to improve consistency, speed, and control rather than merely providing isolated technical convenience. | Inspect infrastructure as code repositories, orchestration patterns, and operational workflows to verify whether automation changes the way work is done. | 1: mainly manual operations; 2: isolated automation with limited structural impact; 3: automation is defined and repeatable; 4: automation is governed and measurable; 5: automation is embedded and scaled across the scope |
Performance | Measure
Measurement before momentum
| Structural control | Diagnostic questions | Observable checkpoints | Practical assessment actions | Scoring guidance |
|---|---|---|---|---|
| KPI framework | See diagnostic prompts above | KPIs are explicit, relevant, and active in leadership discussions rather than decorative reporting elements. | Review scorecards, governance packs, and reporting definitions to confirm whether KPI language is operationalized. | 1: little meaningful KPI discipline; 2: partial metrics with weak governance use; 3: KPI framework is defined; 4: KPIs are reviewed and managed; 5: KPI discipline is embedded across the organization |
| Service metrics | See diagnostic prompts above | Service expectations are explicit and used to guide operations, reliability, and governance decisions. | Inspect service definitions, reliability metrics, and operational reviews to verify whether service-level management is functioning consistently. | 1: little service measurement discipline; 2: inconsistent service metrics; 3: service metrics are defined; 4: service performance is monitored and governed; 5: service management is embedded and mature |
| Execution cadence | See diagnostic prompts above | Governance rhythm exists and contributes to accountability rather than producing only ceremonial meetings. | Review governance calendars, meeting outputs, and decision logs to test whether cadence is active and consequential. | 1: weak or absent cadence; 2: inconsistent reviews with limited follow-through; 3: cadence is defined and repeatable; 4: cadence is managed and action-oriented; 5: cadence is embedded and structurally reliable |
| Risk and compliance reporting | See diagnostic prompts above | Risks and compliance exposures are surfaced in time to support corrective action rather than retrospective explanation. | Review risk registers, control reporting, and escalation practices to confirm that risk visibility is active and not purely formal. | 1: limited risk visibility; 2: partial reporting with weak escalation; 3: reporting is defined and repeatable; 4: reporting is managed through governance; 5: risk and compliance visibility is embedded institutionally |
| Operational transparency | See diagnostic prompts above | Operational signals are visible enough to support shared understanding and prompt action across leadership and delivery stakeholders. | Inspect dashboards, audience access, and reporting practices to test whether transparency is real, timely, and actionable. | 1: low visibility and weak transparency; 2: fragmented reporting; 3: transparency is defined and available; 4: transparency is monitored and used in governance; 5: visibility is embedded and broadly trusted |
Productivity | Scale
Scale without degradation
| Structural control | Diagnostic questions | Observable checkpoints | Practical assessment actions | Scoring guidance |
|---|---|---|---|---|
| Platform enablement | See diagnostic prompts above | Platforms are enabling delivery across multiple teams and creating structural leverage rather than isolated technical benefit. | Review platform services, adoption patterns, and team usage. Test whether platform enablement materially changes delivery capability. | 1: little platform leverage; 2: partial platform capability with limited adoption; 3: platform enablement is defined and usable; 4: platforms are managed and broadly adopted; 5: platform enablement is embedded and structurally scalable |
| Self-service infrastructure | See diagnostic prompts above | Self-service exists within guardrails and expands capability without reducing governance strength. | Review provisioning models, guardrails, approval patterns, and usage pathways to confirm whether self-service is both real and governed. | 1: fully dependent manual provisioning; 2: limited self-service with inconsistent control; 3: self-service is defined and available; 4: self-service is governed and effective; 5: self-service is embedded, trusted, and scalable |
| Reusable components | See diagnostic prompts above | Reuse is visible in architecture and delivery patterns and contributes materially to scale efficiency. | Inspect component catalogs, template libraries, and reference implementations to verify whether reuse is systematic and adopted. | 1: high duplication and limited reuse; 2: ad hoc reuse with weak structure; 3: reusable assets are defined and repeatable; 4: reuse is governed and broadly applied; 5: reuse is embedded as a structural operating norm |
| Delivery enablement | See diagnostic prompts above | Teams are enabled by shared tooling, guidance, and operational support rather than left to build everything independently. | Review onboarding paths, enablement assets, standards, and support models to assess whether delivery capability is being scaled systematically. | 1: weak delivery support and high reinvention; 2: partial enablement with inconsistency; 3: enablement is defined and available; 4: enablement is managed and broadly useful; 5: enablement is embedded and structurally scalable |
| Scalable capacity | See diagnostic prompts above | Growth in scope or throughput does not automatically produce fragmentation, instability, or governance breakdown. | Review scaling patterns, team growth, operational stability, and governance quality under expansion to determine whether scale is sustainable. | 1: growth quickly amplifies weakness; 2: scale is partial and fragile; 3: scalable capacity is defined and plausible; 4: scale is managed and supported by structure; 5: scale is sustainable and institutionally supported |
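The structural sequence (Stabilize, Align, Optimize, Measure, Scale) suggests a natural way to focus recurring assessments: flag the earliest dimension in the sequence that has not yet reached a given maturity level. The sketch below treats "Defined" (3) as that threshold; this gate, and the example profile, are assumptions for illustration only, since QEEPP does not mandate a threshold:

```python
# Dimension order follows the QEEPP structural sequence.
SEQUENCE = ["Quality", "Effectiveness", "Efficiency", "Performance", "Productivity"]

def first_structural_gap(dimension_scores: dict, threshold: float = 3.0):
    """Return the earliest dimension scoring below the threshold, or None.

    Using 3 ("Defined") as the gate is an illustrative assumption.
    """
    for dimension in SEQUENCE:
        if dimension_scores[dimension] < threshold:
            return dimension
    return None

# Hypothetical dimension-level scores for one assessed scope.
profile = {
    "Quality": 3.4,
    "Effectiveness": 2.6,
    "Efficiency": 3.0,
    "Performance": 2.2,
    "Productivity": 2.8,
}
# first_structural_gap(profile) -> "Effectiveness"
```

Because the sequence is ordered, the first gap is where structural remediation effort arguably pays off most: later dimensions build on the one that is flagged.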