The Wrong Stopwatch: Why Contract Cycle Time Metrics Miss What Matters
By Daniel J. Finkenstadt, PhD and Kraig Conrad*
When the Office of Federal Procurement Policy announced in January 2021 that agencies would track Procurement Administrative Lead Time (PALT) as a key performance indicator, the logic seemed bulletproof: shorter timelines mean more efficient contracting. The memorandum standardized measurement of a metric that contracting offices had tracked informally for years, defining PALT as the time between initial solicitation and contract award (OFPP, 2021). But this reasoning commits the same error as judging a surgeon's skill by how quickly she completes operations. Some patients need appendectomies; others need experimental cancer treatment. The stopwatch tells you nothing about whether you saved the right life.
What constitutes "PALT" has varied considerably across agencies and over time. As Naval Postgraduate School researchers documented in their 2022 analysis of defense procurement timelines, "procurement lead time is a term sometimes used interchangeably with the terms procurement action lead time, contract action lead time, and administrative lead time. Precise terminology varies among contracting officers and across agencies" (Ortiz et al., 2022, p. 7). Some offices start their clock at acquisition plan approval, others at purchase request acceptance, still others at solicitation release. The 2021 Office of Federal Procurement Policy memorandum brought standardization to this confusion, consistent with direction in the 2018 National Defense Authorization Act. It formally defined PALT as beginning when the solicitation hits the street and ending at contract award. For organizations managing thousands of contracts across wildly different contexts, this singular metric has become a north star for efficiency.
But research finds the standardized definition captures perhaps 20 percent of the actual timeline. Before contracting offices release a solicitation, requirements must be defined, funding secured, technical reviews completed, and stakeholder approvals obtained. Additional Naval Postgraduate School research found this Purchase Request Acceptance Lead Time averaged 22.59 days for routine acquisitions but stretched to 288 days for complex requirements, yet this phase remains entirely invisible in PALT metrics (Letterle & Kantner, 2019). The focus on measuring what happens after solicitation also creates perverse incentives that optimize for speed regardless of outcome, treat fundamentally different procurement challenges as equivalent, and compress the very planning phases where careful requirement definition occurs. Organizations measure activity rather than value creation, tracking the final sprint while ignoring the marathon that preceded it.
The 2021 OFPP memorandum didn't just standardize PALT measurement; it also provided agencies detailed suggestions for reducing it, recommending practices like pre-solicitation industry dialogues, modular contracting, and artificial intelligence tools. These recommendations assume the goal is correct and execution is the problem. But no amount of process optimization fixes a fundamental measurement error. Faster cycle times help when buying commodities through established channels. They actively harm outcomes when acquiring immature technologies or stimulating nascent markets. The memorandum offers ways to perform better against a metric that often measures the wrong thing.
The deeper problem is not just that agencies chose the wrong metric. It is that PALT represents the only readily available data point in procurement systems that were never designed to capture what actually drives performance. Federal and defense organizations measure cycle time not because it matters most, but because it's a convenient metric when their systems track little else.
The heterogeneity problem
Consider three concurrent procurement activities at a defense agency. The first buys standard office supplies through an existing indefinite delivery, indefinite quantity (IDIQ) contract. The second acquires a proven satellite system from an established vendor. The third develops a prototype artificial intelligence capability where the requirement itself remains fuzzy and the technology immature. Dollar value is no proxy for complexity here: a large supply contract can easily obligate more money than an AI prototype effort.
Applying the same cycle time metric across these scenarios is analytically meaningless. The office supply order should take days because everything is predefined: specifications, vendors, pricing structures, and approval workflows. Some organizations now automate these types of transactions entirely, reducing human involvement to exception handling. Here, cycle time serves as a useful efficiency gauge because the process has been standardized and the outcome commoditized.
The satellite acquisition might reasonably take eighteen months. Technical evaluations, security reviews, integration planning, and funding coordination all serve legitimate functions. Shaving six months off this timeline might indicate improved efficiency, but it could equally signal inadequate due diligence or compressed technical risk assessment.
The AI prototype presents the starkest problem. Here, speed to contract award captures almost nothing about procurement effectiveness. What matters is whether the contracting approach enables rapid experimentation, accommodates evolving requirements, and facilitates validated learning. A contract awarded quickly but structured rigidly will fail regardless of its impressive PALT number. Meanwhile, a longer procurement cycle that establishes framework agreements with multiple vendors and builds in staged decision points might deliver substantially better outcomes.
Aggregating these three timelines into a single performance dashboard destroys information. It assumes equivalence where none exists.
What PALT actually measures
The metric's fundamental flaw lies in what it excludes. PALT stops counting at contract award, yet award represents the beginning of value delivery, not its culmination. A contract signed in record time but specifying the wrong requirements, structured around flawed assumptions, or poorly positioned for execution represents a process failure regardless of its speed. In the public sector, those “invisible” failures show up clearly in value-erosion data. Interim analysis led by the CCM Institute shows that many public agencies experience 25–30% loss of contract value from award to closeout on complex programs. The gap between expected value at signing and realized value during execution is not an anomaly but a structural pattern that PALT doesn’t account for.
Research on project success demonstrates that focusing on cycle time as a primary metric represents both a measurement problem and an optimization trap. Atkinson's analysis of project management success criteria identifies the limitation: the "Iron Triangle" of cost, time, and quality measures only the delivery process, excluding system quality and stakeholder benefits that determine actual value creation (Atkinson, 1999). Organizations commit what Atkinson terms a Type II error: not measuring incorrectly, but measuring incompletely.
The problem compounds when incomplete metrics become optimization targets. Public sector performance research documents how measurement systems create predictable pathologies. Van Thiel and Leeuw's examination of administrative reform found that performance assessment instruments generate "tunnel vision," where organizations focus exclusively on measured dimensions while unmeasured aspects deteriorate (Van Thiel & Leeuw, 2002, p. 267). Organizations that optimize for PALT often find themselves engaged in what might be called efficiency theater: activities that look productive on dashboards while undermining actual effectiveness. Contracting offices may prioritize simple, low risk acquisitions that close quickly, improving their metrics while more complex strategic procurements languish. Alternatively, they may rush complex acquisitions to hit cycle time targets, producing hastily written documents that create downstream problems. Organizations demonstrate improving PALT numbers while actual procurement effectiveness (matching approaches to contexts, enabling adaptation, delivering capability) degrades invisibly.
In requirements with high uncertainty, the critical path runs through learning, not paperwork. Eric Ries's work on lean startup methodology emphasizes that in uncertain environments, the goal is validated learning through rapid iteration (Ries, 2011). He contends that "learning" is the real unit of measure for startups and new technology, and that this learning occurs by fielding quickly, iterating with real customers, and measuring real performance and feedback data. Traditional procurement metrics ignore this entirely. They cannot distinguish between a twelve month cycle time spent navigating bureaucracy and a twelve month cycle spent conducting market research, prototyping solutions, and refining requirements based on feedback.
Some procurement scenarios demand deliberate patience. When acquiring breakthrough technologies, requirements maturity often lags far behind the contracting timeline. Forcing artificial speed produces contracts that lock in poorly understood needs or immature technical approaches. The resulting amendments, modifications, and performance failures exact costs that never appear in PALT dashboards.
Manufacturing maturity introduces another dimension of heterogeneity. Buying from mature industrial bases differs fundamentally from stimulating nascent markets. The latter requires supplier development, risk capital allocation, and iterative technical refinement. None of this work shows up in cycle time metrics, yet all of it determines whether the procurement achieves its strategic purpose.
The architecture of ignorance
The reliance on PALT reveals a more fundamental problem with federal procurement systems: they capture transactional data while ignoring the contextual and performance data necessary for learning. Current systems record that a contract was awarded, when it was awarded, and for how much. They rarely capture the characteristics that determine whether the procurement approach matched the problem.
Was the requirement technically mature or exploratory? Did the market offer multiple qualified vendors or require supplier development? Were performance specifications clear or emergent? Did the contract include iterative decision points or lock in a fixed approach? How many modifications followed award, and why? Did the delivered capability meet operational needs? These questions matter enormously for understanding procurement effectiveness, yet the data to answer them systematically does not exist.
Federal procurement systems were never architected to support end-to-end visibility from requirement generation through performance measurement. The Federal Acquisition Regulation (FAR) based contract lifecycle exists across a fragmented landscape of disconnected systems. Requirements emerge in mission planning tools. Budget data lives in financial systems. Procurement actions get recorded in contract writing platforms. Performance measurement happens in siloed management databases. Each system was built independently, often decades apart, with no common data model or integration architecture.
What performance data exists lives in systems like the Contractor Performance Assessment Reporting System (CPARS), siloed from upstream acquisition data and consistently reported late, inaccurately, or incompletely. CPARS captures whether contractors met delivery schedules but not whether requirements made sense, whether contract structures enabled adaptation, or whether delivered capabilities solved operational problems. Linking performance back to requirement characteristics requires manually joining data across disconnected systems with inconsistent identifiers and incompatible structures.
This fragmentation creates massive data gaps that make holistic analysis nearly impossible. Analysts spend more time wrestling with integration challenges (inconsistent identifiers, incompatible data structures, asynchronous update cycles) than conducting actual analysis. Many give up entirely, retreating to whatever metrics their individual system happens to capture.
The problem compounds over time. Without rich data linking upstream decisions to downstream outcomes, procurement organizations cannot learn systematically which factors predict success. Does early supplier engagement improve performance? Under what conditions do cost reimbursement contracts outperform fixed price structures? When does competitive prototyping justify its expense? Individual contracting officers accumulate experiential knowledge, but organizational learning stalls because experiences cannot be aggregated, analyzed, or converted into evidence based practice.
The OTA paradox
Recognizing process speed limitations, federal agencies have increasingly turned to alternative procurement vehicles like Other Transaction Authorities (OTAs). These mechanisms promise greater flexibility, faster timelines, and better access to nontraditional vendors who find FAR based contracting too burdensome. For specific mission needs, particularly in rapidly evolving technology domains, OTAs deliver genuine value.
But this shift introduces a new data challenge. OTAs deliberately break from FAR standardization, allowing customized agreement structures tailored to specific circumstances. This flexibility is precisely their advantage for engaging commercial firms accustomed to varied contracting approaches. However, this same variability undermines the already fragile data infrastructure supporting procurement analytics.
FAR based contracts, despite their rigidity, share common data elements: contract type, pricing and line item structures, and performance metrics follow standardized taxonomies. These standards, however imperfect, enable comparison across contracts. With enough manual effort, an analyst could identify patterns in how fixed price incentive contracts perform relative to cost plus structures because the categories themselves are defined consistently.
OTAs sacrifice this standardization for flexibility. Agreement structures vary widely. Performance metrics become bespoke. Data fields that exist in one OTA may be absent or defined differently in another. When organizations attempt to analyze OTA performance or compare OTAs against traditional contracts, they discover that the data lacks the structural consistency necessary for meaningful aggregation.
Recent Government Accountability Office findings underscore these data challenges. In a September 2025 report examining DOD's use of Other Transaction Agreements, GAO found that while DOD systematically tracks production OTAs through the Federal Procurement Data System, it lacks reliable methods for tracking FAR-based production contracts that resulted from prototype OTAs (GAO, 2025). GAO's analysis revealed that DOD's manual reporting of follow-on FAR contracts was unreliable and inaccurate: only 15 percent of the 48 contracts reported as FAR production contracts were actually FAR contracts, while the majority were prototype and production OTAs instead. This data discontinuity prevents DOD from assessing the extent to which OTAs deliver capabilities to warfighters. Without systematic processes to track these transitions, procurement organizations cannot determine whether flexible acquisition approaches actually result in fielded systems, measure their relative success against traditional contracts, or identify which contextual factors predict successful outcomes.
This creates a painful irony. Agencies adopt OTAs partly to escape the constraints of legacy procurement systems. But in doing so, they fragment their data landscape further, making the business intelligence improvements they seek even more difficult to achieve. The very flexibility that attracts nontraditional vendors undermines the data standardization required for systematic learning.
Building the data infrastructure we need
Transforming procurement performance requires building data systems designed for learning rather than mere transaction recording. This demands moving beyond incremental improvements to fragmented legacy systems toward an integrated architecture that spans the entire acquisition lifecycle.
Modern procurement platforms should capture a rich taxonomy of requirement characteristics such as technical maturity levels, market conditions, performance certainty, innovation objectives, and strategic importance. These fields need not burden contracting officers with extensive additional work. Many assessments already occur informally and could be formalized through structured data entry at natural decision points.
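As a minimal sketch of what structured data entry at those decision points could look like (every field name and category below is an illustrative assumption, not an established government data standard), a requirement profile might be captured as a small typed record:

```python
from dataclasses import dataclass
from enum import Enum

class TechnicalMaturity(Enum):
    EXPLORATORY = "exploratory"   # requirement still being discovered
    EMERGING = "emerging"         # prototype-stage technology
    MATURE = "mature"             # proven technology, stable specifications

class MarketCondition(Enum):
    NASCENT = "nascent"           # few or no qualified vendors yet
    COMPETITIVE = "competitive"   # multiple qualified vendors
    SOLE_SOURCE = "sole_source"

@dataclass
class RequirementProfile:
    """Structured fields a contracting officer could complete at requirement acceptance."""
    requirement_id: str
    technical_maturity: TechnicalMaturity
    market_condition: MarketCondition
    performance_certainty: str    # e.g., "clear" or "emergent"
    innovation_objective: bool    # is stimulating a new capability part of the goal?
    strategic_importance: int     # e.g., 1 (routine) through 5 (mission critical)
```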
Contract features should be systematically coded regardless of vehicle: pricing structures, best value tradeoff determinants, incentive mechanisms, intellectual property approaches, teaming arrangements, and performance measurement frameworks. For FAR based contracts, this means enhancing existing systems to capture context beyond basic transaction data. For OTAs and other alternative instruments, it means establishing minimum data standards that preserve flexibility while ensuring comparability on core dimensions.
The challenge with alternative procurement vehicles is finding the right balance. Too much standardization defeats their purpose. Too little creates data chaos. A viable approach would define a core data ontology covering universal elements: parties, obligations, milestones, payments, and performance metrics. Beyond this foundation, agencies could extend the model to capture vehicle-specific features while maintaining semantic interoperability.
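A minimal sketch of that layered approach, assuming a data model in which core elements are fixed and vehicle-specific features live in an open extension map (the identifiers, field names, and values below are illustrative only):

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Milestone:
    name: str                     # e.g., "prototype demo"
    due: date
    completed: Optional[date] = None

@dataclass
class CoreAgreementRecord:
    """Universal elements shared by FAR contracts and OTAs alike."""
    agreement_id: str             # common identifier used across all systems
    vehicle_type: str             # "FAR", "OTA-prototype", "OTA-production", ...
    parties: list[str]
    obligated_amount: float
    milestones: list[Milestone] = field(default_factory=list)
    performance_metrics: dict[str, float] = field(default_factory=dict)
    # Vehicle-specific features go in an extension map, preserving flexibility
    # while keeping the core fields comparable across instruments.
    extensions: dict[str, object] = field(default_factory=dict)

# Example: an OTA prototype agreement with extension fields a FAR contract would not carry.
ota = CoreAgreementRecord(
    agreement_id="AGR-2026-0001",         # illustrative identifier
    vehicle_type="OTA-prototype",
    parties=["Agency X", "Vendor Y"],
    obligated_amount=4_500_000.0,
    milestones=[Milestone(name="prototype demo", due=date(2026, 6, 30))],
    performance_metrics={"schedule_variance_days": 0.0},
    extensions={"consortium": "Example Consortium", "cost_share_pct": 20},
)
```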
Negotiation dynamics offer another untapped data source. How many proposal revisions occurred? Which evaluation factors drove source selection? What tradeoffs emerged between cost, schedule, and performance? Recording these patterns would illuminate which acquisition strategies work under different conditions.
Post-award performance tracking must go beyond standard measures for all contracts. We need to know the basics, such as whether the contractor delivered on schedule, but we should also measure broader outcomes. Did the solution meet operational needs? We may know how many modifications were required, but we need to understand why they were needed. What was the total cost of ownership? Linking these outcome measures to upstream decisions would enable systematic learning about which approaches predict success.
Critically, this data architecture must bridge the requirement-to-performance lifecycle within a unified information model. Requirements systems, financial platforms, contracting tools, and program management databases need not merge into a single monolithic application. But they must share common identifiers, publish data to a shared repository, and expose APIs that enable cross-system analytics. The barrier is not technological capability but organizational will and the recognition that measurement systems shape what organizations optimize for.
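To make the shared-identifier idea concrete, here is a minimal sketch (the system extracts, column names, and values are invented for illustration) of how a common key turns cross-system analytics into simple joins:

```python
import pandas as pd

# Tiny illustrative extracts standing in for three siloed systems, each
# publishing the same shared identifier ("agreement_id") to a common repository.
requirements = pd.DataFrame({
    "agreement_id": ["A-001", "A-002"],
    "requirement_maturity": ["mature", "exploratory"],
})
contracts = pd.DataFrame({
    "agreement_id": ["A-001", "A-002"],
    "pricing_structure": ["firm_fixed_price", "cost_plus_fixed_fee"],
})
performance = pd.DataFrame({
    "agreement_id": ["A-001", "A-002"],
    "modification_count": [1, 6],
    "met_operational_need": [True, False],
})

# With a common key, cross-lifecycle analysis becomes two joins rather than
# months of manual reconciliation across incompatible systems.
lifecycle = (
    requirements
    .merge(contracts, on="agreement_id", how="inner")
    .merge(performance, on="agreement_id", how="left")
)

# Example question the joined data can answer: do exploratory requirements
# under rigid pricing structures accumulate more modifications?
print(lifecycle.groupby(["requirement_maturity", "pricing_structure"])["modification_count"].mean())
```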
WARNING: Agency leaders and decision-makers should ensure that they don’t fall into the data migration trap. Too often, the disparate data within siloed systems are forced to migrate to new, better systems. If this can be done relatively seamlessly, then go for it. History shows, however, that most organizations end up spinning for months or years, tying up their best digital talent to move old data into new systems. Sometimes “snapping a line” and carrying two systems for as long as it takes for legacy data to become obsolete is the higher value approach, even if it results in added carrying costs in the short run.
Knowing what matters
Even as we define standard data elements for general comparison, we must recognize that effective procurement metrics reflect the heterogeneity of contracting contexts and draw on data that links process to outcomes. For commodity purchases through established vehicles, cycle time remains a mostly appropriate measure. These transactions resemble manufacturing operations, where standardization enables meaningful productivity measurement.
For complex acquisitions, metrics should track alignment between contracting approach and requirement characteristics. Organizations need data systems that can answer: Is the technical maturity level assessed and matched to an appropriate contract type? Does the contract structure enable iterative refinement if needed? Are decision gates aligned with learning milestones?
For innovation focused contracts, measures should emphasize learning velocity: How quickly can the government and vendor test assumptions? How readily can the contract accommodate pivots? What feedback loops exist between technical progress and contractual terms? Time to first prototype or time to validated requirement may matter far more than time to award, but only if systems capture these milestones.
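A small sketch of what capturing those milestones would enable, with dates and field names invented for illustration:

```python
from datetime import date

# Illustrative milestone dates for one innovation-focused agreement; the field
# names and values are assumptions, not an existing data standard.
milestones = {
    "solicitation_release": date(2024, 1, 15),
    "award": date(2024, 4, 1),
    "first_prototype_fielded": date(2024, 9, 15),
    "requirement_validated": date(2025, 1, 10),  # requirement confirmed against user feedback
}

palt_days = (milestones["award"] - milestones["solicitation_release"]).days
time_to_prototype = (milestones["first_prototype_fielded"] - milestones["award"]).days
time_to_validated_requirement = (milestones["requirement_validated"] - milestones["award"]).days

# PALT covers only the first interval; learning-velocity measures track how
# quickly assumptions get tested after award.
print(f"PALT: {palt_days} days")
print(f"Time to first prototype: {time_to_prototype} days")
print(f"Time to validated requirement: {time_to_validated_requirement} days")
```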
Organizations should measure process fitness: the degree to which procurement approaches match requirement characteristics. A simple scorecard assessing whether high uncertainty requirements receive flexible contract structures, mature requirements receive streamlined processes, and commodity purchases leverage automation would reveal more about procurement health than aggregate cycle times. But this requires data systems that capture both requirement characteristics and contracting approaches in analyzable form.
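A minimal sketch of such a scorecard, assuming requirement uncertainty and contract structure have already been coded into categorical fields (the matching rules and values below are illustrative, not a validated rubric):

```python
def process_fit(requirement_uncertainty: str, contract_flexibility: str,
                is_commodity: bool, automated: bool) -> bool:
    """Return True when the contracting approach matches the requirement context."""
    if is_commodity:
        # Commodity purchases should leverage automation and streamlined channels.
        return automated
    if requirement_uncertainty == "high":
        # High-uncertainty requirements should get flexible, iterative structures.
        return contract_flexibility in {"modular", "staged", "framework"}
    # Mature requirements should get streamlined, fixed structures.
    return contract_flexibility in {"streamlined", "fixed"}

portfolio = [
    {"id": "A-1", "requirement_uncertainty": "high", "contract_flexibility": "staged",
     "is_commodity": False, "automated": False},
    {"id": "A-2", "requirement_uncertainty": "low", "contract_flexibility": "framework",
     "is_commodity": False, "automated": False},
]

fit_rate = sum(process_fit(**{k: v for k, v in row.items() if k != "id"})
               for row in portfolio) / len(portfolio)
print(f"Portfolio process-fitness rate: {fit_rate:.0%}")
```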
Most importantly, procurement systems must link upstream decisions to downstream outcomes. Statistical analysis of this integrated data could reveal which contract features, negotiation approaches, and requirement characteristics systematically predict performance. Over time, this evidence base would enable procurement organizations to develop context specific best practices grounded in empirical evidence rather than anecdote or tradition.
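As a sketch of the statistical analysis that linked lifecycle data would support (the dataset, column names, and values below are invented, and a real model would demand far more data and care with confounders), a simple logistic regression could estimate which upstream characteristics are associated with successful delivery:

```python
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Small illustrative stand-in for a linked lifecycle dataset; "delivered_to_need"
# marks whether the delivered capability met the operational need.
lifecycle = pd.DataFrame({
    "requirement_maturity":      ["mature", "exploratory", "mature", "exploratory", "emerging", "mature"],
    "pricing_structure":         ["ffp", "cpff", "ffp", "ffp", "cpff", "fpi"],
    "early_supplier_engagement": [1, 1, 0, 0, 1, 0],
    "decision_gate_count":       [1, 4, 1, 0, 3, 2],
    "delivered_to_need":         [1, 1, 1, 0, 1, 0],
})

X = lifecycle.drop(columns="delivered_to_need")
y = lifecycle["delivered_to_need"]

# One-hot encode the categorical context fields; pass numeric fields through.
preprocess = make_column_transformer(
    (OneHotEncoder(handle_unknown="ignore"), ["requirement_maturity", "pricing_structure"]),
    remainder="passthrough",
)

model = make_pipeline(preprocess, LogisticRegression(max_iter=1000))
model.fit(X, y)

# With enough linked records, coefficients would indicate which upstream features
# are associated with successful delivery; with six rows this is illustration only.
print(model.named_steps["logisticregression"].coef_)
```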
The path forward
The federal government faces a choice. It can continue optimizing the metrics its legacy systems happen to capture, driving behavior toward speed regardless of appropriateness. Or it can invest in the data infrastructure necessary for genuine performance improvement. The latter path requires recognizing that procurement is a professional activity demanding situational judgment, not a production process amenable to universal efficiency measures. It requires building systems that capture the contextual richness and performance outcomes necessary for organizational learning. And it requires the analytical sophistication to draw insights from complex, multidimensional data rather than seeking false comfort in single number summaries.
This investment must balance two competing imperatives. First, establishing data standards robust enough to enable systematic learning and business intelligence. Second, preserving the flexibility that makes alternative procurement vehicles valuable for engaging nontraditional vendors and adapting to diverse mission needs.
The same capabilities needed to escape cycle-time tunnel vision are the ones that reduce contract value erosion. Evidence from ongoing CCM Institute research shows that public agencies can materially cut value loss by segmenting portfolios by complexity, engaging commercial expertise earlier, simplifying and aligning contracting structures with operational reality, and investing in lifecycle management competency. These are not post-award remedies; they depend on making smarter upstream decisions grounded in better data about maturity, uncertainty, and supplier capability.
The solution lies in modern data architecture principles: core standards for universal elements, extensible schemas for vehicle-specific features, and semantic interoperability that enables analysis across heterogeneous sources. Government acquisition teams cannot continue to measure what is conveniently available; they must measure what matters (Doerr, 2018).
*Initial copyediting completed with GPT 5.1, finalized by human authors.
References
Atkinson, R. (1999). Project management: Cost, time and quality, two best guesses and a phenomenon, its time to accept other success criteria. International Journal of Project Management, 17(6), 337-342. https://doi.org/10.1016/S0263-7863(98)00069-6
Doerr, J. (2018). Measure what matters: How Google, Bono, and the Gates Foundation rock the world with OKRs. Portfolio/Penguin.
Government Accountability Office. (2025). Other transaction agreements: Improved contracting data would help DOD assess effectiveness (GAO-25-107546). https://www.gao.gov/products/gao-25-107546
Letterle, K., & Kantner, P. (2019). An analysis of contracting activity purchase request acceptance lead time for USMC using unit acquisitions under the Simplified Acquisitions Threshold [Master’s thesis, Naval Postgraduate School]. NPS Archive: Calhoun. https://calhoun.nps.edu/handle/10945/64005
Ortiz, C. A., Fang, C. S., & Ledesma, G. G. (2022). An analysis of procurement acquisition lead time (PALT) and acquisition lead time (ALT) in the Department of Defense (DOD) [Master's thesis, Naval Postgraduate School]. NPS Archive: Calhoun.
Ries, E. (2011). The lean startup: How today's entrepreneurs use continuous innovation to create radically successful businesses. Crown Business.
Van Thiel, S., & Leeuw, F. L. (2002). The performance paradox in the public sector. Public Performance & Management Review, 25(3), 267-281. https://doi.org/10.2307/3381236