This methodology applies to the figures and tables in the latest version of the DoD contracts report. It also applies to the graphs broken down by the five defense components (Army, Navy, Air Force, DLA, and "Other DoD") and by area (Products/Services/R&D).
Inherent Limitations of FPDS
Since the analysis presented in this report relies almost exclusively on FPDS data, it is subject to four notable limitations.
- First, contracts awarded as part of overseas contingency operations are not separately classified in FPDS. As a result, we do not distinguish between contracts funded by base budgets and those funded by supplemental appropriations.
- Second, FPDS includes only prime contracts, and the separate subcontract database has historically been severely incomplete, accounting for less than half of expected obligations. Therefore, only prime contract data are included in this report.
- Third, reporting regulations require that only unclassified contracts be included in FPDS. We interpret this to mean that few, if any, classified contracts are in the database. For DoD, this omits a substantial amount of total contract spending, perhaps as much as 10 percent. Such omissions are probably most noticeable in R&D contracts.
- Finally, classifications of contracts differ between FPDS and individual vendors. For example, some contracts that a vendor may consider as services are labeled as products in FPDS and vice versa. This may cause some discrepancies between vendors’ reports and those of the federal government.
Constant Dollars and Fiscal Years
All dollar amounts in this report are reported as constant fiscal year 2013 dollars unless specifically noted otherwise. Dollar amounts for all years are deflated by the implicit GDP deflator calculated by the U.S. Bureau of Economic Analysis, with FY 2013 as the base year, allowing the CSIS team to more accurately compare and analyze changes in spending across time. Similarly, all compound annual growth values and percentage growth comparisons are based on constant dollars and thus adjusted for inflation.
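The deflation step described above can be sketched as follows. The deflator index values here are placeholders for illustration only, not actual BEA figures; the real calculation uses the implicit GDP deflator series with FY 2013 as the base year.

```python
# Illustrative implicit GDP deflator index values (NOT actual BEA figures),
# rebased so that FY 2013 = 100.
GDP_DEFLATOR = {2000: 76.4, 2013: 100.0}

def to_constant_fy2013_dollars(nominal: float, fiscal_year: int) -> float:
    """Convert a nominal dollar amount to constant FY 2013 dollars
    by rescaling with the deflator index."""
    return nominal * GDP_DEFLATOR[2013] / GDP_DEFLATOR[fiscal_year]
```

Because all years are rebased to the same FY 2013 index, growth rates computed from the converted figures are automatically inflation-adjusted.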
Due to the native format of FPDS and the ease of comparison with government databases, all references to years conform to the federal fiscal year. Fiscal year 2013, the most recent complete year in the database, spans October 1, 2012, to September 30, 2013.
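The federal fiscal-year convention is easy to encode; a minimal helper, for illustration:

```python
from datetime import date

def federal_fiscal_year(d: date) -> int:
    """Map a calendar date to its federal fiscal year.
    Fiscal years run October 1 through September 30, so FY 2013
    spans October 1, 2012, to September 30, 2013."""
    return d.year + 1 if d.month >= 10 else d.year
```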
Supply Side Classification: Small, Medium, and Large Vendors
To analyze the breakdown of competitors in the market into small, medium, and large vendors, the CSIS team assigned each vendor in the database to one of these size categories. Any organization that FPDS designates as small, according to the criteria established by the federal government, was categorized as such unless the vendor was a known subsidiary of a larger entity. Because size standards vary across sectors, an organization may qualify as a small business in some contract actions but not in others. The study team did not override these inconsistent entries when calculating the distribution of value by vendor size.
Vendors with annual revenue of more than $3 billion, including revenue from nonfederal sources, are classified as large. This classification is based on the vendor’s most recent revenue figure at the time of classification; for vendors that have gone out of business or been acquired, this date may be well before 2013. A joint venture between two or more organizations is treated as a single separate entity, and organizations with a large parent are also classified as large. Due to their system integrator role and consistent market share, the study team placed the six largest defense contractors (Lockheed Martin, Boeing, Raytheon, Northrop Grumman, General Dynamics, and BAE) into a separate category called “Big 6 defense vendors.” Any vendor that has a unique FPDS identifier but is neither small nor large is classified as “medium.”
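The size rules above amount to a decision procedure applied in order of precedence. The sketch below is illustrative only; the function signature and field names are assumptions, not CSIS's actual implementation.

```python
BIG_6 = {"Lockheed Martin", "Boeing", "Raytheon",
         "Northrop Grumman", "General Dynamics", "BAE"}
LARGE_THRESHOLD = 3_000_000_000  # >$3B annual revenue, all sources

def classify_vendor(name, annual_revenue, small_in_fpds, parent_is_large):
    """Apply the vendor-size rules in order of precedence (illustrative)."""
    if name in BIG_6:
        return "Big 6"
    # A large parent makes the vendor large, as does revenue above the
    # threshold; joint ventures are treated upstream as single entities.
    if parent_is_large or (annual_revenue is not None
                           and annual_revenue > LARGE_THRESHOLD):
        return "large"
    # FPDS small-business designations stand unless a large parent applies.
    if small_in_fpds:
        return "small"
    return "medium"
```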
To identify large vendors, the study team investigated any vendor with total obligations of at least $500 million in a single year or $2 billion over the study period. Determining revenues is the most labor-intensive part of the process, drawing on vendor websites, news articles, various databases, and public financial documents. Taken together, this work explains the increase in the market share of large vendors relative to some older editions of this report. While large vendors are occasionally reassigned to the middle tier, the vast majority of investigations either maintain the status quo or identify small or medium vendors that should be reclassified as large.
Handling of Subsidiaries and Mergers and Acquisitions
To better analyze the defense industrial base, the study team made significant efforts to consolidate data for subsidiaries and newly acquired vendors with their parent vendors. As a result, a parent vendor appears once on CSIS's top 20 lists rather than being divided among multiple entries. The assignment of subsidiaries and mergers to parent vendors is done on an annual basis, and a merger must be completed by the end of March to be consolidated for the fiscal year in question. This enables the study team to more accurately analyze the defense industrial base, the number of players in it, and those players’ level of activity.
Over the past six years, the study team has applied a systematic approach to vendor roll-ups. FPDS uses hundreds of thousands of nine-digit DUNS (Data Universal Numbering System) codes from Dun and Bradstreet to identify vendors. A welcome byproduct of this standardization is that FPDS now provides parent vendor codes. These parent codes track the current ownership of vendors but are not backward looking: a merger completed in 2010 would not affect parent assignments in 2000. This prevents the study team from adopting these assignments wholesale. Building on the work of our departmental reports, we have expanded the investigation and lowered that threshold to $250 million of total contract revenue. We have also added an alternative criterion, investigating every DUNS number with more than $1 billion in obligations between 2000 and 2013, regardless of how much it received in any individual year.
We have reinforced these manual DUNS number assignments with automated assignments based on vendor names. Qualifying for an automated assignment by name requires meeting three criteria: 1) the standardized vendor name matches the name of a parent vendor; 2) that name has previously been matched to the parent vendor by CSIS or by the Parent DUNS number field; and 3) there are no alternative CSIS assignments for that vendor name. This process is not immune to error, but it reduces the risk that a DUNS code is considered large in one year but overlooked in another. As an error-checking mechanism, the study team investigated contradictions by comparing our assignments to those made by Parent DUNS numbers for every DUNS number with at least $500 million in annual obligations or $2 billion in total obligations.
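A minimal sketch of the automated name-based assignment, assuming the prior manual and Parent-DUNS assignments are available as a mapping from standardized names to the set of parents ever associated with each name; the data structures here are illustrative, not the actual implementation.

```python
def assign_parent_by_name(std_name, parents_by_name):
    """Assign a parent vendor by standardized name only when exactly one
    parent has ever been associated with that name (via prior CSIS
    assignment or the Parent DUNS field); ambiguous or unknown names
    are skipped rather than guessed."""
    candidates = parents_by_name.get(std_name, set())
    if len(candidates) == 1:
        return next(iter(candidates))
    return None  # no match, or conflicting assignments
```

Requiring a unique candidate implements the third criterion: a name linked to more than one parent yields no automated assignment.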
Demand Side Classification: Contract Characteristics
This study considers a variety of contract characteristics: the contracting component, the type of product or service being procured, the funding mechanism, the contract vehicle, the contract size, and the extent of competition. In several cases, this classification can be derived from a single field of the database, using groupings established by the study team. Characteristics that require multiple fields or introduce other complications are listed below.
The study team followed the DoD methodology and calculated competition by using two fields: extent of competition (which is preferred for awards) and fair opportunity (which is preferred for most IDVs). Additionally, to better evaluate the rate of “effective competition,” the study team categorizes competitively awarded contracts by the number of offers received.
Determining the contract vehicle required classifying both awards and indefinite delivery vehicles (IDVs). While classifying awards is straightforward, classifying IDVs requires the referenced IDV contract type field, which is only available via the FPDS web tool. The study team recreates this field by automatically looking up the referenced parent IDV for each delivery order. When this lookup is unsuccessful, typically because the IDV originated before the study period, the study team relies on tables downloaded from the FPDS web tool. This approach may not exactly match the FPDS web tool results, but it allows for cross-tabulation, enables emulation of the DoD method for calculating competition discussed above, and removes the discrepancies that result from the use of multiple sources.
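The two-step lookup can be sketched as follows; the field and table names are hypothetical stand-ins for the recreated lookup and the web-tool download.

```python
def referenced_idv_type(order, idv_types, webtool_idv_types):
    """Resolve the contract type of a delivery order's parent IDV:
    first via the recreated lookup table built from the data itself,
    then via tables downloaded from the FPDS web tool (covering IDVs
    that predate the study period)."""
    ref = order.get("referenced_idv_piid")
    if ref in idv_types:
        return idv_types[ref]
    # Records that resolve through neither source stay "unlabeled"
    # rather than being folded into an "other" category.
    return webtool_idv_types.get(ref, "unlabeled")
```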
For the purposes of this report, a contract refers to either an award with a unique procurement identifier or an IDV with a unique pairing of a delivery order procurement identifier and a referenced IDV procurement identifier. Contracts were classified on the basis of total expenditures for the fiscal year in question. Groupings are in nominal dollars because many regulatory thresholds are not adjusted for inflation; as a result, smaller contracts will be slightly overrepresented in recent years. Unlike some prior reports, de-obligations are excluded rather than being grouped with contracts under $250,000.
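The contract definition above amounts to a composite key; a sketch with illustrative field names:

```python
def contract_key(record):
    """Identify a contract: an award's procurement identifier alone, or,
    for orders under an IDV, the unique pairing of the order's
    procurement identifier and the referenced IDV's identifier."""
    ref = record.get("referenced_idv_piid")
    if ref:
        return (record["piid"], ref)
    return (record["piid"], None)
```

Aggregating fiscal-year obligations by this key yields the per-contract totals that the size groupings are based on.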
Data Reliability Notes and Download Dates
Any analysis based on FPDS information is naturally limited by the quality of the underlying data. Several Government Accountability Office (GAO) studies have highlighted problems with FPDS (for example, the December 30, 2003, report “Reliability of Federal Procurement Data,” and the September 27, 2005, report “Improvements Needed for the Federal Procurement Data System–Next Generation”).
In addition, FPDS data from past years are continuously updated over time. Although fiscal year 2007 had long been closed, over $100 billion worth of entries for that year were modified in 2010. This explains discrepancies between the data presented in this report and those in previous editions. The study team revises prior-year data when updates produce a significant change in topline spending. Tracking these changes reduces ease of comparison with past years, but the revisions enable the report to use the best available data and to monitor for abuse of updates.
Despite its flaws, FPDS is the only comprehensive data source on government contracting activity, and it is more than adequate for analysis focused on trends and order-of-magnitude comparisons. To be transparent about weaknesses in the data, this report consistently describes data that could not be classified, due to missing entries or contradictory information, as “unlabeled” rather than folding them into an “other” category.
The 2013 data used in this report were downloaded in February 2014.