Over a decade ago, I was talking to the then-CIO of the United States Air Force, who happened to mention that US military forces depended on hundreds of thousands of copies of a certain piece of software worldwide; that they knew that software comprised some 65,000,000 lines of code; and that because the code was a trade secret, the Pentagon had never been allowed to see it, even though some of it was known to have been written by developers in a potential adversary nation. That code was, of course, Microsoft Windows. He wasn't complaining about Microsoft, or even about those potentially adversarial developers; he was complaining about his software supply chain.
When Toyota Motors, or Boeing Commercial Aircraft, or any other major manufacturer purchases parts from suppliers, they do acceptance tests before integrating those parts into their assemblies and products. Does that fastener fit properly? Is this axle strong enough to handle the potholes of rural Irkutsk? Are the tolerances close enough on the beams to fit into the design? Manufacturers fit, stretch, snap and crush sample parts to make sure their supply chain risk is under control, at least from the perspective of design requirements. That's just one part of supply chain risk; others include the ability of suppliers to develop, make and deliver the parts in the first place. Even in non-material supply chains (or service chains), acceptance tests are critical to correct functioning: banks check incoming wire transfer requests to ensure they make sense and are actually from partner institutions before executing monetary exchanges.
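The bank example above can be sketched in code. This is a purely illustrative toy, not any real bank's logic; the field names and the partner list are hypothetical:

```python
# Illustrative sketch only: a toy "acceptance test" for an incoming wire
# transfer request, in the spirit of the bank example above. Field names
# and the partner list are hypothetical.

KNOWN_PARTNERS = {"BANK-ALFA", "BANK-BETA"}  # hypothetical partner institutions

def accept_wire_request(request: dict) -> bool:
    """Return True only if the request passes basic acceptance checks."""
    required = {"origin", "account", "amount"}
    if not required.issubset(request):
        return False                      # malformed: missing fields
    if request["origin"] not in KNOWN_PARTNERS:
        return False                      # not from a partner institution
    amount = request["amount"]
    if not isinstance(amount, (int, float)) or amount <= 0:
        return False                      # nonsensical amount
    return True

# A well-formed request from a known partner passes...
assert accept_wire_request({"origin": "BANK-ALFA", "account": "123", "amount": 250.0})
# ...while an unknown origin or a negative amount is rejected before any money moves.
assert not accept_wire_request({"origin": "BANK-ZETA", "account": "123", "amount": 250.0})
assert not accept_wire_request({"origin": "BANK-ALFA", "account": "123", "amount": -5})
```

The point of the analogy holds: the check happens at the boundary, before the incoming "part" is integrated into the rest of the system.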
Of course, supply- or service-chain risk is only one of many risks that companies (and individuals) face, but the study and practice of risk mitigation isn't particularly new. A quick Web search for "risk mitigation" brings up millions of results, starting with approaches to understanding, managing and mitigating risks arising from politics, police actions, earthquakes and even orbital junk collisions. Supply chain risk makes the first page of results, but following that link turns up no mention of the inherent risks of software integration. Unlike the designs of fasteners or axles, nearly all software is a trade secret, its design completely opaque to the buyer.
This problem has been studied quite thoroughly; the U.S. Department of Homeland Security's Build Security In program covers many ills of software integration, and has recently added approaches to software supply chain risk mitigation. The site notes that "The programmatic and product complexity inherent in software supply chains increases the risk that defects, vulnerabilities, and malicious code will be inserted into a delivered software product. As a result, effective risk management is essential for establishing and maintaining software supply-chain assurance over time." Assessing software supply chain risk, however, requires access to the [trade secret] source code, which is unlikely to be delivered to most buyers.
How, then, can that assessment be carried out? There are tools in the marketplace, but standards are still nascent. If those tools supported standard metrics for measuring software quality from several perspectives (including security, privacy and likely defects caused by known defective coding practices), then software developers could self-assess, and trusted third-party assessors could deliver quality assessments of still-secret source code that software buyers could depend on. Fortunately, that's precisely what the Consortium for IT Software Quality (CISQ) is delivering: through the joint efforts of the Object Management Group, the leader in technology standards, and Carnegie Mellon University's Software Engineering Institute, the leader in process quality standards, new standards focused precisely on software quality metrics are starting to emerge.
Finally, the information technology industry is converging on standards for assessing and certifying levels of software risk by directly measuring the quality of software artifacts. The first new standard metric out of the gate is a consistent, dependable, automatable measure of code size (in fact, an automated Function Point metric), the foundation for quality metrics related to complexity and size. Because these metrics are not only consistent and dependable but automatable, their cost drops and their use spreads, meaning software buyers will finally have a standardized, readily accessible way to measure, mitigate and prevent risk.
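To see why "automatable" matters, consider a toy size measure. The sketch below is emphatically not the CISQ Automated Function Points standard; it merely illustrates the appeal of automated metrics: point a script at source code and get the same number every time, with no manual counting.

```python
# Toy illustration of an *automatable* code-size measure. This is NOT the
# CISQ/OMG Automated Function Points metric -- just a sketch of why an
# automated, consistent metric is cheap to apply at scale.
import ast

def size_metric(source: str) -> dict:
    """Count simple structural elements of a Python module."""
    tree = ast.parse(source)
    counts = {"functions": 0, "classes": 0, "branches": 0}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            counts["functions"] += 1
        elif isinstance(node, ast.ClassDef):
            counts["classes"] += 1
        elif isinstance(node, (ast.If, ast.For, ast.While)):
            counts["branches"] += 1
    return counts

sample = """
class Account:
    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("bad amount")
        self.balance += amount
"""
print(size_metric(sample))  # → {'functions': 1, 'classes': 1, 'branches': 1}
```

Because the measurement is deterministic and runs without human judgment, a developer can self-assess and a third-party assessor can reproduce the exact same result on the same code, which is what makes certification against a standard feasible.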
For those interested in learning more, CISQ will be hosting a webinar on Standard Metrics to Manage Software Risk on Thursday, December 6th at 11:00 AM EST. The webinar will be presented by Dr. Bill Curtis, the Director of CISQ, and will include as a panelist Jasmine Alexander, SVP and CIO of Penton Media. In addition, I will be participating in a CAST Software webinar discussing how to Reduce Software Risk through Improved Quality Measures on Thursday, November 29th at 11:00 AM EST. I hope you will join us for both of these enlightening events!