When I entered this industry, computing was in its (comparative) infancy. The first stored-program computers had been built in the 1940s; the first commercial computing devices date to the 1950s. When I came along in the 1970s, computing was still a young field—compared to older technology industries like aviation, electric lighting and railroads, anyway. Nevertheless, in those bygone days, when a computer failed, it was big news—everyone noticed, and somebody had trouble on his or her hands. I distinctly remember a mainframe failing (due to a rare software bug) at a customer site; the customer was an automobile manufacturer. The screaming went right up to the chairman of the auto manufacturer, over to the mainframe manufacturer for whom I worked, and back down the chain to one of my colleagues (not me, thank goodness!). Programmers in those early days were far more careful, on average, to ensure that software worked.
Fast-forward to today, to a modern computing industry with billions of devices carried on our persons, settled in cloud data centers, or somewhere in between. It is a mature industry in every way but one: we are no longer surprised when computers fail; in fact, we expect them to fail. Maturity has brought the computer industry tiny size, enormous computing capability and low cost—along with a terribly low level of software quality. This is despite decades of experience in software development methodology, powerful modeling & simulation capabilities, remarkably complete and integrated software development tooling, and detailed maturity models for measuring and improving the quality of our software development processes.
What went wrong?
We could easily blame the “new generation” of developers—but computing is their birthright, and they grew up with their fingers on keyboards. So something must be missing—and it is: an engineering component, a standard way to measure the quality of their products. Mechanical engineers not only use models (blueprints) to guide their projects; they also have well-defined measuring devices and industry-accepted metrics, not only for the processes used in designing and implementing their artifacts, but for the implementations themselves. Mechanical engineers have their calipers & micrometers, electrical engineers have their multimeters, chemical engineers have their pipettes, and the engineering world is littered with tribometers, manometers, katharometers, eudiometers, dynamometers and the like for measuring engineering artifacts. Where are the software-o-meters for us?
This is exactly the arena in which the OMG Structured Metrics Metamodel (SMM) plays. The SMM is a model for software metrics; it originally grew from the need to capture information about legacy systems and their coding practices to steer system modernization—the original core mission of the OMG’s Architecture-Driven Modernization Platform Task Force. But we can do more, much more: we can measure security and code quality based on an understanding of code size as well as known good and bad coding practices. While perhaps not as nuanced as a review of the code base by an expert with years of experience, an automated software quality metric is far more objective, more repeatable, lower in cost and far more scalable.
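To make the idea concrete, here is a deliberately tiny sketch of what an automated, static code-quality measure can look like: it walks a program’s parse tree looking for a few known bad coding practices and computes a repeatable, size-normalized score. The rules, weights and function names are invented for illustration only—they are not drawn from the SMM or from any CISQ metric.

```python
import ast

# Illustrative toy rules: each named bad practice gets an invented
# penalty weight. (Not an SMM or CISQ rule set.)
RULE_WEIGHTS = {
    "bare_except": 3,    # swallowing all exceptions hides failures
    "eval_call": 5,      # eval() on external input is a security risk
    "long_function": 1,  # very long functions are hard to review
}

def violations(source: str, max_body: int = 50):
    """Walk the AST and collect known bad coding practices."""
    found = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            found.append("bare_except")
        elif (isinstance(node, ast.Call)
              and isinstance(node.func, ast.Name)
              and node.func.id == "eval"):
            found.append("eval_call")
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if node.end_lineno - node.lineno > max_body:
                found.append("long_function")
    return found

def quality_score(source: str) -> float:
    """Weighted violations per 100 lines: lower is better."""
    lines = max(1, source.count("\n") + 1)
    penalty = sum(RULE_WEIGHTS[v] for v in violations(source))
    return 100.0 * penalty / lines

sample = """
def load(path):
    try:
        return eval(open(path).read())
    except:
        return None
"""
print(round(quality_score(sample), 1))  # → 114.3
```

Crude as it is, the sketch exhibits the properties claimed for automated metrics: the same code base always yields the same number, at negligible cost, across as many systems as you care to scan.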
This is precisely the intent of the Consortium for IT Software Quality (CISQ), launched just two years ago as a joint project of OMG and Carnegie Mellon University’s Software Engineering Institute (SEI). When Paul Nielsen, Director and CEO of the SEI, and I launched this initiative with a series of Executive Fora around the world, we envisaged a world of high-quality software, delivered by well-trained developers using tools that provide feedback not only for process quality improvement, but for code quality improvement as well. This approach is a natural extension of OMG’s standards for software modeling and of SEI’s existing and successful process quality improvement measures. OMG has managed the CISQ consortium for its members since the first members joined two years ago, focusing on proposing these static, code-based software quality metrics for the OMG SMM; CISQ’s Director, Bill Curtis, has guided the technical process with drive and finesse throughout.
As we come to the end of 2011, I am delighted to announce that the first metrics will enter the OMG standardization process this year, including—for the first time in the industry—a consistent, objective code-size metric not based on lines of code. OMG and SEI expect the first tools implementing these metrics to appear in 2012, likely integrated into both commercial and open-source development tool suites. Of course, static code-based software quality metrics (and automated bug-finding tools) already exist from some commercial vendors; the difference in 2012 will be repeatable, automatable measures delivered by competing vendors, with consistent results based on open, freely available OMG standards.
So are we done?
Of course not—the CISQ consortium will continue, and grow, to meet the demands of software developers in many different vertical markets. We expect hundreds of new software metrics to appear over the coming years, to solidify a world in which standards govern quality—the same way one can expect the tracks of two rail systems to interconnect painlessly, and switches and lamps from different vendors to work together seamlessly.
At the same time, the “CISQ second wind” will focus on methodologies and tools to help companies adopt consistent, open, standardized software quality metrics. We’ve already heard from companies that will volunteer to discount their products and services for CISQ members, and the growing CISQ membership base is already proposing new metrics for adoption.
Software project completion statistics are still awful, and software quality is still all over the map. Your organization spends millions, perhaps tens of millions, developing or fielding broken software and cleaning up the inevitable mess afterwards—and yes, I’m talking to you: the financial institutions, the insurance carriers, the manufacturers and the transportation companies, not just the software companies. So join us: help make poor software quality a thing of the past, and combine the best aspects of 1970s computing with the exciting advances of the 21st century!