Why does one software project cost twice as much as another? Is it because it delivers twice as much functionality? If you contract with two system integrators, how can you tell which one is more productive? In a multi-year outsourcing arrangement, is your supplier getting more or less efficient year over year?
An enduring challenge in the software industry is establishing a standard unit of measurement that expresses the amount of business functionality in an information system so that questions like these can be answered. Most organizations have not adopted a formal measure, but among those that have, the most widely accepted is function points, defined by Allan Albrecht in 1979. But are function points an effective metric for integration projects?
The main weakness of function points is that they assign points based on what functional users of the application can perceive, such as inputs, outputs, inquiries, stored data, and external interfaces. There is no standard method for assigning points to hidden functionality such as complex algorithms or process orchestration. Several approaches attempt to address this weakness: Engineering Function Points (similar to the Halstead complexity measures), Feature Points, which account for functions not readily perceivable by the user but essential for proper operation, and Weighted Micro Function Points, which adjust function points using weights derived from program flow complexity and other factors. A further problem with Function Point Analysis (FPA) is that it is difficult to apply consistently and requires highly trained experts, which is one reason why, after 30 years, most organizations have not adopted it.
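To make the gap concrete, here is a minimal sketch of classic FPA counting in Python. The weights are the standard IFPUG low/average/high values; the element names and sample counts are hypothetical. Notice that nothing in the model scores algorithmic or orchestration complexity, which is precisely the weakness described above.

```python
# Illustrative sketch of classic Function Point Analysis counting.
# The weights below are the standard IFPUG low/average/high values;
# the element names and sample counts are hypothetical.

FP_WEIGHTS = {
    "external_input":     {"low": 3, "average": 4,  "high": 6},
    "external_output":    {"low": 4, "average": 5,  "high": 7},
    "external_inquiry":   {"low": 3, "average": 4,  "high": 6},
    "internal_file":      {"low": 7, "average": 10, "high": 15},
    "external_interface": {"low": 5, "average": 7,  "high": 10},
}

def unadjusted_function_points(counts):
    """counts: {(element_type, complexity): number_of_elements}"""
    return sum(FP_WEIGHTS[etype][cplx] * n
               for (etype, cplx), n in counts.items())

# Example: an app with 4 average inputs, 2 high outputs,
# 3 low inquiries, and 1 average internal logical file.
sample = {
    ("external_input", "average"): 4,
    ("external_output", "high"): 2,
    ("external_inquiry", "low"): 3,
    ("internal_file", "average"): 1,
}
print(unadjusted_function_points(sample))  # 4*4 + 2*7 + 3*3 + 10 = 49
```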
If you are challenged with establishing a standard unit of measure for integration projects, there is a better way. I have been using a simple and effective method for well over ten years, one that is also an integral part of Informatica's Velocity methodology. With this blog post I am announcing the name for this method: Integration Point Analysis (IPA).
There is no industry standards group behind IPA, which is both a benefit and a limitation. The benefit is that each organization, usually through its Integration Competency Center (ICC) or enterprise architecture group, can quickly tailor the IPA definitions and weights for its own purposes. While industry standards are useful, they can also become overly complex, bureaucratic, and slow to change, problems that IPA neatly sidesteps. The one drawback is that while IPA is effective for comparing projects and suppliers within an organization and for conducting long-term trend analysis, it will not help you benchmark across organizations.
IPA essentially involves classifying integrations into four categories: Low complexity, Medium complexity, High complexity, and Custom. The first three categories identify common integration patterns and can be used for project estimating, internal benchmarking, and long-term trend analysis. The factors considered in classification include interface specifications, transaction protocols, data transformation complexity, source/target technology platforms, volumetrics, and operational requirements. In a mature practice, 90% of integrations typically fall into one of the three standard categories; the Custom category, by virtue of its variability, is not used for benchmarking.
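To illustrate what a tailored rubric might look like, here is a minimal sketch in Python. Every factor, score, and threshold is an invented placeholder, not a Velocity definition; an ICC would substitute its own definitions and weights.

```python
# A hypothetical IPA classification rubric, written as a minimal sketch.
# Every factor, score, and threshold is an illustrative placeholder;
# an ICC would substitute its own tailored definitions and weights.

from dataclasses import dataclass

@dataclass
class Integration:
    matches_standard_pattern: bool  # fits one of the ICC's common patterns?
    standard_interface_spec: bool   # e.g., a published canonical schema
    guaranteed_delivery: bool       # transaction protocol requirement
    transformation_rules: int       # number of mapping/transformation rules
    platforms: int                  # distinct source/target technology platforms
    daily_volume: int               # records per day (volumetrics)
    high_availability: bool         # 24x7 operational requirement

def classify(i: Integration) -> str:
    if not i.matches_standard_pattern:
        return "Custom"             # too variable; excluded from benchmarks
    score = (
        (0 if i.standard_interface_spec else 1)
        + (1 if i.guaranteed_delivery else 0)
        + (1 if i.transformation_rules > 20 else 0)
        + (1 if i.platforms > 2 else 0)
        + (1 if i.daily_volume > 1_000_000 else 0)
        + (1 if i.high_availability else 0)
    )
    if score <= 1:
        return "Low"
    return "Medium" if score <= 3 else "High"
```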
In my experience implementing IPA at different organizations, it generally takes an ICC just a few weeks to tailor the definitions and establish an initial baseline, and staff need no formal training to use the method.
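Once a baseline exists, the payoff is simple arithmetic. The following hypothetical example shows how the supplier-comparison question posed at the start might be answered; the category weights (Low=1, Medium=3, High=8) and the cost figures are invented for illustration, and an ICC would calibrate its own weights against historical effort data.

```python
# Hypothetical example of using an IPA baseline for internal benchmarking.
# Category weights and project figures are illustrative placeholders.

WEIGHTS = {"Low": 1, "Medium": 3, "High": 8}  # Custom excluded from benchmarks

def integration_points(categories):
    """categories: list of 'Low'/'Medium'/'High' labels for one project."""
    return sum(WEIGHTS[c] for c in categories if c in WEIGHTS)

# Two projects of similar size but different cost efficiency.
projects = {
    "Supplier A, year 1": (["Low"] * 12 + ["Medium"] * 5 + ["High"] * 2, 430_000),
    "Supplier B, year 1": (["Low"] * 8 + ["Medium"] * 9 + ["High"] * 1, 410_000),
}

for name, (categories, cost) in projects.items():
    pts = integration_points(categories)
    print(f"{name}: {pts} points, ${cost / pts:,.0f} per point")
# Both projects score 43 points, so the per-point cost makes the
# suppliers directly comparable despite different integration mixes.
```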
For questions or suggestions, please post a comment. Or, to discuss a possible implementation in your organization, visit Informatica Professional Services and click Contact Me.
A. J. Albrecht, "Measuring Application Development Productivity," Proceedings of the Joint SHARE, GUIDE, and IBM Application Development Symposium, Monterey, California, October 14–17, 1979, IBM Corporation, pp. 83–92.