CIOs Need to Lead the Digital Transformation

Gartner recently released the results of its “2017 CEO Survey: CIOs Must Scale Up Digital Business.”  As I read through it, I saw many links to the messages I have been communicating about driving the business value of software, messages that are also discussed in my new book, “The Business Value of Software,” to be published by CRC Press in August 2017. 

The Gartner research found that the top priorities for CEOs in 2017 are 1) growth, 2) technology-related business change and 3) product improvement and innovation.  These three priorities are interconnected and are driven by the digital transformations underway at many organizations.  It is therefore essential that CIOs and their teams be intimately involved in the strategic discussions around these three areas. 

Part of these strategic discussions must be about measuring the success of the initiatives.  This is a topic I have discussed in depth when talking about visualizing the value of software, and Gartner emphasizes it in the report.  To drive the value of a software development initiative, for example, it is essential to clearly understand the goals and objectives of the initiative and to collaborate with the business unit to discuss, define, measure and prioritize projects based on their ability to deliver on the value expectations.  Gartner found that 53 percent of respondents could not provide a clear metric for success.  It is critical not only that the C-suite have distinct metrics for a software development initiative, such as revenue, sales and profit goals, but also that they communicate those goals to the entire technology team, which makes the day-to-day tactical decisions that affect the strategic direction of the project and, ultimately, the business value. 

The report also highlights that 57 percent of organizations will be building up their in-house information technology and digital capabilities in 2017 versus 29 percent that will be outsourcing this function.  Either way, the IT/digital team needs to be considered a partner in developing solutions that drive business value and not just a tactical arm that develops and implements the solutions.

CIOs need to step up.  They should establish and lead the digital strategy for their organization, collaborating closely with the appropriate business unit managers and then communicating the goals to the IT team in order to deliver on the expected business value.  By defining metrics based on business value, the success of a project can be measured throughout the development lifecycle, stakeholders can be held accountable and projects can be adjusted along the way to realign them with their goals and objectives. 

If you are interested in help with your value delivery metrics, feel free to contact me.

Michael D. Harris

CEO

 

Written by Michael D. Harris at 12:20

An Introduction to Functional Size and Function Points: Part 1

Function points are a measure of the functional size provided to the user by an application. The user is any entity (either a human or another application), outside of the application being measured, that considers the function to be important. Size is determined by applying one of several rule sets, including those outlined by the International Function Point Users Group (IFPUG) or the Common Software Measurement International Consortium (COSMIC). It is important to note that these systems do not measure size in terms of the number of lines of code, but instead based on what the user can ultimately see and use.

Sizing software helps to establish an appropriate timeline for completing the work, as well as to appropriately allocate resources, including both team members and budget. But how does this relate to the testing process? The adage, “What you can’t measure, you can’t manage,” applies here. 

Because testing is largely preventative, it can be difficult to determine its importance or how much effort should be applied to it. Coding, by contrast, reveals its value more readily, as it provides clear-cut value by fulfilling requirements. This difference in visibility can cause testing to be less emphasized than other steps, when it should ideally be emphasized throughout the development process. What is needed is a method of making the importance and effectiveness of testing more visible and measurable. Function points can help with this, providing measures that can be applied in test planning and in measuring the effectiveness of testing.

First Step: Estimating Testing Requirements

One of the measures that is of particular use in testing is test case coverage [i], which is defined as:

Number of test cases / Total number of function points

This provides a measure of how thorough the testing process is for a given project. If the test case coverage from a similar project that was determined to have been successful (perhaps using a measure like defect density, described in the next section) is available, this can be used to give an estimate of software test cases needed on a new project. Once the number of function points is known, simply use the number of test cases that gives similar coverage. Since function points can be measured based on requirements, rather than waiting until coding, a good idea of testing needs becomes apparent early in the process. This can be especially helpful when using defect prevention techniques (done throughout development), rather than defect removal (traditionally done at the end of development).

Initially, a software development organization may not have the metrics necessary to determine exactly how much testing is necessary for a given project. This does not have to be an impediment to using software measurement to get an initial estimate. There are a number of sites that publish benchmarking statistics, such as the International Software Benchmarking Standards Group (www.isbsg.org), which can be used in creating initial plans. If industry data is not available for a similar project, then it is still possible for experienced testers to give an estimate based on the expected size of the final product. Once the project is finished, continued measurement can be used to help in test planning for future projects, as well as to determine if the project met expectations.

Measuring Quality

Another good measure used in testing, and an indicator of both testing effectiveness and software quality, is defect density [i], defined as:

Total number of defects / Total function points

Defect density can also be compared to industry data or previously completed projects to determine whether the software meets standards. It does so in a neutral fashion, unlike some measures, such as cost per defect, which can penalize efforts at software quality [ii]. Ultimately, testing should be part of the process of improving quality, and measuring its effect on defect density is one way to track changes in quality.
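A minimal sketch of this comparison, with an illustrative benchmark value (the 0.08 threshold is made up for the example, not an industry figure):

```python
def defect_density(total_defects, function_points):
    """Defect density: defects per function point."""
    return total_defects / function_points

def meets_benchmark(total_defects, function_points, benchmark_density):
    """Compare a project's defect density to an industry or in-house
    benchmark; a lower density is better."""
    return defect_density(total_defects, function_points) <= benchmark_density

# 30 defects over 600 FP gives a density of 0.05, which is
# under the illustrative 0.08 benchmark.
print(meets_benchmark(30, 600, 0.08))  # True
```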

Maintenance and Process Improvement

Once the software development project is finished, there is still the process of maintaining it, and the software may require re-engineering at some point. The choice to re-engineer, like most business decisions, should be made at a point when it is likely to be most cost effective. Function points can be used in this decision as well. The amount of maintenance an application requires per function point can give an idea of when re-engineering should take place: the more effort spent on an application that provides a given amount of benefit (measured in function points), the more likely that it needs to be re-worked [iii]. The threshold should be based on company or industry standards, as appropriate for the project.
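This re-engineering signal can be sketched as a simple ratio check (the threshold of 4 hours per function point per year is a placeholder for whatever company or industry standard applies):

```python
def maintenance_effort_per_fp(annual_maintenance_hours, function_points):
    """Annual maintenance effort normalized by functional size."""
    return annual_maintenance_hours / function_points

def consider_reengineering(annual_maintenance_hours, function_points,
                           threshold_hours_per_fp):
    """Flag an application for re-engineering review when its maintenance
    effort per function point exceeds the chosen threshold."""
    ratio = maintenance_effort_per_fp(annual_maintenance_hours, function_points)
    return ratio > threshold_hours_per_fp

# 3,000 hours per year on a 500 FP application is 6 hours per FP;
# against an illustrative threshold of 4, this application is a candidate.
print(consider_reengineering(3000, 500, 4))  # True
```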

Measures are also necessary for the continuous improvement of processes. While the data may not be available at the beginning of the measurement process, perhaps because industry data including a similar project does not exist, as data is collected and analyzed it can be used to improve future projects. This data collection is a requirement of certain testing models, such as the Testing Maturity Model integration (TMMi) [iv]. Level Four of the TMMi model, Measured, requires test measurement, so function points and the analysis that can be based on them are one way to fulfill that need.

Conclusion

Software testing can be a challenge to track in a rigorous fashion. It does not itself create easily seen benefits, but instead usually prevents issues down the line. By combining function point sizings and other information, the very real benefits of testing can be more easily seen, analyzed and compared with other projects. This makes the sizing potentially valuable, allowing testing to be planned earlier in the process, giving a measure of effectiveness and allowing maintenance planning. By making the test process measurable, function points can facilitate process improvement, such as in the TMMi model. Perhaps most importantly, function points can be used to demonstrate the importance of testing to non-testers, which often includes managers. That, by itself, may be a sufficient reason to use them.

Further Reading

 

[i] Function Point Analysis, David Garmus and David Herron, pages 37-39

[ii] Function Points as a Universal Software Metric, Capers Jones

[iii] Using Function Points, http://www.softwaremetrics.com/Articles/using.htm

[iv] Test Maturity Model integration, Erik van Veenendaal and Brian Wells

 


Software Value: Impact on Software Process Improvement

Business value has not always been the primary driver of software process improvement, but that is changing.  This is the main point of an excellent article by Richard Turner in the March/April edition of CrossTalk, “The impact of Agile and Lean on Process Improvement.”

Turner’s article is a concise and refreshingly frank walk through the history of software process improvement from the perspective of an expert who has been intimately involved.  With a hint of frustration that I certainly share, Turner captures perfectly the thinking that has led to a move away from process improvement initiatives like CMMi in commercial software development organizations:

“One of the drawbacks of earlier process improvement approaches was the concept and distribution of value. The overall value of the process improvement was often situational at best and nebulous at worst.  Where it was seen as a necessity for competitive credibility [as was the case for my development group at Sanchez Computer Associates back in 2001], the value was in passing the audit rather than in any value to the organization and the customer.  In other cases, the value was essentially associated with the success of one or two champions and disappeared if they failed, changed positions or left the company [as I did].  On those occasions where PI was primarily instituted for the actual improvement of the organization, the internal focus on practices was often valued as a way of cutting costs, standardizing work [We certainly needed to make our processes repeatable] or deploying better predictive management capabilities rather than improving the product or raising customer satisfaction.”

While I agree with 95 percent of Turner’s analysis here, in my experience both passing the audit and standardizing our processes raised customer satisfaction.  We went from having one customer willing to give us a reference to most of our customers being referenceable, on the basis of solid evidence that we had fixed the reliability of our software development.

Turner contrasts historic process improvement initiatives, mostly targeted at waterfall operations, where business value was not a prime driver, with today’s initiatives, where, “With the emergence of Agile and Lean, the concept of value became more aligned with outcomes.  The focus on value stream and value-based decision making and scheduling brought additional considerations to what were considered best practices.”

Turner recognizes that in today’s Agile and Lean software development teams, the teams themselves are responsible for their own processes.  Mostly, this is a strength because creative people are likely to optimize processes under their control out of simple self-interest (which benefits the organization).  Where this falls down in my experience is where, “These organizations rely on cross-fertilization of personnel across multiple projects to improve the organization as a whole.”  To put it bluntly, this rarely happens.  Teams can be self-organizing but groups of teams don’t typically self-organize.  Hence, there is still a place for organizational process improvement – with a lean, software value driven emphasis – in the most modern software development organization.  By way of evidence, scrum teams that are working together on the same program struggle to develop ways to coordinate and synchronize their efforts unless a framework such as SAFe is introduced through a process improvement initiative. 

That said, I will leave the last word to Turner, “Process improvement that does not improve the ability to adapt has little value.”

 

Michael D. Harris, CEO

Written by Michael D. Harris at 13:36

Can Function Points Be Used to Estimate Code Complexity?

Software code can get really complex, and we all agree that complex code is harder to maintain and enhance.  This leads to lower business value for the application, as the inevitable changes required for the application to keep pace with a changing environment cost more and more to implement for a given functional size.

Consequently, I was a little surprised to see the title, “Cozying up to Complexity,” at the head of a book review by Robert W. Lucky in the January 2017 edition of IEEE Spectrum.  Lucky reviewed Samuel Arbesman’s new book, “Overcomplicated.”  Lucky identifies Arbesman’s three key drivers of increasing technological complexity: “accretion,” “interconnection” and “edge cases.”  Accretion is the result of the continual addition of functionality on top of, and connecting, existing systems.  Connecting ever more systems leads to interconnection complexity.  Edge cases are the rare but possible use cases or failure modes that must be accounted for in the initial design or incorporated when they are discovered.  Over time, these edge cases add a lot of complexity that is not apparent from the majority uses of the system.  Increased software complexity can be a problem for outsourced software development because more complex code is more difficult to maintain and enhance, and it becomes a problem for software vendor management as costs go up due to reduced vendor productivity.

There are measurements and metrics for software complexity, but Lucky reports that Arbesman’s suggested solutions include the novel idea that we should not take a physicist’s mathematical view and try to build a model.  Instead, we should take a biologist’s view: record the complexity we find (as in nature) and look for patterns that might repeat elsewhere.  Arbesman does not necessarily see increased complexity as a bad thing.

If we accept that some level of complexity is a good and necessary thing to achieve the “magic” of current and future software capabilities, I wonder if there is a way to identify the point of maximum useful complexity.  Perhaps “useful complexity” could be measured in function points per line of code?  Too much complexity would be indicated by a high “useful complexity” value – trying to shoehorn too much functionality into too few lines of code.  At the other end of the spectrum – what Arbesman might refer to as his edge cases – a low value would indicate too little functionality being delivered by too many lines of code.

My train of thought was as follows:

  • A program with zero functionality (and zero function points) may have complexity but I’m going to exclude it.
  • A program with 1 function point must have some lines of code and some small complexity.
  • For a program with a reasonable number of function points, I (as a former ‘C’ programmer) could make the program more complex by reducing the number of lines of code.
  • Adding lines of code could make the program less complex and easier to maintain or enhance by spreading out the functionality (and adding explanatory comments, although these don’t usually count as lines of code) up to a certain point, after which diminishing returns would apply.  The question is where that point is.
  • There must also be a certain complexity inherent in coding a given number of function points.  This implies a lower limit on the complexity for a fixed number of function points.
  • This suggests that, for a given number of function points, there might be a non-linear inverse relationship between complexity and lines of code.
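The hypothetical “useful complexity” ratio described above can be sketched as a toy computation (the band thresholds here are purely illustrative guesses, not calibrated values):

```python
def useful_complexity(function_points, lines_of_code):
    """Hypothetical 'useful complexity': function points per line of code."""
    if function_points <= 0:
        # Exclude programs with zero functionality, per the first bullet.
        raise ValueError("program must deliver at least some functionality")
    return function_points / lines_of_code

def assess(function_points, lines_of_code, low=0.001, high=0.05):
    """Illustrative bands: too high a ratio suggests functionality
    shoehorned into too few lines; too low a ratio suggests too little
    functionality delivered by too many lines."""
    ratio = useful_complexity(function_points, lines_of_code)
    if ratio > high:
        return "too dense"
    if ratio < low:
        return "too diluted"
    return "reasonable"

# 100 FP delivered in 10,000 lines gives a ratio of 0.01,
# which falls inside the illustrative band.
print(assess(100, 10_000))  # reasonable
```

Finding the actual inflection point would require plotting this ratio against maintenance cost across many real projects, which is exactly the kind of data question posed below.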

I’d welcome people’s ideas on this topic.  Thoughts?

Written by Michael D. Harris at 10:03

Using Software Value to Drive Organizational Transformation

I was delighted to read a thought leadership article from McKinsey recently, “How to start building your next-generation operating model,” that emphasizes some key themes that I have been pushing for years (the quotes below are from the article):

  • The importance of orienting the organization around value streams to maximize the flow of business value – “One credit-card company, for example, shifted its operating model in IT from alignment around systems to alignment with value streams within the business.”
  • Perfection is the enemy of good enough – “Successful companies prioritize speed and execution over perfection.”
  • Continuous improvement relies on metrics to identify which incremental, experimental improvements work and which don’t.  Benchmarking and trend analysis help to prioritize areas where process improvement can offer the most business value – “Performance management is becoming much more real time, with metrics and goals used daily and weekly to guide decision making.”
  • Senior leaders “hold themselves accountable for delivering on value quickly, and establish transparency and rigor in their operations.”
  • “Leading technology teams collaborate with business leaders to assess which systems need to move faster.”


There is one “building block” for transformation in the article to which I am a recent convert, so kudos to the McKinsey team for including it in this context.  Their “Building Block #2” is “Flexible and modular architecture, infrastructure and software delivery.”  We are all familiar with the flexible infrastructure that the cloud provides, but I have been learning a lot recently about the flexible, modular architecture and software delivery for application development and application integration that is provided by microservices frameworks such as the Anypoint Platform™ from MuleSoft.

While they promote organizing IT around business value streams, the McKinsey authors identify a risk to be mitigated: value streams will start to build up software, tools and skills specific to each value stream.  This runs contrary to the tendency in many organizations to make life easier for IT by picking a standard set of software, tools and skills across the whole organization.  I agree that it would be a shame indeed if the agile and lean principles that started life in IT software development were constrained by legacy IT attitudes as those principles roll out into the broader organization.

There are a lot more positive ideas for organizational transformation in the article, so I recommend that you take a few minutes to read it.  My only small gripe is that while the authors emphasize organizing around value throughout, they do not mention prioritizing by business value.  Maybe at the high level at which McKinsey operates in organizations, that concept is taken for granted.  My experience is that as soon as you move away from the top level, if business value priorities are not explicit, then managers and teams will use various other criteria for prioritization and the overall results may be compromised. 

Written by Michael D. Harris at 14:16

