The “Chaos Report” Myth Busters

In a previous blog post titled Let’s say “No” to groupthink and stop quoting the Chaos Report, I wrote:

“We need to be able to examine the underlying data and measurement methods used as the basis for any report or study on IT project failures. Without examining the data, to continue quoting such reports is simply engaging in groupthink”

While we will never be able to examine the actual data on which the Chaos Report is based, we now have research that refutes its findings. In summary, this research found the Chaos Report to be misleading and one-sided: it perverts estimation practice and produces meaningless figures.

Laurenz Eveleens and Chris Verhoef, of Vrije Universiteit Amsterdam, recently published the research in the article “The Rise and Fall of the Chaos Report Figures” in the January/February 2010 issue of IEEE Software magazine.

I had the opportunity recently to interview Mr. Verhoef about this research. Here is the full text of the interview:

What was the motivation for doing this research?

This particular research paper is part of a larger project called EQUITY, which is short for Exploring Quantifiable IT Yields.  Let me tell you a bit more about that project.

The invisible motor of our western economy is software, an emerging production factor comparable to natural resources, labor, and capital. Current paradigms indicate that software is just a cost center, and these costs must be lower. This is like saying that from less iron ore, more steel must be produced. The EQUITY project intends to explore potential connections between value creation and information technology, to enable competition with software in a calculated manner.

The bottom line is that we wish to trace the actual impact of IT on the value creation or destruction, e.g., in the form of stock value, also known as the equity of a firm.  It is our ambition to develop a quantitative approach that is both accurate and usable within software-intensive organizations to facilitate rational decision-making about software investments.  Achieving this would be a breakthrough, since no one has successfully explored the territory of information technology yields by purely quantitative means before.

Within the EQUITY project we work on developing the competencies to understand the possible connections between investing in software and the ensuing value creation or destruction via quantitative methods. Using such methods enables the development of predictive models so that competing with software becomes feasible through maximizing value creation and minimizing value destruction.

In the EQUITY project we work with six people: four Ph.D. students and a former top executive. Let me introduce them briefly:

  • Erald Kulk just received his Ph.D. and worked on requirements creep.  With real-world data he figured out when volatile requirements are healthy and when they start to become dangerous. Without requirements change you get the system you asked for, and with some healthy modifications you get the system that you meant.  But when you do not know what you want, creep turns into a failure factor.  We came up with (complex) mathematical methods that warn you at an early stage that you have reached the danger zone of failure. Dr. Kulk also worked on predicting IT project risks like budget overrun and how you can quantify this risk in terms of easily measured aspects of IT projects. Erald Kulk was recruited by our national government where he assists our federal CIO, Mr. Hillenaar, with the installation of nationwide IT portfolio management to improve the IT performance of the Dutch government.
  • Peter Kampstra is another Ph.D. student working on the EQUITY project.  He is a very talented young man with a great intuition for mathematics and statistics. You could call him Mr. Beanplot, since he invented a new statistical tool he dubbed a beanplot.  We used his intuitive statistical visualization technique (see paper and spreadsheet) to benchmark the risk of failure of large Dutch governmental projects against 6,000 IT projects in the private sector.  He also works on the reliability of function point counts.  When investing in custom IT systems, it is important to know “how much” IT you are going to make. The function point measure is one of the possible candidates. We investigated many tens of thousands of function point totals from many projects. It turned out that the function point totals were a good measure on which to base predictions. The totals gave plausible numbers and were accurate.  Peter is still working on the EQUITY project.
  • Then we have Lukasz Kwiatkowski.  While Erald and Peter work with management data, Lukasz also works with source code.  The idea is that IT decision-making is ruled by existing applications, whether you like it or not. We call that the bit-to-board approach. We extract bit-level data from large source portfolios and aggregate that up to the executive level. No information gets lost by management filters.  A good example is operational cost. This is often a significant factor but what can you do about it? The answer is to dive into the source code and look for the low-hanging fruit. Lukasz worked on a nice example where he waded through a source portfolio of 20 million lines of code (250 apps) of a large multinational company, seeking to reduce MIPS.  We identified just a very small part of the giant portfolio as code that could be optimized, with the potential to decrease MIPS usage, and thus operational cost, by 5-10%.
  • Laurenz Eveleens is working on quantifying the quality of IT forecasts. By now you have seen that an important aspect of IT decision-making is that executives use only prior experience and forecasts as bases for their decisions. Obviously, you have to know the quality of those forecasts.  But it turns out that not many researchers work on that. Again, with large amounts of data from various industrial parties, we worked on methods to assess forecasting quality. Also, complex math is involved, and we went to great lengths to get it all right.  Laurenz was recruited by PricewaterhouseCoopers where he works in the Software Assessment and Control group. One day a week he works to finalize his Ph.D. thesis.
  • Finally Dr. Rob Peters is also working on the EQUITY project.  Rob is a veteran academic and has worked for many years at a university. He has a Ph.D. in econometrics. He worked for many years at ING Group, a large financial service provider based in the Netherlands. He initiated quantitative thinking at ING and that is where we met years ago when I was invited by ING to work with them on IT portfolio management. Rob and I are working with the Ph.D. students and the industrial parties on the important themes of the EQUITY project. We also collaborate on IT portfolio management.  For instance, we recently proposed a method to quantify the yield of risk-bearing IT portfolios.

You can imagine that this type of research is only possible with substantial amounts of code and data. We have access to this type of data because of our decades-long connections with many industrial parties, and the added value our research brings to them. Of course this data is not meant for publication or sharing with others; it is crucial data that the competition is not allowed to have.

Of course that is a problem within our field; data is scarce and almost never publicly available.

The Chaos Report data and methods of measurement are not available for verification. You say in your report that:

Nicholas Zvegintsov has placed low reliability on information where researchers keep the actual data and data sources hidden. He argued that because the Standish Group hasn’t explained, for instance, how it chose the organizations it surveyed, what survey questions it asked, or how many good responses it received, there’s little to believe.

Yes we fully agree.  Now the problem is that you often cannot publish actual data. Instead we publish statistical aggregates of the data. That is not as good as the data itself but it is a start.

Isn’t it expected that research studies, especially those with enormous impact, such as the Chaos Report, disclose their data and analysis methods to the research community for verification and validation?

This question has been asked more than once of Standish but they would not disclose their data.

Why do you think the Chaos Report is so widely quoted without any basis to validate its findings?

I think because the numbers are astounding; at least that is why I quoted these reports. In 1994 they came up with a 16% success rate. In retrospect I can predict that kind of percentage with a small Gedanken experiment.  Suppose we are to predict cost, time, and the amount of functionality. Success means we are below the cost and time predictions and above the predicted amount of functionality. Now assume we have a 50% chance of getting each number right (so this is random!). If the three numbers are not correlated, their combined chance is 0.5 × 0.5 × 0.5 = 12.5%. So the 16% success rate is in fact high. Now the snag is that not many people quoting this report really read these definitions out loud and absorbed their true meaning.
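
To make the Gedanken experiment concrete, here is a minimal sketch (my own illustration, not from the paper) that simulates three independent 50/50 forecasts and counts how often all three land on the “success” side:

```python
import random

# Minimal sketch of the Gedanken experiment above: each of the three forecasts
# (cost, time, functionality) has an independent 50% chance of landing on the
# "good" side, so the expected share of "successful" projects is 0.5 ** 3 = 12.5%.
random.seed(42)
trials = 100_000
successes = sum(
    all(random.random() < 0.5 for _ in range(3))  # cost, time, functionality all come out "right"
    for _ in range(trials)
)
print(f"analytic: {0.5 ** 3:.1%}, simulated: {successes / trials:.1%}")
```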

 

Others have previously challenged the Chaos Report findings. In your report you have cited Nicholas Zvegintsov, Robert Glass, and Magne Jørgensen. How is your approach to challenging the Chaos Report different from previous ones?

Laurenz Eveleens and I were working on assessing the quality of IT forecasts using large amounts of data from various sources. The Standish Group definitions are about some form of forecasting quality, and not about what constitutes success in general terms. We carried out the exact same calculations as Standish reported on in their Chaos Chronicles. It turned out that these results were not at all in accordance with reality. Therefore the research is not reproducible. In medical science this is a normal procedure: when someone publishes a result other groups reproduce it.

Zvegintsov’s argument was about the Standish Group’s practice of non-disclosure.  Glass argued that if so many projects fail, how can we claim to live in an information age? Jørgensen’s argument was twofold: the definitions did not cover all cases, and other research findings were wildly different. In fact other research in this area suffers from the same problem as the Standish figures. Also, that research does not take institutional bias into account, which leads to meaningless rates.  So for us it is no surprise that Jørgensen found these large discrepancies.

Our argument is fundamentally different; we have actual data, we know the quality of it, and we apply it to their definitions. The outcomes simply do not at all coincide with reality.

 

You applied the Standish definitions to extensive data when you collected 5,457 forecasts of 1,211 real-world projects totaling hundreds of millions of Euros. What is the process you went through to get this data and how long did the research take from start to finish?

It takes decades to build industrial relations so that important and confidential data comes your way.  Once relations are firm and added value is returned, plenty of data becomes available.

 

How did you make sure that your research uses the same underlying assumptions or measurements as those used in the Chaos Report?

If you read the public versions of their reports closely this information is there.

 

Since you released your findings, what has been the reaction from other researchers and the media?

In 2009 we published a mathematically dense and substantial paper, Quantifying IT Forecast Quality. This paper contained the findings that we separately published in early 2010 in IEEE Software.  On the Internet the IEEE Software paper is now attracting attention. There is a lot of discussion going on about the Standish reports. Our findings seem to be trickling into those discussions.

 

Scientific articles and media reports widely cite the Chaos Report. The report found its way to the President of the United States to support the claim that processes and U.S. software products are inadequate. What impact do the findings of the Chaos Report have on software projects and project management in general?

If quoting and citation are a measure of impact, then the impact in general is still substantial.

 

What impact do you hope your report findings will make?

We hope that others will also make an effort to assess the forecasting quality of their own data so that fact-based decision-making in our field becomes the norm.

 

The Chaos Report defines a project as successful based on how well it did with respect to its original estimates of cost, time, and functionality. Can you give us a brief summary of the definitions used by the Chaos Report for successful, challenged, and failed projects?

Laurenz and I translated their definitions into more mathematical terms, but they are equivalent (a small illustration follows the list):

  • Resolution Type 1, or project success. The project is completed, the forecast-to-actual ratios (f/a) of cost and time are ≥1, and the f/a ratio of the amount of functionality is ≤1.
  • Resolution Type 2, or project challenged. The project is completed and operational, but f/a < 1 for cost and time and f/a > 1 for the amount of functionality.
  • Resolution Type 3, or project failed. The project is canceled at some point during the development cycle.
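
As a small illustration of how these f/a ratios translate into resolution types, here is a sketch with hypothetical numbers (my own, not data from the paper). For simplicity it treats every completed project that is not Type 1 as Type 2, glossing over the gaps in the literal definitions that Jørgensen pointed out:

```python
def standish_resolution(f_cost, a_cost, f_time, a_time, f_func, a_func, completed=True):
    """Classify a project by the forecast-to-actual (f/a) definitions above.

    Hypothetical sketch: any completed project that is not Type 1 is treated
    as Type 2 here, which papers over the gap in the literal definitions.
    """
    if not completed:
        return "Type 3: failed"                 # canceled during development
    on_budget  = f_cost / a_cost >= 1           # actual cost no higher than forecast
    on_time    = f_time / a_time >= 1           # actual duration no longer than forecast
    full_scope = f_func / a_func <= 1           # delivered at least the forecast functionality
    if on_budget and on_time and full_scope:
        return "Type 1: successful"
    return "Type 2: challenged"

# Hypothetical project: 10% under budget, 5% late, full scope delivered.
print(standish_resolution(f_cost=100, a_cost=90, f_time=12, a_time=12.6, f_func=500, a_func=500))
# The single time overrun keeps it out of Type 1.
```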

Let’s talk about the four findings from your research. Your first finding is that the definitions are misleading. Can you explain to us the basis for this conclusion?

They’re misleading because they’re based solely on estimation accuracy for cost, time, and functionality. But Standish labels projects as successful or challenged, suggesting much more than deviations from their original estimates.

So basically the definitions of successful and challenged projects are based on estimation deviation only. Readers of the report who associate words like “challenged” and “success” with something other than their definitions will interpret the figures differently.

Your second finding is that the report contains unrealistic rates. I know you go to great lengths in the report on how you arrived at this conclusion but can you give us a summary of your findings?

The Standish Group’s measures are one-sided because they neglect underruns for cost and time and overruns for the amount of functionality. We took a best-in-class forecasting organization and used projects for which we had cost and amount of functionality estimates. The quality of those forecasts was high; half the projects have a time-weighted average deviation of 11% for cost and 20% deviation for functionality. Combined, half the projects have an average time-weighted deviation of only 15% from both actuals. In IT this is known as best-in-class.
Yet, even though this organization’s cost and functionality forecasts are accurate, when we apply the Standish definitions to the initial forecasts, we find only a 35% success rate. This is unrealistic.
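
To see the one-sidedness in isolation, consider a toy comparison (my own numbers, cost dimension only for brevity): a forecast that misses badly on the “safe” side still counts toward success, while a near-perfect forecast that misses by a hair on the “wrong” side counts as challenged:

```python
# Toy comparison (my own numbers), cost dimension only for brevity.
projects = {
    "A: 30% over-forecast of cost": {"forecast": 130, "actual": 100},  # inaccurate, yet counts toward "success"
    "B: 1% under-forecast of cost": {"forecast": 100, "actual": 101},  # nearly perfect, yet counts as "challenged"
}
for name, p in projects.items():
    ratio = p["forecast"] / p["actual"]                  # f/a
    label = "success side" if ratio >= 1 else "challenged side"
    print(f"{name}: f/a = {ratio:.2f}, deviation = {abs(ratio - 1):.0%}, {label}")
```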

 

The 3rd finding is that basing estimates on the Chaos definitions leads to perverting accuracy. You say:

The organization adopted the Standish definitions to establish when projects were successful. This caused project managers to overstate budget requests to increase the safety margin for success. However, this practice perverted forecast quality.

What led you to this conclusion?

If you optimize on a high Standish success rate, the strategy is to not exceed the duration and budget that was initially stated and to not deliver less functionality than initially promised.  In practice, what you do is ask for a lot of time and money and promise nothing. This is exactly what we found in one company. Indeed, this company had high Standish ratings, but 50% of the projects had a time-weighted average deviation of 233% or more from the actual. Hence, these definitions hinder rather than help the improvement of estimation practice.
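
As a back-of-the-envelope illustration (my own numbers) of how padding the forecast games the definition:

```python
# Back-of-the-envelope illustration (my own numbers): pad the budget request
# heavily and the project is a Standish "success" on the cost dimension,
# even though the forecast itself is wildly wrong.
forecast_cost, actual_cost = 1_000_000, 300_000   # ask for far more than the project will need
ratio = forecast_cost / actual_cost               # f/a = 3.33 >= 1, so it counts toward "success"
print(f"f/a = {ratio:.2f}; forecast deviation = {ratio - 1:.0%}")   # deviation = 233%
```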

The 4th and final conclusion is that the Chaos Report provides meaningless figures. You say:

Comparing all case studies together, we show that without taking forecasting biases into account, it is almost impossible to make any general statement about estimation accuracy across institutional boundaries.

Can you give an overview of some of the work you did to arrive at this conclusion?

We found institutional biases in forecasting. For instance, we found a salami tactic: systematically underestimating the actual.  We also found sandbagging: systematically overestimating. When you average numbers with an unknown bias, the average does not mean anything. And that is what Standish did.
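
A toy example (my own numbers, not from the paper) of why averaging across unknown biases is meaningless:

```python
# Toy example (my own numbers): two organizations with opposite, unknown biases.
salami_org  = [0.70, 0.75, 0.72, 0.68]   # systematically under-forecasts (f/a < 1)
sandbag_org = [1.40, 1.35, 1.45, 1.30]   # systematically over-forecasts (f/a > 1)
pooled = salami_org + sandbag_org
print(f"pooled mean f/a = {sum(pooled) / len(pooled):.2f}")
# ~1.04: the pooled average looks nearly unbiased, yet it describes neither
# organization, so any rate computed from the mix is meaningless.
```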

 

What was your reaction to the Standish Group’s response to your findings that:

All data and information in the Chaos reports and all Standish reports should be considered Standish opinion and the reader bears all risk in the use of this opinion.

Laurenz and I fully support this disclaimer, which to our knowledge was never stated in the Chaos reports.

What is your advice to those who continue to use the Chaos Report project failure statistics, without really understanding the basis of its conclusions?

Read the IEEE Software paper, and if you want all the gritty details read the full paper with all the math included.

So what is next for Mr. Verhoef?

Helping IT governors to make IT decision making more fact-based and transparent.

 

How can our readers contact you and find out more about your research?

There’s plenty of information on the Web and one can reach me via email:
Email: x@cs.vu.nl

Website: http://www.cs.vu.nl/~x

If you like this interview, you will also like: Advanced Project Thinking – A conversation with Dr. Harvey Maylor

 

 

38 Responses to The “Chaos Report” Myth Busters
  1. Shim Marom
    March 24, 2010 | 7:27 am

    Excellent article, Samad. Most (apart from a few) Project Management blogs have neglected to deal with this issue in a deep, constructive and meaningful way. I am still amazed when I read posts that quote or mention a low project success rate, without understanding the basic inconsistencies and methodological issues associated with the numbers quoted. Sensationalism and mediocrity take precedence over serious research and inquisitive discussion. Good on you for joining the rational thinkers who are not afraid to challenge some of our profession’s most prevalent urban myths.

    Cheers, Shim Marom
    http://www.quantmleap.com

    • samad_aidane
      March 24, 2010 | 11:19 am

      Shim,

      Thank you so much for taking the time to read and comment.

      I appreciate the work that Chris and his team have done and I wanted to make sure that I do everything I can so others can read it and benefit from it. I think it is valuable work and it needs to be read by all those who are interested in IT and software project failure.

      On a personal note, I appreciate this research because I am passionate about the topic of IT Failure.

      I have spent my entire professional life (the last 15 years) in IT departments and IT consulting companies. It has been painful for me, since the first Chaos Report came out, to experience firsthand the negative perception of IT in the business community. The perception is that we in IT are unable to increase the success rate of projects, despite all the progress and great work that has been done over the last 15 years.

      The negative impact of the Chaos Report findings on the perception of IT by the business community is real. It undermines trust in IT project management and the self-confidence of IT project managers.

      I never believed the Chaos Report because I knew that no organization I have worked with could tolerate a 30% success rate for its projects. From my own personal experience, I knew that this number was incorrect. Only through research such as this, from Chris and his team, can we begin to correct this perception with real data.

      By the way, working with Chris on this interview was a wonderful experience for me and I am grateful for his time, efforts, and patience.

  2. Derek Huether
    March 25, 2010 | 11:33 pm

    Wow, I’m very impressed with this read. I didn’t think it was the easiest thing to get through. Then again, not all reads should be easy. You can’t dumb down information like this and expect to communicate the same message. I had to read it twice!

    Without examining data objectively, you get nothing more than subjective conjecture. I just kept asking myself, 30% success rate? I thought only the weatherman could have a 30% success rate and keep his job. If we, as project managers, were only successfully delivering 30% of the time, we’d be out on our rears.

    Regards,
    Derek
    http://twitter.com/derekhuether

    • samad_aidane
      March 26, 2010 | 3:13 am

      Derek,

      Thank you so much my friend for reading and commenting,

      You are so right. You really can’t dumb down this type of information and expect to deliver the same points. A lot of work went into the research that Chris and his team did and I am just grateful to Chris for taking the time, from his busy schedule, to answer my questions and make this information available to us.

      I have felt the same way you did about the 30% success rate that is frequently quoted. Like I mentioned in my follow-up to Shim’s comments, I have never worked at an organization (or heard of one) that would tolerate this low success rate. If the low success rate were true, I would have left IT a long time ago.

      I think studies such as the Chaos Report are very powerful as they shape the perception that the business stakeholders develop of IT Projects and IT project managers. The perception is often negative and can range from skepticism to outright hostility. In some organizations, it takes a lot of hard work and many years of solid track record before the perception is corrected. My hope, from getting the word out about this research, is to equip project managers with information they can use to educate their stakeholders about IT Failure myths. Ultimately, I want IT project managers to expect that they will succeed, to feel confident that failure is not the norm, and to believe that they have the capacity to deliver successful projects. Dammit!!! we deserve success!!! 🙂

      Cheers my friend.

  3. Steve Romero, IT Governance Evangelist, PMP
    March 26, 2010 | 12:59 am

    I have to begin my comments by stating (confessing?) I “widely quote” the Chaos Report – and not because the numbers are astounding. I quote the report because it showed project failure rates that I believe are even higher than the Standish Group concludes. I agree the study is flawed and misleading about project failure rates, but my assertion is not based on anything remotely resembling the incredibly comprehensive and detailed analysis of Vrije Universiteit. My chief complaint is in regard to the Standish use of a 3-type characterization of project results. I submit that projects falling in the Standish “challenged” category are actually failures, and that the projects deemed “failures” because they were killed before completion are not necessarily failures at all.

    I don’t lament the non-availability of Standish data simply because I have become accustomed to their practice of not sharing it. This terrible research practice matters little because I use their flawed results to convince organizations to aggressively address project failure rates.

    Every study I have seen in the past two decades has shown at least half of all IT projects fail. And yes, Enterprises have managed to take us into the information age despite these high failure rates. This is simply explained when project failure is defined as a project that does not meet its intended and stated commitments. Using this definition, a project failure does not necessarily mean the effort should never have been sanctioned. It simply means the mechanisms used to make project decisions (from ideation to completion) did not meet their stated objectives. These mechanisms constitute the project’s failings, even in those instances where the technology indeed brings us into the information age. Side note: I contend the majority of project failures are caused by poor Project and Portfolio Management (PPM) practices, as opposed to poor Project Management practices.

    After almost 30 years working in almost every area of IT, I became an IT Governance Evangelist. I have been traveling the world touting IT Governance and its essential processes and mechanisms for over 3 years now. I have spoken to thousands of people in over 200 forums in which I have presented (100 of these to individual companies). I rarely encounter Enterprises with established specific definitions of project success and failure and the associated ability to make decisions based on applying those definitions. Most people I meet don’t even have the word “failure” in their corporate vernacular (given its incredibly negative connotation and the pervasive human aversion to the word).

    Vrije Universiteit does an incredible job noting the many variables and moving parts associated with understanding and determining project success and failure. I just don’t believe I could use their impressive research and analysis to influence the necessary changes to improve project success in Enterprises today. I would bet most of my audiences would be asleep before I could get halfway through just explaining your blog post, let alone taking a deep dive into the actual research. I use the Standish Report and its suspect failure rates because it is an effective means to motivate organizations to take action. I use the studies from the Standish Group, Gartner, Forrester and MIT CISR (my favorite) to urge folks to define project success and failure in their Enterprises and to use those definitions to understand and subsequently improve their success rates.

    Your great post and the incredible work done by Vrije Universiteit will now put an end to my practice of citing the Standish Chaos report, even if it was under the veil of good intentions. Better still, I will still cite their results, explain my misgivings regarding their approach, and then cite your blog post and the findings of Vrije Universiteit. Frankly, I care little about the nuances and resulting disparities between one research approach and another. What I do care about is inciting Enterprises to take action regarding their ability to understand project failure and increase project success.

    Let me close by noting the serendipitous nature of the timing of your post. I received a call from Standish Group Customer Service today asking about my use of the Standish website. I told them I only used it to obtain summaries of their research (because it is free) to quote in my presentations. I said I had little insight or interest in their paid services. I agreed to schedule some time with them next week, to give them the opportunity to talk to me about their services. After reading your post, I am sure it will be a very interesting conversation.

    Steve Romero, IT Governance Evangelist
    http://community.ca.com/blogs/theitgovernanceevangelist/

    • Shim Marom
      March 26, 2010 | 3:28 am

      Hi Steve, I’ve read through your comments a number of times in order to make sure I understand them correctly. The point I found so amazing is your admission (which I suspect represents a larger culprit audience) that you choose to use a study, which you agree is flawed and misleading, just to be able to make a point!!!

      Do I really need to elaborate on the ethical and professional issues associated with such an admission?

      The Chaos report is incorrect – full stop! To make use of such report just to advance an agenda, whatever the agenda is, is incorrect, to say the least.

      Shim Marom
      http://www.quantmleap.com

      • Steve Romero, IT Governance Evangelist, PMP
        March 26, 2010 | 10:09 am

        I completely understand your response Shim. I should have taken greater care in explaining my willing use of a “flawed and misleading” report.

        First, I cite numerous studies in my presentations. Upon citing the Standish Report, I tell audiences I believe the report is flawed and I explain the basis of my views. The reasons I even bother to take audiences through the exercise of citing flawed research are as follows:
        – Many people are aware of the Standish Group
        – The study shows a high degree of failure which ignites discussion
        – Their flawed (in my view) approach to project failure characterization underscores the need for enterprises to define project failure in terms that result in their acquiring the data essential to making good project investment decisions

        I came to much higher-level conclusions than those cited in this post. I did not have nearly the insights I have now, which is why I have to provide even more explanation when citing the Standish Chaos report.

        Steve Romero, IT Governance Evangelist

    • samad_aidane
      March 26, 2010 | 3:58 am

      “What I do care about is inciting Enterprises to take action regarding their ability to understand project failure and increase project success.”

      I love that and your passion shows in your wonderful posts on the IT Governance Evangelist blog.

      Steve, first of all, I want to thank you for your candid comments about your experience with the Chaos Report.

      It is very helpful to me to understand how others have used the Chaos Report figures. I can appreciate how its findings can be a powerful tool to motivate executives to establish proper governance structure in their organizations.

      This conversation we are having was exactly my hope from interviewing Chris and getting the word out about his team’s findings. It is through these conversations that we gain a better understanding of how we can help our organizations better understand failure and increase project success.

      It is funny that I got introduced to your blog just the day before by a tweet from Rick Morris. I enjoyed your recent post: “IT Governance Stops the Hate between the Business and IT”.

      Thank you again Steve.

      • Shim Marom
        March 26, 2010 | 8:34 am

        Hi Samad,

        I seriously fail to understand how a “flawed and misleading” report (to quote Steve’s comment) can be used as the basis for “findings [that] can be a powerful tool to motivate executives to establish proper governance structure in their organizations” (to quote your comment).

        I don’t think the end justifies the means; the desire to convince organizations to adopt proper project management disciplines should not be used as a justification for using the wrong data. If that were the case, we could all write our own imaginary reports and present them to executives every time we wanted to convince them of one point or another. Clearly it is wrong, and as professionals we should shy away from using shoddy data to strengthen our arguments (irrespective of how valid our arguments are).

        • Steve Romero, IT Governance Evangelist, PMP
          March 26, 2010 | 10:18 am

          Agreed! How do you feel about using “shoddy data” if you fully disclose its shoddiness? (Which has been my practice, and again, I have a lot more explaining to do now that I have read this great post.)

          Steve

        • samad_aidane
          March 26, 2010 | 12:24 pm

          Shim,

          Totally agree with you. The end should not justify the means.

          My point, which I didn’t do a good job articulating in my previous comment, is that the report figures are astounding and shocking. I can see how, when used as a persuasion tool, their shock and awe impact can be very effective with executives. I am not saying that this is justified. I am just saying that as a persuasion tool, fear is much more effective than reward. And this is what I think makes the Chaos Report very widely quoted.

          With research, such as the one that Chris and his team produced, we can begin to show that the Chaos Report is not the right persuasion tool. We will then have the opportunity to think of more effective ways to help our organizations increase their awareness about how to reduce project failure and increase success.

          • Shim Marom
            March 27, 2010 | 5:48 am

            Not to belabor the point, but both you and Steve seem to suggest that we have too many project failures. I actually totally disagree. I have elaborated on this extensively in a comment to a post published by Patrick Richard (see http://thehardnosedpm.typepad.com/my-blog/2010/03/chaos-theory.html) so I will just refer to it briefly here.

            The current level of project failures represents an acceptable level based on the fact that:
            a. It is not possible to consistently achieve a 100% success rate (that’s almost a physical law); and
            b. Where a 100% success rate is required (for instance in space exploration missions), exorbitant amounts of money are required to make it happen. So, in principle, the current level (more or less) of success rate represents a socially acceptable level based on the opportunity cost involved with changing it either way (up or down).

            My conclusion: Let’s stop talking about a problem that doesn’t really exist.

            Cheers, Shim.

            • Steve Romero, IT Governance Evangelist, PMP
              March 27, 2010 | 12:26 pm

              Shim, I now have to completely disagree with you. There are far too many project failures – again, if the project failure is defined as: “the project does not meet its intended and stated commitments.”

              Project failure does not only occur when the solution doesn’t work. Yes, missing performance objectives is something we see with IT projects, but I have found these types of failures to be in the minority, and seldom does the technology fail outright.

              Project failure occurs when:
              – Organizations invest in the wrong projects (not those most essential to meeting Enterprise Strategy)
              – The speed of the project does not meet the needs of the business (projects are approved predicated on when value is expected to be realized)
              – The value of the project is not commensurate to the investment in the project (and I find most organizations are not even able to accurately determine investment value – and sometimes, the “actual” cost of the project, which includes indirect, business process change, and full-lifecycle costs)

              I highly suggest you start following @mkrigsman. Michael Krigsman writes a post entirely dedicated to project failures. For us project success advocates, his blog is sort of our “Project Failure Maypole.”

              My last comment is in regard to your assertion, “It is not possible to consistently achieve 100% success rate.” Here is where I completely agree with you. In fact, I want to always see a certain level of failure because it means we’re pushing the envelope and not playing it too safe. So I hope you weren’t making the assertion based on the assumption that I was advocating something that is not only unrealistic, but contrary to businesses accepting a healthy amount of risk in their investments.

              For those of us who find IT Project failure rates to be unbearable, I doubt there is anyone who has ever suggested the target is a 100% success rate. What I do want is for Enterprises (not just IT) to do what they say they are going to do, and deliver what they say they are going to deliver. And once again, I find few that are able when it comes to their IT projects.

              Steve Romero, IT Governance Evangelist

              • Shim Marom
                March 27, 2010 | 11:29 pm

                Hi Steve, this is now turning into a serious discussion that needs to have its own space. I will reply to your comments, in detail, in a separate blog post later on today.

                Cheers, Shim Marom
                http://www.quantmleap.com

      • Steve Romero, IT Governance Evangelist, PMP
        March 26, 2010 | 10:13 am

        It is nice to “meet” you as well Samad. I found you on Twitter and I will be “following” you more closely. I look forward to it.

        And thanks again for a great post.

        Steve

  4. Scott Ambler
    July 29, 2010 | 3:29 pm

    It’s nice to see your writings on this topic. I’ve been running industry surveys for several years now, see http://www.ambysoft.com/surveys/ , and have consistently gotten different (and more positive) results than what is published in the Chaos Report.

    • samad_aidane
      August 4, 2010 | 10:16 am

      Scott,

      Thank you so much for your comment. I found http://www.ambysoft.com/surveys to be a great resource when I was researching scaling agile. Your surveys were extremely helpful to me in understanding what specific agile practices people are using on their projects.

      I plan to use your site more and recommend it as a resource as I continue the blog post series I started on tailoring agile for large system integration projects.

      Thank you again Scott.

  5. Mike Clayton
    August 25, 2010 | 3:33 am

    Samad – a simple thank-you from me. I have long wondered if the Chaos Report would be worth buying and have been shy of taking the risk. Your blog does us all a valuable service by making the EQUITY team’s work available, and helping us assess that risk.
    Mike

    PS: Thanks too, to Glen Alleman at http://herdingcats.typepad.com/ for signposting me here.

    • samad_aidane
      August 25, 2010 | 4:02 am

      Mike,

      Thank you so much for your comment. I am glad you found this information helpful. I have been concerned for a long time about the negative impact the Chaos Report has on the perception of IT projects. Especially the perception of the business sponsors. I was thrilled when Mr. Chris Verhoef agreed to do the interview. I am also grateful to Mr. Glen Alleman for mentioning this post on his great blog: http://herdingcats.typepad.com.

      Thank you again Mike.

  6. Tannguy
    September 30, 2010 | 12:24 pm

    Very interesting comments. I used to quote the Standish survey, as did the PMI in their PMI Fact Book…
    I can understand that the definitions of “Project success” and “Project challenged” are not reliable. However, “Project cancelled” seems to be stronger.
    My concern is that the Standish report is the only one I know of on that topic. Maybe it is wrong, I agree.

    Where is reliable data??? As I said, the PMI was using this data a few years ago as a reference for project success rates. Is that still the case?
    regards
    Tannguy

    • samad_aidane
      September 30, 2010 | 9:59 pm

      Tannguy,

      Thank you for taking the time to comment.

      Unfortunately I am not aware of any other reliable report at this point. I don’t think PMI uses the figures from the Chaos Report anymore but I am not 100% sure. I would love to know if there are other reliable studies.

      Thank you again.

  7. Brian Finnerty
    October 1, 2010 | 2:05 pm

    Good interview and thanks for the added perspective on the authors of this paper. I’ve often questioned the validity of the project failure rates in the Chaos report, but this really lays out a compelling counter argument.

    See a continuation of this discussion with comments at http://blogs.innerworkings.com/fmckeagney/2010/10/01/meaningless-the-rise-fall-of-the-chaos-report-figures/.

    Have we heard any response from the Chaos report authors as yet?

    • samad_aidane
      October 17, 2010 | 12:21 am

      Brian,

      Thank you for the comments.

      So sorry for the late reply. I just saw your comments this afternoon.

      To answer your question, there has not been any response from Chaos Report authors.

      Thanks for the link to Fran’s article. Loved it and left a comment. It is great to see that the interview contributed to the conversation about the Chaos Report.

  8. Rohane
    October 28, 2011 | 8:20 am

    Thanks for this article
    The key question for me is “What is the alternative?”
    CHAOS has presented an easily packaged ready reference. Without it, how do normal folks get a sense of the stats?
    What can you say about it?

    • Shim Marom
      December 19, 2011 | 2:58 am

      Rohane,

      If you think about it for a minute you will see that your question doesn’t really make much sense at all.

      Just because the Chaos report is packaged in a nice and easily digestible way does not absolve it of the need to also be accurate. Are you suggesting that incorrect evidence is better than no evidence at all?

      Cheers, Shim

  9. Marc Schluper
    January 20, 2012 | 11:48 am

    Many years ago, in an undergraduate math class, I learned that when solving a problem, more than half the effort goes into thoroughly defining it. In software development, if we really understand the problem we need to solve, the solution almost pops up before our eyes. If we can’t get a clear idea about what the software should do, it’s fine to build prototypes as long as we know we are building prototypes. We get into trouble whenever we make ourselves and others believe that we are building a solution while pretending we know what’s needed. So when we are measuring the success of projects we should not mix projects that have a clear understanding of what needs to be built with those that don’t.
    My advice: If you don’t know, ask. Never pretend you know while you don’t. Don’t rely on people who pretend.

  10. tarek
    April 9, 2012 | 2:47 pm

    Hello dears,
    Can you please explain to me or give me anything about the Chaos Report, such as the definition or the benefit of the Chaos Report?
    Thank you.

  11. Hard Truths About Public Sector IT
    July 8, 2012 | 11:45 pm

    […] predictor applies whether or not you accept the 1995 or 2009 Standish Reports or not, or […]

  12. Peter Hawkins
    December 16, 2012 | 8:51 pm

    My son has an honours degree in maths and finance and a postgrad in economics, and has helped Ph.D. students with statistics, so I asked him how I should view these reports, which from my experience (nearly 40 years in IT) don’t make much sense. He summed it up: 40% of people don’t believe in statistics.

  13. […] is not unanimous; there are those who doubt its validity for all types of projects [see this and this link, for example] and there are those who claim that the success rate is above 50% [with a sample […]

  14. […] [4] S. Aidane, The “Chaos Report” Myth Busters, 26 March 2010, see here. […]

  15. […] the Chaos reports from the Standish Group, but these are highly questionable. I advise you to read the following interview, and then to toss your personal copy of those reports in the trash […]

  16. Amol
    September 30, 2013 | 2:10 am

    Thanks, all, for the great information. Many of us have used the Chaos Report as a reference somewhere in our consulting careers. Great to know the other side of the story, though it is a very dense read!

  17. […] If you like this interview, you might also like: The Chaos Report Myth-busters. […]

  18. […] There is also considerable criticism of the Chaos Report, partly because it clings to the old-fashioned Triple Constraints and ignores more modern insights such as the Six Triple Constraints (with the additions: Quality | Risk | Customer Satisfaction). There is also much criticism of the methods used to collect and interpret the underlying data. See, among others: The Rise and Fall of the Chaos Report Figures and: The “Chaos Report” Myth Busters. […]

  19. […] agrees with the Standish Group. While not an easy read, you will find Samad Aidane’s article, The Chaos Report Mythbusters a pretty thought provoking critique of the Chaos Report. Based on my experiences though, it is easy […]

  20. […] unanimous; there are those who doubt its validity for all types of projects [see this and this link, for example] and there are those who claim that the success rate is above 50% […]

The “Chaos Report” Myth Busters

Chris VerhoefIn a previous blog post titled, Let’s say “No” to groupthink and stop quoting the Chaos Report, I wrote that:

“We need to be able to examine the underlying data and measurement methods used as the basis for any report or study on IT project failures. Without examining the data, to continue quoting such reports is simply engaging in groupthink”

While we will never be able to examine the actual data on which the Chaos Report is based, we now have research that refutes its findings. In summary, this research found the Chaos Report to be misleading and one-sided.  It perverts the estimation practice and results in meaningless figures.

Laurenz Eveleens and Chris Verhoef, of Vrije Universiteit Amsterdam, recently published the research in the article “The Rise and Fall of the Chaos Report Figures” in the January/February 2010 issue of IEEE Software magazine.

I had the opportunity recently to interview Mr. Verhoef about this research. Here is the full text of the interview:

What was the motivation for doing this research?

This particular research paper is part of a larger project called EQUITY, which is short for Exploring Quantifiable IT Yields.  Let me tell you a bit more about that project.

The invisible motor of our western economy is software, an emerging production factor comparable to natural resources, labor, and capital. Current paradigms indicate that software is just a cost center, and these costs must be lower. This is like saying that from less iron ore, more steel must be produced. The EQUITY project intends to explore potential connections between value creation and information technology, to enable competition with software in a calculated manner.

The bottom line is that we wish to trace the actual impact of IT on the value creation or destruction, e.g., in the form of stock value, also known as the equity of a firm.  It is our ambition to develop a quantitative approach that is both accurate and usable within software-intensive organizations to facilitate rational decision-making about software investments.  Achieving this would be a break-through since no-one has successfully explored the territory of information technology yields before by purely quantitative means.

Within the EQUITY project we work on developing the competencies to understand the possible connections between investing in software and the ensuing value creation or destruction via quantitative methods. Using such methods enables the development of predictive models so that competing with software becomes feasible through maximizing value creation and minimizing value destruction.

In the EQUITY project we work with six people: four Ph.D. students and a former top executive. Let me introduce them briefly:

  • Erald Kulk just received his Ph.D. and worked on requirements creep.  With real-world data he figured out when volatile requirements are healthy and when they start to become dangerous. Without requirements change you get the system you asked for, and with some healthy modifications you get the system that you meant.  But when you do not know what you want, creep turns into a failure factor.  We came up with (complex) mathematical methods that warn you at an early stage that you have reached the danger zone of failure. Dr. Kulk also worked on predicting IT project risks like budget overrun and how you can quantify this risk in terms of easily measured aspects of IT projects. Erald Kulk was recruited by our national government where he assists our federal CIO, Mr. Hillenaar, with the installation of nationwide IT portfolio management to improve the IT performance by the Dutch government.
  • Peter Kampstra is another Ph.D. student working on the EQUITY project.  He is a very talented young man with a great intuition for mathematics and statistics. You could call him Mr. Beanplot, since he invented a new statistical tool he dubbed a beanplot.  We used his intuitive statistical visualization technique (see paper and spreadsheet) to benchmark the risk of failure of large Dutch governmental projects against 6,000 IT projects in the private sector.  He also works on the reliability of function points counts.  When investing in custom IT systems, it is important to know “how much” IT you are going to make. The function point measure is one of the possible candidates. We investigated many tens of thousands of function point totals from many projects. It turned out that the function point totals were a good measure on which to base predictions. The totals gave plausible numbers and were accurate.  Peter is still working on the EQUITY project.
  • Then we have Lukasz Kwiatkowski.  While Erald and Peter work with management data, Lukasz also works with source code.  The idea is that IT decision-making is ruled by existing applications, whether you like it or not. We call that the bit-to-board approach. We extract bit-level data from large source portfolios and aggregate that up to the executive level. No information gets lost by management filters.  A good example is operational cost. This is often a significant factor but what can you do about it? The answer is to dive into the source code and look for the low hanging fruit. Lukasz worked on a nice example where he waded through a source portfolio of 20 million lines of code (250 apps) of a large multinational company, seeking to reduce MIPS.  We could identify just a very small part of the giant portfolio as code that could be optimized so that the operational cost had a potential of decreasing MIPS usage by 5-10%.
  • Laurenz Eveleens is working on quantifying the quality of IT forecasts. By now you have seen that an important aspect of IT decision-making is that executives use only prior experience and forecasts as bases for their decisions. Obviously, you have to know the quality of those forecasts.  But it turns out that not many researchers work on that. Again with large amounts of data from various industrial parties we worked on methods to assess forecasting quality. Also, complex math is involved, and we went to great lengths to get it all right.  Laurenz is recruited by PricewaterhouseCoopers where he works in the Software Assessment and Control group. One day a week he works to finalize his Ph.D. thesis.
  • Finally Dr. Rob Peters is also working on the EQUITY project.  Rob is a veteran academic and has worked for many years at a university. He has a Ph.D. in econometrics. He worked for many years at ING Group, a large financial service provider based in the Netherlands. He initiated quantitative thinking at ING and that is where we met years ago when I was invited by ING to work with them on IT portfolio management. Rob and I are working with the Ph.D. students and the industrial parties on the important themes of the EQUITY project. We also collaborate on IT portfolio management.  For instance, we recently proposed a method to quantify the yield of risk-bearing IT portfolios.

You can imagine that this type of research is only possible with substantial amounts of code and data. We have access to this type of data because of our decades-long connections with many industrial parties, and the added value our research brings to them. Of course this data is not meant for publication or sharing with others; it is crucial data that the competition is not allowed to have.

Of course that is a problem within our field; data is scarce and almost never publically available.

The Chaos Report data and methods of measurement are not available for verification. You say in your report that:

Nicholas Zvegintsov has placed low reliability on information where researchers keep the actual data and data sources hidden. He argued that because the Standish Group hasn’t explained, for instance, how it chose the organizations it surveyed, what survey questions it asked, or how many good responses it received, there’s little to believe.

Yes we fully agree.  Now the problem is that you often cannot publish actual data. Instead we publish statistical aggregates of the data. That is not as good as the data itself but it is a start.

Isn’t it expected that research studies, especially those with enormous impact, such as the Chaos Report, disclose their data and analysis methods to the research community for verification and validation?

This question has been asked more than once of Standish but they would not disclose their data.

Why do you think the Chaos Report is so widely quoted without any basis to validate its findings?

I think because the numbers are astounding, at least that is why I quoted these reports. In 1994 they came up with a 16% success rate. In retrospect I can predict that kind of percentage by a small Gedanken experiment.  Suppose we are to predict cost, time, and the amount of functionality. Success means we are below cost and time predictions and above the amount of functionality. Now assume we have a 50% chance of getting each number right (so this is random!). If the three numbers are not correlated, their combined change is a formula1b change. So the 16% success rate is in fact high. Now the snag is that not many quoting this report really read these definitions out loud and absorbed their true meaning.

 

Others have previously challenged the Chaos Report findings. In your report you have cited Nicholas Zvegintsov, Robert Glass, and Magne Jørgensen. How is your approach to challenging the Chaos Report different from previous ones?

Laurenz Eveleens and I were working on assessing the quality of IT forecasts using large amounts of data from various sources. The Standish Group definitions are about some form of forecasting quality, and not about what success constitutes in general terms. We carried out the exact same calculations as Standish reported on in their chaos chronicles. It turned out that these results were not at all in accordance with reality. Therefore the research is not reproducible. In medical science this is a normal procedure: when someone publishes a result other groups reproduce it.

Zveginstov’s argument was about the Standish Group’s practice of non-disclosure.  Glass argued that if so many projects fail how can we claim to live in an information age? Jørgensen’s argument was twofold: the definitions did not cover all cases, and other research findings were wildly different. In fact other research in this area suffers from the same problem as the Standish figures. Also, that research does not take institutional bias into account, which leads to meaningless rates.  So for us it is no surprise that Jørgensen found these large discrepancies.

Our argument is fundamentally different; we have actual data, we know the quality of it, and we apply it to their definitions. The outcomes simply do not at all coincide with reality.

 

You applied the Standish definitions to extensive data when you collected 5,457 forecasts of 1,211 real-world projects totaling hundreds of millions of Euros. What is the process you went through to get this data and how long did the research take from start to finish?

It takes decades to build industrial relations so that important and confidential data comes your way.  Once relations are firm and added value is returned, plenty of data becomes available.

 

How did you make sure that your research uses the same underlying assumptions or measurements as those used in the Chaos Report?

If you read the public versions of their reports closely this information is there.

 

Since you released your findings, what has been the reaction from other researchers and the media?

In 2009 we published a mathematically dense and substantial paper, Quantifying IT Forecast Quality. This paper contained the findings that we separately published in early 2010 in IEEE Software.  On the Internet the IEEE Software paper is now attracting attention. There is a lot of discussion going on about the Standish reports. Our findings seem to be trickling into those discussions.

 

Scientific articles and media reports widely cite the Chaos Report. The report found its way to the President of the United States to support the claim that processes and U.S. software products are inadequate. What impact do the findings of the Chaos Report have on software projects and project management in general?

If quoting and citation is a measure for impact then the impact in general is still substantial.

 

What impact do you hope your report findings will make?

We hope that others will also make an effort to assess the forecasting quality of their own data so that fact-based decision-making in our field becomes the norm.

 

The Chaos Report defines a project as successful based on how well it did with respect to its original estimates of cost, time, and functionality. Can you give us a brief summary of the definitions used by the Chaos Report for successful, challenged, and failed projects?

Laurenz and I translated their definitions into more mathematical terms, but they are equivalent (a small sketch applying them follows below):

  • Resolution Type 1, or project success. The project is completed, the forecast-to-actual ratios (f/a) of cost and time are ≥1, and the f/a ratio of the amount of functionality is ≤1.
  • Resolution Type 2, or project challenged. The project is completed and operational, but f/a < 1 for cost and time and f/a > 1 for the amount of functionality.
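
To make the definitions concrete, here is a minimal sketch (not from the paper; the function name and example numbers are hypothetical) that classifies a completed project from its f/a ratios as stated above. Anything matching neither rule, such as a cancelled project, falls outside these two types:

    # Sketch of the two Standish resolution rules quoted above.
    def standish_resolution(cost_fa: float, time_fa: float, func_fa: float) -> str:
        """Classify a completed project from its forecast/actual (f/a) ratios."""
        if cost_fa >= 1 and time_fa >= 1 and func_fa <= 1:
            return "Type 1: successful"
        if cost_fa < 1 and time_fa < 1 and func_fa > 1:
            return "Type 2: challenged"
        return "outside these two rules (e.g. cancelled or mixed outcomes)"

    # Example: budgeted 120 but spent 100, planned 12 months but took 10,
    # promised 80 function points and delivered 100.
    print(standish_resolution(120 / 100, 12 / 10, 80 / 100))  # Type 1: successful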

Let’s talk about the four findings from your research. Your first finding is that the definitions are misleading. Can you explain to us the basis for this conclusion?

They’re misleading because they’re based solely on estimation accuracy for cost, time, and functionality. But Standish labels projects as successful or challenged, which suggests much more than deviations from the original estimates.

So basically the definitions of successful and challenged projects are based on estimation deviation only. Readers of the report who associate words like “challenged” and “success” with something other than their definitions will interpret the figures differently.

Your second finding is that the report contains unrealistic rates. I know you go to great lengths in the report on how you arrived at this conclusion but can you give us a summary of your findings?

The Standish Group’s measures are one-sided because they neglect underruns for cost and time and overruns for the amount of functionality. We took a best-in-class forecasting organization and used projects for which we had cost and amount of functionality estimates. The quality of those forecasts was high; half the projects have a time-weighted average deviation of 11% for cost and 20% deviation for functionality. Combined, half the projects have an average time-weighted deviation of only 15% from both actuals. In IT this is known as best-in-class.
Yet, even though this organization’s cost and functionality forecasts are accurate, when we apply the Standish definitions to the initial forecasts, we find only a 35% success rate. This is unrealistic.
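
The interview does not spell out how the time-weighted deviation is computed; a plausible reading (an assumption for illustration, not necessarily the paper's exact formula) is that each forecast's deviation |f/a − 1| is weighted by how long that forecast was the current one:

    # Hypothetical sketch of a time-weighted average deviation; the weighting
    # scheme is an assumption, not necessarily the formula used in the paper.
    def time_weighted_deviation(forecasts, actual, durations):
        """forecasts: successive forecast values; durations: how long each was current."""
        total = sum(durations)
        return sum(abs(f / actual - 1) * (d / total)
                   for f, d in zip(forecasts, durations))

    # A project re-forecast once: 90 for the first 6 months, 105 for the last 4;
    # the actual cost turns out to be 100.
    print(time_weighted_deviation([90, 105], 100, [6, 4]))  # about 0.08, i.e. 8%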

 

The third finding is that basing estimates on the Chaos definitions perverts forecast accuracy. You say:

The organization adopted the Standish definitions to establish when projects were successful. This caused project managers to overstate budget requests to increase the safety margin for success. However, this practice perverted forecast quality.

What led you to this conclusion?

If you optimize for a high Standish success rate, the strategy is to not exceed the duration and budget that were initially stated and to not deliver less functionality than initially promised.  In practice, what you do is ask for a lot of time and money and promise nothing. This is exactly what we found in one company. Indeed, this company had high Standish ratings, but 50% of the projects had a time-weighted average deviation of 233% or more from the actual. Hence, these definitions hinder rather than help improve estimation practice.
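
A hypothetical worked example (the numbers are invented, not from the study) makes the incentive concrete: a project that a manager privately expects to cost 1 million euros, run 10 months, and deliver 100 function points can be booked as 2.5 million euros, 20 months, and 50 function points. If it then actually costs 1.1 million, takes 11 months, and delivers all 100 points, every ratio satisfies the success rule (2.5/1.1 ≈ 2.3 and 20/11 ≈ 1.8 are ≥ 1, and 50/100 = 0.5 is ≤ 1), so it counts as a Standish success even though the cost forecast was off by more than 100% and the functionality promise by 50%.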

The fourth and final finding is that the Chaos Report provides meaningless figures. You say:

Comparing all case studies together, we show that without taking forecasting biases into account, it is almost impossible to make any general statement about estimation accuracy across institutional boundaries.

Can you give an overview of some of the work you did to arrive at this conclusion?

We found institutional biases in forecasting. For instance, we found a salami tactic: systematically underestimating the actual.  We also found sandbagging: systematically overestimating. When you average numbers with an unknown bias, the average does not mean anything. And that is what Standish did.
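
A hypothetical illustration of why this matters (the numbers are invented): suppose organization A practices the salami tactic and its cost forecasts come in about 30% below the actuals (f/a ≈ 0.7), while organization B sandbags and forecasts about 30% above (f/a ≈ 1.3). Pooling the two yields an average f/a near 1.0, which looks like excellent forecasting even though neither organization forecasts well, and success or challenged rates averaged across such unknown biases say nothing about either organization.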

 

What was your reaction to the Standish Group’s response to your findings, which stated:

All data and information in the Chaos reports and all Standish reports should be considered Standish opinion and the reader bears all risk in the use of this opinion.

Laurenz and I fully support this disclaimer, which to our knowledge was never stated in the Chaos reports.

What is your advice to those who continue to use the Chaos Report project failure statistics, without really understanding the basis of its conclusions?

Read the IEEE Software paper, and if you want all the gritty details read the full paper with all the math included.

So what is next for Mr. Verhoef?

Helping IT governors to make IT decision making more fact-based and transparent.

 

How can our readers contact you and find out more about your research?

There’s plenty of information on the Web and one can reach me via email:
Email: x@cs.vu.nl

Website: http://www.cs.vu.nl/~x

If you like this interview, you will also like: Advanced Project Thinking – A conversation with Dr. Harvey Maylor

 

 

38 Responses to The “Chaos Report” Myth Busters
  1. Shim Marom
    March 24, 2010 | 7:27 am

    Excellent article Samad. Most (apart from a few) Project Management blogs have neglected to deal with this issue in a deep, constructive and meaningful way. I am still amazed when I read posts that quote or mention a low project success rate, without understanding the basic inconsistencies and methodological issues associated with the numbers quoted. Sensationalism and mediocrity take precedence over serious research and inquisitive discussion. Good on you for joining the rational thinkers who are not afraid to challenge some of our profession’s most prevalent urban myths.

    Cheers, Shim Marom
    http://www.quantmleap.com

    • samad_aidane
      March 24, 2010 | 11:19 am

      Shim,

      Thank you so much for taking the time to read and comment.

      I appreciate the work that Chris and his team have done and I wanted to make sure that I do everything I can so others can read it and benefit from it. I think it is valuable work and it needs to be read by all those who are interested in IT and software project failure.

      On a personal note, I appreciate this research because I am passionate about the topic of IT Failure.

      I lived all my professional life (during the last 15 years) in IT departments and IT consulting companies. It has been painful to me, since the first chaos report came out, to experience firsthand the negative perception of IT in the business community. The perception is that we in IT are unable to increase the success rate of projects, despite all the progress and great work that has been done over the last 15 years.

      The negative impact of the Chaos Report findings on the perception of IT by the business community is real. It undermines the trust in IT project management and self confidence of IT project managers.

      I never believed the Chaos Report because I knew that no organization I worked with in the past could tolerate a 30% success rate for its projects. From my own personal experience, I knew that this number was incorrect. Only through research such as this, from Chris and his team, can we begin to correct this perception with real data.

      By the way, working with Chris on this interview was a wonderful experience for me and I am grateful for his time, efforts, and patience.

  2. Derek Huether
    March 25, 2010 | 11:33 pm

    Wow, I’m very impressed with this read. I didn’t think it was the easiest thing to get through. Then again, not all reads should be easy. You can’t dumb down information like this and expect to communicate the same message. I had to read it twice!

    Without examining data objectively, you get nothing more than subjective conjecture. I just kept asking myself, 30% success rate? I thought only the weatherman could have a 30% success rate and keep his job. If we, as project managers, were only successfully delivering 30% of the time, we’d be out on our rears.

    Regards,
    Derek
    http://twitter.com/derekhuether

    • samad_aidane
      March 26, 2010 | 3:13 am

      Derek,

      Thank you so much my friend for reading and commenting,

      You are so right. You really can’t dumb down this type of information and expect to deliver the same points. A lot of work went into the research that Chris and his team did and I am just grateful to Chris for taking the time, from his busy schedule, to answer my questions and make this information available to us.

      I have felt the same way you did about the 30% success rate that is frequently quoted. Like I mentioned in my follow-up to Shim’s comments, I have never worked at an organization (or heard of one) that would tolerate this low success rate. If the low success rate were true, I would have left IT a long time ago.

      I think studies such as the Chaos Report are very powerful as they shape the perception that the business stakeholders develop of IT Projects and IT project managers. The perception is often negative and can range from skepticism to outright hostility. In some organizations, it takes a lot of hard work and many years of a solid track record before the perception is corrected. My hope, from getting the word out about this research, is to equip project managers with information they can use to educate their stakeholders about IT Failure myths. Ultimately, I want IT project managers to expect that they will succeed, to feel confident that failure is not the norm, and to believe in themselves that they have the capacity to deliver successful projects. Dammit!!! we deserve success!!! 🙂

      Cheers my friend.

  3. Steve Romero, IT Governance Evangelist, PMP
    March 26, 2010 | 12:59 am

    I have to begin my comments by stating (confessing?) I “widely quote” the Chaos Report – and not because the numbers are astounding. I quote the report because it showed project failure rates – even higher than the Standish Group concludes. I agree the study is flawed and misleading about project failure rates, but my assertion is not based on anything remotely resembling the incredibly comprehensive and detailed analysis of Vrije Universiteit. My chief complaint is in regard to the Standish use of a 3-type characterization of project results. I submit projects falling in the Standish “challenged” category are actually failures, and the subset of “failures” deemed so because they were killed before completion, are not necessarily failures at all.

    I don’t lament the non-availability of Standish data simply because I have become accustomed to their practice of not sharing it. This terrible research practice matters little because I use their flawed results to convince organizations to aggressively address project failure rates.

    Every study I have seen in the past two decades has shown at least half of all IT projects fail. And yes, Enterprises have managed to take us into the information age despite these high failure rates. This is simply explained when project failure is defined as a project that does not meet its intended and stated commitments. Using this definition, a project failure does not necessarily mean the effort should never have been sanctioned. It simply means the mechanisms used to make project decisions (from ideation to completion) did not meet their stated objectives. These mechanisms constitute the project’s failings, even in those instances where the technology indeed brings us into the information age. Side note: I contend the majority of project failures are caused by poor Project and Portfolio Management (PPM) practices, as opposed to poor Project Management practices.

    After almost 30 years working in almost every area of IT, I became an IT Governance Evangelist. I have been traveling the world touting IT Governance and its essential processes and mechanisms for over 3 years now. I have spoken to thousands of people in over 200 forums in which I have presented (100 of these to individual companies). I rarely encounter Enterprises with established specific definitions of project success and failure and the associated ability to make decisions based on applying those definitions. Most people I meet don’t even have the word “failure” in their corporate vernacular (given its incredibly negative connotation and the pervasive human aversion to the word).

    Vrije Universiteit does an incredible job noting the many variables and moving parts associated with understanding and determining project success and failure. I just don’t believe I could use their impressive research and analysis to influence the necessary changes to improve project success in Enterprises today. I would bet most of my audiences would be asleep before I could get halfway through just explaining your blog post, let alone taking a deep dive into the actual research. I use the Standish Report and its suspect failure rates because it is an effective means to motivate organizations to take action. I use the studies from the Standish Group, Gartner, Forrester and MIT CISR (my favorite) to urge folks to define project success and failure in their Enterprises and to use those definitions to understand and subsequently improve their success rates.

    Your great post and the incredible work done by Vrije Universiteit will now put an end to my practice of citing the Standish Chaos report, even if it was under the veil of good intentions. Better still, I will still cite their results, explain my misgivings regarding their approach, and then cite your blog post and the findings of Vrije Universiteit. Frankly, I care little about the nuances and resulting disparities between one research approach and another. What I do care about is inciting Enterprises to take action regarding their ability to understand project failure and increase project success.

    Let me close by noting the serendipitous nature of the timing of your post. I received a call from Standish Group Customer Service today asking about my use of the Standish website. I told them I only used it to obtain summaries of their research (because it is free) to quote in my presentations. I said I had little insight or interest in their paid services. I agreed to schedule some time with them next week, to give them the opportunity to talk to me about their services. After reading your post, I am sure it will be a very interesting conversation.

    Steve Romero, IT Governance Evangelist
    http://community.ca.com/blogs/theitgovernanceevangelist/

    • Shim Marom
      March 26, 2010 | 3:28 am

      Hi Steve, I’ve read through your comments a number of times in order to make sure I understand it correctly. The point I found so amazing is your admission (which I suspect represents a larger culprit audience) that you choose to use a study, which you agree is flawed and misleading, just to be able to make a point!!!

      Do I really need to elaborate on the ethical and professional issues associated with such an admission?

      The Chaos report is incorrect – full stop! To make use of such a report just to advance an agenda, whatever the agenda is, is incorrect, to say the least.

      Shim Marom
      http://www.quantmleap.com

      • Steve Romero, IT Governance Evangelist, PMP
        March 26, 2010 | 10:09 am

        I completely understand your response Shim. I should have taken greater care in explaining my willing use of a “flawed and misleading” report.

        First, I cite numerous studies in my presentations. Upon citing the Standish Report, I tell audiences I believe the report is flawed and I explain the basis of my views. The reasons I even bother to take audiences through the exercise of citing flawed research are as follows:
        – Many people are aware of the Standish Group
        – The study shows a high degree of failure which ignites discussion
        – Their flawed (in my view) approach to project failure characterization underscores the need for enterprises to define project failure in terms that result in their acquiring the data essential to making good project investment decisions

        I came to much higher level conclusions than those cited in this post. I did not have near the insights I have now, which is why I have to provide even more explanation when citing the Standish Chaos report.

        Steve Romero, IT Governance Evangelist

    • samad_aidane
      March 26, 2010 | 3:58 am

      “What I do care about is inciting Enterprises to take action regarding their ability to understand project failure and increase project success.”

      I love that and your passion shows in your wonderful posts on the IT Governance Evangelist blog.

      Steve, first of all, I want to thank you for your candid comments about your experience with the Chaos Report.

      It is very helpful to me to understand how others have used the Chaos Report figures. I can appreciate how its findings can be a powerful tool to motivate executives to establish proper governance structure in their organizations.

      This conversation we are having was exactly my hope from interviewing Chris and getting the word out about his team’s findings. It is through these conversations that we gain a better understanding of how we can help our organizations better understand failure and increase project success.

      It is funny that I got introduced to your blog just the day before by a tweet from Rick Morris. I enjoyed your recent post: “IT Governance Stops the Hate between the Business and IT”.

      Thank you again Steve.

      • Shim Marom
        March 26, 2010 | 8:34 am

        Hi Samad,

        I seriously fail to understand how a “flawed and misleading” report (to quote Steve’s comment) can serve as the basis for findings that “can be a powerful tool to motivate executives to establish proper governance structure in their organizations” (to quote your comment).

        I don’t think the end justifies the means, which means that the desire to convince organizations to adopt proper project management disciplines should not be used as a justification for using the wrong data. If this were the case, we could all write our own imaginary reports and present them to executives every time we wanted to convince them of one point or another. Clearly it is wrong, and as professionals we should shy away from using shoddy data to strengthen our arguments (irrespective of how valid our arguments are).

        • Steve Romero, IT Governance Evangelist, PMP
          March 26, 2010 | 10:18 am

          Agreed! How do you feel about using “shoddy data” if you fully disclose its shoddiness? (Which has been my practice, and again, I have a lot more explaining to do now that I have read this great post.)

          Steve

        • samad_aidane
          March 26, 2010 | 12:24 pm

          Shim,

          Totally agree with you. The end should not justify the means.

          My point, which I didn’t do a good job articulating in my previous comment, is that the report figures are astounding and shocking. I can see how, when used as a persuasion tool, their shock and awe impact can be very effective with executives. I am not saying that this is justified. I am just saying that as a persuasion tool, fear is much more effective than reward. And this is what I think makes the Chaos Report very widely quoted.

          With research, such as the one that Chris and his team produced, we can begin to show that the Chaos Report is not the right persuasion tool. We will then have the opportunity to think of more effective ways to help our organizations increase their awareness about how to reduce project failure and increase success.

          • Shim Marom
            March 27, 2010 | 5:48 am

            Not to over-elaborate the point, but both you and Steve seem to suggest that we have too many project failures. I actually totally disagree. I have elaborated on this extensively in a comment on a post published by Patrick Richard (see http://thehardnosedpm.typepad.com/my-blog/2010/03/chaos-theory.html), so I will just refer to it briefly here.

            The current level of project failures represents an acceptable level based on the fact that:
            a. It is not possible to consistently achieve a 100% success rate (that’s almost a physical law); and
            b. Where a 100% success rate is required (for instance in space exploration missions), exorbitant amounts of money are required to make it happen. So, in principle, the current level (more or less) of success rate represents a socially acceptable level based on the opportunity cost involved with changing it either way (up or down).

            My conclusion: Let’s stop talking about a problem that doesn’t really exist.

            Cheers, Shim.

            • Steve Romero, IT Governance Evangelist, PMP
              March 27, 2010 | 12:26 pm

              Shim, I now have to completely disagree with you. There are far too many project failures – again, if the project failure is defined as: “the project does not meet its intended and stated commitments.”

              Project failure does not only occur when the solution doesn’t work. Yes, missing performance objectives is something we see with IT projects, but I have found these types of failures to be in the minority, and seldom does the technology fail outright.

              Project failure occurs when:
              – Organizations invest in the wrong projects (not those most essential to meeting Enterprise Strategy)
              – The speed of the project does not meet the needs of the business (projects are approved predicated on when value is expected to be realized)
              – The value of the project is not commensurate to the investment in the project (and I find most organizations are not even able to accurately determine investment value – and sometimes, the “actual” cost of the project, which includes indirect, business process change, and full-lifecycle costs)

              I highly suggest you start following @mkrigsman. Michael Krigsman writes a post entirely dedicated to project failures. For us project success advocates, his blog is sort of our “Project Failure Maypole.”

              My last comment is in regard to your assertion, “It is not possible to consistently achieve a 100% success rate.” Here is where I completely agree with you. In fact, I want to always see a certain level of failure because it means we’re pushing the envelope and not playing it too safe. So I hope you weren’t making the assertion based on the assumption that I was advocating something that is not only unrealistic, but contrary to businesses accepting a healthy amount of risk in their investments.

              For those of us who find IT Project failure rates to be unbearable, I doubt there is anyone who has ever suggested the target is a 100% success rate. What I do want is for Enterprises (not just IT) to do what they say they are going to do, and deliver what they say they are going to deliver. And once again, I find few that are able when it comes to their IT projects.

              Steve Romero, IT Governance Evangelist

              • Shim Marom
                March 27, 2010 | 11:29 pm

                Hi Steve, this is now turning into a serious discussion that needs to have its own space. I will reply to your comments, in detail, in a separate blog post later on today.

                Cheers, Shim Marom
                http://www.quantmleap.com

      • Steve Romero, IT Governance Evangelist, PMP
        March 26, 2010 | 10:13 am

        It is nice to “meet” you as well Samad. I found you on Twitter and I will be “following” you more closely. I look forward to it.

        And thanks again for a great post.

        Steve

  4. Scott Ambler
    July 29, 2010 | 3:29 pm

    It’s nice to see your writings on this topic. I’ve been running industry surveys for several years now, see http://www.ambysoft.com/surveys/ , and have consistently gotten different (and more positive) results than what is published in the Chaos Report.

    • samad_aidane
      August 4, 2010 | 10:16 am

      Scott,

      Thank you so much for your comment. I found http://www.ambysoft.com/surveys to be a great resource when I was researching scaling agile. Your surveys were extremely helpful to me in understanding what specific agile practices people are using on their projects.

      I plan to use your site more and recommend it as a resource as I continue the blog post series I started on tailoring agile for large system integration projects.

      Thank you again Scott.

  5. Mike Clayton
    August 25, 2010 | 3:33 am

    Samad – a simple thank-you from me. I have long wondered if the Chaos Report would be worth buying and have been shy of taking the risk. Your blog does us all a valuable service by making the EQUITY team’s work available, and helping us assess that risk.
    Mike

    PS: Thanks too, to Glen Alleman at http://herdingcats.typepad.com/ for signposting me here.

    • samad_aidane
      August 25, 2010 | 4:02 am

      Mike,

      Thank you so much for your comment. I am glad you found this information helpful. I have been concerned for a long time about the negative impact the Chaos Report has on the perception of IT projects. Especially the perception of the business sponsors. I was thrilled when Mr. Chris Verhoef agreed to do the interview. I am also grateful to Mr. Glen Alleman for mentioning this post on his great blog: http://herdingcats.typepad.com.

      Thank you again Mike.

  6. Tannguy
    September 30, 2010 | 12:24 pm

    Very interesting comments. I used to quote the Standish survey, as did the PMI in their PMI Fact Book…
    I can understand that the definitions of “project success” and “project challenged” are not reliable. However, “project cancelled” seems to be stronger.
    My concern is that the Standish report is the only one I know of on that topic. Maybe I am wrong, I agree.

    Where is reliable data??? As I said, the PMI was using these data a few years ago as a reference for project success rates. Is that still the case?
    regards
    Tannguy

    • samad_aidane
      September 30, 2010 | 9:59 pm

      Tannguy,

      Thank you for taking the time to comment.

      Unfortunately I am not aware of any other reliable report at this point. I don’t think PMI uses the figures from the Chaos Report anymore but I am not 100% sure. I would love to know if there are other reliable studies.

      Thank you again.

  7. Brian Finnerty
    October 1, 2010 | 2:05 pm

    Good interview and thanks for the added perspective on the authors of this paper. I’ve often questioned the validity of the project failure rates in the Chaos report, but this really lays out a compelling counter argument.

    See a continuation of this discussion with comments at http://blogs.innerworkings.com/fmckeagney/2010/10/01/meaningless-the-rise-fall-of-the-chaos-report-figures/.

    Have we heard any response from the Chaos report authors as yet?

    • samad_aidane
      October 17, 2010 | 12:21 am

      Brian,

      Thank you for the comments.

      So sorry for the late reply. I just saw your comments this afternoon.

      To answer your question, there has not been any response from Chaos Report authors.

      Thanks for the link to Fran’s article. Loved it and left a comment. It is great to see that the interview contributed to the conversation about the Chaos Report.

  8. Rohane
    October 28, 2011 | 8:20 am

    Thanks for this article
    The key question for me is “What is the alternative?”
    CHAOS has presented an easily packaged ready reference. Without it, how do normal folks get a sense of the stats?
    What can you say about it?

    • Shim Marom
      December 19, 2011 | 2:58 am

      Rohane,

      If you think about it for a minute you will see that your question doesn’t really make much sense at all.

      Just because the Chaos report is packaged in a nice and easily digestible way does not absolve it of the need to also be accurate. Are you suggesting that incorrect evidence is better than no evidence at all?

      Cheers, Shim

  9. Marc Schluper
    January 20, 2012 | 11:48 am

    Many years ago, in an undergraduate math class, I learned that when solving a problem, more than half the effort goes into thoroughly defining it. In software development, if we really understand the problem we need to solve, the solution almost pops up before our eyes. If we can’t get a clear idea about what the software should do, it’s fine to build prototypes as long as we know we are building prototypes. We get into trouble whenever we make ourselves and others believe that we are building a solution while pretending we know what’s needed. So when we are measuring the success of projects we should not mix projects that have a clear understanding of what needs to be built with those that don’t.
    My advice: If you don’t know, ask. Never pretend you know while you don’t. Don’t rely on people who pretend.

  10. tarek
    April 9, 2012 | 2:47 pm

    hello dears,
    can you please explain to me or point me to anything about the Chaos Report, such as the definition or the benefits of the Chaos Report?
    thank you

  11. Hard Truths About Public Sector IT
    July 8, 2012 | 11:45 pm

    […] predictor applies whether or not you accept the 1995 or 2009 Standish Reports or not, or […]

  12. Peter Hawkins
    December 16, 2012 | 8:51 pm

    My son has an honours degree in maths and finance and a postgrad in economics, and has helped PhD students with statistics. I asked him how I should view these reports that, from my experience (nearly 40 years in IT), don’t make much sense. He summed it up: 40% of people don’t believe in statistics.

  13. […] is not unanimous; some doubt its validity for all types of projects [see this link and this one, for example] and some claim that the success rate is above 50% [with a sample […]

  14. […] [4] S. Aidane, The “Chaos Report” Myth Busters, 26 March 2010, see here. […]

  15. […] the Chaos reports from the Standish Group, but these are highly questionable. I advise you to read the following interview, and afterwards to toss your personal copy of those reports into the wastebasket […]

  16. Amol
    September 30, 2013 | 2:10 am

    Thanks all for the great information. Many of us have used Chaos Report references somewhere in our consulting careers. Great to know the other side of the story, though it is a very dense read!!!

  17. […] If you like this interview, you might also like: The Chaos Report Myth-busters. […]

  18. […] There is also considerable criticism of the Chaos Report, partly because it clings to the old-fashioned Triple Constraints and ignores more modern insights such as the Six Triple Constraints (with the additions: Quality | Risk | Customer Satisfaction). There is also much criticism of the methods used for collecting and interpreting the underlying data. See, among others: The Rise and Fall of the Chaos Report Figures and: The “Chaos Report” Myth Busters. […]

  19. […] agrees with the Standish Group. While not an easy read, you will find Samad Aidane’s article, The Chaos Report Mythbusters a pretty thought provoking critique of the Chaos Report. Based on my experiences though, it is easy […]

  20. […] unanimous; some doubt its validity for all types of projects [see this link and this one, for example] and some claim that the success rate is above 50% […]

The “Chaos Report” Myth Busters

Chris VerhoefIn a previous blog post titled, Let’s say “No” to groupthink and stop quoting the Chaos Report, I wrote that:

“We need to be able to examine the underlying data and measurement methods used as the basis for any report or study on IT project failures. Without examining the data, to continue quoting such reports is simply engaging in groupthink”

While we will never be able to examine the actual data on which the Chaos Report is based, we now have research that refutes its findings. In summary, this research found the Chaos Report to be misleading and one-sided.  It perverts the estimation practice and results in meaningless figures.

Laurenz Eveleens and Chris Verhoef, of Vrije Universiteit Amsterdam, recently published the research in the article “The Rise and Fall of the Chaos Report Figures” in the January/February 2010 issue of IEEE Software magazine.

I had the opportunity recently to interview Mr. Verhoef about this research. Here is the full text of the interview:

What was the motivation for doing this research?

This particular research paper is part of a larger project called EQUITY, which is short for Exploring Quantifiable IT Yields.  Let me tell you a bit more about that project.

The invisible motor of our western economy is software, an emerging production factor comparable to natural resources, labor, and capital. Current paradigms indicate that software is just a cost center, and these costs must be lower. This is like saying that from less iron ore, more steel must be produced. The EQUITY project intends to explore potential connections between value creation and information technology, to enable competition with software in a calculated manner.

The bottom line is that we wish to trace the actual impact of IT on the value creation or destruction, e.g., in the form of stock value, also known as the equity of a firm.  It is our ambition to develop a quantitative approach that is both accurate and usable within software-intensive organizations to facilitate rational decision-making about software investments.  Achieving this would be a break-through since no-one has successfully explored the territory of information technology yields before by purely quantitative means.

Within the EQUITY project we work on developing the competencies to understand the possible connections between investing in software and the ensuing value creation or destruction via quantitative methods. Using such methods enables the development of predictive models so that competing with software becomes feasible through maximizing value creation and minimizing value destruction.

In the EQUITY project we work with six people: four Ph.D. students and a former top executive. Let me introduce them briefly:

  • Erald Kulk just received his Ph.D. and worked on requirements creep.  With real-world data he figured out when volatile requirements are healthy and when they start to become dangerous. Without requirements change you get the system you asked for, and with some healthy modifications you get the system that you meant.  But when you do not know what you want, creep turns into a failure factor.  We came up with (complex) mathematical methods that warn you at an early stage that you have reached the danger zone of failure. Dr. Kulk also worked on predicting IT project risks like budget overrun and how you can quantify this risk in terms of easily measured aspects of IT projects. Erald Kulk was recruited by our national government where he assists our federal CIO, Mr. Hillenaar, with the installation of nationwide IT portfolio management to improve the IT performance by the Dutch government.
  • Peter Kampstra is another Ph.D. student working on the EQUITY project.  He is a very talented young man with a great intuition for mathematics and statistics. You could call him Mr. Beanplot, since he invented a new statistical tool he dubbed a beanplot.  We used his intuitive statistical visualization technique (see paper and spreadsheet) to benchmark the risk of failure of large Dutch governmental projects against 6,000 IT projects in the private sector.  He also works on the reliability of function points counts.  When investing in custom IT systems, it is important to know “how much” IT you are going to make. The function point measure is one of the possible candidates. We investigated many tens of thousands of function point totals from many projects. It turned out that the function point totals were a good measure on which to base predictions. The totals gave plausible numbers and were accurate.  Peter is still working on the EQUITY project.
  • Then we have Lukasz Kwiatkowski.  While Erald and Peter work with management data, Lukasz also works with source code.  The idea is that IT decision-making is ruled by existing applications, whether you like it or not. We call that the bit-to-board approach. We extract bit-level data from large source portfolios and aggregate that up to the executive level. No information gets lost by management filters.  A good example is operational cost. This is often a significant factor but what can you do about it? The answer is to dive into the source code and look for the low hanging fruit. Lukasz worked on a nice example where he waded through a source portfolio of 20 million lines of code (250 apps) of a large multinational company, seeking to reduce MIPS.  We could identify just a very small part of the giant portfolio as code that could be optimized so that the operational cost had a potential of decreasing MIPS usage by 5-10%.
  • Laurenz Eveleens is working on quantifying the quality of IT forecasts. By now you have seen that an important aspect of IT decision-making is that executives use only prior experience and forecasts as bases for their decisions. Obviously, you have to know the quality of those forecasts.  But it turns out that not many researchers work on that. Again with large amounts of data from various industrial parties we worked on methods to assess forecasting quality. Also, complex math is involved, and we went to great lengths to get it all right.  Laurenz is recruited by PricewaterhouseCoopers where he works in the Software Assessment and Control group. One day a week he works to finalize his Ph.D. thesis.
  • Finally Dr. Rob Peters is also working on the EQUITY project.  Rob is a veteran academic and has worked for many years at a university. He has a Ph.D. in econometrics. He worked for many years at ING Group, a large financial service provider based in the Netherlands. He initiated quantitative thinking at ING and that is where we met years ago when I was invited by ING to work with them on IT portfolio management. Rob and I are working with the Ph.D. students and the industrial parties on the important themes of the EQUITY project. We also collaborate on IT portfolio management.  For instance, we recently proposed a method to quantify the yield of risk-bearing IT portfolios.

You can imagine that this type of research is only possible with substantial amounts of code and data. We have access to this type of data because of our decades-long connections with many industrial parties, and the added value our research brings to them. Of course this data is not meant for publication or sharing with others; it is crucial data that the competition is not allowed to have.

Of course that is a problem within our field; data is scarce and almost never publically available.

The Chaos Report data and methods of measurement are not available for verification. You say in your report that:

Nicholas Zvegintsov has placed low reliability on information where researchers keep the actual data and data sources hidden. He argued that because the Standish Group hasn’t explained, for instance, how it chose the organizations it surveyed, what survey questions it asked, or how many good responses it received, there’s little to believe.

Yes we fully agree.  Now the problem is that you often cannot publish actual data. Instead we publish statistical aggregates of the data. That is not as good as the data itself but it is a start.

Isn’t it expected that research studies, especially those with enormous impact, such as the Chaos Report, disclose their data and analysis methods to the research community for verification and validation?

This question has been asked more than once of Standish but they would not disclose their data.

Why do you think the Chaos Report is so widely quoted without any basis to validate its findings?

I think because the numbers are astounding, at least that is why I quoted these reports. In 1994 they came up with a 16% success rate. In retrospect I can predict that kind of percentage by a small Gedanken experiment.  Suppose we are to predict cost, time, and the amount of functionality. Success means we are below cost and time predictions and above the amount of functionality. Now assume we have a 50% chance of getting each number right (so this is random!). If the three numbers are not correlated, their combined change is a formula1b change. So the 16% success rate is in fact high. Now the snag is that not many quoting this report really read these definitions out loud and absorbed their true meaning.

 

Others have previously challenged the Chaos Report findings. In your report you have cited Nicholas Zvegintsov, Robert Glass, and Magne Jørgensen. How is your approach to challenging the Chaos Report different from previous ones?

Laurenz Eveleens and I were working on assessing the quality of IT forecasts using large amounts of data from various sources. The Standish Group definitions are about some form of forecasting quality, and not about what success constitutes in general terms. We carried out the exact same calculations as Standish reported on in their chaos chronicles. It turned out that these results were not at all in accordance with reality. Therefore the research is not reproducible. In medical science this is a normal procedure: when someone publishes a result other groups reproduce it.

Zveginstov’s argument was about the Standish Group’s practice of non-disclosure.  Glass argued that if so many projects fail how can we claim to live in an information age? Jørgensen’s argument was twofold: the definitions did not cover all cases, and other research findings were wildly different. In fact other research in this area suffers from the same problem as the Standish figures. Also, that research does not take institutional bias into account, which leads to meaningless rates.  So for us it is no surprise that Jørgensen found these large discrepancies.

Our argument is fundamentally different; we have actual data, we know the quality of it, and we apply it to their definitions. The outcomes simply do not at all coincide with reality.

 

You applied the Standish definitions to extensive data when you collected 5,457 forecasts of 1,211 real-world projects totaling hundreds of millions of Euros. What is the process you went through to get this data and how long did the research take from start to finish?

It takes decades to build industrial relations so that important and confidential data comes your way.  Once relations are firm and added value is returned, plenty of data becomes available.

 

How did you make sure that your research uses the same underlying assumptions or measurements as those used in the Chaos Report?

If you read the public versions of their reports closely this information is there.

 

Since you released your findings, what has been the reaction from other researchers and the media?

In 2009 we published a mathematically dense and substantial paper, Quantifying IT Forecast Quality. This paper contained the findings that we separately published in early 2010 in IEEE Software.  On the Internet the IEEE Software paper is now attracting attention. There is a lot of discussion going on about the Standish reports. Our findings seem to be trickling into those discussions.

 

Scientific articles and media reports widely cite the Chaos Report. The report found its way to the President of the United States to support the claim that processes and U.S. software products are inadequate. What impact do the findings of the Chaos Report have on software projects and project management in general?

If quoting and citation is a measure for impact then the impact in general is still substantial.

 

What impact do you hope your report findings will make?

We hope that others will also make an effort to assess the forecasting quality of their own data so that fact-based decision-making in our field becomes the norm.

 

The Chaos Report defines a project as successful based on how well it did with respect to its original estimates of cost, time, and functionality. Can you give us a brief summary of the definitions used by the Chaos Report for successful, challenged, and failed projects?

Laurenz and I translated their definitions into more mathematical terms, but they are equivalent:

  • Resolution Type 1, or project success. The project is completed, the forecast to actual ratios  (f/a) of cost and time are ≥1, and the f/a ratio of the amount of functionality is ≤1.
  • Resolution Type 2, or project challenged. The project is completed and operational, but f/a < 1 for cost and time and f/a > 1 for the amount of functionality.

Let’s talk about the four findings from your research. Your first finding is that the definitions are misleading. Can you explain to us the basis for this conclusion?

They’re misleading because they’re solely based on an estimation of accuracy for cost, time, and functionality. But Standish labels projects as successful or challenged, suggesting much more than deviations from their original estimates.

So basically the definitions of successful and challenged projects are based on estimation deviation only. Readers of the report who associate words like “challenged” and “success” with something other than their definitions will interpret the figures differently.

Your second finding is that the report contains unrealistic rates. I know you go to great lengths in the report on how you arrived at this conclusion but can you give us a summary of your findings?

The Standish Group’s measures are one-sided because they neglect underruns for cost and time and overruns for the amount of functionality. We took a best-in-class forecasting organization and used projects for which we had cost and amount of functionality estimates. The quality of those forecasts was high; half the projects have a time-weighted average deviation of 11% for cost and 20% deviation for functionality. Combined, half the projects have an average time-weighted deviation of only 15% from both actuals. In IT this is known as best-in-class.
Yet, even though this organization’s cost and functionality forecasts are accurate, when we apply the Standish definitions to the initial forecasts, we find only a 35% success rate. This is unrealistic.

 

The 3rd finding is that basing estimates on the Chaos definitions leads to perverting accuracy. You say:

The organization adopted the Standish definitions to establish when projects were successful. This caused project managers to overstate budget requests to increase the safety margin for success. However, this practice perverted forecast quality.

What led you to this conclusion?

If you optimize on a high Standish success rate, the strategy is to not exceed the duration and budget that was initially stated and to not deliver less functionality than initially promised.  In practice, what you do is ask for a lot of time and money and promise nothing. This is exactly what we found in one company. Indeed, this company had high Standish ratings but 50% of the projects had a time-weighted average deviation of 233% or more from the actual. Hence, these definitions hinder rather than help increasing estimation practice.

The 4th and final conclusion is that the Chaos Report provides meaningless figures. You say:

Comparing all case studies together, we show that without taking forecasting biases into account, it is almost impossible to make any general statement about estimation accuracy across institutional boundaries.

Give an overview of some the work you did to arrive at this conclusion

We found institutional biases in forecasting. For instance, we found a salami tactic: this is systematically underestimating the actual.  Or we found sand bagging: overestimating systematically. When you average numbers with an unknown bias the average does not mean anything. And that is what Standish did.

 

What was your reaction to the Standish Group’s response to your findings that:

All data and information in the Chaos reports and all Standish reports should be considered Standish opinion and the reader bears all risk in the use of this opinion.

Laurenz and I fully support this disclaimer, which to our knowledge was never stated in the Chaos reports.

What is your advice to those who continue to use the Chaos Report project failure statistics, without really understanding the basis of its conclusions?

Read the IEEE Software paper, and if you want all the gritty details read the full paper with all the math included.

So what is next for Mr. Verhoef?

Helping IT governors to make IT decision making more fact-based and transparent.

 

How can our readers contact you and find out more about your research?

There’s plenty of information on the Web and one can reach me via email:
Email: x@cs.vu.nl

Website: http://www.cs.vu.nl/~x

If you like this interview, you will also like: Advanced Project Thinking – A conversation with Dr. Harvey Maylor

 

 

38 Responses to The “Chaos Report” Myth Busters
  1. Shim Marom
    March 24, 2010 | 7:27 am

    Excellent article Samad. Most (apart from a few) Project Management blogs have neglected to deal with this issue in a deep, constructive and meaningful way. I am still amazed when I read posts that quote or mention a low project success rate, without understanding the basic inconsistencies and methodological issues associated with the numbers quoted. Sensationalism and mediocrity take precedence over serious research and inquisitive discussion. Good on you for joining the rational thinkers who are not afraid to challange some of our profession’s most prevalent urban myths.

    Cheers, Shim Marom
    http://www.quantmleap.com

    • samad_aidane
      March 24, 2010 | 11:19 am

      Shim,

      Thank you so much for taking the time to read and comment.

      I appreciate the work that Chris and his team have done and I wanted to make sure that I do everything I can so others can read it and benefit from it. I think it is valuable work and it need to be read by all those who are interested in IT and software project failure.

      On a personal note, I appreciate this research because I am passionate about the topic of IT Failure.

      I lived all my professional life (during the last 15 years) in IT departments and IT consulting companies. It has been painful to me, since the first chaos report came out, to experience firsthand the negative perception of IT in the business community. The perception is that we in IT are unable to increase the success rate of projects, despite all the progress and great work that has been done over the last 15 years.

      The negative impact of the Chaos Report findings on the perception of IT by the business community is real. It undermines the trust in IT project management and self confidence of IT project managers.

      I never believed the Chaos Report because I knew that no organization I worked with in the past can tolerate 30% success rate of its projects. From my own personal experience, I knew that this number is incorrect. Only thru research, such as this one from Chris and his team, can we begin to correct this perception with real data.

      By the way, working with Chris on this interview was a wonderful experience for me and I am grateful for his time, efforts, and patience.

  2. Derek Huether
    March 25, 2010 | 11:33 pm

    Wow, I’m very impressed with this read. I didn’t think it was the easiest thing to get through. Then again, not all reads should be easy. You can’t dumb down information like this and expect to communicate the same message. I had to read it twice!

    Without examining data objectively, you get nothing more than subjective conjecture. I just kept asking myself, 30% success rate? I thought only the weatherman could have a 30% success rate and keep his job. If we, as project managers, were only successfully delivering 30% of the time, we’d be out on our rears.

    Regards,
    Derek
    http://twitter.com/derekhuether

    • samad_aidane
      March 26, 2010 | 3:13 am

      Derek,

      Thank you so much my friend for reading and commenting,

      You are so right. You really can’t dumb down this type of information and expect to deliver the same points. A lot of work went into the research that Chris and his team did and I am just grateful to Chris for taking the time, from his busy schedule, to answer my questions and make this information available to us.

      I have felt the same way you did about the 30% success rate that is frequently quoted. Like I mentioned in my follow-up to Chim’s comments, I have never worked at an organization (or heard of one) that would tolerate this low success rate. If the low success rate was true, I would have left IT a long time ago.

      I think studies such as the Chaos Report are very powerful as they shape the perception that the business stakeholders develop of IT Projects and IT project managers. The perception is often negative and can range from skepticism to outright hostility. In some organizations, it takes a lot of hard work and many years of solid track record before the perception is corrected. My hope, from getting the word out about this research, is to equip project managers with information they can use to educate their stakeholders about IT Failure myths. Ultimately, I want IT project managers to expect that they will success, to feel confident that failure is not the norm, and to believe in themselves that they have the capacity to deliver successful projects. Dammit!!! we deserve success!!! 🙂

      Cheers my friend.

  3. Steve Romero, IT Governance Evangelist, PMP
    March 26, 2010 | 12:59 am

    I have to begin my comments by stating (confessing?) I “widely quote” the Chaos Report – and not because the numbers are astounding. I quote the report because it showed project failure rates – even higher than the Standish Group concludes. I agree the study is flawed and misleading about project failure rates, but my assertion is not based on anything remotely resembling the incredibly comprehensive and detailed analysis of Vrije Universitei. My chief complaint is in regard to the Standish use of a 3-type characterization of project results. I submit projects falling in the Standish “challenged” category are actually failures, and the subset of “failures” deemed so because they were killed before completion, are not necessarily failures at all.

    I don’t lament the non-availability of Standish data simply because I have become accustomed to their practice of not sharing it. This terrible research practice matters little because I use their flawed results to convince organizations to aggressively address project failure rates.

    Every study I have seen in the past two decades has shown at least half of all IT projects fail. And yes, Enterprises have managed to take us into the information age despite these high failure rates. This is simply explained when project failure is defined as a project that does not meet its intended and stated commitments. Using this definition, a project failure does not necessarily mean the effort should never have been sanctioned. It simply means the mechanisms used to make project decisions (from ideation to completion) did not meet their stated objectives. These mechanisms constitute the project’s failings, even in those instances where the technology indeed brings us into the information age. Side note: I contend the majority of project failures are caused by poor Project and Portfolio Management (PPM) practices, as opposed to poor Project Management practices.

    After almost 30 years working in almost every area of IT, I became an IT Governance Evangelist. I have been traveling the world touting IT Governance and its essential processes and mechanisms for over 3 years now. I have spoken to thousands of people in over 200 forums in which I have presented (100 of these to individual companies). I rarely encounter Enterprises with established specific definitions of project success and failure and the associated ability to make decisions based on applying those definitions. Most people I meet don’t even have the word “failure” in their corporate vernacular (given its incredibly negative connotation and the pervasive human aversion to the word).

    Vrije Universitei does an incredible job noting the many variables and moving parts associated with understanding and determining project success and failure. I just don’t believe I could use their impressive research and analysis to influence the necessary changes to improve project success in Enterprises today. I would bet most of my audiences would be asleep before I could get halfway through just explaining your blog post, let alone taking a deep dive into the actual research. I use the Standish Report and its suspect failure rates because it is an effective means to motivate organizations to take action. I use the studies from the Standish Group, Garter, Forrester and MIT CISR (my favorite) to urge folks to define project success and failure in their Enterprises and to use those definitions to understand and subsequently improve their success rates.

    Your great post and the incredible work done by Vrije Universiteit will now put an end to my practice of citing the Standish Chaos Report as I have been, even if it was done with good intentions. Better still, I will still cite their results, explain my misgivings regarding their approach, and then cite your blog post and the findings of Vrije Universiteit. Frankly, I care little about the nuances and resulting disparities between one research approach and another. What I do care about is inciting Enterprises to take action regarding their ability to understand project failure and increase project success.

    Let me close by noting the serendipitous nature of the timing of your post. I received a call from Standish Group Customer Service today asking about my use of the Standish website. I told them I only used it to obtain summaries of their research (because it is free) to quote in my presentations. I said I had little insight or interest in their paid services. I agreed to schedule some time with them next week, to give them the opportunity to talk to me about their services. After reading your post, I am sure it will be a very interesting conversation.

    Steve Romero, IT Governance Evangelist
    http://community.ca.com/blogs/theitgovernanceevangelist/

    • Shim Marom
      March 26, 2010 | 3:28 am

      Hi Steve, I’ve read through your comments a number of times to make sure I understand them correctly. The point I find so amazing is your admission (which I suspect is representative of a much larger audience of culprits) that you choose to use a study you agree is flawed and misleading, just to be able to make a point!!!

      Do I really need to elaborate on the ethical and professional issues associated with such an admission?

      The Chaos report is incorrect – full stop! To make use of such a report just to advance an agenda, whatever that agenda is, is wrong, to say the least.

      Shim Marom
      http://www.quantmleap.com

      • Steve Romero, IT Governance Evangelist, PMP
        March 26, 2010 | 10:09 am

        I completely understand your response, Shim. I should have taken greater care in explaining my willing use of a “flawed and misleading” report.

        First, I cite numerous studies in my presentations. Upon citing the Standish Report, I tell audiences I believe the report is flawed and I explain the basis of my views. The reasons I even bother to take audiences through the exercise of citing flawed research are as follows:
        – Many people are aware of the Standish Group
        – The study shows a high degree of failure, which ignites discussion
        – Their flawed (in my view) approach to project failure characterization underscores the need for enterprises to define project failure in terms that result in their acquiring the data essential to making good project investment decisions

        The conclusions I came to were at a much higher level than those cited in this post. I did not have anywhere near the insight I have now, which is why I will have to provide even more explanation when citing the Standish Chaos Report.

        Steve Romero, IT Governance Evangelist

    • samad_aidane
      March 26, 2010 | 3:58 am

      “What I do care about is inciting Enterprises to take action regarding their ability to understand project failure and increase project success.”

      I love that and your passion shows in your wonderful posts on the IT Governance Evangelist blog.

      Steve, first of all, I want to thank you for your candid comments about your experience with the Chaos Report.

      It is very helpful to me to understand how others have used the Chaos Report figures. I can appreciate how its findings can be a powerful tool to motivate executives to establish proper governance structure in their organizations.

      This conversation we are having is exactly what I hoped for when interviewing Chris and getting the word out about his team’s findings. It is through these conversations that we gain a better understanding of how we can help our organizations better understand failure and increase project success.

      It is funny that I got introduced to your blog just the day before by a tweet from Rick Morris. I enjoyed your recent post: “IT Governance Stops the Hate between the Business and IT”.

      Thank you again Steve.

      • Shim Marom
        March 26, 2010 | 8:34 am

        Hi Samad,

        I seriously fail to understand how a “flawed and misleading” report (to quote Steve’s comment) can be used as the basis for “findings [that] can be a powerful tool to motivate executives to establish proper governance structure in their organizations” (to quote your comment).

        I don’t think the end justifies the means, which means that the desire to convince organizations to adopt proper project management disciplines should not be used as a justification for using the wrong data. If that were the case, we could all write our own imaginary reports and present them to executives every time we wanted to convince them of one point or another. Clearly it is wrong, and as professionals we should shy away from using shoddy data to strengthen our arguments (irrespective of how valid our arguments are).

        • Steve Romero, IT Governance Evangelist, PMP
          March 26, 2010 | 10:18 am

          Agreed! How do you feel about using “shoddy data” if you fully disclose its shoddiness? (Which has been my practice, and again, I have a lot more explaining to do now that I have read this great post.)

          Steve

        • samad_aidane
          March 26, 2010 | 12:24 pm

          Shim,

          Totally agree with you. The end should not justify the means.

          My point, which I didn’t do a good job of articulating in my previous comment, is that the report figures are astounding and shocking. I can see how, when used as a persuasion tool, their shock-and-awe impact can be very effective with executives. I am not saying that this is justified. I am just saying that, as a persuasion tool, fear is much more effective than reward. And this is what I think makes the Chaos Report so widely quoted.

          With research such as the study Chris and his team produced, we can begin to show that the Chaos Report is not the right persuasion tool. We will then have the opportunity to think of more effective ways to help our organizations increase their awareness of how to reduce project failure and increase success.

          • Shim Marom
            March 27, 2010 | 5:48 am

            Not to over-elaborate the point, but both you and Steve seem to suggest that we have too many project failures. I actually totally disagree. I have elaborated on this extensively in a comment on a post published by Patrick Richard (see http://thehardnosedpm.typepad.com/my-blog/2010/03/chaos-theory.html), so I will only refer to it briefly here.

            The current level of project failures represents an acceptable level based on the fact that:
            a. It is not possible to consistently achieve a 100% success rate (that’s almost a physical law); and
            b. Where a 100% success rate is required (for instance in space exploration missions), exorbitant amounts of money are required to make it happen. So, in principle, the current level of success (more or less) represents a socially acceptable level based on the opportunity cost involved in changing it either way (up or down).

            My conclusion: Let’s stop talking about a problem that doesn’t really exist.

            Cheers, Shim.

            • Steve Romero, IT Governance Evangelist, PMP
              March 27, 2010 | 12:26 pm

              Shim, I now have to completely disagree with you. There are far too many project failures – again, if project failure is defined as: “the project does not meet its intended and stated commitments.”

              Project failure does not only occur when the solution doesn’t work. Yes, missing performance objectives is something we see with IT projects, but I have found these types of failures to be in the minority, and seldom does the technology fail outright.

              Project failure occurs when:
              – Organizations invest in the wrong projects (not those most essential to meeting Enterprise Strategy)
              – The speed of the project does not meet the needs of the business (projects are approved predicated on when value is expected to be realized)
              – The value of the project is not commensurate with the investment in the project (and I find most organizations are not even able to accurately determine investment value – and sometimes not even the “actual” cost of the project, which includes indirect, business process change, and full-lifecycle costs)

              I highly suggest you start following @mkrigsman. Michael Krigsman writes a blog entirely dedicated to project failures. For us project success advocates, his blog is sort of our “Project Failure Maypole.”

              My last comment is in regard to your assertion, “It is not possible to consistently achieve 100% success rate.” Here is where I completely agree with you. In fact, I want to always see a certain level of failure because it means we’re pushing the envelope and not playing it too safe. So I hope you weren’t making the assertion based on the assumption that I was advocating something that is not only unrealistic, but contrary to businesses accepting a healthy amount of risk in their investments.

              For those of us who find IT project failure rates to be unbearable, I doubt there is anyone who has ever suggested the target is a 100% success rate. What I do want is for Enterprises (not just IT) to do what they say they are going to do, and deliver what they say they are going to deliver. And once again, I find few that are able to do so when it comes to their IT projects.

              Steve Romero, IT Governance Evangelist

              • Shim Marom
                March 27, 2010 | 11:29 pm

                Hi Steve, this is now turning into a serious discussion that needs to have its own space. I will reply to your comments, in detail, in a separate blog post later on today.

                Cheers, Shim Marom
                http://www.quantmleap.com

      • Steve Romero, IT Governance Evangelist, PMP
        March 26, 2010 | 10:13 am

        It is nice to “meet” you as well Samad. I found you on Twitter and I will be “following” you more closely. I look forward to it.

        And thanks again for a great post.

        Steve

  4. Scott Ambler
    July 29, 2010 | 3:29 pm

    It’s nice to see your writings on this topic. I’ve been running industry surveys for several years now, see http://www.ambysoft.com/surveys/ , and have consistently gotten different (and more positive) results than what is published in the Chaos Report.

    • samad_aidane
      August 4, 2010 | 10:16 am

      Scott,

      Thank you so much for your comment. I found http://www.ambysoft.com/surveys to be a great resource when I was researching scaling agile. Your surveys were extremely helpful to me in understanding what specific agile practices people are using on their projects.

      I plan to use your site more and recommend it as a resource as I continue the blog post series I started on tailoring agile for large system integration projects.

      Thank you again Scott.

  5. Mike Clayton
    August 25, 2010 | 3:33 am

    Samad – a simple thank-you from me. I have long wondered if the Chaos Report would be worth buying and have been shy of taking the risk. Your blog does us all a valuable service by making the EQUITY team’s work available, and helping us assess that risk.
    Mike

    PS: Thanks too, to Glen Alleman at http://herdingcats.typepad.com/ for signposting me here.

    • samad_aidane
      August 25, 2010 | 4:02 am

      Mike,

      Thank you so much for your comment. I am glad you found this information helpful. I have been concerned for a long time about the negative impact the Chaos Report has on the perception of IT projects, especially the perception of business sponsors. I was thrilled when Mr. Chris Verhoef agreed to do the interview. I am also grateful to Mr. Glen Alleman for mentioning this post on his great blog: http://herdingcats.typepad.com.

      Thank you again Mike.

  6. Tannguy
    September 30, 2010 | 12:24 pm

    Very interesting comments. I used to quote the Standish survey, as did PMI in their PMI Fact Book…
    I can understand that the definitions of “Project success” and “Project challenged” are not reliable. However, “Project cancelled” seems to be a stronger category.
    My concern is that the Standish report is the only one I know of on this topic. It may be wrong, I agree.

    Where is reliable data? As I said, PMI was using this data a few years ago as a reference for project success rates. Is that still the case?
    Regards,
    Tannguy

    • samad_aidane
      September 30, 2010 | 9:59 pm

      Tannguy,

      Thank you for taking the time to comment.

      Unfortunately I am not aware of any other reliable report at this point. I don’t think PMI uses the figures from the Chaos Report anymore but I am not 100% sure. I would love to know if there are other reliable studies.

      Thank you again.

  7. Brian Finnerty
    October 1, 2010 | 2:05 pm

    Good interview, and thanks for the added perspective on the authors of this paper. I’ve often questioned the validity of the project failure rates in the Chaos report, but this really lays out a compelling counterargument.

    See a continuation of this discussion with comments at http://blogs.innerworkings.com/fmckeagney/2010/10/01/meaningless-the-rise-fall-of-the-chaos-report-figures/.

    Have we heard any response from the Chaos report authors as yet?

    • samad_aidane
      October 17, 2010 | 12:21 am

      Brian,

      Thank you for the comments.

      So sorry for the late reply. I just saw your comments this afternoon.

      To answer your question, there has not been any response from Chaos Report authors.

      Thanks for the link to Fran’s article. Loved it and left a comment. It is great to see that the interview contributed to the conversation about the Chaos Report.

  8. Rohane
    October 28, 2011 | 8:20 am

    Thanks for this article.
    The key question for me is “What is the alternative?”
    CHAOS has presented an easily packaged, ready reference. Without it, how do normal folks get a sense of the stats?
    What can you say about that?

    • Shim Marom
      December 19, 2011 | 2:58 am

      Rohane,

      If you think about it for a minute you will see that your question doesn’t really make much sense at all.

      Just because the Chaos report is packaged in a nice and easily digestible way does not absolve it of the need to also be accurate. Are you suggesting that incorrect evidence is better than no evidence at all?

      Cheers, Shim

  9. Marc Schluper
    January 20, 2012 | 11:48 am

    Many years ago, in an undergraduate math class, I learned that when solving a problem, more than half the effort goes into thoroughly defining it. In software development, if we really understand the problem we need to solve, the solution almost pops up before our eyes. If we can’t get a clear idea about what the software should do, it’s fine to build prototypes as long as we know we are building prototypes. We get into trouble whenever we make ourselves and others believe that we are building a solution while only pretending we know what’s needed. So when we are measuring the success of projects, we should not mix projects that have a clear understanding of what needs to be built with those that don’t.
    My advice: If you don’t know, ask. Never pretend you know when you don’t. Don’t rely on people who pretend.

  10. tarek
    April 9, 2012 | 2:47 pm

    Hello all,
    Can you please explain, or point me to anything about, the Chaos Report, such as its definition or its benefits?
    Thank you

  11. Hard Truths About Public Sector IT
    July 8, 2012 | 11:45 pm

    […] predictor applies whether or not you accept the 1995 or 2009 Standish Reports, or […]

  12. Peter Hawkins
    December 16, 2012 | 8:51 pm

    My son has an honours degree in maths and finance and a postgraduate degree in economics, and he has helped PhD students with statistics. I asked him how I should view these reports, which, from my experience (nearly 40 years in IT), don’t make much sense. He summed it up: 40% of people don’t believe in statistics.

  13. […] is not unanimous; there are those who doubt its validity for all types of projects [see this link and this one, for example] and those who claim the success rate is above 50% [with a sample […]

  14. […] [4] S. Aidane, The “Chaos Report” Myth Busters, 26 March 2010, see here. […]

  15. […] the Chaos reports from the Standish Group, but these are highly questionable. I advise you to read the following interview, and then to toss your personal copy of those reports into the trash […]

  16. Amol
    September 30, 2013 | 2:10 am

    Thanks, all, for the great information. Many of us have used the Chaos Report as a reference at some point in our consulting careers. It is great to know the other side of the story, though it is a very dense read!

  17. […] If you like this interview, you might also like: The Chaos Report Myth-busters. […]

  18. […] There is also considerable criticism of the Chaos Report, partly because it clings to the old-fashioned Triple Constraints and ignores more modern insights such as the Six Triple Constraints (with the additions: Quality | Risk | Customer Satisfaction). There is also much criticism of the methods used to collect and interpret the underlying data. On this, see among others: The Rise and Fall of the Chaos Report Figures and The “Chaos Report” Myth Busters. […]

  19. […] agrees with the Standish Group. While not an easy read, you will find Samad Aidane’s article, The Chaos Report Mythbusters, a pretty thought-provoking critique of the Chaos Report. Based on my experiences though, it is easy […]

  20. […] unanimous; there are those who doubt its validity for all types of projects [see this link and this one, for example] and those who claim the success rate is above 50% […]