Archive for the ‘SCAMPI’ Category


Out of the recession, at last?

January 19, 2014
Number of appraisals per calendar year

We may not be completely out of the tunnel yet, but things are looking up. After many years of recession and “credit crunch”, it seems that companies are beginning to invest in the future and in their improvement programmes again. This is not an absolute indicator, and the trend might still change, but after a few years of reduction, 2013 was the second year in which the number of reported CMMI appraisals (SCAMPI) increased. While the 2012 increase was small enough to be considered a flat line, 2013 shows an acceleration in the rate of increase.

This is only a small indicator, but it does show that organizations are starting to take things seriously again and realize that it is time to start up their improvement programmes to survive future downturns.

I am optimistic.


Why are you doing this?

June 9, 2013

I am regularly approached by organizations that want to be appraised at CMMI Maturity Level x. When asked why, they give a variety of responses, which basically come down to the fact that they would like a certificate to hang in the lobby. It may be that a customer or prospect has requested it, or that someone on the board of directors read an article. When challenged and questioned on the level of investment required, the disruptive nature of an appraisal, and management's responsibility for the results, they often show that they have no understanding of what they are trying to do.

Another recurring theme is "we are looking at achieving maturity level 3, we do not need to go higher (it is too expensive)", followed by the question "is this little bit enough to satisfy the maturity level?"

We must ask ourselves how much money we are willing to spend on getting a piece of paper in the lobby. It might open the door to responding to a request for proposal from a potential customer, but that paper will not noticeably improve quality, time-to-market, productivity, reliability, ability to meet deadlines, customer satisfaction, or employee retention, nor secure repeat business from satisfied customers. Just as a university diploma does not make you intelligent.

The aim of an improvement programme, of a change programme (using CMMI or any other technique) is to improve organizational performance, and not to implement fancy processes. If what you are doing does not help you to manage your organizational performance, you are wasting time and energy and not really improving anything. It is the difference between studying what may be useful in your career and studying to pass the test and forget everything the next day.

CMMI, ISO, Six-Sigma, Lean and the others are not necessary to improve organizational performance: they are tools, and if they are used intelligently, they may help guide you, but implementing them without thinking will only lead to expensive long-term failure. Within CMMI, there is a process area called "Organizational Performance Management" (OPM). OPM is listed at the highest level (maturity level 5) because this is the goal; the rest of the model – practices, goals, process areas, etc. – are only some of the steps required to manage your organizational performance effectively and efficiently.

Managing performance requires understanding performance. That can only be done when you have stabilized the performance of your teams, projects, services and are delivering products in a predictable way. In order to do that, you need to understand the level of predictability of your most important work practices (or processes), which means they need to be regularly monitored and analysed. You can only do that if you are sharing the practices in teams and projects enough to get statistically significant data. And of course, you only want to share the practices and processes which are bringing real benefit to your business, your staff, your products and services.

And so, looking at what you need to do from the beginning, we can travel through your capability maturity (maturity is how well you know your own strengths and weaknesses, how well you understand the limits of your potential; this comes with time, experience, successes and failures).

The first step we must consider is what you are trying to achieve. If I talk about your productivity, what do you understand? Are you trying to produce the highest number of widgets, reduce time to market, offer zero-defect products, or be the cheapest service provider in the world? This is necessarily the first step in your improvement programme: decide, define, document and distribute your vision for the organization; there is little point in trying to be recognized as the best in the world if your staff are cutting corners to keep costs down. Once your goals are well communicated, put metrics in place which support them. From the start, you need to understand that people act according to how they are measured. I am always amazed at the number of companies which tell me that "quality" is their primary motivation, but then measure only delays and budgets: you are in fact communicating that quality means fast and cheap.

After this, you need to allow the professionals to do their jobs as they believe is most appropriate to meet these goals. The results, practices and methods are then analysed and compared so that we can figure out which tools, practices and processes are worth sharing across the organization. Once they are shared, we can start to measure the predictability of the results and refine them, which will finally allow us to manage our organizational performance.

The starting point is not to identify steps and document this as the standard process which must be obeyed at all costs. The starting point is not to just talk about quality, but measure only delays and budgets. The starting point is not to find the minimum required to satisfy some theory; the starting point is to inspire your teams to reach the end-point.

The end point is organizational performance management.

In CMMI terms: maturity level 5 is the only destination possible, the rest are dead-ends.


Reviving CMMI – a Conclusion?

June 3, 2013

Third item on this topic, I know. Some people believe that I spend too much time complaining without proposing a solution, so here is my proposal: measurement and analysis should be expected from the start.

Many times, when I have asked for evidence of the implementation of measurement and analysis, I have been provided with evidence of project monitoring and control. Measuring that your project tasks are progressing and that you are respecting delays and budgets is not part of MA; it is part of WMC/PMC. It should not be difficult to include, in either the model or the appraisal methodology, a better set of examples of how MA should be applied, and to place this as a requirement in the appraisal process.

Currently, the appraisal method focuses on the practices; during the training, we say that the goals are what is important and required, while the practices are only "expected". During the appraisal, however, we focus on measuring the practices rather than the goals. We assume that if all the practices are in place, then the goal must be satisfied; if one practice is missing, then the goal is not satisfied. And even if we did focus on the goals, I cannot help but notice that the goal (and purpose) statements all focus on activities, on tasks, on practices, and not on results.

Let’s require business measurements and demonstrations of results for each goal. CMMI is (supposedly) there to help achieve business productivity results and not just to do a series of tasks to get a certificate we can hang up behind the reception desk.

Why do you do "Requirements Management"? Show me results. Show me that the number of issues related to unidentified change requests has diminished; show me the reduction in unexpected requirements appearing during the V&V stage; show me data showing how the time to find a deviation from customer requirements in your project plans and other work products has gone down, because you have implemented successful "two-way traceability". Show me measurements that demonstrate the impact on the quality of your products and services, on customer satisfaction. If you cannot show me that, you have not implemented a useful approach to requirements management.

Of course, people will argue that you cannot measure improvements at maturity level 2, that you can only really have useful measurements at maturity level 4. But that is not true. You do not need control charts or five years of history to show a trend. If you did, no one would manage to satisfy the expectation for quality trends in the PPQA process area at maturity level 2 – or maybe you have just skipped over that passage and focused on having a static checklist? If you implement a practice, whether from the CMMI or elsewhere, it is because you expect to see a change once that practice is performed; and if you expect to see a change, there is something which can be measured.
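To make the point concrete: showing a trend does not require control charts or years of history. A simple least-squares slope over a few months of counts is enough. This is a minimal sketch with invented numbers (a hypothetical count of issues traced back to unidentified change requests), not data from any real appraisal:

```python
# Monthly counts of issues traced back to unidentified change requests,
# after introducing two-way traceability (hypothetical numbers).
monthly_issues = [14, 12, 11, 9, 8, 6]

n = len(monthly_issues)
mean_x = (n - 1) / 2              # mean of the month indices 0..n-1
mean_y = sum(monthly_issues) / n  # mean monthly issue count

# Least-squares slope of issues over time; negative means a downward trend.
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in enumerate(monthly_issues)) \
        / sum((x - mean_x) ** 2 for x in range(n))

print(round(slope, 2))  # -1.54, i.e. roughly 1.5 fewer issues each month
```

Six data points and one division: that is the level of measurement maturity needed to demonstrate that a level 2 practice is paying off.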

This is not a complicated addition to the model, it is only a clarification of the real purpose behind each one of the goals. A change in the appraisal process of this magnitude would ensure that people understand that CMMI actually has a benefit, and it would allow us (finally) to have some decent metrics as to the value of the approach.

I don’t know if anyone will read this or pay attention, but I am glad I got it off my chest. Hopefully, my next post will be about something else. Maybe I will develop a business based appraisal system of my own, but if I am not supported by the community at large, no one will use it and I will waste my time: it is so much easier to do what the model says without thinking.

and you might get a certificate to hang up, a logo to put on your website…


Reviving CMMI – the response

May 23, 2013

It seems that my previous post, "Reviving CMMI", generated quite a lot of reactions. Some people understood me to be suggesting that we let the old girl die; some encouraged the idea, others were horrified at this attempted matricide. A few reacted supportively or critically without giving enough information for me to know what they thought I had said.

So, I decided to clarify my feelings on the subject. I am a process improvement consultant, I have been living off CMMI for many years and would not recommend cutting off the hand that feeds me without careful consideration.

1. CMMI cannot be considered as fit for purpose

This is largely because the owners of the model, the user community and the market do not agree on its purpose. The CMMI Institute (following on in the footsteps of the SEI) places a large emphasis on appraisals and maturity levels, publishing numbers of appraisals, time to reach a level, number of maturity levels per country and per industry – in fact all the measurements and data produced are directly measurements of the appraisal results. But, at the same time, we are presenting CMMI as a tool for process and productivity improvement rather than a certification diploma.

Without a clear understanding of the purpose, it is not possible to design something fit for purpose.

2. As a Tool for Improvement

As a tool for productivity improvement, the model does not contain enough information to facilitate a seriously useful and helpful implementation. There are hidden relationships between process areas and practices which are not easily identified or understood.

Personally, I try to use the model as an efficiency and quality improvement tool; I need to spend an excessive amount of time clarifying the cause-and-effect relationships within the structure of the model. I also need to explain in detail how to understand the purpose and meaning of things within a business context. The standard training does not explain the evolution from maturity level 2 to 3 and beyond. There is a vague statement that it is not recommended to skip levels, but no clear rationale clarifying the risks and consequences. The relationship between specific and generic practices is not sufficiently clear in the model or the training. These are vital facts if you want to use the model for your business.

If the model is to be focused on improving quality and productivity, it needs to include more information on how to apply it successfully.

3. As an Appraisal Model

An ISO audit takes a couple of days; a CMMI appraisal can take a couple of weeks. Why? A number of "certified lead appraisers" do not appear to understand the purpose of the model. There was a recent case of an organization which was required, according to their appraiser, to have a separate policy document for each CMMI process area, clearly stating the name and structure of the PA – this is not the goal of the model, but people with no experience of the "real" world are being authorized to appraise successful organizations, and they are frequently focused on respecting the letter of the law, down to the last comma, without understanding it.

The current appraisal method spends a lot of time trying to find evidence of practices, but could be a lot more focused on the impact and results of successful implementation of recognized and accepted best practices.

Moving forward

As I stated, I believe it is time to perform an in-depth lessons-learned analysis to find out what went wrong and how to correct the product, making it into something that will have the impact which was promised.

This must start with an understanding of the purpose of the beast. If we are talking about a tool for process improvement, we need an approach to educating practitioners and users which focuses far more on the practical side of change management and improvement. We need more information regarding the implementation of the practices. Potentially, this may mean that the core model is complemented by a series of "recipe books" for different industries or contexts.

I would like to see the model completed with clear business related impact and influence statements, clarifying why things need to be done to save those who are implementing the letter of the law from their own stupidity.

I would like to see CMMI separated and organized so as to distinguish the improvement potential from the appraisal requirements. I am not sure if both can survive with the same name, but I trust that the SCAMPI appraisal methodology can be adapted to other models and standards and be recognized in its own right. The appraisal methodology needs to focus a lot more on the business and cultural aspects of the model, stopping lead appraisers from burdening businesses with bureaucracy because a sub-practice says that is the way it should be.

CMMI should be perceived as a pragmatic approach to helping organizations increase job satisfaction and customer satisfaction. And we should be able to demonstrate that from the beginning. The appraisal method should focus on measuring the results, not the practices.


The Sweet Smell of Success

November 9, 2012

It is so rewarding when one gets the opportunity to encounter real maturity and growth. When I first visited this company a little over two years ago, it was a typical organization, seeking to have a CMMI “certification” in order to be able to advertise their success. They had made the usual mistakes, taking short-cuts in the wrong places, trying to force through an approach that did not really correspond to the company’s culture or their business objectives.

Today, I am faced with a company that is well on its way to a successful maturity level 4. Engineers are telling me how useful quality assurance is, people at every level of the organization can explain how an intelligent use of measurements has made them more productive, has increased the quality of the products and services they deliver to their customers. Staff are pleased with the way things have progressed, particularly over the past year. The expansion of the company has been facilitated by this improvement, as they have learned that rather than using CMMI to attract customers, it is more interesting to use quality to keep them.

Measurements and trends of cost of quality and numbers of defects are progressing beautifully. Of course, all is not perfect, but progress is accelerating in all aspects. They are demonstrating how an international organization can combine a successful, flexible and cost efficient approach through a combination of Agile, CMMI and Prince2.

Over the past few years, I have given them some training; I have tried – as is my wont – to educate rather than to instruct them. And so I feel I can take some pride in this beautiful success story, but they did it. They understood the principles, they changed the culture, they used training and well-placed measurement. They understood that the principles behind models and standards are focused on communication, learning and sharing – and not (as many would have you believe) on bureaucracy and pointless paperwork.

I am proud of this company, and I will claim my little bit of responsibility for their success. But, more than anything, I need to say: congratulations, ISDC! Keep it up.


Sampling: Does It Work?

April 22, 2011

When using the new version of SCAMPI appraisals (v1.3), there is a new sampling methodology that aims to ensure the selected projects being analysed are actually representative of the organization. Previously, the appraiser needed to select some three (or four) projects that were considered representative of what the organization did. By demonstrating that these projects did a good job, as they were representative of the whole, they naturally showed that the whole organization always did a good job. Of course, companies were eager to present only what they did best; there are many techniques for hiding the projects you do not want seen.

With the new system, one needs to identify all the factors that may cause work practices (processes) to be implemented in different manners, consider the various possible combinations, and select a proportional sample of projects from each possible combination. The number of projects to select is influenced by the total number of projects in the organization versus the number of projects included in each subset.

Let's consider a commercial software development organization: not a big multinational, but a going concern, which develops and sells products to a variety of customers. They have a number of different projects running; they are all for different customers, related to different types of products. Some of them are implemented according to Agile principles, others are more "waterfall", and a few are a home-made combination of the two. Some projects have heavy involvement by the customer, who may control the requirements, do the project management or even run the testing; others are completely hands-off, depending on the customer. Some of the developments are quite large, others quite small. This is just a software development company; they work to customer requirements.

The MDD requires that the following factors be considered: location, customer, size, organizational structure and type of work, as well as any other factors that may influence the manner in which the processes are implemented. In my (theoretical) example above, we have a different customer for each project, the organizational structure depends on the level of interference/control allowed to the customer, the sizes of the projects vary widely, and they use different life-cycles and a variety of languages. Very rapidly, when doing the calculations for the sampling, we discover that every single project potentially sits in a subset peculiar to that one project, meaning that in order to get a truly representative sample of the projects in the organization, there is an implicit need to review every project in detail! At this point, the natural tendency will be to decide that maybe this factor does not really have that much influence, that perhaps the other is not really a variation, and the number of projects to consider is creatively rationalized down to something realistic, honestly and in good faith.

Of course, a series of small projects, run by the same team, using the same processes, do not implement them differently every time a parameter changes in the environment, so the appraiser is required to analyse which of the parameters have an impact on how the processes are implemented. How do you determine this? You can discuss it with the appraisal sponsor and participants, who then have the opportunity of making the claims they want to make, once again focusing attention on the best projects and hiding the others. Otherwise, the appraiser has to perform a detailed analysis of the differences in implementation, which would amount to a mini appraisal of all the projects in order to identify which factors really matter – so one would need a full appraisal of all projects in order to determine how to sample the projects to review. Not an economically viable solution.
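The combinatorial explosion above is easy to demonstrate. This is a minimal sketch, with factor names and values invented for the theoretical organization described here (they are not taken from the MDD): every combination of factor values defines a potential subgroup, and the subgroup count multiplies with each factor added.

```python
from itertools import product

# Hypothetical sampling factors for the example organization.
factors = {
    "lifecycle": ["agile", "waterfall", "hybrid"],
    "customer_involvement": ["hands-on", "hands-off"],
    "project_size": ["small", "large"],
}

# Each combination of factor values is a potential subgroup.
subgroups = list(product(*factors.values()))
print(len(subgroups))  # 3 * 2 * 2 = 12 potential subgroups

# With, say, ten projects scattered over twelve subgroups, most non-empty
# subgroups contain exactly one project, so sampling "one per subgroup"
# quietly becomes "review nearly every project".
```

Add location, customer and language as factors and the subgroup count quickly exceeds the number of projects in any but the largest organizations, which is exactly where the temptation to rationalize factors away begins.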
Once the subsets of projects have been identified, there is a requirement to review one project from A to Z, and to collect "artefacts or affirmations" from another project within the same group, for "at least" one process area. So a second project can be reviewed and approved based only on affirmations in a single area – and this supposedly guarantees that everything else is done correctly.

A shame: when I first heard about the sampling factors, I thought this was a good idea. Unfortunately, it seems that a choice has to be made in implementation: go for a bureaucratic, extensive and expensive appraisal of everything, or play the numbers with as much ease and facility as before.

The consequence: lead appraisers (and organizations) that believe in quality and want to do a good job will continue to provide reliable results; those who want to give away (or receive) the highest maturity level possible will continue to falsify data.



October 5, 2010

The Standard CMMI Appraisal Method for Process Improvement (SCAMPI) version 1.3 is planned to be published in January 2011. There are some good things in the new approach which clarify and tighten the rules. Please note that the method is out for review and is not yet released. Changes will continue in the coming months; these comments are based on an intermediate release.

First of all, the selection of the projects or teams to be included in the appraisal is based on an understanding of the products and services provided by the organization in scope. The lead appraiser is required to understand the variations that exist in terms of technology, location, customer, etc. The participating projects are then selected to statistically represent the whole organization. This is performed through a methodology that is probably easier to apply than to explain, particularly for the moment, as the official explanation requires clarification of terms such as "basic unit", "sampling factor" and "subgroups". But it really is not as complicated as it sounds. For a diverse organization, this new approach will require that a larger number of projects participate in an appraisal, which should lead to more honest ratings for the unit. If the number of projects and teams to be reviewed exceeds the constraints of time and cost for the appraisal, the scope will have to be reduced. The question then is how this will be shown on the PARS and in the official reports of the appraisal, so as to avoid an organization advertising itself as Maturity Level x when in fact only a reduced scope was considered.

As a consequence of this, the concepts of focus and non-focus projects have been removed from the method.

The requirement for “direct” and “indirect” artefacts no longer exists. Instead, there is a requirement for the lead appraiser to ensure that there is enough evidence, both written (artefacts) and “oral” (affirmations), to demonstrate that the practice is correctly implemented. I would assume that we still need to demonstrate that the practice is performed, used and understood.

Another change is that the lead appraiser's experience may no longer count towards the experience of the team. The team members, excluding the lead, will have to demonstrate an average of six years' experience and a total of at least 25 years. This may be a serious issue for some younger organizations, particularly when working in countries such as Romania or Ukraine. There is a waiver option for young organizations, but the SLA will have to make a specific request to the SEI each time, in good time.

On the downside, the new licensing structure means that each Class A appraisal will incur a $1000.00 charge, which will push up the cost of appraisals at a difficult time for most businesses.

Change requests can be sent to "cmmi-comment{at}" with "SCAMPI v1.3" referenced in the subject.
