Mike Cohn and Kenny Rubin's Comparative Agility Survey attempts to measure your team's behavior in relation to the Agile values and principles. It compares your results to those of the survey's sample population; it doesn't give you an absolute score, but rather a relative one.
Does it provide realistic information? Maybe; I hope so. How do we know? One technique the social and behavioral sciences use is a test of the survey's reliability and validity. When I talked with Rubin this summer, he said there are plans to do such a study in the fall of 2009. One of the typical tests of validity is the expert review. So if you have done a few years of Agile software development, take the test and give your opinion. Note that the definition of an expert, according to author Malcolm Gladwell, is 10,000 hours of concentrated study in the field (for example, 10 years of 4 hours per day studying a complex skill such as kung-fu, playing the bass guitar, or accounting).
Well, by that strict definition I'm no expert; I'm not sure there are more than 100 experts in Agile in the world in 2009. However, I do have some experience: 4 years in Agile practices and over 25 years in software development. I took the survey with a recent project team in mind (a VoIP product team at a client). Below are the results. On the whole, I believe the survey instrument is fair and well done. There are some areas I'd prefer to see expanded, a few questions that could easily be refined, and some terms that should be defined for clarity.
Did it give accurate and useful results? In my opinion, perhaps... well, yes! The team I described in the survey was one of the best teams I've had the privilege to work with in my 4 years of consulting on Agile Transformation project teams. Overall it scored consistently high relative to the other 800 respondents.
Results:
See below for an interpretation of the results. Basically, long bars to the right (measured in standard deviations) indicate the team is much better than the average of all other teams, while short bars to the right indicate it is slightly better. Bars to the left indicate it is worse than the average team.
The following information compares your specific survey results against the entire Comparative Agility database of 805 surveys that existed on Sep 23, 2009.
Your results are shown in terms of the number of standard deviations that your answers differ from the surveys in the Comparative Agility database. If your score differs by a positive number, then your answers for that dimension or characteristic are "better than" the average answers in the Comparative Agility database. If your score differs by a negative number, then your answers for that dimension or characteristic are "worse than" the average answers in the Comparative Agility database.
The length of the bar for each dimension or characteristic represents the magnitude of the difference between your answer and the average answer in the database. More specifically, for each dimension and characteristic, the Comparative Agility website computes the average answer of all questions for the specific dimension or characteristic by examining the responses to all surveys (excluding your survey). In addition, the website computes the standard deviation among all of the surveys for that dimension or characteristic. Finally, your answers are compared with the average answers and the difference between your answers and the average answers is then expressed in terms of the number of standard deviations.
For example, if your combined answers for a dimension average 3.5 and the database average (excluding your answers) is 3.0 with a standard deviation of 0.25, then you would see a bar with a length of +2 standard deviations, indicating that for that dimension, your answers are two standard deviations more positive than the average of all the surveys in the Comparative Agility database.
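The arithmetic in the example above can be sketched in a few lines of Python. This is only an illustration of the calculation as described, not the Comparative Agility site's actual code, and the function name is hypothetical:

```python
def standardized_score(your_average, database_average, database_std_dev):
    """Express your dimension average as the number of standard deviations
    it differs from the database average (positive means "better than")."""
    return (your_average - database_average) / database_std_dev

# The article's example: your dimension average is 3.5, the database average
# (excluding your survey) is 3.0, and the standard deviation is 0.25.
print(standardized_score(3.5, 3.0, 0.25))  # prints 2.0, i.e. +2 standard deviations
```

A bar of length zero (the X on the zero line mentioned below) corresponds to this function returning 0.0, since your average then equals the database average.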
If you see an X on the zero line in the graph for a particular dimension or characteristic, that means that your answer was equal to the average answer in the Comparative Agility database (in other words, zero standard deviation difference).
There are two graphs below. The first graph is for the seven Comparative Agility dimensions: Teamwork, Requirements, Planning, Technical practices, Quality, Culture, and Knowledge Creation. In this graph we take all of the questions related to the dimension (e.g., all of the teamwork questions) and we compute the statistics referenced above and then show how your answers compare to the Comparative Agility database of surveys.
The second graph shows all of the characteristics. Each dimension is made up of three to six characteristics. As with the dimension graph, each result in the characteristics graph shows how you compare to the Comparative Agility database of surveys for all of the questions within a particular characteristic.
By examining the Dimensions and Characteristics graphs, you can see how you compare to other organizations that have taken the survey.
Comments
http://www.agilejournal.com/articles/columns/column-articles/2588