
Comparative Agility Survey Results

Mike Cohn and Kenny Rubin's Comparative Agility Survey attempts to measure your team's behavior in relation to the Agile values and principles, comparing your results against the survey's sample population. It doesn't give you an absolute score, but rather a relative one.

Does it provide realistic information? Maybe, I hope so. How do we know? One technique the social and behavioral sciences use is a test of the survey's reliability and validity. When I talked with Rubin this summer, he said there are plans to do such a study in the fall of 2009. One of the typical tests of validity is the expert review, so if you have done a few years of Agile software development, take the test and give your opinion. Note that author Malcolm Gladwell defines an expert as someone with 10,000 hours of concentrated study in the field (for example, 10 years of 4 hours per day of study of a complex skill such as kung-fu, playing the bass guitar, or accounting).

By that strict definition I'm no expert; I'm not sure there were more than 100 Agile experts in the world in 2009. However, I do have some experience: 4 years in Agile practices and over 25 years in software development. I took the survey with a recent project team in mind (a VoIP product team client); the results are below. Overall I believe the survey instrument is fair and well done. There are some areas I'd prefer to see expanded, a few questions that could easily be refined, and some terms that should be defined for clarity.

Did it give accurate and useful results? In my opinion, perhaps... well, yes! The team I described in the survey was one of the best teams I've had the privilege to work with in my 4 years of consulting on Agile Transformation projects. Overall it scored consistently high relative to the other 800 respondents.

Results:
See below for the interpretation of results. Basically, long bars to the right (measured in standard deviations) indicate results well above the average of all other teams, while short bars to the right are slightly better. Bars to the left are worse than the average team.

[Dimensions and Characteristics graphs from the Comparative Agility report]
The following information compares your specific survey results against the entire Comparative Agility database of 805 surveys that existed on Sep 23, 2009.
Your results are shown in terms of the number of standard deviations that your answers differ from the surveys in the Comparative Agility database. If your score differs by a positive number, then your answers for that dimension or characteristic are "better than" the average answers in the Comparative Agility database. If your score differs by a negative number, then your answers for that dimension or characteristic are “worse than” the average answers in the Comparative Agility database.
The length of the bar for each dimension or characteristic represents the magnitude of the difference between your answer and the average answer in the database. More specifically, for each dimension and characteristic, the Comparative Agility website computes the average answer of all questions for the specific dimension or characteristic by examining the responses to all surveys (excluding your survey). In addition, the website computes the standard deviation among all of the surveys for that dimension or characteristic. Finally, your answers are compared with the average answers and the difference between your answers and the average answers is then expressed in terms of the number of standard deviations.
For example, if your combined answers for a dimension average 3.5 and the database average (excluding your answers) is 3.0 with a standard deviation of 0.25, then you would see a bar with a length of +2 standard deviations, indicating that, for that dimension, your answers are two standard deviations more positive than the average of all the surveys in the Comparative Agility database.
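The calculation described above is essentially a z-score. Here is a minimal sketch in Python; the function name and sample numbers are my own illustration of the report's example, not code from the Comparative Agility site:

```python
def deviation_score(your_average, database_average, database_std_dev):
    """Express the gap between your average answer and the database
    average as a number of standard deviations (a z-score)."""
    return (your_average - database_average) / database_std_dev

# Using the example figures from the report text:
# your dimension average 3.5, database average 3.0, std dev 0.25
print(deviation_score(3.5, 3.0, 0.25))  # -> 2.0 (a bar of +2 std devs)
```

A score of exactly 0.0 would correspond to the "X on the zero line" case the report describes, where your answers match the database average.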
If you see an X on the zero line in the graph for a particular dimension or characteristic, that means that your answer was equal to the average answer in the Comparative Agility database (in other words, zero standard deviation difference).
There are two graphs below. The first graph is for the seven Comparative Agility dimensions: Teamwork, Requirements, Planning, Technical practices, Quality, Culture, and Knowledge Creation. In this graph we take all of the questions related to the dimension (e.g., all of the teamwork questions) and we compute the statistics referenced above and then show how your answers compare to the Comparative Agility database of surveys.
The second graph shows all of the characteristics. Each dimension is made up of three to six characteristics. As with the dimension graph, each result in the characteristics graph shows how you compare to the Comparative Agility database of surveys for all of the questions within a particular characteristic.
By examining the Dimensions and Characteristics graphs, you can see how you compare to other organizations that have taken the survey.