Friday, September 25, 2009

Scrum Experience Report for CSP

This is my experience report for the Scrum Alliance's Certified Scrum Practitioner (CSP) requirements. I publish it online now because of the debate on the CSM Exam. It was written in April 2009 in reference to a project that lasted over one year.

----

1. Through the questions below, please describe one project on which you have used Scrum over the past twelve months.

1.1. What was the project’s purpose? What business goal was the project intended to deliver?

To introduce Agile practices to an existing organization while developing a new consumer VoIP product for home use. The software to be developed created new user accounts and provisioned the consumer's hardware on the network and the VoIP Broadsoft server.


1.2. What was the project length? What was the duration of the project?

I was on site for about 1 year (Oct 2007 - Oct 2008); the project continued until approx. Feb 2009, when it was canceled, after having delivered a product to the marketplace for a regional market test.

1.3. What was the cost of the project? How did budgeted costs compare to actual costs?

I'm not familiar with the cost/budget numbers. However, the total including hardware was around $10 million for the first year.

1.4. Discuss the value of the project? How did projected benefits compare to actual (if measured) benefits?

The value was a product in the new market space of consumer voice over IP to be sold in a major consumer electronics store. One estimate of the market space in 2007 was $1.2 billion and growing rapidly.

1.5. Discuss the project’s size. How many people were on the project team(s)? How were they organized into teams?

In Oct 2007 the project started with about 5 people; these people became the team leaders and project leaders. Hiring of team members started, and the first few sprints focused on infrastructure and on working the large backlog of features. Hiring continued for approx. 6 months, culminating in about 30 developers (Java programmers, QA, Web UI programmers), not including the 3-5 business analysts, 4-8 systems engineers, managers, accountants, directors, and VP. We started with two very small teams of about 4 or 5 people each (Java devs and QA), increased to 3 on-site teams of 7 to 9 people and one offshore team of about 5 developers.

1.6. Describe the project’s teams. Were the teams cross-functional and self-organizing? Were the teams collocated in an open space? Were the teams physically separated within one location, or located in more than one physical location?

The original team was collocated in one open space that we frequently rearranged physically to suit our needs. As the space was consumed by the growing team, these physical changes were reduced to individuals swapping desks to sit with a team.
The teams were cross-functional up to a point. The QA specialists were embedded on a team with the Java developers, and for quite a few sprints all UI work was done by these developers. Later, Web UI specialists were hired and embedded on a team; later still they floated among the 4 development teams, as there were only 2 UI devs, a UI architect, and a graphic designer. Toward the release date the UI members acted more like their own team, with part-time dev team responsibilities.
The systems engineers were always a separate team, though collocated physically in an adjacent, easily accessed room. The business analysts were physically in a separate room but interacted frequently with the developers.
The development teams were fairly self-organizing, several times electing to redistribute members and shuffle the team/squad make-up to share knowledge and skills. The leadership group remained core to the group, but as the group expanded the leadership was also developed, and several times interim leads were appointed because of long-term vacations.
The offshore team sent two developers for an initial 2 or 3 week session before starting work. They then returned and were managed by an on-site Scrum Master/delivery manager. Except for this group of 5 developers, all development was done in a collocated open space. The space was not perfect and not large enough toward the end, but it did include meeting spaces and, most of the time, enough open rooms for private conversations.

1.7. Tell us about the project’s initiation. How was the project initiated? How was the team trained to use the Scrum process?

The project was envisioned as a large effort to be grown from just a few existing employees; most of the people would be new hires or consultants. The original employees had used some agile-like practices on other projects and were interested in learning and applying Agile practices. They were given Scrum Master training, and several others were given Product Owner training, by SolutionsIQ. In addition to this training, the consultants brought on-site by SolutionsIQ were well versed in Agile/Scrum/XP practices. This embedding of Scrum/XP-knowledgeable people on the team, combined with the core employees' strong desire to practice Agile, created a core group of Agile-minded people.
The new hires were screened for a desire to use Agile practices, but originally very few had strong experience with Scrum/Agile. Much of the training was done on the job, just in time. Explanation and demonstration of Scrum, and later XP, practices by the core group were the major learning-transfer mechanisms.

1.8. Discuss project reporting. How did you report progress to management? To customers?

The most visible progress reporting for management was done at sprint demos, where working software was demonstrated; for example, the first phone call placed using the software was demoed at a sprint review, with the CEO taking the call. Reports were also given to management in meetings, covering progress, challenges, and dependencies. Dependencies on other internal systems and external vendors were some of the most challenging issues throughout the project.
Customers (end users) were not informed.
Internal customers (users of internal facing systems) were consulted during design and development of stories and present at demos.

1.9. How was change handled? What difficulties were surfaced by Scrum that had to be resolved? How were these resolved?

Interfaces with other groups, such as systems engineering, that required long lead times and large planning windows for hardware purchases and configuration changes were a challenge. In many cases Scrum functioned very well within the development teams; at the boundaries, however, the other groups not using Agile process/planning were often a problem. These problems were often handled in more traditional ways: overtime to get the job done, with changes made to meet deadlines for other teams and sometimes for the development team.
A sprint (3 weeks) using overtime was tried to increase development throughput, but upon completion of the sprint it was found to result in very little additional output and less developer satisfaction. The cost vs. benefit didn't work out, and management never asked for overtime again. Instead they faced the facts and decided to postpone the launch date, while at the same time deciding to cut the scope of the minimal feature set required for launch.

1.10. Discuss management. What was the previous role of the ScrumMaster? Who took on the role of Product Owner? To what degree were they successful in fulfilling their roles?

Having a new team to build was an advantage: the role of Scrum Master was new, and the organization hired an experienced consultant to play this role for all teams. Previous development groups at the organization were less defined than 'teams' - more 'work groups' led by a Director with 'lead developers'. The Scrum Master was instrumental in the project's success, alerting management to impediments and helping the teams to function and practice Scrum well.
The Product Owner role was given to a product manager who was trained in Scrum/PO practices and after several months was doing a very fine job. The PO and her team converted a backlog of many stories, estimated at over 2000 story points, into a release plan of around 750 story points, and then held to that basic release plan by removing or reducing scope as additional work was discovered. This was key to the success of the project!
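The kind of release forecasting the PO did can be sketched with simple velocity arithmetic. The backlog figures below are the rough numbers from this report; the velocity is an assumption for illustration only, not the team's actual figure.

```python
# Hypothetical release-plan forecast from velocity (a sketch, not the team's actual tool).
def sprints_needed(backlog_points: int, velocity: int) -> int:
    """Ceiling division: a partial sprint still costs a full sprint."""
    return -(-backlog_points // velocity)

original_backlog = 2000   # story points, initial estimate (from the report)
release_scope = 750       # story points after scope reduction (from the report)
velocity = 75             # assumed points per 3-week sprint across all teams

print(sprints_needed(original_backlog, velocity))  # 27 sprints
print(sprints_needed(release_scope, velocity))     # 10 sprints
```

Run with different scope numbers, this is exactly the lever the PO used: cutting scope from 2000 to 750 points turns an infeasible plan into a plannable one.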

1.11. Discuss engineering. What environmental factors or software engineering practices had to be changed?

The teams adopted many of the XP practices, although there was resistance to some. Over time the resistance faded on some practices. For example, pair programming was an issue for quite some time, but eventually most developers were very comfortable with pairing, and it became a "standard" practice. TDD was recognized but not practiced well and never became "standard". However, writing unit tests, integration tests, and acceptance tests during story development was "standard" practice. Acceptance tests were noted as a great benefit in systems integration testing of the pre-production systems - having these automated tests (StoryTestIQ) allowed the integration team to reach a very productive state in less than one day (assumed to need a week).
The practice of developers writing QA tests was an issue for quite some time. The QA members embedded on the teams were viewed (at the beginning of the project) as "tainted", and external QA was thought to be a "requirement" for quality. In the middle of the project (5 or 6 months in), the sheer size of the automated acceptance test suite made maintenance costly for the small embedded QA group, who also had to work on continuing stories. More and more responsibility for maintaining the suites was given to developers and shared across the teams.
The practice of not doing speculative work ("because we'll need it later") took many of the business analyst and Product Owner staff a while to accept; many of the new dev team members also had to be cautioned against it. This became the DB write-only data rule: if we don't need to read that data yet, consider it written - it's DONE.
The practice of not breaking the build took a very long time to sink in. There had been an existing habit of breaking the builds on previous projects, viewed as OK by current staff. This view was allowed to become "standard" practice, and in quite a few sprints there were days when the apps couldn't be built. It took quite a few months of pointing out lost time, risk, energy spent fixing builds, etc. before the attitude that breaking the build was OK appeared to die out. It would have been much easier to institute the policy if the original core development team had believed in it from the beginning. So even with a good CI build box, it is the people and attitudes that matter.

1.12. Tell us about stabilization. For how long did the software have to be stabilized before it could be released? How did you structure this stabilization process?

We did new development for about 9 months, then about 5 weeks of stabilization/integration on production hardware and network. Production hardware was viewed as too expensive to bring in (purchase/setup) before it was "needed". In hindsight we wished we had had mock-production hardware in a mock-production network. Much of the stabilization/integration trouble had to do not with software problems but with the networking systems - for example, firewall issues between servers and network open-connection timeout issues; the sort of things the development team assumed were too low level to be a problem. The network/systems engineers had a different belief system about these issues than the software development teams; this led to the largest issues and the most finger pointing. Had we had mock hardware to start testing on earlier, these issues and differences could have been ironed out in a less stressful environment.
To adapt to an apparent need for quicker releases, we changed from a 3-week sprint cycle to a 1-week sprint cycle during the 5-week integration period. It turned out there were far fewer application releases than anticipated, as fewer issues were software bugs; most were system/software configuration issues. The quicker pace also produced unneeded stress on the people. However, the 5-week time frame reserved for no new feature development (just bug fixes) was a smart planning move. I believe with mock hardware/network it could have been reduced, but other factors, such as workload (some temp QA staff were brought in to run manual acceptance test suites, etc.), would have been impacted. We went back to the 3-week sprint cycle after this stabilization period, when working on the 1.1 release.

1.13. Discuss success. To what degree was the project successful? To what degree was Scrum instrumental in the project's success?

The original project scope and deadline were very unrealistic; Scrum allowed the PO and organizational management to understand this was not feasible and to envision what would be. This didn't happen at the beginning of the project but about 3 months in. Scrum's empirical tracking and estimating allowed management to predict that the project would not be completed by the unrealistic date, and then to predict when a smaller project could be completed with a larger team. Scrum then gave the PO a way of reducing scope to the "minimum releasable" feature set. Scrum allowed the PO to change what was important to the goal of a release during different "phases" of development. For example, at one phase it was important to work on user interface and usability stories, but after the usability testing was done and a few issues were fixed, the focus changed to billing system stories. Reprioritizing the backlog allowed the PO to change what the dev team worked on and to communicate why that was important.
Whether the project is viewed by the company as successful, I don't know. The product made it to market, though not in the timeframe or with the features originally desired. However, it did make it to market in the predicted timeframe (which was very important) and with the predicted features. It was pulled from the market, so it did not succeed in the marketplace. Would it have fared better with any other development methodology? No, not at all. Was the market failure Scrum's fault? No, not at all. The product's viability in the market had been questioned all along. Scrum was nevertheless seen as a winning software development framework by the organization, and they continue to use it.

1.14. Discuss the Scrum Process on this project. To what degree was the Scrum process implemented "out of the box?" To what degree did you have to modify the Scrum process for this project? For each modification, how did you formulate the modification so that the basic inspect/adapt mechanisms continued to function? What parts of Scrum couldn't be implemented, or failed, and why?

I believe the process was very close to the "box instructions". We did 3-week sprints (not monthly), and our cross-functional teams did not include everyone required, but the others were very close to the teams and collocated in the same space. We used "team leads" per team and one Scrum Master for all teams, and that worked very well. The Scrum Master was present at all super-team functions and always available to the team leads. The team leads filled the "Scrum Master" role when he was not present, and sometimes when he was. We had the typical meetings (planning parts A & B, daily stand-up, product review & retrospective). We included backlog grooming meetings and release planning meetings. Upon inspecting the testing needs for stabilization - needing to produce a releasable product once per week - we changed our process to a 1-week sprint and reduced the meeting overhead by shortening but not eliminating the planning/demo/retro meetings. We changed back when we realized that such frequent releases were not needed or practical and the teams were not sustainable at that pace.


2. How do you cause the accuracy of Product Backlog estimates to improve? To what degree does their accuracy matter?

Several ways. Have multiple people estimate, have the estimates discussed, and allow the whole team to estimate. Allow time for the team to reconsider the estimates, and allow the team to preview the stories or research the stories or implementations before estimating. Develop domain knowledge in the product by doing stories; estimates of similar stories will then improve. Track the performance of the team - measure the accuracy of the estimates.
Accuracy matters because people want to be correct. The team will not be happy with inaccurate estimates; someone will desire improvement. Inaccurate estimates will result in the team taking on too little or too much, both of which create inefficient story progress through the work queue. The accuracy will be reflected in the team's velocity, or the lack of a steady velocity. It will matter to the PO, who would like to be able to predict releases and what features can be expected.
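Measuring estimate accuracy, as suggested above, can be as simple as comparing estimated points to after-the-fact actuals each sprint. A minimal sketch; the story data and the choice of error metric (mean magnitude of relative error) are illustrative, not from the project:

```python
# Sketch: track estimate accuracy per story (illustrative data).
stories = [
    # (estimated points, actual effort in point-equivalents)
    (3, 5),
    (5, 5),
    (8, 13),
    (2, 2),
]

# Mean magnitude of relative error (MMRE): average of |actual - estimate| / actual.
mmre = sum(abs(actual - est) / actual for est, actual in stories) / len(stories)
print(f"MMRE: {mmre:.2f}")  # lower is better; a falling trend sprint over sprint means estimates are improving
```

Tracking a number like this per sprint gives the team the feedback loop the answer above describes: someone sees the trend and pushes for improvement.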


3. How do you ensure that what a team commits to for a Sprint is what the team actually delivers?

Make the commitment visible to all - write it down. Make sure both the team and the PO understand the stories, the goal of the sprint, and how each story helps achieve the goal. Understand the acceptance criteria of the stories. Make sure the team tracks progress toward the sprint goal and stories, and make sure the team's work during the sprint is targeted at the sprint goal/stories - not at distractions (other work).
Acceptance tests ensure that the understood commitments are working at the end of a sprint. Run some or all of the acceptance tests (easy if automated) during the sprint demo.

4. What metrics do you use to track the development process? Which metrics have been changed, removed, or newly implemented as a result of using Scrum?

The release burndown charts; stories completed from the backlog versus stories in the backlog (added/removed stories); cost of the development team per sprint. On a recent project I've used Agile EVM in addition to release plans to track progress, which the client found helpful. Frequent releases allow the client and/or end users to track progress - it becomes much more visible, an implicit metric of success.
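Agile EVM, mentioned above, applies the standard earned-value formulas to story points. A minimal sketch; all the numbers below are made up for illustration, and the formulas are the generic earned-value ones, not a specific tool's:

```python
# Agile EVM sketch: earned value from completed story points (illustrative numbers).
total_points = 750        # planned release scope, story points
completed_points = 300    # points completed so far
budget = 1_000_000        # budget at completion (BAC), dollars
actual_cost = 500_000     # actual cost to date (AC)
planned_pct = 0.45        # fraction of the release planned to be done by now

ev = (completed_points / total_points) * budget  # earned value: value of work actually done
pv = planned_pct * budget                        # planned value: value of work planned by now
cpi = ev / actual_cost                           # cost performance index (> 1 means under budget)
spi = ev / pv                                    # schedule performance index (> 1 means ahead of plan)
print(f"CPI: {cpi:.2f}, SPI: {spi:.2f}")
```

With these sample numbers the project is over budget (CPI below 1) and behind plan (SPI below 1), which is exactly the kind of early signal a PO can act on by cutting scope.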

5. What type of training, resources, or tools would best help you successfully employ Scrum in the future?

A world/land in Second Life running a Scrum - i.e., a simulation that could be experienced by the team and stakeholders, helping them understand how empirical process controls allow the possible to happen in improbable situations.

6. Describe the largest impediments you have encountered and how you have resolved it (or not!)

On a newly forming team, the habit of breaking the build was carried over by some core members from previous groups in which the rule "don't break the build" was not followed. For quite a few sprints the teams would struggle with hours, and sometimes days, of the build being broken and non-functional in some way. I discussed, preached, harassed, etc. to get the team to improve on this, but with the core team members setting a tone of "it was easier to let it break and fix it later", it was hard to overcome this poor practice. I decided to make the issue more visible, so at a few retrospectives I announced the amount of time the build was broken and the number of times, stats derived from the build machine. Shockingly, the build was broken something like 18-20 times in a 15-day sprint, with the mean time to fix in the several-hour range (I don't remember the exact figures now). Reporting these stats at the retro started to get some attention; more people started to pay attention to the build and its state (working/broken). We got a build monitor for the team room (this helped in identifying when it was broken), and this reduced the mean time to fix. However, the number of occurrences of breaking the build was still quite high. Getting the QA team members to loudly complain when the latest build would not function for their testing helped. At one retrospective I led the 25-person team in an initiative (group game) to experience an analogy to total team throughput when defects stop the process, and to discover who is best suited to fixing the defects (responsibility). Within a few more sprints of continual improvement we got to a point where the build was rarely broken.
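The stats I reported at the retrospective can be computed directly from a CI server's break/fix timestamps. A sketch of that computation; the event data below is invented for illustration, not the project's actual build history:

```python
# Sketch: number of build breaks and mean time to fix, from CI timestamps (illustrative data).
from datetime import datetime

# (time the build broke, time it was fixed) pairs, hypothetical values
breaks = [
    (datetime(2008, 3, 3, 9, 10), datetime(2008, 3, 3, 11, 40)),
    (datetime(2008, 3, 4, 14, 5), datetime(2008, 3, 5, 9, 30)),
    (datetime(2008, 3, 7, 10, 0), datetime(2008, 3, 7, 12, 15)),
]

# Duration of each outage in hours.
durations_h = [(fix - brk).total_seconds() / 3600 for brk, fix in breaks]
mean_ttf = sum(durations_h) / len(durations_h)
print(f"{len(breaks)} breaks, mean time to fix {mean_ttf:.1f} h")
```

Putting numbers like these in front of the team at a retro turned a vague complaint into a visible, trendable metric, which is what finally got attention.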
This experience taught me that sometimes it is very important to fight early for the team norms I believe in, since changing norms later is much harder.

7. Describe how you have worked with other ScrumMasters to advance the use of Scrum within an organization and within the community.

On the project I'm writing about we had 3 or 4 CSMs; many times we would have lunch to discuss the project, its issues, and possible tacks to resolve them. By the end of the project (approx. 1 year) the organization had adopted Scrum as a development practice and was very happy with the process and the success of the project. They even had non-development groups wishing to adopt Scrum.
Within the community I participate in various user groups (XP, Scrum, Beyond Agile), attend and participate in lectures by various Agile luminaries, read and post to various online Agile discussion groups, and started blogging on AgileIQ.

Apple sees the Holistic Environmental Picture

Apple has been criticized for its environmental policies in the past (Greenpeace).

But isn't the holistic picture exactly the view Greenpeace should want? The environment is, after all, a holistic system.

Apple took a big-picture, long-term view of greenhouse gas emissions, including the consumer's product usage in its analysis (the point of disagreement). In the report on Apple's Environmental site, the largest percentage of emissions is found to be in product use - 53%. Why would other companies (HP & Dell) wish to exclude this large portion from the analysis?

Those with a long-term memory may note that a one-time almost-president sits on the board at Apple - the same person who raised global warming from a scientific debate to a social movement. Would Al Gore have a big-picture view of this issue?

I applaud Apple's view of its responsibility to the world for the cradle-to-grave analysis of its products. It reminds me of another leader in the environment & sustainability arena, Ray Anderson (TED Video).



Disclaimer: David used an Apple to type this post, requiring the emission of greenhouse gas.

Thursday, September 24, 2009

Pair Programming Blinders


Could you use some blinders for your pair?

Roobasoft's Concentrate (or Anti-Social) is a bit like the blinders I used to put on the pony while hitching up the plow (and although I'm not that old - yes, we used our pony to plow the garden).

It works by turning off applications that could distract either member of the pair, such as Facebook, Twitter, chat, mail, etc., allowing you to concentrate on the task at hand.

I can think of several people I've paired with for whom this would have increased productivity about 300%! It was a task just to keep them focused. Of course the asides and tangents I went down were all for the better - yeah, right!

See Also:
10 Concentration Apps that will help you to Focus


Wednesday, September 23, 2009

Comparative Agility Survey Results

Mike Cohn & Kenny Rubin's Comparative Agility Survey attempts to measure your team's behavior in relation to the Agile values and principles. It compares your results to the survey sample population's agility. It doesn't give you an absolute score, but rather a relative one.

Does it provide realistic information? Maybe; I hope so. How do we know? One technique the social and behavioral sciences use is a test of the reliability and validity of the survey. When I talked with Rubin this summer he said there are plans to do such a study in the fall of 2009. One of the typical tests of validity is the expert review. So if you have done a few years of Agile software development, take the test and give your opinion. Note that the definition of an expert, according to author Malcolm Gladwell, is 10,000 hours of concentrated study in the field (for example, 10 years of 4 hours per day studying a complex skill such as kung fu, playing the bass guitar, or accounting).

Well, by the strict definition I'm no expert; I'm not sure there are more than 100 experts in Agile in the world in 2009. However, I do have some experience (4 years) with Agile practices and over 25 years in software development. I participated in the survey with a recent project team in mind (a VoIP product team client). Below are the results. In total I believe the survey instrument to be fair and well done overall. There are some areas I'd prefer to see expanded, a few questions that could easily be refined, and some terms that should be defined for clarity.

Did it give accurate and useful results? In my opinion perhaps... well yes! The team I described in the survey was one of the best teams I've had the privilege to work with in my 4 years of consulting on Agile Transformation project teams. Overall it scored consistently high in relative terms to the other 800 respondents.

Results:
See below for an interpretation of the results - basically, long bars to the right (standard deviations) are much better than the average of all other teams, while short bars are slightly better. Bars to the left are worse than the average team.

The following information compares your specific survey results against the entire Comparative Agility database of 805 surveys that existed on Sep 23, 2009.
Your results are shown in terms of the number of standard deviations that your answers differ from the surveys in the Comparative Agility database. If your score differs by a positive number, then your answers for that dimension or characteristic are "better than" the average answers in the Comparative Agility database. If your score differs by a negative number, then your answers for that dimension or characteristic are “worse than” the average answers in the Comparative Agility database.
The length of the bar for each dimension or characteristic represents the magnitude of the difference between your answer and the average answer in the database. More specifically, for each dimension and characteristic, the Comparative Agility website computes the average answer of all questions for the specific dimension or characteristic by examining the responses to all surveys (excluding your survey). In addition, the website computes the standard deviation among all of the surveys for that dimension or characteristic. Finally, your answers are compared with the average answers and the difference between your answers and the average answers is then expressed in terms of the number of standard deviations.
For example, if your combined answers for a dimension average 3.5 and the database average (excluding your answers) is 3.0 with a standard deviation of 0.25, then you would see a bar with a length of +2 standard deviations, indicating that for that dimension your answers are two standard deviations more positive than the average of all the surveys in the Comparative Agility database.
If you see an X on the zero line in the graph for a particular dimension or characteristic, that means that your answer was equal to the average answer in the Comparative Agility database (in other words, zero standard deviation difference).
There are two graphs below. The first graph is for the seven Comparative Agility dimensions: Teamwork, Requirements, Planning, Technical practices, Quality, Culture, and Knowledge Creation. In this graph we take all of the questions related to the dimension (e.g., all of the teamwork questions) and we compute the statistics referenced above and then show how your answers compare to the Comparative Agility database of surveys.
The second graph shows all of the characteristics. Each dimension is made up of three to six characteristics. As with the dimension graph, each result in the characteristics graph shows how you compare to the Comparative Agility database of surveys for all of the questions within a particular characteristic.
By examining the Dimensions and Characteristics graphs, you can see how you compare to other organizations that have taken the survey.
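The comparison described above (your average 3.5 against a database average of 3.0 with a standard deviation of 0.25) reduces to a one-line computation:

```python
# Express a survey score as standard deviations from the database average.
def z_score(your_avg: float, db_avg: float, db_std: float) -> float:
    """Positive means 'better than' the database average; negative means 'worse than'."""
    return (your_avg - db_avg) / db_std

# The worked example from the survey's own explanation.
print(z_score(3.5, 3.0, 0.25))  # 2.0, i.e. a bar of +2 standard deviations
```

This is why the graphs are relative rather than absolute: the bar length depends on how spread out the other 805 responses are, not just on your own answers.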

To Dvorak or not to Dvorak

The Dvorak question.

I'm thinking about learning a lot these days. In an Organizational Leadership master's program the L-word is bandied about quite a bit. Can organizations learn? What environment does it take for people to learn?

I ran across this article on the Dvorak Simplified Keyboard by Robert Parkinson on John Shipman's site. It is astonishing to me that we still teach and use the QWERTY keyboard. I was looking for a keyboard that one could use one-handed - one hand for typing while the other hand controls the mouse. I didn't know it, but the Mac already has that keyboard built in; it's called the Lefthand Dvorak, along with the Righthand Dvorak and the Dvorak/Qwerty layouts.
Dvorak on the Mac.
System Preferences : Language & Text : Input Sources tab. Scroll down the list, check the box beside Dvorak.

Why would you want to switch? Take a look at the Gettysburg Address example (from the Parkinson paper - click to get the hi-res image): you'll notice the abundance of keystrokes that must be made off the home row on a QWERTY keyboard versus the Dvorak.



Why is that important? Because keeping the most frequent strokes under the home keys makes for faster input. Try typing the passage below, called the Fraser Street example.

Fraser Street was in West Everett. Westward of Fraser Street was the great vast sea. Awed, we gazed seawards, attracted by crested waves which raced and ebbed. Children were scattered on the beach edged and strewed with seaweed. They waded in water as the sea surged in and retreated. They bagged crabs as eagerly as beavers saw trees. Brave crews, seafarers in fact, steered sea craft far away. The site of Fraser Street was not overrated.

Few vegetated in Fraser Street. Nobody wasted time abed. "Acts test the breed," was ever the sacred adage. Varied crafts and trades were represented. There was a caterer, a barber, a weaver, a cabaret, and a garage. Attracted, we started to see several scenes. We were greeted as friends.

The barber catered to a varied trade, representing diverse careers and different creeds. Saturday drew the best crowd. All were seated, relaxed, aware of fewer cares, less fagged. There we saw a few starved tattered beggars who bragged of "bracers" served at cafes after a wee meal of beef stew and cabbage. A better fare was reserved for those reared on earth's greater swards.
-by Robert Parkinson

If you are from Seattle you may recognize the setting (Dr. Dvorak was at the University of Washington). If you do type the passage on a QWERTY keyboard you will notice something peculiar: your left hand does 90% of the work! Yes, it may be a bit of a contrived example, but WOW! Take a look graphically (click for the hi-res image)... QWERTY keyboard on the left, Dvorak keyboard on the right.
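The home-row advantage is easy to check empirically. This sketch counts the fraction of a passage's letters that sit on each layout's home row; the home-row letter sets are the standard QWERTY and Dvorak layouts, and the sample sentence is from the Fraser Street passage above.

```python
# Fraction of a passage's letters typed on the home row, QWERTY vs. Dvorak.
QWERTY_HOME = set("asdfghjkl")    # QWERTY home-row letters
DVORAK_HOME = set("aoeuidhtns")   # Dvorak home-row letters

def home_row_fraction(text: str, home: set) -> float:
    """Share of alphabetic characters in `text` that lie on the given home row."""
    letters = [c for c in text.lower() if c.isalpha()]
    return sum(c in home for c in letters) / len(letters)

passage = "Awed, we gazed seawards, attracted by crested waves which raced and ebbed."
print(f"QWERTY home row: {home_row_fraction(passage, QWERTY_HOME):.0%}")
print(f"Dvorak home row: {home_row_fraction(passage, DVORAK_HOME):.0%}")
```

Because the Dvorak home row holds the common vowels and frequent consonants (a, o, e, u, i, d, h, t, n, s), its home-row share comes out well ahead for ordinary English text, not just for contrived passages like this one.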



Should I invest the time now to save myself time for the rest of my life?

Of the skills I learned in high school, typing is the one I still use every day. I didn't want to take typing. I was in a class of 45 girls with one other boy. You would have thought all the smart studs would be taking typing, but no - just a couple of nerds. I never asked Bobby why he was taking the class, but my father made me. Talk about foresight.

Saturday, September 19, 2009

Luke Arm - How long between invention and innovation?

Look at the advances in less than a decade.  Here's a video from May 2016 from IEEE Spectrum.

Prosthetic Hand Restores Amputee's Sense of Touch


Luke Arm (Sept. 2009)




How long did Luke Skywalker have to wait for his prosthetic arm? Not long; he was back in the action within days, it seems.

I’ve been loosely following the Luke Arm - Dean Kamen’s name for the DARPA-funded prosthesis. First, I’m floored that in 30 months the team could create such a beautiful design (beautiful in an engineering way). The human arm (counting the hand) has 22 degrees of freedom (DOF) (movements like forearm rotation). The traditional hook-and-cable prosthetic has just 3 DOF. That hook was designed in the days of sailing ships, then updated with aircraft cabling after World War I to allow some movement.

The typical user gives up on the frustrating hook/cable arm after a few years. Why? Kamen’s group thinks it is the low return on investment (ROI) the arms provide the user. They don’t provide enough benefit (mobility, dexterity, functionality) for their cost. Here cost is not money, as many are provided by VA benefits, but cost in more human terms, like comfort of the device interface. Just one of the drawbacks: the socket is designed to cup the “stump,” and it becomes sweaty and slippery, reducing functionality and decreasing comfort.

Update:  see 3D Printing a Better Socket for Prosthetic Limbs (TED.com) by David Sengeh of MIT Media Lab

Want to see the Luke arm in action? YouTube - IEEE Spectrum (video).

Now what puzzles me is that it took less than 3 years to invent the prosthetic arm, but it will take at least 3 years in clinical trials (FDA) - if there is funding for the trials. DARPA funding does not extend into the clinical trials. But why does this device even require FDA approval and trials?

If this were a game console (Wii or Xbox) controller - like a Power Glove - it would not require FDA approval. How is the Luke arm different?

Deka (Kamen’s company) received $18.1 million for a 30-month contract started in 2005, and delivered working prototypes in late 2007. A modular device weighing less than the human arm and nearly as capable! It appears to be a wonderful invention, waiting on FDA approval to become an innovation.

What’s the difference between invention and innovation? Innovation is when that wonderful idea actually becomes useful to society. The span between invention and innovation is typically a 30-year process. Will we have to wait another 30 years?

Ray Kurzweil has plotted the exponential curve of innovation for some well-known inventions; see: Mass use of Inventions (mass use defined as 1/4 of the US population).

More info on the Luke arm and other DARPA prosthetic arm research at IEEE Spectrum.

Update: May 2014 - It appears that the DARPA project is on a fast track - it will not take 30 years.

The US FDA approves Deka to market a prosthetic arm that can perform multiple, simultaneous powered movements controlled by signals picked up from sensors on the user's arm (electromyogram, or EMG).

See story and video from re/code:
FDA Approves Robotic Prosthesis Controlled by Muscle Contractions By James Temple

See Also:

Weird but related - a robotic arm that can catch random objects thrown to it.
Robotic Arm Catches Objects on the Fly

Kids Design Their Own Prosthetics 



A Budget Exoskeleton

Friday, September 18, 2009

CSM Exam - about time!

Reading Danube's blog The CSM Exam.

Some background - the CSM (Certified Scrum Master) is a certification by the Scrum Alliance that has been given to virtually everyone (99.999%, perhaps I exaggerate) who attended the 2-day training program. The certification has been a controversy for as long as it has been offered (many years now). The Alliance is putting a real written exam into place as a requirement for certification.

I think a real test is a GREAT thing, given that it is a “certification”.

Definition of certification: a document attesting a level of achievement in a course of study or training.


Attendance, the previous requirement, does not “attest to a level of achievement”.

In Danube's post they state:
“Of course, the flipside is that an exam will only test attendees on certain aspects of the Scrum framework in a format that does not necessarily promote a deep understanding of Scrum’s values.”


The assumption in this statement appears to be that the test is not well designed or that it cannot test values. I have not seen the test, but I assume that it will test knowledge of the values of Scrum. This is very testable. A certification (as typically applied in many industries) attests to knowledge of a body of knowledge (BOK) (don’t go all PMBOK on me - yes, Scrum has its own BOK). The certification does not state that the bearer has the attitudes and exhibits the behaviors of the values, which I think is what Danube is concerned with in their statement about Scrum’s values.

So how does one test or assert the affective nature or the behavioral nature of a person? This is typically done via a case study of the person. Is this not what the next level of Scrum certification attempts to do? The CSP (Certified Scrum Practitioner) is a certification that attests that the bearer of said certification (oh so formal - just say - the CSP) has shown (through self report) the values and behaviors taught in the BOK of Scrum.

Testing for CSM brings the Scrum certification into minimum compliance with the common understanding of the terms. That is a value of Scrum/Agile - to state clearly what we are going to do, then do it, and have an objective measure of DONE, demonstrate that level of DONENESS, and then be capable of continuing down the path.

I say - about time - what took you so long - and don’t give me that stinking incremental-iterative argument. The facts will not bear out the delay of a real test over the many thousands of CSMs there are (982 pages, A-Z).

Thursday, September 17, 2009

Single Payer Health Care will Work

Health Care Reform

Here's why I want a ONE-payer system. I just got a BCBS statement (explanation of benefits - wait for it, we will come back to this). It was for a general checkup; I got a tetanus shot (they really push them - like drugs) and a small 'skin-tag' removed from my underarm.

Itemized:
Office visit $221
Preventive Service $242
Surgery-Skin tag removal $176

On the second statement - in a completely separate envelope, for the same visit:

Drugs $65
Vaccine Admin $30
Treatment Room $120
Laboratory $12
Laboratory $74
Laboratory $22
Laboratory $43

Yep that explains it!

Would any reasonable person believe that being charged for an office visit and a treatment room for the same visit is NOT redundant double billing? A phone call to the provider, Virginia Mason, assures me that this is standard procedure.

So how does the doublespeak of EXPLANATION of BENEFITS sound now? I'm confused, and I just believe that a single-payer system will make my life better! Sure, a lot of insurance claims admin people will lose their jobs - but they are really smart and can work computer systems like nobody's business - they can find another job.

Monday, September 14, 2009

Environment's Role in High Performance Teams

What environmental qualities does it take to foster a high performance team?

Do not make the assumption that good people will perform at the top of their game regardless of the environment. It is crucial to provide the environment that encourages top performers. It is leadership that sets the stage of the environment. To continue the play analogy: the leaders are in control of the stage manager's budgets for prop and materials, provide vision for the director's action instructions, and inspire risk taking to achieve works of art. A good leader will know the bounds of the environment, perhaps by testing, scouting and exploring the area. This knowledge of the environment will help them to guide the organization within the sustainability envelope. It is not in the overall interest of the organization to exceed the sustainable performance characteristics of the team. The environment has a large impact on sustainability of team performance.

Graham Jones' article in Chief Learning Officer magazine "Environment's Role in High Performance" prompted my thinking on this subject.

How do we set the stage for high performance teams in Agile communities?

Do we as leaders provide the performance enablers (Information, Instruments, Incentives)? Do we support our people in the aspects that affect performance (Attitudes, Capacity, Behaviors)? Are we aware of the whole eco-system?

A case study in the effects of environment on attempts to modify human behavior was done by a well-respected psychiatric researcher named Lee Robins as part of President Nixon's Special Action Office for Drug Abuse Prevention in 1971. She studied heroin-addicted veterans from the Vietnam War. Some reports put the addiction rate at 15% of returning vets, and the current thinking was that it was practically impossible to kick the addiction.

The Office put in place a program of treating the soldiers in Vietnam before returning them home. The rate of relapse to heroin use was 5% in Robins' program (95% successfully made a very difficult change). Other programs that treated soldiers at home had relapse rates in the 90% range (only 10% making a change in behavior). What was the secret? This program had the opposite of the expected results. Decades later, the study of behavioral change points out the reason for this drastic inversion of expectations. The secret: behavior is highly influenced by environment. The soldiers who got clean and then changed environments were successful in adopting the desired behavioral change (staying clean).

Listen to the NPR story, What Heroin Addiction Tells Us About Changing Bad Habits by Alix Spiegel.

If environment is this crucial keystone in breaking heroin addiction what environment change shall we make to facilitate the behavioral changes needed when transitioning to an Agile mindset?  Or as we saw with these case studies, if we retain the environment and expect new behaviors to flourish after training and a bit of coaching, should we be surprised when the old behaviors return?
Behaviors that are not supported by the environment will be hard to maintain; the environmental cues keep pointing back to the old behaviors.

My suggestion: if you wish your Agile transition to have a long-term impact on people's behavior, then you had best change the environment along with the process, procedures, and practices.


Monday, September 7, 2009

Methods of Work

I just realized something about myself. Maybe I've known it, but it popped out this time, in big type. It is "methods of work". That's the part I like the best.



I've been reading my father's Fine Woodworking magazines for over 20 years (at $8 an issue, who can afford their own?). I just bought the Oct. 2009 issue of Fine Woodworking; my wife saw it and noted that Sam Maloof (1916 - 2009) had died. They feature him on the cover in one of his fine rocking chairs (a great cover). I read the articles and enjoy looking at great furniture, but I keep finding that I enjoy the how-to articles the most. It's not just the 'how to' that intrigues me; it is the process of making something even better or easier to accomplish, or how to make a jig that allows for a more precise, accurate machining step. This issue it hit me like a ton of... rough-cut 2x8s: the how-to section is now titled "Methods of Work"! It has always been my favorite section (regardless of its name). It is the section with tips from readers, with drawings of jigs and fixtures, diagrams of how to assemble some complex component, a how-to on making a blind dovetail joint.

Well it fits. It is the same thing I love about working with Agile teams. I love to make them function just a little bit better. Maybe if the task board had sticky notes shaped like people with a name, then we could apply the sticky-people to a story and know who is working on that story today. And the arts-n-crafts factor would be high, so it would challenge the Scrum Master's scissors dexterity. Great I'm trying it!

For me, Agile is all about methods of work: experimenting, finding the things that do work, and then tweaking them - small incremental improvements to process as well as product. Find something interesting to experiment with. Sam Maloof did, with chair designs that have become classics; he was the master, and he taught many people how to make similarly beautiful, functional pieces of art.

Saturday, September 5, 2009

Groupthink in Scrum Teams


How do you combat this known dysfunction of a group in Scrum teams?

Groupthink is defined best, I think, by Irving Janis, who studied its effects in the Bay of Pigs invasion, the attack on Pearl Harbor, and the escalation of the Vietnam War.

Groupthink - A mode of thinking that people engage in when they are deeply involved in a cohesive in-group, when the members' strivings for unanimity override their motivation to realistically appraise alternative courses of action.
- Irving Janis. Victims of Groupthink. 1972, p. 9.
President Kennedy's tight-knit group got caught up in groupthink, but learned their lesson and changed their group decision-making process by the time they dealt with the Cuban missile crisis. A small proof of a learning organization.

Did President Bush's group learn their groupthink lessons?

Janis gives 8 indicators of groupthink:
  • Invulnerability - many members of the ingroup have an illusion of invulnerability.
  • Rationale - they rationalize away warnings and other negative feedback.
  • Morality - they have a belief in inherent morality on their side.
  • Stereotypes - they hold stereotyped views of the opposing groups' leadership.
  • Pressure - they apply direct pressure to any dissenting individual.
  • Self-censorship - they avoid deviating from group consensus.
  • Unanimity - they share an illusion of unanimity.
  • Mindguards - they appoint themselves as guards to protect the leader from adverse information.
Janis has some recommendation on the remedies for groupthink:

  • assign the role of critical evaluator to each member, reinforced by the leader's actions
  • leaders should adopt an impartial stance at beginning
  • set-based design; separate groups working the same issue
  • require members to discuss with their associates
  • outside experts invited to challenge the views
  • assign a devil's advocate for general evaluations
  • devote a sizable block of time to rival orgs. responses & warning signals
  • subdivide the group to work under different chairpersons
  • hold a second-chance meeting - encourage doubts to be raised
Janis warns of the disadvantages these remedies may bring about, i.e. prolonged debates, rejection, anger, power struggles, etc. However, history tells us that we make big, HUGE mistakes (like looking for WMD in the sand) when we allow ourselves the luxury of groupthink. So putting in place a few checks and balances would be very prudent - don't you think?

Thursday, September 3, 2009

Are you a Theory X or Theory Y manager?



Have you heard of Douglas McGregor's two theories on motivation of humans?

In the 1960s McGregor proposed two competing theories on motivation and management of people.

Theory X was based on the assumption that employees don't like work, don't want to be at work, would goof off if they could, and must be coerced into higher performance.

Theory Y was based on the assumption that employees do like their work, are intrinsically motivated, can be creative, seek responsibility, and exercise self-direction and self-control.

McGregor based his two theories on Maslow's hierarchy of needs (lower-order to higher-order needs: physiological, safety, social, esteem, self-actualization), with the assumption that Theory X was concerned with the lower-order needs (physiological & safety) while Theory Y was concerned with the higher-order needs (esteem & self-actualization). McGregor didn't believe motivation was a single continuous continuum; perhaps it was bi-modal.

Which leads into Frederick Herzberg's Two-Factor Theory. Basically, the higher-order needs lead to motivators, whereas the absence of lower-order needs leads to dissatisfaction - which is not a single continuum, but rather a dual continuum (see graphic).

By the way, you have McGregor to thank for the term "human resource".

So which are you? Does it depend on the situation? Would you want a Theory X or Y ship captain when you hit an iceberg?

Some interesting links on Motivation.

Can you calculate this?

Can you do math calculations in your head? How about long division on paper? Have you forgotten the multiplication tables above 5? Well then, I'll bet you don't have any idea how to use a slide rule, much less a Thatcher's Calculating Instrument.



This calculator is in the Bremerton Navy Museum. A little web searching found this description of a similar instrument in the Powerhouse Museum:
Description:
Cylindrical slide rule, metal / paper / wood, designed by Edwin Thacher, New York, United States of America, 1897-1907

'Thacher's calculating instrument' is a cylindrical slide rule that can be used to calculate results by adding and subtracting logarithms. The machine consists of a cylinder with wooden handles at either end. The cylinder has been covered in glossy paper that is printed with log scales; it rotates inside a series of twenty brass bars that are also covered with gloss paper printed with log scales. Along the front of the cylinder there is a brass bar to which a sliding holder for a magnifying glass is fixed. The machine is mounted on a rectangular wooden base that has a label printed with instructions for use fixed to it.
Designer: Thacher, Edwin
Designed in: New York, USA
Designed date: 1871

Read more: http://www.powerhousemuseum.com/collection/database/?irn=206938#ixzz0Q21zV3N1
Under Creative Commons License: Attribution Non-Commercial



Statement of significance:
This is a very accurate desk-top slide rule of a type used to perform calculations between 1881 and the 1960s, when electronic calculators became available. Slide rules are analog devices that the user manipulates to add and subtract the logarithms of the numbers involved in a calculation.

The first cylindrical slide rule, with extra long logarithmic scales to provide greater accuracy, was introduced by Irishman George Fuller in 1879. Edwin Thacher's 1881 design set new standards in accuracy by pushing computations to four digits. This feat was accomplished by dividing up a very long linear rule into equal lengths and arranging the pieces around a cylinder with a barrel slide. Sometimes referred to as a 'squirrel cage' slide rule, its use of both rotary and longitudinal movement gave the rule an effective length nearly 40 times greater than that of an ordinary slide rule.

By 1897 the New York company of Keuffel & Esser had taken over production of Thacher's slide rules and, in a curious footnote to history, misspelled his name 'Thatcher' for the entire life of their production.


Significance Statement by Geoff Barker, March 2007
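The slide-rule principle the museum describes (adding and subtracting logarithms to multiply and divide) can be sketched in a few lines of Python. This is my own rough illustration, not the museum's: rounding each logarithm to four decimal places stands in for reading a physical scale to roughly Thacher's stated four-digit precision.

```python
import math

def slide_rule_multiply(a, b, digits=4):
    """Multiply by adding logarithms, the way a slide rule does mechanically.

    Rounding each logarithm simulates reading a scale to a limited number
    of digits; Thacher's extra-long scales gave roughly four significant
    digits, versus about three for an ordinary slide rule.
    """
    la = round(math.log10(a), digits)
    lb = round(math.log10(b), digits)
    return 10 ** (la + lb)

print(slide_rule_multiply(3, 7))       # close to 21
print(slide_rule_multiply(123, 456))   # close to 56088
```

Drop `digits` to 3 and the answers drift further from exact, which is a fair picture of why a longer scale (and thus a cylindrical design) meant a more accurate instrument.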

Wednesday, September 2, 2009

Motivation & Herzberg Two-Factor Theory

I read a great article on motivation (intrinsic vs extrinsic) in Agile adoption.

Intrinsic and Extrinsic Motivation in Agile Development, Adoption


However, I don't totally agree with some of the comments suggesting that we should not applaud good behavior or success. Thinking about this, I believe one could apply Herzberg's Two-Factor Theory.


I believe that applauding stories completed in a sprint review would fall in the Recognition factor, and is highly correlated to job satisfaction. While I believe that punishment for not completing a story (in whatever manner) would be found somewhere on the hygiene side of the chart, the side that leads to dissatisfaction.

To understand the difference, imagine that your trash has not been taken out in a few weeks - does this make you dissatisfied? Yes! But does the trash always being removed on schedule make you satisfied? No. Therefore the trash factor is a hygiene factor, and does not lead to satisfaction. Herzberg found that there is a dual continuum: a continuum from satisfaction to no satisfaction, with a separate continuum from no dissatisfaction to dissatisfaction. Keep this in mind when you think of motivators; there may be a disconnect in the continuum you intuitively perceive.

Some interesting links on motivation.

A BBC video of Fred Herzberg describing the Two-Factor Theory.  Or it may be a Doctor Who episode.

See Also:
One More Time:  How do you motivate employees?
David's notes on Drive by Dan Pink