December's issue of Harvard Business Review makes a compelling case for the concept of persistent teams. And hey, if they do it in the operating room, then there must be good science behind it. But heck, we don't need no stinking science, we just know it works, right?
Our company just discovered an interesting impediment this past week. The Dallas ICEpocalypse of 2013 resulted in many companies shutting their offices; ours did too, both Friday and Monday (Dec. 6th & 9th) - someone even opened an outdoor ice skating rink in Dallas. Yet many team members worked from home, and to do this many had to use the Virtual Private Network (VPN) to access secure systems. Guess what impediment a few thousand employees all working from home on the same day causes? The VPN is licensed infrastructure with a license limit. Yep: only a fraction of the people working from home could get one of the limited VPN licenses.
So I wonder if there is a system, known to mankind, that is designed to deal with this sort of constraint and surge in usage... isn't that newfangled cloud computing environment designed to handle this very sort of on-demand scalability?
So I understand the company IT department's need to purchase a limited number of licenses. I understand the VPN vendor's need to limit the users of the system. I understand the workers wanting to stay within the safety of their warm homes and use those high-tech remote computing platforms and networks that we are so famous for creating.
I understand that we as a company have spent millions of dollars on fail-safe, redundant data centers to serve our customers. We have fault-tolerant failover systems and disaster recovery systems. But they are not foolproof, and one weather system proved it. While I'm sure the customer systems continued to hum during the ICEpocalypse of 2013, the support and development groups got taken out by a VPN license agreement.
I'm not an expert on contractual license agreements - but when I was tasked with writing a license monitoring framework for our network infrastructure product many years ago, I chose to buy one from a company whose core competency was licensing. We integrated it (FlexLM) with our software. It had the capability to change license restrictions by editing a file and restarting a license manager service. We could easily have given a client a thousand licenses for a week if asked; we could have charged them for this service or done it for free. All because the license manager company had already solved these problems before.
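To sketch what that flexibility looked like: in a FlexLM-style license file the seat count rides on the FEATURE line, so raising the limit was a one-line edit plus a service restart. (The server, vendor, and feature names below are invented for illustration, and real FEATURE lines carry a full vendor signature.)

```
# Hypothetical FlexLM-style license file; the 5th FEATURE field is the seat limit.
SERVER licsrv1 0011223344 27000
VENDOR acmed
FEATURE vpn_seat acmed 1.0 31-dec-2014 250 SIGN=...
# Emergency change: edit 250 -> 2500, then restart the license manager service.
```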
So I wonder why that capability doesn't manifest itself today in core infrastructure, as part of disaster mitigation plans.
Refactoring is when a developer changes the structure of the code without changing its behavior. To do this with little or no risk, a craftsman will have a set of well-maintained tests that prove the code still behaves exactly as before.
A simple way to visualize refactoring is to think of 12 eggs. Most people will imagine a carton of eggs. But it is possible to think of 2 cartons of 6 eggs, or even 6 packages of two hardboiled eggs. Refactoring gets its name from the factorization of numbers. Twelve has multiple factorizations: (6 x 2), (3 x 4), (2 x 6), (1 x 12). No matter which factorization, the resulting collection is still twelve eggs. In TDD we have tests to make sure we don't break the eggs.
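The egg analogy is easy to check in a few lines of Python (a toy illustration of the idea, not anything from the original post):

```python
# Every factor pair of 12 "repackages" the same dozen eggs.
DOZEN = 12
factor_pairs = [(a, DOZEN // a) for a in range(1, DOZEN + 1) if DOZEN % a == 0]

for cartons, eggs_per_carton in factor_pairs:
    # The "test" that proves no eggs were broken by the repackaging:
    assert cartons * eggs_per_carton == DOZEN

print(factor_pairs)  # [(1, 12), (2, 6), (3, 4), (4, 3), (6, 2), (12, 1)]
```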
Note: Many IDEs claim to do refactoring, and many times they work just as expected; however, not all IDE refactorings are reliable. I suggest you test your IDE's refactoring before you trust it.
To introduce beginner developers to the basics of refactoring, start with a well-understood class with many blocks of code. The first refactoring to practice is Extract Method.
But first make a program of blocks in this order: white, red, blue, blue (again), red, yellow, green, red, yellow at the bottom.
Extract Method: using the Extract Method refactoring, pull out the red block of code, replacing it with a call (the 4x2 red plate) to the extracted block.
Repeat the Extract Method operation on each red section of the program.
Now, a step of refactoring (or TDD) is to remove duplication. All three of those red blocks of code are theoretically the very same, so we really only need one. Set two aside and place the one remaining at the bottom of the program stack.
Now let's use the Extract Method operation on the two blue blocks of code.
And remove the duplication, remembering the one remaining blue block is placed at the bottom of the program stack.
Practice, practice, practice.... the art of a craftsman.
Let's practice again with yellow; perform the Extract Method operation on the yellow blocks of code.
Now the program looks a bit wonky... unbalanced by the large green block - let's extract it also.
There, good practice... and continue with the white block....
There, all the in-lined class code has been extracted to methods and method calls. That is well factored. But let's do a bit of housekeeping: clean up and reorder the methods and calls into a similar order.
When separated, the blocks of code and the calling plates look like this:
[Before and after refactoring images]
(1) Green 4x2 Plate; (1) Green 4x2 Block
(1) White 4x2 Plate; (1) White 4x2 Block
(2) Yellow 4x2 Plate; (2) Yellow 4x2 Block
(2) Blue 4x2 Plate; (2) Blue 4x2 Block
(3) Red 4x2 Plate; (3) Red 4x2 Block
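In code rather than Lego, the same exercise might look like this Python sketch (the function names and the contents of each colored "block" are invented for illustration; the block order matches the white, red, blue, blue, red, yellow, green, red, yellow program above):

```python
# Before: one long function with the nine in-lined "blocks".
def program_before(lines):
    out = []
    out.append("== report ==")            # white
    out.append("-" * 10)                  # red
    out.extend(l.upper() for l in lines)  # blue
    out.extend(l.upper() for l in lines)  # blue (again)
    out.append("-" * 10)                  # red
    out.append("totals:")                 # yellow
    out.append(str(len(lines) * 2))       # green
    out.append("-" * 10)                  # red
    out.append("totals:")                 # yellow
    return out

# After: each colored block extracted to a method; the duplicated
# blocks now share a single definition with multiple call sites.
def header():                 # white
    return ["== report =="]

def separator():              # red (one definition, three call sites)
    return ["-" * 10]

def upcased(lines):           # blue
    return [l.upper() for l in lines]

def totals_label():           # yellow
    return ["totals:"]

def count(lines):             # green
    return [str(len(lines) * 2)]

def program_after(lines):
    return (header() + separator() + upcased(lines) + upcased(lines)
            + separator() + totals_label() + count(lines)
            + separator() + totals_label())

# The test that proves the behavior is unchanged by the refactoring:
sample = ["alpha", "beta"]
assert program_before(sample) == program_after(sample)
```

As with the Lego blocks, nothing about the program's output changed; only its structure did, and the test proves it.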
I hear lots of colleagues using the term 'technical debt', and the scenario that plays in my brain's cineplex is from The Princess Bride, when Inigo Montoya remarks to Vizzini: "You keep using that word. I don't think it means what you think it means."
So what does "Technical Debt" mean? And what do my colleagues typically mean when it is not truly technical debt?
The first is easy; the definition of Technical Debt comes from Ward Cunningham, who coined the term. He who coins the term gets to define the term.
OK, so to be truly technical debt, one must negotiate the debt with the business. The business achieves some objective sooner and incurs an obligation to repay the technical team the time and effort required to put the system back into a proper state of clean, well-factored code.
But wait... what could my colleagues mean when they misuse the term technical debt? I think they mean many things, but since there is no good word for what they mean, they appropriate the popular term. I've referred to the concept as:
bugs just waiting to be discovered
short cuts that will come to haunt us later
things we will fix one day
engineering done by the new guy
design choices that time permitted
There appears to be a problem here: there is no good word or phrase for this concept. So let's create one! How about 'unclean code', inverting the concept from Robert Martin's Clean Code: A Handbook of Agile Software Craftsmanship?
Let's define the term 'unclean code': software code that might work, has known deficiencies, needs to be thoroughly tested, and would probably make a master craftsman turn up his nose at the smell.
Let's see if the term can replace the misappropriated 'technical debt' in a sentence. "We have some unclean code this sprint that will need to be added to the backlog, but the features are all done."
What do you do when the Product Solutions Director comes to you and suggests that she would like a product delivered within a 5% error on the delivery date?
One suggestion is to run through a thought experiment with her. For example: assume this is a project that will take about 6 months, and base the schedule on a 180-day timeline. So you desire us to hit that 180-day target, from six months away, to within 5%. OK, that's 0.05 * 180 = 9 days. Now, is that plus-or-minus 5%, or a 5% total range? In absolute terms for this example, do I have to land within 171 - 189 days (+/-5%) or within 176 - 185 days (a 5% range)? To continue the example, consider a team doing 2-week sprints. This equates to 12 - 13 sprints, with an error budget of less than one sprint.
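The arithmetic of the thought experiment is easy to check (assuming the 180-day timeline and two-week sprints above):

```python
target = 180                       # the 180-day timeline
tolerance = 0.05 * target          # a 5% error budget: 9 days
plus_minus = (target - tolerance, target + tolerance)       # +/-5%: 171 to 189
band = (target - tolerance / 2, target + tolerance / 2)     # 5% total range: 175.5 to 184.5

sprint_days = 14                   # a two-week sprint
sprints = target / sprint_days     # roughly 12.9 sprints

print(tolerance, plus_minus, band, round(sprints, 1))
```

Note the 9-day error budget is smaller than a single two-week sprint, which is what makes the request so demanding from six months out.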
But perhaps more important is what making this one aspect prime says about the other aspects of the project. So let's try to balance the project success aspects with schedule as the single most important one. Given that the aspects must balance (rules of the game), one can choose only one other highly important aspect, and most leaders choose quality. That gives a picture something like this. Meaning that of the four aspects of the typical iron triangle (schedule, cost, scope & the unchanging quality), an emphasis on schedule will lead to cost and scope changes (increased cost and decreased scope). And what happens when the leadership doesn't increase the cost and decrease the scope? Well, that quality on the inside of the iron triangle that no one wishes to degrade is... well, degraded -- while everyone says that it is not. And that, folks, is how one creates design-dead legacy applications in six months.
Sprint length - a fun debate. What is the best practice? A funny question: there is no best practice for sprint length. But what factors should go into the decision?
The team's ability to become predictable within the sprint duration.
The Product Owner's ability to plan, and to commit to not changing the plan for the sprint's duration.
The frequency of feedback needed on the progress the team is making toward the release goal.
The ability of the team to create their sustainable pace.
Many teams I've worked with have trouble defining their sustainable pace. I've argued that this pace - one that allows the team to deliver working software that is potentially shippable each sprint, with high-quality deliverables and team learning - is quite a bit below the team's typical sprint velocity.
When teams are under extreme pressure to deliver, they typically forget one of the habits from The 7 Habits of Highly Effective People: sharpen the saw. So why not build this habit into the structure of your team's cadence? Instead of a two-week sprint (10 work days), try the 13+2 model: thirteen work days followed by 2 days of slack (read the book Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency).
This slack will give the team time to reflect upon the many things they wish they had time to do but didn't; and now, perhaps, they will do them. Silly little tasks like cleaning up the automated build scripts, pruning the dead wood out of the smoke tests, or refactoring the last story to a design pattern that now appears obvious after the fact.
This three-week sprint length adds about 13% slack to your sprint, in a tempo that is easily predictable for the team members. You might find that the team members start using this 2-day slack time for things like doctor visits and FedEx days.
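The 13+2 arithmetic is a one-liner to verify (just a quick check of the numbers above):

```python
work_days = 13
slack_days = 2
sprint_days = work_days + slack_days        # 15 work days = a three-week sprint
slack_fraction = slack_days / sprint_days   # about 13.3% of the sprint is slack

print(f"{slack_fraction:.1%}")              # 13.3%
```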