Agile Principles: Team Reflection Provides Growth


At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

I coach most of my teams by first modeling the behavior I expect, then letting the team copy that behavior while I observe and correct until it matches the model. To achieve this, a great deal of my time is spent as an active scrum master – and I love it! I love having the opportunity to interact with individuals on teams, but there are times, after literally hundreds of iteration plannings and reviews (and thousands of daily stand-ups), when it is difficult to keep things “fresh” or stay motivated. That is never a problem when it comes to retrospectives, which is what the twelfth (and last) principle refers to.

Retrospectives are my favorite of all the scrum activities because they represent the opportunity to reflect on how we are working and to adjust our behavior to be more effective.

I have said to my teams on many occasions that if I were forced to choose only one scrum ceremony, my choice, without hesitation or reservation, would be the retrospective. Without it, how could we ever expect to improve? What essential difference would an “Agile” project have over the many death march projects that teams have come to accept?

There are always better ways of doing business. As a team we must come together frequently to honestly discuss current processes, evaluate potential alternatives, and then experiment with them to prove (or disprove) their value.

Since not everything works well for every team, it is important that potential improvements are seen as experiments. A practice may be the right thing to do, just not the right thing at this time. I regularly tell my teams that we need to be “scientists”: there are many things we will try without knowing what the result will be, but it is important to propose a theory and conduct a proper experiment.

I extend this concept of theory, implementation / observation and retrospective to more aspects of a team than just the official retrospective. For example, instead of time “estimates” I propose that we have a theory on how long something will take and that we will test this theory by the end of the iteration. This is a less threatening way of estimating, and it lets us use real information to provide better estimates in the future (since, regardless of the methodology, people will quite logically expect to understand what teams are capable of in the long term).

One thing I have noticed with regard to retrospectives is that teams struggling with Agile transformation will often drop the retrospective ceremony (while keeping all the others, though I have noticed a tendency to sit during stand-ups as well). There are a number of possible reasons for this. One reason is that the organization is so addicted to a top-down problem solving approach that management does not value identification and solving of problems at the team level, so the meeting is quickly dropped. In these organizations honesty is rarely valued. Without honesty a proper retrospective cannot occur. One team I coached referred to their organization’s “more than healthy aversion to reality.”

I have run into organizations that have a misguided desire to track team experiments. These “best practices” shops think that if we only identify what works for one team then we can codify this practice and force all others to adopt it. This is a misunderstanding of Agile. While there may be “best practices” that work for all teams, this approach flies in the face of self-organization and ignores the fact that each team will mature at a different pace, so the practice that works for one team may be completely inappropriate for another. I certainly do not have the same expectations for my seven-year-old that I have for my twenty-year-old.

Tracking experiments at the organization level also makes it more difficult to experiment, producing a chilling effect on teams. Instead of trusting that the team(s) will attempt the experiments that are best for their particular circumstances, there is one more hoop to jump through, one more layer of monitoring, one more instance of distrust and more business as usual. As a result, teams experiment less, progress stagnates, the team is disincentivized and any real change is exchanged for “acceptable” window dressing. After experiencing a few retrospectives under these constraints it is not difficult to see why teams choose not to pursue more of them.

There are times when teams have just not had coaching or training on how to properly conduct retrospectives. Some retrospectives become little more than glorified bitching sessions with no substantive changes discussed or attempted. This happens frequently when organizations tell teams that they will be agile but do not provide real support in removing the systemic obstacles to team success. For every problem identified in a retrospective there should be a corresponding action – even if that action is to escalate the item to management. Once something has been escalated to management, it is important for management to be held accountable to the team and, from time to time, to report to the team on the progress of removing the impediments.

More importantly, since many problems may be systemic and beyond the ability of the team to change, it is incumbent on the scrum master to keep the team focused on the problems that can be solved. It is best to start small with things like meeting times, rooms, etc. Fixing “small” things that are in the team’s power can go a long way in helping them to “gel” as a self-organizing team.

As to the retrospective meeting itself, there are quite a number of folks who have interesting facilitation techniques. Personally, I find most of these “team building” techniques forced, gimmicky and condescending to adults. If they work for you, fine, but they are not my way.

Here’s how a typical retrospective meeting goes for me. I encourage everyone to try out my process to see if it works for your team. If it doesn’t, then try, try again!

I keep my retrospectives simple and generic in asking:

  • What went well?
  • What could be improved?
  • What actions can we take? (To ensure more of the first and less of the second)

Gimmicks trade substance for style, and if I do a good job of the “vanilla” retrospective, I find that I can keep it interesting and people engaged without tricks. The content of the discussion wins the day!

And I start each meeting with a simple statement of our intent, usually including something like Norm Kerth’s Prime Directive, assuring the team that the purpose of our discussion is to improve our work as a team and nothing is personal.

The second thing, often forgotten yet critical to retrospective success, is to take some time at the beginning to review what was discussed and the actions undertaken in the last retrospective.

This way we can hold people accountable for delivering on the changes promised and not constantly tread over the same issues again and again. Of course, if the same problem persists and it is a good jumping-off point for discussion, there is no rule to say that we cannot re-discuss or re-emphasize something from a previous meeting. Without this follow-up step I have witnessed a great number of teams with great intentions, but poor execution.

The next thing that we do is to go over the three columns:

  • For each thing that we identify as going well (or not so well) the team is encouraged to come up with some kind of action that will help us continue the good and improve on the not so good.
  • For every action we agree on, someone (or a group) is assigned to be accountable (see the simple tracking sketch after this list). If need be, a retrospective action can be planned during iteration planning to make sure there is time to get the work done.
  • The meeting ends with a statement of appreciation for the team’s honesty and courage in improving their work.
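
For teams that want a lightweight way to keep themselves honest about this, here is a minimal sketch in Python. It is not a tool recommendation, and the class and field names are invented for the example; the point is only the bookkeeping: every action has an owner, and the first order of business at the next retrospective is to list what is still open.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RetroAction:
    """One action item agreed to in a retrospective."""
    description: str
    owner: str           # the person (or group) accountable
    done: bool = False   # flipped once the team confirms completion

@dataclass
class Retrospective:
    went_well: List[str] = field(default_factory=list)
    to_improve: List[str] = field(default_factory=list)
    actions: List[RetroAction] = field(default_factory=list)

    def open_actions(self) -> List[RetroAction]:
        """What we review at the start of the next retrospective."""
        return [a for a in self.actions if not a.done]

# Last iteration's retrospective, captured in the three columns above
last_retro = Retrospective(
    went_well=["Stories were small and finished early"],
    to_improve=["Stand-up keeps drifting past 15 minutes"],
    actions=[RetroAction("Timebox stand-up to 15 minutes", owner="whole team")],
)

# First step of this retrospective: hold ourselves accountable
for action in last_retro.open_actions():
    print(f"Still open: {action.description} (owner: {action.owner})")
```

A spreadsheet or a few stickies on the wall do the same job; what matters is that open actions from the last retrospective are the first thing on the table.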

As for the timing of the meeting, I generally make sure that the retrospective is held in advance of the iteration planning so, as I mention above, any stories or tasks necessary for completion of retrospective actions can be accommodated.

Just as we try our best to eliminate work-in-progress for stories, we do the same with retrospective actions. Trying to do too many things at once is a recipe for disaster. Small incremental progress is the key.

While it is the last principle, as you can tell by the amount of commentary I have written around it, it is certainly not the least.

In fact, I contend that a team cannot achieve sustainable agility without the frequent feedback and course correction that team reflection provides.

And finally, keep on the lookout for my new and improved ebook! I am adding the principles to it, so it will be full of great Agile information for anyone interested in transitioning to Agile or needing a refresher.

Larry Apke

Agile Principles: Why You Need Self-Organizing Teams


I have often argued that the founders of Agile did not provide reasons why their approaches worked, just that they did. There was empirical evidence, proven by doing the work, or, as they state in the beginning of the manifesto – uncovering better ways of developing software by doing it and helping others do it. From their very pragmatic approach, they figured out that better software was created by following the values and principles. One of those discoveries was that better software was created by self-organizing teams.

One of the things I speak of during my talk on Complexity Theory and Cynefin (Complexity Theory and Why Waterfall Development Works (Sometimes)) is that most software development is complex and that is the reason that Agile works well and is generally preferable to Waterfall. Those projects that might benefit from Waterfall are those that are complicated, those where all the answers can be known up front and experts are effective.

Agile works better when projects are complex – when all the answers cannot be known up front and big up-front expert analysis is a liability. Also, according to George Rzevski, one of the seven criteria for complexity is self-organization: complex systems are capable of self-organization in response to disruptive events. While this addresses the fact that a complex system will self-organize and does not address self-organizing teams in particular, I believe it does tell us that, when responding to complex systems, the best use of people is to allow them to self-organize around the work.

In addition to the relationship with complexity theory, this principle also reflects the fact that the people who do the work are the best people to make decisions about architectures, requirements and designs. While to most people this would be common sense, in the world of corporate IT, with its fetish for top-down, command-and-control hierarchy, I have found it to be the exception.

In some cases I have found organizations so tied to the misunderstanding that software development is complicated (as opposed to complex) – that work can be identified, analyzed and designed at one level (the world of architects, BAs, team “leads” and so on) and simply passed down to a “lower” level (usually offshore) – that they really have no conception of what Agile means when it talks about a team.

Even if they can be convinced that they need to think of teams as the people who actually do the work, it is usually too radical to expect people to actually know their jobs and be able to organize their own work. These organizations are stuck in the world of Taylor, but all the evidence shows us that knowledge workers are squarely in the world of Deming.

Pink, in his wonderful book Drive: The Surprising Truth About What Motivates Us, tells us that people are motivated by AMP (autonomy, mastery and purpose). Not surprisingly, self-organizing teams provide a healthy dose of all three, while receiving piecemeal work from “experts” is not at all motivating. No wonder the best architectures, requirements, and designs emerge from self-organizing teams.

What are your thoughts?

Larry Apke

Agile Principles: Simplicity is Essential


In 2002, Jim Johnson of the Standish Group (made famous by their Chaos Report on software project “success”) presented findings on the features and functions used in a typical system. The features that were never or rarely used totaled a whopping 64%, while sometimes, often and always weighed in at 16%, 13% and 7% respectively. For those acquainted with the Pareto principle (80/20 rule), notice that the never, rarely and sometimes used features total exactly 80%, leaving the often and always used features – the things we should concentrate on building for our customers because they bring the most value – at just 20%.

In other words, a great deal of our effort is generally spent creating things that customers do not use or want.

This principle is often forgotten as people get caught up in the world of implementing stories and lose sight of the fact that there may be a plethora of stories that never need to be implemented at all.

What value is there in doing work faster and better if we are doing four times the amount of work that we need to do?

This principle fits well with the concept of business and development working daily. Business needs to be intensely involved with the process, if for nothing more than identifying the 80% of the work that we really don’t have to do. Just think of the amount of money that could be saved every year by reducing project scope to only those features and functions that are actually used! Think of how quickly we could deliver functionality! Think of how many more “projects” we could complete!

While simplicity provides huge benefits with regards to the stories and work that we choose to implement, it also applies to the implementation of stories that we choose. As I have written about so many times, by using techniques like BDD and TDD we write only that software that is necessary to implement the acceptance criteria and are not tempted to “gold plate”.

TDD provides us with a certain simplicity at the code level while also providing us the ability to allow our code to evolve over time to satisfy changing requirements. Simplicity of code allows us to refactor code mercilessly which is essential to agility over the long term.

In the end, simplicity of what we do and how we do it results in producing the most valuable software in a high quality manner and this is essential to being agile.

Larry Apke

Agile Principles: Excellent Design Needs BDD & TDD


This principle is much like the previous one about sustainable development. Agile doesn’t ask us to shortcut quality and increase technical debt in an effort to deliver software faster. It is precisely because we do not shortcut quality and incur technical debt that we are able to move faster.

I have worked with many teams to introduce Behavior Driven Development (BDD) because, among a great number of other advantages, BDD allows developers an easier way to access the practice of Test Driven Development (TDD). And, in my experience, TDD is the only way I have seen out of the practice of “Big Up Front Design”.

Big Up Front Design is generally a waterfall practice in which architects and designers spend a great amount of time before coding begins attempting to foresee all possible design considerations in advance so that the final design can be implemented without issues. The problems with this approach are outlined wonderfully in Martin Fowler’s “Is Design Dead?” blog post.

The one I would concentrate most on is the issue of changing requirements. Since most of software development falls in the complex quadrant (see my Complexity Theory and Why Waterfall Development Works (Sometimes) presentation), it generally has a great deal of nonlinearity (sometimes referred to as the “butterfly effect”). This means that any small change in requirements can have a great ripple effect, usually nullifying the extensive work that designers and architects (our experts) have created.

If you want a rule of thumb measure of an organization’s (or team’s) relative agility, bring up the word refactor. A rigid organization will recoil in horror while an agile one will recognize refactoring as desirable. The answer to nonlinearity is the concept of evolutionary design and this is simply not possible without refactoring and refactoring is simply not possible without a safety net. That safety net is created by a suite of tests that were created as a result of TDD (using something like BDD) and are leveraged via continuous integration and continuous delivery.

With respect to this principle, continuous attention to technical excellence is expressed through the XP practices of BDD/TDD and CI/CD: learning to create testable requirements (via BDD), expressing them as tests created prior to coding (TDD), and getting near-instantaneous feedback (CI/CD). I can be assured that my refactoring of the design addresses not only the new requirements but also the legion of existing requirements (through automated regression), so that nonlinearity is not expressed as regression defects in the final product.
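
To make that test-first rhythm concrete, here is a minimal sketch in Python with pytest. The requirement, the function name and the numbers are invented for illustration; the point is that the acceptance criterion is written as a test before the code exists, and that same test then becomes the automated regression safety net when the design evolves.

```python
import pytest

# Written first: this test states the acceptance criterion in
# Given/When/Then terms and fails until final_price() below exists.
def test_loyal_customer_gets_ten_percent_off():
    # Given a loyal customer with a $200.00 order
    # When we price the order
    # Then the customer pays 10% less
    assert final_price(200.00, loyal=True) == pytest.approx(180.00)

# The simplest implementation that makes the test pass. With the test
# green we are free to refactor this mercilessly; the test is the
# safety net that catches any regression the refactoring introduces.
def final_price(order_total: float, loyal: bool) -> float:
    discount = 0.10 if loyal else 0.0
    return order_total * (1 - discount)
```

Multiply that single test by hundreds, run them on every commit through continuous integration, and nonlinearity shows up as a red build within minutes instead of as a defect in the final product.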

Because of the technical excellence above I can then use evolutionary design to create not just “good” but excellent design.

“This might all sound fine,” you say, “but I live in the real world and it doesn’t work for us because of (fill in your favorite excuse).”

To you I say, yes, it does work.

No matter what your situation, you can leverage the above practices. That is not to say that it won’t be challenging, because you may be struggling with existing technical debt, but it is possible if your organization has an understanding of the costs of not adhering to this principle.

For the skeptics out there I leave you with a little story:

There was one team that I worked with to teach BDD/TDD, and I pushed them to adopt these practices. Though they were initially skeptical, they did it anyway. After only a few short days they began to see the “method to my madness” and roundly declared that this was the way that all software should be developed.

After a few months of success, they were asked to present what they learned to other teams. Like a proud father I stood in the background and listened to what they said. Not only did they say that they couldn’t imagine doing software development without these practices, but they admitted that there were times they felt pressured (as we all do at times) to produce software faster and shortcut the process – and every time they did, it was that code that was later identified as having defects; defects they had to spend time fixing.

Because the time spent finding and fixing code that wasn’t created using TDD was greater than if they had slowed down and done the initial coding properly, trying to write code faster by neglecting technical excellence was actually slower in the long run.

To this day these folks that I had the pleasure of teaching a bit of technical excellence to email me from time to time to tell me that they have convinced yet another team (or vendor) to pursue technical excellence.

Why? Because continuous attention to technical excellence and good design enhances agility.

Larry Apke

Agile Principles: How to Maintain a Sustainable Pace


Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

When I think on this principle I cannot help but think about the potential “dark side” of agile and how it can be misunderstood and implemented incorrectly. It also reminds me of an interesting story I was told by one of my coaching colleagues recently.

Once upon a time a company hired a very talented vice president of software development. Unfortunately, when this brave soul entered employment the amount of technical debt in the code was enormous. This was a situation that needed to be fixed because this spaghetti code was very expensive to maintain and made it difficult to deliver software quickly and with quality.

The company’s leadership heard about agile and decided that this was the answer to all their problems so they set about sprinting. Since the concepts are so easy they felt they could forge ahead without expert agile scrum help. In their quest for agility they found that they could indeed write code faster, but without proper guidance they forgot about the concept of sustainability and did nothing more than create technical debt faster. Unfortunately for our VP, the pleas to adopt sustainable agility went unheeded and six months was all the VP could take before moving on.

The bottom line is that many companies misuse agile because they think that by being agile they can cheat the iron triangle of development. What too few people realize is that you don’t choose two of three sides; it is actually an iron square where you choose three of four sides (scope, resources, schedule and quality or, as Jeff Atwood refers to it, an Iron Stool). You misuse agile when you choose everything but quality, because the code becomes unmaintainable over time and agility becomes mired in the big ball of mud you have created. I refer you to my article about refactoring in the Agile Record for the problems with unnecessarily complex and technical-debt-laden code.

The misguided desire to emphasize speed over quality leads to the accumulation of technical debt and is a symptom of project (rather than product) centric thinking. As I noted in an earlier blog post, there are reasons why no one washes a rental car. Over time you will no longer get speed or quality, and your ability to sustain agile over long periods of time is compromised.

I try to run at least a few miles every day, but I do not sprint the entire run. If I did, I would barely make it more than about a quarter of a mile. This is why I have begun to prefer the term iteration over sprint. Sprinting goes against this principle because sprinting is, almost by definition, unsustainable. It certainly is not “constant pace indefinitely”.

I argue that in order to maintain a constant pace indefinitely there are two things a team must do and an organization must support: acceptance test driven development (ATDD) and continuous integration / delivery (CI/CD). I currently believe that BDD is the best means of accessing ATDD (and TDD), so I have taught it to my clients with spectacular success.

Without ATDD and CI/CD, all teams are doing is what I call feature chasing. The question is no longer one of sustainability but of how many new features can be delivered and how quickly. While this might be important for startups, most organizations are not in such a high-competition situation, and chasing features at the expense of quality and long-term sustainability is ludicrous. Even those who must feature chase to remain competitive must recognize that they are creating technical debt that must be paid, and paid quickly, before servicing the “interest” on the debt is all that can be afforded.

Interestingly enough, though many people believe that employing ATDD, TDD and CI/CD slows the progress of software delivery, my experience is that, with very little training and a healthy dose of discipline, the gains far outweigh the investment. This is obvious if we look at the product and not just the project, but my experience shows that even within the misguided and arbitrary project the payoff is realized.

I have a number of teams that I have coached that delivered high quality software into production in short project time frames precisely because of, and not in spite of, BDD. As Bob Martin states, “The only way to go fast is to go well,” and no one is more recognized as an expert on quality code than he is.

My last point is related to the above: one of the greatest dangers of feature chasing is not just that we tend to accumulate technical debt faster, but that it (and the sprint-as-fast-as-we-can mentality) generally pressures us not to take advantage of training opportunities like learning TDD, BDD and the like. With technology changing so quickly, it is critical that our people invest their time not just chasing features but building the skills necessary for sustainable development so they can maintain a constant pace indefinitely.

Larry Apke

Agile Principles: Working Software is Primary Progress


Metrics. Metrics. Metrics. We love numbers. We measure and put numbers to all kinds of things. We use these numbers to mark our projects as red, yellow and green (of course, the project is always green until there are a few weeks left, when someone finally blinks, acknowledges reality and begins to use yellow or, god forbid, red).

Unfortunately, in our headlong rush to create metrics we tend to forget the why of what we are doing. Numbers and statuses become an end unto themselves.

There are a myriad of problems with this. First, what gets measured gets done. In our rush to get numbers we need to be very careful, because measuring the wrong things will lead to all kinds of behaviors that can be detrimental to long term sustainability. For example, one company I worked for misunderstood the team velocity metric and rewarded teams based on the number of points completed. What happened? Over time the point values for stories increased so that teams would look better, but throughput did not increase. This misuse of story points completely invalidated even their relative, gross sizing, to the point where they could no longer be used to give the business accurate information about what teams were capable of over the long term. In other words, the valuable ability to be predictable was lost in service of a poorly understood metric.
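
To see how the distortion plays out, here is a small illustration with made-up numbers: velocity in points climbs sprint over sprint, which looks like improvement to the reward system, while the number of stories actually delivered never moves.

```python
# Hypothetical data for one team over four sprints.
# "points" is what the reward system watched; "stories_done" is real throughput.
sprints = [
    {"sprint": 1, "points": 30, "stories_done": 10},
    {"sprint": 2, "points": 38, "stories_done": 10},
    {"sprint": 3, "points": 47, "stories_done": 10},
    {"sprint": 4, "points": 60, "stories_done": 10},
]

for s in sprints:
    avg_size = s["points"] / s["stories_done"]
    print(f"Sprint {s['sprint']}: velocity={s['points']} pts, "
          f"throughput={s['stories_done']} stories, "
          f"average story size={avg_size:.1f} pts")

# Velocity doubled while throughput stayed flat: the points inflated,
# so historical velocity can no longer be used to forecast what the
# team can actually deliver.
```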

The next problem is that we tend to measure those things that are easy to measure, not necessarily those things that are important. There is an old joke about a drunk man looking for his keys under the street lamp.

I found the following account on Wikipedia:

A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, that he lost them in the park. The policeman asks why he is searching here, and the drunk replies, “This is where the light is.”

If we measure only those things that are easy to measure (usually easy to quantify with numbers) as opposed to those things that really matter, then we are no better than that drunk man looking under the streetlight because the light is better there. Just as he will never find his keys, we may never find the truth by measuring what is easy rather than what is important.

I often quote from Deming when discussing measurements: “The most important things cannot be measured,” and, “The most important things are unknown or unknowable.”

There is a very simple example that I use often when explaining this concept.

I ask people, “Do you have children?” “Do you love them?” “How do you go about measuring this love?” “Do you use minutes spent? Money spent? A combined weighted score that takes into account both money and time? Or do you do some regular poll of your children to see how loved they feel on a Likert scale?”

Obviously, the love a parent has for his or her children is of paramount importance, but this is something very hard to measure.

I once spoke to a group of project managers and explained that we measure way too much. We measure things that are either easy to measure or do not really result in better behavior. You would have thought that I had advocated clubbing baby seals! They decided that I was against all measurement. The answer is not that I am against all measures, but that I know measures are limited in value due to the reasons outlined above (and many other human biases), so we need to measure less and be very careful what we measure.

In software development the primary measure of progress has to be working software that meets the needs of the end users. Of course we can measure other things, but there is no more important measure and all other measures need to be subservient to our ability to produce working software.

Larry Apke

Remember to check out my !