Wagerfall


The end of the year is upon us and, like most, I have begun to reflect on the year that was 2015. It was a great year for me both personally and professionally. Earlier this year I was given the opportunity of a lifetime to help build an Agile practice and continue my work as an Agile coach. As I looked back on 2015 I could not help but notice the really big news in the world of Agile was the concept of scaling. There has been a proliferation of scaling frameworks. Time will tell which, if any, will be beneficial, but it did get me to wondering if maybe there wasn’t room for just one more.

What follows is a satirical press release for the newest of Agile scaling frameworks. My purpose was not to offend or disparage, but to amuse. My apologies if I missed the mark. I wish you all a happy and prosperous new year!


“Wagerfall” – A New Way to Scale Agile

Agile Scaling Society announced today a new framework for scaling Agile called “Wagerfall”. This new lightweight framework trumps all others in an increasingly crowded field for the ease of its implementation.

NEW YORK, NEW YORK (PR FILE) DECEMBER 30, 2015

You might be familiar with Agile scaling frameworks like SAFe, DAD, LeSS and the like. Today a new scaling framework has joined the list: Wagerfall. Wagerfall is the brainchild of Steven Anderson of the Agile Scaling Society. According to Anderson, Wagerfall is different in that it represents the easiest of all the scaling frameworks.

“The beauty of Wagerfall is in its simplicity. The framework is nothing more than the name Wagerfall because the name explains it all. You start with your current waterfall and sprinkle in a little agile somewhere in the middle. The “g” in the middle represents the Agile part,” asserts Anderson. When asked why not the “ag” for agile, Anderson mumbled something about it not being mandatory and holy wars.

The announcement today also coincided with another press release detailing Wagerfall certification. In the spirit of lean and reduction of waste, the Agile Scaling Society has done away with any mandatory training or testing to get certification. “Why go to the trouble of pulling someone out of their job for two days?” asks Anderson. “Send us your money and we will send you a lovely (and very fancy) certificate you can hang on your wall and you too can say that you are Wagerfall certified. Doesn’t it make sense if your company is not going to change anyway?” When asked why certification cost so much, Anderson replied, “science has taught us the more money we invest in something, the more value we place on it.”

There are already a great number of clients joining the ranks of Wagerfall. Donald Love, CIO of Great Big Company, credits Anderson and Wagerfall for their successful Agile transformation. “I can’t tell you how refreshing it was to work with Wagerfall. We transitioned to Agile immediately without the messy process of organizational change, with all the training and thinking that comes with it. I can now check this one off my list and get my big fat bonus.”

It is not just the C-suite that has fallen in love with Wagerfall; the frontline workers have embraced it as well. Vijay Patel, a software developer, credits Wagerfall with allowing him to go about his business as he has always done. “We’ve tried other Agile transitions, and I believed in them. I put myself out there and honestly tried to change. When the truth surfaced and us developers found out it was just lip service, we were crushed. Our morale is still low under Wagerfall, but we know Wagerfall is just lip service, so we got that going for us.”

In addition to the framework and the certifications, Anderson has a cadre of Wagerfall coaches. Anderson states, “The problem with most Agile coaches is that they are either not competent or too earnest. Wagerfall deals with these two problems head on. Since there is no recognizable change, competency is not an issue. Furthermore, our coaches provide a patina of credibility without pestering people to change their existing behavior. We often refer to what we do as ‘homeopathic agile.’”

While other scaling frameworks have detailed flowcharts, organizational structure documents, etc., Wagerfall avoids such complexities. Mindy Minter, Head Architect at Great Big Company, praises Wagerfall for its simplicity. “We are big believers in the KISS principle. You can’t get more KISS than Wagerfall. Pay your fee. Get your certification. Claim you’re Agile.”

According to Anderson, perhaps the most valuable aspect of Wagerfall is the ability to roll back should the transition not work out as planned. “Just imagine, run a global search and replace on all your process documentation. Voilà. Wagerfall is turned back to Waterfall and you can go about your business as if nothing ever changed.”

About Agile Scaling Society

The Agile Scaling Society, headquartered in New York (to give it legitimacy), was founded on the belief that most companies want to be Agile without the hard work of actually changing their culture, philosophy or business processes. They provide certifications, training and coaching to allow companies to claim to be Agile while operating exactly as they always have. They claim that literally hundreds of major companies are currently following the Wagerfall framework and are in litigation with many of them for infringing on their trademarked Wagerfall process.

About Steven Anderson

Steven Anderson has been described as a “jack of all trades” with experience in selling homeopathic medicine, street preaching and HVAC. While new to Agile, he recognized the opportunity to make money and has embraced it. He once owned a PC and wrote some Excel macros. He has parlayed this experience into a successful consulting company and has recently founded the Agile Scaling Society.

 

On Better Hiring (or Do Coding Interviews Work? Part 2)


Last week I wrote about coding interviews and questioned whether they are the best method to predict future job success. There were strong opinions on both sides of the argument. Someone expressed the opinion that I was taking a quote from Laszlo Bock, Senior Vice President of People Operations at Google, out of context, so I would like to begin by giving him the first word on what Google has discovered about interviews. Interestingly, he refers to cognitive biases, specifically confirmation bias (as I did in my post), as a reason traditional interviews are not good indicators of job performance.

In April 2015 he wrote an excellent article for Wired Magazine and I encourage everyone to take a look so that they can form their own conclusions. I will quote at length in order to make sure that Mr. Bock’s ideas are not taken out of context. In the article he stated:

In 1998, Frank Schmidt and John Hunter published a meta-analysis of 85 years of research on how well assessments predict performance. They looked at 19 different assessment techniques and found that typical, unstructured job interviews were pretty bad at predicting how someone would perform once hired.

Unstructured interviews have an r2 of 0.14, meaning that they can explain only 14 percent of an employee’s performance. This is somewhat ahead of reference checks (explaining 7 percent of performance), ahead of the number of years of work experience (3 percent).

The best predictor of how someone will perform in a job is a work sample test (29 percent). This entails giving candidates a sample piece of work, similar to that which they would do in the job, and assessing their performance at it. Even this can’t predict performance perfectly, since actual performance also depends on other skills, such as how well you collaborate with others, adapt to uncertainty, and learn.

I believe that the work sample test he refers to is analogous to the coding interview. If not, I will err on the side of caution since equating the two would mean that coding interviews are the most effective. Even so, the very best interview technique is only twice as effective as an unstructured interview in predicting job performance. I hope some statisticians can weigh in on the topic, but an r2 value of 0.29 does not seem to be very indicative of future job success. I vaguely remember from college that in physical sciences we look for much higher r2 values before we accept a hypothesis as proven.
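To make those numbers concrete, here is a minimal sketch (plain Python, using only the percentages quoted above) of what each assessment technique leaves unexplained:

```python
import math

# Variance in job performance explained (r2) by each predictor, per the
# Schmidt & Hunter figures quoted above. 1 - r2 is the share of performance
# the predictor tells you nothing about.
predictors = {
    "unstructured interview": 0.14,
    "reference checks": 0.07,
    "years of experience": 0.03,
    "work sample test": 0.29,
}

for name, r2 in predictors.items():
    r = math.sqrt(r2)  # the underlying correlation coefficient
    print(f"{name}: r = {r:.2f}, unexplained variance = {1 - r2:.0%}")
```

Even the best predictor on the list leaves roughly 71 percent of on-the-job performance unaccounted for, which is exactly the concern raised here.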

Bock goes on to claim that using a combination of interview techniques in a “structured interview” is even more predictive (though he does not present any r2 values for the combination of techniques). And, believe it or not, I agree with him. He has presented some real scientific information that a structured interview process is better than an unstructured one, but I still wonder if we have committed a type 3 error (solving the wrong problem precisely).

Bock and I both agree that “the goal of our interview process is to predict how candidates will perform once they join the team,” but he also admits that doing things as he recommends is “hard to develop” and “a lot of work.” Even Bock states:

Full disclosure: I’m the Senior Vice President of People Operations at Google, and some of these (unstructured) interview questions have been and I’m sure continue to be used at the company. Sorry about that.

And this is where I would begin to question whether this process is one that everyone should follow. Of course, for highly desirable and highly capitalized companies like Google, Facebook, etc. the benefits likely outweigh the costs, but for the majority of companies that interview software developers, this may be a luxury they cannot afford. This overall hiring style is time-consuming. There is no doubt that developers are willing to jump through hoops to work at Google or Facebook, but such is not the case for most companies where time is not a friend. If a majority of companies use this process they may find a much higher cost in potential candidates lost than the marginal effect of hiring better. This is the reality that many managers and companies face.

This begs the question: what should those who are not Google or Facebook, those without unlimited resources and legions of potential candidates, do to improve their hiring of software development professionals? The first thing is to be aware of the true costs and benefits associated with pursuing one path over another. Eschew convention and ask the hard questions like, “Given limited time and financial resources, is this the best use of either (or both)?” It appears that Bock agrees there is a high cost to more effective interviews, but for most companies is the higher cost justified? Is there a better way to allocate time and money? I believe there are things that should be considered and evaluated.

One thing I have suggested in the past is the concept of apprenticeship and creating a pipeline of talent from within a company. This is an essential aspect of good management and would reduce the need for interviewing new candidates. My experience is that in software development in particular we often look for perfect fits and do not do a good job of creating a pipeline of candidates. This puts us “behind the eight ball” and increases the risk of hiring poorly, creating high pressure to make the right hire.

If we are truly looking for the best predictor of how well someone will perform on the job, then we could certainly achieve a much higher r2 value by actually putting someone on the job for a limited period (or at least give them an experience that would be as much like the job as possible). In response to my original blog, Duncan Campbell had perhaps the most insightful comment when he wrote:

The best way to find out if someone is good for a job is for them to do the job… which is why our “coding interview” is a real problem done in the candidate’s own time on their own PC followed by a code review.

I agree with Duncan, but it may even be cost-effective to go further. Instead of spending many hours via interview, make the quick decision to bring the candidate into the company on a trial basis, perhaps a paid two-week trial. We all seem to have projects that are less critical that would be a good proving ground for potential long-term employees. If the candidate doesn’t work out after two weeks, then part company. It may prove more cost-effective for managers to actually do the work of managing existing people than spending a huge chunk of time in a laborious interview process.

Another possibility is the concept of contract-to-hire or using staffing agencies to do the heavy lifting. Full disclosure: I work for a company that has staffing as one of our offerings. Nevertheless, having another company take the time to screen candidates and having the employment-related risk owned by a third party does have its place in this discussion.

My original article, titled “Do Coding Interviews Work?”, was purposely open-ended. I have tried my best to present information to help people answer that particular question. The title was not “are there better ways to interview software developers?” for a reason. I am truly questioning any interviewing technique because of the high cost and the low correlation – even when interviews are conducted perfectly (or at least as perfectly as science has instructed us). I do not think that we will get rid of interviewing altogether, but it is important to know that for a great majority of companies there may be alternatives that, given all the costs and benefits, may be more effective ways of answering the larger question of how best to hire. I gave a few suggestions. I look forward to others.

Do Coding Interviews Work?


I have recently come across some interesting information regarding coding interviews. If you are not familiar with coding interviews, these are interviews for technical people, usually software developers, to prove that they have the ability to code, so they are sometimes referred to as programming interviews. These can be taken as a computer-based test or, frequently, as whiteboard exercises. They often take the form of brain-teasing riddles or binary search questions. The premise is that these coding interviews, conducted in an arbitrary environment, are a good proxy for determining whether or not someone will perform well in the real world.
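For readers who have never faced one, a representative whiteboard exercise is the classic binary search over a sorted list — sketched here in Python purely as an illustration of the genre, and usually expected from memory with no reference materials:

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2  # midpoint of the remaining search window
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1  # discard the lower half
        else:
            hi = mid - 1  # discard the upper half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # → 4
```

The premise being questioned is whether producing this at a whiteboard, marker in hand, predicts anything about day-to-day work.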

As with all things, instead of relying on our human instinct, which is riddled with cognitive biases, we must rely on science to understand true cause and effect. The science has spoken loud and clear; there is no relationship between coding interviews and performance on the job for software developers. Don’t believe me? Here’s what Laszlo Bock, Senior Vice President of People Operations at Google, had to say on the topic:

…everyone thinks they’re really good at it. The reality is that very few people are.

Years ago, we did a study to determine whether anyone at Google is particularly good at hiring. We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship. It’s a complete random mess…

It appears to me that very often the interviewer is much more concerned with showing the candidate how astute he or she is as opposed to finding out whether or not the candidate is a good fit for the position. I recently read a blog post that stated that candidates should spend a great deal of time preparing for these coding interviews, in the neighborhood of about 40 hours. While this might be what it takes to “ace” such an interview, it still begs the question of whether the coding interview is actually predictive of the candidate’s ability to function in the position. It is not.

This is where the cognitive biases come in. It appears that there is a great deal of the illusion of control, which, as humans, we are highly susceptible to. We think that somehow we are able to ask some questions and magically be able to determine how one will perform on the job. I would expect there is a bit of confirmation bias because we are subject to cherry-picking our evidence to support our previously held views (i.e. coding interviews are effective) and a similar bias called choice-supportive bias which is the tendency to remember one’s own choices as better than they actually are. I am certain that a whole host of other biases can be brought forth which not only explain why we think coding interviews are effective when there is evidence to the contrary, but also the stubborn way in which these have continued to persist in spite of such evidence.

In my career I have taken a few of these interviews and I may have my own biases since I don’t recall ever getting a job offer after one of these interviews. I remember taking one many years ago on SQL and ETL. I had been doing SQL and ETL quite successfully for over a year and knew I could perform very well in the position.

Nevertheless, the test was taken not on my own computer, but on a computer that I was wholly unfamiliar with, a laptop with a built-in mouse. I remember that I had some frustration just with the configuration of the computer I was using. I also remember that the majority of the questions I could have easily answered had I been able to use reference materials like I would be able to do in the real world. It felt like the test was measuring how well I could fix my parachute after I had been thrown from the plane. It did not measure how I would perform on my job, but how well I had memorized simple syntax that is probably not worth memorizing.

I know there are those who will say that one should remember such commands, but given that the average programmer contributes five lines per day to the final product, does it really make that much sense? Perhaps it would be better to fill one’s mind with other more important things? What I do know is this – had I been offered the position I would have outperformed many who would happen to ace this test because I have a wealth of experience outside of the ability to memorize coding syntax.

In a recent blog post I used the tongue-in-cheek title “Accenture Ends Annual Review (and Admits Earth Orbits the Sun)”. Of all my dozens of blogs (I have posted over 100 over the years), this was perhaps the most provocative of them all and certainly the most popular, with literally thousands of views. In this case it took literally decades to finally admit what science has taught us with respect to annual reviews. Therefore, I expect that coding interviews will be with us for some time to come, but at least I can look forward to the day when I write the blog “Company X Abolishes the Coding Interview (and Admits Earth Is Round).”

Brainstorming – Effective Technique or Sacred Cow?


I have spent a great deal of time studying and reading about human cognitive biases and their effect on business, especially the business of software development. This past weekend I finished the groundbreaking book by Stuart Sutherland, appropriately titled “Irrationality: The Enemy Within”.

Since I have studied the topic quite a bit previously, some of the material was either referenced by other materials or had lost its shock value, since I have become thoroughly convinced of humankind’s built-in propensity not only for irrational behavior, but also for an inability to recognize that these biases are a problem. In fact, my experience is that a large segment of our population is not only ignorant of biases but seems to revel in a willful ignorance of scientific evidence. Certainly there appears to be a great deal of cognitive bias (mostly the confirmation bias) in the debate on climate change.

My previous study of the topic notwithstanding, and though the book was published in 1992, the information is still relevant, interesting and cogent. I would suppose that there are a number of things that are worthy of note, but since there is such a wealth of information in the book, I decided to choose a single instance to write about here and encourage those interested in more examples to actually get a copy of the original material.

The one thing that caught my attention and has stuck in my mind is the example of using a technique called “brainstorming” to improve creativity and productivity. For those who have lived on another planet, brainstorming is the process of getting as many ideas out as possible without judging or filtering of the ideas. It has been used for decades since its introduction by Alex Osborn in the book Applied Imagination. Osborn claimed that in his experience using brainstorming in advertising agencies resulted in 44% more worthwhile ideas than individuals thinking up ideas without the benefit of group discussion.

Ever since that time, brainstorming has been widely used to improve creativity and productivity of groups. However, here’s the kicker: since as long ago as 1958, Osborn’s claims have been subject to numerous studies which almost universally cast doubt upon the effectiveness of brainstorming. Keith Sawyer, a psychologist at Washington University in St. Louis, states: “Decades of research have consistently shown that brainstorming groups think of far fewer ideas than the same number of people who work alone and later pool their ideas.” In other words, brainstorming doesn’t work quite as well as we think it does (or should).

With the scientific evidence questioning the effectiveness of brainstorming so vast, the real question is why the use of brainstorming persists. The question is at the heart of much of my agile practice in that the prime issue is not merely whether a technique is effective, but whether it is optimal. It is obvious to me that several cognitive biases are in play in keeping brainstorming around.

There is something of the availability cascade to brainstorming, “which is a self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse (or ‘repeat something long enough and it will become true’)” (Wikipedia). Furthermore, a whole host of cognitive biases around groupthink, herd behavior and the bandwagon effect certainly have their influence on the popularity of brainstorming. Since brainstorming “seems” to make sense it is also subject to the belief bias, which is seen when the believability of the conclusion leads us to misunderstand the true effectiveness of the process. Frankly, I would suppose that I could find literally dozens of cognitive biases that allow brainstorming to proliferate as the “go to” technique for group creativity and productivity.

Given that brainstorming may very well not be optimal, what are the alternatives that have actually been scientifically proven to be more effective? In a 2012 article for Psychology Today, Ray Williams proposes a few modifications to the brainstorming approach:

  • Have groups collaborate frequently by having them in close physical proximity to each other;
  • Pay attention to creating physical spaces that enable good collaboration, which facilitates people frequently “running into each other” while at work;
  • Revise the “no criticism” script of brainstorming to encourage debate about ideas;
  • Use appreciative inquiry techniques, where group participants build on ideas suggested by each individual in the group.

Most interesting to me about these suggestions is how closely they align with the things that Agile (and I) speak to, namely close attention to co-location of people within an Agile team to increase good collaboration, fostering an environment where feedback is embraced rather than treated as “failure”, and using iterative feedback to improve ideas (and software) incrementally.

There are a great number of cognitive biases inherent in human beings. The first step is to be aware that these irrationalities exist. We must also acknowledge that we, as individuals, are subject to these irrationalities. Furthermore, we need to create an environment of safety that gives us the freedom and encouragement to continually explore and seek the underlying scientific truths, the “why” of what we do – the freedom to gore the sacred cows.

Agile – It’s All About Making Better Decisions


I’ve been spending a lot of time recently doing research, reading and presenting on human cognitive biases. For the uninitiated, cognitive biases are defined as

“…a systematic pattern of deviation from norm or rationality in judgment, whereby inferences about other people and situations may be drawn in an illogical fashion. Individuals create their own ‘subjective social reality’ from their perception of the input.” (Wikipedia Definition)

In other words, cognitive biases exist when there is a gap between our perception of reality and objective reality. For example, there is the “confirmation bias” which is our human tendency to seek out or interpret information that confirms one’s existing opinions.

While the term “cognitive bias” is relatively new (it was coined in 1972 by Amos Tversky and Daniel Kahneman), researchers have already uncovered literally over a hundred cognitive biases, some of which are relatively tame, like the “google effect” (or digital amnesia), where there is a tendency to forget information that can be easily researched, and some of which can lead to more disastrous consequences, like the Sunk Cost Fallacy, where people justify increased investment in a decision based on prior investment instead of looking only at future efficacy. The Sunk Cost Effect, along with the Overconfidence Effect and Recency Effect, played a role in the May 1996 mountain climbing tragedy, made famous in the movie Everest, that resulted in the death of five experienced climbers.

A great number of cognitive biases have been found through the work of behavioral economics researchers like Dan Ariely who wrote the wonderful books Predictably Irrational and The Upside of Irrationality. Underlying all of classic economics is the concept of homo economicus, or economic man, who behaves in rational ways to maximize individual returns and acts in his own self-interest. Unfortunately, this is not the case and humans often act irrationally (and predictably so) because of their inherent cognitive biases. Humans all have biases for loss aversion and would choose to avoid loss over a larger corresponding potential gain and thus act as “homo irrationalis” as discovered by behavioral economics instead of “homo economicus” as predicted by classic economics.

It is our cognitive biases that cause us to make irrational decisions. Since behavioral economists found many of these cognitive biases, it was not a great leap to see how cognitive biases would be a paramount concern for the economics of software development. In my coaching practice, a great deal of my time and effort is used in helping organizations make better decisions about software development. Many times the optimal decisions are counterintuitive to people’s inherent biases so my job (and my passion) is helping companies see the world of software development differently so that, when it comes down to making a decision, they have all the knowledge necessary to make the optimal economic decision.

One of the most prevalent biases in software development is to see the world in a mechanistic / Tayloristic manner. Taylor’s viewpoint was fine for the old world of physical work, but does not hold up in the complex knowledge work being done by software development professionals today. Unfortunately, most of the people making software development decisions are predominantly influenced by this old, less optimal way of viewing the world, and, as a result, make sub-optimal decisions. For example, in the mechanistic worldview, adding more people to an effort results in a corresponding increase in output. If there is an existing team of seven people and we add seven more then we would (if we hold this mechanistic bias) expect the work to be approximately twice as fast. However, like the behavioral economists that found the real world to be counterintuitive to homo economicus, actual studies have found that the need for increased communication of knowledge work nearly outpaces any incremental increase in individual productivity (see Top Performing Projects Use Small Teams). I have always said that if you want to double productivity of a fourteen person team all that is necessary is to create two teams of seven.
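One simple way to see the communication overhead is the familiar pairwise-channel count, n(n-1)/2 — a back-of-the-envelope model, not a rigorous productivity study:

```python
def channels(n):
    """Pairwise communication paths among n people (n choose 2)."""
    return n * (n - 1) // 2

print(channels(7))       # → 21 paths in one team of seven
print(channels(14))      # → 91 paths in one team of fourteen
print(2 * channels(7))   # → 42 paths across two independent teams of seven
```

Doubling one team more than quadruples its internal channels, while splitting into two teams of seven cuts the total by more than half — one way to see why two small teams can outperform one large one.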

The mechanistic bias can also be seen in many of the ways that the Agile philosophy is implemented. For example, the scrum framework is often trained as a series of ceremonies and actions with little or no understanding of the reason such mechanistic actions are successful. “Scrum Masters” are “certified” with only two days of training and a simple test. The training deals with ideal situations, but when the scrum master actually has to implement scrum, he or she is woefully unprepared. In the real world compromises and decisions must be made. Without understanding the underlying “why” of agile and the basic nature of software development, the decisions and compromises that are made are not optimal. In my experience, this is why project managers are tougher to train than people with no project management experience. When faced with ambiguous information and the need to make optimal decisions, project managers tend to fall back on existing mechanistic knowledge and the decisions made range from mildly irritating to completely disastrous. As I have often pointed out, to say that one was successful with waterfall reeks of confirmation bias because it begs the question of whether or not one would have been more successful using another methodology or framework like Lean or Scrum.

In addition to the mechanistic bias, software development suffers from another bias, the project-centric bias, which is the tendency to see all work done in terms of projects. Unfortunately, the project-centric bias is so ingrained in companies that there need to be some radical changes to the way we view software development across all areas, including accounting. Viewing work as a project when we are actually working on software products results in a whole raft of poor software economic decisions like concentrating on features more than quality and security. Remember that no one washes a rental car.

As I think back on my coaching work in agile, the blogs I have written, the many discussions I have had and the presentations I have made, I think that all of these boil down to one very simple thing – my work is all about helping people understand the true nature of the software development business process and, thereby, helping them to make better decisions. Understanding our cognitive biases, therefore, is extremely important for my clients and myself because, in the end, Agile is all about making better decisions.

On Death and Dying and Agile Transformation


I was recently involved with a large scale Agile transformation and noticed what I thought was an interesting correlation, jotted down a note to blog about it and then promptly did nothing for a very long time. Usually these blinding flashes of light quickly lose their luster and find themselves relegated to the bottom of the blog backlog, never seeing the light of day, but this particular one reignited my attention as I sat down to write my newest blog.

My earth-shattering insight was that any organizational transformation, which obviously includes an Agile transformation, involves the very same stages that were first identified by Swiss psychiatrist Elisabeth Kübler-Ross in her 1969 book, On Death and Dying. For those who were sleeping through Psych 101, Kübler-Ross proposed that there are a series of stages that are experienced by survivors when faced with the death of a close friend or relative. These stages could be experienced linearly or in no particular order, but everyone would go through the five stages she recognized through her work with terminally ill patients.

The five stages are: Denial, Anger, Bargaining, Depression, and Acceptance. Though the model was originally created to explain the stages of grief following the death of a loved one, it was later expanded to encompass the grief associated with any major loss, like the loss of a job or income, or a divorce or the end of a relationship. It is my opinion that these stages can also be applied to the loss of a treasured idea. In fact, I think these stages are better explained by the death (loss) of a cherished idea, since love, attachment, and the like are all associated with mental constructs (ideas). Our world is merely the sum of our mental perceptions, so the loss of a loved one, a job, or a relationship is nothing more than the loss of an idea.

Once we understand that the grief stages are a response to the loss of an idea, it is not a great leap to apply them to any company transformation. It is well known that some will readily embrace change, but a great number see any change as a threat. What is the nature of the threat? I think that, either consciously or, more often, subconsciously, the threat is to an idea that one has grown to “love,” and there is a very real fear that replacing this idea would mean its death, and that its death would cause grief. In my experience with a great number of Agile transformations, I do tend to see the five stages that Kübler-Ross outlined.

There is certainly a large share of denial when I try to help companies become more agile. There is never a shortage of people who will defend the status quo and insist that the current way of creating software (nearly always waterfall) is already successful and that there is no need to bring in Agile. The minds of the people in denial are closed to any external threat to their enshrined beliefs.

I also see a great deal of anger during transformations. People have loved their ideas for so long that the ideas are like members of the family. How dare you Agile folks try to kill off my favorite processes? Those in this stage will do everything in their power to stop the change, railing at the purveyors of such dangerous ideas.

I see my share of bargaining too. If we cannot outright defeat the new ways, we can at least try to keep as many of the old ways intact as possible. Maybe we don’t have to kill off all the waterfall phases. Maybe we can keep the phases, but just do them in shorter time frames. Maybe we can just do this “agile” thing for development and leave the rest of the sacred cows unslaughtered. Do I really have to give up my old way of thinking and deal with the death of my ideas? Is there not room for both?

As new ideas begin to take hold, I have also seen my share of depression. People have viewed the world in one way for so long that once their ideas are shown to be outdated or not optimal, they begin to look forlorn and some even begin to despair. With waterfall gone, how am I to complete a software development project?

And finally, if the company has the intestinal fortitude to stick it out through the first four stages, you will get to acceptance. Now, with the new idea having taken hold, it probably won’t be long before you have to go through it all again with another new idea. I think the more we realize that changing our ideas means the death of old ones, and that the death of old ideas brings recognizable stages of grief, the more quickly we will be able to move through those stages and adopt new ideas.

Postscript: Interestingly enough, when I had just finished the above blog, I did some research to see if I had written on this subject before, because it seemed eerily familiar. I could find no blog that I had published on the subject, though I may have written about it before and not published. What my research did find was that I am not alone in linking the Kübler-Ross model and Agile. I found other references to this on Mindstorm and Agile Helpline. While I do not recall reading these blogs before writing my own, and it is possible that we all came to the same insight independently, I reference them here just in case I did read them at some point and forgot. Regardless, the fact that others have written about the very same topic leads me to believe in the concept’s applicability.

Post Postscript: This blog represents my 100th blog post under the agile-doctor.com website. While this is mostly symbolic and my 100th blog post will not guarantee me syndication (like a sitcom), it is still a moment to celebrate. Many thanks to everyone who has given support over the years! Stay agile my friends!