Agile Principles: Team Reflection Provides Growth

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

I coach most of my teams by first modeling the behavior I expect, then letting the team copy that behavior while I observe and correct until it matches what was modeled. To achieve this, a great deal of my time is spent as an active scrum master – and I love it! I love having the opportunity to interact with individuals on teams, but after literally hundreds of iteration plannings and reviews (and thousands of daily stand-ups), it can be difficult to keep things “fresh” or stay motivated. That is never a problem when it comes to retrospectives, which are what the twelfth (and last) principle refers to.

Retrospectives are my favorite of all the scrum activities because they represent the opportunity to reflect on how we are working and to adjust our behavior to be more effective.

I have said to my teams on many occasions that if I were forced to choose only one scrum ceremony, my choice, without hesitation or reservation, would be the retrospective. Without it, how could we ever expect to improve? What essential difference would an “Agile” project have over the many death march projects that teams have come to accept?

There are always better ways of doing business. As a team we must come together frequently to honestly discuss current processes, evaluate potential alternatives and then experiment with them to prove (or disprove) their value.

Since not everything works well for every team, it is important that potential improvements are seen as experiments. A change may be the right thing to do, but merely not at the right time. I regularly tell my teams that we need to be “scientists”: there are many things we will try without knowing the result, but it is important to propose a theory and conduct a proper experiment.

I extend this concept of theory, implementation / observation and retrospective to more aspects of a team than just the official retrospective. For example, instead of time “estimates” I propose that we have a theory about how long something will take, and that we will test this theory by the end of the iteration. This is a less threatening way of estimating, and it lets us use real information to adjust future estimates (since, regardless of the methodology, people will quite logically expect to understand what teams are capable of in the long term).
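
To make the idea of estimates as testable theories concrete, here is a minimal sketch (the numbers and the adjustment_factor helper are purely hypothetical, not from any real team) of how past estimate-versus-actual data might feed back into the next estimate:

    # Hypothetical sketch: treat each estimate as a theory, test it at iteration end,
    # and use the observed error to inform the next estimate.

    def adjustment_factor(history):
        """Average ratio of actual to estimated effort across past iterations."""
        ratios = [actual / estimate for estimate, actual in history if estimate > 0]
        return sum(ratios) / len(ratios) if ratios else 1.0

    # (estimate, actual) in ideal hours for three past iterations -- made-up numbers
    history = [(40, 52), (30, 33), (45, 54)]

    raw_estimate = 38                       # the team's "theory" for the next story
    factor = adjustment_factor(history)     # how far off our theories have been
    print(f"Adjustment factor: {factor:.2f}")
    print(f"Informed estimate: {raw_estimate * factor:.0f} hours")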

One thing I have noticed in regard to retrospectives is that teams struggling with Agile transformation will often drop the retrospective ceremony (while keeping all the others, though I have noticed a tendency to sit during stand-ups as well). There are a number of possible reasons for this. One is that the organization is so addicted to a top-down problem solving approach that management does not value identification and solving of problems at the team level, so the meeting is quickly dropped. In these organizations honesty is rarely valued, and without honesty a proper retrospective cannot occur. One team I coached referred to their organization’s “more than healthy aversion to reality.”

I have run into organizations that have a misguided desire to track team experiments. These “best practices” shops think that if we can only identify what works for one team, we can codify that practice and force all other teams to adopt it. This is a misunderstanding of Agile. While there may be “best practices” that work for all teams, this approach flies in the face of self-organization and ignores the fact that each team matures at a different rate, so the practice that works for one team may be completely inappropriate for another. I certainly do not have the same expectations for my seven-year-old that I have for my twenty-year-old.

Tracking experiments at the organization level also makes it more difficult to experiment, producing a chilling effect on teams. Instead of trusting that teams will attempt the experiments that are best for their particular circumstances, there is one more hoop to jump through, one more layer of monitoring, one more instance of distrust and more business as usual. As a result, teams experiment less, progress stagnates, the team is disincentivized and any real change is exchanged for “acceptable” window dressing. After experiencing a few retrospectives under these constraints it is not difficult to see why teams choose not to pursue more of them.

There are also times when teams have simply not had coaching or training on how to properly conduct retrospectives. Some retrospectives become little more than glorified bitching sessions with no substantive changes discussed or attempted. This happens frequently when organizations tell teams that they will be agile but do not provide real support in removing the systemic obstacles to team success. For every problem identified in a retrospective there should be a corresponding action – even if that action is to escalate the item to management. Once something has been escalated to management, it is important for management to be held accountable to the team and, from time to time, to report to the team on the progress of removing the impediments.

More importantly, since many problems may be systemic and beyond the ability of the team to change, it is incumbent on the scrum master to keep the team focused on the problems that can be solved. It is best to start small with things like meeting times, rooms, etc. Fixing “small” things that are in the team’s power can go a long way in helping them to “gel” as a self-organizing team.

As to the retrospective meeting itself, there are quite a number of folks who have interesting facilitation techniques. Personally, I find most of these “team building” techniques forced, gimmicky and condescending to adults. If they work for you, fine, but they are not my way.

Here’s how a typical retrospective meeting goes for me. I encourage everyone to try out my process to see if it works for your team. If it doesn’t, then try, try again!

I keep my retrospectives simple and generic by asking:

  • What went well?
  • What could be improved?
  • What actions can we take? (To ensure more of the first and less of the second)

Gimmicks trade substance for style, and if I do a good job of the “vanilla” retrospective, I find that I can keep it interesting and people engaged without tricks. The content of the discussion wins the day!

And I start each meeting with a simple statement of our intent, usually including something like Norm Kerth’s Prime Directive, assuring the team that the purpose of our discussion is to improve our work as a team and nothing is personal.

The second thing, which is often forgotten and is critical to retrospective success, is to take some time at the beginning to review what was discussed and which actions were undertaken in the last retrospective.

This way we can hold people accountable for delivering on the changes promised and not constantly tread over the same issues again and again. Of course, if the same problems persist and they make a good jumping-off point for discussion, there is no rule to say that we cannot re-discuss or re-emphasize something from a previous meeting. Without this follow-up step I have witnessed a great number of teams with great intentions, but poor execution.

The next thing that we do is to go over the three columns:

  • For each thing that we identify as going well (or not so well) the team is encouraged to come up with some kind of action that will help us continue the good and improve on the not so good.
  • For every action that is taken, someone (or a group) is assigned to be accountable. If need be, a retrospective action can be planned during iteration planning to make sure there is time to get the work done.
  • The meeting ends with a statement of appreciation for the team’s honesty and courage in improving their work.

As far as timing of the meeting, I generally make sure that the retrospective is held in advance of the iteration planning so, as I mention above, any stories or tasks necessary for completion of retrospective actions can be accommodated.

Just as we try our best to limit work-in-progress for stories, we do the same with retrospective actions. Trying to do too many things at once is a recipe for disaster. Small incremental progress is the key.

While it is the last principle, as you can tell by the amount of commentary I have written around it, it is certainly not the least.

In fact, I contend that a team cannot achieve sustainable agility without the frequent feedback and course correction that team reflection provides.

And finally, keep on the lookout for my new and improved ebook! I am adding the principles to it, so it will be full of great Agile information for anyone interested in transitioning to Agile or needing a refresher.

Larry Apke

Agile Principles: Excellent Design Needs BDD & TDD

This principle is much like the previous one about sustainable development. Agile doesn’t ask us to shortcut quality and increase technical debt in an effort to deliver software faster. It is precisely because we do not shortcut quality and incur technical debt that we are able to move faster.

I have worked with many teams to introduce Behavior Driven Development (BDD) because, among a great number of other advantages, BDD allows developers an easier way to access the practice of Test Driven Development (TDD). And, in my experience, TDD is the only way I have seen out of the practice of “Big Up Front Design”.
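
To make this concrete, here is a minimal sketch of what a BDD-style specification might look like before any production code exists. The shopping-cart example, the discount rule and the class name are invented for illustration; the point is only that the desired behavior is written down first, in Given/When/Then form, as an executable test:

    # Hypothetical BDD-style specification, written before the production code.
    # The Cart class and the 10%-over-100 discount rule are invented examples.
    import pytest

    class Cart:
        """Minimal implementation driven by the behavior described in the test."""
        def __init__(self):
            self._prices = []

        def add(self, price):
            self._prices.append(price)

        def total(self):
            subtotal = sum(self._prices)
            # Behavior under test: orders over 100 receive a 10% discount.
            return subtotal * 0.9 if subtotal > 100 else subtotal

    def test_order_over_100_gets_a_10_percent_discount():
        # Given a cart containing items worth more than 100
        cart = Cart()
        cart.add(60)
        cart.add(60)
        # When the total is calculated
        total = cart.total()
        # Then a 10% discount is applied
        assert total == pytest.approx(108.0)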

Big Up Front Design is generally a waterfall practice: architects and designers spend a great amount of time before coding begins attempting to foresee all possible design considerations in advance so that the final design can be implemented without issues. The problems with this approach are outlined wonderfully in Martin Fowler’s “Is Design Dead?” blog.

The one I would concentrate most on is the issue of changing requirements. Since most software development falls in the complex quadrant (see my Complexity Theory and Why Waterfall Development Works (Sometimes) presentation), it generally has a great deal of nonlinearity (sometimes referred to as the “butterfly effect”). This means that any small change in requirements can have a great ripple effect, usually nullifying the extensive work that designers and architects (our experts) have created.

If you want a rule-of-thumb measure of an organization’s (or team’s) relative agility, bring up the word refactor. A rigid organization will recoil in horror, while an agile one will recognize refactoring as desirable. The answer to nonlinearity is evolutionary design, which is simply not possible without refactoring, and refactoring is simply not possible without a safety net. That safety net is a suite of tests created as a result of TDD (using something like BDD) and leveraged via continuous integration and continuous delivery.
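
Here is a small, purely illustrative example of that safety net in action: the test pins down existing behavior, so the implementation underneath can be refactored and the test immediately reports whether the behavior changed. The shipping-cost function and its refactored form are invented for this sketch:

    # Hypothetical example of tests acting as a refactoring safety net.

    def shipping_cost(weight_kg):
        """Original, harder-to-read implementation."""
        if weight_kg <= 0:
            raise ValueError("weight must be positive")
        if weight_kg < 1:
            return 5.0
        else:
            if weight_kg < 10:
                return 5.0 + (weight_kg - 1) * 1.5
            else:
                return 5.0 + 9 * 1.5 + (weight_kg - 10) * 1.0

    def shipping_cost_refactored(weight_kg):
        """Refactored for clarity; must preserve the original behavior."""
        if weight_kg <= 0:
            raise ValueError("weight must be positive")
        base, mid_rate, heavy_rate = 5.0, 1.5, 1.0
        mid_kg = min(max(weight_kg - 1, 0), 9)
        heavy_kg = max(weight_kg - 10, 0)
        return base + mid_kg * mid_rate + heavy_kg * heavy_rate

    def test_refactoring_preserves_behavior():
        # The safety net: any change in behavior shows up as a failing assertion.
        for weight in (0.5, 1, 5, 10, 25):
            assert shipping_cost(weight) == shipping_cost_refactored(weight)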

With respect to this principle, continuous attention to technical excellence is expressed through the XP practices of BDD/TDD and CI/CD: we learn to create testable requirements (via BDD), express them as tests written prior to coding (TDD) and get near-instantaneous feedback (CI/CD). I can be assured that my refactoring of the design addresses not only the new requirements but also the legion of existing requirements (through automated regression), so that nonlinearity is not expressed through regression defects in the final product.

Because of the technical excellence above I can then use evolutionary design to create not just “good” but excellent design.

“This might all sound fine,” you say, “but I live in the real world and it doesn’t work for us because of (fill in your favorite excuse).”

To you I say, yes, it does work.

No matter what your situation, you can leverage the above practices. That is not to say it won’t be challenging, because you may be struggling with existing technical debt, but it is possible if your organization understands the costs of not adhering to this principle.

For the skeptics out there I leave you with a little story:

There was one team I worked with to teach BDD/TDD, and I pushed them to adopt these practices. Though they were initially skeptical, they did it anyway. After only a few short days they began to see the “method to my madness” and roundly declared that this was the way that all software should be developed.

After a few months of success, they were asked to present what they had learned to other teams. Like a proud father I stood in the background and listened to what they said. Not only did they say that they couldn’t imagine doing software development without these practices, but they also admitted that there were times they felt pressured (as we all do) to produce software faster and shortcut the process. Every time they did, it was that code that was later identified to have defects; defects that they had to spend time fixing.

Because the time spent finding and fixing code that wasn’t created using TDD was greater than if they had slowed down and done the initial coding properly, trying to write code faster by neglecting technical excellence was actually slower in the long run.

To this day these folks that I had the pleasure of teaching a bit of technical excellence to email me from time to time to tell me that they have convinced yet another team (or vendor) to pursue technical excellence.

Why? Because continuous attention to technical excellence and good design enhances agility.

Larry Apke

Agile Principles: How to Maintain a Sustainable Pace

Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

When I think on this principle I cannot help but think about the potential “dark side” of agile and how it can be misunderstood and implemented incorrectly. It also reminds me of an interesting story I was told by one of my coaching colleagues recently.

Once upon a time a company hired a very talented vice president of software development. Unfortunately, when this brave soul entered employment the amount of technical debt in the code was enormous. This was a situation that needed to be fixed because this spaghetti code was very expensive to maintain and made it difficult to deliver software quickly and with quality.

The company’s leadership heard about agile and decided that this was the answer to all their problems so they set about sprinting. Since the concepts are so easy they felt they could forge ahead without expert agile scrum help. In their quest for agility they found that they could indeed write code faster, but without proper guidance they forgot about the concept of sustainability and did nothing more than create technical debt faster. Unfortunately for our VP, the pleas to adopt sustainable agility went unheeded and six months was all the VP could take before moving on.

The bottom line is that many companies misuse agile because they think that by being agile they can cheat the iron triangle of development. What too few people realize is that you don’t choose two of three sides; it is actually an iron square where you choose three of four sides (scope, resources, schedule and quality or, as Jeff Atwood refers to it, an Iron Stool). You misuse agile when you choose everything but quality, because the code becomes unmaintainable over time and agility becomes mired in the big ball of mud you have created. I refer you to my article about refactoring in the Agile Record for the problems with unnecessarily complex and technical-debt-laden code.

The misguided desire to emphasize speed over quality leads to the accumulation of technical debt and is a symptom of project (and not product) centric thinking. As I mention in an earlier blog post, there are reasons why no one washes a rental car. Over time you will no longer get speed or quality, and your ability to sustain agile over long periods of time is compromised.

I try to run at least a few miles every day, but I do not sprint the entire run. If I did, I would barely make it more than about a quarter of a mile. This is why I have begun to prefer the term iteration over sprint. Sprinting goes against this principle because sprinting is, almost by definition, unsustainable. It certainly is not “constant pace indefinitely”.

I argue that in order to maintain a constant pace indefinitely there are two things a team must do and an organization must support: acceptance test driven development (ATDD) and continuous integration / delivery (CI/CD). I currently believe that BDD is the best means of accessing ATDD (and TDD), so I have taught it to my clients with spectacular success.

Without ATDD and CI/CD, all teams are doing is what I call feature chasing. The question is no longer one of sustainability but of how many new features can be delivered and how quickly. While this might be important for startups, most companies are not in such fierce competition, so chasing features at the expense of quality and long-term sustainability is ludicrous. Even those who must feature chase to remain competitive must recognize that they are creating technical debt that must be paid, and paid quickly, before servicing the “interest” on the debt is all that can be afforded.

Interestingly enough, though many people believe that employing ATDD, TDD and CI/CD slows the progress of software delivery, my experience is that, with very little training and a healthy dose of discipline, the gains far outweigh the investment. This is obvious if we look at the product and not just the project, but my experience shows that even within the misguided and arbitrary project the payoff is realized.

I have a number of teams that I have coached that delivered high quality software into production in short project time frames precisely because, and not in spite of, BDD. As Bob Martin states, “The only way to go fast is to go well,” and no one is more recognized as an expert on quality code than him.

My last point is related to the above: one of the greatest dangers of feature chasing is not just that we tend to accumulate technical debt faster, but that it (and the sprint-as-fast-as-we-can mentality) generally pressures us not to take advantage of training opportunities like learning TDD, BDD and the like. With technology changing so quickly it is critical that our people invest their time not just chasing features but building the skills necessary for sustainable development so they can maintain a constant pace indefinitely.

Larry Apke

The Most Important (and Least Understood) Software Development Fact

I love the holidays – the family, friends, food, drink, and, of course, the gift cards. I love to read, and time off plus Amazon gift cards means the opportunity to indulge my passion. As the New Year begins, I find myself knee deep in two wonderful books about software development and the workplace.

One of these, Facts and Fallacies of Software Engineering by Robert L. Glass, I found fascinating even though it was published over ten years ago. While some of the material is dated, I found that most of the facts and fallacies are as valid today as they would have been decades ago. This alone is disconcerting. You would have expected that, by now, most of these facts and fallacies would be well understood and to some extent corrected. If anything, my experience is that there is less understanding of the true nature of software development now than ever. The book should be required reading for anyone in the business of managing software development, because it would save many a great deal of frustration by letting them learn vicariously from one of the luminaries of software development.

It was just about midway into the 55 facts (Fact 21, in fact) that I stumbled onto what could easily be construed as the most important (and least understood) software development fact in Glass’ book. The fact is titled “Complexity” and states:

For every 25 percent increase in problem complexity, there is a 100 percent increase in complexity of the software solution. That’s not a condition to try to change (even though reducing complexity is always a desirable thing to do); that’s just the way it is.

Glass, Robert L.; Facts and Fallacies of Software Engineering

Glass goes on to state, “If you remember nothing else from reading this book, remember this.” He closes his statement of fact with, “And remember, also, that there are no silver bullets for overcoming this problem. Software solutions are complex because that’s the nature of this particular beast.”
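
Taken literally, and assuming the ratios compound multiplicatively (my reading, not something Glass states), the rule means solution complexity grows roughly as the cube of problem complexity, since 2 is about 1.25 to the power 3.1. A quick sketch of the implication:

    # Rough sketch of Glass's Fact 21, assuming the ratio compounds:
    # every 1.25x growth in problem complexity brings 2x growth in solution complexity.
    import math

    exponent = math.log(2) / math.log(1.25)   # roughly 3.1

    for problem_growth in (1.25, 1.5, 2.0, 3.0):
        solution_growth = problem_growth ** exponent
        print(f"problem x{problem_growth:<4} -> solution x{solution_growth:.1f}")

    # problem x1.25 -> solution x2.0
    # problem x1.5  -> solution x3.5
    # problem x2.0  -> solution x8.6
    # problem x3.0  -> solution x30.3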

What struck me most about this particular fact and Glass’ treatment of it is how closely it aligns with my experience and things I have written and spoken about in the past. Software development is knowledge work and is generally complex, but a majority of people in software development are unaware of this fact and continue to treat software development as if it were physical (mechanistic) work and either simplistic or merely complicated.

It is by understanding this fact that software development will improve. Why? The things that we employ to combat the Complex are much different than the things that work against the Complicated. The Agile movement was a response to using complicated-world tactics in a complex world. The Agile philosophy engendered a whole host of frameworks, methodologies and development techniques, all of which address the need to manage complexity.

Why do large teams not work so well in software development? Why does scrum talk about small teams? Because small teams are better at attacking complexity; large teams are adequate for attacking the complicated. The same holds true for co-location, dedication, stability and cross-functionality. These are all things that work well against complexity.

If we want to create better software we would do well to heed Glass’s fact. We need to stop treating software development like we are building a house or assembling a car. Software is much too complex to be built using the tired old mechanistic means. Remember that as the complexity of the problem increases, the complexity of the solution increases at a much higher rate, along with the risks attendant on increased complexity. If you want to mitigate your risk, it is essential that you learn how to properly deal with complexity. You might want to learn how to become Agile.

Do Coding Interviews Work?

I have recently come across some interesting information regarding coding interviews. If you are not familiar with coding interviews, these are interviews in which technical people, usually software developers, must prove that they have the ability to code, so they are sometimes referred to as programming interviews. They can be administered as computer-based tests or, frequently, as whiteboard exercises. They often take the form of brain-teasing riddles or binary search questions. The premise is that these coding interviews, conducted in an arbitrary environment, are a good proxy for determining whether or not someone will perform well in the real world.

As with all things, instead of relying on our human instinct, which is riddled with cognitive biases, we must rely on science to understand true cause and effect. The science has spoken loud and clear: there is no relationship between coding interviews and performance on the job for software developers. Don’t believe me? Here’s what Laszlo Bock, Senior Vice President of People Operations at Google, had to say on the topic:

…everyone thinks they’re really good at it. The reality is that very few people are.

Years ago, we did a study to determine whether anyone at Google is particularly good at hiring. We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship. It’s a complete random mess…

It appears to me that very often the interviewer is much more concerned with showing the candidate how astute he or she is than with finding out whether or not the candidate is a good fit for the position. I recently read a blog post stating that candidates should spend a great deal of time preparing for these coding interviews, in the neighborhood of 40 hours. While this might be what it takes to “ace” such an interview, it still leaves open the question of whether the coding interview is actually predictive of the candidate’s ability to function in the position. It is not.

This is where the cognitive biases come in. There appears to be a great deal of the illusion of control, to which, as humans, we are highly susceptible. We think that somehow we are able to ask some questions and magically determine how one will perform on the job. I would expect there is a bit of confirmation bias, because we are subject to cherry-picking our evidence to support our previously held views (i.e. that coding interviews are effective), and a similar bias called choice-supportive bias, which is the tendency to remember one’s own choices as better than they actually are. I am certain that a whole host of other biases can be brought forth which explain not only why we think coding interviews are effective when there is evidence to the contrary, but also the stubborn way in which they have continued to persist in spite of such evidence.

In my career I have taken a few of these interviews and I may have my own biases since I don’t recall ever getting a job offer after one of these interviews. I remember taking one many years ago on SQL and ETL. I had been doing SQL and ETL quite successfully for over a year and knew I could perform very well in the position.

Nevertheless, the test was taken not on my own computer but on a computer that I was wholly unfamiliar with, a laptop with a built-in mouse. I remember that I had some frustration just with the configuration of the computer I was using. I also remember that the majority of the questions I could have easily answered had I been able to use reference materials like I would be able to do in the real world. It felt like the test was measuring how well I could fix my parachute after I had been thrown from the plane. It did not measure how I would perform on the job, but how well I had memorized simple syntax that is probably not worth memorizing.

I know there are those who will say that one should remember such commands, but given that the average programmer contributes five lines per day to the final product, does it really make that much sense? Perhaps it would be better to fill one’s mind with other more important things? What I do know is this – had I been offered the position I would have outperformed many who would happen to ace this test because I have a wealth of experience outside of the ability to memorize coding syntax.

I recently wrote a blog post with the tongue-in-cheek title “Accenture Ends Annual Review (and Admits Earth Orbits the Sun)”. Of all my dozens of blog posts (I have published over 100 over the years), this was perhaps the most provocative of them all and certainly the most popular, with literally thousands of views. In that case it took literally decades for a company to finally admit what science has taught us with respect to annual reviews. I therefore expect that coding interviews will be with us for some time to come, but at least I can look forward to the day when I write the blog post “Company X abolishes the coding interview (and Admits Earth is Round).”

The High Cost of “Low Cost” Software Development

I recently spoke with some software development professionals about the economics of software development and how Agile Values and Principles, when properly applied through a framework like Scrum, could improve a company’s bottom line. One thing we discussed was that so many companies who develop software are ignorant of the economics of software development and the Total Cost of Ownership (TCO) of software development. Companies often try to save money by choosing the lowest cost software development option. My friends referred to this as sourcing software development on price alone. At the new division I manage, 10XP Solutions, we call this the high cost of “low cost” software development.

What is the high cost of “low cost” software development? It is the tendency for people involved with financial decisions regarding software development to put too great an emphasis on the cost of software developers. I have recently coined the Law of “Low Cost” Software Development, which states, “In the absence of additional information and a lack of understanding of the economics of software development, the choice of software developers will be based on cost alone.” Unfortunately, in practice this law leads to an overall higher cost of software development.

One example I use frequently involves the concept of technical debt. I was once asked to write an article on technical debt. When my article was published, it was stripped of its definition of technical debt because the editor believed everyone in software development knows what technical debt is. I protested, but to no avail. Nevertheless, I still maintain this assumption is not only incorrect but fundamentally dangerous. The number of people who make decisions affecting the creation of software and yet do not understand, or may never have heard of, technical debt remains shockingly high.

For those uninitiated in the concept, technical debt was coined by Ward Cunningham as a metaphor to explain the real cost associated with short-term decision-making and shortcuts taken in software development. A classic example would be the first time code is shipped and market forces dictate that speed to market trumps all other concerns. In Cunningham’s conception, technical debt is a conscious decision made with the trade-offs known. Over time the meaning seems to have morphed, because these days a great deal of the debt accrued by organizations is unconscious. In other words, they are creating technical debt with little or no awareness. In these cases technical debt is like high blood pressure – a silent killer.

There are now ways to quantify technical debt. The CRASH Report, which calculated the cost to remediate technical debt, concluded that current levels of technical debt average $3.61 per line of code (the amount is higher for Java code, at $5.42). As technical debt increases, so does the complexity of the code and the difficulty of making changes to existing code. This means that new features added to existing code will take much longer to develop. How much longer? A study by Dan Sturtevant at MIT, entitled “Technical Debt in Large Systems: Understanding the cost of software complexity”, found that complex (technical debt-laden) code resulted in:

  • Up to a two-fold increase in the amount of time to enhance software (50% decrease in developer productivity)
  • Up to a 310% increase in defect density
  • Developers working on poor quality code had up to a 10x increase in employee turnover

How these translate into hard dollars may be difficult to determine, but we can certainly infer that defect-laden code increases the maintenance cost, QA cost, tracking cost, defect reporting cost, and costs related to poor customer satisfaction. The most surprising finding was that developers working on poor quality code had a greatly increased rate of turnover. There is an obvious hard cost to replacing already difficult-to-find developers, but also an untold morale cost for those who remain.

Total cost of ownership (TCO) addresses the total cost of software development from inception to sunsetting. In 2011, the CRASH report stated that the total cost of ownership for software code was $18 per line of code (LOC). It is generally accepted that the majority of this cost is related to the maintenance of the software after its initial creation, with estimates ranging from 60-90%.

Because the majority of the cost is maintenance, it doesn’t make a great deal of sense to spend our effort trying to pare down the initial cost of development by employing lower cost developers. If we use less expensive resources, we can expect technical debt to increase, and as technical debt increases, so too does the cost of maintenance. If we assume a modest increase in technical debt (50%) which impacts maintenance by only 33%, our “low cost” resources have now actually cost us $4.96/LOC more. In contrast, an approach that focuses on higher quality, while a little more expensive in initial cost, results in overall savings.

Strategy ($/LOC)     Average   "Low Cost"   High Quality
Initial Cost            4.50         2.25           6.75
Technical Debt          5.42         8.13           2.71
Maintenance            13.50        18.00           7.50
Total Cost             23.42        28.38          16.96
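
As a check on the arithmetic, the table can be reproduced in a few lines: the baseline figures come from the CRASH numbers quoted earlier, the “low cost” column applies the 50% and 33% adjustments from the paragraph before the table, and the “high quality” figures are simply those shown in the table:

    # Reproduce the cost-per-LOC table from the stated assumptions.
    average = {"initial": 4.50, "tech_debt": 5.42, "maintenance": 13.50}

    low_cost = {
        "initial": average["initial"] * 0.5,            # cheaper developers up front
        "tech_debt": average["tech_debt"] * 1.5,        # 50% more technical debt
        "maintenance": average["maintenance"] * 4 / 3,  # maintenance impacted by 33%
    }

    high_quality = {"initial": 6.75, "tech_debt": 2.71, "maintenance": 7.50}

    for name, strategy in (("Average", average),
                           ("Low cost", low_cost),
                           ("High quality", high_quality)):
        print(f"{name:<13} ${sum(strategy.values()):.2f}/LOC")

    # Average       $23.42/LOC
    # Low cost      $28.38/LOC
    # High quality  $16.96/LOC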

There are other factors to consider in addition to initial cost, technical debt and maintenance. Many people employing the “low cost” software development model rarely pay attention to another hidden cost – the cost of delay. Frequently a trade-off is made between the cost of developers and their productivity, with lower cost developers being less productive. This results in software products that take longer to produce and deploy.

Of course, because these developers are lower cost, one could always just throw more people at the problem, which is often done. However, adding more people to a software development problem does not result in a corresponding increase in productivity, which obviously eats into the cost “savings”. There are many who believe that doubling the number of people results in a doubling of productivity. This is an example of applying mechanistic thinking to knowledge problems. There are numerous studies indicating that increasing the size of teams results in productivity increases much lower than expected. Therefore, “low cost” development leads to longer cycle times and a higher cost of delay.
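
One common way to see why doubling headcount does not double output is communication overhead: the number of pairwise communication channels grows quadratically with team size. The n*(n-1)/2 formula below is the standard count of pairs; treating it as a drag on productivity is illustrative rather than a measurement:

    # Pairwise communication channels grow quadratically with team size,
    # one common explanation for diminishing returns from adding people.

    def channels(team_size):
        return team_size * (team_size - 1) // 2

    for size in (3, 6, 12, 24):
        print(f"{size:>2} people -> {channels(size):>3} communication channels")

    #  3 people ->   3 communication channels
    #  6 people ->  15 communication channels
    # 12 people ->  66 communication channels
    # 24 people -> 276 communication channels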

While it may be seductive to think that you can save money on software development by using “low cost” developers, it rarely results in overall cost savings once TCO and cost of delay are considered. The cost of delay and technical debt are generally hidden costs (at least on the balance sheet). Over the years I have had numerous discussions with software development professionals (CIOs, CTOs, development managers, product owners, etc.) regarding “low cost” software development models, and there is nearly universal befuddlement over why the model continues to flourish. Unfortunately, many people making financial decisions regarding software development resourcing simply do not understand the nature of software development and TCO. If they did, my guess is that they would make drastically different decisions.