Do Coding Interviews Work?

I have recently come across some interesting information regarding coding interviews. If you are not familiar with them, coding interviews (sometimes called programming interviews) are interviews for technical people, usually software developers, intended to prove that they have the ability to code. They are administered either as computer-based tests or, frequently, as whiteboard exercises, and they often take the form of brain-teaser riddles or binary search questions. The premise is that these coding interviews, conducted in an artificial environment, are a good proxy for determining whether or not someone will perform well in the real world.
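
If you have never sat through one of these, here is a rough sketch (my own illustration, not any particular company's question) of the kind of whiteboard exercise that gets graded: write a textbook binary search from memory and defend it against edge cases.

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if it is absent."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1


# The inevitable follow-up: does it survive the edge cases?
assert binary_search([1, 3, 5, 7, 9], 7) == 3
assert binary_search([1, 3, 5, 7, 9], 2) == -1
assert binary_search([], 42) == -1
```

Whether producing this under observation, with no reference materials, predicts anything about building real systems is precisely the question at hand.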

As with all things, instead of relying on our human instinct, which is riddled with cognitive biases, we must rely on science to understand true cause and effect. The science has spoken loud and clear: there is no relationship between coding interviews and on-the-job performance for software developers. Don't believe me? Here's what Laszlo Bock, Senior Vice President of People Operations at Google, had to say on the topic:

…everyone thinks they’re really good at it. The reality is that very few people are.

Years ago, we did a study to determine whether anyone at Google is particularly good at hiring. We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship. It’s a complete random mess…

It appears to me that very often the interviewer is much more concerned with showing the candidate how astute he or she is than with finding out whether the candidate is a good fit for the position. I recently read a blog post stating that candidates should spend a great deal of time preparing for these coding interviews, in the neighborhood of 40 hours. While this might be what it takes to “ace” such an interview, it still begs the question of whether the coding interview is actually predictive of the candidate’s ability to function in the position. It is not.

This is where the cognitive biases come in. There appears to be a great deal of the illusion of control, to which, as humans, we are highly susceptible. We think that somehow we are able to ask some questions and magically determine how someone will perform on the job. I would expect there is a bit of confirmation bias, because we are subject to cherry-picking our evidence to support our previously held views (i.e., that coding interviews are effective), and a similar bias called choice-supportive bias, which is the tendency to remember one’s own choices as better than they actually were. I am certain that a whole host of other biases can be brought forth which not only explain why we think coding interviews are effective when there is evidence to the contrary, but also the stubborn way in which they have continued to persist in spite of such evidence.

In my career I have taken a few of these interviews and I may have my own biases since I don’t recall ever getting a job offer after one of these interviews. I remember taking one many years ago on SQL and ETL. I had been doing SQL and ETL quite successfully for over a year and knew I could perform very well in the position.

Nevertheless, the test was taken not on my own computer, but on a computer I was wholly unfamiliar with, a laptop with a built-in mouse. I remember that I had some frustration just with the configuration of the computer I was using. I also remember that the majority of the questions I could easily have answered had I been able to use reference materials, as I would in the real world. It felt like the test was measuring how well I could fix my parachute after I had been thrown from the plane. It did not measure how I would perform on the job, but how well I had memorized simple syntax that is probably not worth memorizing.

I know there are those who will say that one should remember such commands, but given that the average programmer contributes five lines per day to the final product, does it really make that much sense? Perhaps it would be better to fill one’s mind with other, more important things. What I do know is this: had I been offered the position, I would have outperformed many who happened to ace this test, because I have a wealth of experience beyond the ability to memorize coding syntax.

In a recent blog post I wrote a tongue-in-cheek title, “Accenture Ends Annual Review (and Admits Earth Orbits the Sun)”. Of all my blog posts (I have published over 100 over the years), this was perhaps the most provocative of them all and certainly the most popular, with literally thousands of views. In that case it took decades to finally admit what science has taught us with respect to annual reviews. Therefore, I expect that coding interviews will be with us for some time to come, but at least I can look forward to the day when I write the blog post “Company X Abolishes the Coding Interview (and Admits the Earth Is Round).”

Brainstorming – Effective Technique or Sacred Cow?

I have spent a great deal of time studying and reading about human cognitive biases and their effect on business, especially the business of software development. This past weekend I finished the groundbreaking book by Stuart Sutherland, appropriately titled “Irrationality: The Enemy Within”.

Since I had already studied the topic quite a bit, some of the material had either been referenced in other sources or lost its shock value, as I have become thoroughly convinced of humankind’s built-in propensity not only for irrational behavior, but also for failing to recognize that these biases are a problem. In fact, my experience is that a large segment of our population is not only ignorant of these biases but seems to revel in a willful ignorance of scientific evidence. Certainly there appears to be a great deal of cognitive bias (mostly confirmation bias) in the debate on climate change.

My previous familiarity with human cognitive bias notwithstanding, and although the book was published in 1992, the information is still relevant, interesting and cogent. I would suppose that there are a number of things worthy of note, but since there is such a wealth of information in the book, I decided to choose a single instance to write about here and encourage those interested in more examples to get a copy of the original material.

The one thing that caught my attention and has stuck in my mind is the example of using a technique called “brainstorming” to improve creativity and productivity. For those who have lived on another planet, brainstorming is the process of getting as many ideas out as possible without judging or filtering them. It has been used for decades since its introduction by Alex Osborn in the book Applied Imagination. Osborn claimed that, in his experience, using brainstorming in advertising agencies resulted in 44% more worthwhile ideas than individuals thinking up ideas without the benefit of group discussion.

Ever since that time, brainstorming has been widely used to improve the creativity and productivity of groups. However, here’s the kicker: since as long ago as 1958, Osborn’s claims have been subjected to numerous studies which almost universally cast doubt upon the effectiveness of brainstorming. Keith Sawyer, a psychologist at Washington University in St. Louis, states: “Decades of research have consistently shown that brainstorming groups think of far fewer ideas than the same number of people who work alone and later pool their ideas.” In other words, brainstorming doesn’t work quite as well as we think it does (or should).

With the scientific evidence questioning the effectiveness of brainstorming so vast, the real question is why its use persists. That question is at the heart of much of my agile practice, in that the prime issue is not whether a practice is merely effective, but whether it is optimal. It is obvious to me that several cognitive biases are in play in keeping brainstorming around.

There is something of the availability cascade to brainstorming, “which is a self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse (or ‘repeat something long enough and it will become true’)” (Wikipedia). Furthermore, a whole host of cognitive biases around groupthink, herd behavior and the bandwagon effect certainly have their influence on the popularity of brainstorming. Since brainstorming “seems” to make sense it is also subject to the belief bias, which is seen when the believability of the conclusion leads us to misjudge the true effectiveness of the process. Frankly, I would suppose that I could find literally dozens of cognitive biases which allow brainstorming to proliferate as the “go to” technique for group creativity and productivity.

Given that brainstorming may very well not be optimal, what are the alternatives that have actually been scientifically proven to be more effective? In a 2012 article for Psychology Today, Ray Williams proposes a few modifications to the brainstorming approach:

  • Have groups collaborate frequently by having them in close physical proximity to each other;
  • Pay attention to creating physical spaces that enable good collaboration, which facilitates people frequently “running into each other” while at work;
  • Revise the “no criticism” script of brainstorming to encourage debate about ideas;
  • Use appreciative inquiry techniques, where group participants build on ideas suggested by each individual in the group.

Most interesting to me about these suggestions is how closely they align with the things that Agile (and I) advocate: close attention to co-location of people within an Agile team to increase good collaboration, an environment that embraces feedback rather than punishing “failure,” and the use of iterative feedback to improve ideas (and software) incrementally.

There are a great number of cognitive biases inherent in human beings. The first step is to be aware that these irrationalities exist. We must also acknowledge that we, as individuals, are subject to these irrationalities. Furthermore, we need to create an environment of safety that gives us the freedom and encouragement to continually explore and seek the underlying scientific truths, the “why” of what we do – the freedom to gore the sacred cows.

Agile – It’s All About Making Better Decisions

I’ve been spending a lot of time recently doing research, reading and presenting on human cognitive biases. For the uninitiated, cognitive biases are defined as

“…a systematic pattern of deviation from norm or rationality in judgment, whereby inferences about other people and situations may be drawn in an illogical fashion. Individuals create their own ‘subjective social reality’ from their perception of the input.” (Wikipedia Definition)

In other words, cognitive biases exist when there is a gap between our perception of reality and objective reality. For example, there is the “confirmation bias” which is our human tendency to seek out or interpret information that confirms one’s existing opinions.

While the term “cognitive bias” is relatively new (it was coined in 1972 by Amos Tversky and Daniel Kahneman), researchers have already uncovered over a hundred cognitive biases. Some are relatively tame, like the “Google effect” (or digital amnesia), the tendency to forget information that can be easily looked up; others can lead to more disastrous consequences, like the Sunk Cost Fallacy, where people justify increased investment in a decision based on prior investment instead of looking only at future efficacy. The Sunk Cost Effect, along with the Overconfidence Effect and the Recency Effect, played a role in the May 1996 mountain climbing tragedy, made famous in the movie Everest, that resulted in the deaths of five experienced climbers.

A great number of cognitive biases have been found through the work of behavioral economics researchers like Dan Ariely, who wrote the wonderful books Predictably Irrational and The Upside of Irrationality. Underlying all of classic economics is the concept of homo economicus, the “economic man” who behaves rationally to maximize individual returns and acts in his own self-interest. Unfortunately, this is not how people behave: humans often act irrationally (and predictably so) because of their inherent cognitive biases. For example, humans are biased toward loss aversion and will choose to avoid a loss over a larger corresponding potential gain, acting as the “homo irrationalis” discovered by behavioral economics rather than the “homo economicus” predicted by classic economics.

It is our cognitive biases that cause us to make irrational decisions. Since behavioral economists found many of these cognitive biases, it was not a great leap to see how cognitive biases would be a paramount concern for the economics of software development. In my coaching practice, a great deal of my time and effort is used in helping organizations make better decisions about software development. Many times the optimal decisions are counter intuitive to people’s inherent biases so my job (and my passion) is helping companies see the world of software development differently so that, when it comes down to making a decision, they have all the knowledge necessary to make the optimal economic decision.

One of the most prevalent biases in software development is seeing the world in a mechanistic, Tayloristic manner. Taylor’s viewpoint was fine for the old world of physical work, but it does not hold up for the complex knowledge work done by software development professionals today. Unfortunately, most of the people making software development decisions are predominantly influenced by this old, less optimal way of viewing the world and, as a result, make sub-optimal decisions. For example, in the mechanistic worldview, adding more people to an effort results in a corresponding increase in output. If there is an existing team of seven people and we add seven more, then (if we hold this mechanistic bias) we would expect the work to go roughly twice as fast. However, just as the behavioral economists found the real world to be counterintuitive to homo economicus, actual studies have found that the increased communication overhead of knowledge work nearly outpaces any incremental increase in individual productivity (see Top Performing Projects Use Small Teams). I have always said that if you want to double the productivity of a fourteen-person team, all that is necessary is to create two teams of seven.
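
One simple way to see why (my own back-of-the-envelope illustration, not a figure from the studies cited above) is to count the person-to-person communication paths, which grow quadratically with team size:

```python
def communication_paths(team_size):
    """Distinct person-to-person channels in a team of n people: n * (n - 1) / 2."""
    return team_size * (team_size - 1) // 2

print(communication_paths(7))       # 21 channels
print(communication_paths(14))      # 91 channels
print(2 * communication_paths(7))   # 42 channels across two separate teams of seven
```

Doubling the headcount more than quadruples the coordination overhead, while splitting into two teams of seven keeps it at 42 channels. That is the arithmetic behind my quip about doubling productivity.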

The mechanistic bias can also be seen in many of the ways the Agile philosophy is implemented. For example, the Scrum framework is often taught as a series of ceremonies and actions with little or no understanding of why such actions succeed. “Scrum Masters” are “certified” with only two days of training and a simple test. The training deals with ideal situations, but when the Scrum Master actually has to implement Scrum, he or she is woefully unprepared. In the real world, compromises and decisions must be made. Without understanding the underlying “why” of Agile and the basic nature of software development, the decisions and compromises that are made are not optimal. In my experience, this is why project managers are tougher to train than people with no project management experience. When faced with ambiguous information and the need to make optimal decisions, project managers tend to fall back on existing mechanistic knowledge, and the decisions made range from mildly irritating to completely disastrous. As I have often pointed out, to say that one was successful with waterfall reeks of confirmation bias, because it begs the question of whether one would have been more successful using another methodology or framework like Lean or Scrum.

In addition to the mechanistic bias, software development suffers from another bias, the project-centric bias, which is the tendency to see all work in terms of projects. Unfortunately, the project-centric bias is so ingrained in companies that there need to be some radical changes to the way we view software development across all areas, including accounting. Viewing work as a project when we are actually working on software products results in a whole raft of poor software economic decisions, like concentrating on features more than quality and security. Remember that no one washes a rental car.

As I think back on my coaching work in Agile, the blogs I have written, the many discussions I have had and the presentations I have made, I think that all of these boil down to one very simple thing: my work is all about helping people understand the true nature of the software development business process and, thereby, helping them make better decisions. Understanding our cognitive biases, therefore, is extremely important for my clients and for me because, in the end, Agile is all about making better decisions.

Technical Debt on the Balance Sheet

My recent article on the high cost of “low cost” software seems to have struck a nerve with lots of views, comments, likes, shares, etc. A lot of software development professionals feel the weight daily of the tendency to chase the lowest price of software development and have added to the conversation I started last week. Below are some of the comments:

There is always a price for “free.”

In the end, it’s all about understanding the nature of software development in order to make wise decisions.

As the saying goes: “if you believe a senior developer is expensive, wait till you hire a Junior”.

Another aspect of poorly written code is the inefficient use of hardware which it runs on … leading to negative user experience .. Resulting in losing $$ in terms of user productivity .. Adding to the higher cost !!

My unscientific guess is corporate America is wasting billions each year on “low cost software “.

It’s funny how there never seems to be enough time to do it right, but always enough time to do it twice.

The technical debt gets worse when you add an army of developers to it.

The old saying “you get what you pay for” rings true.

Perhaps the most interesting comment came from Jason Ross who stated:

It’s often the case that people get “value” and “cost” confused. Technical debt is a very real debt that must eventually get serviced. In the same manner that debt service kills cash flow and becomes a drag on the balance sheet, so does technical debt deprive an organization of the ability to deliver new features and effectively update the code base as new technologies arrive that could be leveraged. In the long run, the maintenance and upkeep costs become the dominant factors in many things, and especially in software, they are often entirely neglected as factors.

When I read his comment and saw his mention of “a drag on the balance sheet,” it reminded me of an article I had read a few years back by Israel Gat on his blog The Agile Executive, titled Technical Debt on Your Balance Sheet.

Israel’s assertion is that, now that there are ways to rather easily assess technical debt (for example, SonarQube with the SQALE plugin), technical debt could (and should) be added to a company’s balance sheet. I applaud and support his efforts.

Unfortunately, it appears that any effort to include technical debt on a company balance sheet might run into accounting industry standards that make this impossible at present. In his post Clear Costs and Technical Debt, Trent opines:

Accounting standards don’t allow provisions or liabilities to be shown for “not performing enough maintenance” or similar intangibles. There needs to be a present obligation before a liability is incurred and a present obligation is only usually formed through a contract.

In his scenario, financial wizards will chalk up the lower cost of software development as a savings.

The accountants will note a(n)… asset on the books, congratulate themselves for saving … capex and chalk up the increased opex of maintenance as the cost of doing business.

My guess is this is precisely what happens now. If Trent is correct, then the issue facing companies developing software is that current accounting practices are inadequate and need to be modified. Technical debt can be measured and tracked and is a very real liability.

The question remains: can there be a new GAAP standard around technical debt, or can an existing one be instructive? While I am not a CPA, I have done accounting work in the past, so I know “enough to be dangerous.” My line of thought is that technical debt probably resembles something that already exists in GAAP. After doing some research, I think the GAAP treatment of warranties may prove helpful. In reading about warranties in the article GAAP Accounting for Product Recalls, I noticed some similarities:

Product warranties present manufacturers with a bit of a conundrum. …The manufacturer is actually responsible for the product for the length of the warranty. In accrual accounting, the finality of the transaction does not occur until the period of responsibility ends. However, warranties … present a bit of an exception to the rule. (Emphasis mine)

Notice how, like technical debt and the Total Cost of Ownership (TCO) of software, the complete transaction is not final until the period of responsibility ends. I can see that the life cycle of software (and the liability of technical debt, present or future) is like the life cycle of a product with a warranty attached. Even I can see how warranties could cause accounting headaches. However,

Rather than being on the hook for the cost of the product for the entire life of the product or throughout the product’s warranty period, manufacturers can instead follow the generally accepted accounting principles regarding products under warranty. … Under GAAP, product warranties and recalls can be reasonably accounted for with the initial sale of the product by estimating the potential warranty expense of a return or recall.

I can reasonably see how it might be possible to substitute an estimate of technical debt liability for the similar liability of estimated warranty expense. For example, with each release we classify software as a capital expense (CapEx). At that point we could also include on the balance sheet an estimate of technical debt as a long-term liability.

In the case of warranties, the actual accounting mechanism under GAAP is done by

 …making two separate entries, one a debit and the other a credit for the same amount. These are entered into the ledger as a warranty expense and a separate entry as an allowance for warranty costs.

What is to stop a company from entering two separate entries for technical debt – one a debit as technical debt expense and the other a credit under allowance for technical debt? I certainly see this as within the realm of possibility. This way GAAP can be followed AND technical debt can be raised to true financial visibility. It is this visibility that will allow companies to begin seeing the true total cost and to guard against the high cost of “low cost” software development.
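
As a purely hypothetical sketch of the mechanics (this is an illustration, not accounting guidance; the codebase size is invented, and the $3.61 per line of code is the CRASH Report average discussed in my earlier post):

```python
# Hypothetical: a 100,000-line codebase at the CRASH Report average of $3.61/LOC.
lines_of_code = 100_000
remediation_cost_per_loc = 3.61
estimated_liability = lines_of_code * remediation_cost_per_loc  # $361,000

# Mirroring the warranty treatment: one debit and one credit for the same amount.
entries = [
    ("Debit", "Technical debt expense", estimated_liability),
    ("Credit", "Allowance for technical debt", estimated_liability),
]
for side, account, amount in entries:
    print(f"{side:<7}{account:<32}${amount:,.2f}")
```

The estimate could then be revisited with each release, much as warranty reserves are adjusted over time as actual claims arrive.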

The High Cost of “Low Cost” Software Development

I recently spoke with some software development professionals about the economics of software development and how Agile Values and Principles, when properly applied through a framework like Scrum, could improve a company’s bottom line. One thing we discussed was that so many companies who develop software are ignorant of the economics of software development and the Total Cost of Ownership (TCO) of software development. Companies often try to save money by choosing the lowest cost software development option. My friends referred to this as sourcing software development on price alone. At the new division I manage, 10XP Solutions, we call this the high cost of “low cost” software development.

What is the high cost of “low cost” software development? This is the tendency for people involved with financial decisions regarding software development to put too great an emphasis on the cost of software developers. I have recently coined the Law of “Low Cost” Software Development, which states, “In the absence of additional information and a lack of understanding of the economics of software development, the choice of software developers will be based on cost alone.” Unfortunately, in practice this law leads to an overall higher cost of software development.

One example I use frequently involves the concept of technical debt. I was once asked to write an article on technical debt. When the article was published, it had been stripped of its definition of technical debt because the editor believed everyone in software development knows what technical debt is. I protested, but to no avail. Nevertheless, I still maintain this assumption is not only incorrect but fundamentally dangerous. The number of people who make decisions affecting the creation of software yet do not understand, or may never have even heard of, technical debt remains shockingly high.

For those uninitiated in the concept, the term “technical debt” was coined by Ward Cunningham as a metaphor to explain the real cost associated with short-term decision-making and shortcuts taken in software development. A classic example would be the first time code is shipped and market forces dictate that speed to market trumps all other concerns. In Cunningham’s conception, technical debt is a conscious decision made with the trade-offs understood. Over time the meaning seems to have morphed, because these days a great deal of the debt accrued by organizations is unconscious. In other words, they are creating technical debt with little or no awareness. In these cases technical debt is like high blood pressure – a silent killer.

There are now ways to quantify technical debt. The CRASH Report, which calculated the cost to remediate technical debt, concluded that current levels of technical debt average $3.61 per line of code (the figure is higher for Java code, at $5.42). As technical debt increases, so do the complexity of the code and the difficulty of making changes to existing code. This means that new features added to existing code will take much longer to develop. How much longer? A study by Dan Sturtevant at MIT, entitled “Technical Debt in Large Systems: Understanding the cost of software complexity,” found that complex (technical debt-laden) code resulted in:

  • Up to a two-fold increase in the amount of time to enhance software (50% decrease in developer productivity)
  • Up to a 310% increase in defect density
  • Developers working on poor quality code had up to a 10x increase in employee turnover

How these translate into hard dollars may be difficult to determine, but we can certainly infer that defect-laden code increases maintenance cost, QA cost, tracking cost, defect reporting cost, and costs related to poor customer satisfaction. The most surprising finding was that developers working on poor quality code had greatly increased turnover. There is an obvious hard cost to replacing already difficult-to-find developers, but also an untold morale cost for those who remain.

Total cost of ownership (TCO) addresses the total cost of software development from inception to sunsetting. In 2011, the CRASH Report stated that the total cost of ownership for software code was $18 per line of code (LOC). It is generally accepted that the majority of this cost is related to the maintenance of the software after its initial creation, with estimates ranging from 60-90%.

Because the majority of the cost involves maintenance, it doesn’t make a great deal of sense to spend our effort trying to pare down the initial cost of development by employing lower-cost developers. If we use less expensive resources, we can expect technical debt to increase, and as technical debt increases, so does the cost of maintenance. If we assume a modest increase in technical debt (50%) which results in maintenance being negatively impacted by only 33%, our “low cost” resources have now actually cost us $4.96/LOC more. In contrast, an approach that focuses on higher quality, while a little more expensive in initial cost, results in overall savings.

Cost ($/LOC)       Average   “Low Cost”   High Quality
Initial Cost          4.50         2.25           6.75
Technical Debt        5.42         8.13           2.71
Maintenance          13.50        18.00           7.50
Total Cost           23.42        28.38          16.96
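
For anyone who wants to check the arithmetic, here is a minimal sketch that simply re-adds the rows of the table above (all figures in dollars per line of code):

```python
# Figures in $/LOC, taken straight from the table above.
strategies = {
    "Average": {"initial": 4.50, "tech_debt": 5.42, "maintenance": 13.50},
    "Low Cost": {"initial": 2.25, "tech_debt": 8.13, "maintenance": 18.00},
    "High Quality": {"initial": 6.75, "tech_debt": 2.71, "maintenance": 7.50},
}

totals = {name: sum(costs.values()) for name, costs in strategies.items()}
for name, total in totals.items():
    print(f"{name}: ${total:.2f}/LOC")
# Average: $23.42/LOC, Low Cost: $28.38/LOC, High Quality: $16.96/LOC

print(f"'Low cost' premium over average: ${totals['Low Cost'] - totals['Average']:.2f}/LOC")
print(f"High-quality savings versus average: ${totals['Average'] - totals['High Quality']:.2f}/LOC")
# $4.96/LOC more expensive, and $6.46/LOC cheaper, respectively.
```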

There are other factors to consider beyond initial cost, technical debt and maintenance. Many people employing the “low cost” software development model rarely pay attention to another hidden cost: the cost of delay. Frequently a trade-off is made between the cost of developers and productivity, with lower-cost developers being less productive. This results in software products that take longer to produce and deploy.

Of course, because these developers are lower cost, one could always just throw more people at the problem, which is often done. However, adding more people to a software development problem does not result in a corresponding increase in productivity, and it obviously eats into the cost “savings.” There are many who believe doubling the number of people results in a doubling of productivity. This is an example of applying mechanistic thinking to knowledge problems. Numerous studies indicate that increasing the size of teams yields productivity increases much lower than expected. Therefore, “low cost” development leads to longer cycle times and a higher cost of delay.

While it may be seductive to think that you can save money on software development by using “low cost” developers, it rarely results in overall cost savings when considering TCO and cost of delay. The cost of delay and technical debt are generally hidden costs (at least on the balance sheet). Over the years I have had numerous discussions with software development professionals (CIO, CTO, Development Managers, Product Owners, etc.) regarding the “low cost” software development models and there is nearly a universal befuddlement over why the model continues to flourish. Unfortunately, many people making financial decisions regarding software development resourcing simply do not understand the nature of software development and TCO. If they did, my guess is that they would make drastically different decisions.

Sony IT Insider Claims Hack Not From North Korea

As a software development consultant, agile coach and someone who has been in the IT industry for nearly 20 years, I’ve had the pleasure of meeting literally thousands of IT executives. One of my contacts, who just happens to work at Sony, reached out to me this week and urged me to “blog to the world what really happened”.

He stated that the recent hack on Sony was not perpetrated by the North Korean government, but was one of a series of attacks that have been going on for over two years. According to my source, these recent attacks were part of a longer-term series of snooping breaches that included successful DDoS (distributed denial of service) attacks during the World Cup, based on Sony’s sponsorship of FIFA.

My source was further dumbfounded by the US government’s response and the press briefing given by our President, given the history of attacks that Sony has experienced over the years. While he agreed that the threats to theaters were something best handled by Homeland Security, he also stated that Sony has two media companies, Video Unlimited and Crackle, that could handle the release of the movie The Interview.

My source goes on to state that the pattern of attacks has been known to Sony, that the company has worked with third-party vendors to beef up security and has been successful in preventing additional attacks, and that Sony is beginning the forensics to understand when and how the breaches occurred in the first place.

My personal opinion is that it is possible that the recent hacks were perpetrated by the North Korean government and that my source is confusing the previous DDoS attacks with the newer breaches. What I find more interesting is the speed with which the culprit was identified and the coincidence that the villain would be the same as the one depicted in the film. Of course, this whole fiasco is a wonderful marketing coup for the movie and with the free publicity in the press (including this blog if you can refer to it as the press), it will garner more revenue than it merits.

The only other time I ventured to blog about current events was the Healthcare.gov fiasco and in both these cases there was a speedy judgement and presidential pronouncement. While the press and politicians like a simple world with clear, black and white solutions (anyone find any weapons of mass destruction in Iraq lately?), my experience has taught me that the world of software development and computer networks is much more complex. In the end, we may find out that the hack was by the North Korean government or that my source at Sony was correct. Only time will tell.

– Larry Apke