Summary by James R. Martin, Ph.D., CMA
Professor Emeritus, University of South Florida
Thomke and Manzi begin this article with a description of how decisions made at J.C. Penney based on experience and intuition resulted in a decline in sales and substantial losses. These losses could have been prevented by subjecting the proposed changes to rigorous testing. However, to ensure that an experiment is worth the expense and effort, a firm must consider a number of crucial questions. The purpose of this article is to discuss those questions and their implications.
Attitudes and Complexities
Two problems stand in the way of valid experiments: most companies are reluctant to fund them, and a variety of challenges make the tests difficult to execute. Conducting an A/B test of two versions of a website is simple enough. However, most consumer business (perhaps 90 percent) is more complex, involving large distribution systems such as store networks, sales territories, bank branches, and fast-food franchises. Conducting useful experiments requires overcoming management attitudes and a variety of analytical complexities, e.g., problems with sample sizes, control groups, and randomization.
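The simple website A/B test mentioned above amounts to comparing two conversion rates. A minimal sketch using a standard two-proportion z-test (the visitor and conversion counts below are hypothetical, not from the article):

```python
import math

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did version B's conversion rate differ from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided p-value
    return z, p_value

# Hypothetical numbers: 2.0% vs. 2.6% conversion on 10,000 visitors each.
z, p = ab_test(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
```

With these made-up counts the lift is statistically significant at the usual 5 percent level; the harder settings the authors describe (store networks, territories) do not decompose this cleanly.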
An Ideal Experiment
An ideal experiment involves separating an independent variable from a dependent variable while holding other related variables constant, and then manipulating the independent variable to study changes in the dependent variable. In an ideal situation, observation and analysis of the results provide insight into cause-and-effect relationships that can lead to improved business decisions.
Questions Provide a Checklist for Experimentation
To ensure that an experiment is worth the expense and effort, companies need to ask a number of crucial questions: Does the experiment have a clear purpose? Have stakeholders made a commitment to abide by the results? Is the experiment doable? How can the company ensure reliable results? Has the maximum value been obtained from the experiment?
Does the experiment have a clear purpose?
What does management want to determine, and is an experiment the only practical way to answer this question? The hypothesis to be tested needs to include specific independent and dependent variables. In addition, many situations require going beyond the direct effects of changes in the independent variable to consider the ancillary effects that can be either positive or negative.
Have stakeholders made a commitment to abide by the results?
Experiments are frequently needed for objective assessments of initiatives backed by influential members of an organization. For this reason, all stakeholders should agree in advance on how they will proceed after the results are obtained, commit not to cherry-pick the data, and be willing to reject a proposed change if the data do not support it. Publix Super Markets' approval process provides a good example of a learning agenda. All of the company's large retail projects must undergo an experiment, starting with an analysis by finance. Analytics professionals then design experiments for acceptable projects, which must be reviewed and approved by a committee before they are conducted.
Is the experiment doable?
Although experiments must have testable predictions, the complexity of the variables and their interactions can make cause-and-effect relationships difficult to determine. Addressing this complexity (referred to as causal density) involves determining the right sample size, one large enough to yield statistically valid results yet small enough to minimize testing costs. Software is available for this purpose.1
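One standard way such software picks a sample size is a power calculation for comparing two proportions. A sketch using the usual normal-approximation formula (the baseline rate and lift below are hypothetical):

```python
import math
from statistics import NormalDist

def sample_size_per_group(p_base, lift, alpha=0.05, power=0.80):
    """Approximate subjects needed per arm to detect an absolute lift in a
    conversion rate with a two-sided test at the given alpha and power."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)     # critical value for a two-sided test
    z_b = nd.inv_cdf(power)             # quantile for the desired power
    p1, p2 = p_base, p_base + lift
    p_bar = (p1 + p2) / 2
    n = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
         + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / lift ** 2
    return math.ceil(n)

# Detecting a half-point lift on a 2% base rate takes roughly 14,000 per arm.
n = sample_size_per_group(p_base=0.02, lift=0.005)
```

The formula makes the trade-off concrete: halving the detectable lift roughly quadruples the required sample, which is why testing many stores or territories gets expensive quickly.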
How can the company ensure reliable results?
When applying the test-and-learn approach, companies have to make trade-offs related to reliability, cost, time, and other considerations. The authors discuss three methods that can help with these decisions.
Randomized field trials can help prevent systemic bias from affecting an experiment and evenly spread potential causes between the test and control groups.
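A minimal sketch of such random assignment, using hypothetical store IDs as the experimental units:

```python
import random

def random_assignment(units, test_fraction=0.5, seed=7):
    """Shuffle the units and split them into test and control groups, so that
    unobserved potential causes are spread evenly across both."""
    rng = random.Random(seed)       # fixed seed makes the split reproducible
    shuffled = list(units)
    rng.shuffle(shuffled)
    cut = round(len(shuffled) * test_fraction)
    return shuffled[:cut], shuffled[cut:]

stores = [f"store_{i:03d}" for i in range(40)]
test_group, control_group = random_assignment(stores)
```

Because every store has an equal chance of landing in either group, factors such as location or store size cannot systematically favor one group over the other.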
Blind tests minimize biases and increase the reliability of experiments. They also prevent the Hawthorne effect, i.e., where participants modify their behavior, consciously or unconsciously, because they know they are part of an experiment.
Big data can help ensure valid results with small samples. Where sample sizes are less than 100, special algorithms combined with multiple big data sets can be used to match test subjects to control subjects. The same algorithms and big data can be used to address problems associated with nonrandomized natural experiments.
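The matching idea can be illustrated with a simple nearest-neighbour sketch; the actual algorithms referenced are proprietary, and the unit fields below are hypothetical:

```python
def match_controls(test_units, candidate_pool, features):
    """Pair each test unit with the most similar unused candidate (nearest
    neighbour on the chosen attributes) to serve as its matched control."""
    def distance(a, b):
        return sum((a[f] - b[f]) ** 2 for f in features) ** 0.5
    matches, available = {}, list(candidate_pool)
    for t in test_units:
        best = min(available, key=lambda c: distance(t, c))
        matches[t["id"]] = best["id"]
        available.remove(best)      # match without replacement
    return matches

test_stores = [{"id": "T1", "sales": 100, "traffic": 50}]
pool = [{"id": "C1", "sales": 300, "traffic": 90},
        {"id": "C2", "sales": 105, "traffic": 48}]
matches = match_controls(test_stores, pool, ["sales", "traffic"])
```

The point of matching is that each test subject is compared against a control that resembles it on observable attributes, which compensates for the lack of randomization in small samples and natural experiments.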
Has the maximum value been obtained from the experiment?
Concentrating on investment projects with the most potential payback and using value engineering help ensure that a firm obtains the maximum value from its test-and-learn approach. The idea is to go beyond finding correlation to investigate and understand causality. Without understanding what causes changes in customer behavior, managers are likely to make decisions that backfire. Conducting an experiment is just the beginning; analyzing and exploiting the data provides the most value.
Challenging Conventional Wisdom
There are two important lessons from adopting the discipline of business experimentation. First, it can lead to better, data-driven decisions. Second, it can help companies discard wrongheaded conventional wisdom and decisions based on faulty intuition. Knowledge, not just intuition, should drive business decisions.
____________________________________________________
Footnote:
1 For example, Manzi's firm, Applied Predictive Technologies, provides test-and-learn predictive analytics software to analyze experiments. For many other A/B and multivariate testing tools see MAAW's Experimental Research Tools and Links.
Related summaries:
Anderson, E. T. and D. Simester. 2011. A step-by-step guide to smart business experiments. Harvard Business Review (March): 98-105. (Summary).
Appelbaum, D., A. Kogan and M. A. Vasarhelyi. 2017. An introduction to data analysis for auditors and accountants. The CPA Journal (February): 32-37. (Summary).
Appelbaum, D., A. Kogan, M. Vasarhelyi and Z. Yan. 2017. Impact of business analytics and enterprise systems on managerial accounting. International Journal of Accounting Information Systems (25): 29-44. (Summary).
Davenport, T. H. 1998. Putting the enterprise into the enterprise system. Harvard Business Review (July-August): 121-131. (Summary).
Davenport, T. H. 2009. How to design smart business experiments. Harvard Business Review (February): 68-76. (Summary).
Davenport, T. H. and J. Glaser. 2002. Just-in-time delivery comes to knowledge management. Harvard Business Review (July): 107-111. (Summary).
Kohavi, R. and S. Thomke. 2017. The surprising power of online experiments: Getting the most out of A/B and other controlled tests. Harvard Business Review (September/October): 74-82. (Summary).
Spear, S. J. 2004. Learning to lead at Toyota. Harvard Business Review (May): 78-86. (Summary).
Spear, S. and H. K. Bowen. 1999. Decoding the DNA of the Toyota production system. Harvard Business Review (September-October): 97-106. (Summary).
Tschakert, N., J. Kokina, S. Kozlowski and M. Vasarhelyi. 2017. How business schools can integrate data analytics into the accounting curriculum. The CPA Journal (September): 10-12. (Summary).