A Long-Lens Look at Apple and Disruption
Apple’s recent release of the M1 series Macs, based on the newly minted Apple Silicon technology, was a hot topic in November. This led us to take a new look at Apple’s past, present, and future through the lens of Disruption, published here in three parts.
Disruptive Encounters of the Third Kind
In our discussion of Apple and Disruption, I mentioned that beyond low-end and new-market Disruptions, there is a third way firms can disrupt: through a different type of business model.
Quibi through the lens of JTBD
Quibi is not doing as well as expected. It has fallen well short of its subscriber target, and while the app briefly cracked the top ten downloads, it quickly fell out of the top 100.
Anomalies Wanted: Models must be accountable to reality
Today, I’m posting something I wrote earlier this week. It is an adaptation of work that my long-time collaborator and friend, Clayton Christensen, and I did together. (It was his birthday on Monday, 4/6, so this topic drifted to the top of my mind.) To be clear, this is not the original, but an adaptation of our joint work.
Why should business care about Simpson’s Paradox?
Chances are you’ve never heard of Simpson’s paradox, but it matters. In short, Simpson’s paradox (sometimes referred to as the “reversal paradox”) says that how we choose to analyze a data set can lead us to opposite conclusions. For instance, in the 1980s there was a study, using existing outcome data, of the most effective way to treat kidney stones. When looking at the outcomes for all patients, procedure A showed better results. However, when scientists cut the data by stone size, they found that for every stone size, procedure B was better. To make matters worse, there is no natural limit to Simpson’s paradox. Had researchers cut the data by gender or height or BMI or whatever, they could have seen reversal after reversal after reversal.
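The reversal is easy to reproduce with a few lines of arithmetic. The counts below are illustrative (loosely modeled on the well-known kidney-stone study) rather than the post's actual figures; they are chosen only so that procedure B wins within every stratum while procedure A wins overall:

```python
# Illustrative (successes, total) counts per procedure and stone size.
# These numbers are for demonstration only, not the study's real data.
small = {"A": (234, 270), "B": (81, 87)}     # small stones
large = {"A": (55, 80),   "B": (192, 263)}   # large stones

def rate(successes, total):
    return successes / total

for proc in ("A", "B"):
    s_small, t_small = small[proc]
    s_large, t_large = large[proc]
    print(f"Procedure {proc}: "
          f"small {rate(s_small, t_small):.0%}, "
          f"large {rate(s_large, t_large):.0%}, "
          f"overall {rate(s_small + s_large, t_small + t_large):.0%}")
```

With these counts, procedure B is better for small stones (93% vs 87%) and for large stones (73% vs 69%), yet procedure A is better overall (83% vs 78%), simply because A was applied far more often to the easy small-stone cases.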
Anomalies: Improving Good Theory
There is another important consequence of the power of anomaly: all good theories must be susceptible to anomalies; otherwise there is no progress. Correlation is like a sliding scale of confidence: it is akin to an analog signal that we receive with varying degrees of fidelity. Consequently, we can seldom state definitively whether some event is an anomaly or merely “noise.” A causal model, by contrast, is more like a digital signal: it gives us discrete “yes” or “no” indicators for anomalies. As we mention in the first chapter, the causal approach to manufacturing created clear go/no-go signals that either kept work moving or triggered an investigation into a defect. Without such clear binary signals, sub-par work would inevitably sneak through.
How an Anomaly Focus Eliminates Bias
The widespread response to the problems of replication in scientific studies (especially in the social and biomedical sciences) has been to invest in ever more sophisticated analytical tools and techniques: more data, more powerful methods of analysis, faster computing, and so on. All of these efforts miss an alternative solution: causality. Most research in the biomedical and social sciences begins with data analysis. But data analysis alone cannot yield a genuine causal model, because all such analysis is subject to the problem of induction: roughly, correlation is not causation.
A few days ago I wrote a post on how easy it is to get “false negatives”: the test comes back negative for the virus, but the person is in fact sick. I made a table to show how easy it is to overestimate the efficacy of a test.
Beds(Coronavirus) = Beds(Flu) x 3
Welcome back to nerd corner. Today, a note about healthcare capacity.