We’ve all heard of the Innovator’s Dilemma. Should a business give customers what they think they want, or take a leap of faith and introduce new products or services? Even if the new product or service is of groundbreaking quality, deviating from what consumers are used to can be risky…
In 2006, Blockbuster Video launched Total Access, a service that allowed its customers to rent videos online and return them to stores. The strategy was an immediate hit with customers and before long its online unit was making big gains against Netflix. It seemed that the video giant had finally cracked the code to renting videos on the Internet.
Alas, it was not to be. Investors balked at the cost of the new plan, while franchisees feared that online rentals would make them obsolete. In 2007, the company’s CEO, John Antioco, was fired and the online strategy was scrapped. Just three years later, in 2010, Blockbuster filed for bankruptcy.
Traditionally, we have looked at strategy solely as a set of plans designed to achieve specific goals. However, as we increasingly operate in a world of networks rather than hierarchies, leaders need to learn the lessons of social movements and focus on shared values. As the story of Blockbuster shows, you can’t change behaviors without first changing core beliefs.
Input from more than a dozen consultants portrays an industry struggling to adapt to a dramatically different and rapidly changing information economy.
The post A View from the Outside — Trends and Challenges Consultants See in Scholarly Publishing appeared first on The Scholarly Kitchen.
Every bold new business idea starts with a success story: a single organization, or an aggregate sample, that implemented a particular strategy and achieved outstanding results. That solid track record helps to convince others to adopt it, yet somehow the new management fad fails to deliver as promised.
The problem is often one of survivorship bias. While it’s fairly easy to find examples of those who were successful with a particular strategy, the ones who tried it and failed are often overlooked. Other times, a post hoc fallacy is at work. Just because someone implemented a particular strategy doesn’t mean that’s what led to their success.
The truth is that a strategy can never be validated backward, only forward. The past is a very imperfect indicator for the future because circumstances are constantly in flux. Technology, competition and customer preferences change all the time, so whenever anybody tells us that they have come up with a sure-fire way to succeed, we need to be skeptical.
I love it when the numbers jump off the screen.
In my dozen-plus years in the news business, whenever I’m asked — often in a job interview — what my favorite part of my work is, I tell a little story about the first time I took a serious look at an analytics report for multiple news sites, back at GateHouse Media. Sure, I had been responsible for paying attention to Omniture (or was it still HitBox?) back at the Santa Cruz Sentinel, and we even had a pretty cool heatmap plugin of some sort which let us see how stories on the homepage were doing. (This was years before Chartbeat.) But at GateHouse, I worked with around 125 community newspaper sites.
Read the full post How to Tell the Story of Metrics Inside Your News Organization on MediaShift.
Ahead of speaking at Disruption Summit Europe, the D/SRUPTION co-founder shares his views…
What is ‘digital transformation’?
My experience has led me to form two specific opinions about digital transformation. The first is that it describes a culture where data forms the basis of business decision making and innovation. That’s pretty dramatic for most organizations where decisions are made by learned and experienced individuals, but remain subjective by nature. Data takes the subjectivity out of decision making and as a direct result organizations become a lot crisper and more informed.
The second is that it entails removing the word ‘failure’ from the company lexicon and replacing it with the question, ‘what have we learned?’ A classic example would be Google Glass. Four or five years ago, Google introduced Google Glass, then, about 18 months ago, they withdrew it and everybody assumed it had failed. However, Google simply treated it as an experiment to learn from. This July, they re-launched an enterprise version of Google Glass with lots of applications for specific industries such as the petroleum industry, retail, healthcare, etc. Unlike most corporates which regard failure as being a catastrophic event, Google actually regarded it as a learning exercise.
Which industries have been most impacted by digitalization and which are likely to be next?
Finding a business that is not being impacted by digitalization is actually quite hard, but I think the industry that has been most dramatically affected by digitalization is the advertising and publishing business. The whole business is being transformed by the fact that what was before a very imprecise art has now become a precise art driven by major technology companies such as Google and Facebook.
In your experience, how many companies are really prepared for digital transformation?
If a company is asking now whether or not it is prepared for digital transformation, it is about eight years too late and the game is close to over. The simple fact is that it is already happening. There is in fact massive appetite for digital transformation from the corporate world but that appetite needs to be focused. There are two steps to this. The first is realizing your company can save considerable amounts of money thanks to digitalization. Companies then need to put an entrepreneurial hat on and say, ‘where can I make money out of digital transformation?’ That’s where the creative work is: digital opportunity.
Which well-known companies do you think have handled digital transformation the best?
Without inside knowledge of an organization, the way to tell a company that is digitally transformed is to look at it from the outside in. I have two examples, one in the B2C space and one in the B2B space. The B2C example is Mattel, the big US toy company, which I think has got its head around digitalization. For instance, they just launched a home 3D printer, a virtual reality product range and a Barbie which connects to IBM Watson’s artificial intelligence computer. The B2B example is a business called Kern which makes weighing scales so precise they can weigh tiny differences in the Earth’s gravity. Kern gamified its business and turned it into a sensational success. Take a look at ‘Kern: The Gnome Experiment’ on YouTube.
Given the rapid pace of technological development, how should management teams go about discerning between available technologies?
Ten years ago, companies were often forced to choose between different suppliers with lots of proprietary technology. This completely changed with the advent of open source technology. Google, for instance, puts significant amounts of the software it develops into open source. You can use it free of charge as long as you make a development contribution to it rather than a cash contribution. Companies like Microsoft, Facebook and IBM are doing the same. This means the proprietary nature of software is fading in all except legacy IT situations.
Why are technology businesses giving away their software like this?
The answer is twofold. The first reason is that when you turn your software over to the open source community, you add a significant multiplier to the number of developers working on that software. It’s like Wikipedia in concept. The second reason technology businesses are giving away their software is that they recognize the value is not in software anymore, it’s in data. Open source software allows us to use the data we hold in a more effective manner. Most of the organizations I come into contact with don’t even know what data they have. One of the first things all companies should be doing from an operational perspective is a data audit: find out what data you have, find out what data is available from third parties and then see what you can do with it using open source AI to create new products.
Is open source technology relevant for every sector?
Yes, especially around data analytics, layered data and AI. If the software is open source that means a healthcare company for example can pull AI software out of the open source community, modify it to their needs and then plug in their data. Going a step further, the real game changer companies should be looking at is something called API [Application Programming Interface]. API is a layer of technology that allows disparate computer systems to talk to other disparate computer systems, allowing all different sorts of opportunities, including cost saving opportunities.
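The healthcare example above can be made concrete with a minimal sketch. Everything here is invented for illustration; the point is only that the consuming system depends on a documented contract (pass an ID, get JSON back), never on the other system's internals:

```python
import json

# A minimal sketch of the API idea: "system A" exposes data through a small,
# documented interface, and "system B" consumes it without knowing anything
# about A's internals. All names and data here are invented for illustration.

# System A: a records store behind a stable, documented endpoint.
_RECORDS = {"p-1001": {"name": "A. Smith", "last_visit": "2017-05-02"}}

def get_patient_record(patient_id):
    """The API contract: takes an ID, returns JSON. The internals (here a
    dict; in reality a database) can change freely as long as this holds."""
    record = _RECORDS.get(patient_id)
    return json.dumps(record if record is not None else {"error": "not found"})

# System B: a scheduling system that only knows the contract above.
response = json.loads(get_patient_record("p-1001"))
print(response["last_visit"])  # 2017-05-02
```

In practice the call would go over HTTP between two organizations' systems rather than within one process, but the design idea is the same: the interface, not the implementation, is the point of contact.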
One of the big debates about digital transformation is the potential net cost to human jobs. How real a concern should this be?
A rule of thumb here is that if you have a very tight job description, your job is threatened. If your job description is creative, you are much less threatened, although even AI is becoming creative. However, this is not like falling off a cliff – to start with we are talking about job augmentation for humans, not replacement. Workers in the trucking business are not going to be replaced by [self-driving technology] tomorrow morning, although it will eventually happen. Job replacement by AI is likely to be stealthy.
In terms of what this means for us humans, I think we’re heading into an age where entertainment and enjoyment will become a much more fundamental part of our lives. The difficulty with all this of course is the economics attached to it – when you have that many unemployed, who pays for them?
Is digital transformation a leveler of the playing field – for example between companies in developed and developing countries – or does it further exacerbate gaps in development?
US and European startups, probably up to a D round, will concentrate on their own local markets instead of attempting to enter emerging markets. This gives local businesses and startups in other parts of the world an enormous opportunity to copy US and European startups and be the first in their marketplace with a successful new technology. But there are going to be winners and there are going to be losers. I can normally see the winners as soon as I walk through the door. That’s what happened at SPi Global: I immediately saw a winner.
How is digital transformation likely to impact SPi Global and how, in your view, is the company positioned to take advantage of that?
Initially, my role in the acquisition was to verify the management team was aware of new technology and had some use cases and plans to be able to use it. However, as soon as I walked through the door and met the IT director and CEO, I knew they were in the right place. I had a conversation with the IT director very quickly about using machine learning and data parsing to increase the efficiency of their business. That meant my role then changed into looking for opportunities for expansion. I believe SPi can enter new markets and use its emerging digital technology to turn those marketplaces into very profitable businesses.
For example, a fundamental part of SPi’s business is the re-purposing of content for different platforms, which is a very good business. In the future, they could be using AI to perform intelligent data parsing, whereby content and data that is incompatible with other systems is made compatible via machine learning, automating a number of processes.
What emerging technologies are you most excited about?
I’m an entrepreneur; every time I get out of bed in the morning I see exciting opportunities around disruption! There are two technologies that I am extremely positive about. Firstly, chatbots: an online technology that takes over conversations you might ordinarily have with a call center worker. As AI becomes more powerful, the human operator will appear later and later in the conversation – and this will happen by stealth. In the end, this will make a company’s net promoter score higher because consumers will speak with one computer entity instead of a call center worker with a script. This is the future interface to the web.
Secondly, material science, which has fundamental opportunities to change every object that we lay our hands on to make those objects much more efficient, changing industries in the process. An example of this is provided by Boeing, which about six months ago launched a new metal called microlattice. This is ten times as strong as stainless steel and almost as light as air – and it’s 3D printable. In the future, they plan to make aircraft out of it. I would say that’s pretty groundbreaking and has enormous implications for the transport business.
Your steps in digital transformation
Recently, I was invited to give a keynote speech on the impacts of digital disruption at a company’s Innovation Day event. The participants, consisting mainly of the company’s senior management team, were enthusiastic and motivated to develop innovation strategies for their business. During the course of the event, the team discussed digitisation of their business operations as the core of digital transformation.
Although in many cases digitisation (converting something analogue to digital formats, e.g. replacing a paper-based filing system with online document management) is a key element in organizational transformation for the digital age, I believe it is only one part of digital business transformation in its entirety. Incumbent organisations looking to gain competitive advantage in the digital economy will need to address several levels of change, improvement and development. I have developed a simple illustration of these levels to help companies understand the scope, impact and potential outcome of initiatives at each level.
Level 1 – OPTIMIZE the foundation
The ‘Optimize’ level focuses on initiatives that strengthen and improve existing business elements to create a strong foundation. As a business, basic operations need to function smoothly for the company to exist and sustain itself. These include physical operations, day-to-day activities and financial management, among other things. The focus here is really to ensure these basic operations are optimized – reduced redundancy, high efficiency, low costs, etc. Many ‘Optimize’ activities may involve digitization of operations, for example process automation, implementation of ERP systems, launching a responsive website, or using social media channels for customer interactions.
‘Optimize’ initiatives are activities that companies should be implementing to improve operations and efficiency, whether they are ready for a digital transformation journey or not. Companies that have not optimized will struggle to stay relevant in the digital economy.
Level 2 – EVOLVE the business
The ‘Evolve’ level focuses on transforming selected foundation elements to prepare for the dynamic needs of digital disruption. The rapid development of disruptive technologies is enabling new ways of doing things in an organization. Companies now have the possibility to apply these technologies to enable, improve or transform their business. For example, the use of chatbots on Facebook Messenger to improve customer interaction and increase sales conversion, or the use of machine learning or AI in data analytics to gather predictive insights on customer behaviour or preferences.
‘Evolve’ initiatives typically focus on the digitalization of a specific function or business area, improving or transforming it by leveraging digital technologies. For example, I recently saw a smart factory demonstration where the speed and efficiency of the assembly line were improved through the use of smart bins. The bins were programmed to order parts as soon as stock hit a minimum threshold level, avoiding delays.
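The smart-bin rule described above is, at its core, a simple threshold check. A minimal sketch, with all part names and quantities invented:

```python
# A minimal sketch of the smart-bin reordering rule described above:
# each bin tracks its part count and triggers an order the moment the
# count falls to a minimum threshold. All values here are invented.

class SmartBin:
    def __init__(self, part, count, threshold, reorder_qty):
        self.part = part
        self.count = count
        self.threshold = threshold
        self.reorder_qty = reorder_qty
        self.orders = []  # record of orders the bin has placed

    def take(self, n):
        """The assembly line removes n parts; reorder on hitting the threshold."""
        self.count -= n
        if self.count <= self.threshold:
            self.orders.append((self.part, self.reorder_qty))

bin_ = SmartBin("bolt-M6", count=10, threshold=4, reorder_qty=50)
bin_.take(3)          # count 7, above threshold: no order yet
bin_.take(3)          # count 4, at threshold: order placed
print(bin_.orders)    # [('bolt-M6', 50)]
```

The value in the real demonstration comes less from the rule itself than from the bins sensing their own stock levels and placing orders without human intervention.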
Initiatives at this level can go a long way to creating competitive advantage for companies. However, it is not as simple as throwing new technology at a business problem. I have seen many examples of companies launching a mobile app in the name of digital transformation. Here, real benefits are gained only by leveraging technology in a strategic and targeted way to resolve or transform an existing business challenge.
Level 3 – INNOVATE for the future
The ‘Innovate’ level focuses on exploring innovative initiatives to transform the business and create a sustainable competitive edge in the digital economy. This level involves true organization-wide digital business transformation. For example, disruptive business model innovation falls within the Innovate level.
I believe that digital disruption will hit every industry at some point over the next five years and change the global business landscape. However, some industries will be hit sooner than others, simply due to the nature of their business. For example, the first big wave of disruption will hit technology, media, retail and financial services, due to rapid developments in technology, evolving consumer behaviours and a high number of digital entrants. On the other hand, oil and gas or utilities may be hit at a later stage.
Despite this, every company should be exploring Innovate initiatives in order to prepare for this disruption. As a result, new operational activities, products/services, or business models may surface, creating a need for organization-wide transformation.
An evolutionary process, not a one-time change
The digital business transformation journey can be viewed as an evolutionary process, with change happening over a period of time. As a knee-jerk reaction to disruption, business leaders may be tempted to implement a new technology in an effort to ‘digitize’ the business. However, it is worth exploring initiatives across the various levels to determine where a company needs to focus for the highest returns. Keep in mind that not all levels need to be implemented simultaneously. As a starting point, focus on implementing ‘Optimize’ and ‘Evolve’ initiatives. But leadership teams need to already be exploring the ‘Innovate’ level, particularly in highly disruptive industries.
Kamales Lardi is a digital business transformation strategist and dynamic keynote speaker. She helps companies leverage digital disruption to create new opportunities for business and generate revenue. Kamales is also a published author, lecturer, mentor to entrepreneurs and member of the MBA Advisory Board at Durham University, UK.
Tucked into the last day of WWDC was a session on podcasting, and it contained some big news for the burgeoning industry. Before getting into the specific announcements, though, the session itself is worth a bit of analysis, particularly the opening from Apple Podcasts Business Manager James Boggs:
First we want to talk for a moment about how we think about modern podcasts. Long-form and audio. We get excited about episodic content that entertains, informs, and inspires. We get excited and many of our users have gotten excited too.
I went on to transcribe the next 500 or so words of Boggs’s presentation, which included various statistics on downloads, catalog size, and reach, plus a listing of Apple “partners” organized by media and broadcast organizations, public media, and independents. I had even started in on Boggs’s review/promotion of individual podcasts like “Up and Vanished” and “Masters of Scale” before I realized Boggs was never going to actually say “how [Apple] think[s] about modern podcasts.” I won’t make you read the transcript — take my word when I say that there was nothing there.
Still, that itself was telling; Boggs’s presentation perfectly reflects the state of podcasting today: Apple is an essential piece, even as they really don’t have anything to do with what is going on (but naturally, are happy to take credit).
A Brief History of Podcasts
Probably the first modern podcast was created by Dave Winer in 2003, although it wasn’t called a “podcast”: that was coined by Ben Hammersley in 2004, and the inspiration was Apple’s iPod. Still, while the medium had a name, the “industry”, such as it was, was very much the wild west: a scattering of podcast creators, podcatchers (software for downloading the podcasts), and podcast listeners, finding each other by word-of-mouth.
A year later Apple made the move that cemented their current position as the accidental gorilla of the industry: iTunes 4.9 included support for podcasts and, crucially, the iTunes Music Store created a directory (Apple did not — and still does not — host the podcast files themselves). The landscape of podcasting was completely transformed:
Centralization occurs in industry after industry for a reason: everyone benefits, at least in the short term. Start with the users: before iTunes 4.9 subscribing and listening to a podcast was a multi-step process, and most of those steps were so obscure as to be effective barriers for all but the most committed of listeners.
- Find a podcast
- Get a podcatcher
- Copy the URL of the podcast feed into the podcatcher
- Copy over the audio file from the podcatcher into iTunes
- Sync the audio file to an iPod
- Listen to the podcast
- Delete the podcast from the iPod the next time you sync’d
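The podcatcher's core job in the steps above was mechanical: fetch the feed URL you pasted in, parse its XML, and pull out each episode's enclosure (audio) URL to download. A minimal sketch of that parsing step, using an invented feed:

```python
import xml.etree.ElementTree as ET

# A minimal podcatcher sketch: parse a podcast RSS feed and pull out each
# episode's audio URL from its <enclosure> tag. The feed below is invented;
# a real podcatcher would fetch the feed over HTTP, then download each file.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Podcast</title>
    <item>
      <title>Episode 1</title>
      <enclosure url="http://example.com/ep1.mp3" type="audio/mpeg" length="12345"/>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(FEED)
episodes = [
    (item.findtext("title"), item.find("enclosure").get("url"))
    for item in root.iter("item")
]
print(episodes)  # [('Episode 1', 'http://example.com/ep1.mp3')]
```

Nothing here is hard for a programmer; the barrier was that every non-programmer had to assemble these pieces themselves, which is exactly what iTunes 4.9 eliminated.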
iTunes 4.9 made this far simpler:
- Find a podcast in the iTunes Store and click ‘Subscribe’
- Sync your iPod
Recounting this simplification may seem pedantic, but there is a point: this was the most important improvement for podcast creators as well. Yes, the iTunes Music Store offered an important new discovery mechanism, but it was the dramatic improvement to the user experience that, for the vast majority of would-be listeners, made podcasts even worth discovering in the first place. Centralized platforms win because they make things easier for the user; producers willingly follow.
Interestingly, though, beyond that initial release, which was clearly geared towards selling more iPods, Apple largely left the market alone, with one important exception: in 2012 the company released a standalone Podcasts app for iOS in the App Store, and in 2014 the app was built into iOS 8. At that point the power of defaults did its job: according to the IAB Podcast Ad Metrics Guidelines released last fall, the Apple Podcast App accounts for around 50% of all podcast players across all operating systems (iTunes is a further ~10%).1
The Business of Podcasting
It’s not clear when the first podcast advertisement was recorded; a decent guess is Episode 67 of This Week in Tech, recorded on September 3, 2006 (Topic: “Does the Google CEO’s place on Apple’s board presage a Sun merger?”). The sponsor was surprisingly familiar — Visa (“Safer, better money. Life takes Visa.”), and Dell joined a week later.
Over the ensuing years, though, the typical podcast sponsor was a bit less of a name brand — unless, of course, you were a regular podcast listener, in which case you quickly knew the brands by heart: Squarespace, Audible, Casper Mattress, Blue Apron, and recent favorite MeUndies (because who doesn’t want to hear a host-read endorsement for underwear!). Companies like Visa or Dell were few and far between: a study by FiveThirtyEight suggested brand advertisers were less than five percent of ad reads.
The reason is quite straightforward: for podcasts there is neither data nor scale. The data part is obvious: while podcasters can (self-)report download numbers, no one knows whether or not a podcast is played, or if the ads are skipped. The scale bit is more subtle: podcasts are both too small and too big. They are too small in that it is difficult to buy ads at scale (and there is virtually no quality control, even with centralized ad sellers like Midroll); they are too large in that the audience, which may be located anywhere in the world listening at any time, is impossible to survey in order to measure ad effectiveness.
That is why the vast majority of podcast advertisers are actually quite similar: nearly all are transaction-initiated subscription-based services. The “transaction-initiated” bit means that there is a discrete point at which the customer can indicate where they heard about the product, usually through a special URL, while the “subscription-based” part means these products are evaluating their marketing spend relative to expected lifetime value. In other words, the only products that find podcast advertising worthwhile are those that expect to convert a listener in a measurable way and make a significant amount of money off of them, justifying the hassle.2
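That lifetime-value calculation can be made concrete with a toy example (all numbers invented):

```python
# A toy version of the ROI calculation described above, with invented numbers:
# a subscription service buys a host-read ad, tracks conversions through a
# special URL, and compares spend against expected lifetime value.
ad_cost = 2_000.00        # price of one host-read ad spot
conversions = 80          # signups attributed via the special URL
lifetime_value = 120.00   # expected revenue per converted subscriber

cost_per_acquisition = ad_cost / conversions      # 25.0 per subscriber
expected_return = conversions * lifetime_value    # 9600.0
roi = (expected_return - ad_cost) / ad_cost       # 3.8x the spend

print(cost_per_acquisition, roi)
```

Run the same numbers for a one-off, low-margin product and the spot stops making sense, which is why the advertiser pool skews so heavily toward subscription services.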
The result is an industry that, from a monetization perspective, looks a lot like podcasting before iTunes 4.9; there are small businesses to be built, but the industry as a whole is stunted.
Apple Podcast Analytics
This is the context for what Apple actually announced. Jason Snell had a good summary at Six Colors:
New extensions to Apple’s podcast feed specification will allow podcasts to define individual seasons and explain whether an episode is a teaser, a full episode, or bonus content. These extensions will be read by the Podcast app and used to present a podcast in a richer way than the current, more linear, approach…
The other big news out of today’s session is for podcasters (and presumably for podcast advertisers): Apple is opening up in-episode analytics of podcasts. For the most part, podcasters only really know when an episode’s MP3 file is downloaded. Beyond that, we can’t really tell if anyone listens to an episode, or how long they listen—only the apps know for sure. Apple said today that it will be using (anonymized) data from the app to show podcasters how many people are listening and where in the app people are stopping or skipping. This has the potential to dramatically change our perception of how many people really listen to a show, and how many people skip ads, as well as how long a podcast can run before people just give up.
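For concreteness, here is a minimal sketch of reading the season and episode-type tags Snell describes. The tag names (`itunes:season`, `itunes:episodeType`, with values like `full`, `trailer`, and `bonus`) follow Apple's published podcast feed extensions; the feed content itself is invented:

```python
import xml.etree.ElementTree as ET

# Sketch of reading Apple's podcast feed extensions. The itunes:season and
# itunes:episodeType tag names match Apple's published spec; the feed
# content below is invented for illustration.
ITUNES = "http://www.itunes.com/dtds/podcast-1.0.dtd"
FEED = f"""<?xml version="1.0"?>
<rss version="2.0" xmlns:itunes="{ITUNES}">
  <channel>
    <item>
      <title>S2 Teaser</title>
      <itunes:season>2</itunes:season>
      <itunes:episodeType>trailer</itunes:episodeType>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(FEED)
item = root.find(".//item")
season = item.findtext(f"{{{ITUNES}}}season")     # ElementTree namespace syntax
ep_type = item.findtext(f"{{{ITUNES}}}episodeType")
print(season, ep_type)  # 2 trailer
```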
The new extensions are a nice addition, and a way in which Apple can enhance the user experience to the benefit of everyone. As you might expect, though, I’m particularly interested in the news about analytics. Problem solved, right? Or is it problem caused? What happens when advertisers realize that everyone is skipping their ads?
Advertisers: Not Idiots
In fact, I expect these analytics to have minimal impact, at least in the short run. For one, every indication is that analytics will only be available to the podcast publishers, although certainly advertisers will push to have them shared.3 More pertinently, though, all of the current podcast publishers know exactly what they are getting: X amount of podcast ads results in Y number of conversions that result in Z amount of lifetime value.
Indeed, contrary to what many folks seem to believe, advertisers, whether they leverage podcasts, Facebook, Google, or old school formats like radio or TV, are not idiots blindly throwing money over a wall in the vague hopes that it will drive revenue, ever susceptible to being shocked, shocked! that their ads are being ignored. Particularly in the case of digital formats advertisers are quite sophisticated, basing advertising decisions off of well-known ROI calculations. That is certainly the case with podcasts: knowing to a higher degree of precision how many ads are skipped doesn’t change the calculation for the current crop of podcast advertisers in the slightest.
What more data does do is open the door to more varied types of advertisers beyond the subscription services that dominate the space. Brand advertisers, in particular, are more worried about reaching a guaranteed number of potential customers than they are tracking directly to conversion, and Apple’s analytics will help podcasters tell a more convincing story in that regard.
In truth, though, Apple’s proposed analytics aren’t nearly enough: advertisers still won’t know who they are reaching or where they are located, and while brand advertisers may not have the expectation of tracking-to-purchase no one wants to throw money to the wind either. The problem of surveying effectively to measure things like brand lift is as acute as ever, and it simply isn’t worth the trouble to do a bunch of relatively small media buys with zero quality control.
This, though, is why Apple’s centralized role is so intriguing. Remember, the web was thought to be a wasteland for advertising until Google provided a centralized point that aggregated users and could be sold to advertisers. Similarly, mobile was thought to monetize even worse than the (desktop) web until Facebook provided a centralized point that aggregated users and could be sold to advertisers. I expect a similar dynamic in podcasts: the industry will remain the province of web hosting and underwear absent centralization and aggregation, and the only entity that can accomplish that is Apple.
One can envision the broad outlines of what the business for a centralized aggregator for podcasts might look like:
- The centralized aggregator would likely offer hosting to podcast creators, not only to secure the user experience and get better analytics (including on downloads through other apps) but also to dynamically insert advertisements. Those advertisements would also be available to smaller podcasts that are currently not worth the effort to advertisers.
- Advertisers would get their own dashboard for those analytics and, more importantly, the opportunity to buy ads at far greater scale across a large enough audience to make it worth their while. Ideally, at least from their perspective, they would actually be able to target their advertising buys as well.
- Users would, at least in theory, benefit from a far broader array of content made possible by the growth in revenue for the industry broadly.
There are already companies trying to do just this: I wrote about E.W. Scripps’ Midroll and their acquisition of podcast player Stitcher last year. The problem is that Stitcher only has around 5% of listeners, and it is the ownership of users/listeners, not producers/podcasts, from which true market power derives. Apple has that ownership, and thus that power; the question is: will they use it?
Surely the safe bet is “no”. iAd, Apple’s previous effort at building an advertising business, failed spectacularly, and Apple’s anti-advertising rhetoric has only deepened since then. That’s a problem not only in terms of image but culture: Apple seems highly unlikely to be willing to put in the effort necessary to build a real advertising business, and given how small such a business might be even in the best-case scenario relative to the rest of the company, that’s understandable.4
To be sure, should Apple decline to seize this opportunity it will be celebrated by many, particularly those doing well in the current ecosystem. Podcasting is definitely more open than not, with no real gatekeepers in terms of either distribution or monetization. That, though, is why the money is so small: gatekeepers are moneymakers, and while podcasts may continue to grow, it is by no means inevitable that, absent a more active Apple, the money will follow.
Disclosure: Exponent, the podcast I host with James Allworth, does have a (single) sponsor; the revenue from this sponsorship makes up a very small percentage of Stratechery’s overall revenue and does not impact the views in this article
- For what it’s worth, Exponent has a much different profile: Apple Podcasts has about 13% share, while Overcast leads the way with 26% share, followed by (surprisingly!) Mobile Safari with 23%
- This shows why Casper mattresses are the exception that proves the rule: mattresses are not a subscription service, but they are much more expensive than most products bought online, which achieves the same effect as far as lifetime value is concerned
- I’m less worried about the fact other podcast players may not offer similar analytics: the Apple Podcast app will be used as a proxy, although this may hurt podcasts that have a smaller share of downloads via the Apple Podcast app (as total listeners may be undercounted absent similar analytics from other apps)
- It’s Google’s challenge in building a real hardware business in reverse
Connecting the classical & colloquial
It won’t have escaped your notice that ‘disruption’ has been the de rigueur term for many people in business, technology and innovation for a while now. The danger of an increased popularity and visibility of such a term is its dilution and misappropriation. As a person seemingly genetically opposed to business-speak and buzzwords, and as Managing Editor of D/SRUPTION, it is especially important to me that we keep a close eye on it.
Talk of disruption and disruptive technology first appeared in a 1995 Harvard Business Review article by Joseph L. Bower and Clayton M. Christensen called ‘Disruptive Technologies: Catching the Wave’. Christensen developed these ideas further in his now well-known book, published in 1997, ‘The Innovator’s Dilemma’. His theory started the revolution in business thinking that we can call ‘classical’ disruption.
What is disruption?
Disruption is the process that happens when an upstart company with few resources is able to successfully take on an incumbent company and win… It goes a little something like this:
– The incumbent tends to focus its efforts on improving products and services for its most profitable customers. Due to this, it pays less attention to its less profitable customers (or ignores them and other potential new markets entirely).
– As the incumbent business has spent all its effort focusing on its higher value customers and ignored the lower value segments, those segments present an opportunity for the potential upstart disruptor.
– The upstart can become the disruptor of the large company by successfully catering for these ignored or untended groups, providing them with similar products or services to the incumbent, or by answering similar needs and often at a cheaper price.
– The incumbent may not acknowledge or even know about the threat because initially the upstart is just meddling in a small section of its least profitable market. So the upstart may be operating under the radar or written off as piffling, and it often will be – not all upstart businesses go on to succeed let alone be disruptors.
– Notably, the disruptor will often find a way to take people from these less tended segments – people who aren’t much of a customer for the incumbent (or indeed a customer at all) – and turn them into customers of its own.
– Once the upstart has established itself by fulfilling the needs of the people ignored or untended by the incumbent, it will be working hard to grow and expand upwards.
– As a result of this growth, its products and services begin to improve and will start to include more of the qualities and features that the more high value customers of the incumbent company demand or expect.
Ultimately this results in some of those customers shifting from the incumbent to the upstart. Once that number is significant – that is disruption.
Disruption in Christensen’s model has the clear and specific definition that I’ve outlined above. Many uses of ‘disruption’ today are not in keeping with this model at all. It is perhaps due to the availability of the word in everyday speech that there is a significant gap between Christensen’s definition, and the more colloquial uses of the word.
Understandably, like anyone with a good theory, Christensen is keen to preserve his own, and to make sure the terminology used stays tight to his definitions to ensure his theory remains as specific as it is. He has himself expressed regret about choosing the term ‘disruption’ to denote this precise meaning because of this potential confusion – between the specific and the colloquial. Many baulk at putting even a toe outside of Christensen’s definition – but I’ve encountered many more that use the term ‘disruption’ seemingly without any awareness of the theory, perhaps other than in name, at all.
Although it may prickle some, language and terminology will always evolve. Despite the inevitable frustrations, the term has come to mean other things to other people – or more accurately, it continues to mean what it had always meant long before Christensen arrived with his dilemma.
Colloquially, we tend to use ‘disruption’ to describe when an event, system, or process is interrupted and prevented from continuing or operating in its usual way. It is this definition that many have in mind when they say they, or their businesses, are ‘disruptive’, and there is certainly nothing wrong with that provided we keep in mind the clear distinction between this and Christensen’s model.
Classical vs colloquial – Airbnb vs Uber
You can’t go far into a conversation about disruption these days without someone mentioning Airbnb or Uber. What’s interesting is that by Christensen’s ‘classical’ definition – Uber’s taxi business is not actually disruptive at all (although other aspects of Uber’s wider business may prove to be). Christensen has pointed this out. In terms of his theory, we can see that Uber did not create a new market nor did it begin at the low end of an existing market. Airbnb on the other hand does indeed fulfil the ‘classical’ definition of disruption.
However, in terms of colloquial disruption, both Uber and Airbnb are widely, and rightly, regarded as disruptive for a combination of sheer impact, speed of growth, and the use of technology to radically change the way the world works.
A significant marker for both ‘classical’ and colloquial disruption is that these businesses often appear to come out of nowhere. That is an illusion – no business or innovation, disruptive or not, comes from nowhere – but shock impact and exponential growth will often indicate that something major has happened, and quickly, and that is usually disruptive in one sense or the other.
Symptom & Cause
If we understand the model as intended, we can also be free to explore other concepts; even if unfortunately they are forced to share the same name. Christensen points out that broadening the definition of disruption in relation to his theory undermines it, and in this he is correct. But, so long as we understand that we are holding two different things in mind that share a name – but not a definition, we are capable of gaining a lot from each.
While acknowledging the theory and definition put forward by Christensen, there are some wider uses of disruption that are also worthy of consideration. Outside of what I’ve named, ‘classical’ disruption, the term has come to signify a range of approaches to business and innovation and often indicates a radical change to traditional ways of working, thinking and doing.
Disruption in a ‘classical’ sense is a symptom of a serious problem that an incumbent business has experienced because of an upstart; it is retrospective, an indicator that a major shift has occurred from the incumbent to the upstart. Disruption in the colloquial sense – that of ‘disruptive thinking’ and ‘disruptive approaches’ – is a cause.
Colloquial disruption is an approach, both a fire starter and a rallying cry to create new futures and explore the possibilities, opportunities and hazards. By considering these possibilities we are able to inform strategies and mitigate risks. The exact nature of the future is unknown, but that doesn’t mean we can’t know anything about it.
If we use foresight in an informed and intelligent way, we can identify possible threats and opportunities and use these to our advantage in our businesses. An awareness of the potential threat of disruption in Christensen’s sense, and a disruptive approach to innovation in the colloquial sense, can both be essential parts of business planning. While they are certainly not strategies in their own right, the potential threat posed by Christensen’s disruption, and the opportunities opened by colloquially disruptive approaches, can tangibly and helpfully influence how we plan for our future when faced, as we always are, with a complex world of constant change.
Satisfaction is stagnation
Applying decent futures thinking to a business is a smart move. A business doing well would doubtless like to stabilise and maintain its position, but digging ourselves in and holding hard isn’t an option when faced with unavoidable change. It may work for a time, but it is not sustainable in the longer term.
Recognising the constant state of flux, exercising foresight, and understanding and incorporating what disruption means in all senses are all prerequisites to putting businesses in the best position to be adaptable in the face of change, and to give the best chance of survival and success.
Merely stating ‘we are disruptive’ is just not good enough. This thinking must influence planning, strategy, culture and brand as part of an ongoing process if it is to mean anything useful at all; and those that understand disruption both colloquially (as an approach to be used) and classically (as an indicator of a threat to be avoided) stand to gain by far the most.
When I first worked for a (student) newspaper, the job of a publisher seemed odd to me; as far as I and my editorial colleagues were concerned, the publisher was the person the editor-in-chief, who we viewed as the boss, occasionally griped about after a few too many drinks, usually with the assertion that he (in that case) was a bit of a nuisance.
That attitude, of course, was the luxury of print: whatever happened on the other side of the office didn’t have any impact on the (in our eyes) heroic efforts to produce fresh content every day. We were the ones staying in the office until the wee hours of the night, writing, editing, and laying out the newspaper that would magically appear on newsstands the next morning, all while the publisher and his team were at home in bed.
The moral of this story is obvious: the publisher represented the business side of the newspaper, and the effect of the Internet was to make the job and impact of editorial easier and that of a publisher immeasurably harder, in large part because many of a publisher’s jobs became obsolete; it is the editorial side, though, that has paid the price.
The Jobs a Publisher Did
In the days of print, publishers provided multiple interlocking functions that made newspapers into fabulous businesses:
- Brand: A publisher had a brand, specifically, the name of the publication; this was the primary touchpoint for readers, whether they were interested in national news, local news, sports, or the funny pages.
- Revenue Generation: Most publishers drove revenue in two ways: some money was made through subscriptions, the selling, administration, and support of which was handled by dedicated staff; most money was made from advertising, which had its own dedicated team.
- Human Resources: Editorial staff were free to write and complain about their publishers because everything else in their work life was taken care of, from payroll to travel expenses to office supplies.
What tied these functions together was distribution: a publisher owned printing presses and delivery trucks which, combined with its established readership and advertising relationships, gave most newspapers an effective monopoly (or oligopoly) on readers, advertisers, and writers in their geographic area.
Each of these functions supported the other: the brand drove revenue generation which paid for editorial that delivered on the brand promise, all underpinned by owning distribution.
Publishing’s Downward Spiral
It is hardly news, particularly on this blog, to note that this model has fallen apart. The most obvious culprit is that on the Internet distribution, particularly of text and images, is effectively free, which meant that advertisers had new channels: first ad networks that operated at scale across publishers, and increasingly Facebook and Google, which offer the power to reach the individual directly.
I wrote about this progression in Popping the Publishing Bubble, and the intertwined functionality of publishers explains the downward spiral that followed: with less revenue there was less money for quality journalism (and a greater impetus to chase clicks), which meant a devaluing of the brand, which meant fewer readers, which led to even less money.
What made this downward spiral particularly devastating is that, as demonstrated by the advertising shift, newspapers did not exist in a vacuum. Readers could read any newspaper, or digital-only publisher, or even individual bloggers. And, just as social media made it possible for advertisers to target individuals, it also made everyone a content creator pushing their own media into the same feed as everyone else: the brand didn’t matter at all, only the content, or, in a few exceptional cases, the individual authors, many of whom amassed massive followings of their own; one prominent example is Bill Simmons, the American sportswriter.
Vox Media + The Ringer
I wrote about Simmons two years ago in Grantland and the (Surprising) Future of Publishing, and noted that media entities needed to think about monetization holistically:
Too much of the debate about monetization and the future of publishing in particular has artificially restricted itself to monetizing text. That constraint made sense in a physical world: a business that invested heavily in printing presses and delivery trucks didn’t really have a choice but to stick the product and the business model together, but now that everything — text, video, audio files, you name it — is 1’s and 0’s, what is the point in limiting one’s thinking to a particular configuration of those 1’s and 0’s?
In fact, it’s more than possible that in the long-run the current state of publishing — massive scale driven by advertising on one hand, and one-person shops with low revenue numbers and even lower costs on the other — will end up being an aberration. Focused, quality-obsessed publications will take advantage of bundle economics to collect “stars” and monetize them through some combination of subscriptions (less likely) or alternate media forms. Said media forms, like podcasts, are tough to grow on their own, but again, that is what makes them such a great match for writing, which is perfect for growth but terrible for monetization.
My back-of-the-envelope calculations estimated that Simmons’ Ringer podcast network was likely generating millions of dollars, and in an interview with Recode earlier this year, Simmons confirmed that is the case, claiming that podcast revenue was more than covering the cost of creating not just podcasts but the website that, at least in theory, created podcast listeners.
Still, given Simmons’ ambitions, it would certainly be better were the site more than a cost center, which makes the company’s most recent announcement particularly interesting. From the New York Times:
The Ringer, a sports and culture website created by Bill Simmons, will soon be hosted on Vox Media’s platform but maintain editorial independence under a partnership announced on Tuesday. Mr. Simmons, a former ESPN personality, will keep ownership of The Ringer, but Vox will sell advertising for the site and share in the revenue. The Ringer will leave its current home on Medium, where it has been hosted since it began in June 2016.
Jim Bankoff, Vox’s chief executive, said in a phone interview that the partnership was the first of its type for the company and would allow it to expand its offerings to advertisers. Mr. Simmons said in a statement: “This partnership allows us to remain independent while leveraging two of the things that Vox Media is great at: sales and technology. We want to devote the next couple of years to creating quality content, innovating as much as we can, building our brand and growing The Ringer as a multimedia business.”
Simmons is exactly right about the benefits he gets from the deal: instead of building duplicative technology and ad sales infrastructure, The Ringer can simply use Vox Media’s. This is less important with regards to the technology (Vox’s insistence that Chorus is a meaningful differentiator notwithstanding) but hugely important when it comes to advertising. It’s not simply the expense of building an infrastructure for ad sales; the top line is even more critical: it is all but impossible to compete with Google and Facebook for advertising dollars without massive scale.
Make no mistake, Simmons is the sort of writer that many advertisers would be happy to advertise next to (his podcast has had an impressive slate of brand names, in addition to the usual mainstays like Squarespace and Casper mattresses); the problem is that when it comes to the return-on-investment of buying ads, the “investment” — particularly time — is just as important as the “return”: a brand looking to advertise directly on premium media is far more likely to deal with Vox Media and its huge stable of sites than it is to do a relatively small deal with a site like The Ringer.
Indeed, the bifurcation in the Internet’s impact on editorial and advertising — the former is becoming atomized, the latter consolidated — explains why the implications for Vox Media are, in my estimation, the more important takeaway from this deal.
Vox Media’s Upside
To date Vox Media has been a relatively traditional publisher, albeit one that has executed better than most: the company has built strong brands that attract audiences which can be monetized through advertising, and that revenue, along with venture capital, has been fed into an impressive editorial product that builds up the company’s brands.
The Ringer, though, is not a Vox Media brand: it is Simmons’ brand, a point he emphasized in his statement, and that’s great news for Vox. The problem with editorial is that while the audience scales, production doesn’t: content still has to be created on an ongoing basis, and that means high variable costs.
Infrastructure, though, does scale: Vox Media uses the same underlying technology for all of its sites, which is exactly what you would expect given that software can be replicated endlessly. Crucially, the same principle applies to advertising: one sales team can sell ads across any number of sites, and the more impressions the better. Presuming The Ringer ends up being not an outlier but rather the first of many similar deals,1 then that means that Vox Media has far more growth potential than it did as long as it was focused only on monetizing its owned-and-operated content.
Publishers of the Future
The new model portended by this deal looks something like this:
In this model the most effective and scalable publisher is faceless: atomized content creators, fueled by social media, build their own brands and develop their own audiences; the publisher, meanwhile, builds scale on the back end, across infrastructure, monetization, and even human-resource type functions.2 This last point makes a faceless publisher more than an ad network, and crucially, I suspect the greatest impact will not be (just) about ads.
Earlier this month I wrote about the future of local news, which I argued would entail relatively small subscription-based publications. Said publications would be more viable were there a faceless publisher in place to provide technology, including subscription and customer support capabilities, and all of the other repeatable minutiae that comes with running a business. Publishers still matter, but much of what matters can be scaled and offered as a service without being tied to a brand and a specific set of content.
I suspect this is part of the endgame for publishing on the Internet: free distribution blew up the link between editorial and publishing and drove them in opposite directions — atomization on one side and massively greater scale on the other. And now, that same reality makes possible a new model: a huge number of small publications backed by entities more concerned with building viable businesses than having memorable names.
- There is already a parallel to The Ringer within Vox Media: the company’s vast network of team-specific sites that sit under the SBNation umbrella
- This is where Medium went wrong: the company made motions towards this model — which is why The Ringer is hosted there — but has decided to pursue a Medium subscription model instead
My favorite part of keynotes is always the opening. That is the moment when the CEO comes on stage, not to introduce new products or features, but rather to create the frame within which new products and features will be introduced.
This is why last week’s Microsoft keynote was so interesting: CEO Satya Nadella spent a good 30 minutes on the framing, explaining a new world where the platform that mattered was not a distinct device or a particular cloud, but rather one that ran on all of them. In this framing Microsoft, freed from a parochial focus on its own devices, could be exactly that; the problem, as I noted earlier this week, is that platforms come from products, and Microsoft is still searching for an on-ramp other than Windows.
The opening to Google I/O couldn’t have been more different. There was no grand statement of vision, no mind-bending re-framing of how to think about the broader tech ecosystem, just an affirmation of the importance of artificial intelligence — the dominant theme of last year’s I/O — and how it fit in with Google’s original vision. CEO Sundar Pichai said in his prepared remarks:
It’s been a very busy year since last year, no different from my 13 years at Google. That’s because we’ve been focused ever more on our core mission of organizing the world’s information. And we are doing it for everyone, and we approach it by applying deep computer science and technical insights to solve problems at scale. That approach has served us very, very well. This is what has allowed us to scale up seven of our most important products and platforms to over a billion users…It’s a privilege to serve users at this scale, and this is all because of the growth of mobile and smartphones.
But computing is evolving again. We spoke last year about this important shift in computing, from a mobile-first, to an AI-first approach. Mobile made us re-imagine every product we were working on. We had to take into account that the user interaction model had fundamentally changed, with multitouch, location, identity, payments, and so on. Similarly, in an AI-first world, we are rethinking all our products and applying machine learning and AI to solve user problems, and we are doing this across every one of our products.
Honestly, it was kind of boring.
Google’s Go-to-Market Problem
After last year’s I/O I wrote Google’s Go-To-Market Problem, and it remains very relevant. No company benefited more from the open web than Google: the web not only created the need for Google search, but the fact that all web pages were on an equal footing meant that Google could win simply by being the best — and they did.
Mobile has been much more of a challenge: while Android remains a brilliant strategic move, its dominance is rooted more in its business model than in its quality (that’s not to denigrate its quality in the slightest, particularly the fact that Android runs on so many different kinds of devices at so many different price points). The point of Android — and the payoff today — is that Google services are the default on the vast majority of phones.
The problem, of course, is iOS: Apple has the most valuable customers (from a monetization perspective, to be clear), who mostly don’t bother to use different services than the default Apple ones, even if they are, in isolation, inferior. I wrote in that piece:
Yes, it is likely Apple, Facebook, and Amazon are all behind Google when it comes to machine learning and artificial intelligence — hugely so, in many cases — but it is not a fair fight. Google’s competitors, by virtue of owning the customer, need only be good enough, and they will get better. Google has a far higher bar to clear — it is asking users and in some cases their networks to not only change their behavior but willingly introduce more friction into their lives — and its technology will have to be special indeed to replicate the company’s original success as a business.
To that end, I thought there were three product announcements yesterday that suggested Google is on the right track:
Google Assistant was first announced last year, but it was only available through the Allo messenger app, Google’s latest attempt to build a social product; the company also pre-announced Google Home, which would not ship until the fall, alongside the Pixel phone. You could see Google’s thinking with all three products:
- Given that the most important feature of a messaging app is whether or not your friends or family also use it, Google needed a killer feature to get people to even download Allo. Enter Google Assistant.
- Thanks to the company’s bad bet on Nest, Google was behind Amazon in the home. Google Assistant being smarter than Alexa was the best way to catch up.
- A problem for Google with voice computing is that it is not clear what the business model might be; one alternative would be to start monetizing through hardware, and so the high-end Pixel phone was differentiated by Google Assistant.
All three approaches suffered from the same flaw: Google Assistant was the means to a strategic goal, not the end. The problem, though, is that unlike search, Google Assistant was not yet established as something people should jump through hoops to get: driving Google Assistant usage needs to be the goal; only then can it be leveraged for something else.
To that end Google has significantly changed its approach over the last 12 months.
- Google Assistant is now available as its own app, both on Android and iOS. No unwanted messenger app necessary.
- The Google Assistant SDK will allow Google Assistant to be built into just about anything. Scott Huffman, the VP of Google Assistant, said:
We think the assistant should be available on all kinds of devices where people might want to ask for help. The new Google Assistant SDK allows any device manufacturer to easily build the Google Assistant into whatever they’re building, speakers, toys, drink-mixing robots, whatever crazy device all of you think up now can incorporate the Google Assistant. We’re working with many of the world’s best consumer brands and their suppliers so keep an eye out for the badge that says “Google Assistant Built-in” when you do your holiday shopping this year.
This is the exact right approach for a services company.
- That leads to the Pixel phone: earlier this year Google finally added Google Assistant to Android broadly — built-in, not an app — after having insisted just a few months earlier it was a separate product. The shifting strategy was a big mistake (as, arguably, is the entire program), but at least Google has ended up where they should be: everywhere.
Google Assistant has a long way to go, but there is a clear picture of what success will look like: Google Photos. Launched only two years ago, Photos, Pichai bragged, now has over 500 million active users who upload 1.2 billion photos a day. This is a spectacular number for one very simple reason: Google Photos is not the default photo app for Android1 or iOS. Rather, Google has earned all of those photos simply by being better than the defaults, and the basis of that superiority is Google’s machine learning.
Moreover, much like search, Photos gets better the more data it gets, creating a virtuous cycle: more photos means more data which means a better experience which means more users which means more photos. It is already hard to see other photo applications catching up.
Yesterday Google continued to push forward, introducing suggested sharing, shared libraries, and photo books. All utilize vision recognition (for example, you can choose to automatically share pictures of your kids with your significant other) and all make Photos an even better app, which will lead to new users, which will lead to more data.
What is particularly exciting from Google’s perspective is that these updates add a social component: suggested sharing, for example, is self-contained within Google Photos, creating ad hoc private networks with you and your friends. Not only does this help spread Google Photos, it is also a much more viable and sustainable approach to social networking than something like Google Plus. Complex entities like social networks are created through evolution, not top-down design, and they must rely on their creator’s strengths, not weaknesses.
Google Lens was announced as a feature of Google Assistant and Google Photos. From Pichai:
We are clearly at an inflection point with vision, and so today, we are announcing a new initiative called Google Lens. Google Lens is a set of vision-based computing capabilities that can understand what you’re looking at and help you take action based on that information. We’ll ship it first in Google Assistant and Photos, and then other products.
How does it work? If you run into something and you want to know what it is, say a flower, you can invoke Google Lens, point your phone at it and we can tell you what flower it is…Or if you’re walking on a street downtown and you see a set of restaurants across you, you can point your phone, because we know where you are, and we have our Knowledge Graph, and we know what you’re looking at, we can give you the right information in a meaningful way.
As you can see, we are beginning to understand images and videos. All of Google was built because we started understanding text and web pages, so the fact that computers can understand images and videos has profound implications for our core mission.
The profundity cannot be overstated: by bringing the power of search into the physical world, Google is effectively increasing the addressable market of searchable data by a massive amount, and all of that data gets added back into that virtuous cycle. The potential upside is about more than data though: being the point of interaction with the physical world opens the door to many more applications, from things like QR codes to payments.
My one concern is that Google is repeating its previous mistake: that is, seeking to use a new product as a means instead of an end. Limiting Google Lens to Google Assistant and Google Photos risks handicapping Lens’ growth; ideally Lens will be its own app — and thus the foundation for other applications — sooner rather than later.
Make no mistake, none of these opportunities are directly analogous to Google search, particularly the openness of their respective markets or the path to monetization. Google Assistant requires you to open an app instead of using what is built-in (although the Android situation should improve going forward), Photos requires a download instead of the default photos app, and Lens sits on top of both. It’s a far cry from simply setting Google as the home page of your browser, and Google making more money the more people used the Internet.
All three apps, though, are leaning into Google’s strengths:
- Google Assistant is focused on being available everywhere
- Google Photos is winning by being better through superior data and machine learning
- Google Lens is expanding Google’s utility into the physical world
There were other examples too: Google’s focus with VR is building a cross-device platform that delivers an immersive experience at multiple price points, as opposed to Facebook’s integrated high-end approach that makes zero sense for a social network. And, just as Apple invests in chips to make its consumer products better, Google is investing in chips to make its machine learning better.
The Beauty of Boring
[The first hour was] a veritable smorgasbord of features and programs that [lacked a] unifying vision, just a sense that Google should do them. An operating system for the home? Sure! An Internet of Things language? Bring it on! Android Wear? We have apps! Android Pay? Obviously! A vision for Android? Not necessary!
None of these had a unifying vision, just a sense that Google ought to do them because they’re a big company that ought to do big things.
What was so surprising, though, was that the second hour of that keynote was completely different. Pichai gave a lengthy, detailed presentation about machine learning and neural nets, and tied it to Google’s mission, much like he did in yesterday’s introduction. After quoting Pichai’s monologue I wrote:
Note the specificity — it may seem too much for a keynote, but it is absolutely not BS. And no surprise: everything Pichai is talking about is exactly what Google was created to do…The next 30 minutes were awesome: Google Now, particularly Now on Tap, was exceptionally impressive, and Google Photos looks amazing. And, I might add, it has a killer tagline: Gmail for Photos. It’s so easy to be clear when you’re doing exactly what you were meant to do, and what you are the best in the world at.
This is why I think that Pichai’s “boring” opening was a great thing. No, there wasn’t the belligerence of early Google I/Os, insisting that Android could take on the iPhone. And no, there wasn’t the grand vision of Nadella last week, or the excitement of an Apple product unveiling. What there was was a sense of certainty and almost comfort: Google is about organizing the world’s information, and given that Pichai believes the future is about artificial intelligence, specifically the machine learning variant that runs on data, that means that Google will succeed in this new world simply by being itself.
That is the best place to be, for a person and for a company.
- Except for the Pixel, natch
- This is a Daily Update but I have made it publicly available
It’s hardly controversial to note that the traditional business model for most publishers, particularly newspapers, is obsolete. Absent the geographic monopolies formerly imposed by owning distribution, newspapers have nothing to offer advertisers: the sort of advertising that was formerly done in newspapers, both classified and display, is better done online. And, contra this rather fanciful suggestion by New York Times media columnist Jim Rutenberg that advertisers prop up newspapers for the good of democracy, nothing is going to change that.
I already explained the problems with Rutenberg’s idea in yesterday’s Daily Update: advertisers are (rightly) motivated by what is best for their business, plus there is a collective action problem. I added, though, mostly in passing, that the future of “local news” would almost certainly be subscription, not advertising-based.
I think it’s worth expounding on that point. What most, including Rutenberg, fail to understand about newspapers is that it is not simply the business model that is obsolete: rather, everything is obsolete. Most local newspapers are simply not worth saving, not because local news isn’t valuable, but rather because everything else in your typical local newspaper is worthless (from a business perspective). That is why I was careful in my wording: subscriptions will not save newspapers, but they just might save local news, and the sooner that distinction is made the better.
The Unnecessary Newspaper
To be clear, I agree with Rutenberg when he states that “A vibrant free press…keeps government honest and voters informed.” Local government needs oversight, which is another way of saying local news is necessary for a well-functioning democracy. The problem is that assuming oversight must be provided by a newspaper is akin to suggesting that a tank be used to kill a fly: sure, it may get the job done, but there is a lot of equipment, ordnance, and personnel that is really not necessary when a flyswatter would not only be sufficient, but actually more effective.
For newspapers, the analogies to equipment, ordnance, and personnel are physical infrastructure, business operations, and editorial staff; just about none of them (yes, including most of the editorial staff) are actually necessary for covering local news.
Printing presses are obviously obsolete: while some newspapers have finally closed them down, others hold on because there is still a modicum of print advertising to be earned. It’s the most prominent example of how newspapers are fundamentally incapable of evolving. Naturally, this extends to distribution centers, delivery trucks, newsstands, and all of the administrative infrastructure that goes into moving around pieces of paper that have zero connection to the actual distribution of local news.
The infrastructure overhead, though, does not stop there: without a print edition there is no need for layout, for high-end photography, or a centralized office space to assemble everything on deadline. There is also a drastically reduced need for editors: when text was printed, copy was permanent, raising the cost of a mistake high enough to justify editing workforces nearly as numerous as journalistic ones. Digital stories, though, can be updated after the fact. Moreover, digital stories are interactive: readers can submit feedback instantly, and as I noted while writing about Wikitribune, the collective knowledge of readers will always be greater than the most seasoned set of editors.
Moreover, given that local news requires little more than text and images and perhaps some video, there is no need for expensive digital infrastructure either; a basic WordPress site is more than sufficient. In short, the entire infrastructure category, which makes up probably 60%~70% of a newspaper’s cost structure (possibly more if you include the editors), has nothing to do with sustainable local news.
Monetizing via print advertisements requires a lot of staff: salespeople to sell the ad, graphic artists to lay it out, account managers to collect the money, plus all the management required to make it work. For large national newspapers like The New York Times, this may all still be necessary, thanks to the ability to sell premium advertising online. However, all of this can be eliminated for most digital-only operations: simply use an ad network. Of course, those come with their own problems: ad networks make web pages suck, and just as importantly, most consumption is shifting to mobile where ad network monetization is particularly ineffective; to the extent advertising is part of the business model relying on Facebook is (still) probably the best option. Or better yet, don’t have any ads at all.
A purely subscription-based business model not only drastically cuts costs, it also makes for a better user experience, a particularly attractive point given that users are the paying customers. Even better, thanks to services like Stripe, digital subscriptions not only cost far less to administer than traditional newspaper subscriptions, but are far more user-friendly as well.
The reality is that for local news this entire category probably only needs to be one person to handle customer service for self-service subscriptions and do the books, and that’s about it. The 15~20% of revenue newspapers are paying for business operations has nothing to do with local news.
This is the biggest blindspot for those lamenting the travails of local newspapers: it may be obvious that printing presses don’t make much sense with the Internet, and most websites have moved to ad networks for the obvious reasons; in fact, though, nearly all of the content in most newspapers is not just unnecessary but in fact actively harmful to building a sustainable future for local news.
Start with the front page (of a physical newspaper, natch): most newspapers have given up on having international, national, or even regional reporters, instead relying on wire services. Even that, though, is a waste: those wire services have their own websites, and international publications are only a click away. Maintaining the veneer of comprehensive coverage is simply clutter, and a cost to boot.
The same thing applies to the opinion section: any column or editorial that is concerned with non-local affairs is competing with the entire Internet (including social media). It’s the same thing with non-local business coverage. Moreover, the cost is more than clutter and dollars: almost by definition the content is inferior to what is available elsewhere, which reduces the willingness to pay.
It’s the same story in what were traditionally the most valuable parts of newspapers:1 sports and the (variously named) lifestyle sections. There are multiple national entities dedicated to covering sports all the way down to the university level, augmented by a still-thriving sports blogosphere. Granted, there may still be a market for local sports coverage, but that is a different market than local news: there is no reason it has to be bundled together.
As for the lifestyle section, it is everywhere. BuzzFeed has set its sights on cooking, crafts, and the horoscope;2 there are all kinds of sites covering gossip and advice; meanwhile, not only are there web comics, but social media provides far more humor than the funny pages ever did. What’s left, bridge? Why not simply play online?
A lot of this content has long since been standardized across newspapers, but the broader point remains the same: absolutely none of it has anything to do with local news, and it should not exist in the local news publication of the future.
Bundles and Business Models
What is critical to understand is that everything in the preceding section is interconnected: by owning printing presses and delivery trucks (and thanks to the low marginal cost of printing extra pages), newspapers were the primary outlet for advertising that didn’t work (or couldn’t afford) TV or radio — and there was a lot of it. Maximizing advertising, though, meant maximizing the potential audience, which meant offering all kinds of different types of content in volume: thus the mashup of wildly disparate content listed above, all focused on quantity over quality. And then, having achieved the most readership and the ability to expand to fit it all, the biggest newspaper could squeeze out its competitors.
In short, the business model drove the content, just as it drove every other piece of the business. It follows, though, that if the content bundle no longer makes sense — which it doesn’t in the slightest — that the business model probably doesn’t make sense either. This is the problem with newspapers: every aspect of their operations, from costs to content, is optimized for a business model that is obsolete. To put it another way, an obsolete business model means an obsolete business. There is nothing to be saved.
The Subscription Business Model
I’ve already hinted at the general outline of a sustainable local news publication, but the critical point is the one I just made: everything must start with the business model, of which there is only one choice — subscriptions.
It is very important to clearly define what a subscription means. First, it’s not a donation: it is asking a customer to pay money for a product. What, then, is the product? It is not, in fact, any one article (a point that is missed by the misguided focus on micro-transactions). Rather, a subscriber is paying for the regular delivery of well-defined value.
Each of those words is meaningful:
- Paying: A subscription is an ongoing commitment to the production of content, not a one-off payment for one piece of content that catches the eye.
- Regular Delivery: A subscriber does not need to depend on the random discovery of content; said content can be delivered to the subscriber directly, whether that be email, a bookmark, or an app.
- Well-defined Value: A subscriber needs to know what they are paying for, and it needs to be worth it.
This last point is at the crux of why many ad-based newspapers will find it all but impossible to switch to a real subscription business model. When asking people to pay, quality matters far more than quantity, and the ratio matters: a publication with 1 valuable article a day about a well-defined topic will more easily earn subscriptions than one with 3 valuable articles and 20 worthless ones covering a variety of subjects. Yet all too many local newspapers, built for an ad-based business model that calls for daily content to wrap around ads, spend their limited resources churning out daily filler even though those ads no longer exist.
A sustainable local news publication will be fundamentally different: a minimal rundown of the news of the day, plus a small number of articles a week featuring real in-depth reporting, with the occasional feature or investigative report. After all, it’s not like it is hard to find content to read on the Internet: what people will pay for is quality content about things they care about (and the fact that people care about their cities will be these publications’ greatest advantage).
It’s also worth noting what a subscription business model does not — must not — include:
- Content that is widely available elsewhere. That means no national or international news (except what has a local impact, and even that is questionable), no non-local business content, no lifestyle section.
- Non-journalistic cost centers. As I noted above, a publication might need one business operations person, and maybe a copy editor; they can probably be the same person. Nearly everything else, including subscription management, hosting, payments, etc., can leverage widely available online services (and you can include social networks: treating all content the same hurts big media companies, but it’s a big opportunity for small ones).
- Any sort of wall between business and editorial. This is perhaps the easiest change to make, and the hardest for newspaper advocates to accept. A subscription business is just that: a business that must, through its content, earn ongoing revenue from customers. That means understanding what those customers want, and what they don’t. It means focusing on the user experience, and the content mix. And it means selling by every member of the organization.
Notice how different this looks from a newspaper, as it must. After all, the business model is different.
I strongly believe the market for this sort of publication is there. My hometown city of Madison, WI has around 250,000 people (500,000 in Dane County), primarily served by The Wisconsin State Journal. To the paper’s credit the website is almost all local news; unfortunately, most of it is uninteresting filler. Worse, to produce this filler took a staff of 52 people, of which only 10 by my count are local reporters (supported by at least 8 editors).
Were a new publication to come along, offering a five minute summary of Madison’s local news of the day, plus an actually relevant story or two a week with the occasional feature or investigative report,3 I’d gladly pay, and I don’t even live there anymore. What I won’t do, though, is bother visiting the Wisconsin State Journal because there simply is too much dreck to wade through, created at ridiculous cost in service of an obsolete business model.4
Indeed, the real problem with local newspapers is more obvious than folks like Rutenberg wish to admit: no one — neither advertisers nor subscribers — wants to pay for them because they’re not worth paying for. If newspapers were actually holding local government accountable I don’t think they would have any problem earning money; that they aren’t is a function of wasting time and money on the past instead of the future.
- Other than the classifieds, that is
- “Choose these foods and we will tell you your ideal mate!”
- With typos
- This is where news foundations and benefactors can actually make a difference: stop supporting local newspapers and instead fund new startups until they build a critical mass of subscribers
The shamelessness was breathtaking.
Having told a few jokes, summarized his manifesto, and acknowledged the victim of the so-called “Facebook-killer” in Cleveland, Facebook founder and CEO Mark Zuckerberg opened his keynote presentation at the company’s F8 developer conference like this:
You may have noticed that we rolled out some cameras across our apps recently. That was Act One. Photos and videos are becoming more central to how we share than text. So the camera needs to be more central than the text box in all of our apps. Today we’re going to talk about Act Two, and where we go from here, and it’s tied to this broader technological trend that we’ve talked about before: augmented reality.
In the way that the flashing cursor became the starting point for most products on desktop computers, we believe that the camera screen will be the starting point for most products on smartphones. This is because images created by smartphone cameras contain more context and richer information than other forms of input like text entered on a keyboard. This means that we are willing to take risks in an attempt to create innovative and different camera products that are better able to reflect and improve our life experiences.
Snap may have declared itself a camera company; Zuckerberg dismissed it as “Act One”, making it clear that Facebook intended to not simply adopt one of Snapchat’s headline features but its entire vision.
Facebook and Microsoft
Shortly after Snap’s S-1 came out, I wrote in Snap’s Apple Strategy that the company was like Apple; unfortunately, the Apple I was referring to was not the iPhone-making juggernaut we are familiar with today, but rather the Macintosh-creating weakling that was smushed by Microsoft, which is where Facebook comes in.
Today, if Snap is Apple, then Facebook is Microsoft. Just as Microsoft succeeded not because of product superiority but by leveraging the opportunity presented by the IBM PC, riding Big Blue’s coattails to ecosystem dominance, Facebook has succeeded not just on product features but by digitizing offline relationships, leveraging the desire of people everywhere to connect with friends and family. And, much like Microsoft vis-à-vis Apple, Facebook has had The Audacity of Copying Well.
I wrote The Audacity of Copying Well when Instagram launched Instagram stories; what was brilliant about the product is that Facebook didn’t try to re-invent the wheel. Instagram Stories — and now Facebook Stories and WhatsApp Stories and Messenger Day — are straight rip-offs of Snapchat Stories, which is not only not a problem, it’s actually the exact optimal strategy: Instagram’s point of differentiation was not features, but rather its network. By making Instagram Stories identical to Snapchat Stories, Facebook reduced the competition to who had the stronger network, and it worked.
Microsoft and Monopoly
Microsoft, of course, was found to be a monopoly, and, as I wrote a couple of months ago in Manifestos and Monopolies, it is increasingly difficult to not think the same about Facebook. That, though, is exactly what you would expect for an aggregator. From Antitrust and Aggregation:
The first key antitrust implication of Aggregation Theory is that, thanks to these virtuous cycles, the big get bigger; indeed, all things being equal the equilibrium state in a market covered by Aggregation Theory is monopoly: one aggregator that has captured all of the consumers and all of the suppliers. This monopoly, though, is a lot different than the monopolies of yesteryear: aggregators aren’t limiting consumer choice by controlling supply (like oil) or distribution (like railroads) or infrastructure (like telephone wires); rather, consumers are self-selecting onto the Aggregator’s platform because it’s a better experience.
This self-selection, particularly onto a “free” platform, makes it very difficult to calculate what cost, if any, Facebook’s seeming monopoly exacts on society. Consider the Econ 101 explanation of why monopolies are problematic:
- In a perfectly competitive market the price of a good is set at the intersection of demand and supply, the latter being determined by the marginal cost of producing that good:1
- The “Consumer Surplus”, what consumers would have paid for a product minus what they actually paid, is the area that is under the demand curve but over the price point; the “Producer Surplus”, what producers sold a product for minus the marginal cost of producing that product, is the area above the marginal cost/supply curve and below the price point:
- In a monopoly situation, there is no competition; therefore, the monopoly provider makes decisions based on profit maximization. That means instead of considering the demand curve, the monopoly provider considers the marginal revenue (price minus marginal cost) that is gained from selling additional items, and sets the price where marginal revenue equals marginal cost. Crucially, though, the price is set according to the demand curve:
- The result of monopoly pricing is that consumer surplus is reduced and producer surplus is increased; the reason we care as a society, though, is the part in brown: that is deadweight loss. Some amount of demand that would be served by a competitive market is being ignored, which means there is no surplus of any kind being generated:2
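The textbook mechanics above can be made concrete with toy numbers. This is a sketch, not anything from the article itself: I’m assuming a linear demand curve P = a − bQ and a constant marginal cost c, with arbitrary illustrative values. Under competition the price falls to marginal cost; the monopolist instead sets marginal revenue (a − 2bQ) equal to marginal cost and reads the price off the demand curve, and the surplus that disappears in between is the deadweight loss:

```python
# Illustrative numbers only: assumed linear demand P = a - b*Q and constant
# marginal cost c. The article's graphs make the same point visually.
a, b, c = 100.0, 1.0, 20.0  # demand intercept, demand slope, marginal cost

# Competitive market: price is bid down to marginal cost
q_comp = (a - c) / b                 # quantity where demand meets MC
cs_comp = 0.5 * (a - c) * q_comp     # consumer surplus: triangle under demand
# producer surplus is zero here, since price equals marginal cost

# Monopoly: choose Q where marginal revenue (a - 2bQ) equals marginal cost,
# then set the price according to the demand curve
q_mono = (a - c) / (2 * b)
p_mono = a - b * q_mono
cs_mono = 0.5 * (a - p_mono) * q_mono   # shrunken consumer surplus
ps_mono = (p_mono - c) * q_mono         # enlarged producer surplus

# Deadweight loss: total surplus under competition minus total under monopoly
deadweight_loss = cs_comp - (cs_mono + ps_mono)

print(q_comp, q_mono, p_mono)        # 80.0 40.0 60.0
print(cs_comp, cs_mono, ps_mono)     # 3200.0 800.0 1600.0
print(deadweight_loss)               # 800.0
```

With these numbers the monopolist halves output, consumer surplus falls from 3200 to 800, producer surplus rises to 1600, and 800 units of surplus simply vanish — that is the deadweight loss society cares about.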
The problem with using this sort of analysis for Facebook should be obvious: the marginal cost for Facebook of serving an additional customer is zero! That means the graph looks like this:
So sure, Facebook may have a monopoly in social networking, and while that may be a problem for Snap or any other would-be networks, Facebook would surely argue that the lack of deadweight loss means that society as a whole shouldn’t be too bothered.
Facebook and Content Providers
The problem is that Facebook isn’t simply a social network: the service is a three-sided market — users, content providers, and advertisers — and while the basis of Facebook’s dominance is in the network effects that come from connecting all of those users, said dominance has seeped to those other sides.
Content providers are an obvious example: Facebook passed Google as the top traffic driver back in 2015, and as of last fall drove over 40% of traffic for the average news site, even after an algorithm change that reduced publisher reach.
So is that a monopoly when it comes to the content provider market? I would argue yes, thanks to the monopoly framework above.
Note that once again we are in a situation where there is not a clear price: no content provider pays Facebook to post a link (although they can obviously make said link into an advertisement). However, Facebook does, at least indirectly, make money from that content: the more users find said content engaging, the more time they will spend on Facebook, which means the more ads they will see.
This is why Facebook Instant Articles seemed like such a brilliant idea: on the one side, readers would have a better experience reading content, which would keep them on Facebook longer. On the other side, Facebook’s proposal to help publishers monetize — publishers could sell their own ads or, enticingly, Facebook could sell them for a 30% commission — would not only support the content providers that are one side of Facebook’s three-sided market, but also lock them into Facebook with revenue they couldn’t get elsewhere. The market I envisioned would have looked something like this:
However, Instant Articles haven’t turned out the way I expected: the consumer benefits are there, but Facebook has completely dropped the ball when it comes to monetizing the publishers using them. That is not to say that Facebook isn’t monetizing as a whole, thanks in part to that content, but rather that the company wasn’t motivated to share. Or, to put it another way, Facebook kept most of the surplus for itself:
In this case, it’s not that Facebook is setting a higher price to maximize their profits; rather, they are sharing less of their revenue; the outcome, though, is the same — maximized profits. Keep in mind this approach isn’t possible in competitive markets: were there truly competitors for Facebook when it came to placing content, Facebook would have to share more revenue to ensure said content was on its platform. In truth, though, Facebook is so dominant when it comes to attention that it doesn’t have to do anything for publishers at all (and, if said publishers leave Instant Articles, well, they will still place links, and the users aren’t going anywhere regardless).
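A toy model of that last point (all numbers and the function here are my own illustration, not anything Facebook or publishers have disclosed): think of platforms bidding a revenue share to win a publisher’s content. With real competitors, shares get bid up toward the content’s full value; a dominant platform only has to beat the publisher’s outside option, which is poor when the users aren’t going anywhere:

```python
# Hypothetical sketch: platforms offer a revenue share (0.0-1.0) to attract a
# publisher; the publisher takes the best offer available, including its
# outside option (monetizing the content on its own).
def winning_share(competing_offers, outside_option):
    # The winning platform only needs to match the best alternative
    return max(competing_offers + [outside_option])

# Competitive market: rival platforms bid the share up
print(winning_share([0.70, 0.85], outside_option=0.10))  # 0.85

# Dominant platform: no rivals, weak outside option, minimal share offered
print(winning_share([], outside_option=0.10))            # 0.1
```

The mechanism is crude, but it captures why Instant Articles’ stingy monetization is sustainable for Facebook: absent competition for attention, there is no force pushing the publisher’s share of the surplus upward.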
Facebook and Advertisers
There may be similar evidence — that Facebook is able to reduce supply in a way that increases price and thus profits — emerging in advertising. In a perfectly competitive market the cost of advertising would look like this:
Facebook, though, will soon be limiting quantity, or at least limiting its growth. On last November’s earnings call CFO Dave Wehner said that Facebook would stop increasing ad load in the summer of 2017 (i.e. Facebook has been increasing the number of ads relative to content in the News Feed for a long time, but would stop doing so). What was unclear — and as I noted at the time, Wehner was quite evasive in answering this — was whether or not that would cause the price per ad to rise.
There are two possible reasons for Wehner to have been evasive:
- Prices will not rise, which would be a bad sign for Facebook: it would mean that despite all of Facebook’s data, their ads are not differentiated, and that money that would have been spent on Facebook will simply be spent elsewhere
- Prices will rise, which would mean that Facebook’s ads are differentiated such that Facebook can potentially increase profits by restricting supply
To put the second possibility in graph form:
Note that Facebook has already said that revenue growth will slow because of this change; that, though, is not inconsistent with having monopoly power. Monopolists seek to maximize profit, not revenue. Alternately, it could simply be that Facebook is worried about the user experience; it will be fascinating to see how the company’s bottom line shifts with these changes.
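The two possibilities above can be sketched with toy numbers (all assumed): if Facebook’s ads are undifferentiated, the price per ad is effectively set by the broader ad market (a flat demand curve), and capping ad load just forfeits revenue; if they are differentiated, demand for them slopes downward, so restricting quantity pushes the price per ad up:

```python
# Toy numbers for the two scenarios. Suppose ad load would have grown to 60
# units but is instead capped at 45.
def price_flat(q):
    return 30.0            # undifferentiated: price fixed by the wider market

def price_sloped(q):
    return 100.0 - q       # differentiated: fewer slots -> higher price per ad

for label, price in (("undifferentiated", price_flat),
                     ("differentiated", price_sloped)):
    rev_grown = price(60) * 60     # revenue if ad load keeps growing
    rev_capped = price(45) * 45    # revenue once ad load is capped
    print(label, price(45), rev_grown, rev_capped)

# undifferentiated: price stays at 30, revenue falls from 1800 to 1350
# differentiated:   price rises from 40 to 55, revenue holds (2400 -> 2475)
```

In the differentiated case the cap barely dents revenue while raising the price per ad by nearly 40% — which is why a price rise after the ad-load freeze would be evidence of exactly the kind of pricing power Wehner was evasive about.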
Monopolies and Innovation
Still, even if Facebook does have monopoly power when it comes to content discovery and distribution and in digital advertising, is that really a problem for users? Might it even be a good thing?
Facebook board member Peter Thiel certainly thinks so. In Zero to One Thiel not only makes the obvious point that businesses that are monopolies are ideal, but says that models like the ones I used above aren’t useful because they presume a static world.
In a static world, a monopolist is just a rent collector. If you corner the market for something, you can jack up the price; others will have no choice but to buy from you…But the world we live in is dynamic: it’s possible to invent new and better things. Creative monopolists give customers more choices by adding entirely new categories of abundance to the world. Creative monopolies aren’t just good for the rest of society; they’re powerful engines for making it better.
The dynamism of new monopolies itself explains why old monopolies don’t strangle innovation. With Apple’s iOS at the forefront, the rise of mobile computing has dramatically reduced Microsoft’s decades-long operating system dominance. Before that, IBM’s hardware monopoly of the ’60s and ’70s was overtaken by Microsoft’s software monopoly. AT&T had a monopoly on telephone service for most of the 20th century, but now anyone can get a cheap cell phone plan from any number of providers. If the tendency of monopoly businesses were to hold back progress, they would be dangerous and we’d be right to oppose them. But the history of progress is a history of better monopoly businesses replacing incumbents. Monopolies drive progress because the promise of years or even decades of monopoly profits provides a powerful incentive to innovate. Then monopolies can keep innovating because profits enable them to make the long-term plans and to finance the ambitious research projects that firms locked in competition can’t dream of.
The problem is that Thiel’s examples refute his own case: decades-long monopolies like those of AT&T, IBM, and Microsoft sure seem like a bad thing to me! Sure, they were eventually toppled, but not before extracting rents and, more distressingly, stifling innovation for years. Think about Microsoft: the company spent billions of dollars on R&D and gave endless demos of futuristic tech; the most successful product that actually shipped (Kinect) ended up harming the product it was supposed to help.3
Indeed, it’s hard to think of any examples where established monopolies produced technology that wouldn’t have been produced by the free market; Thiel wrongly conflates the drive of new companies to create new monopolies with the right of old monopolies to do as they please.
That is why Facebook’s theft of not just Snapchat features but its entire vision bums me out, even if it makes good business sense. I do think leveraging the company’s network monopoly in this way hurts innovation, and the same monopoly graphs explain why. In a competitive market the return from innovation meets the demand for customers to determine how much innovation happens — and who reaps its benefits:
A monopoly, though, doesn’t need that drive to innovate — or, more accurately, doesn’t need to derive a profit from innovation, which leads to lazy spending and prioritizing tech demos over shipping products. After all, the monopoly can simply take others’ innovation and earn even more profit than they would otherwise:
This, ultimately, is why yesterday’s keynote was so disappointing. Last year, before Facebook realized it could just leverage its network to squash Snap, Mark Zuckerberg spent most of his presentation laying out a long-term vision for all the areas in which Facebook wanted to innovate. This year couldn’t have been more different: there was no vision, just the wholesale adoption of Snap’s, plus a whole bunch of tech demos that never bothered to tell a story of why they actually mattered for Facebook’s users. It will work, at least for a while, but make no mistake, Facebook is the only winner.
- If any individual firm’s marginal costs are higher, they will go out of business; if they are lower they will temporarily dominate the market until new competitors enter. Yes, this is all theoretical!
- Note: this picture is slightly off: the producer surplus should only be the part above the marginal cost curve. Please note the same mistake was made on all the monopoly graphs
- I’m referring to the fact the Xbox One had a higher price and lower specs than the PS4, thanks in large part to the bundled Kinect