tl;dr: both sides are wrong in the net neutrality debate. We need to look at networks and internetworking differently; otherwise the digital and wealth divides will continue and worsen. A new way of understanding networks and network theory is called equilibrism.
The term “net neutrality” is a contrivance at best and a farcical notion at worst. That’s why both sides can be seen as right and wrong in the debate. Net neutrality was invented in the early 2000s to account for the loss of equal access (which was the basis for how the internet really started) and the failure to address critical interconnection issues in the Telecom Act of 1996 across converging ecosystems of voice, video and data on wired and wireless networks.
The reality is that networks are multi-layered (horizontal) and multi-boundaried (vertical) systems. Supply and demand in this framework need to be cleared across all of these demarcation points. Sounds complex. It is complex (see here for illustration). Furthermore, an imbalance in one area exerts pressure in another. Now add to that concept of a single network an element of “inter-networking” and the complexity grows exponentially. The inability of net neutrality to be applied consistently across the framework(s) is its biggest weakness.
That's the technology and economic perspective.
Now let’s look at the socio-economic and political perspective. Networks are fundamental to everything around and within us, both physical and mental models. They explain markets, the theory of the firm, all of our laws, social interaction, even the biology and chemistry of our bodies and the physical laws that govern the universe. Networks reduce risk for the individual actors/elements of the network. And all networks exhibit the same tendencies, be they one-way or two-way, real-time or store-and-forward.
These tendencies include value that grows geometrically with the number and nature of transactions/participants and gets captured at the core and top of the framework that is the network, while costs grow more or less linearly (albeit with marginal differences) and are mostly borne at the bottom and edge. The costs can be physical (as in a telecom or cable network) or virtual (as in a social media network, where the cost is higher anxiety or loss of privacy, etc.). To be sustainable and generative*, networks need some conveyance of value from the core and top to the costs at the bottom and edge. I refer to this as equilibrism. Others call it universal service. There is a difference.
(* If we don’t have some type of equilibrism, the tendency in all networks is towards monopoly or oligopoly, which is basically what we see under neo-liberalism and in early forms of capitalism before the trust-busters and Keynesian policies.)
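The geometric-value/linear-cost asymmetry described above can be made concrete with a toy calculation. This is only an illustrative sketch: the Metcalfe-style value function (proportional to the number of possible connections) and the unit constants are my own assumptions, not measured figures.

```python
# Toy illustration: network value grows geometrically with participants
# (Metcalfe-style, ~n^2), while network cost grows roughly linearly (~n).
# value_per_link and cost_per_node are illustrative assumptions.

def network_value(n, value_per_link=1.0):
    """Metcalfe-style value: proportional to the number of possible links."""
    return value_per_link * n * (n - 1) / 2

def network_cost(n, cost_per_node=10.0):
    """Linear cost: each node (edge actor) bears roughly the same cost."""
    return cost_per_node * n

for n in (10, 100, 1000):
    v, c = network_value(n), network_cost(n)
    print(f"n={n:5d}  value={v:10.0f}  cost={c:7.0f}  value/cost={v / c:7.1f}")
```

As participation grows, the value/cost ratio widens rapidly; that widening surplus is exactly what gets captured at the core and top while costs remain at the bottom and edge.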
To understand how universal service and equilibrism differ within this “natural law of networks,” we have to throw in two other immutable, natural outcomes evident everywhere: Pareto distributions and normal (standard) distributions. The former easily shows the geometric (or outsized) value capture referred to above. The latter, the bell curve, reflects extreme differences in supply and demand at the margin. Once we factor both of these in, we find that networks can never completely tend toward full centralization or full decentralization and remain sustainable. So the result is a constant push/pull of tradeoffs horizontally in the framework (between core and edge), facilitated by tradeoffs vertically (between upper and lower layers).
For example, a switch at layer 3 offsets the cost and complexity of layers 1 and 2 (full mesh vs. star). This applies to distance and density and to how the upper layers of the stack affect the lower layers. For a given set of demand, supply can be either centralized or distributed (i.e., cloud vs. OpenFog or MEC; or centralized payment systems like Visa vs. blockchain). A lot of people making the case for fully distributed or fully centralized systems seemingly do not understand these horizontal and vertical tradeoffs.
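The mesh-versus-star tradeoff is easy to quantify: directly connecting every pair of n nodes requires n(n-1)/2 links, while a central switch reduces that to one link per node. A minimal sketch of the arithmetic:

```python
# Link counts for the two layer-1/2 topologies discussed above.

def full_mesh_links(n):
    """Direct connectivity: every pair of n nodes gets its own link."""
    return n * (n - 1) // 2

def star_links(n):
    """A central switch (star): one link per node."""
    return n

for n in (5, 50, 500):
    print(f"n={n:4d}  mesh={full_mesh_links(n):7d}  star={star_links(n):4d}")
```

The switch does not eliminate cost; it trades quadratic link cost at layers 1-2 for switching cost and complexity at layer 3, which is precisely the vertical tradeoff the paragraph describes.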
The bottom line: a network (or series of internetworks) that is fully centralized or fully decentralized is unsustainable, and a network (or internetwork) where there is no value/cost (re)balancing is also unsustainable. Much of what we are seeing in today’s centralized or monopolistic “internet,” and the resulting call for decentralization (blockchain/crypto), is evidence of these conclusions. The confusion or misunderstanding lies in the fact that the network is nothing without its components or actors, and the individual actors are nothing without the network. Which is more important? Both.
Now add back in the “inter-networking” piece and it seems that no simple answer, like net neutrality, solves how we make networks and ecosystems sustainable; especially when supply depreciates rapidly, costs are in a near constant state of (relative or absolute) decline, and demand is infinite and infinitely diverse. These parameters have always been around (we call them technology and market differentiation/specialization), but they’ve become very apparent in the last 50 years with the advent of computers and the digital networked ecosystems that dominate all aspects of our lives. Early signs abound since the first digital network (the telegraph) was invented**, but we just haven’t realized it until now, with a monopolized internet at global scale; albeit one that was intended to be free and open and is anything but. So there exists NO accepted economic theory explaining or answering the problems of monopoly information networks; problems that have been debated for well over 100 years, since Theodore Vail promised “One Policy, One System, Universal Service” and the US government officially blessed information monopolies.
(** — digital impacts arguably began with the telegraph 170 years ago, through its effect on the movement of goods and people (e.g., railroads) and on markets (e.g., the stock ticker). The velocity of information increased geometrically. Wealth and information-access divides became enormous by the late 1800s.)
"Equilibrism" may be THE answer that provides a means toward ensuring universal access in competitive digital networked ecosystems. Equilibrism holds that settlements across and between the boundaries and layers are critical, and that greater network effects occur the more interconnected the networks are down toward the bottom and edges of the ecosystems. Settlements serve two primary functions. First, they are price signals. As such they provide incentives and disincentives between actors (remember the standard distribution between all the marginal actors referred to above?). Second, they provide a mechanism for value conveyance between those who capture the value and those who bear the costs (remember the Pareto distribution above?). In the informational stack as we’ve illustrated it, settlements exist north-south (between app and infrastructure layers) and east-west (between actors/networks). But a lack of these settlements has resulted in extreme misshaping of both the Pareto optimum and the normal distribution.
We find very little academic work around settlements*** and, in particular, around the proper level for achieving sustainability and generativity. The internet is a “settlement free” model; it therefore lacks incentives/disincentives and in the process makes risk one-sided. Also, without settlements a receiving party cannot subsidize the sender (say goodbye to 800-like services, which scaled the competitive voice WAN in the 1980s and 90s and paved the way for the internet to scale). Lastly, and much more importantly than the recent concerns over security, privacy and demagoguery, the internet does not facilitate universal service.
(*** — academic work around “network effects,” on the other hand, has seen a surge over the last 40 years since the concept was derived at Bell Labs in 1974 by an economist studying the failure of the famous Picturephone of 1963. Of course a lot of this academic work is flawed (and limited) without an understanding of the role of settlements.)
Unlike the winner-takes-all model (the monopoly outcomes referred to above), equilibrism points to a new win/win model where supply and demand are cleared much more efficiently and the ecosystems are more sustainable and generative. So where universal service is seen as a “taking,” a tax on those who have, given to those who don’t, and addressing only portions of the above two curves, equilibrism is fundamentally about “giving” to those who have, albeit slightly less than to those who don’t. Simply put, equilibrism is at work when the larger actor pays a slightly higher settlement than the smaller actor, but in return the larger actor still gets a relatively larger benefit due to network effects. Think of it in gravity terms, between two masses.
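The win/win claim can be sketched numerically. To be clear, the functional forms below are hypothetical placeholders of my own, not a formula from the text: the settlement scales with the paying actor's size, and the interconnection benefit is gravity-style, growing with the product of the two actors' "masses."

```python
# Hypothetical sketch of an equilibrist settlement between two
# interconnecting networks. Both the settlement rule and the benefit
# function (and their constants) are illustrative assumptions.

def settlement(own_size, rate=0.01):
    """The larger actor pays a (slightly) larger settlement."""
    return rate * own_size

def interconnect_benefit(own_size, peer_size, k=0.002):
    """Gravity-style network effect: benefit grows with both masses."""
    return k * own_size * peer_size

big, small = 1000.0, 100.0

for name, own, peer in (("big", big, small), ("small", small, big)):
    pay = settlement(own)
    gain = interconnect_benefit(own, peer)
    print(f"{name:5s} pays {pay:6.1f}, gains {gain:6.1f}, net {gain - pay:+7.1f}")
```

Under these assumptions both actors come out well ahead of their settlement cost (win/win), while the smaller actor nets slightly more; a numerical version of "giving to those who have, albeit slightly less than to those who don’t."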
We are somehow brainwashed into thinking winner-takes-all is a natural outcome. In fact it is not. Nature is about balance and evolution; a constant state of disequilibrium striving for equilibrium. It’s not survival of the fittest; it’s survival of the most adaptable. Almost invariably, adaptation comes from the smaller actor or the new entrant into the ecosystem. And that ultimately is what drives sustainability and advancement; not unfettered winner-takes-all competition. Change should be embraced, not rejected, because it is constantly occurring in nature. That’s how we need to begin to think about all our socio-economic and political institutions. This will take time, since it cuts against what we have believed to be true since the dawn of mankind. If you don’t think so, take a refresher course in Plato’s Republic.
If we don’t change our thinking and approach, at best digital and wealth divides will continue and we’ll convulse from within. At worst, outside forces, like technology (AI), other living organisms (contagions) and matter (climate change) will work against us in favor of balance. That’s just natural.
The current debate over the state of America's broadband services and over the future of the internet is like a 3-ring circus, or like 3 different monarchists debating democracy. In other words, an ironic and tragically humorous debate between monopolists, be they ultra-conservative capitalists, free-market libertarians, or statist liberals. Their conclusions do not provide a cogent path to solving the single biggest socio-political-economic issue of our time, due to pre-existing biases, incorrect information, or incomplete analysis. Last week I wrote about Google's conflicts and paradoxes on this issue. Over the next few weeks I'll expand on this perspective, but today I'd like to respond to a Q&A, Debunking Broadband's Biggest Myths, posted on Commercial Observer, a NYC publication that deals mostly with real estate issues and has recently begun a section called Wired City, dealing with a wide array of issues confronting "a city's" #1 infrastructure challenge. Here's my debunking of the debunker.
To put this exchange into context: the US led the digitization revolutions in voice (long-distance, touchtone, 800, etc.), data (the internet, frame relay, ATM, etc.) and wireless (10-cent pricing, digital messaging, etc.) because of pro-competitive, open access policies in long-distance, data over dial-up, and wireless interconnect/roaming. Had Roslyn Layton not conveniently forgotten these facts, or if she understood both the relative and absolute impacts on price and infrastructure investment, she would answer the following questions differently:
Real Reason/Answer: our bandwidth is 20-150x overpriced on a per-bit basis because we disconnected from Moore's and Metcalfe's laws 10 years ago, due to the Telecom Act, then special access "de"regulation, then Brand X shutting down equal access for broadband. This rate differential shows up in the discrepancy between the rates we pay in NYC and what Google charges in KC, as well as in the difference in performance/price between 4G and wifi. It is great that Roslyn can pay $3-5 a day at Starbucks. Most people can't (and shouldn't have to) just for a cup of joe that you can make at home for 10-30 cents.
Real Reason/Answer: Because of their vertical business models, carriers are not well positioned to generate high ROI on rapidly depreciating technology and inefficient operating expense at every layer of the "stack" across geographically or market segment constrained demand. This is the real legacy of inefficient monopoly regulation. Doing away with regulation, or deregulating the vertical monopoly, doesn’t work. Both the policy and the business model need to be approached differently. Blueprints exist from the 80s-90s that can help us restructure our inefficient service providers. Basically, any carrier that is granted a public ROW (right of way) or frequency should be held to an open access standard in layer 1. The quid pro quo is that end-points/end-users should also have equal or unhindered access to that network within (economic and aesthetic) reason. This simple regulatory fix solves 80% of the problem as network investments scale very rapidly, become pervasive, and can be depreciated quickly.
Real Reason/Answer: Quasi-monopolies exist in video for the cable companies and in coverage/standards and frequencies for the wireless companies. These scale economies derived from pre-existing monopolies or duopolies granted by, and maintained to a great degree by, the government. The only open or equal access we have left from the 1980s-90s (the drivers that got us here) is wifi (802.11), which is a shared and reusable medium with the lowest cost/bit of any technology on the planet as a result. But other generative and scalable standards were developed in the US or by US companies at the same time, just like the internet protocol stack: mobile OSs, 4G LTE (based on CDMA/OFDM technology), and OpenStack/OpenFlow, which now rule the world. It's very important to distinguish which of these are truly open and which are not.
Real Reason/Answer: The 3rd of the population who don't have/use broadband is as much because of context and usability, whether community/ethnicity, age or income levels, as cost and awareness. If we had balanced settlements in the middle layers based on transaction fees and pricing which reflect competitive marginal cost, we could have corporate and centralized buyers subsidizing the access and making it freely available everywhere for everyone. Putting aside the ineffective debate between bill and keep and 2-sided pricing models and instead implementing balanced settlement exchange models will solve the problem of universal HD tele-work, education, health, government, etc… We learned in the 1980s-90s from 800 and internet advertising that competition can lead to free, universal access to digital "economies". This is the other 20% solution to the regulatory problem.
Real Reason/Answer: The real issue here is that America led the digital information revolution prior to 1913 because it was a relatively open and competitive democracy, then took the world into 70 years of monopoly dark ages, finally breaking the shackles of monopoly in 1983, and then leading the modern information revolution through the 80s-90s. The US has now fallen behind in relative and absolute terms in the lower layers due to consolidation and remonopolization. Only the vestiges of pure competition from the 80s-90s, the horizontally scaled "data" and "content" companies like Apple, Google, Twitter and Netflix (and many, many more) are pulling us along. The vertical monopolies stifle innovation and the generative economic activity we saw in those 2 decades. The economic growth numbers and fiscal deficit do not lie.
Back in 1998 I wrote, “if you want to break up the Microsoft software monopoly then break up the Baby Bell last-mile access monopoly.” Market driven broadband competition and higher-capacity digital wireless networks gave rise to the iOS and Android operating systems over the following decade which undid the Windows monopoly. The 2013 redux to that perspective is, once again, “if you want to break up the Google search monopoly then break up the cable/telco last mile monopolies.”
Google is an amazing company, promoting the digital pricing and horizontal service provider spirit more than anyone. But Google is motivated by profit and will seek to grow that profit as best it can, even if contrary to founding principles and market conditions that fueled its success (aka net neutrality or equal access). Now that Google is getting into the lower layers in the last mile they are running into paradoxes and conflicts over net neutrality/equal access and in danger of becoming just another vertical monopoly. (Milo Medin provides an explanation in the 50th minute in this video, but it is self-serving, disingenuous and avoids confronting the critical issue for networks going forward.)
Contrary to many people’s beliefs, the upper and lower layers have always been inextricably interdependent and nowhere was this more evident than with the birth of the internet out of the flat-rate dial-up networks of the mid to late 1980s (a result of dial-1 equal access). The nascent ISPs that scaled in the 1980s on layer 1-2 data bypass networks were likewise protected by Computers II-III (aka net neutrality) and benefited from competitive (WAN) transport markets.
Few realize or accept that the genesis of Web 1.0 (W1.0) was the break-up of AT&T in 1983. Officially birthed in 1990, it was an open, 1-way, store-and-forward database-lookup platform on which 3 major applications/ecosystems scaled beginning in late 1994 with the advent of the browser: communications (email and messaging), commerce, and text and visual content. Even though everything was narrowband, W1.0 began the inexorable computing collapse back to the core, aka the cloud (4 posts on the computing cycle and its relationship to networks). The fact that it was narrowband didn't prevent folks like Mark Cuban and Jeff Bezos from envisioning and selling a broadband future 10 years hence. Regardless, W1.0 started collapsing in 1999 as it ran smack into an analog dial-up brick wall. Google hit the big time that year and scaled into the early 2000s by following KISS and freemium business-model principles. Ironically, Google’s chief virtue was taking advantage of W1.0’s primary weakness.
Web 2.0 grew out of the ashes of W1.0 in 2002-2003. W2.0 both resulted from and fueled the broadband (BB) wars starting in the late 1990s between the cable (offensive) and telephone (defensive) companies. BB penetration reached 40% in 2005, a critical tipping point for the network effect, exactly when YouTube burst on the scene. Importantly, BB (which doesn't have equal access, under the guise of "deregulation") wouldn’t have occurred without W1.0 and the above two forms of equal access in voice and data during the 1980s-90s. W2.0 and BB were mutually dependent, much like the hardware/software Wintel model. BB enabled the web to become rich-media and mostly 2-way and interactive. Rich-media driven blogging, commenting, user generated content and social media started during the W1.0 collapse and began to scale after 2005.
“The Cloud” also first entered people’s lingo during this transition. Google simultaneously acquired YouTube in the upper layers to scale its upper and lower layer presence and traffic and vertically integrated and consolidated the ad exchange market in the middle layers during 2006-2008. Prior to that, and perhaps anticipating lack of competitive markets due to "deregulation" of special access, or perhaps sensing its own potential WAN-side scale, the company secured low-cost fiber rights nationwide in the early 2000s following the CLEC/IXC bust and continued throughout the decade as it built its own layer 2-3 transport, storage, switching and processing platform. Note, the 2000s was THE decade of both vertical integration and horizontal consolidation across the board aided by these “deregulatory” political forces. (Second note, "deregulatory" should be interpreted in the most sarcastic and insidious manner.)
Web 3.0 began officially with the iPhone in 2007. The smartphone enabled 7x24 and real-time access and content generation, but it would not have scaled without wifi’s speed, as 3G wireless networks were at best late 1990s era BB speeds and didn’t become geographically ubiquitous until the late 2000s. The combination of wifi (high speeds when stationary) and 3G (connectivity when mobile) was enough though to offset any degradation to user experience. Again, few appreciate or realize that W3.0 resulted from two additional forms of equal access, namely cellular A/B interconnect from the early 1980s (extended to new digital PCS entrants in the mid 1990s) and wifi’s shared spectrum. One can argue that Steve Jobs single-handedly resurrected equal access with his AT&T agreement ensuring agnostic access for applications. Surprisingly, this latter point was not highlighted in Isaacson's excellent biography. Importantly, we would not have had the smartphone revolution were it not for Jobs' equal access efforts.
W3.0 proved that real-time, all the time "semi-narrowband" (given the contexts and constraints around the smartphone interface) trumped store and forward "broadband" on the fixed PC for 80% of people’s “web” experience (connectivity and interaction was more important than speed), as PC makers only realized by the late 2000s. Hence the death of the Wintel monopoly, not by government decree, but by market forces 10 years after the first anti-trust attempts. Simultaneously, the cloud became the accepted processing model, coming full circle back to the centralized mainframe model circa 1980 before the PC and slow-speed telephone network led to its relative demise. This circularity further underscores not only the interplay between upper and lower layers but between edge and core in the InfoStack. Importantly, Google acquired Android in 2005, well before W3.0 began as they correctly foresaw that small-screens and mobile data networks would foster the development of applications and attendant ecosystems would intrude on browser usage and its advertising (near) monopoly.
Web 4.0 is developing as we speak and no one is driving it and attempting to influence it more with its WAN-side scale than Google. W4.0 will be a full-duplex, 2-way, all-the time, high-definition application driven platform that knows no geographic or market segment boundaries. It will be engaging and interactive on every sensory front; not just those in our immediate presence, but everywhere (aka the internet of things). With Glass, Google is already well on its way to developing and dominating this future ecosystem. With KC Fiber Google is illustrating how it should be priced and what speeds will be necessary. As W4.0 develops the cloud will extend to the edge. Processing will be both centralized and distributed depending on the application and the context. There will be a constant state of flux between layers 1 and 3 (transport and switching), between upper and lower layers, between software and hardware at every boundary point, and between core and edge processing and storage. It will dramatically empower the end-user and change our society more fundamentally than what we’ve witnessed over the past 30 years. Unfortunately, regulators have no gameplan on how to model or develop policy around W4.0.
The missing pieces for W4.0 are fiber based and super-high capacity wireless access networks in the lower layers, settlement exchanges in the middle layers, and cross-silo ecosystems in the upper layers. Many of these elements are developing in the market naturally: big data, hetnets, SDN, openflow, open OS' like Android and Mozilla, etc… Google’s strategy appears consistent and well-coordinated to tackle these issues; if not far ahead of others. But its vertically integrated service provider model and stance on net neutrality in KC is in conflict with the principles that so far have led to its success.
Google is buying into the vertical monopoly mindset to preserve its profit base instead of teaching regulators and the markets about the virtues of open or equal access across every layer and boundary point (something clearly missing from Tim Wu's and Bob Atkinson's definitions of net neutrality). In the process it is impeding the development of W4.0. Governments could solve this problem by simply conditioning any service provider with access to a public right of way or frequency to equal access in layers 1 and 2; along with a quid pro quo that every user has a right to access unhindered by landlords and local governments within economic and aesthetic reason. (The latter is a bone we can toss all the lawyers who will be looking for new work in the process of simpler regulations.) Google and the entire market will benefit tremendously by this approach. Who will get there first? The market (Google or MSFT/AAPL if the latter are truly hungry, visionary and/or desperate) or the FCC? Originally hopeful, I’ve become less sure of the former over the past 12 months. So we may be reliant on the latter.
I've written about the impacts of and interplay between Moore’s, Metcalfe’s and Zipf’s laws on the supply and demand of communication services and networks. Moore’s and Metcalfe’s laws can combine to drive bandwidth costs down 50% annually. Others have pointed out Butters' Law, named for Bell Labs wizard Gerry Butters, which arrives at a more aggressive outcome: a 50% drop every 9 months! Anyway, those are the big laws that are immutable, washing against and over vertically integrated monopolies like giant unseen tsunamis.
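The difference between those two decline rates compounds dramatically. A quick calculation, treating the combined Moore's/Metcalfe's effect as a 50% cost drop every 12 months and Butters' Law as a 50% drop every 9 months:

```python
# Compound cost decline under a fixed halving period.

def cost_after(years, halving_period_months):
    """Relative unit cost after `years`, halving every `halving_period_months`."""
    halvings = years * 12 / halving_period_months
    return 0.5 ** halvings

for years in (1, 5, 10):
    annual = cost_after(years, 12)   # 50% drop per year
    butters = cost_after(years, 9)   # 50% drop every 9 months
    print(f"{years:2d} yrs: 12-month halving {annual:.4%}, 9-month halving {butters:.4%}")
```

After a decade, annual halving leaves unit cost at about a tenth of a percent of where it started, and 9-month halving roughly 10x lower still; any pricing regime that fails to track either curve diverges from cost geometrically, which is the tsunami the paragraph describes.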
Then there are the smaller laws, like that of my friend Russ McGuire at Sprint, who penned: “The value of any product or service increases with its mobility.” Wow, that’s very Metcalfian and almost infinite in value, because the devices and associated pathways can move in 3 planes. I like that and have always believed in McGuire’s Law (even before he invented it!).
Since the early 1990s, when I was one of the few analysts on the Street (if not the only one) to cover both wired and wireless telecoms, I’ve maintained that wireless is merely access to wireline applications. While that has finally been validated with “the cloud,” and business models and networks have been merging (at least at the corporate level), the majority of people still believe them to be fundamentally distinct. It shows in simple things like interfaces and the lack of interoperability across 4 screens. Thankfully all that is steadily eroding due to cloud ecosystems and the enormous fight happening in the data world between the edge and the core, and open vs. closed: GOOG vs. AAPL vs. MSFT (and let’s not forget Mozilla, the OS to rule all OS’s?).
Anyone who works in or with the carriers knows wireless and wired networks are inextricably linked and always have been, in terms of backhaul transport to the cell tower. But over the past 6 years the symbiosis has become much greater because of the smartphone. Digital 2G networks were capable of providing “data” connections from 1998-2006, but it really wasn’t until the iPhone arrived on the scene in 2007, along with the advent of 3G networks, that things really started taking off.
The key was Steve Jobs’ demand to AT&T that smartphone applications purchased through the App Store have unfettered access to the internet, be it through:
2G, which was relatively pervasive but slow at 50-300 kbps,
3G, which was not pervasive, but faster at 500-1500 kbps, or
Wifi (802.11g), which was pervasive in a lot of “fixed” areas like home, work or school.
The latter made a ton of sense in particular because data apps, unlike voice, are more likely to be used when one is relatively stationary, for obvious visual, coordination and safety reasons; the exception being music. In 2007, 802.11g wifi was already 54 mbps, or 30-50x faster than 3G, even though the wifi radios on smartphones could only handle 30 mbps. It didn’t matter, since most apps rarely need more than 2 mbps to perform well. Below 2 mbps, unfortunately, they provided a dismal experience, and that’s why 3G had such a short shelf life and the carriers immediately began to roll out 4G.
Had Jobs not gotten his way, I think the world would be a much different place as the platforms would not have been so generative and scaled so quickly without unconstrained (or nearly ubiquitous) access. This is an example of what I call Metcalfian “suck” (network effect pull-through) of the application ecosystem for the carriers and nothing exemplified it better than the iPhone and App Store for the first few years as AT&T outpaced its rivals and the Android app ecosystem. And it also upset the normal order of business first and consumer second through the bring your own device (BYOD) trend, blurring the lines between the two traditionally separate market segments.
Few people to this day realize or appreciate the real impact that Steve Jobs had, namely reviving equal access; something the carriers and federal government conspired to kill, successfully, in the early 2000s. Equal access was the horse that brought us competitive voice in the early 1980s and competitive data in the early 1990s, and helped scale digital wireless networks nationwide in the late 1990s. All things we’re thankful for, yet have forgotten or never entirely appreciated; few even know how they came about.
Simply put, 70% of all mobile data access is over wifi, and we saw 4G networks develop 5 years faster than anyone thought possible. Importantly, not only is wifi cheaper and faster access, it is almost always tied to a broadband pipe that either is fiber or becomes fiber very quickly.
Because of this “smart,” market-driven form of equal access, and in appreciation of Steve Jobs’ brilliance, I am going to introduce a new law: the Law of Wireless Gravity, which holds that "a wireless bit will seek out fiber as quickly and cheaply as possible.” I looked it up on Google and it doesn’t exist. So now I am introducing it into the public domain under Creative Commons. Of course there will be plenty of metaphors about clouds and attraction and lightning to go along with the law. As well, there will be numerous corollaries.
I hope people abide by this law in all their thinking about and planning for broadband, fiber, gigabit networks, application ecosystems, devices, control layers, residential and commercial demand, etc., because it holds across all of those instances. Oh, yeah, it might also counter the confusion over and disinformation about spectrum scarcity. And it might solve the digital divide problem, and the USF problem, and the bandwidth deficit… and even the budget deficit. Ok, one step at a time.
TCP/IP Won, OSI Lost. Or Did It? Clue: Both Are Horizontal
George Santayana said, “Those who cannot remember the past are condemned to repeat it.” What he didn’t add, as it might have undermined his point, is that “history gets created in one moment and gets revised the next.” That’s what I like to say. And nothing could be more true when it comes to current telecom and infomedia policy and structure. How can anyone in government, academia, capital markets or the trade learn from history and make good long-term decisions if they don’t have the facts straight?
I finished a book about the origins of the internet (ARPAnet, CSNET, NSFNET) called Where Wizards Stay Up Late: The Origins of the Internet, by Katie Hafner and Matthew Lyon, written back in 1996, before the bubble and crash of Web 1.0. It’s been a major read for computer geeks and has some lessons for people interested in information industry structures and business models. I cross both boundaries and was equally fascinated by the “anti-establishment” approach of the group of scientists and business developers at BBN, the DoD and academia, and by the haphazard, evolutionary approach to development that resulted in an ecosystem very similar to what the original founders envisioned in the 1950s.
The book has become something of a bible for internet fashionistas, those I refer to as upper layer (application) fashionistas, who unfortunately have (and are provided in the book with) very little understanding of the middle and lower layers of the service provider “stack.” In fact the middle layers all but disappear as far as they are concerned. While those upper layer fashionistas would like to simplify things and say, “so and so was a founder or chief contributor of the internet,” or “TCP/IP won and OSI lost,” actual history and reality suggest otherwise.
Ironically, the best way to look at the evolution of the internet is via the oft-maligned 7-layer OSI reference model. It happens to be the basis for one dimension of the InfoStack analytical engine. The InfoStack relates the horizontal layers (what we call the service provisioning checklist for a complete solution) to geographic dispersion of traffic and demand on a 2nd axis, and to a 3rd axis which historically covered 4 disparate networks and business models but now maps to applications and market segments. Looking at how products, solutions and business models unfold along these axes provides a much better understanding of what really happens, as 3 coordinates or vectors provide better than 90% accuracy around any given datapoint.
The book spans the time between the late 1950s and the early 1990s, but focuses principally on the late 1960s and early 1970s. Computers were enormously expensive and shared by users, but mostly on a local basis because of high cost and slow connections. No mention is made of the struggle modem and hardware vendors faced getting equal access to the telephone system, and PCs had yet to burst on the scene. The issues around the high-cost monopoly communications network run by AT&T are only briefly mentioned; their impact and import lost to the reader.
The book makes no mention that by the 1980s development of what became the internet ecosystem was really picking up steam. After struggling to get a foothold on the “closed” Ma Bell system since the 1950s, smart modems burst on the scene in 1981. Modems accompanied technology developments that had been occurring with fax machines, answering machines and touchtone phones; all generative aspects of a nascent competitive voice/telecom market.
Then, in 1984, AT&T was broken up and an explosion in WAN (long-distance) competition drove pricing down, while advanced intelligent networks increased the possibility of dial-around bypass. (Incidentally, by the 1990s touchtone penetration in the US was over 90% vs less than 20% in the rest of the world, driving not only explosive growth in 800 calling, but VPN and card calling, and last but not least the simple "touchtone" numeric pager; one of the precursors to our digital cellphone revolution.) The Bells responded to this potential long-distance bypass threat by seeking regulatory relief with expanded calling areas and flat-rate calling to preserve their Class 5 switch monopoly.
All the while second line growth exploded, primarily as people connected fax machines and modems for their PCs to connect to commercial ISPs (Compuserve, Prodigy, AOL, etc...). These ISPs benefited from low WAN costs (competitive transit in layer 2), inexpensive routing (compared with voice switches) in layer 3, and low-cost channel banks and DIDs in those expanded LATAs to which people could dial up flat-rate (read "free") and remain connected all day long. The US was the only country in the world that had that type of pricing model in the 1980s and early 1990s.
Another foundation of the internet ecosystem, the PC, burst from the same lab (Xerox PARC) that was run by one of the founders of the ARPAnet, Bob Taylor, who arguably deserves equal or more credit than Bob Kahn or Vint Cerf (inventors of TCP) for development of the internet. As well, two more technological underpinnings that scaled the internet, Ethernet and the graphical user interface later popularized by Windows, were developed at Xerox PARC. These technology threads should have been better developed in the book, given their role in the demand for and growth of the internet from the edge.
In the end, what really laid the foundation for the internet were numerous efforts developed in parallel, outside the monopoly network and highly regulated information markets. These were all 'generative', to quote Zittrain. (And as I said a few weeks ago, they were accidental.) These parallel streams evolved into an ecosystem onto which WWW, HTTP, HTML and Mosaic were laid--the middle and upper layers--of the 1.5-way, store and forward, database-lookup “internet” of the early to mid 1990s. Ironically and paradoxically this ecosystem came together just as the Telecom Act of 1996 was being formed and passed; underscored by the fact that the term “internet” is mentioned only once in the entire Act, one of the reasons I labeled the Act “farcical” back in 1996.
But the biggest error of the book, in my opinion, is not the omission of all these efforts that ran in parallel with the development of TCP/IP, and the failure to give them due weight in the internet ecosystem, but rather the concluding notion that TCP/IP won and the OSI reference model lost. This was disappointing and has had a huge, negative impact on perception and policy. What the authors should have said is that a horizontally oriented, low-cost, open protocol, as part of a broader, similarly oriented horizontal ecosystem, beat out a vertically integrated, expensive, closed and siloed solution from monopoly service providers and vendors.
With such a distorted view of history, it is no wonder that policy and market outcomes have gone so far astray.
The list of ironic and unfortunate paradoxes in policy and market outcomes goes on and on because people don’t fully understand what happened between TCP/IP and OSI and how they are inextricably linked. Until history is viewed and understood properly, we will be doomed, in the words of Santayana, to repeat it. Or, as Karl Marx said, "history repeats itself, first as tragedy, then as farce."
Thursday, December 19, 2013 will mark the 100th anniversary of the Kingsbury Commitment. There are 528 days remaining. Let's plan something special to observe this tragic moment.
In return for universal service, AT&T was granted a "natural monopoly". The democratic government of the US, one of the few at the time, recognized the virtue of open communications for all and foolishly agreed to Theodore Vail's deceptions. Arguably, this one day changed the course of mankind for 50-70 years. Who knows what might have been if we had fostered low-cost communications in the first half of the century?
Anyway, when universal service didn't happen (no sh-t, sherlock) the government stepped in to mandate it in 1934. So on top of an overpriced monopoly, the American public was taxed to ensure 100% of the population got the benefit of being connected. Today, that tax amounts to $15 billion annually to support overpriced service to less than 5% of the population. (Competitive networks have shown how this number gets driven to zero!)
Finally, in the early 1980s, after nearly 30 years of trying (the final case started in 1974 and took nearly 9 years), the Department of Justice got a judge to break up the monopoly into smaller monopolies and provide "equal access" to competitors across the long-distance piece, starting and ending at the Class 5 (local switch and calling) boundary. The AT&T monopoly was dead; long live the Baby Bell monopolies! But the divestiture began a competitive long-distance (WAN) digitization "wave" in the 1980s that resulted in, amongst other things:
99% drop in pricing over 10 years
90% touchtone penetration by 1990 vs 20% ROW
Return of large volume corporate traffic via VPN services and growth of switched data intranets
Explosion of free, 800 access (nearly 50% of traffic by 1996)
Over 4 (upwards of 7 in some regions/routes) WAN fiber buildouts
Bell regulatory relief on intralata tolls via expanding calling areas (LATAs)
Introduction of flat-rate local pricing by the Bells
The latter begat the Internet, the second wave of digitization in the early 1990s. The scaling of Wintel driven by the Internet paved the way for low-cost digital cellphones, the third wave of digitization in the late 1990s. (Note, both the data and wireless waves were supported by forms of equal access). By 1999 our economy had come back to the forefront on the global scene and our budget was balanced and we were in a position to pay down our national debt. I expected the 4th and Final Wave of last mile (broadband) digitization to start sometime in the mid to late 2000s. It never came. In fact the opposite happened because of 3 discrete regulatory actions:
1996 Telecom Act
2002 Special Access Deregulation
2004 Rescission of Equal Access and Bell entry into Long Distance (WAN)
Look at the following 6 charts and try not to blink or cry. In all cases, there is no reason why prices in the US are not 50-70% lower, if not more. We have the scale. We have the usage. We have the industries. We have the technology. We started all 3 prior waves and should have oriented our vertically integrated service providers horizontally, a la the data processing industry, to deal effectively with rapid technological change. Finally, we have Moore's and Metcalfe's laws, which argue for a near 60% reduction in bandwidth pricing and/or improved performance annually!
But the government abetted a remonopolization of the sector over the past 15 years.
It's almost a tragedy to be American on this July 4 week. The FCC and the government killed competition brought about by Bill McGowan. But in 2007 Steve Jobs resurrected equal access and competition. So I guess it's great to be American after all! Many thanks to Wall and the Canadian government for these stats.
Previously we have written about “being digital” in the context of shifting business models and approaches as we move from an analog world to a digital world. Underlying this change have been 3 significant tsunami waves of digitization in the communications arena over the past 30 years, underappreciated and unnoticed by almost all until after they had crashed onto the landscape:
The WAN wave between 1983-1990 in the competitive long-distance market, continuing through the 1990s;
The Data wave, itself a direct outgrowth of the first wave, began in the late 1980s with flat-rate local dial-up connections to ISPs and databases anywhere in the world (aka the Web);
The Wireless wave, beginning in the early 1990s, was a direct outgrowth of the first two. Digital cellphones were based on the same technology as the PCs that were exploding with internet usage. Likewise, super-low-cost WAN pricing paved the way for one-rate, national pricing plans. Prices dropped from $0.50-$1.00 to less than $0.10. Back in 1996 we correctly modeled this trend before it happened.
Each wave may have looked different, but they followed the same patterns, building on each other. As unit prices dropped 99%+ over a 10-year period, unit demand exploded, resulting in 5-25% total market growth. In other words, as ARPu dropped, ARPU rose; u vs U, units vs Users. Elasticity.
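The arithmetic behind that elasticity claim can be sketched in a few lines. The 99% price decline and 5-25% annual market growth figures come from the text above; the function and the 15% midpoint are my own illustrative assumptions, not a model of any particular carrier's data.

```python
# Illustrative sketch: if unit prices fall 99% over a decade while total
# revenue still grows, unit volume (the "u" in ARPu) must explode.
# Figures are from the narrative above; the function itself is an assumption.

def implied_unit_growth(price_decline, annual_revenue_growth, years=10):
    """Multiple by which unit volume must grow so that
    revenue = price x units still rises at the stated annual rate."""
    price_multiple = 1 - price_decline              # e.g. 0.01 after a 99% drop
    revenue_multiple = (1 + annual_revenue_growth) ** years
    return revenue_multiple / price_multiple

# 99% price decline with 15% annual market growth over 10 years:
print(f"unit volume must grow ~{implied_unit_growth(0.99, 0.15):.0f}x")
```

A modest 15% annual market growth on top of a 99% price decline implies roughly a 400-fold explosion in units consumed; that is the scale of pivot the forecasters kept missing.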
Yet with each new wave, people remained unconvinced about demand elasticity. They were just incapable of pivoting from the current view and extrapolating to a whole new demand paradigm. Without fail demand exploded each time, coming from 3 broad areas: private to public shift, normal price elasticity, and application elasticity.
Private to Public Demand Capture. Monopolies are all about average costs and consumption, with little regard for the margin. As a result, they lose the high-volume customer who can develop their own private solution. This loss diminishes scale economies of those who remain on the public, shared network, raising average costs; the network effect in reverse. Introducing digitization and competition drops prices and brings back not all, but a significant number of these private users. Examples we can point to are private data and voice networks, private radio networks, private computer systems, etc… that all came back onto the public networks in the 1980s and 1990s. Incumbents can’t think marginally.
Normal Price Elasticity. As prices drop, people use more. It gets to the point where they forget how much it costs, since the relative value is so great. One thing to keep in mind is that lazy companies can rely too much on price and “all-you-can-eat” plans without regard for the real marginal price to marginal cost spread. The correct approach requires the right mix of pricing, packaging and marketing so that all customers at the margin feel they are deriving much more value than what they are paying for; thus generating the highest margins. Apple is a perfect example of this. Sprint’s famous “Dime” program was another. The failure of AYCE wireless data plans has led wireless carriers to implement arbitrary pricing caps, leading to new problems. Incumbents are lazy.
Application Elasticity. The largest and least definable component of demand is the new ways of using the lower-cost product that 3rd parties drive into the ecosystem. They are the ones that drive true usage via ease of use and better user interfaces. Arguably they ultimately account for 50% of the new demand, with the other two mechanisms at 25% each. With each wave there has always been a large crowd of value-added resellers and application developers that one can point to that more effectively ferret out new areas of demand. Incumbents move slowly.
Demand generated via these 3 mechanisms soaked up excess supply from the digital tsunamis. In each case competitive pricing was arrived at ex ante by new entrants developing new marginal cost models and iterating future supply/demand scenarios. It is this ex ante competitive guess that so confounds the rest of the market, both before and after the event. That's why few people recognize that these 3 historical waves are early warning signs for the final big one. The 4th and final wave of digitization will occur in the mid-to-last mile broadband markets. But many remain skeptical of what the "demand drivers" will be. These last mile broadband markets are monopoly/duopoly controlled and have not yet realized the per-unit price declines we’ve seen in the prior waves. Jim Crowe of Level3 recently penned a piece in Forbes that speaks to this market failure. In coming posts we will illustrate where we think bandwidth pricing is headed, as people remain unconvinced about elasticity, just as before. But hopefully the market has learned from the prior 3 waves and will understand or believe in demand forecasts if someone comes along and says last mile unit bandwidth pricing is dropping 99%. Because it will.
I first started using clouds in my presentations in 1990 to illustrate Metcalfe’s Law and how data would scale and supersede voice. John McQuillan and his Next Gen Networks (NGN) conferences were my inspiration and source. In the mid-2000s I used them to illustrate the potential for a world of unlimited demand ecosystems: commercial, consumer, social, financial, etc… Cloud computing has now become a part of everyday vernacular. The problem is that for cloud computing to expand, the world of networks needs to go flat, or horizontal, as in this complex-looking illustration to the left.
This is a static view. Add some temporality and rapidly shifting supply/demand dynamics and the debate begins as to whether the system should be centralized or decentralized. Yes and no. There are 3 main network types: hierarchical, centralized and fully distributed (aka peer to peer). None fully accommodates Metcalfe’s, Moore’s and Zipf’s laws. Network theory needs to capture the dynamic of new service/technology introduction that initially is used by a small group, but then rapidly scales to many. Processing/intelligence initially must be centralized, but then traffic and signaling volumes dictate pushing the intelligence to the edge. The illustration to the right begins to convey that lateral motion in a flat, layered architecture, driven by the 2-way, synchronous nature of traffic; albeit with the signaling and transactions moving vertically up and down.
But just as solutions begin to scale, a new service is born, superseding the original. From the outside this chaotic view looks like an organism in a constant state of expansion then collapse, expansion then collapse, etc…
A new network theory that controls and accounts for this constant state of creative destruction* is Centralized Hierarchical Networks (CHNs) CC. A search on Google and DuckDuckGo reveals no known prior attribution, so Information Velocity Partners, LLC (aka IVP Capital, LLC) both lays claim to the term and offers it up under creative commons (CC). I actually coined the CHN term in 2004 at a symposium held by Telcordia, now an Ericsson subsidiary.
CHN theory fully explains the movement from mainframe to PC to cloud. It explains the growth of switches, routers and data centers in networks over time. And it should be used as a model to explain how optical computing/storage in the core, and fiber and MIMO transmission and cognitive radios at the edge, get introduced and scaled. Mobile broadband and 7x24 access/syncing by smartphones are already beginning to reveal the pressures on a vertically integrated world and the need to evolve business models and strategies toward centralized hierarchical networking.
*--Interesting to note that "creative destruction" was originally used in far-left Marxist doctrine in the 1840s but was subsumed into and became associated with far-right Austrian School economic theory in the 1950s. Which underscores my view that often little difference lies between far-left and far-right on a continuous, circular political/economic spectrum.
Wireless service providers (WSPs) like AT&T and Verizon are battleships, not carriers. Indefatigable...and steaming their way to disaster even as the nature of combat around them changes. If over-the-top (OTT) missiles from voice and messaging application providers started fires on their superstructures, and WiFi offload torpedoes from alternative carriers and enterprises opened cracks in their hulls, then Dropbox bombs are about to score direct hits near their waterlines. The WSPs may well sink from new combatants coming out of nowhere with excellent synching and other novel end-user enablement solutions, even as pundits like Tomi Ahonen and others trumpet their glorious future. Full steam ahead.
Instead, WSP captains should shout “all engines stop” and rethink their vertical integration strategies to save their ships. A good start might be to look where smart VC money is focusing and figure out how they are outfitted at each level to defend against, or incorporate offensively, these rapidly developing new weapons. More broadly, WSPs should revisit the WinTel wars, which are eerily similar to the smartphone ecosystem battles, and see what steps IBM took to save its sinking ship in the early 1990s. One unfortunate condition might be that the fleet of battleships is now so widely disconnected that none has a chance to survive.
The bulls on Dropbox (see the pros and cons behind the story) suggest that increased reliance on cloud storage and synching will diminish reliance on any one device, operating system or network. This is the type of horizontalization we believe will continue to scale and undermine the (perceived) strength of vertical integration at every layer (upper, middle and lower). Extending the sea battle analogy, horizontalization broadens the theatre of opportunity and threat away from the ship itself; exactly what aircraft carriers did for naval warfare.
Synching will allow everyone to manage and tailor their “states”, developing greater demand opportunity; something I pointed out a couple of months ago. People’s states could be defined a couple of ways, beginning with work, family, and leisure/social across time and distance, and extending to specific communities of (economic) interest. I first started talking about the “value of state” as Chief Strategist at Multex, just as it was being sold to Reuters.
Back then I defined state as information (open applications, communication threads, etc...) resident on a decision maker’s desktop at any point in time that could be retrieved later. Say I have multiple industries that I cover and I am researching biotech in the morning and make a call to someone with a question. Hours later, after lunch meetings, I am working on chemicals when I get a call back with the answer. What’s the value of bringing me back automatically to the prior biotech state so I can better and more immediately incorporate and act on the answer? Quite large.
Fast forward nearly 10 years and people are connected 7x24, checking their wireless devices on average 150x/day. How many different states are they in during the day? 5, 10, 15, 20? The application world is just beginning to figure this out. Google, Facebook, Pinterest and others are developing data engines that facilitate “free access” to content and information paid for by centralized procurement; aka advertising. Synching across “states” will provide even greater opportunity to tailor messages and products to consumers.
Inevitably those producers (advertisers) will begin to require guaranteed QoS and availability levels to ensure a good consumer experience. Moreover, because of social media and BYOD, companies today are looking at their employees the same way they are looking at their consumers. The overall battlefield begins to resemble the 800 and VPN wars of the 1990s, when we had a vibrant competitive service provider market before its death at the hands of the 1996 Telecom Act (read this critique and another that questions the Bells' unnatural monopoly). Selling open, low-cost, widely available connectivity bandwidth into this advertising battlefield can give WSPs profit on every transaction/bullet/bit across their network. That is the new “ship of state”, taking the battle elsewhere. Some call this dumb pipes; I call it a smart strategy to survive being sunk.
Last week we revisited our seminal analysis from 1996 of the 10-cent wireless minute plan (400 minutes for C$40) introduced by Microcell of Canada and came up with the investment theme titled “The 4Cs of Wireless”. To generate sufficient ROI, wireless needed to replace wireline as a preferred access method/device (PAD). Wireless would have to satisfy minimal cost, coverage, capacity and clarity requirements to disrupt the voice market. We found:
marginal cost of a wireless minute (all-in) was 1.5-3 cents
dual-mode devices (coverage) would lead to far greater penetration
software-driven and wideband protocols would win the capacity and price wars
CDMA had the best voice clarity (QoS); pre-dating Verizon’s “Can you hear me now” campaign by 6 years
In our model we concluded (and mathematically proved) that demand elasticity would drive consumption to 800 MOUs/month and ARPUs to north of $70, from the low $40s. It all happened within 2 short years, at least as perceived by the market when wireless stocks were booming in 1998. But in 1996, the pricing was viewed as the kiss of death for the wireless industry by our competitors on the Street. BTW, Microcell, the innovator, was at a disadvantage based on our analysis: the very reason they went to the aggressive pricing model to fill the digital pipes, namely lack of coverage due to a single-mode GSM phone, ended up being their downfall. Coverage for a "mobility" product trumped price, as we see below.
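The back-of-envelope behind that conclusion ran roughly as follows. The headline price, the 1.5-3 cent marginal cost range and the MOU figures are the ones quoted above; the midpoint cost and the margin arithmetic are a simplified sketch, not the original model.

```python
# Hypothetical back-of-envelope for the 1996 "10-cent minute" thesis.
# Price, cost range and usage figures are those quoted in the text;
# the gross-margin arithmetic is a simplified illustration.

price_per_min = 0.10        # C$ headline rate (400 minutes for C$40)
marginal_cost = 0.025       # midpoint of the 1.5-3 cent all-in marginal cost
old_mou, new_mou = 80, 800  # minutes of use per month, before vs after

new_arpu = new_mou * price_per_min                      # gross revenue per sub
gross_margin = new_mou * (price_per_min - marginal_cost)

print(f"MOUs: {old_mou} -> {new_mou}")
print(f"implied ARPU ~${new_arpu:.0f}, gross margin ~${gross_margin:.0f}/sub/month")
```

At 10 cents, a 10x jump in usage more than offsets the price cut, which is why the "kiss of death" reading got the elasticity exactly backwards.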
What we didn’t realize at the time was that the 4Cs approach was broadly applicable to supply of communication services and applications in general. In the following decade, we further realized the need for a similar checklist on the demand side to understand how the supply would be soaked up and developed the 4Us of Demand in the process. We found that solutions and services progressed rapidly if they were:
easy to use
usable across an array of contexts
ubiquitous in terms of their access
universal in terms of appeal
Typically, most people refer only to the user interface (UI) or user experience (UX), but those aren't granular enough to accommodate the enormous range of demand at the margin. Look at any successful product or service introduction over the past 30 years and it scored high on all 4 demand elements. The most profitable and self-sustaining products and solutions have been those that maximized perceived utility versus marginal cost. Apple is the most recent example of this.
Putting the 4Cs and 4Us together in an iterative fashion is the best way to understand clearing of marginal supply and demand ex ante. With rapid depreciation of supply (now in seconds, minutes and days) and infinitely diverse demand in digital networked ecosystems getting this process right is critical.
Back in the 1990s I used to say the difference between wireless and wired networks was like turning on a lightswitch in a dark room filled with people. Reaction and interaction (demand) could be instantaneous on the wireless network. So it was important to build out rapidly and load the systems quickly. That made them generative and emergent, resulting in exponential demand growth. (Importantly, this ubiquity resulted from interconnection mandated by regulations from the early 1980s and extended to new digital (dual-mode) entrants in the mid 1990s.) Conversely, a wired network was like walking around with a flashlight and lighting discrete access points, providing linear growth.
The growth in adoption we are witnessing today from applications like Pinterest, Facebook and Instagram (underscored in this blogpost from Fred Wilson) is like stadium lights compared with the candlelight of the 1990s. What took 2 years is taking 2 months. You’ll find the successful applications and technologies score high on the 4Cs and 4Us checklists before they turn the lights on and join the iOS and Android parties.
You all know Monsieurs (MM.) Moore et Metcalfe. But do you know Monsieur (M.) Zipf? I made his acquaintance whilst researching infinite long tails. Why does he matter, you inquire? Because M. Zipf brings some respectability to Moore et Metcalfe, who can get a little out of control from time to time.
Monsieur Moore is an aggressive chap who doubles his strength every 18 months or so and isn’t shy about it. Monsieur Metcalfe has an insatiable appetite, and every bit he consumes increases his girth substantially. Many people have made lots of money from MM. Moore’s et Metcalfe’s antics over the past 30 years. The first we refer to generally as the silicon or processing effect, the latter as the network effect. Putting the two together should lead to annual declines of 50-60% in cost for like performance or throughput. Heady, rather piggish stuff!
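To put a number on the silicon half of that claim: a doubling of price-performance every 18 months compounds to roughly a 37% annual cost decline on its own; the gap to the 50-60% figure is what the text attributes to the network effect, which is not modeled here.

```python
# Moore's-law half of the 50-60% claim: a doubling of price-performance
# every 18 months implies cost for like performance falls by a fixed
# fraction each year. The network-effect contribution is the author's
# claim and is left out of this sketch.

months_per_doubling = 18
annual_cost_decline = 1 - 0.5 ** (12 / months_per_doubling)
print(f"Moore's law alone: ~{annual_cost_decline:.0%} cost decline per year")
```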
Monsieur Zipf, on the other hand, isn’t one for excess. He follows a rather strict regimen; one that applies universally to almost everything around us, be it man-made or natural. M. Zipf isn’t popular because he is rather unsocial. He ensures that what one person has, the next chap can only have half as much, and the next chap half that, and so on. It’s a decreasing, undemocratic principle. Or is it?
Despite his unpopularity, lack of obvious charm and people’s general ignorance of him, M. Zipf’s stature is about to rise. Why? Because of the smartphone and everyone’s desire to always be on and connected; things MM. Moore and Metcalfe wholeheartedly support.
M. Zipf is related to the family of power law distributions. Over the past 20 years, technologists have applied his law to understanding network traffic. In a time of plenty, like the past 20 years, M. Zipf has not been that important. But after 15 years of consolidation and relative underinvestment, we are seeing demand outstrip supply, and scarcity is looming. M. Zipf can help deal with that scarcity.
Capacity will only get worse as LTE (4G) devices explode on the scene in 2012, not only because of improved coverage, better handsets and improved Android (ICS), but mostly because of the iconic iPhone 5 coming this summer! Here’s the thing with 4G phones: they have bigger screens and they load stuff 5-10x faster. So what took 30 seconds now takes 3-10 seconds to load, and stuff will look 2-3x better! People will get more and want more; much to MM. Moore’s et Metcalfe’s great pleasure.
“Un moment!” cries M. Zipf. “My users already skew access quite a bit and this will just make matters worse! Today, 50% of capacity is used by 1% of my users. The next 9% use 40% and the remaining 90% of users use just 10% of capacity. With 4G the inequality can only get worse. Indignez-vous!” (the latest French outcry for equality). It turns out Zipf's law is actually democratic in that each person consumes at their marginal, not average, rate. The latter is socialism.
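M. Zipf's complaint is easy to reproduce. The sketch below assumes consumption proportional to 1/rank (the classic Zipf exponent of 1) across a hypothetical population of 10,000 users; it yields the same shape of skew as the 50/40/10 split quoted above, though matching those exact figures would need a somewhat steeper tail.

```python
# Sketch: share of network capacity consumed by the top 1%, next 9%,
# and bottom 90% of users when consumption is proportional to 1/rank.
# The population size and exponent are illustrative assumptions.

def usage_shares(n_users):
    weights = [1 / rank for rank in range(1, n_users + 1)]
    total = sum(weights)
    top1 = sum(weights[: n_users // 100]) / total
    next9 = sum(weights[n_users // 100 : n_users // 10]) / total
    return top1, next9, 1 - top1 - next9

top1, next9, rest = usage_shares(10_000)
print(f"top 1%: {top1:.0%}, next 9%: {next9:.0%}, bottom 90%: {rest:.0%}")
```

Even this mild version of the law puts roughly half of all consumption in the hands of the top 1% of users, which is why averaged, one-size-fits-all pricing misreads the network.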
Few of us will see this distribution of usage as a problem short-term, except when we’re on overloaded cellsites and out of reach of a friendly WiFi hotspot.The carriers will throw more capex at the problem and continue to price inefficiently and ineffectively. The larger problem will become apparent within 2 years when the 90% become the 10% and the carriers tell Wall Street they need to invest another $50B after 2015 just after spending $53B between 2010-2014.
Most people aware of this problem say there is a solution: more spectrum = more bandwidth to satisfy MM. Moore et Metcalfe. But they’ve never heard of M. Zipf, nor understood fully how networks are used. Our solution, extended as a courtesy by M. Zipf, is to “understand the customer” and work on “traffic offloading” at the margin. Pricing strategies, some clever code, and marketing are the tools to implement a strategy that can minimize capital outlays, rapidly amortize investment and generate positive ROI.
We’ve been thinking about this since 1996, when we first introduced our 4Cs of Wireless (cost, coverage, capacity and clarity) while analyzing, understanding and embracing 10-cent wireless pricing (introduced by French Canada's revolutionary Microcell). As a result we were 2-3 years ahead of everybody with respect to penetration, consumption and wireline substitution thinking and forecasts. Back in 1995 the best wireless prices were 50 cents per minute, and then only for buying a lot of local access; long-distance and roaming charges applied. So a corporate executive who travelled a lot would regularly rack up $2,000-3,000 monthly phone bills. The result was less than 10% penetration, 80 minutes of use per month, and ARPUs declining from $45 to $40 to $35 in analysts' models, because the marginal customers being added to the network were using the devices infrequently and, more often than not, putting them in the glove compartment in case of emergencies. Fewer than 3% of the population actually used the devices more than 1x a day.
We used to poll taxi drivers continuously about wireless and found that their average perceived price of $0.75 per minute was simply too high to justify not having to pull over and use a payphone for $0.25. So that was the magical inflection point in the elasticity curves. When Microcell introduced $0.10 late in the spring of 1996 and we polled the same set of users, they got so excited that invariably we were just able to avoid an accident. So we reasoned and modeled that more than just taxi drivers would use wireless as a primary access device. And use it a lot. This wireless/wireline substitution would result in consumption of 700-800 minutes of use per month, penetration quickly hitting 100%, and ARPUs, rather than declining, actually increasing to $70. The forecast was unbelievably bullish. And of course no one believed it in 1996, even though all those numbers were mostly reached within 5 years.
But we also recognized that wireless was a two-edged sword with respect to localized capacity and throughput, taking into account the above 3 laws. So we also created an optimal zone, or location-based, pricing and selling plan that increased ARPUs and effective yield and was vastly superior to all-you-can-eat (AYCE) and eat-what-you-want (EWYW) plans. Unfortunately, carriers didn't understand or appreciate M. Zipf, and within 2 years they were giving away night and weekend minutes for free, where they could have monetized them at 3-6 cents each. Then some carriers responded by giving away long-distance (whose marginal cost, stripped of the access component, was lower than a local minute's, but could still run 2-3 cents). Then AT&T responded with the One-Rate plan, which destroyed roaming surcharges and led to one rate everywhere; even though demand was different everywhere.
Here’s a snapshot of that analysis; it is quite simple, consistent with Zipf's law, and highly applicable today. Unfortunately, where my approach would have kept effective yield at 8 cents or higher, the competitive carriers responded by going to all-you-can-eat (AYCE) plans and the effective yield dropped to 4 cents by 2004. Had intercarrier SMS not occurred in the 2003-04 timeframe, they would have all been sunk by those pricing models, as they were in the middle of massive 2G investment programs for the coming "wireless data explosion", which actually didn't happen until 3G and smartphones in the 2008-2009 timeframe. It was still a voice and Blackberry (texting and email) world in 2007 when the iPhone hit. With ubiquitous SMS and people's preference to text instead of leaving voicemail, minutes dropped from 700 to 500, lowering carriers' costs, and they were able to generate incremental revenues on SMS pricing plans (called data) in the 2004-2007 timeframe.
All that said, the analysis and approach are even more useful today, since extreme consumption of data will tend to occur disproportionately in the fixed mode (what many refer to as offload). I'll let you come up with your own solutions. À bientôt! Oh, and look up Free in France to get an idea of where things are headed. What is it about these French? Must be something about Liberté, égalité, fraternité.
Every institution, every industry, every company has undergone or is undergoing the transformation from analog to digital. Many are failing, superseded by new entrants. No more so than in the content and media industries: music, retail, radio, newspaper and publishing. But why, especially as they’ve invested in the tools and systems to go digital? Their failure can be summed up by this simple quote: “Our retail stores are all about customer service, and (so and so) shares that commitment like no one else we’ve met,” said Apple’s CEO. “We are thrilled to have him join our team and bring his incredible retail experience to Apple.”
Think about what Apple’s CEO emphasized: “customer service.” Not selling; and yet the stores accounted for $15B of product sold in 2011! When you walk into an Apple store it is like no other retailing experience, precisely because Apple stood the retail model on its head. Apple thought digital: it sold not just 4 or 5 products (yes, that’s it) but rather 4-5 ecosystems that let the individual easily tailor their unique experience from beginning to end.
Analog does not scale. Digital does. Analog is manual. Digital is automated. Analog cannot easily be software-defined and repurposed. Digital can. Analog is expensively two-way. With digital, two-way becomes ubiquitous and synchronous. Analog is highly centralized. Digital can be easily distributed. All of this drives marginal cost down at all layers and boundary points, meaning performance/price is constantly improving even as operator/vendor margins rise.
With digital, the long tail doesn’t just become infinite, but gives way to endless new tails. The (analog) incumbent sees digital as disruptive, with per-unit price declines and same-store revenues eroding. They fail to see and benefit from relative cost declines and increased demand. The latter invariably occurs due to a shift from "private" to public consumption, normal price elasticity, and "application" elasticity as the range of producers and consumers increases. The result is overall revenue growth and margin expansion for every industry/market that has gone digital.
Digital also makes it easy for something that worked in one industry to be translated to another. Bill Taylor of Fast Company recently wrote in HBR that keeping pace with rapid change in a digital world requires having the widest scope of vision and implementing successful ideas from other fields.
The film and media industries are a case in point. As this infographic illustrates, Hollywood studios have resisted “thinking digital” for 80 years. But there is much they could learn from the transformation of other information/content monopolies over the past 30 years. This blog post from Fred Wilson sums up the issues between the incumbents and new entrants well. Hollywood would do well to listen and see what Apple did to the music industry and how fundamentally it changed it, because Apple is about to do the same to publishing and video. If not Apple, then others.
Another aspect of digital is the potential for user innovation. Digital companies should constantly be looking for innovation at the edge. This implies a focus on the “marginal,” not the average, consumer. Social media is developing tremendous tools and data architectures for this. If companies don’t utilize these advances, those same users will develop new products and markets, as can be seen in the comments field of this blog post on the financial services industry.
Digital is synonymous with flat, which drives greater scale efficiency into markets. Flat (horizontal) systems tend toward vertical completeness via ecosystems (the Apple or Android or WinTel approach). Apple IS NOT vertically integrated. It has pieced together, and controls very effectively, vertically complete solutions. In contrast, vertically integrated monopolies ultimately fail because they don’t scale efficiently at every layer. Thinking flat (horizontal) is the first step to digitization.
Data is just going nuts! Big data, little data, smartphones, clouds, application ecosystems. So why are Apple and Equinix two of only a few large-cap companies in this area with stocks up over 35% over the past 12 months, while AT&T, Verizon, Google and Sprint are market performers or worse? It has to do with pricing, revenues, margins and capex, all of which impact ROI. The former’s ROI is going up while the latter’s is flat to declining. And this is all due to the wildness of mobile data.
Data services have been revealing flaws and weaknesses in the carriers’ pricing models and networks for some time, but now the ante is being upped. Smartphones now account for almost all new phones sold, and soon they will represent over 50% of every carrier’s base, likely ending this year over 66%. That might look good except when we look at these statistics and facts:
Streaming just 2 hours of music daily off a cloud service soaks up 3.5GB per month.
Carriers still derive more than 2/3 of their revenues from voice.
Cellular wireless (just like WiFi) is shared.
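The first figure above checks out arithmetically if we assume a typical ~128 kbps music stream (the bitrate is my assumption; actual services vary):

```python
# Sanity check: 2 hours/day of music streaming at an assumed 128 kbps,
# over a 30-day month, expressed in gigabytes.

kbps = 128
seconds_per_month = 2 * 3600 * 30   # 2 hours/day for 30 days
bits = kbps * 1000 * seconds_per_month
gb = bits / 8 / 1e9                 # bits -> bytes -> GB
print(f"{gb:.2f} GB/month")         # roughly 3.5 GB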
Putting this together, you can see that a very small percentage of users consumes the bulk of the network, and that voice pricing and revenues are way out of sync with the corresponding data pricing and revenues, especially as OTT IP voice and other applications become pervasive.
Furthermore, video, which is growing in popularity, will end up using 90% of the capacity, crowding out everything else, unless carriers change pricing to reflect differences in both marginal users and marginal applications. Marginal here means high-volume/leading-edge.
So how are carriers responding? By raising data prices. This started over a year ago when they began capping those “unlimited” data plans. Now they are raising prices and doing so in wild and wacky ways; ways that we think will come back to haunt them just like wild party photos on FB. Here are just two of many examples:
This past week AT&T simplified its pricing and scored a marketing coup by offering more for more and lowering prices even as the media reported AT&T as “raising prices.” They sell you a bigger block of data at a higher initial price and then charge the same rate for additional blocks which may or may not be used. Got that?
On the other hand, that might be better than Walmart’s new unlimited data plan, which requires PhD-level math skills to understand. Let me try to explain as simply as possible. Via T-Mobile, they offer 5GB/month at 3G speed; thereafter (the unlimited part) they throttle to 2G speed. But after March 16 the numbers change to 250MB initially at 3G, then unlimited 2G speeds after that. Beware the Ides of March’s consumer backlash!
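The block-pricing arithmetic behind these plans is worth making explicit. The prices and block sizes below are invented for illustration (not AT&T's or Walmart's actual rates); the point is that whole-block billing makes light users pay a much higher effective $/GB than the headline rate suggests:

```python
# Hypothetical block pricing: you are charged for whole blocks of data,
# whether or not you use up the last one. Prices are illustrative only.
import math

def block_plan_cost(used_gb, block_gb=3.0, block_price=30.0):
    """Monthly cost when data is billed in fixed whole blocks."""
    blocks = max(1, math.ceil(used_gb / block_gb))
    return blocks * block_price

for used in (0.5, 3.0, 4.0, 9.0):
    cost = block_plan_cost(used)
    print(f"{used:.1f} GB used -> ${cost:.0f} (${cost / used:.2f}/GB effective)")
```

Using 0.5GB of a 3GB block costs the same $30 as using all 3GB, so the effective rate swings from $60/GB down to $10/GB depending on consumption; crossing a block boundary by even a few megabytes buys a whole new block.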
Unless the carriers and their channels start coming up with realistic offload solutions, like France’s Free, and pricing that better matches underlying consumption, they will continue to generate lower or negative ROI. They need to get control of wild data. If they do not, the markets and customers will. With smartphones (like Apple’s, which by the way drove WiFi as a feature knowing that AT&T’s network was subpar) and cloud-based solutions (hosted by Equinix), it is becoming easier for companies like Republic Wireless to virtually bypass the expensive carrier plans using their very own networks. AT&T, VZ and Sprint will continue to be market performers at best.
Counter-intuitive thinking often leads to success. That’s why we practice and practice, so that at a critical moment we are not governed by intuition (chance) or emotion (fear). There is no better example of this than in skiing, an apt metaphor this time of year. Few self-locomoted sports provide such high risk-reward, requiring mental, physical and emotional control. To master skiing one has to master a) the fear of staying square (looking/pointing) downhill, b) keeping one’s center over (or keeping forward on) the skis, and c) keeping a majority of pressure on the downhill (or danger-zone) ski/edge. Master these 3 things and you will become a marvelous skier. Unfortunately, all 3 run counter to our intuitions, which pull us toward the safety of the woods at the side of the trail, leaning back, and climbing back uphill. Overcoming any one of them is tough.
What got me thinking about all this was a Vint Cerf (one of the godfathers of the Internet) Op-Ed in the NYT this morning which a) references major internet access policy reports and decisions, b) mildly supports the notion of the Internet as a civil not human right, and c) trumpets