The current debate over the state of America's broadband services and the future of the internet is like a 3-ring circus, or 3 different monarchists debating democracy: an ironic and tragically humorous exchange among monopolists, be they ultra-conservative capitalists, free-market libertarians, or statist liberals. Their conclusions do not provide a cogent path to solving the single biggest socio-political-economic issue of our time, due to pre-existing biases, incorrect information, or incomplete analysis. Last week I wrote about Google's conflicts and paradoxes on this issue. Over the next few weeks I'll expand on this perspective, but today I'd like to respond to a Q&A, Debunking Broadband's Biggest Myths, posted on Commercial Observer, a NYC publication that deals mostly with real estate issues and has recently begun a section called Wired City, dealing with a wide array of issues confronting "a city's" #1 infrastructure challenge. Here's my debunking of the debunker.
To put this exchange into context: the US led the digitization revolutions of voice (long-distance, touchtone, 800, etc.), data (the internet, frame relay, ATM, etc.) and wireless (10 cents, digital messaging, etc.) because of pro-competitive, open access policies in long-distance, data over dial-up, and wireless interconnect/roaming. If Roslyn Layton (pictured below) had not conveniently forgotten these facts, and if she understood both the relative and absolute impacts on price and infrastructure investment, she would answer the following questions differently:
Real Reason/Answer: Our bandwidth is 20-150x overpriced on a per-bit basis because we disconnected from Moore's and Metcalfe's laws 10 years ago, due to the Telecom Act, then special access "de"regulation, then Brand X, which shut down equal access for broadband. This rate differential shows up in the discrepancy between the rates we pay in NYC and what Google charges in KC, as well as in the difference in performance/price between 4G and wifi. It is great that Roslyn can pay $3-5 a day at Starbucks. Most people can't (and shouldn't have to) just for a cup of joe you can make at home for 10-30 cents.
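To make the per-bit math concrete, here is a small sketch using illustrative round numbers of my own (not measured rates): a typical incumbent plan versus KC-style gigabit pricing, compared per Mbps of capacity.

```python
# Illustrative sketch of the per-bit price gap described above.
# All prices and speeds are assumed round numbers, not measured data.

def price_per_mbps(monthly_price_usd, mbps):
    """Monthly price divided by advertised downstream capacity."""
    return monthly_price_usd / mbps

nyc_incumbent = price_per_mbps(80, 20)      # e.g. $80/mo for 20 Mbps
kc_google_fiber = price_per_mbps(70, 1000)  # e.g. $70/mo for 1 Gbps

multiple = nyc_incumbent / kc_google_fiber
print(f"NYC: ${nyc_incumbent:.2f}/Mbps, KC: ${kc_google_fiber:.3f}/Mbps")
print(f"Per-bit price multiple: ~{multiple:.0f}x")  # ~57x with these numbers
```

Even with these deliberately conservative assumptions the multiple lands inside the 20-150x range; steeper incumbent pricing or slower tiers push it toward the high end.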
Real Reason/Answer: Because of their vertical business models, carriers are not well positioned to generate high ROI on rapidly depreciating technology and inefficient operating expense at every layer of the "stack" across geographically or market segment constrained demand. This is the real legacy of inefficient monopoly regulation. Doing away with regulation, or deregulating the vertical monopoly, doesn’t work. Both the policy and the business model need to be approached differently. Blueprints exist from the 80s-90s that can help us restructure our inefficient service providers. Basically, any carrier that is granted a public ROW (right of way) or frequency should be held to an open access standard in layer 1. The quid pro quo is that end-points/end-users should also have equal or unhindered access to that network within (economic and aesthetic) reason. This simple regulatory fix solves 80% of the problem as network investments scale very rapidly, become pervasive, and can be depreciated quickly.
Real Reason/Answer: Quasi-monopolies exist in video for the cable companies and in coverage/standards in frequencies for the wireless companies. These scale economies derive from pre-existing monopolies or duopolies granted by, and maintained to a great degree by, the government. The only open or equal access we have left from the 1980s-90s (the drivers that got us here) is wifi (802.11), which is a shared and reusable medium with the lowest cost/bit of any technology on the planet as a result. But other generative and scalable standards were developed in the US, or with US companies, at the same time, just like the internet protocol stacks: mobile OSs, 4G LTE (based on CDMA/OFDM technology), and OpenStack/OpenFlow, and they now rule the world. It's very important to distinguish which of these are truly open and which are not.
Real Reason/Answer: The third of the population that doesn't have or use broadband abstains as much because of context and usability, whether community/ethnicity, age or income levels, as because of cost and awareness. If we had balanced settlements in the middle layers, based on transaction fees and pricing that reflect competitive marginal cost, we could have corporate and centralized buyers subsidizing access and making it freely available everywhere for everyone. Putting aside the ineffective debate between bill-and-keep and 2-sided pricing models and instead implementing balanced settlement exchange models will solve the problem of universal HD tele-work, education, health, government, etc. We learned in the 1980s-90s from 800 and internet advertising that competition can lead to free, universal access to digital "economies". This is the other 20% of the solution to the regulatory problem.
Real Reason/Answer: The real issue here is that America led the digital information revolution prior to 1913 because it was a relatively open and competitive democracy, then took the world into 70 years of monopoly dark ages, finally breaking the shackles of monopoly in 1983, and then leading the modern information revolution through the 80s-90s. The US has now fallen behind in relative and absolute terms in the lower layers due to consolidation and remonopolization. Only the vestiges of pure competition from the 80s-90s, the horizontally scaled "data" and "content" companies like Apple, Google, Twitter and Netflix (and many, many more) are pulling us along. The vertical monopolies stifle innovation and the generative economic activity we saw in those 2 decades. The economic growth numbers and fiscal deficit do not lie.
Back in 1998 I wrote, “if you want to break up the Microsoft software monopoly then break up the Baby Bell last-mile access monopoly.” Market driven broadband competition and higher-capacity digital wireless networks gave rise to the iOS and Android operating systems over the following decade which undid the Windows monopoly. The 2013 redux to that perspective is, once again, “if you want to break up the Google search monopoly then break up the cable/telco last mile monopolies.”
Google is an amazing company, promoting the digital pricing and horizontal service provider spirit more than anyone. But Google is motivated by profit and will seek to grow that profit as best it can, even if contrary to founding principles and market conditions that fueled its success (aka net neutrality or equal access). Now that Google is getting into the lower layers in the last mile they are running into paradoxes and conflicts over net neutrality/equal access and in danger of becoming just another vertical monopoly. (Milo Medin provides an explanation in the 50th minute in this video, but it is self-serving, disingenuous and avoids confronting the critical issue for networks going forward.)
Contrary to many people’s beliefs, the upper and lower layers have always been inextricably interdependent and nowhere was this more evident than with the birth of the internet out of the flat-rate dial-up networks of the mid to late 1980s (a result of dial-1 equal access). The nascent ISPs that scaled in the 1980s on layer 1-2 data bypass networks were likewise protected by Computers II-III (aka net neutrality) and benefited from competitive (WAN) transport markets.
Few realize or accept that the genesis of Web 1.0 (W1.0) was the break-up of AT&T in 1983. Officially birthed in 1990, it was an open, 1-way, store-and-forward database lookup platform on which 3 major applications/ecosystems scaled beginning in late 1994 with the advent of the browser: communications (email and messaging), commerce, and text and visual content. Even though everything was narrowband, W1.0 began the inexorable computing collapse back to the core, aka the cloud (4 posts on the computing cycle and relationship to networks). The fact that it was narrowband didn't prevent folks like Mark Cuban and Jeff Bezos from envisioning and selling a broadband future 10 years hence. Regardless, W1.0 started collapsing in 1999 as it ran smack into an analog dial-up brick wall. Google hit the bigtime that year and scaled into the early 2000s by following KISS and freemium business model principles. Ironically, Google's chief virtue was taking advantage of W1.0's primary weakness.
Web 2.0 grew out of the ashes of W1.0 in 2002-2003. W2.0 both resulted from and fueled the broadband (BB) wars starting in the late 1990s between the cable (offensive) and telephone (defensive) companies. BB penetration reached 40% in 2005, a critical tipping point for the network effect, exactly when YouTube burst on the scene. Importantly, BB (which doesn't have equal access, under the guise of "deregulation") wouldn’t have occurred without W1.0 and the above two forms of equal access in voice and data during the 1980s-90s. W2.0 and BB were mutually dependent, much like the hardware/software Wintel model. BB enabled the web to become rich-media and mostly 2-way and interactive. Rich-media driven blogging, commenting, user generated content and social media started during the W1.0 collapse and began to scale after 2005.
“The Cloud” also first entered people’s lingo during this transition. Google simultaneously acquired YouTube in the upper layers to scale its upper and lower layer presence and traffic and vertically integrated and consolidated the ad exchange market in the middle layers during 2006-2008. Prior to that, and perhaps anticipating lack of competitive markets due to "deregulation" of special access, or perhaps sensing its own potential WAN-side scale, the company secured low-cost fiber rights nationwide in the early 2000s following the CLEC/IXC bust and continued throughout the decade as it built its own layer 2-3 transport, storage, switching and processing platform. Note, the 2000s was THE decade of both vertical integration and horizontal consolidation across the board aided by these “deregulatory” political forces. (Second note, "deregulatory" should be interpreted in the most sarcastic and insidious manner.)
Web 3.0 began officially with the iPhone in 2007. The smartphone enabled 7x24 and real-time access and content generation, but it would not have scaled without wifi’s speed, as 3G wireless networks were at best late 1990s era BB speeds and didn’t become geographically ubiquitous until the late 2000s. The combination of wifi (high speeds when stationary) and 3G (connectivity when mobile) was enough though to offset any degradation to user experience. Again, few appreciate or realize that W3.0 resulted from two additional forms of equal access, namely cellular A/B interconnect from the early 1980s (extended to new digital PCS entrants in the mid 1990s) and wifi’s shared spectrum. One can argue that Steve Jobs single-handedly resurrected equal access with his AT&T agreement ensuring agnostic access for applications. Surprisingly, this latter point was not highlighted in Isaacson's excellent biography. Importantly, we would not have had the smartphone revolution were it not for Jobs' equal access efforts.
W3.0 proved that real-time, all-the-time "semi-narrowband" (given the contexts and constraints around the smartphone interface) trumped store-and-forward "broadband" on the fixed PC for 80% of people's "web" experience (connectivity and interaction were more important than speed), as PC makers only realized by the late 2000s. Hence the death of the Wintel monopoly, not by government decree, but by market forces 10 years after the first anti-trust attempts. Simultaneously, the cloud became the accepted processing model, coming full circle back to the centralized mainframe model of circa 1980, before the PC and the slow-speed telephone network led to its relative demise. This circularity further underscores not only the interplay between upper and lower layers but between edge and core in the InfoStack. Importantly, Google acquired Android in 2005, well before W3.0 began, correctly foreseeing that small screens and mobile data networks would foster applications and attendant ecosystems that would intrude on browser usage and its advertising (near) monopoly.
Web 4.0 is developing as we speak, and no one is driving it and attempting to influence it more with its WAN-side scale than Google. W4.0 will be a full-duplex, 2-way, all-the-time, high-definition, application-driven platform that knows no geographic or market segment boundaries. It will be engaging and interactive on every sensory front; not just those in our immediate presence, but everywhere (aka the internet of things). With Glass, Google is already well on its way to developing and dominating this future ecosystem. With KC Fiber, Google is illustrating how it should be priced and what speeds will be necessary. As W4.0 develops, the cloud will extend to the edge. Processing will be both centralized and distributed depending on the application and the context. There will be a constant state of flux between layers 1 and 3 (transport and switching), between upper and lower layers, between software and hardware at every boundary point, and between core and edge processing and storage. It will dramatically empower the end-user and change our society more fundamentally than what we've witnessed over the past 30 years. Unfortunately, regulators have no game plan for modeling or developing policy around W4.0.
The missing pieces for W4.0 are fiber-based and super-high-capacity wireless access networks in the lower layers, settlement exchanges in the middle layers, and cross-silo ecosystems in the upper layers. Many of these elements are developing in the market naturally: big data, hetnets, SDN, OpenFlow, open OSs like Android and Mozilla, etc. Google's strategy appears consistent and well coordinated to tackle these issues, if not far ahead of others. But its vertically integrated service provider model and stance on net neutrality in KC are in conflict with the principles that have so far led to its success.
Google is buying into the vertical monopoly mindset to preserve its profit base instead of teaching regulators and the markets about the virtues of open or equal access across every layer and boundary point (something clearly missing from Tim Wu's and Bob Atkinson's definitions of net neutrality). In the process it is impeding the development of W4.0. Governments could solve this problem by simply requiring equal access in layers 1 and 2 of any service provider with access to a public right of way or frequency, along with a quid pro quo that every user has a right to access unhindered by landlords and local governments, within economic and aesthetic reason. (The latter is a bone we can toss to all the lawyers who will be looking for new work in the process of simplifying regulations.) Google and the entire market will benefit tremendously from this approach. Who will get there first? The market (Google, or MSFT/AAPL if the latter are truly hungry, visionary and/or desperate) or the FCC? Originally hopeful, I've become less sure of the former over the past 12 months. So we may be reliant on the latter.
Intermodal competition is defined as: “provision of the same service by different technologies (i.e., a cable television company competing with a telephone company in the provision of video services).”
Intramodal competition is defined as: “competition among identical technologies in the provision of the same service (e.g., a cable television company competing with another cable television company in the offering of video services).”
Focus on 4 words: same, different, identical, same. Same appears twice.
Saying wireless represents intermodal competition to wired (fiber/coax) is like saying that books compete with magazines or radio competes with TV. Sure, the former both deliver the printed word. And the latter both pass for entertainment broadcast to us. Right?
Yet these are fundamentally different applications and business models, even if they may share common network layers and components; or in English, similarities exist across production, distribution and consumption, but the business models are all different.
Wireless Is Just Access to Wireline
So are wireless and wired really the SAME? For voice they certainly aren't. Wireless is still best-efforts. It has the advantage of being mobile and with us all the time, which is a value-add, while wired offers much, much better quality. For data the differences are more subtle. With wireless I can only consume stuff in bite sizes (email, twitter, perusing content, etc.) because of throughput and device limitations (screen, processor, memory). I certainly can't multi-task and produce content the way I can on a PC linked to a high-speed broadband connection.
That said, increasingly people are using their smartphones as hotspots or repeaters to which they connect their notebooks and tablets and can then multi-task. I do this a fair bit and it is good while I'm on the road and mobile, but certainly no substitute for a fixed wired connection/hotspot in terms of speed and latency. Furthermore, wireless carriers, by virtue of their inefficient, vertically integrated, siloed business models and the fact that wireless spectrum is both shared and reused, have implemented onerous price caps that limit total (stock) consumption even as they increase speed (flow). The latter creates a crowding-out effect: throughput degrades as more people access the same radio, which I run into a lot. I know this because my speed decreases or the 4G bars mysteriously disappear from my handset and indicate 3G instead.
Lastly, one thing I can do with the phone that I can't do with the PC is take pictures and video. So they really ARE different. And when it comes to video, there is as much comparison between the two as between a tractor trailer and a motorcycle. Both will get us there, but everything else is different.
At the end of the day, where the two are similar or related is when I say wireless is just a preferred access modality and extension of wired (both fixed and mobile) leading to the law of wireless gravity: a wireless bit will seek out fiber as quickly and cheaply as possible. And this will happen once we move to horizontal business models and service providers are incented to figure out the cheapest way to get a bit anywhere and everywhere.
Lack of Understanding Drives Bad Policy
By saying that intermodal competition exists between wireless and wired, FSF is selectively taking aspects of the production, distribution and consumption of content, information and communications and conjuring up similarities. But these are really separate pieces of the bigger-picture puzzle. I can almost cobble together a solution that is similar vis-a-vis the other, but it is still NOT the SAME for final demand!
This claiming to be one thing while being another has led to product bundling and on-net pricing, huge issues that policymakers and academics have ignored, both of which have promoted monopolies and limited competition. In the process, consumers have been left with overpriced, overstuffed, unwieldy and poorly performing solutions.
In the words of Blevins, FSF is once again providing a "vague, conflicting, and even incoherent definition of intermodal competition." 10 years ago the US seriously jumped off the competitive bandwagon after believing the nonsense that FSF continues to espouse. As a result, bandwidth pricing in the middle and last mile disconnected from Moore's and Metcalfe's laws and is now overpriced 20-150x, impeding generative ecosystems and overall economic growth.
I've written about the impacts of, and interplay between, Moore's, Metcalfe's and Zipf's laws on the supply and demand of communication services and networks. Moore's and Metcalfe's laws can combine to drive bandwidth costs down 50% annually. Others have pointed out Butters' Law, coined by a Bell Labs wizard, Gerald Butters, which arrives at a more aggressive outcome: a 50% drop every 9 months! Anyway, those are the big laws that are immutable, washing against and over vertically integrated monopolies like giant unseen tsunamis.
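A quick sketch of how those two decline rates compound over time, a 50% annual drop (Moore plus Metcalfe combined) versus Butters' 50% drop every 9 months:

```python
# Compare two cost-decline rates: a 50% halving every 12 months
# versus Butters' Law's 50% halving every 9 months.

def cost_fraction(years, halving_period_months):
    """Fraction of the original cost remaining after `years`."""
    halvings = years * 12 / halving_period_months
    return 0.5 ** halvings

for years in (5, 10):
    annual = cost_fraction(years, 12)
    butters = cost_fraction(years, 9)
    print(f"After {years} years: annual halving -> {annual:.5f}, "
          f"Butters' Law -> {butters:.6f}")
```

After a decade, annual halving leaves about 1/1,000th of the original cost per bit, and Butters' cadence roughly 1/10,000th, which is exactly why bandwidth pricing that stays flat for 10 years ends up 20-150x (or worse) out of line with the underlying technology curve.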
Then there are the smaller laws, like that of my friend Russ McGuire at Sprint, who penned: "The value of any product or service increases with its mobility." Wow, that's very Metcalfian and almost infinite in value, because the devices and associated pathways can move in 3 planes. I like that and have always believed in McGuire's Law (even before he invented it!).
Since the early 1990s, when I was one of the few, if not the only, analysts on the Street to cover both wired and wireless telecoms, I've maintained that wireless is merely access to wireline applications. While that has finally been validated with "the cloud", and business models and networks have been merging (at least at the corporate level), the majority of people still believe them to be fundamentally distinct. It shows in simple things like interfaces and the lack of interoperability across 4 screens. Thankfully all that is steadily eroding due to cloud ecosystems and the enormous fight happening in the data world between the edge and the core, and open vs closed: GOOG vs AAPL vs MSFT (and let's not forget Mozilla, the OS to rule all OSs?).
Anyone who works in or with the carriers knows wireless and wired networks are inextricably linked and always have been, in terms of backhaul transport to the cell tower. But over the past 6 years the symbiosis has become much greater because of the smartphone. 1G and 2G digital networks were all capable of providing "data" connections from 1998-2006, but it really wasn't until the iPhone happened on the scene in 2007, along with the advent of 3G networks, that things really started taking off.
The key was Steve Jobs’ demand to AT&T that smartphone applications purchased through the App Store have unfettered access to the internet, be it through:
2G, which was relatively pervasive, but slow at 50-300 kbps,
3G, which was not pervasive, but faster at 500-1500 kbps, or
Wifi (802.11g), which was pervasive in a lot of “fixed” areas like home, work or school.
The latter made a ton of sense, in particular, because data apps, unlike voice, will more likely be used when one is relatively stationary, for obvious visual, coordination and safety reasons; the exception being music. In 2007, 802.11g wifi was already 54 mbps, or 30-50x faster than 3G, even though the wifi radios on smartphones could only handle 30 mbps. It didn't matter, since most apps rarely need more than 2 mbps to perform OK. Unfortunately, below 2 mbps they provided a dismal experience, and that's why 3G had such a short shelf-life and the carriers immediately began to roll out 4G.
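The throughput argument above can be sketched in a few lines, using the rough peak speeds just listed (with wifi capped at the ~30 mbps the era's handset radios could actually handle):

```python
# Which access tiers clear the ~2 Mbps bar most apps needed
# to perform acceptably? Speeds are the rough peaks cited above.

APP_FLOOR_KBPS = 2000  # ~2 Mbps, the "perform OK" threshold

# (tier, rough peak downstream in kbps; wifi limited by the
# smartphone radio of the day, not the 54 mbps air interface)
tiers = [
    ("2G", 300),
    ("3G", 1500),
    ("wifi 802.11g", 30000),
]

for name, kbps in tiers:
    verdict = "ok" if kbps >= APP_FLOOR_KBPS else "dismal"
    print(f"{name:>12}: {kbps:>6} kbps -> {verdict}")
```

Only wifi clears the bar, which is the whole point: 2G and even peak 3G sat below the floor most apps needed, so 3G's shelf-life was short and the wifi/cellular combination carried the smartphone era until 4G arrived.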
Had Jobs not gotten his way, I think the world would be a much different place, as the platforms would not have been so generative or scaled so quickly. This is an example of what I call the Metcalfian "suck" of the application ecosystem for the carriers, and nothing exemplified it better than the iPhone and App Store in the first few years, as AT&T outpaced its rivals. But it also upset the normal order of business-first, consumer-second through the bring-your-own-device (BYOD) trend, blurring the lines between the two traditionally separate segments.
But few people to this day realize or appreciate the real impact that Steve Jobs had, namely reviving equal access, something the carriers and federal government conspired to kill, successfully, in the early 2000s. Equal access was the horse that brought us competitive voice in the early 1980s and competitive data in the early 1990s, and helped scale digital wireless networks nationwide in the late 1990s. All things we're thankful for, yet we have forgotten, or never entirely appreciated, how they came about.
Because of this "smart" or market-driven form of equal access, and in appreciation of Steve Jobs' brilliance, I am going to introduce a new law. "The Law of Wireless Gravity: a wireless bit will seek out fiber as quickly and cheaply as possible." I looked it up on Google and it doesn't exist. So now I am introducing it into the public domain under Creative Commons. Of course there will be plenty of metaphors about clouds and attraction and lightning to go along with the law.
I hope people abide by this law in all their thinking about and planning for broadband, fiber, gigabit networks, application ecosystems, devices, control layers, residential and commercial demand, etc., because it holds across all of those instances. Oh, yeah, it might also counter the confusion over and disinformation about spectrum scarcity. And it might solve the digital divide problem, and the USF problem, and the bandwidth deficit… and even the budget deficit. Ok, one step at a time.
Is IP Growing UP? Is TCPOSIP the New Protocol Stack? Will Sessions Pay For Networks?
Oracle’s purchase of Acme Packet, the leading session border controller (SBC) vendor, is a tiny seismic event in the technology and communications (ICT) landscape. Few notice the potential for much broader upheaval ahead.
SBCs, which have been around since 2000, facilitate traffic flow between different networks: IP to PSTN to IP, and IP to IP. Historically that traffic has been mostly voice, where minutes and costs count because that world has been mostly rate-based. Increasingly SBCs are being used to manage and facilitate any type of traffic "session" across an array of public and private networks, be it voice, data, or video. The reasons are manifold, including security, quality of service, cost, and new service creation; all things TCP/IP doesn't account for.
Session control is layer 5 atop TCP/IP's 4-layer stack. A couple of weeks ago I pointed out that most internet wonks and bigots deride the OSI framework and feel that the 4-layer TCP/IP protocol stack won the "war". But here is proof that, as with all wars, the victors typically subsume the best elements and qualities of the vanquished.
The single biggest hole in the internet and IP worldview is bill and keep. Bill and keep's origins derive from the fact that most of the overhead in the data networks of the 1970s and 1980s was fixed. The component costs were relatively cheap compared with the mainframe costs being shared, and the recurring transport/network costs were being arbitraged and shared by those protocols. All the players, or nodes, were known, and users connected via their mainframes. The PC and ethernet (a private networking/transmission protocol) came along and scaled much later. So why bother with expensive and unnecessary QoS, billing, mediation and security in layers 5 and 6?
Then along came the break-up of AT&T. Due to dial-1 equal access, the Baby Bells responded in the mid to late 1980s with flat-rate, expanded area (LATA) pricing plans to build a bigger moat around their Class 5 monopoly castles (just like AT&T had built 50-mile interconnect exclusion zones in the 1913 Kingsbury Commitment due to the threat of wireless bypass even back then, and just like the battles OTT providers like Netflix are having with incumbent broadband monopolies today). The nascent commercial ISPs took advantage of these flat-rate zones, invested in channel banks, got local DIDs, and the rest, as they say, is history. Staying connected all day on a single flat rate back then was perceived as "free". So the "internet" scaled from this pricing loophole (even as the ISPs received much-needed shelter from vertical integration by the monopoly Bells in Computers II-III) and further benefited from WAN competition and the commoditization of transport, which connected all the distributed router networks into seamless regional and national layer 1-2 low-cost footprints even before www, http/html and the browser hit in the early to mid 1990s. The marginal cost of "interconnecting" these layer 1-2 networks was infinitesimal at best, and therefore bill and keep, or settlement-free peering, made a lot of sense.
But Bill and Keep (B&K) has three problems:
It supports incumbents and precludes new entrants
It stifles new service creation
It precludes centralized procurement and subsidization
With Acme, Oracle can provide solutions to problems two and three, with the smartphone driving the process. Oracle has Java on 3 billion phones around the globe. Now imagine a session controller client on each device that can help with application and access management, preferential routing, billing, etc., along with guaranteed QoS and real-time performance metrics and auditing, regardless of what network the device is currently on. The same holds in reverse in terms of managing "session state" across multiple devices/screens across wired and wireless networks.
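As a thought experiment, here is a minimal sketch of one thing such an on-device session controller might do: pick an access network for each session based on its QoS floor and cost, regardless of which networks happen to be in range. Every network name and number below is invented for illustration; this is a shape, not anyone's actual product.

```python
# Toy on-device session controller: choose the cheapest available
# access network that still meets a session's latency requirement.
# All names and figures are hypothetical.

NETWORKS = [
    # (name, typical latency in ms, cost per MB in $)
    ("home_wifi", 20, 0.000),
    ("4g_carrier_a", 60, 0.010),
    ("3g_roaming", 250, 0.050),
]

def route_session(available, max_latency_ms):
    """Cheapest network meeting the session's latency floor, else None."""
    candidates = [n for n in available if n[1] <= max_latency_ms]
    return min(candidates, key=lambda n: n[2])[0] if candidates else None

on_the_road = [n for n in NETWORKS if n[0] != "home_wifi"]
print(route_session(NETWORKS, 100))     # at home -> home_wifi
print(route_session(on_the_road, 100))  # mobile -> 4g_carrier_a
print(route_session(on_the_road, 50))   # None: no network meets the floor
```

The interesting part is the last case: a controller that can say "no network meets this session's floor" is exactly the hook for the guaranteed QoS, metrics and auditing described above.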
The alternative to B&K is what I refer to as balanced settlements. In traditional telecom parlance, instead of just calling-party-pays, sessions can be both called- and calling-party-pays, and the fees are far from the regulated monopoly origination/termination tariffs. Their pricing (transaction fees) will reflect marginal costs and therefore stimulate and serve marginal demand. As a result, balanced settlements provide a way for rapid, coordinated rollout of new services and infrastructure investment across all layers and boundary points. They provide the price signals that IP does not.
Balanced settlements clear supply and demand north-south between upper (application) and lower (switching, transport and access) layers, as well as east-west from one network, application or service provider to another. Major technological shifts in the network layers like OpenFlow, software defined networks (SDN) and network function virtualization (NFV) can then develop rapidly. Balanced settlements will reside in competitive exchanges evolving out of today's telecom tandem networks, confederations of service provider APIs, and the IP world's peering fabric, driven by big data analytical engines and advertising exchanges.
Perhaps most importantly, balanced settlements enable subsidization or procurement of edge access from the core. Large companies and institutions can centrally drive and pay for high-definition telework, telemedicine, tele-education, etc. across a variety of access networks (fixed and wireless). The telcos refer to this as guaranteed quality of service leading to "internet fast lanes." Enterprises will do this to further digitize and economize their own operations and distribution reach (HD collaboration and the internet of things), just like 800, prepaid calling cards, VPNs and the internet itself did in the 1980s-90s. I call this process marrying the communications event to the commercial/economic transaction, and it results in more revenue per line or subscriber than today's edge subscription model. As well, as more companies and institutions increasingly rely on the networks, they will demand backups, insurance and redundancy, ensuring continuous investment in multiple layer 1 access networks.
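To make the balanced-settlement idea concrete, here is a deliberately toy model. Every party, rate and number is hypothetical; the point is only the shape: both sides of a session can bear transaction fees reflecting marginal cost, and a corporate buyer can pick up the whole fee, subsidizing the end-user's access the way 800 numbers subsidized calls.

```python
# Toy balanced-settlement model: split a session's transaction fee
# between calling and called parties, with the total cleared to the
# networks that carried the session. All figures are illustrative.

def settle(session_mb, rate_per_mb, originator_share=0.5):
    """Split a session's settlement between calling and called parties."""
    total = session_mb * rate_per_mb
    return {
        "calling_party_pays": total * originator_share,
        "called_party_pays": total * (1 - originator_share),
        "cleared_to_networks": total,
    }

# A sponsored HD telemedicine session: the clinic (the "called" corporate
# buyer) takes the whole fee, so the patient's access is effectively free.
print(settle(session_mb=500, rate_per_mb=0.001, originator_share=0.0))
```

Setting `originator_share` to 0.0 is the 800-number case; 1.0 is classic calling-party-pays; anything in between is a negotiated split, which is what an exchange clearing marginal-cost fees would discover.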
Along with open or shared access in layer 1 (something we should have agreed to in principle back in 1913, and again in 1934, as governments provide service providers a public right of way or frequency), balanced settlements can also be an answer to inefficient universal service subsidies. Three trends will drive this. First, efficient loading of networks and demand for ubiquitous high-definition services by mobile users will require inexpensive, uniform access everywhere, with concurrent investment in high-capacity fiber and wireless end-points. Second, urban demand will naturally pay for rural demand in the process, due to societal mobility. Finally, the high-volume, low-marginal-cost user (enterprise or institution) will amortize and pay for the low-volume, high-marginal-cost user to be part of their "economic ecosystem", thereby reducing the digital divide.
TCP/IP Won, OSI Lost. Or Did It? Clue: Both Are Horizontal
George Santayana said, “Those who cannot remember the past are condemned to repeat it.” What he didn’t add, as it might have undermined his point, is that “history gets created in one moment and gets revised the next.” That’s what I like to say. And nothing could be more true when it comes to current telecom and infomedia policy and structure. How can anyone in government, academia, capital markets or the trade learn from history and make good long-term decisions if they don’t have the facts straight?
I finished a book about the origins of the internet (ARPAnet, CSnet, NSFnet) called Where Wizards Stay Up Late: The Origins of the Internet, by Katie Hafner and Matthew Lyon, written back in 1996, before the bubble and crash of web 1.0. It has been a major read for computer geeks and has some lessons for people interested in information industry structures and business models. I cross both boundaries and was equally fascinated by the "anti-establishment" approach of the group of scientists and business developers at BBN, the DoD and academia, and by the haphazard, evolutionary approach to development that resulted in an ecosystem very similar to what the original founders envisioned in the 1950s.
The book has become something of a bible for internet fashionistas, those I refer to as upper-layer (application) people, who unfortunately have, and are provided by the book with, very little understanding of the middle and lower layers of the service provider "stack." In fact, the middle layers all but disappear as far as they are concerned. While those upper-layer fashionistas would like to simplify things and say "so and so was a founder or chief contributor of the internet," or "TCP/IP won and OSI lost," actual history and reality suggest otherwise.
Ironically, the best way to look at the evolution of the internet is via the oft-maligned 7-layer OSI reference model. It happens to be the basis for one dimension of the InfoStack analytical engine. The InfoStack relates the horizontal layers (what we call the service provisioning checklist for a complete solution) to the geographic dispersion of traffic and demand on a second axis, and to a third axis which historically covered 4 disparate networks and business models but now maps to applications and market segments. Looking at how products, solutions and business models unfold along these axes provides a much better understanding of what really happens, as 3 coordinates or vectors provide better than 90% accuracy around any given datapoint.
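The three-axis mapping described above can be sketched as a simple data structure. This is only an illustrative sketch: the axis labels below are assumptions inferred from the description, not the InfoStack's actual taxonomy.

```python
from dataclasses import dataclass

# Hypothetical axis values, inferred from the text above:
# layers (OSI horizontal), geographic dispersion, and market segment.
LAYERS = ["physical", "data-link", "network", "transport",
          "session", "presentation", "application"]  # the 7 OSI layers
GEOGRAPHIES = ["premises", "metro", "regional", "national", "global"]
SEGMENTS = ["consumer", "enterprise", "government", "wholesale"]

@dataclass(frozen=True)
class DataPoint:
    """A product, solution or business model located by 3 coordinates."""
    layer: str
    geography: str
    segment: str

    def __post_init__(self):
        # Validate each coordinate against its axis.
        assert self.layer in LAYERS
        assert self.geography in GEOGRAPHIES
        assert self.segment in SEGMENTS

# Example: a metro enterprise transport service sits low in the stack.
p = DataPoint(layer="data-link", geography="metro", segment="enterprise")
print(p)
```

The point of the structure is that any offering gets a unique position in the 3D space, so comparisons across products or business models become comparisons of coordinates.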
The book spans the time between the late 1950s and the early 1990s, but focuses principally on the late 1960s and early 1970s. Computers were enormously expensive and shared by users, mostly on a local basis because of high costs and slow connections. No mention is made of the struggle modems and hardware vendors had to get equal access to the telephone system, and PCs had yet to burst on the scene. The issues around the high-cost monopoly communications network run by AT&T are only briefly mentioned; their impact and import are lost on the reader.
The book makes no mention that by the 1980s, development of what became the internet ecosystem was really picking up steam. After vendors had struggled since the 1950s to get a foothold on the "closed" Ma Bell system, smart modems (notably the Hayes Smartmodem) burst on the scene in 1981. Modems accompanied technology developments that had been occurring with fax machines, answering machines and touchtone phones, all generative aspects of a nascent competitive voice/telecom market.
Then, in 1983, AT&T was broken up; an explosion in WAN (long-distance) competition drove pricing down, and advanced intelligent networks increased the possibility of dial-around bypass. (Incidentally, by the 1990s touchtone penetration in the US was over 90%, versus less than 20% in the rest of the world, driving not only explosive growth in 800 calling, but VPN and card calling, and, last but not least, the simple "touchtone" numeric pager, one of the precursors to our digital cellphone revolution.) The Bells responded to this potential long-distance bypass threat by seeking regulatory relief in the form of expanded calling areas and flat-rate calling to preserve their Class 5 switch monopoly.
All the while, second-line growth exploded, primarily as people connected fax machines and modems for their PCs to reach commercial ISPs (Compuserve, Prodigy, AOL, etc.). These ISPs benefited from low WAN costs (competitive transit in layer 2), inexpensive routing in layer 3 (compared with voice switches), and low-cost channel banks and DIDs in those expanded LATAs, which people could dial up flat-rate (read "free") and remain connected to all day long. The US was the only country in the world with that type of pricing model in the 1980s and early 1990s.
Another foundation of the internet ecosystem, the PC, burst from the same lab (Xerox PARC) that was run by one of the founders of the ARPAnet, Bob Taylor, who deserves as much or more credit than Bob Kahn or Vint Cerf (inventors of TCP) for the development of the internet. As well, two more technological underpinnings that scaled the internet, Ethernet and the graphical user interface later popularized by Windows, were developed at Xerox PARC. These technology threads should have been better developed in the book, given their role in the demand for and growth of the internet from the edge.
In the end, what really laid the foundation for the internet were numerous parallel efforts that developed outside the monopoly network and highly regulated information markets. These were all "generative," to quote Zittrain. (And as I said a few weeks ago, they were accidental.) These parallel streams evolved into an ecosystem onto which WWW, HTTP, HTML and Mosaic were laid--the middle and upper layers--of the 1.5-way, store-and-forward, database-lookup "internet" of the early to mid 1990s. Ironically and paradoxically, this ecosystem came together just as the Telecom Act of 1996 was being formed and passed, underscored by the fact that the term "internet" is mentioned only once in the entire Act, one of the reasons I labeled the Act "farcical" back in 1996.
But the biggest error of the book, in my opinion, is not the omission of all these efforts in parallel with the development of TCP/IP and the failure to give them due weight in the internet ecosystem, but rather its concluding with the notion that TCP/IP won and the OSI reference model lost. This was disappointing and has had a huge, negative impact on perception and policy. What the authors should have said is that a horizontally oriented, low-cost, open protocol, part of a broader and similarly horizontal ecosystem, beat out a vertically integrated, expensive, closed and siloed solution from monopoly service providers and vendors.
With a distorted view of history, it is no wonder that the list of ironic and unfortunate paradoxes in policy and market outcomes goes on and on: people don't fully understand what happened between TCP/IP and OSI and how the two are inextricably linked. Until history is viewed and understood properly, we will be doomed, in the words of Santayana, to repeat it. Or, as Karl Marx said, history repeats itself "first as tragedy, then as farce."
Last summer I attended a Bingham event at the Discovery Theatre in NYC's Times Square celebrating the Terracotta Warriors of China's first emperor, Qin Shi Huang. What struck me was how far our Asian ancestors had advanced technically, socially and intellectually beyond our western forefathers by 200 BC. Huang's reign, which included the building of major transportation and information networks, was followed by a period of nearly 1,500 years of relative peace (and stagnation) in China. It would take another 1,000 years for westerners to catch up, through periods of war, plague and socio-political upheaval. But once they passed their Asian brethren by the 15th and 16th centuries, they never looked back. Having just finished The Art of War, by Sun Tzu, I asked myself: are war and strife necessary for mankind to advance?
This question was reinforced over the holidays upon visiting the Loire Valley in France, which most people associate with beautiful Louis XIV chateaus, a rich fairy-tale medieval history, and good wines. What most people don’t realize is that the Loire was a war-torn area for the better part of 400 years as the French (Counts of Blois) and English (Counts of Anjou; precursors to the Plantagenet dynasty of England) vied for domination of a then emerging Europe. The parallels between China and France 1,000 years later couldn’t have been more poignant.
After the French finally kicked the English out in the 1400s, this once war-torn region became the center of the European renaissance and, later, the birthplace of the age of enlightenment. François I brought Leonardo from Italy for the last 3 years of the artist's life, and the French seized upon his way of thinking; he was followed a few centuries later by Voltaire and Rousseau. The French aristocracy, without wars to fight, invited them to stay in their chateaus, built on the fortifications of medieval castles, and develop their enduring principles of liberty, equality and fraternity. These in turn became foundations upon which America broadly based its constitution and structure of government, all of which in theory supports and leads to competitive markets and network neutrality, the basis of the internet.
And before I left on my trip, I bought a kindle version of Sex, Bombs and Burgers by Peter Nowak on the recommendation of an acquaintance at Bloomberg. Nowak's premise is that much of America's advancement and success over the past 50 years rests on our warrior instincts and our need to procreate and sustain life. I liked the book and recommend it to anyone, especially as I used to quip, "Web 1.0 of the 1990s was scaled by the 4 (application) horsemen: content, commerce, communication and hard/soft-core porn." But the book also provides great insights, beyond the growth of porn on the internet, into our food industry and where our current military investments might be taking us physically and biologically.
While the book meanders on occasion, my take-away, and the answer to my question above, is that war (and the struggle to survive by procreating and eating) increases the rate of technological innovation, which often results in new products, themselves often mistakes or unintended commercial consequences of their original military intent. War increases the pace of innovation out of necessity, intensity and focus. After all, our state of fear is unnaturally heightened when someone is trying to kill us, underscoring the notion that fear and greed, not love and happiness, are man's primary psychological and commercial motivators.
Most people generally believe the internet is an example of a technological innovation hatched from the militarily driven space race, which is the premise of another book I am just starting, Where Wizards Stay Up Late, by Hafner and Lyon. What most people fail to realize, including Nowak, is that the internet was an unintended consequence of the breakup of AT&T in 1983, another type of conflict, an economic war that had been waged from the 1950s through the 1970s. In that war we had General William McGowan of MCI (microwave, the M in MCI, was a technology principally scaled during WWII) battling Ma Bell alongside his ally, the DOJ. At the same time, a group of civilian scientists in the Pentagon had been developing the ARPAnet, a weapon/tool designed to get around Ma Bell's monopoly long-distance fortifications and enable low-cost computer communications across the US and globally.
The two conflicts aligned in the late 1980s as the remnants of Ma Bell, the Baby Bells, sought regulatory relief through state and federal regulators from a viciously competitive WAN/long-distance sector to preserve two arcane, piggishly profitable monopoly revenue streams: intrastate tolls and terminating access. The regulatory relief provided was to expand local calling areas (LATAs) and move to flat-rate (all-you-can-eat) pricing models. By then modems and routers, outgrowths of ARPA-related initiatives, had gotten cheap enough that the earliest ISPs could cost-effectively build and market their own layer 1-2 nationwide "data bypass" networks across 5,000 local calling areas.
These networks allowed people to dial up a free or low-cost local number and stay connected to a computer, database or server anywhere, all day long. The notions of "free" and "cheap" and the collapse of distance were born. The internet started and scaled in the US because of our partially competitive communications networks, which no one else had in 1990. It would be 10 years before the rest of the world had an unlimited flat-rate access topology like the US.
Only after these foundational (pricing and infrastructure) elements were in place did the government allow commercial nets to interconnect via the ARPAnet in 1988. This was followed by Tim Berners-Lee's WWW in 1989 (a layer 3 address simplification standard), and by HTTP and HTML in subsequent years, providing the basis for a simple-to-use mass-market browser, Mosaic, the precursor to Netscape, in 1993. The result was the internet, or Web 1.0: a 4- or 5-layer asynchronous communications stack mostly used as a store-and-forward database lookup tool.
The internet was the result of two wars fought against monopolies, the Soviet communists and the American bellheads, both of which, ironically, share(d) common principles. Participants and commentators in the current network neutrality, access/USF reform and ITU debates, including Nowak, should be aware of these conflict-driven beginnings of the internet, in particular the power and impact of price, as it would modify their positions significantly. Issues like horizontal scaling, vertical disintermediation and completeness, balanced settlement systems and open/equal access need to be better analyzed and addressed. What we find in almost every instance, on the part of every participant in these debates, are hypocritical and paradoxical positions, because people do not fully appreciate history and how they arrived at their relative and absolute positions.
A year ago it was rumored that 250 Apple employees were at CES 2012, even as the company refused to participate directly. The company could do no wrong and didn't need the industry. For the better part of 9 months that appeared to be the case, and Apple's stock outperformed the market by 55%. But a few months on, after a screen size too small for phones and too big for tablets, a mapping app too limited, and finally a buggy OS, Apple's excess performance over the market has narrowed to 10%.
Two major themes of this year's CES--mobile device screen size and extensive application ecosystems to connect just about anything--will cast further doubt on Apple's mobile dominance and lead. To us it was already in evidence last year when we talked about the singularity. But the real reason is becoming apparent to all: Apple wants to keep people siloed in its product-specific verticals. People and their applications don't want that, because the cloud lets people update, access and view information across 3 different screens and any platform. If you want Apple on one device, all your devices have to be Apple. It's a twist on the old Henry Ford maxim: "you can have any device…as long as it is Apple."
This strategy will fail further when the access portion of the phone gets disconnected from all its other components. It may take a few years, but it will make a lot of sense to just buy an inexpensive dongle or device that connects to the 4G/5G/Wifi network (metro-local, or MAN/LAN) and radiates Bluetooth, NFC and Wifi to a plethora of connected devices in the personal network (PAN). Imagine how long your "connection hub" would last if it didn't need to power a screen and a huge processor for all the different apps. There goes your device-centric business model.
And all that potential device and application/cloud supply-side innovation means that current demand is far from saturated. The most recent Cisco forecasts indicate that a third of ALL internet traffic will come from mobile devices by 2016, and that in the US, 37% of mobile access will be via Wifi. Applications that utilize and benefit from mobility and transportability will continue to grow, while overall internet access via a fixed computer will drop to 39% from 60% today.
While we believe this to be the case, the reality today is far different according to Sandvine, the broadband policy management company, which should cause the wireless carriers some concern as they look at future capacity costs. In its recent H2-2012 report, Sandvine reveals that power smartphone users already use 10x more data than average smartphone users: 317 megabytes a month versus 33. But even the former number is a far cry from the 7.3 gigabytes (roughly 20x more) that the average person uses on a fixed broadband pipe (assuming 2.3 people per fixed broadband line). Sandvine estimates that total mobile access will grow from ~1 petabyte in H2-2012 to 17 petabytes by H2-2018.
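The gaps Sandvine describes can be checked with simple arithmetic. The figures below are those quoted above; the per-line household total is derived from the stated 2.3-people-per-line assumption.

```python
# Monthly usage figures as quoted from Sandvine's H2-2012 report (MB/month).
avg_smartphone = 33
power_smartphone = 317
fixed_per_person_gb = 7.3   # average fixed-broadband usage per person (GB)
people_per_line = 2.3       # assumed persons sharing one fixed line

# Power users consume roughly 10x the average mobile user...
print(round(power_smartphone / avg_smartphone, 1))      # 9.6

# ...yet even they trail the per-person fixed-line average by ~20x.
fixed_per_person_mb = fixed_per_person_gb * 1000
print(round(fixed_per_person_mb / power_smartphone))    # 23

# Implied total usage per fixed broadband line (GB/month):
print(round(fixed_per_person_gb * people_per_line, 1))  # 16.8
```

So the "20x" in the text is conservative: the per-person fixed figure is about 23x the power mobile user's, and over 200x the average mobile user's.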
My own consumption, since moving from 3G to 4G and from a 4- to a 4.7-inch screen, is up 10x: 1-2 gigs of 4G access and 3-6 gigs of wifi access, for a total of 4-8 gigs a month. This is because I have gone from a "store and forward" mentality to a 7x24 multimedia consumption model, and I am just getting comfortable with cloud-based access and streaming. All this sounds positive for growth and investment, especially as the other 95% of mobile users evolve to these usage levels, but it will do the carriers no good if they are not strategically and competitively well positioned to handle the demand. Look for a lot of development in the lower access and transport layers, including wifi offload and fiber and high-capacity microwave backhaul.
How To Develop A Blue Ocean Strategy In A Digital Ecosystem
Back in 2002 I developed a 3-dimensional, macro/micro framework-based strategy for Multex, one of the earliest and leading online providers of financial information services. The result was that Multex sold itself to Reuters in a transaction that benefited both companies. 1+8 indeed equaled 12. What I proposed to the CEO was simple: do "this" to grow into a $500m company, or sell yourself. After 3-4 weeks of mulling it over, he took a plane to London and sold his company rather than undertake the "this".
What I didn't know at the time was that the "this" was a Blue Ocean Strategy (BOS) of creating new demand by connecting previously unconnected qualitative and quantitative information sets around the "state" of the user. For example, a portfolio manager might be focused on biotech stocks in the morning and make outbound calls to analysts to answer certain questions. Then the PM goes to a chemicals lunch and returns to focus on industrial products in the afternoon, at which point one of the biotech analysts gets back to him. Problem: the PM's mental and physical "state," or context, is gone. Multex had the ability to build a tool that could bring the PM back to his morning "state" in his electronic workplace. Result: faster and better decisions. Greater productivity, possible performance, definite value.
Sounds like a great story, except there was no BOS in 2002; it wasn't invented until 2005. But the second slide of my 60-slide strategy deck to the CEO had this quote from the authors of BOS, W. Chan Kim and Renée Mauborgne of INSEAD, the Harvard Business School of Europe:
“Strategic planning based on drawing a picture…produces strategies that instantly illustrate if they will: stand out in the marketplace, are easy to understand and communicate, and ensure that every employee shares a single visual reference point.”
So you could argue that I anticipated the BOS concept to justify my use of 3D frameworks which were meant to illustrate this entirely new playing field for Multex.
But this piece is less about the InfoStack's use in business and sports and more about the use of the 4Cs and 4Us of supply and demand as tools within the frameworks to navigate rapidly changing and evolving ecosystems, using the BOS graphs postulated by Kim/Mauborgne. The 4Cs and 4Us let someone introducing a new product, horizontal layer (exchange) or vertical market solution (service integration) figure out optimal product, marketing and pricing strategies and tactics a priori. A good example is a BOS I created for a project I am working on in the Wifi offload and HetNet (heterogeneous, self-organising access networks) area, called HotTowns (HOT). Here's a picture of it comparing 8 key supply and demand elements across fiber, 4G macro cellular and super-saturation offload in a rural community. Note that the "blue area" representing the results of the model can be enhanced on the capacity front by fiber and on the coverage front by 4G.
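As a sketch of how such a strategy canvas can be scored, consider the following. The element names and scores are hypothetical placeholders (only capacity and coverage are taken from the text above); they are not the actual HOT model's 8 elements or figures.

```python
# Hypothetical strategy-canvas scores (0-10) across eight supply/demand
# elements, in the spirit of a Kim/Mauborgne value curve. Element names
# and numbers are illustrative, not the HotTowns model's actual data.
ELEMENTS = ["capacity", "coverage", "latency", "cost/bit",
            "ease of install", "device support", "mobility", "redundancy"]

offers = {
    "fiber":       [10, 2, 9, 9, 2, 5, 1, 4],
    "4G macro":    [4, 9, 5, 3, 8, 8, 10, 6],
    "HOT offload": [8, 6, 7, 8, 7, 9, 6, 7],
}

def value_curve(name):
    """Print one offer's value curve as a simple bar per element."""
    for elem, score in zip(ELEMENTS, offers[name]):
        print(f"{elem:>15}: {'#' * score}")

# The "blue ocean" appears where the new offer scores well on elements
# where the incumbents diverge (e.g., fiber wins capacity, 4G wins
# coverage, but only the offload profile is strong on both at once).
value_curve("HOT offload")
```

Plotting the three rows as overlaid line graphs gives the familiar BOS value-curve picture; the gap between the new curve and the incumbents' curves is the "blue area" the text refers to.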
The same approach can be used to rate mobile operating systems and any other product at a boundary of the infostack or horizontal or vertical solution in the market. We'll do some of that in upcoming pieces.
I met the Godfather of New York venture capital a few weeks ago and talked about an arbitrage opportunity of a lifetime in the communications sector. I started in on the lack of competition and the resulting high prices (which I highlighted last week), brought about by bandwidth being 20-150x overpriced. He just looked at me and said, "Bandwidth issue? What bandwidth issue!" It just so happens that his current prize investment is an IPTV application. I rolled my eyes, thinking "if he only knew!" and remembering what happened to all the web 1.0 companies that ran into the broadband brick wall in 2000.
This statement is symptomatic of the complacency among the venture community, the people investing billions in the upper layers of the stack. Yet people on Main Street know otherwise, as evidenced by the Kansas City fiber video on the Fiber To The Home Council website, which indicates that 1,000 communities responded to the contest, with over 200,000 people directly involved.
The numbers tell a worse story. Because of the CLEC boom-bust of 10-15 years ago, the rescission of equal access, the failure of muni-WiFi and WiMax, and BTOP crowding out private investment, telecom venture spending has disconnected from other venture spending over the past decade. Based on overall VC spending, telecom spending should be 2-3x greater than it is. Instead it stands 70% below where it was from 1995-2005. It took a while for competition to die, but now it is official!
Venture spending for the sector, which used to average 15-20% of total VC spending, has been below 5% for the past 3 years, while all the other TMT sectors have held nearly constant with overall VC spending.
Everyone should look at these numbers with alarm and reach out to policy makers, academics, trade folks, the venture community and capital markets to make them aware of this dearth of investment resulting from the lack of competition. Now, more than ever, contrarian investors should look at the monopoly pricing and realize there are significant profits to be made at all layers of the stack.