SpectralShifts Blog 
Monday, 17 March 2014

A New Visionary In Our Midst?

The US has lacked a telecom network visionary for nearly two decades.  There have certainly been strong and capable leaders, such as John Malone, who not only predicted but brought about the 500-channel LinearTV model.  But there hasn’t been someone like Bill McGowan, who broke up AT&T, or Craig McCaw, who first had the vision to build a national, seamless wireless network, countering decades of provincial, balkanized thinking.  Both of them fundamentally changed the thinking around public service provider networks.

But with a strong message to the markets in Washington DC on March 11 from Masayoshi Son, Sprint’s Chairman, the 20-year wait may finally be over.  Son did what few have been capable of doing over the past 15-20 years since McGowan exited stage left and McCaw sold out to MaBell: telling it like it is.  The fact is that today’s bandwidth prices are 20-150x higher than they should be with current technology.

This is no one’s fault in particular and in fact to most people (even informed ones) all measures of performance-to-price compared to 10 or 20 years ago look great.  But, as Son illustrated, things could be much, much better.  And he’s willing to make a bet on getting the US, the most advanced and heterogeneous society, back to a leadership role with respect to the ubiquity and cost of bandwidth.  To get there he needs more scale and one avenue is to merge with T-Mobile.

There have been a lot of naysayers as to the possibility of a Sprint-T-Mo hookup, including leaders at the FCC.  But don’t count me as one; it needs to happen.  Initially skeptical when the rumors first surfaced in December, I quickly reasoned that a merger would be the best outcome for the incentive auctions.  A merger would eliminate spectrum caps as a deterrent to active bidding and maximize total proceeds.  It would also have a better chance of developing a credible third competitor with equal geographic reach. Then in January the FCC and DoJ came out in opposition to the merger.

In February, though, Comcast announced the much-rumored merger with TW and Son jumped on the opportunity to take his case for merging to a broader stage.  He did so in front of a packed room of 300 communications pundits, press and politicos at the US Chamber of Commerce’s prestigious Hall of Flags; a poignant backdrop for his own rags-to-riches story.  Son’s frank honesty about the state of broadband for the American public vs the rest of the world, as well as about Sprint’s own miserable current performance, was impressive.  It’s a story that resonates with my America’s Bandwidth Deficit presentation.

Here are some reasons the merger will likely pass:
  • The FCC can’t approve one horizontal merger (Comcast/TW) that brings much greater media concentration and control over content distribution, while disallowing a merger of two small players (really irritants as far as AT&T and Verizon are concerned).
  • Son has a solid track record of disruption and doing what he says.
  • The technology and economics are in his favor.
  • The vertically integrated service provider model will get disrupted sooner as Sprint will have to think outside the box, partner, and develop ecosystems that few in the telecom industry have thought about before; or, if they have, they’ve been constrained by institutional inertia and hidebound by legacy regulatory and industry siloes.

Here are some reasons why it might not go through:

  • The system is fundamentally corrupt.  But the new FCC Chairman is cast from a different mold than his predecessors and is looking to make his mark on history.
  • The FCC shoots itself in the foot over the auctions.  Given all the issues and sensitivities around incentive auctions the FCC wants this first one to succeed as it will serve as a model for all future spectrum refarming issues. 
  • The FCC and/or DoJ find in the public interest that the merger reduces competition.  But any analyst can see that T-Mo and Sprint do not have sustainable models at present on their own; especially when all the talk recently in Barcelona was already about 5G.

Personally I want Son’s vision to succeed because it’s the vision I had in 1997, when I originally brought the 2.5-2.6 GHz (MMDS) spectrum to Sprint, and later in 2001 and 2005, when I introduced Telcordia’s 8x8 MIMO solutions to their engineers.  Unfortunately, past management regimes at Sprint were incapable of understanding the strategies and future vision that went along with those investment and technology pitches.  Son has a different perspective (see in particular minute 10 of this interview with Walt Mossberg), with his enormous range of investments and clear understanding of price elasticity and the marginal cost of minutes and bits.

To be successful Sprint’s strategy will need to be focused, but at the same time open and sharing in order to simultaneously scale solutions across the three major layers of the informational stack (aka the InfoStack):

  • upper (application and content)
  • middle (control)
  • lower (access and transport)

This is the challenge for any company that attempts to disrupt the vertically integrated telecom or LinearTV markets; the antiquated and overpriced ones Son says he is going after in his presentation.  But the US market is much larger and more robust than the rest of the world, not just geographically, but also from a 360-degree competitive perspective where supply and demand are constantly changing and shifting.

Ultimate success may well rest in the control layer, where Apple and Google have already built up formidable operating systems which control vastly profitable settlement systems across multiple networks.  What few realize is that the current IP stack does not provide price signals and settlement systems that clear supply and demand between upper and lower layers (north-south) or between networks (east-west) in the newly converged “informational” stack of 1- and 2-way content and communications.

If Sprint’s Chairman realizes this and succeeds in disrupting those two markets with his strategy then he certainly will be seen as a visionary on par with McGowan and McCaw.

Posted by: Michael Elling AT 09:58 am   |  Permalink   |  0 Comments  |  Email
Wednesday, 07 August 2013

Debunking The Debunkers

The current debate over the state of America's broadband services and over the future of the internet is like a 3-ring circus, or 3 different monarchists debating democracy.  In other words, an ironic and tragically humorous debate among monopolists, be they ultra-conservative capitalists, free-market libertarians, or statist liberals.  Their conclusions do not provide a cogent path to solving the single biggest socio-political-economic issue of our time due to pre-existing biases, incorrect information, or incomplete/wanting analysis.  Last week I wrote about Google's conflicts and paradoxes on this issue.  Over the next few weeks I'll expand on this perspective, but today I'd like to respond to a Q&A, Debunking Broadband's Biggest Myths, posted on Commercial Observer, a NYC publication that deals mostly with real estate issues and has recently begun a section called Wired City, dealing with a wide array of issues confronting "a city's" #1 infrastructure challenge.  Here's my debunking of the debunker.

To put this exchange into context, the US led the digitization revolutions of voice (long-distance, touchtone, 800, etc.), data (the internet, frame-relay, ATM, etc.) and wireless (10 cents, digital messaging, etc.) because of pro-competitive, open access policies in long-distance, data over dial-up, and wireless interconnect/roaming.  If Roslyn Layton (pictured below) had not conveniently forgotten these facts, or if she understood both the relative and absolute impacts on price and infrastructure investment, she would answer the following questions differently:

Real Reason/Answer:  our bandwidth is 20-150x overpriced on a per-bit basis because we disconnected from Moore's and Metcalfe's laws 10 years ago, due to the Telecom Act, then special access "de"regulation, then Brand X, or shutting down equal access for broadband.  This rate differential is shown in the discrepancy between the rates we pay in NYC and what Google charges in KC, as well as the difference in performance/price of 4G and wifi.  It is great that Roslyn can pay $3-5 a day for Starbucks.  Most people can't (and shouldn't have to) just for a cup of Joe that you can make at home for 10-30 cents.
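The 20-150x range is roughly what compounding alone predicts.  As a back-of-envelope sketch (my own illustrative assumptions, not an actual cost model from the post): if cost per bit keeps halving every 18-24 months per Moore's law while the retail price stays flat, a decade of "disconnect" opens a 32-100x gap:

```python
# Back-of-envelope sketch: how far a flat price drifts from a Moore's-law
# cost curve over a decade. Assumptions (illustrative only): cost per bit
# halves every 18-24 months; the retail price stops tracking it entirely.

def overpricing_multiple(years: float, halving_period_years: float) -> float:
    """Ratio of a flat price to a cost that halves every halving_period."""
    return 2.0 ** (years / halving_period_years)

# Over a 10-year disconnect:
slow = overpricing_multiple(10, 2.0)   # conservative: halving every 24 months
fast = overpricing_multiple(10, 1.5)   # aggressive: halving every 18 months

print(f"{slow:.0f}x to {fast:.0f}x")   # prints 32x to 102x
```

A slightly faster halving period or a longer disconnect pushes the multiple toward the top of the post's 20-150x range; a slower one toward the bottom.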

Real Reason/Answer:  Because of their vertical business models, carriers are not well positioned to generate high ROI on rapidly depreciating technology and inefficient operating expense at every layer of the "stack" across geographically or market segment constrained demand.  This is the real legacy of inefficient monopoly regulation.  Doing away with regulation, or deregulating the vertical monopoly, doesn’t work.  Both the policy and the business model need to be approached differently.  Blueprints exist from the 80s-90s that can help us restructure our inefficient service providers.  Basically, any carrier that is granted a public ROW (right of way) or frequency should be held to an open access standard in layer 1.  The quid pro quo is that end-points/end-users should also have equal or unhindered access to that network within (economic and aesthetic) reason.  This simple regulatory fix solves 80% of the problem as network investments scale very rapidly, become pervasive, and can be depreciated quickly.

Real Reason/Answer:  Quasi-monopolies exist in video for the cable companies and in coverage/standards in frequencies for the wireless companies.  These scale economies derived from pre-existing monopolies or duopolies granted by, and maintained to a great degree by, the government.  The only open or equal access we have left from the 1980s-90s (the drivers that got us here) is wifi (802.11), which is a shared and reusable medium with the lowest cost/bit of any technology on the planet as a result.  But other generative and scalable standards were developed in the US or by US companies at the same time, just like the internet protocol stack, including mobile OSs, 4G LTE (based on CDMA/OFDM technology), and OpenStack/OpenFlow, which now rule the world.  It's very important to distinguish which of these are truly open or not.

Real Reason/Answer:  The third of the population that doesn't have or use broadband is as much a matter of context and usability, whether community/ethnicity, age or income levels, as of cost and awareness.  If we had balanced settlements in the middle layers, based on transaction fees and pricing that reflect competitive marginal cost, we could have corporate and centralized buyers subsidizing the access and making it freely available everywhere for everyone.  Putting aside the ineffective debate between bill-and-keep and 2-sided pricing models and instead implementing balanced settlement exchange models will solve the problem of universal HD tele-work, education, health, government, etc…  We learned in the 1980s-90s from 800 calling and internet advertising that competition can lead to free, universal access to digital "economies".  This is the other 20% solution to the regulatory problem.
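The 800-number analogy can be made concrete with a toy settlement sketch (all names and prices below are hypothetical, for illustration only, not an actual exchange design): the terminating party, a corporate or centralized buyer, clears the access charge, the way an 800 call reversed the charges, so the end user connects for free:

```python
# Toy "balanced settlement" sketch: an 800-number-style model in which the
# called party (a corporate/centralized buyer) pays the access charge on the
# user's behalf. All service names and prices are hypothetical.

from dataclasses import dataclass

@dataclass
class Session:
    user: str
    service: str        # terminating party, e.g. a tele-health provider
    megabytes: float

# Hypothetical competitive marginal-cost access price, paid by the service.
PRICE_PER_MB = 0.001    # $/MB

def settle(sessions: list[Session]) -> dict[str, float]:
    """Return what each *service* owes the access network; users owe nothing."""
    owed: dict[str, float] = {}
    for s in sessions:
        owed[s.service] = owed.get(s.service, 0.0) + s.megabytes * PRICE_PER_MB
    return owed

bills = settle([
    Session("alice", "tele-health", 500),
    Session("bob", "tele-health", 250),
    Session("alice", "e-gov", 100),
])
print(bills)  # {'tele-health': 0.75, 'e-gov': 0.1}
```

The design point is simply that the settlement flows north-south between layers rather than being billed flat to the end user, which is what makes "free" universal access economically coherent.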

Real Reason/Answer:  The real issue here is that America led the digital information revolution prior to 1913 because it was a relatively open and competitive democracy, then took the world into 70 years of monopoly dark ages, finally breaking the shackles of monopoly in 1983, and then leading the modern information revolution through the 80s-90s.  The US has now fallen behind in relative and absolute terms in the lower layers due to consolidation and remonopolization.  Only the vestiges of pure competition from the 80s-90s, the horizontally scaled "data" and "content" companies like Apple, Google, Twitter and Netflix (and many, many more) are pulling us along.  The vertical monopolies stifle innovation and the generative economic activity we saw in those 2 decades.  The economic growth numbers and fiscal deficit do not lie.

Posted by: Michael Elling AT 08:02 am   |  Permalink   |  0 Comments  |  Email
Wednesday, 31 July 2013

Is Google Trying to Block Web 4.0?

Back in 1998 I wrote, “if you want to break up the Microsoft software monopoly then break up the Baby Bell last-mile access monopoly.”  Market driven broadband competition and higher-capacity digital wireless networks gave rise to the iOS and Android operating systems over the following decade which undid the Windows monopoly.  The 2013 redux to that perspective is, once again, “if you want to break up the Google search monopoly then break up the cable/telco last mile monopolies.” 

Google is an amazing company, promoting the digital pricing and horizontal service provider spirit more than anyone.  But Google is motivated by profit and will seek to grow that profit as best it can, even if contrary to the founding principles and market conditions that fueled its success (aka net neutrality or equal access).  Now that Google is getting into the lower layers in the last mile, it is running into paradoxes and conflicts over net neutrality/equal access and is in danger of becoming just another vertical monopoly.  (Milo Medin provides an explanation in the 50th minute of this video, but it is self-serving, disingenuous and avoids confronting the critical issue for networks going forward.)

Contrary to many people’s beliefs, the upper and lower layers have always been inextricably interdependent, and nowhere was this more evident than with the birth of the internet out of the flat-rate dial-up networks of the mid to late 1980s (a result of dial-1 equal access).  The nascent ISPs that scaled in the 1980s on layer 1-2 data bypass networks were likewise protected by Computer II/III (aka net neutrality) and benefited from competitive (WAN) transport markets.

Few realize or accept that the genesis of Web 1.0 (W1.0) was the break-up of AT&T in 1983.  Officially birthed in 1990, it was an open, 1-way, store-and-forward database-lookup platform on which 3 major applications/ecosystems scaled beginning in late 1994 with the advent of the browser: communications (email and messaging), commerce, and text and visual content.  Even though everything was narrowband, W1.0 began the inexorable computing collapse back to the core, aka the cloud (4 posts on the computing cycle and relationship to networks).  The fact that it was narrowband didn't prevent folks like Mark Cuban and Jeff Bezos from envisioning and selling a broadband future 10 years hence.  Regardless, W1.0 started collapsing in 1999 as it ran smack into an analog dial-up brick wall.  Google hit the big time that year and scaled into the early 2000s by following KISS and freemium business model principles.  Ironically, Google’s chief virtue was taking advantage of W1.0’s primary weakness.

Web 2.0 grew out of the ashes of W1.0 in 2002-2003.  W2.0 both resulted from and fueled the broadband (BB) wars starting in the late 1990s between the cable (offensive) and telephone (defensive) companies.  BB penetration reached 40% in 2005, a critical tipping point for the network effect, exactly when YouTube burst on the scene.  Importantly, BB (which doesn't have equal access, under the guise of "deregulation") wouldn’t have occurred without W1.0 and the above two forms of equal access in voice and data during the 1980s-90s.  W2.0 and BB were mutually dependent, much like the hardware/software Wintel model.   BB enabled the web to become rich-media and mostly 2-way and interactive.  Rich-media driven blogging, commenting, user generated content and social media started during the W1.0 collapse and began to scale after 2005.

“The Cloud” also first entered people’s lingo during this transition.  Google simultaneously acquired YouTube in the upper layers to scale its upper and lower layer presence and traffic and vertically integrated and consolidated the ad exchange market in the middle layers during 2006-2008.  Prior to that, and perhaps anticipating lack of competitive markets due to "deregulation" of special access, or perhaps sensing its own potential WAN-side scale, the company secured low-cost fiber rights nationwide in the early 2000s following the CLEC/IXC bust and continued throughout the decade as it built its own layer 2-3 transport, storage, switching and processing platform.  Note, the 2000s was THE decade of both vertical integration and horizontal consolidation across the board aided by these “deregulatory” political forces.  (Second note, "deregulatory" should be interpreted in the most sarcastic and insidious manner.)

Web 3.0 began officially with the iPhone in 2007.  The smartphone enabled 7x24 and real-time access and content generation, but it would not have scaled without wifi’s speed, as 3G wireless networks were at best late 1990s era BB speeds and didn’t become geographically ubiquitous until the late 2000s.  The combination of wifi (high speeds when stationary) and 3G (connectivity when mobile) was enough though to offset any degradation to user experience.  Again, few appreciate or realize that W3.0 resulted from two additional forms of equal access, namely cellular A/B interconnect from the early 1980s (extended to new digital PCS entrants in the mid 1990s) and wifi’s shared spectrum.  One can argue that Steve Jobs single-handedly resurrected equal access with his AT&T agreement ensuring agnostic access for applications.  Surprisingly, this latter point was not highlighted in Isaacson's excellent biography.  Importantly, we would not have had the smartphone revolution were it not for Jobs' equal access efforts.

W3.0 proved that real-time, all the time "semi-narrowband" (given the contexts and constraints around the smartphone interface) trumped store and forward "broadband" on the fixed PC for 80% of people’s “web” experience (connectivity and interaction was more important than speed), as PC makers only realized by the late 2000s.  Hence the death of the Wintel monopoly, not by government decree, but by market forces 10 years after the first anti-trust attempts.  Simultaneously, the cloud became the accepted processing model, coming full circle back to the centralized mainframe model circa 1980 before the PC and slow-speed telephone network led to its relative demise.  This circularity further underscores not only the interplay between upper and lower layers but between edge and core in the InfoStack.  Importantly, Google acquired Android in 2005, well before W3.0 began as they correctly foresaw that small-screens and mobile data networks would foster the development of applications and attendant ecosystems would intrude on browser usage and its advertising (near) monopoly. 

Web 4.0 is developing as we speak and no one is driving it and attempting to influence it more with its WAN-side scale than Google.  W4.0 will be a full-duplex, 2-way, all-the time, high-definition application driven platform that knows no geographic or market segment boundaries.  It will be engaging and interactive on every sensory front; not just those in our immediate presence, but everywhere (aka the internet of things).  With Glass, Google is already well on its way to developing and dominating this future ecosystem.  With KC Fiber Google is illustrating how it should be priced and what speeds will be necessary.  As W4.0 develops the cloud will extend to the edge.  Processing will be both centralized and distributed depending on the application and the context.  There will be a constant state of flux between layers 1 and 3 (transport and switching), between upper and lower layers, between software and hardware at every boundary point, and between core and edge processing and storage.  It will dramatically empower the end-user and change our society more fundamentally than what we’ve witnessed over the past 30 years.  Unfortunately, regulators have no gameplan on how to model or develop policy around W4.0.

The missing pieces for W4.0 are fiber-based and super-high-capacity wireless access networks in the lower layers, settlement exchanges in the middle layers, and cross-silo ecosystems in the upper layers.  Many of these elements are developing in the market naturally: big data, hetnets, SDN, OpenFlow, open OSs like Android and Mozilla, etc…  Google’s strategy appears consistent and well-coordinated to tackle these issues, if not far ahead of others.  But its vertically integrated service provider model and stance on net neutrality in KC is in conflict with the principles that so far have led to its success.

Google is buying into the vertical monopoly mindset to preserve its profit base instead of teaching regulators and the markets about the virtues of open or equal access across every layer and boundary point (something clearly missing from Tim Wu's and Bob Atkinson's definitions of net neutrality).  In the process it is impeding the development of W4.0.  Governments could solve this problem by simply conditioning any service provider with access to a public right of way or frequency to equal access in layers 1 and 2; along with a quid pro quo that every user has a right to access unhindered by landlords and local governments within economic and aesthetic reason.  (The latter is a bone we can toss all the lawyers who will be looking for new work in the process of simpler regulations.)  Google and the entire market will benefit tremendously by this approach.  Who will get there first?  The market (Google or MSFT/AAPL if the latter are truly hungry, visionary and/or desperate) or the FCC?  Originally hopeful, I’ve become less sure of the former over the past 12 months.  So we may be reliant on the latter.

Related Reading:

Free and Open Depend on Where You Are in Google's InfoStack

InfoWorld defends Google based on its interpretation of NN; of which there are 4

DSL reports thinks Google is within its rights because it expects to offer enterprise service.  Only it does not, and heretofore had not mentioned it.

A good article on Gizmodo about the state of the web and what "we" are giving up to Google

The datacenter as an "open access" boundary.  What happens today in the DC will happen tomorrow elsewhere.

Posted by: Michael Elling AT 10:57 am   |  Permalink   |  4 Comments  |  Email
Monday, 28 January 2013

TCP/IP Won, OSI Lost.  Or Did It?  Clue: Both Are Horizontal

George Santayana said, “Those who cannot remember the past are condemned to repeat it.”  What he didn’t add, as it might have undermined his point, is that “history gets created in one moment and gets revised the next.”  That’s what I like to say.  And nothing could be truer when it comes to current telecom and infomedia policy and structure.  How can anyone in government, academia, capital markets or the trade learn from history and make good long-term decisions if they don’t have the facts straight?

I finished a book about the origins of the internet (ARPAnet, CSnet, NSFnet): Where Wizards Stay Up Late: The Origins of the Internet, by Katie Hafner and Matthew Lyon, written back in 1996, before the bubble and crash of Web 1.0.  It’s been a major read for computer geeks and has some lessons for people interested in information industry structures and business models.  I cross both boundaries and was equally fascinated by the “anti-establishment” approach of the group of scientists and business developers at BBN, the DoD and academia, as well as by the haphazard and evolutionary approach to development that resulted in an ecosystem very similar to what the original founders envisioned in the 1950s.

The book has become something of a bible for internet, and those I refer to as upper layer (application), fashionistas, who unfortunately have, and are provided by the book with, very little understanding of the middle and lower layers of the service provider “stack”.  In fact the middle layers all but disappear as far as they are concerned.  While those upper layer fashionistas would like to simplify things and say, “so and so was a founder or chief contributor of the internet,” or “TCP/IP won and OSI lost,” actual history and reality suggest otherwise.

Ironically, the best way to look at the evolution of the internet is via the oft-maligned 7-layer OSI reference model.  It happens to be the basis for one dimension of the InfoStack analytical engine.  The InfoStack relates the horizontal layers (what we call the service provisioning checklist for a complete solution) to the geographic dispersion of traffic and demand on a 2nd axis, and to a 3rd axis which historically covered 4 disparate networks and business models but now maps to applications and market segments.  Looking at how products, solutions and business models unfold along these axes provides a much better understanding of what really happens, as 3 coordinates or vectors provide better than 90% accuracy around any given datapoint.
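As a rough sketch of that 3-axis framing (my own illustration with made-up axis values; the actual InfoStack is the author's analytical engine, not this code), any product or business model can be tagged with one coordinate on each axis:

```python
# Sketch of the InfoStack's three axes as a coordinate system for classifying
# products and business models. Axis values below are illustrative guesses,
# not the framework's real categories.

from dataclasses import dataclass

LAYERS   = ("lower (access/transport)", "middle (control)", "upper (apps/content)")
GEOS     = ("premises", "metro", "regional", "national", "global")
SEGMENTS = ("consumer", "enterprise", "government", "wholesale")

@dataclass(frozen=True)
class InfoStackPoint:
    layer: str       # horizontal layer (OSI-derived axis)
    geography: str   # geographic dispersion of traffic and demand
    segment: str     # application / market segment axis

    def __post_init__(self):
        # Validate each coordinate against its axis.
        assert self.layer in LAYERS and self.geography in GEOS and self.segment in SEGMENTS

# Example: an early-1990s dial-up ISP offering sits low in the stack,
# spans national geography, and targets consumers.
isp_1993 = InfoStackPoint("lower (access/transport)", "national", "consumer")
print(isp_1993.layer)  # prints: lower (access/transport)
```

Plotting many such points over time is one way to picture the "unfolding along these axes" the paragraph describes.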

The book spans the time between the late 1950s and the early 1990s, but focuses principally on the late 1960s and early 1970s.  Computers were enormously expensive and shared by users, but mostly on a local basis because of high cost and slow connections.  No mention is made of the struggle modems and hardware vendors had to get equal access to the telephone system, and PCs had yet to burst on the scene.  The issues around the high-cost monopoly communications network run by AT&T are only briefly mentioned; their impact and import lost to the reader.

The book makes no mention that by the 1980s the development of what became the internet ecosystem was really picking up steam.  After struggling to get a foothold on the “closed” MaBell system since the 1950s, smartmodems burst on the scene in 1981.  Modems accompanied technology developments that had been occurring with fax machines, answering machines and touchtone phones; all generative aspects of a nascent competitive voice/telecom market.

Then, in 1983, AT&T was broken up and an explosion in WAN (long-distance) competition drove pricing down, and advanced intelligent networks increased the possibility of dial-around bypass.  (Incidentally, by the 1990s touchtone penetration in the US was over 90% vs less than 20% in the rest of the world, driving not only explosive growth in 800 calling, but VPN and card calling, and last but not least the simple "touchtone" numeric pager; one of the precursors to our digital cellphone revolution.)  The Bells responded to this potential long-distance bypass threat by seeking regulatory relief with expanded calling areas and flat-rate calling to preserve their Class 5 switch monopoly.

All the while second line growth exploded, primarily as people connected fax machines and modems for their PCs to connect to commercial ISPs (Compuserve, Prodigy, AOL, etc...).  These ISPs benefited from low WAN costs (competitive transit in layer 2), inexpensive routing (compared with voice switches) in layer 3, and low-cost channel banks and DIDs in those expanded LATAs to which people could dial up flat-rate (read "free") and remain connected all day long.  The US was the only country in the world that had that type of pricing model in the 1980s and early 1990s. 

Another foundation of the internet ecosystem, the PC, burst from the same lab (Xerox PARC) that was run by one of the founders of the ARPAnet, Bob Taylor, who arguably deserves as much or more credit than Bob Kahn or Vint Cerf (inventors of TCP) for the development of the internet.  As well, two more technological underpinnings that scaled the internet, Ethernet and the graphical user interface (later commercialized in Windows), were developed at Xerox PARC.  These technology threads should have been better developed in the book, given their role in the demand for and growth of the internet from the edge.

In the end, what really laid the foundation for the internet were numerous efforts in parallel that developed outside the monopoly network and highly regulated information markets.  These were all “generative,” to quote Zittrain.  (And as I said a few weeks ago, they were accidental.)  These parallel streams evolved into an ecosystem onto which www, http, html and Mosaic were laid--the middle and upper layers--of the 1.5-way, store-and-forward, database-lookup “internet” in the early to mid 1990s.  Ironically and paradoxically this ecosystem came together just as the Telecom Act of 1996 was being formed and passed; underscored by the fact that the term “internet” is mentioned once in the entire Act, one of the reasons I labeled the Act “farcical” back in 1996.

But the biggest error of the book, in my opinion, is not the omission of all these efforts in parallel with the development of TCP/IP and of their due weight in the internet ecosystem, but rather the concluding notion that TCP/IP won and the OSI reference model lost.  This was disappointing and has had a huge, negative impact on perception and policy.  What the authors should have said is that a horizontally oriented, low-cost, open protocol, part of a broader similarly oriented horizontal ecosystem, beat out a vertically integrated, expensive, closed and siloed solution from monopoly service providers and vendors.

With a distorted view of history it is no wonder that the list of ironic and unfortunate paradoxes in policy and market outcomes goes on and on; people don’t fully understand what happened between TCP/IP and OSI and how the two are inextricably linked.  Until history is viewed and understood properly, we will be doomed, in the words of Santayana, to repeat it.  Or, as Karl Marx said, "history repeats itself twice, first as tragedy and second as farce."

Posted by: Michael Elling AT 10:50 am   |  Permalink   |  0 Comments  |  Email
Friday, 18 January 2013

Last summer I attended a Bingham event at the Discovery Theatre in NYC’s Times Square to celebrate the Terracotta Warriors of China’s first emperor, Qin Shi Huang.  What struck me was how far our Asian ancestors had advanced technically, socially and intellectually beyond our western forefathers by 200 BC.  Huang's reign, which included the building of major transportation and information networks, was followed by a period of nearly 1,500 years of relative peace (and stagnation) in China.  It would take another 1,000 years for westerners to catch up, during periods of war, plague and socio-political upheaval.  But once they passed their Asian brethren by the 15th and 16th centuries they never looked back.  Having just finished The Art of War by Sun Tzu, I asked myself: are war and strife necessary for mankind to advance?

This question was reinforced over the holidays upon visiting the Loire Valley in France, which most people associate with beautiful Renaissance chateaus, a rich fairy-tale medieval history, and good wines.  What most people don’t realize is that the Loire was a war-torn area for the better part of 400 years as the French (Counts of Blois) and English (Counts of Anjou, precursors to the Plantagenet dynasty of England) vied for domination of a then emerging Europe.  The parallels between China and France 1,000 years later couldn’t have been more poignant.

After the French finally kicked the English out in the 1400s this once war-torn region became the center of the European renaissance and later the birthplace of the age of enlightenment.  Francois 1st brought Leonardo from Italy for the last 3 years of his life and the French seized upon his way of thinking; to be followed a few centuries later by Voltaire and Rousseau.  The French aristocracy, without wars to fight, invited them to stay in their Chateaus, built on the fortifications of the medieval castles, and develop their enduring principles of liberty, equality and fraternity.  These in turn would become the foundations upon which America broadly based its constitution and structure of government; all of which in theory supports and leads to competitive markets and network neutrality; the basis of the internet.

And before I left on my trip, I bought a kindle version of Sex, Bombs and Burgers by Peter Nowak on the recommendation of an acquaintance at Bloomberg.  Nowak’s premise is to base much of America’s advancement and success over the past 50 years on our warrior instincts and need to procreate and sustain life.  I liked the book and recommend it to anyone, especially as I used to quip, “Web 1.0 of the 1990s was scaled by the 4 (application) horsemen: Content, Commerce, Communication and hard/soft-Core porn.”  But the book also provides great insights beyond the growth of porn on the internet into our food industry and where our current military investments might be taking us physically and biologically.

While the book meanders on occasion, my take-away and answer to the question above is that war (and the struggle to survive by procreating and eating) increases the rate of technological innovation, which often results in new products; themselves often mistakes or unintended commercial consequences of their original military intent.  War increases the pace of innovation out of necessity, intensity and focus.  After all, our state of fear is unnaturally heightened when someone is trying to kill us, underscoring the notion that fear and greed, not love and happiness, are man’s primary psychological and commercial motivators.

Most people generally believe the internet is an example of a technological innovation hatched from the militarily driven space race, which is the premise of another book I am just starting, Where Wizards Stay Up Late by Hafner and Lyon.  What most people fail to realize, including Nowak, is that the internet was an unintended consequence of the breakup of AT&T in 1983; another type of conflict, an economic war waged from the 1950s through the 1970s.  In that war we had “General” William McGowan of MCI (microwave, the M in MCI, was a technology principally scaled during WW II) battling MaBell alongside his ally, the DOJ.  At the same time, a group of civilian scientists in the Pentagon had been developing the ARPAnet, a weapon/tool designed to get around MaBell’s monopoly long-distance fortifications and enable low-cost computer communications across the US and globally.

The two conflicts aligned in the late 1980s as the remnants of MaBell, the Baby Bells, sought regulatory relief through state and federal regulators from a viciously competitive WAN/long-distance sector to preserve two arcane, piggishly profitable monopoly revenue streams; namely intrastate tolls and terminating access.  The regulatory relief provided was to expand local calling areas (LATAs) and go to flat rate (all you can eat) pricing models.  By then modems and routers, outgrowths of ARPA related initiatives, had gotten cheap enough that the earliest ISPs could cost effectively build and market their own layer 1-2 nationwide "data bypass" networks across 5,000 local calling areas.

These networks allowed people to dial up a free or low-cost local number and stay connected with a computer, database or server anywhere, all day long.  The notions of “free” and “cheap” and the collapse of distance were born.  The internet started and scaled in the US because of partially competitive communications networks, which no one else had in 1990.  It would be 10 years before the rest of the world had an unlimited flat-rate access topology like the US.

Only after these foundational (pricing and infrastructure) elements were in place did the government allow commercial nets to interconnect via the ARPAnet in 1988.  This was followed by Tim Berners-Lee’s WWW in 1989 (an address simplification standard), with HTTP and HTML in subsequent years providing the basis for a simple-to-use, mass-market browser, Mosaic, the precursor to Netscape, in 1993.  The result was the Internet, or Web 1.0: a 4- or 5-layer asynchronous communications stack mostly used as a store-and-forward database lookup tool.

The internet was the result of two wars fought against monopolies, the Soviet communists and the American bellheads; both of which, ironically, share(d) common principles.  Participants and commentators in the current network neutrality, access/USF reform and ITU debates, including Nowak, should be aware of these conflict-driven beginnings of the internet, in particular the power and impact of price, as that awareness would modify their positions significantly.  Issues like horizontal scaling, vertical disintermediation and completeness, balanced settlement systems and open/equal access need to be better analyzed and addressed.  In almost every instance we find every participant in these debates holding hypocritical and paradoxical positions, since people do not fully appreciate history and how they arrived at their relative and absolute positions.

Posted by: Michael Elling AT 12:10 pm   |  Permalink   |  0 Comments  |  Email
Sunday, 03 June 2012

Since I began covering the sector in 1990, I’ve been waiting for Big Bang II.  An adult flick?  No, the sequel to Big Bang (aka the breakup of MaBell and the introduction of equal access) was supposed to be the breakup of the local monopoly.  Well, thanks to the Telecom Act of 1996 and the well-intentioned farce that it was, that didn’t happen, and equal access officially died (equal access RIP) in 2004.  Or did it?

I am announcing that Equal Access is alive and well, albeit in a totally unexpected way.  Thanks to the epochal demands Steve Jobs put on AT&T to counter its terrible network, every smartphone has an 802.11 backdoor built in.  Together with the Apple and Google operating systems being firmly out of carriers’ hands and scaling across other devices (tablets, etc.), a large ecosystem of over-the-top, unified communications and traffic-offload applications is developing to attack the wireless hegemony.

First, a little history.  Around the time of AT&T's breakup the government implemented 2 forms of equal access.  Dial-1 in long-distance turned the long-distance competitors into marketing- and application-driven voice resellers.  The FCC also mandated A/B cellular interconnect to ensure nationwide buildout of both cellular networks.  This was extended to nascent PCS providers in the early-to-mid 1990s, leading to dramatic price declines and enormous demand elasticities.  Earlier, the competitive WAN/IXC markets of the 1980s led to rapid price reductions and to monopoly pricing responses that created the economic foundations of the internet in layers 1 and 2; aka flat-rate or "unlimited" local dial-up.  The FCC protected the nascent ISPs by preventing the Bells from interfering at layer 2 or above.  Of course this distinction of MAN/LAN "net neutrality" went away with the advent of broadband, and today it is really just about WAN/MAN fights between the new (converged) ISPs or broadband service providers like Comcast, Verizon, etc. and the OTT or content providers like Google, Facebook, Netflix, etc.

But another form of equal access, this one totally unintended, happened with 802.11 in the mid 1990s.  802.11 became "nano-cellular" in that regulated power output limited hot-spot or cell size to ~300 feet.  This had the impact of making the frequency band nearly infinitely divisible.  The combination was electric, and the market, unencumbered by monopoly standards and scaling along with related horizontal layer 2 data technologies (ethernet), quickly seeded itself.  It really took off when Intel built the capability directly into its chips in the early 2000s.

Cisco just forecast that 50% of all internet traffic will be generated from 802.11-connected devices.  Given that 802.11’s costs are 1/10th those of 4G, something HAS to give for the communications carrier.  We’ve talked about carriers needing to better address the pricing paradox of voice and data, as well as the potential for real obviation at the hands of the application and control layer worlds.  While they might think they have a near monopoly on the lower layers, Steve Jobs’ ghost may well come back to haunt them if alternative access networks/topologies get developed that take advantage of this equal access.  For these networks to happen, carriers will need to think digital; understand, project and foster vertically complete systems; and be able to turn the "lightswitch on" for their addressable markets.
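The "something HAS to give" arithmetic is straightforward: if 802.11 delivery costs a tenth of 4G per unit of traffic, every point of offload lowers the blended delivery cost.  A minimal sketch; only the 10:1 cost ratio and the 50% traffic forecast come from the text, while the absolute $/GB figures are illustrative assumptions:

```python
# Blended delivery-cost sketch for the 802.11-vs-4G point above.
# The 10:1 cost ratio and the 50% traffic forecast are from the text;
# the absolute $/GB figures are illustrative assumptions only.

cost_4g_per_gb = 1.00    # assumed 4G cost per GB
cost_wifi_per_gb = 0.10  # 1/10th of 4G, per the text

for wifi_share in (0.0, 0.25, 0.50):  # 0.50 = Cisco's 802.11 forecast
    blended = wifi_share * cost_wifi_per_gb + (1 - wifi_share) * cost_4g_per_gb
    print(f"{wifi_share:.0%} offload -> ${blended:.2f}/GB")
```

At the forecast 50% offload the blended cost per GB falls by almost half, which is the margin squeeze carriers face.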

 

Posted by: Michael Elling AT 10:21 am   |  Permalink   |  2 Comments  |  Email
Sunday, 29 April 2012

The first quarter global smartphone stats are in and it isn’t even close.  With the market growing more than 40%, Samsung controls 29% of the market and Apple 24%.  The next largest, Nokia, came in 60-70% below the leaders at 8%, followed by RIM at 7% and HTC at 5%, leaving the scraps (28%) to Sony, Motorola, LG and ZTE.  They've all already lost on the scale front; they need to change the playing field.

While this spread sounds large and improbable, it is not without historical precedent.  In 1914, just 6 years after its introduction, the Ford Model T commanded 48% market share.  Even by 1923 Ford still had 40% market share.  2 years later the price stood at $260: 30% of the original 1908 price, and less than 10% of what the average car cost in 1908.  That sounds awfully similar to Moore’s law and the pricing of computer/phone devices over the past 30 years.  Also, a read on the Model T's technological and design principles sounds a lot like pages taken out of the book of Apple.  Or is it the other way around?

Another similarity was Ford’s insistence on the use of black beginning in 1914.  Over the life of the car 30 different variations of black were used!  The color limitation was a key ingredient in the low cost; prior to 1914 the company used blue, green, red and grey.  Still, 30 variations of black (just like Apple’s white-and-black-only, take-it-or-leave-it, silo-ed product approach) is impressive, and eerily similar to Dutch master Frans Hals’ use of 27 variations of black, so inspirational to Van Gogh.  Who says we can’t learn from history?

Ford’s commanding lead continued through 1925 even as competitors introduced many new features, designs and colors.  Throughout, Ford was the price leader, but when the end came for that strategy it was swift.  Within 3 years the company had completely changed its product philosophy, introducing the Model A (with 4 colors and no black) and running up staggering losses in 1927-28 in the process.  But the company saw market share rebound from 30% to 45%; something that might have been maintained for a while had the depression not hit.

The parallels between the smartphone and automobile seem striking.  The networks are the roads.  The pricing plans are the gasoline.  Cars were the essential component for economic advancement in the first half of the 20th century, just as smartphones are the key for economic development in the first half of the 21st century.  And now we are finding Samsung as Apple's GM; only the former is taking a page from both GM and Ford's histories.  Apple would do well to take note.

So what are the laggards to do to make it an even race?  We don’t think Nokia’s strategy of coming out with a different color will matter.  Nor do we think that more features will matter.  Nor do we think it will be about price/cost.  So the only answer lies in context; something we have raised in the past on the outlook for the carriers.  More to come on how context can be applied to devices.  Hint, I said devices, not smartphones.  We'll also explore what sets Samsung and Apple apart from the rest of the pack.

Related Reading:

Good article on Ford and his maverick ways.  Qualities which Jobs possessed.

 

Posted by: Michael Elling AT 09:24 am   |  Permalink   |  0 Comments  |  Email
Sunday, 18 March 2012

Previously we have written about “being digital” in the context of shifting business models and approaches as we move from an analog world to a digital world.  Underlying this change have been 3 significant tsunami waves of digitization in the communications arena over the past 30 years, underappreciated and unnoticed by almost all until after they had crashed onto the landscape:

  • The WAN wave between 1983-1990 in the competitive long-distance market, continuing through the 1990s;
  • The Data wave, itself a direct outgrowth of the first wave, began in the late 1980s with flat-rate local dial-up connections to ISPs and databases anywhere in the world (aka the Web);
  • The Wireless wave began in the early 1990s and was a direct outgrowth of the first two.  Digital cellphones were based on the same technology as the PCs that were exploding with internet usage.  Likewise, super-low-cost WAN pricing paved the way for one-rate, national pricing plans.  Prices dropped from $0.50-$1.00 per minute to less than $0.10.  Back in 1996 we correctly modeled this trend before it happened.

Each wave may have looked different, but they followed the same patterns, building on each other.  As unit prices dropped 99%+ over a 10-year period, unit demand exploded, resulting in 5-25% total market growth.  In other words, as ARPu (average revenue per unit) dropped, ARPU (average revenue per user) rose; u vs U, units vs users.  Elasticity.
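The pattern can be sketched numerically.  In this illustration the decline and growth rates are assumptions chosen to reproduce the 99%-over-10-years price drop and the single-digit total market growth described above; they are not measured data:

```python
# Illustrative elasticity sketch: unit price falls ~37%/yr
# (0.63**10 ≈ 0.01, i.e. a 99% cumulative drop over 10 years) while
# unit demand grows 75%/yr, so total revenue still rises ~10%/yr.
# Both rates are assumptions chosen to match the ranges in the text.

price = 1.0    # ARPu: price per unit, indexed
units = 100.0  # unit demand, indexed

for year in range(1, 11):
    price *= 0.63
    units *= 1.75
    revenue = price * units
    print(f"year {year:2d}: price {price:.4f}  units {units:10,.0f}  revenue {revenue:7,.1f}")
```

By year 10 the unit price has fallen over 99%, yet total revenue has more than doubled; that is the ARPu-down, ARPU-up dynamic in miniature.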

Yet with each new wave, people remained unconvinced about demand elasticity.  They were just incapable of pivoting from the current view and extrapolating to a whole new demand paradigm.  Without fail demand exploded each time coming from 3 broad areas: private to public shift, normal price elasticity, and application elasticity.

  • Private to Public Demand Capture.  Monopolies are all about average costs and consumption, with little regard for the margin.  As a result, they lose the high-volume customer who can develop their own private solution.  This loss diminishes scale economies of those who remain on the public, shared network raising average costs; the network effect in reverse.  Introducing digitization and competition drops prices and brings back not all, but a significant number of these private users.  Examples we can point to are private data and voice networks, private radio networks, private computer systems, etc…that all came back on the public networks in the 1980s and 1990s.  Incumbents can’t think marginally.
  • Normal Price Elasticity.  As prices drop, people will use more.  It gets to the point where they forget how much it costs, since the relative value is so great.  One thing to keep in mind is that lazy companies can rely too much on price and “all-you-can-eat” plans without regard for the real marginal price to marginal cost spread.  The correct approach requires the right mix of pricing, packaging and marketing so that all customers at the margin feel they are deriving much more value than what they are paying for; thus generating the highest margins.  Apple is a perfect example of this.  Sprint’s famous “Dime” program was an example of this.  The failure of AYCE wireless data plans has led wireless carriers to implement arbitrary pricing caps, leading to new problems.  Incumbents are lazy.
  • Application Elasticity.  The largest and least definable component of demand is the new ways of using the lower-cost product that 3rd parties drive into the ecosystem.  They are the ones that drive true usage via ease of use and better user interfaces.  Arguably they ultimately account for 50% of the new demand, with the other two mechanisms at 25% each.  With each wave there has always been a large crowd of value-added resellers and application developers that more effectively ferret out new areas of demand.  Incumbents move slowly.

Demand generated via these 3 mechanisms soaked up excess supply from the digital tsunamis.  In each case competitive pricing was arrived at ex ante by new entrants developing new marginal cost models and iterating future supply/demand scenarios.  It is this ex ante competitive guess that so confounds the rest of the market, both before and after the event.  That's why few people recognize that these 3 historical waves are early warning signs for the final big one: the 4th and final wave of digitization, which will occur in the mid-to-last mile broadband markets.  But many remain skeptical of what the "demand drivers" will be.  These last-mile broadband markets are monopoly/duopoly controlled and have not yet realized the per-unit price declines we’ve seen in the prior waves.  Jim Crowe of Level3 recently penned a piece in Forbes that speaks to this market failure.  In coming posts we will illustrate where we think bandwidth pricing is headed, as people remain unconvinced about elasticity, just as before.  But hopefully the market has learned from the prior 3 waves and will believe the demand forecasts if someone comes along and says last-mile unit bandwidth pricing is dropping 99%.  Because it will.

 

Posted by: Michael Elling AT 10:47 am   |  Permalink   |  0 Comments  |  Email
Sunday, 26 February 2012

Wireless service providers (WSPs) like AT&T and Verizon are battleships, not carriers.  Indefatigable...and steaming their way to disaster even as the nature of combat around them changes.  If over the top (OTT) missiles from voice and messaging application providers started fires on their superstructures and WiFi offload torpedoes from alternative carriers and enterprises opened cracks in their hulls, then Dropbox bombs are about to score direct hits near their water lines.  The WSPs may well sink from new combatants coming out of nowhere with excellent synching and other novel end-user enablement solutions even as pundits like Tomi Ahonen and others trumpet their glorious future.  Full steam ahead. 

Instead, WSP captains should shout “all engines stop” and rethink their vertical integration strategies to save their ships.  A good start might be to look where smart VC money is focusing and figure out how they are outfitted at each level to defend against, or offensively incorporate, these rapidly developing new weapons.  More broadly, WSPs should revisit the WinTel wars, which are eerily similar to the smartphone ecosystem battles, and see what steps IBM took to save its sinking ship in the early 1990s.  One unfortunate possibility is that the fleet of battleships is now so widely disconnected that none has a chance to survive.

The bulls on Dropbox (see the pros and cons behind the story) suggest that increased reliance on cloud storage and synching will diminish reliance on any one device, operating system or network.  This is the type of horizontalization we believe will continue to scale and undermine the (perceived) strength of vertical integration at every layer (upper, middle and lower).  Extending the sea battle analogy, horizontalization broadens the theatre of opportunity and threat away from the ship itself; exactly what aircraft carriers did for naval warfare.

Synching will allow everyone to manage and tailor their “states”, developing greater demand opportunity; something I pointed out a couple of months ago.  People’s states could be defined a couple of ways, beginning with work, family, leisure/social across time and distance and extending to specific communities of (economic) interest.   I first started talking about the “value of state” as Chief Strategist at Multex just as it was being sold to Reuters.

Back then I defined state as information (open applications, communication threads, etc...) resident on a decision maker’s desktop at any point in time that could be retrieved later.  Say I have multiple industries that I cover and I am researching biotech in the morning and make a call to someone with a question.  Hours later, after lunch meetings, I am working on chemicals when I get a call back with the answer.  What’s the value of bringing me back automatically to the prior biotech state so I can better and more immediately incorporate and act on the answer?  Quite large.

Fast forward nearly 10 years and people are connected 7x24 and checking their wireless devices on average 150x/day.  How many different states are they in during the day?  5, 10, 15, 20?  The application world is just beginning to figure this out.  Google, Facebook, Pinterest and others are developing data engines that facilitate “free access” to content and information paid for by centralized procurement; aka advertising.  Synching across “states” will provide even greater opportunity to tailor messages and products to consumers.

Inevitably those producers (advertisers) will begin to require guaranteed QoS and availability levels to ensure a good consumer experience.  Moreover, because of social media and BYOD, companies today are looking at their employees the same way they look at their consumers.  The overall battlefield begins to resemble the 800 and VPN wars of the 1990s, when we had a vibrant competitive service provider market before its death at the hands of the 1996 Telecom Act (read this critique and another that questions the Bells’ unnatural monopoly).  Selling open, low-cost, widely available bandwidth into this advertising battlefield can give WSPs profit on every transaction/bullet/bit across their networks.  That is the new “ship of state,” taking the battle elsewhere.  Some call this dumb pipes; I call it a smart strategy to survive being sunk.

Related Reading:

John Mahoney presents state as representing content and context

Smartphone users’ complaints about speed rise 50% over voice problems

Posted by: Michael Elling AT 09:54 am   |  Permalink   |  0 Comments  |  Email
Sunday, 12 February 2012

Last week we revisited our seminal 1996 analysis of the 10-cent wireless minute plan (400 minutes for C$40) introduced by Microcell of Canada, which produced the investment theme titled “The 4Cs of Wireless”.  To generate sufficient ROI, wireless needed to replace wireline as a preferred access method/device (PAD).  Wireless would have to satisfy minimal cost, coverage, capacity and clarity requirements to disrupt the voice market.  We found:

  • marginal cost of a wireless minute (all-in) was 1.5-3 cents
  • dual-mode devices (coverage) would lead to far greater penetration
  • software-driven and wideband protocols would win the capacity and price wars
  • CDMA had the best voice clarity (QoS); pre-dating Verizon’s “Can you hear me now” campaign by 6 years

In our model we concluded (and mathematically proved) that demand elasticity would drive consumption to 800 MOUs/month and ARPU to north of $70, from the low $40s.  It all happened within 2 short years; at least as perceived by the market when wireless stocks were booming in 1998.  But in 1996, the pricing was viewed as the kiss of death for the wireless industry by our competitors on the Street.  Ironically, Microcell, the innovator, was at a disadvantage based on our analysis: the very reason it went to the aggressive pricing model to fill the digital pipes, namely lack of coverage due to a single-mode GSM phone, ended up being its downfall.  Coverage for a "mobility" product trumped price, as we see below.
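For readers who want the arithmetic behind the model, here is a back-of-envelope sketch.  The $0.10 price, the 1.5-3 cent marginal cost, the 800 MOUs and the ARPU figures come from the analysis above; the implied "before" usage is an assumption derived from the old ARPU at a 50-cent rate:

```python
# Back-of-envelope version of the 1996 model above. Known from the text:
# $0.10/min pricing, 1.5-3 cent all-in marginal cost, ~800 MOUs/month,
# ARPU moving from the low $40s to north of $70. The implied "before"
# usage is an assumption derived from the old ARPU at a 50-cent rate.

old_price = 0.50                      # $/min, upper end of pre-1996 rates
old_arpu = 42.0                       # low $40s
old_mous = old_arpu / old_price       # ~84 minutes/month implied

new_price = 0.10                      # Microcell-style pricing
new_mous = 800.0                      # modeled monthly minutes of use
new_arpu = new_price * new_mous       # $80/month, "north of $70"

marginal_cost = 0.025                 # midpoint of 1.5-3 cents
gross_margin = (new_price - marginal_cost) * new_mous  # $/month/subscriber

print(old_mous, new_arpu, gross_margin)
```

Usage rises roughly tenfold while ARPU nearly doubles, and the 10-cent price still clears marginal cost by a wide spread; that is the elasticity the Street's 1996 skeptics missed.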

What we didn’t realize at the time was that the 4Cs approach was broadly applicable to supply of communication services and applications in general.  In the following decade, we further realized the need for a similar checklist on the demand side to understand how the supply would be soaked up and developed the 4Us of Demand in the process.  We found that solutions and services progressed rapidly if they were:

  • easy to use
  • usable across an array of contexts
  • ubiquitous in terms of their access
  • universal in terms of appeal

Typically, most people refer only to user interface (UI) or user experience (UX), but those aren't granular enough to accommodate the enormous range of demand at the margin.  Look at any successful product or service introduction over the past 30 years and it scored high on all 4 demand elements.  The most profitable and self-sustaining products and solutions have been those that maximized perceived utility versus marginal cost.  Apple is the most recent example of this.

Putting the 4Cs and 4Us together in an iterative fashion is the best way to understand clearing of marginal supply and demand ex ante.  With rapid depreciation of supply (now in seconds, minutes and days) and infinitely diverse demand in digital networked ecosystems getting this process right is critical.

Back in the 1990s I used to say the difference between wireless and wired networks was like turning on a lightswitch in a dark room filled with people.  Reaction and interaction (demand) could be instantaneous for the wireless network.  So it was important to build out rapidly and load the systems quickly.  That made them generative and emergent, resulting in exponential demand growth.  (Importantly, this ubiquity resulted from interconnection mandated by regulations from the early 1980s and extended to new digital entrants (dual-mode) in the mid 1990s).  Conversely a wired network was like walking around with a flashlight and lighting discrete access points providing linear growth.

The growth in adoption we are witnessing today from applications like Pinterest, Facebook and Instagram (underscored in this blogpost from Fred Wilson) is like stadium lights compared with the candlelight of the 1990s.  What took 2 years is taking 2 months.  You’ll find the successful applications and technologies score high on the 4Cs and 4Us checklists before they turn the lights on and join the iOS and Android parties.

Related Reading:
Fred Wilson's 10 Golden Principles

Posted by: Michael Elling AT 11:37 am   |  Permalink   |  0 Comments  |  Email
Thursday, 05 January 2012

Counter-intuitive thinking often leads to success.  That’s why we practice and practice, so that at a critical moment we are not governed by intuition (chance) or emotion (fear).  There is no better example of this than skiing, an apt metaphor this time of year.  Few self-locomoted sports offer such high risk-reward, demanding mental, physical and emotional control.  To master skiing one has to master a) the fear of staying square (looking/pointing) downhill, b) keeping one’s center over (or keeping forward on) the skis, and c) keeping a majority of pressure on the downhill (or danger zone) ski/edge.  Master these 3 things and you will become a marvelous skier.  Unfortunately, all 3 run counter to our intuitions, which are driven by fear and the safety of the woods at the side of the trail, leaning back, and climbing back uphill.  Overcoming any one of them is tough.

What got me thinking about all this was a Vint Cerf (one of the godfathers of the Internet) Op-Ed in the NYT this morning which a) references major internet access policy reports and decisions, b) mildly supports the notion of the Internet as a civil not human right, and c) trumpets the need for engineers to put in place controls that protect people’s civil (information) rights.  He is talking about policy and regulation from two perspectives, business/regulatory and technology/engineering, which is confusing.  In the process he weighs in, at a high level, on current debates over net neutrality, SOPA, universal service and access reform, from his positions at Google and IEEE and addresses the rights and governance from an emotional and intuitive sense.

Just as with skiing, let’s look at the issues critically, unemotionally and counter-intuitively.  We can’t do it all in this piece, so I will establish an outline and framework (just like the 3 main ways to master skiing) and we’ll use that as a basis in future pieces to expound on the above debates and understand corporate investment and strategy as 2012 unfolds.

First, everyone should agree that the value of networks goes up geometrically with each new participant.  It’s called Metcalfe’s law, or Metcalfe’s virtue.  Unfortunately, people tend to focus on the scale economies and cost of networks, rarely the value.  It is hard to quantify that value because most have a hard time understanding elasticity and projecting unknown demand.  Further, few distinguish marginal from average cost.  The intuitive thing for most is to focus on supply, because people fear the unknown (demand).
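Metcalfe's point can be made concrete: the number of potential connections, a rough proxy for network value, grows as n(n-1)/2, while build-out cost grows roughly linearly with participants.  A minimal sketch; the cost-per-node constant is an arbitrary illustration, not a real figure:

```python
def metcalfe_value(n: int) -> int:
    """Potential pairwise connections in an n-node network: n(n-1)/2."""
    return n * (n - 1) // 2

# Value (connections) grows quadratically while build cost grows roughly
# linearly with nodes; cost_per_node is an arbitrary illustrative constant.
cost_per_node = 100
for n in (10, 100, 1000):
    print(n, metcalfe_value(n), n * cost_per_node)
```

At 10 nodes the linear cost dominates; at 1,000 nodes the 499,500 potential connections dwarf it, which is why a supply-only view understates a network's worth.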

Second, everyone needs to realize that there is a fundamental problem with policy making in that (social) democrats tend to support and be supported by free market competitors, just as (conservative) republicans have a similar relationship with socialist monopolies.  Call it the telecom regulatory paradox.  This policy paradox is a function of small business vs big business, not either sides’ political dogma; so counter-intuitive and likely to remain that way.

Third, the internet was never open and free.  Web 1.0 resulted principally from a judicial action and a series of regulatory access frameworks/decisions in the mid to late 1980s that resulted in significant unintended consequences in terms of people's pricing perception.  Markets and technology adapted to and worked around inefficient regulations.  Policy makers did not create or herald the internet, wireless and broadband explosions of the past 25 years.  But in trying to adjust or adapt past regulation they are creating more, not less, inefficiency, no matter how well intentioned their precepts.  Accept it as the law of unintended consequences.  People feel more comfortable explaining results from intended actions than something unintended or unexplainable.

So, just like skiing, we’ve identified 3 principles of telecoms and information networks that are counter-intuitive or run contrary to accepted notions and beliefs.  When we discuss policy debates, such as net neutrality or SOPA, and corporate activity such as AT&T’s aborted merger with T-Mobile or Verizon’s spectrum and programming agreement with the cable companies, we will approach and explain them in the context of Metcalfe’s Virtue (demand vs supply), the Regulatory Paradox (vertical vs horizontal orientation; not big vs small), and  the law of unintended consequences (particularly what payment systems stimulate network investment).  Hopefully the various parties involved can utilize this approach to better understand all sides of the issue and come to more informed, balanced and productive decisions.

Vint supports the notion of a civil right (akin to universal service) to internet access.  This is misguided and unachievable via regulatory edict/taxation.  He also argues that there should be greater control over the network.  This is disingenuous in that he wants to throttle the openness that resulted in his godchild’s growth.  But consider his positions at Google and the IEEE.  A “counter-intuitive” combination of competition, horizontal orientation and balanced payments is the best approach for an enjoyable and rewarding experience on the slopes of the internet, and, who knows, may ultimately and counterintuitively offer free access to all.  The regulators should be like the ski patrol, ensuring the safety of all.  Ski school is now open.

Related reading:
A Perspective from Center for New American Security

Network Neutrality Squad (NNsquad) of which Cerf is a member

Sad State of Cyber-Politics from the Cato Institute

Bike racing also has a lot of counter-intuitive moments, like when your wheel locks with the rider in front.  Here's what to do!

Posted by: Michael Elling AT 01:23 pm   |  Permalink   |  0 Comments  |  Email
Thursday, 29 December 2011

67 million Americans live in rural areas. The FCC says the benchmark broadband speed is at least 4 Mbps downstream and 1 Mbps upstream. By that definition 65% of Americans have broadband, but only 50% of those who live in rural markets do; roughly 35 million. The rural shortfall is largely because 19 million rural Americans (28%) do not even have access to these speeds. Put another way, 97% of non-rural Americans have access to these speeds versus 72% of those living in rural areas.  Rural Americans are thus at a significant disadvantage when it comes to working from home, e-commerce or distance education.  Clearly, roughly 70% are buying broadband where they have access to it.
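The figures above hang together arithmetically.  A quick sketch; all percentages are the FCC/NTIA numbers cited in the text, and the arithmetic just ties them together:

```python
# The rural figures from the paragraph above, tied together.
# All percentages are the FCC/NTIA numbers cited in the text;
# the arithmetic just checks that they are mutually consistent.

rural_pop = 67_000_000
no_access = round(0.28 * rural_pop)    # ~19M lack 4/1 Mbps access
with_access = rural_pop - no_access    # the cited 72% with access
subscribers = round(0.50 * rural_pop)  # the ~50% who have broadband

take_rate = subscribers / with_access  # share buying where available
print(no_access, with_access, f"{take_rate:.0%}")
```

The ~69% take rate where service exists is the "clearly 70% are buying if they have access to it" claim.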

Furthermore, we would argue the FCC standard is no longer adequate for basic or high-definition multimedia, video and file downloads.  These applications require 10+ Mbps downstream and 3+ Mbps upstream to be user friendly.  Without those speeds you get what we call the "world-wide-wait" in rural markets for most of today's high-bandwidth applications.  In the accompanying 2 figures we see a clear gap between the blue lines (urban) and green lines (rural) for both download and upload speeds.  The result is that only 7% of rural Americans use broadband service at 6+/1.5+ Mbps, versus 22% nationwide today.

The problem in rural markets is the lack of alternative, affordable service providers. In fact, the NTIA estimates that 4% of Americans have no broadband provider at all, 12% have only 1 service provider and 44% have just 2 providers. Almost all rural subscribers fall into 1 of these 3 categories. Rural utilities, municipalities, businesses and consumers would benefit dramatically from alternative access providers, as economic growth is directly tied to broadband penetration.

The accompanying chart shows how vital broadband is to regional economic growth.  If alternative access drives rural broadband adoption to levels similar to urban markets, then local economies will grow an additional 3% annually.  That's because new wireless technology and applications such as home energy management, video on demand, video conferencing and distance learning provide the economic justification for alternative, lower-cost, higher bandwidth solutions.

Related Reading

FCC Broadband Map

US 3G Wireless Coverage Map

The UK is Far Ahead of the US; their "deficient" is our average

Rural Telcos Against FCC USF Reform

GA Tries to Reduce Subsidies, Again

 

Posted by: Michael Elling AT 08:09 am   |  Permalink   |  0 Comments  |  Email