From the Editor: Linux's Broadening Foundation

It's time to embrace 5G, starting with the Edge in our homes and hands. By Doc Searls

In June 1997, David Isenberg, then of AT&T Labs Research, wrote a landmark paper titled "Rise of the Stupid Network". You can still find it here. The paper argued against phone companies' intent to make their own systems smarter. He said the internet, which already was subsuming all the world's phone and cable TV company networks, was succeeding not by being smart, but by being stupid. By that, he meant the internet "was built for intelligence at the end-user's device, not in the network".

In a stupid network, he wrote, "the data is boss, bits are essentially free, and there is no assumption that the data is of a single data rate or data type." That approach worked because the internet's base protocol, TCP/IP, was as general-purpose as can be. It supported every possible use by not caring about any particular use or purpose. That meant it didn't care about data rates or types, billing or other selfish concerns of the smaller specialized networks it harnessed. Instead, the internet's only concern was connecting end points for any of those end points' purposes, over any intermediary networks, including all those specialized ones, without prejudice. That lack of prejudice is what we later called neutrality.
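To make that general-purpose quality concrete, here is a minimal sketch (mine, not Isenberg's) in Python: a local TCP echo server and two very different payloads. The host, port and payloads are invented for illustration; the point is only that the transport moves opaque bytes and never asks what they are for.

    # A minimal sketch of the end-to-end idea: TCP moves opaque bytes between
    # two endpoints; nothing in the transport knows or cares whether the
    # payload is a web request, a phone call or game state.
    import socket
    import threading

    HOST, PORT = "127.0.0.1", 9999  # hypothetical local endpoint for this demo

    def echo_server(ready):
        # The "network" side: it hands bytes back without interpreting them.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((HOST, PORT))
            srv.listen()
            ready.set()
            while True:
                conn, _ = srv.accept()
                with conn:
                    while True:
                        data = conn.recv(4096)
                        if not data:
                            break
                        conn.sendall(data)  # bytes in, bytes out; no notion of "purpose"

    ready = threading.Event()
    threading.Thread(target=echo_server, args=(ready,), daemon=True).start()
    ready.wait()

    # Two very different "applications" ride the identical, purpose-blind transport.
    for payload in (b"GET / HTTP/1.1\r\n\r\n", b"\x00\x01binary-game-state\xff"):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((HOST, PORT))
            cli.sendall(payload)
            print(cli.recv(4096))

Whether the bytes are a web request or game state, the transport code is identical; whatever smarts there are live at the two ends.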

The academic term for the internet's content- and purpose-neutral design is end-to-end. That design was informed by "End-to-End Arguments in System Design", a paper by Jerome Saltzer, David P. Reed and David D. Clark, published in 1980. In 2003, David Weinberger and I cited both papers in "World of Ends: What the Internet Is and How to Stop Mistaking It for Something Else". In it, we explained:

When Craig Burton describes the Net's stupid architecture as a hollow sphere comprised entirely of ends, he's painting a picture that gets at what's most remarkable about the Internet's architecture: Take the value out of the center and you enable an insane flowering of value among the connected end points. Because, of course, when every end is connected, each to each and each to all, the ends aren't endpoints at all.

And what do we ends do? Anything that can be done by anyone who wants to move bits around.

Notice the pride in our voice when we say "anything" and "anyone"? That comes directly from the Internet's simple, stupid technical architecture.

Because the Internet is an agreement, it doesn't belong to any one person or group. Not the incumbent companies that provide the backbone. Not the ISPs that provide our connections. Not the hosting companies that rent us servers. Not the industry associations that believe their existence is threatened by what the rest of us do on the Net. Not any government, no matter how sincerely it believes that it's just trying to keep its people secure and complacent.

To connect to the Internet is to agree to grow value on its edges. And then something really interesting happens. We are all connected equally. Distance doesn't matter. The obstacles fall away and for the first time the human need to connect can be realized without artificial barriers.

The Internet gives us the means to become a world of ends for the first time.

Or the last. Because right now, the descendants of the phone companies David Isenberg schooled on the virtues of the internet's stupidity are working very hard to make the internet of tomorrow as smart as can be. Their name for tomorrow's internet—or one big part of it—is 5G.

Simply put, 5G is an upgrade to the existing cellular data system. It will feature low latencies (typically in the single-digit milliseconds), local clouds for high-demand purposes and very high data speeds (typically 1Gb/s down and 300Mb/s up). So, rather than the "last mile", 5G is the last acre. Or less.

Here are some of the arguments I've heard for making the 5G internet smart:

  1. Many purposes at end points require low latency, high reliability and minimum data speeds. Gaming is a big one, but there are many others: telemedicine, traffic control, autonomous driving, virtual reality, utility usage optimization, to name just a few. The general-purpose stupid internet doesn't support those things well, or at all. So adjustments need to be made. For more on this argument, see this draft for the IETF by Ericsson. (Context: Ericsson plans to be even bigger in 5G than it already is in 4G, which is huge.)
  2. A purely stupid internet cannot deal with what's going on in today's internet already, much less in the 5G future. There is too much data traffic moving between too many end points, with too many specialized purposes at those end points, and with too many build-out requirements for all the intermediary networks.
  3. The internet's innards never have been completely stupid anyway. To obey TCP/IP's end-to-end design, which requires finding the best available paths for data within and between networks, routers need their own kinds of smarts. So do Content Delivery Networks (CDNs), which operate near collections of ends, such as in cities. Many big players distribute content through Akamai, Cloudflare and other CDN companies, but some of those players have gone direct or are in the process. Those include Amazon (through CloudFront), Apple, Disney and Netflix. CDNs are a big reason why TV streamed over the internet looks better than the broadcast kind—especially when shows and movies are in 4K HDR and at 60 frames per second (fps), which are also the current ideal for gaming. When resolutions go to 8K and up, we'll need what only the 5G players are planning for at scale. (The cable and FTTH players are looking mostly at fixed service to homes, while the 5G players are looking at all wireless devices.)
  4. Some of the giant services at a few of the internet's ends dominate usage by all the rest of them and, therefore, require special treatment—for both their users and themselves. These include Google, Facebook, Twitter, Apple, Amazon, Microsoft and every other net-based service you can name. Owing to their traffic volumes alone, all of those are, technically and functionally, peers on the internet's "pipes".
  5. Non-human end points—things—on the internet will soon number in the trillions, if they don't already. Dealing with those things will require a great deal of intelligence (artificial and otherwise). Security around those things also will need to be managed and updated constantly. It's easier to deal with those IoT eventualities with localized approaches that are close to the end-thing population, and not just at or in any of those things themselves.

I am told by those making these arguments that they appreciate the internet's base-level stupidity and its general-purpose nature—and that they see 5G's advantages as additional to those virtues.

But what if 5G in a practical way gets built out only for the big players' own special purposes? Will that effectively lock out everything else, especially what can only come from individuals doing original stuff at the internet's ends?

Think about this: 5G networks will be optimized by lots of AI and ML from large companies operating centrally, and will contain lots of services built around corporate APIs on which apps at the ends will rely. Will all this augment or thwart human intelligence and creativity? Or both, in unknown ways over which none of us will have much, if any, control?

So you see my big concern. It's almost too easy to imagine 5G ending up as nothing but proprietary and closed services. Add to that the simple fact that it is easier to build closed and proprietary stuff on top of the internet than it is on top of Linux (which was a concern of my column last month).

In fact, 5G is already controversial in some ways. Paranoia about new wireless build-out leads to stories such as Reinette Senum's "The 5G Network: What You Don't Know May Kill You". In "Enough of the 5G Hype", Ernesto Falcon of EFF writes, "But don't be fooled. They are only trying to focus our attention on 5G to try to distract us from their willful failure to invest in a proven ultrafast option for many Americans: fiber to the home, or FTTH." When I asked one FTTH company CEO for some thoughts about 5G, he replied, "(Screw) 5G! People love fat pipe and all the 5G hype just lets the smart money lay fiber. It is like watching a movie you know the ending to and the plot is really slow!"

Other old internet hands, many of whom hang out on a list I'm on, give 5G a nearly unanimous thumbs-down.

Still, the investment in 5G is massive, and that warrants attention, whether it's a bubble or not. And, the most constructive attention 5G is getting right now happens to come from The Linux Foundation. So, to learn more about what's going on with that, I went to the LF's Open Networking Summit (ONS) North America in April of this year. (Photos here.)

I went there thinking it would help that I already knew a fair amount about Linux, open source and networks. What I didn't expect was to find that the overlap between the ONS and what I knew already would round to zero, or at least would feel that way.

Looking across the agenda and the show floor, all I saw at first was a mess of two-, three-, four- and five-letter acronyms, all set like jewels in a display case of dense arcana.

Once I dug into it, however, I found that all this stuff is more than interesting—it's exciting, but also scary, especially if you have problems (as I do) with giant companies intermediating our networked lives even more than they do already. So here are my take-aways.

First, 5G is real. Or it will be. The build-out is happening, right now, in real time, all over the place. I also doubt it's a bubble. But we'll see.

Second, open source will enable it all. While Linux will support pretty much all of 5G's build-out at the base level, most open-source development within 5G networks is happening at layers above Linux and below usage by you and me.

Third, it's all happening below mainstream media radar. Stories need character and conflict, and very little of either shows through all the acronymic camouflage. But I'm holding our own radar gun here, and what I see is beyond huge. I can't count the number of open-source projects in those layers, or how many developers are involved within each of them, but I at least can see that their number and variety are legion.

For a helpful view toward some of this work, go to the Cloud Native Computing Foundation (CNCF)'s Interactive Landscape. Give it time to load. It's huge. (If possible, use a big screen.) A small block of text there explains:

This landscape is intended as a map through the previously uncharted terrain of cloud native technologies. There are many routes to deploying a cloud native application, with CNCF Projects representing a particularly well-traveled path.

So dig some paths down through the "cards" there, each of which looks like a square tile.

To make digging easier, knock out the non-open-source cards by clicking on the "Open source landscape" filter. When you do, at the top, it will say something like, "You are viewing 317 cards with a total of 1,566,515 (GitHub) stars, market cap of $6.01T and funding of $28.5B." These numbers change dynamically as development progresses and are sure to be larger when you read this. Also bear in mind that cloud native computing is just one part of the Open Networking/5G picture.

Fourth, this is a cooperative thing. No one company, no matter how big and dominant, is going to make 5G happen by itself. It's too costly for companies to invent the same or similar wheels and to risk market-slowing choices between products and services based on deeply incompatible standards (such as the split we saw here in the US between GSM and CDMA). Every player involved—carriers, services, software and hardware providers, you name it—has to work together on the lower-level protocols and code on which all of them will depend.

Fifth, there are table stakes required to play in collaborative 5G open-source development, and they are not small. Large amounts of geek power are required to create the standards, run the orgs and write the code—and all of those cost money. But the expense can be rationalized by its because effects, which is what happens when you make money because of your investment rather than with it. I've been talking up because effects since forever, it seems (and in fact co-coined the term with J.P. Rangaswami many years ago). It always has been a hard principle for big tech companies (or any business) to grasp, but clearly The Linux Foundation has found a way to convince a lot of large companies at once to embrace it. Hats off.

Sixth, the downsides are barely on the table yet. Everybody developing toward 5G is clear about the good stuff it will enable. The bad is nowhere to be heard or seen at shows like this one. (Except for the obvious security stuff, which is always a big focus with any new technology.)

Seventh, it's still early enough for others to get involved. I volunteer Linux Journal and readers who can bring work and value to the 5G table.

Two weeks have passed since I flew back from the conference, with my mind still blown by the volume and variety of work going on. In that time, nearly all my work cycles have been devoted to putting that work on our radar, and thinking about what it means for Linux Journal and its readers.

I see threats and opportunities, but not as distinct issues, because there are opportunities within the threats.

The biggest threat I see is potentially losing the free and open internet—the goose that laid all the golden eggs of today's online world, including eggs that themselves became golden egg-laying geese. Let's call them GELGs. The biggest GELG of them all, other than the mother goose—the internet—is Linux.

It should help to remember that Linux was hatched as a gosling on Linus Torvalds' personal computer, starting with one post to one Usenet newsgroup, on the free and open internet. Nearly everything that has happened for Linux since then is thanks to an internet that was as stupid—in the Isenbergian sense—as it could be.

Will the smart new 5G space be as good a hatchery for GELGs as the stupid old internet? More specifically, will a Linus of tomorrow be able to hatch on 5G the kind of massively useful and world-making thing that the Linus of old did on the internet in 1991?

One could point at GitHub and say "Look at the millions of new open-source code bases being developed" and claim the answer is yes. Yet not all those open-source code bases are built to preserve and embody the same kinds of freedoms their creators enjoy. And, the telcos have a long and awful record of fighting the free, open and stupid internet. Why would they change now?

Perhaps because The Linux Foundation makes damn sure they do. Or so I hope.

A confession I need to make here is that I'm still new to getting my head around The Linux Foundation, which has massive scope. Half the Global 2000 belongs to The Linux Foundation, and that's a pretty damned amazing fact, just by itself. But I have a long way to go before I fully grok what's going on.

Yet so far, I'm very encouraged by the role I see The Linux Foundation playing with big companies especially, and how it seems to perform as a kind of United Nations, where positive mutual interests are brought together, problems are worked out, and wholes get more done than any parts or sums of parts. To my knowledge, no other organization in the world is better at doing that kind of thing or at the same scale.

Years ago, when Dan Frye was running a corner of IBM that employed a number of Linux kernel developers, he told me it took years before IBM discovered that it couldn't tell those developers what to do—and that in fact, things worked the other way around: it was the kernel developers who told IBM what to do. Put another way, adaptation was by applications to the kernel, not by the kernel to applications. True, uses naturally informed kernel development to some degree, but no company was in charge of kernel development, regardless of how many kernel developers a company employed or how well it paid them.

I like to think the same applies in The Linux Foundation's relationship to its corporate members. I suspect those members don't tell the Linux Foundation what to do, and that it's really the other way around—whether those companies know it yet or not.

I was able to test that hypothesis at the ONS by attending its keynote presentations. As is the custom at tradeshows, sponsors got time on the ONS stage to give presentations of their own. Typically, these tend to be vanity efforts: corporate brochures in the form of slide shows and videos. But I saw relatively little of that at the ONS. Instead, I saw one company after another present their thoughts and insights as parts of groups working on the same kind of thing, using cooperatively developed open standards and open-source code. Yes, there was plenty of corporate self-flattery, but most of it was secondary to reporting on work shared by others toward common goals.

I should pause here to acknowledge complaints that some of us have had about The Linux Foundation. (This Reddit thread includes most of them.) I also think those complaints are irrelevant (or at least secondary) to the opportunities materializing in the 5G build-out, which require engagement.

I see two opportunities here: one for our readers and one for us. The first is to help where we can to make sure the internet's original stupidity survives and thrives as well over 5G as it does over wide-open fiber. The second is to expand Linux Journal's coverage to include more of what's happening under The Linux Foundation's many umbrellas.

For the first opportunity, I think we can contribute best to what The Linux Foundation calls the Edge. Its focus on that was announced in January 2019, when it launched what it calls LF Edge (LFedge.org). If you go to that site, what you'll see today (or at least when I'm writing this, in late April 2019) looks like pretty corporatized stuff. Try to ignore that. In the conversations I had with The Linux Foundation and other people at the ONS, it was clear to me that Edge is still a wide-open bucket of interests and possibilities.

Metaphorically, Edge is the side of the 5G table where each of us will sit. There are also empty chairs there for the kinds of geeks who are reading these words right now.

"All the significant trends start with technologists", Marc Andreessen told me, way back when Netscape was open-sourcing the browser that became Firefox. I think that's still one of the most simple, true and challenging statements that has ever been uttered.

If you're a technologist who would like to start a significant trend where one is very much needed, now is the time, 5G is the space, and LF Edge is the place. And so is Linux Journal. If you've got one of those trends, talk to us about it.

About the Author

Doc Searls is a veteran journalist, author and part-time academic who spent more than two decades elsewhere on the Linux Journal masthead before becoming Editor in Chief when the magazine was reborn in January 2018. His two books are The Cluetrain Manifesto, which he co-wrote for Basic Books in 2000 and updated in 2010, and The Intention Economy: When Customers Take Charge, which he wrote for Harvard Business Review Press in 2012. On the academic front, Doc runs ProjectVRM, hosted at Harvard's Berkman Klein Center for Internet and Society, where he served as a fellow from 2006–2010. He was also a visiting scholar at NYU's graduate school of journalism from 2012–2014, and he has been a fellow at UC Santa Barbara's Center for Information Technology and Society since 2006, studying the internet as a form of infrastructure.
