EOF

What's Our Next Fight?

Doc Searls

Issue #266, June 2016

We won the battle for Linux, but we're losing the battle for freedom.

Linux turns 25 in August 2016. Linux Journal turned 22 in April 2016. (Issue #1 was April 1994, the month Linux hit version 1.0.) We're a generation into the history of our cause, but the fight isn't there anymore, because we won. Our cause has achieved its effects.

It helps to remember that Linux was a fight, and so were free software and open source. If they weren't fights, they wouldn't have won what they did. They also wouldn't have been interesting, meaning there wouldn't have been any Linux stories, or a Linux Journal.

Stories are what make a subject interesting. To program a story, you need three elements:

  1. A protagonist, which could be a person, a group or an easily personified cause that people can identify with.

  2. A problem, or a set of problems, against which the protagonist struggles.

  3. Movement toward resolution.

If you lack one of those, you don't have a story.

With Linux, we had a cause and a person who personified it, whether he liked it or not. Our problem was mentalities embodied in opponents such as Microsoft and the herd of dull enterprise suppliers of “solutions” based on proprietary variants of UNIX. As Linux and its opposition grew, we had movement toward what Linus called “world domination”.

Which is where we are now. In terms of actual use, Linux's quo has more status than any of its early opponents ever had. Damn near everything runs on Linux, or on something so similar that you can open a shell on it and get stuff done. (Example: Apple's OS X, which wouldn't be what it is if Linux hadn't already been the leading *nix OS.) Even Microsoft runs lots of its own stuff (such as Bing) on Linux. Bill Gates no longer cares. He's a philanthropist now. Steve Jobs is dead. Linux's old UNIX enemies are zombies or gone. And, most of the world's smart mobile devices run on Android, a derivative of Linux.

So what's our next fight?

Here are some candidates. Rather than argue with any of the cases I make for them (which will all be too brief), tell me the cases we should be making. Think: What will grow our community of readers and writers here at Linux Journal? And what effects do we want to have in the world?

General-Purpose Computing and Networking

Linux was born on a generic 386 computer and grew on a boundless population of others that were called “compatibles” or “clones” in the early days.

They were an accident of history. When Intel introduced the 8086 in 1978, the idea wasn't to make its descendants the most ubiquitous CPUs ever. Intel wanted to sell chips to makers of closed and proprietary devices at a time when there were no other kinds. “Open” back then meant “we'd like this thing to get along with some other things as long as it doesn't threaten our market position”.

When IBM came out with the 8088-based IBM PC in 1981, the idea was to sell desktop IBM boxes into the business marketplace. The PC succeeded mostly because it had a lot of backplane, a strategy modeled by the Apple II in prior years (and abandoned by Steve Jobs with the first round of Macintosh computers). This opened markets for expansion cards, peripherals, publications, training, events and software to run on MS-DOS—the OS that IBM, in a move deeply out of character, licensed from somebody else. It helped that the PC could run other OSes as well, such as CP/M. But not many bothered with that until Linus broke through ten years later, thanks to another accident of history called the Internet. That too was a general-purpose thing that no company ever would have invented on its own. Even as late as 1994, Microsoft fought the Internet with an “on-line service” of its own called Marvel (scripting.com/davenet/1994/10/18/billgatesvstheinternet.html). Fortunately for Microsoft, Marvel failed instantly. Bill Gates then got wise, and issued his “Internet Tidal Wave” memo in early 1995.

IBM saw the PC as an exclusive hardware play. The only reason it failed to remain exclusive was that Phoenix Technologies came out with a compatible BIOS (www.computerworld.com/article/2585652/app-development/reverse-engineering.html), which it reverse-engineered to keep it legal. The clone market was born when Phoenix then began selling its knock-off BIOS, and it grew as chip makers did the same with knock-offs of the x86 CPU. As a result, the PC became a generic commodity and the general-purpose computer was born. It's still with us.

For now.

In The Future of the Internet—and How to Stop It (https://dash.harvard.edu/handle/1/4455262), Jonathan Zittrain calls general-purpose computers and networks generative, meaning by nature they generate and support countless other inventions and services, and the markets that grow around them.

The term “platform” suggests a bottom level that supports stuff above. But generative computers and networks support whole markets both above and below their own level in the stack. To illustrate this, Zittrain uses an hourglass, with the generative thing at the waist in the middle (Figure 1).

Figure 1a. PC Hourglass

Figure 1b. IP Hourglass

He also notes how Apple, for example, limits hardware generativity below the waist of the hourglass by preventing other companies from making devices that run its OS.

More important, he warns that generativity itself is under threat by a new model exemplified by giant controlling vendors such as Google, Apple, Facebook and Amazon (together called “GAFA” in Europe). Each, he says, has a “model for lockdown” that:

...exploits near-ubiquitous network connectivity to let vendors change and control their technologies long after they've left the factory—or to let them bring us, the users, to them, as more and more of our activities shift away from our own devices and into the Internet's “cloud”.

These technologies can let geeky outsiders build on them just as they could with PCs, but in a highly controlled and contingent way....

This model is likely the future of computing and networking, and it is no minor tweak. It is a wholesale revision to the Internet and PC environment we've experienced for the past thirty years....We are at risk of embracing this model, thinking it is the best of both worlds—security and whimsy—when it may be the worst. Even fully grasping how untenable our old models have become, consolidation and lockdown need not be the only alternative. We can stop the future.

That book came out in 2008. It's much worse now. The general-purpose PC business is itself a zombie. IBM is long gone, having sold its PC business to Lenovo, which makes nice boxes but also likes to install adware on its new laptops. Other clone-makers have left the business or the planet entirely. Microsoft now makes its own closed and proprietary hardware on the Apple model. And, Google has at least as much control over the Android mobile device market as Microsoft had over its PC OEMs, back in the day.

CyanogenMod (www.cyanogenmod.org) is a worthy Android alternative, but Google appears to be controlling the mobile market at least as well as Microsoft controlled its PC one. Mobile hardware also gets old fast, making it a swarm of moving targets, all changing constantly. So it's hard for a generative OS to support whole stacks of hardware below and software above.

All four GAFA companies are also better than their predecessors at taking advantage of our next enemy: centralization.

Decentralization and Distributed Everything

The original model for the Internet was drawn by Paul Baran (https://en.wikipedia.org/wiki/Paul_Baran) in 1962 (www.rand.org/about/history/baran.html). It's the one on the right in Figure 2.

Figure 2. Centralized, Decentralized and Distributed Networks

He called it “distributed”, to contrast it with “centralized” and “decentralized”, which were the prevailing network architectures of the time, and for the foreseeable future. As Baran saw it, a distributed network would be composed of independent peers, each of which could connect to any other peer or combination of peers. TCP/IP (https://en.wikipedia.org/wiki/Internet_protocol_suite), the Internet's base protocol pair, assumed a distributed network to begin with, and that's why it became so generative in 1995, when commercial activity was no longer kept off of it.

Yet nearly all the sites and services on the Web are built on the client-server computing model (https://en.wikipedia.org/wiki/Client-server_model). While client-server is ideal for distributed applications, it still presumes a server as a center, and can be used to overlay many centralized assets and services onto the distributed Internet. This is how dominant companies create worldwide webs of deep dependencies, controlling whole markets for hardware, software, providers, customers and a billion or more users.
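It's worth remembering that the centralization happens above TCP/IP, not inside it. Here is a minimal sketch in Python (3.8 or later, standard library only; the port number and messages are arbitrary examples of my own) showing that the protocol itself is symmetric: the same host can listen like a "server" and connect like a "client", which is all a peer in Baran's distributed design needs to do.

    import socket
    import threading
    import time

    def listen(port):
        # Any host can accept a connection; nothing about TCP
        # reserves this role for a privileged "server".
        with socket.create_server(("", port)) as s:
            conn, addr = s.accept()
            with conn:
                print("heard from", addr, ":", conn.recv(1024).decode())

    def call(host, port, message):
        # The same host can also originate a connection to any peer.
        with socket.create_connection((host, port)) as c:
            c.sendall(message.encode())

    t = threading.Thread(target=listen, args=(9999,))
    t.start()
    time.sleep(0.5)                      # give the listener a moment to bind
    call("localhost", 9999, "hello from another peer")
    t.join()

Run on two machines instead of one process, those few lines are the whole difference between a peer and a client. Everything that makes one end a "center" is policy and business model, not protocol.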

One of the great inventions on the Web was blogging. Thanks to RSS, anybody could syndicate what he or she wrote to the whole world, meaning anybody's publication easily could get subscribers. I started blogging myself in late 1999. By 2003, I was one of the top 20 bloggers in the world, according to Technorati, the blog search engine that Dave Sifry (https://en.wikipedia.org/wiki/Dave_Sifry) invented while helping me write a story about blogging for Linux Journal (www.linuxjournal.com/article/6497). My blog (doc.weblogs.com) had between a few thousand and many thousands of visitors per day, most from people who subscribed to my RSS feed.
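For readers who never looked under the hood, the plumbing behind that was almost embarrassingly simple: an RSS feed is just an XML file, and "subscribing" means fetching it now and then and noticing new items. Here's a minimal sketch in Python (standard library only; the feed URL is a hypothetical placeholder, not a real address):

    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://example.com/feed.rss"   # hypothetical placeholder

    # Fetch the feed and parse it as XML.
    with urllib.request.urlopen(FEED_URL) as response:
        tree = ET.parse(response)

    # RSS 2.0 puts entries under channel/item, each with a title and link.
    for item in tree.findall("./channel/item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="")
        print(title, "->", link)

Every blog that published a feed could be read this way by anyone, with no platform in the middle deciding what gets seen.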

Blogs were part of what my son Allen around the same time called “the live Web” (www.linuxjournal.com/article/8549), which he saw branching off the “static” Web of “sites” at “addresses” with “domains” and “locations” that were “built” and “visited” or “browsed”.

Back in the early 2000s, it would take search engines like Google up to a month to re-visit and index a static Web site. But over the coming years, three things caused the live Web to eat the static one: search engine time-to-index approached zero, “social media” (starting with Twitter and Facebook) took off, and smartphones (with their apps) became a required accessory.

During that transition, Hossein Derakhshan (https://en.wikipedia.org/wiki/Hossein_Derakhshan), a Canadian journalist from Iran who blogged under the handle Hoder (at hoder.ir, now gone), served six years in an Iranian prison (yes, for his blogging), getting out in 2014. Appalled by what happened to the Web, and especially to blogging, he wrote “The Web We Have to Save” (https://medium.com/matter/the-web-we-have-to-save-2eb1fe15a426#.6ktnkemaw) on the centralized blogging platform Medium (https://medium.com), a recent creation of Ev Williams (https://en.wikipedia.org/wiki/Evan_Williams_(Internet_entrepreneur)), who co-created Blogger (https://en.wikipedia.org/wiki/Evan_Williams_%28Internet_entrepreneur%29#Pyra_Labs_and_Blogger), which was acquired by Google in 2003 and somehow survives. In that piece, Hossein wrote, “The rich, diverse, free web that I loved—and spent years in an Iranian jail for—is dying. Why is nobody stopping it?”

He especially mourned the loss of hyperlinks that make the Web a web:

Since I got out of jail, though, I've realized how much the hyperlink has been devalued, almost made obsolete.

Nearly every social network now treats a link as just the same as it treats any other object—the same as a photo, or a piece of text—instead of seeing it as a way to make that text richer. You're encouraged to post one single hyperlink and expose it to a quasi-democratic process of liking and plussing and hearting: adding several links to a piece of text is usually not allowed. Hyperlinks are objectivized, isolated, stripped of their powers.

At the same time, these social networks tend to treat native text and pictures—things that are directly posted to them—with a lot more respect than those that reside on outside web pages. One photographer friend explained to me how the images he uploads directly to Facebook receive a large number of likes, which in turn means they appear more on other people's news feeds. On the other hand, when he posts a link to the same picture somewhere outside Facebook—his now-dusty blog, for instance—the images are much less visible to Facebook itself, and therefore get far fewer likes. The cycle reinforces itself.

Some networks, like Twitter, treat hyperlinks a little better. Others, insecure social services, are far more paranoid. Instagram—owned by Facebook—doesn't allow its audiences to leave whatsoever. You can put up a web address alongside your photos, but it won't go anywhere. Lots of people start their daily online routine in these cul de sacs of social media, and their journeys end there. Many don't even realize that they're using the Internet's infrastructure when they like an Instagram photograph or leave a comment on a friend's Facebook video (qz.com/333313/millions-of-facebook-users-have-no-idea-theyre-using-the-internet). It's just an app.

But hyperlinks aren't just the skeleton of the web: they are its eyes, a path to its soul. And a blind webpage, one without hyperlinks, can't look or gaze at another webpage—and this has serious consequences for the dynamics of power on the web.

What made this happen is centralization. The GAFA giants and their like dominate by plying the arts and sciences of centralization to a near-absolute degree. As a result, we are forgetting, and failing to protect, the distributed nature of the Net itself.

I despair of fighting this, and said so in “Giving Silos Their Due” (www.linuxjournal.com/content/giving-silos-their-due), an EOF a few months back. Phil Windley responded with “Decentralization Is Hard, Maybe Too Hard” (www.windley.com/archives/2016/02/decentralization_is_hard_maybe_too_hard.shtml), which was even more despairing. Writes Phil:

I remember telling Doc a while back that I'm often afraid that the Internet is an aberration. That it's a gigantic accident brought on by special circumstances. That accident showed us that large-scale, decentralized systems can be built, but those circumstances are not normal.

We have now lived so long as serfs in GAFA's feudal castles (https://www.schneier.com/blog/archives/2012/12/feudal_sec.html) that it is hard to imagine the networked world lacking dependence on overlords to provide much of what we need and take for granted—all on their terms rather than ours. Which brings us to our next cause.

Privacy

At Black Hat 2015, Jennifer Stisa Granick (cyberlaw.stanford.edu/about/people/jennifer-granick), Director of Civil Liberties at the Stanford Center for Internet and Society (cyberlaw.stanford.edu), gave a keynote talk titled “The End of the Internet Dream” (https://backchannel.com/the-end-of-the-internet-dream-ba060b17da61#.spambd160). Among many other scary things, she said this:

The first casualty of centralization has been privacy. And since privacy is essential to liberty, the future will be less free.

This is the Golden Age of Surveillance. Today, technology is generating more information about us than ever before, and will increasingly do so, making a map of everything we do, changing the balance of power between us, businesses, and governments. The government has built the technological infrastructure and the legal support for mass surveillance, almost entirely in secret.

Here's a quiz. What do emails, buddy lists, drive back-ups, social networking posts, web browsing history, your medical data, your bank records, your face print, your voice print, your driving patterns and your DNA have in common?

Answer: The US Department of Justice (DOJ) doesn't think any of these things are private. Because the data is technically accessible to service providers or visible in public, it should be freely accessible to investigators and spies....

The physical design and the business models that fund the communications networks we use have changed in ways that facilitate rather than defeat censorship and control.

Privacy is something we define and control with technology and norms. In the physical world, we've had thousands of years to create those, starting with the original privacy tech: clothing and shelter. In the networked world, we've had only a couple decades. That's not enough. So we have a lot of work to do, starting with the equivalents of clothing and shelter. What are those? The answers need to be ones any muggle can use—not just wizards like us.
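I don't know what the clothing and shelter of the networked world will finally look like, but one obvious candidate for "clothing" is encrypting our own data before anyone else ever touches it. Here's a minimal sketch of that idea (my example, using Python and the third-party cryptography library; a sketch, not a prescription):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # the "clothing": keep it to yourself
    box = Fernet(key)

    ciphertext = box.encrypt(b"my location history")
    print(ciphertext)                  # safe to hand to any cloud service
    print(box.decrypt(ciphertext))     # only the key holder can read it

The hard part isn't the code; it's making key handling as unconscious as getting dressed, so muggles never have to see it.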

The True Internet of Things

The “Internet of Things” today is a mess traveling as a fantasy. Most Internet-connected “things” sold by Amazon, Google, GE and others live in silo'd systems meant to trap customers and fail—on purpose—to interoperate with things in other companies' silos. Together these comprise what Phil Windley calls The CompuServe of Things (www.windley.com/archives/2014/04/the_compuserve_of_things.shtml).

Worse, many of them are designed to spy on you. As Jennifer Granick puts it:

Now we have networked devices, the so-called Internet of Things, that will keep track of our home heating, and how much food we take out of our refrigerator, and our exercise, sleep, heartbeat, and more. These things are taking our off-line physical lives and making them digital and networked, in other words, surveillable.

Shoshana Zuboff says this is inevitable (www.faz.net/aktuell/feuilleton/the-surveillance-paradigm-be-the-friction-our-response-to-the-new-lords-of-the-ring-12241996.html?printPagedArticle=true#pageIndex_2), because it follows three laws:

First, that everything that can be automated will be automated. Second, that everything that can be informated will be informated. And most important to us now, the third law: in the absence of countervailing restrictions and sanctions, every digital application that can be used for surveillance and control will be used for surveillance and control, irrespective of its originating intention.

So, in obedience to an original intention of giving you better advertising, new Samsung TVs watch you while you watch them. Exactly what nobody will ever ask for.

Phil Windley sums up the challenge this way:

On the Net today we face a choice between freedom and captivity, independence and dependence. How we build the Internet of Things has far-reaching consequences for the humans who will use—or be used by—it. Will we push forward, connecting things using forests of silos that are reminiscent of the on-line services of the 1980s, or will we learn the lessons of the Internet and build a true Internet of Things?

Good question.

Freedom

I have a long list of other topics, but every one I can think of goes back to where we were in the first place—or before the first place. To freedom.

Linux is called GNU/Linux by many in the Free Software movement. Their ethos and their code helped make Linux possible, and Linux still embodies both.

The problem with “free software”, besides the fact that it needed explanation (“free as in freedom, not as in beer”), was that it had no box office. “Open source” did have box office, and we (myself included) did a pretty good job of getting it known, if not well understood, by the whole technical world.

I don't think that making a big thing about open source hurt the cause of freedom. But I also don't think it helped much, if at all. Regardless of the causalities involved, we took our eye off the freedom ball. Here's how Eben Moglen put it in a talk at Freedom to Connect in 2012 called “Innovation under Austerity” (https://www.softwarefreedom.org/events/2012/freedom-to-connect_moglen-keynote-2012.html):

...if we'd had a little bit more disintermediated innovation, if we had made running your own Web server very easy, if we had explained to people from the very beginning how important the logs are, and why you shouldn't let other people keep them for you, we would be in a rather different state right now.

The next Facebook should never happen. It's intermediated innovation serving the needs of financiers, not serving the needs of people. Which is not to say that social networking shouldn't happen, it shouldn't happen with a man-in-the-middle attack built in to it. Everybody in this room knows that. The question is how do we teach everybody else....

The nature of the innovation established by Creative Commons, by the Free Software Movement, by Free Culture, which is reflected in the Web, in Wikipedia, in all the Free Software operating systems now running everything, even the insides of all those locked-down vampiric Apple things I see around the room. All of that innovation comes from the simple process of letting the kids play and getting out of the way. Which, you are aware, we are working as hard as we can to prevent now completely.

Increasingly, all around the world the actual computing artifacts of daily life for human individual beings are being made so you can't hack them. The computer science laboratory in every twelve-year-old's pocket is being locked-down.

How did we let that happen? And who are “we” anyway? In “A Tale of Three Cultures” (www.linuxjournal.com/article/5912), which ran in LJ in 2002, I tried to pull apart the separate cultures in our community:

One is purely technical. It's pre-Net, pre-UNIX and maybe even pre-cultural. It shows up where raw technology meets the real world, and its concerns are utterly practical. “Here's the problem”, it says. “Let's solve it.” This is a heads-down culture and civilization depends on it. Embedded systems are what run our cash registers and brake systems, our airplane guidance systems, our factory robotics, our flow meters, our stoplights and our heating systems. The Net and Linux are both handy ways to solve countless embedded systems problems—extremely handy, it turns out. One morning at SXSW I read that embedded Linux soon will run in something like 60,000 cash registers at Home Depot. It's a big story, but mostly a technical one. Does Home Depot give a damn about Linux as a cause? Or about the lawmaking that threatens to turn the Net into nothing more than a backbone for industrial-grade commerce, plus a bunch of culverts for moving “content” stamped and sanitized by ubiquitous digital content management? I kind of doubt it.

The other two cultures are the geeks and the entertainment industry, what Larry Lessig and others like to characterize geographically as Silicon Valley and Hollywood.

The geeks built the Net and want to keep it free. Hollywood wants to control it. That's the basic conflict. Since the beginning, the geeks have had resolute faith in the Net's ability to resist control by government and commercial interests. Geeks interpret attempts at control as mere problems the Net will naturally route around. The same goes for Linux, which has proven handy for extending the Net upward into the operating system and outward into the world. That geek philosophy was manifest in John Perry Barlow's “A Declaration of the Independence of Cyberspace”, even six years after it was written in February 1996. The provocations have changed, but the sides remain the same. And, like I said, those sides dwell in our own heads.

Turns out it wasn't just Hollywood. Geeks who succeeded went both Hollywood and Wall Street—and away from themselves as ordinary folks like you and me. Google, for example, does lots of good in the world. I know lots of people there, and they're all very nice—including the founders, whom I've met and like. Yet Google is a colossus that plays a huge role in countless lives, with almost zero accountability to individual human beings. By design.

A good example comes from a recent post at BoingBoing titled “A Plea for Help From Someone Being Casually Crushed Under Google's Heel” (https://bbs.boingboing.net/t/a-plea-for-help-from-someone-being-casually-crushed-under-googles-heel/76666). It's by a couple who say they are paying customers of Google, for storage of Google Apps data; yet Google has yanked their accounts for some machine-made reason that no human can be found to fix. My wife and I are in the same boat with Gmail. Something went wrong a few months back, and Gmail barely works anymore for either of us. Fortunately, Gmail was a secondary system for us, but we feel the pain.

An irony here is that Google prides itself on knowing people extremely well. Yet even that has a Matrix-like inhumanity to it.

We see the same thing with Facebook. Mark Zuckerberg is another super-smart geek who now runs a giant company involved in more than a billion human lives, with almost no accountability to the individuals who depend on the company's services.

Like Google, Facebook is a B2B business that sells data about its consumers to its actual customers, which are corporations. So, while Facebook talks one game—about doing good things for individuals—it plays another. For example, at the latest F8 conference, in April, Mark Zuckerberg said this (money.cnn.com/2016/04/12/technology/facebook-messenger-bots/index.html):

Now that Messenger has scaled, we're starting to develop ecosystems around it. And the first thing we're doing is exploring how you can all communicate with businesses.

You probably interact with dozens of businesses every day. And some of them are probably really meaningful to you. But I've never met anyone who likes calling a business. And no one wants to have to install a new app for every service or business they want to interact with. So we think there's gotta be a better way to do this.

We think you should be able to message a business the same way you message a friend. You should get a quick response, and it shouldn't take your full attention, like a phone call would. And you shouldn't have to install a new app.

Let's pause here. It looks like he's going to give us a better way to talk to businesses, right? Maybe a new way to issue a call for help, or to send out a request for a plumber or a licensed electrician—something that helps us deal with the typical pains of being a customer of many products and services in the real world. Now, let's hit Play again:

So today we're launching Messenger Platform. So you can build bots for Messenger.

Who is the “you” he's talking about here? It's not the “you” who wants a better way to talk to businesses. It's developers working for businesses that don't want human beings talking to customers, a decision those businesses already made when they replaced customer-service people with apps customers install. Zuck again:

And it's a simple platform, powered by artificial intelligence, so you can build natural language services to communicate directly with people. So let's take a look.

CNN, for example, is going to be able to send you a daily digest of stories, right into messenger. And the more you use it, the more personalized it will get. And if you want to learn more about a specific topic, say a Supreme Court nomination or the zika virus, you just send a message and it will send you that information.

And thus he obeys all three of Zuboff's Laws.

And he's not the only one misdirecting attention away from surveillance. Nearly every story about Facebook's new bot thing focuses on lost jobs or the threatened app marketplace. Not on the loss of freedom.

In “'Bot' is the wrong name...and why people who think they are silly are wrong” (https://medium.com/lightspeed-venture-partners/bot-is-the-wrong-name-and-why-people-who-think-they-are-silly-are-wrong-dc0c0b76ae18#.mkl3r99xo), Aaron Batalion says all kinds of functionality now found only in apps will move to Messenger. “In a micro app world, you build one experience on the Facebook platform and reach 1B people.”

Nobody suggests building one method for connecting a billion people to every business they deal with—which, in case you don't know by now, is what I've been evangelizing with ProjectVRM (blogs.harvard.edu/vrm) for the last ten years. Because it's easier to think big than think right. And by “right” I mean free.

So where are we headed here?

In “The End of the Internet Dream” (https://backchannel.com/the-end-of-the-internet-dream-ba060b17da61#.wf135a2n1), Jennifer Granick writes:

Twenty years from now:

  • You won't necessarily know anything about the decisions that affect your rights, like whether you get a loan, a job, or if a car runs over you. Things will get decided by data-crunching computer algorithms and no human will really be able to understand why.

  • The Internet will become a lot more like TV and a lot less like the global conversation we envisioned 20 years ago.

  • Rather than being overturned, existing power structures will be reinforced and replicated, and this will be particularly true for security.

  • Internet technology design increasingly facilitates rather than defeats censorship and control.

And it will all be done on Linux.

Remember how Zuboff's Third Law said “In the absence of countervailing restrictions and sanctions”?

It's our job to correct that absence.

Doc Searls is Senior Editor of Linux Journal. He is also a fellow with the Berkman Center for Internet and Society at Harvard University and the Center for Information Technology and Society at UC Santa Barbara.