Tim Berners-Lee and the Web He Refused to Own
1. What You’re Actually Doing Right Now
Look at the address bar of your browser. There’s a good chance it starts with https://. You’ve seen that string thousands of times. You’ve probably never thought about it.
That :// is a small fossil. Tim Berners-Lee himself admitted the double slash in it was unnecessary — a syntactic artifact from a few days of design work in 1990 that got baked into every URL ever typed since. He later told a reporter he could have left it out entirely. “There you go,” he said. “It seemed like a good idea at the time.”
That story is funnier when you realize the same person who casually invented the double-slash also designed the entire addressing system for the World Wide Web, the protocol that delivers its pages, the markup language that structures them, and then — critically — chose not to patent any of it.
Before we go further, one distinction matters and most people get it wrong: the internet and the Web are not the same thing. The internet is the network — a global infrastructure of routers, cables, and protocols for moving packets of data between computers. It was designed in the 1970s by Vint Cerf, Bob Kahn, and others, and by 1989 it was already running. Email ran on it. FTP ran on it. Usenet ran on it.
The Web is a layer on top of the internet. Think of the internet as the road network. The Web is one thing that runs on those roads: a specific system for publishing and linking documents, built on three specs:
URL ──identifies──> [ Web Server ]
                         |
                       HTTP
                (request / response)
                         |
                  [ Your Browser ]
                         |
                      renders
                         |
                       HTML
        (the document, with more URLs in it)

All three of those specs — HTTP, HTML, and the URL — were designed by one person, working largely alone, in about a year. And when the time came to decide who would own them, he gave them away.
This is the story of Tim Berners-Lee: the engineer who built the most consequential piece of software infrastructure in history, and then refused to keep it.
2. CERN, 1989: The Problem That Started Everything
Tim Berners-Lee was born in London in 1955. Both his parents were mathematicians who worked on the Ferranti Mark I — one of the first commercial computers ever sold. He grew up soldering circuits and building things. He studied physics at Oxford. He wrote software. He understood systems.
In 1980, he took a six-month consulting contract at CERN, the European particle physics laboratory near Geneva. He didn’t invent the Web then — but he built something that planted the seed. Working on his own time, he wrote a personal information management program he called ENQUIRE. It stored notes and linked them together by association, the way memory actually works, rather than forcing everything into a hierarchy. He used it to keep track of the people, projects, and machines at CERN. When the contract ended, the program stayed on a CERN computer. The disk was eventually lost.
He returned to CERN with a fellowship in 1984. By then, CERN had thousands of researchers, hundreds of computers, and a chronic information problem. Scientists from dozens of countries came and went on rotating contracts. Every time someone left, institutional knowledge walked out with them. Every time someone new arrived, they had to rediscover what existed. Documents lived on incompatible systems. Programs ran on different machines. There was no reliable way to find what you needed.
Berners-Lee described the problem plainly: people were storing information in their heads, and heads kept leaving.
In March 1989, he wrote a proposal. He called it Information Management: A Proposal. The idea was to build a distributed hypertext system — a web of linked documents that could live across many machines, accessed through a common protocol, with no central point of control. Any document could link to any other. Any machine could host content. Anyone could read it.
His boss at CERN, Mike Sendall, scrawled three words on the cover page before returning it: “Vague, but exciting...”
It was not immediately approved. It was not an official CERN project. But Sendall quietly gave Berners-Lee room to work on it. In 1990, Belgian systems engineer Robert Cailliau joined Berners-Lee in formalizing a management proposal that helped push the project forward inside CERN. In September 1990, Sendall approved the work and CERN acquired the NeXT machine Berners-Lee wanted to build it on. In October 1990, Berners-Lee began writing the first Web browser-editor. By the end of 1990, the first web server and browser were running at CERN. By August 1991, the first website was live and publicly accessible.
From “vague but exciting” to a working global information system: roughly two years.
3. The Trio: HTML, HTTP, and the URL
Three specifications. That’s the foundation of everything you’re reading right now — and every website, web app, REST API, and browser-based interface that has ever existed. It’s worth understanding what each one actually does.
URL — The Universal Address
Before the Web, there was no standard way to refer to a resource on a remote machine. You might know a file existed on a server somewhere, but addressing it required knowing the machine’s hostname, the protocol to use, and the path in a system-specific format. There was no universal syntax.
Berners-Lee defined the URL (originally called UDI — Universal Document Identifier, then URI before settling on URL) as a single string that encodes everything needed to locate any resource on any machine:
https://example.com/path/to/resource?query=value
└─┬─┘   └────┬────┘└───────┬───────┘└────┬─────┘
  │          │             │             │
scheme   authority        path         query
(protocol) (host)   (resource path) (parameters)

One address. Any resource. Any machine. Any protocol. The design was deliberately generic — the scheme prefix (http://, ftp://, mailto:) meant the same system could address resources across completely different protocols. That generality is why URLs still work today for things Berners-Lee never imagined. If you’ve ever copied a link and sent it to someone on a different device, a different OS, or in a different country, that just worked because of this design.
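You can watch that split happen with a few lines of Python’s standard library. This is just a sketch: the URL is the same illustrative one from the diagram, and nothing here is specific to any real site.

from urllib.parse import urlsplit

# Split the example URL into the components labeled in the diagram.
parts = urlsplit("https://example.com/path/to/resource?query=value")
print(parts.scheme)   # 'https'             -- the protocol
print(parts.netloc)   # 'example.com'       -- the authority (host)
print(parts.path)     # '/path/to/resource' -- the resource path
print(parts.query)    # 'query=value'       -- the parameters

# The same generic syntax addresses entirely different protocols:
print(urlsplit("ftp://ftp.example.com/pub/readme.txt").scheme)  # 'ftp'
print(urlsplit("mailto:someone@example.com").scheme)            # 'mailto'

The parser never asks what the scheme means. That indifference is the generality described above.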
HTTP — The Protocol
HTTP (HyperText Transfer Protocol) defines how a browser and a server talk to each other. It is deliberately simple: the client sends a request, the server sends a response.
Client                                  Server
   |                                       |
   |   GET /index.html HTTP/1.1           |
   |   Host: example.com                  |
   |-------------------------------------->|
   |                                       |
   |       HTTP/1.1 200 OK                |
   |       Content-Type: text/html        |
   |                                       |
   |       <html>...</html>               |
   |<--------------------------------------|

The original HTTP/0.9 was one page long. One method (GET). One response (the document). No headers. No status codes. Berners-Lee wrote it that way intentionally — simple enough that anyone could implement a server or client. That simplicity is what allowed the Web to spread so fast. You didn’t need a special toolkit or a commercial license. You just needed to implement the spec.
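To make “anyone could implement it” concrete, here is a sketch of an HTTP/0.9-style server in Python. It is not Berners-Lee’s code, just an illustration of how little the original protocol demanded; the port and page content are arbitrary.

import socket

# The entire document our "site" serves.
PAGE = b"<html><body><h1>Hello, Web</h1></body></html>"

# HTTP/0.9 in miniature: read one "GET /path" line, write the
# document, close the connection. No headers, no status codes.
with socket.create_server(("", 8080)) as server:
    while True:                      # serve forever, one request at a time
        conn, _addr = server.accept()
        with conn:
            request = conn.recv(1024).decode("ascii", errors="replace")
            if request.startswith("GET "):
                conn.sendall(PAGE)   # the response is just the document
            # closing the socket is how the response "ends"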
HTTP has since evolved dramatically — HTTP/1.1 added persistent connections and headers, HTTP/2 added multiplexing and binary framing, HTTP/3 runs over QUIC instead of TCP — but the fundamental request/response model Berners-Lee defined in 1990 is unchanged. Every time you load a webpage, stream a video, or call an API, that same basic conversation is happening underneath.
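And here is the modern version of the conversation from the diagram, again via Python’s standard library. example.com is a placeholder host; any HTTP server would answer the same way.

import http.client

# The same request/response exchange shown in the diagram above.
conn = http.client.HTTPSConnection("example.com")
conn.request("GET", "/")                   # sends GET / HTTP/1.1 plus a Host header
response = conn.getresponse()              # reads the status line and headers
print(response.status, response.reason)    # e.g. 200 OK
print(response.getheader("Content-Type"))  # e.g. text/html
body = response.read()                     # <html>...</html>
conn.close()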
HTML — The Language
HTML (HyperText Markup Language) is the format for documents on the Web. It borrowed from SGML, an existing document markup standard, but Berners-Lee stripped it down to what mattered: structure, links, and the ability to embed references to other resources.
The critical innovation was the hyperlink — <a href="...">. A single tag that made any word in any document a portal to any other document, anywhere on the network. Ted Nelson had theorized hypertext since 1965. Berners-Lee implemented it in a way that ran over the open internet, with no central link registry and no permission required to link to anything.
That last point is underappreciated. The Web’s link model is decentralized by design. No authority controls what can link to what. The graph of the Web — billions of pages connected by hundreds of billions of links — emerged from that one design decision.
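A sketch of what that decentralization means in practice: any client can discover the link graph by pulling <a href> tags out of any page it fetches, with no registry to consult. The snippet below uses only Python’s standard library; the page content is made up.

from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag it sees."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = ('<p>See <a href="https://example.com/spec">the spec</a> '
        'and <a href="/local/notes.html">these notes</a>.</p>')

collector = LinkCollector()
collector.feed(page)
print(collector.links)  # ['https://example.com/spec', '/local/notes.html']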
Together, these three specs form a stack that is both minimal and complete. You can build anything on top of them. And crucially, none of them required permission to use: any developer, anywhere, could build a browser, a server, or a web application without asking anyone, paying anyone, or being compatible with any proprietary system. The Web grew so fast because the entry cost was essentially zero.
4. The Decision That Changed Everything
By 1993, the Web was growing. Researchers outside CERN were using it. Mosaic — built by Marc Andreessen and Eric Bina at the National Center for Supercomputing Applications, and the first graphical browser to reach a mass audience — had just launched, making the Web accessible to people who weren’t comfortable at a command line. Adoption was accelerating fast.
CERN had a decision to make. The organization had developed this technology on its time and resources. It could license it. It could charge royalties. The Web was clearly going to be valuable, and CERN, like any institution, had legitimate reasons to consider protecting its investment.
Berners-Lee pushed hard in the other direction. He understood something that wasn’t obvious at the time: a proprietary Web wasn’t really a Web. A universal information system that required a license wasn’t universal. If you had to pay to build a browser, most people wouldn’t build one. If you had to pay to run a server, most organizations wouldn’t run one. The network effects that make the Web powerful depend entirely on there being no barrier to participation.
To be clear about what this decision meant personally: had he patented the core specifications and licensed them even modestly, he would have become one of the wealthiest people in history. He knew this. He chose not to.
He later put it directly: “Had the technology been proprietary, and in my total control, it would probably not have taken off. You can’t propose that something be a universal space and at the same time keep control of it.”
On 30 April 1993, CERN released the core Web software into the public domain. Later, CERN issued another release under an open licence — a more durable legal mechanism for ensuring the technology stayed freely usable. No patents. No licensing fees. No restrictions.
That single decision is arguably the most consequential act of technical generosity in the history of computing. It’s the reason the Web became the substrate of the global economy rather than a product owned by a corporation.
For comparison: around the same time, other hypertext systems existed. HyperCard was owned by Apple and never left the Apple ecosystem. Gopher — a competing document-retrieval protocol that ran on the same internet infrastructure and, for a time, looked like a serious rival — stumbled after the University of Minnesota introduced licensing for parts of its implementation. The contrast was stark and instructive. Openness won.
Berners-Lee never became rich from the Web. He made a deliberate choice not to. That choice is why you’re reading this.
5. W3C: Keeping the Web Nobody’s
Giving the Web away was the right call. But it created a new problem: who maintains the standards?
Without a governing body, the Web’s specs would fragment. Different browser vendors would implement HTML differently. Corporations would extend HTTP in incompatible ways. The open, interoperable system Berners-Lee had built would slowly Balkanize into a collection of incompatible walled gardens — exactly what had existed before the Web.
In 1994, Berners-Lee left CERN and founded the World Wide Web Consortium — the W3C — at MIT. The problem he was solving was specific: the Web had no owner, which was good, but it also had no steward. Without a neutral body to coordinate standards, the natural move for any company building web technology was to extend the specs in its own direction and hope its implementation became the de facto standard. That path leads to fragmentation. Berners-Lee had seen what happened to incompatible systems at CERN; he wasn’t going to let it happen to the Web.
He chose MIT deliberately — a neutral academic institution with no commercial stake in the Web’s direction, credible enough to bring major industry players to the table. The model was unusual: not a government body, not a corporation, not a purely academic institution. The W3C operates through member organizations — browser vendors, tech companies, universities, governments — who participate in working groups to develop and ratify web standards. Membership costs money. Influence requires participation. But the standards themselves are royalty-free and open.
The W3C’s charter, driven by Berners-Lee from the start, had one non-negotiable principle: standards must be based on royalty-free technology. If a company contributes a technology to a web standard, it agrees not to charge for its use. No standard would become a patent trap.
The list of what the W3C has standardized reads like a history of the Web itself: CSS, XML, SVG, WCAG (web accessibility guidelines), WebRTC, JSON-LD, the semantic web stack. Every time the web needed a new capability and could have fractured into proprietary implementations, the W3C provided a coordination mechanism that kept it interoperable.
This wasn’t frictionless. The browser wars of the late 1990s were partly a failure of standards coordination — Microsoft and Netscape both extended HTML and CSS in incompatible directions, creating years of pain for developers. The W3C eventually reasserted influence, but the episode demonstrated how fragile the open web could be when major commercial players had incentives to fragment it.
Berners-Lee remained deeply involved with the W3C for decades. It’s one of the less glamorous parts of his legacy — standards work rarely makes headlines — but it’s arguably as important as the original invention. The Web exists because of what he built in 1990. It stayed open because of what he built in 1994.
6. The Activist: When the Web Started Going Wrong
The Web Berners-Lee designed was decentralized. Any machine could be a server. Any person could publish. No node was more important than any other. The architecture was flat, distributed, and deliberately resistant to central control.
What actually happened was something different.
By the 2010s, a small number of platforms had captured most of the Web’s attention and data. Search ran through Google. Social graphs ran through Facebook. Retail ran through Amazon. Cloud infrastructure ran through AWS, Azure, and GCP. The physical architecture of the Web remained distributed, but the economic and social architecture had centralized dramatically. The original vision — a universal space where anyone could participate equally — had given way to something more concentrated: a web where a handful of platforms controlled the defaults, owned the data, and captured most of the value that flowed through the system.
Berners-Lee was not quiet about this. He has spent years as a vocal advocate for what the Web was supposed to be, fighting on several fronts simultaneously.
On net neutrality — the principle that internet service providers must treat all traffic equally, regardless of source — he argued consistently that a non-neutral network was incompatible with the open Web. If ISPs could charge more for access to certain sites, or throttle competitors’ services, the Web’s level playing field would collapse. He wrote, testified, and campaigned on this for over a decade.
On surveillance and privacy, he pushed back against both government mass surveillance programs and the data collection practices of commercial platforms. He described the current state of the web as one where users are the product, their behavior tracked and monetized without meaningful consent.
In 2018, he wrote a widely circulated piece in Vanity Fair saying he was devastated by the way the web had been used for misinformation, harassment, and surveillance. This wasn’t hand-wringing. It was a technical assessment from the person who understood the system’s design better than anyone.
In 2019, he launched the Contract for the Web — a set of principles for governments, companies, and individuals to commit to protecting the web as an open, safe, and accessible resource. It was signed by more than 150 organizations. Whether it had teeth is a fair question. That it existed at all — that the inventor of the Web was still fighting for it thirty years later — says something.
At the 2012 London Olympics opening ceremony, Berners-Lee sat at a NeXT computer on the stadium floor and sent a single tweet to an audience of hundreds of millions of people: “This is for everyone.” It was displayed in lights across the stadium.
The phrase deserves unpacking. It wasn’t a marketing slogan. It was a statement of original intent — a reminder that the Web was designed with no access restrictions, no preferred users, and no built-in hierarchy of who gets to publish versus who only gets to read. Anyone with a connection could be a server. Anyone could link to anything. The architecture was deliberately egalitarian. “This is for everyone” was Berners-Lee’s way of saying: that’s not an accident, it’s the point.
7. Solid: The Third Layer
Berners-Lee didn’t stop at advocacy. In 2016, working at MIT, he began developing a technical response to the centralization problem. He called it Solid — short for Social Linked Data.
The diagnosis behind Solid is precise: the Web’s original design had no standard mechanism for identity, authentication, or data ownership. When you use a web application, your data lives on that application’s servers. You don’t control it. You can’t easily move it. If the service shuts down or changes its terms, your data goes with it. This wasn’t an accident of corporate greed — it was a gap in the original specs. HTTP and HTML say nothing about where data should live or who should own it. Applications filled that gap by centralizing everything on their own servers, because that was the easiest architecture to build. The result, multiplied across thousands of services, is the surveillance economy.
Berners-Lee has said he didn’t anticipate this when designing the Web in 1990. The architecture was optimized for sharing documents, not for managing personal data at scale. By the 2010s, it was clear the gap needed to be filled at the protocol level — not by regulation, not by corporate goodwill, but by a new open standard.
Solid’s answer is the Pod — a personal online data store that you control and host wherever you want: your own server, a cloud provider, or a Solid hosting service. Applications don’t store your data. They request access to your Pod, with your permission, and read or write through a standard API. When you revoke access, it’s gone. When you switch to a different app, your data stays with you.
Traditional Web                       Solid Web
─────────────────                     ─────────────────
[App A] ──stores──> [Server A]        [App A] ──reads/writes──> [Your Pod]
[App B] ──stores──> [Server B]        [App B] ──reads/writes──> [Your Pod]
[App C] ──stores──> [Server C]        [App C] ──reads/writes──> [Your Pod]

You have no copy.                     You own the data. Always.
You have no portability.              Apps are decoupled from storage.
You have no exit.                     You can switch apps freely.

Technically, Solid is built on existing W3C standards: RDF for linked data, WebID for decentralized identity, and a set of access-control specifications layered on top of HTTP. It’s not a blockchain — Berners-Lee has been explicit about that. Blockchain’s public-ledger model is a bad fit for privacy, and its transaction costs are a bad fit for everyday use. Solid uses standard web infrastructure: any server that implements the Solid protocol is part of the network.
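To ground the model: under Solid, an application is roughly an HTTP client that reads and writes resources in storage the user controls. The sketch below is hypothetical. The Pod URL and token are placeholders, and real Solid clients authenticate through Solid-OIDC and WebID rather than a hardcoded bearer token, but the shape of the interaction is plain HTTP.

import urllib.request

POD_RESOURCE = "https://alice.example.org/notes/todo.ttl"  # hypothetical Pod URL
ACCESS_TOKEN = "placeholder-token"                         # stands in for real auth

def read_from_pod(url: str, token: str) -> bytes:
    # GET a resource from the user's Pod, if the app has been granted access.
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def write_to_pod(url: str, token: str, body: bytes) -> int:
    # PUT a resource back; the data lives in the Pod, not on the app's servers.
    req = urllib.request.Request(
        url,
        data=body,
        method="PUT",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "text/turtle"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

Revoking the app’s access is a change to the Pod’s permissions, not a data migration; the resources never left the user’s storage.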
In 2018, he co-founded Inrupt, a company meant to build commercial infrastructure around Solid and help create the ecosystem needed for adoption. Governments and media organizations have explored Solid-based approaches. The government of Flanders has been one of the most prominent public-sector adopters. The BBC has run Solid-based experiments around personalization and consented data access. In healthcare, Pod-style approaches have appeared mostly in pilots, prototypes, and interoperability research rather than broad production deployment.
Solid is still early. Adoption is not yet mainstream. Whether it succeeds at re-decentralizing the web at scale remains an open question. But the intellectual move is significant: the person who invented the Web looked at what it had become, understood the root technical cause, and proposed a standards-based fix. Not a new proprietary platform. Not a blockchain. A new open protocol, built on the existing web stack, designed to give the original vision another chance.
He has done this before.
8. Legacy: Infrastructure for Civilization
Berners-Lee received the 2016 ACM A.M. Turing Award, announced by ACM in April 2017, for inventing the World Wide Web, the first web browser, and the foundational protocols and algorithms that allowed the Web to scale. The citation is accurate but it undersells the scope.
Consider what runs on HTTP and HTML today. E-commerce. Banking. Journalism. Education. Healthcare records. Government services. Scientific publishing. Entertainment. Social movements. The global supply chain’s communication layer. The platforms through which billions of people now participate in public discourse. All of it, ultimately, runs on three specifications written by one person in a year, built on the principle that no one should need permission to participate.
The numbers are hard to contextualize. Billions of people use the internet. The Web has created industries that didn’t exist before it — cloud computing, e-commerce, the app economy, the creator economy — and disrupted industries that did. The economic value created by the open Web is incalculable.
What makes Berners-Lee’s contribution singular is not just the technical work. It’s the combination of technical capability and deliberate restraint. He could build the system. He understood what the system needed to become valuable. And he understood that owning it himself was incompatible with its becoming valuable. That combination — technical vision plus principled abdication of control — is extraordinarily rare.
For contrast: imagine if Vint Cerf had patented TCP/IP. Imagine if Dennis Ritchie had locked down C. Imagine if Linus Torvalds had made Linux proprietary. The open infrastructure of computing depends on a small number of people who chose, at critical moments, to give things away. Berners-Lee’s choice is in that category — but his gift may have had the largest single impact of any of them, because the Web is the layer that made all the others visible to everyone.
He was knighted in 2004. He is in the Internet Hall of Fame. He has received many honorary degrees. In 1999, Time named him one of the 100 most important people of the 20th century.
He is still working. Still writing. Still arguing for the open web. Still building Solid. The man who gave away the most valuable thing he ever made has spent the decades since trying to make sure nobody takes it back.
That’s the contribution. Not just the code. The code plus the principles behind it. The insistence that a universal space has to actually be universal, or it’s nothing at all.
Further reading: Design Issues — Berners-Lee’s ongoing technical and philosophical notes on web architecture, published continuously since the early 1990s. One of the most underread technical blogs on the internet.