- What does it mean for the Open Web if users don’t know they’re on the internet? – via QUARTZ:
- This is more than a matter of semantics. The expectations and behaviors of the next billion people to come online will have profound effects on how the internet evolves. If the majority of the world’s online population spends time on Facebook, then policymakers, businesses, startups, developers, nonprofits, publishers, and anyone else interested in communicating with them will also, if they are to be effective, go to Facebook. That means they, too, must then play by the rules of one company. And that has implications for us all.
- Hard Drive Data Sets – via Backblaze: Backblaze provides online backup services, storing data on over 41,000 hard drives ranging from 1 terabyte to 6 terabytes in size. They have released an open, downloadable dataset on the reliability of these drives.
- The Open Source Question: critically important web infrastructure is woefully underfunded – via Slate: on the strange dichotomy of Silicon Valley: a “hypercapitalist steamship powered by its very antithesis”
- February 21st is Open Data Day – via Spatial Source: use this interactive map to find an Open Data event near you (or add your own)

- Security is at the heart of the web – via O’Reilly Radar:
- …we want to be able to go to sleep without worrying that all of those great conversations on the open web will endanger the rest of what we do.
- Making the web work has always been a balancing act between enabling and forbidding, remembering and forgetting, and public and private. Managing identity, security, and privacy has always been complicated, both because of the challenges in each of those pieces and the tensions among them.
- Complicating things further, the web has succeeded in large part because people — myself included — have been willing to lock their paranoias away so long as nothing too terrible happened.
- Follow us @CommonCrawl on Twitter for the latest in Big Open Data
Erratum:
Content is truncated
Some archived content is truncated due to fetch size limits imposed during crawling. This is necessary to handle infinite or exceptionally large data streams (e.g., radio streams). Prior to March 2025 (CC-MAIN-2025-13), the truncation threshold was 1 MiB. From the March 2025 crawl onwards, this limit has been increased to 5 MiB.
For more details, see our truncation analysis notebook.
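If you want to see how many captures in a given WARC file were affected, here is a minimal sketch using the warcio library. It relies on the standard WARC-Truncated header (with reason "length" when the fetch size limit was hit); the file path is a placeholder for any locally downloaded Common Crawl WARC file.

```python
from warcio.archiveiterator import ArchiveIterator

# Placeholder path: any Common Crawl WARC file downloaded locally.
WARC_PATH = "CC-MAIN-example.warc.gz"

total = 0
truncated = 0

with open(WARC_PATH, "rb") as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type != "response":
            continue
        total += 1
        # Truncated captures carry the standard WARC-Truncated header;
        # its value gives the reason ("length" when the size limit was hit).
        if record.rec_headers.get_header("WARC-Truncated"):
            truncated += 1

print(f"{truncated} of {total} response records are marked as truncated")
```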
Erratum:
Nodes in Domain-Level Webgraphs Not Sorted and May Include Duplicates
The nodes in domain-level Web Graphs may not be properly sorted lexicographically by node label (reversed domain name). It's also possible that a few nodes are duplicated, that is, two nodes share the same label. For more details, see the Issue Report in the cc-webgraph repository.
The issue affects all domain-level Web Graphs released before the fix; it is resolved in the May, June/July, August 2022 Web Graph (cc-main-2022-may-jun-aug-domain) and all following Web Graph releases.
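To check a given release yourself, a minimal sketch is below. It assumes the vertices file lists one node per line as a tab-separated pair of numeric ID and reversed domain name, with lines expected to be sorted by label; the file name is illustrative.

```python
import gzip

# Illustrative file name; assumes one "<id>\t<reversed domain>" line per node.
VERTICES_PATH = "cc-main-2022-may-jun-aug-domain-vertices.txt.gz"

prev_label = None
out_of_order = 0
duplicates = 0

with gzip.open(VERTICES_PATH, "rt", encoding="utf-8") as f:
    for line in f:
        label = line.rstrip("\n").split("\t")[1]
        if prev_label is not None:
            if label == prev_label:
                duplicates += 1    # two nodes share the same label
            elif label < prev_label:
                out_of_order += 1  # label sorts before its predecessor
        prev_label = label

print(f"out-of-order labels: {out_of_order}, duplicate labels: {duplicates}")
```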
