Common Crawl Blog

The latest news, interviews, technologies, and resources.

Towards Social Discovery - New Content Models; New Data; New Toolsets

This is a guest blog post by Matthew Berk, Founder of Lucky Oyster. Matthew has been on the front lines of search technology for the past decade.
Matthew Berk
Matthew Berk is a founder of Bean Box and Open List, and previously worked at Jupiter Research and Marchex. He studied at Cornell University and Johns Hopkins University.
blekko donates search data to Common Crawl

We are very excited to announce that blekko is donating search data to Common Crawl! Founded in 2007, blekko has created a new type of search experience that enlists human editors in its efforts to eliminate spam and personalize search.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
Winners of the Code Contest!

We’re very excited to announce the winners of the First Ever Common Crawl Code Contest! We were thrilled by the response to the contest and the many great entries. Several people let us know that they were not able to complete their project in time to submit to the contest. We’re currently working with them to finish the projects outside of the contest and we’ll be showcasing some of those projects in the near future!
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
Common Crawl Code Contest Extended Through the Holiday Weekend

Do you have a project that you are working on for the Common Crawl Code Contest that is not quite ready? If so, you are not the only one. A few people have emailed us to let us know their code is almost ready but they are worried about the deadline, so we have decided to extend the deadline through the holiday weekend.
Common Crawl Foundation
TalentBin Adds Prizes To The Code Contest

The prize package for the Common Crawl Code Contest now includes three Nexus 7 tablets thanks to TalentBin!
Common Crawl Foundation
2012 Crawl Data Now Available

I am very happy to announce that Common Crawl has released 2012 crawl data as well as a number of significant enhancements to our example library and help pages.
Common Crawl Foundation
Amazon Web Services sponsoring $50 in credit to all contest entrants!

Did you know that every entry to the First Ever Common Crawl Code Contest gets $50 in Amazon Web Services (AWS) credits? If you're a developer interested in big datasets and learning new platforms like Hadoop, you truly have no reason not to try your hand at creating an entry to the code contest!
Allison Domicone
Mat Kelcey Joins The Common Crawl Advisory Board

We are excited to announce that Mat Kelcey has joined the Common Crawl Board of Advisors! Mat has been extremely helpful to Common Crawl over the last several months and we are very happy to have him as an official Advisor to the organization.
Common Crawl Foundation
Still time to participate in the Common Crawl code contest

There is still plenty of time left to participate in the Common Crawl code contest! Entries are accepted until August 30th, so why not spend some time this week playing around with the Common Crawl corpus and then submit your work to the contest?
Common Crawl Foundation
Big Data Week: meetups in SF and around the world

Big Data Week aims to connect data enthusiasts, technologists, and professionals across the globe through a series of meet-ups. The idea is to build community among groups working on big data and to spur conversations about relevant topics ranging from technology to commercial use cases.
Allison Domicone
OSCON 2012

We're just one month away from one of the biggest and most exciting events of the year, O'Reilly's Open Source Convention (OSCON). This year's conference will be held July 16th-20th in Portland, Oregon.
Allison Domicone
The Open Cloud Consortium’s Open Science Data Cloud

Common Crawl has started talking with the Open Cloud Consortium (OCC) about working together. If you haven’t already heard of the OCC, it is an awesome nonprofit organization managing and operating cloud computing infrastructure that supports scientific, environmental, medical and health care research.
Common Crawl Foundation
Twelve steps to running your Ruby code across five billion web pages

The following is a guest blog post by Pete Warden, a member of the Common Crawl Advisory Board. Pete is a British-born programmer living in San Francisco. After spending over a decade as a software engineer, including 5 years at Apple, he’s now focused on a career as a mad scientist.
Pete Warden
Pete is a British-born programmer living in San Francisco, and is a member of the Common Crawl advisory board.
Common Crawl's Brand Spanking New Video and First Ever Code Contest!

At Common Crawl we've been busy recently! After announcing the release of 2012 data and other enhancements, we are now excited to share with you this short video that explains why we here at Common Crawl are working hard to bring web crawl data to anyone who wants to use it.
Allison Domicone
Learn Hadoop and get a paper published

We're looking for students who want to try out the Apache Hadoop platform and get a technical report published.
Allison Domicone
Data 2.0 Summit

Next week a few members of the Common Crawl team are going to the Data 2.0 Summit in San Francisco.
Common Crawl Foundation
Common Crawl's Advisory Board

As part of our ongoing effort to grow Common Crawl into a truly useful and innovative tool, we recently formed an Advisory Board to guide us in our efforts. We have a stellar line-up of advisory board members who will lend their passion and expertise in numerous fields as we grow our vision.
Allison Domicone
Common Crawl on AWS Public Data Sets

Common Crawl is thrilled to announce that our data is now hosted on Amazon Web Services' Public Data Sets.
Common Crawl Foundation
Web Data Commons

For the last few months, we have been talking with Chris Bizer and Hannes Mühleisen at the Freie Universität Berlin about their work, and we have been greatly looking forward to the announcement of the Web Data Commons.
Common Crawl Foundation
SlideShare: Building a Scalable Web Crawler with Hadoop

Common Crawl on building an open web-scale crawl using Hadoop.
Common Crawl Foundation
Video: Gil Elbaz at Web 2.0 Summit 2011

Hear Common Crawl founder Gil Elbaz discuss how data accessibility is crucial to increasing rates of innovation, and share ideas on how to facilitate increased access to data.
Common Crawl Foundation
Video: This Week in Startups - Gil Elbaz and Nova Spivack

Nova and Gil, in discussion with host Jason Calacanis, explore in depth what Common Crawl is all about and how it fits into the larger picture of online search and indexing.
Common Crawl Foundation
Video Tutorial: MapReduce for the Masses

Learn how you can harness the power of MapReduce data analysis against the Common Crawl dataset with nothing more than five minutes of your time, a bit of local configuration, and 25 cents.
Common Crawl Foundation
Common Crawl Enters A New Phase

A little under four years ago, Gil Elbaz formed the Common Crawl Foundation. He was driven by a desire to ensure a truly open web. He knew that decreasing storage and bandwidth costs, along with the increasing ease of crunching big data, made building and maintaining an open repository of web crawl data feasible.
Common Crawl Foundation
Gil Elbaz and Nova Spivack on This Week in Startups

Nova and Gil, in discussion with host Jason Calacanis, explore in depth what Common Crawl is all about and how it fits into the larger picture of online search and indexing. Underlying their conversation is an exploration of how Common Crawl's open crawl of the web is a powerful asset for educators, researchers, and entrepreneurs.
Allison Domicone
MapReduce for the Masses: Zero to Hadoop in Five Minutes with Common Crawl

Common Crawl aims to change the big data game with our repository of over 40 terabytes of high-quality web crawl data in the Amazon cloud, a total of 5 billion crawled pages.
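The tutorial walks through a Hadoop MapReduce job over the corpus. As a minimal sketch of the programming model (a toy word count, not the tutorial's actual code), Hadoop Streaming pipes input records to a mapper on stdin and feeds the sorted mapper output to a reducer:

```python
from itertools import groupby

def mapper(lines):
    """Emit a (word, 1) pair for every word in the input lines."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reducer(pairs):
    """Sum counts per word; assumes pairs arrive sorted by key,
    as Hadoop's shuffle phase guarantees."""
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    # Stand-in for text extracted from crawled pages.
    text = ["Common Crawl data", "crawl data for everyone"]
    mapped = sorted(mapper(text))  # simulate the shuffle/sort step
    print(dict(reducer(mapped)))
```

The same two functions, wrapped to read stdin and write tab-separated lines, are what a Hadoop Streaming job would invoke at scale.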
Common Crawl Foundation
Answers to Recent Community Questions

In this post we respond to the most common questions. Thanks for all the support and please keep the questions coming!
Common Crawl Foundation
Common Crawl Discussion List

We have started a Common Crawl discussion list to enable discussions and encourage collaboration between the community of coders, hackers, data scientists, developers and organizations interested in working with open web crawl data.
Common Crawl Foundation


March 2025 Crawl Archive Now Available

April 1, 2025

We are pleased to announce that the crawl archive for March 2025 is now available. The data was crawled between March 15th and March 28th, and contains 2.74 billion web pages (or 455 TiB of uncompressed content).
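Each monthly crawl can be explored through the Common Crawl URL index at index.commoncrawl.org. A small sketch of building a CDX API query for this release (assuming CC-MAIN-2025-13 is the collection ID of the March 2025 crawl):

```python
from urllib.parse import urlencode

INDEX_HOST = "https://index.commoncrawl.org"

def index_query(collection, url, limit=5):
    """Return the CDX API query URL for captures of `url`
    in the given crawl collection."""
    params = urlencode({"url": url, "output": "json", "limit": limit})
    return f"{INDEX_HOST}/{collection}-index?{params}"

# Fetching this URL returns one JSON record per page capture.
print(index_query("CC-MAIN-2025-13", "commoncrawl.org"))
```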


Introducing Common Crawl AI Agent by ReadyAI

March 31, 2025

We are pleased to announce the launch of an experimental AI Agent, developed by our friends at ReadyAI. The agent offers a conversational interface designed to help users explore Common Crawl’s data, use cases, and community initiatives.


Submission to the UK’s Copyright and AI Consultation

March 3, 2025

Read our submission to the UK government's Copyright and AI consultation, supporting a legal exception for text and data mining (TDM) while respecting creators’ rights.


Host- and Domain-Level Web Graphs December 2024 and January/February 2025

February 25, 2025

Announcing our February 2025 Web Graph release based on the crawls of December 2024 and January/February 2025, consisting of 267.4 million nodes and 2.7 billion edges at the host level, and 106.5 million nodes and 1.9 billion edges at the domain level.
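In these releases a node is a host (or registered domain) and a directed edge is a link between two of them. A toy sketch of that representation and of computing in- and out-degrees (the host names below are illustrative, not taken from the release):

```python
from collections import defaultdict

# Toy host-level web graph: each edge is a (source_host, target_host) link.
edges = [
    ("blog.example.com", "example.com"),
    ("example.com", "commoncrawl.org"),
    ("news.example.org", "commoncrawl.org"),
]

out_degree = defaultdict(int)
in_degree = defaultdict(int)
for src, dst in edges:
    out_degree[src] += 1
    in_degree[dst] += 1

# In this toy graph, two hosts link to commoncrawl.org.
print(in_degree["commoncrawl.org"])
```

The released graphs follow the same node/edge model, at the scale of hundreds of millions of nodes and billions of edges.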


February 2025 Crawl Archive Now Available

February 23, 2025

The crawl archive for February 2025 is now available. The data was crawled between February 6th and February 20th, and contains 2.6 billion web pages (or 402 TiB of uncompressed content). Page captures are from 47.6 million hosts or 38.5 million registered domains and include 1 billion new URLs, not visited in any of our prior crawls.


Opening the Gates to Online Safety

February 17, 2025

Last week in Paris, at the AI Action Summit, a coalition of major technology companies and foundations announced the launch of ROOST: Robust Online Open Safety Tools.


January/February 2025 Newsletter

February 3, 2025

We’re happy to share our January/February 2025 newsletter with updates and insights from the world of open data and web archiving.


Host- and Domain-Level Web Graphs November/December 2024 and January 2025

February 1, 2025

We are pleased to announce a new release of host-level and domain-level web graphs based on the crawls of November, December 2024 and January 2025. The host-level graph consists of 277.7 million nodes and 2.7 billion edges, and the domain-level graph has 100.8 million nodes and 1.9 billion edges.


January 2025 Crawl Archive Now Available

January 31, 2025

We're pleased to announce our first crawl of 2025, containing 3.0 billion pages and 460 TiB of uncompressed content.


Introducing cc-downloader

January 21, 2025

Introducing a command-line tool written in Rust for downloading data from Common Crawl.
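Without speculating on cc-downloader's own flags, the core task it automates can be sketched as follows: each crawl publishes a warc.paths file listing its WARC files, and prefixing each line with the data host yields a download URL. The path below follows the published layout but is illustrative, not a real file name:

```python
# Sketch of what a Common Crawl downloader automates: turning lines
# from a crawl's warc.paths file into full download URLs.
DATA_HOST = "https://data.commoncrawl.org"

def to_url(path_line):
    """Turn one line of a warc.paths file into a download URL."""
    return f"{DATA_HOST}/{path_line.strip()}"

# Illustrative path in the published crawl-data layout.
paths = ["crawl-data/CC-MAIN-2025-05/segments/0001/warc/example-00000.warc.gz\n"]
for p in paths:
    print(to_url(p))
```

cc-downloader wraps this loop with parallel, resumable transfers; see the tool's own documentation for its actual command-line interface.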


Host- and Domain-Level Web Graphs October, November, and December 2024

December 21, 2024

We are pleased to announce a new release of host-level and domain-level Web Graphs based on the crawls of October, November, and December 2024. The crawls used to generate the graphs were CC-MAIN-2024-42, CC-MAIN-2024-46, and CC-MAIN-2024-51.


December 2024 Crawl Archive Now Available

December 18, 2024

The crawl archive for December 2024 is now available. The data was crawled between December 1st and December 15th, and contains 2.64 billion web pages (or 394 TiB of uncompressed content). Page captures are from 47.5 million hosts or 38.3 million registered domains and include 1.05 billion new URLs, not visited in any of our prior crawls.


Common Crawl Foundation at NeurIPS 2024: Expanding Horizons and Building Connections

December 17, 2024

The Common Crawl Foundation attended NeurIPS 2024, connecting with organisations, hosting a social event on tech and social impact, and showcasing contributions to AI research and data access.


Expanding the Language and Cultural Coverage of Common Crawl

December 11, 2024

We aim to enhance linguistic diversity in our dataset by inviting community contributions of non-English URLs and collaborating with MLCommons on a Language Identification campaign.


October/November 2024 Newsletter

November 25, 2024

We’re pleased to announce this month's newsletter, featuring key updates, upcoming events, and community highlights.


Host- and Domain-Level Web Graphs September, October, November 2024

November 20, 2024

We are pleased to announce a new release of host-level and domain-level Web Graphs based on the crawls of September, October, and November 2024. The crawls used to generate the graphs were CC-MAIN-2024-38, CC-MAIN-2024-42, and CC-MAIN-2024-46.


November 2024 Crawl Archive Now Available

November 18, 2024

The crawl archive for November 2024 is now available. The data was crawled between November 1st and November 15th, and contains 2.68 billion web pages (or 405 TiB of uncompressed content). Page captures are from 47.5 million hosts or 38.3 million registered domains and include 1 billion new URLs, not visited in any of our prior crawls.


Reflections on Recent Talks at the Turing Institute and UCL

November 4, 2024

Thom Vaughan and Pedro Ortiz Suarez discussed the power of Common Crawl’s open web data in driving research and innovation during two notable presentations last week.


Introducing the Common Crawl Errata Page for Data Transparency

October 30, 2024

As part of our commitment to accuracy and transparency, we are pleased to introduce a new Errata page on our website.


Host- and Domain-Level Web Graphs August, September, and October 2024

October 22, 2024

We are pleased to announce a new release of host-level and domain-level Web Graphs based on the crawls of August, September, and October 2024. The crawls used to generate the graphs were CC-MAIN-2024-33, CC-MAIN-2024-38, and CC-MAIN-2024-42.


October 2024 Crawl Archive Now Available

October 20, 2024

The crawl archive for October 2024 is now available. The data was crawled between October 3rd and October 16th, and contains 2.49 billion web pages (or 365 TiB of uncompressed content). Page captures are from 47.5 million hosts or 38.3 million registered domains and include 1.03 billion new URLs, not visited in any of our prior crawls.


White House Briefing on Open Data’s Role in Technology

October 8, 2024

We recently had the honor of briefing the White House Office of Science and Technology Policy (OSTP) on the role of the Common Crawl Foundation as critical infrastructure in the artificial intelligence ecosystem, and on how we can support U.S. federal efforts to advance responsible AI use and research.


IAB Workshop on AI-CONTROL

September 30, 2024

Earlier this month, the Common Crawl Foundation had the privilege of participating in a groundbreaking workshop hosted by the Internet Architecture Board (IAB) in Washington DC.


Host- and Domain-Level Web Graphs July, August, and September 2024

September 26, 2024

We are pleased to announce a new release of host-level and domain-level web graphs based on the crawls of July, August, and September 2024. The crawls used to generate the graphs were CC-MAIN-2024-30, CC-MAIN-2024-33, and CC-MAIN-2024-38.


September 2024 Crawl Archive Now Available

September 24, 2024

The crawl archive for September 2024 is now available. The data was crawled between September 7th and September 21st 2024, and contains 2.8 billion web pages (or 410 TiB of uncompressed content).
