Common Crawl Blog

The latest news, interviews, technologies, and resources.

Common Crawl URL Index

We are thrilled to announce that Common Crawl now has a URL index! Scott Robertson, founder of triv.io, graciously donated his time and skills to create this valuable tool.
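Today the URL index is exposed through the CDX server API at index.commoncrawl.org. A minimal sketch of how a lookup works is shown below; the collection name, sample response line, and file path are illustrative assumptions, not real crawl data.

```python
import json
from urllib.parse import urlencode

# Illustrative collection name; real collections follow the CC-MAIN-YYYY-WW pattern.
INDEX_API = "https://index.commoncrawl.org/CC-MAIN-2024-33-index"

def build_query(url_pattern: str) -> str:
    """Build a CDX index query URL for a page or wildcard pattern."""
    return INDEX_API + "?" + urlencode({"url": url_pattern, "output": "json"})

def parse_record(line: str) -> dict:
    """Each line of a JSON-formatted response describes one capture."""
    return json.loads(line)

# Illustrative response line (fabricated): the filename/offset/length fields
# say which WARC file holds the capture and where the record starts.
sample = ('{"urlkey": "com,example)/", "timestamp": "20240812000000", '
          '"filename": "crawl-data/CC-MAIN-2024-33/segments/example.warc.gz", '
          '"offset": "1024", "length": "2048"}')
record = parse_record(sample)
print(build_query("example.com"))
print(record["filename"], record["offset"], record["length"])
```

With the filename, offset, and length in hand, a single ranged HTTP request can fetch just that record from the public dataset rather than scanning whole crawl files.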
Scott Robertson
Scott Robertson is a founder of triv.io, and is a passionate believer in simplifying complicated processes.
Towards Social Discovery - New Content Models; New Data; New Toolsets

This is a guest blog post by Matthew Berk, Founder of Lucky Oyster. Matthew has been on the front lines of search technology for the past decade.
Matthew Berk
Matthew Berk is a founder at Bean Box and Open List, worked at Jupiter Research and Marchex. Matthew studied at Cornell University and Johns Hopkins University.
blekko donates search data to Common Crawl

We are very excited to announce that blekko is donating search data to Common Crawl! Founded in 2007, blekko has created a new type of search experience that enlists human editors in its efforts to eliminate spam and personalize search.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
Winners of the Code Contest!

We’re very excited to announce the winners of the First Ever Common Crawl Code Contest! We were thrilled by the response to the contest and the many great entries. Several people let us know that they were not able to complete their project in time to submit to the contest. We’re currently working with them to finish the projects outside of the contest and we’ll be showcasing some of those projects in the near future!
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
Common Crawl Code Contest Extended Through the Holiday Weekend

Do you have a project that you are working on for the Common Crawl Code Contest that is not quite ready? If so, you are not the only one. A few people have emailed us to let us know their code is almost ready but they are worried about the deadline, so we have decided to extend the deadline through the holiday weekend.
Common Crawl Foundation
TalentBin Adds Prizes To The Code Contest

The prize package for the Common Crawl Code Contest now includes three Nexus 7 tablets thanks to TalentBin!
Common Crawl Foundation
2012 Crawl Data Now Available

I am very happy to announce that Common Crawl has released 2012 crawl data as well as a number of significant enhancements to our example library and help pages.
Common Crawl Foundation
Amazon Web Services sponsoring $50 in credit to all contest entrants!

Did you know that every entry to the First Ever Common Crawl Code Contest gets $50 in Amazon Web Services (AWS) credits? If you're a developer interested in big datasets and learning new platforms like Hadoop, you truly have no reason not to try your hand at creating an entry to the code contest!
Allison Domicone
Mat Kelcey Joins The Common Crawl Advisory Board

We are excited to announce that Mat Kelcey has joined the Common Crawl Board of Advisors! Mat has been extremely helpful to Common Crawl over the last several months and we are very happy to have him as an official Advisor to the organization.
Common Crawl Foundation
Still time to participate in the Common Crawl code contest

There is still plenty of time left to participate in the Common Crawl code contest! The contest is accepting entries until August 30th, so why not spend some time this week playing around with the Common Crawl corpus and then submit your work to the contest?
Common Crawl Foundation
Big Data Week: meetups in SF and around the world

Big Data Week aims to connect data enthusiasts, technologists, and professionals across the globe through a series of meet-ups. The idea is to build community among groups working on big data and to spur conversations about relevant topics ranging from technology to commercial use cases.
Allison Domicone
OSCON 2012

We're just one month away from one of the biggest and most exciting events of the year, O'Reilly's Open Source Convention (OSCON). This year's conference will be held July 16th-20th in Portland, Oregon.
Allison Domicone
The Open Cloud Consortium’s Open Science Data Cloud

Common Crawl has started talking with the Open Cloud Consortium (OCC) about working together. If you haven’t already heard of the OCC, it is an awesome nonprofit organization managing and operating cloud computing infrastructure that supports scientific, environmental, medical and health care research.
Common Crawl Foundation
Twelve steps to running your Ruby code across five billion web pages

The following is a guest blog post by Pete Warden, a member of the Common Crawl Advisory Board. Pete is a British-born programmer living in San Francisco. After spending over a decade as a software engineer, including 5 years at Apple, he’s now focused on a career as a mad scientist.
Pete Warden
Pete is a British-born programmer living in San Francisco, and is a member of the Common Crawl advisory board.
Common Crawl's Brand Spanking New Video and First Ever Code Contest!

At Common Crawl we've been busy recently! After announcing the release of 2012 data and other enhancements, we are now excited to share with you this short video that explains why we here at Common Crawl are working hard to bring web crawl data to anyone who wants to use it.
Allison Domicone
Learn Hadoop and get a paper published

We're looking for students who want to try out the Apache Hadoop platform and get a technical report published.
Allison Domicone
Data 2.0 Summit

Next week a few members of the Common Crawl team are going to the Data 2.0 Summit in San Francisco.
Common Crawl Foundation
Common Crawl's Advisory Board

As part of our ongoing effort to grow Common Crawl into a truly useful and innovative tool, we recently formed an Advisory Board to guide us in our efforts. We have a stellar line-up of advisory board members who will lend their passion and expertise in numerous fields as we grow our vision.
Allison Domicone
Common Crawl on AWS Public Data Sets

Common Crawl is thrilled to announce that our data is now hosted on Amazon Web Services' Public Data Sets.
Common Crawl Foundation
Web Data Commons

For the last few months, we have been talking with Chris Bizer and Hannes Mühleisen at the Freie Universität Berlin about their work, and we have been greatly looking forward to the announcement of the Web Data Commons.
Common Crawl Foundation
SlideShare: Building a Scalable Web Crawler with Hadoop

Common Crawl on building an open, web-scale crawl using Hadoop.
Common Crawl Foundation
Video: Gil Elbaz at Web 2.0 Summit 2011

Hear Common Crawl's founder, Gil Elbaz, discuss how data accessibility is crucial to increasing rates of innovation, and share his ideas on how to facilitate increased access to data.
Common Crawl Foundation
Video: This Week in Startups - Gil Elbaz and Nova Spivack

Nova and Gil, in discussion with host Jason Calacanis, explore in depth what Common Crawl is all about and how it fits into the larger picture of online search and indexing.
Common Crawl Foundation
Video Tutorial: MapReduce for the Masses

Learn how you can harness the power of MapReduce data analysis against the Common Crawl dataset with nothing more than five minutes of your time, a bit of local configuration, and 25 cents.
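The map, shuffle, and reduce phases the tutorial walks through can be sketched in miniature in pure Python: a word count over toy documents, standing in for mappers and reducers that Hadoop would run in parallel over the crawl corpus. The helper names here are illustrative, not from the tutorial itself.

```python
from itertools import groupby
from operator import itemgetter

def mapper(doc: str):
    """Map phase: emit a (word, 1) pair for every token in a document."""
    for word in doc.lower().split():
        yield (word, 1)

def reducer(word, counts):
    """Reduce phase: sum the counts emitted for one word."""
    return (word, sum(counts))

def run_job(docs):
    """Simulate map -> shuffle (sort/group by key) -> reduce on one machine."""
    pairs = [kv for doc in docs for kv in mapper(doc)]      # map
    pairs.sort(key=itemgetter(0))                           # shuffle
    return dict(                                            # reduce
        reducer(key, (count for _, count in group))
        for key, group in groupby(pairs, key=itemgetter(0))
    )

print(run_job(["the quick brown fox", "the lazy dog"]))
```

On Hadoop the same mapper and reducer logic would be distributed across many workers, with the framework handling the sort-and-group step between them.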
Common Crawl Foundation
Common Crawl Enters A New Phase

A little under four years ago, Gil Elbaz formed the Common Crawl Foundation. He was driven by a desire to ensure a truly open web. He knew that decreasing storage and bandwidth costs, along with the increasing ease of crunching big data, made building and maintaining an open repository of web crawl data feasible.
Common Crawl Foundation
Gil Elbaz and Nova Spivack on This Week in Startups

Nova and Gil, in discussion with host Jason Calacanis, explore in depth what Common Crawl is all about and how it fits into the larger picture of online search and indexing. Underlying their conversation is an exploration of how Common Crawl's open crawl of the web is a powerful asset for educators, researchers, and entrepreneurs.
Allison Domicone
MapReduce for the Masses: Zero to Hadoop in Five Minutes with Common Crawl

Common Crawl aims to change the big data game with our repository of over 40 terabytes of high-quality web crawl data hosted in the Amazon cloud: a total of 5 billion crawled pages.
Common Crawl Foundation
Answers to Recent Community Questions

In this post, we respond to the most common questions. Thanks for all the support, and please keep the questions coming!
Common Crawl Foundation
Common Crawl Discussion List

We have started a Common Crawl discussion list to enable discussion and encourage collaboration within the community of coders, hackers, data scientists, developers, and organizations interested in working with open web crawl data.
Common Crawl Foundation

September 2024 Crawl Archive Now Available

September 24, 2024

The crawl archive for September 2024 is now available. The data was crawled between September 7th and September 21st 2024, and contains 2.8 billion web pages (or 410 TiB of uncompressed content).

August/September 2024 Newsletter

September 10, 2024

We're pleased to announce our newsletter for August and September 2024.

Host- and Domain-Level Web Graphs June, July, and August 2024

August 21, 2024

We are pleased to announce a new release of host-level and domain-level web graphs based on the crawls of June, July, and August 2024. The crawls used to generate the graphs were CC-MAIN-2024-33, CC-MAIN-2024-30, and CC-MAIN-2024-26.

August 2024 Crawl Archive Now Available

August 18, 2024

The crawl archive for August 2024 is now available. The data was crawled between August 3rd and August 16th, and contains 2.3 billion web pages (or 327.4 TiB of uncompressed content).

The Increase of Common Crawl Citations in Academic Research

August 6, 2024

Common Crawl's impact on research has grown substantially since its beginning. Our crawls have become a vital resource for researchers in various fields, from natural language processing to red teaming.

Host- and Domain-Level Web Graphs May, June, and July 2024

July 30, 2024

We are pleased to announce a new release of host-level and domain-level Web Graphs based on the crawls of May, June, and July 2024.

July 2024 Crawl Archive Now Available

July 28, 2024

We are pleased to announce that the crawl archive for July 2024 is now available, containing 2.5 billion web pages, or 360 TiB of uncompressed content.

Common Crawl Statistics Now Available on Hugging Face

July 22, 2024

We're excited to announce that Common Crawl’s statistics are now available on Hugging Face!

The Environmental Impact of the Cloud - the Common Crawl Case Study

July 16, 2024

A look at tools (Green Software) and methodologies for evaluating the environmental impact of the cloud, a nascent practice known as GreenOps.

Host- and Domain-Level Web Graphs April, May, and June 2024

June 30, 2024

We are pleased to announce a new release of host-level and domain-level web graphs based on the crawls of April, May, and June 2024. The crawls used to generate the graphs were CC-MAIN-2024-18, CC-MAIN-2024-22, and CC-MAIN-2024-26.

June 2024 Crawl Archive Now Available

June 28, 2024

The crawl archive for June 2024 is now available. The data was crawled between June 12th and June 26th, and contains 2.7 billion web pages (or 382 TiB of uncompressed content). Page captures are from 52.7 million hosts or 41.4 million registered domains and include 945 million new URLs, not visited in any of our prior crawls.

Dialog and Discovery at AI_dev 2024

June 28, 2024

This month, members of the Common Crawl Foundation attended the AI_dev: Open Source GenAI & ML Summit in Paris, where discussions focused on AI advancements, ethics, and Open Source solutions.

May/June 2024 Newsletter

June 25, 2024

We’re pleased to share our newsletter for May/June 2024, featuring the latest updates, events, and highlights from our community.

Host- and Domain-Level Web Graphs February/March, April, and May 2024

June 4, 2024

We are pleased to announce a new release of host-level and domain-level web graphs based on the crawls of February/March, April, and May 2024.

May 2024 Crawl Archive Now Available

June 3, 2024

The crawl archive for May 2024 is now available. The data was crawled between May 18th and May 31st, and contains 2.7 billion web pages (or 377 TiB of uncompressed content). This is our 100th crawl!

Host- and Domain-Level Web Graphs November/December 2023, February/March 2024, and April 2024

May 5, 2024

We are pleased to announce a new release of host-level and domain-level web graphs based on the crawls of November/December 2023, February/March 2024, and April 2024.

April 2024 Crawl Archive Now Available

May 1, 2024

We are pleased to announce that the crawl archive for April 2024 is now available. The data was crawled between April 12th and April 25th, and contains 2.7 billion web pages (or 386 TiB of uncompressed content). Page captures are from 47.24 million hosts or 37.65 million registered domains and include 0.98 billion new URLs not visited in any of our prior crawls.

March/April 2024 Newsletter

March 26, 2024

We're excited to share an update on some of our recent projects and initiatives in this newsletter!

Host- and Domain-Level Web Graphs September/October, November/December 2023 and February/March 2024

March 14, 2024

We are pleased to announce a new release of host-level and domain-level web graphs based on the crawls of September/October 2023, November/December 2023, and February/March 2024.

February/March 2024 Crawl Archive Now Available

March 11, 2024

The crawl archive for February/March 2024 is now available. The data was crawled between February 20th and March 5th, and contains 3.16 billion web pages (or 424.7 TiB of uncompressed content).

Web Archiving File Formats Explained

March 1, 2024

In the ever-evolving landscape of digital archiving and data analysis, it is helpful to understand the various file formats used for web crawling. From the early ARC format to the more advanced WARC, and the specialised WET and WAT files, each plays an important role in the field of web archiving. In this post, we explain these formats, exploring their unique features, applications, and the enhancements they offer.
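As a rough illustration of the WARC layout the post describes, a record consists of a version line, CRLF-delimited named headers, a blank line, and then the payload. The following sketch parses that structure with the standard library; the sample record is fabricated for illustration and is not real crawl data.

```python
def parse_warc_record(record: str):
    """Split a WARC record into its version line, header dict, and payload.

    WARC headers are CRLF-delimited 'Name: value' lines, terminated by a
    blank line; everything after the blank line is the record payload.
    """
    head, _, payload = record.partition("\r\n\r\n")
    lines = head.split("\r\n")
    version = lines[0]                       # e.g. 'WARC/1.0'
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return version, headers, payload

# Minimal fabricated record for demonstration:
sample = (
    "WARC/1.0\r\n"
    "WARC-Type: response\r\n"
    "WARC-Target-URI: http://example.com/\r\n"
    "Content-Length: 13\r\n"
    "\r\n"
    "Hello, crawl!"
)
version, headers, payload = parse_warc_record(sample)
print(version, headers["WARC-Type"], payload)
```

In practice a library such as `warcio` handles details this sketch ignores (gzip-compressed records, exact Content-Length byte counts, record trailers), but the header/payload split above is the core of the format.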

A Further Look Into the Prevalence of Various ML Opt-Out Protocols

February 22, 2024

This post details some experiments that we have done regarding Machine Learning opt-out protocols. We investigated the prevalence of some of these protocols by taking a deeper look at our WARC files and finding which proportions of domains use which opt-out protocols.
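One common opt-out channel is robots.txt, where a site can disallow specific crawler user agents. A tally of which agents a given robots.txt blocks outright could be sketched as follows; the agent tokens are real crawler names, but this parser is a deliberately minimal approximation of robots.txt group semantics, not a full implementation of the standard.

```python
def disallowed_agents(robots_txt: str, agents):
    """Return the subset of `agents` that robots.txt fully disallows.

    Simplified: consecutive 'User-agent' lines form a group that shares the
    rule lines after them; a blanket 'Disallow: /' blocks the whole group.
    """
    blocked = set()
    group = set()
    in_rules = False
    for raw in robots_txt.splitlines():
        line = raw.split("#", 1)[0].strip()   # strip comments and whitespace
        if ":" not in line:
            continue
        field, value = (part.strip() for part in line.split(":", 1))
        field = field.lower()
        if field == "user-agent":
            if in_rules:                      # a new group starts after rules
                group = set()
                in_rules = False
            group.add(value)
        elif field == "disallow":
            in_rules = True
            if value == "/":
                blocked |= group & set(agents)
    return blocked

sample = """
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /private/

User-agent: *
Disallow:
"""
print(disallowed_agents(sample, ["GPTBot", "CCBot", "Google-Extended"]))
```

Running a function like this over the robots.txt captures in WARC files is one way to estimate what proportion of domains opt out of which crawlers.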

Balancing Discovery and Privacy: A Look Into Opt-Out Protocols

February 13, 2024

What opt-out protocols are, their importance, how you can use them, how we respect them, and what the emerging initiatives are that surround them.

Host- and Domain-Level Web Graphs May/Sep/Nov 2023

December 22, 2023

We are pleased to announce a new release of host-level and domain-level web graphs based on the crawls of May, September, and November of 2023.

November/December 2023 Crawl Archive Now Available

December 15, 2023

The crawl archive for November/December 2023 is now available. The data was crawled between November 28th and December 12th, and contains 3.35 billion web pages (or 454 TiB of uncompressed content).
