Common Crawl Blog

The latest news, interviews, technologies, and resources.

The Winners of The Norvig Web Data Science Award

We are very excited to announce that the winners of the Norvig Web Data Science Award are Lesley Wevers, Oliver Jundt, and Wanno Drijfhout from the University of Twente!
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
Common Crawl URL Index

We are thrilled to announce that Common Crawl now has a URL index! Scott Robertson, founder of triv.io graciously donated his time and skills to creating this valuable tool.
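To sketch what a URL index makes possible: each entry points to where a page's capture lives inside the crawl files, so a single ranged read can fetch one page instead of scanning the corpus. The JSON field names below (`filename`, `offset`, `length`) are illustrative, not the actual schema of the index Scott built.

```python
# Hypothetical sketch of using a Common Crawl URL index entry.
# The field names here are assumptions for illustration only.
import json

sample_entry = json.dumps({
    "url": "http://example.com/",
    "filename": "crawl-data/segment-1/file.arc.gz",
    "offset": 1024,
    "length": 2048,
})

record = json.loads(sample_entry)
# With file name, byte offset, and length in hand, one ranged read
# (e.g. an HTTP Range request against the public bucket) retrieves
# just that page's record.
byte_range = f"bytes={record['offset']}-{record['offset'] + record['length'] - 1}"
print(byte_range)  # bytes=1024-3071
```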
Scott Robertson
Scott Robertson is a founder of triv.io, and is a passionate believer in simplifying complicated processes.
Towards Social Discovery - New Content Models; New Data; New Toolsets

This is a guest blog post by Matthew Berk, Founder of Lucky Oyster. Matthew has been on the front lines of search technology for the past decade.
Matthew Berk
Matthew Berk is a founder at Bean Box and Open List, worked at Jupiter Research and Marchex. Matthew studied at Cornell University and Johns Hopkins University.
blekko donates search data to Common Crawl

We are very excited to announce that blekko is donating search data to Common Crawl! Founded in 2007, blekko has created a new type of search experience that enlists human editors in its efforts to eliminate spam and personalize search.
Common Crawl Foundation
Winners of the Code Contest!

We’re very excited to announce the winners of the First Ever Common Crawl Code Contest! We were thrilled by the response to the contest and the many great entries. Several people let us know that they were not able to complete their project in time to submit to the contest. We’re currently working with them to finish the projects outside of the contest and we’ll be showcasing some of those projects in the near future!
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
Common Crawl Code Contest Extended Through the Holiday Weekend

Do you have a project that you are working on for the Common Crawl Code Contest that is not quite ready? If so, you are not the only one. A few people have emailed us to let us know their code is almost ready but they are worried about the deadline, so we have decided to extend the deadline through the holiday weekend.
Common Crawl Foundation
TalentBin Adds Prizes To The Code Contest

The prize package for the Common Crawl Code Contest now includes three Nexus 7 tablets thanks to TalentBin!
Common Crawl Foundation
2012 Crawl Data Now Available

I am very happy to announce that Common Crawl has released 2012 crawl data as well as a number of significant enhancements to our example library and help pages.
Common Crawl Foundation
Amazon Web Services sponsoring $50 in credit to all contest entrants!

Did you know that every entry to the First Ever Common Crawl Code Contest gets $50 in Amazon Web Services (AWS) credits? If you're a developer interested in big datasets and learning new platforms like Hadoop, you truly have no reason not to try your hand at creating an entry to the code contest!
Allison Domicone
Mat Kelcey Joins The Common Crawl Advisory Board

We are excited to announce that Mat Kelcey has joined the Common Crawl Board of Advisors! Mat has been extremely helpful to Common Crawl over the last several months and we are very happy to have him as an official Advisor to the organization.
Common Crawl Foundation
Still time to participate in the Common Crawl code contest

There is still plenty of time left to participate in the Common Crawl code contest! The contest is accepting entries until August 30th, so why not spend some time this week playing around with the Common Crawl corpus and then submit your work to the contest?
Common Crawl Foundation
Big Data Week: meetups in SF and around the world

Big Data Week aims to connect data enthusiasts, technologists, and professionals across the globe through a series of meet-ups. The idea is to build community among groups working on big data and to spur conversations about relevant topics ranging from technology to commercial use cases.
Allison Domicone
OSCON 2012

We're just one month away from one of the biggest and most exciting events of the year, O'Reilly's Open Source Convention (OSCON). This year's conference will be held July 16th-20th in Portland, Oregon.
Allison Domicone
The Open Cloud Consortium’s Open Science Data Cloud

Common Crawl has started talking with the Open Cloud Consortium (OCC) about working together. If you haven’t already heard of the OCC, it is an awesome nonprofit organization managing and operating cloud computing infrastructure that supports scientific, environmental, medical and health care research.
Common Crawl Foundation
Twelve steps to running your Ruby code across five billion web pages

The following is a guest blog post by Pete Warden, a member of the Common Crawl Advisory Board. Pete is a British-born programmer living in San Francisco. After spending over a decade as a software engineer, including 5 years at Apple, he’s now focused on a career as a mad scientist.
Pete Warden
Pete is a British-born programmer living in San Francisco, and is a member of the Common Crawl advisory board.
Common Crawl's Brand Spanking New Video and First Ever Code Contest!

At Common Crawl we've been busy recently! After announcing the release of 2012 data and other enhancements, we are now excited to share with you this short video that explains why we here at Common Crawl are working hard to bring web crawl data to anyone who wants to use it.
Allison Domicone
Learn Hadoop and get a paper published

We're looking for students who want to try out the Apache Hadoop platform and get a technical report published.
Allison Domicone
Data 2.0 Summit

Next week a few members of the Common Crawl team are going to the Data 2.0 Summit in San Francisco.
Common Crawl Foundation
Common Crawl's Advisory Board

As part of our ongoing effort to grow Common Crawl into a truly useful and innovative tool, we recently formed an Advisory Board to guide us in our efforts. We have a stellar line-up of advisory board members who will lend their passion and expertise in numerous fields as we grow our vision.
Allison Domicone
Common Crawl on AWS Public Data Sets

Common Crawl is thrilled to announce that our data is now hosted on Amazon Web Services' Public Data Sets.
Common Crawl Foundation
Web Data Commons

For the last few months, we have been talking with Chris Bizer and Hannes Mühleisen at the Freie Universität Berlin about their work, and we have been greatly looking forward to the announcement of the Web Data Commons.
Common Crawl Foundation
SlideShare: Building a Scalable Web Crawler with Hadoop

Common Crawl on building an open, web-scale crawl using Hadoop.
Common Crawl Foundation
Video: Gil Elbaz at Web 2.0 Summit 2011

Hear Common Crawl founder Gil Elbaz discuss how data accessibility is crucial to increasing the rate of innovation, and share his ideas on how to facilitate increased access to data.
Common Crawl Foundation
Video: This Week in Startups - Gil Elbaz and Nova Spivack

Nova and Gil, in discussion with host Jason Calacanis, explore in depth what Common Crawl is all about and how it fits into the larger picture of online search and indexing.
Common Crawl Foundation
Video Tutorial: MapReduce for the Masses

Learn how you can harness the power of MapReduce data analysis against the Common Crawl dataset with nothing more than five minutes of your time, a bit of local configuration, and 25 cents.
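The pattern the tutorial teaches can be sketched without a cluster. Below is a minimal, local simulation of MapReduce in plain Python, standing in for the Hadoop job the video walks through: a mapper emits (key, 1) pairs and a reducer sums them after the shuffle.

```python
# Minimal MapReduce sketch: mapper emits (key, 1) pairs, reducer sums
# them per key. The shuffle step is simulated locally, not run on Hadoop.
from collections import defaultdict

def mapper(line):
    # Emit one (word, 1) pair per whitespace-delimited token.
    for token in line.split():
        yield token.lower(), 1

def reducer(pairs):
    # Sum the counts for each key, as Hadoop would after the shuffle.
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

sample_records = ["Common Crawl data", "open crawl data"]
pairs = [pair for line in sample_records for pair in mapper(line)]
print(reducer(pairs))  # {'common': 1, 'crawl': 2, 'data': 2, 'open': 1}
```

Against the real corpus, the same mapper/reducer pair would run as a Hadoop job over crawl records instead of these two sample strings.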
Common Crawl Foundation
Common Crawl Enters A New Phase

A little under four years ago, Gil Elbaz formed the Common Crawl Foundation. He was driven by a desire to ensure a truly open web. He knew that decreasing storage and bandwidth costs, along with the increasing ease of crunching big data, made building and maintaining an open repository of web crawl data feasible.
Common Crawl Foundation
Gil Elbaz and Nova Spivack on This Week in Startups

Nova and Gil, in discussion with host Jason Calacanis, explore in depth what Common Crawl is all about and how it fits into the larger picture of online search and indexing. Underlying their conversation is an exploration of how Common Crawl's open crawl of the web is a powerful asset for educators, researchers, and entrepreneurs.
Allison Domicone
MapReduce for the Masses: Zero to Hadoop in Five Minutes with Common Crawl

Common Crawl aims to change the big data game with our repository of over 40 terabytes of high-quality web crawl data hosted in the Amazon cloud: a net total of 5 billion crawled pages.
Common Crawl Foundation
Answers to Recent Community Questions

In this post we respond to the most common questions. Thanks for all the support and please keep the questions coming!
Common Crawl Foundation
Common Crawl Discussion List

We have started a Common Crawl discussion list to enable discussions and encourage collaboration between the community of coders, hackers, data scientists, developers and organizations interested in working with open web crawl data.
Common Crawl Foundation

July 2019 crawl archive now available

July 30, 2019

The crawl archive for July 2019 is now available! It contains 2.6 billion web pages or 220 TiB of uncompressed content, crawled between July 15th and 24th.

June 2019 crawl archive now available

July 2, 2019

The crawl archive for June 2019 is now available! It contains 2.6 billion web pages or 220 TiB of uncompressed content, crawled between June 16th and 27th with an operational break from 21st to 24th.

May 2019 crawl archive now available

May 31, 2019

The crawl archive for May 2019 is now available! It contains 2.65 billion web pages or 220 TiB of uncompressed content, crawled between May 19th and 27th.

Host- and Domain-Level Web Graphs Feb/Mar/Apr 2019

May 9, 2019

We are pleased to announce a new release of host-level and domain-level web graphs based on the published crawls of February, March and April 2019. Additional information about the data formats, the processing pipeline, our objectives, and credits can be found in the announcements of prior webgraph releases.

April 2019 crawl archive now available

April 30, 2019

The crawl archive for April 2019 is now available! It contains 2.5 billion web pages or 198 TiB of uncompressed content, crawled between April 18th and 26th.

March 2019 crawl archive now available

April 1, 2019

The crawl archive for March 2019 is now available! It contains 2.55 billion web pages or 210 TiB of uncompressed content, crawled between March 18th and 27th.

February 2019 crawl archive now available

March 1, 2019

The crawl archive for February 2019 is now available! It contains 2.9 billion web pages or 225 TiB of uncompressed content, crawled between February 15th and 24th.

Host- and Domain-Level Web Graphs Nov/Dec/Jan 2018 - 2019

February 20, 2019

We are pleased to announce a new release of host-level and domain-level web graphs based on the published crawls of November, December 2018 and January 2019. Additional information about the data formats, the processing pipeline, our objectives, and credits can be found in the announcements of prior webgraph releases.

January 2019 crawl archive now available

January 28, 2019

The crawl archive for January 2019 is now available! It contains 2.85 billion web pages or 240 TiB of uncompressed content, crawled between January 15th and 24th.

December 2018 crawl archive now available

December 22, 2018

The crawl archive for December 2018 is now available! It contains 3.1 billion web pages or 250 TiB of uncompressed content, crawled between December 9th and 19th.

November 2018 crawl archive now available

November 29, 2018

The crawl archive for November 2018 is now available! It contains 2.6 billion web pages or 220 TiB of uncompressed content, crawled between November 12th and 22nd.

Host- and Domain-Level Web Graphs Aug/Sep/Oct 2018

November 13, 2018

We are pleased to announce a new release of host-level and domain-level web graphs based on the published crawls of August, September and October 2018. Additional information about data formats, the processing pipeline, our objectives, and credits can be found in the announcements of prior webgraph releases.

October 2018 crawl archive now available

October 30, 2018

The crawl archive for October 2018 is now available! It contains 3.0 billion web pages and 240 TiB of uncompressed content, crawled between October 15th and 24th.

September 2018 crawl archive now available

October 3, 2018

The crawl archive for September 2018 is now available! It contains 2.8 billion web pages and 220 TiB of uncompressed content, crawled between September 17th and 26th.

August Crawl Archive Introduces Language Annotations

August 26, 2018

The crawl archive for August 2018 is now available! It contains 2.65 billion web pages and 220 TiB of uncompressed content, crawled between August 14th and 22nd. Together with an upgrade of the crawler software, we have plugged in a language detector and now annotate each web page with the language it is written in.
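To illustrate what the new annotations make easy: once each page record carries a detected language, filtering a crawl subset by language becomes trivial. The field name `languages` and the sample records below are assumptions for illustration, not the exact annotation key or format.

```python
# Hedged sketch: filtering language-annotated crawl records.
# The "languages" field name is an assumption, not the actual schema.
records = [
    {"url": "http://example.com/", "languages": "eng"},
    {"url": "http://example.de/", "languages": "deu"},
    {"url": "http://example.org/", "languages": "eng"},
]

# Keep only pages detected as English.
english = [r["url"] for r in records if r["languages"] == "eng"]
print(english)  # ['http://example.com/', 'http://example.org/']
```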

Host- and Domain-Level Web Graphs May/June/July 2018

August 12, 2018

We are pleased to announce a new release of host-level and domain-level web graphs based on the published crawls of May, June and July 2018. Additional information about data formats, the processing pipeline, our objectives, and credits can be found in the announcements of prior webgraph releases.

3.25 Billion Pages Crawled in July 2018

July 28, 2018

The crawl archive for July 2018 is now available! The archive contains 3.25 billion web pages and 255 TiB of uncompressed content, crawled between July 15th and 23rd.

June 2018 Crawl Archive Now Available

July 2, 2018

The crawl archive for June 2018 is now available! The archive contains 3.05 billion web pages and 235 TiB of uncompressed content, crawled between June 18th and 25th.

May 2018 Crawl Archive Now Available

June 1, 2018

The crawl archive for May 2018 is now available! The archive contains 2.75 billion web pages and 215 TiB of uncompressed content, crawled between May 20th and 28th.

Host- and Domain-Level Web Graphs Feb/Mar/Apr 2018

May 7, 2018

We are pleased to announce a new release of host-level and domain-level web graphs based on the published crawls of February, March and April 2018. Additional information about data formats, the processing pipeline, our objectives, and credits can be found in the announcements of prior webgraph releases.

April 2018 Crawl Archive Now Available

May 2, 2018

The crawl archive for April 2018 is now available! The archive contains 3.1 billion web pages and 230 TiB of uncompressed content, crawled between April 19th and 27th.

March 2018 Crawl Archive Now Available

March 29, 2018

The crawl archive for March 2018 is now available! The archive contains 3.2 billion web pages and 250+ TiB of uncompressed content, crawled between March 17th and 25th.

February 2018 Crawl Archive Now Available

March 2, 2018

The crawl archive for February 2018 is now available! The archive contains 3.4 billion web pages and 270+ TiB of uncompressed content, crawled between February 17th and 26th.

Index to WARC Files and URLs in Columnar Format

March 1, 2018

We're happy to announce the release of an index to WARC files and URLs in columnar format. The columnar format (we use Apache Parquet) allows the index to be queried or processed efficiently, saving time and computing resources. In particular, if only a few columns are accessed, recent big data tools run impressively fast.

Host- and Domain-Level Web Graphs Nov/Dec/Jan 2017-2018

February 8, 2018

We are pleased to announce a new release of host-level and domain-level web graphs based on the published crawls of November, December 2017 and January 2018. These graphs, along with ranked lists of hosts and domains, follow the prior web graph releases (Feb/Mar/Apr 2017, May/Jun/Jul 2017 and Aug/Sep/Oct 2017).
