January 9, 2015

December 2014 Crawl Archive Available

Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.

The crawl archive for December 2014 is now available! This crawl archive is over 160TB in size and contains 2.08 billion webpages. The files are located in the commoncrawl bucket at /crawl-data/CC-MAIN-2014-52/.

Data Type                  File List                 #Files   Total Size Compressed (TiB)
Segments                   segment.paths.gz          314
WARC                       warc.paths.gz             43636    32.00
WAT                        wat.paths.gz              43636    10.41
WET                        wet.paths.gz              43636    3.69
URL index files            cc-index.paths.gz         302      0.13
Columnar URL index files   cc-index-table.paths.gz   300      0.14

To assist with exploring and using the dataset, we’ve provided the gzipped path files shown in the table above, which list the locations of the segments and of all WARC, WAT, and WET files.

By prepending either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line, you obtain the S3 and HTTP paths, respectively. Thanks again to blekko for their ongoing donation of URLs for our crawl!
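The prefixing step above can be sketched in a few lines of Python. Note that the sample relative path is illustrative of the CC-MAIN-2014-52 layout, not an entry copied from an actual path file:

```python
def to_urls(path: str) -> tuple[str, str]:
    """Prepend the S3 and HTTP prefixes to one line of a *.paths.gz file."""
    path = path.strip()
    return ("s3://commoncrawl/" + path,
            "https://data.commoncrawl.org/" + path)

# Illustrative relative path (hypothetical, for demonstration only).
sample = "crawl-data/CC-MAIN-2014-52/segments/0000000000000.0/warc/example.warc.gz"
s3_url, http_url = to_urls(sample)
print(s3_url)
print(http_url)
```

Running this over every line of, say, warc.paths.gz yields the full set of download locations for the crawl's WARC files.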
