August 26, 2018

August Crawl Archive Introduces Language Annotations

Note: this post has been marked as obsolete.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.

The crawl archive for August 2018 is now available! It contains 2.65 billion web pages and 220 TiB of uncompressed content, crawled between August 14th and 22nd.

Data Type                 File List                   #Files   Total Size Compressed (TiB)
Segments                  segment.paths.gz               100          –
WARC                      warc.paths.gz                71520      67.79
WAT                       wat.paths.gz                 71520      16.76
WET                       wet.paths.gz                 71520       6.92
Robots.txt files          robotstxt.paths.gz           71520       0.19
Non-200 responses         non200responses.paths.gz     71520       1.79
URL index files           cc-index.paths.gz              302       0.21
Columnar URL index files  cc-index-table.paths.gz        900       0.24

Together with an upgrade of the crawler software we've plugged in a language detector and now provide as annotation the language a web page is written in.

Please note that the WARC files of August 2018 (CC-MAIN-2018-34) are affected by a WARC format error and contain an extra \r\n between the HTTP header and the payload content. The given "Content-Length" is also off by 2 bytes. For more information about this bug see this post on our user forum.

Language Annotations

We now run the Compact Language Detector 2 (CLD2) on HTML pages to identify the language of a document. CLD2 is able to identify 160 different languages and up to three languages per document. The detected languages are shown as ISO-639-3 codes in a new field of the URL index, e.g., "languages": "zho,eng". The WARC metadata records contain the full CLD2 response, including scores and text coverage:
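To illustrate how the annotation relates to the URL index field, here is a minimal sketch in Python. The JSON layout below is an illustration only, not the authoritative format of the WARC metadata record:

```python
import json

# Hypothetical CLD2 annotation as it might appear in a WARC metadata record;
# field names and values here are assumptions for illustration.
record = json.loads("""
{
  "languages-cld2": {
    "reliable": true,
    "text-bytes": 4096,
    "languages": [
      {"code-iso-639-3": "zho", "text-covered": 0.62, "score": 1951.0},
      {"code-iso-639-3": "eng", "text-covered": 0.37, "score": 539.0}
    ]
  }
}
""")

# Join the detected languages the way the URL index field shows them,
# e.g. "languages": "zho,eng".
codes = ",".join(lang["code-iso-639-3"]
                 for lang in record["languages-cld2"]["languages"])
print(codes)  # zho,eng
```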


On GitHub you'll find the Java bindings to the CLD2 native library, and the distribution of the primary document languages is part of our crawl statistics. Please note that the columnar index does not contain the detected languages for now, as this requires a change of the table schema. We plan to add the new fields after we've verified that an update of the schema does not break common tools (e.g., Spark or Presto/Athena) used to process the table.

Crawler Software Upgrade and Minor Changes to WARC Files

Our crawler has been upgraded and is now based on the most recent version of Apache Nutch (1.15). The source code can be found on GitHub in our Nutch fork.

In conjunction with the crawler upgrade we made the following minor changes affecting the WARC record format of the crawl archives:

  • "HTTP 304 Not Modified" responses are now stored as WARC revisit records in the "crawldiagnostics" subset, along with 404s, redirects and other non-200 responses. For now the revisit records contain a payload digest although HTTP 304 responses carry no payload: the columnar index requires the digest field and we want to make sure that all tools continue to work as expected. The SHA-1 digest of an empty payload (zero bytes) is used for the revisit records.
  • All HTTP response headers are now preserved. As before, if the page content is truncated or was compressed or chunked during transfer, the headers "Content-Encoding", "Transfer-Encoding" and "Content-Length" need to be rewritten, otherwise WARC readers may fail reading the record payload. E.g., a page compressed on the HTTP protocol layer may have the following headers – the original headers are prefixed with X-Crawler-:
    X-Crawler-Content-Encoding: gzip
    X-Crawler-Content-Length: 2010
    Content-Length: 16125
  • The crawler may now also store pages fetched partially because of a network disconnect. These captures are marked as WARC-Truncated: disconnect in the WARC record header. Note that the crawler may also truncate the page payload because of a content limit (we store only 1 MB per page) or a time limit (after 10 minutes a page download is canceled).
  • The WARC record headers still indicate "WARC/1.0" although we follow the WARC specification, v1.1. While testing various WARC reader libraries we found that at least two of them fail on records with a "WARC/1.1" header.
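The X-Crawler- header rewriting described above can be reversed when reconstructing the original HTTP response. A minimal sketch, using the header names from the example given (the function is ours, not part of any Common Crawl tooling):

```python
# Sketch: undo the X-Crawler- header rewriting. Where an original header was
# preserved under an X-Crawler- prefix, it replaces the rewritten value.
def restore_original_headers(headers):
    restored = {}
    for name, value in headers.items():
        if name.startswith("X-Crawler-"):
            # Preserved original header wins over the rewritten one.
            restored[name[len("X-Crawler-"):]] = value
        elif "X-Crawler-" + name not in headers:
            restored[name] = value
    return restored

headers = {
    "X-Crawler-Content-Encoding": "gzip",
    "X-Crawler-Content-Length": "2010",
    "Content-Length": "16125",
}
print(restore_original_headers(headers))
# {'Content-Encoding': 'gzip', 'Content-Length': '2010'}
```

Note that after restoring the headers, the stored payload is still the decompressed content, so the restored Content-Length no longer matches the stored bytes.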

Please note that due to a bug the first two crawled segments are without robots.txt captures.

Archive Location and Download

The August crawl archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2018-34/.

To assist with exploring and using the dataset, we provide gzipped files which list all segments, WARC, WAT and WET files. Prefixing each line with either s3://commoncrawl/ or https://data.commoncrawl.org/ yields the S3 and HTTP paths, respectively.
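A small sketch of this prefixing, using the wat.paths.gz listing from the table above as the file to expand:

```python
# Sketch: expand a relative path from a *.paths.gz listing into full
# S3 and HTTPS download URLs by simple prefixing.
S3_PREFIX = "s3://commoncrawl/"
HTTP_PREFIX = "https://data.commoncrawl.org/"

def expand(path):
    return S3_PREFIX + path, HTTP_PREFIX + path

s3_url, http_url = expand("crawl-data/CC-MAIN-2018-34/wat.paths.gz")
print(http_url)  # https://data.commoncrawl.org/crawl-data/CC-MAIN-2018-34/wat.paths.gz
```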

The Common Crawl URL Index for this crawl is available at: https://index.commoncrawl.org/CC-MAIN-2018-34/. Also the columnar index has been updated to contain this crawl.
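As a sketch of how the URL index can be queried, the snippet below builds a query URL for this crawl's index endpoint and reads the new languages field from a result line. The sample result line is hand-written and illustrative, not real index output:

```python
import json
from urllib.parse import urlencode

# Sketch: build a query against the URL index server for this crawl.
query = ("https://index.commoncrawl.org/CC-MAIN-2018-34-index?"
         + urlencode({"url": "example.com", "output": "json"}))

# Illustrative (made-up) result line showing the new "languages" field.
sample_line = '{"url": "http://example.com/", "status": "200", "languages": "eng"}'
print(json.loads(sample_line)["languages"])  # eng
```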

Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact info@commoncrawl.org for sponsorship information.


Erratum: WAT data: repeated WARC and HTTP headers are not preserved

Repeated HTTP and WARC headers were not represented in the JSON data in WAT files: when a header occurred multiple times, only the last value was stored and the other values were lost. This issue was fixed with CC-MAIN-2024-51, see ia-web-commons#18. All WAT files from CC-MAIN-2013-20 until CC-MAIN-2024-46 are affected.

Erratum: WARC revisit metadata records

The revisit records in the Common Crawl WARC archives in all crawls from CC-MAIN-2018-34 to CC-MAIN-2024-46 (since Aug 2018) lack the metadata record which is attached to all response records. Fixed with CC-MAIN-2024-51, see commoncrawl/nutch#33. Note: before CC-MAIN-2018-34, WARC revisit records were not stored at all.

Erratum: Erroneous title field in WAT records

Originally reported by: Robert Waksmunski

The "Title" written to WAT records at the JSON path `Envelope > Payload-Metadata > HTTP-Response-Metadata > HTML-Metadata > Head > Title` is not the content of the <title> element in the HTML header (the <head> element) if the page contains further <title> elements in the page body: the content of the last <title> element is written to the WAT "Title" instead. This bug was observed on HTML pages that include embedded SVG graphics.

The issue was reported by the user Robert Waksmunski and was fixed for CC-MAIN-2024-42 by commoncrawl/ia-web-commons#37.

This erratum affects all crawls from CC-MAIN-2013-20 until CC-MAIN-2024-38.

Erratum: Redundant extra line in response records

Originally reported by: Greg Lindahl

The WARC files of the August 2018 crawl contain a redundant empty line between the HTTP headers and the payload of WARC response records. This extra line may cause the following problems when processing the WARC files:

  • Because WARC readers/parsers assume only a single empty line, the extracted payload content starts with \r\n. While leading new lines are usually ignored by HTML processors, document parsers for binary formats (PDF, office documents, etc.) are likely to fail.
  • The length of the payload given in the optional HTTP Content-Length header is off by 2 bytes. This may also cause WARC processors to fail.
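A minimal workaround sketch (our own helper, not an official fix): strip the spurious leading CRLF from the payload and correct the Content-Length header accordingly.

```python
# Sketch: work around the extra CRLF in CC-MAIN-2018-34 response records by
# stripping it from the payload and adjusting the HTTP Content-Length by -2.
def fix_payload(payload, headers):
    if payload.startswith(b"\r\n"):
        payload = payload[2:]
        if "Content-Length" in headers:
            headers = dict(headers)  # don't mutate the caller's dict
            headers["Content-Length"] = str(int(headers["Content-Length"]) - 2)
    return payload, headers

body, hdrs = fix_payload(b"\r\nHello, world!", {"Content-Length": "15"})
print(body, hdrs["Content-Length"])  # b'Hello, world!' 13
```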

Please see this issue on GitHub for more information. We apologise for this bug!

Erratum: Incorrect fetch_time metadata

In crawls CC-MAIN-2016-36 to CC-MAIN-2016-50, and CC-MAIN-2018-34 to CC-MAIN-2019-47 the fetch_time metadata for robots.txt might be incorrect. The correct times can be found in collinfo.json. See the related issue (commoncrawl/nutch#14) for more information.

Erratum: Missing Language Classification

Starting with crawl CC-MAIN-2018-39 we added a language classification field ("content-languages") to the columnar indexes, WAT files, and WARC metadata for all subsequent crawls. The CLD2 classifier is used and reports up to three languages per document, identified by ISO-639-3 (three-character) language codes.