April 1, 2025

March 2025 Crawl Archive Now Available

Note: this post has been marked as obsolete.
Thom Vaughan
Thom is Principal Technologist at the Common Crawl Foundation.

The crawl archive for March 2025 is now available.

The data was crawled between March 15th and March 28th, and contains 2.74 billion web pages (or 455 TiB of uncompressed content). Page captures are from 46.7 million hosts or 38 million registered domains and include 0.9 billion new URLs, not visited in any of our prior crawls.

File List            File                         #Files  Total Size Compressed (TiB)
Segments             segment.paths.gz                100
WARC                 warc.paths.gz                100000  96.67
WAT                  wat.paths.gz                 100000  18.33
WET                  wet.paths.gz                 100000   7.27
Robots.txt           robotstxt.paths.gz           100000   0.15
Non-200 responses    non200responses.paths.gz     100000   3.31
URL index            cc-index.paths.gz               302   0.21
Columnar URL index   cc-index-table.paths.gz         900   0.24

Archive Location & Download

The March 2025 crawl archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2025-13/.

To assist with exploring and using the dataset, we provide gzip compressed files which list all segments, WARC, WAT and WET files.

Prefixing each line with either s3://commoncrawl/ or https://data.commoncrawl.org/ gives you the S3 or HTTPS path, respectively. Please see Get Started for detailed instructions.
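The prefixing step above can be sketched in a few lines of Python. The example path below is illustrative only, not an entry taken from the real file listing:

```python
def to_urls(path: str) -> tuple[str, str]:
    """Turn one line of a *.paths.gz listing into (S3 URL, HTTPS URL)."""
    path = path.strip()
    return (f"s3://commoncrawl/{path}",
            f"https://data.commoncrawl.org/{path}")

# Hypothetical path for illustration; real entries come from warc.paths.gz.
example = "crawl-data/CC-MAIN-2025-13/segments/example/warc/example.warc.gz"
s3, https = to_urls(example)
print(s3)     # s3://commoncrawl/crawl-data/CC-MAIN-2025-13/...
print(https)  # https://data.commoncrawl.org/crawl-data/CC-MAIN-2025-13/...
```

The S3 paths suit bulk processing in AWS us-east-1; the HTTPS paths work with any ordinary download client.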

What's New?

The content limit for fetched payloads has been increased from 1 MiB to 5 MiB.

Compared to the February crawl, which contains almost the same number of fetched pages, this led to:

  • 13% more fetched content (up from 403 TiB to 455 TiB)
  • 18% more compressed WARC data (up from 82.17 TiB to 96.67 TiB)
  • as expected, a decrease in the number of web pages / documents truncated because of the configured content limit
    • from 2.25% to 0.14% for all MIME types
    • PDF documents only: from 25.7% to 6.8%
    • HTML: from 2.2% to 0.04%
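Records whose payload hit the content limit carry a WARC-Truncated header in their record header block. As a minimal sketch (using a hand-made header block for illustration, not real crawl data), truncated captures can be detected like this:

```python
def is_truncated(warc_headers: str) -> bool:
    """True if a WARC record's header block marks the payload as truncated."""
    return any(line.lower().startswith("warc-truncated:")
               for line in warc_headers.splitlines())

# Hand-made example header block, not an actual archived record.
headers = (
    "WARC/1.0\r\n"
    "WARC-Type: response\r\n"
    "WARC-Truncated: length\r\n"  # payload exceeded the configured content limit
    "Content-Length: 5242880\r\n"
)
print(is_truncated(headers))  # True
```

In practice you would read records with a WARC parsing library and check this header per record rather than scanning raw strings.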

We'd love to hear your feedback, so feel free to join us on our Discord server or in our Google group.

This release was authored by:
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
Thom Vaughan
Thom is Principal Technologist at the Common Crawl Foundation.

Erratum: Content is truncated


Some archived content is truncated due to fetch size limits imposed during crawling. This is necessary to handle infinite or exceptionally large data streams (e.g., radio streams). Prior to March 2025 (CC-MAIN-2025-13), the truncation threshold was 1 MiB. From the March 2025 crawl onwards, this limit has been increased to 5 MiB.

For more details, see our truncation analysis notebook.