WikiReverse: Visualizing Reverse Links with the Common Crawl Archive
February 18, 2015
This is a guest blog post by Ross Fairbanks, a software developer based in Barcelona. He mainly develops in Ruby and is interested in open data and cloud computing. The post below describes his open data project and why he built it.
5 Good Reads in Big Open Data: Feb 13 2015
February 13, 2015
What does it mean for the Open Web if users don't know they're on the internet? Via QUARTZ: “This is more than a matter of semantics. The expectations and behaviors of the next billion people to come online will have profound effects on how the internet evolves. If the majority of the world’s online population spends time on Facebook, then policymakers, businesses, startups, developers, nonprofits, publishers, and anyone else interested in communicating with them will also, if they are to be effective, go to Facebook. That means they, too, must then play by the rules of one company. And that has implications for us all.”
5 Good Reads in Big Open Data: Feb 6 2015
February 6, 2015
The Dark Side of Open Data - via Forbes: “There’s no reason to doubt that opening to the public of data previously unreleased by governments, if well managed, can be a boon for the economy and, ultimately, for the citizens themselves. It wouldn’t hurt, however, to strip out the grandiose rhetoric that sometimes surrounds them, and look, case by case, at the contexts and motivations that lead to their disclosure.”
The Promise of Open Government Data & Where We Go Next
January 29, 2015
One of the biggest boons for the Open Data movement in recent years has been the enthusiastic support from all levels of government for releasing more, and higher quality, datasets to the public. In May 2013, the White House released its Open Data Policy and announced the launch of Project Open Data, a repository of tools and information--which anyone is free to contribute to--that help government agencies release data that is “available, discoverable, and usable.”
December 2014 Crawl Archive Available
January 9, 2015
The crawl archive for December 2014 is now available! This crawl archive is over 160TB in size and contains 2.08 billion webpages.
November 2014 Crawl Archive Available
December 24, 2014
The crawl archive for November 2014 is now available! This crawl archive is over 135TB in size and contains 1.95 billion webpages.
Please Donate To Common Crawl!
December 10, 2014
Big data has the potential to change the world. The talent exists and the tools are already there. What’s lacking is access to data. Imagine the questions we could answer and the problems we could solve if talented, creative technologists could freely access more big data.
October 2014 Crawl Archive Available
November 20, 2014
The crawl archive for October 2014 is now available! This crawl archive is over 254TB in size and contains 3.72 billion webpages.
September 2014 Crawl Archive Available
November 12, 2014
The crawl archive for September 2014 is now available! This crawl archive is over 220TB in size and contains 2.98 billion webpages.
August 2014 Crawl Data Available
September 22, 2014
The August crawl of 2014 is now available! The new dataset is over 200TB in size and contains approximately 2.8 billion webpages.
Web Data Commons Extraction Framework for the Distributed Processing of CC Data
August 29, 2014
This is a guest blog post by Robert Meusel, a researcher at the University of Mannheim in the Data and Web Science Research Group and a key member of the Web Data Commons project. The post below describes a new tool produced by Web Data Commons for extracting data from the Common Crawl data.
July 2014 Crawl Data Available
August 7, 2014
The July crawl of 2014 is now available! The new dataset is over 266TB in size and contains approximately 3.6 billion webpages.
April 2014 Crawl Data Available
July 16, 2014
The April crawl of 2014 is now available! The new dataset is over 183TB in size and contains approximately 2.6 billion webpages.
Navigating the WARC file format
April 2, 2014
Wait, what's WAT, WET and WARC? Common Crawl recently switched to the Web ARChive (WARC) format. The WARC format allows for more efficient storage and processing of Common Crawl's free multi-billion-page web archives, which can be hundreds of terabytes in size.
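For readers who want to see the difference in practice, here is a minimal sketch of iterating over the records in a WARC file from Python. It assumes the third-party warcio library (not mentioned in the post above) and a locally downloaded file; the file name below is a placeholder, not an actual Common Crawl segment.

```python
# Minimal sketch: iterate over records in a gzipped WARC file.
# Assumes `pip install warcio`; the input path is hypothetical.
from warcio.archiveiterator import ArchiveIterator

with open('example-segment.warc.gz', 'rb') as stream:
    for record in ArchiveIterator(stream):
        # 'response' records hold the raw HTTP responses captured by the crawler.
        if record.rec_type == 'response':
            url = record.rec_headers.get_header('WARC-Target-URI')
            body = record.content_stream().read()
            print(url, len(body))
```

The same loop works for WAT and WET files, since both reuse the WARC container format; the record types to filter on are 'metadata' and 'conversion' respectively.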
March 2014 Crawl Data Now Available
March 26, 2014
The March crawl of 2014 is now available! The new dataset contains approximately 2.8 billion webpages and is about 223TB in size.
Common Crawl's Move to Nutch
February 20, 2014
Last year we transitioned from our custom crawler to the Apache Nutch crawler to run our 2013 crawls as part of our migration from our old data center to the cloud. Our old crawler was highly tuned to our data center environment, where every machine was identical, with large amounts of memory, hard drives, and fast networking.
Lexalytics Text Analysis Work with Common Crawl Data
February 4, 2014
This is a guest blog post by Oskar Singer, a Software Developer and Computer Science student at the University of Massachusetts Amherst. He recently did some very interesting text analytics work during his internship at Lexalytics. The post below describes the work, how Common Crawl data was used, and includes a link to code.
Winter 2013 Crawl Data Now Available
January 8, 2014
The second crawl of 2013 is now available! In late November, we published the data from the first crawl of 2013. The new dataset was collected at the end of 2013, contains approximately 2.3 billion webpages and is 148TB in size.
New Crawl Data Available!
November 27, 2013
We are very pleased to announce that new crawl data is now available! The data was collected in 2013, contains approximately 2 billion web pages and is 102TB in size (uncompressed).
Hyperlink Graph from Web Data Commons
November 13, 2013
The talented team at Web Data Commons recently extracted and analyzed the hyperlink graph within the Common Crawl 2012 corpus. Altogether, they found 128 billion hyperlinks connecting 3.5 billion pages.
Startup Profile: SwiftKey’s Head Data Scientist on the Value of Common Crawl’s Open Data
August 14, 2013
Sebastian Spiegler is the head of the data team at SwiftKey and a volunteer at Common Crawl. Yesterday we posted Sebastian’s statistical analysis of the 2012 Common Crawl corpus. Today we are following it up with a great video featuring Sebastian talking about why crawl data is valuable, his research, and why open data is important.
A Look Inside Our 210TB 2012 Web Corpus
August 13, 2013
Want to know more detail about what data is in the 2012 Common Crawl corpus without running a job? Now you can thanks to Sebastian Spiegler!
Professor Jim Hendler Joins the Common Crawl Advisory Board!
March 22, 2013
We are extremely happy to announce that Professor Jim Hendler has joined the Common Crawl Advisory Board. Professor Hendler is the Head of the Computer Science Department at Rensselaer Polytechnic Institute (RPI) and also serves as Professor of Computer and Cognitive Science at RPI’s Tetherless World Constellation.
URL Search Tool!
March 5, 2013
A couple months ago we announced the creation of the Common Crawl URL Index and followed it up with a guest post by Jason Ronallo describing how he had used the URL Index. Today we are happy to announce a tool that makes it even easier for you to take advantage of the URL Index!
The Winners of The Norvig Web Data Science Award
February 25, 2013
We are very excited to announce that the winners of the Norvig Web Data Science Award are Lesley Wevers, Oliver Jundt, and Wanno Drijfhout from the University of Twente!