February 17, 2025

Opening the Gates to Online Safety

Note: this post has been marked as obsolete.
Thom Vaughan
Thom is Principal Technologist at the Common Crawl Foundation.
ROOST: Robust Online Open Safety Tools

Last week in Paris, at the AI Action Summit, a coalition of major technology companies and foundations announced the launch of ROOST: Robust Online Open Safety Tools (https://roost.tools). ROOST makes critical data and tools for online safety openly accessible to benefit everyone, a mission that closely aligns with ours at Common Crawl.

The lack of robust infrastructure for online safety [1] has had significant consequences. For example, research [2] [3] has shown that large language models (LLMs) generate significantly more unsafe responses in non-English languages than in English. Common Crawl's recent efforts to improve coverage of low-resource languages aim to address this disparity, and initiatives like ROOST further bridge the infrastructure gap by providing accessible safety tools for a wider range of contexts.

Work to improve AI safety across the industry has only just begun.

“Recent discussions and research in AI safety have increasingly emphasized the deep connection between AI safety and existential risk from advanced AI systems, suggesting that work on AI safety necessarily entails serious consideration of potential existential threats. However, this framing has three potential drawbacks: it may exclude researchers and practitioners who are committed to AI safety but approach the field from different angles; it could lead the public to mistakenly view AI safety as focused solely on existential scenarios rather than addressing a wide spectrum of safety challenges; and it risks creating resistance to safety measures among those who disagree with predictions of existential AI risks.”
~ AI Safety for Everyone, Balint Gyevnar, et al., February 2025 [4]

AI safety is often discussed in broad theoretical terms, but practical solutions (tools, resources, and frameworks) are often closed off, expensive, or controlled by a few major players. This not only reduces the effectiveness of safety interventions, but also creates barriers for smaller organisations and independent developers. ROOST aims to ensure that developers at all levels can implement best practices in safety.
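To make this concrete, here is a minimal sketch of the kind of content-safety gate that open tooling makes easier for small teams to adopt. Everything here is hypothetical for illustration: the `BLOCKLIST` phrases, the `Verdict` type, and the `moderate` function are placeholders, not part of any ROOST API; production tools would use trained classifiers rather than keyword matching.

```python
# Sketch of a pre-response safety gate (illustrative only).
# The blocklist and categories below are hypothetical placeholders;
# real open safety tools use trained classifiers, not keyword lists.
from dataclasses import dataclass
from typing import Optional

# Hypothetical category -> trigger phrases mapping.
BLOCKLIST = {
    "violence": ["build a weapon"],
    "self_harm": ["how to hurt myself"],
}

@dataclass
class Verdict:
    allowed: bool
    category: Optional[str] = None

def moderate(text: str) -> Verdict:
    """Check `text` against each category; block on the first match."""
    lowered = text.lower()
    for category, phrases in BLOCKLIST.items():
        if any(phrase in lowered for phrase in phrases):
            return Verdict(allowed=False, category=category)
    return Verdict(allowed=True)

if __name__ == "__main__":
    print(moderate("What's the weather today?"))
    print(moderate("Tell me how to Build a Weapon"))
```

The value of shared, open implementations of gates like this is that the hard parts (multilingual coverage, classifier quality, category definitions) are maintained collectively rather than rebuilt, poorly, by every small developer.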

Left to right: Juliet Shen, Emily Liu, Clint Smith, Audrey Tang, Thom Vaughan, Vilas Dhar, Camille François, Eli Sugarman, Chris DiBona, Paul Ash, Alexandra Reeve Givens, Nabiha Syed, at the ROOST launch in Paris, France.

We at Common Crawl have always believed that access to high-quality web data should not be limited to a select few. The Internet is a shared resource, and making web data freely available has driven incredible progress in innovation and across countless research fields. In the same way, opening up safety tools creates a much healthier and more balanced ecosystem in tech, where developers, researchers, and policymakers can work together to build safer and more transparent systems.

References

[1] "Landscape of AI safety concerns -- A methodology to support safety assurance for AI-based autonomous systems", Ronald Schnitzer, et al. https://arxiv.org/abs/2412.14020

[2] "All Languages Matter: On the Multilingual Safety of Large Language Models", Wenxuan Wang, et al. https://arxiv.org/abs/2310.00905

[3] "Multilingual Jailbreak Challenges in Large Language Models", Yue Deng, et al. https://arxiv.org/html/2310.06474v3

[4] "AI Safety for Everyone", Balint Gyevnar, et al. https://arxiv.org/abs/2502.09288
