Sustainable Computing Practices: A Guide for AI Researchers and Practitioners

This practical guide provides concrete recommendations for implementing sustainable computing practices in AI research and development. It outlines specific strategies for reducing energy consumption and carbon emissions throughout the AI development lifecycle, from experiment design to deployment. The authors present case studies and empirical evidence demonstrating the effectiveness of various sustainability practices in real-world AI projects.

Sustainable NLP: An Analysis of Efficient Language Processing Methods

This research investigates methods for developing environmentally sustainable natural language processing systems, focusing on reducing computational costs and energy consumption. The study analyzes various efficiency techniques specific to NLP tasks, including model compression, efficient attention mechanisms, and task-specific optimizations. The authors provide empirical evidence of energy savings and performance trade-offs across different NLP tasks and model architectures.
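To make the model compression idea concrete, here is a minimal sketch of symmetric int8 weight quantization, one family of techniques such a study surveys. The scheme and function names are illustrative assumptions, not taken from the paper.

```python
# Toy post-training weight quantization: map floats onto 255 integer
# levels (-127..127) with a single per-tensor scale factor.

def quantize_int8(weights):
    """Symmetric linear quantization of a list of floats to int8 levels."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid scale == 0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights from quantized levels."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.003, 1.0]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# each reconstructed weight lies within one quantization step (s) of the original
```

The energy/quality trade-off the abstract mentions shows up here directly: int8 storage is 4x smaller than float32, at the cost of a bounded rounding error per weight.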

The carbon and water footprints of data centers and what this could mean for artificial intelligence

Although there are ways to estimate the global power demand of AI systems, quantifying the associated carbon and water footprints remains challenging. Because data center operators' environmental reports do not distinguish AI from non-AI workloads, the environmental impact of AI workloads can be assessed only by approximating it from data centers' general performance metrics. Even then, the environmental disclosure of tech companies is often insufficient to determine their total data center performance. The carbon footprint of AI systems alone could be between 32.6 and 79.7 million tons of CO2 emissions in 2025, while the water footprint could reach 312.5–764.6 billion liters. The shortcomings in data center operators' environmental disclosure could be remedied with new policies mandating the disclosure of additional metrics. Because the environmental impact of data centers is growing rapidly, the urgency of transparency in the tech sector is also increasing.
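The kind of approximation described above can be sketched as a back-of-envelope calculation: given an annual electricity figure, a grid carbon intensity, and a water usage effectiveness (WUE), the footprints follow by multiplication. The function and all constants below are illustrative assumptions, not figures from the paper.

```python
# Back-of-envelope footprint estimate from aggregate electricity use.
# Both default factors are assumed placeholders, not reported values.

def estimate_footprints(energy_twh,
                        carbon_intensity_kg_per_kwh=0.4,  # assumed grid average
                        wue_l_per_kwh=1.8):               # assumed WUE
    """Return (CO2 in million tons, water in billion liters) for a given
    annual electricity consumption in TWh."""
    kwh = energy_twh * 1e9                              # 1 TWh = 1e9 kWh
    co2_mt = kwh * carbon_intensity_kg_per_kwh / 1e9    # kg -> million tons
    water_bl = kwh * wue_l_per_kwh / 1e9                # liters -> billion liters
    return co2_mt, water_bl

# Example: 100 TWh/year at the assumed factors
co2, water = estimate_footprints(100)
# -> roughly 40.0 million tons CO2 and 180.0 billion liters of water
```

The uncertainty ranges quoted in the abstract arise precisely because both inputs to this multiplication, the AI share of electricity use and the per-kWh factors, must themselves be approximated.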

The ML.ENERGY Benchmark: Toward Automated Inference Energy Measurement and Optimization

As Generative AI becomes increasingly integrated into real-world services, energy consumption has become a significant bottleneck, yet it remains under-measured and under-optimized in machine learning (ML) systems. This paper introduces the ML.ENERGY Benchmark and Leaderboard, an open-source suite and evaluation platform designed to measure and compare the inference energy use of AI models in realistic service environments. The authors present four core principles for effective energy benchmarking and illustrate their application within the tool. Results from the benchmark detail energy metrics for 40 popular model architectures across 6 tasks, showcase case studies on design decisions affecting energy use, and demonstrate that automatic optimizations can cut energy consumption by over 40% without sacrificing output quality. The ML.ENERGY Benchmark is extensible, making it a practical resource for both researchers and practitioners seeking to evaluate and minimize the energy footprint of their AI applications.
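As a rough illustration of what inference energy measurement involves, the sketch below integrates sampled power over a workload's runtime. This is a generic sketch, not the ML.ENERGY implementation; `read_power_w` is a stand-in that on NVIDIA GPUs could be backed by NVML's power reading (e.g., pynvml's `nvmlDeviceGetPowerUsage`, which reports milliwatts).

```python
import threading
import time

def measure_energy_j(workload, read_power_w, interval_s=0.01):
    """Run `workload` while a background thread samples instantaneous
    power (watts); return mean power times elapsed time, in joules."""
    samples = []
    stop = threading.Event()

    def sampler():
        while not stop.is_set():
            samples.append(read_power_w())
            time.sleep(interval_s)

    thread = threading.Thread(target=sampler)
    start = time.monotonic()
    thread.start()
    workload()                      # the inference calls being measured
    elapsed = time.monotonic() - start
    stop.set()
    thread.join()
    mean_power = sum(samples) / max(len(samples), 1)
    return mean_power * elapsed

# Example with a fake 100 W power reader and a 0.2 s dummy workload:
energy = measure_energy_j(lambda: time.sleep(0.2), lambda: 100.0)
```

Fixed-interval polling under-resolves short power bursts, which is one reason a production benchmark would prefer hardware energy counters where available (e.g., NVML's cumulative energy counter on newer GPUs) over software integration like this.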