Addition is All You Need for Energy-efficient Language Models

This research shows that simple addition operations can replace costly floating-point multiplications to build more energy-efficient language models without sacrificing performance. The authors propose an addition-based computation scheme that significantly reduces computational complexity and energy consumption while maintaining model capabilities. The study provides empirical evidence of substantial energy savings compared to standard multiplication-based transformer implementations.
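
As a loose illustration of the underlying idea (and not the authors' exact algorithm), the sketch below approximates a floating-point multiplication with a single integer addition on the IEEE-754 bit patterns; the function name and test values are invented for this example.

```python
import numpy as np

def approx_mul(a, b):
    """Approximate elementwise a * b for positive float32 values using only
    one integer addition per element.

    The bit pattern of a positive IEEE-754 float is roughly a scaled, biased
    log2 of its value, so adding two bit patterns and removing one exponent
    bias approximates a multiplication. Toy illustration of 'multiplication
    from addition' only; the paper's actual operator and error analysis differ.
    """
    bias = np.uint32(127 << 23)  # float32 exponent bias, shifted into place
    ia = np.asarray(a, dtype=np.float32).view(np.uint32)
    ib = np.asarray(b, dtype=np.float32).view(np.uint32)
    return (ia + ib - bias).view(np.float32)

a = np.array([1.5, 3.1, 12.0], dtype=np.float32)
b = np.array([2.25, 0.47, 7.5], dtype=np.float32)
print("exact :", a * b)
print("approx:", approx_mul(a, b))  # within a few percent here; worst case ~11% for this naive scheme
```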

Baromètre Green IT 2025 - The State of Responsible Digital Practices

The Baromètre Green IT 2025 presents a comprehensive overview of responsible digital practices in French organizations. The study analyzes trends and the evolution of Green IT practices, measuring the progress made and identifying areas for improvement toward more sustainable digital technology. The report provides key indicators and concrete recommendations for improving organizations' maturity in responsible digital practices.

Carbon Emissions and Large Neural Network Training

This comprehensive study analyzes the real carbon footprint of training large neural network models, taking into account multiple often-overlooked factors. The research provides a detailed methodology for calculating CO2 emissions and demonstrates how the choice of data center location and timing can significantly impact the environmental cost of AI training. The authors show that thoughtful choices about where and when to train models can reduce CO2 emissions by up to 100x compared to random choices.
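
To make the "where and when to train" point concrete, here is a minimal back-of-the-envelope emissions estimate; the GPU count, power draw, PUE, and grid-intensity figures are placeholder assumptions rather than numbers taken from the paper.

```python
# Back-of-the-envelope CO2e estimate: emissions = energy drawn * grid carbon intensity.
# All numbers below are illustrative assumptions, not values from the paper.

def training_co2e_kg(num_gpus, avg_gpu_power_w, hours, pue, grid_gco2_per_kwh):
    """Estimate CO2-equivalent emissions (kg) for one training run."""
    energy_kwh = num_gpus * avg_gpu_power_w * hours / 1000.0 * pue
    return energy_kwh * grid_gco2_per_kwh / 1000.0

run = dict(num_gpus=64, avg_gpu_power_w=300, hours=168, pue=1.1)

# The same job on a coal-heavy grid vs. a low-carbon grid:
for name, intensity in [("coal-heavy grid (~700 gCO2/kWh)", 700),
                        ("low-carbon grid (~50 gCO2/kWh)", 50)]:
    print(f"{name}: {training_co2e_kg(**run, grid_gco2_per_kwh=intensity):,.0f} kg CO2e")
```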

Carbon footprints embodied in the value chain of multinational enterprises in the Information and Communication Technology sector

Understanding the carbon footprints (CFs) within the value chains of Information and Communication Technology (ICT) multinational enterprises (IMNEs) is vital for reducing their global environmental impact. Using a multi-regional input-output model, we assess for the first time the evolution of IMNEs' value chain CFs from 2000 to 2019 and apply structural path analysis to identify key emissions hotspots for mitigation. We found that IMNEs' CFs accounted for over 4% of global emissions during this period. By 2019, China had become the largest host, contributing 558 MtCO2, but geopolitical shifts after 2010 led to emissions in India and Southeast Asia growing by 4.0% and 4.8% annually. Upstream and downstream emissions made up 94.5%–95.8% of total CFs. ICT manufacturing multinational enterprises (MNEs) had significant upstream emissions from electricity and heavy manufacturing, while ICT services MNEs were more affected by downstream business and transportation emissions. Low-income economies contributed heavily to direct emissions, while high-income economies experienced a rise in downstream emissions, reaching 46.8% in 2019. Middle-income economies shifted toward more downstream activities, with upstream emissions declining to 67%. Thus, we highlight the need for targeted emissions reduction based on the distribution of value-chain CFs to maximize mitigation potential.
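
For readers unfamiliar with the machinery behind such figures, the toy example below sketches the standard environmentally extended input-output (Leontief) calculation that a multi-regional input-output model generalizes; all coefficients are invented for illustration.

```python
import numpy as np

# Toy environmentally extended input-output calculation (two sectors:
# "electronics manufacturing" and "ICT services"). All numbers are invented;
# real MRIO tables such as those used in the study span many regions and sectors.

A = np.array([[0.20, 0.05],     # technical coefficients: inputs per unit output
              [0.10, 0.15]])
f = np.array([0.80, 0.10])      # direct emissions per unit output (ktCO2 / M$)
y = np.array([100.0, 200.0])    # final demand (M$) attributed to the enterprise

L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse: total output per unit of demand
x = L @ y                          # total (direct + indirect) output required
footprint = f @ L @ y              # value-chain carbon footprint, ktCO2

print("total output required:", x)
print("value-chain footprint (ktCO2):", round(footprint, 1))
print("direct-only footprint (ktCO2):", round(f @ y, 1))  # ignores upstream links
```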

Carbon-Aware Computing: Measuring and Reducing AI's Environmental Impact

This research introduces new methodologies for measuring and reducing the carbon footprint of AI computations across different computing environments. The study presents tools and techniques for accurate carbon impact assessment of AI workloads, considering factors such as hardware efficiency, datacenter location, and time-of-day energy mix. The authors provide practical recommendations for implementing carbon-aware computing practices in AI development and deployment.
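
A minimal sketch of the time-shifting idea discussed in this line of work: given an hourly forecast of grid carbon intensity, start a deferrable job in the window with the lowest average intensity. The forecast values and function name below are invented.

```python
# Carbon-aware scheduling sketch: pick the contiguous window with the lowest
# average grid carbon intensity for a deferrable job. Forecast values are
# invented; a real scheduler would pull them from a grid-data service.

def best_start_hour(forecast_gco2_per_kwh, job_hours):
    """Return (start_hour, avg_intensity) of the greenest contiguous window."""
    best = None
    for start in range(len(forecast_gco2_per_kwh) - job_hours + 1):
        window = forecast_gco2_per_kwh[start:start + job_hours]
        avg = sum(window) / job_hours
        if best is None or avg < best[1]:
            best = (start, avg)
    return best

forecast = [420, 410, 380, 300, 220, 180, 190, 260, 350, 430, 460, 440,
            400, 360, 310, 280, 270, 300, 380, 450, 470, 460, 440, 430]

start, avg = best_start_hour(forecast, job_hours=4)
print(f"start at hour {start}, average intensity {avg:.0f} gCO2/kWh "
      f"(vs. {sum(forecast)/len(forecast):.0f} gCO2/kWh if started at random)")
```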

Efficient Large Language Model Deployment: A Survey and Empirical Study

This comprehensive survey investigates various approaches for deploying large language models efficiently, focusing on reducing computational resources and energy consumption. The research evaluates different deployment strategies including model compression, quantization, and hardware acceleration techniques, providing empirical evidence of their effectiveness. The authors present a systematic comparison of deployment methods and their impact on model performance, latency, and energy usage.
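
As one concrete instance of the deployment techniques such surveys cover, the sketch below applies naive symmetric int8 post-training quantization to a weight matrix; it is a simplified illustration under assumed shapes and scales, not a method taken from the paper.

```python
import numpy as np

# Naive symmetric int8 post-training quantization of a weight matrix.
# Real deployments use per-channel scales, calibration data, activation
# quantization, etc.; this only illustrates the memory/accuracy trade-off.

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0              # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, size=(4096, 4096)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print("memory: %.1f MB -> %.1f MB" % (w.nbytes / 2**20, q.nbytes / 2**20))
print("mean abs error: %.6f" % np.abs(w - w_hat).mean())
```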

Efficient Training of Large Language Models: A Survey

This comprehensive survey examines various approaches to make the training of large language models more efficient and environmentally sustainable. The research analyzes different techniques including model compression, efficient attention mechanisms, and hardware-aware training strategies that can significantly reduce the computational and energy costs. The authors provide a systematic comparison of different efficiency methods and their impact on model performance, training time, and energy consumption.
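
To ground one of the hardware-aware techniques typically covered, the snippet below illustrates mixed-precision storage and compute: tensors are cast to float16 for the heavy matrix multiply while a float32 master copy is kept for reference. It is a schematic sketch, not code from the survey.

```python
import numpy as np

# Mixed-precision illustration: compute-heavy tensors are stored and used in
# float16 while a float32 master copy preserves accuracy. Schematic only;
# real trainers also use loss scaling and fused low-precision kernels.

rng = np.random.default_rng(0)
w_master = rng.normal(0, 0.02, size=(1024, 1024)).astype(np.float32)
x = rng.normal(0, 1.0, size=(64, 1024)).astype(np.float32)

w16, x16 = w_master.astype(np.float16), x.astype(np.float16)

y_full = x @ w_master                        # float32 reference
y_half = (x16 @ w16).astype(np.float32)      # float16 compute path

print("activation memory: %d kB -> %d kB" % (x.nbytes // 1024, x16.nbytes // 1024))
print("max abs deviation from float32 matmul: %.4f" % np.abs(y_full - y_half).max())
```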

Efficient Transformers: A Survey of Modeling and Training Approaches

This comprehensive survey examines various approaches to making transformer models more computationally efficient and environmentally sustainable. The research analyzes different architectural innovations and training strategies that reduce the computational and energy requirements of transformer models while maintaining their effectiveness. The authors provide a systematic comparison of different efficiency techniques and their impact on model performance, training costs, and environmental footprint.
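
One family of techniques covered by efficient-transformer surveys is sparse or local attention; the sketch below restricts each query to a fixed window of nearby keys, reducing the score matrix from O(n^2) to O(n·w) entries. It is a minimal single-head illustration, not a specific model from the survey.

```python
import numpy as np

# Sliding-window (local) self-attention sketch: each position attends only to
# the w previous/following positions, so the score matrix has O(n*w) entries
# instead of O(n^2). No heads, masking, or tricks beyond 1/sqrt(d) scaling.

def local_attention(q, k, v, window=32):
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        out[i] = weights @ v[lo:hi]
    return out

rng = np.random.default_rng(0)
n, d = 512, 64
q, k, v = (rng.normal(size=(n, d)) for _ in range(3))
y = local_attention(q, k, v, window=32)
print("output shape:", y.shape)
print("score entries computed: ~%d (vs. %d for full attention)" % (n * (2 * 32 + 1), n * n))
```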

Efficient Vision Transformers: Methods and Applications

This comprehensive study explores methods for developing energy-efficient vision transformers while maintaining high performance in computer vision tasks. The research evaluates various optimization techniques including architecture modifications, training strategies, and inference optimizations specifically designed for vision transformers. The authors demonstrate significant reductions in computational costs and energy consumption while preserving model accuracy across different vision tasks.
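
As an example of the kind of inference-time optimization studied for vision transformers, the sketch below prunes the patch tokens that a hypothetical [CLS] token attends to least, so later layers process fewer tokens; the scoring rule, keep ratio, and shapes are illustrative assumptions.

```python
import numpy as np

# Token-pruning sketch for a vision transformer: drop the patch tokens the
# [CLS] token attends to least, so later layers process fewer tokens.
# Scores, shapes, and the keep ratio are illustrative; real methods vary.

def prune_tokens(tokens, cls_attention, keep_ratio=0.5):
    """tokens: (n_patches, d) patch embeddings; cls_attention: (n_patches,) weights."""
    n_keep = max(1, int(len(tokens) * keep_ratio))
    keep_idx = np.argsort(cls_attention)[-n_keep:]   # most-attended patches
    return tokens[np.sort(keep_idx)]                 # keep original patch order

rng = np.random.default_rng(0)
patches = rng.normal(size=(196, 768))    # 14x14 patches, ViT-Base width
cls_attn = rng.random(196)
cls_attn /= cls_attn.sum()

kept = prune_tokens(patches, cls_attn, keep_ratio=0.5)
print("tokens: %d -> %d (attention cost scales roughly with the square of this)"
      % (len(patches), len(kept)))
```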

Energy and Policy Considerations for Deep Learning in NLP

This pioneering study examines the carbon footprint of training natural language processing models. The authors quantify the financial and environmental costs of training various NLP models. The study reveals that training a single BERT model can emit as much CO2 as a trans-American flight for one passenger, and that the compute used in the largest AI training runs has been doubling roughly every 3.4 months. The authors provide concrete recommendations to reduce environmental impact, particularly by prioritizing energy efficiency in model design and using renewable energy sources for training.