Intelligence artificielle, données, calcul : quelles infrastructures pour un monde décarboné ?

This interim report from The Shift Project examines the environmental implications of artificial intelligence technologies. The study analyzes the energy consumption, carbon emissions, and resources required to train and deploy AI models. The report offers recommendations for developing and using AI in line with ecological sustainability goals and digital sobriety principles.

Baromètre Green IT 2025 - État des lieux des pratiques numériques responsables

The Baromètre Green IT 2025 presents a comprehensive overview of responsible digital practices in French organizations. The study analyzes trends and the evolution of Green IT practices, measuring progress made and identifying areas for improvement toward a more sustainable digital sector. The report provides key indicators and concrete recommendations for raising organizations’ maturity in responsible digital practices.

Carbon footprints embodied in the value chain of multinational enterprises in the Information and Communication Technology sector

Understanding the carbon footprints (CFs) within the value chains of Information and Communication Technology (ICT) multinational enterprises (IMNEs) is vital for reducing their global environmental impact. Using a multi-regional input-output model, we assess for the first time the evolution of IMNEs’ value chain CFs from 2000 to 2019 and apply structural path analysis to identify key emissions hotspots for mitigation. We found that IMNEs’ CFs accounted for over 4 % of global emissions during this period. By 2019, China became the largest host, contributing 558 MtCO2, but geopolitical shifts post-2010 led emissions in India and Southeast Asia to grow by 4.0 % and 4.8 % annually, respectively. Upstream and downstream emissions together made up 94.5 %–95.8 % of total CFs. ICT manufacturing multinational enterprises (MNEs) had significant upstream emissions from electricity and heavy manufacturing, while ICT services MNEs were more affected by downstream business and transportation emissions. Low-income economies contributed heavily to direct emissions, while high-income economies experienced a rise in downstream emissions, reaching 46.8 % in 2019. Middle-income economies shifted toward more downstream activities, with upstream emissions declining to 67 %. Thus, we highlight the need for targeted emissions reduction based on the distribution of value-chain CFs to maximize mitigation potential.
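The multi-regional input-output (MRIO) approach behind such assessments can be sketched with the standard Leontief calculation. The toy numbers below are illustrative assumptions, not data from the study; structural path analysis is hinted at by expanding the Leontief inverse as a power series over supply-chain tiers.

```python
import numpy as np

# Toy 2-region, 1-sector MRIO system (all numbers illustrative).
# A: technical coefficients (input requirements per unit of output)
A = np.array([[0.2, 0.1],
              [0.05, 0.3]])
# y: final demand attributed to the enterprise's value chain
y = np.array([100.0, 50.0])
# e: direct emission intensities (tCO2 per unit of output) per region
e = np.array([0.8, 1.5])

# Leontief inverse: total (direct + indirect) output per unit final demand
L = np.linalg.inv(np.eye(2) - A)
x = L @ y               # total output required across the value chain
footprint = e @ x       # value-chain carbon footprint (tCO2)

# Structural path analysis expands L = I + A + A^2 + ... to attribute
# emissions to individual supply-chain tiers (first four tiers here).
tier_emissions = [e @ (np.linalg.matrix_power(A, k) @ y) for k in range(4)]
```

The first few tiers account for most of the footprint, which is what makes tier-by-tier hotspot identification tractable in practice.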

Impacts environnementaux du numérique dans le monde 2025

The objective of this study is to provide scientific insight through a quantified assessment of the environmental impacts of digital technology, so that each of us, whether citizen, company, or political leader, can grasp the scale of these impacts and take responsibility for reducing them.

Exploring the sustainable scaling of AI dilemma: A projective study of corporations' AI environmental impacts

The rapid growth of artificial intelligence (AI), particularly Large Language Models (LLMs), has raised concerns about its global environmental impact, which extends beyond greenhouse gas emissions to include hardware fabrication and end-of-life processes. Opacity from major providers hinders companies’ ability to evaluate their AI-related environmental impacts and achieve net-zero targets. In this paper, we propose a methodology to estimate the environmental impact of a company’s AI portfolio, providing actionable insights without requiring extensive AI and Life-Cycle Assessment (LCA) expertise. Results confirm that large generative AI models consume up to 4600x more energy than traditional models. Our modelling approach, which accounts for increased AI usage, hardware computing efficiency, and changes in electricity mix in line with IPCC scenarios, forecasts AI electricity use up to 2030. Under a high adoption scenario, driven by widespread adoption of Generative AI and agents associated with increasingly complex models and frameworks, AI electricity use is projected to rise by a factor of 24.4. Mitigating the environmental impact of Generative AI by 2030 requires coordinated efforts across the AI value chain. Isolated measures in hardware efficiency, model efficiency, or grid improvements alone are insufficient. We advocate for standardized environmental assessment frameworks, greater transparency from all actors of the value chain, and the introduction of a “Return on Environment” metric to align AI development with net-zero goals.
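The structure of such a projection, where compounding usage growth races against hardware efficiency gains and grid decarbonization, can be sketched in a few lines. All parameter values below are illustrative assumptions, not figures from the study.

```python
# Toy projection of AI electricity use and emissions, illustrating how
# usage growth, hardware efficiency, and grid carbon intensity interact.

def project(base_twh, years, usage_growth, efficiency_gain, base_ci, ci_decline):
    """Return (electricity in TWh, emissions in MtCO2) after `years`."""
    # Usage compounds up; efficiency compounds down.
    energy = base_twh * (1 + usage_growth) ** years / (1 + efficiency_gain) ** years
    carbon_intensity = base_ci * (1 - ci_decline) ** years   # tCO2 per MWh
    emissions = energy * 1e6 * carbon_intensity / 1e6        # TWh -> MWh -> MtCO2
    return energy, emissions

# High-adoption scenario: usage grows faster than efficiency improves,
# so electricity use rises sharply despite better hardware.
energy_2030, emissions_2030 = project(
    base_twh=50, years=6, usage_growth=0.9,
    efficiency_gain=0.2, base_ci=0.4, ci_decline=0.03)
```

The point the abstract makes falls out of the arithmetic: no single lever (efficiency_gain or ci_decline alone) offsets compounding adoption, so mitigation has to act on all three factors at once.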

Evaluation de l'impact environnemental du numérique en France

This study updates the data from the 2020 study conducted with Arcep on assessing the environmental impact of digital technology in France, today and in the future. Indeed, the 2020 study’s assumptions only accounted for data centers located on French territory, yet a significant share of usage in France is hosted abroad (around 53 %), representing impacts that are far from negligible. Moreover, between 2020 and 2022, the mix between OLED and LCD-LED televisions shifted in favor of larger, higher-impact OLED sets, and usage patterns changed, notably with the massive arrival of AI.

Green LLM: Studying Key Factors Affecting Energy Consumption of Code Assistants

In recent years, Large Language Models (LLMs) have significantly improved in generating high-quality code, enabling their integration into developers’ Integrated Development Environments (IDEs) as code assistants. These assistants, such as GitHub Copilot, deliver real-time code suggestions and can greatly enhance developers’ productivity. However, the environmental impact of these tools, in particular their energy consumption, remains a key concern. This paper investigates the energy consumption of LLM-based code assistants by simulating developer interactions with GitHub Copilot and analyzing various configuration factors. We collected a dataset of development traces from 20 developers and conducted extensive software project development simulations to measure energy usage under different scenarios. Our findings reveal that the energy consumption and performance of code assistants are influenced by various factors, such as the number of concurrent developers, model size, quantization methods, and the use of streaming. Notably, a substantial portion of generation requests made by GitHub Copilot is either canceled or rejected by developers, indicating a potential area for reducing wasted computations. Based on these findings, we share actionable insights into optimizing configurations for different use cases, demonstrating that careful adjustments can lead to significant energy savings.
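The waste the paper highlights, energy spent on suggestions the developer never accepts, lends itself to a simple back-of-envelope accounting. The figures below are illustrative assumptions, not measurements from the study.

```python
# Back-of-envelope accounting for wasted energy in an LLM code assistant:
# every canceled or rejected suggestion still consumed a full generation.

def assistant_energy(requests, joules_per_request, accepted_frac):
    """Return (total energy, wasted energy) in joules."""
    total = requests * joules_per_request
    wasted = total * (1 - accepted_frac)   # canceled/rejected generations
    return total, wasted

# Illustrative: 10k requests, 50 J each, 30% acceptance rate.
total_j, wasted_j = assistant_energy(
    requests=10_000, joules_per_request=50.0, accepted_frac=0.3)
```

Under these toy numbers, most of the energy budget goes to discarded output, which is why early cancellation detection and request debouncing are attractive optimization targets.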

Addition is All You Need for Energy-efficient Language Models

This research demonstrates how simple addition operations can replace costly multiplications to build more energy-efficient language models without sacrificing performance. The authors propose a novel approach that significantly reduces computational complexity and energy consumption while maintaining model capabilities. The study provides empirical evidence of substantial energy savings compared to traditional transformer architectures.
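The flavor of addition-based multiplication can be sketched as follows. This toy function, built on a frexp-style mantissa/exponent decomposition with an illustrative correction offset, approximates a floating-point product using only additions on the decomposed parts; it is a sketch in the spirit of the abstract, not the paper’s actual bit-level algorithm.

```python
import math

def approx_mul(x, y, offset=1 / 16):
    """Approximate x * y using additions on mantissa/exponent parts.

    Illustrative only: real implementations operate on the raw
    floating-point bit patterns, not via math.frexp.
    """
    mx, ex = math.frexp(x)          # x = mx * 2**ex, mx in [0.5, 1)
    my, ey = math.frexp(y)
    fx, fy = 2 * mx - 1, 2 * my - 1  # fractional parts, f in [0, 1)
    # (1+fx)(1+fy) = 1 + fx + fy + fx*fy  ~  1 + fx + fy + offset,
    # replacing the mantissa product with a constant correction term.
    approx_mant = 1 + fx + fy + offset
    return approx_mant * 2 ** (ex + ey - 2)
```

Dropping the fx*fy cross term trades a small relative error for removing the multiplier entirely, which is where the energy savings in such schemes come from.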

Environmental Impact of AI Data Centers: Challenges and Solutions

This comprehensive study analyzes the environmental impact of data centers specifically used for AI training and inference. The research provides detailed measurements of energy consumption and carbon emissions from major AI computing facilities. The authors present innovative solutions for reducing the environmental footprint of AI infrastructure, including advanced cooling systems, renewable energy integration, and workload optimization strategies. The paper also introduces new metrics for measuring and comparing the environmental efficiency of different AI computing architectures and deployment strategies.

Efficient Training of Large Language Models: A Survey

This comprehensive survey examines various approaches to make the training of large language models more efficient and environmentally sustainable. The research analyzes different techniques including model compression, efficient attention mechanisms, and hardware-aware training strategies that can significantly reduce the computational and energy costs. The authors provide a systematic comparison of different efficiency methods and their impact on model performance, training time, and energy consumption.
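One of the efficiency techniques such surveys cover, model compression via post-training quantization, can be illustrated in a few lines: storing weights as int8 with a per-tensor scale instead of float32 cuts memory roughly 4x, at the cost of a bounded rounding error. The code is a minimal sketch, not any specific paper’s method.

```python
import numpy as np

# Minimal sketch of symmetric per-tensor int8 weight quantization.

def quantize_int8(w):
    """Map float32 weights onto int8 with a single scale factor."""
    scale = np.abs(w).max() / 127.0        # largest weight maps to +/-127
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 2.54], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)                   # reconstruction, error <= s/2
```

Per-channel scales, zero-points for asymmetric distributions, and quantization-aware training are the usual refinements on top of this baseline.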