Sustainable NLP: An Analysis of Efficient Language Processing Methods

This research investigates methods for developing environmentally sustainable natural language processing systems, focusing on reducing computational costs and energy consumption. The study analyzes efficiency techniques specific to NLP tasks, including model compression, efficient attention mechanisms, and task-specific optimizations. The authors provide empirical evidence of energy savings and performance trade-offs across different NLP tasks and model architectures.
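To make one of the techniques concrete, here is a minimal sketch of magnitude pruning, a common form of model compression of the kind the study analyzes. This is an illustration only, not the paper's code; the weights, the `magnitude_prune` helper, and the 50% sparsity target are all hypothetical.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights.

    Hypothetical illustration of model compression: pruned weights need not
    be stored or multiplied, reducing compute and energy at inference time.
    """
    ranked = sorted(abs(w) for w in weights)
    cutoff_index = int(len(ranked) * sparsity)
    if cutoff_index >= len(ranked):
        return [0.0 for _ in weights]
    threshold = ranked[cutoff_index]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

# Toy weight vector (assumed values, for demonstration only).
weights = [0.01, -0.5, 0.03, 1.2, -0.02, 0.8, 0.004, -0.09]
pruned = magnitude_prune(weights, sparsity=0.5)
kept = sum(1 for w in pruned if w != 0.0)
print(f"kept {kept}/{len(weights)} weights")  # prints "kept 4/8 weights"
```

In practice such pruning is applied layer by layer to a trained network and usually followed by fine-tuning to recover accuracy, which is where the performance trade-offs the authors measure come from.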

AI Accessibility Barriers: Understanding and Addressing Challenges for Users with Disabilities

This comprehensive study examines the accessibility challenges that people with disabilities face when interacting with AI systems. The research identifies key barriers in current AI technologies and proposes solutions. The authors analyze how AI can both help and hinder accessibility, providing concrete examples of both beneficial applications and problematic implementations that create new barriers. The paper presents a framework for evaluating AI accessibility and offers guidelines for developing more inclusive AI systems that work for users of all abilities.

Measuring the Carbon Intensity of AI in Cloud Instances

This paper presents a methodology for accurately measuring the carbon emissions of AI workloads running in cloud environments. The research provides detailed measurements across different cloud providers and regions, showing how carbon intensity can vary significantly based on location and time of day. The authors also release tools and best practices for researchers and practitioners to measure and reduce the carbon footprint of their AI applications.
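The paper's central observation, that the same workload emits very different amounts of CO2 depending on where and when it runs, follows from a simple relationship between energy use and grid carbon intensity. The sketch below illustrates it with made-up numbers; the 120 kWh job and the regional intensity figures are assumptions for demonstration, not measurements from the paper.

```python
def emissions_kg_co2(energy_kwh, grid_intensity_g_per_kwh):
    """Operational CO2 in kg = energy (kWh) x grid carbon intensity (gCO2/kWh) / 1000."""
    return energy_kwh * grid_intensity_g_per_kwh / 1000.0

# Assumed energy consumption for one training run (hypothetical).
job_energy_kwh = 120.0

# Illustrative regional grid intensities in gCO2/kWh; real values
# vary significantly by provider, region, and time of day.
regions = {
    "low-carbon grid": 50.0,
    "average grid": 400.0,
    "coal-heavy grid": 800.0,
}

for name, intensity in regions.items():
    kg = emissions_kg_co2(job_energy_kwh, intensity)
    print(f"{name}: {kg:.1f} kg CO2")
# The identical job spans 6.0 to 96.0 kg CO2 across these example grids.
```

This is why the measurement methodology matters: without knowing the grid intensity at the instance's location and time, energy consumption alone cannot tell you the carbon cost.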