UNESCO: Slash AI Energy Use by 90% While Keeping Performance

Image Credit: Israel Palacio | Unsplash

Small adjustments to how large language models are developed and deployed could cut their energy use by up to 90% while preserving performance, according to a report released last week by the United Nations Educational, Scientific and Cultural Organization (UNESCO) and University College London (UCL).

The findings come amid growing concerns over the environmental footprint of AI technologies, which consume vast amounts of electricity and water to power data centers.

Key Findings

The report, titled "Smarter, Smaller, Stronger: Resource Efficient AI and the Future of Digital Transformation", was published on July 8 and draws on experiments conducted by UCL computer scientists.

Researchers tested measures on models such as Meta's LLaMA 3.1 and found that reducing the numerical precision of a model's internal calculations, a technique known as quantization, cut energy needs by up to 44% while retaining at least 97% accuracy on the tested tasks, though some studies note larger losses at ultra-low bit widths for complex applications.
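The core idea behind quantization can be illustrated with a minimal sketch: store weights as small integers plus a single scale factor, at the cost of a bounded rounding error. This toy example is an assumption for illustration only; production systems use dedicated libraries and per-channel schemes rather than this naive per-tensor approach.

```python
# Toy sketch of post-training symmetric int8 quantization (illustrative only;
# real deployments use specialized quantization libraries and formats).

def quantize_int8(weights):
    """Map float weights to integers in [-127, 127] with one scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized values."""
    return [v * scale for v in q]

weights = [0.12, -0.53, 0.98, -0.07, 0.33]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each weight now fits in one byte instead of four, and the round-trip
# error is bounded by half the quantization step (scale / 2).
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(max_err <= scale / 2)  # True
```

Storing each weight in one byte instead of four is where the memory and energy savings come from; the report's 97% accuracy-retention figure reflects how small this rounding error is relative to the model's tolerance.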

Shorter user queries and AI outputs halved energy use in some cases, while switching to compact, task-focused models for activities like translation or summarization achieved the largest savings, often exceeding 90% compared to broad-purpose systems.

Globally, generative AI handles about one billion interactions a day (using OpenAI's ChatGPT as a representative example), each consuming roughly 0.34 watt-hours, for an annual energy demand of 310 gigawatt-hours, comparable to the electricity needs of over 3 million people in a low-income African nation.

Background and Reasons

The study stems from UNESCO's 2021 ethical guidelines on AI, which emphasize environmental safeguards alongside human rights.

It addresses the surge in AI adoption since tools like ChatGPT emerged in late 2022, driving exponential growth in computational demands as models grow more complex to handle diverse queries.

High energy use arises from activating an entire large model for every task, often unnecessarily, while the concentration of AI infrastructure in wealthier nations exacerbates global divides: only 5% of Africa's AI experts have adequate access to computing resources.

UNESCO Assistant Director-General Tawfik Jelassi said the research supports member states in pursuing sustainable digital shifts, noting, "To make AI more sustainable, we need a paradigm shift in how we use it."

Recommendations and Impacts

The report urges governments and companies to prioritize investments in efficient AI research, promote user education on energy implications, and adopt designs such as "mixture of experts," in which specialized sub-models activate only as needed.

If applied to repetitive AI tasks worldwide, such changes could save enough energy each day to power the equivalent of 34,000 UK households, easing pressure on electricity grids and on the water supplies used to cool data centers.

Positive outcomes include broader AI access in regions with limited resources, potentially narrowing technological gaps, and aligning innovation with climate goals.

However, challenges involve transitioning from dominant large models, requiring coordinated efforts across platforms, and possible upfront costs for redesigns, though long-term savings in operations are projected.

Analysis and Future Trends

From a third-party perspective, the report highlights a tension in AI development: while larger models drive breakthroughs, their inefficiency risks amplifying environmental harms and inequalities if not addressed.

Pros of the proposed shifts include substantial resource conservation and maintained functionality, fostering inclusive growth; cons encompass resistance from industries invested in expansive systems and the need for widespread user adaptation.

In high-stakes domains, the significance of an accuracy loss of up to 3% from techniques like quantization varies. In medical fields such as radiology, where human diagnostic error rates are typically 3-5%, surveys show many clinicians expect AI to perform at least as well as the average specialist; a small additional error margin could therefore be tolerable in assistive roles for some tasks, provided the AI improves efficiency and explainability. In military applications, by contrast, experts warn that even low error rates introduce risks of brittleness, hallucinations, and catastrophic outcomes in mission-critical operations such as targeting.

Looking ahead, experts anticipate market competition pushing toward leaner models, with UCL's Professor Ivana Drobnjak describing a move to "smarter, leaner" systems akin to specialized brain regions.

This could steer AI toward sustainability, but success depends on policy enforcement and ethical frameworks to balance progress with planetary limits.

TheDayAfterAI News
TheDayAfterAI News

We are a leading AI-focused digital news platform, combining AI-generated reporting with human editorial oversight. By aggregating and synthesizing the latest developments in AI — spanning innovation, technology, ethics, policy and business — we deliver timely, accurate and thought-provoking content.
