A 60-Year-Old Framework Meets AI

The consumer rights framework is 60 years old. It still holds. But AI is testing every one of the six rights – safety, information, choice, redress, representation, and education – in ways we haven't seen before.

15 March 2026 · 4 min read · AI · Consumer Rights · Consumer Protection · EU AI Act · Sustainability · 2026
Sheltered within structure – consumer rights as the branches that hold, while the pace of AI streams past.

There's a consumer rights framework that's been around since the 1960s. Six rights: Safety. Information. Choice. Redress. Representation. Education. They were written for an era of unsafe food and misleading advertising. They've since been applied to digital finance, e-commerce, and platform economics.

They apply to AI too. Each one is now being tested in new ways.

The pace factor

AI moves faster than most consumer-facing technologies before it. Tools change monthly. Interfaces update weekly. Features appear, get renamed, or fold into something else. The language itself shifts – "assistant" becomes "agent" becomes "agentic workflow" – often without clear explanation of what changed.

This creates a practical challenge. According to Forrester, 62% of global consumers now use generative AI weekly, but a third hesitate to share their data with AI agents. That gap between usage and comfort suggests consumers are adopting AI faster than they can fully understand it – a pattern consumer protection frameworks were designed to address.

Six rights, six pressure points

Each of the six consumer rights maps to a specific area where AI introduces new dynamics.

Safety. AI systems recommend products, draft legal documents, provide health information, and manage finances. When an AI agent processes a tax return or suggests a medication interaction and gets it wrong, the liability question is genuinely unresolved. The consumer clicked "confirm," but didn't write the logic. Product safety standards are still adapting to products that generate their own behaviour.

Information. The EU AI Act will require labelling of AI-generated content from August 2026 – a significant step. But the information challenge runs deeper: consumers often don't know what data they've shared, what's been inferred from their usage, or how their inputs feed back into model training. Terms of service run to thousands of words. The gap between what consumers technically consent to and what they practically understand is widening.

Choice. In theory, switching between AI platforms is straightforward – they're all a browser tab away. In practice, anyone who has built projects, preferences, and context inside one platform knows the switching cost is real. One AI lab recently launched a migration tool that works by pasting a single prompt summarising your history on a competitor's platform. For users with months of accumulated context, meaningful portability remains limited. This mirrors the data portability challenges the EU has addressed in adjacent sectors.

Redress. When a traditional product harms you, consumer law provides recourse. With AI, the chain of accountability is more complex. Was the issue with the platform, the model provider, or the developer who integrated the API? The layered nature of AI systems makes responsibility harder to assign, and clear redress mechanisms for consumers are still developing.

Representation. AI governance conversations happen at OECD summits, in corporate boardrooms, and in regulatory consultations. Consumer voices are present – organisations like Consumers International dedicated their 2024 World Consumer Rights Day campaign to "Fair and responsible AI." But the technical complexity of AI policy creates a capacity challenge: meaningful participation requires resources and expertise that consumer advocacy organisations do not always have.

Education. Consumer education traditionally assumes a reasonably stable product landscape. AI doesn't offer that. By the time someone learns how one tool works, the tool may have changed. This raises a design question: should the focus be on making consumers more knowledgeable, or on building systems that require less specialist knowledge to use safely?

Where the market is focused

The current competition between AI labs – sometimes called the "consumer AI war" – is primarily focused on product experience, pricing, and ecosystem integration. Consumer protection is addressed, but it isn't the primary competitive differentiator. This is not unusual for a fast-growing technology sector; digital finance followed a similar pattern of rapid innovation and adoption, with consumer protection frameworks catching up over time.

The EU AI Act represents the most comprehensive regulatory response so far, with provisions phasing in through 2026 and 2027. Other jurisdictions are developing their own approaches.

The sustainability connection

Consumer rights and sustainable consumption have always been linked. The ability to make informed choices about the environmental impact of a product depends on accurate information, genuine alternatives, and recourse when claims prove misleading. These are the same foundations that apply to AI.

As AI becomes embedded in more consumer decisions – from product recommendations to financial planning to energy usage – the intersection between consumer protection, AI governance, and sustainability will become harder to separate. The six rights provide a useful lens for mapping where the pressure points are and where the frameworks need to develop.


Pandion works at the intersection of sustainability, AI, and organisational strategy. For more on how consumer dynamics connect to the wider sustainability system, see our Consumer & Society framework.
