
Trust in the Age of AI: What Consumers Really Think About AI-Generated Content


Apr 27, 2026

TL;DR: Do People Trust AI-Generated Content?

  • Most consumers are still sceptical about AI-generated content, and that scepticism directly impacts brand trust
  • Transparency isn’t optional anymore; it’s expected by the vast majority of your audience
  • Where and how you use AI matters just as much as whether you use it at all

Generative AI is everywhere now. You see it in content, campaigns, customer experiences, and even internal workflows. 

It promises speed, scale, and efficiency, but there’s a harder question sitting underneath all that progress: do your audiences actually trust it?

That’s what we set out to understand in our latest report with YouGov, based on responses from nearly 10,000 people across seven global markets. The results point to something many brands are starting to feel in practice: AI is a double-edged sword, offering cost savings and productivity gains while potentially damaging customer trust.


Consumers Aren’t as Excited by Gen AI as You Might Think

There’s a gap between how the industry talks about AI and how people actually feel about it.

You might assume that excitement is high; after all, AI dominates headlines and product roadmaps at seemingly every company. But when you look at consumer sentiment, the picture is more mixed. In fact, more people (51%) feel uncertain or sceptical than excited (39%) about a future shaped by generative AI.

That hesitation means your audience isn’t automatically on board with AI-driven content, no matter how advanced or efficient it is. By introducing a new tool into your workflow, you’re introducing a new layer of doubt into how your content is perceived.

Younger audiences are more open, with 48% of those aged 25-34 reporting that they are excited by AI, compared to only 31% of those aged over 55. 

Some markets are more optimistic than others, too: Germany (56%) and Singapore (55%) show the highest levels of excitement towards AI, while the UK (23%) and the U.S. (25%) are the least excited. But overall, enthusiasm is far from universal. You can’t assume broad AI acceptance; you have to earn it.

The Real Concern Isn’t AI, It’s What AI Enables

When people talk about AI risk, they’re rarely talking about the technology itself; they’re talking about outcomes.

Consumers are deeply concerned about how AI-generated content could be used. Fake news, scams, misleading information, and the erosion of authenticity all rank high on their list of worries. These aren’t fringe concerns; they’re held by a large majority of respondents across every market we studied.

That tells you something important: your audience is not evaluating AI in isolation, but judging it based on the potential for misuse.

So when your brand uses AI, people don’t always see the efficiency and cost savings that drove you to adopt the technology; they see risk, and unless you address that risk head-on, it can quietly undermine your credibility.

Confidence Is High, but So Is Doubt

Many consumers believe they can spot AI-generated content, but at the same time, an overwhelming majority worry that people in general won’t be able to tell what’s real and what isn’t.

That contradiction creates a strange dynamic in which, on a personal level, people feel confident, but on a societal level they feel uneasy.

For brands, that means trust is fragile. Even if your content is accurate and well-intentioned, it exists in an environment where people expect confusion, misinformation, and blurred lines between human and machine.

For brands, the battle is no longer just for consumer attention; they’re now competing to earn trust.

Context Shapes Acceptance More than Capability

Not all AI use is judged equally; the context of how and where it’s used matters when audiences form their opinions. 

Consumers are more comfortable with AI in areas like entertainment and advertising, where 53% and 47% respectively said they found it acceptable. Perhaps that’s because these are spaces where creativity, experimentation, and even a bit of artificiality are expected, so the stakes feel lower.

But shift that same AI into news (21% acceptable), politics (18% acceptable), or other high-trust environments, and attitudes change quickly. Acceptance drops and resistance rises.

This is one of the clearest signals from the data: people don’t reject AI outright; instead, they evaluate it based on context, intent, and perceived impact.

So the question for your brand isn’t just “Should we use AI?” It’s “Where does AI make sense, and where does it damage trust?”

AI Can Cost You Trust if You’re Not Careful

One of the strongest findings in the report is the impact of AI on brand trust. A significant portion (32%) of consumers say they would trust a brand less if they knew its content was generated using AI, whereas only a small minority (15%) say it would increase their trust.

That imbalance should get your attention, since it means AI doesn’t come with a built-in trust advantage; rather, in many cases, it starts at a disadvantage.

It gets worse when certain lines are crossed. If AI-generated content feels misleading, if it’s used without disclosure, or if it replaces human creativity entirely, trust drops sharply.

So if you’re using AI, you need to think beyond efficiency and consider the impact on audience perception. 

Transparency Is Now the Baseline

If there’s one takeaway from the Trust in the Age of Generative AI report that you act on, make it this: consumers expect you to be transparent about your use of AI as a basic requirement.

An overwhelming majority say it’s important that brands clearly disclose when content has been created using generative AI, and that expectation holds across every market in the study.

This is an interesting shift in how trust works, because in the past, brands could focus on outcomes, but now the process matters too. People want to know how content is made, not just what it says, and if they feel you’re hiding something, even unintentionally, it can erode trust quickly.

Transparency has become a signal of credibility.

What This Means for You

AI is not going away. You’re going to use it, your competitors are going to use it, and your audience knows that. The difference will come down to how you use it.

Brands that win in this next phase won’t be the ones that adopt AI the fastest, but the ones that use it thoughtfully, communicate it clearly, and keep human judgment at the centre of what they do.

That means being deliberate about where AI adds value, setting clear guidelines for disclosure, and monitoring how your audience responds, not just assuming they’ll accept it.

Most of all, it means recognising that trust is now part of your AI strategy.

Want the Full Picture?

This is just a snapshot of what we found. The full report breaks down how attitudes differ by country, age group, and content type. It also explores where AI is gaining acceptance, where it’s facing resistance, and what that means for your communications strategy.

If you’re thinking about how to use AI without damaging trust, it’s worth a closer look.

Download the full report to see the complete data and insights.


FAQs: Trust in AI-Generated Content

Do people trust AI-generated content?
Most consumers are sceptical about AI-generated content. Over half feel uncertain, and many say it can reduce trust in brands using it.

Why don’t consumers trust AI content?
Trust issues stem from concerns about misinformation, fake news, scams, and lack of authenticity enabled by AI-generated content.

Does AI-generated content hurt brand trust?
Yes, it can. Around one-third of consumers say they trust brands less when they learn content is AI-generated, especially without disclosure.

When is AI-generated content acceptable?
Acceptance depends on context. Consumers are more comfortable with AI in marketing and entertainment, but less so in news, politics, or high-trust industries.

Can people detect AI-generated content?
Many believe they can identify AI content, but most worry that others cannot, contributing to broader uncertainty and distrust.

How important is AI transparency for brands?
Transparency is essential. Most consumers expect brands to clearly disclose when AI is used to create content.

Should brands use AI in content creation?
Yes, but carefully. Brands should use AI where it adds value, maintain human oversight, and align usage with audience expectations.

How can brands build trust when using AI?
To build trust, brands should disclose AI use, avoid misleading content, apply AI in appropriate contexts, and prioritise human input.
