How To Argue Against AI-First Research


With AI upon us, companies have recently turned their attention to "synthetic" user testing: AI-driven research replacing UX research. Survey questions answered by AI-generated "customers," human tasks "performed" by AI agents.

However, AI isn't used only for desk research or discovery; it's actual usability testing with "AI personas" that imitate the behavior of real customers in the actual product. It's like UX research, just… without the users.

One of the tools used to perform "synthetic testing," or AI-generated UX research without users. (Source: Synthetic Users) (Large preview)

If this sounds worrying, confusing, and foreign, it is. But that doesn't prevent companies from adopting AI "research" to drive business decisions. Not surprisingly, that can be dangerous, risky, and expensive, and it usually diminishes user value.

This article is part of our ongoing series on UX. You can find more details on design patterns and UX strategy in Smart Interface Design Patterns 🍣, with live UX training coming up soon. Free preview.

Quick, Cheap, Easy… And Imaginary

Erika Hall famously noted that "design is only as human-centered as the business model allows." If a company is strongly driven by hunches, assumptions, and strong opinions, there will be little to no interest in properly conducted UX research in the first place.

The potential for business value comes from delivering user value where users are struggling. By Erika Hall. (Large preview)

But unlike UX research, AI research (conveniently called synthetic testing) is quick, cheap, and easy to re-run. It doesn't raise uncomfortable questions, and it doesn't flag incorrect assumptions. It doesn't require user recruitment, much time, or lengthy debates.

And: it can handle thousands of AI personas at once. By studying AI-generated output, we can discover common journeys, navigation patterns, and shared expectations. We can predict how people would behave and what they would do.

Well, that's a big promise. And this is where we start running into big problems.

LLMs Are People Pleasers

Good UX research is rooted in what actually happened, not what might have happened or what might happen in the future.

By nature, LLMs are trained to produce the most "plausible" or probable output based on patterns captured in their training data. These patterns surface as the expected behavior of statistically "average" profiles extracted from content on the web. But these people don't exist, and they never have.

By default, user segments are neither scoped nor curated. They don't represent the customer base of any particular product. So to be useful, we must explicitly prompt the AI, explaining who our users are, what they do, and how they behave. Otherwise, the output won't match user needs and won't apply to our users.
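To make that concrete, here is a minimal sketch of what explicitly scoping a synthetic persona could look like. It's written in Python; the Persona structure and the build_persona_prompt helper are illustrative assumptions, not any particular tool's API. And even with this much context, the model still returns statistically plausible text, not evidence.

```python
# Minimal sketch: scoping a synthetic persona before prompting an LLM.
# The Persona fields and build_persona_prompt helper are illustrative,
# not part of any specific synthetic-testing tool.

from dataclasses import dataclass

@dataclass
class Persona:
    role: str               # who the user is
    tasks: list[str]        # what they do in the product
    constraints: list[str]  # how they behave / what limits them

def build_persona_prompt(persona: Persona, question: str) -> str:
    """Compose a prompt that makes the assumed user segment explicit."""
    return (
        f"You are simulating a user: {persona.role}.\n"
        f"Typical tasks: {', '.join(persona.tasks)}.\n"
        f"Constraints and habits: {', '.join(persona.constraints)}.\n"
        f"Answer only from this perspective.\n\n"
        f"Question: {question}"
    )

if __name__ == "__main__":
    nurse = Persona(
        role="night-shift nurse using the scheduling app on a shared tablet",
        tasks=["swap shifts", "check overtime", "report sick leave"],
        constraints=["interrupted every few minutes", "rarely uses a desktop"],
    )
    print(build_persona_prompt(nurse, "How do you find next week's schedule?"))
```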

Every LLM hallucinates, but newer models perform better on some tasks, such as summarization. By Nature.com. (Large preview)

When "producing" user insights, LLMs can't generate unexpected findings beyond what we are already asking about.

In comparison, researchers can only define what is relevant as the process unfolds. In actual user testing, insights can help shift priorities or radically reimagine the problem we're trying to solve, as well as the potential business outcomes.

Real insight comes from unexpected behavior, from reading behavioral cues and emotions, from observing a person doing the opposite of what they said. We can't replicate that with LLMs.

AI User Research Is Not "Better Than Nothing"

Pavel Samsonov argues that things that sound like something customers might say are worthless. But things that customers actually said, did, or experienced carry inherent value (though they could be exaggerated). We just have to interpret them correctly.

AI user research is not "better than nothing" or "more efficient." It creates an illusion of customer experiences that never happened, producing at best good guesses and at worst misleading, unusable conclusions. Relying on AI-generated "insights" alone is not much different from reading tea leaves.

The Cost Of Mechanical Decisions

We often hear about breakthroughs in automation and knowledge generation with AI. Yet we forget that automation comes at a price: mechanical decisions are typically arbitrary, favor uniformity, and erode quality.

Some research questions generated by AI may be useful, others useless. By Maria Rosala. (Large preview)

As Maria Rosala and Kate Moran write, the problem with AI research is that it will inevitably contain misrepresentations, and without real research, you won't catch and correct these inaccuracies. Making decisions without talking to real customers is dangerous, harmful, and expensive.

On top of that, synthetic testing assumes that people fit into well-defined boxes, which is rarely true. Human behavior is shaped by our experiences, situations, and habits, which cannot be replicated by text generation alone. AI reinforces biases, supports preconceptions, and amplifies stereotypes.

Triangulate Insights Instead Of Verifying Them

Of course, AI can provide useful starting points to explore early in the process. But it also inherently invites false impressions and unverified conclusions, presented with an unwarranted level of confidence and certainty.

Starting with human research, conducted with real customers using a real product, is just much more reliable. Afterward, we can still use AI to check whether we might have missed something critical in user interviews. AI can augment, but not replace, UX research.

Triangulate customer journeys by layering them on top of each other to identify the most frequently used areas. By John Cutler. (Large preview)

When we use AI for desk research, it may be tempting to try to "validate" AI "insights" with actual user testing. Once we have planted a seed of insight in our heads, it's easy to recognize its signs everywhere, even though it really isn't there.

Instead, we study actual customers first, then triangulate the data: track clusters or the most frequented parts of the product. It may well be that analytics and AI desk research confirm your hypothesis. That would give you a much stronger standing for moving forward in the process.
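To make the "layering" idea concrete, here is a minimal sketch in Python, with made-up journey data and area names standing in for a real analytics export. It counts how many recorded customer journeys touch each area of the product; the most frequented areas are then the ones to compare against what your AI desk research predicted.

```python
# Minimal sketch: "layering" recorded customer journeys on top of each other
# to surface the most frequented areas of a product. The journeys and area
# names below are made up; in practice they'd come from your analytics export.

from collections import Counter

journeys = [
    ["home", "search", "product", "checkout"],
    ["home", "search", "search", "product"],
    ["home", "account", "orders", "search"],
]

# Count how many journeys touch each area (deduplicated per journey,
# so one user revisiting a screen doesn't inflate its weight).
area_counts = Counter()
for journey in journeys:
    area_counts.update(set(journey))

for area, count in area_counts.most_common():
    print(f"{area}: visited in {count} of {len(journeys)} journeys")
```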

Wrapping Up

I may sound like a broken record, but I still wonder why we feel such urgency to replace UX work with automated AI tools. Good design requires a good deal of critical thinking, observation, and planning.

For me personally, cleaning up AI-generated output takes far more time than doing the actual work. There is incredible value in talking to people who actually use your product.

I would always choose a day with a real customer instead of an hour with 1,000 synthetic users pretending to be people.

Useful Resources

New: How To Measure UX And Design Impact

Meet Measure UX & Design Impact (8h), a new practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 IMPACT to save 20% off today. Jump to the details.

How To Measure UX and Design Impact, with Vitaly Friedman.
Smashing Editorial (cm)
