
UserTesting expands platform with generative AI to scale human insights

UserTesting kicked off its Human Insights Summit today with the unveiling of a new set of generative AI-driven features for its platform.

The new tools, which the company calls simply UserTesting AI, are designed to help organizations scale their experience research efforts with AI. The initial set includes an integration with OpenAI that helps users write summaries and develop reports from research data more quickly. The tools build on AI capabilities UserTesting has developed in-house in recent years to help organizations better understand user behaviour and sentiment toward goods and services.

UserTesting introduced its machine learning (ML)-powered barrier detection solution for behavioural analytics in April.

The newly released UserTesting AI tools aim to go beyond what the company has already built by harnessing next-generation AI technologies such as OpenAI’s ChatGPT.

In an interview, CEO Andy MacMillan said that UserTesting AI provides AI-powered solutions for research, design, marketing, and product teams to boost productivity.

How UserTesting is incorporating generative AI into its existing machine learning

UserTesting, according to MacMillan, has created its own ML models to collect data from its platform, allowing teams to test how people interact with and perceive a service or application. UserTesting captures user sessions and then applies machine learning models to extract information. The models have assisted in identifying sentiment, intent, and where people become stuck in a workflow. 

With the new UserTesting AI tools, the company isn’t simply handing raw data to a gen AI model to process. MacMillan emphasized that UserTesting is combining next-generation AI with its existing models.

“Many of those machine learning outputs are being used by us. From them, we’ve extracted interesting information for researchers, such as friction, insights and recommendations, in addition to transcripts. We’re providing that, and we’re developing tasks, paragraphs and research report summaries using large language models (LLMs),” he said.

UserTesting’s generative AI helps to avoid bias

Until now, user experience researchers have largely had to generate reports and summaries on their own, based on the data and insights from a UserTesting study.

Now, however, if a team needs to test a new mobile app, the platform finds and connects it with profiles of people suited to the test. UserTesting’s ML models detect relevant data points as users test the app prototype. Each session is also recorded on video and audio, and fully transcribed.

MacMillan stated that the company uses machine learning models to filter various data streams and identify key moments.

The UserTesting software then displays a results page with a set of important data points and session highlights. With UserTesting AI, researchers can now prepare a thorough summary and report based on those detailed findings. AI-generated reports and summaries also include specific citations and references to help researchers delve into individual data points.


While there is considerable concern about the broader use of gen AI and how it may be biased, MacMillan believes UserTesting AI can help mitigate bias: by improving researchers’ efficiency, it reduces the chance that they miss information and may even help them notice things they otherwise wouldn’t.