Concept testing is a crucial step in evaluating new ideas before bringing them to market. With Outset, you can efficiently gather qualitative and quantitative insights on different concepts to determine their potential success. This guide outlines best practices for setting up and conducting a concept test using Outset.
Before setting up your study
To ensure a successful concept testing study, prepare the following:
List of quantitative metrics
Decide which key metrics will help you compare concepts effectively. Common metrics include:
Appeal
Relevance
Likelihood to use
Likelihood to purchase
Expected frequency of use
Uniqueness
Believability
Choose the top 3-4 metrics that align with your study goals.
Assumptions per concept
Identify any assumptions you want to validate. If your study duration allows (ideally under 45 minutes), include questions to test your most important assumptions about each concept.
Recruitment criteria
Define your audience by answering:
Who should participate?
How many respondents do you need?
What is your recruitment budget?
Will you use Outset’s recruitment or an external panel?
Concept stimuli
For each concept, it is recommended to create a single JPEG file that combines an image of the concept with a short text description.
Study recommendations
Keep the study under one hour to maintain respondent engagement and data quality.
Limit the test to four concepts per study.
Use the “duplicate study” feature to easily replicate and modify studies if testing multiple concepts.
Alternatively, you can use one study but limit the number of sections that each participant sees by using monadic randomization (e.g., set up 8 sections in your interview guide for 8 different concepts, and randomize so that each participant sees only 4 of them); the sketch below illustrates this logic.
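Outset handles this randomization for you within the interview guide, so no scripting is required. The minimal Python sketch below (with hypothetical participant IDs and concept names) only illustrates the underlying logic: each participant is shown a random subset of the concept sections.

```python
import random

ALL_CONCEPTS = [f"Concept {i}" for i in range(1, 9)]  # 8 concept sections in the guide
CONCEPTS_PER_PARTICIPANT = 4                          # each participant sees only 4 of them

def assign_concepts(participant_id: str) -> list[str]:
    """Draw a random subset of concepts, in random order, for one participant."""
    rng = random.Random(participant_id)  # seed on the participant ID so the draw is reproducible
    return rng.sample(ALL_CONCEPTS, CONCEPTS_PER_PARTICIPANT)

for pid in ["p001", "p002", "p003"]:  # hypothetical participant IDs
    print(pid, assign_concepts(pid))
```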
Study structure
1. Welcome message
Start with a friendly introduction explaining the process.
Example:
"Thanks for joining this interview! I am an AI-powered interview chatbot. Think of this as any other sort of interview or survey!
Today, we have a few products we want to show you and want to ask your opinion on them. There is no right or wrong answer so please be as honest as possible.
I'd love to hear more about your opinions, so when answering the questions please explain a bit more about the WHY behind your answers too."
2. Warm-up questions
Help respondents feel comfortable with easy, open-ended questions:
Examples:
Tell me about the last time you purchased a new {CONCEPT CATEGORY}.
If you could design your perfect {CONCEPT CATEGORY}, what would it be and why?
You can also add "Probing instructions" for your AI moderator to explore more deeply (e.g., "Follow up to understand why this would be their perfect snack or meal")
3. Concept-specific questions
💡 TIP: Once a section is set up for one concept with questions like the examples below, duplicate and adjust it for the others.
For each concept, ask:
A) Initial reactions
Take a look at this concept, called {CONCEPT NAME}. You can click the image to enlarge it. What are your initial reactions to this product idea?
What do you like about this product?
What do you dislike about this product?
B) Quantitative key metric questions
Appeal
Question: How appealing do you find this concept?
Question Type: Single-select
Options: Very appealing; Appealing; Neutral; Unappealing; Very unappealing
Probing instructions: Follow up on why they provided that response and what their thinking is
Willingness to purchase
Question: How likely would you be to purchase this product if it were available at your local stores?
Question Type: Single-select
Options: Very unlikely to purchase; Unlikely to purchase; Neutral; Likely to purchase; Very likely to purchase
Probing instructions: Follow up on why they provided that response and what their thinking is
💡 TIP: If possible, include a price in your stimulus, as otherwise respondents might find it hard to say whether they would purchase it or not. If you are unable to include a price, you can add “if it were available at a reasonable price” to the question.
Believability
Question: How believable do you find this concept’s claims?
Question Type: Single-select
Options: Extremely believable; Mostly believable; Somewhat believable; Not very believable; Not at all believable
Probing instructions: Follow up on why they provided that response and what their thinking is
Frequency of use/purchase
Question: How often would you expect to {USE/PURCHASE} a product like {CONCEPT NAME}?
Question Type: Single-select
Options: Add mutually exclusive options depending on the expected frequency of use
Probing instructions: Follow up on why they provided that response and what their thinking is
C) Assumption testing
Depending on your study focus, ask about:
When and where they would use the product (i.e., "moment of use", "place of use")
Preferred pricing / reasonable price point
Sales channel
Soft launch and adjustments
Before full deployment, it is recommended to conduct a soft launch with 5-10 respondents.
Once responses are gathered, review them to evaluate:
Response depth
Interview duration
Clarity of questions
If needed, adjust the guide (e.g., adjust "Probing instructions" for deeper insights) or refine recruitment.
Reporting and analysis
1. Quantitative insights
As long as you set up your key quantitative questions as multiple-choice questions, Outset automatically generates the results as graphs in its reports, so the numbers you need are already there. You can view these results on the "Insights" tab or within Custom Reports.
You can also export the results as a CSV (on the "Results" tab) to calculate quantitative insights in Excel. For example, you may want to set up a table like the one below to compare your quantitative metrics across concepts, and use conditional formatting to highlight top-performing concepts so the results are easy to differentiate. A scripted alternative is sketched after the table.
| Metric | Concept #1 | Concept #2 |
| --- | --- | --- |
| Frequency | 39% | 60% |
| Believability | 45% | 53% |
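If you prefer to compute the comparison programmatically rather than in Excel, the short pandas sketch below shows one way to do it. The column names (concept, appeal, purchase_intent) and the one-row-per-respondent-per-concept layout are assumptions for illustration; check them against your actual CSV export.

```python
import pandas as pd

# Assumed layout: one row per (respondent, concept) with the single-select answers as text.
# Column names here are hypothetical; adjust them to match your actual Outset CSV export.
df = pd.read_csv("outset_results.csv")

APPEAL_TOP2 = {"Very appealing", "Appealing"}
PURCHASE_TOP2 = {"Very likely to purchase", "Likely to purchase"}

# Top-2-box percentage per concept for each key metric.
summary = (
    df.groupby("concept")
      .agg(
          appeal_top2=("appeal", lambda s: s.isin(APPEAL_TOP2).mean()),
          purchase_top2=("purchase_intent", lambda s: s.isin(PURCHASE_TOP2).mean()),
      )
      .mul(100)
      .round(1)
)

print(summary)  # one row per concept, ready to paste into your comparison table
```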
2. Qualitative insights
Using either the "Insights" tab or Custom Reporting within Outset, use AI-generated insights to analyze:
Reasons behind participant responses
Initial impressions
What they liked/disliked
Suggested improvements
Example report questions:
What reasons did respondents provide for [refer to previous quantitative / multiple choice answer]?
What were respondents' initial impressions of the product?
What did respondents like the most about the product?
What did respondents dislike the most about the product?
What would respondents change about the product in order to make it more relevant for them?
Presenting your results
Concepts are typically presented with both a concept comparison slide and individual concept deep-dives.
Concept comparison slide:
This slide includes all of the tested concepts, with quantitative metrics listed for each concept. Highlighting can help call out where certain concepts score notably higher or lower than the others.
Individual concept slides:
You can create an individual slide for each concept. These deep-dive slides typically show all key quantitative metrics and also qualitative insights that explain the quantitative metrics.
For example, if a product had a very high appeal score, you can look at the AI-generated insights to learn what respondents liked most about the product, at insights from people who had a positive first impression of it, and at the reasons respondents gave for finding it appealing. Combining these insights will allow you to explain what specifically made the product appealing to people.
These concept deep-dive slides can also include quotes to explain these insights using direct customer wording.
Following these best practices ensures that your concept testing with Outset delivers reliable, actionable insights for making informed product decisions.
Hope this helps! If you have any further questions, please reach out to our team at [email protected] or via chat.