

GFI’s sensory science playbook takes aim at alt-protein’s most costly blind spot
The Good Food Institute recently published a practical guide on sensory evaluation that tackled one of the alternative protein sector’s most persistent and expensive challenges: knowing when a product was actually ready for market. Titled Sensory evaluation of alternative proteins: A quick-start guide, the document set out a structured framework for how companies should design, select, and interpret sensory tests across every stage of product development, from early concepts through scale-up and post-launch.
• The Good Food Institute published a practical guide urging alternative protein companies to integrate sensory testing across the full product development cycle
• The guide warned that many product failures stemmed from using the wrong sensory methods or misinterpreting results, rather than from technical limitations
• Properly designed sensory studies were positioned as a way to reduce cost, risk, and late-stage reformulation as competition in alternative proteins intensified
Rather than treating sensory work as a late-stage validation exercise, the guide argued that it should function as a decision-making tool embedded throughout the development process. The premise was straightforward. Many alternative protein products failed not because of technology gaps, but because teams asked the wrong sensory questions, used the wrong methods, or overinterpreted results that were never designed to answer commercial questions in the first place. The guide positioned sensory science as a way to reduce that risk, provided it was applied with discipline and intent.
At the heart of the document was a simple principle: sensory studies should always start with a clearly defined question. Whether the goal was to determine if two products were perceptibly different, to describe how they differed, or to measure whether consumers actually liked them determined everything that followed, including the method used, the participants selected, and the way results were interpreted.
The guide mapped different sensory approaches to specific stages of the development cycle. In early concept development, exploratory methods such as focus groups, projective techniques, consumer co-creation, and concept testing were highlighted as tools to shape direction rather than confirm performance. During prototype development, descriptive analysis, rapid profiling, and early hedonic testing helped teams screen formulations and identify promising paths forward.
As products moved into scale-up, discrimination testing and temporal methods became more important. These approaches allowed developers to assess whether process changes, ingredient substitutions, or shelf-life effects introduced detectable differences, even if consumers could not yet articulate what had changed. At commercialization and post-launch, consumer acceptance testing and shelf-life studies were framed as essential for confirming parity, preference, and consistency in real-world conditions.

A central feature of the guide was a decision tree designed to help teams select the right sensory method. If the question was whether products were different, discrimination tests such as triangle, tetrad, or ABX tests were appropriate. If the goal was to understand how products differed, descriptive methods using trained panels were required. If the objective was to measure liking, affective tests with target consumers were necessary. Using the wrong method, the guide warned, often produced data that looked rigorous but failed to inform real decisions.
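That decision tree can be paraphrased as a simple lookup. The sketch below is illustrative only: the question labels and function name are our own, not the guide’s wording, but the question-to-method mapping follows the three branches described above.

```python
def select_method(question: str) -> str:
    """Map a sensory research question to the method family the guide
    recommends. Question labels are illustrative, not the guide's wording."""
    tree = {
        "are the products different?":
            "discrimination test (triangle, tetrad, or ABX)",
        "how do the products differ?":
            "descriptive analysis with a trained panel",
        "do consumers like the product?":
            "affective (hedonic) test with target consumers",
    }
    try:
        return tree[question.lower()]
    except KeyError:
        raise ValueError(f"No mapped method for question: {question!r}")
```

The point of encoding the tree is the failure mode it prevents: a question that does not match one of the three branches raises an error instead of silently defaulting to a familiar method.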
The document repeatedly cautioned against common mistakes that undermined sensory work. Internal testing using employees or stakeholders was flagged as a major source of bias, particularly for affective studies. Employees were familiar with the product and invested in its success, making them poor proxies for target consumers. External participants were essential when the goal was to understand market response.
Study design details also received significant attention. Proper blinding, balanced and randomized sample order, consistent serving conditions, and the inclusion of appropriate benchmarks were presented as non-negotiable elements of credible sensory research. Benchmarks, in particular, were positioned as critical for context. Claims of parity required comparison to a well-liked animal product, while high-performing alternative products could serve as additional reference points.
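Balanced, randomized serving order, one of the design elements the guide treats as non-negotiable, can be sketched in a few lines. The code below is a simplified stand-in (not the guide’s procedure, and simpler than a formal Williams design): it assumes three blind-coded samples and assigns participants to the full set of permutations in rotation, so that across a complete rotation every sample appears in every serving position equally often.

```python
import itertools
import random

def balanced_orders(samples, n_participants, seed=42):
    """Assign each participant a serving order such that, across a full
    rotation, every sample appears in every position equally often."""
    perms = list(itertools.permutations(samples))
    rng = random.Random(seed)
    rng.shuffle(perms)  # randomize which participant receives which order
    # cycle through the full permutation set so positions stay balanced
    return [perms[i % len(perms)] for i in range(n_participants)]

# hypothetical study: three coded samples, six participants
orders = balanced_orders(["A", "B", "C"], n_participants=6)
```

With six participants and three samples, the six permutations are each used once, so every sample is served first, second, and third exactly twice across the panel.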
The guide also addressed how results should be analyzed and interpreted. Statistical significance was not the same as commercial relevance, and a detected difference did not automatically imply a preference. Large variability in responses could indicate consumer segmentation rather than product failure. Overinterpretation of p-values without considering effect size and distribution was identified as a recurring issue across the sector.
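The gap between statistical significance and commercial relevance can be made concrete with an effect-size calculation. The stdlib-only sketch below computes Cohen’s d for two sets of 9-point hedonic scores; the scores themselves are invented for illustration, and the effect-size approach is a standard statistical technique rather than a formula taken from the guide.

```python
import math
import statistics

def cohens_d(a, b):
    """Standardized mean difference between two independent score sets."""
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    # pooled standard deviation across both groups
    pooled_sd = math.sqrt(((len(a) - 1) * var_a + (len(b) - 1) * var_b)
                          / (len(a) + len(b) - 2))
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

# hypothetical 9-point hedonic scores: prototype vs. animal benchmark
prototype = [7, 6, 7, 8, 6, 7]
benchmark = [6, 6, 7, 7, 6, 6]
d = cohens_d(prototype, benchmark)
```

A large panel can push a trivial mean difference below p = 0.05; reporting the standardized difference alongside the p-value is what keeps a “significant” result from being mistaken for a commercially meaningful one.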
Context emerged as another recurring theme. Alternative proteins differed from conventional products not only in flavor, but also in mouthfeel, aftertaste, and consumption context. Testing products in formats that did not reflect how they were actually eaten risked producing misleading conclusions. Expectation effects from labeling, product descriptions, or brand cues could also shape perception, making it important to separate intrinsic sensory quality from extrinsic influences.
To ground its recommendations, the guide included detailed case studies. One illustrated how a triangle test could determine whether consumers could distinguish a plant-based nugget from a leading conventional chicken nugget, without claiming anything about preference. Another showed how rapid descriptive methods such as flash profiling could map the sensory positioning of plant-based milks relative to dairy. A third demonstrated how a 9-point hedonic scale could be used to assess whether a plant-based meatball had achieved sensory parity with its animal benchmark.
Across these examples, the guide was explicit about what each method could and could not deliver. Sensory science was presented not as a tool to prove success, but as a way to inform iteration, identify trade-offs, and support evidence-based decisions.
For alternative protein companies operating under tight capital constraints, the implications were clear. Poorly designed sensory studies wasted time and money, while well-integrated sensory programs reduced the likelihood of late-stage reformulation or market failure. As competition intensified and claims of parity faced increasing scrutiny, the ability to generate credible sensory evidence became less of a nice-to-have and more of a strategic necessity.
Rather than offering a one-size-fits-all solution, the guide positioned sensory evaluation as a discipline that demanded the same rigor as process engineering or nutrition science. For a sector increasingly judged on taste as much as technology, it framed sensory science not as a supporting function, but as a core capability.
If you have any questions or would like to get in touch with us, please email info@futureofproteinproduction.com

