The brain’s visual perception system automatically and unconsciously guides decision-making through valence perception.
This is what a new study from Carnegie Mellon University’s Center for the Neural Basis of Cognition (CNBC) has shown.
The review hypothesizes that valence, defined as the positive or negative information automatically perceived in most visual input, integrates an object's visual features with associations drawn from past experience with similar objects or features. In other words, it is the process that allows our brains to rapidly choose between similar objects.
The findings offer insights into consumer behaviour that traditional marketing focus groups cannot capture. Because valence perception is unconscious, simply asking individuals to react to package designs, ads or logos misses it. Instead, companies can use this type of brain science to assess how unconscious visual valence perception contributes to consumer behaviour.
To carry the research into the online video market, the CMU team is founding the start-up company neonlabs with the support of the National Science Foundation (NSF) Innovation Corps (I-Corps).
“This basic research into how visual object recognition interacts with and is influenced by affect paints a much richer picture of how we see objects,” said Michael J. Tarr, the George A. and Helen Dunham Cowan Professor of Cognitive Neuroscience and co-director of the CNBC.
“What we now know is that common, household objects carry subtle positive or negative valences and that these valences have an impact on our day-to-day behaviour,” he explained.
Tarr added that the NSF I-Corps program has been instrumental in helping the neonlabs team turn this basic idea into a viable company.
“The I-Corps program gave us unprecedented access to highly successful, experienced entrepreneurs and venture capitalists who provided incredibly valuable feedback throughout the development process,” he said.
The Tarr team is launching neonlabs to apply its model of visual preference to increasing click-through rates on online videos by identifying the most visually appealing thumbnail frame within a video stream.
The web-based software selects a thumbnail based on neuroimaging data on object perception and valence, crowdsourced behavioural data, and proprietary computational analysis of large volumes of video.
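As a purely illustrative sketch (the article does not describe neonlabs' actual algorithm, and the feature names and weights below are invented for illustration), selection of this kind could rank candidate video frames by a predicted valence score and pick the highest-scoring one:

```python
# Hypothetical sketch: score candidate video frames on simple visual
# features and choose the highest-scoring frame as the thumbnail.
# The features and weights are made up for illustration only.

def valence_score(features):
    """Combine hypothetical visual features into one valence estimate."""
    weights = {"brightness": 0.4, "face_present": 0.35, "sharpness": 0.25}
    return sum(weights[k] * features[k] for k in weights)

def pick_thumbnail(frames):
    """Return the candidate frame with the highest predicted valence."""
    return max(frames, key=lambda f: valence_score(f["features"]))

# Three candidate frames sampled from a video at different timestamps.
frames = [
    {"time_s": 1.0, "features": {"brightness": 0.2, "face_present": 0.0, "sharpness": 0.9}},
    {"time_s": 5.0, "features": {"brightness": 0.8, "face_present": 1.0, "sharpness": 0.7}},
    {"time_s": 9.0, "features": {"brightness": 0.6, "face_present": 0.0, "sharpness": 0.5}},
]

best = pick_thumbnail(frames)
print(best["time_s"])  # the bright frame with a face (5.0 s) wins
```

In a real system the hand-tuned weights would presumably be replaced by a model fitted to neuroimaging and crowdsourced behavioural data, but the selection step reduces to this kind of argmax over scored frames.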
“Everything you see, you automatically dislike or like, prefer or don’t prefer, in part, because of valence perception,” said Sophie Lebrecht, lead author of the study and the entrepreneurial lead for the I-Corps grant.
“Valence links what we see in the world to how we make decisions,” she noted.
The study was published in the journal Frontiers in Psychology.