LivingLens, the leading video intelligence platform, has extended its capabilities for analyzing consumer video content with the launch of advanced recognition features.
The launch includes facial emotion recognition, tonal recognition and object recognition. The new capabilities supplement the existing video intelligence suite, which allows platform users to decipher the full range of human behaviour demonstrated in video content.
LivingLens has incorporated the latest artificial intelligence, which works by identifying key landmarks and expressions of the human face. A collection of deep learning algorithms then analyzes this information, classifying facial expressions within the video content and mapping each expression to a specific emotion.
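The final step described above, turning per-expression classifier scores into a single emotion label, can be sketched as follows. This is an illustrative simplification, not LivingLens's actual implementation; the landmark detection and deep-learning scoring are assumed to happen upstream, and all names are hypothetical.

```python
def classify_emotion(expression_scores):
    """Pick the highest-scoring emotion label for one video frame.

    expression_scores: dict mapping emotion name -> confidence (0..1),
    e.g. produced upstream by a deep-learning model from facial landmarks.
    Returns "neutral" when no expression was scored at all.
    """
    if not expression_scores:
        return "neutral"
    # The label with the highest model confidence wins.
    return max(expression_scores, key=expression_scores.get)

# Example output from a hypothetical model for a single frame:
frame_scores = {"happiness": 0.82, "surprise": 0.10, "anger": 0.03}
print(classify_emotion(frame_scores))  # happiness
```

In practice a production system would smooth these per-frame labels over time rather than trusting any single frame.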
LivingLens tonal recognition provides insight into the way people communicate, advancing analysis beyond 'what' people are saying to 'how' they are saying it. The tone in which words are spoken can reveal additional understanding of how consumers are feeling and adds further context to the spoken word. Used alongside the existing sentiment analysis, tonal analysis delivers an added layer of understanding.
Object recognition identifies the objects within a video, providing additional context to the content. Objects help to determine where consumers are, for example in a shop, at the airport or in a kitchen, and therefore what they are doing. The LivingLens platform not only highlights the objects that are most prevalent within the content; it also allows users to select from all the objects seen and navigate to where they appear.
Carl Wong, CEO, says: "Our mission is to unlock the insight in people's stories to inspire decisions, and technology is allowing us to accelerate our ability to interpret consumer video content at scale. We are delighted with the latest additions to our existing suite of capabilities, which provide a lens into the all-important emotions of consumers and give additional context to consumers' content through their surroundings."
Wong continues: "Historically, video has been challenging to work with, but we are seeing the use of video expand as technology continues to develop and improve, providing high levels of accuracy which previously would have required human intervention. Where once video was limited to small-scale studies, it's exciting to see projects with large volumes which simply weren't practical before."
The three new recognition capabilities are time-stamped against the corresponding video content, allowing researchers to quickly and easily pinpoint the exact moments of interest. Results are returned within seconds, making near real-time analysis of video content possible.
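The idea of time-stamping detections so moments of interest can be pinpointed can be sketched as a simple lookup index. This is a hypothetical illustration of the data shape, not the LivingLens API: each detection carries the second in the video at which it occurs, grouped by label for fast retrieval.

```python
from collections import defaultdict

def index_detections(detections):
    """Group (timestamp_sec, label) detections by label for fast lookup."""
    index = defaultdict(list)
    for ts, label in detections:
        index[label].append(ts)
    return index

# Hypothetical mixed output from the three recognition capabilities:
detections = [
    (3.2, "kitchen"),    # object recognition
    (3.2, "happiness"),  # facial emotion recognition
    (7.8, "kitchen"),
    (12.5, "excited"),   # tonal recognition
]
index = index_detections(detections)
print(index["kitchen"])  # [3.2, 7.8]
```

A researcher could then jump straight to every second at which a given object, emotion or tone was detected.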
LivingLens' leading video capture and analytics platform unlocks the insight in people's stories to inspire decisions. The LivingLens platform captures and analyzes multimedia content, translating human behaviour into a usable data asset. Putting the consumer just one click away, LivingLens is redefining how brands get closer to their audiences by utilising the range of information held within video content, including speech, emotions, activity and objects. LivingLens makes leveraging the power of video efficient and scalable for global brands, agencies and technology providers, enabling fast insight generation.
LivingLens has offices in Liverpool, London, New York and Toronto. Contact us at email@example.com for further information.