Being able to see—and tease apart—a video image’s layers of light can help reveal objects of interest for purposes of security, bioimaging and more
CAMBRIDGE, MA—By 2020, more than one billion cameras worldwide will produce 30 billion frames of video per day. This tidal wave of content will come from security cameras, surveillance systems and a citizen army of smartphones. Not all video is equal, however. To the expert eye, video images often hide what observers want to see: glare, window reflections and backlit signal sources can obscure the people in the scene.
Andrew Berlin, distinguished member of the technical staff at Draper, believes the solution to this needle-in-the-haystack problem is to analyze video images as layers of light. By using a set of techniques borrowed from the world of hearing aids, Berlin suggests we can better detect, visualize and recognize subjects of interest in video images that are largely hidden from view.
“In a typical video, living, breathing human subjects move around by at least a few pixels each second. In contrast, background lighting sources tend to vary far more slowly. A multilayer approach uses this difference in rate of change to selectively amplify a subject of interest while de-emphasizing confounding signals,” Berlin said.
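The rate-of-change separation Berlin describes can be sketched with a simple temporal filter: a slow running average tracks the background lighting, and the residual keeps the faster-changing subject layer. This is a hypothetical illustration of the general idea, not the authors' published algorithm; the function name and smoothing parameter are assumptions.

```python
import numpy as np

def temporal_highpass(video, alpha=0.95):
    """Split video into slow and fast layers of light (illustrative sketch).

    video: array of shape (n_frames, H, W).
    alpha: smoothing factor; closer to 1 means the background model
           adapts more slowly.
    Returns the fast-changing residual layer per frame.
    """
    background = video[0].astype(float)
    layers = np.empty(video.shape, dtype=float)
    for i, frame in enumerate(video):
        # Exponential moving average tracks slowly varying lighting.
        background = alpha * background + (1 - alpha) * frame
        # The residual keeps only pixels changing faster than the
        # background model, e.g. a breathing, moving subject.
        layers[i] = frame - background
    return layers
```

A constant scene yields a near-zero residual everywhere, while a pixel that changes suddenly shows up strongly in the fast layer, which can then be selectively amplified.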
Hearing aids use spectral subtraction to build a model of the background sound environment, so that features such as the hum of air ventilation systems may be removed from the audio signal before amplification. The payoff: speech is amplified but the other sound sources aren’t.
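The spectral-subtraction idea can be sketched in a few lines: estimate the magnitude spectrum of the background from noise-only frames, subtract it from each signal frame's spectrum, and resynthesize. This is a minimal textbook sketch, assuming NumPy; the function name and the spectral-floor parameter are illustrative choices, not from the paper.

```python
import numpy as np

def spectral_subtraction(frames, noise_frames, floor=0.02):
    """Subtract an estimated stationary noise spectrum before amplification.

    frames:       array (n_frames, frame_len) of windowed signal samples.
    noise_frames: array of noise-only frames (e.g. ventilation hum).
    floor:        fraction of the original magnitude kept as a floor,
                  which avoids negative magnitudes ("musical noise").
    """
    # Model the background as the average magnitude spectrum of the
    # noise-only frames.
    noise_mag = np.abs(np.fft.rfft(noise_frames, axis=-1)).mean(axis=0)
    spectra = np.fft.rfft(frames, axis=-1)
    mag, phase = np.abs(spectra), np.angle(spectra)
    # Subtract the noise estimate, clamped to the spectral floor.
    clean_mag = np.maximum(mag - noise_mag, floor * mag)
    # Resynthesize with the original phase.
    return np.fft.irfft(clean_mag * np.exp(1j * phase),
                        n=frames.shape[-1], axis=-1)
```

After subtraction, steady tones such as a ventilation hum are heavily attenuated while the speech components, absent from the noise model, pass through and can be amplified.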
Berlin applied the same technique to video images, working alongside BARS Imaging founder Rajeev Surati. The two researchers used a set of techniques called video deconfounding to isolate and amplify the light associated with the subject of a video without amplifying the confounding light sources, such as reflections.
Surati said, “An advantage of the video deconfounding approach is that it exploits rates of change rather than explicit models of subject motion. Thus it can be effective even in situations where the subject is stationary but is changing shape, appearance or orientation.”
Berlin and Surati describe the technique for identifying and classifying images in a paper titled “Video Deconfounding: Hearing-Aid Inspired Video Enhancement,” which they presented recently at the 2018 IEEE Image, Video and Multidimensional Signal Processing Workshop.
Draper has continued to advance human-centered engineering, helping users better understand, assimilate and convey information for critical decisions and tasks. Through its Human-Centered Solutions capability, Draper enables accomplishment of users’ most critical missions by seamlessly integrating technology into a user’s workflow, drawing on emerging findings in applied psychophysiology and cognitive neuroscience. Draper has deep skills in the design, development and deployment of systems that support cognition – for users seated at desks, on the move with mobile devices or maneuvering in the cockpit of vehicles – and collaboration across human-human and human-autonomous teams.
Draper combines domain expertise with the latest analytics techniques to extract meaningful information from raw data and better understand complex, dynamic processes. Its system design approach encompasses effective organization and processing of large data sets, automated analysis using algorithms and exploitation of results. To facilitate user interaction with these processed data sets, Draper applies advanced techniques to automate the understanding and correlation of patterns in the data. Draper’s expertise encompasses machine learning (including deep learning), information fusion from diverse and heterogeneous data sources, optimized coupling of data acquisition and analysis, and novel methods for analysis of imagery and video data.