CAMBRIDGE, MA— A research team has developed a computational photography technique that can detect objects around corners from a single ordinary photograph. The technique separates the shadow cast by a hidden object from the underlying floor pattern, then reconstructs the hidden scene on a computer screen.
The research earned the team the best poster award at this year’s IEEE International Conference on Computational Photography (ICCP). Computational photography seeks to create new photographic functionalities and experiences that go beyond what is possible with traditional cameras and image processing tools.
“Imaging the scene behind a barrier can provide tactical advantage in many real-life scenarios, for instance, autonomous vehicle navigation, and search and rescue,” said Sheila Werth, the team’s lead investigator.
Werth is pursuing her Ph.D. as a Draper Fellow, co-advised by Vivek Goyal, professor at Boston University, and Chris Yu at Draper. Additional authors of the poster include Yanting Ma, John Murray-Bruce and Charles Saunders at BU, and William Freeman, professor at the Massachusetts Institute of Technology.
The poster describes a common scenario where the obstruction is a wall, and light from the hidden scene is cast onto the floor around the corner, forming a penumbra, or partial shadow. Using a combination of geometry, discrete modeling and algorithms, the team was able to separate the penumbra light from the underlying floor pattern, and reconstruct the hidden scene.
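Conceptually, the photograph of the floor can be modeled as the floor's reflectance pattern multiplied by a penumbra that mixes light from the hidden scene according to what the wall corner lets each floor point see; recovering the scene is then a linear inverse problem. The sketch below illustrates this idea in one dimension under simplified, assumed conditions (a straight-edge occluder, a floor pattern treated as known, and a generic Tikhonov-regularized solver); it is an illustration of the general approach, not the authors' implementation, and all names and sizes are hypothetical.

```python
import numpy as np

# Illustrative 1-D sketch of penumbra-based reconstruction around a
# corner. The straight-edge occluder model, the known floor albedo,
# and the Tikhonov solver are simplifying assumptions for clarity.

rng = np.random.default_rng(seed=7)
n_scene, n_floor = 16, 64            # hidden-scene / floor resolutions

# Visibility matrix A: floor pixel i receives light from scene element j
# only if the wall edge does not block the line of sight; a straight
# edge reveals progressively more of the scene across the floor.
i = np.arange(n_floor)[:, None]
j = np.arange(n_scene)[None, :]
A = (j <= i * n_scene / n_floor).astype(float) / n_scene

scene = rng.uniform(0.0, 1.0, n_scene)           # unknown hidden scene
albedo = 0.5 + 0.5 * rng.uniform(size=n_floor)   # floor pattern
penumbra = A @ scene                             # partial shadow on floor
photo = albedo * penumbra + 1e-5 * rng.standard_normal(n_floor)

# Reconstruction: divide out the floor pattern, then solve the
# ill-posed linear inverse problem with Tikhonov regularization.
y = photo / albedo
lam = 1e-4
scene_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ y)
```

In the real setting the floor pattern is not known in advance, so separating the penumbra from the underlying pattern, which the sketch sidesteps by assuming the albedo is given, is the harder part the poster addresses.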
The IEEE International Conference on Computational Photography brings together researchers from many different disciplines working on computational photography. The team’s poster is titled “Occluder-Aided Computational Periscopy.”