In 2003 zoologist Eric Warrant took a cold call from an unexpected would-be collaborator: Toyota Motor Corporation.
Toyota was eager to help drivers see the road better at night. Most pedestrian fatalities occur at that time because walkers are hard to see (alcohol consumption may contribute to this statistic as well). The company was also in the early stages of thinking about driverless cars, which would require cameras that can see roads and signs not only during the day but also at night. Warrant’s specialty is nocturnal vision in insects, and the automaker wondered if the Lund University scientist could help it develop biology-inspired night vision.
Standard infrared night-vision technology has problems on the road, notes Henrik Malm, one of the mathematicians at Lund whom Warrant recruited for the project. For example, some night cameras illuminate a scene with infrared light and use its reflection to compose an image. That might work for one car to spot a pedestrian at a dark intersection, but if two cars came down a street toward each other at night, both sending out infrared beams, they would simply blind each other’s cameras. Infrared cameras also do not pick up colors, which is a problem for reading colored traffic signs.
Warrant and his colleagues had learned that one trick night-active animals use is to sum their visual input in two different ways, adding together incoming light signals to perceive a brighter image. An animal’s brain can combine the light signals triggered by several neighboring light receptors, and it can also add up signals that arrive over many milliseconds. Malm set out to design an algorithm that would do the same with images from a digital camera. The concept is simple enough: combine incoming light from several neighboring pixels, and sum the signal that came in over the past several milliseconds.
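The pooling idea is easy to sketch in code. The snippet below is a minimal illustration of the two kinds of summation described above, not the Lund team’s actual algorithm; the pooling radius, window length, and frame dimensions are assumptions chosen only for demonstration.

```python
import numpy as np
from collections import deque

def spatial_sum(frame, radius=1):
    """Sum each pixel with its neighbors in a (2*radius+1) x (2*radius+1) block."""
    padded = np.pad(frame, radius, mode="edge")
    out = np.zeros_like(frame, dtype=np.float64)
    size = 2 * radius + 1
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out

class TemporalSummer:
    """Keep a rolling window of recent frames and sum them."""
    def __init__(self, window=4):
        self.frames = deque(maxlen=window)

    def add(self, frame):
        self.frames.append(frame)
        return np.sum(self.frames, axis=0)

# Usage: brighten a dim, noisy frame by pooling in space, then over time.
summer = TemporalSummer(window=4)
frame = np.random.poisson(2.0, size=(480, 640)).astype(np.float64)  # stand-in for a low-light image
brightened = summer.add(spatial_sum(frame, radius=1))
```

Both steps trade resolution for brightness: pooling neighbors blurs fine detail in space, and pooling frames blurs anything that moves.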
But that is a lot of numbers for a computer to crunch, and for the camera to do drivers any good, it would have to deliver the images in real time. “The big challenge was to make it fast enough,” Malm says. He and another Lund mathematician, Magnus Oskarsson, wrote a program that processed the images on a computer graphics card (not standard in typical cameras at the time), which ran the calculations in parallel and therefore faster.
Another challenge the researchers overcame, Oskarsson notes, was that the camera and its algorithms had to adapt quickly to changing conditions such as car speed and exterior light levels. Pooling signals over time works well at slow speeds but can blur the image too much when the driver hits the accelerator.
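One simple way to picture that trade-off is to let the temporal window shrink as speed rises. The function below is a hypothetical sketch of that idea, not the researchers’ method; the frame counts, top speed, and linear mapping are invented purely for illustration.

```python
def temporal_window_for_speed(speed_kmh, min_frames=1, max_frames=8, top_speed=120.0):
    """Return how many recent frames to pool: many at a crawl, few on the highway."""
    fraction = min(max(speed_kmh, 0.0), top_speed) / top_speed  # 0.0 when stopped, 1.0 at top speed
    return max(min_frames, round(max_frames * (1.0 - fraction)))

print(temporal_window_for_speed(10))   # slow city driving: pool ~7 frames for a brighter image
print(temporal_window_for_speed(110))  # highway speed: pool 1 frame to avoid motion blur
```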
More than a decade ago the scientists drove their test camera around the Swedish countryside and on streets surrounding Toyota Motor Europe’s headquarters in Brussels. “The algorithm performed incredibly well even though everything was a little bit grainier—and a bit more sluggish,” Warrant says. “It was just like somebody turned the lights on.”
Jonas Ambeck-Madsen, senior manager in artificial intelligence and automated driving for Toyota Motor Europe’s research and development center, called the results “wonderful.” While Toyota hasn’t specified yet what it will do with Oskarsson and Malm’s algorithm, Ambeck-Madsen says it’s certainly something that could be applied to cars the company wants to build.