One sensor to rule them all

Jeff Bier’s Column: The unique position of image sensors

It’s no secret that sensors are proliferating. Our smartphones, for example, contain accelerometers, magnetometers, ambient light sensors, microphones – over a dozen distinct types of sensors. A modern automobile contains roughly 200 sensors.
As sensors proliferate, the amount of data generated by these sensors grows too, of course. But different types of sensors produce vastly different amounts of data. As Chris Rowen, CTO of Cadence’s IP group, recently pointed out in a presentation, image sensors occupy a unique position in the sensor world. While the number of image sensors deployed across all products is a small fraction of the total number of sensors deployed, the amount of data generated by deployed image sensors dwarfs the amount of data generated by all other types of sensors combined.

The math behind this is simple: image sensors generate a lot of data. Even an old-fashioned VGA-resolution image sensor running at 30fps generates over 25MB of color video data per second. That’s 1,000 to 1,000,000 times more data per second than most other common sensor types (a quick sanity check of this figure appears below). You may point out that data is not the same thing as information. Indeed, not every pixel streaming from an image sensor contains useful information. But we’re discovering that there’s valuable information contained in more pixels than we might have suspected.

Most sensors are single-purpose: one type of sensor for temperature, another for magnetic field, another for ambient light, etc. Image sensors are unique in that – when coupled with the right algorithms and sufficient processing power – they can become ‘software-defined sensors’, capable of measuring many different types of things. For example, using video of a person’s face and shoulders, it’s possible to identify the person, estimate their emotional state, determine heart rate and respiration rate, detect intoxication and drowsiness, and determine where the person’s gaze is directed. Similarly, in cars and trucks, a single image sensor (or a small cluster of them) can detect other vehicles, brake lights, pedestrians, cyclists, lane markings, speed limit signs, and more. Demonstrating the versatility of vision sensors, the Mercedes-Benz Magic Body Control system measures the detailed topography of the road surface. According to Mercedes: “The system is thus able to recognize an uneven road surface before you come to drive over it, thus enabling the suspension to adjust itself in order to counteract, as far as possible, the undulations in the road.”

To paraphrase Chris Rowen’s presentation: in the future, approximately 100% of the data generated by all types of deployed sensors will be image and video data. This creates enormous opportunities, and big challenges as well. The opportunities stem from the fact that image and video data can provide so many different types of useful information. The challenges derive from the inherent complexity of reliably extracting valuable information from pixels.

Fortunately, as an industry, we’re making rapid progress on these challenges through a combination of more powerful and energy-efficient processors, more reliable computer vision and deep learning algorithms, and better development tools. These improvements are enabling us to build devices, systems and applications that are safer, more autonomous, more responsive and more capable.
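
As a quick sanity check of the data-rate figures above, the short Python sketch below works out the raw throughput of an uncompressed VGA color stream at 30fps and compares it with a typical low-rate sensor. The accelerometer parameters (3 axes, 16-bit samples, 1kHz sampling) are illustrative assumptions, not figures from the column.

# Back-of-the-envelope check of the data-rate claim above.
# Assumes uncompressed 24-bit RGB video (an illustrative sketch).

def raw_video_rate_bytes_per_sec(width, height, bytes_per_pixel, fps):
    """Raw (uncompressed) data rate of an image sensor in bytes per second."""
    return width * height * bytes_per_pixel * fps

# VGA (640x480), 3 bytes per pixel, 30 frames per second
vga_rate = raw_video_rate_bytes_per_sec(640, 480, 3, 30)
print(f"VGA @ 30 fps: {vga_rate / 1e6:.1f} MB/s")  # ~27.6 MB/s, i.e. "over 25MB" per second

# Assumed comparison point: a 3-axis accelerometer, 16-bit samples, 1 kHz
accel_rate = 3 * 2 * 1000  # axes * bytes per sample * samples per second
print(f"Accelerometer @ 1 kHz: {accel_rate / 1e3:.1f} kB/s")
print(f"Ratio: ~{vga_rate // accel_rate:,}x")  # several thousand times more data

The resulting ratio of a few thousand to one sits comfortably inside the 1,000 to 1,000,000 range quoted in the column; sensors sampled far more slowly than 1kHz push the ratio toward the upper end.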

Embedded Vision Alliance
www.embedded-vision.com
