Panel Discussion: Embedded Vision Everywhere?

inVISION: Will AI play an important role?

Schmitt: We do have applications where customers use AI, but at the moment it is limited to a small sector of ITS applications.

Carlson: Our customers don’t want to deploy AI just for the sake of AI. They want to figure out a new way to deliver value. At the low end you can have a concept like Sparse Modelling from Hacarus, and at the high end a deep learning concept. That whole range is going to happen at the edge.

Scheubel: AI is a tool that we apply to solve customer problems. It opens the door and makes many applications possible, which could not be solved with classical computer vision.

Gross: AI should happen on the embedded board, not on the camera. The AI training will usually happen in the cloud, where you have virtually unlimited processing power. The deployment is at the edge, where you just have the models.

Scheubel: But in some cases, processing (inference) can be done very close to the camera. You can place dedicated hardware accelerators right behind the image processor. Also, our experience is that it is cheaper to train neural networks on premises with your own GPU clusters than in the cloud. Entry barriers are very low in the cloud and you can scale very fast, but it also gets expensive very fast. It is important to keep in mind that for training you need much more advanced hardware than for deployment and inference itself.
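
For illustration, a minimal sketch of what such edge inference can look like, assuming a hypothetical detector.onnx model and ONNX Runtime as the inference engine; the provider names and input shape are placeholders, not details from the panel:

```python
# Minimal sketch: run an already trained model at the edge with ONNX Runtime.
# Assumptions: a hypothetical "detector.onnx" with one float32 input of
# shape [1, 3, 224, 224]; any available accelerator provider is preferred.
import numpy as np
import onnxruntime as ort

preferred = ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession("detector.onnx", providers=providers)
input_name = session.get_inputs()[0].name

# Stand-in for a preprocessed camera frame.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Only inference happens here; the expensive training was done elsewhere.
outputs = session.run(None, {input_name: frame})
print(outputs[0].shape)
```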

Bach: At the end of the day the customer who expects a solution buys a black box. For them it doesn’t really matter whether it’s a traditional algorithm or an AI.

Abel: If we want to support our operators or robots, we need a comprehensive understanding of the environment. This can be done with classical computer vision, or we need a deep learning method for it. But we have to provide robust products, especially our forklifts as well as their assistance systems. With machine learning it is hard to see where the boundaries of the machine learning algorithm are. That’s a huge problem.

Scheubel: If we apply AI for detection or classification tasks, we always combine it with classical computer vision. Only with such a hybrid approach can the highest levels of accuracy be reached. Subsequently, we integrate the AI model combined with computer vision into the software environment of our customers.
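
To illustrate such a hybrid pipeline, here is a minimal sketch, assuming OpenCV for the classical stage and a hypothetical classify_crop() function standing in for the trained network; none of the names reflect a specific product mentioned in the panel:

```python
# Minimal sketch of a hybrid pipeline: classical CV proposes candidate regions,
# an AI model (placeholder here) decides which candidates are real defects.
import cv2
import numpy as np

def classify_crop(crop: np.ndarray) -> float:
    """Placeholder for the neural network; returns a defect score in [0, 1]."""
    return float(crop.mean() < 100)  # dummy heuristic instead of a real model

def inspect(image: np.ndarray) -> list[tuple[int, int, int, int]]:
    # Classical stage: segment candidate regions with thresholding + contours.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    defects = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 50:            # classical plausibility check: ignore tiny blobs
            continue
        # AI stage: only promising candidates are passed to the (placeholder) model.
        if classify_crop(gray[y:y + h, x:x + w]) > 0.5:
            defects.append((x, y, w, h))
    return defects

if __name__ == "__main__":
    img = np.full((240, 320, 3), 255, np.uint8)
    cv2.circle(img, (160, 120), 20, (30, 30, 30), -1)  # synthetic dark "defect"
    print(inspect(img))
```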

Schmitt: AI can help in some applications where you do not need 100% accuracy. But when we are talking about machine vision in industrial applications, people sometimes expect 100%.

inVISION: What are the limits of AI?

Carlson: The only limit is the price-performance ratio. But we have seen systems where customers want to combine cameras and Lidar and run AI on GPUs and CPUs in pretty extensive systems. So I don’t see any real limitations.

Scheubel: When we train models for customers, it’s very important that the model and the data set are well balanced. If they are unbalanced, then the models and the AI will show unexpected detections. Another limitation is certifiability. We are from Germany and we want to certify everything, but it’s quite hard to certify AI-based software.
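
One simple way to check and counter such imbalance is inverse-frequency class weighting; the sketch below uses a hypothetical toy label list, not data from any of the companies in the panel:

```python
# Minimal sketch: derive class weights from label frequencies so that rare
# classes contribute as much to the training loss as common ones.
from collections import Counter

labels = ["ok"] * 950 + ["defect"] * 50        # strongly imbalanced toy data
counts = Counter(labels)
total = len(labels)

class_weights = {cls: total / (len(counts) * n) for cls, n in counts.items()}
print(class_weights)   # e.g. {'ok': ~0.53, 'defect': ~10.0}
```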

www.alliedvision.com

www.congatec.com

www.cst-gmbh.eu

www.cubemos.com

www.still.de

www.vision-components.com
