Data Quality & AI

Expert Panel: Why AI Vision is (not) Easy-to-Use

With AI, users should be able to train and implement inspection systems themselves, without prior vision knowledge. But who is already using AI vision, or is the topic perhaps more complex than it seems? A panel with experts from ARC Advisory Group, IDS, Maddox AI and MVTec discussed this at the inVISION Days 2023 conference.

Eckstein: In theory, Copy&Paste should be possible if the new conditions and images are represented in the data set the model was trained on. Often this is not the case, because conditions change: the lighting is different, or the background color of a conveyor belt is not the same. The way we want to solve this problem is with Continual Learning. You train a basic model and then adapt it with a few images on the production side. You just train on the CPU with an additional hundred images, which only takes a minute.
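The workflow Eckstein describes, a base model trained offline and then fine-tuned on the CPU with a small batch of on-site images, can be illustrated with a minimal sketch. This is not MVTec's implementation; it uses a plain logistic-regression model on made-up feature vectors, purely to show the idea of continuing training from existing weights instead of starting from scratch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def add_bias(X):
    # Append a constant column so the model can shift its decision boundary.
    return np.hstack([X, np.ones((len(X), 1))])

def train_logreg(X, y, w=None, lr=0.1, epochs=300):
    """Full-batch gradient descent; passing `w` continues training
    from an existing model instead of starting from zero weights."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

# "Factory" base model: trained on a large, representative data set.
X_base = add_bias(rng.normal(size=(1000, 16)))
y_base = (X_base[:, 0] > 0).astype(float)
w_base = train_logreg(X_base, y_base)

# On-site adaptation: ~100 images under shifted conditions (simulated
# here as a feature shift), fine-tuned starting from the base weights.
X_new = add_bias(rng.normal(loc=0.5, size=(100, 16)))
y_new = (X_new[:, 0] > 0.5).astype(float)
w_adapted = train_logreg(X_new, y_new, w=w_base.copy())

acc_new = np.mean((sigmoid(X_new @ w_adapted) > 0.5) == y_new)
```

The key point of the sketch is only the `w=w_base.copy()` argument: the on-site step inherits everything the base model learned and needs just a small, cheap update.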

Is Continual Learning with just a few images really deployed in production?

Eckstein: The general problem in retraining a model with Continual Learning is that you put only a few images into the data set and hope that the model is better than before. However, the methods we had in the past behaved like humans: humans greatly overvalue current information over old information. What these models suffered from is called catastrophic forgetting, meaning that although they could classify the new images, they could no longer classify the old ones.

So if you have a classification model with three classes and you want to add a fourth class for a new defect type, some companies in the past added a branch for only that one class, which ended up not being a good solution.
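A common remedy for the catastrophic forgetting Eckstein mentions is rehearsal (replay): instead of retraining on the new images alone, you mix in a sample of the original training data so the old classes stay represented. The sketch below is a hypothetical data-preparation helper, not any vendor's API; function and label names are invented for illustration.

```python
import random

def build_retraining_set(old_dataset, new_samples, rehearsal_size=500, seed=0):
    """Rehearsal/replay sketch: combine a random sample of the original
    training data with the new images, so retraining on the mixed set
    does not overwrite what the model already learned."""
    rng = random.Random(seed)
    rehearsal = rng.sample(list(old_dataset), min(rehearsal_size, len(old_dataset)))
    mixed = rehearsal + list(new_samples)
    rng.shuffle(mixed)  # interleave old and new so batches contain both
    return mixed

# Usage with hypothetical (image_id, label) pairs: three existing classes
# plus a hundred images of a new fourth defect type.
old = [(f"img_{i}", "classA") for i in range(2000)]
new = [(f"new_{i}", "classD") for i in range(100)]
train_set = build_retraining_set(old, new, rehearsal_size=300)
```

Retraining on `train_set` rather than on `new` alone is what keeps the gradient signal for the old classes alive during the update.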

Routschka: On an embedded system, Copy&Paste depends on the use case and how stable it is. If it's an application that is going to be distributed to different production facilities, the setup is quite different from a one-off application. On the other hand, I've tried it several times with our NXT system, with applications from different people and in different locations, and it works. It depends on what you want to detect. Whether you're looking for micro scratches or trying to detect the apple/banana kind of task, those are already pretty robust applications. Even for industrial applications like solder inspection, there are pretty good basic models with a good data basis.

Güldner: Many models tend to drift over time, so even if you do Plug&Play you have to re-adjust them after a while. Of course it depends on the setup, whether you can do everything remotely, and how you implement and then retrain the model. Furthermore, you have to look at the future and at very flexible production layouts. Plants are looking less and less like one another, and this decreased repeatability on the plant floor makes it complicated to simply reuse the models. The target should be – and we will get there – that more than 99% will be reusable. A good strategy to increase repeatability and scalability is a layered model: a core technology that identifies defects, and on top of that one layer suitable for electronics and another layer for the automotive industry. You should be able to extend parts of your model without having to touch the entire model.
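Güldner's layered strategy, a shared defect-detection core with swappable industry-specific layers on top, can be sketched in a few lines. Everything here is hypothetical: the feature extractor, the head names and the thresholds are invented stand-ins, chosen only to show the separation between a reusable core and small per-domain heads.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

def core_features(image: List[List[float]]) -> List[float]:
    """Stand-in for the shared core: reduces an image (here a 2D list
    of pixel values) to generic per-row intensity features."""
    return [sum(row) for row in image]

@dataclass
class InspectionModel:
    """Layered model: one core, many lightweight domain heads."""
    core: Callable[[List[List[float]]], List[float]]
    heads: Dict[str, Callable[[List[float]], str]]

    def classify(self, domain: str, image: List[List[float]]) -> str:
        return self.heads[domain](self.core(image))

# Hypothetical domain heads; only these would be retrained per industry.
def electronics_head(feats: List[float]) -> str:
    return "solder_defect" if max(feats) > 10 else "ok"

def automotive_head(feats: List[float]) -> str:
    return "paint_defect" if min(feats) < -10 else "ok"

model = InspectionModel(core=core_features,
                        heads={"electronics": electronics_head,
                               "automotive": automotive_head})
```

Adding a new industry means registering one more head in `model.heads`; the core, and every other domain's behavior, stays untouched.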

Routschka: There are so many factors that influence this issue. For example, how many parts suppliers do you have in production? It happens all the time that you go into production because the AI system is not working and you ask: 'Did you change anything?' 'No, no, everything is fine.' And then you figure out that the customer has a new supplier for the plastic parts. That is something users have to be aware of, and most of these factors also affected the classic computer vision we had in the past. This can't be an excuse, but it needs to be a continuous improvement, and that's what people are also expecting from AI usage: we develop over a certain period and get better every time.

Will autonomous vision systems be possible in the future thanks to AI?

Eckstein: Where do we have the fastest breakthroughs with AI? We had them where we had big data sets and clearly defined problems. With ChatGPT we had big data as text. The more you can generalize the problem, and the more people have exactly the same problem, the more you can move towards an autonomous vision system. For very generalized applications like surveillance or Deep OCR, you can have more and more autonomous systems that handle these applications in general. But for a very specific problem like spot welding, where there is a defect underneath and the AI should somehow see it through the weld, there will not be a generalized AI solution.

Routschka: I think for simple cases like code reading or OCR, there are already these kinds of self-learning AI tools, where you simply illuminate with different lighting settings and the system figures out which image is the best to start with. In more complex cases like quality inspection from different perspectives, it will take at least a few more years until we get there.

Güldner: There will be applications where we can actually do that in the near future. The whole AI or vision complexity will be taken away from the user – because of the shortage of workers – and it will work as if it were autonomous, but human AI experts will be in the loop all over the world, teaching, training and optimizing the models in the background.

Florian Güldner, Managing Director, ARC Advisory Group

Daniel Routschka, Sales Manager AI, IDS

Peter Droege, CEO, Maddox AI

Christian Eckstein, Business Developer, MVTec Software

Moderation: Dr.-Ing. Peter Ebert, Editor in Chief, inVISION
