At the Edge or in the Cloud?

Panel Discussion: Machine Vision 2025
On the second day of our virtual event, inVISION Days, experts from Amazon Web Services, EMVA, Intel and Xilinx joined Editor-in-Chief Dr.-Ing. Peter Ebert for a panel discussion about the future of machine vision.

Where will future vision evaluations take place? On CPUs, GPUs, FPGAs or ASICs?

Alexey Myakov (Image: TeDo Verlag GmbH)

Alexey Myakov (Intel): All of the above, plus some new names. It's going to be very use-case specific. You cannot win them all with one piece of hardware. There will be applications with strict response-time requirements that demand an FPGA, and there will be applications that require a massive cluster of GPUs. But there will also be a lot of applications that can run on your desktop. We have a huge brownfield of desktops out there, and somehow those are not perceived as a viable platform for AI; that is wrong. The way the democratization of AI in general is going, everybody can actually run AI applications on their own desktop.
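That desktop claim is easy to try out in practice. The sketch below uses Intel's OpenVINO toolkit as one possible route; OpenVINO and the model file name are our illustrative assumptions, not something named in the discussion:

```python
# Hedged sketch: running a vision model on a plain desktop CPU with
# Intel's OpenVINO toolkit (our example; the panel names no toolkit).
# "model.xml" is a placeholder for any model converted to OpenVINO IR.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")
compiled = core.compile_model(model, "CPU")  # no GPU or FPGA required

# Dummy input matching a typical 224x224 RGB classifier (assumed shape).
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([frame])[compiled.output(0)]
print(result.shape)
```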

Marco Diani (EMVA): I agree, the problem is always the application. And the main problem with using an FPGA or ASIC, from the user's point of view, is not the hardware itself but the interface. How can the user program the FPGA? Are there easy ways to do that? We have seen some attempts in the past, but you need deep knowledge of the FPGA as well as the GPU. So the main problem is not the hardware, which really is powerful enough for most of the applications we are seeing, but the interface to that hardware.

Jan Metzner (AWS): I would even go a little bit further: you need to look at the integration itself. You can't just look at the vision part; you want to consider the integration with other sensors as well. So it's not only about where it runs; it's the whole ecosystem that matters.

Quenton Hall (AMD/Xilinx): As was said before, the answer is clearly all of the above. Evaluations will take place on all of those types of products. There are more refined and better-defined silicon solutions available for different applications, and those applications will be best served by specific niche devices. This is perhaps what we think of as the democratization or commoditization of the technology. The other important thing here is not just how the semiconductor device lends itself to a specific application, but also how users, particularly in the machine vision market, will take advantage of the devices. It's one thing to assume that a device has the computational performance to perform a given task. But how does a company that is deploying these solutions act as an integrator? How do they incorporate the required customizations? Is it the camera vendor that supplies the machine learning models? What is the interface to deploy those machine learning models on the solution? That's an area where a lot of thought is going into what that ecosystem will look like in two or three years, from an OEM standpoint.

What are the technical requirements for Edge Vision or Cloud Vision? What do I need for efficiency, low latency or low bandwidth usage?

Metzner: If you train a machine learning model in the cloud, it's very efficient, so it doesn't really make sense to do that locally. For the inference part it depends, obviously. Do you want to stream thousands of cameras to the cloud? That's not really efficient. But we have customers running quality inspection with a few cameras in the cloud. Yes, it is possible, but it always depends on the big picture: what is efficient, and what are the use cases you're driving?

Diani: Normally, industrial machine vision applications require real time, but real time always depends on the application itself. If you have a real-time application for mechanical inspection of 300 components per second, the bandwidth will probably never be enough, so you always need something like an embedded vision system. But I see that, for example, a 5G network can be very useful internally within a company. This way you can have local computation with short latency.
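To see why streaming such an application to the cloud strains the network, consider a rough back-of-the-envelope calculation; the 1 MB-per-image figure is our assumption, not from the panel:

```python
# Rough bandwidth and time budget for the 300-components-per-second
# inspection example above. The image size is an assumed placeholder.
parts_per_second = 300
bytes_per_image = 1_000_000  # ~1 MB compressed image (assumption)

uplink_gbit = parts_per_second * bytes_per_image * 8 / 1e9
print(f"Sustained uplink needed: {uplink_gbit:.1f} Gbit/s")       # ~2.4 Gbit/s
print(f"Time budget per part: {1000 / parts_per_second:.1f} ms")  # ~3.3 ms
```

At roughly 2.4 Gbit/s of sustained uplink, keeping the computation local, as Diani suggests, is the only practical option at such line rates.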

Hall: If you consider the problem in a manufacturing facility, you may have conveyors that are running at literally thousands of feet per minute. So it may be necessary to inspect and divert products in real time while they are in freefall. And doing that requires the capture and analysis of a given image of a given object within milliseconds. Although 5G is widely touted as a solution to many of the current network latency problems, I think we're probably several years away from a time when 5G solutions will be mature. In the meantime, edge computing continues to march forward.
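The scale of that time budget is easy to check; in the illustrative sketch below, the belt speed and diverter window are assumed values, not figures from the panel:

```python
# Illustrative latency budget for diverting a part on a fast conveyor.
# Belt speed and diverter window are assumptions for illustration only.
belt_speed_m_s = 2000 * 0.3048 / 60  # 2,000 ft/min ~= 10.2 m/s
diverter_window_m = 0.05             # 5 cm in which to act (assumption)

budget_ms = diverter_window_m / belt_speed_m_s * 1000
print(f"Capture + inference + actuation must fit in ~{budget_ms:.0f} ms")  # ~5 ms
```

A round trip to a remote data center alone can consume tens of milliseconds, which is why such diversion loops stay at the edge.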

Metzner: I wonder why the question is always edge or cloud. To be honest, the cloud is coming to the edge anyway.

Myakov: I always use the following example in debates about edge or cloud: our eyes, nose, mouth and ears are in our heads and the brain is in the middle. Nature created us like that because it helps with latency. So it makes sense that computing should happen close to the sensors. But I think the trend for the next five years is going to be more towards edge computing and then at some point it’s going to find a healthy balance.

Are there already cloud solutions for quality data from production processes?

Jan Metzner (Image: TeDo Verlag GmbH)

Metzner: A year ago we released a service called Amazon Lookout for Vision that does exactly this. On the other hand, we also have hardware that can run the inference at the edge. So there are tools to do that both in the cloud and at the edge. Coming back to the point you mentioned before, it simply depends on the requirements and on how many data streams you have.
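For a sense of what the cloud path looks like in practice, here is a minimal sketch of scoring one inspection image with Lookout for Vision via the AWS SDK for Python; the project name, model version, region and file name are placeholders:

```python
# Minimal sketch: checking a single production image for anomalies with
# Amazon Lookout for Vision. All identifiers below are placeholders.
import boto3

client = boto3.client("lookoutvision", region_name="eu-central-1")

with open("part_0001.jpg", "rb") as image:
    response = client.detect_anomalies(
        ProjectName="pcb-quality-inspection",  # hypothetical project
        ModelVersion="1",
        ContentType="image/jpeg",
        Body=image.read(),
    )

result = response["DetectAnomalyResult"]
print(result["IsAnomalous"], result["Confidence"])
```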

Hall: It really does depend on the scale of the solution. From a high-level perspective, large companies have for many years developed their own clusters for in-house computing. Smaller companies, on the other hand, are looking at cloud computing solutions because of cost. In that context, where cloud solutions fit, they are finding homes. I don't think that has to be limited to training. In some cases inference, statistical reporting, quality metrics, production metrics, and the reporting of all of that data make a lot of sense in the cloud. So it can go both ways, but it really depends on the specific use case.

Diani: We are speaking about AI applications for machine vision, but in my experience AI always has to be combined with the standard machine vision tools. So I believe that a mix of cloud computing and local computing can solve most applications. The learning is certainly a phase in which you need a lot of power, but small CPUs with a lot of power are already available. Still, AI has to be mixed with the standard tools of machine vision. So combined tools, local and cloud, could be the best solution in the near future.

