At the Edge or in the Cloud?

Panel Discussion: Machine Vision 2025
On the second day of our virtual event inVISION Days, experts from Amazon Web Services, EMVA, Intel and Xilinx joined Editor-in-Chief Dr.-Ing. Peter Ebert for a panel discussion about the future of machine vision.

Where will future vision evaluations take place? On CPUs, GPUs, FPGAs or ASICs?

Alexey Myakov (Image: TeDo Verlag GmbH)

Alexey Myakov (Intel): All of the above plus some new names. It's going to be very use case specific. You cannot win them all with one piece of hardware. There will be applications with strict response time requirements which will demand an FPGA, and there will be some applications which will require a massive cluster of GPUs. But there will also be a lot of applications which can run on your desktop. We have a huge brownfield of desktops out there, and somehow those are not perceived as a viable platform for AI, and that is wrong. The way the democratization of AI in general is going, everybody can actually do AI applications on their own desktop.

Marco Diani (EMVA): I agree, the problem is always the application. And the main problem using FPGAs or ASICs from the user's point of view is not the hardware itself but the interface. How can the user program the FPGA? Are there easy ways to do that? We saw some attempts in the past, but you need deep knowledge of the FPGA as well as the GPU. So the main problem is not the hardware, because that really is powerful enough for most applications that we are seeing, but the interface to the hardware.

Jan Metzner (AWS): I would even go a little bit further: you need to look at the integration itself. You can't just look at the vision part; you want to consider the integration with other sensors as well. So it's not only about where it runs but about the whole ecosystem.

Quenton Hall (AMD/Xilinx): As was said before, the answer is clearly all of the above. Evaluations will take place on all of those types of products. There are more refined and better-defined silicon solutions available for different applications, and those applications will be best served by specific niche devices. This is maybe what we think of as the democratization or commoditization of the technology. The other thing that is important here is not just how the solution or the semiconductor device lends itself to the specific application, but also how users, particularly in the machine vision market, will take advantage of the devices. It's one thing to assume that a device has the computational performance to perform a given task. But how does a company that is deploying these solutions act as an integrator? How do they incorporate the required customizations? Is it the camera vendor that supplies the machine learning models? What is the interface to deploy those machine learning models on the solution? That's an area where there is a lot of thought going into what that ecosystem is going to look like in two or three years, from an OEM standpoint.

What are the technical requirements for edge vision or cloud vision? What do I need for efficiency, low latency or low bandwidth use?

Metzner: If you train a machine learning model in the cloud, it's very efficient, so it doesn't really make sense to do that locally. For the inference part it depends, obviously. Do you want to stream thousands of cameras to the cloud? That's not really efficient. But we have customers running quality inspection with a few cameras in the cloud. Yes, it is possible, but it always depends on the big picture: what is efficient and what are the use cases you're driving.

Diani: Normally, industrial machine vision applications require real time, but real time always depends on the application itself. If you have a real-time application for mechanical inspection of 300 components per second, the bandwidth will probably never be enough, so you always need something like an embedded vision system. But I see that, for example, a 5G network can be very useful internally within a company. This way you can have local computation with short latency.
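To make the bandwidth argument concrete, here is a rough back-of-the-envelope estimate in Python. The 300 parts per second come from the example above; the sensor resolution and bit depth are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope bandwidth estimate for the inspection rate in
# Diani's example. Sensor resolution and bit depth are assumed values.
parts_per_second = 300        # inspection rate from the example above
image_megapixels = 2.0        # assumed 2 MP sensor
bytes_per_pixel = 1           # assumed 8-bit monochrome

raw_mb_per_s = parts_per_second * image_megapixels * bytes_per_pixel
print(f"Raw image data: ~{raw_mb_per_s:.0f} MB/s "
      f"(~{raw_mb_per_s * 8 / 1000:.1f} Gbit/s)")
# Roughly 600 MB/s (~4.8 Gbit/s) of uncompressed data, which would
# saturate a typical uplink; hence the case for evaluating at the edge.
```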

Hall: If you consider the problem in a manufacturing facility, you may have conveyors that are running literally thousands of feet per minute. So it may be required to inspect and divert products in real time while they are in freefall, and in order to do that it requires the capture and analysis of a given image of a given object within milliseconds. Although 5G is widely touted as providing a solution to many of the current network latency problems, I think that we're probably several years away from a time when 5G solutions will be mature. In the meantime, edge computing is continuing to march forward.
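A rough latency budget illustrates the point; the belt speed and the size of the reject window below are assumptions for illustration, not figures given in the discussion.

```python
# Rough latency budget for a fast conveyor, as in Hall's example.
# Belt speed and reject-window size are illustrative assumptions.
belt_speed_ft_per_min = 2000                         # "thousands of feet per minute"
mm_per_ms = belt_speed_ft_per_min * 304.8 / 60_000   # convert ft/min to mm/ms

reject_window_mm = 100      # assumed distance between camera and diverter
budget_ms = reject_window_mm / mm_per_ms

print(f"Belt moves {mm_per_ms:.1f} mm per millisecond")
print(f"Capture, inference and actuation must finish in ~{budget_ms:.0f} ms")
# At roughly 10 ms end to end, a single cloud round trip can already blow
# the budget, which is why this class of task stays at the edge today.
```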

Metzner: I wonder why the question is always edge or cloud? To be honest, the cloud is coming to the edge anyway.

Myakov: I always use the following example in debates about edge or cloud: our eyes, nose, mouth and ears are in our heads and the brain is in the middle. Nature created us like that because it helps with latency. So it makes sense that computing should happen close to the sensors. But I think the trend for the next five years is going to be more towards edge computing and then at some point it’s going to find a healthy balance.

Are there already cloud solutions for quality data from production processes?

Jan Metzner (Image: TeDo Verlag GmbH)

Metzner: A year ago we released one of our services called Amazon Lookout for Vision that does exactly this. On the other hand, we also have hardware that can do the inference on the edge. So there are tools to do that both in the cloud and on the edge. Coming back to the point you mentioned before, it just depends on the requirements and how many data streams you have.
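As a concrete illustration of the cloud-side inspection path Metzner mentions, a minimal sketch of calling Amazon Lookout for Vision from Python via boto3 could look as follows; the project name, model version, region and image file are placeholders, not values from the discussion.

```python
# Minimal sketch: cloud-side anomaly detection with Amazon Lookout for
# Vision via boto3. Project name, model version, region and image file
# are hypothetical placeholders.
import boto3

client = boto3.client("lookoutvision", region_name="eu-central-1")

with open("part_0001.jpg", "rb") as image:
    response = client.detect_anomalies(
        ProjectName="surface-defect-inspection",  # hypothetical project
        ModelVersion="1",
        Body=image.read(),
        ContentType="image/jpeg",
    )

result = response["DetectAnomalyResult"]
print("Anomalous:", result["IsAnomalous"], "confidence:", result["Confidence"])
```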

Hall: It really does depend on the scale of the solution. From a high-level perspective, large companies have for many years developed their own clusters for in-house computing. Smaller companies, on the other hand, are looking at cloud computing solutions because of cost. In that context, where cloud solutions fit, they are finding homes. I don't think that has to be limited to training. In some cases inference or statistical reporting, quality metrics, production metrics, reporting of all of that data makes a lot of sense in the cloud. So it can go both ways, but it really depends on the specific use case.

Diani: We are speaking about AI applications for machine vision, but in my experience AI always has to be combined with the standard tools of machine vision. So I believe that a mix of cloud computing and local computing can solve most of the applications. The learning for sure is a phase in which you need a lot of power, but there are small CPUs with a lot of power already available. Still, I believe that has to be mixed together with the standard tools of machine vision. So combined tools, local and cloud, could be the best solution in the near future.

Where do you see the main benefits of cloud-based solutions for automation-related vision tasks? What barriers are there to overcome to gain customer acceptance for those solutions?

Myakov: Privacy laws and concerns are a big problem. When you transfer data from the edge to the cloud, it actually opens up your data to other risks as well, like theft or copying. But not all data is privacy-sensitive, and yet we have to take the regulations into account when designing our solutions.

Hall: There is a tendency within the industry to believe that edge computing or on-premises computing is more secure, maybe because it doesn't add the element of the transfer to the cloud. However, there is a bit of a fallacy here. In the context of devices which developers believe will be used in a closed network, they decide not to take advantage of many of the security features that are available in modern devices. So I think the problem with security is not so much one of edge versus cloud as it is the overall thinking of the developer when they consider what it is exactly that they're going to implement, whether it's an edge device or a cloud connection.

Will Software-as-a-Service be sold more than vision systems and components in the future? And who will offer the new vision systems?

Metzner: The question in the end is who will build the solution, who is offering it and who is integrating it. A solution consists of many different components, so it's a broad ecosystem in which we all need to work together.

Myakov: Different companies have different specialties, and I don't think that we are going to see one company rule them all in the near future. There are some niche examples where such power is concentrated in one hand, but there are very few of those. Typically ecosystems are multilayered: you evolve, you focus on something and then you excel in it.

Diani: What I have seen over the last 10 years is a big revolution. The machine vision market is expanding upward to very high-speed and high-resolution applications, but also downward to applications that don't need a lot of power. So I believe that there is space for everyone.

Quenton Hall (Image: TeDo Verlag GmbH)

Hall: The reality is that the problems that we have today are best solved through a marriage of these different technologies. For example, if we want to take novel data from any camera in any geographic region, push it back up to the cloud, and iteratively develop a more robust machine learning model which can then be redeployed to all those devices without anybody plugging a USB stick into a camera, how do we get there? How do we work with OEMs? How do we work with system integrators? How do we work with end customers to provide solutions like this?
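One way to picture the redeployment loop Hall describes is an edge device that periodically polls a cloud model registry and swaps in newer weights without manual intervention. The sketch below is purely illustrative; the registry URL, manifest fields and file layout are hypothetical, and no specific vendor API is implied.

```python
# Illustrative sketch of the retrain-and-redeploy loop: an edge device
# polls a cloud model registry and downloads newer weights when they
# appear. The registry URL, manifest fields and file layout are hypothetical.
import json
import time
import urllib.request

REGISTRY_URL = "https://models.example.com/defect-detector/latest.json"  # hypothetical
VERSION_FILE = "model_version.txt"

def current_version() -> str:
    try:
        with open(VERSION_FILE) as f:
            return f.read().strip()
    except FileNotFoundError:
        return "none"

def check_for_update() -> None:
    # Ask the registry which model version is current and where to fetch it.
    with urllib.request.urlopen(REGISTRY_URL) as resp:
        manifest = json.load(resp)
    if manifest["version"] != current_version():
        urllib.request.urlretrieve(manifest["weights_url"], "model.onnx")
        with open(VERSION_FILE, "w") as f:
            f.write(manifest["version"])
        # A real device would now reload its inference runtime with model.onnx.

if __name__ == "__main__":
    while True:
        check_for_update()
        time.sleep(3600)  # poll once an hour
```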

Myakov: The democratization of deployment is a missing piece right now. You can train a model, but putting it on a camera is complex, because two thirds of the market globally is run with a very closed ecosystem approach. So essentially there are cameras, but there's no way for anybody to deploy on them. In machine vision the situation is different, but it's a niche market. If you think globally, that's what is actually going to impede the progress. And I totally agree that training in the cloud and deploying anywhere you want is what's going to drive innovation. I don't think we have uncovered all of the use cases out there just yet. So having the ability to deploy on any device would be important and instrumental in driving that innovation, but there are some very objective barriers to that.

What is your best estimate as to when it will be completely the norm to store and share quality and production data via the cloud?

Metzner: This is already the norm. Therefore, if you haven't done it yet, you need to prepare for it. There are barriers that you need to overcome, some technical challenges, let's say on the networking side. But it can be done. The real problem is in the mind: if you are afraid of new things, you will not change. This is actually the barrier that we are facing most of the time.

Myakov: Things which require low latency will stay at the edge, and that's not going to change. Only things which are practical for the cloud will go to the cloud. And I agree that security in the cloud done by professionals is better than security at the edge done by non-professionals. But again, there will be regulatory barriers, whether real or perceived. So it's going to be a healthy mix of edge and cloud. I think probably by 2025-2027 the ratio between cloud and the edge is going to be clear.

Marco Diani (Image: TeDo Verlag GmbH)

Diani: You need to have your mind open to new technology. The cloud is coming and a lot of things are changing, but if you don't have an open mind you will not accept these changes. We need people who believe in the technology. The market is growing. The technology is growing. I know that machine vision is a niche market, but it is a very interesting market, completely different from many other markets. The only thing that I don't see now in cloud computing is some standardization for the acquisition of images. I believe that big companies can drive this standardization.

Hall: I think it is clear from this discussion that there is this unique opportunity to collect and correlate data and make that data available in the cloud. So we have this interesting new challenge ahead of us: how do we marry the capabilities of the cloud in terms of data collection, data annotation and model training, and connect that with edge deployment? I think all of the pieces are there, and in the next three to five years we will achieve a state that is more clearly defined.

Participants

Jan Metzner, Specialist Solutions Architect Manufacturing, AWS

Marco Diani, CEO of Image S, EMVA

Alexey Myakov, Chief Computer Vision Advocate, Intel

Quenton Hall, System Architect, AMD/Xilinx
