Panel Discussion: Embedded Vision Everywhere?

Embedded Vision has been discussed for years, but where is the technology already in use? What new developments and applications are there, and what role will AI play in the future? These questions were discussed with manufacturers (Allied Vision, Congatec, Cubemos, Vision Components) and users (Still, CST) at Embedded World 2020.

"Through applying analytics to the images, the customer is figuring out a way to capture more value." Jason Carlson, Congatec (Image: Nürnberg Messe / Frank Boxler)

inVISION: What is Embedded Vision?

Jan-Erik Schmitt (Vision Components): Embedded Vision is a specialised solution to realise a specific task with low power consumption.

Jason Carlson (Congatec): Congatec is coming from the embedded computing point of view and is seeing more and more vision systems being combined with embedded computing.

Dr. Christopher Scheubel (Cubemos): It is the marriage between the processor and the imaging module on an embedded device.

Gion-Pittchen Gross (Allied Vision): An Embedded Vision system is optimised for a certain task, and that is what it does really well, in contrast to a PC-based vision system, which can do a lot of different tasks. Furthermore, we usually have constraints in size or power, and very often in cost.

Dr. Michael Bach (CST): On the application side, the factors that define Embedded Vision are primarily the imaging sensor, the processing unit and the interface. The smaller the footprint, the more the system is embedded. For us power consumption is not so much an issue.

Bengt Abel (Still): Embedded Vision is a camera system that enriches the images with additional information and transmits this information to our units for robotic or assistant systems in real time.

inVISION: Is it really something new?

Schmitt: Embedded Vision isn’t something new; it started 25 years ago, first with smart cameras, then with vision sensors: a processing unit combined with an imaging unit. Today, if we look at IoT, smart cities or ITS, the applications have spread into much broader areas.

Carlson: The trend is driven by edge computing. A lot of analytics used to be done in the cloud, but now, with what’s happening at the price point, we can do this at the edge. The biggest data driver is related to vision and doing analytics right there on the spot. An example is police officers wearing body-worn cameras when something happens in a crowd. They have to redact everyone’s face except for the one bad actor. Today, this can be done in real time, automatically, on the device the police officer is wearing.
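
To make the example concrete, here is a minimal sketch of what such on-device redaction can look like, assuming OpenCV and its bundled Haar face cascade; it is an illustration, not the actual body-camera software. Every detected face is blurred except an optional region that should stay visible.

```python
# Minimal sketch of on-device face redaction (illustrative assumption,
# not a specific product). Faces are found with OpenCV's bundled Haar
# cascade and blurred; one optional box can be left untouched.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def redact_faces(frame, keep_box=None):
    """Blur every detected face, except faces whose centre lies inside keep_box."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        if keep_box is not None:
            kx, ky, kw, kh = keep_box
            cx, cy = x + w // 2, y + h // 2
            if kx <= cx <= kx + kw and ky <= cy <= ky + kh:
                continue  # the one person who must stay visible
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0)
    return frame
```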

Scheubel: The consumer sector has triggered a movement. We are now in the downward spiral of prices and in an upward spiral of performance. You get the same components with the same performance for a fraction of the price.

Bach: We have the same footprint size that was available ten years ago, but these days orders of magnitude more processing power is available.

Abel: The distribution of compute power is becoming more and more important. We have dedicated chips for several tasks. This allows us to use smaller computing units to control the whole process.

inVISION: Is Embedded Vision a rather non-industrial topic?

Schmitt: As we know from history, Embedded Vision is open to any type of application, like code reading or quality control, and newer ones like facial recognition, cashier-free stores or ITS. The big difference is that 20 years ago Embedded Vision was only seen by specialised people, and now everybody hears about it because it is driven by the consumer market.

Carlson: Through applying analytics to the images, the customer is figuring out a way to capture more value. They can do something more efficient and more intelligent. For these companies the payback is in months or less than one year.

Scheubel: The origins of Embedded Vision are certainly in the consumer sector. However, due to the increasing performance and reliability of components, Embedded Vision can also be used for industrial applications.

Gross: Because processing power has increased so much, even x86-based Windows Embedded Vision systems can be used nowadays, e.g. in automatic passport control systems at airports. There, one camera inspects the passport and another camera checks your face, compares it to what is actually on your passport and verifies the biometrics.

Bach: For us, Embedded Vision means basically delivering products and services to the industry. We definitely appreciate the presence of products like the Raspberry Pi, but it is really hard to make a Raspberry Pi part of a product that will be delivered to the industry.

Abel: Everybody in the logistics sector wants a transparent supply chain. Customers like to see where their goods are and in what condition they are. Embedded Vision is the enabler for this.

inVISION: What are you expecting from an Embedded Vision system?

Bach: For us, the key requirement for an embedded system is definitely reliability. We can hardly afford to have systems out in the field that constantly crash or are not available for the tasks they were designed for.

Gross: Embedded Vision systems are usually purpose-built and therefore quite customized. A lot of work is involved in getting them up and running. Therefore, we try to make our cameras as easy as possible to integrate. We standardize where possible, so that, for example, all cameras are controlled with the same programming interface and offer the same functionality.
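
As an illustration of what a common programming interface can mean in practice, here is a minimal, hypothetical sketch (not Allied Vision's actual API): every camera model implements the same abstract contract, so application code stays unchanged when the hardware changes.

```python
# Hypothetical unified camera interface, for illustration only.
from abc import ABC, abstractmethod

class Camera(ABC):
    @abstractmethod
    def open(self) -> None: ...
    @abstractmethod
    def set_exposure(self, microseconds: int) -> None: ...
    @abstractmethod
    def grab_frame(self) -> bytes: ...
    @abstractmethod
    def close(self) -> None: ...

def capture_one(cam: Camera, exposure_us: int = 10_000) -> bytes:
    """Application code written once, reusable with any Camera implementation."""
    cam.open()
    try:
        cam.set_exposure(exposure_us)
        return cam.grab_frame()
    finally:
        cam.close()
```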

Abel: We like to have a low price because nobody wants to pay for logistics. But we need standards in the interfaces to pass the information to an overall system, e.g. a fleet or warehouse management system. We have standards for image transport, but we don’t see standards for Embedded Vision systems at the moment.

Scheubel: We definitely need standardisation. To realise an Embedded Vision application, many components need to seamlessly interact with each other. It begins with the embedded processor and its operating system (OS). On this processor, or a connected hardware accelerator, the neural networks that we develop must be implemented. To generate the images needed for the Embedded Vision task, a compatible imaging module must be integrated. All these components must be seamlessly connected, therefore standardisation could facilitate development a lot.

Carlson: We believe in workload consolidation. Once you have got that CPU power, you can just take all this other stuff and put it in there. So in the world of multicore CPUs you can have a six-core CPU where one core is used as a gateway and four cores are doing analytics.
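
A minimal sketch of this kind of core partitioning, assuming a Linux system with at least five cores and plain Python; the gateway and analytics tasks are placeholders, not any panelist's product.

```python
# Sketch of workload consolidation: one process pinned to core 0 as the
# "gateway", four worker processes pinned to cores 1-4 for analytics.
import os
import multiprocessing as mp

def gateway():
    os.sched_setaffinity(0, {0})          # restrict this process to core 0
    print("gateway running on cores", os.sched_getaffinity(0))
    # ... forward data between the vision pipeline and the outside world ...

def analytics(worker_id, core):
    os.sched_setaffinity(0, {core})       # restrict this worker to one core
    print(f"analytics worker {worker_id} on cores", os.sched_getaffinity(0))
    # ... run image analytics on frames pulled from a shared queue ...

if __name__ == "__main__":
    procs = [mp.Process(target=gateway)]
    procs += [mp.Process(target=analytics, args=(i, core))
              for i, core in enumerate(range(1, 5))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```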

inVISION: Is energy consumption a topic?

Schmitt: We have a lot of customers who have stand-alone solutions that are battery or solar powered and then of course power consumption is an issue.

Scheubel: Even if there is enough power, heat can still be a problem in the embedded processor.

Gross: Power consumption is always coupled to heat generation. If you have a small mobile device that you hold in your hand, it should not get warmer than 40°C. A low power consumption helps to keep the systems cool, which provides better image quality because the noise you have in the images is lower.

Abel: Power consumption is not the problem for us.

inVISION: Embedded systems usually use Linux, whereas machine vision systems use Windows. Is there a compatibility problem?

Carlson: If your vision system has a six-core chip, you can run four different OSes on it: one of them Windows, one a real-time OS, or whatever flavour you want. This is absolutely doable workload consolidation. You can have a Microsoft GUI, which is not typically known for real-time performance, alongside a real-time OS. Collaborative robots are a great example, where you have a vision system but also provide real-time control of the robot.

Gross: What makes Linux interesting on embedded platforms is that it can be customized much more than Windows systems.

Bach: Even if you have different platforms, specifications or tunings of the Linux kernel, it is always the same interface. You can actually copy and paste a lot of the actual workflow and service infrastructure to the target platform. It’s all given by the same OS, which only varies a little in configuration. Our development team works on Windows machines and our platforms are primarily Linux.

Abel: If you are looking at an assistant system, it does not care whether it is a Windows or Linux system. We are looking for a hardware interface: digital I/O, CANopen or Ethernet with a standardized protocol. But if we are looking at the robotics side, this is an issue. If these systems are not really compatible, you will get a huge jitter in the communication. This may lead your robot to move somewhere you don’t want it to move.

inVISION: Machine vision is mainly based on x86 platforms, whereas embedded systems use different platforms. Is this a problem?

Scheubel: We see both in the market, x86 and ARM processors. The choice of processing platform mainly depends on the history of the customer and on what is easiest to integrate into their system environment.

Carlson: The software the customer wants to run for the application usually dictates whether it is an ARM or an x86 system. If they have an x86 history and legacy to maintain, they are very likely to stay with Windows. If it is an entirely new application, there is no need for Windows and they are at the lower performance end, they are probably going to lean more towards ARM.

Schmitt: It is not only a question of ARM versus x86, but also of combinations with GPUs and FPGAs. It’s a bit more complex, but in the end it depends on the application and the taste of the developers.

Abel: We are working on x86, but we are also seeing all the edge GPUs. For a developer, programming both systems is usually not that different. But if we look at machine learning, that is a completely different development process.

inVISION: Will AI play an important role?

Schmitt: We do have applications where customers use AI, but at the moment it is limited to a few sectors, such as ITS.

Carlson: Our customers don’t want to deploy AI just for the sake of AI. They want to figure out a new way to deliver value. At the low end you can have a concept like the Sparse Modelling from Hacarus, and at the high end a deep learning concept. That whole range is going to happen at the edge.

Scheubel: AI is a tool that we apply to solve customer problems. It opens the door to many applications that could not be solved with classical computer vision.

Gross: AI should happen on the embedded board, not on the camera. The AI training will usually happen in the cloud, where you have virtually unlimited processing power. The deployment is at the edge, where you just run the trained models.

Scheubel: But in some cases, there is the possibility that processing (inference) can be done very close to the camera. You can place dedicated hardware accelerators right behind the image processor. Also, our experience is that it’s cheaper to train neural networks on-premise with our own GPU clusters than in the cloud. Entry barriers are very low in the cloud and you can scale very fast, but it will also get expensive very fast. It is important to keep in mind that for training you need much more advanced hardware than for deployment and inference itself.
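
A minimal sketch of that split, assuming the model was trained elsewhere and exported to ONNX, and that ONNX Runtime is available on the embedded device; the file name and input shape are hypothetical.

```python
# Sketch of edge inference: load a model trained in the cloud or on-premise
# and run it locally on preprocessed camera frames.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("classifier.onnx",
                               providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

def infer(frame: np.ndarray) -> np.ndarray:
    """Run one inference pass on a preprocessed frame (NCHW, float32)."""
    return session.run(None, {input_name: frame})[0]

# Example call with a dummy 224x224 RGB frame
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
print(infer(dummy).shape)
```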

Bach: At the end of the day the customer who expects a solution buys a black box. For them it doesn’t really matter whether it’s a traditional algorithm or an AI.

Abel: If we want to support our operators or robots, we need a huge environmental understanding. This can be done with classical computer vision, or we need a deep learning method for it. But we have to provide robust products, especially our forklifts as well as their assistant systems. For machine learning it is hard to see where the boundaries of the machine learning algorithm are. That’s a huge problem.

Scheubel: If we apply AI for detection or classification tasks, we always combine it with classical computer vision. Only with such a hybrid approach can the highest levels of accuracy be reached. Subsequently, we integrate the AI model combined with computer vision into the software environment of our customers.
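
A minimal sketch of such a hybrid, assuming OpenCV and any neural-network detector that returns bounding boxes with scores; the edge-density check and its threshold are illustrative assumptions, not Cubemos' actual pipeline.

```python
# Sketch of combining a learned detector with a classical CV sanity check:
# detections are kept only if the cropped region actually contains structure.
import cv2
import numpy as np

def plausible_region(image: np.ndarray, box: tuple, min_edge_ratio=0.02) -> bool:
    """Classical CV check: reject boxes whose crop contains almost no edges."""
    x, y, w, h = box
    crop = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(crop, 50, 150)
    return edges.mean() / 255.0 >= min_edge_ratio

def filter_detections(image, detections):
    """Keep only neural-network detections that pass the classical check.
    `detections` is a list of ((x, y, w, h), score) pairs from any detector."""
    return [(box, score) for box, score in detections
            if plausible_region(image, box)]
```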

Schmitt: AI can help in some applications where you do not need 100% accuracy. But if we are talking about machine vision in industrial applications, people sometimes expect 100%.

inVISION: What are the limits of AI?

Carlson: The only limit is the price/performance ratio. But we have seen systems where customers want to combine cameras and lidar and do AI with GPUs and CPUs in pretty extensive setups. So I don’t see any real limitations.

Scheubel: When we train models for customers, it’s very important that the model and the data set are well balanced. If they are unbalanced, then the models and the AI will show unexpected detections. Another limitation is certification. We are from Germany and we want to certify everything, but it’s quite hard to certify AI-based software.
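
A minimal sketch of how such an imbalance can be detected and compensated with inverse-frequency class weights; the threshold and weighting scheme are illustrative assumptions, not Cubemos' method.

```python
# Sketch of checking a training set for class imbalance and deriving
# per-class weights to compensate during training.
from collections import Counter

def class_weights(labels, imbalance_warning=10.0):
    """Return inverse-frequency weights per class and warn on strong imbalance."""
    counts = Counter(labels)
    total = len(labels)
    if max(counts.values()) / min(counts.values()) > imbalance_warning:
        print("warning: data set is strongly imbalanced:", dict(counts))
    # inverse-frequency weighting, normalised so the average weight is 1.0
    return {c: total / (len(counts) * n) for c, n in counts.items()}

# Example: 900 "ok" samples vs. 30 "defect" samples
print(class_weights(["ok"] * 900 + ["defect"] * 30))
```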

www.alliedvision.com

www.congatec.com

www.cst-gmbh.eu

www.cubemos.com

www.still.de

www.vision-components.com

