Vision Tech in the Digital Lifecycle: AR & Fingerprints

Vision Tech in the Digital Lifecycle - Part 2/2
The article series describes the role that Vision Tech currently plays in the digital lifecycle and will play in the future. Part 1 (inVISION 2/23) covered CAD, Metrology & Vision, QIF (Quality Information Framework), and Synthetic Data. Part 2 covers Process Information from Vision, AR/VR & Maintenance, Recycling & Disposal, and Traceability & Identification.
Image 1 | The chart shows the increase in data volume which is produced moving from measuring a single point on a single part, through multiple points on a part, to multiple batches, multiple tools, and ultimately multiple facilities. As all this data is captured, the opportunity for extracting more valuable insights increases much more rapidly than the data volume itself. Image: Vision Ventures

Process Information from Vision

One topic which is of great interest to many companies in the industrial sector is how to use the multitude of data captured across manufacturing processes to gain greater insight, productivity, or efficiency. A conceptual outline of value creation and capture is shown in Image 1. Horizontally, the chart shows the increase in data volume produced when moving from measuring a single point on a single part, through multiple points on a part, to multiple batches, multiple tools, and ultimately multiple facilities. As all this data is captured, the opportunity for extracting more valuable insights increases much more rapidly than the data volume itself. For example, with a single measurement of a single point on a part, the only information available is whether that individual point is OK, which provides limited value. Conversely, with access to data from across the enterprise, valuable decisions can be made on whether entire process lines, factories, and enterprises are functioning as required.
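To make this concrete, the following is a minimal sketch (with placeholder column names and data) of how the same point-level pass/fail records become more valuable as they are rolled up by tool and facility:

```python
import pandas as pd

# Illustrative point-level inspection records; columns and values are
# placeholders, not a real plant data model.
measurements = pd.DataFrame({
    "facility": ["A", "A", "A", "B", "B", "B"],
    "tool":     ["T1", "T1", "T2", "T3", "T3", "T3"],
    "batch":    [1, 1, 2, 1, 1, 2],
    "part_id":  [101, 102, 103, 201, 202, 203],
    "in_spec":  [True, True, False, True, False, False],
})

# Point level: one part, one answer -- limited value on its own.
print(measurements.loc[measurements.part_id == 101, "in_spec"].item())

# Tool level: yield per tool can reveal a drifting process.
print(measurements.groupby(["facility", "tool"])["in_spec"].mean())

# Enterprise level: facility yields support line- and factory-level decisions.
print(measurements.groupby("facility")["in_spec"].mean())
```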

An important point is that the opportunity for value capture and creation rests on having access to data from as many points across processes as possible, because the value creation opportunity increases dramatically with scale. Machine vision solutions often act as the primary data capture device, frequently in conjunction with other sensors, and it can be difficult to either own or have access to the wider process data. However, several companies are looking at how they can better leverage the data to which they do have access.

Although larger companies with greater process coverage have a natural advantage, there are moves by earlier-stage companies to use data from multiple locations or sources to generate greater value for their customers. For example, Eigen Innovations (www.eigen.io) in Canada is seeking to directly address access to additional data by using data feeds from beyond its own vision inspection systems and edge algorithms. By using multiple data sources, the intention is to provide a much more accurate root-cause assessment or prediction of potential issues. Similarly, Elementary Robotics (www.elementaryrobotics.com) has recently launched its 'Analyze' platform, which is intended to provide advanced visualization and control of processes through the capture of data from multiple stations across parts, products, batches, and tools.

AR/VR & Maintenance

Augmented and Virtual Reality applications are also highly visible in both industrial and consumer markets, with the important aspect that AR/VR output is intended for human, rather than machine, consumption. Within industry this is another area where CAD can be used as the underlying information to enable a vision inspection application. For example, companies such as Visometry (www.visometry.com), CDM Tech (www.cdmtech.de), and PTC (www.ptc.com) each provide software libraries and solutions which take an input CAD model as the ground truth and then use a video feed to identify a physical example of the object in the real world and track its position in 3D. Once an object is registered and tracked, multiple information overlays can be provided to the user, including highlighted geometric deviations, missing sub-assemblies, or information on hidden parts for training and maintenance operations. These solutions are mainly intended to aid human inspection in a mobile environment, where 3D object detection relies on the motion of a camera contained in a mobile device. The detection and tracking algorithms are generally not based on neural networks and are heavily optimised to run on the compute platforms available in mobile devices to achieve real-time capability.
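The core registration step can be illustrated with standard tools. Below is a minimal sketch assuming known 2D-3D correspondences between CAD model points and image features (which the commercial libraries establish automatically, typically via edge-based tracking); the point coordinates and camera intrinsics are placeholders:

```python
import numpy as np
import cv2

# Hypothetical 3D points taken from the CAD model (object coordinates, metres)
# and their detected 2D locations in the current video frame (pixels).
model_points = np.array([
    [0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.1, 0.05, 0.0],
    [0.0, 0.05, 0.0], [0.05, 0.025, 0.03], [0.02, 0.01, 0.03],
], dtype=np.float64)
image_points = np.array([
    [320, 240], [480, 238], [478, 330], [322, 332], [400, 270], [355, 260],
], dtype=np.float64)

# Intrinsics from a prior camera calibration (placeholder values).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume an undistorted feed

# Estimate the object pose in camera coordinates from the correspondences.
ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)

if ok:
    # Re-project hidden CAD geometry into the frame for an AR overlay,
    # e.g. a sub-assembly point that is occluded on the real part.
    hidden_point = np.array([[0.05, 0.025, -0.02]])
    overlay_px, _ = cv2.projectPoints(hidden_point, rvec, tvec, K, dist)
    print("Draw hidden-part marker at pixel:", overlay_px.ravel())
```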

Another human-centric vision application is the identification of spare parts later in the product’s operational lifetime. For example, Nyris (www.nyris.io) has developed a visual search engine to accurately identify a part in the field. The starting point is a 3D CAD model, typically provided by the customer, which the Nyris software uses as a basis for generating synthetic training images for a part-identification neural network. Runtime inference is then deployed as a search capability through a software-as-a-service model: the user simply takes an image of a part with a mobile phone, the image is recognised by a network running in the cloud, and the correct part identification is returned to the user.
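Client-side, such a SaaS search reduces to a simple image upload. The sketch below shows the general pattern; the endpoint URL, credential, and response fields are hypothetical placeholders, not the actual Nyris API:

```python
import requests

API_URL = "https://api.example.com/v1/part-search"  # placeholder endpoint
API_KEY = "your-api-key"                            # placeholder credential

# Upload a phone photo of the part for identification in the cloud.
with open("spare_part_photo.jpg", "rb") as f:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": ("spare_part_photo.jpg", f, "image/jpeg")},
        timeout=10,
    )
response.raise_for_status()

# Hypothetical response schema: ranked part matches with confidence scores.
for match in response.json().get("matches", []):
    print(match["part_number"], match["score"])
```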

An interesting technical aspect in the development of CAD-to-AI training pipelines is the topic of material transfer. Typically, a CAD model will not contain a detailed description of how a part's material or texture should look, potentially only identifying the material type. Style transfer to CAD models is under active development by many institutes and several companies to ensure the link between the CAD model and a physically realistic image, and it is likely that this topic will increase in visibility as the use of synthetic data becomes more widespread.
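As a very crude illustration of the gap being addressed, the sketch below matches the colour statistics of an untextured CAD render to a reference photo of the target material; real pipelines use learned style transfer rather than this simple moment matching, and the file names are placeholders:

```python
import numpy as np
from PIL import Image

# Untextured CAD render and a photo of the desired material (placeholders).
render = np.asarray(Image.open("cad_render.png").convert("RGB"), dtype=np.float64)
material = np.asarray(Image.open("brushed_steel_photo.jpg").convert("RGB"), dtype=np.float64)

matched = np.empty_like(render)
for c in range(3):
    # Shift and scale each channel so its mean/std match the material photo.
    r, m = render[..., c], material[..., c]
    matched[..., c] = (r - r.mean()) / (r.std() + 1e-6) * m.std() + m.mean()

Image.fromarray(np.clip(matched, 0, 255).astype(np.uint8)).save("styled_render.png")
```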

Image 2 | Visometry takes an input CAD model as the ground truth and then uses a video feed to identify a physical example of the object in the real world and tracks its position in 3D. Image: Visometry GmbH

Recycling & Disposal

End-of-life concerns and a focus on sustainability are high-level strategic considerations for many companies, particularly large consumer-facing multinationals, as their customer base becomes increasingly aware of, and concerned about, environmental impacts. The entire topic of developing circular economies is important for the vision industry, as vision sensing is central to enabling many key applications, for example in automated sorting and recycling activities. Working with products during the recycling or disposal phase represents a very complex vision problem: products close to the end of their useful life can come in many different forms and conditions, and are very different from the perfectly formed repeat parts created on a production line.

It is one of the successes of recent AI techniques to handle these types of unconstrained images and application classes, where the very diverse nature of the target object can make traditional rule-based image processing complex. As an example of continued innovation in this segment, an interesting recent development project was undertaken by Pickit (www.pickit.com) in the UK together with the University of Delft. The intent was to produce synthetic data to train AI models for waste handling, but also to include a physical simulation step, where a physical dynamics engine is used to create realistic or feasible deformations of perfect CAD models prior to training a neural network. This is not a fully developed solution or widespread approach, but it does give an indication of how the combination of CAD models and physical simulation can be used to support sustainability much later in a product lifecycle.
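A minimal sketch of such a simulation step is shown below, using the open-source PyBullet engine to drop a part mesh into a scene so that the pose used for synthetic rendering is physically feasible; "part.obj" is a placeholder CAD export, and true deformation of the mesh itself would require a soft-body or FEM simulation:

```python
import pybullet as p

p.connect(p.DIRECT)                      # headless physics server
p.setGravity(0, 0, -9.81)

# Static ground plane for the part to land on.
plane = p.createCollisionShape(p.GEOM_PLANE)
p.createMultiBody(baseMass=0, baseCollisionShapeIndex=plane)

# Load the CAD-exported mesh as a dynamic rigid body and drop it.
part_shape = p.createCollisionShape(p.GEOM_MESH, fileName="part.obj")
part = p.createMultiBody(baseMass=1.0,
                         baseCollisionShapeIndex=part_shape,
                         basePosition=[0, 0, 0.5])

for _ in range(240):                     # simulate one second at 240 Hz
    p.stepSimulation()

# The resting pose feeds the synthetic-image renderer.
pos, orn = p.getBasePositionAndOrientation(part)
print("Feasible resting pose for rendering:", pos, orn)
p.disconnect()
```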

Traceability & Identification

Across the entire digital thread, two features are required to enable the concept: the ability to identify the part, and unified access to all data and information relating to the part, whether this be the original design, the inspections performed on the part, or the ultimate end-user or disposal instructions. Unique identification is also a primary means to ensure authentication of an object, which is of increasing importance for anti-counterfeiting and anti-tampering, for example with approved spare parts and devices, or customer returns from e-commerce sites.

Vision tech has a long and successful history in automatic identification, most familiar in the data-code scanners provided by many companies in the vision sector. Beyond direct marking of parts with data codes and dot-matrix patterns, several companies are exploring the inclusion of additives directly within the manufacturing process to create a unique part ID. For example, Dust Identity (www.dustidentity.com) produces a polymer coating which contains many diamond nanocrystals, each with a preferred orientation. The polymer is applied to a part and hardened, creating a unique fingerprint from the orientations of the diamonds. A vision sensor is used to read the fingerprint by capturing the optical signature across the area where the polymer was coated. Another additive approach to creating a unique fingerprint comes from TruTags (www.trutags.com), who have developed microscopic silica particles, each of which can be created with a specific microstructure. Each different microstructure reflects a different wavelength of light, so by creating mixtures of different particles a 'color' fingerprint can be created. The technology was developed to be used directly within pills and drugs, and therefore uses silica, which is common in the pharmaceutical industry.
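The readout side of such schemes follows a generic register-and-compare pattern. The sketch below uses a simple colour-histogram signature as an illustrative stand-in; it is not the proprietary readout of either vendor, and the image file names are placeholders:

```python
import numpy as np
import cv2

def color_signature(image_bgr, bins=16):
    # Generic colour-histogram signature over the coated region;
    # an illustrative stand-in for reading an optical fingerprint.
    hist = cv2.calcHist([image_bgr], [0, 1, 2], None,
                        [bins] * 3, [0, 256] * 3)
    hist = cv2.normalize(hist, hist).flatten()
    return hist.astype(np.float32)

# Signature stored at registration vs. signature captured in the field.
registered = color_signature(cv2.imread("tag_at_registration.png"))
candidate = color_signature(cv2.imread("tag_in_field.png"))

# Compare signatures; high correlation suggests the same physical tag.
score = cv2.compareHist(registered, candidate, cv2.HISTCMP_CORREL)
print("match" if score > 0.9 else "no match", score)
```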

Another approach is to use the inherent characteristics of the part itself to create a fingerprint. For example, companies such as Alitheon (www.alitheon.com) and DeTagTo (www.detagto.com) have developed identification technology that relies on the natural microstructure present in every object. By capturing an image of the microstructure of a surface and applying sophisticated algorithms, a unique fingerprint can be generated which can then be used to uniquely identify the part. The algorithms typically use features from the whole image surface to create the ID, which provides a level of robustness to local surface defects such as scratches or oil stains. The general process includes a registration step, where the part is imaged and the unique ID is created and stored in a local or cloud database. Later, at further points in the manufacturing cycle, or during the product lifecycle, an image of the same area can be captured and the part or object uniquely identified.
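A generic sketch of this register-then-verify pattern is shown below, using off-the-shelf ORB features as a stand-in for the vendors' proprietary fingerprinting algorithms; file names and thresholds are placeholders:

```python
import cv2

orb = cv2.ORB_create(nfeatures=1000)

def fingerprint(path):
    # Extract binary descriptors over the whole surface image; at
    # registration these would be stored in a local or cloud database.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    keypoints, descriptors = orb.detectAndCompute(img, None)
    return descriptors

registered = fingerprint("surface_at_registration.png")
candidate = fingerprint("surface_in_field.png")

# Match descriptors; because features come from the whole surface, a few
# local defects (scratches, oil stains) only remove a minority of matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(registered, candidate)
good = [m for m in matches if m.distance < 40]

print("same part" if len(good) > 50 else "unknown part", len(good))
```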

Conclusion

Although Vision tech is not the sole enabler of the digital lifecycle, it is certainly clear that vision has an important role to play. There are many links between the different phases of the digital product lifecycle which can be implemented using vision tech to provide value and benefits to end-users, as well as to create new business concepts. At the same time, the strategic push toward increased digitalization across the product lifecycle is only likely to strengthen, providing a context with many opportunities for the vision industry.

www.vision-ventures.eu

www.emva.org

EMVA European Machine Vision Association
