Machine learning and data science are two of the fastest-growing fields in computer science. The tools they use have been expanded and refined to offer new programming paradigms and to optimize workflows. Among the design tools that have rapidly gained popularity are notebooks, which store executable code together with documentation and visualizations in a single place. Notebooks are powerful interfaces for exploratory programming, as they allow users to experiment, visualize results, and document insights in a unified web-based interface. One of the most popular frameworks for the development and use of notebooks is the Jupyter open-source project and its latest version, JupyterLab.
JupyterLab is the latest web-based interactive development environment for notebooks, code, and data. Its flexible interface allows users to configure and arrange workflows in data science, scientific computing, and machine learning. Notebook documents can contain live Python code, rich text elements, and visualizations, allowing application developers to build interactive GUIs with live output from a camera stream.
JupyterLab for Machine Vision
Machine vision application designers can leverage the interactive computing and workflow benefits enabled by Jupyter notebooks to quickly test and validate all camera features and to speed up product qualification without having to set up a custom development environment. Lucid Vision Labs recently integrated JupyterLab in its ArenaView software viewer to offer a preconfigured and expandable development environment within a familiar viewer interface. JupyterLab comes preinstalled with its own virtual Python environment and can be used without any further configuration. Advanced users have the option of using their own custom virtual environments.
JupyterLab is underpinned by the Python programming language, an object-oriented, interpreted language that gains much of its power from a large constellation of libraries, including popular modules for computer vision, machine learning, and scientific computing. While interpreted languages tend to be slower than compiled languages for time-critical applications, many performance-critical Python libraries are implemented in the C programming language. The Python standard library consists of more than 200 core modules. Specifically for machine vision, the modular nature of the Python package architecture enables application developers to leverage a large number of open-source computer vision and machine learning packages. Libraries such as OpenCV or TensorFlow can be installed and configured with a simple pip (Package Installer for Python) command, allowing users to experiment right away in a Jupyter notebook. Lucid's JupyterLab resource center on the company's homepage contains multiple examples of typical machine vision applications, such as OCR, 1D/2D barcode reading, and AI-based object detection, provided in the form of easy-to-use notebooks.
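As a minimal sketch of the kind of experiment a notebook cell makes easy, the snippet below binarizes a synthetic grayscale "frame" with a fixed threshold — a common first step in barcode reading or OCR pipelines. In a real notebook one would first run `pip install opencv-python` and load an actual camera image; here NumPy stands in so the example is self-contained, and the gradient frame is purely illustrative.

```python
import numpy as np

# Synthetic 8-bit grayscale frame (4x256 gradient) standing in for a
# camera image that would normally come from cv2.imread or a camera SDK.
frame = np.tile(np.arange(256, dtype=np.uint8), (4, 1))

# Simple global threshold: pixels at or above 128 become white (255),
# the rest black (0) -- equivalent to cv2.threshold with THRESH_BINARY.
threshold = 128
binary = (frame >= threshold).astype(np.uint8) * 255

print(binary.min(), binary.max())   # 0 255
print(int((binary == 255).sum()))   # 512 pixels above threshold (4 rows x 128)
```

Running such a cell in JupyterLab shows the result immediately below it, which is exactly the tight experiment-and-inspect loop the notebook workflow is built around.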
Lucid’s Triton Edge camera featuring AMD Xilinx’s Zynq UltraScale+ MPSoC allows application engineers to leapfrog the competition by providing faster time to market thanks to its miniaturized hardware design and validated industrial reliability. Powerful on-camera system resources, such as twin dual-core Arm processors and an FPGA, provide increased flexibility for the development of unique vision IP. Without having to rely on a camera manufacturer’s SDK or to develop and run code on a host PC, OEMs are free to develop on-camera edge processing, including AI, through various development environments, such as Xilinx Vitis, PetaLinux, and Jupyter Notebook. Using Jupyter notebooks directly on an edge device enables application designers to develop solutions without the complexity of setting up a custom development environment or resorting to remote debugging for experimentation. Furthermore, the AMD Xilinx PYNQ framework provides the means to control the underlying hardware logic and to load FPGA overlays with a high-level Python API.
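To illustrate what that high-level API looks like, the sketch below loads a PYNQ overlay (an FPGA bitstream plus metadata) and lists the IP cores it exposes. The bitstream name `base.bit` is a hypothetical placeholder — actual overlays are supplied with the board image or built in Vitis — and the import is guarded because the `pynq` runtime exists only on PYNQ-enabled devices.

```python
def load_overlay(bitstream="base.bit"):
    """Program the FPGA with a PYNQ overlay and report the IP it exposes.

    `bitstream` is a hypothetical file name for illustration; on real
    hardware it would point at an overlay shipped with the device.
    """
    try:
        from pynq import Overlay  # available only on PYNQ-enabled boards
    except ImportError:
        return "pynq-unavailable"
    ov = Overlay(bitstream)            # downloads the bitstream to the FPGA
    return "loaded:" + ",".join(ov.ip_dict)  # names of IP cores in the overlay


status = load_overlay()
print(status)
```

Once an overlay is loaded, its IP blocks become attributes of the `Overlay` object, so hardware logic can be driven interactively from the same notebook cell — which is the workflow the PYNQ framework is designed to enable.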
Lucid Vision will be showcasing a variety of new camera technologies at the VISION trade fair, including the Triton EVS camera using Prophesee’s Metavision event-based sensor. Furthermore, the company is enabling SWIR imaging in Triton’s IP67-rated compact form factor, featuring the Sony SenSWIR 1.3 MP IMX990 and 0.3 MP IMX991 InGaAs sensors, capable of capturing images across both the visible and short-wave infrared spectrums. A 65 MP model within the high-end Atlas 10GigE camera portfolio will be unveiled, along with an 8.1 MP camera utilizing the new Sony IMX487 ultraviolet (UV) sensor. The Helios 2 3D ToF camera family will gain new variants such as the Helios2 Wide Field of View (FoV), whose wide-angle lens provides a 108° angle of view. The camera delivers 640×480 depth resolution at up to an 8.3 m working distance and 30 fps.