
The device is not yet commercially available, but pre-order updates are available. .lumen’s approach has been recognized globally: it was named a CES Innovation Awards 2026 Honoree in the Accessibility and Longevity category and won the CTA Foundation’s 2026 Pitch Competition. Arrow supports .lumen’s efforts to scale production by providing engineering and supply chain services for inventory reliability, cost control, and improved performance. Globally, the potential market is enormous: 43 million people worldwide are blind, 338 million have significant vision impairments, and those numbers are expected to grow significantly. The opportunity for a breakthrough vision technology is equally apparent: only 28,000 blind people worldwide have service dogs. Training takes two years, costs as much as $75,000 USD, and most dogs fail the program. “Technology is the only scalable solution to provide blind people with the mobility and freedom they need,” Amariei said. “Our glasses replicate the main features of a guide dog, without the huge cost or the maintenance.”

How it works
.lumen’s glasses combine AI, computer vision, and stereoscopic edge vision in a single wearable headset. It is a durable, weather-proof wearable that sits comfortably across the forehead and is powered by a rechargeable battery. Using a proprietary navigation system called Pedestrian Autonomous Driving AI, the .lumen glasses integrate six cameras to observe the surroundings, much like how an autonomous car operates. Two of the cameras are color, and four are infrared, arranged in stereoscopic pairs. The cameras detect:
- Ground-level obstacles (curbs, potholes, parked cars, scooters)
- Overhead dangers (branches, signs, window ledges)
- Hazardous and irregular surfaces such as puddles and mud
- Key landmarks such as pedestrian crossings, stairs, doors, and bus stops
The integrated view is sent to an Nvidia vision engine. The module is loaded with a proprietary Pedestrian Semantic Segmentation ML model, which processes the flow of visual data 100 times per second and builds a contextual understanding of the scene. The startup’s engineers regularly update the model to expand functionality and improve accuracy. This continuous analysis is not completed remotely in the cloud, but rather on the device itself; in this way, the glasses work on the intelligent edge. Intelligent edge devices generate and process data at the source, on the same platform, rather than relying on centralized cloud computing. This enables immediate decision-making based on real-time data analysis, whether by a robot on a factory floor, an autonomous vehicle on the street, or, in the case of .lumen, a blind person walking across a busy street.
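The on-device pipeline described above can be pictured as a fixed-rate inference loop. The sketch below is purely illustrative: the class labels, frame sizes, and timing budget are assumptions, and the segmentation model is stubbed out rather than being .lumen’s actual network.

```python
import time
import numpy as np

# Hypothetical class labels mirroring the hazard categories listed above.
CLASSES = ["free_ground", "curb", "pothole", "vehicle",
           "overhead_obstacle", "puddle", "crossing", "stairs", "door"]

def segment_frame(frame):
    """Stand-in for the on-device segmentation network: returns a per-pixel
    class-index map with the same height/width as the input frame."""
    return np.zeros(frame.shape[:2], dtype=np.uint8)

def run_edge_loop(frames, hz=100):
    """Process every frame locally at a fixed cadence -- no cloud round trip."""
    budget = 1.0 / hz                      # 10 ms per frame at 100 Hz
    masks = []
    for frame in frames:
        start = time.perf_counter()
        masks.append(segment_frame(frame))
        elapsed = time.perf_counter() - start
        if elapsed < budget:
            time.sleep(budget - elapsed)   # hold the fixed 100 Hz rhythm
    return masks

frames = [np.zeros((120, 160, 3), dtype=np.uint8) for _ in range(5)]
masks = run_edge_loop(frames)
```

The key design point is that every frame is analyzed on the headset itself, so guidance decisions never wait on a network round trip.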
Helping the blind to navigate
In individuals who are blind, especially those who lost their sight later in life, the visual cortex is typically still active even though the eyes and their neural connection to the brain are damaged. Imaging research shows this area repurposes itself to process non-visual sensory information, such as sound and touch, and may even contribute to spatial awareness and memory. So Amariei and his team made the fateful decision not to whisper directions and warnings into the blind person’s ear: voice commands can be hard to distinguish, especially on noisy city streets, and people with vision impairments already rely on their sense of hearing far more than people with normal sight. Instead, the glasses communicate mostly via haptic actuators. Haptics stimulate the sense of touch through vibrations, forces, or motions, adding another level of detail to a person’s perception. To avoid an obstacle, gentle vibrations indicate the safest route to advance; to find a safe path, the user feels an insistent pulse at the center of the forehead.
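As a rough illustration of directional haptics, the sketch below maps a safe-path heading onto per-motor vibration intensities across a forehead actuator strip. The motor count, angular spread, and Gaussian falloff are all assumptions for illustration, not .lumen’s actual control scheme.

```python
import math

def haptic_pattern(heading_deg, n_motors=5, spread_deg=90.0):
    """Map a safe-path heading (0 = straight ahead, negative = left) onto
    vibration intensities for a row of forehead motors. Hypothetical sketch."""
    # Motor positions spaced evenly from -spread/2 (left) to +spread/2 (right).
    positions = [-spread_deg / 2 + i * spread_deg / (n_motors - 1)
                 for i in range(n_motors)]
    # Intensity falls off with angular distance from the target heading,
    # so the motor nearest the safe direction vibrates the strongest.
    sigma = spread_deg / n_motors
    return [round(math.exp(-((p - heading_deg) / sigma) ** 2), 3)
            for p in positions]

# A path slightly to the right activates the right-of-center motors hardest.
pattern = haptic_pattern(20.0)
```

A spatial pattern like this lets the user feel the direction directly instead of decoding a spoken instruction.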
Follow the vibrations
“Haptics are intuitive,” Amariei said. “We don’t want our instructions to be too complex and degrade the user’s perception of the situation.” Now that .lumen users can walk freely and avoid obstacles, the next step toward independent living is using the glasses to find specific locations and objects. The startup is introducing a new feature called “Find Me,” which combines visual understanding and spatial relations to help users find what they need. In this new application, .lumen combines speech with its haptic instructions. For example, a blind person enters a chilly hotel room and wants to adjust the temperature. Normally, it takes many frustrating minutes to find the thermostat in an unfamiliar environment. The “Find Me” program is loaded with images of hundreds of common objects. As the user scans the hotel room, the headset’s cameras work to recognize the thermostat on the wall. Subtle haptic vibrations indicate where the user should turn. Getting closer, spatial sound tells the user the distance and orientation.
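The combination of turn guidance and distance announcement might be modeled as below. The coordinate convention, the 2 m audio threshold, and the function name are illustrative assumptions, not the shipped “Find Me” logic.

```python
import math

def find_me_cue(user_xy, user_heading_deg, target_xy):
    """Given the user's position/heading and a recognized object's position
    (e.g. a thermostat), return the turn angle, distance, and cue modality.
    Illustrative sketch of combining haptic and spatial-audio guidance."""
    dx = target_xy[0] - user_xy[0]
    dy = target_xy[1] - user_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy))            # 0 deg = +y, ahead
    turn = (bearing - user_heading_deg + 180) % 360 - 180  # wrap to [-180, 180)
    # Haptics steer the turn from afar; spatial audio takes over up close.
    modality = "audio" if distance < 2.0 else "haptic"
    return {"turn_deg": round(turn, 1),
            "distance_m": round(distance, 2),
            "cue": modality}

# Thermostat 3 m ahead and 1 m to the right of a user facing straight ahead.
cue = find_me_cue((0.0, 0.0), 0.0, (1.0, 3.0))
```

Splitting the cue by range keeps the haptic channel simple for coarse steering while the richer audio channel handles the final approach.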