Inbolt x UR AI Accelerator: Ushering in the Era of Vision-Guided Robotics by Default

Published on June 10, 2025

Inbolt’s AI-powered software is now integrated into the NVIDIA-powered Universal Robots AI Accelerator Kit, offering a seamless, out-of-the-box solution for intelligent cobot systems.

Inbolt’s software-first approach to 3D vision for real-time robot guidance just took a major leap forward.

We’re proud to announce the seamless integration of our AI-powered robot guidance software into Universal Robots' AI Accelerator, a plug-and-play toolkit for developing AI-powered applications that runs on the NVIDIA Jetson AGX Orin system-on-module and features an Orbbec 3D camera. The result is a truly intelligent cobot system out of the box: no custom code, no lengthy setup, and no compromises.

This will be publicly showcased at NVIDIA GTC Paris 2025.

Software-First, Hardware-Ready

What makes this leap unique is how little effort the integration required. Inbolt’s software was embedded on the UR AI Accelerator hardware with minimal adaptation, proving our company vision: robot intelligence should come by default, not as a costly, custom afterthought.

With this integration, end users receive a ready-to-deploy intelligent robot in a single shipment. No vision system configuration. No AI tuning. No code rewrites.

The Future of Robotics Is Software-Native

We believe it’s only a matter of time before every robot is shipped with a 3D vision sensor and embedded AI software. And Inbolt is leading that charge.

Our technology transforms robots from blind executors into adaptive, context-aware machines, capable of responding to real-world variability in real time. For manufacturers, that means:

  • <5 minutes to train on any new CAD model

  • No parameter tuning required

  • Works in any lighting condition, even total darkness

  • <80 ms part detection latency

  • Real-time trajectory correction at unprecedented speeds

  • Accessible to non-expert users through standard robot programming tools (see the sketch after this list)

  • Fully compatible with moving parts and dynamic assembly lines
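
To make the "standard robot programming tools" point concrete, here is a minimal, purely illustrative Python sketch of how a vision-corrected target pose could be consumed on a UR cobot using the open-source ur_rtde library. The robot IP and the get_corrected_pose() helper are hypothetical placeholders for this example, not Inbolt's actual interface.

```python
# Minimal sketch: move a UR cobot to a vision-corrected target pose.
# Uses the open-source `ur_rtde` package; get_corrected_pose() is a
# hypothetical stand-in for whatever interface the guidance software exposes.
import rtde_control

ROBOT_IP = "192.168.1.10"  # example address, adjust to your cell


def get_corrected_pose():
    """Hypothetical placeholder: return a 6D pose [x, y, z, rx, ry, rz]
    (meters / axis-angle radians) for the detected part, as a 3D vision
    guidance system would provide it each detection cycle."""
    return [0.40, -0.15, 0.25, 2.22, -2.22, 0.0]


def main():
    rtde_c = rtde_control.RTDEControlInterface(ROBOT_IP)
    target = get_corrected_pose()      # pose refreshed per detection cycle
    rtde_c.moveL(target, 0.25, 0.5)    # linear move: 0.25 m/s, 0.5 m/s^2
    rtde_c.stopScript()


if __name__ == "__main__":
    main()
```

The point of the sketch is that the robot program stays an ordinary move command; only the target pose changes, supplied by the vision system at runtime.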

From bin picking to screwdriving, gluing, or complex assembly, our software adapts instantly, without changing the robot or the layout.

From Custom Engineering to Intuitive Intelligence

For decades, automation has been synonymous with painful integration: custom fixtures and rigid setups.

In recent years, machine vision did little to help. Solutions were hardware-first, handing customers tools rather than a solution, and they demanded expert tuning and long development cycles while delivering low reliability. Inbolt is changing that.

We’re building the foundation for a world where robot manufacturers integrate our software natively as OEMs, giving customers immediate access to intelligent, adaptable automation. We’re building the future of robotics.

It’s 2025. Let’s leave static automation in the past.
It’s time for robotics to see, adapt, and act, by default.
