AZoSensors speaks with Heiko Seitz, technical author at IDS Imaging Development Systems GmbH, about the company's current work and breakthroughs in AI-Vision, the new opportunities these open up for those working with AI-based applications, and how the technology can be made more accessible.
Could you briefly outline what it is that you are trying to achieve at IDS?
Today, it is all about the exciting new features of our next-generation AI-based system, IDS NXT. With the coming software update, we present a whole new approach to AI-based application development, one that will enable everyone to quickly and easily create their own individual AI-Vision application and deploy it directly on an IDS NXT camera.
What motivated you to pursue this avenue of making AI-Vision more accessible?
We made it our mission to ensure that everyone has the capacity to create their own AI-based image processing. We wanted to create solutions suitable for almost any application and provide users with everything they need to get started with AI-Vision-based applications right away.
In addition to the complete hardware, we have developed the complete software environment, including access to IDS NXT lighthouse to train neural networks and help create custom image processing applications in cameras.
What exactly is IDS NXT and what kind of hardware are IDS NXT cameras?
The IDS NXT hardware systems are intelligent, industrial-quality cameras suitable for edge use. They can process vision tasks independently on the camera, without a PC, and generate results. For this purpose, they have their own FPGA-based IDS AI processor, the deep ocean core, which executes neural networks hardware-accelerated in the camera.
Application processes are realized via vision apps, which can be exchanged quickly and easily, much like apps on a smartphone. The cameras then communicate inference results to PCs, PLCs and other external devices or machines.
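To illustrate the kind of inference result a smart camera might communicate to a PC or PLC, here is a minimal sketch in Python. The message shape, field names and the `summarize` helper are purely illustrative assumptions, not part of the actual IDS NXT interface.

```python
import json

# Hypothetical shape of an inference-result message a smart camera
# might publish to a PC or PLC; all field names are illustrative only.
result = {
    "app": "object_counter",
    "detections": [
        {"label": "screw", "confidence": 0.97, "roi": [120, 80, 40, 40]},
        {"label": "screw", "confidence": 0.91, "roi": [300, 150, 42, 38]},
    ],
}

def summarize(message: dict) -> str:
    """Reduce a detection message to a compact line for a PLC or log."""
    count = len(message["detections"])
    labels = {d["label"] for d in message["detections"]}
    return f"{message['app']}: {count} object(s) of type(s) {sorted(labels)}"

payload = json.dumps(result)            # serialize for transport
print(summarize(json.loads(payload)))   # decode and condense on the receiving side
```

On the receiving side, a compact summary like this is often all a PLC needs to trigger the next step in a machine process.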
Intelligent cameras are already available on the market, but what makes IDS’s system different from the others?
This is where our idea of simple AI-Vision comes into play. In addition to the advanced IDS NXT hardware, we offer a completely self-developed software environment that covers the entire workflow: the creation and training of a neural network from images with a suitable categorization of their content, as well as the realization of the application software that later performs the inspection task on the edge device, where it runs in a fully optimized and autonomous way.
Who is it intended to help, and what can you do with it?
If you want to work with AI-Vision but think you lack the necessary expertise in artificial intelligence or application programming, then IDS NXT is exactly the right system, because no special knowledge in these disciplines is required to operate it. And developing, maintaining and evolving an application is really easy.
It offers you everything you need to get started right away. In addition to the complete hardware, you also get the complete software environment, including access to IDS NXT lighthouse to train your first neural networks and create your own image processing application in cameras.
How exactly does the system work?
You no longer have to program in a classic platform-specific way with a direct device connection because the basic device functions are pre-programmed in a universally configurable modular function kit and therefore do not have to be reprogrammed each time.
Additionally, for training the machine learning algorithms, we already have an easy-to-use CNN training tool in IDS NXT lighthouse.
Image Credits: IDS Imaging
How does your system make AI-Vision more accessible?
Firstly, you do not need to be familiar with a special text-based programming language to combine the camera functionalities or to bring them into an application flow, because this has now all been integrated in lighthouse. With such a high-level development environment, you neither have to deal with platform-specific programming nor with the special syntax of a programming language.
The complete AI-based embedded vision development now takes place in the cloud. So no installation of a complex development environment is necessary. You only need a licence for this web-based AI-Vision Studio. This allows you to start immediately and concentrate fully on what your application should do, which will later be executed in the camera.
How do you make it so easy for anyone to create IDS NXT Vision Apps?
We offer different approaches that enable the end goal to be achieved quickly, in a manner that is as simple and flexible as possible. Creating vision apps with the use case assistant, for example, is extremely application-oriented: it offers a selection of ready-made use cases for an easy start and guides you through the entire process of app creation with simple queries and tips, without requiring any special knowledge of whether objects need to be detected or classified.
It prepares the right application components for you in the background to find the desired objects, count them and make the results available via the selected device interfaces. It asks you for images according to the selected use case and explains how to label them and train with them.
IDS is all about combining "high-quality" with "ease of use." How is this realized in IDS NXT and how can it make complex processes easy to understand?
With IDS NXT lighthouse, a complete embedded vision app can be created in the cloud in just a few steps and run directly on the IDS NXT camera. And with the new use case assistant, many simple tasks can already be covered completely.
And for some more complex processes, such as a two-stage analysis, which then requires several neural networks, you do not have to switch immediately to C++ or another text-based programming language and leave the comfort of the AI-Vision Studio.
For this purpose, we have integrated a block-based visual code editor that uses Google's Blockly library. Processes can be combined like puzzle pieces to form sequences of any complexity. In this way, we enable much higher flexibility in describing the application while, at the same time, keeping the process easy to follow.
How quickly can a new user get set up and started with IDS NXT lighthouse?
Thanks to the intuitive user interface, even beginners can use the block-based editor to quickly create customized sequences. Variables, parameters and AI results can easily be connected through logic links, mathematical calculations, conditional if-else statements or recurring actions with loops. It is also possible to use several neural networks in one sequence.
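To make the idea concrete, a block sequence of the kind described here can be thought of as ordinary control flow. The following is a hypothetical translation into plain Python; `detect_a` and `detect_b` are stand-ins for two trained CNNs, and none of the names reflect the actual IDS NXT block vocabulary.

```python
# Hypothetical translation of a block-based vision-app sequence into
# plain Python. detect_a / detect_b stand in for two trained CNNs;
# in the real editor these would be detection blocks, not functions.

def detect_a(image):
    # stub: a first classification block returning (label, confidence)
    return ("bottle", 0.95)

def detect_b(image):
    # stub: a second network inspecting a detail of the same image
    return ("cap_missing", 0.88)

def run_sequence(images, threshold=0.9):
    results = []
    for image in images:                  # loop block: process each image
        label, conf = detect_a(image)     # first CNN
        if conf >= threshold:             # conditional if-else block
            detail, _ = detect_b(image)   # second CNN: two-stage analysis
            results.append((label, detail))
        else:
            results.append((label, "skipped"))
    return results

print(run_sequence(["img1", "img2"]))
```

The same structure, loops around detections with conditions deciding whether a second network runs, is what the puzzle-piece blocks express visually, without any text-based code.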
What new possibilities does IDS’s embedded vision development enable?
Embedded vision systems such as the IDS NXT inference camera platform are highly optimized devices that typically use components such as FPGAs, GPUs or other dedicated devices in addition to a CPU in order to perform their specialized tasks extremely efficiently while using as few resources as possible.
But developing applications for such an embedded device is only possible with some prior knowledge. Just to set up the interaction between the development system and the edge device, you need to know how to deal with interfaces, communication protocols, debuggers, toolchains and an IDE.
Now imagine an edge system for which you no longer have to "program" in that classic platform-specific way with a direct device connection, and which requires no prior knowledge in any of these disciplines. All of this can be done without setting up a special development environment, because everything you need is located in the cloud, in one tool.
But is it also possible to test the developed embedded vision solution in the cloud?
If you want to make sure that your developed vision app runs correctly before downloading it onto the embedded device, you can upload sample images to be used in the vision app. It is even easier to use a dataset already prepared in IDS NXT lighthouse. This amounts to a kind of application simulation in the cloud.
In this way, images with certain content or problematic situations can be prepared by the user to test, debug and edit the app and the neural network used before the app is executed live in the camera.
How will the app, developed in the cloud, find its way into the camera?
The next step to complete the workflow is to upload the vision app into the camera. Once that is done, users connect their IDS NXT camera with the IDS NXT cockpit PC tool, enter the vision app manager and install the newly built vision app. Once activated, it runs locally as a new app in the camera.
Can the embedded vision app be evolved once installed locally, or do users need to go back to the cloud to make changes or extend the application?
Good question. Once the app is up and running and functioning properly in the camera, you may want to extend the application or even discover that there could be a bug. For this situation, there is another excellent feature built into the new software release that makes any subsequent changes, updates or fixes much easier to execute.
If you want to extend the application to detect additional objects in the images, you simply need to go back into the IDS NXT cockpit, because every app created with IDS NXT lighthouse also contains the block-based editor. This means the application code is now also available to edit directly in the camera. To see this, just open the camera's website, and you will see the block-based code of the app running in the camera.
To detect the additional objects, you only need to add a few more blocks, for example a second detection block that uses another CNN, previously trained in lighthouse and uploaded into the camera. This is done with just a few clicks.
And to mark the additional objects, you would only need to draw further ROIs into the same camera image. To do this, use another ROI-drawing block with the results of the second detection. This is all done very quickly.
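The extension step described above can be sketched as follows. This is a hypothetical illustration in plain Python of what the two ROI-drawing blocks accomplish; `draw_rois`, the overlay list and all labels are invented for this sketch and do not reflect actual IDS NXT block names.

```python
# Hypothetical sketch of the extension step: the existing app already
# draws ROIs for a first detector, and we add a second detector's
# results to the same image overlay. All names are illustrative.

def draw_rois(overlay, detections, color):
    """Append one ROI entry per detection to the shared overlay list."""
    for label, box in detections:
        overlay.append({"label": label, "box": box, "color": color})
    return overlay

overlay = []
first = [("screw", (10, 10, 30, 30))]      # results of the existing CNN
second = [("washer", (60, 40, 20, 20))]    # results of the newly added CNN

draw_rois(overlay, first, "green")   # original ROI-drawing block
draw_rois(overlay, second, "red")    # new ROI-drawing block for the second detection
print(len(overlay))  # 2 marked regions in the same camera image
```

The key point is that both detectors annotate the same camera image; adding the second one is just another block wired to the new detection results.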
So the IDS NXT users don't need programming knowledge to make updates to their vision system?
No, you do not have to be a programmer to develop or evolve the system. Modifying and updating an existing vision app with the new block-based editor is easy and can be done by anyone.
About Heiko Seitz
Dipl.-Ing. Heiko Seitz has been part of the IDS team since 2001. As a development engineer, he was involved in the evolution of IDS machine vision products from frame grabbers to today's camera technology. Through his in-depth experience in various areas of camera software, he knows the challenges of developers as well as the requirements of users. Since 2016, he has applied his technical background as an author in the marketing team. Among other things, he is responsible for the creation of communication concepts and technical articles about IDS products and their technology.
This information has been sourced, reviewed and adapted from materials provided by IDS Imaging Development Systems GmbH.
For more information on this source, please visit IDS Imaging Development Systems GmbH.
Disclaimer: The views expressed here are those of the interviewee and do not necessarily represent the views of AZoM.com Limited (T/A) AZoNetwork, the owner and operator of this website. This disclaimer forms part of the Terms and Conditions of use of this website.