Unity Sentis: Real-time Object Detection for AR Apps

Posted By : Saurabh Singh | 21-Jan-2025


Unity Sentis gives developers an easy on-ramp for integrating AI capabilities into augmented reality (AR) applications, enabling seamless object detection and a richer user experience. Because Unity Sentis supports ONNX (Open Neural Network Exchange) models, you can directly integrate pre-trained models or build your own, giving you the flexibility to tailor solutions to your specific use cases.

Key Features of Unity Sentis

On-Device Processing:

Unity Sentis runs AI models directly on the device, meaning there's no need for constant internet connectivity. This allows for real-time processing and instant responses, making it perfect for applications that require immediate feedback.

 

Cross-Platform Compatibility:

Sentis works across platforms, from mobile devices and AR glasses to virtual reality headsets, which makes applications built with it usable in a wide range of environments.

 

Optimized Performance:

Sentis optimizes models for the capabilities of the target device, so performance does not lag even on less powerful hardware.

Use Case: Router Status Detection and Display in Augmented Reality


Let's consider an example of developing a simple application with Unity Sentis for real-time object detection. We want to build an Android application running on AR glasses, such as Vuzix Smart Glasses, that detects a DLink Router and displays its status dynamically. The router's status will be fetched from an API every minute and shown on the glasses in real time.

 


 

Here's how you can do it step by step:

 

Setup the Development Environment:

First, install Unity on your machine, along with the Unity Sentis package. Then set up your AR glasses with the relevant SDK for your target platform: ARCore for Android or ARKit for iOS.
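One way to add Sentis is through the Unity Package Manager, which under the hood records the dependency in your project's Packages/manifest.json. A minimal sketch of that entry follows; the version number shown is only illustrative, so use whichever release the Package Manager offers for your Editor version:

```json
{
  "dependencies": {
    "com.unity.sentis": "1.3.0"
  }
}
```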

 

Prepare the ONNX Model:

If you don't already have an object detection model, you can take a pre-trained model such as YOLO or SSD and convert it into the ONNX format.

Alternatively, you can train your own model on specific objects, such as the DLink Router, and then export it to ONNX.

 

Unity Script for Object Detection:

Now that all this is done, you can start writing your script for object detection. Below is an example of how you might implement the detection logic.

Example Code for Real-Time Object Detection in Unity

using UnityEngine;
using UnityEngine.UI;
using Unity.Sentis;  // Unity Sentis namespace
using System.Collections.Generic;

public class ObjectDetectionManager : MonoBehaviour
{
    // Simple container for one detection result
    public struct DetectedObject
    {
        public string label;
        public float confidence;
    }

    public RawImage displayImage;  // For showing the detected image (optional)
    public ModelAsset modelAsset;  // Assign dlink_router_model.onnx here in the Inspector
    private IWorker worker;        // Runs inference on the loaded model

    // Start is called before the first frame update
    void Start()
    {
        // Load the ONNX model and create an inference worker
        Model model = ModelLoader.Load(modelAsset);
        worker = WorkerFactory.CreateWorker(BackendType.GPUCompute, model);
    }

    // Update is called once per frame
    void Update()
    {
        DetectObjects();  // Call object detection function every frame
    }

    void DetectObjects()
    {
        // Assume we have a texture from the camera or a frame
        Texture2D frameTexture = GetCameraFrame(); // Get camera frame
        displayImage.texture = frameTexture; // Optional: display the image on screen

        // Convert the frame to a tensor and run inference
        TensorFloat input = TextureConverter.ToTensor(frameTexture);
        worker.Execute(input);
        TensorFloat output = worker.PeekOutput() as TensorFloat;
        output.MakeReadable();  // Bring the results back to the CPU

        // Decode the raw tensor into detections. The output layout depends on
        // the model (YOLO, SSD, ...), so ParseDetections stands in for that
        // model-specific post-processing step.
        foreach (var result in ParseDetections(output))
        {
            if (result.label == "DLinkRouter" && result.confidence > 0.8f)  // Threshold for detection
            {
                // Display AR information
                DisplayRouterStatus(result);
            }
        }

        input.Dispose();
    }

    // Placeholder for model-specific output decoding
    List<DetectedObject> ParseDetections(TensorFloat output)
    {
        return new List<DetectedObject>();
    }

    // Method to simulate getting a camera frame (replace with the actual camera feed)
    Texture2D GetCameraFrame()
    {
        // For now, returning a placeholder texture
        return new Texture2D(512, 512);
    }

    // Display the router status (simulated with a simple UI update for this example)
    void DisplayRouterStatus(DetectedObject result)
    {
        string routerStatus = GetRouterStatusFromAPI();
        Debug.Log($"Router detected: {routerStatus}");

        // Update the UI with the router status (could be an AR overlay in a real app)
        Text statusText = displayImage.GetComponentInChildren<Text>();
        statusText.text = $"Router Status: {routerStatus}";
    }

    // Simulated API call to get router status
    string GetRouterStatusFromAPI()
    {
        // For simplicity, return a random status
        string[] statuses = { "Online", "Error: Connection Lost", "Offline" };
        return statuses[Random.Range(0, statuses.Length)];
    }

    void OnDestroy()
    {
        worker?.Dispose();  // Release inference resources
    }
}
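The script above returns a simulated status on every detection. To match the requirement of fetching the real status once per minute, you could poll the API from a coroutine on a timer instead. This is a minimal sketch; the endpoint URL and the plain-text response are assumptions, so replace them with your router API's actual address and response parsing:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Polls the router status endpoint once per minute and caches the result.
public class RouterStatusPoller : MonoBehaviour
{
    // Hypothetical endpoint; replace with your router API's real URL
    private const string StatusUrl = "http://192.168.0.1/api/status";

    public string LatestStatus { get; private set; } = "Unknown";

    void Start()
    {
        StartCoroutine(PollStatus());
    }

    IEnumerator PollStatus()
    {
        while (true)
        {
            using (UnityWebRequest request = UnityWebRequest.Get(StatusUrl))
            {
                yield return request.SendWebRequest();

                if (request.result == UnityWebRequest.Result.Success)
                    LatestStatus = request.downloadHandler.text;  // Assumes a plain-text body
                else
                    LatestStatus = "Error: " + request.error;
            }

            yield return new WaitForSeconds(60f);  // Poll every minute
        }
    }
}
```

With this component in the scene, DisplayRouterStatus could read the cached LatestStatus instead of calling the API on every detected frame.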


Integrating with AR Glasses:

Using AR Foundation: Unity's AR Foundation package can help you access the camera feed on AR glasses. By using the AR Foundation API, you can capture the camera feed and pass it as input to your object detection model.
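As a sketch of that approach (assuming an ARCameraManager is present in the scene and a recent AR Foundation version), you could copy each CPU camera image into a Texture2D that the detection model can consume:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Grabs camera frames from AR Foundation and exposes them as a Texture2D.
public class CameraFrameProvider : MonoBehaviour
{
    public ARCameraManager cameraManager;  // Assign the AR Camera's manager in the Inspector
    public Texture2D FrameTexture { get; private set; }

    void OnEnable()  => cameraManager.frameReceived += OnFrameReceived;
    void OnDisable() => cameraManager.frameReceived -= OnFrameReceived;

    void OnFrameReceived(ARCameraFrameEventArgs args)
    {
        if (!cameraManager.TryAcquireLatestCpuImage(out XRCpuImage image))
            return;

        var conversionParams = new XRCpuImage.ConversionParams
        {
            inputRect = new RectInt(0, 0, image.width, image.height),
            outputDimensions = new Vector2Int(image.width, image.height),
            outputFormat = TextureFormat.RGBA32,
            transformation = XRCpuImage.Transformation.MirrorY
        };

        if (FrameTexture == null)
            FrameTexture = new Texture2D(image.width, image.height, TextureFormat.RGBA32, false);

        // Convert the CPU image directly into the texture's raw buffer, then upload it
        var buffer = FrameTexture.GetRawTextureData<byte>();
        image.Convert(conversionParams, buffer);
        image.Dispose();
        FrameTexture.Apply();

        // FrameTexture can now be passed to the object detection model
    }
}
```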

Displaying the Status in AR: Once the object detection algorithm identifies the DLink Router, you can show the router's status as an AR overlay. You could position the status message in 3D space near the detected object (router) or display it as a UI element directly in the user's field of view.

 

void DisplayRouterStatusInAR(string status)
{
    GameObject statusObject = new GameObject("RouterStatusText");
    TextMesh statusText = statusObject.AddComponent<TextMesh>();
    statusText.text = status;
    statusText.fontSize = 30;
    statusText.color = Color.green;

    // Position the text in front of the router's detected location in AR space
    statusObject.transform.position = new Vector3(0, 1, 2);  // Example position
}


Conclusion:

Using Unity Sentis and ONNX models, developers can easily add real-time object detection to augmented reality applications. This example showed how to detect a device such as a DLink Router and display its status in AR on smart glasses. The same approach can be extended to smart factories, warehouses, and maintenance applications, where technicians and workers quickly get contextual information about the objects in front of them in real time.

Unity Sentis lets you integrate powerful AI capabilities into AR applications, providing seamless, efficient object detection and real-time decision-making. From solving remote maintenance problems to building complex logistics and training systems, Unity Sentis lets you unlock the future of AI-powered AR experiences.

 

About Author

Saurabh Singh

I’m Saurabh Singh, a passionate Unity developer with extensive experience in creating immersive and engaging applications. Specializing in interactive 3D environments and real-time simulations, I focus on delivering high-quality solutions tailored to client needs. My expertise spans game development, VR/AR experiences, and custom Unity integrations. Committed to innovation and excellence, I strive to push the boundaries of interactive technology and bring creative visions to life.
