CI/CD in AR apps development

Introduction

In my previous article, I wrote about how testing and debugging AR apps can be tough.
This time, with the goal of enabling a CI/CD cycle in AR application development, I would like to dig into how we can save time in AR application testing.
I also created a PoC that tests AR behavior both in a development environment and in a CI environment using GitHub Actions.

- What is CI/CD?
- How to develop AR apps
- What to do in AR apps testing
- Considering automated AR apps testing
- Challenges in automating AR apps testing
- Approaches to automated AR apps testing
- Summary

What is CI/CD?

CI/CD stands for Continuous Integration / Continuous Delivery and refers to an approach in which software is tested, built, and deployed automatically and continuously. Repeating this cycle in small increments improves software quality and development efficiency, allowing for faster releases and improvements.
In the context of AR app development, as mentioned in the previous article, manual steps such as building the app, running it on a device, and checking the behavior by hand, together with the need to cover more test variations in a changing environment, are bottlenecks in testing. Making these steps programmable should therefore open the way to CI/CD.

How to develop AR apps

Let’s take a look at some examples of AR apps, explain how they are developed, and list the test scenarios you’ll need.

1. An app that makes AR objects appear in a user-specified location, such as checking products or placing furniture with 3D models in online shopping.

Raycast from a user action, such as a tap, onto a plane recognized by the device, and place the object at the resulting coordinates.
The tests you’ll need are that the plane is detected correctly in various environments, and that the 3D models (objects) are positioned with the correct orientation and size by the user action.

2. Applications for virtual ads, device manuals, and other location-specific AR apps

If you want to attach objects to specific images, you can use image anchors; if you want to anchor objects to specific locations, you can use AR cloud anchors, such as Google Cloud Anchors or Azure Spatial Anchors, to restore (re-localize) the positional relationship (coordinates) between the placed objects and the device.
In addition to checking that the 3D models (objects) are placed with the appropriate orientation and size, you will also need to test whether spatial features can be captured properly and the anchors restored in a changing environment, such as varying light intensity.

3. Apps that allow multiple users to share the same AR space, such as games and architectural simulations

In addition to using anchors, real-time network engines such as Photon are used to synchronize object changes across multiple clients.
In addition to the tests listed in (2), tests are needed for whether actions on one device propagate successfully to the other devices, and for synchronization delays under concurrent use.

What to do in AR apps testing

In summary, the following tests are required in the AR context.

[Event occurrence] Make the specified object appear with a tap. — Is the object placed correctly?

[Object orientation, size] Adjusting the size and orientation of placed objects. — Is the placed object the right size and orientation?

[Anchor position] Find the anchor from geospatial information and restore the object bound to it. — Is the anchor found in the correct position, and are the restored objects placed in the correct positions?

[Interaction with objects] Actions on the objects work. — Does the object move and collide with other objects correctly?

[Synchronizing objects] Synchronizing objects between multiple players. — Do changes propagate correctly and synchronize without delay?

[Appearance of the object] Harmonize objects with real space. — Are the light source, occlusion, and position correct, and does the object look natural?

Considering automated AR apps testing

Now, let’s think about how to make the tests listed above programmable.
Since many developers use Unity for AR app development, the following examples assume development with Unity and use the Unity Test Framework (formerly known as Unity Test Runner), the standard testing framework for Unity.

[Event occurrence]

Object placement by tapping is usually implemented by instantiating the object’s prefab with the Instantiate() method after obtaining a raycast hit, so the test is to check with Assert whether that instance has been created.
For example, after tagging the object, use the FindGameObjectsWithTag() method to locate it and use the Assert.AreNotEqual() method to ensure that the number of instantiated objects is not 0.

// Find all objects tagged "arobject" and assert that at least one instance was created.
var obj = GameObject.FindGameObjectsWithTag("arobject");
UnityEngine.Assertions.Assert.AreNotEqual(obj.Length, 0);

[Object orientation, size]

You can get the orientation and size of an object using transform.rotation or transform.localScale.
The test can check that these values are within the expected range using the Assert.AreApproximatelyEqual() method, as shown below.

// Check that the placed object's rotation is approximately the expected quaternion.
var obj = GameObject.FindGameObjectsWithTag("ar1");
var quaternion = obj[0].transform.rotation;
Quaternion expected = new Quaternion(0.0f, 1.0f, 0.0f, 0.1f);
UnityEngine.Assertions.Assert.AreApproximatelyEqual(expected.x, quaternion.x, 0.1f);
UnityEngine.Assertions.Assert.AreApproximatelyEqual(expected.y, quaternion.y, 0.1f);
UnityEngine.Assertions.Assert.AreApproximatelyEqual(expected.z, quaternion.z, 0.1f);
UnityEngine.Assertions.Assert.AreApproximatelyEqual(expected.w, quaternion.w, 0.1f);

// Check that the placed object's scale is approximately the expected size.
var size = obj[0].transform.localScale;
UnityEngine.Assertions.Assert.AreApproximatelyEqual(1.2f, size.x, 0.2f);
UnityEngine.Assertions.Assert.AreApproximatelyEqual(1.2f, size.y, 0.2f);
UnityEngine.Assertions.Assert.AreApproximatelyEqual(1.2f, size.z, 0.2f);

[Anchor position]

To check whether the anchor is found in the correct position, you may need to feed spatial information, such as camera images, into the test, because the anchor can only be retrieved after re-localization.
This tends to fall back to manual AR testing, or to building and testing on devices; the challenges of automating AR app testing are described below.
If the anchor can be obtained from the input, its position can be translated and mapped into the device’s coordinate space (typically, the AR session origin is the point where the application is started). You can then test the position and orientation of the placed objects with Assert, as described above.
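
As a rough illustration, such a check might look like the following minimal sketch; it assumes the anchor has already been restored into the scene as an AR Foundation ARAnchor component and that the expected position (relative to the session origin) is known from the recorded data.

// Minimal sketch (hypothetical values): assumes an ARAnchor has already been restored in the scene.
var anchor = Object.FindObjectOfType<UnityEngine.XR.ARFoundation.ARAnchor>();
UnityEngine.Assertions.Assert.IsNotNull(anchor);
var expectedPosition = new Vector3(0.5f, 0.0f, 1.0f); // hypothetical expected anchor position
UnityEngine.Assertions.Assert.AreApproximatelyEqual(expectedPosition.x, anchor.transform.position.x, 0.1f);
UnityEngine.Assertions.Assert.AreApproximatelyEqual(expectedPosition.y, anchor.transform.position.y, 0.1f);
UnityEngine.Assertions.Assert.AreApproximatelyEqual(expectedPosition.z, anchor.transform.position.z, 0.1f);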

[Interaction with objects]

Suppose you want to run a physics simulation on an object using a Rigidbody or Collider, and write a sequence that makes the object fall, rotate, or collide.
For example, if you want to make an object fall onto a plane recognized by AR, you could attach a MeshCollider to the plane and use the OnCollisionEnter function, which is called when the object collides, to check whether it collides with the plane.
This test also assumes that a plane has been generated by recognizing the space, so you will probably need to feed spatial information based on the camera images into the test.

// Called when the falling object collides; assert that it hit the recognized plane.
void OnCollisionEnter(Collision collision)
{
    UnityEngine.Assertions.Assert.IsTrue(collision.gameObject.tag == planeTag);
}

[Synchronizing objects]

PUN (Photon Unity Networking) and MUN (Monobit Unity Networking) are popular ways to synchronize objects across the network for multiple clients in Unity.
In the Photon case, you attach the PhotonView component to the object you want to synchronize, then call the PhotonView.RPC() method to synchronize position, rotation, and property values across the network.
If you want to test multiple clients working together, you can set up a test RPC method, invoke it with the PhotonView.RPC() method, and check the resulting object positions with Assert.
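
As a minimal sketch (assuming PUN 2 and a hypothetical MoveObject RPC, not code from the original article), such a test hook might look like this:

// Minimal sketch assuming PUN 2: a hypothetical RPC that moves the shared object,
// so a test can invoke it on one client and assert the resulting position on the others.
using Photon.Pun;
using UnityEngine;

public class SyncTestTarget : MonoBehaviourPun
{
    [PunRPC]
    void MoveObject(Vector3 position)
    {
        transform.position = position;
    }

    public void MoveOnAllClients(Vector3 position)
    {
        // Propagate the move to every client; a test can then assert transform.position.
        photonView.RPC("MoveObject", RpcTarget.All, position);
    }
}

A PlayMode test could then call MoveOnAllClients() on one client and assert with Assert.AreApproximatelyEqual() that transform.position has been updated on the other clients.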

[Appearance of the object]

The geometric consistency of an object, such as its orientation, size, and position, can be tested quantitatively. However, the optical consistency of a texture, such as whether an object is in harmony with real space, is qualitative and is affected by the light-source environment in real space.
That said, methods such as inverse rendering, which estimate object shape, reflection characteristics, and light-source distribution from rendered images, are being actively studied. Such techniques could be applied to determine whether the estimated light source is close to the actual one, so that it can be judged whether the AR shadows and tones are appropriate.

Challenges in automating AR apps testing

The challenges of automating AR app testing include dealing with input data, optical consistency, and multi-device support.

The first challenge is that the AR experience is driven by the pose information coming from the device’s cameras and sensors. This input can be supplied, for example, by recording it beforehand and replaying the data during the test, or by creating a 3D model of the environment and walking through that model to generate the input artificially.

The second challenge, optical consistency, comes from the fact that the appearance of AR content depends on the surrounding environment, such as ambient light. One approach is to prepare input data under various conditions and to evaluate the rendered result by scoring it.
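
As a rough illustration of such scoring (my own hypothetical example, not part of the original approach), a rendered frame could be compared against a reference capture by mean absolute pixel difference:

// Minimal sketch (hypothetical): score a rendered frame against a reference capture
// by mean absolute pixel difference; a test can assert the score stays under a threshold.
float ScoreAgainstReference(Texture2D rendered, Texture2D reference)
{
    var a = rendered.GetPixels();
    var b = reference.GetPixels();
    float sum = 0f;
    for (int i = 0; i < a.Length; i++)
    {
        sum += Mathf.Abs(a[i].r - b[i].r) + Mathf.Abs(a[i].g - b[i].g) + Mathf.Abs(a[i].b - b[i].b);
    }
    return sum / (a.Length * 3f);
}

// Example assertion (the 0.05f threshold is an assumption):
// UnityEngine.Assertions.Assert.IsTrue(ScoreAgainstReference(rendered, reference) < 0.05f);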

The third challenge, multi-device support, comes from the fact that each device has different characteristics, such as spatial-recognition performance and viewing angle. In addition to input data captured on each device, you will need an emulator that can reproduce the behavior of each device.

Approaches to automated AR apps testing

As a first approach to automating AR app testing, I created a proof of concept (PoC) that records AR input data (camera images, recognized planes, raycasts, pose information, etc.), replays the data in Unity during testing, and tests AR behavior with the Unity Test Runner.

By replaying AR data, AR can be run in the editor using data collected from devices, without building and running on the devices themselves. By using a standard testing framework like the Unity Test Runner for AR testing and automating it, you can reduce the testing workload.
For example, this may be effective for regression-testing AR apps whose content is updated frequently, or for testing AR apps that handle many objects. In addition, multiple patterns of AR input data can be prepared and used to test against changes in the AR execution environment.

The following demonstration video shows an example of how Assert verifies that a tap-placement event occurs successfully. If the AR object does not appear as expected, it is detected as a failing (NG) test result, as shown in the example window on the right.

The AR data that can be retrieved through AR Foundation includes most of the data provided by AR libraries like ARKit / ARCore, such as device tracking, plane tracking, point clouds, face tracking, meshing, and raycasts. The AR data storage format is based on ARKitStreamer, published by Koki Ibukuro as a remote debugger for ARKit. I implemented the storage so that the packet data and camera images, which are normally transferred by connecting a device to a PC, can be stored on the device itself. The following is sample code.
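
The original sample is not reproduced here; the following minimal sketch only illustrates the recording idea under stated assumptions (AR Foundation’s frameReceived callback and a placeholder payload instead of the actual ARKitStreamer packet format):

// Minimal sketch (not the original PoC code): store a payload per camera frame on the
// device itself, using AR Foundation's frame callback.
using System.IO;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class ArDataRecorder : MonoBehaviour
{
    [SerializeField] ARCameraManager cameraManager;
    int frameIndex;

    void OnEnable() => cameraManager.frameReceived += OnFrameReceived;
    void OnDisable() => cameraManager.frameReceived -= OnFrameReceived;

    void OnFrameReceived(ARCameraFrameEventArgs args)
    {
        // In the real PoC, the ARKitStreamer packet (pose, planes, raycasts, camera image, ...)
        // would be serialized here; this sketch only writes a placeholder JSON payload.
        var path = Path.Combine(Application.persistentDataPath, $"frame_{frameIndex:D5}.json");
        File.WriteAllText(path, JsonUtility.ToJson(new FramePacket { timestamp = Time.time }));
        frameIndex++;
    }

    [System.Serializable]
    class FramePacket { public float timestamp; }
}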

AR data playback is implemented so that the saved packet data and camera images are read in the editor and passed to the Receiver of ARKitStreamer. Sample code is shown below. The Receiver feeds custom ARSubsystems that deliver the AR data to AR Foundation, so that the AR session can be reproduced in the editor.
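
Again, the original sample is not reproduced here; this sketch only shows the general shape, reading the recorded frames back in order and logging them where the actual ARKitStreamer Receiver call would go:

// Minimal sketch (not the original PoC code): read recorded frames in the editor and
// hand them, frame by frame, to the component that feeds the custom ARSubsystems.
using System.IO;
using System.Linq;
using UnityEngine;

public class ArDataPlayer : MonoBehaviour
{
    [SerializeField] string recordingFolder = "Recordings"; // hypothetical folder of saved frames
    string[] framePaths;
    int current;

    void Start()
    {
        framePaths = Directory.GetFiles(recordingFolder, "frame_*.json").OrderBy(p => p).ToArray();
    }

    void Update()
    {
        if (current >= framePaths.Length) return;
        var json = File.ReadAllText(framePaths[current++]);
        // In the real PoC this would be deserialized into an ARKitStreamer packet and
        // passed to its Receiver; here we only log the payload as a placeholder.
        Debug.Log($"Replaying frame: {json}");
    }
}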

The following is an example of code that tests the AR behavior described in section 4. In the example code, a UnityTest is used to check that events such as the instantiation of the AR object occur successfully, and to test whether the AR object has the correct size and orientation.
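
Since the original example is not reproduced here, the following is a minimal sketch of what such a PlayMode test might look like, reusing the assertions from section 4 (the tag name, wait time, and expected values are assumptions):

// Minimal sketch of a PlayMode test (not the original PoC code): wait for the replayed
// AR data to trigger object placement, then assert existence and scale.
using System.Collections;
using UnityEngine;
using UnityEngine.TestTools;

public class ArBehaviorTests
{
    [UnityTest]
    public IEnumerator PlacedObjectHasExpectedScale()
    {
        // Give the replayed AR session a few seconds to recognize a plane and place the object.
        yield return new WaitForSeconds(3f);

        var objs = GameObject.FindGameObjectsWithTag("arobject"); // hypothetical tag
        UnityEngine.Assertions.Assert.AreNotEqual(objs.Length, 0);

        var scale = objs[0].transform.localScale;
        UnityEngine.Assertions.Assert.AreApproximatelyEqual(1.2f, scale.x, 0.2f);
        UnityEngine.Assertions.Assert.AreApproximatelyEqual(1.2f, scale.y, 0.2f);
        UnityEngine.Assertions.Assert.AreApproximatelyEqual(1.2f, scale.z, 0.2f);
    }
}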

In addition, the PoC uses GitHub Actions as a CI environment to test AR behavior by running the Unity PlayMode tests, triggered by pushing code to the repository. The following is an example of a workflow definition.

name: Test project

on:
  pull_request: {}
  push: { branches: [master] }

env:
  UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}

jobs:
  testAllModes:
    name: Test in ${{ matrix.testMode }} on version ${{ matrix.unityVersion }}
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        projectPath:
          - ./
        unityVersion:
          - 2019.3.2f1
        testMode:
          - playmode
          - editmode
    steps:
      - uses: actions/checkout@v2
        with:
          lfs: true
      - uses: actions/cache@v1.1.0
        with:
          path: ${{ matrix.projectPath }}/Library
          key: Library-${{ matrix.projectPath }}
          restore-keys: |
            Library-
      - uses: webbertakken/unity-test-runner@v1.4
        id: tests
        with:
          projectPath: ${{ matrix.projectPath }}
          unityVersion: ${{ matrix.unityVersion }}
          testMode: ${{ matrix.testMode }}
          artifactsPath: ${{ matrix.testMode }}-artifacts
      - uses: actions/upload-artifact@v1
        with:
          name: Test results for ${{ matrix.testMode }}
          path: ${{ steps.tests.outputs.artifactsPath }}

Summary

In the field of AR development, manual steps such as building and running on a device and checking AR behavior by hand, together with the need to cover more test variations in a changing environment, become bottlenecks in testing. Making these steps programmable and automated is expected to reduce the cost of AR development.

In this article, I examined test patterns and approaches to automation in AR testing, and conducted a PoC of test automation by recording and replaying AR data. In the future, it will be necessary to study how to test AR on a larger scale, for example using the AR cloud, and how to upgrade test scenarios, for example by combining deep learning inference with AR testing.

XR Metaverse Researcher, R&D Engineer at NTT, Japan. Excited for the future of AR and what amazing people create.