Augmented Reality’s RoomPlan for iOS: Getting Started

RoomPlan is Apple’s newest addition to its Augmented Reality frameworks. It creates 3D models of a scanned room. Additionally, it recognizes and categorizes room-defining objects and surfaces.

You can use this information in your app to enrich the AR experience or export the model to other apps.

In this tutorial, you’ll learn everything you need to get started with RoomPlan. You’ll explore different use cases and see how easy it is to combine real, live objects with the AR world.

Getting Started

Download the materials by clicking the Download Materials button at the top or bottom of this tutorial.

You’ll need a device with a LiDAR sensor to follow this tutorial. Apple uses the LiDAR sensor to detect surfaces and objects in your room. Devices with a LiDAR sensor include the iPhone 12 Pro, iPhone 12 Pro Max, iPhone 13 Pro, iPhone 13 Pro Max, iPhone 14 Pro and iPhone 14 Pro Max.

A quick way to check if your device contains the LiDAR sensor is to look at the back of your device.

[Image: the back of an iPhone, with the LiDAR sensor below the camera]

This device has a black-filled circle, the LiDAR sensor, below the camera. Apple uses this sensor to measure distances between the camera and the surfaces or objects in the room. Hence, this device works for RoomPlan.

Now, open the starter project, then build and run on a device with a LiDAR sensor. It might be obvious, but it’s worth stating clearly: you can’t use the simulator at all for this project.

You’re greeted with this screen:

[Screenshot: the app’s main screen with three navigation options]

There are three different navigation options: Custom AR View, Room Capture View and Custom Capture Session. Tap the first one, titled Custom AR View, and the app shows you a new view that looks like this:

[Screenshot: the Custom AR View screen]

The screen is filled with a custom subclass of ARView, and there’s a button in the lower-left corner. Point your device at a horizontal plane and tap the button.

[Screenshot: a black block placed on a horizontal plane]

You’ll see two things:

  • A black block appears on the horizontal plane.
  • A second button appears with a trash icon. Tapping this button removes all blocks and hides the trash button.

Your First Custom AR View

Now back in Xcode, take a look at CustomARView.swift.

This is a subclass of ARView, which provides a simple interface for adding an AR experience to an iOS app.

Take a look at placeBlock(). It creates a new block by generating a model and applying a black material to it. Then it creates an anchor with the block and adds it to the ARView’s scene. The result looks like this:

[Screenshot: a black block placed on the floor]
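The starter project already contains this code. For reference, a minimal sketch of such a placeBlock() using RealityKit could look like this (the exact starter code may differ):

private func placeBlock() {
  // Generate a 10 cm cube and give it a simple black, non-metallic material.
  let block = MeshResource.generateBox(size: 0.1)
  let material = SimpleMaterial(color: .black, isMetallic: false)
  let entity = ModelEntity(mesh: block, materials: [material])

  // Anchor the block to the nearest horizontal plane and add it to the scene.
  let anchor = AnchorEntity(plane: .horizontal)
  anchor.addChild(entity)
  scene.addAnchor(anchor)
}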

Of course, putting digital blocks on the floor is a big hazard; other people could trip over them. :]

That’s why you’ll use the RoomPlan framework to learn more about the scanned room. With more context, you can place blocks on tables instead of on any horizontal plane.

Now look back at the main screen of the app. The navigation options Room Capture View and Custom Capture Session don’t work yet. In this tutorial, you’ll add the missing pieces and learn about the two different ways to use RoomPlan.

Scanning a Room

In the WWDC video Create parametric 3D room scans with RoomPlan, Apple differentiates between two ways of using RoomPlan: the scanning experience API and the data API:

  • Scanning Experience API: provides an out-of-the-box experience. It comes in the form of a specialized UIView subclass called RoomCaptureView.
  • Data API: allows for more customization but also requires more work to integrate. It uses RoomCaptureSession to execute the scan, process the data and export the final result.

You’ll now learn how both of these work. First up is the scanning experience API.

Using the Scanning Experience API

Using the scanning experience API, you can integrate a remarkable scanning experience into your apps. It uses RoomCaptureView, which consists of different elements, as in the screenshot below:

[Screenshot: RoomCaptureView showing the camera feed with animated outlines]

In the background, you can see the camera feed. Animated outlines highlight surfaces such as walls, doors, and room-defining objects like beds and tables.

Look at the following screenshot:

[Screenshot: RoomCaptureView with the instruction text box and the generated 3D model]

In the upper part of the view, a text box with instructions helps you to get the best possible scanning result. Finally, the lower part of the view shows the generated 3D model. RoomPlan generates and refines this 3D model in real time while you scan the room.

All three elements together, the camera view with animated outlines, the text box with instructions and the 3D model, make it easy to scan a room. Although this seems pretty extensive, Apple describes it as an out-of-the-box scanning experience.

Using RoomCaptureView to Capture a Room

Now you’ll learn how to use RoomCaptureView. Open RoomCaptureViewController.swift. You’ll find RoomCaptureViewController and RoomCaptureViewRepresentable, which makes it possible to use the view controller in SwiftUI.

RoomCaptureViewController has a member called roomCaptureView, which is of type RoomCaptureView. viewDidLoad adds roomCaptureView as a subview of the view controller and constrains it to fill the entire view. It also sets up bindings to the viewModel.
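You don’t need to write this setup yourself. As a rough sketch, assuming standard Auto Layout code (the actual starter code may differ), viewDidLoad looks something like this:

override func viewDidLoad() {
  super.viewDidLoad()

  // Add the capture view and pin it to all edges of the controller's view.
  view.addSubview(roomCaptureView)
  roomCaptureView.translatesAutoresizingMaskIntoConstraints = false
  NSLayoutConstraint.activate([
    roomCaptureView.topAnchor.constraint(equalTo: view.topAnchor),
    roomCaptureView.bottomAnchor.constraint(equalTo: view.bottomAnchor),
    roomCaptureView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
    roomCaptureView.trailingAnchor.constraint(equalTo: view.trailingAnchor)
  ])

  // The bindings to the viewModel are set up here as well (omitted).
}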

The first step is to start the session. To do so, add the following to startSession:

let sessionConfig = RoomCaptureSession.Configuration()
roomCaptureView?.captureSession.run(configuration: sessionConfig)

Here you create a new configuration for the scanning session without any customization. You then start a room-capture session with this configuration.
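The configuration is deliberately simple. One option it does expose is isCoachingEnabled, which controls whether the framework coaches the user during the scan. A small sketch, in case you want to provide your own guidance UI:

var sessionConfig = RoomCaptureSession.Configuration()
// Disable the built-in coaching hints and show your own instead.
sessionConfig.isCoachingEnabled = false
roomCaptureView?.captureSession.run(configuration: sessionConfig)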

Build and run, then tap Room Capture View. Move your device around your room, and you’ll see the 3D model generated. It’s truly an out-of-the-box scanning experience, exactly like Apple promised.


Working with the Scanning Result

In this section, you’ll learn how to use the 3D model that the scanning experience API captures. You’ll conform RoomCaptureViewController to the protocol RoomCaptureSessionDelegate. By doing so, the view controller gets informed about updates to the scan. This delegate protocol makes it possible to react to events in the scanning process, such as the start or end of a room-capture session. Other methods inform you about new surfaces and objects in the scanning result. For now, you’re only interested in general updates to the room.

Continue working in RoomCaptureViewController.swift. Start by adding this new property below roomCaptureView:

private var capturedRoom: CapturedRoom?

A CapturedRoom represents the room that you’re scanning. You’ll explore it in more detail in a moment, but for now, continue by adding this extension above RoomCaptureViewRepresentable:

extension RoomCaptureViewController: RoomCaptureSessionDelegate {
  func captureSession(
    _ session: RoomCaptureSession,
    didUpdate room: CapturedRoom
  ) {
    capturedRoom = room
    DispatchQueue.main.async {
      self.viewModel.canExport = true
    }
  }
}

This conforms RoomCaptureViewController to the RoomCaptureSessionDelegate protocol and implements the delegate method that’s called whenever the room being captured is updated. Your implementation stores the updated room in the capturedRoom property. It also informs the viewModel that exporting the 3D model of the scanned room is possible.

For RoomCaptureViewController to receive these callbacks, you also need to set it as the capture session’s delegate. Add this line to the bottom of viewDidLoad:

roomCaptureView.captureSession.delegate = self

Build and run. Tap the navigation option Room Capture View and start scanning your room. A new button appears as soon as a model is available for exporting. This button doesn’t have any functionality yet; you’ll learn how to export the model next.


Taking a Look at a Scan Result

Before exporting the model, take a look at what the result of a scan contains.

Scanning a room with RoomCaptureView creates a CapturedRoom. This object encapsulates various information about the room. It contains two different types of room-defining elements: Surface and Object.

Surface is a 2D area recognized in the scanned room. A surface can be:

  • A wall
  • An opening
  • A window
  • An open or closed door

An Object is a 3D area. There are a lot of object categories:

  • Storage area
  • Refrigerator
  • Stove
  • Bed
  • Sink
  • Washer or dryer
  • Toilet
  • Bathtub
  • Oven
  • Dishwasher
  • Table
  • Sofa
  • Chair
  • Fireplace
  • Television
  • Stairs

That’s a pretty extensive list, right? Additionally, both surfaces and objects have a confidence value, which can be low, medium or high. They also have a bounding box called dimensions, and a matrix called transform that defines their position and orientation.
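To get a feel for this data, you could dump a scan result to the console. Here’s a minimal sketch; logRoom is a hypothetical helper, not part of the starter project:

private func logRoom(_ room: CapturedRoom) {
  // Each surface type lives in its own array; objects share a single array.
  print("Walls: \(room.walls.count), doors: \(room.doors.count)")
  print("Windows: \(room.windows.count), openings: \(room.openings.count)")

  for object in room.objects {
    // Every object carries a category, a confidence and a bounding box.
    print("\(object.category): confidence \(object.confidence), size \(object.dimensions)")
  }
}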

How Can We Access Room Data?

You may wonder what you can do with the resulting room data. RoomPlan makes it easy to export the detailed, complex scanning result as a USDZ file.

USDZ is an addition to Pixar’s Universal Scene Description file format, USD for short. This file format describes 3D scenes and allows users to work on them collaboratively across different 3D programs. USDZ is a package file combining USD files, images, textures and audio files.

To learn more about USD and USDZ, check out Pixar’s Introduction to USD and Apple’s documentation about USDZ.

Once you export your room model as a USDZ file, you’ll be able to open, view and edit the file in other 3D applications like Apple’s AR Quick Look.

Exporting your Room Data

Now it’s time for you to export your room model. All you need to do is call export(to:exportOptions:) on the captured room.

Still in RoomCaptureViewController.swift, replace the empty body of export with:

do {
  // 1
  try capturedRoom?.export(to: viewModel.exportUrl)
} catch {
  // 2
  print("Error exporting usdz scan: \(error)")
  return
}
// 3
viewModel.showShareSheet = true

Here’s what’s happening:

  1. Exporting the model is as easy as calling export(to:exportOptions:) on the captured room. You can export the model either as polygons or as a mesh. You don’t define custom export options here, so it’s exported as a mesh by default. You’ll see how to pass explicit options in the sketch after this list.
  2. Like any other file operation, exporting the model can fail. In a real app, you would try to handle the error more gracefully and show some information to the user. But in this example, printing the error to the console is fine.
  3. Finally, you inform the view model that the app needs to show a share sheet to allow the user to select where to send the exported USDZ file.
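As mentioned in step 1, you can also pass export options explicitly. For example, inside the same do block, this variant would export polygons instead (a sketch, assuming the same viewModel.exportUrl):

// Export parametric polygons instead of the default mesh.
try capturedRoom?.export(
  to: viewModel.exportUrl,
  exportOptions: .parametric
)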

Build and run. Scan your room, and you’ll see the export button again. Tap it, and this time you’ll see a share sheet allowing you to export the 3D model of your room.


Now that you’re an expert in using the scanning experience API in the form of RoomCaptureView, it’s time to look at the more advanced data API.

Advanced Scanning With the Data API

RoomCaptureView is pretty impressive. But unfortunately, it doesn’t solve your problem of potentially dangerous boxes lying around on the floor. :] For that, you need more customization options. That’s where the second way of using RoomPlan comes into play: the data API.

Open CustomCaptureView.swift. Like RoomCaptureViewController.swift, this file already contains a bunch of code. CustomCaptureView is a custom ARView, different from the CustomARView you saw earlier. You’ll use RoomPlan to add context to the scene. Important parts are missing, and you’ll create those missing pieces in this section of the tutorial.

Again, the first step is to start the room capture session.

Start by adding these two properties below viewModel:

private let captureSession = RoomCaptureSession()
private var capturedRoom: CapturedRoom?

captureSession is the session used for scanning the room, and capturedRoom stores the result.

Next, add this line to the body of startSession:

captureSession.run(configuration: RoomCaptureSession.Configuration())

Just like before, this starts the session with a default configuration.

Setting up Delegate Callbacks

The next step is to prepare placing blocks whenever an updated room model is available. To do so, add these two lines of code at the beginning of setup:

captureSession.delegate = self
self.session = captureSession.arSession

The first line makes CustomCaptureView act as the captureSession’s delegate. The second connects the view to the capture session’s underlying ARSession, so the view renders what the session sees. Now CustomCaptureView needs to conform to the delegate protocol. Add the following code above CustomCaptureViewRepresentable:

extension CustomCaptureView: RoomCaptureSessionDelegate {
  // 1
  func captureSession(_ session: RoomCaptureSession, didUpdate room: CapturedRoom) {
    // 2
    capturedRoom = room
    // 3
    DispatchQueue.main.async {
      self.viewModel.canPlaceBlock = room.objects.contains {
        $0.category == .table
      }
    }
  }
}

This is what’s going on:

  1. You implement the delegate method to get updates on the scanned room just like earlier.
  2. You store the new room in the property capturedRoom.
  3. If there are tables in the list of objects of the updated room, you change the view model’s property canPlaceBlock. This makes the place block button appear.

Build and run. This time tap the navigation option Custom Capture Session at the bottom of the list. Once you start scanning a room and the session recognizes a table, the place block button appears. It doesn’t do anything yet; that’s what you’ll change next.


Other Capture Session Delegate Methods

Again, you’re only using the delegate method captureSession(_:didUpdate:) of RoomCaptureSessionDelegate. That’s because it informs you of all updates to the captured room. But there are more methods available that provide finer-grained control.

For updates on surfaces and objects, you can implement three different methods:

  1. captureSession(_:didAdd:): Notifies the delegate about newly added surfaces and objects.
  2. captureSession(_:didChange:): Informs the delegate about changes to the dimensions, position or orientation of surfaces and objects.
  3. captureSession(_:didRemove:): Notifies the delegate when the session removes a surface or object.

The next delegate method is captureSession(_:didProvide:). RoomCaptureSession calls this one whenever new instructions and recommendations are available to show the user. These instructions are part of the enum RoomCaptureSession.Instruction and contain hints like moveCloseToWall and turnOnLight. You can implement this method to show your own instruction view, similar to the one RoomCaptureView shows.

Finally, there are captureSession(_:didStartWith:) and captureSession(_:didEndWith:error:) delegate methods. They notify you about the start and end of a scan.

All of these delegate methods have an empty default implementation, so they are optional.
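As an example, here’s a hedged sketch of captureSession(_:didProvide:) that maps the instructions to user-facing text. Note that instructionLabel is a hypothetical label, not part of the starter project:

func captureSession(
  _ session: RoomCaptureSession,
  didProvide instruction: RoomCaptureSession.Instruction
) {
  // Translate the framework's hints into text for your own instruction view.
  let text: String
  switch instruction {
  case .moveCloseToWall: text = "Move closer to the wall."
  case .moveAwayFromWall: text = "Move away from the wall."
  case .slowDown: text = "Slow down."
  case .turnOnLight: text = "Turn on a light."
  case .lowTexture: text = "This area is hard to scan."
  case .normal: text = ""
  @unknown default: text = ""
  }
  DispatchQueue.main.async {
    self.instructionLabel.text = text
  }
}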

Trying to Place an Object on the Table

Whenever a user taps the button to place a block, it sends the action placeBlock via ARViewModel to CustomCaptureView. This calls placeBlockOnTables, which doesn’t do anything at the moment. You’ll change this now.

Replace the empty body of placeBlockOnTables() with the following:

// 1
guard let capturedRoom else { return }
// 2
let tables = capturedRoom.objects.filter { $0.category == .table }
// 3
for table in tables {
  placeBlock(onTable: table)
}

Here’s what’s happening:

  1. First, you make sure that there’s a scanned room and that it’s possible to access it.
  2. Unlike surfaces, where each type of surface has its own list, a room stores all objects in one list. Here you find all tables in the list of objects by looking at each object category.
  3. For each table recognized in the scanned room, you call placeBlock(onTable:).

Placing a Block on the Table

The compiler complains that placeBlock(onTable:) is missing. Fix that by adding this method below placeBlockOnTables:

private func placeBlock(onTable table: CapturedRoom.Object) {
  // 1
  let block = MeshResource.generateBox(size: 0.1)
  let material = SimpleMaterial(color: .black, isMetallic: false)
  let entity = ModelEntity(mesh: block, materials: [material])

  // 2
  let anchor = AnchorEntity()
  anchor.transform = Transform(matrix: table.transform)
  anchor.addChild(entity)

  // 3
  scene.addAnchor(anchor)

  // 4
  DispatchQueue.main.async {
    self.viewModel.canDeleteBlocks = true
  }
}

Taking a look at each step:

  1. You create a box and define its material. In this example, you set its size to 0.1 meters and give it a simple black coloring.
  2. You create an AnchorEntity to add a model to the scene. You place it at the table’s position by using table.transform. This property contains the table’s position and orientation in the scene.
  3. Before the scene can show the block, you need to add its anchor to the scene.
  4. You change the view model’s property canDeleteBlocks. This shows a button to remove all blocks.

Finally, add this code as the implementation of removeAllBlocks:

// 1
scene.anchors.removeAll()
// 2
DispatchQueue.main.async {
  self.viewModel.canDeleteBlocks = false
}

This is what the code does:

  1. Remove all anchors in the scene. This removes all blocks currently placed on tables.
  2. Since there are no blocks left, you change the view model’s property canDeleteBlocks. This hides the delete button again.

Build and run. Tap Custom Capture Session and start scanning your room. You need a table in the room you’re scanning for the place block button to appear. Continue scanning until the button appears. Now point your phone at a table and tap the button. You’ll see a screen similar to this:

[Screenshot: a black block floating in mid-air underneath a table]

A block appears, but it’s not where it’s supposed to be. Instead of lying on the table, it floats in mid-air underneath the tabletop. That’s not how a block would behave in real life, is it?

Something went wrong, but don’t worry, you’ll fix that next.

Understanding Matrix Operations

So, what went wrong? The faulty line is this one:

anchor.transform = Transform(matrix: table.transform)

An AnchorEntity places an object in the AR scene. In the code above, you set its transform property, which contains information about the scale, rotation and translation of an entity. You use the table’s transform property for this, and that places the block at the center of the table.

The table’s bounding box includes both the legs and the tabletop. So when you place the block at the table’s position, it ends up in the middle of this bounding box. Hence, the block appears underneath the tabletop, between the legs.

You can probably already think of the solution: you need to move the block up a little bit. Half the height of the table, to be precise.

But how, you may wonder?

You can think of a Transform as a 4×4 matrix, so 16 values in 4 rows and 4 columns. The easiest way to change a matrix is to define another matrix that does the operation and multiply the two. You can do different operations like scaling, translating or rotating. The type of operation depends on which values you set in this new matrix.

You need to create a translation matrix to move the block up by half the table height. In this matrix, the last column defines the movement, and each row corresponds to a coordinate:

1  0  0  tx
0  1  0  ty
0  0  1  tz
0  0  0  1

tx is the movement in the x-direction, ty in y and tz in z. So, if you want to move an object by 5 in the y-direction, you need to multiply it with a matrix like this:

1  0  0  0
0  1  0  5
0  0  1  0
0  0  0  1

To learn more about matrices and how to apply changes, check out Apple’s documentation Working with Matrices.
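To convince yourself, here’s a small, standalone simd sketch. Remember that simd_float4x4’s initializer takes columns, so the translation values land in the fourth column:

import simd

// Columns, not rows: the fourth column holds the translation (tx, ty, tz, 1).
let moveUpBy5 = simd_float4x4(
  SIMD4<Float>(1, 0, 0, 0),
  SIMD4<Float>(0, 1, 0, 0),
  SIMD4<Float>(0, 0, 1, 0),
  SIMD4<Float>(0, 5, 0, 1)
)

let point = SIMD4<Float>(2, 3, 4, 1)
print(moveUpBy5 * point) // SIMD4<Float>(2.0, 8.0, 4.0, 1.0): y moved from 3 to 8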

Now it’s time to apply your new knowledge!

Actually Placing a Block on the Table!

OK, time to place the block on the table. Open CustomCaptureView.swift and find the following code:

let anchor = AnchorEntity()
anchor.transform = Transform(matrix: table.transform)
anchor.addChild(entity)

Replace it with this code:

// 1
let tableMatrix = table.transform
let tableHeight = table.dimensions.y

// 2
let translation = simd_float4x4(
  SIMD4(1, 0, 0, 0),
  SIMD4(0, 1, 0, 0),
  SIMD4(0, 0, 1, 0),
  SIMD4(0, (tableHeight / 2), 0, 1)
)

// 3
let boxMatrix = translation * tableMatrix

// 4
let anchor = AnchorEntity()
anchor.transform = Transform(matrix: boxMatrix)
anchor.addChild(entity)

This might look complicated at first, so inspect it step-by-step:

  1. transform is the position of the table, and dimensions is a bounding box around it. To place a block on the table, you need both the table’s position and the top of its bounding box. You get the height from the y value of dimensions.
  2. Before, you placed the block at the center of the table. This time, you use the matrix defined above to do a matrix multiplication, which moves the position of the box up in the scene. It’s important to note that each line in this initializer represents a column, not a row. So although it looks like (tableHeight / 2) is in row 4, column 2, it’s actually in row 2, column 4. That’s where you define the y-translation.
  3. You multiply this new translation matrix with the table’s position.
  4. Finally, you create an AnchorEntity. But this time, with the matrix that’s the result of the translation.

Build and run. Tap Custom Capture Session, scan your room, and once the place block button appears, point your device at a table and tap the button.


This time, the block sits on top of the table. Great work! Now nobody will trip over your digital blocks! :]

Where to Go From Here?

You can download the completed version of the project using the Download Materials button at the top or bottom of this tutorial.

Augmented Reality is an increasingly important topic. Apple continues to extend and improve its developer tools, which allows developers to create astonishing AR experiences. RoomPlan integrates well with other AR frameworks like ARKit and RealityKit, and it makes it easy to enrich AR applications with real-world information. You can use the location and dimensions of tables and other real-world objects in your app.

Now it’s up to you to explore the possibilities and create even more immersive AR experiences.

If you have any questions or comments, please join the forum discussion below!

