WebXR with Babylon.js
In a previous blog post I introduced WebXR using Three.js. Three.js is the most widely used 3D web library, with Babylon.js in second place. Babylon.js is more "batteries included" and ships features like physics, which makes its bundle larger: around 3.24MB versus 603K for Three.js. In that sense Babylon.js is closer to a game engine like PlayCanvas. Regl is another popular WebGL library; it's fairly minimal, takes a functional, stateless approach, and has no WebXR features, so you have to work with the native WebXR API directly. The WebXR documentation for Babylon.js is a lot more extensive than the equivalent for Three.js.
To initialize WebXR in Babylon you just have to add this line:
const xr = await scene.createDefaultXRExperienceAsync();
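For context, that call assumes you already have an engine and a scene set up. A minimal sketch might look something like this (the canvas id, camera, light and box are my own assumptions, not part of the example project):
import { Engine, Scene, FreeCamera, HemisphericLight, MeshBuilder, Vector3 } from '@babylonjs/core';

const canvas = document.getElementById('renderCanvas'); // assumed canvas element
const engine = new Engine(canvas, true);
const scene = new Scene(engine);

// Something to look at before (and after) entering XR
const camera = new FreeCamera('camera', new Vector3(0, 1.6, -3), scene);
camera.attachControl(canvas, true);
new HemisphericLight('light', new Vector3(0, 1, 0), scene);
MeshBuilder.CreateBox('box', { size: 0.5 }, scene);

// Adds the enter-XR button and creates the default immersive-vr experience
const xr = await scene.createDefaultXRExperienceAsync();

engine.runRenderLoop(() => scene.render());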
It defaults to trying to initialize an immersive-vr session, but if you want to initialize an AR session instead you would do:
const xr = await scene.createDefaultXRExperienceAsync({ uiOptions: { sessionMode: 'immersive-ar' }});
If the browser supports the session you requested, it shows a pair of glasses in the bottom right that the user can click to start the session. If the browser doesn't support the session type, the glasses don't appear and no message is displayed; you have to handle that yourself. You can check whether a session type is supported with:
const isSupported = await xr.baseExperience.sessionManager.isSessionSupportedAsync('immersive-vr');
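A sketch of how you might surface that to the user yourself (the message element is just an illustration):
if (!isSupported) {
  // Babylon.js won't show anything, so display your own message
  const message = document.createElement('p');
  message.textContent = 'This browser does not support immersive-vr sessions';
  document.body.appendChild(message);
}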
I've created VR and AR examples below. They are very similar to the ones I created for Three.js. You can download the code for the VR example here and the AR example here.
VR
The following code adds an event handler that fires when the trigger button of a controller is pressed or released:
this.controllers = [];
this.xr.input.onControllerAddedObservable.add(controller => {
  controller.userData = { trigger: { pressed: false, pressedPrev: false } };
  this.controllers.push(controller);
  controller.onMotionControllerInitObservable.add(motionController => {
    const triggerComponent = motionController.getComponent('xr-standard-trigger');
    triggerComponent.onButtonStateChangedObservable.add(() => {
      controller.userData.trigger.pressed = triggerComponent.pressed;
    });
  });
});
You can get the IDs of a motion controller's components with:
const ids = motionController.getComponentIds();
Your VR headset should have at least xr-standard-trigger and xr-standard-squeeze, though Babylon doesn't seem to work properly with the WebXR emulator: pressing the squeeze button triggers the trigger event handler and pressing the select button has no effect. You can also get the first component by type instead of by id with the getComponentOfType(type) method, where type can be trigger, squeeze, touchpad, thumbstick or button.
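As a rough sketch, hooking up the squeeze component by type rather than by id might look like this (the logging is just for illustration):
controller.onMotionControllerInitObservable.add(motionController => {
  // List every component id this controller exposes
  console.log(motionController.getComponentIds());

  // Grab the first component of a given type instead of looking it up by id
  const squeezeComponent = motionController.getComponentOfType('squeeze');
  if (squeezeComponent) {
    squeezeComponent.onButtonStateChangedObservable.add(() => {
      console.log('squeeze pressed:', squeezeComponent.pressed);
    });
  }
});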
In the render loop I call the handleController() method for each controller:
engine.runRenderLoop(() => {
  if (this.controllers) {
    this.controllers.forEach(controller => {
      this.handleController(controller);
    });
  }
  if (this.scene) {
    this.scene.render();
  }
});
handleController(controller) {
  if (controller.userData.trigger.pressed) {
    if (!controller.userData.trigger.pressedPrev) {
      // Trigger pressed
      const ray = this.getControllerRay(controller);
      const hit = this.scene.pickWithRay(ray);
      if (hit && hit.hit) {
        this.selectedMesh = hit.pickedMesh;
        this.selectedMesh.material.diffuseColor = objectSelectedColor;
        this.selectedMeshDistance = Vector3.Distance(this.selectedMesh.position, controller.pointer.position);
      }
    } else if (this.selectedMesh) {
      // Move selected object so it's always the same distance from controller
      const ray = this.getControllerRay(controller);
      const moveVector = ray.direction.scale(this.selectedMeshDistance);
      this.selectedMesh.position.copyFrom(controller.pointer.position.add(moveVector));
    }
  } else if (controller.userData.trigger.pressedPrev) {
    // Trigger released
    if (this.selectedMesh != null) {
      this.selectedMesh.material.diffuseColor = objectUnselectedColor;
      this.selectedMesh = null;
    }
  }
  controller.userData.trigger.pressedPrev = controller.userData.trigger.pressed;
}

getControllerRay(controller) {
  const ray = new Ray(new Vector3(), new Vector3());
  controller.getWorldPointerRayToRef(ray);
  return ray;
}
The getControllerRay() method fills a ray with the controller's pointer origin and direction, and I use that ray to pick a mesh in the scene with scene.pickWithRay(). If the ray hits a mesh its color is changed, and when you move the controller the selected mesh moves with it. When the trigger button is released the mesh is returned to its original color.
You can view the complete code of this example here.
AR
AR is set up in a similar way to VR, but you use immersive-ar instead of immersive-vr:
const xr = await this.scene.createDefaultXRExperienceAsync({ uiOptions: { sessionMode: 'immersive-ar' }});
Also make sure you don't add a background color to the body in CSS, otherwise the passthrough video won't show. Enabling hit-testing is done a little differently to Three.js: you enable the feature using the features manager of WebXRExperienceHelper. The WebXRExperienceHelper is available through the baseExperience property of the WebXRDefaultExperience returned from scene.createDefaultXRExperienceAsync().
const hitTest = xr.baseExperience.featuresManager.enableFeature(WebXRHitTest, 'latest', { entityTypes: ['plane'] }, true, true);
Only the first argument of enableFeature() is required. The third argument holds options for the feature, in this case of type IWebXRHitTestOptions. One of the options is the entity types you want to detect: an array of XRHitTestTrackableType, which is a WebXR type rather than a Babylon type, and can contain point, plane or mesh. The last argument is whether the feature is required; if it's set to true the session will fail to initialize when the feature is not available. To be notified when something has been detected you can do this:
hitTest.onHitTestResultObservable.add(results => {
  if (results.length) {
    this.hitTestResult = results[0];
    this.hitTestResult.transformationMatrix.decompose(undefined, this.reticle.rotationQuaternion, this.reticle.position);
    this.reticle.isVisible = true;
  } else {
    this.reticle.isVisible = false;
    this.hitTestResult = null;
  }
});
results is an array of IWebXRHitResult, which contains the pose of each hit. In the above code I rotate and position the reticle to match the first hit result.
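For the decompose() call to apply the rotation, the reticle needs a rotationQuaternion instance; a minimal reticle might be set up something like this (the torus shape and dimensions are my own choices):
import { MeshBuilder, Quaternion } from '@babylonjs/core';

// A flat ring that gets placed on whatever surface the hit test finds
this.reticle = MeshBuilder.CreateTorus('reticle', { diameter: 0.15, thickness: 0.02, tessellation: 32 }, this.scene);
// decompose() only writes the rotation if rotationQuaternion is set
this.reticle.rotationQuaternion = new Quaternion();
this.reticle.isVisible = false;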
Unlike Three.js, Babylon.js doesn't make use of controllers for tap events; instead you add event handlers to the scene. In the following code I position the box at the center of the reticle when you tap the screen.
this.scene.onPointerDown = () => {
  if (this.hitTestResult) {
    this.hitTestResult.transformationMatrix.decompose(undefined, this.box.rotationQuaternion, this.box.position);
    this.box.position.y += boxSize / 2;
    this.box.isVisible = true;
  }
}
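The box itself could be created along these lines (the size and material are assumptions; as with the reticle, it needs a rotationQuaternion for decompose() to rotate it):
import { MeshBuilder, Quaternion, StandardMaterial } from '@babylonjs/core';

const boxSize = 0.1; // assumed size in metres
this.box = MeshBuilder.CreateBox('box', { size: boxSize }, this.scene);
this.box.material = new StandardMaterial('boxMaterial', this.scene);
this.box.rotationQuaternion = new Quaternion();
this.box.isVisible = false;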
You can view the complete code of this example here.
Conclusion
Babylon.js's API is different to Three.js's, so knowing one doesn't automatically mean you will know the other. One reason for choosing one over the other is if it has a feature that the other doesn't; I haven't worked with either enough to come across an exclusive feature. Babylon.js does have physics built in, but you can always use a physics library like Cannon.js with Three.js, and if you don't need physics you end up downloading that code anyway with Babylon.js.