
Getting started with WebXR and Three.js

Jan 14, 2022

WebXR is VR and AR for the web. It started off as WebVR, but when AR began to be implemented natively on smartphones the draft standard was scrapped and work started from scratch on WebXR. WebXR is supported on Android devices running Chrome 79 or above. The VR mode of WebXR is supported in the Meta Quest browser. WebXR is not currently supported in iOS's Mobile Safari, though there is limited support in Mozilla's WebXR Viewer. It is in development in WebKit, so hopefully it will eventually make its way to Mobile Safari.

Ideally you would get a VR headset or a recent Android smartphone to develop for web VR/AR, but you can view WebXR content in desktop browsers with the help of a WebXR emulator. Currently WebXR is supported by Chrome, Edge and Opera, and there is a WebXR emulator extension for Chrome and Firefox. Bring up the developer tools and click on the WebXR tab. You will need to select the one AR device (Samsung Galaxy S8+) if you want to view AR content; the other devices are for VR. To access native WebXR implementations you generally need to serve your page from http://localhost or over https. It may work without a secure context in some clients, such as a desktop browser, so it's best to try it first. To get a secure context for local development you can use ngrok. For browsers that only support the old WebVR standard there is a polyfill that provides the VR part of WebXR.
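Before relying on an emulator, you can check from JavaScript which session modes the browser actually supports with navigator.xr.isSessionSupported(). A minimal sketch; the detectXRSupport helper name is my own, and passing the xr object in is just to make it easy to exercise outside a browser:

```javascript
// Sketch: query WebXR support. `detectXRSupport` is a hypothetical
// helper; `navigator.xr` and `isSessionSupported` are the real API.
async function detectXRSupport(xr) {
  if (!xr) {
    // The browser has no WebXR implementation at all
    return { vr: false, ar: false };
  }
  const [vr, ar] = await Promise.all([
    xr.isSessionSupported('immersive-vr'),
    xr.isSessionSupported('immersive-ar'),
  ]);
  return { vr, ar };
}
```

In a page you would call detectXRSupport(navigator.xr) and only show a VR or AR entry button for the supported modes.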

There are alternatives to WebXR for viewing models in AR. On iOS the built-in option is AR Quick Look. You link to a USDZ model like this:

<a rel="ar" href="my-model.usdz">
  <img src="my-model-thumbnail.jpg">
</a>

The equivalent on Android is Model Viewer which is a web component that also works on iOS:

<script type="module" src=""></script>

<model-viewer src=""
              alt="A 3D model of an astronaut">
</model-viewer>

Generally in 3D for the web, glTF is the model format used (or GLB for the binary version), but Apple uses the USDZ format, a zip-packaged version of USD, a format developed by Pixar.

There are AR libraries that don't depend on the browser having WebXR support. AR.js can anchor 3D content to images, markers and latitude/longitude. 8th Wall is a commercial library that supports anchoring to planes and faces like the native AR SDKs.

WebXR doesn't provide any rendering, so you will generally use it in conjunction with a WebGL library like Three.js, Babylon.js or A-Frame. You can interact with the WebXR API directly, but these libraries do a lot of that work for you. For example, in Three.js you basically only need to add the following to a normal Three.js scene to get something working:

import { VRButton } from 'three/examples/jsm/webxr/VRButton';

renderer.xr.enabled = true;
document.body.appendChild(VRButton.createButton(renderer));
// Use setAnimationLoop instead of requestAnimationFrame so rendering
// is driven by the XR device while a session is active
renderer.setAnimationLoop(render);

For AR you would replace VRButton with ARButton and set alpha to true when creating the renderer. The reason you need a button is that an XR session can only be started in response to a user interaction with the page.
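The difference between the two setups can be captured in a small sketch; rendererOptions is a hypothetical helper of my own, and only the alpha flag is XR-specific:

```javascript
// Sketch: renderer options per mode. `rendererOptions` is a
// hypothetical helper, not Three.js API. AR composites the scene
// over the device camera feed, so the canvas must be transparent
// (alpha: true); VR renders a fully opaque scene.
function rendererOptions(mode) {
  return { antialias: true, alpha: mode === 'ar' };
}

// e.g. new THREE.WebGLRenderer(rendererOptions('ar'))
```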


Usually a VR headset has two controllers, one for each hand. In Three.js you get a controller from the WebXRManager: the getController() method returns a Group representing the direction the controller is pointing, and getControllerGrip() returns a Group representing the controller's grip. You then add a model of the controller to the grip group using XRControllerModelFactory:

import { XRControllerModelFactory } from 'three/examples/jsm/webxr/XRControllerModelFactory';


buildControllers() {
  const controllerModelFactory = new XRControllerModelFactory();

  // A line along -z, the direction the controller points
  const geometry = new BufferGeometry().setFromPoints([
    new Vector3(0, 0, 0),
    new Vector3(0, 0, -1)
  ]);

  const line = new Line(geometry);
  line.scale.z = 10;

  const controllers = [];

  for (let i = 0; i < 2; i++) {
    const controller = this.renderer.xr.getController(i);
    controller.add(line.clone());
    controller.userData.selectPressed = false;
    controller.userData.selectPressedPrev = false;
    this.scene.add(controller);
    controllers.push(controller);

    const grip = this.renderer.xr.getControllerGrip(i);
    grip.add(controllerModelFactory.createControllerModel(grip));
    this.scene.add(grip);
  }

  return controllers;
}

Here I'm adding a line to each controller so you can see which direction it's pointing. Controllers are presumed to have at least two buttons: a primary action button and a primary squeeze action button. The event names for the primary action are selectstart and selectend; the event names for the primary squeeze action are squeezestart and squeezeend. Below I've added two event listeners for when the primary action button is pressed and released. this in the event handlers refers to the controller, and state is stored on the userData property so it won't clash with built-in properties.

initVR() {
  this.renderer.xr.enabled = true;
  this.controllers = this.buildControllers();

  function onSelectStart() {
    // this refers to the controller
    this.children[0].scale.z = 10;
    this.userData.selectPressed = true;
  }

  function onSelectEnd() {
    // this refers to the controller
    this.children[0].scale.z = 0;
    this.userData.selectPressed = false;
  }

  this.controllers.forEach(controller => {
    controller.addEventListener('selectstart', onSelectStart);
    controller.addEventListener('selectend', onSelectEnd);
  });
}
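The selectPressed/selectPressedPrev pair is doing simple per-frame edge detection. Factored out as a pure function (the name and return values are my own, not Three.js API), the logic looks like this:

```javascript
// Sketch: classify the select button's state for this frame from the
// current and previous pressed flags. Hypothetical helper mirroring
// the userData flags used in the article, not part of Three.js.
function selectEdge(pressed, pressedPrev) {
  if (pressed && !pressedPrev) return 'just-pressed';   // act once on press
  if (!pressed && pressedPrev) return 'just-released';  // act once on release
  return pressed ? 'held' : 'idle';
}
```

This is why the render loop copies selectPressed into selectPressedPrev at the end of each frame: it lets the code distinguish the frame a press happened from the frames where the button is merely held down.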

In the render method I check the user data on each of the controllers. If the primary action button is pressed I show the line and check whether it intersects an object. To do this I create a Raycaster whose ray origin is set to the controller's position, and whose direction starts as a unit vector pointing away from the controller (0, 0, -1) and is then rotated to match the controller's orientation. If the ray intersects an object I shorten the line to the distance to the object and change the object's color. While the button is held, moving the controller also moves the selected object, until you release the button.

render() {
  if (this.controllers) {
    this.controllers.forEach(controller => this.handleController(controller));
  }
  this.renderer.render(this.scene, this.camera);
}


handleController(controller) {
  if (controller.userData.selectPressed) {
    if (!controller.userData.selectPressedPrev) {
      // Select pressed
      controller.children[0].scale.z = 10;
      const rotationMatrix = new Matrix4();
      rotationMatrix.extractRotation(controller.matrixWorld);
      const raycaster = new Raycaster();
      raycaster.ray.origin.setFromMatrixPosition(controller.matrixWorld);
      raycaster.ray.direction.set(0, 0, -1).applyMatrix4(rotationMatrix);
      const intersects = raycaster.intersectObjects(this.objects);
      if (intersects.length > 0) {
        controller.children[0].scale.z = intersects[0].distance;
        this.selectedObject = intersects[0].object;
        this.selectedObject.material.color = objectSelectedColor;
        this.selectedObjectDistance = this.selectedObject.position.distanceTo(controller.position);
      }
    } else if (this.selectedObject) {
      // Move selected object so it's always the same distance from the controller
      const moveVector = controller.getWorldDirection(new Vector3()).multiplyScalar(this.selectedObjectDistance).negate();
      this.selectedObject.position.copy(controller.position.clone().add(moveVector));
    }
  } else if (controller.userData.selectPressedPrev) {
    // Select released
    controller.children[0].scale.z = 10;
    if (this.selectedObject != null) {
      this.selectedObject.material.color = objectUnselectedColor;
      this.selectedObject = null;
    }
  }
  controller.userData.selectPressedPrev = controller.userData.selectPressed;
}
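Stripped of the Three.js classes, the ray logic above is just "origin plus direction times distance". A minimal sketch with plain [x, y, z] arrays (the function name is my own):

```javascript
// Sketch: a point along the controller's ray. In the article the
// origin is the controller's world position and the direction is
// (0, 0, -1) rotated by the controller's orientation.
function pointOnRay(origin, direction, distance) {
  return origin.map((component, i) => component + direction[i] * distance);
}
```

With the controller held at (0, 1.6, 0) and pointing straight ahead, pointOnRay([0, 1.6, 0], [0, 0, -1], 2) gives [0, 1.6, -2], and setting the line's z scale to the hit distance makes the line end exactly at the object.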

You can download the complete code from here.


In the example I've created, I show a ring on detected planes and when you tap the screen a box appears in the center of the ring. The hit-test feature is required so when creating the AR button I add 'hit-test' to the required features:

document.body.appendChild(ARButton.createButton(this.renderer, { requiredFeatures: ['hit-test'] }));

In AR the screen acts as the controller, so to detect when the screen has been tapped you would do this:

this.controller = this.renderer.xr.getController(0);
this.controller.addEventListener('select', this.onSelect.bind(this));

To do hit testing you need to set up a hit test source:

async requestHitTestSource() {
  const session = this.renderer.xr.getSession();
  session.addEventListener('end', () => {
    this.hitTestSourceRequested = false;
    this.hitTestSource = null;
  });
  const referenceSpace = await session.requestReferenceSpace('viewer');
  this.hitTestSource = await session.requestHitTestSource({ space: referenceSpace, entityTypes: ['plane'] });
  this.hitTestSourceRequested = true;
}

entityTypes in requestHitTestSource defaults to ['plane'], so the argument isn't required if you only want to detect planes. In the render function I call the following function to do the hit test. If a plane is hit, a ring is shown.

getHitTestResults(frame) {
  const hitTestResults = frame.getHitTestResults(this.hitTestSource);
  if (hitTestResults.length) {
    const hit = hitTestResults[0];
    const pose = hit.getPose(this.renderer.xr.getReferenceSpace());
    // The reticle has matrixAutoUpdate set to false so it can be
    // positioned directly from the hit pose
    this.reticle.matrix.fromArray(pose.transform.matrix);
    this.reticle.visible = true;
  } else {
    this.reticle.visible = false;
  }
}
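The per-frame wiring these two functions imply (request the source once per session, then poll it every frame) can be sketched as a small function; app stands in for the article's class, and the property names mirror its code:

```javascript
// Sketch: what the render loop does with the hit test source each
// frame. `frame` is the XRFrame passed to the setAnimationLoop
// callback while an XR session is active (undefined outside one).
function onXRFrame(app, frame) {
  if (!frame) return;                    // not in an XR session
  if (!app.hitTestSourceRequested) {
    // Mark as requested immediately so the async request is only
    // started once, even though it takes several frames to resolve
    app.hitTestSourceRequested = true;
    app.requestHitTestSource();
  }
  if (app.hitTestSource) {
    app.getHitTestResults(frame);
  }
}
```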

When the screen is tapped, onSelect is called where the box is positioned in the center of the ring:

onSelect() {
  if (this.reticle.visible) {
    // Object names reconstructed: place the box on the reticle,
    // raised by half its height so it sits on the detected plane
    this.box.position.setFromMatrixPosition(this.reticle.matrix);
    this.box.position.y += this.boxHeight / 2;
    this.box.visible = true;
  }
}

You can download the complete code for this example here.