
How Babble's face tracking works

· 6 min read
dfgHiatus
Maintainer of the Project Babble docs
note

Portions of this blog post are paraphrased from the presentation we gave at Furality Somna 2025. You can view and download a copy of our presentation here.

Previously, I wrote an article describing how our eye tracking works. While we have given fully-fledged presentations, both in person and virtually, on how our face tracking works, it's only now dawning on me that we have never written a blog post on it. Heck!

At a high level, our face model is a heavily modified EfficientNetV2. Unlike our eye model, which requires a per-user calibration, our face model is a strong generalist model that works out of the box.
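For a rough sense of what an EfficientNetV2-based expression model can look like, here is a minimal sketch: a stock torchvision backbone with its classifier swapped for a blendshape regression head. This is purely illustrative; the actual Babble model is heavily modified, and `NUM_SHAPES`, the input size, and the head layout here are placeholders rather than its real configuration.

```python
# Illustrative sketch only: a generic EfficientNetV2 backbone regressing face
# blendshape weights. NUM_SHAPES and the input resolution are placeholders.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_v2_s

NUM_SHAPES = 45  # placeholder count of lower-face blendshapes to regress

backbone = efficientnet_v2_s(weights=None)
in_features = backbone.classifier[1].in_features

# Swap the ImageNet classifier for a blendshape regression head.
backbone.classifier = nn.Sequential(
    nn.Dropout(p=0.2),
    nn.Linear(in_features, NUM_SHAPES),
    nn.Sigmoid(),  # keep blendshape weights normalized to [0, 1]
)

# One 3-channel frame in, NUM_SHAPES blendshape weights out.
frame = torch.randn(1, 3, 256, 256)
with torch.no_grad():
    weights = backbone(frame)
print(weights.shape)  # torch.Size([1, 45])
```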

How Babble's eye tracking (calibration) works

· 10 min read
dfgHiatus
Maintainer of the Project Babble docs

Intro

One thing I have been asked time and time again is how our eye tracking works, and how it differs from other headsets. For starters, our production eye tracking model estimates normalized gaze as well as normalized eye openness. To date, inference takes about 5 milliseconds on high-end hardware, with average inference taking about 10 milliseconds, and accuracy is within ~5 degrees.

note

Our model is pre-trained on approximately 4 million synthetic frames and 30k real frames. These metrics were taken before we implemented our eye data collection program, which to date has seen 1,200 submissions from 240 unique users.
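As a concrete (if simplified) picture of those outputs, here is a tiny sketch of a per-eye result holding normalized gaze and normalized openness. The exact output layout and value ranges are assumptions for illustration, not the model's documented format.

```python
# Illustrative sketch: one per-eye result matching the outputs described
# above (normalized gaze plus normalized eye openness). The layout and
# value ranges are assumptions, not the model's documented format.
from dataclasses import dataclass

@dataclass
class EyeState:
    gaze_x: float    # normalized horizontal gaze, assumed in [-1, 1]
    gaze_y: float    # normalized vertical gaze, assumed in [-1, 1]
    openness: float  # normalized eyelid openness, assumed in [0, 1]

def decode(raw: list[float]) -> EyeState:
    """Map a raw three-float model output to a named eye state."""
    gaze_x, gaze_y, openness = raw
    return EyeState(gaze_x, gaze_y, openness)

print(decode([0.12, -0.30, 0.95]))
```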

This is as far as the similarities go. In order for our eye tracking to work, we train a neural network on your device following an eye tracking calibration.
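To make that concrete, here is a minimal sketch of what an on-device calibration fit could look like: a small regressor trained on (eye image, target) pairs gathered during calibration, with targets of the (gaze_x, gaze_y, openness) form described above. The network size, data shapes, and training loop are assumptions for illustration; the real per-user network and training procedure are not reproduced here.

```python
# Illustrative sketch only: fit a tiny per-user regressor on calibration
# samples. Shapes, architecture, and hyperparameters are placeholders.
import torch
import torch.nn as nn

def fit_calibration(images: torch.Tensor, targets: torch.Tensor,
                    epochs: int = 20, lr: float = 1e-3) -> nn.Module:
    """Fit a small per-user regressor on (eye image, target) pairs."""
    model = nn.Sequential(
        nn.Flatten(),
        nn.Linear(images[0].numel(), 128),
        nn.ReLU(),
        nn.Linear(128, targets.shape[1]),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(images), targets)
        loss.backward()
        optimizer.step()
    return model

# Example: 200 calibration frames of a 64x64 grayscale eye crop,
# each labeled with (gaze_x, gaze_y, openness), normalized.
frames = torch.rand(200, 1, 64, 64)
labels = torch.rand(200, 3)
per_user_model = fit_calibration(frames, labels)
```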

MIT Reality Hack 2026

· 4 min read
dfgHiatus
Maintainer of the Project Babble docs

MIT Reality Hack was amazing. I've never been to a hackathon before, let alone placed first at one!

For context, Reality Hack is a premier hackathon focused on XR hardware and software. My team created Reboot: Anomaly Holdout; someday we might turn it into its own game. Here are some pictures from the event.

How VRCFaceTracking was ported to Linux (and macOS)

· 13 min read
dfgHiatus
Maintainer of the Project Babble docs

I've been using VRCFaceTracking for quite some time now. At the time of writing, that's the better part of two to three years.

I've seen its beginnings as a mod, then a console app, then a Windows Forms app, and finally (and currently) a fully-fledged WinUI 3 application.

During this time, I have also seen the growth of VR on Linux. I understand that the majority of people who use VR, and by extension most users of social VR apps, are on Windows, myself included. I'm writing this on the 11-year-old workstation I built back in 2014, and it's seen every version of Windows 10 since.

I believe Linux has a place in the VR scene. More importantly, I wanted to enable my friends to smile and laugh like I'm able to in VRChat. So, I set out to port VRCFaceTracking to Linux.

Reverse Engineering the Vive Facial Tracker

· 10 min read
dfgHiatus
Maintainer of the Project Babble docs

My name is Hiatus, and I am one of the developers on the Project Babble Team.

For the past two years at the time of writing, a team and I have been working on the eponymous Project Babble, a cross-platform, hardware-agnostic VR lower-face expression tracking solution for VRChat, ChilloutVR, and Resonite.

This blog post was inspired by a member of our community, DragonLord. His work is responsible for this feature, and this post largely paraphrases his findings. Credit where credit is due: you can check out his findings as well as his repo on GitHub here.

This is the story of how the Vive Facial Tracker, another VR face tracking accessory, was reverse engineered to work with the Babble App.

Buckle in.