Baballonia Eye/Face Tracking #

This guide will help you set up Baballonia Eye/Face Tracking under Linux.

Requirements #

  • A supported eye or face tracking device.
  • Windows 10/11 dual-boot availability or separate Windows 10/11 machine to train your model if your device requires it.

Baballonia releases can be found here: https://github.com/Project-Babble/Baballonia/releases

VRCFT.Avalonia releases can be found here: https://github.com/dfgHiatus/VRCFaceTracking.Avalonia/releases

If your device requires a trained model, you will need to boot into Windows and use Baballonia.Desktop there to train it, then transfer the resulting files to your Linux filesystem. On Windows, models are typically stored at %APPDATA%\ProjectBabble\Models, and they can be loaded from wherever you like via the Baballonia.Desktop GUI.
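As a sketch of the transfer step, assuming your Windows partition is mounted at /mnt/windows and you keep the models in a matching directory on Linux (both paths are examples, not Baballonia defaults):

```shell
# Example paths only: adjust to your Windows mount point and username.
SRC="/mnt/windows/Users/$USER/AppData/Roaming/ProjectBabble/Models"
# Any Linux directory works; Baballonia loads models from wherever you point it.
DEST="$HOME/.local/share/ProjectBabble/Models"

mkdir -p "$DEST"
if [ -d "$SRC" ]; then
    # Copy the trained model files over.
    cp -rv "$SRC"/. "$DEST"/
else
    echo "Windows partition not mounted at $SRC; mount it first or copy via USB."
fi
```

A USB drive or network share works just as well if you prefer not to mount the Windows partition.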

Then you can start the Linux native Baballonia.Desktop and tell it to load your pre-trained model.

NOTE: GPU acceleration currently does not work in the Linux native build of Baballonia, so disable it for now. You will also need to either disable the One-Euro filter option or ensure that every one of its parameters has a value set; otherwise Baballonia will not send usable data.

Baballonia.Desktop should show your tracking device in the camera dropdown. You may need to adjust cropping and brightness for the cameras as necessary.

Ensure Baballonia.Desktop is set to provide Eye or Both tracking to VRCFT.Avalonia (enable the Native OSC option as well if desired), then launch VRCFT.Avalonia. If the connection succeeds, and the VRCFT-Babble module is loaded in VRCFT.Avalonia, you should see small amounts of data being transferred.

For eye tracking to function correctly within VRCFT.Avalonia using the Babble module, you will need to manually edit its configuration file.

After VRCFT.Avalonia has been launched at least once and has installed the Babble module, navigate to ~/.config/VRCFaceTracking/CustomLibs/{uuid of babble module}. Inside this directory, open BabbleConfig.json and change the line "IsEyeSupported": false to "IsEyeSupported": true.
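If you prefer to make this edit from a terminal, a one-line sketch might look like the following (the UUID directory name is a placeholder you must substitute yourself):

```shell
# Placeholder path: replace <uuid> with the actual Babble module folder name.
CFG="$HOME/.config/VRCFaceTracking/CustomLibs/<uuid>/BabbleConfig.json"

if [ -f "$CFG" ]; then
    # Flip the IsEyeSupported flag from false to true in place.
    sed -i 's/"IsEyeSupported": *false/"IsEyeSupported": true/' "$CFG"
else
    echo "Config not found; launch VRCFT.Avalonia once with the Babble module installed."
fi
```

Restart VRCFT.Avalonia afterwards so it picks up the changed setting.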

Assuming all goes well, you should see OSC data coming in after launching VRChat under Proton. Avatar parameters should load into VRCFT.Avalonia, and at this point you should see both send and receive data rates, along with live results in VRChat itself.
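To confirm that OSC traffic is actually flowing, a quick terminal check might look like this (it assumes VRChat's default OSC ports, 9000 for receiving and 9001 for sending; yours may differ if you changed them):

```shell
# List UDP sockets on the default VRChat OSC ports (9000/9001).
# No matches usually means VRChat or VRCFT is not sending yet.
ss -uapn | grep -E ':(9000|9001)\b' || echo "No sockets on OSC ports yet."
```

Seeing sockets here while values in VRCFT.Avalonia are moving is a good sign the whole chain is working.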