http://blog.leapmotion.com/bending-reality-north-stars-calibration-system/
Bringing new worlds to life doesn’t end with bleeding-edge software – it’s also a battle with the laws of physics. Project North Star is a compelling glimpse into the future of AR interaction and an exciting engineering challenge, with wide-FOV displays and optics that demanded a whole new calibration and distortion system.
Just as a quick primer: the North Star headset has two screens, one on each side of the head. These screens face toward the reflectors in front of the wearer, which (as their name suggests) reflect the light from the screens into the wearer’s eyes.
As you can imagine, this requires a high degree of calibration and alignment, especially in AR. In VR, our brains often gloss over mismatches in time and space, because we have nothing to visually compare them to. In AR, we can see the virtual and real worlds simultaneously – an unforgiving standard that requires a high degree of accuracy.
North Star sets an even higher bar for accuracy and performance, since these must be maintained across a much wider field of view than any previous AR headset. To top it all off, North Star’s optics create a stereo-divergent off-axis distortion that can’t be modelled accurately with conventional radial polynomials.
How can we achieve this high standard? Only with a distortion model that faithfully represents the physical geometry of the optical system. The best way to model any optical system is raytracing – tracing the paths that rays of light travel from the light source, through the optical system, to the eye. Raytracing makes it possible to simulate where a given ray of light entering the eye came from on the display, so we can precisely map the distortion between the eye and the screen.
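To make this concrete, here is a minimal sketch of the core raytracing step, assuming an idealized flat reflector and flat display. The real North Star reflectors are curved, off-axis surfaces, so the actual system intersects rays with an ellipsoid rather than a plane, and every geometry value below is made up for illustration:

```python
import numpy as np

def reflect(d, n):
    """Reflect direction d about unit surface normal n: r = d - 2(d.n)n."""
    return d - 2.0 * np.dot(d, n) * n

def intersect_plane(origin, direction, plane_point, plane_normal):
    """Return the point where a ray meets a plane (assumes they intersect)."""
    t = np.dot(plane_point - origin, plane_normal) / np.dot(direction, plane_normal)
    return origin + t * direction

# Toy geometry, not the real North Star layout:
eye = np.array([0.0, 0.0, 0.0])
view_dir = np.array([0.0, 0.0, -1.0])        # ray leaving the eye
mirror_point = np.array([0.0, 0.0, -0.05])   # a point on the "reflector"
mirror_normal = np.array([0.0, 0.3, 1.0])
mirror_normal /= np.linalg.norm(mirror_normal)

hit = intersect_plane(eye, view_dir, mirror_point, mirror_normal)
bounced = reflect(view_dir, mirror_normal)

# Where the bounced ray lands on the display plane tells us which screen
# pixel this eye ray "sees"; that correspondence is the distortion map.
screen_point = np.array([0.0, 0.08, -0.03])
screen_normal = np.array([0.0, -1.0, 0.2])
screen_normal /= np.linalg.norm(screen_normal)
print(intersect_plane(hit, bounced, screen_point, screen_normal))
```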
But wait! This only works properly if we know the geometry of the optical system. That’s difficult with modern small-scale prototyping techniques, which achieve cost-effectiveness at the price of poor mechanical tolerancing (relative to the requirements of near-eye optical systems). In developing North Star, we needed a way to measure these mechanical deviations to create a valid distortion mapping.
One of the best ways to understand an optical system is… looking through it! By comparing what we see against some real-world reference, we can measure the aggregate deviation of the components in the system. A special class of algorithms called “numerical optimizers” lets us solve for the configuration of optical components that minimizes the distortion mismatch between the real-world reference and the virtual image.
Leap Motion North Star calibration combines a foundational principle of Newtonian optics with virtual jiggling. For convenience, we found it was possible to construct our calibration system entirely in the same base 3D environment that handles optical raytracing and 3D rendering. We begin by setting up one of our newer 64mm camera modules inside the headset and pointing it towards a large flat-screen LCD monitor. A pattern on the monitor lets us triangulate its position and orientation relative to the headset rig.
With this, we can render an inverted virtual monitor on the headset in the same position as the real monitor in the world. If the two versions of the monitor matched up perfectly, they would additively cancel out to uniform white. (Thanks Newton!) The module can now measure this “deviation from perfect white” as the distortion error caused by the mechanical discrepancy between the physical optical system and the CAD model the raytracer is based on.
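In code, the “deviation from perfect white” cost is remarkably simple. A minimal sketch, assuming the camera frame has already been captured and normalized to [0, 1] (the function name is ours, not the calibrator’s):

```python
import numpy as np

def deviation_from_white(camera_frame):
    """Photometric cost: mean distance from uniform white.

    camera_frame holds the camera's view of the real monitor pattern
    plus the inverted virtual pattern seen through the headset optics.
    With a perfect distortion model the two sum to white everywhere,
    and this cost approaches zero.
    """
    return float(np.mean(np.abs(1.0 - camera_frame)))

# e.g. cost = deviation_from_white(frame.astype(np.float32) / 255.0)
```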
This “one-shot” photometric cost metric allows for a speedy enough evaluation to run a gradient-free Nelder-Mead simplex optimizer in the loop. (Basically, it jiggles the optical elements around until the deviation is below an acceptable level.) While this might sound inefficient, in practice it lets us converge on the correct configuration with a very high degree of precision.
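Here is a sketch of that loop using SciPy’s Nelder-Mead in place of whatever optimizer implementation the calibrator actually uses; render_and_capture and the six-parameter vector are hypothetical stand-ins:

```python
import numpy as np
from scipy.optimize import minimize

def render_and_capture(params):
    # Stand-in for the real render/capture step. Here, a synthetic frame
    # that gets whiter as params approach a made-up "true" deviation.
    true_offsets = np.array([0.01, -0.02, 0.0, 0.005, 0.0, -0.01])
    return 1.0 - np.abs(params - true_offsets)

def photometric_cost(params):
    # params: hypothetical translation/rotation offsets of an optical element.
    frame = render_and_capture(params)
    return np.mean(np.abs(1.0 - frame))   # deviation from uniform white

x0 = np.zeros(6)
result = minimize(photometric_cost, x0, method="Nelder-Mead",
                  options={"xatol": 1e-6, "fatol": 1e-8, "maxiter": 5000})
print(result.x)  # the mechanical deviation that best explains the images
```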
This might be where the story ends – but there are two subtle ways the optimizer can reach a wrong conclusion. The first, ordinary local minima in the optimization landscape, rarely arises in practice. The more devious kind comes from the fact that multiple optical configurations can yield the same geometric distortion when viewed from a single perspective. The equally devious solution is to film each eye’s optics from two cameras simultaneously. This lets us solve for a truly accurate optical system for each headset that can be raytraced from any perspective.
In static optical systems, it usually isn’t worth going through the trouble of determining per-headset optical models for distortion correction. However, near-eye displays are anything but static. Eye positions change for lots of reasons – different people’s interpupillary distances (IPDs), headset ergonomics, even the gradual shift of the headset on the head over a session. Any one of these factors alone can hamper the illusion of augmented reality.
Fortunately, by combining the raytracing model with eye tracking, we can compensate for these inconsistencies in real-time for free! We’ll cover the North Star eye tracking experiments in a future blog post.
The following instructions apply to version 1 of the calibration rig. They walk through the general process of performing a 3D calibration on your Project North Star headset.
This requires:
3D Printing the mechanical assembly; you can find it here: https://leapmotion.github.io/ProjectNorthStar/
Affixing TWO of these Stereo Cameras to it: https://www.amazon.com/ELP-Industrial-Application-Synchronized-ELP-960P2CAM-V90-VC/dp/B078TDLHCP/
Acquiring a large secondary monitor to use as the calibration target
Find the exact model and active area of the screen for this monitor; we'll need it later, when editing config.json
Printing out an OpenCV calibration chessboard and affixing it to a flat backing board.
Flatness is absolutely crucial for the calibration.
Editing the config variables at the top of dualStereoChessboardCalibration.py with the correct values (a hedged example follows this list):
Number of interior corners on each axis
Dimensions of each square on the checkerboard (in meters!)
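For reference, that block might look something like this; the variable names here are illustrative, so check the actual names at the top of dualStereoChessboardCalibration.py:

```python
# Illustrative values -- a 10x7-square board has 9x6 interior corners.
CHESSBOARD_INTERIOR_CORNERS = (9, 6)  # interior corners per axis, not squares
SQUARE_SIZE_METERS = 0.024            # measured edge length of one square
```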
Installing Python 3 on your machine and running pip install numpy and pip install opencv-contrib-python
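A quick sanity check that both packages installed correctly (our suggestion, not an official step):

```python
import cv2
import numpy
print("OpenCV:", cv2.__version__)  # provided by opencv-contrib-python
print("NumPy:", numpy.__version__)
```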
Run pip from the Python Scripts folder if it isn't on your PATH, usually something like C:\Users\*USERNAME*\AppData\Local\Programs\Python\Python36\Scripts
Running dualStereoChessboardCalibration.py
First, ensure that your upper stereo camera appears on top in the camera visualizer
If not, exit the program and unplug/replug your cameras' USB ports in various orders/ports until it does.
Hold your checkerboard in front of your camera array, moving it around to get good coverage.
Every time the calibrator takes a snapshot, it will print a notice in the terminal.
After 30 snapshots in both camera views, it will run the calibration routines and display rectified views.
If the calibration went well, you will see your live camera stream rectified such that all straight lines in the real world appear straight in the camera image, and the two views will look straightened and vertically aligned with each other.
If this happened, congratulations! Exit out of the program and run it one more time.
Running it again verifies that the calibration can be loaded and, importantly, generates the calibration JSON.
If the calibration did not go well (you see horrible warping and badness), you can attempt the calibration again by:
Deleting the created dualCameraCalibration.npz and cameraCalibration.json files from the main folder
Trying again: ensuring the checkerboard is flat, the config parameters are correct, and that you have good coverage (including along depth)
You should now have a good cameraCalibration.json file in the main folder (from the last step).
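For the curious: the script's calibration stage presumably follows the standard OpenCV stereo pipeline. Below is a condensed, hedged sketch of that pipeline, not the script's literal code; objpoints, the image-point lists, image_size, and left_frame are assumed inputs gathered from the ~30 chessboard snapshots via cv2.findChessboardCorners:

```python
import cv2

# objpoints: per-snapshot (N,3) chessboard corner positions in board space
# imgpoints_l / imgpoints_r: matching (N,1,2) detections from each camera
# image_size: (width, height) of one camera image; left_frame: a live frame

# 1) Intrinsics for each camera individually.
_, K1, d1, _, _ = cv2.calibrateCamera(objpoints, imgpoints_l, image_size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(objpoints, imgpoints_r, image_size, None, None)

# 2) Stereo extrinsics: rotation R and translation T between the cameras.
_, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
    objpoints, imgpoints_l, imgpoints_r, K1, d1, K2, d2, image_size,
    flags=cv2.CALIB_FIX_INTRINSIC)

# 3) Rectification transforms, then per-pixel remap tables.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, image_size, R, T)
map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, image_size, cv2.CV_32FC1)
rectified_left = cv2.remap(left_frame, map1x, map1y, cv2.INTER_LINEAR)
```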
Ensure that your main monitor is 1920x1080, that the calibration monitor appears to the left of the main monitor, and that the North Star display appears to the right of it.
This ensures that the automatic layout algorithm detects the various monitors appropriately.
Edit config.json to contain the active area for your calibration monitor that you found earlier (a hypothetical example follows).
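The real key names in config.json may differ, so match the file's existing structure rather than copying this verbatim; the active-area numbers below are example values for a 27-inch 16:9 panel, in meters:

```json
{
  "calibrationMonitor": {
    "model": "Dell U2717D",
    "activeAreaMeters": { "width": 0.5976, "height": 0.3362 }
  }
}
```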
Download this version of the Leap Service: https://github.com/leapmotion/UnityModules/tree/feat-multi-device/Multidevice%20Service
The calibrator was built with this version; it will complain if you don't have it :/
Now run NorthStarCalibrator.exe
You should see the top camera's images in the top right, and the bottom camera's images on the bottom.
If this is not so, please reconnect your cameras until it is (same process as for the checkerboard script)
You should also see a set of sliders and buttons running along the top.
These control the calibration process.
First, point the bare calibration rig toward the calibration monitor
Ensure it is roughly centered on the monitor, so it can see all of the vertical area.
Then press "1) Align Monitor Transform"
This will attempt to localize the monitor in space relative to the calibration rig. This is important for the later steps.
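Conceptually, localizing the monitor from a displayed pattern is a standard perspective-n-point (PnP) problem. Here is a hedged sketch of that idea with OpenCV; the calibrator's actual pattern, point counts, and code may differ, and all numeric values are examples:

```python
import cv2
import numpy as np

# Corner positions of the monitor's active area in its own frame (meters),
# using the active-area dimensions found earlier (example values).
object_points = np.array([[0.0,    0.0,    0.0],
                          [0.5976, 0.0,    0.0],
                          [0.5976, 0.3362, 0.0],
                          [0.0,    0.3362, 0.0]])

# Matching 2D detections of those corners in one camera image (example values).
image_points = np.array([[412.0, 310.0], [1490.0, 325.0],
                         [1478.0, 905.0], [405.0,  890.0]])

# Camera intrinsics from the earlier chessboard calibration (example values).
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 480.0],
              [0.0,   0.0,   1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
# rvec/tvec: the monitor's pose relative to the camera, hence to the rig.
print(ok, tvec.ravel())
```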
Next, place the headset onto the calibration rig and press "2) Create Reflector Mask"
This should mask out all of the camera's FoV except the region where the screen and reflectors overlap the calibration monitor.
If it does not appear to do this, double-check that all of the prior steps have been followed correctly.
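For intuition, one plausible way such a mask could be computed (a sketch of the general idea, not necessarily the calibrator's method) is to compare a capture with the monitor lit against one with it dark and threshold the difference:

```python
import cv2
import numpy as np

def make_reflector_mask(lit_frame, dark_frame, thresh=30):
    """Mask of pixels where the calibration monitor is visible through
    the reflector, built by differencing a lit and an unlit capture."""
    diff = cv2.absdiff(lit_frame, dark_frame)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    # Clean up speckle with a small morphological open/close.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

# e.g. mask = make_reflector_mask(lit_capture, dark_capture)
```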
Now, before we press "3) Toggle Optimization", we'll want to adjust the bottom two sliders until they both represent roughly equal brightnesses.
This is important since the optimizer is trying to create a configuration that yields a perfectly gray viewing area.
Now press "3) Toggle Optimization" and observe it.
It's switching between being valid for the upper and lower camera views, so only one image is going to appear to improve at a time.
You should see it gradually discovering where the aligned camera location is.
This is the finickiest step in the process; it's possible that the headset is outside the standard build tolerances.
If you suspect this is the case, increase the simplexSize in config.json to increase the area it will search.
If it does converge on an aligned image, then congratulations! Toggle the optimization off again.
Press button 4) to hide the pattern, put the headset on, and use the arrow keys to adjust the view-dependent/ergonomic distortion and the numpad 2, 4, 6, and 8 keys to adjust the rotation of the Leap peripheral.
When satisfied, press 5) to save the calibration.
This will save your calibration as a "Temp" calibration in the Calibrations folder (a shortcut is available in the main folder).
You can differentiate between calibrations by the time at which they were created.