Face Tracking – FaceTrack
visage|SDK™ FaceTrack package is an extremely powerful, fully configurable face tracking engine. It finds and tracks the face and facial features in video sequences in real time (e.g. 30 frames per second) and returns full 3D head pose (translation and rotation), gaze direction, facial feature coordinates and a wealth of other information. Tracking works in color, grayscale, near-infrared and thermal video. The visage|SDK™ FaceTrack package offers this technology in the form of a well-documented C++ Software Development Kit.
The underlying technology is based on fitting a 3D model to the facial image and estimating the 3D motion of the head and the facial expression. visage|SDK™ FaceTrack enables numerous applications including character animation, facial expression analysis, assistive technologies, identification technologies, market research etc.
For applications that rely solely on 3D head pose, such as Augmented Reality (Magic Mirror – virtual eyewear, headwear etc.), view control in gaming or view-dependent rendering, the visage|SDK™ HeadTrack package is available for licensing. It includes the same powerful tracking technology as visage|SDK™ FaceTrack but restricts the output to 3D head pose.
- For each processed video frame, returns 3D head pose and other information – see the detailed list of tracker outputs below.
- Depending on the configuration, the tracker can track the mouth contour, chin pose, eyebrow contours, eye closure and eye rotation (gaze direction).
- Fully automatic operation (manual fine-tuning in initial video frame available optionally for enhanced precision; results of such setup can be saved and reused).
- Robustly recovers from tracking losses due to occlusions, the face turning away, the tracked person leaving and returning, etc.
- Automatically re-initialises if a new person appears in front of the camera.
- Tracks from webcam or AVI video files.
- Raw image interface allows tracking from any video source.
- Tracks in color or grayscale video (internal processing performed on grayscale).
- Tracks in infrared video, both near infrared and thermal range (thermal videos currently require manual setup in first frame; customization is possible to overcome this issue and make it fully automatic – contact us for further information).
- No markers or makeup are needed on the face.
- Fully user-configurable to suit a number of different applications; default configurations include head tracking and facial features tracking. See details about configuring the tracker.
- Minimum size of the face in the video image is approx. 80 pixels in width.
- Minimum input video resolution is approx. 320×240. Higher resolutions (e.g. 640×480, 800×600) give better results.
- Head rotation is tracked up to approx. 50 degrees, though in good conditions it can be more.
- Extensive tracking volume (as a rough indication, with a 640×480 pixel camera the head can move up to approx. 120 cm away from the camera, approx. 50 cm left or right and approx. 30 cm up or down).
- Lightweight technology (example videos shown on these pages were recorded on an Intel Core 2 Duo T7500 2.20 GHz processor with video capture running in parallel).
Face tracking outputs
The tracker offers an easy-to-use API for accessing the tracking data on the fly during tracking. The available data includes:
- 3D head pose (translation and rotation).
- Facial feature coordinates in global 3D space, relative to the head or in 2D image space. The feature points are specified according to the MPEG-4 FBA standard.
- Gaze direction (see video); screen-space gaze coordinates based on calibration are planned for the next release.
- Eye closure.
- A set of Action Units (e.g. jaw drop, lips stretch, brow raise…) describing the current facial expression; these parameters can be used to animate a face model.
- Standard MPEG-4 FBA Face Animation Parameters available through API interface and as FBA file output.
- 3D model of the face in the current pose and expression, returned as a single textured 3D triangle mesh. This extremely powerful function enables applications such as drawing the model into the Z-buffer to achieve correct occlusion of virtual objects by the head in AR applications, using the texture coordinates to cut the face out of the image, drawing the 3D model from a different perspective than the one in the actual video, or inserting the face into another video or 3D scene. See video illustrating the 3D model.
Application development/platform availability
For developers wishing to integrate the tracker with the Unity 3D engine, the visage|SDK for Windows, iOS and Android packages include a working sample project that integrates the tracker into Unity, with full source code and documentation.
Through a partnership with Xylon, an electronics company focused on FPGA IP cores and application solutions, Visage Technologies Face Detection and Tracking is available on the Xilinx® Zynq®-7000 All Programmable SoC architecture. For more information please contact firstname.lastname@example.org
Fully configurable tracker
Face tracking is fully configurable through an extensive set of parameters in easily manageable configuration files. Each configuration file fully defines the tracker operation, in effect customising the tracker for a particular application. Default configuration files include:
- Head tracking configuration
- Facial features tracking configuration
- Off-line Facial features tracking configuration (the above video was recorded using this configuration)
Extensive documentation allows users to create their own application-specific configurations and documents all available configuration options. A partial list of the main options:
- Input, work and display resolution settings.
- Camera mirror option (flip the camera image horizontally).
- Video file sync option (skip frames to sync or process all frames).
- Automatic or semi-automatic initialisation.
- Smoothing filters to reduce noise in tracking results.
- Full control of the 3D head model internally used by the tracker, including the animation rig (this advanced option can potentially be used to completely replace the 3D model by a custom one).
- Control of regions of interest to be tracked.
- Control of number and positioning of actual track points on the face and the search precision (allowing tradeoffs between performance and precision).
- Control of facial actions to be tracked (e.g. jaw opening, eyebrow motion, eye rotation/closure etc.).
Furthermore, Visage Technologies consulting and custom development services are available to adapt the technology in terms of precision, performance and any other requirements in order to meet the needs of specific applications.