new VisageTracker(configurationName)
VisageTracker is a face tracker capable of tracking the head pose, facial features and gaze for multiple faces in video coming from a video file, camera or other sources.
Frames (images) need to be passed sequentially to the track() method, which immediately returns results for the given frame.
The tracker offers the following outputs, available through FaceData:
- 3D head pose
- facial expression
- gaze information
- eye closure
- iris radius
- facial feature points
- full 3D face model, textured
Dependencies
The tracker requires, by default, data bundled in visageSDK.data file and configuration files located in www/lib folder.
For every application, visageSDK.data file must be copied to the same folder where the application's main html file is located (e.g. www/Samples/VirtualEyewearTryOn folder).
If webassembly format is used, then the visageSDK.wasm file is also expected to be located in the same folder as the application's main html file.
Configuration file (as well as license key file) has to be preloaded before instantiating VisageTracker class by calling FS_createPreloadedFile() function. Note that the FS_createPreloadedFile() function must be called after visageSDK.js script is loaded, but before the .data is completely downloaded (an example can be found in www/Samples/ShowcaseDemo/ShowcaseDemo.html).
Changing the location of the .data file
Location of the .data and .wasm files can be changed by overriding the locateFile attribute of the VisageModule object to return the URL where the data file is currently stored. Note that all .data files and the .wasm file must be located in the same folder (i.e. same URL) (see www/Samples/ShowcaseDemo/ShowcaseDemo.html). This additional code needs to be added to the application's main html file, and the VisageModule attribute must be specified in a script element before the one that loads the data file (in this case visageSDK.js).
Sample usage - changing .data and .wasm files location:
<script>
var locateFile = function(dataFileName) {var relativePath = "../../lib/" + dataFileName; return relativePath}
</script>
<script src="../../lib/visageSDK.js"></script>
<script>
VisageModule = VisageModule({onRuntimeInitialized: onModuleInitialized, locateFile: locateFile});
var preloadFiles = function() {
VisageModule.FS_createPreloadedFile('/', 'Facial Features Tracker - High.cfg', "../../lib/Facial Features Tracker - High.cfg", true, false);
VisageModule.FS_createPreloadedFile('/', licenseName, licenseURL, true, false, function(){ }, function(){ alert("Loading License Failed!") });
};
VisageModule.preRun.push(preloadFiles);
</script>
<!-- if visageVNNData.data is used -->
<script src="../../lib/visageVNNData.js"></script>
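The locateFile hook in the sample above is plain JavaScript; as a minimal self-contained sketch (the ../../lib/ path and file names are illustrative), it simply maps each runtime asset name to the URL it should be fetched from:

```javascript
// locateFile receives the bare file name of each runtime asset and must
// return the URL to fetch it from; all .data files and the .wasm file
// must resolve to the same folder (same URL).
function locateFile(dataFileName) {
    return "../../lib/" + dataFileName;
}

console.log(locateFile("visageSDK.data")); // "../../lib/visageSDK.data"
console.log(locateFile("visageSDK.wasm")); // "../../lib/visageSDK.wasm"
```

Because the loader calls the hook for every file it needs, one return expression keeps the .data and .wasm files together automatically.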
Configuring VisageTracker
The tracker is fully configurable through the comprehensive tracker configuration files provided in visage|SDK and through the VisageConfiguration class, allowing the tracker to be customized in terms of performance, quality and other options. The configuration files are intended for tracker initialization, while the VisageConfiguration class allows specific configuration parameters to be changed at runtime.
visage|SDK contains optimal configurations for common uses such as head tracking, facial features tracking and ear tracking.
The VisageTracker Configuration Manual (later in text referred to as VTCM) provides the list of available configurations and full detail on all available configuration options.
Specific configuration parameters are used to enable features such as:
- ear tracking
- experimental VNN algorithm
- smoothing filter
Ear tracking
Ear tracking includes tracking of 24 additional points (12 points per ear). A detailed illustration of the points' locations can be found in the description of the featurePoints2D member. The ears' feature points are part of group 10 (10.1 - 10.24). Tracking the ears' points requires a 3D model with defined ear vertices, as well as a corresponding points mapping file that includes the definition for group 10. visage|SDK contains examples of such model files within visageSDK.data: jk_300_wEars.wfm and jk_300_wEars.fdp. For the list of the model's vertices and triangles see chapter 2.3.2.1 The jk_300_wEars of VTCM.
A set of three configuration parameters is used to configure ear tracking:
- refine_ears
- mesh_fitting_model and mesh_fitting_fdp if fine 3D mesh is enabled, otherwise pose_fitting_model and pose_fitting_fdp
- smoothing_factors 'ears' group (smoothing_factors[7])
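Put together, an ear-tracking setup touches the parameters listed above in the tracker configuration file. The fragment below is purely illustrative (only the parameter and model file names come from this section; the exact syntax and values must be checked against VTCM):

```
refine_ears 1
mesh_fitting_model jk_300_wEars.wfm
mesh_fitting_fdp jk_300_wEars.fdp
```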
VNN algorithm
The tracker may be configured to use the experimental VNN algorithm. It significantly improves tracking precision and robustness, but reduces tracking speed (performance). The VNN algorithm requires an additional set of data bundled in visageVNNData.data. The data is loaded with the visageVNNData.js loader script located in the www/lib folder.
For every application, visageVNNData.data file must be copied to the same folder where the application's main html file is located.
See use_vnn in chapter 2.1. Configuration parameters of VTCM.
Smoothing filter
The tracker can apply a smoothing filter to tracking results to reduce the inevitable tracking noise. Smoothing factors are adjusted separately for different parts of the face. The smoothing settings in the supplied tracker configurations are adjusted conservatively to achieve an optimal balance between smoothing and delay in tracking response for a general use case. See smoothing_factors in chapter 2.1. Configuration parameters of VTCM.
Note: After use, the VisageTracker object needs to be deleted to release the allocated memory. Example:
<script>
m_Tracker = new VisageModule.VisageTracker("Facial Features Tracker - High.cfg");
...
m_Tracker.delete();
</script>
Parameters:
Name | Type | Description |
---|---|---|
configurationName | string | The name of the tracker configuration file (.cfg). Default configuration files are provided in the lib folder; for further details see VTCM. |
Methods
-
track(frameWidth, frameHeight, p_imageData, faceDataArray, format, origin, widthStep, timeStamp, maxFaces) → {Int32Array}
-
Performs face tracking in the given image and returns tracking results and status. This function should be called repeatedly on a series of images in order to perform continuous tracking.
If the tracker needs to be initialized, this will be done automatically before tracking is performed on the given image. Initialization means loading the tracker configuration file, required data files and allocating various data buffers to the given image size. This operation may take several seconds. This happens in the following cases:
- In the first frame (first call to VisageTracker.track() function).
- When frameWidth or frameHeight are changed, i.e. when they are different from the ones used in the last call to VisageTracker.track() function.
- If setTrackerConfigurationFile() function was called after the last call to VisageTracker.track() function.
- When maxFaces is changed, i.e. when it is different from the one used in the last call to track() function.
Sample usage:
var m_Tracker, faceData, faceDataArray, frameWidth, frameHeight;

function initialize(){
    //Initialize licensing with the obtained license key file
    //It is imperative that initializeLicenseManager method is called before the constructor is called in order for licensing to work
    VisageModule.initializeLicenseManager("xxx-xxx-xxx-xxx-xxx-xxx-xxx-xxx-xxx-xxx-xxx.vlc");
    //Instantiate the tracker object
    m_Tracker = new VisageModule.VisageTracker("../../lib/Facial Features Tracker - High.cfg");
    //Instantiate the face data object
    faceDataArray = new VisageModule.FaceDataVector();
    faceData = new VisageModule.FaceData();
    faceDataArray.push_back(faceData);
    frameWidth = canvas.width;
    frameHeight = canvas.height;
    //Allocate memory for image data
    ppixels = VisageModule._malloc(mWidth*mHeight*4);
    //Create a view to the memory
    pixels = new Uint8ClampedArray(VisageModule.HEAPU8.buffer, ppixels, mWidth*mHeight*4);
}

function onEveryFrame(){
    //Obtain the image pixel data
    var imageData = canvas.getContext('2d').getImageData(0, 0, mWidth, mHeight).data;
    //...Fill pixels with image data
    //Call the tracking method of the tracker object with the image width, image height, image pixel data, face data object instance, image format and origin
    var trackerStatus = [];
    trackerStatus = m_Tracker.track(
        frameWidth, frameHeight, ppixels, faceDataArray,
        VisageModule.VisageTrackerImageFormat.VISAGE_FRAMEGRABBER_FMT_RGBA.value,
        VisageModule.VisageTrackerOrigin.VISAGE_FRAMEGRABBER_ORIGIN_TL.value
    );
    //Based on the tracker return value do some action with the return values located in face data object instance
    if (trackerStatus.get(0) === VisageModule.VisageTrackerStatus.TRACK_STAT_OK.value){
        drawSomething(faceDataArray.get(0));
    }
}
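The buffer arithmetic in the sample above is worth spelling out: with the RGBA format each pixel occupies 4 bytes, so both the _malloc allocation and the Uint8ClampedArray view must cover width*height*4 bytes. A minimal self-contained sketch (dimensions illustrative):

```javascript
// RGBA stores 4 bytes (red, green, blue, alpha) per pixel, so a frame
// buffer must hold width * height * 4 bytes; 3-byte formats such as RGB
// and BGR need width * height * 3, and LUMINANCE needs width * height * 1.
function frameBufferLength(width, height, bytesPerPixel) {
    return width * height * bytesPerPixel;
}

console.log(frameBufferLength(640, 480, 4)); // 1228800 bytes for RGBA
console.log(frameBufferLength(640, 480, 1)); // 307200 bytes for LUMINANCE
```

Sizing the allocation and the typed-array view with the same expression keeps the two from drifting apart when the frame dimensions change.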
The tracker results are returned in faceDataArray.
Parameters:

Name | Type | Argument | Default | Description |
---|---|---|---|---|
frameWidth | number | | | Width of the frame. |
frameHeight | number | | | Height of the frame. |
p_imageData | number | | | Pointer to image pixel data; the size of the array must correspond to frameWidth and frameHeight. |
faceDataArray | FaceDataVector | | | Array of FaceData objects that will receive the tracking results. The size of faceDataArray is equal to the maxFaces parameter. |
format | number | &lt;optional&gt; | VisageModule.VISAGE_FRAMEGRABBER_FMT_RGB | Format of input images passed in p_imageData. It cannot change during tracking. See the list of supported formats below. |
origin | number | &lt;optional&gt; | VisageModule.VISAGE_FRAMEGRABBER_ORIGIN_TL | No longer used, therefore the passed value will not have an effect on this function. However, the parameter is left to avoid API changes. |
widthStep | number | &lt;optional&gt; | 0 | Width of the image data buffer, in bytes. |
timeStamp | number | &lt;optional&gt; | -1 | The timestamp of the input frame in milliseconds. The passed value will be returned with the tracking data for that frame (FaceData.timeStamp). Alternatively, the value of -1 can be passed, in which case the tracker will return time, in milliseconds, measured from the moment when tracking started. |
maxFaces | number | &lt;optional&gt; | 1 | Maximum number of faces that will be tracked. Increasing this parameter will reduce tracking speed. |

Format can be one of the following:
- VisageModule.VISAGE_FRAMEGRABBER_FMT_RGB: each pixel of the image is represented by three bytes representing red, green and blue channels, respectively.
- VisageModule.VISAGE_FRAMEGRABBER_FMT_BGR: each pixel of the image is represented by three bytes representing blue, green and red channels, respectively.
- VisageModule.VISAGE_FRAMEGRABBER_FMT_RGBA: each pixel of the image is represented by four bytes representing red, green, blue and alpha (ignored) channels, respectively.
- VisageModule.VISAGE_FRAMEGRABBER_FMT_BGRA: each pixel of the image is represented by four bytes representing blue, green, red and alpha (ignored) channels, respectively.
- VisageModule.VISAGE_FRAMEGRABBER_FMT_LUMINANCE: each pixel of the image is represented by one byte representing the luminance (gray level) of the image.

Returns:
Array of tracking statuses for each of the tracked faces - see FaceData for more details.
- Type
- Int32Array
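Since track() returns one status per face slot, a caller iterating up to maxFaces can count or filter the successfully tracked faces. A hedged sketch (TRACK_STAT_OK is stubbed with an illustrative value here; the real constant is VisageModule.VisageTrackerStatus.TRACK_STAT_OK.value, and the real return value is read with .get(i) as in the sample above):

```javascript
// Stub for VisageModule.VisageTrackerStatus.TRACK_STAT_OK.value;
// illustrative only, not the SDK's actual constant.
var TRACK_STAT_OK = 1;

// Count how many of the maxFaces slots report a successfully tracked face.
function countTrackedFaces(statuses, maxFaces) {
    var tracked = 0;
    for (var i = 0; i < maxFaces; ++i) {
        if (statuses[i] === TRACK_STAT_OK) {
            tracked++;
        }
    }
    return tracked;
}

console.log(countTrackedFaces([1, 1, 0], 3)); // 2
```

The corresponding FaceData entries in faceDataArray are valid only for slots whose status is TRACK_STAT_OK.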
-
setTrackerConfiguration(trackerConfigFile, au_fitting_disabled, mesh_fitting_disabled)
-
Sets configuration file name.
The tracker configuration file name and other configuration parameters are set and will be used for the next tracking session (i.e. when track() is called). Default configuration files (.cfg) are provided in the www/lib folder. Please refer to the VisageTracker Configuration Manual for further details on using the configuration files and all configurable options.

Parameters:

Name | Type | Argument | Default | Description |
---|---|---|---|---|
trackerConfigFile | string | | | Name of the tracker configuration file. |
au_fitting_disabled | boolean | &lt;optional&gt; | false | Disables the use of the 3D model used to estimate action units (au_fitting_model configuration parameter). |
mesh_fitting_disabled | boolean | &lt;optional&gt; | false | Disables the use of the fine 3D mesh (mesh_fitting_model configuration parameter). |
-
setConfiguration(configuration)
-
Sets tracking configuration.
The tracker configuration object is set and will be used for the next tracking session (i.e. when track() is called).

Parameters:

Name | Type | Description |
---|---|---|
configuration | VisageConfiguration | Configuration object obtained by calling getTrackerConfiguration() function. |
-
getConfiguration() → {VisageConfiguration}
-
Returns tracking configuration.
Returns:
- VisageConfiguration object with the values currently used by tracker.
- Type
- VisageConfiguration
-
setIPD(IPD)
-
Sets the inter pupillary distance.
Inter pupillary distance (IPD) is used by the tracker to estimate the distance of the face from the camera. By default, IPD is set to 0.065 (65 millimetres) which is considered average. If the actual IPD of the tracked person is known, this function can be used to set the IPD. As a result, the calculated distance from the camera will be accurate (as long as the camera focal length is also set correctly). This is important for applications that require accurate distance. For example, in Augmented Reality applications objects such as virtual eyeglasses can be rendered at appropriate distance and will thus appear in the image with real-life scale.
Parameters:

Name | Type | Description |
---|---|---|
IPD | number | The inter pupillary distance (IPD) in meters. |
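The role of the IPD in distance estimation can be illustrated with the usual pinhole-camera relation (an illustrative model, not necessarily the SDK's internal computation): the apparent distance between the pupils in the image shrinks in proportion to the distance from the camera.

```javascript
// Pinhole-camera sketch: if the pupils appear ipdPixels apart in an image
// taken with a focal length of focalLengthPx (in pixels), the face is at
// approximately focalLengthPx * ipdMeters / ipdPixels meters.
function estimateFaceDistance(focalLengthPx, ipdMeters, ipdPixels) {
    return focalLengthPx * ipdMeters / ipdPixels;
}

// With the default IPD of 0.065 m, pupils 52 px apart at an 800 px focal
// length put the face at roughly 1 meter.
console.log(estimateFaceDistance(800, 0.065, 52));
```

This also shows why a wrong IPD skews the result: the estimated distance scales linearly with the IPD value supplied to the tracker.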
-
getIPD() → {number}
-
Returns the current inter pupillary distance (IPD) setting.
IPD setting is used by the tracker to estimate the distance of the face from the camera. See setIPD() for further details.
Returns:
current setting of inter pupillary distance (IPD) in meters.
- Type
- number
-
reset()
-
Reset tracking.
Resets the tracker. The tracker will reinitialize with the next call of the track() function.
-
stop()
-
- Deprecated:
- Stops the tracking.