new VisageFaceAnalyser()
VisageFaceAnalyser contains face analysis algorithms capable of estimating age, gender, and emotion from facial images.
The following types of analysis can be used:
| ANALYSIS TYPE | FUNCTION |
|---|---|
| image analysis | analyseImage() |
| image stream (video) analysis | analyseStream() |
Age, gender and emotion analysis is based on the FaceData obtained from the VisageTracker or VisageDetector API.
Note: After use, the VisageFaceAnalyser object must be deleted to release the allocated memory. Example:
<script>
m_FaceAnalyser = new VisageModule.VisageFaceAnalyser();
...
m_FaceAnalyser.delete();
</script>
Dependencies
VisageFaceAnalyser requires the algorithm data file, the neural network configuration file and the license key file to be preloaded to the virtual file system. The data and neural network configuration files can be found in the www/lib folder.
Data files
An external loader script, visageAnalysisData.js, is provided for preloading the visageAnalysisData.data file.
Changing the location of data files
By default, loader scripts expect the .data files to be in the same location as the application's main HTML file, while visageSDK.wasm is expected to be in the same location as the visageSDK.js library file. However, the location of the .data and .wasm files can be changed.
The code example below shows how to implement the locateFile function and how to set it as an attribute of the VisageModule object.
Configuration file and license key files
The configuration file and the license key files are preloaded using VisageModule's API function assigned to the preRun attribute:
VisageModule.FS_createPreloadedFile(parent, name, url, canRead, canWrite, onload, onerror)
where parent and name are the path on the virtual file system and the name of the file, respectively, and the optional onload and onerror callbacks are invoked on successful or failed preloading.
visage|SDK initialization order
The order in which the VisageModule is declared and library and data scripts are included is important.
- First, the VisageModule object is declared, including preloading of the configuration files and license files and, optionally, changing the location of the data files,
- then the visageSDK.js library script is included, and
- last, the visageAnalysisData.js external data loader script is included.
Sample usage - changing data files location and script including order:
<script>
var licenseName = "lic_web.vlc";
var licenseURL = "lic_web.vlc";
var locateFile = function(dataFileName) { var relativePath = "../../lib/" + dataFileName; return relativePath; };
VisageModule = {
locateFile: locateFile,
preRun: [function() {
VisageModule.FS_createPreloadedFile('/', 'NeuralNet.cfg', "../../lib/NeuralNet.cfg", true, false);
VisageModule.FS_createPreloadedFile('/', 'Head Tracker.cfg', "../../lib/Head Tracker.cfg", true, false);
VisageModule.FS_createPreloadedFile('/', licenseName, licenseURL, true, false, function(){ }, function(){ alert("Loading License Failed!") });
}],
onRuntimeInitialized: onModuleInitialized
}
</script>
<script src="../../lib/visageSDK.js"> </script>
<script src="../../lib/visageAnalysisData.js"> </script>
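To close the loop on the sample above, the sketch below shows one possible shape for the onModuleInitialized callback assigned to onRuntimeInitialized. Only the callback name comes from the sample; its body and the stub module (used so the sketch runs standalone, outside a browser) are assumptions.

```javascript
// Stub standing in for the real VisageModule object populated by
// visageSDK.js; in the browser the actual module is used instead.
var VisageModule = {
    VisageFaceAnalyser: function () {
        this.delete = function () {};
    }
};

var m_FaceAnalyser = null;

// Called once the wasm runtime is initialized and all preloaded files
// (data, configuration, license) are available on the virtual file
// system -- only then is it safe to construct SDK objects.
function onModuleInitialized() {
    m_FaceAnalyser = new VisageModule.VisageFaceAnalyser();
}

onModuleInitialized();
```

Constructing SDK objects before this callback fires would fail, since the wasm runtime and preloaded files would not yet be available.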
Methods
analyseImage(frameWidth, frameHeight, p_imageData, faceData, options, results) → {VFAReturnCode}
Performs face analysis on a given image.
This function is primarily intended for performing face analysis on a single image, or consecutive unrelated images. As such, it outputs raw, unfiltered estimation data without smoothing or averaging.
Note: Prior to using this function, it is necessary to process the facial image or video frame using VisageTracker or VisageDetector and pass the frame and obtained data to this function. When using data from VisageTracker, only data obtained when tracking status was VisageTrackerStatus.TRACK_STAT_OK should be passed.
This function estimates gender, age, and/or emotions for the last image processed by the VisageTracker.track() or VisageDetector.detectFeatures() function.
Parameters:
| Name | Type | Description |
|---|---|---|
| frameWidth | number | Width of the frame. |
| frameHeight | number | Height of the frame. |
| p_imageData | number | Pointer to image pixel data; the size of the array must correspond to frameWidth and frameHeight. |
| faceData | FaceData | FaceData object filled with tracking results from a previous successful (VisageTrackerStatus.TRACK_STAT_OK) call of the VisageTracker.track() or VisageDetector.detectFeatures() function. |
| options | number | Bitwise combination of VFAFlags which determines the analysis operations to be performed. |
| results | AnalysisData | AnalysisData struct containing success flags for individual operations and their assorted results. |
Returns:
Value indicating the status of the performed analysis.
Type: VFAReturnCode
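The single-image call pattern described above can be sketched as follows. Everything here apart from the documented parameter order is an assumption: the analyser object is a stub standing in for a real VisageFaceAnalyser, and the flag values are hypothetical placeholders for actual VFAFlags members (the real return value would be a VFAReturnCode).

```javascript
// Hypothetical placeholder flag values; the real ones are VFAFlags members.
var VFA_AGE = 1, VFA_GENDER = 2, VFA_EMOTION = 4;

// Stub standing in for a real VisageFaceAnalyser so the sketch runs anywhere.
var analyser = {
    analyseImage: function (frameWidth, frameHeight, p_imageData,
                            faceData, options, results) {
        // The real implementation fills `results` (an AnalysisData) with
        // per-operation success flags and values; the stub echoes the request.
        results.requestedOptions = options;
        return "OK";
    }
};

// Request age and gender estimation in one call: the desired operations
// are combined with bitwise OR, and results land in the `results` object.
var results = {};
var status = analyser.analyseImage(320, 240, /* p_imageData */ 0,
                                   /* faceData */ {},
                                   VFA_AGE | VFA_GENDER, results);
```

Since options is a bitmask, any subset of operations can be requested in a single call without separate API entry points.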
analyseStream(frameWidth, frameHeight, p_imageData, faceData, options, results, faceIndex) → {VFAReturnCode}
Performs face analysis on a given image stream (video).
This function is primarily intended for performing face analysis on a continuous stream of related frames containing the same person, such as a video or camera feed. Sampling face analysis data from multiple frames can increase estimation accuracy by averaging the result over multiple frames. Internally, the suitability of frames chosen for analysis is continually evaluated based on head pose and overall tracking quality. This guarantees that the analysis buffer is always working with the best available frames, ensuring the highest possible estimation accuracy.
Important notes:
- This method should only be called with FaceData obtained from a successful tracking operation that returned the VisageTrackerStatus.TRACK_STAT_OK tracking status.
- If the options parameter is changed between subsequent calls to analyseStream(), the internal state will be reset and previously collected analysis data will be lost. For optimal results, the options parameter should remain constant during a single stream analysis session.
- If a new person replaces the old one in the continuous stream, it is necessary to call the resetStreamAnalysis() method; otherwise results will not be correct, as the analyseStream() method has no capability of differentiating faces.
Note: Prior to using this function, it is necessary to process the facial image or video frame using VisageTracker and pass the frame and obtained data to this function. This function estimates gender, age, and/or emotions for the last image processed by the VisageTracker.track() function.
Parameters:
| Name | Type | Description |
|---|---|---|
| frameWidth | number | Width of the frame. |
| frameHeight | number | Height of the frame. |
| p_imageData | number | Pointer to image pixel data; the size of the array must correspond to frameWidth and frameHeight. |
| faceData | FaceData | FaceData object filled with tracking results from a previous successful (VisageTrackerStatus.TRACK_STAT_OK) call of the VisageTracker.track() function. |
| options | number | Bitwise combination of VFAFlags which determines the analysis operations to be performed. |
| results | AnalysisData | AnalysisData struct containing success flags for individual operations and their assorted results. |
| faceIndex | number | Index of the face for which analysis should be performed. |
Returns:
Value indicating the status of the performed analysis.
Type: VFAReturnCode
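A per-frame loop reflecting the notes above might look like the sketch below: the options value stays constant for the whole session, and analyseStream() is only called on frames whose tracking status was TRACK_STAT_OK. The stub analyser, the status values and the flag constant are illustrative assumptions, not the real SDK objects.

```javascript
// Illustrative stand-ins for SDK values and objects.
var TRACK_STAT_OK = 1;
var VFA_AGE = 1;                       // hypothetical VFAFlags placeholder
var analyser = {
    calls: 0,
    analyseStream: function (w, h, pixels, faceData, options, results, faceIndex) {
        this.calls += 1;               // the real method accumulates samples
        return "OK";
    }
};

// Simulated per-frame tracking results; in a real page these would come
// from VisageTracker.track() on each camera/video frame.
var frames = [
    { status: TRACK_STAT_OK, faceData: {} },
    { status: 0,             faceData: {} },   // tracking lost: skip analysis
    { status: TRACK_STAT_OK, faceData: {} }
];

var options = VFA_AGE;                 // keep constant during one session
var results = {};
frames.forEach(function (frame) {
    if (frame.status === TRACK_STAT_OK) {
        analyser.analyseStream(320, 240, 0, frame.faceData,
                               options, results, /* faceIndex */ 0);
    }
});
```

Skipping frames with a failed tracking status keeps low-quality samples out of the internal analysis buffer.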
resetStreamAnalysis(faceIndex)
Resets collected face analysis data, erasing age, gender and emotion data collected up to this point. This is intended for cases when a new person replaces the previous one in the same continuous input stream. If the faceIndex parameter is specified, only data for that specific face is erased; if no parameter is specified, data for all faces is erased.
Parameters:
| Name | Type | Argument | Description |
|---|---|---|---|
| faceIndex | number | optional | Index of the face for which analysis data should be reset |
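For illustration, the sketch below models when resetStreamAnalysis() comes into play. The stub imitates only the buffer-clearing behaviour described above; apart from the method name and its optional faceIndex parameter, all names and values are assumptions.

```javascript
// Stub modelling the collected analysis samples per face index.
var analyser = {
    samples: { 0: 12, 1: 7 },            // pretend frames collected so far
    resetStreamAnalysis: function (faceIndex) {
        if (faceIndex === undefined) {
            this.samples = {};            // no argument: erase data for all faces
        } else {
            this.samples[faceIndex] = 0;  // erase data for one face only
        }
    }
};

// A new person replaced the previous one in face slot 0 of the stream,
// so the data collected for that slot must be discarded before calling
// analyseStream() again:
analyser.resetStreamAnalysis(0);
```

Without this reset, samples from the previous person would keep skewing the averaged estimates, since analyseStream() cannot differentiate faces on its own.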