Linux CameraDetector - age and gender classifier latency / inertia



Hello everyone,

I am using the Emotion SDK on Linux and have found some odd behavior; I am wondering whether it is by design or a bug.

I have set up my CameraDetector to use my C920 webcam (640x480, not HD resolution) and to print out any faces spotted in ImageListener's onImageResults callback.

It spots faces just fine, but when I try to get the estimated age and gender I get AGE_UNKNOWN and Unknown for the first few seconds (I process at 1 fps, by the way, from a 15 fps video source). After a few frames the classifier somehow manages to guess these parameters pretty accurately. It tends to guess gender before age, though.
The video stream is captured by a webcam facing me head-on, with almost no movement, a short exposure time, and good lighting conditions.

Is this by design? Is it more accurate this way (perhaps averaging over frames to filter out outliers)? Can I somehow configure CameraDetector to work like PhotoDetector, so that it guesses these values immediately on the first frame in which a new face appears? I am hoping to analyze crowded places, and any extra latency introduced by the classifier is unwelcome in my scenario.

Kind regards,

P.S. In case I forgot to mention any interesting details, you can find my sandbox source code below:

#include "PhotoDetector.h"
#include "FaceListener.h"
#include "CameraDetector.h"
#include "paths.h"
#include <stdio.h>
#include <unistd.h>
#include <iostream>
#include <map>
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"

class MyListener : public affdex::FaceListener, public affdex::ImageListener
{
public:
    void onFaceFound( float timestamp, affdex::FaceId faceId ) override;
    void onFaceLost( float timestamp, affdex::FaceId faceId ) override;
    void onImageResults(std::map<affdex::FaceId, affdex::Face> faces, affdex::Frame image) override;
    void onImageCapture(affdex::Frame image) override;
};



void MyListener::onImageResults(std::map<affdex::FaceId, affdex::Face> faces, affdex::Frame image)
{
    for (auto & f : faces)
    {
        const char * ageDesc = NULL;
        switch (f.second.appearance.age)
        {
            case affdex::Age::AGE_65_PLUS:  ageDesc = "65+";      break;
            case affdex::Age::AGE_55_64:    ageDesc = "55-64";    break;
            case affdex::Age::AGE_45_54:    ageDesc = "45-54";    break;
            case affdex::Age::AGE_35_44:    ageDesc = "35-44";    break;
            case affdex::Age::AGE_25_34:    ageDesc = "25-34";    break;
            case affdex::Age::AGE_18_24:    ageDesc = "18-24";    break;
            case affdex::Age::AGE_UNDER_18: ageDesc = "Under 18"; break;
            default:                        ageDesc = "Unknown";  break;
        }

        const char * genderDesc = NULL;
        switch (f.second.appearance.gender)
        {
            case affdex::Gender::Male:   genderDesc = "Male";    break;
            case affdex::Gender::Female: genderDesc = "Female";  break;
            default:                     genderDesc = "Unknown"; break;
        }

        std::cout << "=" << f.first << " - gender = " << genderDesc << " - age = " << ageDesc << "\n";
    }
}


void MyListener::onImageCapture(affdex::Frame image)
{
    // Not needed for this test; required by the ImageListener interface.
}

void MyListener::onFaceFound( float timestamp, affdex::FaceId faceId )
{
    std::cout << "+" << timestamp << " - " << faceId << "\n";
}

void MyListener::onFaceLost( float timestamp, affdex::FaceId faceId )
{
    std::cout << "-" << timestamp << " - " << faceId << "\n";
}

int main(int argc, char ** argv)
{
    int c = 0;

    //affdex::PhotoDetector photoDetector(8);
    affdex::CameraDetector detector(0, 15, 1, 2, affdex::FaceDetectorMode::LARGE_FACES);

    std::string classifierPath = "/home/mkarczewski/affdex-sdk/data";
    detector.setClassifierPath(classifierPath);
    detector.setDetectAge(true);
    detector.setDetectGender(true);

    MyListener * listener = new MyListener();
    detector.setFaceListener(listener);
    detector.setImageListener(listener);

    //cv::Mat img = cv::imread("/home/mkarczewski/affdex-sdk/images/family.jpg");
    //detector.process(affdex::Frame(img.size().width, img.size().height, img.data, affdex::Frame::COLOR_FORMAT::RGB));

    detector.start();
    while (c < 60)  // run for about a minute
    {
        std::cout << "Processing " << c++ << "...\n";
        sleep(1);
    }
    detector.stop();

    delete listener;
    return 0;
}


Hi Janek,
This behavior is by design. Judging age and gender from the face alone poses some challenges for our current classifier set, so the SDK collects a set of readings over the first few seconds after it sees a new face before reporting the detected age and gender.

This is described in the Appearance metrics section of our metrics description page:

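If the warm-up frames are getting in your way downstream, one option is to latch the first settled reading per face and ignore the initial unknowns. Here is a minimal sketch of that idea in plain C++, independent of the SDK; `SettledValue` and the `"Unknown"` string sentinel are my own names for illustration — in real code you would compare against `affdex::Age::AGE_UNKNOWN` (and the gender equivalent) instead of a string:

```cpp
#include <map>
#include <string>

// Sentinel stand-in for the SDK's "not yet known" value.
const std::string UNKNOWN = "Unknown";

// Remembers the first settled (non-unknown) reading per face id and keeps
// reporting it, so early warm-up frames never reach downstream code.
class SettledValue
{
public:
    // Returns the latched value for this face, or the raw reading
    // (still UNKNOWN) while the classifier is warming up.
    std::string update(int faceId, const std::string & reading)
    {
        auto it = latched.find(faceId);
        if (it != latched.end())
            return it->second;          // already settled for this face
        if (reading != UNKNOWN)
            latched[faceId] = reading;  // first confident reading: latch it
        return reading;
    }

private:
    std::map<int, std::string> latched;
};
```

Calling `update(faceId, reading)` from each onImageResults callback returns "Unknown" until the classifier settles, and the first settled estimate from then on, even if later frames disagree.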

Hi Matt, thank you for the explanation. The link was very useful; it makes sense.

In the meantime I have been experimenting with the still-image (PhotoDetector) variant of the algorithm, and it works just fine. So instead of processing the live feed I will probably end up processing several frames per second.

This should be a good workaround, except for face tracking. I did not succeed in finding any information in your docs on tracking faces between pictures. By tracking I mean a way of telling whether it is the same person in two pictures. I reckon I would have to roll my own. The lack of tracking does not make the Affectiva SDK useless for my use case; it just makes the data noisier, and I would like to normalize it.

Any thoughts or suggestions on this subject?


@janek_bajanek the SDK doesn't do facial recognition. Face recognition is where you want to identify who a person is regardless of their expression. Our technology solves the complementary problem (also called the 'dual' problem): we want to recognize the expression regardless of the person's identity. Our SDK doesn't care who people are, but rather what they are showing on their faces, and to that end it doesn't perform any identity recognition.
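Since identity recognition is out of scope for the SDK, a lightweight way to associate faces between consecutive stills is bounding-box overlap: if the photos are taken a fraction of a second apart, the same person's face barely moves between them. The sketch below is plain C++ under that assumption; `Box` and `matchFaces` are hypothetical helpers, not SDK API — you would fill the boxes from the face coordinates each result gives you:

```cpp
#include <algorithm>
#include <vector>

// Axis-aligned face bounding box in pixel coordinates.
struct Box { double x, y, w, h; };

// Intersection-over-union of two boxes: 1.0 = identical, 0.0 = disjoint.
double iou(const Box & a, const Box & b)
{
    double x1 = std::max(a.x, b.x);
    double y1 = std::max(a.y, b.y);
    double x2 = std::min(a.x + a.w, b.x + b.w);
    double y2 = std::min(a.y + a.h, b.y + b.h);
    double inter = std::max(0.0, x2 - x1) * std::max(0.0, y2 - y1);
    double uni = a.w * a.h + b.w * b.h - inter;
    return uni > 0.0 ? inter / uni : 0.0;
}

// Greedy matcher: for each face in the current still, returns the index of
// the best-overlapping unclaimed face from the previous still, or -1 if the
// overlap is below minIou (i.e. the face is treated as new).
std::vector<int> matchFaces(const std::vector<Box> & prev,
                            const std::vector<Box> & curr,
                            double minIou = 0.3)
{
    std::vector<int> match(curr.size(), -1);
    std::vector<bool> taken(prev.size(), false);
    for (size_t i = 0; i < curr.size(); ++i)
    {
        double best = minIou;
        for (size_t j = 0; j < prev.size(); ++j)
        {
            double overlap = iou(curr[i], prev[j]);
            if (!taken[j] && overlap > best)
            {
                best = overlap;
                match[i] = (int)j;
            }
        }
        if (match[i] >= 0)
            taken[match[i]] = true;  // each previous face is claimed once
    }
    return match;
}
```

This is deliberately naive (no motion model, greedy assignment), but for stills taken at several frames per second with mostly stationary subjects it is often enough to stitch per-face data into per-person series.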