Save the emotion attributes for analysis


#1

I am working on a research project in Visual Studio 2015 using the AffdexMe C# Windows application, and I want to record the emotion attributes in a file (CSV / Excel / DB). Could you please tell me which class method returns these attributes? I found DrawData(), but I don’t know the details. Kindly help.

Mumtaz


#2

Hi, take a look at https://knowledge.affectiva.com/docs/getting-started-with-the-emotion-sdk-for-windows.

Using the C# API, the results come back in a Face object:
http://affectiva.github.io/developerportal/pages/platforms/v3_4_1/windows/classdocs/Affdex/html/6ba06b80-e3de-63ab-3985-5555663602f7.htm


#3

Hello! I would also like to know how to do this:

I checked the ‘Getting Started’ page, especially the ‘Analyze the camera stream’ link, but I can’t seem to find the part that explains how to extract the information to a specific file. Would you kindly explain how to do it, please?

I have another question, although it is not related to this topic. I would like to use the FaceListener class to know when the capture begins and ends, but I don’t really know how to use the ‘timestamp’ and ‘faceId’ parameters, especially the latter. I’m thinking it would involve either a ‘for’/‘while’ loop or an ‘if’/‘else’, but I’m not sure how to go about it. If you could help me and/or give me some pointers, I’d really appreciate it.

PS: I’ve had a few programming classes, but my course isn’t really about programming, so I’m sorry if these questions are obvious.


#4

To get the emotion metrics, create an object of a class that implements the ImageListener interface and register it with the detector object by calling its setImageListener method. When your listener’s onImageResults callback is invoked, you’ll get a dictionary of Face objects, keyed by face ID:

void onImageResults(
	Dictionary<int, Face> faces,
	Frame frame
)

You can get the metric values for emotions, expressions, etc. from each Face object.

see:
http://affectiva.github.io/developerportal/pages/platforms/v3_4_1/windows/classdocs/Affdex/html/01540d38-9ade-2974-0182-44633d8b0722.htm
and
http://affectiva.github.io/developerportal/pages/platforms/v3_4_1/windows/classdocs/Affdex/html/6ba06b80-e3de-63ab-3985-5555663602f7.htm
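To illustrate the wiring, here is a minimal sketch (the class name is a placeholder, and the direct property reads like face.Emotions.Joy are illustrative; see the Emotions class docs above for the full set of metric properties):

using System;
using System.Collections.Generic;

class MetricsListener : Affdex.ImageListener
{
    // Invoked with the analysis results for each processed frame;
    // the dictionary maps face ID -> Face object.
    public void onImageResults(Dictionary<int, Affdex.Face> faces, Affdex.Frame frame)
    {
        foreach (KeyValuePair<int, Affdex.Face> pair in faces)
        {
            Affdex.Face face = pair.Value;
            // Each emotion metric is a float property on face.Emotions.
            Console.WriteLine("face {0}: joy={1:0.00}, valence={2:0.00}",
                pair.Key, face.Emotions.Joy, face.Emotions.Valence);
        }
        frame.Dispose();
    }

    // Invoked with each captured frame; dispose it if you don't need the pixels.
    public void onImageCapture(Affdex.Frame frame)
    {
        frame.Dispose();
    }
}

// Register before starting the detector:
//   detector.setImageListener(new MetricsListener());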


#5

The FaceListener’s callback methods notify you when a face with a given ID appears or disappears. Each face found in the input frame stream is assigned an ID when it appears, and the results returned via ImageListener.onImageResults are keyed by the face IDs of the faces in the frame.

So for example, you might have one face appear at timestamp 1, and be assigned face ID 0. The FaceListener.onFaceFound callback will be invoked with that information. While it continues to appear, the calls to ImageListener.onImageResults will contain one Face object with key 0. Later, if a second face joins the first one at timestamp 10, it would be assigned face ID 1. The onFaceFound callback would be invoked with that information, and while both faces are present, the calls to onImageResults will contain two Faces in the dictionary, keyed by IDs 0 and 1. If the first face disappears at timestamp 20, but the second one remains, the onFaceLost callback will be invoked to report that face ID 0 is now gone, and subsequent calls to onImageResults will have one Face in the dictionary keyed by the value 1.
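In code, a minimal FaceListener sketch of that behavior might look like the following (the class name is a placeholder, and the callback signatures with a float timestamp and an int face ID are taken from the class docs). Note that no ‘for’/‘while’ loop is needed: the SDK invokes these methods for you as faces come and go.

using System;

class PresenceListener : Affdex.FaceListener
{
    // Called once when a new face is detected in the frame stream.
    public void onFaceFound(float timestamp, int faceId)
    {
        Console.WriteLine("Face {0} appeared at {1:0.00}s", faceId, timestamp);
    }

    // Called once when a previously tracked face is no longer detected.
    public void onFaceLost(float timestamp, int faceId)
    {
        Console.WriteLine("Face {0} disappeared at {1:0.00}s", faceId, timestamp);
    }
}

// Register before starting the detector:
//   detector.setFaceListener(new PresenceListener());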


#6

Hello! Sorry for not coming back here for a while; I’ve been focusing on other parts of my project.

About the FaceListener: I found this code while browsing this website, but I had never seen commands like std::cout, so I did some research and discovered that it’s a standard output stream. I was wondering if there’s another way of coding this without using commands like these, since I don’t know them very well.

http://en.cppreference.com/w/cpp/io/cout
http://www.cplusplus.com/reference/iostream/cout/

About the ImageListener: I think I had already done the first part. This is the code I have (see below). When the program is running, I can see the ‘values’ (valence and engagement) for each emotion, but I think they aren’t being saved anywhere. I’m not sure if I misunderstood the links you provided, but the ToString method is the one used to show those values while the program runs, and I’d like it to save that information to a file (like .txt) as well.

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Reflection;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;

namespace CamDemoDevDay
{
    public partial class ProcessCameraFeed : Form, Affdex.FaceListener, Affdex.ImageListener
    {
        public ProcessCameraFeed(Affdex.Detector detector)
        {
            // Register this form as the listener for face and image events.
            detector.setFaceListener(this);
            detector.setImageListener(this);

            InitializeComponent();
        }

        public void onImageResults(Dictionary<int, Affdex.Face> faces, Affdex.Frame frame)
        {
            foreach (KeyValuePair<int, Affdex.Face> pair in faces)
            {
                Affdex.Face face = pair.Value;

                if (face != null)
                {
                    // Enumerate every emotion metric via reflection and print it.
                    foreach (PropertyInfo prop in typeof(Affdex.Emotions).GetProperties())
                    {
                        float value = (float)prop.GetValue(face.Emotions, null);
                        String output = String.Format("{0}, {1:0.00}", prop.Name, value);
                        System.Console.WriteLine(output);
                    }

                    // Appearance attributes are enums, printed via ToString.
                    System.Console.WriteLine(face.Appearance.Age.ToString());
                    System.Console.WriteLine(face.Appearance.Gender.ToString());
                    System.Console.WriteLine(face.Appearance.Ethnicity.ToString());
                    System.Console.WriteLine(face.Appearance.Glasses.ToString());
                }
            }

            frame.Dispose();
        }

        public void onImageCapture(Affdex.Frame frame)
        {
            frame.Dispose();
        }
    }
}

#7

Hi, just to recap: write a class that implements the FaceListener interface, create an object of that class, and register it with the detector via its setFaceListener method; do likewise for the ImageListener interface using setImageListener. You will then get callbacks to your FaceListener when faces are found or lost, and callbacks to your ImageListener with metric values for each face in a frame.

If your program is successfully receiving these callbacks, and you want to write the information provided to a file but don’t know how to do that, then this is getting away from a question about our SDK and into a general programming topic. For that, I suggest consulting C# programming books or websites for tutorials on file I/O.
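That said, as a starting point, here is a minimal sketch of appending the metrics to a CSV file from inside onImageResults using System.IO.StreamWriter. The file path is a placeholder, and the reflection loop mirrors the code posted in #6:

using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;

class CsvLoggingListener : Affdex.ImageListener
{
    // Placeholder path; point this wherever you want the log written.
    private const string CsvPath = "metrics.csv";

    public void onImageResults(Dictionary<int, Affdex.Face> faces, Affdex.Frame frame)
    {
        // Append one row per face per metric: faceId, metric name, value.
        using (StreamWriter writer = File.AppendText(CsvPath))
        {
            foreach (KeyValuePair<int, Affdex.Face> pair in faces)
            {
                Affdex.Face face = pair.Value;
                foreach (PropertyInfo prop in typeof(Affdex.Emotions).GetProperties())
                {
                    float value = (float)prop.GetValue(face.Emotions, null);
                    writer.WriteLine("{0},{1},{2:0.00}", pair.Key, prop.Name, value);
                }
            }
        }
        frame.Dispose();
    }

    public void onImageCapture(Affdex.Frame frame)
    {
        frame.Dispose();
    }
}

Opening and closing the file on every callback keeps the example simple; for real recording sessions, keep a single StreamWriter open for the lifetime of the detector and close it when processing stops.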