How do you calculate user attention using the pitch, roll, and yaw of the head?


What are the threshold values of pitch, roll, and yaw for the user to be considered attentive? Any info is appreciated.


Attention is our approximation of whether or not an individual is looking at the screen. We use the head orientation to determine whether they are facing forward or looking away, and map that to a 0 to 100 scale: 0 means the person is looking away entirely (and untracked), and 100 means the person is looking directly forward.
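As a rough illustration of the idea, here is a minimal sketch of how head-pose angles could be mapped to a 0 to 100 score. The actual thresholds and weighting Affectiva uses are not published; the linear falloff, the 30° cutoff, and the reduced weight on roll below are purely illustrative assumptions.

```python
import math

def attention_score(pitch, yaw, roll=0.0, max_angle=30.0):
    """Map head-pose angles (degrees) to a 0-100 attention score.

    Hypothetical mapping: max_angle and the roll weighting are
    illustrative assumptions, not Affectiva's published values.
    """
    # Angular deviation from facing the camera; roll matters least
    # for "looking at the screen", so weight it lightly.
    deviation = math.sqrt(pitch**2 + yaw**2 + (0.25 * roll) ** 2)
    # Linear scale: 0 deg deviation -> 100, >= max_angle -> 0.
    score = 100.0 * max(0.0, 1.0 - deviation / max_angle)
    return round(score, 1)

print(attention_score(0, 0))    # facing forward -> 100.0
print(attention_score(10, 20))  # partly turned -> intermediate score
print(attention_score(0, 45))   # looking well away -> 0.0
```

In practice the mapping need not be linear, and a real implementation would also account for whether the face is tracked at all.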


What facial features are considered when determining attention? Only head pose? Isn't eye direction necessary?


Ideally this metric would make use of eye-gaze information; however, we have yet to integrate webcam-only eye tracking into our technology. Additionally, eye tracking normally requires an explicit calibration step, which Affectiva technology does not require.