Can we get results for the facial action units from the SDK, rather than the emotions?
Yes, the AUs are mapped to expressions. If you enable expressions on the detector object, you should be able to get them in the ImageListener callback. Here is the mapping of AUs to expressions (https://www.cs.cmu.edu/~face/facs.htm), and here is how we map expressions to emotions (https://developer.affectiva.com/mapping-expressions-to-emotions/).
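As a rough illustration of the AU-to-expression correspondence (this is not the SDK API itself, just a sketch), the relationship can be thought of as a lookup table. The expression names and AU numbers below are assumptions based on common FACS codes; verify them against the two linked pages before relying on them:

```python
# Hypothetical lookup table: expression name -> FACS action unit code(s).
# Both the expression names and the AU numbers are illustrative and should
# be checked against the FACS reference and Affectiva's mapping page.
EXPRESSION_TO_AUS = {
    "innerBrowRaise": [1],        # AU1 (inner brow raiser)
    "browRaise": [2],             # AU2 (outer brow raiser)
    "browFurrow": [4],            # AU4 (brow lowerer)
    "noseWrinkle": [9],           # AU9 (nose wrinkler)
    "upperLipRaise": [10],        # AU10 (upper lip raiser)
    "smile": [12],                # AU12 (lip corner puller)
    "lipCornerDepressor": [15],   # AU15
    "chinRaise": [17],            # AU17
    "lipPucker": [18],            # AU18
    "lipPress": [24],             # AU24
    "mouthOpen": [25, 26],        # AU25/AU26 (lips part / jaw drop)
    "lipSuck": [28],              # AU28
    "eyeClosure": [43],           # AU43
}

def aus_for_expression(name):
    """Return the FACS AU code(s) associated with an expression, or None."""
    return EXPRESSION_TO_AUS.get(name)
```

In the real SDK you would read the expression scores off each face object in the callback once expression detection is enabled; this table only shows how those expression channels line up with the underlying action units.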