How to take a photo with Affectiva



Hello Dear Affectiva Team!

Could you suggest the best way to take a photo while the Affectiva detector is working?
My use case is the following: the user sees the camera stream on some surface. Affectiva detects the head angles (pitch, yaw, roll), and when these angles meet my criteria I need to take a picture with the front camera at the best quality the device supports.

Thank you!


Hi, could you be more specific about what problem you’re having with our SDK? The SDK does not implement functionality for taking pictures; it just analyzes the images it receives and outputs the metrics which have been enabled (such as the angles you mentioned). If you want to take a picture when those angles meet some criteria, you would need to implement that functionality in your app.
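For instance, the "angles meet some criteria" check could live in a small helper that the app calls from its own onImageResults callback. A minimal sketch, assuming you pass in the pitch/yaw/roll values the SDK reports; the AngleGate class and the threshold values are illustrative, not part of the SDK:

```java
// Sketch of an angle gate for deciding when to trigger a photo capture.
// The class and the thresholds are illustrative, not part of the Affectiva SDK.
public class AngleGate {
    private final float maxAbsPitch, maxAbsYaw, maxAbsRoll;

    public AngleGate(float maxAbsPitch, float maxAbsYaw, float maxAbsRoll) {
        this.maxAbsPitch = maxAbsPitch;
        this.maxAbsYaw = maxAbsYaw;
        this.maxAbsRoll = maxAbsRoll;
    }

    /** Returns true when the head pose is close enough to frontal to take the shot. */
    public boolean shouldCapture(float pitch, float yaw, float roll) {
        return Math.abs(pitch) <= maxAbsPitch
                && Math.abs(yaw) <= maxAbsYaw
                && Math.abs(roll) <= maxAbsRoll;
    }

    public static void main(String[] args) {
        AngleGate gate = new AngleGate(10f, 10f, 15f);
        System.out.println(gate.shouldCapture(2f, -5f, 3f));  // near-frontal face
        System.out.println(gate.shouldCapture(2f, 30f, 3f));  // head turned too far
    }
}
```

The app would call shouldCapture() from each onImageResults invocation and trigger its own capture path when it returns true.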


Thank you for the reply!
The SDK really is great and detects the angles very well. I tried starting your project from GitHub and taking a picture there, but I ran into the following problem:
I need to get the best-quality photo the camera allows. In the method
void onImageResults(List<Face> faces, Frame image, float timeStamp),
the image that can be fetched has fairly low quality.
To work around this, I used the android.hardware.camera2 API, but Affectiva and this API work with different SurfaceViews, and the camera cannot be used in parallel by camera2 and Affectiva: as soon as I start working with the camera2 API, Affectiva stops working. Do you have any way to resolve this?


You may find that the approach used in the FrameDetectorDemo example works better for you: in that example, the integration with the camera is done in the sample code (not in the Affectiva SDK), and it uses the Affectiva FrameDetector class instead of CameraDetector.

Since the FrameDetector class doesn’t access the camera or any surfaces at all, with this approach your code is completely in control of that part of the logic, and you can do whatever you need to do.
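That division of responsibility can be sketched roughly as follows. The FrameConsumer interface and the pipeline class here are hypothetical stand-ins, not the real SDK types; the actual Frame construction and process() call are shown in the FrameDetectorDemo sample:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the detector's frame-processing entry point.
interface FrameConsumer {
    void process(byte[] frameData, float timestampSeconds);
}

// With the FrameDetector approach, the app owns the camera and its surfaces.
// Every preview frame is forwarded to the detector for analysis, while the
// full-resolution capture path (e.g. a camera2 ImageReader) stays entirely
// under the app's control and never conflicts with the SDK.
class CameraPipeline {
    private final FrameConsumer detector;

    CameraPipeline(FrameConsumer detector) { this.detector = detector; }

    void onPreviewFrame(byte[] data, float timestampSeconds) {
        detector.process(data, timestampSeconds); // analysis path
        // capture path: request a high-quality still from the same camera
        // session here when the detected angles meet your criteria
    }
}

public class PipelineDemo {
    public static void main(String[] args) {
        List<Float> seen = new ArrayList<>();
        CameraPipeline pipeline = new CameraPipeline((data, t) -> seen.add(t));
        pipeline.onPreviewFrame(new byte[]{0}, 0.0f);
        pipeline.onPreviewFrame(new byte[]{0}, 0.033f);
        System.out.println(seen.size()); // both frames reached the detector
    }
}
```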


Thank you! I’ll try it


I tried your advice today. It really works perfectly. Thank you! But I noticed one incorrect behaviour. I’ll describe it step by step:

  • Place my face in front of the camera
  • Initialize the FrameDetector
  • The detector starts and detects the face
  • Next, I put the phone away for a few seconds. Of course, the detector returns a face count of 0
  • Then I bring the phone back in front of my face, but the detector no longer detects it. I tried different angles, no emotions, and so on.
  • I only fixed it with one little hack: I call detector.reset() when the face count is 0. But I think pasting this code is a very bad solution.
    Do you have any idea about this?


It’s difficult to guess without seeing your code. Are you still getting callbacks (with face count == 0) after returning your face to the front of the camera?

You’re right that calling reset() repeatedly is not the ideal solution.


Sorry, I think it was my bug. I was not sending every frame to detector.process(..) (only every third), and I was incrementing the timestamp by a fixed 0.001 per frame instead of calculating it the way your example does. When I fixed both, it began to work perfectly!
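For anyone else hitting this: the timestamp should be the frame's capture time in seconds relative to the first frame, derived from a real clock rather than a constant increment. A minimal sketch of that calculation (plain Java for illustration; on Android you would typically feed it SystemClock.elapsedRealtime()):

```java
// Computes per-frame timestamps in seconds relative to the first frame,
// derived from a monotonic clock instead of adding a fixed 0.001 per call.
public class FrameClock {
    private long firstFrameMillis = -1;

    /** @param nowMillis current monotonic time in ms (e.g. SystemClock.elapsedRealtime()) */
    public float timestampSeconds(long nowMillis) {
        if (firstFrameMillis < 0) {
            firstFrameMillis = nowMillis;   // remember when the first frame arrived
        }
        return (nowMillis - firstFrameMillis) / 1000f;
    }

    public static void main(String[] args) {
        FrameClock clock = new FrameClock();
        System.out.println(clock.timestampSeconds(5_000L)); // first frame -> 0.0
        System.out.println(clock.timestampSeconds(5_033L)); // ~33 ms later
    }
}
```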

Thank you very much for your help! I’m very happy!


Great, glad you got things working. I’ll close this topic.