[announcement] iOS SDK 4.0 release



We are excited to announce the availability of version 4.0 of the Affectiva Emotion SDK for iOS. This version includes updates to the face detector and to the expression and emotion models.

Get started now (release notes)

If you already have the SDK installed in your application, make sure to run pod update AffdexSDK-iOS to get the latest version of the SDK.

In addition, we are adding support for distributing the SDK via Carthage; see the Get Started guide for more information.

Abdo Mahmoud
Product Manager, SDK Team



@ahamino Is this version updated to be compatible with iMessage extensions?
Can the AFDXDetector be called directly in this context instead of using AVCapture? Our current AVCapture method is leading to some conflicts.

If not, is there a workaround to get the AFDXDetector working correctly in an iMessage extension?


This release should be compatible with the requirements for an iMessage extension.


We tried checking the AFDXDetector issue in an iMessage extension, but it does not appear to be solved. I was referring to this issue:

@ahmed_almoraly Can you please confirm whether this was solved? The AVCapture method is not giving us results as good as AFDXDetector.


@sumesh_dugar The new SDK is now compatible with iMessage extensions (it will pass App Store validation). The orientation issue is still not fixed; you have to use the AVCapture APIs and pass the images to the detector yourself.


Thanks for the reply @ahmed_almoraly

But we are not getting the kind of output that we get with AFDXDetector. How do you suggest we achieve the same quality of output?

Also, if I may ask, how was the pods issue solved?


What do you mean by the quality of the output? If you’re using AVCapture to handle the camera and then passing the images to AFDXDetector, you’ll get the same results as with the SDK’s own camera implementation. But you’ll be responsible for limiting the number of frames/images passed to the detector per second.


@ahmed_almoraly Please check both projects, one with AVCapture and one with AFDXDetector, and notice the shakiness of the dots in the AVCapture one.


Ahmed, have you had a chance to look at this?


The problem with your AVCapture implementation is that you’re processing every frame you get from the camera.
When you pass images directly to the detector, you’re responsible for controlling how many frames are processed per second.
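A minimal sketch of that kind of throttling in Swift. The FrameThrottler helper below is hypothetical (not part of the Affdex SDK), and the commented-out delegate code assumes you already have an AVCaptureVideoDataOutput feeding frames that you convert and hand to the detector:

```swift
import Foundation

// Hypothetical helper: admits a frame only if enough time has passed
// since the last frame that was processed.
struct FrameThrottler {
    let minInterval: TimeInterval        // e.g. 1.0 / 5.0 for ~5 fps
    var lastProcessed: TimeInterval = -.infinity

    // Returns true if the frame at `timestamp` should be processed.
    mutating func shouldProcess(at timestamp: TimeInterval) -> Bool {
        guard timestamp - lastProcessed >= minInterval else { return false }
        lastProcessed = timestamp
        return true
    }
}

// In your AVCaptureVideoDataOutputSampleBufferDelegate you would then
// drop frames before handing them to the detector, e.g.:
//
// func captureOutput(_ output: AVCaptureOutput,
//                    didOutput sampleBuffer: CMSampleBuffer,
//                    from connection: AVCaptureConnection) {
//     let t = CMSampleBufferGetPresentationTimeStamp(sampleBuffer).seconds
//     guard throttler.shouldProcess(at: t) else { return }
//     // Convert the sample buffer to an image and pass it to the
//     // detector (the exact call depends on your SDK version).
// }
```

The target rate is an assumption to tune; the point is simply that dropping excess frames before they reach the detector, rather than processing every camera frame, is what the SDK’s built-in camera path does for you.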


How many frames per second would you suggest to replicate the quality of AFDXDetector?