Face never processed


#1

I am using the code posted in this query: https://stackoverflow.com/questions/36649238/how-to-use-affdex-sdk-in-swift

The detector delegate method always falls into the else clause, signaling that the image is never processed. Why is this, and how can I fix it? I am trying to analyze the video feed. Thank you


#2

The detector calls this delegate method with the original, unprocessed image for every frame it receives, whether or not it goes on to process that frame. After processing a frame, it calls the method a second time with the processed image and the results. So you end up with two delegate calls for each processed frame.
Whether a frame gets processed depends on the detector’s maxProcessRate property. Setting it to 0 pauses processing, and you will receive only the unprocessed images.

Can you check the value of maxProcessRate?
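A minimal sketch of how the delegate callback can distinguish the two calls described above, assuming the Affdex SDK is linked (the signature is the one quoted later in this thread; the class name here is a placeholder). The key is that `faces` is nil on the pre-processing call and non-nil on the post-processing call, so branching on nil is the intended way to route the two, rather than expecting every callback to carry results:

```swift
import UIKit

// Placeholder class name; conform your own controller instead.
extension MyViewController: AFDXDetectorDelegate {
    func detector(_ detector: AFDXDetector!,
                  hasResults faces: NSMutableDictionary!,
                  for image: UIImage!,
                  atTime time: TimeInterval) {
        if let faces = faces {
            // Second call for this frame: processing finished.
            // `faces` may still be empty if no face was found in the frame.
            print("processed frame at \(time): \(faces.count) face(s)")
        } else {
            // First call: the raw, unprocessed frame. If you only ever reach
            // this branch, processing is likely paused (maxProcessRate == 0).
        }
    }
}
```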


#3

Thanks. I changed the maxProcessRate from 5 to 100 and it seems to be processing faces now. However, the hasResults NSMutableDictionary is always empty when recording a face (not nil, just 0 keys/values). Do you have advice for how to fix this? Much appreciated.

I also realized the detector initializer is deprecated. Could this be causing my issues? How can I initialize the detector in Swift? I tried detector = AFDXDetector(delegate: self, usingCaptureDevice: __, withHorizontalFlip: false, maximumFaces: 1, faceMode: LARGE_FACES). The problem with this is that I am using ARKit and can’t access the AVCaptureDevice from the ARSession.

Thanks


#4

detector(_ detector: AFDXDetector!, hasResults faces: NSMutableDictionary!, for image: UIImage!, atTime time: TimeInterval) is called with nil faces before processing and with a non-nil faces dictionary after processing the frame. A variety of factors come into play in face detection, including lighting, contrast, orientation of the face relative to the camera, size of the face relative to the overall image, and occlusions that make it difficult to discern the shape of the face. Try processing different images and see if you get any results.
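As a sketch of inspecting the post-processing results, assuming (per the Affdex SDK documentation) that the dictionary values are AFDXFace objects with an `emotions` property — the property access here is an assumption, not confirmed by this thread. A non-nil but empty dictionary means the frame was processed but no face was detected in it:

```swift
import UIKit

// Hypothetical helper; `AFDXFace` and `emotions.joy` are assumed from the
// Affdex SDK, and the score range (typically 0...100) may differ.
func handleProcessed(faces: NSMutableDictionary, image: UIImage, at time: TimeInterval) {
    guard faces.count > 0 else {
        // Processed, but no face found: check lighting, face size, occlusion.
        return
    }
    for (faceId, value) in faces {
        guard let face = value as? AFDXFace else { continue }
        print("face \(faceId): joy = \(face.emotions.joy)")
    }
}
```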

If you are already using a capture device, or don’t have access to the current capture device, you can instead initialize the detector to process images:
AFDXDetector(delegate: self, discreteImages: false, maximumFaces: 1, face: LARGE_FACES)
Then, after configuring the detector and calling detector.start(), you can pass images to the detector using processImage(_:atTime:).
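Putting the above together with ARKit: since ARKit owns the camera, frames can be pulled from ARSessionDelegate and handed to the image-based detector manually. This is a hedged sketch, assuming the Affdex initializer and processImage(_:atTime:) behave as described above (the class name, the maxProcessRate value, and the discreteImages choice are assumptions); the ARFrame-to-UIImage conversion via Core Image is standard ARKit/Core Image API:

```swift
import ARKit
import UIKit

// Hypothetical wrapper; adapt to your own session setup.
final class FaceAnalyzer: NSObject, ARSessionDelegate {
    private let detector: AFDXDetector
    private let ciContext = CIContext()

    init(delegate: AFDXDetectorDelegate) {
        // discreteImages: false — consecutive video frames are related,
        // which lets the detector track faces across frames.
        detector = AFDXDetector(delegate: delegate,
                                discreteImages: false,
                                maximumFaces: 1,
                                face: LARGE_FACES)
        detector.maxProcessRate = 10   // frames per second actually processed
        super.init()
        detector.start()
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // ARFrame.capturedImage is a CVPixelBuffer; convert it to a UIImage.
        let ciImage = CIImage(cvPixelBuffer: frame.capturedImage)
        guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
        detector.processImage(UIImage(cgImage: cgImage), atTime: frame.timestamp)
    }
}
```

Note that the pixel buffer from ARKit is in the sensor’s native orientation, so depending on device orientation you may need to rotate the image before the detector can find a face.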