detector(_ detector: AFDXDetector!, hasResults faces: NSMutableDictionary!, for image: UIImage!, atTime time: TimeInterval) would be called with
nil faces before processing and then a non-nil
faces dictionary after processing the frame. A variety of factors can come into play in face detection, including lighting, contrast, orientation of the face relative to the camera, size of the face relative to the overall image size, and occlusions that make it difficult to discern the shape of the face. Try processing different images and see if you get any results.
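The two-phase callback described above can be sketched as a delegate implementation. This is a hedged sketch: it assumes the Affdex iOS SDK is linked and that each dictionary value is an `AFDXFace` whose `expressions.smile` metric is available, as in the SDK's documented delegate pattern.

```swift
import UIKit
import Affdex

class FaceViewController: UIViewController, AFDXDetectorDelegate {
    func detector(_ detector: AFDXDetector!,
                  hasResults faces: NSMutableDictionary!,
                  for image: UIImage!,
                  atTime time: TimeInterval) {
        guard let faces = faces else {
            // faces == nil: the frame has been received but not yet processed.
            return
        }
        // faces != nil: processing is done; one entry per detected face.
        for (faceID, face) in faces {
            if let face = face as? AFDXFace {
                // Example metric; other expressions and emotions are
                // available on the AFDXFace object.
                print("Face \(faceID): smile = \(face.expressions.smile)")
            }
        }
    }
}
```

Checking `faces` for `nil` is how you distinguish the pre-processing call from the post-processing one; ignoring the `nil` case is usually safe unless you want to display the raw frame immediately.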
If you are already using the capture device elsewhere, or don’t have access to the current capture device, you can initialize the detector to process images instead:
AFDXDetector(delegate: self, discreteImages: false, maximumFaces: 1, face: LARGE_FACES)
Pass false for discreteImages when the images are consecutive video frames; pass true when they are unrelated still images.
Then, after configuring the detector and calling
detector.start(), you can pass images to the detector using
processImage(_:atTime:).
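Putting the pieces together, the image-driven flow might look like the following sketch. It assumes the Affdex SDK's `start()` returns an optional `NSError` and that the timestamps passed to `processImage(_:atTime:)` should increase monotonically; the `analyze` helper function is hypothetical.

```swift
import UIKit
import Affdex

// Hypothetical helper: feed an array of still images through the detector.
func analyze(_ images: [UIImage], delegate: AFDXDetectorDelegate) {
    // discreteImages: true here because the inputs are unrelated stills,
    // not consecutive video frames.
    guard let detector = AFDXDetector(delegate: delegate,
                                      discreteImages: true,
                                      maximumFaces: 1,
                                      face: LARGE_FACES) else { return }
    if let error = detector.start() {
        print("Failed to start detector: \(error)")
        return
    }
    for (index, image) in images.enumerated() {
        // atTime is a timestamp in seconds; results arrive on the
        // delegate's detector(_:hasResults:for:atTime:) callback.
        detector.processImage(image, atTime: TimeInterval(index))
    }
    detector.stop()
}
```

Because results are delivered asynchronously through the delegate, the caller should keep the detector alive until the last callback has fired rather than letting it deallocate immediately after the loop.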