Analyze a pre-recorded video file for face and voice emotions


#1

Our requirement is to analyze a pre-recorded video file (mp4) as input, and the output should be the emotions detected from the face and the voice.

I understand that Affectiva offers two options –
a. Media Processing API or Index Service
i. Pros – nothing needs to be installed
ii. Cons – requires internet connectivity
b. Affectiva SDK
i. Pros – works without internet
ii. Cons – the SDK needs to be installed.

  1. Can someone please confirm whether my understanding above is correct? We would also like to explore both of these options. Can you suggest next steps?
  2. Do you have a sample JavaScript version that shows how to analyze a pre-recorded video file (mp4) as input?

#2

Same here. I’m looking to analyze a local video using the JS SDK. Have you had any success yet?


#3

Only by playing it in a video element, copying frames to a canvas, and analyzing them in real time. Speeding up the playback helps, but I’m sure there’s a better way.
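
For what it’s worth, the speed-up is just a property on the video element (a sketch – the element id is an assumption):

// Assumes a <video id="video"> element that already has the mp4 loaded.
var video = document.getElementById('video');
video.playbackRate = 2.0; // play at 2x so the real-time analysis finishes sooner
video.play();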


#4

Maybe https://github.com/schaermu/node-fluent-ffmpeg ?
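
Something like this could pull frames out of the mp4 ahead of time (just a sketch – the file names and the 3 fps rate are assumptions, and the Affectiva step isn’t shown):

var ffmpeg = require('fluent-ffmpeg');

// Sample the video at 3 fps and dump numbered PNG frames to disk,
// which could then be fed to a detector without playing the video.
ffmpeg('input.mp4')
  .outputOptions(['-vf', 'fps=3'])
  .output('frames/frame-%04d.png')
  .on('end', function() {
    console.log('Frame extraction finished');
  })
  .on('error', function(err) {
    console.error('ffmpeg failed: ' + err.message);
  })
  .run();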


#5

I am currently stuck at this very point.
Can you have a look at my post here to see what I am missing?


Any help would be appreciated.
Thanks!


#6

I’m not in a place where I can run your code right now, but you ought to
check the image parameter in onImageResultsSuccess to see if it looks like
real data or if it’s mostly zeros. If it’s the latter, you probably have an
issue with pulling the stream through the canvas.
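
A quick way to sanity-check that inside the callback might be (just a sketch – faces/image/timestamp is the Affectiva JS SDK callback signature, the rest is illustrative):

function onImageResultsSuccess(faces, image, timestamp) {
  // image is an ImageData object; sample its pixel bytes.
  // A blank canvas will sum to (almost) nothing.
  var data = image.data;
  var sum = 0;
  for (var i = 0; i < data.length; i += 400) { // every 100th RGBA pixel
    sum += data[i];
  }
  if (sum === 0) {
    console.warn('Frame at ' + timestamp + 's looks empty - check the canvas draw.');
  } else {
    console.log(faces.length + ' face(s) found at ' + timestamp + 's');
  }
}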

-Ryan


#7

Yeah, that’s what I figured out: the video being drawn to the canvas is not returning proper results. Could you share your code, by any chance?


#8

Since it’s for my employer and proprietary, I can’t post the whole project,
but here are the key bits:

Layout in Jade/Pug:
video#video(preload="auto" controls="false")
  source(type="video/mp4")
canvas#canvas(width="640" height="360")
#textwrapper
  textarea#csv …

JavaScript:
function startAnalysis() {
  var url = $('input#url').val();
  $('#video source').attr('src', url);
  $("#video")[0].load();
  $("#video")[0].volume = 0.25;
  $('#analysis').show();
  $('#urlinput').hide();

  canvas = document.getElementById('canvas');
  ctx = canvas.getContext('2d');
  video = document.getElementById('video');
  detector = new affdex.FrameDetector(affdex.FaceDetectorMode.LARGE_FACES);

  // Set up a loop to draw frames to the canvas element
  video.addEventListener('play', onVideoPlay, false);

  // Set up and start the detector
  detector.detectAllExpressions();
  detector.detectAllEmotions();
  detector.detectAllAppearance();

  detector.addEventListener("onInitializeSuccess", function() {
    document.getElementById('video').play();
    startTimestamp = (new Date()).getTime() / 1000;
    heartbeat = setInterval(analyzeVideoFrame, 333); // ~3 fps

    printout = setInterval(displayBuffer, 20000);
  });
  detector.addEventListener("onInitializeFailure", function() {
    console.error("Affectiva failed to initialize.");
  });

  detector.addEventListener("onImageResultsSuccess", onImageResultsSuccess);
  detector.addEventListener("onImageResultsFailure", onImageResultsFailure);

  detector.start();
}

function onVideoPlay() {
  vid_width = $('#video').width();
  vid_height = $('#video').height();

  $('canvas').attr('width', vid_width);
  $('canvas').attr('height', vid_height);
  $('canvas').show();
  $('#video').hide();
}

function analyzeVideoFrame() {
  // Get the canvas element from the DOM
  var aCanvas = document.getElementById("canvas");
  var context = aCanvas.getContext('2d');

  // Draw the current frame and hand it to the detector
  if (!video.paused && !video.ended) {
    context.drawImage(video, 0, 0);
    detector.process(context.getImageData(0, 0, vid_width, vid_height),
      video.currentTime);
  }
}

-Ryan