Best practices and threads


#1

Is there a document with development guidelines for the Linux platform?

For example, here are some questions I have after playing a little with the SDK:

  1. Should the CameraDetector (or any other detector) be used in a thread other than the main GUI thread? Does it block, or does it spawn its own thread?
  2. When listener notifications are executed, do they run on the same thread in which the detector was created?
  3. Is it costly to start and stop a detector? Should that be done once when the application starts and closes, or can it be done on demand, when the user performs an action for which I need the detector?

Do other detectors behave the same way?
Are there any other best practices for using the SDK?


#2

Hi, sorry for the delayed reply. I posted some answers to your question on S.O.:

but if you have some follow-up questions, please add them here. Thanks.


#3

Actually, I have.
Regarding the start and stop methods: I am using CameraDetector, and if I call start and stop once, everything works. But if I do it again in my application (the workflow can be executed multiple times), no images are grabbed from the camera the next time, and there is an error:
VIDIOC_STREAMON: Bad file descriptor (translated from Polish)

Example log from my app (“Result updated” means a frame was captured by CameraDetector; “Błędny deskryptor pliku” means “Bad file descriptor”):

Starting detector:
HIGHGUI WARNING: V4L/V4L2: VIDIOC_S_CROP
Result updated:

Result updated:
Stopping detector:
Starting detector:
HIGHGUI ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
VIDIOC_STREAMON: Błędny deskryptor pliku
Stopping detector:
Unable to stop the stream.: Błędny deskryptor pliku

I searched the web, and it looks like this happens when the camera is not released properly. That might be the case here, since the camera is off before start is called but stays on after stop is called. Could that be the issue?
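For reference, the failing cycle boils down to this (a simplified sketch; the detector construction arguments, listener wiring, and header path are my assumptions, only start()/stop() are the calls discussed here):

#include <chrono>
#include <thread>
#include <CameraDetector.h>  // assumed Affdex header name

int main() {
    affdex::CameraDetector detector;  // construction arguments omitted
    for (int pass = 0; pass < 2; ++pass) {
        detector.start();  // first pass works; second pass hits VIDIOC_STREAMON
        std::this_thread::sleep_for(std::chrono::seconds(2));  // let frames arrive
        detector.stop();   // camera should be released here, but it stays on
    }
    return 0;
}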

Additionally, why is it not possible to specify the camera resolution in CameraDetector?


#4

After some deeper investigation, the camera itself was probably the reason for the errors here (I am using iSight through VirtualBox).

But if I use a v4l2loopback device, it also doesn’t work: the second call to CameraDetector.start() grabs two frames and then freezes. Can you check whether a sample program with CameraDetector, where start and stop are called multiple times, works?


#5

Finally got it working :slight_smile:

It works with my own loop, which reads frames from the camera and passes them to FrameDetector. I noticed it didn’t work at first because I did not pass the “seconds” parameter to the Frame constructor. Once I did, using a simple counter, it started to work; but when I reset the counter to 0 on the second detector pass, it failed.
Now the counter is never reset to 0 but keeps incrementing, and everything works. In your example I saw that you count seconds from application start, while the docs state that this parameter is some kind of frame id. From that I would assume it should be unique across all frames; but in your example, counting seconds, multiple frames could end up with the same id.
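For context, my loop looks roughly like this (a simplified sketch; the Frame/process signatures follow what I saw in the samples and may not be exact):

#include <chrono>
#include <opencv2/opencv.hpp>
#include <FrameDetector.h>  // assumed Affdex header name

// appStart is captured once at application start and reused across detector
// passes, so timestamps keep increasing and are never reset to 0.
void captureLoop(affdex::FrameDetector &detector, cv::VideoCapture &camera,
                 std::chrono::steady_clock::time_point appStart) {
    cv::Mat image;
    while (camera.read(image)) {
        const float seconds = std::chrono::duration<float>(
            std::chrono::steady_clock::now() - appStart).count();
        affdex::Frame frame(image.cols, image.rows, image.data,
                            affdex::Frame::COLOR_FORMAT::BGR, seconds);
        detector.process(frame);
    }
}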

Can you explain what the purpose of this parameter is and how it should be calculated?


#6

Hey there. I have no answer for the “why”, but I have one for the “how”. Since the value is seconds with a fractional part, two frames will not get the same timestamp unless they are captured at exactly the same time (or within the granularity used).

I do it like this, and it seems to work for fps < 1000 (1 ms granularity):

// firstFrameStamp_ is a std::chrono::steady_clock::time_point taken at the first frame
auto timeStampMsecs = std::chrono::duration_cast<std::chrono::milliseconds>(
    std::chrono::steady_clock::now() - firstFrameStamp_);
float timeStampSeconds = static_cast<float>(timeStampMsecs.count()) / 1000.0f;
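
Side note: std::chrono::duration<float> would give you the seconds value directly, without the manual division.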

Cheers,
Dirk


#7

OK, will try and let you know.


#8

Yes, as the documentation indicates, that parameter is a timestamp in seconds (as a float), so values should increase with each successive frame, and no two frames should have the same timestamp.
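
If the clock you use is coarse enough to produce duplicates, a small guard can enforce strictly increasing values (a hypothetical helper, not part of the SDK):

#include <algorithm>

// Hypothetical helper: returns a timestamp strictly greater than the last
// one, padding by 1 ms when the measured values collide.
float nextTimestamp(float measuredSeconds, float &lastSeconds) {
    const float kMinStep = 0.001f;  // 1 ms
    lastSeconds = std::max(measuredSeconds, lastSeconds + kMinStep);
    return lastSeconds;
}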
