I managed to get some good working interaction between OpenCV and Blender to do face tracking with a simple webcam, in order to control a camera in the BGE.
For those who don’t know what OpenCV is, you will find the answers here. OpenCV is a really powerful library for real-time image processing, so if you are interested in that, I strongly recommend having a look at it!
Anyway, how does this system I developed work? There are basically two components:
- The face tracker
- The real-time BGE camera tracking
The face tracker gets frames from the webcam and, for each frame, checks whether there are faces in the image using the pattern file “haarcascade_frontalface_alt.xml”, which defines the face pattern as a set of coordinates (a lot of them, as you can see for yourself if you open the file!). Beware that multiple faces can be detected, but they won’t work with my example: you must use just one face. Once the bounding box of the face is found, its center and area are calculated. CenterX and CenterY are used to control the camera position on the X and Z axes, while the area drives Y, so the bigger the face, the closer the camera gets to the center of the scene (a cube). In the .blend file the area is not used because I’m still doing some experiments to find the best use for that value, but you can try it yourself, just play with it.
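The mapping from the face’s bounding box to camera controls can be sketched like this (the function and parameter names are my own, not taken from the actual script, so treat it as an illustration of the idea rather than the original code):

```python
def face_to_camera(x, y, w, h, frame_w, frame_h):
    """Map a detected face bounding box to BGE camera controls.

    Returns (cam_x, cam_z, area_ratio): the face center normalized to
    [-1, 1] on both axes, plus the box area as a fraction of the whole
    frame.  A bigger area means the face is closer to the webcam, which
    would drive the camera along Y toward the center of the scene.
    """
    cx = x + w / 2.0                      # center of the bounding box
    cy = y + h / 2.0
    cam_x = (cx / frame_w) * 2.0 - 1.0    # -1 = left edge, +1 = right edge
    cam_z = 1.0 - (cy / frame_h) * 2.0    # +1 = top edge, -1 = bottom edge
    area_ratio = (w * h) / float(frame_w * frame_h)
    return cam_x, cam_z, area_ratio

# A face box dead-center in a 640x480 frame:
print(face_to_camera(280, 200, 80, 80, 640, 480))  # -> (0.0, 0.0, ~0.021)
```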
The face tracking works really well in almost every lighting condition, but I recommend even lighting and a good color contrast between you and the background. I haven’t made any screen recording yet to show you how it works; if somebody wants to do that in the meantime and send me the link to the video, I’ll put it in this post.
Note: this example works on Linux/OSX/Windows platforms as long as the webcam is supported by OpenCV. Mine wasn’t, so I had to use the VideoCapture module for Windows. If you are going to run it on OSX/Linux, change the part of the code regarding webcam device initialization and image capture to use the OpenCV methods instead of the VideoCapture ones, and you are ready to go! Basically you just need a PIL-format image that can be passed to OpenCV for processing.
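For the OpenCV-only path, a minimal capture-and-detect loop could look like the sketch below. Note that it uses the current `cv2` API rather than the older wrappers my script shipped with, and it assumes your webcam is supported by OpenCV directly:

```python
def track_faces(cascade_path, device=0):
    """Grab webcam frames and yield one (x, y, w, h) face box per frame.

    Uses OpenCV's own capture API, so it works on Linux/OSX/Windows
    wherever OpenCV supports the webcam, with no separate VideoCapture
    module needed.  cascade_path should point to
    haarcascade_frontalface_alt.xml.
    """
    import cv2  # imported here so the sketch reads even without OpenCV installed

    detector = cv2.CascadeClassifier(cascade_path)
    cap = cv2.VideoCapture(device)
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5)
            if len(faces) == 1:   # the example only handles a single face
                yield tuple(faces[0])
    finally:
        cap.release()
```

Because it is a generator, nothing is opened until you start iterating, so you can wire it straight into a per-frame update in the BGE.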
You can find both the sources and a win32 build that doesn’t need anything installed on your machine: just run it! The Python script needs OpenCV installed. Once you install it, you will find a Python26 subfolder with libs and DLLs inside the OpenCV folder. Copy them into your Python folder and that’s all; those are the Python wrappers for OpenCV.