The new LIRE web demo is based on Apache Solr and features an index of the MIRFLICKR data set. The new architecture allows for extremely fast retrieval. Moreover, there's a new walk-through video with some short peeks behind the scenes. The source of the plugin will be released in the near future.
The current LireDemo 0.9.4 beta release features a new indexing routine, which is much faster than the old one. It's based on the producer-consumer principle and, hopefully, makes optimal use of I/O and of up to 8 cores of a system. Moreover, the new PHOG feature implementation is included, so you can give it a try. Furthermore, JCD, FCTH and CEDD now use a more compact representation of their descriptors and take up much less storage space. Several small changes include parameter tuning on various descriptors. All the changes are documented in the CHANGES.txt file in the SVN.
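To illustrate the producer-consumer idea behind the new indexing routine, here is a minimal, hypothetical sketch (not the actual LireDemo code): one producer thread reads image paths and puts them on a bounded queue, while several consumer threads take paths off the queue and do the CPU-heavy feature extraction. The class and method names are made up for this example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical producer-consumer indexer: the producer is I/O-bound
// (reading files), the consumers are CPU-bound (extracting features),
// so the two kinds of work overlap instead of blocking each other.
public class ProducerConsumerIndexer {
    private static final String POISON = ""; // sentinel telling a consumer to stop

    public static int index(List<String> imagePaths, int numConsumers)
            throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(100); // bounded: throttles the producer
        AtomicInteger indexed = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(numConsumers);
        for (int i = 0; i < numConsumers; i++) {
            pool.submit(() -> {
                try {
                    for (String path = queue.take(); !path.equals(POISON); path = queue.take()) {
                        // extractFeatures(path); writeToIndex(...);  // CPU-bound work goes here
                        indexed.incrementAndGet();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        for (String path : imagePaths) queue.put(path); // producer side
        for (int i = 0; i < numConsumers; i++) queue.put(POISON); // one sentinel per consumer
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return indexed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> paths = new ArrayList<>();
        for (int i = 0; i < 50; i++) paths.add("img" + i + ".jpg");
        System.out.println(index(paths, 8));
    }
}
```

The bounded queue is the important design choice: if the consumers fall behind, `put` blocks the producer, so memory use stays constant no matter how large the collection is.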
The LIRE web demo now includes an RGB color histogram as well as the MPEG-7 edge histogram implementation. The color histogram works well, for instance, for line art, such as this query. The edge histogram works fine for clear, global edge distributions, as in queries such as this one. However, it performs differently from PHOG. An example of the difference is this PHOG query compared to the corresponding edge histogram query. The image below shows both queries.
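For readers curious how a simple RGB color histogram descriptor works in principle, here is a minimal sketch using only `java.awt.image.BufferedImage`. It is not the LIRE implementation, just the basic idea: quantize each channel into 4 bins, count pixels into the resulting 64 bins, normalize, and compare histograms with an L1 distance.

```java
import java.awt.image.BufferedImage;

// Minimal 64-bin RGB histogram sketch (4 bins per channel); illustrative
// only, not the descriptor shipped with LIRE.
public class RgbHistogram {
    public static double[] extract(BufferedImage img) {
        double[] bins = new double[64];
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                // top two bits of each channel select the bin
                bins[(r >> 6) * 16 + (g >> 6) * 4 + (b >> 6)]++;
            }
        }
        double total = img.getWidth() * (double) img.getHeight();
        for (int i = 0; i < bins.length; i++) bins[i] /= total; // normalize to sum 1
        return bins;
    }

    // L1 (city block) distance, a common histogram similarity metric
    public static double l1(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) d += Math.abs(a[i] - b[i]);
        return d;
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(8, 8, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < 8; y++)
            for (int x = 0; x < 8; x++)
                img.setRGB(x, y, x < 4 ? 0xFF0000 : 0x0000FF); // left half red, right half blue
        double[] h = extract(img);
        System.out.println(h[48] + " " + h[3]); // bin 48: pure red, bin 3: pure blue
    }
}
```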
Topics of interest include, but are not limited to:
– Multimedia content analysis and understanding
– Content-based browsing, indexing and retrieval of images, video and audio
– Advanced descriptors and similarity metrics for multimedia
– Audio and music analysis, and machine listening
– Audio-driven multimedia content analysis
– 2D/3D feature extraction
– Motion analysis and tracking
– Multi-modal analysis for event recognition
– Human activity/action/gesture recognition
– Video/audio-based human behavior analysis
– Emotion-based content classification and organization
– Segmentation and reconstruction of objects in 2D/3D image sequences
– 3D data processing and visualization
– Content summarization and personalization strategies
– Semantic web and social networks
– Advanced interfaces for content analysis and relevance feedback
– Content-based copy detection
– Analysis and tools for content adaptation
– Analysis for coding efficiency and increased error resilience
– Multimedia analysis hardware and middleware
– End-to-end quality of service support
– Multimedia analysis for new and emerging applications
– Advanced multimedia applications
– Proposal for Special Sessions: 4th January 2013
– Notification of Special Sessions Acceptance: 11th January 2013
– Paper Submission: 8th March 2013
– Notification of Papers Acceptance: 3rd May 2013
– Camera-ready Papers: 24th May 2013
See http://wiamis2013.wp.mines-telecom.fr/ for more information.
Recently I posted binaries and packaged libraries for face detection based on OpenCV and OpenIMAJ here and here. Basically, both employ similar algorithms to detect faces in photos. As this is based on supervised classification, not only the algorithm but also the employed training set has a strong influence on the actual precision (and recall) of the results. So, out of interest, I took a look at how well the results of the two libraries are correlated:
           imaj_20  imaj_40  opencv_
  imaj_20    1.000    0.933    0.695
  imaj_40    0.933    1.000    0.706
  opencv_    0.695    0.706    1.000
The table above shows the Pearson correlation of the face detection results using the default models of OpenIMAJ (with a minimum face size of 20 and 40 pixels, respectively) and OpenCV. As can be seen, the results correlate, but they are not the same. The conclusion: make sure to check which one works for your application, and possibly train a model yourself (as actually recommended by the documentation of both libraries).
This experiment used just 171 images, but experiments with larger data sets have shown similar results.
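For reference, the Pearson correlation reported above can be computed with a few lines of Java. The sketch below is generic; the arrays in `main` are made-up per-image face counts, not the data from the actual experiment.

```java
// Pearson correlation coefficient between two equally long samples.
public class Pearson {
    public static double correlation(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0;
        for (int i = 0; i < n; i++) { sx += x[i]; sy += y[i]; }
        double mx = sx / n, my = sy / n;
        double cov = 0, vx = 0, vy = 0;
        for (int i = 0; i < n; i++) {
            cov += (x[i] - mx) * (y[i] - my); // covariance numerator
            vx  += (x[i] - mx) * (x[i] - mx); // variance numerators
            vy  += (y[i] - my) * (y[i] - my);
        }
        return cov / Math.sqrt(vx * vy);
    }

    public static void main(String[] args) {
        double[] a = {1, 2, 0, 3, 1}; // e.g. faces found per image by library A (illustrative)
        double[] b = {1, 2, 1, 3, 0}; // e.g. faces found per image by library B (illustrative)
        System.out.printf("%.3f%n", correlation(a, b));
    }
}
```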
WIAMIS 2012 started in the morning, and the first keynote was given by Prof. Mubarak Shah from the University of Central Florida. He talked about primitives for the detection of human actions. Especially the visualization of his ideas and approaches was really great! Currently the retrieval session is going on.
My own presentation on user intentions in video production is scheduled for Friday as the very last presentation, just before the closing remarks.
Face detection is a common task in image retrieval and management. However, finding a stable, well-maintained and free-to-use Java library for face detection may prove hard. The OpenIMAJ project contains a common approach and yields rather fine results. However, the packaged version of all the JARs used in OpenIMAJ is quite a bundle of classes, making up a 30 MB JAR file.
For those of you just interested in face detection, I compiled and packaged the classes needed for this task into a ~5 MB file. Finding the faces with this library is then actually a three-line task:
MBFImage image = ImageUtilities.readMBF(new File("image.jpg"));
FaceDetector<DetectedFace, FImage> fd = new HaarCascadeDetector(80);
List<DetectedFace> faces = fd.detectFaces(Transforms.calculateIntensity(image));
All the imports needed along with their dependencies are packaged in the facedetect-openimaj.jar file (see archive below).
- FaceDetect-java.zip – ZIP, 5.4M – contains the library and the sample source.
Sometimes you just need a small command line utility to extract local features from an image ... and you have no time to set up and compile OpenCV right now. Here's the solution: I did the task (actually for my students and for me, but you might still use it :).
The utility is absolutely basic stuff. Just start "extractSurf.exe" on Windows 7, give it an image as the first parameter, and it will spit out the SURF feature descriptors (on stdout), headed by the x and y coordinates and the response value. The source – of course – is also provided ... but it's not magic. It's all about the convenience of the binary.
Links to the OpenCV wiki on how to compile the stuff are provided in a small README in the source archive.
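If you want to consume the utility's stdout from Java, a small parser might look like the sketch below. It assumes one whitespace-separated line per keypoint (x, y, response, then the descriptor values); check the README in the source archive for the actual output format before relying on this.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical parser for the extractSurf stdout; the line format
// (x y response d1 ... dn, whitespace-separated) is an assumption.
public class SurfOutputParser {
    public static class Keypoint {
        public final double x, y, response;
        public final double[] descriptor;
        Keypoint(double x, double y, double response, double[] descriptor) {
            this.x = x; this.y = y; this.response = response; this.descriptor = descriptor;
        }
    }

    public static List<Keypoint> parse(String stdout) {
        List<Keypoint> result = new ArrayList<>();
        for (String line : stdout.split("\\R")) { // split on any line break
            line = line.trim();
            if (line.isEmpty()) continue;
            String[] t = line.split("\\s+");
            double[] d = new double[t.length - 3]; // everything after x, y, response
            for (int i = 3; i < t.length; i++) d[i - 3] = Double.parseDouble(t[i]);
            result.add(new Keypoint(Double.parseDouble(t[0]), Double.parseDouble(t[1]),
                                    Double.parseDouble(t[2]), d));
        }
        return result;
    }

    public static void main(String[] args) {
        String sample = "12.5 40.0 0.031 0.1 0.2 0.3\n7.0 8.0 0.020 0.4 0.5 0.6";
        List<Keypoint> kps = parse(sample);
        System.out.println(kps.size() + " keypoints, first at x=" + kps.get(0).x);
    }
}
```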
Netflix was reported last year to be the source of nearly 30% of the North American internet backbone traffic. Well, that's impressive, but it's something many non North Americans can't relate to ... and there's a simple reason for that: the service is not available in many countries. Several well-known and well-received services are restricted to ranges of IP addresses considered to be in a geographic location where users have access to these services. Here is a small but still interesting list of services that obviously have an impact on the usage of the internet, but cannot be accessed in many European countries.
- Netflix – major video streaming service (subscription based)
- Pandora – music streaming service / adaptive online radio (ad supported)
- Hulu – major video streaming service of already aired TV content (ad supported)
- Vevo – music video streaming service (ad supported). Most of the music videos on Vevo are available on YouTube for Austrians, but most of these music videos are not accessible for Germans.
- NBC – video streaming service of already aired NBC TV content.
- ABC – video streaming service of already aired ABC TV content.
Just finished my presentation at ACM MM's open source competition in 2011. Many interested researchers and developers came by to discuss ideas and developments. I'm looking forward to turning many of those ideas into code 😉
For those of you interested in the poster I uploaded it here.
I also uploaded the presentation to slideshare.