I just added a link to the Lire API docs, which are generated from the SVN in the course of the nightly build. You’ll hopefully find them helpful 🙂 Please let me know if you notice errors, missing documentation, or if you have contributions to the documentation.
Find the docs at http://www.itec.uni-klu.ac.at/lire/nightly/api/index.html.
Obviously the release cycle of Lire is quite a long one. Therefore I just added a cronjob for a nightly build on one of our institute’s servers. The current SVN version of Lire is downloaded, compiled, packaged and put online at 0:01 am CET every day, 7 days a week. While there won’t be too much change on a day-to-day scale, you can still obtain a freshly compiled lire.jar for use in your project.
You’ll find the link in the right column of the page or at the end of this post. Please let me know if there are any errors etc.
Two research contributions by my colleagues and me finally made their way online. The paper Adaptive Visual Information Retrieval by Changing Visual Vocabulary Sizes in Context of User Intentions by Marian Kogler, Oge Marques and me investigates how the size and generation process of visual word vocabularies influences retrieval for different degrees of intentionality: a clear search intent, a surfing intent and a browsing intent. The paper Which Video Do You Want to Watch Now? Development of a Prototypical Intention-based Interface for Video Retrieval by Christoph Lagger, Oge Marques and me presents selected results of a large-scale study on the motivations behind video consumption on the internet.
Apache Commons has a nice sub-project called Sanselan. It’s a pure Java image library for reading and writing images in the PNG, PSD (partially), GIF, BMP, ICO, TGA, JPEG and TIFF formats. It also supports the EXIF, IPTC and XMP metadata formats: read for all of them, write for some. Examples for reading and writing images, reading EXIF data, guessing image formats etc. are provided in the source package. Currently Sanselan is available in version 0.97, and this version seems to have been released in 2009. I’m not sure if this counts as an abandoned project, but it definitely doesn’t count as alive 🙂
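As a quick illustration, reading an image and sniffing its format with Sanselan might look like the following sketch (assuming the 0.97 API with `org.apache.sanselan.Sanselan` as the facade class and a file `image.png` on disk):

```java
import java.awt.image.BufferedImage;
import java.io.File;

import org.apache.sanselan.ImageFormat;
import org.apache.sanselan.Sanselan;
import org.apache.sanselan.common.IImageMetadata;

public class SanselanExample {
    public static void main(String[] args) throws Exception {
        File file = new File("image.png"); // hypothetical input file

        // guess the format from the file header, without relying on the extension
        ImageFormat format = Sanselan.guessFormat(file);

        // decode the image in pure Java, no ImageIO involved
        BufferedImage img = Sanselan.getBufferedImage(file);

        // EXIF/IPTC/XMP metadata, if the format carries any (may be null)
        IImageMetadata meta = Sanselan.getMetadata(file);

        System.out.println(format + ": " + img.getWidth() + "x" + img.getHeight());
        if (meta != null) System.out.println(meta);
    }
}
```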
Face detection is a common task in image retrieval and management. However, finding a stable, well maintained and free-to-use Java library for face detection may prove hard. The OpenIMAJ project implements the common Haar cascade approach and yields rather fine results. However, the packaged version of all the JARs used in OpenIMAJ is quite a bunch of classes, making up a 30 MB jar file.
For those of you just interested in face detection I compiled and packaged the classes needed for this task in a ~5MB file. Finding faces with this library is then actually a three-lines-of-code task:
MBFImage image = ImageUtilities.readMBF(new FileInputStream("image.jpg"));
FaceDetector<DetectedFace, FImage> fd = new HaarCascadeDetector(80);
List<DetectedFace> faces = fd.detectFaces(Transforms.calculateIntensity(image));
All the imports needed along with their dependencies are packaged in the facedetect-openimaj.jar file (see archive below).
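To actually do something with the detections — continuing the three lines above — a sketch along these lines should work (assuming OpenIMAJ's `DetectedFace.getBounds()` and the `drawShape(..)` / `ImageUtilities.write(..)` calls; `faces-marked.png` is a hypothetical output name):

```java
// draw each detected bounding box onto the original image in red
for (DetectedFace face : faces) {
    image.drawShape(face.getBounds(), RGBColour.RED);
}
// write the annotated image back to disk, format inferred from the extension
ImageUtilities.write(image, new File("faces-marked.png"));
```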
Again 🙂 Sometimes there is no time to compile OpenCV right here, right now. Therefore I compiled a small win32 command line utility for this exact task. Just give it a photo with some faces as the first parameter and you’ll get the face centers, widths and heights on stdout.
A frequently asked question on the mailing list is: Lire cannot handle my images, what can I do? In most cases it turns out that Java can’t read those images and therefore the indexing routine can’t create a pixel array from the file. Java is unfortunately limited in its ability to handle images. But there are two basic workarounds.
(1) You can convert all images to a format that Java can handle. Just use ImageMagick or some other great tool to batch process your images and convert them all to RGB color JPEGs. This is a non-Java approach and definitely the faster one.
(2) You can circumvent the ImageIO.read(..) method by using ImageJ. ImageJ provides the ImagePlus class, which supports loading and decoding of various formats and is much more error resilient than the pure Java SDK method. Speed, however, is not improved by this approach; it’s rather the other way round.
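A minimal sketch of workaround (2), assuming ImageJ 1.x (`ij.jar`) is on the classpath; `Opener` and `ImagePlus` are the ImageJ classes involved:

```java
import ij.ImagePlus;
import ij.io.Opener;
import java.awt.image.BufferedImage;

public class RobustImageLoader {
    // Loads an image via ImageJ instead of ImageIO.read(..);
    // ImageJ decodes more formats and is more forgiving with broken files.
    public static BufferedImage load(String path) {
        ImagePlus imp = new Opener().openImage(path); // returns null on failure, no exception
        return (imp == null) ? null : imp.getBufferedImage();
    }
}
```

The resulting BufferedImage can then be handed to Lire’s indexing routines as usual.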
Find some code example on how to do this in the wiki.
Sometimes you just need a small command line utility to extract local features from an image … and you have no time to set up and compile OpenCV right away. Here’s the solution: I did the task (actually for my students and myself, but you might use it too :).
The utility is absolutely basic stuff. Just start “extractSurf.exe” on Windows 7, give it an image as the first parameter and it will spit out the SURF feature descriptors (on stdout), each headed by the x and y coordinates and the response value. Source – of course – is also provided … but it’s not magic. It’s all about the convenience of the binary.
Links to the OpenCV wiki on how to compile the stuff are provided in a small README in the source archive.
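If you want to consume the utility’s output from Java, a small parser might look like the following. Note that the exact line layout — x, y, response followed by the descriptor components, whitespace-separated — is my assumption based on the description above; check the README in the source archive for the actual format.

```java
import java.util.Arrays;

public class SurfOutputParser {
    // One interest point from the tool's stdout (assumed layout: "x y response d1 d2 ... dn")
    static class InterestPoint {
        final double x, y, response;
        final double[] descriptor;
        InterestPoint(double x, double y, double response, double[] descriptor) {
            this.x = x; this.y = y; this.response = response; this.descriptor = descriptor;
        }
    }

    // Split one output line on whitespace and convert the tokens to doubles.
    static InterestPoint parseLine(String line) {
        String[] t = line.trim().split("\\s+");
        double[] d = new double[t.length - 3];
        for (int i = 3; i < t.length; i++) d[i - 3] = Double.parseDouble(t[i]);
        return new InterestPoint(Double.parseDouble(t[0]), Double.parseDouble(t[1]),
                Double.parseDouble(t[2]), d);
    }

    public static void main(String[] args) {
        InterestPoint p = parseLine("12.5 34.0 0.87 0.1 -0.2 0.3");
        System.out.println(p.x + "," + p.y + " response=" + p.response
                + " descriptor=" + Arrays.toString(p.descriptor));
    }
}
```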
Netflix was reported last year to be the source of nearly 30% of North American internet backbone traffic. Well, that’s impressive, but it’s something that many non-North Americans can’t relate to … and there’s a simple reason for that: the service is not available in many countries. Several well known and well received services are restricted to ranges of IP addresses considered to be in geographic locations where users are allowed to access these services. Here is a small but still interesting list of services that obviously have an impact on the usage of the internet, but cannot be accessed in many European countries.
- Netflix – major video streaming service (subscription based)
- Pandora – music streaming service / adaptive online radio (ad supported)
- Hulu – major video streaming service of already aired TV content (ad supported)
- Vevo – music video streaming service (ad supported). Most of the music videos on Vevo are available on YouTube for Austrians, but most of them are not accessible for Germans.
- NBC – video streaming service of already aired NBC TV content.
- ABC – video streaming service of already aired ABC TV content.