Tag Archives: Java

Serialization of LIRE global features updated

In the current SVN version three global features have been revisited in terms of serialization. This was necessary as the index of the web demo with 300k images already exceeded 1.5 GB.

Feature         prior         now
EdgeHistogram   320 bytes     40 bytes
JCD             1344 bytes    84 bytes
ColorLayout     14336 bytes   504 bytes

This significant reduction in space leads to (i) smaller indexes, (ii) reduced I/O time, and (iii) therefore, to faster search.

How was this done? Basically it's clever organization of bytes. In the case of JCD the histogram has 168 entries, each in [0,127], so basically half a byte. Therefore, you can stuff two of these values into one byte, but you have to take care of the fact that Java only supports bit-wise operations on ints and that bytes are signed. So the trick is to create an integer in [0, 2^8-1] and then subtract 128 to get it into byte range. The inverse is done for reading. The rest is common bit shifting.
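To make this a bit more concrete, here is a minimal sketch of the packing and unpacking as my own illustration, assuming the values have been quantized so that each fits into four bits; the authoritative code is in JCD.java:

public static byte[] pack(int[] histogram) {
    byte[] packed = new byte[histogram.length / 2];
    for (int i = 0; i < packed.length; i++) {
        // combine two values into one unsigned int in [0, 2^8-1] ...
        int combined = (histogram[2 * i] << 4) | histogram[2 * i + 1];
        // ... and subtract 128 to shift it into the signed byte range [-128, 127]
        packed[i] = (byte) (combined - 128);
    }
    return packed;
}

public static int[] unpack(byte[] packed) {
    int[] histogram = new int[packed.length * 2];
    for (int i = 0; i < packed.length; i++) {
        // inverse: add 128 to restore the unsigned value, then split it again
        int combined = packed[i] + 128;
        histogram[2 * i] = (combined >> 4) & 0x0F;
        histogram[2 * i + 1] = combined & 0x0F;
    }
    return histogram;
}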

The code can be seen either in the JCD.java file in the SVN, or in the snippet at pastebin.com for your convenience.

Video tutorial on how to download and compile LIRE

The realization that setting up the project is not too trivial led to this video how-to. It's available on YouTube and shows all steps, from a freshly started IntelliJ IDEA to running a JUnit test for LIRE. Make sure you watch the video in 1080p / full HD to be able to read all the text.

Links

Canny Edge Detector in Java

With the implementation of the PHOG descriptor I ran into the situation that no well-performing Canny edge detector in pure Java was available. "Pure" in my case means that it just takes a Java BufferedImage instance and computes the edges. Therefore, I had to implement my own :)

As a result there is now a "simple implementation" available as part of LIRE. It takes a BufferedImage and returns another BufferedImage, which contains all the edges as black pixels, while the non-edges are white. The thresholds and the blurring filter used for preprocessing can be changed in code. Usage is dead simple:

// read the input image
BufferedImage in = ImageIO.read(new File("testdata/wang-1000/128.jpg"));
// set up the detector with the input image and the two thresholds
CannyEdgeDetector ced = new CannyEdgeDetector(in, 40, 80);
// filter() returns the edge image: black edge pixels on white background
ImageIO.write(ced.filter(), "png", new File("out.png"));

The result is the picture below:

cannyedge

Links

 

Large image data sets with LIRE – some new numbers

People have lately asked whether LIRE can do more than linear search, and I always answered: yes, it should … but you know I never tried. Finally I got around to indexing the MIR-FLICKR data set together with some of my Flickr-crawled photos and ended up with an index of 1,443,613 images. I used CEDD as the main feature and a hashing algorithm to put multiple hashes per image into Lucene, to be interpreted as words. By tuning similarity, employing a Boolean query, and adding a re-rank step I ended up with a pretty decent approximate retrieval scheme, which is much faster and does not lose too many images along the way, which means the method has an acceptable recall. The image below shows the numbers along with a sample query. Linear search took more than a minute, while the hashing-based approach did (nearly) the same thing in less than a second. Note that this is just a sequential, straightforward approach, so no performance optimization has been done. The hashing approach has also not yet been investigated in detail, i.e. there are some parameters that still need tuning … but let's say it's a step in the right direction.

Results-CEDD-Hashing
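For the curious, here is a rough sketch of what such a hashing-based query followed by a re-rank step might look like. The field names ("hash", "cedd"), the hash(...) helper and the candidate set size are my own assumptions for illustration, not the actual LIRE code, and imports from Lucene 4.x and LIRE are omitted.

// open the index and set up the searcher
IndexReader reader = DirectoryReader.open(FSDirectory.open(new File("index")));
IndexSearcher searcher = new IndexSearcher(reader);

// extract the query feature
CEDD queryFeature = new CEDD();
queryFeature.extract(ImageIO.read(new File("query.jpg")));

// (1) candidate retrieval: each hash value is treated as a word, any match is enough
BooleanQuery query = new BooleanQuery();
for (int h : hash(queryFeature)) // hash(...) stands in for the hashing algorithm
    query.add(new TermQuery(new Term("hash", Integer.toString(h))), BooleanClause.Occur.SHOULD);
TopDocs candidates = searcher.search(query, 1000);

// (2) re-rank the small candidate set with the actual feature distance
for (ScoreDoc sd : candidates.scoreDocs) {
    CEDD candidate = new CEDD();
    // simplified: ignores the offset of the stored BytesRef
    candidate.setByteArrayRepresentation(reader.document(sd.doc).getBinaryValue("cedd").bytes);
    // in practice you would collect these pairs and sort them by distance
    System.out.println(sd.doc + " -> " + queryFeature.getDistance(candidate));
}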

Lire 0.9.3 released

I just uploaded Lire 0.9.3 to the all-new Google Code page. This is the first version with full support for Lucene 4.0. Runtime and memory performance are comparable to the version using Lucene 3.6. I've made several improvements in terms of speed and memory consumption along the way, mostly within the CEDD feature. I've also added two new features:

  • JointHistogram – a 64 bit RGB color histogram joined with pixel rank in the 8-neighborhood, normalized with max-norm, quantized to [0,127], and JSD for a distance function
  • Opponent Histogram – a 64 bit histogram utilizing the opponent color space, normalized with max-norm, quantized to [0,127], and JSD for a distance function
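A minimal usage sketch of the new features, for illustration (the exact constructors and the file names are assumptions):

// extract the new histogram feature for two images and compare them
OpponentHistogram h1 = new OpponentHistogram();
h1.extract(ImageIO.read(new File("a.jpg")));
OpponentHistogram h2 = new OpponentHistogram();
h2.extract(ImageIO.read(new File("b.jpg")));
float d = h1.getDistance(h2); // JSD-based dissimilarity, smaller means more similar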

Both features are fast in extraction (the second one naturally being faster as it does not investigate the neighborhood) and yield nice, visually very similar results in search. See also the image below showing four queries, each with the new features. The first one of a pair is always based on JointHistogram, the second on OpponentHistogram (click to see full size).

Samples-JointHistogram

 

I also changed the Histogram interface to double[], as the double type is so much faster than float on the 64-bit Oracle Java 7 VM. A major bug fix concerned the JSD dissimilarity function, so several histograms now use JSD instead of L1, depending on which performed better on the SIMPLIcity data set (see TestWang.java in the sources).
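For reference, the Jensen-Shannon divergence between two histograms of equal length boils down to something like the following minimal sketch (the LIRE implementation may differ in normalization and in how zero bins are handled):

// JSD of two histograms p and q; assumes non-negative bins and equal length
public static double jsd(double[] p, double[] q) {
    double sum = 0d;
    for (int i = 0; i < p.length; i++) {
        double m = (p[i] + q[i]) / 2d; // the averaged (mixture) distribution
        if (p[i] > 0) sum += 0.5 * p[i] * Math.log(p[i] / m);
        if (q[i] > 0) sum += 0.5 * q[i] * Math.log(q[i] / m);
    }
    return sum;
}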

The final addition is the Lire-SimpleApplication, which provides two classes for indexing and search with CEDD, ready to compile with all libraries and an Ant build file. This may — hopefully — help those who still seek Java enlightenment :-D

Finally this just leaves to say to all of you: Merry Christmas and a Happy New Year!

News on LIRE performance

In the course of finishing the book, I reviewed several aspects of the LIRE code and came across some bugs, including one in the Jensen-Shannon divergence. This dissimilarity measure had never been actively used in any feature, as it didn't work out in retrieval evaluation the way it was meant to. After two hours of staring at the code the realization finally came: in Java the conditional (ternary) operator "x ? y : z" binds more weakly than almost any other operator, including '+'. Hence,

System.out.print(true ? 1 : 0 + 1) prints '1',

while

System.out.print((true ? 1 : 0) + 1) prints '2'

With this problem identified I was finally able to fix the Jensen-Shannon divergence implementation and arrived at new retrieval evaluation results on the SIMPLIcity data set:

                          MAP     p@10    error rate
Color Histogram – JSD     0.450   0.704   0.191
Joint Histogram – JSD     0.453   0.691   0.196
Color Correlogram         0.475   0.725   0.171
Color Layout              0.439   0.610   0.309
Edge Histogram            0.333   0.500   0.401
CEDD                      0.506   0.710   0.178
JCD                       0.510   0.719   0.177
FCTH                      0.499   0.703   0.209

Note that the color histogram in the first row now performs similarly to the "good" descriptors in terms of precision at ten and error rate. Also note that a new feature crept in: Joint Histogram. This is a histogram combining pixel rank and RGB-64 color.

All the new stuff can be found in SVN and in the nightly builds (starting tomorrow :)

Lire migrated to Google Code & new version released

I just uploaded version 0.9.2 of Lire and LireDemo to Google Code. Yes, Google Code! I also migrated the SVN trunk to Google Code (more or less under cover, some months ago) and will continue development there. The main reasons were that the ads over at sf.net were getting more and more aggressive, and that the interface of a Google Code project is much cleaner and easier to handle from a project manager's point of view.

Lire 0.9.2 fixes two critical bugs, one in KMeans and one in GenericImageSearcher. The KMeans fix now allows the bag of visual words approach to be used; the GenericImageSearcher fix makes search much faster.

Links

New very basic Lire sample project

Due to numerous requests I prepared a package showing off a simple indexer and a simple searcher. Inside there are two classes: Indexer and Searcher. Each of them does what its name suggests.

Indexer takes the first command line argument, interprets it as a directory, gets all images from this directory, indexes them, and stores the index in a newly created directory called "index". Searcher then searches exactly this image index for the query image specified by the first argument.

The sample application employs CEDD and provides an Ant build file. IDEs like NetBeans, Eclipse or IntelliJ IDEA should have no problems importing the sources and using the build.xml file for compiling and running. Arguments can be changed in the build.xml file.
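In spirit, the two classes boil down to something like the following sketch built around the LIRE factory classes. The IndexWriter setup depends on the Lucene version and is hidden behind a hypothetical createIndexWriter(...) helper here; the sample project itself is the authoritative version.

// Indexer, roughly: one CEDD document per image in the given directory
DocumentBuilder builder = DocumentBuilderFactory.getCEDDDocumentBuilder();
IndexWriter writer = createIndexWriter("index"); // stand-in for the Lucene-specific setup
for (File image : new File(args[0]).listFiles()) { // assumes the directory contains only images
    writer.addDocument(builder.createDocument(ImageIO.read(image), image.getPath()));
}
writer.close();

// Searcher, roughly: query with a single image and print the ten best matches
IndexReader reader = IndexReader.open(FSDirectory.open(new File("index")));
ImageSearcher searcher = ImageSearcherFactory.createCEDDImageSearcher(10);
ImageSearchHits hits = searcher.search(ImageIO.read(new File(args[0])), reader);
for (int i = 0; i < hits.length(); i++) {
    System.out.println(hits.score(i) + ": "
            + hits.doc(i).getValues(DocumentBuilder.FIELD_NAME_IDENTIFIER)[0]);
}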

Links

Apache Commons Sanselan – Image and Metadata I/O

Apache Commons has a nice sub-project called Sanselan. It's a pure Java image library for reading and writing images in PNG, PSD (partially), GIF, BMP, ICO, TGA, JPEG and TIFF. It also supports the EXIF, IPTC and XMP metadata formats (read for all, write for some). Examples for reading and writing images, EXIF, guessing image formats etc. are provided in the source package. Currently Sanselan is available in version 0.9.7, and the release date of this version seems to be in 2009. I'm not sure if this counts as an abandoned project, but it definitely doesn't count as alive :)
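As a taste of the API, reading an image and its metadata looks roughly like the snippet below. Treat the method names as assumptions from memory; the examples bundled with the sources are the authoritative reference.

File file = new File("image.jpg");
ImageFormat format = Sanselan.guessFormat(file);        // guess the format from the file contents
BufferedImage image = Sanselan.getBufferedImage(file);  // pure-Java decoding, no ImageIO involved
IImageMetadata metadata = Sanselan.getMetadata(file);   // EXIF/IPTC/XMP, where present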

Face Detection in Java

Face detection is a common task in image retrieval and management. However, finding a stable, well-maintained and free-to-use Java library for face detection may prove hard. The OpenIMAJ project contains a common approach and yields rather fine results. However, the packaged version of all the JARs used in OpenIMAJ is quite a bunch of classes, making up a 30 MB JAR file.

For those of you just interested in face detection I compiled and packaged the classes needed for this task into a ~5 MB file. Finding faces with this library is then basically a three-line task:

// read the image, convert it to an intensity image and run the Haar cascade detector
MBFImage image = ImageUtilities.readMBF(new FileInputStream("image.jpg"));
FaceDetector<DetectedFace, FImage> fd = new HaarCascadeDetector(80);
List<DetectedFace> faces = fd.detectFaces(Transforms.calculateIntensity(image));

All the imports needed along with their dependencies are packaged in the facedetect-openimaj.jar file (see archive below).

Files