Tag Archives: cbir

LireSolr updated to Solr 6.3.0

Apache Solr is a search server application in wide use around the world, and LireSolr is the project that integrates LIRE into this search server. As a last work item of the year, we have updated LireSolr to work with Solr 6.3.0. This also involves the move to Lucene 6.3.0 for LIRE and — for both — Java 8 as the minimum SDK version. LireSolr has also found a new home on GitHub: https://github.com/dermotte/liresolr.

There’s detailed documentation on how to set up Solr, how to add a core, and how to configure it for use with LireSolr, and there are even indexing tools to get your photos into Solr. Give it a try on GitHub!

CBMI 2017 – Call For Papers

CBMI aims at bringing together the various communities involved in all aspects of content-based multimedia indexing for retrieval, browsing, management, visualization and analytics.

Authors are encouraged to submit previously unpublished research papers in the broad field of content-based multimedia indexing and applications. In addition to multimedia and social media search and retrieval, we wish to highlight related and equally important issues that build on content-based indexing, such as multimedia content management, user interaction and visualization, media analytics, etc.


  • Full/short paper submission deadline: February 28, 2017
  • Demo paper submission deadline: February 28, 2017
  • Special Session proposals submission: November 16, 2016

more information …

LIRE Use Case “What Anime is this?”

I do not often hear of applications built with LIRE, but when I do, I really appreciate it. The use case of What Anime is this? is exceptional in many ways. First of all, LIRE was applied very well and really solves a problem there; second, Soruly Ho tuned it to search through over 360 million images on a single server with incredibly reasonable response times.

The web page built by Soruly Ho provides a search interface to (re-)find frames in anime videos. Not being into anime myself, I still know it’s hand-drawn or computer animation and hugely popular among fans … and there are a lot of them.

Soruly Ho was kind enough to compile some background information on his project:

Thanks to the LIRE Solr Integration Project, I was able to develop the first prototype just 12 hours after I met LIRE, without touching a line of the source code! After setting up the web server and Solr, I just had to write a few scripts to put all the pieces together. To analyze the video, I use ffmpeg to extract each frame as a JPG file with the timecode as the file name. Then, ParallelSolrIndexer analyzes all these images and generates an XML file. Before loading this XML into Solr, I use a Python script to put the video path and timecode into the title field. Finally, I write a few lines of JavaScript that use the Solr REST API to submit the image URL to the LireRequestHandler. After some magic, it returns a list of matching images sorted by similarity, with the original video path and timecode in the title field. The idea is pretty simple. Every developer can build this.

But scaling is challenging. There are over 15,000 hours of video indexed in my search engine. Assuming they are all 24 fps, there would be 1.3 billion frames in total. This is too big to fit in my server (which is just a high-end PC). Video always plays forward in time, so I use a running window to remove duplicate frames. Unlike real-life video, most anime is actually drawn at 12 fps or less, so this method reduces the number of frames by 70%. Out of the many feature classes supported by LIRE, I use only the Color Layout Descriptor and drop the others to save space, memory and computation time for analysis. Now, each analyzed frame in my Solr index occupies only 197 bytes. Still, relying solely on one image descriptor already achieves very high accuracy.

Even after such optimization, the remaining 366 million frames are still so many that queries would often time out. So I studied and modified a little bit of the LireRequestHandler. (It is great that LIRE is free and open source!) Instead of using the performance-killing BooleanClause.Occur.SHOULD, I search the hashes with BooleanClause.Occur.MUST one by one until a good match is found. I am only interested in images with similarity > 90%, i.e. there is at least one common hash if I select 10 out of 100 hash values at random. The search completes in at most 10 iterations; otherwise, I assume there is no match. But randomness is not good, because results are inconsistent and thus cannot be cached. So I ran an analysis on the hash distribution, and I always start searching from the least populated hash. That way, the similarity calculation is performed on a smaller set of images. The Color Layout Descriptor does not produce an evenly distributed hash on anime: the least populated hash matches only a few frames, while the most populated hash matches over 277 million frames.

The last performance issue is keeping a 67.5 GB index with just 32 GB of RAM, which I think can be solved simply with more RAM.

The actual source I modified, along with my hash distribution table, can be found on GitHub.
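The least-populated-hash-first strategy described above can be sketched in plain Java with a toy posting list. Everything here is illustrative — the class and method names are made up, the `isGoodMatch` predicate stands in for the similarity > 90% re-ranking, and the real code queries Lucene instead of a `Map`:

```java
import java.util.*;
import java.util.function.Predicate;

public class LeastPopulatedHashFirst {
    /**
     * postings: hash value -> ids of indexed frames containing that hash.
     * Tries the query's hashes one at a time, least populated first, and
     * stops as soon as one hash yields a good match.
     */
    static Optional<Integer> search(Map<Integer, List<Integer>> postings,
                                    List<Integer> queryHashes,
                                    Predicate<Integer> isGoodMatch,
                                    int maxTries) {
        List<Integer> hashes = new ArrayList<>(queryHashes);
        // Least populated hash first: its candidate set is smallest, so the
        // expensive similarity computation touches the fewest frames.
        hashes.sort(Comparator.comparingInt(
                h -> postings.getOrDefault(h, Collections.emptyList()).size()));
        int tries = 0;
        for (int h : hashes) {
            if (++tries > maxTries) break;  // give up: assume there is no match
            for (int frameId : postings.getOrDefault(h, Collections.emptyList()))
                if (isGoodMatch.test(frameId))  // stands in for similarity > 90%
                    return Optional.of(frameId);
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> postings = new HashMap<>();
        postings.put(1, Arrays.asList(7, 8, 9, 10));  // a "popular" hash
        postings.put(2, Arrays.asList(42));           // a rare hash
        Optional<Integer> hit = search(postings, Arrays.asList(1, 2), id -> id == 42, 10);
        System.out.println(hit.get());  // prints 42: the rare hash is tried first
    }
}
```

Ordering the hashes deterministically (instead of picking them at random) is also what makes results consistent and therefore cacheable, as noted above.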

You can try What Anime is this? yourself at https://whatanime.ga/. Thanks to Soruly Ho for sharing his thoughts and building this great search engine!

LIRE 1.0b2 released


Today, the first official beta version of LIRE 1.0 has been released. After loads of internal tests we decided to pin it down as quasi-stable. There are loads of new features compared to 0.9.5, including metric spaces indexing, DocValues-based searching, the SIMPLE approach for localizing global descriptors, new global descriptors like CENTRIST, a lot of performance tweaks, and so on.

For those of you using the nightly build or the repository version, not much has changed; everyone else might want to check out the new APIs and possibilities, starting from the SimpleApplication collection and moving on to the LIRE documentation.

You will find the pre-compiled binaries on the download page, hosted by ITEC / Klagenfurt University.



LireDemo 0.9.4 beta released

The current LireDemo 0.9.4 beta release features a new indexing routine, which is much faster than the old one. It’s based on the producer-consumer principle and makes — hopefully — optimal use of I/O and up to 8 cores of a system. Moreover, the new PHOG feature implementation is included and you can give it a try. Furthermore JCD, FCTH and CEDD got a more compact representation of their descriptors and use much less storage space now. Several small changes include parameter tuning on several descriptors and so on. All the changes have been documented in the CHANGES.txt file in the SVN.
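For illustration, a minimal producer-consumer indexing loop in plain Java could look like the sketch below. This is only the general principle, not LireDemo’s actual code: all names are made up, the image files are simulated, and the feature extraction step is left as a comment.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class ProducerConsumerIndexing {
    private static final String POISON = "__END__";

    /** "Indexes" nImages simulated files; returns how many the consumers processed. */
    static int runIndexing(int nImages) {
        int consumers = Math.min(8, Runtime.getRuntime().availableProcessors());
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(100); // bounded: producer blocks if consumers lag
        AtomicInteger processed = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(consumers);

        // Producer: I/O-bound, reads image files (simulated here) into the queue.
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < nImages; i++) queue.put("image-" + i + ".jpg");
                for (int i = 0; i < consumers; i++) queue.put(POISON); // one stop signal per consumer
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();

        // Consumers: CPU-bound, take images off the queue and "extract features".
        for (int i = 0; i < consumers; i++) {
            pool.submit(() -> {
                try {
                    String file;
                    while (!(file = queue.take()).equals(POISON)) {
                        processed.incrementAndGet(); // feature extraction + document creation would go here
                    }
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
        }
        pool.shutdown();
        try {
            producer.join();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return processed.get();
    }

    public static void main(String[] args) {
        System.out.println(runIndexing(1000)); // prints 1000
    }
}
```

The bounded queue is the key design choice: the single I/O-bound reader can never run far ahead of the CPU-bound feature extractors, so memory stays flat while all cores stay busy.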


Serialization of LIRE global features updated

In the current SVN version, three global features have been revisited in terms of serialization. This was necessary as the index of the web demo, with 300k images, already exceeded 1.5 GB.

Feature        prior        now
EdgeHistogram  320 bytes    40 bytes
JCD            1344 bytes   84 bytes
ColorLayout    14336 bytes  504 bytes

This significant reduction in space leads to (i) smaller indexes, (ii) reduced I/O time, and (iii) therefore, to faster search.

How was this done? Basically, it’s clever organization of bytes. In the case of JCD, the histogram has 168 entries, each a small integer that fits into half a byte. Therefore, you can stuff two of these values into one byte, but you have to take care of the fact that Java only supports bit-wise operations on ints, and bytes are signed. So the trick is to create an integer in [0, 2^8-1] and then subtract 128 to get it into byte range. The inverse is done for reading. The rest is common bit shifting.
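In code, the trick reads roughly like this. This is my own minimal sketch of the idea with hypothetical names, not the actual JCD.java source; it assumes two quantized values that each fit into four bits:

```java
public class NibblePacking {
    /** Pack two values in [0, 15] into a single signed Java byte. */
    static byte pack(int high, int low) {
        int unsigned = (high << 4) | low;   // value in [0, 255]
        return (byte) (unsigned - 128);     // shift into the signed byte range [-128, 127]
    }

    /** Inverse: recover the two 4-bit values from the signed byte. */
    static int[] unpack(byte b) {
        int unsigned = b + 128;             // back to [0, 255]
        return new int[]{ unsigned >> 4, unsigned & 0x0F };
    }

    public static void main(String[] args) {
        // round-trip every possible pair of nibble values
        for (int hi = 0; hi < 16; hi++)
            for (int lo = 0; lo < 16; lo++) {
                int[] u = unpack(pack(hi, lo));
                if (u[0] != hi || u[1] != lo) throw new AssertionError();
            }
        System.out.println("round-trip ok");
    }
}
```

Reading reverses the shift by adding 128 and then unpacks the two nibbles with a shift and a mask.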

The code can be seen either in the JCD.java file in the SVN, or in the snippet at pastebin.com for your convenience.

Canny Edge Detector in Java

With the implementation of the PHOG descriptor I ran into the situation that no well-performing Canny edge detector in pure Java was available. “Pure” in my case means that it just takes a Java BufferedImage instance and computes the edges. Therefore, I had to implement my own 🙂

As a result, there is now a “simple implementation” available as part of LIRE. It takes a BufferedImage and returns another BufferedImage, which contains all the edges as black pixels, while the non-edges are white. The thresholds and the blurring filter used for preprocessing can be changed in code. Usage is dead simple:

BufferedImage in = ImageIO.read(new File("testdata/wang-1000/128.jpg")); 
CannyEdgeDetector ced = new CannyEdgeDetector(in, 40, 80); 
ImageIO.write(ced.filter(), "png", new File("out.png"));

The result is the picture below:



Pyramid Histogram of Oriented Gradients (PHOG) implemented in LIRE

Yesterday I checked in the latest LIRE revision featuring the PHOG descriptor. It basically goes along image edge lines (using the Canny edge detector) and builds a fuzzy histogram of gradient directions. Furthermore, it does that at different pyramid levels, meaning that the image is split up like a quad-tree and each sub-image gets its own histogram. The histograms of all levels & sub-images are concatenated and used for retrieval. First tests on the SIMPLIcity data set have shown that the current configuration of PHOG included in LIRE outperforms the EdgeHistogram descriptor.
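To make the pyramid layout concrete, here is a tiny back-of-the-envelope sketch. The bin and level counts below are placeholder values, not LIRE’s actual PHOG parameters: level l splits the image into 4^l sub-images, each sub-image contributes one orientation histogram, and everything is concatenated.

```java
public class PhogLayout {
    /** Length of the concatenated descriptor for a given bin count and pyramid depth. */
    static int descriptorLength(int bins, int levels) {
        int cells = 0;
        for (int l = 0; l <= levels; l++)
            cells += (int) Math.pow(4, l);  // quad-tree split: 1 + 4 + 16 + ...
        return cells * bins;
    }

    public static void main(String[] args) {
        // e.g. 3 pyramid levels and 30 orientation bins: (1+4+16+64) * 30 = 2550 values
        System.out.println(descriptorLength(30, 3));
    }
}
```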

You can find the latest version of LIRE in the SVN & in the nightly builds.


  • A. Bosch, A. Zisserman & X. Munoz. 2007. Representing shape with a spatial pyramid kernel. In Proceedings of CIVR ’07 — [DOI] [PDF]

Large image data sets with LIRE – some new numbers

People lately asked whether LIRE can do more than linear search, and I always answered: yes, it should … but you know, I never tried. Finally I got around to indexing the MIR-FLICKR data set plus some of my Flickr-crawled photos, and ended up with an index of 1,443,613 images. I used CEDD as the main feature and a hashing algorithm to put multiple hashes per image into Lucene — to be interpreted as words. By tuning similarity, employing a Boolean query, and adding a re-rank step, I ended up with a pretty decent approximate retrieval scheme, which is much faster and does not lose too many images on the way, meaning the method has an acceptable recall. The image below shows the numbers along with a sample query. Linear search took more than a minute, while the hashing-based approach did (nearly) the same thing in less than a second. Note that this is just a sequential, straightforward approach, so no performance optimization has been done. Also, the hashing approach has not yet been investigated in detail, i.e. there are some parameters that still need tuning … but let’s say it’s a step in the right direction.
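The hash-then-re-rank idea can be illustrated with a self-contained toy in plain Java. All data structures and numbers here are made up for illustration; in LIRE the candidate step is a Lucene Boolean query over hash terms and the re-rank step uses the actual CEDD distance:

```java
import java.util.*;

public class HashThenRerank {
    /**
     * Stage 1: collect candidates sharing at least one hash with the query (cheap).
     * Stage 2: re-rank only those candidates by true feature distance (expensive).
     */
    static List<Integer> search(Map<Integer, int[]> imageHashes,   // imageId -> hash values
                                Map<Integer, double[]> features,   // imageId -> feature vector
                                int[] queryHashes, double[] queryFeature, int k) {
        Set<Integer> q = new HashSet<>();
        for (int h : queryHashes) q.add(h);
        List<Integer> candidates = new ArrayList<>();
        for (Map.Entry<Integer, int[]> e : imageHashes.entrySet())
            for (int h : e.getValue())
                if (q.contains(h)) { candidates.add(e.getKey()); break; }
        // Re-rank the (hopefully small) candidate set with the real distance.
        candidates.sort(Comparator.comparingDouble(id -> l2(features.get(id), queryFeature)));
        return candidates.subList(0, Math.min(k, candidates.size()));
    }

    static double l2(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }

    public static void main(String[] args) {
        Map<Integer, int[]> hashes = new HashMap<>();
        hashes.put(1, new int[]{3, 5}); hashes.put(2, new int[]{5, 9}); hashes.put(3, new int[]{7, 8});
        Map<Integer, double[]> feats = new HashMap<>();
        feats.put(1, new double[]{0.9, 0.1}); feats.put(2, new double[]{0.2, 0.8}); feats.put(3, new double[]{0.5, 0.5});
        // Image 3 shares no hash with the query, so it is never even compared.
        System.out.println(search(hashes, feats, new int[]{5}, new double[]{0.0, 1.0}, 2)); // prints [2, 1]
    }
}
```

The speed-up comes from stage 1 skipping most of the index; recall depends on how likely similar images are to share at least one hash, which is exactly the parameter tuning mentioned above.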


Updates on LIRE (SVN rev 39)

LIRE is not a sleeping beauty, so there’s something going on in the SVN. I recently checked in updates to Lucene (now 4.2) and Commons Math (now 3.1.1). I also removed some deprecated code left over from Lucene 3.x.

The most notable addition, however, is the Extractor / Indexor class pair. These are command-line applications that allow you to extract global image features from images, put them into an intermediate data file and then — with the help of Indexor — write them to an index. All images are referenced relative to the intermediate data file, so this approach can be used to preprocess a whole lot of images from different computers on a network file system. Extractor also takes a file list of images as input (one image per line) and can therefore easily be run in parallel: just split your global file list into n smaller, non-overlapping ones and run n Extractor instances. As the extraction part is the slow one, this should allow for a significant speed-up when run in parallel.

Extractor is run with

$> Extractor -i <infile> -o <outfile> -c <configfile>
  • <infile> gives the images, one per line. Use “dir /s /b *.jpg > list.txt” to create a compatible list on Windows.
  • <outfile> gives the location and name of the intermediate data file. Note: it has to be in a folder that is a parent to all the images!
  • <configfile> gives the list of features to extract as a Java Properties file. The supported features are listed at the end of this post.

Indexor is run with

$> Indexor -i <input-file> -l <index-directory>
  • <input-file> is the output file of Extractor, the intermediate data file.
  • <index-directory> is the directory of the index that the images will be added to (appended, not overwritten).

Features supported by Extractor:

  • net.semanticmetadata.lire.imageanalysis.CEDD
  • net.semanticmetadata.lire.imageanalysis.FCTH
  • net.semanticmetadata.lire.imageanalysis.OpponentHistogram
  • net.semanticmetadata.lire.imageanalysis.JointHistogram
  • net.semanticmetadata.lire.imageanalysis.AutoColorCorrelogram
  • net.semanticmetadata.lire.imageanalysis.ColorLayout
  • net.semanticmetadata.lire.imageanalysis.EdgeHistogram
  • net.semanticmetadata.lire.imageanalysis.Gabor
  • net.semanticmetadata.lire.imageanalysis.JCD
  • net.semanticmetadata.lire.imageanalysis.JpegCoefficientHistogram
  • net.semanticmetadata.lire.imageanalysis.ScalableColor
  • net.semanticmetadata.lire.imageanalysis.SimpleColorHistogram
  • net.semanticmetadata.lire.imageanalysis.Tamura