Searching with Lire in big datasets

Having received several complaints about the slowness of Lire when searching 100k+ documents, I took the time to write a small how-to explaining approaches for searching in (relatively) big data sets.

Lire can create indexes with lots of different features (descriptors, like RGB color histograms or CEDD). While this allows for flexibility at search time, as we can select the feature at the moment we create a query, the index tends to get bigger and bigger and searches take longer and longer.
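As a rough sketch of how a stripped-down index can be built in code (assuming the DocumentBuilder API of the LIRE 0.9.x / Lucene 3.x era; class names and constructors differ between versions, and the "images" folder and "cedd-index" directory are just placeholders), indexing only the CEDD feature could look like this:

    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    import net.semanticmetadata.lire.DocumentBuilder;
    import net.semanticmetadata.lire.DocumentBuilderFactory;

    import org.apache.lucene.analysis.SimpleAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    public class CeddIndexer {
        public static void main(String[] args) throws Exception {
            // Builder creating documents that contain only the CEDD descriptor
            // and the image identifier -- nothing else is stored.
            DocumentBuilder builder = DocumentBuilderFactory.getCEDDDocumentBuilder();

            IndexWriterConfig config = new IndexWriterConfig(
                    Version.LUCENE_36, new SimpleAnalyzer(Version.LUCENE_36));
            IndexWriter writer = new IndexWriter(
                    FSDirectory.open(new File("cedd-index")), config);

            for (File imageFile : new File("images").listFiles()) {
                BufferedImage img = ImageIO.read(imageFile);
                Document doc = builder.createDocument(img, imageFile.getAbsolutePath());
                writer.addDocument(doc);
            }
            writer.close();
        }
    }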

With a data set of 121,379 images, the index created with the features selected by default in Lire Demo has a size of 14.3 GB on disk. In contrast, an index storing just the CEDD feature along with the image identifier has a size of 29 MB.

Due to the size of the index, linear search also tends to get slower. While searching the index stripped down to the CEDD feature and the identifier takes roughly 0.33 seconds (on an AMD quad-core computer with 4 GB RAM and Java 1.7), searching the big index takes 7 minutes and 3 seconds.
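For reference, a linear search against such a CEDD-only index might look like the following sketch (again assuming the LIRE 0.9.x / Lucene 3.x era API; the query image path is a placeholder):

    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    import net.semanticmetadata.lire.DocumentBuilder;
    import net.semanticmetadata.lire.ImageSearchHits;
    import net.semanticmetadata.lire.ImageSearcher;
    import net.semanticmetadata.lire.ImageSearcherFactory;

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.store.FSDirectory;

    public class CeddSearch {
        public static void main(String[] args) throws Exception {
            BufferedImage query = ImageIO.read(new File("query.jpg"));
            IndexReader reader = IndexReader.open(
                    FSDirectory.open(new File("cedd-index")));

            // Linear search: the searcher reads the CEDD descriptor of every
            // indexed document and compares it to the descriptor of the query.
            ImageSearcher searcher = ImageSearcherFactory.createCEDDImageSearcher(10);
            ImageSearchHits hits = searcher.search(query, reader);

            for (int i = 0; i < hits.length(); i++) {
                String id = hits.doc(i).getValues(
                        DocumentBuilder.FIELD_NAME_IDENTIFIER)[0];
                System.out.println(hits.score(i) + ": " + id);
            }
            reader.close();
        }
    }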

So if you want to index and search big data sets (> 100,000 images, for instance), I recommend that you

  • select which features you need,
  • create the index with a minimum set of features,
  • possibly split the index per feature and select the index on the fly instead of the feature, and
  • load the index into RAM (see the sketch after this list).
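Loading the index into RAM is a Lucene-level concern rather than a Lire-specific one. A minimal sketch, assuming Lucene 3.x where RAMDirectory has a copy constructor taking a Directory (later versions also require an IOContext), could look like this:

    import java.io.File;

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.store.RAMDirectory;

    public class RamIndexSearch {
        public static void main(String[] args) throws Exception {
            // Copy the on-disk index into main memory; only feasible when the
            // (stripped-down) index actually fits into the Java heap.
            RAMDirectory ramDir = new RAMDirectory(
                    FSDirectory.open(new File("cedd-index")));
            IndexReader reader = IndexReader.open(ramDir);
            // ... hand the reader to the ImageSearcher as in the search sketch above ...
            reader.close();
        }
    }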

For more on loading the index into RAM and the option to use local features, read on in the developer wiki.