I do not often hear about applications built with LIRE, but when I do, I really appreciate it. The use case of What Anime is this? is exceptional in many ways: first, LIRE is applied very well and genuinely solves a problem there, and second, Soruly Ho tuned it to search through over 360 million images on a single server with remarkably reasonable response times.
The web page built by Soruly Ho provides a search interface for (re-)finding frames in anime videos. Not being into anime myself, I still know that it is hand-drawn or computer animation and that it is hugely popular among fans … and there are a lot of them.
Soruly Ho was kind enough to compile some background information on his project:
Thanks to the LIRE Solr Integration Project, I was able to develop the first prototype just 12 hours after I met LIRE, without touching a single line of the source code! After setting up the web server and Solr, I only had to write a few scripts to put all the pieces together. To analyze the video, I use ffmpeg to extract each frame as a jpg file with the timecode as the file name. Then, the ParallelSolrIndexer analyzes all these images and generates an XML file. Before loading this XML into Solr, I use a Python script to put the video path and timecode into the title field. Finally, I wrote a few lines of JavaScript that use the Solr REST API to submit the image URL to the LireRequestHandler. After some magic, it returns a list of matching images sorted by similarity, with the original video path and timecode in the title field. The idea is pretty simple. Any developer can build this.
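As a rough illustration of the small glue script mentioned above, here is a minimal Python sketch that rewrites the title field in the XML produced by ParallelSolrIndexer. The field names (id, title), the frame file layout (one folder per video, one jpg per timecode), and the title format are assumptions made for this example, not details confirmed in the post.

```python
# Sketch of the post-processing step: copy the video path and timecode
# (encoded in each frame's file name) into the Solr "title" field of the
# XML produced by ParallelSolrIndexer. Field names and the frame layout
# ("<video path>/<timecode>.jpg") are assumptions for illustration only.
import os
import xml.etree.ElementTree as ET

def rewrite_titles(xml_in: str, xml_out: str) -> None:
    tree = ET.parse(xml_in)
    for doc in tree.getroot().iter("doc"):
        fields = {f.get("name"): f for f in doc.findall("field")}
        frame_path = fields["id"].text                 # e.g. /videos/show_ep01.mp4/00_12_34.567.jpg
        video_path = os.path.dirname(frame_path)       # /videos/show_ep01.mp4
        timecode = os.path.splitext(os.path.basename(frame_path))[0]  # 00_12_34.567
        title = fields.get("title")
        if title is None:
            title = ET.SubElement(doc, "field", {"name": "title"})
        title.text = f"{video_path} {timecode}"        # separator is arbitrary here
    tree.write(xml_out, encoding="utf-8")

if __name__ == "__main__":
    rewrite_titles("lire-index.xml", "lire-index-titled.xml")
```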
But scaling is challenging. There are over 15,000 hours of video indexed in my search engine. Assuming they are all 24 fps, that would be 1.3 billion frames in total, which is too much to fit on my server (just a high-end PC). Video always plays forward in time, so I use a running window to remove duplicate frames. Unlike real-life video, most anime is actually drawn at 12 fps or less, so this method reduces the number of frames by about 70%. Of the many feature classes supported by LIRE, I use only the Color Layout Descriptor and drop the others to save space, memory, and computation time during analysis. Each analyzed frame now occupies only 197 bytes in my Solr index. Still, relying solely on one image descriptor already achieves very high accuracy.

Even after this optimization, the remaining 366 million frames are still so many that queries would often time out. So I studied the LireRequestHandler and modified it a little. (It is great that LIRE is free and open source!) Instead of using the performance-killing BooleanClause.Occur.SHOULD, I search the hashes with BooleanClause.Occur.MUST one by one until a good match is found. I am only interested in images with a similarity above 90%, which means there is at least one common hash if I select 10 out of 100 hash values at random. The search completes in at most 10 iterations; otherwise, I assume there is no match. But random selection is not good, because the results are inconsistent and therefore cannot be cached. So I ran an analysis of the hash distribution and always start searching from the least populated hash; this way, the similarity calculation is performed on a smaller set of images. The Color Layout Descriptor does not produce an evenly distributed hash on anime: the least populated hash matches only a few frames, while the most populated hash matches over 277 million frames.

The last performance issue is keeping a 67.5 GB index with just 32 GB of RAM, which I think can be solved simply with more RAM.
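To make the query strategy concrete, here is a rough Python sketch of the same idea seen from the client side: take the query image's hashes, try them one at a time as a single required term, starting with the globally rarest hash, and stop as soon as a good match turns up or after ten attempts. Soruly Ho's actual change lives inside the Java LireRequestHandler; the Solr core name, the hash field cl_ha, and the scoring callback below are placeholders for illustration.

```python
# Illustrative sketch of the "one required hash at a time" strategy.
# The real change is inside LireRequestHandler (Java); here the same idea is
# shown against Solr's standard select endpoint. The core name "lire", the
# hash field "cl_ha" and the scoring step are assumptions for the example.
import requests

SOLR_SELECT = "http://localhost:8983/solr/lire/select"  # placeholder core name
MAX_ITERATIONS = 10    # give up after 10 hash lookups, as described in the post
GOOD_MATCH = 0.9       # the "similarity > 90%" threshold from the post

def candidates_for_hash(hash_value: str, rows: int = 100) -> list[dict]:
    """Fetch documents that contain one specific hash value (a required term)."""
    params = {"q": f"cl_ha:{hash_value}", "rows": rows, "wt": "json"}
    response = requests.get(SOLR_SELECT, params=params, timeout=10)
    response.raise_for_status()
    return response.json()["response"]["docs"]

def search(query_hashes: list[str], hash_frequency: dict[str, int], similarity) -> dict | None:
    """Try the rarest hashes first so each iteration touches a small candidate set."""
    ordered = sorted(query_hashes, key=lambda h: hash_frequency.get(h, 0))
    for hash_value in ordered[:MAX_ITERATIONS]:
        best_score, best_doc = 0.0, None
        for doc in candidates_for_hash(hash_value):
            score = similarity(doc)  # e.g. Color Layout distance on the stored descriptor
            if score > best_score:
                best_score, best_doc = score, doc
        if best_doc is not None and best_score >= GOOD_MATCH:
            return best_doc
    return None                      # assume there is no match after MAX_ITERATIONS tries
```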
The actual source I modified, as well as my hash distribution table, can be found on GitHub.
You can try What Anime is this? yourself at https://whatanime.ga/. Thanks to Soruly Ho for sharing his thoughts and building this great search engine!