
News on LIRE performance

In the course of finishing the book, I reviewed several aspects of the LIRE code and came across some bugs, including one in the Jensen-Shannon divergence. This dissimilarity measure had never been used actively in any feature, as it never performed in retrieval evaluation the way it was meant to. After two hours of staring at the code the realization finally came: in Java the conditional operator "x ? y : z" has lower precedence than almost any other operator, including '+'. Hence,

System.out.print(true ? 1 : 0 + 1) prints '1',

while

System.out.print((true ? 1 : 0) + 1) prints '2'
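The two statements above, wrapped in a runnable example; the comments spell out how the compiler actually groups each expression:

```java
public class TernaryPrecedence {
    public static void main(String[] args) {
        // Parses as: true ? 1 : (0 + 1) -- '+' binds tighter than '?:'
        System.out.println(true ? 1 : 0 + 1);  // prints 1

        // Explicit parentheses force the intended grouping
        System.out.println((true ? 1 : 0) + 1); // prints 2
    }
}
```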

With this problem identified I was finally able to fix the Jensen-Shannon divergence implementation and arrived at new retrieval evaluation results on the SIMPLIcity data set:

Descriptor              MAP    P@10   Error rate
Color Histogram - JSD   0.450  0.704  0.191
Joint Histogram - JSD   0.453  0.691  0.196
Color Correlogram       0.475  0.725  0.171
Color Layout            0.439  0.610  0.309
Edge Histogram          0.333  0.500  0.401
CEDD                    0.506  0.710  0.178
JCD                     0.510  0.719  0.177
FCTH                    0.499  0.703  0.209

Note that the color histogram in the first row now performs similarly to the "good" descriptors in terms of precision at ten and error rate. Also note that a new feature crept in: Joint Histogram. This is a histogram combining pixel rank and RGB-64 color.
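For illustration, here is a minimal sketch (not the actual LIRE code) of the Jensen-Shannon divergence between two normalized histograms, showing the spot where the parentheses around the conditional expression matter:

```java
public class JsdSketch {
    // JSD(p, q) = 1/2 * sum p_i * log(2*p_i / (p_i + q_i))
    //           + 1/2 * sum q_i * log(2*q_i / (p_i + q_i))
    static double jsd(double[] p, double[] q) {
        double sum = 0;
        for (int i = 0; i < p.length; i++) {
            // Guard the log against zero bins. Note the parentheses around
            // each conditional expression -- without them, '+' would bind
            // tighter than '?:', exactly the bug described above.
            sum += (p[i] > 0 ? p[i] * Math.log(2 * p[i] / (p[i] + q[i])) : 0)
                 + (q[i] > 0 ? q[i] * Math.log(2 * q[i] / (p[i] + q[i])) : 0);
        }
        return sum / 2;
    }

    public static void main(String[] args) {
        double[] p = {0.5, 0.5, 0.0};
        double[] q = {0.0, 0.5, 0.5};
        System.out.println(jsd(p, q)); // positive for differing histograms
        System.out.println(jsd(p, p)); // 0.0 for identical histograms
    }
}
```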

All the new stuff can be found in SVN and in the nightly builds (starting tomorrow) 🙂