While updating to Lucene 6.3.0 we switched the main build system to Gradle. The source code is now better organized, following the Maven convention with sources split into main and test and resources kept separate. Gradle lets us pull dependencies from Maven repositories, automates most tasks out of the box, and is easily extended using the Groovy programming language.
Gradle also comes with a wrapper, so as long as Java is installed, the build process is fully automated and needs no additional software.
Moreover, the build.gradle files make it easy to import the projects into JetBrains IntelliJ IDEA. LireDemo and SimpleApplication are subprojects with their own build.gradle files. Check out the new structure at https://github.com/dermotte/lire.
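For orientation, a minimal build.gradle for a subproject following this layout might look roughly like the sketch below. The dependency coordinates are illustrative (the Lucene version matches the one mentioned above); check the actual files in the repository for the real configuration.

```groovy
// Minimal, illustrative Gradle build script using the Maven source layout
// (src/main/java, src/test/java, src/main/resources).
apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile 'org.apache.lucene:lucene-core:6.3.0'
    testCompile 'junit:junit:4.12'
}
```

With the wrapper checked in, `./gradlew build` is all that is needed on a fresh machine.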
We all know that setting up a server application is a huge pain in the ass, but there’s hope: Docker is an open source project for packaging such applications and running them in containers. The only thing one needs to do is create an image, which can then be used to spin up containers.
LireSolr now provides such a Docker image on Docker Hub. Basically, you install Docker, then use the docker command in the shell (Linux) or PowerShell (MS Windows) to download and run a container based on the image with a single command:
$> docker run -p 8983:8983 dermotte/liresolr:latest
All you need to do then is index some images and take a look at the sample application. A detailed step-by-step how-to is provided in the documentation.
Yesterday I checked in the latest LIRE revision featuring the PHOG descriptor. It basically goes along the edge lines of an image (found with the Canny edge detector) and builds a fuzzy histogram of gradient directions. It does so on several pyramid levels, i.e. the image is split up like a quad-tree and each sub-image gets its own histogram. The histograms of all levels and sub-images are concatenated and used for retrieval. First tests on the SIMPLIcity data set have shown that the PHOG configuration included in LIRE outperforms the EdgeHistogram descriptor.
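To make the idea concrete, here is a heavily simplified, self-contained sketch of the pyramid-of-histograms scheme. It is not the LIRE implementation: the real PHOG runs Canny edge detection and fuzzy binning first, while this toy version just bins raw gradient orientations, on level 0 (whole image) and level 1 (2×2 quad-tree split), and concatenates the histograms.

```java
// Simplified PHOG-style sketch: orientation histograms over a spatial
// pyramid. Illustration only; the real descriptor uses Canny edges and
// fuzzy histogram bins.
public class PhogSketch {
    static final int BINS = 8;

    // Magnitude-weighted histogram of gradient orientations for a
    // sub-window of a grayscale image (img[row][col]).
    static double[] orientationHistogram(double[][] img, int x0, int y0, int w, int h) {
        double[] hist = new double[BINS];
        for (int y = Math.max(y0, 1); y < Math.min(y0 + h, img.length - 1); y++)
            for (int x = Math.max(x0, 1); x < Math.min(x0 + w, img[0].length - 1); x++) {
                double gx = img[y][x + 1] - img[y][x - 1];
                double gy = img[y + 1][x] - img[y - 1][x];
                double mag = Math.hypot(gx, gy);
                if (mag == 0) continue;
                double angle = Math.atan2(gy, gx) + Math.PI;   // shift to [0, 2*pi]
                int bin = Math.min(BINS - 1, (int) (angle / (2 * Math.PI) * BINS));
                hist[bin] += mag;
            }
        return hist;
    }

    // Concatenate level 0 (whole image) and level 1 (four quadrants):
    // 5 histograms of BINS bins each.
    static double[] phog(double[][] img) {
        int h = img.length, w = img[0].length;
        double[] feature = new double[BINS * 5];
        System.arraycopy(orientationHistogram(img, 0, 0, w, h), 0, feature, 0, BINS);
        int i = 1;
        for (int qy = 0; qy < 2; qy++)
            for (int qx = 0; qx < 2; qx++, i++)
                System.arraycopy(orientationHistogram(img, qx * w / 2, qy * h / 2, w / 2, h / 2),
                        0, feature, i * BINS, BINS);
        return feature;
    }
}
```

Deeper pyramid levels simply repeat the split, which is why the descriptor length grows quickly with the number of levels.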
People have lately asked whether LIRE can do more than linear search, and I always answered: yes, it should … but you know, I never tried. Finally I got around to indexing the MIR-FLICKR data set plus some of my Flickr-crawled photos, ending up with an index of 1,443,613 images. I used CEDD as the main feature and a hashing algorithm that puts multiple hashes per image into Lucene, to be interpreted as words. By tuning the similarity, employing a Boolean query, and adding a re-ranking step I ended up with a pretty decent approximate retrieval scheme, which is much faster and does not lose too many images along the way, i.e. it has an acceptable recall. The image below shows the numbers along with a sample query. Linear search took more than a minute, while the hashing-based approach did (nearly) the same thing in less than a second. Note that this is just a sequential, straightforward implementation, so no performance optimization has been done. The hashing approach itself has not yet been investigated in detail either; some parameters still need tuning … but let’s say it’s a step in the right direction.
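The two-stage scheme can be sketched in plain Java. This is a stand-in for the Lucene-based implementation, not the actual LIRE code: it hashes each feature vector into several short random-projection hash codes (the "words"), collects every indexed image that shares at least one hash with the query, and re-ranks the candidates by their true distance.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.Set;

// Hash-based approximate search sketch: candidate retrieval via multiple
// sign-of-random-projection hashes, then exact re-ranking by L2 distance.
public class HashingSearchSketch {
    static final int DIM = 16, TABLES = 4, BITS = 10;
    final double[][][] planes = new double[TABLES][BITS][DIM]; // random hyperplanes
    final Map<Long, List<Integer>> buckets = new HashMap<>();
    final List<double[]> vectors = new ArrayList<>();

    HashingSearchSketch(long seed) {
        Random rnd = new Random(seed);
        for (int t = 0; t < TABLES; t++)
            for (int b = 0; b < BITS; b++)
                for (int d = 0; d < DIM; d++)
                    planes[t][b][d] = rnd.nextGaussian();
    }

    long hash(int table, double[] v) {
        long h = 0;
        for (int b = 0; b < BITS; b++) {
            double dot = 0;
            for (int d = 0; d < DIM; d++) dot += planes[table][b][d] * v[d];
            if (dot > 0) h |= 1L << b;      // one sign bit per hyperplane
        }
        return ((long) table << 32) | h;    // table id disambiguates buckets
    }

    void add(double[] v) {
        int id = vectors.size();
        vectors.add(v);
        for (int t = 0; t < TABLES; t++)
            buckets.computeIfAbsent(hash(t, v), k -> new ArrayList<>()).add(id);
    }

    // Stage 1: gather candidates sharing any hash; stage 2: re-rank exactly.
    int search(double[] query) {
        Set<Integer> candidates = new HashSet<>();
        for (int t = 0; t < TABLES; t++)
            candidates.addAll(buckets.getOrDefault(hash(t, query), new ArrayList<>()));
        int best = -1;
        double bestDist = Double.MAX_VALUE;
        for (int id : candidates) {
            double d = 0;
            for (int i = 0; i < DIM; i++) {
                double diff = vectors.get(id)[i] - query[i];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = id; }
        }
        return best;
    }
}
```

The recall/speed trade-off mentioned above corresponds to the number of tables and bits per hash: more tables raise recall, longer hashes shrink the candidate set.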
I just uploaded Lire 0.9.3 to the all new Google Code page. This is the first version with full support for Lucene 4.0. Run time and memory performance are comparable to the version using Lucene 3.6. I’ve made several improvements in terms of speed and memory consumption along the way, mostly within the CEDD feature. Also I’ve added two new features:
JointHistogram – a 64 bit RGB color histogram joined with pixel rank in the 8-neighborhood, normalized with max-norm, quantized to [0,127], and JSD for a distance function
Opponent Histogram – a 64 bit histogram utilizing the opponent color space, normalized with max-norm, quantized to [0,127], and JSD for a distance function
Both features are fast to extract (the second naturally faster, as it does not investigate the neighborhood) and yield nice, visually very similar results in search. See also the image below showing 4 queries, each with the new features. The first one of each pair is based on JointHistogram, the second on OpponentHistogram (click to see full size).
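For readers who want the gist of the OpponentHistogram in code, here is a simplified, self-contained sketch. It is an illustration following the description above (opponent color space, 64 bins, max-norm, quantization to [0,127]), not the exact LIRE implementation; the opponent-space formulas are the standard ones.

```java
// Sketch of a 64-bin opponent-color histogram: 4x4x4 bins over the
// opponent color space, max-norm normalized, quantized to [0,127].
public class OpponentHistogramSketch {
    // rgb: pixels as packed 0xRRGGBB ints
    static int[] extract(int[] rgb) {
        double[] hist = new double[64];
        for (int p : rgb) {
            double r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
            double o1 = (r - g) / Math.sqrt(2);          // red-green opponent
            double o2 = (r + g - 2 * b) / Math.sqrt(6);  // yellow-blue opponent
            double o3 = (r + g + b) / Math.sqrt(3);      // intensity
            int b1 = bin(o1, -255 / Math.sqrt(2), 255 / Math.sqrt(2));
            int b2 = bin(o2, -510 / Math.sqrt(6), 510 / Math.sqrt(6));
            int b3 = bin(o3, 0, 765 / Math.sqrt(3));
            hist[b1 * 16 + b2 * 4 + b3]++;
        }
        double max = 0;
        for (double h : hist) max = Math.max(max, h);
        int[] quantized = new int[64];
        for (int i = 0; i < 64; i++)                     // max-norm, then [0,127]
            quantized[i] = (int) Math.round(hist[i] / max * 127);
        return quantized;
    }

    // Map a value from [min, max] to one of 4 bins.
    static int bin(double v, double min, double max) {
        return Math.min(3, (int) ((v - min) / (max - min) * 4));
    }
}
```

The JointHistogram works along the same lines, but joins an RGB color index with the pixel's rank in its 8-neighborhood instead of using a second color axis.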
I also changed the Histogram interface to double, as the double type is much faster than float on the 64-bit Oracle Java 7 VM. A major bug fix went into the JSD dissimilarity function, so many histograms now use JSD instead of L1, depending on which performed better on the SIMPLIcity data set (see TestWang.java in the sources).
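For reference, the Jensen-Shannon divergence used here can be written compactly as follows. This is a generic textbook formulation, not a copy of the LIRE code: histograms are normalized to probability distributions first, and the result is symmetric and zero for identical inputs.

```java
// Jensen-Shannon divergence between two histograms:
// JSD(P,Q) = 0.5*KL(P||M) + 0.5*KL(Q||M) with M = (P+Q)/2.
public class Jsd {
    static double jsd(double[] a, double[] b) {
        double sa = 0, sb = 0;
        for (int i = 0; i < a.length; i++) { sa += a[i]; sb += b[i]; }
        double d = 0;
        for (int i = 0; i < a.length; i++) {
            double p = a[i] / sa, q = b[i] / sb, m = (p + q) / 2;
            if (p > 0) d += 0.5 * p * Math.log(p / m);   // KL term for P
            if (q > 0) d += 0.5 * q * Math.log(q / m);   // KL term for Q
        }
        return d;
    }
}
```

A common bug in JSD implementations, and the kind of thing the fix above addresses, is mishandling zero bins, where the log terms must be skipped rather than evaluated.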
The final addition is the Lire-SimpleApplication, which provides two classes for indexing and search with CEDD, ready to compile with all libraries and an Ant build file. This will hopefully help those who still seek Java enlightenment 😀
Finally this just leaves to say to all of you: Merry Christmas and a Happy New Year!
I just released Lire and Lire Demo in version 0.9 on sourceforge.net. Basically it’s the alpha version with additional speed and stability enhancements for bag of visual words (BoVW) indexing. While BoVW indexing had already been possible in earlier versions, I refurbished vocabulary creation (k-means clustering) and indexing to support up to 4 CPU cores. I also integrated a function to add documents to BoVW indexes incrementally. The major changes since Lire 0.8 include:
Major speed-up due to change and re-write of indexing strategies for local features
Auto color correlogram and color histogram features improved
Re-ranking filter based on global features and LSA
Parallel bag of visual words indexing and search supporting SURF and SIFT including incremental index updates (see also in the wiki)
Added functionality to Lire Demo including support for new Lire features and a new result list view
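The vocabulary-creation step mentioned above boils down to k-means over local feature vectors, with the expensive assignment step spread over the available cores. The sketch below illustrates the idea; it is not the LIRE implementation (which manages its own worker threads), and the deterministic seeding is a simplification where something like k-means++ would normally be used.

```java
import java.util.stream.IntStream;

// k-means sketch for visual-vocabulary creation: the returned centroids
// are the vocabulary; assignment is parallelized across cores.
public class KMeansSketch {
    static double[][] cluster(double[][] points, int k, int iterations) {
        int dim = points[0].length;
        double[][] centroids = new double[k][];
        // simple deterministic seeding for the sketch (k-means++ is better)
        for (int c = 0; c < k; c++) centroids[c] = points[c * points.length / k].clone();
        int[] assign = new int[points.length];
        for (int it = 0; it < iterations; it++) {
            final double[][] cs = centroids;
            // assignment step: nearest centroid per point, in parallel
            IntStream.range(0, points.length).parallel().forEach(i -> {
                int best = 0;
                double bestD = Double.MAX_VALUE;
                for (int c = 0; c < k; c++) {
                    double d = 0;
                    for (int j = 0; j < dim; j++) {
                        double diff = points[i][j] - cs[c][j];
                        d += diff * diff;
                    }
                    if (d < bestD) { bestD = d; best = c; }
                }
                assign[i] = best;
            });
            // update step: move each centroid to the mean of its cluster
            double[][] next = new double[k][dim];
            int[] count = new int[k];
            for (int i = 0; i < points.length; i++) {
                count[assign[i]]++;
                for (int j = 0; j < dim; j++) next[assign[i]][j] += points[i][j];
            }
            for (int c = 0; c < k; c++) {
                if (count[c] > 0) {
                    for (int j = 0; j < dim; j++) next[c][j] /= count[c];
                } else {
                    next[c] = centroids[c];   // keep empty clusters in place
                }
            }
            centroids = next;
        }
        return centroids;
    }
}
```

Once the vocabulary exists, each image's local features (SURF or SIFT) are mapped to their nearest centroid and the image becomes a bag of those "visual words", which is what ends up in the Lucene index.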
ACM Transactions on Information Systems is soliciting contributions to a special issue on the topic of “Searching Speech”. The special issue will be devoted to algorithms and systems that use speech recognition and other types of spoken audio processing techniques to retrieve information, and, in particular, to provide access to spoken audio content or multimedia content with a speech track.
Submission Deadline: 1 March 2011
The field of spoken content indexing and retrieval has a long history dating back to the development of the first broadcast news retrieval systems in the 1990s. More recently, however, work on searching speech has been moving towards spoken audio that is produced spontaneously and in conversational settings. In contrast to the planned speech that is typical for the broadcast news domain, spontaneous, conversational speech is characterized by high variability and the lack of inherent structure. Domains in which researchers face such challenges include: lectures, meetings, interviews, debates, conversational broadcast (e.g., talk-shows), podcasts, call center recordings, cultural heritage archives, social video on the Web, spoken natural language queries and the Spoken Web.
We invite the submission of papers that describe research in the following areas:
Integration of information retrieval algorithms with speech recognition and audio analysis techniques
Interfaces and techniques to improve user interaction with speech collections
Indexing diverse, large scale collections
Search effectiveness and efficiency, including exploitation of additional information sources
Intelligent algorithms for mining and retrieval are the key technology for coping with the information needs of our media-centered society. Methods for text-based information retrieval receive special attention, which results from the important role of written text, the ubiquity of the Internet, and the enormous importance of Web communities.
Advanced information retrieval and extraction draws on methods from different areas: machine learning, computational linguistics and psychology, user interaction and modeling, information visualization, Web engineering, artificial intelligence, and distributed systems. The development of intelligent retrieval tools requires understanding and combining the achievements in these areas, and in this sense the workshop provides a common platform for presenting and discussing new solutions.
The following list organizes classic and ongoing topics from the field of text-based IR for which contributions are welcome:
Theory. Retrieval models, language models, similarity measures, formal analysis
Mining and Classification. Category formation, clustering, entity resolution, document classification, learning methods for ranking
Web. Community mining, social network analysis, structured retrieval from XML documents
NLP. Text summarization, keyword extraction, topic identification
User Interface. Paradigms and algorithms for information visualization, personalization, privacy issues
User Context. Context models for IR, context analysis from user behaviour and from social networks
Multilinguality. Cross-language retrieval, multilingual retrieval, machine translation for IR
Evaluation. Corpus construction, experiment design, conception of user studies
Semantic Web. Metadata analysis and tagging, knowledge extraction, inference, and maintenance
Software Engineering. Frameworks and architectures for retrieval technology, distributed IR
The workshop is being held for the seventh time. In the past it was characterized by a stimulating atmosphere and attracted high-quality contributions from all over the world. In particular, we encourage participants to present research prototypes and demonstration tools of their research ideas.
Mar 30, 2010 Deadline for paper submission
Apr 20, 2010 Notification to authors
May 17, 2010 Camera-ready copy due
Aug 30, 2010 Workshop opens
Contributions will be peer-reviewed by at least two experts from the field. Accepted papers will be published as IEEE proceedings by IEEE CS Press.
Benno Stein, Bauhaus University Weimar
Michael Granitzer, Know-Center Graz & Graz University of Technology
Since Blobworld went down I had been hoping for a new online CBIR system to show in lectures. Last week I received word about the img(Anaktisi) image search engine from Savvas Chatzichristofis. This search engine is based on two new content-based descriptors that combine color and edge features. The results look promising!
There is also an offline sketch-based retrieval system in development. A screencast can be found on YouTube.