Our submission to the interactive Art Track of ACM Multimedia 2014 has been accepted: “Gone: An Interactive Experience for Two People” by Michael Riegler, Mathias Lux, Christian Zellot, Lukas Knoch, Horst Schnattler, Sabrina Napetschnig, Julian Kogler, Claus Degendorfer, Norbert Spot and Manuel Zoderer.
Our project is an interactive installation in which two people interact based solely on audio cues triggered by one person. The other person moves an avatar through a virtual space based on these audio cues.
I just got word that our joint submission with Giuseppe Becchi, Marco Bertini and colleagues from Firenze has been accepted for presentation and publication at the open source software track at ACM Multimedia 2014 in Orlando, FL:
Giuseppe Becchi, Marco Bertini, Lorenzo Cioni, Alberto Del Bimbo, Andrea Ferracani, Daniele Pezzatini and Mathias Lux (2014) Loki+Lire: A Framework to Create Web-Based Multimedia Search Engines, in Proceedings of ACM Multimedia 2014, Orlando, FL (to appear)
The power of crowds – leveraging a large number of human contributors and the capabilities of human computation – has enormous potential to address key challenges in the area of multimedia research. Crowdsourcing offers a time- and resource-efficient method for collecting large volumes of input for system design and evaluation, making it possible to optimize multimedia systems more rapidly and to address human factors more effectively. At present, crowdsourcing remains notoriously difficult to exploit effectively in multimedia settings: the challenge arises from the fact that a community of users or workers is a complex and dynamic system highly sensitive to changes in the form and the parameterization of their activities.
The submission deadline has been extended to July 15, 2014
Intelligent algorithms for data mining and information retrieval are key technologies for coping with the information needs of our media-centered society. Methods for text-based information retrieval receive special attention, owing to the central role of written text, the ubiquity of the World Wide Web, and the enormous impact of Web communities and social media on our lives.
The development of advanced information retrieval solutions requires understanding and combining methods from different research areas, including machine learning, data mining, computational linguistics, artificial intelligence, user interaction and modeling, Web engineering, and distributed systems. This workshop provides a common platform for presenting and discussing new solutions, novel ideas, and specific tools focusing on text-based information retrieval. Contributions are welcome on the following classic and ongoing topics, but are not limited to them:
Theory. Retrieval models, language models, similarity measures, formal analysis
Web Search. Ranking, indexing, semantic search, query classification and segmentation, relevance feedback, vertical search
Personalization and User Mining. Just-in-time retrieval, personalized retrieval, context detection, profile mining
Multilinguality. Cross-language retrieval, machine translation, language identification
Evaluation. Corpus construction, experiment design, performance measures
Text Mining and Classification. Web mining, text reuse, topic identification, sentiment analysis
NLP. Information extraction, text summarization and simplification, named entity recognition, question answering
Social Media Analysis. Community mining, social network analysis, trend analysis, information diffusion
Information Quality. Text quality assessment, quality-based ranking, readability assessment, trust and author reputation
Big Data Text Analytics. Parallel and distributed retrieval, online algorithms, scalability
Semantic Web. Metadata analysis and tagging, knowledge extraction, inference, maintenance
The workshop is being held for the eleventh time. Past editions were characterized by a stimulating atmosphere and attracted high-quality contributions from all over the world.
Accepted papers will appear in the proceedings of DEXA’14 Workshops published by the Conference Publishing Services (CPS) of IEEE Computer Society.
Submissions to TIR 2014 must be original, unpublished contributions.
Papers are limited to 5 pages in IEEE format (two columns, A4) and must be written in English.
Submission is made electronically in PDF format using our conference management system ConfDriver.
Submitted papers will be peer-reviewed by at least three experts from the related field.
At least one author of each accepted paper is required to register for the DEXA’14 conference, attend the workshop, and present the paper.
April 24, 2014: Deadline for paper submission (24:00 CET)
May 12, 2014: Notification to authors
May 20, 2014: Camera-ready copy due
September 1 – 5, 2014: DEXA’14 conference
Maik Anderka (Co-Chair), University of Paderborn, Germany
Michael Granitzer (Co-Chair), University of Passau, Germany
As an integral part of the ACM MMSys conference since 2011, the Dataset Track provides an opportunity for researchers and practitioners to make their work available (and citable) to the multimedia community. MMSys encourages and recognizes dataset sharing, and seeks contributions in all areas of multimedia (not limited to MM systems). Authors publishing datasets benefit from the increased public awareness of their effort in collecting the datasets.
Submission deadline is Nov. 11th 2013! Make sure not to miss it! See also the Call for Papers
The Solr plugin itself is fully functional for Solr 4.4 and the source is available at https://bitbucket.org/dermotte/liresolr. There is a markdown document README.md explaining what can be done with the plugin and how to actually install it. Basically, it supports content-based search and content-based re-ranking of text search results, and it brings along a custom field implementation and sub-linear search based on hashing.
The new LIRE web demo is based on Apache Solr and features an index of the MIRFLICKR data set. The new architecture allows for extremely fast retrieval. Moreover, there’s a new walk-through video with some short peeks behind the scenes. The source of the plugin will be released in the near future.
The beta update features (i) improvements on local feature handling, i.e., stronger quantization of local feature histograms and several bug fixes; (ii) critical bug fixes for CEDD and JCD, which were not thread-safe; and (iii) improvements on the ParallelExtractor and Indexor classes as well as the intermediate binary format.
I’ve just uploaded LIRE 0.9.4 beta to the Google Code downloads page. This is an intermediate release that reflects several changes within the SVN trunk. Basically, I put it online because many, many bugs are solved in this one and it performs much, much faster than the 0.9.3 release. If you want the latest version, I’d recommend sticking to the SVN. However, I’m currently changing a lot of feature serialization methods, so there’s no guarantee that an index created with 0.9.4 beta will work with any newer version. Note also that this release does not work with older indexes.
Major changes include, but are not limited to:
New features: PHOG, local binary patterns and binary patterns pyramid
Parallel indexing: a producer-consumer based indexing application that makes heavy use of the available CPU cores. On a current Intel Core i7 or a reasonably large Intel Xeon system it can reduce extraction time to a marginal overhead on top of disk I/O.
Intermediate byte-based feature data files: a new way to extract features in a distributed fashion
In-memory cached ImageSearcher: as long as there is enough memory all linear searching is done in memory without much disk I/O (cp. class GenericFastImageSearcher and set caching to true)
Approximate indexing based on hashing: tests with 1.5 million images led to search times < 300 ms (cp. GenericDocumentBuilder with hashing set to true and BitSamplingImageSearcher)
Footprint of many global descriptors has been significantly reduced. Examples: EdgeHistogram 40 bytes, ColorLayout 504 bytes, FCTH 96 bytes, …
New unit test for benchmarking features on the UCID data set.
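To illustrate the idea behind the hashing-based approximate indexing mentioned above, here is a minimal, self-contained sketch of bit sampling over binarized feature vectors. This is an illustration of the technique only, not LIRE’s actual BitSamplingImageSearcher or GenericDocumentBuilder code; the class name, parameters, and toy data are all made up for the example.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.Set;

// Illustrative bit-sampling hashing sketch -- NOT LIRE's implementation.
public class BitSamplingDemo {
    static final int DIM = 64; // length of the binarized feature vector
    static final int K = 12;   // bits sampled per hash function
    static final int L = 4;    // number of hash functions / hash tables

    final int[][] samplePositions = new int[L][K];
    final List<Map<Integer, List<Integer>>> tables = new ArrayList<>();
    final List<boolean[]> vectors = new ArrayList<>();

    BitSamplingDemo(long seed) {
        Random rnd = new Random(seed);
        for (int l = 0; l < L; l++) {
            tables.add(new HashMap<>());
            for (int k = 0; k < K; k++)
                samplePositions[l][k] = rnd.nextInt(DIM); // random bit positions
        }
    }

    // Concatenate the sampled bits of v into one int-valued hash key.
    int hash(boolean[] v, int l) {
        int h = 0;
        for (int pos : samplePositions[l])
            h = (h << 1) | (v[pos] ? 1 : 0);
        return h;
    }

    void index(boolean[] v) {
        int id = vectors.size();
        vectors.add(v);
        for (int l = 0; l < L; l++)
            tables.get(l).computeIfAbsent(hash(v, l), x -> new ArrayList<>()).add(id);
    }

    static int hamming(boolean[] a, boolean[] b) {
        int d = 0;
        for (int i = 0; i < a.length; i++)
            if (a[i] != b[i]) d++;
        return d;
    }

    // Gather candidates from the query's buckets, then re-rank them exactly.
    // Only bucket members are compared, which is what makes search sub-linear.
    int search(boolean[] q) {
        Set<Integer> candidates = new HashSet<>();
        for (int l = 0; l < L; l++) {
            List<Integer> bucket = tables.get(l).get(hash(q, l));
            if (bucket != null) candidates.addAll(bucket);
        }
        int best = -1, bestDist = Integer.MAX_VALUE;
        for (int id : candidates) {
            int d = hamming(q, vectors.get(id));
            if (d < bestDist) { bestDist = d; best = id; }
        }
        return best;
    }

    public static void main(String[] args) {
        Random rnd = new Random(7);
        BitSamplingDemo idx = new BitSamplingDemo(42);
        boolean[][] data = new boolean[100][DIM];
        for (boolean[] v : data)
            for (int i = 0; i < DIM; i++) v[i] = rnd.nextBoolean();
        for (boolean[] v : data) idx.index(v);
        // An indexed vector always hashes into its own buckets, so an
        // exact-match query is guaranteed to be found.
        System.out.println(idx.search(data[5])); // prints 5
    }
}
```

With more tables and fewer sampled bits per table, recall for near-duplicate queries goes up at the cost of more candidates to re-rank per query; the trade-offs chosen in LIRE itself may of course differ.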