Monthly Archives: March 2010

DEXA-Workshop on Text-Based Information Retrieval (TIR’10)

TIR’10 – 7th International Workshop on Text-Based Information Retrieval

In conjunction with DEXA 2010

21st International Conference on Database and Expert Systems Applications

Bilbao, Spain, August 30 – September 3
http://www.tir.webis.de

About this Workshop

Intelligent algorithms for mining and retrieval are a key technology for coping with the information needs of our media-centered society. Methods for text-based information retrieval receive special attention, which results from the important role of written text, from the ubiquity of the Internet, and from the enormous importance of Web communities.

Advanced information retrieval and extraction uses methods from different areas: machine learning, computational linguistics and psychology, user interaction and modeling, information visualization, Web engineering, artificial intelligence, and distributed systems. The development of intelligent retrieval tools requires understanding and combining the achievements in these areas, and in this sense the workshop provides a common platform for presenting and discussing new solutions.

The following list organizes classic and ongoing topics from the field of text-based IR for which contributions are welcome:

  • Theory. Retrieval models, language models, similarity measures, formal analysis
  • Mining and Classification. Category formation, clustering, entity resolution, document classification, learning methods for ranking
  • Web. Community mining, social network analysis, structured retrieval from XML documents
  • NLP. Text summarization, keyword extraction, topic identification
  • User Interface. Paradigms and algorithms for information visualization, personalization, privacy issues
  • User Context. Context models for IR, context analysis from user behaviour and from social networks
  • Multilinguality. Cross-language retrieval, multilingual retrieval, machine translation for IR
  • Evaluation. Corpus construction, experiment design, conception of user studies
  • Semantic Web. Metadata analysis and tagging, knowledge extraction, inference, and maintenance
  • Software Engineering. Frameworks and architectures for retrieval technology, distributed IR

The workshop is being held for the seventh time. In the past it has been characterized by a stimulating atmosphere and has attracted high-quality contributions from all over the world. In particular, we encourage participants to present research prototypes and demonstration tools of their research ideas.

Important Dates
Mar 30, 2010 Deadline for paper submission
Apr 20, 2010 Notification to authors
May 17, 2010 Camera-ready copy due
Aug 30, 2010 Workshop opens

Contributions will be peer-reviewed by at least two experts in the field. Accepted papers will be published in the IEEE proceedings by IEEE CS Press.

Workshop Organization

Benno Stein, Bauhaus University Weimar
Michael Granitzer, Know-Center Graz & Graz University of Technology

Contact: tir@webis.de
Information about the workshop can be found at http://www.tir.webis.de

Workshop on Impact of Scalable Video Coding on Multimedia Provisioning (SVCVision)

Co-located with MobiMedia – 6th International Mobile Multimedia Communications Conference
6th-8th September 2010 – Lisbon, Portugal

http://www.mobimedia.org/ws_SVCVision.html

Aims and Scope

Scalable Video Coding (SVC) refers to the possibility of removing certain parts of a video bit stream in order to adapt it to a changing usage environment, e.g., end device capabilities, network conditions, or user preferences. SVC has been an active standardization and research area for at least 20 years, reaching back to H.262/MPEG-2, which already offered scalable profiles. However, these earlier attempts suffered from a significant loss in coding efficiency as well as a large increase in decoder complexity (and thus energy consumption), which hindered market adoption. Only the most recent attempt, the SVC extension of H.264/AVC, focuses on avoiding these disadvantages. Since its standardization started in 2003, H.264/SVC has been a focus of many multimedia research groups.

Today’s increasing variety of end devices (smartphones, tablet PCs, netbooks, laptops, PCs, networked HDTVs, …) and the associated multitude of Internet connectivity options (GPRS/EDGE, UMTS, ADSL, PLC, WiMAX, …) provide particular momentum for SVC, which can be easily and pervasively adapted to these various usage environments. SVC also allows end devices to decode only a subset of the SVC bit stream, enabling mobile end devices in particular to minimize their processing power requirements.

This workshop aims to provide a forum for both academic and industrial participants to exchange and discuss recent advancements and future perspectives of SVC.

Topics

SVC topics of interest include, but are not limited to:
– Robust streaming, error resilience and error concealment
– Streaming in heterogeneous environments
– Peer-to-Peer (P2P) video distribution
– Internet Protocol television (IPTV)
– Energy-efficient video distribution
– Content adaptation (e.g., scaling, rewriting, transcoding) and summarization
– Complexity optimization and new tools for achieving scalability
– Adaptation decision taking & context information
– Storage & file format
– Conditional access & protection
– Novel applications & implementation experiences

Important Dates

Paper Submission: April 23, 2010
Notification: May 28, 2010
Camera Ready: June 25, 2010

All accepted papers will be published in the Springer Lecture Notes of ICST (LNICST) series and included in major article indexing services.

Visual Attention in Lire

While doing my preparations for my multimedia information systems lecture I finally got around to implementing the visual attention model of Stentiford myself. I just checked in the sources (SVN). The algorithm actually gives really nice results considering how simple it is to implement. You can see an example in the following figure. On the left-hand side there is the original image and on the right-hand side a visualization of the attention map. The light areas (especially the white ones) are deemed centers of attention. Sky and sand are, so to speak, just random noise (there is a lot of “random” in this approach).
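To give an idea of how simple it is: below is a stripped-down sketch of a Stentiford-style attention map in Java. It is written from memory as an illustration, not the code I checked into the SVN; the class name, neighbourhood size, number of comparisons and threshold are all made-up example values.

```java
import java.awt.image.BufferedImage;
import java.util.Random;

// Simplified sketch: a pixel gets a high attention score if its random local
// neighbourhood rarely matches neighbourhoods at other random positions in the image.
public class AttentionSketch {
    private static final int NEIGHBOURS = 3;    // offsets sampled around each pixel (example value)
    private static final int COMPARISONS = 40;  // random comparison positions per pixel (example value)
    private static final int THRESHOLD = 40;    // max per-channel difference for a "match" (example value)
    private static final int RADIUS = 2;        // neighbourhood radius (example value)
    private static final Random rnd = new Random();

    public static int[][] attentionMap(BufferedImage img) {
        int w = img.getWidth(), h = img.getHeight();
        int[][] map = new int[w][h];
        for (int x = RADIUS; x < w - RADIUS; x++) {
            for (int y = RADIUS; y < h - RADIUS; y++) {
                // pick a random neighbourhood (set of offsets) around (x, y)
                int[] dx = new int[NEIGHBOURS], dy = new int[NEIGHBOURS];
                for (int i = 0; i < NEIGHBOURS; i++) {
                    dx[i] = rnd.nextInt(2 * RADIUS + 1) - RADIUS;
                    dy[i] = rnd.nextInt(2 * RADIUS + 1) - RADIUS;
                }
                int score = 0;
                for (int c = 0; c < COMPARISONS; c++) {
                    // compare the same offsets at a random position (u, v);
                    // a mismatch means (x, y) is "unusual" and gains attention
                    int u = RADIUS + rnd.nextInt(w - 2 * RADIUS);
                    int v = RADIUS + rnd.nextInt(h - 2 * RADIUS);
                    if (!matches(img, x, y, u, v, dx, dy)) score++;
                }
                map[x][y] = score;
            }
        }
        return map; // visualize by scaling the scores to grey values
    }

    private static boolean matches(BufferedImage img, int x, int y, int u, int v, int[] dx, int[] dy) {
        for (int i = 0; i < dx.length; i++) {
            int a = img.getRGB(x + dx[i], y + dy[i]);
            int b = img.getRGB(u + dx[i], v + dy[i]);
            if (Math.abs(((a >> 16) & 0xFF) - ((b >> 16) & 0xFF)) > THRESHOLD
                    || Math.abs(((a >> 8) & 0xFF) - ((b >> 8) & 0xFF)) > THRESHOLD
                    || Math.abs((a & 0xFF) - (b & 0xFF)) > THRESHOLD) {
                return false;
            }
        }
        return true;
    }
}
```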


Search through a million images in less than a second

We reached a milestone here! Sebastian Kielmann reported that he used LIRe to index a million images. While that in itself is not a problem, he also managed to search through the images in less than a second! Whoot!

Sebastian used the metric index, which implements the ideas of Giuseppe Amato. The approach is simple, but it works really well. Currently CEDD is the standard descriptor, but others can be integrated easily. However, using the metric index is not trivial and requires some knowledge of the process. Also, the results are approximate and might differ from those obtained by linear search.
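For readers who have not come across Amato's idea before, here is the core trick as I understand it, sketched in plain Java: every image is described by the order in which a fixed set of reference objects appears when sorted by distance to it, and only a prefix of that permutation is indexed (e.g., as text in Lucene), so a text query replaces the exhaustive distance computation. The class and method names below are placeholders, not the actual LIRe classes.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Sketch of permutation-prefix computation for approximate metric indexing.
public class PermutationSketch {

    // Returns the indices of the k reference features closest to the given feature vector.
    static int[] permutationPrefix(final double[] feature, final double[][] references, int k) {
        List<Integer> order = new ArrayList<Integer>();
        for (int i = 0; i < references.length; i++) order.add(i);
        // sort the reference objects by distance to the feature vector
        Collections.sort(order, new Comparator<Integer>() {
            public int compare(Integer a, Integer b) {
                return Double.compare(l2(feature, references[a]), l2(feature, references[b]));
            }
        });
        int[] prefix = new int[Math.min(k, order.size())];
        for (int i = 0; i < prefix.length; i++) prefix[i] = order.get(i);
        return prefix;
    }

    // Plain Euclidean distance between two feature vectors.
    static double l2(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(sum);
    }
}
```

At search time the query's prefix is computed the same way and only images with overlapping indexed prefixes are ranked, which is why the results are approximate and may differ from a linear scan.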


LIRe development mailing list

A few weeks ago I was asked if there is a mailing list dedicated to LIRe and the development of applications with LIRe. That was reason enough for me to create one. The mailing list is available at Google Groups and it’s called lire-dev. Please feel free to subscribe and ask (and of course answer and discuss) any questions regarding LIRe.


Flickr uploads per minute – mean uploads per hour

I was wondering when people actually upload all their stuff, so some time ago I started to grab the uploads per minute on a regular basis (see here). It seems that people upload 4,682 images per minute on average. This has remained more or less stable since the last experiment, which gave an average of 4,602. Now I have enough data, 1,938 samples, for a first shot at the question: when do people upload their stuff?

It seems that people concentrate their uploads between 3 pm and 11 pm (CET) and that there is not much going on around 8-10 am (CET). That looks reasonable to me if the typical Flickr user is American or European :)
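The aggregation behind this is nothing fancy; a minimal sketch is shown below. The Sample class and the idea of keeping (timestamp, uploads-per-minute) pairs are just assumptions for the example, not the actual crawler code.

```java
import java.util.Calendar;
import java.util.List;

// Compute the mean uploads per minute for each hour of the day from timestamped samples.
public class HourlyMean {

    public static class Sample {
        final Calendar time;          // when the sample was taken (CET)
        final int uploadsPerMinute;   // uploads per minute reported at that time
        public Sample(Calendar time, int uploadsPerMinute) {
            this.time = time;
            this.uploadsPerMinute = uploadsPerMinute;
        }
    }

    public static double[] meanPerHour(List<Sample> samples) {
        double[] sum = new double[24];
        int[] count = new int[24];
        for (Sample s : samples) {
            int hour = s.time.get(Calendar.HOUR_OF_DAY);
            sum[hour] += s.uploadsPerMinute;
            count[hour]++;
        }
        double[] mean = new double[24];
        for (int h = 0; h < 24; h++) {
            mean[h] = count[h] > 0 ? sum[h] / count[h] : 0.0;
        }
        return mean;
    }
}
```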

Lire 0.8 released

I just released LIRe v0.8. LIRe – Lucene Image Retrieval – is a Java library for easy content-based image retrieval. Based on Lucene, it doesn’t need a database and works reliably and rather fast. The major change in this version is support for Lucene 3.0.1, which has a changed API and better performance on some operating systems. A critical bug was fixed in the Tamura feature implementation. It now definitely performs better :) Hidden in the depths of the code there is an implementation of the approximate fast indexing approach of G. Amato. It copes with the problem of linear search and provides a method for fast approximate retrieval in huge repositories (millions?). Unfortunately I haven’t tested with millions, just with tens of thousands, which shows that it works, but not how fast it is.
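If you haven’t used LIRe before, indexing and searching with the default CEDD descriptor looks roughly like the snippet below. This is written from memory against the Lucene 3.0.1 API; the image paths and index directory are made up, and the exact signatures may differ slightly from the release, so check the demo sources when in doubt.

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

import net.semanticmetadata.lire.DocumentBuilder;
import net.semanticmetadata.lire.DocumentBuilderFactory;
import net.semanticmetadata.lire.ImageSearchHits;
import net.semanticmetadata.lire.ImageSearcher;
import net.semanticmetadata.lire.ImageSearcherFactory;
import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;

public class CeddExample {
    public static void main(String[] args) throws Exception {
        // Index two images with the CEDD descriptor (plain linear index, not the metric index).
        DocumentBuilder builder = DocumentBuilderFactory.getCEDDDocumentBuilder();
        IndexWriter writer = new IndexWriter(FSDirectory.open(new File("cedd-index")),
                new SimpleAnalyzer(), true, IndexWriter.MaxFieldLength.UNLIMITED);
        for (String path : new String[]{"img/photo1.jpg", "img/photo2.jpg"}) { // made-up paths
            BufferedImage image = ImageIO.read(new File(path));
            writer.addDocument(builder.createDocument(image, path));
        }
        writer.close();

        // Search: the ten images most similar to a query image.
        IndexReader reader = IndexReader.open(FSDirectory.open(new File("cedd-index")));
        ImageSearcher searcher = ImageSearcherFactory.createCEDDImageSearcher(10);
        ImageSearchHits hits = searcher.search(ImageIO.read(new File("img/query.jpg")), reader);
        for (int i = 0; i < hits.length(); i++) {
            System.out.println(hits.score(i) + ": "
                    + hits.doc(i).getValues(DocumentBuilder.FIELD_NAME_IDENTIFIER)[0]);
        }
        reader.close();
    }
}
```

The approximate fast indexing mentioned above is set up differently, so it is best to start from the corresponding test cases in the SVN.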


No arc, no wand, just move …

Sony finally unveiled their plans regarding the motion sensing and 3D registration capabilities of their new controller thingy called Move. The device capabilities look pretty awesome, so I think everyone expects the new hard- and software to be quite fast and accurate. On the official blog post you can see some photos and some game announcements. Well, it looks like they have more or less all the top sellers of the Wii ported for launch :-D. The release date is “Fall 2010” and the price (including the PS3 Eye cam) is 100 USD, so once again we know nothing about when and how much for Europe ;).


Lire v0.8 is on its way … just some more tests

I just checked in my latest code for LIRe and it looks like it’s nearly ready for the v0.8 release. Major changes include the use of Lucene 3.0.1, some bug fixes in descriptors, several new test files (including one that shows how to do an LSA with image features) and of course an updated demo application. While everything needs a bit more testing as well as a documentation update, I can offer a pre-compiled demo here. All changed and added sources can be found in the SVN.
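For context, and independent of how the test file wires it up in LIRe: the gist of an LSA on image features is a truncated SVD of the feature matrix, noted below as a reminder rather than as the concrete implementation.

```latex
% A is the m x n matrix whose rows are the n-dimensional feature vectors of m images.
% LSA keeps only the k largest singular values:
A \approx U_k \Sigma_k V_k^{\top}
% Each image is then represented by its row of U_k \Sigma_k, i.e., its coordinates
% in the k-dimensional latent space, and similarities are computed there
% instead of on the raw feature vectors.
```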


CfP: Modeling Social Media

Markus Strohmaier has pointed out the following CfP to me: International Workshop on Modeling Social Media 2010 (MSM’10). It takes place June 13th, 2010, and is co-located with Hypertext 2010 in Toronto, Canada. The submission deadline is April 9th, 2010, and topics are:

  • new modeling techniques and approaches for social media
  • models of propagation and influence in twitter, blogs and social tagging systems
  • models of expertise and trust in twitter, wikis, newsgroups, question and answering systems
  • modeling of social phenomena and emergent social behavior
  • agent-based models of social media
  • models of emergent social media properties
  • models of user motivation, intent and goals in social media
  • cooperation and collaboration models
  • software-engineering and requirements models for social media
  • adapting and adaptive hypertext models for social media
  • modeling social media users and their motivations and goals
  • architectural and framework models
  • user modeling and behavioural models
  • modeling the evolution and dynamics of social media

More information can be found on the workshop home page.
