Enriching Content Objects for Multimodal Search with Data from the Linking Open Data Cloud

Presented at: 9th Extended Semantic Web Conference (ESWC2012)

by Jonas Etzold, Thomas Steiner, Arnaud Brousseau, Paul Grimm

In this paper, we report on the I-SEARCH EU (FP7 ICT STREP) project, whose objective is the development of a multimodal search engine that supports multimodal input and output, as well as multimodal query refinement. An important aspect of I-SEARCH is the so-called Rich Unified Content Description (RUCoD) format for describing low- and high-level features of content objects, i.e., rich media presentations enclosing different types of media. We have developed a tool called CoFetch for the creation of such content objects, which retrieves part of its data from the Linking Open Data cloud. During the session, we will present a live demonstration of the I-SEARCH search engine and CoFetch, and, via pre-defined use cases, show how we imagine multimodal search in the future. We are looking for networking opportunities with projects dealing with the semantic annotation of multimedia archives and with projects interested in RUCoD feature extraction techniques.
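To illustrate the kind of enrichment the abstract describes, the sketch below queries DBpedia, one of the datasets in the Linking Open Data cloud, for facts about a resource. This is a hypothetical illustration, not the paper's actual CoFetch implementation: the endpoint URL, the chosen properties (`dbo:abstract`, `foaf:depiction`), and all function names are assumptions.

```python
import json
import urllib.parse

# Illustrative only: a public SPARQL endpoint for DBpedia, a Linking Open
# Data dataset. CoFetch's real data sources and queries are not shown here.
DBPEDIA_ENDPOINT = "https://dbpedia.org/sparql"


def build_enrichment_query(resource_uri):
    """Build a SPARQL query asking for an English abstract and a
    depiction (image URL) of the given resource."""
    return (
        "PREFIX dbo: <http://dbpedia.org/ontology/> "
        "PREFIX foaf: <http://xmlns.com/foaf/0.1/> "
        "SELECT ?abstract ?depiction WHERE { "
        f"<{resource_uri}> dbo:abstract ?abstract ; "
        "foaf:depiction ?depiction . "
        "FILTER (lang(?abstract) = 'en') } LIMIT 1"
    )


def endpoint_url(resource_uri):
    """Assemble the full GET URL for the SPARQL endpoint, requesting
    results in the SPARQL JSON results format."""
    params = {
        "query": build_enrichment_query(resource_uri),
        "format": "application/sparql-results+json",
    }
    return DBPEDIA_ENDPOINT + "?" + urllib.parse.urlencode(params)


def parse_bindings(response_text):
    """Extract (abstract, depiction) pairs from a SPARQL JSON result,
    ready to be attached to a content object description."""
    data = json.loads(response_text)
    return [
        (b["abstract"]["value"], b["depiction"]["value"])
        for b in data["results"]["bindings"]
    ]
```

In a tool like CoFetch, the pairs returned by `parse_bindings` could then be merged into the high-level feature section of a content object's RUCoD description.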

Keywords: linked data, multimedia annotation, multimodal search


Resource URI on the dog food server: http://data.semanticweb.org/conference/eswc/2012/paper/project-networking/369

