SNOW 2014 Data Challenge

General Description

Consider a scenario of news professionals who use social media to monitor the newsworthy stories that emerge from the crowd. The volume of information is very high, and it is often difficult to extract such stories from a live social media stream. The task of this challenge is to automatically mine social streams to provide journalists with a set of headlines and complementary information that summarize the most important topics for a number of timeslots (time intervals) of interest. In the context of the SocialSensor project, we found this to be an important and challenging problem, and for this reason the project organizes this challenge to explore novel and effective solutions.
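As a rough illustration of the task (not a reference solution, and with all names hypothetical), one could bucket timestamped posts into fixed-length timeslots and surface the most frequent terms per slot as candidate topic keywords:

```python
from collections import Counter, defaultdict
from datetime import datetime, timedelta

# Hypothetical minimal sketch: bucket timestamped posts into fixed-length
# timeslots and surface the most frequent terms per slot as crude topic cues.
def top_terms_per_slot(posts, slot_minutes=15, k=3):
    """posts: iterable of (datetime, text) pairs."""
    slots = defaultdict(Counter)
    slot = timedelta(minutes=slot_minutes)
    for ts, text in posts:
        # Floor the timestamp to the start of its timeslot.
        start = datetime.min + ((ts - datetime.min) // slot) * slot
        # Very naive tokenization; a real system would clean and rank terms.
        slots[start].update(w.lower() for w in text.split() if len(w) > 3)
    return {start: [w for w, _ in c.most_common(k)]
            for start, c in sorted(slots.items())}

posts = [
    (datetime(2014, 2, 25, 10, 2), "Parliament votes on budget reform"),
    (datetime(2014, 2, 25, 10, 9), "Budget reform vote passes parliament"),
    (datetime(2014, 2, 25, 10, 40), "Storm warning issued for coastal areas"),
]
print(top_terms_per_slot(posts))
```

A real entry would of course go well beyond term counting (e.g. clustering, burst detection, headline generation), but the timeslot structure of the output is the same.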

Infotainment Evaluation Infographic

September has been an evaluation month for all the innovative prototypes and technologies developed in the context of SocialSensor, and the Infotainment use case in particular.

As smartphone penetration keeps growing, so does people's need for useful apps that enhance their transactions and experiences and make them feel empowered and connected. Demonstrating the Infotainment use case prototypes to a number of users with various backgrounds and needs provided us with valuable feedback on the work done so far and helped us compile a list of updated specifications for the second development iteration. The infographic that follows outlines the most important evaluation outcomes for the Infotainment Prototype.

Release of CERTH SED implementation

SocialSensor successfully participated in the Social Event Detection task of the MediaEval 2012 workshop. MediaEval, a benchmarking initiative focused on evaluation tasks in multimedia analysis, took place in Pisa, Italy (4-5 October 2012). CERTH took part in the Social Event Detection (SED) task, which involved three challenges: (a) detection of technical events in Germany, (b) detection of soccer events in two major European cities, and (c) detection of Indignados events in Madrid. All challenges were defined on a large set of over 160,000 images from Flickr. A more detailed definition of the task is available here.

In all three challenges, CERTH achieved median performance using a versatile and flexible approach developed within the SocialSensor project. The approach is described here, and the presentation given at the workshop is available here. We have also made the implementation of the approach available in the form of a Java library.

Social Event Detection dataset made available for researchers

SocialSensor has successfully contributed to the organization of this year's Social Event Detection task (SED 2012), which took place in Pisa on 4-5 October in the context of MediaEval 2012. The SED 2012 dataset has been made publicly available by CERTH for download and use by the research community. You can freely download the dataset from . The downloadable archives include the following:

  - The three 2012 SED Challenges definitions,
  - The XML metadata for the images in the test dataset,
  - The actual image files of the test dataset,
  - Ground truth results for the defined challenges/dataset,
  - Our evaluation script.

Please feel free to use this dataset for research purposes, and also to disseminate the above information to anyone else who may be interested.

CERTH participated in the ImageCLEF 2012 Flickr photo annotation task

CERTH participated in ImageCLEF Photo Annotation and Retrieval 2012, and more specifically in the first subtask, namely visual concept detection and annotation using Flickr photos.

The objective of taking part in the competition was to evaluate two multimedia indexing approaches developed within SocialSensor and to compare the performance of methods using different sets of image features.

The first of the tested approaches constructs a similarity graph that includes both train and test images and trains concept detectors using graph Laplacian Eigenmaps (LE) as features. The second approach utilizes the concept of a “same class” model, which takes as input the set of distances between the image to be annotated and a reference item representing a target concept, and predicts whether the image belongs to the target concept.
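To give a flavor of the first approach (a toy sketch under simplifying assumptions, not CERTH's actual implementation), Laplacian Eigenmaps embed each item using the eigenvectors of the graph Laplacian of a similarity matrix that correspond to the smallest non-zero eigenvalues; these embedding coordinates can then serve as features for concept detectors:

```python
import numpy as np

# Toy sketch of Laplacian Eigenmaps (not the actual CERTH implementation):
# given a symmetric similarity (affinity) matrix W over train + test images,
# embed each item via the eigenvectors of the unnormalized graph Laplacian
# associated with the smallest non-zero eigenvalues.
def laplacian_eigenmaps(W, dim=2):
    D = np.diag(W.sum(axis=1))      # degree matrix
    L = D - W                       # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    # Skip the trivial constant eigenvector (eigenvalue ~0 for a
    # connected graph) and keep the next `dim` eigenvectors.
    return vecs[:, 1:dim + 1]

# Two clusters of three nodes each, joined by one weak edge (weight 0.1).
W = np.array([
    [0,   1, 1, 0.1, 0, 0],
    [1,   0, 1, 0,   0, 0],
    [1,   1, 0, 0,   0, 0],
    [0.1, 0, 0, 0,   1, 1],
    [0,   0, 0, 1,   0, 1],
    [0,   0, 0, 1,   1, 0],
], dtype=float)

X = laplacian_eigenmaps(W, dim=2)
# The first embedding coordinate (the Fiedler vector) separates the clusters.
print(X[:, 0])
```

In the toy example the sign of the first embedding coordinate splits the two clusters, which is exactly the kind of structure a downstream concept detector can exploit.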

Five runs were submitted. Run 1 ranked 5th out of 17 text-based runs in terms of MiAP, and 7th in terms of GMiAP and F-ex. The performance of the visual-based run (Run 2) was close to the median, ranking 13th out of 28 visual-based runs (MiAP, GMiAP) and 10th (F-ex). Finally, the multimodal runs ranked above average compared to the competition; for instance, Run 3 ranked 15th out of the 35 multimodal runs (MiAP, GMiAP) and 18th (F-ex). Detailed results are accessible at the official task page.