Image Annotation as Text-Image Matching: Challenge Design and Results

Luis Pellegrin, Octavio Loyola-González, José Ortiz-Bejar, Miguel Angel Medina-Pérez, Andrés Eduardo Gutiérrez-Rodríguez, Eric S. Tellez, Mario Graff, Sabino Miranda-Jiménez, Daniela Moctezuma, Mauricio García-Limón, Alicia Morales-Reyes, Carlos A. Reyes-García, Eduardo Morales, Hugo Jair Escalante

Abstract


This paper describes the design of the 2017 RedICA: Text-Image Matching (RICATIM) challenge, including the dataset generation, a complete analysis of the results, and descriptions of the top-ranked methods. The academic challenge explores the feasibility of a novel binary classification scenario in which each instance corresponds to the concatenation of learned representations of an image and a word. An instance is labeled as positive if the word is relevant for describing the visual content of the image, and negative otherwise. This novel approach to the image classification problem poses an alternative scenario in which any text-image pair can be represented in the same joint space, so any word can be considered as a candidate description for an image. The methods developed by participants are diverse and competitive, showing considerable improvements over the proposed baselines.
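For illustration, the sketch below shows the instance construction described above: an image representation and a word representation are concatenated into a single feature vector with a binary relevance label, on which any off-the-shelf classifier can be trained. The dimensions, the synthetic data, and the logistic-regression classifier are assumptions made for this example only, not the representations or methods used in the RICATIM challenge.

```python
# Minimal sketch of the text-image matching formulation: each instance is
# the concatenation of an image representation and a word representation,
# labeled 1 if the word describes the image and 0 otherwise.
# Dimensions, data, and classifier below are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

IMG_DIM, WORD_DIM = 512, 300   # assumed sizes of the learned representations


def make_instance(image_vec: np.ndarray, word_vec: np.ndarray) -> np.ndarray:
    """Concatenate an image representation and a word representation."""
    return np.concatenate([image_vec, word_vec])


# Synthetic stand-ins for pre-computed image and word representations.
n_pairs = 200
images = rng.normal(size=(n_pairs, IMG_DIM))
words = rng.normal(size=(n_pairs, WORD_DIM))
labels = rng.integers(0, 2, size=n_pairs)  # 1 = word is relevant to the image

X = np.stack([make_instance(img, w) for img, w in zip(images, words)])

# Any binary classifier can operate on this joint space; logistic
# regression is used here only as a placeholder baseline.
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```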

Keywords


Text-image matching, image annotation, multimodal information processing, academic challenges
