A Deep Attention based Framework for Image Caption Generation in Hindi Language

Rijul Dhir, Santosh Kumar Mishra, Sriparna Saha, Pushpak Bhattacharyya

Abstract


Image captioning refers to the process of generating a textual description of an image that identifies the objects and activities within it. It lies at the intersection of computer vision and natural language processing: computer vision is used to understand the content of an image, and language modelling from natural language processing is used to convert that content into words in the right order. A large number of works exist on generating image captions in the English language, but none exists for the Hindi language. Hindi is the official language of India and the fourth most-spoken language in the world, after Mandarin, Spanish, and English. The current paper attempts to bridge this gap. Here, a novel attention-based architecture for generating image captions in the Hindi language is proposed. A convolutional neural network is used as an encoder to extract features from an input image, and a gated recurrent unit based neural network is used as a decoder to perform language modelling up to the word level. In between, an attention mechanism helps the decoder to look at the important portions of the image. To show the efficacy of the proposed model, we first created a manually annotated image captioning training corpus in Hindi corresponding to the popular MS COCO English dataset, which has around 80,000 images. Experimental results show that the proposed model attains a BLEU-1 score of 0.5706 on this dataset.
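The attention step described in the abstract, where the decoder looks at the important portions of the image before emitting each word, can be illustrated with a minimal soft-attention sketch. This is not the paper's implementation: dot-product scoring, the toy region features, and the function names here are illustrative assumptions (the paper's model may use a learned alignment function over CNN feature maps and a GRU hidden state).

```python
import math

def softmax(scores):
    # Numerically stable softmax over attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(features, hidden):
    # features: list of per-region feature vectors from the CNN encoder
    # hidden:   current decoder (GRU) hidden state
    # Dot-product scoring is an illustrative choice, not the paper's.
    scores = [sum(f_d * h_d for f_d, h_d in zip(f, hidden)) for f in features]
    weights = softmax(scores)
    # Context vector: attention-weighted sum of the region features,
    # fed to the decoder when predicting the next word.
    dim = len(features[0])
    context = [sum(w * f[d] for w, f in zip(weights, features))
               for d in range(dim)]
    return context, weights

# Toy example: three image regions with 2-D features, one hidden state.
features = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
hidden = [1.0, 0.0]
context, weights = attend(features, hidden)
```

At each decoding step the weights change with the hidden state, so different words attend to different image regions; the weights always sum to one, and regions better aligned with the current state receive larger weight.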

Keywords


Image captioning, Hindi language, convolutional neural network, recurrent neural network, gated recurrent unit, attention mechanism
