Comparison of Style Transfer Using a CycleGAN Model with Data Augmentation

Gerardo Lugo-Torres, José E. Valdez-Rodríguez, Diego A. Peralta-Rodríguez, Hiram Calvo

Abstract


Image-to-image translation (I2I) converts images from one domain to another while preserving their underlying content. Classically, this mapping is learned from a dataset of aligned input-output pairs. Our study applies the CycleGAN model to translate images from the domain of Monet's paintings to a domain of varied photographs without the need for paired training examples. We address challenges such as mode collapse and overfitting, which degrade the fidelity and quality of the translated images. Our investigation focuses on improving the CycleGAN model's performance and stability through data augmentation strategies such as flipping, mirroring, and contrast enhancement. We argue that judicious selection of the training dataset can yield better results with less data than indiscriminate large-volume training. By scraping Monet's artwork online and curating a diverse, representative subset of images, we fine-tuned our model. This targeted approach earned 2nd place in the Kaggle challenge "I'm Something of a Painter Myself" as of August 3rd, 2023, demonstrating the efficacy of our enhanced training protocol.
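
As an illustration of the augmentation strategies named above (flipping, mirroring, and contrast enhancement), the following is a minimal sketch assuming a TensorFlow tf.data pipeline; the paper does not specify its framework, so the function and parameter values here are illustrative rather than the authors' actual code.

import tensorflow as tf

def augment(image):
    # Mirroring: random horizontal flip.
    image = tf.image.random_flip_left_right(image)
    # Flipping: random vertical flip.
    image = tf.image.random_flip_up_down(image)
    # Contrast enhancement: random contrast within an assumed [0.8, 1.2] range.
    image = tf.image.random_contrast(image, lower=0.8, upper=1.2)
    return image

# Usage (illustrative): ds = ds.map(augment, num_parallel_calls=tf.data.AUTOTUNE)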

Keywords


Generative adversarial network, image-to-image translation, data augmentation, cycle consistency
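
For reference, the cycle-consistency term listed above is the standard CycleGAN loss of Zhu et al. (2017), where G: X -> Y and F: Y -> X are the two generators; a sketch in LaTeX:

\mathcal{L}_{\mathrm{cyc}}(G, F) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)} \big[ \lVert F(G(x)) - x \rVert_1 \big]
  + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)} \big[ \lVert G(F(y)) - y \rVert_1 \big]

This term is combined with the two adversarial losses under a weight \lambda, giving the full objective \mathcal{L} = \mathcal{L}_{\mathrm{GAN}}(G, D_Y, X, Y) + \mathcal{L}_{\mathrm{GAN}}(F, D_X, Y, X) + \lambda \, \mathcal{L}_{\mathrm{cyc}}(G, F).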
