Carleton University - School of Computer Science Honours Project
Fall 2020
Deep Learning Approaches for Image-to-Sketch Translation
Tri Cao
ABSTRACT
In machine learning and computer vision, deep generative models have been among the most popular research topics of recent years. With the goal of exploring and gaining valuable knowledge in this area, we study two existing convolutional generative models that address the problem of translating photos of objects into 'human-like' sketches: (1) CycleGAN, a generative adversarial network that combines adversarial training with a cycle-consistency constraint to translate samples between any two domains, and (2) the Synthesizing Conditional Convolutional Decoder, an encoder-decoder model that creatively uses object labels and paired photo-sketch samples, in addition to the raw pixel data of the images, to achieve high performance. We evaluate the models in two experiments: generating black-and-white sketches and generating colorful sketches, where "colorful" means that each stroke in a training sketch is assigned a random, unique color. As a result, both models are able to generate impressive sketches whose depicted objects are recognizable to the human eye, although limitations remain with difficult input photos and with the clarity of the generated sketches.
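To illustrate the cycle-consistency idea mentioned above, the following is a minimal, hypothetical PyTorch sketch of CycleGAN's cycle-consistency loss. The toy generator architecture, the names G_photo2sketch / G_sketch2photo, and the weight lambda_cyc are illustrative assumptions, not the exact networks or settings used in this project.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Toy image-to-image generator (stand-in for the real photo<->sketch networks)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

G_photo2sketch = TinyGenerator()   # G: photo  -> sketch
G_sketch2photo = TinyGenerator()   # F: sketch -> photo

photos = torch.rand(4, 3, 64, 64)    # dummy batch of photos
sketches = torch.rand(4, 3, 64, 64)  # dummy batch of sketches

# Cycle consistency: translating to the other domain and back
# should reconstruct the original image (L1 penalty on the round trip).
lambda_cyc = 10.0
cycle_loss = lambda_cyc * (
    F.l1_loss(G_sketch2photo(G_photo2sketch(photos)), photos)
    + F.l1_loss(G_photo2sketch(G_sketch2photo(sketches)), sketches)
)
print(cycle_loss.item())  # combined with the usual adversarial losses during training

In the full CycleGAN objective, this term is added to the adversarial losses of both generator-discriminator pairs; it is what allows training on unpaired photo and sketch collections.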