Using GANs to Produce Art in a Particular Style from Semantic Maps

Abstract

We investigate the role of noise in the training process of an image-to-image style-transfer GAN. We compare two models trained on the same training data, a set of semantic maps paired with their target images; in one of the models, we add noise to the semantic maps before training. We then apply both models to a semantic map that was not used in training and measure the Euclidean distance of each generated image from its target image, treating the RGB pixel values as spatial coordinates, to compare the accuracy of the two models.
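The two steps described above, perturbing the semantic maps with noise and scoring a generated image by per-pixel Euclidean distance in RGB space, can be sketched as follows. This is an illustrative sketch only: the Gaussian noise model, the `sigma` scale, and the use of NumPy arrays are assumptions, not details taken from the project.

```python
import numpy as np

def add_noise(semantic_map, sigma=10.0, seed=0):
    """Add Gaussian pixel noise to a semantic map (H x W x 3 uint8 array).

    The Gaussian model and sigma value are hypothetical choices for
    illustration; the project does not specify its noise procedure.
    """
    rng = np.random.default_rng(seed)
    noisy = semantic_map.astype(np.float64) + rng.normal(0.0, sigma, semantic_map.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def pixel_distance(generated, target):
    """Mean per-pixel Euclidean distance between two H x W x 3 images,
    treating each pixel's (R, G, B) values as spatial coordinates."""
    diff = generated.astype(np.float64) - target.astype(np.float64)
    return float(np.mean(np.sqrt(np.sum(diff ** 2, axis=-1))))
```

A lower `pixel_distance` against the held-out target image would indicate a more accurate model under this metric.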

Which image was generated by an AI and which one was created by a person?

Presentation and Paper (paper coming soon)

The image on the left is the target image, the middle image is the source image, and the image on the right is the generated image.

Results

On the left, we have the images generated at each generation of the pix2pix model trained on data without noise. On the right, we have the target image.
On the left, we have the images generated at each generation of the pix2pix model trained on data with noise. On the right, we have the target image.

Code (coming soon) and Data Set

The data set: https://archive.org/details/wikiart-dataset


Related Papers
