The pioneers of AI generated art
The first widely known attempt to use AI to make art was Google's DeepDream. DeepDream is built on a neural network that was originally trained as an image classifier - detecting and recognizing objects in images.
Alexander Mordvintsev realized that the network could also be run in reverse, exaggerating the patterns it detects - often eyes and animal faces - and painting them back onto the input image. This created dreamlike, highly psychedelic images.
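To give a feel for what "running it in reverse" means, here is a rough sketch of the idea, assuming PyTorch and torchvision; a pretrained VGG19 stands in for the Inception network Google actually used, and the layer choice and step size are illustrative, not the original recipe. Instead of updating the network to fit the image, we update the image itself so that a chosen layer fires harder.

```python
import torch
from torchvision import models

# Fixed, pretrained feature extractor (first 21 layers of VGG19).
features = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features[:21].eval()
for p in features.parameters():
    p.requires_grad_(False)

def dream(image, steps=20, lr=0.05):
    """image: a (1, 3, H, W) tensor, roughly ImageNet-normalized."""
    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        activations = features(image)
        loss = activations.norm()   # "how strongly does this layer fire?"
        loss.backward()
        with torch.no_grad():
            # Gradient *ascent*: nudge the pixels to make the layer fire harder,
            # which exaggerates whatever patterns the layer has learned to detect.
            image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()
    return image.detach()
```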
Neural Style Transfer vs Generative Adversarial Networks
These days, when people talk about AI generated art, they mean one of two distinct things: Neural Style Transfer or Generative Adversarial Networks.
Neural Style Transfer is the name for a family of algorithms used to apply the style of one or more existing images to an input image. The operator of the algorithm chooses an input image (e.g., a picture of the Eiffel Tower) and a style image (e.g., The Starry Night by Vincent van Gogh), and the output is the first image rendered in the "style" of the second.
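To make that concrete, here is a minimal sketch of the optimization behind a typical Neural Style Transfer algorithm, assuming PyTorch and a pretrained VGG19 network; the layer indices and loss weights are illustrative defaults, not the recipe of any particular product. "Content" is captured by the raw activations of a deep layer, while "style" is captured by the correlations (Gram matrices) between feature maps.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def gram_matrix(features):
    # Style lives in the correlations between feature maps (the Gram matrix).
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_transfer(content_img, style_img, steps=300, style_weight=1e6):
    # Pretrained VGG19 is used as a fixed feature extractor.
    vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    content_layers = {21}               # conv4_2 preserves content
    style_layers = {0, 5, 10, 19, 28}   # conv1_1 ... conv5_1 capture style

    def extract(x):
        content_feats, style_feats = [], []
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in content_layers:
                content_feats.append(x)
            if i in style_layers:
                style_feats.append(gram_matrix(x))
        return content_feats, style_feats

    target_content, _ = extract(content_img)
    _, target_style = extract(style_img)

    # Optimize the pixels of a copy of the content image.
    output = content_img.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([output], lr=0.02)

    for _ in range(steps):
        optimizer.zero_grad()
        content_feats, style_feats = extract(output)
        content_loss = sum(F.mse_loss(a, b) for a, b in zip(content_feats, target_content))
        style_loss = sum(F.mse_loss(a, b) for a, b in zip(style_feats, target_style))
        (content_loss + style_weight * style_loss).backward()
        optimizer.step()

    return output.detach()
```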
Both DeepDream and Neural Style Transfer were remarkable developments in artificial intelligence. However, their output is not truly AI generated art, because the user has to supply images that already exist. This is why these algorithms were criticized as being essentially a fancy Instagram filter - and in fact, that is very similar to how many Instagram filters work.
The second kind of AI art is based on Generative Adversarial Networks (GANs), and it is the closest thing we have to a human artist in the world of artificial intelligence. Traditionally, GANs generate images that closely resemble existing art styles. The famous Edmond de Belamy portrait that sold for $432,500 is based on this technology. This is also the technology we use at ART AI.
ART AI uses a variation of Generative Adversarial Networks
This is an algorithm that can generate new, original images from scratch. Training this kind of AI requires training two separate neural networks - the "Critic" and the "Generator". Our Generator learns about style and content separately, which allows it to interpolate between styles and mix style and content in novel ways.
The Critic is given a vast database of human art in different styles from throughout history to analyze. The Generator, which has never "seen" art before, receives a random seed as input and starts generating an image from scratch. The output then goes through the Critic, which, based on its knowledge, judges whether or not the generated image looks like art made by humans.
The two networks are then trained together, with the Critic trying to get better at detecting "fake" images and the Generator trying to get better at fooling the Critic. The Generator improves through this process and ultimately learns to generate images that pass as "real" art in the eyes of the Critic.
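The following is a minimal sketch of that adversarial training loop, assuming PyTorch; the tiny fully-connected networks, image size, and hyperparameters are illustrative placeholders, not ART AI's actual models, which are far larger and convolutional.

```python
import torch
import torch.nn as nn

latent_dim = 100          # size of the random seed fed to the Generator
image_dim = 64 * 64 * 3   # flattened RGB image (placeholder resolution)

generator = nn.Sequential(
    nn.Linear(latent_dim, 1024), nn.ReLU(),
    nn.Linear(1024, image_dim), nn.Tanh(),
)
critic = nn.Sequential(
    nn.Linear(image_dim, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 1),   # one score: "does this look like human-made art?"
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
c_opt = torch.optim.Adam(critic.parameters(), lr=2e-4)

def training_step(real_images):
    """real_images: a (batch, image_dim) tensor of real artworks scaled to [-1, 1]."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the Critic: real art should score high, generated images low.
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise).detach()
    c_loss = (loss_fn(critic(real_images), real_labels)
              + loss_fn(critic(fake_images), fake_labels))
    c_opt.zero_grad()
    c_loss.backward()
    c_opt.step()

    # 2) Train the Generator: it is rewarded when the Critic is fooled.
    noise = torch.randn(batch, latent_dim)
    g_loss = loss_fn(critic(generator(noise)), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return c_loss.item(), g_loss.item()
```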
When the process is concluded, the Critic is discarded and the Generator alone is used to create new images. Once a respectable number of images has been generated, our team curates them into collections and names each piece with the help of a separate AI that generates titles.
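In practice, generating new candidate pieces then amounts to sampling fresh random seeds and passing them through the trained Generator - a brief continuation of the sketch above, with the batch size and image shape again being placeholders.

```python
# Sampling from the trained Generator: each fresh random seed yields a new image.
with torch.no_grad():
    seeds = torch.randn(16, latent_dim)                 # 16 new random seeds
    candidates = generator(seeds).view(16, 3, 64, 64)   # 16 candidate artworks
```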
We upload new collections every day!