Training AI art
GANs and Goodfellow
Many AI artworks use a type of algorithm called a generative adversarial network (GAN). Introduced in 2014 by computer scientist Ian Goodfellow and colleagues, these networks are called adversarial because they pit two models against each other: a generator, which creates new images, and a discriminator, which judges whether each image looks like it came from the training data or from the generator.
A GAN trained on photographs can create new photographs that look superficially authentic to the human eye. For example, an artist can feed a GAN landscape paintings from the past 500 years, and the trained network will then produce a range of output images that imitate those inputs.
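The adversarial setup described above can be sketched in a few lines of NumPy. This is a deliberately tiny toy, not a real image GAN: the "data" is a single number drawn from a normal distribution, the generator is an affine map, and the discriminator is logistic regression. All names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a normal distribution centred at 4.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator: an affine map from noise z to a sample, G(z) = a*z + b.
a, b = 1.0, 0.0
# Discriminator: logistic regression D(x) = sigmoid(w*x + c).
w, c = 0.0, 0.0

lr, steps, n = 0.05, 3000, 64
for _ in range(steps):
    z = rng.normal(0.0, 1.0, n)
    x_real, x_fake = real_batch(n), a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: change a and b so that D rates fakes as real
    # (the non-saturating generator loss -log D(G(z))).
    d_fake = sigmoid(w * (a * z + b) + c)
    grad_x = -(1 - d_fake) * w        # dLoss/d(fake sample)
    a -= lr * np.mean(grad_x * z)
    b -= lr * np.mean(grad_x)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean ~ {samples.mean():.2f} (real data mean is 4.0)")
```

As training alternates between the two steps, the generator's output distribution drifts toward the real data: the same dynamic, at vastly larger scale, that lets an image GAN imitate its training photographs.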
Google, dreams and DALL-E
The development of AI art has accelerated in the last decade, and as the tools have become more accessible to ordinary users, media coverage of them has grown. Google released DeepDream in 2015: it uses a neural network to find and enhance patterns in images via algorithmic pareidolia (a psychological phenomenon that causes people to see patterns in random stimuli), giving deliberately over-processed images a dream-like appearance.
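DeepDream's core operation is gradient ascent on the input image itself: the image is repeatedly nudged to amplify whatever features a chosen network layer already responds to. The sketch below illustrates only that idea, using a fixed random linear map as a stand-in for a network layer (an assumption for brevity; DeepDream itself uses a trained convolutional network and a real photograph).

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 16))      # stand-in for one network layer
x = rng.normal(size=16) * 0.01    # the "image", started from faint noise

def response(img):
    # The objective being maximised: how strongly the layer activates.
    act = A @ img
    return 0.5 * np.sum(act ** 2)

before = response(x)
for _ in range(50):
    grad = A.T @ (A @ x)          # gradient of the objective w.r.t. the image
    x += 0.01 * grad              # ascend: change the image, not the weights
    x /= max(1.0, np.linalg.norm(x) / 10.0)   # keep "pixel" values bounded

after = response(x)
print(f"layer response grew from {before:.4f} to {after:.2f}")
```

Because the weights stay fixed and only the image changes, the procedure exaggerates the patterns the layer is most sensitive to, which is what produces DeepDream's characteristic pareidolia-like imagery.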
Several programs use AI to generate images from text prompts. They include OpenAI's DALL-E and DALL-E Mini (which trended on Twitter during 2022), Google Brain's Imagen and Parti, and Microsoft's NUWA-Infinity. Many other AI art tools, such as the popular Midjourney, likewise generate artwork from short text descriptions.