Instead, we are going to tap once more into a new development in art and neural networks as a metaphor for where AI progress sits today and what is feasible in the years to come. For our 2019 “initiation” on this topic, covering the foundational concepts, see here. Today, let’s talk about OpenAI’s CLIP model, which connects natural-language prompts with images, and generative neural-art models like VQ-GAN.
Where GPT-3 excels at generating language, CLIP excels at associating language with images: it learns a shared embedding space from image–caption pairs, so it can score how well any text describes any image without being trained on a task-specific labeled image dataset.
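To make this concrete, here is a minimal sketch using OpenAI’s open-source `clip` package, following the usage pattern from its README. The image file `painting.jpg` and the candidate captions are placeholders of our own choosing: given one image and a handful of descriptions, CLIP returns a probability for how well each caption matches.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# A placeholder image and a few candidate captions (swap in your own).
image = preprocess(Image.open("painting.jpg")).unsqueeze(0).to(device)
text = clip.tokenize([
    "a surrealist painting",
    "a photo of a dog",
    "a city skyline at night",
]).to(device)

with torch.no_grad():
    # CLIP embeds image and text into a shared space and scores their similarity.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print(probs)  # e.g. the highest probability marks the best-matching caption
```

Note what is absent here: no fine-tuning, no fixed label set. The candidate “categories” are just free-form text, which is exactly the property that lets CLIP steer generative models like VQ-GAN toward a written prompt.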