Perception Engines examines the ability of neural networks to create abstract representations from collections of real-world objects. Given an image, a neural network can assign it to a category such as fan, baseball, or ski mask; this machine learning task is known as classification. Before a neural network can classify images, it must be trained on example images, so the perception abilities of the classifier are grounded in the dataset of example images used to define a particular concept. In this work, the only source of ‘ground truth’ for any drawing is this unfiltered collection of training images. The process is called ‘perception engines’ because it uses the perception ability of trained neural networks to guide its construction process. With no human intervention, the collection of input training images is transformed into an abstract visual representation of that category.
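The classification step described above can be sketched in miniature. The snippet below is not the actual network used in this work; it is a hypothetical linear classifier with random stand-in weights (in practice the weights would come from training on the example images), showing only the core operation: scoring an image against each category and picking the highest.

```python
import numpy as np

# Categories taken from the text above; the weights are random
# placeholders standing in for a trained model's parameters.
CATEGORIES = ["fan", "baseball", "ski mask"]

rng = np.random.default_rng(0)
W = rng.normal(size=(len(CATEGORIES), 64))  # stand-in learned weights
b = np.zeros(len(CATEGORIES))               # stand-in learned biases

def classify(image_vec):
    """Score an image vector against each category and return the
    most likely category plus a probability distribution."""
    scores = W @ image_vec + b
    # softmax converts raw scores into probabilities
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return CATEGORIES[int(np.argmax(probs))], probs

image = rng.normal(size=64)  # stand-in for a flattened input image
label, probs = classify(image)
```

A real classifier replaces the linear map with a deep network and learns `W` and `b` from the training set, but the argmax-over-category-scores step is the same.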