Macchia

McCarthy, Penny and DAY, Michael (2017) Macchia. [Artefact]

Video: Macchia01.mp4 (16MB). Available under License: All Rights Reserved.

Abstract or description

Macchia is a collaborative project that explores inscrutable cloud-based, ‘black-boxed’ processing platforms, and the way that automated aesthetic encounters differ from embodied ones.

The project was initiated by Penny McCarthy and developed collaboratively, motivated by curiosity about the seemingly magical functioning of reverse image search, whereby a submitted image returns results that seem at once visually similar and uncannily dissimilar. Macchia allows image search queries to be made from a selection of source material from McCarthy’s drawing practice.

Macchia uses a commercial search API, which indexes images on the web and passes them through a convolutional neural network, creating a fingerprint (feature vector) of salient visual information specific to each image. When a search query is made, the submitted image is analysed and images with statistically similar feature vectors are returned. Feature vectors represent statistical probabilities that multiple subsections of the image share visually salient characteristics. A small area of an image might have a high probability of being a bird in flight, for example, and a slightly lower probability of being an aeroplane, but in comparing those probabilities, overlaps might arise that provide visual matches from very different categories of thing.
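As an illustration of the pipeline described above, the sketch below approximates it with an off-the-shelf pretrained convolutional network (ResNet-50 via torchvision) and cosine similarity between feature vectors. The commercial API used in Macchia is not named in this description, so the libraries, model, function names and file names here are assumptions chosen for demonstration, not the system actually used.

# Minimal sketch of CNN feature-vector search, assuming PyTorch, torchvision and Pillow.
# The model, preprocessing and helper names are hypothetical stand-ins for the unnamed API.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained CNN with its classification head removed, leaving a feature extractor.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def feature_vector(path: str) -> torch.Tensor:
    """Return an L2-normalised 'fingerprint' of the image's salient visual information."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        vec = model(image).squeeze(0)
    return vec / vec.norm()

def most_similar(query_path: str, index_paths: list[str], k: int = 5):
    """Rank indexed images by cosine similarity to the query image's feature vector."""
    query = feature_vector(query_path)
    index = torch.stack([feature_vector(p) for p in index_paths])
    scores = index @ query  # cosine similarity, since all vectors are unit length
    top = torch.topk(scores, k=min(k, len(index_paths)))
    return [(index_paths[i], scores[i].item()) for i in top.indices]

# Hypothetical usage: find the closest visual matches to one of the source drawings.
# matches = most_similar("drawing.jpg", ["web_img_01.jpg", "web_img_02.jpg"])

In this sketch, each image is reduced to a unit-length feature vector, and the dot product between vectors stands in for the statistical comparison of visually salient characteristics described above.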

The level of complexity that the feature vector can contain introduces the magic: levels of visual interpretation this nuanced are commonly confined to the qualitative realm and carried out by humans. Convolutional neural networks are modelled on the human perceptual system, and as such their image-recognition capacity can be seen as attempting to reproduce that found in human vision (Kogan, 2017). This gap between the capacities of human and machine vision to identify and contextualise salience, together with the inscrutability of the operation of neural networks, reveals widely-held concerns about these technologies.

Reference
Kogan, G. (2017). Convolutional neural networks. [Online]. December 2017. Available from: https://ml4a.github.io/ml4a/convnets/. [Accessed: 7 January 2018].

Item Type: Artefact
Faculty: School of Creative Arts and Engineering > Art and Design
Event Location: Site Gallery, Sheffield, UK
Depositing User: Michael DAY
Date Deposited: 19 Dec 2018 12:01
Last Modified: 24 Feb 2023 13:53
URI: https://eprints.staffs.ac.uk/id/eprint/5045
