Cross-Modal Compositions | Isle of Portland, UK | Unit 21 | 2022
Awarded the Fitzroy Robinson Drawing Prize
rolandas.markevicius.20@ucl.ac.uk
Developments in deep neural networks force us to reconsider the role of references. The project frames this claim through the challenge of establishing synaesthetic links between architecture and music, suggesting that the new instruments offered by machine learning allow designers to play with abstract features systematically. The project proposes a practical implementation of a Pix2pix generative adversarial network that creates shared mappings between architectural drawings and audio spectrograms, shaping methodologies for formalising relationships between any sets of data.
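A minimal sketch of what such a Pix2pix-style translation step could look like, assuming a PyTorch implementation; the tiny generator and discriminator below are illustrative stand-ins for the pix2pix U-Net and PatchGAN, and the image resolution and loss weighting are assumptions, not the project's documented settings.

```python
# Sketch of a pix2pix-style generator update: the generator maps an
# architectural section image to a spectrogram image; the discriminator
# judges the (input, output) pair. Networks are illustrative placeholders.
import torch
import torch.nn as nn

IMG = 256  # assumed image/spectrogram resolution

gen = nn.Sequential(            # stand-in for the pix2pix U-Net generator
    nn.Conv2d(1, 16, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Tanh(),
)
disc = nn.Sequential(           # stand-in for the 70x70 PatchGAN
    nn.Conv2d(2, 16, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 1, 4, 1, 1),
)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4, betas=(0.5, 0.999))

def generator_step(section, spectrogram, lam=100.0):
    """One pix2pix generator update: adversarial loss + L1 to the target."""
    fake = gen(section)
    pred = disc(torch.cat([section, fake], dim=1))  # condition on the input
    loss = bce(pred, torch.ones_like(pred)) + lam * l1(fake, spectrogram)
    g_opt.zero_grad()
    loss.backward()
    g_opt.step()
    return loss.item()

# e.g. with a batch of paired drawing/spectrogram tensors:
section = torch.randn(4, 1, IMG, IMG)
spec = torch.randn(4, 1, IMG, IMG)
print(generator_step(section, spec))
```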
Personalised software and data augmentation methodologies are developed throughout the project to support the goal of designing a music school in Portland, UK.
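As a hedged sketch of what such augmentation could involve for paired image-translation data (the project's actual methodology is not specified here), the function below applies pix2pix-style "random jitter" identically to a drawing and its spectrogram so the pair stays aligned; the resize and padding values are assumptions.

```python
# Paired augmentation: resize up, take the SAME random crop from input and
# target, and flip both together so the training pair remains consistent.
import random
import torchvision.transforms.functional as TF

def jitter_pair(drawing, spectrogram, size=256, pad=30):
    """Apply identical random crop/flip to an (input, target) image pair."""
    drawing = TF.resize(drawing, [size + pad, size + pad])
    spectrogram = TF.resize(spectrogram, [size + pad, size + pad])
    top = random.randint(0, pad)
    left = random.randint(0, pad)
    drawing = TF.crop(drawing, top, left, size, size)
    spectrogram = TF.crop(spectrogram, top, left, size, size)
    if random.random() < 0.5:               # mirror both images together
        drawing = TF.hflip(drawing)
        spectrogram = TF.hflip(spectrogram)  # i.e. time-reversed audio
    return drawing, spectrogram
```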
Interface overview
The interface is developed to improve productivity in neural network-based image translation and editing, enabling bidirectional translation between the auditory and spatial modalities.
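A minimal sketch of the two conversion directions such an interface could expose, assuming spectrograms are handled as magnitude images via librosa; the STFT settings (a 510-point FFT giving 256 frequency bins) are illustrative assumptions, not the interface's documented parameters.

```python
# Bidirectional modality conversion: audio -> spectrogram image, and
# spectrogram image -> audio via Griffin-Lim phase recovery.
import numpy as np
import librosa

N_FFT, HOP = 510, 128   # assumed: 510-point FFT -> 256 frequency bins

def audio_to_spectrogram(wav):
    """Spatialise audio: magnitude spectrogram as a 2D image array."""
    mag = np.abs(librosa.stft(wav, n_fft=N_FFT, hop_length=HOP))
    return mag / (mag.max() + 1e-8)          # normalise to [0, 1]

def spectrogram_to_audio(mag):
    """Sonify an image-like spectrogram via Griffin-Lim phase recovery."""
    return librosa.griffinlim(mag, n_fft=N_FFT, hop_length=HOP)
```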
Self-Organising Maps
Self-organising maps (SOMs) are used as a key technique for organising unrelated datasets. The mapping of spectrograms and sections is represented through U-matrices, classification distributions and image distributions.
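A hedged sketch of organising feature vectors with a SOM and reading off its U-matrix, here using the MiniSom library (the project's own SOM implementation is not specified); the grid size, feature dimension and training length are placeholder assumptions.

```python
# Train a SOM on placeholder feature vectors and extract the U-matrix,
# which visualises inter-neuron distances across the map grid.
import numpy as np
from minisom import MiniSom

features = np.random.rand(500, 64)     # placeholder feature vectors

som = MiniSom(12, 12, 64, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train(features, 5000)              # 5000 training iterations

u_matrix = som.distance_map()          # inter-neuron distances in [0, 1]
winner = som.winner(features[0])       # grid cell of the first sample
print(u_matrix.shape, winner)
```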
Cross-modal composition
A musical walk is curated by iteratively adjusting the architectural outputs with consideration for the musical composition. Audio is generated through generative adversarial networks, sequentially composing the architecture.
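A minimal sketch, assuming the pipeline above, of how such a walk could be assembled: a sequence of edited section drawings is translated to spectrograms by a trained generator, each spectrogram is sonified, and the segments are concatenated into one track. The function and the `gen`/`drawings` inputs are hypothetical, not the project's documented code.

```python
# Compose a "musical walk" by chaining audio segments generated from a
# sequence of architectural drawings.
import numpy as np
import torch
import librosa
import soundfile as sf

def musical_walk(gen, drawings, n_fft=510, hop=128):
    """Translate drawings to spectrograms, sonify each, concatenate all."""
    segments = []
    with torch.no_grad():
        for img in drawings:                 # (1, H, W) tensors in [0, 1]
            mag = gen(img.unsqueeze(0))[0, 0].numpy()
            mag = np.clip(mag, 0.0, None)    # magnitudes must be >= 0
            segments.append(
                librosa.griffinlim(mag, n_fft=n_fft, hop_length=hop))
    return np.concatenate(segments)

# e.g.: sf.write("walk.wav", musical_walk(gen, drawings), 22050)
```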