ArcheoFuturism – A Personal Statement on the Advent of A.I. Art

Contributed by Daniel Mirante on September 1, 2022

I have been observing with interest the vitriolic discussions erupting throughout the online community about the sudden preponderance of so-called AI artwork. Set against the zeitgeist of Covid, chaotic globalist forces, and new technologies threatening to render much human labour redundant, AI art appears as a threatening incursion upon sacrosanct forms of organic human creativity.

The traditional painter believes themselves threatened with being made redundant through automation, along with factory workers, farmers, basket makers, clothes makers and so on. It seems like barely any part of our lives right now is untouched by this far-reaching technological revolution, which brings with it a kind of existential dread.

It is therefore with empathy for, and understanding of, the concerns of many of my colleagues and contemporaries in their lamentations about artificial intelligence that I offer this post, in my usual attempt at a ‘middle path’: striking some kind of balance and opening the way for a more mature dialogue about the potentials and threats within new technology.

My background in art began as a child and teenager interested in comic pulp and otherworldly art. This quest for vision led me to studying fine art in Manchester around the turn of the millennium, during the advent of the internet into all of our lives. The university lecturers were fascinated by the potentials of the web and therefore, during our degree, rather than learning to paint, we learnt HTML and 3D modelling. In 1998, Emma Gilbert and I worked on the International Symposium on Electronic Art, producing a mind-boggling CD-ROM interface based on glitch video and quantum theory, which was given to all delegates. We were interested in exploring evolutionary and algorithmic art back then, 25 years ago.

I was inspired by the diagrams in Richard Dawkins’ ‘The Blind Watchmaker’ and was fascinated by the field of systems theory and complexity, and by the emerging algorithmic art of the time. I experimented with L-Parser, VRML, Mutator and a number of other programs, primarily intended for computation and for scientists exploring evolution, but which could be bent towards aesthetic ends.

My tangent for a number of years then took me away from things digital, as I immersed myself in ancient initiatic traditions, vision questing and world travel, and developed the skills I was not given at university by apprenticing within the Ernst Fuchs lineage, honing my traditional craft. I still remain firmly embedded in traditional craft. For me, it is a meditation and a way to get deeper to the Self/No-Self.

However, I have remained curious and inquisitive about art technology, watching many of my peers do very well and make very new things in the digital worlds: embracing Photoshop, using it as a tool in preparing paintings and photo-montage, admiring the work of 3D artists, and so on.

I have observed in my time 3D art transcending its initial constraints and becoming a powerful medium for people to realise their vision. Early 3D software was so simple and basic that it completely prescribed a style to the user; it was originally very difficult for anyone to express a personal mark through those means. It is much more possible now.

I first became aware of the new wave of AI innovation in the arts via the website ‘This Person Does Not Exist’, where, with every browser refresh, a new photorealistic face appears of what seems to be a real person – but is in fact the product of a learning network, which has generalised the features that make up a human face and can recompose new faces at the click of a button. This was highly intriguing to me and ignited my curiosity about what the potential of such learning networks would be if they were fed artworks.

I began to play around with Runway Machine Learning two years ago, building my own data sets made of thousands of fragments of my own paintings and historical references and seeing what kinds of resulting forms could be generated. This work was extremely labour-intensive and did not involve any of the convenient, easy interfaces that are now available. However, it did allow for a lot of customisation – for instance, the cultivation and design of one’s very own data set, which means that the information and aesthetics gleaned from such data sets are shaped by the artists themselves, which is not the case with many of the consumer apps that now exist.

Runway ML also did not allow imagery to be defined by text prompts, but instead produced a latent space which one could travel through and, as it were, mine for useful bits and bobs found along the trail. It was incredibly labour-intensive and involved a lot of intellectual struggle and learning.
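
For readers curious what ‘travelling through a latent space’ looks like in practice, here is a minimal, illustrative sketch in Python/PyTorch. The generator below is a hypothetical, untrained stand-in for whatever trained model a tool like Runway ML would actually wrap; the point is only the mechanic of interpolating between latent points and rendering an image at each step – the ‘trail’ one mines for useful forms.

```python
# Minimal sketch of a 'latent space walk'. The generator here is a hypothetical,
# untrained toy network standing in for a real trained model; only the
# interpolation mechanic is the point.
import torch
import torch.nn as nn

LATENT_DIM = 512

# Hypothetical generator: in practice this would be a trained GAN-style model,
# not this untrained toy producing a small 64x64 RGB image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 1024), nn.ReLU(),
    nn.Linear(1024, 3 * 64 * 64), nn.Tanh(),
)

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors."""
    z0n, z1n = z0 / z0.norm(), z1 / z1.norm()
    omega = torch.acos((z0n * z1n).sum().clamp(-1 + 1e-7, 1 - 1e-7))
    return (torch.sin((1 - t) * omega) * z0 + torch.sin(t * omega) * z1) / torch.sin(omega)

z_start = torch.randn(LATENT_DIM)  # two random points in the latent space
z_end = torch.randn(LATENT_DIM)

with torch.no_grad():
    for step in range(8):  # walk the path between them in eight steps
        z = slerp(z_start, z_end, step / 7)
        image = generator(z).reshape(3, 64, 64)
        # each image along the walk would be inspected and 'mined' for useful forms
        print(f"step {step}: image tensor of shape {tuple(image.shape)}")
```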

I was exploring what these new tools could do in order to learn the average of my forms and aesthetics. It helped me reflect back to myself the essence of the kind of aesthetic that I wish to cultivate in my own work. This was a very personal journey, and I undertook it without guidance or understanding from my peers.

I next explored more direct forms of style transfer, which is the process whereby the palette and certain generalised forms from one piece of work can be transposed onto another. This style transfer process requires an initial image, which can be one’s own photograph, montage or painting.
These days, convenient mobile phone apps have simplified this process so people can upload their own selfie and have it rendered in the style of Da Vinci and so on. However, the raw software I was using allowed for very large-scale style transfers, and rather than working off a selfie, I would use it to create areas of texture, arabesque and style swatches that I then employed in my digital art. Again, this was a tremendously labour-intensive process.
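
For the technically curious, here is a minimal sketch of the style-transfer idea described above, roughly in the spirit of the Gatys et al. method, using Python and a pretrained VGG19 from torchvision. It is not the exact software I used; the layer indices, weights and stand-in images are illustrative assumptions. In real use, the content image would be one’s own photograph, montage or painting, and the style image a swatch of texture or palette.

```python
# Minimal style-transfer sketch: optimise an image so it keeps the content's
# structure while taking on the style image's texture/palette statistics.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"

vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv layers whose statistics capture texture/palette
CONTENT_LAYER = 21                  # a deeper layer that preserves overall composition

def extract(x):
    """Run x through VGG19 features, collecting style and content activations."""
    style, content = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style.append(x)
        if i == CONTENT_LAYER:
            content = x
    return style, content

def gram(x):
    """Gram matrix: channel-to-channel correlations, i.e. the 'style'."""
    b, c, h, w = x.shape
    feat = x.reshape(b, c, h * w)
    return feat @ feat.transpose(1, 2) / (c * h * w)

# Random tensors stand in for real images here (a full run would load and
# ImageNet-normalise one's own content and style images).
content_img = torch.rand(1, 3, 256, 256, device=device)
style_img = torch.rand(1, 3, 256, 256, device=device)

with torch.no_grad():
    style_targets = [gram(s) for s in extract(style_img)[0]]
    _, content_target = extract(content_img)

result = content_img.clone().requires_grad_(True)   # start from the content image
opt = torch.optim.Adam([result], lr=0.02)

for step in range(200):                              # a short run for illustration
    opt.zero_grad()
    style_feats, content_feat = extract(result)
    style_loss = sum(F.mse_loss(gram(s), t) for s, t in zip(style_feats, style_targets))
    content_loss = F.mse_loss(content_feat, content_target)
    loss = 1e6 * style_loss + content_loss           # weight style heavily
    loss.backward()
    opt.step()
# 'result' now carries the content's structure with the style image's palette and texture.
```
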
The past year has seen the advent of text-prompt-based (‘CLIP’-guided) artificial intelligence like Midjourney.

Initially, CLIP-guided generation had to be run from a Python command line in Google Colaboratory, involving renting Google Cloud services. It was complex, expensive, messy, time-consuming and very hit and miss. Apps like Midjourney and DALL-E 2 have provided user interfaces so that people who are not programmatically savvy can use CLIP, and whatever databases these apps plug into, to create instant imagery. They are extremely powerful.
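
As a brief illustration of what CLIP itself actually does under the hood of those notebooks: it does not generate images; it scores how well an image matches a text prompt, and a guided generator repeatedly nudges its output towards a higher score. The sketch below uses OpenAI’s open-source CLIP package; the prompts and the random stand-in image are purely hypothetical.

```python
# Minimal CLIP-scoring sketch (pip install git+https://github.com/openai/CLIP).
import numpy as np
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# A random image stands in for a generator's current output.
candidate = Image.fromarray((np.random.rand(224, 224, 3) * 255).astype("uint8"))
image = preprocess(candidate).unsqueeze(0).to(device)

prompts = ["a luminous visionary temple, oil on canvas",   # hypothetical prompts
           "a grey photograph of an office block"]
text = clip.tokenize(prompts).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).squeeze().tolist()

for prompt, p in zip(prompts, probs):
    print(f"{p:.3f}  {prompt}")
# A CLIP-guided generator would repeat this scoring and adjust its latent
# input to increase the score for the chosen prompt.
```
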
I have read a lot of claims that this work is without beauty and soul. These seem to me like very generalised judgements upon a diverse field of experimentation and human curiosity.

I am not concerned about confusing these networks with some ruminating, cinematic sci-fi entity that wants to thwart humanity. These applications have no personal agency or agenda. However, they are limited. I’m interested in people exploring the limitations of these new forms of aesthetic image generation to find exactly what the limits are. To me, *some* of the imagery that people produce is grotesque and ugly, and some is beautiful and novel.

There are still human beings in the loop. It takes a human being to initiate the process. It takes a human being to design a text prompt (‘promptism’). It takes a human being to select an image, and from that image whatever other process can be added: a generated image may be used for its texture, for a shape, or for a single element in a digital painting that then becomes a real painting, recomposed through immense effort and dedication. The arguments that AI threatens to completely replace the human artist sound moot to my ears, although I strongly concede that what these algorithms produce gains more value and worth by being carefully developed from its starting point into something more personal.

I don’t understand why this process is claimed to have less integrity than someone collaging from found imagery acquired from Google Images. There is more opportunity here for artists to create a bank of their own personal resources and use that bank as a very custom starting point for wherever else they want to take it.

I have attracted a lot of controversy with my standpoint, which is that exploring and fulfilling human curiosity is healthy, and that artists should not be pigeonholed according to whether they choose to explore this software or not. I have read very unkind and divisive words from people (I will not name names) suggesting that artists are ruined or spoilt by exploring or touching the potentials of these interesting emergences – that, as it were, they have done a deal with the devil. This is insulting, catholic bullshit, akin to the old claims of a camera stealing people’s souls.
There are genuine problems concerning the automation of human skills, which this field does bear upon. There are also the long arguments about attribution – arguments that also concern collage, such as that of Dada and Max Ernst, and people painting from other people’s photographs, which is hugely commonplace.

For integrity, I would suggest that an artist does not use the raw output from these CLIP-based applications as an end point, but rather as a starting point to understand their own aesthetic taste and intentions. Like many powerful tools, they can be used recklessly or they can be used with an enquiring and inventive spirit.
I am intrigued by where this field is going and excited by the new forms of imagery that are appearing. I also think that the advance of mainstream technological art can only help traditional artists, by clearly differentiating the hard labour of analogue and traditional art from the kinds of art that people are quickly able to identify as belonging to this or that app.

However, the skilful use of several apps, collaging, inventive use of the created material, and then a folding back into traditional media is an approach I call ‘ArcheoFuturism’, which I believe to be fertile ground for the painter and illustrator, and a means to break one’s own patterns and open up new potentials and possibilities within traditional work.

I have experimented with these approaches as a means to illustrate a book which is the work of my own human imagination – a story that has taken a number of years to cohere – and I’ve found it a fun and fascinating process to rapidly create visualisations that are fairly exacting to the world long cultivated within the story and within my mind.

Electronic music allowed people to sample pieces of music, sample their environment, take sound apart on a genetic level and recompose it into their own compositions, even if they had no traditional musical training. Cubase and Ableton allowed the methodical composition of music. This unleashed a realm of music that many of us enjoy daily, and I see no coherent argument yet as to why the advent of these new visual tools is really any different.
In the words of William Blake:

“I will not reason and compare, my business is to create”

Daniel Mirante is a painter, historian, scholar, teacher and writer.
