Beauty and the machine: teaching algorithms to detect the simulacrum

A couple of days ago I wrote a short thread on Twitter about my interest in bootlegs. I am mostly interested in fashion/clothing-related items and toys; I am less knowledgeable about (or even interested in) the niche of music-related bootlegs. I said then that one of the reasons I love bootlegs is that they shatter notions of authenticity and purity. I am not so much interested in bootlegs that attempt to reproduce exactly what they imitate but in those that are more like “referential objects”, evoking rather than merely copying. I also believe that bootlegs are disruptive of class in that they “cheapen” the value of the consumer objects that the rich use as reference points (i.e. markers of class). In this “cheapening”, the consumer good becomes “tainted” by its proximity to the wrong class and ethnicities. For an example of this, see the way the Burberry pattern became “tarnished” once bootlegs bearing it were widely adopted by the British working class.

I see bootlegs as a bastardized genre of pop culture. My favourite ones are not identical copies of the objects they reproduce but are more like a pastiche of what consumers value in terms of class aspirations. I see them as a mishmash of brands and aesthetics, a consumerist chimera of sorts: a fabulous beast made of bits and pieces, assembled either because whoever created the product was confused, purposefully disruptive, or simply didn’t care.

I also see bootlegs as a sort of consumer fiction that requires a suspension of disbelief in order to function: i.e. we “agree” to pretend that the simulacrum is real for the purpose of participating in the cultural phenomenon. This simulacrum denounces the asymmetrical power structure behind brand capitalism: the “illegal” copy is treated as an aesthetic abomination, while the appropriation of indie/marginalised creators is normalised as part of capitalism’s predatory nature.

I have spent the last couple of years researching, lecturing and writing about “The Coloniality of the Algorithm”. Now I am starting a new research project called “Beauty and the Machine”. 

From the project’s pitch:

A starting point: Lev Manovich’s ideas around “Cultural Analytics” (“the science of analytical reasoning facilitated by visual interactive interfaces”) and how we are transferring our cultural notions of aesthetics into machines through the process known as “machine learning”.

From Manovich’s 2017 essay “Automating Aesthetics: Artificial Intelligence and Image Culture”, on cultural analytics:

Visual data analysis blends highly advanced computational methods with sophisticated graphics engines to tap the extraordinary ability of humans to see patterns and structure in even the most complex visual presentations. Currently applied to massive, heterogeneous, and dynamic datasets, such as those generated in studies of astrophysical, fluidic, biological, and other complex processes, the techniques have become sophisticated enough to allow the interactive manipulation of variables in real time. Ultra high-resolution displays allow teams of researchers to zoom in to examine specific aspects of the renderings, or to navigate along interesting visual pathways, following their intuitions and even hunches to see where they may lead. New research is now beginning to apply these sorts of tools to the social sciences and humanities as well, and the techniques offer considerable promise in helping us understand complex social processes like learning, political and organizational change, and the diffusion of knowledge 

Trying to understand how these cultural analytics would operate vis-à-vis Barthes’ semiotic codes, particularly in regard to teaching machines to identify highly subjective codes (not necessarily shared even among humans, and so heavily dependent on culture/ethnicity/gender, etc.).

The political implications of teaching these culturally specific aesthetics to machines: consider, for example, how Instagram’s algorithm “recommends” certain images to the detriment of others, how the algorithm deems a photo to be “beautiful” and worth promoting, etc.

I am interested not only in how these decisions are made (i.e. how a programmer “taught” the machine) but also in the taxonomies that come into play in this process and the kinds of aesthetic choices made to create those taxonomies. Not only how “beauty” or “ugliness” are coded into the machine, but also the cultural aspects of defining these categories.
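As a toy illustration of what I mean by a taxonomy being coded into the machine, here is a minimal sketch of a supervised-learning setup. Everything in it (the label set, the features, the numbers) is invented for illustration; it is not any platform’s actual system. The point is simply that the categories exist only because someone chose them and labelled the examples.

```python
# A toy illustration of how an aesthetic "taxonomy" enters a model: the
# label set and the labelled examples are human choices made upstream of
# any learning. All labels and feature values below are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

TAXONOMY = ["beautiful", "ugly"]  # someone decided these are the categories

# Hypothetical image features (say, brightness and symmetry) and the labels
# a human annotator assigned to them -- the cultural judgement lives here.
features = np.array([[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.3, 0.2]])
labels = np.array([0, 0, 1, 1])  # indices into TAXONOMY

model = LogisticRegression().fit(features, labels)
new_image = np.array([[0.7, 0.6]])
print(TAXONOMY[model.predict(new_image)[0]])  # the model reproduces the taxonomy
```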

Of course, this current research is a continuation of last year’s project on “The Coloniality of the Algorithm”, which situated Linnaean taxonomies at the heart of both colonial history and our contemporary uses of technology.

So, to go back to the simulacrum: how do machines learn what is real vs. what is a simulacrum? (A side note: in Blade Runner, the machines themselves are the simulacra.)

Artificial Intelligence Is Being Used to Combat Luxury Fakes 

Once they had enough samples, their algorithm took over, analyzing the tiny details that make up the DNA of the genuine articles. They posit that these details are too difficult for counterfeiters to reproduce. Entrupy’s founders co-authored an article that states, “Even if microscopic details are observed [by counterfeiters], manufacturing objects at a micron or nano-level precision is both hard and expensive.”

Entrupy’s CTO and co-founder, Ashlesh Sharma, completed a PhD at NYU specializing in computer vision, which allows computers to capture an object’s microscopic surface data. Sharma saw the potential to use this technology to map what Entrupy calls the “genome of physical objects.”

“The genome of physical objects”

What interests me is that the founders published a paper titled “The Fake vs Real Goods Problem: Microscopy and Machine Learning to the Rescue” and that machines are being trained to detect simulacra at a microscopic level. From the paper:

we introduce a new mechanism that uses machine learning algorithms on microscopic images of physical objects to distinguish between genuine and counterfeit versions of the same product. The underlying principle of our system stems from the idea that microscopic characteristics in a genuine product or a class of products (corresponding to the same larger product line), exhibit inherent similarities that can be used to distinguish these products from their corresponding counterfeit versions.
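To make the mechanism in that passage a little more concrete, here is a minimal sketch of the general idea rather than Entrupy’s actual pipeline: microscopic image patches from genuine and counterfeit items are turned into feature vectors, and an off-the-shelf classifier learns to separate the two. The directory layout, patch size and choice of an SVM are my assumptions for illustration.

```python
# A rough sketch of "genuine vs counterfeit" classification from microscope
# images, in the spirit of the paper. The directory layout, patch size and
# classifier choice are illustrative assumptions, not the authors' pipeline.
import glob
import numpy as np
from PIL import Image
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

PATCH = 64  # assumed patch size in pixels

def patches(path, patch=PATCH):
    """Split one grayscale microscope image into non-overlapping patches."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
    h, w = img.shape
    return [img[y:y + patch, x:x + patch].ravel()
            for y in range(0, h - patch + 1, patch)
            for x in range(0, w - patch + 1, patch)]

def load(folder, label):
    """Every patch inherits the label of the item it was photographed from."""
    X, y = [], []
    for path in glob.glob(f"{folder}/*.png"):  # hypothetical file layout
        for p in patches(path):
            X.append(p)
            y.append(label)
    return X, y

X_real, y_real = load("microscope/genuine", 1)
X_fake, y_fake = load("microscope/counterfeit", 0)
X = np.array(X_real + X_fake)
y = np.array(y_real + y_fake)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("patch-level accuracy:", clf.score(X_test, y_test))
```

In practice a bag would presumably be authenticated by aggregating many patch-level predictions rather than trusting any single patch.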

Thinking of Deleuze’s Difference and Repetition, particularly “simulacra as the avenue by which an accepted ideal or ‘privileged position’ could be ‘challenged and overturned’” and simulacra as “those systems in which different relates to different by means of difference itself”. Especially in terms of the class issues surrounding counterfeit goods: if the simulacrum is “too good”, it shatters the class signifier through which “different relates to different” (i.e. the rich signalling their wealth to fellow rich). But, on the other hand, if the simulacrum can only be detected at a microscopic level, how much of a simulacrum is it (at least in terms of delivering the class signifier it’s meant to deliver)? I.e. if the copy cannot be detected with the naked eye, if it looks authentic and serves the function for which it was created, then what is real?

Deleuze: “The simulacrum is not just a copy, but that which overturns all copies by also overturning the models.”

and 

“Does this not mean that simulacra provide the means of challenging both the notion of the copy and that of the model?”

Again, from the paper (emphasis mine):

In the counterfeiting industry, most of the counterfeits are manufactured or built without paying attention to the microscopic details of the object. Even if microscopic details are observed, manufacturing objects at a micron or nano-level precision is both hard and expensive. This destroys the economies of scale in which the counterfeiting industry thrives. Hence we use microscopic images to analyze the physical objects.

Feature extraction. Once the image is captured using the microscope imaging hardware, it is split into chunks of smaller images for processing. Splitting an image into smaller chunks is important for multiple reasons: i) the field of view of the microscopic imaging hardware is large (compared to off-the-shelf digital microscopes) around 12mm x 10mm. We need to look at microscopic variations at the 5µm-10µm range, so the images have to be split such that we are able to process these minor variations. ii) splitting the image into smaller chunks helps in preserving the minor variations in the visual vocabulary. Since during the quantization process minor variations of the image tend to get lost.  
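As a rough illustration of what “splitting into chunks” and a quantized “visual vocabulary” look like in practice, here is a minimal bag-of-visual-words sketch; the chunk size, vocabulary size and use of k-means are assumptions on my part, not the paper’s actual parameters.

```python
# A minimal bag-of-visual-words sketch: split a microscope image into small
# chunks, quantize each chunk against a learned "visual vocabulary", and
# describe the image as a histogram of visual words. Chunk size, vocabulary
# size and file paths are illustrative assumptions.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

PATCH = 32        # assumed chunk size in pixels
VOCAB_SIZE = 100  # assumed number of "visual words"

def split_into_chunks(path, patch=PATCH):
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
    h, w = img.shape
    return np.array([img[y:y + patch, x:x + patch].ravel()
                     for y in range(0, h - patch + 1, patch)
                     for x in range(0, w - patch + 1, patch)])

# 1. Learn the vocabulary from chunks of training images (a single
#    hypothetical image stands in for the training set here).
training_chunks = split_into_chunks("microscope/genuine/sample.png")
vocabulary = KMeans(n_clusters=VOCAB_SIZE, n_init=10,
                    random_state=0).fit(training_chunks)

# 2. Quantize a new image: each chunk is replaced by its nearest visual
#    word, and the image becomes a histogram over the vocabulary. The
#    smaller the chunks, the more of the "minor variations" survive this step.
def quantize(path):
    words = vocabulary.predict(split_into_chunks(path))
    hist, _ = np.histogram(words, bins=np.arange(VOCAB_SIZE + 1))
    return hist / hist.sum()

print(quantize("microscope/query/unknown.png"))
```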

Based on tests conducted by customers, our system is able to also easily identify “superfake” bags which may tend to use the same material on some regions.

A “superfake” (a fake of such good quality that it can barely be distinguished from the authentic item), and Baudrillard: “a simulacrum is not a copy of the real, but becomes truth in its own right”.


