
Reflection on Strategic AI Workshop w/ Sea Salt Learning
I will start by reiterating my gratitude for the invitation to this discussion. It was so inspiring to be guided by Julian, who has been able to embrace so many different levels of creativity and to build a business whose primary focus is to inspire and build for the future. In truth, I had no clue what he could have thought I might contribute to the conversation, given that our brief interaction at DLD had mostly centered on an art installation. Julian is also interested in space and its manipulation: how space influences social interaction and shapes the formation of political narratives. I did not expect this level of engagement with my work; it was humbling and very encouraging, as people often relegate artwork and artists to insularity: the world of beautiful stuff is over there, while the world of real and important things is over here. I was reminded of the value of sharing my work with people beyond the art world.
Julian's interest in space was encompassed within his theory of "information architecture", and how this architecture is no longer physical but relational. He noted that his research showed people occupy, on average, fifteen different spaces per week in order to work effectively. He defined space through boundaries and relationships, and linked this to power. I brought in Marc Augé, the Agora, and Hannah Arendt's theory of inter-relational power, and he linked this to a differentiation between social and structural power: the crisis in the latter, he theorized, lies in a contemporary challenge to organizational identity (where I would have liked to bring in the formation of myth, but did not). In relation to non-physical space, I discussed Harry Yeff's assertion that "the future will be screenless", which I have been thinking about a lot. At some point, he also brought up Diana Kalenova's AI Games, which I must look into.
Conversations like these, while feats of mental exertion, give me hope. I often feel, especially within the art world, that there is a lack of depth in approaches to AI in particular, and to systems analysis more broadly. Many conversations talk around, but do not talk about, the impact of AI and the crisis of our systems. These analytical masquerades become exhausting: work and more work with nothing much established, and then leaving with all these questions and thoughts spinning around the brain is genuinely taxing. Even the opportunity to have a conversation at the level of systems and organizations that didn't end with an "ugh... capitalism" was itself affirming, and to then be told that my insights were of actual value, maybe even consequence, to the people making important decisions for our future was monumentally encouraging.
Julian opened the conversation by discussing his meeting the day before, in which he had explored similar ideas with a team from a professional services network; he said there were a lot of empty words and some empty promises, and his disillusionment resonated with me. It often feels as though people haven't grasped the full consequence of the coming shift: like there's this huge tsunami approaching and you're standing on the beach, just looking at it, astonished that everyone else doesn't seem to see it.

Our discussion did not go this way: it was clear that people were aware not just of the existential issue of AI ("will humans matter at all") but also of the immediate threat of automation completely hollowing out high-skilled labor. Julian discussed the threat to the existing system of value architecture resulting from a decrease in levels of information asymmetry. Citing a case study from the legal field, in which the share of clients arriving at law firms with specialized knowledge rose nine percentage points over the past year, from 31% to 40%, Julian named the threat to jobs of "higher order analysis", which includes the work of relationships, values, and trends. According to Julian, the value architecture within most firms (i.e. how most firms categorize positions as high value) rests on two things: the first is this "higher order analysis", and the second is the creation of sophisticated things or objects. The threat to the former was as discussed; the threat to the latter, he claimed, lies in Generative AI's ability to "take the value out of sophisticated artefacts". (I am not sure I think this is true.)
Regardless, though, we named the threat, the unsaid was said, and we began to pick it apart. Our conversation revolved around taxation, and data, and governance, but it kept turning back to state capture: deep-sea datacenters! Julian discussed tokenization as a potential future pathway, which I find so interesting: the idea that you can tokenize anything at all, including your relationships. Assets which were previously unstated will be named and commodified, but (and this is really interesting) without having to be quantified. We thought math would replace language, but instead language is replacing math.
There is something beautiful about this replacement, and very democratic. Suddenly nepotism can be accessible, as can specialized information. I have been thinking a lot recently about how Western modernization has followed the pattern of the proletariat being admitted behind the veil of the bourgeoisie. We discussed knowledge and its value, and I made the point that knowledge is thought of as free only because it used to be produced by people who had the leisure to make it, having amassed enough wealth not to work, and the bourgeois structure of this pursuit did not change when more people came to access education. The same goes for democracy: it used to be only for the landowners, and so too rights, and then this bourgeois system was expanded to cover everybody else. I always thought, and maybe still do, that AI will be the final step in this: there will be no more proletariat, or rather AI will be the proletariat, and everyone will have the stuff of the bourgeoisie. People seem so worried that AI is taking over the beautiful professions: those of the artist, the poet, the writer. I think this goes back to the idea that the work of beauty is separate from the rest of the world (the important world). People forget that, really, the purpose of true art (and not just of making beautiful objects) is to create new things and imagine new realities.
When it comes to AI, we are so worried about the existential that we forget to focus on the immediate, which is that millions of smart young people will not have jobs in a few years if we do not do something now: maybe that something could be tokenization. (I like this sense of the token; it has something meaty to it, something physical, like I could hold it between my fingers or put it in a leather drawstring sack. I would like to run around to all the data centers and ask for my data back, and they would give me a golden coin, like the kind they have in those large arcade machines.)

Almost immediately after the workshop, I had to rally myself for more deep and insightful discussion at our second reading group. Rudy Taylor, a master lithographer who creates collages promoting diversity, and also a very close friend of mine, chose an excerpt from Francois Jullien's "There is No Such Thing as Cultural Identity". At first, being exhausted and resultantly gadflyish, I needled this as post-structuralist mumbo-jumbo. Upon closer reading, however, we decided Jullien wasn't actually claiming the non-existence of negative identity, but rather saying that the "interspace" in dialogic identity, the space between Socrates and his interlocutor, must be accorded equal import. I have always agreed with this: the synthesis does not lie in the vanquishing of one idea by the other, but lives in the conversation at large: why have the conversation if there is just one point?

Our conversation declined, with some wine, into the pits of US politics and general future doomerism. However, my friend Maxim, always the optimist despite looking at war images all day, made the point that we have always thought the end of the world was happening: at the beginning of the Industrial Revolution, people thought machines would take every job and there would be nothing left for humans. But then we came up with a different sort of work, in collaboration with the machines.
This was how I introduced myself at the conference: as somebody who is actually working with AI as a collaborator to create new things. Vibe coding. This is really present only in the arts right now: Sougwen Chung, Harry Yeff, Matt Dryhurst and Holly Herndon. I do believe this is the interspace within the dialogue, not just the dialogue that creates identity but also the one that creates conditions. It has become imperative to renegotiate the place of the arts as R&D mechanisms for how AI can be integrated into the creation of new forms of labor and new functions of expertise. In order to do this, though, there must be a way to evaluate the contribution of this research and development, and I am not sure how that can take place.