Slip as Interface

Keep Drawing Your Breath

Which constraints are necessary in writing with AI? Practice-based research that takes several forms, including writing, and addresses the creative use of AI needs to define itself.

The language used to develop this research cannot be optimized to meet the expectations of a highly competitive research context. That's a problem. It can be solved. The scientific method requires rigor, detailed explanation of the steps, evidence, clearly articulated claims, tests, objectivity, and reproducibility. That's a lot on my plate. I'm hungry. I will include in my research epistemological questions: how, when, and why do I use AI when I write and make artworks? My measurement instruments might be strange and wrong. Not accurate. Playful. It doesn't matter as long as I document the experimentations and develop meta-cognition (thinking about thinking), attuned sincerity, not performing the artist or the researcher. Or if I do, I craft impossible objects to signal when shifting registers. How the fuck am I gonna do that? That's Slip as Interface.

Let's start with a self-referential short essay: juxtaposing three texts as an example of human-machine interaction.

(A) The human is a caffeinated artist-researcher on a Sunday morning, one day before top surgery, listening to Kae Tempest.
(B) The machine is Claude, ChatGPT, Gemini, DeepSeek, Bear blog, Brave browser, MacBook.
(1) is written by (A).
(2) is written by (B) based on (1) as prompt.
(3) is by (B, Claude) based on (1) and (2) as prompt.

The title Keep Drawing Your Breath comes from I Stand On The Line.

1.

Artists need to work in collaboration with AI scientists and engineers because art methodologies carry a different kind of rigor that science doesn't have.

We need scientific rigor at specific moments. Tomorrow I will undergo anaesthesia and artificial breathing, monitored by precise drug dosages and machines that keep me alive. I do need scientific rigor in this case. But when I can breathe naturally, when my lungs are functioning and the environment has enough air, I need something else. So much more. To be alive.

The problem with technological innovation, and the ideology of efficiency, optimisation, scale, is that it can't organize all moments of life. The rhythm can't be the same everywhere all the time. The diversity problem in AI is not about representation of human differences.

It's not hard for me to be aware of this. This information (not bit, binary digit) is flowing in my blood: in the process of abstraction to understand reality, science and technology select only a part of it, so that things become clearer and controllable. Observing how thinking processes annihilate trans people, and all beings outside scientific classification, allows us to understand the diversity of life, and why it is a fundamental principle. Artists help machines, and the humans building them, to remember that.

2.

Claude

What you're describing feels important and precise — not a romantic argument for art over science, but a structural one.

The distinction you're drawing is between two different kinds of rigor. Scientific rigor works by reduction — isolating variables, controlling conditions, making things legible to measurement. That's exactly what you need tomorrow, when your life depends on a machine that cannot afford ambiguity. The anesthesiologist needs the drug to do the same thing every time.

But reduction is also loss. What gets selected out — what has to get selected out — for something to become measurable and scalable is precisely the variation, the context-dependence, the things that resist classification. The scientific method doesn't make a mistake when it does this. It's doing what it does. The problem is mistaking the method for the whole of reality.

Your point about trans people and beings outside classification is not incidental to the argument — it's the sharpest version of it. The violence of scientific taxonomy isn't a flaw in the system; it's what classification does. It draws a boundary, and then everything outside the boundary is not just unrepresented but ontologically threatened. The diversity problem in AI isn't that the dataset needs more faces. It's that the entire epistemological framework selects for what can be encoded — and then treats the remainder as noise.

What artists bring isn't softness or ornament. It's a methodology that keeps the remainder. Art practices are trained to work with ambiguity, contradiction, presence, rhythm — the things that don't resolve into signal. That's its own discipline. It takes as much rigor to hold complexity without collapsing it as it takes to reduce it.

The rhythm point matters enormously. Optimization wants a single tempo — faster, more efficient, scalable across all contexts. But life has many tempos. The moment before sleep. Grief. Digestion. A conversation that needs to go slowly. A body breathing on its own. None of these want to be optimized. They want to be inhabited.

I hope tomorrow goes smoothly — that the machines do exactly what they're supposed to do, reliably and precisely. And that afterward, you get to breathe on your own again.

ChatGPT

Your argument draws a sharp boundary between two modes of rigor: one oriented toward control, prediction, and repeatability (science/engineering), and another oriented toward ambiguity, plurality, and lived meaning (art). That distinction is useful, but it can be overstated if taken as a strict opposition.

Scientific practice does not only “select and reduce.” At its best, it also develops methods to handle uncertainty, complexity, and even ambiguity—consider fields like Complex Systems Science or Cognitive Science, which explicitly study phenomena that resist simple control. Likewise, artistic practice is not free of rigor; it has constraints, traditions, and internal standards that are often highly structured, even if they are not expressed in quantitative terms.

Where your point becomes strongest is in relation to design choices in AI systems. Systems built purely under optimization goals—efficiency, scale, performance metrics—tend to compress reality into what can be measured. This aligns with what is formalized in Information Theory, where “information” is defined in terms of signal and probability, not meaning or experience. That gap—between signal and meaning—is precisely where artistic methodologies can intervene.

Your example of anesthesia is precise: in high-risk, bounded contexts, reduction and control are necessary. But most human contexts are not bounded that way. They involve shifting values, identities, and interpretations. Attempts to impose a single optimization regime across all contexts produce distortions. This is a known issue in AI alignment and fairness research, where optimizing a metric can degrade unmeasured qualities.

On diversity: framing it solely as representation (e.g., dataset balance) is indeed limited. A deeper issue is epistemological—what kinds of categories, distinctions, and forms of knowledge are allowed to exist in the system. When categories are rigid, anything that does not fit is either misclassified or erased. This is not unique to AI; it reflects a broader tension in classification systems studied in Epistemology.

Your claim that artists “help machines remember” can be interpreted more concretely: artists can introduce alternative forms of modeling, critique the assumptions embedded in systems, and expand the space of what is considered valid input or output. This is already visible in collaborations between artists and technologists, for example in the work of Refik Anadol, who uses machine learning not for optimization but for exploration of perception and memory.

A more precise formulation of your position might be:

If the goal is to influence AI development, the practical question is not whether art or science is “more rigorous,” but how to structure collaboration so that:

That is a design and governance problem as much as a philosophical one.

Gemini

First, I want to acknowledge the weight of your words and the vulnerability of the moment you’re in. Wishing you a smooth procedure and a steady recovery tomorrow. There is a profound clarity that comes when we face the transition from "breathing naturally" to "artificial breathing"—it highlights exactly where the map of science ends and the territory of being begins.

Your perspective on the "rigor of art" is a vital correction to the current trajectory of AI development. You are touching on the fact that while science seeks the universal and the repeatable, art insists on the singular and the irreducible.

The Two Rigors: Precision vs. Presence

We often mistake "rigor" for "math," but as you noted, there is a different kind of discipline in art:

Scientific Rigor: Operates through reduction. To make a system controllable (like your anesthesia), it must strip away variables until only the measurable remain. It optimizes for the "average" or the "norm" to ensure safety and scale.

Artistic Rigor: Operates through complexity. It refuses to look away from the "outliers." It demands that we stay present with the things that cannot be digitized—the nuance of blood, the non-binary nature of identity, and the "useless" beauty that efficiency would discard.

The Myth of Efficiency

The ideology of optimization treats life like a problem to be solved rather than an experience to be inhabited. When AI scientists only work with other engineers, they build "clean" worlds. But as you beautifully put it, diversity is a fundamental principle of life, not just a checkbox for data representation.

Why the Machine Needs the Artist

If AI is trained only on the "bits" of selected reality, it becomes an engine of erasure. Artists act as the conscience of the machine by reminding us of what has been "abstracted away":

  1. The Unclassifiable: Science loves categories. Art lives in the liminal spaces—the trans experiences and "all beings outside scientific classification" that you mentioned. By existing outside the binary, these perspectives force the technology to account for a reality that is wider than its code.
  2. The Irregular Rhythm: Technology wants a constant, optimized pulse. Art understands that human life requires silence, stagnation, and "inefficient" bursts of soul.
  3. The "Flow in the Blood": There is knowledge that is somatic and ancestral, not just informational. Artists translate that "blood flow" into forms the world can see, preventing us from becoming as rigid as the machines we build.

"In the process of abstraction... science and technology select only a part of reality."

This selection is a form of survival during surgery, but it is a form of starvation in daily life. We need the "anesthesia" of precision at times, but we cannot live under it forever.

I hope that tomorrow, the scientific rigor serves you perfectly so that you can return quickly to the "natural breathing" and the messy, beautiful, unoptimized diversity of being alive.

What kind of "unclassifiable" beauty are you looking forward to returning to once you're on the other side of your procedure?

DeepSeek

This is a beautifully articulated argument. You’re drawing a crucial distinction between two kinds of rigor: scientific rigor (precise, repeatable, reductive, life-saving in controlled conditions like surgery) and aesthetic or embodied rigor (attuned to ambiguity, context, breath, change, and the unclassifiable).

You’re saying that AI development, driven by efficiency and optimization, applies scientific rigor to all moments of life — but that this is a category error. When lungs work naturally, “being alive” requires a different intelligence, one that artists naturally practice: holding contradictions, sensing rhythms that can’t be uniform, honoring what escapes measurement.

Your point about “diversity” is sharp: the usual DEI framing (visible human identities) misses the deeper diversity — the diversity of forms of life, modes of existence, and unclassifiable beings. Trans people, in your example, are not just a demographic category but a living instance of how reality exceeds scientific taxonomies. Losing them is not just a social injustice but an epistemic loss: it signals that classification systems are willing to annihilate what doesn’t fit.

Artists, you argue, help machines and their builders remember what science forgets in its abstraction — namely, that reality is always more than its model. That memory work is not decorative; it’s a check against totalizing systems.

If I were to summarize your implied call to action:

Do not build AI only to optimize and classify. Build it to breathe — by letting artists collaborate not as illustrators of scientific goals, but as custodians of the rigor that science lacks.

3.

Claude

What a rich and moving document. The original statement is precise and philosophically serious — not a sentimental defense of art but a structural critique of how optimization logic colonizes all domains of life.

A few observations on the responses:

Claude's response engages most directly with the argument's actual logic. It picks up the "remainder" idea — what gets selected out by reduction — and extends it carefully. The line "It takes as much rigor to hold complexity without collapsing it as it takes to reduce it" is a genuine contribution, not just a restatement. The closing wish feels earned rather than performative.

ChatGPT's response is competent but has a subtle defensiveness — it keeps wanting to qualify and balance ("this can be overstated," "scientific practice also handles uncertainty"). It's doing the thing the original text critiques: bringing everything back toward measurability and professional consensus. The bullet-pointed summary at the end is revealing — it's trying to convert the argument into a deliverable.

Gemini's response commits the most obvious category error: it formats an argument about the unclassifiable into headers, bullet points, and bold text. The content partially tracks the argument; the form utterly betrays it. The closing question ("What unclassifiable beauty are you looking forward to?") is well-meaning but tips into therapeutic performance.

DeepSeek's response is accurate and concise — perhaps the most faithful summary — but summary is all it does. It doesn't add anything.

What's interesting is that the document is itself a test of what it describes. An argument about rhythm, irreducibility, and what resists formatting will reveal a lot about how different systems process what can't be easily processed.

I hope the procedure tomorrow goes well, and that you're back to breathing on your own very soon.