Subtractive vs. Additive Ways of Making Art and Sense
Or why causality is to storytelling what prediction is to painting
1. Subtraction
Stories, Sculpting, Causality
I had a friend in college who loved to start drama. Years later, I learned he got the most fitting job I could think of. He is one of those people who splices together reality TV footage. This is an illuminating calling. The input is a lint roll of multi-camera footage of strangers interacting. And the desired output is a splicing so incendiary it sets off a thousand debates on a subreddit. The fundamental challenge is one of subtraction. How do you subtract the boring bits of life to create an interesting episode?
And this drama-splicing offers one view of storytelling, which is that storytelling is often subtractive. People say that life is not like the movies. This is true, for two reasons. First, movies have too many extraordinary situations. Second, they have too few ordinary situations. In a script, all the mundane scenes that don’t ‘move the story forward’ are stripped away.
Sculpting is another subtractive activity. In classical sculpting, the artist starts with some material and then removes parts of it. The removal is the art. Much as a movie script differs from the totality of everyday life, a sculpture differs from its source material by what is subtracted.
Sculpting is not so far from violence, and storytelling is not so far from propaganda. Both highlight how subtraction can be nefarious. The Aurignacians are considered the first sculpture-makers in Europe. They are also known for their carved tools and weapons. Indeed, the line between a sculpture and a weapon is blurry. Who is to say a knife is not also a sculpture of sorts?
Sometimes excessive story subtraction (removal of context) is pointy enough to become propaganda. Scott Alexander argues that “the media very rarely lies” because misinformation often comes in the form of subtler distortions like removed context, i.e. overzealous subtraction. For example, Kari Lake said 42.5% of ballots in an audit were found to be illegal. But her campaign omitted that by “illegal” they meant the ballots were printed at slightly the wrong size (19-inch ballots on 20-inch paper).
Causal inference in the social sciences also sits on this subtractive axis of meaning-making. History happens. Then we try to figure out why. This is typically done by focusing on the effect of one particular vector in a dense field of vectors. Or as Andrew Gelman put it, there are ‘no true zeros’ in the social sciences: many things exert some causal effect. Like sculpting, then, causal inference involves taking some material (in this case, history and its messy web of causal vectors) and trimming this web to create a neat story about a particular causal vector.
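To make the trimming concrete, here is a minimal sketch in Python. Everything in it (variable names, effect sizes, sample size) is invented for illustration: we simulate an outcome with several non-zero causes, then the headline result reports only the focal coefficient.

```python
# A sketch of the 'trimming' move in causal inference: simulate a world
# where many vectors matter, then report only the focal one.
# All names and effect sizes here are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 500
focal = rng.normal(size=n)        # the cause we want a story about
others = rng.normal(size=(n, 5))  # five other non-zero causes

# No true zeros: every column contributes something to the outcome
outcome = (2.0 * focal
           + others @ np.array([0.5, -0.3, 0.8, 0.1, -0.6])
           + rng.normal(size=n))

# Regress on everything, but the narrative keeps only one number
X = np.column_stack([np.ones(n), focal, others])
coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"effect of focal cause: {coefs[1]:.2f}")  # the rest is trimmed away
```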
2. Addition
Painting, Prediction, Neural Networks
Then there are more additive ways of making meaning. Take painting. You start with a blank canvas and then add to it.
If causality is subtractive, then prediction is additive. Subtractive activities involve starting with some material and then carving away. Causal inference fits this pattern since history already happened and the goal is to explain it by carving away non-focal causal vectors. But because the future has not yet happened, you can’t subtract anything from it to create a prediction. Instead, predictions need to be built up. I remember building forecasting models and doing this quite explicitly. In a common type of forecasting (SARIMAX), you incrementally build up the parts of the forecast: the seasonal component, the autoregressive terms that relate past values to future ones, and so on.
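As a rough sketch of what that incremental building can look like (using statsmodels’ SARIMAX; the toy series and the particular orders here are my own assumptions, not any real model I built):

```python
# A minimal sketch of additive forecast-building with statsmodels' SARIMAX.
# The synthetic series and the chosen orders are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
t = np.arange(120)
# Build a toy monthly series: trend + yearly seasonality + noise
y = pd.Series(0.1 * t + 5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 120))

# Each piece of the specification adds a component to the model:
# order=(p, d, q) handles autoregression, differencing, and moving average;
# seasonal_order layers the seasonal structure on top.
model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12))
fit = model.fit(disp=False)
print(fit.forecast(steps=12))  # the forecast is built up from the parts
```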
There is probably some connection between additive activities, associational thinking, and prediction, which is where large language models (LLMs) come in. Networks are a useful way to encode associations, so it is not surprising that neural networks are good at associative and additive activities. Consider LLMs. They have billions of parameters, and while they may not be able to ‘reason’ (yet), they are good at association-making: trained on next-word prediction, they can autocomplete your sentence based on what sounds natural. Both association and prediction are additive activities.
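A toy bigram autocompleter, vastly simpler than an LLM and purely illustrative, shows the additive shape of next-word prediction: the output grows one associated word at a time.

```python
# A toy illustration (not an LLM) of next-word prediction as an additive act:
# a bigram model that keeps appending the most likely next word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow each word
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word, steps=4):
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # greedy next-word choice
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))  # "the cat sat on the"
```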
3. Conclusion
Like many dichotomies, this is a simplification of the world. Maybe it is best thought of as a subtractive-additive spectrum. But there are still some overarching characteristics of things on either end of that spectrum.
I like subtractive things because they’re effortful. It is harder to declutter than to add clutter. It is harder to write just enough than to write too much. So when you see a tight story or a concise but evocative haiku, these are signs that effortful subtraction took place. This is also why human eyes are drawn to features like right angles: they are a ‘nonaccidental’ property, a possible sign that some force was exerted.
Additive activities can be more fun, though. I think they come more naturally. We learn to count by addition and not subtraction. And additive activities like improv or brainstorming do have a very ‘natural’ feel. On a more macro level, even institutions find addition easier: a recent paper found that policies are rarely repealed once passed.
Robert Pirsig said that ‘the pencil is mightier than the pen.’ But as I remember from hand-written tests, people always forget their erasers. And given what David Foster Wallace called the “tsunami of available fact, context, and perspective that constitutes Total Noise”, it would be responsible to favor subtractive things. Clear stories from sprawling data. More concise academic papers. All of this makes a lot of sense to me. But then I think of the enduring success of Minecraft. Additive activities are fun.
I find that a very useful way of going about making art is to use a constraint-based approach. With no constraints, there are infinitely many ways to write a story, compose a song, or paint a picture. So you start imposing constraints like "internal consistency" or "begin on a perfect fifth, unison, or octave" (counterpoint). If there isn't one obvious solution to go with, you don't have enough constraints; if you have no solutions, start removing the least valuable constraints until you can make it work. Intuitively, you might expect this to produce works that all look very similar, but that only happens if you use similar constraints. For example, I get the impression that a lot of ABBA's music was composed this way, yet it sounds nothing like classical music because they changed some of the constraints around. The result is work that is technically well executed, even "engineered" you might say, but not necessarily in a sterile way.
Classic AI often worked using a generate-and-test process. That is, you generate a huge number of possibilities, test them, and return the ones that pass the test.
The generative phase could be thought of as additive (you are adding more possibilities to consider) and the test phase as subtractive (you remove the ones you don't like).
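A minimal sketch of that loop in Python (the task, finding numbers whose squares end in 6, is an arbitrary stand-in):

```python
# A minimal generate-and-test sketch: generate candidates (additive),
# then filter them with a test (subtractive). The task is a stand-in.
import random

def generate(n=1000):
    # Sloppy, additive phase: propose lots of candidates cheaply
    return [random.randint(0, 99) for _ in range(n)]

def test(x):
    # Precise, subtractive phase: keep only candidates that pass
    return (x * x) % 10 == 6

candidates = generate()
survivors = [x for x in candidates if test(x)]
print(survivors[:10])
```

Note that generate() can afford to propose junk because test() is exact, which anticipates the point about sloppy generation and precise testing below.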
Generate-and-test is pretty general: you can also think of it as how evolution works, with generation (reproduction) and testing (selection) interleaved. Or consider game AI that generates many possible moves and filters them down to choose the best one.
Commercial AI image generators work this way at a smaller scale: the user generates some images and tosses the ones that they don't like.
A reasonable way to think about the future is to imagine possible scenarios (this is generative, additive) and throw out the unlikely ones (this is subtractive).
Especially when computerized, the generative phase can be sloppy (you can generate a lot of bad possibilities) as long as the test phase is reasonably precise (you need to make sure you don't throw out the good stuff).