How to draw a cover: An essay against AI art
Written by a software engineer who accidentally learned to draw
Hello everyone!
Last newsletter, I talked about following my zoomies, aaaaaand in January and February I faced the consequences of that lol: I needed to focus and finish Pouch Issue 2!! So that's what I've been up to for the last two months, and I'm happy to say that Issue 2 is now done! 😮💨 I'm starting the print process and if all goes well, I'll start fulfilling preorders in a couple of weeks. More soon!
Now I have time to write my newsletter again, and I wanted to return with a topic that was light and airy: My process for drawing the cover of Pouch Issue 2!
…but gnawing at me has been a much more complicated topic: Generative AI, and where I stand on it.
Well! So much for light and airy 😂
I started my sabbatical over a year and a half ago, with the sole intention to build software. I decided to take a drawing class after talking with an illustrator friend who encouraged me to do so, and after that, I ended up spending a large portion of my sabbatical studying drawing and design. It is by sheer coincidence that I did this during the current AI wave.
When I sat down to explain my process for creating the cover of Pouch Issue 2 — against the backdrop of my own field declaring that this sort of work shall soon be a relic of the past — I could not bring myself to talk about one without discussing the other. This essay was the result.
Like always, I hope you enjoy today's newsletter 🩵
♡ vrk
💌 PS: Today’s post is a long one!! If you’re reading via email, you may need to click on the title to open the whole thing in a new tab.
NOTE FOR TECHNOLOGISTS: I'm going to say "generative AI" a lot here, but I actually mean "the exploitative use of diffusion models (and similar) for the purposes of creating visual art from scratch." I'm using "generative AI" and "AI art" as shorthand for ease of language. I recognize this may be misleading, and that's not my intention. Despite the shorthand, I do not feel the same way about all applications of deep learning, though my feelings are still complex. See this footnote for more.[1]
I. How to define "soul" in a drawing
When people talk about AI art, the word "soulless" is thrown around a lot. But what does it actually mean for art to have or lack "soul"? As a software engineer, I found this concept quite tricky.
I thought perhaps "soulless" meant a lack of feeling, like you look at an image and feel nothing. But while the majority of AI art I saw felt lifeless, I would occasionally see an AI-generated image that looked quite impressive, and did evoke feeling in me. I had heard "art" defined as something like "created by humans, with human intention," but that definition still confused me. When you type out a prompt, is that not "human intention"? Surely that can't be right. It all seemed to go over my head.
But after studying art for the past year, I… think I'm starting to understand the concept of "soul" in a creation. I don't think it's some terribly esoteric thing. Anyone can draw a picture with soul. Children do it without thinking. You can do it right now.
Here are some excerpts from my doodle diary, where I challenge myself to draw a scene from my day in 30-90 seconds.
An example of me meeting friends at a cafe:
Another, about a new leg exercise my PT taught me:
Another, working on the cover of Pouch Issue 2:
I believe each of these drawings has "soul" – or if the spiritual connotation of that word bothers you, it's also what cartoonist Lynda Barry calls an "aliveness." In her book Syllabus[2], Lynda Barry describes a drawing exercise she gives to her class where she asks them to draw Batman in under 2 minutes without looking at any pictures:
There is something beautiful in these lines made by people who stopped drawing a long time ago. … Sometimes we say this kind of picture looks like a kid drew it, and people are dismayed by this and even ashamed enough to destroy the picture – get rid of it – immediately.
But what if the way kids draw – that kind of line that we call childish – what if that is what a line looks like when someone is having an experience by hand? A live wire! There is an aliveness in these drawings that can't be faked, and when I look at them, that aliveness seems to come into me. I'm glad to see and feel them.
This aliveness – my aliveness – is something that AI cannot generate. I'm not saying this to "insult" AI; another human cannot copy my aliveness. It is emergent in my drawings, and it is mine.
I could have used AI to generate an image for each day of my journal, instead of drawing an image myself:
But, can you see it? Can you see how the drawing on the left gives you a peek into my soul, and the drawing on the right does not? It has nothing to do with art style, or with how many fingers and toes the image has. Look, my own drawings are rife with mistakes. The AI might draw the scene more "correctly," but it loses something precious: a voice that only I can imbue.
I don't want you to misunderstand me: I do not think drawings like these are the ONLY way to channel "aliveness"; I do not think "aliveness" is the ONLY quality of expressive art (it just scratches the surface); and I do not think analog mediums are the ONLY mediums through which a human can imbue their soul.
For instance, a Korean comic artist named Chon Kye-Young, author of the webtoon Love Alarm (좋아하면 울리는), created a significant portion of her comic via voice command into 3D graphics software after a surgery to remove a tumor in her hand left her struggling to even hold a toothbrush.
I believe that Chon Kye-Young is able to express her soul through these drawings – despite them being rendered by technology, and despite her hands never touching a pen – because she is still making the vast majority of the decisions in the artwork. The computer is her pen, which she controls through her voice; her computer is not drawing for her.
I offer, then, a crude[3] heuristic: I believe that the decisions you make in your drawing – conscious and unconscious – are the vehicle by which your soul, or aliveness, or intention, or WHATEVER, is infused in your creation. And I believe you need to make the vast majority of the decisions that compose a piece of art for you to claim it as your own.
Imagine you had a boss who asked you, "Can you draw a picture of me?" and you do so, and your boss takes your drawing and is like, "Wow, I can't believe I drew this!" Would you agree with him? Even if he gave you a more detailed description of how he wanted it to look: "uhh make me look cool and handsome, and draw it like Dragon Ball actually, I wanna look like Goku" – the problem isn't that he's drawing by voice command (Chon Kye-Young draws by voice command) – it's that he's hardly making any of the decisions that comprise the final drawing.
AI image generators make almost all the decisions of the final image for you. Not only that, the value proposition of AI-generated art is this lopsided ratio: "Look, with a few mere words, a glorious picture emerges! Look at how much time and money you've saved!" Sure, but I believe you "created" that image as much as you "created" an image returned to you by a well-written Google Image Search query.
When I say that AI art is "soulless" and the art that you generate is not "yours" – this is what I mean.
II. How artists learn to draw
Is AI art plagiarism?
Most AI models are trained using the work of human artists without their consent. OpenAI and others do this because their models won't be good otherwise. At the heart of the legal debate is the question of "fair use." But regardless of where the legal rulings land, this is exploitation and plagiarism in my view. Artists did not upload their work to the internet knowing that this was a possibility, and now that it is happening, virtually NONE of these artists want their work to be used in this way for very obvious reasons (i.e. their work being stolen with zero attribution, compensation, or consent, for the benefit of everyone but them).
Still, this isn't what I want to talk about right now.
Instead I want to talk about how artists learn to draw.
I had never taken an art class before my sabbatical. I drew a bit when I was younger, but I would mostly try to draw exact copies of characters from comics, cartoons, and pop culture that I enjoyed. I used to think if I wanted to get good at drawing, this is what I would need to do:
Look at lots of drawings by artists I really like.
Practice drawing them myself.
Then after a while, start creating my own unique drawings.
That didn't seem ENTIRELY different from what AI training does. (Though… I will point out that the details of machine learning vs human learning ARE entirely different – I know of exactly 0 humans who learn to draw by removing noise from an image, for example.)
But I was surprised to learn that my assumption of how to "get good at drawing" was completely wrong. Through the School of Visual Arts’ continuing ed program, I've now taken 3 beginner drawing classes. These classes are meant purely to improve your mechanical drawing skills; they aren't meant to develop your artistry. So I'm not talking about how to get good at drawing in the artistic sense.
In all three of my classes, my SVA art professors banned us from looking at pictures when we drew.
Figure drawing class
Two of the classes I took at SVA were figure drawing classes. In figure drawing class, you learn to draw the human body.
In the middle of the room is a live, nude model, and students all stand in a big circle surrounding the model. The model holds different poses, and students try to draw what they see, while the teacher walks around the room and gives corrections.
If you're new to figure drawing, this might sound needlessly fussy: "Why a live model? Sure, that's how they did things in the 1500s, but we have cameras now! Why not just take a picture of a model and draw from that?"
The problem lies in decision-making: Who decides where the lines on the page go? If you take a photo of a model, the photo has decided the lines. YOU, the artist, are supposed to decide. Your eyes are not cameras; the way you see the world through your eyes, in 3D space, is not a series of still snapshots. You are learning to draw what your eyes see, not what a camera sees.
If we drew from a photo instead, that'd be considered "copying."
Back to plagiarism
So, to the idea that AI training is kinda the same thing that humans do when they learn to draw: Now that I know more about how drawing works, I don’t think the two are anything alike.
Like, off the bat: If you're starting from flat images and pixels, you're not learning to draw the way that artists learn to draw original work. You might protest: “But vrk, if we can’t use pixels, how can we teach a computer to draw?” Yeah, exactly. You can teach a computer to imitate the other images you’ve given it, but that’s a whole lot closer to plagiarism than it is to creating something new.
It's not that “look at an image and try to copy it” isn't done by human artists – it is a form of study called “drawing from reference” or “copying.” And while maybe it’s obvious that you shouldn’t copy work as a study and then publish the work as your own, all 3 of my SVA art professors discouraged this even just for practice, because… if we did this as beginners, we would copy too much. We wouldn't learn how to make decisions for ourselves.
So when artists say that AI-generated art is just copying other artists, it is because the process of how AI models are trained – well, it doesn’t really resemble the human process of learning to draw at all – but it most resembles a practice called “copying,” and the output most resembles what you’d expect to get when you’ve copied too much.
III. Walkable cities
Some fans of AI art say things like, "What's the harm in AI art generators? If you don't like it, don't use it; pencils are still right there. You can still walk even though cars exist."
First – "If you don't like it, don't use it" – OK, if we're going to follow this rule, then we agree that people creating AI models shouldn't use others’ art in their training data without permission first. If you're saying "What's the big deal? You have a choice in this" THEN GIVE ARTISTS THE CHOICE.
Second – Let's talk about cars. "If you don't like cars, you can just walk" is not actually a choice most people have in the US, and the lack of walkable cities is a major problem. How did we get here? I don't want to add another big complicated topic to this already big complicated topic, but to oversimplify: When cars were a new and shiny technology, America over-invested in infrastructure for cars and under-invested in infrastructure for public transportation starting around the 1910s. Then decades later people realized, oh wait, even though cars are awfully convenient, and a transformative technology in many ways, building car-dependence was something we really wish hadn't happened. There are so many benefits of living in a walkable city: better for your physical health, mental health, social life, the environment…
This is what I'm worried about when it comes to AI art: Leaning into "technological progress" does not guarantee that quality of life improves. It's not that "generative AI, the technology" is harmful in a vacuum, just as I don't think cars shouldn't have been invented, but the widespread way that generative AI is being deployed by the tech industry is actively and significantly harming our creative ecosystem. We can already see the effects of this as AI-generated garbage clogs up our content streams. Even the AI companies are feeling it.
To the AI proponents who say, "Well the artists will be fine; AI just gives them more options" as an excuse to do nothing about the problems they're causing: I suggest you listen to some illustrators. Many are losing job opportunities today because companies don't want to pay them for work that they could get done for "free" with generative AI. These companies don't care that the output is worse; they don't care that the models were trained unethically; they just care about saving time and money. Yes, AI could be used in a way to empower artists, but right now it's largely doing the exact opposite.
I imagine what the world might look like if the majority of human-made art is replaced with AI-generated images, and to me that world is cold and depressing.[4] I want to be able to look at images in my day-to-day life – even something as mundane as a pattern printed on my coffee cup – and know it was made by a person who made a million little decisions.
IV. "But can you tell if it's AI?"
I think someone in good faith could read everything I’ve written so far and wonder: "You explained what's unique about human-drawn images, but can you even tell if a drawing is made by AI?"
I don't think this is the right question to ask.
(I mean, the answer is yes, usually we can tell the difference, and even when we can't, I do think it's because the "AI-generated art" plagiarizes work of human artists – but let’s put that aside for the sake of argument.)
Did you watch The Truman Show, or do you know its premise? It's a movie from 1998 about a man named Truman whose whole life is actually a reality show, and he's the only one who doesn't know it. The sun is artificial, the weather is artificial, his neighbors are artificial, his society artificial.
The fact that Truman can't tell his world is a mirage – does that make his world any less fake?
If you lie to my face and I believe you, does the lie somehow become "valid" because I'm fooled by it?
I'm saying that even if I can't tell that the world around me is a mirage, I do not want to live in a mirage.
I'm not trying to have a moral or philosophical debate here. Maybe you don't see the problem. Maybe you would enjoy living in a Truman-like simulation, so long as you're happy. Or, what I'm starting to suspect from many of my tech colleagues: Maybe you'd be happy making other people live in a simulation so long as you remained on the production team. Or, maybe you believe none of those things, and you think all of my arguments so far have been horribly flawed for one reason or another. That's fine. We can just call this a matter of personal preference: It is my personal preference to live in a world surrounded by artwork made by real people with real intention. And as a technologist, due to this personal preference, I'm not interested in creating more and more convincing facsimiles of genuine connection which, underneath the covers, are just math-fueled simulations.
V. Flamethrowers
I'm extremely frustrated by how my colleagues in tech are choosing to react when confronted with the significant harm we are causing to the art community.
Many of you have a "survival of the fittest" mentality that sounds something like this: "WELL if the current ecosystem of artists and illustrators is so fragile that generative AI tools can destroy it, that means it was DESTINED to be destroyed! WE bear no responsibility; it's just that TECHNOLOGY changes and they need to ADAPT."
If a person points a flamethrower at a building and the building burns down, we don't all say "WELL y'know that building needed to adapt to technology." Some things don't have to be fireproof to be worthy of respect and deserving to stand.
A similar refrain: "WELL, like it or not, vrk: AI is the future. It's just the natural progression of technology. If you don't want to get left behind, you just gotta accept it."
No, we build the future we choose to build. There is no technological basis behind the idea that generative AI must be at the center of our products; we cannot magically predict the future; this is not some natural progression of technology. In my opinion, the widespread productization of generative AI is simply the natural progression of Silicon Valley greed. The tech industry is gambling on a future where AI is at the heart of everything we do, a bet that has not yet paid off. Knowing this, “AI is the future; you gotta adapt” does not sound like a fair warning from a neutral good samaritan; it sounds like a fear-mongering marketing tactic from an industry trying to secure a yet-to-be-realized return off of their multi-billion-dollar investment.
Then there are those of you who think that if I would just be objective, I would see how, from a pure technology standpoint, generative AI simply provides a "better" solution to the problem of drawing images than humans producing art by hand.
Let me present to you my objective analysis:
Art particularly excels in areas where technology is laughably bad today.
Tech, whose products feel so bad to use that people proudly declare when they do not use them.
Tech, where we’ve implemented a feature to alert people on how much “screen time” they’ve spent because clearly no one should waste their life using the services we’re providing.
Compare this to art. I've spent hours and hours of my precious sabbatical time simply moving a pencil across paper, and I could not be more pleased. ZERO of these hours feel wasted.
On internet platforms, it's easy for the simplest discussion between strangers to devolve into nastiness; and yet the art community has figured out how to get a room of 20+ strangers to draw someone's naked body for 3+ hours, and not only does everyone feel safe, but you leave feeling enriched.
Through the learning experiences I've had in the last year and a half, provided to me by the art community, I've grown as a person and I've been gifted a feeling of connection to every entity of my world.
What exactly should convince me that typing prompts into an AI model is a "better" solution for producing drawings than the experience I've just had? Because it'd be more efficient? Because it's easier? Because it would allow me to produce more stuff? What if efficiency is not the only metric by which we should measure value?
I believe everyone is creative, and I fully support the mission to empower everyone to create what's in their head. I think that consumer interest in generative AI tools indicates that there's a great yearning for more accessible creative outlets. But by forcing generative AI into all of our tech products, are we really creating onramps into meaningful, lasting creative outlets? Or are we just providing more outlets for brief, fleeting moments of pleasure, outlets that ultimately leave you feeling sad and empty and deeply lonely, like so many other tech products do today?
I believe there are countless other ways to try to meet the human desire to express oneself creatively, that do not center technology, that do not center generative AI, ways that would be far more effective, and do not require the destruction of the creative communities before us.
VI. The future I want to build
I am a software engineer with over 13 years of professional experience. I love technology. I try my best to evaluate new tools with an open mind, and I am not afraid of radical change – the possibility of radical, dramatic change excites me, and like so many others, it's part of what attracts me to the tech field.
I am strongly opposed to AI art, but not because I’m against change; it’s because the change it brings is pointed in the opposite direction from where I want us to go.
I reflect on the experiences I just had on sabbatical, and I marvel at how my life improved in virtually every way: Gathering with others in a classroom each week; listening, reading, learning, practicing; finding community; witnessing my improvement after months of effort. I learned how to look at a drawing properly – that line, that space, that pattern, that specific shade of yellow – and see how each choice is imbued with story. I find it profoundly moving. There's no part of me that reflects on my sabbatical and goes, "Gee, I wish we could have skipped all that by typing words into a textbox instead!" There’s no part of me that looks at my fellow students’ sketches and thinks, “This would be way better if they could only use AI.” Unsurprisingly, I feel the exact opposite: "Wow, what a privilege it is to walk around a room full of art and feel each artist’s story. Wow, I wish I could bring the gift of creative fulfillment to everyone I know. Wow, what would technology feel like if we built it with 1000x more thought, intention, and heart?"
This feels like the radically different future I want to build: A future where every image we see, every story we hear, every tool we use, every object we interact with is made with soul. A future where everyone has the luxury to slow down and look, to think, to imagine, to express themselves, to see a connection between themselves and a stranger. Where we're able to see all the little miracles around us. All these ways in which we're connected. I want to see aliveness everywhere I look. I do not want this aliveness to be an elaborate mirage.
There is a place for technology in this future I imagine, but to me it starts from the creative ecosystem that already exists – the world of artists and illustrators – and integrates within it. I think Procreate does a wonderful job of this. I think there's so much room for more.
I don't think "AI" as a technology, in a vacuum, is incompatible with the future I imagine. If generative AI helps us remove the background from a photo more accurately? And the technology doesn't incur other dire societal consequences[5]? OK sure! Let’s remove backgrounds with precision and ease. I can also imagine experiments in AI for interesting artistic purposes, like Poetry Camera. I believe it's quite possible to integrate this technology into our creative community in ways that feel appropriate and, indeed, enriching.
But right now, exploring AI is my last priority. So long as it continues to have all the issues that it has, it will continue to be of no interest to me. But also, it's just not what's exciting me. I look at the art community that has so generously welcomed me to create with them, and I feel infinitely inspired. What might we learn from their wisdom? So much of tech – generative AI included – has been built under the assumption that efficiency is king, and as a technologist, I feel exhilarated by the opportunities that await us if we reexamine all of tech with a radically different hypothesis: What if efficient is not always best? What if speed sometimes deprives us of experiences rather than saves us time? What if we prioritized just about anything else?
VII. Something you believe in
I do not expect this essay will be persuasive to tech people who are fully bought into AI. That’s fine. I didn't write this to persuade you.
The essay I just wrote is unclear sometimes. My analogies aren't perfect. My words don't always logically cohere. For instance, I'm hard-pressed to find a logical explanation for why moving a pencil across a piece of paper over and over again, to create drawings that are nowhere close to the "best" anyone's ever seen, that are not particularly original, that will not change the world, that in many cases will be seen by no one but myself, is so fulfilling to me. I understand that this might defy logic, but it doesn't mean I'm wrong. My essay is not perfect, but right now these are the best words I can find to explain how I feel.
I expect the imperfections to be a deal-breaker for some folks who want to nitpick my exact words; or who are convinced that actually if I tried using OpenAI's latest reasoning model, I'd find it does a great job of blah blah blah…
If you've read this essay and you still disagree with me, I understand. I genuinely appreciate you reading all the same, and I respect your opinion. I don't want to argue[6] and I’m not trying to get in your way; go forth and build your dream, and I will build mine.
I also have a lot of empathy for people who use AI image generators because they feel they now have access to a creative outlet that was previously inaccessible to them. It's similar to how I feel about people who buy fashion designers' stolen designs from Shein because the clothes are affordable and it makes them feel pretty. I strongly prefer you didn't do these things, but my criticism is largely directed at companies and industries rather than individual consumers.
So why did I write this?
I wrote this essay to share my experiences and express my point of view on a complicated topic as best as I can.
I wrote this for my friends in the art community whose work is being openly stolen, exploited and devalued; and whose ecosystems are being destroyed by callous, greedy tech companies under a false banner of "progress." I am but one software engineer, but I see you and I am on your side. I promise you that I am not the only software engineer on your side, either.
I wrote this because I'm planning to end my sabbatical soon and start building software again as an independent solo developer. I plan to build software for people in creative spaces who have great reasons not to trust someone with a tech background like mine, and I am writing this to help my future audience make an informed decision on whether or not they want to use my products and support my work.
I wrote this because I want to inspire change. If you're a software engineer or another person in tech who agrees with me more than you disagree: If you are working at a company whose values do not align with your own; or working on AI not because you believe in what you're making but because that's just what you were told to do; because you don't want to be "left behind"; because everyone else is doing it; because you’re scared to stand out or take a risk — If you have the financial means (and many of you do), I urge you to quit your job and build something you believe in.
If my essay resonated with you, then if you feel so inclined, I invite you to comment on this post to let me know, and/or to follow my work on Substack or elsewhere. I would love to stay connected.
VIII. How to draw a cover
This is how I drew the cover of Pouch Issue 2.
[1] Despite me using the umbrella term "generative AI" in this essay, I'm not trying to generalize all of deep learning; I'm mostly talking about using AI for creative output and personal communication. My opinions on deep learning differ widely based on the domain in which it's deployed, how the model is trained, the manner in which it is used, etc. For example, I'm in favor of AI helping stroke patients speak again, and I imagine most artists agree with me. I am also mostly in favor of using AI for purposes like processing data, summarization, or generating code for software development. On the other hand, I have major issues with plenty of other uses, like building chatbots that have feigned "authority" on subject matters they are not equipped to handle. At the end of the day, I currently avoid LLMs because I have major ethical issues with the broader LLM ecosystem, which my friend Nicole captures well in her essay, Can I ethically use LLMs?
[2] I strongly recommend Syllabus or Making Comics to anyone curious to understand this phenomenon more deeply. This is the exact subject Lynda Barry explores through these books, in far more compelling words and pictures than I can provide to you here.
[3] This is a crude metric. It's imperfect, imprecise, and will probably annoy both artists and technologists with how crude it is. But it's the best I can come up with for now.
[4] Not to mention, the generative AI models will STOP PROGRESSING because they don't generate """new""" ideas without being trained using new human work.
[5] Like boiling the ocean, perpetuating misinformation, etc.
[6] If you still find yourself with a burning desire to debate me, I am sorry, I do not have time for this. Since I'm guessing you’re a fan of LLMs, I request that you do not comment here, and debate with ChatGPT instead. And if that feels unsatisfying to you, ask yourself why.