Inkwell Insights Episodes

The Inevitable AI Episode

Written by Blake Reichenbach | Jan 3, 2025 9:24:42 PM

Episode Summary

Personal Interactions with Books: A Window into Authentic Creativity

Blake Reichenbach shares an intimate detail about his reading habits to underscore his argument about the irreplaceable value of human creativity. Frequently revisiting "My Antonia" by Willa Cather, he marks the text with handwritten notes, underlining, and personal inserts. This deep engagement with the material epitomizes a human-made synthesis of experience and interpretation, a nuance that AI currently cannot replicate.

The act of annotating a book involves more than processing information; it is about invoking a personal dialogue with the text. Unlike AI, which can only surface patterns, these annotations capture emotional truths and cultural nuances, making it clear that writing is far more than assembling words—it is about imbuing them with human essence.

The Unseen Limitations of AI in Creative Writing

Blake’s experience as a product manager for AI projects in the software industry informs his critical stance. AI, while proficient at pattern recognition and data synthesis—valuable for marketing and data analysis—falls short in creative writing. AI-generated text, although coherent, lacks the fingerprint of human experience.

Writing's essence lies in its ability to evoke complex emotional landscapes and subtle truths. The soul of a written piece is found in the nuances that only a human touch can render. In a society increasingly striving for authenticity, outsourcing our stories to machines devoid of consciousness poses a significant philosophical dilemma.

Intellectual Property and Ethical Quandaries

The podcast also addresses intellectual property concerns accompanying AI-generated content. When AI pulls from a vast array of existing texts to create new material, ethical questions arise about originality and ownership. The uniqueness of creative work is deeply tied to the creator's personal experiences, insights, and emotions. When AI mimics human art, the boundaries of ownership blur, questioning whether machines, relying solely on pre-existing patterns, can genuinely 'create.'

The Accessibility Debate: More Than Meets the Eye

Another critical narrative scrutinized is the claim that AI democratizes creativity, especially aiding those with disabilities. While AI can offer accessibility tools, advocating for deeper, structural changes is paramount. True inclusivity requires not just technological aids but also fostering environments where diverse voices can flourish. The solution extends beyond AI and involves rethinking the norms and infrastructures that inhibit accessibility.

Environmental Impacts of AI

While marveling at AI's computational prowess, there is an often-overlooked cost—its substantial resource consumption. The energy and water required to operate and cool massive data centers pose significant environmental challenges. This facet of AI's impact is crucial as society leans more heavily on these technologies, necessitating a broader discussion about sustainability.

Craft vs. Art: Understanding the Dichotomy

Reichenbach delves into the philosophical distinction between writing as a craft and as an art form. Craft encompasses techniques and rules that can be academically acquired. In contrast, art springs from human experience—an inherently chaotic and raw endeavor that AI cannot emulate. Writing remains one of the most personal acts, embodying reflection, catharsis, and rebellion.

Episode Transcript

Heads up! Transcripts are automatically generated and may contain errors.

Oh, hey, it's that guy who marks the books that you loan him with his own notes. Blake Reichenbach. How's it going, everybody? Welcome back to the Inkwell Insights podcast.

Once again, a disclaimer on my cold open: I don't mark books when you loan them to me. I do dog-ear the pages. I've never really used bookmarks for some reason. The one exception to that is when I'm traveling: if I'm flying somewhere, I will use a plane ticket as a bookmark, but I've never used an actual bookmark as a bookmark. I have no idea why. I've always just been a turn-down-the-corner-of-the-page kind of guy, and for some reason a stick-an-airline-ticket-in-the-book kind of guy. It's just how I roll. I can't explain it, baby. I was born this way.

But I won't write in books that you loan me. My own books, I will fill to the margins with pen. I love underlining. I love taking notes in a good book. That is one of my favorite things to do. I have one particular copy of My Antonia by Willa Cather that I consider my copy of My Antonia, which, for a quick frame of reference, I have, I think, five separate copies of My Antonia. But my copy, which, if you're hearing this, which I imagine you are because it's a podcast and that is in fact how podcasts work, I want you to envision the words my copy in bold or italicized, or bold and italicized. But my copy of My Antonia is full of my notes. It's underlined, it has things stuck between the pages. It's sort of like my commonplace book. And it's something that I go back to, and I reread that book every few years.

It's one of those books where depending upon where I'm at in life and what stage of life I'm at, it takes on new meaning. And so it's a book that I will continue to mark up. It's getting to the point where I have to read it with like index cards or post it notes because some of the pages, I can't mark them up anymore. There's nowhere for me to mark if I want to be able to continue reading the book. So that is how well loved that book is. But again, if you loan me a book, I promise not to do that. Only the ones that I own now.

I don't know why I decided to have such a long diatribe about marking up books for this episode. Because I'm going to be talking about something that feels just diametrically opposed to a good printed book that you mark up by hand with a good ink pen. Right. I'm talking about AI today. I feel like this episode is inevitable.

I'm slightly groaning and rolling my eyes as I record this because you can't get online right now. You can't open a magazine, you can't, I don't know, turn on the news. I don't watch the news unless it's on a TV at the gym, but I'm assuming some people still do. You can't do any of those things right now without hearing someone talking about AI. And the reason I sound a little exasperated, I think, is because I do think the way we talk about AI right now is so overhyped. And the way that the majority of consumers are using AI, I should say, is just so overrated. So overrated. And I want to be very clear and upfront about that bias.

Now... The other bias I should be really clear and upfront about is that this podcast is not my day job. I am a product manager in the software industry and I specifically am a product manager for AI products. So I work in the business technology space and I specifically lead teams that build products that leverage and incorporate AI technologies to facilitate business processes. So I'm very in the weeds in the AI industry.

I am not anti-AI. I would say I am critical of AI, but I am not anti-AI. And AI in many ways does pay my bills because again, that is my job. That is what I do as a product manager is I look at all of these business use cases and at emerging technologies that are available and figure out new and novel ways that we can apply them, adapt them, and make it easier for people to run their businesses. So I'm also coming at it through that lens. Right? I know my stuff when it comes to AI. I know it really well and I'm good at what I do and I'm comfortable saying that.

So I'm not coming into this conversation as someone who is going to say, like, "never AI, AI is always shit." I'm coming into it as someone who, I feel, has a pretty realistic perspective: there are things AI is good at, there are things AI can do but isn't great at, and there are things that AI has been oversold on, where we are led to believe it is much better than it actually is. When it comes to writing and publishing, AI is such a minefield. Such a minefield.

And it's so difficult to talk about AI in writing and publishing because it feels like it should be an entirely separate world. It feels like this should not be a Venn diagram; it should be two completely separate spheres. And part of that is, right, my own biases around writing and creativity. But part of that is also about what it takes to produce good writing and the inherent limitations of generative AI. Before I get too deep into those inherent limitations, I do want to acknowledge some of the general criticisms and faults of gen AI that aren't specific to publishing.

Now, throughout this podcast I'm going to be talking about AI as an umbrella term, but keep in mind that there are so many flavors and facets of AI. Primarily what we're going to be talking about throughout this episode is generative AI and LLMs or large language models. That is not the extent of AI. There is so much more that AI can do, and I would say much more productive use cases for AI. For example, AI is really good at matching and identifying patterns and categorizing data. And so imagine you have a database with 10 million entries in it. And within this database, maybe it's something like in the health sciences world or public health. That would be a good example. Right? You have this database of 10 million people with various conditions, various geographic details, right? AI is really good at synthesizing data, matching patterns, so it could look at that database and find patterns that could help predict health outcomes or help predict interventions that would make sense for a really large cohort. Right? That's a really good use case for AI.

Bringing it slightly closer to writing and publishing, but I would say not fully in the realm of writing, AI also has a lot of really good, like marketing applications. So one of the hardest parts of marketing, especially for people who are more on the creative side of things, is the like, analytics and strategic side of things. So being able to put together ad campaigns and to tailor a marketing campaign to your audience is not only really effort intensive, but it's also very data intensive.

Especially when you're working with platforms like Meta or Google that can theoretically extend your message to tens of thousands or even millions of people, depending on how large your budget is, knowing how to refine your content or refine your audience so that you're not just wasting money is really, really hard. Being able to let AI take on some of that work of figuring out what variables to test and what changes to make can make things a lot more efficient. It can save a lot of your budget.

And again, what it's doing is not taking over the creative process; it's taking over that pattern matching and cause-and-effect analysis. So there are positive use cases for AI across most industries. Right. And I think something like the marketing example that I gave would be really beneficial inside of an industry like publishing, where things are pretty unanimously not great. Where folks get pretty concerned about the use of AI, regardless of industry, it typically comes down to a few buckets, first and foremost.

And I think my chief concern for myself is energy and clean water consumption. AI is a very resource-intensive process. The types of infrastructure that are needed to fuel AI use a lot of power, and they require a lot of water to keep everything cool and operating efficiently. And that water, if it's coming from, like, a potable reservoir or something like that, is often not recycled in a way that lets it return to the drinking water cycle. So the greenhouse gas emissions and the effects on water supplies are across the board pretty negative, pretty staggering. And when you compare the energy usage of something like AI to, let's say, a traditional Google search, that's increasingly a difficult comparison to make, because Google by default is relying more and more on generative AI as part of its search results. But in theory, if you compare a generative query, something like ChatGPT, to a traditional search query, the energy that is required to produce those results is significantly different. I want to say it's something like a tenfold difference, but I could be misquoting that. In short, it takes significantly more energy to produce a result with AI than with standard Internet search.

And if you've ever interacted with something like ChatGPT, you know that it's probably not going to be a one-and-done interaction. You're probably going to have follow-up questions or otherwise need to refine your criteria quite a bit, so that energy consumption just keeps growing. One of the other key sticking points, especially for a lot of writers and creatives, is around intellectual property. And here's where we get into some of the nitty-gritty around large language models and how they work, right? The way that large language models work confuses a lot of people: they are not thinking, they do not have mastery over language. They are algorithmic. They are predictive.

Large language models are trained on large bodies of text in a given language, and then they mathematically understand how entities within that cluster of text connect to each other. And even there I just used the word understand, right? Because that is how we as humans evaluate the type of interactions that we have with AI. It does not have understanding in the way that you and I have understanding of language. If you're really into mathematics and understand something like graph theory, that's closer, right? It's a mathematical principle of vectorized data. If you want to get really into it and really nerdy, it's essentially predicting: if this word happens, what is the next word that's most likely to happen? And that's oversimplifying quite a bit, but that's the foundation of how large language models work when they are producing text as an output.
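To make that "if this word happens, what's the next most likely word" idea concrete, here's a minimal toy sketch of next-word prediction using raw bigram counts. This is a deliberate oversimplification (real LLMs learn vectorized representations across billions of parameters rather than counting word pairs), and the corpus and function names are purely illustrative:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, how often each possible next word follows it."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A tiny illustrative corpus; real models train on billions of words.
corpus = "the cat sat on the mat and the cat ate the fish"
model = train_bigrams(corpus)
```

Here, `predict_next(model, "the")` returns "cat" simply because "cat" follows "the" most often in the toy corpus. The model has no concept of cats, only counts, which is the point Blake is making about pattern prediction versus understanding.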

And for writers, when you think, well, wait a minute, what is that large body of text that this large language model is trained on? For a lot of the earlier models that hit the mainstream, think GPT-3 or GPT-3.5, we're talking about a lot of the crawlable Internet. We're talking about things like Wikipedia pages and just search results that you can find on Google. And so if you've written a blog, if you have published stories, if you've published articles, there's a good chance that a company has trained their model on your work. It has looked at your writing and incorporated it into the body of text that it is trained off of, to say, okay, based on how all of these words are strung together, I know that when this word is used, we're likely to then use this word, then these words and these words and these words. And it's highly unlikely that a large language model is ever going to directly plagiarize an individual writer.

You know, if you got on a large language model and said, write three sentences as Blake Reichenbach, for example, it's pretty unlikely that it is going to give you three sentences that I have directly written. The reason being, when you're looking at the bodies of text that a large language model is trained on, we are talking about an astronomically large amount of text. We're not talking about thousands of words. We're not talking about millions of words. We're talking about billions, maybe trillions of words. And so your specific language, when that is averaged out, is just infinitesimally small. Right. So your influence on the model is there, and if you have a really large, really prominent body of work, it's possible that a model could emulate your style. But it is highly unlikely that a model, especially unprovoked, is going to directly plagiarize you.

Now, that said, it still means that a corporate entity, even though OpenAI started as a nonprofit, is still a corporate entity, especially right now as it is trying to convert to a for-profit entity. It means that a corporate entity is profiting off of your work, that you are not compensated for it, and that you have not consented to having your work incorporated into the model. That has raised quite a few legal questions around who actually owns what an AI creates. We see this quite a bit around AI art, and when people give the advice for self-published authors to use an AI image generation tool or something like ChatGPT to generate cover work or artwork for their book covers.

I highly, highly, highly don't advise doing that, because who actually owns that art? Do you own it because you wrote the prompt for the image? Does OpenAI own it because they generated the image? Or is ownership unassignable because the image is the result of an aggregate of hundreds or thousands of artists' works that have been incorporated into the body of imagery that the model is trained off of? I don't believe that question has been settled yet on a legal basis, and I don't know if that same line of questioning is being pursued for written content as it is for imagery. But I think it's still questionable enough that I would highly discourage using AI to write anything that you intend to market, sell, or promote.

Now, with all of that in mind, a lot of writers, a lot of people in what I would call the creative community, are pretty AI-averse. And I think that's fair, right? Most writers are not going to want someone to profit off of their work, or to nonconsensually take their work and use it to fuel someone else's work. In addition to that, writing isn't easy. Writing is something that you work hard at to get good at, and it is a skill that is immensely valuable. And AI, I would say, takes the craft out of that. It takes the fun out of writing.

Yet there are certain flavors of tech bro, let's say, who think that AI allows them to fulfill ambitions of being a writer, of being a published author, of being a novelist, because all they have to do is come up with these prompts, come up with these concepts, and let AI handle the rest. And in fact, there was a pretty controversial article that came out on The Bookseller website on October 3 by Nadim Sadek, titled "Craft and Creativity." I will link to that article in the show notes. Pretty innocuous title, and the subheader is "AI is ushering in a new generation of creators that publishing should embrace." Throughout this article, Sadek argues that AI is a great democratizer, that AI is democratizing creativity. And I want to read part of this article for you. Okay, so here we go. Are you ready for this?

We have forever conflated creativity with craft. Not only have we historically needed to identify within ourselves something other than the normal, we've had to take that creative impulse, insight, or notion and express it in a way others can grasp. We have galleries full of paintings, concerts full of musicians, books, books, books and more books everywhere, read with zest and wonder. We like artifacts. A thing has been created. It has presence. Inspectable, digestible. Criticism, review and acclaim surround it, if we're lucky, each new piece of creativity we produce. Through the craft, we perceive the creativity. There are less perceivably creative people in our world. The architects who design perfect arches, whoever invented the wheel, the master distiller blending liquids in casks of sherry and port to make that perfect single malt. Or a nurse who finds a way to make an old woman comfortable by playing her songs from her childhood. These days, also the TikToker who produces a new meme combining a societal insight with a memorable tune and perhaps a signature dance. Each human is creative, but not each human can craft. Whether it's with paintbrushes, words, or filters on a social media site, AI solves this. It's not a Stradivarius, it's not a Porsche, it's not a squirrel-hair brush. But it is a new expressor, a means of fashioning an artifact from a creative impulse without having to master the craft of expression. So long as you can articulate your notion, AI can make a decent stab at producing an artifact to represent your creativity. It'll make music to your command. Write words, produce an image. Whatever you're trying to conceive and give birth to, AI disintermediates the historic imperative of crafting. It takes your ideas and makes them evident. Others can see what you intend. People can relate to what you wish to convey. The liberation of global creativity excites me. Giving voice to the voiceless is exciting. Thinking about all the ideas we might be about to meet, previously unexpressed, is a pulse racer.
When I read articles like this, all I can really do is let out an exasperated sigh, if I'm being honest, because I can appreciate the optimism in a utopic view like this, right? Where someone wants to believe that technology is enabling a future of greater creativity. Where it is, you know, somehow propelling us forward to a place where people are able to create more uninhibitedly. And, you know, there are very real things to be said about accessibility. There are people with physical and cognitive limitations that make traditional acts of writing or painting or sculpting or, you know, architecting, whatever, more difficult or at times completely inaccessible. And I fully acknowledge that and respect that, and it fucking sucks.

But here's where I have to be a bit of a Debbie Downer and where I want to be very direct and very honest. Not everything is for everyone. And I don't mean to be mean when I say that. I don't mean to be discouraging when I say that. But the thing is, creative acts require effort. And that is not a bad thing. It does not mean that only a select few get to be creative. It does not mean that only those who are innately blessed get to be creative. What it means is that creativity, creative works are worthy of respect. They're worthy of effort. They're worthy of nurturing, they're worthy of investing time and money and energy into them.

This notion that AI enables anyone to engage deeply and creatively and produce something that is a tangible output of the same caliber, of the same order, is deeply misguided, not only when we look at quality, but also when we look at what goes into producing a work of creativity. So I'll start with quality, because that's kind of the easier of the two to address. When we're talking about quality, remember how I mentioned that LLMs essentially work off of the concept of graph theory? Right. LLMs work off of a law of mathematical averages.

Even if you write a really good prompt for whatever generative tool you are using, you're going to get back something that's fairly average. There are ways you can train the model. There are ways you can feed additional context and data into the model to make it sound more human, sound more like you, to, you know, even bypass AI detection tools. But that's not what makes good writing. Good writing has voice. It has something that is unique to the person who created it, not something that is the mathematical average of a million writers who have come before. Good writing is going to have perspective. It's going to have that human influence that comes with being a person with lived experiences and going through the really tough process of translating those lived experiences into words. And that, I think, feeds into the second aspect of why creating by hand and creating by AI are so different.

People don't create just to have the artifact, as I think the article that I read from is assuming. People create because the creative process is a way to encapsulate specific facets of the lived experience. There's this really great quote from Willa Cather. Oh, hey, I get to talk about Cather twice in a podcast about AI. Kind of weird, but also really great. There's a really great quote from Willa Cather in her novel The Song of the Lark, where the protagonist, Thea Kronborg, is coming up out of a river in, I believe, Panther Canyon, and she reflects, you know, what was any art but a mold in which to imprison for a moment that shining, elusive element which is life itself, hurrying past us and running away, too strong to stop, yet too sweet to lose. And it's the process of unearthing your own lived experiences, of figuring out what is that shining, elusive element of life itself that you want to grab onto, that you want to put into that mold, because it is too strong to stop, but too sweet to lose, and you need to preserve just a glimmer of it.

That process is so challenging, and it requires you to think critically about not only the words that you're using, but about how you understand them, about how you understand the events that have unfolded in your life, about how you relate to the world around you, about how you make sense of your position within the world. And all of that together is what produces the artifact. It is not sitting down and writing. It's not just spewing words out until there's a book in front of you. It is having that point of view, having that perspective, having that desire to take something that is profound or exciting or energizing and putting it down onto page through your body. And I know that that sounds very new age and very hippie, but that is what I've come to believe is true about writing and about creativity.

And it's not just true of, like, literature with a capital L, you know, even if you want to talk about, like, smut, I think it's a great example because it's often dismissed as like, oh, this is a silly book. This is not a serious work of art. But the thing is, that's someone taking arousal, someone taking erotic desire. That is a powerful glimmer, a powerful aspect of life itself that they are saying, ooh, you know what? I'm feeling freaky. This tentacly monster is what gets me off, and I am going to transfigure this sensation into words. Probably a weird example, but I've been playing a lot of Baldur's Gate, and I did romance the Emperor for the achievement, so that's where that example came from.

But my point is, it's not as simple as: I have an idea, I put down words, now I have an artifact. And I think that this assumption that AI can replicate the human process of creating art is misguided. Creating art is not the same thing as just writing words down. Right? Those are two separate acts. AI can write words down. Absolutely. And if your goal... you know, I think the author uses the phrase "giving voice to the voiceless," and "others can see what you intend. People can relate to what you wish to convey." Maybe there's a use case in there of using AI to come up with a brief or helping you refine your pitch. You know, those are things where it's essentially distilling ideas and helping you convey them in a more concise way, or acting as your sounding board for an idea.

Again, still not, like, perfect use cases, but ones that I can see as something that makes sense. A lot of writers really struggle with query and synopsis writing because they have this 300-page book in their mind that they've spent the last two years writing, and distilling that down into two paragraphs is a total nightmare. So maybe there's a way that they can explain their book to an LLM, show it what a really good synopsis looks like, and have it help them refine their book into a synopsis.

I still would not advise feeding your entire book into an LLM because then you are training the model on your book and as we've established, probably not a great idea unless you're just like open to that.

But again, that is not the same thing as the creative fulfillment that comes from producing a work of art. Those are two separate acts inherently. Now, as I started to touch on with this idea of, well, maybe there's a way you could use it for your synopsis, or kind of like I touched on earlier with the concept of ad targeting and marketing, AI may have some places in a writer's life, particularly on the marketing side of the house. Like, if you're the kind of writer who just wants to focus on your craft and you don't want to have a social media presence, but because modern publishing is a mess you're expected to have one, maybe there's something you could do there. You know, anytime you publish an article or do an interview or do a podcast, an AI model drafts up a couple of social posts for you and shares those out to your account after you review them. Something like that, frankly, is pretty innocuous. It's not super harmful. But again, that's going to come back to your personal comfort level with interacting with AI tools.

And frankly, if you're like a never AI kind of person, I don't blame you. Again, I'm not never AI. I think there are use cases, there are applications in various industries where AI just, it makes sense. But when it comes to writing and publishing, if you want to say that's a firm boundary that you're not crossing, I say go for it. Put that boundary in place, don't cross it. But yeah, I was really reluctant to dive in and talk about AI because again, God, we can't do anything these days without hearing about AI. AI fatigue is so real. It's so real. But I do think that it is worth talking about why some of these like, utopic views of AI are misguided. Why it's okay to say that AI is not this great equalizer, it's not this great democratizing tool, and it's not going to be a great liberator or means of accessibility that people like to position it as.

In fact, I would argue, and if you read the blog post that I wrote about NaNoWriMo and their many, many flaws, one of the things that I talked about is their stance on AI, which is essentially: oh, well, if you want to use it, okay, we don't really care, right? And they use, again, this language of disability and inclusion and accessibility. And it feels very hollow, and it feels like they're using the language of inclusion without actually understanding what it looks like to be an ally, to advocate for meaningful structural changes, as opposed to, I don't know, being chronically online and using the language of inclusion. But I think, again, because AI and intellectual property ownership is so ambiguous and so contentious, and is likely to have several years of litigation to follow, saying, hey, if you have a physical or cognitive disability, just use AI, is setting up populations that are already being excluded from a lot of mainstream publishing and creative activities for more hardship because they use AI tools for their creative endeavors.

They may or may not face repercussions for that, because there is so much ambiguity around AI. I think a better solution, rather than shoving AI tools at disabled people, would be talking to disabled people and asking how we can make creative spaces more inclusive for people who are neurodivergent or physically disabled, and making sure that they have a seat at the table in our creative communities and aren't just having ChatGPT shoved at them as a band-aid.

With all of that, I am gonna go wrap up. It is a beautiful, beautiful day here in Kentucky. We're finally starting to get a little bit of fall weather. And I love fall. I am a spooky girl. So I think I'm going to go march through the leaves. Probably need to clean my house a little bit as well. A little bit behind on laundry. And I don't want to be a stinky boy. Why am I a spooky girl and yet a stinky boy? I don't know. I've been talking for too long and words are starting to lose all meaning. Because unlike an LLM, I don't have math predicting what is going to come out of my mouth next. Okay, bye.