Where and how we use AI
My dear internet friend M. Walker put into words a feeling around AI in music that I’ve been struggling to articulate:
There’s no such thing as AI music. Music is human culture. The sounds AI platforms output are imitations of human music, created by feeding AI models recorded music that humans actually created, training the model for complex pattern recognition abilities. They didn’t pay those recordings’ owners to use their copyrighted work. It is a simulacrum of music, stolen & unethical in origin, badly regurgitated by people too lazy and indifferent to even attempt to make music themselves.
This is a perfect reflection of what doesn’t sit well with me about AI. AI accelerationists use LLM-based products to simulate human-created art and tell the world that “music is cooked”, but it’s not actually art at all. It’s pattern recognition fitted to a purpose.
The thing is, this satisfies the capitalist goal of the music industry: to command attention with brands, with earworms as the marquee feature. If your goal is to extract money from people by getting songs stuck in their heads and fostering a dopamine addiction, why not use AI for that?
As I’ve said before, I understand the rationale, but I think it’s horrible for the world.
The interesting thing is that’s not how many software engineers think about this stuff at all. So many software engineers I know are already deeply embedding AI tools into their workflows. And the more I think about it, the more that makes sense: most programming languages, and even entire frameworks and software solutions, were created with the intent of being shared openly and widely with anyone who wanted to use them to solve a problem. In that context, an LLM understanding and acting on those open-source frameworks at scale is less an insult to an art form (to many, but not all) and more a 100x efficiency gain.
It’s also weird, as a product person without formal software engineering experience (though I’ve tinkered enough to do damage), to be able to use this stuff to build things. I could probably figure out how to build, on my own, the things I now build with an LLM’s help, but it would take so much more time to reach an end state (e.g. bringing Unstream to market) that I might never realistically make the thing. But it was possible, so I did. And I think the world is better now that Unstream exists for the people who want to change their habits away from streaming music services.
So I still don’t know what to make of this, other than: LLMs are children of ruthless capitalism, and as tools they can be effective when used in the right disciplines to solve the right problems. But they are not a replacement for human connection or creative expression.
The current problem is that we are trying to throw LLMs at everything. And most companies perpetuating AI are doing so in bad faith, deliberately fooling the public in order to justify AI’s value in spaces where it is harmful to humanity. And I mean that not just in terms of an LLM’s ability to convincingly fake the work of a human, but in terms of whether it’s actually the morally or ethically right thing to do. For instance: yes, I know Suno can do a decent job creating a mediocre indie rock soundalike, but I also know there is general public consensus that this is not a good thing once people realize it’s happening. But at the same time, 97% of people don’t notice, and the companies making AI music simply don’t care to inform them.
This is a problem even in a space as harmless as music; imagine what might happen if we start using flawed AI platforms to do things like fight wars on countries’ behalf. Oh wait, they’re already trying to do that.