One of the commonplaces directing our relationship with AI is that AI does not think.
Yet the folk wisdom is that we should approach AI as a tool that does what we tell it.
Next, we are "told" that we must be mindful of what we tell AI, and be ready to spend time reiterating our instructions to refine the results, sometimes even restarting the conversation so that the results are not too biased by the conversation history.
No less, AI does not think, yet we test it with instruction.
And we still have to think.
We know that in well-defined environments, instructions to AI tend to generate productive results.
Repeatability is an achievement.
But AI tends to produce syncretic overlap in its responses when its environment is not well-defined. Hence, we find its output confusing or worthless. The same applies to search engine results, though.
AI seems most creative when it is wrong. ...
Disambiguation is critical here. Hence, reiteration. Correctness.
Yet, tangent to the "AI does not think" commonplace is another argument about how AI should improve our thinking.
But why should AI improve our thinking? Good questions arise here.
And there are plenty of critics of AI Slop.
And critics such as Gary Marcus have plenty to discuss, especially in terms of the market hype for AI.
But where does any of this take us?
The argument arises that computers are copy-cat agents, but this capacity has typically been one of the favored aspects of computing.
And AI is also a copycat.
And human beings are also copycats.
The Frankfurt School, of course, developed a theory of the copy-cat tendencies of modern culture. But long before the Frankfurt School, Adam Smith developed a theory of economics based on the speed of production, which largely depends on copy-cat. And prior thereto, Plato developed a theory of ideal forms in opposition to copy-cats, sort of. ...
And Andy Warhol also did his take on copy-cat.
Yet copy-cat is often a problem for lines of thought that do not understand re-production as necessarily creative. Try this argument, for example.
But we are not lost in the asemics of the indistinguishable so much as stuck in the asemics of creativity and its double sense: to reproduce, to make original.
Hmmm.
AI Slop often seems so much kitsch.
AI Overview on AI Slop
"AI slop"—defined as low-effort, high-volume AI-generated content (images, text, video)—is generally considered derivative rather than truly creative. While it can demonstrate novelty by producing unexpected, surreal, or "so bad it's good" results, it is often characterized by a lack of intentionality, human emotion, and genuine meaning.
- Lack of Intent and Meaning: AI slop is generated without taste, purpose, or understanding. It is often created to exploit the attention economy as clickbait.
- Derivative Nature: AI models are trained on existing human work and "remix" or mimic patterns rather than creating something fundamentally new.
- "Inhuman" Quality: It is characterized as having a "banal, realistic style" that is often formulaic, repetitive, or eerily distorted (e.g., warped shadows, extra fingers).
- Lack of Craft: Because it requires minimal human effort to produce, it lacks the "human labor" and "soul" that traditional art uses to convey emotion.
- A New Aesthetic: Some argue that AI slop is a new, unique, and "surreal" art form in its own right, mirroring the chaotic nature of the internet.
- Democratization of Creativity: It allows non-artists to express ideas or create prototypes quickly.
- Catalyst for Human Creativity: Even if the raw output is "slop," it can act as a "creative tool" to spark ideas in humans, who then refine it into something meaningful.
- Surprising Results: Some people rate AI-generated images as more creative than average human art because of the unexpected combinations AI can produce.