Now playing in the multiplexes:
The AI Doc: or How I Became an Apocaloptimist--Frustratingly for co-director Daniel Roher, the talking heads at the beginning of this documentary can't give him even a clear explanation of what AI is. Roher, a young filmmaker and artist who won an Oscar for his 2022 documentary Navalny, is anxious about AI's apocalyptic potential for humanity, particularly as he contemplates bringing children into the world.
His initial round of interviews doesn't offer him much to relieve the anxiety; more than one sage suggests that the human race could be extinct within a decade. Even if they don't go that far, they note the loss of both white-collar and blue-collar jobs without any plan to accommodate the unemployed, and the potential for massive, ubiquitous, society-wide surveillance. They also note AI's environmental impact, through the staggering amounts of power these systems require.
Apparently Roher's worries about all this weren't sufficient to encourage marital precautions, because in the course of the film he learns that his beautiful wife is pregnant with their first child. Soon he's getting pep talks to pull him out of his virtual despondency--more talking heads, mostly CEOs with dollar signs in their eyes this time, acknowledge that AI will certainly create turbulence. But they also insist that it has the potential to bring about a technocratic utopia, a world without disease, hunger or want.
The danger in both scenarios is that they sound so absurdly like old school sci-fi that they could breed either Chicken Little terror or complacency. This sprightly, rapid-cut, graphically engaging movie, co-directed by Charlie Tyrell, tries to find a tolerable way to navigate a middle course. The result is a shrug: it could go either way, or a thousand other ways we can't predict, so you might as well have your kid and hope for the best.
This is sensible but of course insufficient, so Roher and Tyrell offer suggestions for what to do in response to this crisis, which even AI's staunchest defenders seem to agree is a crisis. One of the talking heads calls himself an "apocaloptimist"--Roher pounces eagerly on the term--that is, aware of the dangers of AI but also of its possibilities, and prepared to take positive actions.
Most of these seem to involve demanding government regulation, and under normal circumstances that would be the obvious way to start. But...this government? Isn't there somebody else we can lobby, at least for the time being?
I've been repulsed by AI from the first I heard of it, partly for selfish reasons. It hits home for sorry little freelancers like me; the people who pay pittances for little scraps of writing are drooling at the prospect of having a machine write them for free. It also revolts me, perhaps even more so, as a reader. I read to get into the heads of other human beings; I don't want to read a bunch of crap spewed by a computer.
So far, most of the AI-generated text, and music and visual art, that I've seen (and known I was seeing) has been gratifyingly awful. But the people building these systems are working hard to improve them, and that's no comfort, because I wouldn't want to read AI-generated prose even if it were good. Maybe even especially if it were good.
In any case, the concerns suggested by Roher and Tyrell's movie go quite a ways beyond a few hack writers losing work; they're talking about the survival of the human race. It raises the question: if we can create these systems to begin with, isn't there a way, not to mention a moral imperative, to program them not to harm or oppress us? Here too, I automatically revert to foundational sci-fi; couldn't they be programmed with some version of Asimov's Three Laws, or with the programming that shut down Robby the Robot in Forbidden Planet when he was ordered to hurt a human?
If it's not seen as practicable to put basic safety measures like this in place, I would guess that there's one reason: it would inhibit the technology's potential to make money. Even the future of humanity can't compete with profitability. That's why my own apocaloptimism, at this point, remains cautious.

