#1 Sci Fi Author on Google News Speaks Out: How I Used ChatGPT To Game The Algorithm
May 23, 2023
While some science fiction magazines, like Clarkesworld, have shuttered their doors to AI-generated submissions, a new breed of AI artist is turning the stigma against generative AI content on its head, making a name for themselves in the top slot of Google News while pushing the boundaries of formula fiction.
One such AI artist is Baofa, a self-described hyperrealist AI artist and former content moderator for a major web platform, who has skyrocketed to become the #1 sci fi author on Google News seemingly out of nowhere. The networked postmodernist writing style of Baofa’s cinematic AI Lore “mini-novels” splits his epic world-building and narratives across 100 different volumes of AI-generated art, lore, and flash fiction. His work has already been chronicled by CNN, Newsweek, Business Insider, and most recently the New York Post.
He says that he used ChatGPT to pitch and write an article for Newsweek about his AI art books project, and in the process accidentally “gamed the algorithm,” landing in the top three spots in Google News for “sci fi author.” As of this writing, he also appears at number 4 globally in Google News for “author” and number 6 for “sci fi,” right under Apple’s Silo. How, then, did this virtual nobody, armed with nothing but an AI, earn such a prestigious ranking from Google’s own algorithm?
Baofa, whose real name is Tim Boucher and who identifies as a Canadian-American, says his artist name came to him during meditation. (He is no stranger to following his dreams either, having set off once for Crete alone with a one-way ticket to find a plant he had seen in a dream.) “Afterwards, I looked it up, this word BAOFA, because it had no meaning to me, and I found that it translates to ‘get rich quick’ in Mandarin Chinese (as well as ‘outbreak’). I thought, is this a joke? Is the Universe laughing at me? I decided then it would be a fitting name to take on for this project, so that I remember not to take myself too seriously.”
Baofa, who also routinely produces #1 viral and often humorous political image sets on Reddit’s r/midjourney (and is collaborating on AI music tracks on Spotify under an assumed name), joins the ranks of other up-and-coming arts and literature collectives making conscious creative use of AI, like Philippe Klein’s Infinite Odyssey, which bills itself as the first AI-generated sci fi magazine and made a huge splash in the press with its inaugural issue.
It’s an idea Boucher says he loves, and one that has been close to his heart as he published his own 100 human-curated, lovingly hand-crafted, AI-assisted art books through his own indie press. “For my part, I was greatly inspired by reading through old pulp sci fi magazines I found on the Internet Archive, an invaluable resource. And looking at the old ads, and thinking how alien they all felt today; and how still a hundred years later, they retained a sort of trashy thrilling allure that for me is so central to sci fi — something Philip K. Dick famously called the ‘trash stratum’.”
Boucher wanted to make books like that too, he says. “Pulp sci fi, but also like a cross between Choose-Your-Own-Adventure books, and that Time-Life Mysteries of the Unknown series.” He wanted to ride that knife-edge of the Uncanny Valley and of the postmodern conception of “hyperreality,” triggering reactions in the viewer of: is this real, is this not real, is this serious, or is it all some kind of bad joke? His readers don’t seem to think it’s a joke, though. He says that 10% of his buyers are repeat buyers, whose purchases account for 40% of his still modest, but growing, sales.
“Say what you will about the contents of the books themselves — people are free to like them or not — but those sales numbers to me sound like product-market fit.” That, he adds, is entering what they call in Silicon Valley “flywheel” territory, for something extremely niche.
He insists he’s not in it for the money though, and that his sales are still modest. “I’m operating as a young startup with no money, and no investment. I’m totally free. My product is the worlds I build with AI, and this amazing magical ability to open up and share these incredible creative rooms inside the collective imagination with other people. I don’t need anything else to prove to critics that I’m successful.” He has a good full-time job in tech, he says — and loves building very plain pine furniture in his spare time. “I have everything I need. I’m just in it for the Art, with a capital AI,” and he says that the money he makes from his books is just a bonus.
He is also deadly serious about the dangers posed by AI, and he says he recognizes that it is ironic that he uses AI tools to spread his message, but that, “These are the best storytelling tools of our time, and they’ve already changed everything, and they will continue to do so again and again. They will disrupt our lives. We have to become aware and wary, sensitive to them, understanding how they work first hand, so that we can also simultaneously become hardened against them and all the mass scale manipulation opportunities they will create. I’m a huge fan of a lot of aspects of this technology — if not always necessarily its closed control and direction — but we’re descending into chaos, and need to be careful.”
“Particularly, I think we need to be very wary of AI-powered content moderation systems, like this new one from Microsoft, called Azure AI Content Safety. Systems, in other words, whose insidious purpose is attempting to control or correct human nature. I’ve used a lot of these products in my time as a content moderator, and the ones based on AI and machine learning are always the most untrustworthy. Do we really want AI systems running at a societal level to control what we can say or do, speak or hear? Because that’s what we’re doing, and they are in the hands of for profit corporations, who are amassing enormous wealth and power with their AI tools.”
And with the rise of AI-generated content threatening to overtake the volume of human-generated content globally within the next five years, according to technologist and journalist Nina Schick, Boucher warns that, “There’s a tendency to think, oh we should just use AI to sort out what’s real or what’s good, and what’s not. But then we’re caught in this position of having one AI that tells us the other AI is real or trustworthy or isn’t. We’ve basically surrendered our humanity then.”
AI might be bad at content moderation, but he wants to highlight that it’s a bad job for human moderators too. “Everyone on social media is so unhappy, and people take it out on each other and on the poor moderators,” he says. “It’s a horrible thing to have to deal with constantly every day.”
But it’s a lot worse of a problem for humanity when you attempt to take the humans out of the decision-making loop, Boucher says. “And in fact, AI-based moderation solutions also don’t really take the human out of the loop in reality. They just shift them around and start calling them an ‘AI trainer’ instead of a content moderator. But they’re still repeatedly and consistently exposed to extremely toxic content on a daily basis, all just to make ends meet. It’s like being forced to breathe asbestos into your brain as your job. Content moderators aren’t making a ton of money either; these people are risking their mental health, just so that your grandma isn’t offended by rude people and naughty pictures on the web.”
“I’m someone who has been incredibly lucky and privileged to be able to speak out about the conditions content moderators work under globally. My own experience is nothing compared to what most people working in this space experience. Something has to change. Most moderators can’t speak to the press freely because they’ve been forced to sign non-disclosure agreements as conditions of employment. Thus, the plight of these workers and the true human cost — beyond merely dollars and sense — is buried,” says Boucher.
There isn’t much support for content moderation workers, who may easily become anxious, angry, or depressed, or, as Boucher says happened to him, develop mild PTSD symptoms from repeated exposure to “triggering” content (and people).
“There’s a common misconception that comes from people with no experience in this field, that it’s only moderators who deal with images and videos who really have it bad, and that, as the children’s rhyme goes, ‘words can never hurt me.’ But human unkindness, amplified and magnified at the scale to which moderators are exposed, is shocking and violating, no matter the medium, and no matter whether you look at cute pictures of cats as ‘eye bleach’ afterwards.” The effects linger on, he says, and he feels like only now is he coming to grips with them — some three years later.
When asked for solutions to the looming AI crisis, Boucher says he thinks David Lynch might have the right idea, that meditation — or some deeply creative intuitive inner process like it — might be the only thing that can really help people re-orient towards the ground of being, unplug from all the crap, and shake off the toxins and violence of social media. He says meditation is the main thing that has really helped him close this chapter of his past and move on.
Having spent years traveling and working on organic farms, and now running his own 60,000 square foot certified wild garden, Boucher adds that we have to “return to the primacy of lived existence, over the tyranny and hollow illusions of social media and the technocratic control over our lives of cell phones and subscription services. These patterns are already driving us down blind alleys and in many cases ruining our lives, whether they have AI-integrated features or not. Have you ever sat around a table with others, and all anyone is doing is checking their cell phone and not making eye contact? AI already controls the planet. It’s already a done deal. But what happens next remains up to us.”
Still, he thinks there’s hope: hope, he says, “if not in the essential goodness of human nature — at least in the potential for goodness. And maybe this is going to have to be enough. Because that’s all we’ve got.”