We have followed with interest the drama as Stable Diffusion (SD), the new open-source AI image generation model, has gone public.
The CEO of stability.ai, the company that makes SD, stated in a TechCrunch article, and elsewhere:
“The paternalistic and somewhat condescending attitude of many AI aficionados is misguided in not trusting society.”
From early on, SD positioned itself as an AI model you could do basically anything you wanted with, provided you were running it locally. This stood in contrast to OpenAI’s highly restrictive content policy and error messages.
Not long after that (literally two days after the TechCrunch article quoted above), he also stated in the company’s Discord:
“please do not deliberately generate NSFW stuff or you will get banned/are basically being a douche”
That message feels a little “paternalistic” and a little like “not trusting society,” unfortunately, but we can understand the reasoning given the context.
Now, Discord has its own rules about what is allowed. Interestingly, Stability.ai had created a tool that apparently violated them routinely, often even when users of the Discord bot were not explicitly trying to make NSFW images. So we sympathize with the need to comply with Discord’s rules, whatever additional capabilities this AI might have.
Following that, the SD team introduced automatic blurring of generated images, first in the Discord bot (while it lasted) and later in DreamStudio, their paid image generation service. The filter was and remains riddled with false positives: images are blurred whenever a classifier’s score crosses a threshold, rather than when a human has actually identified an issue.
Initially, on release, DreamStudio gave users a toggle to disable the NSFW image filtering. A day or so later, they locked the filter permanently on, and then removed the toggle altogether, saying they were experiencing attacks that exploited it.
Then, however, they released the SD model and its weights/checkpoints into the wild. When they did so, they…