

There's still plenty of snobbish gatekeeping; it just exists at a higher level. Entry-level knowledge is abundant, but once you seek a community with more specialized expertise, the IRC channels will be private and password-protected, and you'd better have contributed to a novel exploit or something similar.

Most AI platforms allow sexualized content to varying degrees. Google, Instagram, TikTok, etc. all host CSAM, and always have. The understanding has been that they're not liable as long as they remove it when it's reported. Their detection technology is good enough to handle most of it automatically, but it's never perfect. They keep track of origins and comply with subpoenas, which has gotten plenty of people convicted.
Grok's image generation was put behind a paywall, which some people claim makes things worse. However, most paying users lose their anonymity and can therefore be identified and dealt with appropriately when they request illicit content, even if Grok refuses the request, as it usually does.
I think the Grok issue is sensationalized and taken out of the context of what actually happens online and in law enforcement.