
AI Dungeon’s new filter for stories involving minors incenses fans

AI Dungeon is an infinite, procedurally generated game in which players create a classic text adventure-style narrative by writing inventive or clever prompts. In a recent update, AI Dungeon developer Latitude implemented a new system that stops the game from generating sexual content involving minors — and the community is outraged at the implementation.

The system (which the developer is currently testing) was designed to detect “explicit content involving descriptions or depictions of minors.” In that circumstance, AI Dungeon will tell the player “Uh oh, this took a weird turn…” and force them to try other prompts. According to a statement from Latitude, the system cast a far wider net than anticipated, sometimes blocking the procedural generation of stories involving children or anything related to specific phrases like “five years old.”
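Latitude has not published how its detection works, but over-blocking of innocuous phrases like "five years old" is a classic failure mode of naive phrase matching. A minimal, purely hypothetical sketch (the blocklist and function names are illustrative, not Latitude's actual system) shows why:

```python
# Hypothetical illustration of why naive phrase matching over-blocks.
# This is NOT Latitude's implementation; the blocklist is an example.

BLOCKED_PHRASES = ["five years old"]  # illustrative blocklist entry

def is_blocked(story: str) -> bool:
    """Return True if the story contains any blocked phrase, regardless of context."""
    text = story.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

# A completely innocent sentence still trips the filter:
print(is_blocked("My laptop is five years old and needs replacing."))  # True
print(is_blocked("The knight drew his sword and entered the cave."))   # False
```

Because the match ignores context entirely, any story mentioning a listed phrase is rejected, which would produce exactly the kind of false positives users reported.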

On Tuesday, Latitude published a lengthy blog post explaining the update in an attempt to assuage the community’s biggest concerns.

Yesterday, we released a test system to prevent the generation of certain sexual content that violates our policies, specifically content that may involve depictions or descriptions of minors (for which we have zero tolerance), on the AI Dungeon platform. We did not communicate this test to the Community in advance, which created an environment where users and other members of our larger community, including platform moderators, were caught off guard. Because of this, some misinformation has spread across Discord, Reddit, and other parts of the AI Dungeon community. As a result, it became difficult to hold the conversations we want to have about what type of content is permitted on AI Dungeon.

The developer said that the test had unintended implications, writing: “While this test has largely only prevented the AI from generating sexual content involving minors, because of technical limitations it has sometimes prevented the generation of content that it wasn’t intended to.”

Fans have been reacting to these changes — both the intended purpose and unintended side effects — on Latitude’s social media. Some of these posts are memes that are meant to be a simple dunk on the developer and little else, while other posts appear to reflect legitimate anger.

[Image: The AI Dungeon subreddit as of April 28, reflecting the community reaction]

Users have been sharing examples of their stories coming to a sudden end when the test system seems to detect controversial content … even when there clearly isn’t any. AI Dungeon allows for unlimited roleplay and storytelling possibilities, and Latitude permits other NSFW material, including sex, violence, and swearing. Some players are alarmed that their private fiction with adult themes could be subject to moderation and read by a member of the development team. Latitude said that its system will flag potentially rule-breaking posts, which can then be further reviewed by a staff member.

“Latitude reviews content flagged by the model for the purposes of improving the model, to enforce our policies, and to comply with law,” the developer said. In response to the question “Is Latitude reading my unpublished adventures?” the developer wrote, “We built an automated system that detects inappropriate content.” We’ve reached out to Latitude for comment and clarification.

This has users worried about their security and privacy, especially if they’ve entered sensitive or personal information into the AI Dungeon system. For now, the test system remains in place.
