Google updated its guidelines for AI applications with the goal of reducing “inappropriate” and “prohibited” content.
In its new Google Play Store policy, Google stated that apps offering generative AI capabilities must prevent the generation of restricted content, such as sexual content and violence, and must conduct “rigorous testing” of their AI models.
These rules apply to a variety of applications, and are briefly summarized as follows:
- Apps that generate content via generative AI using any combination of text, voice, and image prompt input.
- Chatbots, image-generation apps (text-to-image, voice-to-image, and image-to-image), and voice- and video-generation apps.
- The rules do not apply to apps that merely host AI-generated content, or to apps that use AI as a productivity tool.
Google Play has clarified that prohibited AI-generated content includes, but is not limited to, the following:
- AI-generated non-consensual deepfake material.
- Audio or video recordings of real people that facilitate fraud.
- Content that encourages harmful behavior (e.g., dangerous activities, self-harm).
- Content generated to facilitate bullying and harassment.
- Content primarily intended to satisfy “sexual needs”.
- AI-generated “official” documents that enable dishonest behavior.
- Creation of malicious code.
Google also plans to add new app listing features to make the process of submitting generative AI apps to the store more open, transparent, and streamlined.