
Artificial intelligence (AI) is rapidly changing the technology landscape. Generative AI tools are now used in many capacities, from creating images to powering search results. Google, a front-runner in the world of AI, is urging app developers to be more careful when building with AI-generated content.
Developer Guidelines and Google Play Store Policies
Karina Newton, Android's senior director of global product policy, published guidance for developers on October 25 covering AI-generated content. The main points from the post include:
- Apps that generate AI content must give users a way to report or flag offensive AI-generated material (a minimal sketch of such a flow follows this list).
- The policy mainly concerns text-to-text, text-to-image, voice-to-image, and image-to-image applications available on the Google Play Store.
- AI-generated voice and video recordings are covered, but apps that don't generate AI content themselves, even if they display AI-created media, are exempt.
- Productivity applications that use AI, and apps that summarize non-AI content, are also excluded.
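To make the reporting requirement concrete, here is a minimal, hypothetical Kotlin sketch of an in-app flag/report action. The names (ContentReport, ReportSink, flagGeneratedContent) and the fields are illustrative assumptions; the policy only requires that such a flow exists, it does not prescribe any particular API.

```kotlin
import java.time.Instant

// Hypothetical shape of an in-app report for offensive AI-generated content.
// Names and fields are illustrative; the policy does not prescribe a schema.
data class ContentReport(
    val contentId: String,            // identifier of the AI-generated item being flagged
    val reason: String,               // user-selected reason, e.g. "hate speech"
    val reportedAt: Instant = Instant.now()
)

// Abstraction over wherever reports are sent (backend, moderation queue, etc.).
fun interface ReportSink {
    fun submit(report: ContentReport)
}

// Called from the "Report" button shown next to each piece of generated content.
fun flagGeneratedContent(contentId: String, reason: String, sink: ReportSink) {
    require(reason.isNotBlank()) { "A report reason is required" }
    sink.submit(ContentReport(contentId, reason))
}

fun main() {
    // In a real app the sink would call a moderation backend; here it just logs.
    val logSink = ReportSink { println("Report queued: $it") }
    flagGeneratedContent(contentId = "gen-42", reason = "offensive image", sink = logSink)
}
```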
Enhancing Android User Data Control
These modifications are among the latest Google Play Store policy revisions aimed at developers. Other important changes include:
- Restricting app access to photos and videos unless it is crucial to the app's functionality (the photo-picker sketch after this list shows a permission-free alternative).
- Limiting full-screen notifications, so that device owners must grant apps a specific permission to use them.
- As part of its "Data Safety" initiative, Android 14 will let users see whether apps share their data with third parties after being granted access.
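For the photo and video change, one way developers can avoid requesting broad media permissions is Android's system photo picker, which hands the app only the items the user explicitly selects. Below is a minimal sketch assuming an androidx.activity-based ComponentActivity; whether the picker is sufficient still depends on the app's actual functionality.

```kotlin
import android.net.Uri
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts

class ProfilePhotoActivity : ComponentActivity() {

    // The photo picker returns a URI only for the item the user chose,
    // so the app never needs blanket read-media permissions.
    private val pickMedia =
        registerForActivityResult(ActivityResultContracts.PickVisualMedia()) { uri: Uri? ->
            if (uri != null) {
                // Use the granted URI (e.g., load it into an ImageView or upload it).
            } else {
                // The user dismissed the picker without selecting anything.
            }
        }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // In a real app this would typically be triggered by a button click;
        // here the system picker is launched directly, restricted to images.
        pickMedia.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }
}
```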
Google is taking proactive steps to give Android users greater control over their personal data and digital privacy. Staying informed about these updates helps individuals make educated choices about their data and privacy.
Google’s Vulnerability Rewards Program (VRP) Expansion
As generative AI raises new debates and security concerns, Google is expanding its VRP to cover threats specific to AI.
Updated VRP Guidelines
Google's updated guidelines clarify which AI-related discoveries are eligible for rewards. Highlights include:
- Finding training data leaks that expose private and sensitive details qualifies for a reward (a deliberately simplified sketch of what such a leak check might look like follows below).
- Extracting public, non-sensitive data does not meet the reward criteria.
Last year, Google rewarded security researchers with a whopping $12 million for spotting vulnerabilities. AI presents distinct challenges compared to other technologies, such as model bias and manipulation. Google believes the VRP expansion will motivate research in AI safety and security, ensuring a safer AI environment for everyone. The company is also increasing its efforts in open-source security, aiming to make AI supply chain security information universally accessible and verifiable.
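As a rough illustration of the first bullet, the sketch below flags model output that reproduces known sensitive training records verbatim. It is deliberately naive: the records, the matching rule, and the function names are made up for illustration, and real extraction or membership-inference research is far more involved.

```kotlin
// Hypothetical, intentionally naive check for verbatim training-data leakage:
// flag any model output that reproduces a known sensitive training record.
val sensitiveRecords = listOf(
    "jane.doe@example.com",
    "4111 1111 1111 1111"          // example card-number format, not a real card
)

// Returns the sensitive records that appear verbatim in the model's output.
fun findLeaks(modelOutput: String): List<String> =
    sensitiveRecords.filter { record -> modelOutput.contains(record, ignoreCase = true) }

fun main() {
    val output = "Sure! You can reach Jane at jane.doe@example.com for details."
    val leaks = findLeaks(output)
    if (leaks.isNotEmpty()) {
        println("Potential training-data leak, matched records: $leaks")
    } else {
        println("No verbatim matches with known sensitive records.")
    }
}
```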
Anticipating a New Executive Order
An important backdrop to Google's VRP expansion is a forthcoming "sweeping" executive order from President Biden, anticipated on October 30. The order is expected to establish rigorous assessments and requirements for AI models before government agencies adopt them.
Securing Generative AI: Challenges and Initiatives
Generative AI presents unique issues, including unfair bias, hallucinations, and model manipulation, as underscored by Google's Laurie Richardson and Royal Hansen. The updated VRP focuses on several categories:
- Prompt injections
- Sensitive data leakage from training datasets
- Model manipulation and theft
- Adversarial perturbation attacks that cause misclassification (see the toy sketch after this list)
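The last category is the easiest to picture concretely. The toy sketch below applies an FGSM-style perturbation to the input of a hard-coded linear classifier; the weights, the input, and the epsilon value are invented for illustration and are unrelated to any production model or to Google's actual VRP targets.

```kotlin
import kotlin.math.sign

// A toy linear classifier: score = w·x + b, label "positive" if score > 0.
val w = doubleArrayOf(0.9, -1.4, 0.3)
const val b = -0.2

fun score(x: DoubleArray): Double = w.indices.sumOf { w[it] * x[it] } + b

fun classify(x: DoubleArray): String = if (score(x) > 0) "positive" else "negative"

fun main() {
    // A clean input that the model classifies as "negative".
    val x = doubleArrayOf(0.2, 0.5, 0.1)
    println("clean:     score=${"%.3f".format(score(x))} -> ${classify(x)}")

    // FGSM-style perturbation: nudge each feature by epsilon in the direction
    // that increases the score, i.e. along sign(dScore/dx) = sign(w).
    val epsilon = 0.35
    val xAdv = DoubleArray(x.size) { i -> x[i] + epsilon * sign(w[i]) }
    println("perturbed: score=${"%.3f".format(score(xAdv))} -> ${classify(xAdv)}")
    // A small, structured change to the input flips the predicted label,
    // which is the essence of an adversarial perturbation attack.
}
```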
Strengthening the AI Supply Chain
In July, Google established an AI Red Team as part of its Secure AI Framework (SAIF). Another noteworthy step is the company's effort to fortify the AI supply chain through existing open-source security projects, namely Supply-chain Levels for Software Artifacts (SLSA) and Sigstore. These tools enable:
- Verification of software integrity with digital signatures from Sigstore (a stripped-down sketch of the underlying idea follows this list).
- Insight into what software contains and how it was built, via SLSA provenance, helping users detect vulnerabilities and more advanced threats such as tampering.
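The sketch below is not the Sigstore toolchain itself (which adds keyless signing, certificate transparency logs, and tooling such as cosign), but it shows the primitive that toolchain builds on: signing an artifact and verifying the signature, here with the JDK's standard java.security APIs. The artifact bytes and key handling are simplified assumptions.

```kotlin
import java.security.KeyPairGenerator
import java.security.Signature

fun main() {
    // Stand-in for a release artifact; in a real pipeline this would be the
    // bytes of a model file, container image, or source archive.
    val artifact = "model-weights-v1.bin contents".toByteArray()

    // The publisher signs the artifact with its private key...
    val keyPair = KeyPairGenerator.getInstance("EC").apply { initialize(256) }.generateKeyPair()
    val signature = Signature.getInstance("SHA256withECDSA").run {
        initSign(keyPair.private)
        update(artifact)
        sign()
    }

    // ...and a consumer verifies it with the matching public key. Sigstore
    // layers keyless identities and transparency logs on top of this step.
    val verifier = Signature.getInstance("SHA256withECDSA").apply {
        initVerify(keyPair.public)
        update(artifact)
    }
    println("signature valid: ${verifier.verify(signature)}")        // true

    // Any tampering with the artifact breaks verification.
    val tampered = "model-weights-v1.bin contents (modified)".toByteArray()
    val checkTampered = Signature.getInstance("SHA256withECDSA").apply {
        initVerify(keyPair.public)
        update(tampered)
    }
    println("tampered valid:  ${checkTampered.verify(signature)}")   // false
}
```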
Collaborative Initiatives for AI Safety
Notably, OpenAI has introduced a new internal Preparedness team that concentrates on large-scale risks posed by generative AI, ranging from cybersecurity threats to chemical and nuclear dangers. In a joint effort, Google, alongside OpenAI, Anthropic, and Microsoft, has also unveiled a $10 million AI Safety Fund to further research in AI safety. With these endeavors, companies like Google aim to strike a balance between innovation and security, ensuring that AI advancements are both groundbreaking and safe for users and the broader community.
Open Source: A Platform for Collective Growth
Open-source projects like SLSA and Sigstore play an integral role in the AI ecosystem. By making security tooling accessible to a wider community, they foster collective growth and innovation. Such projects also encourage transparency, allowing potential risks to be examined and mitigated more broadly.
In conclusion, while the road to a fully secure AI environment is long and fraught with challenges, the concerted efforts of tech giants, researchers, and the global community provide optimism. By laying a strong foundation of policies, rewards, and collaborative research, we are gearing up for a future where AI not only simplifies our lives but does so with utmost security and integrity.