AI Provider Policies

This document outlines the core policies, terms, and principles for the responsible integration, usage, and development of AI-powered features within this project. Minovative Mind integrates models exclusively from Google. All contributors, maintainers, and users must comply with the referenced policies and best practices to ensure safety, legality, and ethical stewardship of AI technology.

Table of Contents

1. Responsible AI Principles
2. Acceptable and Prohibited Uses
3. Provider-Specific Terms & Privacy
4. Security, Data Retention, and Abuse Monitoring
5. Product Integration Guidelines
6. Reporting Issues and Security Vulnerabilities

1. Responsible AI Principles

Minovative Mind is committed to the ethical development and deployment of AI. We align our integration strategies with the principles established by Google, emphasizing:

  • Safety and accountability
  • Human-centered design
  • Transparency and explainability
  • Privacy and data protection
  • Fairness and prevention of bias

Reference Principles:

2. Acceptable and Prohibited Uses

All project artifacts, integrations, and user interactions with AI models must adhere to the prohibited use policies of our provider. In summary, prohibited uses include (but are not limited to):

  • Unlawful or exploitative activities.
  • Generating harmful, misleading, or deceptive content.
  • Bullying, harassment, or creation of malicious code.
  • Scams, deepfake abuse, and non-consensual data use.
  • Content that violates privacy, intellectual property, or encourages self-harm.

Provider Policies:

3. Provider-Specific Terms & Privacy

By using the AI features in Minovative Mind, you agree to the applicable terms of service and privacy policies of the provider, Google.

3.1 Google (Gemini)

Current models: Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.5 Flash-Lite, Gemini 3.1 Pro, Gemini 3.1 Flash-Lite.
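Contributors wiring up model selection can guard against typos and retired IDs with a small allowlist helper. A minimal TypeScript sketch, assuming hypothetical API-style ID strings derived from the model names above; the exact IDs exposed by the API may differ, so verify them against Google's current model documentation:

```typescript
// Hypothetical allowlist derived from the model names listed above;
// verify the exact ID strings against the provider's documentation.
const SUPPORTED_MODELS = [
  "gemini-2.5-pro",
  "gemini-2.5-flash",
  "gemini-2.5-flash-lite",
] as const;

type SupportedModel = (typeof SUPPORTED_MODELS)[number];

// Returns the requested model if it is on the allowlist,
// otherwise falls back to a known-good default.
function resolveModel(
  requested: string,
  fallback: SupportedModel = "gemini-2.5-flash"
): SupportedModel {
  return (SUPPORTED_MODELS as readonly string[]).includes(requested)
    ? (requested as SupportedModel)
    : fallback;
}
```

Centralizing the allowlist in one module also makes the Model String Hygiene guideline in section 5 easier to enforce: a retired ID is removed in exactly one place.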

4. Security, Data Retention, and Abuse Monitoring

Minovative Mind utilizes secure enterprise-grade APIs (Vertex AI for managed routing). Notably:

  • Model Training: Data submitted to the AI models via the extension (both in API Key and Managed Mode) is typically not used to train the providers' models without explicit user opt-in, per enterprise API terms.
  • Data Caching: Input and output data may be cached by providers for up to 24 hours by default to improve diagnostics and performance.
  • Managed Mode (Proxy): Workspace content and prompts pass through our secure Cloud Function proxy in transit to the provider. This data is not persisted on our servers.
  • Investigation Logs: Detailed logs of the Context Agent's investigative steps are stored in Firestore for auditability and debugging.
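Because investigation logs persist in Firestore, it is good practice to scrub obvious credentials from text before it is written. A minimal sketch of a hypothetical `redactSecrets` helper (not part of the current codebase), using a deliberately small set of patterns; a real deployment would extend these to cover more credential formats:

```typescript
// Patterns for a few common credential shapes; intentionally conservative.
// Real deployments should extend this list (e.g. OAuth tokens, cloud-specific keys).
const SECRET_PATTERNS: RegExp[] = [
  /AIza[0-9A-Za-z_-]{35}/g, // typical Google API key shape
  /-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----/g,
  /\b[A-Za-z0-9_-]{20,}\.[A-Za-z0-9_-]{20,}\.[A-Za-z0-9_-]{20,}\b/g, // JWT-like token
];

// Replaces anything matching a known secret pattern before the text
// is persisted to an audit log.
function redactSecrets(text: string): string {
  return SECRET_PATTERNS.reduce(
    (acc, pattern) => acc.replace(pattern, "[REDACTED]"),
    text
  );
}
```

Applying this at the logging boundary keeps the Firestore audit trail useful for debugging without retaining credentials that may have leaked into prompts or workspace content.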

5. Product Integration Guidelines

When contributing to AI integrations in this project, developers must:

  • Secure Credential Management: Never commit API keys or service account credentials to public repositories. Use VS Code's SecretStorage for local keys and Google Cloud Secret Manager for backend keys.
  • User Notification: Clearly notify users when AI models generate content or perform autonomous actions.
  • Attribution: Ensure correct attribution and model identification in the UI using the provider's canonical naming convention (e.g., "Powered by Gemini 2.5 Pro").
  • Safety Filtering: Implement and respect the safety settings provided by the AI APIs to filter out harmful content.
  • Model String Hygiene: Avoid hardcoding deprecated model IDs. Subscribe to provider changelogs to catch upcoming retirements.
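The SecretStorage guidance above can be sketched as follows. This is a minimal illustration with a hypothetical key name; the interface mirrors the `get`/`store` methods of VS Code's `vscode.SecretStorage`, with an in-memory stand-in so the sketch is self-contained outside the editor:

```typescript
// Structural subset of vscode.SecretStorage (context.secrets in an extension).
interface SecretStore {
  get(key: string): Promise<string | undefined>;
  store(key: string, value: string): Promise<void>;
}

// Hypothetical key name for this project's Gemini API key.
const API_KEY_NAME = "minovativeMind.geminiApiKey";

// Stores the key in the OS keychain-backed store, never in
// settings.json or anything that could reach source control.
async function saveApiKey(secrets: SecretStore, apiKey: string): Promise<void> {
  await secrets.store(API_KEY_NAME, apiKey);
}

async function loadApiKey(secrets: SecretStore): Promise<string | undefined> {
  return secrets.get(API_KEY_NAME);
}

// In-memory stand-in so the sketch runs outside VS Code;
// in the extension, pass context.secrets instead.
class InMemorySecretStore implements SecretStore {
  private data = new Map<string, string>();
  async get(key: string) { return this.data.get(key); }
  async store(key: string, value: string) { this.data.set(key, value); }
}
```

In a real extension, `context.secrets` on the `ExtensionContext` satisfies this interface structurally, so the same helpers work unchanged in production code.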

6. Reporting Issues and Security Vulnerabilities

All security concerns or potential AI policy violations should be reported promptly through the project's designated security reporting channels.

Disclaimer

This document is a summary of the official policies of our AI provider and must be used in conjunction with the latest policy links provided above. Official provider terms always take precedence. Model availability, pricing, and terms are subject to change; always verify against the provider's current documentation before deploying to production.