Secure AI Guidelines

Privacy & Security

AI security at UM relies on established IT policies, combined with smart data practices, to enable responsible innovation

Our Security Approach

UM doesn't create separate security policies for AI tools. Instead, we rely on our established IT policies and standards that already address data protection, privacy, and security requirements.

This approach ensures consistent security practices while allowing flexibility as AI technology rapidly evolves.

Why Data Privacy Matters More with AI Tools

AI tools present privacy and security challenges that go beyond those of traditional software. Understanding these risks helps you make informed decisions about when and how to use different AI technologies.

Data Security Risks

  • Training Data Exposure: Your conversations might be used to improve AI models, potentially exposing sensitive information to future users.
  • Data Breaches: AI systems process vast amounts of data, making them attractive targets for cybercriminals seeking access to personal or institutional information.
  • Third-Party Sharing: Many AI tools share data with partners, advertisers, or other services, potentially exposing your information beyond the original platform.
  • International Data Transfer: Your data may be processed in countries with different privacy laws and protections than the United States.

Privacy Concerns

  • Data Persistence: Even "deleted" conversations may remain in backups, logs, or training datasets long after you think they're gone.
  • Inference and Profiling: AI systems can infer sensitive information about you from seemingly innocuous data, creating detailed personal profiles.
  • Lack of Transparency: Many AI systems don't clearly explain how your data is used, stored, or shared, making it difficult to assess privacy risks.
  • Compliance Gaps: Some AI tools may not meet the same regulatory standards (FERPA, HIPAA) that govern university data handling.

The UM Advantage

This is why tools like Amplify GenAI are so valuable - they're hosted on UM's infrastructure with our security controls, keeping your data local and under institutional governance rather than subjecting it to external commercial terms of service.

Key Security Principles for AI Use

Protect Sensitive Data

Never input FERPA-protected student information, HIPAA-protected health data, or confidential university information into public AI tools.
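As a practical illustration only (not an official UM tool), here is a minimal Python sketch of a pre-submission screen that flags obvious identifiers before text is pasted into a public AI tool. The patterns, including the 9-digit student ID format, are assumptions made for this example; real UM data classifications cover far more than a regex can catch.

  import re

  # Illustrative patterns only - assumed formats, not UM's actual data rules.
  SENSITIVE_PATTERNS = {
      "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
      "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
      # Hypothetical 9-digit student ID format, used here for demonstration.
      "student ID": re.compile(r"\b790\d{6}\b"),
  }

  def screen_prompt(text: str) -> list[str]:
      """Return labels of sensitive patterns found in text bound for an AI tool."""
      return [label for label, pattern in SENSITIVE_PATTERNS.items()
              if pattern.search(text)]

  prompt = "Summarize the appeal from student 790123456 (jane.doe@umontana.edu)."
  findings = screen_prompt(prompt)
  if findings:
      print("Do not submit - remove:", ", ".join(findings))
  else:
      print("No obvious identifiers found; still apply UM data classification rules.")

A screen like this is a first-pass aid, not a compliance control - absence of matches never means data is safe to share.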

Use Approved Tools First

Prioritize vetted tools like Amplify GenAI that meet UM's security standards and keep data on university infrastructure.

Read Privacy Policies

Before using any AI tool, understand how it handles your data - does it train on your input, share it with third parties, or retain it after you delete it?

When in Doubt, Don't

If you're unsure whether information is appropriate to share with an AI tool, err on the side of caution and contact us for guidance.

Official UM IT Policies

These established policies provide the foundation for secure and responsible technology use at UM, including AI tools.

UM IT Policies & Standards

Comprehensive policies covering data protection, security, and acceptable use

Key Policy Areas Include:

  • Data governance and classification
  • Information security requirements
  • Acceptable use guidelines
  • Privacy protection standards

Security Incident Reporting

If you suspect a security issue related to AI tool usage - such as accidental sharing of sensitive data or suspicious activity - report it immediately.

infosec@umontana.edu

Follow UM's incident reporting procedures for all security concerns.

Remember: Security Is a Shared Responsibility

While UM provides secure tools and clear policies, individual users play a crucial role in maintaining security by making smart decisions about data sharing and tool selection.

Questions About AI Security?

Our team can help you understand how UM's security policies apply to your specific AI use case.