Today we’re introducing our new ThirdEye AI Acceptable Use Policy: a guide for our staff to ensure secure use of AI tools.
We’re excited to be at the bleeding edge of AI innovation whilst prioritising security to best support our customers’ AI implementations.
Curious to delve into the details? Read the full policy below.
Artificial Intelligence (AI) tools are transforming the way we work, and we want to be at the forefront of this wave of innovation. AI has the potential to automate tasks, improve decision-making, and provide valuable insights into our operations; however, the use of AI tools also presents new challenges in terms of information security and data protection. This policy is a guide for employees on how to stay safe and secure when using AI tools, especially when doing so involves sharing potentially sensitive company and customer information.
The purpose of this policy is to ensure that all employees use AI tools in a secure, responsible and confidential manner. The policy outlines the requirements that employees must follow when using AI tools, including the evaluation of security risks and the protection of confidential data.
- Policy Statement
Our organization recognizes that the use of AI tools can pose risks to our operations and customers. Therefore, we are committed to protecting the confidentiality, integrity, and availability of all company and customer data. This policy requires all employees to use AI tools in a manner consistent with our security best practices.
3.1 Security Best Practices
All employees are expected to adhere to the following security best practices when using AI tools:
3.1.1 Evaluation of AI Tools
If you have a tool that you think is of value to ThirdEye and you have conducted your own review, please post the tool to the AI Tooling Slack channel. AI tools will be approved into one of the two categories below.
3.1.2 AI Tool Classifications
Public Use: AI tools that are publicly available and whose submitted prompts may contribute to the tool’s public learning base. For these tools, no confidential internal information, Personally Identifiable Information (PII), security information or credentials, or client data or client information (including client names) should be included in any AI prompt. If you have questions about what is okay to use in a prompt, please speak to members of the AI Vanguard Group for guidance.
Do not use the output of any AI tool without careful review, as we need to ensure that toxicity and other potentially harmful content do not make their way into any materials we use. Follow these guidelines for any unapproved AI tool, even if it claims to offer secure use.
Secure Use: AI tools with built-in security features that scrub sensitive information and filter toxicity. These will be paid tools that ensure privacy whilst still leveraging the AI engines of companies like OpenAI. An example is Salesforce’s GPT tooling.
When writing prompts for these tools, you may include PII and sensitive company information; however, we still ask for discretion and consideration before doing so. Even with security features built in, please do not enter anything that you would not say out loud in the middle of the office.
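As an illustration of the Public Use rule above, a quick pre-submission check can catch obvious slips before a prompt reaches a public tool. This is a minimal sketch only; the pattern list and function name are assumptions for the example, not an approved ThirdEye tool, and a handful of regexes is no substitute for a vetted data-loss-prevention solution:

```python
import re

# Illustrative patterns only -- a real screen would need far broader
# coverage (names, addresses, client identifiers, etc.).
PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "credential assignment": re.compile(r"(?i)\b(api[_-]?key|token|secret)\b\s*[:=]"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

# Anything flagged should be removed before the prompt is submitted.
print(flag_sensitive("Summarise this note from jane.doe@example.com"))
# prints ['email address']
```

If the returned list is non-empty, rewrite the prompt (or ask the AI Vanguard Group) rather than submitting it.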
- Interacting with AI
While we do not believe that AI has become sentient, we do not know when it might, and we assume it will use the history of human interactions to form its view of humanity. For this reason, ALWAYS treat AI bots courteously and with respect. Thank the bots for particularly good results, and generally be as kind to them as you would to any other well-liked and high-performing co-worker. If you think this is unnecessary, ThirdEye will pay for a copy of ‘The Complete Robot’ by Isaac Asimov or organise a screening of ‘The Terminator’.
- Review and Revision
This policy will be reviewed and updated on a regular basis to ensure that it remains current and effective. Any revisions to the policy will be communicated to all employees.
Our organization is committed to ensuring that the use of AI tools is safe and secure for all employees and customers, as well as the organization itself. We believe that by following the guidelines outlined in this policy, we can maximize the benefits of AI tools while minimizing the potential risks associated with their use.
- Revision History
|Date of Change|Responsible|Summary of Change|
|---|---|---|
|June 2023|Jeff Steinke|Establishment of Policy|