Large Language Model Policy


BACKGROUND

Artificial Intelligence (AI) tools are becoming increasingly useful and accessible. In particular, generative AI tools, which produce text or images in response to queries, have become part of many professionals’ workflows. They are an accessible way to interact with large language models (LLMs), which are AI systems trained on enormous amounts of data. ChatGPT is one example of an LLM-based chatbot.

When a user enters a prompt in ChatGPT, they are effectively interacting with all the data the model was trained on. Through various machine learning techniques, ChatGPT is trained to predict the next word in a sentence, which allows its responses to sound natural and even compelling. However, ChatGPT can provide factually incorrect information, is easily manipulated, and may be further trained on the prompts users provide. For these reasons, it is critical that employees and staff understand and follow this Large Language Model Policy.

PURPOSE

Kredit values the potential for current and newly available technologies to further our company mission and improve the overall experience for clients and their customers. We also remain mindful of the limitations of AI systems. This policy is therefore in place to ensure that the use and application of AI tools, including LLMs and similar technologies, within Kredit is ethical, safe, and responsible.

SCOPE

This policy applies to all employees, contractors and staff of Kredit.

Use of LLMs is subject to Kredit’s Acceptable Usage Policy, and your use of company-issued devices is subject to relevant monitoring procedures.

GENERAL PRINCIPLES

  1. Transparency: The workings, capabilities, functionality, and limitations of LLMs must be understandable to their users and stakeholders.

  2. Accountability: For every LLM solution and function, there must be a clear line of accountability to ensure responsible decision-making for both the creation of the LLM and the output it generates.

  3. Fairness: LLM technologies and use must be implemented with efforts to minimize biases, particularly in the content generated for the end user.

  4. Privacy: The LLM must be implemented with the means to respect privacy rights and remain compliant with data privacy and protection regulations. No PII or other protected data may be stored or used in its content generation.

  5. Safety: LLM solutions and functions should be developed with controls to prevent misuse or conduct which violates this policy.

FORBIDDEN USES

  1. LLMs may not be used in ways that could undermine Kredit’s security or reputation.

  2. Employees may not sign up for any LLM service for work-related matters using personal credentials.

  3. Under no circumstances should any Kredit employee ask an LLM for, or divulge to it, any sensitive information, such as passwords or any form of Personally Identifiable Information (PII).

  4. Employees may not type out or paste into an LLM either draft or final contracts, such as employment contracts or contracts with clients or service providers.

  5. Employees may not type out or paste into an LLM documents that may contain business-sensitive data, such as draft annual reports, business cases, and accounting reports.

  6. Employees may not seek feedback on job applicants’ resumes by pasting them, in part or in full, into an LLM.

  7. Employees may not type out or paste proprietary code, documents or other information that is not authorized and/or intended for public access or use without instruction to do so by Kredit’s CEO or CTO or their authorized designee.

  8. Employees may not type out or paste content that is subject to copyright licenses that do not allow for its indiscriminate sharing.

Failure to comply with this section of this policy will result in disciplinary action and may lead to termination of employment.

DEPLOYMENT & MONITORING

  1. Each new product designed for use by Kredit, its customers, or service providers must be approved by the CTO and CEO before it is released for internal or external use.

  2. Before deployment, LLM products, solutions, and functionalities will undergo rigorous testing to ensure that the generated content and overall structure are aligned with the intended use of the LLM.

  3. Feedback loops will be implemented allowing for continuous refinement of generative models.

  4. Monthly random sampling and audits of generated content will be conducted to identify any information that falls under the Forbidden Uses section of this policy.

  5. Any instance of PII or other data that falls under the Forbidden Uses section of this policy must be reported immediately, and all instances of the PII must be deleted.

Moreover, documented remediation of the means by which the PII or other data subject to this section came to be part of the LLM must be provided to the CEO and CCO within 14 days of discovery.

CHANGE SUMMARY

Purpose: Internal Policy

Category: Information Security Policy

Policy Name: Large Language Model Policy

Event                       | Event Date | Event By                   | Date Reviewed | Reviewed By                        | Version
Creation and Implementation | 01/30/2024 | Shelly Gensemer-Cleek, CCO | 01/30/2024    | Dave Hanrahan, CEO; Kenny Lai, CTO | 1.0
Last updated                |            |                            |               |                                    |