
Navigating AI tools and data protection: A guide for compliant company practices 

By Sebastian Schneider, Francisco Arga e Lima

Last Updated 29/08/2023

AI tools are here to stay, which is why it is worth exploring how you can use them, specifically Large Language Models (“LLMs”) such as ChatGPT, in a manner that complies with data protection laws such as the EU’s General Data Protection Regulation (“GDPR”) and the Swiss Federal Act on Data Protection (“FADP”). This blog post provides practical insights on leveraging AI while safeguarding users’ and your company’s privacy, both when using AI tools internally and when incorporating them into your company’s website or app.

Using AI tools internally 

Firstly, confidential and sensitive information should be handled with caution when using AI tools within your internal operations. Data protection principles like lawfulness, fairness and transparency, purpose limitation, and confidentiality also apply to personal data that you process with (external) AI tools. As such, it is advisable for companies to: 

  • Implement appropriate safeguards to protect personal data from unauthorized access or disclosure. This includes, for example, internal access restrictions based on a need-to-know principle as well as technical security measures. Check whether contracts (such as Data Processing Agreements) are in place with the AI tool provider that ensure compliant processing.
  • In particular, sensitive data such as financial records, trade secrets, and medical data should be carefully evaluated and anonymized before being shared with AI tools (see the sketch below).
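
As a loose illustration of such a safeguard, the sketch below pseudonymizes obvious identifiers locally before a prompt ever leaves your systems. This is a minimal sketch, assuming regex-based detection of e-mail addresses and phone numbers; the patterns, the `pseudonymize` helper, and the placeholder format are illustrative assumptions, and a production system should use a dedicated PII-detection tool.

```python
import re

# Illustrative patterns only; a production system should use a dedicated
# PII-detection tool rather than hand-written regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s\-()]{7,}\d"),
}

def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
    """Replace detected identifiers with placeholders and return a local
    mapping, so the original values never leave your own systems."""
    mapping: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        # dict.fromkeys() de-duplicates matches while preserving order
        for i, value in enumerate(dict.fromkeys(pattern.findall(text))):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = value
            text = text.replace(value, placeholder)
    return text, mapping

prompt = "Summarize the complaint from jane.doe@example.com (tel. +41 79 123 45 67)."
safe_prompt, mapping = pseudonymize(prompt)
print(safe_prompt)  # identifiers are replaced before the prompt is sent to the AI tool
print(mapping)      # kept internally to re-identify the model's output if needed
```

The mapping stays on your side, so the AI provider only ever sees placeholders; for especially sensitive data, full anonymization may still be the safer choice.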

Similarly, companies should carefully consider the implications of allowing AI service providers, such as OpenAI for ChatGPT, to use their content for model training. Depending on the nature of the content and potential privacy concerns, you may choose to opt out of content usage to protect sensitive information and maintain data privacy compliance. It is essential to review and understand the terms and conditions provided by AI service providers regarding content usage to ensure alignment with data protection regulations and the company’s own privacy policies.  

Secondly, bear in mind that the output generated by LLMs is not always accurate and is susceptible to errors. For that reason, before relying on the results, companies should establish validation mechanisms to ensure that the output is correct and, if not, to amend it appropriately. This may involve:

  • Cross-referencing AI-generated information with trusted sources; and 
  • Subjecting the output to human review to verify its accuracy (see the sketch below).
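
As a purely hypothetical illustration of such a validation mechanism, the sketch below gates AI-generated text behind both checks before it may be used; the `Draft` structure and its field names are our own assumptions, not a standard API.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """A piece of AI-generated text together with its validation status."""
    ai_output: str
    source_checked: bool = False  # cross-referenced against a trusted source?
    human_approved: bool = False  # explicitly signed off by a reviewer?

def may_be_used(draft: Draft) -> bool:
    # Release AI-generated content only once both validation steps have passed.
    return draft.source_checked and draft.human_approved

draft = Draft(ai_output="Customer X's contract renews on 1 March 2024.")
assert not may_be_used(draft)  # blocked until validated
draft.source_checked = True    # e.g. verified against the CRM record
draft.human_approved = True    # a reviewer confirms (or corrects) the text
assert may_be_used(draft)
```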

Verifying the truthfulness of AI-generated output promotes reliable decision-making and prevents potential errors or misinformation. It is also necessary for complying with your data protection obligation to keep personal data accurate and up to date: data subjects have the right to obtain the rectification of inaccurate personal data concerning them.

Lastly, proper training of users like employees is essential to promote responsible and effective usage. Companies should: 

  • Provide training programs that educate individuals on the capabilities and limitations of the tools used; 
  • Foster a culture of responsible AI use, to mitigate risks associated with misuse or misinterpretation of AI-generated information. 

Educating users on appropriate AI tool usage enhances their understanding and promotes responsible decision-making. 

Using AI tools externally

Integrating AI tools like ChatGPT or chatbots into websites and mobile apps can enhance user experiences and provide useful functionalities. 

When integrating these AI tools into your services, you need to be aware of potential third-party data processing and establish a clear distribution of the roles and responsibilities involved.

For example, when using AI tools from a provider like OpenAI, it is crucial to review and understand the data processing activities carried out by the provider and establish a clear controller-to-controller or controller-to-processor relationship, as applicable. This means that companies should:

  • Review the data processing practices of AI tool providers and choose the ones that are the most privacy-friendly; 
  • Enter into data privacy agreements with AI tool providers and ensure compliance with relevant data protection regulations; 
  • If there are data transfers to third countries, notably the USA, ensure that you have appropriate safeguards in place. If the EU Commission or the Swiss Federal Council has not adopted an adequacy decision regarding those countries, make sure to include Standard Contractual Clauses in your Data Processing Agreement and to adopt any other measures deemed appropriate;
  • Make a clear distribution of roles between you and the AI service provider, so that it is clear who the data controller(s) and the data processor (if any) are;
  • Apply additional technical and organizational measures like pseudonymization or anonymization and internal guidelines where necessary. 

In this context, companies should also establish appropriate data retention policies for AI-powered chatbot interactions. The retention period must be based on the purposes of the data processing, and data must be securely deleted or anonymized once it is no longer necessary.
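
As a minimal sketch of such a retention rule (assuming, purely for illustration, a 90-day window and an in-memory list of transcripts), expired chatbot conversations could be purged like this:

```python
from datetime import datetime, timedelta, timezone

# The 90-day window is an illustrative assumption, not a recommendation;
# the appropriate period depends on the purposes of your processing.
RETENTION = timedelta(days=90)

def purge_expired(conversations: list[dict]) -> list[dict]:
    """Keep only chatbot transcripts still within the retention window."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [c for c in conversations if c["created_at"] >= cutoff]

now = datetime.now(timezone.utc)
conversations = [
    {"id": 1, "created_at": now - timedelta(days=200)},  # past retention: dropped
    {"id": 2, "created_at": now - timedelta(days=5)},    # still needed: kept
]
print([c["id"] for c in purge_expired(conversations)])   # -> [2]
```

In practice, the same rule would run as a scheduled job against your actual data store, and anonymization can take the place of deletion where, for example, aggregate statistics are still needed.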

Furthermore, transparency is key for companies utilizing AI tools externally. For that reason, when deploying these tools, you need to:

  • Clearly inform users and stakeholders about the usage of AI tools and their purpose, including the implications for the data processing and their privacy, particularly regarding third-party involvement, third-country transfers, and data security;
  • Ensure that privacy policies are easily accessible, written in plain language, and provide clear instructions for users to exercise their rights; 
  • Inform data subjects and grant them additional rights if you use an AI tool for automated decisions that significantly affect them, for example in your hiring process or when granting access to your services based solely on an AI decision.

Conclusion

You can harness the power of AI tools and LLMs, such as ChatGPT, while remaining compliant with data protection laws like the GDPR and the FADP.

By following privacy best practices, you can strike a balance between AI innovation and safeguarding privacy rights. With that in mind, companies looking to integrate AI tools into their products and processes should consider the following practices:

  • Review and update internal policies and procedures: Ensure that they align with data protection regulations and address the specific considerations of using AI tools like ChatGPT. Include guidelines for using confidential/sensitive information, verifying output accuracy, and training employees on appropriate AI tool usage. 
  • Implement robust data protection measures: Employ data minimization techniques, such as collecting only necessary data, pseudonymization, and anonymization, to reduce privacy risks. Establish clear data retention policies and securely delete or anonymize data when no longer required. 
  • Communicate transparently with users and stakeholders: Update privacy policies to clearly state the usage of AI tools and any third-party involvement. Ensure that privacy policies are easily accessible, written in plain language, and provide instructions for users to exercise their rights.
  • Stay updated on evolving regulations and best practices: Continuously monitor updates in data protection laws and other relevant regulations, guidelines, and industry best practices related to AI tool usage. 

For more information, do not hesitate to speak to one of our privacy experts!
