
Overview of AI data privacy services
The use of AI is growing dramatically, but concerns remain: 57% of global consumers view the use of AI as a significant threat to privacy, according to a 2023 study.1 For many reasons, customers are wary about how their personally identifiable information is handled.
Building trust is critical for stakeholders, who face a skeptical public when it comes to AI and data privacy.
UL Solutions’ data privacy program resides within the 11 safety principles we apply to AI safety assessment, under the pillar of ethical AI usage. Within this effort, we use a comprehensive framework to verify that appropriate security measures exist to protect data privacy, focusing on the AI algorithm’s collection, storage, protection and exposure of user data. Successful completion of an assessment can result in a UL Solutions Marketing Claim Verification certificate and UL Verified Mark.
Benefits of AI data privacy work
As AI usage proliferates, the element of trust becomes increasingly important. We are a global safety science leader, and our objective assessment can increase confidence that a brand’s marketing claim is reliable, truthful and credible.
Goals of AI data privacy verification process
A data privacy assessment verifies that personally identifiable information (PII) or confidential data used by the various AI models on the PC either never leaves the computer or is protected by appropriate security measures.
Verification processes
UL Solutions has established specific processes that evaluate whether a customer has enacted appropriate security measures to protect data privacy. Our methods include:

Data collection and consent
Validation that data collection is kept to the minimum necessary to fulfill defined use cases and that data is collected with informed consent from individuals.
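For illustration, a minimal sketch of data-minimization filtering in Python, where collection is limited to a per-use-case allowlist and refused without recorded consent (the use cases, field names and `collect` helper are hypothetical, not part of the UL Solutions assessment):

```python
# Data minimization sketch: only the fields a defined use case needs are
# kept, and collection is refused without recorded consent.
# All use cases and field names below are illustrative.

ALLOWED_FIELDS = {
    "voice_assistant": {"audio_sample", "locale"},
    "photo_search": {"image_embedding"},
}

def collect(use_case: str, payload: dict, consent_given: bool) -> dict:
    """Return only the fields the use case needs, or raise if consent is missing."""
    if not consent_given:
        raise PermissionError("informed consent not recorded for this collection")
    allowed = ALLOWED_FIELDS[use_case]
    # Drop everything not strictly required by the defined use case.
    return {k: v for k, v in payload.items() if k in allowed}

print(collect("voice_assistant",
              {"audio_sample": b"...", "locale": "en-US", "email": "a@b.example"},
              consent_given=True))
# -> {'audio_sample': b'...', 'locale': 'en-US'}  (the email field is dropped)
```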

Data anonymization and pseudonymization
Application of techniques such as anonymization and pseudonymization, as well as evaluation of their effectiveness in preventing re-identification.
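For illustration, a sketch of one common pseudonymization technique, keyed hashing with the Python standard library; unlike a plain hash, a keyed hash resists re-identification by dictionary attack as long as the key stays secret (the key handling and record fields here are illustrative):

```python
# Pseudonymization via keyed hashing (HMAC-SHA256): direct identifiers are
# replaced by stable pseudonyms that cannot be reversed without the key.

import hmac, hashlib, os

PSEUDONYM_KEY = os.urandom(32)  # in practice, managed in a key vault, not in code

def pseudonymize(identifier: str) -> str:
    """Map a direct identifier to a stable pseudonym."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"user_id": "alice@example.com", "age_band": "30-39"}
safe = {**record, "user_id": pseudonymize(record["user_id"])}
print(safe)  # the pseudonym is stable, so records can still be joined for analysis
```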

Data storage and access control
Implementation of secure data storage with encryption, use of access control mechanisms to restrict data access to authorized personnel only, and maintenance of audit logs to monitor data access and detect unauthorized attempts.
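For illustration, a minimal sketch combining these three controls, assuming the third-party `cryptography` package for authenticated encryption; the roles, in-memory store and log format are illustrative stand-ins for production infrastructure:

```python
# Encrypted storage with role-based access control and an audit log.
# Requires the third-party `cryptography` package (pip install cryptography).

import logging
from cryptography.fernet import Fernet

logging.basicConfig(filename="audit.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

AUTHORIZED_ROLES = {"privacy_officer", "data_engineer"}  # illustrative roles
key = Fernet.generate_key()   # in practice, held in a KMS/HSM, not generated here
cipher = Fernet(key)
store = {}                    # in-memory stand-in for a real data store

def write(record_id: str, plaintext: bytes) -> None:
    store[record_id] = cipher.encrypt(plaintext)  # encrypted at rest

def read(record_id: str, role: str) -> bytes:
    if role not in AUTHORIZED_ROLES:
        # Denied attempts are logged so unauthorized access can be detected.
        logging.info("DENIED read of %s by role=%s", record_id, role)
        raise PermissionError(f"role {role!r} may not read data")
    logging.info("read of %s by role=%s", record_id, role)
    return cipher.decrypt(store[record_id])

write("r1", b"PII goes here")
print(read("r1", "privacy_officer"))
```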

Data processing and usage
Definition of clear purposes for data processing and demonstration that data is used accordingly, applying the principle of least privilege and keeping a record of data processing activities.
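For illustration, a minimal sketch of purpose limitation with a record of processing activities: each call declares its purpose, which is checked against what the data subject consented to, then logged (all names are hypothetical):

```python
# Purpose-bound processing: a declared purpose is checked against the
# purposes consented to for the data, and every processing step is recorded.

from datetime import datetime, timezone

processing_register = []   # the record of data processing activities

def process(record: dict, purpose: str) -> None:
    if purpose not in record["consented_purposes"]:
        raise PermissionError(f"purpose {purpose!r} not permitted for this data")
    processing_register.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "purpose": purpose,
        "record_id": record["id"],
    })
    # ... actual processing here, touching only the fields this purpose needs ...

rec = {"id": "r1", "consented_purposes": {"analytics"}}
process(rec, "analytics")       # allowed and logged
print(processing_register)
process(rec, "ad_targeting")    # raises PermissionError: purpose not consented to
```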

Data sharing and transfer
Establishing protocols for secure data sharing, including encryption and secure transfer methods, while helping to demonstrate that third parties receiving data adhere to similar — or stricter — data privacy standards.
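For illustration, a minimal sketch of a transfer guard that releases data only over TLS to an approved list of vetted recipients, assuming the third-party `requests` package (the endpoint list and URL are hypothetical):

```python
# Secure-transfer guard: data leaves the system only over TLS, and only to
# third parties vetted for equivalent or stricter privacy standards.
# Requires the third-party `requests` package.

from urllib.parse import urlparse
import requests

APPROVED_ENDPOINTS = {"partner.example.com"}   # vetted recipients only

def share(url: str, payload: dict) -> None:
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError("data may only be shared over TLS")
    if parsed.hostname not in APPROVED_ENDPOINTS:
        raise ValueError(f"{parsed.hostname} is not an approved recipient")
    # verify=True (the default) enforces certificate validation in transit.
    requests.post(url, json=payload, timeout=10, verify=True)

# Example (would POST if the hypothetical endpoint existed):
# share("https://partner.example.com/ingest", {"pseudonym": "ab12", "score": 0.7})
```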

Model training and validation
Use of techniques such as differential privacy during model training, and validation of models with privacy-preserving methods to help show they do not memorize or expose sensitive information.
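For illustration, a minimal sketch of the Laplace mechanism, the basic building block of differential privacy: noise scaled to the query's sensitivity divided by the privacy budget epsilon masks any single individual's contribution. Production training would more likely use a DP-SGD framework; this standard-library sketch shows only the core idea, and the data and epsilon value are illustrative:

```python
# Laplace mechanism: release a count with epsilon-differential privacy.

import random

def dp_count(values: list, epsilon: float) -> float:
    """A count has sensitivity 1 (one person changes it by at most 1),
    so Laplace noise with scale 1/epsilon suffices."""
    true_count = sum(values)
    # The difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

ages_over_40 = [1, 0, 1, 1, 0, 1]            # one bit per person
print(dp_count(ages_over_40, epsilon=0.5))   # noisy, privacy-preserving count
```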

Model deployment and inference
Assessment of the risk of data exposure during the inference phase, especially when AI models are accessible via APIs, and implementation of rate limiting with close monitoring to prevent abuse of model inference for data extraction.
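For illustration, a minimal sketch of per-client rate limiting on an inference endpoint using a token bucket, one common control against model-extraction probing via APIs (the limits and the stand-in model are illustrative):

```python
# Token-bucket rate limiting per client: sustained high-volume querying,
# a hallmark of model-extraction attempts, is throttled and surfaced.

import time
from collections import defaultdict

RATE = 5     # tokens refilled per second (illustrative limit)
BURST = 10   # maximum bucket size

buckets = defaultdict(lambda: (BURST, time.monotonic()))  # (tokens, last refill)

def allow(client_id: str) -> bool:
    tokens, last = buckets[client_id]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)  # refill since last call
    if tokens < 1:
        return False            # over the limit: also worth logging and alerting
    buckets[client_id] = (tokens - 1, now)
    return True

def infer(client_id: str, features):
    if not allow(client_id):
        raise RuntimeError("rate limit exceeded; request logged for monitoring")
    return sum(features)        # stand-in for a real model call

for i in range(12):
    try:
        infer("client-a", [1, 2, 3])
    except RuntimeError as e:
        print(i, e)             # the burst is exhausted after ~10 rapid calls
```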

Incident response and data breach management
Development and maintenance of an incident response plan for potential data breaches, training of staff on how to handle breaches, and prompt notification of affected individuals and authorities in the event of a data breach.

Transparency and accountability
Assessment of whether customers are transparent with users about how their data is used and for what purposes, and whether mechanisms exist for users to access, correct or delete their data. This also includes the assignment of clear accountability within the organization for all data privacy matters.
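For illustration, a minimal sketch of data-subject rights handling, where users can access, correct or delete what is held about them and every action is attributed to a named accountable owner (the store, owner and helpers are hypothetical):

```python
# Data-subject rights: access, correction and deletion, each attributed to
# an accountable owner so responsibility is traceable.

user_data = {"u1": {"email": "a@b.example", "locale": "en-US"}}
ACCOUNTABLE_OWNER = "privacy-office@company.example"   # hypothetical owner
actions = []                                            # accountability trail

def access(user_id: str) -> dict:
    actions.append(f"access:{user_id} owner={ACCOUNTABLE_OWNER}")
    return dict(user_data[user_id])        # a copy of everything held

def correct(user_id: str, field: str, value) -> None:
    actions.append(f"correct:{user_id}.{field} owner={ACCOUNTABLE_OWNER}")
    user_data[user_id][field] = value

def delete(user_id: str) -> None:
    actions.append(f"delete:{user_id} owner={ACCOUNTABLE_OWNER}")
    user_data.pop(user_id, None)           # erasure on request

print(access("u1"))
correct("u1", "locale", "de-DE")
delete("u1")
print(actions)
```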

Continuous monitoring and improvement (ongoing)
Continuous monitoring of AI systems for potential privacy risks or breaches, adoption of the latest privacy-enhancing technologies, and examination of the culture of privacy and data protection within the organization.

Compliance and auditing (ongoing)
Regular auditing of AI systems for compliance with data privacy regulations and internal policies, including Data Protection Impact Assessments (DPIAs) where necessary, and updating of privacy practices in response to new regulations, vulnerabilities or privacy-enhancing technologies.
Explore our services for AI verification and validation
Get connected with our team
Verify whether appropriate security measures exist to protect data privacy.