BLOG

Enabling AI Regulation Compliance for Enterprises

Learn about new government regulations for safer, more trustworthy AI products – and how we're addressing them

As generative AI continues to make headlines, international efforts are underway to both harness the promise and address the risks of artificial intelligence. Increasingly, governments are going to focus on ensuring the safety and trust of those who integrate and use this evolving technology. The Biden administration's recent executive order on safe, secure, and trustworthy AI and the UK AI Safety Summit highlight the increasing importance of AI regulation. 

With the EU AI Act expected to come into force in 2024, it's clear that comprehensive AI legislation is on the horizon. In this article, we'll highlight how many companies, including deepset, are already working hard to ensure that their generative technology offerings comply with existing and emerging regulations – and to provide users with the highest standard of security for their generative AI technology.

Conducting safety tests in the AI era

The recent executive order requires that “developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.” The order seeks to accomplish this through the development of standards, tools, and tests to ensure the safety, security, and trustworthiness of AI systems. Similarly, the EU's forthcoming AI Act will introduce conformity assessments and quality management systems for high-risk AI systems.

Red teaming as safety testing for AI

One of the key components of ensuring compliance with AI regulations is conducting security testing, such as red teaming. Red teaming is a form of adversarial model testing that attempts to identify undesirable behavior in an AI system, using methods such as prompt injection to expose the system's latent biases.

The AIOps teams at major tech companies such as Microsoft, Google, OpenAI, and Meta are already using red teaming to evaluate and improve the security of their AI systems. In the future, red teaming will be essential for any high-risk AI system, not just the foundational models. In the near future, AI systems will have specific requirements depending on the domain in which they're deployed.

Watermarking of generated content

Thanks to large language models like GPT-4, not only is there more AI-generated content – it is also increasingly hard to distinguish from human-generated content. To mitigate the creation of fake news, deep fakes, and the like, regulators in the US and EU are planning to create mechanisms to help end users distinguish the origin of content. While there's no law yet, the recent executive order in the U.S. requires the Department of Commerce to issue guidance on tools and best practices for identifying AI-generated synthetic content.

Reporting duties 

Biden's executive order introduces new reporting requirements for organizations that develop (or demonstrate an intent to develop) foundational models. These are defined as AI models with more than 10 billion parameters that are trained on large amounts of data and demonstrate high performance on tasks that pose a "risk to national security." Organizations working on such models will have to report regularly to the federal government on their activities and plans.

The current proposal for an EU AI law foresees similar reporting requirements for "high-risk use cases," including central registration in an EU-wide database.

Transparency and data governance

Many of the current regulations are still being drafted and are therefore often vague in their exact reporting and documentation requirements. However, it is expected that the EU AI law will include several documentation requirements for AI systems that disclose the exact process that went into their creation. This will likely include the origin and lineage of data, details of model training, experiments conducted, and the creation of prompts.

While the EU AI Act is not yet an active law, organizations working on new AI use cases should be aware of it as they develop their own AI systems, and build future-proof processes that ensure the traceability and documentation of systems created today.

deepset: A Partner in AI Compliance

deepset has long been committed to helping organizations navigate the evolving AI regulatory landscape. We're SOC 2 Type 2 certified, but our commitment to ensuring our organization and those we serve meet evolving AI compliance guidelines doesn't stop there. Our LLM platform for AI teams, deepset Cloud, is built with the highest security standards in mind.

We currently support AI compliance through:

  • Unified environment: deepset Cloud provides a unified environment where all information is in one place, simplifying governance by effectively monitoring, managing and tracking all aspects of AI security.
  • User-friendly testing: Our platform enables you to set up security tests in minutes, not months, with one-click deployments and an intuitive interface – so stakeholders at different technical levels can run tests quickly and comfortably.
  • Integrated feedback collection: Our approach makes it easy to collect both input and output data from AI security tests to track whether application performance meets security standards.
  • Real-time monitoring: Continuous monitoring of your AI system in production ensures that the system remains secure after deployment.
  • Proactive measures: Our prompt injection detector and hallucination detection component are powerful tools against AI security threats.

AI compliance for the future

As AI legislation continues to evolve around the world, it's critical for organizations to stay ahead of the compliance curve. deepset Cloud already has the processes and practices in place to ensure our customers' applications both harness the benefits of AI innovation and mitigate the risks associated with it.

Read our latest blog post on deepset Cloud to learn more about how our platform reduces the overhead associated with building a production-ready AI application, while ensuring the highest level of security for our users and their customers.

Together, we will navigate a changing regulatory landscape while building a future where AI is safe, secure, and trusted.