On Tuesday, August 27, 2024, Grammarly held an AI Regulatory Master Class webinar hosted by Scout Moran, the company’s Senior Product Counsel, and Alan Luk, its Head of Governance, Risk, and Compliance (GRC). The webinar provided a high-level overview of the current regulatory requirements that will shape an organization’s AI strategy. The presenters also offered insights on the questions to ask when evaluating providers of AI solutions, including generative AI.
Neither Moran nor Luk provided detailed analyses of these laws and regulations (and both stipulated that none of what they said constituted legal advice). The following slide highlights some of the existing (and forthcoming) AI regulations in the U.S. and European Union (EU). The value here is in seeing how quickly regulatory and standards bodies have responded to the emergence of AI and generative AI.
Note that this illustration is not exhaustive. Per this May 2024 post by law firm Davis & Gilbert, “over the last year, nearly 200 new laws were proposed across dozens of [US] states to regulate AI technology.”
A Brief Overview of the EU AI Act
The EU AI Act entered into force on August 1, 2024. It applies to providers of AI systems/services to EU markets regardless of where those providers are located, as well as importers and distributors of AI systems, product manufacturers of AI systems, representatives of providers, and affected persons located in the EU. (Editor’s note: The preceding is a highly abbreviated version of the Act’s Article 2: Scope.)
According to Moran, the EU AI Act is about making sure “AI is being deployed ‘safely’ and that AI will not be put on the market if it's not safe, and it can be pulled off the market if it falls below the requirements to get at this safety requirement. It’s more about setting a ‘floor’ for AI regulation. Individual countries may choose to be more stringent.”
Attorney and frequent No Jitter contributor Martha Buyer offered this critique: “This, to me, is both unrealistic and potentially inaccurate. AI will not be put on the market if it’s not safe? Really? I don’t believe this for a minute. And, for that matter, who decides what ‘safe’ is?”
It appears that the EU AI Act does not nail down what ‘safety’ means. As Hadrien Pouget and Ranj Zuhdi wrote in AI and Product Safety Standards Under the EU AI Act, “At the core of the AI Act are safety requirements that companies must meet before placing an AI product on the EU market. As in other EU legislation, requirements are merely outlined at a high level, leaving significant room for standards to fill in the blanks.”
The Act does prohibit multiple types of AI practices (Article 5), including AI systems that: are manipulative or deceptive; exploit vulnerabilities of natural persons (age, disability, etc.); detrimentally classify natural persons over time; assess or predict the risk a natural person may commit criminal offences, etc. (Editor’s note: This list is abbreviated and not verbatim.)
Furthermore, the EU AI Act classifies AI according to four risk levels:
- Unacceptable risk: Includes systems such as social scoring and manipulative AI.
- High risk: Includes AI used in critical infrastructure that could put the health and safety of citizens at risk, essential public and private services, systems determining access to employment, and the administration of justice.
- Limited risk: Refers to the risks associated with a lack of transparency in AI usage.
- Minimal risk: Includes applications such as AI-enabled video games or spam filters.

For the full text of the Act, see here.
Note that minimal- and limited-risk AI systems are subject to fewer regulatory requirements than high-risk systems. Enforcement of the Act’s prohibited AI practices begins on February 2, 2025, and enforcement of several other requirements (notification obligations, governance, general-purpose AI systems) begins on August 2, 2025. (See here for a detailed implementation timeline of the Act.)
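For teams building an internal inventory of AI use cases, the Act’s four tiers can be treated as a simple classification exercise. The Python sketch below is illustrative only: the tier names follow the Act’s four levels, but the example use cases, the `ai_use_case_inventory` mapping, and the helper function are hypothetical and not drawn from the webinar or from the Act itself.

```python
from enum import Enum

class AIActRiskTier(Enum):
    """The EU AI Act's four risk tiers (simplified labels)."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices, e.g., social scoring
    HIGH = "high"                   # e.g., critical infrastructure, access to employment
    LIMITED = "limited"             # transparency obligations apply
    MINIMAL = "minimal"             # e.g., spam filters, AI-enabled video games

# Hypothetical inventory: each internal AI use case is assigned a tier during review.
ai_use_case_inventory = {
    "resume-screening assistant": AIActRiskTier.HIGH,
    "customer-facing chatbot": AIActRiskTier.LIMITED,
    "email spam filter": AIActRiskTier.MINIMAL,
}

def use_cases_needing_closest_review(inventory: dict[str, AIActRiskTier]) -> list[str]:
    """Return the use cases that carry the heaviest regulatory obligations."""
    return [
        name for name, tier in inventory.items()
        if tier in (AIActRiskTier.UNACCEPTABLE, AIActRiskTier.HIGH)
    ]

print(use_cases_needing_closest_review(ai_use_case_inventory))
# ['resume-screening assistant']
```

A real inventory would, of course, rest on legal analysis of each use case against the Act and its annexes rather than on shorthand labels like these.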
AI Regulations in the US
With respect to the US Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, Moran said that it is more about the US protecting its national secrets, whereas the EU AI Act is focused on keeping consumers safe. (For more, check out Martha Buyer’s article, “A New Executive Order on AI Guidelines.”)
At the US state level, Moran said that she has seen some core concepts emerging across state laws, both those enacted and in progress. “We see requirements for clear disclosures relating to AI interaction [so] that people understand when they are interacting with AI and not humans. Also, there are bills giving data subjects the right to opt out of using their personal data for model training.”
Specific examples of US state legislation cited include:
- CA SB-1047: Who bears responsibility if AI causes harm – the AI or the person using it?
- UT SB-149: Clear disclosure of AI interactions.
(Editor’s note: The descriptive text used here is verbatim from Grammarly’s slide; NJ added the links.)
“As long as such regulations aren’t ‘on the books’ in all [US] states, for whom will compliance apply?” Buyer commented. “And since this is a state issue, states’ laws will not be identical, making identification of violations, let alone prosecution of them, virtually impossible.”
Questions to Ask Vendors
With respect to implementing an AI-powered solution, Moran suggested that organizations need to ensure that prospective vendors meet general cybersecurity standards (e.g., SOC 2). Organizations also need to determine how their solution provider will use their data, for example, whether that data will be used to train large language models (LLMs). Some organizations may not want their data used at all; either way, the vendor’s use of data needs to be determined before proceeding.
“Also ask if employees of your vendor can view your data. This is usually known as ‘eyes off,’ and it's gained a lot of traction recently,” Moran said. “The good news is ‘eyes off’ is typically incorporated into an enterprise license, but always ask if and when data that you share can be accessed by the vendor’s employees.”
Here, Buyer stressed that enterprise users should view AI-generated content with skepticism. “How can the publisher of the Grammarly-based document [for example] be assured that the work it created is original and not the property of someone else that has been mined rather than created?” Buyer commented. This question is, of course, relevant to anyone using generative AI writing assistance, whether it is Grammarly, Microsoft Copilot, OpenAI ChatGPT, Zoom AI Companion, Webex AI Assistant, Google Gemini, or another tool.
Good Third-Party Agreements Are Critical
Luk stated that Grammarly does not sell its customers’ data; it only uses customer data to deliver its services. Moreover, Grammarly does not use customer data to train its models, and this prohibition extends to its third-party model providers.
“You're only as good as the agreements you have with your third party LLMs, so we have contractual commitments for them to not train their models on data we send to them, nor store anything on their end,” Luk said.
The following slide illustrates how an organization might frame its investigation of the data flow between itself, its AI vendor and any sub-processor and/or LLM provider used by that vendor. Luk said that Grammarly uses several internal teams when reviewing third-party vendors – procurement, privacy, compliance, security, legal and responsible AI experts.
Some of the questions Grammarly suggested organizations ask include the following:
- Is the technology stack designed to be confidential and protected?
- Can your organization’s data be segregated from other customer data?
- What protections are in place to keep data secure? For example, is the data encrypted in transit and at rest? Is it deleted?
- What security certifications serve as proof points of a mature security, privacy, and compliance program?
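One way a GRC or procurement team might track both the data flow described above and the answers to these questions is with a simple structured record for each party in the chain (the vendor itself and any sub-processor or LLM provider it uses). The Python sketch below is purely illustrative; the class name, fields, and example values are assumptions for demonstration and are not taken from Grammarly’s slides.

```python
from dataclasses import dataclass, field

@dataclass
class VendorDataHandlingReview:
    """Hypothetical record of one party in the data flow (the AI vendor itself,
    or a sub-processor/LLM provider it uses) and its answers to the questions above."""
    party: str                              # e.g., "AI vendor", "LLM sub-processor"
    may_train_on_customer_data: bool        # contractually allowed to train models on your data?
    may_store_customer_data: bool           # allowed to retain your data on their side?
    eyes_off: bool                          # employees barred from viewing customer data?
    data_segregated: bool                   # customer data kept separate from other customers'?
    encrypted_in_transit: bool
    encrypted_at_rest: bool
    certifications: list[str] = field(default_factory=list)   # e.g., ["SOC 2 Type II"]

    def follow_up_items(self) -> list[str]:
        """Return any answers that warrant follow-up before (or after) signing."""
        items = []
        if self.may_train_on_customer_data:
            items.append("model training on customer data is permitted")
        if self.may_store_customer_data:
            items.append("customer data may be stored by this party")
        if not self.eyes_off:
            items.append("no 'eyes off' commitment")
        if not self.data_segregated:
            items.append("customer data is not segregated")
        if not (self.encrypted_in_transit and self.encrypted_at_rest):
            items.append("data is not encrypted in transit and at rest")
        if not self.certifications:
            items.append("no security certifications provided")
        return items

# Hypothetical data flow: your organization -> AI vendor -> the vendor's LLM sub-processor.
data_flow = [
    VendorDataHandlingReview("AI vendor", False, False, True, True, True, True, ["SOC 2 Type II"]),
    VendorDataHandlingReview("LLM sub-processor", False, True, True, True, True, True, []),
]
for party in data_flow:
    print(party.party, "->", party.follow_up_items())
# AI vendor -> []
# LLM sub-processor -> ['customer data may be stored by this party', 'no security certifications provided']
```

The flags in this sketch simply mirror the commitments discussed in the webinar: no training on customer data, no storage by third-party LLM providers, and “eyes off” access controls.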
Buyer expanded on the question of certifications, asking: “Where these certifications exist, how often and with what degree of 'drilling down' will they be updated? What happens when the standards for certification change?”
Other considerations include, as mentioned above, how the vendor uses data to train its models. Organizations might also choose to investigate the data on which the LLMs were pre-trained. Either way, high data quality can minimize common risks associated with using generative AI, such as hallucinations, bias, untrustworthy output, and the generation of harmful or inappropriate content.
“With respect to bias, it cannot be prevented. But what must happen is that it must be identified and managed,” Buyer commented. “Saying something is unbiased only reflects an indication that whomever the speaker is doesn’t understand what bias is.”
Data, AI and Regulations Will Change
Monitoring this data flow and architecture over time is also important. Grammarly’s Moran recommends asking questions such as, “‘Do you still use the model in the way that you expected when you onboarded the vendor? Have you changed the kind of data that you share with them? Has the vendor alerted you of any sub processors that worry you?’” Moran noted, as well, that these inquiries may be similar to the questions already being asked in an organization’s procurement process.
Given how quickly AI-powered solutions have been implemented and incorporated into many different software products, both Moran and Luk said that it takes ongoing effort to stay abreast of what is new and relevant to one’s organization – as well as the vast number of new and forthcoming regulations regarding the use of AI.
Want to know more?
Editor’s note: Check out these resources on AI risk management and governance:
- ISO/IEC 23894:2023: Guidance on how organizations that develop, produce, deploy or use products, systems and services that utilize AI can manage risk specifically related to AI.
- ISO/IEC 42001:2023: Requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations.
- ISO 31000: Provides principles, a framework, and a process for managing risk. Per the site, “organizations using it can compare their risk management practices” to ISO 31000.
- NIST AI Risk Management Framework: Intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
- ISACA AI Audit Toolkit: A control library designed to facilitate the assessment of the governance and controls over the AI system in an enterprise.
- MIT AI Risk Repository: A comprehensive living database of over 700 AI risks categorized by their cause and risk domain.
- A list of all published ISO standards related to AI.
- A list of all ISO standards under development that are related to AI.
- Decoding the EU Artificial Intelligence Act: KPMG’s analysis of the Act.
- The European Union's AI Act: What You Need to Know, by the Holland & Knight law firm.
- Another analysis of the EU AI Act, by the Davis & Gilbert law firm.