
How Will AI Be Regulated? What Brands Need to Know

Lawmakers around the world are scrambling to regulate advanced artificial intelligence (AI) technologies. The European Union (EU) recently passed the Artificial Intelligence Act, while President Biden has signed several executive orders on advanced AI. Canada’s Bill C-27, not yet passed, attempts to regulate AI but has been heavily criticized as insufficient. As retail and CPG brands explore how to integrate these emerging technologies into their businesses, keeping an eye on the regulations they can expect is key.

DISCLAIMER: This blog is not legal advice. Legislation around the world is evolving in real-time. This blog intends to highlight key focus areas that signal potential or imminent legislation. Any use of advanced AI in your business should be vetted by your legal team.

LEADING THE WAY – WHAT WE CAN LEARN FROM THE EU AND THE AI ACT

The EU has historically been the most proactive body when it comes to regulating technology. It recently passed the AI Act, a thorough and prescriptive law, making it the first legislative body to enact comprehensive regulation of advanced AI.

RISK-BASED APPROACH

The AI Act follows a risk-based approach, meaning it categorizes uses of advanced AI according to their potential risks to human beings. It outlines “high-risk” uses, many of which focus on security, law enforcement, and immigration. Any use of a high-risk system is subject to detailed regulatory compliance, to the point that, for some organizations, the cost of compliance may outweigh the benefits. Other jurisdictions will likely adopt risk-based approaches to AI regulation as well.

WHAT BUSINESS USE CASES FOR ADVANCED AI FALL INTO THE HIGH-RISK CATEGORY?

The high-risk system designations for retailers and brands to be aware of include:

  • Biometric real-time and post-monitoring, including facial recognition technology (FRT), monitoring of sentiment (emotions), and categorization of people using biometric data
  • AI-generated content and interactive AI intended to “manipulate” or influence behavior, especially when interacting with vulnerable groups such as children or people with disabilities
  • Training, recruitment, and employee evaluation tools
  • Collection and usage of consumer data, including personally identifiable data and behavioral data, through advanced AI systems

Key concerns regarding these AI systems include:

  1. Data privacy: The EU considers biometric monitoring, sentiment recognition, and FRT to be extreme invasions of personal privacy. In addition, the use of personally identifiable data and behavioral data, though already regulated in the EU, opens new potential for abuse when combined with advanced AI.
  2. Opacity: Systems that operate without explicit consumer awareness and content created to deceive both fall into this category.
  3. Distortion of behavior: Systems that manipulate people into behaving in ways that may be harmful to themselves or others, especially high-risk groups, and especially where there is a lack of transparency about the system.
  4. Violation of human rights: There is a significant concern that AI systems increase discrimination. “Scoring” of people based on personally identifiable information (PII) is in the high-risk category, along with training, education, recruitment, and employee evaluations, due to bias in AI systems.

Although not labeled “high-risk,” machine learning systems are also subject to specific reporting requirements and regulations. ChatGPT and other generative AI systems will be required to document the copyrighted materials used to train them, opening the door for content creators to seek compensation.

CAN BUSINESSES IN THE EU STILL USE THESE TOOLS?

If businesses follow the compliance measures outlined in the act, they will still be able to use most of these tools. The question is, will it be worth it? The compliance requirements include the development and maintenance of risk management systems, third-party assessments, extensive reporting, complex transparency obligations, and specific mandates for human oversight… and this list is not comprehensive. Tech companies may develop third-party compliance services, but businesses can still be on the hook if something goes wrong. The cost of compliance may negate the business case in many circumstances.

THE U.S.

President Biden has taken several executive actions on advanced AI. The White House’s Blueprint for an AI Bill of Rights deals with human rights and discrimination, while the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence is broader.

Key areas of interest for brands include the following:

  1. Content authentication and transparency: For example, potentially requiring companies to watermark content that has been created through generative AI.
  2. Protection for workers: For example, limits on employer surveillance, risk assessments of AI’s impact on the job market, and training for future jobs that will use AI.
  3. Protection against bias: Considerations for the use of biometric monitoring, the collection and use of personal data, hiring, and training.
  4. Data privacy: This may signal the coming of legislation similar to GDPR or CASL/PIPEDA.

While none of these topics are yet regulated at the federal level, they give companies a good idea of what to expect.

COURT RULINGS AND JOB ACTIONS

In addition, courts are applying existing legislation to advanced AI. Some examples include:

  • In Canada, a major big box retailer was forced to remove FRT and destroy the data it collected after a court ruled it had violated existing data collection laws.
  • In the UK, Germany, Australia, and the U.S., courts have ruled against patents and copyrights for AI-generated materials.
  • Clearview AI, the facial recognition tech company, has been censured by some nations, with some requiring that the company delete all data collected about their citizens.

Job action is also creating precedents for the commercial use of AI. For example, the SAG-AFTRA strike in the U.S. was the longest in the union’s history, with generative AI the most contentious issue. The outcome provides clear guidance on copyright, consent, and compensation for AI-generated content, establishing a strong precedent for other unions to follow.

FACIAL RECOGNITION AND LAW ENFORCEMENT/SECURITY

The use of FRT and other biometric monitoring technologies is a top concern for all legislative bodies and will almost certainly be strictly regulated. Can private companies use facial recognition or biometric monitoring to prevent theft?

  • In the EU, it appears they cannot. FRT and biometric technologies that discern people’s emotions and collect personally identifiable data will be prohibited for both public authorities and private entities in “public access spaces,” which include retail stores.
  • In the U.S., various states and cities have passed legislation regulating the use of FRT and other biometric monitoring technologies. Baltimore, Portland, New York City, Texas, California, and Illinois are either considering restrictions on these technologies or already have regulations in place for private-sector use.

In Canada, companies must obtain explicit consent before using these technologies, and posting a sign at the front entrance is insufficient. Retailers not only need explicit opt-in but must also provide consumers with detailed information about how their data will be used and managed.

THE TAKEAWAY

The use of advanced AI promises unprecedented convenience, personalization, and innovation. Weighing those benefits against the potential for discrimination, the need for data privacy and transparency, and workers’ rights will be critical as businesses consider how to use these technologies.

Here is what brands should consider in terms of potential or imminent legislation:

  1. The regulation of FRT and other biometric monitoring in retail environments and other public spaces will become widespread. Where it is not banned outright, brands will have to take significant measures to comply with the rules that permit its use. Investment in these technologies should therefore be weighed carefully against the possibility that they will be banned or become costly to maintain. Alternatives to consider include traditional security monitoring, hybrid human-tech self-checkouts, or (where permitted) getting consumers to opt in through loyalty apps.
  2. Segmenting people based on biometric data or personally identifiable data will become regulated, though regulations are likely to vary greatly. This means the development of customer personas and targeting, especially where there are vulnerable populations or where segmenting can cause undue harm to customers, will have to be conducted with care. In some cases, social scoring will be banned, while it may be permitted in certain contexts or with specific limitations. Alternatives to consider include segmentation studies using data from opted-in consumers rather than relying on passively collected data.
  3. Generative AI will face various regulations. These could include requiring watermarks or explicit notices informing users that they are viewing AI-generated content or interacting with an AI system. Additionally, AI-generated content and “inventions” will face copyright and intellectual property regulations, which means visual design elements created by generative AI, such as logos, could be difficult for companies to own. Alternatives to consider include using generative AI programs that only use licensed materials, or using generative AI only for brainstorming rather than for final content.
  4. Behavioral manipulation through advanced AI systems will be regulated. Manipulation may be difficult to quantify, so regulation will most likely be linked to outcomes; the EU’s AI Act, for example, includes specific language around the potential harm such systems could cause. In addition, using behavioral strategies to target minors will likely be banned outright. Focusing only on behavioral tools that provide users with direct, objective benefits should mitigate potential negative outcomes for brands.
  5. HR teams will likely face some of the strictest limitations and compliance requirements for the use of AI in recruitment, training, and employee monitoring/evaluations, due to the risk of bias. These technologies may have to evolve significantly before they can be used with confidence.
  6. Workers’ rights will be considered in legislation, but how that unfolds will vary greatly. Transitioning workers to jobs that use AI, employee surveillance, and the use of employee data are the subjects most likely to come under regulation. Monitoring job action in the countries where you conduct business will be key, as regulation is likely to stem from union agreements that become standards for other industries and jurisdictions.

To protect data for future use, companies should proactively adhere to the most stringent data collection practices, such as explicit opt-in. Data found to be improperly collected, or collected in violation of the spirit of existing law, could be subject to required deletion, especially where more invasive tools are used or where companies have been opaque or deceptive. Companies should anticipate the use of advanced AI tools and start developing the relevant data sets now. An ethical, transparent approach is not only good for consumers; it also ensures brands can move quickly when the right tools arrive.

BUILDING TRUST IN SYSTEMS SHOULD BE PRIORITIZED

Today, retailers and brands are experimenting freely with technologies that will soon fall under regulation. However, rather than waiting for legislation, brands should consider broader human values and needs as a guide. Advanced AI technologies have such enormous potential for both good and bad that creators of these technologies are speaking up about the need for regulation. Private sector use will have a strong influence on how people feel about advanced AI and could impact trust, innovation, and adoption.

Building trust in advanced AI systems is critical if human beings are to reap the significant benefits these technologies promise. If companies use these tools in ways that violate consumer trust, they will increase the likelihood that consumers reject AI, making it harder for innovation to flourish. By embracing the human-centric spirit of the G7’s guiding principles for AI usage, companies will help support the innovation and adoption of tools that have the potential to benefit humankind.

“While harnessing the opportunities of innovation, organizations should respect the rule of law, human rights, due process, diversity, fairness and non-discrimination, democracy, and human-centricity, in the design, development and deployment of advanced AI systems.” – G7 International Guiding Principles on Artificial Intelligence