Blog

Proposed Principles for Artificial Intelligence Published by the White House

On January 7, 2020, the White House published a draft memorandum outlining proposed Guidance for Regulation of Artificial Intelligence Applications (“Memorandum”) for agencies to follow when taking regulatory and non-regulatory actions affecting artificial intelligence. The proposed document addresses an objective identified in Executive Order 13859 on Maintaining American Leadership in Artificial Intelligence (“Executive Order 13859”), published by the White House in February 2019.

The Memorandum sets out policy considerations that should guide oversight of artificial intelligence (AI) applications developed and deployed outside the Federal government. It is intended to inform the development of regulatory and non-regulatory approaches regarding technologies and industrial sectors that are empowered or enabled by artificial intelligence, and to consider ways to reduce barriers to the development and adoption of AI technologies.

Principles for the Stewardship of AI Applications

The Memorandum sets forth ten proposed principles:

  • Public trust in AI
  • Public participation in all stages of the rulemaking process
  • Scientific integrity and information quality
  • Consistent application of risk assessment and management
  • Maximizing benefits and evaluating the risks and costs of not implementing
  • Flexibility to adapt to rapid changes
  • Fairness and non-discrimination in outcomes
  • Disclosure and transparency to ensure public trust
  • Safety and security
  • Interagency cooperation

Details on each of these principles are provided below.

1. Public Trust in AI. 

Government regulatory and non-regulatory approaches to AI should promote reliable, robust and trustworthy AI applications that contribute to public trust in AI.

2. Public Participation. 

Agencies should provide opportunities for the public to provide information and participate in all stages of the rulemaking process. To the extent practicable, agencies should inform the public and promote awareness and widespread availability of standards, as well as the creation of other informative documents.

3. Scientific Integrity and Information Quality. 

Agencies should hold information that is likely to have a substantial influence on important public policy or private sector decisions governing the use of AI to high standards of quality, transparency, and compliance. They should develop regulatory approaches to AI in a manner that informs policy decisions and fosters public trust in AI. Suggested best practices include: (a) transparently articulating strengths, weaknesses, and intended optimizations or outcomes; (b) bias mitigation; and (c) appropriate uses of the results of AI applications.

4. Risk Assessment and Management. 

The fourth principle cautions against an unduly conservative approach to risk management. It recommends the use of a risk-based approach to determine which risks are acceptable and which risks present the possibility of unacceptable harm, or harm whose expected costs are greater than expected benefits. It also recommends that agencies be transparent about their evaluation of risks.

5. Benefits and Costs.

The fifth principle provides that agencies should assess the full societal costs, benefits, and distributional effects before considering regulations related to the development and deployment of an AI application. Agencies should also consider critical dependencies when evaluating AI costs and benefits, because data quality, changes in human processes, and other technological factors associated with AI implementation may alter the nature and magnitude of risks.

6. Flexibility. 

When developing regulatory and non-regulatory approaches, agencies should pursue performance-based and flexible approaches that can adapt to rapid changes and updates to AI applications. Agencies should also keep in mind international uses of AI.

7. Fairness and Non-Discrimination. 

Agencies should consider whether AI applications produce discriminatory outcomes as compared to existing processes, recognizing that AI has the potential to reduce present-day discrimination caused by human subjectivity.

8. Disclosure and Transparency. 

The eighth principle notes that transparency and disclosure can increase public trust and confidence in AI applications. Such disclosures may include identifying when AI is in use, for instance, when appropriate to address questions about how an application affects human end users. Further, agencies should carefully consider the sufficiency of existing or evolving legal, policy, and regulatory environments before contemplating additional measures for disclosure and transparency.

9. Safety and Security. 

Agencies are encouraged to promote the development of AI systems that are safe, secure, and operate as intended, and to encourage the consideration of safety and security issues throughout the AI design, development, deployment, and operation process. Particular attention should be paid to the controls in place to ensure the confidentiality, integrity, and availability of the information processed, stored, and transmitted by AI systems. Further, agencies should give additional consideration to methods for guaranteeing systemic resilience and for preventing bad actors from exploiting AI system weaknesses, including cybersecurity risks posed by AI operation and adversarial use of AI against a regulated entity’s AI technology.

10. Interagency Cooperation. 

Agencies should coordinate with each other to ensure consistency and predictability of AI-related policies that advance innovation and growth in AI, while protecting privacy and civil liberties and allowing for sector- and application-specific approaches when appropriate.

Non-Regulatory Approaches to AI

The Memorandum recommends that an agency consider taking no action, or adopting non-regulatory approaches, when it determines, after evaluating a particular AI application, that existing regulations are sufficient or that the benefits of a new regulation do not justify its costs. Examples of such non-regulatory approaches include: (a) sector-specific policy guidance or frameworks; (b) pilot programs and experiments; and (c) the development of voluntary consensus standards.

Reducing Barriers to the Development and Use of AI

The Memorandum points out that Executive Order 13859 on Maintaining American Leadership in Artificial Intelligence instructs OMB to identify means to reduce barriers to the use of AI technologies in order to promote their innovative application while protecting civil liberties, privacy, American values, and United States economic and national security. The Memorandum provides examples of actions that agencies can take, outside the rulemaking process, to create an environment that facilitates the use and acceptance of AI. One such example is agency participation in the development and use of voluntary consensus standards and conformity assessment activities.

Next Steps

The Memorandum points out that Executive Order 13859 requires that implementing agencies review their authorities relevant to AI applications and submit plans to OMB on achieving the goals outlined in the Memorandum within 180 days of the issuance of the final version of the Memorandum. In this respect, each agency plan will have to:

  • Identify any statutory authorities specifically governing agency regulation of AI applications; 
  • Identify collections of AI-related information from regulated entities; 
  • Describe any statutory restrictions on the collection or sharing of information (such as confidential business information, personally identifiable information, protected health information, law enforcement information, and classified or other national security information); 
  • Report on the outcomes of stakeholder engagements that identify existing regulatory barriers to AI applications and high-priority AI applications; and
  • List and describe any planned or considered regulatory actions on AI. 

Conclusion

This draft guidance defines a concrete structure for outlining regulatory and non-regulatory approaches to AI. Businesses should evaluate the extent to which their own AI strategies address the ten principles. 

In addition, since the development of AI strategies is likely to have global consequences, they should also take into account similar initiatives that have been developed elsewhere around the world, such as by the OECD (with the “OECD Recommendation on Artificial Intelligence”), the European Commission (through its “Ethics Guidelines for Trustworthy Artificial Intelligence”) or at the country level, for example in France (with the “Algorithm and Artificial Intelligence: CNIL Report on Ethics Issues”).

Failure to Meet Data Retention and Data Minimization Obligations in Germany Results in a EUR 14.5 Million Fine

Francoise Gilbert

The abundance of storage space and the increased pressure to keep interacting with current or former customers prompt businesses to collect large amounts of data and to retain as much of this data as possible, often well beyond its actual useful period. Too often, businesses may not spend the time and resources necessary to periodically audit their practices and evaluate the nature of the data collected or to be collected, how the data is used, or why it is needed in view of their then-current needs. And they may neglect to purge their databases and securely dispose of this data.

Legal Barriers for Drones

Dr. Ursula Widmer

The use of drones for various purposes, such as image recording, surveys, scientific studies, surveillance, or transport, is spreading rapidly. However, certain legal barriers must be observed for reasons of security and the protection of privacy and personality rights. The Federal Office for Civil Aviation (FOCA) recently adopted stricter regulations for the use of drones and model aircraft in order to take better account of the security risks.

Are cookies currently regulated in South Africa?

Olivia Smith & John Giles

What is the cookie law in South Africa? Many people ask because the law relating to cookies is such a big issue in many other countries. Do you need to get a user’s (aka data subject’s) consent before using cookies? Are there any specific regulations?

What are cookies and why are they used?

Cookies are small text files that websites transfer to your browser, which stores them on your computer’s hard drive. They record information about your activity on a website. Companies worldwide use cookies to monitor customer behavior and to improve interactivity with a website.

You will notice that when you search for a specific product, ads relating to that product appear on other sites you visit. When you log into a website that uses cookies and later revisit it, the cookies allow the website to ‘remember’ you.
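To make that mechanism concrete, here is a minimal sketch, in TypeScript for Node.js, of the round trip described above: the server asks the browser to store a small name-value pair via a Set-Cookie header, and the browser automatically sends it back on later visits, which is how a site can “remember” a returning user. The cookie name session_id and the server logic are illustrative assumptions, not details drawn from this post or from any particular website.

```typescript
// Minimal sketch of the cookie round trip described above (hypothetical names).
import * as http from "http";
import { randomUUID } from "crypto";

const server = http.createServer((req, res) => {
  // Read any cookies the browser sent back with this request.
  const cookieHeader = req.headers.cookie ?? "";
  const hasSession = cookieHeader
    .split(";")
    .map((c) => c.trim())
    .some((c) => c.startsWith("session_id="));

  if (!hasSession) {
    // First visit: ask the browser to store a cookie so the site can
    // "remember" this visitor on subsequent requests.
    res.setHeader("Set-Cookie", `session_id=${randomUUID()}; Path=/; HttpOnly`);
    res.end("Welcome, first-time visitor.");
  } else {
    // Later visits: the browser automatically includes the cookie,
    // so the site recognizes the returning visitor.
    res.end("Welcome back.");
  }
});

server.listen(8080);
```

On the next request from the same browser, the Cookie header arrives automatically, so the server can recognize the visitor without a fresh login.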

Cookies make your life as a website user much easier because you do not have to log in every time you visit the same page. Your online experiences will be personalized to your preferences.

The Right to be Forgotten Tsunami: What Effect for US Companies

Francoise Gilbert

The so-called Right to Be Forgotten or right of erasure (RTBF) has been the subject of much debate and attention since the publication of the Court of Justice of the European Union (CJEU) opinion in May 2014, in the Costeja v. Google case. The CJEU held that, under certain conditions, a European citizen has the right to demand that a search engine remove links to information pertaining to him that is “inaccurate, inadequate, irrelevant, or excessive,” even if the information is truthful.

Since the publication of the CJEU opinion, search engines have been flooded with delisting requests. According to the Google Transparency Report, as of the end of February 2015, Google had received over 220,000 delisting requests and had evaluated over 800,000 URLs.

The topic has also garnered the attention of the Article 29 Working Party (A29), which published Guidelines in late November 2014 to explain the position of the EU Data Protection Authorities. Among other things, the Guidelines provide that delisting requests, when accepted, must be implemented on all domains operated worldwide by the entity receiving the delisting request, and not only on its EU domains.

Interest in RTBF has also expanded outside the European Economic Area (EEA). Cases similar to the Costeja case have been brought in Asia and the Americas. It is clear that a strong current is building. The CJEU Costeja ruling and its aftermath are significant for businesses around the world in many respects. The genie is out of the bottle and may be sneaking into, and disrupting, many businesses.

Right to be Forgotten – Casting a Wider Net

Francoise Gilbert

The Article 29 Working Party (WP29) has published, in its document WP 225, Guidelines on the Implementation of the Court of Justice of the European Union (CJEU) Judgment on Google Spain and Inc. v. Agencia Espanola de Proteccion de Datos (AEPD) and Mario Costeja Gonzalez, C-131/12 (Guidelines) to provide its interpretation of the CJEU’s ruling and to identify the criteria that the EU/EEA Member States’ Data Protection Authorities will use when addressing complaints from individuals following a denial of delisting requests.

People-tracking and Swiss Data Protection Law

Dr. Ursula Widmer

People-tracking systems are being used increasingly, e.g. for optimizing flows of traffic and people or for analysis of customer behavior. Since these systems can also be used for processing sensitive data and personal profiles, the Swiss Federal Data Protection and Information Commissioner (FDPIC) considers that caution is called for and that closer scrutiny of the data protection conditions is necessary. The FDPIC has published comments on people-tracking, which are available on its website.

Yelp to pay $450,000 penalty for COPPA violation

Francoise Gilbert

The Federal Trade Commission has announced a proposed settlement with Yelp, Inc. for COPPA violations. The FTC alleged that, for five years, Yelp illegally collected and used the personal information of children under 13 who registered on its mobile app service. According to the FTC complaint, Yelp collected personal information from children through the Yelp app without first notifying parents and obtaining their consent.

The Yelp app registration process required individuals to provide their date of birth. Several thousand registrants provided a date of birth showing they were under 13 years old. Even though it had knowledge that these registrants were children, Yelp did not follow the requirements of the COPPA Rule and collected their personal information without proper notice to, and consent from, their parents. The information collected included name, e-mail address, geolocation, and any other information that these children posted on Yelp. In addition, the complaint alleges that Yelp did not adequately test its app to ensure that users under 13 were prohibited from registering.

Under the terms of the proposed settlement agreement, among other things, Yelp must:

  • pay a $450,000 civil penalty;
  • delete information it collected from individuals who stated they were 13 or younger at the time they registered for the service; and
  • submit a compliance report to the FTC in one year outlining its COPPA compliance program.

In a separate action, the FTC alleged that TinyCo also improperly collected children’s information in violation of COPPA. Under the settlement agreement between TinyCo and the FTC, TinyCo will pay a $300,000 civil penalty.

The Brazilian Law on the Rights of Internet Users

Esther Nunes and Paulo Bonomo

The Brazilian Law on the Rights of Internet Users – Law No. 12,965, of April 23, 2014 (“Law No. 12,965/2014”)

After a time-consuming legislative process that led to several discussions and postponements in recent years, Law No. 12,965/2014, known as the Brazilian “Marco Civil da Internet,” was published on April 24, 2014. The law will take effect within sixty (60) days from that date.

The objective of the Marco Civil da Internet is to establish the principles, guarantees, rights and obligations for the use of the Internet. In order to assure its enforceability, Law No. 12,965/2014 establishes several concrete requirements that will have to be observed by different Internet players.

Fundamental Rights of Internet Users

The Marco Civil da Internet creates a very extensive list of fundamental rights of Internet users. The law specifically identifies these rights, whereas previously they were found to derive from the provisions of the Brazilian Federal Constitution concerning the fundamental right to privacy, as well as from the Civil and Consumer Protection Codes.

Internet Marketing and Crowdsourcing: What are the Limits?

Eric Barbry

The Internet marketing industry is exploring various strategies to influence the behavior of Internet users, because how users behave has become integral to the operation of a growing number of services offered by search engines (e.g., Google Suggest) and, more generally, social networks.

Crowdsourcing is one of the avenues used to achieve these goals: via crowdsourcing platforms, companies can pay Internet users to complete a variety of microtasks, ranging from performing image recognition and translating content to clicking on “like” or posting comments.

One can easily imagine how crowdsourcing platforms can be misused to produce fake comments or harm someone’s online reputation. In France, this type of behavior constitutes an unfair trade practice and is actionable under Article L 120-1 of the French Consumer Code.

If a website experiences an unexplained drop in traffic or begins to be associated with negative search suggestions or comments, it is worth taking a closer look at these platforms. In France, to record evidence and build a case, companies should have the litigious practices documented by a competent member of the legal profession (a huissier, who will record the findings in a report called a constat).

Link: Article L 120-1 of the French Consumer Code (in French)
