By Christine Walker and Sarah Goodman, Offit Kurman
On June 28, 2024, the Supreme Court issued its decision in Loper Bright Enterprises v. Raimondo, overruling the Chevron doctrine,[1] which required courts to defer to regulatory agencies’ interpretations of statutory law. In Loper Bright, the Court ruled that judges cannot defer to an agency’s interpretation of the law merely because it is deemed “reasonable.” The decision cautioned courts against relying on agencies’ claims of authority based on their “subject matter expertise” or their role in political “policymaking.” Instead, federal judges are required to exercise “independent judgment” and interpret statutes based on their “best meaning.” This standard makes judges more skeptical of agency interpretations, particularly when those interpretations are inconsistent. While judges may consider agency guidance if it is persuasive, longstanding, and consistent, such guidance is not legally binding.
In her Loper Bright dissent, Justice Elena Kagan noted that artificial intelligence (AI) is likely to be the subject of “the next big piece of legislation on the horizon.” She emphasized the challenges Congress faces in regulating such a technical area, stating that “Congress can hardly see a week in the future with respect to this subject, let alone a year or a decade.” As Congress endeavors to legislate on AI in the wake of Loper Bright, it will have to specify what authority agencies hold to regulate AI. In turn, agencies will have less flexibility in creating and enforcing AI regulations unless that power is expressly delegated to them in AI legislation.
The rapidly expanding landscape of federal and state legislation and regulation in the AI space is already creating compliance challenges for employers. Given the fast-paced evolution of AI technology, regulatory flexibility is essential. While legal compliance remains a priority, in the wake of Loper Bright employers will find it easier to challenge agency rules, especially if those rules deviate from the statutory text or shift unpredictably with changes in administration.
Until Congress enacts comprehensive federal AI legislation, state and local governments will have the opportunity to pass AI regulations tailored to their constituents, as Colorado recently did.[2] However, without a broader federal regulatory scheme expressing a goal of uniformity, this patchwork could lead to divergent judicial decisions on AI. In recent years, in the absence of congressional legislation on AI in the workplace, the U.S. Equal Employment Opportunity Commission (EEOC), the National Labor Relations Board (NLRB), and the U.S. Department of Labor (DOL) have announced various initiatives to restrict the use of AI in the workplace.
The possibility of differing interpretations between state and federal courts raises significant concerns about the future of AI regulation in the United States. Employers operating across multiple states may encounter conflicting requirements, adding complexity to an already challenging compliance landscape. Additionally, employers could face varying legal standards when individuals seek redress for alleged AI-related harms, depending on whether the case is heard in state or federal court. Consequently, the legal landscape for AI is poised to become fragmented and complex. The wheels of justice may also turn too slowly to keep up with AI’s fast-evolving pace.
Greater reliance on courts to determine the appropriate use of AI could place AI users at increased risk of litigation. To minimize potential liability, AI users should implement an AI governance system that defines how the AI is used, identifies its limitations and risks, and provides guidance on best practices. Advance knowledge of an AI system’s potential pitfalls gives a business a tactical advantage: it can avoid unnecessary litigation while still leveraging the benefits of the technology. Legal strategies will need to be tailored to the specific jurisdiction in question, and companies may need to implement more robust compliance measures to account for the varying standards that will emerge.
[1] Chevron established a two-step framework for judicial review of agency statutory interpretation. Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc., 467 U.S. 837 (1984). Under Chevron, if a court concluded that a statute was silent or ambiguous, it had to defer to an agency’s permissible construction of the statute. The Loper Bright decision is premised on what the majority believes is a plain-text reading of the Administrative Procedure Act (APA), which governs judicial challenges to agency actions. The Court specifically determined that the APA, which was not considered in Chevron, reflects the traditional understanding of the judiciary’s role, which requires courts to independently interpret the meaning of laws.
[2] Colorado’s newly enacted AI law aims to establish comprehensive regulations governing AI use, with a focus on transparency, accountability, and fairness. The law requires companies to conduct impact assessments and implement safeguards to mitigate bias and discrimination in AI systems, with compliance required by February 1, 2026.