By: Jessica Sunha Kweon[1]
- Introduction
Emanating from science fiction, artificially intelligent machines have been a reality since the 1950s.[2] While the use of artificial intelligence has “skyrocketed” nationwide in a post-pandemic world,[3] legislators have struggled to keep pace.[4] In 2020, Maryland passed House Bill 1202 (“H.B. 1202”), becoming one of the first states to regulate artificial intelligence use in employment decisions.[5] However, Maryland’s current laws fail to address discriminatory impacts and privacy issues arising from artificial intelligence use in recruitment and employment processes.[6]
The following sections of this comment discuss how Maryland can address discrimination and data privacy concerns regarding the use of artificial intelligence in employment decisions.[7] Section II begins with a brief history of the development of artificial intelligence.[8] This section then examines the beneficial and harmful uses of artificial intelligence in the United States and Maryland workforces.[9]
Section III explores the legal issues that arise from the use of artificial intelligence in employment decisions, including discrimination resulting from algorithmic bias and data privacy concerns.[10] This section also highlights the shortcomings of Maryland’s current legislation in artificial intelligence regulation and research.[11]
Lastly, Section IV proposes potential actions, based on existing solutions at the local, state, and federal levels, to mitigate harm resulting from artificial intelligence in employment decisions.[12] Part one provides recommendations for Maryland to explicitly address artificial intelligence use in legislation.[13] Part two encourages greater research in artificial intelligence to better inform future artificial intelligence lawmaking.[14] Finally, while legislation is pending, part three offers an alternative recommendation for employers to self-regulate and defer to current guidance from federal authorities as artificial intelligence laws continue developing.[15] Collectively, these solutions present a more comprehensive and robust legislative approach to protect Maryland employees amid the recent rise of artificial intelligence use in employment decisions across the country.[16]
II. Historical Development
A. The History of Artificial Intelligence
Alan Turing was an English mathematician and computer scientist who transformed computer science in the 20th century.[17] Widely known as the “father” of modern computer science and artificial intelligence,[18] Turing first raised the possibility of artificial intelligence in 1950.[19] Turing pondered whether a digital computer could “think” based on an “imitation game”[20]—known as the “Turing Test.”[21] This game involved a human interrogator who must distinguish between a human and a machine respondent based on a teletyped conversation.[22] Turing predicted that, within fifty years, an average interrogator would have no more than a seventy percent chance of making the correct identification after a five-minute conversation.[23] While addressing objections pertaining to storage capacity and to limitations innate to the human experience, such as emotion and theology, Turing argued that, in theory, a digital computer could think intelligently and produce human-like responses.[24]
In 1955, John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon formally proposed the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), the seminal event for artificial intelligence research.[25] Notably, in the DSRPAI proposal, McCarthy first coined the term artificial intelligence.[26] Thereafter, the first DSRPAI occurred in 1956, marking the start of the research discipline.[27] There, Allen Newell, Cliff Shaw, and Herbert Simon presented the Logic Theorist—the first artificial intelligence program[28] —and demonstrated the potential of artificial intelligence research.[29] In 2004, McCarthy formally defined the term artificial intelligence as “the science and engineering of making intelligent machines, especially intelligent computer programs.”[30]
After the 1950s, artificial intelligence flourished as computers improved in performance, storage capacity, affordability, and accessibility.[31] Researchers, including McCarthy, created programming languages such as List Processing (“LISP”) for artificial intelligence research.[32] Other early artificial intelligence programs, like Newell and Simon’s 1957 General Problem Solver[33] and the 1960s ELIZA chatbot,[34] showed progress toward problem-solving and language interpretation by artificial intelligence.
The 1980s “AI Boom” represented the next rapid development in artificial intelligence.[35] In 1980, the newly founded American Association for Artificial Intelligence hosted its first national conference at Stanford University.[36] In the same year, scientists developed “expert system[s]” that used factual and heuristic specialized knowledge to automate inferences.[37] The popularization of artificial neural networks also led to the development of backpropagation algorithms that would later inspire “deep learning.”[38]
Progress slowed during the subsequent “AI Winter” due to a lack of funding stemming from high costs and low interest in artificial intelligence research.[39] Nevertheless, the emergence of intelligent agents, which modeled the mind, sparked advancement.[40] Notably, in 1997, IBM’s Deep Blue, a chess-playing program, defeated Garry Kasparov, the reigning world chess champion and grandmaster.[41] That same year, speech recognition software developed by Dragon Systems was released for Windows.[42] In 2000, Cynthia Breazeal developed Kismet, a robot that could display and identify human emotions.[43]
Now, in the “big data”[44] era, the development of artificial intelligence applications has extended to speech recognition, virtual assistants and agents, and computer vision.[45] The recent advancements in large language models using generative artificial intelligence based on data inputs, like ChatGPT,[46] also raise new ethical issues in the modern day.[47]
B. The Recent Use of Artificial Intelligence in the Workforce
While artificial intelligence has existed for decades, the recent rise of artificial intelligence use has “revolutionize[d]” the workplace.[48] Notably, artificial intelligence use by employers spiked during the COVID-19 pandemic.[49] More organizations invested in artificial intelligence and automation to expedite remote work, improve consumer experience, decrease costs, and increase productivity and innovation while addressing skill shortages and supply chain issues.[50]
Today, about eighty percent of employers use artificial intelligence in their employment decision-making processes,[51] often in the interest of efficiency.[52] In employment hiring decisions, employers may use algorithmic tools to analyze resumes, make job performance predictions, and perform facial recognition during interviews to evaluate a candidate’s demeanor.[53] Some argue that artificial intelligence helps companies hire diversely and reduce unconscious bias by anonymizing resumes and interviewees or analyzing objective factors through facial recognition technology in interviews.[54] Post-hiring data also helps companies enhance employee experiences and productivity[55] as well as inform promotion and firing decisions.[56]
Nevertheless, there is growing concern amongst employees that the use of artificial intelligence tools results in job insecurity, displacement, and the creation of an “unequal playing field.”[57] Employees have also largely objected to the use of facial recognition technology in the workplace due to limitations in emotional interpretation and bias.[58] Consequently, the use of artificial intelligence has resulted in discrimination and data privacy concerns.[59]
C. The Recent Use of Artificial Intelligence in Maryland’s Workforce
In 2023, the Employment Law Center of Maryland—a nonprofit firm—adopted an artificial intelligence tool, CoCounsel, to assist in legal research, drafting, and reviewing documents.[60] Members of the firm shared that the tool improved efficiency and would “help close the gap for access to justice.”[61] In recent years, other employers and businesses have increasingly implemented artificial intelligence.[62] These implementations include automated decision-making tools such as resume scanners, video interviewing software, and employee monitoring software.[63]
D. Current Regulation of Artificial Intelligence in Maryland
Although Maryland is one of three states moving forward with legislation pertaining to facial recognition technology, its laws are still limited.[64] H.B. 1202, enacted in 2020, requires private employers to receive consent from applicants before using facial recognition technology during pre-employment interviews.[65] H.B. 1202 does not state any particular penalty for violations and does not extend to public employers.[66] In 2022, Maryland legislators established the Industry 4.0 Technology Grant Program in the Department of Commerce to aid small and mid-sized manufacturing companies with modernizing their production and increasing overall economic growth.[67] The 2023 pilot program—funded by $1 million—will financially support qualifying projects, such as automated or robotic equipment, smart systems, and artificial intelligence tools.[68]
Meanwhile, in the executive branch, Governor Wes Moore issued an Executive Order in 2024 supporting the modernization of Maryland’s digital infrastructure.[69] The action includes integrating artificial intelligence, upgrading outdated government computer systems for state government employees, and creating an “AI Subcabinet.”[70] Later in 2024, Maryland legislators expanded the responsibilities of the Department of Information Technology to include inventorying and assessing the use of artificial intelligence in state government systems and working with the Governor’s now-codified AI Subcabinet.[71] Thus, Maryland, like the rest of the nation, is balancing the benefits of artificial intelligence against its legal and ethical risks.[72]
III. Issue
A. Major Legal Issues That Arise from Artificial Intelligence Use in Employment Decisions
Two major legal issues may arise when employers use artificial intelligence in employment decision-making processes. First, artificial intelligence may discriminate against certain employees and job candidates through bias.[73] For example, in 2014, Amazon’s artificial intelligence recruitment tool showed bias against women, penalizing resumes that included women’s clubs and colleges.[74] Ultimately, the recruiting engine was discarded due to these technological flaws.[75] Similar issues with “algorithmic fairness”[76] could lead to violations of federal laws.[77] These include Title VII of the Civil Rights Act of 1964, which prohibits employers and businesses from using selection processes that disproportionately impact particular groups;[78] the Americans with Disabilities Act (ADA), which prohibits employers from discriminating against employees on the basis of disability;[79] and the Age Discrimination in Employment Act (ADEA), which prevents age discrimination against employees.[80]
Second, the use of artificial intelligence also gives rise to data privacy issues.[81] For example, Clearview AI, a startup that provides facial recognition technology to clients, maintained a database of billions of private photos and sold access to government and law enforcement bodies.[82] Such concerns have resulted in greater privacy protections regarding artificial intelligence technology use at both the federal and state levels.[83]
B. Currently, Maryland Law Is Not Robust Enough to Regulate Artificial Intelligence Effectively and Address Discrimination and Data Privacy Concerns.
Despite Maryland legislators’ and the governor’s recent push for artificial intelligence,[84] the foregoing issues pose significant legal and ethical obstacles in employment decisions.[85] Existing regulation of artificial intelligence in employment decisions in Maryland is limited to H.B. 1202 and requires expansion.[86] Meanwhile, artificial intelligence is developing at an unprecedented speed, and developers and sellers of artificial intelligence are “crowding out conversations with policymakers around how to govern [artificial intelligence] and to mitigate social consequences.”[87] The implications of artificial intelligence remain largely unknown to state policymakers, thereby slowing the creation of new laws.[88] In sum, the overall lack of comprehensive regulations governing artificial intelligence in employment decisions creates significant ethical and legal risks for employees in Maryland.
IV. Solution
A. New Maryland Laws Should Directly Address Discrimination and Data Privacy Issues in Artificial Intelligence Technology Use in Employment Decisions.
To address the gaps in Maryland’s current artificial intelligence laws, lawmakers can turn to existing regulations in other jurisdictions that address discriminatory impacts and privacy concerns. For example, Illinois enacted the Biometric Information Privacy Act (BIPA), which protects individuals’ biometric data, including data collected through artificial intelligence tools like facial recognition software.[89] In 2024, Illinois amended the Illinois Human Rights Act to prohibit employers from using artificial intelligence tools that result in discrimination against employees.[90] In 2021, New York City enacted the Automated Employment Decision Tool Law, which requires employers to inform job candidates of artificial intelligence use in hiring processes and to perform annual audits of recruitment technology to check for bias in hiring.[91] As of 2024, Colorado is the only state that has passed legislation requiring developers and employers “to use reasonable care to avoid algorithmic discrimination in the high-risk [AI] system.”[92]
Not all proposed legislation has been enacted successfully. In 2020, California legislators failed to pass the Talent Equity for Competitive Hiring (TECH) Act, which would have established criteria for artificial intelligence technology engaging in nondiscriminatory selection processes for hiring and promotions.[93] The TECH Act would have also held sellers accountable for testing their artificial intelligence products.[94] In 2024, Illinois legislators did not pass the Automated Decision Tools Act, which would have mandated audits and safeguards for employers who use automated decision tools.[95] At the federal level, the National Biometric Information Privacy Act of 2020, which would have expanded national regulation of the collection, retention, disclosure, and destruction of biometric data by private entities, was introduced but never enacted.[96]
Nevertheless, legislators at the state and federal levels have continued to push for more artificial intelligence regulations. As of 2023, thirteen states have enacted some type of biometric information privacy protections.[97] Furthermore, the pending Eliminating Bias in Algorithmic Systems (BIAS) Act of 2024, if enacted, would require federal agencies using artificial intelligence to maintain a civil rights office for combating bias and discrimination.[98]
While Maryland is at the “starting line” regarding the regulation of artificial intelligence,[99] state leaders and lawmakers have demonstrated a greater effort to address artificial intelligence in the 2024 legislative session[100] and the 2025 legislative session.[101] Maryland should follow legislative trends and aim to address the use of artificial intelligence in employment decisions through new legislation. In doing so, Maryland can proactively protect employees from discrimination and data privacy violations.
B. Maryland Legislators Must Revisit Previous Proposals or Create New Legislation That Prioritizes Artificial Intelligence Research.
Currently, Maryland has an artificial intelligence advisor, Nishant Shah, who oversees the state’s artificial intelligence strategy and participates in developing ethical guidance and coordinating with federal leaders.[102] While Shah’s team is drafting guidance, Shah has not yet called for new legislation in light of emerging research.[103]
Instead, mandated research may be an alternative legislative solution. At least twelve states have enacted laws requiring governments and government entities to increase their understanding of artificial intelligence and its implications.[104] Experts in task forces, advisory boards, commissions, and councils will report artificial intelligence findings and recommendations in various subjects, including employment.[105]
Congress recently introduced similar laws. The Jobs of the Future Act of 2024 would require the Secretary of Labor and the Director of the National Science Foundation to work with stakeholders and jointly report the growth and impact of artificial intelligence in the workforce.[106] In addition, the CREATE AI Act of 2024 would establish the National Artificial Intelligence Research Resource (“NAIRR”), a national research infrastructure that would provide resources, data, and tools so that researchers and students can conduct safe and reliable artificial intelligence research.[107]
Despite recent state and national support for artificial intelligence research, multiple 2023 and 2024 bills on artificial intelligence research failed to pass the Maryland legislature. These bills included House Bill 1034 (“H.B. 1034”), aimed at establishing an advisory board to investigate the issues pertaining to artificial intelligence across health care, education, transportation, and criminal justice.[108] The advisory board would have also examined the economic, social, and ethical implications of artificial intelligence as well as consulted with experts and the public regarding the adoption of artificial intelligence.[109] Finally, the board would have evaluated current artificial intelligence laws, regulations, and policies and made recommendations for new legislation.[110] The advisory board would have reported its findings and recommendations to the Governor and the General Assembly, and the report would be available to the public.[111]
The state legislature also rejected bills that would have established a commission on artificial intelligence.[112] House Bill 1068 (“H.B. 1068”) would have established the Commission on Responsible Artificial Intelligence in Maryland, staffed by the Department of Legislative Services.[113] The commission would have examined the existing artificial intelligence laws at the local, state, and federal scale to inform best practices and regulations of artificial intelligence in the public sector.[114] The commission would have also reported its findings and recommendations to specific committees of the General Assembly.[115] Similarly, House Bill 1132 (“H.B. 1132”) would have established the Technology and Science Advisory Commission (TSAC), staffed by the Department of Information Technology.[116] TSAC’s duties would have included advising the state agencies on the development and implementation of technology, such as artificial intelligence, and creating a framework for addressing ethical concerns.[117] TSAC would have reported its activities and recommendations to both the Governor and the General Assembly.[118]
In 2024, the state legislature did not pass Senate Bill 1087 (“S.B. 1087”), which would have established the Maryland Artificial Intelligence Advisory and Oversight Commission.[119] The Commission would have offered guidance and recommendations for the use of artificial intelligence in the State.[120] On the other hand, House Bill 1297 (“H.B. 1297”), which would require the State Department of Education to investigate best practices for the ethical use of artificial intelligence in public schools while ensuring data privacy, remains pending but is limited to the education context.[121]
Research is essential to clarify legislators’ understanding of artificial intelligence, to better inform state guidance, and to improve policymaking decisions.[122] Maryland should follow state and federal trends to increase artificial intelligence research within the state. By revisiting previous bills or creating new ones, state legislators can establish new research entities that will help maintain transparency and mitigate harmful implications, while also supporting the growing use and benefits of artificial intelligence in employment decisions.
C. As Maryland Leaders and Legislators Continue to Build an Understanding of Artificial Intelligence, Employers and Businesses Should Engage in Ethical, Self-Regulatory Practices Based on General Federal Guidance.
The legislature’s lack of understanding regarding the implications of artificial intelligence has led to employers regulating themselves.[123] Employers may implement safeguards to prevent the disclosure of personal data[124] or audit artificial intelligence tools for hiring bias.[125] They may also maintain human involvement during the use of artificially intelligent tools.[126] Employers and businesses may also formally implement artificial intelligence policies and training within the companies to regulate the handling of private information in decision-making practices.[127] Nevertheless, this self-regulatory framework does not supersede federal, state, or local laws pertaining to the use of artificial intelligence in employment practices.[128]
In addition to self-regulation, employers should also refer to federal guidance concerning the legal and ethical use of artificial intelligence. Although there are no federal enforcement measures directly regulating artificial intelligence use, the federal government has provided additional guidance informing employers and employees of legal issues presented by artificial intelligence use.[129] The Biden Administration[130] and various federal agencies[131] have provided helpful guidance.
i. The Biden-Harris Administration
In 2022, the Biden-Harris Administration released a “blueprint” website[132] and white paper[133] describing rights-protective artificial intelligence use. Therein, the administration outlined five principles guiding the design and implementation of automated systems while providing protections to Americans.[134] These principles addressed: (1) safe and effective systems; (2) algorithmic discrimination protections; (3) data privacy; (4) notice and explanation; and (5) human alternatives, consideration, and fallback.[135]
In February 2023, former President Biden signed Executive Order 14091, directing federal agencies to focus on addressing algorithmic discrimination through automated systems and technology.[136] In October 2023, Biden also issued Executive Order 14110 regarding the “safe, secure, and trustworthy development and use of artificial intelligence.”[137] The order established new standards for artificial intelligence safety and security, called for Congress to protect Americans’ privacy with bipartisan data privacy legislation, and called for agencies to address algorithmic bias.[138] The order also directed the development of best practices to mitigate harms and maximize benefits for U.S. workers and to promote the innovative and responsible use of artificial intelligence in the private and public sectors.[139]
Since then, under Executive Order 14091, several federal agencies convened in January 2024 to discuss the intersection of artificial intelligence and civil rights.[140] Representatives also shared updates on policies, guidance, and resources and pledged continued internal and external collaboration to prevent harm to the public from artificial intelligence use.[141]
ii. Trump-Vance Administration
In January 2025, President Donald Trump revoked the Biden-Harris Administration’s Executive Order 14110 and signed Executive Order 14179.[142] The new order aims to strengthen the United States’ global leadership in artificial intelligence and to promote “human flourishing, economic competitiveness, and national security.”[143] Nevertheless, the Trump-Vance Administration’s “wave of layoffs . . . combined with looming budget cuts” has affected federal agencies’ research on artificial intelligence.[144]
iii. Federal Agency Guidance
Multiple federal agencies have provided guidance regarding artificial intelligence use. In 2023, leaders of the Consumer Financial Protection Bureau (CFPB), the Department of Justice (DOJ)’s Civil Rights Division, the Equal Employment Opportunity Commission (EEOC), and the Federal Trade Commission (FTC) released a joint statement regarding each agency’s enforcement of the responsible use of automated systems.[145] The statement informed Americans of the risk of unlawful outcomes from the use of automated systems, including discrimination, bias, and violations of federal law.[146] In 2024, referencing the joint statement, Deputy Attorney General Lisa O. Monaco affirmed the DOJ’s “tough stance” on prosecuting the misuse of artificial intelligence, including the amplification of biases and discriminatory practices.[147] In 2022, the DOJ had also issued guidance on artificial intelligence and disability discrimination in hiring decisions for public and private employers.[148]
The EEOC, which maintains its own research through its Artificial Intelligence and Algorithmic Fairness Initiative,[149] has also provided guidance on algorithmic fairness and violations of the ADA[150] and Title VII[151] through artificial intelligence use. The EEOC has recently engaged in litigation regarding discriminatory artificial intelligence use as well.[152] In 2023, the EEOC settled its first artificial intelligence discrimination lawsuit, which concerned violations of Title VII and the ADEA.[153] There, iTutorGroup Inc. paid $365,000 to settle claims that its AI hiring selection tool automatically rejected female applicants over 55 years old and male applicants over 60 years old.[154]
With regard to data privacy, in 2023, the National Institute of Standards and Technology (NIST), an agency within the U.S. Department of Commerce, published its Artificial Intelligence Risk Management Framework.[155] This framework informs Americans of the potential risks of artificial intelligence use, including privacy and discrimination issues.[156] The framework also establishes guidance for safe, reliable, and trustworthy standards of artificial intelligence use.[157]
Despite the increasing use of artificial intelligence amongst employers, regulations and understanding of the relevant technology have been slow to come to fruition.[158] As such, Maryland employers should combine self-regulatory practices and policies with federal guidance to ensure compliance with state and federal laws. Adherence will ensure that employers remain cautious and do not engage in unlawful conduct while maintaining the benefits of artificial intelligence use in employment decisions.
V. Conclusion
The use of artificial intelligence is a highly debated issue affecting both the private and public sectors in Maryland.[159] In the digital age, businesses and employers statewide and nationwide have increasingly implemented artificially intelligent technology to make employment decisions more efficiently.[160] However, while businesses, leaders, and lawmakers call for greater artificial intelligence use, Maryland requires more robust and comprehensive legislation to address the discrimination and privacy concerns arising from its use.[161] The implementation of more laws and greater research will help Maryland thoroughly investigate artificial intelligence and responsibly balance its benefits against its harms. In doing so, Maryland can safely lead in the nation’s overall objective to advance artificial intelligence.[162]
[1] Jessica Sunha Kweon: J.D. Candidate, May 2025, University of Baltimore School of Law. At UBalt Law, Jessica has served as a Law Scholar, Teaching Assistant, and Research Assistant for several professors, supporting both student learning and faculty scholarship. As an Articles Editor for Volume 55 of the University of Baltimore Law Forum, Jessica has also helped shape legal scholarship in Maryland and will have seven publications by Summer 2025. Jessica is also the outgoing President of the Board of Advocates, leading moot court and mock trial advocates while being a two-time National Environmental Law Moot Court competitor.
[2] Rockwell Anyoha, The History of Artificial Intelligence, Harv. (2017), https://www.scirp.org/reference/referencespapers?referenceid=3561147 (The concept of artificial intelligence began as early as the “‘heartless’ Tin man from the Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis.”).
[3] Joe McKendrick, AI Adoption Skyrocketed Over the Last 18 Months, Harv. Bus. Rev. (Sept. 27, 2021), https://hbr.org/2021/09/ai-adoption-skyrocketed-over-the-last-18-months.
[4] Benedict Sheehy & Yee-Fui Ng, The Challenges of AI Decision-Making in Government and Administrative Law: A Proposal for Regulatory Design, 57 Ind. L. Rev. 665, 666 (2024) (noting that “Governments have struggled to understand and address AI decision-making appropriately[,]” thereby harming “vulnerable populations”).
[5] H.B. 1202, 2020 Leg., 441st Reg. Sess. (Md. 2020).
[6] See Eli Kales, Use of AI Tools Raises Concerns About Potential for Employment Discrimination, Md. The Daily Rec. (Aug. 8, 2023), https://thedailyrecord.com/2023/08/08/use-of-ai-tools-raises-concerns-about-potential-for-employment-discrimination/; see also Madyson Fitzgerald, As Employers Expand Artificial Intelligence in Hiring, Maryland is One of Few States That Have Rules, Md. Matters (July 18, 2023, 6:50 AM), https://marylandmatters.org/2023/07/18/as-employers-expand-artificial-intelligence-in-hiring-maryland-is-one-of-few-states-that-have-rules/.
[7] See infra Sections II–IV.
[8] See infra Section II.A.
[9] See infra Section II.B-C.
[10] See infra Section III.A.
[11] See infra Section III.B.
[12] See infra Sections IV.A-B.
[13] See infra Section IV.A.
[14] See infra Section IV.B.
[15] See infra Section IV.C.
[16] McKendrick, supra note 3.
[17] See B. Jack Copeland & Diane Proudfoot, Alan Turing’s Forgotten Ideas in Computer Science, Sci. Am. 99 (1999), https://personal.utdallas.edu/~otoole/HCS6330_F10/03_turing.pdf (summarizing Turing’s achievements).
[18] Alan Turing: His Work and Impact 481 (S. Barry Cooper & Jan van Leeuwen eds., 2013).
[19] Alan M. Turing, Computing Machinery and Intelligence, 59 Mind 433 (1950), https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf.
[20] Id.
[21] B. Jack Copeland, The Turing Test, 10 Minds & Mach. 519, 522 (2000).
[22] Turing, supra note 19.
[23] Id.
[24] Id.
[25] See generally John McCarthy et al., A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence 2–4 (1955), http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf (comprising the original proposal for DSRPAI).
[26] Stephanie Dick, Artificial Intelligence, 1.1 Harv. Data Sci. Rev. 1, 2 (2019), https://hdsr.mitpress.mit.edu/pub/0aytgrau/release/3.
[27] James Moor, The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years, 27 AI Mag. 87, 87 (2006).
[28] Pamela McCorduck, Machines Who Think: A Personal Inquiry Into the History and Prospects of Artificial Intelligence 123–24 (2d ed. 2004), https://monoskop.org/images/1/1e/McCorduck_Pamela_Machines_Who_Think_2nd_ed.pdf.
[29] Moor, supra note 27, at 87.
[30] John McCarthy, What Is Artificial Intelligence 2 (2007), https://www-formal.stanford.edu/jmc/whatisai.pdf.
[31] Anyoha, supra note 2.
[32] John McCarthy, History of Lisp 1 (1979), http://jmc.stanford.edu/articles/lisp/lisp.pdf.
[33] See Herbert A. Simon & Allen Newell, Human Problem Solving: The State of the Theory in 1970 152 (1971); see also Newell et al., Report on a General Problem-Solving Program 2–3 (explaining that the General Problem Solver was first presented in 1959); see George W. Ernst & Allen Newell, GPS: A Case Study in Generality and Problem Solving 3 (Robert L. Ashenhurst ed., 1969) (the General Problem Solver, as its name suggests, solves a “variety of problems” that “are simple according to human standards, although they still require intellectual effort.”); see Nils J. Nilsson, The Quest for Artificial Intelligence: A History of Ideas and Achievements 88 (2010) (Notably, the General Problem Solver “was an outgrowth of . . . the Logic Theorist in that it was based on manipulating symbol structures . . . .”).
[34] Joseph Weizenbaum, ELIZA—A Computer Program for the Study of Natural Language Communication Between Man and Machine, 9 Commc’ns of the ACM 36, 36 (1966) (ELIZA was “a program which ma[de] natural language conversation with a computer possible.”).
[35] Joseph Anderson, AI and the Legal Puzzle: Filling Gaps, But Missing Pieces, 75 Mercer L. Rev. 1521, 1525 (2024) (“The AI Boom was marked by the investment of hundreds of thousands of dollars in artificial intelligence research and development of the first driverless car.”).
[36] AAAI-80: First National Conference on Artificial Intelligence, AAAI, https://aaai.org/conference/aaai/aaai80/ (last visited Apr. 8, 2025).
[37] See Pamela McCorduck, This Could Be Important: My Life and Times with the Artificial Intelligentsia (2019) (ebook) (explaining that DENDRAL and MYCIN were among the first “expert system[s]” to use the “expertise of human specialists”). There remains debate on whether DENDRAL or MYCIN was the first expert system. Robert K. Lindsay et al., DENDRAL: A Case Study of the First Expert System for Scientific Hypothesis Formation, 61 Artificial Intel. 209, 211 (1993) (“Whether DENDRAL was the first expert system is debatable . . . Allen Newell identifies MYCIN as ‘the granddaddy of expert systems’, but acknowledges that those associated with both projects may not concur.”). DENDRAL, which generated candidate organic compounds, was developed beginning in 1965. Id.; see also Edward A. Feigenbaum & Bruce G. Buchanan, DENDRAL and Meta-DENDRAL: Roots of Knowledge Systems and Expert System Applications, 59 Artificial Intel. 233, 234 (1993). MYCIN, developed in the early 1970s, was an expert system that made medical diagnoses of blood infections. Edward A. Feigenbaum, Knowledge Engineering: The Applied Side of Artificial Intelligence 3–4 (1980).
[38] Michael Chui et al., McKinsey Glob. Inst., Notes from the AI Frontier: Insights from Hundreds of Use Cases 3 (2018), https://www.mckinsey.com/west-coast/~/media/mckinsey/featured%20insights/artificial%20intelligence/notes%20from%20the%20ai%20frontier%20applications%20and%20value%20of%20deep%20learning/notes-from-the-ai-frontier-insights-from-hundreds-of-use-cases-discussion-paper.pdf (while “there is an incomplete outline of [deep learning’s] origins,” there were several pioneers in the 1980s); see, e.g., John J. Hopfield, Neural Networks and Physical Systems with Emergent Collective Computational Abilities, 79 Proc. Nat’l Acad. Sci. 2554 (1982); David E. Rumelhart et al., Learning Representations by Back-Propagating Errors, 323 Nature 533 (1986), https://www.iro.umontreal.ca/~vincentp/ift3395/lectures/backprop_old.pdf. Deep learning uses backpropagation algorithms to process more complex data in multiple layers. Yann LeCun et al., Deep Learning, 521 Nature 436, 436 (2015); see also Geoffrey E. Hinton et al., A Fast Learning Algorithm for Deep Belief Nets, 18 Neural Computation 1527, 1527 (2006) (coining “deep” learning through a two-layer neural network model). The development of deep learning has allowed for great advancement in solving problems, including image recognition, speech recognition, prediction, analysis, question answering, and language translation. LeCun et al., supra. However, even Hinton “warn[s] about the growing dangers” resulting from the acceleration of artificially intelligent chatbots and systems following his departure from Google. Zoe Kleinman & Chris Vallance, AI ‘Godfather’ Geoffrey Hinton Warns of Dangers as He Quits Google, BBC (May 2, 2023), https://www.bbc.com/news/world-us-canada-65452940; see also discussion infra note 47.
[39] Stan. Univ., Artificial Intelligence and Life in 2030, at 51 (2016), https://ai100.stanford.edu/sites/g/files/sbiybj18871/files/media/file/ai100report10032016fnl_singles.pdf.
[40] David L. Poole & Alan K. Mackworth, A Brief History of Artificial Intelligence, A.I. 3E § 1.2 (Aug. 3, 2023), https://artint.info/3e/html/ArtInt3e.Ch1.S2.html.
[41] Murray Campbell et al., Deep Blue, 134 Artificial Intelligence 57, 57 (2002), https://core.ac.uk/download/pdf/82416379.pdf.
[42] Lawrence A. Malakhoff & Martin V. Appel, The Development of a Voice Recognition Prototype for Field Listing 234 (1997), http://www.asasrms.org/Proceedings/papers/1997_037.pdf.
[43] See Cynthia L. Breazeal, Sociable Machines: Expressive Social Exchange Between Humans and Robots 18 (2000), https://groups.csail.mit.edu/lbr/hrg/2000/phd.pdf.
[44] Edith Ramirez et al., Fed. Trade Comm’n, Big Data: A Tool for Inclusion or Exclusion?, at i (2016), https://www.ftc.gov/system/files/documents/reports/big-data-tool-inclusion-or-exclusion-understanding-issues/160106big-data-rpt.pdf.
[45] See Jacques Bughin et al., McKinsey Glob. Inst., Artificial Intelligence: The Next Digital Frontier? 8 (2017), https://www.mckinsey.com/de/~/media/mckinsey/industries/advanced%20electronics/our%20insights/how%20artificial%20intelligence%20can%20deliver%20real%20value%20to%20companies/mgi-artificial-intelligence-discussion-paper.pdf.
[46] ChatGPT, OpenAI, https://openai.com/ (last visited Mar. 4, 2025).
[47] See Jianlong Zhou et al., Ethical ChatGPT: Concerns, Challenges, and Commandments, 13 Electronics 1, 3 (2024), https://www.mdpi.com/2079-9292/13/17/3417 (finding that among thousands of publications, ChatGPT is linked to several ethical concerns regarding bias, privacy, and abuse); see also discussion supra note 38. Other large language models, such as Meta’s LLaMA and Google’s Bard, present similar issues. See Sebastian Porsdam Mann et al., Generative AI Entails a Credit-Blame Asymmetry, 5 Nature Machine Intelligence 472, 472 (2023).
[48] Benjamin Laker, AI at the Crossroads: Navigating Job Displacement, Ethical Concerns, and the Future of Work, Forbes (May 9, 2023, 9:00 AM), https://www.forbes.com/sites/benjaminlaker/2023/05/09/ai-at-the-crossroads-navigating-job-displacement-ethical-concerns-and-the-future-of-work/?sh=7f665d4c391c (“[I]t’s clear there is enormous potential to revolutionize the world of work.”).
[49] Anand Rao et al., Navigating the Top 5 AI Trends Facing Your Business, PwC (Mar. 9, 2021), https://www.pwc.com.au/digitalpulse/ai-predictions-2021-report.html (of over 1,000 US executives, “[f]ifty-two percent of survey respondents have accelerated their AI approach in the wake of the COVID-19 crisis[.]”); The State of AI and Machine Learning, Appen, July 2022, at 31 fig.25 (of 501 US respondents, 55% of companies accelerated their AI strategies in response to the pandemic).
[50] McKendrick, supra note 2; Kerri Reynolds, COVID-19 Increased the Use of AI. Here’s Why It’s Here to Stay, World Econ. F. (Feb. 24, 2021), https://www.weforum.org/agenda/2021/02/covid-19-increased-use-of-ai-here-s-why-its-here-to-stay/.
[51] Lindsey Wagner, Artificial Intelligence in the Workplace, ABA (June 10, 2022), https://www.americanbar.org/groups/labor_law/publications/labor_employment_law_news/spring-2022/ai-in-the-workplace/ (statement of Charlotte Burrows, chairwoman of the Equal Employment Opportunity Commission).
[52] Zhisheng Chen, Ethics and Discrimination in Artificial Intelligence-Enabled Recruitment Practices, 10 Human. & Soc. Sci. Commc’ns 1, 1 (2023).
[53] Id. at 10 fig.3 (assessing artificially intelligent recruitment tools).
[54] Gary D. Friedman, Artificial Intelligence Is Increasingly Being Used to Make Workplace Decisions—But Human Intelligence Remains Vital, Fortune (Mar. 13, 2023, 7:10 AM), https://fortune.com/2023/03/13/artificial-intelligence-make-workplace-decisions-human-intelligence-remains-vital-careers-tech-gary-friedman/.
[55] Don Weinstein, People Leaders Need Data-Driven Technology Too, MIT Sloan Mgmt. Rev. (Nov. 22, 2022), https://sloanreview.mit.edu/article/people-leaders-need-data-driven-technology-too/ (“Data can help companies better understand and improve the employee experience, leading to a more productive workforce.”).
[56] Kales, supra note 5.
[57] Id.
[58] Lee Rainie et al., AI in Hiring and Evaluating Workers: What Americans Think, Pew Rsch. Ctr. (Apr. 20, 2023), https://www.pewresearch.org/wp-content/uploads/sites/20/2023/04/PI_2023.04.20_AI-in-Hiring_FINAL.pdf (“Roughly three-quarters of Americans say employers’ face recognition technology would misinterpret workers’ expressions; about half say it would recognize some skin tones better than others[.]”).
[59] See Brittany Kammerer, Hired by a Robot: The Legal Implications of Artificial Intelligence Video Interviews and Advocating for Greater Protection of Job Applicants, 107 Iowa L. Rev. 817, 819 (2022), https://ilr.law.uiowa.edu/sites/ilr.law.uiowa.edu/files/2023-02/N2_Kammerer.pdf (“While AI has many benefits such as removing human bias and improving efficiency, it also has risks such as algorithmic bias and data privacy.”).
[60] Clara Niel, Maryland Law Firm Adopts AI Tool to Improve Efficiency, Access, Balt. Sun (May 14, 2023, 10:59 AM), https://www.baltimoresun.com/maryland/bs-md-ai-law-firm-20230514-o36iry63vjhmdcvg5qtmjnizzy-story.html (highlighting statement of Joseph Gibson, managing attorney of The Employment Law Center of Maryland).
[61] Id.
[62] Kales, supra note 5.
[63] Id.
[64] Id.
[65] H.D. 1202, 2020 Leg., 441st Reg. Sess. (Md. 2020).
[66] See id.
[67] Governor Hogan Announces Maryland Manufacturing 4.0 Grant Program, Md. Dep’t of Com. (Aug. 22, 2022), https://commerce.maryland.gov/media/governor-hogan-announces-maryland-manufacturing-40-grant-program; see also H.D. 622, 2022 Leg., 444th Reg. Sess. (Md. 2022).
[68] H.D. 622, 2022 Leg., 444th Reg. Sess. (Md. 2022).
[69] Bryan P. Sears, Moore Announces Focus on AI, Updating State Computer Systems, Md. Matters (Jan. 8, 2024), https://www.marylandmatters.org/2024/01/08/moore-announces-focus-on-ai-updating-state-computer-systems/; see also Md. Exec. Order No. 01.01.2024.02, available at https://governor.maryland.gov/Lists/ExecutiveOrders/Attachments/31/EO%2001.01.2024.02%20Catalyzing%20the%20Responsible%20and%20Productive%20Use%20of%20Artificial%20Intelligence%20in%20Maryland%20State%20Government_Accessible.pdf.
[70] Md. Exec. Order No. 01.01.2024.02; Md. Manual On-Line, Governor’s Artificial Intelligence Subcabinet, https://msa.maryland.gov/msa/mdmanual/08conoff/cabinet/html/ai.html (last visited Apr. 10, 2025) (explaining that the AI Subcabinet, comprised of one chair and nine ex officio members in various state government roles, promotes the principles enlisted in the Executive Order, provides recommendations to the Governor regarding artificial intelligence, and coordinates statewide use of artificial intelligence); Md. Dep’t of Info. Tech., Memorandum from the Governor’s AI Subcabinet, https://doit.maryland.gov/SiteAssets/Pages/default/2025%20Maryland%20AI%20Enablement%20Strategy%20%26%20AI%20Study%20Roadmap.pdf (last visited Apr. 10, 2025) (finding the AI Subcabinet recently published an “AI Enablement Strategy & AI Study Roadmap” to promote responsible, ethical, and productive use of artificial intelligence).
[71] See S.B. 818, 2024 Leg., 446th Reg. Sess. (Md. 2024); S.B. 182, 2024 Leg., 446th Reg. Sess. (Md. 2024) (regulating the use of facial recognition technology by law enforcement agencies).
[72] Fitzgerald, supra note 5.
[73] Kales, supra note 5.
[74] Jeffrey Dastin, Insight – Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women, Reuters (Oct. 10, 2018, 8:50 PM), https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G/.
[75] Id.
[76] Fitzgerald, supra note 5.
[77] Kales, supra note 5.
[78] Id.
[79] Title VII of the Civil Rights Act of 1964, 42 U.S.C. §§ 2000e to 2000e-17 (as amended); Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964, U.S. Equal Employment Opportunity Commission (May 18, 2023) (providing guidance that the use of automated systems could violate Title VII by creating a disparate impact).
[80] Americans with Disabilities Act, 42 U.S.C. § 12101 et seq. (1990); see also U.S. Dep’t of Justice, Civ. Rts. Div., Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring 2–5 (providing guidance on how artificial intelligence and algorithmic bias can violate the ADA).
[81] Age Discrimination in Employment Act, 29 U.S.C. §§ 621–634; see, e.g., Press Release, EEOC Sues iTutorGroup for Age Discrimination, https://www.eeoc.gov/newsroom/eeoc-sues-itutorgroup-age-discrimination; see also discussion infra Section IV.C.iii.
[82] Terry Gross, Exposing the Secretive Company at the Forefront of Facial Recognition Technology, NPR (Sept. 28, 2023, 1:29 PM), https://www.npr.org/2023/09/28/1202310781/exposing-the-secretive-company-at-the-forefront-of-facial-recognition-technology (transcript).
[83] See discussion infra Section IV.
[84] H.D. 622.
[85] Fitzgerald, supra note 5.
[86] H.D. 1202.
[87] Fitzgerald, supra note 5.
[88] Id.
[89] See Biometric Information Privacy Act, 740 ILCS 14/15 (West 2025).
[90] H.B. 3773, 103rd Gen. Assemb., Reg. Sess. (Ill. 2024).
[91] Local Law 144 (2021). The law has created intense debate on the use of artificial intelligence by employers. See Tate Ryan-Mosley, Why Everyone Is Mad About New York’s AI Hiring Law, MIT Tech. Rev. (July 10, 2023), https://www.technologyreview.com/2023/07/10/1076013/new-york-ai-hiring-law/ (discussing concerns about the feasibility of audits); Wright et al., Null Compliance: NYC Local Law 144 and the Challenges of Algorithm Accountability, FAccT 11, 13 (June 2024), https://facctconference.org/static/papers24/facct24-113.pdf (finding that only 18 of 391 employers had posted bias audits in compliance with Local Law 144 and that null compliance hindered the impact of Local Law 144 on algorithmic hiring decision systems). As a result, New York City “modified the law by narrowing the scope to only cover automated employment decision tools that are being used without any human oversight[.]” Statement of Md. Chamber of Commerce, https://mgaleg.maryland.gov/cmte_testimony/2024/ecm/186CoZojs0cMC3h-uUyUWY5kZY2G0TDZ-.pdf.
Most recently, in 2024, New York Governor Kathy Hochul signed the Legislative Oversight of Automated Decision-making in Government Act, which mandates transparency and meaningful human oversight over state agency use of artificial intelligence. S.B. S7543A.
[92] S.B. 24-205, 74th Gen. Assemb., Reg. Sess. (Colo. 2024).
[93] See S.B. 1241, 2019–2020 Reg. Sess. (Cal. 2020).
[94] Id. Notably, in 2024, the California Privacy Protection Agency commenced formal rulemaking of proposed regulations on the use of artificially intelligent automated decision-making technology, including mandated risk assessments when there is “significant risk to consumers’ privacy” and when there is “a significant decision concerning a consumer or for extensive profiling.” California Privacy Protection Agency, Draft Automated Decisionmaking Technology Regulations § 7150 (2023), https://cppa.ca.gov/meetings/materials/20231208_item2_draft.pdf; see also Proposed Regulations on CCPA Updates, Cybersecurity Audits, Risk Assessments, Automated Decisionmaking Technology (ADMT), and Insurance Companies, California Privacy Protection Agency (2024), https://cppa.ca.gov/regulations/ccpa_updates.html (noting that the public comment period closed in February 2025).
[95] H.B. 5116, 103rd Gen. Assemb., Reg. Sess. (Ill. 2024).
[96] See S. 4400, 116th Cong. (2020).
[97] See generally Sec. Indus. Ass’n, Guide to U.S. Biometric Privacy Laws: A Reference Guide to State Laws on Biometric Information and Related Legislative Trends (2023), https://www.irisid.com/wp-content/uploads/2023/11/SIA-Guide-US-Biometric-Privacy-Laws-web-FINAL-c.pdf.
[98] See S. 3478, 118th Cong. (2023).
[99] Dwight A. Weingarten, Maryland’s New Artificial Intelligence Advisor Starts as Legislator Calls for Privacy Law, The Herald-Mail (Dec. 5, 2023, 4:59 AM) (statement by Nishant Shah, artificial intelligence advisor), https://www.heraldmailmedia.com/story/news/state/2023/12/05/maryland-is-at-starting-line-artificial-intelligence-advisor-says/71765907007/.
[100] Sam Janesch, Maryland Lawmakers Set Sights on Addressing Artificial Intelligence, Including Government Use, Balt. Sun (Jan. 8, 2024), https://www.baltimoresun.com/2024/01/08/maryland-lawmakers-set-sights-on-addressing-artificial-intelligence-including-government-use/.
[101] See, e.g., H.D. 1331, 2025 Leg., 447th Reg. Sess. (Md. 2025) (protecting Marylanders from decisions made by companies using high-risk artificial intelligent systems, including employment decisions); S.B. 0936, 2025 Leg., 447th Reg. Sess. (Md. 2025) (requiring reasonable care standard for the prevention of algorithmic discrimination in high-risk artificial intelligence systems); H.D. 1255, 2024 Leg., 446th Reg. Sess. (Md. 2024) (prohibiting employers from using automated employment decision tools when screening applicants unless subject to a yearly impact assessment).
[102] Press Release, Governor Moore Announces Major Action to Rebuild State Government and Modernize Maryland Department of Information Technology Services and Operations, Off. of Governor Wes Moore (Aug. 16, 2023), https://governor.maryland.gov/news/press/pages/governor-moore-announces-major-action-to-rebuild-state-government-and-modernize-maryland-department-of-information-technolo.aspx.
[103] Weingarten, supra note 99.
[104] Lawrence Norden & Benjamin Lerude, States Take the Lead on Regulating Artificial Intelligence, Brennan Ctr. for Just. (Nov. 6, 2023), https://www.brennancenter.org/our-work/research-reports/states-take-lead-regulating-artificial-intelligence.
[105] Id.
[106] Press Release, Soto, Chavez-DeRemer, Blunt Rochester, Garbarino Introduce Bipartisan Jobs of the Future Act of 2023, Darren Soto (July 6, 2023), https://soto.house.gov/media/press-releases/soto-chavez-deremer-blunt-rochester-garbarino-introduce-bipartisan-jobs-future; see also S. 5031, 118th Cong. (2023).
[107] Press Release, Booker, Heinrich, Young, Rounds Introduce Bipartisan, Bicameral Bill to Expand Access to Artificial Intelligence Research, Cory Booker (July 28, 2023), https://www.booker.senate.gov/news/press/booker-heinrich-young-rounds-introduce-bipartisan-bicameral-bill-to-expand-access-to-artificial-intelligence-research; see also S. 2714, 118th Cong. (2023).
[108] H.D. 1034, 2023 Leg., 445th Sess. (Md. 2023).
[109] Id.
[110] Id.
[111] Id.
[112] Weingarten, supra note 99.
[113] H.D. 1068, 2023 Leg., 445th Sess. (Md. 2023).
[114] Id.
[115] Id.
[116] H.D. 1132, 2023 Leg., 445th Sess. (Md. 2023).
[117] Id.
[118] Id.
[119] S.B. 1087, 2024 Leg., 446th Reg. Sess. (Md. 2024).
[120] Id.
[121] H.D. 1297, 2024 Leg., 446th Reg. Sess. (Md. 2024).
[122] Weingarten, supra note 99.
[123] Fitzgerald, supra note 5.
[124] Markel et al., AI and Employee Privacy: Important Considerations for Employers, Reuters (Sept. 29, 2023), https://www.reuters.com/legal/legalindustry/ai-employee-privacy-important-considerations-employers-2023-09-29.
[125] Hilke Schellmann, Auditors Are Testing Hiring Algorithms for Bias, but There’s No Easy Fix, MIT Tech. Rev. (Feb. 11, 2021), https://www.technologyreview.com/2021/02/11/1017955/auditors-testing-ai-hiring-algorithms-bias-big-questions-remain/.
[126] Jovana Davidovic, On the Purpose of Meaningful Human Control of AI, Frontiers in Big Data (Jan. 9, 2023), https://www.frontiersin.org/journals/big-data/articles/10.3389/fdata.2022.1017677/full.
[127] Melissa Heikkilä, AI Companies Promised to Self-Regulate One Year Ago. What’s Changed?, MIT Tech. Rev. (July 22, 2024), https://www.technologyreview.com/2024/07/22/1095193/ai-companies-promised-the-white-house-to-self-regulate-one-year-ago-whats-changed/.
[128] Lena Kemp, Navigating the AI Employment Bias Maze: Legal Compliance Guidelines and Strategies, Am. Bar Ass’n (Apr. 10, 2024), https://www.americanbar.org/groups/business_law/resources/business-law-today/2024-april/navigating-ai-employment-bias-maze/.
[129] Id.
[130] See infra Section IV.C.i.
[131] See infra Section IV.C.ii.
[132] White House Off. of Sci. & Tech. Pol’y, Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People (Oct. 2022), https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.
[133] See id.
[134] Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, supra note 132.
[135] Id.
[136] Exec. Order No. 14091, 88 Fed. Reg. 10825, 10831 (Feb. 22, 2023).
[137] Exec. Order No. 14110, 88 Fed. Reg. 75191 (Nov. 1, 2023).
[138] Id.
[139] Id.
[140] Press Release, Readout of Justice Department’s Interagency Convening on Advancing Equity in Artificial Intelligence, Off. of Pub. Affs. (Jan. 11, 2024), https://www.justice.gov/opa/pr/readout-justice-departments-interagency-convening-advancing-equity-artificial-intelligence.
[141] Id.
[142] Exec. Order No. 14179, 90 Fed. Reg. 8741 (Jan. 31, 2025).
[143] Id.
[144] Jackie Davalos, Trump’s Funding Cuts Threaten America’s AI Competitiveness, Bloomberg (Mar. 3, 2025, 10:03 AM), https://www.bloomberg.com/news/articles/2025-03-03/trump-s-funding-cuts-threaten-america-s-ai-competitiveness?embedded-checkout=true.
[145] Rohit Chopra et al., Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems 1 (2023), https://www.ftc.gov/legal-library/browse/cases-proceedings/public-statements/joint-statement-enforcement-efforts-against-discrimination-bias-automated-systems.
[146] Id. at 3.
[147] Lisa O. Monaco, Deputy Attorney General, Remarks at the University of Oxford on the Promise and Peril of AI (Feb. 14, 2024), https://www.justice.gov/opa/speech/deputy-attorney-general-lisa-o-monaco-delivers-remarks-university-oxford-promise-and.
[148] See supra note 79, at 1.
[149] EEOC History: 2020–2024, U.S. Equal Employment Opportunity Commission, https://www.eeoc.gov/history/eeoc-history-2020-2024 (last visited Mar. 27, 2025).
[150] The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees, EEOC (May 12, 2022), https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence.
[151] Press Release, EEOC Releases New Resource on Artificial Intelligence and Title VII, EEOC (May 18, 2023), https://www.eeoc.gov/newsroom/eeoc-releases-new-resource-artificial-intelligence-and-title-vii.
[152] See, e.g., EEOC v. iTutorGroup, Inc., No. 1:22-cv-02565 (E.D.N.Y.); Mobley v. Workday, Inc., No. 3:23-cv-00770 (N.D. Cal. Feb. 21, 2023), ECF No. 1.
[153] Press Release, iTutorGroup to Pay $365,000 to Settle EEOC Discriminatory Hiring Suit, U.S. Equal Emp. Opportunity Comm’n (Sept. 11, 2023), https://www.eeoc.gov/newsroom/itutorgroup-pay-365000-settle-eeoc-discriminatory-hiring-suit.
[154] Id.
[155] See Nat’l Inst. of Standards & Tech., Artificial Intelligence Risk Management Framework (AI RMF 1.0) (2023), https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf.
[156] Id.
[157] Id.
[158] Fitzgerald, supra note 5.
[159] Katie Shepherd & Erin Cox, Maryland Looks to Harness AI for Government Use with Executive Order, Wash. Post (Jan. 8, 2024, 5:26 PM), https://www.washingtonpost.com/dc-md-va/2024/01/08/maryland-ai-government-wes-moore/.
[160] McKendrick, supra note 2.
[161] Kales, supra note 5.
[162] Exec. Order No. 14110, 88 Fed. Reg. at 75191.




