As artificial intelligence finds its way into aspects of everyday life and becomes increasingly advanced, some state legislators feel a new urgency to create regulations for its use in the hiring process.
Artificial intelligence, commonly known as AI, has been adopted by a quarter of businesses in the United States, according to the 2022 IBM Global AI Adoption Index, a jump of more than 13% over the previous year. And many are beginning to use it in the hiring process.
State laws haven’t kept up. Only Illinois, Maryland, and New York City require employers to obtain applicants’ consent before using AI during certain parts of the hiring process. A handful of other jurisdictions are considering similar legislation.
“Legislators are critical, and as always, legislators are always late to the party,” says Maryland state delegate Mark Fisher, a Republican. Fisher sponsored his state’s law, which went into effect in 2020, regulating the use of facial recognition programs in hiring. It prohibits an employer from using certain facial recognition services—such as those that might cross-check applicants’ faces against outside databases—during an applicant’s interview process unless the applicant consents.
“Technology innovates first, and then it always seems like a good idea . . . until it isn’t,” Fisher says. “That’s when legislators step up and try to regulate things as best as they can.”
While AI developers push to innovate as fast as possible, with or without legislation, both developers and policymakers must think through the implications of their decisions, says Hayley Tsukayama, senior legislative activist at the Electronic Frontier Foundation, which advocates for civil liberties online.
For policymakers to write effective legislation, developers must be transparent about what systems are being used and open to considering what the potential problems could be, Tsukayama says.
“It’s probably not exciting to people who want to move faster or people who want to put these systems in their workplace right now or already have them in the workplace right now,” she says. “But I do think for policymakers, it’s really important to talk to a lot of different people, particularly people who are going to be affected by this.”
AI in hiring
AI can help with the hiring process by performing résumé evaluations, scheduling candidate interviews, and sourcing data, according to an analysis by Skillroads, which provides professional résumé-writing services that incorporate AI.
Some members of Congress are trying to act too. The proposed American Data Privacy and Protection Act aims to set rules for artificial intelligence, including AI risk assessments and its overall use, and would cover data collected during the hiring process. Introduced last year by Rep. Frank Pallone Jr., a New Jersey Democrat, it currently sits in the U.S. House Energy and Commerce Committee.
The Biden administration last year issued the Blueprint for an AI Bill of Rights, a set of principles to guide organizations and individuals on the design, use, and deployment of automated systems, according to the document.
In the meantime, lawmakers in some states and localities have worked to create policies. Maryland, Illinois, and New York City are the only places with laws explicitly covering job seekers who encounter AI during the hiring process, requiring companies to tell applicants when the technology is being used at certain points and to obtain consent before moving forward, according to Bryan Cave Leighton Paisner, a global law firm. California, New Jersey, New York State, and Vermont have also considered bills that would regulate AI in hiring systems, according to the New York Times.
Facial recognition technology is used by many federal agencies, including for cybersecurity and policing, according to the U.S. Government Accountability Office. Some industries use it as well. Artificial intelligence can link facial recognition programs with applicant databases in seconds, Fisher says, and he cites that capability as the concern that motivated his bill.
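Fisher’s concern is mechanically straightforward: modern facial recognition systems reduce a face image to a numeric embedding, and cross-checking a new face against a database is a fast nearest-neighbor search. The hypothetical sketch below uses made-up embeddings rather than any real vendor’s system, but it shows why matching against 100,000 records takes well under a second.

```python
import numpy as np

# Hypothetical setup: assume a facial recognition model has already
# reduced each face image to a 128-dimensional embedding vector.
rng = np.random.default_rng(seed=42)

database = rng.normal(size=(100_000, 128))    # 100,000 enrolled faces
database /= np.linalg.norm(database, axis=1, keepdims=True)

applicant = rng.normal(size=128)              # the applicant's face
applicant /= np.linalg.norm(applicant)

# Cosine similarity against every record is one matrix-vector product.
scores = database @ applicant
best = int(np.argmax(scores))
print(f"Closest database record: #{best} (similarity {scores[best]:.3f})")
```

On commodity hardware, that comparison finishes in milliseconds, which is why consent laws like Maryland’s focus on whether such a cross-check happens at all rather than on how it is performed.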
His goal, he says, was to craft a narrow measure that could open the door for future AI-related legislation. The bill, which took effect in 2020 without the signature of then-Governor Larry Hogan, a Republican, covers only the private sector, but Fisher says he’d like to see it expanded to include public employers.
Legislative challenges
Policymakers’ understanding of artificial intelligence, particularly when it comes to its civil rights implications, is almost nonexistent, says Clarence Okoh, the senior policy counsel at the Washington, D.C.-based nonprofit Center for Law and Social Policy (CLASP) and a Social Science Research Council Just Tech Fellow.
As a result, he says, companies that use AI often are regulating themselves.
“Unfortunately, I think what’s happened is a lot of AI developers and sales have been very effective at crowding out the conversation with policymakers around how to govern AI and to mitigate social consequences,” Okoh says. “And so, unfortunately, there’s a lot of interest in developing self-regulatory schemes.”
Some self-regulatory practices include audits or compliance reviews based on general guidance, such as the Blueprint for an AI Bill of Rights, Okoh says.
The results have sometimes raised concerns. Some organizations operating under their own guidelines have used AI recruiting tools that showed bias.
In 2014, a group of developers at Amazon began building an experimental, automated program to review job applicants’ résumés for top talent, according to a Reuters investigation. By 2015, the company had found that the system effectively taught itself that male candidates were preferable.
Those close to the project told Reuters the experimental system was trained to filter applicants by observing patterns in résumés submitted to the company over a 10-year period, most of which came from men. Amazon told Reuters the tool “was never used by Amazon recruiters to evaluate candidates.”
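Reuters did not publish Amazon’s code, but the failure mode it describes is easy to reproduce. In this hypothetical sketch (synthetic data, not Amazon’s system), a classifier never sees a protected attribute directly, yet it learns to penalize a visible feature that correlates with it, much like the gendered résumé wording flagged in the Reuters reporting.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 10_000

# Synthetic history: a protected attribute the model never sees,
# plus a visible proxy feature that correlates with it.
group = rng.integers(0, 2, size=n)             # hidden attribute
proxy = group + rng.normal(scale=0.3, size=n)  # visible, correlated
skill = rng.normal(size=n)                     # genuinely job-relevant

# Biased historical labels: past decisions favored group 0.
hired = (skill + (group == 0) + rng.normal(scale=0.5, size=n)) > 0.5

# Train only on the visible features; the protected attribute is excluded.
X = np.column_stack([proxy, skill])
model = LogisticRegression().fit(X, hired)

# A clearly negative weight: the model penalizes the proxy,
# reproducing a bias it was never explicitly given.
print("learned weight on proxy feature:", round(model.coef_[0][0], 2))
```

Simply dropping the protected column is not enough: as long as correlated proxies remain in the data, a model can reconstruct the historical bias, which is one reason advocates push for auditing a system’s outcomes rather than only its inputs.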
But some companies say AI is helpful and that strong ethics rules are in place.
Helena Almeida, vice president-managing counsel at ADP, a human resources management software company, says its approach to using artificial intelligence in its products follows the same ethical guidelines as before the technology emerged. Regardless of the legal requirements, Almeida says, ADP considers it an obligation to go above and beyond the basic framework to ensure its products don’t discriminate.
Artificial intelligence and machine learning are used in several of ADP’s hiring-support services, and many current laws already apply to AI, she says. ADP also offers clients certain services that use facial recognition technology, according to its website. As the technology evolves, ADP has adopted a set of principles governing its use of AI, machine learning, and more.
“You can’t discriminate against a particular demographic group without AI, and you also can’t do it with AI,” Almeida says. “So, that’s an essential part of our framework and how we look at bias in these tools.”
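ADP doesn’t publish its exact testing methodology, but a widely used baseline for the kind of bias check Almeida describes is the EEOC’s “four-fifths rule”: if any group’s selection rate falls below 80% of the highest group’s rate, that is treated as evidence of adverse impact. A minimal sketch, with invented numbers:

```python
# Adverse-impact check using the EEOC four-fifths rule.
# All counts below are invented for illustration.
selections = {
    "group_a": {"applied": 400, "advanced": 120},  # 30% selection rate
    "group_b": {"applied": 350, "advanced": 70},   # 20% selection rate
}

rates = {g: c["advanced"] / c["applied"] for g, c in selections.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "OK" if ratio >= 0.8 else "POTENTIAL ADVERSE IMPACT"
    print(f"{group}: rate={rate:.2%}, ratio={ratio:.2f} -> {flag}")
```

Real audits use actual applicant-flow data and more granular categories, but the arithmetic at their core is this simple, and the same check works whether a human or an algorithm made the decisions.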
One way to avoid issues with AI in the hiring process is to maintain human involvement, from product design to regular monitoring of automated decisions.
Samantha Gordon, chief programs officer at TechEquity Collaborative, an organization that advocates for tech workers, says that when machine learning or data collection is used with no human input, the system risks producing outcomes biased against certain groups.
In one example, HireVue, a platform that helps employers collect video interviews and assessments from job seekers, announced in 2021 that it had removed the facial analysis component of its assessments after an internal review found it correlated less with job performance than other elements of the algorithmic assessment, according to a company release.
“I think that’s the thing that you don’t have to be a computer scientist to understand,” Gordon says. Speeding up the hiring process, she says, leaves room for error, and that’s where legislators are going to have to intervene.
And on both sides of the aisle, Fisher says, legislators think companies ought to show their work.
“I would like to think that, generally speaking, people would like to see there be a lot more transparency and disclosure in the use of this technology,” Fisher says. “Who’s using this technology? And why?”