Three ways the new AI Bill of Rights can affect fintech HR and ops teams
Background
On Tuesday, the Biden Administration published a blueprint for an AI Bill of Rights. The document calls for: safe and effective systems; algorithmic discrimination protections; data privacy; clear notice and explanation channels; and human alternatives where possible.
“Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination,” the blueprint reads, directly naming unchecked alt-data use in finance as a cause for concern. But the blueprint frames ethical, regulated use of AI and large data sets as a force for good: “These outcomes are deeply harmful—but they are not inevitable.”
If this AI Bill of Rights takes hold and shapes the US regulatory landscape in its image, then human resources and operations teams—notably in fintech—should prepare for a new suite of hiring needs.
1. Boosting demand for compliance-related hires
While AI lobbyists may fear that the proposed Bill of Rights could cost the industry jobs, the framework seems poised to create several new job markets. For one, the Bill calls for “generally accessible plain language documentation” describing AI-powered systems, as well as “access to a person who can quickly consider and remedy problems you encounter.” In other words, more content creation and support roles may arise from this Bill of Rights in an effort to make everyday encounters with AI simpler to understand and resolve.
For companies using AI in recruitment—for example, to scan resumes and cover letters—candidates may have the right to request that a human being perform that same task. This could force companies to build out their hiring teams, or contract with outsourcers to accommodate hiring crunches and candidates who want to bypass automated hiring processes.
And, of course, with greater compliance demands comes greater demand for compliance and legal teams. HR and ops teams will be on the lookout for legal experts with experience in AI compliance, which may force corporations to make more competitive offers in this field.
2. Reconsidering workplace monitoring
The Biden Administration’s proposed blueprint calls for curbing workplace monitoring and similar practices in other venues. “Continuous surveillance and monitoring should not be used in education, work, housing, or in other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access,” the Bill reads.
Fintechs using screen-monitoring programs or time-tracking platforms may have to ditch them for self-reporting systems. Depending on how the framework is interpreted, insurtechs may also have to tweak their telematics systems to address privacy and transparency concerns.
3. Remaining in a holding pattern
As of now, this blueprint remains exactly that—a proposed framework rather than actual law. To Sneha Revanur, who leads Encode Justice, an AI and youth justice organization, the blueprint currently lacks enforcement mechanisms, making its ability to hold the private sector accountable all but moot.
“Though it is limited in its ability to address the harms of the private sector, the AI Bill of Rights can live up to its promise if it is enforced meaningfully, and we hope that regulation with real teeth will follow suit,” she said.
If the private sector does see this blueprint as a sign that enforcement is to come, then we might expect leading alt-data and AI-forward players across sectors to transform privacy into a unique selling point, adapting to inevitable changes in policy before they arrive.