AI Infiltrating Our Lives - States Step In To Curb Bias

You may have heard the buzz around AI tools like ChatGPT, but behind the scenes, artificial intelligence has already pervaded our everyday lives in ways most people don't even realize. AI algorithms are being used to screen job resumes, vet rental applications, and even make decisions about medical care. And there's a major problem - many of these AI systems have been found to discriminate, favoring certain races, genders, and income levels over others.

With the federal government stuck in inaction, several states are now taking matters into their own hands to regulate bias in AI decision-making tools. Lawmakers in at least seven states - California, Colorado, Rhode Island, Illinois, Connecticut, Virginia, and Vermont - are introducing sweeping legislation to get a handle on this largely unregulated domain.

The Trojan Horse of Historical Bias

So how exactly do these seemingly objective AI algorithms end up discriminating? The issue often stems from the data they are trained on. As an example, Amazon had to scrap an AI hiring tool nearly a decade ago after finding it favored male candidates. Why? The system learned to prefer men by studying years of male-dominated resumes.

As Suresh Venkatasubramanian, a Brown University professor who helped author the White House's AI Bill of Rights, explains: "If you are letting the AI learn from decisions that existing managers have historically made, and if those decisions have historically favored some people and disfavored others, then that's what the technology will learn."

In other words, by training AI on historical data that already reflects human biases of the past, we are simply perpetuating those same inequalities through the new technologies meant to be objective and fair.

Impact Assessments and Public Audits 

The proposed state legislation aims to combat this by requiring companies to conduct "impact assessments" when using automated decision tools involving AI. This would entail analyzing exactly how the AI factors into decisions, what data is used to train it, what discrimination risks it poses, and what safeguards are in place.

Some bills go further by mandating companies disclose their use of AI to consumers, allowing people to opt-out in certain cases. But experts like Venkatasubramanian argue these impact assessments alone, without rigorous public auditing of whether bias actually exists, may not be enough.

He and others are calling for requiring third-party bias audits to test if an AI system is discriminating, and making those results transparent to truly hold companies accountable. Unsurprisingly, the tech industry is pushing back against such public auditing, arguing it could expose trade secrets.

The Tightrope of Innovation and Oversight 

Walking this tightrope between innovation and ethical oversight remains the core challenge as lawmakers grapple with a powerful industry advancing AI capabilities at breakneck speed. Just last year, only about a dozen AI-related bills managed to pass, while hundreds more failed or stalled.

Yet the stakes are high, as Venkatasubramanian bluntly states: "AI does in fact affect every part of your life whether you know it or not...It covers everything in your life. Just by virtue of that you should care."

These initial state efforts represent important first steps on the long road to establishing badly needed guardrails for AI systems now embedded across core facets of our lives. Whether they strike the right balance remains to be seen. But one thing is clear - momentum is building for society to finally reckon with the impacts of algorithms that affect us all, often without our awareness.
