
Shaping up AI sans Societal Bias

3AI March 15, 2021

Cricket, a sport that has been a rage across the world and commands an ardent following, has come a long way from England's greens, where the inaugural ICC Women's Cricket World Cup match was played in 1973, to the world stage it occupies today. The many laurels won by the prodigies of women's cricket over the years corroborate that the women's game is several notches above an amateur pursuit. It also happens to be the sport's oldest world championship, contested two years before the first Men's Cricket World Cup in 1975; yet cricket, a gender-neutral sport, is still perceived as merely a 'Gentleman's Game'. Sounds fair?

An analysis of our day-to-day lexicon reveals the images etched in our minds: man is to doctor as woman is to nurse; man is to programmer or trader as woman is to homemaker. Beyond language, multiple reports have documented the disparate and harsher treatment, in one mode or form, meted out to people of color and ethnic minorities compared with their white counterparts, at times with lethal consequences. Does any of this sound fair and equitable by any stretch of imagination? The answer is no.

A vantage-point analysis of these and many similar instances from living memory lays bare the thread stitching them together: entrenched societal prejudice. Be it a serious business like finance or a mundane task, seemingly gender-agnostic professions have become entwined with a specific gender or group, and subtle, subversive racial discrimination has been at play in crucial sectors with the potential for fatal consequences. However innocuous such bias and stereotypes might appear on the surface, their capacity to cause precarious repercussions, by systematically advantaging privileged groups and disadvantaging unprivileged ones, is worth every iota of our attention and curative action. The repercussions are alarmingly aggravated when this bias is embedded and promulgated by a prodigiously potent and ubiquitous technology: Artificial Intelligence (AI).

There is no doubt that, irrespective of the mundanity or criticality of a task, AI holds the potential to cause seismic shifts, transcending and transforming every stratum of society in this boundaryless digital world. There is, however, a caveat. In a given context, an AI model is only as fair, equitable and inclusive as the divergent perspectives, data and tapestry of societal nuances infused into it by the people developing and interacting with it. The following illustrations substantiate this fact.

From automated announcements on public platforms to the world's most celebrated AI-infused humanoids, Erica by Hiroshi Ishiguro and Sophia by Hanson Robotics, the default persona is a young woman's voice and avatar. Meanwhile, the physical rescue and parkour robots from MIT and Boston Dynamics have been designed noticeably male in physique and named Hermes and Atlas respectively. A careful examination of in-vogue digital voice assistants, from Eliza, the first chatbot, developed in the mid-1960s, to celebrated assistants such as Alexa, Siri and Cortana, shows that in their original form they have been personified by a female name, voice and demeanor.


Empirical research suggests that people prefer authoritative statements delivered in a male voice and helping or obliging roles delivered in a female voice. In light of this, companies design and personify these voice assistants as female, unfailingly upbeat and polite even in the face of browbeating, in the hope that such behavior maximizes a user's desire to keep interacting with the device. While this might steer the top and bottom lines for these companies, the enduring dent it casts on society in terms of fairness and equity is palpably disquieting at multiple levels.

The depth and magnitude of this dent can be gauged from how pervasively these gendered voice assistants are used. As of 2019, around 3.25 billion digital voice assistants were reportedly in use across the world, a number projected to grow to 8 billion by 2023, higher than the world's present population [1]. The greater the exposure to the gendered associations radiated by these AI-powered voice assistants and humanoids, the higher the propensity to adopt, aggravate and propagate these stereotypes, with pernicious impact across society and the generations to come.

Observing these instances closely, there is no denying that society's echo chambers and prejudices have entrenched a significant degree of bias in most AI models. This is why context-sensitive awareness and validation of the model throughout the overarching process of data collection, development and delivery are of prime importance.

A canonical instance of how a high-stakes recidivism prediction model can tip towards racially discriminatory predictions, if the underpinning data and algorithm are not anchored in the nuances of the surrounding social context, is the well-known risk assessment instrument COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). Trained on historical data and optimized for overall accuracy, the COMPAS model produced roughly twice as many false positives for recidivism for defendants of African American ethnicity as for those of Caucasian ethnicity. Had there been more prescient cognizance of the disparities arising from sensitive and protected attributes such as race, gender and ethnicity in the survey questions used to predict recidivism, the COMPAS team might have been able to proactively test the model using context-curated, sensitivity-aware approaches while also adjusting for bias in a domain as high-stakes as judicial sentencing.
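One concrete way to surface such a disparity is to compare false positive rates across groups, which is precisely the gap reported for COMPAS. The sketch below is a minimal illustration in Python, with hypothetical column names and toy data rather than the actual COMPAS pipeline:

```python
import pandas as pd

# Hypothetical scored dataset: actual outcomes vs. model predictions.
df = pd.DataFrame({
    "race":                ["A", "A", "A", "A", "B", "B", "B", "B"],
    "reoffended":          [0,   0,   1,   0,   0,   0,   1,   1],
    "predicted_high_risk": [1,   0,   1,   0,   1,   1,   1,   0],
})

# False positive rate per group: P(predicted high risk | did not reoffend).
for group, g in df.groupby("race"):
    negatives = g[g["reoffended"] == 0]
    fpr = negatives["predicted_high_risk"].mean()
    print(f"group {group}: false positive rate = {fpr:.2f}")

# A large gap between the groups' rates, as reported for COMPAS, signals
# a violation of the equalized-odds notion of fairness.
```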


Similarly, when societal bias about race and economic class gets baked into a high-stakes sector such as education, as happened with the algorithm used to grade school-leaving exams in England in 2020, it can cause life-changing consequences with the potential to derail one's career trajectory [2].

A swift analysis of the above illustrations might invite the knee-jerk reaction of removing the protected and sensitive attributes in an attempt to do away with the bias; however, this proves futile. The grim reality is rooted in the fact that models can reconstruct these protected and sensitive attributes from other features known as proxy variables, such as postal codes or the public utilities accessed by a specific class. Moreover, the perils of using inappropriate proxies, compounded by datasets that underrepresent certain groups, may be hard to avoid in predictive modelling.
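A simple, if rough, probe for proxies is to test how well each candidate feature alone predicts the withheld protected attribute. The sketch below uses hypothetical, synthetically generated features; any feature that predicts group membership far above chance deserves scrutiny as a potential proxy:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: postal_code is correlated with the protected group,
# so it can leak group membership even after 'group' itself is dropped.
n = 1000
group = rng.integers(0, 2, n)                    # protected attribute
postal_code = group * 2 + rng.integers(0, 2, n)  # proxy: encodes group
income = rng.normal(50, 10, n)                   # unrelated feature

for name, feature in [("postal_code", postal_code), ("income", income)]:
    X = feature.reshape(-1, 1)
    acc = cross_val_score(LogisticRegression(), X, group, cv=5).mean()
    print(f"{name}: predicts protected group with accuracy {acc:.2f}")

# Accuracy well above 0.5 for postal_code flags it as a likely proxy
# that deserves scrutiny before use in a high-stakes model.
```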

A cogent example of this is the Allegheny Family Screening Tool (AFST), used to predict the risk of child maltreatment. The underpinning statistical model employs more than 130 county-specific predictive variables derived from a finite universe of data that consists largely of information about the use of public resources. Beyond a score of proxy variables, a quarter of the variables wielded by the AFST are direct measures of poverty, and another quarter derive from information about the use of public utilities and programs used predominantly by poor and marginalized communities. The corresponding information about affluent parents, who do not use public utilities and who access private treatment for drug addiction or mental health, is thus missing from the dataset. This privation makes the dataset non-representative and oversampled for the poor, leading to poverty profiling: unfairly targeting low-income families for extra scrutiny because of their personal or systemic disadvantage rather than their behavior. The model thereby confuses parenting while poor with poor parenting, and unreasonably flags parents who access public utilities as risks to their offspring.

While a range of causes, from a poorly sampled dataset to an ill-defined objective function, can be blamed for the presence and proliferation of bias and stereotypes in AI algorithms, mitigating them mandates a radical shift at every step of model design and building. Being proactive and nudging for fairness starts with becoming conversant with the basic tenets, presence and significance of fairness in discrete contexts, followed by paying heed to the genesis of bias and the avenues through which it crawls into the model.

Fairness, by definition, is a nuanced and context-sensitive concept that differs across cultures, societies and time. To gauge the extent and impact of the presence or absence of fairness and equity at the individual and group levels, a range of fairness metrics is brought into play, such as the Theil index, disparate impact, statistical parity (also called demographic parity), average odds difference and equalized odds.
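As a minimal sketch over hypothetical predictions, two of these group-level metrics, statistical parity difference and disparate impact, can be computed directly from their definitions:

```python
import numpy as np

# Hypothetical binary predictions (1 = favorable outcome) and group labels.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # 1 = unprivileged

rate_priv   = y_pred[group == 0].mean()  # P(favorable | privileged)
rate_unpriv = y_pred[group == 1].mean()  # P(favorable | unprivileged)

# Statistical (demographic) parity difference: 0 means parity.
spd = rate_unpriv - rate_priv
# Disparate impact: ratio of favorable rates; values below ~0.8 often
# raise concern (the 'four-fifths rule' from US employment law).
di = rate_unpriv / rate_priv

print(f"statistical parity difference = {spd:+.2f}")
print(f"disparate impact = {di:.2f}")
```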


Moreover, bias can creep in at any phase of building an AI model: pre-processing, in-processing or post-processing. The longer the delay in identifying and rectifying bias at each of these stages, the more the task resembles unscrambling an egg. While de-biasing techniques can be exerted at each stage, pre-processing is the prime phase for mitigating bias, where techniques such as the disparate impact remover and the suppression of information on sensitive attributes can be employed.
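As one illustrative pre-processing sketch, hand-rolled from the well-known reweighing scheme of Kamiran and Calders rather than taken from any particular library, and using hypothetical column names, each training instance can be weighted so that the protected group and the favorable label become statistically independent:

```python
import pandas as pd

# Hypothetical training data: 'group' is the protected attribute,
# 'label' is the outcome (1 = favorable).
df = pd.DataFrame({
    "group": [0, 0, 0, 0, 1, 1, 1, 1],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

# Reweighing (Kamiran & Calders): w(g, y) = P(g) * P(y) / P(g, y),
# the expected joint frequency under independence over the observed one.
p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.groupby(["group", "label"]).size() / len(df)

df["weight"] = [
    p_group[g] * p_label[y] / p_joint[(g, y)]
    for g, y in zip(df["group"], df["label"])
]
print(df)

# These weights can be passed to most scikit-learn estimators via the
# sample_weight argument of fit(), upweighting under-favored combinations.
```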

While the context-sensitive deployment of each of the pointers and techniques discussed above is crucial to eliminating bias, the diversity of the teams responsible for design, data collection, development and validation matters just as much. A revitalized focus on taking the edge off the diversity famine and creating prismatic teams in the workplace is of the essence to tip the scale towards fairness and equity in AI models. A team with a low diversity deficit not only enables everyone aboard to identify their blind spots but also invigorates the team to be more context- and culture-sensitive. The success of bias-free and equitable AI hinges on a multitude of factors, including those discussed above, and strategic efforts from one and all are needed to eradicate bias and stereotypes.

To rise to the challenge of biased AI, it is important to play to the strength of diversity, which brings divergent thought processes to the fore and exposes blind spots. Celebrating differences, together with an inclusive and steadfast strategy to mainstream discussions on alleviating bias and stereotypes, will be the first step towards breeding fairness. All of the above will help this tremendously utilitarian technology thrive across the spectrum while infusing credibility into an incredibly useful AI.

References: 

[1] https://www.statista.com/statistics/973815/worldwide-digital-voice-assistant-in-use/

[2] https://www.axios.com/england-exams-algorithm-grading-4f728465-a3bf-476b-9127-9df036525c22.html

About the Author: Aparana Gupta, Analytics & Data Science Leader, Cloud Engineering, Oracle

A seasoned professional with 13+ years of rich and diverse experience and proficiency in transforming businesses into data-driven, cloud-native, machine-learned and analytics-powered cultures, with a focus on Data Quality, Democratization, Governance & Lineage.

My interests & forte include: Modern Data Warehouse, Modern Analytics, Risk & Finance Analytics, Cohort Analysis, Sentiment Analytics, Conjoint Analytics, Financial & Regulatory Frameworks (BCBS 239 – RDAR, CCAR, FRTB, CECL, IFRS), Statistical & Predictive Modeling, Forecasting & Optimization, Supervised & Unsupervised Machine Learning, Credit Risk Model Creation & Validation, Business Intelligence & Data Visualization (Tableau, Qlik Sense), SPSS, R, Python, SQL, PL/SQL, Agile Framework, DevOps, Internet of Things, Cloud Computing.

Soft Skills & Strengths: Effective & Impactful Management of Globally Distributed Agile Teams, Stakeholder Management, Presentation & Communication Skills, Proficient Project Management & Delivery, Teamwork & Collaboration, and Discerning Problem-Solving Skills.

Title image: freepik.com
