Bias in AI Systems and Its Impact on Deprived Groups

Akash Baburao Lonkar

Abstract

Artificial Intelligence (AI) is now used to make decisions in many areas, such as jobs, healthcare, education, banking, policymaking, and government welfare schemes. Although people often believe AI is neutral and objective, these systems can in fact repeat and worsen the social inequalities that persist in society. This paper explains how AI bias harms deprived and marginalised groups such as Dalits, Adivasis, religious minorities, women, and people with disabilities. AI becomes biased when the data used to train it is incomplete, unequal, or influenced by stereotypes. Problems also arise when algorithms are designed in ways that favour dominant social groups. This leads to real-world consequences: hiring software may reject applicants based on caste-related names or rural backgrounds, and predictive policing systems may unfairly target communities that are already vulnerable.


These issues deepen social exclusion by limiting access to essential opportunities and services. The paper argues that reducing AI bias is not only a technical challenge but also a social responsibility. It calls for more inclusive datasets, transparent algorithms, regular bias checks, involvement of affected communities in AI design, and strong regulations. Ensuring fairness and accountability in AI is necessary to prevent digital discrimination, protect the constitutional promise of equality for all, and uphold constitutionalism.
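To make the "regular bias checks" recommendation concrete, the sketch below (not taken from the paper; the data, group labels, and threshold are assumptions for illustration) shows one common audit: comparing a hiring model's selection rates across demographic groups and flagging a low disparate-impact ratio, a heuristic often summarised as the "four-fifths rule".

```python
# A minimal sketch of one kind of bias check, assuming an audit log of
# (applicant_group, model_decision) pairs. All data here is hypothetical.
from collections import defaultdict

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Fraction of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        positives[group] += int(selected)
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(decisions)

# Disparate-impact ratio: lowest group selection rate divided by the highest.
# A ratio below 0.8 is a common red flag that warrants closer human review.
ratio = min(rates.values()) / max(rates.values())

print(f"Selection rates: {rates}")
print(f"Disparate-impact ratio: {ratio:.2f}"
      + (" -- below 0.8, investigate further" if ratio < 0.8 else ""))
```

A check like this is deliberately simple: it does not prove or disprove discrimination, but it is cheap enough to run routinely, which is the spirit of the paper's call for regular audits.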

Article Details

How to Cite
Lonkar, A. B. (2026). Bias in AI Systems and Its Impact on Deprived Groups. International Journal on Advanced Electrical and Computer Engineering, 15(1S), 146–151. Retrieved from https://journals.mriindia.com/index.php/ijaece/article/view/1351
Section
Articles
