
Artificial Intelligence (AI) has transformed Human Resource Management (HRM), offering efficiency and objectivity in processes such as recruitment and performance evaluation. However, AI-driven HRM systems are not without challenges, particularly the biases embedded in their design, which can disproportionately affect marginalized groups—including non-binary individuals, women, racial minorities, and persons with disabilities. This paper investigates algorithmic discrimination in AI recruitment tools and HR analytics, focusing on how it affects marginalized groups and the resulting implications for fairness, compliance, and career advancement in the workplace. Employing a doctrinal research methodology, the study examines the legal, ethical, and policy frameworks governing AI in HRM, highlighting the regulatory gaps that allow bias to persist. Through an analysis of legal precedents, AI ethics guidelines, and real-world case studies, the paper proposes actionable solutions for creating more inclusive AI-driven HRM practices. Ultimately, this study aims to inform policymakers, HR professionals, and AI developers about the importance of ensuring fairness and inclusivity in AI systems, fostering a more equitable work environment for all individuals, regardless of gender identity.