Who Deserves the Promotion? Some Managers Ask ChatGPT


The Growing Use of AI in Human Resources

As artificial intelligence (AI) continues to transform the business landscape, more and more managers are turning to this technology for insights into personnel decisions. While AI can offer valuable data-driven perspectives, experts warn that its use in human resources could expose companies to significant legal and ethical risks if not implemented carefully.

A recent survey conducted by Resume Builder, which included responses from 1,342 managers, revealed that 60% of respondents rely on AI tools when making decisions about their direct reports. Among these users, 78% use AI to determine raises, 77% for promotions, 66% for layoffs, and 64% for deciding whom to fire. These numbers highlight the increasing dependence on AI in critical HR functions.

The survey also uncovered concerning trends. More than one in five managers frequently allow AI to make final decisions without human input. Additionally, two-thirds of those using AI for employee management have not received any formal training on the technology. Nearly half of the managers reported being tasked with evaluating whether AI could replace their employees.

Stacie Haller, chief career adviser at Resume Builder, emphasized the risks associated with using AI without proper oversight. She stated that while AI can provide data-driven insights, it lacks context, empathy, and judgment. “Organizations have a responsibility to implement AI ethically to avoid legal liability, protect their culture, and maintain trust among employees,” she said.

There have already been instances where AI has led to legal challenges. For example, Workday Inc., a company that provides human resources management software, is currently defending itself against a class-action lawsuit alleging that its AI-driven applicant recommendation system violated federal anti-discrimination laws. Although Workday itself is not the plaintiffs’ employer, they argue the company should be held liable as an agent of the employers that used its screening tools.

Ryan Zhang, founder and CEO of Notta.ai, has long warned about the potential for litigation when companies use AI tools in employment decisions. He pointed out that AI systems can inherit biases from the data they are trained on: if an AI is trained on historical hiring and promotion data that reflects existing gender imbalances, it may perpetuate those biases. “Anti-employment discrimination laws don’t take a holiday just because AI comes into play,” Zhang said. “If your AI system systematically passes over women for advancement opportunities or suggests laying off older workers, you’re facing lawsuits.”

Zhang advises managers to test their AI systems for bias and to make sure they understand how these platforms arrive at their recommendations. Humans, he added, should stay in the loop for the toughest calls. “Can I explain this decision to my employee in plain English?” he asked. “If you can’t because it emerged from an AI black box, don’t make the decision.”
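For managers who want a concrete starting point, a bias check of the kind Zhang describes can be as simple as comparing how often a tool recommends people from each demographic group and applying the EEOC’s “four-fifths” rule of thumb. The Python sketch below is illustrative only; it is not from the article, and the records are hypothetical. It flags any group whose selection rate falls below 80% of the highest group’s rate.

from collections import defaultdict

# Hypothetical records exported from an AI promotion-recommendation tool:
# each entry is (demographic_group, recommended_for_promotion).
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Tally recommendations per group.
counts = defaultdict(lambda: {"recommended": 0, "total": 0})
for group, recommended in records:
    counts[group]["total"] += 1
    counts[group]["recommended"] += int(recommended)

# Selection rate per group, compared against the best-treated group.
rates = {g: c["recommended"] / c["total"] for g, c in counts.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    status = "flag for review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.0%}, ratio to top group {ratio:.2f} ({status})")

A screen like this is only a first pass: clearing the four-fifths threshold does not make a tool legally safe, and falling below it does not prove discrimination, but it gives the human in the loop something concrete to question before acting on the tool’s output.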

Irina Mocanu, senior talent management and organizational development adviser at Workleap, believes AI can help streamline promotion decisions but only when used with caution and accountability. “At its best, AI is a thought partner: helping managers spot patterns, reduce bias, and bring structure to subjective processes. But it’s not a magic wand. The grey zones of leadership — like weighing context, values or potential — still require human judgment,” she said.

Ann Skeet, senior director of leadership ethics at the Markkula Center for Applied Ethics, added that managers ultimately lead people, not technology. “Using AI tools to help make promotion and performance-review decisions can erode trust with their direct reports and make any resulting promotions or decisions less meaningful,” she said. She also warned about the risk of manager deskilling, noting that relying too heavily on AI could cause critical thinking skills to atrophy.

The rise of AI has also sparked concerns about job security for white-collar workers. The term “white-collar recession” has gained traction as companies like Amazon have targeted middle managers for layoffs. Meanwhile, the Trump administration has moved to cut consulting contracts and federal jobs, further intensifying the pressure on knowledge workers.

A survey by Adecco found that nearly one-quarter of white-collar professionals feel generative AI has made their skills less relevant. This highlights the need for workers to adapt, either by evolving into hybrid roles that work alongside AI or by transitioning to industries still in demand, such as aerospace, finance, and healthcare.

According to a 2024 executive-opinion survey released as part of the World Economic Forum’s annual report on the future of jobs, 41% of those surveyed planned to downsize their workforces where AI can replicate people’s work. As AI continues to reshape the workplace, businesses and employees alike must navigate the complex interplay between technology and human judgment.
