While AI is currently a hot topic, many still hesitate to embrace the swiftly evolving technology. Over a third of US employees fear AI might replace their jobs, and some HR executives express concerns about its unpredictable impact on their positions and staff.
TL;DR
- Many US employees fear AI might replace their jobs, while HR executives worry about its impact.
- Steven Mills of Boston Consulting Group discusses AI risks and opportunities, emphasizing responsible design.
- AI can increase job satisfaction and efficiency by acting as a collaborative thinking tool.
- Employers should establish clear boundaries for AI use, focusing on responsible adoption and skill development.
HR Brew recently sat down with Steven Mills, chief AI ethics officer at Boston Consulting Group, to demystify some of the risks and opportunities associated with AI.
This conversation has been edited for length and clarity.
How do you deal with workers’ AI hesitations and fears?
Once people start using the tech and realizing the value it can bring to them, they actually start using it more, and there’s a bit of a virtuous cycle. They actually report higher job satisfaction. They feel more efficient. They feel like they make better decisions.
We also believe it's crucial to educate people about the technology's capabilities and limitations, and to specify where it shouldn't be applied. My own stance falls somewhere between those perspectives.
Where do you see the biggest risks with AI?
At [BCG], we've established a comprehensive procedure; if a situation is identified as a high-risk area, it triggers a thorough review process to determine, “Are we even comfortable using AI in this way?”
Say we're developing the technology. The review thoroughly outlines every potential risk, such as the possibility of providing factually wrong information or unintentionally steering users toward poor choices. Then, as we build the product, we determine what constitutes an acceptable risk threshold for each of those aspects.
Some people fear that incorrectly deployed AI could result in the technology learning to reinforce biases and create more potential for discrimination. How can we make sure that there’s a diversity of thought within LLMs?
We want to evaluate the input to output from the product perspective. Again, it goes to looking at the potential risks, which might be different types of bias, whether that’s bias against any protected group or things like urban versus rural. These things can exist in models. We really talk a lot about responsible AI by design. It can’t be an afterthought: when you conceptualize the product, you design for these things from the start and engage users in a meaningful way.
What do you hear from HR leaders about their feelings on AI transformation?
A lot of HR leaders are super excited about the productivity and the value unlock of the tech and they want to get it in the hands of their employees. The concern is we want to make sure people are using the tech and feel empowered to use the tech, but doing so in a responsible way.
I love to show fabulous failures of a system doing silly things that make you chuckle, but it’s a really good illustration that these tools aren’t perfect at everything. When people see that, it helps them realize, “I have to be thoughtful about how I’m using it.”
We put significant effort into emphasizing to our team that AI shouldn't do their work for them. It should serve as a collaborative thinking tool that helps sharpen ideas, but ultimately, individuals must take responsibility for their final output.
How can smaller employers establish AI boundaries?
For smaller businesses, leadership can simply convene to discuss their comfort level with AI. Ultimately, corporate values play a role, necessitating senior leaders to engage in a conversation. This doesn't need to be elaborate; it could be an informal document stating, “Here’s how it’s okay to use it. Here’s how you shouldn’t use it.”
Do you think AI could impact productivity requirements?
Our aim is for employees to leverage AI for enhanced productivity, but not under duress. The perspective should be that if someone isn't adopting it, it reflects our shortcomings. Consequently, our focus will be on empowering them, improving their skills, and guiding them on how to utilize these tools effectively.
How do you use AI in your job?
I frequently employ it as a sounding board... I might present a slide deck intended for a significant meeting and state, “What questions would you have if you were the chief risk officer?” This serves as a method to aid my preparation. Additionally, I utilize it to generate opposing viewpoints for my arguments. It's crucial that we retain ownership of our concepts, yet leveraging this [AI] as a collaborative partner, something to question your thinking, proves quite effective in such scenarios.
This report was originally published by HR Brew.
