It scares me sometimes when I think about the big decisions I’ve made on gut feel, and will probably continue to make, relying on my instincts.
Personally, I would love to be armed with meaningful data and insights whenever I make important life decisions: what’s the maximum price I should pay for that house on the weekend, who to partner with, who to work for, and who to hire into my team. Data that helped me see a bigger picture or another perspective would be very valuable. Most of those decisions carry so much information asymmetry that they feel even riskier. Sure, I could check out Glassdoor when choosing my next job, but it comes with huge sample bias and not much science behind it.
So why is there still an (almost) universal blind acceptance that these decisions are best entrusted to gut feel? Especially given the evidence shows we are pretty crap at making good gut-based decisions.
I’m one of those people who believe in the power of AI: to remove that asymmetry, to dial down the bias, and to empower me with the data to make smarter decisions.
At a recent HR conference, a quick pulse around the room confirmed there is high curiosity and appetite to understand AI. What we’re missing is clarity about the opportunities and what success looks like. And the change management exercise that comes with introducing data and technology into a previously entirely human-driven process feels daunting.
The best human resources AI is not about taking the human out of hiring and culture decisions. Far from it. It’s about providing meaningful data to help us make better decisions faster.
Having worked in the ‘People and Culture’ space for a while, I know building trust in how the organisation makes decisions, especially people decisions, is hard in the absence of data. Yet we all know that transparency builds trust. So how can you build that trust through transparency when the decision maker is a human, and humans make decisions in closed rooms and private discussions?
Human decision making is the ultimate black box. Remember that feeling when the recruiter calls up and says you weren’t a good fit? Who feels great about that call? A total black-box cop-out of a response!
It doesn’t have to be this way, and the sooner we get to better decision making, the better. Seven months ago, I joined a team of data scientists who had spent the prior three years building technology that relies on AI to work its magic and equip recruiters with meaningful and actionable insights when hiring.
I’m no data scientist. I have had to learn the ins and outs of our AI pretty fast. And because our technology is at work in the people space, I’m learning how to ensure the AI is safe and fair, and that our customers trust both it and us to do the right thing with it.
If we reduce it to its core process, a machine learning algorithm is trying to improve how well it predicts an outcome from the input data it receives. In some instances, such as deep learning algorithms, it’s trying to simulate the functioning of the human brain’s neural networks to figure out the patterns between data inputs and data outputs.
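To make that concrete, here is a minimal, hypothetical sketch of that core loop in Python. It is illustrative only, not how our product (or any vendor’s) actually works: the data and features are invented, and “improving performance” simply means the model’s predictions on unseen examples get more accurate as it learns from labelled ones.

```python
# A toy supervised-learning sketch (illustrative only, not any vendor's real model).
# The algorithm adjusts its internal weights so its predictions of the outcome
# improve as it sees more (input, outcome) examples.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                   # input data (e.g. assessment responses)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # the outcome it learns to predict

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# "Improving performance" just means predictions on unseen data get more accurate.
print("accuracy on unseen data:", accuracy_score(y_test, model.predict(X_test)))
```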
A machine is infinitely more obliging than a human, and easier to understand in how it responds to direction. Because it has no feelings, it is free of the biases humans bring to these critical decisions. Machines are also more malleable to learning and far faster at it, which matters even more these days, when roles are changing dynamically and swiftly as industries are disrupted.
Our team plays in the predictive analytics for recruitment space. What this means is our AI seeks out the lead indicators of job success: the correlating factors between values, personality and job performance. We all intuitively know that behaviours are lead indicators of performance, but we struggle to assess them consistently well.
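As a rough illustration of what “lead indicators” means in practice (not our actual methodology), you could check whether scores on a trait move together with later performance ratings. The column names and numbers below are made up purely for the example.

```python
# Hypothetical sketch of hunting for "lead indicators": do trait scores from an
# assessment move together with later job performance? All names and numbers
# below are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "conscientiousness":  [3.2, 4.1, 2.8, 4.5, 3.9, 2.5],
    "values_alignment":   [4.0, 3.8, 2.9, 4.6, 3.5, 2.7],
    "performance_rating": [3,   4,   2,   5,   4,   2],
})

# A trait that correlates strongly with later performance is a candidate lead indicator.
print(df.corr()["performance_rating"].drop("performance_rating"))
```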
Our job is to augment your intelligence and ability to make the right decision. By knowing how people treat others, what drives them, and their values, you become better informed about the real DNA of a person and how they might function in your team.
All of our customers are looking for a slightly different kind of worker. A powerful motivator to use AI is to build confidence and trust in the process from both candidates and people leaders by dialling down the human element (getting rid of the bias) and revealing the patterns for success. Less room for bias = more fairness for candidates = more diverse hiring. Key to this is that we don’t look at any personal information: the machine doesn’t know or care about your age, gender, colour or educational background.
For our customers, having this data is empowering and helps them make smart decisions. And the people affected by those decisions can feel reassured that they were considered on their merits, not on someone’s gut feel.
There is no perfect method for making the right decision, for sure. But if I have to choose between trusting biased humans and the (sometimes) biased machine they create, I know which one I would trust more. At least with a machine you can actually test for the bias, remove it, and retrain it.
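As one example of what “testing for the bias” can look like in practice (not necessarily how any particular vendor does it), a common audit in hiring analytics is the four-fifths rule: compare selection rates across groups and flag the model if the ratio falls below 0.8. The group labels are used only for this audit, never as inputs to the model, and the data below is invented.

```python
# A common way to "test for the bias" in hiring analytics: the four-fifths rule.
# Compare selection rates across groups; a ratio below 0.8 is a red flag worth
# investigating. Group labels are used only for this audit, never as model inputs.
# The data below is invented for illustration.
import pandas as pd

audit = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   0,   1,   0,   1,   0,   0],
})

rates = audit.groupby("group")["selected"].mean()
impact_ratio = rates.min() / rates.max()

print(rates)
print("adverse impact ratio:", round(impact_ratio, 2))
```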
This article is brought to you exclusively by The Business Transformation Network, in partnership with Predictive Hire.