Simple algorithms rock!
Admit it! Your actions have consequences. And the results of your interventions are not always positive. I argue that you will need to prioritize simple and transparent algorithms to manage people well.
What everyone knows: organizations that want to manage people better demand that decision-making be supported by data.
But, like it or not, actions taken to manage people better are going to have consequences beyond what you want to improve.
In a market where talent is increasingly scarce, many companies are trying to figure out how to keep employees loyal. For them, the first step is figuring out how employees feel about the company. Why not take a look at their email to get to know them better?
Hint: You could lose your employees' trust, something definitely worse than what you would gain from that ethically dubious source of information. You would unleash negative effects on motivation and even on performance, turnover, and absenteeism.
If you're not able to anticipate the full impact of your optimization actions, it's very probable that you'll end up like a sorcerer's apprentice, meddling in things you don't understand.
Let's talk about good and bad algorithms
We have to make decisions about improving the organization. Several predictive algorithms can help the process.
However, I argue that understanding the causes of the problem and the consequences of your actions is much more important than the accuracy of the model the algorithm generates.
To make a decision, you have to go beyond the effectiveness of an action measured only by its accuracy level. You have to calculate the costs and benefits of a correct or an incorrect decision. Often, the predictive analysis in HR ignores the consequences of the recommended action, especially in terms of collateral damage.
For example, suppose you build a turnover model with an 80% accuracy rate. Is that enough? It depends. If the cost of being wrong is small, 80% could be good enough. If not, the remaining 20% error may carry too much risk.
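To make the point concrete, here is a minimal sketch of that cost-benefit calculation. All figures (the 15% base turnover rate, the $2,000 vs. $50,000 error costs) are hypothetical, and the even split of errors across classes is a deliberate simplification:

```python
def expected_cost_per_employee(accuracy, base_rate,
                               cost_missed_leaver, cost_false_alarm):
    """Rough expected cost of acting on a binary turnover model.

    Simplification: the error rate (1 - accuracy) is assumed to fall
    on leavers and stayers in proportion to their share of the staff.
    """
    error = 1.0 - accuracy
    false_negatives = error * base_rate          # leavers we miss
    false_positives = error * (1.0 - base_rate)  # stayers we flag
    return (false_negatives * cost_missed_leaver
            + false_positives * cost_false_alarm)

# Scenario A: mistakes are cheap -- 80% accuracy may be fine.
cheap = expected_cost_per_employee(0.80, base_rate=0.15,
                                   cost_missed_leaver=2_000,
                                   cost_false_alarm=200)

# Scenario B: same model, but losing a key employee costs far more.
costly = expected_cost_per_employee(0.80, base_rate=0.15,
                                    cost_missed_leaver=50_000,
                                    cost_false_alarm=200)

print(cheap, costly)
```

The accuracy is identical in both scenarios; only the cost of a wrong decision changes, and with it the answer to "is 80% enough?".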
Then what do you do? Three conditions
1. You need models you can understand. Beyond improving a metric, you have to understand why you're intervening, what the actual causes of the problem are, and be able to justify the intervention. When you have models you can interpret, you can push for effective changes in policies, directing corrective actions where they're needed.
2. You should anticipate the consequences of your actions. Such anticipation is only possible if you understand where you're going to intervene and can simulate what the intervention could cause in areas like performance, absenteeism, or turnover.
3. When improvement actions aren't applied uniformly and you act piecemeal (with certain individuals only, not the whole staff), your changes also need to seem reasonable to the people in the organization. You'll have to explain to your colleagues why you've made certain changes and how you identified the areas needing intervention.
As a general practice, you generate more than one model with different algorithms. Some of these techniques are easier to interpret than others. Some algorithms are black boxes: we are unable to explain how they arrive at their predictions, and neural networks are the typical example. But if a model is a black box, then even if it's more accurate, it won't meet the three conditions spelled out above: identify causes, anticipate consequences, and explain the actions.
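What an interpretable model buys you can be shown with a toy sketch. The data below are invented for illustration, and the model is a "decision stump" (a single-threshold rule), one of the simplest interpretable techniques; the point is that its output is a sentence a colleague can read, not an opaque weight matrix:

```python
# Hypothetical records: (weekly overtime hours, left within a year: 1/0).
records = [(2, 0), (4, 0), (5, 0), (12, 1), (15, 1), (18, 1), (6, 0), (14, 1)]

def fit_stump(data):
    """Find the overtime threshold that best separates leavers from stayers.

    The fitted model is a single explainable rule, so it directly supports
    the three conditions: it names a cause (overtime), lets you simulate an
    intervention (reduce hours below the threshold), and is easy to explain.
    """
    best_threshold, best_correct = None, -1
    for threshold, _ in data:
        correct = sum((hours >= threshold) == bool(left)
                      for hours, left in data)
        if correct > best_correct:
            best_threshold, best_correct = threshold, correct
    return best_threshold

threshold = fit_stump(records)
print(f"Rule: flag turnover risk when overtime >= {threshold} h/week")
```

A neural network fitted to the same data might score slightly better, but it could not be summarized in one line like this, which is exactly the trade-off the three conditions demand you weigh.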
Any AI solution for people analytics should serve to support decision-making in human resources, especially in those areas where there is uncertainty or incomplete information.
You could build and test your own AI prototype for Human Resources in 3 weeks by leveraging our team's AI expertise. Test before you leap!
Sign up for this program!
Text 347-278-2892 for early admission and more information.