Business Intelligence for Recruitment: How data can change HR today

HR, as a whole, is an area with lots of data, and that data can make a huge difference in forecasting hiring, turnover, and performance management.

This article was written by Thiago Rocha and Thomas Dobereiner, who are part of the Business Intelligence and Talent Management teams, respectively.

Also available in Portuguese 🇧🇷

The importance of companies making decisions and taking actions based on data is becoming more relevant by the day. To make better decisions, raw data is collected, interpreted, and transformed into insights in many different business areas: product development, marketing cost optimization, improvements to the sales process, and many others.

The competitive advantage of making smarter decisions based on data rather than on feeling is evidenced in an article written by Andrew McAfee and Erik Brynjolfsson for the Harvard Business Review, which highlights that:

“Companies in the top third of their industry in the use of data-driven decision making were, on average, 5% more productive and 6% more profitable than their competitors”, even after accounting for several confounding factors.

When searching for HR Analytics on Google, many articles, courses, and books come up about the use of data and clear KPIs within Human Resources (or Talent Management, as we call it at RD).

Until recently, companies used to hire people based on feeling and experience. Today, using tools that help with acquiring talent has become an essential part of a recruiter’s day, making the process more accurate.

However, even with the vast quantities of information these tools provide, only a small number of companies organize this data so it can be used daily to help make quick and efficient decisions.

In 2017, we hired 246 people at RD in only 8 months, a 61% increase in hiring pace compared to 2016, when we hired 228 people over the whole year (roughly 31 hires per month versus 19).

It is part of our culture to have clear metrics, since one of our values is being data-driven. But as in any other very fast-growing company, processes change very quickly, which worsens the quality of the data collected and makes even a simple analysis excessively costly.

This post describes the steps taken to implement BI within Talent Management and the results achieved in these first months.

Understand the problem BEFORE building the charts

Before putting data everywhere, we needed to understand the whole process as it is today: how the ATS (Applicant Tracking System) is used, where and how data is stored, what the current bottlenecks are, and which questions the data should help answer. There was a constant exchange of information with the stakeholders (recruiters, leaders, and people analysts) to get better answers to those questions.

After achieving a good understanding of the process, the first project chosen was the implementation of a KPI dashboard, with the purpose of helping the team make more precise decisions and planting a seed for other BI projects in the area.

The first step in building the dashboard was understanding, in greater detail, the real pains of TM (Talent Management) and the main opportunities that could be created for the short and long term.

After a couple of studies and benchmarks, we created a short questionnaire to guide us through the process:

  1. What is the dashboard’s objective?
  2. What is currently not possible to analyze?
  3. What actions would be possible with the dashboard?
  4. Which analyses and information would be necessary for a decision to be made comfortably?
  5. For each analysis, which metrics would help take the actions chosen above?
  6. How would you quickly summarize the storyline of the dashboard?

With this stage done, it is easier to have clarity over the situation the area faces and how we can use data to help the team on a daily basis. We then turn the answers from the questionnaire into graphical visualizations as a prototype.

A prototype is a quick and easy way to iterate with the users, demonstrating what a draft of the dashboard would look like. This way you can identify the best manner to display the information. The prototype can be done on paper or in slides. To help in this stage, we suggest using this flowchart to understand the best way to display each piece of data.

Don’t expect the first prototype to be similar to the final version of the dashboard; as we will see below, it is very common to change many things to make the visuals easier to understand. That is why you should not spend too much time at this stage: once users are actually using the dashboard, they will likely identify other things that need to change. The main goal is finding out which data sources will be needed to feed the dashboard and, most importantly, whether those sources are available.

If all data is available and ready to be pulled, this stage is finalized by validating the last prototype with the area leader and checking if any final adjustments are needed. If some data source is unavailable, there might be some processes that need changes to suit the project. In our case, this was a great opportunity to ensure processes that could hurt our data quality were improved.

Recap:

  • Understood the pains and opportunities ✅
  • Graphically demonstrated the opportunities found to address those pains ✅
  • Verified that it was possible to pull all the necessary data ✅
  • Validated with the stakeholders ✅

Getting it Done

By now we know everything we need to do and there is one thing left: doing it. It is recommended to use a good, well-recognized BI tool, as it will help with integrations, graphs, usability, etc. This will make your life a whole lot easier.

Today the main BI tools are Tableau, Power BI, QlikView, and Pentaho. All have a free version.

In RD’s case, we decided to use Power BI because of its attractive price and constant monthly updates. We had already been using it for a while and decided it was appropriate for the task, but of course, any of the other tools mentioned would also be fine.

Building the Dashboard

We will not go into technical points in this post (if you have any technical questions, feel free to contact us at [email protected] or [email protected]), but as soon as you have chosen the software and imported all the data, one of the longest parts of the project begins:

  • Interpret the process based on the data.
  • Verify the quality of the data, cleaning when necessary.

There are many problems that can happen with the data at this stage, and they vary by software, so we cannot possibly cover every case, but we will walk through one example.

In Lever, our ATS, we have a field called “Company” right under the name of the candidate. This field is editable and has no indication of what information should go there. It quickly became a general note-taking field, and when the data was imported we wasted a good few hours figuring out what was messing up our .csv file, when in the end the file itself was just fine: the problem was this one field full of free-text notes.
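As an illustration, here is a minimal sketch of that kind of cleanup, assuming a Python/pandas workflow; the file name, column handling, and the 60-character threshold are hypothetical, not our actual pipeline:

    import pandas as pd

    # Hypothetical export file and columns, for illustration only.
    # Free-text fields such as "Company" often contain commas and line breaks,
    # which can silently break a naive CSV import.
    candidates = pd.read_csv(
        "lever_export.csv",
        dtype=str,
        quotechar='"',          # keep quoted commas/newlines inside a single cell
        skipinitialspace=True,
    )

    # Normalize the noisy field: collapse whitespace and line breaks, then trim.
    candidates["Company"] = (
        candidates["Company"]
        .fillna("")
        .str.replace(r"\s+", " ", regex=True)
        .str.strip()
    )

    # Flag values that look like notes rather than a company name,
    # so they can be reviewed instead of silently polluting the analysis.
    suspicious = candidates[candidates["Company"].str.len() > 60]
    print(f"{len(suspicious)} rows where 'Company' looks like a note")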

This is just one real example from RD, but there are many important fields that people do not use correctly and that may carry very misleading information. That is why it is fundamental to have constant interaction between the BI analyst and the future users.

Another common error happens when candidates’ stage changes are not recorded correctly in the software. This can deeply affect all the efficiency and velocity metrics of the process.
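To see why, here is a rough sketch of how a time-in-stage metric is typically derived from stage-transition timestamps (the columns and dates below are made up): a single late or forgotten stage change flows straight into the averages.

    import pandas as pd

    # Hypothetical stage-transition log: one row per candidate per stage change.
    transitions = pd.DataFrame({
        "candidate_id": [1, 1, 1, 2, 2],
        "stage": ["applied", "screening", "onsite", "applied", "screening"],
        "entered_at": pd.to_datetime([
            "2017-03-01", "2017-03-05", "2017-03-12",
            "2017-03-02", "2017-03-20",  # a stage change logged two weeks late
        ]),
    })

    # Time in each stage = time until the next recorded transition for that candidate.
    transitions = transitions.sort_values(["candidate_id", "entered_at"])
    transitions["left_at"] = transitions.groupby("candidate_id")["entered_at"].shift(-1)
    transitions["days_in_stage"] = (transitions["left_at"] - transitions["entered_at"]).dt.days

    # One late or missing update distorts the average for every report built on top of it.
    print(transitions.groupby("stage")["days_in_stage"].mean())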

A common phrase that highlights the importance of good quality data is: garbage in, garbage out

If you do not clean your data and therefore cannot trust it, it doesn’t matter what you do: your results will not be good.

You can try to increase the quality of your predictive processes but the gains will be marginal. The big secret to a world-class analysis is quality data.

This stage ends with the certainty that the data is clean and makes sense. Trusting the data, we can start building our graphs in the BI software, using the prototype as a guide and making the necessary adjustments over time.

More than Graphs

The biggest mistake made by people who work with data is thinking that, at this point, the project is done. However, one thing we have learned is that one of the most critical steps is the implementation of the dashboard. If no one knows how to use the dashboard, it is useless. It is not about finishing a project but about addressing the pains.

The first implementation test is the 5-second conclusion: end users look at each of the graphs for a short amount of time and say what they understand from it and what action they would take based on it. This is a quick way to rule out the parts of your dashboard that do not make sense to the end user.

In our case, when applying the test, we verified that half our charts were not correctly interpreted and had to do something about them. In these situations there are two main alternatives:

  • Educate the users so they can understand the graphs more clearly. This alternative is not very efficient, because most of the time the issue was caused by the BI analyst, who should have been clearer.
  • Make the charts more straightforward to interpret: here you have to understand the root cause of why users are not able to draw conclusions from your visualizations. Most problems are solved this way, and we strongly recommend spending some time in this phase.

Finally, we arrived at our dashboard! It was divided into 3 separate views:

  • Strategic View: the main Talent Management KPIs, not aimed at micro-analysis but at a general view of how the metrics are doing, making it possible to identify potential issues. It includes KPIs such as forecast, hires, candidate-to-hire, and others, and everything can be filtered by job or area.
  • Tactical View: a more specific view for the leader of the area, with some analyses broken down further, by date for example.
  • Operational View: designed to be used by recruiters, with a very micro view of how each job is doing, the pipeline, the forecast, etc. Everything is filtered by recruiter.

Benefits of using Data

With all these long and complicated steps, you might be wondering: is it worth it?

It is important to highlight that the dashboard is not the end or the means; it is the beginning. For that reason, we have categorized the results into short, mid, and long-term.

Short-Term (1-3 months)

The first impact was how much more predictable the process became, using a forecast based on the average efficiency and lead time (time to hire) of the funnel. This became a simple metric that showed whether or not we were going to do well in the current month. Recently we had a month where our forecast was way below our target; we were able to notice this at the beginning of the month, take action, and improve our situation.
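A minimal sketch of that kind of funnel forecast, with made-up conversion rates and pipeline counts rather than our real numbers:

    # Historical probability of reaching "hired" from each funnel stage (illustrative).
    conversion_to_hire = {
        "applied": 0.03,
        "screening": 0.10,
        "interview": 0.30,
        "offer": 0.80,
    }

    # Candidates currently sitting in each stage of the funnel (also illustrative).
    pipeline = {"applied": 400, "screening": 90, "interview": 25, "offer": 6}

    # Expected hires = candidates per stage weighted by the historical chance of
    # being hired from that stage. A real forecast would also use the average lead
    # time per stage to count only candidates who can close within the month.
    forecast = sum(count * conversion_to_hire[stage] for stage, count in pipeline.items())

    target = 30
    print(f"Forecast: {forecast:.1f} hires vs. target of {target}")

If the forecast comes in well below the target early in the month, that is the signal to take action, as described above.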

Another important point was our understanding of the process, finding bottlenecks and possible stages that needed tuning. This resulted in many other projects.

One last change was how much cleaner the process became. Because we were able to identify the processes that needed change, recruiters started looking out for potential issues themselves and helping keep the data clean.

(Chart: some of the KPIs that improved during this period.)

Mid-Term (3-9 months)

This is the phase we are currently in. With cleaner data and processes and a good understanding of all the data, we have a very good foundation for justifying other projects and monitoring existing ones.

Many projects were started with the objective of reducing the time spent recruiting while increasing quality at the same time. Today we are close to achieving a 40% reduction in time spent recruiting while keeping the process very accurate.

We are now at the point of implementing better technology in the process, using machine learning and predictive analytics to continue improving.

Long-Term (9-18 months)

We are not here yet, but our priority is to keep improving our recruitment quality and, at the same time, making the hiring process faster and faster.

Conclusion

All the improvements we’ve had are just a very small part of the potential of using data and BI at RD. They are clear evidence of the benefits that well-defined processes and clean data can bring.

The results we have achieved so far were only possible because of two factors: data and people. Clean data was essential for basing our decisions, and people across all areas of the company were the ones responsible for taking care of it all.

These results were only for HR, but imagine the possibilities of having this intelligence in all business areas across the company. What impact could that have?

I will leave that as a provocative thought for your company: base decisions on data, not on feeling. This will reduce wrong choices and decrease biases. Let your business run on real, up-to-date information.

If you get resistance from your HR team, saying you are taking the human side out of recruitment, you can show them this article and invite them to read Work Rules! by Laszlo Bock or any other People Analytics book. Of course, there needs to be a human side as well, but today what we are lacking is more data-based views.

Let your company evolve into a more data-driven company. Peter Drucker, considered the father of management, said:

“What cannot be measured, cannot be managed”

And 60 years later, we can append:

“You cannot manage if you are not data-driven. You cannot be data-driven if you do not trust your data.”
