Nearly every day, supply management blogs and publications fill with new content exploring 'big data' and its potential implications for Procurement. Nobody seems to agree on how or when Procurement can expect big data to reach its potential, but everyone agrees that this potential is considerable.
It's easy to forget that the concept is nothing new. The mainstream definition of
'big data' was first proposed by Doug Laney in the early 2000s. Laney focused on the '3 Vs':
Volume: The quantity of data being generated.
Velocity: The speed at which data is being generated.
Variety: The range of forms the data takes and the structure, or lack thereof, that it comes in.
Today, the popular definition of 'big data' seems to be getting bigger. Experts have proposed adding a fourth 'V' to Laney's trio.
Variability: The inconsistency in the data's flow and meaning, which can shift with context over time.
To understand how small businesses can leverage analytics, and why big data is so important, it helps to first understand the difference between statistics and data science.
Statistics was developed primarily as a method for inferring information from a sample of data. Data science is the natural extension of statistics made possible by the advent of computers: it involves cleaning, processing, storing, manipulating, and analyzing data at scale. For Procurement, the difference between the two is like the difference between assessing historical spend manually and assessing it with a spend analytics platform. Why do I emphasize this distinction? To remind you that modern data analysis, including 'big' data, is really just an extension of old methods. Sometimes Procurement forgets this, and the department's efforts to scale up to 'big' do little more than add complications.
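To make the contrast concrete, here's a minimal sketch in Python of the kind of summary a spend analytics platform automates; the file name spend.csv and its supplier, category, and amount columns are hypothetical placeholders, not any particular platform's schema:

```python
# A minimal sketch of automated spend analysis with pandas.
# Assumes a hypothetical extract 'spend.csv' with illustrative
# columns: supplier, category, amount.
import pandas as pd

spend = pd.read_csv("spend.csv")

# Total spend by category -- the kind of summary an analyst
# might otherwise assemble by hand in a spreadsheet.
by_category = (
    spend.groupby("category")["amount"]
    .sum()
    .sort_values(ascending=False)
)

print(by_category)
print((by_category / by_category.sum()).round(3))  # share of total spend
```

The point isn't the particular library; it's that the same summaries an analyst once built by hand can be regenerated in seconds as new data arrives.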
How can small and mid-sized Procurement groups start to leverage insights gathered from analytics? How can they make 'big' data more than a buzzword? One way is to intentionally frame a problem. This approach underpins supervised learning. In supervised learning, we identify an outcome we'd like to predict, the historical data (with known outcomes) we'll need in order to model it, and the relevant measures (KPIs) we'll use to gauge our success.
For instance, suppose that we want to predict the success or failure
of a new sourcing project. We must first define 'success.' Does it mean revenue? Does it mean profitability? Does it mean something more
qualitative? Next, we must determine what information we have available that will help predict this outcome. Once we've collected this data, we can use it to train a predictive model. Through training and the ongoing collection of new data, we can gradually improve the quality of our predictions.
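As an illustration of that workflow, here's a minimal sketch in Python using scikit-learn. The extract sourcing_projects.csv, its feature columns, and the binary 'successful' label are assumptions for demonstration, not a prescribed feature set:

```python
# A minimal sketch of a supervised model predicting sourcing-project
# success. All file and column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

projects = pd.read_csv("sourcing_projects.csv")  # assumed extract of past projects

# Features we assume are recorded for each historical project.
X = projects[["estimated_spend", "supplier_count", "cycle_time_days"]]
y = projects["successful"]  # 1 = met our definition of success, 0 = did not

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)

# Gauge quality on held-out projects before trusting the predictions.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Notice that the hard part isn't the code; it's the framing that precedes it, because the 'successful' column only exists once the organization has defined success.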
Of course, building out models and reporting provides no organizational value if Procurement doesn't leverage these insights to initiate new processes. Is my company's predicted cost reduction low for next quarter? That's a problem. How should Procurement respond? Is my company's predicted cost reduction high for next quarter? That's great! Do I establish a bonus to reward my employees and reinforce whatever they've been doing?
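Closing that loop can start as simply as comparing a forecast against a target; the figures in this sketch are assumed for illustration only:

```python
# A hypothetical sketch of acting on a forecast: compare predicted
# cost reduction against a target and flag the gap for review.
predicted_reduction = 0.021  # assumed model output: 2.1% next quarter
target_reduction = 0.030     # assumed quarterly goal: 3.0%

if predicted_reduction < target_reduction:
    gap = target_reduction - predicted_reduction
    print(f"Alert: forecast trails target by {gap:.1%} -- review the sourcing pipeline.")
else:
    print("Forecast meets target -- consider reinforcing current practices.")
```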
That's just one example of what bigger data can mean for Procurement. Analyzing larger data sets, combining disparate data sets, and reducing complexity throughout the sourcing process will likely provide competitive advantages we can't yet conceive of.
What is big data? It’s merely a
concept. It’s fluid. What constitutes “big” is necessarily changing over time
due to advances in hardware, software, and algorithms. We'll likely
continue to see major changes well into the foreseeable future.
Furthermore, the meaning of “big” is necessarily different for every Procurement department. Big
data gives choice. It generally doesn’t hurt to have more choices.