Big Data has been a central topic for corporations for many years now. Typically, it is associated with how organizations use analytics to identify their most valuable customers or to create new experiences, services, or products. When devising this strategy, the organization must consider a few key factors:

  1. How will the data be used? What is the objective of obtaining this data?
  2. What story will the data tell?
  3. What does the data contain? Is Personally Identifiable Information (PII) or Protected Health Information (PHI) included?
  4. Who within the organization plans to use the data?

All of these questions are key to developing the data management architecture, which can be divided into three sections (a rough sketch in code follows the list):

  1. Data Management: the way the data is collected and stored.
  2. Data Security: part of the data management plan, but focused specifically on the protection and transfer of data.
  3. Data Visualization: the output/analytics of the data that complete the story; this involves using the data to influence actions within the company.
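
To make the division of responsibilities concrete, here is a minimal sketch in Python of how these three sections might be separated in practice. The function names, file paths, and placeholder "protection" step are all hypothetical; a real architecture would rely on dedicated platforms for storage, encryption, and analytics rather than a few lines of code.

```python
# A minimal sketch (hypothetical names and paths) of the three sections
# of a data management architecture expressed as separate steps.
import json
from pathlib import Path


def manage_data(records, store: Path) -> Path:
    """Data Management: collect the raw records and store them."""
    store.write_text(json.dumps(records))
    return store


def secure_data(store: Path) -> Path:
    """Data Security: protect the data before it is moved or shared.
    (Placeholder only: a real system would encrypt and control access.)"""
    protected = store.with_name("protected_" + store.name)
    protected.write_text(store.read_text())  # stand-in for encryption/controls
    return protected


def visualize_data(store: Path) -> None:
    """Data Visualization: turn the stored data into a summary that can
    influence actions within the company."""
    records = json.loads(store.read_text())
    print(f"{len(records)} records available for analysis")


if __name__ == "__main__":
    raw = [{"customer": "A", "spend": 120}, {"customer": "B", "spend": 75}]
    stored = manage_data(raw, Path("records.json"))
    protected = secure_data(stored)
    visualize_data(protected)
```

The point of the sketch is the separation of concerns: each section can be owned by a different vendor or team, which is exactly the procurement conversation described later in this post.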

As a procurement professional, one should consider coaching stakeholders on adding structure to these three sections before establishing their “Big Data” plan. When it comes to data management, a company can employ multiple methods to ingest and manage data. For example, there may be one method for handling customer data that is then used for marketing, and another for handling product testing data that influences product development. Let’s consider a real example:

In the pharmaceutical industry, understanding a patient’s lifecycle journey is often critical to researching and producing new medicines for the market. These companies need to understand how a patient may respond to treatment, even when that patient has never been treated with the company’s own medicines. To paint the full patient lifecycle picture, they need a lot of data from a lot of patients around the world. The good news is that this data is for sale. The bad news is that the purchasing process can be tricky.

Patient data is protected under HIPAA (the Health Insurance Portability and Accountability Act of 1996). This means it is unlawful for a company to buy, use, or track health information that can be directly tied to a particular patient without that patient’s consent or knowledge. But how do we create lifesaving pharmaceuticals without understanding the people they are meant to help?

We do something called “tokenization.” This allows companies to aggregate patient data and then anonymize it so it cannot be tied back to any individual. By not linking the data to a name or person, we can understand a patient’s medical history without ever knowing the patient. Instead of John Smith, we now have JS100637. John’s name is never recorded or tied to the new “token.” John may appear as a patient in multiple datasets hosted by various clinical sites that do not communicate with one another, but by using a token, John’s information is stored anonymously and can eventually provide the data that may create the next big vaccine or cure for cancer.
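
The exact mechanics vary by vendor, but one common way to produce a consistent, irreversible token is a keyed hash over the identifying fields. The sketch below is a simplified illustration, not any vendor’s actual method; the field names, the secret key, and the tokenize helper are assumptions, and production tokenization adds key management, governance, and often a token vault.

```python
# A simplified tokenization sketch (hypothetical field names and key).
# A keyed hash (HMAC) derives a consistent, irreversible token from
# identifying fields; real vendors may use token vaults or other schemes.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # assumed to be held by the tokenization vendor


def tokenize(first_name: str, last_name: str, date_of_birth: str) -> str:
    """Derive a stable token from identifying fields without storing them."""
    identity = f"{first_name}|{last_name}|{date_of_birth}".lower().encode()
    digest = hmac.new(SECRET_KEY, identity, hashlib.sha256).hexdigest()
    return digest[:12].upper()  # shortened for readability in this sketch


# The same patient yields the same token at any site that uses the same key,
# so records can be linked across datasets without ever revealing a name.
site_a = tokenize("John", "Smith", "1980-06-03")
site_b = tokenize("John", "Smith", "1980-06-03")
assert site_a == site_b
print(site_a)  # a stable, anonymous identifier standing in for "John Smith"
```

Using a keyed hash rather than a plain hash matters here: names and birth dates come from a small enough space that an unkeyed hash could be reversed by brute force, so the secret key is what keeps the token from being tied back to John.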

Big Data faces a lot of hurdles. Humans are resilient and compassionate; we find ways around those hurdles while respecting one another and protecting our well-deserved privacy. In the world of procurement, we can facilitate this discussion, ensuring our stakeholders consider each possible outcome and solution to the complex problems they aim to solve. The vendor relationships described in the previous example are vital to building a stronger data management architecture. There could be one vendor to tokenize the data, another to establish the data management structure and storage needs, and a final vendor to handle the visualization of the data. All must work together seamlessly to create a comfortable user experience with optimized efficiency and productivity.

Brandon Hummons
