The Big Data Challenge – Diamonds in the Rough


In the US alone, 1.5 million physicians wrote over 4 billion prescriptions in 2011 [1].

Sales data from over 60,000 pharmacies in the US is aggregated, dissected, and offered for sale by third-party information providers [2].

Thousands of hospitals, clinics, and even drug brands have Facebook pages, Twitter accounts, newsletters, and blogs that generate large amounts of information: opinions, rumors, and customer feedback.

All this data, along with input from in-house operational and transactional systems, is collected and stored by life science organizations to form one of today's hottest tickets in the business community: "Big Data" repositories.

This explosion of data isn’t something new. It continues a trend that started in the 1970s and gave birth to early business intelligence systems. What has changed is the velocity of growth, the complexity of the data streaming into the organization, and the imperative to make better use of information to transform the business.

These diverse sources create large amounts of data containing much valuable information.  Alas, by the nature of Big Data, collecting input from social networks, blogs, and public sources also yields a significantly greater amount of "noise" – data with no real business value – which greatly intensifies the difficulty of extracting meaningful insights.

Enterprises strive to exploit their data – collected, purchased, or created.  Collecting and storing Big Data is one challenge, but more important is the ability to sift through those terabytes of data to harvest and harness every piece of relevant data and channel it to the pertinent decision makers.  Today's most advanced business intelligence technologies not only support the collection of large amounts of data, but also the ability to understand which data is important to whom and to take advantage of its full business value.

The holy grail of Big Data BI solutions is the ability to efficiently mine and analyze vast amounts of data – to take advantage of the immensity of available data without drowning in it.  The goal is to identify unforeseen relationships between different drivers that might unveil threats and opportunities to grow sales and increase market share.  With the advent of Big Data, data sources are more disparate, making it harder to analyze them and find correlations between seemingly unrelated phenomena.  It is humanly impossible to perform this task manually in a timely manner.  Since timing is key to maximizing the business value of data analysis, only robust, automated BI solutions are able to generate valuable business results from Big Data analysis.

An effective BI solution for Big Data must perform three essential steps:

  • ETL: Normalize, integrate, and bring ALL disparate data sources together to create one large yet coherent picture
  • Filter the noise: Automatically scan the data to find correlations and identify valuable insights to focus on – diamonds in the rough
  • Present findings in a concise, easy-to-act-upon fashion

These three steps make Big Data analysis truly useful by finding the right data and delivering it to the right people, in a timely and actionable manner.

 

[1] Annual profile of top prescription medicines, ACS Chemical Neuroscience.

[2] 2011–12 Economic Report on Retail and Specialty Pharmacies. Pembroke Consulting, Inc., January 2012.