The 2022 Beijing Olympics is currently showcasing the world’s top talent in winter sports. There are truly spectacular displays of extraordinary skill on the hill, on the rink, and in the air. As I watch, the sheer volume of statistics shared with the viewer about each competitor (previous wins, performance factors, course insights, and so on) is almost overwhelming. It got me thinking: are the Olympics about sport or data?
With very little research, I discovered that staging the Olympics is one of the most complex data aggregation activities ever undertaken. Preparing the data for the 16 days of competition takes seven years beforehand, and breaking it all down afterward takes another year. The IT budget for the Rio Olympics was $1.5 billion. There’s so much data that there are macro statistical models that can, with some precision, predict each country’s Olympic medal count based on latitude and GDP.
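To illustrate how such a macro model might work, here is a minimal sketch: an ordinary least-squares fit of medal count against latitude and log-GDP. The country data below is entirely made up for illustration, and real models of this kind use far richer features; this only shows the shape of the technique.

```python
import numpy as np

# Synthetic, illustrative data only (NOT real Olympic statistics).
# Columns: absolute latitude (degrees), GDP ($B), medal count.
data = np.array([
    [60.0,   500.0, 30.0],
    [52.0,  3800.0, 25.0],
    [46.0,   700.0, 15.0],
    [38.0, 21000.0, 22.0],
    [35.0,  5000.0, 17.0],
    [20.0,   300.0,  1.0],
    [10.0,   400.0,  0.0],
])
lat, gdp, medals = data[:, 0], data[:, 1], data[:, 2]

# Design matrix: intercept, latitude, log(GDP).
X = np.column_stack([np.ones_like(lat), lat, np.log(gdp)])

# Ordinary least squares: medals ~ b0 + b1*latitude + b2*log(GDP).
coef, *_ = np.linalg.lstsq(X, medals, rcond=None)

def predict_medals(latitude, gdp_billions):
    """Predicted medal count for a hypothetical country."""
    return float(coef @ [1.0, latitude, np.log(gdp_billions)])
```

On this toy data, the fitted coefficients for both latitude and log-GDP come out positive, matching the intuition in the article: colder (higher-latitude), wealthier countries win more winter medals.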
Invaluable micro data insights
When every element of an athlete is analyzed and optimized (heart rate, body temperature, muscle mass, etc.), small micro insights like “when your core temperature is between # and #, your chances of winning improve by X%,” “when a (specific) competitor starts off on their left foot, they are more likely to…,” or “referees at the Olympic stage are more likely to call…” are invaluable. This competitive advantage is achieved through wearable technology, big data, and AI machine learning. It’s really quite extraordinary, and very little is left to chance.
Measuring Data Quality
An analyst I spoke with recently asked: if everything has been optimized at the plant (macro) level through APC, process optimization, and predictive modeling, how can companies derive even more value (profitability) from current operations?
Taking a page from the Olympics, I shared that it’s all about data at the micro level. Yes, you’ve optimized the plant, the process, and each piece of equipment (see figure), but how reliable and accurate is your operating data? Though many believe the impact to be small, poor data quality actually costs companies more than 15% of their revenue annually.
- Can operators trust the data upon which they make decisions?
- How many alarms are false positives?
- Do your data scientists get data quickly enough to affect plant performance? Or are they spending all their time cleaning it?
- Is your maintenance staff chasing intrusive, unnecessary equipment inspections?
What competitive advantage could you gain from micro data insights?
AI Machine Learning on Demand
APERIO DataWise uses AI machine learning to detect more than 20 types of data anomalies in real time. When the data quality index (DQI) is unsatisfactory, the issue can be acted upon and resolved immediately. One APERIO customer processes up to 2M tags in real time, informing executives about asset health over time (via DQI trends), providing advance notice of equipment failure, and eliminating the time and cost of waiting for analysts to tell operations that bad tags exist. The return: 20% higher uptime, 42% less operator error, and data science on demand (80% of data-cleaning time saved). Find out more now!
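To make the idea of a data quality index concrete, here is a toy sketch. APERIO’s actual detectors and DQI formula are not public, so this is not their method; it is a simplified, rule-based stand-in that scores a stream of sensor readings against three common checks (out-of-range values, stale timestamps, and flatlined signals) and reports the fraction of readings that pass.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    timestamp: float  # seconds since epoch
    value: float

def data_quality_index(readings, lo, hi, max_gap_s=60.0, flatline_n=5):
    """Toy DQI: fraction of readings passing simple quality checks.

    Checks (a small subset of what a commercial tool would run):
      - out-of-range: value outside the expected band [lo, hi]
      - stale: gap to the previous reading exceeds max_gap_s
      - flatline: value unchanged for flatline_n consecutive readings
    """
    if not readings:
        return 0.0
    bad = set()          # indices of readings flagged by any check
    run = 1              # length of the current identical-value run
    for i, r in enumerate(readings):
        if not (lo <= r.value <= hi):
            bad.add(i)
        if i > 0:
            if r.timestamp - readings[i - 1].timestamp > max_gap_s:
                bad.add(i)
            run = run + 1 if r.value == readings[i - 1].value else 1
            if run >= flatline_n:
                # Flag the whole frozen stretch, not just the last point.
                bad.update(range(i - flatline_n + 1, i + 1))
    return 1.0 - len(bad) / len(readings)
```

A real system would compute something like this per tag over rolling windows, then trend the index over time so executives can watch asset health degrade before a failure, which is the pattern described above.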
Livia Wiley has over 25 years of experience focusing on strategic planning, industry growth, and process innovation. Her engineering background is underpinned by broad industry expertise and applications of industrial automation software, having worked for AVEVA, Schneider Electric, Honeywell, and Aspen Technology. Livia holds a B.Sc. in Chemical Engineering from Queen’s University and an M.Eng. in Chemical Engineering from the University of Houston.