Data can be a critical means for manufacturers to accelerate digital transformation. However, data can also be a roadblock. To overcome data-related barriers and thrive in today's digital world, manufacturers can embrace sophisticated data pipelines that seamlessly collect, process and analyze data from various sources. Data pipelines can also automate the flow of data from sensors, machines and production lines, creating a more efficient and productive organization.

This article explores the current state of data in manufacturing, the challenges manufacturers face, and WWT's six-phase process for building better data pipelines.

The journey so far: Data silos

In the name of operational orderliness, manufacturers have collected various pockets of data over the years. Initially, it made sense to segment data by function: controlling plant lines, controlling processes, managing alerts and so on. But as companies grow, their data grows and their locations spread. As a result, some manufacturers now manage more than 100 plants with different processes, environments and equipment configurations, not to mention the wide array of metrics used to gauge everything from energy consumption to ESG initiative progress.

Our experience working with manufacturers has shown us that massive amounts of data sit in singular systems (i.e., data silos) and are not shared company-wide. The data is all there, whether production or quality control information, but it lives in a locked system that makes it very difficult to retrieve.

To make matters more complex, manufacturing data tied to a specific purpose (e.g., compliance or historical records) is stored in ways meaningful only for that purpose. Such data resides in formats that can be understood only by highly specialized engineers, which makes it extraordinarily difficult to extract and repurpose for new insights.

The way forward

Manufacturers need an efficient way to consolidate data from disparate and new sources into a standard infrastructure that is controlled, managed and protected. Moreover, the data should be delivered to the right people in an actionable form.

The solution is to create a data pipeline that collects, processes and transports data from various sources across the organization to a destination where it can be analyzed or put to productive use. Funneling your data through a smart pipeline on a stable architecture will automate and streamline data flows and bring significant visibility and efficiency to the entire operation.

WWT works with some of the largest manufacturers in the world across the entire data pipeline lifecycle, collaborating closely to identify and select the most efficient methods and tools in an unbiased fashion. While our approach is unique to each client, creating an effective data pipeline follows the same six phases.

A six-phase approach to data serenity

One: Collect data 

The first step is to decide what type of network infrastructure best suits your needs: wired, wireless or both. Once that is determined, you'll need to select the right set of communication protocols, gateway devices and software tools.

Once a point of connectivity or ingestion is in place, identify the appropriate data sources to collect from. These could include Programmable Logic Controllers (PLCs), Variable Frequency Drives (VFDs), Human Machine Interfaces (HMIs), Supervisory Control and Data Acquisition (SCADA) databases, and Industrial Internet of Things (IIoT) devices.

Finally, secure permission to access these data sources from the respective administrators or stakeholders.
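
To make this concrete, below is a minimal polling sketch in Python that reads a holding register from a PLC over Modbus TCP using the open-source pymodbus library. The host address, register number and polling interval are hypothetical placeholders, and the import path and keyword arguments shown are from pymodbus 3.x (older releases differ), so treat this as a starting point rather than a reference implementation.

```python
# Minimal PLC polling sketch using the open-source pymodbus library
# (import path and the slave= keyword follow pymodbus 3.x; older
# releases use pymodbus.client.sync and unit= instead).
# The host and register address are hypothetical placeholders.
import time

from pymodbus.client import ModbusTcpClient

PLC_HOST = "192.168.1.10"  # hypothetical PLC address from your network plan
TEMP_REGISTER = 0          # hypothetical holding-register address from your PLC map

def poll_plc(host: str, register: int, interval_s: float = 1.0):
    """Read one holding register on a fixed interval and yield raw values."""
    client = ModbusTcpClient(host, port=502)
    client.connect()
    try:
        while True:
            result = client.read_holding_registers(address=register, count=1, slave=1)
            if not result.isError():
                yield result.registers[0]
            time.sleep(interval_s)
    finally:
        client.close()

# Stream raw readings until interrupted (Ctrl+C).
for raw_value in poll_plc(PLC_HOST, TEMP_REGISTER):
    print(f"raw temperature register: {raw_value}")
```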

Two: Preprocess data

Not all data is good data, and bad data can lead to bad decisions. So, you'll need to determine what data is needed and filter out what is not. Preprocessing to address data quality issues is a critical step to ensure your data is clean, consistent and ready for analysis or storage.

Here are some common ways to preprocess data in preparation for what's next, with a brief code sketch after the list:

  • Detect and correct any inaccuracies or errors in the data, such as outliers or inconsistent data entries.
  • Identify potential data quality issues, such as duplicates, inconsistencies and data anomalies.
  • Decide how to handle missing values and erroneous data.
  • Create new features or transform existing ones to improve downstream model performance.
  • Extract meaningful information through date and time parsing.
  • Perform quality checks to ensure accuracy and validity of collected data.
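
As one illustration, the sketch below uses pandas to apply several of these steps to a table of sensor readings. The column names ("timestamp", "temperature") and the three-sigma outlier threshold are assumptions for illustration only.

```python
# Preprocessing sketch with pandas. The column names and the 3-sigma
# outlier threshold are assumptions for illustration.
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    # Parse timestamps so date/time features can be extracted later;
    # unparseable entries become NaT rather than raising an error.
    df["timestamp"] = pd.to_datetime(df["timestamp"], errors="coerce")

    # Remove exact duplicates and rows whose timestamp failed to parse.
    df = df.drop_duplicates().dropna(subset=["timestamp"])

    # Bridge short gaps in sensor readings by carrying the last value forward.
    df["temperature"] = df["temperature"].ffill(limit=3)

    # Flag (rather than silently delete) readings more than three
    # standard deviations from the mean.
    mean, std = df["temperature"].mean(), df["temperature"].std()
    df["is_outlier"] = (df["temperature"] - mean).abs() > 3 * std

    # Simple feature engineering: hour of day often matters on a plant floor.
    df["hour"] = df["timestamp"].dt.hour
    return df
```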

Three: Analyze data

Manufacturers are dealing with a vast amount of data. Metrics like temperature and pressure values undergo sub-second changes, sometimes a hundred times per second. Moreover, organizations usually don't want or need to move all of this data to a centralized system.

To make informed decisions about how to structure your data pipeline, it is essential to understand the nature of your data and identify its potential. Here are some objectives to consider when analyzing data before it's funneled into a pipeline (see the sketch after this list):

  • Useful vs. non-useful data.
  • Best delivery mechanism.
  • Most efficient data formats for insights.
  • Optimal data model to support your most valuable use cases.
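
For example, rather than shipping every raw reading to a central system, high-frequency signals can be aggregated at the edge. The sketch below (assuming pandas and the column naming from the preprocessing example) collapses 100 Hz readings into per-second summaries, roughly a hundredfold reduction in volume before anything leaves the plant.

```python
# Edge-aggregation sketch with pandas: collapse 100 Hz readings into
# per-second min/mean/max, so only ~1 row per second travels upstream
# instead of ~100. Column names follow the preprocessing example above.
import pandas as pd

def downsample(df: pd.DataFrame) -> pd.DataFrame:
    """Aggregate sub-second pressure readings into per-second summaries."""
    agg = (
        df.set_index("timestamp")["pressure"]
          .resample("1s")
          .agg(["min", "mean", "max"])
    )
    return agg.reset_index()
```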

Four: Store data 

Since the dawn of data, we've needed a place to store it. For a long time, storing data in-house was the only viable option. However, the emergence of cloud storage solutions has introduced some tough decisions. Should you move all your data to the cloud? Or is a hybrid cloud solution more suitable? Should you put time parameters around how long to keep certain data?

The answers to these questions will depend on your specific use cases, data volumes and performance requirements. It's common for data pipelines to include a combination of different storage technologies and arrangements to handle various forms of data within your pipeline. 

WWT specializes in helping companies make these decisions, so the process doesn't become a burden.
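
As a simple illustration of a tiered arrangement, the sketch below routes records to different storage destinations based on age. The tier ages and destinations are hypothetical placeholders, not recommendations; your retention windows will follow your own use cases and compliance obligations.

```python
# Hypothetical tiered-retention policy: the tier ages and destinations
# are illustrative placeholders, not a recommendation.
from datetime import datetime, timedelta, timezone

RETENTION_TIERS = [
    # (maximum record age, storage destination)
    (timedelta(days=30), "on-prem hot storage"),      # recent data, fast queries
    (timedelta(days=365), "cloud object storage"),    # warm analytics data
    (timedelta(days=365 * 7), "cloud archive tier"),  # long-term compliance copies
]

def tier_for(record_time: datetime) -> str:
    """Return the storage destination for a record based on its age.

    record_time must be timezone-aware (UTC) for the comparison to hold.
    """
    age = datetime.now(timezone.utc) - record_time
    for max_age, destination in RETENTION_TIERS:
        if age <= max_age:
            return destination
    return "eligible for deletion"
```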

Five: Secure data

Securing data pipelines is a critical aspect of any modern data-driven organization's operations. It is essential to protect sensitive information, maintain data integrity and ensure compliance with internal and external regulatory requirements. 

Data pipeline security is crucial for controlling access, mitigating threats, ensuring business continuity and building trust. It is an ongoing process that requires a combination of technical measures, policies and procedures. Regular security audits and updates are also crucial to adapt to evolving threats and vulnerabilities.

To ensure the security of data pipelines, organizations should consider the following best practices, illustrated briefly after the list:

  • Encrypt data in transit and at rest.
  • Implement access controls.
  • Monitor pipeline activity.
  • Conduct regular security audits.
  • Stay up-to-date with security patches.
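
As a small illustration of the first practice, the sketch below encrypts a sensor reading at rest using the Fernet recipe (symmetric, authenticated encryption) from Python's cryptography package. The payload is hypothetical; in practice the key would live in a secrets manager, and encryption in transit would be handled at the connection layer with TLS.

```python
# Encryption-at-rest sketch using the Fernet recipe (symmetric,
# authenticated encryption) from the `cryptography` package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in production, keep this in a secrets manager
cipher = Fernet(key)

reading = b'{"sensor": "line-3-temp", "value": 71.2}'  # hypothetical payload
token = cipher.encrypt(reading)  # persist the token, never the plaintext
assert cipher.decrypt(token) == reading
```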

Six: Visualize data

The final step is visualization, and it's important that visualization is the last piece of the puzzle, not the first. It's critical to start at the beginning and work your way through the stages of data ingestion, processing and delivery before determining how your data should be visualized.

Once you have collected, processed and delivered the data, you can then determine how to visualize it. This can be done using a variety of tools such as Power BI, Tableau or other data visualization tools on the market. Organizations often want to start with the visualization element, but it's important to remember that visualization is not where the answers come from; it's the culmination of all the stages that lead up to it.
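
For teams prototyping before committing to a BI tool, a few lines of matplotlib can render the per-second aggregates produced earlier in the pipeline. The file name and column names below are assumptions carried over from the earlier sketches; a dashboard in Power BI or Tableau would typically consume the same table.

```python
# Quick-look plot of the per-second aggregates from the analysis step.
# The file name and column names are assumptions from the earlier sketches.
import matplotlib.pyplot as plt
import pandas as pd

agg = pd.read_csv("pressure_1s.csv", parse_dates=["timestamp"])

plt.plot(agg["timestamp"], agg["mean"], label="mean pressure")
plt.fill_between(agg["timestamp"], agg["min"], agg["max"],
                 alpha=0.3, label="min/max band")
plt.xlabel("time")
plt.ylabel("pressure")
plt.legend()
plt.show()
```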

The future is yours

The data is yours, too, but it is your organization's responsibility to manage it. Owning your data is the smartest thing a manufacturer can do, both now and in the future. With everything seemingly done as-a-service these days, the implication is that your data is going somewhere else. While this may be true, it's important that organizations own their data, no matter where it is located.

To achieve this, you may want to consider embracing open architecture and standardization or a plug-and-play type of setup within your manufacturing environment. Doing so can lead to a significant reduction in integration costs, an increase in the value of data collection and the exponential growth of valuable use cases.

Want to learn more about how WWT can support your organization's data strategy? Get in touch