James D. Vogel (founder and director, The BioProcess Institute) BPI Theater @ INTERPHEX, April 27, 2016, 11:40 am–12:00 pm
James Vogel introduced the concept of trending and how to use it to improve a manufacturing facility’s performance. Trending is defined as collecting data and then examining that information for trends — for example, modeling data to forecast the weather. Sometimes a simple graph tells you as much as more advanced statistics can. The key is to pay attention to present conditions.
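As a minimal illustration of the "simple graph" point, a moving average is one of the plainest ways to smooth noisy readings so a trend becomes visible. The talk names no specific method; the sketch below is an assumed, generic example.

```python
def moving_average(series, window=3):
    """Smooth a series of readings with a simple moving average.

    A generic trending sketch, not a method prescribed in the talk:
    each output point is the mean of `window` consecutive readings,
    which damps day-to-day noise so a drift is easier to spot.
    """
    if window < 1 or window > len(series):
        raise ValueError("window must be between 1 and len(series)")
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

# Example: weekly bioburden counts trending upward under the noise
counts = [3, 5, 2, 6, 4, 8, 7, 9]
smoothed = moving_average(counts, window=3)
```

Plotting `smoothed` over time, even in a spreadsheet, often reveals a drift that the raw numbers hide.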
All facilities collect data to meet regulatory requirements, but doing so is worthless if no one looks at the information. By using data, companies can get an idea of what is going on in different sections of their facilities. They also can look at different facilities and determine why the same processes do not perform in the same way. Trending can illuminate systems interactions such as cleaning and environmental monitoring (even a change in cleaning staff or contracts can make a difference in environmental monitoring results). Companies should assign the responsibility of tracking data to a single person or group of interacting employees who can review systems, utility maintenance, cleanroom parameters, and so on. Such a team can be proactive before major problems occur — instead of reactive afterward.
Melanie Cerullo (Pharmation, LLC) presented a case study of an action-level excursion for a clean-steam port associated with an autoclave. The client company had no alert levels in place, its trending program considered only failure rates of action-level excursions over time, the data sat in silos, and no one was singularly responsible for the system. Consultants with the BioProcess Institute plotted the client’s data graphically over time, at which point the problem became plainly visible. An alert would have caught it, but no one had examined historical levels and determined when the alert should sound.
“You should always provide a starting value,” Cerullo said, “even if it is somewhat arbitrary. You want one low enough so that your products are not impacted but high enough that you see something is going on that needs fixing. Over time, you can refine it based on historical data.”
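Cerullo's advice — start with a provisional alert level and refine it from historical data — can be sketched numerically. The talk gives no formula; the mean-plus-two-standard-deviations rule below is an assumed, common convention, not the speakers' prescribed method.

```python
import statistics

def suggest_alert_level(history, sigmas=2.0):
    """Refine an alert level from historical readings.

    An illustrative sketch only: sets the alert at the historical mean
    plus `sigmas` standard deviations, so it is low enough to flag a
    shift but high enough that normal variation does not trip it.
    Requires at least two historical readings.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return mean + sigmas * stdev

# Example: clean-steam port readings from routine monitoring
history = [10, 11, 10, 12, 10, 11, 10]
alert = suggest_alert_level(history)
```

The starting value can be arbitrary, as Cerullo notes; what matters is revisiting it as history accumulates.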
In the case study, autoclave use had increased significantly and shifted to a different group than had historically run it. (Production-related changes might cause a company to change its preventive-maintenance cycle.) The cause of the excursion turned out to be a cracked pressure gauge leaking glycerin into the system. Real-time review of all related data could have shown a potential problem before the failure. Vogel suggested setting alert levels based on historical performance and breaking down the “data silos.”
The importance of real-time review was the moral of that story. Data have no value if a company does nothing with the information collected. One suggestion was to create a “dashboard” for real-time review that indicates whether a facility is in a state of control — with green, yellow, and red indicators for conditions that are good, cautionary, and bad. Such a solution would include system-specific dashboards, which allow companies to fix small irregularities before they become real problems.
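The green/yellow/red logic of such a dashboard reduces to comparing each reading against its alert and action levels. A minimal sketch, with hypothetical threshold values for illustration:

```python
def status(value, alert_level, action_level):
    """Map a reading to a dashboard indicator.

    Green below the alert level (in control), yellow from the alert
    level up to the action level (cautionary), red at or above the
    action level (out of control). Thresholds are system-specific.
    """
    if value >= action_level:
        return "red"
    if value >= alert_level:
        return "yellow"
    return "green"

# Hypothetical example: a system with alert = 10 and action = 20
reading = 14
indicator = status(reading, alert_level=10, action_level=20)
```

A yellow indicator is exactly the "small irregularity" the speakers want caught: it prompts investigation before the reading ever reaches an action-level excursion.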
Data Sources for Trending Analysis

Quality Control: laboratory information management systems (LIMS) for test results, alerts/actions, time, and people

Quality Assurance: computerized maintenance management systems (CMMS) for planned/unplanned maintenance (and frequency thereof), system changes, and people; enterprise quality management software (e.g., Trackwise) for nonconformances, corrective and preventive actions (CAPAs), and people

Facilities and Equipment: BMS/DCS software for system temperatures, alarms, activities, and people

Manufacturing: enterprise resource planning (ERP) systems for production activities and people