Jam Vision: Increasing Mine Crusher Efficiency Through Computer Vision at the Edge
Mining is one of the oldest industries in the world, dating back ~10,000 years. The majority of mines in the world have been in operation for more than a century and, therefore, have lagged in adopting benefits from the Artificial Intelligence (AI) industry. This article shows how AI can help such a capital-intensive industry, where asset efficiency is critical for profitable production. Furthermore, this article describes how unplanned downtime can be substantially reduced by using a novel analytical solution.
Context: Jammed Crushers Lead to Significant Loss in Production
Surface hard-rock mining involves digging through the earth’s surface to obtain materials with valuable mineral or metal content. This digging is done by blasting earth material into large rocks (boulders) with explosives; the boulders are then crushed to a fine consistency. After the rocks are crushed, the material is put through a chemical process that allows the valuable minerals and metals to be extracted. The crushing of rocks is therefore a key step in the mineral extraction process.
In the first stage of crushing (i.e., primary crushing), haul trucks dump large rocks into crushers where they are smashed to a size of <7 inches (17 cm). One of the main concerns in primary crushing is crusher jams. A crusher jam occurs when larger boulders obstruct the opening of a crusher and bring the crushing process to a halt (as illustrated in Figure 1).
Such jams are addressed by human-controlled rock hammers, which break up the large boulders so the flow of material can continue. If a jam is not resolved before the next dump of material (new material is dumped approximately every 3-5 minutes), it can lead to a “bridge-over,” a more challenging and time-intensive event to address because the top load of material must be removed to resolve the jam. Thus, mining operations attempt to minimize jams to prevent major losses in production and revenue (a 3-4% loss in asset efficiency, worth >$50 million/year across just 3 mining sites at one organization).
In a typical mining operation, human operators are responsible for identifying jams along with a host of other responsibilities, often resulting in missed jams. Herein, we describe a novel analytical solution, termed Jam Vision, that uses a smart camera with a custom machine learning (ML) algorithm to rapidly notify operators of jams. On average, Jam Vision detected jams before operators could identify them, with >95% sensitivity and precision, significantly reducing the chance of bridge-overs and increasing asset efficiency.
Because this solution was developed during the peak of the COVID-19 pandemic, the World Wide Technology (WWT) team had to find creative development strategies to overcome limitations imposed by remote working, which will also be described in this article.
Computer Vision and Smart Camera
The Jam Vision solution falls into a class of ML known as computer vision. Jam Vision ‘sees’ jams and performs the same task that a human would perform to identify a jam. This is where computer vision gets its name – the computer has ‘vision’ or the ability to ‘see’ like a human.
Some computer vision applications transmit video to the cloud or off-premise servers for processing; however, remote compute can be limited by network bandwidth. Jam detection needed to be instant to prevent bridge-overs and scalable so multiple crushers could simultaneously utilize Jam Vision. Therefore, Jam Vision was deployed on a smart camera, or a camera with an on-board computer, allowing for the technology to be implemented at the edge for an instant and scalable solution.
Choosing the right smart camera for jam detection was critical. Because crusher jam detection does not follow typical object detection methods, the smart camera needed to allow for a customized ML algorithm not found in generic computer vision toolboxes. Furthermore, the harsh working conditions of the mine – with 24x7 operations, intense vibrations, heat, and dust – require a durable smart camera. Given these requirements, Adlink’s Neon-203B-JT2-X smart camera was selected. The Neon pairs an industrial Basler camera sensor with an NVIDIA Jetson TX2 computer in a compact, durable package. The Jetson TX2 allows customized computer vision algorithms coded in Python to be deployed, which is not possible with most smart cameras on the market. Also, the Neon is passively cooled, eliminating the possibility of cooling-fan failure in the high-dust conditions at the mine.
Jam Detection Algorithm
Under normal operating conditions, rock material is readily flowing into the opening of the crusher. In the case of a jam, this material comes to a halt. Therefore, the goal of a jam detection algorithm is to detect when static rock material is present in the dump pocket. To accomplish this, we developed an algorithm in Python composed of three modules:
- Module 1: A custom algorithm to quantify rock motion in the dump pocket, termed Smart Edge Motion Detection.
- Module 2: A neural network (a type of supervised ML algorithm) to identify where rock materials are located.
- Module 3: An algorithm that averages jam signals over space and time (i.e., a space-time averaging algorithm) to reduce noise associated with jam signals.
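The space-time averaging idea in Module 3 can be sketched in a few lines of Python. This is an illustrative stand-in only: the grid size, window length, and threshold below are hypothetical values, not the production parameters.

```python
import numpy as np

def space_time_average(jam_maps, window=10, threshold=0.8):
    """Noise reduction for per-frame jam signals (illustrative sketch).

    jam_maps: array of shape (frames, grid_h, grid_w) holding a raw 0/1
    "static rock" signal for each spatial cell of the dump pocket in each
    frame. A jam is declared only if the signal, averaged over the spatial
    grid and the last `window` frames, exceeds `threshold`.
    """
    recent = jam_maps[-window:]             # temporal window
    per_frame = recent.mean(axis=(1, 2))    # spatial average per frame
    score = per_frame.mean()                # temporal average
    return score >= threshold, score

# A steady "static rock" signal across space and time -> jam
maps = np.ones((20, 4, 4))
is_jam, score = space_time_average(maps)

# A single noisy frame in otherwise flowing material -> no jam
maps = np.zeros((20, 4, 4))
maps[5] = 1.0
no_jam, _ = space_time_average(maps)
```

Averaging over both dimensions means a momentary misclassification in one frame, or in one small region, cannot trigger a false alarm on its own.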
The initial development of Jam Vision was performed using recorded video footage from a security camera at the mine. To learn “what is rock” or “what is not rock,” the training of the neural network (using a dataset labeled with a custom-built annotation app) was performed using an NVIDIA DGX-1 in WWT’s Advanced Technology Center.
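To illustrate the supervised training idea, here is a toy two-layer neural network trained with plain NumPy on synthetic bright/dark patches standing in for "rock"/"not rock" labels. The production network, its architecture, and its training data all differ; this sketch only shows the mechanics of learning a binary patch classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: flattened 8x8 grayscale patches, label 1 = "rock".
# (The real model was trained on annotated crusher footage.)
rock     = rng.uniform(0.6, 1.0, size=(200, 64))   # bright patches
not_rock = rng.uniform(0.0, 0.4, size=(200, 64))   # dark background
X = np.vstack([rock, not_rock])
y = np.concatenate([np.ones(200), np.zeros(200)])

# One tanh hidden layer, sigmoid output, full-batch gradient descent.
W1 = rng.normal(0, 0.1, (64, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 1));  b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    grad_out = (p - y)[:, None] / len(y)        # dLoss/dlogit for BCE loss
    grad_h = (grad_out @ W2.T) * (1 - h**2)     # backprop through tanh
    W2 -= 0.5 * (h.T @ grad_out)
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ grad_h)
    b1 -= 0.5 * grad_h.sum(axis=0)

# Training accuracy on the synthetic patches
h = np.tanh(X @ W1 + b1)
p = sigmoid(h @ W2 + b2).ravel()
accuracy = float(((p > 0.5) == y).mean())
```

On real footage, this is the kind of loop that the DGX-1 accelerates: the forward/backward passes run over far larger images and batches, so GPU throughput determines how quickly new annotations improve the model.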
Smart Camera Testing in a COVID-19 World
Due to travel restrictions from COVID-19, initial testing of Jam Vision on the smart camera could not be performed on-site. Instead, initial testing took place in WWT team members’ homes. This was crucial to mitigate the chances of failure after installation of the smart camera at the mine.
For the first test, a mini-crusher phantom experiment was conducted. The crusher phantom was created by filling a mini fertilizer spreader with fertilizer pellets. When the fertilizer spreader was cranked, it simulated material flowing in the crusher; when the crank was halted, this simulated a jam. The smart camera was positioned above the crusher phantom with a bike mount and connected to a monitor so that the Jam Vision output could be visualized in real time. The crusher phantom experiment demonstrated real-time processing and the ability to distinguish a rotating fertilizer spreader (no jam) from a non-rotating fertilizer spreader (jam).
A major concern for the Jam Vision solution was the production environment. On-site ambient temperatures at the mine can reach up to 115°F, and the mounting location is exposed to intense vibrations from moving mining equipment and ore blasting. To ensure that the smart camera would not be negatively impacted by these conditions, these concerns were simulated at home.
Ambient air conditions were simulated by placing the smart camera in a closed car sitting in direct sunlight, with an interior temperature between 110-125°F (verified with an infrared [IR] thermometer). After running Jam Vision in the car for several days, no failure was evident.
In addition to high-temperature ambient air, the smart camera was to be installed on the sun-exposed roof of the crusher at the mine, which can reach temperatures of several hundred °F. The Adlink smart camera cannot operate at temperatures over 145°F. It was hypothesized that a heat shield could reduce radiant heat exposure from the hot roof to the smart camera. To test this hypothesis, the smart camera was placed on top of a heat shield that sat on a hot plate heated to 301°F (verified with the IR thermometer). After running for several hours, no Jam Vision failure was evident, indicating that the heat shield would provide sufficient protection from the radiant heat coming off the roof.
Mine vibration conditions were simulated by supplying a speaker with a custom vibration signal. A vibration meter was used to ensure the speaker vibrations were similar to the vibrations the smart camera would experience at its installation location at the mine. Jam Vision ran for several hours on the vibrating speaker without failure. More details of smart camera testing will be supplied in a follow-up article.
Instant Results and Real-Time Processing with NVIDIA CUDA
To ensure mine operators were notified promptly of a jam, the Jam Vision solution required real-time detection and instant alarms. While processing on an edge device significantly improves response time, the edge processing itself still needs to be quick. Early in development, it became evident that a CPU-based algorithm would create significant delays in jam detection. Jam Vision must output jam detection frames at a rate greater than one frame every three seconds, or the algorithm will crash as unprocessed frames build up in memory. Leveraging CUDA (NVIDIA's platform for general-purpose GPU programming) on the NVIDIA Jetson TX2, we were able to hit our performance goal of real-time processing, with jam detection in only 1-2 seconds (as shown in Figure 6). Compared with CPU processing, NVIDIA CUDA provided a ~4x boost in processing speed. Details of our CUDA solution and the different methods tested will be featured in a future technical article.
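The memory constraint described above can be made concrete with a bounded frame buffer: if detection falls behind the camera, the oldest unprocessed frames are dropped rather than accumulating until the process crashes. This is a plain-Python sketch of that back-pressure policy (the buffer size is a hypothetical value, and the CUDA processing itself is not shown):

```python
from collections import deque

class FrameBuffer:
    """Bounded frame queue (illustrative). When the detector cannot keep
    up with the camera, stale frames are discarded instead of building
    up in memory."""

    def __init__(self, maxlen=8):
        # A full deque with maxlen silently drops from the left on append.
        self.frames = deque(maxlen=maxlen)
        self.dropped = 0

    def push(self, frame):
        if len(self.frames) == self.frames.maxlen:
            self.dropped += 1          # count the frame about to be evicted
        self.frames.append(frame)

    def pop(self):
        return self.frames.popleft() if self.frames else None

buf = FrameBuffer(maxlen=8)
for i in range(20):            # camera produces 20 frames...
    buf.push(f"frame-{i}")     # ...while the detector is busy elsewhere

assert buf.dropped == 12            # 12 stale frames were discarded
assert buf.pop() == "frame-12"      # processing resumes at the oldest kept frame
```

With GPU processing keeping detection under the three-second budget, the buffer rarely fills; the drop policy is a safety net for transient slowdowns, not the normal operating mode.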
Smart Camera Installation
For the ideal imaging angle, the smart camera was installed on the roof directly over the opening of the crusher. The crusher is outdoors but has a roof structure that allows cranes and other equipment to perform maintenance on the crusher. The camera was attached to the roof using vibration dampeners to reduce crusher and blasting vibrational effects (see Figure 7). Fiberglass bolts were used to install the heat shield between the camera and the sun-exposed roof to prevent the camera from overheating. An IP67-rated serial connection powered the camera, and an M12 IP67 Ethernet cable connected the camera to the local network to send jam signals to operators.
Again, COVID-19 travel restrictions prevented WWT team members from being on-site during installation, creating a unique challenge for initial setup and calibration. Because existing tools would not be sufficient to do this remotely, we built a custom calibration software tool to aid in the camera installation. The custom tool allowed WWT team members to collaborate remotely with mine personnel, monitor the installation process, and adjust camera parameters to maximize image quality for Jam Vision. This ensured correct focusing of the camera and centering of the crusher opening in the frame, all while remote (see Figure 8). A follow-up article discussing the role of custom-built tools and their use in this solution will be published in the future.
Jam Vision Performance Results
After smart camera installation, a 96-hour field test was run to evaluate the performance of Jam Vision. To do this, the number of true alerts (i.e., Jam Vision identified correct jams), false alerts (i.e., Jam Vision identified incorrect jams), and missed jams (i.e., a jam occurred but Jam Vision did not identify it) over the course of the test were counted (see Figure 9). Sensitivity and precision were then calculated to determine performance.
Sensitivity was found to be:

sensitivity = true alerts / (true alerts + missed jams)

Precision was found to be:

precision = true alerts / (true alerts + false alerts)
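In code, the two metrics reduce to simple ratios of the counts tallied during the field test. The counts used below are hypothetical placeholders, not the Figure 9 numbers:

```python
def sensitivity(true_alerts, missed_jams):
    # Fraction of real jams that Jam Vision caught.
    return true_alerts / (true_alerts + missed_jams)

def precision(true_alerts, false_alerts):
    # Fraction of raised alerts that were real jams.
    return true_alerts / (true_alerts + false_alerts)

# Hypothetical example: 39 true alerts, 1 missed jam, 1 false alert
assert sensitivity(true_alerts=39, missed_jams=1) > 0.95
assert precision(true_alerts=39, false_alerts=1) > 0.95
```

Sensitivity penalizes missed jams, while precision penalizes false alarms; both must stay high for operators to trust the alerts.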
The high sensitivity and precision metrics of over 95% for Jam Vision indicate excellent performance.
During our testing, while jam alerts were not yet exposed to operators, Jam Vision detected jams well in advance of the operators (up to 15 minutes before operators, busy with other tasks, would have detected them). This lead time can exceed the interval between dumps (3-4 minutes) because many jams are partial jams, which do not immediately result in bridge-overs. Partial jams are still advantageous to detect: they reduce the efficiency of the crusher and carry bridge-over potential with each subsequent dump. Once operators were exposed to Jam Vision alerts, rock-hammer operators' feedback on the solution was overall positive. Operators noted that Jam Vision frequently notified them of jams they had failed to notice and allowed them to focus more on other tasks. Jam Vision improved mining operations by reducing the chance of bridge-overs, increasing asset efficiency, and increasing operator bandwidth.
Alarms (How are Operators Notified?)
Once Jam Vision detects a jam, operators need to be alerted immediately to address the issue and prevent further dumping. This is done via an industrial alarm driven by a programmable logic controller (PLC) installed on an isolated control network. The smart camera integrates with the PLC to turn on the industrial alarms (see Figure 10). Once an alarm is active, a sound plays and a jam light turns on, alerting truck operators to stop dumping and rock-hammer operators to clear the jam.
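The alarm hand-off can be sketched as a simple latch: the alarm output is switched on once when a jam appears and off once when it clears, rather than rewriting the PLC output on every frame. Note that `write_plc_output` below is a hypothetical stand-in for the actual PLC integration, which is not described in detail here:

```python
class JamAlarm:
    """Latching alarm sketch (illustrative). Drives a single on/off PLC
    output from the per-frame jam signal."""

    def __init__(self, write_plc_output):
        self.write = write_plc_output   # hypothetical PLC write callback
        self.active = False

    def update(self, jam_detected):
        if jam_detected and not self.active:
            self.active = True
            self.write(True)     # sound + jam light on: stop dumping
        elif not jam_detected and self.active:
            self.active = False
            self.write(False)    # jam cleared: resume normal operation

plc_log = []
alarm = JamAlarm(plc_log.append)
for signal in [False, True, True, True, False, False]:
    alarm.update(signal)

assert plc_log == [True, False]   # one ON and one OFF despite repeated signals
```

Latching keeps the control-network traffic minimal and avoids a flickering alarm if the per-frame jam signal is briefly noisy.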
Jam Vision Model Improvement (How are models improved?)
Jam video and images are saved locally on the camera and periodically synced to Azure Blob storage (see Figure 10). Once in Azure Blob, the images are fed into a training algorithm executed on a GPU-accelerated cluster. For the training environment, we chose Azure Databricks, a cloud-based data engineering platform. The data feed from Azure Blob to Databricks is continuous, ensuring training models can be updated on demand and without manual setup. Further pipeline development is planned as we evolve toward a mature MLOps architecture.
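The periodic sync step amounts to "pick every locally saved clip modified since the last successful sync." Only that selection logic is sketched below; the actual upload to Azure Blob uses the Azure SDK and is omitted here, and the file names are illustrative:

```python
import os
import pathlib
import tempfile

def files_to_sync(directory, last_sync_time):
    """Return names of files modified since the last sync (illustrative).

    last_sync_time is a POSIX timestamp recorded after the previous
    successful upload; anything newer still needs to go to Azure Blob.
    """
    out = []
    for p in pathlib.Path(directory).iterdir():
        if p.is_file() and p.stat().st_mtime > last_sync_time:
            out.append(p.name)
    return sorted(out)

# Demonstration with two dummy clips and forced modification times
with tempfile.TemporaryDirectory() as d:
    old = pathlib.Path(d) / "jam_old.mp4"
    old.write_bytes(b"")
    os.utime(old, (1000, 1000))      # modified well before the last sync
    new = pathlib.Path(d) / "jam_new.mp4"
    new.write_bytes(b"")
    os.utime(new, (2000, 2000))      # modified after the last sync
    synced = files_to_sync(d, last_sync_time=1500)

assert synced == ["jam_new.mp4"]
```

Tracking the last sync timestamp keeps uploads incremental, which matters on a mine-site network where bandwidth is limited.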
Application Monitoring (Responding to Jam Vision Problems)
For any 24x7, real-time industrial control application, it is important to react immediately to problems. Network interruptions, crashes, and downtime need to be detected so operators can be informed as soon as possible. However, this poses a tricky problem: if the algorithm itself is not operational, how can it communicate problems?
To solve this, a 'health-check' monitor runs in an external process and 'pings' the ML algorithm for metrics such as delay time, responsiveness, memory use, CPU use, and external system communication status (Figure 11). The monitoring process compiles these metrics to predict or detect problems and notifies operators of any problem with the smart camera. The status of these monitoring metrics, along with the current jam data and video, is then displayed for operators on the Jam Vision Dashboard web page (see Figure 10).
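A minimal version of such a health check is a heartbeat watchdog: the detection process stamps a heartbeat after each processed frame, and the external monitor flags the application if heartbeats stop. The class names and the 10-second window below are illustrative assumptions, not the production design:

```python
import time

class HealthMonitor:
    """Heartbeat watchdog sketch. The ML process calls heartbeat();
    a separate monitor process polls is_healthy()."""

    def __init__(self, max_silence_s=10.0, clock=time.monotonic):
        self.max_silence = max_silence_s
        self.clock = clock            # injectable clock for testability
        self.last_beat = clock()

    def heartbeat(self):              # called by the detection process
        self.last_beat = self.clock()

    def is_healthy(self):             # called by the monitor process
        return (self.clock() - self.last_beat) <= self.max_silence

# Simulated clock so the example is deterministic
now = [0.0]
mon = HealthMonitor(max_silence_s=10.0, clock=lambda: now[0])
mon.heartbeat()
now[0] = 5.0
assert mon.is_healthy()        # 5 s of silence: fine
now[0] = 20.0
assert not mon.is_healthy()    # 20 s of silence: alert the operators
```

Because the watchdog lives in its own process, it keeps reporting even if the detection algorithm hangs or crashes, which is exactly the failure mode the question above raises.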
Conclusions and Future Work
In conclusion, a computer vision solution to detect crusher jams, termed Jam Vision, was developed. The solution was deployed at the edge on a smart camera, which allowed for instant jam alerts with over 95% sensitivity and precision. This high performance reduced the number of bridge-overs, increased asset efficiency, and increased operator bandwidth. The solution can help reduce unplanned crusher downtime (due to jams) by ~90%, resulting in a ~3% increase in asset efficiency. Next steps include continuous improvement of the model using the pipeline outlined in Figure 10.
Jam Vision demonstrates that computer vision deployed at the edge can provide a high-value, scalable solution for mining operations. Jam detection is just the beginning of what computer vision can offer. Plans are underway to install smart cameras at other locations in the mine to further increase asset efficiency and make the hazardous mining environment safer.