Particle accelerators have a very direct impact on our lives through several of their applications in medicine. With the advancement of accelerator science, new techniques for the treatment of cancer and the diagnosis of various diseases have provided major steps forward in healthcare over the last decades. These medical particle accelerators are high-tech machines that are challenging to operate. In this post, we describe how Data Science solutions help automate the operation and optimize the performance of such fascinating facilities.
Originally developed to investigate the fundamental laws of nature, particle accelerators are today far more than magnificent tools for fundamental research. They also have a significant role in industry and health, directly impacting our lives. Accelerator applications include the manufacturing of new electronic systems, the study of ancient works of art, the analysis of air pollution and climate change, the study of the 3D structure of proteins, and the development of alternative energy sources, to mention a few.
Among all the applications, we highlight in this post those centered on medical diagnostics and the treatment of cancer. Accelerator-based treatments fall into the category of radiation therapy. Depending on the particles used for the treatment, we talk about X-ray therapy, electron therapy, or proton therapy.
The differentiating aspect of proton therapy compared with X-ray radiotherapy is that the particles do not traverse the whole body of the patient. Protons are fired with a controlled energy that determines the depth to which they penetrate the target tissue. Once they reach that depth, they stop, whereas X-ray radiation passes through the entire body. Thus, with proton therapy, for a given dose delivered to a tumor, the patient is exposed to less radiation outside the tumor area, so less healthy tissue is damaged during treatment.
The effectiveness of proton therapy is the reason why this technology has expanded significantly in recent years, and why more treatment facilities are currently being built all around the world [1].
Operating particle accelerator facilities
Medical particle accelerators are very complex machines composed of a myriad of subsystems, and it takes the effort of many people to get all of them up and running in an orchestrated manner. However, no human can handle all the information collected in real time by monitoring devices that operate at very high frequencies. Automated systems are required to assist the operators in hospitals, helping them recognize anomalous patterns in the diagnostics readings that indicate something is not functioning correctly.
These automated systems can be rather simple: an alarm system alerting the operator when diagnostics readings cross certain thresholds. There are also more complex feedback systems, which react to sensor readings by performing actions without operator intervention. However, when complex patterns develop, or when data from several sources combine into a high-dimensional search space to be optimized, these systems are no longer enough to ensure a smooth machine operation. This matters even more when time is critical, as is the case for an emergency unit.
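To make the two tiers concrete, here is a minimal sketch contrasting a threshold alarm with a proportional feedback step. The signal names, threshold values, and gain are invented for illustration, not taken from any real control system:

```python
# Hypothetical monitored signals and their allowed operating bands.
THRESHOLDS = {"source_temp_C": (15.0, 45.0), "vacuum_mbar": (0.0, 1e-6)}

def check_alarms(readings):
    """Simple tier: return the signals whose reading left its allowed band."""
    return [name for name, value in readings.items()
            if not THRESHOLDS[name][0] <= value <= THRESHOLDS[name][1]]

def feedback_step(setpoint, measured, gain=0.5):
    """Feedback tier: a proportional correction applied without operator action."""
    return gain * (setpoint - measured)

print(check_alarms({"source_temp_C": 52.3, "vacuum_mbar": 5e-7}))  # overheated source
print(feedback_step(setpoint=0.0, measured=0.4))  # nudge the beam back toward zero
```

Anything beyond such hand-coded rules and single-loop corrections is where the ML approaches below come in.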
This is why, in recent years, Machine Learning (ML) solutions are being developed in the particle accelerator domain [2, 3]. While many of these systems originate in research environments, they are also useful in industrial and medical accelerators.
We describe here examples of ML solutions developed for particle accelerators, some based on our experience at PickleTech.
“The particle flux suddenly stops during the proton therapy treatment session. The patient placed at the end of the gantry waits while the technicians try to understand the failure. They need to call the machine expert, but that will take several hours. The technicians inform the patient that the proton therapy session must be postponed. Further investigation will reveal the fault was due to a sudden increase of the particle source temperature. The technicians had already reported some anomalous temperature patterns over the last weeks, but they could not foresee the failure.”
Predictive maintenance and anomaly detection
Building and operating particle accelerators is expensive, and it is therefore important to reduce their downtime as much as possible. This applies to research facilities and beyond. Synchrotron light sources monetize beam time for experiments in their beamlines, while medical accelerators represent a large economic investment for hospitals. In both cases, it is desirable to maximize uptime.
Predictive maintenance tools based on machine learning models play an important role here. Their goal is to find patterns that help predict and prevent failures. If you know when your machine may break down, you can schedule maintenance conveniently. Anomaly detection solutions help find behaviors that hint at upcoming failures. ML models can also serve as a diagnostics tool when a failure occurs, providing recommendations to avoid similar failures in the future.
Machine Learning tools go beyond traditional SCADA systems set up with human-coded thresholds and alert rules. ML can take into account the complex dynamic behavioral patterns of the machinery, as well as contextual data relating to the operational process at large, and it is able to recognize anomalous patterns that humans cannot easily perceive. The historical record of failure events is used to train supervised models to predict such events. With ML, not just static data is analyzed: solutions take into account the temporal evolution of the machine parameters, i.e. their time series. To extract information from such sequential data, solutions range from classical statistical tools such as ARIMA models to Deep Learning anomaly detection with LSTM networks.
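As a minimal, purely statistical baseline for the ARIMA and LSTM approaches mentioned above, the sketch below flags readings that deviate strongly from a rolling window of recent history. The simulated temperature trace, window size, and threshold are invented for illustration:

```python
import math
from collections import deque

def rolling_anomalies(readings, window=20, threshold=4.0):
    """Flag indices whose reading deviates strongly from the recent past."""
    buf = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(readings):
        if len(buf) == window:
            mean = sum(buf) / window
            std = math.sqrt(sum((v - mean) ** 2 for v in buf) / window)
            if std > 0 and abs(x - mean) / std > threshold:
                flagged.append(i)
        buf.append(x)
    return flagged

# Simulated source-temperature trace: a slow drift plus one sudden spike,
# echoing the anomalous temperature patterns in the vignette above.
temps = [30.0 + 0.01 * i for i in range(100)]
temps[60] += 5.0
print(rolling_anomalies(temps))  # only the spike is flagged
```

The slow drift stays within the rolling band and is never flagged; only the abrupt spike stands out, which is exactly the distinction a fixed SCADA threshold struggles to make.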
Signal vs. noise and outlier detection
Very sensitive electronic detection systems are installed all along a particle accelerator, recording a multitude of quantities. High sensitivity sometimes translates into noisy readings, where disentangling actual signal from noise requires dedicated data cleaning techniques. Obtaining clean readings is essential to ensure a smooth and accurate operation of the machine.
Machine learning, and clustering techniques in particular, significantly improve here on classical cleaning techniques such as Singular Value Decomposition. Unsupervised learning algorithms, e.g. Isolation Forest, K-Means, or DBSCAN, are very useful for detecting outliers. Clustering techniques conveniently separate signal from noise, and also group data with similar characteristics. DBSCAN was used at the LHC to group tune signals according to the operational configuration status [3, 4].
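To make the idea concrete, here is a minimal one-dimensional DBSCAN written from scratch, grouping tune-like readings into clusters and labeling stray values as noise (-1). The readings and the eps/min_pts values are invented; a real pipeline would use e.g. scikit-learn's DBSCAN on the full multi-dimensional diagnostics data:

```python
def dbscan_1d(points, eps=0.5, min_pts=3):
    """Minimal 1-D DBSCAN: returns one label per point (-1 = noise)."""
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neigh = [j for j in range(len(points)) if abs(points[j] - points[i]) <= eps]
        if len(neigh) < min_pts:
            labels[i] = -1  # provisionally noise (may later join a cluster as border)
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in neigh if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point reached from a core point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = [k for k in range(len(points)) if abs(points[k] - points[j]) <= eps]
            if len(jn) >= min_pts:  # j is a core point: keep expanding
                queue.extend(k for k in jn if labels[k] is None)
    return labels

# Tune-like readings: two tight clusters plus two stray noise values.
readings = [0.31, 0.32, 0.30, 0.33, 0.68, 0.67, 0.69, 0.95, 0.05]
print(dbscan_1d(readings, eps=0.03, min_pts=3))
```

The two dense groups come out as clusters 0 and 1, while the isolated readings are labeled -1 and can be discarded as noise.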
Operation parameter prediction and surrogate models
Having precise models of the particle accelerator is essential to plan treatments and evaluate the effects of the radiation in the patients.
Obtaining accurate and reliable analytical models for the whole accelerator and all of its components (magnets, accelerating cavities, sensors, …) is very challenging though. Usually, analytical models that assume ideal conditions plus some approximated deviations are used. However, these models have limitations, and running them can be expensive in terms of computing resources.
Supervised learning helps here by creating surrogate models that effectively represent real components of the accelerator, more accurately and at a lower computational cost than analytical models [5, 6].
Let’s take a concrete example. The magnetic field created by a magnet is rarely exactly equal to the nominal one; it is subject to several uncertainties. Some of these uncertainties can be measured directly in the lab, but others remain unknown. Once in operation, it is thus very difficult to know precisely the magnetic field acting on the particle beam. Here, supervised machine learning models infer from the beam behavior the actual magnetic field acting upon it and produce a precise estimate of the associated errors and uncertainties. Once the uncertainties are resolved, corrections can be applied more effectively.
Surrogate models are useful when there exists a complex relationship between the input – the parameters we can tune in a particle accelerator – and the output – the functions we want to evaluate – particularly if the latter cannot be easily evaluated from previous measurements.
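A toy version of the idea: below, a quadratic least-squares fit serves as a surrogate for a deliberately simple stand-in "simulation" relating a quadrupole strength to a beam size. The functional form and numbers are hypothetical, and real surrogates are typically neural networks trained on many runs of the expensive model:

```python
def expensive_model(k):
    # Hypothetical stand-in for a costly beam-dynamics simulation:
    # beam size as a function of a quadrupole strength k.
    return 1.0 + (k - 0.4) ** 2

def solve(A, b):
    # Solve A x = b by Gaussian elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_poly2(xs, ys):
    # Least-squares fit of y ≈ c0 + c1*x + c2*x^2 via the normal equations.
    X = [[1.0, x, x * x] for x in xs]
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    Xty = [sum(X[r][i] * ys[r] for r in range(len(X))) for i in range(3)]
    return solve(XtX, Xty)

# A handful of "expensive" evaluations train the cheap surrogate.
xs = [i / 10 for i in range(11)]
ys = [expensive_model(x) for x in xs]
c0, c1, c2 = fit_poly2(xs, ys)
surrogate = lambda k: c0 + c1 * k + c2 * k * k
print(round(surrogate(0.37), 4), round(expensive_model(0.37), 4))
```

Once fitted, the surrogate can be queried thousands of times per second during operation, while the original model is only run offline to generate training data.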
Control system optimization and automation
How easy a particle accelerator is to control for an operator is correlated with the amount of optimization and automation in the control system's back end. It also relates to machine uptime and to the time spent in transitions between different machine configurations. Although they follow similar principles, specialized control systems are built for each particle accelerator, taking into account its specific rules and constraints.
Given the optimization nature of a particle accelerator control system, Reinforcement Learning (RL) is a promising tool to complement classical control theory [7, 8]. A particle accelerator fits naturally into the typical RL framing: the accelerator, which one can think of as a sequence of magnets and other elements, represents the environment, and an agent is trained to learn the optimal configuration of the magnet correctors. If a particular correction step increases the performance, the agent receives a positive reward, and a negative one otherwise. With this approach, we train an algorithm that performs the machine correction in very few steps, compared to the large number of iterations a classical numerical optimizer requires. You can find more details in this blog post. There are even applications where Quantum Reinforcement Learning may find its way [8].
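A minimal sketch of this framing with tabular Q-learning: the "environment" below is an invented one-dimensional beam offset that a corrector nudges toward zero, far simpler than the actor-critic methods of [7, 8], but with the same state/action/reward structure:

```python
import random

# Toy environment: integer beam offset in [-3, 3]; a corrector nudges it by ±1.
ACTIONS = (-1, +1)

def step(state, action):
    nxt = max(-3, min(3, state + action))
    reward = 1.0 if nxt == 0 else -abs(nxt) / 3.0  # a centered beam is rewarded
    return nxt, reward

random.seed(0)
Q = {(s, a): 0.0 for s in range((-3), 4) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    s = random.randint(-3, 3)
    for _ in range(20):
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        nxt, r = step(s, a)
        # Q-learning temporal-difference update
        Q[(s, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in ACTIONS) - Q[(s, a)])
        s = nxt

# The learned policy steers the beam toward the center from either side.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(-3, 4)}
print(policy)
```

After training, the greedy policy moves the beam up when the offset is negative and down when it is positive, i.e. the agent has learned the corrector logic purely from rewards.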
Another example of ML for particle accelerator control systems is Bayesian optimization. In this case, we tune machine parameters, e.g. the magnetic fields of a set of magnets, to optimize a quantity such as the beam size at a particular location – for instance, at the tumor. The relationship between the parameters we tune and the function to be optimized is complex and treated as unknown. Bayesian optimization is a sequential algorithm, based on Bayes' theorem, typically employed to optimize functions that are expensive to evaluate: a probabilistic surrogate models the objective from the measurements so far, and an acquisition function selects the most promising parameter setting to try next.
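As an illustration, here is a minimal Bayesian optimization loop in pure Python: a small Gaussian-process surrogate plus a lower-confidence-bound acquisition, minimizing a hypothetical quadratic beam-size objective. The objective, kernel length scale, and acquisition constant are all assumptions for the sketch; a real application would use a dedicated library and the machine's actual response:

```python
import math

def kernel(a, b, length=0.3):
    # RBF (squared-exponential) covariance between two parameter values.
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, b):
    # Solve A x = b by Gaussian elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, xq, noise=1e-6):
    # Gaussian-process posterior mean and variance at query point xq.
    K = [[kernel(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    k = [kernel(a, xq) for a in xs]
    alpha = solve(K, ys)
    mean = sum(ki * ai for ki, ai in zip(k, alpha))
    v = solve(K, k)
    var = kernel(xq, xq) - sum(ki * vi for ki, vi in zip(k, v))
    return mean, max(var, 1e-12)

def beam_size(strength):
    # Hypothetical expensive objective: beam size vs. a magnet strength,
    # minimal at strength 0.62.
    return (strength - 0.62) ** 2 + 0.01

# Sequential loop: fit the GP to the measurements so far, then evaluate the
# candidate minimizing a lower-confidence-bound acquisition (explore + exploit).
xs = [0.0, 0.5, 1.0]
ys = [beam_size(x) for x in xs]
grid = [i / 100 for i in range(101)]
for _ in range(10):
    lcb = []
    for x in grid:
        m, v = gp_posterior(xs, ys, x)
        lcb.append(m - 2.0 * math.sqrt(v))
    x_next = grid[lcb.index(min(lcb))]
    xs.append(x_next)
    ys.append(beam_size(x_next))
best = xs[ys.index(min(ys))]
print(round(best, 2))
```

Each iteration spends one "expensive" evaluation where the surrogate is either promisingly low or still uncertain, which is why the method suits objectives measured on a running machine.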
While the development of ML solutions may start with experimental scripts that are tested and improved iteratively, functional tools must be properly put into production and integrated into the daily operation just like any other control software. In the case of ML tools, this involves the particularities of MLOps: models must be properly maintained, monitored, and retrained when new data is acquired. Read our previous blog entry for more information on how MLOps is applied to medical devices.
Powered by Data, Driven by Science
Machine Learning facilitates automation, optimization and the operation of particle accelerators. Given our background and experience in both the particle accelerator and the health domains, PickleTech is contributing to this effort with the development and implementation of new data science solutions.
At PickleTech, we work developing tailored solutions to improve competitive aspects related to Health, Sports, and DeepTech. We believe Data Science and advances in Machine Learning coupled with domain knowledge and experimentation have the potential to provide new tools to better understand, monitor, and systematically improve the competitive performance of organizations.
[1] PTCOG – Facilities in Operation, https://www.ptcog.ch/index.php/facilities-in-operation-restricted
[2] Opportunities in Machine Learning for Particle Accelerators, A. Edelen et al., https://arxiv.org/abs/1811.03172
[3] Unsupervised Learning Techniques for Tune Cleaning Measurement, H. Garcia, https://inspirehep.net/literature/1962752
[4] Detection of faulty beam position monitors using unsupervised learning, E. Fol et al., https://journals.aps.org/prab/abstract/10.1103/PhysRevAccelBeams.23.102805
[5] HL-LHC Inner Triplet Magnetic Error Prediction using Machine Learning Techniques, H. Garcia, https://indico.cern.ch/event/1117491/contributions/4692815/attachments/2377039/4060740/RL_for_optics_correction.pdf
[6] Supervised learning-based reconstruction of magnet errors in circular accelerators, E. Fol et al., https://link.springer.com/article/10.1140/epjp/s13360-021-01348-5
[7] Sample-efficient reinforcement learning for CERN accelerator control, V. Kain et al., https://journals.aps.org/prab/abstract/10.1103/PhysRevAccelBeams.23.124801
[8] Hybrid actor-critic algorithm for quantum reinforcement learning at CERN beam lines, M. Schenk, E. F. Combarro et al., https://arxiv.org/abs/2209.11044