Kaze W. K. Wong
Scientific machine learning
Machine learning has to be robust and interpretable when it comes to scientific discovery
My principles for machine learning in science
Machine learning [ML] (especially deep learning [DL]) has revolutionized many aspects of our lives, including but not limited to image classification (convolutional neural networks), drug design (graph neural networks), language modelling (transformers), and most recently conversational search (ChatGPT). Given the rapid progress and success of machine learning in industry, there is a lot of interest in applying machine learning methods to solve problems in fundamental science.
I like to categorize applications of ML in science by their goals:
Bypassing expensive computation. This usually involves learning a surrogate model with a smaller computational graph from training data, reducing the computational cost one would otherwise pay to achieve a certain task. A typical example is training an emulator of a simulation.
Pattern discovery. Given the data, one can leverage the flexibility of ML models to find patterns that handcrafted algorithms usually miss, i.e. solve a high-dimensional search problem. A typical example is symbolic regression.
Statistical modelling. In science, prediction is not enough; one should also care about uncertainty. Variational inference, normalizing flows, and likelihood-free inference fall under this category.
Automation. This one is a bit blurry in its definition, since people wrote scripts to automate things long before ML was cool (and still do), and I bet no one would treat that as "ML". However, by integrating the three categories above into a pipeline, there are more tasks one can (semi-)automate, such as paper writing.
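To make the first category concrete, here is a minimal sketch of the surrogate-model idea, with an entirely hypothetical "expensive simulation" (a cheap stand-in function) and a polynomial fit playing the role of the emulator (in practice the surrogate is often a neural network):

```python
import numpy as np

def expensive_simulation(x):
    # Stand-in for a costly simulation (hypothetical example):
    # just a smooth nonlinear function of one input parameter.
    return np.sin(3 * x) * np.exp(-0.5 * x**2)

# Generate a small training set by running the "simulation".
rng = np.random.default_rng(0)
x_train = rng.uniform(-2, 2, 50)
y_train = expensive_simulation(x_train)

# Fit a cheap surrogate: a polynomial here for simplicity.
coeffs = np.polyfit(x_train, y_train, deg=9)
surrogate = np.poly1d(coeffs)

# The surrogate is a much smaller computational graph to evaluate,
# at the price of some approximation error on the domain of interest.
x_test = np.linspace(-2, 2, 200)
max_err = np.max(np.abs(surrogate(x_test) - expensive_simulation(x_test)))
print(f"max surrogate error on [-2, 2]: {max_err:.4f}")
```

Once trained, every downstream query hits the cheap surrogate instead of the simulator, which is exactly the trade an emulator makes.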
If you are coming from a machine learning background, you might already be thinking about all the fancy models one could deploy to revolutionize any one of these problems, or perhaps all of them. However, I would argue there are a number of common requirements present in scientific ML but not necessarily in industrial ML:
Interpretability: in the end, a scientist's job is not only to make theories but also to break them. If a model can only predict, and one cannot explain why when the model fails, it is a rather useless model in my opinion.
Uncertainty: in experimental science, any measurement without an uncertainty is practically useless. As scientists expand the frontier of our knowledge, it is of utmost importance that we clearly state "I don't know" when we should.
Convergence: setting aside the data-analysis part of solving a science problem, in constructing a theoretical model, especially a numerical one, the notion of convergence is crucial. I should be allowed to pay more to get closer to the "truth"! (This is such a capitalistic thing to say.)
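The convergence requirement can be illustrated with a classical numerical method, where paying for more resolution provably buys a better answer. A hypothetical toy example: integrating f(x) = x² on [0, 1] (truth = 1/3) with the composite trapezoid rule, whose error shrinks quadratically with resolution:

```python
import numpy as np

def trapezoid_integral(n):
    # Composite trapezoid rule with n uniform intervals on [0, 1].
    x = np.linspace(0.0, 1.0, n + 1)
    y = x**2
    h = 1.0 / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

truth = 1.0 / 3.0
# Each 10x refinement cuts the error by ~100x (a second-order method):
# that is the "pay more, get closer to the truth" guarantee.
errors = [abs(trapezoid_integral(n) - truth) for n in (10, 100, 1000)]
print(errors)
```

Many ML surrogates offer no analogous knob: spending more compute at inference time does not systematically move the prediction toward the truth, which is exactly what this requirement asks for.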
Lawful good research interests
Normalizing flow-enhanced sampling
Chaotic neutral research interests
I am also known to be an agent of chaos from time to time. While I am not necessarily evil, I do not always follow the mainstream definition of justice.