
I promise you, in particle physics it is seen as somewhat suspect. But a very careful analysis with machine learning costs a couple of PhD-years: it would be even more suspect to forgo an improvement on a $400M experiment.

As you say, it's easy for a neural network to pick up on simulation artifacts, rather than real physical features. The appendix of the original paper [1] explains how they quantify the failure modes of this approach. One kind of cool approach was training an adversarial network to corrupt simulated backgrounds to make them signal-like. The details are sparse, but it sounds like fooling the classifier required a level of corruption that would have been noticeable when comparing the simulation to data.
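If you're curious what that adversarial check looks like mechanically, here's a toy sketch in PyTorch. This is not the collaboration's actual setup: the architectures, the loss, and the penalty weight are all made up for illustration. The idea is just that a "corruptor" network learns the smallest feature shifts that make simulated backgrounds look signal-like to a frozen classifier, and you then compare the size of those shifts to known simulation/data differences.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    n_features = 10

    # Frozen stand-in for the trained signal/background classifier under attack.
    classifier = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1))
    for p in classifier.parameters():
        p.requires_grad_(False)

    # "Corruptor": learns small shifts of simulated background events that
    # push them toward signal-like classifier scores.
    corruptor = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_features))
    opt = torch.optim.Adam(corruptor.parameters(), lr=1e-3)

    background = torch.randn(512, n_features)  # toy simulated background events

    for step in range(200):
        delta = corruptor(background)
        score = classifier(background + delta)
        # Maximize the signal score while penalizing large corruptions; the
        # penalty weight (0.1, arbitrary here) sets the corruption budget.
        loss = -score.mean() + 0.1 * delta.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # If the shifts needed to fool the classifier are larger than known
    # simulation/data differences, the classifier is probably not just
    # exploiting fragile simulation artifacts.
    print(f"mean |shift| per feature: {corruptor(background).abs().mean().item():.3f}")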

Can you do this wrong? Sure, but you can also go wrong with the feature engineering that particle physicists have used in cases where they don't trust machine learning. You won't really know if the methodology is sound unless you:

- read the paper

- follow some references (the paper is a short-form "Physical Review Letter", which is too short to allow any meaningful review from the text alone)

- hopefully find some more detailed description of what they did

The last point is often the difficult one for particle physics collaborations; they are generally a bit protective of their data and internal workings.

[1]: https://arxiv.org/abs/2403.02516
