
About the Challenge

Adversarial machine learning has become a key area of research for improving model robustness and understanding model behavior. While much of the focus has been on domains like image recognition and natural language processing, adversarial attacks on tabular data, common in fields such as medicine and High Energy Physics (HEP), have received far less attention. This challenge seeks to address that gap by applying adversarial techniques to tabular data. By focusing on tasks centered on generating adversarial examples and building models resilient to them, participants will explore methods that could enhance robustness in fields such as particle physics. Beyond advancing the development of more reliable machine learning systems, the challenge also offers opportunities to improve model explainability and performance under data scarcity, and to inspire new approaches to adversarial robustness across scientific fields.

Challenge Description

Dataset

The dataset for both tasks stems from two separate CMS simulated datasets from the CERN Open Data Portal, TTJets and WWJets, which also serve as the two target classes. The dataset consists of a total of 87 input features, representing the Transverse Momentum, the Pseudo-Rapidity η, and the Azimuthal Angle φ of the 30 highest-Transverse-Momentum particles of a jet. A more thorough description of the dataset, the pre-processing applied, and its physical interpretation will be provided within the CodaBench competition.

Model

The reference model used here is based on the TopoDNN model from this paper. A visualization of the model and a summary of its parameters will be provided within the CodaBench competition.

Tasks

This challenge is made up of two separate tasks:

  • Task 1: Participants are asked to generate adversarial examples for a public set of inputs in order to attack a pre-trained model. The goal is to achieve a high Fooling Ratio while keeping the perturbations of the individual inputs small.
  • Task 2: Participants are asked to build a model that is both robust to adversarial attacks and performs well on clean test samples.
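To illustrate the Task 1 objective, the sketch below runs a gradient-sign (FGSM-style) attack on a toy logistic classifier and computes the fooling ratio as the fraction of originally correct predictions that the perturbation flips. The model, feature dimension, and epsilon are illustrative assumptions only; they are not the challenge's TopoDNN setup or official metric definition.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(X, y, w, b, eps):
    """Perturb each input by eps * sign of the input gradient of the BCE loss."""
    p = sigmoid(X @ w + b)                     # predicted probability of class 1
    grad = (p - y)[:, None] * w[None, :]       # d(BCE)/dX for a logistic model
    return X + eps * np.sign(grad)

def fooling_ratio(X, X_adv, y, w, b):
    """Fraction of originally correct predictions flipped by the attack."""
    pred = (sigmoid(X @ w + b) > 0.5).astype(int)
    pred_adv = (sigmoid(X_adv @ w + b) > 0.5).astype(int)
    correct = pred == y
    if correct.sum() == 0:
        return 0.0
    return float(((pred_adv != pred) & correct).sum() / correct.sum())

# Toy setup: 8 features (not the challenge's 87), random linear model.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
X = rng.normal(size=(200, 8))
y = (sigmoid(X @ w + b) > 0.5).astype(int)     # labels match the clean predictions
X_adv = fgsm_attack(X, y, w, b, eps=0.5)
print(f"fooling ratio: {fooling_ratio(X, X_adv, y, w, b):.2f}")
```

In practice the challenge additionally penalizes large perturbations, so the attack budget (here the single `eps` value) has to be traded off against the fooling ratio.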

More details about the underlying tasks will be in the accompanying CodaBench competition.
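For Task 2, a common baseline for robustness is adversarial training: augmenting the training data with attacked copies of the samples. The sketch below does this for a toy logistic model trained with plain gradient descent; the data, epsilon, and hyperparameters are illustrative assumptions, not the challenge configuration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(X, y, w, b, eps):
    # Perturb inputs along the sign of the input gradient of the BCE loss.
    p = sigmoid(X @ w + b)
    return X + eps * np.sign((p - y)[:, None] * w[None, :])

def adversarial_train(X, y, eps=0.3, lr=0.1, epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        # Each epoch, train on clean samples plus their adversarial copies.
        X_adv = fgsm(X, y, w, b, eps)
        X_mix = np.vstack([X, X_adv])
        y_mix = np.concatenate([y, y])
        p = sigmoid(X_mix @ w + b)
        g = p - y_mix                          # gradient of mean BCE w.r.t. logits
        w -= lr * X_mix.T @ g / len(y_mix)
        b -= lr * g.mean()
    return w, b

# Toy linearly separable data, 5 features.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
true_w = np.array([1.5, -2.0, 1.0, 0.5, -1.0])
y = (X @ true_w > 0).astype(int)
w, b = adversarial_train(X, y)
acc = float(((sigmoid(X @ w + b) > 0.5).astype(int) == y).mean())
print(f"clean accuracy: {acc:.2f}")
```

The Task 2 trade-off is visible here in miniature: training on perturbed copies buys robustness but can cost clean accuracy, and submissions are scored on both.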

Sources

We will provide baseline code for participants in a separate GitHub repository, which we will reference here. We will also upload the required datasets (train, validation, and test) for both tasks to a separate website and reference them here.

Participation

Participants will be required to create a CodaBench account to participate in the challenge. Additionally, participants will be asked to provide a report, if they wish to be eligible for awards.

Awards

For each of the two tasks, the winning team will be awarded €1,000. Additionally, we aim to provide free conference registration for one member of each winning team.

Terms and Conditions

TBD.

Submission

How to Submit

Code: Submissions will be made on CodaBench.
Report: We will also provide a website for participants to submit their reports.

Timeline

The challenge timeline is as follows:

  • Start of competition: Friday, 23rd May 2025, 11:59 PM UTC
  • Phase 1 (Development Phase) and Phase 2 (Evaluation Phase)
  • End of competition: Monday, 23rd June 2025, 11:59 PM UTC
  • Publication of results: Tuesday, 8th July 2025, 11:59 PM UTC
  • Written report submission deadline: Monday, 14th July 2025, 11:59 PM UTC
  • Camera-ready deadline: Wednesday, 6th August 2025, 11:59 PM UTC
  • Winners present solutions at ECML/PKDD 2025: Monday, 15th Sept. - Friday, 19th Sept. 2025

Winning Teams

The winning teams for both tasks will be announced here.

Organizers

Organizer 1

Lucie Flek

University of Bonn, Germany

Organizer 2

Akbar Karimi

University of Bonn, Germany

Organizer 3

Timo Saala

University of Bonn, Germany

Organizer 4

Matthias Schott

University of Bonn, Germany

Contact

For any questions, feel free to reach us at the following address: collidingadversaries@googlegroups.com