
About the Challenge

Adversarial machine learning has become a key area of research for improving model robustness and understanding model behavior. While much of the focus has been on domains such as image recognition and natural language processing, adversarial attacks on tabular data, which is common in fields such as medicine and High Energy Physics (HEP), have received far less attention. This challenge addresses that gap by applying adversarial techniques to tabular data. By working on generating adversarial examples and building models resilient to them, participants will explore methods that could enhance robustness in fields such as particle physics. Beyond advancing the development of more reliable machine learning systems, the challenge offers opportunities to improve model explainability and performance under data scarcity, and to inspire new approaches to adversarial robustness across scientific fields.

Challenge Description

Dataset

The dataset for both tasks stems from two separate CMS simulated datasets from the CERN Open Data Portal, TTJets and WWJets, which also serve as the two target classes. Each sample consists of 87 input features representing the transverse momentum pT, the pseudorapidity η, and the azimuthal angle φ of the 30 highest-pT particles of a jet. References to a more thorough description of the dataset, the pre-processing applied, and its physical interpretations are provided within the Codabench competitions (Task 1 and Task 2).
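
To fix ideas, here is a minimal sketch of the expected input layout. The shapes and label convention are illustrative assumptions on our part; the exact column ordering and preprocessing are documented in the Codabench competitions.

    import numpy as np

    # Hypothetical illustration of the input layout only. Each jet is a flat
    # vector of 87 features derived from the pT, eta, and phi of its 30
    # highest-pT constituents (the preprocessing referenced on Codabench
    # accounts for the reduction from the raw 30 * 3 = 90 coordinates to 87).
    N_FEATURES = 87

    rng = np.random.default_rng(seed=0)
    X = rng.normal(size=(1024, N_FEATURES)).astype("float32")  # placeholder jets
    y = rng.integers(0, 2, size=1024)  # assumed labels: 0 = TTJets, 1 = WWJets

    print(X.shape, y.shape)  # (1024, 87) (1024,)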

Model

The reference model used here is based on the TopoDNN model from this paper. A summary of its parameters can be found in the accompanying Codabench competitions (Task 1 and Task 2).
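
For orientation, the following is a minimal Keras sketch of a TopoDNN-style fully connected classifier. The layer sizes are illustrative assumptions, not the reference configuration; the actual parameters are listed on the Codabench pages.

    import tensorflow as tf

    def build_topodnn_like(n_features: int = 87) -> tf.keras.Model:
        """TopoDNN-style dense network. Layer sizes here are assumptions for
        illustration; the reference model's parameters are on Codabench."""
        return tf.keras.Sequential([
            tf.keras.Input(shape=(n_features,)),
            tf.keras.layers.Dense(300, activation="relu"),
            tf.keras.layers.Dense(102, activation="relu"),
            tf.keras.layers.Dense(12, activation="relu"),
            tf.keras.layers.Dense(6, activation="relu"),
            tf.keras.layers.Dense(1, activation="sigmoid"),  # TTJets vs. WWJets
        ])

    model = build_topodnn_like()
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])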

Tasks

This challenge is made up of two separate tasks:

  • Task 1: Participants are asked to generate adversarial examples for a public set of inputs in order to attack a pre-trained model. The goal is to achieve high Fooling Ratios while minimizing the perturbation applied to the individual inputs (see the sketch after this list).
  • Task 2: Participants are asked to build a model that is both robust to adversarial attacks and performs well on clean test samples (a baseline sketch follows below).
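
To make the Task 1 objective concrete, the sketch below generates adversarial examples with the Fast Gradient Sign Method (FGSM), chosen here purely for illustration, and measures a fooling ratio (the fraction of samples whose predicted class flips) together with the mean perturbation size. The official metric definitions are on Codabench; this is only one plausible reading of them.

    import numpy as np
    import tensorflow as tf

    def fgsm_attack(model, x, y, epsilon=0.01):
        """Fast Gradient Sign Method: nudge each input in the direction that
        increases the model's loss; epsilon is the per-feature budget."""
        x = tf.convert_to_tensor(x, dtype=tf.float32)
        y = tf.convert_to_tensor(y, dtype=tf.float32)
        with tf.GradientTape() as tape:
            tape.watch(x)
            preds = tf.squeeze(model(x), axis=-1)
            loss = tf.keras.losses.binary_crossentropy(y, preds)
        grad = tape.gradient(loss, x)
        return (x + epsilon * tf.sign(grad)).numpy()

    def fooling_ratio(model, x_clean, x_adv):
        """Fraction of samples whose predicted class flips under the attack
        (one common definition; the official one is specified on Codabench)."""
        pred_clean = model.predict(x_clean, verbose=0).ravel() > 0.5
        pred_adv = model.predict(x_adv, verbose=0).ravel() > 0.5
        return float(np.mean(pred_clean != pred_adv))

    # Usage with the hypothetical model and placeholder data from the sketches above:
    x_adv = fgsm_attack(model, X, y, epsilon=0.01)
    print("fooling ratio:", fooling_ratio(model, X, x_adv))
    print("mean L2 perturbation:", np.linalg.norm(x_adv - X, axis=1).mean())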

More details about the underlying tasks are provided in the accompanying Codabench competitions (Task 1 and Task 2).
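
For Task 2, one common baseline (our assumption, not a prescribed approach) is adversarial training: each epoch, attack the current model and train on a mix of clean and perturbed samples. A minimal sketch reusing the hypothetical fgsm_attack helper above:

    import numpy as np

    # Minimal adversarial-training loop (sketch). model, X, y, and fgsm_attack
    # are the hypothetical objects defined in the earlier sketches.
    for epoch in range(10):
        x_adv = fgsm_attack(model, X, y, epsilon=0.01)  # attack the current model
        x_mix = np.concatenate([X, x_adv], axis=0)      # clean + adversarial
        y_mix = np.concatenate([y, y], axis=0)
        model.fit(x_mix, y_mix, batch_size=128, epochs=1, verbose=0)

Defenses of this kind typically trade some clean accuracy for robustness, which is precisely the tension Task 2 asks participants to balance.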

Sources

Example Python scripts for creating and gauging submissions, as well as the required data and model, can be found in the accompanying GitHub Repository. Additional data for the Robust Model task can be found on Hugging Face.

Participation

Participants are required to create a Codabench account and to sign up for the task(s) they would like to join: the Adversarial Attack Competition and/or the Model Robustness Competition. They must also register using this Google Form. Finally, participants who wish to be eligible for awards will be asked to provide a technical report.

Awards

For each task, the winning team will be awarded €1,000. Additionally, we will provide free conference registration for one member of each winning team.

Terms and Conditions

  • 1. Eligibility
    • Participants of all backgrounds and levels of expertise are welcome to join the competition.
    • Participants must agree to abide by these terms and conditions upon registration.
    • External data cannot be used.
    • Teams must be composed of at most five people.
    • Participants must publicly release their code so that compliance can be checked, results verified, and any irregularities or inaccuracies investigated.
    • To be eligible for prizes, a brief technical report of 4 pages must be provided.
  • 2. Competition Period
    • The competition consists of a single phase, with start and end dates as stated on the competition page.
  • 3. Intellectual Property
    • Participants retain intellectual property rights to their submissions, but grant the organizers the right to use their submissions for promotional or educational purposes.
  • 4. Code of Conduct
    • Participants are expected to maintain professionalism and respect towards other competitors and organizers.
    • Any form of cheating, plagiarism, or unethical behavior will result in disqualification.
  • 5. Disputes and Appeals
    • In case of any disputes, the decision of the competition organizers shall be final.
  • 6. Changes to Terms & Conditions
    • The organizers reserve the right to make changes to the terms and conditions at any time.
    • Participants will be notified of any significant changes.

Submission

How to Submit

Participants are asked to submit their solutions on Codabench. The challenge is split into two separate tasks: Task 1 - Adversarial Attack and Task 2 - Robust Model. We will also provide a website for participants to submit their technical reports.

Timeline

The challenge timeline is as follows:

  • Start of competition: Friday, 16th May 2025, 11:59 PM UTC
  • End of competition: Monday, 23rd June 2025, 11:59 PM UTC
  • Publish results: Tuesday, 8th July 2025, 11:59 PM UTC
  • Written report submission deadline: Monday, 14th July 2025, 11:59 PM UTC
  • Camera-ready deadline: Wednesday, 6th August 2025, 11:59 PM UTC
  • Winners present solutions at ECML/PKDD 2025: Monday, 15th Sept. - Friday, 19th Sept. 2025

Winning Teams

The winning teams for both tasks will be announced here once the competition concludes. Until then, both competitions display real-time leaderboards.

Organizers

Organizer 1

Lucie Flek

University of Bonn, Germany

Organizer 2

Akbar Karimi

University of Bonn, Germany

Organizer 3

Timo Saala

University of Bonn, Germany

Organizer 4

Matthias Schott

University of Bonn, Germany

Contact

For any questions, feel free to reach us at the following address: collidingadversaries@googlegroups.com