The 2025 Mechanical MNIST Challenge

Background

The original MNIST dataset set the standard for benchmarking classification algorithms with its curated set of handwritten digits. Inspired by this idea, Emma Lejeune (2020) created the Mechanical MNIST dataset, replacing pixels with material properties and digits with mechanically simulated deformation fields.

Building on this, we provide experimental data from 3D-printed MNIST-digit-inspired samples tested under mechanical load and analyzed via digital image correlation (DIC). The goal is to bridge data-driven modeling and experimental mechanics through two focused challenges.

Data Origin

We fabricated all samples using a Stratasys PolyJet J750 Digital Anatomy 3D printer (Stratasys, Eden Prairie, MN, USA), with geometries based on MNIST digits. Each sample measured 40 × 40 × 2 mm and contained a stiff inclusion embedded in a softer surrounding matrix. To enable mechanical testing, we incorporated rigid clamps into the printed design.

After printing, we applied a speckle pattern for DIC and mounted each sample on a uniaxial tensile tester (Instron, Norwood, MA, USA). We extended each sample until failure while recording force and displacement using the Instron software. A custom LabVIEW program (National Instruments, Austin, TX, USA) captured synchronized images at 5 Hz during testing. We processed these images in DaVis (LaVision, Göttingen, Germany) to extract full-field displacement and strain data.

The Challenges

Challenge 1: Operator Learning (Forward Problem)

Goal: Predict force-displacement curves and full-field strain maps using sample metadata and boundary conditions.

For training, participants will receive 90 data sets comprising full-field 2D DIC data (189 frames per sample), MNIST inclusion geometry, material annotations (matrix and inclusion), and prescribed boundary forces and displacements.

For testing, we ask each participant to submit a Docker container so that we can evaluate their submission against 10 in-distribution data sets, 10 out-of-distribution data sets (geometries not drawn from the MNIST set), and 10 homogeneous material samples.
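To make the forward task concrete, a Challenge 1 submission could expose an interface along the following lines. This is a minimal sketch only: the function name, array shapes, and the constant-stiffness baseline are our own illustrative assumptions, not part of the challenge specification.

```python
import numpy as np

def predict_forward(inclusion_mask, boundary_displacement):
    """Toy forward-model baseline (hypothetical interface).

    inclusion_mask        : (H, W) binary map of the stiff inclusion
    boundary_displacement : (T,) prescribed grip displacement per frame

    Returns a (T,) force curve and a (T, H, W) strain-field stack.
    """
    stiff_fraction = inclusion_mask.mean()  # crude proxy for sample stiffness
    # Linear-elastic placeholder: force scales with displacement and stiffness.
    force = (1.0 + stiff_fraction) * boundary_displacement
    # Uniform strain field per frame (no strain localization modeled).
    strain = boundary_displacement[:, None, None] * np.ones_like(
        inclusion_mask, dtype=float)[None, :, :]
    return force, strain

# Example call mirroring the 189 frames recorded per sample
mask = np.zeros((64, 64))
mask[20:40, 20:40] = 1.0                    # hypothetical digit-shaped inclusion
disp = np.linspace(0.0, 2.0, 189)
force, strain = predict_forward(mask, disp)
```

A real entry would replace the placeholder physics with a learned operator (e.g., a neural operator trained on the 90 provided samples), but the input/output contract would look similar.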

Challenge 2: Inverse Learning (Inverse Problem)

Goal: Infer hidden geometry and material distribution from mechanical response.

For training, participants will receive the same 90 data sets as in Challenge 1, here comprising force-displacement curves and full-field 2D DIC strain maps.

For testing, we will evaluate the participants' submissions against the same 10 in-distribution, 10 out-of-distribution, and 10 homogeneous samples.
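For intuition, a naive inverse-problem baseline might simply threshold the measured strain field to recover the inclusion geometry, exploiting the fact that a stiff inclusion strains less than the softer surrounding matrix. The function below is a hypothetical sketch under that assumption, not a reference solution or part of the challenge specification.

```python
import numpy as np

def recover_inclusion(strain_map, quantile=0.2):
    """Label pixels whose strain falls below a quantile as 'stiff inclusion'.

    strain_map : (H, W) full-field strain from DIC
    Returns an (H, W) binary mask (1 = inferred inclusion).
    """
    threshold = np.quantile(strain_map, quantile)
    return (strain_map < threshold).astype(np.uint8)

# Synthetic check: a low-strain square embedded in a high-strain matrix
field = np.full((64, 64), 0.05)     # matrix strain
field[24:40, 24:40] = 0.01          # stiff inclusion deforms less
mask = recover_inclusion(field)
```

Competitive entries would presumably replace this heuristic with a learned inverse map or an optimization loop over a forward model, but the baseline illustrates the underlying mechanical signal.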

Data Access

All datasets (STL files, force-displacement curves, strain maps) are available at: HERE

Challenge Evaluation & Distribution

We will ask each participant to submit a Docker container so we can evaluate submissions without revealing our test data. Based on our predefined success criteria, we will evaluate and rank each submission. Please note that our initial findings will be published blinded in a shared publication. Each participant will be a co-author on this first publication. Subsequently, we invite each participant to contribute to a special issue in which they discuss the details of their approach and evaluate its performance against the now-accessible test data.

Submission

To evaluate model performance, we ask participants to provide a Docker image of their model and to upload all necessary files as a single *.zip file. Instructions for creating a Docker image are provided in the data files at the same link as above: LINK.

Organizers & Contact

  • Manuel Rausch (The University of Texas at Austin - manuel.rausch@utexas.edu)

  • Adrian Buganza Tepole (Columbia University - ab6035@columbia.edu)

  • Jan Fuhg (The University of Texas at Austin - jan.fuhg@utexas.edu)

  • Francisco Sahli Costabal (Pontificia Universidad Católica de Chile - fsc@ing.puc.cl)

Timeline

  • July 2025: Challenge launch & dataset release

  • January 2026: Submission portal opens

  • June 30, 2026: Submission deadline

  • September 2026: Blinded evaluation

  • October 2026: Zoom session with submitters

  • December 2026: Paper #1 (blinded results)