Last modified: 2025-09-14
As part of the upcoming Micro2025 Workshop, we are launching an AI model benchmarking competition using the AI-BMT (AI Benchmarking Tool) platform and the DeepX M1 SoC. The primary goal is to foster innovation in developing deep learning models optimized for edge AI hardware by evaluating them under realistic constraints.
Participants are tasked with developing deep learning models that are highly optimized for the DeepX M1 NPU. The evaluation focuses on model efficiency on actual edge hardware, measuring both accuracy and inference speed.
1st place: $2,000
2nd place: $1,000
3rd place (5 winners): $500 each
To assist participants, we will provide ONNX models, corresponding compiled DXNN models, compiler configuration JSON files, and the compiler itself. The provided DXNN models have been verified to run efficiently on the DeepX M1 platform.
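Before running the compiler, it may help to sanity-check your ONNX model on the host. Below is a minimal sketch using the onnxruntime package; the file name model.onnx and the input shape are placeholders, and this does not exercise the DXNN compiler or the M1 NPU itself.

```python
import numpy as np
import onnxruntime as ort

# Placeholder file name; substitute your own exported model.
session = ort.InferenceSession("model.onnx")

# Inspect the declared input (classification models typically expect NCHW).
inp = session.get_inputs()[0]
print("input:", inp.name, inp.shape, inp.type)

# One forward pass on random data; adjust the shape to match your model.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {inp.name: dummy})
print("output shape:", outputs[0].shape)
```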
For downloading reference models, please visit the Reference Models page.
For details on how to evaluate your models, please refer to the Evaluation Guide page.
Participants must complete registration between September 15 and September 30 through our official platform. To sign up, visit the Register page.
Submit your final model by October 8. The top 10 teams or individuals, selected based on accuracy and latency, will be notified by October 12.
Only one final model may be submitted per team or individual. Submit your final model through the Final Submission page.
If you have any questions regarding the competition, submission process, or evaluation criteria, please refer to the Q&A section. Responses are typically provided within 2 business days.
Submitted models will be evaluated based on a combined score:
Qualification Criteria: Only models with Accuracy(Top1) ≥ 70% and Latency ≤ 10ms are included in the score calculation.
Score Formula:
A = Accuracy(Top1) / 100
B = 1 - (Latency(ms) / 10)
Score = (0.6 × A + 0.4 × B) × 100
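For illustration, here is a small Python sketch of the score calculation under these rules; the example accuracy and latency values are hypothetical.

```python
def competition_score(top1_accuracy_pct: float, latency_ms: float) -> float | None:
    """Combined score per the published formula.

    Returns None if the model does not meet the qualification criteria
    (Top-1 accuracy >= 70% and latency <= 10 ms).
    """
    if top1_accuracy_pct < 70.0 or latency_ms > 10.0:
        return None  # not included in the score calculation
    a = top1_accuracy_pct / 100.0  # accuracy term, in [0.7, 1]
    b = 1.0 - (latency_ms / 10.0)  # latency term, in [0, 1]
    return (0.6 * a + 0.4 * b) * 100.0

# Hypothetical example: 78% Top-1 at 4 ms latency.
# A = 0.78, B = 0.6, Score = (0.6 * 0.78 + 0.4 * 0.6) * 100 = 70.8
print(competition_score(78.0, 4.0))  # -> 70.8
```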
Final model submissions must follow a standardized preprocessing pipeline and will be evaluated on actual DeepX M1 SoC hardware. Top-performing teams will be invited to the workshop for on-site evaluation and may submit improved models for final ranking. Our support staff will assist with the on-site evaluation process.
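The authoritative preprocessing steps are defined in the Evaluation Guide. Purely as an illustration of what such a pipeline typically involves, the sketch below shows a common ImageNet-style flow using Pillow and NumPy; all resize dimensions and normalization constants here are conventional defaults, not the competition's mandated values.

```python
import numpy as np
from PIL import Image

# Common ImageNet normalization constants (illustrative, not official).
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(path: str) -> np.ndarray:
    img = Image.open(path).convert("RGB")
    img = img.resize((256, 256), Image.BILINEAR)  # resize to 256x256 (simplified)
    left = (256 - 224) // 2
    img = img.crop((left, left, left + 224, left + 224))  # center crop to 224x224
    x = np.asarray(img, dtype=np.float32) / 255.0  # scale to [0, 1]
    x = (x - MEAN) / STD                           # per-channel normalization
    return x.transpose(2, 0, 1)[None, ...]         # HWC -> NCHW with batch dim
```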
Regular Registration: Sep 15 – Sep 30
Competition Period: Sep 15 – Oct 8
Final Submission Deadline: Oct 8
Finalists Announcement: Oct 12
On-site Evaluation & Award: Oct 19