Artificial intelligence is progressing ever faster, enabling new applications and results that would not have been possible only a few years ago. At the same time, hardware security is becoming increasingly important for embedded systems, as hardware devices implementing both cryptographic functions and AI algorithms are at the core of security systems.
In recent years, the connection between AI and hardware security has become more prominent and important. This is a natural consequence of the need to offer improved security in a more automated way. Yet, despite all the accomplishments and progress in this new field dealing with the interplay of AI and HW security, the process is not without its challenges. Examples of such challenges are the lack of explainability of results and unclear design choices in the selection of AI techniques.
With this workshop, we aim to connect researchers from both AI and security, academia and industry, not only to increase the understanding of AI in hardware security but also to explore new applications where such techniques could bring improved security. We hope this workshop will become a standard event for researchers interested in AI and HW security to share their ideas and improve the state of the art in this challenging field.
We encourage researchers working on all aspects of AI and HW security to take this opportunity to share their work at AIHWS and participate in the discussions.
Authors are invited to submit their papers through the EasyChair submission system at
https://easychair.org/conferences/?conf=aihws2025.
Submitted papers must be written in English and be anonymous, as we follow the double-anonymized review process, with no author names, affiliations, acknowledgments, or any identifying citations. All submissions must follow the original
LNCS format with a page limit of 18 pages, including references and possible appendices. Papers should be submitted electronically in PDF format. The post-proceedings will be published in Springer’s LNCS series.
Every accepted paper must have at least one author registered for the workshop.
There will be an ACNS best workshop paper award (with 500 EUR prize sponsored by Springer), to be selected from the accepted papers of all workshops.
EXTENDED submission deadline!
Workshop paper submission deadline: March 23, 2025 (previously March 7, 2025)
Workshop paper notification: April 23, 2025
Camera-ready papers for pre-proceedings: May 12, 2025
Workshop date: June 25, 2025
(in parallel with the main conference)
The evolution of machine learning (ML) as an enabling technology has ushered in a new era of possibilities, including its integration into security-critical applications. AI-driven techniques are increasingly being leveraged to enhance threat detection, automate vulnerability assessments, and strengthen defense mechanisms in complex and dynamic systems. At the same time, federated learning (FL) has emerged as a pivotal advancement in distributed intelligence, enabling privacy-preserving collaboration across sensitive domains such as healthcare, finance, and autonomous systems.
However, alongside these advancements, both FL and broader ML applications introduce a range of new security challenges. Poisoning attacks, adversarial manipulations, and inference threats undermine the integrity, confidentiality, and trustworthiness of ML models. Addressing these risks demands a holistic approach that incorporates security-by-design principles, the use of AI for proactive defense, and a thorough understanding of vulnerabilities inherent to ML-enhanced security mechanisms.
In this talk, we explore the full lifecycle of ML model development and deployment through a security lens, analyzing attack vectors and defense strategies at each stage. We highlight emerging threats, present current research efforts in building adversarially resilient systems, and outline key challenges in developing trustworthy, robust, and attack-resilient ML frameworks.
Dr. Alexandra Dmitrienko is an esteemed Professor at the University of Wuerzburg in Germany and the head of the Secure Software Systems research group. With a distinguished academic background, Dr. Dmitrienko earned her PhD in Security and Information Technology with summa cum laude distinction from TU Darmstadt in 2015. Her doctoral research focused on enhancing the security and privacy of mobile systems and applications, earning recognition from both academic consortia and industrial organizations such as the European Research Consortium for Informatics and Mathematics (ERCIM STM WG 2016 Award) and Intel (Intel Doctoral Student Honor Award, 2013).
Dr. Dmitrienko's academic journey encompasses a wealth of experience garnered from prominent security institutions in Germany and Switzerland. Prior to assuming her current faculty position in 2018, she acquired expertise at institutions including Ruhr-University Bochum (2008-2011), the Fraunhofer Institute for Information Security in Darmstadt (2011-2015), and ETH Zurich (2016-2017). Throughout her career, Dr. Dmitrienko's research interests have spanned diverse domains within cybersecurity, including software security, systems security and privacy, and the security and privacy of mobile, cyber-physical, and distributed systems. Today, her research also largely focuses on the security and privacy aspects of machine learning methods.
Over the past six years, Google has been working with the open source community to build OpenTitan, the first open source silicon Root of Trust chip. This silicon will be the first broadly used Root of Trust chip at Google with a fully transparent design. This talk will review key elements that make OpenTitan a success, including the open source collaboration, the progressive hardening, and the PQC support, and will highlight contributions from research.
Johann Heyszl manages a team contributing to Google's larger datacenter security and integrity efforts. He leads the security efforts in the OpenTitan project for Google.
The program starts at 09:00 am, CEST (Central European Summer Time: UTC + 2h).
TIME (CEST, UTC+2h) | SESSION/TITLE |
---|---|
09:00 - 09:10 | Opening remarks |
09:10 - 10:10 | Keynote talk 1: Trustworthy AI: Hype or Hope? Challenges of Building Resilient and Secure Machine Learning Systems (Alexandra Dmitrienko, University of Wuerzburg, Germany) |
10:10 - 10:30 | NETLAM: An Automated LLM Framework to Generate and Evaluate Stealthy Hardware Trojans (Tishya Sarma Sarkar, Kislay Arya, Siddhartha Chowdhury, Upasana Mandal, Shubhi Shukla, Sarani Bhattacharya and Debdeep Mukhopadhyay) |
10:30 - 11:10 | Coffee break |
11:10 - 12:30 | Session 1: Deep Learning Based Side Channel Analysis |
| Attacking Single-Cycle Ciphers on Modern FPGAs: featuring Explainable Deep Learning (Mustafa Khairallah and Trevor Yap) |
| Can KANs Do It? Toward Interpretable Deep Learning-based Side-channel Analysis (Kota Yoshida, Sengim Karayalcin and Stjepan Picek) |
| Hamming Weight-based Side Channel Analysis of HLS Kyber Hardware using Neural Networks (Alexander Kharitonov, Tarick Welling, Maël Gay and Ilia Polian) |
| Jump, It Is Easy: JumpReLU Activation Function in Deep Learning-based Side-channel Analysis (Abraham Basurto-Becerra, Azade Rezaeezade and Stjepan Picek) |
12:30 - 14:00 | Lunch break |
14:00 - 15:00 | Keynote talk 2: OpenTitan: Landing the Open Source Root of Trust in Production (Johann Heyszl, Google, Germany) |
15:00 - 15:20 | Session 2: Masking |
| Let's Share a Secret: Share-Reduced Design of M&M for the AES S-box (Haruka Hirata, Daiki Miyahara, Yuko Hara, Kazuo Sakiyama and Yang Li) |
15:20 - 16:00 | Coffee break |
16:00 - 17:00 | Session 3: Attacks on AI & Faults |
| Arithmetic Masking Countermeasure to Mitigate Side-channel-based Model Extraction Attack on DNN Accelerator (Hirokatsu Yamasaki, Kota Yoshida, Yuta Fukuda and Takeshi Fujino) |
| Investigation of EM Fault Injection on Emerging Lightweight Neural Network Hardware (Bhanprakash Goswami, Reejit Chetry, Chithambara Moorthii J and Manan Suri) |
| μScan: Deep Learning Detection of Faulty Micro-architecture States and Patterns from Scan-Chain Data (Dillibabu Shanmugam, Zhenyuan Liu, Andrew Malnicof and Patrick Schaumont) |
17:00 - 17:05 | Closing remarks |
Gorka Abad, Radboud University & Ikerlan Technology Research Centre
Nikolaos Alachiotis, University of Twente
Debapriya Basu Roy, IIT Kanpur
Durba Chatterjee, Radboud University
Łukasz Chmielewski, Masaryk University
Dirmanto Jap, Nanyang Technological University
Keerthi K, Nanyang Technological University
Maria Méndez Real, UBS
Kostas Papagiannopoulos, Radboud University
Stjepan Picek, Radboud University
Sayandeep Saha, Indian Institute of Technology Bombay
Marc Stöttinger, RheinMain University of Applied Science
Vincent Verneuil, NXP Semiconductors
Lichao Wu, TU Darmstadt
Kota Yoshida, Ritsumeikan University
Marina Krček, Radboud University