Hardware Verification Programme
A six-day programme in Cambridge for experienced hardware engineers to work on AI verification, tackling trusted execution, tamper resistance, confidential computing, and auditable infrastructure for advanced AI systems.
August 2026
Cambridge, UK
10 engineers
Fully funded
How it works
Mornings feature talks and workshops from researchers and engineers working on hardware verification. Afternoons are for scoping solutions to open problems in AI verification. On Day 6, you present a technical proposal to peers and practitioners.
Context loading
Before the week, you’ll receive reading lists and short talks covering why AI verification matters and the open problems in the space, so everyone arrives with a shared understanding.
The landscape
Talks on the geopolitical context, hardware-enabled governance mechanisms, and what verification needs to look like. You’ll also begin scoping your end-of-week project and finding collaborators.
Technical deep dive
Hardware security, anti-tamper, confidential computing, and specific verification proposals. Sessions led by people building in this space.
The organisations
Organisations working on AI hardware verification present what they’re building, where they’re stuck, and what they need.
Presentations
Present your scoping document and get feedback from peers, mentors, and organisation representatives.
Engineers with hardware depth
We’re looking for experienced hardware engineers who want to work on some of the biggest open problems in AI verification. The programme gets you up to speed on verification, AI safety, and how you can contribute.
You’ve spent five or more years in hardware engineering — electronics design, firmware, FPGAs, embedded systems, secure hardware, or cryptographic implementations.
You'll leave with context on the verification landscape, a scoped technical problem, and direct relationships with organisations working in this space.
Interested?
Applications open in 2026. Leave your details and we’ll be in touch.
Small cohort. Fully funded. Cambridge-based.
Questions? hello@cambridgeaisafety.org