Cambridge, 17-22 August 2026

Hardware Assurance Programme

Six days in Cambridge to build the mechanisms that let third parties verify what AI chips are computing, where they are operating, and at what scale.

Apply now

Applications close 7 June 2026, 23:59 Anywhere on Earth.

When

17-22 August 2026

Where

Cambridge, UK

Programme size

Cohort of twenty

Support

£1,500 stipend + expenses

What this is for

The problem

AI governance increasingly depends on claims about how compute is used. Voluntary safety commitments from frontier labs and bilateral discussions between major powers turn on questions about where training runs happen, what hardware is involved, and whether deployed systems match evaluated versions. Today, many of those claims are difficult to check independently.

Without practical verification tools, those claims rest on trust alone; the technical infrastructure for independent verification doesn’t yet exist.

The work

Hardware assurance builds mechanisms for verifying what AI hardware is doing, where it is, and at what scale, while preserving confidentiality where needed. This includes tamper-evident enclosures, telemetry and network monitoring, attestation protocols, and cryptographic approaches to inference verification.

The technical work to make these systems real is undersupplied. This programme is for the engineers and researchers who can change that. The field is still small enough that a handful of talented people can meaningfully shape its direction.

How it works

Six days of talks, working sessions, and project work with researchers and practitioners building hardware assurance infrastructure. By Day 6, you present what you’ve worked on.

A presentation at Meridian, Cambridge

Pre-work

Preparation

Structured preparation so the cohort starts Day 1 with shared context. Four weeks out, an opening call and core reading list. Two weeks out, small-group discussions to work through the readings and surface project interests.

Day 1

Arrive and orient

Welcome and introductions. Meet the residents and the cohort.

Day 2

Threat models and proposals

Threat models for chip-level verification, including how operators might try to defeat it. The proposals that have emerged so far, from compute accounting to location verification.

Day 3

Technical deep dive I

Tamper resistance, confidential computing, root-of-trust primitives. Afternoons are project work plus drop-in office hours with the residents.

Day 4

Technical deep dive II

Inference verification, attestation protocols, and cryptographic approaches including zero-knowledge proofs. Afternoons are project work plus one-to-one project reviews and feedback from the residents.

Day 5

Who is building what

Organisations building in this space present their current work, the technical bottlenecks they are facing, and what they need. Time reserved for direct conversations about collaboration and hiring.

Day 6

Present and plan what comes next

Present your work to peers, residents, and representatives from organisations building in the space.

The afternoon turns to what comes after the week: how to apply for Coefficient Giving’s Career Development and Transition Funding if you’re considering moving into the field, BlueDot Impact’s Rapid Grants (up to $10,000) for kicking off your own project, and time to plan next steps with the organisations represented in the room.

Who you'll work with

Residents are domain experts in hardware assurance who join the programme to give talks, run technical sessions, and provide feedback on participant projects.

Who should apply

Hardware assurance pulls from across the technical stack. The cohort is a mix of engineers, researchers, and PhD students with depth in at least one of the areas below.

Silicon and firmware

Tamper-evident hardware, silicon-level attestation, and firmware that participates in verification protocols.

Backgrounds in RTL design, ASIC or SoC development, FPGAs, or embedded systems.

Hardware security

Mechanisms that hold up under physical attack and adversarial probing.

Backgrounds in secure hardware, side-channel countermeasures, or tamper protection.

Cryptography and formal methods

Making proof-of-inference practical, formally verifying attestation protocols, building zero-knowledge tooling.

Backgrounds in applied cryptography, zero-knowledge proof systems, or formal methods.

Trusted execution

Designing how labs and chips can prove what they’re running to a third party.

Backgrounds in TEEs, attestation, secure boot, or root-of-trust systems.

ML systems

Figuring out what’s actually verifiable in real ML workloads, and how.

Backgrounds in distributed training (NCCL, Megatron, DeepSpeed) or large-scale inference.

Networking

Network-level instrumentation that detects training-scale workloads from outside the rack.

Backgrounds in high-performance networking, line-rate packet processing, or deep packet inspection.

Open problems

Some examples of open problems. Participants spend the week tackling one and present their work on Day 6.

01

Tamper-evident enclosures for AI hardware

High-end AI accelerators can dissipate on the order of 1,000 W or more and need liquid cooling. How do you seal one in a tamper-evident enclosure while keeping it cool? Insights from nuclear monitoring and banking HSMs may transfer, but this hasn’t been done for GPUs. Without it, no third party can verify what is running on a chip cluster, regardless of what the operator claims.

02

Mutually trusted attestation hardware

For international agreements, both parties need to trust the attestation hardware. The prover needs confidence it won’t exfiltrate data. The verifier needs confidence the attestations are genuine. How do you build hardware that highly sceptical actors on both sides can trust?
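For intuition, here is a minimal challenge-response sketch in Python. Everything in it is illustrative: the function names are hypothetical, and a shared HMAC key stands in for the asymmetric device key that a real silicon root of trust would hold. It shows mechanically what an attestation proves; the open problem above is building hardware that both sides trust to guard the key and take the measurement honestly.

```python
import hashlib
import hmac
import os

# Toy attestation exchange. A real design would use an asymmetric
# device key fused into silicon, not a shared HMAC key, and the
# measurement would come from a hardware root of trust rather than
# a function argument.
DEVICE_KEY = os.urandom(32)  # stands in for a key provisioned at manufacture
APPROVED = {hashlib.sha256(b"evaluated-firmware-v1").hexdigest()}

def prover_attest(nonce: bytes, firmware: bytes) -> tuple[str, bytes]:
    """Device side: measure the running image and sign (nonce, measurement)."""
    measurement = hashlib.sha256(firmware).hexdigest()
    tag = hmac.new(DEVICE_KEY, nonce + measurement.encode(), hashlib.sha256).digest()
    return measurement, tag

def verifier_check(nonce: bytes, measurement: str, tag: bytes) -> bool:
    """Verifier side: the fresh nonce rules out replay, the tag binds the
    claim to the device key, and the allowlist binds it to an evaluated image."""
    expected = hmac.new(DEVICE_KEY, nonce + measurement.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected) and measurement in APPROVED

nonce = os.urandom(16)
measurement, tag = prover_attest(nonce, b"evaluated-firmware-v1")
assert verifier_check(nonce, measurement, tag)
```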

03

Inference verification at scale

Can you verify that a deployed AI model matches the one that was safety-tested, without disrupting the workload? Proposed approaches include network taps with randomised recomputation and input-output fingerprinting; both need prototyping and red-teaming.
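To make those two ideas concrete, here is a minimal Python sketch. `reference_fn` is a hypothetical stand-in for the evaluated model, and the exact-match check assumes deterministic decoding; neither is prescribed by any existing system.

```python
import hashlib
import json
import random
from typing import Callable, Iterable

def transcript_digest(pairs: Iterable[tuple[str, str]]) -> str:
    """Fold a log of (input, output) pairs into one fingerprint that two
    parties can compare without exchanging the raw traffic."""
    h = hashlib.sha256()
    for x, y in pairs:
        h.update(json.dumps([x, y]).encode())
    return h.hexdigest()

def randomised_recompute(tap_log: list[tuple[str, str]],
                         reference_fn: Callable[[str], str],
                         sample_rate: float = 0.05,
                         seed: int = 0) -> list[str]:
    """Re-run a random subset of tapped pairs on the evaluated reference
    model and return the inputs whose outputs diverge. Assumes
    deterministic decoding; sampled generation would need a looser
    check (e.g. comparing logprobs) instead of exact string match."""
    rng = random.Random(seed)
    return [x for x, y in tap_log
            if rng.random() < sample_rate and reference_fn(x) != y]
```

Even a single confirmed mismatch on a spot check is strong evidence of a swapped model; the harder question is running checks like this at production scale without leaking user traffic.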

04

Workload classification from hardware telemetry

Can you determine what a chip is doing from external signals like power draw, memory access patterns, and network traffic, without seeing the workload itself?
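One hedged illustration of the kind of signal involved: large training runs tend to impose a regular rhythm on power draw (roughly one burst per optimiser step), which simple spectral features can pick up. The sketch below (Python/NumPy; the feature choice and nearest-centroid classifier are illustrative assumptions, not a validated method) classifies a power trace without ever seeing the workload itself.

```python
import numpy as np

def trace_features(power_w: np.ndarray) -> np.ndarray:
    """Summarise a power-draw trace: mean load, variability, and how
    much of the fluctuation sits in one dominant periodic component
    (training steps tend to produce a strong, regular rhythm)."""
    centred = power_w - power_w.mean()
    spectrum = np.abs(np.fft.rfft(centred))[1:]  # drop the DC term
    periodicity = spectrum.max() / (spectrum.sum() + 1e-9)
    return np.array([power_w.mean(), power_w.std(), periodicity])

def classify(power_w: np.ndarray, centroids: dict[str, np.ndarray]) -> str:
    """Nearest-centroid over per-class mean feature vectors fit offline
    on labelled traces, e.g. 'training', 'inference', 'idle'."""
    f = trace_features(power_w)
    return min(centroids, key=lambda k: float(np.linalg.norm(f - centroids[k])))
```

Whether signals like these survive deliberate obfuscation by the operator is part of the threat-modelling question the week opens with.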

05

Your own idea

Use the pre-work to scope it. Use the week to build the first version, with feedback from the residents and peers.

Apply now

Cohort of twenty. Travel, accommodation, and food covered, plus a £1,500 stipend.

Apply now

Closes 7 June 2026, 23:59 AoE

Refer someone. £500 if they join.

Questions? hello@cambridgeaisafety.org