The Alignment Desk

A writing accelerator for AI safety

The Alignment Desk is a structured writing programme for people in Cambridge with existing exposure to AI safety who want to produce and publish serious work.

A small cohort meets weekly at the Meridian Office. Participants work on their own projects, set weekly goals, and hold each other accountable. Each participant is expected to publish three pieces during term, with an optional fourth post-Lent.

Participants Lent 2026

Cain Hillier

MPhil student in Global Risk & Resilience at the Centre for the Study of Existential Risk. Previously a Technical Writer at Schneider Electric and Junior Researcher at the European Institute of Asian Studies. Bylines in East Asia Forum, EIAS and APEC features.

Posts
  • Post 1
  • Post 2
  • Post 3
  • Post 4 (optional)
Henry Colbert

Master's in Mathematics from TU Munich and Bachelor's from the University of Cambridge. Previously a quantitative trader in US blue-chip ETFs, now pivoting into AI safety research through the ERA fellowship.

Posts
  • Post 1
  • Post 2
  • Post 3
  • Post 4 (optional)
Hiskias Dingeto

AI researcher with a PhD focused on AI security and adversarial robustness. Works on model failure modes, interpretability, and alignment, with a focus on how training objectives shape internal representations.

Posts
  • Post 1
  • Post 2
  • Post 3
  • Post 4 (optional)
Libby Simmons

MPhil student in the Ethics of AI, Data and Algorithms at the University of Cambridge. Background in technology ethics and moral philosophy, with experience as an AI Analyst. Specialises in predictive AI, surveillance tech, fairness metrics, and AI governance.

Posts
  • Post 1
  • Post 2
  • Post 3
  • Post 4 (optional)
Joseph Hewson

Recent Mathematics Master's graduate, now upskilling in AI safety. Participated in ARBOx3 at the Oxford AI Safety Initiative.

Posts
  • Post 1
  • Post 2
  • Post 3
  • Post 4 (optional)
Zihao Liu

Second-year Engineering student at Cambridge and Technical Officer at the Cambridge AI Builder Club. Background in full-stack software engineering, focused on bridging AI safety theory and engineering practice.

Posts
  • Post 1
  • Post 2
  • Post 3
  • Post 4 (optional)
Zach Liu

Third-year Engineering undergraduate at Cambridge. Research interests include value alignment and how goal-directed behaviour emerges from reinforcement learning. Has helped with operations at CAISH since 2023. Previously worked in software engineering at Amazon, researched hallucination detection at the Singapore AI Safety Institute, and did some work in singular learning theory.

Posts
  • Post 1
  • Post 2
  • Post 3
  • Post 4 (optional)
Shih Ee Whang

Second-year Engineering student at Cambridge, currently exploring LLM fine-tuning and generalisation on a SPAR project. Interested in investigating technical solutions through a pragmatic lens at the Alignment Desk. Background in robotics and software engineering.

Posts
  • Post 1
  • Post 2
  • Post 3
  • Post 4 (optional)

Guest Posts

Gaurav Yadav

Puria Radmard

  • Post 1

Jacob Schaal

  • Post 1

Carter Rogers

Why writing?

Writing forces you to confront gaps in your understanding that remain hidden when you are just reading or thinking. It is also how people in this community come to know your work.

For many aspiring writers, the biggest challenge isn't a shortage of ideas, but turning those ideas into finished pieces. Developing a habit of regular output is often what makes the difference between wanting to contribute and actually doing so.

"Writing is the single biggest difference-maker between reading a lot and efficiently developing real views on important topics."

Holden Karnofsky, Learning By Writing

Who this is for

The Alignment Desk is best suited to people who already have context in AI safety, whether technical or governance-focused, and who want to develop and publish ideas relevant to the field. This may include people aiming for research, policy, operations, communications, or other AI safety-adjacent roles.

What this is

Dedicated writing time

Weekly Saturday sessions at the Meridian Office. Quiet, structured, and distraction-light. Whiteboards are available in separate rooms for collaboration.

Accountability

You commit to a project at the start of term and report progress each week. The cohort is small, and progress is visible.

Fast feedback

During breaks, participants can sanity-check arguments, get quick red-teams, or talk through ideas that are stuck.

Access to the ecosystem

Where useful, we can help connect participants with researchers, policy professionals, or others in the Cambridge AI safety community. We will also invite relevant experts into the office to give feedback on drafts.

Clear output expectations

Three published pieces during term, with an optional fourth post-Lent. Imperfect and published beats perfect and private.

What kind of projects?

We are flexible on format. What matters is that you have a concrete output in mind. Examples include:

  • A literature review of a technical or governance topic
  • An interactive explainer or tutorial
  • A steelman or red-team of a research agenda
  • An original argument, research note, or proposal
  • A distillation of existing work for a new audience
  • A public reflection on career plans or uncertainties

Interested in the next cohort?

The current cohort is underway. If you'd like to join a future round of the Alignment Desk, sign up to our mailing list and we'll let you know when applications open.