The Alignment Desk
A writing accelerator for AI safety
The Alignment Desk is a structured writing programme for people in Cambridge with existing exposure to AI safety who want to produce and publish serious work.
Every Saturday starting 6 February 2025 and running for the duration of Lent term, a small cohort meets at the Meridian Office. Participants work on their own projects, set weekly goals, and hold each other accountable. Each participant is expected to publish four pieces of work over the term.
Writing forces you to confront gaps in your understanding that remain hidden when you are just reading or thinking. It is also how people in this community come to know your work.
For many aspiring writers, the biggest challenge isn't a shortage of ideas, but turning those ideas into finished pieces. Developing a habit of regular output is often what makes the difference between wanting to contribute and actually doing so.
"Writing is the single biggest difference-maker between reading a lot and efficiently developing real views on important topics."
Holden Karnofsky, Learning By Writing
The Alignment Desk is best suited to people who already have context in AI safety, whether technical or governance, and want to develop and publish ideas relevant to the field. This may include people aiming for research, policy, operations, communications, or other AI-safety-adjacent roles.
Applicants who are earlier in their AI safety journey but have strong adjacent experience may be considered on a case-by-case basis. If in doubt, please err on the side of applying.
Weekly Saturday sessions at the Meridian Office. Quiet, structured, and distraction-light. Whiteboards are available in separate rooms for collaboration.
You commit to a project at the start of term and report progress each week. The cohort is small, so progress is visible to everyone.
During breaks, participants can sanity-check arguments, get quick red-teams, or talk through ideas that are stuck.
Where useful, we can help connect participants with researchers, policy professionals, or others in the Cambridge AI safety community. We will also invite relevant experts into the office to give feedback on drafts.
Four published pieces by the end of term. Imperfect and published beats perfect and private.
We are flexible on format. What matters is that you have a concrete output in mind. Examples include:
We meet every Saturday during Lent term. Occasional conflicts are fine, but consistent attendance is expected.
You should come to the first session with at least a working idea of what you are writing and who it is for.
Most of each session should be spent in focused work towards your deliverables.
The structure may evolve based on what works.
Rolling admissions. We'll close the form once spots fill up, so apply sooner rather than later.
Final deadline: Sunday, 25 January (end of day GMT)
No. You should have equivalent context, whether through previous programmes, independent study, or relevant professional experience.
You may still apply. We can help you scope a project, but you should be ready to commit to one by the first session.
Yes. If you are applying with another participant on a joint project, mention this in your application.
That is up to you. Common options include LessWrong, the Alignment Forum, Substack, or a personal blog. We care that the work is published somewhere.
Occasional absences are fine. If you already expect to miss multiple sessions, this is likely not the right programme for you.