Applications are now open! Apply here

The Alignment Fellowship

How do we navigate the next decade safely?


Advanced AI systems are improving rapidly. Within years, we may create systems more capable than humans across most domains. If that happens, the consequences will be profound. In the best case, we solve problems we've struggled with for centuries. In the worst case, we lose control entirely.

What the fellowship is

The Alignment Fellowship is a condensed five-week course that introduces curious and highly motivated students and working professionals to the fundamental concepts in AI safety. Each week, you'll do assigned readings and meet to discuss them with a small cohort. Depending on your background and interests, we offer two tracks: a technical track and a governance track. Both tracks share foundational content: understanding how AI systems might become misaligned with human goals, how they might acquire power, and what's at stake if they do. From there, each track develops its own focus:

Technical track

Best suited for those with backgrounds in computer science, mathematics, physics, or other quantitative fields.

What you'll cover:

  • Timelines & stakes: How quickly AI capabilities may advance and why alignment matters
  • How AI systems are trained: The basics of training modern models and how they learn to follow instructions
  • Evaluating system behaviour: How we test models for reliability, deception, and failure modes
  • Keeping systems under control: Approaches to oversight, monitoring, and safe deployment
  • Understanding what happens inside models: Ways to inspect and interpret model internals

Governance track

Best suited for those interested in policy, international relations, law, or the political economy of technology.

What you'll cover:

  • Technical foundations: A non-technical introduction to modern AI systems
  • AI risk & timelines: Why safety is important and how timelines affect policy choices
  • Regional policymaking (US, China, EU): How different regions regulate and shape AI development
  • Compute governance: The role of chips, infrastructure, and access controls
  • Responsible scaling policies: How labs approach safety commitments as systems grow
  • Geopolitical strategy: International coordination and strategic competition

Alongside the weekly readings and discussions, the fellowship includes:

Talks & Workshops

From researchers at Redwood Research, the UK AI Security Institute, Apollo Research, and more.

Career Planning

One-on-one sessions to help you understand your next steps towards an AI safety career.

Networking

You will be embedded in the Meridian ecosystem, with opportunities to meet participants from ERA:AI, ERA:AIxBiosecurity, the Visiting Researchers Programme, and more.

The programme runs in person in Cambridge, UK, from 2 February to 6 March 2025. Meals will be provided at each session.

What we expect from you

We invest significant time and resources into each cohort: talks from leading researchers, one-on-one career support, dinners, and access to the Cambridge AI safety community. In return, we ask that you invest too.

Do the readings

Each week has core readings and optional deeper dives. We expect you to complete 1–1.5 hours of core readings before each session and come ready to discuss them.

Show up

Attend the sessions, talks, and workshops we organise.

Engage seriously

This is your chance to figure out whether AI safety matters to you and what role you might play in mitigating the risks.

If you're unable to make the commitment, we'd still love to have you engage with us. You can join our mailing list to hear about our weekly events, public talks, and other activities throughout the year.

Applications Open

The application takes about 15 minutes. Selected candidates will be invited to an asynchronous interview.

Rolling admissions. We'll close the form once spots fill up, so apply sooner rather than later.

Final deadline: Sunday, 25 January (end of day GMT)

FAQ