The ZAIA Discussion Group.

Our ongoing discussion group for those interested in and familiar with AI safety topics.

What?

The Zurich AI Alignment (ZAIA) Group meets regularly to discuss topics in AI alignment and AI safety, with a general focus on the technical aspects and implications of these topics. We also organize workshop meetings where people meet to discuss project ideas, as well as anything else they are interested in.

Note: This spring semester we are trying out something new and will instead organize a series of talks with subsequent discussions. See AI Futures for more.

A sample of papers we have discussed in previous semesters is below.


Who?

In general, anyone with an interest in ensuring that AI development happens safely, whether from a technical or a policy perspective, is welcome to join.

As our discussions are often lively and rely on knowledge of key terms in the field, we recommend that you have some experience in ML, as well as some background in AI safety, for an engaging experience. If you have completed Zurich's AISF Programme or an analogous one, you are well suited to join. A good way to check whether you meet these criteria is to go through the syllabus of the AI Safety Fundamentals course and see if you are familiar with its topics.

Interested? Then fill out the form at the bottom of this page so we can get in touch with you!


When & Where?

Join the Zurich AI Safety WhatsApp community to be notified about future ZAIA discussion group sessions.


Get in touch!