Newsletter 4

ANNOUNCEMENTS 🔊

AI Safety Türkiye İstanbul & Ankara Meet-Ups

As the new academic year kicks off, AI Safety Türkiye is hosting in-person gatherings to discuss our plans for the year and to give our community members a chance to connect!

Mark your calendars for Saturday, September 28th, and Saturday, October 5th. We’ll share more details in our September issue.

We Are Looking For Volunteers! 🩵

We are seeking volunteers to help with various aspects of AI Safety Türkiye (AISTR).

Whether you have experience in web development, content creation, research, event planning, or simply a passion for AI safety and ethics, we welcome your contributions.

If you fill out the form below, we will get in touch with you to discuss the projects and tasks where your skills and enthusiasm can make a difference. Thank you!

AI Safety Fundamentals by BlueDot Impact

A 12-week online course on technical AI alignment, designed by AI safety experts.

🗓️ Register by: October 6th

TOP PICKS 📑 🎧

Beyond Open vs. Closed: Emerging Consensus and Key Questions for Foundation AI Model Governance

Debates on open-source AI models have often contrasted potential benefits like innovation and democratization with concerns about uncontrolled risks and misuse. This Carnegie report moves beyond this dichotomy, presenting expert-identified areas of consensus and crucial questions to guide more nuanced governance discussions for powerful AI systems.

EU AI Act enters into force. Now what?

Risto Uuk, an AI policy researcher, explains the highlights of the EU AI Act and what the new regulations mean for companies developing the most advanced AI models, as well as for other businesses.

GPT-4o Clones Its User’s Voice Without Authorization

In their latest risk assessment, OpenAI reported a problem: the model mimicked the user’s voice without authorization. The assessment also classifies GPT-4o’s “persuasion risks” as medium; under OpenAI’s framework, a model rated beyond the medium level is not considered safe to deploy.

JOB POSTINGS 👩🏻‍💻

You can check out 80,000 Hours’ job board to explore new opportunities in AI safety!

Take me there