Welcome to our monthly newsletter, prepared to bring you the latest developments and informative content on AI safety!
ANNOUNCEMENTS 🔊
AI Safety Türkiye Istanbul & Ankara Meetings
As the new school term begins, we at AI Safety Türkiye are meeting face to face to discuss our plans for this year and to give our community members a chance to get to know each other!
Mark Saturday, September 28 and Saturday, October 5 in your calendars; we will share the details in our September issue.
We are looking for volunteers to support AI Safety Türkiye in various areas. Whether or not you have experience in web development, content production, research, or event organization, an interest in AI safety and ethics is all we need. Every contribution is valuable to us!
If you fill out the form below, we will contact you to share how you can contribute to our mission and which tasks we are seeking volunteers for. Thanks in advance!
Discussions on open-source AI models often pit their potential benefits, such as innovation and democratization, against concerns of uncontrolled risks and misuse. The Carnegie report moves beyond this dichotomy, presenting areas of consensus identified by experts and critical questions that will drive more detailed governance discussions for powerful AI systems.
AI policy researcher Risto Uuk explains the highlights of the EU AI Act and what these new regulations mean for companies and other organizations developing cutting-edge AI models.
In its latest risk assessment, OpenAI found that the GPT-4o model imitated the user's voice without permission. The assessment also classifies GPT-4o's "persuasion risks" as medium. OpenAI does not consider it safe to release a model whose persuasion risks are above the medium level.