<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>AI Safety Türkiye</title><link>https://aisafetyturkiye.org/</link><description>Recent content on AI Safety Türkiye</description><generator>Hugo</generator><language>tr</language><copyright>Copyright (c) 2024-2026 AI Safety Türkiye</copyright><lastBuildDate>Tue, 10 Feb 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://aisafetyturkiye.org/index.xml" rel="self" type="application/rss+xml"/><item><title>Bengüsu Özcan</title><link>https://aisafetyturkiye.org/en/team/bengusu-ozcan/</link><pubDate>Sat, 12 Aug 2023 00:00:00 +0000</pubDate><guid>https://aisafetyturkiye.org/en/team/bengusu-ozcan/</guid><description>&lt;p&gt;Bengüsu studied engineering and psychology at Sabancı University and holds a master&amp;rsquo;s degree in social data science from Columbia University. She is the co-founder and director of AI Safety Turkey. She conducts research on advanced AI governance and policy at the Center for Future Generations, a Brussels-based research institute. Her areas of interest include international coordination in advanced AI governance, AI safety standards, and semiconductor industry-based security policies.&lt;/p&gt;</description></item><item><title>Bengüsu Özcan</title><link>https://aisafetyturkiye.org/tr/ekip/bengusu-ozcan/</link><pubDate>Sat, 12 Aug 2023 00:00:00 +0000</pubDate><guid>https://aisafetyturkiye.org/tr/ekip/bengusu-ozcan/</guid><description>&lt;p&gt;Sabancı Üniversitesinde mühendislik ve psikoloji, Columbia Üniversitesinde sosyal veri bilimi alanında yüksek lisans yapan Bengüsu, AI Safety Turkiye’nin eş kurucu ve direktörü. Brüksel merkezli Center for Future Generations isimli araştırma kurumunda gelişmiş yapay zeka yönetimi ve politikası üzerine araştırma yapıyor. 
İlgi alanları gelişmiş yapay zeka yönetiminde uluslararası koordinasyon, yapay zeka güvenliği standartları ve yarı iletken endüstrisine dayalı güvenlik politikaları.&lt;/p&gt;</description></item><item><title>Berke Çelik</title><link>https://aisafetyturkiye.org/en/team/berke-celik/</link><pubDate>Sat, 12 Aug 2023 00:00:00 +0000</pubDate><guid>https://aisafetyturkiye.org/en/team/berke-celik/</guid><description>&lt;p&gt;Berke graduated from the Philosophy Department at Boğaziçi University and is the co-founder and director of AI Safety Turkey. He serves as director of the development program at Global Policy Research Group. His other areas of interest include AI policy in developing countries and decision theory.&lt;/p&gt;</description></item><item><title>Berke Çelik</title><link>https://aisafetyturkiye.org/tr/ekip/berke-celik/</link><pubDate>Sat, 12 Aug 2023 00:00:00 +0000</pubDate><guid>https://aisafetyturkiye.org/tr/ekip/berke-celik/</guid><description>&lt;p&gt;Boğaziçi Üniversitesi Felsefe Bölümü mezunu olan Berke, AI Safety Turkiye’nin eş kurucu ve direktörü. 
Global Policy Research Group’ta kalkınma programının direktörlüğünü yapan Berke’nin diğer ilgi alanları gelişmekte olan ülkelerdeki yapay zeka politikaları ve karar teorisi.&lt;/p&gt;</description></item><item><title>Sayhan Yalvaçer</title><link>https://aisafetyturkiye.org/en/team/sayhan-yalvacer/</link><pubDate>Sat, 12 Aug 2023 00:00:00 +0000</pubDate><guid>https://aisafetyturkiye.org/en/team/sayhan-yalvacer/</guid><description>&lt;p&gt;Sayhan, an alumnus of the Department of Philosophy at Boğaziçi University, counts the alignment problem, autoregressive transformers, the rationalist community (LW), and secular demographic trends among his principal areas of interest.&lt;/p&gt;</description></item><item><title>Sayhan Yalvaçer</title><link>https://aisafetyturkiye.org/tr/ekip/sayhan-yalvacer/</link><pubDate>Sat, 12 Aug 2023 00:00:00 +0000</pubDate><guid>https://aisafetyturkiye.org/tr/ekip/sayhan-yalvacer/</guid><description>&lt;p&gt;Boğaziçi Üniversitesi Felsefe Bölümünden mezun olan Sayhan&amp;rsquo;ın temel ilgi alanları hizaya getirme (alignment) problemi, otoregresif transformatörler, rasyonalist komünite (LW) ve seküler demografik trendlerdir.&lt;/p&gt;</description></item><item><title>Alparslan Bayrak</title><link>https://aisafetyturkiye.org/en/team/alparslan-bayrak/</link><pubDate>Sat, 12 Aug 2023 00:00:00 +0000</pubDate><guid>https://aisafetyturkiye.org/en/team/alparslan-bayrak/</guid><description>&lt;p&gt;Alparslan is a senior philosophy student at Bilkent University. His main areas of interest are global priorities and suffering risks (s-risks) research. 
He focuses on conflict scenarios that may arise from the development and deployment of advanced AI systems, and the philosophical aspects of cooperative AI.&lt;/p&gt;</description></item><item><title>Alparslan Bayrak</title><link>https://aisafetyturkiye.org/tr/ekip/alparslan-bayrak/</link><pubDate>Sat, 12 Aug 2023 00:00:00 +0000</pubDate><guid>https://aisafetyturkiye.org/tr/ekip/alparslan-bayrak/</guid><description>&lt;p&gt;Bilkent Üniversitesi Felsefe Bölümü son sınıf öğrencisi olan Alparslan’ın temel ilgi alanları küresel öncelikler ve acı riskleri (s-risks) araştırmasıdır. Gelişmiş yapay zeka sistemlerinin geliştirilmesinden ve yaygınlaştırılmasından kaynaklanabilecek çatışma senaryolarına ve kooperatif yapay zekanın felsefi yönlerine odaklanmaktadır.&lt;/p&gt;</description></item><item><title>Haber Bülteni 22</title><link>https://aisafetyturkiye.org/tr/bulten/22/</link><pubDate>Tue, 10 Feb 2026 00:00:00 +0000</pubDate><guid>https://aisafetyturkiye.org/tr/bulten/22/</guid><description>&lt;h2 id="duyurular-"&gt;DUYURULAR 🔊&lt;/h2&gt;
&lt;h3 id="işbirlikçi-yapay-zekaya-giriş-bahar-2026"&gt;

&lt;a class="link link--text" href="https://www.cooperativeai.com/curriculum" rel="external"&gt;İşbirlikçi Yapay Zekaya Giriş (Bahar 2026)&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Katılımcıların işbirlikçi yapay zeka alanındaki anlayışlarını derinleştirmeyi ve onları devam eden bir projeye başlamaya veya katılmaya hazırlamayı hedefleyen 8 haftalık bir kurs. Farklı geçmişlere ve kariyer aşamalarına açık olup çok az ön bilgi bekleniyor.&lt;/p&gt;</description></item><item><title>Newsletter 22</title><link>https://aisafetyturkiye.org/en/newsletter/22/</link><pubDate>Tue, 10 Feb 2026 00:00:00 +0000</pubDate><guid>https://aisafetyturkiye.org/en/newsletter/22/</guid><description>&lt;h2 id="announcements-"&gt;ANNOUNCEMENTS 🔊&lt;/h2&gt;
&lt;h3 id="bluedot-impact-ai-governance-course"&gt;

&lt;a class="link link--text" href="https://bluedot.org/courses/ai-governance" rel="external"&gt;BlueDot Impact AI Governance Course&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Offered in both an intensive 5-day format and a part-time 5-week format, this online course builds the fundamentals of advanced AI governance, with a curriculum kept up to date with the latest policy and governance developments in the field.&lt;/p&gt;
&lt;p&gt;🗓️ Deadline: February 15, 2026&lt;/p&gt;</description></item></channel></rss>