<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Newsletter on AI Safety Türkiye</title><link>https://aisafetyturkiye.org/en/newsletter/</link><description>Recent content in Newsletter on AI Safety Türkiye</description><generator>Hugo</generator><language>tr</language><copyright>Copyright (c) 2024-2026 AI Safety Türkiye</copyright><lastBuildDate>Tue, 10 Feb 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://aisafetyturkiye.org/en/newsletter/index.xml" rel="self" type="application/rss+xml"/><item><title>Newsletter 22</title><link>https://aisafetyturkiye.org/en/newsletter/22/</link><pubDate>Tue, 10 Feb 2026 00:00:00 +0000</pubDate><guid>https://aisafetyturkiye.org/en/newsletter/22/</guid><description>&lt;h2 id="announcements-"&gt;ANNOUNCEMENTS 🔊&lt;/h2&gt;
&lt;h3 id="bluedot-impact-ai-governance-course"&gt;

&lt;a class="link link--text" href="https://bluedot.org/courses/ai-governance" rel="external"&gt;BlueDot Impact AI Governance Course&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;With an intense 5-day and part-time 5-week format, this online course builds fundamentals on advanced AI governance with an up-to-date curriculum on the latest policy and governance news in the field.&lt;/p&gt;
&lt;p&gt;🗓️ Deadline: February 15, 2026&lt;/p&gt;</description></item><item><title>Newsletter 21</title><link>https://aisafetyturkiye.org/en/newsletter/21/</link><pubDate>Wed, 14 Jan 2026 00:00:00 +0000</pubDate><guid>https://aisafetyturkiye.org/en/newsletter/21/</guid><description>&lt;h2 id="announcements-"&gt;ANNOUNCEMENTS 🔊&lt;/h2&gt;
&lt;h3 id="-mats-summer-2026"&gt;

&lt;a class="link link--text" href="https://www.matsprogram.org/apply?utm_source=aisafety-com&amp;utm_medium=job-board&amp;utm_campaign=s26" rel="external"&gt;🌟 MATS: Summer 2026&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;12-week full-time research program connecting 120 fellows with mentors from Anthropic and UK AISI to work on alignment or governance. Some fellows will be offered a 6-month extension.&lt;/p&gt;
&lt;p&gt;🗓️ Register by: January 18&lt;/p&gt;</description></item><item><title>Newsletter 20</title><link>https://aisafetyturkiye.org/en/newsletter/20/</link><pubDate>Fri, 12 Dec 2025 00:00:00 +0000</pubDate><guid>https://aisafetyturkiye.org/en/newsletter/20/</guid><description>&lt;h2 id="announcements-"&gt;ANNOUNCEMENTS 🔊&lt;/h2&gt;
&lt;h3 id="-bluedot-impact-biosecurity-course"&gt;

&lt;a class="link link--text" href="https://bluedot.org/courses/biosecurity" rel="external"&gt;🌟 BlueDot Impact Biosecurity course&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Join an intensive course to protect the world against AI-engineered pandemics&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;🗓️ Register by: December 14&lt;/strong&gt;&lt;/p&gt;
&lt;h3 id="-cbai-spring-research-fellowship-2026"&gt;

&lt;a class="link link--text" href="https://www.cbai.ai/fellowship" rel="external"&gt;🌟 CBAI Spring Research Fellowship 2026&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Fully-funded, 10-week program run by the Cambridge Boston Alignment Initiative (CBAI), covering both technical and governance research. Fellows work closely with mentors, participate in workshops and seminars, and gain research experience and networking opportunities.&lt;/p&gt;</description></item><item><title>Newsletter 19</title><link>https://aisafetyturkiye.org/en/newsletter/19/</link><pubDate>Tue, 11 Nov 2025 00:00:00 +0000</pubDate><guid>https://aisafetyturkiye.org/en/newsletter/19/</guid><description>&lt;h2 id="announcements-"&gt;ANNOUNCEMENTS 🔊&lt;/h2&gt;
&lt;h3 id="ai-safety-türkiye-online-meetup"&gt;

&lt;a class="link link--text" href="https://luma.com/m30r93rj" rel="external"&gt;🌟AI Safety Türkiye Online Meetup&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Join our first virtual meetup to get to know broader AI Safety Türkiye community!&lt;/p&gt;
&lt;p&gt;🗓️ Register by: November 19&lt;/p&gt;
&lt;h3 id="pathfinder-fellowship"&gt;

&lt;a class="link link--text" href="https://pathfinder.kairos-project.org/" rel="external"&gt;🌟Pathfinder Fellowship&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Are you running a student club, reading group or any other activity about AI safety in your university? Apply for mentorship and funding!&lt;/p&gt;</description></item><item><title>Newsletter 18</title><link>https://aisafetyturkiye.org/en/newsletter/18/</link><pubDate>Tue, 14 Oct 2025 00:00:00 +0000</pubDate><guid>https://aisafetyturkiye.org/en/newsletter/18/</guid><description>&lt;h2 id="announcements-"&gt;ANNOUNCEMENTS 🔊&lt;/h2&gt;
&lt;h3 id="-future-impact-group-fig-fellowship-winter-2025"&gt;🌟 

&lt;a class="link link--text" href="https://futureimpact.group/fellowship" rel="external"&gt;Future Impact Group (FIG) Fellowship: Winter 2025&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;12-week fellowship from December 1, 2025 to March 1, 2026 where participants work as research associates on specific projects. Associates dedicate 8+ hours weekly to AI governance, technical AI safety, and digital sentience; gaining experience and building networks.&lt;/p&gt;</description></item><item><title>Newsletter 17</title><link>https://aisafetyturkiye.org/en/newsletter/17/</link><pubDate>Tue, 12 Aug 2025 00:00:00 +0000</pubDate><guid>https://aisafetyturkiye.org/en/newsletter/17/</guid><description>&lt;h2 id="announcements-"&gt;ANNOUNCEMENTS 🔊&lt;/h2&gt;
&lt;h3 id="-ml-alignment--theory-scholars-mats-winter-2026"&gt;🌟 

&lt;a class="link link--text" href="https://www.matsprogram.org/" rel="external"&gt;ML Alignment &amp;amp; Theory Scholars (MATS): Winter 2026&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;12-week program aimed at helping participants launch their career in AI alignment, governance, and security. There will be field-leading research mentorship, funding, Berkeley &amp;amp; London office space, housing, and talks/workshops with AI experts.&lt;/p&gt;</description></item><item><title>Newsletter 16</title><link>https://aisafetyturkiye.org/en/newsletter/16/</link><pubDate>Sun, 10 Aug 2025 00:00:00 +0000</pubDate><guid>https://aisafetyturkiye.org/en/newsletter/16/</guid><description>&lt;h2 id="announcements-"&gt;ANNOUNCEMENTS 🔊&lt;/h2&gt;
&lt;h3 id="hear-about-different-ai-futures-from-our-co-founder-bengüsu-özcan-"&gt;

&lt;a class="link link--text" href="https://lu.ma/5aifutures" rel="external"&gt;Hear About Different AI Futures from Our Co-Founder Bengüsu Özcan! 🌟&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Our co-founder Bengüsu Özcan, who published a study on various geopolitical and societal scenarios ranging from futures where AI development slows down to futures where it escapes human control, will share these scenarios in a special session organized by BlueDot Impact.&lt;/p&gt;</description></item><item><title>Newsletter 15</title><link>https://aisafetyturkiye.org/en/newsletter/15/</link><pubDate>Wed, 09 Jul 2025 00:00:00 +0000</pubDate><guid>https://aisafetyturkiye.org/en/newsletter/15/</guid><description>&lt;h2 id="announcements-"&gt;ANNOUNCEMENTS 🔊&lt;/h2&gt;
&lt;h3 id="ai-safety-collab-2025-summer"&gt;

&lt;a class="link link--text" href="https://airtable.com/appZR9Ries3akemlk/pagVA2W08JQ6W77bi/form" rel="external"&gt;AI Safety Collab 2025 Summer&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Comprehensive 8-week introductory program run by ENAIS, split into 2 tracks: governance (following the BlueDot curriculum) and alignment (following the AI Safety Atlas textbook). The course is aimed at people who have an interest in and are relatively new to AI safety, regardless of background.&lt;/p&gt;</description></item><item><title>Newsletter 14</title><link>https://aisafetyturkiye.org/en/newsletter/14/</link><pubDate>Tue, 10 Jun 2025 00:00:00 +0000</pubDate><guid>https://aisafetyturkiye.org/en/newsletter/14/</guid><description>&lt;h2 id="announcements-"&gt;ANNOUNCEMENTS 🔊&lt;/h2&gt;
&lt;h3 id="arena"&gt;

&lt;a class="link link--text" href="https://www.arena.education/" rel="external"&gt;ARENA&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;The Alignment Research Engineer Accelerator (ARENA) is a 4–5 week ML bootcamp with a focus on AI safety. The mission is to provide talented individuals with the skills, tools, confidence, and connections necessary for upskilling in ML engineering, for the purpose of contributing directly to AI alignment in technical roles.&lt;/p&gt;</description></item><item><title>Newsletter 13</title><link>https://aisafetyturkiye.org/en/newsletter/13/</link><pubDate>Tue, 13 May 2025 00:00:00 +0000</pubDate><guid>https://aisafetyturkiye.org/en/newsletter/13/</guid><description>&lt;h2 id="announcements-"&gt;ANNOUNCEMENTS 🔊&lt;/h2&gt;
&lt;h3 id="mila-ai-policy-fellowship"&gt;

&lt;a class="link link--text" href="https://mila.quebec/en/mila-ai-policy-fellowship" rel="external"&gt;Mila AI Policy Fellowship&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Mila, a Montreal-based AI research institute whose scientific director is Yoshua Bengio, is planning to bring young professionals from diverse backgrounds like law, sociology, and philosophy to tackle the complex challenges of AI governance.&lt;/p&gt;</description></item></channel></rss>