AI needs shared red lines
A Turkish voice for a global conversation
AI Safety Türkiye is Turkey's first and most active AI-safety community. Through research, education, and policy work, we strive to ensure this technology develops for humanity's benefit.
AI's near-term and long-term risks must be addressed together.
We believe AI technologies will bring profound changes to politics, the economy, and society in the 21st century. Our aim is to bring together and support those working to ensure this transformation benefits humanity. We work across a broad range of topics: from the AI alignment problem to security risks, from today's social effects to existential risks.
I.
Independent & volunteer-led
An unaffiliated community sustained entirely by volunteer effort.
II.
A holistic safety perspective
We take a holistic view of AI's short- and long-term effects; near-term and far-term concerns are not rivals.
III.
Global contribution from Turkey
From the Athens Roundtable to the Global Call for AI Red Lines, we take part in international deliberations.
Research, education and community hand in hand.
01
Blog
In-depth Turkish writing on AI safety. System-card reviews, policy analyses, long-form pieces.
02
Newsletter
Monthly digest of the latest in AI safety. 22 uninterrupted issues, straight to your inbox.
03
Events
Meetups, panels and training programmes. İstanbul, Ankara, Bilkent, Koç. In Turkey and internationally.
04
Advocacy
We track AI legislation in Turkey and represent the country in international efforts such as the Global Call for AI Red Lines.
In-depth analysis.

Featured · 25 Apr 2026
AI Safety Careers #2
We examine how large language models gained new capabilities through scale, covering emergence, scaling laws, Chinchilla, RLHF, ChatGPT, and GPT-4.

AI Safety Careers #1
We look at the foundations of the rapid progress in artificial intelligence through the milestones spanning from rule-based systems to neural networks and the Transformer …

An In-Depth Analysis of the Claude Mythos System Card - Part 3
In Part 3, we examine Claude Mythos's findings on internal state alignment, evaluation awareness, and model welfare.

An In-Depth Analysis of the Claude Mythos System Card - Part 2
Why does Claude Mythos look more aligned on average yet more dangerous at the extremes? In Part 2, we examine the risk update's core claim.
Volunteers and co-founders.
A monthly Turkish-language digest of AI safety.
The most important developments, events, and community announcements in AI safety, once a month, straight to your inbox.
Subscribe to the newsletter




