Welcome to our monthly newsletter, prepared to bring you current developments and informative content on AI Safety!
🙋🏻‍♀️ Editor's Note:
Our website aisafetyturkiye.org is under construction. During this period, you can reach us via aisafetyturkiye@gmail.com.
This 9-week course offered by AI Safety Fundamentals (BlueDot Impact) is designed for economists who want to better understand transformative AI and its economic impacts. A 5-day accelerated version, run by the same team, is also available.
🗓️ Registration deadline: Applications are accepted until February 13 for the 9-week course, and on a rolling basis for the accelerated 1-week course.
BlueDot Impact's 5-day Introduction to Transformative AI course offers a focused curriculum and expert-led discussions to help people from diverse backgrounds build knowledge in the AI field and grow their professional networks.
🗓️ Deadline for registration: Applications are accepted on a rolling basis.
This summer school aims to give students and early-career professionals in AI, computer science, and related disciplines (sociology, economics, etc.) a solid foundation in the emerging field of Cooperative AI.
Alignment Research Engineer Accelerator (ARENA) is a 4-5 week machine learning bootcamp focused on AI Safety. Participants build their skills by pair programming on ML exercises under the guidance of expert teaching assistants.
This Apart Sprint aims to bring together hardware experts, security researchers, and AI Safety enthusiasts to develop and test verification systems that detect AI training activity through side-channel analysis. Participants will work with cutting-edge hardware systems to create innovative solutions for tracking compute usage.
This Apart Sprint encourages women and underrepresented groups to contribute to critical areas of AI Safety, such as compliance, governance, security, and evaluation. No prior AI Safety experience is required to participate.
This 3-day residential workshop for university students is aimed at participants who want to explore career paths with large social impact. Participants will examine the need for stronger safety measures in AI and biotechnology, and will have the opportunity to hear first-hand about the career journeys of professionals working in these fields.
Many readers of this newsletter will have heard arguments that AI could disempower humanity. In a new paper, Jan Kulveit, Değer Turan, and co-authors analyze in detail how advancing AI could gradually lead to this outcome. The authors argue that as AI systems become increasingly competitive alternatives to humans in key societal domains such as the economy, culture, and governance, the incentives that currently depend on human participation may be replaced by different dynamics. The researchers emphasize that misalignment in one domain can worsen misalignment in others, ultimately leading to existential catastrophe and a permanent loss of human power.
Ahead of the AI Action Summit, the International AI Safety Report, written by leading scientists in the field including Yoshua Bengio and prepared with official delegations nominated by participating countries, was published. The report, which adds a substantial section on AI risk management compared to the previous interim report, is expected to serve as a scientific guide for the discussions at the summit.
China-based DeepSeek is challenging US dominance in the sector with an unexpected breakthrough in AI.
This development is raising interest in the meeting of representatives from 80 countries to be held ahead of the AI Action Summit in Paris.
DeepSeek achieved this success through innovative resource-optimization strategies, despite US export restrictions on advanced chips.
Experts note that DeepSeek's success challenges common assumptions about the resource and infrastructure requirements of AI development.
A significant shift is underway in AI governance in the US: fundamental changes are reportedly being made to federal AI safety testing requirements.
The previous framework had established mandatory disclosure protocols for high-risk AI systems.
While the changes are framed as an effort to accelerate AI innovation, they are also sparking debate about the importance of safety frameworks.
This policy change could affect global AI governance standards and set a precedent for other countries.
The UK's AI Security Institute (AISI) stands out as the world's most advanced government program for assessing AI risks, with a budget of £100 million.
The Institute has made agreements with leading AI companies such as OpenAI, Google DeepMind, and Anthropic, securing access to test their newest models before release.
Although AISI does not have access to model weights, it has developed advanced testing methodologies for biological, chemical, and cyber risks.
The Institute's work promotes international cooperation on AI safety and serves as a model for other countries.
AISI is a major experiment in government oversight of AI development, balancing technical expertise with the diplomatic skill needed to manage relationships with powerful technology companies.
JOB POSTINGS 👩🏻‍💻
To explore new opportunities in the field of AI Safety, take a look at 80,000 Hours' job board!