Welcome to our monthly newsletter, prepared to bring you the latest developments and informative content on AI safety!
🙋🏻‍♀️ Editor's Note:
Did you notice that we sent the newsletter a day late this month? 👀
But don't worry! Starting next month, we will continue to meet as usual, on the second day of the second week of each month.
This grant program supports the next generation of thinkers and doers working at the intersection of artificial intelligence and human development. Funded projects often take an interdisciplinary, human-centered approach, drawing on philosophy, computer science, political theory, economics, the natural sciences, and other fields.
The CLAS 2024 competition, organized as part of NeurIPS, challenges participants to tackle critical problems in AI safety. Winners will share a $30,000 prize pool and have the opportunity to co-author a publication on the contest results.
LASR Labs offers a three-month, full-time, paid program in London focused on technical AI safety research. Participants work in small teams under the supervision of experienced researchers to address concrete threat models, with the aim of producing academic papers and blog posts that help reduce risks from advanced artificial intelligence systems. This immersive experience lets participants engage deeply with AI safety research and make meaningful contributions to the field.
SB 1047, the bill we featured in our previous issue that generated considerable debate in AI safety circles, was vetoed by California Governor Gavin Newsom. What does this mean for the political standing of AI safety?
At the AI Seoul Summit, US Secretary of Commerce Gina Raimondo announced the establishment of a global network of AI Safety Institutes (AISIs), stretching from Kenya to the UK and from Singapore to Canada. Alex Petropoulos of the International Center for Future Generations examines what AISIs are, what activities they have planned, and how they differ across countries.
OpenAI's leadership team saw significant changes with the departures of CTO (Chief Technology Officer) Mira Murati, Chief Research Officer Bob McGrew, and Vice President of Research Barret Zoph.
CEO Sam Altman stated that the departures were amicable and attributed them to the intense workload of leadership roles.
OpenAI introduced new artificial intelligence models called “o1-preview” and “o1-mini” on September 12.
These models use chain-of-thought reasoning to tackle complex problems, spending more time "thinking" before they answer and producing better results than previous models.
The new models show significant gains in basic science, coding, and mathematics in particular. For example, OpenAI claims that o1 solved 83% of the problems on a qualifying exam for the International Mathematics Olympiad.
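For readers who want to try the new models themselves, here is a minimal sketch of calling o1-preview through the official OpenAI Python client. The prompt, setup, and model choice are our own illustration, not part of OpenAI's announcement; you will need your own API key.

```python
# A minimal sketch, assuming the official OpenAI Python client
# (`pip install openai`) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# o1-series models reason internally ("chain of thought") before answering,
# so a plain user message is enough; at launch they did not accept a
# system-role message.
response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {
            "role": "user",
            "content": (
                "A train travels 120 km in 90 minutes. "
                "What is its average speed in km/h? "
                "Explain your reasoning step by step."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

Note that the model's internal reasoning tokens are billed but not returned; only the final answer appears in the response.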
The Council of Europe's "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law" (CETS No. 225) was opened for signature in Vilnius on September 5.
This treaty is the first legally binding international agreement aiming to ensure that the use of artificial intelligence systems is consistent with human rights, democracy, and the rule of law.
The Convention was signed by many countries and organizations, including Andorra, Georgia, Iceland, Norway, Moldova, San Marino, the United Kingdom, Israel, the USA and the European Union.
At our AI Safety Türkiye October 2024 meetup, we discussed the online training sessions we are planning for this year, the events we will organize with student communities, and the changes we might see under the artificial intelligence law being planned in Turkey.
💌 We hope to see you at our next meetup!