FAQ

First, try to have some footing in the field of AI safety (whether technical or governance) before making your application. This could mean taking some AI or AI safety courses, writing something related, or doing related work.

Second, make sure you have a solid understanding of the field of AI safety and its mission. There are many sources you can find online these days, so just start there and build up.

Third, be very clear, with forensic specificity, about what exactly you want to do with the funding: how the funding will allow you to pursue an impactful career related to AI safety, and how the career move you're seeking funding for ranks against the other options you have.

Fourth, try to have some clarity about the kind of impactful role that you want to have in the future in relation to AI safety.

Fifth, it helps a lot if you have a recommender who has been doing AI safety-related work.

Applications for the seminar and junior research fellowship usually open at the end of January each year, so check the website, X account, or LinkedIn account around February 5. All the information you need will be there.

Most of our rejections are a result of: (i) candidates showing a poor understanding of the field we're focused on (AI safety), or (ii) applications that demonstrate minuscule effort in building a good answer to the key question (for the seminar) or the project proposal (for the junior research fellowship). If you plan to apply, prioritize these two things. The other thing you can do is think very critically as you prepare the application and as you answer questions during the interviews: demonstrating careful and precise thought can make a massive difference in how we assess your application. Finally, some candidates over-rely on AI. While AI can be helpful in telling you where to start and in finding gaps in your work, we will be able to tell when you've not done sufficient work of your own, because you'll struggle to answer some questions or you'll give overly broad and generic answers.

If the paper touches on AI governance or public law, I’d be open to the request. However, my final decision will also depend on how many other things I’m working on at that moment. Please email me at cecilyongoatgmaildotcom and I’ll let you know.

There's the basic and obvious stuff: writing well (readability-wise), getting your facts right, and doing comprehensive, religiously cited research. Beyond these fundamentals, I would advise the following. First, select your topic very carefully. Don't rush and don't trust your own instincts alone; consult with people who are more advanced in the field and ask them whether the topic you're considering is the best one for you to work on. Second, if you're super junior, make sure you have an advisor (or advisors) from the beginning. There are very many factors that can cause a paper to end up mediocre or weak, and having advisors is one of the most surefire ways to ensure you don't mess up. Finally, select and follow a good research process. You can read my detailed advice about that here.

Please email me at cecilyongoatgmaildotcom, specifying clearly what you'd like to chat about. Once I receive the email, I'll do my best to respond quickly, but I'm generally always struggling for time, so I can't give any assurances that I'll respond.