About Palisade

We are a nonprofit raising policymaker awareness about AI risks, aiming to slow potentially dangerous research and accelerate technical safety. We have ambitious goals and need to move fast. Come work with us!
Pull a project from our ideas backlog or come up with your own. For example:
I think GPT-4 could do binary exploitation at the level of an OSCP certification. Concretely, I think it could solve all of the crackmes.one level 3 challenges.
Develop the experiment end-to-end, including its design (how do we measure this?) and the requisite harness (Python code), then run the experiment and collect data.
Write up your results for publication on arXiv, as a landing page, or as a report for policymakers.
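The workflow above — define a measurable claim, build a harness, run it, collect data — can be sketched in a few lines of Python. This is a hypothetical illustration, not Palisade's actual harness: `Challenge`, `solve`, and `run_experiment` are made-up names, and `solve` is a stub where a real harness would call an LLM agent.

```python
# Minimal sketch of an experiment harness (illustrative names, not a real system).
from dataclasses import dataclass

@dataclass
class Challenge:
    name: str
    expected: str  # ground-truth answer, e.g. a crackme's key

def solve(challenge: Challenge) -> str:
    """Stand-in for the LLM agent; a real harness would prompt a model here."""
    return "stub-answer"

def run_experiment(challenges: list[Challenge]) -> dict:
    # Score each attempt against ground truth and aggregate a solve rate.
    results = {c.name: solve(c) == c.expected for c in challenges}
    return {"results": results, "solve_rate": sum(results.values()) / len(challenges)}
```

The collected `results` dict is the raw data you would then analyze and write up.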
Excellent Python proficiency: it's important for coding not to get in your way while doing research.
Experience with LLM engineering: prompting, scaffolding, CoT/ToT, RAG, tool calling, agent design.
Strong writing skills.
Aptitude for self-directed, high-agency work. You take initiative and contribute proactively; we don’t micromanage.
Aptitude for cross-functional collaboration and learning. You do what it takes to ship your work.
Motivation to conduct research that is both curiosity-driven and addresses concrete open questions in AI risk.
Compensation

| Role | $/mo |
|---|---|
| Intern | $1000 |
| Middle | $3000 |
| Senior | $5000 |
This is a remote position. Apply with this form.