Operations Lead at Truthful AI

Apply Here

Contact us at joe@constellation.org with any inquiries (please include “Truthful AI Job” in the subject line).


Job description

  • Recruiting and onboarding new staff and interns
  • Overseeing our website and social media
  • Working with Owain Evans (director of Truthful AI) to prioritize and arrange media requests, event invitations, and external advising
  • Helping manage our research projects in AI Safety
  • Managing the budget and fundraising
  • Interfacing with our fiscal sponsor (Rethink Priorities), which handles some of our operations (e.g. accounts, payroll, benefits, visas)

We are looking for a generalist who can carry out a range of tasks to support our technical AI Safety research, communicate our findings widely, grow the team, and enable ambitious projects.

Desired skills

  • Work experience in operations, research management, executive support, or similar areas
  • An independent and proactive working style
  • Genuine interest in AI Safety and in Truthful AI’s approach and mission
  • Ability to execute on a wide range of tasks (whatever needs to be done)
  • Strong verbal and written communication, including the ability to communicate our key research findings
  • Proficiency with traditional software and recent AI tools

Logistics

Truthful AI is fiscally sponsored by Rethink Priorities. If based in the USA, you will be an employee of Rethink Priorities, a 501(c)(3) research non-profit.

  • You would report to Owain Evans (director of Truthful AI)
  • Location: We prefer candidates who can work in person in Berkeley but we will also consider remote (US or international).
  • Our office is at Constellation in Berkeley, CA
  • Hours: Full-time (40 hours/week) preferred but we will consider part-time.
  • Compensation: $140,000–$200,000 depending on experience and location. Benefits provided. For onsite employees, lunch and dinner are provided.
  • We sponsor visas and are cap-exempt for the H-1B.
  • Hiring process: screening call, work test, reference checks, interviews with the team.

Why is this impactful?

  • Over the past 3.5 years, Truthful AI has produced impactful AI safety research. This includes work on truthfulness/honesty (TruthfulQA), out-of-context reasoning, emergent misalignment, and subliminal learning. We published the first AI alignment paper in the journal Nature. Our research has influenced research agendas and safety practices at AI labs, UK AISI, and non-profit research groups.
  • Our work has been covered widely in the media. In 2025, Owain Evans gave the Hinton Lectures on AI Safety, a series of public lectures for a general audience hosted by Nobel Laureate and AI pioneer Geoffrey Hinton.
  • This role has the potential to increase our impact significantly. You would play a crucial role in growing our team and building up our infrastructure and ability to run ambitious projects. This will involve fundraising, hiring, and coordinating our projects at a larger scale than previously. You would also help communicate our research both to specific parties (AI labs, governments, etc.) and to the broader research community.

Testimonials by external AI safety researchers

“I’ve known and worked with Owain for 11 years, and consistently recommend to my students and mentees to try to work with him. His work has consistently been driving the zeitgeist in AI alignment for the last 3-4 years. He’s brilliant, professional, and easy to work with. He’s thoughtful and dedicated to supporting the careers of his junior colleagues.”
Professor David Duvenaud, University of Toronto, former Anthropic team lead in AI Alignment

“Anthropic’s AI alignment efforts have probably been influenced as much by Owain’s research as by any other outside research group, and have certainly been influenced more by his research than by the research from any three or four of the best universities in the world combined. His work on emergent misalignment in particular was plausibly the most important scientific finding in the field last year. AI alignment is plausibly one of the highest-stakes research fields in the world right now, and accelerating Owain’s research is in the very top tier of opportunities to contribute to that work.”
Professor Sam Bowman, Anthropic (team lead in AI Alignment) and NYU

Why this might be a good fit for someone

  • We are a small organization and you would be our first non-technical hire. The impact of your work will be tangible from Day 1.
  • You will develop a range of skills, including generalist operations, supporting and communicating AI safety research, automating workflows using AI tools, technical recruiting and onboarding, and strategic planning.
  • While we are small, there are good opportunities for learning from others:
    • Owain Evans has extensive experience both in doing research and in managing research teams and non-profits. Our other team members (Jan Betley, Johannes Treutlein, Anna Sztyber-Betley, Jorio Cocola) have experience working in academic research, software companies, and frontier AI labs.
    • You will work closely with Rethink Priorities on operations and with Constellation’s team on the Astra Fellowship.
  • AI safety research is inherently fast moving and is currently being transformed by AI automation. We need to learn and adapt quickly and you will be part of this process.

Why this could be a bad fit for someone

  • We expect a high degree of autonomy and ability to get things done on your own. You’ll be the main person responsible for various functions and will need to take initiative and solve problems yourself.
  • Owain Evans (the director) will have ultimate responsibility for most of the key strategic decisions for the organization (e.g. what projects to take on, who to hire). If you are looking for this kind of responsibility yourself, this is probably not a good fit.
  • You will be responsible for a wide range of tasks, including some mundane every-day tasks. You would be encouraged to find ways to automate or outsource these tasks where possible, but there will still be some mundane tasks left over.
  • This is likely a poor fit if you want to work in AI Safety but are not sold on the particular value of Truthful AI’s approach and mission.

Apply Here

Contact us at joe@constellation.org with any inquiries (please include “Truthful AI Job” in the subject line).