Sr. Staff AI Engineer, GenAI Safety

LinkedIn

About the position

At LinkedIn, our approach to flexible work is centered on trust and optimized for culture, connection, clarity, and the evolving needs of our business. The work location of this role is hybrid, meaning it will be performed both from home and from a LinkedIn office on select days, as determined by the business needs of the team. This role will be based in Sunnyvale, CA.

The Generative AI (GenAI) Safety team sits at the heart of LinkedIn’s Responsible AI & Governance (RAI‑G) organization, with a mission to set the gold standard for AI safety across all AI applications company‑wide. We ensure that every generative AI product is developed and deployed responsibly, ethically, and securely. By combining rigorous governance with cutting‑edge ML research, we identify and mitigate risks such as bias, hallucination, misuse, and privacy leakage. As both the AI Safety Research team and the central AI safety engineering function, we build safety guardrails, evaluation pipelines, and alignment techniques that enable safe innovation at scale. Our work is foundational to the company’s AI strategy and influences standards across the industry. We partner closely with Legal, Compliance, AI Infrastructure, and Product teams to embed safety into every stage of the AI lifecycle.

Responsibilities

  • Drive GenAI Safety Strategy: Serve as the senior technical leader shaping the company’s generative AI safety direction. Define the roadmap for safety alignment research, model evaluation, and system‑level protections.
  • Lead AI Safety Research & Innovation: Guide LinkedIn’s research agenda in alignment, robustness, and responsible model behaviors. Stay ahead of academic and industry advances, rapidly translating insights into practical, production‑ready solutions.
  • Design Safety‑First Foundations: Provide architectural leadership for scalable safety systems (benchmarking, red‑teaming, content safety, privacy‑preserving training, and real‑time guardrails), ensuring they are reliable, performant, and deeply integrated into AI infrastructure.
  • Deliver High‑Impact Solutions in Ambiguous Spaces: Tackle LinkedIn’s toughest ethical, regulatory, and risk‑driven problems. Bring clarity and direction in areas with evolving standards, ensuring the company ships safe GenAI experiences at speed.
  • Liaison With Product Engineering: Partner closely with product engineering teams to stay current on emerging experiments, venture bets, and product innovations, ensuring safety research and tooling anticipate and support the next wave of product development.
  • Cross‑Functional Leadership: Collaborate with Legal, Compliance, Privacy, Infra, and Policy teams to operationalize safety requirements, translate regulatory guidance into technical specifications, and ensure end-to-end alignment across disciplines.
  • Technical Mentorship: Mentor and grow a team of ~15 engineers across research, ML, and systems. Elevate engineering rigor, drive high-bar execution, and nurture future technical leaders in AI safety.
  • Company‑Wide Impact: Ensure safety techniques, tools, and evaluations are deployed across all GenAI products, safeguarding member trust while enabling safe, scalable innovation.

Requirements

  • 2+ years as a Technical Lead, Staff Engineer, Principal Engineer, or equivalent.
  • 5+ years of industry experience in AI or Machine Learning Engineering.
  • BA/BS degree in Computer Science or a related technical discipline, or equivalent practical experience.

Nice-to-haves

  • 10+ years of industry and/or research experience in AI/ML delivering impact at scale.
  • PhD in CS/AI/ML or related field (or equivalent research/industry achievements).
  • Expert understanding of Transformers; hands-on experience training, fine‑tuning, distilling/compressing, and deploying LLMs in production.
  • Track record applying LLMs to recommender systems and language agents.
  • Demonstrated leadership in red‑teaming (manual + automated), safety benchmarking/evaluations, content safety/guardrails, prompt‑injection/jailbreak detection, and abuse/misuse prevention.
  • Experience translating Legal/Compliance requirements (e.g., EU AI Act) into technical controls, including harm taxonomies, model cards, and risk assessments.
  • Proven ability to design safety‑first architectures (evaluation pipelines, moderation services, policy engines, incident response & telemetry) for distributed, real‑time ML systems.
  • Strong understanding of RL (e.g., RLHF/RLAIF, offline/online RL) for language‑based agents, including safety‑aware reward design and feedback loops.
  • Advanced Python and PyTorch; familiarity with TensorFlow.
  • Experience with safety evaluation tooling (e.g., platforms akin to LLUME) and safety datasets/benchmarks.
  • Significant contributions via top‑tier publications (NeurIPS, ICLR, ICML, ACL) and/or impactful open‑source or widely used safety tooling.
  • Proven technical leadership mentoring ~15 engineers, setting direction, and elevating execution quality.
  • Effective liaison with Product Engineering (tracking experiments and venture bets, and aligning safety research to them) and strong collaboration with Legal, Compliance, AI Infra, and Policy.
  • Experience with advanced reasoning/planning (e.g., CoT/ToT, self‑reflection, program synthesis, symbolic/neuro‑symbolic methods, search‑augmented reasoning, verification‑aware decoding).

Benefits

  • We strongly believe in the well-being of our employees and their families. That is why we offer generous health and wellness programs and time away for employees of all levels.
  • LinkedIn is committed to fair and equitable compensation practices.
  • The pay range for this role is $191,000 - $315,000.
  • Actual compensation packages are based on a wide array of factors unique to each candidate, including but not limited to skill set, years & depth of experience, certifications and specific office location.
  • This may differ in other locations due to cost of labor considerations.
  • The total compensation package for this position may also include annual performance bonus, stock, benefits and/or other applicable incentive compensation plans.
  • For additional information, visit: https://careers.linkedin.com/benefits.

Job Type

Full Time

Location

United States
