A10 Networks

Senior ML Engineer - AI Safety & Evaluation (Finance)

We're building a future where AI systems are not only powerful but safe, aligned, and robust against misuse. Our team focuses on advancing practical safety techniques for large language models (LLMs) and multimodal systems, ensuring these models remain aligned with human intent and resist attempts to produce harmful, toxic, or policy-violating content.

We operate at the intersection of model development and real-world deployment, with a mission to build systems that can proactively detect and prevent jailbreaks, toxic behaviors, and other forms of misuse. Our work blends applied research, systems engineering, and evaluation design to ensure safety is built into our models at every layer.

Position Overview

As a Senior ML Engineer on the AI Safety & Evaluation team, you will contribute to the development, deployment, and monitoring of model-level safety components in production environments. This includes building APIs and infrastructure to integrate safety checks, running evaluations under adversarial scenarios, and deploying models and safety modules in scalable and production-ready environments.

You'll work closely with ML engineers, infrastructure teams, and product safety leads to ensure that our models are not only performant but also robust, auditable, and secure in real-world use cases.

What You'll Do

  • Develop scalable APIs
  • Build and deploy ML models and safety components in production environments
  • Develop filtering, evaluation, and post-processing pipelines to enforce safe model behavior
  • Work with containerized environments and orchestration tools to ensure reliable deployment
  • Collaborate with engineering, research, and infrastructure teams on end-to-end system design
  • Write clean, maintainable, and production-grade code focused on real-world safety use cases
  • Contribute to the ongoing improvement of model serving workflows and safety infrastructure

Nice to Have

  • Hands-on experience working with large language models in training, fine-tuning, or deployment settings
  • Familiarity with safety techniques such as RLHF, adversarial training, or safety-aligned model tuning
  • Prior work on prompt defense mechanisms, content filtering, or jailbreak prevention for LLMs
  • Contributions to AI safety research, open-source tools, or public benchmarks related to responsible AI

A10 Networks is an equal opportunity employer and a VEVRAA federal subcontractor. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law. A10 also complies with all applicable state and local laws governing nondiscrimination in employment. #LI-AN1

Compensation: up to $155K USD