QA Engineer (Agents)
Design and implement test plans for agent infrastructure, LLM-based APIs, and end-to-end user journeys. Build and maintain automated test suites for backend, frontend, and integration layers, including prompt and response validation for generative models. Develop tools and frameworks to accelerate testing and catch regressions early, especially in agent reasoning, tool use, and context handling. Collaborate closely with engineers to embed quality into every stage of development, focusing on the unique challenges of AI/LLM systems such as non-determinism, hallucinations, and safety. Lead root cause analysis and drive resolution for critical issues and incidents, including those arising from model updates or agent behaviors. Advocate for best practices in code quality, observability, and CI/CD pipelines, ensuring quality signals are actionable and visible.
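The "prompt and response validation" and "non-determinism" responsibilities above can be sketched as property-based checks: because generative outputs vary run to run, tests assert structural and range properties of a response rather than exact strings. This is a minimal illustration, not any company's actual framework; `call_agent` is a hypothetical stub standing in for a real agent API.

```python
# Minimal sketch: property-based validation of a non-deterministic LLM response.
# `call_agent` is a hypothetical stand-in for a real agent/LLM API call.

import json

def call_agent(prompt: str) -> str:
    # Stub returning a canned JSON tool-call; a real call would be non-deterministic.
    return json.dumps({"tool": "search", "args": {"query": prompt}, "confidence": 0.9})

def validate_response(raw: str) -> list:
    """Check structural properties instead of exact strings, so the test
    passes across varying generations but still catches malformed output."""
    errors = []
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return ["response is not valid JSON"]
    if data.get("tool") not in {"search", "calculator", "none"}:
        errors.append("unknown tool name")
    if not isinstance(data.get("args"), dict):
        errors.append("args must be an object")
    if not (0.0 <= data.get("confidence", -1.0) <= 1.0):
        errors.append("confidence out of range")
    return errors

errors = validate_response(call_agent("weather in SF"))
assert errors == [], errors
```

A suite like this slots into CI so a model or prompt update that breaks the agent's output contract fails fast, even though individual wordings differ between runs.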
Systems Architect - Active Safety
Design, deploy, and maintain Figure's training clusters. Architect and maintain scalable deep learning frameworks for training on massive robot datasets. Work with AI researchers to train new model architectures at large scale. Implement distributed training and parallelization strategies to shorten model development cycles. Build tooling for data processing, model experimentation, and continuous integration.
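The distributed-training strategy named above (data parallelism) can be illustrated in a few lines: each worker computes gradients on its own data shard, the gradients are averaged (an all-reduce), and every worker applies the same update. This is a pure-Python toy, assuming a trivial linear model; production systems would use NCCL, Horovod, or `torch.distributed` instead.

```python
# Toy sketch of data-parallel training: shard data, compute local gradients,
# all-reduce (average) them, apply one shared update. Illustrative only.

def shard(data, num_workers):
    """Split a dataset across workers; the last worker takes the remainder."""
    n = len(data) // num_workers
    return [data[i * n:(i + 1) * n] if i < num_workers - 1 else data[i * n:]
            for i in range(num_workers)]

def local_gradient(w, batch):
    # Toy model y = w*x with squared loss; gradient of the mean loss w.r.t. w.
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def all_reduce_mean(grads):
    # Stand-in for an all-reduce collective across workers.
    return sum(grads) / len(grads)

def train_step(w, data, num_workers, lr=0.01):
    grads = [local_gradient(w, s) for s in shard(data, num_workers)]
    return w - lr * all_reduce_mean(grads)

# Two "workers" jointly fit w toward 3 on y = 3x data.
data = [(x, 3.0 * x) for x in range(1, 9)]
w = 0.0
for _ in range(200):
    w = train_step(w, data, num_workers=2)
```

The point of the averaging step is that every worker ends each iteration with identical weights, so adding workers scales throughput without changing the model the training converges to.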
Systems Integration Engineer – Head Subsystem
Deployment Engineer, AI Inference
You will build and operate large-scale AI inference clusters using the Wafer-Scale Engine and lead the rollout, updates, and capacity reallocations across custom-built datacenters. The role also involves developing and enhancing telemetry, observability, and automated deployment pipelines to ensure robust and scalable AI infrastructure operations.
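The rollout responsibility described above typically means gated rolling updates: replicas are updated one at a time and the rollout halts the moment a replica fails its health probe, so a bad build never takes down the whole fleet. A minimal sketch follows; `healthy` and the replica dicts are hypothetical stand-ins for a real cluster API and `/healthz` probe.

```python
# Minimal sketch of a health-gated rolling update across inference replicas.
# `healthy` is a hypothetical stand-in for a real health probe.

def healthy(replica: dict) -> bool:
    # Stand-in probe; a real pipeline would hit the replica's health endpoint.
    return replica["status"] == "ok"

def rolling_update(replicas: list, new_version: str) -> list:
    """Update replicas one at a time; if an updated replica is unhealthy,
    halt so the remaining replicas keep serving on the old version."""
    updated = []
    for i, r in enumerate(replicas):
        candidate = {**r, "version": new_version}
        if not healthy(candidate):
            # Halt the rollout: this replica and the rest stay on the old version.
            return updated + replicas[i:]
        updated.append(candidate)
    return updated
```

Observability ties in here: the "health" signal gating each step is exactly the telemetry the role asks you to build, and the same gate pattern applies to capacity reallocations.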
Deployment Engineer, AI Inference
The Deployment Engineer will build and operate advanced AI inference clusters using Cerebras' unique hardware and infrastructure. They are responsible for deploying inference replicas, managing software rollouts and capacity allocation, and automating deployment and observability pipelines.
