Machine Learning Engineer, Core Evaluations
Design model evaluation pipelines for models in both development and production environments. Design user studies for subjective model evaluations. Convert product requirements into measurable metrics. Design and develop automated evaluation dashboards to monitor and compare model performance. Train new models to capture various evaluation metrics. Communicate with the model team to help design improved models based on evaluation results. Coordinate with the data team to determine the data needed to enhance model performance. Collaborate with the product manager to ensure product requirements are accurately measured. As the founding member, help grow the evaluation team and lead it in the future.
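Converting a requirement into a measurable metric usually means defining an aggregation over per-example scores that a dashboard can track. A minimal sketch of such a pipeline stage, with all names hypothetical and a stand-in scoring schema:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class EvalResult:
    """One scored example from an evaluation run (hypothetical schema)."""
    example_id: str
    score: float  # 1.0 = meets the requirement, 0.0 = fails

def summarize(results: list[EvalResult], threshold: float = 0.9) -> dict:
    """Aggregate per-example scores into a dashboard-ready summary.

    `threshold` is the product requirement expressed as a measurable
    target, e.g. "at least 90% of responses must pass".
    """
    pass_rate = mean(r.score for r in results)
    return {
        "n": len(results),
        "pass_rate": pass_rate,
        "meets_requirement": pass_rate >= threshold,
    }

# Example: score a development run against the same metric a
# production dashboard would track.
dev = [EvalResult("a", 1.0), EvalResult("b", 1.0), EvalResult("c", 0.0)]
summary = summarize(dev, threshold=0.9)
```

The same `summarize` call applied to development and production runs gives directly comparable numbers for side-by-side dashboard panels.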
Member of Engineering (Reinforcement Learning Infrastructure)
Keep up with the latest research and stay familiar with the state of the art in LLMs, RL, and code generation. Develop methods for tuning training and inference end-to-end for high throughput. Design data control systems in an RL pipeline that govern what the model sees and when. Debug cases where infrastructure decisions are silently degrading learning dynamics. Build observability tooling that surfaces when a system-level issue is the root cause of a training regression. Help build robust, flexible, and scalable RL pipelines. Optimize performance across the stack: networking, memory, compute scheduling, and I/O. Write high-quality, pragmatic code. Work closely with the team: plan next steps, discuss openly, and stay in constant communication.
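A data control system that "governs what the model sees and when" can be as simple as a gating sampler in front of the rollout buffer. A minimal sketch, assuming a difficulty-based curriculum (all names and the linear warmup schedule are hypothetical):

```python
class CurriculumGate:
    """Gates which rollouts the learner sees, widening over training.

    Early in training only low-difficulty examples pass; the difficulty
    ceiling rises linearly with the number of learner steps taken.
    """

    def __init__(self, max_difficulty: float = 1.0, warmup_steps: int = 1000):
        self.max_difficulty = max_difficulty
        self.warmup_steps = warmup_steps
        self.step = 0

    def ceiling(self) -> float:
        """Current difficulty ceiling, from 0 up to max_difficulty."""
        frac = min(1.0, self.step / self.warmup_steps)
        return frac * self.max_difficulty

    def admit(self, difficulty: float) -> bool:
        """Decide whether the learner sees this example right now."""
        return difficulty <= self.ceiling()

    def on_learner_step(self) -> None:
        self.step += 1

# After 50 of 100 warmup steps, the ceiling sits at 0.5: easy examples
# pass, hard ones are held back.
gate = CurriculumGate(warmup_steps=100)
for _ in range(50):
    gate.on_learner_step()
```

The same gate shape works for filtering by staleness, length, or reward variance instead of difficulty.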
Research Infrastructure Engineer, Training Systems
Build and maintain infrastructure for large-scale model training and experimentation. Design APIs and interfaces to simplify complex training workflows and prevent misuse. Improve reliability, debuggability, and performance of training and data pipelines. Debug issues across technologies including Python, PyTorch, distributed systems, GPUs, networking, and storage. Write tests, benchmarks, and diagnostics to detect significant regressions.
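One common shape for "diagnostics to detect significant regressions" is a benchmark guard that compares a fresh measurement against a stored baseline with a relative tolerance. A minimal sketch, not tied to any particular benchmarking framework:

```python
def is_regression(baseline: float, current: float,
                  rel_tolerance: float = 0.05,
                  higher_is_better: bool = True) -> bool:
    """Flag a significant regression against a stored baseline.

    Movement within `rel_tolerance` of the baseline is treated as run-to-run
    noise; only changes past the tolerated band count as regressions.
    """
    if higher_is_better:
        return current < baseline * (1.0 - rel_tolerance)
    return current > baseline * (1.0 + rel_tolerance)

# Throughput (higher is better): a 10% drop past a 5% band is flagged.
flagged = is_regression(baseline=1000.0, current=900.0)

# Latency (lower is better): a 3% rise within the band is not.
ok = is_regression(baseline=120.0, current=123.6, higher_is_better=False)
```

In practice the tolerance is tuned per benchmark from observed run-to-run variance, so the guard stays quiet on noise but fires on real drops.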
Engineering Manager, AI & Data Infrastructure
The Engineering Manager, AI & Data Infrastructure leads the AI & Data Infrastructure team responsible for the data and inference systems that support agent interactions, including streaming and batch pipelines for analytics and customer telemetry, realtime databases for low-latency behavior, and GPU and model-serving platforms for LLM inference. This role involves building, leading, and developing a high-performing team of data and ML infrastructure engineers through hiring, coaching, and performance management. Responsibilities include owning the technical strategy and roadmap for AI & Data Infrastructure, staying hands-on with design and code reviews, leading architecture for high-throughput data systems and low-latency inference, setting reliability, quality, and cost standards, investing in developer and analyst experience, raising standards on AI-assisted engineering practices, and partnering with Research, Product Engineering, Platform, and customer-facing teams to deliver data and inference capabilities, including enterprise deployments.
Machine Learning Engineer, API Multicloud
The role involves partnering with strategic customers and internal teams to define target model behaviors, diagnose failure modes, and translate real-world needs into training, evaluation, and system requirements. The engineer will build and scale production machine learning systems for model customization, post-training, and fine-tuning-as-a-service workflows. Responsibilities include investigating whether training and customization workflows produce the intended outcomes and identifying necessary changes to data, evaluation, training, or infrastructure to improve performance. The engineer will collaborate with backend and infrastructure engineers to integrate ML capabilities into AWS-native API environments and feed learnings from partner deployments back into the platform by proposing and implementing improvements to post-training systems, tooling, APIs, and developer workflows. The role requires close work with Research and Applied teams to bring model improvements, training workflows, and evaluation best practices into production. Designing systems that allow strategic partners and enterprise customers to safely customize OpenAI models for high-value use cases is also a key responsibility. Additionally, the role involves debugging and improving complex systems spanning model behavior, training data, APIs, distributed infrastructure, and customer-facing product surfaces. The engineer must operate with high ownership in a 0 to 1 environment where requirements are ambiguous, systems are evolving quickly, and reliability matters.
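Investigating whether customization workflows "produce the intended outcomes" often reduces to evaluating a base and a fine-tuned checkpoint on the same held-out set and checking the delta against a minimum gain. A minimal sketch with toy stand-ins for real model endpoints (all names hypothetical):

```python
def eval_accuracy(predict, dataset) -> float:
    """Fraction of held-out examples the model answers correctly."""
    correct = sum(1 for prompt, expected in dataset if predict(prompt) == expected)
    return correct / len(dataset)

def customization_helped(base_predict, tuned_predict, dataset,
                         min_gain: float = 0.02) -> bool:
    """True if fine-tuning improved held-out accuracy by at least `min_gain`."""
    base = eval_accuracy(base_predict, dataset)
    tuned = eval_accuracy(tuned_predict, dataset)
    return tuned - base >= min_gain

# Toy stand-ins: the base model knows one answer, the tuned model all four.
dataset = [("2+2", "4"), ("3+3", "6"), ("5+5", "10"), ("7+7", "14")]
base = lambda p: {"2+2": "4"}.get(p, "?")
tuned = lambda p: {"2+2": "4", "3+3": "6", "5+5": "10", "7+7": "14"}[p]
helped = customization_helped(base, tuned, dataset)
```

When the check fails, the same harness localizes whether the fix belongs in data, evaluation, training, or infrastructure by swapping one component at a time.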
Member of Technical Staff - ML Performance
The role involves engineering work focused on making machine learning systems performant at scale. This includes contributing to open-source projects and enhancing Modal's container runtime to improve the throughput and reduce the latency of language and diffusion models.
Finance Analytics Engineer
Advance inference efficiency end-to-end by designing and prototyping algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference. Implement and maintain changes in high-performance inference engines such as SGLang- or vLLM-style systems and Together's inference stack, including kernel backends, speculative decoding like ATLAS, and quantization. Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost. Design and operate RL and post-training pipelines with methods such as RLHF, RLAIF, GRPO, DPO-style methods, and reward modeling, optimizing these workloads with inference-aware training loops. Use these pipelines to train, evaluate, and iterate on frontier models on top of the inference stack. Co-design algorithms and infrastructure to tightly couple objectives, rollout collection, and evaluation with efficient inference, identifying bottlenecks across training engines, inference engines, data pipelines, and user-facing layers. Run ablations and scale-up experiments to understand trade-offs among model quality, latency, throughput, and cost and feed insights into the design process. Profile, debug, and optimize inference and post-training services under production workloads. Drive roadmap items requiring engine modification, including changing kernels, memory layouts, scheduling logic, and APIs. Establish metrics, benchmarks, and experimentation frameworks for rigorous validation of improvements. Set technical direction for cross-team efforts at the intersection of inference, RL, and post-training. Mentor other engineers and researchers on full-stack ML systems work and performance engineering.
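Speculative decoding, one of the engine-level techniques named above, has a cheap draft model propose several tokens that the target model verifies, keeping only the longest agreeing prefix. The sketch below is a generic greedy variant for illustration, not ATLAS or any engine's actual implementation:

```python
def speculative_step(draft_next, target_next, prefix, k=4):
    """One greedy speculative-decoding step.

    `draft_next` / `target_next` map a token prefix to the next token.
    The draft proposes `k` tokens; the target verifies them in order,
    accepts the longest prefix it agrees with, then appends its own
    token at the first disagreement (or after full acceptance).
    """
    # Draft phase: propose k tokens autoregressively with the cheap model.
    proposal, p = [], list(prefix)
    for _ in range(k):
        t = draft_next(tuple(p))
        proposal.append(t)
        p.append(t)

    # Verify phase: the target checks the proposal token by token.
    accepted = list(prefix)
    for tok in proposal:
        if target_next(tuple(accepted)) == tok:
            accepted.append(tok)
        else:
            break
    # The target always contributes one token of real progress.
    accepted.append(target_next(tuple(accepted)))
    return accepted

# Toy models: the draft agrees with the target on the first two tokens,
# then diverges, so one step yields three accepted tokens.
target = {(): "a", ("a",): "b", ("a", "b"): "c"}.get
draft = {(): "a", ("a",): "b", ("a", "b"): "x"}.get
out = speculative_step(draft, target, prefix=(), k=3)
```

The speedup comes from the verify phase: in a real engine the target scores all `k` proposed tokens in a single batched forward pass instead of `k` sequential ones.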
Staff Analytics Engineer — Data Warehouse
Advance inference efficiency end-to-end by designing and prototyping algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference. Implement and maintain changes in high-performance inference engines, including kernel backends, speculative decoding, and quantization. Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost. Design and operate RL and post-training pipelines where most cost is inference, jointly optimizing algorithms and systems. Make RL and post-training workloads more efficient with inference-aware training loops, async RL rollouts, and speculative decoding. Use these pipelines to train, evaluate, and iterate on frontier models. Co-design algorithms and infrastructure to tightly couple objectives, rollout collection, and evaluation with efficient inference, identifying bottlenecks across the training engine, inference engine, data pipeline, and user-facing layers. Run ablations and scale-up experiments to understand trade-offs between model quality, latency, throughput, and cost, feeding insights into model, RL, and system design. Profile, debug, and optimize inference and post-training services under real production workloads. Drive roadmap items requiring engine modification such as changing kernels, memory layouts, scheduling logic, and APIs. Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously. Provide technical leadership to set direction for cross-team efforts in inference, RL, and post-training and mentor engineers and researchers on full-stack ML systems work and performance engineering.
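The "async RL rollouts" mentioned above decouple rollout collection from learner updates so that inference never idles waiting on the optimizer. A minimal, framework-agnostic sketch of the pattern using asyncio and a bounded queue for back-pressure (all names hypothetical):

```python
import asyncio

async def rollout_worker(queue: asyncio.Queue, n_rollouts: int) -> None:
    """Produce rollouts (stand-in integers) without blocking on the learner."""
    for i in range(n_rollouts):
        await asyncio.sleep(0)   # stand-in for model inference
        await queue.put(i)       # bounded queue applies back-pressure
    await queue.put(None)        # sentinel: no more rollouts

async def learner(queue: asyncio.Queue) -> int:
    """Consume rollouts as they arrive; return how many updates were applied."""
    updates = 0
    while (item := await queue.get()) is not None:
        updates += 1             # stand-in for a gradient step
    return updates

async def main(n_rollouts: int = 8, queue_size: int = 2) -> int:
    queue = asyncio.Queue(maxsize=queue_size)
    _, updates = await asyncio.gather(
        rollout_worker(queue, n_rollouts),
        learner(queue),
    )
    return updates

updates = asyncio.run(main())
```

The bounded queue size is the key knob: it caps how stale a rollout can be relative to the learner's current weights while still keeping both sides busy.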
Sr. Partnerships Manager, Model Ecosystem
Advance inference efficiency end-to-end by designing and prototyping algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference; implementing and maintaining changes in high-performance inference engines such as SGLang- or vLLM-style systems and Together’s inference stack, including kernel backends, speculative decoding, and quantization; profiling and optimizing performance across GPU, networking, and memory layers to improve latency, throughput, and cost. Unify inference with RL/post-training by designing and operating RL and post-training pipelines such as RLHF, RLAIF, GRPO, and DPO-style methods where most cost is inference, jointly optimizing algorithms and systems; making workloads more efficient with inference-aware training loops, async RL rollouts, and speculative decoding; training, evaluating, and iterating on frontier models; co-designing algorithms and infrastructure to tightly couple objectives, rollout collection, and evaluation to efficient inference; running ablations and scale-up experiments to understand trade-offs and feed insights into model, RL, and system design. Own critical systems at production scale by profiling, debugging, and optimizing inference and post-training services; driving roadmap items involving engine modifications like changing kernels, memory layouts, scheduling logic, and APIs; establishing metrics, benchmarks, and experimentation frameworks to validate improvements rigorously. Provide technical leadership by setting technical direction for cross-team efforts at the intersection of inference, RL, and post-training; mentoring engineers and researchers on full-stack ML systems work and performance engineering.
Lead/Manager Site Reliability Engineering Team (Amsterdam)
Advance inference efficiency end-to-end by designing and prototyping algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference. Implement and maintain changes in high-performance inference engines such as SGLang- or vLLM-style systems and Together's inference stack, including kernel backends, speculative decoding methods like ATLAS, and quantization. Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost. Unify inference with RL/post-training by designing and operating RL and post-training pipelines where inference constitutes the majority of the cost, optimizing algorithms and systems jointly. Enhance RL and post-training workloads with inference-aware training loops, including asynchronous RL rollouts and speculative decoding techniques, making large-scale rollout collection and evaluation more efficient. Use these pipelines to train, evaluate, and iterate on cutting-edge models based on the inference stack. Co-design algorithms and infrastructure to tightly couple objectives, rollout collection, and evaluation to efficient inference, and identify bottlenecks across training engines, inference engines, data pipelines, and user-facing layers quickly. Run ablation and scale-up experiments to analyze trade-offs between model quality, latency, throughput, and cost, feeding insights back into model, RL, and system design. Own critical production-scale systems by profiling, debugging, and optimizing inference and post-training services under real production workloads. Lead roadmap initiatives necessitating engine modifications such as changes to kernels, memory layouts, scheduling logic, and APIs. Establish metrics, benchmarks, and experimentation frameworks to rigorously validate improvements. 
Provide technical leadership by setting direction for cross-team efforts at the intersection of inference, RL, and post-training and mentor engineers and researchers on full-stack ML systems work and performance engineering.
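Quantization, another engine-level lever named in the description above, trades precision for memory and bandwidth. A minimal sketch of symmetric per-tensor int8 quantization, illustrative only and not any particular engine's scheme (assumes at least one nonzero value):

```python
def quantize_int8(values: list[float]) -> tuple[list[int], float]:
    """Symmetric per-tensor int8 quantization: x ~= scale * q, q in [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float values from int8 codes and the scale."""
    return [scale * x for x in q]

weights = [0.5, -1.27, 0.0, 1.27]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Real engines refine this basic scheme with per-channel or per-group scales and calibration over activation statistics, but the quality/cost trade-off they navigate is the same rounding error shown here.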
