ML Infrastructure Engineer Jobs

Discover the latest remote and onsite ML Infrastructure Engineer roles across top active AI companies. Updated hourly.

Check out 20 new ML Infrastructure Engineer opportunities posted on The Homebase

Member of Technical Staff - ML Engineering

New
Top rated
Talent Labs
Full-time
Posted

Deploy, maintain, and optimize production and research compute clusters. Design and implement scalable and efficient ML inference solutions. Develop dynamic and heterogeneous compute solutions for balancing research and production needs. Contribute to productizing model APIs for external use. Develop infrastructure observability and monitoring solutions.

Undisclosed

London, United Kingdom
Maybe global
Remote

Research Engineer, Machine Learning Systems

New
Top rated
Deepgram
Full-time
Posted

The responsibilities include architecting and managing horizontally scalable systems to accelerate the end-to-end training lifecycle for Speech-to-Text (STT) and Text-to-Speech (TTS) models, focusing on optimized data preparation, high-throughput training pipelines, distributed infrastructure, and automated evaluation tooling. The role also involves designing and implementing internal UIs and tools to make ML systems and workflows accessible and transparent to non-technical stakeholders. Additionally, the position requires overseeing and managing training tooling, job orchestration, experiment tracking, and data storage.

$150,000 – $250,000 per year (USD)

United States
Maybe global
Remote

Member of Technical Staff - Research Software Engineer

New
Top rated
Reflection
Full-time
Posted

The role involves bridging the gap between research and production by transforming cutting-edge algorithms into scalable training systems. Responsibilities include designing and optimizing large-scale training loops and data pipelines, implementing state-of-the-art techniques ensuring numerical stability and computational efficiency, building internal tooling for launching, monitoring, and reproducing complex experiments, diagnosing deep bottlenecks across the training stack such as GPU memory issues, communication overhead, and dataloader stalls, and translating research prototypes into reusable, production-grade infrastructure. The engineer will architect and optimize the core training infrastructure including RL training loops, distributed GPU systems, and large-scale data pipelines, working closely with researchers to build reliable, scalable systems.

Undisclosed

New York City, United States
Maybe global
Onsite

Senior Engineering Manager, ML Platform

New
Top rated
Zoox
Full-time
Posted

The Senior Engineering Manager, ML Platform at Zoox is responsible for developing and executing a strategic vision for the ML training platform to ensure scalability, reliability, and performance for large-scale Foundation and RL models. They lead the design, implementation, and operation of a robust and efficient ML training platform supporting training, experimentation, validation, and monitoring of ML models. They attract, hire, and inspire a diverse world-class engineering team, fostering a culture of innovation, collaboration, and excellence. The role involves close collaboration with cross-functional teams including ML researchers, software engineers, data engineers, and hardware engineers to define requirements and align architectural decisions. The manager also mentors engineers, providing opportunities for career growth through clear and timely feedback.

$317,000 – $370,000 per year (USD)

Foster City, United States
Maybe global
Onsite

Software Development in Test Intern

New
Top rated
Together AI
Full-time
Posted

Advance inference efficiency end-to-end by designing and prototyping algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference. Implement and maintain changes in high-performance inference engines, including kernel backends, speculative decoding, and quantization. Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost.

Design and operate RL and post-training pipelines, jointly optimizing algorithms and systems, and making RL and post-training workloads more efficient with inference-aware training loops. Use these pipelines to train, evaluate, and iterate on frontier models on top of the inference stack. Co-design algorithms and infrastructure so that objectives, rollout collection, and evaluation are tightly coupled to efficient inference, identifying bottlenecks across layers. Run ablations and scale-up experiments to understand trade-offs between model quality, latency, throughput, and cost, feeding insights back into model, RL, and system design.

Profile, debug, and optimize inference and post-training services under real production workloads. Drive roadmap items requiring engine modification, including changing kernels, memory layouts, scheduling logic, and APIs. Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously. Provide technical leadership by setting technical direction for cross-team efforts and mentoring engineers and researchers on full-stack ML systems work and performance engineering.
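Speculative decoding, named in the responsibilities above, pairs a cheap draft model with the target model: drafted tokens are kept with probability min(1, p_target / p_draft). As an illustration for candidates, here is a toy pure-Python sketch of that accept/reject loop; the probability tables and function names are illustrative stand-ins, not any production engine's API.

```python
import random

def accept_draft_token(p_draft: float, p_target: float, rng: random.Random) -> bool:
    """Speculative-decoding acceptance rule: keep the drafted token
    with probability min(1, p_target / p_draft)."""
    return rng.random() < min(1.0, p_target / p_draft)

def speculative_decode(draft_tokens, p_draft, p_target, rng):
    """Walk drafted tokens left to right; stop at the first rejection.
    Returns the prefix of accepted tokens."""
    accepted = []
    for tok in draft_tokens:
        if accept_draft_token(p_draft[tok], p_target[tok], rng):
            accepted.append(tok)
        else:
            break
    return accepted

rng = random.Random(0)
p_draft = {"a": 0.5, "b": 0.5}   # draft model's next-token probabilities
p_target = {"a": 0.9, "b": 0.1}  # target model's next-token probabilities
# "a" is always accepted (ratio >= 1); "b" only about 20% of the time.
print(speculative_decode(["a", "a", "b"], p_draft, p_target, rng))
```

In a real engine the draft model proposes several tokens per target-model forward pass, so the speed-up depends on how often the two models agree.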

$200,000 – $280,000 per year (USD)

San Francisco
Maybe global
Onsite

Global Hardware Sourcing & Supply Manager

New
Top rated
Together AI
Full-time
Posted

The responsibilities for the Global Hardware Sourcing & Supply Manager role include advancing inference efficiency end-to-end by designing and prototyping algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference. The role involves implementing and maintaining changes in high-performance inference engines, and profiling and optimizing performance across GPU, networking, and memory layers to improve latency, throughput, and cost.

It also requires unifying inference with RL/post-training by designing and operating RL and post-training pipelines and making these workloads more efficient with inference-aware training loops. The role includes training, evaluating, and iterating on frontier models using these pipelines, co-designing algorithms and infrastructure to tightly couple objectives, rollout collection, and evaluation with efficient inference, and quickly identifying bottlenecks across components.

Running ablations and scale-up experiments to understand trade-offs between model quality, latency, throughput, and cost, and owning critical production-scale systems by profiling, debugging, and optimizing inference and post-training services are also key responsibilities. Finally, the role involves driving roadmap items that require engine modifications, establishing metrics, benchmarks, and experimentation frameworks, and providing technical leadership by setting technical direction for cross-team efforts and mentoring engineers and researchers on full-stack ML systems and performance engineering.

$200,000 – $280,000 per year (USD)

San Francisco
Maybe global
Onsite

Senior Staff Software Engineer, Model LifeCycle

New
Top rated
Crusoe
Full-time
Posted

The Senior Staff Engineer for the Model LifeCycle team at Crusoe is responsible for building a comprehensive managed platform for the entire application development lifecycle, with a focus on Machine Learning models including Large Language Models (LLMs). Responsibilities include managing fine-tuning systems for large foundation models (SFT, PEFT, LoRA, and adapter-based methods) with multi-node orchestration, checkpointing, failure recovery, and cost-efficient scaling. They implement and maintain end-to-end training pipelines for LLMs, distillation and reinforcement learning pipelines including preference optimization, policy optimization, and reward modeling, and manage agent execution infrastructure. They also manage dataset, model, and experiment management tasks including versioning, lineage, evaluation, and reproducible fine-tuning at scale. Additionally, they work closely with product, business, and platform teams to shape core abstractions and APIs, influence architectural decisions around training runtimes, scheduling, storage, and model lifecycle management, contribute to and engage with the open-source LLM ecosystem, and take ownership in designing and building core systems from first principles.
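LoRA, named in the listing above, freezes the base weight matrix W and learns a low-rank update: the effective weight is W + (alpha / r) * (A @ B), where A is d×r and B is r×d for a small rank r. A dependency-free toy sketch of that idea follows; real systems use PyTorch/PEFT, and the helper names here are illustrative.

```python
def matmul(X, Y):
    """Plain-Python matrix multiply for small nested lists."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_effective_weight(W, A, B, alpha: float, r: int):
    """Return W + (alpha / r) * (A @ B), the merged LoRA weight."""
    delta = matmul(A, B)          # low-rank d x d update
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]      # frozen base weight, d = 2
A = [[1.0], [0.0]]                # d x r with rank r = 1
B = [[0.0, 2.0]]                  # r x d
print(lora_effective_weight(W, A, B, alpha=1.0, r=1))
```

The attraction for platform teams is that only A and B (2·d·r parameters instead of d²) need to be trained, stored, and versioned per fine-tune, while the base model is shared.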

$237,600 – $288,000 per year (USD)

San Francisco, United States
Maybe global
Onsite

Staff Software Engineer, Model LifeCycle

New
Top rated
Crusoe
Full-time
Posted

The Staff Software Engineer for the Model LifeCycle team is responsible for building a comprehensive managed platform for the application development lifecycle with a focus on Machine Learning models, including Large Language Models (LLMs). Responsibilities include contributing to fine-tuning systems for large foundation models, implementing and maintaining end-to-end training pipelines for Large Language Models, contributing to distillation and reinforcement learning pipelines, developing and maintaining agent execution infrastructure, and implementing features for dataset, model, and experiment management such as versioning, lineage, evaluation, and reproducible fine-tuning at scale. The role also involves working closely with Principal Engineers, product, business, and platform teams to implement core abstractions and APIs, contributing to architectural decisions around training runtimes, scheduling, storage, and model lifecycle management, and engaging with the open-source LLM ecosystem. This position offers significant scope for ownership and contribution to the design of core systems.

$208,725 – $253,000 per year (USD)

San Francisco, United States
Maybe global
Onsite

Senior Performance Engineer - Pretraining

New
Top rated
AlephAlpha
Full-time
Posted

Engineer the systems required to train foundation models at scale to maximize hardware utilization and training throughput on large-scale GPU clusters. Profile training loops using PyTorch Profiler, Nsight Systems and Nsight Compute to identify system- and kernel-level bottlenecks and maximize model throughput. Configure and tune composite parallelism strategies such as tensor parallelism (TP), data parallelism (DP), hybrid sharded data parallel (HSDP/FSDP), and expert parallelism (EP) to optimize load balance, minimize critical-path bottlenecks, and manage communication-to-computation trade-offs for large-scale large language model (LLM) training. Collaborate with AI Researchers to define model architectures that enhance hardware efficiency without compromising convergence.
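The composite parallelism strategies described above have a basic feasibility constraint: the product of the tensor-, data-, and expert-parallel degrees must equal the number of GPU ranks, and enumerating valid splits is the first step before profiling each candidate for its communication-to-computation trade-off. A toy sketch of that check (function names are illustrative, not any framework's API):

```python
def validate_layout(world_size: int, tp: int, dp: int, ep: int = 1) -> bool:
    """A composite layout is feasible only if the product of the
    parallel degrees exactly equals the number of GPU ranks."""
    return tp * dp * ep == world_size

def candidate_layouts(world_size: int, max_tp: int = 8):
    """Enumerate (tp, dp) splits of a cluster -- the candidates one
    would then profile for load balance and communication overhead."""
    return [(tp, world_size // tp)
            for tp in range(1, max_tp + 1)
            if world_size % tp == 0]

print(candidate_layouts(16))   # (tp, dp) pairs for a 16-GPU cluster
```

In practice tensor parallelism is usually kept within a node (to stay on NVLink) while data and expert parallelism span nodes, which is why the profiling tools named above matter: the feasible layouts differ widely in throughput.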

Undisclosed

Heidelberg, Germany
Maybe global
Hybrid

System Software Engineer

New
Top rated
HP IQ
Full-time
Posted

As a modeling lead for the AI lab, you will be responsible for defining the technical roadmap for the team and supporting modeling needs across the organization. You will define and establish best practices for managing the model life cycle, from data acquisition to deployment, and build tools and platforms that make it easier to build and deploy ML models on different devices with specific constraints. You will work closely with teams across the organization to support their modeling needs, translating high-level user needs into specific modeling requirements, creating plans, and technically driving the team to execute on them.

Responsibilities also include defining and driving AI Lab technical strategy in support of HP's AI roadmap, owning decisions across models, runtimes, inference engines, and optimization. You will lead the device AI strategy, including model compression, quantization, distillation, and hardware-aware optimization across CPUs, GPUs, NPUs, and TPUs; architect and evolve tooling and platforms supporting the full model lifecycle from data and training through evaluation, deployment, and monitoring; and establish standards and evaluation frameworks to ensure high-quality, safe, and performant GenAI models in production. You will partner with cross-functional leaders and teams to align technical direction with product and hardware strategy, and mentor a small group of senior engineers while operating as a hands-on technical leader who sets direction and moves quickly.

$200,000 – $340,000 per year (USD)

San Francisco, United States
Maybe global
Onsite

Want to see more ML Infrastructure Engineer jobs?

View all jobs

Access all 4,256 remote & onsite AI jobs.

Join our private AI community to unlock full job access, and connect with founders, hiring managers, and top AI professionals.
(Yes, it’s still free—your best contributions are the price of admission.)

Frequently Asked Questions

Have questions about roles, locations, or requirements for ML Infrastructure Engineer jobs?


What does an ML Infrastructure Engineer do?

ML Infrastructure Engineers design, build, and maintain the systems that support machine learning workflows from development to production. They create scalable platforms for model training and serving, implement distributed training systems, and develop monitoring solutions to track model performance. They also build data pipelines, optimize ML systems for performance, and implement automated testing and deployment processes while collaborating with data scientists and researchers to productionize ML models.

What skills are required for an ML Infrastructure Engineer?

ML Infrastructure Engineers need strong programming skills in Python and often Go, Rust, or C++. Proficiency with ML frameworks such as PyTorch and TensorFlow is essential, alongside expertise in cloud platforms (AWS, GCP), containers (Docker), and orchestration (Kubernetes). They should understand distributed systems, data engineering concepts, and model serving techniques. Experience with infrastructure-as-code tools and monitoring systems rounds out the technical requirements, complemented by problem-solving and collaboration skills.

What qualifications are needed for an ML Infrastructure Engineer role?

Most positions require a Bachelor's or Master's degree in Computer Science or a related field, plus 4–5+ years of experience building production ML systems. Employers typically expect demonstrable experience with cloud platforms, containerization tools, and ML frameworks, along with a strong understanding of system-level software, machine learning concepts, and resource utilization. Experience with distributed systems and high-throughput workloads is highly valued, especially for senior positions.

What is the salary range for an ML Infrastructure Engineer job?

Compensation varies with location, company size, experience level, and specific technical expertise. The postings on this page that disclose pay range from roughly $150,000 to $370,000 per year (USD), with senior and management roles at the top of that band.

How long does it take to get hired as an ML Infrastructure Engineer?

Timelines vary by company. The process typically includes technical interviews focused on systems design, ML fundamentals, and programming skills. Given the specialized nature of the role, companies often evaluate candidates' experience with production ML systems, distributed computing, and relevant technologies in depth, which can extend the hiring process compared to more general engineering roles.

Are ML Infrastructure Engineer jobs in demand?

Yes, ML Infrastructure Engineer jobs show strong demand, with active openings at companies such as DataXight, Scale AI, Anthropic, Apple, and Character.AI. The field is growing particularly in specialized areas such as LLM serving infrastructure, on-device ML optimization, and safety-critical ML systems. Positions span major tech hubs and range from mid-level to senior roles, reflecting the industry's increasing need for engineers who can build reliable ML systems at scale.