Docker AI Jobs

Discover the latest remote and onsite Docker AI roles across top active AI companies. Updated hourly.

Check out 252 new Docker AI role opportunities posted on The Homebase

Senior Python Systems Developer - Functional Testing Project

New
Top rated
Mindrift
Part-time
Full-time
Posted

Create functional black-box tests for large codebases in various source languages. Create and manage Docker environments to ensure 100% reproducible builds and test execution across different platforms. Monitor code coverage and configure automated scoring criteria to meet industry benchmark-level standards. Leverage LLM tools such as Roo Code and Claude to accelerate development cycles, automate repetitive tasks, and improve overall code quality.
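The black-box testing responsibility above can be sketched in a few lines. This is a minimal illustration, not the company's actual harness: the "tool under test" here is a stand-in Python one-liner, and in a real setup the command would invoke the project's binary inside its pinned Docker image (e.g. via `docker run`) so every platform executes the identical environment.

```python
import subprocess
import sys

# Stand-in for the real system under test. In practice this list would be
# something like ["docker", "run", "--rm", IMAGE, "tool"] so the test runs
# in the same pinned environment everywhere (hypothetical image name).
TOOL = [sys.executable, "-c",
        "import sys; sys.stdout.write(sys.stdin.read().upper())"]

def run_tool(stdin_text: str) -> subprocess.CompletedProcess:
    """Black-box invocation: feed input, observe only output and exit code."""
    return subprocess.run(TOOL, input=stdin_text,
                          capture_output=True, text=True, timeout=30)

# Functional assertions target observable behavior, never internal state.
res = run_tool("hello")
assert res.returncode == 0      # clean exit
assert res.stdout == "HELLO"    # exact observable output

res = run_tool("")              # edge case: empty input
assert res.stdout == ""
```

Because the test only drives stdin/stdout and the exit code, the same assertions hold regardless of the source language the tool is written in.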

$50 / hour (USD)

Germany
Maybe global
Remote
Python
Docker
Linux
Go
C++

Software Engineer, Architecture, Reliability, & Compute

New
Top rated
Scale AI
Full-time
Posted

As a Production AI Ops Lead, you will design and manage the production lifecycle of full-stack AI applications, supporting end-to-end system reliability, real-time inference observability, sovereign data orchestration, high-security software integration, and resilient cloud infrastructure for international government partners. You will take full accountability for the long-term performance and reliability of AI use cases deployed across international government agencies, and oversee the end-to-end health of the platform, ensuring seamless integration between the AI core and full-stack components. You will build automated systems to monitor model performance and data drift across geographically dispersed environments, and manage the technical lifecycle within diverse regulatory frameworks. You will lead response to production issues in mission-critical environments, ensuring rapid resolution and prevention, and translate technical performance metrics into clear insights for senior international government officials. Finally, you will partner with Engineering and ML teams to ensure field lessons influence future technical architecture and decisions.

Undisclosed

San Francisco or St. Louis or New York or Washington, United States
Maybe global
Onsite
Python
Kubernetes
Docker
MLOps
CI/CD

Engineering Manager, Active Learning

New
Top rated
Deepgram
Full-time
Posted

The Engineering Manager role at Deepgram involves leading the design and implementation of internal data and ML training systems. Responsibilities include recruiting, hiring, training, and supporting top engineering talent to build a world-class team; transforming cross-functional visions into detailed project plans with clarity on commitments, risks, and timelines; defining and owning technical strategy to accelerate ML training pipelines; promoting a strong team engineering culture focused on rigorous engineering standards and continuous improvement; partnering with DataOps and Research teams to design and implement new services, features, or products end to end; and coaching and mentoring engineers to support personal growth while achieving ambitious team goals.

$180,000 – $220,000 / year (USD)

United States
Maybe global
Remote
Python
Docker
Kubernetes
AWS
MLflow

Research Engineer, Machine Learning Systems

New
Top rated
Deepgram
Full-time
Posted

The responsibilities include architecting and managing horizontally scalable systems to accelerate the end-to-end training lifecycle for Speech-to-Text (STT) and Text-to-Speech (TTS) models, focusing on optimized data preparation, high-throughput training pipelines, distributed infrastructure, and automated evaluation tooling. The role also involves designing and implementing internal UIs and tools to make ML systems and workflows accessible and transparent to non-technical stakeholders. Additionally, the position requires overseeing and managing training tooling, job orchestration, experiment tracking, and data storage.

$150,000 – $250,000 / year (USD)

United States
Maybe global
Remote
Python
Kubernetes
Docker
MLflow
Scikit-learn

Inference Technical Lead, On-Device Transformers

New
Top rated
OpenAI
Full-time
Posted

As a Technical Lead on the Future of Computing Research team, you will evaluate and select silicon platforms such as GPUs, NPUs, and specialized accelerators for on-device and edge deployment of OpenAI models. You will work closely with research teams to co-design model architectures that meet real-world deployment constraints including latency, memory, power, and bandwidth. You will analyze and model system performance, identifying tradeoffs between model design, memory hierarchy, compute throughput, and hardware capabilities. You will partner with hardware vendors and internal infrastructure teams to bring up new accelerators and ensure efficient execution of transformer workloads. Additionally, you will build and lead a team of engineers responsible for implementing the low-level inference stack, including kernel development and runtime systems, and develop nascent research capabilities into usable form.

$445,000 / year (USD)

San Francisco, United States
Maybe global
Hybrid
Python
C++
CUDA
MLOps
TensorFlow

Engineering Manager, Go - Assist & Chat

New
Top rated
Grammarly
Full-time
Posted

Own the observability and lifecycle management of AI features across the organization. Build tools and infrastructure to enable teams to develop, monitor, and optimize LLM-powered features. Design and implement closed-loop evaluation pipelines that automatically validate prompt changes. Develop comprehensive metrics and dashboards to track LLM usage including cost per feature, token patterns, and latency. Create systems that tie user feedback to specific prompts and LLM calls. Establish best practices and processes for the full lifecycle of prompts, including development, testing, deployment, and monitoring. Collaborate with engineering teams across the organization to ensure they have the tools and visibility needed to build high-quality AI features.
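The cost-per-feature and token-pattern tracking described above can be sketched as a small aggregation pass. This is a hedged illustration only: the usage records, feature names, and per-token prices below are all made up, and in a real system the records would come from request logs or the LLM provider's usage API, with rates that vary by model.

```python
from collections import defaultdict

# Hypothetical per-call usage records (illustrative values only).
CALLS = [
    {"feature": "assist", "prompt_tokens": 1200, "completion_tokens": 300, "latency_ms": 850},
    {"feature": "assist", "prompt_tokens": 900,  "completion_tokens": 250, "latency_ms": 700},
    {"feature": "chat",   "prompt_tokens": 400,  "completion_tokens": 600, "latency_ms": 1200},
]

# Illustrative prices in USD per 1K tokens; actual rates depend on the model.
PRICE_IN, PRICE_OUT = 0.003, 0.015

def cost_usd(call: dict) -> float:
    """Cost of one LLM call: input and output tokens priced separately."""
    return (call["prompt_tokens"] / 1000 * PRICE_IN
            + call["completion_tokens"] / 1000 * PRICE_OUT)

# Roll calls up per feature: total cost, call count, summed latency.
totals = defaultdict(lambda: {"cost": 0.0, "calls": 0, "latency_ms": 0})
for c in CALLS:
    t = totals[c["feature"]]
    t["cost"] += cost_usd(c)
    t["calls"] += 1
    t["latency_ms"] += c["latency_ms"]

for feature, t in sorted(totals.items()):
    print(f"{feature}: ${t['cost']:.4f} over {t['calls']} calls, "
          f"avg latency {t['latency_ms'] / t['calls']:.0f} ms")
```

Dashboards of the kind the role describes would feed the same aggregates into a metrics store instead of printing them, keyed by feature and prompt version so regressions can be traced to a specific change.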

$103,000 – $174,000 / year (USD)

San Francisco
Maybe global
Onsite
Go
Kubernetes
Google Cloud
Docker
CI/CD

Head of Internal Tools Engineering

New
Top rated
Bjak
Full-time
Posted

The Head of Internal Tools Engineering is responsible for owning the end-to-end strategy and roadmap for all internal tools, platforms, and automation, treating internal technology as a product. They make strategic build-vs-buy decisions, map current and next-state process flows, and lead systems transformation for internal teams. They architect and maintain the full engineering lifecycle of internal platforms, build seamless API-first ecosystems integrating various internal systems, ensure system reliability and operational resilience, and design scalable, secure architectures using cloud-native principles and microservices. They lead AI strategy by integrating AI and LLMs into internal workflows and deploying intelligent automation tools. They reduce cognitive load for internal users by providing standardized workflows and self-service capabilities, measure platform success by adoption, satisfaction, and productivity impact, and build, lead, and mentor a high-performing engineering team. They cultivate a collaborative culture, provide technical mentorship, foster psychological safety, partner cross-functionally with leadership across departments, and align internal platform investments with company strategy while demonstrating measurable ROI.

Undisclosed

New York, United States
Maybe global
Remote
Python
AWS
GCP
Azure
Docker

Head of Internal Tools Engineering

New
Top rated
Bjak
Full-time
Posted

The role involves architecting, building, and scaling the internal technology ecosystem to accelerate workforce productivity, eliminate operational friction, and provide a compounding infrastructure advantage by treating internal tools with product rigor and user-centricity. Responsibilities include owning the end-to-end strategy and roadmap for all internal tools, platforms, and automation; making strategic build-vs-buy decisions; mapping current and next-state process flows and leading systems transformation. The role requires architecting and maintaining the full engineering lifecycle of internal platforms, building API-first ecosystems integrating with various business systems, owning system reliability and operational resilience, and designing scalable, secure cloud-native architectures. The role leads AI adoption and automation integration into internal workflows, including deploying intelligent automation tools, evaluating AI-assisted troubleshooting, and driving continuous experimentation with prototypes. The person will reduce cognitive load for internal users by providing golden paths and standardized workflows, ensuring frictionless onboarding, and measuring platform success via adoption rates, user satisfaction, DORA metrics, and productivity impact. Team leadership duties include building, leading, and mentoring engineers and managers, fostering a collaborative culture rooted in ownership, speed, craftsmanship, and psychological safety. The role partners cross-functionally with various company leadership teams to translate business needs into a unified technical vision, aligning internal platform investments with company strategy and demonstrating measurable ROI.

Undisclosed

Beijing, China
Maybe global
Remote
Python
AWS
GCP
Azure
CI/CD

Freelance AI Evaluation Engineer (Python/Full-Stack)

New
Top rated
Mindrift
Part-time
Full-time
Posted

Create challenging coding test cases that push AI coding systems to their limits. Review and refine realistic coding tasks based on provided production codebases with realistic scope, requirements, and information sources. Write comprehensive functional tests that validate actual end-to-end behavior and edge-cases, not just superficial checks. Craft fair but hard challenges where the AI has all the context it needs but must work for it, involving information scattered across files and external sources and requiring complex reasoning. Analyze AI failures to understand what the model struggles with versus what it masters. Iterate based on feedback from expert QA reviewers who score work on seven quality criteria.
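The difference between a superficial check and the comprehensive functional tests described above can be shown with a toy example. Everything here is hypothetical: assume a task spec asks the AI for a `slugify(title)` function; a superficial check would test only the happy path, while a functional grader also pins down the edge cases the spec implies.

```python
def slugify(title: str) -> str:
    """Reference solution the AI's submission would be graded against
    (hypothetical task: lowercase, strip punctuation, hyphenate words)."""
    cleaned = "".join(ch if ch.isalnum() else " " for ch in title.lower())
    return "-".join(cleaned.split())

# Edge-case-heavy test matrix: (input, expected output).
CASES = [
    ("Hello World", "hello-world"),            # happy path
    ("  leading  spaces ", "leading-spaces"),  # whitespace collapsing
    ("C++ & Go!", "c-go"),                     # punctuation stripped
    ("", ""),                                  # empty input
]

for raw, expected in CASES:
    assert slugify(raw) == expected, (raw, slugify(raw), expected)
```

In a real evaluation task the matrix would be derived from the written spec and the surrounding codebase, so a submission that merely pattern-matches the example in the prompt fails the hidden edge cases.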

$45 / hour (USD)

Canada
Maybe global
Remote
Python
JavaScript
Docker
CI/CD

Freelance AI Evaluation Engineer (Python/Full-Stack)

New
Top rated
Mindrift
Part-time
Full-time
Posted

Create challenging coding test cases that push AI coding systems to their limits by reviewing and refining realistic coding tasks based on provided production codebases with realistic scope, requirements, and information sources. Write comprehensive functional tests that validate actual end-to-end behavior and edge-cases, not just superficial checks. Craft "fair but hard" challenges where the AI has all the context it needs but must work for it, involving information scattered across files and external sources and requiring complex reasoning. Analyze AI failures to understand areas where the model struggles versus what it masters. Iterate based on feedback from expert QA reviewers who score the work on seven quality criteria.

$50 / hour (USD)

United Kingdom
Maybe global
Remote
Python
JavaScript
Docker
CI/CD

Want to see more AI Engineer jobs?

View all jobs

Access all 4,256 remote & onsite AI jobs.

Join our private AI community to unlock full job access, and connect with founders, hiring managers, and top AI professionals.
(Yes, it’s still free—your best contributions are the price of admission.)

Frequently Asked Questions

Need help with something? Here are our most frequently asked questions.


What are Docker AI jobs?

Docker AI jobs involve developing, deploying, and maintaining AI applications using containerization technology. These positions focus on creating reproducible AI workflows, packaging machine learning models with their dependencies, and ensuring consistent execution across environments. Professionals in these roles typically work on MLOps pipelines and containerized AI applications, and implement solutions that transition seamlessly from development to production.

What roles commonly require Docker skills?

Machine Learning Engineers, Data Scientists, AI Developers, and DevOps Engineers working on AI systems commonly require containerization skills. These professionals use containers to package models, ensure reproducibility, and streamline deployment pipelines. Full-stack developers building AI-powered applications and MLOps specialists implementing continuous integration workflows also frequently need proficiency with containerized environments and deployment strategies.

What skills are typically required alongside Docker?

Alongside containerization expertise, employers typically seek proficiency in AI frameworks such as TensorFlow, PyTorch, and Hugging Face. Familiarity with Docker Compose for multi-container applications, version control systems, and CI/CD pipelines is essential. Additional valuable skills include YAML configuration, cloud deployment knowledge, GPU acceleration techniques, and experience with MLOps practices that support model development, testing, and production deployment.

What experience level do Docker AI jobs usually require?

AI positions requiring containerization skills typically seek mid-level professionals with 2–4 years of practical experience. Entry-level roles may accept candidates with demonstrated proficiency in basic container commands, Dockerfile creation, and image management. Senior positions often demand extensive experience integrating containers into production ML pipelines, optimizing container resources, and implementing advanced deployment strategies across cloud and edge environments.

What is the salary range for Docker AI jobs?

Compensation for AI professionals with containerization expertise varies by location, experience level, industry, and additional technical skills. Junior roles typically start at competitive market rates, while senior positions command premium salaries. The most lucrative opportunities combine deep learning expertise, container orchestration experience, and cloud platform knowledge. Specialized industries such as finance and healthcare often offer higher compensation for these in-demand skill combinations.

Are Docker AI jobs in demand?

Containerization skills remain highly sought after in AI development, with strong demand driven by organizations implementing MLOps practices and scalable AI deployment strategies. Recent partnerships such as Anaconda–Docker and trends in serverless AI containers have intensified hiring needs. The emergence of specialized tools like Docker Model Runner, Docker Offload, and the Docker AI Catalog reflects the growing importance of containerized workflows in modern AI development and deployment.

What is the difference between Docker and Kubernetes in AI roles?

In AI roles, containerization focuses on packaging individual applications with their dependencies for consistent execution, while Kubernetes orchestrates many containers at scale. ML engineers might use Docker to create reproducible model environments but rely on Kubernetes to manage production deployments across clusters. Containerization handles the model packaging; Kubernetes addresses the scalability, load balancing, and automated recovery needed for production AI systems serving many users simultaneously.