Docker AI Jobs

Discover the latest remote and onsite Docker AI roles at top AI companies actively hiring. Updated hourly.

Check out 252 new Docker AI roles posted on AI Chopping Block

GTM Engineer

New
Top rated
LangChain
Full-time

Design and deploy production-grade agents using LangGraph and LangSmith that handle technical support queries, troubleshoot integrations, and guide users through complex onboarding flows. Analyze customer friction points to build self-service AI systems that reduce support volume and improve customer experience. Act as the product owner and technical lead to proactively identify opportunities for improvement, propose architectures, and own the full lifecycle of the systems built. Participate in the feedback loop for the product team by identifying gaps in frameworks and contributing to the LangChain and LangGraph open-source ecosystem. Develop AI-native onboarding workflows that automate documentation retrieval and code generation to help enterprise customers move from prototypes to production faster.

$160,000 – $180,000 per year (USD)

New York, United States
Maybe global
Remote
Python
TypeScript
LangChain
RAG
Prompt Engineering

DevSecOps Engineer (TypeScript & Agentic AI)

New
Top rated
Arize AI
Full-time

Debug and fix issues in the platform and ship pull requests with those fixes. Build internal tools and copilots powered by generative AI to enhance the team’s capabilities. Rapidly prototype proof-of-concepts for customer use cases. Work collaboratively across Engineering, Product, and Solutions teams to unblock customers and advance AI adoption.

Undisclosed

Buenos Aires, Argentina
Maybe global
Remote
TypeScript
Python
Go
OpenAI API
LangChain

Machine Learning Engineer

New
Top rated
HappyRobot
Full-time

Design, build, and maintain scalable machine learning systems including data ingestion, preprocessing, training, testing, and deployment. Develop and optimize end-to-end ML pipelines encompassing data collection, labeling, training, validation, and monitoring to ensure reliability and reproducibility. Implement robust MLOps practices such as model versioning, experiment tracking, CI/CD for machine learning, and continuous monitoring in production environments. Collaborate with product and engineering teams to integrate and deploy models into real-time products with a focus on efficiency and scalability. Ensure data quality, observability, and performance across all AI systems. Stay current with the latest AI infrastructure, tooling, and research to support ongoing innovation.

Undisclosed

Spain
Maybe global
Remote
Python
Go
MLOps
MLflow
Docker

Senior Product Manager, Enterprise AI Platform

New
Top rated
H Company
Full-time

Define the vision and roadmap for the Enterprise AI platform. Understand key enterprise use cases and pain points through deep engagement with forward deployed teams, turning common pain points into high leverage features. Partner with research, engineering, and design teams to translate AI capabilities into useful product features. Own the product lifecycle from ideation through launch.

Undisclosed

Paris, France or London, United Kingdom
Maybe global
Hybrid
Python
Prompt Engineering
Model Evaluation
MLOps
MLflow

AI Deployment Engineer, Codex | Korea

New
Top rated
OpenAI
Full-time

Serve as the primary technical subject matter expert on OpenAI Codex for a portfolio of customers, embedding deeply with them to enable their engineering teams and build coding workflows. Partner directly with customers to design and implement AI-enhanced development workflows, from rapid prototyping through scalable production rollout. Build high-quality demos, reference implementations, and workflow automations, using Codex itself as part of the development process. Lead large-format workshops, technical deep dives, and hands-on enablement sessions that help engineering organizations adopt AI coding tools effectively and safely. Contribute technical content including examples, guides, patterns, and best practices to the OpenAI Cookbook to help the broader developer community accelerate their work with Codex. Gather high-fidelity product insights from real customer deployments and translate them into clear product proposals and model feedback for internal teams. Influence customer strategy and decision-making by framing how AI coding tools fit into their software development lifecycle, technical roadmap, and organizational workflows. Serve as a trusted advisor on solution architecture, operational readiness, model configuration, security considerations, and best-practice adoption.

Undisclosed

Seoul, South Korea
Maybe global
Remote
Python
JavaScript
Prompt Engineering
Model Evaluation
OpenAI API

AI Deployment Engineer, Codex

New
Top rated
OpenAI
Full-time

Serve as the primary technical subject matter expert on OpenAI Codex for a portfolio of customers, embedding deeply with them to enable their engineering teams and build coding workflows. Partner directly with customers to design and implement AI-enhanced development workflows, from rapid prototyping through scalable production rollout. Build high-quality demos, reference implementations, and workflow automations, using Codex itself as part of the development process. Lead large-format workshops, technical deep dives, and hands-on enablement sessions that help engineering organizations adopt AI coding tools effectively and safely. Contribute technical content including examples, guides, patterns, and best practices to the OpenAI Cookbook to help the broader developer community accelerate their work with Codex. Gather high-fidelity product insights from real customer deployments and translate them into clear product proposals and model feedback for internal teams. Influence customer strategy and decision-making by framing how AI coding tools fit into their software development lifecycle, technical roadmap, and organizational workflows. Serve as a trusted advisor on solution architecture, operational readiness, model configuration, security considerations, and best-practice adoption.

$197,000 – $278,000 per year (USD)

New York, United States
Maybe global
Remote
Python
Prompt Engineering
Model Evaluation
MLOps
Docker

Software Engineer (SF)

New
Top rated
Fractional AI
Full-time

Work on a small, high-caliber team building AI products for clients, from requirements gathering and prototyping through system design, development, testing, and deployment. Own features end-to-end and develop domain expertise across a range of AI use cases. Spend most of the time coding and frequently interact with clients to ensure the solutions meet their needs.

$160,000 – $220,000 per year (USD)

San Francisco, United States
Maybe global
Hybrid
Python
JavaScript
TypeScript
PyTorch
TensorFlow

Senior / Staff Software Engineer (SF/NY)

New
Top rated
Fractional AI
Full-time

You will work on a small, high-caliber team building AI products for clients, setting technical direction, writing code, and serving as the go-to person when challenges arise. Spend approximately 75% of your time coding and 25% interacting with clients, including CTOs, to understand problems, evaluate tradeoffs, and ensure solutions meet their needs.

$230,000 – $350,000 per year (USD)

San Francisco or New York, United States
Maybe global
Hybrid
Python
JavaScript
TypeScript
PyTorch
TensorFlow

Staff Software Engineer, Bots

New
Top rated
Cantina Labs
Full-time

As a member of the Bots team, design, build, and scale systems that enhance user engagement with the AI-powered platform, including bot chat orchestration, AI image generation, AI video generation, and tooling for managing these features. Collaborate with cross-functional teams like product managers, designers, and data specialists to deliver high-quality, performant, and maintainable features. Experiment with and integrate new AI image, video, and voice generation technologies. Build tooling and infrastructure around various AI technologies. Gain exposure to the architecture and operations of a fast-growing social AI product. Contribute expertise to evolve team processes and technical infrastructure, ensuring scalability and reliability.

$230,000 – $290,000 per year (USD)

San Francisco, United States
Maybe global
Onsite
Go
AWS
Docker
Kubernetes
CI/CD

Staff Field Application Engineer, Customer Success

New
Top rated
Tenstorrent
Full-time

Lead and contribute to cross-functional efforts solving complex physical design challenges across IPs, projects, and advanced technology nodes. Develop and enhance RTL-to-GDS methodologies, including floorplanning, synthesis, place and route (P&R), static timing analysis (STA), signoff, and assembly. Architect and deploy AI/ML-driven solutions in production flows to improve engineering efficiency, turnaround time, and quality of results (QoR). Optimize EDA tools and custom CAD flows using data-driven and ML-based techniques, collaborating closely with verification, extraction, timing, design for test (DFT), and EDA vendors.

$100,000 – $500,000 per year (USD)

Santa Clara or Austin or Fort Collins, United States
Maybe global
Hybrid
Python
PyTorch
TensorFlow
MLflow
MLOps

Want to see more AI Engineer jobs?

View all jobs

Access all 4,256 remote & onsite AI jobs.

Join our private AI community to unlock full job access, and connect with founders, hiring managers, and top AI professionals.
(Yes, it’s still free—your best contributions are the price of admission.)

Frequently Asked Questions

Need help with something? Here are our most frequently asked questions.


What are Docker AI jobs?

Docker AI jobs involve developing, deploying, and maintaining AI applications using containerization technology. These positions focus on creating reproducible AI workflows, packaging machine learning models with dependencies, and ensuring consistent execution across environments. Professionals in these roles typically work on MLOps pipelines and containerized AI applications, and implement solutions that transition seamlessly from development to production.

What roles commonly require Docker skills?

Machine Learning Engineers, Data Scientists, AI Developers, and DevOps Engineers working on AI systems commonly require containerization skills. These professionals use containers to package models, ensure reproducibility, and streamline deployment pipelines. Full-stack developers building AI-powered applications and MLOps specialists implementing continuous integration workflows also frequently need proficiency with containerized environments and deployment strategies.

What skills are typically required alongside Docker?

Alongside containerization expertise, employers typically seek proficiency in AI frameworks like TensorFlow, PyTorch, and Hugging Face. Familiarity with Docker Compose for multi-container applications, version control systems, and CI/CD pipelines is essential. Additional valuable skills include YAML configuration, cloud deployment knowledge, GPU acceleration techniques, and experience with MLOps practices that facilitate model development, testing, and production deployment.

What experience level do Docker AI jobs usually require?

AI positions requiring containerization skills typically seek mid-level professionals with 2–4 years of practical experience. Entry-level roles may accept candidates with demonstrated proficiency in basic container commands, Dockerfile creation, and image management. Senior positions often demand extensive experience integrating containers into production ML pipelines, optimizing container resources, and implementing advanced deployment strategies across cloud and edge environments.

What is the salary range for Docker AI jobs?

Compensation for AI professionals with containerization expertise varies based on location, experience level, industry, and additional technical skills. Junior roles typically start at competitive market rates, while senior positions command premium salaries. The most lucrative opportunities combine deep learning expertise, container orchestration experience, and cloud platform knowledge. Specialized industries like finance or healthcare often offer higher compensation for these in-demand skill combinations.

Are Docker AI jobs in demand?

Containerization skills remain highly sought after in AI development, with strong demand driven by organizations implementing MLOps practices and scalable AI deployment strategies. Recent partnerships like Anaconda–Docker and trends in serverless AI containers have intensified hiring needs. The emergence of specialized tools like Docker Model Runner, Docker Offload, and Docker AI Catalog reflects the growing importance of containerized workflows in modern AI development and deployment practices.

What is the difference between Docker and Kubernetes in AI roles?

In AI roles, containerization focuses on packaging individual applications with dependencies for consistent execution, while Kubernetes orchestrates multiple containers at scale. ML engineers might use Docker to create reproducible model environments but implement Kubernetes to manage production deployments across clusters. While containerization handles the model packaging, Kubernetes addresses the scalability, load balancing, and automated recovery needed for production AI systems serving multiple users simultaneously.
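The skills named throughout these listings and FAQs center on packaging a model with its dependencies into a reproducible image. As a rough sketch only (the file names app.py, model.joblib, and requirements.txt, and the FastAPI/uvicorn serving stack, are illustrative assumptions, not drawn from any listing here), a minimal Dockerfile for serving a trained model might look like:

```dockerfile
# Illustrative sketch: assumes the build context contains app.py
# (exposing a FastAPI application object named "app"), a serialized
# model file model.joblib, and a requirements.txt listing the
# dependencies (e.g. fastapi, uvicorn, scikit-learn, joblib).
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
# when only application code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the serialized model and serving code into the image.
COPY model.joblib app.py ./

EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
```

Built with `docker build -t model-api .` and started with `docker run -p 8000:8000 model-api`, this bundles the model, its dependencies, and the runtime into a single image that behaves identically in development and production, which is the reproducibility these roles are hiring for.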