About the Team
OpenAI’s API Multicloud team sits within B2B Applications and is responsible for extending OpenAI’s API platform into strategic cloud environments, starting with AWS. The team’s mission is to distribute OpenAI’s API broadly and safely by enabling key API technologies in AWS-native environments, in close partnership with Amazon and internal teams across Codex, Research, Safety Systems, and Applied.
The team is focused on bringing core developer and enterprise capabilities into cloud-native environments, including AWS-hosted Codex, model customization / post-training as a service, and new stateful runtime environments for agentic workloads. This work sits at the intersection of production ML systems, developer platforms, model behavior, and large-scale infrastructure.
About the Role
We’re hiring Machine Learning Engineers to build and improve the AI systems that help strategic partners adapt OpenAI models to important use cases in cloud-native environments. This role spans post-training workflows, evaluation, data pipelines, model behavior, and API/infrastructure integration.
You’ll work at the boundary between partner needs and core ML systems: helping teams understand what is and isn’t working, diagnosing issues in training and evaluation workflows, and turning those learnings into improvements to the underlying platform. You’ll collaborate closely with Research, Applied, Safety Systems, infrastructure teams, and external technical partners to solve ambiguous model-performance problems. When you succeed, strategic partners and internal teams will be able to improve model behavior with confidence, driving measurable product improvements, and the systems behind that work will become more reliable, scalable, and effective over time.
In this role, you will
Partner with strategic customers and internal teams to define target model behaviors, diagnose failure modes, and translate real-world needs into training, evaluation, and system requirements.
Build and scale production ML systems for model customization, post-training, and fine-tuning-as-a-service workflows.
Investigate whether training and customization workflows are producing the intended outcomes, and identify changes to data, evaluation, training, or infrastructure that improve performance.
Partner with backend and infrastructure engineers to integrate ML capabilities into AWS-native API environments.
Feed learnings from partner deployments back into the platform by proposing and implementing improvements to post-training systems, tooling, APIs, and developer workflows.
Work closely with Research and Applied teams to bring model improvements, training workflows, and evaluation best practices into production.
Help design systems that allow strategic partners and enterprise customers to safely customize OpenAI models for high-value use cases.
Debug and improve complex systems spanning model behavior, training data, APIs, distributed infrastructure, and customer-facing product surfaces.
Operate with high ownership in a 0→1 environment where requirements are ambiguous, systems are evolving quickly, and reliability matters.
Your background might look something like:
Master’s or PhD in Computer Science, Machine Learning, or a related field, or equivalent practical experience.
7+ years of professional engineering experience in relevant ML, infrastructure, or product-driven engineering roles.
Strong ML engineering experience building, training, fine-tuning, evaluating, or deploying production AI systems, with hands-on experience in deep learning, transformer models, and frameworks like PyTorch or TensorFlow.
Familiarity with training and fine-tuning large language models, including methods like supervised fine-tuning, distillation, preference optimization, reinforcement learning, or other post-training techniques.
Strong software engineering fundamentals, including data structures, algorithms, systems design, and high-quality production code in Python, Rust, or similar languages.
Experience with model customization, evaluation systems, data pipelines, distributed systems, cloud infrastructure, or production ML platform tradeoffs.
Ability to operate across model behavior, APIs, and infrastructure, while collaborating closely with Research, Safety, product engineering, infrastructure, and external technical partners.
Comfort moving quickly through ambiguity, owning problems end-to-end, and learning whatever is needed to get the job done.
Bonus: experience with AWS, Kubernetes, agents, tool use, runtime environments, AI developer platforms, or speech models.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic.
For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.