Span - Sr Product Engineer
Work on projects such as developing a product that root-causes KTLO (keep-the-lights-on) work and recommends solutions, building a software catalog that is user-friendly and works for monoliths, and helping protect engineering focus time by systematically eliminating sources of distraction and mental load with AI.
Lead Member of Technical Staff, Inference Infrastructure
The Lead Member of Technical Staff, Inference Infrastructure, is responsible for providing technical leadership across multiple teams, driving the architecture and strategy for deploying optimized NLP models to production in low-latency, high-throughput, high-availability environments. They lead the design of customized deployments to meet specific customer needs and mentor engineers to raise the technical bar across the team. The role involves contributing to the development, deployment, and operation of the AI platform delivering large language models through easy-to-use API endpoints, and serving as a key point of contact for customers.
Member of Engineering (Reinforcement Learning Infrastructure)
Keep up with the latest research and stay familiar with the state of the art in LLMs, RL, and code generation. Develop methods for tuning training and inference end-to-end for high throughput. Design data control systems in an RL pipeline that govern what the model sees and when. Debug cases where infrastructure decisions silently degrade learning dynamics. Build observability tooling that surfaces when a system-level issue is the root cause of a training regression. Help build robust, flexible, and scalable RL pipelines. Optimize performance across the stack: networking, memory, compute scheduling, and I/O. Write high-quality, pragmatic code. Collaborate closely with the team: plan next steps, discuss openly, and stay in constant communication.
Staff Software Engineer, Core Infrastructure
As a Staff Software Engineer on the Core Infrastructure team at Harvey, your responsibilities include designing and building scalable, fault-tolerant infrastructure systems that power Harvey's AI platform across multiple cloud regions. You will own and evolve the multi-cloud infrastructure (Azure, GCP), including Kubernetes orchestration, networking, and container management. You will lead technical initiatives focused on observability, incident response, and operational excellence, building systems for rapid detection and resolution of issues. Architecting and optimizing distributed systems for reliability, including load balancing, quota management, and failover mechanisms, will be part of your role. You will partner with Product Engineering and Security teams to ensure infrastructure accelerates product development, drive infrastructure-as-code practices using tools like Terraform and Pulumi for reproducible deployments, and mentor engineers through code reviews, design reviews, and technical leadership. Representative projects include designing model proxy architecture for handling inference requests, building distributed rate limiting and quota management systems, architecting multi-region deployment strategies for data residency compliance, developing observability infrastructure with SLA monitoring and cost tracking, and leading CI/CD pipeline evolution to improve velocity and stability.
Tokens-as-a-Service (TaaS) Software Engineer
Develop systems and tooling to measure, monitor, and improve token throughput across first-party and partner-owned compute environments. Support performance benchmarking, tokenomics analysis, and model porting across heterogeneous infrastructure environments. Build tooling to integrate external or partner infrastructure into OpenAI’s internal compute, observability, and workload management systems. Develop and monitor operational metrics including billing, usage, SLAs, utilization, reliability, and throughput. Identify bottlenecks across hardware, networking, software, and workload enablement that prevent capacity from becoming productive tokens. Partner with compute, infrastructure, networking, finance, and operations teams to translate raw capacity into usable workload-serving capacity. Build dashboards, automation, and reporting systems that provide clear visibility into TaaS capacity, performance, and business outcomes.
Software Engineer, Compute Infrastructure
In this role, you will spin up and scale large Kubernetes clusters, including automating provisioning, bootstrapping, and cluster lifecycle management; build software abstractions that unify multiple clusters and provide a seamless interface to training workloads; own node bring-up from bare metal through firmware upgrades ensuring fast and repeatable deployment at massive scale; improve operational metrics such as reducing cluster restart times and accelerating firmware or OS upgrade cycles; integrate networking and hardware health systems to deliver end-to-end reliability across servers, switches, and data center infrastructure; develop monitoring and observability systems to detect issues early and maintain cluster stability under extreme load; solve real-time operational challenges, diagnose and fix issues quickly, and continuously improve automation, resilience, performance, and uptime across the systems powering frontier AI model training.
Software Engineer, Model Serving Infrastructure
The role involves contributing to the development of next-generation, high-performance machine learning serving systems. Responsibilities include building infrastructure that powers AI applications, working on problems at the intersection of distributed systems, machine learning, and high-performance computing, and solving fundamental computer science problems impacting AI deployment. Specific projects include implementing asynchronous inference for non-blocking client requests, designing intelligent request routing systems to balance load across thousands of model replicas with strict latency SLAs, building traffic management systems for zero-downtime model updates handling terabytes of inference requests, improving state management for scale from thousands to tens of thousands of replicas, architecting frameworks for multi-model orchestration in complex ML pipelines ensuring end-to-end latency guarantees, and developing observability and debugging tools for distributed ML applications at scale. The work involves writing performance-critical code in Python (with Cython optimizations) and potentially C++, working with distributed systems at scale using Ray Core's actor system, gRPC, and custom networking protocols, extending cloud-native infrastructure such as Kubernetes and service meshes, gaining system-level knowledge of ML/AI frameworks like TensorFlow, PyTorch, JAX, and transformers, and ensuring production reliability with tools like OpenTelemetry, Prometheus, distributed tracing, and chaos engineering to maintain 99.99% uptime. The role also involves leveraging AI coding agents to enhance team productivity while maintaining high code quality standards.
VP Engineering - London
The VP Engineering is responsible for defining and executing a scalable, defensible technology strategy; building a world-class engineering organization and platform; partnering with the CEO on product direction, investor communication, and long-term vision; and bridging frontier AI research with enterprise-grade deployment. Responsibilities include architecting and scaling H's AI platform, making build-versus-buy decisions, ensuring performance, reliability, and cost efficiency, and establishing technical moats. The role also covers translating AI capabilities into enterprise-ready products, standardizing bespoke systems, and balancing iteration speed with robustness. The VP builds and leads engineering teams, scales the organizational structure, and implements quality processes. They act as a key counterpart to the CEO in board and investor discussions, articulate the technology and product roadmap, and provide technical depth in due diligence. Finally, they operate cross-functionally across Research, Product, and Go-to-Market, align engineering with customer and revenue goals, and help define the company's long-term positioning.
VP Engineering - Paris
The VP Engineering is responsible for defining and executing a scalable, defensible technology strategy, including architecting and scaling the AI platform with a focus on agents, orchestration, model integration, and infrastructure. They make critical build versus buy decisions across the technology stack, ensure performance, reliability, and cost efficiency at scale, and establish durable technical moats in a rapidly evolving AI landscape. They translate cutting-edge AI capabilities into repeatable, enterprise-ready products, standardize systems that are currently bespoke or forward-deployed, and balance speed of iteration with platform robustness and maintainability. They build and lead a high-caliber engineering organization, scaling from a startup structure to multi-layered, high-output teams and implement processes to enable speed without sacrificing quality. The VP Engineering acts as a key counterpart to the CEO in board and investor discussions, clearly articulates the company's technology and product roadmap, and provides credibility and depth in technical due diligence and fundraising contexts. They operate at the intersection of Research, Product, and Go-to-Market, align engineering execution with customer outcomes and revenue growth, and help define the company’s long-term product and platform positioning.
Engineering Manager, Cooperative Systems
Lead and grow a small team building applied AI systems for internal operations. Design and build AI-powered automation systems in close collaboration with customers. Stay hands-on in architecture and implementation across the full stack. Develop evolving systems spanning developer tools, automation platforms, knowledge graphs, and data systems. Deploy systems directly to internal users and close customers to iterate rapidly on real-world feedback. Engage frequently with scaled workforces to understand needs and validate solutions. Create systems for visibility and learning in hybrid workforces. Partner with product, research, and ops teams daily.
