Program Manager, Data Center Delivery
Advance inference efficiency end-to-end by designing and prototyping algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference. Implement and maintain changes in high-performance inference engines, including SGLang- or vLLM-style systems and Together’s inference stack, spanning kernel backends, speculative decoding (e.g., ATLAS), and quantization. Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost. Design and operate RL and post-training pipelines, optimizing algorithms and systems for efficiency where inference constitutes the majority of the cost. Make RL and post-training workloads more efficient with inference-aware training loops, async RL rollouts, and speculative decoding to reduce the cost of large-scale rollout collection and evaluation. Use these pipelines to train, evaluate, and iterate on frontier models atop the inference stack. Co-design algorithms and infrastructure for tightly coupled objectives, rollout collection, and evaluation with efficient inference, and identify bottlenecks across training engines, inference engines, data pipelines, and user-facing layers. Conduct ablations and scale-up experiments to analyze trade-offs among model quality, latency, throughput, and cost, using the insights to inform model, RL, and system design. Profile, debug, and optimize inference and post-training services under production workloads. Lead roadmap efforts that require engine modifications, including changes to kernels, memory layouts, scheduling logic, and APIs. Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously. Provide technical leadership by setting direction for cross-team efforts at the intersection of inference, RL, and post-training, and by mentoring engineers and researchers in full-stack ML systems work and performance engineering.
Senior Engineering Manager, Handshake AI
The Senior Engineering Manager leads a core product and platform engineering team responsible for building systems that integrate human expertise into AI development workflows. The team owns critical infrastructure connecting talent networks, data operations, and research needs into scalable, reliable, and high-quality platforms. The role involves leading, hiring, and developing a high-performing engineering team, owning roadmap and execution in close partnership with Product, Research, and Operations, driving architecture and technical strategy for scalable and extensible systems, building modular platforms to enable new domains and workflows to launch quickly, raising engineering quality across reliability, observability, performance, and data integrity, and fostering a culture of ownership, velocity, and strong engineering fundamentals in a fast-moving, ambiguity-heavy environment.
Director, Forward Deployed Engineering
As Director of Forward Deployed Engineering at Harvey, you will own the program end-to-end for the Forward Deployed Engineering team, which delivers a tailored experience for strategically important accounts. Your responsibilities include building, hiring, and managing a team of software engineers and managers deployed into strategic accounts. You will define staffing models, engagement structures, capacity allocation, and develop specialist pods of engineers for new verticals such as M&A, litigation, fund formation, and compliance. You are responsible for setting and upholding quality standards for client deliverables, documentation, and knowledge transfer. In terms of technical execution, you will maintain deep technical fluency to accurately scope custom builds, unblock engineering decisions, and evaluate the quality of delivered solutions. You will oversee the design and implementation of tailored workflows, retrieval systems, agent tools, and knowledge sources built on Harvey's platform, ensuring these solutions are operationalized with evaluations, documentation, and user training. Additionally, you will identify patterns across client engagements that highlight gaps or opportunities in Harvey's core platform and bring these insights to product and engineering leadership with specificity about client needs, frequency, and generalization requirements.
Manager, Forward Deployed Engineer (FDE), Life Sciences
Lead and grow a team of Forward Deployed Engineers (FDEs) delivering production AI systems across regulated life sciences environments. Be accountable for the team’s end-to-end delivery outcomes, balancing scope, speed, robustness, and risk in high-stakes deployments. Coach and develop engineers through direct feedback, high technical standards, and clear expectations for execution and ownership. Operate as a player-coach by directly contributing to production systems while leading, coaching, and setting technical direction. Guide teams through ambiguous, multi-workstream engagements spanning data, workflows, infrastructure, security, and scientific stakeholders. Run evaluation loops measuring model and system quality against workflow-specific scientific benchmarks, and convert the results into clear roadmap input.
Senior Engineering Manager, Reinforcement Learning Environments (RLE)
Lead and grow a high-performing team of 8–9 engineers building reinforcement learning environments. Manage, mentor, and develop senior engineers and future engineering leaders. Partner closely with research, product, and operations teams to define roadmap and execution priorities. Drive technical architecture for scalable, reliable, and extensible environment systems. Build plug-and-play environments that integrate seamlessly with model training pipelines. Balance platform rigor with operational complexity and data quality requirements. Establish engineering best practices around reliability, observability, and performance. Foster a culture of ownership, velocity, and high technical standards.
Senior Manager
Lead transformational AI system implementations by scoping high-value solutions and navigating complex technical challenges alongside technical colleagues. Manage enterprise life sciences accounts, including oversight of pricing, contract negotiations, resourcing, and identifying strategic growth opportunities. Build deep trust with senior stakeholders in global enterprises through understanding how Frontier addresses their operational problems. Advocate for customer needs internally by providing product development teams with direct insights to refine and enhance the platform. Create scalable delivery assets such as playbooks and process improvements to empower external partners and internal teams. Collaborate across functions including engineering, data science, and business development to explore novel use cases and ensure seamless project coordination.
AI Implementations Manager
The AI Implementations Manager is responsible for the end-to-end delivery and stabilization of Ema's agentic AI solutions, from design alignment through production rollout and steady state. This role involves ensuring solutions align with Ema’s agentic architecture and platform capabilities. The manager must develop a deep understanding of customer business processes and constraints in order to translate business workflows into feasible agentic AI workflows. They provide delivery-focused technical oversight, anticipating potential implementation issues such as integration, data quality, scale, and edge cases. The manager serves as the primary delivery contact for customer business and IT stakeholders and coordinates across multiple internal teams, including Engineering, Product, Data, Infrastructure, and Value Engineering. They manage delivery under pressure, coaching stakeholders and teams through high-stress phases to reduce chaos. They communicate delivery progress, risks, and decisions clearly to all audiences, tracking success through adoption signals and outcome-adjacent metrics. Additionally, the role includes providing day-to-day delivery leadership and mentorship, promoting shared standards, clear ownership, and delivery discipline.
Technical Program Manager, Quality
Manage the end-to-end lifecycle of LLM projects, navigating the transition from research milestones to production-level deployments. Transform subjective user feedback into objective metrics and datasets. Design and implement technical evaluations to address issues found in the field and help integrate these evaluations into existing pipelines. Track internal and external feedback to ensure identified issues are followed through to resolution in subsequent iterations. Maintain the technical roadmap for voice-based capabilities, proactively identifying dependencies and resolving technical blockers across teams. Ensure the roadmap incorporates the work and constraints of all teams to deliver a cohesive user experience.
AI Deployment Manager
As an AI Deployment Manager, you will lead end-to-end AI deployments from kickoff to successful launch, owning project planning, timelines, execution, and delivery across customer implementations. You will act as a trusted partner to customers, helping translate business goals into successful AI deployments. You will deploy and operationalize AI models across Cresta's platform in partnership with internal teams, including rules-based models, summarization, generative knowledge assistance, and more. You will drive value realization, ensuring deployments deliver measurable results rather than just go-live dates. You will guide customers confidently through every phase of deployment, keeping momentum high and stakeholders aligned. You will collaborate closely with Solutions Engineering, Product, Customer Success, and Engineering teams. Additionally, you will anticipate risks, solve problems, and keep complex initiatives moving forward.
Manager, Forward Deployed Engineering
Lead and grow a team of Forward Deployed Engineers (FDEs) delivering production systems with frontier models. Own end-to-end delivery outcomes through clarity, speed, tight coordination, and technical quality. Codify successful practices into tools, playbooks, and roadmap inputs to create leverage for OpenAI and the wider developer community. Identify early signals in product behavior, customer environments, or delivery practices, raise them with urgency, and use judgment to distinguish which issues require action. Set a high performance bar for FDEs and support each person's growth through direct, actionable feedback. Define staffing and support models for field teams that can scale without added complexity.
