Compliance Program Manager
Own the observability and lifecycle management of AI features across the organization. Build tools and infrastructure that enable teams to develop, monitor, and optimize LLM-powered features. Design and implement closed-loop evaluation pipelines that automatically validate prompt changes. Develop comprehensive metrics and dashboards to track LLM usage, including cost per feature, token usage patterns, and latency. Create systems that connect user feedback to specific prompts and LLM calls. Establish best practices and processes for the full prompt lifecycle: development, testing, deployment, and monitoring. Collaborate with engineering teams across the organization to ensure they have the tools and visibility needed to build high-quality AI features.
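The cost-per-feature and latency tracking described above can be sketched minimally as a per-call metrics aggregator. All class, field, and feature names here are hypothetical, and the per-token prices are placeholder values, not any provider's actual rates:

```python
from dataclasses import dataclass
from collections import defaultdict


@dataclass
class LLMCallRecord:
    feature: str        # which product feature issued the call
    prompt_id: str      # which prompt version was used
    input_tokens: int
    output_tokens: int
    latency_s: float


class UsageTracker:
    """Aggregates per-feature cost, token, and latency metrics."""

    def __init__(self, cost_per_1k_input=0.003, cost_per_1k_output=0.015):
        self.cost_in = cost_per_1k_input
        self.cost_out = cost_per_1k_output
        self.records = defaultdict(list)

    def log(self, rec: LLMCallRecord):
        self.records[rec.feature].append(rec)

    def cost_per_feature(self):
        # Sum token-based cost for every call, grouped by feature.
        out = {}
        for feature, recs in self.records.items():
            cost = sum(
                r.input_tokens / 1000 * self.cost_in
                + r.output_tokens / 1000 * self.cost_out
                for r in recs
            )
            out[feature] = round(cost, 6)
        return out


tracker = UsageTracker()
tracker.log(LLMCallRecord("summarize", "summarize-v3", 1200, 300, 0.8))
tracker.log(LLMCallRecord("summarize", "summarize-v3", 800, 200, 0.6))
tracker.log(LLMCallRecord("search", "search-v1", 500, 100, 0.4))
print(tracker.cost_per_feature())  # → {'summarize': 0.0135, 'search': 0.003}
```

Tagging each record with a `prompt_id` is what makes the later step possible: user feedback can be joined back to the exact prompt version that produced a response.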
Manager/Sr. Manager, Biopharma Marketing
Lead the team responsible for the AI/ML Stack infrastructure that bridges ML research and large-scale production, evolving the stack to meet scalability needs in ML training and inference workloads. Develop and execute the long-term vision and roadmap for the MLOps team to support ML development and deployment across business units, balancing short-term tactical deliveries with long-term architectural transformation. Manage and mentor a team of 6–7+ engineers, allocating resources strategically to support both existing services and strategic initiatives. Collaborate across machine learning, data science, product engineering, and infrastructure teams to identify and address bottlenecks and facilitate deployment of new solutions. Architect compute and storage pipelines to manage large datasets without fragmentation or latency. Modernize the AI product inference stack to support significant growth in AI runs globally. Work with Site Reliability Engineering to establish comprehensive system observability metrics. Conduct build-vs.-buy assessments and technology stack refresh audits to benchmark tooling and ensure the best toolsets are in use.
Chief Technology Officer
The Chief Technology Officer is responsible for defining the long-term architecture for A1's AI systems, infrastructure, and developer platform, evaluating trade-offs between speed of iteration and long-term system design, and ensuring systems are designed for scalability, reliability, and long-term evolution. They guide key decisions across model integration, data pipelines, distributed systems, and product architecture. The CTO works with engineers to translate product direction into clear technical execution, helps structure engineering workstreams and maintain team alignment on priorities, maintains high engineering standards while encouraging shipping, and establishes engineering culture, development practices, and technical standards across the company. They build and scale a world-class engineering team across key talent hubs including China and the US, identify strong technical leaders, define hiring standards and interview processes, and ensure technical workstreams move forward smoothly across teams and locations. The CTO works closely with product, research, and leadership teams and helps resolve cross-team technical and execution challenges.
Program Manager, Data Center Delivery
Advance inference efficiency end-to-end by designing and prototyping algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference. Implement and maintain changes in high-performance inference engines such as SGLang- or vLLM-style systems and Together’s inference stack, including kernel backends, speculative decoding like ATLAS, and quantization. Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost. Design and operate RL and post-training pipelines, optimizing algorithms and systems for efficiency where inference constitutes the majority of the cost. Make RL and post-training workloads more efficient with inference-aware training loops, async RL rollouts, and speculative decoding to reduce large-scale rollout collection and evaluation costs. Use these pipelines to train, evaluate, and iterate on frontier models atop the inference stack. Co-design algorithms and infrastructure for tightly coupled objectives, rollout collection, and evaluation with efficient inference, and identify bottlenecks across training engines, inference engines, data pipelines, and user-facing layers. Conduct ablations and scale-up experiments to analyze trade-offs among model quality, latency, throughput, and cost, using insights to inform model, RL, and system design. Profile, debug, and optimize inference and post-training services under production workloads. Lead roadmap efforts that require engine modifications including changes to kernels, memory layouts, scheduling logic, and APIs. Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously. Provide technical leadership by setting technical direction for cross-team efforts at the intersection of inference, RL, and post-training and mentoring engineers and researchers in full-stack ML systems work and performance engineering.
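The speculative decoding mentioned in the listing above can be illustrated with a toy greedy version: a cheap draft model proposes a block of tokens, and the target model verifies them, keeping the longest matching prefix. This is a simplified sketch, not the ATLAS method or any real engine's implementation; the callables standing in for models are hypothetical:

```python
def speculative_generate(target, draft, prompt, k=4, max_new=12):
    """Toy greedy speculative decoding.

    `target` and `draft` are greedy next-token functions standing in
    for real models: each maps a token sequence to one next token.
    The draft proposes k tokens; the target verifies them and keeps
    the longest matching prefix, then appends its own correction, so
    the output always equals pure target-only greedy decoding.
    """
    seq = list(prompt)
    while len(seq) - len(prompt) < max_new:
        # Draft proposes k tokens autoregressively.
        proposal = []
        ctx = list(seq)
        for _ in range(k):
            t = draft(ctx)
            proposal.append(t)
            ctx.append(t)
        # Target verifies: accept while the draft matches its greedy pick.
        for t in proposal:
            expected = target(seq)
            if t == expected:
                seq.append(t)
            else:
                seq.append(expected)  # target's correction ends the block
                break
    return seq[len(prompt):]


# A deterministic "model" that counts upward modulo 10.
target = lambda ctx: (ctx[-1] + 1) % 10
draft = target  # perfectly aligned draft: every proposal is accepted
print(speculative_generate(target, draft, [0], k=4, max_new=8))
# → [1, 2, 3, 4, 5, 6, 7, 8]
```

The speedup in a real engine comes from verifying all k draft tokens in a single batched forward pass of the target model, rather than calling it once per token as this sketch does; when the draft agrees often, most decoding steps cost one target pass per k tokens.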
Senior Engineering Manager, Handshake AI
The Senior Engineering Manager leads a core product and platform engineering team responsible for building systems that integrate human expertise into AI development workflows. The team owns critical infrastructure connecting talent networks, data operations, and research needs into scalable, reliable, and high-quality platforms. The role involves leading, hiring, and developing a high-performing engineering team and owning roadmap and execution in close partnership with Product, Research, and Operations. It also entails driving architecture and technical strategy for scalable and extensible systems, building modular platforms that enable new domains and workflows to launch quickly, raising engineering quality across reliability, observability, performance, and data integrity, and fostering a culture of ownership, velocity, and strong engineering fundamentals in a fast-moving, ambiguity-heavy environment.
Director, Forward Deployed Engineering
As Director of Forward Deployed Engineering at Harvey, you will own the program end-to-end for the Forward Deployed Engineering team, which delivers a tailored experience for strategically important accounts. Your responsibilities include building, hiring, and managing a team of software engineers and managers deployed into strategic accounts. You will define staffing models, engagement structures, capacity allocation, and develop specialist pods of engineers for new verticals such as M&A, litigation, fund formation, and compliance. You are responsible for setting and upholding quality standards for client deliverables, documentation, and knowledge transfer. In terms of technical execution, you will maintain deep technical fluency to accurately scope custom builds, unblock engineering decisions, and evaluate the quality of delivered solutions. You will oversee the design and implementation of tailored workflows, retrieval systems, agent tools, and knowledge sources built on Harvey's platform, ensuring these solutions are operationalized with evaluations, documentation, and user training. Additionally, you will identify patterns across client engagements that highlight gaps or opportunities in Harvey's core platform and bring these insights to product and engineering leadership with specificity about client needs, frequency, and generalization requirements.
Senior Program Manager, Infrastructure Strategy and Business Operations
Advance inference efficiency end-to-end by designing and prototyping algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference; implement and maintain changes in high-performance inference engines, including kernel backends, speculative decoding, and quantization; profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost. Unify inference with RL/post-training by designing and operating RL and post-training pipelines, making workloads more efficient with inference-aware training loops, and using these pipelines to train, evaluate, and iterate on frontier models. Co-design algorithms and infrastructure to tightly couple objectives, rollout collection, and evaluation to efficient inference and identify bottlenecks across the training engine, inference engine, data pipeline, and user-facing layers. Run ablations and scale-up experiments to understand trade-offs between model quality, latency, throughput, and cost, feeding insights back into model, RL, and system design. Own critical systems at production scale by profiling, debugging, and optimizing inference and post-training services; drive roadmap items requiring engine modification; establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously. Provide technical leadership by setting technical direction for cross-team efforts and mentoring engineers and researchers on full-stack ML systems and performance engineering.
Manager, Forward Deployed Engineer (FDE), Life Sciences
Lead and grow a team of Forward Deployed Engineers (FDEs) delivering production AI systems across regulated life sciences environments. Be accountable for the team's end-to-end delivery outcomes, balancing scope, speed, robustness, and risk in high-stakes deployments. Coach and develop engineers through direct feedback, high technical standards, and clear expectations for execution and ownership. Operate as a player-coach by directly contributing to production systems while leading, coaching, and setting technical direction. Guide teams through ambiguous, multi-workstream engagements spanning data, workflows, infrastructure, security, and scientific stakeholders. Run evaluation loops measuring model and system quality against workflow-specific scientific benchmarks, and convert results into clear roadmap input.
Senior Engineering Manager, Reinforcement Learning Environments (RLE)
Lead and grow a high-performing team of 8–9 engineers building reinforcement learning environments. Manage, mentor, and develop senior engineers and future engineering leaders. Partner closely with research, product, and operations teams to define roadmap and execution priorities. Drive technical architecture for scalable, reliable, and extensible environment systems. Build plug-and-play environments that integrate seamlessly with model training pipelines. Balance platform rigor with operational complexity and data quality requirements. Establish engineering best practices around reliability, observability, and performance. Foster a culture of ownership, velocity, and high technical standards.
