Staff Software Engineer, Core Infrastructure
As a Staff Software Engineer on the Core Infrastructure team at Harvey, your responsibilities include designing and building scalable, fault-tolerant infrastructure systems that power Harvey's AI platform across multiple cloud regions. You will own and evolve the multi-cloud infrastructure (Azure, GCP), including Kubernetes orchestration, networking, and container management. You will lead technical initiatives focused on observability, incident response, and operational excellence, building systems for rapid detection and resolution of issues. You will also architect and optimize distributed systems for reliability, including load balancing, quota management, and failover mechanisms. You will partner with Product Engineering and Security teams to ensure infrastructure accelerates product development, drive infrastructure-as-code practices using tools like Terraform and Pulumi for reproducible deployments, and mentor engineers through code reviews, design reviews, and technical leadership. Representative projects include designing model proxy architecture for handling inference requests, building distributed rate limiting and quota management systems, architecting multi-region deployment strategies for data residency compliance, developing observability infrastructure with SLA monitoring and cost tracking, and leading CI/CD pipeline evolution to improve velocity and stability.
Tokens-as-a-Service (TaaS) Software Engineer
Develop systems and tooling to measure, monitor, and improve token throughput across first-party and partner-owned compute environments. Support performance benchmarking, tokenomics analysis, and model porting across heterogeneous infrastructure environments. Build tooling to integrate external or partner infrastructure into OpenAI’s internal compute, observability, and workload management systems. Develop and monitor operational metrics including billing, usage, SLAs, utilization, reliability, and throughput. Identify bottlenecks across hardware, networking, software, and workload enablement that prevent capacity from becoming productive tokens. Partner with compute, infrastructure, networking, finance, and operations teams to translate raw capacity into usable workload-serving capacity. Build dashboards, automation, and reporting systems that provide clear visibility into TaaS capacity, performance, and business outcomes.
Software Engineer, Early Career
As a Software Engineer at Mirage, you will work across product engineering, backend/platform engineering, and applied AI teams. Responsibilities include designing and building systems, APIs, and infrastructure that power products; solving challenges involving distributed systems, scaling, and performance; integrating and operating large AI models in production; building core platform components such as storage, billing, observability, and security; shipping end-to-end, AI-powered product experiences for creative workflows; building polished, performant user interfaces (web or native mobile); pushing the boundaries of video, graphics, and AI-powered creation tools; instrumenting, A/B testing, and iterating quickly with real user data; working with state-of-the-art models across video, audio, image, and text; designing systems for context, reasoning, and intelligent behavior; and building evals, datasets, and tooling for improving model quality.
Staff Software Engineer, RLE
Define and drive architecture for scalable, extensible Reinforcement Learning Environments (RLE) systems and data pipelines. Lead development of platform capabilities enabling rapid domain creation. Partner with Research, Product, and Operations to shape strategy and execution. Set standards for reliability, observability, performance, and data quality. Mentor engineers and elevate engineering excellence across the team. Identify and solve systemic bottlenecks in scaling environments and data generation.
Director of Engineering, Infrastructure
As the Director of Engineering for Infrastructure at Zapier, you will lead multiple multidisciplinary teams responsible for building, supporting, and evolving Zapier's core services, platforms, and infrastructure. Your responsibilities include shaping platform engineering vision, scalability, accountability mechanisms, and organizational operations. You will be responsible for defining and driving the strategy and long-term roadmap for the organization in collaboration with your teams and leadership peers, understanding and articulating how your work enables product development velocity, and how reactive and Keep-The-Lights-On (KTLO) work will be reduced to increase proactive platform improvements. You will lead the AI transformation of platform engineering by re-architecting workflows, minimizing reactive work, implementing AI-powered tooling and automation, setting AI adoption pace, and building repeatable AI-enhanced systems. Additionally, you will unblock software delivery pain points, establish data-driven approaches to measure delivery velocity and quality, ensure the reliability and uptime of core infrastructure in partnership with service-owning teams, and lead the organization during major incidents when necessary. You are also accountable for team building, talent development, recruiting, mentoring, and sustaining a compelling work culture that supports growth, with clear growth paths for managers and ICs, ultimately owning output and outcomes for your teams and their systems.
RISC-V AI / HPC & Agentic Software Engineer
Lead and contribute to cross-functional efforts solving complex physical design challenges across IPs, projects, and advanced technology nodes. Develop and enhance RTL-to-GDS methodologies, including floorplanning, synthesis, placement and routing (P&R), static timing analysis (STA), signoff, and assembly. Architect and deploy AI/ML-driven solutions in production flows to improve engineering efficiency, turnaround time, and quality of results (QoR). Optimize EDA tools and custom CAD flows using data-driven and machine learning-based techniques, collaborating closely with verification, extraction, timing, Design for Test (DFT), and EDA vendors.
Director, Engineering, Proactive Offense
Lead and scale Horizon3.ai's Offensive Engineering organization, overseeing teams responsible for exploit development, offensive content, and attack automation within the NodeZero platform. Set clear technical and product direction for how NodeZero identifies, exploits, and validates vulnerabilities across large, complex environments. Partner with Product, Precision Defense, and Platform teams to define and deliver offensive capabilities that influence the roadmap and enhance customer outcomes. Drive execution from proof-of-concept through production to transform cutting-edge attack research into scalable, productized features. Stay hands-on to guide architectural decisions and evaluate exploit and automation approaches, mentoring technical leads in building resilient, modular systems. Build, mentor, and scale diverse teams of software engineers, exploit developers, and offensive researchers, fostering a culture of collaboration, creativity, and engineering excellence that bridges offensive and product software development. Collaborate across engineering, product, and GTM teams to align offensive innovation with business priorities and ensure delivery of impactful capabilities for customers. This role is central to the mission of delivering continuous, autonomous security testing at scale.
Technical Lead Manager, Platform (India)
Lead the design and development of a low-latency, scalable, and reliable model inference and serving stack for SSM foundation models. Manage and mentor a team of platform engineers, maintaining a high technical bar and strong engineering culture. Work closely with research and product teams to translate research into products. Own the architecture and roadmap for model serving infrastructure, distributed systems, and data processing platforms. Build highly parallel, high-quality data processing and evaluation infrastructure for foundation model training. Drive execution across ambiguous, zero-to-one engineering projects and platform initiatives. Establish best practices for reliability, observability, scalability, and performance across platform systems. Help recruit, interview, and build the engineering team in India. You will have significant autonomy to shape the platform and influence how AI is applied across devices and applications.
IC Agentic Engineering Manager - Stargate
Design and build agent-based systems to support infrastructure deployment and operations. Identify high-impact opportunities to apply agents across workflows such as cluster bring-up and deployment readiness, incident triage and root cause analysis, system validation and health monitoring, and capacity management and operational decision-making. Lead a small team while contributing directly as an individual contributor across system design, development, and integration. Partner with infrastructure, hardware, and networking teams to integrate agentic systems into production workflows. Develop systems that leverage telemetry, logs, and system signals to enable closed-loop automation. Define evaluation frameworks to measure system effectiveness, reliability, and operational impact. Drive iteration from prototype to production, ensuring robustness and scalability.
Senior Platform Engineer, Voice AI
Advance inference efficiency end-to-end by designing and prototyping algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference. Implement and maintain changes in high-performance inference engines, including kernel backends, speculative decoding, and quantization. Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost. Design and operate RL and post-training pipelines that optimize algorithms and systems jointly, making workloads more efficient with inference-aware training loops and techniques such as async RL rollouts and speculative decoding. Use these pipelines to train, evaluate, and iterate on frontier models, and co-design algorithms and infrastructure tightly coupled to efficient inference. Run ablations and scale-up experiments to understand trade-offs between model quality, latency, throughput, and cost, feeding insights back into model, RL, and system design. Own critical systems at production scale by profiling, debugging, and optimizing inference and post-training services under real workloads. Drive roadmap items that require engine modification, and establish metrics, benchmarks, and experimentation frameworks to rigorously validate improvements. Provide technical leadership by setting technical direction for cross-team efforts at the intersection of inference, RL, and post-training, and mentor other engineers and researchers on full-stack ML systems and performance engineering.
