Span - Sr Product Engineer
Work on projects such as developing a product that root-causes KTLO (keep-the-lights-on) work and recommends solutions, building a software catalog that is user-friendly and works for monoliths, and protecting engineering focus time by systematically eliminating sources of distraction and mental load with AI.
AI/ML Engineer
Develop, train, and optimize machine learning models for various mobile app features. Research and implement state-of-the-art AI techniques to improve user engagement and app performance. Collaborate with cross-functional teams to integrate AI-driven solutions into applications. Design and maintain scalable ML pipelines, ensuring efficient model deployment and monitoring. Analyze large datasets to derive insights and drive data-driven decision-making. Stay current on AI trends and best practices, incorporating them into development processes. Optimize AI models for mobile environments to ensure high performance and low latency.
Lead Member of Technical Staff, Inference Infrastructure
The Lead Member of Technical Staff, Inference Infrastructure, provides technical leadership across multiple teams, driving the architecture and strategy for deploying optimized NLP models to production in low-latency, high-throughput, high-availability environments. They lead the design of customized deployments to meet specific customer needs and mentor engineers to raise the technical standards across the team. The role involves contributing to the development, deployment, and operation of the AI platform delivering large language models through easy-to-use API endpoints, and serving as a key point of contact for customers.
Sr. Applied AI Engineer
As a Sr. Applied AI Engineer at Taktile, the responsibilities include building reusable AI products by acting as the product owner for application areas and by designing, developing, and deploying robust generative AI agents as configurable solutions for customers. The role requires partnering with Solution and Forward Deployed Engineers during sales and implementation projects to understand customer needs in depth, and developing standard templates and reusable components to reduce activation time and address core challenges. The engineer must synthesize customer feedback into a clear vision for AI agents, iterating solutions to solve concrete use cases at scale and treating every agent as a product in its own right. Collaboration with the core product team is essential to prioritize platform features that support application development and to act as an expert user consultant during new feature development.
Tokens-as-a-Service (TaaS) Software Engineer
Develop systems and tooling to measure, monitor, and improve token throughput across first-party and partner-owned compute environments. Support performance benchmarking, tokenomics analysis, and model porting across heterogeneous infrastructure environments. Build tooling to integrate external or partner infrastructure into OpenAI’s internal compute, observability, and workload management systems. Develop and monitor operational metrics including billing, usage, SLAs, utilization, reliability, and throughput. Identify bottlenecks across hardware, networking, software, and workload enablement that prevent capacity from becoming productive tokens. Partner with compute, infrastructure, networking, finance, and operations teams to translate raw capacity into usable workload-serving capacity. Build dashboards, automation, and reporting systems that provide clear visibility into TaaS capacity, performance, and business outcomes.
Software Engineer I, Coding Pod
As a Software Engineer on the Coding Pod, you will build the data infrastructure and pipelines that power frontier AI coding models. Responsibilities include designing and building scalable data pipelines for generating, transforming, and validating large-scale coding datasets; developing systems for task generation, dataset curation, and quality assurance, including automated and human-in-the-loop evaluation workflows; integrating with developer ecosystems such as GitHub and building tooling to support real-world coding environments; working with containerized environments like Docker to safely execute and evaluate code at scale; building backend systems and APIs that power dataset delivery and model evaluation pipelines; collaborating closely with ML researchers, product managers, and other engineers to define evaluation methodologies and improve dataset quality; implementing automated grading, benchmarking, and assessment systems for coding tasks; debugging and optimizing pipeline performance, reliability, and scalability across distributed systems; and contributing to architectural decisions around data infrastructure, evaluation systems, and pipeline orchestration.
Software Engineer, Compute Infrastructure
In this role, you will spin up and scale large Kubernetes clusters, including automating provisioning, bootstrapping, and cluster lifecycle management; build software abstractions that unify multiple clusters and provide a seamless interface to training workloads; own node bring-up from bare metal through firmware upgrades ensuring fast and repeatable deployment at massive scale; improve operational metrics such as reducing cluster restart times and accelerating firmware or OS upgrade cycles; integrate networking and hardware health systems to deliver end-to-end reliability across servers, switches, and data center infrastructure; develop monitoring and observability systems to detect issues early and maintain cluster stability under extreme load; solve real-time operational challenges, diagnose and fix issues quickly, and continuously improve automation, resilience, performance, and uptime across the systems powering frontier AI model training.
VP Engineering - London
The VP Engineering is responsible for defining and executing a scalable, defensible technology strategy; building a world-class engineering organization and platform; partnering with the CEO on product direction, investor communication, and long-term vision; and bridging frontier AI research with enterprise-grade deployment. Responsibilities include architecting and scaling H's AI platform, making build-vs-buy decisions, ensuring performance, reliability, and cost efficiency, establishing technical moats, translating AI capabilities into enterprise-ready products, standardizing bespoke systems, balancing iteration speed with robustness, building and leading engineering teams, scaling organizational structure, implementing quality processes, acting as a key counterpart to the CEO in board and investor discussions, articulating technology and product roadmaps, providing technical due diligence, operating cross-functionally across Research, Product, and Go-to-Market, aligning engineering with customer and revenue goals, and helping define long-term company positioning.
VP Engineering - Paris
The VP Engineering is responsible for defining and executing a scalable, defensible technology strategy, including architecting and scaling the AI platform with a focus on agents, orchestration, model integration, and infrastructure. They make critical build versus buy decisions across the technology stack, ensure performance, reliability, and cost efficiency at scale, and establish durable technical moats in a rapidly evolving AI landscape. They translate cutting-edge AI capabilities into repeatable, enterprise-ready products, standardize systems that are currently bespoke or forward-deployed, and balance speed of iteration with platform robustness and maintainability. They build and lead a high-caliber engineering organization, scaling from a startup structure to multi-layered, high-output teams and implement processes to enable speed without sacrificing quality. The VP Engineering acts as a key counterpart to the CEO in board and investor discussions, clearly articulates the company's technology and product roadmap, and provides credibility and depth in technical due diligence and fundraising contexts. They operate at the intersection of Research, Product, and Go-to-Market, align engineering execution with customer outcomes and revenue growth, and help define the company’s long-term product and platform positioning.
Engineering Manager, Cooperative Systems
Lead and grow a small team building applied AI systems for internal operations. Design and build AI-powered automation systems in close partnership with customers. Stay hands-on in architecture and implementation across the full stack. Develop evolving systems spanning developer tools, automation platforms, knowledge graphs, and data systems. Deploy systems directly to internal users and close customers to iterate rapidly on real-world feedback. Engage frequently with scaled workforces to understand needs and validate solutions. Create systems for visibility and learning in hybrid workforces. Partner with product, research, and ops teams daily.
