Software Engineer, Compute Infrastructure
In this role, you will spin up and scale large Kubernetes clusters, including automating provisioning, bootstrapping, and cluster lifecycle management. You will build software abstractions that unify multiple clusters and provide a seamless interface to training workloads, and own node bring-up from bare metal through firmware upgrades, ensuring fast and repeatable deployment at massive scale. You will improve operational metrics by reducing cluster restart times and accelerating firmware and OS upgrade cycles, and integrate networking and hardware health systems to deliver end-to-end reliability across servers, switches, and data center infrastructure. You will develop monitoring and observability systems to detect issues early and maintain cluster stability under extreme load. Finally, you will solve real-time operational challenges, diagnose and fix issues quickly, and continuously improve automation, resilience, performance, and uptime across the systems powering frontier AI model training.
VP Engineering - London
The VP Engineering is responsible for defining and executing a scalable, defensible technology strategy; building a world-class engineering organization and platform; partnering with the CEO on product direction, investor communication, and long-term vision; and ensuring that frontier AI research is successfully bridged with enterprise-grade deployment. Responsibilities include architecting and scaling H's AI platform, making build-versus-buy decisions, ensuring performance, reliability, and cost efficiency, and establishing technical moats. The role involves translating AI capabilities into enterprise-ready products, standardizing bespoke systems, and balancing iteration speed with robustness. The VP Engineering builds and leads engineering teams, scales the organizational structure, and implements quality processes. They act as a key counterpart to the CEO in board and investor discussions, articulate technology and product roadmaps, and provide technical due diligence. They operate cross-functionally across Research, Product, and Go-to-Market, align engineering with customer and revenue goals, and help define long-term company positioning.
VP Engineering - Paris
The VP Engineering is responsible for defining and executing a scalable, defensible technology strategy, including architecting and scaling the AI platform with a focus on agents, orchestration, model integration, and infrastructure. They make critical build-versus-buy decisions across the technology stack, ensure performance, reliability, and cost efficiency at scale, and establish durable technical moats in a rapidly evolving AI landscape. They translate cutting-edge AI capabilities into repeatable, enterprise-ready products, standardize systems that are currently bespoke or forward-deployed, and balance speed of iteration with platform robustness and maintainability. They build and lead a high-caliber engineering organization, scaling from a startup structure to multi-layered, high-output teams, and implement processes that enable speed without sacrificing quality. The VP Engineering acts as a key counterpart to the CEO in board and investor discussions, clearly articulates the company's technology and product roadmap, and provides credibility and depth in technical due diligence and fundraising contexts. They operate at the intersection of Research, Product, and Go-to-Market, align engineering execution with customer outcomes and revenue growth, and help define the company's long-term product and platform positioning.
AI/ML Engineer
Develop, train, and optimize machine learning models for various mobile app features. Research and implement state-of-the-art AI techniques to improve user engagement and app performance. Collaborate with cross-functional teams to integrate AI-driven solutions into applications. Design and maintain scalable ML pipelines, ensuring efficient model deployment and monitoring. Analyze large datasets to derive insights and drive data-driven decision-making. Stay updated with the latest AI trends and best practices, incorporating them into development processes. Optimize AI models for mobile environments to ensure high performance and low latency.
Software Engineer
Design and build the backend systems and services that power Sesame's product, including data models, APIs, and distributed systems. Write durable software focusing on scalability, reliability, and correctness rather than prototyping. Build and evolve frameworks and libraries for other engineers to use, emphasizing good software design. Own the full lifecycle of services, including schema design, implementation, deployment, performance tuning, and on-call responsibilities. Work with various data stores such as relational databases, NoSQL, queues, caches, and search indexes. Identify and resolve performance bottlenecks while considering cost, throughput, and latency. Architect systems where machine learning models are a key component but not the sole aspect, such as real-time audio pipelines, agentic orchestration, and stateful conversation systems. Identify opportunities to improve developer efficiency through prototyping tools or workflow improvements and collaborate with the infrastructure team to productionize them.
Applied AI, Forward Deployed Machine Learning Engineer, Critical and Sovereign Institutions, EMEA
The Applied AI Engineer is responsible for the technical design, implementation, and deployment of AI solutions tailored to the needs of critical infrastructure and sovereign institutions. Responsibilities include individually deploying AI solutions into production for use cases with significant operational and strategic impact; developing state-of-the-art GenAI applications specific to sovereign institutions and critical infrastructure; and collaborating closely with researchers, AI engineers, and product teams on complex projects involving advanced fine-tuning, LLM applications, and contributions to open-source codebases. The role also involves participating in pre-sales discussions to understand client needs and provide technical guidance on Mistral's products, and working with product and science teams to improve offerings with a focus on security, compliance, and performance.
Manager, Forward Deployed Engineering - Paris
Lead and grow a team of Forward Deployed Engineers (FDE) delivering production systems with frontier models, owning end-to-end delivery outcomes through clarity, speed, tight coordination, and technical quality. Codify effective practices into tools, playbooks, and roadmap inputs to create leverage for OpenAI and the wider developer community. Monitor early indicators in product behavior, customer environments, or delivery practices and raise them with urgency. Use judgment to determine what requires action and set high performance standards for the FDE team while supporting individual growth through direct and actionable feedback. Define staffing and support strategies for scalable field teams without added complexity.
Engineering Manager, AI & Data Infrastructure
The Engineering Manager, AI & Data Infrastructure leads the AI & Data Infrastructure team responsible for the data and inference systems that support agent interactions, including streaming and batch pipelines for analytics and customer telemetry, realtime databases for low-latency behavior, and GPU and model-serving platforms for LLM inference. This role involves building, leading, and developing a high-performing team of data and ML infrastructure engineers through hiring, coaching, and performance management. Responsibilities include owning the technical strategy and roadmap for AI & Data Infrastructure, staying hands-on with design and code reviews, and leading architecture for high-throughput data systems and low-latency inference. The role also involves setting reliability, quality, and cost standards, investing in developer and analyst experience, raising standards on AI-assisted engineering practices, and partnering with Research, Product Engineering, Platform, and customer-facing teams to deliver data and inference capabilities, including enterprise deployments.
Associate Software Engineer, RLE
Contribute to building Reinforcement Learning Environments (RLE) and supporting infrastructure. Implement features across backend systems and frontend interfaces. Work with senior engineers to improve system reliability and performance. Help build modular workflow domains such as engineering, finance, and legal. Support data pipelines that power model training and evaluation.
Staff Software Engineer, Anti-Abuse & Security
Design and implement LLM guardrails that detect abuse scenarios in AI-generated code and agent interactions. Build AI-powered detection systems that use LLMs to identify malicious patterns, classify threats, and automate response decisions. Build and operate abuse detection systems that identify phishing, cryptomining, account takeover, and financial fraud across millions of daily user actions. Design automated response mechanisms that enforce platform policies without manual intervention. Own the full abuse response lifecycle: detection, investigation, enforcement, and handling appeals alongside Support and Legal. Analyze attack patterns using BigQuery and Hex, turning investigation findings into new detection rules. Maintain and extend internal detection tools (Slurper, Netwatch) that continuously monitor user activity. Integrate and tune security scanners (SAST, SCA) in CI pipelines with tight performance SLAs. Track abuse trends, measure detection effectiveness, and adapt defenses as attack patterns evolve.
