⚠️ Sorry, this job is no longer available.

The AI job market moves fast. We keep up so you don't have to.

Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.


New AI Opportunities


Full Stack AI Engineer

Ryz Labs
Argentina
Contractor
Remote
Ryz Labs is looking for a Full Stack AI Engineer – Prod Support to build intelligent, secure, and scalable identity experiences across our client's platforms. In this role, you'll work end-to-end, from frontend UX to backend services and AI models, focusing on authentication, authorization, identity verification, fraud detection, and personalization. You'll partner closely with product, security, and data teams to embed AI into identity workflows while maintaining the highest standards of privacy, security, and reliability.

Responsibilities:
- Design, build, and deploy AI/ML solutions to automate ITSM ticket triage, classification, prioritization, and routing
- Develop NLP-based models for ticket summarization, root-cause detection, and resolution recommendation
- Implement AI-powered virtual agents / copilots to assist support engineers and end users
- Partner with Product Support, SRE, and Engineering teams to understand recurring issues and automate resolution workflows
- Build intelligent runbooks and self-healing automation for common incidents and service requests
- Enhance knowledge management by auto-generating and updating KB articles from resolved tickets
- Integrate AI solutions with ITSM platforms (HALO)
- Develop APIs, workflows, and event-driven automations across monitoring, logging, and ITSM tools
- Ensure seamless handoff between AI systems and human support engineers
- Analyze ticket, incident, and operational data to identify automation opportunities
- Train, evaluate, and continuously improve ML models using real-world support data
- Implement monitoring for model performance, drift, and accuracy in production
- Ensure AI solutions meet reliability, security, and compliance standards
- Implement guardrails, explainability, and auditability for AI-driven decisions
- Contribute to AI governance and responsible AI practices

Qualifications/Requirements of the Position:
- 5+ years of experience as a Full Stack Engineer, Platform Engineer, or AI Engineer, with ownership of production systems
- Strong proficiency in JavaScript/TypeScript and a modern frontend framework (React, Next.js, or equivalent)
- Backend development experience with Python, Java, or Node.js, including building and maintaining secure APIs
- Hands-on experience delivering AI/ML solutions into production environments
- Strong experience in Python and/or Java for backend and ML development
- Hands-on experience with NLP, LLMs, or GenAI (e.g., transformers, embeddings, RAG, prompt engineering)
- Experience integrating AI solutions with ITSM tools (HALO, etc.)
- Knowledge of REST APIs, microservices, and cloud platforms (AWS, Azure, or GCP)
- Familiarity with MLOps, CI/CD, model deployment, and monitoring
- Solid understanding of ITIL / ITSM processes (Incident, Problem, Change, Request)
- Experience working with Product Support, SRE, or NOC teams
- Ability to translate operational pain points into automation and AI use cases

Knowledge, Skills, and Abilities Required:
- Background in cybersecurity, fraud detection, trust & safety, or abuse prevention
- Experience with graph-based ML, NLP for security signals, or time-series anomaly detection
- Knowledge of adversarial ML, model evasion techniques, or secure model design
- Experience building systems that operate under strict latency or reliability constraints
- Exposure to chatbots, copilots, or agentic AI frameworks
- Experience in high-volume, 24x7 production support environments
- Publications, talks, or open-source contributions in AI or security

About RYZ Labs:
RYZ Labs is a startup studio founded in 2021 by two lifelong entrepreneurs. The founders of RYZ have worked at some of the world's largest tech companies and some of the most iconic consumer brands. They have lived and worked in Argentina for many years and have decades of experience in Latam. What brought them together was their passion for the early phases of company creation and the idea of attracting the brightest talents in order to build industry-defining companies in a post-pandemic world.

Our teams are remote and distributed throughout the US and Latam. They use the latest cutting-edge cloud computing technologies to create scalable and resilient applications. We aim to provide diverse product solutions for different industries and plan to build a large number of startups in the upcoming years. At RYZ, you will find yourself working with autonomy and efficiency, owning every step of your development. We provide an environment of opportunities, learning, growth, expansion, and challenging projects. You will deepen your experience while sharing and learning from a team of great professionals and specialists.

Our values and what to expect:
- Customer First Mentality: every decision we make should be made through the lens of the customer.
- Bias for Action: urgency is critical; expect that the timeline to get something done is accelerated.
- Ownership: step up if you see an opportunity to help, even if it's not your core responsibility.
- Humility and Respect: be willing to learn, be vulnerable, and treat everyone who interacts with RYZ with respect.
- Frugality: being frugal and cost-conscious helps us do more with less.
- Deliver Impact: get things done in the most efficient way.
- Raise our Standards: always be looking to improve our processes, our team, and our expectations. The status quo is not good enough and never should be.
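The ticket triage and routing described in this listing can be sketched in a few lines. This is a deliberately naive, stdlib-only illustration of the idea (a production system would use an NLP model, not keyword overlap), and the queue names and keywords are hypothetical examples, not Ryz Labs specifics.

```python
# Minimal illustration of ITSM ticket triage: score a ticket against
# per-queue keyword sets and route it to the best-matching queue.
# Queue names and keywords are hypothetical examples.

ROUTING_RULES = {
    "network": {"vpn", "dns", "latency", "firewall", "outage"},
    "identity": {"password", "login", "mfa", "sso", "account"},
    "hardware": {"laptop", "monitor", "keyboard", "battery"},
}

def triage(ticket_text: str) -> str:
    """Return the queue whose keywords best match the ticket text."""
    words = set(ticket_text.lower().split())
    scores = {queue: len(words & kws) for queue, kws in ROUTING_RULES.items()}
    best_queue, best_score = max(scores.items(), key=lambda kv: kv[1])
    # Fall back to a human-reviewed queue when nothing matches.
    return best_queue if best_score > 0 else "general"

print(triage("User cannot login after MFA reset"))  # identity
print(triage("Coffee machine is broken"))           # general
```

An ML-based version would replace the keyword scoring with a classifier trained on historical resolved tickets, which is exactly the feedback loop the responsibilities above describe.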

Full Stack AI Engineer – BuilderEx

Ryz Labs
Argentina
Contractor
Remote
Remote position, only for professionals based in Argentina or Uruguay.

At Ryz Labs we are looking for a Full Stack AI Engineer – BuilderEx to build intelligent, secure, and scalable identity experiences across our platforms. In this role, you will work end-to-end, from frontend UX to backend services and AI models, focusing on authentication, authorization, identity verification, fraud detection, and personalization. This role combines full stack engineering, AI/ML integration, and identity architecture, with close collaboration across product, security, and platform teams to deliver privacy-first, highly reliable systems.

Essential Responsibilities:
- Design, build, and maintain full-stack applications powering identity and access management (IAM) experiences.
- Develop and integrate AI/ML models for identity use cases such as fraud detection, anomaly detection, risk-based authentication, and identity verification.
- Lead and execute SSO migrations across products and platforms, consolidating authentication flows while minimizing user disruption.
- Drive domain consolidation initiatives by unifying identity systems, services, and user data models across multiple platforms or brands.
- Improve developer experience (DevEx) by building internal tools, SDKs, APIs, and documentation that simplify identity integrations.
- Design and evolve secure, scalable APIs supporting authentication, authorization, and identity data services.
- Partner closely with Security, Platform, and Product teams to implement and standardize protocols and patterns such as OAuth 2.0, OpenID Connect, SAML, JWT, and zero-trust architectures.
- Ensure AI-powered identity systems are observable, explainable, and production-ready, with robust monitoring and feedback loops.
- Balance security, performance, and usability while maintaining high standards for privacy and compliance.
- Contribute to architectural decisions, technical design discussions, and code quality standards.

Qualifications / Requirements of the Position:
- 5+ years of experience as a Full Stack Engineer, Platform Engineer, or AI Engineer with ownership of production systems.
- Strong proficiency in JavaScript/TypeScript and modern frontend frameworks (React, Next.js, or equivalent).
- Backend development experience with Python, Java, or Node.js, including building secure, scalable APIs.
- Hands-on experience delivering AI/ML solutions into production environments.
- Solid understanding of identity and access management (IAM) concepts, including authentication, authorization, and identity lifecycle.
- Proven experience leading or contributing to SSO migrations using OAuth 2.0, OpenID Connect, and/or SAML.
- Experience with domain consolidation or identity unification initiatives across multiple applications or platforms.
- Demonstrated ability to improve developer experience (DevEx) through internal tooling, APIs, SDKs, or platform improvements.
- Experience working with cloud platforms (AWS, GCP, or Azure) and containerized environments (Docker, Kubernetes).
- Strong security mindset, including experience designing systems with privacy, compliance, and resilience in mind.
- Ability to collaborate cross-functionally and communicate complex technical concepts clearly.

Knowledge, Skills, and Abilities Required:
- Background in cybersecurity, fraud detection, trust & safety, or abuse prevention.
- Experience with graph-based ML, NLP for security signals, or time-series anomaly detection.
- Knowledge of adversarial ML, model evasion techniques, or secure model design.
- Experience building systems operating under strict latency or reliability constraints.
- Prior work in regulated or high-risk environments.
- Security certifications or relevant coursework (e.g., OSCP, CISSP concepts).
- Experience with SIEM/SOAR tools or security telemetry platforms.
- Publications, talks, or open-source contributions in AI, security, or related fields.
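The JWT pattern this listing mentions can be shown concretely. Below is a minimal, stdlib-only sketch of HS256 signing and verification; a real identity system would use a vetted library (e.g. PyJWT), asymmetric keys, and claim validation (`exp`, `aud`, `iss`), none of which this toy covers.

```python
# Sketch of HS256 JWT signing/verification using only the standard library.
# Illustrative only: no claim validation, no key management.
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (
        _b64url(json.dumps(header, separators=(",", ":")).encode())
        + "."
        + _b64url(json.dumps(payload, separators=(",", ":")).encode())
    )
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + _b64url(sig)

def verify(token: str, secret: bytes) -> bool:
    """Check the signature over header.payload in constant time."""
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(_b64url(expected), sig)

token = sign({"sub": "user-123"}, b"dev-secret")
print(verify(token, b"dev-secret"))   # True
print(verify(token, b"wrong-secret")) # False
```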

AI/Machine Learning Engineer Intern

Handshake
$49 / hour
United States
Intern
Remote
About HandshakeHandshake is the career network for the AI economy. More than 20 million knowledge workers, 1,600 educational institutions, and 1 million employers — including 100% of the Fortune 50 — trust Handshake to power career discovery, hiring, and upskilling. From freelance AI training gigs to first internships to full-time careers and beyond, we connect talent with opportunity at every stage.This unique position in the ecosystem is driving exceptional growth — in 2025, we tripled ARR at scale.Why join Handshake now:Shape how careers evolve in the AI economy at global scale, with visible real-world impactWork directly with leading AI labs, Fortune 500 partners, and top educational institutionsHelp build a rapidly scaling business on a path toward multi-billion-dollar revenueThe RoleAs a AI/Machine Learning Engineering Intern, you will contribute to building intelligent product experiences that help students discover and secure opportunities. Your work will span search, recommendations, matching, and other discovery systems that power job exploration on Handshake.You will gain hands-on experience developing, evaluating, and deploying machine learning models in a production environment, learning how large-scale ML systems are designed, optimized, and maintained.This is a paid, full-time summer internship with two cohort options:May 18 – August 7, 2026June 15 – September 4, 2026 In this role, you will:Partner with senior engineers and data scientists to develop machine learning models that improve user experienceBuild Agentic pipelines/workflows to improve the Handshake student/employer user experienceContribute to experimentation, model evaluation, and performance monitoringParticipate in technical discussions, brainstorming sessions, and team reviewsDocument methodologies and findings to support knowledge sharing and long-term system improvementsYou HaveMust Haves:Are currently pursuing a degree in Computer Science, Data Science, or a related fieldHave strong 
programming skills in Python and experience with ML frameworks such as PyTorch or TensorFlowHave exposure to software engineering best practices (version control, testing, code reviews)Have familiarity with data analysis techniques and experience with SQLHave strong problem-solving skills and the ability to work in a collaborative team environmentHave strong communication skills and are able to explain technical concepts effectivelyBonus Points:Experience with cloud platforms such as AWS, Google Cloud, or AzureExperience with modern coding tools like Cursor/Claude code/CodexPrior internship or project experience in applied machine learning in domains such as NLP, search and recommendation systemsWe OfferHandshake provides benefits that help you feel supported and thrive at work and in life. (The below benefits apply to US-based interns.)💰 Competitive hourly compensation📚 Mentorship and hands-on learning from experienced ML engineers💻 5 days/week in-office experience🤝 Structured intern programming and team eventsExplore our mission, values, and open roles at joinhandshake.com/careers.
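The search-and-recommendation work this internship describes usually reduces to ranking items by similarity in an embedding space. A toy, stdlib-only sketch of that core step follows; the job IDs and vectors are invented for illustration, and in practice the embeddings would come from a trained model.

```python
# Toy embedding-based recommendation: rank jobs by cosine similarity
# between a profile vector and job vectors. Vectors here are made up.
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def recommend(profile, jobs, k=2):
    """Return the k job ids most similar to the profile embedding."""
    ranked = sorted(jobs, key=lambda j: cosine(profile, jobs[j]), reverse=True)
    return ranked[:k]

jobs = {
    "ml-intern":   [0.9, 0.1, 0.0],
    "sales-assoc": [0.0, 0.2, 0.9],
    "data-intern": [0.8, 0.3, 0.1],
}
print(recommend([1.0, 0.2, 0.0], jobs, k=2))  # ['ml-intern', 'data-intern']
```

Production systems replace the exact sort with approximate nearest-neighbor search, but the ranking objective is the same.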

Agent Deployment Architect (Charlotte, NC)

Hippocratic AI
United States
Full-time
Remote
About Us
Hippocratic AI is the leading generative AI company in healthcare. We have the only system that can have safe, autonomous, clinical conversations with patients. We have trained our own LLMs as part of our Polaris constellation, resulting in a system with over 99.9% accuracy.

Why Join Our Team
- Reinvent healthcare with AI that puts safety first. We're building the world's first healthcare-only, safety-focused LLM, a breakthrough platform designed to transform patient outcomes at a global scale. This is category creation.
- Work with the people shaping the future. Hippocratic AI was co-founded by CEO Munjal Shah and a team of physicians, hospital leaders, AI pioneers, and researchers from institutions like El Camino Health, Johns Hopkins, Washington University in St. Louis, Stanford, Google, Meta, Microsoft, and NVIDIA.
- Backed by the world's leading healthcare and AI investors. We recently raised a $126M Series C at a $3.5B valuation, led by Avenir Growth, bringing total funding to $404M with participation from CapitalG, General Catalyst, a16z, Kleiner Perkins, Premji Invest, UHS, Cincinnati Children's, WellSpan Health, John Doerr, Rick Klausner, and others.
- Build alongside the best in healthcare and AI. Join experts who've spent their careers improving care, advancing science, and building world-changing technologies, ensuring our platform is powerful, trusted, and truly transformative.

About the Role
We are seeking a dynamic and experienced forward-deployed Deployment Architect to drive the integration and deployment of our advanced AI agents as an embedded partner within client healthcare systems. In this role, you will partner directly with customers to deeply understand their operational workflows, translate their technical requirements into effective AI-powered conversations, and guide them through setting up integrations and deploying agents.

As a forward-deployed Deployment Architect, you will be physically present at client sites every week, working alongside clinical, operational, and IT leaders to implement, operationalize, and scale Hippocratic AI's solutions. You will serve as the technical backbone of our client relationships. From defining integration requirements to building and launching conversational AI agents that enhance patient care, you will have a pivotal impact on our product, customer success, and ultimately, patient outcomes.

What You'll Do
You will be on an embedded team with other Hippocratic AI employees, working together to transform a health system.
- Customer Workflow Discovery: Spend multiple days onsite at the customer every week, working to understand their pain points. Partner with customers to analyze and document their operational workflows, translating these into integration specifications and AI conversation designs.
- Integration Architecture: Define, document, and drive the technical architecture required to connect our solutions with client EHR systems, CRMs, population health tools, and other relevant platforms.
- AI Agent Design & Deployment: Design, customize, and deploy modular, scalable AI agents that align with the customer's unique needs and use cases.
- Technical Project Leadership: Lead and manage the technical post-sale implementation process, acting as the primary technical contact and ensuring a seamless deployment.
- Cross-Functional Collaboration: Work closely with engineering, product, machine learning, clinical, and sales teams to develop solutions that meet our customers' needs.
- Tooling & Process Automation: Develop reusable tooling, playbooks, and repeatable frameworks to improve implementation scalability and efficiency.

Location & Travel
This is a forward-deployed, onsite role.
- Candidates must be based in North Carolina and be able to travel to a client site in Charlotte weekly (5 days per week).
- Periodic travel to Hippocratic AI offices (e.g., Palo Alto) for strategic planning and team sessions.

What You Bring
Must-have:
- Bachelor's or Master's degree in Computer Science, Business, or a related field
- Minimum of 5 years of experience in healthcare implementation or product management
- Minimum of 5 years of experience integrating with enterprise EHRs (Epic, Cerner, Athena, etc.) or payers / digital health companies
- Proven ability to cultivate strong customer relationships and deliver exemplary product support
- Demonstrated proficiency in translating external stakeholder needs into internal product requirements
- Comfortable operating autonomously in a client-embedded role

Nice-to-have:
- Comfortable reading and debugging Python
- Start-up experience preferred

Join us and help build the future of safe, life-changing AI in healthcare. There's never been a more exciting moment to make an impact.

Please be aware of recruitment scams impersonating Hippocratic AI. All recruiting communication will come from @hippocraticai.com email addresses. We will never request payment or sensitive personal information during the hiring process.

Senior MLOps Engineer

Faculty
United Kingdom
Full-time
Remote
Why Faculty?
We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we've worked with over 350 global customers to transform their performance through human-centric AI. You can read about our real-world impact here.

We don't chase hype cycles. We innovate, build and deploy responsible AI which moves the needle, and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product and delivery expertise to our clients, who span government, finance, retail, energy, life sciences and defence. Our business, and reputation, is growing fast and we're always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology. AI is an epoch-defining technology; join a company where you'll be empowered to envision its most powerful applications, and to make them happen.

About the Team
Bringing medicine to patients is complex, expensive and high-risk. Faculty's Life Sciences team is concentrated on building AI solutions which optimise the research and commercialisation of life-changing therapies. We partner with major pharma firms, academic research centres and MedTech start-ups to design and deliver solutions which address critical healthcare challenges, and help to democratise health for all.

About the role:
As a Senior MLOps Engineer, we'll look to you to lead development and deployment of cutting-edge AI systems for our diverse clients. You'll design, build, and deploy scalable, production-grade ML software and infrastructure that meets rigorous operational and ethical standards. This is an ambitious, cross-functional role requiring a blend of technical expertise, engineering leadership, and confident client-facing skills.

What you'll be doing:
- Leading technical scoping and architectural decisions for high-impact ML systems
- Designing and building production-grade ML software, tools, and scalable infrastructure
- Defining and implementing best practices and standards for deploying machine learning at scale across the business
- Collaborating with engineers, data scientists, product managers, and commercial teams to solve critical client challenges and leverage opportunities
- Acting as a trusted technical advisor to customers and partners, translating complex concepts into actionable strategies
- Mentoring and developing junior engineers, actively shaping our team's engineering culture and technical depth

Who we're looking for:
- You understand the full ML lifecycle and have significant experience operationalising models built with frameworks like TensorFlow or PyTorch
- You bring deep expertise in software engineering and strong Python skills, focusing on building robust, reusable systems
- You have demonstrable hands-on experience with cloud platforms (e.g., AWS, Azure, GCP), including architecture, security, and infrastructure
- You have extensive experience working with container and orchestration tools such as Docker and Kubernetes to build and manage applications at scale
- You thrive in fast-paced, high-growth environments, demonstrating ownership and autonomy in driving projects to completion
- You communicate exceptionally well, confidently guiding both technical teams and senior, non-technical stakeholders

The Interview Process
- Talent Team Screen (30 minutes)
- Pair Programming Interview (90 minutes)
- System Design Interview (90 minutes)
- Commercial Interview (60 minutes)

Our Recruitment Ethos
We aim to grow the best team - not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live. We're united by a deep intellectual curiosity and desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.

Some of our standout benefits:
- Unlimited Annual Leave Policy
- Private healthcare and dental
- Enhanced parental leave
- Family-Friendly Flexibility & Flexible working
- Sanctus Coaching
- Hybrid Working (2 days in our Old Street office, London)

If you don't feel you meet all the requirements, but are excited by the role and know you bring some key strengths, please don't hesitate to apply, as you might be right for this role, or other roles. We are open to conversations about part-time hours.
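One concrete piece of the "operationalising models" work this role describes is monitoring production score distributions for drift. A common, simple metric is the Population Stability Index (PSI); the sketch below uses stdlib Python only, and the data and the 0.2 alert threshold are illustrative conventions, not Faculty specifics.

```python
# Illustrative drift check: Population Stability Index (PSI) between a
# baseline and a live score distribution over fixed equal-width bins.
import math

def psi(expected, actual, bins=4, lo=0.0, hi=1.0):
    """PSI over equal-width bins; > 0.2 is a common 'significant drift' flag."""
    width = (hi - lo) / bins

    def fractions(xs):
        counts = [0] * bins
        for x in xs:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(xs), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
drifted  = [0.7, 0.8, 0.8, 0.9, 0.9, 0.9, 0.95, 0.99]
print(round(psi(baseline, baseline), 6))  # 0.0
print(psi(baseline, drifted) > 0.2)       # True
```

In a deployed system a check like this would run on a schedule against fresh inference logs and page the team when the threshold is crossed.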

Platform Engineer

Faculty
United Kingdom
Full-time
Remote
Why Faculty?
We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we've worked with over 350 global customers to transform their performance through human-centric AI. You can read about our real-world impact here.

We don't chase hype cycles. We innovate, build and deploy responsible AI which moves the needle, and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product and delivery expertise to our clients, who span government, finance, retail, energy, life sciences and defence. Our business, and reputation, is growing fast and we're always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology. AI is an epoch-defining technology; join a company where you'll be empowered to envision its most powerful applications, and to make them happen.

About the team
Bringing medicine to patients is complex, expensive and high-risk. Faculty's Life Sciences team is concentrated on building AI solutions which optimise the research and commercialisation of life-changing therapies. We partner with major pharma firms, academic research centres and MedTech start-ups to design and deliver solutions which address critical healthcare challenges, and help to democratise health for all.

About the role
We're looking for a Platform Engineer to build the backbone of applied artificial intelligence for our customers. You will design, build, and deploy robust, secure, and scalable cloud infrastructure that powers cutting-edge data and machine learning workflows. Working in a cross-functional team, you'll solve complex challenges and empower our data scientists and ML engineers to deploy their work effectively, shaping the future of AI solutions.

What you'll be doing:
- Building robust, secure, and scalable cloud infrastructure for AI and ML workflows.
- Partnering with technical and non-technical stakeholders, from initial idea generation through to implementation and shipping.
- Enabling Machine Learning Engineers and Data Scientists by contributing to internal best practices, standards, and reusable code repos.
- Proactively identifying and recommending new ways customers can leverage cloud infrastructure to solve their key challenges.
- Creating and maintaining reusable, company-wide libraries and infrastructure-as-code.
- Researching and integrating the best open-source technologies to enhance Faculty's infrastructure capabilities.

Who we're looking for:
- You are pragmatic and outcome-focused, balancing the big picture with the details to execute complex projects in the real world.
- You think scientifically, always testing assumptions, seeking evidence, and looking for opportunities to improve how things are done.
- You have a drive to learn, constantly exploring new technologies and novel applications for existing tools.
- You possess deep experience with both Azure and AWS as well as Infrastructure as Code, especially Terraform.
- You are experienced in building and deploying containerized solutions using Docker and Kubernetes, supported by strong CI/CD and GitOps practices.
- You possess proficient knowledge of networking and cloud security.
- You excel at working directly with clients and stakeholders, confidently handling requirements gathering, technical planning, and scoping.

The Interview Process
- Talent Team Screen (30 minutes)
- Pair Programming Interview (90 minutes)
- System Design Interview (90 minutes)
- Commercial Interview (60 minutes)

Our Recruitment Ethos
We aim to grow the best team - not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live. We're united by a deep intellectual curiosity and desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.

Some of our standout benefits:
- Unlimited Annual Leave Policy
- Private healthcare and dental
- Enhanced parental leave
- Family-Friendly Flexibility & Flexible working
- Sanctus Coaching
- Hybrid Working (2 days in our Old Street office, London)

If you don't feel you meet all the requirements, but are excited by the role and know you bring some key strengths, please don't hesitate to apply, as you might be right for this role, or other roles. We are open to conversations about part-time hours.

Infrastructure Engineer

Faculty
United Kingdom
Full-time
Remote
Why Faculty?
We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we've worked with over 350 global customers to transform their performance through human-centric AI. You can read about our real-world impact here.

We don't chase hype cycles. We innovate, build and deploy responsible AI which moves the needle, and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product and delivery expertise to our clients, who span government, finance, retail, energy, life sciences and defence. Our business, and reputation, is growing fast and we're always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology. AI is an epoch-defining technology; join a company where you'll be empowered to envision its most powerful applications, and to make them happen.

About the team
Bringing medicine to patients is complex, expensive and high-risk. Faculty's Life Sciences team is concentrated on building AI solutions which optimise the research and commercialisation of life-changing therapies. We partner with major pharma firms, academic research centres and MedTech start-ups to design and deliver solutions which address critical healthcare challenges, and help to democratise health for all.

About the role
We're looking for an Infrastructure Engineer to build the backbone of applied artificial intelligence for our customers. You will design, build, and deploy robust, secure, and scalable cloud infrastructure that powers cutting-edge data and machine learning workflows. Working in a cross-functional team, you'll solve complex challenges and empower our data scientists and ML engineers to deploy their work effectively, shaping the future of AI solutions.

What you'll be doing:
- Building robust, secure, and scalable cloud infrastructure for AI and ML workflows.
- Partnering with technical and non-technical stakeholders, from initial idea generation through to implementation and shipping.
- Enabling Machine Learning Engineers and Data Scientists by contributing to internal best practices, standards, and reusable code repos.
- Proactively identifying and recommending new ways customers can leverage cloud infrastructure to solve their key challenges.
- Creating and maintaining reusable, company-wide libraries and infrastructure-as-code.
- Researching and integrating the best open-source technologies to enhance Faculty's infrastructure capabilities.

Who we're looking for:
- You are pragmatic and outcome-focused, balancing the big picture with the details to execute complex projects in the real world.
- You think scientifically, always testing assumptions, seeking evidence, and looking for opportunities to improve how things are done.
- You have a drive to learn, constantly exploring new technologies and novel applications for existing tools.
- You possess deep experience with both Azure and AWS as well as Infrastructure as Code, especially Terraform.
- You are experienced in building and deploying containerized solutions using Docker and Kubernetes, supported by strong CI/CD and GitOps practices.
- You possess proficient knowledge of networking and cloud security.
- You excel at working directly with clients and stakeholders, confidently handling requirements gathering, technical planning, and scoping.

The Interview Process
- Talent Team Screen (30 minutes)
- Pair Programming Interview (90 minutes)
- System Design Interview (90 minutes)
- Commercial Interview (60 minutes)

Our Recruitment Ethos
We aim to grow the best team - not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live. We're united by a deep intellectual curiosity and desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.

Some of our standout benefits:
- Unlimited Annual Leave Policy
- Private healthcare and dental
- Enhanced parental leave
- Family-Friendly Flexibility & Flexible working
- Sanctus Coaching
- Hybrid Working (2 days in our Old Street office, London)

If you don't feel you meet all the requirements, but are excited by the role and know you bring some key strengths, please don't hesitate to apply, as you might be right for this role, or other roles. We are open to conversations about part-time hours.

MLOps Engineer

Faculty
United Kingdom
Full-time
Remote
Why Faculty?
We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we've worked with over 350 global customers to transform their performance through human-centric AI. You can read about our real-world impact here.

We don't chase hype cycles. We innovate, build and deploy responsible AI which moves the needle - and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product and delivery expertise to our clients, who span government, finance, retail, energy, life sciences and defence.

Our business, and reputation, is growing fast and we're always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology. AI is an epoch-defining technology; join a company where you'll be empowered to envision its most powerful applications, and to make them happen.

About the Team
Bringing medicine to patients is complex, expensive and high-risk. Faculty's Life Sciences team is focused on building AI solutions which optimise the research and commercialisation of life-changing therapies. We partner with major pharma firms, academic research centres and MedTech start-ups to design and deliver solutions which address critical healthcare challenges, and help to democratise health for all.

About the role
Join us as an MLOps Engineer to deliver bespoke, impactful AI solutions for our diverse clients. You will be instrumental in bringing machine learning out of the lab and into the real world, contributing to scalable software architecture and defining best practices. Working with clients and cross-functional teams, you'll ensure technical feasibility and timely delivery of high-quality, production-grade ML systems.
What you'll be doing:
- Building and deploying production-grade ML software, tools, and infrastructure.
- Creating reusable, scalable solutions that accelerate the delivery of ML systems.
- Collaborating with engineers, data scientists, and commercial leads to solve critical client challenges.
- Leading technical scoping and architectural decisions to ensure project feasibility and impact.
- Defining and implementing Faculty's standards for deploying machine learning at scale.
- Acting as a technical advisor to customers and partners, translating complex ML concepts for stakeholders.

Who we're looking for:
- You understand the full machine learning lifecycle and have experience operationalising models built with frameworks like Scikit-learn, TensorFlow, or PyTorch.
- You possess strong Python skills and solid experience in software engineering best practices.
- You bring hands-on experience with cloud platforms and infrastructure (e.g., AWS, Azure, GCP), including architecture and security.
- You've worked with container and orchestration tools such as Docker and Kubernetes to build and manage applications at scale.
- You are comfortable with core ML concepts, including probability, statistics, and common learning techniques.
- You're an excellent communicator, able to guide technical teams and confidently advise non-technical stakeholders.
- You thrive in a fast-paced environment, and enjoy the autonomy to own, scope, solve, and deliver solutions.

The Interview Process
- Talent Team Screen (30 minutes)
- Pair Programming Interview (90 minutes)
- System Design Interview (90 minutes)
- Commercial Interview (60 minutes)

Our Recruitment Ethos
We aim to grow the best team - not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live.
We're united by a deep intellectual curiosity and a desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.

Some of our standout benefits:
- Unlimited Annual Leave Policy
- Private healthcare and dental
- Enhanced parental leave
- Family-Friendly Flexibility & Flexible Working
- Sanctus Coaching
- Hybrid Working (2 days in our Old Street office, London)

If you don't feel you meet all the requirements, but are excited by the role and know you bring some key strengths, please don't hesitate to apply - you might be right for this role, or for other roles. We are open to conversations about part-time hours.

Senior Python Engineer

Faculty
United Kingdom
Full-time
Remote
Why Faculty?
We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we've worked with over 350 global customers to transform their performance through human-centric AI. You can read about our real-world impact here.

We don't chase hype cycles. We innovate, build and deploy responsible AI which moves the needle - and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product and delivery expertise to our clients, who span government, finance, retail, energy, life sciences and defence.

Our business, and reputation, is growing fast and we're always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology. AI is an epoch-defining technology; join a company where you'll be empowered to envision its most powerful applications, and to make them happen.

About the Team
Bringing medicine to patients is complex, expensive and high-risk. Faculty's Life Sciences team is focused on building AI solutions which optimise the research and commercialisation of life-changing therapies. We partner with major pharma firms, academic research centres and MedTech start-ups to design and deliver solutions which address critical healthcare challenges, and help to democratise health for all.

About the role
As a Senior Python Engineer, you'll lead the development and deployment of cutting-edge AI systems for our diverse clients.
You'll design, build, and deploy scalable, production-grade ML software and infrastructure that meets rigorous operational and ethical standards. This is an ambitious, cross-functional role requiring a blend of technical expertise, engineering leadership, and confident client-facing skills.

What you'll be doing:
- Leading technical scoping and architectural decisions for high-impact ML systems
- Designing and building production-grade ML software, tools, and scalable infrastructure
- Defining and implementing best practices and standards for deploying machine learning at scale across the business
- Collaborating with engineers, data scientists, product managers, and commercial teams to solve critical client challenges and leverage opportunities
- Acting as a trusted technical advisor to customers and partners, translating complex concepts into actionable strategies
- Mentoring and developing junior engineers, actively shaping our team's engineering culture and technical depth

Who we're looking for:
- You understand the full ML lifecycle and have significant experience operationalising models built with frameworks like TensorFlow or PyTorch
- You bring deep expertise in software engineering and strong Python skills, focusing on building robust, reusable systems
- You have demonstrable hands-on experience with cloud platforms (e.g., AWS, Azure, GCP), including architecture, security, and infrastructure
- You have extensive experience working with container and orchestration tools such as Docker and Kubernetes to build and manage applications at scale
- You thrive in fast-paced, high-growth environments, demonstrating ownership and autonomy in driving projects to completion
- You communicate exceptionally well, confidently guiding both technical teams and senior, non-technical stakeholders

The Interview Process
- Talent Team Screen (30 minutes)
- Pair Programming Interview (90 minutes)
- System Design Interview (90 minutes)
- Commercial Interview (60 minutes)

Our Recruitment Ethos
We aim to grow the best team - not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live. We're united by a deep intellectual curiosity and a desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.

Some of our standout benefits:
- Unlimited Annual Leave Policy
- Private healthcare and dental
- Enhanced parental leave
- Family-Friendly Flexibility & Flexible Working
- Sanctus Coaching
- Hybrid Working (2 days in our Old Street office, London)

If you don't feel you meet all the requirements, but are excited by the role and know you bring some key strengths, please don't hesitate to apply - you might be right for this role, or for other roles. We are open to conversations about part-time hours.

Senior ML Operations (MLOps) Engineer

Eight Sleep
Full-time
Remote
Eight Sleep is the world's first sleep fitness company. Our mission is to fuel human potential through optimal sleep. We use innovative technology, detailed design, and proven science and data to personalize and improve each night for everybody, changing the way people sleep forever and for the better.

Backed by leading Silicon Valley investors, we have been recognized as one of Fast Company's Most Innovative Companies in 2018, 2022, and 2023. Our temperature-regulated technology, the Pod, is an absolute game changer, improving people's health and happiness by changing the way they sleep. The Pod was also recognized two years in a row by TIME's "Best Inventions of the Year." It is available for purchase in North America (the United States and Canada) and throughout the United Kingdom, Europe (Belgium, France, Germany, Italy, Netherlands, Spain, Sweden, Denmark), and Australia via eightsleep.com. We're excited by the success of the Pod to date and still have a long way to go toward achieving our mission.

Join our team as a Sr MLOps Engineer to help us bring current and next generations of Pod ML models to life. You'll be a part of a small team designing and implementing solutions with high levels of autonomy to bring our members better sleep. Your work will go directly to our fleet of existing Pods with low friction and direct impact to the business.
We are a fast-moving and fast-growing company, and we embrace individuals with a growth mindset and a strong desire to help us achieve our mission: improving people's lives through optimal sleep.

How you'll contribute
- Pioneer Cutting-Edge Technology: Introduce and implement cutting-edge ML technologies, integrating them into our products and processes to enable the future of health monitoring
- End-to-End Ownership: Own the design and operation of robust ML infrastructure, building scalable data, model, and deployment pipelines that ensure reliable delivery of models to production
- Cross-functional Collaboration: Partner with R&D, firmware, data, and backend teams to ensure ML inference operates reliably and scales to Pods everywhere
- Optimize for Performance: Drive cost-effective, scalable, and high-performance ML systems by optimizing compute, storage, and deployment resources across training and inference
- Enhance Tooling and Platforms: Develop tooling, microservices, and frameworks to streamline data processing, experimentation, and deployment
- Effective Remote Communication: Thrive in a remote work environment, ensuring clear and direct communication

What you need to succeed
- Proven Expertise: 5+ years of software engineering experience with a focus on ML infrastructure, distributed systems, or large-scale data processing in Python (e.g., PyTorch, TensorFlow, or similar)
- ML Operations Mastery: Hands-on experience with ML workflow orchestration and CI/CD pipelines for model deployment
- Scalable Deployment Experience: Demonstrated success shipping ML models to production at scale, handling telemetry, monitoring, and feedback loops across large device fleets or user populations
- Cloud-Native Expertise: Strong experience with AWS (Lambda, ECS, DynamoDB, CloudWatch) or equivalent cloud platforms for serving and monitoring ML systems
- Adaptive Problem Solver: A fast-paced, collaborative, and iterative approach to tackling complex problems

What sets you apart:
- Expertise in real-time ML workflows and streaming systems (e.g., Kinesis, Kafka, Flink)
- Demonstrated expertise in optimizing ML infrastructure for efficiency, latency, and cloud cost at scale
- Understanding of secure ML operations, privacy practices, and compliance considerations, particularly for health-related or IoT data
- Familiarity with health, wellness, or IoT domains, especially wearables or medical-grade devices

Why join Eight Sleep?

Innovation in a culture of excellence
Join us in a workplace where innovation isn't just encouraged - it's a standard. Our flagship product, the Pod, is a testament to our culture of excellence, beloved by hundreds of thousands of customers worldwide. At Eight Sleep, you will be part of a team that continuously pushes the boundaries of technology in sleep fitness.

Immediate responsibility and accelerated career growth
From your first day, you'll take on substantial responsibilities that have a direct impact on our core business and product success. We are a small team that empowers you to own your projects and see the tangible effects of your efforts, enhancing both your professional growth and our company's trajectory. Your path will be challenging but rewarding, perfect for those who thrive in fast-paced environments aiming for high standards.

Collaboration with exceptional talent
Work alongside other bright minds like you: at Eight Sleep, exceptional intelligence and a passion for breakthroughs are the norm. Our team members are not only experts in their fields but also avid innovators who thrive in our dynamic, fast-paced environment.

Equitable compensation and continuous equity investment
We extend equity participation to every full-time team member, recognizing and rewarding your direct contributions to our success.
This includes periodic equity refreshments based on performance, ensuring that as Eight Sleep grows and succeeds, so do you, perfectly aligning your achievements with the broader triumphs of the company.

Your own Pod - and other great benefits
Every Eight Sleep employee receives the very product that defines our mission: a Pod of their own. If you join us you'll get your own Pod, along with*:
- Full access to health, vision, and dental insurance for you and your dependents
- Supplemental life insurance
- Flexible PTO
- Commuter benefits to ease your daily commute
- Paid parental leave
*List of benefits may vary depending on your location

At Eight Sleep we continually celebrate the diverse community different individuals cultivate. As an equal opportunity employer, we stay true to our values by ensuring everyone feels they can flourish and grow. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status.

Software Engineer, Enterprise Zone (Backend & Full Stack) — Multiple Levels

Zapier
$143,900 – $261,200
United States
Full-time
Remote
AI at Zapier
At Zapier, we build and use automation every day to make work more efficient, creative, and human. So if you're using AI tools while applying here - that's great! We just ask that you use them responsibly and transparently. Check out our guidance on How to Collaborate with AI During Zapier's Hiring Process, including how to use AI tools like ChatGPT, Claude, Gemini, or others during our hiring process - and when not to.

Job Posted: December 1, 2025
Latest update: February 19, 2026
Location: Americas - North, Central and South America

Hi there!
Zapier is leading the way in AI automation. We've helped small businesses increase productivity and serve their customers better through an extensive platform of integrations, robust workflow automations, and practical AI applications. As we continue our mission to make automation work for everyone, our current focus is to unlock that vast potential within Enterprise customers - organizations comprising dozens, hundreds, or even thousands of employees. These companies already recognize the value of automation; we are now empowering them to implement it at scale, ensuring control, visibility, and confidence in their chosen tools.

To do this, Zapier is expanding its Enterprise zone and hiring across a number of roles and levels, including backend, full-stack, and frontend engineers. We're looking for engineers passionate about solving complex problems and delivering new value to customers, driving innovation in workflow management, and changing the future of automation for some of the largest companies in the industry.

The Enterprise zone is made up of several teams. We're currently hiring across three of them:
- Enterprise Response
- Asset Management
- Insights

Team Focus
The Enterprise Response team is crucial for maintaining the resilience and reliability of our systems, particularly during critical periods, mostly for our largest customers.
This team innovates in high-demand customer areas and delivers highly requested core features that span multiple teams and organizational boundaries. It touches every part of the stack and plays a critical role in ensuring our biggest customers get fast, dependable service.

In this role, you'll:
- Lead responses to critical incidents and customer-impacting events
- Work directly with enterprise customers
- Collaborate across engineering, product, and support to solve urgent problems
- Build internal tools and improve core product features based on real-world customer feedback
- Build a new extensible platform for account and organization-level settings, applying declarative controls, VPC connections to on-prem data, and instrumenting account-wide AI guardrail controls

The Asset Management team is responsible for building and scaling the systems that make every Zapier asset, across products and workspaces, discoverable, governable, and portable. You will be responsible for architecting and deepening enterprise-grade Platform APIs to manage assets across multiple products, ensuring consistency, reliability, and performance at scale for all of our customers.

In this role, you'll:
- Design and implement scalable backend architecture and APIs for asset cataloguing, ownership, and security
- Build intuitive, unified interfaces for organizing, transferring, and governing assets across accounts and products
- Embed AI technology into core product experiences to proactively monitor and suggest optimizations to account management
- Drive technical alignment with senior leaders and stakeholders across the organization
- Deliver fast-paced solutions and POCs, prioritizing impact over deep polish, while balancing short-term execution with long-term technical vision
- Tackle challenges in data standards, compliance automation, and cross-product integrations

This team is central to Zapier's multi-product strategy, unlocking faster product onboarding, reducing manual support burden, and delivering key enterprise capabilities.

The Enterprise Insights team builds and maintains the core systems that support our customers. We're not following an existing playbook; we're writing it. Here, you will help build a strong account oversight platform that connects and supports Zapier's products.

You'll work on:
- Building scalable platforms like User Notifications, Unified Smart Inboxes, Audit Logs, and Public APIs to give customers better visibility, control, and access across Zapier
- Supporting key use cases such as compliance, error tracking, and transactional alerts through reliable communication systems
- Creating and improving observability features like SLOs, alerts, and dashboards to help Zapier grow
- Strengthening core systems as we expand to serve larger and more complex customers
- Leading important projects from start to finish, using AI to speed up development and improve the customer experience, while working closely with your team

All of these roles are instrumental in ensuring Zapier can deliver on its upmarket strategy and mission. They have a direct impact in driving higher company ARR.
These are high-impact, high-reward roles that will include far-reaching features, new challenges, and close connection to customers each day.

Our Commitment to Applicants
- Culture and Values at Zapier
- Zapier Guide to Remote Work
- Zapier Code of Conduct
- Diversity and Inclusivity at Zapier

High Visibility, High Impact
If you're someone who thrives on ambitious goals and meaningful work, this is the team for you:
- Work on high-impact features that directly influence premium plan sales and help close strategic enterprise deals.
- Collaborate cross-functionally with Product, Design, Revenue, Support, and GTM teams.
- Work in a high-visibility area with strong leadership support and a clear growth path for engineers.
- Play a key role in Zapier's growth trajectory - you're not a cog in a machine, you're part of the engine.

Openings
We're hiring for multiple openings across two levels - Engineer (L3) and Senior Engineer (L4) - across these teams. We run a shared pipeline and map candidates to the right level based on what we observe throughout the process. You don't need to decide upfront. The difference between levels comes down to scope:
- L3 engineers own outcomes end-to-end within their team - they break down complex problems, ship iteratively, and drive strong technical execution in their domain.
- L4 engineers do all of that and extend it beyond their team - they coordinate across ownership boundaries, shape technical direction for adjacent systems, and bring other engineers along with them.

Our level definitions may look different from what you've seen elsewhere, so we map based on what we observe in the process, not your current title. We'll be transparent about leveling before you reach the final stages.

About You
These qualities apply across both levels - scope and context will vary based on where you land. We know great engineers come from a range of backgrounds and career paths. You'll be successful in these roles if:
- You're a proven SaaS engineering builder. You have deep experience designing, developing, and maintaining complex mission-critical systems, whether that's measured in years or in the breadth and impact of what you've shipped. You're comfortable writing backend services (Python or Node.js ideal), and you're not afraid to go deep into API design, database modeling, or CI/CD pipelines.
- You work through AI agents, not alongside them. Your daily development workflow is built around directing and reviewing agent-written code, not writing it by hand. You have opinions about which models to use for which tasks, you've hit real failure modes and built mitigations, and your workflow is actively evolving. Bonus: you use multi-agent patterns, enable others on your team to build faster with AI, or have scaled AI impact beyond yourself.
- You know event-driven systems and have strong platform thinking. You've built scalable tools that help other teams move faster and more simply. You're familiar with systems like Kafka, SQS, and Fastify, and you've worked with observability tools like Datadog, Grafana, and Graylog.
- You work close to the customer. You don't wait for product specs or UX research to tell you what to build. You pull customer tickets, talk to CSMs, review feedback, and use that context to drive what you ship. You think in milestone slices: what's the smallest thing I can get into production that moves us forward and teaches us something?
- You ship through ambiguity. When there's no spec, no designs, and no clear path forward, your first instinct is to gather customer evidence, break the problem into a narrow first slice, and get rough working software into production within days, not weeks. You use working prototypes to drive alignment with stakeholders rather than waiting for consensus before building. You're comfortable throwing work away when direction changes, and you treat discarded work as a fast learning loop, not a loss.
- You own your work and yourself. You take initiative without waiting for permission, and you ship fast and share early. But you also own your mistakes, your gaps, and your role in friction, openly. You say "I don't know" and "I was wrong" out loud, early, and without shame. You hold yourself to a high standard and can articulate where you've fallen short, not just where you've succeeded.
- You communicate proactively in an async-first culture. You manage up and across, flagging risks, surfacing decisions, and keeping stakeholders informed without being asked. You use written artifacts, working demos, and rough prototypes as your primary communication tools, not meetings.
- You believe enterprise can move fast. You've seen (or are excited to prove) that shipping to enterprise customers doesn't have to mean slow, waterfall-style cycles. You know how to balance compliance, security, and rollout considerations with tight iteration loops and rapid customer feedback. You see enterprise constraints as interesting design problems, not reasons to slow down.

How to Apply
At Zapier, we believe that diverse perspectives and experiences make us better, which is why we have a non-standard application process designed to promote inclusion and equity. We're looking for the best fit for each of our roles, regardless of the type of companies in your background, so we encourage you to apply even if your skills and experiences don't exactly match the job description. All we ask is that you answer a few in-depth questions in our application that would typically be asked at the start of an interview process. This helps speed things up by letting us get to know you and your skillset a bit better right out of the gate. Please be sure to answer each question; the resume and CV fields are optional.
Education is not a requirement for our roles; however, if you receive an offer, you will need to include your most recent educational experience as part of our background check process.

After you apply, you are going to hear back from us - even if we don't see an immediate fit with our team. In fact, throughout the process, we strive to never go more than seven days without letting you know the status of your application. We know we'll make mistakes from time to time, so if you ever have questions about where you stand or about the process, just ask your recruiter!

Zapier is an equal-opportunity employer and we're excited to work with talented and empathetic people of all identities. Zapier does not discriminate based on someone's identity in any aspect of hiring or employment as required by law and in line with our commitment to Diversity, Inclusion, Belonging and Equity. Our code of conduct provides a beacon for the kind of company we strive to be, and we celebrate our differences because those differences are what allow us to make a product that serves a global user base. Zapier will consider all qualified applicants, including those with criminal histories, consistent with applicable laws.

Zapier prioritizes the security of our customers' information and is dedicated to adhering to all applicable data privacy laws. You can review our privacy policy here.

Zapier is committed to inclusion. As part of this commitment, Zapier welcomes applications from individuals with disabilities and will work to provide reasonable accommodations. If reasonable accommodations are needed to participate in the job application or interview process, please contact jobs@zapier.com.

Application Deadline: The anticipated application window is 30 days from the date the job is posted, unless the number of applicants requires it to close sooner or later, or if the position is filled.

Even though we're an all-remote company, we still need to be thoughtful about where we have Zapiens working.
Check out this resource for a list of countries where we currently cannot have Zapiens permanently working.

Staff Software Engineer - Product Fundamentals

Multiverse
United Kingdom
Full-time
Remote
Multiverse is the upskilling platform for AI and tech adoption. We have partnered with 1,500+ companies to deliver a new kind of learning that's transforming today's workforce. Our upskilling apprenticeships are designed for people of any age and career stage to build critical AI, data, and tech skills. Our learners have driven $2bn+ ROI for their employers, using the skills they've learned to improve productivity and measurable performance.

In June 2022, we announced a $220 million Series D funding round co-led by StepStone Group, Lightspeed Venture Partners and General Catalyst. With a post-money valuation of $1.7bn, the round made us the UK's first EdTech unicorn. But we aren't stopping there. With a strong operational footprint and 800+ employees, we have ambitious plans to continue scaling. We're building a world where tech skills unlock people's potential and output. Join Multiverse and power our mission to equip the workforce to win in the AI era.

The Elevator Pitch: Why will you enjoy this new opportunity?
This is an opportunity to join the engineering team at Multiverse, the UK's first edtech unicorn, where you will directly impact the lives of thousands of people by equipping the workforce to win in an AI era. You will do more than just use AI tools; you will architect the systems that allow us to deploy AI-powered features at scale.

This role thrives at the intersection of Will Larson's Tech Lead and Architect archetypes, acting as a force multiplier who guides direction across multiple teams. You will move beyond individual feature delivery to own complex, cross-functional problems, ensuring that as we scale our AI capabilities, we guarantee stability, security, and architectural integrity. If you are motivated by resolving ambiguous engineering challenges and leading decision-making in critical situations, while still keeping your hands in the code, this is the place for you.

Where you can make an impact
We have three core Groups.
This role sits within the Product Fundamentals Group, but will have lots of collaboration with the teams across the department:Learning Product Group: Mission meets market. You will own the full learner journey from application to graduation. You’ll build seamless experiences that prove the ROI of programmes to the world’s biggest employers.Product Fundamentals Group (your future Group!): The connective tissue. You will build the shared capabilities that power our ecosystem - from our LMS and our AI guide (Atlas) to the live video sessions where coaching happens and more.Foundations Group: Scale and velocity. You will build the "paved road" for our internal tech organisation. You’ll focus on Developer Experience (DevEx), infrastructure, and core services that enable us to reach our ambitious growth goals.Success in the Role: What are your goals for the first 6-12 months?• Audit and Alignment (Months 1-3): You will apply deep technical knowledge to understand our stack (TypeScript, Python, AWS) and current AI constraints. You will begin guiding technical direction by partnering with Product and Engineering leadership to shape a technical roadmap that aligns with business objectives. As part of this on-boarding, you dive deep into one or more of our on-going initiatives, learning about our systems by actually working on them.• Strategic Execution (Months 4-6+): You will lead one or more highly complex projects to help radically accelerate our AI experiential roadmap. You will demonstrate predictable delivery, ensure that these new systems are well-tested, maintain a high bar on operational excellence, and deliver an impactful user experience.• Framework Definition (Months 6-12): You will lead defining frameworks and guide architectural strategy across our teams. 
Success looks like the creation of a "tight sense of engineering at Multiverse", where you have coached others to handle complex intersections of business and technology, effectively delegating execution while maintaining quality standards.

What type of work will you be doing?

You will operate as a "Strategic Driver," balancing code contributions with high-level enablement.

• Solving Ambiguity: Functioning as a Tech Lead, you will resolve ambiguous engineering challenges. You will act thoughtfully and decisively in critical situations, seeking diverse perspectives and encouraging productive debate even when decisions are unpopular.

• Architectural Strategy: Functioning as an Architect, you will drive the design and implementation plan for major new platforms. You will ensure our AI systems are not just experimental, but reliable and performant for the customer, minimising outages and ensuring a cohesive product experience.

• Cross-Team Leadership: You will coordinate the delivery of broader initiatives that span multiple work-streams. This includes "organisational wrangling" to ensure that the technical debt and scalability strategies you define are adopted by adjacent teams.

• Innovation & Tooling: You will leverage emerging tech (like AI) to solve problems that competitor products cannot address. You will explore and prototype with tools like Cursor or Gemini, but your focus will be on hardening these into foundational components that the rest of the engineering organisation can build upon.

Our Tech Stack & Tooling

TypeScript is our primary language of choice for both frontend development (with ReactJS) and the backend, running inside containers. We also have a fair few older services written in either Python or Elixir on the backend.
We manage all our infrastructure using Terraform and mostly host on AWS. You’ll find tools like GitHub, CircleCI, Datadog, and Backstage as daily drivers for our engineers. We also make heavy use of Cursor and other AI-powered tools to accelerate our ability to deliver outcomes.

Leadership, Structure & Culture

Group & Teams: Each team has 4-6 Engineers, a Product Manager, and an Engineering Manager. There are also Product Designers & User Researchers who typically work across teams. We then collect domain-related Teams into a Group. Groups typically have 4-6 Teams. In this Staff Engineer role, you’ll be operating across the Teams within the Product Fundamentals Group rather than being dedicated to a single Team.

Your Support System: You report to an Engineering Director who is invested in your growth. You are not an island; you are part of a supportive community of Staff+ Engineers and Engineering Managers who collaborate regularly.

Autonomy vs. Alignment: We reject micro-management. We provide the strategic objectives, but you and your team define how to get there.

Our Vibe: We value kindness, candor, and curiosity.
We prioritise a culture of support, facilitation, and deep care over hierarchy.

Benefits

Time off - 27 days holiday, plus 5 additional days off: 1 life event day, 2 volunteer days, 2 company-wide wellbeing days (M-Powered Weekend) and 8 bank holidays per year

Health & Wellness - private medical insurance with Bupa, a medical cashback scheme, life insurance, gym membership & wellness resources through Wellhub, and access to Spill, an all-in-one mental health support service

Hybrid work offering - for most roles we collaborate in the office three days per week, with the exception of Coaches and Instructors who collaborate in the office once a month

Work-from-anywhere scheme - you'll have the opportunity to work from anywhere, up to 10 days per year

Space to connect - beyond the desk, we make time for weekly catch-ups, seasonal celebrations, and have a kitchen that’s always stocked!

Our Commitment to Diversity, Equity and Inclusion

We’re an equal opportunities employer. And proud of it. Every applicant and employee is afforded the same opportunities regardless of race, colour, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender, gender identity or expression, or veteran status. This will never change. Read our Equality, Diversity & Inclusion policy here.

Our Commitment to Safeguarding

Multiverse is committed to safeguarding and promoting the welfare of our learners. We expect all employees to share this commitment and adhere to our Safeguarding Policy, our Prevent Policy and all other Multiverse company policies. Successful applicants will be required to undertake at least a Basic check via the Disclosure and Barring Service (DBS). For roles that involve a Regulated Activity, successful applicants must also undergo an Enhanced DBS check, including a Children’s Barred List check and a Prohibition Order check.
Roles involving Regulated Activity may interact with vulnerable groups and are therefore exempt from the Rehabilitation of Offenders Act 1974, meaning applicants are required to declare any convictions, cautions, reprimands, and final warnings. Providing false information is an offence and could result in the application being rejected, or in summary dismissal if the applicant has been selected, and possible referral to the police and the DBS.

Senior Forward Deployed Engineer

Langfuse
€90,000 – €160,000
European Union
Full-time
Remote
About Langfuse

Langfuse is an open source LLM engineering platform that helps teams build useful AI applications via tracing, evaluation, and prompt management (mission, product). We are now part of ClickHouse.

We're building the "Datadog" of this category; model capabilities continue to improve, but building useful applications is really hard, both in startups and enterprises. We are the largest open source solution in this category: trusted by 19 of the Fortune 50, >2k customers, >26M monthly SDK downloads, >6M Docker pulls.

We joined ClickHouse in January 2026 because LLM observability is fundamentally a data problem and Langfuse already ran on ClickHouse. Together we can move faster on product while staying true to open source and self-hosting, and join forces on GTM and sales to accelerate revenue. Previously backed by Y Combinator, Lightspeed, and General Catalyst.

We're a small, engineering-heavy, and experienced team in Berlin and San Francisco. We are also hiring for engineering in EU timezones and expect one week per month in our Berlin office (how we work).

Your impact

Make our best customers successful in production and expanding over time.
Improve net revenue retention via adoption, outcomes, and proactive risk management.
Scale your impact to our large user base and OSS community by contributing to documentation, guides, and other public content.
Create a tight loop from “what customers do” (your deep understanding of top customers) → “what we should build” (feedback to the product engineering team) → “how the GTM org explains it” (GTM enablement).

What you’ll do

1) Own strategic customer relationships (portfolio ownership)
Be the primary technical partner for 10–20 strategic accounts (large, highly engaged, or aligned with our roadmap).
Run onboarding, success planning, and regular deep dives into the customer’s AI architecture and workflows.
Drive adoption of key product capabilities across the lifecycle: initial setup, team workflows, scaling, and expansion.

2) Production readiness + architectural guidance
Lead customers through production readiness: instrumentation strategy, data modeling choices, evaluation setup, alerting/monitoring expectations, security & privacy considerations, and operational playbooks.
Provide pragmatic architecture guidance for real LLM systems (agents, tool use, RAG, evals, prompt iteration, dataset curation, feedback loops).
Build small prototypes, reference implementations, and demos when it unblocks a customer. Turn them into reusable templates that can be published.

3) Escalation leadership
Own the technical leadership during high-severity customer moments: triage, root-cause coordination, and crisp communication.
Be the point of contact for the customer, partner closely with Engineering, and be proactive in how you resolve issues.
Establish escalation paths, runbooks, and prevention mechanisms for repeat issues.

4) Turn customer signal into product + docs + enablement
Aggregate patterns across your portfolio and translate them into actionable product feedback (clear problem statements, impact, and recommended solutions).
Create customer-facing assets (docs, guides, best practices, demos) that start as one customer’s question and become durable collateral.
Enable the broader ClickHouse GTM org: training, playbooks, crisp messaging, and “how to win” narratives for AI engineering teams.

What we’re looking for

Must-haves
Senior experience in a customer-facing technical role: TAM, Solutions Engineer, Solutions Architect, Forward Deployed Engineer, Customer Success Engineer, or similar where you owned outcomes.
Strong technical foundation: you can debug integrations and reason about distributed systems, APIs/SDKs, and cloud infrastructure.
Demonstrated work in applied AI / AI engineering: building, operating, or enabling LLM applications (agents, RAG, eval pipelines, prompt tooling, experimentation).
Excellent communication: you can lead technical meetings, drive decisions, and write docs engineers actually follow.
High ownership: you ship artifacts, close loops, and create repeatable systems rather than bespoke one-offs.

Nice-to-haves
Experience with devtools / OSS ecosystems and developer-centric GTM.
Familiarity with observability concepts (tracing/metrics/logs), data pipelines, and evaluation frameworks.
Track record of technical writing or enablement (workshops, reference architectures, public docs).

Process

We can run the full process to your offer letter in less than 7 days (hiring process).

Tech Stack

We run a TypeScript monorepo: Next.js on the frontend, Express workers for background jobs, PostgreSQL for transactional data, ClickHouse for tracing at scale, S3 for file storage, and Redis for queues and caching. You should be familiar with a good chunk of this, but we trust you'll pick up the rest quickly (Stack, Architecture).

How we ship

Link to handbook
We trust you to take ownership (ownership overview) of your area. You identify what to build, propose solutions (RFCs), and ship them. Everyone here thinks about the user experience and the technical implementation at the same time. Everyone manages their own Linear.
You're never alone. Anyone from the team is happy to go into a whiteboard session with you. 15 minutes of shared discussion can very much improve the overall output.
We implement maker schedule and communication. There are two recurring meetings a week: a Monday check-in on priorities (15 min) and a demo session on Fridays (60 min).
Code reviews are mentorship. New joiners get all PRs reviewed to learn the codebase, patterns, and how the systems work (onboarding guide).
We use AI as much as possible in our workflows to make our users happy. We encourage everyone to experiment with new tooling and AI workflows.

Why Langfuse (now part of ClickHouse)

This role puts you at the forefront of the AI revolution, partnering with engineering teams who are building the technology that will define the next decade(s).
This is an open-source devtools company. We ship daily, talk to customers constantly, and fight for great DX. Reliability and performance are central requirements.
Your work ships under your name. You'll appear on changelog posts for the features you build, and during launch weeks, you'll produce videos to announce what you've shipped to the community. You’ll own the full delivery end to end.
We're solving hard engineering problems: figuring out which features actually help users improve AI product performance, building SDKs developers love, visualizing data-rich traces, rendering massive LLM prompts and completions efficiently in the UI, and processing terabytes of data per day through our ingestion pipeline.
You'll work closely with the ClickHouse team and learn how they build a world-class infrastructure company. We're in a period of strong growth: Langfuse is growing organically and accelerating through ClickHouse's GTM. (Why we joined ClickHouse)
If you wonder what to build next, our users are a Slack message or a GitHub discussions post away.
You’re on a continuous learning journey. The AI space develops at breakneck speed and our customers are at the forefront. We need to be ready to meet them where they are and deliver the tools they need just in time.

Senior Technical Account Manager

Langfuse
€90,000 – €160,000
European Union
Full-time
Remote
About Langfuse

Langfuse is an open source LLM engineering platform that helps teams build useful AI applications via tracing, evaluation, and prompt management (mission, product). We are now part of ClickHouse.

We're building the "Datadog" of this category; model capabilities continue to improve, but building useful applications is really hard, both in startups and enterprises. We are the largest open source solution in this category: trusted by 19 of the Fortune 50, >2k customers, >26M monthly SDK downloads, >6M Docker pulls.

We joined ClickHouse in January 2026 because LLM observability is fundamentally a data problem and Langfuse already ran on ClickHouse. Together we can move faster on product while staying true to open source and self-hosting, and join forces on GTM and sales to accelerate revenue. Previously backed by Y Combinator, Lightspeed, and General Catalyst.

We're a small, engineering-heavy, and experienced team in Berlin and San Francisco. We are also hiring for engineering in EU timezones and expect one week per month in our Berlin office (how we work).

Your impact

Make our best customers successful in production and expanding over time.
Improve net revenue retention via adoption, outcomes, and proactive risk management.
Scale your impact to our large user base and OSS community by contributing to documentation, guides, and other public content.
Create a tight loop from “what customers do” (your deep understanding of top customers) → “what we should build” (feedback to the product engineering team) → “how the GTM org explains it” (GTM enablement).

What you’ll do

1) Own strategic customer relationships (portfolio ownership)
Be the primary technical partner for 10–20 strategic accounts (large, highly engaged, or aligned with our roadmap).
Run onboarding, success planning, and regular deep dives into the customer’s AI architecture and workflows.
Drive adoption of key product capabilities across the lifecycle: initial setup, team workflows, scaling, and expansion.

2) Production readiness + architectural guidance
Lead customers through production readiness: instrumentation strategy, data modeling choices, evaluation setup, alerting/monitoring expectations, security & privacy considerations, and operational playbooks.
Provide pragmatic architecture guidance for real LLM systems (agents, tool use, RAG, evals, prompt iteration, dataset curation, feedback loops).
Build small prototypes, reference implementations, and demos when it unblocks a customer. Turn them into reusable templates that can be published.

3) Escalation leadership
Own the technical leadership during high-severity customer moments: triage, root-cause coordination, and crisp communication.
Be the point of contact for the customer, partner closely with Engineering, and be proactive in how you resolve issues.
Establish escalation paths, runbooks, and prevention mechanisms for repeat issues.

4) Turn customer signal into product + docs + enablement
Aggregate patterns across your portfolio and translate them into actionable product feedback (clear problem statements, impact, and recommended solutions).
Create customer-facing assets (docs, guides, best practices, demos) that start as one customer’s question and become durable collateral.
Enable the broader ClickHouse GTM org: training, playbooks, crisp messaging, and “how to win” narratives for AI engineering teams.

What we’re looking for

Must-haves
Senior experience in a customer-facing technical role: TAM, Solutions Engineer, Solutions Architect, Forward Deployed Engineer, Customer Success Engineer, or similar where you owned outcomes.
Strong technical foundation: you can debug integrations and reason about distributed systems, APIs/SDKs, and cloud infrastructure.
Demonstrated work in applied AI / AI engineering: building, operating, or enabling LLM applications (agents, RAG, eval pipelines, prompt tooling, experimentation).
Excellent communication: you can lead technical meetings, drive decisions, and write docs engineers actually follow.
High ownership: you ship artifacts, close loops, and create repeatable systems rather than bespoke one-offs.

Nice-to-haves
Experience with devtools / OSS ecosystems and developer-centric GTM.
Familiarity with observability concepts (tracing/metrics/logs), data pipelines, and evaluation frameworks.
Track record of technical writing or enablement (workshops, reference architectures, public docs).

Process

We can run the full process to your offer letter in less than 7 days (hiring process).

Tech Stack

We run a TypeScript monorepo: Next.js on the frontend, Express workers for background jobs, PostgreSQL for transactional data, ClickHouse for tracing at scale, S3 for file storage, and Redis for queues and caching. You should be familiar with a good chunk of this, but we trust you'll pick up the rest quickly (Stack, Architecture).

How we ship

Link to handbook
We trust you to take ownership (ownership overview) of your area. You identify what to build, propose solutions (RFCs), and ship them. Everyone here thinks about the user experience and the technical implementation at the same time. Everyone manages their own Linear.
You're never alone. Anyone from the team is happy to go into a whiteboard session with you. 15 minutes of shared discussion can very much improve the overall output.
We implement maker schedule and communication. There are two recurring meetings a week: a Monday check-in on priorities (15 min) and a demo session on Fridays (60 min).
Code reviews are mentorship. New joiners get all PRs reviewed to learn the codebase, patterns, and how the systems work (onboarding guide).
We use AI as much as possible in our workflows to make our users happy. We encourage everyone to experiment with new tooling and AI workflows.

Why Langfuse (now part of ClickHouse)

This role puts you at the forefront of the AI revolution, partnering with engineering teams who are building the technology that will define the next decade(s).
This is an open-source devtools company. We ship daily, talk to customers constantly, and fight for great DX. Reliability and performance are central requirements.
Your work ships under your name. You'll appear on changelog posts for the features you build, and during launch weeks, you'll produce videos to announce what you've shipped to the community. You’ll own the full delivery end to end.
We're solving hard engineering problems: figuring out which features actually help users improve AI product performance, building SDKs developers love, visualizing data-rich traces, rendering massive LLM prompts and completions efficiently in the UI, and processing terabytes of data per day through our ingestion pipeline.
You'll work closely with the ClickHouse team and learn how they build a world-class infrastructure company. We're in a period of strong growth: Langfuse is growing organically and accelerating through ClickHouse's GTM. (Why we joined ClickHouse)
If you wonder what to build next, our users are a Slack message or a GitHub discussions post away.
You’re on a continuous learning journey. The AI space develops at breakneck speed and our customers are at the forefront. We need to be ready to meet them where they are and deliver the tools they need just in time.

Researcher, Frontier Cybersecurity Risks

OpenAI
$295,000 – $445,000
United States
Full-time
Remote
About the team

The Safety Systems org ensures that OpenAI’s most capable models can be responsibly developed and deployed. We build evaluations, safeguards, and safety frameworks that help our models behave as intended in real-world settings. The Preparedness team is an important part of the Safety Systems org at OpenAI, and is guided by OpenAI’s Preparedness Framework.

Frontier AI models have the potential to benefit all of humanity, but also pose increasingly severe risks. To ensure that AI promotes positive change, the Preparedness team helps us prepare for the development of increasingly capable frontier AI models. This team is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models.

The mission of the Preparedness team is to:
Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards risks whose impact could be catastrophic
Ensure we have concrete procedures, infrastructure and partnerships to mitigate these risks and to safely handle the development of powerful AI systems

Preparedness tightly connects capability assessment, evaluations, internal red teaming, and mitigations for frontier models, as well as overall coordination on AGI preparedness. This is fast-paced, exciting work that has far-reaching importance for the company and for society.

About the role

Models are becoming increasingly capable, moving from tools that assist humans to agents that can plan, execute, and adapt in the real world. As we push toward AGI, cybersecurity becomes one of the most important and urgent frontiers: the same systems that can accelerate productivity can also accelerate exploitation. As a Researcher for cybersecurity risks, you will help design and implement an end-to-end mitigation stack to reduce severe cyber misuse across OpenAI’s products. This role requires strong technical depth and close cross-functional collaboration to ensure safeguards are enforceable, scalable, and effective. You’ll contribute directly to building protections that remain robust as products, model capabilities, and attacker behaviors evolve.

In this role, you will:
Design and implement mitigation components for model-enabled cybersecurity misuse (spanning prevention, monitoring, detection, and enforcement) under the guidance of senior technical and risk leadership.
Integrate safeguards across product surfaces in partnership with product and engineering teams, helping ensure protections are consistent, low-latency, and scale with usage and new model capabilities.
Evaluate technical trade-offs within the cybersecurity risk domain (coverage, latency, model utility, and user privacy) and propose pragmatic, testable solutions.
Collaborate closely with risk and threat modeling partners to align mitigation design with anticipated attacker behaviors and high-impact misuse scenarios.
Execute rigorous testing and red-teaming workflows, helping stress-test the mitigation stack against evolving threats (e.g., novel exploits, tool-use chains, automated attack workflows) and across different product surfaces, then iterate based on findings.

You might thrive in this role if you:
Have a passion for AI safety and are motivated to make cutting-edge AI models safer for real-world use.
Bring demonstrated experience in deep learning and transformer models.
Are proficient with frameworks such as PyTorch or TensorFlow.
Possess a strong foundation in data structures, algorithms, and software engineering principles.
Are familiar with methods for training and fine-tuning large language models, including distillation, supervised fine-tuning, and policy optimization.
Excel at working collaboratively with cross-functional teams across research, security, policy, product, and engineering.
Have significant experience designing and deploying technical safeguards for abuse prevention, detection, and enforcement at scale.
(Nice to have) Bring background knowledge in cybersecurity or adjacent fields.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Partner AI Deployment Engineer

OpenAI
Germany
Full-time
Remote
About the role

We are looking for a Partner AI Deployment Engineer (P-ADE) to lead technical delivery with OpenAI partners across EMEA and help scale customer deployments built on the OpenAI platform. This role focuses on working across a wide range of customer use cases, supporting the design, deployment and scaling of production-grade AI solutions delivered through partners.

You will work closely with partner delivery teams, alongside Solutions Engineers (SEs), Forward Deployed Engineers (FDEs) and other ADEs, to move customer engagements from initial design through to stable, scaled production. Your work will accelerate time to value, reduce delivery risk and ensure solutions meet OpenAI’s standards for quality, safety and reliability. You will collaborate closely with GTM, Applied, and Research to support partner-led enterprise adoption.

This role is based in Munich or Paris. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In this role, you will:
Act as a primary technical delivery partner for a set of OpenAI partners across EMEA, supporting customer deployments across multiple industries and use cases.
Work with partner delivery teams and customer stakeholders to translate solution designs into deployable, production-ready architectures on the OpenAI platform.
Support customer time to value through hands-on prototyping, integration support, architectural guidance and troubleshooting during critical phases of delivery.
Collaborate closely with SEs, FDEs, and other ADEs to ensure the right technical expertise is engaged from design through production rollout.
Help partners operationalize solutions by addressing the scalability, reliability, security and safety considerations required for enterprise production environments.
Contribute to reusable deployment patterns, reference architectures and delivery guidance that enable repeatable execution across partner engagements.
Act as a technical quality and governance point during deployments, helping ensure solutions meet OpenAI’s standards and best practices before and after go-live.
Capture and synthesise feedback from real customer deployments and share insights with Applied, Research and partner teams to improve delivery playbooks and platform capabilities.

You’ll thrive in this role if you:
Have 8+ years of experience in technical consulting, solution delivery or a similar role, working with senior technical and business leaders on complex enterprise deployments.
Have experience delivering large, multi-stakeholder technical projects in partnership with boutique services organisations, system integrators or similar delivery environments.
Have strong hands-on experience building, integrating and operating production software using modern languages such as Python or JavaScript.
Have designed, deployed and supported Generative AI and/or machine learning solutions in real-world production environments.
Have practical experience working with the OpenAI platform in customer-facing or delivery contexts.
Are a clear communicator who can work effectively with partner engineers, internal teams and customer stakeholders.
Take ownership of delivery problems end to end and are comfortable operating in ambiguous, fast-moving environments.
Bring a collaborative, humble mindset and enjoy working across partners and internal teams to deliver successful customer outcomes.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Computational Protein Design

Talent Labs
United States
Full-time
Remote
We are seeking a Computational Protein Design Scientist to join our team working at the interface of generative AI and synthetic biology. You will play a key role on a team of scientists designing and engineering proteins for specific functions. This is an opportunity to help shape and grow an organization that advances artificial intelligence and applies it to longstanding scientific challenges. Using your blend of computational expertise and in-depth biochemical understanding of proteins, you will generate insights to improve protein functionality and operate at the interface between our machine learning and experimental platform units, working closely with both to seamlessly integrate AI generations and lab validation data.

Who you are
- You are a computational protein designer. You have a proven track record of successfully leveraging novel computational tools and knowledge of biochemistry or structural biology to design proteins for functional requirements and applications in synthetic biology.
- You are a data scientist. You are a strong data scientist and have experience owning data-driven projects to generate biological insights.
- You are a successful scientist. You have a PhD (or equivalent industry experience) in computational biology, bioinformatics, computer science, biochemistry, structural biology, physics, biophysics, bio/chem engineering, or a related field. Your research experience was in protein biochemistry using computational expertise.
- You collaborate with experimentalists. You have experience collaborating with experimental (i.e. wet lab) teams to achieve protein design objectives.
- You are an owner. You have a proven track record of delivering successful commercial and/or academic research projects, demonstrated through publications, patents, and/or commercially impactful outcomes, as well as other contributions to the scientific community.
- You are a connector. You love to connect people and enable them to perform at their highest levels. You have excellent communication and presentation skills, with the ability to convey complex scientific concepts to both technical and non-technical audiences.
- You are a mission-driven innovator. You are passionate about making a positive impact on the world, whether for patients, partners or beyond. You are motivated by the end goal and are flexible in adapting to different approaches and methodologies.
- You thrive in a dynamic and ambiguous environment. You excel in a fast-paced setting where goals must be achieved efficiently and urgently. You have a keen eye for creating, then optimizing, processes to improve speed and repeatability. You are an advocate for lab automation, through both hardware and software.

What sets you apart (preferred but not required)
- You have experience with generative AI. You have experience leveraging generative AI (or other machine learning models) in synthetic biology applications.
- You have experience engineering gene editing tools, such as nucleases and integrases.
- You have experience with homology-based and structural bioinformatics, and are able to answer scientific questions using very large databases.
- You have helped scale a young biotech before. You have worked in startups and helped the company grow.

Your Responsibilities
Leverage our proprietary generative AI models to design proteins for experimental validation:
- Analyze protein design problems based on functional requirements, biochemistry, structural biology and sequence homology
- Generate designs using our proprietary generative AI models and optimize designs for experimental validation
- Coordinate with our lab-based protein engineers to plan and optimize the design process and validation strategy

Leverage our proprietary data to improve our models:
- Analyze and leverage our experimental results to improve the next round of designs and increase our success rate over validation rounds
- Collaborate with machine learning scientists to fine-tune and prompt our models

Collaboration and communication:
- Be an effective interface between machine learning model development and experimental validation
- Capture bioengineering learnings and feed them back to our machine learning unit, and vice versa
- Foster a collaborative and innovative environment, proactively finding opportunities to innovate and create clarity and alignment between different units

Contribute to our computational tools:
- Help improve the way we use, serve and integrate our AI models by feeding back to the software engineers and foundational machine learning unit
- Help improve our data management systems and workflows

Scientific excellence and self-development:
- Work to the highest scientific standards (publication-grade work)
- Stay on top of relevant developments in synthetic biology
- Continue building your understanding of generative AI as well as expanded areas of protein and cell biology
- Participate in knowledge sharing, e.g. organize and present at our internal reading group
- Attend and present at conferences when relevant

Apply
We offer strongly competitive compensation and benefits packages, including:
- Private health insurance
- Pension/401(k) contributions
- Generous leave policies (including gender-neutral parental leave)
- Hybrid working
- Travel opportunities and more

We also offer a stimulating work environment and the opportunity to shape the future of synthetic biology through the application of breakthrough generative models.

We welcome applicants from all backgrounds and we are committed to building a team that represents a variety of backgrounds, perspectives, and skills.

Lead Software Engineer

Eloquent AI
United States
Full-time
Remote
Meet Eloquent AI
At Eloquent AI, we’re building the next generation of AI Operators—multimodal, autonomous systems that execute complex workflows across fragmented tools with human-level precision. Our technology goes far beyond chat: it sees, reads, clicks, types, and makes decisions—transforming how work gets done in regulated, high-stakes environments.

We’re already powering some of the world’s leading financial institutions and insurers, fundamentally changing how millions of people manage their finances every day. From automating compliance reviews to handling customer operations, our Operators are quietly replacing repetitive, manual tasks with intelligent, end-to-end execution.

Headquartered in San Francisco with a global footprint, Eloquent AI is a fast-growing company backed by top-tier investors. Join us to work alongside world-class talent in AI, engineering, and product as we redefine the future of financial services.

Your Role
As a Lead Engineer at Eloquent AI, you will lead the development of AI-powered full-stack applications while overseeing and mentoring other engineers. You’ll remain hands-on across the stack, but also take ownership of technical direction, code quality, and delivery standards. You’ll work closely with engineers, AI researchers, and product teams to ensure scalable, reliable systems that power real-time AI-driven workflows. This role requires strong engineering fundamentals, leadership capability, and the ability to operate effectively in a fast-paced, AI-first environment.

You will:
- Design and build full-stack applications that power AI-driven workflows for enterprise users.
- Oversee and review the work of other engineers, ensuring high-quality, production-ready code.
- Provide technical guidance, architectural direction, and hands-on support where needed.
- Develop high-performance front-end interfaces for AI agent control, monitoring, and visualisation.
- Build scalable backend services that support real-time AI interactions, knowledge retrieval, and automation.
- Work closely with AI researchers and ML engineers to integrate LLMs, RAG, and automation into production-ready systems.
- Establish engineering best practices across testing, deployment, and performance optimisation.
- Continuously iterate and refine AI-driven products, balancing speed with robustness.

Requirements
- 8+ years of hands-on experience building full-stack production applications.
- Prior experience leading or mentoring engineers.
- Proficiency in React, TypeScript, and Node.js.
- Backend experience using Python.
- Strong knowledge of cloud infrastructure (AWS, GCP, or Azure) and scalable architectures.
- Understanding of AI-powered applications (LLMs, chat interfaces, agentic workflows).
- Ability to work in a fast-paced, high-autonomy environment.
- Strong collaboration skills across engineering, product, and AI teams.

Bonus Points If…
- You have experience building AI-powered applications with LLM integrations.
- You’ve worked in high-performance startups or enterprise AI environments.
- You have a sharp eye for UI/UX design and have built intuitive, AI-driven interfaces.
- You have experience with GraphQL, WebSockets, or real-time data streaming.
- You’ve contributed to open-source projects or have built developer tools for AI.

Scientist I, Platform Development and Antibody Screening

Xaira
United Kingdom
Full-time
Remote
About Xaira Therapeutics
Xaira is an innovative biotech startup focused on leveraging AI to transform drug discovery and development. The company is leading the development of generative AI models to design protein and antibody therapeutics, enabling the creation of medicines against historically hard-to-drug molecular targets. It is also developing foundation models for biology and disease to enable better target elucidation and patient stratification. Collectively, these technologies aim to continually enable the identification of novel therapies and to improve success in drug development. Xaira is headquartered in the San Francisco Bay Area, Seattle, and London.

Position Overview
Xaira is seeking enthusiastic and motivated candidates to join our team as Research Engineers. We welcome candidates across the spectrum of experience. Teams thrive when they are diverse (across all axes), and we encourage all eligible applicants to apply. The role is based in our London office, located near Old Street. Our team is highly collaborative, operating on the belief that hard problems are best solved by multiple people working towards a clear goal, bringing and sharing their expertise with the team. We operate a hybrid working culture based on trust. Members of the team are typically in the office 3 days a week.

Key Responsibilities
- Industry experience as a research engineer in an AI-related company.
- Excited to work, learn and teach within a collaborative team working on challenging problems.

Desirable
Below is a list of qualities/experiences that align with the kinds of things we are looking for. Please do not read this as an extension of the “requirements” section! We recognise that experiences, opportunities and life-paths vary.
- Masters (or equivalent)/PhD in an AI-related field.
- Public codebases or contributions to public GitHub repositories.
- Experience building and training neural networks.
- Experience in distributed training and inference.
- Experience profiling and optimising large-scale AI models.
- Knowledge or experience in BioAI.

If you are a motivated individual with a passion for applying AI to advance drug discovery and improve human health, we encourage you to apply and join us in our mission to make a positive difference in the world.

Xaira Therapeutics is an equal-opportunity employer. We believe that our strength is in our differences. Our goal to build a diverse and inclusive team began on day one, and it will never end.

TO ALL RECRUITMENT AGENCIES: Xaira Therapeutics does not accept agency resumes. Please do not forward resumes to our jobs alias or employees. Xaira Therapeutics is not responsible for any fees related to unsolicited resumes.

Design Director

Tenstorrent
$100,000 – $500,000
United States
Full-time
Remote
Tenstorrent is leading the industry on cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.

Tenstorrent is accelerating the future of AI and high-performance compute by building industry-leading CPU and AI architectures. As an Automotive and Robotics SoC Architect, you will define scalable, top-down system architectures that unify our CPU and AI technologies for next-generation automotive applications. This senior technical role shapes the architectural direction of our automotive and robotics portfolio, ensuring our products meet the industry's highest expectations for performance, safety, reliability, and security. The position is central to how Tenstorrent delivers world-class automotive solutions and requires strong technical leadership, systems thinking, and cross-functional collaboration.

This role is remote, based out of North America. We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.

Who You Are
- A systems thinker who can architect complex SoCs from concept to execution.
- A strong communicator who can articulate technical direction across engineering teams and external partners.
- Someone with deep knowledge of safety-critical systems and the unique needs of automotive environments.
- An innovator who can identify future use cases and propose next-generation architectural solutions.
- A leader who thrives in a highly technical, cross-functional, fast-moving environment.

What We Need
- Bachelor’s, Master’s, or Ph.D. in Electrical Engineering, Computer Engineering, or a related field.
- Extensive experience designing complex SoCs, ideally for automotive applications.
- Proficiency in hardware description languages such as Verilog or VHDL.
- Experience with hardware/software co-design and co-verification.
- Knowledge of automotive safety standards (e.g., ISO 26262) and security principles.
- Comfort with up to 25% international travel.
- Experience with cameras, sensors, and similar devices is a plus.

What You Will Learn
- How cutting-edge CPU and AI architectures are adapted for automotive-grade environments.
- Best-in-class methodologies for safety-critical SoC design, verification, and system integration.
- How to translate emerging automotive use cases into scalable, future-proof SoC architectures.
- Approaches to hardware-level security, robustness, and cyber-resilience in automotive compute systems.
- Cross-functional collaboration strategies that drive innovation across architecture, software, DV, and product teams.

Compensation for all engineers at Tenstorrent ranges from $100k–$500k, including base and variable compensation targets. Experience, skills, education, background and location all impact the actual offer made. Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.

This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E:1, and E:2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.