The AI job market moves fast. We keep up so you don't have to.
Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.
New AI Opportunities
Member of Technical Staff, Tech Lead
Company: Listen Labs
Company size: 11-50
Salary: $150,000 – $300,000
Location: United States
Employment type: Full-time, Remote
TL;DR: We are seeing strong market demand and have an aggressive 6-month product roadmap, so we are expanding our engineering team. We're looking for someone highly technical (our current team includes 3 IOI medalists) who wants to build a product that is changing how companies make decisions. If you're excited about tackling complex problems end-to-end, we should talk.

Background
Listen Labs is an AI-powered research platform that helps teams uncover insights from customer interviews in hours, not months. We help customers analyze conversations, surface themes, and make faster, smarter product decisions.

Company highlights (entirely product-led):
- World-Class Team: Founded by serial entrepreneurs (previous AI exit), former co-founders, and talent from Jane Street, Twitter, Stripe, Affirm, Bain, Goldman Sachs, and many more Sequoia-backed startups (plus IOI/ICPC backgrounds).
- Hypergrowth: We're a 40-person team backed by Sequoia, growing from $0 to a $14M run rate in under a year. We move fast, care deeply about craft, and love working with people who take ownership.
- Traction: Rapid growth across segments, with enterprise wins at Google, Microsoft, Nestlé, and P&G.
- Performance: Industry-leading win rate driven by a highly differentiated product.
- Market Validation: Consistently winning customers across all segments, with six-figure-plus lands that lead to quick expansions.
- Viral Product: Interviews are shared with tens of thousands of viewers, fueling PLG, organic expansion, and daily inbound from Fortune 500s.

Technical Challenges

McKinsey On Demand: Building a research agent
Hiring McKinsey is different from buying software: you don't just get tools, you get opinions, experience, and execution. We build Listen with that perspective. You have an AI agent on your side that knows everything about our platform and best research practices. It helps you set up your project, conduct interviews with your goals in mind, and analyze thousands of responses.

Database of Humanity
One of our key value props is the ability to find the people you are looking for (e.g., "power users of ChatGPT and Excel"). We are building a database of millions of humans. The more studies you do with Listen, the better we understand you. This enables finding people with unmatched accuracy and, in the long run, extrapolating what a person would say based on all previous conversations; imagine answering questions for your best friend.

Realtime Video Interviews
The next version of our AI interviewer will have emotional understanding of video and voice to read between the lines. The goal is to make our interviewers more nuanced and effective than the most senior user researchers. This involves computer vision, speech analysis, and real-time decision making.

Distributed Information Mining
The most interesting information is not publicly accessible on the web; it lives only in people's minds. We are building an agent that, given a question, finds the right people to talk to, asks the right questions, and returns a report with actionable recommendations. That's what consultants charge millions for. The ceiling is incredibly high, and we are pushing technical boundaries to help companies, from investment firms to tech companies, make the best decisions.

Customer Preference Model & Synthetic Personas
We're realizing Jeff Bezos' vision of the customer being part of every decision. We're building the most profound understanding of customers, which will allow us to extrapolate to new questions via synthetic personas. This involves complex modeling of human behavior, preferences, and decision-making processes.

What We Look For
- You want to solve problems end-to-end: Our team is split vertically, so every engineer owns a part of the product and needs to make decisions across the LLM pipeline, infrastructure, backend, and UX (with help!).
- You have a high bar for quality: In a startup, moving fast is essential, but it is even more important to care about your output, obsess over details, and build a product that works, especially in the age of AI. Slop compounds!
- You're an architect: You're excited to walk into a greenfield stack and make critical decisions that will define our architecture for years to come.
- You want to push LLM capabilities: We continually push the most advanced AI models to their limits and work with the foundation-model companies on their new releases.
- You are a clear thinker and communicator: We have only one meeting a week and expect you to communicate tradeoffs, problems, and blockers directly.
- You are highly technical: Most of our team started coding as young teenagers and nerds out on details from language design to compilers.

Life at Listen Labs
- Competitive Compensation: We're backed by world-class investors, including Sequoia Capital, Conviction, AI Grant, and Pear VC, and offer competitive compensation packages with meaningful equity ownership. Over $30B in market cap has been created in adjacent industries (Medallia, AlphaSense, GLG, Ipsos, Kantar). Our Sequoia partner, Bryan Schreier, was the first investor in Qualtrics, a $12B company tackling problems similar to ours.
- Benefits that Support You: Comprehensive healthcare and dental coverage, flexible time off to recharge, and an environment that values balance and trust.
- Room to Grow: As an early member of the team, you'll have the opportunity to take on new responsibilities, shape processes from scratch, and grow alongside the company. We value people who want to stretch beyond their role and build something lasting.
Posted: 2026-02-27 00:44
Member of Technical Staff, Applied AI
Company: Listen Labs
Company size: 11-50
Salary: $150,000 – $300,000
Location: United States
Employment type: Full-time, Remote
TL;DR: We are seeing strong market demand and have an aggressive 6-month product roadmap, so we are expanding our engineering team. We're looking for someone highly technical (our current team includes 3 IOI medalists) who wants to build a product that is changing how companies make decisions. If you're excited about tackling complex problems end-to-end, we should talk.

Background
Listen Labs is an AI-powered research platform that helps teams uncover insights from customer interviews in hours, not months. We help customers analyze conversations, surface themes, and make faster, smarter product decisions.

Company highlights (entirely product-led):
- World-Class Team: Founded by serial entrepreneurs (previous AI exit), former co-founders, and talent from Jane Street, Twitter, Stripe, Affirm, Bain, Goldman Sachs, and many more Sequoia-backed startups (plus IOI/ICPC backgrounds).
- Hypergrowth: We're a 40-person team backed by Sequoia, growing from $0 to a $14M run rate in under a year. We move fast, care deeply about craft, and love working with people who take ownership.
- Traction: Rapid growth across segments, with enterprise wins at Google, Microsoft, Nestlé, and P&G.
- Performance: Industry-leading win rate driven by a highly differentiated product.
- Market Validation: Consistently winning customers across all segments, with six-figure-plus lands that lead to quick expansions.
- Viral Product: Interviews are shared with tens of thousands of viewers, fueling PLG, organic expansion, and daily inbound from Fortune 500s.

Technical Challenges

McKinsey On Demand: Building a research agent
Hiring McKinsey is different from buying software: you don't just get tools, you get opinions, experience, and execution. We build Listen with that perspective. You have an AI agent on your side that knows everything about our platform and best research practices. It helps you set up your project, conduct interviews with your goals in mind, and analyze thousands of responses.

Database of Humanity
One of our key value props is the ability to find the people you are looking for (e.g., "power users of ChatGPT and Excel"). We are building a database of millions of humans. The more studies you do with Listen, the better we understand you. This enables finding people with unmatched accuracy and, in the long run, extrapolating what a person would say based on all previous conversations; imagine answering questions for your best friend.

Realtime Video Interviews
The next version of our AI interviewer will have emotional understanding of video and voice to read between the lines. The goal is to make our interviewers more nuanced and effective than the most senior user researchers. This involves computer vision, speech analysis, and real-time decision making.

Distributed Information Mining
The most interesting information is not publicly accessible on the web; it lives only in people's minds. We are building an agent that, given a question, finds the right people to talk to, asks the right questions, and returns a report with actionable recommendations. That's what consultants charge millions for. The ceiling is incredibly high, and we are pushing technical boundaries to help companies, from investment firms to tech companies, make the best decisions.

Customer Preference Model & Synthetic Personas
We're realizing Jeff Bezos' vision of the customer being part of every decision. We're building the most profound understanding of customers, which will allow us to extrapolate to new questions via synthetic personas. This involves complex modeling of human behavior, preferences, and decision-making processes.

What We Look For
- You want to solve problems end-to-end: Our team is split vertically, so every engineer owns a part of the product and needs to make decisions across the LLM pipeline, infrastructure, backend, and UX (with help!).
- You have a high bar for quality: In a startup, moving fast is essential, but it is even more important to care about your output, obsess over details, and build a product that works, especially in the age of AI. Slop compounds!
- You want to push LLM capabilities: We continually push the most advanced AI models to their limits and work with the foundation-model companies on their new releases.
- You are a clear thinker and communicator: We have only one meeting a week and expect you to communicate tradeoffs, problems, and blockers directly.
- You are highly technical: Most of our team started coding as young teenagers and nerds out on details from language design to compilers.

Life at Listen Labs
- Competitive Compensation: We're backed by world-class investors, including Sequoia Capital, Conviction, AI Grant, and Pear VC, and offer competitive compensation packages with meaningful equity ownership. Over $30B in market cap has been created in adjacent industries (Medallia, AlphaSense, GLG, Ipsos, Kantar). Our Sequoia partner, Bryan Schreier, was the first investor in Qualtrics, a $12B company tackling problems similar to ours.
- Benefits that Support You: Comprehensive healthcare and dental coverage, flexible time off to recharge, and an environment that values balance and trust.
- Room to Grow: As an early member of the team, you'll have the opportunity to take on new responsibilities, shape processes from scratch, and grow alongside the company. We value people who want to stretch beyond their role and build something lasting.
Posted: 2026-02-27 00:44
AI Solutions Engineer
Company: V7
Company size: 101-200
Salary: $120,000 – $200,000
Location: United States
Employment type: Full-time, Remote
V7
At V7, we're building AI platforms that help humans do their best work, at incredible scale and speed. Our mission is to turn human knowledge into trustworthy AI, making complex tasks faster, smarter, and more accurate. We're growing fast, backed by leading investors and AI pioneers (including the minds behind Transformers and Gemini).

The product
V7 Go provides legal, finance, insurance, and accounting teams with a toolkit for building and deploying custom no-code AI agents. The platform focuses on taking multi-modal data and delivering verifiable outputs with transparent AI logic to ensure accuracy and compliance. V7 Go supports all of the latest models, like GPT, Claude, and Gemini, for the best accuracy and performance. Watch the V7 Go keynote to see what we're building.

The team you'll be joining and the impact you'll have
You'll join our go-to-market team as our second Solutions Engineer in New York (the team is six people), sitting at the intersection of sales and product in a company processing tens of millions of documents for customers across finance, insurance, and real estate. V7 Go 4x-ed revenue last year, with 160%+ upsell into accounts. You'll help accelerate that trajectory by making sure every customer gets real value. We run a lean, high-trust team where you'll work directly with AEs, engineers, and product to close complex deals and turn new logos into long-term champions. Your work directly shapes how enterprises experience agentic AI for the first time, and how quickly they come to believe in it.

What you'll be doing from day one
- Run technical discovery, design solutions, and lead POCs alongside Account Executives to close deals, then own onboarding to get customers to first value fast.
- Build and implement workflows within V7 Go, combining prompt engineering, data pipelines, and integrations to solve real customer problems across document processing and more.
- Act as the primary technical contact for accounts, handling complex challenges and spotting expansion opportunities as customers scale.
- Juggle up to 10 concurrent projects while feeding customer insights back to product and engineering.

Who you are
- You are a prototyper at heart with a gift for talking to customers, building relationships, and solving technical problems repeatably.
- You have experience delivering large language model (LLM) projects with customers, including LLM API integration, up-to-date knowledge of foundation models, solution design/architecture, integrating different cloud providers, prompt engineering, and/or measuring AI accuracy.
- You love coding with Python.
- You can develop and articulate an AI solution vision to technical and business stakeholders, working with customers and partners to match the value proposition to business needs.

V7 champions equality and inclusion because diverse teams build better products. Don't check every box? Apply anyway; we value what makes you unique and will support you through the process, just let our Talent team know how they can help.
Posted: 2026-02-27 00:29
Software Engineer, Applied AI
Company: HackerOne
Company size: 5000+
Salary: $166,000 – $203,000
Location: United States
Employment type: Full-time, Remote
HackerOne is a global leader in Continuous Threat Exposure Management (CTEM). The HackerOne Platform unites agentic AI solutions with the ingenuity of the world's largest community of security researchers to continuously discover, validate, prioritize, and remediate exposures across code, cloud, and AI systems. Through solutions like bug bounty, vulnerability disclosure, agentic pentesting, AI red teaming, and code security, HackerOne delivers measurable, continuous reduction of cyber risk for enterprises. Industry leaders, including Anthropic, Crypto.com, General Motors, Goldman Sachs, Lufthansa, Uber, the UK Ministry of Defence, and the U.S. Department of Defense, trust HackerOne to safeguard their digital ecosystems. HackerOne was recognized in Gartner's Emerging Tech Impact Radar: AI Cybersecurity Ecosystem report for its leadership in AI Security Testing and has been named a Most Loved Workplace for Young Professionals (2024).

HackerOne is at a pivotal inflection point in the security industry. Offensive security is no longer optional; it is the standard for forward-thinking companies that want to build trust and resilience in a world where AI-driven innovation and adversaries are moving faster than ever. With the industry shifting, HackerOne stands apart: we combine the ingenuity of the largest security research community with a best-in-class AI-powered platform, trusted by the world's top organizations.

HackerOne Values
HackerOne is dedicated to fostering a strong and inclusive culture. HackerOne is Customer Obsessed and prioritizes customer outcomes in our decisions and actions. We Default to Disclosure by operating with transparency and integrity, ensuring trust and accountability. Employees, researchers, customers, and partners Win Together by fostering empowerment, inclusion, respect, and accountability.

Position Summary
At HackerOne, we're revolutionizing offensive security by combining human intelligence with artificial intelligence to help organizations build a safer internet. As an AI Engineer on our AI Platform team, you will contribute to the development of next-generation AI security capabilities, including our in-platform AI security agent, Hai. You will implement AI-powered features that enhance vulnerability discovery, improve security analysis workflows, and expand how thousands of customers detect and respond to emerging threats.

At HackerOne, we embrace a Flexible Work approach that gives us the freedom to do our best work while also fostering the connections and community that make us stronger. Reflecting this philosophy, this is a remote role targeted at candidates within ~50 miles of Seattle, WA or Austin, TX. We believe this balance of proximity and flexibility gives Hackeronies the chance to occasionally come together, fostering collaboration, connection, and in-person moments that enrich our culture, while still preserving the benefits of remote work. You must be able and willing to come to the office once per week, on Thursdays.

Primary Responsibilities
Success in the AI Engineer role will be accomplished by delivering on the responsibilities below in alignment with the Values and Talent Principles that define how we work at HackerOne.
- Apply an AI First mindset by building and integrating AI capabilities such as LLM-powered workflows, RAG pipelines, or orchestration components that enhance vulnerability detection and security automation.
- Demonstrate First Principles Problem Solving by breaking down AI and security challenges into clear components, identifying core assumptions, and implementing simple, maintainable solutions.
- Use Data-Driven Decision Making to evaluate model performance, test hypotheses, analyze telemetry, and iterate on features based on measurable impact.
- Practice Change Agility by adapting to evolving AI frameworks, product requirements, and security considerations while maintaining delivery momentum.
- Contribute to the development of our AI security agent, Hai, implementing features that enable natural language insights, reasoning workflows, and secure model interactions.
- Build and maintain APIs and services that enable secure interactions between AI models, internal systems, and third-party platforms.
- Partner cross-functionally with Product, Security Research, and Customer Success to clarify requirements, translate user needs into technical tasks, and deliver reliable solutions.
- Stay current with emerging AI and AI security trends, sharing learnings with the team and incorporating relevant advancements into your work.

Minimum Qualifications
- 3+ years of experience as a software engineer, with hands-on experience building production systems
- Experience integrating LLMs or generative AI models into applications
- Hands-on experience with ML frameworks such as PyTorch, TensorFlow, or Hugging Face Transformers
- Must be able and willing to come to the office once per week (typically Thursdays)

Preferred Qualifications
- Familiarity with agentic frameworks such as ReAct, AutoGen, LangChain, or Semantic Kernel
- Experience with RAG architectures, prompt engineering, fine-tuning, or LLM evaluation techniques
- Exposure to cloud AI/ML services (AWS Bedrock, GCP Vertex AI, Azure ML)
- Familiarity with full-stack technologies such as Ruby on Rails, GraphQL, or React for integrating AI features into production systems

Compensation Bands:
Seattle or Austin: $166K – $203K • Offers Equity

Job Benefits:
- Health (medical, vision, dental), life, and disability insurance*
- Equity stock options
- Retirement plans
- Paid public holidays and unlimited PTO
- Paid maternity and parental leave
- Leaves of absence (including caregiver leave and leave under CO's Healthy Families and Workplaces Act)
- Employee Assistance Program
*Eligibility may differ by country

We're committed to building a global team! For certain roles outside the United States, India, the U.K., and the Netherlands, we partner with Remote.com as our Employer of Record (EOR). Visa/work permit sponsorship is not available. Employment at HackerOne is contingent on a background check.

HackerOne is an Equal Opportunity Employer in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, pregnancy, disability or veteran status, or any other protected characteristic as outlined by international, federal, state, or local laws. This policy applies to all HackerOne employment practices, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, compensation, benefits, training, and apprenticeship. HackerOne makes hiring decisions based solely on qualifications, merit, and business needs at the time.

For US-based roles only: Pursuant to the San Francisco Fair Chance Ordinance, all qualified applicants with arrest and conviction records will be considered for the position.
Posted: 2026-02-26 20:29
Staff Software Engineer, Applied AI
Company: HackerOne
Company size: 5000+
Salary: $230,000 – $280,000
Location: United States
Employment type: Full-time, Remote
HackerOne is a global leader in Continuous Threat Exposure Management (CTEM). The HackerOne Platform unites agentic AI solutions with the ingenuity of the world's largest community of security researchers to continuously discover, validate, prioritize, and remediate exposures across code, cloud, and AI systems. Through solutions like bug bounty, vulnerability disclosure, agentic pentesting, AI red teaming, and code security, HackerOne delivers measurable, continuous reduction of cyber risk for enterprises. Industry leaders, including Anthropic, Crypto.com, General Motors, Goldman Sachs, Lufthansa, Uber, the UK Ministry of Defence, and the U.S. Department of Defense, trust HackerOne to safeguard their digital ecosystems. HackerOne was recognized in Gartner's Emerging Tech Impact Radar: AI Cybersecurity Ecosystem report for its leadership in AI Security Testing and has been named a Most Loved Workplace for Young Professionals (2024).

HackerOne is at a pivotal inflection point in the security industry. Offensive security is no longer optional; it is the standard for forward-thinking companies that want to build trust and resilience in a world where AI-driven innovation and adversaries are moving faster than ever. With the industry shifting, HackerOne stands apart: we combine the ingenuity of the largest security research community with a best-in-class AI-powered platform, trusted by the world's top organizations.

HackerOne Values
HackerOne is dedicated to fostering a strong and inclusive culture. HackerOne is Customer Obsessed and prioritizes customer outcomes in our decisions and actions. We Default to Disclosure by operating with transparency and integrity, ensuring trust and accountability. Employees, researchers, customers, and partners Win Together by fostering empowerment, inclusion, respect, and accountability.

Staff Software Engineer, Applied AI
Location: Seattle, WA or Austin, TX

Position Summary
At HackerOne, we're advancing a new era of AI-powered offensive security. As a Staff AI Engineer, you'll help shape the evolution of our autonomous Hai platform, driving the integration of advanced AI and agentic frameworks into HackerOne's products. You will build intelligent security agents that reason, act, and learn, helping security teams identify, validate, and remediate vulnerabilities faster than ever. This is a high-impact technical role, reporting to the VP of AI Engineering, where you will architect the systems and frameworks that power the next generation of AI-driven vulnerability discovery.

At HackerOne, we embrace a Flexible Work approach that gives us the freedom to do our best work while also fostering the connections and community that make us stronger. Reflecting this philosophy, this is a role targeted at candidates within ~50 miles of Seattle, WA or Austin, TX. We believe this balance of proximity and flexibility gives Hackeronies the chance to occasionally come together, fostering collaboration, connection, and in-person moments that enrich our culture, while still preserving the benefits of remote work. You must be able and willing to come to the office once per week, on Thursdays.

What You Will Do
Success in the Staff AI Engineer role will be accomplished by delivering on the responsibilities below in alignment with the Talent Principles that define how we work at HackerOne.
- Architect and enhance our autonomous security agent, Hai, building intelligent systems capable of natural-language reasoning, vulnerability detection, and actionable recommendations, all grounded in an AI-First mindset.
- Build components and services that integrate agentic AI design patterns, such as orchestration, memory systems, RAG, long-horizon tasks, and LLM-based models, into the HackerOne platform, applying an AI-First approach to improve vulnerability detection and security automation.
- Partner across Product, Security Research, and Engineering to introduce AI capabilities into the broader HackerOne ecosystem. As the company evolves rapidly, bring clarity and stability to shifting requirements by demonstrating strong Change Agility.
- Design and implement AI red-teaming agents and frameworks that proactively surface weaknesses in LLMs, generative-AI systems, and applied AI deployments, using First Principles Problem Solving to break problems down and build durable, foundational solutions.
- Establish meaningful metrics, observability, evaluation frameworks, and continuous feedback loops to improve model performance, safety, and user impact, ensuring decisions are grounded in Data-Driven Decision Making.
- Stay current with emerging AI safety research, adversarial-testing techniques, and agentic-system patterns, and integrate those learnings into HackerOne's responsible-AI strategy with adaptability and a growth-oriented Change Agility mindset.
- Build APIs and integrations that enable seamless interaction between AI models, security tools, and the broader HackerOne platform, ensuring security, scalability, and interoperability across systems.

Minimum Qualifications
- 8+ years of experience as a software engineer, including deep experience building and maintaining production-grade AI platforms and infrastructure.
- Must be able and willing to come to the office once per week (typically Thursdays).
- Proven expertise in large language models (LLMs), generative AI, and machine learning frameworks such as TensorFlow, PyTorch, and Transformers in production environments.
- Strong hands-on experience in AI platform engineering, including model deployment, MLOps pipelines, model serving infrastructure, and shared AI services architecture.
- Experience building systems that support multiple AI product teams and applications, enabling scalable experimentation and deployment.
- Solid understanding of AI safety and alignment principles, including responsible AI development, bias mitigation, and ethical AI practices.

Preferred Qualifications
- Experience building AI development platforms, model registries, experimentation frameworks, and tools that accelerate AI innovation across organizations.
- Familiarity with ReAct, AutoGen, or Semantic Kernel for agentic orchestration and multi-agent collaboration.
- Experience in agent action routing, secure tool-usage APIs, and feedback loops for autonomous agents.
- Knowledge of prompt engineering, fine-tuning, retrieval-augmented generation (RAG), and advanced LLM optimization strategies.
- Experience with cloud-based AI/ML services (AWS Bedrock, GCP Vertex AI, Azure ML) and containerization technologies (Docker, Kubernetes) for AI workloads.
- Familiarity with Ruby on Rails, GraphQL, and React, and experience integrating AI capabilities into production web applications and APIs.

Compensation Bands:
Seattle or Austin: $230K – $280K • Offers Equity

Job Benefits:
- Health (medical, vision, dental), life, and disability insurance*
- Equity stock options
- Retirement plans
- Paid public holidays and unlimited PTO
- Paid maternity and parental leave
- Leaves of absence (including caregiver leave and leave under CO's Healthy Families and Workplaces Act)
- Employee Assistance Program
*Eligibility may differ by country

We're committed to building a global team! For certain roles outside the United States, India, the U.K., and the Netherlands, we partner with Remote.com as our Employer of Record (EOR). Visa/work permit sponsorship is not available. Employment at HackerOne is contingent on a background check.

HackerOne is an Equal Opportunity Employer in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, pregnancy, disability or veteran status, or any other protected characteristic as outlined by international, federal, state, or local laws. This policy applies to all HackerOne employment practices, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, compensation, benefits, training, and apprenticeship. HackerOne makes hiring decisions based solely on qualifications, merit, and business needs at the time.

For US-based roles only: Pursuant to the San Francisco Fair Chance Ordinance, all qualified applicants with arrest and conviction records will be considered for the position.
Posted: 2026-02-26 20:29
Forward Deployed Engineer (India)
Company: Cartesia
Company size: 51-100
Salary: ₹7,000,000 – ₹9,000,000
Location: India
Employment type: Full-time, Remote
About Cartesia
Our mission is to build the next generation of AI: ubiquitous, interactive intelligence that runs wherever you are. Today, not even the best models can continuously process and reason over a year-long stream of audio, video, and text (1B text tokens, 10B audio tokens, and 1T video tokens), let alone do this on-device. We're pioneering the model architectures that will make this possible. Our founding team met as PhDs at the Stanford AI Lab, where we invented State Space Models (SSMs), a new primitive for training efficient, large-scale foundation models. Our team combines deep expertise in model innovation and systems engineering with a design-minded product engineering team to build and ship cutting-edge models and experiences. We're funded by leading investors at Index Ventures and Lightspeed Venture Partners, along with Factory, Conviction, A Star, General Catalyst, SV Angel, Databricks, and others. We're fortunate to have the support of many amazing advisors and 90+ angels across many industries, including the world's foremost experts in AI.

About the Role
We're hiring a Forward Deployed Engineer to advance our mission of building real-time multimodal intelligence by embedding directly with enterprise customers and delivering agentic voice AI solutions into their production environments. FDEs maximize customer success and revenue by turning Cartesia's core product into deployed, high-impact solutions.

Your Impact
- Write and ship production-grade code: Design, build, and deploy voice AI systems that power real enterprise workflows.
- Own deployments end-to-end: Lead discovery, architecture, implementation, and rollout for strategic customer engagements.
- Drive progress in ambiguity: Move projects forward when requirements are incomplete, constraints evolve, or systems fail in unexpected ways.
- Deliver across complex infrastructure: Deploy across cloud, VPC, and on-prem environments while navigating security, networking, and compliance requirements.
- Unblock critical integrations: Diagnose and resolve integration failures, performance bottlenecks, and deployment issues under real-world constraints.
- Drive expansion through building: Identify new use cases and prototype solutions that deepen adoption and increase long-term customer value.
- Turn customer signal into platform leverage: Surface recurring patterns to engineering and product teams, influencing the roadmap and reusable capabilities.
- Build for scale: Transform one-off solutions into repeatable playbooks, templates, and reference architectures.

What You Bring
- 4+ years of experience building and operating production software systems
- Strong backend engineering fundamentals and a track record of delivering reliable systems
- Experience integrating APIs and distributed services into real-world infrastructure
- Comfort operating across cloud and containerized environments (AWS, GCP, Kubernetes)
- Experience navigating enterprise constraints: authentication, networking, observability, and security reviews
- Ability to take ambiguous problems, define structure, and drive them to resolution
- High ownership and a bias for action: you move quickly without waiting for perfect specs
- Clear, structured communication, and comfort engaging with senior technical stakeholders

Nice-to-Have
- Experience in forward-deployed, solutions, implementation, or customer-facing engineering roles
- Experience deploying AI/ML systems into production environments
- Familiarity with real-time systems (voice, streaming APIs, telephony, low-latency systems)
- Experience with on-prem or hybrid infrastructure deployments
- Startup or founder experience

What We Offer
- 🍽 Lunch, dinner, and snacks at the office.
- 🏥 Fully covered medical, dental, and vision insurance for employees.
- 🏦 401(k).
- ✈️ Relocation and immigration support.
- 🦖 Your own personal Yoshi.

Our Culture
- 🏢 We're an in-person team based out of San Francisco. We love being in the office, hanging out together, and learning from each other every day.
- 🚢 We ship fast. All of our work is novel and cutting edge, and execution speed is paramount. We have a high bar, and we don't sacrifice quality or design along the way.
- 🤝 We support each other. We have an open and inclusive culture that's focused on giving everyone the resources they need to succeed.
2026-02-26 18:29
Member of Technical Staff - ML Training Systems
Modal
51-100
$150,000 – $350,000
United States
Full-time
Remote
false
About Us
Modal provides the infrastructure foundation for AI teams. With instant GPU access, sub-second container startups, and native storage, Modal makes it simple to train models, run batch jobs, and serve low-latency inference. Companies like Suno, Lovable, and Substack rely on Modal to move from prototype to production without the burden of managing infrastructure.
We're a fast-growing team based out of NYC, SF, and Stockholm. We've hit 9-figure ARR and recently raised a Series B at a $1.1B valuation. We have thousands of customers who rely on us for production AI workloads, including Lovable, Scale AI, Substack, and Suno.
Working at Modal means joining one of the fastest-growing AI infrastructure organizations at an early stage, with many opportunities to grow within the company. Our team includes creators of popular open-source projects (e.g. Seaborn, Luigi), academic researchers, international olympiad medalists, and experienced engineering and product leaders with decades of experience.
The Role
We are looking for strong engineers with experience training production machine learning models. If you are interested in contributing to open-source projects and evolving Modal's infrastructure to train the next generation of language models, we'd love to hear from you!
Requirements
5+ years of experience writing high-quality, high-performance code.
Experience working with torch and high-level training frameworks (Hugging Face, verl, slime).
Experience with ML training optimization (tell us a story about eliminating data loading bottlenecks, overlapping communication with compute, rewriting a trainer to handle off-policy rollouts, etc.).
Nice-to-have: familiarity with low-level operating system foundations (Linux kernel, file systems, containers, etc.).
Ability to work in-person, in our NYC or San Francisco office.
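One of the optimization stories this listing asks about, hiding data-loading latency behind compute, can be sketched without any ML framework: a background thread keeps a small buffer of batches ready while the consumer works. The `load_batch` callable below is a hypothetical stand-in for real I/O; this is an illustrative sketch, not Modal's implementation.

```python
import queue
import threading

def prefetch(load_batch, num_batches, depth=2):
    """Yield batches while a background thread loads ahead.

    Up to `depth` batches are buffered, so the consumer's compute overlaps
    with loading instead of alternating with it.
    """
    q = queue.Queue(maxsize=depth)
    SENTINEL = object()

    def producer():
        for i in range(num_batches):
            q.put(load_batch(i))  # blocks only when the buffer is full
        q.put(SENTINEL)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        batch = q.get()
        if batch is SENTINEL:
            break
        yield batch

# Toy usage: "loading" batch i just produces i * 2.
batches = list(prefetch(lambda i: i * 2, num_batches=4))
print(batches)  # [0, 2, 4, 6]
```

Real trainers apply the same idea with pinned host memory and CUDA streams, but the ordering guarantee is identical: the consumer sees batches in load order, with loading hidden behind compute.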
2026-02-26 11:29
AI Deployment Engineer | Codex
OpenAI
5000+
Germany
Full-time
Remote
false
About the team
The Codex Deployment Engineering team helps customers adopt OpenAI's coding tools throughout their software development lifecycle. We act as trusted technical partners, guiding engineering teams as they integrate Codex into their projects and workflows. Our customers span digital-native companies to global enterprises, and we work side-by-side to accelerate how they plan, build, and deliver software.
About the role
We are seeking a technically deep, creativity-driven AI Deployment Engineer who is already a power user of AI coding tools and passionate about pushing the boundaries of developer productivity. You will partner directly with engineering leaders and hands-on builders to design, validate, and scale advanced AI workflows, often using Codex to prototype and build the very demos, integrations, and automations customers ultimately adopt.
This is a highly cross-functional role that blends technical architecture, product strategy, and customer-facing leadership. You’ll work closely with Sales, Solutions Engineering, Product, Applied Engineering, and the broader Codex organization to advocate for customer needs, shape product direction, and accelerate the successful deployment of intelligent coding systems across some of the world’s most influential companies.
In this role, you will
Serve as the primary technical subject matter expert on OpenAI Codex for a portfolio of customers, embedding deeply with them to enable their engineering teams and build coding workflows.
Partner directly with customers to design and implement AI-enhanced development workflows, from rapid prototyping through scalable production rollout.
Build high-quality demos, reference implementations, and workflow automations, using Codex itself as part of your development process.
Lead large-format workshops, technical deep dives, and hands-on enablement sessions that help engineering organizations adopt AI coding tools effectively and safely.
Contribute technical content, including examples, guides, patterns, and best practices, to the OpenAI Cookbook to help the broader developer community accelerate their work with Codex.
Gather high-fidelity product insights from real customer deployments and translate them into clear product proposals and model feedback for internal teams.
Influence customer strategy and decision-making by framing how AI coding tools fit into their SDLC, technical roadmap, and organizational workflows.
Serve as a trusted advisor on solution architecture, operational readiness, model configuration, security considerations, and best-practice adoption.
You’ll thrive in this role if you
Have 5+ years of technical consulting, post-sales engineering, solutions architecture, or similar experience working directly with customers.
Are an active power user of AI coding tools and have deeply customized your own developer workflow; you have a point of view on what makes engineers more productive.
Enjoy building scrappy, high-signal demos, integrations, and prototypes that clearly articulate what Codex can enable, often using Codex to accelerate your own development process.
Have experience delivering large, high-impact workshops or technical training to engineering teams and know how to craft sessions that are engaging, hands-on, and outcomes-driven.
Have contributed technical guides, patterns, or examples publicly and care about clarity, pedagogy, and community impact.
Communicate complex technical concepts in clear, persuasive written and verbal form, especially when helping customers make strategic decisions about where and how to apply AI.
Are excited by ambiguous, rapidly evolving problem spaces and enjoy iterating toward novel solutions hand-in-hand with customers.
Care about customer success, reliability, safety, and operational excellence as much as you care about technical ingenuity.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
2026-02-26 2:44
Senior Software Engineer
Firsthand
101-200
$180,000 – $185,000
United States
Full-time
Remote
false
About Firsthand
Firsthand has built the first AI-powered Brand Agent platform, transforming the way marketers and publishers engage consumers through their own AI agents, anywhere online.
While most AI applications in marketing and advertising focus on back-office automation, the Firsthand Brand Agent Platform™ powers front-line consumer engagement. Operating across both owned properties and paid media, Firsthand's Brand Agents make a company’s expertise accessible in real time, adapting to consumers’ interests and guiding them toward the information they need to take action. Central to the platform is Lakebed™, the company’s AI-first data and knowledge rights management system that ensures brands retain full ownership and control of their expertise.
Firsthand is led by Jon Heller, Michael Rubenstein, and Wei Wei, whose previous ventures helped build the foundations of modern digital advertising. Backed by Radical Ventures, FirstMark Capital, Aperiam Ventures, and Crossbeam Venture Partners, Firsthand is shaping the future of AI-driven consumer engagement.
Firsthand is headquartered in NYC, with team members working together in-office three days a week.
Responsibilities
Own full lifecycle management of advertising systems, from experimentation through deployment and continuous enhancement.
Design, implement, and deploy data-driven algorithms and computational models for intelligent advertising platforms.
Build scalable, high-performance software components that enable content personalization for publishers and brands.
Monitor and evaluate performance of deployed models to ensure high system reliability.
Analyze performance data and implement improvements to optimize accuracy and efficiency.
Conduct ongoing research into emerging algorithms and data processing techniques, aligning solutions with the latest academic and industry advancements.
Proactively integrate new methodologies and technologies to strengthen and expand system capabilities.
Perform data wrangling and preprocessing using SQL and related tools to prepare structured training data.
Ensure data integrity and usability through best practices in validation, error handling, and resource management.
Requirements
Master’s degree in Computer Science, Information Systems, or Communication and Information Systems.
1 year of work experience in the offered position or a related role.
JOBSITE & INTERVIEW: New York, NY
How to Apply
If you are ready to embark on an exhilarating journey at the forefront of AI, seize this incredible opportunity and apply here. We eagerly anticipate hearing from you!
Note: Compensation and equity will be market-competitive for well-capitalized, early-stage startups and will be discussed during the interview process.
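The SQL data-wrangling responsibility this listing describes, dropping bad rows and aggregating per-entity features for a training set, can be sketched with Python's built-in sqlite3. The `events` table and its columns are hypothetical, invented purely for illustration.

```python
import sqlite3

# In-memory database standing in for a real warehouse; the `events`
# table and its columns are made up for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, action TEXT, dwell_ms INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(1, "click", 1200), (1, "view", 300), (2, "click", None), (2, "click", 800)],
)

# Typical preprocessing: filter out rows with missing values, then
# aggregate per-user features suitable as structured training data.
rows = conn.execute(
    """
    SELECT user_id,
           COUNT(*)      AS n_clicks,
           AVG(dwell_ms) AS avg_dwell
    FROM events
    WHERE action = 'click' AND dwell_ms IS NOT NULL
    GROUP BY user_id
    ORDER BY user_id
    """
).fetchall()
print(rows)  # [(1, 1, 1200.0), (2, 1, 800.0)]
```

The same filter-then-aggregate shape carries over to warehouse SQL (BigQuery, Snowflake, dbt models); only the connection and dialect details change.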
2026-02-26 0:30
Senior Backend Engineer, LangSmith Deployments
LangChain
101-200
$175,000 – $225,000
United States
Full-time
Remote
false
About LangChain
At LangChain, our mission is to make intelligent agents ubiquitous. We provide the agent engineering platform and open source frameworks developers need to ship reliable agents fast.
Our open source frameworks, LangChain and LangGraph, see over 90 million downloads per month and help developers build agents with speed and granular control. LangSmith offers observability, evaluation, and deployment for rapid iteration, enabling teams to transform LLM systems into dependable production experiences. LangChain is trusted by millions of developers worldwide and powers AI teams at companies like Replit, Clay, Cloudflare, Harvey, Rippling, Vanta, Workday, and more.
About the role
In person 5 days/week in San Francisco, CA or New York, NY.
We're building purpose-built infrastructure for running AI agents. Unlike traditional web apps, agents run for long durations, collaborate asynchronously with humans and other agents, and need to survive failures mid-execution. LangSmith Deployments is the runtime that makes this work, with durable checkpointing, fault-tolerant orchestration, and horizontal scaling, deployed across cloud and self-hosted environments.
We're looking for a Senior Backend Engineer to work on this system. While the focus is on backend development, strong familiarity with Kubernetes (K8s), Terraform (Tf), and other DevOps tooling is highly preferred.
Design distributed queue and worker systems that handle concurrent agent execution, background tasks, and multi-agent coordination across horizontally scalable infrastructure.
Own core data infrastructure: state persistence, atomic job claiming, connection management, and schema evolution.
Collaborate on architectural decisions, ensuring solutions are scalable and robust.
Ship resumable streaming infrastructure so clients can disconnect and reconnect mid-execution without losing state.
Instrument and monitor production systems, with tracing, metrics, and alerting to keep the platform healthy.
Participate in on-call rotations and own incident response for the runtime.
Create and maintain technical documentation, including system designs and operational runbooks.
Contribute to and extend open-source LangGraph, which is used by thousands of developers to build agent applications.
How to be successful in this role
4+ years of professional backend engineering experience.
Strong proficiency in Go and/or Python.
Experience with distributed systems: consensus mechanisms, queueing, state machines, and/or workflow orchestration.
Experience with scaling and sharding databases in high-throughput environments.
Familiarity with Kubernetes, infrastructure-as-code, and at least one major cloud platform.
Strong communication skills and ability to work cross-functionally on a small team.
Compensation & Benefits
We offer competitive compensation that includes base salary, meaningful equity, and benefits such as health and dental coverage, flexible vacation, a 401(k) plan, and life insurance. Actual compensation will vary based on role, level, and location. For team members in the EU and UK, we provide locally competitive benefits aligned with regional norms and regulations.
Annual salary range: $175,000 – $225,000 USD
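The queue-and-worker design this role centers on, atomic job claiming in particular, reduces to one invariant: each job is handed to exactly one worker even under concurrent claims. A toy in-process sketch in stdlib Python (a production runtime like the one described would enforce this with a transactional claim in the database, not a Python lock):

```python
import threading

class JobQueue:
    """Toy queue illustrating atomic job claiming: check-and-remove is one
    atomic step, so no job is ever claimed twice."""

    def __init__(self, jobs):
        self._jobs = list(jobs)
        self._lock = threading.Lock()

    def claim(self):
        # Without the lock, two workers could both see the same head job.
        with self._lock:
            return self._jobs.pop(0) if self._jobs else None

q = JobQueue(range(100))
claimed = [[] for _ in range(4)]

def worker(i):
    while (job := q.claim()) is not None:
        claimed[i].append(job)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every job was claimed exactly once across all four workers.
all_jobs = sorted(j for c in claimed for j in c)
print(all_jobs == list(range(100)))  # True
```

The database analogue is an `UPDATE ... WHERE claimed_by IS NULL` (or `SELECT ... FOR UPDATE SKIP LOCKED`) that plays the role of the lock here.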
2026-02-25 19:59
AI Agent Engineer
Observe
201-500
$108,000 – $170,000
United States
Full-time
Remote
false
About Us
Observe.AI is the leading AI agent platform for customer experience. It enables enterprises to deploy AI agents that automate customer interactions, delivering natural conversations for customers with predictable outcomes for the business.
Observe.AI combines advanced speech understanding, workflow automation, and enterprise-grade governance to execute end-to-end workflows with AI agents. It also enables teams to guide and augment human agents with AI copilots, and analyze 100% of human and AI interactions for insights, coaching, and quality management.
Companies like DoorDash, Affordable Care, Signify Health, and Verida use Observe.AI to transform customer experiences every day by accelerating service speed, increasing operational efficiency, and strengthening customer loyalty across every channel.
Why Join Us
We’re looking for an AI Agent Engineer to lead the charge in building and deploying enterprise-grade Voice and Chat AI agents and AI Copilots. This role is hands-on, customer-facing, and pivotal in bringing AI solutions to life, from design and integration to deployment and optimization.
You’ll own the end-to-end lifecycle of AI agents: building, integrating, testing, demoing to clients, deploying into production, and tuning performance.
What you’ll be doing
Build & Deploy Agents: Own the full AI agent build process - prompts, workflows, integrations, telephony setup, and evaluation forms.
Client Engagement: Lead weekly demos, show progress, gather feedback, and act as the primary technical point of contact once a solution is defined.
Systems Integration: Configure APIs, data maps, authentication, error handling, and connect to CRMs, databases, or knowledge systems.
Telephony Integration: Set up SIP/CCaaS/PSTN routing, pass metadata, configure fallbacks, and troubleshoot call quality.
Optimization: Monitor performance, refine prompts, test iteratively, and ensure agents meet automation and containment targets.
Strategic Partner: Translate customer requirements into actionable solutions; work consultatively to unblock challenges in security, connectivity, or knowledge ingestion.
Shadow Core Engineering: Collaborate with product/engineering teams for deep technical fixes and platformization, while independently leading client delivery.
What you'll bring to the role
3+ years in conversational AI, ML engineering, or system integration with hands-on delivery of AI/LLM-based solutions.
Strong skills in prompt engineering, workflow building, API integration, and telephony (SIP, Twilio, Amazon Connect, etc.).
Familiarity with LLMs (GPT, Claude, Gemini), vector DBs, and orchestration frameworks (LangChain, LlamaIndex, etc.).
ML expertise in embeddings, retrieval-augmented generation (RAG), evaluation frameworks, fine-tuning models, and performance optimization.
Solid programming skills (Python, JavaScript, or similar).
Comfort leading customer-facing discussions - from deep technical troubleshooting to weekly project demos.
Strong problem-solving mindset: ability to find workarounds, unblock integrations, and adapt to customer-specific ecosystems.
Bachelor’s degree in Computer Science, Engineering, or a related technical field
Hands-on experience with Integration Platform-as-a-Service (iPaaS) providers, such as n8n, Zapier, or similar platforms and proficient in API integrations and data flow management.
Strong experience in telephony integrations, including knowledge of protocols like SIP, PSTN, and other telephony technologies.
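The RAG expertise listed above rests on one core operation: embed documents and a query as vectors, then retrieve the documents closest to the query by cosine similarity. A dependency-free sketch with hand-made toy vectors (a real system would use learned embeddings and a vector DB, not three-dimensional stand-ins):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, docs, k=2):
    """Return the k doc texts whose embeddings are most similar to the query.
    `docs` maps text -> embedding; both are toy examples here."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

docs = {
    "reset your password":   [0.9, 0.1, 0.0],
    "track your order":      [0.1, 0.9, 0.1],
    "cancel a subscription": [0.2, 0.1, 0.9],
}
top = retrieve([0.8, 0.2, 0.1], docs, k=1)
print(top)  # ['reset your password']
```

In a production agent the retrieved texts are then injected into the LLM prompt as grounding context; evaluation frameworks score whether the right passages were retrieved before scoring the generated answer.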
Perks & Benefits
Competitive compensation including equity
Excellent medical, dental, and vision insurance options
Flexible time off
10 Company holidays + Winter Break and up to 16-weeks of parental leave
401K plan
Quarterly Lifestyle Spend
Monthly Mobile + Internet Stipend
Pre-tax Commuter Benefits
Salary Range
The base salary compensation range targeted for this full-time position is $108,000 – $170,000 per annum. Compensation may vary outside of this range depending on a number of factors, including a candidate’s qualifications, skills, competencies and experience. Base pay is one part of the Total Package that is provided to compensate and recognize employees for their work, and this role may be eligible for additional discretionary bonuses/incentives and equity (in the form of options). This salary range is an estimate, and the actual salary may vary based on the Company’s compensation practices.
Our Commitment to Inclusion and Belonging
Observe.AI is an Equal Employment Opportunity employer that proudly pursues and hires a diverse workforce. Observe AI does not make hiring or employment decisions on the basis of race, color, religion or religious belief, ethnic or national origin, nationality, sex, gender, gender identity, sexual orientation, disability, age, military or veteran status, or any other basis protected by applicable local, state, or federal laws or prohibited by Company policy. Observe.AI also strives for a healthy and safe workplace and strictly prohibits harassment of any kind.
We welcome all people. We celebrate diversity of all kinds and are committed to creating an inclusive culture built on a foundation of respect for all individuals. We seek to hire, develop, and retain talented people from all backgrounds. Individuals from non-traditional backgrounds, historically marginalized or underrepresented groups are strongly encouraged to apply.
If you are ambitious, make an impact wherever you go, and you're ready to shape the future of Observe.AI, we encourage you to apply. For more information, visit www.observe.ai.
Location: Redwood City, CA (Hybrid)
2026-02-25 18:59
Head of Decision Intelligence
Deepgram
201-500
United States
Full-time
Remote
false
Company Overview
Deepgram is the leading platform underpinning the emerging trillion-dollar Voice AI economy, providing real-time APIs for speech-to-text (STT) and text-to-speech (TTS), and for building production-grade voice agents at scale. More than 200,000 developers and 1,300+ organizations build voice offerings that are ‘Powered by Deepgram’, including Twilio, Cloudflare, Sierra, Decagon, Vapi, Daily, Cresta, Granola, and Jack in the Box. Deepgram’s voice-native foundation models are accessed through cloud APIs or as self-hosted and on-premises software, with unmatched accuracy, low latency, and cost efficiency. Backed by a recent Series C led by leading global investors and strategic partners, Deepgram has processed over 50,000 years of audio and transcribed more than 1 trillion words. There is no organization in the world that understands voice better than Deepgram.
Company Operating Rhythm
At Deepgram, we expect an AI-first mindset: AI use and comfort aren’t optional; they’re core to how we operate, innovate, and measure performance. Every team member at Deepgram is expected to actively use and experiment with advanced AI tools, and even to build their own into their everyday work. We measure how effectively AI is applied to deliver results, and consistent, creative use of the latest AI capabilities is key to success here. Candidates should be comfortable adopting new models and modes quickly, integrating AI into their workflows, and continuously pushing the boundaries of what these technologies can do.
Additionally, we move at the pace of AI. Change is rapid, and you can expect your day-to-day work to evolve just as quickly. This may not be the right role if you’re not excited to experiment, adapt, think on your feet, and learn constantly, or if you’re seeking something highly prescriptive with a traditional 9-to-5.
Opportunity
Deepgram is the foundational AI company for voice, building the models that allow machines to hear, understand, and speak to humans with zero latency. As we scale, the complexity of our usage-based economy grows — and so does the opportunity to make every decision at Deepgram faster, sharper, and more informed.
We are not looking for a traditional "BI Leader." We are looking for a technical founder-type who treats data as a product and agents as the primary interface. Your mission: build the Intelligence Layer — a system of autonomous agents that monitor, reason, and act on our data to drive NRR and operational excellence.
This is a high-leverage, player-coach role. You'll spend your first months as a "Doer," directly building AI-augmented analytical workflows and critical models, while simultaneously hiring a small, elite team. You'll work at the intersection of Product, Sales, and Finance to ensure leadership has 20/20 vision into our NRR, unit economics, and growth loops.
What You'll Do
AI-Augmented Intelligence: In close partnership with functional leaders, architect and deploy agent-assisted intelligence systems that move beyond static dashboards. Build workflows that proactively reason across silos (e.g., correlating API latency with churn signals in Salesforce) to surface automated root-cause analysis and actionable recommendations.
Infrastructure Evolution: In close partnership with our Tech/Engineering team, co-evolve our data stack. Define the semantic layer and data contracts required to make our infrastructure agent-ready and highly reliable.
Unlock Audio Intelligence: Leverage Deepgram's own Voice AI models to ingest and structure thousands of hours of internal calls, turning unstructured audio into queryable insights for all functional teams.
Strategic Leadership: Act as the primary data partner to the CEO and Board. Define the "source of truth" for our complex, usage-based revenue models and provide the analytical backbone for long-term strategy — from pricing, packaging, and distribution to market entry and capital allocation.
Team Building: Recruit and lead a lean, elite team of AI-native engineers. Set a culture where manual, repetitive SQL tasks are viewed as automation bugs to be solved with code and agents.
You'll Love This Role If You
You want to build the "brain" of a unicorn-stage AI company from the ground up
You're energized by ambiguity and moving fast with imperfect infrastructure
You believe dashboards are a stepping stone — agents and automated reasoning are the destination
You want direct access to the Founder/CEO and Board, where your work shapes company strategy
You're excited by a problem nobody else in the world can offer: unlocking intelligence from proprietary Voice AI models
It's Important To Us That You Have
7+ years in data/analytics roles, with 2+ years leading teams at high-growth technology companies
Expert-level proficiency in SQL and Python — equally comfortable in a Hex notebook or dbt model as in a boardroom presentation
Experience building agentic systems that can navigate a data warehouse autonomously — not just using LLMs to write SQL, but understanding where human judgment still matters
A founder's mindset: comfortable with "work-in-progress" infrastructure, pragmatic enough to deliver high-value insights today while iteratively improving the foundation
Deep experience with usage-based SaaS metrics: consumption patterns, gross margin per million tokens/hours, and NRR
A track record of leading teams where data infrastructure was evolving alongside the business — and you still enjoy getting your hands dirty in code
It Would Be Great if You Had
Prior experience as a Data Engineer or Software Engineer
Contributions to open-source data or AI projects
Experience with real-time billing and consumption systems
Benefits & Perks*
Holistic health: medical, dental, and vision benefits; annual wellness stipend; mental health support; life, STD, and LTD income insurance plans
Work/life blend: unlimited PTO; generous paid parental leave; flexible schedule; 12 paid US company holidays; quarterly personal productivity stipend; one-time stipend for home office upgrades; 401(k) plan with company match; tax savings programs
Continuous learning: learning/education stipend; participation in talks and conferences; Employee Resource Groups; AI enablement workshops/sessions
*For candidates outside of the US, we use an Employer of Record model in many countries, which means benefits are administered locally and governed by country-specific regulations. Because of this, benefits will differ by region — in some cases international employees receive benefits US employees do not, and vice versa. As we scale, we will continue to evaluate where we can create more alignment, but a 1:1 global benefits structure is not always legally or operationally possible.
Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC, and NVIDIA, Deepgram has raised over $215M in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!
Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together, and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate. We are happy to provide accommodations for applicants who need them.
2026-02-25 18:14
Software Engineer - AI Enablement
Baseten
101-200
$150,000 – $230,000
United States
Full-time
Remote
false
ABOUT BASETEN
Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to to ship AI products.
THE ROLE
As Baseten's AI Enablement Engineer, you'll own the AI-powered tooling and agent infrastructure that makes every person at Baseten dramatically more productive. While our product helps external teams deploy and serve AI models, your focus is inward: building, integrating, and operating AI agents and LLM-powered workflows that accelerate how we write code, review PRs, debug incidents, generate documentation, and ship faster.
This is a high-autonomy, high-impact role. You'll be the go-to person for everything AI-internal, from evaluating and deploying coding assistants and agentic tools to building custom agents tailored to Baseten's codebase and workflows. If you're excited about making an engineering org of top-tier infrastructure engineers even more effective by putting AI to work across the entire SDLC, this role is for you.
EXAMPLE INITIATIVES
You'll get to work on these types of projects:
Agentic coding workflows — Evaluate, customize, and deploy AI coding agents (e.g., Cursor, Claude Code, Codex) tuned to Baseten's monorepo, conventions, and internal libraries.
Custom internal agents — Build purpose-built agents for tasks like incident triage, on-call support, codebase Q&A, and automated change management.
AI adoption strategy — Track usage, measure productivity gains, and champion best practices for AI-assisted development across all engineering teams.
RESPONSIBILITIES
Own and operate the end-to-end internal AI stack, from model selection and integration to deployment and monitoring.
Build and maintain custom AI agents and LLM-powered tools tailored to Baseten's engineering workflows.
Evaluate and roll out third-party AI developer tools, configuration, and onboarding.
Instrument AI tool usage and measure impact on engineering velocity, code quality, and developer satisfaction.
Stay at the cutting edge of AI tooling, agents, and developer productivity research, and bring the best ideas back to the team.
Actively support engineering teams, ensuring they have the AI-powered resources and workflows necessary to remain productive.
BENEFITS
Competitive compensation, including meaningful equity.
100% coverage of medical, dental, and vision insurance for employees and dependents.
Generous PTO policy, including a company-wide Winter Break (our offices are closed from Christmas Eve to New Year's Day!).
Paid parental leave.
Company-facilitated 401(k).
Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.
Apply now to embark on a rewarding journey in shaping the future of AI! If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative and forward-thinking team, we would love to hear from you.
At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.
2026-02-25 15:59
Senior Software Engineer - New Products
Baseten
101-200
$185,000 – $285,000
United States
Full-time
Remote
false
ABOUT BASETEN
Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to when shipping AI products.

THE ROLE:
You’ll join a small team building new products at Baseten. This role is for an infrastructure-leaning, product-minded engineer who likes owning ambiguous problems end-to-end: from shaping an API and system design, to operating it in production with clear SLOs.
You’ll build core platform capabilities that power how researchers, developers and partners ship and operate AI products at scale: API gateways, auth/keys, quotas and metering, multi-tenant isolation, observability, and the workflows around deploying and managing model-backed services.
EXAMPLE INITIATIVES:
Model APIs for frontier models
Model training built for production inference

RESPONSIBILITIES:
Own and lead projects and product areas end-to-end, including architecture, implementation, rollout, and long-term operations.
Design ergonomic, developer-friendly APIs and abstractions for infrastructure capabilities.
Build and operate reliable backend services (rate limiting, auth, quotas, metering, migrations) with clear SLOs.
Drive performance and reliability improvements through profiling, tracing, load testing, and capacity planning.
Mentor teammates through code reviews, design docs, and technical leadership.

REQUIREMENTS:
5+ years of experience building and operating backend systems, distributed systems, or large-scale APIs.
Proven track record owning low-latency, reliable services (auth, rate limiting, quotas, usage metering, migrations).
Strong infrastructure instincts: observability, incident response, SLOs, and capacity management.
Comfort working across the stack when needed (backend-first, but willing to dive into frontend/CLI to unblock the product).
Strong written communication, including clear design docs and effective cross-functional collaboration.
Interest in AI/ML infrastructure and willingness to learn (ML expertise not required).

NICE TO HAVE:
Experience with API gateways, service meshes, Kubernetes, or distributed scheduling.
Experience building developer platforms: SDKs, CLIs, APIs, and self-serve workflows.
Experience with inference platforms, LLM runtimes, or performance-sensitive systems.
Familiarity with multi-tenant isolation patterns (fair queuing, noisy-neighbor controls, admission control).
Frontend experience (React/TypeScript) or strong product UX instincts for developer tools.

BENEFITS
Competitive compensation, including meaningful equity.
100% coverage of medical, dental, and vision insurance for employees and dependents.
Generous PTO policy, including a company-wide Winter Break (our offices are closed from Christmas Eve to New Year's Day!).
Paid parental leave.
Company-facilitated 401(k).
Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.

Apply now to embark on a rewarding journey in shaping the future of AI! If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative and forward-thinking team, we would love to hear from you.

At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.
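The backend services this role owns (rate limiting, quotas, metering) typically reduce to a few small primitives. As an illustrative sketch only (an assumption, not Baseten's actual implementation), a per-tenant token-bucket limiter of the kind that sits behind an API gateway might look like:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: refills `rate` tokens/sec, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # start full so tenants can burst immediately
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

Token buckets allow short bursts up to `capacity` while enforcing an average rate, which is one reason they are a common choice for multi-tenant API quotas; a production version would also need per-tenant state storage and atomicity.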
2026-02-25 15:59
Principal Engineer, C++/Integration (R4539)
Shield AI
1001-5000
$210,000 – $320,000
United States
Full-time
Remote
false
Founded in 2015, Shield AI is a venture-backed deep-tech company with the mission of protecting service members and civilians with intelligent systems. Its products include the V-BAT and X-BAT aircraft, Hivemind Enterprise, and the Hivemind Vision product lines. With offices and facilities across the U.S., Europe, the Middle East, and the Asia-Pacific, Shield AI’s technology actively supports operations worldwide. For more information, visit www.shield.ai. Follow Shield AI on LinkedIn, X, Instagram, and YouTube.

Job Description:
The Special Projects team at Shield AI is an elite force within the office of the CTO. It consists of a group of very senior (L5-L8) and highly experienced software engineering experts from diverse fields (aerospace, robotics, cloud infrastructure, game development, interactive media design, ...). The charter of this group is to steer technology development towards strategic alignment with the CTO’s vision, through tactical insertion into teams and technologies across the organization. Individuals within this team make direct and at times forward-sprinting contributions to the pillars of Hivemind, Shield AI’s software ecosystem for developing and deploying resilient intelligent teaming for aircraft.

Hivemind consists of four products:
EdgeOS (C++-based high-performance middleware for autonomy development)
Pilot (autonomy for the edge built atop EdgeOS; a models-based, modular and open-architecture C++ codebase)
Forge (Shield AI's AI factory for the design, development, and testing of Hivemind Edge systems; a service-oriented architecture leveraged through an SDK, CLI, and web portal; a Go, Python, TypeScript codebase)
Commander (software and hardware to support rich human-in-the-loop and human-on-the-loop interactions with Hivemind; a C++-based back-end for interaction with Pilot; and a web-application UI for mission planning, command and control by operators, implemented in a TypeScript/React codebase)
The Special Projects team is chartered to operate effectively in ambiguity. This team owns a number of software and software/hardware products, and it tactically and strategically impacts the development of all foundational Hivemind products. This work happens shoulder-to-shoulder with the product teams in some cases, and in a forward-sprinting manner within the Special Projects team in others. The result is direct contribution to products in the former case, and development of reference implementations in the latter.
The Special Projects team also functions as a pipeline for product and solution engineering teams across the organization. Individuals who enter the Special Projects team rapidly gain depth and breadth in their understanding of the Hivemind software ecosystem. This positions them well for leading technology development efforts across the product and solution organization.
This role is expected to contribute to commercial applications of Hivemind Enterprise.

What you'll do:
Create Reference Implementations: Create reference implementations for potential future products or product components by integrating new hardware platforms, sensor suites, simulators and concepts of operation with the Hivemind SDK (C++) for commercial applications, with a focus on autonomy (“Pilot”) and simulation (part of “Forge”).
Iterate Rapidly with Customer Feedback: Demonstrate developed architectures as solutions to the customer and gather feedback; iterate.
Explore Future Technologies: Explore and evaluate future hardware and software technologies that are relevant to Shield AI’s product roadmap and potentially high-ROI, but beyond the scope of current Direct and IRAD projects in engineering. Identify areas of technical debt across the stack, and analyze and synthesize solutions and paths towards achieving them.

Required qualifications:
12+ years of related experience developing large, production-quality software systems.
10+ years of experience with modern C++ (C++17 and beyond).
Strong knowledge of modern software engineering best practices; experience with Git and code management tools.
Excellent grasp of software development and coding principles, with high productivity in a mainstream language (e.g. TypeScript, C++, Go, Python).
All-in on generative AI tools for software engineering.
Deep self-sufficiency in adopting new technologies, configuring and managing local and cloud resources, and maintaining a fast development pace within a complex technology stack.
Expertise and deep experience with architectural design and implementation of large, complex distributed systems.
Experience with Linux, Docker, and CI/CD environments.
Excellent software hygiene regarding code documentation, unit testing, and bug tracking.
Strong technical collaboration skills and a desire to develop new skills.
Excited by a fast-moving environment with a highly motivated group.
Demonstrated record of working hard, being a trustworthy teammate, holding yourself and others to high standards, and being kind to others.
Fluid intelligence that allows one to operate effectively in sometimes ambiguous conditions, while finding opportunities to drive technical efforts and force multiply.

Preferred qualifications:
Experience in the aerospace and/or robotics industries.
Hands-on experience with a major cloud platform (Azure, GCP, AWS).
Experience with team leadership, or as a technical project lead.
Passion for developing high-quality, optimized software solutions.
Experience with containerization technologies like Docker and Kubernetes.
$210,000 – $320,000 a year
Full-time regular employee offer package:Pay within range listed + Bonus + Benefits + Equity
Temporary employee offer package:Pay within range listed above + temporary benefits package (applicable after 60 days of employment)
Salary compensation is influenced by a wide array of factors including but not limited to skill set, level of experience, licenses and certifications, and specific work location. All offers are contingent on a cleared background and possible reference check. Military fellows and part-time employees are not eligible for benefits. Please speak to your talent acquisition representative for more information.
Shield AI is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender identity or Veteran status. If you have a disability or special need that requires accommodation, please let us know.
2026-02-25 14:14
Music Producer - AI Trainer
Handshake
1001-5000
$125 – $125 / hour
United States
Contractor
Remote
false
Opportunity Overview
Handshake is looking for skilled LMMS users to support AI research through flexible, hourly contract work. This is not a traditional job. You'll draw on your hands-on experience with beat-making, music composition, or electronic music production to evaluate AI-generated content and provide feedback that helps AI better understand music tasks and creative production workflows.
This is an ongoing, project-based opportunity you can take on alongside anything else you have going on.

Who This Is For
This is a good fit if you're an experienced LMMS user who has worked in or around roles like:
Music Producer or Beat Maker
Composer or Arranger
Sound Designer
You should have solid experience with one or more of the following:
Beat-making, music composition, or electronic production using LMMS
Annotating or labeling audio and music assets
Creating music or beats following project briefs or style references
Reviewing music for quality, accuracy, or production consistency

What You'll Do
You'll use your experience with LMMS to create tool-related questions and review AI-generated responses for accuracy and relevance to real-world music production and beat-making workflows.
No prior AI or technical experience is required.

Qualifications
We're looking for people who have:
Minimum 3 years of hands-on experience with LMMS, whether through professional work or freelance projects
A working knowledge of music production concepts and electronic composition workflows
Strong written communication skills and attention to detail
The ability to work independently and follow written guidelines

Work Model and Project Details
Status: Independent contractor (not a full-time employee role)
Location: Fully remote; work from anywhere with a reliable internet connection and access to a desktop or laptop computer
Schedule: Flexible and asynchronous, with no minimum hour requirement. Many contributors work approximately 5–20 hours per week when assigned to an active project
Duration: The Handshake AI program runs year-round, with projects opening periodically across different areas of expertise. Placement depends on current project needs, with opportunities to be considered for future projects as they become available

Application Process
Create a Handshake account
Upload your resume and verify your identity
Get matched and onboarded into relevant projects
Start working and earning

Work Authorization
F-1 students who are eligible for CPT or OPT may be eligible for projects on Handshake AI. Work with your Designated School Official to determine your eligibility. If your school requires a CPT course, Handshake AI may not meet your school's requirements. STEM OPT is not supported. See our Help Center article for more information on what types of work authorizations are supported on Handshake AI.
2026-02-25 13:29
Software Engineer - AI Trainer
Handshake
1001-5000
$65 – $150 / hour
United States
Contractor
Remote
false
Opportunity Overview
Handshake is seeking experienced Software Engineers to support AI research through flexible, hourly contract work. This is not a traditional full-time SWE role. You’ll use your real-world software development experience to evaluate AI-generated code and technical content, provide structured feedback, and help improve how AI understands programming tasks, system design, and engineering best practices.
This is an ongoing, project-based opportunity that can be done alongside your primary employment.

Who This Is For
This opportunity is designed for professionals currently working (or recently working) in roles such as:
Software Engineer or Senior Software Engineer
Backend, Frontend, or Full-Stack Engineer
Systems Engineer or Application Developer
You’ll apply once and, if qualified, be considered for part-time, project-based work as new projects become available.

What You’ll Do
This project involves using your software engineering experience to design job-related coding questions and review AI-generated responses for correctness, efficiency, clarity, and alignment with real-world engineering practices.
Applicants will be required to pass a coding assessment as part of the selection process.

Qualifications
We’re looking for professionals with:
4+ years of professional software engineering experience (internships excluded)
Strong hands-on coding experience in at least one major programming language (e.g., Python, Java, C++, JavaScript, Go)
Experience writing, reviewing, and debugging production-level code
Comfort working independently and following detailed technical guidelines
Strong written communication skills and attention to detail

Application Process
Create a Handshake account
Upload your resume and verify your identity
Get matched and onboarded into relevant projects
Start working and earning

Work Model and Project Details
Status: Independent contractor (not a full-time employee role)
Location: Fully remote
Schedule: Flexible and asynchronous, with no minimum hour requirement. Many contributors work approximately 5–20 hours per week when assigned to an active project
Duration: The Handshake AI program runs year-round, with projects opening periodically across different areas of expertise. Placement depends on current project needs, with opportunities to be considered for future projects as they become available

Work Authorization Information
F-1 students who are eligible for CPT or OPT may be eligible for projects on Handshake AI. Work with your Designated School Official to determine your eligibility. If your school requires a CPT course, Handshake AI may not meet your school’s requirements. STEM OPT is not supported. See our Help Center article for more information on what types of work authorizations are supported on Handshake AI.
2026-02-25 13:29
Staff Engineer, API Core Platform
Together AI
201-500
$200,000 – $280,000
Full-time
Remote
false
About the Role
The Turbo team sits at the intersection of efficient inference (algorithms, architectures, engines) and post‑training / RL systems. We build and operate the systems behind Together’s API, including high‑performance inference and RL/post‑training engines that can run at production scale.
Our mandate is to push the frontier of efficient inference and RL‑driven training: making models dramatically faster and cheaper to run, while improving their capabilities through RL‑based post‑training (e.g., GRPO‑style objectives). This work lives at the interface of algorithms and systems: asynchronous RL, rollout collection, scheduling, and batching all interact with engine design, creating many knobs to tune across the RL algorithm, training loop, and inference stack. Much of the job is modifying production inference systems—for example, SGLang‑ or vLLM‑style serving stacks and speculative decoding systems such as ATLAS—grounded in a strong understanding of post‑training and inference theory, rather than purely theoretical algorithm design.
You’ll work across the stack—from RL algorithms and training engines to kernels and serving systems—to build and improve frontier models via RL pipelines. People on this team are often spiky: some are more RL‑first, some are more systems‑first. Depth in one of these areas plus appetite to collaborate across (and grow toward more full‑stack ownership over time) is ideal.
Requirements
We don’t expect anyone to check every box below. People on this team typically have deep expertise in one or more areas and enough breadth (or interest) to work effectively across the stack. The closer you are to full‑stack (inference + post‑training/RL + systems), the stronger the fit—but being spiky in one area and eager to grow is absolutely okay.
You might be a good fit if you:
Have strong expertise in at least one of the following, and are excited to collaborate across (and grow into) the others:
Systems‑first profile: Large‑scale inference systems (e.g., SGLang, vLLM, FasterTransformer, TensorRT, custom engines, or similar), GPU performance, distributed serving.
RL‑first profile: RL / post‑training for LLMs or large models (e.g., GRPO, RLHF/RLAIF, DPO‑like methods, reward modeling), and using these to train or fine‑tune real models.
Model architecture design for Transformers or other large neural nets.
Distributed systems / high‑performance computing for ML.
Are comfortable working from algorithms to engines:
Strong coding ability in Python
Experience profiling and optimizing performance across GPU, networking, and memory layers.
Able to take a new sampling method, scheduler, or RL update and turn it into a production‑grade implementation in the engine and/or training stack.
Have a solid research foundation in your area(s) of depth:
Track record of impactful work in ML systems, RL, or large‑scale model training (papers, open‑source projects, or production systems).
Can read new RL / post‑training papers, understand their implications on the stack, and design minimal, correct changes in the right layer (training engine vs. inference engine vs. data / API).
Operate well as a full‑stack problem solver:
You naturally ask: “Where in the stack is this really bottlenecked?”
You enjoy collaborating with infra, research, and product teams, and you care about both scientific quality and user‑visible wins.
Minimum qualifications
3+ years of experience working on ML systems, large‑scale model training, inference, or adjacent areas (or equivalent experience via research / open source).
Advanced degree in Computer Science, EE, or a related field, or equivalent practical experience.
Demonstrated experience owning complex technical projects end‑to‑end.
If you’re excited about the role and strong in some of these areas, we encourage you to apply even if you don’t meet every single requirement.
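To make the GRPO references in the requirements concrete: the core of a GRPO-style objective is a group-relative advantage, where each rollout's reward is normalized against the other rollouts sampled for the same prompt, so no learned value function is needed. A minimal sketch (an illustration of the published idea, not Together's implementation):

```python
import statistics

def group_relative_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    """GRPO-style advantages: z-score each rollout's reward within its own group.

    `rewards` holds the scalar rewards for all rollouts sampled from one prompt;
    the group itself serves as the baseline instead of a critic.
    """
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)  # population std over the group
    return [(r - mean) / (std + eps) for r in rewards]
```

For rewards like [0.0, 1.0, 1.0, 0.0], the two successful rollouts get positive advantages and the failures negative ones; these weights then scale the policy-gradient update. Because every term is a rollout reward, well over 90% of the pipeline's cost is inference, which is exactly why the role couples RL with the serving stack.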
Responsibilities
Advance inference efficiency end‑to‑end
Design and prototype algorithms, architectures, and scheduling strategies for low‑latency, high‑throughput inference.
Implement and maintain changes in high‑performance inference engines (e.g., SGLang‑ or vLLM‑style systems and Together’s inference stack), including kernel backends, speculative decoding (e.g., ATLAS), quantization, etc.
Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost.
Unify inference with RL / post‑training
Design and operate RL and post‑training pipelines (e.g., RLHF, RLAIF, GRPO, DPO‑style methods, reward modeling) where 90+% of the cost is inference, jointly optimizing algorithms and systems.
Make RL and post‑training workloads more efficient with inference‑aware training loops—for example, async RL rollouts, speculative decoding, and other techniques that make large‑scale rollout collection and evaluation cheaper.
Use these pipelines to train, evaluate, and iterate on frontier models on top of our inference stack.
Co‑design algorithms and infrastructure so that objectives, rollout collection, and evaluation are tightly coupled to efficient inference, and quickly identify bottlenecks across the training engine, inference engine, data pipeline, and user‑facing layers.
Run ablations and scale‑up experiments to understand trade‑offs between model quality, latency, throughput, and cost, and feed these insights back into model, RL, and system design.
Own critical systems at production scale
Profile, debug, and optimize inference and post‑training services under real production workloads.
Drive roadmap items that require real engine modification—changing kernels, memory layouts, scheduling logic, and APIs as needed.
Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously.
Provide technical leadership (Staff level)
Set technical direction for cross‑team efforts at the intersection of inference, RL, and post‑training.
Mentor other engineers and researchers on full‑stack ML systems work and performance engineering.
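Several responsibilities above mention speculative decoding (e.g., ATLAS). A greedy toy version of the draft-then-verify loop is sketched below; it is an assumption-laden illustration, not the ATLAS algorithm (real engines verify a whole draft block in one batched target forward pass, and sampling variants accept or reject probabilistically). It shows the key invariant: the output always equals what the target model alone would produce, and the draft only affects speed.

```python
def speculative_decode(draft_next, target_next, prefix, n_tokens, k=4):
    """Toy greedy speculative decoding: a cheap draft proposes k tokens, the target verifies.

    `draft_next` / `target_next` are hypothetical stand-ins mapping a token list to
    the next token. A real engine scores all k draft positions in ONE batched target
    forward pass; here the target is called per position purely for clarity.
    """
    out = list(prefix)
    while len(out) - len(prefix) < n_tokens:
        # 1) Draft phase: propose a block of k candidate tokens autoregressively.
        ctx, proposed = list(out), []
        for _ in range(k):
            t = draft_next(ctx)
            proposed.append(t)
            ctx.append(t)
        # 2) Verify phase: emit the target's token at each position; stop the
        #    block at the first draft mismatch.
        for t in proposed:
            if len(out) - len(prefix) >= n_tokens:
                break
            correct = target_next(out)
            out.append(correct)  # output is always the target's choice
            if correct != t:
                break
    return out[len(prefix):]
```

Whatever the draft proposes, the result matches plain greedy decoding with the target; a better draft simply lets more tokens through per verification round, which is where the latency win comes from.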
About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.
Compensation
We offer competitive compensation, startup equity, health insurance, and other benefits. The US base salary range for this full-time position is $200,000 - $280,000 + equity + benefits. Our salary ranges are determined by location, level and role. Individual compensation will be determined by experience, skills, and job-related knowledge.
Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
Please see our privacy policy at https://www.together.ai/privacy
2026-02-25 12:29
Machine Learning Engineer
Faculty
501-1000
United Kingdom
Full-time
Remote
false
Why Faculty?
We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we’ve worked with over 350 global customers to transform their performance through human-centric AI. You can read about our real-world impact here.
We don’t chase hype cycles. We innovate, build and deploy responsible AI which moves the needle - and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product and delivery expertise to our clients, who span government, finance, retail, energy, life sciences and defence.
Our business, and reputation, is growing fast and we’re always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology.
AI is an epoch-defining technology. Join a company where you’ll be empowered to envision its most powerful applications, and to make them happen.

About the Team
Bringing medicine to patients is complex, expensive and high-risk. Faculty’s Life Sciences team is focused on building AI solutions which optimise the research and commercialisation of life-changing therapies.
We partner with major pharma firms, academic research centres and MedTech start-ups to design and deliver solutions which address critical healthcare challenges, and help to democratise health for all.

About the role
Join us as a Machine Learning Engineer to deliver bespoke, impactful AI solutions for our diverse clients.
You will be instrumental in bringing machine learning out of the lab and into the real world, contributing to scalable software architecture and defining best practices. Working with clients and cross-functional teams, you'll ensure technical feasibility and timely delivery of high-quality, production-grade ML systems.

What you'll be doing:
Building and deploying production-grade ML software, tools, and infrastructure.
Creating reusable, scalable solutions that accelerate the delivery of ML systems.
Collaborating with engineers, data scientists, and commercial leads to solve critical client challenges.
Leading technical scoping and architectural decisions to ensure project feasibility and impact.
Defining and implementing Faculty’s standards for deploying machine learning at scale.
Acting as a technical advisor to customers and partners, translating complex ML concepts for stakeholders.

Who we're looking for:
You understand the full machine learning lifecycle and have experience operationalising models built with frameworks like Scikit-learn, TensorFlow, or PyTorch.
You possess strong Python skills and solid experience in software engineering best practices.
You bring hands-on experience with cloud platforms and infrastructure (e.g., AWS, Azure, GCP), including architecture and security.
You've worked with container and orchestration tools such as Docker and Kubernetes to build and manage applications at scale.
You are comfortable with core ML concepts, including probability, statistics, and common learning techniques.
You're an excellent communicator, able to guide technical teams and confidently advise non-technical stakeholders.
You thrive in a fast-paced environment, and enjoy the autonomy to own, scope, solve and deliver solutions.

The Interview Process
Talent Team Screen (30 minutes)
Pair Programming Interview (90 minutes)
System Design Interview (90 minutes)
Commercial Interview (60 minutes)

Our Recruitment Ethos
We aim to grow the best team - not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live.
We’re united by a deep intellectual curiosity and desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.

Some of our standout benefits:
Unlimited Annual Leave Policy
Private healthcare and dental
Enhanced parental leave
Family-Friendly Flexibility & Flexible working
Sanctus Coaching
Hybrid Working (2 days in our Old Street office, London)

If you don’t feel you meet all the requirements, but are excited by the role and know you bring some key strengths, please don't hesitate to apply, as you might be right for this role or other roles. We are open to conversations about part-time hours.
2026-02-25 9:14
Lead Machine Learning Engineer
Faculty
501-1000
United Kingdom
Full-time
Remote
false
Why Faculty?
We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we’ve worked with over 350 global customers to transform their performance through human-centric AI. You can read about our real-world impact here.
We don’t chase hype cycles. We innovate, build and deploy responsible AI which moves the needle - and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product and delivery expertise to our clients, who span government, finance, retail, energy, life sciences and defence.
Our business, and reputation, is growing fast and we’re always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology.
AI is an epoch-defining technology. Join a company where you’ll be empowered to envision its most powerful applications, and to make them happen.

About the team
In our Professional and Financial Services business unit, we bring everything we have learned in more than a decade of applied AI, and use it to help our clients navigate a rapidly changing landscape.
We develop and embed AI solutions which help firms become more efficient, enhance customer experience, and find the commercial upside in uncertain markets. Within the constraints of a highly regulated industry, we see so much opportunity for meaningful innovation and are proud to set the gold standard for marrying technical excellence with safe deployment.

About the role
Join us as a Lead Machine Learning Engineer to spearhead the technical direction and delivery of complex, innovative AI projects. You will act as a technical expert, applying your skills across various projects from AI strategy to client-side deployments, while ensuring architectural decisions are sound and reliable.
This role demands a balance of deep technical expertise and strong leadership, focusing on driving innovation, fostering team growth, and building reusable solutions across the organisation. If you're ready to manage high-risk projects and deliver practical, innovative outcomes, this is your chance to shape our future.

What you'll be doing
Setting the technical direction for complex ML projects, balancing trade-offs, and guiding team priorities.
Designing, implementing, and maintaining reliable, scalable ML/software systems and justifying key architectural decisions.
Defining project problems, developing roadmaps, and overseeing delivery across multiple work-streams in often ill-defined, high-risk environments.
Driving the development of shared resources and libraries across the organisation and guiding other engineers in contributing to them.
Leading hiring processes, making informed selection decisions, and mentoring multiple individuals to foster team growth.
Proactively developing and executing recommendations for adopting new technologies and changing our ways of working to stay ahead of the competition.
Acting as a technical expert and coach for customers, accurately estimating large work-streams and defending rationale to stakeholders.

Who we're looking for
You are a technical expert among your peers, capable of going deep on particular topics and demonstrating breadth of knowledge to solve almost any problem.
You possess strong Python skills and practical experience operationalising models using frameworks like Scikit-learn, TensorFlow, or PyTorch.
You are an expert in at least one major cloud provider (e.g., Azure, GCP, AWS) and have led teams to build full-stack web applications.
You have hands-on experience with containerisation tools like Docker and orchestration via Kubernetes.
You can successfully manage and coach a team of engineers, setting team-wide development goals to improve client delivery.
You find novel, clever solutions for project delivery and take ownership of successful project outcomes.
You're an excellent communicator who can proactively help customers achieve their goals and guide both technical teams and non-technical stakeholders.

Our Interview Process
Talent Team Screen (30 minutes)
Introduction to the role (45 minutes)
Pair Programming Interview (90 minutes)
System Design Interview (90 minutes)
Commercial & Leadership Interview (60 minutes)

Our Recruitment Ethos
We aim to grow the best team - not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live.
We’re united by a deep intellectual curiosity and desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.

Some of our standout benefits:
Unlimited Annual Leave Policy
Private healthcare and dental
Enhanced parental leave
Family-Friendly Flexibility & Flexible working
Sanctus Coaching
Hybrid Working (2 days in our Old Street office, London)

If you don’t feel you meet all the requirements, but are excited by the role and know you bring some key strengths, please don't hesitate to apply, as you might be right for this role or other roles. We are open to conversations about part-time hours.
2026-02-25 9:14
