The AI job market moves fast. We keep up so you don't have to.
Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.
New AI Opportunities
Director, Revenue Transformation
Gong
1001-5000
$148,000 – $225,000
Full-time
Remote
Gong harnesses the power of AI to transform how revenue teams win. The Gong Revenue AI Operating System unifies data, insights, and workflows into a single, trusted system that observes, guides, and acts alongside the world’s most successful revenue teams. Powered by the Gong Revenue Graph, AI-powered intelligence, specialized agents, and trusted applications, Gong helps more than 5,000 companies around the world deeply understand their teams and customers, automate critical sales workflows, and close more deals with less effort. For more information, visit www.gong.io.
At Gong, you will join a company built on innovative products, ambitious goals, and passionate people. We are shaping the future of revenue intelligence and we want people who are excited to build what comes next. You will work with a team that dreams big, moves fast, and cares deeply about the craft and about each other. Here, transparency and trust are core to how we operate, and every person has the opportunity to make a visible impact. If you want to grow, stretch, and do work that truly matters, Gong is the place to do the best work of your career.

Gong is seeking a hands-on Staff, AI Enablement and Innovation professional to own our internal AI operating model. Sitting within our IT organization, this role is the heartbeat of our internal digital transformation. You will empower our internal teams by bridging the gap between high-level business discovery and deep technical execution.
You will be the primary architect of Gong’s internal agentic strategy—responsible for "mining" the business for efficiency opportunities while simultaneously building the centralized orchestration layer that ensures our enterprise AI spend is governed, consistent, and scalable. This is a high-impact IC (Individual Contributor) role designed for a "scrappy builder" who thrives on turning internal complexity into streamlined, automated excellence.
RESPONSIBILITIES
Strategy & Governance (The "Guardrails")
Define the Roadmap: Partner with Security, Legal, and business leaders to define the internal AI roadmap.
Own the Stack: Operate the enterprise AI stack, including LLMs, vector databases, and gateways.
Standardization: Enforce consistent patterns for tool calling, prompt versioning, state management, and error handling to prevent fragmented, "ad-hoc" agent implementations.
Lifecycle Management: Manage the full model lifecycle, from evaluation and testing to upgrades and deprecations.
Discovery & Execution (The "Gold Mining")
Business Partnership: Proactively interview teams (Talent, Support, Sales) to identify manual workflows that can be automated via agentic AI.
Proof of Efficacy: Build and deploy POCs independently to demonstrate ROI before scaling.
Financial & Performance Operations (The "Numbers")
Cost Management: Own the token procurement process and build forecasting/chargeback models to prevent uncontrolled spend.
Performance Monitoring: Build dashboards to track SLAs/SLOs (latency, accuracy, uptime) and monitor usage, cost, and error rates.
Optimization: Proactively identify opportunities for cost-saving (e.g., model switching) and performance tuning.
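The cost-management work described above can be reduced to a simple accounting loop. The sketch below is a minimal, hypothetical chargeback model; the model names and per-1K-token prices are placeholders, not real vendor pricing:

```python
from collections import defaultdict

# Hypothetical per-1K-token prices; real vendor pricing varies by model
# and changes often, so treat these numbers as placeholders.
PRICE_PER_1K_USD = {"large-model": 0.030, "small-model": 0.002}

def chargeback(usage_events):
    """Aggregate token usage into per-team cost for an internal chargeback model.

    usage_events: iterable of (team, model, tokens) tuples.
    Returns a dict mapping team -> cost in USD, rounded to cents.
    """
    costs = defaultdict(float)
    for team, model, tokens in usage_events:
        costs[team] += tokens / 1000 * PRICE_PER_1K_USD[model]
    return {team: round(cost, 2) for team, cost in costs.items()}

events = [
    ("support", "large-model", 120_000),
    ("talent", "small-model", 500_000),
    ("support", "small-model", 250_000),
]
print(chargeback(events))  # {'support': 4.1, 'talent': 1.0}
```

A forecasting layer would then extrapolate each team's recent usage forward; the aggregation step stays the same.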
QUALIFICATIONS
The Persona: You are a Senior IT Business Analyst, Technical Implementation Lead, or Solutions Architect.
Technical Depth: Practical, hands-on experience with the modern AI stack (OpenAI, Gemini, Anthropic, Vector DBs). You understand the nuances of state management and prompt versioning.
Scrappy Builder: You have a "hands-on-keyboard" mentality. You can take an idea from a stakeholder and turn it into a working agentic workflow without needing external engineering resources.
Business Acumen: Ability to translate complex technical AI patterns into clear business value and ROI for stakeholders.
Operational Rigor: Experience managing vendor relationships, forecasting technical costs (tokens), and maintaining system uptime/SLAs.
YOU ARE
Orchestration: Experienced with LangChain or similar agentic frameworks.
AI Tooling: Hands-on with Prompt Flow, vector databases, and API integration.
Data & Analytics: Able to build performance and cost-tracking dashboards (SQL, Tableau, etc.).
PERKS & BENEFITS
We offer Gongsters a variety of medical, dental, and vision plans, designed to fit your needs and your family’s.
Wellbeing Fund - flexible wellness stipend to support a healthy lifestyle.
Mental Health benefits with covered therapy and coaching.
401(k) program to help you invest in your future.
Education & learning stipend for personal growth and development.
Flexible vacation time to promote a healthy work-life blend.
Paid parental leave to support you and your family.
Company-wide recharge days each quarter.
Work from home stipend to help you succeed in a remote environment.
The annual salary hiring range for this position is $148,000 - $225,000 USD.
Compensation is based on factors unique to each candidate, including, but not limited to, job-related skills, qualifications, education, experience, and location. At Gong, we have a location-based compensation structure, which means there may be a different range for candidates in other locations. The total compensation package for this position, in addition to base compensation, may include incentive compensation, bonus, equity, and benefits. Some of our sales compensation programs also offer the potential to achieve above targeted earnings for those who exceed their sales targets.
We are always looking for outstanding Gongsters! So if this sounds like something that interests you regardless of compensation, please reach out. We may have more roles for you to consider and would love to connect.
We have noticed a rise in recruiting impersonations across the industry, where scammers attempt to access candidates' personal and financial information through fake interviews and offers. All Gong recruiting email communications will always come from the @gong.io domain. Any outreach claiming to be from Gong via other sources should be ignored.
Gong is an equal-opportunity employer. We believe that diversity is integral to our success, and do not discriminate based on race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, military status, genetic information, or any other basis protected by applicable law.
To review Gong's privacy policy, visit https://www.gong.io/gong-io-job-candidates-privacy-notice/ for more details.
2026-04-16 17:20
Senior Data Intelligence Engineer
Deepgram
201-500
$165,000 – $230,000
United States
Full-time
Remote
Company Overview
Deepgram is the leading platform underpinning the emerging trillion-dollar Voice AI economy, providing real-time APIs for speech-to-text (STT) and text-to-speech (TTS), and for building production-grade voice agents at scale. More than 200,000 developers and 1,300+ organizations build voice offerings that are ‘Powered by Deepgram’, including Twilio, Cloudflare, Sierra, Decagon, Vapi, Daily, Cresta, Granola, and Jack in the Box. Deepgram’s voice-native foundation models are accessed through cloud APIs or as self-hosted and on-premises software, with unmatched accuracy, low latency, and cost efficiency. Backed by a recent Series C led by leading global investors and strategic partners, Deepgram has processed over 50,000 years of audio and transcribed more than 1 trillion words. There is no organization in the world that understands voice better than Deepgram.

Company Operating Rhythm
At Deepgram, we expect an AI-first mindset: AI use and comfort aren’t optional; they’re core to how we operate, innovate, and measure performance. Every team member who works at Deepgram is expected to actively use and experiment with advanced AI tools, and even build their own into their everyday work. We measure how effectively AI is applied to deliver results, and consistent, creative use of the latest AI capabilities is key to success here. Candidates should be comfortable adopting new models and modes quickly, integrating AI into their workflows, and continuously pushing the boundaries of what these technologies can do. Additionally, we move at the pace of AI. Change is rapid, and you can expect your day-to-day work to evolve just as quickly. This may not be the right role if you’re not excited to experiment, adapt, think on your feet, and learn constantly, or if you’re seeking something highly prescriptive with a traditional 9-to-5.

About Deepgram
Deepgram is the foundational AI company for voice.
We build the models that allow machines to hear, understand, and speak to humans with zero latency. As we scale our usage-based economy, we are building the "Intelligence Layer": a system of autonomous agents that monitor, reason, and act on our data to drive NRR and operational excellence.

The Role
Reporting to the Head of Decision Intelligence, you are the founding engineer of the Data Intelligence team. While the Head of Data sets the strategic "Brain" of the company, you build the nervous system. You are not building legacy dashboards; instead, you are building the API-based tools, dbt models, and semantic layers that allow AI agents to navigate our data environment autonomously.

Key Responsibilities
Semantic Layer Architecture: Build and maintain high-fidelity dbt and SQL models that serve as the "ground truth" for our complex, usage-based revenue models.
Agent Tooling & Integration: Develop the tools and permissions frameworks that allow "Analyst Agents" to query Athena, correlate Salesforce churn signals, and identify API latency issues.
Infrastructure Evolution: Serve as the technical bridge to the Engineering/Infra team to ensure our data contracts are "agent-ready" and highly reliable.
The "Unlock Audio Intelligence" Project: Partner with the Head of Data to ingest thousands of hours of internal calls using Deepgram’s own models, turning unstructured audio into queryable insights for GTM teams.
Automation Obsession: Maintain a culture where manual, repetitive SQL tasks are viewed as automation bugs to be solved with code and agents.

Requirements
Experience: 5+ years in high-growth Data or AI Engineering roles.
AI-Native Technical Skills: Mastery of SQL and Python (production-grade) with a proven track record of building RAG or agentic systems on top of structured data.
Usage-Based SaaS Fluency: Familiarity with the metrics of an API-first business, such as consumption patterns, gross margins, and NRR.
Builder Mindset: Comfort with "work-in-progress" infrastructure and the ability to turn messy, siloed data into high-impact business logic.

Bonus Points
Prior experience as a Software Engineer or Data Engineer.
Experience with real-time billing and consumption systems.
Contributions to open-source data or AI projects.

Benefits & Perks*
Holistic health: medical, dental, and vision benefits; annual wellness stipend; mental health support; life, STD, and LTD income insurance plans.
Work/life blend: unlimited PTO; generous paid parental leave; flexible schedule; 12 paid US company holidays; quarterly personal productivity stipend; one-time stipend for home office upgrades.
401(k) plan with company match; tax savings programs.
Continuous learning: learning/education stipend; participation in talks and conferences; Employee Resource Groups; AI enablement workshops and sessions.

*For candidates outside of the US, we use an Employer of Record model in many countries, which means benefits are administered locally and governed by country-specific regulations. Because of this, benefits will differ by region; in some cases international employees receive benefits US employees do not, and vice versa. As we scale, we will continue to evaluate where we can create more alignment, but a 1:1 global benefits structure is not always legally or operationally possible.

Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC, and NVIDIA, Deepgram has raised over $215M in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!

Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together, and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate. We are happy to provide accommodations for applicants who need them.
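The "Agent Tooling & Integration" responsibility in this posting, giving analyst agents narrow, permissioned access to data rather than raw SQL, can be sketched as follows. All names here are hypothetical, and SQLite stands in for the Athena-backed warehouse the role actually targets:

```python
import sqlite3

# Hypothetical allow-lists: the tables and columns an "analyst agent" may touch.
ALLOWED_TABLES = {"usage_daily", "accounts"}
ALLOWED_COLUMNS = {"account", "words"}

def usage_above(conn, table, column, threshold):
    """A narrow, permissioned query tool an analyst agent could call.

    The agent never writes raw SQL: it names a table and column (validated
    against allow-lists) and a threshold (bound as a query parameter), so
    the semantic layer remains the only interface to the data.
    """
    if table not in ALLOWED_TABLES:
        raise PermissionError(f"table {table!r} is not agent-accessible")
    if column not in ALLOWED_COLUMNS:
        raise PermissionError(f"column {column!r} is not agent-accessible")
    cur = conn.execute(f"SELECT * FROM {table} WHERE {column} > ?", (threshold,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usage_daily (account TEXT, words INTEGER)")
conn.execute("INSERT INTO usage_daily VALUES ('acme', 1200), ('globex', 300)")
print(usage_above(conn, "usage_daily", "words", 500))  # [('acme', 1200)]
```

Validating identifiers against allow-lists (rather than interpolating agent-chosen strings) is what makes such a tool safe to expose to an autonomous caller.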
2026-04-16 17:06
Staff Software Engineer, CAPE
Crusoe
501-1000
$209,000 – $253,000
United States
Full-time
Remote
Crusoe is on a mission to accelerate the abundance of energy and intelligence. As the only vertically integrated AI infrastructure company built from the ground up, we own and operate each layer of the stack — from electrons to tokens — to power the world's most ambitious AI workloads. When you join Crusoe, you join a team that is building the future, faster.

We're in the midst of the greatest industrial revolution of our time. The demand for AI compute is boundless, and power is a bottleneck. We're solving that, with an energy-first approach that makes AI infrastructure better for the world and faster for the people innovating with AI.

We're looking for problem-solving, opportunity-finding teammates with a sense of urgency, who believe in the scale of our ambition and thrive on a path not fully paved: people who want to grow their careers alongside a team of experts across energy, manufacturing, data center construction, and cloud services. If you want to do the most meaningful work of your career, help our customers and partners advance their AI strategies, and be part of a high-performing team that believes in each other, come build with us at Crusoe.

About This Role
We are seeking a Staff Software Engineer to architect, design, and develop the intelligence layer that controls how every GPU node in Crusoe's fleet gets assigned, monetized, and managed. You would be one of the first engineers on both the Virtual Pool Service and Capacity Management Intelligence systems, shaping implementation, making real design decisions, and building the foundational infrastructure that Crusoe's entire cloud platform depends on. You'll play a crucial role in delivering end-to-end use cases and workflows for a vertically integrated, AI-first cloud while driving key business revenue metrics at scale.

What You'll Be Working On
Building the Virtual Pool Service (VP Service), a physical infrastructure classification layer that serves as the single source of truth for every GPU node's state, pool membership, and transition history across Crusoe's fleet.
Designing and implementing Capacity Management Intelligence (CMI), the automation layer that handles priority-descending allocation, forward availability forecasting, and automated node lifecycle transitions, replacing manual spreadsheet workflows with enforced, auditable, event-driven automation.
Collaborating extensively across teams to architect and implement physical infrastructure management systems, availability platforms, and frameworks that meet end-to-end customer use cases.
Championing reliability, scalability, and security of our systems, designing high-performing, highly available cloud architectures optimized for both performance and cost-effectiveness.
Streamlining cloud deployment, configuration management, and operations using Go, gRPC, NATS event streaming, PostgreSQL (CNPG on Kubernetes), and Netbox as the physical source of truth.
Mentoring fellow engineers and actively contributing to team growth in collaboration with engineering managers.

What You'll Bring to the Team
A Bachelor's degree in Computer Science or Software Engineering.
10+ years of relevant experience building and operating distributed systems at scale.
Proven experience building reliable, scalable, and secure cloud platforms and running them in production.
Strong distributed systems thinking with the ability to reason about consistency, failure modes, event ordering, and correctness invariants.
Fluency in Go, Rust, Java, or C++; Go is our primary language, but strong engineers from other backgrounds ramp quickly.
A collaborative, platform-minded approach to building robust systems and driving adoption across dev and ops teams.
Ownership mentality with comfort owning a system end to end: design, implementation, testing, ops, and iteration.
Good judgment under ambiguity, with the ability to drive open-ended technical decisions to resolution.
Excellent communication and troubleshooting skills across cross-functional teams.

Bonus Points
Hands-on experience deploying, managing, and troubleshooting Kubernetes clusters.
Prior experience with event-driven architectures or message streaming systems (NATS, Kafka, Kinesis).
Experience with capacity planning, resource scheduling, or fleet management systems.
Background in GPU compute, AI/ML platform infrastructure, or fast-paced startup environments.
A passion for sustainability, clean energy, and building AI infrastructure that scales responsibly.

Benefits
Industry competitive pay.
Restricted Stock Units in a fast growing, well-funded technology company.
Health insurance package options that include HDHP and PPO, vision, and dental for you and your dependents.
Employer contributions to HSA accounts.
Paid Parental Leave.
Paid life insurance, short-term and long-term disability.
Teladoc.
401(k) with a 100% match up to 4% of salary.
Generous paid time off and holiday schedule.
Cell phone reimbursement.
Tuition reimbursement.
Subscription to the Calm app.
MetLife Legal.
Company paid commuter benefit; $300 per month.

Compensation
Compensation will be paid in the range of $209,000 – $253,000. Restricted Stock Units are included in all offers. Compensation to be determined by the applicant’s education, experience, knowledge, skills, and abilities, as well as internal equity and alignment with market data.

Crusoe is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, disability, genetic information, pregnancy, citizenship, marital status, sex/gender, sexual preference/orientation, gender identity, age, veteran status, national origin, or any other status protected by law or regulation.
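The "priority-descending allocation" that the Capacity Management Intelligence system is described as handling can be illustrated with a minimal sketch (hypothetical, not Crusoe's implementation): requests are served highest priority first until the free node pool is exhausted.

```python
def allocate(free_nodes, requests):
    """Priority-descending allocation: serve the highest-priority
    requests first until the free pool runs out.

    requests: list of (name, priority, nodes_needed) tuples.
    Returns ({name: nodes_granted}, nodes_left_over).
    """
    grants = {}
    # Sort by priority, descending, so high-priority pools drain first.
    for name, _priority, need in sorted(requests, key=lambda r: -r[1]):
        granted = min(need, free_nodes)
        if granted:
            grants[name] = granted
            free_nodes -= granted
    return grants, free_nodes

reqs = [("batch", 1, 40), ("reserved", 3, 60), ("on-demand", 2, 30)]
print(allocate(100, reqs))  # ({'reserved': 60, 'on-demand': 30, 'batch': 10}, 0)
```

A production system would layer auditing, event emission, and forecasting on top, but the core invariant (higher priority is never starved by lower) is this sort.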
2026-04-16 16:21
Director of Applied AI
webAI
101-200
United States
Full-time
Remote
About Us:
webAI is pioneering the future of artificial intelligence by establishing the first distributed AI infrastructure dedicated to personalized AI. We recognize the evolving demands of a data-driven society for scalability and flexibility, and we firmly believe that the future of AI lies in distributed processing at the edge, bringing computation closer to the source of data generation. Our mission is to build a future where a company's valuable data and intellectual property remain entirely private, enabling the deployment of large-scale AI models directly on standard consumer hardware without compromising the information embedded within those models. We are developing an end-to-end platform that is secure, scalable, and fully under the control of our users, empowering enterprises with AI that understands their unique business. We are a team driven by truth, ownership, tenacity, and humility, and we seek individuals who resonate with these core values and are passionate about shaping the next generation of AI.

About the Role:
The Director of Applied AI will play a pivotal leadership role in designing, developing, and deploying advanced AI solutions for public sector and enterprise customers. You will lead a cross-functional team of engineers and applied scientists to deliver mission-critical AI capabilities, architect novel approaches leveraging distributed AI infrastructure, and partner closely with customers to understand their needs and translate them into scalable, impactful solutions.

This role requires deep technical expertise, strong communication and leadership skills, and the ability to drive execution in high-stakes, rapidly evolving environments. You will help shape webAI’s applied research strategy, mentor teams, and ensure we deliver world-class AI technologies that align with customer missions and business goals.
Responsibilities:
Lead the design, development, and deployment of applied AI systems built on webAI’s distributed AI platform.
Engage directly with public sector and commercial customers to translate mission needs into technical requirements and scalable solutions.
Guide teams in training, fine-tuning, optimizing, and evaluating large language models and multimodal models for domain-specific applications.
Oversee technical execution across multiple parallel projects, ensuring high-quality delivery under tight deadlines.
Collaborate with Infrastructure, Platform Engineering, and Product teams to align applied AI initiatives with overall product strategy.
Build and mentor a high-performing applied AI team, fostering a culture of innovation, ownership, and hands-on problem solving.
Drive architectural decisions related to edge inference, distributed processing, privacy-preserving AI, and real-time performance.
Establish rigorous evaluation methodologies, benchmarks, and validation frameworks for AI model performance and system reliability.
Identify emerging trends, technologies, and research areas that can accelerate mission impact and product differentiation.
Present complex technical concepts to leaders, customers, and stakeholders in a clear and compelling manner.
Qualifications:
You need to code! Every leader in this company is hands-on.
8+ years of experience in applied machine learning, AI engineering, or related fields, with at least 3 years in a leadership role.
Proven track record of designing and deploying an AI product.
Strong expertise with LLMs, generative AI, and machine learning workflows.
Hands-on experience building or integrating AI systems for customer-facing applications.
Proficiency in Python and modern ML frameworks (PyTorch, TensorFlow, JAX, etc.).
Experience leading cross-functional teams and managing multiple high-impact projects simultaneously.
Ability to communicate complex technical ideas clearly to both technical and non-technical audiences.
Bachelor’s degree in Computer Science, Engineering, Mathematics, or related field.
Preferred Skills:
Master's or PhD in a technical discipline such as Computer Science, Machine Learning, AI, or Applied Mathematics.
Background in distributed systems, edge computing, or privacy-preserving ML.
Experience deploying AI models on-device or in constrained compute environments.
Familiarity with MLOps, data pipelines, and scalable inference architectures.
Strong understanding of security, compliance, and data governance considerations in enterprise or public sector contexts.
Ability to thrive in ambiguous, fast-paced, high-growth startup environments.
We at webAI are committed to living out the core values we have put in place as the foundation on which we operate as a team. We seek individuals who exemplify the following:
Truth - Emphasizing transparency and honesty in every interaction and decision.
Ownership - Taking full responsibility for one’s actions and decisions, demonstrating commitment to the success of our clients.
Tenacity - Persisting in the face of challenges and setbacks, continually striving for excellence and improvement.
Humility - Maintaining a respectful and learning-oriented mindset, acknowledging the strengths and contributions of others.

Benefits:
Competitive salary
Comprehensive health, dental, and vision benefits package
401(k) match (U.S.-based employees only)
$200/month Health & Wellness stipend
Continuing Education support
$500/year Function Health subscription (U.S.-based employees only)
Free parking for in-office employees
Flexible Time Off (FTO)
Parental leave for eligible employees
Supplemental life insurance
webAI is an Equal Opportunity Employer and does not discriminate against any employee or applicant on the basis of age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. We adhere to these principles in all aspects of employment, including recruitment, hiring, training, compensation, promotion, benefits, social and recreational programs, and discipline. In addition, it is the policy of webAI to provide reasonable accommodation to qualified employees who have protected disabilities to the extent required by applicable laws, regulations and ordinances where a particular employee works.
2026-04-16 15:21
AI Research Director
webAI
101-200
United States
Full-time
Remote
About Us:
webAI is pioneering the future of artificial intelligence by establishing the first distributed AI infrastructure dedicated to personalized AI. We recognize the evolving demands of a data-driven society for scalability and flexibility, and we firmly believe that the future of AI lies in distributed processing at the edge, bringing computation closer to the source of data generation. Our mission is to build a future where a company's valuable data and intellectual property remain entirely private, enabling the deployment of large-scale AI models directly on standard consumer hardware without compromising the information embedded within those models. We are developing an end-to-end platform that is secure, scalable, and fully under the control of our users, empowering enterprises with AI that understands their unique business. We are a team driven by truth, ownership, tenacity, and humility, and we seek individuals who resonate with these core values and are passionate about shaping the next generation of AI.

About the Role:
The AI Research Director will lead webAI’s advanced research initiatives across large language models, multimodal systems, edge inference, and distributed AI. This role drives the scientific vision behind our core technologies and works closely with engineering leadership to convert cutting-edge research into production-grade capabilities.

You will oversee research teams, establish long-term research roadmaps, design novel model architectures, and push innovation that directly shapes webAI’s platform, model performance, and differentiated technical position. This role is ideal for a hands-on, forward-thinking leader who wants to build what the future of private, distributed AI looks like.
Responsibilities:
Lead webAI’s AI and ML research strategy, including long-term vision, experimentation roadmap, and architectural innovation.
Oversee research on LLMs, diffusion and multimodal models, inference optimization, and distributed execution.
Advance techniques for compression, quantization, distillation, and privacy-preserving learning for edge and on-device AI.
Collaborate with Engineering and Product teams to translate research breakthroughs into scalable, production-ready capabilities.
Build, mentor, and lead a world-class research team, fostering creativity, scientific rigor, and innovation.
Evaluate emerging technologies, academic research, and industry trends to influence strategic direction.
Design and evaluate experiments, benchmarks, and methodologies for model performance and efficiency.
Represent webAI in research discussions with customers, partners, and the broader AI community.
Ensure research initiatives align with customer missions, security requirements, and enterprise needs.
Qualifications:
You read papers on a daily basis.
8+ years of experience in AI, machine learning research, deep learning, or related fields.
Experience leading research teams and delivering impactful ML innovations.
Expertise in modern AI architectures including transformers, diffusion models, multimodal systems, and reinforcement learning.
Experience designing, training, and optimizing large-scale ML models.
Strong understanding of distributed systems, edge inference, and model efficiency techniques such as quantization and pruning.
Proficiency in Python and ML frameworks such as PyTorch, TensorFlow, or JAX.
Strong research background demonstrated through publications, patents, or deployed innovations.
Ability to communicate complex concepts clearly to technical and non-technical audiences.
Bachelor’s degree in Computer Science, Engineering, or related field.
Preferred Skills:
Master’s or PhD in Machine Learning, Computer Science, AI, Mathematics, or related field.
Experience with privacy-preserving AI such as federated learning or secure execution.
Strong mathematical foundation in ML optimization and model theory.
Experience integrating novel research into production environments.
Ability to operate in fast-paced, high-growth startup environments.
We at webAI are committed to living out the core values we have put in place as the foundation on which we operate as a team. We seek individuals who exemplify the following:
Truth - Emphasizing transparency and honesty in every interaction and decision.
Ownership - Taking full responsibility for one’s actions and decisions, demonstrating commitment to the success of our clients.
Tenacity - Persisting in the face of challenges and setbacks, continually striving for excellence and improvement.
Humility - Maintaining a respectful and learning-oriented mindset, acknowledging the strengths and contributions of others.

Benefits:
Competitive salary
Comprehensive health, dental, and vision benefits package
401(k) match (U.S.-based employees only)
$200/month Health & Wellness stipend
Continuing Education support
$500/year Function Health subscription (U.S.-based employees only)
Free parking for in-office employees
Flexible Time Off (FTO)
Parental leave for eligible employees
Supplemental life insurance
webAI is an Equal Opportunity Employer and does not discriminate against any employee or applicant on the basis of age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. We adhere to these principles in all aspects of employment, including recruitment, hiring, training, compensation, promotion, benefits, social and recreational programs, and discipline. In addition, it is the policy of webAI to provide reasonable accommodation to qualified employees who have protected disabilities to the extent required by applicable laws, regulations and ordinances where a particular employee works.
2026-04-16 15:21
Senior Forward Deployed AI Engineer
webAI
101-200
United States
Full-time
Remote
Special Notice:
This position is NOT contingent upon awarding of a project or needing a funding source. This is full-time employment with webAI.

About the Role:
We are seeking a Senior Forward Deployed AI Engineer to support our Public Sector initiatives focused on building and optimizing production-ready AI systems for secure and distributed environments. This role sits at the intersection of machine learning, systems engineering, and deployment optimization, bridging research and real-world implementation.

You will be responsible for transforming prototype models into scalable, efficient, and reliable production systems that operate seamlessly across a spectrum of hardware, from government cloud infrastructure to edge devices in restricted or disconnected environments.

The ideal candidate will be based in Austin, Texas or in the Washington, D.C./Northern Virginia area. On a case-by-case basis, fully remote candidates may be considered. Qualified candidates who are not based in Austin, Texas may be asked to travel to our Austin, Texas headquarters. This role will require up to 25% travel, including occasional on-site work at customer locations to support deployment, troubleshooting, and integration of our platform within enterprise environments.
Key Responsibilities:
• Collaborate closely with customers to scope, deploy, and maintain AI solutions in production environments.
• Design, develop, and deploy agentic workflows to orchestrate multi-step reasoning, tool use, and decision-making across production systems.
• Debug and optimize data pipelines and AI systems running on customer networks.
• Translate complex, often ambiguous customer requirements into well-scoped technical solutions.
• Work across the stack—from model inference on consumer hardware to infrastructure automation.
• Read hardware schematics/logs to identify performance bottlenecks and suggest improvements.
• Serve as a trusted technical advisor to enterprise clients, representing the engineering team externally.
• Contribute feedback and insight to internal teams to continuously improve product robustness and usability.
Required Skills & Qualifications:
• 5+ years of combined experience in software engineering and machine learning.
• Active US security clearance.
• Proven track record of deploying and maintaining machine learning/AI systems in production.
• Strong expertise in ML frameworks such as PyTorch, TensorFlow, ONNX, JAX, etc.
• Experience debugging complex systems involving data pipelines, model inference, and hardware interaction.
• Exceptional communication skills; able to challenge vague requirements and turn them into actionable plans.
• Comfortable working in dynamic environments with high customer exposure.
Preferred Qualifications:
• Experience deploying models on edge devices or consumer hardware.
• Familiarity with distributed systems, DevOps tooling, and performance tuning.
• Prior experience in a customer-facing engineering role.
• Proficiency in developing full-stack applications.
• Knowledge of privacy-preserving AI and secure compute environments.
• Master’s degree in a relevant technical discipline.
We at webAI are committed to living out the core values we have put in place as the foundation on which we operate as a team. We seek individuals who exemplify the following:
• Truth - Emphasizing transparency and honesty in every interaction and decision.
• Ownership - Taking full responsibility for one’s actions and decisions, demonstrating commitment to the success of our clients.
• Tenacity - Persisting in the face of challenges and setbacks, continually striving for excellence and improvement.
• Humility - Maintaining a respectful and learning-oriented mindset, acknowledging the strengths and contributions of others.
Benefits:
• Competitive salary
• Comprehensive health, dental, and vision benefits package
• 401(k) match (U.S.-based employees only)
• $200/month Health & Wellness stipend
• Continuing Education support
• $500/year Function Health subscription (U.S.-based employees only)
• Free parking for in-office employees
• Flexible Time Off (FTO)
• Parental leave for eligible employees
• Supplemental life insurance
webAI is an Equal Opportunity Employer and does not discriminate against any employee or applicant on the basis of age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. We adhere to these principles in all aspects of employment, including recruitment, hiring, training, compensation, promotion, benefits, social and recreational programs, and discipline. In addition, it is the policy of webAI to provide reasonable accommodation to qualified employees who have protected disabilities to the extent required by applicable laws, regulations and ordinances where a particular employee works.
2026-04-16 15:21
Applied AI, Forward Deployed Machine Learning Engineer - Palo Alto
Mistral AI
501-1000
United States
Full-time
Remote
false
About Mistral
At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.
We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work.
We are a dynamic, collaborative team passionate about AI and its potential to transform society.
Our diverse workforce thrives in competitive environments and is committed to driving innovation. Our teams are distributed between France, USA, UK, Germany and Singapore. We are creative, low-ego and team-spirited.
Join us to be part of a pioneering company shaping the future of AI. Together, we can make a meaningful impact. See more about our culture on https://mistral.ai/careers.
About The Job
Mistral AI is seeking an Applied AI Engineer to facilitate the adoption of its products among customers and collaborate with them to address complex technical challenges.
The Applied AI Engineer will be an integral part of our Applied AI Engineering team, which is dedicated to driving the successful deployment of Mistral AI products. They will work hand-in-hand with customers from the pre-sale stage to post-implementation, ensuring our solutions meet and exceed client expectations.
In this role, you’ll manage daily customer relations involving multiple stakeholders (CEO/CTO, data scientists, and software engineers) and function as a key resource in externalising our research in production settings.
What you will do
• You’ll be responsible for onboarding customers on our products and APIs, providing guidance on prompting, evaluation, and fine-tuning, and ensuring the best production integration with back-end and front-end interfaces.
• You’ll work on state-of-the-art GenAI applications, from consumer products to industrial use cases, driving a crucial technological transformation with our customers.
• You’ll individually help deploy into production use cases with a considerable business impact across various industries.
• You’ll collaborate with our researchers, other AI engineers, and product engineers on our most complex customer projects, involving complex fine-tuning and state-of-the-art LLM applications, and contribute to our open-source codebases for tasks such as inference and fine-tuning.
• You’ll be involved in pre-sales calls to understand potential clients' needs, challenges, and aspirations. You will provide technical guidance on our products and explain Mistral technologies to various stakeholders.
• You’ll collaborate with our product and science teams to continuously improve our product and model capabilities based on customer feedback.
About you
• You are fluent in English.
• You hold a PhD or master’s degree in AI or data science.
• You have 2+ years of experience as a technical individual contributor (data scientist or software engineer) on AI-based products.
• You have experience fine-tuning LLMs and tackling advanced RAG or agentic use cases.
• You have a deep understanding of the concepts and algorithms underlying machine learning and LLMs.
• You’re experienced with building and deploying LLM or NLP applications.
• You have proven experience implementing AI or machine learning products with APIs and back-end and front-end interfaces.
• You have strong technical coding skills in Python.
• You have experience with deep learning in PyTorch.
• You have experience with agent frameworks such as LangChain, and with vector DBs.
• You have strong communication skills and can explain complex technical concepts in simple terms to technical and non-technical audiences.
Ideally you have:
• Contributed to open-source projects, particularly in the LLM space
• Experience as a Customer Engineer, Forward Deployed Engineer, Sales Engineer, Solutions Architect or Technical Product Manager
Benefits
💰 Competitive salary and bonus structure
🚀 Generous Equity
🧑‍⚕️ Health : Competitive Healthcare program (Medical Provider: Blueshield of California 100% coverage for employee, 75% for dependents)
👴🏻 Pension : 401K (6% matching)
🏝️ PTO : 18 days
🚗 Transportation: Reimburse office parking charges, or $120/month for public transport
🤝 Coaching: we offer Betterup coaching on a voluntary basis
🏀 Sport: $120/month reimbursement for gym membership
🥕 Meal stipend: $400 monthly allowance for meals (solution might evolve as we grow bigger)
🌎 Visa sponsorship
2026-04-16 12:50
Senior AI Engineer (Core) - Supernal
Infinity Constellation
11-50
United States
Full-time
Remote
false
Senior AI Engineer
About Supernal
Supernal helps small-to-medium businesses hire their first AI employee. Our AI teammates are built using intelligent, agentic workflows deployed on a proprietary platform. We deliver working, value-generating AI Employees—not tools—that handle real business processes alongside human teams.
The Role
We’re hiring a Senior AI Engineer to build and ship the first generation of personalized, self-improving agentic workflows that users rely on daily. This is an “end-to-end” role: you’ll design the agent runtime, memory + retrieval systems, evaluation harnesses, and the product-facing surfaces that put agents in front of real users at scale.
You should be equally comfortable reasoning about distributed systems and data (latency, caching, queues, failure modes, cost) as you are with modern agent stacks (tool use, memory, RAG, multi-step planning, guardrails, and evaluation).
This role will partner closely with platform engineering to leverage and extend our core services (Django backend, event-driven systems, Kubernetes, observability) while owning critical parts of the AI application layer.
What You’ll Build
• Personalized agent runtime: Agentic workflows that adapt to a user’s preferences, data, and ongoing behavior over time.
• Memory & retrieval systems: Short/long-term memory, durable state, and retrieval pipelines across vector DBs and relational data.
• Voice experiences (real-time + async): Speech-to-speech/voice agents, streaming audio pipelines, turn-taking, interruption handling, latency tuning, and QA for natural conversations.
• Agent evaluation + reliability: Offline/online evals, regression suites, red-teaming, monitoring, and rollout controls so agents are trustworthy in production.
• Production agent infrastructure: Scalable orchestration patterns for multi-step jobs, background tasks, and user-facing interactions (sync + async), with clear SLAs/SLOs.
• Tooling + developer experience: Libraries and primitives that make it easy for the team to build new agent capabilities quickly and safely.
What You’ll Own (Responsibilities)
• Ship user-facing agent experiences end-to-end: prototype → production → iteration based on real usage.
• Architect and implement stateful agent systems (workflows, tool calling, memory, retrieval, and human-in-the-loop where needed).
• Build voice features end-to-end where they unlock value: realtime speech agents, voice UI/UX, prompt/audio routing, and guardrails for safe tool execution.
• Build/own an evaluation harness: curated test sets + scenario suites; automated scoring / rubric-based graders; prompt/model/version tracking; canary + A/B experimentation and safe rollout patterns.
• Design data + retrieval pipelines: chunking, enrichment, and metadata strategy; hybrid retrieval (vector + keyword + structured filters); re-ranking, caching, and latency optimization; multi-tenant safety and data isolation.
• Integrate with and extend our platform primitives: Django/DRF/ASGI services; async execution + queues + workflow orchestration; PostgreSQL + pgvector; Kubernetes deployments, autoscaling, and cost controls.
• Establish engineering rigor for agents: observability (traces, spans, structured logs); reliability patterns (timeouts, retries, circuit breakers, graceful degradation); security/privacy controls for data access and tool execution.
What We’re Looking For
Required
• Strong software engineering fundamentals (design, testing, code quality, performance, security).
• Production experience deploying AI systems in front of users (not just notebooks/demos).
• Experience building agentic or LLM-powered systems with memory and tool use.
• Comfort working across application + infrastructure layers: APIs, background jobs, data stores, and deployment.
• Hands-on experience with at least one agent framework (or equivalent custom implementation), such as LangChain / LangGraph, LlamaIndex, or AutoGen / CrewAI-style multi-agent patterns.
• Strong understanding of retrieval and vector search concepts: embeddings, indexing, filtering, evaluation.
Preferred
• Experience with vector databases and/or search stacks (e.g., Pinecone, Chroma, Weaviate, Qdrant, pgvector).
• Experience designing evaluation systems (offline eval, human eval loops, production monitoring, prompt/model regression).
• Experience building voice/real-time systems (streaming, WebRTC or similar), and/or integrating speech (STT/TTS) into production applications.
• Experience building durable, long-running workflows (Temporal or similar orchestration engines).
• Familiarity with observability tooling (OpenTelemetry, Datadog, or similar).
• Experience shipping multi-tenant SaaS systems with strong privacy boundaries.
Interview Focus Areas
• System design for agentic applications (state, memory, evaluation, failure modes).
• Practical retrieval/RAG design (data modeling, indexing, relevance, latency).
• Production engineering practices (testing strategy, observability, rollouts).
• Ability to communicate tradeoffs and make good technical decisions under uncertainty.
Compensation & Logistics
• Compensation: Competitive salary commensurate with experience (Senior level)
• Location: Remote
• Type: Full-time
• Requirements: Overlap with Americas timezones for collaboration; reliable high-speed internet
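As background on the hybrid retrieval pattern this listing names (vector + keyword + structured filters, then re-ranking): a common way to merge the two ranked lists is Reciprocal Rank Fusion. The sketch below is illustrative only; the document IDs and retriever outputs are invented, and real systems would pull these lists from a vector index and a keyword engine.

```python
def rrf_fuse(result_lists, k=60):
    """Merge ranked result lists from different retrievers (e.g. vector
    search and keyword search) with Reciprocal Rank Fusion: each document
    scores sum(1 / (k + rank)) over the lists it appears in, so documents
    ranked well by several retrievers float to the top."""
    scores = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical outputs of two retrievers for the same query.
vector_hits = ["doc3", "doc1", "doc7"]   # ranked by embedding similarity
keyword_hits = ["doc1", "doc9", "doc3"]  # ranked by keyword/BM25 score
fused = rrf_fuse([vector_hits, keyword_hits])
# Documents appearing in both lists (doc1, doc3) outrank single-list hits.
```

The constant `k` damps the advantage of top ranks; 60 is a conventional default, not a tuned value.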
2026-04-16 12:06
Engineering Manager (AI) - Supernal
Infinity Constellation
11-50
United States
Full-time
Remote
false
About Supernal
At Supernal, we help SMBs hire their first AI employee. Our AI teammates are built with intelligent, agentic workflows and deployed on our proprietary platform. We don't build tools — we deliver working, value-generating AI Employees.
Our AI Platform Engineers, known internally as Masons, are the builders behind these systems. As we scale delivery, we need a Mason Manager to lead multiple pods of Masons and ensure we ship reliable, production-grade AI Employees — predictably and at high quality.
The Role
As a Mason Manager (Engineering Manager), you will lead multiple pods of Junior + Senior Masons responsible for building and shipping production automation and agentic systems for customers.
This is a highly technical people leadership role. You will be accountable for what your pods ship: architecture decisions, quality bars, reliability, documentation, and delivery outcomes. You’ll also invest heavily in hiring, coaching, and performance management — building a team that can deliver at scale with consistent craft.
You are not a “process-only” manager. You will stay close to the work: reviewing designs, unblocking complex integrations, setting engineering standards, and acting as the escalation point for production issues and delivery risk.
Responsibilities
• Lead multiple Mason pods and own delivery outcomes: scope, milestones, quality, and on-time execution
• Translate ambiguous customer/internal requests into clear plans, acceptance criteria, and execution strategy
• Set and enforce production-quality standards for Mason builds (testing, monitoring, runbooks, documentation, rollout plans)
• Serve as technical escalation for difficult problems: auth/permissions, integrations, data modeling, reliability, and failure recovery
• Establish and evolve team processes: scoping discipline, QA gates, review checklists, incident/postmortem loops, and continuous improvement
• Drive prioritization and capacity planning across pods; identify the critical path and remove blockers fast
• Partner with Delivery Leads and stakeholders to manage tradeoffs, timelines, and expectations (including client-facing escalations when needed)
• Hire and build the team: define roles, run interview loops, calibrate, close candidates, and improve onboarding
• Manage performance: set expectations, deliver feedback, coach growth, and handle underperformance clearly and fairly
• Develop leaders within the Mason org: mentoring, delegation, and building strong ownership at every level
You Might Be a Fit If You...
• Have 5+ years of experience building production systems as a software/automation engineer, plus 2+ years of engineering management or tech-leadership experience (people management strongly preferred)
• Have managed multiple concurrent workstreams (pods/squads) with shared standards and predictable delivery
• Are deeply comfortable with integrations: APIs, webhooks, auth (OAuth/API keys), and data stores (Postgres/Supabase)
• Can reason about reliability in automation/agentic systems: idempotency, retries/backoff, rate limits, auditing, and safe failure modes
• Have a strong quality mindset: unit/integration/E2E testing, regression prevention, monitoring/observability, and runbook culture
• Have experience with applied AI delivery patterns: prompt iteration, eval harnesses, human-in-the-loop QA, and LLM observability
• Enjoy people management and have real examples of coaching, feedback, and performance management
• Have run hiring loops end-to-end: defining roles, interviewing, calibration, and closing candidates
• Communicate clearly and fluently in English — written and verbal — and can align technical and non-technical stakeholders
• Thrive in fast-paced, ambiguous environments and take ownership without being asked
What Success Looks Like
• Multiple Mason pods ship production AI Employees predictably, with clear milestones and minimal thrash
• Builds are reliable in the wild: fewer incidents, fast recovery, strong observability, and durable runbooks/SOPs
• Engineering standards are consistently applied across pods (testing, documentation, QA gates, and design clarity)
• Stakeholders have high trust: timelines and tradeoffs are communicated early and crisply
• The Mason org scales through strong hiring and onboarding; new Masons ramp quickly and ship meaningful work
• Team performance improves over time through coaching, clear expectations, and a high-accountability culture
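Two of the reliability patterns this listing asks candidates to reason about, retries with backoff and idempotency, can be sketched briefly. This is a minimal illustration, not Supernal code: the `TransientError` class, the in-memory `_results` store, and the function names are all invented for the example (a real system would persist idempotency keys in a durable store).

```python
import random
import time

class TransientError(Exception):
    """A retryable failure, e.g. a rate limit or a network blip."""

def call_with_backoff(fn, max_attempts=5, base_delay=0.5, sleep=time.sleep):
    """Retry fn on TransientError with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure
            # Delay doubles each attempt; jitter avoids thundering herds.
            sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

_results = {}  # toy in-memory store of idempotency_key -> result

def handle_once(idempotency_key, handler, payload):
    """Make a handler safe to retry: a redelivered event with the same
    key returns the stored result instead of running the handler twice."""
    if idempotency_key not in _results:
        _results[idempotency_key] = handler(payload)
    return _results[idempotency_key]
```

Combining the two (retrying a delivery whose handler is idempotent) is what makes "at-least-once" event systems behave like "exactly-once" from the caller's point of view.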
2026-04-16 12:05
Product Manager, Agent Builder
Sierra
201-500
$175,000 – $350,000
United States
Full-time
Remote
false
About us
At Sierra, we’re creating a platform to help businesses build better, more human customer experiences with AI. We are primarily an in-person company based in San Francisco, with growing offices in Atlanta, New York, London, Paris, Madrid, Munich, Singapore, Japan, and Sydney.
We are guided by a set of values that are at the core of our actions and define our culture: Trust, Customer Obsession, Craftsmanship, Intensity, and Family. These values are the foundation of our work, and we are committed to upholding them in everything we do.
Our co-founders are Bret Taylor and Clay Bavor. Bret currently serves as Board Chair of OpenAI. Previously, he was co-CEO of Salesforce (which had acquired the company he founded, Quip) and CTO of Facebook. Bret was also one of Google's earliest product managers and co-creator of Google Maps. Before founding Sierra, Clay spent 18 years at Google, where he most recently led Google Labs. Earlier, he started and led Google’s AR/VR effort, Project Starline, and Google Lens. Before that, Clay led the product and design teams for Google Workspace.
About the role
We’re looking for a Product Manager for Agent Builder: the core environment where agents are created, tested, and improved.
Agent Builder brings together the full agent development experience — journeys, chat, simulations, workspaces, packages, and AI copilots like Ghostwriter and Explorer — into a unified system. It reflects Sierra’s Agent Development Life Cycle: Analyze → Build → Test → Release, repeated continuously to improve agent quality.
As PM for Agent Builder, you will define how teams actually build agents in practice. You’ll shape the systems that turn ideas into working behaviors, enable rapid iteration, and allow agents to continuously improve over time.
This is a zero-to-one and scaling role at the center of Sierra’s product.
What you'll do
• Define the agent building experience - Shape how users move from idea → implementation across journeys, chat, and workflows. Create clear abstractions for building complex agent behavior without unnecessary friction.
• Own the Agent Development Life Cycle - Design how users analyze conversations, make changes, test improvements, and release updates. Ensure the loop between build → test → learn is tight, fast, and intuitive.
• Build simulation and testing systems - Define how agents are validated before deployment. Create tools for simulating real-world scenarios, identifying failures, and improving performance with confidence.
• Design reusable systems through packages - Define how integrations, skills, and agent behaviors are packaged, discovered, and reused. Enable users to compose sophisticated agents from modular building blocks that improve over time.
• Integrate AI copilots into the workflow - Work closely on how Ghostwriter (build) and Explorer (analyze) fit into Agent Builder. Decide what is automated vs user-driven, and how AI augments each step of the workflow.
• Make agent quality measurable and actionable - Define evaluation frameworks and feedback systems so users can understand performance and systematically improve their agents.
What you'll bring
• 3+ years of product management experience, ideally working on developer platforms, AI systems, or complex technical products
• Strong technical depth - Ability to engage deeply with engineers on system design (e.g., simulation systems, evaluation frameworks, APIs, and platform architecture)
• Experience building 0→1 products or platforms - Comfortable defining new abstractions and workflows in ambiguous spaces
• Experience working with AI systems - Familiarity with LLMs and the challenges of building, testing, and iterating on non-deterministic systems
• Strong product instincts for builder tools - You have taste in developer experience and can balance power, flexibility, and usability
Our values
• Trust: We build trust with our customers with our accountability, empathy, quality, and responsiveness. We build trust in AI by making it more accessible, safe, and useful. We build trust with each other by showing up for each other professionally and personally, creating an environment that enables all of us to do our best work.
• Customer Obsession: We deeply understand our customers’ business goals and relentlessly focus on driving outcomes, not just technical milestones. Everyone at the company knows and spends time with our customers. When our customer is having an issue, we drop everything and fix it.
• Craftsmanship: We get the details right, from the words on the page to the system architecture. We have good taste. When we notice something isn’t right, we take the time to fix it. We are proud of the products we produce. We continuously self-reflect to continuously self-improve.
• Intensity: We know we don’t have the luxury of patience. We play to win. We care about our product being the best, and when it isn’t, we fix it. When we fail, we talk about it openly and without blame so we succeed the next time.
• Family: We know that balance and intensity are compatible, and we model it in our actions and processes. We are the best technology company for parents. We support and respect each other and celebrate each other’s personal and professional achievements.
What we offer
We want our benefits to reflect our values and offer the following to full-time employees:
• Flexible (Unlimited) Paid Time Off
• Medical, Dental, and Vision benefits for you and your family
• Life Insurance and Disability Benefits
• Retirement Plan (e.g., 401K, pension) with Sierra match
• Parental Leave
• Fertility and family building benefits through Carrot
• Lunch, as well as delicious snacks and coffee to keep you energized
• Discretionary Benefit Stipend giving people the ability to spend where it matters most
• Free alphorn lessons
These benefits are further detailed in Sierra's policies, may vary by region, and are subject to change at any time, consistent with the terms of any applicable compensation or benefits plans. Eligible full-time employees can participate in Sierra's equity plans subject to the terms of the applicable plans and policies.
Be you, with us
We're working to bring the transformative power of AI to every organization in the world. To do so, it is important to us that the diversity of our employees represents the diversity of our customers. We believe that our work and culture are better when we encourage, support, and respect different skills and experiences represented within our team. We encourage you to apply even if your experience doesn't precisely match the job description. We strive to evaluate all applicants consistently without regard to race, color, religion, gender, national origin, age, disability, veteran status, pregnancy, gender expression or identity, sexual orientation, citizenship, or any other legally protected class.
2026-04-16 11:36
Product Manager, Voice
Sierra
201-500
$230,000 – $390,000
United States
Full-time
Remote
false
About us
At Sierra, we’re creating a platform to help businesses build better, more human customer experiences with AI. We are primarily an in-person company based in San Francisco, with growing offices in Atlanta, New York, London, Paris, Madrid, Munich, Singapore, Japan, and Sydney.
We are guided by a set of values that are at the core of our actions and define our culture: Trust, Customer Obsession, Craftsmanship, Intensity, and Family. These values are the foundation of our work, and we are committed to upholding them in everything we do.
Our co-founders are Bret Taylor and Clay Bavor. Bret currently serves as Board Chair of OpenAI. Previously, he was co-CEO of Salesforce (which had acquired the company he founded, Quip) and CTO of Facebook. Bret was also one of Google's earliest product managers and co-creator of Google Maps. Before founding Sierra, Clay spent 18 years at Google, where he most recently led Google Labs. Earlier, he started and led Google’s AR/VR effort, Project Starline, and Google Lens. Before that, Clay led the product and design teams for Google Workspace.
About the role
We’re looking for a Product Manager for Voice: real-time, human-quality AI conversations.
Voice is one of the most demanding and important surfaces for AI agents. It requires low latency, high reliability, natural turn-taking, and the ability to handle messy, real-world interactions across phone systems and global customers.
As PM for Voice, you will define how Sierra agents sound, respond, and behave in live conversations. You’ll shape the core voice experience—from first utterance → dialogue → resolution—and ensure agents perform reliably in production across telephony and real-time systems.
This is a zero-to-one and scaling role at the intersection of speech, infrastructure, and product experience.
What you'll do
• Define the voice interaction model - Shape how agents handle real-time conversations—turn-taking, interruptions, latency, tone, and recovery from errors. Design what “human-quality” voice interaction actually means in practice.
• Build reliable real-time systems - Work closely with engineering on streaming architectures, latency budgets, and failure handling. Voice is unforgiving—ensure agents respond quickly and consistently in production environments.
• Own the voice stack experience - Partner across ASR, TTS, LLMs, and telephony integrations to deliver a cohesive product. Help decide model choices, orchestration strategies, and how different components work together.
• Make voice measurable and improvable - Define how we evaluate voice agents: latency, interruption handling, resolution rate, and conversation quality. Build feedback loops that improve performance over time.
• Translate real-world usage into product direction - Work closely with customers deploying voice agents in production. Understand edge cases (noisy environments, accents, call flows) and turn them into product improvements.
What you'll bring
• 3+ years of product management experience, with meaningful exposure to real-time systems, voice, or AI products
• Experience shipping voice or real-time products - You understand the constraints of latency, streaming systems, and user expectations in synchronous interactions
• Strong technical depth - Ability to engage deeply with engineers on system design (e.g., speech pipelines, streaming infra, telephony systems, reliability tradeoffs)
• Experience working with AI systems - Familiarity with LLMs, speech-to-text, or text-to-speech systems and their limitations in production environments
• Track record of 0→1 product development - Comfortable operating in ambiguous spaces and iterating quickly to reach product-market fit
Our values
• Trust: We build trust with our customers with our accountability, empathy, quality, and responsiveness. We build trust in AI by making it more accessible, safe, and useful. We build trust with each other by showing up for each other professionally and personally, creating an environment that enables all of us to do our best work.
• Customer Obsession: We deeply understand our customers’ business goals and relentlessly focus on driving outcomes, not just technical milestones. Everyone at the company knows and spends time with our customers. When our customer is having an issue, we drop everything and fix it.
• Craftsmanship: We get the details right, from the words on the page to the system architecture. We have good taste. When we notice something isn’t right, we take the time to fix it. We are proud of the products we produce. We continuously self-reflect to continuously self-improve.
• Intensity: We know we don’t have the luxury of patience. We play to win. We care about our product being the best, and when it isn’t, we fix it. When we fail, we talk about it openly and without blame so we succeed the next time.
• Family: We know that balance and intensity are compatible, and we model it in our actions and processes. We are the best technology company for parents. We support and respect each other and celebrate each other’s personal and professional achievements.
What we offer
We want our benefits to reflect our values and offer the following to full-time employees:
• Flexible (Unlimited) Paid Time Off
• Medical, Dental, and Vision benefits for you and your family
• Life Insurance and Disability Benefits
• Retirement Plan (e.g., 401K, pension) with Sierra match
• Parental Leave
• Fertility and family building benefits through Carrot
• Lunch, as well as delicious snacks and coffee to keep you energized
• Discretionary Benefit Stipend giving people the ability to spend where it matters most
• Free alphorn lessons
These benefits are further detailed in Sierra's policies, may vary by region, and are subject to change at any time, consistent with the terms of any applicable compensation or benefits plans. Eligible full-time employees can participate in Sierra's equity plans subject to the terms of the applicable plans and policies.
Be you, with us
We're working to bring the transformative power of AI to every organization in the world. To do so, it is important to us that the diversity of our employees represents the diversity of our customers. We believe that our work and culture are better when we encourage, support, and respect different skills and experiences represented within our team. We encourage you to apply even if your experience doesn't precisely match the job description. We strive to evaluate all applicants consistently without regard to race, color, religion, gender, national origin, age, disability, veteran status, pregnancy, gender expression or identity, sexual orientation, citizenship, or any other legally protected class.
2026-04-16 11:35
Finance Analytics Engineer
Together AI
201-500
$200,000 – $280,000
Full-time
Remote
false
About the Role
The Turbo team sits at the intersection of efficient inference (algorithms, architectures, engines) and post‑training / RL systems. We build and operate the systems behind Together’s API, including high‑performance inference and RL/post‑training engines that can run at production scale.
Our mandate is to push the frontier of efficient inference and RL‑driven training: making models dramatically faster and cheaper to run, while improving their capabilities through RL‑based post‑training (e.g., GRPO‑style objectives). This work lives at the interface of algorithms and systems: asynchronous RL, rollout collection, scheduling, and batching all interact with engine design, creating many knobs to tune across the RL algorithm, training loop, and inference stack. Much of the job is modifying production inference systems—for example, SGLang‑ or vLLM‑style serving stacks and speculative decoding systems such as ATLAS—grounded in a strong understanding of post‑training and inference theory, rather than purely theoretical algorithm design.
You’ll work across the stack—from RL algorithms and training engines to kernels and serving systems—to build and improve frontier models via RL pipelines. People on this team are often spiky: some are more RL‑first, some are more systems‑first. Depth in one of these areas plus appetite to collaborate across (and grow toward more full‑stack ownership over time) is ideal.
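As one concrete illustration of the RL side of this work (a toy sketch, not Together's actual implementation), a GRPO-style objective starts by normalizing each rollout's reward against the other rollouts sampled for the same prompt:

```python
import statistics

def grpo_advantages(group_rewards: list[float]) -> list[float]:
    """Toy GRPO-style advantage estimate: each rollout's reward is
    normalized against the mean/std of its own sampling group.
    Illustrative only -- real pipelines add clipping, KL penalties,
    and batched tensor math on top of this."""
    mu = statistics.fmean(group_rewards)
    sigma = statistics.pstdev(group_rewards)
    if sigma == 0.0:
        return [0.0 for _ in group_rewards]  # identical rewards carry no signal
    return [(r - mu) / sigma for r in group_rewards]

# Four rollouts sampled for one prompt, scored by a reward model:
advantages = grpo_advantages([1.0, 0.0, 0.5, 0.5])
```

Because the advantages come entirely from sampled rollouts, the dominant cost of the loop is inference, which is why this role emphasizes co-designing the RL algorithm with the serving stack rather than treating them separately.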
Requirements
We don’t expect anyone to check every box below. People on this team typically have deep expertise in one or more areas and enough breadth (or interest) to work effectively across the stack. The closer you are to full‑stack (inference + post‑training/RL + systems), the stronger the fit—but being spiky in one area and eager to grow is absolutely okay.
You might be a good fit if you:
Have strong expertise in at least one of the following, and are excited to collaborate across (and grow into) the others:
Systems‑first profile: Large‑scale inference systems (e.g., SGLang, vLLM, FasterTransformer, TensorRT, custom engines, or similar), GPU performance, distributed serving.
RL‑first profile: RL / post‑training for LLMs or large models (e.g., GRPO, RLHF/RLAIF, DPO‑like methods, reward modeling), and using these to train or fine‑tune real models.
Model architecture design for Transformers or other large neural nets.
Distributed systems / high‑performance computing for ML.
Are comfortable working from algorithms to engines:
Strong coding ability in Python.
Experience profiling and optimizing performance across GPU, networking, and memory layers.
Able to take a new sampling method, scheduler, or RL update and turn it into a production‑grade implementation in the engine and/or training stack.
Have a solid research foundation in your area(s) of depth:
Track record of impactful work in ML systems, RL, or large‑scale model training (papers, open‑source projects, or production systems).
Can read new RL / post‑training papers, understand their implications on the stack, and design minimal, correct changes in the right layer (training engine vs. inference engine vs. data / API).
Operate well as a full‑stack problem solver:
You naturally ask: “Where in the stack is this really bottlenecked?”
You enjoy collaborating with infra, research, and product teams, and you care about both scientific quality and user‑visible wins.
Minimum qualifications
3+ years of experience working on ML systems, large‑scale model training, inference, or adjacent areas (or equivalent experience via research / open source).
Advanced degree in Computer Science, EE, or a related field, or equivalent practical experience.
Demonstrated experience owning complex technical projects end‑to‑end.
If you’re excited about the role and strong in some of these areas, we encourage you to apply even if you don’t meet every single requirement.
Responsibilities
Advance inference efficiency end‑to‑end
Design and prototype algorithms, architectures, and scheduling strategies for low‑latency, high‑throughput inference.
Implement and maintain changes in high‑performance inference engines (e.g., SGLang‑ or vLLM‑style systems and Together’s inference stack), including kernel backends, speculative decoding (e.g., ATLAS), quantization, etc.
Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost.
Unify inference with RL / post‑training
Design and operate RL and post‑training pipelines (e.g., RLHF, RLAIF, GRPO, DPO‑style methods, reward modeling) where 90+% of the cost is inference, jointly optimizing algorithms and systems.
Make RL and post‑training workloads more efficient with inference‑aware training loops—for example, async RL rollouts, speculative decoding, and other techniques that make large‑scale rollout collection and evaluation cheaper.
Use these pipelines to train, evaluate, and iterate on frontier models on top of our inference stack.
Co‑design algorithms and infrastructure so that objectives, rollout collection, and evaluation are tightly coupled to efficient inference, and quickly identify bottlenecks across the training engine, inference engine, data pipeline, and user‑facing layers.
Run ablations and scale‑up experiments to understand trade‑offs between model quality, latency, throughput, and cost, and feed these insights back into model, RL, and system design.
Own critical systems at production scale
Profile, debug, and optimize inference and post‑training services under real production workloads.
Drive roadmap items that require real engine modification—changing kernels, memory layouts, scheduling logic, and APIs as needed.
Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously.
Provide technical leadership (Staff level)
Set technical direction for cross‑team efforts at the intersection of inference, RL, and post‑training.
Mentor other engineers and researchers on full‑stack ML systems work and performance engineering.
About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.
Compensation
We offer competitive compensation, startup equity, health insurance and other competitive benefits. The US base salary range for this full-time position is: $200,000 - $280,000 + equity + benefits. Our salary ranges are determined by location, level and role. Individual compensation will be determined by experience, skills, and job-related knowledge.
Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
Please see our privacy policy at https://www.together.ai/privacy
2026-04-16 11:21
Member of technical staff - Research - Agent
H Company
201-500
United Kingdom
Full-time
Remote
false
About H:
H exists to push the boundaries of superintelligence with agentic AI. By automating complex, multi-step tasks typically performed by humans, AI agents will help unlock full human potential.

H is hiring the world’s best AI talent, seeking those who are dedicated as much to building safely and responsibly as to advancing disruptive agentic capabilities. We promote a mindset of openness, learning, and collaboration, where everyone has something to contribute.

About the Team:
The Agent team defines new learning algorithms and agent paradigms to push the frontiers of agentic systems. We build upon foundation models and reinforcement learning to develop new approaches to train artificial general agents, and work closely with the LLM/VLM and Safety teams to explore new directions.

This is a heavily engineering-focused role embedded within the research team. You will be responsible for defining the architecture and building the robust, scalable systems that underpin our research efforts. Your work will translate cutting-edge research concepts into high-performance, production-quality platforms, enabling the next generation of agentic AI.

Key Responsibilities:
Research & Leadership: Design and develop new agents, proposing new research directions, e.g., combining state-of-the-art RL with foundation models (LLMs/VLMs).
Algorithm & Systems Design: Design, implement, and scale complex, high-performance systems for training large-scale agents. This includes both the foundational infrastructure and the novel algorithms, reward models, and sophisticated training environments.
Research-to-Production: Collaborate closely with researchers and engineers to implement, test, and productionize new agent logic, learning algorithms, and system architectures.
Evaluation & Reliability: Create, manage, and scale massive benchmarks and evaluation systems to rigorously track agent capabilities. You will own system reliability, scalability, and observability for our entire research infrastructure.
Mentorship & Standards: Mentor and guide other engineers and researchers on the team, fostering technical excellence. You will establish and enforce engineering standards, tooling, and best practices for both code and research design.
Innovation: Conduct thorough code and design reviews, champion technical innovation, and proactively address technical debt to accelerate the R&D lifecycle.

Requirements:
Technical Skills:
Senior Experience: Previous demonstrable role(s) as a Staff, Principal, or Senior Engineer (or equivalent Research Scientist) in a frontier AI lab, with a proven track record of leading complex, end-to-end AI/ML projects from conception to production.
Education / Publication: PhD (or equivalent research experience) in Machine Learning, Computer Science, or a related field, preferably with a strong publication record (e.g., NeurIPS, ICML, ICLR).
Core Expertise: Deep theoretical and practical expertise in agentic AI and proven experience building, scaling, and shipping solutions involving foundation models (LLMs/VLMs).
Soft Skills:
Collaborative: Enjoys collaboration and thrives in a teamwork-oriented, fast-paced research environment.
High-Impact Communicator: Possesses impactful communication skills, with the ability to bridge the gap between research and engineering and articulate complex ideas clearly.
Mission-Driven: Genuinely eager to explore and solve the new engineering and research challenges at the frontier of agentic AI.
Bonus Skills:
Practical experience applying Reinforcement Learning to systems built on Large Language Models (LLMs).
Experience with distributed systems or cloud computing, preferably in AWS.
Familiarity with building complex simulation environments for agent training.
Experience with LLM training or fine-tuning.
Experience developing large-scale evaluation and benchmarking systems for AI models.
Experience in an agentic framework (e.g., LangChain, AutoGen, CrewAI, OpenAI SDK).
Expertise in system architecture, instrumentation, observability, and monitoring for complex, high-performance systems.

Location:
Paris or London. This role is hybrid, and you are expected to be in the office 3 days a week on average. Please expect some travel between offices on a reasonable cadence (e.g., every 4-6 weeks).

What We Offer:
Join the exciting journey of shaping the future of AI, and be part of the early days of one of the hottest AI startups.
Collaborate with a fun, dynamic, and multicultural team, working alongside world-class AI talent in a highly collaborative environment.
Enjoy a competitive salary.
Unlock opportunities for professional growth, continuous learning, and career development.

If you want to change the status quo in AI, join us.
2026-04-16 11:05
Forward Deployed Engineering Manager
Labelbox
201-500
$250,000 – $300,000
United States
Poland
Full-time
Remote
false
Shape the Future of AI
At Labelbox, we're building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we've been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.
About Labelbox
We're the only company offering three integrated solutions for frontier AI development:
Enterprise Platform & Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale
Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models
Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling
Why Join Us
High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You'll take on expanded responsibilities quickly, with career growth directly tied to your contributions.
Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.
Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.
Continuous Growth: Every role requires continuous learning and evolution. You'll be surrounded by curious minds solving complex problems at the frontier of AI.
Clear Ownership: You'll know exactly what you're responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.
Role Overview
As an Applied Research Engineer at Labelbox, you will be at the forefront of developing cutting-edge systems and methods to create, analyze, and leverage high-quality human-in-the-loop data for frontier model developers. Your role will involve designing and implementing advanced systems that integrate human feedback into AI training processes, such as Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO). You will also work on innovative techniques to measure and improve human data quality, and develop AI-assisted tools to enhance the data labeling process. Your expertise in machine learning, frontier model training, and advanced human data alignment techniques will be crucial in pushing the boundaries of AI capabilities and delivering state-of-the-art solutions to meet the evolving needs of our customers.
Your Impact
Advance the field of AI alignment by developing cutting-edge methods, such as RLHF and novel approaches, that ensure AI systems reflect human preferences more accurately.
Improve the quality of human-in-the-loop data by designing and deploying rigorous measurement and enhancement systems, leading to more reliable AI training.
Increase efficiency and effectiveness in AI-assisted data labeling by creating tools that leverage active learning and adaptive sampling, reducing manual effort while improving accuracy.
Shape the next generation of AI models by investigating how different types of human feedback (e.g., demonstrations, preferences, critiques) impact model performance and alignment.
Optimize human feedback collection by developing novel algorithms that enhance how AI learns from human input, improving model adaptability and responsiveness.
Bridge research and real-world application by integrating breakthroughs into Labelbox’s product suite, making human-AI alignment techniques scalable and impactful for users.
Drive industry innovation by engaging with customers and the broader AI community to understand evolving data needs and share best practices for training frontier models.
Contribute to the AI research ecosystem by publishing in top-tier journals, presenting at leading conferences, and influencing the future of human-centric AI.
Stay ahead of AI advancements by continuously exploring new frontiers in human-AI collaboration, human data quality, and AI alignment, keeping Labelbox at the cutting edge.
Establish Labelbox as a thought leader in AI by creating technical documentation, blog posts, and educational content that shape the industry's approach to human-centric AI development.
What You Bring
A strong foundation in AI and machine learning, backed by a Ph.D. or Master’s degree in Computer Science, Machine Learning, AI, or a related field.
Proven experience (3+ years) in solving complex ML challenges and delivering impactful solutions that improve real-world AI applications.
Expertise in designing and implementing data quality measurement and refinement systems that directly enhance model performance and reliability.
A deep understanding of frontier AI models—such as large language models and multimodal models—and the human data strategies needed to optimize them.
Proficiency in Python and experience with deep learning frameworks like PyTorch, JAX, or TensorFlow to prototype and develop cutting-edge solutions.
A track record of publishing in top-tier AI/ML conferences (e.g., NeurIPS, ICML, ICLR, ACL, EMNLP, NAACL) and contributing to the broader research community.
The ability to bridge research and application by interpreting new findings and rapidly translating them into functional prototypes.
Strong analytical and problem-solving skills that enable you to tackle ambiguous AI challenges with structured, data-driven approaches.
Exceptional communication and collaboration skills, allowing you to work effectively across multidisciplinary teams and with external stakeholders.
Labelbox Applied Research
At Labelbox Applied Research, we're committed to pushing the boundaries of AI and data-centric machine learning, with a particular focus on advanced human-AI interaction techniques. We believe that high-quality human data and sophisticated human feedback integration methods are key to unlocking the next generation of AI capabilities. Our research team works at the intersection of machine learning, human-computer interaction, and AI ethics to develop innovative solutions that can be practically applied in real-world scenarios.
We foster an environment of intellectual curiosity, collaboration, and innovation. We encourage our researchers to explore new ideas, engage in open discussions, and contribute to the wider AI community through publications and conference presentations. Our goal is to be at the forefront of human-centric AI development, setting new standards for how AI systems learn from and interact with humans.

Labelbox strives to ensure pay parity across the organization and discuss compensation transparently. The expected annual base salary range for United States-based candidates is below. This range is not inclusive of any potential equity packages or additional benefits. Exact compensation varies based on a variety of factors, including skills and competencies, experience, and geographical location.

Annual base salary range: $250,000 – $300,000 USD

Life at Labelbox
Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland
Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility
Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making
Growth: Career advancement opportunities directly tied to your impact
Vision: Be part of building the foundation for humanity's most transformative technology
Our Vision
We believe data will remain crucial in achieving artificial general intelligence. As AI models become more sophisticated, the need for high-quality, specialized training data will only grow. Join us in developing new products and services that enable the next generation of AI breakthroughs.
Labelbox is backed by leading investors including SoftBank, Andreessen Horowitz, B Capital, Gradient Ventures, Databricks Ventures, and Kleiner Perkins. Our customers include Fortune 500 enterprises and leading AI labs.
Your Personal Data Privacy: Any personal information you provide Labelbox as a part of your application will be processed in accordance with Labelbox’s Job Applicant Privacy notice.
Any emails from Labelbox team members will originate from a @labelbox.com email address. If you encounter anything that raises suspicions during your interactions, we encourage you to exercise caution and suspend or discontinue communications.
2026-04-16 8:35
Senior Machine Learning Engineer
Faculty
501-1000
United Kingdom
Full-time
Remote
false
Why Faculty?
We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we’ve worked with over 350 global customers to transform their performance through human-centric AI. You can read about our real-world impact here.

We don’t chase hype cycles. We innovate, build and deploy responsible AI which moves the needle - and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product and delivery expertise to our clients, who span government, finance, retail, energy, life sciences and defence.

Our business, and reputation, is growing fast and we’re always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology.

AI is an epoch-defining technology. Join a company where you’ll be empowered to envision its most powerful applications, and to make them happen.
About the team
Our Public Services Business Unit is committed to leveraging AI for the benefit of individual citizens and the public good.
From our work informing strategic government decisions, to optimising our NHS, through to reducing bureaucratic backlogs - we know that AI offers opportunities to drive improvements at every level of Government and we are proud to lead on some of the most impactful work happening in the sector.
Because of the nature of the work we do with our Government clients, you may need to be eligible for UK Security Clearance (SC) and willing to work on site with these clients from time to time.

About the role
As a Senior Machine Learning Engineer, we’ll look to you to lead the development and deployment of cutting-edge AI systems for our diverse clients. You’ll design, build, and deploy scalable, production-grade ML software and infrastructure that meets rigorous operational and ethical standards. This is an ambitious, cross-functional role requiring a blend of technical expertise, engineering leadership, and confident client-facing skills.

What you'll be doing:
Leading technical scoping and architectural decisions for high-impact ML systems
Designing and building production-grade ML software, tools, and scalable infrastructure
Defining and implementing best practices and standards for deploying machine learning at scale across the business
Collaborating with engineers, data scientists, product managers, and commercial teams to solve critical client challenges and leverage opportunities
Acting as a trusted technical advisor to customers and partners, translating complex concepts into actionable strategies
Mentoring and developing junior engineers, actively shaping our team's engineering culture and technical depth

Who we're looking for:
You understand the full ML lifecycle and have significant experience operationalising models built with frameworks like TensorFlow or PyTorch
You bring deep expertise in software engineering and strong Python skills, focusing on building robust, reusable systems
You have demonstrable hands-on experience with cloud platforms (e.g., AWS, Azure, GCP), including architecture, security, and infrastructure
You have extensive experience working with container and orchestration tools such as Docker and Kubernetes to build and manage applications at scale
You thrive in fast-paced, high-growth environments, demonstrating ownership and autonomy in driving projects to completion
You communicate exceptionally well, confidently guiding both technical teams and senior, non-technical stakeholders

Our Interview Process
Talent Team Screen (30 minutes)
Pair Programming Interview (90 minutes)
System Design Interview (90 minutes)
Commercial Interview (60 minutes)

Our Recruitment Ethos
We aim to grow the best team - not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live. We’re united by a deep intellectual curiosity and desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.

Some of our standout benefits:
Unlimited Annual Leave Policy
Private healthcare and dental
Enhanced parental leave
Family-Friendly Flexibility & Flexible working
Sanctus Coaching
Hybrid Working

If you don’t feel you meet all the requirements, but are excited by the role and know you bring some key strengths, please don't hesitate to apply, as you might be right for this role, or other roles. We are open to conversations about part-time hours.
2026-04-16 8:05
Security engineer, detection and response (UK)
Writer
1001-5000
United Kingdom
Full-time
Remote
false
🚀 About WRITER
WRITER is where the world's leading enterprises orchestrate AI-powered work. Our vision is to expand human capacity through superintelligence. And we're proving it's possible – through powerful, trustworthy AI that unites IT and business teams to unlock enterprise-wide transformation. With WRITER's end-to-end platform, hundreds of companies like Mars, Marriott, Uber, and Vanguard are building and deploying AI agents that are grounded in their company's data and fueled by WRITER's enterprise-grade LLMs. Valued at $1.9B and backed by industry-leading investors including Premji Invest, Radical Ventures, and ICONIQ Growth, WRITER is rapidly cementing its position as the leader in enterprise generative AI.

Founded in 2020 with office hubs in San Francisco, New York City, Austin, Chicago, and London, our team thinks big and moves fast, and we're looking for smart, hardworking builders and scalers to join us on our journey to create a better future of work with AI.

📐 About the role
Join WRITER's security team as a staff detection and response engineer and help protect the AI infrastructure that's transforming how the world works. You'll build sophisticated detection systems that identify attacks targeting our AI platform, training data, and model deployments while creating automated response capabilities that scale with our explosive growth. This isn't just traditional security work – you're defending cutting-edge AI/AGI systems against adversaries who are evolving their tactics as fast as AI itself advances.

This role combines hands-on security engineering with strategic thinking to stay ahead of novel threats that don't exist in textbooks yet. You'll be the operational arm of our security function, translating threat intelligence into real-time detections, coordinating incident response across multiple teams, and hunting for sophisticated attacks across GPU clusters and distributed training environments.
If you're excited by the challenge of securing systems that are fundamentally different from anything you've protected before, this is your opportunity to define what AI security engineering looks like at scale.You'll work closely with our AI Security research team, Cloud Infrastructure, Software Security Engineering, and AI researchers to build a defense-in-depth strategy that protects one of the most valuable AI platforms in the industry. The threats are real, the stakes are high, and the problems are intellectually fascinating.This role can be based in San reporting to our head of security operations.🦸🏻♀️ What you’ll doDesign and implement detection strategies that identify AI-specific threats including prompt injection, model extraction, data poisoning, adversarial examples, and unauthorized access to training datasets or model weights across our distributed infrastructureBuild automated response playbooks and orchestration workflows that contain threats without human intervention, creating self-healing security systems that reduce mean time to response from hours to minutes while automatically remediating compromised inference endpointsLead security incident response coordination across all teams (Cloud, AppSec, Enterprise, AI Security) when AI infrastructure or models are compromised, conducting forensic investigations on training pipeline attacks and model manipulation attempts while drafting clear incident communications for engineering and executive leadershipHunt proactively for sophisticated threats across GPU clusters and training infrastructure by analyzing model outputs for signs of compromise, reproducing AI-specific vulnerabilities from security research, and identifying visibility gaps in distributed training environments before adversaries exploit themBuild detection-as-code frameworks with version control and automated deployment, onboard telemetry from AI training infrastructure and inference endpoints, and create dashboards that track model 
security metrics, GPU utilization patterns, and access to sensitive research dataCollaborate cross-functionally as the operational security partner for all teams – translating AI Security's threat research into production detections, monitoring Cloud Infrastructure's GPU clusters for threats, detecting customer-impacting incidents for Software Security Engineering, and enabling responsible AI development through security guardrailsMaintain 24/7 on-call rotation for critical AI security incidents, responding to real-time threats targeting our platform while continuously improving detection coverage and automation capabilities as our AI systems evolve⭐️ What you need3-5+ years in security operations, detection engineering, or incident response with a proven track record of identifying and stopping sophisticated attacks in production environments, plus 3+ years specifically securing AI/ML infrastructure, high-performance computing environments, or other distributed systems at scaleStrong programming skills in Python, KQL, SPL, or similar languages that allow you to build custom detection logic, automate response workflows, and create tools that operationalize security at scale across cloud-native and distributed computing environmentsExperience with SIEM platforms, detection technologies, and forensic investigation techniques with demonstrated ability to build detection for novel attack techniques that don't have established patterns yet and to conduct forensics in complex distributed environmentsSelf-directed execution mindset with a track record of securing high-value intellectual property, automating incident response in complex environments, and identifying critical security gaps through proactive threat hunting before they become incidentsDeep alignment with WRITER's values – you naturally Connect across security, infrastructure, and AI research teams to build comprehensive defenses, you Challenge assumptions about what's possible in AI security engineering, and 
you Own the protection of our AI platform with unwavering accountability and a commitment to staying ahead of evolving threats🍩 Benefits & perks (UK full-time employees):Generous PTO, plus company holidaysComprehensive medical and dental insurancePaid parental leave for all parents (12 weeks)Fertility and family planning supportEarly-detection cancer testing through GalleriCompetitive pension scheme and company contributionAnnual work-life stipends for:Wellness stipend for gym, massage/chiropractor, personal training, etc.Learning and development stipendCompany-wide off-sites and team off-sitesCompetitive compensation and company stock options
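To make "detection-as-code" concrete: it means treating each detection rule as versioned, testable software rather than a hand-edited SIEM query. The sketch below is purely hypothetical (the rule name, log format, and threshold are invented for illustration), not WRITER's actual tooling:

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class Detection:
    """A detection rule expressed as plain, version-controlled code.
    Hypothetical sketch -- production rules usually compile down to a
    SIEM's query language and ship via CI/CD."""
    name: str
    pattern: re.Pattern

    def matches(self, log_line: str) -> bool:
        # A rule fires when its pattern appears anywhere in the log line.
        return bool(self.pattern.search(log_line))

# Invented example rule: flag large reads against model-weight paths.
weights_exfil = Detection(
    name="bulk-model-weight-read",
    pattern=re.compile(r"GET /weights/\S+ bytes=\d{9,}"),
)

sample_logs = [
    "GET /weights/llm-7b.bin bytes=5000000000",  # ~5 GB weight read
    "GET /docs/readme.md bytes=2048",            # benign
]
alerts = [line for line in sample_logs if weights_exfil.matches(line)]
```

Because the rule is ordinary code, it can carry unit tests like the benign/malicious pair above, so every change to detection logic is reviewed and validated before deployment.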
2026-04-16 1:51
Engineering Manager, Applied AI
Mercor
1001-5000
$250,000 – $400,000
United States
Full-time
Remote
false
About Mercor
Mercor is defining the future of work. We partner with leading AI labs and enterprises to provide the human intelligence essential to AI development. Our vast talent network trains frontier AI models in the same way teachers teach students: by sharing knowledge, experience, and context that can't be captured in code alone. Today, more than 30,000 experts in our network collectively earn over $2 million a day.

Mercor is creating a new category of work where expertise powers AI advancement. Achieving this requires an ambitious, fast-paced, and deeply committed team. You'll work alongside researchers, operators, and AI companies at the forefront of shaping the systems that are redefining society. Mercor is a profitable Series C company valued at $10 billion. We work in-person five days a week in our San Francisco, NYC, or London offices.

About the Role
We're hiring Engineering Managers to lead teams within our Applied AI organization. Applied AI builds systems that directly improve model quality, including evaluation infrastructure, annotation products, and emerging multimodal capabilities. You'll lead a team of engineers, partner with product and research, and help define how Mercor stays at the frontier of AI. You will also build partnerships with leading AI labs and directly contribute to improving the quality of frontier models through data, evaluation, and systems. This role requires high ownership: setting direction, driving outcomes, and staying hands-on while leading.

What You'll Work On
Build and scale teams
- Manage and grow a team of 6–10 engineers
- Coach and develop high-potential engineers
- Establish strong ownership, culture, and execution standards
- Shape the team: define hiring processes, establish engineering practices, and scale a high-performing team from the ground up

Drive Applied AI systems and outcomes
- Drive insights and methodology behind evaluation systems that benchmark and improve model performance
- Lead development while driving improvements in data quality, operational efficiency, system stability, and scalability
- Scale products that generate high-quality training data and improve human-in-the-loop workflows

Own technical direction
- Lead system design for complex AI systems
- Stay close to the technical work and guide engineers through ambiguous problems
- Translate high-level AI goals into clear engineering roadmaps and execution plans

Operate across a broad scope
- Partner with product, research, operations, and external AI labs
- Work across systems, data insights, and custom partnerships to drive model quality

Shape the organization
- Introduce lightweight processes as the organization scales
- Hire and develop strong engineering talent
- Help define engineering management at Mercor

What We're Looking For
- 6–10 years in engineering; 2–3+ years managing teams
- Strong background in building scalable systems
- Ability to lead in ambiguous environments with sound technical judgment
- Proven ability to coach engineers and drive execution
- Strong ownership mindset with pragmatic decision-making

Who Thrives Here
We're looking for engineering leaders who take ownership in ambiguous environments, make pragmatic decisions, and consistently deliver measurable impact.

Success Metrics
- Build high-performing, well-supported teams
- Hire and retain strong engineers
- Improve team velocity and execution quality
- Contribute to systems that measurably improve model performance

Location
San Francisco (preferred) or New York

Why This Role
- Direct impact on frontier AI development
- Build partnerships with leading AI labs and shape how frontier models improve
- High ownership across team, systems, and hiring
- Opportunity to shape a rapidly scaling organization
- Work at the intersection of engineering, product, data, and AI
2026-04-15 20:36
Forward Deployed Engineer
Assort Health
51-100
$155,000 – $185,000
United States
Full-time
Remote
false
Our mission is to make exceptional healthcare accessible anytime, anywhere, for anyone.

At Assort Health, we believe healthcare should feel effortless and connected — quick answers, clear communication, and seamless access to care. That's why we're building a new foundation for how patients and providers connect, driven by AI, built to embrace the complexities of healthcare, and tailored to each provider's unique needs. Assort is the most comprehensive patient experience platform powered by specialty-specific agentic AI. Assort's omnichannel AI agents seamlessly integrate with EHR/PMS and complicated provider preferences to eliminate lengthy hold times and inefficiencies that stand in the way of patients getting the care they need.

Since launching in 2023, Assort has managed over 125M patient interactions, slashing average hold times from 11 minutes to 1 minute. Our platform now handles calls for thousands of providers with 98%+ resolution rates and 99% scheduling accuracy. Patient satisfaction averages 4.5/5 over 52K reviews, and we've achieved 20× revenue growth in 2025. We're scaling rapidly and expanding adoption across the entire healthcare industry.

About The Role
We're hiring Forward Deployed Engineers to own customer deployments from technical build through go-live and optimization. You'll be the technical owner for health system implementations, working directly with customers to build, configure, and deploy production AI agents tailored to their workflows. You'll work closely with Agent Product Managers who own the customer relationship and project plan, while you own the technical execution — building integrations, creating custom workflows, and ensuring successful launches.

What You'll Do
You'll be the technical force behind bringing AI agents to life for healthcare organizations. This isn't a typical engineering role — you'll work at the intersection of cutting-edge AI, complex healthcare systems, and real customer impact.

Ship Production AI Agents
- Own implementations end-to-end for health systems, from technical scoping through go-live and beyond
- Build custom integrations with complex healthcare platforms
- Design intelligent workflows tailored to each customer's specialty, patient population, and operational constraints
- Launch agents that handle thousands of real patient interactions daily

Solve Hard Technical Problems
- Debug gnarly integration issues across phone systems, EHRs, scheduling platforms, and patient engagement tools
- Architect creative solutions to handle healthcare's complexity — insurance verification, appointment rules, clinical protocols
- Build tooling and automation that makes every future implementation faster and better
- Optimize agent performance in production using real-world data and customer feedback

Shape the Product
- Partner directly with Product Engineering to influence platform direction based on what you learn in the field
- Identify patterns across implementations that become product features
- Work with Sales to scope technical requirements and demo capabilities to prospects
- Be the voice of the customer — you'll see what works, what doesn't, and what customers actually need

What We're Looking For
Required:
- Strong software engineering background with a track record of shipping production code
- Proficiency in Python
- Experience with APIs, integrations, and working across multiple systems
- Comfortable with ambiguity and building in fast-paced, high-stakes environments
- Excellent communication skills — you can explain technical concepts to non-technical stakeholders
- Scrappy, resourceful mindset — you find ways to solve problems even when the path isn't clear
- Passion for healthcare and improving patient access

Nice to Have:
- Experience with healthcare systems (EHR, practice management, patient engagement platforms)
- Previous work in customer-facing engineering, solutions engineering, or implementation roles
- Familiarity with voice/telephony systems (Twilio, etc.)
- Background in AI/LLM applications
- Experience working with enterprise customers

Benefits & Perks for Assorties
💸 Competitive Compensation – Including salary and employee stock options so you share in our success.
📚 Lifelong Learning – Annual budget for professional development, plus training opportunities to help you grow.
💻 Office Setup Stipend – We'll outfit your in-office workspace so it's as comfy as it is productive.
🩺 Top-Tier Health Coverage – Medical, dental, and vision insurance, because your health comes first.
🏖 Unlimited PTO – We trust you to take the time you need to recharge and come back ready to crush it.
🥗 Meals & Snacks – Lunch, dinner, and snack breaks that fuel great ideas.
💪 Fitness Stipend – Your wellness matters. We reimburse monthly membership costs to support your health.
🚆 Commuter Benefits – We cover eligible transportation costs to make your trip to work easier.
👵 401(k) – Build your retirement savings.

How We Work & What We Value
Our team at Assort Health moves fast, stays focused, and is fueled by a desire to serve our customers and patients. Our company values guide how we work — they are present in how we show up, make decisions, and work together to move our mission forward. We bring a Day One Drive, relentlessly striving to improve; keep a 5-Star Focus, as our customers are our lifeblood; always Answer the Call, remembering that ownership and accountability are paramount; and show up with One Pulse, because we are one team, with one rhythm and one result. Our team is growing and we are looking for motivated, hardworking, and passionate talent. If you want to make healthcare accessible for everyone, we'd love to hear from you!
Please note: the Assort Health Talent Team will only email you from an assorthealth.com email address.
2026-04-15 20:06
Research Scientist
Hedra
11-50
$200,000 – $325,000
United States
Full-time
Remote
false
Overview:
Hedra is building a world-class Physical AI research team to push the boundaries of action-conditioned world models and generative AI for physical systems. As a Researcher, you will drive original research into the intersection of generative modeling, embodied AI, and real-world physical applications alongside industrial partners. You will have access to large-scale compute, the freedom to pursue high-impact research directions, and a direct path to publication at top venues. We are looking for researchers who are excited to go beyond benchmarks and build models that operate in the real world — drawing on Hedra's leadership in generative modeling and the depth of our academic partnerships, including connections to Fei-Fei Li and the Stanford Vision & Learning Lab.

Responsibilities:
- Define and lead research directions in action-conditioned world models, physical AI, and generative modeling for embodied systems
- Design novel architectures, training objectives, and evaluation frameworks for VLMs, VLAs, and world models
- Direct research efforts with the goal of publishing in top journals
- Partner with industrial collaborators to ground research in real-world physical AI use cases
- Mentor research engineers and collaborate cross-functionally to move research into production
- Stay at the frontier of the field — synthesizing relevant literature and identifying opportunities for impactful contributions
- Contribute to Hedra's research culture and external scientific reputation

Qualifications:
- PhD in Machine Learning, Computer Science, Robotics, or a related field, with publications at top ML or robotics venues
- Deep expertise in generative modeling, world models, or vision-language(-action) models
- Strong publication record at NeurIPS, ICML, ICLR, CVPR, CoRL, or equivalent venues
- Experience with large-scale model training and modern deep learning infrastructure
- Ability to independently drive research projects from ideation through publication
- Background in embodied AI, robotic manipulation, or sim-to-real transfer is highly desirable
- Experience with RLHF, DPO, or preference optimization for model alignment is a plus
- Strong collaboration and communication skills — comfortable bridging research and applied teams

Benefits:
- Competitive compensation and equity
- 401k (no match)
- Healthcare (Silver PPO Medical, Vision, Dental)
- Lunch and snacks at the office

We encourage you to apply even if you don't fully meet all the listed requirements; we value potential and diverse perspectives, and your unique skills could be a great asset to our team.
2026-04-15 17:06
Research Engineer
Hedra
11-50
$175,000 – $275,000
United States
Full-time
Remote
false
Overview:
Hedra is a pioneering generative modeling company — first models to market — now building a Physical AI team to bring these models to real-world industry and economy use cases. As a Research Engineer on our Physical AI team, you will lead pre-training and post-training on action-conditioned world models, working hand-in-hand with industrial partners to close the loop between generative AI and physical systems. This is not a black-box applied role: your work will be published, your infrastructure will be serious, and your impact will be direct. If you want to work at the frontier of generative modeling and physical AI, this is the team.

Responsibilities:
- Design, implement, and run pre-training and post-training pipelines for action-conditioned world models and vision-language-action (VLA) models
- Develop and refine training methodologies, including fine-tuning, reinforcement learning, and large-scale multimodal learning
- Design and generate training and evaluation datasets from simulation, including environment setup, domain randomization, and sim-to-real transfer strategies
- Build distributed training infrastructure using PyTorch, FSDP, and DeepSpeed
- Work with multimodal data pipelines involving video, sensory inputs, and action sequences
- Evaluate model performance using both benchmark datasets and real-world deployment metrics
- Contribute to research publications (a plus)
- Collaborate with industrial partners to adapt generative models for real-world physical AI applications

Qualifications:
- Experience with pre-training or post-training on large generative models (video, multimodal, or action-conditioned)
- Hands-on proficiency with PyTorch and distributed training frameworks (FSDP, DeepSpeed)
- Strong fundamentals in machine learning, optimization, and large-scale data processing
- Familiarity with VLMs, VLAs, or world models
- Background in robotics, embodied AI, or sim-to-real transfer is a plus
- Experience with video understanding or temporal reasoning is a plus
- BS/MS/PhD in Computer Science, Machine Learning, Robotics, or a related field

Benefits:
- Competitive compensation and equity
- 401k (no match)
- Healthcare (Silver PPO Medical, Vision, Dental)
- Lunch and snacks at the office

We encourage you to apply even if you don't fully meet all the listed requirements; we value potential and diverse perspectives, and your unique skills could be a great asset to our team.
2026-04-15 17:06