The AI job market moves fast. We keep up so you don't have to.
Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.
New AI Opportunities
Showing 61 – 79 of 79 jobs
Training: Process Management Engineer
OpenAI · 5000+ employees · United Kingdom · Full-time · Remote: No
About the Team
Training Runtime designs the core distributed runtime that powers everything from early research experiments to frontier-scale model runs. We work on building robust, scalable, high-performance components to support our distributed training workloads. Our priorities are to maximize the productivity of our researchers and our hardware, with the goal of accelerating progress towards AGI.

Within Training Runtime, the Process Management team develops the distributed OS responsible for launching, coordinating, and supervising the large numbers of processes that make up modern training workloads. Our runtime sits beneath training frameworks and on top of research infrastructure, ensuring jobs run reliably across massive clusters while maintaining performance, stability, and observability. Success for us is measured by both system reliability and researcher velocity: enabling ideas to scale from experiments to production training runs.

About the Role
As a Training Runtime: Process Management Engineer, you will work on the software that ties thousands of computers together and exposes them as a unified system. This system has to serve individual researchers running multiple parallel experiments, as well as our largest training runs spanning hundreds of thousands, and even millions, of machines and accelerators. This requires easy-to-use, introspectable systems that promote a fast debugging and development cycle, as well as relentless optimization for scale while maintaining stability and performance throughout.

You will work primarily in Rust, building high-performance asynchronous systems with a strong emphasis on performance, correctness, and scalability. Working at this scale and at the frontier of AI development poses novel challenges; out-of-the-box approaches often don't work. The problems you will work on are highly ambiguous and require strong design judgment as well as proficient execution to advance the state of our infrastructure.

We're looking for people who love optimizing an end-to-end platform and understanding high-performance architectures to maximize both local and distributed performance across our supercomputers. We're looking for engineers excited by the rapid pace of responding to the dynamic and evolving needs of our training runtime and compute stack.

This role is based in London, UK. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In this role, you will:
- Work across our Python and Rust stack
- Design, build, and maintain software to orchestrate and monitor machine learning workloads on our largest supercomputers
- Profile and optimize our software stack to support computation orchestration at frontier scale
- Improve reliability, observability, and fault tolerance for long-running jobs
- Debug complex distributed systems issues across large clusters
- Respond to the changing shapes and needs of the ML systems to enable our researchers
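The launch-supervise-restart pattern this team owns can be sketched minimally. The real system is asynchronous Rust operating at cluster scale; this is only an illustrative shape in a few lines of Python, and all names here are hypothetical:

```python
import subprocess
import sys

def launch_workers(commands):
    """Launch one subprocess per command and return the Popen handles."""
    return [subprocess.Popen(cmd) for cmd in commands]

def supervise(procs, commands, max_restarts=2):
    """Wait on every worker; relaunch any that exits non-zero, up to a budget.

    Returns the number of restarts performed. A production supervisor would
    also handle timeouts, signals, and per-worker backoff.
    """
    restarts = 0
    pending = list(zip(procs, commands))
    while pending:
        proc, cmd = pending.pop(0)
        if proc.wait() != 0 and restarts < max_restarts:
            restarts += 1
            pending.append((subprocess.Popen(cmd), cmd))
    return restarts

# Example: one worker succeeds, one always fails; allow a single restart.
cmds = [
    [sys.executable, "-c", "print('worker ok')"],
    [sys.executable, "-c", "import sys; sys.exit(1)"],
]
print(supervise(launch_workers(cmds), cmds, max_restarts=1))  # prints 1
```

The restart budget stands in for the fault-tolerance policy a real distributed OS would make far richer (per-node health, gang scheduling, checkpoint-aware recovery).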
You might thrive in this role if you:
- Have experience developing distributed systems (not just operating them)
- Enjoy understanding how large systems behave and fail at scale
- Care deeply about performance, correctness, and reliability
- Have strong software engineering skills and are proficient in Python and Rust or another systems programming language (e.g. C++)
- Have solid Linux knowledge and are comfortable with systems-level debugging, performance analysis, and memory profiling
- Are comfortable and experienced working on and developing asynchronous and concurrent systems
- Like high-ownership environments with light process and strong engineering agency

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement. Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates.

For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
2026-03-05 2:59
Threat Modeler, Preparedness
OpenAI · 5000+ employees · $325,000 · United States · Full-time · Remote: No
About the Team
The Preparedness team is an important part of the Safety Systems org at OpenAI and is guided by OpenAI's Preparedness Framework. Frontier AI models have the potential to benefit all of humanity, but also pose increasingly severe risks. To ensure that AI promotes positive change, the Preparedness team helps us prepare for the development of increasingly capable frontier AI models. This team is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models.

The mission of the Preparedness team is to:
- Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards misuse risks whose impact could be catastrophic to our society
- Ensure we have concrete procedures, infrastructure, and partnerships to mitigate these risks and to safely handle the development of powerful AI systems

Preparedness tightly connects capability assessment, evaluations, internal red teaming, and mitigations for frontier models, as well as overall coordination on AGI preparedness. This is fast-paced, exciting work that has far-reaching importance for the company and for society.

About the Role
As a threat modeler, you will own OpenAI's holistic approach to identifying, modeling, and forecasting frontier risks from frontier AI systems. This role ensures that our evaluation frameworks, safeguards, and taxonomies are robust, high-coverage, and forward-looking. You will help the company answer the "why" behind our most stringent risk-prevention efforts, shaping the rationale for prioritizing and mitigating risks across domains. You will serve as a central node connecting technical, governance, and policy perspectives on the prioritization, focus, and rationale of our approach to frontier risks from AI.

In this role, you will:
- Develop and maintain comprehensive threat models across all misuse areas (bio, cyber, attack planning, etc.)
- Develop plausible and convincing threat models across loss of control, self-improvement, and other possible alignment risks from frontier AI systems
- Forecast risks by combining technical foresight, adversarial simulation, and emerging trends
- Pair closely with technical partners on capability evaluations to ensure these map to and cover the gamut of severe risks differentially enabled by frontier AI systems
- Pair closely with Bio and Cyber Leads to size the remaining risk of the designed safeguards and translate threat models into actionable mitigation designs
- Act as the thought partner and explainer of "why" and "when" for high-investment mitigation efforts, helping stakeholders understand the rationale behind prioritization
- Serve as the central node connecting technical, governance, and policy perspectives on the prioritization, focus, and rationale of our approach to misuse risk

You might thrive in this role if you:
- Understand risks from frontier AI systems and have a strong grasp of the AI alignment literature
- Bring deep experience in threat modeling, risk analysis, or adversarial thinking (e.g., security, national security, or safety)
- Know how AI evaluations work and can connect eval results to both capability testing and safeguard sufficiency
- Enjoy working across technical and policy domains to drive rigorous, multidisciplinary risk assessments
- Communicate complex risks clearly and compellingly to both technical and non-technical audiences
- Think in systems and naturally anticipate second-order and cascading risks
2026-03-05 2:59
Member of Technical Staff: Agent DX Research
Modal · 51-100 employees · $150,000 – $350,000 · United States · Full-time · Remote: No
About Us
Modal provides the infrastructure foundation for AI teams. With instant GPU access, sub-second container startups, and native storage, Modal makes it simple to train models, run batch jobs, and serve low-latency inference. We have thousands of customers who rely on us for production AI workloads, including Lovable, Scale AI, Substack, and Suno.

We're a fast-growing team based out of NYC, SF, and Stockholm. We've hit 9-figure ARR and recently raised a Series B at a $1.1B valuation. Our investors include Lux Capital, Redpoint Ventures, Amplify Partners, and Elad Gil.

Working at Modal means joining one of the fastest-growing AI infrastructure organizations at an early stage, with many opportunities to grow within the company. Our team includes creators of popular open-source projects (e.g. Seaborn, Luigi), academic researchers, international olympiad medalists, and experienced engineering and product leaders with decades of experience.

The Role
Modal has always obsessed over developer experience and productivity. With rapid advancements in the capabilities of AI coding agents, the practice of developing software and the meaning of developer experience are changing. We see this as an opportunity.

We're looking for an experienced researcher to join us and help make it even easier and more productive to build on Modal. We believe that our code-first approach to AI infrastructure is uniquely well suited to agent-based development. But we're looking to do even better by subjecting agent productivity to rigorous evaluation and using those insights to guide the development of our platform.

You'll work in collaboration with Modal's SDK team and other product engineers to build out a framework and process for agent productivity evaluation. Our goal is to treat developer experience optimization as a scientific problem. You'll be responsible for defining quantitative objectives, designing systems to measure performance, and translating results into product improvements. You'll also be expected to stay on top of new developments in tools and workflows, and to work with our customers to understand how they're using coding agents with Modal and where we can provide more value.

Requirements
This is a new kind of role, and we don't have one specific background in mind. Training in quantitative research is preferred: you might have a PhD in Computer Science, Human-Computer Interaction, Cognitive Science, Operations Research, or another related field. You also might have prior experience working as a Machine Learning Scientist, Quantitative UX Researcher, or in another similar role on a product team. Regardless of your exact background, we'll be looking for the following:
- Sufficient technical skills to design and implement scalable agent benchmarking workflows
- Experience with experimental design, measurement, and statistical evaluation
- Up-to-date knowledge of the latest advances in coding agents (with a dose of healthy skepticism about their current capabilities)
- Interest in developer tooling and opinions about developer ergonomics
- Familiarity with the use cases that Modal serves (generative AI inference, large-scale batch jobs, multi-node training, etc.)
- Strong communication skills and the ability to convey research insights to decision makers
- The ability to work in person from our New York (preferred) or San Francisco office
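Treating agent productivity as a measurement problem typically starts with putting uncertainty bounds on success rates. A minimal sketch, with hypothetical task counts, comparing two agent configurations via Wilson score intervals (not Modal's actual evaluation framework):

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score confidence interval for a success proportion."""
    if trials == 0:
        return (0.0, 0.0)
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = p + z**2 / (2 * trials)
    margin = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return ((centre - margin) / denom, (centre + margin) / denom)

def intervals_overlap(a, b):
    """Overlapping intervals mean the data can't yet separate the configs."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical benchmark: config A solved 42/50 tasks, config B solved 28/50.
a = wilson_interval(42, 50)
b = wilson_interval(28, 50)
print(intervals_overlap(a, b))  # → False: the gap looks real at this sample size
```

The Wilson interval behaves better than the naive normal approximation near 0% and 100% success, which matters when agents either breeze through or consistently fail a task suite.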
2026-03-04 11:44
Forward Deployed Engineer - ML
Modal · 51-100 employees · Sweden · Full-time · Remote: No
The Role
We're looking for Forward Deployed ML Engineers who want to work at the intersection of deep technical work and direct customer impact. As an ML FDE, you'll partner with leading AI companies and foundation model labs to help them achieve state-of-the-art performance on their most demanding workloads: LLM serving, model training (SFT, RLHF), audio pipelines, scientific computing, and more. You're helping teams reach outcomes most engineers can't on their own.

The FDE team today includes world-class software engineers, computational scientists, ML engineers, and former founders. We're looking for people with strong engineering fundamentals, deep curiosity across the AI stack, and energy for working directly with customers on hard problems.

You will:
- Work hands-on with companies like Suno, Lovable, Cognition, and Meta to architect and optimize production AI workloads on Modal
- Contribute to open-source projects (members of the team are active contributors to SGLang) and publish technical content that demonstrates Modal's capabilities across the AI stack
- Collaborate with Modal's product and sales teams, contributing to the platform as both an engineer and a product stakeholder
- Build trusted relationships with technical leaders (CTOs, VPs of Engineering, ML leads) at companies doing frontier AI work
- Conduct technical demos, experiments, and proofs-of-concept that make Modal's performance advantages tangible

Requirements
- 2+ years of professional ML engineering experience, ideally with hands-on work in inference optimization, model training, GPU programming, or ML infrastructure
- Familiarity with the serving (e.g., vLLM, SGLang) and training (e.g., slime, verl, TRL) toolchains; you don't need all of these, but you should be able to go deep on at least one
- Strong communicator who can go deep on technical architecture with an engineering team and clearly articulate tradeoffs to technical leadership
- Genuine interest in working directly with customers: you find it energizing to understand someone else's problem and help them solve it
- Bonus: side projects, open-source contributions, or published work you're proud of in ML or systems performance
- Willing to work in person in Stockholm
2026-03-04 11:44
Research Product Manager — Structured AI Systems
Granica · 11-50 employees · $160,000 – $250,000 · United States · Full-time · Remote: No
About Granica
Granica is an AI research and infrastructure company focused on reliable, steerable representations for enterprise data. We earn trust through Crunch, a policy-driven health layer that keeps large tabular datasets efficient, reliable, and reversible. On this foundation, we're building Large Tabular Models: systems that learn cross-column and relational structure to deliver trustworthy answers and automation with built-in provenance and governance.

Research Product Manager — Structured AI Systems & Economic Extraction
Location: Downtown Mountain View, CA (office-based, five days a week)
Team: Research & Applied Systems

The Mission
Granica's Research team is advancing foundational work in:
- Tabular data learning and large tabular models
- Structured and relational representation learning
- Compression-aware and efficiency-driven AI
- Hybrid symbolic, relational, and neural systems
- The intersection of information theory, learning theory, and large-scale systems

These efforts are tightly coupled with real production systems operating over petabytes of enterprise data. The mission of the Research Product Manager is to ensure this work moves forward coherently, efficiently, and at scale, connecting people, ideas, compute, and systems so that breakthrough research becomes durable capability.

This role is not program management. It is for someone who can:
- Understand how large AI models are trained, deployed, and maintained in production systems
- Translate foundational modeling advances into economically valuable infrastructure
- Shape both the technical execution path and the economic strategy behind it

What This Role Actually Owns

1️⃣ Productionization of Structured AI Models
Work with Research and Systems teams to:
- Design how large tabular models are trained on Parquet / Iceberg / Delta data
- Define training infra requirements (data pipelines, distributed training, evaluation loops)
- Define inference architecture (batch vs. streaming, embedding materialization, retrieval)
- Define maintenance loops (retraining cadence, data drift detection, schema evolution)
- Understand storage/compute trade-offs in real systems

You must be able to reason about data layout, compute scheduling, model lifecycle, infrastructure bottlenecks, and evaluation pipelines.

2️⃣ Economic Value Extraction
Help define:
- Who the buyer is (infra teams, ML teams, data platform teams)
- Where economic value is unlocked (compression, compute savings, model accuracy, governance)
- How value is quantified (cost curves, workload modeling, infra substitution)
- How to convert research capability into revenue and durable platform advantage

This role requires strong intuition around enterprise infra economics.

3️⃣ Research → Durable System
You will:
- Identify which modeling advances are worth productionizing
- Kill research directions that lack economic or system viability
- Define integration paths into enterprise workloads
- Work directly with the Chief Research Scientist on research agenda prioritization

Required Background
You must have experience in at least one of the following, and ideally both:

A) Production AI Systems
- Implementing or PM'ing the deployment of large models in production
- Training infra / inference infra / model maintenance
- Operating over structured datasets (Parquet, columnar storage, data lakes)

B) Economic Platform Thinking
- Defining the buyer, pricing, ROI, and cost structure of AI infrastructure
- Converting modeling advantage into business value

This Role Is NOT
- A coordination-heavy research program manager
- A consumer AI personalization PM
- A pure academic researcher

Core Qualifications
- Background in computer science, AI, mathematics, physics, engineering, or a closely related field
- Comfort engaging deeply with researchers and engineers on complex technical topics

Strong Signals
- Experience working with or within a research lab (academic or industrial)
- Familiarity with modern AI research workflows, including experimentation, evaluation, and large-scale training
- Ability to abstract at a high level while also diving into details when needed
- Strong written and verbal communication, especially around technical progress and trade-offs

Bonus Experience
- Master's or PhD in a relevant technical field
- Publications or direct contributions to AI research (e.g., modeling, data, evals, systems, or related areas)
- Experience supporting research in structured data, tabular models, or system-aware ML
- Demonstrated ability to learn new technical domains quickly

Why This Role Matters
Granica is building foundational technology with a long horizon. The research happening here, particularly in structured and tabular AI, is aimed at reshaping how intelligence is built and applied across the global economy. As a Research Product Manager, you will:
- Enable breakthrough research to happen faster and land harder
- Help define how frontier ideas become real systems
- Play a central role in shaping the execution engine behind a generational research agenda

This role has real ownership, real influence, and a deep connection to the core of the company.

Location & Work Model
This role is office-based in Downtown Mountain View, five days a week. We believe close, in-person collaboration is essential for the kind of deep, cross-disciplinary research and execution this role requires.

Why Granica
- Fundamental Research Meets Enterprise Impact. Work at the intersection of science and engineering, turning foundational research into deployed systems serving enterprise workloads at exabyte scale.
- AI by Design. Build the infrastructure that defines how efficiently the world can create and apply intelligence.
- Real Ownership. Design primitives that will underpin the next decade of AI infrastructure.
- High-Trust Environment. Deep technical work, minimal bureaucracy, shared mission.
- Enduring Horizon. Backed by NEA, Bain Capital, and various luminaries from tech and business. We are building a generational company for decades, not quarters or a product cycle.

Compensation & Benefits
- Competitive salary, meaningful equity, and substantial bonus for top performers
- Flexible time off plus comprehensive health coverage for you and your family
- Support for research, publication, and deep technical exploration

At Granica, you will shape the fundamental infrastructure that makes intelligence itself efficient, structured, and enduring. Join us to build the foundational data systems that power the future of enterprise AI!
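One of the maintenance loops this posting names, data drift detection, can be sketched in a few lines. This is an illustrative standardized-mean check on made-up column values, not Granica's actual method:

```python
import statistics

def drift_score(reference, batch):
    """Standardized shift of the batch mean relative to the reference column."""
    mu = statistics.fmean(reference)
    sigma = statistics.pstdev(reference) or 1.0  # avoid dividing by zero
    return abs(statistics.fmean(batch) - mu) / sigma

def needs_retraining(reference, batch, threshold=1.0):
    """Flag a column for retraining when its mean drifts past the threshold."""
    return drift_score(reference, batch) > threshold

reference = [10.0, 11.0, 9.0, 10.5, 9.5]  # historical column values
stable    = [10.2, 9.8, 10.1]             # new batch, similar distribution
shifted   = [14.0, 15.0, 13.5]            # new batch after an upstream change
print(needs_retraining(reference, stable), needs_retraining(reference, shifted))
# → False True
```

Production systems would compare full distributions (e.g. per-column histograms) rather than means, but the shape of the loop (reference profile, new batch, threshold, retraining trigger) is the same.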
2026-03-04 8:44
Product Manager, AI Platform
Fluidstack · 51-100 employees · $180,000 – $250,000 · United States · Full-time · Remote: No
About Fluidstack
At Fluidstack, we're building the infrastructure for abundant intelligence. We partner with top AI labs, governments, and enterprises, including Mistral, Poolside, Black Forest Labs, Meta, and more, to unlock compute at the speed of light.

We're working with urgency to make AGI a reality. As such, our team is highly motivated and committed to delivering world-class infrastructure. We treat our customers' outcomes as our own, taking pride in the systems we build and the trust we earn. If you're motivated by purpose, obsessed with excellence, and ready to work very hard to accelerate the future of intelligence, join us in building what's next.

About the role
We're hiring a Product Manager to own our AI platform roadmap, including managed inference and agent platforms. You'll define how Fluidstack enables customers to deploy, scale, and optimize LLM inference workloads, from model serving and routing to agent orchestration and compound AI systems. This role requires balancing customer needs for low latency and high throughput with the operational realities of GPU utilization, cost efficiency, and platform reliability. You'll work across engineering, ML research, and go-to-market teams to position Fluidstack against inference-first competitors like Together AI, Fireworks, Baseten, Modal, and Replicate.

What you'll do
- Own the product strategy and roadmap for managed inference services, including model deployment, autoscaling, multi-LoRA serving, and inference optimization
- Define requirements for agent platform capabilities: structured outputs, function calling, memory primitives, tool integration, and multi-step reasoning workflows
- Drive decisions on which inference optimizations to prioritize: speculative decoding, continuous batching, KV cache management, quantization support, and custom kernel integration
- Partner with ML infrastructure engineers to design APIs, SDKs, and deployment workflows that support model fine-tuning, version management, and A/B testing
- Work with datacenter teams to optimize GPU allocation strategies, balancing dedicated vs. serverless deployments, cold start latency, and cost-per-token economics
- Analyze competitive offerings from Together AI (inference optimization stack), Fireworks (custom inference engine), Baseten (training-to-inference integration), and Modal (serverless architecture)
- Define pricing models that align with customer usage patterns (tokens, requests, GPU-hours) while maintaining healthy unit economics
- Conduct customer research to understand inference workload requirements: latency SLAs, throughput targets, model size constraints, and integration needs
- Translate customer feedback into feature specifications, including support for new model architectures, framework integrations (vLLM, TensorRT-LLM, TGI), and observability tooling
- Build go-to-market materials: reference architectures, performance benchmarks, cost calculators, and migration guides for customers moving from self-hosted or competing platforms

About you
- 5+ years of product management experience, with at least 3 years focused on AI/ML infrastructure, inference platforms, or developer tools
- Strong technical understanding of transformer architectures, inference optimization techniques, and production ML systems
- Experience building products for technical users deploying LLMs in production (ML engineers, research scientists, AI application developers)
- Track record of shipping features that improved inference latency, throughput, or cost efficiency, backed by quantitative metrics
- Deep familiarity with the inference ecosystem: serving frameworks (vLLM, TensorRT-LLM, TGI), model formats (GGUF, SafeTensors), and API standards (OpenAI-compatible endpoints)
- Understanding of GPU memory constraints, batching strategies, and the tradeoffs between latency-optimized vs. throughput-optimized serving
- Ability to translate complex technical concepts (speculative decoding, PagedAttention, multi-LoRA) into clear customer value propositions
- Experience conducting competitive analysis in the inference market, including pricing elasticity, feature differentiation, and customer acquisition patterns
- Comfortable working with engineering teams to debug performance bottlenecks, analyze profiling data, and prioritize kernel-level optimizations
- Bonus: experience with agent frameworks (LangChain, LlamaIndex, AutoGPT), compound AI patterns, or model fine-tuning workflows

Compensation
To provide greater transparency to candidates, we share base pay ranges for all US-based job postings. Our compensation package includes base salary, equity, benefits, and, for applicable roles, commission plans. Our cash compensation range for this role is $180,000 – $250,000. Final offers vary based on geography, candidate experience, relevant credentials, and other factors. Outstanding candidates may be eligible for adjusted terms plus meaningful equity. We are committed to pay equity and transparency.

Fluidstack is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Fluidstack will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

You will receive a confirmation email once your application has successfully been accepted. If there is an error with your submission and you did not receive a confirmation email, please email careers@fluidstack.io with your resume/CV, the role you've applied for, and the date you submitted your application; someone from our recruiting team will be in touch.
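The cost-per-token economics this role weighs can be made concrete with back-of-the-envelope arithmetic. A sketch with hypothetical GPU prices and throughput, not Fluidstack's actual numbers:

```python
def cost_per_million_tokens(gpu_hourly_usd, tokens_per_second, utilization):
    """Serving cost per one million output tokens for a single GPU replica.

    utilization is the fraction of wall-clock time the replica spends on
    billable work (batched requests vs. idle capacity or cold starts).
    """
    effective_tokens_per_hour = tokens_per_second * 3600 * utilization
    return gpu_hourly_usd / effective_tokens_per_hour * 1_000_000

# Hypothetical: a $2.50/hr GPU serving 2,000 tok/s at 60% utilization.
print(round(cost_per_million_tokens(2.50, 2000, 0.6), 4))  # → 0.5787
```

The same formula shows why continuous batching matters commercially: raising utilization from 60% to 90% cuts the cost per million tokens by a third without touching hardware price or model throughput.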
2026-03-04 7:59
Senior AI Researcher - Reinforcement Learning (f/m/d)
Aleph Alpha · 201-500 employees · Germany · Full-time · Remote: No
Our Mission
Aleph Alpha is one of the few companies in Europe with end-to-end in-house model development, including pre- and post-training. We're building models that have general-purpose capabilities but also specifically excel at addressing the needs of our customers. We're growing our post-training team in Heidelberg (or hybrid in Germany) and are looking for an AI Researcher who combines a deep theoretical understanding of reinforcement learning methods with a desire to improve on the state of the art and advance model capabilities in large-scale training.

Team Culture
At Aleph Alpha, we foster a culture built on ownership, autonomy, and empowerment. Teams and individual contributors are trusted to take responsibility for their work and drive meaningful impact. We maintain a flat organizational structure with efficient, supportive management that enables quick decision-making, open communication, and a strong sense of shared purpose.

About the Role
As a (senior) AI Researcher for reinforcement learning, you will shape and improve the underlying RL methodology, maintain a high-quality training codebase, and conduct large-scale experiments to hill-climb our performance benchmarks. This role is for you if you have both a strong theoretical background in RL and the engineering drive to bring these methods into production and improve on them as part of the reinforcement learning team. In your day-to-day you will conduct large-scale reinforcement learning experiments, derive hypotheses from the results, and iterate on both the implementation and the methodology based on your observations. Together with a collaborative team, you will have a direct impact on the models that we ship to our customers. This role is for Aleph Alpha Research GmbH.

Your Responsibilities
Hill-climb in large-scale training: Conduct large-scale LLM training runs, analyze evaluation scores in depth, propose hypotheses for improvement, and implement them directly to maximize performance on our benchmarks.
Theoretical innovation: Stay at the bleeding edge of RL research. You will identify, implement, and iterate on novel approaches to multi-turn reinforcement learning.
Scale our training infrastructure: Identify bottlenecks in our training setup and optimize our RL training loops for large-scale training.
Cross-functional collaboration: Partner with our other post-training teams to turn raw feedback into actionable training signals, ensuring that our RL iterations lead to measurable improvements in downstream performance.

Your Profile
Basic Qualifications
A deep understanding of reinforcement learning theory and how it relates to modern RL methods.
Experience with multi-node LLM training (ideally using RL). You understand how to scale multi-node RL training runs and can reason about and implement distributed algorithms.
Familiarity with statistical methods for evaluation and experiment design.
Ability to reason about what an evaluation or environment measures and whether it matters; not just running benchmarks, but understanding them.
Strong Python skills and comfort with ML tooling (especially torch.distributed).
Willingness to relocate to Heidelberg or travel regularly (potentially weekly).

Preferred Qualifications
PhD in reinforcement learning or equivalent research experience.
A history of contributions to top-tier venues (NeurIPS, ICML, ICLR, etc.), specifically on RL.
Experience evaluating LLMs and crafting environments for training.

Compensation and Benefits
Become part of an AI revolution!
30 days of paid vacation
Access to a variety of fitness & wellness offerings via Wellhub
Mental health support through nilo.health
Substantially subsidized company pension plan for your future security
Subsidized Germany-wide transportation ticket
Budget for additional technical equipment
Flexible working hours for better work-life balance and a hybrid working model
Virtual Stock Option Plan
JobRad® Bike Lease
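As an illustration of the kind of reward-shaping step this role iterates on, here is a minimal, hypothetical sketch of group-relative advantage normalization, the core ingredient in GRPO-style RL for LLMs. The function name and the reward values are illustrative assumptions, not Aleph Alpha's implementation.

```python
# Hypothetical sketch: group-relative advantage normalization, as used in
# GRPO-style RL for LLMs. Each prompt gets a group of sampled completions;
# each completion's advantage is its reward centered on the group mean and
# scaled by the group's standard deviation.
from statistics import mean, stdev

def group_relative_advantages(rewards, eps=1e-6):
    """Center each reward on the group mean and scale by the group std."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four completions for one prompt, scored 0/1 by a verifier (toy rewards).
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
# Above-average completions get positive advantage, the rest negative,
# and the advantages sum to (approximately) zero within each group.
```

The appeal of this normalization is that it needs no learned value model: the group itself serves as the baseline, which keeps the training loop simple at scale.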
No items found.
2026-03-04 5:59
Research Scientist (Measurement and Evaluation)
Abridge
201-500
$220,000 – $280,000
United States
Full-time
Remote
false
About Abridge
Abridge was founded in 2018 with the mission of powering deeper understanding in healthcare. Our AI-powered platform was purpose-built for medical conversations, improving clinical documentation efficiency while enabling clinicians to focus on what matters most: their patients. Our enterprise-grade technology transforms patient-clinician conversations into structured clinical notes in real time, with deep EMR integrations. Powered by Linked Evidence and our purpose-built, auditable AI, we are the only company that maps AI-generated summaries to ground truth, helping providers quickly trust and verify the output. As pioneers in generative AI for healthcare, we are setting the industry standards for the responsible deployment of AI across health systems. We are a growing team of practicing MDs, AI scientists, PhDs, creatives, technologists, and engineers working together to empower people and make care make more sense. We have offices located in the Mission District in San Francisco, the SoHo neighborhood of New York, and East Liberty in Pittsburgh.

The Role
Abridge is hiring Research Scientists to join our Strategic Research team to rigorously evaluate and advance the real-world impact of ambient AI on patient outcomes, care quality, and provider experience. In this role, you will design and lead empirical studies of Abridge models and products in partnership with health systems, leveraging large-scale clinical conversation data to generate new insights about care delivery, documentation quality, clinical decision-making, and downstream patient outcomes. You will operationalize complex constructs, such as quality of care, safety, cognitive burden, and return on investment, using principled measurement frameworks and rigorous experimental or quasi-experimental methods. Working closely with our science and product teams, you will also develop evaluation frameworks that inform model development and product strategy. Your work will contribute to a broader scientific understanding of how AI systems affect patients and providers in the real world. This role sits at the intersection of methodological innovation and practical impact, applying serious measurement science to systems that directly shape patient care.

About Strategic Research
The Strategic Research team at Abridge has two primary functions: (i) designing and conducting rigorous research studies investigating the impact of ambient AI as an intervention in partnership with collaborating health systems; and (ii) supporting external research efforts that leverage Abridge data. In addition to driving and supporting empirical studies of the impact of ambient AI-enabled technologies, the team works closely with our science and engineering teams on core model evaluation. The common thread through all our work is ensuring that every partner-facing research initiative meets the highest standards of rigour, credibility, and strategic value.

What You'll Do
Design and conduct evaluations of Abridge models and products
Engage with external researchers and other stakeholders on designing and conducting research on ambient AI and research that leverages Abridge data
Develop a strong user-centric and patient-centric mindset, grounding the research in empathy for the real-world experience of providers and patients
Collaborate across our cross-functional product teams to ensure the research is deeply informed by current practices and our product roadmap
Write technical reports and give presentations to internal and external stakeholders
Actively contribute to the wider research community by publishing original research in leading peer-reviewed venues
Mentor research interns

What You'll Bring
PhD in statistics, biostatistics, computer science, economics, information systems, clinical informatics, or a related field.
Expertise in rigorous quantitative or mixed-methods approaches for conducting evaluations using observational and experimental data.
Strong research track record in evaluation and measurement, as evidenced by high-impact publications in peer-reviewed journals or conferences.
A problem-before-method mindset. You do not change the question to make it amenable to simple analysis, but instead push the methodological frontier to solve the real-world problems that matter to health systems, providers, and patients.
A curious, adaptable, and proactive mindset, with a desire to learn and grow as a researcher in a fast-paced startup environment.
Passion for and understanding of Abridge's mission.
Willingness to work from our New York City office at least three times per week.

This position requires a commitment to a hybrid work model, with the expectation of coming into the office a minimum of three (3) times per week. Relocation assistance is available for candidates willing to move to New York City. We value people who want to learn new things, and we know that great team members might not perfectly match a job description. If you're interested in the role but aren't sure whether you're a good fit, we'd still like to hear from you.

Why Work at Abridge?
At Abridge, we're transforming healthcare delivery experiences with generative AI, enabling clinicians and patients to connect in deeper, more meaningful ways. Our mission is clear: to power deeper understanding in healthcare. We're driving real, lasting change, with millions of medical conversations processed each month. Joining Abridge means stepping into a fast-paced, high-growth startup where your contributions truly make a difference. Our culture requires extreme ownership; every employee has the ability (and is expected) to make an impact on our customers and our business. Beyond individual impact, you will have the opportunity to work alongside a team of curious, high-achieving people in a supportive environment where success is shared, growth is constant, and feedback fuels progress. At Abridge, it's not just what we do, it's how we do it. Every decision is rooted in empathy, always prioritizing the needs of clinicians and patients. We're committed to supporting your growth, both professionally and personally. Whether it's flexible work hours, an inclusive culture, or ongoing learning opportunities, we are here to help you thrive and do the best work of your life. If you are ready to make a meaningful impact alongside passionate people who care deeply about what they do, Abridge is the place for you.
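For a flavor of the quasi-experimental methods this role applies, here is a minimal, hypothetical difference-in-differences sketch. The scenario and the numbers are invented for illustration, not Abridge data.

```python
# Hypothetical sketch: a difference-in-differences (DiD) estimate, one of the
# quasi-experimental designs used to evaluate an intervention, e.g. rolling
# out an ambient-AI tool to some clinics (treated) but not others (control).
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD = (change in treated group) - (change in control group)."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Toy outcome: average documentation minutes per encounter, before and after
# rollout. The control group's change nets out secular trends.
effect = diff_in_diff(treat_pre=12.0, treat_post=8.0,
                      ctrl_pre=11.5, ctrl_post=11.0)
# effect = -3.5: treated clinics improved 3.5 minutes more than controls.
```

The design choice here is the key one named in the posting: rather than comparing treated clinics before and after (which confounds the intervention with everything else that changed), the control group's trend is subtracted out.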
How we take care of Abridgers
Generous Time Off: 14 paid holidays, flexible PTO for salaried employees, and accrued time off for hourly employees
Comprehensive Health Plans: Medical, dental, and vision coverage for all full-time employees and their families
Generous HSA Contribution: If you choose a High Deductible Health Plan, Abridge makes monthly contributions to your HSA
Paid Parental Leave: Generous paid parental leave for all full-time employees
Family Forming Benefits: Resources and financial support to help you build your family
401(k) Matching: Contribution matching to help invest in your future
Personal Device Allowance: Tax-free funds for personal device usage
Pre-tax Benefits: Access to Flexible Spending Accounts (FSA) and Commuter Benefits
Lifestyle Wallet: Monthly contributions for fitness, professional development, coworking, and more
Mental Health Support: Dedicated access to therapy and coaching to help you reach your goals
Sabbatical Leave: Paid sabbatical leave after 5 years of employment
Compensation and Equity: Competitive compensation and equity grants for full-time employees
...and much more!

Equal Opportunity Employer
Abridge is an equal opportunity employer and considers all qualified applicants equally without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, or disability.

Staying safe: protect yourself from recruitment fraud
We are aware of individuals and entities fraudulently representing themselves as Abridge recruiters and/or hiring managers. Abridge will never ask for financial information or payment, or for personal information such as a bank account number or social security number, during the job application or interview process. Any emails from the Abridge recruiting team will come from an @abridge.com email address. Please exercise caution and cease communications if something feels suspicious about your interactions.
No items found.
2026-03-04 5:29
Director, Forward Deployed Engineering
Harvey
501-1000
$320,000 – $360,000
United States
Full-time
Remote
false
Why Harvey
At Harvey, we're transforming how legal and professional services operate — not incrementally, but end-to-end. By combining frontier agentic AI, an enterprise-grade platform, and deep domain expertise, we're reshaping how critical knowledge work gets done for decades to come. This is a rare chance to help build a generational company at a true inflection point. With 1,000+ customers in 58+ countries, strong product-market fit, and world-class investor support, we're scaling fast and defining a new category in real time. The work is ambitious, the bar is high, and the opportunity for growth — personal, professional, and financial — is unmatched. Our team is sharp, motivated, and deeply committed to the mission. We move fast, operate with intensity, and take real ownership of the problems we tackle — from early thinking to long-term outcomes. We stay close to our customers — from leadership to engineers — and work together to solve real problems with urgency and care. If you thrive in ambiguity, push for excellence, and want to help shape the future of work alongside others who raise the bar, we invite you to build with us. At Harvey, the future of professional services is being written today — and we're just getting started.

Role Overview
Harvey is building a Forward Deployed Engineering program to deliver a white-glove, tailored experience for our most strategically important accounts. As Director of Forward Deployed Engineering, you will own that program end-to-end: building the team, defining the operating model, and ensuring Harvey's top accounts feel like Harvey works exclusively for them. This is a rare opportunity to sit at the intersection of engineering leadership, enterprise client strategy, and product influence. You'll lead a team of software engineers working directly with clients, while partnering closely with legal engineering, Sales, CS, and Product to shape what gets built and for whom. This is not a product function. The job's primary goal is to make Harvey's most valuable customers wildly successful, while also influencing the product roadmap with direct customer feedback.

What You'll Do
Team Leadership
Build, hire, and manage a team of software engineers and managers deployed into strategic accounts
Define staffing models, engagement structures, and capacity allocation across active and prospective accounts
Develop specialist pods of engineers for new verticals (M&A, litigation, fund formation, compliance, etc.) that can be drawn on across engagements
Set and uphold quality standards for client deliverables, documentation, and knowledge transfer

Technical Execution
Maintain deep technical fluency to scope custom builds accurately, unblock engineering decisions, and evaluate the quality of delivered solutions
Oversee the design and implementation of tailored workflows, retrieval systems, agent tools, and knowledge sources built on Harvey's platform
Ensure solutions are built to be operationalized, with evaluations, documentation, and user training

Product Influence
Identify patterns across client engagements that signal gaps or opportunities in Harvey's core platform
Bring field signal to product and engineering leadership with specificity: what clients need, how often, and what it would take to generalize

What You Have
10+ years of experience in software engineering, with at least 5 years leading engineering teams (bonus if in customer-facing contexts)
Deep familiarity with LLM application development: retrieval-augmented generation, agent architectures, structured outputs, and evaluation design
Experience building and scaling technical teams: hiring, developing, and retaining engineers across specializations
Exceptional communication skills; able to translate complex technical work into clear language for both engineers and C-suite clients
Low ego, high accountability; you're as comfortable rolling up your sleeves on a client problem as you are presenting to the board

Nice to Have
Prior experience building products for the legal, asset management, banking, or insurance sectors
Familiarity with enterprise legal workflows: document review, contract analysis, compliance, M&A diligence

Why This Role
Harvey's most strategic accounts — the firms and in-house teams that set the standard for the rest of the industry — deserve more than a great product. They deserve a team that shows up for them. As Director of Forward Deployed Engineering, you'll build that team, define what excellence looks like, and make Harvey indispensable to the companies that matter most.

Compensation
$320,000 – $360,000 USD

Harvey is an equal opportunity employer and does not discriminate on the basis of race, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition, or any other basis protected by law. We are committed to providing reasonable accommodations to applicants with disabilities; requests can be made by emailing accommodations@harvey.ai
No items found.
2026-03-04 3:29
Emulation Engineer, Automotive Robotics
Tenstorrent
1001-5000
$100,000 – $500,000
Germany
Full-time
Remote
false
Tenstorrent is leading the industry in cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch, and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.

Tenstorrent is seeking an ASIC Networking Engineer to help define and build next-generation CPU networking architecture for both datacenter and emerging robotics/automotive applications. You will contribute to our current datacenter networking efforts while also helping to seed and specify future medium- to low-power robotics/automotive devices for AI/ML compute and sensor ingest. The initial focus will be datacenter networking, with robotics as the first target within the automotive/robotics space.
This role is remote, based out of North America.
We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.
Who You Are
You thrive while navigating multiple priorities and ambiguous, evolving requirements.
You have knowledge of Ethernet network architecture and how performance is modeled.
You have experience with die-to-die interfaces and understand associated protocols and design tradeoffs.
You understand Ethernet networking concepts and how they map onto on-chip and off-chip fabrics.
You have experience with datacenter scale-up architectures such as UALink, NVLink, and Broadcom SUE.
You have experience with scale-out RDMA protocols such as RoCE and InfiniBand.
You have experience working on safety (diagnostic and fault coverage) within the RTL design process.
What We Need
A network ASIC designer who can contribute to both datacenter networking and early-stage automotive/robotics scoping and specifications.
Someone comfortable working at the intersection of NoC, performance modeling, and RTL design to guide architectural decisions.
An engineer who can collaborate across hardware, software, and systems teams to define and refine networking requirements.
A contributor who can help drive forward next-generation CPU networking architecture for AI/ML workloads.
What You Will Learn
How to build next-generation CPU networking architectures for both high-performance datacenter and constrained robotics/automotive environments.
How to help drive forward next-generation robotics-focused CPUs for AI/ML compute with rich sensor ingestion.
How to work at the intersection of NoC design, performance modeling, and RTL to close the loop between architecture and implementation.
How to take an early-stage concept (automotive/robotics networking) from seeding and specification through to project initiation.
Compensation for all engineers at Tenstorrent ranges from $100k - $500k including base and variable compensation targets. Experience, skills, education, background and location all impact the actual offer made.
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E1, and E2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.
No items found.
2026-03-03 18:59
Field Application Engineer, Automotive Robotics
Tenstorrent
1001-5000
$100,000 – $500,000
Germany
Full-time
Remote
false
Tenstorrent is leading the industry in cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch, and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.

Tenstorrent is seeking an ASIC Networking Engineer to help define and build next-generation CPU networking architecture for both datacenter and emerging robotics/automotive applications. You will contribute to our current datacenter networking efforts while also helping to seed and specify future medium- to low-power robotics/automotive devices for AI/ML compute and sensor ingest. The initial focus will be datacenter networking, with robotics as the first target within the automotive/robotics space.
This role is remote, based out of North America.
We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.
Who You Are
You thrive while navigating multiple priorities and ambiguous, evolving requirements.
You have knowledge of Ethernet network architecture and how performance is modeled.
You have experience with die-to-die interfaces and understand associated protocols and design tradeoffs.
You understand Ethernet networking concepts and how they map onto on-chip and off-chip fabrics.
You have experience with datacenter scale-up architectures such as UALink, NVLink, and Broadcom SUE.
You have experience with scale-out RDMA protocols such as RoCE and InfiniBand.
You have experience working on safety (diagnostic and fault coverage) within the RTL design process.
What We Need
A network ASIC designer who can contribute to both datacenter networking and early-stage automotive/robotics scoping and specifications.
Someone comfortable working at the intersection of NoC, performance modeling, and RTL design to guide architectural decisions.
An engineer who can collaborate across hardware, software, and systems teams to define and refine networking requirements.
A contributor who can help drive forward next-generation CPU networking architecture for AI/ML workloads.
What You Will Learn
How to build next-generation CPU networking architectures for both high-performance datacenter and constrained robotics/automotive environments.
How to help drive forward next-generation robotics-focused CPUs for AI/ML compute with rich sensor ingestion.
How to work at the intersection of NoC design, performance modeling, and RTL to close the loop between architecture and implementation.
How to take an early-stage concept (automotive/robotics networking) from seeding and specification through to project initiation.
Compensation for all engineers at Tenstorrent ranges from $100k - $500k including base and variable compensation targets. Experience, skills, education, background and location all impact the actual offer made.
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E1, and E2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.
No items found.
2026-03-03 18:59
Sr Staff Engineer, CPU System Microarchitect
Tenstorrent
1001-5000
$100,000 – $500,000
India
Full-time
Remote
false
Tenstorrent is leading the industry in cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch, and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.

Tenstorrent is seeking an ASIC Networking Engineer to help define and build next-generation CPU networking architecture for both datacenter and emerging robotics/automotive applications. You will contribute to our current datacenter networking efforts while also helping to seed and specify future medium- to low-power robotics/automotive devices for AI/ML compute and sensor ingest. The initial focus will be datacenter networking, with robotics as the first target within the automotive/robotics space.
This role is remote, based out of North America.
We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.
Who You Are
You thrive while navigating multiple priorities and ambiguous, evolving requirements.
You have knowledge of Ethernet network architecture and how performance is modeled.
You have experience with die-to-die interfaces and understand associated protocols and design tradeoffs.
You understand Ethernet networking concepts and how they map onto on-chip and off-chip fabrics.
You have experience with datacenter scale-up architectures such as UALink, NVLink, and Broadcom SUE.
You have experience with scale-out RDMA protocols such as RoCE and InfiniBand.
You have experience working on safety (diagnostic and fault coverage) within the RTL design process.
What We Need
A network ASIC designer who can contribute to both datacenter networking and early-stage automotive/robotics scoping and specifications.
Someone comfortable working at the intersection of NoC, performance modeling, and RTL design to guide architectural decisions.
An engineer who can collaborate across hardware, software, and systems teams to define and refine networking requirements.
A contributor who can help drive forward next-generation CPU networking architecture for AI/ML workloads.
What You Will Learn
How to build next-generation CPU networking architectures for both high-performance datacenter and constrained robotics/automotive environments.
How to help drive forward next-generation robotics-focused CPUs for AI/ML compute with rich sensor ingestion.
How to work at the intersection of NoC design, performance modeling, and RTL to close the loop between architecture and implementation.
How to take an early-stage concept (automotive/robotics networking) from seeding and specification through to project initiation.
Compensation for all engineers at Tenstorrent ranges from $100k - $500k including base and variable compensation targets. Experience, skills, education, background and location all impact the actual offer made.
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E1, and E2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.
No items found.
2026-03-03 18:59
Solutions Engineer (Autonomous Vehicles & Robotics)
Encord
101-200
United States
Full-time
Remote
false
About us
Encord is the universal data layer for AI that helps 300+ AI teams train and run models on the right data. Our platform indexes, curates, annotates, and evaluates data across the full AI lifecycle, from development through production. Trusted by Woven by Toyota, AXA, UiPath, Zipline, and more.
We're an ambitious team of 100+ working at the frontier of AI and have raised $60M in Series C funding from Wellington Management, CRV, Next47 and Y Combinator.
The role
As a Solutions Engineer at Encord, you'll be the core technical expert for customers building autonomous vehicles, robotics, and physical AI solutions. Your expertise in LiDAR data, sensor fusion, and perception will be critical as you architect data solutions for prospects at the cutting edge of autonomous systems.
You'll partner with Account Executives to drive technical wins while establishing Encord as the definitive platform for managing multimodal sensor datasets. This role combines deep LiDAR technical expertise with customer-facing impact.
What you'll do
Lead technical discovery with perception teams working on autonomous systems, understanding their sensor stack, model development pipelines, and data challenges
Architect complete solutions for complex multimodal datasets (LiDAR + camera + radar fusion, sensor calibration)
Act as technical authority on how Encord handles 3D point clouds, sensor fusion, temporal sequences, and multimodal annotation
Build bespoke POCs for LiDAR data ingestion, point cloud processing, coordinate transformations, and sensor calibration
Develop custom integrations with robotics/AV stacks (MCAP, ROS, Apollo, Autoware)
Create technical demos showcasing LiDAR annotation, 3D bounding boxes, semantic segmentation, and multi-sensor fusion
Debug complex issues involving point cloud rendering, sensor calibration matrices, and multimodal data synchronization
Guide prospects through technical evaluations of LiDAR formats, sensor configurations, and annotation requirements
Provide expert consultation on 3D annotation best practices, coordinate conventions, and quality control workflows
Partner with Account Executives to co-own technical wins in enterprise sales cycles
Translate technical capabilities into business value for CTOs and senior stakeholders
Channel customer feedback to Product and Engineering teams to shape our roadmap
Who we're looking for
A creative problem-solver with a hacker mindset who builds robust scripts and integrations quickly
An excellent communicator, comfortable engaging both perception engineers and executive stakeholders
Customer-obsessed and passionate about solving complex technical problems
Deep empathy for autonomous systems engineers managing massive 3D datasets
Experience requirements
1+ years working with LiDAR data in autonomous vehicles, robotics, or physical AI applications
Strong Python programming or other scripting-language proficiency
3D perception knowledge — point cloud processing, 3D object detection, semantic segmentation, SLAM
Autonomous systems experience with ROS, sensor calibration, coordinate transformations, and multimodal sensor integration
Prior experience in a customer-facing technical role (Solutions Engineering, Technical Account Management, or similar)
Expert in LiDAR data formats (PCD, LAS, PLY, etc.) and point cloud processing
Understanding of sensor calibration, coordinate transformations, and sensor fusion
Knowledge of ML frameworks, model development processes, and perception model requirements
Experience with cloud infrastructure (AWS, GCP, Azure), APIs, and SaaS platforms
Why Encord
Competitive salary, commission, and meaningful equity in a high-growth startup
Clear, accelerated growth opportunities as the company scales rapidly
Strong in-person culture: 3–5 days/week in our newly launched North Beach loft office
Flexible PTO to fully recharge
18 paid vacation days in the U.S. plus federal holidays
Annual learning & development budget
Comprehensive health, dental, and vision coverage
Frequent travel opportunities across the U.S., London, and Europe
Bi-annual company offsites, twice-weekly team lunches, and monthly socials
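The coordinate transformations mentioned above (for example, projecting LiDAR points into a camera frame) typically reduce to applying a 4×4 homogeneous extrinsic matrix obtained from sensor calibration. A minimal sketch in plain Python; the matrix values here are illustrative, not from any real calibration:

```python
# Transform LiDAR points into the camera frame using a 4x4 extrinsic matrix.
# The matrix below is a made-up example: a 90-degree yaw plus a translation.

def transform_points(points, extrinsic):
    """Apply a 4x4 homogeneous transform to a list of (x, y, z) points."""
    out = []
    for x, y, z in points:
        p = (x, y, z, 1.0)  # homogeneous coordinates
        out.append(tuple(
            sum(extrinsic[r][c] * p[c] for c in range(4)) for r in range(3)
        ))
    return out

# Illustrative LiDAR->camera extrinsic: rotate 90 deg about Z, shift 0.5 m in x.
LIDAR_TO_CAM = [
    [0.0, -1.0, 0.0, 0.5],
    [1.0,  0.0, 0.0, 0.0],
    [0.0,  0.0, 1.0, 0.0],
    [0.0,  0.0, 0.0, 1.0],
]

points = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
print(transform_points(points, LIDAR_TO_CAM))
# [(0.5, 1.0, 0.0), (-1.5, 0.0, 0.0)]
```

In practice this is done with NumPy over whole point clouds, and the calibration matrix comes from the vehicle's sensor calibration pipeline rather than being hand-written.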
2026-03-03 14:29
Helix AI Engineer, Agentic Systems
Figure AI
201-500
$150,000 – $350,000
United States
Full-time
Remote
false
Figure is an AI robotics company developing autonomous general-purpose humanoid robots. The goal of the company is to ship humanoid robots with human level intelligence. Its robots are engineered to perform a variety of tasks in the home and commercial markets. Figure is headquartered in San Jose, CA.
Figure’s vision is to deploy autonomous humanoids at a global scale. Our Helix team is looking for an experienced Training Infrastructure Engineer to take our infrastructure to the next level. This role is focused on managing the training cluster and implementing distributed training algorithms, data loaders, and developer tools for AI researchers. The ideal candidate has experience building tools and infrastructure for a large-scale deep learning system.
Responsibilities
Design, deploy, and maintain Figure's training clusters
Architect and maintain scalable deep learning frameworks for training on massive robot datasets
Work together with AI researchers to implement training of new model architectures at a large scale
Implement distributed training and parallelization strategies to reduce model development cycles
Implement tooling for data processing, model experimentation, and continuous integration
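The distributed-training responsibilities above center on one core idea: in data-parallel training, each worker computes gradients on its own data shard, and those gradients are averaged (all-reduced) before every optimizer step so all replicas stay in sync. A toy pure-Python illustration of the averaging step; real systems use framework collectives such as PyTorch DDP over NCCL, and the numbers here are invented:

```python
# Toy data-parallel all-reduce: average per-worker gradients so every
# worker applies the same update, as frameworks like PyTorch DDP do.

def allreduce_mean(worker_grads):
    """Average a list of gradient vectors element-wise across workers."""
    n = len(worker_grads)
    return [sum(g[i] for g in worker_grads) / n
            for i in range(len(worker_grads[0]))]

# Each worker computed gradients on its own data shard (made-up values).
grads = [
    [0.25, -0.5,  1.0],   # worker 0
    [0.75,  0.0,  1.0],   # worker 1
    [0.5,  -0.25, 1.0],   # worker 2
]
avg = allreduce_mean(grads)
print(avg)  # [0.5, -0.25, 1.0]
```

The engineering work in a role like this is largely about making this step fast and fault-tolerant at cluster scale, not about the arithmetic itself.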
Requirements
Strong software engineering fundamentals
Bachelor's or Master's degree in Computer Science, Robotics, Engineering, or a related field
Experience with Python and PyTorch
Experience managing HPC clusters for deep neural network training
Minimum of 4 years of professional, full-time experience building reliable backend systems
Bonus Qualifications
Experience managing cloud infrastructure (AWS, Azure, GCP)
Experience with job scheduling / orchestration tools (SLURM, Kubernetes, LSF, etc.)
Experience with configuration management tools (Ansible, Terraform, Puppet, Chef, etc.)
The US base salary range for this full-time position is between $150,000 and $350,000 annually.
The pay offered for this position may vary based on several individual factors, including job-related knowledge, skills, and experience. The total compensation package may also include additional components/benefits depending on the specific role. This information will be shared if an employment offer is extended.
2026-03-03 9:44
AI Platform Architect
Notable
201-500
$117,500 – $168,000
United States
Full-time
Remote
false
Notable is the leading healthcare AI platform for transforming workforce productivity. Health systems, hospitals, and payers use Notable to improve healthcare quality, close gaps in patient care, and drive member enrollment, patient acquisition, retention, and reimbursement, scaling growth without hiring more staff.
We are on a mission to improve the lives of patients, staff, and clinicians - to improve healthcare for humanity. This isn't just a lofty goal - it's something we're achieving every single day. When you join Notable, you become part of a force actively transforming healthcare. Our aim to impact 100 million patients isn't just a number; it's a commitment to creating meaningful change on a massive scale.
Therefore, our culture is purposeful in pursuit of this mission. We believe our culture gives each person the opportunity to do the best work of their lives, work with the best teammates, and have fun achieving great things together.
Role Summary
The AI Platform Architect plays a critical role in designing, scoping, and implementing complex healthcare AI workflows on the Notable platform. This role sits at the intersection of healthcare operations, AI workflow design, data architecture, enterprise integration/implementation, data orchestration, and change management.
You will partner closely with clients and internal cross-functional teams to translate operational challenges into scalable AI-driven solutions. You will be responsible for designing and architecting end-to-end AI flows that leverage multiple healthcare data models — including structured, semi-structured, and unstructured data — while ensuring workflows are secure, reliable, scalable, and aligned with real-world clinical and administrative processes.
Notable's AI Platform Architects are responsible for flow discovery, design, and architecture: gathering customer requirements, validating scope with an eye toward speed-to-value, building and configuring flows in Flow Builder, partnering with integrations to build required connections, testing internally and externally, and training customers on using the platform while facilitating change management. At Notable, we are setting the bar for this emerging industry role. Our architects are problem solvers with a strong analytical mindset and a passion for partnering with healthcare leaders to drive business transformation. They demonstrate a deep understanding of both the healthcare industry and the Flow Builder platform, use AI frameworks to balance efficiency, scalability, and value, and translate technical concepts to non-technical audiences while leveraging LLMs to create workflow efficiencies.
What You'll Do
AI Workflow Architecture & Design
Design, scope, and architect end-to-end healthcare AI workflows utilizing the Notable platform and Flow Builder.
Translate business and operational requirements into scalable AI flow architectures that are grounded in customer context and Notable best practices.
Build intelligent automation flows that incorporate:
AI orchestration across sub-flows and agents
Decision logic and routing based on clinical and operational rules
Data transformation across heterogeneous healthcare data models
Human-in-the-loop workflows for exception handling, QA, and escalation
Define and standardize workflow patterns that balance automation, accuracy, safety, and compliance.
Recommend flow design choices based on patterns from similar organizations and clearly tie those recommendations to expected impact and measurable outcomes.
Socialize designs with key customer decision-makers and internal stakeholders to ensure alignment, safety, and adherence to Notable's best practices.
Healthcare Data Modeling & Integration
Architect solutions that leverage multiple healthcare data models, including:
HL7
EHR-native data objects (e.g., Cerner/Oracle Millennium, Epic)
API-based and event-driven integrations (REST, webhooks, messaging, FHIR)
Design workflows that operate across:
Structured data (demographics, orders, encounters, appointments, coverage)
Semi-structured data (forms, questionnaires, intake packets)
Unstructured data (documents, faxes, clinical notes, pathology reports)
Ensure accurate and maintainable data mapping, normalization, and enrichment to support downstream AI and automation.
Collaborate with Integration Specialists and customer IT teams to validate end-to-end data flows, error handling, and observability.
Implementation & Delivery Leadership
Lead technical scoping sessions with customers to define:
Workflow scope, constraints, and dependencies
Integration requirements and data contracts
Data sources, ownership, and quality considerations
Success metrics, baselines, and value levers
Independently implement flows with customers via Flow Builder by:
Defining and managing scope
Setting appropriate expectations with cross-functional and customer stakeholders
Influencing customer counterparts to achieve target outcomes
Proactively build and own project plans for your flows:
Run project meetings and working sessions
Provide clear, regular status updates to internal and external teams
Maintain shared accountability for achieving target outcomes and timelines
Develop and execute rigorous testing plans, including unit, integration, and UAT workflows, ensuring flows are vetted and signed off prior to launch.
Serve as the technical authority for assigned implementations, ensuring:
Accurate data mapping and field-level validation
Reliable, observable flow execution in production
Performance and scale readiness, with attention to cost and utilization
Proactively escalate deployment risks or blockers, propose actionable recommendations, and drive issues through to resolution in partnership with Product, Engineering, Integrations, and Support & Maintenance.
As implementations complete, facilitate the transition to steady-state ownership alongside Customer Success and Support & Maintenance, ensuring clear documentation, runbooks, and success criteria.
Product Feedback & Platform Evolution
Provide structured feedback to Product, Engineering, and Integrations on:
Platform and integration capability gaps
Patterns that significantly improve customer outcomes
Areas where Notable can widen its advantage versus competitors
Capabilities that are harder/easier to deploy in real-world environments
Competitive solutions and market signals encountered in the field
Help shape reusable templates, patterns, and reference architectures that accelerate future implementations and Builder-led projects.
You're a Great Fit If You:
Thrive on technical challenges and enjoy pushing the boundaries of applied healthcare AI.
Demonstrate strong product intuition — you understand how what you build will impact patients, staff, outcomes, and long-term growth.
Are energized by bridging the gap between technical execution and strategic vision, and can clearly connect architecture decisions to business value.
Inspire innovation and experimentation in the partners, Builders, and leaders you work with.
Are comfortable operating in complex healthcare environments, engaging with clinical, operational, and IT stakeholders.
See yourself as an entrepreneur who likes to build and solve challenges others have yet to master.
Communicate clearly and confidently with a wide range of audiences, from engineers and analysts to executives and front-line operators.
What We're Looking For
3–5+ years of experience in one or more of the following:
Enterprise software workflow design and configuration
Healthcare IT, health system operations, or healthcare analytics
AI/automation platforms, low-code tools, or integration platforms
Experience working in a dynamic, collaborative, fast-paced environment where you can operate autonomously and own outcomes end-to-end.
A self-driven thought partner with the ability to:
Think critically and logically about complex systems
Break down ambiguous problems into structured, solvable components
Root-cause issues and identify testable assumptions
Hands-on experience configuring workflows using a low-code or admin console, and leveraging emerging technologies (LLMs, prompt engineering, retrieval, etc.) to triage issues and develop solutions.
Experience with healthcare data and integrations, such as:
HL7 v2 interfaces (ADT, SIU, ORM, ORU)
FHIR APIs
EHR ecosystems
API-based integrations, event-driven systems, or ETL pipelines
Ability to translate customer needs into achievable goals and articulate trade-offs between speed, safety, cost, and long-term maintainability.
Proven ability to communicate technical concepts to a variety of audiences — from technical/IT to operational to executive stakeholders.
Willingness and flexibility to travel up to ~40% for customer and company meetings as needed.
We value in-person collaboration and connection. For Bay Area–based employees, this role requires being in our San Mateo office at least three days a week. For remote employees, occasional travel to headquarters is expected for company-wide events and onsite gatherings.
Beware of job scam fraudsters! Our recruiters use @notablehealth.com email addresses exclusively. We do not conduct interviews via text or instant message, ask you to purchase equipment through us, or ask you to provide sensitive personally identifiable information such as bank account or social security numbers. If you have been contacted by someone claiming to be a recruiter from Notable from a different domain about a job offer, please report it as potential job fraud to law enforcement and contact us here.
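The HL7 v2 interfaces listed above (ADT, SIU, ORM, ORU) carry pipe-delimited messages: segments separated by carriage returns, fields by `|`, with components nested by `^`. A minimal sketch of pulling a patient name out of a PID segment; the sample message is fabricated, and production parsing should use a proper HL7 library that handles escape sequences, repetitions, and encoding characters:

```python
# Minimal HL7 v2 parsing sketch: split an ADT message into segments and
# fields, then read the patient name from PID-5. Fabricated sample data.

def parse_segments(message):
    """Map each segment ID (MSH, PID, ...) to its list of fields."""
    segments = {}
    for line in message.strip().split("\r"):
        fields = line.split("|")
        segments[fields[0]] = fields
    return segments

SAMPLE_ADT = (
    "MSH|^~\\&|SENDAPP|SENDFAC|RECVAPP|RECVFAC|202601011200||ADT^A01|MSG0001|P|2.5\r"
    "PID|1||123456^^^HOSP^MR||DOE^JANE^Q||19800101|F\r"
    "PV1|1|I|ICU^01^A"
)

segs = parse_segments(SAMPLE_ADT)
family, given = segs["PID"][5].split("^")[:2]
print(family, given)  # DOE JANE
```

This is the level at which "accurate data mapping and field-level validation" happens: each field index (PID-5 is the patient name, PID-7 the date of birth) must be mapped and normalized before any downstream AI or automation touches it.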
2026-03-03 9:29
Field Events Marketing Manager
Arize AI
101-200
United Kingdom
Argentina
Full-time
Remote
false
About Arize
AI is rapidly transforming the world. As generative AI reshapes industries, teams need powerful ways to monitor, troubleshoot, and optimize their AI systems. That’s where we come in. Arize AI is the leading AI & Agent Engineering observability and evaluation platform, empowering AI engineers to ship high-performing, reliable agents and applications. From first prototype to production scale, Arize AX unifies build, test, and run in a single workspace—so teams can ship faster with confidence.
We’re a Series C company backed by top-tier investors, with over $135M in funding and a rapidly growing customer base of 150+ leading enterprises and Fortune 500 companies. Customers like Booking.com, Uber, Siemens, and PepsiCo leverage Arize to deliver AI that works.
Note: The nature of this role requires candidates to be based in the Buenos Aires area, though there isn't an in-office requirement.
The Opportunity
We’re looking for an Application Engineer who thrives on solving hard problems with code. In this role, you'll have the opportunity to work at the cutting edge of generative AI in a high-impact role with autonomy and ownership.
What You’ll Do
Debug and fix issues in our platform (and ship PRs with your fixes).
Build internal tools and copilots powered by generative AI to supercharge our team.
Rapidly prototype proof-of-concepts for customer use cases.
Work across Engineering, Product, and Solutions to unblock customers and push the boundaries of AI adoption.
What We’re Looking For
You have 2–5 years of experience in software engineering.
Strong in Python and Golang; comfortable shipping fixes in production systems.
Hands-on with generative AI (LLM APIs, frameworks, building copilots or automations)
Hands-on with OpenTelemetry and deep familiarity with distributed tracing concepts.
Familiarity with AI frameworks (CrewAI, LangChain, LangGraph, Dify, LiteLLM, etc.).
Familiarity or eagerness to learn JavaScript/TypeScript.
Great debugger, creative problem solver, and fast learner.
Independent and resourceful. You create solutions, not dependencies.
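Distributed tracing, as referenced in the requirements above, rests on propagating a trace context (a shared trace ID plus the parent span's ID) across service boundaries so that spans emitted by different services can be stitched into one tree. A toy pure-Python sketch of that idea; this is not the OpenTelemetry API, and the `Span` class here is invented for illustration:

```python
# Toy distributed-tracing sketch: spans share a trace_id and record their
# parent span, mimicking what OpenTelemetry context propagation achieves.
import itertools

_ids = itertools.count(1)

class Span:
    def __init__(self, name, trace_id=None, parent=None):
        self.name = name
        self.span_id = next(_ids)
        self.trace_id = trace_id if trace_id is not None else self.span_id
        self.parent_id = parent.span_id if parent else None

    def child(self, name):
        """Start a child span that inherits this span's trace_id."""
        return Span(name, trace_id=self.trace_id, parent=self)

# A request enters the gateway, then fans out to a downstream call.
root = Span("gateway")        # root span starts a new trace
db = root.child("db-query")   # child span, same trace
print(db.trace_id == root.trace_id, db.parent_id == root.span_id)  # True True
```

In real OpenTelemetry the context crosses process boundaries via W3C `traceparent` headers rather than in-process object references, but the parent/trace relationship being preserved is the same invariant.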
Bonus Points (but not required!)
Experience in a customer-facing role
Built copilots, plugins, or custom GenAI-powered applications.
Open-sourced or contributed PRs to real codebases.
Startup or fast-moving environment experience.
Actual compensation is determined based upon a variety of job-related factors that may include transferable work experience, skill sets, and qualifications. Total compensation also includes unlimited paid time off, a generous parental leave plan, and other benefits for mental health and wellness support.
More About Arize
Arize’s mission is to make the world’s AI work—and work for people.
Our founders came together through a shared frustration: while investments in AI are growing rapidly across every industry, organizations face a critical challenge—understanding whether AI is performing and how to improve it at scale.
Learn more about what we're doing here:
https://techcrunch.com/2025/02/20/arize-ai-hopes-it-has-first-mover-advantage-in-ai-observability/
https://arize.com/blog/arize-ai-raises-70m-series-c-to-build-the-gold-standard-for-ai-evaluation-observability/
Diversity & Inclusion @ Arize
Our company's mission is to make AI work, and to make it work for people. We hope to make an industry-wide impact on bias, which is a big motivator for people who work here. We actively encourage everyone to contribute to a good culture:
Regularly have chats with industry experts, researchers, and ethicists across the ecosystem to advance the use of responsible AI
Culturally conscious events such as LGBTQ trivia during pride month
We have an active Lady Arizers subgroup
2026-03-03 8:14
Hardware Tools Engineer
OpenAI
5000+
$225,000 – $445,000
United States
Full-time
Remote
false
About the Team
OpenAI's Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI's supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.
About the Role
You will develop and evolve the tooling ecosystem that hardware engineers rely on every day — from hardware compilers and IR transformations to simulation, debugging, and automation infrastructure. The work spans software engineering, compiler concepts, and practical hardware workflows, with direct impact on how quickly and effectively we design next-generation AI systems. You'll collaborate closely with architects, RTL designers, and verification engineers to translate real engineering friction into durable, scalable tooling solutions.
In this role you will:
Build and improve the software tooling that makes hardware teams faster: compilation, IR transforms, RTL generation, simulation, debug, and automation.
Extend and integrate hardware compiler stacks (frontends, IR passes, lowering, scheduling, codegen to Verilog/SystemVerilog) and connect them to real design workflows.
Improve developer experience and reliability: reproducible builds, better error messages, faster iteration loops, and dependable CI and regression infrastructure.
Work closely with designers and verification engineers to turn real pain points into durable tools.
Dive into RTL when needed: read and reason about Verilog/SystemVerilog to debug issues, validate tool output, and improve debuggability.
Be willing to go all the way down the stack when necessary, including gate-level views, synthesis results, and implementation artifacts.
Help enable PPA optimization loops by building analysis and automation around area, timing, and power tradeoffs, and by improving tooling that impacts those outcomes.
You might thrive in this role if you have:
Demonstrated ability to build and maintain software (projects, internships, research, open source, or equivalent experience).
Strong CS fundamentals: data structures, algorithms, debugging, and software design.
Proficiency in at least one of Rust, C++, or Python (and willingness to learn the rest).
Familiarity with digital design concepts and the ability to read RTL (Verilog/SystemVerilog) or equivalent hardware descriptions.
Familiarity with compiler or IR-based ideas (representations, passes, transformations, lowering), through coursework or projects.
Comfort operating in ambiguity and iterating quickly with users of your tools.
Nice-to-have skills:
Exposure to compiler and hardware toolchains such as XLS/DSLX, LLVM, Chisel/FIRRTL, CIRCT/MLIR, or other novel hardware languages (e.g. HardCaml, SpinalHDL, Spade, PyMTL, Clash, Bluespec, PyRope).
Experience with Verilog tooling ecosystems (Yosys/RTLIL, Verilator, Slang) or writing tooling around them.
Experience with build and test infrastructure (Bazel, CI systems, fuzzing, performance testing).
Prior work touching synthesis, place and route, static timing analysis, or other PPA-related workflows.
To comply with U.S. export control laws and regulations, candidates for this role may need to meet certain legal status requirements as provided in those laws and regulations.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. 
No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
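The IR passes and transformations described in the role can be pictured with a toy example: a pass walks a list of instructions and rewrites the ones it can simplify. Below is a constant-folding pass over a made-up three-address IR; nothing here corresponds to a real toolchain such as CIRCT or XLS:

```python
# Toy IR pass: constant-fold add instructions whose operands are known.
# The IR is a list of (dest, op, lhs, rhs) tuples - an invented format.

def const_fold(instrs):
    """Fold adds of known constants and propagate the known values."""
    env, out = {}, []
    for dest, op, lhs, rhs in instrs:
        lhs = env.get(lhs, lhs)  # substitute already-known constants
        rhs = env.get(rhs, rhs)
        if op == "const":
            env[dest] = lhs
            out.append((dest, op, lhs, rhs))
        elif op == "add" and isinstance(lhs, int) and isinstance(rhs, int):
            env[dest] = lhs + rhs                    # fold at compile time
            out.append((dest, "const", lhs + rhs, None))
        else:
            out.append((dest, op, lhs, rhs))         # leave runtime ops alone
    return out

prog = [
    ("a", "const", 2, None),
    ("b", "const", 3, None),
    ("c", "add", "a", "b"),   # foldable: 2 + 3
    ("d", "add", "c", "x"),   # not foldable: x is a runtime input
]
print(const_fold(prog)[2])  # ('c', 'const', 5, None)
```

Real hardware compiler passes operate on richer IRs (SSA form, types, bit widths) and must preserve semantics that synthesis tools depend on, but the walk-match-rewrite shape is the same.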
2026-03-03 2:44
AI Solutions Engineer
V7
101-200
£80,000 – £125,000
United Kingdom
Full-time
Remote
false
V7
At V7, we're building AI platforms that help humans do their best work, at incredible scale and speed. Our mission is to turn human knowledge into trustworthy AI, making complex tasks faster, smarter, and more accurate. We're growing fast, backed by leading investors and AI pioneers (including the minds behind Transformers and Gemini).
The product
V7 Go provides legal, finance, insurance, and accounting teams with a toolkit for deploying and building custom no-code AI agents. The platform focuses on taking multi-modal data and delivering verifiable outputs with transparent AI logic to ensure accuracy and compliance. V7 Go supports all of the latest models, like GPT, Claude, and Gemini, for the best accuracy and performance. Watch the V7 Go keynote to see what we're building.
The team you'll be joining and the impact you'll have
You'll join our go-to-market team as our second Solutions Engineer in New York (the team is six people), sitting at the intersection of sales and product in a company processing tens of millions of documents for customers across finance, insurance, and real estate. V7 Go 4x-ed revenue last year, with 160%+ upsell into accounts. You'll help accelerate that trajectory by making sure every customer gets real value. We run a lean, high-trust team where you'll work directly with AEs, engineers, and product to close complex deals and turn new logos into long-term champions. Your work directly shapes how enterprises experience agentic AI for the first time, and how quickly they believe in it.
What you'll be doing from day one
Run technical discovery, design solutions, and lead POCs alongside Account Executives to close deals, then own onboarding to get customers to first value fast.
Build and implement workflows within V7 Go, combining prompt engineering, data pipelines, and integrations to solve real customer problems across document processing and more.
Act as the primary technical contact for accounts, handling complex challenges and spotting expansion opportunities as customers scale.
Juggle up to 10 concurrent projects while feeding customer insights back to product and engineering.
Who you are
You are a prototyper at heart with a gift for talking to customers, building relationships, and solving technical problems with repeatability.
You have experience delivering Large Language Model projects with customers, including LLM API integration, up-to-date knowledge of foundation models, solutions design/architecture, integrating different cloud providers, prompt engineering, and/or measuring AI accuracy.
You love coding with Python.
You can develop and articulate an AI solution vision to technical and business stakeholders, working with customers and partners to match the value proposition to business needs.
V7 champions equality and inclusion because diverse teams build better products. Don't check every box? Apply anyway — we value what makes you unique and will support you through the process; just let our Talent team know how they can help.
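"Measuring AI accuracy" for document-extraction workflows like those described above often reduces to comparing model outputs against a hand-labeled gold set, field by field. A minimal sketch; the documents and field names are fabricated:

```python
# Minimal field-level accuracy check for a document-extraction workflow:
# compare predicted fields against hand-labeled gold data. Fabricated data.

def field_accuracy(preds, golds):
    """Fraction of (document, field) pairs the model got exactly right."""
    correct = total = 0
    for pred, gold in zip(preds, golds):
        for field, expected in gold.items():
            total += 1
            if pred.get(field) == expected:
                correct += 1
    return correct / total

golds = [
    {"invoice_no": "INV-001", "total": "120.00"},
    {"invoice_no": "INV-002", "total": "75.50"},
]
preds = [
    {"invoice_no": "INV-001", "total": "120.00"},  # both fields right
    {"invoice_no": "INV-002", "total": "75.00"},   # total wrong
]
print(field_accuracy(preds, golds))  # 0.75
```

Exact match is the strictest choice; real evaluations often add normalization (whitespace, number formats) or per-field tolerances before scoring, and report per-field breakdowns rather than one aggregate number.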
2026-03-03 0:29
Software Engineer (AI)
Heidi Health
201-500
United Kingdom
Full-time
Remote
false
Who are Heidi?
Heidi is building an AI Care Partner that supports clinicians every step of the way, from documentation to delivery of care. We exist to double healthcare's capacity while keeping care deeply human. In 18 months, Heidi has returned more than 18 million hours to clinicians and supported over 73 million patient visits. Today, more than two million patient visits each week are powered by Heidi across 116 countries and over 110 languages.
Founded by clinicians, Heidi brings together clinicians, engineers, designers, scientists, creatives, and mathematicians, working with a shared purpose: to strengthen the human connection at the heart of healthcare. Backed by nearly $100 million in total funding, Heidi is expanding across the USA, UK, Canada, and Europe, partnering with major health systems including the NHS, Beth Israel Lahey Health, MaineGeneral, and Monash Health, among others. We move quickly where it matters and stay grounded in what's proven, shaping healthcare's next era. Ready for the challenge?
The Role
Working closely with the Product Lead, you will be a mid-level or senior Fullstack Engineer who operates at the intersection of core product development and clinical application. This role requires formal medical training and real clinical experience. Your clinical background will directly inform how we design, evaluate, and ship AI features that support real-world care delivery. Experience working on clinical AI products is highly valued, as you'll be shaping systems that must perform safely in production environments.
What you'll do:
Build end-to-end AI features: Architect and ship fullstack solutions (from React frontends to Python backend services) that leverage our voice AI and LLMs to automate clinical workflows.
Operationalize Voice AI: Implement and fine-tune audio processing pipelines, ensuring our Automatic Speech Recognition (ASR) and LLM agents perform accurately in diverse, real-world medical environments.
Bridge the gap between model and product: Translate complex feedback from clinicians into technical solutions, rapidly prototyping and deploying improvements to model behavior, prompting strategies, and audio handling.
Optimise for real-time interaction: Tune fullstack performance to handle real-time audio streaming and token generation, minimizing latency so clinicians have a seamless conversational experience.
Partner with implementation and clinical teams: Shorten the feedback loop by shipping critical integrations and feature requests from concept to production in days, not quarters.
What we will look for:
Mastery of fullstack fundamentals: You are equally proficient in Python and modern frontend frameworks (React/TypeScript), capable of owning a feature from the database schema to the UI interaction.
Applied AI & voice fluency: You have a working knowledge of LLM integration (RAG, prompt engineering) and audio technologies (ASR, speech processing) and know how to build around their probabilistic nature.
Pragmatic problem solving: You balance engineering purity with the need for speed; you know when to build a robust system and when to ship a tactical solution to unblock a customer.
Cloud fluency (AWS or GCP): You can spin up your own infrastructure (containers, serverless functions) and manage CI/CD pipelines to get your code into the hands of users independently.
Rigorous testing in production: You understand that "works on my machine" isn't enough; you implement observability and feedback loops to monitor how your AI features perform in the wild.
Medical degree with clinical experience, and ideally experience working on clinical AI products.
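The real-time interaction point above, streaming tokens to the client as they are generated instead of waiting for the full response, is what keeps perceived latency low in a conversational UI. A minimal sketch with a generator standing in for the model; the token source is simulated, and a real backend would stream from an LLM API over something like server-sent events or a websocket:

```python
# Streaming sketch: yield tokens as they arrive so the client can render
# partial output immediately. The "model" here is a simulated token source.

def fake_model(prompt):
    """Stand-in for an LLM: yields tokens one at a time."""
    for token in ["Patient", " reports", " mild", " headache", "."]:
        yield token

def stream_response(prompt, on_token):
    """Forward each token to the UI callback as soon as it is produced."""
    chunks = []
    for token in fake_model(prompt):
        on_token(token)       # e.g. push over a websocket / SSE channel
        chunks.append(token)
    return "".join(chunks)    # full transcript for storage and audit

received = []
full = stream_response("summarise consult", received.append)
print(full)  # Patient reports mild headache.
```

The design point is that the first token reaches the UI after one model step rather than after the whole generation, which is the difference clinicians feel as "instant" versus "loading".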
What do we believe in?
Heidi builds for the future of healthcare, not just the next quarter, and our goals are ambitious because the world's health demands it. We believe in progress built through precision, pace, and ownership.
Live Forever - Every release moves care forward: measured, safe, and built to last. Data guides us, but patients define the truth that matters.
Practice Ownership - Decisions follow logic and proof, not hierarchy. Exceptional care demands exceptional standards in our work, our thinking, and our character.
Small Cuts Heal Faster - Stability earns trust, speed delivers impact. Progress is about learning fast without breaking what people depend on.
Make Others Better - Feedback is direct, kindness is constant, and excellence lifts everyone. Our success is measured by collective growth, not individual output.
Our mission is clear: expand the world's capacity to care, and do it without losing the humanity that makes care worth delivering.
Why you should join Heidi
Real product momentum. We're not trying to generate interest, we're channeling it. This is a rare chance to create a global impact as you immerse yourself in Australia's fastest-growing start-up.
Equity from day one. When Heidi wins, you win. You'll share directly in the success you help create.
Unmatched impact. Play a pivotal role at a critical growth moment - all while working on a product that delivers tangible value to clinicians and patients every day.
Work alongside world-class talent. Join a team of operators and builders who've scaled unicorns.
Global reach. Help shape our international expansion as we bring Heidi to key international markets.
Growth and balance. Enjoy a personal development budget, dedicated wellness days, subsidised gym membership, and your birthday off to recharge.
Flexibility that works. A hybrid environment, with 3 days in the office.
Heidi's commitment to Diversity, Equity and Inclusion
Heidi is dedicated to creating an equitable, inclusive, and supportive work environment that brings people together from diverse backgrounds, experiences, and perspectives. Our strength is in our differences. We're proud to be an equal opportunity employer, and we welcome all applicants as part of our commitment to promoting a culture of opportunity for all. Help us reimagine primary care and change the face of healthcare in Australia and then around the world.
2026-03-02 6:29
Senior Software Engineer
Heidi Health
201-500
United Kingdom
Full-time
Remote
false
Who are Heidi?

Heidi is building an AI Care Partner that supports clinicians every step of the way, from documentation to delivery of care.

We exist to double healthcare’s capacity while keeping care deeply human. In 18 months, Heidi has returned more than 18 million hours to clinicians and supported over 73 million patient visits. Today, more than two million patient visits each week are powered by Heidi across 116 countries and over 110 languages.

Founded by clinicians, Heidi brings together clinicians, engineers, designers, scientists, creatives, and mathematicians, working with a shared purpose: to strengthen the human connection at the heart of healthcare.

Backed by nearly $100 million in total funding, Heidi is expanding across the USA, UK, Canada, and Europe, partnering with major health systems including the NHS, Beth Israel Lahey Health, MaineGeneral, and Monash Health, among others.

We move quickly where it matters and stay grounded in what’s proven, shaping healthcare’s next era. Ready for the challenge?

The Role

The UK healthcare system is defined by its friction - complex billing requirements, rigid EHRs, and administrative burden that pulls clinicians away from patients. We’re looking for a Senior Software Engineer to turn that friction into flow.

You’ll build the systems that make Heidi feel native to UK healthcare. That means going deep into the infrastructure clinicians actually use and making Heidi work seamlessly inside those workflows. It means building AI systems that handle the complexity of UK billing so clinicians don’t have to.

You’ll work across the full stack of what makes Heidi valuable in the UK market: from the AI pipelines that understand clinical documentation to the integrations that put the right information in the right place at the right time.

This isn’t just localisation.
It’s building the definitive clinical AI experience for one of the world’s most demanding healthcare markets.

What you’ll do

Build systems that live inside clinical workflows: You’ll shape how Heidi integrates with the EHRs that run UK healthcare. The goal isn’t connectivity - it’s making Heidi feel like a native capability, not a plugin.

Turn clinical complexity into simple experiences: UK healthcare has layers of billing rules, compliance requirements, and payer constraints. You’ll build systems that absorb that complexity so clinicians never see it.

Build for trust and quality: Write clean, testable code with strong interfaces, thoughtful error handling, and observability. Clinicians, operators, and downstream systems depend on these workflows.

Own outcomes, not just code: You’ll care about whether the things you build actually help clinicians and improve practice revenue - not just whether they technically work.

Ship agentic workflow functionality: Build systems where AI assists with extraction, reconciliation, and drafting across workflows, with human review, auditability, and clear controls.

Operate in close collaboration: Work day-to-day in a highly collaborative environment, including frequent pairing and shared ownership of design and implementation.

Grow with the domain: Learn how healthcare organisations operate in practice, especially the requirements and constraints that come with serving UK customers, and translate that into product improvements.

What we’re looking for

5+ years of software engineering experience, with a track record of shipping complex systems that real users depend on.

Strong full-stack fundamentals and experience contributing to user-facing products end-to-end.

Sound engineering judgment: You make sensible trade-offs, keep scope clear, and improve quality through testing, readable code, and thoughtful design.

Ownership and follow-through: You take responsibility for what you commit to, communicate clearly when something changes, and unblock yourself or escalate early.

Collaborative working style: You work well with others, enjoy building in a tight feedback loop, and are comfortable pairing and sharing work in progress.

Comfort with ambiguity: You can engage with messy problems, ask good questions, and drive toward a practical, shippable solution.

Fluency with AI coding tools: You use modern AI tools to accelerate delivery, while staying rigorous about correctness and validation.

Experience with agentic frameworks, modelling complex domains, orchestration, and event-driven architectures is a plus.

What do we believe in?

Heidi builds for the future of healthcare, not just the next quarter, and our goals are ambitious because the world’s health demands it. We believe in progress built through precision, pace, and ownership.

Live Forever - Every release moves care forward: measured, safe, and built to last. Data guides us, but patients define the truth that matters.

Practice Ownership - Decisions follow logic and proof, not hierarchy. Exceptional care demands exceptional standards in our work, our thinking, and our character.

Small Cuts Heal Faster - Stability earns trust, speed delivers impact. Progress is about learning fast without breaking what people depend on.

Make Others Better - Feedback is direct, kindness is constant, and excellence lifts everyone. Our success is measured by collective growth, not individual output.

Our mission is clear: expand the world’s capacity to care, and do it without losing the humanity that makes care worth delivering.

Why you should join Heidi

Real product momentum. We’re not trying to generate interest, we’re channelling it. This is a rare chance to create a global impact as you immerse yourself in Australia’s fastest-growing start-up.

Equity from day one. When Heidi wins, you win.
You’ll share directly in the success you help create.

Unmatched impact. Play a pivotal role at a critical growth moment - all while working on a product that delivers tangible value to clinicians and patients every day.

Work alongside world-class talent. Join a team of operators and builders who’ve scaled unicorns.

Global reach. Help shape our international expansion as we bring Heidi to key international markets.

Growth and balance. Enjoy a personal development budget, dedicated wellness days, subsidised gym membership, and your birthday off to recharge.

Flexibility that works. A hybrid environment, with 3 days in the office.

Heidi’s commitment to Diversity, Equity and Inclusion

Heidi is dedicated to creating an equitable, inclusive, and supportive work environment that brings people together from diverse backgrounds, experiences, and perspectives. Our strength is in our differences. We’re proud to be an equal opportunity employer and welcome all applicants, committed to promoting a culture of opportunity for all.

Help us reimagine primary care and change the face of healthcare in Australia and then around the world.
2026-03-02 6:29