The AI job market moves fast. We keep up so you don't have to.
Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.
New AI Opportunities
Emulation Engineer, Automotive Robotics
Tenstorrent
1001-5000
$100,000 – $500,000
Germany
Full-time
Remote
Tenstorrent is leading the industry on cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch, and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.
Tenstorrent is seeking an ASIC Networking Engineer to help define and build next-generation CPU networking architecture for both datacenter and emerging robotics/automotive applications. You will contribute to our current datacenter networking efforts while also helping to seed and specify future medium- to low-power robotics/automotive devices for AI/ML compute and sensor ingest. The initial focus will be datacenter networking, with robotics as the first target within the automotive/robotics space.
This role is remote, based out of North America.
We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.
Who You Are
You thrive while navigating multiple priorities and ambiguous, evolving requirements.
You have knowledge of Ethernet network architecture and how performance is modeled.
You have experience with die-to-die interfaces and understand associated protocols and design tradeoffs.
You understand Ethernet networking concepts and how they map onto on-chip and off-chip fabrics.
You have experience with datacenter scale-up architectures such as UALink, NVLink, and Broadcom SUE.
You have experience with scale-out RDMA protocols such as RoCE, InfiniBand, and others.
You have experience working on safety (diagnostic and fault coverage) within the RTL design process.
What We Need
A network ASIC designer who can contribute to both datacenter networking and early-stage automotive/robotics scoping and specifications.
Someone comfortable working at the intersection of NoC, performance modeling, and RTL design to guide architectural decisions.
An engineer who can collaborate across hardware, software, and systems teams to define and refine networking requirements.
A contributor who can help drive forward next-generation CPU networking architecture for AI/ML workloads.
What You Will Learn
How to build next-generation CPU networking architectures for both high-performance datacenter and constrained robotics/automotive environments.
How to help drive forward next-generation robotics-focused CPUs for AI/ML compute with rich sensor ingestion.
How to work at the intersection of NoC design, performance modeling, and RTL to close the loop between architecture and implementation.
How to take an early-stage concept (automotive/robotics networking) from seeding and specification through to project initiation.
Compensation for all engineers at Tenstorrent ranges from $100k to $500k, including base and variable compensation targets. Experience, skills, education, background, and location all impact the actual offer made.
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.
This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E:1, and E:2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.
2026-03-03 18:59
Field Application Engineer, Automotive Robotics
Tenstorrent
1001-5000
$100,000 – $500,000
Germany
Full-time
Remote
2026-03-03 18:59
Sr Staff Engineer, CPU System Microarchitect
Tenstorrent
1001-5000
$100,000 – $500,000
India
Full-time
Remote
2026-03-03 18:59
Solutions Engineer (Autonomous Vehicles & Robotics)
Encord
101-200
United States
Full-time
Remote
About us
Encord is the universal data layer for AI that helps 300+ AI teams train and run models on the right data. Our platform indexes, curates, annotates, and evaluates data across the full AI lifecycle, from development through production. Trusted by Woven by Toyota, AXA, UiPath, Zipline, and more.
We're an ambitious team of 100+ working at the frontier of AI and have raised $60M in Series C funding from Wellington Management, CRV, Next47 and Y Combinator.
The role
As a Solutions Engineer at Encord, you'll be the core technical expert for customers building autonomous vehicles, robotics, and physical AI solutions. Your expertise in LiDAR data, sensor fusion, and perception will be critical as you architect data solutions for prospects at the cutting edge of autonomous systems.
You'll partner with Account Executives to drive technical wins while establishing Encord as the definitive platform for managing multimodal sensor datasets. This role combines deep LiDAR technical expertise with customer-facing impact.
What you'll do
Lead technical discovery with perception teams working on autonomous systems, understanding their sensor stack, model development pipelines, and data challenges
Architect complete solutions for complex multimodal datasets (LiDAR + camera + radar fusion, sensor calibration)
Act as technical authority on how Encord handles 3D point clouds, sensor fusion, temporal sequences, and multimodal annotation
Build bespoke POCs for LiDAR data ingestion, point cloud processing, coordinate transformations, and sensor calibration
Develop custom integrations with robotics/AV stacks (MCAP, ROS, Apollo, Autoware)
Create technical demos showcasing LiDAR annotation, 3D bounding boxes, semantic segmentation, and multi-sensor fusion
Debug complex issues involving point cloud rendering, sensor calibration matrices, and multimodal data synchronization
Guide prospects through technical evaluations of LiDAR formats, sensor configurations, and annotation requirements
Provide expert consultation on 3D annotation best practices, coordinate conventions, and quality control workflows
Partner with Account Executives to co-own technical wins in enterprise sales cycles
Translate technical capabilities into business value for CTOs and senior stakeholders
Channel customer feedback to Product and Engineering teams to shape our roadmap
Who we're looking for
A creative problem-solver with a hacker mindset who builds robust scripts and integrations quickly
An excellent communicator, comfortable engaging both perception engineers and executive stakeholders
Customer-obsessed and passionate about solving complex technical problems
Deep empathy for autonomous systems engineers managing massive 3D datasets
Experience requirements
1+ years working with LiDAR data in autonomous vehicles, robotics, or physical AI applications
Strong proficiency in Python or another scripting language
3D perception knowledge: point cloud processing, 3D object detection, semantic segmentation, SLAM
Autonomous systems experience with ROS, sensor calibration, coordinate transformations, and multimodal sensor integration
Prior experience in a customer-facing technical role (Solutions Engineering, Technical Account Management, or similar)
Expertise in LiDAR data formats (PCD, LAS, PLY, etc.) and point cloud processing
Understanding of sensor calibration, coordinate transformations, and sensor fusion
Knowledge of ML frameworks, model development processes, and perception model requirements
Experience with cloud infrastructure (AWS, GCP, Azure), APIs, and SaaS platforms
Why Encord
Competitive salary, commission, and meaningful equity in a high-growth startup
Clear, accelerated growth opportunities as the company scales rapidly
Strong in-person culture: 3–5 days/week in our newly launched North Beach loft office
Flexible PTO to fully recharge
18 paid vacation days in the U.S. plus federal holidays
Annual learning & development budget
Comprehensive health, dental, and vision coverage
Frequent travel opportunities across the U.S., London, and Europe
Bi-annual company offsites, twice-weekly team lunches, and monthly socials
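The sensor-calibration and coordinate-transformation work this role centers on can be illustrated with a minimal sketch. This is a toy example in plain Python (the mounting offsets are invented for illustration; a production pipeline would use a library such as NumPy or Open3D): a 4x4 homogeneous transform maps LiDAR-frame points into a target frame such as the camera or vehicle frame.

```python
import math

def make_transform(yaw_rad, tx, ty, tz):
    """Build a 4x4 homogeneous transform: rotation about Z, then translation."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return [
        [c, -s, 0.0, tx],
        [s,  c, 0.0, ty],
        [0.0, 0.0, 1.0, tz],
        [0.0, 0.0, 0.0, 1.0],
    ]

def apply_transform(T, points):
    """Map (x, y, z) points from the sensor frame into the target frame."""
    out = []
    for x, y, z in points:
        p = (x, y, z, 1.0)  # homogeneous coordinates
        out.append(tuple(sum(T[r][c] * p[c] for c in range(4)) for r in range(3)))
    return out

# Hypothetical extrinsic: LiDAR yawed 90 degrees and mounted 1.2 m forward.
T = make_transform(math.pi / 2, 1.2, 0.0, 0.0)
print(apply_transform(T, [(1.0, 0.0, 0.0)]))  # a point 1 m ahead of the LiDAR
```

The same pattern (compose extrinsics, apply to every point) underlies multi-sensor fusion, just with calibrated matrices per sensor pair instead of hand-written ones.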
2026-03-03 14:29
Global Hardware Sourcing & Supply Manager
Together AI
201-500
$200,000 – $280,000
Full-time
Remote
About the Role
The Turbo team sits at the intersection of efficient inference (algorithms, architectures, engines) and post‑training / RL systems. We build and operate the systems behind Together’s API, including high‑performance inference and RL/post‑training engines that can run at production scale.
Our mandate is to push the frontier of efficient inference and RL‑driven training: making models dramatically faster and cheaper to run, while improving their capabilities through RL‑based post‑training (e.g., GRPO‑style objectives). This work lives at the interface of algorithms and systems: asynchronous RL, rollout collection, scheduling, and batching all interact with engine design, creating many knobs to tune across the RL algorithm, training loop, and inference stack. Much of the job is modifying production inference systems—for example, SGLang‑ or vLLM‑style serving stacks and speculative decoding systems such as ATLAS—grounded in a strong understanding of post‑training and inference theory, rather than purely theoretical algorithm design.
You’ll work across the stack—from RL algorithms and training engines to kernels and serving systems—to build and improve frontier models via RL pipelines. People on this team are often spiky: some are more RL‑first, some are more systems‑first. Depth in one of these areas plus appetite to collaborate across (and grow toward more full‑stack ownership over time) is ideal.
Requirements
We don’t expect anyone to check every box below. People on this team typically have deep expertise in one or more areas and enough breadth (or interest) to work effectively across the stack. The closer you are to full‑stack (inference + post‑training/RL + systems), the stronger the fit—but being spiky in one area and eager to grow is absolutely okay.
You might be a good fit if you:
Have strong expertise in at least one of the following, and are excited to collaborate across (and grow into) the others:
Systems‑first profile: Large‑scale inference systems (e.g., SGLang, vLLM, FasterTransformer, TensorRT, custom engines, or similar), GPU performance, distributed serving.
RL‑first profile: RL / post‑training for LLMs or large models (e.g., GRPO, RLHF/RLAIF, DPO‑like methods, reward modeling), and using these to train or fine‑tune real models.
Model architecture design for Transformers or other large neural nets.
Distributed systems / high‑performance computing for ML.
Are comfortable working from algorithms to engines:
Strong coding ability in Python.
Experience profiling and optimizing performance across GPU, networking, and memory layers.
Able to take a new sampling method, scheduler, or RL update and turn it into a production‑grade implementation in the engine and/or training stack.
Have a solid research foundation in your area(s) of depth:
Track record of impactful work in ML systems, RL, or large‑scale model training (papers, open‑source projects, or production systems).
Can read new RL / post‑training papers, understand their implications on the stack, and design minimal, correct changes in the right layer (training engine vs. inference engine vs. data / API).
Operate well as a full‑stack problem solver:
You naturally ask: “Where in the stack is this really bottlenecked?”
You enjoy collaborating with infra, research, and product teams, and you care about both scientific quality and user‑visible wins.
Minimum qualifications
3+ years of experience working on ML systems, large‑scale model training, inference, or adjacent areas (or equivalent experience via research / open source).
Advanced degree in Computer Science, EE, or a related field, or equivalent practical experience.
Demonstrated experience owning complex technical projects end‑to‑end.
If you’re excited about the role and strong in some of these areas, we encourage you to apply even if you don’t meet every single requirement.
Responsibilities
Advance inference efficiency end‑to‑end
Design and prototype algorithms, architectures, and scheduling strategies for low‑latency, high‑throughput inference.
Implement and maintain changes in high‑performance inference engines (e.g., SGLang‑ or vLLM‑style systems and Together’s inference stack), including kernel backends, speculative decoding (e.g., ATLAS), quantization, etc.
Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost.
Unify inference with RL / post‑training
Design and operate RL and post‑training pipelines (e.g., RLHF, RLAIF, GRPO, DPO‑style methods, reward modeling) where 90+% of the cost is inference, jointly optimizing algorithms and systems.
Make RL and post‑training workloads more efficient with inference‑aware training loops—for example, async RL rollouts, speculative decoding, and other techniques that make large‑scale rollout collection and evaluation cheaper.
Use these pipelines to train, evaluate, and iterate on frontier models on top of our inference stack.
Co‑design algorithms and infrastructure so that objectives, rollout collection, and evaluation are tightly coupled to efficient inference, and quickly identify bottlenecks across the training engine, inference engine, data pipeline, and user‑facing layers.
Run ablations and scale‑up experiments to understand trade‑offs between model quality, latency, throughput, and cost, and feed these insights back into model, RL, and system design.
Own critical systems at production scale
Profile, debug, and optimize inference and post‑training services under real production workloads.
Drive roadmap items that require real engine modification—changing kernels, memory layouts, scheduling logic, and APIs as needed.
Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously.
Provide technical leadership (Staff level)
Set technical direction for cross‑team efforts at the intersection of inference, RL, and post‑training.
Mentor other engineers and researchers on full‑stack ML systems work and performance engineering.
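As intuition for the speculative decoding referenced in the responsibilities above: a cheap draft model proposes several tokens per step, and the expensive target model verifies them, keeping the longest agreeing prefix. The sketch below is a greedy toy over integer tokens (the "models" are stand-in functions, not any real serving stack or Together's implementation):

```python
def speculative_decode(draft_next, target_next, prompt, k, n_tokens):
    """Toy greedy speculative decoding.

    draft_next / target_next: functions mapping a token sequence to the next
    token (stand-ins for a cheap draft model and an expensive target model).
    k: number of draft tokens proposed per verification step.
    """
    seq = list(prompt)
    while len(seq) - len(prompt) < n_tokens:
        # Draft model proposes k tokens autoregressively.
        proposed = []
        for _ in range(k):
            proposed.append(draft_next(seq + proposed))
        # Target model verifies: keep the longest agreeing prefix, then
        # (if the draft diverged) append one corrected token from the target.
        accepted = []
        for tok in proposed:
            if target_next(seq + accepted) == tok:
                accepted.append(tok)
            else:
                break
        if len(accepted) < len(proposed):
            accepted.append(target_next(seq + accepted))
        seq.extend(accepted)
    return seq[:len(prompt) + n_tokens]

# Toy models over integer tokens: the target counts up; the draft is right
# except it stalls on multiples of 4.
target = lambda s: s[-1] + 1
draft = lambda s: s[-1] if s[-1] % 4 == 0 else s[-1] + 1
print(speculative_decode(draft, target, [0], k=3, n_tokens=6))  # → [0, 1, 2, 3, 4, 5, 6]
```

In a real engine the verification step is a single batched forward pass over all k drafts, which is where the latency win comes from: a run of accepted draft tokens costs one target invocation instead of one per token.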
About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.
Compensation
We offer competitive compensation, startup equity, health insurance, and other competitive benefits. The US base salary range for this full-time position is $200,000 – $280,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.
Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
Please see our privacy policy at https://www.together.ai/privacy
2026-03-03 12:29
Senior Program Manager, Infrastructure Strategy and Business Operations
Together AI
201-500
$200,000 – $280,000
United States
Full-time
Remote
About the Role
The Turbo team sits at the intersection of efficient inference (algorithms, architectures, engines) and post‑training / RL systems. We build and operate the systems behind Together’s API, including high‑performance inference and RL/post‑training engines that can run at production scale.
Our mandate is to push the frontier of efficient inference and RL‑driven training: making models dramatically faster and cheaper to run, while improving their capabilities through RL‑based post‑training (e.g., GRPO‑style objectives). This work lives at the interface of algorithms and systems: asynchronous RL, rollout collection, scheduling, and batching all interact with engine design, creating many knobs to tune across the RL algorithm, training loop, and inference stack. Much of the job is modifying production inference systems—for example, SGLang‑ or vLLM‑style serving stacks and speculative decoding systems such as ATLAS—grounded in a strong understanding of post‑training and inference theory, rather than purely theoretical algorithm design.
You’ll work across the stack—from RL algorithms and training engines to kernels and serving systems—to build and improve frontier models via RL pipelines. People on this team are often spiky: some are more RL‑first, some are more systems‑first. Depth in one of these areas plus appetite to collaborate across (and grow toward more full‑stack ownership over time) is ideal.
Requirements
We don’t expect anyone to check every box below. People on this team typically have deep expertise in one or more areas and enough breadth (or interest) to work effectively across the stack. The closer you are to full‑stack (inference + post‑training/RL + systems), the stronger the fit—but being spiky in one area and eager to grow is absolutely okay.
You might be a good fit if you:
Have strong expertise in at least one of the following, and are excited to collaborate across (and grow into) the others:
Systems‑first profile: Large‑scale inference systems (e.g., SGLang, vLLM, FasterTransformer, TensorRT, custom engines, or similar), GPU performance, distributed serving.
RL‑first profile: RL / post‑training for LLMs or large models (e.g., GRPO, RLHF/RLAIF, DPO‑like methods, reward modeling), and using these to train or fine‑tune real models.
Model architecture design for Transformers or other large neural nets.
Distributed systems / high‑performance computing for ML.
Are comfortable working from algorithms to engines:
Strong coding ability in Python
Experience profiling and optimizing performance across GPU, networking, and memory layers.
Able to take a new sampling method, scheduler, or RL update and turn it into a production‑grade implementation in the engine and/or training stack.
Have a solid research foundation in your area(s) of depth:
Track record of impactful work in ML systems, RL, or large‑scale model training (papers, open‑source projects, or production systems).
Can read new RL / post‑training papers, understand their implications on the stack, and design minimal, correct changes in the right layer (training engine vs. inference engine vs. data / API).
Operate well as a full‑stack problem solver:
You naturally ask: “Where in the stack is this really bottlenecked?”
You enjoy collaborating with infra, research, and product teams, and you care about both scientific quality and user‑visible wins.
Minimum qualifications
3+ years of experience working on ML systems, large‑scale model training, inference, or adjacent areas (or equivalent experience via research / open source).
Advanced degree in Computer Science, EE, or a related field, or equivalent practical experience.
Demonstrated experience owning complex technical projects end‑to‑end.
If you’re excited about the role and strong in some of these areas, we encourage you to apply even if you don’t meet every single requirement.
Responsibilities
Advance inference efficiency end‑to‑end
Design and prototype algorithms, architectures, and scheduling strategies for low‑latency, high‑throughput inference.
Implement and maintain changes in high‑performance inference engines (e.g., SGLang‑ or vLLM‑style systems and Together’s inference stack), including kernel backends, speculative decoding (e.g., ATLAS), quantization, etc.
Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost.
Unify inference with RL / post‑training
Design and operate RL and post‑training pipelines (e.g., RLHF, RLAIF, GRPO, DPO‑style methods, reward modeling) where 90+% of the cost is inference, jointly optimizing algorithms and systems.
Make RL and post‑training workloads more efficient with inference‑aware training loops—for example, async RL rollouts, speculative decoding, and other techniques that make large‑scale rollout collection and evaluation cheaper.
Use these pipelines to train, evaluate, and iterate on frontier models on top of our inference stack.
Co‑design algorithms and infrastructure so that objectives, rollout collection, and evaluation are tightly coupled to efficient inference, and quickly identify bottlenecks across the training engine, inference engine, data pipeline, and user‑facing layers.
Run ablations and scale‑up experiments to understand trade‑offs between model quality, latency, throughput, and cost, and feed these insights back into model, RL, and system design.
Own critical systems at production scale
Profile, debug, and optimize inference and post‑training services under real production workloads.
Drive roadmap items that require real engine modification—changing kernels, memory layouts, scheduling logic, and APIs as needed.
Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously.
Provide technical leadership (Staff level)
Set technical direction for cross‑team efforts at the intersection of inference, RL, and post‑training.
Mentor other engineers and researchers on full‑stack ML systems work and performance engineering.
About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancement such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers in our journey in building the next generation AI infrastructure.
Compensation
We offer competitive compensation, startup equity, health insurance, and other competitive benefits. The US base salary range for this full-time position is $200,000 – $280,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.
Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
Please see our privacy policy at https://www.together.ai/privacy
2026-03-03 12:29
Helix AI Engineer, Agentic Systems
Figure AI
201-500
$150,000 – $350,000
United States
Full-time
Remote
false
Figure is an AI robotics company developing autonomous general-purpose humanoid robots. The goal of the company is to ship humanoid robots with human level intelligence. Its robots are engineered to perform a variety of tasks in the home and commercial markets. Figure is headquartered in San Jose, CA.
Figure’s vision is to deploy autonomous humanoids at a global scale. Our Helix team is looking for an experienced Training Infrastructure Engineer to take our infrastructure to the next level. This role is focused on managing the training cluster and implementing distributed training algorithms, data loaders, and developer tools for AI researchers. The ideal candidate has experience building tools and infrastructure for a large-scale deep learning system.
Responsibilities
Design, deploy, and maintain Figure's training clusters
Architect and maintain scalable deep learning frameworks for training on massive robot datasets
Work together with AI researchers to implement training of new model architectures at a large scale
Implement distributed training and parallelization strategies to reduce model development cycles
Implement tooling for data processing, model experimentation, and continuous integration
Requirements
Strong software engineering fundamentals
Bachelor's or Master's degree in Computer Science, Robotics, Engineering, or a related field
Experience with Python and PyTorch
Experience managing HPC clusters for deep neural network training
Minimum of 4 years of professional, full-time experience building reliable backend systems
Bonus Qualifications
Experience managing cloud infrastructure (AWS, Azure, GCP)
Experience with job scheduling / orchestration tools (SLURM, Kubernetes, LSF, etc.)
Experience with configuration management tools (Ansible, Terraform, Puppet, Chef, etc.)
The US base salary range for this full-time position is between $150,000 - $350,000 annually.
The pay offered for this position may vary based on several individual factors, including job-related knowledge, skills, and experience. The total compensation package may also include additional components/benefits depending on the specific role. This information will be shared if an employment offer is extended.
2026-03-03 9:44
AI Platform Architect
Notable
201-500
$117,500 – $168,000
United States
Full-time
Remote
false
Notable is the leading healthcare AI platform for transforming workforce productivity. Health systems, hospitals, and payers use Notable to improve healthcare quality, close gaps in patient care, drive member enrollment, and grow patient acquisition, retention, and reimbursement, scaling growth without hiring more staff.
We are on a mission to improve the lives of patients, staff, and clinicians - to improve healthcare for humanity. This isn't just a lofty goal - it's something we're achieving every single day. When you join Notable, you become part of a force actively transforming healthcare. Our aim to impact 100 million patients isn't just a number; it's a commitment to creating meaningful change on a massive scale. Our culture is purposeful in pursuit of this mission. We believe it gives each person the opportunity to do the best work of their lives, work with the best teammates, and have fun achieving great things together.
Role Summary
The AI Platform Architect plays a critical role in designing, scoping, and implementing complex healthcare AI workflows on the Notable platform. This role sits at the intersection of healthcare operations, AI workflow design, data architecture, enterprise integration/implementation, data orchestration, and change management.
You will partner closely with clients and internal cross-functional teams to translate operational challenges into scalable AI-driven solutions. You will be responsible for designing and architecting end-to-end AI flows that leverage multiple healthcare data models — including structured, semi-structured, and unstructured data — while ensuring workflows are secure, reliable, scalable, and aligned with real-world clinical and administrative processes.
Notable’s AI Platform Architects are responsible for flow discovery, design, and architecture: gathering customer requirements, validating scope with an eye toward speed-to-value, building and configuring flows in Flow Builder, partnering with Integrations to build required connections, testing internally and externally, training customers on using the platform, and facilitating change management. At Notable, we are setting the bar for this emerging industry role. Our architects are problem solvers with a strong analytical mindset and a passion for partnering with healthcare leaders to drive business transformation. They demonstrate a deep understanding of both the healthcare industry and the Flow Builder platform, use AI frameworks to balance efficiency, scalability, and value, and translate technical concepts for non-technical audiences while leveraging LLMs to create workflow efficiencies.
What You’ll Do
AI Workflow Architecture & Design
Design, scope, and architect end-to-end healthcare AI workflows utilizing the Notable platform and Flow Builder.
Translate business and operational requirements into scalable AI flow architectures that are grounded in customer context and Notable best practices.
Build intelligent automation flows that incorporate AI orchestration across sub-flows and agents; decision logic and routing based on clinical and operational rules; data transformation across heterogeneous healthcare data models; and human-in-the-loop workflows for exception handling, QA, and escalation.
Define and standardize workflow patterns that balance automation, accuracy, safety, and compliance.
Recommend flow design choices based on patterns from similar organizations and clearly tie those recommendations to expected impact and measurable outcomes.
Socialize designs with key customer decision-makers and internal stakeholders to ensure alignment, safety, and adherence to Notable’s best practices.
Healthcare Data Modeling & Integration
Architect solutions that leverage multiple healthcare data models, including HL7, EHR-native data objects (e.g., Cerner/Oracle Millennium, Epic), and API-based and event-driven integrations (REST, webhooks, messaging, FHIR).
Design workflows that operate across structured data (demographics, orders, encounters, appointments, coverage), semi-structured data (forms, questionnaires, intake packets), and unstructured data (documents, faxes, clinical notes, pathology reports).
Ensure accurate and maintainable data mapping, normalization, and enrichment to support downstream AI and automation.
Collaborate with Integration Specialists and customer IT teams to validate end-to-end data flows, error handling, and observability.
Implementation & Delivery Leadership
Lead technical scoping sessions with customers to define workflow scope, constraints, and dependencies; integration requirements and data contracts; data sources, ownership, and quality considerations; and success metrics, baselines, and value levers.
Independently implement flows with customers via Flow Builder by defining and managing scope, setting appropriate expectations with cross-functional and customer stakeholders, and influencing customer counterparts to achieve target outcomes.
Proactively build and own project plans for your flows: run project meetings and working sessions, provide clear, regular status updates to internal and external teams, and maintain shared accountability for achieving target outcomes and timelines.
Develop and execute rigorous testing plans, including unit, integration, and UAT workflows, ensuring flows are vetted and signed off prior to launch.
Serve as the technical authority for assigned implementations, ensuring accurate data mapping and field-level validation; reliable, observable flow execution in production; and performance and scale readiness, with attention to cost and utilization.
Proactively escalate deployment risks or blockers, propose actionable recommendations, and drive issues through to resolution in partnership with Product, Engineering, Integrations, and Support & Maintenance.
As implementations complete, facilitate the transition to steady-state ownership alongside Customer Success and Support & Maintenance, ensuring clear documentation, runbooks, and success criteria.
Product Feedback & Platform Evolution
Provide structured feedback to Product, Engineering, and Integrations on platform and integration capability gaps; patterns that significantly improve customer outcomes; areas where Notable can widen its advantage versus competitors; capabilities that are harder or easier to deploy in real-world environments; and competitive solutions and market signals encountered in the field.
Help shape reusable templates, patterns, and reference architectures that accelerate future implementations and Builder-led projects.
You’re a Great Fit If You:
Thrive on technical challenges and enjoy pushing the boundaries of applied healthcare AI.
Demonstrate strong product intuition — you understand how what you build will impact patients, staff, outcomes, and long-term growth.
Are energized by bridging the gap between technical execution and strategic vision, and can clearly connect architecture decisions to business value.
Inspire innovation and experimentation in the partners, Builders, and leaders you work with.
Are comfortable operating in complex healthcare environments, engaging with clinical, operational, and IT stakeholders.
See yourself as an entrepreneur who likes to build and solve challenges others have yet to master.
Communicate clearly and confidently with a wide range of audiences, from engineers and analysts to executives and front-line operators.
What We’re Looking For
3–5+ years of experience in one or more of the following: enterprise software workflow design and configuration; healthcare IT, health system operations, or healthcare analytics; AI/automation platforms, low-code tools, or integration platforms.
Experience working in a dynamic, collaborative, fast-paced environment where you can operate autonomously and own outcomes end-to-end.
A self-driven thought partner who can think critically and logically about complex systems, break down ambiguous problems into structured, solvable components, and root-cause issues while identifying testable assumptions.
Hands-on experience configuring workflows using a low-code or admin console, and leveraging emerging technologies (LLMs, prompt engineering, retrieval, etc.) to triage issues and develop solutions.
Experience with healthcare data and integrations, such as HL7 v2 interfaces (ADT, SIU, ORM, ORU), FHIR APIs, EHR ecosystems, and API-based integrations, event-driven systems, or ETL pipelines.
Ability to translate customer needs into achievable goals and articulate trade-offs between speed, safety, cost, and long-term maintainability.
Proven ability to communicate technical concepts to a variety of audiences — from technical/IT to operational to executive stakeholders.
Willingness and flexibility to travel up to ~40% for customer and company meetings as needed.
We value in-person collaboration and connection. For Bay Area–based employees, this role requires being in our San Mateo office at least three days a week. For remote employees, occasional travel to headquarters is expected for company-wide events and onsite gatherings.
Beware of job scam fraudsters! Our recruiters use @notablehealth.com email addresses exclusively. We do not conduct interviews via text or instant message, ask you to purchase equipment through us, or ask you to provide sensitive personally identifiable information such as bank account or social security numbers. If you have been contacted about a job offer by someone claiming to be a Notable recruiter from a different domain, please report it as potential job fraud to law enforcement and contact us here.
2026-03-03 9:29
Field Events Marketing Manager
Arize AI
101-200
United Kingdom
Argentina
Full-time
Remote
false
About Arize
AI is rapidly transforming the world. As generative AI reshapes industries, teams need powerful ways to monitor, troubleshoot, and optimize their AI systems. That’s where we come in. Arize AI is the leading AI & Agent Engineering observability and evaluation platform, empowering AI engineers to ship high-performing, reliable agents and applications. From first prototype to production scale, Arize AX unifies build, test, and run in a single workspace—so teams can ship faster with confidence.
We’re a Series C company backed by top-tier investors, with over $135M in funding and a rapidly growing customer base of 150+ leading enterprises and Fortune 500 companies. Customers like Booking.com, Uber, Siemens, and PepsiCo leverage Arize to deliver AI that works.
Note: The nature of this role requires candidates to be based in the Buenos Aires area, though there isn't an in-office requirement.
The Opportunity
We’re looking for an Application Engineer who thrives on solving hard problems with code. In this role, you'll have the opportunity to work at the cutting edge of generative AI in a high-impact role with autonomy and ownership.
What You’ll Do
Debug and fix issues in our platform (and ship PRs with your fixes).
Build internal tools and copilots powered by generative AI to supercharge our team.
Rapidly prototype proof-of-concepts for customer use cases.
Work across Engineering, Product, and Solutions to unblock customers and push the boundaries of AI adoption.
What We’re Looking For
You have 2–5 years of software engineering experience.
Strong in Python and Golang; comfortable shipping fixes in production systems.
Hands-on with generative AI (LLM APIs, frameworks, building copilots or automations)
Hands-on with OpenTelemetry and deep familiarity with distributed tracing concepts.
Familiarity with AI frameworks (CrewAI, LangChain, LangGraph, Dify, LiteLLM, etc.).
Familiarity with, or eagerness to learn, JavaScript/TypeScript.
Great debugger, creative problem solver, and fast learner.
Independent and resourceful. You create solutions, not dependencies.
Bonus Points (but not required!)
Experience in a customer-facing role
Built copilots, plugins, or custom GenAI-powered applications.
Open-sourced or contributed PRs to real codebases.
Startup or fast-moving environment experience.
Actual compensation is determined based on a variety of job-related factors, which may include transferable work experience, skill sets, and qualifications. Total compensation also includes unlimited paid time off, a generous parental leave plan, and other benefits for mental health and wellness support.
More About Arize
Arize’s mission is to make the world’s AI work—and work for people.
Our founders came together through a shared frustration: while investments in AI are growing rapidly across every industry, organizations face a critical challenge—understanding whether AI is performing and how to improve it at scale.
Learn more about what we're doing here:
https://techcrunch.com/2025/02/20/arize-ai-hopes-it-has-first-mover-advantage-in-ai-observability/
https://arize.com/blog/arize-ai-raises-70m-series-c-to-build-the-gold-standard-for-ai-evaluation-observability/
Diversity & Inclusion @ Arize
Our company's mission is to make AI work, and to make it work for people. We hope to reduce bias industry-wide, and that's a big motivator for many who work here. We actively encourage everyone to contribute to a healthy culture:
Regularly have chats with industry experts, researchers, and ethicists across the ecosystem to advance the use of responsible AI
Culturally conscious events such as LGBTQ trivia during pride month
We have an active Lady Arizers subgroup
2026-03-03 8:14
Hardware Tools Engineer
OpenAI
5000+
$225,000 – $445,000
United States
Full-time
Remote
false
About the Team
OpenAI’s Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI’s supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.
About the Role
You will develop and evolve the tooling ecosystem that hardware engineers rely on every day — from hardware compilers and IR transformations to simulation, debugging, and automation infrastructure. The work spans software engineering, compiler concepts, and practical hardware workflows, with direct impact on how quickly and effectively we design next-generation AI systems. You’ll collaborate closely with architects, RTL designers, and verification engineers to translate real engineering friction into durable, scalable tooling solutions.
In this role you will:
Build and improve the software tooling that makes hardware teams faster: compilation, IR transforms, RTL generation, simulation, debug, and automation.
Extend and integrate hardware compiler stacks (frontends, IR passes, lowering, scheduling, codegen to Verilog/SystemVerilog) and connect them to real design workflows.
Improve developer experience and reliability: reproducible builds, better error messages, faster iteration loops, and dependable CI and regression infrastructure.
Work closely with designers and verification engineers to turn real pain points into durable tools.
Dive into RTL when needed: read and reason about Verilog/SystemVerilog to debug issues, validate tool output, and improve debuggability.
Be willing to go all the way down the stack when necessary, including gate-level views, synthesis results, and implementation artifacts.
Help enable PPA optimization loops by building analysis and automation around area, timing, and power tradeoffs, and by improving tooling that impacts those outcomes.
You might thrive in this role if you have:
Demonstrated ability to build and maintain software (projects, internships, research, open source, or equivalent experience).
Strong CS fundamentals: data structures, algorithms, debugging, and software design.
Proficiency in at least one of Rust, C++, or Python (and willingness to learn the rest).
Familiarity with digital design concepts and the ability to read RTL (Verilog/SystemVerilog) or equivalent hardware descriptions.
Familiarity with compiler or IR-based ideas (representations, passes, transformations, lowering), through coursework or projects.
Comfort operating in ambiguity and iterating quickly with users of your tools.
Nice-to-have skills:
Exposure to compiler and hardware toolchains such as XLS/DSLX, LLVM, Chisel/FIRRTL, CIRCT/MLIR, or other novel hardware languages (e.g. HardCaml, SpinalHDL, Spade, PyMTL, Clash, BlueSpec, PyRope).
Experience with Verilog tooling ecosystems (Yosys/RTLIL, Verilator, Slang) or writing tooling around them.
Experience with build and test infrastructure (Bazel, CI systems, fuzzing, performance testing).
Prior work touching synthesis, place and route, static timing analysis, or other PPA-related workflows.
To comply with U.S. export control laws and regulations, candidates for this role may need to meet certain legal status requirements as provided in those laws and regulations.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
2026-03-03 2:44
AI Solutions Engineer
V7
101-200
£80,000 – £125,000
United Kingdom
Full-time
Remote
false
V7
At V7, we’re building AI platforms that help humans do their best work, at incredible scale and speed. Our mission is to turn human knowledge into trustworthy AI, making complex tasks faster, smarter, and more accurate. We’re growing fast, backed by leading investors and AI pioneers (including the minds behind Transformers and Gemini).
The product
V7 Go provides legal, finance, insurance, and accounting teams with a toolkit for deploying and building custom no-code AI agents. The platform focuses on taking multi-modal data and delivering verifiable outputs with transparent AI logic to ensure accuracy and compliance. V7 Go supports all of the latest models like GPT, Claude, and Gemini for the best accuracy and performance. Watch the V7 Go keynote to see what we’re building.
The team you’ll be joining and the impact you’ll have
You'll join our go-to-market team as our second Solutions Engineer in New York (the team is six people), sitting at the intersection of sales and product in a company processing tens of millions of documents for customers across finance, insurance, and real estate. V7 Go 4x-ed revenue last year, with 160%+ upsell into accounts. You'll help accelerate that trajectory by making sure every customer gets real value. We run a lean, high-trust team where you'll work directly with AEs, engineers, and product to close complex deals and turn new logos into long-term champions. Your work directly shapes how enterprises experience agentic AI for the first time, and how quickly they believe in it.
What you’ll be doing from day one
Run technical discovery, design solutions, and lead POCs alongside Account Executives to close deals, then own onboarding to get customers to first value fast.
Build and implement workflows within V7 Go, combining prompt engineering, data pipelines, and integrations to solve real customer problems across document processing and more.
Act as the primary technical contact for accounts, handling complex challenges and spotting expansion opportunities as customers scale.
Juggle up to 10 concurrent projects while feeding customer insights back to product and engineering.
Who you are
You are a prototyper at heart with a gift for talking to customers, building relationships, and solving technical problems with repeatability.
You have experience delivering Large Language Model projects with customers, including LLM API integration, up-to-date knowledge of foundation models, solutions design/architecture, integrating different cloud providers, prompt engineering, and/or measuring AI accuracy.
You love coding with Python.
You can develop and articulate an AI solution vision to technical and business stakeholders, working with customers and partners to match the value proposition to business needs.
V7 champions equality and inclusion because diverse teams build better products. Don't check every box? Apply anyway — we value what makes you unique and will support you through the process; just let our Talent team know how they can help.
2026-03-03 0:29
Software Engineer (AI)
Heidi Health
201-500
United Kingdom
Full-time
Remote
false
Who are Heidi?
Heidi is building an AI Care Partner that supports clinicians every step of the way, from documentation to delivery of care. We exist to double healthcare’s capacity while keeping care deeply human. In 18 months, Heidi has returned more than 18 million hours to clinicians and supported over 73 million patient visits. Today, more than two million patient visits each week are powered by Heidi across 116 countries and over 110 languages.
Founded by clinicians, Heidi brings together clinicians, engineers, designers, scientists, creatives, and mathematicians, working with a shared purpose: to strengthen the human connection at the heart of healthcare.
Backed by nearly $100 million in total funding, Heidi is expanding across the USA, UK, Canada, and Europe, partnering with major health systems including the NHS, Beth Israel Lahey Health, MaineGeneral, and Monash Health, among others. We move quickly where it matters and stay grounded in what’s proven, shaping healthcare’s next era. Ready for the challenge?
The Role
Working closely with the Product Lead, you will be a Mid-level or Senior Fullstack Engineer who operates at the intersection of core product development and clinical application. This role requires formal medical training and real clinical experience. Your clinical background will directly inform how we design, evaluate, and ship AI features that support real-world care delivery. Experience working on clinical AI products is highly valued, as you’ll be shaping systems that must perform safely in production environments.
What you’ll do:
Build end-to-end AI features: Architect and ship fullstack solutions (from React frontends to Python backend services) that leverage our voice AI and LLMs to automate clinical workflows.
Operationalize Voice AI: Implement and fine-tune audio processing pipelines, ensuring our Automatic Speech Recognition (ASR) and LLM agents perform accurately in diverse, real-world medical environments.
Bridge the gap between model and product: Translate complex feedback from clinicians into technical solutions, rapidly prototyping and deploying improvements to model behavior, prompting strategies, and audio handling.
Optimise for real-time interaction: Tune fullstack performance to handle real-time audio streaming and token generation, minimizing latency so clinicians have a seamless conversational experience.
Partner with implementation and clinical teams: Shorten the feedback loop by shipping critical integrations and feature requests from concept to production in days, not quarters.
What we will look for:
Mastery of fullstack fundamentals: You are equally proficient in Python and modern frontend frameworks (React/TypeScript), capable of owning a feature from the database schema to the UI interaction.
Applied AI & voice fluency: You have a working knowledge of LLM integration (RAG, prompt engineering) and audio technologies (ASR, speech processing), and know how to build around their probabilistic nature.
Pragmatic problem solving: You balance engineering purity with the need for speed; you know when to build a robust system and when to ship a tactical solution to unblock a customer.
Cloud fluency (AWS or GCP): You can spin up your own infrastructure (containers, serverless functions) and manage CI/CD pipelines to get your code into the hands of users independently.
Rigorous testing in production: You understand that "works on my machine" isn't enough; you implement observability and feedback loops to monitor how your AI features perform in the wild.
A medical degree with clinical experience, and ideally experience working on clinical AI products.
What do we believe in?
Heidi builds for the future of healthcare, not just the next quarter, and our goals are ambitious because the world’s health demands it. We believe in progress built through precision, pace, and ownership.
Live Forever - Every release moves care forward: measured, safe, and built to last. Data guides us, but patients define the truth that matters.
Practice Ownership - Decisions follow logic and proof, not hierarchy. Exceptional care demands exceptional standards in our work, our thinking, and our character.
Small Cuts Heal Faster - Stability earns trust, speed delivers impact. Progress is about learning fast without breaking what people depend on.
Make Others Better - Feedback is direct, kindness is constant, and excellence lifts everyone. Our success is measured by collective growth, not individual output.
Our mission is clear: expand the world’s capacity to care, and do it without losing the humanity that makes care worth delivering.
Why you should join Heidi
Real product momentum. We’re not trying to generate interest, we’re channeling it. This is a rare chance to create a global impact as you immerse yourself in Australia’s fastest growing start-up.
Equity from day one. When Heidi wins, you win. You’ll share directly in the success you help create.
Unmatched impact. Play a pivotal role at a critical growth moment - all while working on a product that delivers tangible value to clinicians and patients every day.
Work alongside world-class talent. Join a team of operators and builders who’ve scaled unicorns.
Global reach. Help shape our international expansion as we bring Heidi to key international markets.
Growth and balance. Enjoy a personal development budget, dedicated wellness days, subsidised gym membership, and your birthday off to recharge.
Flexibility that works. A hybrid environment, with 3 days in the office.
Heidi’s commitment to Diversity, Equity and Inclusion
Heidi is dedicated to creating an equitable, inclusive, and supportive work environment that brings people together from diverse backgrounds, experiences, and perspectives. Our strength is in our differences. We’re proud to be an equal opportunity employer and welcome all applicants, as we’re committed to promoting a culture of opportunity for all. Help us reimagine primary care and change the face of healthcare in Australia and then around the world.
2026-03-02 6:29
Senior Software Engineer
Heidi Health
201-500
United Kingdom
Full-time
Remote
false
Who are Heidi?
Heidi is building an AI Care Partner that supports clinicians every step of the way, from documentation to delivery of care. We exist to double healthcare’s capacity while keeping care deeply human. In 18 months, Heidi has returned more than 18 million hours to clinicians and supported over 73 million patient visits. Today, more than two million patient visits each week are powered by Heidi across 116 countries and over 110 languages.
Founded by clinicians, Heidi brings together clinicians, engineers, designers, scientists, creatives, and mathematicians, working with a shared purpose: to strengthen the human connection at the heart of healthcare.
Backed by nearly $100 million in total funding, Heidi is expanding across the USA, UK, Canada, and Europe, partnering with major health systems including the NHS, Beth Israel Lahey Health, MaineGeneral, and Monash Health, among others.
We move quickly where it matters and stay grounded in what’s proven, shaping healthcare’s next era. Ready for the challenge?
The Role
The UK healthcare system is defined by its friction - complex billing requirements, rigid EHRs, and administrative burden that pulls clinicians away from patients. We're looking for a Senior Software Engineer to turn that friction into flow.
You'll build the systems that make Heidi feel native to UK healthcare. That means going deep into the infrastructure clinicians actually use and making Heidi work seamlessly inside those workflows. It means building AI systems that handle the complexity of UK billing so clinicians don't have to.
You'll work across the full stack of what makes Heidi valuable in the UK market: from the AI pipelines that understand clinical documentation to the integrations that put the right information in the right place at the right time.
This isn't just localisation.
It's building the definitive clinical AI experience for one of the world's most demanding healthcare markets.
What you’ll do
Build systems that live inside clinical workflows: You'll shape how Heidi integrates with the EHRs that run UK healthcare. The goal isn't connectivity - it's making Heidi feel like a native capability, not a plugin.
Turn clinical complexity into simple experiences: UK healthcare has layers of billing rules, compliance requirements, and payer constraints. You'll build systems that absorb that complexity so clinicians never see it.
Build for trust and quality: Write clean, testable code with strong interfaces, thoughtful error handling, and observability. Clinicians, operators, and downstream systems depend on these workflows.
Own outcomes, not just code: You'll care about whether the things you build actually help clinicians and improve practice revenue - not just whether they technically work.
Ship agentic workflow functionality: Build systems where AI assists with extraction, reconciliation, and drafting across workflows, with human review, auditability, and clear controls.
Operate in close collaboration: Work day-to-day in a highly collaborative environment, including frequent pairing and shared ownership of design and implementation.
Grow with the domain: Learn how healthcare organisations operate in practice, especially the requirements and constraints that come with serving UK customers, and translate that into product improvements.
What we’re looking for
5+ years of software engineering experience, with a track record of shipping complex systems that real users depend on.
Strong full-stack fundamentals and experience contributing to user-facing products end-to-end.
Sound engineering judgment: You make sensible trade-offs, keep scope clear, and improve quality through testing, readable code, and thoughtful design.
Ownership and follow-through: You take responsibility for what you commit to, communicate clearly when something changes, and unblock yourself or escalate early.
Collaborative working style: You work well with others, enjoy building in a tight feedback loop, and are comfortable pairing and sharing work in progress.
Comfort with ambiguity: You can engage with messy problems, ask good questions, and drive toward a practical, shippable solution.
Fluency with AI coding tools: You use modern AI tools to accelerate delivery, while staying rigorous about correctness and validation.
Experience with agentic frameworks, modelling complex domains, orchestration, and event-driven architectures is a plus.
What do we believe in?
Heidi builds for the future of healthcare, not just the next quarter, and our goals are ambitious because the world’s health demands it. We believe in progress built through precision, pace, and ownership.
Live Forever - Every release moves care forward: measured, safe, and built to last. Data guides us, but patients define the truth that matters.
Practice Ownership - Decisions follow logic and proof, not hierarchy. Exceptional care demands exceptional standards in our work, our thinking, and our character.
Small Cuts Heal Faster - Stability earns trust, speed delivers impact. Progress is about learning fast without breaking what people depend on.
Make others better - Feedback is direct, kindness is constant, and excellence lifts everyone. Our success is measured by collective growth, not individual output.
Our mission is clear: expand the world’s capacity to care, and do it without losing the humanity that makes care worth delivering.
Why you should join Heidi
Real product momentum. We’re not trying to generate interest, we’re channeling it. This is a rare chance to create a global impact as you immerse yourself in Australia’s fastest-growing start-up.
Equity from day one. When Heidi wins, you win. You’ll share directly in the success you help create.
Unmatched impact. Play a pivotal role at a critical growth moment - all while working on a product that delivers tangible value to clinicians and patients every day.
Work alongside world-class talent. Join a team of operators and builders who’ve scaled unicorns.
Global reach. Help shape our international expansion as we bring Heidi to key international markets.
Growth and balance. Enjoy a personal development budget, dedicated wellness days, subsidised gym membership, and your birthday off to recharge.
Flexibility that works. A hybrid environment, with 3 days in the office.
Heidi’s commitment to Diversity, Equity and Inclusion
Heidi is dedicated to creating an equitable, inclusive, and supportive work environment that brings people together from diverse backgrounds, experiences, and perspectives. Our strength is in our differences. We're proud to be an equal opportunity employer and welcome all applicants, as we're committed to promoting a culture of opportunity for all.
Help us reimagine primary care and change the face of healthcare in Australia and then around the world.
No items found.
2026-03-02 6:29
RTL Engineer, Automotive Robotics
Tenstorrent
1001-5000
$100,000 – $500,000
Germany
Full-time
Remote
false
Tenstorrent is leading the industry in cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch, and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.
Tenstorrent is accelerating the future of AI and high-performance compute by building industry-leading CPU and AI architectures. As an Automotive and Robotics SoC Architect, you will define scalable, top-down system architectures that unify our CPU and AI technologies for next-generation automotive applications. This senior technical role shapes the architectural direction of our automotive and robotics portfolio, ensuring our products meet the industry's highest expectations for performance, safety, reliability, and security. The position is central to how Tenstorrent delivers world-class automotive solutions and requires strong technical leadership, systems thinking, and cross-functional collaboration.
This role is remote, based out of North America.
We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.
Who You Are
A systems thinker who can architect complex SoCs from concept to execution.
A strong communicator who can articulate technical direction across engineering teams and external partners.
Someone with deep knowledge of safety-critical systems and the unique needs of automotive environments.
An innovator who can identify future use cases and propose next-generation architectural solutions.
A leader who thrives in a highly technical, cross-functional, fast-moving environment.
What We Need
Bachelor’s, Master’s, or Ph.D. in Electrical Engineering, Computer Engineering, or related field.
Extensive experience designing complex SoCs, ideally in automotive applications.
Proficiency in hardware description languages such as Verilog or VHDL.
Experience with hardware/software co-design and co-verification.
Knowledge of automotive safety standards (e.g., ISO 26262) and security principles.
Someone comfortable with up to 25% international travel.
Experience with cameras, sensors, and similar devices is a plus.
What You Will Learn
How cutting-edge CPU and AI architectures are adapted for automotive-grade environments.
Best-in-class methodologies for safety-critical SoC design, verification, and system integration.
How to translate emerging automotive use cases into scalable, future-proof SoC architectures.
Approaches to hardware-level security, robustness, and cyber-resilience in automotive compute systems.
Cross-functional collaboration strategies that drive innovation across architecture, software, DV, and product teams.
Compensation for all engineers at Tenstorrent ranges from $100k to $500k, including base and variable compensation targets. Experience, skills, education, background, and location all impact the actual offer made.
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.
This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E:1, and E:2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.
No items found.
2026-03-01 18:59
Deployment Strategist Lead - France
ElevenLabs
501-1000
France
Full-time
Remote
false
About ElevenLabs
ElevenLabs is an AI research and product company transforming how we interact with technology. We launched in January 2023 with the first human-like AI voice model. Today, we serve millions of users and thousands of businesses - from fast-growing startups to large enterprises like Deutsche Telekom and Meta. Our investors are some of the world's most prominent, including Andreessen Horowitz, ICONIQ Growth and Sequoia. We've raised $781M in funding and our last valuation was $11B - multiples of 11, always.
We have expanded from voice into three main platforms:
ElevenAgents enables businesses to deliver seamless and intelligent customer experiences, with the integrations, testing, monitoring, and reliability necessary to deploy voice and chat agents at scale.
ElevenCreative empowers creators and marketers to generate and edit speech, music, image, and video across 70+ languages.
ElevenAPI gives developers access to our leading AI audio foundational models.
Everything we do is the result of the creativity and commitment of our team - builders doing the best work of their lives. We are researchers, engineers, and operators. IOI medalists and ex-founders. If you want to work hard and create lasting positive impact, we want to hear from you.
How we work
High-velocity: Rapid experimentation, lean autonomous teams, and minimal bureaucracy.
Impact, not job titles: We don’t have job titles. Instead, it’s about the impact you have. No task is above or beneath you.
AI first: We use AI to move faster with higher-quality results. We do this across the whole company - from engineering to growth to operations.
Excellence everywhere: Everything we do should match the quality of our AI models.
Global team: We prioritize your talent, not your location.
What we offer
Innovative culture: You’ll be part of a generational opportunity to define the trajectory of AI, surrounded by a team pushing the boundaries of what’s possible.
Growth paths: Joining ElevenLabs means joining a dynamic team with countless opportunities to drive impact - beyond your immediate role and responsibilities.
Learning & development: ElevenLabs proactively supports professional development through an annual discretionary stipend.
Social travel: We also provide an annual discretionary stipend to meet up with colleagues each year, however you choose.
Annual company offsite: Each year, we bring the entire team together in a new location - past offsites have included Croatia and Italy.
Co-working: If you’re not located near one of our main hubs, we offer a monthly co-working stipend.
About the role
As a Deployment Strategist Lead, you'll be fully responsible for opening up a new market for ElevenLabs and for building the team to make it happen. This is a senior position in which you’ll operate like a co-founder of an effective startup moving from 0 to 1 within our broader org, working alongside an experienced sales leader and aiming to deploy our voice AI technology at the most strategic customers in the market, against their most challenging problems. All the while, you’ll be able to leverage support from the rest of ElevenLabs to accelerate.
You will be responsible for hiring the team you need, winning the first customers in the market, and ensuring that our product generates impact for them. No two days are the same, but you should expect to:
Meet with strategic customers to understand their critical audio and voice AI needs and locate their biggest pain points.
Build and lead a team of forward deployed engineers within the region.
Identify relevant use cases through deep engagement with customer problems and workflows, and work with engineers to implement our voice and audio AI technology into innovative solutions.
Design and architect bespoke integrations for customers, ensuring our technology fits seamlessly into their products and operations.
Guide customers on best practices for implementing our voice and audio AI models to maximize their effectiveness.
Present the results of our work and proposals for future work to audiences ranging from technical teams to C-suite executives.
Collaborate with our Research and Product teams to incorporate field insights into ElevenLabs' software products and AI models.
Build and deliver compelling demos of our voice and audio AI technology to new and existing customers.
Scope out potential applications in new industries and expand our AI solutions across different sectors globally.
Take full ownership of end-to-end execution of major projects for our most strategic partners, working hands-on to deliver high-impact solutions.
Collaborate daily with our customers' engineering and executive teams to ensure optimal implementation of ElevenLabs' technologies.
Requirements
Experience working with customers in a technical capacity.
A proven track record of leading and building teams.
Basic proficiency in Python and an understanding of API integration, to implement scripts and help with prototyping/demo building.
Excellent communication and problem-solving skills, including the ability to summarize complex technical concepts and to reason toward optimal solutions.
A track record of taking ownership of complex projects and delivering results.
Technical aptitude to quickly understand our voice and audio AI models and their applications.
Location
This role will require you to be based in Paris, in-office 3+ days a week, and to travel to customer offices within France as required.
#LI-Remote
No items found.
2026-03-01 15:14
Software Engineer, Agent
Sierra
201-500
CA$180,000 – CA$390,000
Canada
Full-time
Remote
false
About us
At Sierra, we’re creating a platform to help businesses build better, more human customer experiences with AI. We are primarily an in-person company based in San Francisco, with growing offices in Atlanta, New York, London, Paris, Singapore, and Japan.
We are guided by a set of values that are at the core of our actions and define our culture: Trust, Customer Obsession, Craftsmanship, Intensity, and Family. These values are the foundation of our work, and we are committed to upholding them in everything we do.
Our co-founders are Bret Taylor and Clay Bavor. Bret currently serves as Board Chair of OpenAI. Previously, he was co-CEO of Salesforce (which had acquired the company he founded, Quip) and CTO of Facebook. Bret was also one of Google's earliest product managers and co-creator of Google Maps. Before founding Sierra, Clay spent 18 years at Google, where he most recently led Google Labs. Earlier, he started and led Google’s AR/VR effort, Project Starline, and Google Lens. Before that, Clay led the product and design teams for Google Workspace.
What you’ll do
Design and deliver production-grade AI agents: You'll build and ship highly performant, reliable, and intuitive AI agents that are mission-critical and directly drive Sierra's revenue and growth. These aren't prototypes - they are powerful, scalable systems running in production environments across industries like finance, healthcare, and commerce.
Drive the Agent Development Life Cycle (ADLC): You'll have complete ownership and autonomy from initial pilot through deployment and continuous iteration. You'll be responsible for building, tuning, and evolving AI agents in production environments, defining the standard for ADLC best practices along the way.
Partner with large enterprises and cutting-edge startups: You’ll work directly with leaders at some of the world’s largest enterprises to understand their most pressing business challenges, and build AI agents that transform how they operate at scale. You'll also partner with the most cutting-edge startups in Silicon Valley, embedding AI agents across their entire business stack to drive innovation and efficiency.
Build the future of the platform: Your direct work with customers will guide the evolution of Sierra's core platform. You'll surface unmet needs, prototype new tools and features, and collaborate with research, product, and platform to shape the future of AI agent development and Sierra's product.
Example projects
These are some examples of projects that engineers on our team have worked on recently:
Design and build AI agents for large telecommunications and media companies that consistently outperform human agents in managing subscription churn
Develop and refine AI agents capable of navigating complex customer interactions, like troubleshooting a broken device and personalizing product recommendations
Create generalizable AI agent frameworks tailored for industry-specific use cases. See some examples in our financial services blog!
Facilitate design partnerships for new product initiatives, such as new agent architectures, self-service capabilities, and generative agent development
Experiment with the latest voice models and figure out how to integrate them at scale for enterprise-grade customers
What you'll bring
Experience building and scaling end-to-end production systems
Strong technical problem-solving skills, especially in fast-changing, ambiguous environments
A builder and tinkerer’s mindset with high agency - you find creative ways to overcome obstacles and ship
Comfort working directly with customers to understand their needs and solve real-world problems
Excellent communication skills - clear, direct, and persuasive across technical and non-technical audiences
Even better...
Experience building or deploying AI/LLM systems in production
Have been a founder or founding engineer - you know what it means to balance craft, ownership, and speed
Familiarity with tools that power today's AI agents: eval frameworks, agent tooling, RAG pipelines, and prompt engineering
Prior experience with React, TypeScript, and/or Go
Previous roles where you interfaced with customers or led technical projects with external stakeholders
Our values
Trust: We build trust with our customers through our accountability, empathy, quality, and responsiveness. We build trust in AI by making it more accessible, safe, and useful. We build trust with each other by showing up for each other professionally and personally, creating an environment that enables all of us to do our best work.
Customer Obsession: We deeply understand our customers’ business goals and relentlessly focus on driving outcomes, not just technical milestones. Everyone at the company knows and spends time with our customers. When our customer is having an issue, we drop everything and fix it.
Craftsmanship: We get the details right, from the words on the page to the system architecture. We have good taste. When we notice something isn’t right, we take the time to fix it. We are proud of the products we produce. We continuously self-reflect to continuously self-improve.
Intensity: We know we don’t have the luxury of patience. We play to win. We care about our product being the best, and when it isn’t, we fix it. When we fail, we talk about it openly and without blame so we succeed the next time.
Family: We know that balance and intensity are compatible, and we model it in our actions and processes. We are the best technology company for parents. We support and respect each other and celebrate each other’s personal and professional achievements.
What we offer
We want our benefits to reflect our values and offer the following to full-time employees:
Flexible (Unlimited) Paid Time Off
Medical, Dental, and Vision benefits for you and your family
Life Insurance and Disability Benefits
Retirement Plan (e.g., 401K, pension) with Sierra match
Parental Leave
Fertility and family building benefits through Carrot
Lunch, as well as delicious snacks and coffee to keep you energized
Discretionary Benefit Stipend giving people the ability to spend where it matters most
Free alphorn lessons
These benefits are further detailed in Sierra's policies, may vary by region, and are subject to change at any time, consistent with the terms of any applicable compensation or benefits plans. Eligible full-time employees can participate in Sierra's equity plans subject to the terms of the applicable plans and policies.
Be you, with us
We're working to bring the transformative power of AI to every organization in the world. To do so, it is important to us that the diversity of our employees represents the diversity of our customers. We believe that our work and culture are better when we encourage, support, and respect different skills and experiences represented within our team. We encourage you to apply even if your experience doesn't precisely match the job description.
We strive to evaluate all applicants consistently without regard to race, color, religion, gender, national origin, age, disability, veteran status, pregnancy, gender expression or identity, sexual orientation, citizenship, or any other legally protected class.
No items found.
2026-03-01 12:44
Staff Product Designer, Go Enterprise
Grammarly
1001-5000
$103,000 – $128,000
No items found.
Full-time
Remote
false
SUPERHUMAN MAIL 👉
We exist so that professionals end each day feeling happier, more productive, and closer to achieving their potential.
Our customers get through their inboxes twice as fast; many see inbox zero for the first time in years.
Today we are…
The fastest email experience in the world
Loved and adored: see what our customers say
📣 We've joined forces with Grammarly to build the AI-native productivity suite of the future, with Superhuman as the central communication layer. This partnership accelerates our mission to help professionals achieve their potential — now at even greater scale.
Come shape the future of email, communication, and productivity!
BUILD LOVE 💜
At Superhuman, we deeply understand how to build products that people love. We incorporate fun and play; we infuse magic and joy; we make experiences that amaze and delight.
It all starts with the right team — a team that deeply cares about values, customers, and each other.
CREATE MASSIVE IMPACT 🚀
We're not solving a small problem, and we're not addressing a small market. We're going after email: the one activity that consumes more of our work day than any other.
Our ambition doesn't stop there. Next: calendars, notes, contacts, and team communication. We are building the productivity platform of the future.
DO THE BEST WORK OF YOUR LIFE 🌟
We have created the frameworks for how to build product-market fit and redefined the narrative of how to onboard customers successfully. We have shown the world it’s possible to build a premium productivity brand. Our investors include Andreessen Horowitz, First Round Capital, IVP, Tiger Global Management, Sam Altman, and the founders of Gmail, Dropbox, Reddit, Discord, Stripe, GitHub, AngelList, and Intercom.
This time, we’re swinging beyond the fences and fundamentally rethinking how individuals and teams should collaborate. We are building a household brand and a worldwide organization. We are here to do the best work of our lives, and we hope you are too.
ROLE 👩🏽💻👨💻
Own the observability and lifecycle management of AI features across the organization
Build tools and infrastructure to enable teams to develop, monitor, and optimize LLM-powered features
Design and implement closed-loop evaluation pipelines that automatically validate prompt changes
Develop comprehensive metrics and dashboards to track LLM usage: cost per feature, token patterns, and latency.
Create systems that tie user feedback to specific prompts and LLM calls
Establish best practices and processes for the full lifecycle of prompts: development, testing, deployment, and monitoring
Collaborate with engineering teams across the org to ensure they have the tools and visibility needed to build high-quality AI features
Technologies we use: Go, Postgres, Kubernetes, Google Cloud, various LLM providers (OpenAI, Anthropic, Google Vertex)
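The observability bullets above (cost per feature, token patterns, latency) reduce to attributing each LLM call to a product feature and aggregating. A minimal sketch follows; the per-token prices, field names, and `UsageTracker` API are illustrative assumptions only, not Superhuman's actual system (which, per the posting, is built in Go):

```python
import time
from dataclasses import dataclass, field

# Illustrative per-1K-token prices; real prices vary by provider and model.
PRICE_PER_1K = {"prompt": 0.0025, "completion": 0.01}

@dataclass
class LLMCallRecord:
    feature: str            # product feature the call is attributed to
    prompt_tokens: int
    completion_tokens: int
    latency_s: float

    @property
    def cost(self) -> float:
        # cost = tokens / 1000 * per-1K price, summed over token classes
        return (self.prompt_tokens / 1000 * PRICE_PER_1K["prompt"]
                + self.completion_tokens / 1000 * PRICE_PER_1K["completion"])

@dataclass
class UsageTracker:
    records: list = field(default_factory=list)

    def track(self, feature, call, *args, **kwargs):
        """Run an LLM call and record its tokens and latency under `feature`.

        `call` is assumed to return (text, prompt_tokens, completion_tokens).
        """
        start = time.perf_counter()
        text, prompt_toks, completion_toks = call(*args, **kwargs)
        latency = time.perf_counter() - start
        self.records.append(
            LLMCallRecord(feature, prompt_toks, completion_toks, latency))
        return text

    def cost_per_feature(self) -> dict:
        """Aggregate estimated spend by feature for a dashboard."""
        totals: dict = {}
        for r in self.records:
            totals[r.feature] = totals.get(r.feature, 0.0) + r.cost
        return totals
```

In a real system the records would be emitted as metrics (e.g. latency histograms) rather than held in memory, and tied back to the prompt version that produced each call so regressions can be traced.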
SOUND LIKE YOU? 🙌
Experience: You have 4+ years of software development experience with a focus on backend engineering, DevOps, ML Ops, or SRE work. You're proficient in at least one back-end programming language (ideally Go). You have hands-on experience with observability, metrics, and monitoring systems.
AI Enthusiast: You believe AI will revolutionize how we work as well as the experiences that we create for our customers. Driven by passion and curiosity, you leverage AI to dramatically increase your own productivity and the impact of your team.
Metrics-Driven: You understand percentiles (P90, P95), know how to build meaningful dashboards, and can turn raw data into actionable insights. You've worked with monitoring and observability tools.
Systems Thinker: You think in terms of pipelines, lifecycles, and closed loops. You know how to build scalable infrastructure that enables other teams to move faster.
Remarkable Quality: You produce work that is striking, worthy of attention, and a contribution to the state of the art.
Asynchronous Communicator: You’re effective across various mediums (especially Slack, Notion, and email) and can produce and consume detailed written materials as needed without sacrificing speed. You respond quickly and thoughtfully to unblock others and speed things up.
Start-to-Finish Ownership: You act with 100% responsibility for your own outcomes as well as the outcomes of the company.
Bias to Action: Speed matters. You take rapid and decisive steps forward, even in the face of uncertainty, recognizing that action is the catalyst for progress and growth.
Growth Mindset: You embrace challenges, welcome feedback, and believe you and others can always grow.
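The "Metrics-Driven" point above name-checks P90 and P95. As a concrete refresher, a nearest-rank percentile (one common definition among several interpolated variants) is just a sort and an index:

```python
def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample value such that at
    least p percent of all samples are less than or equal to it."""
    if not samples:
        raise ValueError("need at least one sample")
    ordered = sorted(samples)
    # ceil(n * p / 100) via integer arithmetic, clamped to rank >= 1
    rank = max(1, -(-len(ordered) * p // 100))
    return ordered[int(rank) - 1]

# A P95 far above P50 on a latency series is the classic signature of a
# slow tail that an average would hide.
```

Python's `statistics.quantiles` provides interpolated variants; the nearest-rank form above is the easiest to reason about on a dashboard.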
SALARY INFO 💸
Superhuman takes a market-based approach to compensation, which means base pay may vary depending on your location. Base pay may vary considerably depending on job-related knowledge, skills, and experience. The expected salary ranges for this position are outlined below by location and may be modified in the future.
Canada: $128,000-$174,000 CAD
Mexico: $1,928,000-$2,405,000 MXN
Brazil: R$562,000-R$702,000 BRL
Argentina: $103,000-$128,000 USD
The salary ranges do not reflect total compensation, which includes base salary, benefits, and company equity. This range is intentionally broad because we are open to considering candidates at multiple levels of seniority within engineering. The exact salary offered will depend on the candidate’s skills, experience, and the level at which they join our team.
BENEFITS 🎁
Superhuman offers all team members competitive pay along with a benefits package encompassing the following and more:
Excellent health care (including a wide range of medical, dental, vision, mental health, and fertility benefits)
Disability and life insurance options
401(k) and RRSP matching (US & Canada only)
Paid parental leave
20 days of paid time off per year, 12 days of paid holidays per year (17 days for LatAm), two floating holidays per year, and flexible sick time
Generous stipends (including those for caregiving, pet care, wellness, your home office, and more)
Annual professional development budget and opportunities
COME JOIN US 🎟️
We value our differences, and we encourage all to apply — especially those whose identities are traditionally underrepresented in tech organizations. We do not discriminate on the basis of race, religion, color, gender expression or identity, sexual orientation, ancestry, national origin, citizenship, age, marital status, veteran status, disability status, political belief, or any other characteristic protected by law. Superhuman is an equal opportunity employer and a participant in the US federal E-Verify program (US). We also abide by the Employment Equity Act (Canada).
#LI-Remote
No items found.
2026-02-28 19:14
Software Data Engineer
BenchSci
201-500
$110,000 – $135,000
Canada
Full-time
Remote
false
We are looking for a Software Data Engineer to join our growing Data Team! Reporting to the Engineering Manager, you will evolve our data models in several styles of datastores and operationalize production-grade data pipelines. As part of this role, you'll collaborate with a world-class team, experience growth and mentorship, and apply data engineering solutions to shape the future of scientific discovery.
Pay range: $110,000 – $135,000
We know compensation is an important part of choosing your next role. The range shown reflects our target hiring range, informed by market data, internal equity, and the role's current scope. Offers often fall near the mid-range, but individual offers may vary based on experience, skills, and scope.

You Will:
- Collaborate with Machine Learning, Full-stack engineers, and Science to solve complex document mining challenges, helping us capture and model additional scientific experiments
- Scale data pipelines so our data can go from research to platform quickly and reliably
- Work with sources that contain both semi-structured and unstructured data
- Use your experience to help define and apply best practices for a broad platform of technologies in a cloud-based environment
- Architect and maintain robust data pipelines that ingest diverse sources and use LLMs for high-fidelity entity extraction into structured formats
- Implement evaluation frameworks to monitor the accuracy, drift, and hallucination rates of extraction models within the production pipeline
- Lead or consult on the authoring of engineering design proposals following the unified Platform Stream roadmap at BenchSci
- Leverage a deep understanding of the business context and the team's goals to make independent technical decisions in the face of open-ended requirements
- Proactively identify new opportunities (from both internal and external sources) and advocate for and implement improvements to the current state of projects
- Respond with urgency, and drive urgency in your own team, to operational issues, owning resolution within your sphere of responsibility
- Challenge the status quo and propose newer technologies or ways of working

You Have:
- A degree in Computer Science/Engineering or a related scientific field
- 3+ years of industry experience as a software developer
- Proficiency with Python
- Proficiency with SQL
- Experience using LLMs for structured data extraction
- Experience with event-driven architecture with Pub/Sub
- A track record of building high-quality, maintainable code
- Experience with cloud computing (for example: GCP, Azure, AWS)

Nice To Have:
- ML/data science exposure
- Experience with Auth0 and Terraform
- Experience with data warehouse solutions like BigQuery, and databases including AlloyDB and Spanner
- Experience with agentic development and AI-based tools like Cursor or Claude Code
- Experience building Conversational AI solutions

Benefits and Perks:
- A great compensation package that includes BenchSci equity options
- A robust vacation policy, plus an additional vacation day every year
- Company closures for 14 more days throughout the year
- Flex time for sick days, personal days, and religious holidays
- Comprehensive health and dental benefits
- An annual learning & development budget
- A one-time home office set-up budget to use upon joining BenchSci
- An annual lifestyle spending account allowance
- Generous parental leave benefits with a top-up plan or paid time off options
- The ability to save for your retirement, coupled with a company match
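The posting's mention of monitoring hallucination rates in an LLM extraction pipeline can be sketched in a few lines: parse the model's structured output, then flag extracted entities that never appear in the source document. This is a minimal hypothetical example; the function names, entity schema, and the substring-match heuristic are illustrative assumptions, not BenchSci's actual stack:

```python
import json

def extract_entities(llm_output: str) -> list[dict]:
    """Parse the model's JSON output, discarding malformed or schema-less records."""
    try:
        records = json.loads(llm_output)
    except json.JSONDecodeError:
        return []
    return [r for r in records if isinstance(r, dict) and "name" in r]

def hallucination_rate(records: list[dict], source_text: str) -> float:
    """Fraction of extracted entity names that never appear in the source text."""
    if not records:
        return 0.0
    haystack = source_text.lower()
    missing = [r for r in records if r["name"].lower() not in haystack]
    return len(missing) / len(records)

# Illustrative run: one of the two extracted entities is unsupported by the source.
llm_output = '[{"name": "anti-CD20", "type": "antibody"}, {"name": "IL-6", "type": "cytokine"}]'
source = "The study used an anti-CD20 antibody in murine models."
records = extract_entities(llm_output)
rate = hallucination_rate(records, source)  # 1 of 2 entities unsupported -> 0.5
```

In a production pipeline this check would run per batch and feed a drift dashboard; the substring match stands in for whatever grounding check the real evaluation framework uses.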
About BenchSci:
BenchSci's mission is to exponentially increase the speed and quality of life-saving research and development. We empower scientists to run more successful experiments with the world's most advanced biomedical artificial intelligence software platform.
Backed by Generation Investment Management, TCV, Inovia, F-Prime, Golden Ventures, and Google's AI fund, Gradient Ventures, we provide an indispensable tool for scientists that accelerates research at top pharmaceutical companies and leading academic centers.
Our Culture:
Our culture fosters transparency, collaboration, and continuous learning.
We value each other's differences and always look for opportunities to embed equity into the fabric of our work. We foster diversity, autonomy, and personal growth, and provide resources to support motivated self-leaders in continuous improvement.
You will work with high-impact, highly skilled, and intelligent experts motivated to drive impact and fulfill a meaningful mission. We empower you to unleash your full potential, do your best work, and thrive. Here you will be challenged to stretch yourself to achieve the seemingly impossible.
Diversity, Equity and Inclusion: We're committed to creating an inclusive environment where people from all backgrounds can thrive. We believe that improving diversity, equity and inclusion is our collective responsibility, and this belief guides our DEI journey. Learn more about our DEI initiatives.
Accessibility Accommodations: Should you require any accommodation, we will work with you to meet your needs. Please reach out to talent@benchsci.com.
2026-02-28 15:14
Member of Technical Staff, Inference & RL Systems
Magic
51-100
$225,000 – $550,000
United States
Full-time
Remote
false
Magic’s mission is to build safe AGI that accelerates humanity’s progress on the world’s most important problems. We believe the most promising path to safe AGI lies in automating research and code generation to improve models and solve alignment more reliably than humans can alone. Our approach combines frontier-scale pre-training, domain-specific RL, ultra-long context, and inference-time compute to achieve this goal.

About the role
As a Software Engineer on the Inference & RL Systems team, you will design and operate the distributed systems that serve our models in production and power large-scale post-training workflows. This role sits at the boundary between model execution and distributed infrastructure. You will work on systems that determine inference latency, throughput, stability, and the reliability of RL and post-training loops. Magic’s long-context models introduce demanding execution constraints: KV-cache scaling, memory pressure under long sequences, batching trade-offs, long-horizon trajectory rollouts, and sustained throughput under real-world workloads. You will own the infrastructure that makes both production inference and large-scale RL iteration fast and reliable.

What you’ll work on
- Design and scale high-performance inference serving systems
- Optimize KV-cache management, batching strategies, and scheduling
- Improve throughput and latency for long-context workloads
- Build and maintain distributed RL and post-training infrastructure
- Improve reliability of rollout, evaluation, and reward pipelines
- Automate fault detection and recovery for serving and RL systems
- Profile and eliminate performance bottlenecks across GPU, networking, and storage layers
- Collaborate with Kernels and Research to align execution systems with model architecture

What we’re looking for
- Strong software engineering and distributed systems fundamentals
- Experience building or operating large-scale inference or training systems
- Deep understanding of GPU execution constraints and memory trade-offs
- Experience debugging performance issues in production ML systems
- Ability to reason about system-level trade-offs between latency, throughput, and cost
- A track record of owning critical production infrastructure

Compensation, benefits, and perks (US):
- Annual salary range: $225K – $550K
- Equity is a significant part of total compensation, in addition to salary
- 401(k) plan with 6% salary matching
- Generous health, dental, and vision insurance for you and your dependents
- Unlimited paid time off
- Visa sponsorship and a relocation stipend to bring you to SF, if possible
- A small, fast-paced, highly focused team

Magic strives to be the place where high-potential individuals can do their best work. We value quick learning and grit just as much as skill and experience.

Our culture
- Integrity. Words and actions should be aligned.
- Hands-on. At Magic, everyone is building.
- Teamwork. We move as one team, not N individuals.
- Focus. Safely deploy AGI. Everything else is noise.
- Quality. Magic should feel like magic.
2026-02-28 12:59
Member of Technical Staff, Pre-training Systems
Magic
51-100
$225,000 – $550,000
United States
Full-time
Remote
false
Magic’s mission is to build safe AGI that accelerates humanity’s progress on the world’s most important problems. We believe the most promising path to safe AGI lies in automating research and code generation to improve models and solve alignment more reliably than humans can alone. Our approach combines frontier-scale pre-training, domain-specific RL, ultra-long context, and inference-time compute to achieve this goal.

About the role
As a Software Engineer on the Pre-training Systems team, you will design and operate the distributed infrastructure that trains Magic’s long-context models at scale. This role focuses on large-scale model training across massive GPU clusters. You will work at the boundary between deep learning and distributed systems, ensuring that training runs are performant, reliable, and reproducible under extreme scale. Magic’s long-context models create non-trivial systems challenges: sustained memory pressure, communication overhead across thousands of devices, long-running jobs that must survive failures, and efficient sequence packing under hardware constraints. You will own the systems that make large-scale pre-training stable and fast.

What you’ll work on
- Scale distributed training across large GPU clusters (data, tensor, and pipeline parallelism)
- Optimize communication patterns and gradient synchronization
- Improve checkpointing, fault tolerance, and job recovery systems
- Profile and eliminate performance bottlenecks across compute, networking, and storage
- Improve experiment reproducibility and orchestration workflows
- Increase hardware utilization and training throughput
- Collaborate with Kernels and Research to align model architecture with systems realities

What we’re looking for
- Strong software engineering and distributed systems fundamentals
- Experience training large models in multi-node GPU environments
- Deep understanding of parallelism strategies and performance trade-offs
- Experience debugging cross-layer issues in production ML systems
- A strong ownership mindset and the ability to operate critical infrastructure
- A track record of improving the performance or reliability of large-scale systems

Compensation, benefits, and perks (US):
- Annual salary range: $225K – $550K
- Equity is a significant part of total compensation, in addition to salary
- 401(k) plan with 6% salary matching
- Generous health, dental, and vision insurance for you and your dependents
- Unlimited paid time off
- Visa sponsorship and a relocation stipend to bring you to SF, if possible
- A small, fast-paced, highly focused team

Magic strives to be the place where high-potential individuals can do their best work. We value quick learning and grit just as much as skill and experience.

Our culture
- Integrity. Words and actions should be aligned.
- Hands-on. At Magic, everyone is building.
- Teamwork. We move as one team, not N individuals.
- Focus. Safely deploy AGI. Everything else is noise.
- Quality. Magic should feel like magic.
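The data, tensor, and pipeline parallelism this role mentions must jointly tile the cluster: the product of the three degrees has to equal the GPU count, so fixing two degrees determines the third. A minimal sketch of that bookkeeping (the cluster size and layout numbers are hypothetical):

```python
def data_parallel_degree(world_size: int, tensor_parallel: int,
                         pipeline_parallel: int) -> int:
    """Derive the data-parallel degree from a fixed TP x PP layout.

    Each model replica spans tensor_parallel * pipeline_parallel GPUs;
    the remaining factor is the number of replicas syncing gradients.
    """
    replica_size = tensor_parallel * pipeline_parallel
    if world_size % replica_size != 0:
        raise ValueError("world size must be divisible by tp * pp")
    return world_size // replica_size

# Hypothetical cluster: 1024 GPUs, 8-way tensor parallelism within a node,
# 4-way pipeline parallelism across nodes -> 32 data-parallel replicas.
dp = data_parallel_degree(1024, 8, 4)
```

The same divisibility constraint is why "increase hardware utilization" and "optimize gradient synchronization" travel together: a larger data-parallel degree raises throughput but also widens the all-reduce that every step must survive.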
2026-02-28 12:59
