The AI job market moves fast. We keep up so you don't have to.
Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.
New AI Opportunities
Software Development in Test Intern
Together AI
201-500
$200,000 – $280,000
Full-time
Remote: No
About the Role
The Turbo team sits at the intersection of efficient inference (algorithms, architectures, engines) and post‑training / RL systems. We build and operate the systems behind Together’s API, including high‑performance inference and RL/post‑training engines that can run at production scale.
Our mandate is to push the frontier of efficient inference and RL‑driven training: making models dramatically faster and cheaper to run, while improving their capabilities through RL‑based post‑training (e.g., GRPO‑style objectives). This work lives at the interface of algorithms and systems: asynchronous RL, rollout collection, scheduling, and batching all interact with engine design, creating many knobs to tune across the RL algorithm, training loop, and inference stack. Much of the job is modifying production inference systems—for example, SGLang‑ or vLLM‑style serving stacks and speculative decoding systems such as ATLAS—grounded in a strong understanding of post‑training and inference theory, rather than purely theoretical algorithm design.
You’ll work across the stack—from RL algorithms and training engines to kernels and serving systems—to build and improve frontier models via RL pipelines. People on this team are often spiky: some are more RL‑first, some are more systems‑first. Depth in one of these areas plus appetite to collaborate across (and grow toward more full‑stack ownership over time) is ideal.
Requirements
We don’t expect anyone to check every box below. People on this team typically have deep expertise in one or more areas and enough breadth (or interest) to work effectively across the stack. The closer you are to full‑stack (inference + post‑training/RL + systems), the stronger the fit—but being spiky in one area and eager to grow is absolutely okay.
You might be a good fit if you:
Have strong expertise in at least one of the following, and are excited to collaborate across (and grow into) the others:
Systems‑first profile: Large‑scale inference systems (e.g., SGLang, vLLM, FasterTransformer, TensorRT, custom engines, or similar), GPU performance, distributed serving.
RL‑first profile: RL / post‑training for LLMs or large models (e.g., GRPO, RLHF/RLAIF, DPO‑like methods, reward modeling), and using these to train or fine‑tune real models.
Model architecture design for Transformers or other large neural nets.
Distributed systems / high‑performance computing for ML.
Are comfortable working from algorithms to engines:
Strong coding ability in Python.
Experience profiling and optimizing performance across GPU, networking, and memory layers.
Able to take a new sampling method, scheduler, or RL update and turn it into a production‑grade implementation in the engine and/or training stack.
Have a solid research foundation in your area(s) of depth:
Track record of impactful work in ML systems, RL, or large‑scale model training (papers, open‑source projects, or production systems).
Can read new RL / post‑training papers, understand their implications on the stack, and design minimal, correct changes in the right layer (training engine vs. inference engine vs. data / API).
Operate well as a full‑stack problem solver:
You naturally ask: “Where in the stack is this really bottlenecked?”
You enjoy collaborating with infra, research, and product teams, and you care about both scientific quality and user‑visible wins.
Minimum qualifications
3+ years of experience working on ML systems, large‑scale model training, inference, or adjacent areas (or equivalent experience via research / open source).
Advanced degree in Computer Science, EE, or a related field, or equivalent practical experience.
Demonstrated experience owning complex technical projects end‑to‑end.
If you’re excited about the role and strong in some of these areas, we encourage you to apply even if you don’t meet every single requirement.
Responsibilities
Advance inference efficiency end‑to‑end
Design and prototype algorithms, architectures, and scheduling strategies for low‑latency, high‑throughput inference.
Implement and maintain changes in high‑performance inference engines (e.g., SGLang‑ or vLLM‑style systems and Together’s inference stack), including kernel backends, speculative decoding (e.g., ATLAS), quantization, etc.
Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost.
Unify inference with RL / post‑training
Design and operate RL and post‑training pipelines (e.g., RLHF, RLAIF, GRPO, DPO‑style methods, reward modeling) where 90+% of the cost is inference, jointly optimizing algorithms and systems.
Make RL and post‑training workloads more efficient with inference‑aware training loops—for example, async RL rollouts, speculative decoding, and other techniques that make large‑scale rollout collection and evaluation cheaper.
Use these pipelines to train, evaluate, and iterate on frontier models on top of our inference stack.
Co‑design algorithms and infrastructure so that objectives, rollout collection, and evaluation are tightly coupled to efficient inference, and quickly identify bottlenecks across the training engine, inference engine, data pipeline, and user‑facing layers.
Run ablations and scale‑up experiments to understand trade‑offs between model quality, latency, throughput, and cost, and feed these insights back into model, RL, and system design.
Own critical systems at production scale
Profile, debug, and optimize inference and post‑training services under real production workloads.
Drive roadmap items that require real engine modification—changing kernels, memory layouts, scheduling logic, and APIs as needed.
Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously.
Provide technical leadership (Staff level)
Set technical direction for cross‑team efforts at the intersection of inference, RL, and post‑training.
Mentor other engineers and researchers on full‑stack ML systems work and performance engineering.
About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.
Compensation
We offer competitive compensation, startup equity, health insurance, and other benefits. The US base salary range for this full-time position is $200,000 – $280,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.
Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
Please see our privacy policy at https://www.together.ai/privacy
2026-03-07 12:44
AI Tooling Frontend Engineer - Helix Team
Figure AI
201-500
$150,000 – $250,000
United States
Full-time
Remote: No
Figure is an AI Robotics company developing a general purpose humanoid. Our humanoid robot is designed for commercial tasks and the home. We are based in San Jose, CA and require 5 days/week in-office collaboration. It’s time to build.
Figure’s vision is to deploy autonomous humanoids at a global scale. Our Helix team is seeking an experienced Frontend Engineer to enhance our internal, web-based data and AI training tools. This role focuses on developing intuitive web interfaces that support key AI research functions, including robot data annotation, training dataset visualization, and experiment tracking. The ideal candidate has experience building rich, interactive web interfaces using React and TypeScript.
Responsibilities
Design and build intuitive web interfaces for robot data annotation, dataset visualization, and experiment tracking
Utilize data-driven techniques to optimize interfaces for efficiency and fast iteration cycles
Integrate AI models to automate manual tasks
Work together with AI researchers, robot operators, and annotators to support new user experiences
Requirements
Strong software engineering fundamentals
Bachelor's or Master's degree in Computer Science, Robotics, Engineering, or a related field
Minimum of 4 years of professional, full-time experience building rich, interactive web interfaces
Proficiency in React and TypeScript
Bonus Qualifications
Experience using data stores (Postgres, MySQL, ElasticSearch, Redis, etc.)
Experience managing cloud infrastructure (AWS, Azure, GCP)
Experience with Tailwind CSS
Experience building data annotation and dataset management tools.
The US base salary range for this full-time position is between $150,000 - $250,000 annually.
The pay offered for this position may vary based on several individual factors, including job-related knowledge, skills, and experience. The total compensation package may also include additional components/benefits depending on the specific role. This information will be shared if an employment offer is extended.
2026-03-07 10:14
Senior Product Manager – Data & Quality
Snorkel AI
501-1000
$172,000 – $300,000
United States
Full-time
Remote: No
About Snorkel
At Snorkel, we believe meaningful AI doesn’t start with the model, it starts with the data.
We’re on a mission to help enterprises transform expert knowledge into specialized AI at scale. The AI landscape has gone through incredible changes between 2015, when Snorkel started as a research project in the Stanford AI Lab, and the generative AI breakthroughs of today. But one thing has remained constant: the data you use to build AI is the key to achieving differentiation, high performance, and production-ready systems. We work with some of the world’s largest organizations to empower scientists, engineers, financial experts, product creators, journalists, and more to build custom AI with their data faster than ever before. Excited to help us redefine how AI is built? Apply to be the newest Snorkeler!
About the Role
Snorkel AI is hiring Frontier AI Solutions Engineers who will partner with leading AI labs on their most challenging data problems. This is a high-impact, customer-facing role that combines technical depth with strong presales instincts. You'll partner with customer research teams to design complex data and environments that improve frontier model performance, demonstrating Snorkel's capabilities through research-driven engagements.
You'll work at the critical intersection of research, technical strategy, and customer partnership. This includes scoping training data needs, designing RL environments and tasks, developing evaluation frameworks, probing model behavior and failure modes, and translating customer research objectives into actionable technical plans. You'll develop technical specifications, analyze frontier model failure modes, and serve as a thought partner to customer research teams throughout the sales cycle and into early delivery phases.
Main Responsibilities
Partner with frontier AI research labs to design datasets and environments that improve model performance
Lead technical conversations with customer researchers to understand model capabilities, failure modes, data requirements, and success criteria
Probe model behavior through systematic evaluation to uncover weaknesses and identify high-impact data interventions
Design evaluation frameworks, calibration processes, and quality rubrics that establish measurable project success metrics
Develop technical specifications for data projects that balance research rigor with operational feasibility
Serve as thought partner to customer research teams throughout the sales cycle, building trust and credibility
Stay current on frontier AI research, RL environment design, post-training techniques, and evaluation methodologies
Preferred Qualifications
Strong expertise in frontier AI concepts including LLMs, training data pipelines, evaluation methodologies, post-training techniques (RLHF, DPO, RLAIF), and domain areas such as coding agents, reasoning, multimodal models, or RL environments
Experience in applied ML research, data science, or research-intensive technical roles with customer-facing or collaborative research experience
Proficiency in Python and familiarity with ML frameworks and LLM APIs
Excellent communication skills — ability to deliver technical presentations and explain complex concepts to diverse audiences
Familiarity with data curation workflows, synthetic data generation, LLM-as-a-Judge, or evaluation framework design
Ability to work in a fast-moving environment, comfortable with ambiguity and rapid iteration
B.S. in Computer Science, Machine Learning, or related field with 4+ years of experience in AI/ML solutions engineering or technical customer-facing roles
Compensation range for Tier 1 locations of San Francisco Bay Area and New York City, $172K - $300K OTE. All offers also include equity in the form of employee stock options. Our compensation ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training.
Why Join Snorkel AI?
At Snorkel AI, we're building the future of data-centric AI. Our Expert Data-as-a-Service organization partners with world-class customers to solve some of the hardest data challenges — creating training and evaluation data that power the next generation of LLMs and AI systems. You'll work directly on projects that impact real production systems, while shaping how internal teams deliver faster, better, and more intelligently. This is a rare opportunity to own technical data workflows and be a founding member of the technical DaaS team.
Salary Range: $172,000 – $300,000 USD
Be Your Best at Snorkel
Joining Snorkel AI means becoming part of a company that has market proven solutions, robust funding, and is scaling rapidly—offering a unique combination of stability and the excitement of high growth. As a member of our team, you’ll have meaningful opportunities to shape priorities and initiatives, influence key strategic decisions, and directly impact our ongoing success. Whether you’re looking to deepen your technical expertise, explore leadership opportunities, or learn new skills across multiple functions, you’re fully supported in building your career in an environment designed for growth, learning, and shared success.
Snorkel AI is proud to be an Equal Employment Opportunity employer and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. Snorkel AI embraces diversity and provides equal employment opportunities to all employees and applicants for employment. Snorkel AI prohibits discrimination and harassment of any type on the basis of race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local law. All employment is decided on the basis of qualifications, performance, merit, and business need.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
2026-03-07 7:44
Software Engineer, Internal Tools
xAI
5000+
$45 – $100 / hour
United States
Full-time
Remote: No
About xAI
xAI’s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity. We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company’s mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important. All employees are expected to have strong communication skills. They should be able to concisely and accurately share knowledge with their teammates.
About the Role
As an Accounting Expert, you will be instrumental in enhancing the capabilities of our cutting-edge technologies by providing high-quality input and labels using specialized software. Your role involves collaborating closely with our technical team to support the training of new AI tasks, ensuring the implementation of innovative initiatives. You'll contribute to refining annotation tools and selecting complex problems from corporate accounting domains, with a focus on financial reporting, consolidation, internal controls, and GAAP compliance where your expertise can drive significant improvements in model performance. This position demands a dynamic approach to learning and adapting in a fast-paced environment, where your ability to interpret and execute tasks based on evolving instructions is crucial.
AI Tutor’s Role in Advancing xAI’s Mission
As an AI Tutor, you will play an essential role in advancing xAI's mission by supporting the training and refinement of xAI’s AI models. AI Tutors teach our AI models about how people interact and react, as well as how people approach issues and discussions in corporate accounting. To accomplish this, AI Tutors will actively participate in gathering or providing data, such as text, voice, and video data, sometimes providing annotations, recording audio, or participating in video sessions. We seek individuals who are comfortable and eager to engage in these activities as a fundamental part of the role, ensuring a strong alignment with xAI’s goals and objectives to innovate.
Scope
An AI Tutor will provide services that include labeling and annotating data in text, voice, and video formats to support AI model training. At times, this may involve recording audio or video sessions, and tutors are expected to be comfortable with these tasks as they are fundamental to the role. Such data is a job requirement to advance xAI’s mission, and AI Tutors acknowledge that all work is done for hire and owned by xAI.
Responsibilities
Use proprietary software applications to provide input/labels on defined projects.
Support and ensure the delivery of high-quality curated data.
Play a pivotal role in supporting and contributing to the training of new tasks, working closely with the technical staff to ensure the successful development and implementation of cutting-edge initiatives/technologies.
Interact with the technical staff to help improve the design of efficient annotation tools.
Choose problems from corporate accounting fields that align with your expertise, providing rigorous solutions and model critiques where you can confidently provide detailed solutions and evaluate model responses.
Regularly interpret, analyze, and execute tasks based on given instructions.
Key Qualifications
Must have 3+ years of Big 4 public accounting experience (audit/assurance) on corporate or SEC clients, or an equivalent senior corporate accounting role (e.g., Controller, Assistant Controller, or Technical Accounting Manager at a public company or large private enterprise with complex GAAP reporting).
Must possess a Master's or PhD in Accounting (corporate focus), or equivalent expertise as a licensed CPA.
Proficiency in reading and writing, both in informal and professional English.
Strong ability to navigate various corporate accounting information resources, databases, and online resources (e.g., FASB codification, SEC EDGAR, 10-K/10-Q filings, ERP systems).
Outstanding communication, interpersonal, analytical, and organizational capabilities.
Solid reading comprehension skills combined with the capacity to exercise autonomous judgment even when presented with limited data/material.
Strong passion for and commitment to technological advancements and innovation in corporate accounting.
Preferred Qualifications
5+ years at a Big 4 firm or in a senior corporate controllership role, with direct involvement in SEC reporting, SOX 404, or complex consolidations.
Experience drafting or reviewing 10-K/10-Q footnotes, MD&A, or technical accounting memos.
At least one publication in a reputable accounting journal or outlet.
Teaching experience as a professor.
Location & Other Expectations
This position is based in Palo Alto, CA, or fully remote.
The Palo Alto option is an in-office role requiring 5 days per week; remote positions require strong self-motivation.
If you are based in the US, please note we are unable to hire in the states of Wyoming and Illinois at this time.
We are unable to provide visa sponsorship.
Team members are expected to work from 9:00am - 5:30pm PST for the first two weeks of training and 9:00am - 5:30pm in their own timezone thereafter.
For those who will be working from a personal device, please note your computer must be a Chromebook, Mac with MacOS 11.0 or later, or Windows 10 or later.
Compensation
$45/hour - $100/hour
The posted pay range is intended for U.S.-based candidates and depends on factors including relevant experience, skills, education, geographic location, and qualifications. For international candidates, our recruiting team can provide an estimated pay range for your location.
Benefits:
Hourly pay is just one part of our total rewards package at xAI. Specific benefits vary by country; depending on your country of residence, you may have access to medical benefits. We do not offer benefits for part-time roles.
xAI is an equal opportunity employer. For details on data processing, view our Recruitment Privacy Notice.
2026-03-07 7:29
Data Engineer - Foundational
Harmattan AI
51-100
France
Full-time
Remote: No
About Us
Harmattan AI is a next-generation defense prime building autonomous and scalable defense systems. Following the close of a $200M Series B, valuing the company at $1.4 billion, we are expanding our teams and capabilities to deliver mission-critical systems to allied forces.
Our work is guided by clear values: building technologies with real-world impact, pursuing excellence in everything we do, setting ambitious goals, and taking on the hardest technical challenges. We operate in a demanding environment where rigor, ownership, and execution are expected.
About the Role
As a Data Engineer on the Foundational team, you will serve as the "plumber" for deep learning, building the massive, high-performance data infrastructure required to power our foundational models. Based in Paris, you will manage terabytes—and eventually petabytes—of raw, unstructured, and noisy video data (EO and IR). Your mission is to ensure our ML engineers spend their time designing architectures, not waiting for data loaders or wrangling corrupted files.
Responsibilities
Multi-Modal Ingestion Pipeline: Build ETL/ELT pipelines to extract, decode, and store raw Electro-Optical (EO) and Infrared (IR) video from field logs into highly optimised formats like WebDataset, TFRecords, or Parquet.
Sensor Synchronisation & Alignment: Develop algorithms to programmatically synchronise EO and IR frames temporally and spatially to provide paired inputs for model training.
High-Throughput Data Loading: Architect storage-to-GPU pipelines to ensure multi-node training clusters maintain >90% GPU utilisation without I/O bottlenecks.
Distributed Processing: Write and optimise distributed data processing jobs using tools like Apache Spark, Ray, or Apache Beam to process thousands of hours of tactical video logs.
Data Quality & Versioning: Implement automated quality checks to filter corrupted or blank frames and maintain 100% reproducible training runs through robust versioning and lineage tracking.
Infrastructure Evaluation: Assess and implement advanced storage solutions (e.g., MinIO, S3 tiering) to manage growing datasets while optimising for cost and latency.
Candidate Requirements
Educational Background: A BS or MS in Computer Science, Software Engineering, or Distributed Systems is highly preferred. Deep knowledge of operating systems, networking, and parallel computing is essential.
Technical Experience: 5-6+ years of experience building and maintaining terabyte-scale pipelines for unstructured data (video, images, or point clouds).
Performance Optimisation: Proven track record of maximising multi-node GPU utilisation and optimising data loaders for frameworks like PyTorch or JAX.
Tooling Expertise: Strong command of distributed computing tools (Spark, Ray, Beam) and ML data versioning tools (DVC, Apache Iceberg, or Pachyderm).
Adaptability & Ownership: A systems-thinker who thrives in a fast-paced startup environment and views messy data as an engineering problem to be solved via automation.
Commitment: 100% dedication to Harmattan AI’s mission of providing a defensive edge to allied nations through ethical, high-impact technology.
We look forward to hearing how you can help shape the future of autonomous defense systems at Harmattan AI.
2026-03-07 6:14
Computer Vision Engineer
Harmattan AI
51-100
Switzerland
Full-time
Remote: No
About Us
Harmattan AI is a next-generation defense prime building autonomous and scalable defense systems. Following the close of a $200M Series B, valuing the company at $1.4 billion, we are expanding our teams and capabilities to deliver mission-critical systems to allied forces.
Our work is guided by clear values: building technologies with real-world impact, pursuing excellence in everything we do, setting ambitious goals, and taking on the hardest technical challenges. We operate in a demanding environment where rigor, ownership, and execution are expected.
About the Role
We are looking for a Computer Vision Engineer to join our Machine Learning and Computer Vision team. This role is crucial for developing core technical components across various robotics/aerospace projects.
Responsibilities
Research & Data Preparation: Conduct research on state-of-the-art Computer Vision methodologies. Participate in the creation and curation of training and validation datasets. Perform statistical analyses and develop visualization tools to ensure data quality.
Algorithm Development & Optimization: Build and refine training pipelines and metrics to enhance model performance. Develop and optimize Computer Vision algorithms for multiple robotics/aerospace projects.
Deployment & Integration: Implement ML/CV models into production-ready environments. Ensure seamless integration with Harmattan AI’s systems and conduct rigorous code reviews.
Validation & Monitoring: Test algorithms in real-world environments and develop monitoring tools. Track model performance and continuously improve deployed solutions.
Cross-Team Collaboration: Work closely with software and simulation teams to align development with system requirements. Communicate findings effectively to stakeholders.
Candidate Requirements
Educational Background: A degree from a top-tier engineering school or university (Master’s degree in Computer Science or a related field; a PhD is a plus).
Technical Expertise: Strong mathematical foundations, coding skills (Python; C++ is a plus), and hands-on ML/CV project experience. Experience at top AI companies is a huge plus.
Passion for ML: Enthusiasm for Machine Learning and Computer Vision.
Strong Communication & Teamwork: Ability to collaborate effectively with diverse teams.
Commitment: 100% dedication to Harmattan AI’s mission, vision, and ambitious growth plans, ready to go the extra mile to ensure operational excellence.
We look forward to hearing how you can help shape the future of autonomous defense systems at Harmattan AI.
2026-03-07 6:14
Software Engineering Manager, Autonomous
Magical
51-100
Canada
Full-time
Remote: No
About Magical
Magical is an agentic automation platform bringing state-of-the-art AI to healthcare, delivering AI agents that actually work in production.
We're building "AI employees" that automate the repetitive, time-consuming workflows that slow teams down. Our focus is healthcare – a $4 trillion industry buried in administrative complexity – where we automate claims processing, prior authorizations, and eligibility checks, enabling providers to focus on patient care.
Our Traction
The shift to agentic automation in healthcare is inevitable, and we're leading it:
Dramatic acceleration in revenue growth, with customers expanding into new workflows before renewal
7-day proof-of-concepts that demonstrate real value fast, in an industry where months is the norm
Self-healing automations with production-grade reliability at scale, where most competitors fail to launch
Unlike many AI companies making bold promises, we ship reliable solutions that deliver measurable results. We're backed by Greylock, Coatue, and Lightspeed with $41M raised. Our founder, Harpaul Sambhi, is a second-time founder who successfully sold his first company to LinkedIn.
About the Role
As our Engineering Manager on the Autonomous team, you will lead and scale a high-calibre team of engineers dedicated to defining the future of AI agent development, pushing the boundaries of AI and backend systems.
You are deeply passionate about the craft of management and find genuine fulfillment in helping engineers grow their careers. You bring the technical credibility required to navigate complex architectural discussions and translate deep technical challenges into clear business strategies. In this role, you will serve as the essential bridge between product vision and technical execution.
This is a hybrid role with 2 days per week in our Toronto office.
In this role, you will
Oversee the technical roadmap for the Autonomous team, translating architectural complexity into clear product strategies
Mentor a diverse group of engineers, ranging from product-focused builders to specialized Staff Engineers, and actively support their professional growth
Partner closely with Product and Design to ensure our agent-building tools remain intuitive while supporting deep technical capabilities
Champion a "show > tell" culture by ensuring the team ships rapidly and maintains a high bar for both technical stability and user experience
Clear technical and operational roadblocks to ensure the team operates with high agency and clarity
Your background looks something like this
A proven track record of leading and scaling engineering teams in fast-paced, high-growth environments
The technical background necessary to critically evaluate complex trade-offs and provide strategic direction on complex system designs
Experience navigating the balance between long-term technical health and the immediate needs of a rapidly evolving product
A servant-leadership philosophy, with a primary focus on the success of the team and individual growth
A high degree of agency: you thrive in ambiguity and proactively improve processes or solve bottlenecks without much outside input
Strong business acumen and a genuine interest in how technical decisions impact the customer and the company's success
Even better
Prior experience building AI-powered products or developer tools
A sharp eye for design and product quality
Experience with real-time interfaces, data visualization, or collaborative editing
Understanding of agent systems, LLMs, or evaluation frameworks
Track record of building products that balance power and simplicity
We're building the best self-serve agentic automation platform for the healthcare industry and we're just getting started. Come join us.
2026-03-07 3:59
Software Engineering Manager, Autonomous
Magical
51-100
United States
Full-time
Remote
false
About Magical

Magical is an agentic automation platform bringing state-of-the-art AI to healthcare, delivering AI agents that actually work in production. We're building "AI employees" that automate the repetitive, time-consuming workflows that slow teams down. Our focus is healthcare – a $4 trillion industry buried in administrative complexity – where we automate claims processing, prior authorizations, and eligibility checks, enabling providers to focus on patient care.

Our Traction

The shift to agentic automation in healthcare is inevitable, and we're leading it:
- Dramatic acceleration in revenue growth, with customers expanding into new workflows before renewal
- 7-day proof-of-concepts that demonstrate real value fast, in an industry where months is the norm
- Self-healing automations with production-grade reliability at scale, where most competitors fail to launch

Unlike many AI companies making bold promises, we ship reliable solutions that deliver measurable results. We're backed by Greylock, Coatue, and Lightspeed with $41M raised. Our founder, Harpaul Sambhi, is a second-time founder who successfully sold his first company to LinkedIn.

About the Role

As our Engineering Manager on the Autonomous team, you will lead and scale a high-calibre team of engineers dedicated to defining the future of AI agent development, pushing the boundaries of AI and backend systems. You are deeply passionate about the craft of management and find genuine fulfillment in helping engineers grow their careers. You bring the technical credibility required to navigate complex architectural discussions and translate deep technical challenges into clear business strategies.
In this role, you will serve as the essential bridge between product vision and technical execution. This is a hybrid role with 2 days per week in our San Francisco office.

In this role, you will
- Oversee the technical roadmap for the Autonomous team, translating architectural complexity into clear product strategies
- Mentor a diverse group of engineers, ranging from product-focused builders to specialized Staff Engineers, and actively support their professional growth
- Partner closely with Product and Design to ensure our agent-building tools remain intuitive while supporting deep technical capabilities
- Champion a "show > tell" culture by ensuring the team ships rapidly and maintains a high bar for both technical stability and user experience
- Clear technical and operational roadblocks to ensure the team operates with high agency and clarity

Your background looks something like this
- Have a proven track record of leading and scaling engineering teams in fast-paced, high-growth environments
- Possess the technical background necessary to critically evaluate trade-offs and provide strategic direction on complex system designs
- Experience navigating the balance between long-term technical health and the immediate needs of a rapidly evolving product
- Embody a servant-leadership philosophy, with a primary focus on the success of the team and individual growth
- High degree of agency: you thrive in ambiguity and proactively improve processes or solve bottlenecks without much outside input
- Strong business acumen and a genuine interest in how technical decisions impact the customer and the company's success

Even better
- Prior experience building AI-powered products or developer tools
- A sharp eye for design and product quality
- Experience with real-time interfaces, data visualization, or collaborative editing
- Understanding of agent systems, LLMs, or evaluation frameworks
- Track record of building products that balance power and simplicity

We're building the best self-serve agentic automation platform for the healthcare industry and we're just getting started. Come join us.
2026-03-07 3:59
Senior Product Designer, Mobile
Grammarly
1001-5000
$103,000 – $128,000
United States
Canada
Mexico
Full-time
Remote
false
SUPERHUMAN MAIL 👉
We exist so that professionals end each day feeling happier, more productive, and closer to achieving their potential.
Our customers get through their inboxes twice as fast; many see inbox zero for the first time in years.
Today we are…
The fastest email experience in the world
Loved and adored: see what our customers say
📣 We've joined forces with Grammarly to build the AI-native productivity suite of the future, with Superhuman as the central communication layer. This partnership accelerates our mission to help professionals achieve their potential — now at even greater scale.
Come shape the future of email, communication, and productivity!
BUILD LOVE 💜
At Superhuman, we deeply understand how to build products that people love. We incorporate fun and play; we infuse magic and joy; we make experiences that amaze and delight.
It all starts with the right team — a team that deeply cares about values, customers, and each other.
CREATE MASSIVE IMPACT 🚀
We're not solving a small problem, and we're not addressing a small market. We're going after email: the one activity that consumes more of our workday than any other.
Our ambition doesn't stop there. Next: calendars, notes, contacts, and team communication. We are building the productivity platform of the future.
DO THE BEST WORK OF YOUR LIFE 🌟
We have created the frameworks for how to build product market fit and redefined the narrative of how to onboard customers successfully. We have shown the world it’s possible to build a premium productivity brand. Our investors include Andreessen Horowitz, First Round Capital, IVP, Tiger Global Management, Sam Altman, and the founders of Gmail, Dropbox, Reddit, Discord, Stripe, GitHub, AngelList, and Intercom.
This time, we’re swinging beyond the fences and fundamentally rethinking how individuals and teams should collaborate. We are building a household brand and a worldwide organization. We are here to do the best work of our lives, and we hope you are too.
ROLE 👩🏽‍💻👨‍💻
Own the observability and lifecycle management of AI features across the organization
Build tools and infrastructure to enable teams to develop, monitor, and optimize LLM-powered features
Design and implement closed-loop evaluation pipelines that automatically validate prompt changes
Develop comprehensive metrics and dashboards to track LLM usage: cost per feature, token patterns, and latency.
Create systems that tie user feedback to specific prompts and LLM calls
Establish best practices and processes for the full lifecycle of prompts: development, testing, deployment, and monitoring
Collaborate with engineering teams across the org to ensure they have the tools and visibility needed to build high-quality AI features
Technologies we use: Go, Postgres, Kubernetes, Google Cloud, various LLM providers (OpenAI, Anthropic, Google Vertex)
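The responsibilities above (cost per feature, token patterns, latency percentiles) boil down to recording a few numbers per LLM call and aggregating them. A minimal sketch of that bookkeeping might look like the following; this is a toy illustration, not Superhuman's actual stack, and every name and price in it is invented for the example:

```python
# Hypothetical per-call LLM usage tracker: records latency, tokens, and cost
# per feature so they can feed P90/P95 dashboards and cost-per-feature reports.
from dataclasses import dataclass, field

@dataclass
class LLMMetrics:
    # Each sample: (feature, latency_s, tokens, cost_usd)
    samples: list = field(default_factory=list)

    def record(self, feature: str, latency_s: float, tokens: int,
               usd_per_1k_tokens: float) -> None:
        cost = tokens / 1000 * usd_per_1k_tokens
        self.samples.append((feature, latency_s, tokens, cost))

    def latency_percentile(self, p: float) -> float:
        # Nearest-rank percentile over all recorded latencies.
        latencies = sorted(s[1] for s in self.samples)
        idx = max(0, int(round(p / 100 * len(latencies))) - 1)
        return latencies[idx]

    def cost_per_feature(self) -> dict:
        # Sum cost by feature name.
        totals: dict = {}
        for feature, _, _, cost in self.samples:
            totals[feature] = totals.get(feature, 0.0) + cost
        return totals
```

In practice a team in this stack would likely emit these as metrics from Go services rather than aggregate them in-process; the arithmetic is the same.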
SOUND LIKE YOU? 🙌
Experience: You have 4+ years of software development experience with a focus on backend engineering, DevOps, ML Ops, or SRE work. You're proficient in at least one back-end programming language (ideally Go). You have hands-on experience with observability, metrics, and monitoring systems.
AI Enthusiast: You believe AI will revolutionize how we work as well as the experiences that we create for our customers. Driven by passion and curiosity, you leverage AI to dramatically increase your own productivity and the impact of your team.
Metrics-Driven: You understand percentiles (P90, P95), know how to build meaningful dashboards, and can turn raw data into actionable insights. You've worked with monitoring and observability tools.
Systems Thinker: You think in terms of pipelines, lifecycles, and closed loops. You know how to build scalable infrastructure that enables other teams to move faster.
Remarkable Quality: You produce work that is striking, worthy of attention, and a contribution to the state of the art.
Asynchronous Communicator: You’re effective across various mediums (especially Slack, Notion, and email) and can produce and consume detailed written materials as needed without sacrificing speed. You respond quickly and thoughtfully to unblock others and speed things up.
Start-to-Finish Ownership: You act with 100% responsibility for your own outcomes as well as the outcomes of the company.
Bias to Action: Speed matters. You take rapid and decisive steps forward, even in the face of uncertainty, recognizing that action is the catalyst for progress and growth.
Growth Mindset: You embrace challenges, welcome feedback, and believe you and others can always grow.
SALARY INFO 💸
Superhuman takes a market-based approach to compensation, which means base pay may vary depending on your location. Base pay may vary considerably depending on job-related knowledge, skills, and experience. The expected salary ranges for this position are outlined below by location and may be modified in the future.
Canada: $128,000-$174,000 CAD
Mexico: $1,928,000-$2,405,000 MXN
Brazil: R$562,000-R$702,000 BRL
Argentina: $103,000-$128,000 USD
The salary ranges do not reflect total compensation, which includes base salary, benefits, and company equity. This range is intentionally broad because we are open to considering candidates at multiple levels of seniority within engineering. The exact salary offered will depend on the candidate’s skills, experience, and the level at which they join our team.
BENEFITS 🎁
Superhuman offers all team members competitive pay along with a benefits package encompassing the following and more:
Excellent health care (including a wide range of medical, dental, vision, mental health, and fertility benefits)
Disability and life insurance options
401(k) and RRSP matching (US & Canada only)
Paid parental leave
20 days of paid time off per year, 12 days of paid holidays per year (17 days for LatAm), two floating holidays per year, and flexible sick time
Generous stipends (including those for caregiving, pet care, wellness, your home office, and more)
Annual professional development budget and opportunities
COME JOIN US 🎟️
We value our differences, and we encourage all to apply — especially those whose identities are traditionally underrepresented in tech organizations. We do not discriminate on the basis of race, religion, color, gender expression or identity, sexual orientation, ancestry, national origin, citizenship, age, marital status, veteran status, disability status, political belief, or any other characteristic protected by law. Superhuman is an equal opportunity employer and a participant in the US federal E-Verify program (US). We also abide by the Employment Equity Act (Canada).
#LI-Remote
2026-03-06 18:59
Lead AI/ML Engineer
ASAPP
201-500
$170,000 – $190,000
United States
Full-time
Remote
false
At ASAPP, our mission is simple: deliver the best AI-powered customer experience—faster than anyone else. To achieve that, we’re guided by principles that shape how we think, build, and execute. We value customer obsession, purposeful speed, ownership, and a relentless focus on outcomes. ASAPP’s AI Engineering team is seeking an enterprising, talented and curious machine learning engineer.
We are seeking a highly experienced Lead AI/ML Engineer to join our Core GenerativeAgent team. You will play a pivotal role in designing, building, and deploying cutting-edge AI systems that power mission-critical enterprise applications. This role is ideal for an individual who thrives in ambiguity, is deeply technical, and has a strong product sense paired with deep expertise in foundational models and enterprise AI systems.
You will lead the design and delivery of end-to-end voice AI solutions, combining large language models with speech technologies such as speech-to-text, text-to-speech, and real-time streaming audio pipelines. This role requires a hands-on technical leader who can architect low-latency, highly reliable conversational voice systems and guide a team through ambiguity toward production excellence.
We are looking for someone who understands the unique constraints of voice experiences, latency, turn-taking, interruption handling, streaming inference, and audio quality, and can translate these into scalable, enterprise-grade systems.
This is a hybrid role with weekly in-person responsibilities. We have offices in New York City and Mountain View, CA.

What you'll do
- Lead the design and implementation of scalable ML/AI systems, with a focus on large language models, vector databases, and retrieval-based architectures
- Integrate and apply foundation models from major providers (OpenAI, AWS Bedrock, Anthropic, etc.) for prototyping and production use cases
- Adapt, evaluate, and optimize LLMs for domain-specific enterprise applications
- Build and maintain infrastructure for experimentation, deployment, and monitoring of AI models in production
- Improve model performance and inference workflows with attention to latency, cost, and reliability
- Provide technical leadership within the team, mentoring engineers and promoting best practices in ML engineering
- Partner with product and cross-functional stakeholders to translate requirements into scalable ML solutions
- Contribute to the evolution of internal standards for experimentation, evaluation, and deployment

What you'll need
- 6+ years of experience in Machine Learning or AI systems, with hands-on experience in LLMs, speech, or conversational AI systems
- Strong proficiency in Python and ML frameworks such as PyTorch or TensorFlow
- Proven experience leading complex, cross-functional AI initiatives
- Experience building or integrating speech-to-text and text-to-speech systems
- Deep understanding of latency-sensitive system design and distributed architectures
- Strong experience integrating foundational models into production applications
- Understanding of RAG pipelines, prompt engineering, and vector search
- Experience deploying and scaling AI systems using AWS (required), Docker, Kubernetes, and CI/CD practices
- Strong communication skills with the ability to align engineering, product, and executive stakeholders
- Comfortable operating in fast-paced environments and driving clarity in ambiguous problem spaces

What we'd like to see
- Experience with speech model fine-tuning or acoustic/language model optimization
- Hands-on experience with real-time or streaming audio systems (WebRTC, gRPC streaming, or similar architectures)
- Experience optimizing TTS prosody, pronunciation control, and voice customization
- Background in MLOps, experimentation platforms, or evaluation frameworks for speech and conversational systems
- Contributions to open-source AI or speech tooling
- Graduate degree (MS or PhD) in Computer Science, Machine Learning, Speech Processing, or a related field
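For readers less familiar with the vector-search step of the RAG pipelines named in the requirements, a minimal sketch might look like the following. The toy 2-dimensional embeddings stand in for real embedding-model output, and all names here are invented for illustration:

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec: list, doc_vecs: dict, k: int = 2) -> list:
    # Rank document ids by similarity of their embedding to the query
    # embedding; a production system would use an approximate index instead.
    ranked = sorted(
        doc_vecs,
        key=lambda doc_id: cosine_similarity(query_vec, doc_vecs[doc_id]),
        reverse=True,
    )
    return ranked[:k]
```

The retrieved ids would then be used to fetch passages that are inserted into the model prompt, which is the "retrieval-based architecture" half of the job.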
$170,000 – $190,000 a year. The compensation package also includes a performance bonus on top of the listed salary range.
Separately, we also offer a compelling equity grant comprising stock options.

Benefits include:
- Competitive compensation with stock options
- Comprehensive medical, vision, and dental insurance
- 401k matching
- Fitness and wellness stipend
- Mental well-being benefits
- Professional learning and development stipend
- Parental leave, including adoptive and foster parents
- 3 weeks paid time off (increases with tenure), along with sick leave, bereavement, and jury duty leave
ASAPP is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, disability, age, or veteran status. If you have a disability and need assistance with our employment application process, please email us at careers@asapp.com to obtain assistance. #LI-AG1 #LI-Hybrid
2026-03-06 14:29
Software Engineer, Inference Platform
Fluidstack
51-100
$165,000 – $500,000
United States
Full-time
Remote
false
About Fluidstack

At Fluidstack, we’re building the infrastructure for abundant intelligence. We partner with top AI labs, governments, and enterprises - including Mistral, Poolside, Black Forest Labs, Meta, and more - to unlock compute at the speed of light. We’re working with urgency to make AGI a reality. As such, our team is highly motivated and committed to delivering world-class infrastructure. We treat our customers’ outcomes as our own, taking pride in the systems we build and the trust we earn. If you’re motivated by purpose, obsessed with excellence, and ready to work very hard to accelerate the future of intelligence, join us in building what's next.

About the Role

Inference is now the defining cost and latency bottleneck for frontier AI. Fluidstack’s Inference Platform team owns the serving layer that sits between our global accelerator supply and the production workloads our customers run on it: LLM serving frameworks, KV cache infrastructure, disaggregated prefill/decode pipelines, and Kubernetes-based orchestration across multi-datacenter footprints. This is a hands-on IC role at the intersection of distributed systems, model optimization, and serving infrastructure. You’ll own end-to-end inference deployments for frontier AI labs and our inference product, drive measurable improvements in throughput, cost-per-token, and time-to-first-token, and contribute to the platform architecture choices that determine how Fluidstack deploys across tens of thousands of accelerators.
You will:
- Own inference deployments end-to-end: from initial configuration and performance tuning to production SLA maintenance and incident response.
- Drive measurable improvements in throughput, TTFT, and cost-per-token across diverse model families (dense transformers, mixture-of-experts, multi-modal) and customer workload patterns.
- Build and operate KV cache and scheduling infrastructure to maximize utilization across concurrent requests.
- Implement and validate disaggregated prefill/decode pipelines and the Kubernetes orchestration that supports them at scale.
- Profile and resolve bottlenecks at the compute, memory, and communication layers; instrument deployments for end-to-end observability.
- Partner with customers to translate their model architectures, access patterns, and latency requirements into deployment configurations and upstream platform improvements.
- Contribute to inference platform architecture and roadmap, with a focus on reducing deployment complexity, improving hardware utilization, and expanding support for new model classes and accelerators.
- Participate in an on-call rotation (up to one week per month) to maintain the reliability and SLA commitments of production deployments.
Basic Qualifications
- 5+ years of professional software engineering experience with a track record of shipping production-quality systems.
- Strong programming skills in Python and/or Go.
- Hands-on production experience with at least one LLM serving framework (vLLM, SGLang, TensorRT-LLM, TGI, or equivalent).
- Working knowledge of PyTorch or JAX and an understanding of how model architecture choices affect inference characteristics.
- Experience deploying and operating GPU workloads on Kubernetes at production scale, including autoscaling and resource scheduling.
- Solid understanding of GPU memory hierarchies, compute parallelism, and the tradeoffs across tensor, pipeline, and expert parallelism strategies.
- Ability to create structure from ambiguity and communicate technical tradeoffs clearly to both engineering peers and customers.
- Great written and verbal communication skills in English.
Preferred Qualifications
- Production experience with disaggregated prefill/decode architectures (NVIDIA Dynamo, LLM-d, or equivalent), including scheduling policies and network fabric configuration.
- Deep familiarity with KV cache strategies: RadixAttention, slab-based memory allocators, cross-request prefix sharing, and cache-aware scheduling.
- Experience with multi-node GPU inference across InfiniBand or RoCE fabrics, including NCCL collective communication tuning.
- Custom kernel or operator development experience (e.g., CUDA, Triton, torch.compile, Pallas, or equivalent).
- Contributions to open-source inference engines (vLLM, SGLang, TGI, TensorRT-LLM, or similar).
- Hands-on experience with quantization tooling: GPTQ, AWQ, FP8 via llm-compressor, or AutoGPTQ.
- Knowledge of speculative decoding implementations (Medusa, EAGLE-3, draft-model approaches) and their performance/quality tradeoffs.
- Experience optimizing and adapting model implementations for non-NVIDIA accelerators and their ecosystems: AMD, TPU, Trainium/Inferentia, Cerebras, Groq, and other custom ASICs.
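The headline serving metrics this role optimizes (time-to-first-token, decode throughput, cost-per-token) reduce to simple arithmetic over per-token timestamps and accelerator cost. A hedged sketch, with all figures and names invented for illustration:

```python
def time_to_first_token(request_start: float, token_times: list) -> float:
    # TTFT: delay between request arrival and the first emitted token,
    # dominated by queueing plus the prefill pass.
    return token_times[0] - request_start

def decode_throughput(token_times: list) -> float:
    # Tokens per second over the decode window (excludes prefill).
    span = token_times[-1] - token_times[0]
    return (len(token_times) - 1) / span

def cost_per_token(gpu_seconds: float, usd_per_gpu_hour: float,
                   tokens_served: int) -> float:
    # Amortized accelerator cost per generated token.
    return gpu_seconds / 3600 * usd_per_gpu_hour / tokens_served
```

Serving frameworks like vLLM expose metrics along these lines; the point of the role is moving these numbers through batching, KV cache reuse, and prefill/decode disaggregation.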
Salary & Benefits
- Competitive total compensation package (salary + equity).
- Retirement or pension plan, in line with local norms.
- Health, dental, and vision insurance.
- Generous PTO policy, in line with local norms.
The base salary range for this position is $165,000 – $500,000 per year, depending on experience, skills, qualifications, and location. This range represents our good faith estimate of the compensation for this role at the time of posting. Total compensation may also include equity in the form of stock options. We are committed to pay equity and transparency.

Fluidstack is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Fluidstack will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

You will receive a confirmation email once your application has successfully been accepted. If there is an error with your submission and you did not receive a confirmation email, please email careers@fluidstack.io with your resume/CV, the role you've applied for, and the date you submitted your application; someone from our recruiting team will be in touch.
2026-03-06 7:44
Product Manager, Models
Heidi Health
201-500
Australia
Full-time
Remote
false
Who We Are

Howdy, we're Heidi 👋 "The AI startup growing faster than Canva." That's what the Financial Review called us. In 18 months, we supported over 73 million patient visits and became one of the fastest-growing companies in the world. We pivoted from broad healthcare AI to building Earth's finest AI Care Partner. Today, we support over 2 million patient sessions weekly across 116 countries and over 110 languages. Hundreds of thousands of clinicians use Heidi to complete documentation. Our mission is simple: strengthen the human connection at the heart of healthcare.

We've found product-market fit with individual clinicians through our freemium medical scribe, transforming unstructured clinical visits into structured text artefacts. Clinicians and organizations quite like it. Now, we embark upon consuming more than just documentation. Every new job a clinician delegates to Heidi makes patients feel more attended to, cleans up health system logjams, and lets clinicians be clinicians again. That's where you come in.

The role

We're looking for a Product Manager to own the AI models that power everything Heidi does. Someone who thinks platform teams exist to make product teams faster. You will own Heidi's models platform: evaluation pipelines, fine-tuning infrastructure, model routing, and safety systems. Hundreds of thousands of clinicians across 116 countries use these models in clinical settings every week. You'll work with engineers and researchers, partner with product PMs and clinical safety, and stay close enough to product teams that you know what they need before they file a ticket.

You will report into Product leadership. This is a platform role: every user-facing product at Heidi depends on what your team builds. This role will be based in either our Sydney or Melbourne office. We don't care about logos, the traditional insignia of competence. We'll evaluate senior well-credentialed candidates and young, hungry hopefuls alike.
If you're an engineer who's been living inside these models and wants to move up a layer of abstraction into product, we want to hear from you.

What you'll do:
- Own product strategy and roadmap for Heidi's models platform (evaluation, safety, model routing, fine-tuning infrastructure), setting clear goals and being held accountable to achieving them
- Prioritise your team's work across enablement requests, model safety and quality, and bets on new capabilities
- Figure out where product teams get stuck on your models and fix the platform so they don't
- Build eval tooling and fine-tuning workflows that your engineers and product teams can actually use in clinical settings
- Decide what to improve next by reading clinician feedback, model quality signals, and what product teams are asking for
- Allocate engineering capacity across product teams who all want more than you can give, and tell them clearly what you're deferring
- Work with your engineers on eval design, fine-tuning trade-offs, and model architecture decisions at a technical level
- Set model quality and safety targets grounded in clinical outcomes (did the note capture the right diagnosis? did the referral letter contain the right history?)
- Spot infrastructure that two product teams are building separately and consolidate it
- Watch foundation model developments and decide when to rip up your roadmap

If we'd worked together the last 6 weeks, you'd have:
- Defined an evaluation framework for model quality that your engineers actually use
- Made a clear ship/hold decision on a model update under pressure from a product team, and communicated the rationale to leadership
- Identified overlapping model capability requests across two product teams and proposed shared infrastructure
- Built a 90-day roadmap that balances enablement requests with your own priorities for model quality
- Had a productive disagreement with a senior engineer about prioritisation and reached a resolution you both committed to

What we're looking for:
- 4+ years working on AI platform, infrastructure, or model-adjacent products, though we care more about what you've built than time served
- Technical depth on model evaluation, fine-tuning, and production AI systems. You've designed eval suites, debugged model regressions, and understand what makes models fail in production.
- Genuine curiosity about what models get wrong in clinical settings and why
- Technical enough to hold your own with your engineers, credible enough to present safety trade-offs to leadership
- You use AI tools to do your own work, not just manage people who do
- Strong opinions, weakly held. You'll shift the room when you're right.
- Willingness to update your views when the technology shifts, which it does roughly quarterly
- Data fluency with diagnostic teeth: can you read evaluation results and distinguish a real regression from noise? Can you design an eval that catches the thing your current suite misses?

If you answer 'NO' to these questions, this may not be the job for you:
- Are you an execution powerhouse?
- Have you worked on AI products where model quality directly affected end users?
- Can you allocate engineering resources across competing priorities and defend the split?
- Are you comfortable making decisions with incomplete information, then revising them when the picture changes?
- Are you able to execute without a legion of data analysts, product marketers, and research coordinators at your beck and call?
- Does the prospect of re-energising our health systems make you feel fuzzy inside?

The Way We Work
1. Build to Last: We design for safety and reliability so clinicians, patients, and our teams can trust what we build every day.
2. Own Your Practice: Ideas rise on merit, not title, and everyone shares responsibility for the standards we set together.
3. Move Fast, Stay Steady: We move quickly but never at the cost of trust. Progress only matters if people can depend on what we make.
4. Make Others Better: Honest feedback, steady support, and shared growth keep our teams improving together.

Why you will flourish with us
- Flexible hybrid working environment, with 3 days in the office
- A generous personal development budget of $500 per annum
- Learn from some of the best engineers and creatives, joining a diverse team
- Become an owner, with shares (equity) in the company; if Heidi wins, we all win
- The rare chance to create a global impact as you immerse yourself in one of Australia's leading healthtech startups
- If you have an impact quickly, the opportunity to fast-track your startup career!

Heidi is dedicated to creating an equitable, inclusive, and supportive work environment that brings people together from diverse backgrounds, experiences, and perspectives. Our strength is in our differences.
We're proud to be an equal opportunity employer and welcome all applicants as we're committed to promoting a culture of opportunity for all.
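The "real regression vs. noise" question this posting asks can be made concrete with a two-proportion z-style check on eval pass rates. The following is a toy sketch, not Heidi's tooling; the threshold and numbers are invented for illustration:

```python
import math

def regression_detected(passes_old: int, n_old: int,
                        passes_new: int, n_new: int,
                        z_threshold: float = 2.0) -> bool:
    # Pooled two-proportion z-statistic: is the new model's eval pass rate
    # significantly below the old one's, or within sampling noise?
    p_old = passes_old / n_old
    p_new = passes_new / n_new
    p_pool = (passes_old + passes_new) / (n_old + n_new)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_old + 1 / n_new))
    if se == 0:
        return False  # degenerate case: identical all-pass or all-fail samples
    z = (p_old - p_new) / se
    return z > z_threshold
```

A drop from 90% to 80% over 1,000 cases clears the threshold; a drop from 90% to 89% does not, which is exactly the distinction the role demands before a ship/hold call.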
2026-03-06 6:14
Forward Deployed Engineer (FDE) - Seattle
OpenAI
5000+
$162,000 – $280,000
Full-time
Remote
false
About the team

OpenAI’s Forward Deployed Engineering team partners with customers to turn research breakthroughs into production systems. We operate at the intersection of customer delivery and core platform development.

About the role

Forward Deployed Engineers (FDEs) lead complex end-to-end deployments of frontier models in production alongside our most strategic customers. You will own discovery, technical scoping, system design, build, and production rollout, partnering directly with customer engineering and domain teams. You will measure success through production adoption, measurable workflow impact, and eval-driven feedback that changes product and model roadmaps. You’ll work closely with our Product, Research, Partnerships, GRC, Security, and GTM teams.

This role is based in Seattle. We use a hybrid work model of 3 days in the office per week. We offer relocation assistance. Travel up to 50% is required.

In this role you will
- Own technical delivery across multiple deployments from first prototype to stable production
- Build full-stack systems that deliver customer value and sharpen how we learn
- Embed closely with customer teams, understand their needs, and guide adoption of what you build
- Scope work, sequence delivery, and remove blockers early
- Make trade-offs between scope, speed, and quality; adjust plans to protect delivery
- Contribute directly in the code when progress or clarity depends on it
- Codify working patterns into tools, playbooks, or building blocks that others can use
- Share field feedback that helps Research and Product understand where the models succeed and where they can improve
- Keep teams moving through clarity and follow-through

You might thrive in this role if you
- Bring 5+ years of engineering or technical deployment experience that includes customer-facing work
- Have scoped and delivered complex systems in fast-moving or ambiguous environments
- Write and review production-grade code across frontend and backend using Python, JavaScript, or comparable stacks
- Have built or deployed systems powered by LLMs or generative models and understand how model behaviour affects product experience
- Simplify complexity and make fast, sound decisions under pressure
- Communicate clearly with engineers, product teams, and customer stakeholders
- Spot risks early and adjust without slowing down
- Model calm and judgment when the stakes are high

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates.
For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.OpenAI Global Applicant Privacy PolicyAt OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
2026-03-06 2:44
Senior Software Engineer
Lorikeet
51-100
Australia
Full-time
Remote
false
About Lorikeet
Lorikeet is the most powerful customer support AI for complex businesses like fintechs, healthtechs, marketplaces and delivery services.
We’re doing this by building ground-up from the premise that most support responses should be automated with transparent, customizable AI, and that support teams should spend their time managing automation and engaging with complex cases, not grinding through high volumes of simple tickets. Once teams are freed from reactive support, we want to help them tackle what’s next: providing personalized concierge services to their customers.
To deliver this combination of powerful AI systems and well-designed tooling, we’re leveraging Jamie’s experience as an early member of Google’s generative AI team and Steve’s experience building for operational teams at Stripe, as well as the experience of our team, who’ve joined us from places like Stripe, Canva, Atlassian, Dropbox and Dovetail.
We are growing fast, with paying customers, real revenue, an exciting roadmap and a strong sales pipeline. We’ve raised over USD 50m from leading VCs and angel investors, including QED, Blackbird, Square Peg, Claire Hughes Johnson (ex-Stripe COO), Cristina Cordova (Linear COO), Bob Van Winden (Stripe Head of Support), and Cos Nicolaescu (Brex CTO).
Our global customers include:
The largest telehealth company in Australia
The largest bank for teens in the US
One of the largest NFT marketplaces by trading volume
One of the largest Web3 gaming companies
… and a handful of other enterprise customers with over 1 million support tickets a year.
What’s unique about this opportunity?
Technical founders and an engineering-led culture. Most at Lorikeet write code. Everyone at Lorikeet owns working with our users and building a great product for them. Engineers take ownership of challenging problems and define and implement solutions.
Warm, mature, in-person, flexible culture. Low-ego, high-trust team. No tolerance for ‘talented jerks’. We value working together in office as the default in our (quite nice!) Surry Hills office. Folks on the team have young families, so we embrace a) working efficiently, and b) working flexible hours to fit in life priorities outside of work. We’re committed to building a diverse team and really encourage folks from underrepresented backgrounds to reach out - we value user obsession and eagerness to learn over traditional credentials.
High pay, high expectations, high performance. We’re building a small, great team. Engineers in Sydney are generally underpaid and under-compensated with equity. We aim to match unicorn/scale-up pay at base salary and offer a potentially life-changing equity stake in the business. Our team get the same monthly updates we send to our investors, because they’re investors and owners too.
On the technical cutting edge. With our users we’re defining what an AI-first SaaS product looks like. No one has figured out what the UI/UX, capabilities and data models of an AI-first company are - it’s white space for us to invent. The AI agent problems we’re solving are beyond the cutting edge at the biggest research labs. We’re building on a modern tech stack, with Typescript, React/Remix, Prisma ORM, NestJS and some Python sprinkled in. Knowledge of that stack is nice, but we know good engineers will pick up new languages.
No-nonsense recruitment process. The process is: 1) informal chats with Steve and Jamie to hear our pitch and understand your interests and goals, 2) a ~two-day paid work trial where you come in and ship with us. There’s no better way for each of us to figure out if we like working together than to work together!
About the role and you
You’ll be building a powerful product that is truly innovating the world of customer support, and together we’ll be defining what an AI-first SaaS product looks like.
We’d love to speak with you if:
You are excited by working with a top-caliber team tackling the above
You have 5+ years of experience working in a top-tier engineering organisation, and ideally some exposure to startups/scale-ups
You are comfortable across the stack and are excited to lead ambitious, ambiguous projects that involve strong technical decision-making, effective implementation, and good product and design instincts
You’re keen to mentor/lead less experienced engineers
If you don’t quite match this and are from an under-represented background, we strongly encourage you to reach out. We know first-hand that diverse teams are higher performing, and we are proud that our team reflects a broad spectrum of identities and lived experiences.
2026-03-05 22:59
Scientist/Sr Scientist, Display Technology (Contract)
Xaira
101-200
United Kingdom
Contractor
Remote
false
About Xaira Therapeutics
Xaira is an innovative biotech startup focused on leveraging AI to transform drug discovery and development. The company is leading the development of generative AI models to design protein and antibody therapeutics, enabling the creation of medicines against historically hard-to-drug molecular targets. It is also developing foundation models for biology and disease to enable better target elucidation and patient stratification. Collectively, these technologies aim to continually enable the identification of novel therapies and to improve success in drug development. Xaira is headquartered in the San Francisco Bay Area, Seattle, and London.
Position Overview
Xaira is seeking enthusiastic and motivated candidates to join our team as Research Engineers. We welcome candidates across the spectrum of experience. Teams thrive when they are diverse (across all axes), and we encourage all eligible applicants to apply.
The role is based in our London office, located near Old Street. Our team is highly collaborative, operating on the belief that hard problems are best solved by multiple people working towards a clear goal, bringing and sharing their expertise with the team. We operate a hybrid working culture based on trust. Members of the team are typically in the office 3 days a week.
Key Responsibilities
Industry experience as a research engineer at an AI-related company.
Excited to work, learn and teach within a collaborative team working on challenging problems.
Desirable
Below is a list of qualities/experiences that align with the kinds of things that we are looking for. Please do not read this as an extension of the “requirements” section! We recognise that experiences, opportunities and life-paths vary.
Masters (or equivalent)/PhD in AI-related field.
Public codebases or contribution to public GitHub repositories.
Experience building and training neural networks.
Experience in distributed training and inference.
Experience profiling and optimising large-scale AI models.
Knowledge or experience in BioAI.
If you are a motivated individual with a passion for applying AI to advance drug discovery and improve human health, we encourage you to apply and join us in our mission to make a positive difference in the world.
Xaira Therapeutics is an equal-opportunity employer. We believe that our strength is in our differences. Our goal of building a diverse and inclusive team began on day one, and it will never end.
TO ALL RECRUITMENT AGENCIES: Xaira Therapeutics does not accept agency resumes. Please do not forward resumes to our jobs alias or employees. Xaira Therapeutics is not responsible for any fees related to unsolicited resumes.
2026-03-05 19:59
Field Application Engineer - AI Systems & Solutions
Tenstorrent
1001-5000
$100,000 – $500,000
Germany
Full-time
Remote
false
Tenstorrent is leading the industry on cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch, and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.
Tenstorrent is seeking an ASIC Networking Engineer to help define and build next-generation CPU networking architecture for both datacenter and emerging robotics/automotive applications. You will contribute to our current datacenter networking efforts while also helping to seed and specify future medium- to low-power robotics/automotive devices for AI/ML compute and sensor ingest. The initial focus will be datacenter networking, with robotics as the first target within the automotive/robotics space.
This role is remote, based out of North America.
We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.
Who You Are
You thrive while navigating multiple priorities and ambiguous, evolving requirements.
You have knowledge of Ethernet network architecture and how performance is modeled.
You have experience with die-to-die interfaces and understand associated protocols and design tradeoffs.
You understand Ethernet networking concepts and how they map onto on-chip and off-chip fabrics.
You have experience with datacenter scale up architectures like UALink, NVLink, and Broadcom SUE.
You have experience with scale out RDMA protocols like RoCE, Infiniband, and others.
You have experience working on safety (diagnostic and fault coverage) within the RTL design process.
What We Need
A network ASIC designer who can contribute to both datacenter networking and early-stage automotive/robotics scoping and specifications.
Someone comfortable working at the intersection of NoC, performance modeling, and RTL design to guide architectural decisions.
An engineer who can collaborate across hardware, software, and systems teams to define and refine networking requirements.
A contributor who can help drive forward next-generation CPU networking architecture for AI/ML workloads.
What You Will Learn
How to build next-generation CPU networking architectures for both high-performance datacenter and constrained robotics/automotive environments.
How to help drive forward next-generation robotics-focused CPUs for AI/ML compute with rich sensor ingestion.
How to work at the intersection of NoC design, performance modeling, and RTL to close the loop between architecture and implementation.
How to take an early-stage concept (automotive/robotics networking) from seeding and specification through to project initiation.
Compensation for all engineers at Tenstorrent ranges from $100k - $500k including base and variable compensation targets. Experience, skills, education, background and location all impact the actual offer made.
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.
This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E:1, and E:2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.
2026-03-05 19:14
Software Architect, Automotive Robotics
Tenstorrent
1001-5000
$100,000 – $500,000
Germany
Full-time
Remote
false
Tenstorrent is leading the industry on cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch, and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.
Tenstorrent is seeking an ASIC Networking Engineer to help define and build next-generation CPU networking architecture for both datacenter and emerging robotics/automotive applications. You will contribute to our current datacenter networking efforts while also helping to seed and specify future medium- to low-power robotics/automotive devices for AI/ML compute and sensor ingest. The initial focus will be datacenter networking, with robotics as the first target within the automotive/robotics space.
This role is remote, based out of North America.
We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.
Who You Are
You thrive while navigating multiple priorities and ambiguous, evolving requirements.
You have knowledge of Ethernet network architecture and how performance is modeled.
You have experience with die-to-die interfaces and understand associated protocols and design tradeoffs.
You understand Ethernet networking concepts and how they map onto on-chip and off-chip fabrics.
You have experience with datacenter scale up architectures like UALink, NVLink, and Broadcom SUE.
You have experience with scale out RDMA protocols like RoCE, Infiniband, and others.
You have experience working on safety (diagnostic and fault coverage) within the RTL design process.
What We Need
A network ASIC designer who can contribute to both datacenter networking and early-stage automotive/robotics scoping and specifications.
Someone comfortable working at the intersection of NoC, performance modeling, and RTL design to guide architectural decisions.
An engineer who can collaborate across hardware, software, and systems teams to define and refine networking requirements.
A contributor who can help drive forward next-generation CPU networking architecture for AI/ML workloads.
What You Will Learn
How to build next-generation CPU networking architectures for both high-performance datacenter and constrained robotics/automotive environments.
How to help drive forward next-generation robotics-focused CPUs for AI/ML compute with rich sensor ingestion.
How to work at the intersection of NoC design, performance modeling, and RTL to close the loop between architecture and implementation.
How to take an early-stage concept (automotive/robotics networking) from seeding and specification through to project initiation.
Compensation for all engineers at Tenstorrent ranges from $100k - $500k including base and variable compensation targets. Experience, skills, education, background and location all impact the actual offer made.
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.
This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E:1, and E:2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.
2026-03-05 19:14
Head of Product, AI
Bjak
201-500
No items found.
Full-time
Remote
false
About the Role
A1 is building a proactive AI system that carries work forward across conversations, tools, and time. You define what we build and why, grounded in what AI systems can actually do in production. You sit at the intersection of user needs, model capability, and system constraints, and are responsible for turning AI potential into real, reliable user value.
This is a hands-on role for product leaders who are comfortable making decisions under uncertainty and working closely with engineers on hard technical trade-offs.
What You'll Be Doing
Own the end-to-end AI product strategy, grounded in technical feasibility and real-world constraints.
Translate model capabilities, data limitations, and evaluation results into clear product decisions.
Make hard trade-offs across quality, latency, cost, reliability, and user experience.
Work daily with ML, backend, and mobile engineers on design, evaluation, and iteration.
Define success metrics and feedback loops across offline evaluation, online experiments, and human feedback.
Drive execution with clear specifications, risk awareness, and disciplined prioritization.
Ensure AI features ship quickly, safely, and reliably into production.
Own AI product quality across UX, correctness, and outcomes.
What You Will Need
Technical foundation
Strong grounding in computer science fundamentals, including algorithms, data structures, and system design.
Solid understanding of ML fundamentals and how modern AI systems behave in production.
Comfort reading, reviewing, and discussing technical design documents.
AI & ML experience
Hands-on exposure to AI-powered products, including LLM-based systems.
Experience working with model evaluation, prompt or pipeline iteration, and feedback loops.
Strong intuition for model limitations, hallucinations, bias, and drift.
Product leadership
Significant experience owning complex, technical products end-to-end.
Proven ability to work closely with senior engineers and ML teams.
Strong judgment and decision-making ability in ambiguous, fast-moving environments.
Ability to balance ambition with technical and operational reality.
Nice to have
Experience shipping AI-heavy consumer products.
Background as an engineer or highly technical product manager.
Experience defining evaluation metrics for ML systems.
Strong intuition for AI UX patterns and failure handling.
Prior experience in zero-to-one product environments.
How We Work
Our organization is very flat, and our team is small, highly motivated, and focused on engineering and product excellence. All members are expected to be hands-on and to contribute directly to the company’s mission.
Interview process
If there appears to be a fit, we'll reach out to schedule three, but no more than four, interviews. Applications are evaluated by our technical team members. Interviews will be conducted via virtual meetings and/or onsite.
We value transparency and efficiency, so expect a prompt decision. If you've demonstrated the exceptional skills and mindset we're looking for, we'll extend an offer to join us. This isn't just a job offer; it's an invitation to be part of a team that's bringing the practical benefits of AI to billions globally.
2026-03-05 18:59
Head of Product, AI
Bjak
201-500
China
Full-time
Remote
false
About the Role
A1 is building a proactive AI system that carries work forward across conversations, tools, and time. You define what we build and why, grounded in what AI systems can actually do in production. You sit at the intersection of user needs, model capability, and system constraints, and are responsible for turning AI potential into real, reliable user value.
This is a hands-on role for product leaders who are comfortable making decisions under uncertainty and working closely with engineers on hard technical trade-offs.
What You'll Be Doing
Own the end-to-end AI product strategy, grounded in technical feasibility and real-world constraints.
Translate model capabilities, data limitations, and evaluation results into clear product decisions.
Make hard trade-offs across quality, latency, cost, reliability, and user experience.
Work daily with ML, backend, and mobile engineers on design, evaluation, and iteration.
Define success metrics and feedback loops across offline evaluation, online experiments, and human feedback.
Drive execution with clear specifications, risk awareness, and disciplined prioritization.
Ensure AI features ship quickly, safely, and reliably into production.
Own AI product quality across UX, correctness, and outcomes.
What You Will Need
Technical foundation
Strong grounding in computer science fundamentals, including algorithms, data structures, and system design.
Solid understanding of ML fundamentals and how modern AI systems behave in production.
Comfort reading, reviewing, and discussing technical design documents.
AI & ML experience
Hands-on exposure to AI-powered products, including LLM-based systems.
Experience working with model evaluation, prompt or pipeline iteration, and feedback loops.
Strong intuition for model limitations, hallucinations, bias, and drift.
Product leadership
Significant experience owning complex, technical products end-to-end.
Proven ability to work closely with senior engineers and ML teams.
Strong judgment and decision-making ability in ambiguous, fast-moving environments.
Ability to balance ambition with technical and operational reality.
Nice to have
Experience shipping AI-heavy consumer products.
Background as an engineer or highly technical product manager.
Experience defining evaluation metrics for ML systems.
Strong intuition for AI UX patterns and failure handling.
Prior experience in zero-to-one product environments.
How We Work
Our organization is very flat, and our team is small, highly motivated, and focused on engineering and product excellence. All members are expected to be hands-on and to contribute directly to the company’s mission.
Interview process
If there appears to be a fit, we'll reach out to schedule three, but no more than four, interviews. Applications are evaluated by our technical team members. Interviews will be conducted via virtual meetings and/or onsite.
We value transparency and efficiency, so expect a prompt decision. If you've demonstrated the exceptional skills and mindset we're looking for, we'll extend an offer to join us. This isn't just a job offer; it's an invitation to be part of a team that's bringing the practical benefits of AI to billions globally.
2026-03-05 18:59
AI Software Engineer (Model Training)
Maincode
11-50
Australia
Full-time
Remote
false
About the role
Maincode is training Matilda, the first large language model built and trained from scratch in Australia. Our new compute cluster is live, and we are now scaling the next version.
This role sits directly inside that training stack. You will build the pipelines, infrastructure, and tooling that determine how efficiently Matilda trains, how stable long runs are, and how fast new experiments can be executed. Training runs last days or weeks. Small changes propagate through complex systems. The work requires precision and patience.
We build AI systems from first principles: designing the architectures, running the infrastructure, shaping the training process, and operating the models ourselves. Matilda is not a research prototype. It is a production system, trained at scale and served for open public access.
Maincode operates one of the largest private AI compute environments in Australia, built for a single purpose: training our own models. This is not a role that wraps external APIs or ships user-facing features. You will be working on the systems that train a large language model from scratch.
What you would actually do
You will build and maintain the systems that support large-scale model training. This includes:
Designing and maintaining distributed training pipelines for large language models
Building data ingestion and preprocessing systems for large training datasets
Developing tooling for experiment management, checkpointing, and reproducibility
Monitoring and debugging long-running training jobs across clusters
Improving reliability and observability across the training stack
Optimising training throughput across compute, memory, and data pipelines
Working closely with researchers to translate experimental ideas into training runs
Diagnosing failures across infrastructure, training loops, and data pipelines
You will spend time inside code, logs, dashboards, and experiment outputs. The goal is simple: make large-scale training reliable.
The kind of person who does well here
We are looking for engineers early in their careers who want to understand how large models are actually trained. You may have one or two years of experience building production software. What matters most is curiosity and the willingness to learn how these systems behave under load.
People who tend to do well here:
Care about how systems behave over long runtimes
Enjoy debugging complex distributed systems
Pay attention to logs, metrics, and system behaviour
Prefer understanding how a system works rather than relying on abstraction
Are comfortable working close to infrastructure
Have the patience to diagnose failures that appear hours into a run
Want to learn how large-scale AI training actually happens
You do not need prior experience training large language models. What matters is intellectual curiosity, persistence, and the ability to learn quickly.
How you would work
You will write production code that sits directly in the training stack. You should be comfortable:
Working in Python
Using machine learning frameworks such as PyTorch or JAX
Writing reliable infrastructure for large compute workloads
Debugging distributed systems and long-running jobs
Collaborating closely with researchers and infrastructure engineers
Much of the work sits between research and infrastructure. Ideas move quickly, but the systems that support them must remain stable.
What this role is not
It is not primarily about building user-facing applications
It is not about prompt engineering
It is not about wrapping external APIs or third-party models
You will be working on the systems that train our own models from scratch.
Why Maincode
Maincode builds AI systems end to end. We prepare the data, design the training process, run the infrastructure, and operate the models ourselves.
You will work with a small team that:
Builds the full AI stack rather than outsourcing it
Treats infrastructure as part of the intelligence system itself
Values engineers who want to understand how things actually work
Is building long-term capability in training and operating large models
If you want to work directly on the systems that train large language models from scratch, this is the only role in Australia that will put you inside that work.
Note
This is a full-time role based in Melbourne, working closely with our in-person engineering and research team. At this time we are not able to offer visa sponsorship, so applicants must have existing and unrestricted work rights in Australia.
2026-03-05 16:44