The AI job market moves fast. We keep up so you don't have to.
Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.
New AI Opportunities
Showing 61 – 79 of 79 jobs
AI Researcher
Maincode
11-50
A$150,000 – A$180,000
Australia
Full-time
Remote: No
About the role

Maincode builds foundation models from first principles on Australian infrastructure. We design architectures, run our own compute, shape the training process, and operate the systems that serve our models.

We have built Matilda, the first large language model built and trained from scratch in Australia. Our new compute cluster is live; we are scaling the next version of Matilda and deploying and serving it live for public access.

We are looking for AI researchers who want to work on the core architecture, training, and evaluation of the large-scale language models that power Matilda. This role is not focused on incremental benchmarking or paper output. You will work directly with the engineers running large-scale training systems and help design models that learn efficiently and behave reliably in production.

What you would actually do

You will work across the model development loop, from research questions to training runs to evaluation. This includes:
- Designing and testing architecture changes and training regimes for large language models
- Running controlled experiments at scale and isolating causal effects
- Studying failure modes in reasoning, generalisation, robustness, and representation
- Shaping objectives, data mixtures, and optimisation choices that influence model behaviour
- Building and refining evaluations that measure capability and reliability, not just scores
- Analysing training dynamics using logs, metrics, and model outputs
- Collaborating with ML systems engineers on distributed training and training operations
- Writing clear internal notes that turn experimental results into design decisions

You will spend substantial time in code, training runs, logs, and evaluation outputs. The goal is clarity about what improves the model and why.

What we are looking for

We care about depth of reasoning, experimental discipline, and the ability to make progress under ambiguity. We expect:
- Hands-on experience writing and running production-grade ML or research code
- Strong Python and experience with PyTorch or JAX
- Solid understanding of transformer-based language models and the basics of pre-training and evaluation
- Ability to design experiments, interpret results, and communicate tradeoffs clearly
- Comfort working close to infrastructure, performance constraints, and operational reality
- Interest in and exposure to reasoning-oriented architectures and training methods beyond standard approaches, and beyond standard LLMs

Nice to have
- Experience with distributed training concepts and tooling (data parallel, tensor parallel, sharding, checkpointing)
- Experience running training across multiple nodes and managing long training cycles
- Familiarity with large-model training stacks and frameworks (for example Megatron-style systems, DeepSpeed-like tooling, or equivalent)
- Comfort across the full workflow: training, evaluation, and deployment constraints
- Experience working in ROCm-based environments

How you would work

This is hands-on research. You will use code as a primary tool for thinking. You will be expected to:
- Move between theory and implementation quickly and precisely
- Prefer controlled experiments over broad sweeps
- Use logs, metrics, and model behaviour to guide decisions
- Work closely with engineering counterparts to scale and validate ideas

What this role is not
- It is not a product research role
- It is not prompt engineering
- It is not fine-tuning someone else’s model and shipping wrappers around external APIs

You will work on Matilda, trained from scratch on our infrastructure, and pushed until its behaviour is understood and improved.

Why Maincode

Maincode builds and operates the full stack: training infrastructure, model code, evaluation systems, and deployment. We run one of the largest private AI compute environments in Australia, built for the sole purpose of training and deploying large-scale models. If you want to work directly on training and evaluating a large language model built from scratch, this is the only role in Australia that will put you inside that work.

Note

This is a full-time role based in Melbourne, working closely with our in-person team. At this time we are not able to offer visa sponsorship, so applicants must have existing and unrestricted work rights in Australia.
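The distributed training concepts this posting names (data parallelism, all-reduce, synchronized updates) boil down to a simple idea: each worker computes a gradient on its own shard of the batch, the gradients are averaged, and every worker applies the same update. A minimal single-process sketch in plain Python (the 1-D model, loss, and data here are invented for illustration; real systems use PyTorch/JAX collectives):

```python
# Toy data-parallel step: each "worker" holds a shard of the batch,
# computes a local gradient, and the gradients are averaged (the
# "all-reduce") before one synchronized parameter update.

def local_gradient(w, shard):
    # Gradient of mean squared error for the 1-D model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, batch, n_workers, lr=0.01):
    # Shard the batch across workers (data parallelism).
    shards = [batch[i::n_workers] for i in range(n_workers)]
    grads = [local_gradient(w, s) for s in shards]
    avg_grad = sum(grads) / n_workers  # stand-in for all-reduce
    return w - lr * avg_grad           # identical update on every worker

batch = [(x, 3.0 * x) for x in range(1, 9)]  # true slope is 3
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, batch, n_workers=4)
print(round(w, 2))  # → 3.0
```

Because the shards are equal-sized, the averaged gradient equals the full-batch gradient, which is why data parallelism preserves the single-machine training trajectory.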
2026-03-05 16:44
Forward Deployed Engineer
Sunrise
11-50
Slovenia
Full-time
Remote: No
Our Mission

At Sunrise Robotics, we are dedicated to augmenting humanity through intelligent robotics. Our mission is to elevate the world of manufacturing by introducing intelligent, flexible robots that enhance human capabilities and existing machinery, ushering in the next era of production at higher quality, with less waste, and lower cost.

Our Vision

We see a future where every element of manufacturing, from design to assembly, is optimised with intelligent automation. Our vision is to integrate flexible robotic solutions, based on generic hardware and advanced software/AI capabilities, into manufacturing, particularly in small and medium-sized enterprises, making automation economically viable and accessible for manufacturers of all sizes. We are not just building robots; we are creating the strategically crucial components for the autonomous, intelligent agents of the future.

The role

Sunrise Robotics is building a new category of intelligent, flexible robotic automation, designed to scale across factories rather than solve one-off integrations. As a Forward Deployed Engineer, you’ll be at the front lines of that transformation.

This is a unique role that sits between product and deployment. You’ll work directly with customers to deliver automation solutions in real production environments, while continuously improving how those solutions are packaged, standardised, and scaled. The challenge is not simply delivering systems; it’s helping turn delivery into a repeatable, efficient, productised capability.

You’ll bring practical experience from traditional automation or system integration and apply it to a new model: scalable deployment powered by Sunrise tools, products, and processes. Your work will directly shape how we refine our offering for customers and our roadmap for product teams, reducing non-recurring engineering effort and speeding market adoption.

What you’ll do:
- Deploy Sunrise robotic systems in live manufacturing environments, ensuring successful customer go-lives
- Translate real-world production constraints into structured feedback that improves product capabilities and deployment workflows
- Identify opportunities to reduce non-recurring engineering effort and improve delivery scalability
- Contribute to the refinement of Sunrise’s tools, processes, and system architecture to enable repeatable deployments
- Collaborate closely with AI, robotics, product, and commercial teams to align customer needs with product evolution
- Support pilot launches and early deployments, ensuring systems meet defined performance and operational success criteria
- Act as a technical partner to customers during integration, building trust and ensuring long-term success
- Provide technical support to customers during troubleshooting and maintenance of deployed robotic systems
- Investigate and respond to operational incidents or anomalies in robotic systems, ensuring timely resolution and system reliability

What you’ll need:
- A genuine drive to reinvent how flexible automation solutions are delivered and scaled across manufacturing environments
- Hands-on experience deploying robotic systems and automated machinery in real-world industrial settings
- Strong understanding of robotics fundamentals and familiarity with deep learning models, including how they can be adapted or re-trained for specific applications
- Working knowledge of synthetic data generation concepts, simulation environments such as NVIDIA Isaac Sim / Omniverse, and structured quality assurance processes
- Experience with ROS 2 and Behaviour Tree programming for robotic system control
- An abstract, systems-level approach to thinking: you can view problems from multiple perspectives and generalise solutions across different applications
- Ability to collaborate effectively with multidisciplinary teams across AI, mechanical, electrical, and software engineering
- Strong communication skills, able to explain complex technical concepts clearly to both technical and non-technical stakeholders
- A deep passion for robotics, automation, and building intelligent systems that transform manufacturing

What makes you stand out:
- Experience in collaborative applications
- Experience with ABB SafeMove
- Experience with TIA Portal
- Experience with radar-based safety
- Experience with safety analysis tools

Why us?

We’re building a new category of intelligent, flexible robotic automation with real deployments, real customers, and momentum across Europe.
- Why this role: You’ll operate at the intersection of product and deployment, helping transform automation delivery from bespoke integration to scalable, repeatable systems.
- Career acceleration: High ownership, deep cross-functional exposure, and the opportunity to shape how a category-defining robotics company scales globally.
- Real impact: Your work won’t stay in simulation or design reviews; it will run on real factory floors, directly influencing product evolution, customer success, and company trajectory.
2026-03-05 11:14
People Operations Specialist
X AI
5000+
$45 – $100 / hour
United States
Full-time
Remote: No
About xAI
xAI’s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity. We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company’s mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important. All employees are expected to have strong communication skills. They should be able to concisely and accurately share knowledge with their teammates.
About the Role
As an Accounting Expert, you will be instrumental in enhancing the capabilities of our cutting-edge technologies by providing high-quality input and labels using specialized software. Your role involves collaborating closely with our technical team to support the training of new AI tasks, ensuring the implementation of innovative initiatives. You'll contribute to refining annotation tools and selecting complex problems from corporate accounting domains, with a focus on financial reporting, consolidation, internal controls, and GAAP compliance where your expertise can drive significant improvements in model performance. This position demands a dynamic approach to learning and adapting in a fast-paced environment, where your ability to interpret and execute tasks based on evolving instructions is crucial.
AI Tutor’s Role in Advancing xAI’s Mission
As an AI Tutor, you will play an essential role in advancing xAI's mission by supporting the training and refinement of xAI’s AI models. AI Tutors teach our AI models about how people interact and react, as well as how people approach issues and discussions in corporate accounting. To accomplish this, AI Tutors will actively participate in gathering or providing data, such as text, voice, and video data, sometimes providing annotations, recording audio, or participating in video sessions. We seek individuals who are comfortable and eager to engage in these activities as a fundamental part of the role, ensuring a strong alignment with xAI’s goals and objectives to innovate.
Scope
An AI Tutor will provide services that include labeling and annotating data in text, voice, and video formats to support AI model training. At times, this may involve recording audio or video sessions, and tutors are expected to be comfortable with these tasks, as they are fundamental to the role. Providing such data is a job requirement in support of xAI’s mission, and AI Tutors acknowledge that all work is done for hire and owned by xAI.
Responsibilities
Use proprietary software applications to provide input/labels on defined projects.
Support and ensure the delivery of high-quality curated data.
Play a pivotal role in supporting and contributing to the training of new tasks, working closely with the technical staff to ensure the successful development and implementation of cutting-edge initiatives/technologies.
Interact with the technical staff to help improve the design of efficient annotation tools.
Choose problems from corporate accounting fields that align with your expertise, providing rigorous solutions and model critiques where you can confidently provide detailed solutions and evaluate model responses.
Regularly interpret, analyze, and execute tasks based on given instructions.
Key Qualifications
Must have 3+ years of Big 4 public accounting experience (audit/assurance) on corporate or SEC clients, or an equivalent senior corporate accounting role (e.g., Controller, Assistant Controller, or Technical Accounting Manager at a public company or large private enterprise with complex GAAP reporting).
Must possess a Master's or PhD in Accounting (corporate focus), or equivalent credentials as a licensed CPA.
Proficiency in reading and writing, both in informal and professional English.
Strong ability to navigate various corporate accounting information resources, databases, and online resources (e.g., FASB codification, SEC EDGAR, 10-K/10-Q filings, ERP systems).
Outstanding communication, interpersonal, analytical, and organizational capabilities.
Solid reading comprehension skills combined with the capacity to exercise autonomous judgment even when presented with limited data/material.
Strong passion for and commitment to technological advancements and innovation in corporate accounting.
Preferred Qualifications
5+ years at a Big 4 firm or in a senior corporate controllership role, with direct involvement in SEC reporting, SOX 404, or complex consolidations.
Experience drafting or reviewing 10-K/10-Q footnotes, MD&A, or technical accounting memos.
At least one publication in a reputable accounting journal or outlet.
Teaching experience as a professor.
Location & Other Expectations
This position is based in Palo Alto, CA, or fully remote.
The Palo Alto option is an in-office role requiring 5 days per week; remote positions require strong self-motivation.
If you are based in the US, please note we are unable to hire in the states of Wyoming and Illinois at this time.
We are unable to provide visa sponsorship.
Team members are expected to work from 9:00am - 5:30pm PST for the first two weeks of training and 9:00am - 5:30pm in their own timezone thereafter.
For those who will be working from a personal device, please note your computer must be a Chromebook, Mac with MacOS 11.0 or later, or Windows 10 or later.
Compensation
$45/hour - $100/hour
The posted pay range is intended for U.S.-based candidates and depends on factors including relevant experience, skills, education, geographic location, and qualifications. For international candidates, our recruiting team can provide an estimated pay range for your location.
Benefits:
Hourly pay is just one part of our total rewards package at xAI. Specific benefits vary by country; depending on your country of residence, you may have access to medical benefits. We do not offer benefits for part-time roles.
xAI is an equal opportunity employer. For details on data processing, view our Recruitment Privacy Notice.
2026-03-05 7:29
Forward Deployed Engineer, Agentic Platform
Cohere
501-1000
Middle East
Full-time
Remote: No
Who are we?

Our mission is to scale intelligence to serve humanity. We’re training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.

We obsess over what we build. Each one of us is responsible for contributing to increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what’s best for our customers.

Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.

Join us on our mission and shape the future!

About North

North is Cohere's cutting-edge AI workspace platform, designed to revolutionize the way enterprises utilize AI. It offers a secure and customizable environment, allowing companies to deploy AI while maintaining control over sensitive data. North integrates seamlessly with existing workflows, providing a trusted platform that connects AI agents with workplace tools and applications.

Why this role?

This role offers a unique opportunity to shape how enterprises harness the power of AI in real-world applications. As a bridge between our core North product and our clients’ engineering teams, you’ll be at the forefront of solving complex problems and securely integrating AI into critical sectors such as finance, healthcare, and telecommunications. We are seeking engineers with diverse skill sets, including backend, infrastructure, agent development, and deployments, who deeply care about customers and want to work at the cutting edge of agentic AI.

Note: 20 - 40% travel anticipated.

In this role, you will:
- Build and ship features for North, our AI workspace platform
- Develop autonomous agents that talk to sensitive enterprise data
- Experiment at a high velocity and with a high level of quality to engage our customers and ultimately deliver solutions that exceed their expectations
- Work across the entire product lifecycle from conceptualization through production
- Lead end-to-end deployment of North in private cloud and on-premises environments, including planning, configuration, testing, and rollout

You may be a good fit if:
- You have experience with and enjoy working directly with customers
- You are fluent in both English and Arabic
- You have shipped (lots of) Python in production
- You have built and deployed highly performant client-side or server-side RAG/agentic applications
- You have strong coding abilities and are comfortable working across the stack; you’re able to read, understand, and even fix issues outside of the main code base
- You excel in fast-paced environments and can execute while priorities and objectives are a moving target

We are open to candidates currently based in the Middle East or who are open to travelling or relocating.

If some of the above doesn’t line up perfectly with your experience, we still encourage you to apply! We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.

Full-time employees at Cohere enjoy these perks:
🤝 An open and inclusive culture and work environment
🧑‍💻 Work closely with a team on the cutting edge of AI research
🍽 Weekly lunch stipend, in-office lunches & snacks
🦷 Full health and dental benefits, including a separate budget to take care of your mental health
🐣 100% Parental Leave top-up for up to 6 months
🎨 Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
🏙 Remote-flexible, offices in Toronto, New York, San Francisco, London and Paris, as well as a co-working stipend
✈️ 6 weeks of vacation (30 working days!)
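The posting asks for experience with RAG/agentic applications. At its core, a RAG loop scores documents against a query, keeps the top matches, and stuffs them into a prompt for the model. A minimal sketch (illustrative only: the scoring here is simple lexical overlap, the document strings are invented, and the actual model call is stubbed out):

```python
# Minimal retrieval-augmented generation (RAG) loop: score documents
# against the query, keep the top-k, and assemble them into a prompt.
# A real system would pass the returned prompt to an LLM.

def tokens(text):
    # Lowercased words with trailing punctuation stripped.
    return {w.strip(".,?!").lower() for w in text.split()}

def score(query, doc):
    q = tokens(query)
    return len(q & tokens(doc)) / len(q)  # lexical overlap in [0, 1]

def retrieve(query, docs, k=2):
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs, k=2):
    context = "\n".join(f"- {d}" for d in retrieve(query, docs, k))
    return f"Answer using only this context:\n{context}\nQuestion: {query}"

docs = [
    "North connects AI agents with workplace tools.",
    "The cafeteria opens at nine.",
    "Agents can search enterprise data securely.",
]
print(build_prompt("How do agents search enterprise data?", docs))
```

Production systems replace the overlap score with dense embeddings in a vector store, but the retrieve-then-prompt structure is the same.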
2026-03-05 5:59
Software Engineer, GenAI
Abridge
201-500
$255,000 – $300,000
United States
Full-time
Remote: No
About Abridge

Abridge was founded in 2018 with the mission of powering deeper understanding in healthcare. Our AI-powered platform was purpose-built for medical conversations, improving clinical documentation efficiencies while enabling clinicians to focus on what matters most: their patients.

Our enterprise-grade technology transforms patient-clinician conversations into structured clinical notes in real time, with deep EMR integrations. Powered by Linked Evidence and our purpose-built, auditable AI, we are the only company that maps AI-generated summaries to ground truth, helping providers quickly trust and verify the output. As pioneers in generative AI for healthcare, we are setting the industry standards for the responsible deployment of AI across health systems.

We are a growing team of practicing MDs, AI scientists, PhDs, creatives, technologists, and engineers working together to empower people and make care make more sense. We have offices located in the Mission District in San Francisco, the SoHo neighborhood of New York, and East Liberty in Pittsburgh.

The Role

We are looking for GenAI Engineers of all levels who are passionate about making a positive impact. You’ll collaborate closely with a cross-functional team of researchers, clinicians, and engineers to translate cutting-edge language model capabilities into dependable, real-world clinical systems. Your focus will be on designing advanced LLM-driven workflows that can reason through complex clinical contexts, leverage agentic capabilities and structured tool use, navigate branching chains of LLM calls, integrate seamlessly with retrieval systems, and consistently generate outputs that meet the highest standards of clinical reliability and trust.

A major part of this role will involve developing and applying rigorous evaluation frameworks (both automated and human-in-the-loop) to continuously assess accuracy, robustness, multilingual capabilities, and more. This is an opportunity to design experiments to probe failure modes, simulate edge cases, and stress-test LLM workflows under realistic load and challenging real-world conditions. You’ll apply a disciplined, data-driven approach to understanding model behavior: developing tools to measure system performance, conducting A/B tests against established baselines, and generating clear, actionable insights that inform deployment decisions. This high-impact role will own the end-to-end productionization of LLM workflows: deploying models into low-latency, high-uptime environments, building monitoring and observability systems, implementing post-processing guardrails, and managing workflow versioning.

What You’ll Do
- Design and build GenAI systems that turn LLMs into composable, dependable tools, leveraging retrieval, tool use, agentic reasoning, and structured outputs.
- Collaborate with ML and infra engineers to scale and optimize GenAI workflows, managing latency, context windows, and model choice.
- Write high-quality, modular code that’s graceful under failure, flexible to change, and easy to iterate on.
- Own major architectural decisions: how we architect workflows, define data flow, cache intermediate state, and structure generative outputs.
- Drive rigorous evaluation: build benchmark datasets, develop automated and human-in-the-loop frameworks, design experiments to surface failure modes and edge cases, run A/B tests to inform deployment, and distill insights from clinician feedback to evaluate and guide model improvement.
- Leverage frontier capabilities: rapidly prototype with new models and model capabilities, open-source tools, and novel prompting techniques.

What You’ll Bring
- 3+ years of experience building production-grade systems, with 1-2+ years focused on GenAI or LLM-powered products.
- Deep fluency with LLM APIs, prompting strategies, and orchestration patterns (e.g., LangChain, LlamaIndex, custom pipelines).
- Experience with retrieval systems (e.g., semantic and lexical retrieval, vector DBs, efficient kNN), function calling, tool use, or agentic workflows.
- Working knowledge of model evaluation: experience building diverse datasets, conducting both automated and human-in-the-loop evaluations, running A/B tests, and working with subject matter experts to guide model improvement.
- Strong Python fundamentals, including the ability to write clean code and design comprehensive test cases, plus familiarity with core language features and standard libraries; experience with async programming, performance profiling, packaging, and deployment tooling is strongly preferred.
- Good taste and intuition: you know when to move fast, ship, and iterate, and also when to take a beat to tackle tech debt.

We value people who are eager to learn new things and recognize that great team members might not perfectly match a job description. If you’re interested in the role but aren’t sure whether you’re a good fit, we’d still like to hear from you.

This position requires a commitment to a hybrid work model, with the expectation of coming into our SF office a minimum of three (3) times per week. Relocation assistance is available for candidates willing to move to San Francisco.

Why Work at Abridge?

At Abridge, we’re transforming healthcare delivery experiences with generative AI, enabling clinicians and patients to connect in deeper, more meaningful ways. Our mission is clear: to power deeper understanding in healthcare. We’re driving real, lasting change, with millions of medical conversations processed each month.

Joining Abridge means stepping into a fast-paced, high-growth startup where your contributions truly make a difference. Our culture requires extreme ownership: every employee has the ability to (and is expected to) make an impact on our customers and our business. Beyond individual impact, you will have the opportunity to work alongside a team of curious, high-achieving people in a supportive environment where success is shared, growth is constant, and feedback fuels progress.

At Abridge, it’s not just what we do, it’s how we do it. Every decision is rooted in empathy, always prioritizing the needs of clinicians and patients. We’re committed to supporting your growth, both professionally and personally. Whether it's flexible work hours, an inclusive culture, or ongoing learning opportunities, we are here to help you thrive and do the best work of your life. If you are ready to make a meaningful impact alongside passionate people who care deeply about what they do, Abridge is the place for you.

How we take care of Abridgers:
- Generous Time Off: 14 paid holidays, flexible PTO for salaried employees, and accrued time off for hourly employees
- Comprehensive Health Plans: Medical, Dental, and Vision coverage for all full-time employees and their families
- Generous HSA Contribution: If you choose a High Deductible Health Plan, Abridge makes monthly contributions to your HSA
- Paid Parental Leave: Generous paid parental leave for all full-time employees
- Family Forming Benefits: Resources and financial support to help you build your family
- 401(k) Matching: Contribution matching to help invest in your future
- Personal Device Allowance: Tax-free funds for personal device usage
- Pre-tax Benefits: Access to Flexible Spending Accounts (FSA) and Commuter Benefits
- Lifestyle Wallet: Monthly contributions for fitness, professional development, coworking, and more
- Mental Health Support: Dedicated access to therapy and coaching to help you reach your goals
- Sabbatical Leave: Paid Sabbatical Leave after 5 years of employment
- Compensation and Equity: Competitive compensation and equity grants for full-time employees
... and much more!

Equal Opportunity Employer

Abridge is an equal opportunity employer and considers all qualified applicants equally without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, or disability.

Staying safe: protect yourself from recruitment fraud

We are aware of individuals and entities fraudulently representing themselves as Abridge recruiters and/or hiring managers. Abridge will never ask for financial information or payment, or for personal information such as a bank account number or social security number, during the job application or interview process. Any emails from the Abridge recruiting team will come from an @abridge.com email address. You can learn more about how to protect yourself from these types of fraud by referring to this article.
Please exercise caution and cease communications if something feels suspicious about your interactions.
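The evaluation work this posting describes leans heavily on A/B tests against established baselines. As an illustration only (not Abridge’s actual framework; the counts are invented), the simplest decision rule is a two-proportion z-test comparing a success rate, such as the fraction of generated notes accepted without edits, between baseline and variant:

```python
import math

# Two-proportion z-test: does variant B's success rate beat baseline A's?
def ab_test(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    p = (success_a + success_b) / (n_a + n_b)            # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))    # standard error
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical counts: 420/1000 accepted for baseline, 465/1000 for variant.
p_a, p_b, z = ab_test(success_a=420, n_a=1000, success_b=465, n_b=1000)
print(f"A={p_a:.1%} B={p_b:.1%} z={z:.2f}")
# |z| > 1.96 corresponds to significance at the 5% level (two-sided)
```

Real evaluation pipelines layer this kind of test over per-segment breakdowns and human-in-the-loop review, but the baseline-versus-variant comparison is the core step that gates deployment.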
2026-03-05 5:44
Partner AI Deployment Engineer
OpenAI
5000+
Germany
Full-time
Remote: No
About the roleWe are looking for a Partner AI Deployment Engineer (P-ADE) to lead technical delivery with OpenAI partners across EMEA and help scale customer deployments built on the OpenAI platform. This role focuses on working across a wide range of customer use cases, supporting the design, deployment and scaling of production-grade AI solutions delivered through partners.You will work closely with partner delivery teams, alongside Solutions Engineers (SEs), Forward Deployed Engineers (FDEs) and other ADEs, to move customer engagements from initial design through to stable, scaled production. Your work will accelerate time to value, reduce delivery risk and ensure solutions meet OpenAI’s standards for quality, safety and reliability. You will collaborate closely with GTM, Applied, and Research to support partner-led enterprise adoption.This role is based in Paris or Munich. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.In this role, you will:Act as a primary technical delivery partner for a set of OpenAI partners across EMEA, supporting customer deployments across multiple industries and use cases.Work with partner delivery teams and customer stakeholders to translate solution designs into deployable, production-ready architectures on the OpenAI platform.Support customer time to value through hands-on prototyping, integration support, architectural guidance and troubleshooting during critical phases of delivery.Collaborate closely with SEs, FDEs, and other ADEs to ensure the right technical expertise is engaged from design through production rollout.Help partners operationalise solutions by addressing scalability, reliability, security and safety considerations required for enterprise production environments.Contribute to reusable deployment patterns, reference architectures and delivery guidance that enable repeatable execution across partner engagements.Act as a technical quality and governance 
point during deployments, helping ensure solutions meet OpenAI’s standards and best practices before and after go-live.Capture and synthesise feedback from real customer deployments and share insights with Applied, Research and partner teams to improve delivery playbooks and platform capabilities.You’ll thrive in this role if you:Have 8+ years of experience in technical consulting, solution delivery or a similar role, working with senior technical and business leaders on complex enterprise deployments.Have experience delivering large, multi-stakeholder technical projects in partnership with boutique services organisations, system integrators or similar delivery environments.Have strong hands-on experience building, integrating and operating production software using modern languages such as Python or JavaScript.Have designed, deployed and supported Generative AI and or machine learning solutions in real-world production environments.Have practical experience working with the OpenAI platform in customer-facing or delivery contexts.Are a clear communicator who can work effectively with partner engineers, internal teams and customer stakeholders.Take ownership of delivery problems end to end and are comfortable operating in ambiguous, fast-moving environments.Bring a collaborative, humble mindset and enjoy working across partners and internal teams to deliver successful customer outcomes.About OpenAIOpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. 
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates.

For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared.
Join us in shaping the future of technology.
No items found.
2026-03-05 2:59
Training Runtime: Process Management Engineer
OpenAI
5000+
United Kingdom
Full-time
Remote
false
About the Team
Training Runtime designs the core distributed runtime that powers everything from early research experiments to frontier-scale model runs. We build robust, scalable, high-performance components to support our distributed training workloads. Our priorities are to maximize the productivity of our researchers and our hardware, with the goal of accelerating progress towards AGI.

Within Training Runtime, the Process Management team develops the distributed OS responsible for launching, coordinating, and supervising the large numbers of processes that make up modern training workloads. Our runtime sits beneath training frameworks and on top of research infrastructure, ensuring jobs run reliably across massive clusters while maintaining performance, stability, and observability. Success for us is measured by both system reliability and researcher velocity: enabling ideas to scale from experiments to production training runs.

About the Role
As a Training Runtime: Process Management Engineer, you will work on the software that ties thousands of computers together and exposes them as a unified system. This system has to serve individual researchers running multiple parallel experiments, as well as our largest training runs spanning hundreds of thousands, and even millions, of machines and accelerators. This requires easy-to-use, introspectable systems that promote a fast debugging and development cycle, as well as relentless optimization for scale while maintaining stability and performance throughout.

You will work primarily in Rust, building high-performance asynchronous systems with a strong emphasis on performance, correctness, and scalability. Working at this scale and at the frontier of AI development poses novel challenges; out-of-the-box approaches often don't work. The problems you will be working on are highly ambiguous and require strong design judgment as well as proficient execution to advance the state of our infrastructure.

We're looking for people who love optimizing an end-to-end platform and understanding high-performance architectures to maximize both local and distributed performance across our supercomputers. We're looking for engineers excited by the rapid pace of responding to the dynamic and evolving needs of our training runtime and compute stack.

This role is based in London, UK. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In this role, you will:
- Work across our Python and Rust stack
- Design, build, and maintain software to orchestrate and monitor machine learning workloads on our largest supercomputers
- Profile and optimize our software stack to support computation orchestration at frontier scale
- Improve reliability, observability, and fault tolerance for long-running jobs
- Debug complex distributed systems issues across large clusters
- Respond to the changing shapes and needs of the ML systems to enable our researchers
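The launch/supervise/restart loop that such a process-management runtime performs can be sketched in a few lines. This is an illustrative toy in Python asyncio rather than the team's Rust stack; the worker behavior, shared crash log, and restart budget are all invented for the example.

```python
import asyncio

async def worker(task_id: int, fail_once: bool, crashed: set[int]) -> str:
    """Simulated training process: crashes on its first run if fail_once is set."""
    if fail_once and task_id not in crashed:
        crashed.add(task_id)
        raise RuntimeError(f"worker {task_id} crashed")
    return f"worker {task_id} done"

async def supervise(task_id: int, fail_once: bool, crashed: set[int],
                    max_restarts: int = 3) -> str:
    """Relaunch a crashed worker until it succeeds or exhausts its restart budget."""
    for _ in range(max_restarts + 1):
        try:
            return await worker(task_id, fail_once, crashed)
        except RuntimeError:
            continue  # a real runtime would log, back off, and reschedule here
    raise RuntimeError(f"worker {task_id} exceeded restart budget")

async def main() -> list[str]:
    crashed: set[int] = set()
    # Supervise four workers concurrently; even-numbered ones crash once.
    return await asyncio.gather(*(supervise(i, i % 2 == 0, crashed) for i in range(4)))

results = asyncio.run(main())
```

The production version of this loop is far harder: it must make the same supervise/restart decision across hundreds of thousands of processes while keeping the whole job observable and the restart storm bounded.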
You might thrive in this role if you:
- Have experience developing distributed systems (not just operating them)
- Enjoy understanding how large systems behave and fail at scale
- Care deeply about performance, correctness, and reliability
- Have strong software engineering skills and are proficient in Python and Rust or another systems programming language (e.g. C++)
- Have solid Linux knowledge, and are comfortable with systems-level debugging, performance analysis, and memory profiling
- Are comfortable and experienced working with and developing asynchronous and concurrent systems
- Like high-ownership environments with light process and strong engineering agency
2026-03-05 2:59
Threat Modeler, Preparedness
OpenAI
5000+
$325,000 – $325,000
United States
Full-time
Remote
false
About the Team
The Preparedness team is an important part of the Safety Systems org at OpenAI and is guided by OpenAI's Preparedness Framework. Frontier AI models have the potential to benefit all of humanity, but also pose increasingly severe risks. To ensure that AI promotes positive change, the Preparedness team helps us prepare for the development of increasingly capable frontier AI models. The team is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models.

The mission of the Preparedness team is to:
- Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards misuse risks whose impact could be catastrophic to our society
- Ensure we have concrete procedures, infrastructure, and partnerships to mitigate these risks and to safely handle the development of powerful AI systems

Preparedness tightly connects capability assessment, evaluations, internal red teaming, and mitigations for frontier models, as well as overall coordination on AGI preparedness. This is fast-paced, exciting work with far-reaching importance for the company and for society.

About the Role
As a threat modeler, you will own OpenAI's holistic approach to identifying, modeling, and forecasting risks from frontier AI systems. This role ensures that our evaluation frameworks, safeguards, and taxonomies are robust, high-coverage, and forward-looking. You will help the company answer the "why" behind our most stringent risk-prevention efforts, shaping the rationale for prioritizing and mitigating risks across domains. You will serve as a central node connecting technical, governance, and policy perspectives on the prioritization, focus, and rationale of our approach to frontier risks from AI.

In this role, you will:
- Develop and maintain comprehensive threat models across all misuse areas (bio, cyber, attack planning, etc.)
- Develop plausible and convincing threat models across loss of control, self-improvement, and other possible alignment risks from frontier AI systems
- Forecast risks by combining technical foresight, adversarial simulation, and emerging trends
- Pair closely with technical partners on capability evaluations to ensure these map to and cover the gamut of severe risks differentially enabled by frontier AI systems
- Pair closely with Bio and Cyber Leads to size the residual risk of the designed safeguards and translate threat models into actionable mitigation designs
- Act as the thought partner and explainer of "why" and "when" for high-investment mitigation efforts, helping stakeholders understand the rationale behind prioritization
- Serve as the central node connecting technical, governance, and policy perspectives on the prioritization, focus, and rationale of our approach to misuse risk

You might thrive in this role if you:
- Understand risks from frontier AI systems and have a strong grasp of the AI alignment literature
- Bring deep experience in threat modeling, risk analysis, or adversarial thinking (e.g., security, national security, or safety)
- Know how AI evaluations work and can connect eval results to both capability testing and safeguard sufficiency
- Enjoy working across technical and policy domains to drive rigorous, multidisciplinary risk assessments
- Communicate complex risks clearly and compellingly to both technical and non-technical audiences
- Think in systems and naturally anticipate second-order and cascading risks
2026-03-05 2:59
Member of Technical Staff: Agent DX Research
Modal
51-100
$150,000 – $350,000
United States
Full-time
Remote
false
About Us:
Modal provides the infrastructure foundation for AI teams. With instant GPU access, sub-second container startups, and native storage, Modal makes it simple to train models, run batch jobs, and serve low-latency inference. Thousands of customers rely on us for production AI workloads, including Lovable, Scale AI, Substack, and Suno.

We're a fast-growing team based out of NYC, SF, and Stockholm. We've hit 9-figure ARR and recently raised a Series B at a $1.1B valuation. Our investors include Lux Capital, Redpoint Ventures, Amplify Partners, and Elad Gil.

Working at Modal means joining one of the fastest-growing AI infrastructure organizations at an early stage, with many opportunities to grow within the company. Our team includes creators of popular open-source projects (e.g. Seaborn, Luigi), academic researchers, international olympiad medalists, and engineering and product leaders with decades of experience.

The Role:
Modal has always obsessed over developer experience and productivity. With rapid advances in the capabilities of AI coding agents, the practice of developing software, and with it the meaning of developer experience, is changing. We see this as an opportunity.

We're looking for an experienced researcher to help make it even easier and more productive to build on Modal. We believe our code-first approach to AI infrastructure is uniquely well suited to agent-based development, but we want to do even better by subjecting agent productivity to rigorous evaluation and using those insights to guide the development of our platform.

You'll work with Modal's SDK team and other product engineers to build out a framework and process for agent productivity evaluation. Our goal is to treat developer experience optimization as a scientific problem. You'll be responsible for defining quantitative objectives, designing systems to measure performance, and translating results into product improvements. You'll also be expected to stay on top of new developments in tools and workflows, and to work with our customers to understand how they're using coding agents with Modal and where we can provide more value.

Requirements:
This is a new kind of role, and we don't have one specific background in mind. Training in quantitative research is preferred: you might have a PhD in Computer Science, Human-Computer Interaction, Cognitive Science, Operations Research, or another related field, or prior experience as a Machine Learning Scientist, Quantitative UX Researcher, or a similar role on a product team. Regardless of your exact background, we'll be looking for:
- Sufficient technical skills to design and implement scalable agent benchmarking workflows
- Experience with experimental design, measurement, and statistical evaluation
- Up-to-date knowledge of the latest advances in coding agents (with a dose of healthy skepticism about their current capabilities)
- Interest in developer tooling and opinions about developer ergonomics
- Familiarity with the use cases that Modal serves (generative AI inference, large-scale batch jobs, multi-node training, etc.)
- Strong communication skills and the ability to convey research insights to decision makers
- The ability to work in person from our New York (preferred) or San Francisco office
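Treating developer experience as a measurement problem means putting uncertainty estimates on agent benchmark results. Below is a minimal sketch, assuming each benchmark trial is scored pass/fail; the data, function name, and resampling settings are hypothetical, not Modal's methodology.

```python
import random

def bootstrap_diff_ci(baseline, candidate, n_resamples=5_000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the difference in mean
    success rate (candidate - baseline) between two agent configurations."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_resamples):
        rb = [rng.choice(baseline) for _ in baseline]    # resample with replacement
        rc = [rng.choice(candidate) for _ in candidate]
        diffs.append(sum(rc) / len(rc) - sum(rb) / len(rb))
    diffs.sort()
    lo = diffs[int(alpha / 2 * n_resamples)]
    hi = diffs[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Made-up pass/fail outcomes for the same task suite under two SDK variants:
baseline  = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0] * 5   # 60% agent success rate
candidate = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1] * 5   # 80% agent success rate
lo, hi = bootstrap_diff_ci(baseline, candidate)
```

An interval that excludes zero is informal evidence that an SDK change actually moved agent success rates rather than benchmark noise; production-grade evaluation would also control for task difficulty and repeated runs.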
2026-03-04 11:44
Forward Deployed Engineer - ML
Modal
51-100
Sweden
Full-time
Remote
false
The Role:
We're looking for Forward Deployed ML Engineers who want to work at the intersection of deep technical work and direct customer impact. As an ML FDE, you'll partner with leading AI companies and foundation model labs to help them achieve state-of-the-art performance on their most demanding workloads: LLM serving, model training (SFT, RLHF), audio pipelines, scientific computing, and more. You'll help teams reach outcomes most engineers can't on their own.

The FDE team today includes world-class software engineers, computational scientists, ML engineers, and former founders. We're looking for people with strong engineering fundamentals, deep curiosity across the AI stack, and energy for working directly with customers on hard problems.

You will:
- Work hands-on with companies like Suno, Lovable, Cognition, and Meta to architect and optimize production AI workloads on Modal
- Contribute to open-source projects (members of the team are active contributors to SGLang) and publish technical content that demonstrates Modal's capabilities across the AI stack
- Collaborate with Modal's product and sales teams, contributing to the platform as both an engineer and a product stakeholder
- Build trusted relationships with technical leaders (CTOs, VPs of Engineering, ML leads) at companies doing frontier AI work
- Conduct technical demos, experiments, and proofs of concept that make Modal's performance advantages tangible

Requirements:
- 2+ years of professional ML engineering experience, ideally with hands-on work in inference optimization, model training, GPU programming, or ML infrastructure
- Familiarity with the serving (e.g., vLLM, SGLang) and training (e.g., slime, verl, TRL) toolchains; you don't need all of these, but you should be able to go deep on at least one
- Strong communication: able to go deep on technical architecture with an engineering team and clearly articulate tradeoffs to technical leadership
- Genuine interest in working directly with customers; you find it energizing to understand someone else's problem and help them solve it
- Bonus: side projects, open-source contributions, or published work you're proud of in ML or systems performance
- Willingness to work in person in Stockholm
2026-03-04 11:44
Research Product Manager — Structured AI Systems
Granica
11-50
$160,000 – $250,000
United States
Full-time
Remote
false
About Granica
Granica is an AI research and infrastructure company focused on reliable, steerable representations for enterprise data. We earn trust through Crunch, a policy-driven health layer that keeps large tabular datasets efficient, reliable, and reversible. On this foundation, we're building Large Tabular Models: systems that learn cross-column and relational structure to deliver trustworthy answers and automation with built-in provenance and governance.

Research Product Manager — Structured AI Systems & Economic Extraction
Location: Downtown Mountain View, CA (office-based, five days a week)
Team: Research & Applied Systems

The Mission
Granica's Research team is advancing foundational work in:
- Tabular data learning and large tabular models
- Structured and relational representation learning
- Compression-aware and efficiency-driven AI
- Hybrid symbolic, relational, and neural systems
- The intersection of information theory, learning theory, and large-scale systems

These efforts are tightly coupled with real production systems operating over petabytes of enterprise data. The mission of the Research Product Manager is to ensure this work moves forward coherently, efficiently, and at scale, connecting people, ideas, compute, and systems so that breakthrough research becomes durable capability.

This role is not program management. It is for someone who can:
- Understand how large AI models are trained, deployed, and maintained in production systems
- Translate foundational modeling advances into economically valuable infrastructure
- Shape both the technical execution path and the economic strategy behind it

What This Role Actually Owns

1️⃣ Productionization of Structured AI Models
Work with the Research and Systems teams to:
- Design how large tabular models are trained on Parquet / Iceberg / Delta data
- Define training infra requirements (data pipelines, distributed training, evaluation loops)
- Define inference architecture (batch vs. streaming, embedding materialization, retrieval)
- Define maintenance loops (retraining cadence, data drift detection, schema evolution)
- Understand storage/compute trade-offs in real systems

You must be able to reason about:
- Data layout
- Compute scheduling
- Model lifecycle
- Infrastructure bottlenecks
- Evaluation pipelines

2️⃣ Economic Value Extraction
Help define:
- Who the buyer is (infra teams, ML teams, data platform teams)
- Where economic value is unlocked (compression, compute savings, model accuracy, governance)
- How value is quantified (cost curves, workload modeling, infra substitution)
- How to convert research capability into revenue and durable platform advantage

This role requires strong intuition around enterprise infra economics.

3️⃣ Research → Durable System
You will:
- Identify which modeling advances are worth productionizing
- Kill research directions that lack economic or system viability
- Define integration paths into enterprise workloads
- Work directly with the Chief Research Scientist on research agenda prioritization

Required Background
You must have experience in at least one of the following (ideally both):

A) Production AI Systems
- Implementing or PM'ing the deployment of large models in production
- Training infra / inference infra / model maintenance
- Operating over structured datasets (Parquet, columnar storage, data lakes)

B) Economic Platform Thinking
- Defining the buyer, pricing, ROI, and cost structure of AI infrastructure
- Converting modeling advantage into business value

This Role Is NOT
- A coordination-heavy research program manager
- A consumer AI personalization PM
- A pure academic researcher

Core Qualifications
- Background in computer science, AI, mathematics, physics, engineering, or a closely related field
- Comfort engaging deeply with researchers and engineers on complex technical topics

Strong Signals
- Experience working with or within a research lab (academic or industrial)
- Familiarity with modern AI research workflows, including experimentation, evaluation, and large-scale training
- Ability to abstract at a high level while also diving into details when needed
- Strong written and verbal communication, especially around technical progress and trade-offs

Bonus Experience
- Master's or PhD in a relevant technical field
- Publications or direct contributions to AI research (e.g., modeling, data, evals, systems, or related areas)
- Experience supporting research in structured data, tabular models, or system-aware ML
- Demonstrated ability to learn new technical domains quickly

Why This Role Matters
Granica is building foundational technology with a long horizon.
The research happening here, particularly in structured and tabular AI, is aimed at reshaping how intelligence is built and applied across the global economy. As a Research Product Manager, you will:
- Enable breakthrough research to happen faster and land harder
- Help define how frontier ideas become real systems
- Play a central role in shaping the execution engine behind a generational research agenda

This role has real ownership, real influence, and a deep connection to the core of the company.
Location & Work ModelThis role is office-based in Downtown Mountain View, five days a week. We believe close, in-person collaboration is essential for the kind of deep, cross-disciplinary research and execution this role requires.
Why Granica
- Fundamental Research Meets Enterprise Impact. Work at the intersection of science and engineering, turning foundational research into deployed systems serving enterprise workloads at exabyte scale.
- AI by Design. Build the infrastructure that defines how efficiently the world can create and apply intelligence.
- Real Ownership. Design primitives that will underpin the next decade of AI infrastructure.
- High-Trust Environment. Deep technical work, minimal bureaucracy, shared mission.
- Enduring Horizon. Backed by NEA, Bain Capital, and various luminaries from tech and business. We are building a generational company for decades, not quarters or a product cycle.

Compensation & Benefits
- Competitive salary, meaningful equity, and a substantial bonus for top performers
- Flexible time off plus comprehensive health coverage for you and your family
- Support for research, publication, and deep technical exploration

At Granica, you will shape the fundamental infrastructure that makes intelligence itself efficient, structured, and enduring. Join us to build the foundational data systems that power the future of enterprise AI!
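One of the maintenance loops this role owns, data drift detection, reduces to comparing a column's serving-time distribution against its training-time distribution. Here is a minimal sketch using the population stability index (PSI); the bins, example distributions, and the 0.2 rule-of-thumb threshold are illustrative assumptions, not Granica's method.

```python
import math

def psi(expected: list[float], actual: list[float], eps: float = 1e-6) -> float:
    """Population stability index between two binned distributions
    (each a list of bin fractions summing to ~1)."""
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

baseline = [0.25, 0.25, 0.25, 0.25]   # column distribution at training time
today    = [0.40, 0.30, 0.20, 0.10]   # column distribution observed at serving time

drifted = psi(baseline, today) > 0.2  # common rule-of-thumb alert threshold
```

In a real pipeline this check would run per column on each retraining cycle, with the bin edges fixed at training time so that scores are comparable across runs.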
2026-03-04 8:44
Product Manager, AI Platform
Fluidstack
51-100
$180,000 – $250,000
United States
Full-time
Remote
false
About Fluidstack
At Fluidstack, we're building the infrastructure for abundant intelligence. We partner with top AI labs, governments, and enterprises, including Mistral, Poolside, Black Forest Labs, Meta, and more, to unlock compute at the speed of light.

We're working with urgency to make AGI a reality. As such, our team is highly motivated and committed to delivering world-class infrastructure. We treat our customers' outcomes as our own, taking pride in the systems we build and the trust we earn. If you're motivated by purpose, obsessed with excellence, and ready to work very hard to accelerate the future of intelligence, join us in building what's next.

About the role
We're hiring a Product Manager to own our AI platform roadmap, including managed inference and agent platforms. You'll define how Fluidstack enables customers to deploy, scale, and optimize LLM inference workloads, from model serving and routing to agent orchestration and compound AI systems. The role requires balancing customer needs for low latency and high throughput with the operational realities of GPU utilization, cost efficiency, and platform reliability. You'll work across engineering, ML research, and go-to-market teams to position Fluidstack against inference-first competitors like Together AI, Fireworks, Baseten, Modal, and Replicate.

What you'll do
- Own the product strategy and roadmap for managed inference services, including model deployment, autoscaling, multi-LoRA serving, and inference optimization
- Define requirements for agent platform capabilities: structured outputs, function calling, memory primitives, tool integration, and multi-step reasoning workflows
- Drive decisions on which inference optimizations to prioritize: speculative decoding, continuous batching, KV cache management, quantization support, and custom kernel integration
- Partner with ML infrastructure engineers to design APIs, SDKs, and deployment workflows that support model fine-tuning, version management, and A/B testing
- Work with datacenter teams to optimize GPU allocation strategies, balancing dedicated vs. serverless deployments, cold start latency, and cost-per-token economics
- Analyze competitive offerings from Together AI (inference optimization stack), Fireworks (custom inference engine), Baseten (training-to-inference integration), and Modal (serverless architecture)
- Define pricing models that align with customer usage patterns (tokens, requests, GPU-hours) while maintaining healthy unit economics
- Conduct customer research to understand inference workload requirements: latency SLAs, throughput targets, model size constraints, and integration needs
- Translate customer feedback into feature specifications, including support for new model architectures, framework integrations (vLLM, TensorRT-LLM, TGI), and observability tooling
- Build go-to-market materials: reference architectures, performance benchmarks, cost calculators, and migration guides for customers moving from self-hosted or competing platforms

About you
- 5+ years of product management experience, with at least 3 years focused on AI/ML infrastructure, inference platforms, or developer tools
- Strong technical understanding of transformer architectures, inference optimization techniques, and production ML systems
- Experience building products for technical users deploying LLMs in production (ML engineers, research scientists, AI application developers)
- Track record of shipping features that improved inference latency, throughput, or cost efficiency, backed by quantitative metrics
- Deep familiarity with the inference ecosystem: serving frameworks (vLLM, TensorRT-LLM, TGI), model formats (GGUF, SafeTensors), and API standards (OpenAI-compatible endpoints)
- Understanding of GPU memory constraints, batching strategies, and the tradeoffs between latency-optimized and throughput-optimized serving
- Ability to translate complex technical concepts (speculative decoding, PagedAttention, multi-LoRA) into clear customer value propositions
- Experience conducting competitive analysis in the inference market, including pricing elasticity, feature differentiation, and customer acquisition patterns
- Comfort working with engineering teams to debug performance bottlenecks, analyze profiling data, and prioritize kernel-level optimizations
- Bonus: experience with agent frameworks (LangChain, LlamaIndex, AutoGPT), compound AI patterns, or model fine-tuning workflows

Compensation
To provide greater transparency to candidates, we share base pay ranges for all US-based job postings. Our compensation package includes base salary, equity, benefits, and, for applicable roles, commission plans. Our cash compensation range for this role is $180,000-$250,000. Final offers vary based on geography, candidate experience, relevant credentials, and other factors. Outstanding candidates may be eligible for adjusted terms plus meaningful equity. We are committed to pay equity and transparency.

Fluidstack is an Equal Employment Opportunity Employer.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Fluidstack will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

You will receive a confirmation email once your application has successfully been accepted. If there is an error with your submission and you did not receive a confirmation email, please email careers@fluidstack.io with your resume/CV, the role you've applied for, and the date you submitted your application; someone from our recruiting team will be in touch.
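The Fluidstack role above mentions cost-per-token economics and building cost calculators for customers. As a minimal illustration of that arithmetic (all prices, throughput, and utilization figures below are hypothetical, not Fluidstack's), here is a sketch of a break-even serving-cost calculation:

```python
def cost_per_million_tokens(gpu_hourly_cost: float,
                            tokens_per_second: float,
                            utilization: float) -> float:
    """Break-even serving cost per 1M output tokens for a single GPU.

    gpu_hourly_cost:   dollars per GPU-hour (hypothetical)
    tokens_per_second: sustained decode throughput of the deployment
    utilization:       fraction of the hour the GPU is doing useful work
    """
    tokens_per_hour = tokens_per_second * 3600 * utilization
    return gpu_hourly_cost / tokens_per_hour * 1_000_000

# Example: a $2.50/hr GPU sustaining 1,000 tok/s at 60% utilization
cost = cost_per_million_tokens(2.50, 1000, 0.60)
```

This is why batching and utilization dominate unit economics: doubling either halves the cost per token at fixed hardware price.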
2026-03-04 7:59
Senior Fullstack Software Engineer
Heidi Health
201-500
$150,000 – $210,000
United Kingdom
Full-time
Remote
false
Who are Heidi?
Heidi is building an AI Care Partner that supports clinicians every step of the way, from documentation to delivery of care. We exist to double healthcare's capacity while keeping care deeply human. In 18 months, Heidi has returned more than 18 million hours to clinicians and supported over 73 million patient visits. Today, more than two million patient visits each week are powered by Heidi across 116 countries and over 110 languages. Founded by clinicians, Heidi brings together clinicians, engineers, designers, scientists, creatives, and mathematicians, working with a shared purpose: to strengthen the human connection at the heart of healthcare. Backed by nearly $100 million in total funding, Heidi is expanding across the USA, UK, Canada, and Europe, partnering with major health systems including the NHS, Beth Israel Lahey Health, MaineGeneral, and Monash Health, among others. We move quickly where it matters and stay grounded in what's proven, shaping healthcare's next era. Ready for the challenge?

The Role
The UK healthcare system is defined by its friction: complex billing requirements, rigid EHRs, and administrative burden that pulls clinicians away from patients. We're looking for a Senior Software Engineer to turn that friction into flow. You'll build the systems that make Heidi feel native to American healthcare. That means going deep into the infrastructure clinicians actually use (Epic, athena, eClinicalWorks) and making Heidi work seamlessly inside those workflows. It means building AI systems that handle the complexity of UK billing so clinicians don't have to. You'll work across the full stack of what makes Heidi valuable in the US market: from the AI pipelines that understand clinical documentation to the integrations that put the right information in the right place at the right time. This isn't just localisation. 
It's building the definitive clinical AI experience for the world's most demanding healthcare market.

What you'll do
- Build systems that live inside clinical workflows: You'll shape how Heidi integrates with the EHRs that run American healthcare. The goal isn't connectivity; it's making Heidi feel like a native capability, not a plugin.
- Turn clinical complexity into simple experiences: US healthcare has layers of billing rules, compliance requirements, and payer constraints. You'll build systems that absorb that complexity so clinicians never see it.
- Build for trust and quality: Write clean, testable code with strong interfaces, thoughtful error handling, and observability. Clinicians, operators, and downstream systems depend on these workflows.
- Own outcomes, not just code: You'll care about whether the things you build actually help clinicians and improve practice revenue, not just whether they technically work.
- Ship agentic workflow functionality: Build systems where AI assists with extraction, reconciliation, and drafting across workflows, with human review, auditability, and clear controls.
- Operate in close collaboration: Work day-to-day in a highly collaborative environment, including frequent pairing and shared ownership of design and implementation.
- Grow with the domain: Learn how healthcare organisations operate in practice, especially the requirements and constraints that come with serving US customers, and translate that into product improvements.

What we're looking for
- 5+ years of software engineering experience, with a track record of shipping complex systems that real users depend on.
- Strong full-stack fundamentals and experience contributing to user-facing products end-to-end.
- Sound engineering judgment: You make sensible trade-offs, keep scope clear, and improve quality through testing, readable code, and thoughtful design.
- Ownership and follow-through: You take responsibility for what you commit to, communicate clearly when something changes, and unblock yourself or escalate early.
- Collaborative working style: You work well with others, enjoy building in a tight feedback loop, and are comfortable pairing and sharing work in progress.
- Comfort with ambiguity: You can engage with messy problems, ask good questions, and drive toward a practical, shippable solution.
- Fluency with AI coding tools: You use modern AI tools to accelerate delivery, while staying rigorous about correctness and validation.
- Experience with agentic frameworks, modelling complex domains, orchestration, and event-driven architectures is a plus.

What do we believe in?
Heidi builds for the future of healthcare, not just the next quarter, and our goals are ambitious because the world's health demands it. We believe in progress built through precision, pace, and ownership.
- Live Forever: Every release moves care forward, measured, safe, and built to last. Data guides us, but patients define the truth that matters.
- Practice Ownership: Decisions follow logic and proof, not hierarchy. Exceptional care demands exceptional standards in our work, our thinking, and our character.
- Small Cuts Heal Faster: Stability earns trust, speed delivers impact. Progress is about learning fast without breaking what people depend on.
- Make others better: Feedback is direct, kindness is constant, and excellence lifts everyone. Our success is measured by collective growth, not individual output.

Our mission is clear: expand the world's capacity to care, and do it without losing the humanity that makes care worth delivering.

Why you should join Heidi
- Real product momentum: We're not trying to generate interest, we're channeling it. This is a rare chance to create a global impact as you immerse yourself in Australia's fastest growing start-up.
- Equity from day one: When Heidi wins, you win. You'll share directly in the success you help create.
- Unmatched impact: Play a pivotal role at a critical growth moment, all while working on a product that delivers tangible value to clinicians and patients every day.
- Work alongside world-class talent: Join a team of operators and builders who've scaled unicorns.
- Global reach: Help shape our international expansion as we bring Heidi to key international markets.
- Growth and balance: Enjoy a personal development budget, dedicated wellness days, subsidised gym membership, and your birthday off to recharge.
- Flexibility that works: A hybrid environment, with 3 days in the office.

Heidi's commitment to Diversity, Equity and Inclusion
Heidi is dedicated to creating an equitable, inclusive, and supportive work environment that brings people together from diverse backgrounds, experiences, and perspectives. Our strength is in our differences. We're proud to be an equal opportunity employer and welcome all applicants as we're committed to promoting a culture of opportunity for all. Help us reimagine primary care and change the face of healthcare in Australia and then around the world.
2026-03-04 6:29
Senior AI Researcher- Reinforcement learning (f/m/d)
AlephAlpha
201-500
Germany
Full-time
Remote
false
Our Mission
Aleph Alpha is one of the few companies in Europe with end-to-end in-house model development, including pre- and post-training. We're building models that have general-purpose capabilities, but also specifically excel at addressing the needs of our customers. We're growing our post-training team in Heidelberg (or hybrid in Germany) and are looking for an AI Researcher who combines a deep theoretical understanding of reinforcement learning methods with a desire to improve on the state of the art and improve model capabilities in large-scale training.

Team Culture
At Aleph Alpha, we foster a culture built on ownership, autonomy, and empowerment. Teams and individual contributors are trusted to take responsibility for their work and drive meaningful impact. We maintain a flat organizational structure with efficient, supportive management that enables quick decision-making, open communication, and a strong sense of shared purpose.

About the role
As a (senior) AI Researcher for reinforcement learning, you will shape and improve the underlying RL methodology, maintain a high-quality training codebase, and conduct large-scale experiments to hill-climb our performance benchmarks. This role is for you if you have both a strong theoretical background in RL and the engineering drive to bring these methods into production and improve on them as part of the reinforcement learning team. In your day-to-day you will conduct large-scale reinforcement learning experiments, derive hypotheses from the results, and iterate on both the implementation and methodology based on the observations. 
Together with a collaborative team, you will have direct impact on the models that we ship to our customers. This role is for Aleph Alpha Research GmbH.

Your Responsibilities
- Hill-climb in large-scale training: Conduct large-scale LLM training runs, analyze evaluation scores in depth, propose hypotheses for improvement, and directly implement them in order to maximize performance on our benchmarks.
- Theoretical innovation: Stay at the bleeding edge of RL research. You will identify, implement, and iterate on novel approaches to multi-turn reinforcement learning.
- Scale our training infrastructure: Identify bottlenecks in our training setup and optimize our RL training loops for large-scale training.
- Cross-functional collaboration: Partner with our other post-training teams to turn raw feedback into actionable training signals, ensuring that our RL iterations lead to measurable improvements in downstream performance.

Your Profile

Basic Qualifications
- A deep understanding of reinforcement learning theory and how it relates to modern RL methods.
- Experience with multi-node LLM training (ideally using RL). You understand how to scale multi-node RL training runs and can reason about and implement distributed algorithms.
- Familiarity with statistical methods for evaluation and experiment design.
- Ability to reason about what an evaluation or environment measures and whether it matters: not just running benchmarks, but understanding them.
- Strong Python skills and comfort with ML tooling (especially torch.distributed).
- Willingness to relocate to Heidelberg or travel regularly (potentially weekly).

Preferred Qualifications
- PhD in reinforcement learning or equivalent research experience.
- A history of contributions to top-tier venues (NeurIPS, ICML, ICLR, etc.), specifically regarding RL.
- Experience evaluating LLMs and crafting environments for training.

Compensation and Benefits
Become part of an AI revolution!
- 30 days of paid vacation
- Access to a variety of fitness & wellness offerings via Wellhub
- Mental health support through nilo.health
- Substantially subsidized company pension plan for your future security
- Subsidized Germany-wide transportation ticket
- Budget for additional technical equipment
- Flexible working hours for better work-life balance and a hybrid working model
- Virtual Stock Option Plan
- JobRad® Bike Lease
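For readers less familiar with the policy-gradient idea that underlies the RL post-training work described above, here is a deliberately tiny, stdlib-only sketch: REINFORCE with a running baseline on a two-armed bandit. This is an illustration of the principle only (all rewards and hyperparameters are invented), not Aleph Alpha's stack; production LLM RL uses methods like PPO over torch.distributed.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def train_bandit(steps=2000, lr=0.1, seed=0):
    """REINFORCE with a running-mean baseline on a two-armed bandit."""
    rng = random.Random(seed)
    logits = [0.0, 0.0]      # policy parameters, one logit per arm
    rewards = [0.2, 0.8]     # hypothetical expected reward per arm
    baseline = 0.0
    for _ in range(steps):
        probs = softmax(logits)
        action = rng.choices([0, 1], weights=probs)[0]
        reward = rewards[action]
        advantage = reward - baseline            # reward relative to baseline
        baseline += 0.05 * (reward - baseline)   # update running baseline
        # gradient of log pi(action) w.r.t. logit i: one_hot(action)_i - probs[i]
        for i in range(2):
            grad = (1.0 if i == action else 0.0) - probs[i]
            logits[i] += lr * advantage * grad
    return softmax(logits)

final_probs = train_bandit()
# After training, the policy should prefer the higher-reward arm.
```

The same update rule, scaled up to token-level log-probabilities and learned reward signals, is the core of RL fine-tuning for language models.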
2026-03-04 5:59
Research Scientist (Measurement and Evaluation)
Abridge
201-500
$220,000 – $280,000
United States
Full-time
Remote
false
About Abridge
Abridge was founded in 2018 with the mission of powering deeper understanding in healthcare. Our AI-powered platform was purpose-built for medical conversations, improving clinical documentation efficiency while enabling clinicians to focus on what matters most: their patients. Our enterprise-grade technology transforms patient-clinician conversations into structured clinical notes in real time, with deep EMR integrations. Powered by Linked Evidence and our purpose-built, auditable AI, we are the only company that maps AI-generated summaries to ground truth, helping providers quickly trust and verify the output. As pioneers in generative AI for healthcare, we are setting the industry standards for the responsible deployment of AI across health systems. We are a growing team of practicing MDs, AI scientists, PhDs, creatives, technologists, and engineers working together to empower people and make care make more sense. We have offices located in the Mission District in San Francisco, the SoHo neighborhood of New York, and East Liberty in Pittsburgh.

The Role
Abridge is hiring Research Scientists to join our Strategic Research team to rigorously evaluate and advance the real-world impact of ambient AI on patient outcomes, care quality, and provider experience. In this role, you will design and lead empirical studies of Abridge models and products in partnership with health systems, leveraging large-scale clinical conversation data to generate new insights about care delivery, documentation quality, clinical decision-making, and downstream patient outcomes. You will operationalize complex constructs, such as quality of care, safety, cognitive burden, and return on investment, using principled measurement frameworks and rigorous experimental or quasi-experimental methods. Working closely with our science and product teams, you will also develop evaluation frameworks that inform model development and product strategy. 
Your work will contribute to broader scientific understanding of how AI systems affect patients and providers in the real world. This role sits at the intersection of methodological innovation and practical impact, applying serious measurement science to systems that directly shape patient care.

About Strategic Research: The Strategic Research team at Abridge has two primary functions: (i) designing and conducting rigorous research studies investigating the impact of ambient AI as an intervention, in partnership with collaborating health systems; and (ii) supporting external research efforts that leverage Abridge data. In addition to driving and supporting empirical studies of the impact of ambient AI-enabled technologies, the team works closely with our science and engineering teams on core model evaluation. The common thread to all our work is ensuring that every partner-facing research initiative meets the highest standards of rigor, credibility, and strategic value.

What You'll Do
- Design and conduct evaluations of Abridge models and products
- Engage with external researchers and other stakeholders on designing and conducting research on ambient AI and research that leverages Abridge data
- Develop a strong user-centric and patient-centric mindset, grounding the research in empathy for the real-world experience of providers and patients
- Collaborate across our cross-functional product teams to ensure the research is deeply informed by current practices and our product roadmap
- Write technical reports and give presentations to internal and external stakeholders
- Actively contribute to the wider research community by publishing original research in leading peer-reviewed publication venues
- Mentor research interns

What You'll Bring
- PhD in statistics, biostatistics, computer science, economics, information systems, clinical informatics, or a related field.
- Expertise in rigorous quantitative or mixed-methods approaches for conducting evaluations using observational and experimental data.
- Strong research track record in evaluation and measurement, as evidenced by high-impact publications in peer-reviewed journals or conferences.
- A problem-before-method mindset: you do not change the question to make it amenable to simple analysis, but instead push the methodological frontier to solve the real-world problems that matter to health systems, providers, and patients.
- A curious, adaptable, and proactive mindset, with a desire to learn and grow as a researcher in a fast-paced startup environment.
- Passion for and understanding of Abridge's mission.
- Willingness to work from our New York City office at least three times per week.

This position requires a commitment to a hybrid work model, with the expectation of coming into the office a minimum of three (3) times per week. Relocation assistance is available for candidates willing to move to New York City. We value people who want to learn new things, and we know that great team members might not perfectly match a job description. If you're interested in the role but aren't sure whether or not you're a good fit, we'd still like to hear from you.

Why Work at Abridge?
At Abridge, we're transforming healthcare delivery experiences with generative AI, enabling clinicians and patients to connect in deeper, more meaningful ways. Our mission is clear: to power deeper understanding in healthcare. We're driving real, lasting change, with millions of medical conversations processed each month. Joining Abridge means stepping into a fast-paced, high-growth startup where your contributions truly make a difference. Our culture requires extreme ownership: every employee has the ability to (and is expected to) make an impact on our customers and our business. Beyond individual impact, you will have the opportunity to work alongside a team of curious, high-achieving people in a supportive environment where success is shared, growth is constant, and feedback fuels progress. At Abridge, it's not just what we do, it's how we do it. 
Every decision is rooted in empathy, always prioritizing the needs of clinicians and patients.We’re committed to supporting your growth, both professionally and personally. Whether it's flexible work hours, an inclusive culture, or ongoing learning opportunities, we are here to help you thrive and do the best work of your life.If you are ready to make a meaningful impact alongside passionate people who care deeply about what they do, Abridge is the place for you.
How we take care of Abridgers:
- Generous Time Off: 14 paid holidays, flexible PTO for salaried employees, and accrued time off for hourly employees
- Comprehensive Health Plans: Medical, dental, and vision coverage for all full-time employees and their families
- Generous HSA Contribution: If you choose a High Deductible Health Plan, Abridge makes monthly contributions to your HSA
- Paid Parental Leave: Generous paid parental leave for all full-time employees
- Family Forming Benefits: Resources and financial support to help you build your family
- 401(k) Matching: Contribution matching to help invest in your future
- Personal Device Allowance: Tax-free funds for personal device usage
- Pre-tax Benefits: Access to Flexible Spending Accounts (FSA) and commuter benefits
- Lifestyle Wallet: Monthly contributions for fitness, professional development, coworking, and more
- Mental Health Support: Dedicated access to therapy and coaching to help you reach your goals
- Sabbatical Leave: Paid sabbatical leave after 5 years of employment
- Compensation and Equity: Competitive compensation and equity grants for full-time employees
- ... and much more!

Equal Opportunity Employer
Abridge is an equal opportunity employer and considers all qualified applicants equally without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, or disability.

Staying safe: protect yourself from recruitment fraud
We are aware of individuals and entities fraudulently representing themselves as Abridge recruiters and/or hiring managers. Abridge will never ask for financial information or payment, or for personal information such as a bank account number or social security number, during the job application or interview process. Any emails from the Abridge recruiting team will come from an @abridge.com email address. You can learn more about how to protect yourself from these types of fraud by referring to this article. 
Please exercise caution and cease communications if something feels suspicious about your interactions.
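The Abridge role above calls for quasi-experimental methods when randomization isn't feasible. One of the simplest such designs is difference-in-differences: compare the change in an outcome for treated units against the change for untreated units over the same period. A minimal sketch, with entirely made-up numbers for illustration:

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD effect = (treated group's change) - (control group's change)."""
    def mean(xs):
        return sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical minutes of documentation time per visit, before and after
# an ambient-AI rollout (numbers invented for this example):
effect = diff_in_diff(
    treat_pre=[10, 12, 11], treat_post=[6, 7, 5],    # clinics using the tool
    ctrl_pre=[10, 11, 12],  ctrl_post=[9, 10, 11],   # comparison clinics
)
# effect is -4.0: documentation time fell 4 minutes more in treated clinics
```

The design's key assumption, parallel trends, is exactly the kind of construct-validity question the role's measurement frameworks must address.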
2026-03-04 5:29
Staff Product Manager, AI-Powered Workflows
Vanta
1001-5000
$221,000 – $260,000
United States
Full-time
Remote
false
At Vanta, our mission is to help businesses earn and prove trust. We believe that security should be monitored and verified continuously, and we empower companies to practice better security and prove it with ease. Vanta has a kind and talented team, and while some have prior security experience, many have been successful at Vanta without it. As Vanta continues to expand upmarket, we have a significant opportunity to redefine how compliance workflows are designed, automated, and executed. Our upmarket customers operate with increasingly sophisticated, highly customized compliance processes. The ability to support and automate these workflows directly within Vanta represents a powerful lever for differentiation and long-term growth.As a Staff Product Manager at Vanta, you will define the vision, strategy, and execution for a highly AI-centric workflow builder and execution engine. This platform will enable Vanta to serve our most complex enterprise use cases while opening up an entirely new class of automation and extensibility across the product. 
This is a true 0→1 initiative with meaningful strategic upside: executed well, it becomes a core pillar of how Vanta scales upmarket, deepening our competitive moat. This role offers a rare opportunity to build a net-new strategic product from the ground up, working at the cutting edge of AI to unlock automation for complex compliance workflows and shape Vanta's future with enterprise customers.

What you'll do as a Staff Product Manager, AI Workflows at Vanta:
- Define and own the product vision, strategy, and roadmap for Vanta's AI-centric workflow builder
- Conduct rigorous AI evaluations and performance assessments, continuously analyzing data to optimize AI-powered features
- Partner deeply with Engineering and AI teams to design the technical architecture for workflow orchestration at scale
- Partner closely with teams across Vanta to understand all potential use cases and distill a clear direction for impact
- Drive product discovery with upmarket customers to understand their custom compliance workflow needs and translate them into product requirements
- Lead cross-functional execution to deliver the workflow builder, managing dependencies across multiple teams and ensuring timely delivery of this strategic initiative
- Establish measurement frameworks and success metrics to track product adoption, AI performance, and customer value

How to be successful in this role:
- Have 10+ years of product management experience, with at least 2 years building AI-powered products and 2 years working on workflow, orchestration, or automation platforms
- Bring experience and drive to ship AI-powered products, including rapid experimentation, rigorous evaluation, and LLM prompt tuning to ensure high product quality
- Demonstrate proven experience designing and shipping workflow engines, orchestration platforms, or automation tools; you understand the technical architecture required to bring these systems to life
- Possess the ability to set product vision and strategy for a major product, with strong frameworks for making complex tradeoffs and prioritization decisions
- Show experience building products for upmarket customers with complex, custom requirements, and the ability to balance product flexibility with scalability
- Demonstrate a strong track record of influencing and collaborating with Engineering, Design, AI/ML, and GTM teams to drive product outcomes
- Communicate excellently, in writing and verbally, to convey ideas, requirements, and product vision to team members, stakeholders, and executives
- Be open to using AI to amplify your skills and strengthen your work, demonstrating curiosity, a willingness to learn, and sound judgment in applying AI responsibly to improve efficiency and impact

What you can expect as a Vanta’n:
- Industry-competitive salary and equity
- Comprehensive medical, dental, and vision coverage, with 100% of employee-only benefit premiums covered for most medical plans
- 16 weeks fully-paid parental leave for all new parents
- Health & wellness stipend
- Remote workspace, internet, and cellphone stipend
- Commuter benefits for team members who report to the SF and NYC offices
- Family planning benefits
- Matching 401(k) contribution with immediate vesting
- Flexible PTO policy, plus 80 hours of sick time
- 11 company-paid holidays
- Virtual team building activities, lunch and learns, and other company-wide events!
- Offices in SF, NYC, London, Dublin, Tel Aviv, and Sydney

To provide greater transparency to candidates, we share base pay ranges for all US-based job postings regardless of state. We set standard base pay ranges for all roles based on function, level, and country location, benchmarked against similar-stage growth companies. Final offer amounts are determined by multiple factors and may vary based on candidate location, skills, depth of work experience, and relevant licenses/credentials. 
#LI-remote

At Vanta, we are committed to hiring diverse talent of different backgrounds and, as such, it is important to us to provide an inclusive work environment for all. We do not discriminate on the basis of race, gender identity, age, religion, sexual orientation, veteran or disability status, or any other protected class. As an equal opportunity employer, we encourage and welcome people of all backgrounds to apply.

About Vanta
We started in 2018, in the wake of several high-profile data breaches. Online security was only becoming more important, but we knew firsthand how hard it could be for fast-growing companies to invest the time and manpower it takes to build a solid security foundation. Vanta was inspired by a vision to restore trust in internet businesses by enabling companies to improve and prove their security. From our early days automating security monitoring for compliance standards like SOC 2, HIPAA, and ISO 27001 to creating the world's leading Trust Management Platform, our vision remains unchanged. Now more than ever, making security continuous, not just a point-in-time check, is essential. Thousands of companies rely on Vanta to build, maintain, and demonstrate their trust, all in a way that's real-time and transparent.
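The core of any workflow builder and execution engine like the one the Vanta role describes is dependency-ordered step execution. As a toy sketch only (step names and logic are invented, and real engines add retries, persistence, and parallelism), Python's stdlib `graphlib` can order and run a small step graph:

```python
from graphlib import TopologicalSorter

def run_workflow(steps: dict) -> list:
    """Run steps in dependency order.

    steps maps name -> (tuple of dependency names, callable taking the
    dependencies' results). Returns the execution order.
    """
    graph = {name: deps for name, (deps, _) in steps.items()}
    order = list(TopologicalSorter(graph).static_order())  # deps come first
    results = {}
    for name in order:
        deps, fn = steps[name]
        results[name] = fn(*(results[d] for d in deps))
    return order

# Hypothetical compliance workflow: collect evidence, review it, report.
order = run_workflow({
    "collect_evidence": ((), lambda: "evidence"),
    "review":           (("collect_evidence",), lambda ev: f"reviewed {ev}"),
    "report":           (("review",), lambda r: f"report on {r}"),
})
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, which is the kind of validation a workflow builder surfaces to users at design time.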
2026-03-04 4:59
Director, Forward Deployed Engineering
Harvey
501-1000
$320,000 – $360,000
United States
Full-time
Remote
false
Why Harvey
At Harvey, we're transforming how legal and professional services operate, not incrementally, but end-to-end. By combining frontier agentic AI, an enterprise-grade platform, and deep domain expertise, we're reshaping how critical knowledge work gets done for decades to come. This is a rare chance to help build a generational company at a true inflection point. With 1000+ customers in 58+ countries, strong product-market fit, and world-class investor support, we're scaling fast and defining a new category in real time. The work is ambitious, the bar is high, and the opportunity for growth (personal, professional, and financial) is unmatched. Our team is sharp, motivated, and deeply committed to the mission. We move fast, operate with intensity, and take real ownership of the problems we tackle, from early thinking to long-term outcomes. We stay close to our customers, from leadership to engineers, and work together to solve real problems with urgency and care. If you thrive in ambiguity, push for excellence, and want to help shape the future of work alongside others who raise the bar, we invite you to build with us. At Harvey, the future of professional services is being written today, and we're just getting started.

Role Overview
Harvey is building a Forward Deployed Engineering program to deliver a white-glove, tailored experience for our most strategically important accounts. As Director of Forward Deployed Engineering, you will own that program end-to-end: building the team, defining the operating model, and ensuring Harvey's top accounts feel like Harvey works exclusively for them. This is a rare opportunity to sit at the intersection of engineering leadership, enterprise client strategy, and product influence. You'll lead a team of software engineers working directly with clients, while partnering closely with legal engineering, Sales, CS, and Product to shape what gets built and for whom. This is not a product function. 
The job's primary goal is to make Harvey's most valuable customers wildly successful, while also influencing the product roadmap with direct customer feedback.

What You'll Do

Team Leadership
- Build, hire, and manage a team of software engineers and managers deployed into strategic accounts
- Define staffing models, engagement structures, and capacity allocation across active and prospective accounts
- Develop specialist pods of engineers for new verticals (M&A, litigation, fund formation, compliance, etc.) that can be drawn on across engagements
- Set and uphold quality standards for client deliverables, documentation, and knowledge transfer

Technical Execution
- Maintain deep technical fluency to scope custom builds accurately, unblock engineering decisions, and evaluate the quality of delivered solutions
- Oversee the design and implementation of tailored workflows, retrieval systems, agent tools, and knowledge sources built on Harvey's platform
- Ensure solutions are built to be operationalized, with evaluations, documentation, and user training

Product Influence
- Identify patterns across client engagements that signal gaps or opportunities in Harvey's core platform
- Bring field signal to product and engineering leadership with specificity: what clients need, how often, and what it would take to generalize

What You Have
- 10+ years of experience in software engineering, with at least 5 years leading engineering teams (bonus if in customer-facing contexts)
- Deep familiarity with LLM application development: retrieval-augmented generation, agent architectures, structured outputs, and evaluation design
- Experience building and scaling technical teams: hiring, developing, and retaining engineers across specializations
- Exceptional communication skills; able to translate complex technical work into clear language for both engineers and C-suite clients
- Low ego, high accountability; you're as comfortable rolling up your sleeves on a client problem as you are presenting to the board

Nice to Have
- Prior experience building products for the legal, asset management, banking, or insurance sectors
- Familiarity with enterprise legal workflows: document review, contract analysis, compliance, M&A diligence

Why This Role
Harvey's most strategic accounts, the firms and in-house teams that set the standard for the rest of the industry, deserve more than a great product. They deserve a team that shows up for them. As Director of Forward Deployed Engineering, you'll build that team, define what excellence looks like, and make Harvey indispensable to the companies that matter most.

Compensation
$320,000 - $360,000 USD

#LI-PM1

Harvey is an equal opportunity employer and does not discriminate on the basis of race, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition, or any other basis protected by law. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made by emailing accommodations@harvey.ai
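The Harvey role above lists retrieval-augmented generation among the techniques its teams build on. As a toy sketch of the retrieval step only (bag-of-words cosine similarity standing in for the learned embeddings and vector indexes real systems use; the documents below are invented), retrieval ranks candidate passages by similarity to the query before a generator sees them:

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts; a stand-in for a learned embedding."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Return the k documents most similar to the query."""
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

docs = [
    "indemnification obligations of the seller under the merger agreement",
    "office lease renewal terms and rent escalation",
]
top = retrieve("seller indemnification in the merger", docs)
```

In a full RAG pipeline the top-ranked passages are then inserted into the generator's prompt, which is where structured outputs and evaluation design (also named in the role) come into play.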
2026-03-04 3:29
Software Engineer Systems Research Internship, Applied Emerging Talent (Summer 2026)
OpenAI
5000+
$67 – $67 / hour
United States
Intern
Remote
false
About the Team
The Applied team works across research, engineering, product, and design to bring OpenAI’s technology to the world. We seek to learn from deployment and broadly distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely. We aim to make our innovative tools globally accessible, transcending geographic, economic, or platform barriers. Our commitment is to facilitate the use of AI to enhance lives, fostered by rigorous insights into how people use our products.
About the Role
A systems research internship is for people who love the real-world intersection of systems engineering and research: you’ll investigate a hard systems problem, build something meaningful, and measure it carefully. The goal is practical impact—making Applied Systems better: more efficient, more scalable, and more reliable.

OpenAI is currently recruiting candidates interested in a 13-week, paid, in-person internship based in our San Francisco office during Summer 2026. In some cases, it may be extended for an additional 13 weeks (for a total of up to 26 weeks), based on team needs, candidate interest, and performance.

In this role, you will typically focus on improving real systems in areas like:
- Distributed systems & storage (throughput, latency, consistency, durability)
- Compute & scheduling (GPU/accelerator utilization, job orchestration, queuing)
- Performance engineering (profiling, bottlenecks, scalability, capacity planning)
- Reliability & observability (fault tolerance, monitoring, incident learning)
- Networking & data pipelines (data movement, caching, streaming efficiency)
- Systems for ML (training/inference performance, evaluation infrastructure, tooling)

Most projects involve some of these steps:
- Defining a clear hypothesis (“we think X will reduce latency by Y under Z”)
- Instrumenting existing production systems, gathering metrics and detailed analysis to validate the hypothesis
- Building or modifying a real system (prototype or production-quality improvements when appropriate)
- Running experiments/benchmarks and analyzing results
- Communicating tradeoffs and recommendations clearly
- Publishing the research work in technical journals and conferences

Your background looks something like:
- Currently pursuing a PhD in Computer Science, Computer Engineering, or a relevant technical field
- Proficiency coding in C++, Java, Python, Rust, etc.
- Ongoing research on systems topics such as DL/ML, information retrieval, systems security and cryptography, databases, networking, distributed systems, and compilers
- Ability to move fast in an environment where things are sometimes loosely defined and may have competing priorities or deadlines

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates.
For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
2026-03-04 2:44
Lazo - Head of Engineering
Silver.dev
1-10
$72,000 – $96,000
Argentina
Full-time
Remote
false
About Lazo
Lazo is the AI-powered operating system for modern startups—automating finance, tax, legal, payroll, and fundraising workflows so founders can scale faster. We're backed by AlleyCorp, AWS Startups, Google for Startups, Endeavor, and Tampa Bay Wave. Learn more at lazo.us.

The Role
We're uniting strategic technology leadership with hands-on execution to scale our multi-agent AI platform from 0→1→N. You'll shape the technical vision at the leadership table and roll up your sleeves to ship code, harden reliability, and build the engineering muscle that powers Lazo. Expect a healthy mix of strategy, architecture, and coding. You'll set direction, make trade-offs explicit, and still own meaningful PRs until the team scales.

Responsibilities
1. Tech Strategy & Architecture
- Own the technology strategy & roadmap aligned with business and product OKRs.
- Define the reference architecture for agentic systems (LLMs, tool orchestration, data plane, safety/guardrails, evals).
- Establish security & compliance baselines (IAM, secrets, data privacy; SOC 2-readiness) and cost governance (FinOps).
- Present trade-offs, risks, and progress in Leadership reviews (weekly/MBR/QBR).
2. Hands-on Engineering & Delivery
- Ship backend services in Python/TypeScript; drive high-impact PRs and code reviews.
- Orchestrate agents and toolchains (e.g., ADK, BeeAI, n8n, or similar), integrate external APIs/DBs, and build robust pipelines.
- End-to-end DevOps: AWS/GCP, containerization, IaC (Terraform/CDK), CI/CD, observability (logs/metrics/traces), and on-call design.
- Reduce tech debt, improve latency/throughput, and manage infra cost per workflow/client.
3. Reliability, Security & Data
- Define SLOs and error budgets; reduce MTTR and change-fail rate (DORA/SRE).
- Implement data access policies, PII protection, and secure data flows for AI features.
- Drive post-mortems and preventive engineering (runbooks, playbooks, chaos drills where appropriate).
4. Team Building & Culture
- Act as a player-coach: hire, mentor, and level up engineers; install rituals of quality and focused execution.
- Set clear scorecards (DORA, SRE, lead time, review SLAs) integrated with our operating system (Monday + GSheets).
- Foster a culture of thoughtful trade-offs, fast feedback loops, and writing things down.
5. Cross-Functional & Stakeholder Management
- Partner with Product & AI to turn customer problems into scalable solutions.
- Collaborate with Ops, Growth, and Customer teams to ensure reliability, supportability, and launch readiness.
- Manage key vendors/partners and evaluate build-vs-buy decisions with crisp ROI narratives.

What Success Looks Like
- DORA: deployment frequency ↑, lead time ↓, change-fail rate ↓, MTTR ↓.
- Reliability: SLOs met, healthy error budgets, predictable incident management.
- AI Quality: reproducible evaluations, task success rates up, safe/traceable tool use.
- Cost & Scale: infra cost per client/workflow trending down with usage up.
- Team: time-to-productivity for new hires, strong review SLAs, high engagement/eNPS.

Your First 90 Days
Day 30
- Current-state architecture, risks, and cost/security posture mapped.
- First meaningful PR to production + "golden path" for contributions.
- On-call design and hiring plan drafted.
Day 60
- Tech roadmap v1 with quarterly milestones and cost targets.
- Hardened CI/CD, improved test coverage, and change-fail rate trending down.
- 1–2 automated agentic workflows running in production with basic evals.
Day 90
- SLOs defined (e.g., 99.9% core uptime) with dashboards; MTTR < 60 minutes for P1s.
- Tech-debt backlog prioritized with a clear pay-down plan; 1–2 key hires signed.

Qualifications
Must-haves
- 6–8+ years building and scaling software in high-growth environments; 3+ years leading teams at Head/Lead/VP level or equivalent scope.
- Strong backend skills in Python and/or TypeScript, containers (Docker), and cloud (AWS or GCP) with IaC and CI/CD.
- Hands-on experience with LLMs/agentic systems and tool orchestration; solid grasp of data flows and AI safety/guardrails.
- Observability & security fluency (OpenTelemetry or similar; IAM, secrets, hardening).
- Data-driven decision-maker with excellent communication; professional English.
Nice to have
- FinOps, SRE, and SOC 2/compliance experience.
- Domain exposure to finance/accounting/tax/payroll or other back-office systems.
- Open-source contributions; RAG/evaluations experience.
- Spanish; experience with distributed teams across time zones.

Our Evolving Stack
Python/TypeScript · AWS/GCP · Docker · IaC (Terraform/CDK) · Postgres/OLAP · CI/CD (GitHub Actions or similar) · Observability (OpenTelemetry + stack) · Agent orchestration (ADK/BeeAI/n8n or similar) · Feature flags · Secrets management · Security scanners

How We Work
Weekly Leadership review for priorities and decisions; MBR/QBR for strategy & outcomes. Scorecards in Google Sheets + Monday; focus on outcomes over output; PRDs and design docs over meetings.
2026-03-04 1:29
Software Engineer, Backend
Mirage
101-200
$185,000 – $285,000
United States
Full-time
Remote
false
Mirage is an AI-native video platform that intelligently orchestrates production and editing through natural language. Our models leverage contextual awareness to execute the same creative decisions a professional editor would — dramatically improving productivity for experienced teams, while making video creation accessible to anyone.
We’re an interdisciplinary team addressing some of the most difficult technical and creative challenges in generative media. As an early member of our team, you’ll tackle foundational problems that remain largely unsolved across the industry, driving an outsized impact on the future of creative expression.
More about us
Product: Captions by Mirage
Research: Seeing Voices (technical white paper)
Updates: Mirage on X / Twitter
Press: TechCrunch, Forbes AI 50, Fast Company
Our Investors
We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Index Ventures, Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Lenny Rachitsky, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, and more.

Please note that all of our roles will require you to be in-person at our NYC HQ (located in Union Square).

About the role
Backend engineering at Mirage isn’t narrowly defined. We work across product, platform, and machine learning—building the systems, services, APIs, and ML pipelines that power all of our product surfaces. You’ll own critical backend systems end-to-end, partnering closely with product, design, client, and AI teams to turn ambitious ideas into reliable, scalable production systems. This opportunity is ideal for engineers who enjoy solving complex technical problems while staying connected to product and user outcomes.

Responsibilities
- Design, build, and own backend systems end-to-end, including services, APIs, data pipelines, and infrastructure that power our products
- Solve complex technical challenges across distributed systems, scaling, concurrency, and performance
- Integrate and operate large generative AI models in production—deploying, serving, and scaling systems that combine internal research and external capabilities to unlock new product experiences
- Instrument, experiment, and iterate in production to continuously improve system and product quality
- Design and operate core platform infrastructure, including integrations with third-party providers, storage systems, security, and internal APIs

What Makes You a Great Fit
- 5+ years of professional industry experience
- A track record of shipping high-impact systems and/or products in production
- Exceptional problem-solving fundamentals and ability to learn
- Excellent engineering taste combined with a strong sense of practicality and time management
- Able to operate effectively in an extremely fast-paced environment, and to scope and deliver projects end-to-end

Even better if...
- You have worked extensively with LLMs and generative media models, and have a pulse on where the technology is going
- You are grounded, collaborative, and willing to do whatever it takes to help the team win
- You have been a startup founder or an early engineer at one

Benefits:
- Comprehensive medical, dental, and vision plans
- 401K with employer match
- Commuter benefits
- Catered lunch multiple days per week
- Dinner stipend every night if you're working late and want a bite (Grubhub subscription)
- Health & wellness perks (Talkspace, Kindbody, One Medical subscription, HealthAdvocate, Teladoc)
- Multiple team offsites per year, with team events every month
- Generous PTO policy

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. Please note benefits apply to full-time employees only.
2026-03-03 21:14
