⚠️ Sorry, this job is no longer available.

The AI job market moves fast. We keep up so you don't have to.

Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.


New AI Opportunities


Performance Modeling Engineer

OpenAI
$266,000 – $445,000
United States
Full-time
Remote: No
About the Team

OpenAI’s Hardware organization develops system and infrastructure solutions designed for the unique demands of advanced AI workloads. We work closely with architecture, infrastructure, and vendor teams to evaluate system performance and guide critical design decisions.

Our team focuses on building and applying performance modeling frameworks to understand system behavior, quantify tradeoffs, and inform next-generation infrastructure design.

About the Role

We are seeking Performance Modeling Engineers to develop and apply modeling tools that evaluate AI system performance and inform architectural decisions.

In this role, you will work closely with the Performance Modeling Lead and partner teams to analyze system behavior, run simulations or analytical models, and help quantify tradeoffs across compute, memory, networking, and storage. You will contribute to building modeling frameworks and applying them to real-world questions that impact system design and vendor decisions.

This role is well-suited for engineers with strong software or modeling backgrounds who are interested in developing deeper expertise in system architecture and AI infrastructure.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance.

Key Responsibilities

- Develop and maintain performance modeling tools and frameworks.
- Build models to evaluate system behavior across:
  - compute, memory, and interconnect subsystems
  - distributed system scaling and bottlenecks
- Run simulations and analytical models to support architectural tradeoff analysis.
- Collaborate with the Performance Modeling Lead and system architects to answer forward-looking design questions.
- Analyze and interpret modeling outputs, translating results into actionable insights.
- Validate models against real system measurements and workload behavior.
- Contribute to improving modeling fidelity, usability, and scalability.

Qualifications

- Strong software engineering or modeling background (e.g., simulation, systems modeling, or performance analysis).
- Familiarity with system architecture fundamentals (compute, memory, networking).
- Experience with programming and building technical tools or frameworks.
- Ability to reason about performance bottlenecks and scaling behavior.
- Strong analytical skills and comfort working with quantitative models.
- Ability to collaborate across teams and learn new system domains quickly.

Preferred Skills

- Exposure to AI/ML workloads or distributed systems.
- Experience with simulation tools, performance modeling, or systems analysis.
- Familiarity with data center infrastructure or large-scale systems.
- Experience working with performance data, benchmarking, or profiling tools.
- Interest in system architecture and hardware/software co-design.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss, or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
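The analytical models this posting describes often start from something as simple as a roofline bound: attainable throughput is the lesser of the compute roof and the memory roof. A minimal sketch, with all hardware numbers hypothetical and chosen only for illustration:

```python
def attainable_tflops(arithmetic_intensity: float,
                      peak_tflops: float,
                      mem_bw_tbps: float) -> float:
    """Roofline bound: the minimum of the compute roof and the memory roof.

    arithmetic_intensity is in FLOPs per byte; memory bandwidth is in TB/s,
    so intensity * bandwidth gives TFLOP/s on the memory roof.
    """
    return min(peak_tflops, arithmetic_intensity * mem_bw_tbps)

# A kernel at 300 FLOPs/byte on a notional accelerator with a
# 1000 TFLOP/s peak and 3 TB/s of HBM bandwidth is memory-bound:
print(attainable_tflops(300, 1000.0, 3.0))  # 900.0
```

Real modeling frameworks layer networking, storage, and scheduling effects on top of this, but a bound like this is often the first sanity check a model is validated against.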

Hardware Architecture Expert - 3P

OpenAI
$342,000 – $555,000
United States
Full-time
Remote: No
About the Team

OpenAI’s Hardware organization develops system and infrastructure solutions optimized for advanced AI workloads. We collaborate across research, software, and external hardware partners to design and deploy next-generation AI systems at scale.

Our team works closely with silicon vendors and system partners to evaluate emerging technologies, validate performance characteristics, and ensure that hardware capabilities translate effectively to real-world AI workloads.

About the Role

We are seeking a 3P Hardware Architecture Expert with deep expertise in GPU and accelerator architectures to engage directly with silicon vendors and guide hardware decisions for AI infrastructure.

In this role, you will evaluate architectural tradeoffs across compute, memory, and interconnect systems, translating vendor specifications into real-world workload impact. You will play a critical role in early silicon evaluation, benchmarking, and performance validation, helping ensure that next-generation hardware meets the needs of our workloads.

This role is highly hands-on and requires both deep technical understanding and the ability to engage at a high level with partners such as NVIDIA and AMD on architectural direction and design tradeoffs.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance.

Key Responsibilities

- Engage deeply with silicon vendors (e.g., NVIDIA and AMD) on GPU and accelerator architecture tradeoffs.
- Analyze and interpret performance, power, and efficiency characteristics of next-generation hardware.
- Translate vendor specifications into expected real-world performance for AI workloads.
- Evaluate architectural aspects including:
  - compute throughput and utilization
  - memory systems (HBM, cache hierarchies, bandwidth constraints)
  - data types and precision tradeoffs (FP16, BF16, FP8, etc.)
  - interconnect and scaling behavior
- Run benchmarks and profiling to validate hardware performance against workload requirements.
- Lead early bring-up and evaluation of engineering sample (ES) silicon.
- Partner with performance modeling and system architecture teams to reconcile measured versus modeled behavior.
- Provide actionable feedback to vendors to influence future silicon design and roadmap decisions.

Qualifications

- Deep expertise in GPU or accelerator architecture, including performance and power tradeoffs.
- Understanding of AI workload behavior and how it interacts with hardware design choices.
- Comfort engaging directly with silicon vendors at a technical architecture level.
- Hands-on experience with benchmarking, profiling, and performance analysis.
- Ability to translate low-level hardware details into system-level and workload-level impact.
- Equal comfort in theory (architecture) and practice (measurement and validation).
- Ability to thrive in environments where you bridge internal teams and external partners.

Preferred Skills

- Experience working with or at silicon providers such as NVIDIA or AMD.
- Familiarity with AI accelerator stacks, including GPUs, custom ASICs, or emerging architectures.
- Experience with early silicon bring-up or hardware validation workflows.
- Strong understanding of memory systems (HBM, DDR, cache hierarchies) and data movement bottlenecks.
- Experience with performance tooling, microbenchmarks, and workload characterization.
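One concrete way precision tradeoffs such as FP16 vs. FP8 show up in this kind of analysis is memory traffic: halving the bytes per element halves the bandwidth needed to stream the same tensor. A back-of-the-envelope sketch, where the parameter count and dtype sizes are illustrative assumptions:

```python
# Bytes per element for common AI data types.
BYTES_PER_ELEMENT = {"fp32": 4, "fp16": 2, "bf16": 2, "fp8": 1}

def weight_traffic_gb(n_params: float, dtype: str) -> float:
    """GB of memory traffic to stream a model's weights through once."""
    return n_params * BYTES_PER_ELEMENT[dtype] / 1e9

# Streaming 70B parameters once: 140 GB in FP16 vs. 70 GB in FP8,
# i.e. FP8 doubles effective weights-per-second at fixed bandwidth.
print(weight_traffic_gb(70e9, "fp16"))  # 140.0
print(weight_traffic_gb(70e9, "fp8"))   # 70.0
```

Numeric accuracy effects of the narrower formats are a separate axis of the tradeoff; this only captures the data-movement side.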

Performance Modeling Lead

OpenAI
$342,000 – $555,000
United States
Full-time
Remote: No
About the Team

OpenAI’s Hardware organization develops system and infrastructure solutions designed for the unique demands of advanced AI workloads. We work closely with research, software, and external hardware partners to shape the next generation of AI systems, from silicon through full-scale deployments.

Our team focuses on understanding and optimizing performance across the full system stack, ensuring that architectural decisions are grounded in rigorous, quantitative analysis of real-world workloads.

About the Role

We are seeking a Performance Modeling Lead to build and lead a small, high-impact team responsible for answering forward-looking architectural questions across AI infrastructure systems.

You will develop modeling frameworks and methodologies to evaluate system-level tradeoffs and guide key design decisions. Your work will directly influence reference architectures, vendor designs, and long-term infrastructure strategy.

This role sits at the intersection of AI workloads, system architecture, and quantitative modeling, and requires strong technical judgment, ownership, and the ability to translate complex analysis into clear, actionable guidance.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance.

Key Responsibilities

- Build and own a performance modeling framework and toolchain to evaluate AI systems across multiple levels of abstraction.
- Analyze and quantify architectural tradeoffs across compute, memory, networking, storage, and system topology.
- Develop performance models to guide decisions on:
  - scale-up vs. scale-out architectures
  - interconnect and network design
  - memory hierarchy and system balance
- Translate modeling outputs into clear recommendations for internal teams and external hardware vendors.
- Influence reference designs and vendor roadmaps through data-driven insights.
- Partner closely with machine learning, systems, and hardware teams to understand workload characteristics and requirements.
- Lead and grow a small team (2–3 engineers), setting technical direction and maintaining high standards for modeling rigor.
- Continuously improve modeling fidelity by validating against real system behavior and measurements.

Qualifications

- Experience owning or building performance modeling frameworks used to drive real system design decisions.
- Deep knowledge of AI/ML workloads, including training and/or inference at scale.
- Understanding of system-level tradeoffs across compute, memory, and networking in large-scale distributed systems.
- Comfort working across abstraction layers, from workload behavior to hardware implementation.
- Experience using modeling (analytical or simulation) to inform architectural decisions.
- Ability to operate in ambiguous problem spaces and turn open-ended questions into structured analysis.
- Clear communication and the ability to influence both internal teams and external partners.

Preferred Skills

- Experience working with hardware vendors (ODM/JDM, silicon, networking).
- Background in data center infrastructure or hyperscale systems.
- Familiarity with accelerators (GPUs/ASICs) and interconnects (e.g., NVLink, InfiniBand, Ethernet).
- Experience influencing hardware roadmaps or reference architectures.
- Prior experience leading or mentoring engineers.
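Scale-up vs. scale-out questions of the kind this posting lists are often framed first with a crude strong-scaling bound before any detailed simulation. A minimal sketch, where the serial fraction is a hypothetical stand-in for non-overlapped communication time:

```python
def strong_scaling_speedup(n_chips: int, serial_fraction: float) -> float:
    """Amdahl's-law bound: speedup from n_chips when serial_fraction
    of step time (e.g. exposed communication) does not parallelize."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_chips)

# 8-way scale-out with 5% exposed serial time yields only ~5.9x,
# exactly the kind of gap a modeling team quantifies and attacks.
print(round(strong_scaling_speedup(8, 0.05), 2))  # 5.93
```

Real models replace the single serial fraction with measured collective-communication and pipeline-bubble terms, but the shape of the answer (diminishing returns as exposed overhead grows with scale) is the same.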

Performance Modeling Engineer ~2

OpenAI
$266,000 – $445,000
United States
Full-time
Remote: No
About the Team

OpenAI’s Hardware organization develops system and infrastructure solutions designed for the unique demands of advanced AI workloads. We work closely with architecture, infrastructure, and vendor teams to evaluate system performance and guide critical design decisions.

Our team focuses on building and applying performance modeling frameworks to understand system behavior, quantify tradeoffs, and support next-generation infrastructure design.

About the Role

We are seeking a Performance Modeling Engineer to support the development and application of modeling tools used to evaluate AI system performance and inform architectural decisions.

In this role, you will partner closely with Senior Performance Modeling Engineers and the Performance Modeling Lead to analyze system behavior, run simulations and analytical models, and help evaluate tradeoffs across compute, memory, networking, and storage. You will contribute to building modeling frameworks while developing a strong foundation in system architecture and AI infrastructure.

This role is ideal for early-career engineers with 1–2 years of experience in software engineering, systems analysis, or performance modeling who are excited to grow in large-scale infrastructure and hardware/software systems.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance.

Key Responsibilities

- Support the development and maintenance of performance modeling tools and frameworks.
- Assist in building models to evaluate system behavior across compute, memory, networking, and interconnect subsystems.
- Help analyze distributed system scaling behavior and identify performance bottlenecks.
- Run simulations and analytical models to support architecture and infrastructure decisions.
- Partner with senior engineers to evaluate design tradeoffs across hardware and system components.
- Interpret modeling outputs and help translate findings into clear recommendations.
- Validate models using benchmarking data and real system performance measurements.
- Improve modeling workflows, documentation, and usability for broader team adoption.
- Collaborate cross-functionally with hardware, infrastructure, and architecture teams.
- Continuously build technical depth across AI infrastructure, system architecture, and performance analysis.

Qualifications

- 1–2 years of experience in software engineering, systems modeling, performance analysis, or related technical work.
- Strong programming skills and experience building technical tools, scripts, or frameworks.
- Familiarity with system architecture fundamentals such as compute, memory, and networking.
- Ability to reason about system performance, bottlenecks, and scaling behavior.
- Strong analytical and problem-solving skills with comfort working in quantitative environments.
- Ability to learn quickly and work effectively across technical teams.

Preferred Skills

- Exposure to AI/ML workloads, distributed systems, or large-scale infrastructure.
- Experience with simulation tools, benchmarking, profiling, or performance analysis.
- Familiarity with data center systems, server architecture, or hardware platforms.
- Interest in system architecture and hardware/software co-design.
- Internship or early professional experience in performance engineering, infrastructure, or systems design.

Silicon Implementation Engineer, Front End

OpenAI
$266,000 – $445,000
United States
Full-time
Remote: No
About the Team

OpenAI’s Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI’s supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.

About the Role

We are seeking a highly capable Implementation Engineer and Technologist to drive silicon construction and optimization for next-generation AI chips. This is a senior, hands-on, individual-contributor role for an engineer who combines strong technical breadth with the ability to go deep quickly, solve hard problems, and land results in collaboration with cross-functional teams.

You will operate across architecture, circuits, memory, RTL, physical implementation, and integration technologies to turn ambitious product goals into manufacturable silicon. This role is not limited to analysis or pathfinding: you will be expected to develop solutions, prototype ideas, drive execution, and close critical gaps.

The ideal candidate is a hands-on generalist with strong engineering judgment, deep circuit intuition, broad semiconductor knowledge, and a habit of using AI tools to move faster and make better decisions.

In this role, you will:

- Partner with architecture and system teams to translate product goals into executable silicon construction strategies.
- Drive hands-on optimization of power, performance, area, cost, and reliability across the silicon stack.
- Develop and implement solutions spanning circuits, memory, RTL, physical design, and integration.
- Use and build AI-driven tools, flows, and methodologies to accelerate silicon implementation.
- Evaluate new technologies and convert them into reliable product constructions optimized for performance, performance/TCO, and performance/W.

You might thrive in this role if you have:

- A BS with 12+ years, an MS with 10+ years, or a PhD with 6+ years of relevant industry experience in chip design or implementation.
- Strong hands-on expertise in circuits and implementation-driven PPA optimization.
- Deep knowledge of semiconductor technologies, including memory, advanced nodes, packaging, and 3D integration.
- Hands-on experience with RTL design and physical implementation through tapeout.
- A proven ability to work across disciplines and solve complex technical problems end-to-end.
- Strong use of AI tools for engineering productivity, analysis, coding, or design optimization.
- Excellent technical communication and collaboration skills.

Preferred Qualifications

- Strong first-principles understanding of AI chip architectures and training/inference workloads.
- Experience improving silicon products through innovations in performance, power, cost, yield, or reliability.
- Experience with HBM, SRAM, memory hierarchy design, or memory-centric optimization.
- Experience building internal tools, models, or automation used by engineering teams.
- Research lab experience and/or a PhD in Electrical Engineering, Computer Engineering, Computer Science, or a related field.

To comply with U.S. export control laws and regulations, candidates for this role may need to meet certain legal status requirements as provided in those laws and regulations.
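PPA optimization of the kind this posting describes leans on a few first-principles relationships; the classic one is dynamic power scaling with voltage squared and frequency (P ∝ C·V²·f). A toy sketch with illustrative ratios only:

```python
def dynamic_power_ratio(v_ratio: float, f_ratio: float) -> float:
    """Relative dynamic power after scaling supply voltage and clock
    frequency, from P = C * V^2 * f with capacitance held fixed."""
    return v_ratio ** 2 * f_ratio

# Dropping voltage and frequency by 10% each cuts dynamic power ~27%
# for roughly a 10% performance loss, a standard perf/W lever.
print(round(dynamic_power_ratio(0.9, 0.9), 3))  # 0.729
```

Leakage, minimum-voltage limits, and timing closure complicate the picture in practice, so this relationship is a starting point for analysis rather than a design rule.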

3P Architect

OpenAI
$342,000 – $555,000
United States
Full-time
Remote: No
About the Team

OpenAI’s Hardware organization develops system and infrastructure solutions tailored to the demands of advanced AI workloads. We work across the full stack, from silicon to system integration, partnering closely with internal teams and external vendors to define and deliver next-generation AI infrastructure.

Our team focuses on defining scalable, high-performance system architectures and reference designs that balance performance, cost, and operational efficiency across rapidly evolving technologies.

About the Role

We are seeking a 3P Architect to define and drive rack- and cluster-level reference designs in collaboration with external partners. This role is responsible for translating workload requirements and system-level goals into concrete architectures, aligning partners on critical design attributes, and ensuring vendor roadmaps meet our infrastructure needs.

You will work closely with performance modeling and internal architecture teams to evaluate tradeoffs, while owning the end-to-end definition and execution of third-party system designs. This includes identifying gaps in current technologies, driving vendor development, and shaping future infrastructure capabilities.

This role requires strong system intuition, cross-functional leadership, and the ability to operate effectively across internal teams and external ecosystems.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance.

Key Responsibilities

- Define rack- and cluster-level reference architectures for AI infrastructure deployments.
- Translate workload requirements into clear system design specifications and partner deliverables.
- Collaborate with performance modeling teams to evaluate architectural tradeoffs and system behaviors.
- Align internal stakeholders and external partners on critical system attributes (performance, cost, power, reliability, scalability).
- Identify gaps in current technology offerings and drive vendors (ODM/JDM, silicon, networking) to close those gaps.
- Influence and shape vendor roadmaps to meet future infrastructure needs.
- Track emerging technologies and evaluate their applicability to AI systems.
- Define and lead proof-of-concept (PoC) efforts to validate new architectures and technologies.
- Act as a key interface between OpenAI and external partners, ensuring execution against design intent.

Qualifications

- Strong experience in system architecture for large-scale infrastructure or data center environments.
- Understanding of AI workload characteristics and how they map to system-level design decisions.
- Comfort working with performance modeling outputs to inform architectural direction.
- Experience working with or managing hardware vendors (ODM/JDM, silicon, networking).
- Ability to drive alignment across multiple stakeholders with competing constraints.
- A track record of turning ambiguous requirements into clear, executable system designs.
- A proactive approach to identifying gaps and driving solutions across organizational boundaries.

Preferred Skills

- Experience defining rack- or cluster-level systems for hyperscale or AI workloads.
- Familiarity with accelerators (GPUs/ASICs), interconnects, and data center networking architectures.
- Experience influencing vendor roadmaps and reference designs.
- Background in infrastructure deployment, hardware engineering, or systems integration.
- Experience leading PoCs or early-stage hardware validation efforts.

Research Scientist

DatologyAI
$180,000 – $300,000
United States
Full-time
Remote: false
About the Company
Models are what they eat. But a large portion of training compute is wasted on data that are already learned, irrelevant, or even harmful, leading to worse models that cost more to train and deploy.

At DatologyAI, we've built a state-of-the-art data curation suite to automatically curate and optimize petabytes of data to create the best possible training data for your models. Training on curated data can dramatically reduce training time and cost (7-40x faster training depending on the use case), dramatically increase model performance as if you had trained on >10x more raw data without increasing the cost of training, and allow smaller models with fewer than half the parameters to outperform larger models despite using far less compute at inference time, substantially reducing the cost of deployment. For more details, check out our recent blog posts sharing our high-level results for text models and image-text models.

We raised a total of $57.5M in two rounds, a Seed and a Series A. Our investors include Felicis Ventures, Radical Ventures, Amplify Partners, Microsoft, Amazon, and AI visionaries like Geoff Hinton, Yann LeCun, Jeff Dean, and many others who deeply understand the importance and difficulty of identifying and optimizing the best possible training data for models. Our team has pioneered this frontier research area and has the deep expertise in both data research and data engineering necessary to solve this incredibly challenging problem and make data curation easy for anyone who wants to train their own model on their own data.

This role is based in Redwood City, CA. We are in office 4 days a week.

About the Role
We're looking for a Research Scientist to investigate how intervening on training data can improve the quality and shape the behavior of deep learning models. You'll source and implement ideas from the literature, conduct research grounded in real customer needs, and collaborate closely with engineers and product teams to turn findings into tangible impact. This role requires strong scientific judgment, fluency with the deep learning literature, and the drive to work autonomously in a fast-moving startup environment.

What You'll Work On
- The research literature is vast, rife with ambiguity, and constantly evolving. You'll source, vet, implement, and improve promising ideas from the literature and your own thinking.
- Our research is guided by concrete customer needs and product outcomes, not conference benchmarks. You'll have clear context on why your work matters and who it serves.

How You'll Work
- We believe researchers do their best work with autonomy. You'll have the freedom to pursue problems in the way that works best for you, with the resources and context to back it up.
- We expect Research Scientists to collaborate closely with engineers, talk to customers, and shape the product vision.

About You
- 3+ years of deep learning research experience
- Strong fundamentals in deep learning
- Practical experience and/or publications in one or more of the following areas:
  - Data pruning and curation
  - Curriculum learning
  - Synthetic data generation
  - Dataset distillation
  - Effects of training data on model behavior
  - Embedding models and semantic search
  - Training large vision (including video), language, or multimodal models
  - Efficient ML
- Enough software engineering and PyTorch experience (or willingness to learn) to run large-scale experiments and build production prototypes
- A demonstrated track record in deep learning research, whether through papers, tools, or other artifacts

Nice to have:
- Experience with distributed data processing tools like Spark or Snowflake
- Experience building and shipping ML products

Candidates do not need a PhD or extensive publications. Some of the best researchers we've worked with have no formal training in machine learning and obtained all of their experience working in industry and building products. We believe adaptability, combined with exceptional communication and collaboration skills, is the most important ingredient for successful research in a startup environment.

Compensation
At DatologyAI, we are dedicated to rewarding talent with a highly competitive salary and significant equity. The base salary for this position ranges from $180,000 to $300,000. The candidate's starting pay will be determined based on job-related skills, experience, qualifications, and interview performance.

We offer a comprehensive benefits package to support our employees' well-being and professional growth:
- 100% covered health benefits (medical, vision, and dental)
- 401(k) plan with a generous 4% company match
- Unlimited PTO policy
- Annual $2,000 wellness stipend
- Annual $1,000 learning and development stipend
- Daily lunches and snacks provided in our office
- Relocation assistance for employees moving to the Bay Area

Software Engineer, Applied AI

Mercor
$130,000 – $500,000
United States
Full-time
Remote: false
About Mercor
Mercor is defining the future of work. We partner with leading AI labs and enterprises to provide the human intelligence essential to AI development. Our vast talent network trains frontier AI models in the same way teachers teach students: by sharing knowledge, experience, and context that can't be captured in code alone. Today, more than 30,000 experts in our network collectively earn over $2 million a day.

Mercor is creating a new category of work where expertise powers AI advancement. Achieving this requires an ambitious, fast-paced, and deeply committed team. You'll work alongside researchers, operators, and AI companies at the forefront of shaping the systems that are redefining society. Mercor is a profitable Series C company valued at $10 billion. We work in person five days a week in our San Francisco, NYC, or London offices.

About the Role
As a Software Engineer on Applied AI, you'll build, deploy, and operate systems that sit directly between frontier AI research and data delivery. This is a high-ownership, deeply technical role. You'll work through ill-defined problems, prototype quickly with researchers and customers, and take systems from early experiments to reliable, scalable production. You'll own projects end-to-end, spanning requirements gathering, building data creation pipelines, and improving model-adjacent infrastructure, while partnering closely with frontier AI labs and Mercor's internal teams to ship high-impact applied AI solutions.

What You'll Do
- Partner closely with frontier AI labs to understand their data, post-training, and evaluation needs
- Build and operate scalable data pipelines for post-training workflows and model evaluations
- Design and build scalable systems for synthetic data generation and data quality, and work directly with customers to understand requirements and develop technical solutions
- Prototype new data types, benchmarks, and evaluation frameworks
- Lead technical discussions with customers

What Makes This Role Different
- Direct frontier exposure. You'll work closely with researchers at leading AI labs, building infrastructure that directly accelerates cutting-edge research.
- Coding + customer work. This role blends deep technical execution with customer interaction. Engineers who enjoy both building and communicating tend to thrive.

Day-to-Day
- Fast-moving, high-ownership environment
- Technically demanding, collaborative work
- "Building the plane while flying it"
- In-person culture (SF, 181 Fremont)
- Aligned with frontier research timelines

What We're Looking For
- Strong backend engineering fundamentals in a modern language (Python, Go, Rust, etc.)
- Experience with model training and inference
- Strong grounding in statistical analysis and experimental design for measuring model performance and improvements
- Familiarity with evaluation methods for large language models
- Comfort working through ambiguity and shipping iteratively

You're likely someone who:
- Enjoys ownership and customer-facing problem solving
- Thinks entrepreneurially and moves quickly
- Balances speed with engineering rigor
- Communicates clearly with technical and non-technical users

Why Engineers Join
- Direct exposure to frontier AI research
- Real ownership and visible impact
- Technical depth paired with human interaction
- Fast feedback loops and high leverage
There are very few roles that combine this level of technical rigor, customer proximity, and research exposure.

Benefits
- Bi-annual performance bonus structure
- Generous equity grant vested over 4 years
- Up to $15k relocation bonus
- $10k housing bonus (if you live within 0.5 miles of our office)
- $1.5k monthly stipend for meals
- Free Equinox membership
- $200 monthly laundry reimbursement
- $200 monthly personal wellness reimbursement
- Health, dental, and vision insurance

AI Productivity Engineer

Aircall
United Kingdom
Full-time
Remote: false
Aircall is a unicorn, AI-powered customer communications platform used by 22,000+ companies worldwide to drive revenue, resolve issues faster, and scale customer-facing teams. We're redefining customer communications by bringing voice, SMS, WhatsApp, and AI together into one seamless workspace. Our momentum comes from a simple idea: help teams work smarter, not harder. Aircall's AI Voice Agent automates routine calls, AI Assist streamlines post-call work, and AI Assist Pro delivers real-time guidance so people can do their best work. The result is higher revenue, faster resolutions, and teams that scale with confidence.

Aircall is headquartered in Paris, our European HQ, with a strong North American presence anchored in Seattle, our North American HQ, and teams across Madrid, London, Berlin, San Francisco, New York City, Sydney, and Mexico City. We've built a product customers love and a business that's scaling quickly, backed by world-class investors and driven by rapid AI innovation across multiple product lines.

At Aircall, you'll join a company in motion. We're ambitious, product-driven, and execution-focused, with visible impact, fast decisions, and real growth.

How we work at Aircall: We're customer-obsessed, data-driven, and focused on delivering meaningful outcomes. We value ownership, continuous learning, and thoughtful speed. If you thrive in a collaborative, fast-moving environment where trust and impact matter, you'll feel at home here.

We are hiring a Software AI Engineer to join the Engineering Productivity (EngProd) team at Aircall. Your mission is to accelerate AI adoption across the engineering organization by building AI-powered tools and systems that measurably improve how engineers work: reducing friction, automating repetitive tasks, and embedding intelligence directly into everyday workflows. This is not a research role and not a customer-facing product AI role. You will build practical, production-grade AI solutions that engineers use daily, and you will be accountable for their real-world adoption and impact. This role is about using AI to make engineers more effective, not about chasing trends. If you enjoy building real systems that people rely on every day, this role is for you.

What You'll Do
- Take clear ownership of rapid AI adoption across the engineering organization
- Identify high-friction areas in engineering workflows where AI can meaningfully improve productivity
- Design and build practical, production-grade AI-powered developer tooling (coding, testing, PR reviews, debugging)
- Build contextual, system-aware AI assistants using internal data, codebases, and tooling
- Explore, prototype, and productionize AI-driven solutions with strong autonomy on how problems are solved
- Automate and streamline workflows across GitLab, Jira, CI/CD, Slack, and observability tools
- Design and operate internal AI services and orchestration layers (e.g., MCP servers)
- Own solutions end-to-end: discovery → design → build → measure → iterate
- Work hands-on with engineering teams to remove friction, enable usage, and move tools from delivery to daily practice
- Measure success through adoption, impact, and tangible time saved for engineers

What You Won't Do
- Build AI features for customer-facing products
- Work on speculative AI research without clear outcomes
- Act as a general internal support team
- Own generic ML infrastructure unrelated to developer productivity

What We're Looking For (Required Experience)
- 5+ years of experience as a software engineer, with a recent focus on GenAI systems
- Strong experience building production-grade systems, not just prototypes
- Hands-on experience with:
  - LLMs (OpenAI, Anthropic, etc.)
  - Prompting, retrieval, and context injection
  - AI-powered tooling or internal platforms
- Solid backend engineering skills (APIs, services, integrations)
- Experience working with developer tools (CI/CD, GitHub/GitLab, Jira, observability)
- Strong product mindset and comfort operating in ambiguous problem spaces

Nice to Have
Particularly interesting profiles are engineers who have built developer tools and are now evolving toward AI-native system design.
- Prior experience building developer tools, internal platforms, or DevEx tooling
- Experience evolving traditional tooling into AI-assisted or AI-driven workflows
- Familiarity with MCP, agent-based systems, or model orchestration concepts
- Experience integrating AI with large codebases, monorepos, or complex CI/CD environments
- Exposure to security, privacy, and trust considerations in internal AI systems

How You'll Be Successful
- AI solutions you build are widely adopted and used regularly by engineers
- Engineering productivity measurably improves, using existing metrics we already track (e.g., DevEx, CI, delivery, quality, flow) and/or new, clearly defined metrics you help introduce to capture AI impact
- Manual, repetitive workflows are reduced or eliminated, with clear before/after comparisons
- Engineering time is visibly saved and reinvested into higher-value work
- Improvements are demonstrated with data, not just qualitative feedback
- Adoption grows organically because tools are useful, fast, and well-integrated into existing workflows

Team & Environment
- You'll join the Engineering Productivity team
- You'll work closely with engineers across the company
- Strong collaboration with Infrastructure and Security teams
- Product-oriented culture focused on outcomes, not hype

Why join us?
🚀 Key moment to join Aircall in terms of growth and opportunities
💆‍♀️ Our people matter, work-life balance is important at Aircall
📚 Fast-learning environment, entrepreneurial and strong team spirit
🌍 45+ nationalities: cosmopolitan & multicultural mindset
💶 Competitive salary package & benefits

DE&I Statement: At Aircall, we believe diversity, equity and inclusion, irrespective of origins, identity, background and orientations, are core to our journey. We pride ourselves on promoting active inclusion within our business to foster a strong sense of belonging for all. We're working to create a place filled with diverse people who can enrich and learn from one another. We're committed to ensuring that everyone not only has a seat at the table but is valued and respected at it by providing equal opportunities to develop and thrive. We are strongly committed to hiring a diverse and multicultural team and we encourage applications from traditionally underrepresented backgrounds.

PhD Research Intern, Multi-Modal Foundation Encoder for Perception

Zoox
$9,500 – $9,500 / month
United States
Intern
Remote: false
About Our Internship Program
Zoox's internship program offers hands-on experience with cutting-edge technology, mentorship from some of the industry's brightest minds, and the opportunity to make meaningful contributions to real projects. We seek interns who demonstrate strong academic performance, engagement beyond the classroom, intellectual curiosity, and a genuine interest in Zoox's mission.

Project Overview
During this internship, you will lead the development of a multi-modality (vision, LiDAR, radar, and language), temporal foundation encoder to support 3D object detection & tracking, 3D segmentation (occupancy), and live maps. This Multi-Modal Foundation Encoder (MMFE) is a critical key to achieving end-to-end perception at Zoox. Your research will aim to significantly improve system performance on long-tail events and rare classes by utilizing a large-capacity foundation model to learn rich representations across different sensor modalities. Additionally, the project aims to improve perception in adverse environmental conditions (such as medium to heavy rain and fog, reducing false positives on water splashes or dust particles), achieve long-range sensing for highway driving, and build robustness to occlusion. This is a highly research-driven role with the goal of publication. You will have the opportunity to explore novel directions such as tri-modal foundation models with self-supervised pre-training, radar-language grounding for zero-shot detection, efficient sensor fusion via sparse cross-attention, or integrating 3D Gaussian Splats for dynamic agent geometry and streaming sparse Gaussian occupancy prediction.

Requirements
- Currently working toward a Ph.D. or advanced degree in a relevant engineering program
- Good academic standing
- Able to commit to a 12-week internship during one of the following summer 2026 cohorts: May 18th - August 7th, May 26th - August 14th, or June 15th - September 4th
- At least one previous industry internship, co-op, or project completed in a relevant area
- Ability to relocate to the Bay Area, California (or Boston, Massachusetts) for the duration of the internship
- Interns at Zoox may not use any proprietary information they are working on as part of their thesis, in any published work with their university, or distribute it to anyone outside of Zoox

Qualifications (It's helpful if you meet a majority of the following, but it isn't a requirement):
- Currently enrolled in a Ph.D. program in Computer Science, Electrical/Computer Engineering, Robotics, or a related field with a focus on Deep Learning, Computer Vision, or Autonomous Driving
- Publication record in top-tier AI/robotics conferences (e.g., CVPR, ICCV, ECCV, NeurIPS, ICLR, ICRA)
- Prior experience designing and training foundation models (such as World Models, VLMs, LLMs, or VLAs) using large-scale multi-modal autonomous driving datasets
- Hands-on experience developing deep learning models for 3D object detection, tracking, or 3D segmentation
- Experience working with multi-modal sensor data, specifically combining representations from vision, LiDAR, radar, or language

Bonus Qualifications
- Experience with 4D radar object detection or handling sensor data in adverse weather conditions
- Experience with cross-modal alignment for zero-shot detection
- Familiarity with 3D Gaussian Splatting, voxel grid representations, or streaming sparse occupancy prediction
- Experience with self-supervised pre-training or masked sensor modeling

Compensation
The monthly salary for this position is $9,500. Compensation will vary based on geographic location. Additional benefits may include medical insurance and a housing stipend (relocation assistance will be offered based on eligibility).

PhD Research Intern, Vision Language Action Models

Zoox
$9,500 – $9,500 / month
United States
Intern
Remote: false
About Zoox
Zoox is transforming mobility with fully autonomous, electric vehicles designed from the ground up for a driverless future. Our mission is to make transportation safer, more sustainable, and accessible to everyone. At Zoox, innovation, collaboration, and a bold vision for the future drive everything we do.

About Our Internship Program
Zoox's internship program offers hands-on experience with cutting-edge technology, mentorship from some of the industry's brightest minds, and the opportunity to make meaningful contributions to real projects. We seek interns who demonstrate strong academic performance, engagement beyond the classroom, intellectual curiosity, and a genuine interest in Zoox's mission.

Project Overview
This internship opportunity is within the Foundation Models team, which focuses on advancing the state of the art in autonomous driving: Multimodal Language Action models (MLA), massively scaling reinforcement learning for agent policies, and more. You will have the chance to work on our Multimodal Language Action model, exploring novel discrete action tokenization and flow matching approaches, building off of MotionLM, FAST, and others. You'll train models at the billion+ parameter scale on millions of miles of proprietary Zoox driving data. You'll gain valuable experience and insight into training MLAs at scale. This project will contribute to publishable research and could make it into our vehicle.

Requirements
- Currently working toward a Ph.D. or advanced degree in a relevant engineering program
- Good academic standing
- Able to commit to a 12-week internship during one of the following summer 2026 cohorts: May 18th - August 7th, May 26th - August 14th, or June 15th - September 4th
- Ability to relocate to the Bay Area, California (or Boston, Massachusetts) for the duration of the internship
- Interns at Zoox may not use any proprietary information they are working on as part of their thesis, in any published work with their university, or distribute it to anyone outside of Zoox

Qualifications
- Experience training VLMs or VLAs
- Experience working in large codebases as part of a team
- Advanced understanding of Python and PyTorch
- Authored publications in top ML/robotics conferences (e.g., NeurIPS, CVPR, ICRA)

Bonus Qualifications
- Experience with autonomous driving
- Experience with machine-learning-based robotic planning
- Experience with large-scale, multi-node PyTorch workloads

Compensation
The monthly salary for this position is $9,500. Compensation will vary based on geographic location. Additional benefits may include medical insurance and a housing stipend (relocation assistance will be offered based on eligibility).

About Zoox
Zoox is developing the first ground-up, fully autonomous vehicle fleet and the supporting ecosystem required to bring this technology to market. Sitting at the intersection of robotics, machine learning, and design, Zoox aims to provide the next generation of mobility-as-a-service in urban environments. We're looking for top talent that shares our passion and wants to be part of a fast-moving and highly execution-oriented team. Follow us on LinkedIn.

Accommodations
If you need an accommodation to participate in the application or interview process, please reach out to accommodations@zoox.com or your assigned recruiter.

A Final Note
You do not need to match every listed expectation to apply for this position. Here at Zoox, we know that diverse perspectives foster the innovation we need to be successful, and we are committed to building a team that encompasses a variety of backgrounds, experiences, and skills.

Director of Engineering, Infrastructure

Zapier
$280,400 – $420,500
United States
Full-time
Remote: false
AI at ZapierAt Zapier, we build and use automation every day to make work more efficient, creative, and human. So if you’re using AI tools while applying here - that’s great! We just ask that you use them responsibly and transparently.Check out our guidance on How to Collaborate with AI During Zapier’s Hiring Process, including how to use AI tools like ChatGPT, Claude, Gemini, or others during our hiring process - and when not to.Job posted: 4/20/2026Location: North America or EMEAHi there!As the Director of Engineering for Infrastructure, you’ll lead several multidisciplinary teams building, supporting, and evolving Zapier's core services, platforms, and infrastructure. Your work directly enables every product team at Zapier to move faster and build better quality products.This is a leadership role with scope to build. Zapier is investing in platform engineering as a strategic priority, and you'll shape what that looks like: vision, scalability, accountability mechanisms, and how the organization operates. Many of the processes and systems your teams need don't exist yet. You will build them, partnering closely with our VP of Engineering and dedicated Platform PMs.You'd be joining at a formative and transformational stage. This is a build role where you'll be responsible for shaping the direction, not inheriting a finished state.We're looking for someone energized by this kind of work: someone who sees a growing, evolving organization as an opportunity to make their mark and shape what comes next.Our Commitment to ApplicantsCulture and Values at ZapierZapier Guide to Remote WorkZapier Code of ConductDiversity and Inclusivity at ZapierAbout YouYou’ve built internal infrastructure platforms that engineering teams love to use.You've led teams responsible for cloud, database, and event infrastructure. You've built feedback loops with internal engineering customers that directly shaped your platform roadmap and investment priorities. 
You understand that internal platforms are products, and you've treated them that way: measuring adoption, gathering feedback, and iterating towards customer delight based on developer needs.You orchestrate AI systems and redesign how engineering work gets done.You fundamentally rethink software development in an AI-first world. You've used AI coding agents and orchestration tools (Cursor, Claude Code, Codex, or similar) hands-on and can demonstrate how you've applied them to transform your own work, your teams' output, and can quantify that impact. You have a concrete vision for how AI will reshape infrastructure and developer tooling, and you've already started implementing it by rethinking roles, eliminating legacy workflows, and reallocating capacity based on where AI is headed. You embed accountability and quality standards into AI-powered systems rather than lowering the bar for speed. You iterate, refine, and critically evaluate AI outputs rather than accepting first-pass results and expect your team to do the same. You are a builder, not just an operator.You've built teams and systems from the ground up, not just inherited and optimized them. You thrive in 0-to-1 environments where you're making hard trade-offs, retooling teams, and creating structure where little existed before. You have a track record of leading through organizational change and ambiguity, and you actively seek out those challenges. You are deeply customer-oriented.You have a track record of deeply understanding your customers—internal developers, external partners, and cross-functional stakeholders. You've built mechanisms to stay close to their needs and have used that insight to prioritize ruthlessly. 
You're empathetic to the expectations of Zapier's customers and the needs of our developers.You have scaled platform engineering and infrastructure organizations.You are a skilled engineering leader who has built teams responsible for internal developer platforms and the infrastructure that underpins them. You've shaped how product engineering teams build and ship software by investing in the foundational infrastructure and platforms. You've managed teams of managers and technical leaders, helping each be successful. You understand the infrastructure underpinning your platforms well enough to make sound architectural tradeoffs, but your focus is on developer outcomes, not infrastructure for its own sake.You've driven developer velocity and quality through platform investment.You've established data-driven approaches to measuring engineering velocity and quality both for your platform teams and for the product development teams they serve. You understand the incompleteness and necessity of purely quantitative metrics and bring curiosity to maximize learning from them. You've made the complex tradeoffs between velocity, cost, and functionality when building and supporting platforms. You have vision and mechanisms to track and improve on Keep-The-Lights-On (KTLO) and reactive work in favour of increased investment in proactive and platform improvements.You are a skilled mentor and coach.You have a passion and track record of developing engineers and leaders. You realize the best way to grow a team is by helping them grow themselves. You're able to effectively share your experience and provide clear frameworks for growth. You've developed the next generation of engineering leaders.You are a skilled communicator.You communicate proactively, write clearly, and know when async works and when to jump on a call. 
You lean on all of these tools to communicate vision, strategy, plans, findings, and results to engineering and the broader Zapier team.

You champion efficiency and leverage.
At Zapier, your work has a disproportionate impact on the business. You believe in systems and processes that let you scale impact beyond yourself and your immediate team. You default to AI-first thinking when approaching new work and have built personal and team systems (copilots, workflows, or automations) that compound impact over time.

What You'll Be Accountable For

This organization spans cloud, database, and event infrastructure and platforms.

Platform Vision & Strategy
- A clear, shared vision for your organization that connects to internal developer needs and Zapier's overall strategy, with progress visible to leadership
- Define and drive the strategy and long-term roadmap for your organization in collaboration with your teams and Engineering/Product leadership peers
- Understand, measure, and articulate how your organization's work enables Zapier's product development velocity
- Understand, measure, and articulate how reactive and Keep-The-Lights-On (KTLO) work will be reduced in order to increase investment in high-leverage platform improvements

AI-Transformed Development
- Define and drive the AI transformation of your organization, re-architecting how platform engineering operates and how KTLO and reactive work is minimized, not just adding AI to existing workflows
- Identify and implement AI-powered tooling and automation that accelerates developer velocity and quality for the broader organization
- Set the pace for AI adoption across your teams; model the behaviors you expect from your engineers
- Build repeatable systems and workflows with AI, not just one-off uses; turn experiments into durable capabilities

Developer Experience & Velocity
- Unblock software delivery pain points specific to infrastructure through your organization's platforms, tooling, and partnership with product teams
- Establish data-driven approaches to measuring delivery velocity and quality; work with stakeholders, peers, and engineering teams to own those measures and deliver ongoing improvements
- Ensure stakeholders and the broader org understand your organization's direction, tradeoffs, and results so they can plan and trust the platform

Reliability & Operations
- In partnership with service-owning teams, be accountable for the uptime of Zapier's core infrastructure and take action if that uptime falters
- Step in during major incidents at the director level, coordinating cross-functional resources and setting stakeholder expectations when severity demands senior leadership involvement

Team Building & Talent
- Build teams that ship consistently and retain talent because the work is compelling and the culture supports growth
- Ensure managers and ICs have clear growth paths and the feedback they need to improve; develop the next generation of leaders
- Grow and develop your teams by recruiting, hiring, retaining, and mentoring
- Own output and outcomes for your teams and the systems they own

How to Apply

At Zapier, we believe that diverse perspectives and experiences make us better, which is why we have a non-standard application process designed to promote inclusion and equity. We're looking for the best fit for each of our roles, regardless of the type of companies in your background, so we encourage you to apply even if your skills and experiences don't exactly match the job description. All we ask is that you answer a few in-depth questions in our application that would typically be asked at the start of an interview process. This helps speed things up by letting us get to know you and your skillset a bit better right out of the gate.
Please be sure to answer each question; the resume and CV fields are optional. Education is not a requirement for our roles; however, if you receive an offer, you will need to include your most recent educational experience as part of our background check process.

After you apply, you are going to hear back from us, even if we don't see an immediate fit with our team. In fact, throughout the process, we strive to never go more than seven days without letting you know the status of your application. We know we'll make mistakes from time to time, so if you ever have questions about where you stand or about the process, just ask your recruiter!

Zapier is an equal-opportunity employer, and we're excited to work with talented and empathetic people of all identities. Zapier does not discriminate based on someone's identity in any aspect of hiring or employment, as required by law and in line with our commitment to Diversity, Inclusion, Belonging and Equity. Our code of conduct provides a beacon for the kind of company we strive to be, and we celebrate our differences because those differences are what allow us to make a product that serves a global user base. Zapier will consider all qualified applicants, including those with criminal histories, consistent with applicable laws.

Zapier prioritizes the security of our customers' information and is dedicated to adhering to all applicable data privacy laws. You can review our privacy policy here.

Zapier is committed to inclusion. As part of this commitment, Zapier welcomes applications from individuals with disabilities and will work to provide reasonable accommodations.
If reasonable accommodations are needed to participate in the job application or interview process, please contact jobs@zapier.com.

Application Deadline:
The anticipated application window is 30 days from the date the job is posted, unless the number of applicants requires it to close sooner or later, or the position is filled.

Even though we're an all-remote company, we still need to be thoughtful about where we have Zapiens working. Check out this resource for a list of countries where we currently cannot have Zapiens permanently working.
Prompt Engineer

Netomi
Canada
Full-time
Remote: false
About the Company:
Netomi is the leading agentic AI platform for enterprise customer experience. We work with the largest global brands, like Delta Airlines, MetLife, MGM, United, and others, to enable agentic automation at scale across the entire customer journey. Our no-code platform delivers the fastest time to market, lowest total cost of ownership, and simple, scalable management of AI agents for any CX use case. Backed by WndrCo, Y Combinator, and Index Ventures, we help enterprises drive efficiency, lower costs, and deliver higher-quality customer experiences. Want to be part of the AI revolution and transform how the world's largest global brands do business? Join us!

About the Role
We are looking for a Prompt Engineer to craft, optimize, evaluate, and benchmark prompts for enhanced AI performance at Netomi. This role requires expertise in NLP, AI behavior, and customer-specific business rules. You will collaborate with the Customer Success team and data scientists to develop customized, effective AI solutions.

Responsibilities
- Prompt Development: Design and refine client-specific prompts, ensuring accuracy and relevance. Define tool descriptions for agentic frameworks to enhance AI interactions.
- Optimization & Testing: Improve prompts for clarity and performance, automate testing with scripts, and evaluate LLMs for best-fit solutions.
- Evaluation & Benchmarking: Develop evaluation frameworks and benchmark prompts to establish best practices.
- Collaboration & Documentation: Work with the Customer Success and Data Science teams, maintaining clear documentation on prompt development and optimization.
- Research & Innovation: Stay updated on NLP advancements, experiment with new prompting strategies, and refine model-specific adaptations.

Requirements
- Bachelor's or Master's degree in Computer Science, Data Science, Linguistics, or a related field.
- Strong understanding of natural language processing (NLP) and machine learning principles.
- Experience with AI prompt engineering, including writing, optimizing, and evaluating prompts.
- Proficiency in programming languages such as Python.
- Familiarity with AI frameworks and libraries (e.g., TensorFlow, PyTorch, Hugging Face).
- Excellent analytical and problem-solving skills.
- Strong written and verbal communication skills.
- Ability to work collaboratively in a team environment and manage multiple projects simultaneously.

Nice to Have
- A minimum of 5 years of experience in data science.
- Experience with large language models (e.g., GPT-3, GPT-4, Llama 3) and their applications.
- Knowledge of data visualization tools and techniques.
- Background in linguistics or computational linguistics.
- Experience with agile development methodologies.

Netomi is an equal opportunity employer committed to diversity in the workplace. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, disability, veteran status, and other protected characteristics.
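The "evaluation and benchmarking" work this role describes can be pictured with a minimal sketch: run each prompt variant against labeled test cases and rank variants by accuracy. All names here are hypothetical (this is not Netomi's tooling), and the model call is stubbed with a plain function so the sketch stays runnable.

```python
def evaluate_prompt(render, cases):
    """Score one prompt variant against labeled (input, expected) cases."""
    hits = 0
    for text, expected in cases:
        # A real harness would send the rendered prompt to an LLM;
        # here the rendering function itself stands in for the model.
        output = render(text)
        if output == expected:
            hits += 1
    return hits / len(cases)

def benchmark(variants, cases):
    """Rank prompt variants by accuracy, best first."""
    scores = {name: evaluate_prompt(fn, cases) for name, fn in variants.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy task: classify customer-message intent by keyword.
cases = [("where is my order", "order_status"),
         ("i want a refund", "refund"),
         ("cancel my refund", "refund")]
variants = {
    "v1": lambda t: "refund" if "refund" in t else "order_status",
    "v2": lambda t: "order_status",
}
ranking = benchmark(variants, cases)  # [("v1", 1.0), ("v2", 0.333...)]
```

In practice the interesting part is the case set and the scoring function (exact match, semantic similarity, rubric grading); the harness shape stays this simple.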
Software Engineer, Agent (Thai Speaking)

Sierra
SGD 295,000 – SGD 495,000
Singapore
Full-time
Remote: false
About us
At Sierra, we're creating a platform to help businesses build better, more human customer experiences with AI. We are primarily an in-person company based in San Francisco, with growing offices in Atlanta, New York, London, Paris, Madrid, Munich, Singapore, Japan, and Sydney.

We are guided by a set of values that are at the core of our actions and define our culture: Trust, Customer Obsession, Craftsmanship, Intensity, and Family. These values are the foundation of our work, and we are committed to upholding them in everything we do.

Our co-founders are Bret Taylor and Clay Bavor. Bret currently serves as Board Chair of OpenAI. Previously, he was co-CEO of Salesforce (which had acquired the company he founded, Quip) and CTO of Facebook. Bret was also one of Google's earliest product managers and co-creator of Google Maps. Before founding Sierra, Clay spent 18 years at Google, where he most recently led Google Labs. Earlier, he started and led Google's AR/VR effort, Project Starline, and Google Lens. Before that, Clay led the product and design teams for Google Workspace.

What you'll do
- Design and deliver production-grade AI agents: You'll build and ship highly performant, reliable, and intuitive AI agents that are central, mission-critical, and directly drive Sierra's revenue growth. These aren't prototypes; they are powerful, scalable systems running in production environments across industries like finance, healthcare, and commerce.
- Drive the Agent Development Life Cycle (ADLC): You'll have complete ownership and autonomy from initial pilot through deployment and continuous iteration.
You'll be responsible for building, tuning, and evolving AI agents in production environments, defining the standard for ADLC best practices along the way.
- Partner with large enterprises and cutting-edge startups: You'll work directly with leaders at some of the world's largest enterprises to understand their most pressing business challenges, and build AI agents that transform how they operate at scale. You'll also partner with the most cutting-edge startups, embedding AI agents across their entire business stack to drive innovation and efficiency.
- Build the future of the platform: Your direct work with customers will guide the evolution of Sierra's core platform. You'll surface unmet needs, prototype new tools and features, and collaborate with research, product, and platform to shape the future of AI agent development and Sierra's product.

Example projects
These are some examples of projects that engineers on our team have worked on recently:
- Design and build AI agents for large telecommunications and media companies that consistently outperform human agents in managing subscription churn
- Develop and refine AI agents capable of navigating complex customer interactions, like troubleshooting a broken device and personalizing product recommendations
- Create generalizable AI agent frameworks tailored for industry-specific use cases.
See some examples in our financial services blog!
- Facilitate design partnerships for new product initiatives, such as new agent architectures, self-service capabilities, and generative agent development
- Experiment with the latest voice models and figure out how to integrate them at scale for enterprise-grade customers

What you'll bring
- Experience building and scaling end-to-end production systems
- Strong technical problem-solving skills, especially in fast-changing, ambiguous environments
- A builder and tinkerer's mindset with high agency: you find creative ways to overcome obstacles and ship
- Comfort working directly with customers to understand their needs and solve real-world problems
- Excellent communication skills: clear, direct, and persuasive across technical and non-technical audiences

Even better...
- Experience building or deploying AI/LLM systems in production
- Have been a founder or founding engineer: you know what it means to balance craft, ownership, and speed
- Familiarity with tools that power today's AI agents: eval frameworks, agent tooling, RAG pipelines, and prompt engineering
- Prior experience with React, TypeScript, and/or Go
- Previous roles where you interfaced with customers or led technical projects with external stakeholders
- Proficiency in Thai is preferred to support interactions with Thai-speaking clients.

Our values
- Trust: We build trust with our customers through our accountability, empathy, quality, and responsiveness. We build trust in AI by making it more accessible, safe, and useful. We build trust with each other by showing up for each other professionally and personally, creating an environment that enables all of us to do our best work.
- Customer Obsession: We deeply understand our customers' business goals and relentlessly focus on driving outcomes, not just technical milestones. Everyone at the company knows and spends time with our customers.
When our customer is having an issue, we drop everything and fix it.
- Craftsmanship: We get the details right, from the words on the page to the system architecture. We have good taste. When we notice something isn't right, we take the time to fix it. We are proud of the products we produce. We continuously self-reflect to continuously self-improve.
- Intensity: We know we don't have the luxury of patience. We play to win. We care about our product being the best, and when it isn't, we fix it. When we fail, we talk about it openly and without blame so we succeed the next time.
- Family: We know that balance and intensity are compatible, and we model it in our actions and processes. We are the best technology company for parents. We support and respect each other and celebrate each other's personal and professional achievements.

What we offer
We want our benefits to reflect our values and offer the following to full-time employees:
- Flexible (Unlimited) Paid Time Off
- Medical, Dental, and Vision benefits for you and your family
- Life Insurance and Disability Benefits
- Retirement Plan (e.g., 401K, pension) with Sierra match
- Parental Leave
- Fertility and family-building benefits through Carrot
- Lunch, as well as delicious snacks and coffee to keep you energized
- Discretionary Benefit Stipend giving people the ability to spend where it matters most
- Free alphorn lessons

These benefits are further detailed in Sierra's policies, may vary by region, and are subject to change at any time, consistent with the terms of any applicable compensation or benefits plans. Eligible full-time employees can participate in Sierra's equity plans subject to the terms of the applicable plans and policies.

Be you, with us
We're working to bring the transformative power of AI to every organization in the world. To do so, it is important to us that the diversity of our employees represents the diversity of our customers.
We believe that our work and culture are better when we encourage, support, and respect different skills and experiences represented within our team. We encourage you to apply even if your experience doesn't precisely match the job description. We strive to evaluate all applicants consistently without regard to race, color, religion, gender, national origin, age, disability, veteran status, pregnancy, gender expression or identity, sexual orientation, citizenship, or any other legally protected class.
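The agent work this posting describes (a policy choosing tools, a runtime executing them, iteration until an answer) follows a common loop that can be sketched in a few lines. Everything below is hypothetical and stubbed for illustration; it is not Sierra's platform or API.

```python
def lookup_order(order_id):
    """Stubbed tool; a real agent would call a customer's order system here."""
    return {"order_id": order_id, "status": "shipped"}

TOOLS = {"lookup_order": lookup_order}

def policy(query, observations):
    """Stand-in for the model: pick the next action from the context so far."""
    if "order" in query and not observations:
        return ("call", "lookup_order", {"order_id": 42})
    return ("answer", f"Your order is {observations[-1]['status']}.", None)

def run_agent(query, max_steps=5):
    """Tool-use loop: act, observe, repeat until the policy answers."""
    observations = []
    for _ in range(max_steps):
        kind, payload, args = policy(query, observations)
        if kind == "answer":
            return payload
        observations.append(TOOLS[payload](**args))  # execute the chosen tool
    return "Escalating to a human agent."

reply = run_agent("where is my order?")  # "Your order is shipped."
```

The production concerns the posting mentions (eval frameworks, RAG, guardrails) layer on top of this loop; the step cap and human-escalation fallback are the sketch's nod to reliability.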
Compensation and Analytics Program Manager

Intrinsic
Full-time
Remote: false
Intrinsic is an AI robotics group at Google aiming to reimagine the potential of industrial robotics. Our team believes that advances in AI, perception, and simulation will redefine what's possible for industrial robotics in the near future, with software and data at the core. Our mission is to make industrial robotics intelligent, accessible, and usable for millions more businesses, entrepreneurs, and developers. We are a dynamic team of engineers, roboticists, designers, and technologists who are passionate about unlocking the creative and economic potential of industrial robotics.

Role
As a Senior AI Research Scientist for Perception for Contact-Rich Manipulation, you will lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. You will explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Your work will involve creating models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. You will collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities that can be deployed on physical hardware for industrial applications.

How your work moves the mission forward
- Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios.
- Design algorithms that enable robots to manipulate complex or deformable objects with high precision.
- Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware.
- Evaluate model performance in both simulation and real-world environments to ensure robustness and reliability.
- Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems.
- Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.

Skills you will need to be successful
- PhD in Computer Science, Robotics, or a related field with a focus on machine learning or computer vision.
- 3 years of experience in applied research focused on robotic manipulation or robot learning.
- Proficiency in programming with Python and C++.
- Experience with deep learning frameworks such as PyTorch, JAX, or TensorFlow.
- Experience developing algorithms for vision-based manipulation or contact-rich interaction.
- Publication record in top-tier robotics or AI conferences (e.g., ICRA, IROS, CVPR, NeurIPS).

Skills that will differentiate your candidacy
- Experience with reinforcement learning or imitation learning for robotics.
- Familiarity with physics simulators like MuJoCo, Isaac Sim, or Gazebo.
- Experience integrating tactile sensors with visual perception systems.
- Experience in Learning from Demonstrations (LfD) and kinesthetic teaching.
- Background in sim-to-real transfer techniques for manipulation policies.
- Experience with transformer-based architectures or foundation models in a robotics context.
- Experience deploying machine learning models on edge compute hardware.

At Intrinsic, we are proud to be an equal opportunity workplace. Employment at Intrinsic is based solely on a person's merit and qualifications directly related to professional competence. Intrinsic does not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), or any other basis protected by law.
We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. It is Intrinsic’s policy to comply with all applicable national, state and local laws pertaining to nondiscrimination and equal opportunity. If you have a disability or special need that requires accommodation, please contact us at: candidate-support@intrinsic.ai.
SDET II

Netomi
Canada
Full-time
Remote: false
About the Company:
Netomi is the leading agentic AI platform for enterprise customer experience. We work with the largest global brands, like Delta Airlines, MetLife, MGM, United, and others, to enable agentic automation at scale across the entire customer journey. Our no-code platform delivers the fastest time to market, lowest total cost of ownership, and simple, scalable management of AI agents for any CX use case. Backed by WndrCo, Y Combinator, and Index Ventures, we help enterprises drive efficiency, lower costs, and deliver higher-quality customer experiences. Want to be part of the AI revolution and transform how the world's largest global brands do business? Join us!

About the role:
We are looking for a seasoned SDET to help us maintain the quality of our products. If you are passionate about quality and want to make an impact in our organisation, then we have the perfect role for you!

Responsibilities
- Testing of AI-based conversational products
- Monitoring and improving the quality assurance process, ensuring any agreed-upon standards and procedures are followed
- Providing a high level of data quality awareness across multiple teams
- Evaluating and identifying where enhancements in model accuracy are required
- Preparing detailed testing feedback to help the team improve AI models

Requirements
- A strong focus on delivering high-quality products and a passion for learning new technologies.
- A passion for manual and automation testing, test planning, test case preparation, and execution.
- Experience in API testing and scripting languages such as Java or Python.
- Hands-on experience implementing test automation frameworks in enterprise software environments with open-source tools such as Selenium, TestNG/JUnit, Rest Assured, etc.
- Working knowledge of CI/CD technologies (e.g., Jenkins, Git, Maven).
- Bachelor's degree in Computer Science or a related engineering field, with 4-6 years of relevant experience.
- An understanding of testing requirements for the current product, with excellent knowledge of functional and non-functional testing concepts and tools.
- Ability to work in a fast-paced and dynamic work environment.
- Exposure to new technology, strong problem-solving skills, and the self-motivation to work without supervision.
- Knowledge of any database (MySQL, etc.) and load testing tools like JMeter is a plus.
- Excellent written and verbal communication.

Netomi is an equal opportunity employer committed to diversity in the workplace. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, disability, veteran status, and other protected characteristics.
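The API-testing requirement above usually boils down to asserting status codes and response schemas. A minimal sketch of that pattern, with the HTTP call stubbed by a plain function so it stays self-contained (the endpoint and fields are invented for illustration, not Netomi's API; a real suite would use requests or Rest Assured):

```python
def fake_get_agent(agent_id):
    """Stand-in for an HTTP GET returning (status_code, json_body)."""
    store = {1: {"id": 1, "name": "billing-bot", "status": "active"}}
    if agent_id in store:
        return 200, store[agent_id]
    return 404, {"error": "not found"}

def check_response(status, body, expected_status, required_fields=()):
    """Assert the status code and required response fields; return the body."""
    assert status == expected_status, f"expected {expected_status}, got {status}"
    for field in required_fields:
        assert field in body, f"missing field: {field}"
    return body

# Happy path: 200 with the expected schema.
ok = check_response(*fake_get_agent(1), 200, ("id", "name", "status"))
# Negative path: unknown ID yields 404 with an error payload.
missing = check_response(*fake_get_agent(99), 404, ("error",))
```

Wrapping checks like these in TestNG/JUnit or pytest and wiring them into Jenkins gives the CI/CD loop the posting describes.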
Software Engineer, Agent (Cantonese Speaking)

Sierra
SGD 295,000 – SGD 495,000
Singapore
Full-time
Remote: false
About us
At Sierra, we're creating a platform to help businesses build better, more human customer experiences with AI. We are primarily an in-person company based in San Francisco, with growing offices in Atlanta, New York, London, Paris, Madrid, Munich, Singapore, Japan, and Sydney.

We are guided by a set of values that are at the core of our actions and define our culture: Trust, Customer Obsession, Craftsmanship, Intensity, and Family. These values are the foundation of our work, and we are committed to upholding them in everything we do.

Our co-founders are Bret Taylor and Clay Bavor. Bret currently serves as Board Chair of OpenAI. Previously, he was co-CEO of Salesforce (which had acquired the company he founded, Quip) and CTO of Facebook. Bret was also one of Google's earliest product managers and co-creator of Google Maps. Before founding Sierra, Clay spent 18 years at Google, where he most recently led Google Labs. Earlier, he started and led Google's AR/VR effort, Project Starline, and Google Lens. Before that, Clay led the product and design teams for Google Workspace.

What you'll do
- Design and deliver production-grade AI agents: You'll build and ship highly performant, reliable, and intuitive AI agents that are central, mission-critical, and directly drive Sierra's revenue growth. These aren't prototypes; they are powerful, scalable systems running in production environments across industries like finance, healthcare, and commerce.
- Drive the Agent Development Life Cycle (ADLC): You'll have complete ownership and autonomy from initial pilot through deployment and continuous iteration.
You'll be responsible for building, tuning, and evolving AI agents in production environments, defining the standard for ADLC best practices along the way.
- Partner with large enterprises and cutting-edge startups: You'll work directly with leaders at some of the world's largest enterprises to understand their most pressing business challenges, and build AI agents that transform how they operate at scale. You'll also partner with the most cutting-edge startups, embedding AI agents across their entire business stack to drive innovation and efficiency.
- Build the future of the platform: Your direct work with customers will guide the evolution of Sierra's core platform. You'll surface unmet needs, prototype new tools and features, and collaborate with research, product, and platform to shape the future of AI agent development and Sierra's product.

Example projects
These are some examples of projects that engineers on our team have worked on recently:
- Design and build AI agents for large telecommunications and media companies that consistently outperform human agents in managing subscription churn
- Develop and refine AI agents capable of navigating complex customer interactions, like troubleshooting a broken device and personalizing product recommendations
- Create generalizable AI agent frameworks tailored for industry-specific use cases.
See some examples in our financial services blog!
- Facilitate design partnerships for new product initiatives, such as new agent architectures, self-service capabilities, and generative agent development
- Experiment with the latest voice models and figure out how to integrate them at scale for enterprise-grade customers

What you'll bring
- Experience building and scaling end-to-end production systems
- Strong technical problem-solving skills, especially in fast-changing, ambiguous environments
- A builder and tinkerer's mindset with high agency: you find creative ways to overcome obstacles and ship
- Comfort working directly with customers to understand their needs and solve real-world problems
- Excellent communication skills: clear, direct, and persuasive across technical and non-technical audiences

Even better...
- Experience building or deploying AI/LLM systems in production
- Have been a founder or founding engineer: you know what it means to balance craft, ownership, and speed
- Familiarity with tools that power today's AI agents: eval frameworks, agent tooling, RAG pipelines, and prompt engineering
- Prior experience with React, TypeScript, and/or Go
- Previous roles where you interfaced with customers or led technical projects with external stakeholders
- Proficiency in Cantonese is preferred to support interactions with Cantonese-speaking clients.

Our values
- Trust: We build trust with our customers through our accountability, empathy, quality, and responsiveness. We build trust in AI by making it more accessible, safe, and useful. We build trust with each other by showing up for each other professionally and personally, creating an environment that enables all of us to do our best work.
- Customer Obsession: We deeply understand our customers' business goals and relentlessly focus on driving outcomes, not just technical milestones. Everyone at the company knows and spends time with our customers.
When our customer is having an issue, we drop everything and fix it.
- Craftsmanship: We get the details right, from the words on the page to the system architecture. We have good taste. When we notice something isn't right, we take the time to fix it. We are proud of the products we produce. We continuously self-reflect to continuously self-improve.
- Intensity: We know we don't have the luxury of patience. We play to win. We care about our product being the best, and when it isn't, we fix it. When we fail, we talk about it openly and without blame so we succeed the next time.
- Family: We know that balance and intensity are compatible, and we model it in our actions and processes. We are the best technology company for parents. We support and respect each other and celebrate each other's personal and professional achievements.

What we offer
We want our benefits to reflect our values and offer the following to full-time employees:
- Flexible (Unlimited) Paid Time Off
- Medical, Dental, and Vision benefits for you and your family
- Life Insurance and Disability Benefits
- Retirement Plan (e.g., 401K, pension) with Sierra match
- Parental Leave
- Fertility and family-building benefits through Carrot
- Lunch, as well as delicious snacks and coffee to keep you energized
- Discretionary Benefit Stipend giving people the ability to spend where it matters most
- Free alphorn lessons

These benefits are further detailed in Sierra's policies, may vary by region, and are subject to change at any time, consistent with the terms of any applicable compensation or benefits plans. Eligible full-time employees can participate in Sierra's equity plans subject to the terms of the applicable plans and policies.

Be you, with us
We're working to bring the transformative power of AI to every organization in the world. To do so, it is important to us that the diversity of our employees represents the diversity of our customers.
We believe that our work and culture are better when we encourage, support, and respect different skills and experiences represented within our team. We encourage you to apply even if your experience doesn't precisely match the job description. We strive to evaluate all applicants consistently without regard to race, color, religion, gender, national origin, age, disability, veteran status, pregnancy, gender expression or identity, sexual orientation, citizenship, or any other legally protected class.
Senior Engineer, Applications (R4829)

Shield AI
Japan
Full-time
Remote: false
Founded in 2015, Shield AI is a venture-backed deep-tech company with the mission of protecting service members and civilians with intelligent systems. Its products include the V-BAT and X-BAT aircraft, Hivemind Enterprise, and the Hivemind Vision product lines. With offices and facilities across the U.S., Europe, the Middle East, and the Asia-Pacific, Shield AI's technology actively supports operations worldwide. For more information, visit www.shield.ai. Follow Shield AI on LinkedIn, X, Instagram, and YouTube.

Job Description:
Our Applications Engineers are highly technical, customer-facing problem solvers who play a critical role in deploying Shield AI's Hivemind software in real-world environments. In this role, you will work closely with customers to understand their requirements, provide technical expertise and customer support during deployment, and ensure successful integration of Hivemind. You'll also collaborate internally with engineering teams to develop and test new autonomy capabilities. This role is travel-intensive, with frequent trips, often international and sometimes lasting multiple weeks, to work directly alongside customers on-site.

What you'll do:
- Deploy with customers on site globally (~50% travel) to support software integration and development activities.
- Become an expert user of the Hivemind Enterprise software stack and its various autonomy modules.
- Provide technical support and training to customers on the use of Hivemind.
- Develop AI & autonomy applications using the Shield AI enterprise software development kit.
- Assist the sales team in pre-sales activities, e.g., demos, conferences, immersions.
- Assist in post-sales deployment and integration of Shield AI enterprise software products.
- Develop and maintain technical documentation and training materials.
- Help customers debug software/API integration issues.
- Collaborate with the product engineering team to address customer feedback and improve products.
Required qualifications:
- Bachelor's degree in Engineering, Computer Science, or a related field, and 5+ years of industry experience, OR Master's degree in Engineering, Computer Science, or a related field, and 3+ years of industry experience.
- Strong technical background in software engineering.
- Strong proficiency in writing modern C++ code.
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills.

Preferred qualifications:
- Experience in the defense, aviation, or robotics industry.
- Prior experience as a customer-facing solutions engineer, application engineer, or sales engineer.
- Experience operating in a fast-paced, startup-like environment.
- Advanced technical degree, especially in robotics or autonomy-related fields.

#LI-FB1 #LC

Our international teammates receive a comprehensive total rewards package aligned to your country office location. For full details on compensation and benefits, please consult your talent acquisition partner.
PhD Research Intern, Offline Driving Intelligence

Zoox
$9,500 / month
United States
Intern
Remote: No
This internship opportunity is with the Offline Driving Intelligence team working on ML-Agents, within Zoox’s broader Foundation Models organization. The team focuses on creating agents that behave like humans in driving simulation environments. We are developing multi-agent simulation, tackling open research problems at the frontier of large-scale reinforcement and imitation learning. Interns on this team will have the opportunity to develop state-of-the-art agent policies, contribute to publishable research, and receive mentorship from experienced researchers in the field. Interns will work with a mentor to address a major open research question currently facing the team. There is a direct path from the novel research of this internship to production use in the simulation system that tests Zoox’s autonomous driving software.

Requirements
- Currently working toward a Ph.D. or advanced degree in a relevant engineering program
- Good academic standing
- Able to commit to a 12-week internship during one of the following summer 2026 cohorts: May 18 – August 7, May 26 – August 14, or June 15 – September 4
- Ability to relocate to the Bay Area, CA or Seattle, WA for the duration of the internship
- Interns at Zoox may not use any proprietary information they are working on in their thesis, in any published work with their university, or distribute it to anyone outside of Zoox

Qualifications
- Experience with imitation learning (both behavior cloning and closed-loop methods)
- Experience with online reinforcement learning
- Advanced understanding of Python, PyTorch, and JAX
- Experience working in large codebases as part of a team
- Authored publications in top ML/robotics conferences (e.g., NeurIPS, CVPR, ICRA)

Bonus Qualifications
- Experience with autonomous driving
- Experience with robotics planning
- Experience with inverse reinforcement learning
- Experience with multi-agent reinforcement learning

Compensation: The monthly salary for this position is $9,500. Compensation will vary based on geographic location. Additional benefits may include medical insurance and a housing stipend (relocation assistance will be offered based on eligibility).

About Zoox: Zoox is developing the first ground-up, fully autonomous vehicle fleet and the supporting ecosystem required to bring this technology to market. Sitting at the intersection of robotics, machine learning, and design, Zoox aims to provide the next generation of mobility-as-a-service in urban environments. We’re looking for top talent that shares our passion and wants to be part of a fast-moving and highly execution-oriented team. Follow us on LinkedIn.

Accommodations: If you need an accommodation to participate in the application or interview process, please reach out to accommodations@zoox.com or your assigned recruiter.

A Final Note: You do not need to match every listed expectation to apply for this position. Here at Zoox, we know that diverse perspectives foster the innovation we need to be successful, and we are committed to building a team that encompasses a variety of backgrounds, experiences, and skills.

Agentic Finance Engineer

Mercor
$175,000 – $250,000
United States
Full-time
Remote: No
About Mercor: Mercor is defining the future of work. We partner with leading AI labs and enterprises to provide the human intelligence essential to AI development. Our vast talent network trains frontier AI models in the same way teachers teach students: by sharing knowledge, experience, and context that can't be captured in code alone. Today, more than 30,000 experts in our network collectively earn over $2 million a day. Mercor is creating a new category of work where expertise powers AI advancement. Achieving this requires an ambitious, fast-paced, and deeply committed team. You’ll work alongside researchers, operators, and AI companies at the forefront of shaping the systems that are redefining society. Mercor is a profitable Series C company valued at $10 billion. We work in person five days a week in our San Francisco, NYC, or London offices.

About the Role: This is a rare, foundational hire: Mercor's first Agentic Finance Engineer. You'll sit at the intersection of finance, data engineering, and applied AI, building the systems and infrastructure that power how our finance and corporate operations team runs. You will design and deploy the modern data and automation stack from the ground up: architecting scalable financial data models, integrating AI agents into real workflows, and turning manual, spreadsheet-driven processes into reliable, production-grade systems. The ideal candidate has deep finance domain knowledge (you understand how the books actually close, how procurement flows, how revenue is recognized) and the technical fluency to translate that understanding into durable, automated systems. You're equally comfortable in a SQL editor and in a conversation with the CFO, and you're energized by building in a zero-to-one environment.

What You'll Do

Finance Data & Systems
- Facilitate the design, build, and maintenance of a reliable financial data foundation using modern tools (e.g., dbt, FiveTran, Snowflake/BigQuery, Airflow), covering revenue, AP/AR, procurement, close, strategic finance, and FP&A
- Partner closely with the data infrastructure team to build Mercor's financial data model: define canonical datasets, dimensional schemas, and the transformation logic that serves Finance stakeholders
- Partner with Finance leads across accounting, strategic finance, and operations to translate business requirements into technical architecture
- Build and maintain dashboards and self-serve reporting tools that give finance leaders real-time visibility into key metrics

AI & Automation
- Own the Agentic Finance roadmap, prioritizing use cases and driving features from ideation to deployment
- Identify high-value automation opportunities across Finance and corporate operations (month-end close tasks, reconciliations, procurement workflows, variance analysis) and ship solutions that eliminate manual work
- Build intelligent, reliable automation via agents (in partnership with engineering teams), AI-powered tools, and multi-step ETL jobs into live finance workflows, using APIs (including frontier models)
- Build internal tooling that Finance teams actually use: lightweight apps, workflow automations, and AI-assisted processes that save meaningful time at scale
- Stay at the frontier of what's possible with AI in Finance (meeting other finance teams, reading research, etc.), and proactively bring new capabilities to bear on Mercor's highest-leverage operational problems

Governance, Quality & Scale
- Enforce data integrity standards and testing practices across all financial data products, ensuring auditability and reliability for a maturing Finance function
- Ensure AI-assisted processes meet appropriate governance and controls standards, with clear auditability of model outputs used in financial workflows
- Champion a culture of data quality and documentation so that Finance teams trust and rely on the systems you build

What We're Looking For
- 3+ years of experience spanning finance operations, data engineering, analytics engineering, data science, or a combination, with a track record of shipping reliable systems in high-growth environments
- Understanding of the data needs of finance and corporate operations, including accounting, finance operations, FP&A, strategic finance, and procurement
- Strong proficiency in SQL and Python; while you can learn data and workflow orchestration tools on the job, we expect very strong computer science fundamentals
- A builder's instinct: you take ownership of problems, move fast, and know when to do things properly versus pragmatically
- Excellent communication skills: you can explain a data model to a CFO and a pipeline architecture to an engineer
- Experience navigating ambiguity in early-stage or rapidly scaling environments

Bonus Points
- Hands-on experience deploying AI agents or LLM-powered tools
- Experience with financial data governance and audit readiness
- Background in strategic finance, accounting, FP&A, or finance transformation, or time spent in a high-growth tech company's finance function

Compensation
- Base cash compensation from $175K to $250K
- Aggressive bi-annual performance bonus structure
- Generous equity grant vested over 4 years
- Up to $15K relocation bonus
- $10K housing bonus (if you live within 0.5 miles of our office)
- $1.5K monthly stipend for meals
- Free Equinox membership
- $200 monthly laundry reimbursement
- $200 monthly personal wellness reimbursement
- Health, dental, and vision insurance
- 401(k) with company match