⚠️ Sorry, this job is no longer available.

The AI job market moves fast. We keep up so you don't have to.

Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.


New AI Opportunities


Forward Deployed AI Engineering Manager, Enterprise

Scale AI
$201,600 – $241,920
United States
Full-time
Remote: No
About the role
We're hiring an AI Architect to sit at the intersection of frontier AI research, product, and go-to-market. You'll partner closely with ML teams in high-stakes meetings, scope and pitch solutions to top AI labs, and translate research needs (post-training, evals, alignment) into clear product roadmaps and measurable outcomes. You'll drive end-to-end delivery, partnering with AI research teams and core customers to scope, pilot, and iterate on frontier model improvements, while coordinating with engineering, ops, and finance to turn cutting-edge research into deployable, high-impact solutions.

What you'll do
- Translate research into production impact: collaborate with customer-side researchers on post-training, evaluations, and safety/alignment, and help design the data, primitives, and tooling required to improve frontier models in practice.
- Partner deeply with core customers and frontier labs: work hands-on with leading AI teams and frontier research labs to tackle hard, open-ended technical problems related to frontier model improvement, performance, and deployment.
- Shape and propose model improvement work: translate customer and research objectives into clear, technically rigorous proposals, scoping post-training, evaluation, and safety work into well-defined statements of work and execution plans.
- Own the end-to-end lifecycle: lead discovery, write crisp PRDs and technical specs, prioritize trade-offs, run experiments, ship initial solutions, and scale successful pilots into durable, repeatable offerings.
- Lead complex, high-stakes engagements: independently run technical working sessions with senior customer stakeholders; define success metrics; surface risks early; and drive programs to measurable outcomes.
- Partner across Scale: collaborate closely with research (agents, browser/SWE agents), platform, operations, security, and finance to deliver reliable, production-grade results for demanding customers.
- Build evaluation rigor at the frontier: design and stand up robust evaluation frameworks (e.g., RLVR, benchmarks), close the loop with data quality and feedback, and share learnings that elevate technical execution across accounts.

You have
- A deep technical background in applied AI/ML: 5–10+ years in research, engineering, solutions engineering, or technical product roles working on LLMs or multimodal systems, ideally in high-stakes, customer-facing environments.
- Hands-on experience with model improvement workflows: post-training techniques, evaluation design, benchmarking, and model quality iteration.
- The ability to work on hard, ambiguous technical problems: a proven track record of partnering directly with advanced customers or research teams to scope, reason through, and execute on deep technical challenges involving frontier models.
- Strong technical fluency: you can read papers, interrogate metrics, write or review complex Python/SQL for analysis, and reason about model-data trade-offs.
- Executive presence with world-class researchers and enterprise leaders; excellent writing and storytelling.
- A bias to action: you ship, learn, and iterate.

How you'll work
- Customer-obsessed: start from real research needs; prototype quickly; validate with data.
- Cross-functional by default: align research, engineering, ops, and GTM on a single plan; communicate clearly up and down.
- Field-forward: expect regular time with customers and research leads; light travel as needed.

What success looks like
- Clear wins with top labs: pilots that convert to scaled programs with strong eval signals.
- Reusable alignment and eval building blocks that shorten time-to-value across accounts.
- Crisp internal docs (PRDs, experiment readouts, exec updates) that drive decisions quickly.

Compensation
Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Directors approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process and confirm whether the hired role will be eligible for an equity grant. You'll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage; retirement benefits; a learning and development stipend; and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend. Please reference the job posting's subtitle for where this position will be located. For pay transparency purposes, the base salary range for this full-time position in San Francisco, New York, and Seattle is $201,600 – $241,920 USD.

PLEASE NOTE: Our policy requires a 90-day waiting period before reconsidering candidates for the same role. This allows us to ensure a fair and thorough evaluation of all applicants.

About Us
At Scale, our mission is to develop reliable AI systems for the world's most important decisions. Our products provide the high-quality data and full-stack technologies that power the world's leading models, and help enterprises and governments build, deploy, and oversee AI applications that deliver real impact. We work closely with industry leaders like Meta, Cisco, DLA Piper, Mayo Clinic, Time Inc., the Government of Qatar, and U.S. government agencies including the Army and Air Force. We are expanding our team to accelerate the development of AI applications.

We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity, or veteran status. We are committed to working with and providing reasonable accommodations to applicants with physical and mental disabilities. If you need assistance and/or a reasonable accommodation in the application or recruiting process due to a disability, please contact us at accommodations@scale.com. Please see the United States Department of Labor's Know Your Rights poster for additional information. We comply with the United States Department of Labor's Pay Transparency provision.

PLEASE NOTE: We collect, retain, and use personal data for our professional business purposes, including notifying you of job opportunities that may be of interest and sharing with our affiliates. We limit the personal data we collect to that which we believe is appropriate and necessary to manage applicants' needs, provide our services, and comply with applicable laws. Any information we collect in connection with your application will be treated in accordance with our internal policies and programs designed to protect personal data. Please see our privacy policy for additional information.

Senior Software Engineer, Connectivity

Scale AI
$201,600 – $241,920
United States
Full-time
Remote: No

Client Account Manager (Madrid)

xAI
$45 – $100 / hour
United States
Full-time
Remote: No
About xAI
xAI's mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity. We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company's mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important. All employees are expected to have strong communication skills and to concisely and accurately share knowledge with their teammates.

About the Role
As an AI Tutor - Economics, you will be instrumental in enhancing the capabilities of our cutting-edge technologies by providing high-quality input and labels using specialized software. Your role involves collaborating closely with our technical team to support the training of new AI tasks, ensuring the implementation of innovative initiatives. You'll contribute to refining annotation tools and selecting complex problems from advanced economics domains, with a focus on macroeconomic forecasting, microeconomic incentives, and behavioral experiments. This position demands a dynamic approach to learning and adapting in a fast-paced environment, where your ability to interpret and execute tasks based on evolving instructions is crucial.

AI Tutor's Role in Advancing xAI's Mission
As an AI Tutor, you will play an essential role in advancing xAI's mission by supporting the training and refinement of xAI's AI models. AI Tutors teach our AI models how people interact and react, as well as how people approach issues and discussions in economics. To accomplish this, AI Tutors will actively participate in gathering or providing data, such as text, voice, and video data, sometimes providing annotations, recording audio, or participating in video sessions. We seek individuals who are comfortable and eager to engage in these activities as a fundamental part of the role, ensuring strong alignment with xAI's goals and objectives.

Scope
An AI Tutor will provide services that include labeling and annotating data in text, voice, and video formats to support AI model training. At times, this may involve recording audio or video sessions, and tutors are expected to be comfortable with these tasks as they are fundamental to the role. Such data is a job requirement to advance xAI's mission, and AI Tutors acknowledge that all work is done for hire and owned by xAI.

Responsibilities
- Use proprietary software applications to provide input/labels on defined projects.
- Support and ensure the delivery of high-quality curated data.
- Play a pivotal role in supporting and contributing to the training of new tasks, working closely with the technical staff to ensure the successful development and implementation of cutting-edge initiatives and technologies.
- Interact with the technical staff to help improve the design of efficient annotation tools.
- Choose problems from economics fields that align with your expertise, focusing on areas like macroeconomics, microeconomics, and behavioral economics.
- Regularly interpret, analyze, and execute tasks based on given instructions.

Key Qualifications
- A PhD in Economics or a related field.
- Proficiency in reading and writing, in both informal and professional English.
- Outstanding communication, interpersonal, analytical, and organizational capabilities.
- Solid reading comprehension combined with the capacity to exercise autonomous judgment even when presented with limited data or material.
- Strong passion for and commitment to technological advancements and innovation in economics.

Preferred Qualifications
- At least one publication in a reputable economics journal or outlet.
- Teaching experience as a professor.

Location & Other Expectations
This position is based in Palo Alto, CA, or fully remote. The Palo Alto option is an in-office role requiring 5 days per week; remote positions require strong self-motivation. If you are based in the US, please note we are unable to hire in the states of Wyoming and Illinois at this time. We are unable to provide visa sponsorship. Team members are expected to work 9:00am - 5:30pm PST for the first two weeks of training and 9:00am - 5:30pm in their own timezone thereafter. If you will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.

Compensation
$45/hour - $100/hour. The posted pay range is intended for U.S.-based candidates and depends on factors including relevant experience, skills, education, geographic location, and qualifications. For international candidates, our recruiting team can provide an estimated pay range for your location.

Benefits: Hourly pay is just one part of our total rewards package at xAI. Specific benefits vary by country; depending on your country of residence, you may have access to medical benefits. We do not offer benefits for part-time roles.

xAI is an equal opportunity employer. For details on data processing, view our Recruitment Privacy Notice.

Senior Software Engineer, Pilots

Hayden AI
$200,454 – $260,590
United States
Full-time
Remote: No
About Us
At Hayden AI, we are on a mission to harness the power of computer vision to transform the way transit systems and other government agencies address real-world challenges. From bus lane and bus stop enforcement to transportation optimization technologies and beyond, our innovative mobile perception system empowers our clients to accelerate transit, enhance street safety, and drive toward a sustainable future.

Job Summary
As a Senior Software Engineer on the Pilots team within the Perception organization, you will develop prototypes for forthcoming pilots, aligning with Hayden's mission and long-term roadmap for business expansion. This team investigates novel use cases, vehicles, and deployment environments, quickly translating preliminary concepts into functional prototypes. You will build end-to-end perception and robotics systems designed to operate on real-world hardware and capable of scaling into Hayden's core product platform. This is a C++ software engineering generalist position emphasizing robotics and systems expertise. You will work with a substantial degree of ownership, navigating complexity to design, implement, and harden solutions that balance rapid experimentation with long-term maintainability.

Key Responsibilities
- Deliver robust, thoroughly tested, and maintainable C++ code tailored for edge and robotics platforms.
- Design, implement, and own prototype perception systems with the potential to transition into production-grade solutions.
- Build and iteratively refine real-time perception pipelines encompassing detection, tracking, and sensor fusion.
- Adapt, refine, and integrate machine learning (ML) and computer vision (CV) models, including open-source solutions, for novel, Hayden-specific applications.
- Drive technical decision-making in ambiguous problem spaces, balancing the speed required for prototyping with the requirements of production readiness.
- Collaborate closely with the Product team and cross-functional Engineering departments.
- Contribute to shared infrastructure, tooling, and architectural patterns as pilot initiatives mature into foundational products.

Required Qualifications
- Master's degree in Computer Science, Electrical Engineering, Robotics, or a closely related discipline; a PhD is considered advantageous.
- 5-8 years of relevant experience building and deploying perception systems; experience in automotive or robotics domains is a plus.
- Substantial background in at least one of the following domains: robotics, state estimation, computer vision, or applied machine learning.
- Senior-level industry experience delivering intricate, production-grade software systems.
- Demonstrated proficiency in modern C++, coupled with experience in real-time systems.
- Experience building and owning end-to-end systems, rather than merely isolated components.
- Ability to operate effectively in ambiguous and rapidly evolving environments.
- Proven capacity to collaborate constructively within a developing engineering organization.

Software Engineer - Sensing, Consumer Products

OpenAI
$325,000
United States
Full-time
Remote: No
About the Team
Consumer Products Research prototypes the future of computing: we explore new modalities, interaction patterns, and system behaviors, then do the engineering required to make those ideas real in rigorous prototypes. The Neosensing team sits at the intersection of sensing, edge algorithms, and systems engineering. We build the end-to-end software that turns new signals into dependable capabilities: collection tooling and protocols, algorithm integration and evaluation hooks, and on-device loops that stay stable under real-world variability. We care deeply about software quality and iteration speed: clean interfaces, debuggability, observability, and performance under tight device constraints.

About the Role
As a Software Engineer on Consumer Products Research, you'll sit at the boundary between algorithm development and shippable systems. You'll work closely with algorithm engineers to translate prototypes into clean interfaces, reliable pipelines, and efficient on-device implementations, with strong attention to performance, observability, and real-world failure modes. This is a software role first: we're looking for someone who loves writing great code every day, takes pride in engineering craft, and is comfortable going deep enough into the algorithmic details to make the system work end-to-end.

This role is based in San Francisco, CA. We use a hybrid work model of four days in the office per week and offer relocation assistance to new employees.

In this role, you will:
- Build and ship production software for sensing algorithms, translating algorithm prototypes into reliable end-to-end systems.
- Implement and own key parts of the Python shipping pipeline (integration surfaces, evaluation hooks, and quality/performance guardrails).
- Develop embedded/on-device software in an RTOS environment (e.g., Zephyr) and deploy models to device runtimes and hardware accelerators.
- Optimize real-time on-device perception loops (e.g., detection/tracking-style pipelines) for stability, latency, power, and memory constraints.
- Create data collection and instrumentation tooling to bring up new sensing modalities and accelerate iteration from prototype → dataset → model → device.
- Partner cross-functionally (algorithms, human data, firmware/hardware) to debug, profile, and harden systems against real-world variability.

You might thrive in this role if you:
- Love writing great software and want your work to sit close to novel sensing and edge algorithms.
- Understand algorithm behavior well enough to integrate, debug, and evaluate it, even if you're not the primary model inventor.
- Have shipped production Python systems and care about clean interfaces, tests, and long-term maintainability.
- Enjoy embedded/on-device work and can debug across hardware, firmware, and higher-level application layers.
- Care about performance engineering and know how to profile and optimize under tight device constraints.
- Take ownership end-to-end and thrive in ambiguous, fast-moving, zero-to-one environments.

Bonus:
- Zephyr (or similar RTOS) experience.
- On-device ML deployment (NPU/GPU/DSP) and accelerator-aware profiling/optimization.
- Background in multimodal sensing, sensor fusion, or on-device perception.
- Experience building data collection systems and human-in-the-loop workflows (protocols, QA, metadata).

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement. Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss, or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link. OpenAI Global Applicant Privacy Policy.

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Senior Software Engineer, ML Core

Zoox
$214,000 – $290,000
United States
Full-time
Remote: No
Zoox is on a mission to reimagine transportation and build ground-up autonomous robotaxis that are safe, reliable, clean, and enjoyable for everyone. We are still in the early stages of deploying our robotaxis on public roads, and it is a great time to join Zoox and have a significant impact in executing this mission. The ML Platform team at Zoox plays a crucial role in enabling innovations in ML and AI to make autonomous driving as seamless as possible.

The Opportunity
Would you like to enable ML use cases like autonomous driving, scene understanding, and automated mapping at Zoox? This role works across all ML teams within Zoox (Perception, Behavior ML, Simulation, Data Science, Collision Avoidance) as well as with our Advanced Hardware Engineering group specifying our next generation of autonomous hardware. You will significantly push the boundaries of how ML is practiced within Zoox. We build and operate the base layer of ML tools, deep learning frameworks, inference libraries, and ML infrastructure used by our applied research teams for in- and off-vehicle ML use cases. We coordinate across all of Zoox to make sure that the needs of both the vehicle and ML teams are met. You will play a crucial role in reducing the time it takes from ideation to productionization of cutting-edge AI innovation. This team has a lot of growth opportunities as we expand our robotaxi deployments and venture into new ML domains. If you want to learn more about our stack behind autonomous driving, please look here.

In this role, you will:
- Design, develop, and deploy custom and off-the-shelf ML libraries and tooling to improve ML development, training, deployment, and on-vehicle model inference latency.
- Build tooling and establish development best practices to manage and upgrade foundational libraries (e.g., the Nvidia driver, PyTorch, TensorRT), improve the ML developer experience, and expedite debugging efforts.
- Collaborate closely with cross-functional teams, including applied ML research, high-performance compute, advanced hardware engineering, and data science, to define requirements and align on architectural decisions.

Qualifications
- 6+ years of experience
- Proficient in Python or C++
- Familiarity with training frameworks and libraries such as PyTorch, Lightning, Hugging Face, Ray, JAX, etc.
- Familiarity with GPU-accelerated inference on Nvidia hardware, e.g., CUDA, TensorRT, and/or XLA

Bonus Qualifications
- Familiarity with Bazel and/or the C++ linker

$214,000 - $290,000 a year
There are three major components to compensation for this position: salary, Amazon Restricted Stock Units (RSUs), and Zoox Stock Appreciation Rights. A sign-on bonus may be offered as part of the compensation package. The listed range applies only to the base salary. Compensation will vary based on geographic location and level. Leveling, as well as positioning within a level, is determined by a range of factors, including, but not limited to, a candidate's relevant years of experience, domain knowledge, and interview performance. The salary range listed in this posting is representative of the range of levels Zoox is considering for this position. Zoox also offers a comprehensive package of benefits, including paid time off (e.g., sick leave, vacation, bereavement), unpaid time off, Zoox Stock Appreciation Rights, Amazon RSUs, health insurance, long-term care insurance, long-term and short-term disability insurance, and life insurance.

About Zoox
Zoox is developing the first ground-up, fully autonomous vehicle fleet and the supporting ecosystem required to bring this technology to market. Sitting at the intersection of robotics, machine learning, and design, Zoox aims to provide the next generation of mobility-as-a-service in urban environments. We're looking for top talent that shares our passion and wants to be part of a fast-moving and highly execution-oriented team. Follow us on LinkedIn.

Accommodations
If you need an accommodation to participate in the application or interview process, please reach out to accommodations@zoox.com or your assigned recruiter.

A Final Note
You do not need to match every listed expectation to apply for this position. Here at Zoox, we know that diverse perspectives foster the innovation we need to be successful, and we are committed to building a team that encompasses a variety of backgrounds, experiences, and skills.

Engineering Manager - Engine and Platform

Arcade.dev
$200,000 – $275,000
United States
Full-time
Remote: No
The Revolution Needs You
We're building the actions runtime that allows AI agents to safely take real-world actions at enterprise scale. As the Engineering Manager for the Engine and Platform, you'll lead the team responsible for securing and running all the tools LLMs will need - at scale.

Why This Is The Opportunity of a Lifetime
$166B has been poured into AI reasoning. The models work. But 94% of enterprise agent projects die before production. Why? Because authentication and integration, which were separate problems in the cloud era (Okta + MuleSoft), collapsed into one in the agent era. Every agent action requires both auth and integration in the same moment. We're building that unified layer - the actions runtime between agents and every system they need to touch. Auth, tools, and governance in one place.
Founder-Market Fit: Our CEO previously founded Stormpath (acquired by Okta), where he created the first Authentication API for developers. Our CTO led the vector database team at Redis, shipped 100+ LLM applications, and is a contributor to LangChain and LlamaIndex. We're MCP contributors who wrote SEP-1036 (URL Elicitation), now the auth standard for the Model Context Protocol.
Dream Team: Authentication, integrations, distributed systems, and AI experts from Okta, Redis, Microsoft, Splunk, Ngrok, Google, Airbyte, Disney, and HPE. Four second-time founders on staff. We've built and scaled developer platforms before.
Real Traction: $1M ARR in our first year, with Fortune 100 customers closed in under 6 months.
Perfect Timing: We're at the inflection point of AI adoption. The biggest problem isn't better models - it's securely connecting AI to real-world actions. That's us.
Massive Market: Identity ($32B in exits) + Integration ($24B in exits) were separate in cloud. We're building them together for agents. One layer. One company. Bigger opportunity.
Backed By The Best: Our investors backed Databricks, ClickHouse, MongoDB, Perplexity, Cohere, Scale AI, Confluent, Elastic, and Firebase.

The Challenge
Team Charter: Build, maintain, and deploy the runtime for customers to run, manage, secure, and understand AI tools, enabling advanced agentic use cases.
Arcade's engineering team is growing! You will be one of our first two engineering managers, responsible for scaling the team that owns the development of our platform and services. This is a team of distributed systems engineers and authorization/identity experts who build our platform and features like MCP gateways, tenant and project roles and permissions, and the platform-as-a-service for tool executions that powers arcade-deploy. Given our upmarket customer base, this team also invests in security and compliance, as well as the integrations our self-hosted enterprise customers need. Building the one-stop platform for the enterprise to run and manage all of their tools safely and easily is the mission.
We expect that you'll spend most of your time leveling up the team, keeping it unblocked, and aligning its work with our product organization - but you should still be excited to get your hands dirty in the code when possible. While this is primarily a people, product, and operations leadership role, we expect you to stay technically engaged through reviews, critical-path contributions, and occasional hands-on coding when it meaningfully unblocks the team. We have a team of A-players today, and we need you to scale up our capacity without losing our culture, our heart, or our velocity.
This role involves navigating ambiguity, evolving standards in the AI tools ecosystem (e.g., the MCP specification, which we contribute to), and the challenges of scaling fast without sacrificing quality at an early startup. This team works primarily in Go, TypeScript, and Python, and touches our Python tools/MCP-servers codebase from time to time. This role reports to the Head of Engineering.
The ideal candidate is already an expert in enterprise-grade platforms and/or security products. You've previously built self-hosted products for enterprises and managed large-scale SaaS deployments. You are invested in the AI revolution and have strong opinions on what it means to roll out safe AI products at scale.

What You'll Do
- Be ultimately responsible for the deliverables, stability, and uptime of your team, and be empowered to ramp up these goals as the team grows and matures.
- Ensure a consistent product vision and architecture for the team, and help shape the team and company roadmap - you will be the primary owner of technical direction, prioritization, and execution for the team's work, in close partnership with Product and the founders.
- Hire and mentor talented engineers and help shape their technical and career growth. You'll be managing a team of 6 very senior engineers, many of whom have been startup CTOs themselves, with the goal of growing to a team of 8 by the end of the year.
- Define and deliver the next most important platform features for our customers.
- Ship high-impact features at scale while ensuring reliability, security, and enterprise readiness.
- Build leverage into the system - transform week-long tasks into minutes through automation and agents.

Required Skills
- 8+ years of software engineering experience, including 2+ years in engineering leadership.
- Proven success managing and scaling high-performing teams.
- Deep experience deploying and monitoring software both as a SaaS platform and on-prem.
- Strong architectural instincts and the ability to align technical direction with business outcomes.
- Passion for building frameworks that empower other developers to win.
- Excellent communication skills and cross-functional leadership.
- Comfort using LLMs/AI throughout all parts of the software development lifecycle.
- An insatiable desire to ship and continuously improve.

Bonus Points
- Open-source contributions
- You've been at an early-stage startup before and loved it
- Go and/or TypeScript expertise

Join The Movement
We're not just building a product - we're leading a movement to transform AI from just chatbots into agents that can take actions against real systems. This is your chance to be at the forefront of that revolution. If you want to look back in 5 years and say, "I helped build that," then we want to talk to you. Ready to make AI actually useful? Apply now.

Computer Vision Engineer (VIO)

Harmattan AI
Switzerland
Full-time
Remote: No
About Us
Harmattan AI is a next-generation defense prime building autonomous and scalable defense systems. Following the close of a $200M Series B valuing the company at $1.4 billion, we are expanding our teams and capabilities to deliver mission-critical systems to allied forces. Our work is guided by clear values: building technologies with real-world impact, pursuing excellence in everything we do, setting ambitious goals, and taking on the hardest technical challenges. We operate in a demanding environment where rigor, ownership, and execution are expected.

About the Role
We are looking for a Computer Vision Engineer to contribute to the development of the front end of our visual-inertial odometry (VIO) algorithmic stack.

Responsibilities
- VIO front-end algorithm development: matching between frames and stereo pairs, calibration of camera intrinsic and extrinsic parameters, obstruction detection, and implementation and optimization of the algorithmic stack for our embedded platforms.
- Testing, validation, and monitoring: testing algorithms in simulation and real-world environments, and developing inspection and monitoring tools.
- Cross-team collaboration: work closely with system engineers, optical engineers, and software engineers, and communicate findings effectively to stakeholders.

Candidate Requirements
- Educational background: Master's degree in Robotics, Physics, Computer Science, or a related field; a PhD is a plus
- Technical expertise: strong mathematical foundations and coding skills in both Python and C++
- Hands-on VIO project experience; experience with VIO on industrial products is a huge plus
- Passion for computer vision and robotics
- Strong communication and teamwork: ability to collaborate effectively with diverse teams
- Commitment: 100% dedication to Harmattan's mission, vision, and ambitious growth plans, ready to go the extra mile to ensure operational excellence

We look forward to hearing how you can help shape the future of autonomous defense systems at Harmattan AI.

Software Engineer, Platform Systems

OpenAI
United Kingdom
Full-time
Remote: No
About the Team
The Platform Systems team at OpenAI operates at the intersection of cutting-edge AI and large-scale distributed systems. We build the engineering and research infrastructure required to train OpenAI's flagship models on some of the world's largest, custom-built supercomputers. Our team develops core model training software and works deep in the stack - spanning collective communication, compute efficiency, parallelism strategies, fault tolerance, failure detection, and observability. The systems we build are foundational to OpenAI's research velocity, enabling reliable, efficient training at frontier scale. We collaborate closely with researchers across the organization, continuously incorporating learnings from across OpenAI into the evolution of our training platform.

About the Role
As a Software Engineer, Platform Systems, you will design and build distributed systems that provide visibility into large-scale training workloads and help operate them reliably at scale. You'll work on failure detection, tracing, and observability systems that identify slow or faulty nodes, surface performance bottlenecks, and help engineers understand and optimize massive distributed training jobs. This infrastructure is critical to operating OpenAI's training stack and is actively evolving to support new use cases and increasingly complex workloads. This role sits at the core of our training infrastructure, blending systems engineering, performance analysis, and large-scale debugging.

In This Role, You Will
- Design and build distributed failure detection, tracing, and profiling systems for large-scale AI training jobs
- Develop tooling to identify slow, faulty, or misbehaving nodes and provide actionable visibility into system behavior
- Improve observability, reliability, and performance across OpenAI's training platform
- Debug and resolve issues in complex, high-throughput distributed systems
- Collaborate with systems, infrastructure, and research teams to evolve platform capabilities
- Extend and adapt failure detection and tracing systems to support new training paradigms and workloads

You Might Thrive in This Role If You
- Care deeply about performance, stability, and observability in distributed systems
- Enjoy finding and fixing issues in large-scale systems and automating operational workflows
- Have experience writing low-level software where system details matter
- Understand hardware, operating systems, networking, concurrency, and distributed systems
- Have a background in high-performance computing or low-level systems engineering
- Are excited to work on critical infrastructure that powers frontier AI research

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss, or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Software Engineer, Platform Systems

OpenAI
$310,000 – $460,000
United States
Full-time
Remote: No
About the Team
The Platform Systems team at OpenAI operates at the intersection of cutting-edge AI and large-scale distributed systems. We build the engineering and research infrastructure required to train OpenAI's flagship models on some of the world's largest, custom-built supercomputers. Our team develops core model training software and works deep in the stack - spanning collective communication, compute efficiency, parallelism strategies, fault tolerance, failure detection, and observability. The systems we build are foundational to OpenAI's research velocity, enabling reliable, efficient training at frontier scale. We collaborate closely with researchers across the organization, continuously incorporating learnings from across OpenAI into the evolution of our training platform.

About the Role
As a Software Engineer, Platform Systems, you will design and build distributed systems that provide visibility into large-scale training workloads and help operate them reliably at scale. You'll work on failure detection, tracing, and observability systems that identify slow or faulty nodes, surface performance bottlenecks, and help engineers understand and optimize massive distributed training jobs. This infrastructure is critical to operating OpenAI's training stack and is actively evolving to support new use cases and increasingly complex workloads. This role sits at the core of our training infrastructure, blending systems engineering, performance analysis, and large-scale debugging.

In This Role, You Will
- Design and build distributed failure detection, tracing, and profiling systems for large-scale AI training jobs
- Develop tooling to identify slow, faulty, or misbehaving nodes and provide actionable visibility into system behavior
- Improve observability, reliability, and performance across OpenAI's training platform
- Debug and resolve issues in complex, high-throughput distributed systems
- Collaborate with systems, infrastructure, and research teams to evolve platform capabilities
- Extend and adapt failure detection and tracing systems to support new training paradigms and workloads

You Might Thrive in This Role If You
- Care deeply about performance, stability, and observability in distributed systems
- Enjoy finding and fixing issues in large-scale systems and automating operational workflows
- Have experience writing low-level software where system details matter
- Understand hardware, operating systems, networking, concurrency, and distributed systems
- Have a background in high-performance computing or low-level systems engineering
- Are excited to work on critical infrastructure that powers frontier AI research

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss, or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Founding Platform Engineer

Netic
United States
Full-time
Remote: No
Netic is the AI revenue engine for the essential services businesses that are the backbone of the American economy. With $43M in funding from Founders Fund, Greylock, Hanabi, and Dylan Field, who led our Series B, we have helped our customers book hundreds of thousands of jobs across services industries in North America. There are now companies operating entirely AI-first on Netic. You'll join a team of relentless builders from Scale, Databricks, HRT, Meta, MIT, Stanford, and Harvard in bringing frontier AI to the physical economy, where the problems are hard, the data is complex, and the impact is immediate and tangible.
As a Founding Platform Engineer, you will own the semantic layer that powers our system of record and enables compounding products across the company. What makes this role uniquely rare at Netic is the scope and shape of the platform we are building. From day one, our system of record has to work across industries, geographies, and regulatory environments. That requires a platform that is highly customizable to support different workflows and business rules, while still being tractable enough that engineers can understand, debug, and operate it in production.
This platform directly determines how dynamic workflows are built, packaged, and delivered to customers. The abstractions you create will influence how quickly we can launch new products, expand into new verticals, and respond to real-world edge cases without sacrificing reliability. If you are excited to design foundational systems where flexibility, correctness, and developer ergonomics all matter, this is an opportunity to build a durable, company-defining platform.

What You'll Do
- Build the semantic layer: design and own the semantic layer that powers our system-of-record flywheel. This layer is the foundation for how data, state, and meaning flow through the company, enabling compounding AI products across teams.
- Create internal platforms with leverage: build primitives, abstractions, and APIs that product teams use as building blocks. Your success is measured by how easily other engineers can ship powerful AI-driven features on top of your work.
- Treat product teams as customers: partner closely with internal product and engineering teams to understand their needs, eliminate friction, and design systems that are intuitive, well-documented, and hard to misuse.
- Architect for multiple data and request topologies: design systems that span data warehouses, OLTP databases, streaming systems, and vector stores. Make intentional tradeoffs based on latency, throughput, consistency, and access patterns.
- Set platform direction: work with leadership to define the long-term platform architecture, including build-vs-buy decisions, the evolution of the semantic layer, and how the system scales as product surface area grows.

What You'll Bring
- Platform and systems experience: 5+ years building and operating distributed systems, ideally with experience owning a platform or core abstraction used by multiple teams.
- Strong data systems background: deep understanding of data warehouses, transactional databases, and other storage systems, including how to model data for different access patterns and workloads.
- Semantic and abstraction thinking: the ability to design schemas, contracts, and semantic models that remain stable over time while supporting rapid product iteration.
- Operational rigor: experience with observability, migrations, backfills, incident response, and running high-uptime systems that are hard to unwind once in production.
- Ownership mentality: you take responsibility for long-term outcomes, not just shipping code. You think in terms of flywheels, leverage, and second-order effects.

We believe fulfillment comes from producing your best work with the smartest people together in one room. All roles are in person in SF (our office is in Jackson Square). What brings us together is our commitment to:
- Live to build
- Run through walls and win
- Obsess over customers in each line of code
- Lose sleep over the "almost perfect"
- Show an internal locus of control
- Prioritize finesse: refinement of first-principles thinking, execution, and craftsmanship

We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability, or any other legally protected status.

Member of Technical Staff - Post Training, Applied

Liquid AI
United States
Full-time
Remote: No
About Liquid AI
Spun out of MIT CSAIL, we build AI systems that run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.

The Opportunity
This is a rare chance to sit at the intersection of frontier foundation models and real-world deployment. You'll own applied post-training work end-to-end for some of the world's largest enterprises, while still contributing directly to Liquid's core model development. Unlike most roles that force a trade-off between customer impact and foundational work, this role gives you both: deep ownership over how models are adapted, evaluated, and shipped, and a direct line into the evolution of Liquid's post-training stack. If you care about data quality, evaluation, and making models actually work in production, this is a chance to shape how applied AI is done at a foundation-model company.

What We're Looking For
We need someone who:
- Takes ownership: owns post-training projects end-to-end, from customer requirements through delivery and evaluation.
- Thinks end-to-end: can reason across data generation, training, alignment, and evaluation as a single system.
- Is pragmatic: optimises for model quality and customer outcomes over publications or theory.
- Communicates clearly: can translate between customer needs and internal technical teams, and push back when needed.

The Work
- Act as the technical owner for enterprise customer post-training engagements.
- Translate customer requirements into concrete post-training specifications and workflows.
- Design and execute data generation, filtering, and quality assessment processes.
- Run supervised fine-tuning, preference alignment, and reinforcement learning workflows.
- Design task-specific evaluations, interpret results, and feed learnings back into core post-training pipelines.

Desired Experience
Must-have:
- Hands-on experience with data generation and evaluation for LLM post-training.
- Experience training or fine-tuning models using SFT, preference alignment, and/or RL.
- Strong intuition for data quality and evaluation design.
- Familiarity with alignment or RL techniques beyond basic supervised fine-tuning.
Nice-to-have:
- Experience contributing to shared or general-purpose post-training infrastructure.
- Prior exposure to customer-facing or applied ML delivery environments.

What Success Looks Like (Year One)
- Independently owns and delivers enterprise post-training projects with minimal oversight.
- Is trusted by customers as the technical owner, demonstrating strong judgment and delivery quality.
- Has made durable contributions to Liquid's general-purpose post-training pipelines by feeding applied learnings back into baseline model development.

What We Offer
- Compensation: competitive base salary with equity in a unicorn-stage company
- Health: we pay 100% of medical, dental, and vision premiums for employees and dependents
- Financial: 401(k) matching up to 4% of base pay
- Time Off: unlimited PTO plus company-wide Refill Days throughout the year

Founding Engineering Lead

AI Fund
$180,000 – $220,000
United States
Full-time
Remote: No
About Meeno
Backed by Sequoia Capital, AI Fund, and NEA, and founded by the former CEO of Tinder, Meeno helps people build real-world social courage through voice-based practice: 1-minute scenes, instant scoring, and personalized feedback. Not a dating app. Not a companion. Not therapy. IRL reps to help Gen Z meet people IRL.

What You'll Do
- Own the technical foundation of Meeno end-to-end: web + mobile + backend + data + experimentation.
- Co-design the product vision in close partnership with Meeno's team: Renate Nyborg, founder/CEO (former Tinder CEO, ex-Apple/Headspace); Josh Knox, product lead (founder of Outright, brand and content expert); and Andrew Ng, Chairman (AI Fund, founder of Coursera/Google Brain).
- Build core AI product primitives: voice capture/playback, low-latency interactions, a scene framework (content + branching + scoring hooks), feedback loops and user progression, and personalization.
- Architect for speed and iteration (weekly experiments, not quarterly releases).
- Set the engineering bar: quality, reliability, security/privacy, and shipping culture.
- Hire and mentor engineers as we scale: focus on quality over quantity, leveraging AI and talent to scale while staying lean.

What We're Looking For
- A strong builder who has shipped consumer products.
- Comfortable with ambiguity and moving fast without breaking the soul of the product.
- Product taste: you care about "feel," not just functionality.
- Excited by voice, interaction design, and behavior change.
- Can operate like a founder: make tradeoffs, set direction, ship.

Green flags (not requirements)
- Hyped about social connection, with empathy for Gen Z loneliness.
- Early adopter of AI-driven development.
- Hungry for rapid career progression and societal impact.
- Experience with audio/voice pipelines, realtime systems, and mobile performance.
- A history of building unconventional projects (creative tech, experiments, indie builds).
- Interest in human behavior, social dynamics, and product psychology.

You won't pass the vibe check if...
- You are not passionate about our mission.
- You only want to manage, not build.
- You need perfect specs to move.
- You optimize for process over product feel.
- You are not fun.

What success looks like (first 6 months)
- Meeno has a stable foundation, and we can run rapid experiments.
- Our product feels fast, human, and culturally sharp.
- We have a pipeline of world-class engineering talent.
- We are well-positioned to raise a Series A with clear signal on north-star metrics.
- We are having fun.

$180,000 - $220,000 a year

Senior Software Engineer - Agentic AI Platform

P-1 AI
$200,000 – $250,000
United States
Full-time
Remote: No
About P-1 AI:We are building an engineering AGI. We founded P-1 AI with the conviction that the greatest impact of artificial intelligence will be on the built world. Our first product is Archie, an AI engineer capable of quantitative intuition over physical product domains and engineering tool use. Archie initially performs at the level of an entry-level design engineer but rapidly gets smarter and more capable. We aim to put an Archie on every engineering team at every industrial company on earth.Our founding team includes the top minds in deep learning, model-based engineering, and industries that are our customers. We closed a $23 million seed round led by Radical Ventures that includes a number of other AI and industrial luminaries (from OpenAI, DeepMind, etc.).About the role: We’re a small team tackling an ambitious problem. At the core of our solution is an intelligent, agentic platform that acts as a tireless teammate which can perform all of the same actions that human engineers can, from developing an intuition about the company’s products and the physics at play in their design, to accessing and using information systems, to decomposing performance requirements and developing them into design solutions. Archie must be scalable, responsive, and competent, and must also be deployable within the corporate networks of our customers so that he can see their most valuable information and do his job well.Our Senior SWE for the Agentic AI Platform will design and implement solutions for these challenges, which range from classic distributed systems, integration, and security work, to robustly implementing advanced intelligence features developed by our AI research team. Ownership comes by default, but as Archie becomes more sophisticated we also have potential growth into technical leadership for specific specialty areas.This role can be either remote (based in US or Canada and with existing work authorization) or based in our San Mateo Bay Area office. 
If you are remote, you should plan to spend one week out of six co-working with the rest of the company in our San Mateo office. We will support relocation for candidates interested in moving to the Bay Area.

Why this role:

This role is where AI meets reality. Every user who works with Archie will be using features and capabilities that you have built, no matter the industry or context.

What you'll do:
- Work with our team to understand our Archie capability roadmap and decompose capabilities into technical development.
- Turn capability prototypes and PoCs from our AI research team into robust, scalable implementations.
- Diagnose and solve technical problems identified by our team or users.
- Develop and act on automated platform tests, including traditional software testing, agentic AI evals, and infrastructure.
- Improve Archie's scalability and robustness through system architecture design and implementation.

Who you are:
- You've consistently taken technical ownership of production-grade software.
- You're experienced with application development in Python.
- You've built and implemented data models and database interactions.
- You develop an obsession with understanding your users' goals and incorporate that knowledge into your solutions.
- You take pride in developing quality code and practices.
- You aggressively seek ways to use AI better in your own workflows.

Our values:
- Mission obsession & urgency: We are obsessed with building engineering AGI as quickly as possible. We also recognize that as a startup, speed is our most precious competitive advantage. We are constantly asking ourselves what we can do to go faster. We make tradeoffs and sacrifices (personally and in the workplace) in exchange for speed.
- Intellectual excellence & curiosity: We ask "what if?" and experiment liberally. We always look for better ways of doing something. We read voraciously. We challenge each other to be better.
We surround ourselves with A players, and we actively and unapologetically reject B players (and even B+ players, because they tend to surround themselves with C players).
- Shipping discipline: We treat production with respect. We test and demo our product constantly. We listen attentively to our customers, users, and stakeholders, and we respect our commitments to them. We also respect our commitments to each other and will go the extra mile (or ten or one hundred) to honor them.
- Ownership: We all have significant ownership stakes in the company and operate in founder mode. We believe in hierarchical requirements but not in hierarchical information flows. If we see that something is broken or can be done better, we flag it and we fix it. We encourage each other to play with and fix anything and everything... but there's a clear owner for everything.

Interview process:
- Initial screening call (30 mins)
- Biographical/behavioural interview (45 mins)
- Technical interview (60 mins)
- CEO interview (30 mins)

Compensation:
- Salary: $200k – $250k.
- This role includes a significant equity component. We are an early-stage startup, so we favor equity over cash in our current compensation philosophy. This role is best suited for candidates who value long-term ownership and impact over short-term cash optimization.
- Our benefits include healthcare, dental, and vision insurance, a 401(k) with employer matching, and unlimited PTO.

Senior Research Engineer/Scientist - Edge, Consumer Products

OpenAI
$380,000 – $460,000
United States
Full-time
Remote: false
About the Team

The Consumer Products Research team is an Applied Research team focused on developing new methods and models to support our vision for the future of computing as we advance our mission of building AGI that benefits all of humanity.

About the Role

As a Research Engineer/Scientist on the Consumer Products Research team, you will work together with both the best ML researchers in the world and the greatest design talent of our generation to push the frontier of model capabilities.

This role is based in San Francisco, CA. We follow a hybrid model with 4 days a week in the office and offer relocation assistance to new employees.

In this role you will:
- Train and evaluate multimodal SoTA models along axes that are important to our vision for future devices.
- Develop novel architectures that improve model performance when scaling the models themselves is not an option.
- Run through the necessary walls to take nascent research capabilities and turn them into capabilities we can build on top of.

You might thrive in this role if you:
- Have a research background in adapting transformers to run in environments with significantly less compute than traditional GPUs and datacenter accelerators.
- Love performance optimization and working with GPU kernel engineers (but you do not need CUDA experience yourself).
- Do rigorous, rather than vibes-based, science. We need confidence in the experiments we run to move quickly.
- Have already spent time in the weeds teaching models to speak and perceive.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

RTL & Co-design Engineer (junior)

OpenAI
$250,000 – $460,000
United States
Full-time
Remote: false
About the Team

OpenAI's Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI's supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.

About the Role

We're looking for an RTL Engineer to design and implement key compute, memory, and interconnect components for our custom AI accelerator. You'll work closely with architecture, verification, physical design, and ML engineers to translate AI workloads into efficient hardware structures. This is a hands-on design role with significant ownership across definition, modeling, and implementation.

This role is based in San Francisco, CA.
We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

Responsibilities
- Produce clean, production-quality microarchitecture and RTL for major accelerator subsystems.
- Contribute to architectural studies, including performance modeling and feasibility analysis.
- Collaborate with software, simulator, and compiler teams to ensure hardware/software co-design and workload fit.
- Partner with DV and PD to ensure functional correctness, timing closure, area/power targets, and clean integration.
- Build and review performance and functional models to validate design intent.
- Participate in design reviews, documentation, and bring-up support across the full silicon lifecycle.

You Might Thrive In This Role If You Have:
- Graduate-level research or industry experience in computer architecture and AI/ML hardware–software co-design, including workload analysis, dataflow mapping, or accelerator algorithm optimization.
- Expertise writing production-quality RTL in Verilog/SystemVerilog, with a track record of delivering complex blocks to tape-out.
- Experience developing hardware design models or architectural simulators, ideally for AI/ML or high-performance compute systems.
- Familiarity with industry-standard design tools (lint, CDC/RDC, synthesis, STA) and methodologies.
- Ability to work cross-functionally with architecture, ML systems, compilers, and verification teams.
- Strong problem-solving skills and the ability to think across abstraction layers, from algorithms to circuits.
- Passion for building industry-leading, massive-scale hardware systems.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.

Machine Learning Engineer - Perception Mapping

Zoox
$189,000 – $227,000
United States
Full-time
Remote: false
The Perception team at Zoox is fundamental to our autonomous vehicle technology, creating the understanding of the world for our self-driving robots. We enable safe and efficient navigation in complex environments through sophisticated detection, classification, and tracking systems.

As a software engineer on the perception mapping team, you will be a key contributor to Zoox's online mapping initiative. You will design, train, validate, and integrate into the stack ML models that detect semantic map elements in the world. Your work will touch on all aspects of ML development, including data gathering, labeling, training, validation, and onboard integration. Your work will enable important milestones toward scaling and autonomy capabilities and will be critical to the success of Zoox.

In this role, you will:
- Curate, validate, and label datasets for model training and validation
- Research, implement, and train ML models to perform semantic map element detection
- Closely collaborate with validation teams to formulate and execute model validation pipelines
- Integrate models into the greater onboard autonomy system within compute budgets
- Be a technical leader on the team, maintaining coding and ML development best practices and contributing to architectural decisions

Qualifications:
- MS or PhD or equivalent experience (5+ years) in Computer Science or a related field
- Experience in computer vision or robotics
- Experience with training and deploying deep learning models
- Experience with Python ML libraries (PyTorch, NumPy)

Bonus Qualifications:
- Experience with C++
- Experience with CUDA and/or GPU programming
- Experience with mapping-related ML techniques

Base Salary Range: $189,000 – $227,000 a year

There are three major components to compensation for this position: salary, Amazon Restricted Stock Units (RSUs), and Zoox Stock Appreciation Rights. A sign-on bonus may be offered as part of the compensation package. The listed range applies only to the base salary.
Compensation will vary based on geographic location and level. Leveling, as well as positioning within a level, is determined by a range of factors, including, but not limited to, a candidate's relevant years of experience, domain knowledge, and interview performance. The salary range listed in this posting is representative of the range of levels Zoox is considering for this position. Zoox also offers a comprehensive package of benefits, including paid time off (e.g. sick leave, vacation, bereavement), unpaid time off, Zoox Stock Appreciation Rights, Amazon RSUs, health insurance, long-term care insurance, long-term and short-term disability insurance, and life insurance.

Senior Machine Learning Engineer - Payments

Plaid
$225,600 – $337,200
United States
Full-time
Remote: false
We believe that the way people interact with their finances will drastically improve in the next few years. We're dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products. Plaid powers the tools millions of people rely on to live a healthier financial life. We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use. Plaid's network covers 12,000 financial institutions across the US, Canada, UK and Europe. Founded in 2013, the company is headquartered in San Francisco with offices in New York, Washington D.C., London and Amsterdam.

Plaid's Machine Learning team is building models that improve how millions of users understand and grow their financial lives. We're looking for machine learning engineers with experience applying state-of-the-art machine learning and modeling techniques -- including natural language processing, anomaly detection, optimization, and time series forecasting -- toward different product areas. We value not only technical know-how, but also creativity, user empathy, and teamwork.

You'll be a machine learning engineer on the core ML payments team, contributing to diverse, high-impact machine learning challenges. Specifically, you will focus on designing, building, and deploying scalable ML solutions and systems. You will lead efforts to experiment with new modeling approaches and strategies, and collaborate closely with a skilled team of engineers on ingesting signals and productionizing these models. If you're interested in building state-of-the-art AI/ML solutions to unlock financial freedom for everyone, let's chat!

Responsibilities
- Build with impact. Your work will empower millions of users through well-known and emerging fintech applications with access to financial services.
- Experiment with cutting-edge ML modeling techniques.
- Work on both 0-to-1 stage problems as well as 1-to-10.
- Develop AI/ML models across the full life cycle, from offline training to online serving and monitoring.
- Collaborate with teams across Plaid to define the ML roadmap.
- Dive deep into data and apply data-driven decisions in day-to-day work.
- Join a high-ownership, bottom-up driven team.

Qualifications
- 5+ years in training and serving AI/ML models in a production environment.
- Experience in building/working with data-intensive backend applications in large distributed systems.
- Ability to code and iterate independently on top of data infrastructure tools like Python, Spark, Jupyter notebooks, standard ML libraries, etc.
- Pride in taking ownership and driving projects to business impact.
- Data analytics and data engineering experience is a plus.
- Experience with the industry application of NLP is a plus.
- Experience with the FinTech industry is a plus.
- Ability to work with technical and non-technical teams.
- Master's degree or equivalent work experience in Computer Science, Mathematics, Engineering, or a closely related field.

The target base salary for this position ranges from $225,600/year to $337,200/year in Zone 1. The target base salary will vary based on the job's location. Our geographic zones are as follows:
- Zone 1 - New York City and San Francisco Bay Area
- Zone 2 - Los Angeles, Seattle, Washington D.C.
- Zone 3 - Austin, Boston, Denver, Houston, Portland, Sacramento, San Diego
- Zone 4 - Raleigh-Durham and all other US cities

Additional compensation in the form(s) of equity and/or commission is dependent on the position offered. Plaid provides a comprehensive benefit plan, including medical, dental, vision, and 401(k).
Pay is based on factors such as (but not limited to) scope and responsibilities of the position, candidate's work experience and skillset, and location. Pay and benefits are subject to change at any time, consistent with the terms of any applicable compensation or benefit plans.

Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable. We recognize that strong qualifications can come from both prior work experiences and lived experiences. We encourage you to apply to a role even if your experience doesn't fully match the job description. We are always looking for team members that will bring something unique to Plaid!

Plaid is proud to be an equal opportunity employer and values diversity at our company. We do not discriminate based on race, color, national origin, ethnicity, religion or religious belief, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, military or veteran status, disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state, and local laws.

Plaid is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance with your application or interviews due to a disability, please let us know at accommodations@plaid.com. Please review our Candidate Privacy Notice here.

Member of Technical Staff (Applied AI)

Reka
United States
United Kingdom
Full-time
Remote: false
As a Member of Technical Staff on Applied AI, you will:
- Productionize frontier AI models to solve complex real-world problems.
- Collaborate closely with researchers and other teammates on the latest advancements in AI and ML.
- Work closely with our customers to integrate our models into their technology stack.
- Make direct business impact with a high level of product ownership.
- Be a founding member of a fast-growing team and wear many hats.

You may be a good fit if you have:
- An obsession with customers and a passion for solving practical (but sometimes ambiguous) real-world problems.
- Experience as a machine learning engineer, ideally working on AI products for external customers.
- Background as a technical founder or in a similar capacity.
- Practical experience working with transformer models and LLMs.
- An exceptional ability to communicate and work effectively with cross-functional teams.
- A track record of owning problems end-to-end, and the ability to learn fast and pick up new knowledge to get the job done.
- A kind and collaborative nature and work style.

Reka's Mission

Reka's mission is to build useful multimodal artificial intelligence and use it to empower organisations and businesses. We are a globally distributed foundation model startup, headquartered in the San Francisco Bay Area, California. Embracing a remote-first approach, our team brings together top talent from around the world.
Our founding team, along with many of our team members, has contributed to many of the breakthroughs in AI over the past decade.

Why Reka?
- An Elite Team: Collaborate with top-tier engineers, researchers, and operators from renowned organizations like Google DeepMind and Facebook AI Research (FAIR) and successful startups, driving innovation in cutting-edge AI technology.
- Massive Market Opportunity: Be part of a rapidly growing industry poised to transform multiple sectors globally, offering the chance to make a significant impact.
- Mission-Driven Environment: Work alongside a collaborative, mission-focused team dedicated to advancing AI for meaningful applications.
- Inclusive and Open Culture: Thrive in an open and inclusive work environment that values diverse perspectives and fosters creativity.
- Generous Benefits: Enjoy 5 weeks of paid leave to recharge, comprehensive healthcare benefits including vision and dental, and additional perks that support your well-being.
- Visa Support: We provide visa assistance, including H-1B and OPT transfers, for US employees to ensure a smooth transition and support your career with us.

AI Deployment Engineer - Codex

OpenAI
$176,000 – $224,000
United States
Full-time
Remote: false
About the Team

The Codex Deployment Engineering team helps customers adopt OpenAI's coding tools throughout their software development lifecycle. We act as trusted technical partners, guiding engineering teams as they integrate Codex into their projects and workflows. Our customers span digital-native companies to global enterprises, and we work side-by-side to accelerate how they plan, build, and deliver software.

About the Role

We are seeking a technically deep, creativity-driven AI Deployment Engineer who is already a power user of AI coding tools and passionate about pushing the boundaries of developer productivity. You will partner directly with engineering leaders and hands-on builders to design, validate, and scale advanced AI workflows, often using Codex to prototype and build the very demos, integrations, and automations customers ultimately adopt.

This is a highly cross-functional role that blends technical architecture, product strategy, and customer-facing leadership. You'll work closely with Sales, Solutions Engineering, Product, Applied Engineering, and the broader Codex organization to advocate for customer needs, shape product direction, and accelerate the successful deployment of intelligent coding systems across some of the world's most influential companies.

In this role, you will:
- Serve as the primary technical subject matter expert on OpenAI Codex for a portfolio of customers, embedding deeply with them to enable their engineering teams and build coding workflows.
- Partner directly with customers to design and implement AI-enhanced development workflows, from rapid prototyping through scalable production rollout.
- Build high-quality demos, reference implementations, and workflow automations, using Codex itself as part of your development process.
- Lead large-format workshops, technical deep dives, and hands-on enablement sessions that help engineering organizations adopt AI coding tools effectively and safely.
- Contribute technical content (examples, guides, patterns, and best practices) to the OpenAI Cookbook to help the broader developer community accelerate their work with Codex.
- Gather high-fidelity product insights from real customer deployments and translate them into clear product proposals and model feedback for internal teams.
- Influence customer strategy and decision-making by framing how AI coding tools fit into their SDLC, technical roadmap, and organizational workflows.
- Serve as a trusted advisor on solution architecture, operational readiness, model configuration, security considerations, and best-practice adoption.

You'll thrive in this role if you:
- Have 5+ years of technical consulting, post-sales engineering, solutions architecture, or similar experience working directly with customers.
- Are an active power user of AI coding tools and have deeply customized your own developer workflow; you have a point of view on what makes engineers more productive.
- Enjoy building scrappy, high-signal demos, integrations, and prototypes that clearly articulate what Codex can enable, often using Codex to accelerate your own development process.
- Have experience delivering large, high-impact workshops or technical training to engineering teams and know how to craft sessions that are engaging, hands-on, and outcomes-driven.
- Have contributed technical guides, patterns, or examples publicly and care about clarity, pedagogy, and community impact.
- Communicate complex technical concepts in clear, persuasive written and verbal form, especially when helping customers make strategic decisions about where and how to apply AI.
- Are excited by ambiguous, rapidly evolving problem spaces and enjoy iterating toward novel solutions hand-in-hand with customers.
- Care about customer success, reliability, safety, and operational excellence as much as you care about technical ingenuity.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.