⚠️ Sorry, this job is no longer available.

The AI job market moves fast. We keep up so you don't have to.

Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.


New AI Opportunities


Senior Software Engineer, Agentic Data Products

Scale AI
$201,600 – $241,920
United States
Full-time
Remote: No
About the role
We’re hiring an AI Architect to sit at the intersection of frontier AI research, product, and go-to-market. You’ll partner closely with ML teams in high-stakes meetings, scope and pitch solutions to top AI labs, and translate research needs (post-training, evals, alignment) into clear product roadmaps and measurable outcomes. You’ll drive end-to-end delivery, partnering with AI research teams and core customers to scope, pilot, and iterate on frontier model improvements, while coordinating with engineering, ops, and finance to turn cutting-edge research into deployable, high-impact solutions.

What you’ll do
- Translate research into product: work with client-side researchers on post-training, evals, and safety/alignment, and build the primitives, data, and tooling they need.
- Partner deeply with core customers and frontier labs: work hands-on with leading AI teams and frontier research labs to tackle hard, open-ended technical problems in frontier model improvement, performance, and deployment.
- Shape and propose model improvement work: translate customer and research objectives into clear, technically rigorous proposals, scoping post-training, evaluation, and safety work into well-defined statements of work and execution plans.
- Translate research into production impact: collaborate with customer-side researchers on post-training, evaluations, and alignment, and help design the data, primitives, and tooling required to improve frontier models in practice.
- Own the end-to-end lifecycle: lead discovery, write crisp PRDs and technical specs, prioritize trade-offs, run experiments, ship initial solutions, and scale successful pilots into durable, repeatable offerings.
- Lead complex, high-stakes engagements: independently run technical working sessions with senior customer stakeholders, define success metrics, surface risks early, and drive programs to measurable outcomes.
- Partner across Scale: collaborate closely with research (agents, browser/SWE agents), platform, operations, security, and finance to deliver reliable, production-grade results for demanding customers.
- Build evaluation rigor at the frontier: design and stand up robust evaluation frameworks (e.g., RLVR, benchmarks), close the loop with data quality and feedback, and share learnings that elevate technical execution across accounts.

You have
- A deep technical background in applied AI/ML: 5–10+ years in research, engineering, solutions engineering, or technical product roles working on LLMs or multimodal systems, ideally in high-stakes, customer-facing environments.
- Hands-on experience with model improvement workflows: demonstrated experience with post-training techniques, evaluation design, benchmarking, and model quality iteration.
- The ability to work on hard, ambiguous technical problems: a proven track record of partnering directly with advanced customers or research teams to scope, reason through, and execute on deep technical challenges involving frontier models.
- Strong technical fluency: you can read papers, interrogate metrics, write or review complex Python/SQL for analysis, and reason about model-data trade-offs.
- Executive presence with world-class researchers and enterprise leaders; excellent writing and storytelling.
- A bias to action: you ship, learn, and iterate.

How you’ll work
- Customer-obsessed: start from real research needs, prototype quickly, and validate with data.
- Cross-functional by default: align research, engineering, ops, and GTM on a single plan; communicate clearly up and down.
- Field-forward: expect regular time with customers and research leads; light travel as needed.

What success looks like
- Clear wins with top labs: pilots that convert to scaled programs with strong eval signals.
- Reusable alignment and eval building blocks that shorten time-to-value across accounts.
- Crisp internal docs (PRDs, experiment readouts, exec updates) that drive decisions quickly.
Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Directors approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for an equity grant.

You’ll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend. Please reference the job posting's subtitle for where this position will be located. For pay transparency purposes, the base salary range for this full-time position in San Francisco, New York, and Seattle is $201,600–$241,920 USD.

PLEASE NOTE: Our policy requires a 90-day waiting period before reconsidering candidates for the same role. This allows us to ensure a fair and thorough evaluation of all applicants.

About Us: At Scale, our mission is to develop reliable AI systems for the world's most important decisions. Our products provide the high-quality data and full-stack technologies that power the world's leading models, and help enterprises and governments build, deploy, and oversee AI applications that deliver real impact. We work closely with industry leaders like Meta, Cisco, DLA Piper, Mayo Clinic, Time Inc., the Government of Qatar, and U.S. government agencies including the Army and Air Force. We are expanding our team to accelerate the development of AI applications.
We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity, or veteran status.

We are committed to working with and providing reasonable accommodations to applicants with physical and mental disabilities. If you need assistance and/or a reasonable accommodation in the application or recruiting process due to a disability, please contact us at accommodations@scale.com. Please see the United States Department of Labor's Know Your Rights poster for additional information. We comply with the United States Department of Labor's Pay Transparency provision.

PLEASE NOTE: We collect, retain, and use personal data for our professional business purposes, including notifying you of job opportunities that may be of interest, and share it with our affiliates. We limit the personal data we collect to what we believe is appropriate and necessary to manage applicants' needs, provide our services, and comply with applicable laws. Any information we collect in connection with your application will be treated in accordance with our internal policies and programs designed to protect personal data. Please see our privacy policy for additional information.

Forward Deployed AI Engineering Manager, Enterprise

Scale AI
$201,600 – $241,920
United States
Full-time
Remote: No

Senior Software Engineer, Connectivity

Scale AI
$201,600 – $241,920
United States
Full-time
Remote: No

Client Account Manager (Madrid)

X AI
$45 – $100 / hour
United States
Full-time
Remote: No
About xAI
xAI’s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity. We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company’s mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important. All employees are expected to have strong communication skills and to concisely and accurately share knowledge with their teammates.

About the Role
As an AI Tutor - Economics, you will be instrumental in enhancing the capabilities of our cutting-edge technologies by providing high-quality input and labels using specialized software. You will collaborate closely with our technical team to support the training of new AI tasks and help refine annotation tools, selecting complex problems from advanced economics domains with a focus on macroeconomic forecasting, microeconomic incentives, and behavioral experiments. This position demands a dynamic approach to learning and adapting in a fast-paced environment, where your ability to interpret and execute tasks based on evolving instructions is crucial.

AI Tutor’s Role in Advancing xAI’s Mission
As an AI Tutor, you will play an essential role in advancing xAI’s mission by supporting the training and refinement of xAI’s AI models. AI Tutors teach our AI models how people interact and react, as well as how people approach issues and discussions in economics. To accomplish this, AI Tutors actively participate in gathering or providing data, such as text, voice, and video data, sometimes providing annotations, recording audio, or participating in video sessions. We seek individuals who are comfortable and eager to engage in these activities as a fundamental part of the role.

Scope
An AI Tutor will provide services that include labeling and annotating data in text, voice, and video formats to support AI model training. At times, this may involve recording audio or video sessions, and tutors are expected to be comfortable with these tasks, as they are fundamental to the role. Providing such data is a job requirement, and AI Tutors acknowledge that all work is done for hire and owned by xAI.

Responsibilities
- Use proprietary software applications to provide input/labels on defined projects.
- Support and ensure the delivery of high-quality curated data.
- Play a pivotal role in supporting and contributing to the training of new tasks, working closely with the technical staff to ensure the successful development and implementation of cutting-edge initiatives and technologies.
- Interact with the technical staff to help improve the design of efficient annotation tools.
- Choose problems from economics fields that align with your expertise, focusing on areas like macroeconomics, microeconomics, and behavioral economics.
- Regularly interpret, analyze, and execute tasks based on given instructions.

Key Qualifications
- A PhD in Economics or a related field.
- Proficiency in reading and writing, in both informal and professional English.
- Outstanding communication, interpersonal, analytical, and organizational capabilities.
- Solid reading comprehension skills combined with the capacity to exercise autonomous judgment even when presented with limited data or material.
- A strong passion for and commitment to technological advancement and innovation in economics.

Preferred Qualifications
- At least one publication in a reputable economics journal or outlet.
- Teaching experience as a professor.

Location & Other Expectations
This position is based in Palo Alto, CA, or fully remote. The Palo Alto option is an in-office role requiring 5 days per week; remote positions require strong self-motivation. If you are based in the US, please note we are unable to hire in the states of Wyoming and Illinois at this time. We are unable to provide visa sponsorship. Team members are expected to work from 9:00 am to 5:30 pm PST for the first two weeks of training, and 9:00 am to 5:30 pm in their own time zone thereafter. For those working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.

Compensation
$45/hour – $100/hour. The posted pay range is intended for U.S.-based candidates and depends on factors including relevant experience, skills, education, geographic location, and qualifications. For international candidates, our recruiting team can provide an estimated pay range for your location.

Benefits
Hourly pay is just one part of our total rewards package at xAI. Specific benefits vary by country; depending on your country of residence, you may have access to medical benefits. We do not offer benefits for part-time roles.

xAI is an equal opportunity employer. For details on data processing, view our Recruitment Privacy Notice.

Sales Manager, Public Sector

Scale AI
$201,600 – $241,920
United States
Full-time
Remote: No
About the role We’re hiring an AI Architect to sit at the intersection of frontier AI research, product, and go-to-market. You’ll partner closely with ML teams in high-stakes meetings, scope and pitch solutions to top AI labs, and translate research needs (post-training, evals, alignment) into clear product roadmaps and measurable outcomes. You’ll drive end-to-end delivery—partnering with AI research teams and core customers to scope, pilot, and iterate on frontier model improvements—while coordinating with engineering, ops, and finance to translate cutting-edge research into deployable, high-impact solutions. What you’ll do Translate research → product: work with client side researchers on post-training, evals, safety/alignment and build the primitives, data, and tooling they need. Partner deeply with core customers and frontier labs: work hands-on with leading AI teams and frontier research labs to tackle hard, open-ended technical problems related to frontier model improvement, performance, and deployment. Shape and propose model improvement work: translate customer and research objectives into clear, technically rigorous proposals—scoping post-training, evaluation, and safety work into well-defined statements of work and execution plans. Translate research into production impact: collaborate with customer-side researchers on post-training, evaluations, and alignment, and help design the data, primitives, and tooling required to improve frontier models in practice. Own the end-to-end lifecycle: lead discovery, write crisp PRDs and technical specs, prioritize trade-offs, run experiments, ship initial solutions, and scale successful pilots into durable, repeatable offerings. Lead complex, high-stakes engagements: independently run technical working sessions with senior customer stakeholders; define success metrics; surface risks early; and drive programs to measurable outcomes. 
Partner across Scale: collaborate closely with research (agents, browser/SWE agents), platform, operations, security, and finance to deliver reliable, production-grade results for demanding customers. Build evaluation rigor at the frontier: design and stand up robust evaluation frameworks (e.g., RLVR, benchmarks), close the loop with data quality and feedback, and share learnings that elevate technical execution across accounts. You have Deep technical background in applied AI/ML: 5–10+ years in research, engineering, solutions engineering, or technical product roles working on LLMs or multimodal systems, ideally in high-stakes, customer-facing environments. Hands-on experience with model improvement workflows: demonstrated experience with post-training techniques, evaluation design, benchmarking, and model quality iteration. Ability to work on hard, ambiguous technical problems: proven track record of partnering directly with advanced customers or research teams to scope, reason through, and execute on deep technical challenges involving frontier models. Strong technical fluency: you can read papers, interrogate metrics, write or review complex Python/SQL for analysis, and reason about model-data trade-offs. Executive presence with world-class researchers and enterprise leaders; excellent writing and storytelling. Bias to action: you ship, learn, and iterate. How you’ll work Customer-obsessed: start from real research needs; prototype quickly; validate with data. Cross-functional by default: align research, engineering, ops, and GTM on a single plan; communicate clearly up and down. Field-forward: expect regular customer time and research leads; light travel as needed. What success looks like Clear wins with top labs: pilots that convert to scaled programs with strong eval signals. Reusable alignment & eval building blocks that shorten time-to-value across accounts. Crisp internal docs (PRDs, experiment readouts, exec updates) that drive decisions quickly. 
Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new-hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Directors approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process and confirm whether the hired role will be eligible for an equity grant.

You’ll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.

Please reference the job posting’s subtitle for where this position will be located. For pay transparency purposes, the base salary range for this full-time position in San Francisco, New York, and Seattle is $201,600 – $241,920 USD.

PLEASE NOTE: Our policy requires a 90-day waiting period before reconsidering candidates for the same role. This allows us to ensure a fair and thorough evaluation of all applicants.

About Us:
At Scale, our mission is to develop reliable AI systems for the world’s most important decisions. Our products provide the high-quality data and full-stack technologies that power the world’s leading models, and help enterprises and governments build, deploy, and oversee AI applications that deliver real impact. We work closely with industry leaders like Meta, Cisco, DLA Piper, Mayo Clinic, Time Inc., the Government of Qatar, and U.S. government agencies including the Army and Air Force. We are expanding our team to accelerate the development of AI applications.
We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity, or veteran status.

We are committed to working with and providing reasonable accommodations to applicants with physical and mental disabilities. If you need assistance and/or a reasonable accommodation in the application or recruiting process due to a disability, please contact us at accommodations@scale.com. Please see the United States Department of Labor's Know Your Rights poster for additional information. We comply with the United States Department of Labor's Pay Transparency provision.

PLEASE NOTE: We collect, retain, and use personal data for our professional business purposes, including notifying you of job opportunities that may be of interest and sharing with our affiliates. We limit the personal data we collect to that which we believe is appropriate and necessary to manage applicants’ needs, provide our services, and comply with applicable laws. Any information we collect in connection with your application will be treated in accordance with our internal policies and programs designed to protect personal data. Please see our privacy policy for additional information.

Senior Software Engineer, Pilots

Hayden AI
$200,454 – $260,590
United States
Full-time
Remote: false
About Us
At Hayden AI, we are on a mission to harness the power of computer vision to transform the way transit systems and other government agencies address real-world challenges. From bus lane and bus stop enforcement to transportation optimization technologies and beyond, our innovative mobile perception system empowers our clients to accelerate transit, enhance street safety, and drive toward a sustainable future.

Job Summary
As a Senior Software Engineer on the Pilots team within the Perception organization, you will develop prototypes for forthcoming pilots, aligned with Hayden’s mission and long-term roadmap for business expansion. This team investigates novel use cases, vehicles, and deployment environments, quickly translating early concepts into functional prototypes. You will build end-to-end perception and robotics systems designed to run on real-world hardware and capable of scaling into Hayden’s core product platform.

This is a C++ software engineering generalist position emphasizing robotics and systems expertise. You will work with a high degree of ownership, navigating complexity to design, implement, and harden solutions that balance rapid experimentation with long-term maintainability.
Key Responsibilities:
- Deliver robust, thoroughly tested, and maintainable C++ code for edge and robotics platforms.
- Design, implement, and own prototype perception systems with the potential to transition into production-grade solutions.
- Build and iteratively refine real-time perception pipelines, including detection, tracking, and sensor fusion.
- Adapt, refine, and integrate machine learning (ML) and computer vision (CV) models, including open-source ones, for novel, Hayden-specific applications.
- Drive technical decision-making in ambiguous problem spaces, balancing prototyping speed with production readiness.
- Collaborate closely with the Product team and cross-functional Engineering departments.
- Contribute to shared infrastructure, tooling, and architectural patterns as pilot initiatives mature into foundational products.

Required Qualifications:
- Master's degree in Computer Science, Electrical Engineering, Robotics, or a closely related discipline; a PhD is a plus.
- 5–8 years of relevant experience building and deploying perception systems; experience in automotive or robotics domains is a plus.
- Strong background in at least one of the following domains: robotics, state estimation, computer vision, or applied machine learning.
- Senior-level industry experience delivering complex, production-grade software systems.
- Proficiency in modern C++ and experience with real-time systems.
- Experience building and owning end-to-end systems, not just isolated components.
- Ability to operate effectively in ambiguous, rapidly evolving environments.
- Proven ability to collaborate constructively within a growing engineering organization.

Software Engineer - Sensing, Consumer Products

OpenAI
$325,000 – $325,000
United States
Full-time
Remote: false
About the Team
Consumer Products Research prototypes the future of computing: we explore new modalities, interaction patterns, and system behaviors, then do the engineering required to make those ideas real in rigorous prototypes. The Neosensing team sits at the intersection of sensing, edge algorithms, and systems engineering. We build the end-to-end software that turns new signals into dependable capabilities: collection tooling and protocols, algorithm integration and evaluation hooks, and on-device loops that stay stable under real-world variability. We care deeply about software quality and iteration speed: clean interfaces, debuggability, observability, and performance under tight device constraints.

About the Role
As a Software Engineer on Consumer Products Research, you’ll sit at the boundary between algorithm development and shippable systems. You’ll work closely with algorithm engineers to translate prototypes into clean interfaces, reliable pipelines, and efficient on-device implementations, with strong attention to performance, observability, and real-world failure modes.

This is a software role first: we’re looking for someone who loves writing great code every day, takes pride in engineering craft, and is comfortable going deep enough into the algorithmic details to make the system work end-to-end.

This role is based in San Francisco, CA.
We use a hybrid work model of four days in the office per week and offer relocation assistance to new employees.

In this role, you will:
- Build and ship production software for sensing algorithms, translating algorithm prototypes into reliable end-to-end systems.
- Implement and own key parts of the Python shipping pipeline (integration surfaces, evaluation hooks, and quality/performance guardrails).
- Develop embedded/on-device software in an RTOS environment (e.g., Zephyr) and deploy models to device runtimes and hardware accelerators.
- Optimize real-time on-device perception loops (e.g., detection/tracking-style pipelines) for stability, latency, power, and memory constraints.
- Create data collection and instrumentation tooling to bring up new sensing modalities and accelerate iteration from prototype → dataset → model → device.
- Partner cross-functionally (algorithms, human data, firmware/hardware) to debug, profile, and harden systems against real-world variability.

You might thrive in this role if you:
- Love writing great software and want your work to sit close to novel sensing and edge algorithms.
- Understand algorithm behavior well enough to integrate, debug, and evaluate it, even if you’re not the primary model inventor.
- Have shipped production Python systems and care about clean interfaces, tests, and long-term maintainability.
- Enjoy embedded/on-device work and can debug across hardware, firmware, and higher-level application layers.
- Care about performance engineering and know how to profile and optimize under tight device constraints.
- Take ownership end-to-end and thrive in ambiguous, fast-moving, zero-to-one environments.

Bonus:
- Zephyr (or similar RTOS) experience.
- On-device ML deployment (NPU/GPU/DSP) and accelerator-aware profiling/optimization.
- Background in multimodal sensing, sensor fusion, or on-device perception.
- Experience building data collection systems and human-in-the-loop workflows (protocols, QA, metadata).

About OpenAI
OpenAI is an AI research and deployment company
dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. 
In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Technical Program Manager – Adversarial Model Research

OpenAI
$230,000 – $285,000
United States
Full-time
Remote: false
About the Team
The Human Data team at OpenAI is responsible for identifying and mitigating risks in advanced AI systems by designing evaluations, surfacing vulnerabilities, and collaborating closely with researchers to strengthen model reliability and public trust.

About the Role
As a Technical Program Manager, you will lead initiatives that test the safety and robustness of OpenAI’s models through creative experimentation and structured evaluation. You’ll coordinate efforts across research and engineering teams to transform ambiguous risks into concrete research programs and influence future model development and deployment.

We’re looking for people who are technically savvy, comfortable with ambiguity, and excited about shaping the future of safe AI.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In this role, you will:
- Lead programs that explore unexpected model behaviors and identify failure modes.
- Translate vague or emergent risk signals into clear priorities and actionable research plans.
- Design and run creative evaluations, experiments, and red-teaming campaigns.
- Collaborate with research, product, and deployment teams to integrate findings into model training and deployment cycles.
- Develop repeatable systems for tracking model performance and understanding emerging behavior patterns.

You might thrive in this role if you:
- Have strong experience in technical program management, with excellent organizational and communication skills.
- Are familiar with large language models, prompt engineering, or model evaluation techniques.
- Are comfortable managing fast-paced, high-uncertainty projects and shaping them from the ground up.
- Are creative and resourceful in devising new methods for testing model behavior and performance.
- Can effectively coordinate across technical and non-technical stakeholders to drive alignment and execution.

About OpenAI
OpenAI is an AI research and deployment
company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. 
In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Senior Software Engineer, ML Core

Zoox
$214,000 – $290,000
United States
Full-time
Remote: false
Zoox is on a mission to reimagine transportation and build, from the ground up, autonomous robotaxis that are safe, reliable, clean, and enjoyable for everyone. We are still in the early stages of deploying our robotaxis on public roads, and it is a great time to join Zoox and have a significant impact in executing this mission. The ML Platform team at Zoox plays a crucial role in enabling innovations in ML and AI to make autonomous driving as seamless as possible.

The Opportunity
Would you like to enable ML use cases like autonomous driving, scene understanding, and automated mapping at Zoox? This role works across all ML teams within Zoox (Perception, Behavior ML, Simulation, Data Science, Collision Avoidance) as well as with our Advanced Hardware Engineering group specifying our next generation of autonomous hardware. You will significantly push the boundaries of how ML is practiced within Zoox. We build and operate the base layer of ML tools, deep learning frameworks, inference libraries, and ML infrastructure used by our applied research teams for in- and off-vehicle ML use cases. We coordinate across all of Zoox to make sure that the needs of both the vehicle and ML teams are met. You will play a crucial role in reducing the time it takes from ideation to productionization of cutting-edge AI innovation. This team has a lot of growth opportunities as we expand our robotaxi deployments and venture into new ML domains.
If you want to learn more about our stack behind autonomous driving, please look here.

In this role, you will:
- Design, develop, and deploy custom and off-the-shelf ML libraries and tooling to improve ML development, training, deployment, and on-vehicle model inference latency.
- Build tooling and establish development best practices to manage and upgrade foundational libraries (e.g., the Nvidia driver, PyTorch, TensorRT), improve the ML developer experience, and expedite debugging efforts.
- Collaborate closely with cross-functional teams, including applied ML research, high-performance compute, advanced hardware engineering, and data science, to define requirements and align on architectural decisions.

Qualifications
- 6+ years of experience
- Proficient in Python or C++
- Familiarity with training frameworks and libraries such as PyTorch, Lightning, Hugging Face, Ray, or JAX
- Familiarity with GPU-accelerated inference on Nvidia hardware, such as CUDA, TensorRT, and/or XLA

Bonus Qualifications
- Familiarity with Bazel and/or the C++ linker

$214,000 – $290,000 a year

There are three major components to compensation for this position: salary, Amazon Restricted Stock Units (RSUs), and Zoox Stock Appreciation Rights. A sign-on bonus may be offered as part of the compensation package. The listed range applies only to the base salary. Compensation will vary based on geographic location and level. Leveling, as well as positioning within a level, is determined by a range of factors, including, but not limited to, a candidate's relevant years of experience, domain knowledge, and interview performance. The salary range listed in this posting is representative of the range of levels Zoox is considering for this position. Zoox also offers a comprehensive package of benefits, including paid time off (e.g.
sick leave, vacation, bereavement), unpaid time off, Zoox Stock Appreciation Rights, Amazon RSUs, health insurance, long-term care insurance, long-term and short-term disability insurance, and life insurance.

About Zoox
Zoox is developing the first ground-up, fully autonomous vehicle fleet and the supporting ecosystem required to bring this technology to market. Sitting at the intersection of robotics, machine learning, and design, Zoox aims to provide the next generation of mobility-as-a-service in urban environments. We’re looking for top talent that shares our passion and wants to be part of a fast-moving and highly execution-oriented team. Follow us on LinkedIn.

Accommodations
If you need an accommodation to participate in the application or interview process, please reach out to accommodations@zoox.com or your assigned recruiter.

A Final Note:
You do not need to match every listed expectation to apply for this position. Here at Zoox, we know that diverse perspectives foster the innovation we need to be successful, and we are committed to building a team that encompasses a variety of backgrounds, experiences, and skills.

Engineering Manager - Engine and Platform

Arcade.dev
$200,000 – $275,000
United States
Full-time
Remote: false
The Revolution Needs You
We're building the actions runtime that allows AI agents to safely take real-world actions at enterprise scale. As the Engineering Manager for the Engine and Platform, you’ll lead the team responsible for securing and running all the tools LLMs will need, at scale.

Why This Is The Opportunity of a Lifetime
$166B has been poured into AI reasoning. The models work. But 94% of enterprise agent projects die before production. Why? Because authentication and integration, which were separate problems in the cloud era (Okta + MuleSoft), collapsed into one in the agent era. Every agent action requires both auth and integration in the same moment. We're building that unified layer: the actions runtime between agents and every system they need to touch. Auth, tools, and governance in one place.

Founder-Market Fit: Our CEO previously founded Stormpath (acquired by Okta), where he created the first Authentication API for developers. He's done this before, and this time the market is 10x bigger. Our CTO led the vector database team at Redis, shipped 100+ LLM applications, and is a contributor to LangChain and LlamaIndex. We're MCP contributors who wrote SEP-1036 (URL Elicitation), now the auth standard for the Model Context Protocol.

Dream Team: Authentication, integrations, distributed systems, and AI experts from Okta, Redis, Microsoft, Splunk, Ngrok, Google, Airbyte, Disney, and HPE. Four second-time founders on staff. We've built and scaled developer platforms before.

Real Traction: $1M ARR in our first year, with Fortune 100 customers closed in under 6 months.

Perfect Timing: We're at the inflection point of AI adoption. The biggest problem isn't better models; it's securely connecting AI to real-world actions. That's us.

Massive Market: Identity ($32B in exits) + Integration ($24B in exits) were separate in cloud. We're building them together for agents. One layer. One company. Bigger opportunity.

Backed By The Best: Our investors backed Databricks, Clickhouse, MongoDB, Perplexity, Cohere, ScaleAI, Confluent, Elastic, and Firebase. They see what we see: this is going to be huge.

The Challenge
Team Charter: Build, maintain, and deploy the runtime for customers to run, manage, secure, and understand AI tools, enabling advanced agentic use cases.

Arcade’s engineering team is growing! You will be one of our first two engineering managers, responsible for scaling the team that owns the development of our platform and services. This is a team of distributed-systems engineers and authorization/identity experts who build our platform and features like MCP gateways, tenant and project roles and permissions, and the platform-as-a-service for tool executions that powers arcade-deploy. Given our upmarket customer base, this team also invests in security and compliance, as well as the integrations our self-hosted enterprise customers need.
Building the one-stop platform for the enterprise to run and manage all of their tools safely and easily is the mission.

We expect that you’ll spend most of your time leveling up the team, ensuring the team is unblocked, and aligning the team’s work with our product organization, but you’ll still be excited to get your hands dirty in the code when possible. While this is primarily a people, product, and operations leadership role, we expect you to stay technically engaged through reviews, critical-path contributions, and occasional hands-on coding when it meaningfully unblocks the team. We have a team of A-players today, and we need you to scale up our capacity without losing our culture, our heart, or our velocity.

This role involves navigating ambiguity, evolving standards in the AI tools ecosystem (e.g. the MCP specification, which we contribute to), and the challenges of scaling fast without sacrificing quality at an early startup. This team works primarily in Go, TypeScript, and Python, and touches our Python tools/MCP-servers codebase from time to time. This role reports to the Head of Engineering.

The ideal candidate for this role is already an expert in enterprise-grade platforms and/or security products. You’ve previously built self-hosted products for enterprises and managed large-scale SaaS deployments. You are invested in the AI revolution and have strong opinions on what it means to roll out safe AI products at scale.

What You'll Do
- Be ultimately responsible for the deliverables, stability, and uptime of your team, and empowered to ramp up these goals as the team grows and matures.
- Ensure a consistent product vision and architecture for the team, and help shape the team and company roadmap: you will be the primary owner of technical direction, prioritization, and execution for the team’s work, in close partnership with Product and the founders.
- Hire and mentor talented engineers and help shape their technical and career growth.
You’ll be managing a team of six very senior engineers, many of whom have been startup CTOs themselves, with the goal of growing to a team of eight by the end of the year.
- Define and deliver the next most important platform features for our customers.
- Ship high-impact features at scale while ensuring reliability, security, and enterprise readiness.
- Build leverage into the system: transform week-long tasks into minutes through automation and agents.

Required Skills
- 8+ years of software engineering experience, including 2+ years in engineering leadership.
- Proven success managing and scaling high-performing teams.
- Deep experience deploying and monitoring software both as a SaaS platform and on-prem.
- Strong architectural instincts and the ability to align technical direction with business outcomes.
- Passion for building frameworks that empower other developers to win.
- Excellent communication skills and cross-functional leadership.
- Comfort using LLMs/AI throughout all parts of the software development lifecycle.
- An insatiable desire to ship and continuously improve.

Bonus Points
- Open-source contributions
- You’ve been at an early-stage startup before and loved it
- Go and/or TypeScript expertise

Join The Movement
We're not just building a product; we're leading a movement to transform AI from just chatbots to agents that can take actions against real systems. This is your chance to be at the forefront of that revolution.

If you want to look back in 5 years and say, "I helped build that," then we want to talk to you. Ready to make AI actually useful? Apply now.

Computer Vision Engineer (VIO)

Harmattan AI
Switzerland
Full-time
Remote: false
About Us
Harmattan AI is a next-generation defense prime building autonomous and scalable defense systems. Following the close of a $200M Series B, valuing the company at $1.4 billion, we are expanding our teams and capabilities to deliver mission-critical systems to allied forces.

Our work is guided by clear values: building technologies with real-world impact, pursuing excellence in everything we do, setting ambitious goals, and taking on the hardest technical challenges. We operate in a demanding environment where rigor, ownership, and execution are expected.

About the Role
We are looking for a Computer Vision Engineer to contribute to the development of the front end of our visual-inertial odometry (VIO) algorithmic stack.

Responsibilities
- VIO front-end algorithm development:
  - Matching between frames and stereo pairs
  - Calibration of camera intrinsic and extrinsic parameters
  - Detection of obstruction
- Implementation and optimization of the algorithmic stack for our embedded platforms
- Testing, validation, and monitoring: testing algorithms in simulation and real-world environments; developing inspection and monitoring tools.
- Cross-team collaboration: work closely with system engineers, optical engineers, and software engineers. Communicate findings effectively to stakeholders.

Candidate Requirements
- Educational background: Master’s degree in Robotics, Physics, Computer Science, or a related field; a PhD is a plus.
- Technical expertise: strong mathematical foundations and coding skills in both Python and C++.
- Hands-on VIO project experience. Experience with VIO on industrial products is a huge plus.
- Passion for computer vision and robotics.
- Strong communication and teamwork: ability to collaborate effectively with diverse teams.
- Commitment: 100% dedication to Harmattan’s mission, vision, and ambitious growth plans, ready to go the extra mile to ensure operational excellence.

We look forward to hearing how you can help shape the future of autonomous defense systems at Harmattan AI.

Software Engineer II (India - Bangalore)

Giga
₹10,000,000 – ₹11,000,000
India
Full-time
Remote: false
Location: Bangalore (3 days in-office)
Experience: 2–4 years

About Giga
Giga has recently raised a $61M Series A and has several paying customers, including DoorDash. We’re building the next generation of customer experience: real-time AI agents that can understand emotion, resolve issues instantly, and scale across the world’s largest enterprises.

It’s an exciting inflection point for the company. While we have been successful, we have larger ambitions. Our goal is to become the go-to AI platform for all enterprise automation, powered by our voice superintelligence. To achieve this, we need more great engineers.

The work affects millions of people every day, and our engineers have autonomy and make a true impact. This opportunity is unique because we have brilliant founders, have found commercial success, and see a clear path to becoming a generational company. Some further info about us:
- Voice AI startup Giga raises $61M Series A
- DoorDash and Giga Partnership

Giga builds AI agents trusted by the largest B2C companies in the world. Industry leaders like DoorDash trust Giga with their most complex support and operations workflows across voice, chat, and email. If being a part of this resonates with you, please apply!

Engineers at Giga work on problems like:
- Building agents with almost no hallucination rates
- A voice experience that's better than talking to humans
- Creating self-learning agents that optimize metrics

Who You Are / Must-Haves:
- Exceptional engineer
- Backend: strong experience in Python (Django experience is a plus)
- Cloud: proficient with AWS or Google Cloud
- Scalability: familiarity with Kubernetes and Docker is a plus
- Entrepreneurial: preference for candidates with experience at successful startups; former founders are especially valued
- Seasoned: 2–4 years of hands-on engineering experience
- Committed: willing to be present in-office for all work days. No remote.
Perks & Benefits
Competitive total compensation
Full health, dental, and vision coverage
On-demand snacks and coffee

Interview Process
Screening call with Recruiting Lead (20 min)
Intro call with Founding Engineer (15 min)
Live coding round with Founding Engineer (1 hr)
In-office interview (full day)

Senior Product Manager, Data & Retrieval

Harvey
$178,500 – $241,500
United States
Full-time
Remote
false
Why Harvey
At Harvey, we’re transforming how legal and professional services operate — not incrementally, but end-to-end. By combining frontier agentic AI, an enterprise-grade platform, and deep domain expertise, we’re reshaping how critical knowledge work gets done for decades to come.

This is a rare chance to help build a generational company at a true inflection point. With 1,000+ customers in 58+ countries, strong product-market fit, and world-class investor support, we’re scaling fast and defining a new category in real time. The work is ambitious, the bar is high, and the opportunity for growth — personal, professional, and financial — is unmatched.

Our team is sharp, motivated, and deeply committed to the mission. We move fast, operate with intensity, and take real ownership of the problems we tackle — from early thinking to long-term outcomes. We stay close to our customers — from leadership to engineers — and work together to solve real problems with urgency and care. If you thrive in ambiguity, push for excellence, and want to help shape the future of work alongside others who raise the bar, we invite you to build with us. At Harvey, the future of professional services is being written today — and we’re just getting started.

Role Overview
Harvey is building the AI platform for the world’s top legal and professional services teams. Our users rely on fast, accurate access to external legal data to perform research that underpins their most important work. As we scale, our Data & Knowledge organization sits at the center of this mission — turning raw, fragmented information into intelligent systems that power research and reasoning at global scale.

Our Data Team is responsible for ingesting, structuring, understanding, and retrieving millions of documents across jurisdictions, formats, and domains. Whether public or private, offline or online, our mission is to organize the world’s legal information and make it accessible, reliable, and useful.

We’re looking for a Product Manager with deep search, retrieval, and data platform expertise to lead the next generation of Harvey’s data engine as we 100× our capabilities. You will shape the strategy, roadmap, and architecture behind the systems that make advanced reasoning possible. The team owns end-to-end RAG (retrieval-augmented generation) pipelines across domains such as case law, legislation, tax code, and IP law across 50+ jurisdictions. As generation quality continues improving, retrieval quality has become the new frontier — and one of Harvey’s highest priorities. If you’re passionate about large-scale data problems, complex information retrieval, and building the backbone of cutting-edge AI systems, we’d love to meet you.

What You'll Do
Drive the roadmap and strategy for Harvey’s “Data Factory”, ensuring we scale our data 100× through new platforms that build the ‘legal index’ of the world.
Work with internal operations and external data providers to methodically expand coverage, accelerate execution, and improve dataset quality.
Own and evolve Harvey’s end-to-end data architecture — from ingestion and transformation to storage, indexing, and retrieval — ensuring performance, reliability, and scalability for LLM-powered products.
Partner with Applied AI engineers to build and optimize retrieval systems, embeddings, search models, and evaluation frameworks.
Architect and oversee large-scale ingestion pipelines that aggregate, normalize, and continuously update millions of heterogeneous legal documents across global jurisdictions.
Collaborate cross-functionally with Product Engineering, Applied AI, Research, and Platform teams to deliver high-quality production systems that support reasoning, summarization, and legal research workflows.

What You Have
5+ years of experience building or managing search, retrieval, recommendation, or data platforms at scale.
Experience working with complex, heterogeneous, or domain-specific datasets with structured and unstructured data.
Understanding of modern retrieval methods, including hybrid search (lexical + vector), dense retrieval, re-ranking, embeddings, chunking strategies, and index optimization.
Hands-on experience with LLMs or RAG frameworks (evaluation, grounding, hybrid pipelines, query rewriting, LLM-as-a-judge, retrieval metrics).
Ability to partner with engineers on technical architecture, with enough depth to challenge assumptions, propose solutions, and influence design.
A product mindset for search — balancing user needs, domain complexity, and system constraints to propose high-leverage improvements.

Compensation Range
$178,500 – $241,500 USD

Please find our CA applicant privacy notice here.

Harvey is an equal opportunity employer and does not discriminate on the basis of race, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition, or any other basis protected by law. We are committed to providing reasonable accommodations to applicants with disabilities; requests can be made by emailing accommodations@harvey.ai
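The hybrid retrieval this role centers on — blending a lexical score with a dense vector score, then sorting by the combined score — can be sketched minimally. Everything below is an illustrative assumption, not Harvey's implementation: `lexical_score` stands in for BM25, `embed` stands in for a dense encoder, and the corpus and `alpha` weight are invented for the example.

```python
import math
from collections import Counter

# Toy legal corpus (illustrative only).
docs = [
    "statute of limitations for breach of contract claims",
    "trademark registration procedure and ip law basics",
    "case law on employment contract termination",
]

def lexical_score(query, doc):
    """Term-overlap score standing in for BM25 (lexical side)."""
    q_terms, d_terms = Counter(query.split()), Counter(doc.split())
    return sum(min(q_terms[t], d_terms[t]) for t in q_terms)

def embed(text):
    """Toy bag-of-characters embedding standing in for a dense encoder."""
    vec = Counter(text.replace(" ", ""))
    norm = math.sqrt(sum(v * v for v in vec.values()))
    return {ch: v / norm for ch, v in vec.items()}

def dense_score(query, doc):
    """Cosine similarity between the toy embeddings (vector side)."""
    q, d = embed(query), embed(doc)
    return sum(q[ch] * d.get(ch, 0.0) for ch in q)

def hybrid_rank(query, docs, alpha=0.5):
    """Blend lexical and dense scores, then sort by the combined score."""
    scored = [
        (alpha * lexical_score(query, d) + (1 - alpha) * dense_score(query, d), d)
        for d in docs
    ]
    return [d for _, d in sorted(scored, reverse=True)]

ranked = hybrid_rank("contract breach claims", docs)
print(ranked[0])
```

Production systems typically replace the toy scorers with a real inverted index and an embedding model, and often fuse the two rankings with reciprocal rank fusion rather than a weighted sum, before a learned re-ranker makes the final ordering.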

Software Engineer, Platform Systems

OpenAI
United Kingdom
Full-time
Remote
false
About the Team
The Platform Systems team at OpenAI operates at the intersection of cutting-edge AI and large-scale distributed systems. We build the engineering and research infrastructure required to train OpenAI’s flagship models on some of the world’s largest, custom-built supercomputers.

Our team develops core model training software and works deep in the stack — spanning collective communication, compute efficiency, parallelism strategies, fault tolerance, failure detection, and observability. The systems we build are foundational to OpenAI’s research velocity, enabling reliable, efficient training at frontier scale. We collaborate closely with researchers across the organization, continuously incorporating learnings from across OpenAI into the evolution of our training platform.

About the Role
As a Software Engineer, Platform Systems, you will design and build distributed systems that provide visibility into large-scale training workloads and help operate them reliably at scale. You’ll work on failure detection, tracing, and observability systems that identify slow or faulty nodes, surface performance bottlenecks, and help engineers understand and optimize massive distributed training jobs. This infrastructure is critical to operating OpenAI’s training stack and is actively evolving to support new use cases and increasingly complex workloads. This role sits at the core of our training infrastructure, blending systems engineering, performance analysis, and large-scale debugging.

In This Role, You Will
Design and build distributed failure detection, tracing, and profiling systems for large-scale AI training jobs
Develop tooling to identify slow, faulty, or misbehaving nodes and provide actionable visibility into system behavior
Improve observability, reliability, and performance across OpenAI’s training platform
Debug and resolve issues in complex, high-throughput distributed systems
Collaborate with systems, infrastructure, and research teams to evolve platform capabilities
Extend and adapt failure detection and tracing systems to support new training paradigms and workloads

You Might Thrive in This Role If You
Care deeply about performance, stability, and observability in distributed systems
Enjoy finding and fixing issues in large-scale systems and automating operational workflows
Have experience writing low-level software where system details matter
Understand hardware, operating systems, networking, concurrency, and distributed systems
Have a background in high-performance computing or low-level systems engineering
Are excited to work on critical infrastructure that powers frontier AI research

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared.
Join us in shaping the future of technology.

Software Engineer, Platform Systems

OpenAI
$310,000 – $460,000
United States
Full-time
Remote
false
About the Team
The Platform Systems team at OpenAI operates at the intersection of cutting-edge AI and large-scale distributed systems. We build the engineering and research infrastructure required to train OpenAI’s flagship models on some of the world’s largest, custom-built supercomputers.

Our team develops core model training software and works deep in the stack — spanning collective communication, compute efficiency, parallelism strategies, fault tolerance, failure detection, and observability. The systems we build are foundational to OpenAI’s research velocity, enabling reliable, efficient training at frontier scale. We collaborate closely with researchers across the organization, continuously incorporating learnings from across OpenAI into the evolution of our training platform.

About the Role
As a Software Engineer, Platform Systems, you will design and build distributed systems that provide visibility into large-scale training workloads and help operate them reliably at scale. You’ll work on failure detection, tracing, and observability systems that identify slow or faulty nodes, surface performance bottlenecks, and help engineers understand and optimize massive distributed training jobs. This infrastructure is critical to operating OpenAI’s training stack and is actively evolving to support new use cases and increasingly complex workloads. This role sits at the core of our training infrastructure, blending systems engineering, performance analysis, and large-scale debugging.

In This Role, You Will
Design and build distributed failure detection, tracing, and profiling systems for large-scale AI training jobs
Develop tooling to identify slow, faulty, or misbehaving nodes and provide actionable visibility into system behavior
Improve observability, reliability, and performance across OpenAI’s training platform
Debug and resolve issues in complex, high-throughput distributed systems
Collaborate with systems, infrastructure, and research teams to evolve platform capabilities
Extend and adapt failure detection and tracing systems to support new training paradigms and workloads

You Might Thrive in This Role If You
Care deeply about performance, stability, and observability in distributed systems
Enjoy finding and fixing issues in large-scale systems and automating operational workflows
Have experience writing low-level software where system details matter
Understand hardware, operating systems, networking, concurrency, and distributed systems
Have a background in high-performance computing or low-level systems engineering
Are excited to work on critical infrastructure that powers frontier AI research

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared.
Join us in shaping the future of technology.

Founding Platform Engineer

Netic
United States
Full-time
Remote
false
Netic is the AI revenue engine for the essential services businesses that are the backbone of the American economy. With $43M in funding from Founders Fund, Greylock, Hanabi, and Dylan Field, who led our Series B, we’ve helped our customers book hundreds of thousands of jobs across services industries in North America. There are now companies operating entirely AI-first on Netic. You’ll join a team of relentless builders from Scale, Databricks, HRT, Meta, MIT, Stanford, and Harvard in bringing frontier AI to the physical economy, where the problems are hard, the data is complex, and the impact is immediate and tangible.

As a Founding Platform Engineer, you will own the semantic layer that powers our system of record and enables compounding products across the company. What makes this role uniquely rare at Netic is the scope and shape of the platform we are building. From day one, our system of record has to work across industries, geographies, and regulatory environments. That requires a platform that is highly customizable to support different workflows and business rules, while still being tractable enough that engineers can understand, debug, and operate it in production.

This platform directly determines how dynamic workflows are built, packaged, and delivered to customers. The abstractions you create will influence how quickly we can launch new products, expand into new verticals, and respond to real-world edge cases without sacrificing reliability. If you are excited to design foundational systems where flexibility, correctness, and developer ergonomics all matter, this is an opportunity to build a durable, company-defining platform.

What You’ll Do
Build the Semantic Layer: Design and own the semantic layer that powers our system-of-record flywheel. This layer is the foundation for how data, state, and meaning flow through the company, enabling compounding AI products across teams.
Create Internal Platforms with Leverage: Build primitives, abstractions, and APIs that product teams use as building blocks. Your success is measured by how easily other engineers can ship powerful AI-driven features on top of your work.
Treat Product Teams as Customers: Partner closely with internal product and engineering teams to understand their needs, eliminate friction, and design systems that are intuitive, well-documented, and hard to misuse.
Architect for Multiple Data & Request Topologies: Design systems that span data warehouses, OLTP databases, streaming systems, and vector stores. Make intentional tradeoffs based on latency, throughput, consistency, and access patterns.
Set Platform Direction: Work with leadership to define the long-term platform architecture, including build-vs-buy decisions, evolution of the semantic layer, and how the system scales as product surface area grows.

What You’ll Bring
Platform & Systems Experience: 5+ years building and operating distributed systems, ideally with experience owning a platform or core abstraction used by multiple teams.
Strong Data Systems Background: Deep understanding of data warehouses, transactional databases, and other storage systems, including how to model data for different access patterns and workloads.
Semantic & Abstraction Thinking: Ability to design schemas, contracts, and semantic models that remain stable over time while supporting rapid product iteration.
Operational Rigor: Experience with observability, migrations, backfills, incident response, and running high-uptime systems that are hard to unwind once in production.
Ownership Mentality: You take responsibility for long-term outcomes, not just shipping code. You think in terms of flywheels, leverage, and second-order effects.

We believe fulfillment comes from producing your best work with the smartest people together in one room. All roles are in person, in SF (our office is in Jackson Square).
What brings us together is our commitment to:
Live to build
Run through walls and win
Obsess over customers in each line of code
Lose sleep over the "almost perfect"
Show an internal locus of control
Prioritize finesse: refinement of first-principles thinking, execution, and craftsmanship

We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability, or any other legally protected status.

Member of Technical Staff - Post Training, Applied

Liquid AI
United States
Full-time
Remote
false
About Liquid AI
Spun out of MIT CSAIL, we build AI systems that run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.

The Opportunity
This is a rare chance to sit at the intersection of frontier foundation models and real-world deployment. You’ll own applied post-training work end-to-end for some of the world’s largest enterprises, while still contributing directly to Liquid’s core model development. Unlike most roles that force a trade-off between customer impact and foundational work, this role gives you both: deep ownership over how models are adapted, evaluated, and shipped, and a direct line into the evolution of Liquid’s post-training stack. If you care about data quality, evaluation, and making models actually work in production, this is a chance to shape how applied AI is done at a foundation-model company.

What We're Looking For
We need someone who:
Takes ownership: owns post-training projects end-to-end, from customer requirements through delivery and evaluation.
Thinks end-to-end: can reason across data generation, training, alignment, and evaluation as a single system.
Is pragmatic: optimises for model quality and customer outcomes over publications or theory.
Communicates clearly: can translate between customer needs and internal technical teams, and push back when needed.
The Work
Act as the technical owner for enterprise customer post-training engagements.
Translate customer requirements into concrete post-training specifications and workflows.
Design and execute data generation, filtering, and quality assessment processes.
Run supervised fine-tuning, preference alignment, and reinforcement learning workflows.
Design task-specific evaluations, interpret results, and feed learnings back into core post-training pipelines.

Desired Experience
Must-have:
Hands-on experience with data generation and evaluation for LLM post-training.
Experience training or fine-tuning models using SFT, preference alignment, and/or RL.
Strong intuition for data quality and evaluation design.
Familiarity with alignment or RL techniques beyond basic supervised fine-tuning.
Nice-to-have:
Experience contributing to shared or general-purpose post-training infrastructure.
Prior exposure to customer-facing or applied ML delivery environments.

What Success Looks Like (Year One)
Independently owns and delivers enterprise post-training projects with minimal oversight.
Is trusted by customers as the technical owner, demonstrating strong judgment and delivery quality.
Has made durable contributions to Liquid’s general-purpose post-training pipelines by feeding applied learnings back into baseline model development.

What We Offer
Compensation: competitive base salary with equity in a unicorn-stage company
Health: we pay 100% of medical, dental, and vision premiums for employees and dependents
Financial: 401(k) matching up to 4% of base pay
Time Off: unlimited PTO plus company-wide Refill Days throughout the year

Founding Engineering Lead

AIFund
$180,000 – $220,000
United States
Full-time
Remote
false
About Meeno
Backed by Sequoia Capital, AI Fund, and NEA, and founded by the former CEO of Tinder, Meeno helps people build real-world social courage through voice-based practice: 1-minute scenes, instant scoring, and personalized feedback. Not a dating app. Not a companion. Not therapy. IRL reps to help Gen Z meet people IRL.

What You'll Do
Own the technical foundation of Meeno end-to-end: web + mobile + backend + data + experimentation.
Co-design product vision in close partnership with Meeno’s team:
Renate Nyborg, founder/CEO (former Tinder CEO, ex-Apple/Headspace)
Josh Knox, product lead (founder of Outright, brand and content expert)
Andrew Ng, Chairman (AI Fund, founder of Coursera/Google Brain)
Build core AI product primitives:
Voice capture/playback
Low-latency interactions
Scene framework (content + branching + scoring hooks)
Feedback loops and user progression
Personalization
Architect for speed and iteration (weekly experiments, not quarterly releases).
Set the engineering bar: quality, reliability, security/privacy, and shipping culture.
Hire and mentor engineers as we scale: focus on quality over quantity, leveraging AI and talent to scale while staying lean.

What We're Looking For
Strong builder who has shipped consumer products.
Comfortable with ambiguity and moving fast without breaking the soul of the product.
Product taste: you care about “feel,” not just functionality.
Excited by voice, interaction design, and behavior change.
Can operate like a founder: make tradeoffs, set direction, ship.

Green flags (not requirements)
Hyped about social connection, empathy for Gen Z loneliness.
Early adopter of AI-driven development.
Hungry for rapid career progression and societal impact.
Experience with audio/voice pipelines, realtime systems, mobile performance.
History of building unconventional projects (creative tech, experiments, indie builds).
Interest in human behavior, social dynamics, and product psychology.

You won't pass the vibe check if...
You are not passionate about our mission.
You only want to manage, not build.
You need perfect specs to move.
You optimize for process over product feel.
You are not fun.

What success looks like (first 6 months)
Meeno has a stable foundation, and we can run rapid experiments.
Our product feels fast, human, and culturally sharp.
We have a pipeline of world-class engineering talent.
We are well-positioned to raise a Series A with clear signal on north-star metrics.
We are having fun.

$180,000 – $220,000 a year

New Grad | Software Engineer, AI

Loop
$150,000 – $150,000
United States
Full-time
Remote
false
About Loop
Loop is the data platform for the global supply chain. Logistics runs on messy, unstructured data — trapped in PDFs, emails, and legacy systems. We use AI to structure this chaos, creating a "source of truth" that automates payments and audits for the Fortune 100. We are building the financial nervous system for a $100 trillion physical economy. Our technology ensures freight moves efficiently and carriers get paid instantly. Backed by Founders Fund, Index Ventures, and 8VC, we are scaling rapidly. We are looking for engineers ready to deploy production AI that powers the physical economy.

About the New Grad Program
Most AI stays in the browser. Ours moves atoms. You aren't just building features; you are building the autonomous brain for the Fortune 100’s global supply chain. This program is designed to compress 3 years of learning into 1 by throwing you into the deep end of production AI systems on Day 1. Instead of sandboxed projects, you get to solve real problems and impact customers directly. This program demands intense investment, but by the end, you will perform as a strong entry-level engineer, jumpstarting your career.

The Schedule:
Week 1 (Onboarding): Deep dive into tools and domain. You will ship code to production on Day 1 and fully grasp our dev loop by Friday.
Months 1-3 (Velocity): You will deliver 3 entry-level projects with increasing ambiguity. By the end of Month 3, you are expected to operate as a fully independent engineer.
Months 4-9 (The Rotation): You will rotate onto a different high-impact team to expand your surface area. Tracks include:
Platform: AI infrastructure and engineering systems.
Core Product: audit, billing, and payments logic.
Commercial: revenue activation and forward-deployed engineering.
Special Projects: partnering directly with the CEO/CTO and other execs.
Month 9+ (Graduation): You should demonstrate mid-level engineer performance and will be considered for immediate promotion.
About You
We are not looking for just "straight-A students." We are looking for obsessive builders with spikes. If the following resonates, you belong here:
You build on nights and weekends. You have a repo, a side hustle, or a project you built just because you were curious. You don't wait for class assignments to write code.
You hate waiting for approval. You prefer to ship, break, fix, and apologize rather than wait for a committee decision.
You are bored by easy tasks. You want problems that are more than 1 prompt away from a solution.
You grind to win. You are the type to skip social events just to climb the esports ladder, ace a math competition, or debug a complex issue — purely for the joy of mastery.

Responsibilities
Ship critical infrastructure. Manage real-world logistics and financial data for the largest enterprises in the world.
Own the why. Build deep context through customer calls and understand Loop’s value to our customers. You push back on requirements if you see a better, faster way to solve the customer’s problem.
Full-stack proficiency. Work across system boundaries, from frontend UX to LLM agents, database schemas, and event infrastructure. Leverage AI tools to handle the 90% boilerplate so you can focus on the highest-leverage 10%: quality, architecture, product taste.
Raise the velocity bar. You will constantly optimize our dev loops, refactor legacy patterns, automate your own workflows, and fix broken processes.

Qualifications
Graduating with a BS or higher in a STEM field; available to start full-time in 2026.
Working in person in the SF, Chicago, or NYC office 4 days a week.
Proficiency with a modern tech stack. You can deliver a modern web app in hours, not days.
Unblocking yourself. You thrive in ambiguity. Despite the chaos, you deliver high-quality products and business impact.
AI literate. You have strong intuition on how LLMs work: where they excel and where they generate slop. You live and breathe AI-native tools (Cursor, Codex, Claude Code, etc.).

Compensation
$150,000 annual base pay

Benefits & Perks
Fully paid health insurance. 401k with matching. Unlimited PTO. Generous professional development budget. Commuter benefits. Wellness benefits. Phone plan stipend.

2026 New Grad | Software Engineer, Full-Stack

Loop
$150,000 – $150,000
United States
Full-time
Remote
false
About Loop
Loop is the data platform for the global supply chain. Logistics runs on messy, unstructured data — trapped in PDFs, emails, and legacy systems. We use AI to structure this chaos, creating a "source of truth" that automates payments and audits for the Fortune 100. We are building the financial nervous system for a $100 trillion physical economy. Our technology ensures freight moves efficiently and carriers get paid instantly. Backed by Founders Fund, Index Ventures, and 8VC, we are scaling rapidly. We are looking for engineers ready to deploy production AI that powers the physical economy.

About the New Grad Program
Most AI stays in the browser. Ours moves atoms. You aren't just building features; you are building the autonomous brain for the Fortune 100’s global supply chain. This program is designed to compress 3 years of learning into 1 by throwing you into the deep end of production AI systems on Day 1. Instead of sandboxed projects, you get to solve real problems and impact customers directly. This program demands intense investment, but by the end, you will perform as a strong entry-level engineer, jumpstarting your career.

The Schedule:
Week 1 (Onboarding): Deep dive into tools and domain. You will ship code to production on Day 1 and fully grasp our dev loop by Friday.
Months 1-3 (Velocity): You will deliver 3 entry-level projects with increasing ambiguity. By the end of Month 3, you are expected to operate as a fully independent engineer.
Months 4-9 (The Rotation): You will rotate onto a different high-impact team to expand your surface area. Tracks include:
Platform: AI infrastructure and engineering systems.
Core Product: audit, billing, and payments logic.
Commercial: revenue activation and forward-deployed engineering.
Special Projects: partnering directly with the CEO/CTO and other execs.
Month 9+ (Graduation): You should demonstrate mid-level engineer performance and will be considered for immediate promotion.
About You

We are not looking for just "straight-A students." We are looking for obsessive builders with spikes. If the following resonates, you belong here:

You build on nights and weekends. You have a repo, a side hustle, or a project you built just because you were curious. You don't wait for class assignments to write code.

You hate waiting for approval. You prefer to ship, break, fix, and apologize rather than wait for a committee decision.

You are bored by easy tasks. You want problems that are more than 1 prompt away from a solution.

You grind to win. You are the type to skip social events just to climb the Esports ladder, ace a math competition, or debug a complex issue, purely for the joy of mastery.

Responsibilities

Ship critical infrastructure. Manage real-world logistics and financial data for the largest enterprises in the world.

Own the why. Build deep context through customer calls and understand Loop's value to our customers. Push back on requirements if you see a better, faster way to solve the customer's problem.

Full-stack proficiency. Work across system boundaries, from frontend UX to LLM agents, database schemas, and event infrastructure. Leverage AI tools to handle the 90% that is boilerplate so you can focus on the highest-leverage 10%: quality, architecture, and product taste.

Raise the velocity bar. You will constantly optimize our dev loops, refactor legacy patterns, automate your own workflows, and fix broken processes.

Qualifications

Graduating with a BS or higher in a STEM field; available to start full-time in 2026.

Able to work in person in the SF, Chicago, or NYC office 4 days a week.

Proficiency with a modern tech stack. You can deliver a modern web app in hours, not days.

Unblocking yourself. You thrive in ambiguity. Despite the chaos, you deliver high-quality products and business impact.

AI literate. You have strong intuition for how LLMs work: where they excel and where they generate slop. You live and breathe AI-native tools (Cursor, Codex, Claude Code, etc.).

Compensation

$150,000 annual base pay

Benefits & Perks

Fully paid health insurance. 401k with matching. Unlimited PTO. Generous professional development budget. Commuter benefits. Wellness benefits. Phone plan stipend.