The AI job market moves fast. We keep up so you don't have to.
Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.
New AI Opportunities
VP Engineering - Paris
H Company
201-500
France
Full-time
Remote
VP Engineering

About H:
H exists to push the boundaries of superintelligence with agentic AI. By automating complex, multi-step tasks typically performed by humans, AI agents will help unlock full human potential. H is hiring the world's best AI talent, seeking those who are as dedicated to building safely and responsibly as to advancing disruptive agentic capabilities. We promote a mindset of openness, learning, and collaboration, where everyone has something to contribute.

Context
H Company is building a new class of agentic AI systems: autonomous software capable of executing complex workflows across tools and environments. Following a landmark European seed round and early enterprise traction, the company is entering a pivotal phase: scaling from breakthrough technology and early deployments to a robust, global AI platform business. To support this transition, H Company is hiring a VP Engineering to join the executive leadership team and co-drive the company's evolution from startup to category-defining scale-up.

Mission:
The VP Engineering will be responsible for:
- Defining and executing a scalable, defensible technology strategy
- Building a world-class engineering organization and platform
- Partnering with the CEO on product direction, investor communication, and long-term vision
- Ensuring H Company successfully bridges frontier AI research and enterprise-grade deployment
This role is central to converting early technical advantage into sustained product leadership and commercial scale.

Core Responsibilities
1. Platform & Technology Strategy
- Architect and scale H's AI platform (agents, orchestration, model integration, infra)
- Make critical build vs. buy decisions across the stack
- Ensure performance, reliability, and cost efficiency at scale
- Establish durable technical moats in a rapidly evolving AI landscape
2. From Innovation to Productization
- Translate cutting-edge AI capabilities into repeatable, enterprise-ready products
- Standardize systems that are currently bespoke or forward-deployed
- Balance speed of iteration with platform robustness and maintainability
3. Organizational Scale
- Build and lead a high-caliber engineering organization
- Scale from a startup structure to multi-layered, high-output teams
- Implement processes that enable speed without sacrificing quality
4. Executive & Investor Interface
- Act as a key counterpart to the CEO in board and investor discussions
- Clearly articulate H Company's technology and product roadmap
- Provide credibility and depth in technical due diligence and fundraising contexts
5. Cross-Functional Leadership
- Operate at the intersection of Research, Product, and Go-to-Market
- Align engineering execution with customer outcomes and revenue growth
- Help define the company's long-term product and platform positioning

Profile
Leadership & Scale
- Experience as CTO / VP Engineering (or equivalent) in a high-growth, venture-backed environment
- Proven ability to scale teams and systems through rapid growth phases
Technical Credibility
- Deep expertise in distributed systems, AI/ML infrastructure, or developer platforms
- Ability to engage with frontier research while driving production excellence
Strategic Orientation
- Strong product intuition and ability to shape platform strategy
- Comfortable operating in ambiguity and fast-moving technical landscapes
Investor Readiness
- Experience engaging with investors, boards, and senior stakeholders
- Ability to communicate complex technical topics with clarity and authority

Opportunity
This is a rare opportunity to:
- Shape the technical foundation of a potential category leader in agentic AI
- Partner directly with the CEO on company-building and strategic decisions
- Play a defining role in scaling a European AI company to global relevance

Location:
Paris or London. Please expect some travel between offices on a reasonable cadence (e.g., every 4-6 weeks).

What We Offer:
- Join the exciting journey of shaping the future of AI, and be part of the early days of one of the hottest AI startups.
- Collaborate with a fun, dynamic, and multicultural team, working alongside world-class AI talent in a highly collaborative environment.
If you want to change the status quo in AI, join us.
2026-04-28 11:06
Software Engineer, Model Serving Infrastructure
Anyscale
201-500
India
Full-time
Remote
About Anyscale:
At Anyscale, we're on a mission to democratize distributed computing and make it accessible to software developers of all skill levels. We're commercializing Ray, a popular open-source project that's creating an ecosystem of libraries for scalable machine learning. Companies like OpenAI, Uber, Spotify, Instacart, Cruise, and many more have Ray in their tech stacks to accelerate the progress of AI applications out into the real world.
With Anyscale, we’re building the best place to run Ray, so that any developer or data scientist can scale an ML application from their laptop to the cluster without needing to be a distributed systems expert.
Proud to be backed by Andreessen Horowitz, NEA, and Addition with $250+ million raised to date.
About the role:
Anyscale is actively seeking talented engineers to join our team and contribute to the development of next-generation, high-performance machine learning serving systems. We value diversity and inclusion, and we encourage individuals from underrepresented groups to apply.

Many existing ML serving tools are inherited from previous infrastructure generations, but emerging ML applications present new requirements, such as high compute demands, specialized hardware needs, and the integration of multiple models and business logic within a single request. At Anyscale, our mission is to provide a powerful yet simple set of tools that enable the seamless deployment of complex ML applications in production.

The Challenge:
What if you could build the infrastructure that powers AI applications for millions of users worldwide? Ray Serve is the production-grade serving framework that makes this possible, and we need exceptional engineers to push its boundaries. You'll be working on problems that sit at the intersection of distributed systems, machine learning, and high-performance computing. This isn't about maintaining CRUD apps or tweaking configurations; it's about solving fundamental computer science problems that directly impact how the world deploys AI.

What You'll Actually Build:
Example projects:
- Asynchronous Inference: Let the client submit a request and receive a request handle it can poll for completion without blocking. Especially important for image, video, and audio generation applications.
- Sub-millisecond Model Routing: Design and implement intelligent request routing systems that dynamically balance load across thousands of model replicas while maintaining strict latency SLAs.
- Zero-Downtime Model Updates: Build sophisticated traffic management systems that seamlessly transition between model versions at scale, handling terabytes of inference requests without dropping a single query.
- State Management at Scale: With many models and many replicas deployed in production, the control loop's state management can become the bottleneck for routing, autoscaling, and other events. What architectural improvements can push the envelope of scale by 10x, from thousands of replicas to tens of thousands?
- Multi-Model Orchestration: Architect frameworks for complex ML pipelines where dozens of models need to communicate, share resources, and maintain end-to-end latency guarantees.
- Observability & Debugging: Build deep introspection tools that make it trivial to debug distributed ML applications, because "works on my laptop" doesn't cut it at scale.

The Tech You'll Work With:
- Deep Systems Programming: You'll write performance-critical code in Python (with Cython optimization paths) and potentially C++ for the hot paths.
- Distributed Systems at Scale: Work directly with Ray Core's actor system, gRPC, and custom networking protocols to handle millions of requests per second.
- Cloud-Native Infrastructure: Kubernetes, service meshes, and custom operators; you'll need to understand and extend the cloud-native ecosystem.
- ML/AI Systems: TensorFlow, PyTorch, JAX, transformers. You don't need to be an ML expert, but you'll develop deep system-level knowledge of how these frameworks work under the hood.
- Production Reliability: OpenTelemetry, Prometheus, distributed tracing, and chaos engineering to ensure 99.99% uptime. Availability and performance are our key objectives as a serving infrastructure.
- AI Coding Agents: We are an AI-forward company, leveraging coding agents to scale ourselves while keeping the team lean and highly utilized.

What We're Looking For:
Must-Haves
- Strong Systems Fundamentals: You understand operating systems, networking, concurrency, and distributed systems at a deep level, along with the trade-offs that different design options imply.
- Production Experience: You've built and maintained systems that serve real users at scale.
- Code Quality: You have good taste in code quality, simplicity, generality, and test coverage. AI agents write a lot of code in a short time; you should be able to instruct them to produce gold-standard output.
- Ownership Mindset: You take responsibility for your code in production, from design to deployment to incident response.
Nice-to-Haves
- Experience with distributed systems frameworks (gRPC, Ray)
- Background in ML/AI systems or serving infrastructure
- Contributions to major open source projects
- Experience with performance optimization and profiling
- Knowledge of cloud-native technologies (Kubernetes, Istio, etc.)

What Really Matters
We care more about how you think and solve problems, and whether you have shown patterns of end-to-end ownership at past stages of your career, than about checking boxes. If you're intellectually curious, love building elegant solutions to hard problems, and want to work on infrastructure that matters, we want to talk to you.

Anyscale Inc. is an Equal Opportunity Employer. Candidates are evaluated without regard to age, race, color, religion, sex, disability, national origin, sexual orientation, veteran status, or any other characteristic protected by federal or state law. Anyscale Inc. is an E-Verify company and you may review the Notice of E-Verify Participation and the Right to Work posters in English and Spanish.
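The asynchronous-inference pattern this posting describes (submit a request, get back a handle, poll for completion without blocking the client) can be sketched with nothing but the standard library. This is an illustrative toy under assumed names, not Ray Serve's actual API:

```python
import asyncio
import uuid


class AsyncInferenceServer:
    """Toy server: submit() returns a handle immediately; the client
    polls (or awaits) completion without blocking on the model call."""

    def __init__(self):
        self._results: dict[str, asyncio.Task] = {}

    async def _run_model(self, prompt: str) -> str:
        await asyncio.sleep(0.05)  # stand-in for a slow generation step
        return f"generated({prompt})"

    def submit(self, prompt: str) -> str:
        # Hand the client an opaque request handle right away.
        handle = str(uuid.uuid4())
        self._results[handle] = asyncio.ensure_future(self._run_model(prompt))
        return handle

    def is_done(self, handle: str) -> bool:
        return self._results[handle].done()

    async def result(self, handle: str) -> str:
        return await self._results[handle]


async def main() -> str:
    server = AsyncInferenceServer()
    handle = server.submit("a red panda, watercolor")
    # The client is free to do other work here instead of blocking.
    while not server.is_done(handle):
        await asyncio.sleep(0.01)
    return await server.result(handle)


print(asyncio.run(main()))  # generated(a red panda, watercolor)
```

A production system adds queueing, replica routing, and durable result storage; the point here is only the non-blocking submit/poll contract that makes long-running image, video, or audio generation practical.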
2026-04-28 9:06
Staff Software Engineer, Security Controls Telemetry & Detection
Horizon3ai
201-500
$220,000 – $275,000
United States
Full-time
Remote
Get to Know Us
Horizon3.ai is a fast-growing, remote cybersecurity company dedicated to the mission of enabling organizations to proactively find, fix, and verify exploitable attack vectors before criminals exploit them. Our flagship product, the NodeZero™ platform, delivers production-safe autonomous pentests and other key assessment operations that scale across the largest internal, external, cloud, and hybrid cloud environments. NodeZero has been adopted by organizations of all sizes, from small educational institutions to government agencies and Global 100 enterprises. It is used by ITOps/SecOps teams, consulting pentesters, and MSSPs and MSPs. We are a fusion of former U.S. Special Operations cyber operators, startup engineers, and formerly frustrated cybersecurity practitioners. We're committed to helping solve our common security problems: ineffective security tools, false positives resulting in alert fatigue, blind spots, "checkbox" security culture, the cybersecurity skills shortage, and the long lead time and expense of hiring outside consultants. Collectively, we are a team of learn-it-alls, committed to a culture of respect, collaboration, ownership, and results.

Summary
We are hiring a Staff Software Engineer to own the technical vision for EDR telemetry and detection work inside NodeZero and, ultimately, the future of EDR effectiveness and tuning as a product capability. Modern endpoints are instrumented by CrowdStrike, SentinelOne, Microsoft Defender, Carbon Black, and others. Our customers need to know, with evidence, which of our attack techniques their EDR caught, which ones slipped through, and why. Answering that at scale, across platforms, tenants, and operator objectives, requires someone who deeply understands the telemetry surface and can turn that understanding into a product. Over time, this role will own the technical work for incorporating AI and ML research into how we reason about detection gaps, generate tuning recommendations, and scale effectiveness insights.

This is not a pure architect role. You will write production code every week, review PRs from the people you lead, and partner closely with Product to sequence the right problems in the right order. You will be the person who both draws the system diagram on the whiteboard and commits the first slice of it to main.

Essential Functions
Leadership & Team Development
- Owns the end-to-end technical vision for the workstream and rallies the team around it, from blank doc through shipping, iterating, and deprecating.
- Contributes production code at Lead/Staff level in a modern backend language (Go, Rust, Python, or similar) in a service-oriented environment.
- Sets and raises the technical bar (design reviews, code quality, operational discipline) by example rather than by mandate.
- Mentors and develops the engineers around them; builds the frameworks and architecture for others to do the best work of their careers.
- Partners with the hiring team to attract, interview, and level engineers into the workstream as it scales.
- Holds the team accountable to outcomes rather than activity; surfaces risks and tradeoffs early and in writing.
Product-Minded Technical Leadership
- Translates ambiguous product goals into concrete technical roadmaps.
- Makes build vs. buy vs. integrate calls with business context, not just engineering preference.
- Partners closely with PM; comfortable in PRD reviews, not just sprint planning.
- Sequences an MVP without painting the team into a corner.
EDR Domain Expertise
- Deep familiarity with at least one major EDR platform (CrowdStrike, SentinelOne, Microsoft Defender) at the telemetry and API level.
- Understands detection logic, alert triage workflows, and how SOC teams consume EDR output.
- Can build and evaluate labeled ground-truth datasets; knows what a correct detection actually looks like.
- Fluent in FP/FN tradeoffs and confidence scoring in real production environments.
Detection & Measurement Methodology (primary owner)
- Defines ground-truth methodology and oversees execution (initially with intern support).
- Designs the confidence-scoring approach and FP/FN threshold definitions.
- Owns calibration and recalibration methodology as the system evolves.
- Defines what "correct" looks like for tuning recommendations; translates missed detections into vendor-accurate guidance.

Travel Required
We are a fully remote company, and this job may require up to 10% travel.

Perks of Horizon3.ai
- Inclusive Team: We value diversity and promote an inclusive culture where everyone can thrive.
- Growth Opportunities: Be part of a dynamic and growing team with numerous career development opportunities.
- Innovative Culture: Work in a collaborative environment that encourages creativity and out-of-the-box thinking.
- Remote Work: We are a 100% remote company. Enjoy the convenience and work-life balance that comes with remote work.
- Competitive Compensation: We offer competitive salary, equity, and benefits. Our benefits include health, vision, and dental insurance for you and your family, a flexible vacation policy, and generous parental leave.

Compensation and Values
At Horizon3, we believe that our people are our greatest asset, and our compensation philosophy reflects this core value. We are committed to fostering an environment where all employees feel valued, respected, and rewarded for their contributions. Our compensation structure is designed to be fair, competitive, and transparent, ensuring that every team member is recognized and compensated equitably across roles, levels, and locations. In accordance with various states' transparency regulations, we provide the following salary range information for this position:
Base salary range: $220,000 - $275,000 annually. The exact salary will be determined based on the selected candidate's location, qualifications, experience, and relevant skills.
Additional compensation: All full-time roles are eligible for an equity package in the form of stock options.

You Belong Here
Horizon3 is not just an equal opportunity employer; we are a community that values diversity, equity, and inclusion as fundamental principles of our culture and success. We are dedicated to fostering a workplace where everyone feels welcome and respected, regardless of race, color, religion, sex, national origin, age, disability, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, or any other legally protected status. Our commitment to diversity and inclusion means we strive to attract, develop, and retain a workforce that reflects the varied communities we serve. We believe that diverse perspectives drive innovation and strengthen our ability to create cutting-edge cybersecurity solutions. At Horizon3, every team member is valued and supported in an environment that encourages personal and professional growth. We welcome candidates from all backgrounds and experiences, and we encourage all qualified individuals to apply. Come be a part of Horizon3, where your unique contributions are recognized, and your potential is limitless.

Other Duties
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties, or responsibilities that are required of the employee. Duties, responsibilities, and activities may change at any time with or without notice.
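To make the ground-truth and FP/FN vocabulary in this posting concrete, here is a minimal sketch (hypothetical data and function names, not Horizon3 code) that scores confidence-ranked EDR alerts against the set of attack techniques a pentest actually executed:

```python
def fp_fn_at_threshold(detections, ground_truth, threshold):
    """Count TP/FP/FN for confidence-scored detections against a
    labeled ground-truth set of technique IDs (toy illustration)."""
    predicted = {d["technique"] for d in detections if d["confidence"] >= threshold}
    tp = len(predicted & ground_truth)   # caught and actually executed
    fp = len(predicted - ground_truth)   # alerted, but never executed
    fn = len(ground_truth - predicted)   # executed, but slipped through
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"tp": tp, "fp": fp, "fn": fn,
            "precision": precision, "recall": recall}


# Ground truth: techniques the autonomous pentest actually executed.
truth = {"T1003", "T1021", "T1059"}
# EDR alerts with confidence scores (hypothetical data).
alerts = [
    {"technique": "T1003", "confidence": 0.92},  # caught
    {"technique": "T1059", "confidence": 0.41},  # below threshold: missed
    {"technique": "T1566", "confidence": 0.80},  # never executed: FP
]
print(fp_fn_at_threshold(alerts, truth, threshold=0.5))
# tp=1, fp=1, fn=2 at this threshold
```

Sweeping `threshold` across the alert scores is the simplest way to see the FP/FN tradeoff the role's calibration work would manage at much larger scale.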
2026-04-28 8:35
Senior Content Strategist
Arize AI
101-200
Full-time
Remote
About Arize
AI is rapidly transforming the world. As generative AI reshapes industries, teams need powerful ways to monitor, troubleshoot, and optimize their AI systems. That’s where we come in. Arize AI is the leading AI & Agent Engineering observability and evaluation platform, empowering AI engineers to ship high-performing, reliable agents and applications. From first prototype to production scale, Arize AX unifies build, test, and run in a single workspace—so teams can ship faster with confidence.
We’re a Series C company backed by top-tier investors, with over $135M in funding and a rapidly growing customer base of 150+ leading enterprises and Fortune 500 companies. Customers like Booking.com, Uber, Siemens, and PepsiCo leverage Arize to deliver AI that works.

Note: The nature of this role requires candidates to be based in the Buenos Aires area, though there isn't an in-office requirement.
The Opportunity
We’re looking for an Application Engineer who thrives on solving hard problems with code. In this role, you'll have the opportunity to work at the cutting edge of generative AI in a high-impact role with autonomy and ownership.
What You’ll Do
Debug and fix issues in our platform (and ship PRs with your fixes).
Build internal tools and copilots powered by generative AI to supercharge our team.
Rapidly prototype proof-of-concepts for customer use cases.
Work across Engineering, Product, and Solutions to unblock customers and push the boundaries of AI adoption.
What We’re Looking For
You have 2-5 years of software engineering experience.
Strong in Python and Golang; comfortable shipping fixes in production systems.
Hands-on with generative AI (LLM APIs, frameworks, building copilots or automations).
Hands-on with OpenTelemetry and deep familiarity with distributed tracing concepts.
Familiarity with AI frameworks (CrewAI, LangChain, LangGraph, Dify, LiteLLM, etc.).
Familiarity with, or eagerness to learn, JavaScript/TypeScript.
Great debugger, creative problem solver, and fast learner.
Independent and resourceful. You create solutions, not dependencies.
Bonus Points (but not required!)
Experience in a customer-facing role
Built copilots, plugins, or custom GenAI-powered applications.
Open-sourced or contributed PRs to real codebases.
Startup or fast-moving environment experience.
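The distributed-tracing concepts behind the OpenTelemetry requirement above (one trace ID shared by every hop of a request, with parent/child span links forming a tree) can be shown in a stdlib-only toy; the names here are illustrative, not the OpenTelemetry API:

```python
import secrets
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class SpanContext:
    """Minimal trace-context record: every span in one request shares
    a trace_id; parent_id links spans into the trace tree."""
    trace_id: str
    span_id: str
    parent_id: Optional[str]


def start_trace() -> SpanContext:
    # Root span: new trace, no parent (e.g. the inbound HTTP request).
    return SpanContext(secrets.token_hex(16), secrets.token_hex(8), None)


def child_span(parent: SpanContext) -> SpanContext:
    # A downstream call keeps the trace_id and records its caller,
    # which is what lets a tracing backend reassemble the request path.
    return SpanContext(parent.trace_id, secrets.token_hex(8), parent.span_id)


root = start_trace()    # inbound request
db = child_span(root)   # the query it triggers
assert db.trace_id == root.trace_id
assert db.parent_id == root.span_id
```

Real OpenTelemetry adds timing, attributes, sampling, and cross-service propagation (e.g. the W3C `traceparent` header), but the tree structure is the same idea.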
Actual compensation is determined based upon a variety of job-related factors that may include transferable work experience, skill sets, and qualifications. Total compensation also includes unlimited paid time off, a generous parental leave plan, and other benefits for mental health and wellness support.

More About Arize
Arize’s mission is to make the world’s AI work—and work for people.
Our founders came together through a shared frustration: while investments in AI are growing rapidly across every industry, organizations face a critical challenge—understanding whether AI is performing and how to improve it at scale.
Learn more about what we're doing here:
https://techcrunch.com/2025/02/20/arize-ai-hopes-it-has-first-mover-advantage-in-ai-observability/
https://arize.com/blog/arize-ai-raises-70m-series-c-to-build-the-gold-standard-for-ai-evaluation-observability/
Diversity & Inclusion @ Arize
Our mission is to make AI work, and make it work for people. We hope to make an industry-wide impact on bias, and that's a big motivator for the people who work here. We actively encourage everyone to contribute to a good culture:
- Regular chats with industry experts, researchers, and ethicists across the ecosystem to advance the use of responsible AI
- Culturally conscious events, such as LGBTQ trivia during Pride Month
- An active Lady Arizers subgroup
2026-04-28 7:05
Parcel Contract Intelligence Consultant
Loop
101-200
$125,000 – $150,000
Remote
About Loop
Loop is the data platform for the global supply chain. Logistics runs on messy, unstructured data—trapped in PDFs, emails, and legacy systems. We use AI to structure this chaos, creating a "source of truth" that automates payments and audits for the Fortune 100.
We are building the financial nervous system for a $100 trillion physical economy. Our technology ensures freight moves efficiently and carriers get paid instantly.
Backed by Founders Fund, Index Ventures and 8VC, we are scaling rapidly. We are looking for engineers ready to deploy production AI that powers the physical economy.
About the New Grad Program
Most AI stays in the browser. Ours moves atoms. You aren't just building features; you are building the autonomous brain for the Fortune 100’s global supply chain.
This program is designed to compress 3 years of learning into 1 year by throwing you into the deep end of production AI systems on Day 1. Instead of sandboxed projects, you get to solve real problems and impact customers directly. This program demands intense investment, but by the end, you will perform as a strong entry-level engineer jumpstarting your career.
The Schedule:
Week 1 (Onboarding): Deep dive into tools and domain. You will ship code to production on Day 1 and fully grasp our dev loop by Friday.
Months 1-3 (Velocity): You will deliver 3 entry-level projects with increasing ambiguity. By the end of Month 3, you are expected to operate as a fully independent engineer.
Months 4-9 (The Rotation): You will rotate onto a different high-impact team to expand your surface area. Tracks include:
Platform: AI infrastructure and Engineering Systems.
Core Product: Audit, Billing, and Payments logic.
Commercial: Revenue Activation and Forward Deployed Engineering.
Special Projects: Partnering directly with the CEO/CTO and other execs.
Month 9+ (Graduation): You should demonstrate Mid-Level Engineer performance and will be considered for immediate promotion.
About You
We're not just looking for strong academic performers. We're looking for people who are genuinely driven to build things and go deep on hard problems. If the following resonates, you belong here:
You go above and beyond. You have a repo, a side hustle, or a project you built just because you are curious. You’re self-directed and don't need an assignment to start coding.
You have a bias towards action. You prefer to ship, break, fix, and apologize rather than wait for a committee decision.
You are drawn to hard problems. You want problems that are more than one prompt away from a solution.
You get absorbed in mastering your craft. Whether it’s climbing the Esports ladder, acing a math competition, winning a hackathon, or debugging a complex issue, you know what it feels like to lose track of time working on something you care about.
Responsibilities
Ship critical infrastructure. Manage real-world logistics and financial data for the largest enterprises in the world.
Own the why. Build deep context through customer calls, and understand Loop's value to our customers. You push back on requirements if you see a better, faster way to solve the customer's problem.
Full-stack proficiency. Work across system boundaries, from frontend UX to LLM agents, database schemas, and event infrastructure.
Leverage AI tools to handle the 90% that is boilerplate, so you can focus on the highest-leverage 10%: quality, architecture, product taste.
Raise the velocity bar. You will constantly optimize our dev loops, refactor legacy patterns, automate your own workflows and fix broken processes.
Qualifications
Graduating with a BS or higher in STEM fields; available to start full-time in 2026.
Working in person in the SF or Chicago office 4 days a week.
Proficiency with a modern tech stack. You can deliver a modern web app in hours, not days.
Unblocking yourself. You thrive in ambiguity. Despite the chaos, you deliver high quality products and business impact.
AI literate. You have strong intuition for how LLMs work: where they excel and where they generate slop. You live and breathe AI-native tools (Cursor, Codex, Claude Code, etc.).
Compensation
$150,000 annual base pay for SF
$125,000 annual base pay for Chicago
Benefits & Perks
Fully paid health insurance.
401k with matching.
Unlimited PTO.
Generous professional development budget.
Commuter benefits.
Wellness benefits.
Phone plan stipend.
2026-04-28 6:05
AI/ML Engineer
Air Apps
51-100
€60,000 – €76,000
Netherlands
Full-time
Remote
About Air Apps
At Air Apps, we believe in thinking bigger, and moving faster. We're a family-founded company on a mission to create the world's first AI-powered Personal & Entrepreneurial Resource Planner (PRP), and we need your passion and ambition to help us change how people plan, work, and live. Born in Lisbon, Portugal in 2018, and now with offices in both Lisbon and San Francisco, we've remained self-funded while reaching over 100 million downloads worldwide. Our long-term focus drives us to challenge the status quo every day, pushing the boundaries of AI-driven solutions that truly make a difference. Here, you'll be a creative force, shaping products that empower people across the globe. Join us on this journey to redefine resource management, and change lives along the way.

The Role
As an AI/ML Engineer, you will play a crucial role in designing, developing, and optimizing machine learning models to power our mobile applications. You will work closely with product managers, engineers, and designers to create intelligent, data-driven features that enhance user experiences. Your expertise in artificial intelligence and deep learning will help us innovate and stay ahead in the mobile app industry. This is a fully onsite position, based at our office in Lisbon, where you will collaborate closely with cross-functional teams in person and contribute to a dynamic and fast-paced environment. We are open to supporting relocation efforts.

Responsibilities
- Develop, train, and optimize machine learning models for various mobile app features.
- Research and implement state-of-the-art AI techniques to improve user engagement and app performance.
- Collaborate with cross-functional teams to integrate AI-driven solutions into our applications.
- Design and maintain scalable ML pipelines, ensuring efficient model deployment and monitoring.
- Analyze large datasets to derive insights and drive data-driven decision-making.
- Stay updated on the latest AI trends and best practices, incorporating them into our development processes.
- Optimize AI models for mobile environments to ensure high performance and low latency.

Requirements
- Around 4+ years of experience in AI/ML development, preferably in mobile applications.
- Proficiency in Python and ML frameworks such as TensorFlow or PyTorch.
- Experience with deep learning, NLP, computer vision, and statistical modeling.
- Familiarity with cloud-based ML services (AWS, Google Cloud, or Azure).
- Strong understanding of data structures, algorithms, and software engineering best practices.
- Experience in deploying and maintaining ML models in production.
- Ability to work collaboratively in a remote team environment.
- Strong problem-solving skills and a passion for innovation.

What benefits do we offer?
- Apple hardware ecosystem for work.
- Annual bonus.
- Top-tier health and life insurance for peace of mind.
- Transportation budget to support your commute needs.
- Coverflex benefits package for meal allowances, well-being, and more.
- Childcare support.
- Air Conference: an opportunity to meet the team, collaborate, and grow together.
- Pension fund to support your long-term financial planning.
- Urban Sports Club membership to keep you active.
- Meals 100% free at the hub.

Diversity & Inclusion
At Air Apps, we are committed to fostering a diverse, inclusive, and equitable workplace. We enthusiastically welcome applicants from all backgrounds, experiences, and perspectives. We celebrate diversity in all its forms and believe that varied voices and experiences make us stronger.

Application Disclaimer
At Air Apps, we value transparency and integrity in our hiring process. Applicants must submit their own work without any AI-generated assistance. Any use of AI in application materials, assessments, or interviews will result in disqualification.
2026-04-28 4:36
AI Product Manager
Air Apps
51-100
€58,000 – €73,000
Germany
Full-time
Remote
About Air Apps
At Air Apps, we believe in thinking bigger, and moving faster. We're a family-founded company on a mission to create the world's first AI-powered Personal & Entrepreneurial Resource Planner (PRP), and we need your passion and ambition to help us change how people plan, work, and live. Born in Lisbon, Portugal in 2018, and now with offices in both Lisbon and San Francisco, we've remained self-funded while reaching over 100 million downloads worldwide. Our long-term focus drives us to challenge the status quo every day, pushing the boundaries of AI-driven solutions that truly make a difference. Here, you'll be a creative force, shaping products that empower people across the globe. Join us on this journey to redefine resource management, and change lives along the way.

The Role
As an AI Product Manager at Air Apps, you will be at the forefront of shaping AI-powered applications that enhance user experiences. You will lead the product development lifecycle for AI-driven features, working closely with engineers, designers, and data scientists to develop, launch, and scale AI-driven solutions. Your role is pivotal in ensuring that AI technologies align with user needs and business objectives. This is a fully onsite position, based at our office in Lisbon, where you will collaborate closely with cross-functional teams in person and contribute to a dynamic and fast-paced environment. We are open to supporting relocation efforts.

Responsibilities
- Define and drive the AI product roadmap, ensuring alignment with business objectives and user needs.
- Collaborate with cross-functional teams, including engineering, design, and marketing, to develop and launch AI-powered features.
- Conduct market research and analyze user feedback to identify opportunities for AI integration.
- Work closely with data scientists and machine learning engineers to optimize AI models for accuracy, performance, and user impact.
- Define key performance indicators (KPIs) to measure success and iterate based on data-driven insights.
- Stay up to date with AI trends, emerging technologies, and best practices to ensure our products remain competitive.
- Ensure ethical AI usage and compliance with data privacy regulations.

Requirements
- Around 4+ years of experience in product management, preferably in AI, machine learning, or data-driven applications.
- Strong understanding of AI/ML concepts, including NLP, computer vision, and recommendation systems.
- Experience working with data science and engineering teams to develop AI-based features.
- Ability to translate complex AI concepts into user-friendly applications.
- Strong analytical skills and experience leveraging data to drive product decisions.

What benefits are we offering?
- Apple hardware ecosystem for work.
- Annual bonus.
- Top-tier health and life insurance for peace of mind.
- Transportation budget to support your commute needs.
- Coverflex benefits package for meal allowances, well-being, and more.
- Childcare support.
- Air Conference: an opportunity to meet the team, collaborate, and grow together.
- Pension fund to support your long-term financial planning.
- Urban Sports Club membership to keep you active.
- Meals 100% free at the hub.

Diversity & Inclusion
At Air Apps, we are committed to fostering a diverse, inclusive, and equitable workplace. We enthusiastically welcome applicants from all backgrounds, experiences, and perspectives. We celebrate diversity in all its forms and believe that varied voices and experiences make us stronger.

Application Disclaimer
At Air Apps, we value transparency and integrity in our hiring process. Applicants must submit their own work without any AI-generated assistance. Any use of AI in application materials, assessments, or interviews will result in disqualification.
2026-04-28 4:36
AI Product Manager
Air Apps
51-100
€58,000 – €73,000
Netherlands
Full-time
Remote: false
About Air Apps
At Air Apps, we believe in thinking bigger—and moving faster. We’re a family-founded company on a mission to create the world’s first AI-powered Personal & Entrepreneurial Resource Planner (PRP), and we need your passion and ambition to help us change how people plan, work, and live. Born in Lisbon, Portugal in 2018—and now with offices in both Lisbon and San Francisco—we’ve remained self-funded while reaching over 100 million downloads worldwide.

Our long-term focus drives us to challenge the status quo every day, pushing the boundaries of AI-driven solutions that truly make a difference. Here, you’ll be a creative force, shaping products that empower people across the globe. Join us on this journey to redefine resource management—and change lives along the way.

The Role
As an AI Product Manager at Air Apps, you will be at the forefront of shaping AI-powered applications that enhance user experiences. You will lead the product development lifecycle for AI-driven features, working closely with engineers, designers, and data scientists to develop, launch, and scale AI-driven solutions. Your role is pivotal in ensuring that AI technologies align with user needs and business objectives.

This is a fully onsite position, based at our office in Lisbon, where you will collaborate closely with cross-functional teams in person and contribute to a dynamic and fast-paced environment. We are open to supporting relocation efforts.

Responsibilities
Define and drive the AI product roadmap, ensuring alignment with business objectives and user needs.
Collaborate with cross-functional teams, including engineering, design, and marketing, to develop and launch AI-powered features.
Conduct market research and analyze user feedback to identify opportunities for AI integration.
Work closely with data scientists and machine learning engineers to optimize AI models for accuracy, performance, and user impact.
Define key performance indicators (KPIs) to measure success and iterate based on data-driven insights.
Stay up to date with AI trends, emerging technologies, and best practices to ensure our products remain competitive.
Ensure ethical AI usage and compliance with data privacy regulations.

Requirements
4+ years of experience in product management, preferably in AI, machine learning, or data-driven applications.
Strong understanding of AI/ML concepts, including NLP, computer vision, and recommendation systems.
Experience working with data science and engineering teams to develop AI-based features.
Ability to translate complex AI concepts into user-friendly applications.
Strong analytical skills and experience leveraging data to drive product decisions.

What benefits are we offering?
Apple hardware ecosystem for work.
Annual Bonus.
Top-tier Health and Life Insurance for peace of mind.
Transportation Budget to support your commute needs.
Coverflex benefits package for meal allowances, well-being, and more.
Childcare support.
Air Conference - an opportunity to meet the team, collaborate, and grow together.
Pension Fund to support your long-term financial planning.
Urban Sports Club membership to keep you active.
Meals 100% free at the hub.

Diversity & Inclusion
At Air Apps, we are committed to fostering a diverse, inclusive, and equitable workplace. We enthusiastically welcome applicants from all backgrounds, experiences, and perspectives. We celebrate diversity in all its forms and believe that varied voices and experiences make us stronger.

Application Disclaimer
At Air Apps, we value transparency and integrity in our hiring process. Applicants must submit their own work without any AI-generated assistance. Any use of AI in application materials, assessments, or interviews will result in disqualification.
2026-04-28 4:20
Research Infrastructure Engineer, Training Systems
OpenAI
5000+
$295,000 – $380,000
United States
Full-time
Remote: false
About The Team
The team works on research and systems that advance frontier models. Our work often goes beyond standard training recipes, which means we also build the infrastructure needed to make new training approaches practical at scale. This is a team where systems work is directly tied to research progress: better tools, abstractions, and runtimes can unlock experiments that would otherwise be too slow, brittle, or difficult to express.

About The Role
This is a systems engineering role focused on ML training infrastructure. You will work on the systems layer that turns novel research ideas into runnable, measurable training workloads for large models. The work can sit on the critical path for model releases, bringing both the excitement of direct impact and the responsibility of building systems that remain reliable under real pressure.

In This Role, You Will
Build and maintain infrastructure for large-scale model training and experimentation.
Design APIs and interfaces that make complex training workflows easier to express and harder to misuse.
Improve reliability, debuggability, and performance across training and data pipelines.
Debug issues spanning Python, PyTorch, distributed systems, GPUs, networking, and storage.
Write tests, benchmarks, and diagnostics that catch meaningful regressions.

You Might Thrive In This Role If
You want to build systems that enable new model training approaches, not just optimize established ones.
You have strong systems instincts and care deeply about performance, reliability, and clean abstractions.
You have good taste in API and interface design, with empathy for the researchers and engineers using your tools.
You are comfortable working across ML research code and production-quality infrastructure.
You enjoy debugging from evidence: profiles, traces, logs, tests, and minimal reproductions.

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
2026-04-28 2:36
Engineering Manager, Cooperative Systems
OpenAI
5000+
$325,000 – $385,000
No items found.
Full-time
Remote: false
About the Team
The Cooperative AI team is scaling OpenAI with OpenAI. We are building a model-powered, scaled, automated workforce and knowledge system that evolves and learns alongside a human workforce. By leveraging OpenAI’s state-of-the-art models and technologies, some already in production, others still in the lab, we develop systems that reason and work autonomously across a wide variety of operational work.

We take on real workloads for critical systems across finance, sales, customer support, integrity, product insights, internal operations, and more in order to drive insights into product and industry. We partner closely with internal teams and external customers globally, operating in a hyper-fast feedback loop where many of our users are just a few steps away. This proximity allows us to iterate quickly, validate impact in real time, and accelerate industry-impacting learnings and systems builds.

We are a highly multidisciplinary, self-contained team focused on transforming the workplace via smart systems, knowledge, and scalable, reliable primitives that apply world-class AI capabilities across domains. Our mission is to learn fast and transform how humans collaborate with AI at scale.

About the Role
We are looking for a hands-on Engineering Manager to lead a small, fast-moving team building AI-powered automation systems that redefine how work gets done across OpenAI. This role sits at the intersection of applied AI, research, and product engineering. You’ll lead a team that builds systems that know how to learn from humans and carry real workloads across sales, support, finance, IT, and more, while staying deeply involved in the technical work.

You will operate in a highly iterative environment, deploying systems directly to internal users, gathering rapid feedback, and evolving solutions in real time. This is a high-ownership role for someone excited about building 0→1 systems, working closely with customers, and shaping how AI transforms operational work at scale.

What You’ll Do
Lead and grow a small team building applied AI systems for internal operations.
Design and build AI-powered automation systems in close proximity to customers.
Stay hands-on in architecture and implementation across the full stack.
Develop evolving systems spanning developer tools, automation platforms, knowledge graphs, and data systems.
Deploy systems directly to internal users and close customers to iterate rapidly based on real-world feedback.
Engage frequently with scaled workforces to understand needs and validate solutions.
Create systems for visibility and learning in hybrid workforces.
Partner with product, research, and ops teams daily.

You Might Thrive in This Role If You
Have 12+ years of experience in engineering, including 3+ years in engineering management and at least 7 years as an IC engineer.
Are a hands-on builder who enjoys operating across the stack.
Have deep experience applying AI, and are ready to experiment with frontier approaches.
Bring strong technical judgment across systems design, infrastructure, and full-stack development.
Have built developer tools, internal platforms, or workflow automation systems.
Enjoy frequent interaction with customers and thrive in feedback-driven development loops.
Are comfortable with ambiguity and excited to operate in a rapidly evolving space.
Have experience in operationally complex environments (e.g., logistics, support systems, internal tooling, warehouse automation).

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
2026-04-28 2:36
Software Engineer, Compute Infrastructure
OpenAI
5000+
$230,000 – $405,000
United States
Full-time
Remote: false
About the Team
We build and scale the Compute foundation that powers frontier AI research and products. Our team delivers reliable, efficient, and cost-effective GPU/CPU capacity across world-scale supercomputing systems, enabling researchers and product teams to move quickly. We operate one of the largest GPU fleets in the world, rapidly bringing new infrastructure online across a wide range of providers, hardware types, and generations while integrating it into a single seamless platform at massive scale. We focus on building an intuitive, low-friction system that helps teams experiment faster, innovate faster, and train some of the world’s largest and most advanced models.

About the Role
We’re looking for engineers to help build and operate the next generation of compute infrastructure powering OpenAI’s frontier research. This is an opportunity to work on the large-scale clusters, high-performance networks, and supercomputing systems that enable some of the most advanced AI workloads in the world.

In this role, you’ll combine distributed systems engineering with hands-on infrastructure work across some of our largest data centers. You’ll help scale Kubernetes clusters to massive scale, automate bare-metal bring-up, and build the software layers that make heterogeneous GPU fleets and multi-datacenter supercomputing environments easier to operate. You’ll work where hardware and software meet, in an environment where speed, efficiency, and reliability are critical. That means solving real-time operational challenges, quickly diagnosing and fixing issues when they arise, and continuously improving automation, resilience, performance, and uptime across the systems that power frontier model training.

In this role, you will:
Spin up and scale large Kubernetes clusters, including automation for provisioning, bootstrapping, and cluster lifecycle management.
Build software abstractions that unify multiple clusters and present a seamless interface to training workloads.
Own node bring-up from bare metal through firmware upgrades, ensuring fast, repeatable deployment at massive scale.
Improve operational metrics such as reducing cluster restart times (e.g., from hours to minutes) and accelerating firmware or OS upgrade cycles.
Integrate networking and hardware health systems to deliver end-to-end reliability across servers, switches, and data center infrastructure.
Develop monitoring and observability systems to detect issues early and keep clusters stable under extreme load.

You might thrive in this role if you:
Have experience operating large-scale compute fleets and enjoy bringing diverse hardware across providers, generations, and environments into one reliable platform.
Care deeply about infrastructure efficiency and know how to maximize utilization so every GPU and CPU delivers meaningful work.
Bring a strong bias for operational excellence, balancing speed with long-term quality and building systems that improve consistently over time.
Focus on solving root causes rather than symptoms, and build trust by eliminating recurring pain points for users.
Have experience improving training performance, reducing bottlenecks, and helping workloads run faster and more cost-effectively at scale.
Enjoy pushing the limits of scale, from increasing concurrent workloads to enabling larger and more ambitious single-cluster jobs.
Build intuitive platforms and tooling that empower researchers, product teams, and operators to self-serve with minimal manual support.
Are comfortable working in fast-moving environments where ownership, reliability, and continuous improvement are essential.

Qualifications
Experience as an infrastructure, systems, or distributed systems engineer in large-scale or high-availability environments.
Strong knowledge of Kubernetes internals, cluster scaling patterns, and containerized workloads.
Proficiency in compute infrastructure concepts (compute, networking, storage, security) and in automating cluster or data center operations.
Bonus: background with GPU workloads, firmware management, or high-performance computing.

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
2026-04-28 2:35
Staff Software Engineer, AI Platform
Harvey
501-1000
$231,000 – $340,000
United States
Full-time
Remote: false
Why Harvey
At Harvey, we’re transforming how legal and professional services operate — not incrementally, but end-to-end. By combining frontier agentic AI, an enterprise-grade platform, and deep domain expertise, we’re reshaping how critical knowledge work gets done for decades to come.

This is a rare chance to help build a generational company at a true inflection point. With 1000+ customers in 60+ countries, strong product-market fit, and world-class investor support, we’re scaling fast and defining a new category in real time. The work is ambitious, the bar is high, and the opportunity for growth — personal, professional, and financial — is unmatched.

Our team is sharp, motivated, and deeply committed to the mission. We move fast, operate with intensity, and take real ownership of the problems we tackle — from early thinking to long-term outcomes. We stay close to our customers — from leadership to engineers — and work together to solve real problems with urgency and care. If you thrive in ambiguity, push for excellence, and want to help shape the future of work alongside others who raise the bar, we invite you to build with us. At Harvey, the future of professional services is being written today — and we’re just getting started.

Role Overview
Harvey’s products all depend on a shared AI foundation: the model layer and agent infrastructure that determine the quality of work our agents deliver. Legal is one of the hardest domains for AI: documents run to hundreds of pages, matters can span millions of files, and there is zero margin for error on accuracy. The AI Platform team builds the foundation that every product and agent team at Harvey builds upon. This team is early and there’s a lot to build: model routing, agent architecture, context management, evals. Your work here sets the ceiling for what Harvey’s AI can do.

Representative Projects
Context Engineering & Agent Infrastructure. Build the platform-level systems for context management, session state, and memory that all of Harvey’s agents and products rely on.
Model Integration & Routing. Own the infrastructure that lets Harvey onboard new foundation models fast and route to the right one for every task - a capability every product team depends on.
Evaluation Infrastructure. Build the shared eval tooling and frameworks that let every team across Harvey measure and improve AI quality systematically.
Shared Abstractions. Create the SDKs, platform primitives, and developer tooling that make it dramatically easier for product teams to ship AI-powered features.

What You’ll Do
Design and build abstractions and platform-level systems that improve all of Harvey’s agentic products.
Own infrastructure for model integration, routing, and evaluation that helps Harvey choose and deploy the right foundation model for any given context.
Build evaluation frameworks and tooling that let every team across Harvey iterate on AI quality effectively.
Partner closely with product engineering teams, PMs, and design to launch cutting-edge AI products.
Evaluate, prototype, and integrate the latest advancements in AI and agentic systems as they emerge.

What You Have
8+ years of experience building backend systems, with at least 1 year focused on AI/ML engineering and a track record of technical leadership across teams.
Experience building and shipping multi-model or multi-provider AI systems in production.
Familiarity with context management, session state, or memory systems in AI or distributed systems. You’ve thought about what the model sees and why it matters.
A track record of building internal platforms, SDKs, or shared infrastructure that other engineering teams actually adopted - and an understanding of why developer experience matters as much as raw capability.
Strong judgment about abstractions. Opinionated about good design but pragmatic about shipping incrementally.
Excitement about agentic AI and the infrastructure challenges of making autonomous systems reliable when the stakes are real.
A bias toward full ownership: you navigate ambiguity well and don’t wait for a roadmap to start solving problems.
Bonus: experience building evaluation frameworks, working with agent/function-calling architectures, familiarity with legal or other high-stakes professional services domains, or time at early-stage or hyper-growth startups where the underlying technology changes regularly.

Compensation Range
$231,000 - $340,000

Depending on your location, an Applicant Privacy Notice may apply to you. You can find all of our Applicant Privacy Notices [here]. #LI-AK1

Harvey is an equal opportunity employer and does not discriminate on the basis of race, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition, or any other basis protected by law. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made by emailing accommodations@harvey.ai
2026-04-26 3:21
Staff Software Engineer, AI Platform
Harvey
501-1000
Canada
Full-time
Remote
false
Why Harvey
At Harvey, we’re transforming how legal and professional services operate — not incrementally, but end-to-end. By combining frontier agentic AI, an enterprise-grade platform, and deep domain expertise, we’re reshaping how critical knowledge work gets done for decades to come.
This is a rare chance to help build a generational company at a true inflection point. With 1,000+ customers in 60+ countries, strong product-market fit, and world-class investor support, we’re scaling fast and defining a new category in real time. The work is ambitious, the bar is high, and the opportunity for growth — personal, professional, and financial — is unmatched.
Our team is sharp, motivated, and deeply committed to the mission. We move fast, operate with intensity, and take real ownership of the problems we tackle — from early thinking to long-term outcomes. We stay close to our customers — from leadership to engineers — and work together to solve real problems with urgency and care. If you thrive in ambiguity, push for excellence, and want to help shape the future of work alongside others who raise the bar, we invite you to build with us.
At Harvey, the future of professional services is being written today — and we’re just getting started.
Role Overview
Harvey’s products all depend on a shared AI foundation: the model layer and agent infrastructure that determine the quality of work our agents deliver. Legal is one of the hardest domains for AI: documents run to hundreds of pages, matters can span millions of files, and there is zero margin for error on accuracy.
The AI Platform team builds the foundation that every product and agent team at Harvey builds upon. This team is early and there’s a lot to build: model routing, agent architecture, context management, evals. Your work here sets the ceiling for what Harvey’s AI can do.
Representative Projects
Context Engineering & Agent Infrastructure. Build the platform-level systems for context management, session state, and memory that all of Harvey’s agents and products rely on.
Model Integration & Routing. Own the infrastructure that lets Harvey onboard new foundation models fast and route to the right one for every task — a capability every product team depends on.
Evaluation Infrastructure. Build the shared eval tooling and frameworks that let every team across Harvey measure and improve AI quality systematically.
Shared Abstractions. Create the SDKs, platform primitives, and developer tooling that make it dramatically easier for product teams to ship AI-powered features.
What You’ll Do
Design and build abstractions and platform-level systems that improve all of Harvey’s agentic products.
Own infrastructure for model integration, routing, and evaluation that helps Harvey choose and deploy the right foundation model for any given context.
Build evaluation frameworks and tooling that let every team across Harvey iterate on AI quality effectively.
Partner closely with product engineering teams, PMs, and design to launch cutting-edge AI products.
Evaluate, prototype, and integrate the latest advancements in AI and agentic systems as they emerge.
What You Have
8+ years of experience building backend systems, with at least one year focused on AI/ML engineering and a track record of technical leadership across teams.
Experience building and shipping multi-model or multi-provider AI systems in production.
Familiarity with context management, session state, or memory systems in AI or distributed systems. You’ve thought about what the model sees and why it matters.
A track record of building internal platforms, SDKs, or shared infrastructure that other engineering teams actually adopted — and an understanding of why developer experience matters as much as raw capability.
Strong judgment about abstractions. Opinionated about good design but pragmatic about shipping incrementally.
Excitement about agentic AI and the infrastructure challenges of making autonomous systems reliable when the stakes are real.
A bias toward full ownership: you navigate ambiguity well and don’t wait for a roadmap to start solving problems.
Bonus: experience building evaluation frameworks, working with agent/function-calling architectures, familiarity with legal or other high-stakes professional services domains, or time at early-stage or hyper-growth startups where the underlying technology changes regularly.
Depending on your location, an Applicant Privacy Notice may apply to you. You can find all of our Applicant Privacy Notices [here].
Harvey is an equal opportunity employer and does not discriminate on the basis of race, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition, or any other basis protected by law. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made by emailing accommodations@harvey.ai
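As a purely illustrative aside, the "route to the right model for every task" capability this posting describes can be sketched as cost-aware selection over a model catalog. Everything below is hypothetical (the model names, context sizes, prices, and capability tags are invented for the example, not Harvey's actual system):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelSpec:
    name: str
    max_context: int          # largest context (in tokens) the model accepts
    cost_per_1k: float        # illustrative dollars per 1k tokens
    capabilities: frozenset   # e.g. {"extraction", "reasoning"}

# Hypothetical catalog; a real platform would load this from config.
CATALOG = [
    ModelSpec("small-fast", 16_000, 0.15, frozenset({"extraction"})),
    ModelSpec("large-reasoning", 200_000, 2.50,
              frozenset({"extraction", "reasoning"})),
]

def route(task_capability: str, context_tokens: int) -> ModelSpec:
    """Pick the cheapest model that fits the context and supports the task."""
    eligible = [
        m for m in CATALOG
        if context_tokens <= m.max_context and task_capability in m.capabilities
    ]
    if not eligible:
        raise ValueError("no model satisfies the request")
    return min(eligible, key=lambda m: m.cost_per_1k)

# A short extraction task goes to the cheap model; a long document
# overflows its context window and falls through to the larger one.
print(route("extraction", 8_000).name)    # → small-fast
print(route("extraction", 100_000).name)  # → large-reasoning
```

The point of centralizing this as platform infrastructure, as the posting argues, is that product teams express only the task's requirements while onboarding a new model becomes a one-line catalog change.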
2026-04-26 3:20
Software Engineer, Inference - Performance Optimization
OpenAI
5000+
$295,000 – $555,000
United States
Full-time
Remote
false
About the Team
Our team analyzes inference stack performance across the application, model, and fleet layers to identify bottlenecks and drive faster, cheaper inference. We combine systems profiling, benchmarking, and analysis to understand where time and cost are spent, then turn that understanding into performance optimizations and models that project performance and capacity needs for future launches.
About the Role
In this role, you will build higher-fidelity models of inference performance across the application, model, and fleet layers. You will build cost-to-serve estimates from microbenchmarks and create tools that help cross-functional teams reason about latency, capacity, utilization, and cost tradeoffs.
In this role, you will:
Build and refine performance models that translate microbenchmark results into cost-to-serve estimates.
Analyze inference workloads end to end across applications, models, and fleet infrastructure.
Enhance tooling to identify bottlenecks across layers for latency and throughput.
Partner with other teams to turn performance insights into concrete improvements and project how future changes affect inference.
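As a rough illustration of turning a microbenchmark into a cost-to-serve estimate, here is a minimal throughput-based sketch; the formula and every number in it are simplifying assumptions for the example, not OpenAI's actual performance models:

```python
def cost_to_serve(tokens_per_s_per_gpu: float,
                  gpu_hour_usd: float,
                  utilization: float) -> float:
    """Project dollars per million generated tokens from one microbenchmark.

    tokens_per_s_per_gpu: measured decode throughput of a single GPU
    gpu_hour_usd:         fully loaded cost of one GPU-hour in the fleet
    utilization:          fraction of fleet capacity actually serving traffic
    """
    tokens_per_gpu_hour = tokens_per_s_per_gpu * 3600 * utilization
    return gpu_hour_usd / tokens_per_gpu_hour * 1_000_000

# Example: 250 tok/s per GPU, $2.50 per GPU-hour, 40% average utilization.
print(round(cost_to_serve(250, 2.50, 0.40), 2))
```

Even this toy version makes the tradeoffs in the bullets above concrete: doubling benchmark throughput or doubling utilization halves cost-to-serve, which is why the role pairs profiling with capacity and utilization modeling.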
You might thrive in this role if you:
Enjoy reasoning from first principles about distributed systems, model inference, and hardware efficiency.
Are comfortable working across abstraction layers, from application behavior to kernels, accelerators, networking, and fleet scheduling.
Have deep expertise with performance profiling, benchmarking, analysis, and optimization.
Enjoy collaborating with engineering and research teams to improve real production systems.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates.
For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
2026-04-26 2:35
Software Engineer, Simulation
Intrinsic
201-500
$132,000 – $187,000
Singapore
Full-time
Remote
false
Intrinsic is an AI robotics group at Google aiming to reimagine the potential of industrial robotics. Our team believes that advances in AI, perception and simulation will redefine what’s possible for industrial robotics in the near future – with software and data at the core.
Our mission is to make industrial robotics intelligent, accessible, and usable for millions more businesses, entrepreneurs, and developers. We are a dynamic team of engineers, roboticists, designers, and technologists who are passionate about unlocking the creative and economic potential of industrial robotics.
Role
As a Robotics Application Engineer specializing in Intelligent Manufacturing Automation, you will collaborate closely with our automation group and industry partners to advance manufacturing automation by extending our capabilities and solutions. Your daily responsibilities will include developing and integrating robotics solutions: processes, software features, and state-of-the-art AI for manufacturing automation. You will work with the team leading the deployment of impactful robotic systems in production.
How your work moves the mission forward
Contribute to the technical development and integration of advanced robotic automation solutions for manufacturing automation, utilizing the Intrinsic platform, ROS and state-of-the-art AI capabilities.
Collaborate closely with our research and industry partners to successfully integrate AI and automation capabilities into factory settings.
Document designs, processes, and results, communicating effectively with internal technical teams and our partners.
Skills you will need to be successful
Bachelor’s degree or equivalent practical background in Robotics Engineering, Software Engineering, or a related technical field.
Foundational understanding of industrial robotics, motion control, and machine vision, with a strong desire to learn and contribute to their application in manufacturing environments.
Programming proficiency with C++ and/or Python.
Excellent collaboration skills, with the ability to work effectively with internal cross-functional teams and external technical partners during deployment and support.
Skills that will differentiate your candidacy
Foundational understanding of AI concepts for perception, robotics and automation, with a strong desire to learn and contribute to their application in manufacturing environments.
Experience with ROS 2 and its major frameworks.
Experience with technologies such as Kubernetes, gRPC, Protobuf, Bazel, microservice architectures, and/or real-time systems.
In addition to the salary range below, this full-time position is eligible for bonus + equity + benefits. Your recruiter will share more about the specific salary range + bonus + equity for your targeted location and role during the hiring process.
Salary Range: $132,000—$187,000 USD
At Intrinsic, we are proud to be an equal opportunity workplace. Employment at Intrinsic is based solely on a person's merit and qualifications directly related to professional competence. Intrinsic does not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), or any other basis protected by law. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. It is Intrinsic’s policy to comply with all applicable national, state and local laws pertaining to nondiscrimination and equal opportunity.
If you have a disability or special need that requires accommodation, please contact us at: candidate-support@intrinsic.ai.
2026-04-25 9:35
People Operations Lead
Fireworks AI
101-200
$170,000 – $240,000
United States
Full-time
Remote
false
About Us:
At Fireworks, we’re building the future of generative AI infrastructure. Our platform delivers the highest-quality models with the fastest and most scalable inference in the industry. We’ve been independently benchmarked as the leader in LLM inference speed and are driving cutting-edge innovation through projects like our own function calling and multimodal models. Fireworks is a Series C company valued at $4 billion and backed by top investors including Benchmark, Sequoia, Lightspeed, Index, and Evantic. We’re an ambitious, collaborative team of builders, founded by veterans of Meta PyTorch and Google Vertex AI.
The Role:
As an Applied Machine Learning Engineer, you will serve as a vital bridge between cutting-edge AI research and practical, real-world applications. Your work will focus on developing, fine-tuning, and operationalizing machine learning models that drive business value and enhance user experiences. This is a hands-on engineering role that combines deep technical expertise with a strong customer focus to deliver scalable AI solutions.
Key Responsibilities:
Customer Success: Collaborate directly with the GTM team (Account Executives and Solutions Architects) to ensure smooth integration and successful deployment of ML solutions.
Demo / Proof of Concept (PoC): Build and present compelling PoCs that demonstrate the capabilities of our AI technology.
Application Build: Design, develop, and deploy end-to-end AI-powered applications tailored to customer needs.
Platform Features / Bug Fixes: Contribute to the internal ML platform, including adding features and resolving issues.
New Model Enablements: Integrate and enable new machine learning models into the existing platform or client environments.
Performance Optimizations: Improve system performance, efficiency, and scalability of deployed models and applications.
Partnership Enablement: Work closely with partners to enable joint AI solutions and ensure seamless collaboration.
Minimum Qualifications:
Bachelor’s degree in Computer Science, Engineering, or a related technical field.
5+ years of experience in a software engineering role, with a strong preference for customer-facing roles.
Robust coding skills required, preferably with proficiency in Python.
Demonstrated ability to lead and execute complex technical projects with a focus on customer success.
Strong interpersonal and communication skills; ability to thrive in dynamic, cross-functional teams.
Preferred Qualifications:
Master’s degree in Computer Science, Engineering, or a related technical field.
Experience working in a startup or fast-paced environment.
Hands-on experience fine-tuning machine learning models, including supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF or RFT).
Solid understanding of generative AI, machine learning principles, and enterprise infrastructure.
Total compensation for this role also includes meaningful equity in a fast-growing startup, along with a competitive salary and comprehensive benefits package. Base salary is determined by a range of factors including individual qualifications, experience, skills, interview performance, market data, and work location. The listed salary range is intended as a guideline and may be adjusted.
Base Pay Range (Plus Equity): $170,000—$240,000 USD
Why Fireworks AI?
Solve Hard Problems: Tackle challenges at the forefront of AI infrastructure, from low-latency inference to scalable model serving.
Build What’s Next: Work with bleeding-edge technology that impacts how businesses and developers harness AI globally.
Ownership & Impact: Join a fast-growing, passionate team where your work directly shapes the future of AI—no bureaucracy, just results.
Learn from the Best: Collaborate with world-class engineers and AI researchers who thrive on curiosity and innovation.
Fireworks AI is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all innovators.
2026-04-25 7:35
Machine Learning Research, RF Foundation Models Specialist
Distributed Spectrum
11-50
$200,000 – $300,000
United States
Full-time
Remote
false
DS creates systems that power the next generation of radio spectrum intelligence. We collect radio data from all over the world, train neural networks to decipher it, and run them on the smallest chips we can. We’re solving a new, technically hard problem where nothing from other fields works out of the box, and along the way, we’ve built our own stack from scratch, including entirely new embedding model architectures, custom GPU kernels, and much more.
Joining DS means owning major parts of a fast-growing AI research organization, joining a collaborative, talent-dense team with decades of experience in probabilistic ML, accelerated computing, embedded systems, and signal theory, and growing your career in the areas that interest you. You’ll fit in if you want to come to work for the problem itself and don’t want to choose between technical rigor, business value, and real-world impact. We work with high ownership and trust, and we do it together in the office 5 days/week.
About the Role
Some domains already have standard ML playbooks. RF is not one of them.
Distributed Spectrum is building AI-enabled sensing systems for the radio domain, and we are hiring a Machine Learning Researcher, Specialist to bring modern ML to a problem space where representation, structure, physics, runtime constraints, and deployment realities all matter at once.
This role is designed for a strong generalist researcher who wants genuinely open technical terrain. You will work on problems where signal structure, propagation effects, interference, sparse visibility, and edge deployment constraints all shape what "good" looks like. The job is not just to improve accuracy. It is to formulate the right problem, find the right modeling approach, and get that capability into systems that are used in the field.
You will work across the lifecycle of research and deployment: data and evaluation design, experimentation, model development, release readiness, and iteration based on real-world outcomes. You will collaborate closely with embedded, hardware, and mission teammates, and your work will directly influence how Distributed Spectrum builds machine learning capability as the company scales.
What You'll Do
Formulate new ML problems in RF sensing and spectrum understanding
Design experiments and evaluation approaches that reflect real operating conditions, including domain shift, changing interference, and varying sensors and platforms
Build models for structured, noisy, and partially observed signal environments
Improve robustness across propagation, interference, and low-visibility waveform conditions
Optimize models for throughput, latency, and deployment constraints
Move promising research into a release path for real systems through proofs-of-concept, realistic validation, and conversion into maintainable, deployable code
Use field performance to inform the next generation of models and tooling
What We're Looking For
Deep mathematical and modeling fundamentals
Strong hands-on experience with modern ML frameworks and experimental practice
Ability to work in domains where problem formulation is as important as implementation
Strong instincts for signal-rich, structured, non-generic data
Comfort operating with ambiguity and changing requirements
Clear technical communication and cross-functional collaboration
Nice-to-Haves
Background in RF or signal-centric ML (spectrum sensing, modulation recognition, or related work) is welcome but not required; we are equally interested in researchers from adjacent domains who have demonstrated strong reasoning on hard signal or sensing problems
Experience building for constrained inference (quantization, kernel-level optimizations, or similar)
Evidence of research impact: publications, open-source implementations, or prior work building new architectures that shipped
Who Thrives at Distributed Spectrum
Fast learners over specific backgrounds – We care more about how quickly you can pick up new skills than where you’ve worked before.
Intellectual honesty – The right answer matters more than being right. You challenge assumptions, test ideas, and pivot when needed.
Adaptability – We’re organized, but sometimes things change quickly. You find a way to make it work and balance short-term deliverables with long-term goals.
Ownership of outcomes – You optimize your own time, focus on what matters to deliver quickly, and cut out inefficiencies.
Not building in a vacuum – You stay connected to the rest of our teams and our customers to make sure all the pieces fit together.
What We Offer
Above-market salary, equity, and benefits package
Early Series A equity
Excellent health, dental, and vision coverage
401(k) match – up to 4% of your salary
Flexible PTO
Daily office lunches in NYC
ITAR Requirements
To conform to U.S. Government technology export regulations, including the International Traffic in Arms Regulations (ITAR) you must be a U.S. citizen, lawful permanent resident of the U.S., protected individual as defined by 8 U.S.C. 1324b(a)(3), or eligible to obtain the required authorizations from the U.S. Department of State. Learn more about the ITAR here.
2026-04-25 6:51
Staff Software Engineer
Haydenai
101-200
$230,522 – $299,679
United States
Full-time
Remote
false
About Us
At Hayden AI, we are on a mission to harness the power of computer vision to transform the way transit systems and other government agencies address real-world challenges. From bus lane and bus stop enforcement to transportation optimization technologies and beyond, our innovative mobile perception system empowers our clients to accelerate transit, enhance street safety, and drive toward a sustainable future.
What the job involves
As a Staff Software Engineer on the Perception team, you will be a key technical leader, defining and driving the long-term vision and architecture for our perception systems in forthcoming pilots, directly influencing Hayden’s mission and roadmap for business expansion.
This role requires deep expertise in computer vision and/or robotics algorithms that are deployed on the edge/cloud. You will be responsible for setting the technical direction and standards for the team, architecting complex, scalable, and robust end-to-end perception and robotics systems for deployment on real-world hardware, and ensuring their successful integration into Hayden’s core product platform.
This is a C++ software engineering position demanding both hands-on technical mastery and significant architectural and systems leadership. You will operate with complete technical ownership over major system components, mentoring junior and senior engineers, driving complex cross-functional initiatives, and making critical trade-off decisions that balance bleeding-edge innovation, system reliability, performance, and long-term maintainability across the entire Perception stack.
Responsibilities:
Spearhead the architectural design, implementation, and long-term ownership of next-generation perception systems, ensuring seamless transition from research prototypes to robust, scalable production solutions.
Champion best practices in software development, delivering high-performance, meticulously tested, and maintainable C++ code optimized for heterogeneous edge and robotics computing platforms.
Architect and optimize high-throughput, real-time perception pipelines, setting the technical direction for advanced techniques in object detection, tracking, and sophisticated sensor fusion.
Drive the strategic selection, adaptation, and integration of state-of-the-art Machine Learning (ML) and Computer Vision (CV) models, including the development of novel, proprietary models tailored to complex, large-scale Hayden-specific problems.
Provide technical leadership in highly ambiguous and complex problem domains, defining the technical roadmap and consistently striking the optimal balance between rapid iteration for R&D and the rigorous requirements for productization and scale.
Act as a key technical liaison, collaborating strategically with Product leadership and setting technical standards across cross-functional Engineering organizations.
Define and significantly contribute to foundational shared infrastructure, tooling, and architectural patterns, scaling pilot initiatives into core, mission-critical product capabilities.
Required Qualifications:
Advanced degree (MS or PhD) in Computer Science, Electrical Engineering, Robotics, or a related field.
10+ years of relevant experience in building and deploying perception systems; experience in automotive or robotics domains is a plus.
Deep expertise in one or more of: robotics, state estimation, computer vision, or applied machine learning.
Extensive experience leading large, complex, production-grade systems end-to-end.
Expert-level proficiency in modern C++, including experience with real-time and/or embedded systems.
Proven track record of technical leadership and architectural ownership across multiple projects or teams.
Experience scaling systems from prototype to production in ambiguous, fast-moving environments.
Strong ability to influence without authority and drive alignment across teams.
Demonstrated mentorship and ability to raise the bar for engineering quality across an organization.
2026-04-25 5:05
Manager, Forward Deployed Engineering - Munich
OpenAI
5000+
Germany
Full-time
Remote
false
About the Team
OpenAI’s Forward Deployed Engineering team partners with customers to turn research breakthroughs into production systems. We operate at the intersection of customer delivery and core platform development.
About the Role
As an FDE manager, you’ll lead FDEs through high-stakes, ambiguous customer deployments and own technical and business value outcomes end to end. You’ll grow a team that can operate under pressure and help OpenAI learn from the field.
You’ll partner closely with Product, Research, Sales, and GTM to ensure fieldwork informs roadmap priorities, drives new exploration, and supports safe deployment at scale. Your decisions will influence how OpenAI is trusted by the customers closest to our deployment work. Your success will be measured by how consistently your team ships, how clearly you deliver signal to Research and Product, and how durable your team and delivery model prove to be.
This role is based in Munich. We use a hybrid work model of 3 days in the office per week. We offer relocation assistance. This role will also require travel up to 25%.
In this role you will:
Lead and grow a team of FDEs delivering production systems with frontier models.
Own end-to-end delivery outcomes through clarity, speed, tight coordination, and technical quality.
Codify what works into tools, playbooks, and roadmap inputs that create leverage for both OpenAI and our wider developer community.
Notice early indicators and raise them with urgency, whether in product behavior, customer environments, or delivery practices.
Use judgment to distinguish what requires action and what does not.
Set a high bar for FDE performance and support each person’s growth through direct, actionable feedback.
Define how we staff and support field teams that can scale without added complexity.
You might thrive in this role if you:
Bring 8+ years of engineering or technical delivery experience, including 2+ years managing high-performing FDEs or customer-facing engineers.
Have led high-pressure technical projects from prototype to production.
Write and review production-grade code across frontend and backend using JavaScript or Python.
Simplify complex work and make fast, sound decisions under pressure.
Elevate team performance through clarity, not process.
Operate with urgency in ambiguous or evolving environments.
Translate field experience into sharp, actionable feedback for Product and Research.
Build deep trust with your team by modeling calm, focus, and judgment when it matters most.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates.
For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
2026-04-25 2:36
