The AI job market moves fast. We keep up so you don't have to.
Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.
New AI Opportunities
Technical Director of AI Safety
Faculty
501-1000
United Kingdom
Full-time
Remote
Why Faculty?
We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we've worked with over 350 global customers to transform their performance through human-centric AI. You can read about our real-world impact here.
We don't chase hype cycles. We innovate, build and deploy responsible AI which moves the needle, and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product and delivery expertise to our clients, who span government, finance, retail, energy, life sciences and defence.
Our business, and reputation, is growing fast and we're always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology. AI is an epoch-defining technology; join a company where you'll be empowered to envision its most powerful applications, and to make them happen.
About the Team
Faculty's Research team conducts critical red teaming and builds evaluations for misuse capabilities in sensitive areas, such as CBRN, cybersecurity and international security, for several leading frontier model developers and national safety institutes; notably, our work has been featured in OpenAI's system card for o1. Our commitment also extends to conducting fundamental technical research on mitigation strategies, with our findings published in peer-reviewed conferences and delivered to national security institutes.
Complementing this, we design evaluations for model developers across broader safety-relevant fields, including the societal impacts of increasingly capable frontier models, showcasing our expertise across the safety landscape.
About the role
This is a brand-new senior leadership role providing technical leadership of Faculty's work on AI safety for the Foundation Labs, and it presents a unique opportunity to shape how AI safety is done globally.
Faculty is one of the world's leading applied AI companies, helping many of the organisations that shape our world to adopt AI successfully and safely. We play an important role in the emerging AI safety ecosystem. We already have many of the key frontier labs as clients, including OpenAI and Anthropic, for whom we provide third-party red teaming, technical testing and other AI safety services. And we work with the UK government and other international governments on AI safety, including helping set up the AI Security Institute and delivering technical work which catalysed the first global AI Safety Summit at Bletchley Park in 2023. With the recent announcement of Faculty's acquisition by Accenture, we are investing to take our work on AI safety to global scale, and this role will be key to shaping that.
This will include:
The opportunity to hire and build a world-class AI safety technical team, of a calibre unmatched outside of the labs themselves
The opportunity to design and lead an AI safety R&D programme, creating the advances which will enable AI safety at scale to keep pace with model advances
The opportunity to build our work with the frontier labs to scale, helping to test and assure new frontier models ahead of public release
The opportunity to contribute to and shape the international debate on AI safety, including with governments and other key bodies, working closely with Marc Warner, Faculty's founder & CEO
This role will suit someone with a deep passion for and commitment to AI safety, and represents a unique opportunity to contribute to this agenda globally.
What you'll be doing:
Owning the technical strategy for AI safety by determining research directions and building technologies that mitigate risks from alignment to societal harms.
Leading a high-performing R&D team through intentional hiring, mentorship, and the cultivation of a culture defined by technical excellence and high output.
Driving academic impact by guiding complex machine learning projects and securing top-tier publications that cement Faculty's reputation in the safety domain.
Shaping market-leading offerings for frontier labs and security institutes, translating cutting-edge R&D into practical, groundbreaking safety solutions.
Overseeing technical delivery of AI safety and security projects, ensuring scientific rigour and high-quality outputs across evaluations and red teaming.
Representing Faculty externally as a primary technical voice, delivering influential thought leadership and speaking at major global industry events.
Collaborating cross-functionally with business unit directors and commercial teams to align research investment with strategic growth and client needs.
Who we're looking for:
You have a proven track record of designing and leading high-performing technical teams, with the ability to manage R&D budgets and mentor senior technical staff.
You bring deep expertise in AI safety research, specifically regarding alignment, interpretability, and robustness in large language models (LLMs) or safety-critical systems.
You possess a strong scientific background, evidenced by high-impact machine learning publications and a comprehensive understanding of transformer architectures.
You are a strategic visionary capable of setting research priorities that align with long-term organisational goals while remaining at the cutting edge of field developments.
You are a compelling communicator who can synthesise complex technical concepts into narratives that influence both C-suite executives and the broader research community.
You exhibit strong commercial acumen and stakeholder management skills, allowing you to navigate complex organisations and accelerate the delivery of high-value projects.
Interview Process
Talent Team Screen (45 mins)
Principles and Experience interview (60 mins)
Research Proposal (90 mins)
Leadership Interview (60 mins)
Meet with CEO (30 mins)
Our Recruitment Ethos
We aim to grow the best team, not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live. We're united by a deep intellectual curiosity and desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.
Some of our standout benefits:
Unlimited annual leave policy
Private healthcare and dental
Enhanced parental leave
Family-friendly flexibility & flexible working
Sanctus coaching
Hybrid working
If you don't feel you meet all the requirements but are excited by the role and know you bring some key strengths, please don't hesitate to apply; you might be right for this role, or other roles. We are open to conversations about part-time hours.
2026-03-19 9:16
Staff Applied AI Engineer - Pre-Sales
Snorkel AI
501-1000
$172,000 – $300,000
United States
Full-time
Remote
About Snorkel
At Snorkel, we believe meaningful AI doesn’t start with the model, it starts with the data.
We're on a mission to help enterprises transform expert knowledge into specialized AI at scale. The AI landscape has gone through incredible change from 2015, when Snorkel started as a research project in the Stanford AI Lab, to the generative AI breakthroughs of today. But one thing has remained constant: the data you use to build AI is the key to achieving differentiation, high performance, and production-ready systems. We work with some of the world's largest organizations to empower scientists, engineers, financial experts, product creators, journalists, and more to build custom AI with their data faster than ever before. Excited to help us redefine how AI is built? Apply to be the newest Snorkeler!
As an Applied AI Engineer, you'll research and utilize state-of-the-art Gen AI and machine learning (ML) techniques to successfully deliver solutions to our customers. You will work directly with our customers to understand their business and technical needs, and design and deliver AI solutions to solve them, either by leveraging Snorkel Flow or by developing custom approaches when needed. You will also help define Snorkel's Applied AI tooling by translating repeatable real-world challenges into reusable solution recipes, workflows, best practices, and platform-level capabilities that become part of Snorkel Flow's next generation of AI tooling. We move fast and are constantly prototyping and innovating new ways to deliver value to our customers. This position is ideal for someone who enjoys solving complex problems, bridging the gap between AI technology and business value, working directly with customers, keeping up to date with AI research, standardizing bespoke solutions into internal recipes, and staying naturally curious about the infrastructure that underpins the Applied AI stack end-to-end.
Main Responsibilities
Partner with customers to build and deploy impactful Gen AI and machine learning solutions, from use case scoping and data exploration to model development and deployment. This may involve leveraging Snorkel Flow or designing custom approaches using state-of-the-art tools, with the goal of delivering real business value and informing the evolution of the Snorkel platform.
Develop and implement state-of-the-art AI systems such as retrieval-augmented generation (RAG), fine-tuning pipelines, prompt engineering recipes, and agentic workflows.
Create augmented real-world datasets and comprehensive evaluation workflows to ensure model reliability, transparency, and stakeholder trust. A data- and evaluation-first mindset is essential for success in this role.
Forge and manage relationships with our customers’ leadership and stakeholders to ensure successful development and deployment of AI projects with Snorkel Flow.
Collaborate closely with pre-sales Solutions and Product teams to map customer needs to existing capabilities, prioritize roadmap gaps, and guide successful project setup.
Work with other Applied AI Engineers to standardize solutions and contribute to internal tooling and best practices.
Lead stakeholder education on quantitative capabilities, helping them to understand the strengths and weaknesses of different approaches and what problems are best-suited for Snorkel AI.
Serve as the voice of our customers for new AI paradigms and data science workflows, and relay customer feedback to product teams.
Conduct one-to-few and one-to-many enablement workshops to transfer knowledge to customers considering or already using Snorkel AI.
Annual travel up to 25%.
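The RAG responsibility above can be sketched in a few lines. Everything in this example (the toy corpus, the bag-of-words `embed` helper, and the prompt template) is an illustrative stand-in, not Snorkel Flow's API; a production system would use dense embeddings, a vector store, and an actual LLM call.

```python
from collections import Counter
import math

# Hypothetical toy corpus standing in for a real document store.
docs = {
    "doc1": "Snorkel Flow supports programmatic data labeling",
    "doc2": "Retrieval augmented generation grounds LLM answers in documents",
    "doc3": "Fine tuning adapts a base model to a domain",
}

def embed(text):
    # Bag-of-words stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank documents by similarity to the query; return the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(docs[d])), reverse=True)
    return ranked[:k]

def build_prompt(query):
    # Stuff the retrieved context into the prompt that would go to an LLM.
    context = "\n".join(docs[d] for d in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

The two-stage shape (retrieve, then generate with the retrieved context in the prompt) is the core of the pattern; the evaluation workflows the listing mentions would measure both retrieval hit rate and answer quality.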
Preferred Qualifications
B.S. degree in a quantitative field such as Computer Science, Engineering, Mathematics, Statistics, or comparable degree/experience.
3+ years of customer-facing experience in the design and implementation of AI/ML solutions.
Proficiency in Python, including strong grounding in software engineering fundamentals (e.g., modular design, testing, profiling, packaging) and experience with modern Python constructs and libraries for type validation and typed data modeling (e.g., pydantic), building type-safe systems (e.g., mypy), testing (e.g., pytest), packaging and environment configuration (e.g., poetry), API and service frameworks (e.g., FastAPI), serialization and structured data handling (e.g., msgspec), and orchestration tooling relevant to ML deployment (e.g., Ray, Airflow).
Expertise across the Applied AI stack, spanning classical ML libraries (e.g., scikit-learn), deep learning frameworks (e.g., PyTorch), foundation-model ecosystems (e.g., Hugging Face Transformers), vector/embedding tooling (e.g., FAISS), data processing frameworks (e.g., pandas, Spark), retrieval/RAG tooling (e.g., Chroma, Weaviate), synthetic dataset curation, evaluation workflows, and LLM orchestration, workflow, agent authoring tools (e.g., LlamaIndex, LangGraph, CrewAI).
Experience leading strategic, customer-facing initiatives and collaborating with business stakeholders to ensure ML solutions drive successful business outcomes, with a strong focus on teaching and enablement.
Outstanding presentation skills to technical and executive audiences, whether impromptu on a whiteboard or using presentations and demos.
Ability to work in a fast-paced environment and balance priorities across multiple projects at once.
Compensation range for Tier 1 locations (San Francisco Bay Area): $172K – $300K OTE. All offers also include equity in the form of employee stock options. Our compensation ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training.
Locations
Redwood City, CA - Hybrid; San Francisco, CA - Hybrid - US; New York, NY - Hybrid
Salary Range: $172,000 – $300,000 USD
Be Your Best at Snorkel
Joining Snorkel AI means becoming part of a company that has market proven solutions, robust funding, and is scaling rapidly—offering a unique combination of stability and the excitement of high growth. As a member of our team, you’ll have meaningful opportunities to shape priorities and initiatives, influence key strategic decisions, and directly impact our ongoing success. Whether you’re looking to deepen your technical expertise, explore leadership opportunities, or learn new skills across multiple functions, you’re fully supported in building your career in an environment designed for growth, learning, and shared success.
Snorkel AI is proud to be an Equal Employment Opportunity employer and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. Snorkel AI embraces diversity and provides equal employment opportunities to all employees and applicants for employment. Snorkel AI prohibits discrimination and harassment of any type on the basis of race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local law. All employment is decided on the basis of qualifications, performance, merit, and business need.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
2026-03-19 8:46
Expansion Account Executive
Arize AI
101-200
Argentina
Full-time
Remote
About Arize
AI is rapidly transforming the world. As generative AI reshapes industries, teams need powerful ways to monitor, troubleshoot, and optimize their AI systems. That’s where we come in. Arize AI is the leading AI & Agent Engineering observability and evaluation platform, empowering AI engineers to ship high-performing, reliable agents and applications. From first prototype to production scale, Arize AX unifies build, test, and run in a single workspace—so teams can ship faster with confidence.
We're a Series C company backed by top-tier investors, with over $135M in funding and a rapidly growing customer base of 150+ leading enterprises and Fortune 500 companies. Customers like Booking.com, Uber, Siemens, and PepsiCo leverage Arize to deliver AI that works.
Note: The nature of this role requires candidates to be based in the Buenos Aires area, though there isn't an in-office requirement.
The Opportunity
We’re looking for an Application Engineer who thrives on solving hard problems with code. In this role, you'll have the opportunity to work at the cutting edge of generative AI in a high-impact role with autonomy and ownership.
What You’ll Do
Debug and fix issues in our platform (and ship PRs with your fixes).
Build internal tools and copilots powered by generative AI to supercharge our team.
Rapidly prototype proof-of-concepts for customer use cases.
Work across Engineering, Product, and Solutions to unblock customers and push the boundaries of AI adoption.
What We’re Looking For
You have 2–5 years of experience in software engineering.
Strong in Python and Golang; comfortable shipping fixes in production systems.
Hands-on with generative AI (LLM APIs, frameworks, building copilots or automations).
Hands-on with OpenTelemetry and deep familiarity with distributed tracing concepts.
Familiarity with AI frameworks (CrewAI, LangChain, LangGraph, Dify, LiteLLM, etc.).
Familiarity with, or eagerness to learn, JavaScript/TypeScript.
Great debugger, creative problem solver, and fast learner.
Independent and resourceful. You create solutions, not dependencies.
Bonus Points (but not required!)
Experience in a customer-facing role
Built copilots, plugins, or custom GenAI-powered applications.
Open-sourced or contributed PRs to real codebases.
Startup or fast-moving environment experience.
Actual compensation is determined based upon a variety of job-related factors that may include: transferable work experience, skill sets, and qualifications. Total compensation also includes unlimited paid time off, a generous parental leave plan, and other benefits for mental health and wellness support.
More About Arize
Arize’s mission is to make the world’s AI work—and work for people.
Our founders came together through a shared frustration: while investments in AI are growing rapidly across every industry, organizations face a critical challenge—understanding whether AI is performing and how to improve it at scale.
Learn more about what we're doing here:
https://techcrunch.com/2025/02/20/arize-ai-hopes-it-has-first-mover-advantage-in-ai-observability/
https://arize.com/blog/arize-ai-raises-70m-series-c-to-build-the-gold-standard-for-ai-evaluation-observability/
Diversity & Inclusion @ Arize
Our company's mission is to make AI work, and make it work for people. We hope to make an industry-wide impact on bias, and that's a big motivator for people who work here. We actively encourage individuals to contribute to a good culture:
Regularly have chats with industry experts, researchers, and ethicists across the ecosystem to advance the use of responsible AI
Culturally conscious events such as LGBTQ trivia during pride month
We have an active Lady Arizers subgroup
2026-03-19 8:17
DevOps Engineer, Infrastructure & Security
Scale AI
5000+
United States
Full-time
Remote
Role Overview
Scale’s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world. Our core work consists of:
Creating custom AI applications that will impact millions of citizens
Generating high-quality training data for national LLMs
Upskilling and advisory services to spread the impact of AI
As a Production AI Ops Lead, you will design and develop the production lifecycle of full-stack AI applications, while supporting end-to-end system reliability, real-time inference observability, sovereign data orchestration, high-security software integration, and the resilient cloud infrastructure required for our international government partners.
At Scale, we’re not just building AI solutions—we’re enabling the public sector to transform their operations and better serve citizens through cutting-edge technology. If you’re ready to shape the future of AI in the public sector and be a founding member of our team, we’d love to hear from you.
You will:
Own the production outcome: Take full accountability for the long-term performance and reliability of AI use cases deployed across international government agencies.
Ensure Full-Stack integrity: Oversee the end-to-end health of the platform, ensuring seamless integration between the AI core and all full-stack components, from APIs to UI, to maintain a responsive and production-ready environment.
Scale the feedback loop: Build automated systems to monitor model performance and data drift across geographically dispersed environments, ensuring the right levels of reliability.
Navigate global compliance: Manage the technical lifecycle within diverse regulatory frameworks.
Incident command: Lead the response for production issues in mission-critical environments, ensuring rapid resolution and building the guardrails to prevent them from happening again.
Bridge the gap: Translate deep technical performance metrics into clear insights for senior international government officials.
Drive product evolution: Partner with our Engineering and ML teams to ensure the lessons learned in the field directly influence the technical architecture and decisions of future use cases.
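Monitoring for data drift, as described in the responsibilities above, often starts with a simple distribution-shift statistic over model inputs or scores. Below is a sketch of the Population Stability Index (PSI), one common drift metric; the bin count and the ~0.2 alert threshold are conventional choices in the field, not values taken from this listing.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference (training-time)
    distribution and a live one; values above ~0.2 are commonly
    treated as a drift alert."""
    # Bin edges from reference quantiles, widened to catch outliers.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -1e12, 1e12
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 10_000)   # scores at deployment time
stable = rng.normal(0.0, 1.0, 10_000)      # live scores, no drift
drifted = rng.normal(1.0, 1.0, 10_000)     # live scores, mean shifted
```

An automated monitor would compute this per feature on a schedule and page the on-call engineer when the index crosses the threshold, which is the "build automated systems to monitor model performance and data drift" loop in miniature.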
Ideally, you have:
Experience: 6+ years in a high-impact technical role (SRE, FDE or MLOps) with experience in the public sector.
Global perspective: Familiarity with international government security standards and the complexities of deploying sovereign AI.
System architecture proficiency: Proven experience maintaining production-grade applications with a deep understanding of the full request lifecycle, connecting frontend/API layers to the backend and AI core.
Modern AI Stack expertise: Proficiency in coding and the modern AI infrastructure, including Kubernetes, vector databases, agentic development, and LLM observability tools.
Ownership: You treat every production deployment as your own. You race toward solving hard problems before the customer even sees them.
Reliability: You understand that in the public sector, a model failure may be a risk to public safety or privacy.
Customer communication: The ability to explain to a high-ranking official why the performance of the system has degraded and how we are fixing it.
PLEASE NOTE: Our policy requires a 90-day waiting period before reconsidering candidates for the same role. This allows us to ensure a fair and thorough evaluation of all applicants.
About Us:
At Scale, our mission is to develop reliable AI systems for the world's most important decisions. Our products provide the high-quality data and full-stack technologies that power the world's leading models, and help enterprises and governments build, deploy, and oversee AI applications that deliver real impact. We work closely with industry leaders like Meta, Cisco, DLA Piper, Mayo Clinic, Time Inc., the Government of Qatar, and U.S. government agencies including the Army and Air Force. We are expanding our team to accelerate the development of AI applications.
We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity or Veteran status.
We are committed to working with and providing reasonable accommodations to applicants with physical and mental disabilities. If you need assistance and/or a reasonable accommodation in the application or recruiting process due to a disability, please contact us at accommodations@scale.com. Please see the United States Department of Labor's Know Your Rights poster for additional information.
We comply with the United States Department of Labor's Pay Transparency provision.
PLEASE NOTE: We collect, retain and use personal data for our professional business purposes, including notifying you of job opportunities that may be of interest and sharing with our affiliates. We limit the personal data we collect to that which we believe is appropriate and necessary to manage applicants’ needs, provide our services, and comply with applicable laws. Any information we collect in connection with your application will be treated in accordance with our internal policies and programs designed to protect personal data. Please see our privacy policy for additional information.
2026-03-19 8:16
Machine Learning and State Estimation Intern
Harmattan AI
51-100
Switzerland
Intern
Remote
About Us
Harmattan AI is a next-generation defense prime building autonomous and scalable defense systems. Following the close of a $200M Series B, valuing the company at $1.4 billion, we are expanding our teams and capabilities to deliver mission-critical systems to allied forces.
Our work is guided by clear values: building technologies with real-world impact, pursuing excellence in everything we do, setting ambitious goals, and taking on the hardest technical challenges. We operate in a demanding environment where rigor, ownership, and execution are expected.
About the Role
We are developing advanced autonomous systems that rely on robust state estimation and sensor fusion to operate in complex, dynamic environments. Our platforms integrate multiple sensors (e.g., IMU, GNSS, vision, barometer, magnetometer) and require accurate, real-time estimation of system states (position, velocity, attitude, etc.).
Classical approaches such as Kalman filtering are powerful but rely on modeling assumptions that often break down in real-world conditions. To push performance beyond these limits, we are exploring hybrid approaches that combine model-based estimation and control with modern machine learning techniques.
Your mission
The goal of this internship is to explore and apply machine learning-based sensor fusion and state estimation methods to improve performance in dynamic environments.
Responsibilities
Literature review: Conduct a comprehensive review of existing ML methods for state estimation and sensor fusion.
Algorithm implementation: Develop and implement various algorithms based on the literature review and project requirements, using simulated and real-world flight data.
Performance evaluation: Assess and compare the performance and computational overhead of the developed algorithms against classical baselines.
Documentation: Document all work performed, including methodologies, results, and conclusions.
Flight test participation: Actively participate in flight test sessions to gather real-world data and validate the effectiveness of the developed algorithms in operational conditions. Contribute to real-time deployment.
Candidate Requirements
Educational background: A strong academic record in applied mathematics (especially machine learning). Knowledge of sensor fusion/state estimation is a strong plus.
Technical skills: Strong understanding of ML fundamentals. Experience with state estimation, drones, or control theory is a major plus.
Mindset: You are curious to learn, autonomous, and able to take initiative.
We look forward to hearing how you can help shape the future of autonomous defense systems at Harmattan AI.
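To make the classical baseline mentioned above concrete, here is a minimal one-dimensional Kalman filter for a constant-state model. The noise parameters and the scalar state are illustrative only; a real flight estimator fuses multiple sensors over a full state vector (position, velocity, attitude) with an Extended Kalman Filter or similar, and the ML-based methods in this internship would aim to replace or augment exactly these modeling assumptions.

```python
import numpy as np

def kalman_1d(zs, q=1e-3, r=0.1):
    """Minimal 1-D Kalman filter: constant-state model with
    process-noise variance q and measurement-noise variance r."""
    x, p = 0.0, 1.0          # initial state estimate and its variance
    estimates = []
    for z in zs:
        p = p + q            # predict: uncertainty grows by process noise
        k = p / (p + r)      # Kalman gain balances model vs. measurement
        x = x + k * (z - x)  # update with the measurement residual
        p = (1.0 - k) * p    # posterior variance shrinks after the update
        estimates.append(x)
    return estimates

# Noisy measurements of a constant true value (5.0), e.g. a held altitude.
rng = np.random.default_rng(0)
zs = 5.0 + 0.3 * rng.standard_normal(200)
est = kalman_1d(zs)
```

The "modeling assumptions" the listing refers to are visible here: fixed `q` and `r`, linear dynamics, and Gaussian noise. Hybrid ML approaches typically learn these quantities, or the whole update, from flight data.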
2026-03-19 8:16
Product Manager, Agent Harness & Modelling
Cohere
501-1000
Canada
Full-time
Remote
false
Who are we?
Our mission is to scale intelligence to serve humanity. We're training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.
We obsess over what we build. Each one of us is responsible for contributing to increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what's best for our customers.
Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.
Join us on our mission and shape the future!
About Cohere and North
Cohere is revolutionizing enterprise AI with North, an agentic AI platform designed to securely deploy AI agents and automations within organizations' infrastructure. North empowers employees to streamline workflows, automate repetitive tasks, and unlock actionable insights while ensuring data privacy and compliance. North combines cutting-edge generative and search models with customizable integrations to drive productivity and innovation at scale.
Role Overview
We are seeking an Agent Harness Product Manager to own the execution layer that makes North agents reliable, capable, and production-ready. This is a role that sits at the intersection of three domains:
Agent loop and execution: Own the core agent runtime: tool orchestration, parallel execution, sub-agent delegation, sandboxed code execution, and failure recovery. You will define how North agents plan and act across long, multi-step workflows and ensure the execution environment is robust enough for the most demanding enterprise tasks. You are expected to engage at the implementation level, contributing to architecture decisions alongside engineering rather than simply handing off requirements.
Context engineering: Own how our agents manage the context window as a deliberately controlled resource. This includes progressive disclosure of tools and skills, context compaction and summarization, offloading of large payloads to a persistent filesystem, and the instrumentation that keeps agents oriented across extended trajectories.
Model-scaffolding co-evolution: Own the feedback loop between North's harness and the Modeling team. This PM is the connective tissue that makes that possible: ensuring harness design decisions are validated by Modeling before they are built, that evals are the shared bridge between both teams, and that as the harness evolves the model evolves with it.
Responsibilities
Define and own the roadmap for North's agent harness, including the agent loop, context engineering layer, tool orchestration, sandbox execution, and sub-agent delegation.
Serve as the primary interface between North engineering and Cohere's Modeling team, ensuring new harness capabilities are validated before being built and that neither team paints itself into a corner.
Own North's agentic evaluation framework, ensuring evals are compatible with both the North harness and Modeling's training infrastructure, and that they serve as a reliable bridge between product and research.
Engage enterprise customers to surface real-world agentic failures and translate findings into concrete product and model requirements.
Stay current with the open-source and commercial agent ecosystem and drive adoption decisions that keep North's architecture aligned with emerging standards.
Requirements
5+ years of product management experience in agentic AI systems, developer infrastructure, or applied ML products.
Deep understanding of modern LLM agent architectures, including multi-agent systems, tool-augmented reasoning, memory and retrieval, programmatic orchestration, RAG, and long-horizon execution.
Strong grasp of agentic evaluation design, including how to measure task completion, failure recovery, and long-horizon reliability, and how to diagnose model vs. scaffolding gaps.
Technically deep enough to contribute to architecture decisions at the implementation level: comfortable reviewing and shaping design docs, reasoning about async execution patterns, sandboxed environments, filesystem design, and the tradeoffs that come with building harness capabilities into a production platform.
Ability to flex between ML research conversations and engineering architecture discussions with equal fluency.
Track record of shipping platform-layer products with demonstrated impact on reliability, performance, or capability.
Nice-to-Haves
An active practitioner of agent frameworks who regularly builds with and follows the latest developments in open-source harnesses, coding agents, and orchestration tools in both professional and personal work.
Hands-on experience with enterprise agentic deployments: multi-tenant orchestration, tool permissioning, audit trails, and compliance requirements.
Familiarity with infrastructure constraints relevant to enterprise deployments: on-premises environments, scalability challenges, and the operational tradeoffs of running complex agent workloads in restricted or air-gapped settings.
Prior work at the intersection of research and product, translating nascent model capabilities into shipped product features.
Background working within or closely alongside an ML research or post-training team.
Why Join Cohere?
Impact: Shape how Canada's most important public institutions adopt and deploy frontier AI.
Innovation: Work alongside leading researchers and engineers solving complex ML challenges.
Growth: Competitive compensation, equity options, and opportunities for professional development.
Flexibility: Hybrid work model with offices in key global locations (Toronto, Montreal, New York, San Francisco, London, Paris, and Korea).
If some of the above doesn't line up perfectly with your experience, we still encourage you to apply! We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.
Full-time employees at Cohere enjoy these perks:
🤝 An open and inclusive culture and work environment
🧑💻 Work closely with a team on the cutting edge of AI research
🍽 Weekly lunch stipend, in-office lunches & snacks
🦷 Full health and dental benefits, including a separate budget to take care of your mental health
🐣 100% parental leave top-up for up to 6 months
🎨 Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
🏙 Remote-flexible, offices in Toronto, New York, San Francisco, London and Paris, as well as a co-working stipend
✈️ 6 weeks of vacation (30 working days!)
2026-03-19 8:16
Field Engineering Manager, Public Sector
Scale AI
5000+
United States
Full-time
Remote
false
Role Overview
Scale’s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world. Our core work consists of:
Creating custom AI applications that will impact millions of citizens
Generating high-quality training data for national LLMs
Upskilling and advisory services to spread the impact of AI
As a Production AI Ops Lead, you will design and manage the production lifecycle of full-stack AI applications, supporting end-to-end system reliability, real-time inference observability, sovereign data orchestration, high-security software integration, and the resilient cloud infrastructure required for our international government partners.
At Scale, we’re not just building AI solutions—we’re enabling the public sector to transform their operations and better serve citizens through cutting-edge technology. If you’re ready to shape the future of AI in the public sector and be a founding member of our team, we’d love to hear from you.
You will:
Own the production outcome: Take full accountability for the long-term performance and reliability of AI use cases deployed across international government agencies.
Ensure Full-Stack integrity: Oversee the end-to-end health of the platform, ensuring seamless integration between the AI core and all full-stack components, from APIs to UI, to maintain a responsive and production-ready environment.
Scale the feedback loop: Build automated systems to monitor model performance and data drift across geographically dispersed environments, ensuring the right levels of reliability.
Navigate global compliance: Manage the technical lifecycle within diverse regulatory frameworks.
Incident command: Lead the response for production issues in mission-critical environments, ensuring rapid resolution and building the guardrails to prevent them from happening again.
Bridge the gap: Translate deep technical performance metrics into clear insights for senior international government officials.
Drive product evolution: Partner with our Engineering and ML teams to ensure the lessons learned in the field directly influence the technical architecture and decisions of future use cases.
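The monitoring responsibilities above, tracking model performance and data drift across dispersed environments, are often built on simple statistical checks. As an illustrative sketch only, not Scale's actual tooling, here is a minimal data-drift check using the population stability index (PSI); all names, parameters, and thresholds are assumptions:

```python
import numpy as np

def psi(baseline, production, bins=10):
    """Population Stability Index between two samples of one feature."""
    # Interior cut points taken from the baseline distribution's quantiles
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))[1:-1]
    # searchsorted maps each value to one of `bins` buckets; out-of-range
    # production values land in the first or last bucket automatically
    b_counts = np.bincount(np.searchsorted(edges, baseline), minlength=bins)
    p_counts = np.bincount(np.searchsorted(edges, production), minlength=bins)
    # Floor the fractions to avoid log(0) in empty buckets
    b_frac = np.clip(b_counts / len(baseline), 1e-6, None)
    p_frac = np.clip(p_counts / len(production), 1e-6, None)
    return float(np.sum((p_frac - b_frac) * np.log(p_frac / b_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # training-time feature sample
same = rng.normal(0.0, 1.0, 10_000)       # production, no drift
shifted = rng.normal(0.8, 1.0, 10_000)    # production, mean shift

assert psi(baseline, same) < 0.1      # a common "no drift" threshold
assert psi(baseline, shifted) > 0.25  # a common "significant drift" threshold
```

In a deployed system the baseline histogram would be computed once and shipped with the model, so each environment only needs local production counts to raise an alert.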
Ideally, you have:
Experience: 6+ years in a high-impact technical role (SRE, FDE or MLOps) with experience in the public sector.
Global perspective: Familiarity with international government security standards and the complexities of deploying sovereign AI.
System architecture proficiency: Proven experience maintaining production-grade applications, with a deep understanding of the full request lifecycle, connecting frontend/API layers to the backend and AI core.
Modern AI Stack expertise: Proficiency in coding and the modern AI infrastructure, including Kubernetes, vector databases, agentic development, and LLM observability tools.
Ownership: You treat every production deployment as your own. You race toward solving hard problems before the customer even sees them.
Reliability: You understand that in the public sector, a model failure may be a risk to public safety or privacy.
Customer communication: The ability to explain to a high-ranking official why the performance of the system has degraded and how we are fixing it.
PLEASE NOTE: Our policy requires a 90-day waiting period before reconsidering candidates for the same role. This allows us to ensure a fair and thorough evaluation of all applicants.
About Us:
At Scale, our mission is to develop reliable AI systems for the world's most important decisions. Our products provide the high-quality data and full-stack technologies that power the world's leading models, and help enterprises and governments build, deploy, and oversee AI applications that deliver real impact. We work closely with industry leaders like Meta, Cisco, DLA Piper, Mayo Clinic, Time Inc., the Government of Qatar, and U.S. government agencies including the Army and Air Force. We are expanding our team to accelerate the development of AI applications.
We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity or Veteran status.
We are committed to working with and providing reasonable accommodations to applicants with physical and mental disabilities. If you need assistance and/or a reasonable accommodation in the application or recruiting process due to a disability, please contact us at accommodations@scale.com. Please see the United States Department of Labor's Know Your Rights poster for additional information.
We comply with the United States Department of Labor's Pay Transparency provision.
PLEASE NOTE: We collect, retain and use personal data for our professional business purposes, including notifying you of job opportunities that may be of interest and sharing with our affiliates. We limit the personal data we collect to that which we believe is appropriate and necessary to manage applicants’ needs, provide our services, and comply with applicable laws. Any information we collect in connection with your application will be treated in accordance with our internal policies and programs designed to protect personal data. Please see our privacy policy for additional information.
2026-03-19 8:01
Senior Cyber Security Engineer (AI Safety)
Faculty
501-1000
United Kingdom
Full-time
Remote
false
Why Faculty?
We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we’ve worked with over 350 global customers to transform their performance through human-centric AI. You can read about our real-world impact here.
We don’t chase hype cycles. We innovate, build and deploy responsible AI which moves the needle - and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product and delivery expertise to our clients, who span government, finance, retail, energy, life sciences and defence.
Our business, and reputation, is growing fast and we’re always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology.
AI is an epoch-defining technology. Join a company where you’ll be empowered to envision its most powerful applications, and to make them happen.
About the team
Our National Security and AI Safety business unit is dedicated to advancing the responsible development and deployment of AI in support of national security and global stability. From strengthening mission-critical capabilities across national security and intelligence, to working with frontier labs to provide robust AI safety red teaming and evaluation, we work at the frontier of high-stakes, high-impact missions.
We understand that powerful AI systems bring both transformative opportunities and complex risks and we are proud to partner with Government and the biggest tech organisations in the world to ensure AI is not just transformative but is also secure, trustworthy and safe for all.
Because of the nature of the work we do with our Government clients, you may need to be eligible for UK Developed Vetting (DV) and willing to work on site with our clients from time to time.
About the role
As a Senior Cyber Security Engineer, you will lead our security efforts across a number of key projects, bridging the gap between robust engineering and AI safety.
You will play a pivotal role in designing evaluations, test harnesses and "Capture the Flag" scenarios to test the limits of Frontier models. You'll also work on securing the agentic AI systems that we deploy into the heart of government.
If you are a talented engineer with a security mindset who thrives on solving complex, real-world problems, this is your opportunity to shape the future of AI safety.
What you’ll be doing:
Designing and building scaffolds to rigorously test the security and capabilities of frontier AI models and agentic systems.
Setting technical standards for AI Security across our consulting and AI safety business units, acting as the senior technical authority for deployed security practices.
Collaborating within cross-functional teams of machine learning engineers, data scientists, and designers to ensure security is woven into the fabric of the projects you deliver.
Automating security processes, vulnerability management, and secure development lifecycles to create resilient, scalable software.
Mentoring and guiding junior engineers and data scientists, fostering a culture of technical excellence and continuous security learning.
Who we’re looking for:
You are a solid engineer with a deep interest in security, possessing strong Python skills and experience working in deployed production systems. You bring a creative and curious mindset to offensive security, ideally with experience in CTF exercises or red-teaming scenarios.
You possess hands-on experience with cloud security tools (such as Security Hub, IAM, and WAF) and a firm understanding of identity management protocols like OAuth 2.0 and SAML.
You have a proven track record in vulnerability management and securing the application development lifecycle, including container scanning and automated testing.
You are a clear and persuasive communicator, comfortable acting as a technical advisor to clients and translating complex risks into actionable engineering tasks.
You thrive on autonomy and ownership, ready to step into a senior role where you will define best practices in a fast-paced, evolving domain.
Our Interview Process
Talent Team Screen (30 minutes)
Pair Programming Interview (90 minutes)
Cyber Security Experience Interview (60 minutes)
Commercial Interview (60 minutes)
Our Recruitment Ethos
We aim to grow the best team - not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live. We’re united by a deep intellectual curiosity and desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.
Some of our standout benefits:
Unlimited Annual Leave Policy
Private healthcare and dental
Enhanced parental leave
Family-Friendly Flexibility & Flexible working
Sanctus Coaching
Hybrid Working
If you don’t feel you meet all the requirements, but are excited by the role and know you bring some key strengths, please don't hesitate to apply, as you might be right for this role or other roles. We are open to conversations about part-time hours.
2026-03-18 11:32
Mechanical Engineer & Python Expert - Freelance AI Trainer
Mindrift
1001-5000
$33 / hour
Spain
Part-time
Remote
false
Please submit your CV in English and indicate your level of English proficiency.
Mindrift connects specialists with project-based AI opportunities for leading tech companies, focused on testing, evaluating, and improving AI systems. Participation is project-based, not permanent employment.
What this opportunity involves
While each project involves unique tasks, contributors may:
Design graduate- and industry-level mechanical engineering problems grounded in real practice.
Evaluate AI-generated solutions for correctness, assumptions, and engineering logic.
Validate analytical or numerical results using Python (NumPy, SciPy, Pandas).
Improve AI reasoning to align with first principles and accepted engineering standards.
Apply structured scoring criteria to assess multi-step problem solving.
What we look for
This opportunity is a good fit for mechanical engineers with experience in Python who are open to part-time, non-permanent projects. Ideally, contributors will have:
Degree in Mechanical Engineering or related fields, e.g. Thermodynamics, Fluid Mechanics, Mechanical Design, Computational Mechanics, etc.
3+ years of professional mechanical engineering experience
Strong written English (C1/C2)
Strong Python proficiency for numerical validation
Stable internet connection
Professional certifications (e.g., PE, CEng, PMP) and experience in international or applied projects are an advantage.
How it works
Apply → Pass qualification(s) → Join a project → Complete tasks → Get paid
Project time expectations
For this project, tasks are estimated to require around 10–20 hours per week during active phases, based on project requirements. This is an estimate, not a guaranteed workload, and applies only while the project is active.
Payment
Paid contributions, with rates up to $33/hour*
Fixed project rate or individual rates, depending on the project
Some projects include incentive payments
*Note: Rates vary based on expertise, skills assessment, location, project needs, and other factors. Higher rates may be offered to highly specialized experts. Lower rates may apply during onboarding or non-core project phases. Payment details are shared per project.
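The posting lists "validate analytical or numerical results using Python" among the tasks. As a hedged illustration of what that can look like (a sketch, not Mindrift's actual workflow, with all parameter values made up), the snippet below checks the textbook midspan deflection of a simply supported beam under uniform load, 5wL⁴/(384EI), against a numerical solution of the Euler-Bernoulli ODE using SciPy:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Illustrative parameters: 4 m steel beam, uniform load (all values made up)
L, E, I, w = 4.0, 200e9, 8.0e-6, 10_000.0   # m, Pa, m^4, N/m

def bending_moment(x):
    # Simply supported beam, uniform load: M(x) = w x (L - x) / 2
    return w * x * (L - x) / 2.0

def ode(x, y):
    # Euler-Bernoulli: EI v'' = -M(x), with deflection v positive downward
    return np.vstack([y[1], -bending_moment(x) / (E * I)])

def bc(ya, yb):
    # Pinned supports: v(0) = v(L) = 0
    return np.array([ya[0], yb[0]])

x = np.linspace(0.0, L, 101)
sol = solve_bvp(ode, bc, x, np.zeros((2, x.size)))

numeric = abs(sol.sol(L / 2)[0])             # midspan deflection from the ODE
analytic = 5 * w * L**4 / (384 * E * I)      # textbook closed form
assert abs(numeric - analytic) / analytic < 1e-3
```

This pattern, an independent numerical solution checked against the closed form, is the same shape as evaluating whether an AI-generated derivation lands on the right answer.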
2026-03-18 11:32
AI Software Engineer (Back End)
Maincode
11-50
Australia
Full-time
Remote
false
About the role
Maincode is training Matilda, a large language model built and trained from scratch in Australia. Our new compute cluster is now live, and we are scaling the next version and deploying it publicly.
This role sits inside the production system that serves Matilda. You will build and maintain the back end services that make the model usable in the real world: APIs, infrastructure, and the systems that turn a trained model into a reliable public capability.
We build AI systems end to end. We design the architectures, run the infrastructure, train the models, and operate the systems ourselves. Matilda is not a research prototype. It is a production system trained at scale and served publicly.
Maincode operates one of the largest private AI compute environments in Australia, built for training and operating our own models. You will be working directly on the systems that deploy and serve a model trained from scratch.
What you would actually do
You will build and maintain the services that sit between the model and the outside world. This includes work such as:
Building and maintaining services that handle model inference and user requests
Designing systems that manage requests, sessions, and streaming responses
Implementing reliability mechanisms such as rate limiting, retries, and graceful failure
Building authentication and access controls for public usage
Designing systems for logging, telemetry, and evaluation signals
Improving latency, throughput, and reliability of model serving
Integrating new model checkpoints into the production system
Working closely with training and infrastructure engineers to deploy and operate the model
Much of the work happens inside production systems: logs, traces, performance profiles, and deployment pipelines. The goal is not polish. The goal is a system that stays up, stays fast, and behaves predictably under load.
The kind of person who does well here
We are looking for engineers early in their careers who want to learn how production AI systems are actually built and operated. You may have one or two years of experience building production software. What matters most is curiosity, reliability, and the willingness to learn how large scale systems behave under real constraints.
People who tend to do well here:
Care about runtime behaviour and system reliability
Enjoy debugging real systems rather than writing theoretical code
Think clearly about system boundaries and failure modes
Stay calm and methodical when production behaves unexpectedly
Want to understand how large scale AI systems actually work
You do not need prior experience serving large language models. You do need the discipline to build systems that are hard to break.
How you would work
You will use code as a way of shaping a production system. You should be comfortable:
Building back end services in a modern language (Python is common here)
Working with APIs and service interfaces
Designing systems that remain stable under load
Reading logs and system metrics to understand behaviour
Collaborating closely with training, infrastructure, and product engineers
Speed matters, but so does rigour. Reliability is a feature.
What this role is not
It is not maintaining internal business software or conventional product back ends
It is not integrating third party AI services or building on top of external models
It is not primarily front end work or prompt engineering
It is not incremental feature work on mature systems
This role focuses on building and operating the systems that deploy and run a model we train ourselves, where the core problems are performance, scale, and reliability.
Why Maincode
Maincode builds AI systems end to end. We train the models, run the infrastructure, and operate the systems ourselves.
You will work with a small team that:
Builds the full AI stack rather than outsourcing it
Treats reliability and system design as core engineering problems
Values engineers who want to understand how systems actually work
Is building long term capability in training, deployment, and serving
If you want to work directly on the systems that deploy and operate a large language model trained from scratch, this role will put you inside that work.
Note
This is a full time role based in Melbourne, working closely with our in-person engineering and research team. At this time we are not able to offer visa sponsorship, so applicants must have existing and unrestricted work rights in Australia.
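The posting mentions reliability mechanisms such as rate limiting and graceful failure. A minimal token-bucket sketch of the idea (hypothetical, not Maincode's implementation): callers can burst up to a fixed capacity, tokens refill at a steady rate, and a request that finds the bucket empty is rejected rather than allowed to overload the model servers.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allows bursts up to `capacity`,
    refilling at `rate` tokens per second."""
    def __init__(self, rate: float, capacity: int, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False   # caller would return HTTP 429 or queue the request

# With an injected fake clock the behaviour is deterministic: a burst of 5
# is allowed, the 6th is rejected, and one more passes after a 1 s refill.
t = [0.0]
bucket = TokenBucket(rate=1.0, capacity=5, clock=lambda: t[0])
results = [bucket.allow() for _ in range(6)]
assert results == [True] * 5 + [False]
t[0] += 1.0
assert bucket.allow() is True
```

Injecting the clock is a small design choice that makes the limiter testable without sleeping, which matters when the surrounding system is exercised in CI.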
2026-03-18 11:32
Principal Growth Marketing Manager
Snorkel AI
501-1000
$172,000 – $300,000
United States
Full-time
Remote
false
About Snorkel
At Snorkel, we believe meaningful AI doesn’t start with the model, it starts with the data.
We’re on a mission to help enterprises transform expert knowledge into specialized AI at scale. The AI landscape has gone through incredible changes from 2015, when Snorkel started as a research project in the Stanford AI Lab, to the generative AI breakthroughs of today. But one thing has remained constant: the data you use to build AI is the key to achieving differentiation, high performance, and production-ready systems. We work with some of the world’s largest organizations to empower scientists, engineers, financial experts, product creators, journalists, and more to build custom AI with their data faster than ever before. Excited to help us redefine how AI is built? Apply to be the newest Snorkeler!
About the Role
Snorkel AI is hiring Frontier AI Solutions Engineers who will partner with leading AI labs on their most challenging data problems. This is a high-impact, customer-facing role that combines technical depth with strong presales instincts. You'll partner with customer research teams to design complex data and environments that improve frontier model performance, demonstrating Snorkel's capabilities through research-driven engagements.
You'll work at the critical intersection of research, technical strategy, and customer partnership. This includes scoping training data needs, designing RL environments and tasks, developing evaluation frameworks, probing model behavior and failure modes, and translating customer research objectives into actionable technical plans. You'll develop technical specifications, analyze frontier model failure modes, and serve as a thought partner to customer research teams throughout the sales cycle and into early delivery phases.
Main Responsibilities
Partner with frontier AI research labs to design datasets and environments that improve model performance
Lead technical conversations with customer researchers to understand model capabilities, failure modes, data requirements, and success criteria
Probe model behavior through systematic evaluation to uncover weaknesses and identify high-impact data interventions
Design evaluation frameworks, calibration processes, and quality rubrics that establish measurable project success metrics
Develop technical specifications for data projects that balance research rigor with operational feasibility
Serve as thought partner to customer research teams throughout the sales cycle, building trust and credibility
Stay current on frontier AI research, RL environment design, post-training techniques, and evaluation methodologies
Preferred Qualifications
Strong expertise in frontier AI concepts including LLMs, training data pipelines, evaluation methodologies, post-training techniques (RLHF, DPO, RLAIF), and domain areas such as coding agents, reasoning, multimodal models, or RL environments
Experience in applied ML research, data science, or research-intensive technical roles with customer-facing or collaborative research experience
Proficiency in Python and familiarity with ML frameworks and LLM APIs
Excellent communication skills — ability to deliver technical presentations and explain complex concepts to diverse audiences
Familiarity with data curation workflows, synthetic data generation, LLM-as-a-Judge, or evaluation framework design
Ability to work in a fast-moving environment, comfortable with ambiguity and rapid iteration
B.S. in Computer Science, Machine Learning, or related field with 4+ years of experience in AI/ML solutions engineering or technical customer-facing roles
Compensation range for Tier 1 locations of San Francisco Bay Area and New York City, $172K - $300K OTE. All offers also include equity in the form of employee stock options. Our compensation ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training.
Why Join Snorkel AI?
At Snorkel AI, we're building the future of data-centric AI. Our Expert Data-as-a-Service organization partners with world-class customers to solve some of the hardest data challenges — creating training and evaluation data that power the next generation of LLMs and AI systems. You'll work directly on projects that impact real production systems, while shaping how internal teams deliver faster, better, and more intelligently. This is a rare opportunity to own technical data workflows and be a founding member of the technical DaaS team.
#LI-CG1
Salary Range
$172,000 – $300,000 USD
Be Your Best at Snorkel
Joining Snorkel AI means becoming part of a company that has market proven solutions, robust funding, and is scaling rapidly—offering a unique combination of stability and the excitement of high growth. As a member of our team, you’ll have meaningful opportunities to shape priorities and initiatives, influence key strategic decisions, and directly impact our ongoing success. Whether you’re looking to deepen your technical expertise, explore leadership opportunities, or learn new skills across multiple functions, you’re fully supported in building your career in an environment designed for growth, learning, and shared success.
Snorkel AI is proud to be an Equal Employment Opportunity employer and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. Snorkel AI embraces diversity and provides equal employment opportunities to all employees and applicants for employment. Snorkel AI prohibits discrimination and harassment of any type on the basis of race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local law. All employment is decided on the basis of qualifications, performance, merit, and business need.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
2026-03-18 11:32
Chief Technology Officer
Bjak
201-500
United States
Full-time
Remote
false
About the Role
A1 is building a proactive AI system that carries work forward across conversations, tools, and time, enabling users to delegate ongoing tasks to AI agents that coordinate across software, data, and workflows.
We are looking for a leader who can think clearly about systems, make strong technical decisions, and help build the engineering organisation from the ground up. Be part of the founding team to shape the technical direction of the company, while helping build a strong engineering team across the globe.
What You'll Do
Technical Direction
Define the long-term architecture for A1’s AI systems, infrastructure, and developer platform
Evaluate trade-offs between speed of iteration and long-term system design
Ensure systems are designed for scalability, reliability, and long-term evolution
Guide key decisions across model integration, data pipelines, distributed systems, and product architecture
Engineering Leadership
Work with engineers to translate product direction into clear technical execution
Help structure engineering workstreams and keep teams aligned on priorities
Maintain high engineering standards while keeping the team focused on shipping
Establish engineering culture, development practices, and technical standards across the company
Building the Team
Build and scale a world-class engineering team across key talent hubs including China and the US
Identify strong technical leaders and help build a high-quality engineering organization
Define hiring standards and interview processes to maintain a high engineering bar
Coordination and Execution
Work closely with product, research, and leadership teams
Ensure technical workstreams move forward smoothly across teams and locations
Help resolve cross-team technical and execution challenges
What You Will Need
Strong technical foundation in system architecture, large-scale systems, and distributed architecture
Ability to reason clearly about complex systems and make pragmatic technical decisions
Experience building or leading high-performing engineering teams
Strong judgment on technical trade-offs and engineering priorities
Comfortable operating in early-stage environments with high ambiguity
Clear communication and ability to align teams
We are particularly interested in candidates who enjoy building teams, superior products, and shaping engineering organisations.
How We Work
We operate as a small, senior, hands-on team. Engineers own features end-to-end, from design discussion through production monitoring. Code reviews and design reviews are expected for all meaningful changes. We discuss architecture openly, make decisions quickly, and ship frequently.
Interview process
If there appears to be a fit, we'll reach out to schedule three, and no more than four, interviews. Applications are evaluated by our technical team members. Interviews will be conducted via virtual meetings and/or onsite. We value transparency and efficiency, so expect a prompt decision. If you've demonstrated the exceptional skills and mindset we're looking for, the process to offer may be shorter.
2026-03-18 11:31
Electrical Engineer & Python Expert - Freelance AI Trainer
Mindrift
1001-5000
$12 / hour
India
Part-time
Remote
false
Please submit your CV in English and indicate your level of English proficiency.
Mindrift connects specialists with project-based AI opportunities for leading tech companies, focused on testing, evaluating, and improving AI systems. Participation is project-based, not permanent employment.
What this opportunity involves
While each project involves unique tasks, contributors may:
Design rigorous electrical engineering problems reflecting professional practice.
Evaluate AI solutions for correctness, assumptions, and constraints.
Validate calculations or simulations using Python (NumPy, Pandas, SciPy).
Improve AI reasoning to align with industry-standard logic.
Apply structured scoring criteria to multi-step problems.
What we look for
This opportunity is a good fit for electrical engineers with experience in Python who are open to part-time, non-permanent projects. Ideally, contributors will have:
Degree in Electrical Engineering or related fields, e.g. Electronics, Microelectronics, Embedded Systems, Power Systems, etc.
3+ years of professional electrical engineering experience
Strong written English (C1/C2)
Strong Python proficiency for numerical validation
Stable internet connection
Professional certifications (e.g., PE, CEng, EUR ING, RPEQ) and experience in international or applied projects are an advantage.
How it works
Apply → Pass qualification(s) → Join a project → Complete tasks → Get paid
Project time expectations
For this project, tasks are estimated to require around 10–20 hours per week during active phases, based on project requirements. This is an estimate, not a guaranteed workload, and applies only while the project is active.
Payment
Paid contributions, with rates up to $12/hour*
Fixed project rate or individual rates, depending on the project
Some projects include incentive payments
*Note: Rates vary based on expertise, skills assessment, location, project needs, and other factors. Higher rates may be offered to highly specialized experts. Lower rates may apply during onboarding or non-core project phases. Payment details are shared per project.
2026-03-18 11:31
Legal Operations Analyst
Figure AI
201-500
$150,000 – $250,000
No items found.
Full-time
Remote
false
Figure is an AI Robotics company developing a general purpose humanoid. Our humanoid robot is designed for commercial tasks and the home. We are based in San Jose, CA and require 5 days/week in-office collaboration. It’s time to build.
Figure’s vision is to deploy autonomous humanoids at a global scale. Our Helix team is seeking an experienced AI Tooling Engineer to enhance our internal, web-based data and AI training tools. This role focuses on developing intuitive web interfaces that support key AI research functions, including robot data annotation, training dataset visualization, and experiment tracking. The ideal candidate has experience building rich, interactive web interfaces using React and TypeScript.
Responsibilities
Design and build intuitive web interfaces for robot data annotation, dataset visualization, and experiment tracking
Utilize data-driven techniques to optimize interfaces for efficiency and fast iteration cycles
Integrate AI models to automate manual tasks
Work together with AI researchers, robot operators, and annotators to support new user experiences
Requirements
Strong software engineering fundamentals
Bachelor's or Master's degree in Computer Science, Robotics, Engineering, or a related field
Minimum of 4 years of professional, full-time experience building rich, interactive web interfaces
Proficiency in React and TypeScript
Bonus Qualifications
Experience using data stores (Postgres, MySQL, ElasticSearch, Redis, etc.)
Experience managing cloud infrastructure (AWS, Azure, GCP)
Experience with Tailwind CSS
Experience building data annotation and dataset management tools.
The US base salary range for this full-time position is between $150,000 - $250,000 annually.
The pay offered for this position may vary based on several individual factors, including job-related knowledge, skills, and experience. The total compensation package may also include additional components/benefits depending on the specific role. This information will be shared if an employment offer is extended.
No items found.
2026-03-18 11:31
Backend Engineer
Together AI
201-500
$200,000 – $280,000
No items found.
Full-time
Remote
false
About the Role
The Turbo team sits at the intersection of efficient inference (algorithms, architectures, engines) and post‑training / RL systems. We build and operate the systems behind Together’s API, including high‑performance inference and RL/post‑training engines that can run at production scale.
Our mandate is to push the frontier of efficient inference and RL‑driven training: making models dramatically faster and cheaper to run, while improving their capabilities through RL‑based post‑training (e.g., GRPO‑style objectives). This work lives at the interface of algorithms and systems: asynchronous RL, rollout collection, scheduling, and batching all interact with engine design, creating many knobs to tune across the RL algorithm, training loop, and inference stack. Much of the job is modifying production inference systems—for example, SGLang‑ or vLLM‑style serving stacks and speculative decoding systems such as ATLAS—grounded in a strong understanding of post‑training and inference theory, rather than purely theoretical algorithm design.
You’ll work across the stack—from RL algorithms and training engines to kernels and serving systems—to build and improve frontier models via RL pipelines. People on this team are often spiky: some are more RL‑first, some are more systems‑first. Depth in one of these areas plus appetite to collaborate across (and grow toward more full‑stack ownership over time) is ideal.
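As an illustration of one technique the role names, a toy speculative-decoding loop looks roughly like the sketch below. The "models" are stand-in functions over a five-letter vocabulary, not SGLang, vLLM, or ATLAS APIs; the point is only the propose-then-verify control flow:

```python
import random

# Toy speculative decoding: a cheap draft model proposes k tokens, the
# expensive target model verifies the whole draft in one pass, and we keep
# the longest verified prefix (plus one corrected token on rejection).
VOCAB = list("abcde")

def draft_model(context, k=4):
    """Cheap proposer: deterministic toy rule (next letter in the vocab)."""
    out = []
    last = context[-1]
    for _ in range(k):
        last = VOCAB[(VOCAB.index(last) + 1) % len(VOCAB)]
        out.append(last)
    return out

def target_model(context, proposed):
    """Expensive verifier: accepts each drafted token with some probability,
    standing in for a batched accept/reject test against the target's logits."""
    accepted = []
    for tok in proposed:
        if random.random() < 0.8:
            accepted.append(tok)
        else:
            break                              # first rejection truncates the draft
    if len(accepted) < len(proposed):
        accepted.append(random.choice(VOCAB))  # target supplies a corrected token
    return accepted

def generate(prompt, n_tokens, k=4):
    context = list(prompt)
    while len(context) - len(prompt) < n_tokens:
        proposal = draft_model(context, k)
        context.extend(target_model(context, proposal))
    return "".join(context[:len(prompt) + n_tokens])

random.seed(0)
print(generate("ab", 12))
```

The speedup in real systems comes from the verifier scoring all drafted tokens in a single forward pass instead of one pass per token; the engine work in this role is about making that interact well with batching, scheduling, and RL rollout collection.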
Requirements
We don’t expect anyone to check every box below. People on this team typically have deep expertise in one or more areas and enough breadth (or interest) to work effectively across the stack. The closer you are to full‑stack (inference + post‑training/RL + systems), the stronger the fit—but being spiky in one area and eager to grow is absolutely okay.
You might be a good fit if you:
Have strong expertise in at least one of the following, and are excited to collaborate across (and grow into) the others:
Systems‑first profile: Large‑scale inference systems (e.g., SGLang, vLLM, FasterTransformer, TensorRT, custom engines, or similar), GPU performance, distributed serving.
RL‑first profile: RL / post‑training for LLMs or large models (e.g., GRPO, RLHF/RLAIF, DPO‑like methods, reward modeling), and using these to train or fine‑tune real models.
Model architecture design for Transformers or other large neural nets.
Distributed systems / high‑performance computing for ML.
Are comfortable working from algorithms to engines:
Strong coding ability in Python.
Experience profiling and optimizing performance across GPU, networking, and memory layers.
Able to take a new sampling method, scheduler, or RL update and turn it into a production‑grade implementation in the engine and/or training stack.
Have a solid research foundation in your area(s) of depth:
Track record of impactful work in ML systems, RL, or large‑scale model training (papers, open‑source projects, or production systems).
Can read new RL / post‑training papers, understand their implications on the stack, and design minimal, correct changes in the right layer (training engine vs. inference engine vs. data / API).
Operate well as a full‑stack problem solver:
You naturally ask: “Where in the stack is this really bottlenecked?”
You enjoy collaborating with infra, research, and product teams, and you care about both scientific quality and user‑visible wins.
Minimum qualifications
3+ years of experience working on ML systems, large‑scale model training, inference, or adjacent areas (or equivalent experience via research / open source).
Advanced degree in Computer Science, EE, or a related field, or equivalent practical experience.
Demonstrated experience owning complex technical projects end‑to‑end.
If you’re excited about the role and strong in some of these areas, we encourage you to apply even if you don’t meet every single requirement.
Responsibilities
Advance inference efficiency end‑to‑end
Design and prototype algorithms, architectures, and scheduling strategies for low‑latency, high‑throughput inference.
Implement and maintain changes in high‑performance inference engines (e.g., SGLang‑ or vLLM‑style systems and Together’s inference stack), including kernel backends, speculative decoding (e.g., ATLAS), quantization, etc.
Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost.
Unify inference with RL / post‑training
Design and operate RL and post‑training pipelines (e.g., RLHF, RLAIF, GRPO, DPO‑style methods, reward modeling) where 90+% of the cost is inference, jointly optimizing algorithms and systems.
Make RL and post‑training workloads more efficient with inference‑aware training loops—for example, async RL rollouts, speculative decoding, and other techniques that make large‑scale rollout collection and evaluation cheaper.
Use these pipelines to train, evaluate, and iterate on frontier models on top of our inference stack.
Co‑design algorithms and infrastructure so that objectives, rollout collection, and evaluation are tightly coupled to efficient inference, and quickly identify bottlenecks across the training engine, inference engine, data pipeline, and user‑facing layers.
Run ablations and scale‑up experiments to understand trade‑offs between model quality, latency, throughput, and cost, and feed these insights back into model, RL, and system design.
Own critical systems at production scale
Profile, debug, and optimize inference and post‑training services under real production workloads.
Drive roadmap items that require real engine modification—changing kernels, memory layouts, scheduling logic, and APIs as needed.
Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously.
Provide technical leadership (Staff level)
Set technical direction for cross‑team efforts at the intersection of inference, RL, and post‑training.
Mentor other engineers and researchers on full‑stack ML systems work and performance engineering.
About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.
Compensation
We offer competitive compensation, startup equity, health insurance and other competitive benefits. The US base salary range for this full-time position is: $200,000 - $280,000 + equity + benefits. Our salary ranges are determined by location, level and role. Individual compensation will be determined by experience, skills, and job-related knowledge.
Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
Please see our privacy policy at https://www.together.ai/privacy
No items found.
2026-03-18 11:31
Senior Forward Deployed Engineer
Taktile
101-200
Germany
Full-time
Remote
false
About The Role
Taktile is redefining how financial institutions use AI to make critical decisions, and we're growing fast. As a Senior Forward Deployed Engineer, you'll be at the heart of that transformation, owning the technical journey from customer onboarding to production-grade AI deployments that deliver real business impact. If you're passionate about tech and AI, stay up to date on the latest AI developments, and have extensive experience with Python, SQL, and REST APIs, you'll thrive here.
What You'll Do as Senior Forward Deployed Engineer
Lead complex AI-driven Taktile deployments in production. You own technical delivery across multiple deployments, from scoping high-impact Agentic AI use cases to stable production.
Apply your technical expertise, problem-solving skills, and creativity to help organizations address real-world challenges. Your day could include designing solution architectures, developing decision logic, deploying production-grade Generative AI agents, or aligning with key customer stakeholders, all while ensuring an outstanding experience and rapid time to value for Taktile's customers.
You effectively scope work, sequence delivery, and proactively remove blockers, while making thoughtful trade-offs between scope, speed, and quality to ensure successful and timely project delivery.
Partner with Taktile's product management team to turn your understanding of customer needs into actionable product insights, directly influencing the evolution of Taktile's product roadmap.
Develop reusable resources, best practices, and tools to share your expertise and help scale the forward deployed engineering function across the organization.
About You
You bring 4-6 years of engineering or technical deployment experience that includes customer-facing work.
You have a strong technical background, preferably in fields such as Computer Science, Mathematics, Software Engineering, Physics, or Data Science.
You write and review production-grade Python and SQL, and have a strong command of REST API design and integrations.
You excel at breaking down complex problems and making quick, well-informed decisions even under pressure.
You build strong relationships with both technical and business stakeholders at all levels, driven by curiosity and a customer-centric mindset that helps you understand their needs and solve their challenges.
You're collaborative, curious, and low-ego: you work well across product, engineering, and GTM teams, and you bring a genuine desire to understand customers' businesses.
You are open to a hybrid work model and can work from our Berlin or London office at least three days per week.
Ideal Qualifications (but not required)
You have 4-6 years of experience as a Forward Deployed Engineer, Solution Engineer, Implementation Specialist, or an equivalent position within a B2B SaaS company.
You have experience building AI applications within financial services.
You have experience applying and optimizing statistical and machine learning models to solve business problems.
You have experience with at least one of the major cloud platforms (AWS, Azure, GCP).
What We Offer
Work with colleagues who lift you up, challenge you, celebrate you, and help you grow. We come from many different backgrounds, but what we have in common is the desire to operate at the very top of our fields. If you are similarly capable, caring, and driven, you'll find yourself at home here.
Make an impact and meaningfully shape an early-stage company.
Experience a truly flat hierarchy and communicate directly with founding team members. Having an opinion and voicing your ideas is not only welcome but encouraged, especially when they challenge the status quo.
Learn from experienced mentors and achieve tremendous personal and professional growth. Get to know and leverage our network of leading tech investors and advisors around the globe.
Receive a top-of-market equity and cash compensation package.
Get access to a self-development budget you can use to, e.g., attend conferences, buy books, or take classes.
Use the equipment of your choice, including a meaningful home-office setup.
Our Stance
We're eager to meet talented and driven candidates regardless of whether they tick all the boxes. We're looking for someone who will add to our culture, not just fit within it. We strongly encourage individuals from groups traditionally underestimated and underrepresented in tech to apply.
We seek to actively recognize and combat racism, sexism, ableism, and ageism. We embrace and support all gender identities and expressions, and celebrate love in its many forms. We won't inquire about how you identify or whether you've experienced discrimination, but if you want to tell your story, we are all ears.
About Us
Taktile helps financial institutions make smarter, safer decisions with the power of AI. Our software gives teams the tools to automate complex decisions, like who to onboard, how to underwrite, or when to flag suspicious activity, with full visibility and control.
By combining AI with a rich ecosystem of financial data, Taktile enables companies to adapt their decision-making in real time as markets, customer behavior, and risks evolve.
Our mission is to build the world's leading platform for automated decision-making in financial services, setting the standard for how AI is applied responsibly and effectively in this industry.
We were founded by machine learning and data science experts with deep experience in financial services. Today, our team works across Berlin, London, and New York, bringing together engineers, entrepreneurs, and researchers from companies like Google, Amazon, and Meta, as well as fast-growing startups and enterprise leaders.
Backed by top investors including Y Combinator, Index Ventures, Balderton Capital, and Tiger Global, along with the founders of Looker, GitHub, Mulesoft, Datadog, and UiPath, we're building a world-class organization across all functions and levels to power the next generation of AI-driven decision-making in financial services.
No items found.
2026-03-18 11:31
Senior Pathologist
PathAI
201-500
$181,500 – $278,300
United States
Full-time
Remote
false
Who We Are
PathAI's mission is to improve patient outcomes with AI-powered pathology. Our platform promises substantial improvements to the accuracy of diagnosis and the efficacy of treatment of diseases like cancer, leveraging modern approaches in machine learning and artificial intelligence. We have a track record of success in deploying AI algorithms for histopathology in translational research, pathology labs, and clinical trials. Rigorous science and careful analysis are critical to the success of everything we do. Our team, composed of diverse employees with a wide range of backgrounds and experiences, is passionate about solving challenging problems and making a huge impact on patient outcomes.
Where You Fit
As the Associate Director, MLOps Lead, you will lead the team responsible for the backbone of our AI/ML stack: the infrastructure that bridges ML research and massive-scale production. Your primary directive is to evolve our stack to meet the next scale of needs in large-scale ML training & inference workloads.
You’re someone who enjoys designing and building for reliability, relishes collaboration and technical challenges, and takes pride in making things better – without taking yourself too seriously. Our technical space is broad: high-scale AI training & inference workloads, cloud infrastructure, Kubernetes, observability, distributed systems, and a bit of everything in between.
What You’ll Do
This role is critical for driving the scalability and efficiency of our Machine Learning Operations platform through high-impact, high-growth strategic initiatives.
Vision and Roadmap: Develop and execute the long-term vision and roadmap for the MLOps team to support ML development and deployment needs across business units. Successfully manage the tension between short-term tactical deliveries and long-term architectural transformation for future growth.
Team Management: Lead and mentor a team of 6-7+ high-performing engineers. Strategically allocate resources to manage support for existing services while executing key strategic initiatives.
Cross-Functional Collaboration: Partner with leaders across machine learning, data science, product engineering, and infrastructure to proactively identify pain points, address bottlenecks, and facilitate the deployment of new solutions.
Foundation Model Readiness: Architect the compute and storage pipelines required for ML Engineers to manage millions of slides and complex derived artifacts without data fragmentation or synchronization latency.
Inference Modernization: Modernize the AI Product inference stack to support 5-10x growth of AI runs across global deployments.
System Observability: Collaborate with Site Reliability Engineering (SRE) to establish comprehensive metrics covering compute under-utilization, network bottlenecks, and granular cost and turn-around-time attribution.
Technology Refresh: Conduct "Build vs. Buy" assessments, leading "Stack Refresh" audits to benchmark our proprietary tools against best-in-class commercial and open-source alternatives to meet our future needs.
What You Bring
To be successful in this role with us, you'll at least need:
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field (or equivalent experience).
2-3+ years of experience managing engineering team(s), with a focus on building production-grade frameworks for MLOps or ML Infrastructure.
Deep technical expertise with ML workloads on Kubernetes, cloud computing platforms (AWS/GCP/Azure), workflow orchestration (Airflow, Kubeflow, or proprietary equivalents), and DevOps principles and infrastructure-as-code (Helm, Terraform).
Proven experience managing petabyte-scale datasets and high-throughput production inference pipelines.
Strong software engineering skills in complex, multi-language systems and experience with scalable service architecture.
Use of AI assistants (e.g. Copilot, Cursor, Claude) across the platform development lifecycle.
It Would Be Great If You Also Have
Exposure to ML frameworks like PyTorch or Scikit-learn.
Experience with large-scale data processing frameworks (e.g. Spark, Hive, Databricks, Amazon EMR).
Expertise in MLOps principles, including model lifecycle management, feature stores, model monitoring, and CI/CD for ML.
Familiarity with security and compliance best practices in ML systems.
We Want To Hear From You
At PathAI, we are looking for individuals who are team players, are willing to do the work no matter how big or small it may be, and who are passionate about everything they do. If this sounds like you, even if you may not match the job description to a tee, we encourage you to apply. You could be exactly what we're looking for.
PathAI is an equal opportunity employer, dedicated to creating a workplace that is free of harassment and discrimination. We base our employment decisions on business needs, job requirements, and qualifications — that's all. We do not discriminate based on race, gender, religion, health, personal beliefs, age, family or parental status, or any other status. We don't tolerate any kind of discrimination or bias, and we are looking for teammates who feel the same way.
The cash compensation outlined below includes base salary or hourly wage and on-target commission for employees in eligible roles. The summary below indicates if an employee in this position is eligible for annual bonus, overtime pay and equity awards. Individual compensation packages are tailored based on skills, experience, qualifications, and other job-related factors.
Annual Pay Range:
AD, MLOps: $181,500 - $278,300
Not Overtime Eligible
Eligible for Equity
No items found.
2026-03-18 11:20
Demo Experience Engineer, Technical Success
OpenAI
5000+
$234,000 – $260,000
United States
Full-time
Remote
false
About the Team
The Technical Success team is responsible for ensuring the safe and effective deployment of ChatGPT and OpenAI API applications for developers and enterprises. We act as trusted advisors and thought partners for our customers, ensuring developers and enterprises maximize value from our models and products.
As a Demo Engineer, you will help bring the power of OpenAI's technology to life through compelling prototypes, interactive demos, and real-world applications. You will work closely with Solutions Engineers, Product, and Go-To-Market teams to showcase how organizations can transform their businesses using ChatGPT, the OpenAI API, and our latest models. Your work will play a critical role in helping customers imagine what's possible with AI by translating technical capabilities into engaging and tangible demonstrations.
About the Role
We are seeking a Demo Engineer to design and build high-quality technical demos that highlight the capabilities of OpenAI models and products. You will collaborate with Solutions Engineers and customers to quickly prototype solutions that demonstrate how AI can solve real business problems.
This role sits at the intersection of engineering, product storytelling, and customer engagement. You will build demos that showcase real-world applications such as customer service automation, AI-powered assistants, developer tools, and intelligent workflows.
You will also contribute reusable demo assets, reference architectures, and prototype applications that enable the broader go-to-market team to effectively communicate the value of OpenAI technologies.
This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
In this role, you will:
Build compelling demos and prototypes that showcase the capabilities of OpenAI models and APIs across a wide range of use cases.
Partner with Solutions Engineers and go-to-market teams to develop custom demonstrations for strategic customer engagements.
Rapidly prototype AI-powered applications that highlight real-world business value, such as copilots, AI agents, automation workflows, and developer tools.
Translate complex technical capabilities into engaging demonstrations that resonate with both technical and business audiences.
Maintain a library of reusable demos, starter projects, and technical assets that can be leveraged across customer engagements.
Collaborate with Product and Engineering teams to stay current on new model capabilities and integrate them into demo experiences.
Continuously improve demos based on customer feedback, product updates, and emerging AI capabilities.
Create documentation and walkthroughs that enable internal teams to effectively use and present demo applications.
Represent the voice of the customer by identifying common use cases and opportunities to showcase OpenAI technology more effectively.
You'll thrive in this role if you:
Have 5+ years of experience building applications, prototypes, or technical demos in a developer-facing or customer-facing role.
Are highly comfortable building with Python, JavaScript, or similar languages, and have experience with modern web frameworks and APIs.
Have experience working with Generative AI, LLMs, or machine learning systems, including building prototypes or proof-of-concept applications.
Enjoy building quick, polished prototypes that demonstrate technical concepts in an engaging and accessible way.
Have strong product intuition and the ability to translate technical capabilities into compelling demonstrations.
Are comfortable working in fast-moving environments where experimentation and iteration are encouraged.
Communicate clearly with both technical and non-technical audiences.
Take ownership of problems end-to-end and enjoy learning new technologies quickly.
Have a collaborative mindset and enjoy partnering with engineers, product managers, and go-to-market teams.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates.
For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
No items found.
2026-03-18 11:20
Lead Forward Deployed Engineer
Taktile
101-200
Germany
Full-time
Remote
false
About The Role
Taktile is redefining how financial institutions use AI to make critical decisions, and we're growing fast. As a Lead Forward Deployed Engineer, you'll be at the heart of that transformation, owning the technical journey from customer onboarding to production-grade AI deployments that deliver real business impact. If you're passionate about tech and AI, stay up to date on the latest AI developments, and have extensive experience with Python, SQL, and REST APIs, you'll thrive here.
What You'll Do as Lead Forward Deployed Engineer
Lead complex AI-driven Taktile deployments in production. You own technical delivery across multiple deployments, from scoping high-impact Agentic AI use cases to stable production.
Apply your technical expertise, problem-solving skills, and creativity to help organizations address real-world challenges. Your day could include designing solution architectures, developing decision logic, deploying production-grade Generative AI agents, or aligning with key customer stakeholders, all while ensuring an outstanding experience and rapid time to value for Taktile's customers.
You effectively scope work, sequence delivery, and proactively remove blockers, while making thoughtful trade-offs between scope, speed, and quality to ensure successful and timely project delivery.
Partner with Taktile's product management team to turn your understanding of customer needs into actionable product insights, directly influencing the evolution of Taktile's product roadmap.
Develop reusable resources, best practices, and tools to share your expertise and help scale the forward deployed engineering function across the organization.
You actively coach and mentor junior Forward Deployed Engineers, supporting their development and success.
About You
You bring 6+ years of engineering or technical deployment experience that includes customer-facing work.
You have a strong technical background, preferably in fields such as Computer Science, Mathematics, Software Engineering, Physics, or Data Science.
You write and review production-grade Python and SQL, and have a strong command of REST API design and integrations.
You excel at breaking down complex problems and making quick, well-informed decisions even under pressure.
You build strong relationships with both technical and business stakeholders at all levels, driven by curiosity and a customer-centric mindset that helps you understand their needs and solve their challenges.
You're collaborative, curious, and low-ego: you work well across product, engineering, and GTM teams, and you bring a genuine desire to understand customers' businesses.
You are open to a hybrid work model and can work from our Berlin or London office at least three days per week.
Ideal Qualifications (but not required)
You have 6+ years of experience as a Forward Deployed Engineer, Solution Engineer, Implementation Specialist, or an equivalent position within a B2B SaaS company.
You have experience building AI applications within financial services.
You have experience applying and optimizing statistical and machine learning models to solve business problems.
You have experience with at least one of the major cloud platforms (AWS, Azure, GCP).
What We Offer
Work with colleagues who lift you up, challenge you, celebrate you, and help you grow. We come from many different backgrounds, but what we have in common is the desire to operate at the very top of our fields. If you are similarly capable, caring, and driven, you'll find yourself at home here.
Make an impact and meaningfully shape an early-stage company.
Experience a truly flat hierarchy and communicate directly with founding team members. Having an opinion and voicing your ideas is not only welcome but encouraged, especially when they challenge the status quo.
Learn from experienced mentors and achieve tremendous personal and professional growth. Get to know and leverage our network of leading tech investors and advisors around the globe.
Receive a top-of-market equity and cash compensation package.
Get access to a self-development budget you can use to, e.g., attend conferences, buy books, or take classes.
Use the equipment of your choice, including a meaningful home-office setup.
Our Stance
We're eager to meet talented and driven candidates regardless of whether they tick all the boxes. We're looking for someone who will add to our culture, not just fit within it. We strongly encourage individuals from groups traditionally underestimated and underrepresented in tech to apply.
We seek to actively recognize and combat racism, sexism, ableism, and ageism. We embrace and support all gender identities and expressions, and celebrate love in its many forms. We won't inquire about how you identify or whether you've experienced discrimination, but if you want to tell your story, we are all ears.
About Us
Taktile helps financial institutions make smarter, safer decisions with the power of AI. Our software gives teams the tools to automate complex decisions, like who to onboard, how to underwrite, or when to flag suspicious activity, with full visibility and control.
By combining AI with a rich ecosystem of financial data, Taktile enables companies to adapt their decision-making in real time as markets, customer behavior, and risks evolve.
Our mission is to build the world's leading platform for automated decision-making in financial services, setting the standard for how AI is applied responsibly and effectively in this industry.
We were founded by machine learning and data science experts with deep experience in financial services. Today, our team works across Berlin, London, and New York, bringing together engineers, entrepreneurs, and researchers from companies like Google, Amazon, and Meta, as well as fast-growing startups and enterprise leaders.
Backed by top investors including Y Combinator, Index Ventures, Balderton Capital, and Tiger Global, along with the founders of Looker, GitHub, Mulesoft, Datadog, and UiPath, we're building a world-class organization across all functions and levels to power the next generation of AI-driven decision-making in financial services.
2026-03-18 11:20
Research Engineer, SLAM & Multi-View Geometry
OpenAI
5000+
$380,000 – $445,000
United States
Full-time
Remote
false
About the Team
Our Robotics team is focused on unlocking general-purpose robotics and pushing toward AGI-level intelligence in dynamic, real-world settings. Working across the entire model stack, we integrate cutting-edge hardware and software to explore a broad range of robotic form factors. We strive to seamlessly blend high-level AI capabilities with the constraints of physical systems to improve people's lives.

About the Role
As a SLAM / Multi-View Geometry Engineer on the Robotics team, you will develop systems that enable robots to perceive, track, and reconstruct the world in 3D from multi-camera and multimodal sensor data. You will work on real-time and offline SLAM pipelines used during teleoperation and robot data collection, as well as scalable systems for reconstructing and tracking 3D structure from large datasets.

We're looking for people who combine strong fundamentals in computer vision with practical experience building robust perception systems. The ideal candidate is comfortable working across classical geometry-based approaches and modern machine learning methods, and enjoys working closely with AI researchers and engineers.

This role is based in San Francisco, CA. We use a hybrid work model of 4 days in the office per week and offer relocation assistance to new employees.

In this role, you will:
- Develop and deploy online SLAM systems used during robotic data collection with multi-camera sensor stacks and teleoperation platforms.
- Build systems for large-scale 3D reconstruction and point tracking across massive datasets, enabling new approaches to world modeling and perception.
- Work with research and engineering teams to scale multi-view geometry pipelines to large datasets.
- Improve the accuracy, robustness, and scalability of perception systems used in robotics data collection and training pipelines.
- Collaborate across robotics, perception, and ML teams to integrate geometry-based methods with learned models.
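As background on the geometry fundamentals this role calls for (not part of the posting itself), below is a minimal sketch of the reprojection error that bundle adjustment minimizes, using a hypothetical pinhole camera with made-up intrinsics:

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D world point X to pixel coordinates with a pinhole model.

    K: 3x3 intrinsic matrix; (R, t): world-to-camera rotation and translation.
    """
    Xc = R @ X + t            # transform the point into the camera frame
    uvw = K @ Xc              # apply intrinsics
    return uvw[:2] / uvw[2]   # perspective divide -> pixel coordinates

def reprojection_error(K, R, t, X, observed_px):
    """Euclidean distance between the predicted and observed pixel.

    Bundle adjustment jointly refines camera poses and 3D points by
    minimizing the sum of squared residuals of this kind over all
    observations.
    """
    return np.linalg.norm(project(K, R, t, X) - observed_px)

# Toy example: identity pose and illustrative intrinsics (hypothetical values).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)
X = np.array([0.1, -0.2, 2.0])   # a point 2 m in front of the camera

px = project(K, R, t, X)                   # predicted pixel: [345, 190]
err = reprojection_error(K, R, t, X, px)   # 0 for a perfect observation
```

In a production SLAM pipeline this residual would be minimized over many cameras and points with a sparse nonlinear least-squares solver, but the quantity being optimized is the same.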
You might thrive in this role if you:
- Have industry experience applying SLAM or visual-inertial odometry systems, such as in robotics, self-driving vehicles, AR/VR headsets, or other real-world perception systems.
- Have a deep understanding of multi-view geometry, camera calibration, bundle adjustment, feature tracking, and sources of error in real-world SLAM systems.
- Have experience with large-scale data processing pipelines, or are excited to learn and work with infrastructure that supports large datasets.
- Enjoy working in fast-moving environments and collaborating closely with engineers and researchers to ship systems quickly.

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates.
For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss, or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
2026-03-18 11:20
