The AI job market moves fast. We keep up so you don't have to.
Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.
New AI Opportunities
Showing 61 – 79 of 79 jobs
Senior Software Engineer, Agents
Decagon
101-200
$250,000 – $330,000
United States
Full-time
Remote: false
About Decagon
Decagon is the leading conversational AI platform empowering every brand to deliver concierge customer experiences. Our technology enables industry-defining enterprises like Avis Budget Group, Block’s Cash App and Square, Chime, Oura Health, and Hunter Douglas to deploy AI agents that power personalized, deeply satisfying interactions across voice, chat, email, SMS, and every other channel.
We’re building a future where customer experiences are redefined from support tickets and hold music to faster resolutions, richer conversations, and deeper relationships. We’re proud to be backed by world-class investors who share that vision, including a16z, Accel, Bain Capital Ventures, Coatue, and Index Ventures, along with many others.
We’re an in-office company, driven by a shared commitment to excellence and velocity. Our values — Just Get It Done, Invent What Customers Want, Winner’s Mindset, and The Polymath Principle — shape how we work and grow as a team.
About the Team
The Agent Engineering team at Decagon deploys mission-critical AI agents that impact millions of users and directly drive Decagon’s growth. You will build on our industry-leading AI agent platform, collaborate directly with customers, and use your own creativity to devise long-term, scalable solutions. Our mission is to deliver magical support experiences — AI agents working alongside human agents to help users resolve their issues.
About the Role
On the Agent Engineering team, you’ll have complete ownership and autonomy in building and shipping best-in-class AI agents, from initial implementation through continuous iteration. You’ll work directly with leaders across industries like finance, healthcare, and hospitality, solving their users’ needs with reliable and intuitive AI agents. Engineers here own their work end-to-end and are trusted to make a real impact. This role is for someone who dives deep into complex system challenges and builds elegant solutions that scale to millions of users.
In this role, you will
- Design and build AI agents that outperform human agents in managing complex customer interactions and driving customer retention
- Identify cross-customer trends that guide the evolution of Decagon’s agent building platform and research efforts
- Experiment with and run evaluations on the latest text and voice models, then integrate them at scale with large enterprise-grade customers
Your background looks something like this
- 5+ years of industry experience in software engineering
- Proficiency with Python, TypeScript, and asynchronous programming
- A high degree of comfort digging into system failures within deep technology stacks using any tool necessary
Even better
- Prior experience working with multi-modal models
Benefits
- Medical, dental, and vision benefits
- Take-what-you-need vacation policy
- Daily lunches, dinners, and snacks in the office to keep you at your best
Compensation
$250K – $330K + equity
2026-04-02 6:51
IT Support Specialist
Snorkel AI
501-1000
$172,000 – $300,000
United States
Full-time
Remote: false
About Snorkel
At Snorkel, we believe meaningful AI doesn’t start with the model, it starts with the data.
We’re on a mission to help enterprises transform expert knowledge into specialized AI at scale. The AI landscape has gone through incredible changes between 2015, when Snorkel started as a research project in the Stanford AI Lab, and the generative AI breakthroughs of today. But one thing has remained constant: the data you use to build AI is the key to achieving differentiation, high performance, and production-ready systems. We work with some of the world’s largest organizations to empower scientists, engineers, financial experts, product creators, journalists, and more to build custom AI with their data faster than ever before. Excited to help us redefine how AI is built? Apply to be the newest Snorkeler!
About the Role
Snorkel AI is hiring Frontier AI Solutions Engineers who will partner with leading AI labs on their most challenging data problems. This is a high-impact, customer-facing role that combines technical depth with strong presales instincts. You'll partner with customer research teams to design complex data and environments that improve frontier model performance, demonstrating Snorkel's capabilities through research-driven engagements.
You'll work at the critical intersection of research, technical strategy, and customer partnership. This includes scoping training data needs, designing RL environments and tasks, developing evaluation frameworks, probing model behavior and failure modes, and translating customer research objectives into actionable technical plans. You'll develop technical specifications, analyze frontier model failure modes, and serve as a thought partner to customer research teams throughout the sales cycle and into early delivery phases.
Main Responsibilities
Partner with frontier AI research labs to design datasets and environments that improve model performance
Lead technical conversations with customer researchers to understand model capabilities, failure modes, data requirements, and success criteria
Probe model behavior through systematic evaluation to uncover weaknesses and identify high-impact data interventions
Design evaluation frameworks, calibration processes, and quality rubrics that establish measurable project success metrics
Develop technical specifications for data projects that balance research rigor with operational feasibility
Serve as thought partner to customer research teams throughout the sales cycle, building trust and credibility
Stay current on frontier AI research, RL environment design, post-training techniques, and evaluation methodologies
Preferred Qualifications
Strong expertise in frontier AI concepts including LLMs, training data pipelines, evaluation methodologies, post-training techniques (RLHF, DPO, RLAIF), and domain areas such as coding agents, reasoning, multimodal models, or RL environments
Experience in applied ML research, data science, or research-intensive technical roles with customer-facing or collaborative research experience
Proficiency in Python and familiarity with ML frameworks and LLM APIs
Excellent communication skills — ability to deliver technical presentations and explain complex concepts to diverse audiences
Familiarity with data curation workflows, synthetic data generation, LLM-as-a-Judge, or evaluation framework design
Ability to work in a fast-moving environment, comfortable with ambiguity and rapid iteration
B.S. in Computer Science, Machine Learning, or related field with 4+ years of experience in AI/ML solutions engineering or technical customer-facing roles
Compensation range for Tier 1 locations (San Francisco Bay Area and New York City): $172K – $300K OTE. All offers also include equity in the form of employee stock options. Our compensation ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training.
Why Join Snorkel AI?
At Snorkel AI, we're building the future of data-centric AI. Our Expert Data-as-a-Service organization partners with world-class customers to solve some of the hardest data challenges — creating training and evaluation data that power the next generation of LLMs and AI systems. You'll work directly on projects that impact real production systems, while shaping how internal teams deliver faster, better, and more intelligently. This is a rare opportunity to own technical data workflows and be a founding member of the technical DaaS team.
#LI-CG1
Salary Range
$172,000 – $300,000 USD
Be Your Best at Snorkel
Joining Snorkel AI means becoming part of a company that has market proven solutions, robust funding, and is scaling rapidly—offering a unique combination of stability and the excitement of high growth. As a member of our team, you’ll have meaningful opportunities to shape priorities and initiatives, influence key strategic decisions, and directly impact our ongoing success. Whether you’re looking to deepen your technical expertise, explore leadership opportunities, or learn new skills across multiple functions, you’re fully supported in building your career in an environment designed for growth, learning, and shared success.
Snorkel AI is proud to be an Equal Employment Opportunity employer and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. Snorkel AI embraces diversity and provides equal employment opportunities to all employees and applicants for employment. Snorkel AI prohibits discrimination and harassment of any type on the basis of race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local law. All employment is decided on the basis of qualifications, performance, merit, and business need.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
2026-04-02 6:35
Forward Deployed Engineer
V7
101-200
$130,000 – $170,000
United States
Full-time
Remote: false
V7
At V7, we’re building AI platforms that help humans do their best work, at incredible scale and speed. Our mission is to turn human knowledge into trustworthy AI, making complex tasks faster, smarter, and more accurate. We’re growing fast, backed by leading investors and AI pioneers (including the minds behind Transformers and Gemini).
The Product
V7 Go provides legal, finance, insurance, and accounting teams with a toolkit for deploying and building custom no-code AI agents. The platform focuses on taking multi-modal data and delivering verifiable outputs with transparent AI logic to ensure accuracy and compliance. V7 Go supports all of the latest models, like GPT, Claude, and Gemini, for the best accuracy and performance. Watch the V7 Go keynote to see what we’re building.
The team you’ll be joining and the impact you’ll have
You'll join our go-to-market team as our second Solutions Engineer in New York (the team is six people), sitting at the intersection of sales and product in a company processing tens of millions of documents for customers across finance, insurance, and real estate. V7 Go 4x-ed revenue last year, with 160%+ upsell into accounts. You'll help accelerate that trajectory by making sure every customer gets real value. We run a lean, high-trust team where you'll work directly with AEs, engineers, and product to close complex deals and turn new logos into long-term champions. Your work directly shapes how enterprises experience agentic AI for the first time, and how quickly they believe in it.
What you’ll be doing from day one
- Run technical discovery, design solutions, and lead POCs alongside Account Executives to close deals, then own onboarding to get customers to first value fast.
- Build and implement workflows within V7 Go, combining prompt engineering, data pipelines, and integrations to solve real customer problems across document processing and more.
- Act as the primary technical contact for accounts, handling complex challenges and spotting expansion opportunities as customers scale.
- Juggle up to 10 concurrent projects while feeding customer insights back to product and engineering.
Who you are
- You are a prototyper at heart with a gift for talking to customers, building relationships, and solving technical problems with repeatability.
- You have experience delivering Large Language Model projects with customers, including LLM API integration, up-to-date knowledge of foundation models, solutions design/architecture, integrating different cloud providers, prompt engineering, and/or measuring AI accuracy.
- You love coding with Python.
- You can develop and articulate an AI solution vision to technical and business stakeholders, working with customers and partners to match the value proposition to business needs.
V7 champions equality and inclusion because diverse teams build better products. Don't check every box? Apply anyway — we value what makes you unique and will support you through the process; just let our Talent team know how they can help.
2026-04-02 0:20
Senior Python Engineer (FastAPI / Python)
Photoroom
501-1000
$75,000 – $110,000
France
Full-time
Remote: false
About us
Photoroom launched in 2020 after being accepted into Y Combinator and has become the world's most popular AI photo editor over the past four years. Our goal is to create the technology that allows anyone to create studio-level product images in minutes. With over 300 million downloads and 5+ billion images processed annually, we serve both individual creators and major enterprises like Amazon, DoorDash, and Decathlon through our B2C app and B2B API solutions. We're a profitable, remote-friendly company that has raised Series B funding and aims for 40% year-over-year growth. Our team of 100+ passionate builders focuses on craft, innovation, and collaboration, creating exceptional impact for entrepreneurs and businesses worldwide.
🤓 We are looking for a strong Python engineer to take ownership of the public API that powers how developers integrate Photoroom into their products. This is a high-impact role at the intersection of developer experience and applied AI, where you’ll shape the interface used by both self-serve users and large enterprise customers.
💰 75k – 110k* + Stock-Options/BSPCE
🇪🇺 Work flexibly from one of our core countries: France, Germany, Italy, Spain or the UK
✈️ Relocation support available (up to €10k), including help with visa and housing
🏖️ Socials: company retreats, offsites, and regular team events
🇬🇧 International team, English-speaking environment, with optional language lessons
*We can go higher for outstanding profiles and adjust for cost of living where needed
✨ About the role ✨
- You will design, build, and evolve our public API product — the core interface through which developers access Photoroom’s AI capabilities.
- You will ship features used by both self-serve developers and large enterprise customers, ensuring the API remains reliable, scalable, and easy to integrate.
- You will own the API surface end-to-end: from design decisions (naming, versioning, structure) to implementation, performance, and long-term maintainability.
- You will iterate quickly based on real usage — we ship multiple times per week — using data and user feedback to prioritise what matters most.
- You will work at the intersection of backend engineering and AI, abstracting complex image models into simple, elegant developer-facing interfaces.
- You will collaborate closely with product, machine learning, backend, and sales teams to ensure the API delivers real value to users.
- You will join a small, senior team with high ownership and direct product impact from day one.
✨ About you ✨
- You have strong experience building backend systems in Python, with a focus on APIs used by external consumers (not just internal services).
- You are comfortable with FastAPI (or similar frameworks) and have worked with async patterns, concurrency, and production debugging.
- You have owned or significantly contributed to the design of public-facing APIs — thinking about versioning, consistency, and long-term evolution.
- You care deeply about reliability, performance, and developer experience.
- You are pragmatic and product-minded — you prioritise impact and speed, and avoid over-engineering.
- You take ownership and are comfortable making decisions, while collaborating closely with cross-functional teams.
- You have worked in high-performing teams, ideally in fast-paced or startup environments.
- You communicate clearly and can translate technical decisions into something others can understand.
Bonus points if you:
- Have experience with API deployment patterns (Docker, uvicorn/gunicorn, health checks, etc.)
- Have worked with image processing libraries (Pillow, pyvips…)
- Have integrated AI/image models behind APIs (latency, inference, abstraction)
- Have experience with Node.js (the current API is being migrated)
- Have built integrations or developer-facing tooling
✨ Hiring Process ✨
1. Screening call
2. Technical interview
3. Take-home assignment + review
4. Final interviews & team meet
5. Reference check & offer
Diversity, Equity, Inclusion, and Belonging
We're committed to enabling everyone to feel included and valued at work. We believe our company and culture are strongest when composed of diverse experiences and backgrounds. That's also why we have flexible working hours, trust people to work remotely, and extended parental leave. All qualified applicants receive consideration for employment without regard to age, color, family, gender identity, marital status, national origin, physical or mental disability, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws.
2026-04-01 20:36
Technical Lead Manager, Platform (India)
Cartesia
51-100
₹10,000,000 – ₹13,000,000
India
Full-time
Remote: false
About Cartesia
Our mission is to build the next generation of AI: ubiquitous, interactive intelligence that runs wherever you are. Today, not even the best models can continuously process and reason over a year-long stream of audio, video and text—1B text tokens, 10B audio tokens and 1T video tokens—let alone do this on-device.
We're pioneering the model architectures that will make this possible. Our founding team met as PhDs at the Stanford AI Lab, where we invented State Space Models (SSMs), a new primitive for training efficient, large-scale foundation models. Our team combines deep expertise in model innovation and systems engineering, paired with a design-minded product engineering team, to build and ship cutting-edge models and experiences.
We're funded by leading investors at Index Ventures and Lightspeed Venture Partners, along with Factory, Conviction, A Star, General Catalyst, SV Angel, Databricks and others. We're fortunate to have the support of many amazing advisors and 90+ angels across many industries, including the world's foremost experts in AI.
About the Role
We’re opening our first-ever office in India and looking to hire a Technical Lead Manager to help build and lead our Platform Engineering team while advancing our mission of building real-time multimodal intelligence. This role will be responsible for leading the development of our model infrastructure, scaling our systems, and building a high-performing engineering team in India.
What You'll Do
- Lead the design and development of a low-latency, scalable, and reliable model inference and serving stack for our cutting-edge SSM foundation models
- Manage and mentor a team of platform engineers while maintaining a high technical bar and strong engineering culture
- Work closely with our research team and product engineers to translate cutting-edge research into incredible products
- Own the architecture and roadmap for model serving infrastructure, distributed systems, and data processing platforms
- Build highly parallel, high-quality data processing and evaluation infrastructure for foundation model training
- Drive execution across ambiguous, zero-to-one engineering projects and platform initiatives
- Establish best practices for reliability, observability, scalability, and performance across platform systems
- Help recruit, interview, and build our engineering team in India
You’ll have significant autonomy to shape our platform and directly impact how cutting-edge AI is applied across various devices and applications.
What You'll Bring
Given the scale and difficulty of the problems we work on, we value strong engineering and leadership skills at Cartesia.
- Strong engineering skills; comfortable navigating complex codebases and monorepos.
- Experience leading technical projects and mentoring or managing engineers.
- 6+ years of experience in software engineering, distributed systems, platform engineering, or ML infrastructure.
- Strong system design skills and experience building large-scale distributed systems.
- An eye for craft and for writing clean, maintainable code.
- Comfort diving into new technologies and quickly adapting your skills to our tech stack (Go and Python on the backend, Next.js for the frontend).
- Experience building systems with high demands on performance, reliability, and observability.
- Technical leadership with the ability to execute and deliver zero-to-one results amidst ambiguity.
- [Bonus] Background in or experience working with machine learning, infrastructure, or generative models.
Even if you don’t meet every requirement above, we'd encourage you to apply.
What We Offer
🍽 Lunch, dinner and snacks at the office.
🏥 Stipend to cover medical insurance.
🦖 Your own personal Yoshi.
Our Culture
🏢 We’re an in-person team based out of San Francisco. We love being in the office, hanging out together, and learning from each other every day.
🚢 We ship fast. All of our work is novel and cutting edge, and execution speed is paramount. We have a high bar, and we don’t sacrifice quality or design along the way.
🤝 We support each other. We have an open and inclusive culture that’s focused on giving everyone the resources they need to succeed.
2026-04-01 18:20
Senior Machine Learning Engineer - Scene Understanding
Zoox
1001-5000
$189,000 – $290,000
United States
Full-time
Remote: false
The Perception team at Zoox creates the "eyes and ears" of our self-driving robots. Navigating safely and efficiently in complex environments requires detecting, classifying, tracking, and understanding various attributes of surrounding objects—all in real-time and with exceptional accuracy.
As an engineer on the Scene Understanding team, you will develop advanced Vision-Language-Action (VLA) models that perceive our vehicle's surroundings to identify hazards and make driving suggestions. You will use VLA models to detect rare events and ensure safe driving in these situations. You'll work with state-of-the-art machine learning models that operate in real-time on our robotaxi platform with minimal latency. Collaborating with world-class engineers and researchers across sensors, planning, and other teams, you'll have access to premium sensor data and cutting-edge infrastructure to validate your algorithms in real-world conditions.
In this role, you will...
Design and train Vision-Language-Action (VLA) solutions for robotaxis
Lead end-to-end data strategy, including mining, auto-labeling, and dataset construction to power our ML flywheel
Lead the full post-training stack for VLMs and VLAs, including Continual Pre-training (CPT) on domain-specific driving data and Supervised Fine-Tuning (SFT) for instruction following
Utilize our large-scale data pipelines and ML infrastructure to research, prototype, and deploy solutions that improve driving behavior
Partner with cross-functional teams to integrate perception signals
Qualifications
MS or PhD in Computer Science or related field
Background in deep learning solutions for VLM and VLA models
Track record in post-training large-scale models, CPT, SFT, RL
Hands-on experience with production ML pipelines, including dataset creation, training frameworks, and metrics
Expertise in Python libraries (PyTorch, NumPy, Pandas, VLLM)
Bonus Qualifications
Deep knowledge of cutting-edge computer vision techniques
Publications in top-tier conferences (CVPR, ICCV, RSS, ICRA)
Experience integrating large language models into various tasks
Base Salary Range
$189,000 – $290,000 a year
There are three major components to compensation for this position: salary, Amazon Restricted Stock Units (RSUs), and Zoox Stock Appreciation Rights. A sign-on bonus may be offered as part of the compensation package. The listed range applies only to the base salary. Compensation will vary based on geographic location and level. Leveling, as well as positioning within a level, is determined by a range of factors, including, but not limited to, a candidate's relevant years of experience, domain knowledge, and interview performance. The salary range listed in this posting is representative of the range of levels Zoox is considering for this position.
Zoox also offers a comprehensive package of benefits, including paid time off (e.g. sick leave, vacation, bereavement), unpaid time off, Zoox Stock Appreciation Rights, Amazon RSUs, health insurance, long-term care insurance, long-term and short-term disability insurance, and life insurance.
About ZooxZoox is developing the first ground-up, fully autonomous vehicle fleet and the supporting ecosystem required to bring this technology to market. Sitting at the intersection of robotics, machine learning, and design, Zoox aims to provide the next generation of mobility-as-a-service in urban environments. We’re looking for top talent that shares our passion and wants to be part of a fast-moving and highly execution-oriented team.
Follow us on LinkedIn
AccommodationsIf you need an accommodation to participate in the application or interview process please reach out to accommodations@zoox.com or your assigned recruiter.
A Final Note:You do not need to match every listed expectation to apply for this position. Here at Zoox, we know that diverse perspectives foster the innovation we need to be successful, and we are committed to building a team that encompasses a variety of backgrounds, experiences, and skills.
2026-04-01 18:06
Senior Product Manager – Agentic AI Systems
Level AI
201-500
United States
Full-time
Remote: false
Location: US (Remote / Bay Area preferred)
Experience: 4 years of Product Management experience
Reports to: Head of Product
About Level AI: Level AI is an AI-native customer experience intelligence platform helping enterprises deploy agentic AI systems that reason, act, and improve across high-volume customer interactions. Our products power real-world contact center workflows and deliver measurable business outcomes at scale.
Role Overview: We’re looking for a Senior Product Manager to help build and scale agentic AI systems at Level AI. In this role, you will work closely with Engineering, Applied AI/ML, Design, and customer-facing teams to ship production-ready agentic capabilities and make them successful in real customer environments. This role emphasizes execution, customer impact, and production rigor, with opportunities to grow into broader platform ownership over time.
What You’ll Do:
- Define and execute product initiatives for agentic AI systems, with a focus on measurable customer and business outcomes
- Own significant parts of the agentic system lifecycle, including orchestration, decisioning, evaluation, and iteration
- Contribute to building a repeatable framework for launching, evaluating, and improving agentic capabilities across customers
- Help define how agentic systems are measured and improved in production, balancing autonomy with safety and reliability
- Partner closely with Engineering, Applied AI/ML, Design, and Solutions teams to ship production-ready systems
- Work directly with customers to understand workflows, requirements, and success criteria
- Drive customer-informed prioritization by staying close to live deployments and real usage patterns
- Support best practices for agent evaluation, iteration, and safe rollout
- Represent the product in customer conversations, demos, and feedback sessions
What We’re Looking For:
Required -
- 4 years of Product Management experience, preferably with AI-driven or platform products
- Experience shipping and iterating on production software systems
- Exposure to LLMs, agentic systems, or AI-powered workflows (hands-on or via close partnership)
- Strong customer-facing skills and comfort working with enterprise customers
- Ability to translate ambiguous problems into clear product requirements
- Excellent collaboration and communication skills
Nice to Have:
- Experience with conversational systems, automation, or real-time decisioning
- Familiarity with AI evaluation concepts, human-in-the-loop systems, or feedback loops
- Experience working in enterprise SaaS or B2B platforms
- Technical background or strong comfort working with engineering and ML teams
Why This Role at Level AI
- Work on real production agentic AI systems, not experiments
- High exposure to customers, data, and real-world outcomes
- Opportunity to grow into broader platform or Principal-level ownership
- Meaningful impact on how enterprises adopt and trust AI
2026-04-01 10:06
Research Intern – Reinforcement Learning (RL)
Level AI
201-500
India
Intern
Remote: false
🚀 Build the next generation of Agentic AI with us
Our platform combines conversation intelligence, multimodal understanding, and agentic AI systems to power both human agents and autonomous AI agents across the entire customer experience lifecycle.
A core part of this vision is our investment in custom Small Language Models (SLMs)—purpose-built for CX workflows—paired with reinforcement learning systems that continuously improve decision-making in real-world environments.
We’re looking for a Research Intern (Reinforcement Learning) to join us in shaping this future.
What you’ll do
Design and build reinforcement learning environments that model real-world customer interaction workflows
Design RL agents that learn from these environments using real-world interaction data, rewards, and feedback loops
Define reward models and feedback loops using real-world signals (outcomes and human feedback)
Enable learning from production data by structuring interaction traces into training-ready datasets for offline and online learning
Experiment with multi-agent systems and simulation frameworks for complex coordination and decision-making
Collaborate with engineering and product teams to deploy, evaluate, and iterate on learning systems in production at scale
What we’re looking for
Currently pursuing (or recently completed) a degree in Computer Science, AI, Machine Learning, or related field
Strong understanding of reinforcement learning fundamentals
Familiarity with RL environments and training libraries such as Verl and Tinker
Strong foundation in probability, maths, and optimization
Passion for building real-world AI systems
Nice to have
Experience with RLHF, LLM/SLM fine-tuning, or model alignment
Exposure to agent-based systems or multi-agent RL
Prior research, projects, or publications in RL or applied ML
Experience working with large-scale or production datasets
Why Level AI
Work on production-grade Agentic AI systems used by leading enterprises
Build alongside a team with deep expertise from Amazon, Google, and Meta
Be part of a fast-growing Series C AI company.
Direct exposure to 0→1 AI innovation in CX and decisioning systems
2026-04-01 10:05
Robotics Software Engineer - Manufacturing Automation
Intrinsic
201-500
United States
Full-time
Remote: false
Intrinsic is an AI robotics group at Google aiming to reimagine the potential of industrial robotics. Our team believes that advances in AI, perception and simulation will redefine what’s possible for industrial robotics in the near future – with software and data at the core.
Our mission is to make industrial robotics intelligent, accessible, and usable for millions more businesses, entrepreneurs, and developers. We are a dynamic team of engineers, roboticists, designers, and technologists who are passionate about unlocking the creative and economic potential of industrial robotics.
Role
As a Senior AI Research Scientist for Perception for Contact-Rich Manipulation, you will lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. You will explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Your work will involve creating models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. You will collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities that can be deployed on physical hardware for industrial applications.
How your work moves the mission forward
Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios.
Design algorithms that enable robots to manipulate complex or deformable objects with high precision.
Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware.
Evaluate model performance in both simulation and real-world environments to ensure robustness and reliability.
Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems.
Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.
Skills you will need to be successful
PhD in Computer Science, Robotics, or a related field with a focus on machine learning or computer vision.
3 years of experience in applied research focused on robotic manipulation or robot learning.
Proficiency in programming with Python and C++.
Experience with deep learning frameworks such as PyTorch, JAX, or TensorFlow.
Experience developing algorithms for vision-based manipulation or contact-rich interaction.
Publication record in top-tier robotics or AI conferences (e.g., ICRA, IROS, CVPR, NeurIPS).
Skills that will differentiate your candidacy
Experience with reinforcement learning or imitation learning for robotics.
Familiarity with physics simulators like MuJoCo, Isaac Sim, or Gazebo.
Experience integrating tactile sensors with visual perception systems.
Experience with learning from demonstration (LfD) and kinesthetic teaching.
Background in sim-to-real transfer techniques for manipulation policies.
Experience with transformer-based architectures or foundation models in a robotics context.
Experience deploying machine learning models on edge compute hardware.
At Intrinsic, we are proud to be an equal opportunity workplace. Employment at Intrinsic is based solely on a person's merit and qualifications directly related to professional competence. Intrinsic does not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), or any other basis protected by law. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. It is Intrinsic’s policy to comply with all applicable national, state and local laws pertaining to nondiscrimination and equal opportunity.
If you have a disability or special need that requires accommodation, please contact us at: candidate-support@intrinsic.ai.
2026-04-01 9:21
AI Product Manager
Seven AI
51-100
United States
Full-time
Remote
false
7AI is seeking an AI-Native Product Builder to design and launch intelligent security products that help organizations proactively defend against evolving cyber threats. This role blends product thinking with hands-on engineering, enabling you to move from idea to production quickly using AI-first workflows.
You’ll work at the intersection of cybersecurity, AI, and product innovation—building systems that detect, analyze, and respond to threats in real time. From early concept validation to production deployment, you’ll own the full lifecycle of new capabilities, collaborating closely with engineering, research, and go-to-market teams.
The ideal candidate is deeply technical, AI-native in how they build, and energized by solving complex security problems in a fast-moving, high-ownership environment.
What You’ll Do
Rapidly prototype and ship new security features using AI coding tools (e.g., Cursor, Copilot) and modern development workflows to explore product ideas and technical feasibility.
Leverage internal data, threat intelligence, and market signals to identify high-impact opportunities and validate product direction.
Build and scale features across cloud platforms, APIs, and security infrastructure using GenAI to accelerate development while maintaining high standards for reliability and security.
Own product definition end-to-end—translating ambiguous problems into clear requirements and shipped solutions.
Balance trade-offs across security effectiveness, performance, usability, and operational complexity.
Partner cross-functionally with engineering, security researchers, and business stakeholders to deliver impactful outcomes.
Identify opportunities to automate workflows across teams using agentic AI systems and internal tooling.
Contribute to the evolution of next-generation AI-driven cybersecurity capabilities, including detection, response, and analysis systems.
What We’re Looking For
Bachelor’s degree or higher in a technical field (or equivalent experience).
5+ years of experience in software engineering, product development, or a hybrid role, including building AI-powered applications.
Strong hands-on experience with LLMs and AI coding tools for prototyping, analysis, and product development.
Experience building systems that incorporate GenAI (e.g., LLM integrations, RAG pipelines, agentic workflows, evaluation frameworks).
Proven ability to take products from concept to launch with a strong focus on user impact.
Nice to Have
Experience building security or infrastructure products (e.g., detection systems, monitoring tools, or distributed services).
A portfolio of projects demonstrating AI-native development and rapid prototyping.
Experience instrumenting products with analytics and telemetry to drive decision-making.
Familiarity with cloud infrastructure (e.g., AWS) and data visualization tools.
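The posting mentions RAG pipelines as an example of a GenAI system. A minimal sketch of that pattern, retrieving context and assembling a prompt, looks like the following. This is illustrative only: a real pipeline would rank by embedding similarity rather than word overlap, and the function names here are hypothetical:

```python
def retrieve(query, docs, k=2):
    """Rank documents by word overlap with the query (a toy stand-in for
    embedding similarity in a real RAG pipeline) and keep the top k."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Assemble the retrieved context and the question into an LLM prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Use only this context:\n{context}\n\nQuestion: {query}"
```

Given a threat-intel query, `build_prompt` grounds the model in the most relevant documents, which is the core move behind retrieval-augmented detection and analysis features.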
2026-04-01 4:35
IC Agentic Engineering Manager - Stargate
OpenAI
5000+
$293,000 – $490,000
United States
Full-time
Remote
false
About the Team
OpenAI’s Stargate Infrastructure team is building and operating the systems that power next-generation AI workloads at massive scale. This includes deploying and managing clusters, networks, and data center infrastructure across first-party and partner environments.
As the scale and complexity of these systems grow, we are investing in agentic systems and intelligent automation to improve how infrastructure is deployed, operated, and debugged. This team focuses on applying AI-driven approaches to real-world infrastructure workflows—enabling faster execution, higher reliability, and scalable operations.
About the Role
We are seeking an IC Agentic Engineering Manager to lead the development and application of agent-based systems for infrastructure delivery and operations within Stargate.
This is a player-coach role: you will contribute directly to system design and implementation while leading a small team. You will focus on applying agentic systems to infrastructure workflows such as deployment orchestration, system bring-up, issue triage, debugging, and capacity management.
This role is not focused on building general-purpose agent platforms. Instead, it is centered on applying agentic systems to solve concrete infrastructure problems, working closely with hardware, networking, and cluster teams.
Key Responsibilities
Design and build agent-based systems to support infrastructure deployment and operations
Identify high-impact opportunities to apply agents across workflows such as:
cluster bring-up and deployment readiness
incident triage and root cause analysis
system validation and health monitoring
capacity management and operational decision-making
Lead a small team while contributing directly as an IC across system design, development, and integration
Partner with infrastructure, hardware, and networking teams to integrate agentic systems into production workflows
Develop systems that leverage telemetry, logs, and system signals to enable closed-loop automation
Define evaluation frameworks to measure system effectiveness, reliability, and operational impact
Drive iteration from prototype to production, ensuring robustness and scalability
Qualifications
Strong software engineering background in distributed systems, infrastructure, or platform engineering
Experience building production automation systems or data-driven operational tooling
Experience applying AI, ML, or agent-based approaches to real-world systems or workflows
Ability to operate as a hands-on IC while leading a small team
Experience working cross-functionally with infrastructure, hardware, or systems teams
Strong problem-solving skills in complex, ambiguous environments
Preferred Skills
Experience with LLM-based systems, agents, or autonomous workflows
Background in infrastructure operations, SRE, or large-scale system deployment
Experience working on cluster bring-up, debugging, or data center infrastructure systems
Familiarity with telemetry, monitoring systems, and observability pipelines
Experience building internal tools or platforms for engineering productivity and operations
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
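The closed-loop automation this posting describes, mapping telemetry and system signals to remediation actions, can be sketched at its simplest as a rule table. This is purely illustrative and not OpenAI's system; every field name, threshold, and action label below is hypothetical, and a production agent would weigh many more signals and gate actions behind approvals:

```python
def triage(telemetry, error_threshold=0.05, temp_threshold=85):
    """Map per-node telemetry to a remediation action.

    Checks link error rate first, then GPU temperature; otherwise the node
    is reported healthy. A real agent would replace these rules with learned
    or LLM-driven triage over logs and traces.
    """
    actions = []
    for node, stats in telemetry.items():
        if stats["link_error_rate"] > error_threshold:
            actions.append((node, "drain_and_recable"))
        elif stats["gpu_temp_c"] > temp_threshold:
            actions.append((node, "throttle_and_ticket"))
        else:
            actions.append((node, "healthy"))
    return actions
```

Feeding the chosen actions back into the telemetry stream, and measuring whether they resolved the underlying issue, is what turns this from a rule table into the closed loop the role centers on.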
2026-04-01 2:36
Senior Forward Deploy Engineer
Armada
201-500
$154,560 – $193,200
United States
Full-time
Remote
false
About the Company
Armada is a full-stack edge infrastructure company delivering compute, connectivity, and sovereign AI/ML to some of the world’s most remote places. Named one of Fast Company's Most Innovative Companies, Armada’s solutions are deployed in over 60 countries globally for organizations ranging from energy to defense.
With over $200 million in funding, Armada is backed by top investors such as Microsoft (M12), Founders Fund, and has strategic partnerships including Starlink, Skydio, and NVIDIA. We are looking for the most brilliant minds in the world to join us.
Working at Armada means taking ownership, driving autonomy, and delivering impact. You’ll tackle challenges that haven’t been solved before and help build something transformative from the ground up. What you do here will not only define your career but help further Armada’s mission to bridge the digital divide for customers around the world.
About the role
At Armada, we are unlocking the limitless potential of AI to transform operations and improve lives in some of the most remote locations on Earth. From the expansive mines of Australia to the oil fields of Northern Canada, and the coffee plantations of Colombia, Armada offers a unique opportunity to tackle exciting AI and ML challenges on a global scale. We are actively seeking passionate AI Engineers with hands-on expertise across a range of domains, including real-time computer vision, statistical machine learning, natural language processing, transformers, control and navigation, reinforcement learning, and large-scale distributed AI systems.
Ideal candidates will possess strong skills in machine learning (ML), deep learning (DL), and real-time computer vision techniques. You will be responsible for building ML/DL models tailored to specific challenges, preparing datasets for testing, evaluating model performance, and deploying solutions in production environments. Familiarity with containerization, microservices architecture, and the ability to independently deploy ML models into production is essential.
If you are a self-driven individual with a passion for cutting-edge AI, we want to hear from you. Armada offers an unparalleled opportunity to confront some of the most thrilling AI and ML challenges in the world. Join our dynamic AI Engineering team as we deliver disruptive edge-compute systems capable of autonomous learning, prediction, and adaptation using vast, real-time datasets.
We are pioneers in developing high-performance computing solutions for self-driving cars, camera networks, robotics, drones, conversational agents, and real-time monitoring and diagnostic systems. Our vision is to empower AI systems to seamlessly and securely interact with the complexities and uncertainties of the real world, and our mission is to bridge the digital divide in the process.
Location. This role is office-based at our Bellevue, Washington office.
What You'll Do (Key Responsibilities)
Translating business requirements into requirements for AI/ML models.
Preparing data to train and evaluate AI/ML/DL models.
Building AI/ML/DL models by applying state-of-the-art algorithms, especially transformers. In some cases, leverage existing algorithms from academic or industrial research.
Testing and evaluating AI/ML/DL models, benchmarking their quality, and publishing the models, datasets, and evaluations.
Deploying the models in production by containerizing the models.
Working with customers and internal employees to refine the quality of the models.
Establishing continuous learning pipelines for models with online learning or transfer learning.
Building and deploying containerized applications in cloud or on-premises environments.
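One responsibility above is establishing continuous learning pipelines with online learning. The core of online learning is a per-sample update rather than batch retraining; a toy sketch for a linear model (illustrative only, with a squared-error gradient step and a hypothetical learning rate):

```python
def sgd_update(weights, features, target, lr=0.1):
    """One online-learning step for a linear model: predict on the incoming
    sample, then nudge the weights against the squared-error gradient."""
    pred = sum(w * x for w, x in zip(weights, features))
    err = pred - target
    return [w - lr * err * x for w, x in zip(weights, features)]
```

In a continuous pipeline, each production sample streams through an update like this (or its library equivalent, such as scikit-learn's `partial_fit`), so the deployed model tracks drifting data without full retrains.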
Required Qualifications
BS or MS degree in computer science, computational science/engineering, or a related technical field (or equivalent experience).
3+ years of work-related experience in software development with good Python, Java, and/or C/C++ programming skills.
Familiarity with containers, numeric libraries, modular software design.
Hands-on expertise with traditional statistical machine learning techniques as well as deep-learning and natural language processing modeling.
Expertise in supervised, unsupervised, and transfer learning techniques.
Hands-on expertise in machine learning techniques and algorithms with a strong background in state-of-the-art DNN architectures (Transformers, CNN, R-CNN, RNN, BERT, GAN, autoencoders, etc.) and experience in developing or using major deep learning frameworks (e.g., PyTorch, TensorFlow).
Experience with solving and using machine learning for real-world problems.
Preferred Experience and Skills
Demonstrable experience in building, programming, and integrating software and hardware for autonomous or robotic systems.
Proven experience producing computationally efficient software to meet real-time requirements.
Background with container platforms such as Kubernetes.
Strong analytical skills with a bias for action.
Strong time-management and organization skills to thrive in a fast-paced, dynamic environment.
Solid written and oral communications skills.
Good teamwork and interpersonal skills.
Compensation
For U.S.-based candidates: To ensure fairness and transparency, the starting base salary range for this role is listed below, varying based on location, experience, skills, and qualifications.
In addition to base salary, this role will also be offered equity and subsidized benefits (details available upon request).
Benefits
Competitive base salary and equity
Medical, dental, and vision (subsidized cost)
Health savings accounts (HSA), flexible spending accounts (FSA), and dependent care FSAs (DCFSA)
Retirement plan options, including 401(k) and Roth 401(k)
Unlimited paid time off (PTO)
14 paid company holidays per year
#LI-SM2
#LI-Onsite
Compensation: $154,560 – $193,200 USD
You're a Great Fit if You're
A go-getter with a growth mindset. You're intellectually curious, have strong business acumen, and actively seek opportunities to build relevant skills and knowledge
A detail-oriented problem-solver. You can independently gather information, solve problems efficiently, and deliver results with a "get-it-done" attitude
Thrive in a fast-paced environment. You're energized by an entrepreneurial spirit, capable of working quickly, and excited to contribute to a growing company
A collaborative team player. You focus on business success and are motivated by team accomplishment vs personal agenda
Highly organized and results-driven. Strong prioritization skills and a dedicated work ethic are essential for you
Equal Opportunity Statement
At Armada, we are committed to fostering a work environment where everyone is given equal opportunities to thrive. As an equal opportunity employer, we strictly prohibit discrimination or harassment based on race, color, gender, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other characteristic protected by law. This policy applies to all employment decisions, including hiring, promotions, and compensation. Our hiring is guided by qualifications, merit, and the business needs at the time.
Unsolicited Resumes and Candidates
Armada does not accept unsolicited resumes or candidate submissions from external agencies or recruiters. All candidates must apply directly through our careers page. Any resumes submitted by agencies without a prior signed agreement will be considered unsolicited and Armada will not be obligated to pay any fees.
2026-03-31 22:02
Senior Mission Success Engineer, US Federal
Armada
201-500
$154,560 – $193,200
United States
Full-time
Remote
false
About the Company
Armada is a full-stack edge infrastructure company delivering compute, connectivity, and sovereign AI/ML to some of the world’s most remote places. Named one of Fast Company's Most Innovative Companies, Armada’s solutions are deployed in over 60 countries globally for organizations ranging from energy to defense.
With over $200 million in funding, Armada is backed by top investors such as Microsoft (M12), Founders Fund, and has strategic partnerships including Starlink, Skydio, and NVIDIA. We are looking for the most brilliant minds in the world to join us.
Working at Armada means taking ownership, driving autonomy, and delivering impact. You’ll tackle challenges that haven’t been solved before and help build something transformative from the ground up. What you do here will not only define your career but help further Armada’s mission to bridge the digital divide for customers around the world.
About the role
At Armada, we are unlocking the limitless potential of AI to transform operations and improve lives in some of the most remote locations on Earth. From the expansive mines of Australia to the oil fields of Northern Canada, and the coffee plantations of Colombia, Armada offers a unique opportunity to tackle exciting AI and ML challenges on a global scale. We are actively seeking passionate AI Engineers with hands-on expertise across a range of domains, including real-time computer vision, statistical machine learning, natural language processing, transformers, control and navigation, reinforcement learning, and large-scale distributed AI systems.
Ideal candidates will possess strong skills in machine learning (ML), deep learning (DL), and real-time computer vision techniques. You will be responsible for building ML/DL models tailored to specific challenges, preparing datasets for testing, evaluating model performance, and deploying solutions in production environments. Familiarity with containerization, microservices architecture, and the ability to independently deploy ML models into production is essential.
If you are a self-driven individual with a passion for cutting-edge AI, we want to hear from you. Armada offers an unparalleled opportunity to confront some of the most thrilling AI and ML challenges in the world. Join our dynamic AI Engineering team as we deliver disruptive edge-compute systems capable of autonomous learning, prediction, and adaptation using vast, real-time datasets.
We are pioneers in developing high-performance computing solutions for self-driving cars, camera networks, robotics, drones, conversational agents, and real-time monitoring and diagnostic systems. Our vision is to empower AI systems to seamlessly and securely interact with the complexities and uncertainties of the real world, and our mission is to bridge the digital divide in the process.
Location. This role is office-based at our Bellevue, Washington office.
What You'll Do (Key Responsibilities)
Translating business requirements into requirements for AI/ML models.
Preparing data to train and evaluate AI/ML/DL models.
Building AI/ML/DL models by applying state-of-the-art algorithms, especially transformers. In some cases, leverage existing algorithms from academic or industrial research.
Testing and evaluating AI/ML/DL models, benchmarking their quality, and publishing the models, datasets, and evaluations.
Deploying the models in production by containerizing the models.
Working with customers and internal employees to refine the quality of the models.
Establishing continuous learning pipelines for models with online learning or transfer learning.
Building and deploying containerized applications in cloud or on-premises environments.
Required Qualifications
BS or MS degree in computer science, computational science/engineering, or a related technical field (or equivalent experience).
3+ years of work-related experience in software development with good Python, Java, and/or C/C++ programming skills.
Familiarity with containers, numeric libraries, modular software design.
Hands-on expertise with traditional statistical machine learning techniques as well as deep-learning and natural language processing modeling.
Expertise in supervised, unsupervised, and transfer learning techniques.
Hands-on expertise in machine learning techniques and algorithms with a strong background in state-of-the-art DNN architectures (Transformers, CNN, R-CNN, RNN, BERT, GAN, autoencoders, etc.) and experience in developing or using major deep learning frameworks (e.g., PyTorch, TensorFlow).
Experience with solving and using machine learning for real-world problems.
Preferred Experience and Skills
Demonstrable experience in building, programming, and integrating software and hardware for autonomous or robotic systems.
Proven experience producing computationally efficient software to meet real-time requirements.
Background with container platforms such as Kubernetes.
Strong analytical skills with a bias for action.
Strong time-management and organization skills to thrive in a fast-paced, dynamic environment.
Solid written and oral communications skills.
Good teamwork and interpersonal skills.
Compensation
For U.S.-based candidates: To ensure fairness and transparency, the starting base salary range for this role is listed below, varying based on location, experience, skills, and qualifications.
In addition to base salary, this role will also be offered equity and subsidized benefits (details available upon request).
Benefits
Competitive base salary and equity
Medical, dental, and vision (subsidized cost)
Health savings accounts (HSA), flexible spending accounts (FSA), and dependent care FSAs (DCFSA)
Retirement plan options, including 401(k) and Roth 401(k)
Unlimited paid time off (PTO)
14 paid company holidays per year
#LI-SM2
#LI-Onsite
Compensation: $154,560 – $193,200 USD
You're a Great Fit if You're
A go-getter with a growth mindset. You're intellectually curious, have strong business acumen, and actively seek opportunities to build relevant skills and knowledge
A detail-oriented problem-solver. You can independently gather information, solve problems efficiently, and deliver results with a "get-it-done" attitude
Thrive in a fast-paced environment. You're energized by an entrepreneurial spirit, capable of working quickly, and excited to contribute to a growing company
A collaborative team player. You focus on business success and are motivated by team accomplishment vs personal agenda
Highly organized and results-driven. Strong prioritization skills and a dedicated work ethic are essential for you
Equal Opportunity Statement
At Armada, we are committed to fostering a work environment where everyone is given equal opportunities to thrive. As an equal opportunity employer, we strictly prohibit discrimination or harassment based on race, color, gender, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other characteristic protected by law. This policy applies to all employment decisions, including hiring, promotions, and compensation. Our hiring is guided by qualifications, merit, and the business needs at the time.
Unsolicited Resumes and Candidates
Armada does not accept unsolicited resumes or candidate submissions from external agencies or recruiters. All candidates must apply directly through our careers page. Any resumes submitted by agencies without a prior signed agreement will be considered unsolicited and Armada will not be obligated to pay any fees.
2026-03-31 22:02
Senior Scientist, Biology & Pharmacology
Xaira
101-200
$10,000 – $15,000 / month
United States
Full-time
Remote
false
About Xaira Therapeutics
Xaira is an innovative biotech startup focused on leveraging AI to transform drug discovery and development. The company is leading the development of generative AI models to design protein and antibody therapeutics, enabling the creation of medicines against historically hard-to-drug molecular targets. It is also developing foundation models for biology and disease to enable better target elucidation and patient stratification. Collectively, these technologies aim to continually enable the identification of novel therapies and to improve success in drug development. Xaira is headquartered in the San Francisco Bay Area, Seattle, and London.
AI in Residence
AI in Residence is a highly selective role at the intersection of frontier machine learning and drug discovery. Designed as an industry alternative to a traditional postdoctoral position, the program is for exceptional researchers and engineers who want to apply advanced AI to real biomedical problems end to end, from data to deployed systems.
Residents join a small cohort working on high-impact AI efforts across Xaira. You’ll collaborate closely with AI scientists, research engineers, and drug discovery teams to design, build, and ship machine learning capabilities that directly influence therapeutic programs. This is hands-on, system-level work with real scientific consequence.
We’re looking for candidates with technical depth, intellectual independence, strong research judgment, and evidence of delivering high-quality work—whether through publications, open-source, or production systems.
What You’ll Do
Develop and advance ML models for biological, preclinical, and translational datasets (e.g., multimodal omics, imaging, text, assay data)
Design and implement scalable pipelines for data curation, training, evaluation, and inference integrated into discovery workflows
Own projects end-to-end: problem framing → prototyping → validation → deployment
Evaluate robustness and reliability (generalization, uncertainty, failure modes), plus interpretability where it supports scientific decision-making
Contribute technical leadership by proposing new directions, shaping platform capabilities, and raising engineering/research standards through collaboration
You Might Work On
Examples include (not limited to):
Foundation / representation models over multimodal biological and translational data
Methods for small, biased, noisy datasets; distribution shift; and uncertainty estimation
ML systems for experimental prioritization, assay interpretation, or translational signal discovery
Evaluation frameworks and benchmarks tailored to discovery decision-making
Tooling that makes models usable by scientists (interfaces, automation, monitoring)
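Several of the example areas above involve uncertainty estimation on small, noisy datasets. One standard approach is a deep ensemble: train several models independently and use their disagreement as an uncertainty proxy. A minimal sketch (illustrative only; simple callables stand in for trained models, and the names are hypothetical):

```python
import statistics

def ensemble_predict(models, x):
    """Ensemble-style prediction with uncertainty: return the mean prediction
    across independently trained models, plus the spread (population standard
    deviation) as a disagreement-based uncertainty proxy."""
    preds = [m(x) for m in models]
    return statistics.fmean(preds), statistics.pstdev(preds)
```

High spread flags inputs where the ensemble disagrees, which is exactly the signal a discovery workflow can use to deprioritize a prediction or route it for experimental validation.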
What Success Looks Like
You ship one or more models or pipelines that are used in real discovery workflows
Your work improves decision quality (e.g., better prioritization, faster iteration, clearer uncertainty)
You raise the bar on evaluation rigor and reproducibility (strong baselines, error analysis, reliable metrics)
You leave behind maintainable systems (tests, documentation, monitoring) that others can build on
We Value
Strong research judgment: choosing the right problems and knowing what “good evidence” looks like
Rigor: careful experimental design, ablations, error analysis, and honest reporting
Systems thinking: reliability, scalability, and maintainability—not just prototypes
Clear communication: writing, documentation, and sharing decisions/assumptions
Collaborative execution with scientific and engineering partners
Program Structure
Duration
6–12 months, with the possibility of extension or conversion to full-time
Start Dates
First hires beginning March 2026, with rolling applications and additional intakes in Summer and Fall 2026
Cohort Size
Small, highly selective cohort to enable meaningful ownership and close collaboration
Mentorship & Support
Dedicated technical mentor, plus structured feedback from senior AI, engineering, and scientific leadership
Publications & Presentations
We value scientific contribution and may support publications and conference presentations when appropriate. Publication scope and timing depend on project needs and are subject to internal review (e.g., IP and confidentiality). Authorship follows standard contribution-based guidelines.
Who Should Apply
We encourage applications from candidates who meet most of the following:
Recent MS or PhD graduates (or equivalent research experience) in ML/AI, computational biology, biomedical engineering, or related fields
Evidence of research excellence through high-quality publications or artifacts. Top venues (e.g., NeurIPS, ICML, ICLR, CVPR, ACL; Nature Methods, Cell Systems) are a plus, but strong preprints, open-source contributions, or shipped systems with demonstrated impact are equally compelling
Demonstrated ability to lead substantial technical work with originality—new modeling ideas, rigorous experiments, or production-grade systems adopted by others
Motivation to translate rigorous research into reliable, deployable AI systems that support therapeutic discovery
Please include a brief cover letter describing your interest in this role, why you’re excited about this area, and what you hope to gain from the experience.
Compensation:
The expected monthly compensation range is $10,000–$15,000, depending on experience and qualifications. We are open to higher compensation for candidates with exceptional experience or impact.
2026-03-31 18:31
Member of Technical Staff - Applied ML, RecSys
Liquid AI
51-100
United States
Full-time
Remote
false
About Liquid AI
Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.
The Opportunity
This is a rare chance to apply frontier sequential recommendation architectures to real enterprise problems at scale. You will own applied ML work end-to-end for recommendation system workloads, adapting Liquid Foundation Models for customers who need personalization and ranking capabilities that run efficiently under production constraints.
Unlike most recommendation roles that are siloed into a single product surface, this role gives you full ownership over how large-scale recommendation models are adapted, evaluated, and deployed for enterprise customers. Between engagements, you will build reusable applied tooling and workflows that accelerate future delivery.
If you care about data quality at scale, user behavior modeling, and making recommendation systems actually work in enterprise production environments, this is the role.
What We’re Looking For
We need someone who:
Takes ownership: Owns customer recommendation system engagements end-to-end, from requirements through delivery and evaluation.
Thinks at scale: Can reason about user interaction data, sequential modeling, feature engineering, and evaluation across large-scale production systems.
Is pragmatic: Optimizes for measurable customer outcomes (engagement, conversion, revenue lift) over theoretical novelty.
Communicates clearly: Can translate between customer business metrics and internal technical decisions, and push back when needed.
The Work
Act as the technical owner for enterprise customer engagements involving recommendation and ranking workloads
Translate customer requirements into concrete specifications for recommendation models
Design and execute data pipelines for user interaction data, feature engineering, and training data curation at scale
Fine-tune and adapt large-scale sequential recommendation models (e.g., HSTU-style architectures) for customer-specific use cases
Design task-specific evaluations for recommendation model performance (ranking quality, latency, throughput) and interpret results
Build reusable applied tooling and workflows that accelerate future customer engagements
Desired Experience
Must-have:
Hands-on experience building or fine-tuning recommendation models at scale (not just off-the-shelf collaborative filtering)
Experience with sequential recommendation architectures, user behavior modeling, or large-scale ranking systems
Strong intuition for data quality and evaluation design in recommendation contexts (offline metrics, A/B testing, business metric alignment)
Experience with large-scale data pipelines for user interaction data and feature engineering
Proficiency in Python and PyTorch with autonomous coding and debugging ability
Nice-to-have:
Experience with transformer-based recommendation architectures (HSTU, SASRec, BERT4Rec, or similar)
Experience delivering recommendation systems to external customers with measurable business outcomes
Familiarity with serving recommendation models under latency and throughput constraints
What Success Looks Like (Year One)
Independently owns and delivers enterprise recommendation system engagements with minimal oversight
Is trusted by customers as the technical owner, demonstrating strong judgment on the tradeoffs between model quality, latency, and business impact
Has built reusable applied workflows or tooling that accelerate future customer engagements
What We Offer
Real ML work: You will build and adapt large-scale recommendation models for enterprise customers, working with frontier architectures like HSTU under real production constraints.
Compensation: Competitive base salary with equity in a unicorn-stage company
Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
Financial: 401(k) matching up to 4% of base pay
Time Off: Unlimited PTO plus company-wide Refill Days throughout the year
2026-03-31 17:16
Member of Technical Staff - Post Training, Applied (Vision)
Liquid AI
51-100
United States
Full-time
Remote
false
About Liquid AI
Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.
The Opportunity
This is a rare chance to sit at the intersection of frontier vision-language models and real-world deployment. You'll own applied post-training work for VLMs end-to-end for some of the world's largest enterprises, while still contributing directly to Liquid's core multimodal model development.
Unlike most roles that force a trade-off between customer impact and foundational work, this role gives you both: deep ownership over how vision-language models are adapted, evaluated, and shipped, and a direct line into the evolution of Liquid's multimodal post-training stack.
If you care about visual understanding, data quality, evaluation, and making VLMs actually work in production, this is a chance to shape how applied multimodal AI is done at a foundation model company.
What We're Looking For
We need someone who:
Takes ownership: Owns VLM post-training projects end-to-end, from customer requirements through delivery and evaluation.
Thinks end-to-end: Can reason across visual data curation, training, alignment, and evaluation as a single system.
Is pragmatic: Optimizes for model quality and customer outcomes over publications or theory.
Communicates clearly: Can translate between customer needs and internal technical teams, and push back when needed.
The Work
Act as the technical owner for enterprise customer VLM post-training engagements.
Translate customer requirements into concrete multimodal post-training specifications and workflows.
Design and execute visual data generation, filtering, and quality assessment processes, including image-text pair curation, annotation pipelines, and synthetic data generation for visual tasks.
Run supervised fine-tuning, preference alignment, and reinforcement learning workflows for vision-language models.
Design task-specific evaluations for visual understanding, grounding, OCR, document parsing, and other multimodal capabilities.
Interpret results and feed learnings back into core post-training pipelines.
Desired Experience
Must-have:
Hands-on experience with data generation and evaluation for VLM or multimodal post-training.
Experience training or fine-tuning vision-language models using SFT, preference alignment, and/or RL.
Strong intuition for visual data quality, annotation design, and multimodal evaluation.
Familiarity with vision encoders, image-text architectures, and how visual representations interact with language model backbones.
Nice-to-have:
Experience with visual grounding, document understanding, OCR, or video understanding tasks.
Experience contributing to shared or general-purpose multimodal post-training infrastructure.
Prior exposure to customer-facing or applied ML delivery environments.
Familiarity with alignment or RL techniques beyond basic supervised fine-tuning in the multimodal setting.
What Success Looks Like (Year One)
Independently owns and delivers enterprise VLM post-training projects with minimal oversight.
Is trusted by customers as the technical owner, demonstrating strong judgment and delivery quality on multimodal workloads.
Has made durable contributions to Liquid's general-purpose multimodal post-training pipelines by feeding applied learnings back into baseline model development.
What We Offer
Real ML work: You will fine-tune vision-language models, generate multimodal data, and ship solutions, not configure API calls. Your work feeds directly back into our core model development.
Compensation: Competitive base salary with equity in a unicorn-stage company.
Health: We pay 100% of medical, dental, and vision premiums for employees and dependents.
Financial: 401(k) matching up to 4% of base pay.
Time Off: Unlimited PTO plus company-wide Refill Days throughout the year.
2026-03-31 17:16
Back-End Engineer - Team Agents
Taktile
101-200
Germany
Full-time
Remote
false
About the role
Taktile is building a platform for creating, publishing, and executing AI-powered agents that help teams automate complex workflows in financial services. The Agents team owns the agent execution runtime, tool orchestration, and platform infrastructure that makes this possible.
We are hiring a Back-End Engineer to help build and ship production features across this stack. You'll work alongside experienced engineers, contribute to real product impact from day one, and grow your skills in a fast-moving, high-ownership environment.
Taktile is a hybrid company. This role requires working at least 3 days per week from our Berlin HQ.
What You'll Do
Build and ship backend features for Taktile's AI agent platform in Python (FastAPI), deployed on AWS serverless infrastructure.
Own your work end-to-end: collaborate on scope, implement, test, release, and iterate based on real usage and customer feedback.
Improve how agents run, how they connect to external tools, and how they behave when something goes wrong.
Use leading-edge AI tools (e.g. Claude) on a daily basis to move faster, improve quality, and build AI-native capabilities where it makes sense.
Review pull requests with depth, improve test coverage and CI/CD, and raise the bar on reliability and engineering excellence.
Engage actively in team rituals such as daily syncs, planning sessions, demos, and technical deep-dive discussions.
Grow as an individual and accelerate your career by learning from experienced team members, contributing to a foundational layer of the product, and joining cross-team learning groups.
As part of career growth at Taktile, this role involves daytime ops duty and, in time, joining an on-call rotation, so you should be passionate about owning systems end to end and growing your DevSecOps skills.
Technical stack
Backend: Python (FastAPI, Pydantic)
Data: DynamoDB, Postgres
Cloud: AWS serverless (Lambda, API Gateway)
AI: LLM orchestration, tool-use frameworks, streaming execution
Frontend: React, TypeScript (you will collaborate closely, but this is not a front-end role)
Requirements
Strong engineering fundamentals with a passion for simplicity and precision.
Fluency in English, both written and spoken, is essential, as we operate in a remote environment requiring clear and effective communication. Strong English skills are also crucial for interacting efficiently with AI.
Prior industry experience with Python back-end development (this is not an entry-level position).
Experience building and operating RESTful APIs and working with databases.
Experience integrating with AWS or similar cloud providers.
Ideal, but not required
FastAPI and Pydantic experience.
DynamoDB or Postgres, SQLAlchemy.
Exposure to LLM application development, agent frameworks, or building developer tools.
Prior ops or on-call experience.
Experience with distributed systems, async task processing, or observability tooling (tracing, metrics, logging).
Our Offer
Work with colleagues who lift you up, challenge you, celebrate you, and help you grow. We come from many different backgrounds, but what we have in common is the desire to operate at the very top of our fields. If you are similarly capable, caring, and driven, you'll find yourself at home here.
Experience a truly flat hierarchy and communicate directly with founding team members. Having an opinion and voicing your ideas is not only welcome but encouraged, especially when they challenge the status quo.
Learn from experienced mentors and achieve tremendous personal and professional growth. Get to know and leverage our network of leading tech investors and advisors around the globe.
Receive a top-of-market equity and cash compensation package.
Get access to a self-development budget you can use to, e.g., attend conferences, buy books, or take classes.
Receive a new Apple MacBook Pro, as well as a meaningful home office set-up.
Our Stance
We're eager to meet talented and driven candidates regardless of whether they tick all the boxes. We're looking for someone who will add to our culture, not just fit within it. We strongly encourage individuals from groups traditionally underestimated and underrepresented in tech to apply.
We seek to actively recognize and combat racism, sexism, ableism, and ageism. We embrace and support all gender identities and expressions, and celebrate love in its many forms. We won't inquire about how you identify or whether you've experienced discrimination, but if you want to tell your story, we are all ears.
About us
Taktile helps financial institutions make smarter, safer decisions with the power of AI. Our software gives teams the tools to automate complex decisions, like who to onboard, how to underwrite, or when to flag suspicious activity, with full visibility and control.
By combining AI with a rich ecosystem of financial data, Taktile enables companies to adapt their decision-making in real time as markets, customer behavior, and risks evolve.
Our mission is to build the world's leading platform for automated decision-making in financial services, setting the standard for how AI is applied responsibly and effectively in this industry.
We were founded by machine learning and data science experts with deep experience in financial services. Today, our team works across Berlin, London, and New York, bringing together engineers, entrepreneurs, and researchers from companies like Google, Amazon, and Meta, as well as fast-growing startups and enterprise leaders.
Backed by top investors including Y Combinator, Index Ventures, Balderton Capital, and Tiger Global, along with the founders of Looker, GitHub, Mulesoft, Datadog, and UiPath, we're building a world-class organization across all functions and levels to power the next generation of AI-driven decision-making in financial services.
2026-03-31 17:01
Member of Technical Staff - Post Training, Applied (Audio)
Liquid AI
51-100
United States
Full-time
Remote
false
About Liquid AI
Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.
The Opportunity
This is a rare chance to own applied post-training work end-to-end for audio workloads, adapting Liquid Foundation Models for customers who need speech and audio capabilities that run on-device under real-time constraints.
You will act as the technical bridge between customer requirements and model delivery for audio tasks. You will lead engagements from scoping through evaluation, with full ownership over how audio models are adapted and shipped. Between engagements, you will build reusable applied workflows and tooling that accelerate future delivery.
If you care about audio data quality, speech model evaluation, and making audio models actually work in production for real customers, this is the role.
What We’re Looking For
We need someone who:
Takes ownership: Owns customer post-training projects end-to-end for audio workloads, from requirements through delivery and evaluation.
Thinks end-to-end: Can reason across audio data pipelines, speech-text alignment, model adaptation, and evaluation as a connected system.
Is pragmatic: Optimizes for model quality and customer outcomes over publications or theory.
Thrives under constraints: On-device, low-latency, memory-limited audio systems excite you. You see constraints as design parameters, not blockers.
The Work
Act as the technical owner for enterprise customer post-training engagements involving audio and speech workloads
Translate customer requirements into concrete post-training specifications for ASR, TTS, and speech-to-speech tasks
Design and execute data generation, preprocessing, augmentation, and quality filtering processes for audio corpora
Fine-tune and adapt audio/speech models for customer-specific use cases, owning delivery from requirements through deployment
Design task-specific evaluations for audio model performance (noise robustness, speaker variation, latency) and interpret results
Build reusable applied tooling and workflows that accelerate future customer engagements
Desired Experience
Must-have:
Hands-on experience with data generation and evaluation for ML model post-training
Experience training or fine-tuning models using SFT, preference alignment, and/or RL
Strong intuition for data quality and evaluation design
Experience with speech or audio ML models (ASR, TTS, audio understanding, vocoders, or speech-to-speech systems)
Proficiency in Python and PyTorch with autonomous coding and debugging ability
Nice-to-have:
Experience with audio data pipelines at scale (preprocessing, augmentation, quality filtering)
Experience delivering applied ML work to external customers with measurable outcomes
Familiarity with on-device deployment under latency and memory constraints
What Success Looks Like (Year One)
Independently owns and delivers enterprise post-training projects for audio workloads with minimal oversight
Is trusted by customers as the technical owner for audio engagements, demonstrating strong judgment and delivery quality
Has built reusable applied workflows or tooling that accelerate future customer engagements
What We Offer
Real ML work: You will fine-tune audio and speech models, build audio data pipelines, and ship solutions to enterprise customers under real-time on-device constraints.
Compensation: Competitive base salary with equity in a unicorn-stage company
Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
Financial: 401(k) matching up to 4% of base pay
Time Off: Unlimited PTO plus company-wide Refill Days throughout the year
2026-03-31 17:01
Senior Applied AI Manager
Oumi
11-50
United States
Full-time
Remote
false
Oumi · Hybrid (Seattle / SF / NY) · Full-time
About Oumi
Why we exist: Oumi is on a mission to make frontier AI truly open for all. We believe that AI will have a transformative impact on humanity. As such, AI should be developed openly and collectively, and it should be made universally accessible.
What we do: Oumi provides an end-to-end, AI-native platform to build custom AI models in hours, not months, automating the loop of evaluation, data synthesis, training, and repeat. Oumi also develops an open research stack and models in collaboration with academic partners and the open community.
The Role
We're looking for a Senior Applied AI Manager to own the strategy and execution for AI science at Oumi. This is a senior AI science leadership role in the company: you'll set the applied science agenda, build and lead the team, and be accountable for the science quality of every feature that ships on our platform.
Your scope spans the full model development lifecycle—data strategy, pre-training and post-training methodology, evaluation science, and production deployment—as well as the agentic systems that automate and improve each stage. You'll work closely with the CEO and product leadership to translate Oumi's company strategy into a concrete AI science roadmap, then execute against it with a growing team of ML engineers and applied researchers.
This role blends research and product shipping. You'll stay close to academic research as well as industry trends, drive experimentation, and translate breakthroughs into production systems that Oumi and our customers use every day.
What You'll Do
AI Science Strategy & Roadmap: Define and drive the research and engineering roadmap for AI science at Oumi. Translate company objectives into concrete milestones for model quality, capability, and efficiency—and make the hard prioritization calls when resources are scarce.
Team Building: Recruit, manage, and develop a high-performing team of ML engineers and applied researchers. Set a high bar for talent, create an environment of rigorous experimentation, and coach people toward increasing scope and independence.
Training Science: Lead experimentation across the full training stack—pre-training, supervised fine-tuning, alignment (RLHF, DPO, GRPO), distillation, curriculum learning, and data mixing—to systematically improve model quality with each generation.
Data Strategy: Own the data side of model development. Build intelligent pipelines for quality scoring, filtering, deduplication, and synthetic data generation. Develop a data-scientific understanding of what data actually moves the needle and use it to guide investment.
Evaluation & Feedback Loops: Design evaluation frameworks that go beyond static benchmarks. Build automated feedback loops where evaluation signals inform data selection, training decisions, and agent behavior—creating a flywheel of continuous improvement.
Agentic Workflows: Research and develop agent-based systems that orchestrate the model training lifecycle—from automated hyperparameter optimization to self-improving data curation—so training runs get smarter over time with less manual intervention.
Production & Deployment: Partner with infrastructure and product teams to ensure AI science features ship reliably and perform at quality.
Open Source & Community: Publish findings, contribute to open-source tooling, and collaborate with external researchers and academic partners. Represent Oumi's AI science work in the broader research community.
What You'll Bring
Experience: 5+ years of professional experience in ML research, ML engineering, or a closely related field, with a demonstrated track record of turning research into production systems. PhD in AI or equivalent industry experience.
Management: 1+ years of experience managing engineers or applied researchers. You've hired, coached, and retained strong technical talent.
ML Depth: Expertise across the model training lifecycle—pre-training, fine-tuning (SFT, RLHF, DPO), evaluation, and deployment. Hands-on experience training or substantially improving LLMs or VLMs.
Research Mindset: You design rigorous experiments, interpret results critically, and stay current with the literature. You know when to apply an existing technique and when to invent something new.
Agentic Systems: Experience building or working with LLM-powered automation, tool-use patterns, or multi-agent architectures. You think naturally about how to decompose complex tasks into agent-friendly steps.
Nice to Have
Ph.D. in Computer Science, Machine Learning, or a related field.
Publications in ML/AI venues (NeurIPS, ICML, ICLR, ACL, etc.).
Experience with data-centric ML approaches—data quality estimation, curriculum learning, or synthetic data generation.
Contributions to open-source ML frameworks or tooling.
Familiarity with ML infrastructure (Kubernetes, GPU clusters, orchestration frameworks).
Prior experience at an early-stage or high-growth startup where you wore multiple hats across research, engineering, and strategy.
2026-03-31 14:46
Senior Machine Learning Engineer, Voice AI
Together AI
201-500
$200,000 – $280,000
Full-time
Remote
false
About the Role
The Turbo team sits at the intersection of efficient inference (algorithms, architectures, engines) and post‑training / RL systems. We build and operate the systems behind Together’s API, including high‑performance inference and RL/post‑training engines that can run at production scale.
Our mandate is to push the frontier of efficient inference and RL‑driven training: making models dramatically faster and cheaper to run, while improving their capabilities through RL‑based post‑training (e.g., GRPO‑style objectives). This work lives at the interface of algorithms and systems: asynchronous RL, rollout collection, scheduling, and batching all interact with engine design, creating many knobs to tune across the RL algorithm, training loop, and inference stack. Much of the job is modifying production inference systems—for example, SGLang‑ or vLLM‑style serving stacks and speculative decoding systems such as ATLAS—grounded in a strong understanding of post‑training and inference theory, rather than purely theoretical algorithm design.
You’ll work across the stack—from RL algorithms and training engines to kernels and serving systems—to build and improve frontier models via RL pipelines. People on this team are often spiky: some are more RL‑first, some are more systems‑first. Depth in one of these areas plus appetite to collaborate across (and grow toward more full‑stack ownership over time) is ideal.
Requirements
We don’t expect anyone to check every box below. People on this team typically have deep expertise in one or more areas and enough breadth (or interest) to work effectively across the stack. The closer you are to full‑stack (inference + post‑training/RL + systems), the stronger the fit—but being spiky in one area and eager to grow is absolutely okay.
You might be a good fit if you:
Have strong expertise in at least one of the following, and are excited to collaborate across (and grow into) the others:
Systems‑first profile: Large‑scale inference systems (e.g., SGLang, vLLM, FasterTransformer, TensorRT, custom engines, or similar), GPU performance, distributed serving.
RL‑first profile: RL / post‑training for LLMs or large models (e.g., GRPO, RLHF/RLAIF, DPO‑like methods, reward modeling), and using these to train or fine‑tune real models.
Model architecture design for Transformers or other large neural nets.
Distributed systems / high‑performance computing for ML.
Are comfortable working from algorithms to engines:
Strong coding ability in Python
Experience profiling and optimizing performance across GPU, networking, and memory layers.
Able to take a new sampling method, scheduler, or RL update and turn it into a production‑grade implementation in the engine and/or training stack.
Have a solid research foundation in your area(s) of depth:
Track record of impactful work in ML systems, RL, or large‑scale model training (papers, open‑source projects, or production systems).
Can read new RL / post‑training papers, understand their implications on the stack, and design minimal, correct changes in the right layer (training engine vs. inference engine vs. data / API).
Operate well as a full‑stack problem solver:
You naturally ask: “Where in the stack is this really bottlenecked?”
You enjoy collaborating with infra, research, and product teams, and you care about both scientific quality and user‑visible wins.
Minimum qualifications
3+ years of experience working on ML systems, large‑scale model training, inference, or adjacent areas (or equivalent experience via research / open source).
Advanced degree in Computer Science, EE, or a related field, or equivalent practical experience.
Demonstrated experience owning complex technical projects end‑to‑end.
If you’re excited about the role and strong in some of these areas, we encourage you to apply even if you don’t meet every single requirement.
Responsibilities
Advance inference efficiency end‑to‑end
Design and prototype algorithms, architectures, and scheduling strategies for low‑latency, high‑throughput inference.
Implement and maintain changes in high‑performance inference engines (e.g., SGLang‑ or vLLM‑style systems and Together’s inference stack), including kernel backends, speculative decoding (e.g., ATLAS), quantization, etc.
Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost.
Unify inference with RL / post‑training
Design and operate RL and post‑training pipelines (e.g., RLHF, RLAIF, GRPO, DPO‑style methods, reward modeling) where 90+% of the cost is inference, jointly optimizing algorithms and systems.
Make RL and post‑training workloads more efficient with inference‑aware training loops—for example, async RL rollouts, speculative decoding, and other techniques that make large‑scale rollout collection and evaluation cheaper.
Use these pipelines to train, evaluate, and iterate on frontier models on top of our inference stack.
Co‑design algorithms and infrastructure so that objectives, rollout collection, and evaluation are tightly coupled to efficient inference, and quickly identify bottlenecks across the training engine, inference engine, data pipeline, and user‑facing layers.
Run ablations and scale‑up experiments to understand trade‑offs between model quality, latency, throughput, and cost, and feed these insights back into model, RL, and system design.
Own critical systems at production scale
Profile, debug, and optimize inference and post‑training services under real production workloads.
Drive roadmap items that require real engine modification—changing kernels, memory layouts, scheduling logic, and APIs as needed.
Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously.
Provide technical leadership (Staff level)
Set technical direction for cross‑team efforts at the intersection of inference, RL, and post‑training.
Mentor other engineers and researchers on full‑stack ML systems work and performance engineering.
About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.
Compensation
We offer competitive compensation, startup equity, health insurance, and other benefits. The US base salary range for this full-time position is $200,000 – $280,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.
Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
Please see our privacy policy at https://www.together.ai/privacy
2026-03-31 11:46