The AI job market moves fast. We keep up so you don't have to.
Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.
New AI Opportunities
Showing 61 – 79 of 79 jobs
Full Stack AI Engineer
Ryz Labs
51-100
Argentina
Contractor
Remote
Ryz Labs is looking for a Full Stack AI Engineer – Prod Support to build intelligent, secure, and scalable identity experiences across our client's platforms. In this role, you'll work end-to-end—from frontend UX to backend services and AI models—focusing on authentication, authorization, identity verification, fraud detection, and personalization. You'll partner closely with product, security, and data teams to embed AI into identity workflows while maintaining the highest standards of privacy, security, and reliability.
Responsibilities:
- Design, build, and deploy AI/ML solutions to automate ITSM ticket triage, classification, prioritization, and routing
- Develop NLP-based models for ticket summarization, root-cause detection, and resolution recommendation
- Implement AI-powered virtual agents / copilots to assist support engineers and end users
- Partner with Product Support, SRE, and Engineering teams to understand recurring issues and automate resolution workflows
- Build intelligent runbooks and self-healing automation for common incidents and service requests
- Enhance knowledge management by auto-generating and updating KB articles from resolved tickets
- Integrate AI solutions with ITSM platforms (HALO)
- Develop APIs, workflows, and event-driven automations across monitoring, logging, and ITSM tools
- Ensure seamless handoff between AI systems and human support engineers
- Analyze ticket, incident, and operational data to identify automation opportunities
- Train, evaluate, and continuously improve ML models using real-world support data
- Implement monitoring for model performance, drift, and accuracy in production
- Ensure AI solutions meet reliability, security, and compliance standards
- Implement guardrails, explainability, and auditability for AI-driven decisions
- Contribute to AI governance and responsible AI practices
Qualifications/Requirements of the Position:
- 5+ years of experience as a Full Stack Engineer, Platform Engineer, or AI Engineer, with ownership of production systems
- Strong proficiency in JavaScript/TypeScript and a modern frontend framework (React, Next.js, or equivalent)
- Backend development experience with Python, Java, or Node.js, including building and maintaining secure APIs
- Hands-on experience delivering AI/ML solutions into production environments
- Strong experience in Python and/or Java for backend and ML development
- Hands-on experience with NLP, LLMs, or GenAI (e.g., transformers, embeddings, RAG, prompt engineering)
- Experience integrating AI solutions with ITSM tools (HALO, etc.)
- Knowledge of REST APIs, microservices, and cloud platforms (AWS, Azure, or GCP)
- Familiarity with MLOps, CI/CD, model deployment, and monitoring
- Solid understanding of ITIL / ITSM processes (Incident, Problem, Change, Request)
- Experience working with Product Support, SRE, or NOC teams
- Ability to translate operational pain points into automation and AI use cases
Knowledge, Skills, and Abilities Required:
- Background in cybersecurity, fraud detection, trust & safety, or abuse prevention
- Experience with graph-based ML, NLP for security signals, or time-series anomaly detection
- Knowledge of adversarial ML, model evasion techniques, or secure model design
- Experience building systems that operate under strict latency or reliability constraints
- Exposure to chatbots, copilots, or agentic AI frameworks
- Experience in high-volume, 24x7 production support environments
- Publications, talks, or open-source contributions in AI or security
About RYZ Labs:
RYZ Labs is a startup studio founded in 2021 by two lifelong entrepreneurs. The founders of RYZ have worked at some of the world's largest tech companies and some of the most iconic consumer brands. They have lived and worked in Argentina for many years and have decades of experience in Latam. What brought them together was their passion for the early phases of company creation and the idea of attracting the brightest talents in order to build industry-defining companies in a post-pandemic world.
Our teams are remote and distributed throughout the US and Latam. They use the latest cutting-edge cloud computing technologies to create scalable and resilient applications. We aim to provide diverse product solutions for different industries and plan to build a large number of startups in the upcoming years.
At RYZ, you will find yourself working with autonomy and efficiency, owning every step of your development. We provide an environment of opportunities, learning, growth, expansion, and challenging projects. You will deepen your experience while sharing and learning from a team of great professionals and specialists.
Our values and what to expect:
- Customer First Mentality - Every decision we make should be made through the lens of the customer.
- Bias for Action - Urgency is critical; expect that the timeline to get something done is accelerated.
- Ownership - Step up if you see an opportunity to help, even if it's not your core responsibility.
- Humility and Respect - Be willing to learn, be vulnerable, and treat everyone who interacts with RYZ with respect.
- Frugality - Being frugal and cost-conscious helps us do more with less.
- Deliver Impact - Get things done most efficiently.
- Raise our Standards - Always be looking to improve our processes, our team, and our expectations. The status quo is not good enough and never should be.
2026-02-21 18:14
Full Stack AI Engineer – BuilderEx
Ryz Labs
51-100
Argentina
Contractor
Remote
Remote position, only for professionals based in Argentina or Uruguay
At Ryz Labs we are looking for a Full Stack AI Engineer – BuilderEx to build intelligent, secure, and scalable identity experiences across our platforms. In this role, you will work end-to-end—from frontend UX to backend services and AI models—focusing on authentication, authorization, identity verification, fraud detection, and personalization. This role combines full stack engineering, AI/ML integration, and identity architecture, with close collaboration across product, security, and platform teams to deliver privacy-first, highly reliable systems.
Essential Responsibilities:
- Design, build, and maintain full-stack applications powering identity and access management (IAM) experiences.
- Develop and integrate AI/ML models for identity use cases such as fraud detection, anomaly detection, risk-based authentication, and identity verification.
- Lead and execute SSO migrations across products and platforms, consolidating authentication flows while minimizing user disruption.
- Drive domain consolidation initiatives by unifying identity systems, services, and user data models across multiple platforms or brands.
- Improve developer experience (DevEx) by building internal tools, SDKs, APIs, and documentation that simplify identity integrations.
- Design and evolve secure, scalable APIs supporting authentication, authorization, and identity data services.
- Partner closely with Security, Platform, and Product teams to implement and standardize protocols and patterns such as OAuth 2.0, OpenID Connect, SAML, JWT, and zero-trust architectures.
- Ensure AI-powered identity systems are observable, explainable, and production-ready, with robust monitoring and feedback loops.
- Balance security, performance, and usability while maintaining high standards for privacy and compliance.
- Contribute to architectural decisions, technical design discussions, and code quality standards.
Qualifications / Requirements of the Position:
- 5+ years of experience as a Full Stack Engineer, Platform Engineer, or AI Engineer with ownership of production systems.
- Strong proficiency in JavaScript/TypeScript and modern frontend frameworks (React, Next.js, or equivalent).
- Backend development experience with Python, Java, or Node.js, including building secure, scalable APIs.
- Hands-on experience delivering AI/ML solutions into production environments.
- Solid understanding of identity and access management (IAM) concepts, including authentication, authorization, and identity lifecycle.
- Proven experience leading or contributing to SSO migrations using OAuth 2.0, OpenID Connect, and/or SAML.
- Experience with domain consolidation or identity unification initiatives across multiple applications or platforms.
- Demonstrated ability to improve developer experience (DevEx) through internal tooling, APIs, SDKs, or platform improvements.
- Experience working with cloud platforms (AWS, GCP, or Azure) and containerized environments (Docker, Kubernetes).
- Strong security mindset, including experience designing systems with privacy, compliance, and resilience in mind.
- Ability to collaborate cross-functionally and communicate complex technical concepts clearly.
Knowledge, Skills, and Abilities Required:
- Background in cybersecurity, fraud detection, trust & safety, or abuse prevention.
- Experience with graph-based ML, NLP for security signals, or time-series anomaly detection.
- Knowledge of adversarial ML, model evasion techniques, or secure model design.
- Experience building systems operating under strict latency or reliability constraints.
- Prior work in regulated or high-risk environments.
- Security certifications or relevant coursework (e.g., OSCP, CISSP concepts).
- Experience with SIEM/SOAR tools or security telemetry platforms.
- Publications, talks, or open-source contributions in AI, security, or related fields.
2026-02-21 18:14
Senior ML Operations (MLOps) Engineer
Eight Sleep
101-200
Full-time
Remote
Eight Sleep is the world's first sleep fitness company. Our mission is to fuel human potential through optimal sleep. We use innovative technology, detailed design, and proven science and data to personalize and improve each night for everybody—changing the way people sleep forever and for the better. Backed by leading Silicon Valley investors, we have been recognized as one of Fast Company's Most Innovative Companies in 2018, 2022, and 2023.
Our temperature-regulated technology, the Pod, is an absolute game changer, improving people's health and happiness by changing the way they sleep. The Pod was also recognized two years in a row by TIME's "Best Inventions of the Year." It is available for purchase in North America (the United States and Canada) and throughout the United Kingdom, Europe (Belgium, France, Germany, Italy, Netherlands, Spain, Sweden, Denmark), and Australia via eightsleep.com. We're excited by the success of the Pod to date and still have a long way to go toward achieving our mission.
Join our team as a Sr MLOps Engineer to help us bring current and next generations of Pod ML models to life. You'll be a part of a small team designing and implementing solutions with high levels of autonomy to bring our members better sleep. Your work will go directly to our fleet of existing Pods with low friction and direct impact to the business.
We are a fast-moving and fast-growing company, and we embrace individuals with a growth mindset and a strong desire to help us achieve our mission: improving people's lives through optimal sleep.
How you'll contribute
- Pioneer Cutting-Edge Technology: Introduce and implement cutting-edge ML technologies, integrating them into our products and processes to enable the future of health monitoring
- End-to-End Ownership: Own design and operation of robust ML infrastructure – building scalable data, model, and deployment pipelines that ensure reliable delivery of models to production.
- Cross-functional Collaboration: Partner with R&D, firmware, data, and backend teams to ensure ML inference operates reliably and scales to Pods everywhere.
- Optimize for Performance: Drive cost-effective, scalable, and high-performance ML systems by optimizing compute, storage, and deployment resources across training and inference
- Enhance Tooling and Platforms: Develop tooling, microservices, and frameworks to streamline data processing, experimentation, and deployment
- Effective Remote Communication: Thrive in a remote work environment, ensuring clear and direct communication.
What you need to succeed
- Proven Expertise: 5+ years of software engineering experience with a focus on ML infrastructure, distributed systems, or large-scale data processing in Python (e.g., PyTorch, TensorFlow, or similar).
- ML Operations Mastery: Hands-on experience with ML workflow orchestration and CI/CD pipelines for model deployment.
- Scalable Deployment Experience: Demonstrated success shipping ML models to production at scale, handling telemetry, monitoring, and feedback loops across large device fleets or user populations.
- Cloud-Native Expertise: Strong experience with AWS (Lambda, ECS, DynamoDB, CloudWatch) or equivalent cloud platforms for serving and monitoring ML systems.
- Adaptive Problem Solver: A fast-paced, collaborative, and iterative approach to tackling complex problems.
What sets you apart:
- Expertise in real-time ML workflows and streaming systems (e.g., Kinesis, Kafka, Flink).
- Demonstrated expertise in optimizing ML infrastructure for efficiency, latency, and cloud cost at scale.
- Understanding of secure ML operations, privacy practices, and compliance considerations, particularly for health-related or IoT data.
- Familiarity with health, wellness, or IoT domains, especially wearables or medical-grade devices.
Why join Eight Sleep?
Innovation in a culture of excellence
Join us in a workplace where innovation isn't just encouraged - it's a standard. Our flagship product, the Pod, is a testament to our culture of excellence, beloved by hundreds of thousands of customers worldwide. At Eight Sleep, you will be part of a team that continuously pushes the boundaries of technology in sleep fitness.
Immediate responsibility and accelerated career growth
From your first day, you'll take on substantial responsibilities that have a direct impact on our core business and product success. We are a small team that empowers you to own your projects and see the tangible effects of your efforts, enhancing both your professional growth and our company's trajectory. Your path will be challenging but rewarding, perfect for those who thrive in fast-paced environments aiming for high standards.
Collaboration with exceptional talent
Work alongside other bright minds like you: at Eight Sleep, exceptional intelligence and a passion for breakthroughs are the norms. Our team members are not only experts in their fields but also avid innovators who thrive in our dynamic, fast-paced environment.
Equitable compensation and continuous equity investment
We extend equity participation to every full-time team member, recognizing and rewarding your direct contributions to our success. This includes periodic equity refreshments based on performance, ensuring that as Eight Sleep grows and succeeds, so do you – perfectly aligning your achievements with the broader triumphs of the company.
Your own Pod - and other great benefits
Every Eight Sleep employee receives the very product that defines our mission: a Pod of their own. If you join us you'll get your own Pod, along with*:
- Full access to health, vision, and dental insurance for you and your dependents
- Supplemental life insurance
- Flexible PTO
- Commuter benefits to ease your daily commute
- Paid parental leave
*List of benefits may vary depending on your location
At Eight Sleep we continually celebrate the diverse community different individuals cultivate. As an equal opportunity employer, we stay true to our values by ensuring everyone feels they can flourish and grow. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status.
2026-02-20 18:37
Senior Forward Deployed Engineer
Langfuse
11-50
€90,000 – €160,000
European Union
Full-time
Remote
About Langfuse
Langfuse is an open-source LLM engineering platform that helps teams build useful AI applications via tracing, evaluation, and prompt management (mission, product). We are now part of ClickHouse.
- We're building the "Datadog" of this category; model capabilities continue to improve, but building useful applications is really hard, both in startups and enterprises.
- Largest open source solution in this category: trusted by 19 of the Fortune 50, >2k customers, >26M monthly SDK downloads, >6M Docker pulls.
- We joined ClickHouse in January 2026 because LLM observability is fundamentally a data problem and Langfuse already ran on ClickHouse. Together we can move faster on product while staying true to open source and self-hosting, and join forces on GTM and sales to accelerate revenue.
- Previously backed by Y Combinator, Lightspeed, and General Catalyst.
- We're a small, engineering-heavy, and experienced team in Berlin and San Francisco. We are also hiring for engineering in EU timezones and expect one week per month in our Berlin office (how we work).
Your impact
- Make our best customers successful in production and help them expand over time.
- Improve net revenue retention via adoption, outcomes, and proactive risk management.
- Scale your impact to our large user base and OSS community by contributing to documentation, guides, and other public content.
- Create a tight loop from "what customers do" (your deep understanding of top customers) → "what we should build" (feedback to the product engineering team) → "how the GTM org explains it" (GTM enablement).
What you'll do
1) Own strategic customer relationships (portfolio ownership)
- Be the primary technical partner for 10–20 strategic accounts (large, highly engaged, or aligned with our roadmap).
- Run onboarding, success planning, and regular deep dives into the customer's AI architecture and workflows.
- Drive adoption of key product capabilities across the lifecycle: initial setup, team workflows, scaling, and expansion.
2) Production readiness + architectural guidance
- Lead customers through production readiness: instrumentation strategy, data modeling choices, evaluation setup, alerting/monitoring expectations, security & privacy considerations, and operational playbooks.
- Provide pragmatic architecture guidance for real LLM systems (agents, tool use, RAG, evals, prompt iteration, dataset curation, feedback loops).
- Build small prototypes, reference implementations, and demos when it unblocks a customer. Turn them into reusable templates that can be published.
3) Escalation leadership
- Own the technical leadership during high-severity customer moments: triage, root-cause coordination, and crisp communication.
- Be the point of contact for the customer, partner closely with Engineering, and be proactive in how you resolve issues.
- Establish escalation paths, runbooks, and prevention mechanisms for repeat issues.
4) Turn customer signal into product + docs + enablement
- Aggregate patterns across your portfolio and translate them into actionable product feedback (clear problem statements, impact, and recommended solutions).
- Create customer-facing assets (docs, guides, best practices, demos) that start as one customer's question and become durable collateral.
- Enable the broader ClickHouse GTM org: training, playbooks, crisp messaging, and "how to win" narratives for AI engineering teams.
What we're looking for
Must-haves
- Senior experience in a customer-facing technical role: TAM, Solutions Engineer, Solutions Architect, Forward Deployed Engineer, Customer Success Engineer, or similar where you owned outcomes.
- Strong technical foundation: you can debug integrations and reason about distributed systems, APIs/SDKs, and cloud infrastructure.
- Demonstrated work in applied AI / AI engineering: building, operating, or enabling LLM applications (agents, RAG, eval pipelines, prompt tooling, experimentation).
- Excellent communication: you can lead technical meetings, drive decisions, and write docs engineers actually follow.
- High ownership: you ship artifacts, close loops, and create repeatable systems rather than bespoke one-offs.
Nice-to-haves
- Experience with devtools / OSS ecosystems and developer-centric GTM.
- Familiarity with observability concepts (tracing/metrics/logs), data pipelines, and evaluation frameworks.
- Track record of technical writing or enablement (workshops, reference architectures, public docs).
Process
We can run the full process to your offer letter in less than 7 days (hiring process).
Tech Stack
We run a TypeScript monorepo: Next.js on the frontend, Express workers for background jobs, PostgreSQL for transactional data, ClickHouse for tracing at scale, S3 for file storage, and Redis for queues and caching. You should be familiar with a good chunk of this, but we trust you'll pick up the rest quickly (Stack, Architecture).
How we ship
Link to handbook
- We trust you to take ownership (ownership overview) for your area. You identify what to build, propose solutions (RFCs), and ship them. Everyone here thinks about the user experience and the technical implementation at the same time. Everyone manages their own Linear.
- You're never alone. Anyone from the team is happy to go into a whiteboard session with you. 15 minutes of shared discussion can very much improve the overall output.
- We implement maker schedule and communication. There are two recurring meetings a week: a Monday check-in on priorities (15 min) and a demo session on Fridays (60 min).
- Code reviews are mentorship. New joiners get all PRs reviewed to learn the codebase, patterns, and how the systems work (onboarding guide).
- We use AI as much as possible in our workflows to make our users happy. We encourage everyone to experiment with new tooling and AI workflows.
Why Langfuse (now part of ClickHouse)
- This role puts you at the forefront of the AI revolution, partnering with engineering teams who are building the technology that will define the next decade(s).
- This is an open-source devtools company. We ship daily, talk to customers constantly, and fight for great DX. Reliability and performance are central requirements.
- Your work ships under your name. You'll appear on changelog posts for the features you build, and during launch weeks, you'll produce videos to announce what you've shipped to the community. You'll own the full delivery end to end.
- We're solving hard engineering problems: figuring out which features actually help users improve AI product performance, building SDKs developers love, visualizing data-rich traces, rendering massive LLM prompts and completions efficiently in the UI, and processing terabytes of data per day through our ingestion pipeline.
- You'll work closely with the ClickHouse team and learn how they build a world-class infrastructure company. We're in a period of strong growth: Langfuse is growing organically and accelerating through ClickHouse's GTM. (Why we joined ClickHouse)
- If you wonder what to build next, our users are a Slack message or a GitHub discussions post away.
- You're on a continuous learning journey. The AI space develops at breakneck speed and our customers are at the forefront. We need to be ready to meet them where they are and deliver the tools they need just-in-time.
2026-02-20 10:07
Senior Technical Account Manager
Langfuse
11-50
€90,000 – €160,000
European Union
Full-time
Remote
About Langfuse
Langfuse is an open-source LLM engineering platform that helps teams build useful AI applications via tracing, evaluation, and prompt management (mission, product). We are now part of ClickHouse.
- We're building the "Datadog" of this category; model capabilities continue to improve, but building useful applications is really hard, both in startups and enterprises.
- Largest open source solution in this category: trusted by 19 of the Fortune 50, >2k customers, >26M monthly SDK downloads, >6M Docker pulls.
- We joined ClickHouse in January 2026 because LLM observability is fundamentally a data problem and Langfuse already ran on ClickHouse. Together we can move faster on product while staying true to open source and self-hosting, and join forces on GTM and sales to accelerate revenue.
- Previously backed by Y Combinator, Lightspeed, and General Catalyst.
- We're a small, engineering-heavy, and experienced team in Berlin and San Francisco. We are also hiring for engineering in EU timezones and expect one week per month in our Berlin office (how we work).
Your impact
- Make our best customers successful in production and help them expand over time.
- Improve net revenue retention via adoption, outcomes, and proactive risk management.
- Scale your impact to our large user base and OSS community by contributing to documentation, guides, and other public content.
- Create a tight loop from "what customers do" (your deep understanding of top customers) → "what we should build" (feedback to the product engineering team) → "how the GTM org explains it" (GTM enablement).
What you'll do
1) Own strategic customer relationships (portfolio ownership)
- Be the primary technical partner for 10–20 strategic accounts (large, highly engaged, or aligned with our roadmap).
- Run onboarding, success planning, and regular deep dives into the customer's AI architecture and workflows.
- Drive adoption of key product capabilities across the lifecycle: initial setup, team workflows, scaling, and expansion.
2) Production readiness + architectural guidance
- Lead customers through production readiness: instrumentation strategy, data modeling choices, evaluation setup, alerting/monitoring expectations, security & privacy considerations, and operational playbooks.
- Provide pragmatic architecture guidance for real LLM systems (agents, tool use, RAG, evals, prompt iteration, dataset curation, feedback loops).
- Build small prototypes, reference implementations, and demos when it unblocks a customer. Turn them into reusable templates that can be published.
3) Escalation leadership
- Own the technical leadership during high-severity customer moments: triage, root-cause coordination, and crisp communication.
- Be the point of contact for the customer, partner closely with Engineering, and be proactive in how you resolve issues.
- Establish escalation paths, runbooks, and prevention mechanisms for repeat issues.
4) Turn customer signal into product + docs + enablement
- Aggregate patterns across your portfolio and translate them into actionable product feedback (clear problem statements, impact, and recommended solutions).
- Create customer-facing assets (docs, guides, best practices, demos) that start as one customer's question and become durable collateral.
- Enable the broader ClickHouse GTM org: training, playbooks, crisp messaging, and "how to win" narratives for AI engineering teams.
What we're looking for
Must-haves
- Senior experience in a customer-facing technical role: TAM, Solutions Engineer, Solutions Architect, Forward Deployed Engineer, Customer Success Engineer, or similar where you owned outcomes.
- Strong technical foundation: you can debug integrations and reason about distributed systems, APIs/SDKs, and cloud infrastructure.
- Demonstrated work in applied AI / AI engineering: building, operating, or enabling LLM applications (agents, RAG, eval pipelines, prompt tooling, experimentation).
- Excellent communication: you can lead technical meetings, drive decisions, and write docs engineers actually follow.
- High ownership: you ship artifacts, close loops, and create repeatable systems rather than bespoke one-offs.
Nice-to-haves
- Experience with devtools / OSS ecosystems and developer-centric GTM.
- Familiarity with observability concepts (tracing/metrics/logs), data pipelines, and evaluation frameworks.
- Track record of technical writing or enablement (workshops, reference architectures, public docs).
Process
We can run the full process to your offer letter in less than 7 days (hiring process).
Tech Stack
We run a TypeScript monorepo: Next.js on the frontend, Express workers for background jobs, PostgreSQL for transactional data, ClickHouse for tracing at scale, S3 for file storage, and Redis for queues and caching. You should be familiar with a good chunk of this, but we trust you'll pick up the rest quickly (Stack, Architecture).
How we ship
Link to handbook
- We trust you to take ownership (ownership overview) for your area. You identify what to build, propose solutions (RFCs), and ship them. Everyone here thinks about the user experience and the technical implementation at the same time. Everyone manages their own Linear.
- You're never alone. Anyone from the team is happy to go into a whiteboard session with you. 15 minutes of shared discussion can very much improve the overall output.
- We implement maker schedule and communication. There are two recurring meetings a week: a Monday check-in on priorities (15 min) and a demo session on Fridays (60 min).
- Code reviews are mentorship. New joiners get all PRs reviewed to learn the codebase, patterns, and how the systems work (onboarding guide).
- We use AI as much as possible in our workflows to make our users happy. We encourage everyone to experiment with new tooling and AI workflows.
Why Langfuse (now part of ClickHouse)
- This role puts you at the forefront of the AI revolution, partnering with engineering teams who are building the technology that will define the next decade(s).
- This is an open-source devtools company. We ship daily, talk to customers constantly, and fight for great DX. Reliability and performance are central requirements.
- Your work ships under your name. You'll appear on changelog posts for the features you build, and during launch weeks, you'll produce videos to announce what you've shipped to the community. You'll own the full delivery end to end.
- We're solving hard engineering problems: figuring out which features actually help users improve AI product performance, building SDKs developers love, visualizing data-rich traces, rendering massive LLM prompts and completions efficiently in the UI, and processing terabytes of data per day through our ingestion pipeline.
- You'll work closely with the ClickHouse team and learn how they build a world-class infrastructure company. We're in a period of strong growth: Langfuse is growing organically and accelerating through ClickHouse's GTM. (Why we joined ClickHouse)
- If you wonder what to build next, our users are a Slack message or a GitHub discussions post away.
- You're on a continuous learning journey. The AI space develops at breakneck speed and our customers are at the forefront. We need to be ready to meet them where they are and deliver the tools they need just-in-time.
2026-02-20 10:07
Researcher, Frontier Cybersecurity Risks
OpenAI
5000+
$295,000 – $445,000
United States
Full-time
Remote
About the team
The Safety Systems org ensures that OpenAI’s most capable models can be responsibly developed and deployed. We build evaluations, safeguards, and safety frameworks that help our models behave as intended in real-world settings.

The Preparedness team is an important part of the Safety Systems org at OpenAI and is guided by OpenAI’s Preparedness Framework. Frontier AI models have the potential to benefit all of humanity, but they also pose increasingly severe risks. To ensure that AI promotes positive change, the Preparedness team helps us prepare for the development of increasingly capable frontier AI models. This team is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models.

The mission of the Preparedness team is to:
Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards risks whose impact could be catastrophic.
Ensure we have concrete procedures, infrastructure, and partnerships to mitigate these risks and to safely handle the development of powerful AI systems.

Preparedness tightly connects capability assessment, evaluations, internal red teaming, and mitigations for frontier models, as well as overall coordination on AGI preparedness. This is fast-paced, exciting work with far-reaching importance for the company and for society.

About the role
Models are becoming increasingly capable, moving from tools that assist humans to agents that can plan, execute, and adapt in the real world. As we push toward AGI, cybersecurity becomes one of the most important and urgent frontiers: the same systems that can accelerate productivity can also accelerate exploitation. As a Researcher for cybersecurity risks, you will help design and implement an end-to-end mitigation stack to reduce severe cyber misuse across OpenAI’s products. This role requires strong technical depth and close cross-functional collaboration to ensure safeguards are enforceable, scalable, and effective. You’ll contribute directly to building protections that remain robust as products, model capabilities, and attacker behaviors evolve.

In this role, you will:
Design and implement mitigation components for model-enabled cybersecurity misuse, spanning prevention, monitoring, detection, and enforcement, under the guidance of senior technical and risk leadership.
Integrate safeguards across product surfaces in partnership with product and engineering teams, helping ensure protections are consistent, low-latency, and able to scale with usage and new model capabilities.
Evaluate technical trade-offs within the cybersecurity risk domain (coverage, latency, model utility, and user privacy) and propose pragmatic, testable solutions.
Collaborate closely with risk and threat modeling partners to align mitigation design with anticipated attacker behaviors and high-impact misuse scenarios.
Execute rigorous testing and red-teaming workflows, helping stress-test the mitigation stack against evolving threats (e.g., novel exploits, tool-use chains, automated attack workflows) and across different product surfaces, then iterate based on findings.

You might thrive in this role if you:
Have a passion for AI safety and are motivated to make cutting-edge AI models safer for real-world use.
Bring demonstrated experience in deep learning and transformer models.
Are proficient with frameworks such as PyTorch or TensorFlow.
Possess a strong foundation in data structures, algorithms, and software engineering principles.
Are familiar with methods for training and fine-tuning large language models, including distillation, supervised fine-tuning, and policy optimization.
Excel at working collaboratively with cross-functional teams across research, security, policy, product, and engineering.
Have significant experience designing and deploying technical safeguards for abuse prevention, detection, and enforcement at scale.
(Nice to have) Bring background knowledge in cybersecurity or adjacent fields.

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss, or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and carry related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
2026-02-20 2:37
Partner AI Deployment Engineer
OpenAI
5000+
Germany
Full-time
Remote
false
About the role
We are looking for a Partner AI Deployment Engineer (P-ADE) to lead technical delivery with OpenAI partners across EMEA and help scale customer deployments built on the OpenAI platform. This role focuses on working across a wide range of customer use cases, supporting the design, deployment, and scaling of production-grade AI solutions delivered through partners.

You will work closely with partner delivery teams, alongside Solutions Engineers (SEs), Forward Deployed Engineers (FDEs), and other ADEs, to move customer engagements from initial design through to stable, scaled production. Your work will accelerate time to value, reduce delivery risk, and ensure solutions meet OpenAI’s standards for quality, safety, and reliability. You will collaborate closely with GTM, Applied, and Research to support partner-led enterprise adoption.

This role is based in Munich or Paris. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In this role, you will:
Act as a primary technical delivery partner for a set of OpenAI partners across EMEA, supporting customer deployments across multiple industries and use cases.
Work with partner delivery teams and customer stakeholders to translate solution designs into deployable, production-ready architectures on the OpenAI platform.
Support customer time to value through hands-on prototyping, integration support, architectural guidance, and troubleshooting during critical phases of delivery.
Collaborate closely with SEs, FDEs, and other ADEs to ensure the right technical expertise is engaged from design through production rollout.
Help partners operationalize solutions by addressing the scalability, reliability, security, and safety considerations required for enterprise production environments.
Contribute to reusable deployment patterns, reference architectures, and delivery guidance that enable repeatable execution across partner engagements.
Act as a technical quality and governance point during deployments, helping ensure solutions meet OpenAI’s standards and best practices before and after go-live.
Capture and synthesise feedback from real customer deployments and share insights with Applied, Research, and partner teams to improve delivery playbooks and platform capabilities.

You’ll thrive in this role if you:
Have 8+ years of experience in technical consulting, solution delivery, or a similar role, working with senior technical and business leaders on complex enterprise deployments.
Have experience delivering large, multi-stakeholder technical projects in partnership with boutique services organisations, system integrators, or similar delivery environments.
Have strong hands-on experience building, integrating, and operating production software using modern languages such as Python or JavaScript.
Have designed, deployed, and supported generative AI and/or machine learning solutions in real-world production environments.
Have practical experience working with the OpenAI platform in customer-facing or delivery contexts.
Are a clear communicator who can work effectively with partner engineers, internal teams, and customer stakeholders.
Take ownership of delivery problems end to end and are comfortable operating in ambiguous, fast-moving environments.
Bring a collaborative, humble mindset and enjoy working across partners and internal teams to deliver successful customer outcomes.

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss, or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and carry related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
2026-02-20 2:37
Computational Protein Design
Talent Labs
11-50
United States
Full-time
Remote
false
We are seeking a Computational Protein Design Scientist to join our team working at the interface of generative AI and synthetic biology. You will play a key role within a team of scientists designing and engineering proteins for specific functions. This is an opportunity to help shape and grow an organization that advances artificial intelligence and applies it to longstanding scientific challenges. Using your blend of computational expertise and in-depth biochemical understanding of proteins, you will generate insights to improve protein functionality and operate at the interface between our machine learning and experimental platform units, working closely with both to seamlessly integrate AI generations and lab validation data.

Who you are
You are a computational protein designer. You have a proven track record of leveraging novel computational tools and knowledge of biochemistry or structural biology to design proteins for functional requirements and applications in synthetic biology.
You are a data scientist. You have experience owning data-driven projects that generate biological insights.
You are a successful scientist. You have a PhD (or equivalent industry experience) in computational biology, bioinformatics, computer science, biochemistry, structural biology, physics, biophysics, bio/chem engineering, or a related field. Your research experience applied computational methods to protein biochemistry.
You collaborate with experimentalists. You have experience collaborating with experimental (i.e., wet lab) teams to achieve protein design objectives.
You are an owner. You have a proven track record of delivering successful commercial and/or academic research projects, demonstrated through publications, patents, and/or commercially impactful outcomes, as well as other contributions to the scientific community.
You are a connector. You love to connect people and enable them to perform at their highest levels. You have excellent communication and presentation skills, with the ability to convey complex scientific concepts to both technical and non-technical audiences.
You are a mission-driven innovator. You are passionate about making a positive impact on the world, whether it's for patients, partners, or beyond. You are motivated by the end goal and are flexible in adapting to different approaches and methodologies.
You thrive in a dynamic and ambiguous environment. You excel in a fast-paced setting where goals must be achieved efficiently and urgently. You have a keen eye for creating, then optimizing, processes to improve speed and repeatability. You are an advocate for lab automation, through both hardware and software.

What sets you apart (preferred but not required)
You have experience with generative AI. You have experience leveraging generative AI (or other machine learning models) in synthetic biology applications.
You have experience engineering gene editing tools, such as nucleases and integrases.
You have experience with homology-based and structural bioinformatics, and are able to answer scientific questions using very large databases.
You have helped scale a young biotech before. You have worked in startups and helped the company grow.

Your Responsibilities
Leverage our proprietary generative AI models to design proteins for experimental validation:
Analyze protein design problems based on functional requirements, biochemistry, structural biology, and sequence homology.
Generate designs using our proprietary generative AI models and optimize designs for experimental validation.
Coordinate with our lab-based protein engineers to plan and optimize the design process and validation strategy.
Leverage our proprietary data to improve our models:
Analyze and leverage our experimental results to improve the next round of designs and increase our success rate over validation rounds.
Collaborate with machine learning scientists to fine-tune and prompt our models.
Collaboration and communication:
Be an effective interface between machine learning model development and experimental validation.
Capture bioengineering learnings and feed them back to our machine learning unit, and vice versa.
Foster a collaborative and innovative environment, proactively finding opportunities to innovate and create clarity and alignment between different units.
Contribute to our computational tools:
Help improve the way we use, serve, and integrate our AI models by feeding back to the software engineers and foundational machine learning unit.
Help improve our data management systems and workflows.
Scientific excellence and self-development:
Work to the highest scientific standards (publication-grade work).
Stay on top of relevant developments in synthetic biology.
Continue building your understanding of generative AI as well as expanded areas of protein and cell biology.
Participate in knowledge sharing, e.g. organize and present at our internal reading group.
Attend and present at conferences when relevant.

Apply
We offer highly competitive compensation and benefits packages, including:
Private health insurance
Pension/401(k) contributions
Generous leave policies (including gender-neutral parental leave)
Hybrid working
Travel opportunities and more

We also offer a stimulating work environment and the opportunity to shape the future of synthetic biology through the application of breakthrough generative models. We welcome applicants from all backgrounds, and we are committed to building a team that represents a variety of backgrounds, perspectives, and skills.
2026-02-20 1:52
Lead Software Engineer
Eloquent AI
11-50
United States
Full-time
Remote
false
Meet Eloquent AI
At Eloquent AI, we’re building the next generation of AI Operators: multimodal, autonomous systems that execute complex workflows across fragmented tools with human-level precision. Our technology goes far beyond chat: it sees, reads, clicks, types, and makes decisions, transforming how work gets done in regulated, high-stakes environments.

We’re already powering some of the world’s leading financial institutions and insurers, fundamentally changing how millions of people manage their finances every day. From automating compliance reviews to handling customer operations, our Operators are quietly replacing repetitive, manual tasks with intelligent, end-to-end execution.

Headquartered in San Francisco with a global footprint, Eloquent AI is a fast-growing company backed by top-tier investors. Join us to work alongside world-class talent in AI, engineering, and product as we redefine the future of financial services.

Your Role
As a Lead Engineer at Eloquent AI, you will lead the development of AI-powered full-stack applications while overseeing and mentoring other engineers. You’ll remain hands-on across the stack, but also take ownership of technical direction, code quality, and delivery standards. You’ll work closely with engineers, AI researchers, and product teams to ensure scalable, reliable systems that power real-time AI-driven workflows. This role requires strong engineering fundamentals, leadership capability, and the ability to operate effectively in a fast-paced, AI-first environment.

You will:
Design and build full-stack applications that power AI-driven workflows for enterprise users.
Oversee and review the work of other engineers, ensuring high-quality, production-ready code.
Provide technical guidance, architectural direction, and hands-on support where needed.
Develop high-performance front-end interfaces for AI agent control, monitoring, and visualisation.
Build scalable backend services that support real-time AI interactions, knowledge retrieval, and automation.
Work closely with AI researchers and ML engineers to integrate LLMs, RAG, and automation into production-ready systems.
Establish engineering best practices across testing, deployment, and performance optimisation.
Continuously iterate and refine AI-driven products, balancing speed with robustness.

Requirements
8+ years of hands-on experience building full-stack production applications.
Prior experience leading or mentoring engineers.
Proficiency in React, TypeScript, and Node.js.
Backend experience using Python.
Strong knowledge of cloud infrastructure (AWS, GCP, or Azure) and scalable architectures.
Understanding of AI-powered applications (LLMs, chat interfaces, agentic workflows).
Ability to work in a fast-paced, high-autonomy environment.
Strong collaboration skills across engineering, product, and AI teams.

Bonus Points If…
You have experience building AI-powered applications with LLM integrations.
You’ve worked in high-performance startups or enterprise AI environments.
You have a sharp eye for UI/UX design and have built intuitive, AI-driven interfaces.
You have experience with GraphQL, WebSockets, or real-time data streaming.
You’ve contributed to open-source projects or have built developer tools for AI.
2026-02-20 0:22
Scientist I, Platform Development and Antibody Screening
Xaira
101-200
United Kingdom
Full-time
Remote
false
About Xaira Therapeutics
Xaira is an innovative biotech startup focused on leveraging AI to transform drug discovery and development. The company is leading the development of generative AI models to design protein and antibody therapeutics, enabling the creation of medicines against historically hard-to-drug molecular targets. It is also developing foundation models for biology and disease to enable better target elucidation and patient stratification. Collectively, these technologies aim to continually enable the identification of novel therapies and to improve success in drug development. Xaira is headquartered in the San Francisco Bay Area, Seattle, and London.
Position Overview
Xaira is seeking enthusiastic and motivated candidates to join our team as Research Engineers. We welcome candidates across the spectrum of experience. Teams thrive when they are diverse (across all axes), and we encourage all eligible applicants to apply.
The role is based in our London office, located near Old Street. Our team is highly collaborative, operating on the belief that hard problems are best solved by multiple people working towards a clear goal, bringing and sharing their expertise with the team. We operate a hybrid working culture based on trust. Members of the team are typically in the office 3 days a week.
Requirements
Industry experience as a research engineer, in an AI-related company.
Excited to work, learn and teach within a collaborative team working on challenging problems.
Desirable
Below is a list of qualities/experiences that align with the kinds of things that we are looking for. Please do not read this as an extension of the “requirements” section! We recognise that experiences, opportunities and life-paths vary.
Masters (or equivalent)/PhD in AI-related field.
Public codebases or contribution to public GitHub repositories.
Experience building and training neural networks.
Experience in distributed training and inference.
Experience profiling and optimising large-scale AI models.
Knowledge or experience in BioAI.
If you are a motivated individual with a passion for applying AI to advance drug discovery and improve human health, we encourage you to apply and join us in our mission to make a positive difference in the world.
Xaira Therapeutics is an equal-opportunity employer. We believe that our strength is in our differences. Our commitment to building a diverse and inclusive team began on day one, and it will never end.
TO ALL RECRUITMENT AGENCIES: Xaira Therapeutics does not accept agency resumes. Please do not forward resumes to our jobs alias or employees. Xaira Therapeutics is not responsible for any fees related to unsolicited resumes.
2026-02-19 19:22
Design Director
Tenstorrent
1001-5000
$100,000 – $500,000
United States
Full-time
Remote
false
Tenstorrent is leading the industry on cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch, and we share a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.

Tenstorrent is accelerating the future of AI and high-performance compute by building industry-leading CPU and AI architectures. As an Automotive and Robotics SoC Architect, you will define scalable, top-down system architectures that unify our CPU and AI technologies for next-generation automotive applications. This senior technical role shapes the architectural direction of our automotive and robotics portfolio, ensuring our products meet the industry's highest expectations for performance, safety, reliability, and security. This position is central to how Tenstorrent delivers world-class automotive solutions and requires strong technical leadership, systems thinking, and cross-functional collaboration.
This role is remote, based out of North America.
We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.
Who You Are
A systems thinker who can architect complex SoCs from concept to execution.
A strong communicator who can articulate technical direction across engineering teams and external partners.
Someone with deep knowledge of safety-critical systems and the unique needs of automotive environments.
An innovator who can identify future use cases and propose next-generation architectural solutions.
A leader who thrives in a highly technical, cross-functional, fast-moving environment.
What We Need
Bachelor’s, Master’s, or Ph.D. in Electrical Engineering, Computer Engineering, or related field.
Extensive experience designing complex SoCs, ideally in automotive applications.
Proficiency in hardware description languages such as Verilog or VHDL.
Experience with hardware/software co-design and co-verification.
Knowledge of automotive safety standards (e.g., ISO 26262) and security principles.
Someone comfortable with up to 25% international travel.
Experience with cameras, sensors, and related automotive technologies is a plus.
What You Will Learn
How cutting-edge CPU and AI architectures are adapted for automotive-grade environments.
Best-in-class methodologies for safety-critical SoC design, verification, and system integration.
How to translate emerging automotive use cases into scalable, future-proof SoC architectures.
Approaches to hardware-level security, robustness, and cyber-resilience in automotive compute systems.
Cross-functional collaboration strategies that drive innovation across architecture, software, DV, and product teams.
Compensation for all engineers at Tenstorrent ranges from $100k to $500k, including base and variable compensation targets. Experience, skills, education, background, and location all impact the actual offer made.
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.

This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E1, and E2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.
2026-02-19 18:37
Backend Software Engineer - Engine Team (Voice Agent)
Deepgram
201-500
$150,000 – $220,000
United States
Full-time
Remote
false
Company Overview
Deepgram is the leading platform underpinning the emerging trillion-dollar Voice AI economy, providing real-time APIs for speech-to-text (STT) and text-to-speech (TTS) and for building production-grade voice agents at scale. More than 200,000 developers and 1,300+ organizations build voice offerings that are ‘Powered by Deepgram’, including Twilio, Cloudflare, Sierra, Decagon, Vapi, Daily, Cresta, Granola, and Jack in the Box. Deepgram’s voice-native foundation models are accessed through cloud APIs or as self-hosted and on-premises software, with unmatched accuracy, low latency, and cost efficiency. Backed by a recent Series C led by leading global investors and strategic partners, Deepgram has processed over 50,000 years of audio and transcribed more than 1 trillion words. There is no organization in the world that understands voice better than Deepgram.

Company Operating Rhythm
At Deepgram, we expect an AI-first mindset: AI use and comfort aren’t optional; they’re core to how we operate, innovate, and measure performance. Every team member at Deepgram is expected to actively use and experiment with advanced AI tools, and even build their own into their everyday work. We measure how effectively AI is applied to deliver results, and consistent, creative use of the latest AI capabilities is key to success here. Candidates should be comfortable adopting new models and modes quickly, integrating AI into their workflows, and continuously pushing the boundaries of what these technologies can do.

Additionally, we move at the pace of AI. Change is rapid, and you can expect your day-to-day work to evolve just as quickly. This may not be the right role if you’re not excited to experiment, adapt, think on your feet, and learn constantly, or if you’re seeking something highly prescriptive with a traditional 9-to-5.

Opportunity
Deepgram is looking for a backend software engineer to lead the design and implementation of Deepgram’s Voice Agent product. You will design and implement secure, robust, and scalable services for speech processing; build integrations supporting telephony providers, RAG systems, and diverse deployment scenarios; engineer for testability and observability within a complex chain of AI models; and more. Your skill at building highly reusable code that overcomes technical challenges is paired with an intuition for delightful user experiences. You will be a critical voice on Deepgram’s Product and Engineering teams, driving high-impact products from start to finish.

What You’ll Do
Improve Deepgram’s core inference services, including networking, speech processing, model orchestration, and observability.
Develop integrations with cutting-edge in-house, third-party, and open-source AI models for perception and managing conversational dynamics.
Debug complex system issues involving networking, scheduling, and highly concurrent workloads.
Rapidly customize backend services to support our customer needs.
Partner with Product to design and implement new services, features, and/or products end to end.

You’ll Love This Role If You
Thrive in a fast-paced, impact-driven environment where learning new skills on the fly is not only encouraged but a regular necessity.
Enjoy balancing decisions about product and feature maturity to decide when to make minimally invasive changes versus when to incorporate detailed design work.

It’s Important To Us That You Have
3+ years of experience in an industry role.
Programming experience in Rust (or C, C++), with competence in Python.
Excellent written and verbal communication and organizational skills.
A high level of experience and understanding of version control, preferably git.
Comprehensive experience with UNIX-style systems.

It Would Be Great If You Had
Experience with low-latency, multi-model orchestration for AI-enabled applications.
Experience with audio processing.

Benefits & Perks*
Holistic health: medical, dental, and vision benefits; annual wellness stipend; mental health support; life, STD, and LTD income insurance plans.
Work/life blend: unlimited PTO; generous paid parental leave; flexible schedule; 12 paid US company holidays; quarterly personal productivity stipend; one-time stipend for home office upgrades; 401(k) plan with company match; tax savings programs.
Continuous learning: learning/education stipend; participation in talks and conferences; Employee Resource Groups; AI enablement workshops and sessions.

*For candidates outside of the US, we use an Employer of Record model in many countries, which means benefits are administered locally and governed by country-specific regulations. Because of this, benefits will differ by region; in some cases international employees receive benefits US employees do not, and vice versa. As we scale, we will continue to evaluate where we can create more alignment, but a 1:1 global benefits structure is not always legally or operationally possible.

Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC, and NVIDIA, Deepgram has raised over $215M in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!

Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together, and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate. We are happy to provide accommodations for applicants who need them.
2026-02-19 17:52
Staff DevOps Engineer
webAI
101-200
United States
Full-time
Remote
About Us:
webAI is pioneering the future of artificial intelligence by establishing the first distributed AI infrastructure dedicated to personalized AI. We recognize the evolving demands of a data-driven society for scalability and flexibility, and we firmly believe that the future of AI lies in distributed processing at the edge, bringing computation closer to the source of data generation.
Our mission is to build a future where a company's valuable data and intellectual property remain entirely private, enabling the deployment of large-scale AI models directly on standard consumer hardware without compromising the information embedded within those models. We are developing an end-to-end platform that is secure, scalable, and fully under the control of our users, empowering enterprises with AI that understands their unique business.
We are a team driven by truth, ownership, tenacity, and humility, and we seek individuals who resonate with these core values and are passionate about shaping the next generation of AI.
About the Role:
We are seeking a Staff DevOps Engineer to architect, build, and scale secure infrastructure for deploying AI workloads across cloud and edge environments. This is a high-impact, staff-level individual contributor role where you will drive infrastructure strategy, lead technical initiatives, and serve as the subject matter expert on cloud architecture, security best practices, and platform reliability. You will design scalable, automated infrastructure solutions that enable our AI platform to operate efficiently across diverse deployment scenarios, from public cloud to on-premises and edge computing environments. This role requires deep technical expertise, architectural thinking, and the ability to translate complex requirements into production-ready infrastructure automation.
Responsibilities:
Design and architect secure, scalable cloud and edge infrastructure for deploying AI workloads across multi-cloud (AWS, Azure, GCP) and hybrid environments
Build and maintain production-grade Infrastructure as Code (IaC) using Terraform, Ansible, or Pulumi, managing 100+ resources with GitOps workflows and automated validation
Design and operate production Kubernetes clusters optimized for AI/ML workloads with GPU support, implementing container security, multi-tenancy, and resource optimization
Implement secure CI/CD pipelines with integrated security controls (SAST, DAST, vulnerability scanning, secrets management) and automated deployment workflows for containerized AI models
Lead MLOps infrastructure initiatives including model deployment pipelines, versioning, feature stores, experiment tracking, and monitoring for model performance and drift
Design comprehensive observability and monitoring using Prometheus, Grafana, ELK, or Datadog with distributed tracing, APM, and real-time alerting aligned to SLIs/SLOs
Implement security best practices including least-privilege access, encryption at rest/in transit, network segmentation, and automated compliance validation
Lead incident response and reliability initiatives, participate in on-call rotation, conduct post-mortems, and drive continuous improvement for system reliability
Architect disaster recovery and business continuity strategies with automated backup, failover, and recovery processes
Develop reusable infrastructure modules and templates to accelerate environment provisioning and standardize deployment patterns across teams
Mentor mid-level and senior engineers on cloud architecture, DevOps best practices, and platform reliability through design reviews and technical guidance
Drive technical documentation and knowledge sharing, including runbooks, architecture decision records (ADRs), and infrastructure standards
Qualifications:
7+ years of hands-on experience in DevOps, Site Reliability Engineering, or Infrastructure Engineering with a proven track record of architecting production systems
Expert-level proficiency with Docker, Kubernetes (CKA/CKAD preferred), and cloud-native technologies in production environments
5+ years implementing Infrastructure as Code with Terraform, Ansible, or Pulumi, managing large-scale (50+) cloud resources
Deep experience with cloud platforms (AWS, Azure, or GCP) including compute, networking, storage, and managed services
Proven experience building and scaling CI/CD pipelines with integrated security controls (GitHub Actions, GitLab CI, Jenkins, ArgoCD)
Strong programming skills in Python (preferred for automation), Bash, or Go for infrastructure tooling and automation
Production experience with observability and monitoring tools: Prometheus, Grafana, ELK, CloudWatch, Datadog, or similar
Experience with MLOps workflows: model deployment automation, versioning, and lifecycle management
Demonstrated experience with GitOps methodologies and declarative infrastructure management
Strong understanding of security best practices: encryption, secrets management, identity and access management (IAM), network security
Excellent written and verbal communication skills for technical documentation and cross-functional collaboration
Preferred Skills:
Experience architecting multi-cloud or hybrid cloud environments with portability and interoperability considerations
Hands-on experience deploying large language models (LLMs) or transformer models at scale with model serving infrastructure
Expertise in Zero Trust architecture and modern security patterns for cloud-native applications
Experience with service mesh technologies (Istio, Linkerd) for microservices communication and observability
Strong understanding of AI/ML infrastructure: feature stores, model registries, A/B testing infrastructure, and model monitoring
Experience with edge computing deployments and distributed system architectures
Cost optimization expertise: FinOps practices, resource rightsizing, and cloud cost management
Experience mentoring or leading technical initiatives across engineering teams
Certifications: CKA, CKAD, Terraform Associate, AWS Solutions Architect, Azure Administrator, or GCP Professional Cloud Architect
Core Values:
We at webAI are committed to living out the core values we have put in place as the foundation on which we operate as a team. We seek individuals who exemplify the following:
Truth - Emphasizing transparency and honesty in every interaction and decision.
Ownership - Taking full responsibility for one’s actions and decisions, demonstrating commitment to the success of our clients.
Tenacity - Persisting in the face of challenges and setbacks, continually striving for excellence and improvement.
Humility - Maintaining a respectful and learning-oriented mindset, acknowledging the strengths and contributions of others.
Benefits:
Competitive salary and performance-based incentives
Comprehensive health, dental, and vision benefits package
401k match (US-based only)
$200/month health and wellness stipend
$400/year continuing education credit
$500/year Function Health subscription (US-based only)
Free parking for in-office employees
Unlimited approved PTO
Parental leave for eligible employees
Supplemental life insurance
webAI is an Equal Opportunity Employer and does not discriminate against any employee or applicant on the basis of age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. We adhere to these principles in all aspects of employment, including recruitment, hiring, training, compensation, promotion, benefits, social and recreational programs, and discipline. In addition, it is the policy of webAI to provide reasonable accommodation to qualified employees who have protected disabilities to the extent required by applicable laws, regulations and ordinances where a particular employee works.
2026-02-19 16:07
Full Stack Product Engineer
Ideogram
51-100
Canada
Full-time
Remote
About Ideogram
Ideogram’s mission is to make world-class design accessible to everyone, multiplying human creativity. We build proprietary generative media models and AI-native creative workflows, tackling unsolved challenges in graphic design. Our team includes builders with a track record of technology breakthroughs, including early research in Diffusion Models, Google’s Imagen, and Imagen Video. We care about design, taste, and craft as much as research and engineering, shipping experiences that creatives actually love.
We’ve raised nearly $100M, led by Andreessen Horowitz and Index Ventures. Headquartered in Toronto with a growing team in NYC, we're scaling fast, aiming to triple over the next year. We're a flat team with a culture of high ownership, collaboration, and mentorship. Explore the Ideogram 3.0, Canvas, and Character blog posts, and try Ideogram at ideogram.ai.
About The Role
As a Full-Stack Product Engineer at Ideogram, you'll build the products that put generative AI directly into the hands of creators. You'll work across the entire stack, from crafting delightful user experiences to optimizing backend systems that serve millions, with a relentless focus on shipping features that users love. We're looking for someone who combines product instinct with strong ownership, user empathy, and the ability to move fast in an evolving AI landscape.
What We're Looking For
Product & AI Mindset
Deep curiosity about generative AI and genuine excitement about its potential to empower creators
Strong product intuition; you think about user problems first, then architect solutions
Experience building features where AI is core to the user experience (not just a backend detail)
Ability to navigate ambiguity and turn open-ended problems into shipped features
AI-Native Full Stack Execution
Experience building and shipping full stack applications with real user impact
Comfortable working across frontend and backend systems
Familiarity with cloud infrastructure and modern web technologies
Can design APIs and data models that support evolving product needs
Use AI-native engineering tools (e.g., Claude Code, Codex, or similar) to meaningfully accelerate development velocity, debugging, and codebase comprehension
Ownership & Execution
Self-starter who takes initiative to identify opportunities and drive them to completion
Operates with urgency; you ship incremental value and iterate based on real user feedback
Comfortable working with minimal direction in a fast-moving environment
Takes responsibility for outcomes, not just code; you care about whether users love what you build
Collaboration & Communication
Can explain technical concepts to both engineers and non-technical stakeholders
Seeks feedback, acknowledges mistakes, and learns quickly
Pushes for quality through constructive code review and collaboration
Bachelor's degree in Computer Science, Engineering, a related field, or equivalent practical experience
Our Stack
We primarily use React and Python. Familiarity with the following technologies is a plus, but not required:
OpenAPI & gRPC
Kubernetes
Redis & Memcached
GCP, Google Bigtable, Google BigQuery, Google Spanner, Google Pub/Sub
Docker & Terraform
Cloudflare
Nice to Have
Experience integrating ML models into production applications (inference, prompt engineering, fine-tuning workflows)
Track record of shipping consumer-facing AI products or features
Contributions to design systems, component libraries, or developer tooling
Experience with experimentation frameworks and feature flagging
Familiarity with real-time systems or high-throughput applications
Our Culture
We’re a team of exceptionally talented, curious builders who love solving tough problems and turning bold ideas into reality. We move fast, collaborate deeply, and operate without unnecessary hierarchy, because we believe the best ideas can come from anyone. Everyone at Ideogram rolls up their sleeves to make our products and our customers successful. We thrive on curiosity, creativity, and shared ownership. We believe that small, dedicated teams working together with trust and purpose can move faster, think bigger, and create amazing things.
Ideogram is committed to welcoming everyone, regardless of gender identity, orientation, or expression. Our mission is to create belonging and remove barriers so everyone can create boldly.
What We Offer
💸 Competitive compensation and equity designed to recognize the value and impact of your contributions to Ideogram’s success.
🌴 4 weeks of vacation to recharge and explore.
🩺 Comprehensive health, vision, and dental coverage starting on day one.
💰 RRSP/401(k) with employer match up to 4% to invest in your future from the moment you join.
💻 Top-of-the-line tools and tech to fuel your creativity and productivity.
📍 Toronto HQ perks: Steps from Union Station and the PATH, with daily in-office lunches and dinners.
🔍 Autonomy to explore and experiment: whether you’re testing new ideas, running large-scale experiments, or diving into research, you’ll have access to the compute and resources you need when there’s a clear business or creative use case. We encourage curiosity and bold thinking.
🌱 A culture of learning and growth, where curiosity is encouraged and mentorship is part of the journey.
2026-02-19 15:07
Senior Engineering Manager, Reinforcement Learning Environments (RLE)
Handshake
1001-5000
$230,000 – $280,000
United States
Full-time
Remote
About Handshake
Handshake is the career network for the AI economy. 20 million knowledge workers, 1,600 educational institutions, 1 million employers (including 100% of the Fortune 50), and every foundational AI lab trust Handshake to power career discovery, hiring, and upskilling, from freelance AI training gigs to first internships to full-time careers and beyond. This unique value is leading to unparalleled growth; in 2025, we tripled our ARR at scale.
Why join Handshake now:
Shape how every career evolves in the AI economy, at global scale, with impact your friends, family, and peers can see and feel
Work hand-in-hand with world-class AI labs, Fortune 500 partners, and the world’s top educational institutions
Join a team with leadership from Scale AI, Meta, xAI, Notion, Coinbase, and Palantir, among others
Build a massive, fast-growing business with billions in revenue
About the Role
We’re expanding our team and seeking a Senior Engineering Manager to lead our Reinforcement Learning Environments (RLE) team. The RLE team builds the sandbox environments where frontier AI models learn complete, end-to-end workflows. These environments simulate real-world professional domains such as software engineering, finance, and legal research, complete with realistic tools, constraints, and feedback loops. Instead of learning from static examples, models practice doing the work: navigating multi-step tasks, using domain-specific tools, handling ambiguity, and optimizing for real outcomes. Researchers use these environments and the data they generate to train state-of-the-art models with reinforcement learning grounded in execution: not just prediction, but task completion, quality, and robustness in complex workflows.
As a Senior Engineering Manager, you’ll shape the technical direction and long-term strategy of this critical platform. You’ll lead a growing team (currently 9 engineers) and will likely manage an Engineering Manager in the near term. This is a highly strategic role sitting at the intersection of platform engineering, applied AI infrastructure, research tooling, and human-in-the-loop operations systems.
Location: San Francisco, CA | 5 days/week in-office
Lead and grow a high-performing team of 8–9 engineers building reinforcement learning environments
Manage, mentor, and develop senior engineers and future engineering leaders
Partner closely with research, product, and operations teams to define roadmap and execution priorities
Drive technical architecture for scalable, reliable, and extensible environment systems
Build plug-and-play environments that integrate seamlessly with model training pipelines
Balance platform rigor with operational complexity and data quality requirements
Establish engineering best practices around reliability, observability, and performance
Foster a culture of ownership, velocity, and high technical standards
Desired Capabilities
3+ years of engineering management experience, with increasing scope and ownership
Experience managing senior engineers; experience managing an Engineering Manager (or equivalent scope) strongly preferred
5+ years of prior hands-on engineering experience
Strong technical background in platform systems, distributed systems, or full-stack infrastructure
Experience building internal platforms, data pipelines, or research-facing tools
Proven ability to operate effectively in fast-paced, ambiguous environments
Experience driving cross-functional alignment across engineering, research, and operations
Willingness to work in-office in San Francisco 5 days/week
Extra Credit
Experience in reinforcement learning, simulation systems, or AI training infrastructure
Background in human-in-the-loop systems, data annotation platforms, or workflow tooling
Experience in operations-heavy, tech-enabled organizations
Familiarity with cloud infrastructure (AWS or GCP), APIs, and modern web stacks (e.g., React, TypeScript, Node.js, Python)
Experience building systems used by AI researchers or applied ML teams
What Success Looks Like
RLE becomes the default platform researchers use to train reinforcement learning workflows
New domains (e.g., finance, legal, SWE) can be launched quickly and reliably
Environment reliability and data quality are trusted by top AI research partners
The team scales with strong technical leaders who can independently drive new verticals
The RLE platform materially accelerates model capability in real-world task completion
Perks
Handshake delivers benefits that help you feel supported and thrive at work and in life. The below benefits are for full-time US employees.
🎯 Ownership: Equity in a fast-growing company
💰 Financial Wellness: 401(k) match, competitive compensation, financial coaching
🍼 Family Support: Paid parental leave, fertility benefits, parental coaching
💝 Wellbeing: Medical, dental, and vision, mental health support, $500 wellness stipend
📚 Growth: $2,000 learning stipend, ongoing development
💻 Remote & Office: Internet, commuting, and free lunch/gym in our SF office
🏝 Time Off: Flexible PTO, 15 holidays + 2 flex days
🤝 Connection: Team outings & referral bonuses
Explore our mission, values, and comprehensive US benefits at joinhandshake.com/careers.
2026-02-19 13:07
Research Engineer, Core ML
Together AI
201-500
$200,000 – $280,000
Full-time
Remote
About the Role
The Turbo team sits at the intersection of efficient inference (algorithms, architectures, engines) and post‑training / RL systems. We build and operate the systems behind Together’s API, including high‑performance inference and RL/post‑training engines that can run at production scale.
Our mandate is to push the frontier of efficient inference and RL‑driven training: making models dramatically faster and cheaper to run, while improving their capabilities through RL‑based post‑training (e.g., GRPO‑style objectives). This work lives at the interface of algorithms and systems: asynchronous RL, rollout collection, scheduling, and batching all interact with engine design, creating many knobs to tune across the RL algorithm, training loop, and inference stack. Much of the job is modifying production inference systems—for example, SGLang‑ or vLLM‑style serving stacks and speculative decoding systems such as ATLAS—grounded in a strong understanding of post‑training and inference theory, rather than purely theoretical algorithm design.
You’ll work across the stack—from RL algorithms and training engines to kernels and serving systems—to build and improve frontier models via RL pipelines. People on this team are often spiky: some are more RL‑first, some are more systems‑first. Depth in one of these areas plus appetite to collaborate across (and grow toward more full‑stack ownership over time) is ideal.
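As a rough illustration of the interplay this paragraph describes between rollout collection and the training loop (a toy, self-contained sketch under invented names, not Together's actual stack), consider a learner that must decide how stale a rollout may be before dropping it from an RL update:

```python
import queue

class AsyncRLLoop:
    """Toy async RL loop: the rollout collector runs ahead of the learner,
    so batches may come from a slightly stale policy (off-policy lag)."""

    def __init__(self, batch_size=2, max_lag=0):
        self.batch_size = batch_size
        self.max_lag = max_lag        # max tolerated policy-version staleness
        self.policy_version = 0       # bumped each time the learner "publishes"
        self.rollouts = queue.Queue()

    def actor_step(self):
        # Stand-in for an inference engine sampling completions with the
        # currently published policy weights.
        for _ in range(self.batch_size):
            self.rollouts.put({"policy_version": self.policy_version})

    def learner_step(self):
        batch = [self.rollouts.get() for _ in range(self.batch_size)]
        # Rollouts generated too many versions ago are dropped from the update.
        fresh = [r for r in batch
                 if self.policy_version - r["policy_version"] <= self.max_lag]
        self.policy_version += 1      # "train" on fresh rollouts, publish weights
        return len(fresh), len(batch)

loop = AsyncRLLoop(batch_size=2, max_lag=0)
loop.actor_step()             # 2 rollouts collected at policy version 0
loop.actor_step()             # 2 more, still version 0 (collector ran ahead)
print(loop.learner_step())    # (2, 2): all fresh; learner publishes version 1
print(loop.learner_step())    # (0, 2): remaining version-0 rollouts are now stale
```

Raising `max_lag` trades freshness for throughput: the collector never idles, but updates become more off-policy. That is exactly the kind of knob spanning the RL algorithm, scheduling, and engine design that the role refers to.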
Requirements
We don’t expect anyone to check every box below. People on this team typically have deep expertise in one or more areas and enough breadth (or interest) to work effectively across the stack. The closer you are to full‑stack (inference + post‑training/RL + systems), the stronger the fit—but being spiky in one area and eager to grow is absolutely okay.
You might be a good fit if you:
Have strong expertise in at least one of the following, and are excited to collaborate across (and grow into) the others:
Systems‑first profile: Large‑scale inference systems (e.g., SGLang, vLLM, FasterTransformer, TensorRT, custom engines, or similar), GPU performance, distributed serving.
RL‑first profile: RL / post‑training for LLMs or large models (e.g., GRPO, RLHF/RLAIF, DPO‑like methods, reward modeling), and using these to train or fine‑tune real models.
Model architecture design for Transformers or other large neural nets.
Distributed systems / high‑performance computing for ML.
Are comfortable working from algorithms to engines:
Strong coding ability in Python.
Experience profiling and optimizing performance across GPU, networking, and memory layers.
Able to take a new sampling method, scheduler, or RL update and turn it into a production‑grade implementation in the engine and/or training stack.
Have a solid research foundation in your area(s) of depth:
Track record of impactful work in ML systems, RL, or large‑scale model training (papers, open‑source projects, or production systems).
Can read new RL / post‑training papers, understand their implications on the stack, and design minimal, correct changes in the right layer (training engine vs. inference engine vs. data / API).
Operate well as a full‑stack problem solver:
You naturally ask: “Where in the stack is this really bottlenecked?”
You enjoy collaborating with infra, research, and product teams, and you care about both scientific quality and user‑visible wins.
Minimum qualifications
3+ years of experience working on ML systems, large‑scale model training, inference, or adjacent areas (or equivalent experience via research / open source).
Advanced degree in Computer Science, EE, or a related field, or equivalent practical experience.
Demonstrated experience owning complex technical projects end‑to‑end.
If you’re excited about the role and strong in some of these areas, we encourage you to apply even if you don’t meet every single requirement.
Responsibilities
Advance inference efficiency end‑to‑end
Design and prototype algorithms, architectures, and scheduling strategies for low‑latency, high‑throughput inference.
Implement and maintain changes in high‑performance inference engines (e.g., SGLang‑ or vLLM‑style systems and Together’s inference stack), including kernel backends, speculative decoding (e.g., ATLAS), quantization, etc.
Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost.
Unify inference with RL / post‑training
Design and operate RL and post‑training pipelines (e.g., RLHF, RLAIF, GRPO, DPO‑style methods, reward modeling) where 90+% of the cost is inference, jointly optimizing algorithms and systems.
Make RL and post‑training workloads more efficient with inference‑aware training loops—for example, async RL rollouts, speculative decoding, and other techniques that make large‑scale rollout collection and evaluation cheaper.
Use these pipelines to train, evaluate, and iterate on frontier models on top of our inference stack.
Co‑design algorithms and infrastructure so that objectives, rollout collection, and evaluation are tightly coupled to efficient inference, and quickly identify bottlenecks across the training engine, inference engine, data pipeline, and user‑facing layers.
Run ablations and scale‑up experiments to understand trade‑offs between model quality, latency, throughput, and cost, and feed these insights back into model, RL, and system design.
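For reference, the GRPO-style objectives mentioned above replace a learned value baseline with statistics over a group of $G$ sampled rollouts per prompt (sketched here from the publicly described GRPO formulation; the notation is ours, not Together's):

```latex
\hat{A}_i = \frac{r_i - \operatorname{mean}(r_1,\dots,r_G)}{\operatorname{std}(r_1,\dots,r_G)},
\qquad
\mathcal{L}(\theta) = \frac{1}{G}\sum_{i=1}^{G}
\min\!\Big(\rho_i \hat{A}_i,\ \operatorname{clip}(\rho_i,\,1-\epsilon,\,1+\epsilon)\,\hat{A}_i\Big),
\quad
\rho_i = \frac{\pi_\theta(o_i \mid q)}{\pi_{\theta_{\text{old}}}(o_i \mid q)}
```

Because every update requires $G$ full rollouts $o_1,\dots,o_G$ from the current policy, sampling dominates the compute budget, which is why the posting can say that 90+% of the cost of these pipelines is inference.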
Own critical systems at production scale
Profile, debug, and optimize inference and post‑training services under real production workloads.
Drive roadmap items that require real engine modification—changing kernels, memory layouts, scheduling logic, and APIs as needed.
Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously.
Provide technical leadership (Staff level)
Set technical direction for cross‑team efforts at the intersection of inference, RL, and post‑training.
Mentor other engineers and researchers on full‑stack ML systems work and performance engineering.
About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.
Compensation
We offer competitive compensation, startup equity, health insurance, and other benefits. The US base salary range for this full-time position is $200,000 – $280,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.
Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
Please see our privacy policy at https://www.together.ai/privacy
2026-02-19 12:07
Senior Data Scientist
Faculty
501-1000
United Kingdom
Full-time
Remote
false
Why Faculty?
We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we’ve worked with over 350 global customers to transform their performance through human-centric AI. You can read about our real-world impact here.
We don’t chase hype cycles. We innovate, build, and deploy responsible AI which moves the needle, and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product, and delivery expertise to our clients, who span government, finance, retail, energy, life sciences, and defence.
Our business, and reputation, is growing fast and we’re always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology. AI is an epoch-defining technology; join a company where you’ll be empowered to envision its most powerful applications, and to make them happen.
About the team
The Faculty Frontier™ product is our ambitious vision to create the first enterprise-grade platform that unifies decision intelligence with AI Agents to optimise real-world outcomes of critical processes across large-scale organisations. You will work on highly complex and consequential problems across the real economy, with particular focus on healthcare and life sciences.
About the role
Join us to shape the future of our Frontier Decision Intelligence Platform. As a Senior Data Scientist, you will lead the design and delivery of AI-powered digital twins that transform how organisations make critical decisions. You will sit at the heart of cross-functional teams, blending technical excellence with commercial insight to solve complex customer problems. This is an opportunity to mentor emerging talent while driving high-impact, production-grade AI solutions in a fast-paced, entrepreneurial environment.
What you’ll be doing:
Designing and building computational twins, creating AI-driven digital reflections tailored for each unique Frontier deployment
Leading data science efforts within cross-functional teams, partnering with engineers, designers, and commercial leads to ensure successful project outcomes
Developing a deep understanding of core customer challenges to ensure every technical solution delivers significant real-world value
Performing rigorous exploratory data analysis, model building, validation, and performance monitoring
Supporting strong client relationships by working alongside our commercial team to shape the strategic direction of projects
Mentoring and developing other data scientists through task leadership and potential line management
Who we’re looking for:
You have senior-level experience in data science or quantitative research, supported by a strong foundation in statistics and mathematics
You’re proficient in Python and essential libraries like NumPy and Pandas, with familiarity in deep-learning frameworks such as TensorFlow or PyTorch
You possess a versatile toolkit, including supervised learning, time-series, and Bayesian methods, and the creativity to develop new algorithms when needed
You bring a leadership mindset focused on technical excellence, team growth, and fostering a collaborative, inclusive culture
You exhibit scientific rigour and a business-focused approach, successfully translating complex problems into actionable technical strategies
You’ve demonstrated success in project planning and delivery, with the communication skills to present persuasively to senior stakeholders
The Interview Process
Talent Team Screen (30 minutes)
Technical Interview (90 minutes)
Commercial Interview (60 minutes)
Our Recruitment Ethos
We aim to grow the best team, not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live. We’re united by a deep intellectual curiosity and desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions, and sexual orientations.
Some of our standout benefits:
Unlimited annual leave policy
Private healthcare and dental
Enhanced parental leave
Family-friendly flexibility & flexible working
Sanctus coaching
Hybrid working (2 days in our Old Street office, London)
If you don’t feel you meet all the requirements, but are excited by the role and know you bring some key strengths, please don’t hesitate to apply, as you might be right for this role, or other roles. We are open to conversations about part-time hours.
2026-02-19 8:52
Agent Product Manager
Ema
101-200
$135,000 – $200,000
United States
Full-time
Remote
About EmaEma is at the forefront of the agentic AI revolution, empowering enterprises to reimagine how work gets done. Our platform enables organizations to design, deploy, and manage fleets of AI employees—multi-agent systems with rich human-in-the-loop interfaces—that automate complex workflows, augment decision-making, and unlock new levels of efficiency and growth. We are a team of ambitious innovators, building the future of work, and we’re looking for passionate individuals to join us on this mission.The RoleThis is not a traditional, backlog-focused product management role. As an Agentic Solutions Product Manager, you’ll partner directly with enterprise leaders to observe and decode human workflows—what data they use, what applications they rely on, and what SOPs they follow. From this, you’ll craft AI employees: multi-agent workflows with intuitive, UI-driven human-in-the-loop controls that transform how businesses operate.You won’t just manage features; you’ll design and deliver entire AI-powered solutions. 
You’ll be a trusted advisor and a strategic co-creator, working at the intersection of business strategy, workflow design, and cutting-edge AI technology.

What You Will Do
- Understand Human Workflows: Partner with enterprise customers to map end-to-end processes, uncover inefficiencies, and identify opportunities where agentic AI can create impact.
- Design AI Employees: Translate workflows into agentic multi-agent systems, integrating data, applications, and UI-driven human oversight.
- Bridge Business and Technology: Work hand-in-hand with engineering and design to turn client requirements into scalable agent capabilities and elegant product experiences.
- Drive Strategic Roadmaps: Own the lifecycle of your AI employees—from concept through deployment—guided by customer feedback, data, and business outcomes.
- Champion Adoption & Value: Ensure customers achieve measurable ROI, advocate for your solutions internally and externally, and evangelize the power of agentic AI.
- Continuously Optimize: Use data and customer insights to refine workflows, enhance capabilities, and identify new areas for automation and transformation.
What We’re Looking For
- Entrepreneurial Mindset: Self-starter who thrives in ambiguity, owns outcomes, and builds solutions from the ground up.
- Proven Client-Facing Experience: 4+ years in consulting, engagement management, product, or as a founder—trusted by senior stakeholders.
- Strategic Product Acumen: Ability to go beyond surface-level requests and solve the real business problem.
- Technical Credibility: Comfortable diving into architectural trade-offs, APIs, and agentic design with engineers.
- Systems Thinking: Natural ability to see the whole picture, anticipate downstream effects, and design resilient solutions.
Preferred Skills
- Experience in user research and workflow mapping, with a data-driven mindset.
- Familiarity with generative AI and agentic AI; hands-on experience designing agent-based systems is a plus.
- Ability to prototype quickly—comfortable with "vibe coding" to visualize solutions.
- Background in product management, consulting, or founding roles.
- Experience in agile development environments and tools (JIRA, Asana, etc.).
- Hands-on experience with APIs and working closely with technical teams.
- Degree in Computer Science, Engineering, Math, or equivalent experience.
For California-Based Candidates
The standard base salary range for this position is $135,000 to $200,000 annually. Compensation offered will be determined by factors such as location, level, job-related knowledge, skills, and experience. Certain roles may be eligible for variable compensation, equity, and benefits.

Ema Unlimited is an equal opportunity employer and is committed to providing equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, sexual orientation, gender identity, or genetics.
2026-02-19 5:07
Software engineer, agents
Writer
1001-5000
$140,700 – $292,400
United States
Full-time
Remote
false
🚀 About WRITER
WRITER is where the world's leading enterprises orchestrate AI-powered work. Our vision is to expand human capacity through superintelligence. And we're proving it's possible – through powerful, trustworthy AI that unites IT and business teams to unlock enterprise-wide transformation. With WRITER's end-to-end platform, hundreds of companies like Mars, Marriott, Uber, and Vanguard are building and deploying AI agents that are grounded in their company's data and fueled by WRITER's enterprise-grade LLMs. Valued at $1.9B and backed by industry-leading investors including Premji Invest, Radical Ventures, and ICONIQ Growth, WRITER is rapidly cementing its position as the leader in enterprise generative AI.

Founded in 2020 with office hubs in San Francisco, New York City, Austin, Chicago, and London, our team thinks big and moves fast, and we're looking for smart, hardworking builders and scalers to join us on our journey to create a better future of work with AI.

📐 About the role
We’re seeking a highly skilled fullstack software engineer to join our engineering team building advanced AI-driven agent systems that execute autonomous workflows, orchestrate multi-step tasks, and extend human capacity across enterprise applications. In this role, you’ll play a key part in designing, building, and scaling next-generation AI agents that integrate with enterprise data and services to solve real-world problems. You will collaborate with cross-functional teams to turn complex agent concepts into production-ready systems.
🦸🏻♀️ What you'll do
- Design, implement, and maintain scalable, secure agent-driven services and systems that autonomously accomplish tasks using modern AI frameworks.
- Develop and enhance robust infrastructure and high-throughput APIs, focusing on core agent capabilities such as memory, communication channels, skills, intelligent decision logic, security, and workflow management.
- Integrate agent capabilities with backend services, data stores, vector databases, search/retrieval systems, and external APIs.
- Collaborate with product managers, AI researchers, data engineers, and UX teams to translate high-level agent use cases into robust, production-ready software.
- Ensure reliability, monitoring, and observability for all agent components (metrics, logging, CI/CD, fault tolerance).
- Contribute to architectural design decisions and participate in rigorous code reviews to uphold quality and maintainability.

⭐️ What you need
- 3+ years of professional software engineering experience, with strong proficiency (3+ years) in Python in production environments
- Proficiency in native agentic coding, demonstrated through daily use of tools like Cursor, Claude Code, and other agentic coding platforms
- Demonstrated experience building distributed systems, microservices, or complex backend APIs that support AI/agent workflows
- Solid expertise with systems that integrate AI models, agent frameworks (e.g., LangChain or platform-specific tooling), vector databases, and large-context reasoning services
- Understanding of agent orchestration patterns, state management, and asynchronous workflows
- Experience with cloud platforms (e.g., AWS, GCP, Azure), containerization (Docker, Kubernetes), and operational engineering best practices
- Good grasp of performance optimization, testing frameworks, and CI/CD pipelines
- Excellent communication and collaboration skills, with a “connect + challenge + own” mindset
- Past work on AI agents that coordinate multi-step actions, reasoning, or autonomous decision-making loops
- Contributions to open-source agent toolkits or SDKs
- Experience with frontend technologies (React, TypeScript) for tooling around agent management interfaces
🍩 Benefits & perks (US full-time employees)
- Generous PTO, plus company holidays
- Medical, dental, and vision coverage for you and your family
- Paid parental leave for all parents (12 weeks)
- Fertility and family planning support
- Early-detection cancer testing through Galleri
- Flexible spending account and dependent FSA options
- Health savings account for eligible plans, with company contribution
- Annual work-life stipends for:
  - Wellness stipend for gym, massage/chiropractor, personal training, etc.
  - Learning and development stipend
- Company-wide off-sites and team off-sites
- Competitive compensation, company stock options, and 401k

WRITER is an equal-opportunity employer and is committed to diversity. We don't make hiring or employment decisions based on race, color, religion, creed, gender, national origin, age, disability, veteran status, marital status, pregnancy, sex, gender expression or identity, sexual orientation, citizenship, or any other basis protected by applicable local, state, or federal law. Under the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records. By submitting your application on the application page, you acknowledge and agree to WRITER's Global Candidate Privacy Notice.
2026-02-19 1:52
Software engineer, agents (UK)
Writer
1001-5000
United Kingdom
Full-time
Remote
false
🚀 About WRITER
WRITER is where the world's leading enterprises orchestrate AI-powered work. Our vision is to expand human capacity through superintelligence. And we're proving it's possible – through powerful, trustworthy AI that unites IT and business teams to unlock enterprise-wide transformation. With WRITER's end-to-end platform, hundreds of companies like Mars, Marriott, Uber, and Vanguard are building and deploying AI agents that are grounded in their company's data and fueled by WRITER's enterprise-grade LLMs. Valued at $1.9B and backed by industry-leading investors including Premji Invest, Radical Ventures, and ICONIQ Growth, WRITER is rapidly cementing its position as the leader in enterprise generative AI.

Founded in 2020 with office hubs in San Francisco, New York City, Austin, Chicago, and London, our team thinks big and moves fast, and we're looking for smart, hardworking builders and scalers to join us on our journey to create a better future of work with AI.

📐 About the role
We’re seeking a highly skilled fullstack software engineer to join our engineering team building advanced AI-driven agent systems that execute autonomous workflows, orchestrate multi-step tasks, and extend human capacity across enterprise applications. In this role, you’ll play a key part in designing, building, and scaling next-generation AI agents that integrate with enterprise data and services to solve real-world problems. You will collaborate with cross-functional teams to turn complex agent concepts into production-ready systems.
🦸🏻♀️ What you'll do
- Design, implement, and maintain scalable, secure agent-driven services and systems that autonomously accomplish tasks using modern AI frameworks.
- Develop and enhance robust infrastructure and high-throughput APIs, focusing on core agent capabilities such as memory, communication channels, skills, intelligent decision logic, security, and workflow management.
- Integrate agent capabilities with backend services, data stores, vector databases, search/retrieval systems, and external APIs.
- Collaborate with product managers, AI researchers, data engineers, and UX teams to translate high-level agent use cases into robust, production-ready software.
- Ensure reliability, monitoring, and observability for all agent components (metrics, logging, CI/CD, fault tolerance).
- Contribute to architectural design decisions and participate in rigorous code reviews to uphold quality and maintainability.

⭐️ What you need
- 3+ years of professional software engineering experience, with strong proficiency (3+ years) in Python in production environments
- Proficiency in native agentic coding, demonstrated through daily use of tools like Cursor, Claude Code, and other agentic coding platforms
- Demonstrated experience building distributed systems, microservices, or complex backend APIs that support AI/agent workflows
- Solid expertise with systems that integrate AI models, agent frameworks (e.g., LangChain or platform-specific tooling), vector databases, and large-context reasoning services
- Understanding of agent orchestration patterns, state management, and asynchronous workflows
- Experience with cloud platforms (e.g., AWS, GCP, Azure), containerization (Docker, Kubernetes), and operational engineering best practices
- Good grasp of performance optimization, testing frameworks, and CI/CD pipelines
- Excellent communication and collaboration skills, with a “connect + challenge + own” mindset
- Past work on AI agents that coordinate multi-step actions, reasoning, or autonomous decision-making loops
- Contributions to open-source agent toolkits or SDKs
- Experience with frontend technologies (React, TypeScript) for tooling around agent management interfaces

🍩 Benefits & perks (UK full-time employees)
- Generous PTO, plus company holidays
- Comprehensive medical and dental insurance
- Paid parental leave for all parents (12 weeks)
- Fertility and family planning support
- Early-detection cancer testing through Galleri
- Competitive pension scheme and company contribution
- Annual work-life stipends for:
  - Wellness stipend for gym, massage/chiropractor, personal training, etc.
  - Learning and development stipend
- Company-wide off-sites and team off-sites
- Competitive compensation and company stock options
2026-02-19 1:52