
The AI job market moves fast. We keep up so you don't have to.

Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.

Edit filters

New AI Opportunities


Staff Software Engineer, GPU Infrastructure (HPC)

Cohere
Canada
Full-time
Who are we?
Our mission is to scale intelligence to serve humanity. We're training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.

We obsess over what we build. Each one of us is responsible for contributing to increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what's best for our customers.

Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products. Join us on our mission and shape the future!

Why this team?
The internal infrastructure team is responsible for building world-class infrastructure and tools used to train, evaluate, and serve Cohere's foundational models. By joining our team, you will work in close collaboration with AI researchers to support their AI workload needs on the cutting edge, with a strong focus on stability, scalability, and observability. You will be responsible for building and operating superclusters across multiple clouds. Your work will directly accelerate the development of industry-leading AI models that power Cohere's platform, North.

Please note: all of our infrastructure roles require participating in a 24x7 on-call rotation; you are compensated for your on-call schedule.
As a Staff Software Engineer, you will:
- Build and scale ML-optimized HPC infrastructure: Deploy and manage Kubernetes-based GPU/TPU superclusters across multiple clouds, ensuring high-throughput, low-latency performance for AI workloads.
- Optimize for AI/ML training: Collaborate with cloud providers to fine-tune infrastructure for cost efficiency, reliability, and performance, leveraging technologies like RDMA, NCCL, and high-speed interconnects.
- Troubleshoot and resolve complex issues: Proactively identify and resolve infrastructure bottlenecks, performance degradation, and system failures to ensure minimal disruption to AI/ML workflows.
- Enable researchers with self-service tools: Design intuitive interfaces and workflows that allow researchers to monitor, debug, and optimize their training jobs independently.
- Drive innovation in ML infrastructure: Work closely with AI researchers to understand emerging needs (e.g., JAX, PyTorch, distributed training) and translate them into robust, scalable infrastructure solutions.
- Champion best practices: Advocate for observability, automation, and infrastructure-as-code (IaC) across the organization, ensuring systems are maintainable and resilient.
- Mentor and collaborate: Share expertise through code reviews, documentation, and cross-team collaboration, fostering a culture of knowledge transfer and engineering excellence.
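Much of the NCCL and interconnect tuning mentioned above comes down to collective-communication cost. As a rough illustration (not Cohere's tooling), here is the standard back-of-envelope model for ring all-reduce, the algorithm NCCL typically uses for large messages:

```python
def ring_allreduce_bytes(tensor_bytes: int, n_gpus: int) -> int:
    """Bytes each GPU sends over the ring for one all-reduce.

    Ring all-reduce moves 2 * (N - 1) / N of the tensor per GPU:
    a reduce-scatter phase followed by an all-gather phase.
    Dividing this by per-link bandwidth gives a lower bound on
    gradient-sync time per training step.
    """
    if n_gpus < 2:
        return 0  # nothing to communicate
    return 2 * (n_gpus - 1) * tensor_bytes // n_gpus

# Example: a 140 GB fp16 gradient synchronized across 8 GPUs.
per_gpu = ring_allreduce_bytes(140_000_000_000, 8)  # 245 GB sent per GPU
```

This is why interconnect bandwidth (NVLink, RDMA fabrics) dominates large-model training throughput: the communicated volume barely shrinks as you add GPUs.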
You may be a good fit if you have:
- Deep expertise in ML/HPC infrastructure: experience with GPU/TPU clusters, distributed training frameworks (JAX, PyTorch, TensorFlow), and high-performance computing (HPC) environments.
- Kubernetes at scale: proven ability to deploy, manage, and troubleshoot cloud-native Kubernetes clusters for AI workloads.
- Strong programming skills: proficiency in Python (for ML tooling) and Go (for systems engineering), with a preference for open-source contributions over reinventing solutions.
- Low-level systems knowledge: familiarity with Linux internals, RDMA networking, and performance optimization for ML workloads.
- Research collaboration experience: a track record of working closely with AI researchers or ML engineers to solve infrastructure challenges.
- Self-directed problem-solving: the ability to identify bottlenecks, propose solutions, and drive impact in a fast-paced environment.

If some of the above doesn't line up perfectly with your experience, we still encourage you to apply! We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.

Full-time employees at Cohere enjoy these perks:
🤝 An open and inclusive culture and work environment
🧑‍💻 Work closely with a team on the cutting edge of AI research
🍽 Weekly lunch stipend, in-office lunches & snacks
🦷 Full health and dental benefits, including a separate budget to take care of your mental health
🐣 100% parental leave top-up for up to 6 months
🎨 Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
🏙 Remote-flexible, with offices in Toronto, New York, San Francisco, London, and Paris, as well as a co-working stipend
✈️ 6 weeks of vacation (30 working days!)

Intern of Technical Staff - Sovereign AI

Cohere
Canada
Full-time
As a Sovereign AI Intern, you will:
- Design, train, and improve upon cutting-edge models to serve the public interest.
- Help us develop new techniques to train and serve models more safely, better, and faster.
- Train extremely large-scale models on massive datasets.
- Learn from experienced senior machine learning technical staff.
- Work closely with product teams to develop solutions.

You may be a good fit if you have:
- Proficiency in Python and related ML frameworks.
- Experience using large-scale distributed training strategies.
- Strong communication and problem-solving skills.
- Bonus: Canadian citizenship.
- Bonus: papers at top-tier venues (such as NeurIPS, ICML, ICLR, AISTATS, MLSys, JMLR, AAAI, Nature, COLING, ACL, EMNLP).

If some of the above doesn't line up perfectly with your experience, we still encourage you to apply! We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities.
Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.

Applied AI Engineer - Agentic Workflows (Singapore)

Cohere
Singapore
Full-time
Why this role?
We're a fast-growing startup building production-grade AI agents for enterprise customers at scale. We're looking for Applied AI Engineers who can own the design, build, and deployment of agentic workflows powered by Large Language Models (LLMs), from early prototypes to production-grade AI agents, delivering concrete business value in enterprise workflows.

In this role, you'll work closely with customers on real-world business problems, often building first-of-their-kind agent workflows that integrate LLMs with tools, APIs, and data sources.
While our pace is startup-fast, the bar is enterprise-high: agents must be reliable, observable, safe, and auditable from day one. You'll collaborate closely with customers, product, and platform teams, and help shape how agentic systems are built, evaluated, and deployed at scale.

What you'll do:
- Work with enterprise customers and internal teams to turn business workflows into scalable, production-ready agentic AI systems.
- Design and build LLM-powered agents that reason, plan, and act across tools and data sources with enterprise-grade reliability.
- Balance rapid iteration with enterprise requirements, evolving prototypes into stable, reusable solutions.
- Define and apply evaluation and quality standards to measure success, failures, and regressions.
- Debug real-world agent behavior and systematically improve prompts, workflows, tools, and guardrails.
- Contribute to shared frameworks and patterns that enable consistent delivery across customers.

Required skills & experience:
- Bachelor's degree in Computer Science or a related technical field.
- Strong programming skills in Python and/or JavaScript/TypeScript.
- 3+ years of experience building and shipping production software; 2+ years working with LLMs or AI APIs.
- Hands-on experience with modern LLMs (e.g., GPT, Claude, Gemini), vector databases, and agent/orchestration frameworks (e.g., LangChain, LangGraph, LlamaIndex, or custom solutions).
- Practical experience with RAG, agent workflows, evaluation, and performance optimization.
- Strong agent design skills, including prompt engineering, tool use, multi-step agent workflows (e.g., ReAct), and failure handling.
- Ability to reason about and balance trade-offs between customization and reuse, as well as autonomy, control, cost, latency, and risk.
- Strong communication skills and experience leading technical discussions with customers or partners.

Nice-to-have:
- Experience working in a fast-moving startup environment.
- Prior work delivering AI or automation solutions to enterprise customers.
- Familiarity with human-in-the-loop workflows, fine-tuning, or LLM evaluation techniques.
- Experience with cloud deployment and production operations for AI systems.
- Background in applied ML, NLP, or decision systems.

Additional requirements:
- Strong written and verbal communication skills.
- Ability and interest to travel up to 25% (flexible).

Why join us:
- Build production-grade AI agents used in real enterprise workflows.
- Operate at scale while retaining end-to-end ownership.
- Work on hard problems in agent design, evaluation, and reliability.
- Shape shared platforms and standards, not just individual features.
- Move fast with a high bar for quality, safety, and reliability.

If some of the above doesn't line up perfectly with your experience, we still encourage you to apply! We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities.
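For readers unfamiliar with the ReAct pattern named in the requirements, a minimal sketch of the loop looks like the following. The `llm` callable, tool registry, and "Action:/Final:" message format are illustrative stand-ins, not any particular framework's API:

```python
def react_loop(llm, tools, question, max_steps=5):
    """Minimal ReAct-style agent loop: the model alternates between
    acting ("Action: tool: input") and answering ("Final: answer");
    each tool result is fed back as an Observation."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)
        transcript += step + "\n"
        if step.startswith("Final:"):
            return step[len("Final:"):].strip()
        if step.startswith("Action:"):
            name, _, arg = step[len("Action:"):].partition(":")
            observation = tools[name.strip()](arg.strip())
            transcript += f"Observation: {observation}\n"
    return None  # failure handling: step budget exhausted

# Scripted demo: a fake "model" that first calls a calculator tool.
_responses = iter(["Action: calc: 2+2", "Final: 4"])
answer = react_loop(lambda _: next(_responses),
                    {"calc": lambda expr: str(eval(expr))},
                    "What is 2+2?")
```

Production agents wrap this skeleton with the guardrails the posting describes: schema-validated tool calls, retries, tracing, and per-step evaluation.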
Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.

Early Career AI/ML Engineer

Brain Co
United States
Full-time
About Brain Co.
Brain Co. is an Applied AI startup founded by Elad Gil and Jared Kushner, and backed by many of Silicon Valley's leading builders, including Patrick Collison (CEO of Stripe), Andrej Karpathy (cofounder of OpenAI), Mike Krieger (CPO of Anthropic), Kevin Weil (CPO of OpenAI), and Aravind Srinivas (CEO of Perplexity). We are building an AI platform and applications for the world's most important institutions, delivering impact on real-world problems.

Our progress so far:
- Automated construction permitting for a sovereign government: 80% faster, unlocking $375M+ in value.
- Optimized supply chains for a leading global energy company: 30% lower cost, 99% reliability, preventing $100M+ in losses.
- Streamlined hospital patient care across national health systems: 40% better outcomes, 80% less admin work.
- Raised a $30M Series A from top investors.
- Built a team of 40+ AI experts from Tesla, Google DeepMind, NVIDIA, and Databricks.

At Brain Co., your work will be deployed in the real world, not stuck in research. We move fast, with more demand than we can serve, and are looking for exceptional people to take ownership from day one.

About the role
As an AI/ML Engineer at Brain Co., you will play a crucial role in deploying state-of-the-art models to automate real-world problems in sectors such as healthcare, government, and energy. Part of the role will involve turning research breakthroughs into practical solutions for nation states. This role is your opportunity to make a significant impact by making AI technology both accessible and influential.

In this role, you will:
- Innovate and deploy: Design and deploy advanced LLMs to tackle real-world problems, particularly automating complex, manual processes across a range of verticals.
- Optimize and scale: Build scalable data pipelines, optimize models for performance and accuracy, and prepare them for production. Monitor and maintain deployed models to ensure they continue delivering value across various governments worldwide.
- Make a difference: Engage in projects including, but not limited to, optimizing the world's most advanced energy production systems, modernizing core government workflows, and improving patient outcomes in advanced public healthcare systems. Your work will directly impact how AI benefits individuals, businesses, and society at large.
- Engage with leaders: Interact directly with government officials in various countries and apply first-of-their-kind AI solutions, working alongside experienced former founders, AI researchers, and software engineers to understand complex business challenges and deliver AI-powered solutions. Join a dynamic team where ideas are exchanged freely and creativity flourishes; you will wear many hats across software building, product management, sales, and interpersonal work.
- Learn and lead: Keep abreast of the latest developments in machine learning and AI. Participate in code reviews, share knowledge, and set an example with high-quality engineering practices.

You might thrive in this role if you:
- Have 0-2 years of industry experience in applied machine learning or related AI work.
- Hold a BSc, Master's, or PhD in Computer Science, Machine Learning, Data Science, or a related field.
- Have hands-on experience building GenAI-focused applications (e.g., agents, reasoning workflows, or RAG) and a solid understanding of how large language models are architected and operated.
- Have personally implemented models in common ML frameworks such as PyTorch, JAX, or TensorFlow.
- Possess a strong foundation in data structures, algorithms, and software engineering principles.
- Exhibit excellent problem-solving and analytical skills, with a proactive approach to challenges.
- Can work collaboratively with cross-functional teams.
- Thrive in fast-paced environments where priorities or deadlines may compete.
- Are eager to own problems end-to-end and willing to acquire any necessary knowledge to get the job done.

Benefits:
- Competitive salary
- Medical, dental, and vision (100% coverage)
- Paid maternity and paternity leave
- 401(k)
- Daily lunches
- Commuter benefits
- Unlimited PTO

Why join us:
- Ship quickly, iterate constantly, and see your work deployed at global scale.
- Collaborate with industry veterans from Tesla, DeepMind, Databricks, and more.
- Accelerate your career with ownership based on impact, not tenure.
- Earn competitive compensation plus meaningful equity in a high-growth company.
- Thrive in a culture built on speed, curiosity, and impact.

If you want to see your work deployed at scale with real impact, Brain Co. is the place to build.
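As a concrete (toy) picture of the RAG pattern this posting asks for: retrieval scores every document chunk against the query and the top results are stuffed into the model's prompt. The bag-of-words cosine scoring below is a deliberately simplified stand-in for the dense-embedding search real systems use:

```python
import math
from collections import Counter

def retrieve(query, docs, k=2):
    """Rank documents by cosine similarity of bag-of-words vectors
    and return the top-k: the 'retrieval' step of RAG."""
    def vec(text):
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[t] * b.get(t, 0) for t in a)
        norm = math.sqrt(sum(x * x for x in a.values())) * \
               math.sqrt(sum(x * x for x in b.values()))
        return dot / norm if norm else 0.0

    qv = vec(query)
    return sorted(docs, key=lambda d: cosine(qv, vec(d)), reverse=True)[:k]

# The retrieved chunks would then be prepended to the LLM prompt.
top = retrieve("symptoms of the flu", [
    "The flu causes fever and chills.",
    "Stock prices rose on Tuesday.",
    "Flu symptoms include fatigue.",
], k=2)
```

The "generation" half is just an LLM call whose prompt concatenates `top` with the user's question; the engineering work is in chunking, embedding quality, and evaluating retrieval hit rates.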

Member of Technical Staff, MLE [Singapore]

Cohere
Singapore
Full-time
Why this role is different
This is not a typical "Applied Scientist" or "ML Engineer" role. As a Member of Technical Staff, Applied ML, you will:
- Work directly with enterprise customers on problems that push LLMs to their limits. You'll rapidly understand customer domains, design custom LLM solutions, and deliver production-ready models that solve high-value, real-world problems.
- Train and customize frontier models, not just use APIs. You'll leverage Cohere's full stack: CPT, post-training, retrieval and agent integrations, model evaluations, and SOTA modeling techniques.
- Influence the capabilities of Cohere's foundation models. Techniques, datasets, evaluations, and insights you develop for customers will directly shape the next generation of Cohere's frontier models.
- Operate with an early-startup level of ownership inside a frontier-model company. This role combines the breadth of an early-stage CTO with the infrastructure and scale of a deep-learning lab.
- Wear multiple hats, set a high technical bar, and define what Applied ML at Cohere becomes.

Few roles in the industry combine application, research, customer-facing engineering, and core-model influence as directly as this one.

What you'll do:

Technical leadership & solution design
- Contribute to the design and delivery of custom LLM solutions for enterprise customers.
- Translate ambiguous business problems into well-framed ML problems with clear success criteria and evaluation methodologies.

Modeling, customization & foundations contribution
- Build custom models using Cohere's foundation model stack, CPT recipes, post-training pipelines (including RLVR), and data assets.
- Develop SOTA modeling techniques that directly enhance model performance for customer use cases.
- Contribute improvements back to the foundation-model stack, including new capabilities, tuning strategies, and evaluation frameworks.

Customer-facing technical impact
- Work as part of Cohere's customer-facing MLE team to identify high-value opportunities where LLMs can unlock transformative impact for our enterprise customers.

You may be a good fit if you have:

Technical foundations
- Strong ML fundamentals and the ability to frame complex, ambiguous problems as ML solutions.
- Fluency with Python and core ML/LLM frameworks.
- Experience working with (or the ability to learn) large-scale datasets and distributed training or inference pipelines.
- Understanding of LLM architectures, tuning techniques (CPT, post-training), and evaluation methodologies.
- Demonstrated ability to meaningfully shape LLM performance.

Experience & leadership
- A broad view of the ML research landscape and a desire to push the state of the art.

Mindset
- Bias toward action, high ownership, and comfort with ambiguity.
- Humility and strong collaboration instincts.
- A deep conviction that AI should meaningfully empower people and organizations.

Join us
This is a pivotal moment in Cohere's history. As an MTS in Applied ML, you will define not only what we build, but how the world experiences AI. If you're excited about building custom models, solving generational problems for global organizations, and shaping frontier-model capabilities, we'd love to meet you.

If some of the above doesn't line up perfectly with your experience, we still encourage you to apply! We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.

Forward Deployed Engineer (FDE), Life Sciences - NYC

OpenAI
$220,000 – $280,000
United States
Full-time
About the team
OpenAI's Forward Deployed Engineering team partners with global pharma and biotech companies, CROs, and research institutions, deploying existing expertise across the R&D value chain to help customers design and ship production-grade AI systems. We operate at the intersection of customer delivery and core platform development, converting early deployments into repeatable system standards and evaluation practices that scale across regulated environments.

About the role
We are hiring a Forward Deployed Engineer (FDE) to push the frontier of what is possible today across drug discovery (e.g., target identification, molecular design, pre-clinical work) and development (e.g., trial design, trial operations, biostatistics) by leading end-to-end deployments of our models inside life sciences organizations and research institutions. You will work with customers who are deep experts in their scientific or operational domains, translating real-world data, infrastructure, and constraints into production systems.

You will measure success through production adoption, measurable workflow impact, and eval-driven feedback loops, including evaluation benchmarks and acceptance criteria, that inform product and model roadmaps. You'll work closely with our Product, Research, Partnerships, GRC, Security, and GTM teams to deliver in regulated contexts, including inspection readiness with audit trails and traceable evidence.

This role is based in NYC. We use a hybrid work model of 3 days in the office per week and offer relocation assistance. Travel up to 50% is required.

In this role you will:
- Design and ship production systems around models, owning integrations, data provenance, reliability, and on-call readiness across research, clinical, and operational workflows.
- Lead discovery and scoping from pre-sales through post-sales, translating ambiguous workflow needs into hypothesis-driven problem framing, system requirements, and an execution plan with measurable endpoints.
- Define and enforce launch criteria for regulated contexts, including validation evidence, audit readiness, and outcome metrics, and drive delivery until we demonstrate sustained production impact.
- Build in sensitive scientific data environments where auditability, validation, and access controls shape architecture, operating procedures, and failure handling.
- Run evaluation loops that measure model and system quality against workflow-specific scientific benchmarks, and use the results to drive model and product changes.
- Distill deployment learnings into hardened primitives, reference architectures, validation templates, and benchmark harnesses that scale across regulated life sciences environments.

You might thrive in this role if you:
- Bring 5+ years of software/ML engineering or technical deployment experience with customer-facing ownership in biotech, pharma, clinical research, or scientific software; a PhD, MS, or equivalent applied experience in a life-sciences-relevant field is encouraged.
- Have owned customer GenAI deployments end-to-end, from scoping through production adoption, and improved them through evaluation design, error analysis, and iterative evidence generation that tightens acceptance criteria over time.
- Have delivered AI systems in trial design, regulatory writing, or scientific operations where validation strategy, auditability, compliance constraints, and reviewer expectations shaped system design and rollout.
- Communicate clearly across scientific, clinical, model research, technical, and executive audiences, translating technical tradeoffs into decision quality, risk posture, and measurable outcomes with credibility.
- Apply systems thinking with high execution standards, consistently turning failures, escalations, and audit findings into improved operating standards, validation artifacts, and repeatable deployment playbooks.

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristics. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates.
For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss, or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities; requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Member of Technical Staff - Data Ingestion Engineer

Reflection
United States
Full-time
Our Mission

Reflection’s mission is to build open superintelligence and make it accessible to all. We’re developing open-weight models for individuals, agents, enterprises, and even nation states. Our team of AI researchers and company builders comes from DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic, and beyond.

About the Role

Data is playing an increasingly crucial role at the frontier of AI innovation. Many of the most meaningful advances in recent years have come not from new architectures, but from better data.

As a member of the Data Team, your mission is to build and operate the ingestion systems that turn the open web and other large-scale data sources into reliable, well-structured corpora for training frontier models. You will own the machinery that acquires, extracts, normalizes, versions, and delivers data to our pre-training pipelines. You’ll work directly with world-class researchers to close the loop between what we collect and how it impacts model performance.

This role is ideal for engineers who love building robust distributed systems, but who also want to run experiments, reason about tradeoffs in data acquisition, and iterate quickly based on measurable impact.

Working closely with our pre-training and data quality teams, you will:
- Build and operate large-scale data ingestion systems for pre-training, including web crawling, extraction, and dataset delivery
- Run experiments to evaluate crawling strategies, extraction methods, and ingestion tradeoffs
- Analyze ingested data to identify gaps, redundancy, and areas to improve
- Build ingestion pipelines that scale reliably across large data campaigns
- Develop specialized crawlers for high-priority data sources
- Review code, debug production issues, and continuously improve ingestion infrastructure

About You:
- You are curious about how training data influences model capabilities, and can iterate quickly based on measurable downstream impact
- You collaborate tightly across functions: researchers, infra, operations, and external partners
- You enjoy working in a hybrid research–engineering role

Skills and Qualifications:
- Experience building web crawling, data ingestion, or large-scale data acquisition systems using Ray, Beam, Spark, or similar technologies
- Familiarity with how LLMs are trained and evaluated, and an intuition for what makes data useful for training
- Comfort working with very large datasets (multi-TB to PB scale) and building systems that are observable, testable, and maintainable
- Comfort designing experiments and using data to guide system improvements
- Excellent communication skills: you can explain system behavior and communicate tradeoffs clearly

What We Offer:

We believe that to build superintelligence that is truly open, you need to start at the foundation. Joining Reflection means building from the ground up as part of a small, talent-dense team. You will help define our future as a company, and help define the frontier of open foundational models. We want you to do the most impactful work of your career with the confidence that you and the people you care about most are supported.
- Top-tier compensation: salary and equity structured to recognize and retain the best talent globally.
- Health & wellness: comprehensive medical, dental, vision, life, and disability insurance.
- Life & family: fully paid parental leave for all new parents, including adoptive and surrogate journeys, plus financial support for family planning.
- Benefits & balance: paid time off when you need it, relocation support, and more perks that optimize your time.
- Opportunities to connect with teammates: lunch and dinner are provided daily, plus regular off-sites and team celebrations.
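The extraction-and-deduplication step at the heart of such an ingestion pipeline can be sketched in a few lines. This is a minimal, illustrative sketch using only the Python standard library; real pipelines of the kind described here would use proper extractors and a distributed framework such as Ray, Beam, or Spark, and all names below are hypothetical:

```python
import hashlib
import re

def extract_text(html: str) -> str:
    """Crudely strip tags and collapse whitespace. Production extractors
    are far more careful about boilerplate, encoding, and structure."""
    text = re.sub(r"<script.*?</script>|<style.*?</style>", " ", html, flags=re.S)
    text = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def dedup(records):
    """Drop exact duplicates by content hash, keeping the first occurrence."""
    seen, out = set(), []
    for rec in records:
        h = hashlib.sha256(rec["text"].encode()).hexdigest()
        if h not in seen:
            seen.add(h)
            out.append(rec)
    return out

pages = [
    {"url": "https://example.com/a", "text": extract_text("<p>Hello <b>world</b></p>")},
    {"url": "https://example.com/b", "text": extract_text("<p>Hello   world</p>")},
]
print(dedup(pages))  # both pages normalize to "Hello world", so one record survives
```

Even this toy version shows the normalize-then-hash ordering that matters at scale: deduplication only works if extraction has already canonicalized whitespace and markup.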

HR Operations Partner

Together AI
$160,000 – $230,000
Full-time
Remote
false
About the Role

As an AI Researcher, you will push the frontier of foundation model research and make it a reality in products. You will develop novel architectures, system optimizations, optimization algorithms, and data-centric optimizations that go beyond the state of the art. As a team, we have been pushing on all of these fronts (e.g., Hyena, FlashAttention, FlexGen, and RedPajama). You will also work closely with the machine learning systems, NLP/CV, and engineering teams for inspiration on research problems and to jointly work on solutions to practical challenges. You will also interact with customers to help them in their journey of training, using, and improving their AI applications using open models. Your research skills will be vital in staying up to date with the latest advancements in machine learning, ensuring that we stay at the cutting edge of open model innovations.

Requirements
- Strong background in machine learning
- Experience building state-of-the-art models at large scale
- Experience developing algorithms in areas such as optimization, model architecture, and data-centric optimizations
- Passion for contributing to the open model ecosystem and pushing the frontier of open models
- Excellent problem-solving and analytical skills
- Bachelor's, Master's, or Ph.D. degree in Computer Science, Electrical Engineering, or a related field

Responsibilities
- Develop novel architectures, system optimizations, optimization algorithms, and data-centric optimizations that significantly improve over the state of the art
- Take advantage of Together's computational infrastructure to create the best open models in their class
- Understand and improve the full lifecycle of building open models; release and publish your insights (blogs, academic papers, etc.)
- Collaborate with cross-functional teams to deploy your models and make them available to a wider community and customer base
- Stay up to date with the latest advancements in machine learning

About Together AI

Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.

Compensation

We offer competitive compensation, startup equity, health insurance, and other competitive benefits. The US base salary range for this full-time position is $160,000 – $230,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Equal Opportunity

Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more. Please see our privacy policy at https://www.together.ai/privacy

Lead Software Engineer (Machine Learning)

Faculty
United Kingdom
Full-time
Remote
false
Why Faculty?

We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we’ve worked with over 350 global customers to transform their performance through human-centric AI. You can read about our real-world impact here.

We don’t chase hype cycles. We innovate, build, and deploy responsible AI which moves the needle, and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product, and delivery expertise to our clients, who span government, finance, retail, energy, life sciences, and defence.

Our business, and reputation, is growing fast, and we’re always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology. AI is an epoch-defining technology. Join a company where you’ll be empowered to envision its most powerful applications, and to make them happen.

About the team

Our Energy, Transition and Environment business unit is pioneering meaningful change in the clean energy revolution. Our vision is to accelerate the transition to net-zero emissions and drive efficiencies for a new era of utility companies. We believe that the responsible, and intelligent, deployment of AI is critical to the success of this mission. We partner with a wide range of clients, from major energy operators to GreenTech startups and national infrastructure providers, to build solutions which return measurable impact and move us towards a smarter, cleaner, and more sustainable world.

About the role

Join us as a Lead Software Engineer, with a focus on Machine Learning, to spearhead the technical direction and delivery of complex, innovative AI projects. You will act as a technical expert, applying your skills across various projects from client-side deployments to advising on AI strategy, while ensuring architectural decisions are sound and reliable. This role demands a balance of deep technical expertise and strong leadership, focusing on driving innovation, fostering team growth, and building reusable solutions across the organisation. If you’re ready to manage high-risk projects and deliver practical, innovative outcomes, this is your chance to shape our future.

What you’ll be doing:
- Setting the technical direction and overseeing delivery of high-risk, ill-defined software and infrastructure projects while balancing strategic trade-offs and helping teams prioritise in shifting environments, taking full ownership of successful outcomes for our most challenging projects.
- Designing and developing reliable, production-grade ML systems and justifying critical architectural decisions to ensure robust delivery.
- Developing clear, comprehensively scoped roadmaps for novel solutions to help customers achieve their strategic goals, and accurately estimating effort on large workstreams to ensure successful and timely delivery.
- Engaging with technical and non-technical customers at all stages of the customer lifecycle, giving reasoned and credible advice and opinions on a broad range of engineering topics.
- Collaborating proactively both within multidisciplinary delivery teams and across the engineering community at Faculty to overcome technical challenges.
- Coaching team members on specific technologies and driving the development of shared organisational resources and libraries to streamline delivery and improve engineering methods across the company.
- Leading the hiring and selection process while mentoring multiple individuals and managers to define the future shape of the engineering team.

Who we’re looking for:
- You are a recognised technical expert who sets the standard for code quality and solution design, possessing the breadth of knowledge to solve almost any problem.
- You have an entrepreneurial mindset and are proactive in recommending new technologies or ways of working to keep our offering ahead of the competition.
- You bring expert-level experience in at least one major cloud provider (AWS, GCP, or Azure) and have led teams to build full-stack web applications.
- You are a proven leader, capable of managing other managers and setting team-wide development goals to elevate client delivery.
- You thrive in high-stakes environments, demonstrating the ability to turn innovative ideas into practical, measurable outcomes for global energy operators.
- You are a compelling communicator who can confidently defend technical rationales to senior stakeholders and guide both technical and non-technical teams.

The Interview Process
- Talent Team screen (30 minutes)
- Introduction to the role (45 minutes)
- Pair programming interview (90 minutes)
- System design interview (90 minutes)
- Commercial & leadership interview (60 minutes)

Our Recruitment Ethos

We aim to grow the best team, not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live. We’re united by a deep intellectual curiosity and desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions, and sexual orientations.

Some of our standout benefits:
- Unlimited annual leave policy
- Private healthcare and dental
- Enhanced parental leave
- Family-friendly flexibility & flexible working
- Sanctus coaching
- Hybrid working (2 days in our Old Street office, London)

If you don’t feel you meet all the requirements, but are excited by the role and know you bring some key strengths, please do apply or reach out to our Talent Acquisition team for a confidential chat: talent@faculty.ai. Please know we are open to conversations about part-time roles or condensed hours.

Infrastructure Engineer

Dataiku
France
Germany
Netherlands
Full-time
Remote
false
Dataiku is The Universal AI Platform™, giving organizations control over their AI talent, processes, and technologies to unleash the creation of analytics, models, and agents. Providing no-, low-, and full-code capabilities, Dataiku meets teams where they are today, allowing them to begin building with AI using their existing skills and knowledge.

Dataiku’s promise to our customers is to provide them with the software and support needed to accelerate their Data Science and Machine Learning maturity. Dataiku’s Data Science team is responsible for delivering on that promise. As an AI Deployment Strategist / Data Scientist at Dataiku, you will have the opportunity to participate in our customers’ journeys, from supporting their discovery of the platform to coaching users and co-developing data science applications from design to deployment. You will primarily work with our customers in the financial services and insurance industries. You will gain hands-on experience coding in multiple languages (primarily Python; occasionally R, SQL, PySpark, JavaScript, etc.) and applying the latest big data technologies to real-world business use cases. Our ideal candidate is comfortable learning new languages, technologies, and modelling techniques while being able to explain their work to other data scientists and clients.

Key Areas of Responsibility (What You’ll Do)
- Help users discover and master the Dataiku platform through user training, office hours, demos, and ongoing consultative support.
- Analyse and investigate various kinds of data and machine learning applications across industries and use cases.
- Provide strategic input to the customer and account teams that helps our customers achieve success.
- Scope and co-develop production-level data science projects with our customers.
- Mentor and help educate data scientists and other customer team members to aid in career development and growth.

Experience (What We’re Looking For)
- Fluent French and English
- Curiosity and a desire to learn new technical skills
- Empathy and an eagerness to share your knowledge with your colleagues, Dataiku’s customers, and the general public
- Ability to clearly explain complex topics to technical as well as non-technical audiences
- Over 5 years of experience with coding (Python, R, SQL)
- Over 5 years of experience building ML models
- Understanding of underlying data systems and platform mechanics, such as cloud architectures, Kubernetes, Spark, and SQL

Bonus points for any of these:
- Experience with consulting and/or customer-facing data science roles
- Experience in the manufacturing industry
- Experience with Spark, SAS, data engineering, or MLOps
- Experience developing web apps in JavaScript, RShiny, or Dash
- Experience building APIs
- Experience using enterprise data science tools
- Passion for teaching or public speaking

What are you waiting for?

At Dataiku, you’ll be part of a journey to shape the ever-evolving world of AI. We’re not just building a product; we’re crafting the future of AI. If you’re ready to make a significant impact at a company that values innovation, collaboration, and your personal growth, we can’t wait to welcome you to Dataiku! And if you’d like to learn even more about working here, you can visit our Dataiku LinkedIn page.

Our practices are rooted in the idea that everyone should be treated with dignity, decency, and fairness. Dataiku also believes that a diverse identity is a source of strength and allows us to optimize across the many dimensions that are needed for our success. Therefore, we are proud to be an equal opportunity employer. All employment practices are based on business needs, without regard to race, ethnicity, gender identity or expression, sexual orientation, religion, age, neurodiversity, disability status, citizenship, veteran status, or any other aspect which makes an individual unique or protected by laws and regulations in the locations where we operate. This applies to all policies and procedures related to recruitment and hiring, compensation, benefits, performance, promotion, and termination, and all other conditions and terms of employment. If you need assistance or an accommodation, please contact us at reasonable-accommodations@dataiku.com.

Protect yourself from fraudulent recruitment activity

Dataiku will never ask you for payment of any type during the interview or hiring process. Other than our video-conference application, Zoom, we will never ask you to make purchases or download third-party applications during the process. If you experience something out of the ordinary or suspect fraudulent activity, please review our page on identifying and reporting fraudulent activity here.

Senior Director and AGC, Product Legal (Privacy, IP, Employment)

Scale AI
$201,600 – $241,920
United States
Full-time
Remote
false
About the role

We’re hiring an AI Architect to sit at the intersection of frontier AI research, product, and go-to-market. You’ll partner closely with ML teams in high-stakes meetings, scope and pitch solutions to top AI labs, and translate research needs (post-training, evals, alignment) into clear product roadmaps and measurable outcomes. You’ll drive end-to-end delivery, partnering with AI research teams and core customers to scope, pilot, and iterate on frontier model improvements, while coordinating with engineering, ops, and finance to translate cutting-edge research into deployable, high-impact solutions.

What you’ll do
- Translate research into product: work with client-side researchers on post-training, evals, and safety/alignment, and build the primitives, data, and tooling they need.
- Partner deeply with core customers and frontier labs: work hands-on with leading AI teams and frontier research labs to tackle hard, open-ended technical problems related to frontier model improvement, performance, and deployment.
- Shape and propose model improvement work: translate customer and research objectives into clear, technically rigorous proposals, scoping post-training, evaluation, and safety work into well-defined statements of work and execution plans.
- Translate research into production impact: collaborate with customer-side researchers on post-training, evaluations, and alignment, and help design the data, primitives, and tooling required to improve frontier models in practice.
- Own the end-to-end lifecycle: lead discovery, write crisp PRDs and technical specs, prioritize trade-offs, run experiments, ship initial solutions, and scale successful pilots into durable, repeatable offerings.
- Lead complex, high-stakes engagements: independently run technical working sessions with senior customer stakeholders, define success metrics, surface risks early, and drive programs to measurable outcomes.
- Partner across Scale: collaborate closely with research (agents, browser/SWE agents), platform, operations, security, and finance to deliver reliable, production-grade results for demanding customers.
- Build evaluation rigor at the frontier: design and stand up robust evaluation frameworks (e.g., RLVR, benchmarks), close the loop with data quality and feedback, and share learnings that elevate technical execution across accounts.

You have
- A deep technical background in applied AI/ML: 5–10+ years in research, engineering, solutions engineering, or technical product roles working on LLMs or multimodal systems, ideally in high-stakes, customer-facing environments.
- Hands-on experience with model improvement workflows: demonstrated experience with post-training techniques, evaluation design, benchmarking, and model quality iteration.
- The ability to work on hard, ambiguous technical problems: a proven track record of partnering directly with advanced customers or research teams to scope, reason through, and execute on deep technical challenges involving frontier models.
- Strong technical fluency: you can read papers, interrogate metrics, write or review complex Python/SQL for analysis, and reason about model–data trade-offs.
- Executive presence with world-class researchers and enterprise leaders; excellent writing and storytelling.
- A bias to action: you ship, learn, and iterate.

How you’ll work
- Customer-obsessed: start from real research needs, prototype quickly, and validate with data.
- Cross-functional by default: align research, engineering, ops, and GTM on a single plan; communicate clearly up and down.
- Field-forward: expect regular time with customers and research leads; light travel as needed.

What success looks like
- Clear wins with top labs: pilots that convert to scaled programs with strong eval signals.
- Reusable alignment and eval building blocks that shorten time-to-value across accounts.
- Crisp internal docs (PRDs, experiment readouts, exec updates) that drive decisions quickly.

Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Directors approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for an equity grant. You’ll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.

Please reference the job posting’s subtitle for where this position will be located. For pay transparency purposes, the base salary range for this full-time position in San Francisco, New York, and Seattle is $201,600 – $241,920 USD.

PLEASE NOTE: Our policy requires a 90-day waiting period before reconsidering candidates for the same role. This allows us to ensure a fair and thorough evaluation of all applicants.

About Us

At Scale, our mission is to develop reliable AI systems for the world’s most important decisions. Our products provide the high-quality data and full-stack technologies that power the world’s leading models, and help enterprises and governments build, deploy, and oversee AI applications that deliver real impact. We work closely with industry leaders like Meta, Cisco, DLA Piper, Mayo Clinic, Time Inc., the Government of Qatar, and U.S. government agencies including the Army and Air Force. We are expanding our team to accelerate the development of AI applications.

We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity, or veteran status. We are committed to working with and providing reasonable accommodations to applicants with physical and mental disabilities. If you need assistance and/or a reasonable accommodation in the application or recruiting process due to a disability, please contact us at accommodations@scale.com. Please see the United States Department of Labor’s Know Your Rights poster for additional information. We comply with the United States Department of Labor’s Pay Transparency provision.

PLEASE NOTE: We collect, retain, and use personal data for our professional business purposes, including notifying you of job opportunities that may be of interest, and share it with our affiliates. We limit the personal data we collect to that which we believe is appropriate and necessary to manage applicants’ needs, provide our services, and comply with applicable laws. Any information we collect in connection with your application will be treated in accordance with our internal policies and programs designed to protect personal data. Please see our privacy policy for additional information.
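The "RLVR"-style evaluation frameworks mentioned above are built on verifiable rewards: a program, rather than a human rater, checks each model output against a gold answer. This is a minimal illustrative sketch, not Scale's actual tooling; the answer-extraction heuristic and the sample data are hypothetical:

```python
import re

def verifiable_reward(model_output: str, gold_answer: str) -> float:
    """Binary reward: extract the final numeric answer from the model's
    output and compare it to a programmatically checkable gold answer."""
    nums = re.findall(r"-?\d+(?:\.\d+)?", model_output)
    if not nums:
        return 0.0
    return 1.0 if nums[-1] == gold_answer else 0.0

# Hypothetical eval loop over (model output, gold answer) pairs:
samples = [
    ("The answer is 42.", "42"),
    ("Let me think... probably 7", "9"),
]
accuracy = sum(verifiable_reward(out, gold) for out, gold in samples) / len(samples)
print(accuracy)  # 0.5
```

The same reward function can then drive both benchmarking (aggregate accuracy) and reinforcement-learning post-training, which is what makes verifiable tasks attractive for closing the data-quality feedback loop.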

Staff Software Engineer, Inference Infrastructure

Cohere
United States
Full-time
Remote
false
Who are we?

Our mission is to scale intelligence to serve humanity. We’re training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.

We obsess over what we build. Each one of us is responsible for contributing to increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what’s best for our customers.

Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.

Join us on our mission and shape the future!

Why this role?

Are you energized by building high-performance, scalable, and reliable machine learning systems? Do you want to help define and build the next generation of AI platforms powering advanced NLP applications? We are looking for Members of Technical Staff to join the Model Serving team at Cohere. The team is responsible for developing, deploying, and operating the AI platform that delivers Cohere’s large language models through easy-to-use API endpoints. In this role, you will work closely with many teams to deploy optimized NLP models to production in low-latency, high-throughput, and high-availability environments. You will also get the opportunity to interface with customers and create customized deployments to meet their specific needs.

You may be a good fit if you have:
- 5+ years of engineering experience running production infrastructure at large scale
- Experience designing large, highly available distributed systems with Kubernetes, and running GPU workloads on those clusters
- Experience with Kubernetes development and production coding and support
- Experience with GCP, Azure, AWS, OCI, and multi-cloud, on-prem, or hybrid serving
- Experience designing, deploying, supporting, and troubleshooting complex Linux-based computing environments
- Experience in compute, storage, and network resource and cost management
- Excellent collaboration and troubleshooting skills to build mission-critical systems and ensure smooth operations and efficient teamwork
- The grit and adaptability to solve complex technical challenges that evolve day to day
- Familiarity with the computational characteristics of accelerators (GPUs, TPUs, and/or custom accelerators), especially how they influence the latency and throughput of inference
- A strong understanding of, or working experience with, distributed systems
- Experience in Golang, C++, or other languages designed for high-performance, scalable servers

If some of the above doesn’t line up perfectly with your experience, we still encourage you to apply! We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.

Full-Time Employees at Cohere enjoy these Perks:
🤝 An open and inclusive culture and work environment
🧑‍💻 Work closely with a team on the cutting edge of AI research
🍽 Weekly lunch stipend, in-office lunches & snacks
🦷 Full health and dental benefits, including a separate budget to take care of your mental health
🐣 100% parental leave top-up for up to 6 months
🎨 Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
🏙 Remote-flexible, with offices in Toronto, New York, San Francisco, London, and Paris, as well as a co-working stipend
✈️ 6 weeks of vacation (30 working days!)

Site Reliability Engineer, Inference Infrastructure

Cohere
Canada
Full-time
Remote
false
Who are we?

Our mission is to scale intelligence to serve humanity. We’re training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.

We obsess over what we build. Each one of us is responsible for contributing to increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what’s best for our customers.

Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.

Join us on our mission and shape the future!

Why this role?

Are you energized by building high-performance, scalable, and reliable machine learning systems? Do you want to help define and build the next generation of AI platforms powering advanced NLP applications? We are looking for a Site Reliability Engineer to join the Model Serving team at Cohere. The team is responsible for developing, deploying, and operating the AI platform that delivers Cohere’s large language models through easy-to-use API endpoints. In this role, you will work closely with many teams to deploy optimized NLP models to production in low-latency, high-throughput, and high-availability environments. You will also get the opportunity to interface with customers and create customized deployments to meet their specific needs.

As a Site Reliability Engineer you will:
- Build self-service systems that automate managing, deploying, and operating services, including our custom Kubernetes operators that support language model deployments
- Automate environment observability and resilience
- Enable all developers to troubleshoot and resolve problems
- Take the steps required to ensure we hit defined SLOs, including participating in an on-call rotation
- Build strong relationships with internal developers and influence the Infrastructure team’s roadmap based on their feedback
- Develop our team through knowledge sharing and an active review process

You may be a good fit if you have:
- 5+ years of engineering experience running production infrastructure at large scale
- Experience designing large, highly available distributed systems with Kubernetes, and running GPU workloads on those clusters
- Experience with Kubernetes development and production coding and support
- Experience with GCP, Azure, AWS, OCI, and multi-cloud, on-prem, or hybrid serving
- Experience designing, deploying, supporting, and troubleshooting complex Linux-based computing environments
- Experience in compute, storage, and network resource and cost management
- Excellent collaboration and troubleshooting skills to build mission-critical systems and ensure smooth operations and efficient teamwork
- The grit and adaptability to solve complex technical challenges that evolve day to day
- Familiarity with the computational characteristics of accelerators (GPUs, TPUs, and/or custom accelerators), especially how they influence the latency and throughput of inference
- A strong understanding of, or working experience with, distributed systems
- Experience in Golang, C++, or other languages designed for high-performance, scalable servers

If some of the above doesn’t line up perfectly with your experience, we still encourage you to apply! We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.

Full-Time Employees at Cohere enjoy these Perks:
🤝 An open and inclusive culture and work environment
🧑‍💻 Work closely with a team on the cutting edge of AI research
🍽 Weekly lunch stipend, in-office lunches & snacks
🦷 Full health and dental benefits, including a separate budget to take care of your mental health
🐣 100% parental leave top-up for up to 6 months
🎨 Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
🏙 Remote-flexible, with offices in Toronto, New York, San Francisco, London, and Paris, as well as a co-working stipend
✈️ 6 weeks of vacation (30 working days!)

Research-Hardware Codesign Engineer

OpenAI
$230,000 – $460,000
United States
Full-time
Remote: false
About the Team
OpenAI’s Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI’s supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.

About the Role
We’re seeking a Research-Hardware Codesign Engineer to operate at the boundary between model research and silicon/system architecture. You’ll help shape the numerics, architecture, and technology bets of future OpenAI silicon in collaboration with both Research and Hardware.

Your work will include debugging gaps between rooflines and reality, writing quantization kernels, derisking numerics via model evals, quantifying system architecture tradeoffs, and implementing novel numeric RTL. This is a hands-on role for people who go looking for hard problems, get to ground truth, and drive their work to production. Strong prioritization and clear, honest communication are essential.

Location: San Francisco, CA (Hybrid: 3 days/week onsite). Relocation assistance available.

In this role, you will:
Build on our roofline simulator to track evolving workloads, and deliver analyses that quantify the impact of system architecture decisions and support technology pathfinding.
Debug gaps between performance simulation and real measurements; clearly communicate root causes, bottlenecks, and invalid assumptions.
Write emulation kernels for low-precision numerics and lossy compression schemes, and get Research the information they need to trade efficiency against model quality.
Prototype numerics modules by pushing RTL through synthesis; hand off novel numerics cleanly, or occasionally own an RTL module end-to-end.
Proactively pull in new ML workloads, prototype them with rooflines and/or functional simulation, and drive initial evaluation of new opportunities or risks.
Understand the whole picture from ML science to hardware optimization, and slice this end-to-end objective into near-term deliverables.
Build ad-hoc collaborations across teams with very different goals and areas of expertise, and keep progress unblocked.
Communicate design tradeoffs clearly with explicit assumptions and confidence levels; produce a trail of evidence that enables confident execution.

You will thrive in this role if you have:
An exceptional track record of high-quality technical output, and a bias for shipping a prototype now and iterating later in the absence of clear requirements.
Strong Python, and C++ or Rust, with a cautious attitude toward correctness and an intuition for clean extensibility.
Experience writing Triton, CUDA, or similar, and an understanding of the resulting mapping of tensor ops to functional units.
Working knowledge of PyTorch or JAX; experience in large ML codebases is a plus.
A practical understanding of floating point numerics, the ML tradeoffs of reduced precision, and the current state of the art in model quantization.
A deep understanding of transformer models, and strong intuition for transformer rooflines and the tradeoffs of sharded training and inference in large-scale ML systems.
Experience writing RTL (especially for floating point logic) and an understanding of PPA tradeoffs (a plus).
Strong cross-functional communication (e.g., across ML researchers and hardware engineers), and the ability to slice ambiguous early-incubation ideas into concrete arenas in which progress can be made.

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates.

For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Research Engineer, AI for Science

OpenAI
$310,000 – $460,000
United States
Full-time
Remote
false
About the Team
OpenAI for Science is building the next great scientific instrument: an AI-powered platform that accelerates scientific discovery. We aim to prove that OpenAI’s frontier models can do real science, and to help researchers everywhere do more, faster.

About the Role
As a Research Engineer, you will build AI systems that enable previously impossible capabilities or achieve unprecedented levels of performance. You’ll work at the intersection of engineering and research, designing, implementing, and improving large-scale machine learning systems while contributing to the science behind the algorithms themselves.

We’re looking for people with strong engineering fundamentals who enjoy writing high-quality ML code, are comfortable working at massive scale, and are excited about OpenAI’s approach to research. As deep learning systems continue to scale, engineering excellence will play a critical role in driving the next major advances in AI.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In this role, you will:
Design, implement, and improve large-scale distributed machine learning systems
Write robust, high-quality machine learning code and contribute to performance-critical components
Collaborate closely with researchers to translate ideas into scalable, production-ready systems

You might thrive in this role if you:
Have strong programming skills and enjoy building reliable, high-performance systems
Are comfortable working in large distributed systems and at significant computational scale
Are excited about OpenAI’s research direction and motivated by the real-world impact of AI

Nice to have:
Interest in using AI to accelerate scientific discovery, improve experimental design, or enable new forms of scientific insight
Experience building high-performance implementations of deep learning algorithms

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates.

For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Member of Technical Staff - Alignment Lead

Reflection
United States
Full-time
Remote
false
Our Mission
Reflection’s mission is to build open superintelligence and make it accessible to all. We’re developing open weight models for individuals, agents, enterprises, and even nation states. Our team of AI researchers and company builders comes from DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic, and beyond.

About the Role
Drive the entire alignment stack, spanning instruction tuning, RLHF, and RLAIF, to push the model toward high factual accuracy and robust instruction following.
Lead research efforts to design next-generation reward models and optimization objectives that significantly improve human preference (HP) performance.
Curate high-quality training data and design synthetic data pipelines that close complex reasoning and behavioral gaps.
Optimize large-scale RL pipelines for stability and efficiency, ensuring rapid iteration cycles for model improvements.
Collaborate closely with pre-training and evaluation teams to create tight feedback loops that translate alignment research into generalizable model gains.

About You
Graduate degree (MS or PhD) in Computer Science, Machine Learning, or a related discipline.
Deep technical command of alignment methodologies (PPO, DPO, rejection sampling) and experience scaling them to large models.
Strong engineering skills; comfortable diving into complex ML codebases and distributed systems.
Experience improving model behavior through data, reward modeling, or RL techniques.
Evidence of owning ambitious research or engineering agendas that led to measurable model improvements.
Thrive in a fast-paced, high-agency startup environment with a bias toward action.
Passionate about advancing the frontier of intelligence.

What We Offer
We believe that to build superintelligence that is truly open, you need to start at the foundation. Joining Reflection means building from the ground up as part of a small, talent-dense team. You will help define our future as a company, and help define the frontier of open foundational models. We want you to do the most impactful work of your career with the confidence that you and the people you care about most are supported.

Top-tier compensation: Salary and equity structured to recognize and retain the best talent globally.
Health & wellness: Comprehensive medical, dental, vision, life, and disability insurance.
Life & family: Fully paid parental leave for all new parents, including adoptive and surrogate journeys. Financial support for family planning.
Benefits & balance: Paid time off when you need it, relocation support, and more perks that optimize your time.
Opportunities to connect with teammates: Lunch and dinner are provided daily. We have regular off-sites and team celebrations.

Member of Technical Staff - Safety Lead

Reflection
United States
Full-time
Remote
false
Our Mission
Reflection’s mission is to build open superintelligence and make it accessible to all. We’re developing open weight models for individuals, agents, enterprises, and even nation states. Our team of AI researchers and company builders comes from DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic, and beyond.

About the Role
Own the red-teaming and adversarial evaluation pipeline for Reflection’s models, continuously probing for failure modes across security, misuse, and alignment gaps.
Work hand-in-hand with the Alignment team to translate safety findings into concrete guardrails, ensuring models behave reliably under stress and adhere to deployment policies.
Validate that every release meets the lab’s risk thresholds before it ships, serving as a critical gatekeeper for our open weight releases.
Develop scalable, automated safety benchmarks that evolve alongside our model capabilities, moving beyond static datasets to dynamic adversarial testing.
Research and implement state-of-the-art jailbreaking techniques and defenses to stay ahead of potential vulnerabilities in the wild.

About You
Graduate degree (MS or PhD) in Computer Science, Machine Learning, or a related discipline, or equivalent practical experience in AI safety.
Deep technical understanding of LLM safety, including adversarial attacks, red-teaming methodologies, and interpretability.
Strong software engineering capabilities, with experience building automated evaluation pipelines or large-scale ML systems.
Experience with Reinforcement Learning (RLHF/RLAIF) and how it impacts model safety and alignment is a strong plus.
Thrive in a fast-paced, high-agency startup environment with a bias toward action.
Willing to make high-stakes decisions regarding model release and safety thresholds.
Passionate about advancing the frontier of intelligence.

What We Offer
We believe that to build superintelligence that is truly open, you need to start at the foundation. Joining Reflection means building from the ground up as part of a small, talent-dense team. You will help define our future as a company, and help define the frontier of open foundational models. We want you to do the most impactful work of your career with the confidence that you and the people you care about most are supported.

Top-tier compensation: Salary and equity structured to recognize and retain the best talent globally.
Health & wellness: Comprehensive medical, dental, vision, life, and disability insurance.
Life & family: Fully paid parental leave for all new parents, including adoptive and surrogate journeys. Financial support for family planning.
Benefits & balance: Paid time off when you need it, relocation support, and more perks that optimize your time.
Opportunities to connect with teammates: Lunch and dinner are provided daily. We have regular off-sites and team celebrations.

Principal Software Architect

HackerOne
$230,000 – $255,000
United States
Full-time
Remote
false
HackerOne is a global leader in Continuous Threat Exposure Management (CTEM). The HackerOne Platform unites agentic AI solutions with the ingenuity of the world’s largest community of security researchers to continuously discover, validate, prioritize, and remediate exposures across code, cloud, and AI systems. Through solutions like bug bounty, vulnerability disclosure, agentic pentesting, AI red teaming, and code security, HackerOne delivers measurable, continuous reduction of cyber risk for enterprises. Industry leaders, including Anthropic, Crypto.com, General Motors, Goldman Sachs, Lufthansa, Uber, the UK Ministry of Defence, and the U.S. Department of Defense, trust HackerOne to safeguard their digital ecosystems. HackerOne was recognized in Gartner’s Emerging Tech Impact Radar: AI Cybersecurity Ecosystem report for its leadership in AI Security Testing and has been named a Most Loved Workplace for Young Professionals (2024).

HackerOne is at a pivotal inflection point in the security industry. Offensive security is no longer optional: it is the standard for forward-thinking companies that want to build trust and resilience in a world where AI-driven innovation and adversaries are moving faster than ever. With the industry shifting, HackerOne stands apart: we combine the ingenuity of the largest security research community with a best-in-class AI-powered platform, trusted by the world’s top organizations.

HackerOne Values
HackerOne is dedicated to fostering a strong and inclusive culture. HackerOne is Customer Obsessed and prioritizes customer outcomes in our decisions and actions. We Default to Disclosure by operating with transparency and integrity, ensuring trust and accountability. Employees, researchers, customers, and partners Win Together by fostering empowerment, inclusion, respect, and accountability.

Principal Software Architect
Eligible Remote Locations:
- Boston, MA
- Austin, TX
- Washington, DC

Position Summary
As a Principal Software Architect, you will define and drive the architectural vision, and identify the steps toward the target architecture, for the HackerOne Platform: the foundation that powers our global community of security researchers and the thousands of organizations that rely on us to secure their digital ecosystems. You will lead cross-team technical strategy, ensuring our systems are scalable, reliable, secure, and designed for the future of AI-driven offensive security. Collaborating with Product Managers, Designers, and Engineering leadership, you will translate business strategy into cohesive, sustainable system architecture. Your work will have deep, long-term impact on HackerOne’s ability to innovate quickly, deliver reliably, and uphold the trust our customers place in us.

At HackerOne, we embrace a Flexible Work approach that gives us the freedom to do our best work while also fostering the connections and community that make us stronger. Reflecting this philosophy, this is a remote role targeted at candidates within ~50 miles of Boston, Washington, DC, or Austin. We believe this balance of proximity and flexibility gives Hackeronies the chance to occasionally come together, fostering collaboration, connection, and in-person moments that enrich our culture, while still preserving the benefits of remote work.

What You Will Do
Define and evolve the architectural vision for HackerOne’s platform and core systems, ensuring scalability, reliability, and performance.
Partner with Product, Platform, and Security teams to translate long-term business and product goals into actionable architectural strategies.
Collaborate with Principal and Distinguished Engineers to align on technical direction, establish shared standards, and evolve HackerOne’s system design principles.
Lead major cross-team initiatives that modernize our architecture, improve observability, and reduce complexity across our systems.
Mentor and guide engineering teams, fostering a culture of technical excellence, knowledge sharing, and continuous improvement.
Evaluate and integrate emerging technologies, including AI, GenAI, and LLM-driven architectures, to enhance the intelligence and effectiveness of our platform.
Drive architectural governance and documentation, ensuring long-term maintainability and transparency in decision-making.
Communicate architectural direction clearly to both technical and non-technical stakeholders, building alignment through clarity and evidence.

Within your first 30-60-90 days, you’ll move from deeply understanding our systems and architecture, to identifying strategic opportunities, to leading architectural initiatives that impact teams company-wide. Win Together, Default to Disclosure, and Customer Obsession will be critical to your success in this role, as you collaborate openly, build trust across teams, and design systems that empower our customers and community.

Minimum Qualifications
10+ years of experience in software engineering and system architecture within a SaaS environment.
Proven track record designing and delivering large-scale distributed systems, ideally using Ruby on Rails, ReactJS, TypeScript, GraphQL, and ElasticSearch/OpenSearch.
Hands-on experience with GenAI and LLM integration in production systems; understanding of the model lifecycle or AI-assisted architectures is a strong plus.
Demonstrated experience leading architectural initiatives spanning multiple teams and product domains.
Excellent communication and influence skills, capable of aligning technical and non-technical stakeholders around shared goals.

Preferred Qualifications
Experience driving modernization and scalability initiatives in complex, legacy systems.
Deep knowledge of system reliability, security, and performance optimization in high-availability environments.
Proven ability to mentor engineers and elevate architectural and coding standards across an organization.

Compensation
$230,000 – $255,000 (HackerOne also offers equity in the form of stock options)

Job Benefits:
Health (medical, vision, dental), life, and disability insurance*
Equity stock options
Retirement plans
Paid public holidays and unlimited PTO
Paid maternity and parental leave
Leaves of absence (including caregiver leave and leave under Colorado’s Healthy Families and Workplaces Act)
Employee Assistance Program
Flexible Work Stipend*
*Eligibility may differ by country

We’re committed to building a global team! For certain roles outside the United States, India, the U.K., and the Netherlands, we partner with Remote.com as our Employer of Record (EOR). Visa/work permit sponsorship is not available. Employment at HackerOne is contingent on a background check.

HackerOne is an Equal Opportunity Employer in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, pregnancy, disability or veteran status, or any other protected characteristic as outlined by international, federal, state, or local laws. This policy applies to all HackerOne employment practices, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, compensation, benefits, training, and apprenticeship. HackerOne makes hiring decisions based solely on qualifications, merit, and business needs at the time.

For US-based roles only: Pursuant to the San Francisco Fair Chance Ordinance, all qualified applicants with arrest and conviction records will be considered for the position.

Senior Staff Systems Engineer

ASAPP
$240,000 – $265,000
United States
Full-time
Remote
false
At ASAPP, our mission is simple: deliver the best AI-powered customer experience, faster than anyone else. We are guided by principles that shape how we think, build, and execute, including deep customer obsession, purposeful speed, ownership, and a relentless focus on outcomes. We work in small, highly skilled teams, prioritize clarity over complexity, and continuously evolve through curiosity, data, and craftsmanship. We’re building a globally diverse team of technologists and problem solvers who thrive in fast-paced environments, value collaboration, and approach every challenge with a Day 1 mindset, with hubs in New York City, Mountain View, Latin America, and India. If you’re driven by continuous learning, rapid iteration, and the challenge of building in a high-growth startup, this is more than a role; it’s a journey.

We are seeking a Senior Staff Systems Engineer to drive the architectural vision for our GenerativeAgent product. In this role, you will design and build a highly scalable, multi-agent platform that powers real-time voice and text customer service experiences across a wide range of industries. As a Senior Staff Engineer, you will set long-term system direction, tackle the organization’s most complex technical challenges, and partner closely with engineering, product, security, and operations leaders. You will ensure our systems are resilient, scalable, and tightly aligned with both customer needs and business goals, while maintaining a strong hands-on presence and broad organizational impact.

What you'll do

Technical Leadership & Influence
Act as a technical authority and advisor across multiple engineering teams
Develop and drive alignment on system design, technical roadmaps, and best practices
Design and implement a scalable, multi-tenant deployment architecture that supports easy configuration of agents and business workflows for various customers
Define the communication, state management, and orchestration patterns for multi-agent systems
Mentor senior engineers and help raise the bar on systems thinking across the organization

Reliability, Scalability & Security
Own and define system-level SLOs/SLIs for high-traffic, real-time voice and text services, focusing on latency, cost-efficiency, and fault tolerance across the ecosystem
Identify systemic risks and proactively design mitigation strategies
Partner with Security and Compliance teams to ensure systems meet regulatory and security requirements
Lead post-incident analysis and drive systemic improvements

Cross-Functional Collaboration
Partner with Product, Customer Engineering, SRE, and TPMs to translate business requirements into system designs
Work with Research to evaluate and productionize cutting-edge ideas in the ML space
Drive complex initiatives that span multiple quarters and involve multiple teams
Communicate complex technical concepts clearly to both technical and non-technical stakeholders

What you'll need
10+ years of experience building large-scale production systems
Mastery of at least one cloud platform (AWS, GCP, Azure)
Expertise with container orchestration (Kubernetes)
Expertise with Python and/or Golang
Excellent communication, consensus-building, strategic thinking, and mentorship abilities
A deep understanding of distributed systems design patterns (e.g., service mesh, event-driven architecture, microservices), reliability engineering principles, and system performance and capacity planning
The ability to lead through influence rather than authority

What we'd like to see
Expertise with the LLM development lifecycle (e.g., fine-tuning, RAG, deployment, evaluation, monitoring & feedback)
Strong track record of building multi-agent systems at scale
Experience with security, compliance, and data privacy requirements (e.g., HIPAA, SOC 2)

$240,000 – $265,000 a year
The compensation includes salary plus a performance bonus. The actual salary may differ depending upon non-discriminatory factors such as qualifications, experience, and other factors permitted by law.

ASAPP is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, disability, age, or veteran status. If you have a disability and need assistance with our employment application process, please email us at careers@asapp.com to obtain assistance.

Speech Software Engineer

ASAPP
$215,000 – $235,000
United States
Full-time
Remote
false
At ASAPP, our mission is simple: deliver the best AI-powered customer experience, faster than anyone else. We are guided by principles that shape how we think, build, and execute, including deep customer obsession, purposeful speed, ownership, and a relentless focus on outcomes. We work in small, highly skilled teams, prioritize clarity over complexity, and continuously evolve through curiosity, data, and craftsmanship. We’re building a globally diverse team of technologists and problem solvers who thrive in fast-paced environments, value collaboration, and approach every challenge with a Day 1 mindset, with hubs in New York City, Mountain View, Latin America, and India. If you’re driven by continuous learning, rapid iteration, and the challenge of building in a high-growth startup, this is more than a role; it’s a journey.

We are seeking a Speech Software Engineer to spearhead the architectural evolution of our voice infrastructure. This isn't just a maintenance role; you will be a primary architect in rebuilding our core speech stack from the ground up to support the next generation of real-time customer interactions. You will have the autonomy to make high-level technical decisions and the support of a team that thrives on deep thinking and startup-paced execution. You will join the GenerativeAgent team, bridging the gap between cutting-edge ASR (Automatic Speech Recognition) research and high-performance production systems. If you are passionate about low-latency streaming, distributed systems, and the intricacies of audio processing, this is your opportunity to make a massive impact for millions of users.

What you'll do
Architect & Modernize: Lead the design and implementation of a scalable, high-availability voice infrastructure that replaces legacy systems.
Optimize Performance: Build and refine multi-threaded server frameworks capable of handling thousands of concurrent, real-time audio streams with minimal jitter and latency.
Build for Scale: Deploy robust ASR > LLM > TTS pipelines that process thousands of calls concurrently.
Stream Engineering: Develop robust logic for handling media streams, ensuring seamless audio data flow between clients and our ML models.
System Observability: Build advanced monitoring and load-testing tools specifically designed to simulate high-concurrency voice traffic.
Collaborate: Partner with Speech Scientists and Research Engineers to integrate state-of-the-art models into a production-ready environment.

What you'll need
Experience: 5+ years of software engineering experience, with a proven track record of building and maintaining production-grade infrastructure.
Industry Knowledge: A background in building ASR/TTS products at scale that interact with foundational LLMs.
Language Mastery: Expert-level proficiency in Golang or Python, or a willingness to learn.
Voice Fundamentals: Deep understanding of audio processing, including sample rates, codecs (Opus, G.711), network protocols, and buffering strategies.
System Design: Strong background in object-oriented design and the ability to architect systems that are both modular and performant.
Growth Mindset: The ability to navigate and refactor large existing codebases while transitioning to new, more efficient architectures.

What we'd like to see
Cloud Native: Hands-on experience with Kubernetes, Docker, and cloud providers (AWS/GCP/Azure) for deploying distributed speech services.
Event-Driven Architecture: Familiarity with event loops (Boost.Asio, uvloop) and asynchronous programming patterns.
Big Data: Experience with Hadoop, Spark, or Hive for analyzing massive datasets of speech logs to improve model accuracy.

$215,000 – $235,000 a year
The compensation includes salary plus a performance bonus. The actual salary may differ depending upon non-discriminatory factors such as qualifications, experience, and other factors permitted by law.

ASAPP is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, disability, age, or veteran status. If you have a disability and need assistance with our employment application process, please email us at careers@asapp.com to obtain assistance.