
The AI job market moves fast. We keep up so you don't have to.

Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.


New AI Opportunities


Software Engineer - AI Enablement

Baseten
$150,000 – $230,000
United States
Full-time
Remote
ABOUT BASETEN
Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to when shipping AI products.

THE ROLE
As Baseten's AI Enablement Engineer, you'll own the AI-powered tooling and agent infrastructure that makes every person at Baseten dramatically more productive. While our product helps external teams deploy and serve AI models, your focus is inward: building, integrating, and operating AI agents and LLM-powered workflows that accelerate how we write code, review PRs, debug incidents, generate documentation, and ship faster.

This is a high-autonomy, high-impact role. You'll be the go-to person for everything AI-internal, from evaluating and deploying coding assistants and agentic tools to building custom agents tailored to Baseten's codebase and workflows. If you're excited about making an engineering org of top-tier infrastructure engineers even more effective by putting AI to work across the entire SDLC, this role is for you.

EXAMPLE INITIATIVES
You'll get to work on these types of projects:
- Agentic coding workflows: Evaluate, customize, and deploy AI coding agents (e.g., Cursor, Claude Code, Codex) tuned to Baseten's monorepo, conventions, and internal libraries.
- Custom internal agents: Build purpose-built agents for tasks like incident triage, on-call support, codebase Q&A, and automated change management.
- AI adoption strategy: Track usage, measure productivity gains, and champion best practices for AI-assisted development across all engineering teams.

RESPONSIBILITIES
- Own and operate the end-to-end internal AI stack, from model selection and integration to deployment and monitoring.
- Build and maintain custom AI agents and LLM-powered tools tailored to Baseten's engineering workflows.
- Evaluate and roll out third-party AI developer tools, including configuration and onboarding.
- Instrument AI tool usage and measure impact on engineering velocity, code quality, and developer satisfaction.
- Stay at the cutting edge of AI tooling, agents, and developer productivity research, and bring the best ideas back to the team.
- Actively support engineering teams, ensuring they have the AI-powered resources and workflows necessary to remain productive.

BENEFITS
- Competitive compensation, including meaningful equity.
- 100% coverage of medical, dental, and vision insurance for employees and dependents.
- Generous PTO policy, including a company-wide Winter Break (our offices are closed from Christmas Eve to New Year's Day!).
- Paid parental leave.
- Company-facilitated 401(k).
- Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.

Apply now to embark on a rewarding journey in shaping the future of AI! If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative and forward-thinking team, we would love to hear from you.

At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.

Senior Software Engineer - New Products

Baseten
$185,000 – $285,000
United States
Full-time
Remote
ABOUT BASETEN
Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to when shipping AI products.

THE ROLE
You'll join a small team building new products at Baseten. This role is for an infrastructure-leaning, product-minded engineer who likes owning ambiguous problems end-to-end: from shaping an API and system design, to operating it in production with clear SLOs. You'll build core platform capabilities that power how researchers, developers, and partners ship and operate AI products at scale: API gateways, auth/keys, quotas and metering, multi-tenant isolation, observability, and the workflows around deploying and managing model-backed services.

EXAMPLE INITIATIVES
- Model APIs for frontier models
- Model training built for production inference

RESPONSIBILITIES
- Own and lead projects and product areas end-to-end, including architecture, implementation, rollout, and long-term operations.
- Design ergonomic, developer-friendly APIs and abstractions for infrastructure capabilities.
- Build and operate reliable backend services (rate limiting, auth, quotas, metering, migrations) with clear SLOs.
- Drive performance and reliability improvements through profiling, tracing, load testing, and capacity planning.
- Mentor teammates through code reviews, design docs, and technical leadership.

REQUIREMENTS
- 5+ years of experience building and operating backend systems, distributed systems, or large-scale APIs.
- Proven track record owning low-latency, reliable services (auth, rate limiting, quotas, usage metering, migrations).
- Strong infrastructure instincts: observability, incident response, SLOs, and capacity management.
- Comfort working across the stack when needed (backend-first, but willing to dive into frontend/CLI to unblock the product).
- Strong written communication, including clear design docs and effective cross-functional collaboration.
- Interest in AI/ML infrastructure and willingness to learn (ML expertise not required).

NICE TO HAVE
- Experience with API gateways, service meshes, Kubernetes, or distributed scheduling.
- Experience building developer platforms: SDKs, CLIs, APIs, and self-serve workflows.
- Experience with inference platforms, LLM runtimes, or performance-sensitive systems.
- Familiarity with multi-tenant isolation patterns (fair queuing, noisy-neighbor controls, admission control).
- Frontend experience (React/TypeScript) or strong product UX instincts for developer tools.

BENEFITS
- Competitive compensation, including meaningful equity.
- 100% coverage of medical, dental, and vision insurance for employees and dependents.
- Generous PTO policy, including a company-wide Winter Break (our offices are closed from Christmas Eve to New Year's Day!).
- Paid parental leave.
- Company-facilitated 401(k).
- Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.

Apply now to embark on a rewarding journey in shaping the future of AI! If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative and forward-thinking team, we would love to hear from you.

At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.

Principal Engineer, C++/Integration (R4539)

Shield AI
$210,000 – $320,000
United States
Full-time
Remote
Founded in 2015, Shield AI is a venture-backed deep-tech company with the mission of protecting service members and civilians with intelligent systems. Its products include the V-BAT and X-BAT aircraft, Hivemind Enterprise, and the Hivemind Vision product lines. With offices and facilities across the U.S., Europe, the Middle East, and the Asia-Pacific, Shield AI's technology actively supports operations worldwide. For more information, visit www.shield.ai. Follow Shield AI on LinkedIn, X, Instagram, and YouTube.

Job Description:
The Special Projects team at Shield AI is an elite force within the office of the CTO. It consists of a group of very senior (L5-L8) and highly experienced software engineering experts from diverse fields (aerospace, robotics, cloud infrastructure, game development, interactive media design, ...). The charter of this group is to steer technology development towards strategic alignment with the CTO's vision, through tactical insertion into teams and technologies across the organization. Individuals within this team make direct and at times forward-sprinting contributions to all the pillars of Hivemind, Shield AI's software ecosystem for developing and deploying resilient intelligent teaming for aircraft.

Hivemind consists of four products:
- EdgeOS: C++-based high-performance middleware for autonomy development.
- Pilot: autonomy for the edge built atop EdgeOS; a models-based, modular, and open-architecture C++ codebase.
- Forge: Shield AI's AI Factory for the design, development, and testing of Hivemind Edge systems; a service-oriented architecture leveraged through an SDK, CLI, and web portal; a Go, Python, and TypeScript codebase.
- Commander: software and hardware to support rich human-in-the-loop and human-on-the-loop interactions with Hivemind; a C++-based back-end for interaction with Pilot, and a web-application UI for mission planning, command, and control by operators, implemented in a TypeScript/React codebase.

The Special Projects team is chartered to operate effectively in ambiguity. This team owns a number of software and software/hardware products, and it tactically and strategically impacts the development of all foundational Hivemind products. This work happens shoulder-to-shoulder with the product teams in some cases, and in a forward-sprinting manner within the Special Projects team in other cases. The result is direct contribution to products in the former, and development of reference implementations in the latter. The Special Projects team also functions as a pipeline for product and solution engineering teams across the organization. Individuals who enter the Special Projects team rapidly gain depth and breadth in their understanding of the Hivemind software ecosystem. This positions them well for leading technology development efforts across the product and solution organization. This role is expected to contribute to commercial applications of Hivemind Enterprise.

What you'll do:
- Create reference implementations: Create reference implementations for potential future products or product components by integrating new hardware platforms, sensor suites, simulators, and concepts of operation with the Hivemind SDK (C++) for commercial applications, with a focus on autonomy ("Pilot") and simulation (part of "Forge").
- Iterate rapidly with customer feedback: Demonstrate developed architectures as solutions to the customer, gather feedback, and iterate.
- Explore future technologies: Explore and evaluate future hardware and software technologies that are relevant to Shield AI's product roadmap and potentially high-ROI, but beyond the scope of current Direct and IRAD projects in engineering. Identify areas of technical debt across the stack, and analyze and synthesize solutions and paths towards achieving them.

Required qualifications:
- 12+ years of related experience developing large, production-quality software systems.
- 10+ years of experience with modern C++ (C++17 and beyond).
- Strong knowledge of modern software engineering best practices; experience with Git and code management tools; excellent software hygiene regarding code documentation, unit testing, and bug tracking.
- Excellent grasp of software development and coding principles, with high productivity in a mainstream language (e.g., TypeScript, C++, Go, Python).
- All-in on generative AI tools for software engineering.
- Deep self-sufficiency in adopting new technologies, configuring and managing local and cloud resources, and maintaining a fast development pace within a complex technology stack.
- Expertise and deep experience with architectural design and implementation of large and complex distributed systems.
- Experience with Linux, Docker, and CI/CD environments.
- Strong technical collaboration skills and a desire to develop new skills.
- Excited by a fast-moving environment with a highly motivated group.
- Demonstrated record of working hard, being a trustworthy teammate, holding yourself and others to high standards, and being kind to others.
- Fluid intelligence that allows one to operate effectively in sometimes ambiguous conditions, while finding opportunities to drive technical efforts and force multiply.

Preferred qualifications:
- Experience in the aerospace and/or robotics industries.
- Hands-on experience with a major cloud platform (Azure, GCP, AWS).
- Experience with team leadership, or as a technical project lead.
- Passionate about developing high-quality and optimized software solutions.
- Experience with containerization technologies like Docker and Kubernetes.

$210,000 – $320,000 a year

Full-time regular employee offer package: pay within range listed + bonus + benefits + equity.
Temporary employee offer package: pay within range listed above + temporary benefits package (applicable after 60 days of employment).

Salary compensation is influenced by a wide array of factors, including but not limited to skill set, level of experience, licenses and certifications, and specific work location. All offers are contingent on a cleared background check and possible reference check. Military fellows and part-time employees are not eligible for benefits. Please speak to your talent acquisition representative for more information.

Shield AI is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender identity, or veteran status. If you have a disability or special need that requires accommodation, please let us know.

Music Producer - AI Trainer

Handshake
$125 – $125 / hour
United States
Contractor
Remote
Opportunity Overview
Handshake is looking for skilled LMMS users to support AI research through flexible, hourly contract work. This is not a traditional job. You'll draw on your hands-on experience with beat-making, music composition, or electronic music production to evaluate AI-generated content and provide feedback that helps AI better understand music tasks and creative production workflows. This is an ongoing, project-based opportunity you can take on alongside anything else you have going on.

Who This Is For
This is a good fit if you're an experienced LMMS user who has worked in or around roles like:
- Music Producer or Beat Maker
- Composer or Arranger
- Sound Designer

You should have solid experience with one or more of the following:
- Beat-making, music composition, or electronic production using LMMS
- Annotating or labeling audio and music assets
- Creating music or beats following project briefs or style references
- Reviewing music for quality, accuracy, or production consistency

What You'll Do
You'll use your experience with LMMS to create tool-related questions and review AI-generated responses for accuracy and relevance to real-world music production and beat-making workflows. No prior AI or technical experience is required.

Qualifications
We're looking for people who have:
- Minimum 3 years of hands-on experience with LMMS, whether through professional work or freelance projects
- A working knowledge of music production concepts and electronic composition workflows
- Strong written communication skills and attention to detail
- The ability to work independently and follow written guidelines

Work Model and Project Details
- Status: Independent contractor (not a full-time employee role)
- Location: Fully remote; work from anywhere with a reliable internet connection and access to a desktop or laptop computer
- Schedule: Flexible and asynchronous, with no minimum hour requirement. Many contributors work approximately 5–20 hours per week when assigned to an active project
- Duration: The Handshake AI program runs year-round, with projects opening periodically across different areas of expertise. Placement depends on current project needs, with opportunities to be considered for future projects as they become available

Application Process
1. Create a Handshake account
2. Upload your resume and verify your identity
3. Get matched and onboarded into relevant projects
4. Start working and earning

Work Authorization
F-1 students who are eligible for CPT or OPT may be eligible for projects on Handshake AI. Work with your Designated School Official to determine your eligibility. If your school requires a CPT course, Handshake AI may not meet your school's requirements. STEM OPT is not supported. See our Help Center article for more information on what types of work authorizations are supported on Handshake AI.

Software Engineer - AI Trainer

Handshake
$65 – $150 / hour
United States
Contractor
Remote
Opportunity Overview
Handshake is seeking experienced Software Engineers to support AI research through flexible, hourly contract work. This is not a traditional full-time SWE role. You'll use your real-world software development experience to evaluate AI-generated code and technical content, provide structured feedback, and help improve how AI understands programming tasks, system design, and engineering best practices. This is an ongoing, project-based opportunity that can be done alongside your primary employment.

Who This Is For
This opportunity is designed for professionals currently working (or recently working) in roles such as:
- Software Engineer or Senior Software Engineer
- Backend, Frontend, or Full-Stack Engineer
- Systems Engineer or Application Developer

This is not a traditional full-time role. You'll apply once and, if qualified, be considered for part-time, project-based work as new projects become available.

What You'll Do
This project involves using your software engineering experience to design job-related coding questions and review AI-generated responses for correctness, efficiency, clarity, and alignment with real-world engineering practices. Applicants will be required to pass a coding assessment as part of the selection process.

Qualifications
We're looking for professionals with:
- 4+ years of professional software engineering experience (internships excluded)
- Strong hands-on coding experience in at least one major programming language (e.g., Python, Java, C++, JavaScript, Go)
- Experience writing, reviewing, and debugging production-level code
- Comfort working independently and following detailed technical guidelines
- Strong written communication skills and attention to detail

Application Process
1. Create a Handshake account
2. Upload your resume and verify your identity
3. Get matched and onboarded into relevant projects
4. Start working and earning

Work Model and Project Details
- Status: Independent contractor (not a full-time employee role)
- Location: Fully remote
- Schedule: Flexible and asynchronous, with no minimum hour requirement. Many contributors work approximately 5–20 hours per week when assigned to an active project
- Duration: The Handshake AI program runs year-round, with projects opening periodically across different areas of expertise. Placement depends on current project needs, with opportunities to be considered for future projects as they become available

Work Authorization Information
F-1 students who are eligible for CPT or OPT may be eligible for projects on Handshake AI. Work with your Designated School Official to determine your eligibility. If your school requires a CPT course, Handshake AI may not meet your school's requirements. STEM OPT is not supported. See our Help Center article for more information on what types of work authorizations are supported on Handshake AI.

Staff Engineer, API Core Platform

Together AI
$200,000 – $280,000
Full-time
Remote
About the Role
The Turbo team sits at the intersection of efficient inference (algorithms, architectures, engines) and post-training / RL systems. We build and operate the systems behind Together's API, including high-performance inference and RL/post-training engines that can run at production scale. Our mandate is to push the frontier of efficient inference and RL-driven training: making models dramatically faster and cheaper to run, while improving their capabilities through RL-based post-training (e.g., GRPO-style objectives).

This work lives at the interface of algorithms and systems: asynchronous RL, rollout collection, scheduling, and batching all interact with engine design, creating many knobs to tune across the RL algorithm, training loop, and inference stack. Much of the job is modifying production inference systems (for example, SGLang- or vLLM-style serving stacks and speculative decoding systems such as ATLAS), grounded in a strong understanding of post-training and inference theory, rather than purely theoretical algorithm design. You'll work across the stack, from RL algorithms and training engines to kernels and serving systems, to build and improve frontier models via RL pipelines. People on this team are often spiky: some are more RL-first, some are more systems-first. Depth in one of these areas plus an appetite to collaborate across (and grow toward more full-stack ownership over time) is ideal.

Requirements
We don't expect anyone to check every box below. People on this team typically have deep expertise in one or more areas and enough breadth (or interest) to work effectively across the stack. The closer you are to full-stack (inference + post-training/RL + systems), the stronger the fit, but being spiky in one area and eager to grow is absolutely okay.

You might be a good fit if you:
- Have strong expertise in at least one of the following, and are excited to collaborate across (and grow into) the others:
  - Systems-first profile: large-scale inference systems (e.g., SGLang, vLLM, FasterTransformer, TensorRT, custom engines, or similar), GPU performance, distributed serving.
  - RL-first profile: RL / post-training for LLMs or large models (e.g., GRPO, RLHF/RLAIF, DPO-like methods, reward modeling), and using these to train or fine-tune real models.
  - Model architecture design for Transformers or other large neural nets.
  - Distributed systems / high-performance computing for ML.
- Are comfortable working from algorithms to engines:
  - Strong coding ability in Python.
  - Experience profiling and optimizing performance across GPU, networking, and memory layers.
  - Able to take a new sampling method, scheduler, or RL update and turn it into a production-grade implementation in the engine and/or training stack.
- Have a solid research foundation in your area(s) of depth:
  - Track record of impactful work in ML systems, RL, or large-scale model training (papers, open-source projects, or production systems).
  - Can read new RL / post-training papers, understand their implications on the stack, and design minimal, correct changes in the right layer (training engine vs. inference engine vs. data / API).
- Operate well as a full-stack problem solver:
  - You naturally ask: "Where in the stack is this really bottlenecked?"
  - You enjoy collaborating with infra, research, and product teams, and you care about both scientific quality and user-visible wins.

Minimum qualifications
- 3+ years of experience working on ML systems, large-scale model training, inference, or adjacent areas (or equivalent experience via research / open source).
- Advanced degree in Computer Science, EE, or a related field, or equivalent practical experience.
- Demonstrated experience owning complex technical projects end-to-end.

If you're excited about the role and strong in some of these areas, we encourage you to apply even if you don't meet every single requirement.

Responsibilities
Advance inference efficiency end-to-end:
- Design and prototype algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference.
- Implement and maintain changes in high-performance inference engines (e.g., SGLang- or vLLM-style systems and Together's inference stack), including kernel backends, speculative decoding (e.g., ATLAS), quantization, etc.
- Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost.

Unify inference with RL / post-training:
- Design and operate RL and post-training pipelines (e.g., RLHF, RLAIF, GRPO, DPO-style methods, reward modeling) where 90+% of the cost is inference, jointly optimizing algorithms and systems.
- Make RL and post-training workloads more efficient with inference-aware training loops, for example async RL rollouts, speculative decoding, and other techniques that make large-scale rollout collection and evaluation cheaper.
- Use these pipelines to train, evaluate, and iterate on frontier models on top of our inference stack.
- Co-design algorithms and infrastructure so that objectives, rollout collection, and evaluation are tightly coupled to efficient inference, and quickly identify bottlenecks across the training engine, inference engine, data pipeline, and user-facing layers.
- Run ablations and scale-up experiments to understand trade-offs between model quality, latency, throughput, and cost, and feed these insights back into model, RL, and system design.

Own critical systems at production scale:
- Profile, debug, and optimize inference and post-training services under real production workloads.
- Drive roadmap items that require real engine modification, changing kernels, memory layouts, scheduling logic, and APIs as needed.
- Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously.

Provide technical leadership (Staff level):
- Set technical direction for cross-team efforts at the intersection of inference, RL, and post-training.
- Mentor other engineers and researchers on full-stack ML systems work and performance engineering.

About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.

Compensation
We offer competitive compensation, startup equity, health insurance, and other competitive benefits. The US base salary range for this full-time position is $200,000 – $280,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more. Please see our privacy policy at https://www.together.ai/privacy

Machine Learning Engineer

Faculty
United Kingdom
Full-time
Remote
Why Faculty?
We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we've worked with over 350 global customers to transform their performance through human-centric AI. You can read about our real-world impact here.

We don't chase hype cycles. We innovate, build, and deploy responsible AI which moves the needle, and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product, and delivery expertise to our clients, who span government, finance, retail, energy, life sciences, and defence. Our business, and reputation, is growing fast, and we're always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology. AI is an epoch-defining technology; join a company where you'll be empowered to envision its most powerful applications, and to make them happen.

About the Team
Bringing medicine to patients is complex, expensive, and high-risk. Faculty's Life Sciences team is concentrated on building AI solutions which optimise the research and commercialisation of life-changing therapies. We partner with major pharma firms, academic research centres, and MedTech start-ups to design and deliver solutions which address critical healthcare challenges and help to democratise health for all.

About the Role
Join us as a Machine Learning Engineer to deliver bespoke, impactful AI solutions for our diverse clients. You will be instrumental in bringing machine learning out of the lab and into the real world, contributing to scalable software architecture and defining best practices. Working with clients and cross-functional teams, you'll ensure technical feasibility and timely delivery of high-quality, production-grade ML systems.

What you'll be doing:
- Building and deploying production-grade ML software, tools, and infrastructure.
- Creating reusable, scalable solutions that accelerate the delivery of ML systems.
- Collaborating with engineers, data scientists, and commercial leads to solve critical client challenges.
- Leading technical scoping and architectural decisions to ensure project feasibility and impact.
- Defining and implementing Faculty's standards for deploying machine learning at scale.
- Acting as a technical advisor to customers and partners, translating complex ML concepts for stakeholders.

Who we're looking for:
- You understand the full machine learning lifecycle and have experience operationalising models built with frameworks like Scikit-learn, TensorFlow, or PyTorch.
- You possess strong Python skills and solid experience in software engineering best practices.
- You bring hands-on experience with cloud platforms and infrastructure (e.g., AWS, Azure, GCP), including architecture and security.
- You've worked with container and orchestration tools such as Docker and Kubernetes to build and manage applications at scale.
- You are comfortable with core ML concepts, including probability, statistics, and common learning techniques.
- You're an excellent communicator, able to guide technical teams and confidently advise non-technical stakeholders.
- You thrive in a fast-paced environment, and enjoy the autonomy to own, scope, solve, and deliver solutions.

The Interview Process
1. Talent Team Screen (30 minutes)
2. Pair Programming Interview (90 minutes)
3. System Design Interview (90 minutes)
4. Commercial Interview (60 minutes)

Our Recruitment Ethos
We aim to grow the best team - not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live. We're united by a deep intellectual curiosity and desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions, and sexual orientations.

Some of our standout benefits:
- Unlimited Annual Leave Policy
- Private healthcare and dental
- Enhanced parental leave
- Family-Friendly Flexibility & Flexible Working
- Sanctus Coaching
- Hybrid Working (2 days in our Old Street office, London)

If you don't feel you meet all the requirements, but are excited by the role and know you bring some key strengths, please don't hesitate to apply, as you might be right for this role, or other roles. We are open to conversations about part-time hours.

Lead Machine Learning Engineer

Faculty
United Kingdom
Full-time
Remote: false
About the team
In our Professional and Financial Services business unit, we bring everything we have learned in more than a decade of Applied AI, and use it to help our clients navigate a rapidly changing landscape. We develop and embed AI solutions which help firms become more efficient, enhance customer experience, and find the commercial upside in uncertain markets. Within the constraints of a highly regulated industry, we see so much opportunity for meaningful innovation and are proud to set the gold standard for marrying technical excellence with safe deployment.

About the role
Join us as a Lead Machine Learning Engineer to spearhead the technical direction and delivery of complex, innovative AI projects. You will act as a technical expert, applying your skills across various projects from AI strategy to client-side deployments, while ensuring architectural decisions are sound and reliable. This role demands a balance of deep technical expertise and strong leadership, focusing on driving innovation, fostering team growth, and building reusable solutions across the organisation. If you're ready to manage high-risk projects and deliver practical, innovative outcomes, this is your chance to shape our future.

What you'll be doing
Setting the technical direction for complex ML projects, balancing trade-offs, and guiding team priorities.
Designing, implementing, and maintaining reliable, scalable ML/software systems and justifying key architectural decisions.
Defining project problems, developing roadmaps, and overseeing delivery across multiple work-streams in often ill-defined, high-risk environments.
Driving the development of shared resources and libraries across the organisation and guiding other engineers in contributing to them.
Leading hiring processes, making informed selection decisions, and mentoring multiple individuals to foster team growth.
Proactively developing and executing recommendations for adopting new technologies and changing our ways of working to stay ahead of the competition.
Acting as a technical expert and coach for customers, accurately estimating large work-streams and defending rationale to stakeholders.

Who we're looking for
You are a technical expert among your peers, capable of going deep on particular topics and demonstrating breadth of knowledge to solve almost any problem.
You possess strong Python skills and practical experience operationalising models using frameworks like Scikit-learn, TensorFlow, or PyTorch.
You are an expert in at least one major cloud solution provider (e.g., Azure, GCP, AWS) and have led teams to build full-stack web applications.
You have hands-on experience with containerisation tools like Docker and orchestration via Kubernetes.
You can successfully manage and coach a team of engineers, setting team-wide development goals to improve client delivery.
You find novel, clever solutions for project delivery and take ownership of successful project outcomes.
You're an excellent communicator who can proactively help customers achieve their goals and guide both technical teams and non-technical stakeholders.

Our Interview Process
Talent Team Screen (30 minutes)
Introduction to the role (45 minutes)
Pair Programming Interview (90 minutes)
System Design Interview (90 minutes)
Commercial & Leadership Interview (60 minutes)

Our Recruitment Ethos
We aim to grow the best team - not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live. We're united by a deep intellectual curiosity and desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.

Some of our standout benefits:
Unlimited Annual Leave Policy
Private healthcare and dental
Enhanced parental leave
Family-Friendly Flexibility & Flexible working
Sanctus Coaching
Hybrid Working (2 days in our Old Street office, London)

If you don't feel you meet all the requirements, but are excited by the role and know you bring some key strengths, please don't hesitate to apply as you might be right for this role, or other roles. We are open to conversations about part-time hours.

Staff Software Engineer, ML Infrastructure

Decagon
$300,000 – $430,000
United States
Full-time
Remote: false
About Decagon
Decagon is the leading conversational AI platform empowering every brand to deliver concierge customer experiences. Our technology enables industry-defining enterprises like Avis Budget Group, Block's Cash App and Square, Chime, Oura Health, and Hunter Douglas to deploy AI agents that power personalized, deeply satisfying interactions across voice, chat, email, SMS, and every other channel.

We're building a future where customer experiences are being redefined from support tickets and hold music to faster resolutions, richer conversations, and deeper relationships. We're proud to be backed by world-class investors who share that vision, including a16z, Accel, Bain Capital Ventures, Coatue, and Index Ventures, along with many others.

We're an in-office company, driven by a shared commitment to excellence and velocity. Our values — Just Get It Done, Invent What Customers Want, Winner's Mindset, and The Polymath Principle — shape how we work and grow as a team.

About the Team
The ML Infrastructure team builds the systems that power every stage of Decagon's model lifecycle. We own the platforms for model training, the infrastructure for model evaluation and experimentation, and the routing layer that manages inference across multiple providers. We work at the intersection of research and production: translating cutting-edge ML techniques into reliable, scalable systems that run in customer environments. We collaborate closely with Research, Infrastructure, and Product teams to ensure models train efficiently, serve reliably, and deliver exceptional user experiences. The team values technical rigor, pragmatic decision-making, and building systems that others love to use.

About the Role
We're hiring a Staff ML Infrastructure Engineer to own the platforms powering Decagon's model training and inference. You'll build distributed training systems, design inference architecture across multiple providers, and create the frameworks that let our Research and Product teams ship faster. This role is for someone who thrives on technical depth, can lead multi-quarter initiatives, and wants to shape the long-term architecture of our ML stack.

In this role, you will
Design and build distributed training platforms for LLM and multimodal fine-tuning and post-training at scale
Implement and integrate state-of-the-art training algorithms into production pipelines
Own inference architecture and multi-provider routing, including failover and optimization
Research and implement inference optimizations including quantization, speculative decoding, and batching strategies
Lead initiatives to improve latency and cost efficiency across the training and serving stack
Build evaluation and experimentation infrastructure that enables rapid, reliable iteration
Drive technical direction, mentor engineers, and establish best practices for ML infrastructure

Your background looks something like this
8+ years building ML infrastructure or production systems at scale
Deep experience with distributed training: multi-node GPU clusters, fault tolerance, and optimization
Strong understanding of LLM inference: latency optimization, provider tradeoffs, and serving architecture
Proficiency in Python and modern ML frameworks (PyTorch, JAX, or TensorFlow)
Proven track record leading complex, multi-quarter technical projects

Benefits
Medical, dental, and vision benefits
Take-what-you-need vacation policy
Daily lunches, dinners, and snacks in the office to keep you at your best

Compensation
$300K – $430K, plus equity

Senior Product Manager – Platform

Snorkel AI
$172,000 – $300,000
United States
Full-time
Remote: false
About Snorkel
At Snorkel, we believe meaningful AI doesn't start with the model, it starts with the data. We're on a mission to help enterprises transform expert knowledge into specialized AI at scale. The AI landscape has gone through incredible changes between 2015, when Snorkel started as a research project in the Stanford AI Lab, and the generative AI breakthroughs of today. But one thing has remained constant: the data you use to build AI is the key to achieving differentiation, high performance, and production-ready systems. We work with some of the world's largest organizations to empower scientists, engineers, financial experts, product creators, journalists, and more to build custom AI with their data faster than ever before. Excited to help us redefine how AI is built? Apply to be the newest Snorkeler!

As an Applied AI Engineer, you'll research and utilize state-of-the-art Gen AI and machine learning (ML) techniques to successfully deliver solutions to our customers. You will work directly with our customers to understand their business and technical needs, and design and deliver AI solutions to solve them - either by leveraging Snorkel Flow or developing custom approaches when needed. You will also help define Snorkel's Applied AI tooling by translating repeatable real-world challenges into reusable solution recipes, workflows, best practices, and platform-level capabilities that become part of Snorkel Flow's next generation of AI tooling. We move fast and are constantly prototyping and innovating new ways to deliver value to our customers.

This position is ideal for someone who enjoys solving complex problems, bridging the gap between AI technology and business value, working directly with customers, keeping up to date with AI research, standardizing bespoke solutions into internal recipes, and staying naturally curious about the infrastructure that underpins the Applied AI stack end-to-end.

Main Responsibilities
Partner with customers to build and deploy impactful Gen AI and machine learning solutions, from use case scoping and data exploration to model development and deployment. This may involve leveraging Snorkel Flow or designing custom approaches using state-of-the-art tools, with the goal of delivering real business value and informing the evolution of the Snorkel platform.
Develop and implement state-of-the-art AI systems such as retrieval-augmented generation (RAG), fine-tuning pipelines, prompt engineering recipes, and agentic workflows.
Create augmented real-world datasets and comprehensive evaluation workflows to ensure model reliability, transparency, and stakeholder trust. A data- and evaluation-first mindset is essential for success in this role.
Forge and manage relationships with our customers' leadership and stakeholders to ensure successful development and deployment of AI projects with Snorkel Flow.
Collaborate closely with pre-sales Solutions and Product teams to map customer needs to existing capabilities, prioritize roadmap gaps, and guide successful project setup.
Work with other Applied AI Engineers to standardize solutions and contribute to internal tooling and best practices.
Lead stakeholder education on quantitative capabilities, helping them to understand the strengths and weaknesses of different approaches and what problems are best suited for Snorkel AI.
Serve as the voice of our customers for new AI paradigms and data science workflows, and share customer feedback with product teams.
Conduct one-to-few and one-to-many enablement workshops to transfer knowledge to customers considering or already using Snorkel AI.
Annual travel up to 25%.

Preferred Qualifications
B.S. degree in a quantitative field such as Computer Science, Engineering, Mathematics, Statistics, or comparable degree/experience.
3+ years of customer-facing experience in the design and implementation of AI/ML solutions.
Proficiency in Python, including strong grounding in software engineering fundamentals (e.g., modular design, testing, profiling, packaging) and experience with modern Python constructs and libraries for type validation and typed data modeling (e.g., pydantic), building type-safe systems (e.g., mypy), testing (e.g., pytest), packaging and environment configuration (e.g., poetry), API and service frameworks (e.g., FastAPI), serialization and structured data handling (e.g., msgspec), and orchestration tooling relevant to ML deployment (e.g., Ray, Airflow).
Expertise across the Applied AI stack, spanning classical ML libraries (e.g., scikit-learn), deep learning frameworks (e.g., PyTorch), foundation-model ecosystems (e.g., Hugging Face Transformers), vector/embedding tooling (e.g., FAISS), data processing frameworks (e.g., pandas, Spark), retrieval/RAG tooling (e.g., Chroma, Weaviate), synthetic dataset curation, evaluation workflows, and LLM orchestration, workflow, and agent authoring tools (e.g., LlamaIndex, LangGraph, CrewAI).
Experience leading strategic, customer-facing initiatives and collaborating with business stakeholders to ensure ML solutions drive successful business outcomes, with a strong focus on teaching and enablement.
Outstanding presentation skills to technical and executive audiences, whether impromptu on a whiteboard or using presentations and demos.
Ability to work in a fast-paced environment and balance priorities across multiple projects at once.

Compensation range for Tier 1 locations (San Francisco Bay Area): $172K - $300K OTE. All offers also include equity in the form of employee stock options. Our compensation ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training.

Locations
Redwood City, CA - Hybrid; San Francisco, CA - Hybrid - US
#LI-CG1

Salary Range
$172,000—$300,000 USD

Be Your Best at Snorkel
Joining Snorkel AI means becoming part of a company that has market-proven solutions, robust funding, and is scaling rapidly—offering a unique combination of stability and the excitement of high growth. As a member of our team, you'll have meaningful opportunities to shape priorities and initiatives, influence key strategic decisions, and directly impact our ongoing success. Whether you're looking to deepen your technical expertise, explore leadership opportunities, or learn new skills across multiple functions, you're fully supported in building your career in an environment designed for growth, learning, and shared success.

Snorkel AI is proud to be an Equal Employment Opportunity employer and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. Snorkel AI embraces diversity and provides equal employment opportunities to all employees and applicants for employment. Snorkel AI prohibits discrimination and harassment of any type on the basis of race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local law. All employment is decided on the basis of qualifications, performance, merit, and business need. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Researcher, Automated Red Teaming

OpenAI
$295,000 – $445,000
United States
Full-time
Remote: false
About the team
The Safety Systems org ensures that OpenAI's most capable models can be responsibly developed and deployed. We build evaluations, safeguards, and safety frameworks that help our models behave as intended in real-world settings. The Preparedness team is an important part of the Safety Systems org at OpenAI, and is guided by OpenAI's Preparedness Framework.

Frontier AI models have the potential to benefit all of humanity, but also pose increasingly severe risks. To ensure that AI promotes positive change, the Preparedness team helps us prepare for the development of increasingly capable frontier AI models. This team is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models.

The mission of the Preparedness team is to:
Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards risks whose impact could be catastrophic
Ensure we have concrete procedures, infrastructure and partnerships to mitigate these risks and to safely handle the development of powerful AI systems

Preparedness tightly connects capability assessment, evaluations, internal red teaming, and mitigations for frontier models, as well as overall coordination on AGI preparedness. This is fast-paced, exciting work that has far-reaching importance for the company and for society.

About the role
This role leads the Automated Red Teaming (ART) effort: building scalable, research-driven systems that continuously discover failure modes in our models and mitigations — and translate those findings into actionable, production-facing improvements. The goal is to maximize counterfactual reduction in expected harm by finding the highest-leverage, least-covered weaknesses early and reliably.

In this role you will
Own the research and technical direction for automated red teaming across catastrophic risk areas, with an initial emphasis on:
Automated classifier jailbreak discovery (cyber and bio)
Automated bio threat-development elicitation (worst-feasible planning uplift)
CoT monitoring evasion probing (and adjacent loss-of-control evaluations)

Partner tightly with:
Vertical risk teams (Cyber, Bio, Loss of Control) to define threat models, prioritize targets, and land mitigations
The Classifiers team to turn discovered attacks into training data, evals, and measurable robustness gains
Product, engineering, and safety stakeholders to ensure ART outputs are operationally useful (not just interesting)

You might thrive in this role if you:
Feel a strong pull toward AI safety, and you're motivated by reducing real-world catastrophic risk (not just publishing cool results).
Love breaking systems (responsibly) — you get energy from finding weird, high-severity failure modes and turning them into concrete fixes.
Have strong applied research instincts, especially around evaluations: you're good at designing experiments that are reproducible, interpretable, and hard to fool.
Bring hands-on experience with LLMs and agents, including multi-turn behaviors, tool use, and the ways models adapt to constraints.
Are comfortable building scalable automation, not just prototypes — you can turn red-teaming ideas into pipelines that run continuously and produce high-signal outputs.
Have solid software engineering fundamentals (data structures, algorithms, testing discipline) and can work effectively in a production-adjacent environment.
Think in threat models and incentives, and naturally ask "what would an attacker do next?" or "how would this fail under pressure?"
Can translate messy findings into action, communicating clearly with researchers, engineers, product, and policy — and driving alignment on what to fix first.
Care about efficiency and prioritization, and you're happy to say "no" to low-leverage work to focus on what moves the risk needle.
(Nice to have) Experience in adversarial ML, security research / red teaming, abuse prevention systems, or large-scale eval infrastructure.

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates.

For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Senior Data Engineer

HackerOne
₹3,672,000 – ₹4,131,000
India
Full-time
Remote: false
HackerOne is a global leader in Continuous Threat Exposure Management (CTEM). The HackerOne Platform unites agentic AI solutions with the ingenuity of the world's largest community of security researchers to continuously discover, validate, prioritize, and remediate exposures across code, cloud, and AI systems. Through solutions like bug bounty, vulnerability disclosure, agentic pentesting, AI red teaming, and code security, HackerOne delivers measurable, continuous reduction of cyber risk for enterprises. Industry leaders, including Anthropic, Crypto.com, General Motors, Goldman Sachs, Lufthansa, Uber, UK Ministry of Defence, and the U.S. Department of Defense, trust HackerOne to safeguard their digital ecosystems. HackerOne was recognized in Gartner's Emerging Tech Impact Radar: AI Cybersecurity Ecosystem report for its leadership in AI Security Testing and has been named a Most Loved Workplace for Young Professionals (2024).

HackerOne is at a pivotal inflection point in the security industry. Offensive security is no longer optional – it is the standard for forward-thinking companies that want to build trust and resilience in a world where AI-driven innovation and adversaries are moving faster than ever. With the industry shifting, HackerOne stands apart: we combine the ingenuity of the largest security research community with a best-in-class AI-powered platform, trusted by the world's top organizations.

HackerOne Values
HackerOne is dedicated to fostering a strong and inclusive culture. HackerOne is Customer Obsessed and prioritizes customer outcomes in our decisions and actions. We Default to Disclosure by operating with transparency and integrity, ensuring trust and accountability. Employees, researchers, customers, and partners Win Together by fostering empowerment, inclusion, respect, and accountability.

Senior Data Engineer
Location: Pune
Work model: In office

Position Summary
HackerOne is seeking a Senior Data Engineer to lead the discovery, architecture, and development of high-performance, scalable data products and solutions. Joining our growing, distributed organization, you'll be instrumental in building the foundation that powers HackerOne's enterprise transformation from human-powered operations to data-driven, AI-powered, and human-led agentic operations. You'll achieve success by leading with AI-first thinking, demonstrating agility through change, applying first-principles problem solving, and using data to learn and adapt along the way. Leveraging your technological expertise, domain knowledge, and dedication to business objectives, you'll drive innovation to propel HackerOne forward.

Enterprise Data & AI Mission
Enterprise Data & AI provides the data and systems to enable our enterprise transformation from human-powered operations to data-driven, AI-powered, and human-led agentic operations. This data and systems infrastructure includes autonomous AI agents handling routine or repeatable work and centralized AI and data infrastructure for cross-org leverage.

What You Will Do
Lead the end-to-end design and delivery of scalable, secure, and intelligent data products and solutions that support HackerOne's transformation into an AI-first organization.
Partner across business and engineering teams to identify high-leverage opportunities for automation, integration, and system modernization.
Drive the architecture and execution of platform-level capabilities, leveraging AI and modern tooling to reduce manual effort, improve decision-making, and increase system resilience.
Provide technical leadership to internal engineers and external development partners, ensuring design quality, operational excellence, and long-term maintainability.
Shape and contribute to our incident and on-call response strategy, playbooks, and processes, focusing on building systems that fail gracefully and recover quickly.
Act as a multiplier to mentor other engineers, advocate for technical excellence, and promote a culture of innovation, curiosity, and continuous improvement.
Champion effective change management and enablement, ensuring systems are not only launched, but adopted, understood, and evolved.

Minimum Qualifications
6+ years of experience in a Data, Engineering, Science, or similar role, with a proven track record of leading the design, development, and deployment of AI-first data products and solutions (preferably using Python).
Extensive hands-on experience building and optimizing data pipelines, products, and solutions.
Strong SQL skills for data manipulation, plus solid programming skills.
Knowledge of algorithms and data structures.
Extensive experience working with various data technologies and tools such as Airflow, Snowflake, Meltano, Fivetran, DBT, Looker, and AWS.
Experience with infrastructure-as-code tools such as Terraform or Pulumi.
Proven track record of successfully championing new initiatives focused on architectural enhancements.
Proven track record of substantial impact across the company, demonstrating your ability to drive positive change and achieve significant results.
Passion for working backwards from the customer and empathy for business stakeholders.
Excellent communication skills; able to present data-driven narratives in verbal, presentation, and written formats.

Preferred Qualifications
Proven track record of driving innovation, adopting emerging technologies, and implementing industry best practices.
Experience building and managing a cloud-deployed data lake.
Experience working with Kubernetes.
Experience working with Agile and iterative development processes.
Understanding of network architecture.

Job Benefits
Health (medical, vision, dental), life, and disability insurance*
Equity stock options
Retirement plans
Paid public holidays and unlimited PTO
Paid maternity and parental leave
Leaves of absence (including caregiver leave and leave under CO's Healthy Families and Workplaces Act)
Employee Assistance Program*
*Eligibility may differ by country

We're committed to building a global team! For certain roles outside the United States, India, the U.K., and the Netherlands, we partner with Remote.com as our Employer of Record (EOR). Visa/work permit sponsorship is not available. Employment at HackerOne is contingent on a background check.

HackerOne is an Equal Opportunity Employer in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, pregnancy, disability or veteran status, or any other protected characteristic as outlined by international, federal, state, or local laws. This policy applies to all HackerOne employment practices, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, compensation, benefits, training, and apprenticeship. HackerOne makes hiring decisions based solely on qualifications, merit, and business needs at the time.

For US-based roles only: Pursuant to the San Francisco Fair Chance Ordinance, all qualified applicants with arrest and conviction records will be considered for the position.

ML Research Scientist (Health & Sensing)

Eight Sleep
United States
Full-time
Remote: false
Join the Sleep Fitness Movement
At Eight Sleep, we're on a mission to fuel human potential through optimal sleep. As the world's first sleep fitness company, we're redefining what it means to be well-rested and building the most advanced hardware, software, and AI technology to make it possible. Our products power peak mental, physical, and emotional performance by transforming every night of sleep into a personalized, data-driven recovery experience. We are trusted by high performers, professional athletes, and health-conscious consumers in over 30 countries worldwide. We were recognized as one of Fast Company's Most Innovative Companies in 2019, 2022, and 2023, and twice named to TIME's "Best Inventions of the Year."

We operate like a high-performance team: fast, focused, and motivated by impact. We don't just ship; we iterate, refine, and obsess over the details that help our members sleep better and wake up stronger. Every role at Eight Sleep is a chance to create cutting-edge technology, collaborate with world-class talent, and help shape a future where sleep isn't passive - it's a powerful tool for living better. If you're tired of the ordinary and driven to build at the edge of what's possible, this is your moment. Join us and lead the movement that's transforming how the world sleeps and what we're all capable of when we wake up.

High Standards. No Apologies.
We operate with intensity because our mission demands it. At Eight Sleep, we bring the same mindset as the world's top performers: focused, relentless, and always pushing to be in the top 1% of our craft. Think Kobe Bryant's mamba mentality, applied to bold ideas, next-gen tech, and flawless execution. This isn't a 9-to-5. Our team is deeply committed, often putting in 60+ hours a week - not because we're told to, but because we're invested. We're here to build fast, push limits, and deliver without compromise. If you thrive under pressure and want to do the most meaningful work of your career, you'll feel right at home. If you're looking for something easier - this isn't it.

The Role
Eight Sleep is the first sleep fitness company. At Eight Sleep we design products at the forefront of sleep innovation. Our mission is to make people's sleep count for more, using innovative technology, minimalistic design, and proven clinical science to personalize and improve each night for everybody - changing the way people sleep forever and for better.

Our temperature-regulated technology, the Pod, is an absolute game changer. It improves people's health and happiness by changing the way they sleep. The Pod was recognized two years in a row by TIME's "Best Inventions of the Year." It is available for purchase in North America and internationally. Backed by leading Silicon Valley investors including Valor Equity Partners, Founders Fund, Khosla Ventures, and Y Combinator, we've raised over $150 million (Series C) to date. We were recognized as one of Fast Company's Most Innovative Companies in Consumer Electronics in 2019, 2022, and 2023.

That is why Eight Sleep is looking for research scientists with a passion for using AI and machine learning to transform sensor data into personalized, intelligent health and fitness experiences. Today, we offer heart rate, heart rate variability, sleep, and other metrics. Tomorrow, your imagination will unlock new meaningful metrics that will induce behavior change for better sleep and better health. To do so, you will work closely with a cross-functional R&D and production team to prototype and ship solutions.

We are seeking someone who is passionate about health technologies and about making an impact on health outcomes. We look for people who tackle problems with a systems approach and make data-driven decisions to deliver the best products to our users.

How you'll contribute
Example projects include:
Autopilot Thermoregulation: Advance the Pod's adaptive thermoregulation system - the "autopilot" that continuously learns and reacts to micro-events like restlessness or awakenings. Design policies that optimize comfort and sleep quality in real time through reinforcement learning and closed-loop control.
Health Foundation Modeling: Develop a multimodal health foundation model that integrates physiology and environmental context - learning from Pod signals, wearable sensors, and contextual data - powered by one billion+ hours of sleep data at Eight Sleep.
Behavioral Simulations: Build a high-fidelity physiological simulator that models how daily behaviors ripple into tonight's sleep and tomorrow's readiness. This initiative aims to create a non-invasive, always-on model of healthy aging powered by Pod biosignals and continuous physiological data.

What you'll need to succeed
Expertise in at least one area of machine learning and artificial intelligence (e.g., self-supervised learning, multi-modal ML, model optimization, NLP, LLMs)
Strong interest in applying machine learning to health-related problems and data
Experience using a programming language (Python, C/C++, etc.) to manipulate data, draw insights from large data sets, and train machine learning models
3+ years of practical experience applying ML to solve real-world problems, or relevant quantitative and qualitative research and analytics experience
PhD in Computer Science, Machine Learning, AI, Statistics, Mathematics, or a related quantitative field with a notable publication record; or BS/MS with publications at top venues (e.g., NeurIPS, ICML, ICLR, AAAI, CVPR, ICCV, ACL, EMNLP, INTERSPEECH)

Why join Eight Sleep?
Innovation in a culture of excellence
Join us in a workplace where innovation isn't just encouraged - it's a standard. Our flagship product, the Pod, is a testament to our culture of excellence, beloved by hundreds of thousands of customers worldwide. At Eight Sleep, you will be part of a team that continuously pushes the boundaries of technology in sleep fitness.

Immediate responsibility and accelerated career growth
From your first day, you'll take on substantial responsibilities that have a direct impact on our core business and product success. We are a small team that empowers you to own your projects and see the tangible effects of your efforts, enhancing both your professional growth and our company's trajectory. Your path will be challenging but rewarding, perfect for those who thrive in fast-paced environments aiming for high standards.

Collaboration with exceptional talent
Work alongside other bright minds like you: at Eight Sleep, exceptional intelligence and a passion for breakthroughs are the norm. Our team members are not only experts in their fields but also avid innovators who thrive in our dynamic, fast-paced environment.

Equitable compensation and continuous equity investment
We extend equity participation to every full-time team member, recognizing and rewarding your direct contributions to our success. This includes periodic equity refreshes based on performance, ensuring that as Eight Sleep grows and succeeds, so do you - perfectly aligning your achievements with the broader triumphs of the company. Pay grows rapidly as you accumulate experience with Eight Sleep and translate it into concrete impact.

Your own Pod - and other great benefits
Every Eight Sleep employee receives the very product that defines our mission: a Pod of their own. If you join us you'll get your own Pod, along with other benefits.

At Eight Sleep we continually celebrate the diverse community different individuals cultivate. As an equal opportunity employer, we stay true to our values by ensuring everyone feels they can flourish and grow.
We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status.
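The "closed-loop control" mentioned in the Autopilot Thermoregulation project can be illustrated, in a heavily simplified form, as a plain proportional controller. This is a generic sketch, not Eight Sleep's actual autopilot; the function names and gain constant are made up for illustration:

```python
# Illustrative closed-loop temperature control: each step nudges the bed
# temperature toward the target in proportion to the current error.

def control_step(current_temp, target_temp, gain=0.5):
    """Return a heating/cooling adjustment proportional to the error."""
    error = target_temp - current_temp
    return gain * error

def simulate(start_temp, target_temp, steps=20):
    """Run the control loop for a fixed number of steps and return the final temp."""
    temp = start_temp
    for _ in range(steps):
        temp += control_step(temp, target_temp)
    return temp

print(round(simulate(30.0, 27.0), 2))  # converges to the 27.0 target
```

A learned policy (e.g. via reinforcement learning) would replace the fixed gain with a policy conditioned on sleep-stage and restlessness signals, but the sense-adjust-repeat loop is the same.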

Senior Autonomy System Test Engineer

Zoox
$198,000 – $237,000
United States
Full-time
Remote: false
Autonomous vehicles have some of the largest, most complex software ever shipped in a safety-critical environment. Solving that problem is one of the most exciting technical challenges of our lifetime. As a Senior Autonomy Systems Test Engineer at Zoox, you will help accelerate our product development by helping our developers build the safest and most reliable autonomous driving software possible. As developers build features to enable the core ability for our vehicles to choose the best routes through cities and adapt in real time, you will oversee the development of extensive test plans, develop standardized simulated test design tooling and processes to execute many different scenarios, validate end-to-end behaviors, and put together triage pipelines for analyzing issues seen during offline and on-vehicle testing. These efforts are critical to exposing unforeseen failure modes, implementation bugs, and other issues arising from new feature development. You will participate regularly in in-vehicle testing missions where you will see these new features behaving in the real world. Following these missions, you will help triage and root-cause issues seen during testing, analyze test results to ensure existing functionality does not regress from release to release, and update our testing processes to ensure QA can continue scaling efficiently with future Zoox milestones. Your ability to distill complex systems and root-cause issues will be critical to producing a safe and stable vehicle platform.

In this role, you will:
Create test strategies and test plans for Zoox's self-driving behavior features.
Identify, track, report, and resolve test strategy, planning, or implementation issues with cross-functional software development teams.
Propose, prototype, and validate new testing methodologies for the AI stack using automated metrics.
Design, develop, and execute synthetic and log-based test scenarios on an in-house simulation test framework.
Compile triage strategies and triage results from different QA validation platforms and pipelines.

Qualifications:
Bachelor's or Master's in Computer Science, Electrical Engineering, or a related field
7+ years of experience in designing and implementing scalable software systems test strategies, with excellent analytical, problem-solving, and prioritization skills
Proficiency with automated test frameworks and test automation
Experience with JIRA, Git, the Linux command line, and Python or other scripting languages
Strong organizational, leadership, and communication skills

Bonus Qualifications:
PhD in Computer Science, Electrical Engineering, or a STEM field
Relevant industry experience in software and systems testing for robotics, autonomous mobility, or safety-critical systems
Familiarity with ISO 26262 or other safety standards

$198,000 - $237,000 a year
Base Salary Range
There are three major components to compensation for this position: salary, Amazon Restricted Stock Units (RSUs), and Zoox Stock Appreciation Rights. A sign-on bonus may be offered as part of the compensation package. The listed range applies only to the base salary. Compensation will vary based on geographic location and level. Leveling, as well as positioning within a level, is determined by a range of factors, including, but not limited to, a candidate's relevant years of experience, domain knowledge, and interview performance. The salary range listed in this posting is representative of the range of levels Zoox is considering for this position.

Zoox also offers a comprehensive package of benefits, including paid time off (e.g. sick leave, vacation, bereavement), unpaid time off, Zoox Stock Appreciation Rights, Amazon RSUs, health insurance, long-term care insurance, long-term and short-term disability insurance, and life insurance.

Senior Software Engineer, Agent

Pika
United States
Full-time
Remote: false
Senior Software Engineer, Agent
Department: Engineering | Location: Palo Alto HQ | Type: Full-time, On-site

About the Role
We're looking for a Senior Agent Engineer to push the boundaries of what AI agents can do at Pika. You'll work on the systems that give AI agents their personality, memory, reasoning, multi-modal capabilities, and ability to act autonomously across platforms. This is the core of what makes Pika's AI products feel alive. You'll design agent architectures, build tool-use frameworks, optimize LLM interactions, and create the systems that allow agents to learn, remember, and evolve over time. If you're excited about building AI systems that feel genuinely intelligent — not just wrappers over chat APIs — this role is for you.

What You'll Do
• Design and evolve the agent runtime — the core loop that handles reasoning, tool use, memory retrieval, and response generation
• Build agent capabilities — image generation, voice synthesis, video creation, web browsing, code execution, and other skills
• Optimize LLM integration — prompt engineering, context window management, multi-provider model routing (Claude, GPT, Gemini, open-source), cost optimization, and latency reduction
• Implement memory systems — long-term memory, working memory, episodic recall, and semantic search so agents learn from every interaction
• Design autonomous behaviors — proactive actions, scheduled tasks, and goal-directed behavior that makes agents feel self-directed
• Build evaluation and quality systems — benchmarks, A/B testing, and metrics for agent behavior, personality consistency, and response quality
• Experiment with new architectures — multi-agent collaboration, planning, chain-of-thought reasoning, and other emerging patterns
• Collaborate with product and design to translate AI capabilities into intuitive user experiences

What We're Looking For
• 5+ years of software engineering experience, with 2+ years working with LLMs or AI agent systems
• Deep understanding of LLM capabilities and limitations — you know how to get the best out of frontier models
• Experience building agent systems — tool use, function calling, multi-step reasoning, retrieval-augmented generation (RAG)
• Strong prompt engineering skills — system prompts, few-shot learning, chain-of-thought, structured output
• Proficiency in TypeScript and/or Python
• Understanding of embedding models and vector search for memory and retrieval
• Comfortable with rapid experimentation — you ship experiments, measure results, and iterate
• Product intuition — you understand what makes an AI agent feel "alive" vs. robotic
• Clear communication skills and a team-first mindset

Nice to Have
• Experience with multi-modal AI (image generation, TTS, speech-to-text, video generation)
• Experience with agent frameworks (LangChain, AutoGPT, CrewAI, or custom runtimes)
• Background in NLP, computational linguistics, or cognitive science
• Experience with fine-tuning, RLHF, or DPO
• Familiarity with AI safety and alignment considerations
• Experience with real-time/streaming LLM responses
• Previous startup experience — comfortable with ambiguity and moving fast
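The agent runtime "core loop" this posting describes (reason, call a tool, feed the result back, respond) can be sketched minimally. Everything here is illustrative: `call_model` is a stub standing in for a real LLM API call, and `TOOLS` is a toy registry, not Pika's actual runtime:

```python
# Minimal agent loop sketch: the model either requests a tool call or
# returns a final answer; tool results are appended to the message history
# so the next reasoning step can use them.

TOOLS = {
    "add": lambda args: args["a"] + args["b"],
}

def call_model(messages):
    # Stub standing in for an LLM call. A real runtime would send
    # `messages` to a provider and parse out a tool call or final text.
    last = messages[-1]["content"]
    if last == "What is 2 + 3?":
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"answer": f"The result is {last}."}

def run_agent(user_input, max_steps=5):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_steps):
        action = call_model(messages)
        if "answer" in action:
            return action["answer"]
        result = TOOLS[action["tool"]](action["args"])
        # Feed the tool result back into the loop as a new message.
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not converge")

print(run_agent("What is 2 + 3?"))  # → The result is 5.
```

Production runtimes add what the posting lists around this skeleton: memory retrieval before the model call, streaming responses, and evaluation hooks on each step.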

Applied AI Engineer - Zürich

BLP
Switzerland
Full-time
Remote: false
Join blp – The #1 Solution for ERP Automation
blp is a high-performance ETH and HSG spin-off redefining ERP automation with AI. We solve real enterprise problems with cutting-edge tech and a strong sense of ownership. Our solution is in production across 40+ countries, used by 20'000+ daily active users, automating 70'000+ processes every day.

Our AI-driven ERP automation is transforming finance, procurement, logistics, sales, and more. As one of Switzerland's fastest-growing SaaS scale-ups, we are proudly self-financed and fully employee-owned. Our success stems from deep expertise in technology and business processes, delivering a superior product with outstanding product-market fit, proven by our growing customer base, including Fortune 500 companies. Our rapid growth and career opportunities have been recognised with the LinkedIn Top Startup Award, and we're just getting started. Our HQ? Zürich's iconic Bahnhofstrasse, a fitting home for a company redefining how businesses operate. Ready to build the future? Join blp today.

About the position
We're giving you the keys to the machine. As our Applied AI Engineer, you'll have a founder-level mandate to build and ship AI-powered internal tools that make our hiring, finance, and ops workflows 10x faster — not 10% faster. You'll work at the frontier of what's possible with LLMs, AI agents, and emerging AI tooling, and apply it where it matters most: inside our own company.

Key Responsibilities
Build and ship AI-powered internal tools fast — we're talking days and weeks, not quarters
Evaluate, integrate, and chain the latest AI models, APIs, and frameworks (LLMs, agent frameworks, embedding pipelines, AI coding tools) into production-ready internal solutions
Create compounding leverage — every tool you build should free up hours across the entire company
Write clean, maintainable code that integrates with our existing systems — APIs, databases, third-party services — and doesn't break at scale
Own your deployments end-to-end: build it, test it, ship it, monitor it — and build it well enough that it runs reliably without babysitting

Requirements
Technical background with real building experience — CS degree, bootcamp, self-taught, we don't care as long as you've shipped things that work
You've built with modern AI tooling (LLMs, AI coding assistants, workflow automation) and you're obsessed with what's coming next
Startup or entrepreneurial DNA — you've either founded something, built an internal tool that became critical infra, or hacked together solutions that people actually used
You move fast, communicate clearly, and love collaborating with non-technical teams to understand their pain points and turn them into elegant, automated solutions
Fluency in English; German is a plus
Bonus: experience with tools like Make, n8n, Retool, Supabase, or similar — but honestly, we care more about your ability to learn any tool in a weekend

Benefits
Solve real problems, fast. Join a high-growth deep-tech startup tackling real-world challenges for global industry leaders.
Build from zero to one. Take ownership of our internal tooling landscape — shaping its foundations and direction from the ground up.
Opportunities to grow. You control your personal development and career trajectory in a rapidly growing, internationally expanding startup.
Exceptional colleagues. You'll grow alongside individuals with exceptional academic backgrounds and careers.
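At their core, the "embedding pipelines" named in the responsibilities above come down to ranking documents by vector similarity. A toy sketch, with hand-made 3-d vectors standing in for what an embedding model would produce (all document names and numbers below are invented for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy corpus: in a real pipeline these vectors come from an embedding model.
DOCS = {
    "invoice approval steps": [0.9, 0.1, 0.0],
    "vacation policy": [0.0, 0.8, 0.2],
    "purchase order matching": [0.8, 0.2, 0.1],
}

def search(query_vec, k=2):
    """Return the k document titles most similar to the query vector."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

print(search([1.0, 0.0, 0.0]))
```

A production version would swap the dict for a vector database and the hand-made vectors for model embeddings, but the retrieve-by-similarity step stays the same.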

Lead Applied AI Engineer - Zürich

BLP
Switzerland
Full-time
Remote: false
Join blp – The #1 Solution for ERP Automation
blp is a high-performance ETH and HSG spin-off redefining ERP automation with AI. We solve real enterprise problems with cutting-edge tech and a strong sense of ownership. Our solution is in production across 40+ countries, used by 20'000+ daily active users, automating 70'000+ processes every day.

Our AI-driven ERP automation is transforming finance, procurement, logistics, sales, and more. As one of Switzerland's fastest-growing SaaS scale-ups, we are proudly self-financed and fully employee-owned. Our success stems from deep expertise in technology and business processes, delivering a superior product with outstanding product-market fit, proven by our growing customer base, including Fortune 500 companies. Our rapid growth and career opportunities have been recognised with the LinkedIn Top Startup Award, and we're just getting started. Our HQ? Zürich's iconic Bahnhofstrasse, a fitting home for a company redefining how businesses operate. Ready to build the future? Join blp today.

About the position
We're giving you the keys to the machine. As our Lead Applied AI Engineer, you'll build up and lead the engineering team responsible for internal tooling and automation. You'll own the mandate to build and ship AI-powered internal tools that make our hiring, finance, and ops workflows 10x faster. You'll work at the frontier of what's possible with LLMs, AI agents, and emerging AI tooling, and apply it where it matters most: inside our own company.

Key Responsibilities
Technical background with real building experience — CS degree, bootcamp, self-taught, we don't care as long as you've shipped things that work
Hands-on experience building with LLMs, AI agents, prompt engineering, and modern AI frameworks — you're not just reading about it, you're building with it daily
Proven track record of building software, tools, or automation end-to-end — from scoping the problem and writing the code to deploying and maintaining it in production
Experience leading small technical teams — you know how to set direction, keep people unblocked, and ship fast without introducing unnecessary process
Strong communicator who can translate between technical and non-technical worlds — you're comfortable sitting with ops, finance, or HR, understanding their pain points, and turning them into engineered solutions
Bonus: experience with tools like LangChain, CrewAI, OpenAI API, Make, n8n, Retool, Supabase, or similar — but we care more about your ability to pick up any new AI tool in a weekend

Requirements
Technical background with real building experience — CS degree, bootcamp, self-taught, we don't care as long as you've shipped things that work
You've built with modern AI tooling (LLMs, AI coding assistants, workflow automation) and you're obsessed with what's coming next
Startup or entrepreneurial DNA — you've either founded something, built an internal tool that became critical infra, or hacked together solutions that people actually used
You move fast, communicate clearly, and love collaborating with non-technical teams to understand their pain points and turn them into elegant, automated solutions
Fluency in English; German is a plus
Bonus: experience with tools like Make, n8n, Retool, Supabase, or similar — but honestly, we care more about your ability to learn any tool in a weekend

Benefits
Solve real problems, fast. Join a high-growth deep-tech startup tackling real-world challenges for global industry leaders.
Build from zero to one. Take ownership of our internal tooling landscape — shaping its foundations and direction from the ground up.
Opportunities to grow. You control your personal development and career trajectory in a rapidly growing, internationally expanding startup.
Exceptional colleagues. You'll grow alongside individuals with exceptional academic backgrounds and careers.

Senior AI Platform Engineer (Autonomous Driving)

42dot
$120,000 – $280,000
United States
Full-time
Remote: false
About Us
42dot is a mobility AI company committed to solving mobility challenges with software and AI. As the Global Software Center of Hyundai Motor Group, 42dot pioneers the future of mobility by advancing the development of software-defined vehicles. We develop safety-first, user-centric software-defined vehicle technologies that deliver the latest performance through continuous updates, like smartphones. By advancing software and AI technology, 42dot envisions a world where everything is connected and moves autonomously through a self-managing urban transportation operating system.

About the Role
As a Senior Data Platform Engineer, you will play a pivotal role in building the core infrastructure that powers the future of autonomous driving at 42dot. This role sits at the intersection of data engineering, machine learning, and autonomous systems, requiring both deep technical expertise and a system-level mindset. You will be responsible for setting the technical strategy and leading the development of our high-performance data platform, designed to process, manage, and serve massive-scale multimodal datasets for ML model training and validation. From building a robust lakehouse architecture for AD scene data to optimizing complex data processing pipelines, you'll work across disciplines to ensure the seamless flow of data that drives our autonomy stack forward. If you're passionate about solving large-scale data challenges in a fast-paced, high-impact environment, this is your opportunity to shape how self-driving vehicles learn and evolve.

Responsibilities
Set technical strategy and oversee development of a high-scale, reliable data platform to manage, visualize, and serve large-scale datasets for ML model training and validation.
Build up the data lakehouse for autonomous driving scene datasets, including sensor data, calibration data, and annotation data.
Drive Autonomous Driving Data SDK development, including scene data search, dataset preparation, dataset loading, etc.
Dig into performance bottlenecks along the data processing pipelines, from data processing latency and data search latency to Test Procedure (TP) coverage.
Bootstrap and maintain infrastructure for Data Platform components: data processing pipeline, database, data lakehouse, and data serving.
Collaborate with cross-functional teams, including ML algorithm, ML application, and cloud infra, to align ML platforms with the overall Autonomous Driving System architecture.

Qualifications
Bachelor's degree or higher in Computer Science, Engineering, Robotics, or a similar technical field.
Minimum of 7 years of experience in Data Engineering or ML Platform roles.
Expert-level proficiency in Python and solid experience in Python SDK development.
Experience with autonomous vehicle sensor data (e.g., LiDAR, camera, radar).
Solid working experience with databases (e.g., MongoDB, PostgreSQL).
Strong understanding of modern AI frameworks (e.g., PyTorch, TensorFlow), especially the principles of distributed data loaders for model training.
Hands-on experience with data pipeline job orchestration using Databricks Workflows or Apache Airflow, as well as integrating data pipelines with machine learning models.
Extensive experience with data technologies and architectures such as data warehouses (e.g., Hive) or lakehouses (e.g., Delta Lake).
Experience with Apache Spark or other big data computing engines.
Excellent leadership and communication skills, with a demonstrated ability to lead technical projects.

Preferred Qualifications
Understanding of data governance principles and data privacy regulations, and experience implementing security measures to protect data.
Experience with the ML model training lifecycle (e.g., data preparation, model training/validation/deployment).
Understanding of large models, such as VLMs.

Base Salary: $120,000 - $280,000

Platform Engineer, Forward Deployed Engineering (FDE) - NYC

OpenAI
$230,000 – $385,000
United States
Full-time
Remote: false
About the team
OpenAI's Forward Deployed Engineering (FDE) org sits at the intersection of product, engineering, research, and go-to-market. We take frontier platform capabilities into the real world with design partners, turning raw customer signal into shipped software, repeatable patterns, and durable products. The FDE Platform team is primarily a leverage function that scales the FDE org's impact to OpenAI's platform and products. We provide hands-on leverage by embedding with customer-tagged FDE pods to aid in architecting, product shaping, refactoring, and building. This team is perfect for highly collaborative software engineers who love innovating on cutting-edge products with other builders.

About the role
Platform Engineer is a role within Forward Deployed Engineering (FDE) for strong software and ML engineers who want to build new platform capabilities from scratch, grounded in real customer deployments. You will partner with customer-tagged FDEs who are driving delivery and customer outcomes, and embed where you can provide the highest leverage. In practice that means working in the trenches on architecture, product shaping, refactoring, hardening, and reusable abstractions, while preserving the pod's ownership of customer understanding and day-to-day execution. You will also collaborate closely with our B2B Platform Team and other long-term owners to align early on what should generalize, what should remain customer-specific, and what "ready for handoff" looks like.

This role does not require travel. It is based in San Francisco or New York. We use a hybrid work model of 3 days in the office per week. We offer relocation assistance. Travel is optional by project and typically <10%, with occasional spikes for key embeds or launches.

In this role you will
Provide hands-on leverage to customer pods: embed with customer-tagged FDE teams to support generalization, contributing directly in architecture, product shaping, refactoring, and implementation.
Turn repeated signals into platform bets: translate cross-customer patterns into crisp hypotheses with clear success criteria, scope, and a validation plan that fits real account constraints.
Raise the engineering bar through tooling and mentorship: set org-wide quality norms through high-signal code review and pairing, and build lightweight developer tooling that makes good architecture, readability, and correctness the default across FDE.
Collaborate as part of cross-functional platform teams: partner closely with B2B Product, customer-tagged FDEs, ops, and business partners to bring the right products and platform capabilities to market.
Lead complex platform capabilities end-to-end when needed: for high-leverage primitives like our Context Platform, act as DRI from requirements through implementation, make key tradeoffs explicit, and pull in customer pods early to keep the work grounded in real deployments.

You might thrive in this role if you
Bring 5+ years of software engineering or ML engineering experience with a track record of shipping 0→1 capabilities that other engineers or customers depend on. Experience in high-ambiguity, fast-iteration environments (startups or product-centric teams) is a plus.
Have owned customer-adjacent technical work end-to-end, from scoping and hypothesis-setting through production adoption, and improved outcomes through structured iteration (instrumentation, evals, error analysis, and tightening success criteria over time).
Have built or operated systems where reliability, security, and governance materially shaped design (permissions/RBAC, auditability, data access boundaries, rollout safety, observability, and incident-driven hardening).
Communicate clearly across engineering, product, go-to-market, and executive audiences, simplifying complex ideas and translating technical tradeoffs into adoption impact, sequencing decisions, and measurable outcomes. You can credibly "pitch" a platform bet in a customer conversation.
Default to systems thinking: you turn ambiguous feedback, failures, and escalations into durable product requirements and reusable platform capabilities, not one-off fixes or bespoke delivery work.

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.OpenAI Global Applicant Privacy PolicyAt OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Silver.dev

Prospera AI - AI Backend Engineer

Silver.dev
$60,000 – $90,000
AR.svg
Argentina
Full-time
Remote
false
About Prospera AI

We're building Sophie, a multi-agent AI orchestrator that helps wealth management advisors deliver more personalized, effective service to their clients. Our platform analyzes behavioral patterns, communication preferences, and emotional states to transform how advisors understand and serve their clients. We're a small, well-funded team at an exciting inflection point: our technology works, customers love the product, and now we're building the engineering team to scale.

The Role

We're looking for an AI/Backend Engineer to own and evolve our LLM orchestration pipeline. You'll be the first dedicated engineering hire, working directly with our CTO to transform Sophie from a working prototype into a scalable, enterprise-ready platform. This is a high-impact, high-autonomy role: you'll shape technical decisions that define the product for years to come.

What You'll Do

Own the AI Pipeline
- Design and optimize our multi-agent orchestration system
- Implement parallelization and streaming to dramatically reduce response latency
- Build robust prompt management with versioning and A/B testing capabilities

Build RAG Systems
- Design retrieval-augmented generation for accurate, contextual responses
- Work with vector databases, embeddings, and relevance scoring
- Optimize for both speed and accuracy at scale

Develop Production APIs
- Build developer-friendly APIs connecting our AI capabilities to the frontend
- Design for future integrations with CRMs and advisor tools
- Implement proper authentication, rate limiting, and documentation

Shape the Foundation
- Establish code review practices and testing standards
- Document architecture decisions for future team members
- Contribute to technical patents and IP development

What We're Looking For

Must Have
- 4+ years of production Python experience (async patterns, type hints)
- Hands-on experience with LLM APIs (OpenAI, Anthropic, or similar)
- Strong understanding of prompt engineering and multi-step LLM workflows
- Production API development experience (FastAPI or similar)
- Strong SQL and PostgreSQL skills

Great to Have
- Experience with RAG systems and vector databases (Pinecone, Weaviate, pgvector)
- Streaming/real-time implementation experience (SSE, WebSockets)
- TypeScript/JavaScript familiarity
- FinTech or regulated industry background

How You Work
- Strong UX intuition: you notice when flows have one too many clicks
- Pragmatic perfectionism: you know when to polish and when to ship
- Clear communicator who can explain technical constraints in business terms
- Collaborative mindset: frontend doesn't exist in isolation, and you take an ego-free approach to feedback
- Self-directed and comfortable with ambiguity
- Strong written communication (async-first culture)
- Pragmatic problem-solver who ships iteratively

What This Role Is Not

We want to be upfront about expectations:
- Not a pure ML/research role: you'll apply LLMs, not train them
- Not a management role: the near-term focus is individual contribution
- Not fully autonomous: you'll collaborate closely with the CTO on architecture
- Not 9-to-5: startup intensity applies, though we respect work-life balance

Compensation & Benefits
- Equity: meaningful early-stage grant with 4-year vesting
- Equipment: a professional laptop ready for AI work, plus a remote-work stipend once you reach the 6-month mark
- Time Off: flexible PTO, with a minimum of 15 days encouraged
- Learning: $1,000 annual professional development budget
- Schedule: flexible hours with 3-4 hours of daily overlap (Americas timezones)