
The AI job market moves fast. We keep up so you don't have to.

Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.


New AI Opportunities


Member of Technical Staff, Pre-training Data

Magic
$200,000 – $550,000
United States
Full-time
Remote: No
Magic’s mission is to build safe AGI that accelerates humanity’s progress on the world’s most important problems. We believe the most promising path to safe AGI lies in automating research and code generation to improve models and solve alignment more reliably than humans can alone. Our approach combines frontier-scale pre-training, domain-specific RL, ultra-long context, and inference-time compute to achieve this goal.

About the role
As a Software Engineer on the Pre-training Data team, you will design and operate the systems that define our model’s training corpus at scale. This role is focused on large-scale data acquisition, processing, filtering, mixture design, and ablation-driven iteration. You will work on the infrastructure and experimental loops that determine what data we train on — and therefore what the model learns.

Magic’s long-context models introduce non-trivial data challenges: maintaining document structure and long-range coherence, designing sequence chunking and packing strategies, balancing mixture trade-offs, and ensuring data quality at internet scale. You will own systems that turn these questions into measurable training decisions. This role can evolve into broader ownership of corpus strategy, deeper involvement in training systems, or a transition into ML systems work as you shape how data and model behavior interact at scale.

What you’ll work on
- Build and operate large-scale web crawling, scraping, and ingestion pipelines
- Design filtering, deduplication, quality controls, and dataset versioning systems
- Run data ablations across sources, rewrites, mixtures, and long-sequence strategies
- Optimize distributed data processing systems for throughput and cost efficiency
- Improve observability and reliability of large ETL and dataflow jobs
- Collaborate with Research and Training Systems teams to align corpus design with model behavior

What we’re looking for
- Strong software engineering fundamentals
- Experience building and operating large-scale distributed data systems
- Ability to design and interpret practical data ablation experiments
- Comfort making decisions under compute, storage, and cost constraints
- Strong systems intuition around reliability and scale
- Track record of owning production systems end-to-end

Compensation, benefits, and perks (US):
- Annual salary range: $200K – $550K
- Equity is a significant part of total compensation, in addition to salary
- 401(k) plan with 6% salary matching
- Generous health, dental, and vision insurance for you and your dependents
- Unlimited paid time off
- Visa sponsorship and relocation stipend to bring you to SF, if possible
- A small, fast-paced, highly focused team

Magic strives to be the place where high-potential individuals can do their best work. We value quick learning and grit just as much as skill and experience.

Our culture
- Integrity. Words and actions should be aligned.
- Hands-on. At Magic, everyone is building.
- Teamwork. We move as one team, not N individuals.
- Focus. Safely deploy AGI. Everything else is noise.
- Quality. Magic should feel like magic.
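The sequence chunking and packing challenge described in this posting can be made concrete with a small sketch. The function below is purely illustrative and is not Magic’s actual pipeline: a greedy first-fit-decreasing packer that fills fixed-length training sequences from a list of document token counts, minimizing padding.

```python
def pack_documents(doc_lengths, seq_len):
    """Greedy first-fit-decreasing packing: place each document (by token
    count) into the first training sequence with room, longest docs first."""
    bins = []  # each bin: [remaining capacity, list of document indices]
    order = sorted(range(len(doc_lengths)), key=lambda i: -doc_lengths[i])
    for i in order:
        n = doc_lengths[i]
        if n > seq_len:
            continue  # over-long documents would be chunked upstream
        for b in bins:
            if b[0] >= n:  # first bin that still fits this document
                b[0] -= n
                b[1].append(i)
                break
        else:
            bins.append([seq_len - n, [i]])  # open a new sequence
    return [(seq_len - rem, idxs) for rem, idxs in bins]
```

For a 1024-token sequence length, documents of 600, 500, 400, and 300 tokens pack into two sequences filled to 1000 and 800 tokens. Real corpus pipelines layer document-boundary handling and attention masking on top of this basic idea.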

Software Engineer

Magic
$200,000 – $550,000
United States
Full-time
Remote: No
Magic’s mission is to build safe AGI that accelerates humanity’s progress on the world’s most important problems. We believe the most promising path to safe AGI lies in automating research and code generation to improve models and solve alignment more reliably than humans can alone. Our approach combines frontier-scale pre-training, domain-specific RL, ultra-long context, and inference-time compute to achieve this goal.

About the role
As a Software Engineer at Magic, you will work on core systems or product surfaces that directly determine model capability and user experience. This role can map onto Pre-training Data, RL Research & Environments, or Product, depending on background and strengths. Across all placements, the expectation is end-to-end ownership: defining problems, implementing solutions, shipping to production, and iterating based on real outcomes.

Magic’s long-context models introduce unique technical challenges — internet-scale data acquisition, long-horizon post-training loops, and product workflows that make complex model behavior understandable and controllable. You will operate close to these constraints, building systems that are both technically rigorous and production-ready. This role can evolve into deeper specialization in data systems, post-training capability development, or product engineering leadership, depending on strengths and interests.

What you’ll work on
Depending on team placement, you may:
- Build and scale large distributed data pipelines for pre-training
- Design filtering, mixture, and dataset versioning systems
- Develop post-training datasets, evaluation frameworks, and reward pipelines
- Run ablations that translate capability goals into measurable improvements
- Build end-to-end product surfaces that integrate deeply with the model
- Design APIs, backend services, and frontend workflows for AI-first experiences
- Improve reliability, observability, and performance of production systems

What we’re looking for
- Strong software engineering fundamentals
- High ownership and comfort operating in ambiguous problem spaces
- Experience building production systems at scale
- Ability to reason clearly about trade-offs between quality, performance, and cost
- Strong technical judgment and bias toward shipping
- Track record of turning complex technical problems into working systems

Compensation, benefits, and perks (US):
- Annual salary range: $200K – $550K
- Equity is a significant part of total compensation, in addition to salary
- 401(k) plan with 6% salary matching
- Generous health, dental, and vision insurance for you and your dependents
- Unlimited paid time off
- Visa sponsorship and relocation stipend to bring you to SF, if possible
- A small, fast-paced, highly focused team

Magic strives to be the place where high-potential individuals can do their best work. We value quick learning and grit just as much as skill and experience.

Our culture
- Integrity. Words and actions should be aligned.
- Hands-on. At Magic, everyone is building.
- Teamwork. We move as one team, not N individuals.
- Focus. Safely deploy AGI. Everything else is noise.
- Quality. Magic should feel like magic.
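As a flavor of the mixture-design work this posting mentions, here is a hedged sketch of weighted sampling across corpus sources. The source names and structure are invented for illustration and do not describe Magic’s systems.

```python
import random

def sample_mixture(sources, weights, n, seed=0):
    """Draw n documents from named sources according to mixture weights.
    sources: dict name -> list of documents; weights: dict name -> weight."""
    rng = random.Random(seed)  # seeded for reproducible ablations
    names = list(sources)
    total = sum(weights[name] for name in names)
    probs = [weights[name] / total for name in names]
    # Pick a source per draw, then a document uniformly within that source.
    picks = rng.choices(names, weights=probs, k=n)
    return [rng.choice(sources[name]) for name in picks]
```

Changing the weights dict and re-running a training ablation is the basic loop behind “run ablations across sources, rewrites, mixtures” above; production versions stream from versioned datasets rather than in-memory lists.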

Senior Software Engineering Lead, Resilience and Chaos Engineering

Intrinsic
Singapore
Full-time
Remote: No
Intrinsic is Alphabet’s bet on reimagining the potential of industrial robotics. Our team believes that advances in AI, perception, and simulation will redefine what’s possible for industrial robotics in the near future – with software and data at the core. Our mission is to make industrial robotics intelligent, accessible, and usable for millions more businesses, entrepreneurs, and developers. We are a dynamic team of engineers, roboticists, designers, and technologists who are passionate about unlocking the creative and economic potential of industrial robotics.

Role
As a Senior AI Research Scientist for Perception for Contact-Rich Manipulation, you will lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. You will explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Your work will involve creating models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. You will collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities that can be deployed on physical hardware for industrial applications.

How your work moves the mission forward
- Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios.
- Design algorithms that enable robots to manipulate complex or deformable objects with high precision.
- Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware.
- Evaluate model performance in both simulation and real-world environments to ensure robustness and reliability.
- Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems.
- Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.

Skills you will need to be successful
- PhD in Computer Science, Robotics, or a related field with a focus on machine learning or computer vision.
- 3 years of experience in applied research focused on robotic manipulation or robot learning.
- Proficiency in programming with Python and C++.
- Experience with deep learning frameworks such as PyTorch, JAX, or TensorFlow.
- Experience developing algorithms for vision-based manipulation or contact-rich interaction.
- Publication record in top-tier robotics or AI conferences (e.g., ICRA, IROS, CVPR, NeurIPS).

Skills that will differentiate your candidacy
- Experience with reinforcement learning or imitation learning for robotics.
- Familiarity with physics simulators like MuJoCo, Isaac Sim, or Gazebo.
- Experience integrating tactile sensors with visual perception systems.
- Experience in LfD (Learning from Demonstrations) and kinesthetic learning.
- Background in sim-to-real transfer techniques for manipulation policies.
- Experience with transformer-based architectures or foundation models in a robotics context.
- Experience deploying machine learning models on edge compute hardware.

At Intrinsic, we are proud to be an equal opportunity workplace. Employment at Intrinsic is based solely on a person’s merit and qualifications directly related to professional competence. Intrinsic does not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), or any other basis protected by law. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements.

It is Intrinsic’s policy to comply with all applicable national, state, and local laws pertaining to nondiscrimination and equal opportunity. If you have a disability or special need that requires accommodation, please contact us at: candidate-support@intrinsic.ai.

Senior Full Stack Engineer, Backend Engineering

OpusClip
CA$160,000 – CA$265,000
Canada
Full-time
Remote: No
🎨 OpusClip is the world’s No.1 AI video agent, built for authenticity on social media. We envision a world where everyone can authentically share their story through video, with no expertise needed. Within just 18 months of our launch, over 10 million creators and businesses have used OpusClip to enhance their social presence. We have raised $50 million in total funding and are fortunate to have some of the most supportive investors, including SoftBank Vision Fund, DCM Ventures, Millennium New Horizons, Fellows Fund, AI Grant, Jason Lemkin (SaaStr), Samsung Next, GTMfund, Alumni Ventures, and many more. Check out our latest coverage by Business Insider featuring our product and funding milestones, and our recognition as one of The Information’s 50 Most Promising Startups in 2024.

Headquartered in Palo Alto, we are a team of 100 passionate and experienced AI enthusiasts and video experts, driven by our core values:
- Be a Champion Team
- Prioritize Ruthlessly
- Ship Fast, Quality Follows
- Obsess over Customers

Be a part of this exciting journey with us!

The Mission
We’re building one of the world’s largest AI video clippers. Having solved the “long-to-short clipping” challenge, we’re now tackling the “magic quality” challenge: elevating AI quality and taste to match top editors, producers, and professional creative teams. We’re also streamlining workflows by deeply examining content selection, video production, and post-production editing best practices while reinforcing our data flywheel. You’ll join the team to push our product beyond its current limits.

Responsibilities
- Architect Dedicated Processing Environments: Design and implement high-throughput, isolated processing clusters for Enterprise clients. Build the “paved road” for strict tenant isolation and High Availability (HA) without noisy-neighbor interference.
- Scale Core Infrastructure: Drive improvements across our Temporal workflow clusters and production Kubernetes environments, implementing scaling strategies that support both self-serve consumers and high-touch Enterprise contracts.
- Build the AI Serving Layer: Bridge engineering and research by collaborating with the AI/ML team to transform experimental models into scalable, production-ready services. Own the infrastructure that connects model outputs to user-facing features with minimal latency.
- Implement Semantic Search at Scale: Build and operate high-dimensional vector database infrastructure (Milvus) to power “OpusSearch”. Enable users to find exact moments across thousands of hours of video using natural language.
- Enterprise Readiness: Architect the backend systems (bulk workflow orchestration, resource isolation, multi-tenancy) that enable large media houses to manage massive video archives.

Who You Are
- Infrastructure-Minded Product Engineer: You care about why features exist and how they scale. You think in terms of throughput, isolation, and failure modes, but always in service of the user experience.
- Systems Builder: You’re experienced in building backend systems, designing APIs, orchestrating distributed workflows, tuning databases, and optimizing compute. You’re comfortable debugging Kubernetes internals, Temporal workflow logic, and async processing pipelines.
- AI-Native Mindset: You have experience or a strong, demonstrated interest in building infrastructure around AI models, managing GPU workloads, and serving models at scale.
- Experience: 5+ years shipping production-grade backend systems. Experience with Kubernetes, workflow orchestration (Temporal or similar), video infrastructure, vector databases, or high-volume data pipelines is a major plus.

Our Tech Stack
- Orchestration & Compute: Kubernetes (GKE), Docker, Horizontal Pod Autoscaling (HPA)
- Workflow Engine: Temporal
- Languages: Python, TypeScript
- Data & Storage: Redis, MongoDB, Vector DB, Postgres, Cloud Storage
- Infrastructure as Code: Terraform
- Observability: Datadog (APM, Custom Metrics)
- Cloud: GCP

Why Join Now?
- Market Leadership: We’re defining the “Agentic Video Editing” category and setting industry standards for quality and taste.
- Hard Infrastructure Problems: You’ll build systems that process millions of video hours — tenant isolation, GPU scheduling, distributed workflow orchestration, and vector search at scale.
- High Impact: Your work directly influences our ability to capture the prosumer/enterprise market and hit our ARR targets.
- Vancouver Hub: Be a key part of our growing Vancouver site, working closely with leadership to shape our engineering culture.

Location (On-site): Burnaby, Vancouver, CA

Compensation
- The base salary range for this position is CAD $160,000 – $265,000 annually.
- This role is also eligible for a performance-based bonus, with an annual range of CAD $13,000 – $58,000.
- Actual compensation may vary based on factors such as a candidate’s qualifications, skills, experience, and geographic location.
- Competitive equity (ISOs)

🎁 Our Benefits
- Comprehensive medical, vision, and dental coverage to support you and your well-being.
- Flexible paid time off that empowers you to recharge, reset, and come back stronger.
- Generous equipment, software, and office furniture budget, including MacBook, 4K monitor, standing desks, and more of what you need to be creative and productive.
- Access to the latest technologies and tools.
- Visa sponsorship available, subject to eligibility.

EEO
OpusClip is proud to be an equal opportunity employer. We do not discriminate in hiring or any employment decision based on race, color, religion, national origin, age, sex (including pregnancy, childbirth, or related medical conditions), marital status, ancestry, physical or mental disability, genetic information, veteran status, gender identity or expression, sexual orientation, or other applicable legally protected characteristics. OpusClip considers qualified applicants with criminal histories, consistent with applicable federal, state, and local law. OpusClip is also committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures.
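The semantic-search responsibility in this posting reduces, at its core, to nearest-neighbor search over embeddings. The sketch below uses brute-force cosine similarity with NumPy for illustration only; it is not OpusSearch or Milvus code, and a production system like the one described would rely on an approximate-nearest-neighbor index in a vector database such as Milvus.

```python
import numpy as np

def build_index(embeddings):
    """L2-normalize clip embeddings so a dot product equals cosine similarity."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    return embeddings / np.clip(norms, 1e-12, None)

def search(index, query, k=3):
    """Return indices of the k clips most similar to the query embedding."""
    q = query / max(np.linalg.norm(query), 1e-12)
    scores = index @ q          # cosine similarity against every clip
    return np.argsort(-scores)[:k]
```

In the real pipeline the query embedding would come from encoding a natural-language request, and each index row from encoding a video moment; brute force works to a few million vectors, after which an ANN index becomes necessary.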

Research Scientist, PhD

OpenAI
$250,000 – $380,000
United States
Full-time
Remote: No
About the Team
The Research team at OpenAI advances the frontier of artificial intelligence by developing new models, algorithms, and learning paradigms that push toward safe and beneficial AGI. Our researchers work across a wide range of domains including multimodal learning, reasoning, robotics, alignment, and large-scale foundation models, collaborating closely with engineering and product teams to translate research breakthroughs into real-world impact.

About the Role
As a Research Scientist at OpenAI, you will develop novel machine learning methods and contribute to advancing the research agenda of the team you join. You will work on discovering simple, scalable, and generalizable ideas that improve model capabilities and help shape a unified long-term research vision across the organization. We’re looking for people who are excited to pursue ambitious research problems, operate independently, and collaborate deeply with interdisciplinary teams to bring cutting-edge ideas from concept to impactful systems.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In this role, you will:
- Conduct original research to advance the state of the art in machine learning and artificial intelligence.
- Design, implement, and evaluate novel algorithms, models, or training approaches at large scale.
- Collaborate with researchers and engineers to translate research insights into production systems and real-world applications.

You might thrive in this role if you:
- Are pursuing or have recently completed a PhD in Machine Learning, Computer Science, Robotics, or a related technical field.
- Have demonstrated research impact through first-author publications, open-source projects, or significant technical contributions.
- Can independently define and execute on a research agenda, identifying impactful problems and driving long-term projects forward.
- Are motivated by OpenAI’s mission and excited to work on research that contributes toward beneficial AGI.

Nice to have:
- Interest in the societal impacts and responsible deployment of AI systems.
- Experience building high-performance or large-scale implementations of deep learning systems.

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Manager, Forward Deployed Engineer (FDE), Life Sciences

OpenAI
$252,000 – $335,000
United States
Full-time
Remote: No
About the team
OpenAI’s Forward Deployed Engineering (FDE) team partners with global pharma and biotech, CROs, and research institutions to deploy production-grade AI systems across the R&D value chain. We operate at the intersection of customer delivery and core platform development, converting early deployments into repeatable system standards and evaluation practices that scale across regulated environments.

About the role
As a Life Sciences FDE Manager, you’ll lead a team of FDEs delivering production AI systems across drug discovery and development workflows. You’ll own delivery outcomes and team leverage while staying hands-on as a player-coach. This includes building and shipping alongside the team, setting technical direction, and maintaining a high bar for production-grade systems in regulated environments. We measure success through the health and quality of your FDE team, production adoption and measurable workflow impact, the quality of eval-driven feedback delivered back to Product and Research, and the repeatability of deployment patterns across life sciences customers.

This role is based in San Francisco. We use a hybrid work model of 3 days in the office per week. We offer relocation assistance. This role will require travel up to 25%.

In this role you will
- Lead and grow a team of FDEs delivering production AI systems across regulated life sciences environments
- Be accountable for your team’s end-to-end delivery outcomes, balancing scope, speed, robustness, and risk in high-stakes deployments
- Coach and develop engineers through direct feedback, high technical standards, and clear expectations for execution and ownership
- Operate as a player-coach, directly contributing to production systems while leading, coaching, and setting technical direction
- Guide teams through ambiguous, multi-workstream engagements spanning data, workflows, infrastructure, security, and scientific stakeholders
- Run evaluation loops that measure model and system quality against workflow-specific scientific benchmarks, then convert results into crisp roadmap input

You might thrive in this role if you
- Bring 8+ years of engineering or technical delivery experience, including 2+ years managing high-performing customer-facing or systems-oriented engineering teams
- Have led complex, high-pressure technical programs from prototype through sustained production use in regulated environments
- Have experience working in or adjacent to life sciences R&D, clinical research, scientific software, or regulated scientific data environments
- Write and review production-grade code and can guide architectural decisions across backend, data, and ML-adjacent systems
- Translate scientific and technical tradeoffs into clear delivery plans, risk posture, and measurable outcomes across scientific, clinical, technical, and executive audiences
- Elevate team performance through clarity, judgment, and technical credibility
- Turn field experience into precise, actionable feedback for Product, Research, and GTM teams

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
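The evaluation loops this role runs are, at their simplest, a harness that scores a deployed system against benchmark cases and surfaces failures as roadmap input. The sketch below is illustrative only and does not reflect OpenAI’s internal tooling; the case format and names are invented.

```python
def run_eval(system, cases):
    """Score a callable system against benchmark cases and summarize
    the pass rate plus concrete failures for roadmap feedback."""
    failures = []
    for case in cases:
        got = system(case["input"])
        if got != case["expected"]:
            failures.append({"input": case["input"], "got": got,
                             "expected": case["expected"]})
    passed = len(cases) - len(failures)
    return {"pass_rate": passed / len(cases), "failures": failures}
```

Workflow-specific scientific benchmarks would replace the exact-match check with domain scoring (e.g., graded rubrics), but the loop shape of running, diffing, and reporting stays the same.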

Sensing Systems Engineer

Sesame
$175,000 – $280,000
United States
Full-time
Remote: No
About Sesame
Sesame believes in a future where computers are lifelike – with the ability to see, hear, and collaborate with us in ways that feel natural and human. With this vision, we’re designing a new kind of computer, focused on making voice agents part of our daily lives. Our team brings together founders from Oculus and Ubiquity6, alongside proven leaders from Meta, Google, and Apple, with deep expertise spanning hardware and software. Join us in shaping a future where computers truly come alive.

About the role
Sesame is developing new consumer wearables whose purpose is to enrich the human-computer interface with novel interaction modes, including the use of new and available sensor systems. We are looking for a Systems Engineer to own the holistic performance of Sesame’s devices across the full stack — from hardware selection and integration through firmware, signal processing, and application behavior. The ideal candidate thrives in crossing the boundaries between disciplines to ensure product-level goals are translated into technical requirements that deliver a magical experience in users’ hands. You’ll be the connective tissue between hardware, firmware, application, ML, and product teams, ensuring every layer works in concert from proof-of-concept prototypes to shipping in mass production.

Responsibilities
- Sensing Architecture: Research, evaluate, and recommend optimal sensor technologies and devices for various wearable applications, taking into account physical, electrical, and software capabilities along with cost, schedule, and user impact.
- End-to-end Performance: Own the end-to-end performance of Sesame devices’ sensor systems from prototyping to mass production, including latency, power consumption, thermal constraints, and reliability.
- System Requirements & Test: Champion a high-quality user experience by defining system-level test plans, acceptance criteria, and detailed, actionable specifications for designing and validating each layer of the stack.
- User Data Collection: Design and supervise a data collection strategy to gather ground-truth data sets necessary for algorithm and model development.
- Algorithm Design: Develop, test, and implement the signal processing, sensor fusion, and calibration systems that translate raw sensor data into usable outputs.
- Model-Sensor Integration: Collaborate with Sesame’s ML team to determine how sensor data improves the quality of Sesame agents’ responses.

Required qualifications
- 5+ years of experience in systems engineering, sensor systems, algorithms, and/or signal processing.
- Demonstrated experience shipping high-volume consumer products from concept through production, in collaboration with electrical, firmware, and mechanical engineers.
- Background in sensor physics: deep understanding of how sensor systems interact with the physical world, and how to design a test plan to isolate specific variables and behaviors in the system.
- Understanding of electro-mechanical integration requirements and limitations of various sensors: implementation constraints, best practices, areas for optimization.
- Strong understanding of embedded systems: SoC architectures, memory management, wireless connectivity, power and thermal management, latency.
- Proficiency in Digital Signal Processing (DSP) techniques: filtering, windowing, feature extraction, and sensor fusion.
- Expert in Python for signal processing, data visualization, and algorithm development.
- Excellent communication skills and ability to drive alignment across multiple engineering disciplines.
- BS/MS in Electrical Engineering, Mechanical Engineering, Physics, or Computer Science (or equivalent experience).

Preferred qualifications
- Experience with wearable or small-form-factor consumer devices, especially on-device voice, audio, or camera processing pipelines.
- Familiarity with AI/ML workloads on edge devices and their system-level implications (compute, memory, power).
- Hands-on experience porting algorithms and models to embedded systems, including experience with C/C++.
- Multiphysics/FEA experience: proficiency with simulation tools (e.g., COMSOL, ANSYS) to model mechanical-electrical coupling and sensor response.
- Experience with machine learning for signal processing.
- Direct experience defining system architecture for a 0-to-1 product.
- Experience in a startup or fast-moving, small-team product environment.

Sesame is committed to a workplace where everyone feels valued, respected, and empowered. We welcome all qualified applicants, embracing diversity in race, gender, identity, orientation, ability, and more. We provide reasonable accommodations for applicants with disabilities — contact careers@sesame.com for assistance.

Full-time Employee Benefits
- 401k matching
- 100% employer-paid health, vision, and dental benefits
- Unlimited PTO and sick time
- Flexible spending account matching (medical FSA)

Benefits do not apply to contingent/contract workers.
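The windowing and feature-extraction techniques named in this posting’s qualifications can be shown with a minimal sketch in Python, the language the posting calls out. The function names are invented for illustration; real pipelines add filtering and calibration ahead of this step.

```python
import math

def frame_signal(x, win, hop):
    """Split a 1-D signal into overlapping fixed-length windows (framing)."""
    return [x[i:i + win] for i in range(0, len(x) - win + 1, hop)]

def rms_features(x, win, hop):
    """Per-window RMS energy, a basic feature for sensor and audio pipelines."""
    return [math.sqrt(sum(s * s for s in w) / win)
            for w in frame_signal(x, win, hop)]
```

Downstream stages would feed per-window features like these into sensor fusion or an ML model; on embedded targets the same loop is typically ported to C/C++ with fixed-point arithmetic.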

Deployed Engineer (Central)

LangChain
$150,000 – $250,000
United States
Full-time
Remote: No
About Us
At LangChain, our mission is to make intelligent agents ubiquitous. We build the foundation for agent engineering in the real world, helping developers move from prototypes to production-ready AI agents that teams can rely on. We began as widely adopted open-source tools and have grown to also offer a platform for building, evaluating, deploying, and operating agents at scale.

Today, LangChain, LangGraph, LangSmith, and Agent Builder are used by teams shipping real AI products across startups and large enterprises. Millions of developers trust LangChain to power AI teams at companies like Replit, Clay, Coinbase, Workday, Lyft, Cloudflare, Harvey, Rippling, Vanta, and 35% of the Fortune 500.

With $125M raised at Series B from IVP, Sequoia, Benchmark, CapitalG, and Sapphire Ventures, we’re at a stage where we’re continuing to develop new products, growth is accelerating, and every team member has a meaningful impact on what we build and how we work together. LangChain is a place where your contributions can shape how this technology shows up in the real world.

About the Team
The Deployed Engineering team works directly with companies building and running AI agents in production, helping turn ideas and prototypes into systems teams can rely on.

This is a hands-on, highly technical team that partners closely with customer engineers across the full lifecycle, from pre-sales evaluations to post-deployment advisory work. The focus is on achieving the technical win, co-designing agent architectures, and helping customers operate agents reliably at scale using the LangChain suite.

Deployed Engineers sit at the intersection of engineering, product, and go-to-market, shaping how LangChain is adopted in the field and feeding real-world insights back into the platform.

About the Role
You’ll work on some of the hardest problems in applied AI: not demos, not research, but systems that real teams depend on in production.
The feedback loop is fast, the impact is visible, and the work you do directly shapes how AI agents are built in the real world.

What You’ll Do
- Co-architect and co-build production AI agents with customer engineering teams
- Own the technical win in pre-sales by designing POCs, answering deep technical questions, and guiding evaluations
- Help customers deploy and operate agent-based applications such as conversational agents, research agents, and multi-step workflows
- Advise customers post-sale on architecture, best practices, and roadmap-level decisions
- Run technical demos, trainings, and workshops for developer audiences
- Surface field feedback and contribute reusable patterns, cookbooks, and example code that scale across customers
- Occasionally contribute code upstream when it meaningfully improves customer outcomes

What You’ll Bring
- 3+ years in a relevant technical role (software engineering, customer engineering, solutions engineering, founding/product engineering), ideally in a startup or scale-up
- Strong Python, JavaScript, and systems fundamentals
- You’ve designed agent-based or LLM-powered applications beyond simple API calls, including multi-step workflows, orchestration, and failure handling
- You’re comfortable working directly with customers during POCs, architecture reviews, and technical evaluations
- You can explain technical tradeoffs clearly and build trust with developer audiences
- You take responsibility for outcomes, not just recommendations
- You have a bias toward action and enjoy figuring things out as you go
- You’re excited about operating AI agents in production, not just building demos

Nice to Haves
- You’ve deployed AI agents in production, especially using LangChain, LangGraph, or similar frameworks
- You’ve worked with LLM evaluation, observability, or guardrails
- You have experience with cloud environments (AWS, GCP, Azure), containers, and basic Kubernetes concepts
- You’ve shipped and operated production software and are comfortable owning systems under real-world constraints

Compensation & Benefits
We offer competitive compensation that includes base salary, variable compensation for relevant roles, meaningful equity, benefits, and perks. Benefits include medical, dental, and vision coverage, flexible vacation, a 401(k) plan, and life insurance. Actual compensation and offerings will vary based on role, level, and location. Team members in the EU, UK, and APAC receive locally competitive benefits aligned with regional norms and regulations.

Annual OTE range: $150,000–$250,000 USD
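The "multi-step workflows, orchestration, and failure handling" this role asks about can be sketched framework-free in a few lines of Python. The step names, retry counts, and simulated failure below are invented for illustration; a real deployment would typically build on LangGraph or a similar orchestration layer.

```python
import time

def run_workflow(steps, state, retries=2, backoff=0.01):
    """Run ordered (name, step) pairs, threading a shared state dict
    through them. Each step is retried with exponential backoff; if a
    step keeps failing, the workflow aborts with a partial result."""
    for name, step in steps:
        for attempt in range(retries + 1):
            try:
                state = step(state)
                break
            except Exception:
                if attempt == retries:
                    state["failed_step"] = name  # surface the failure point
                    return state
                time.sleep(backoff * (2 ** attempt))
    return state

# Hypothetical two-step agent workflow: fetch -> summarize,
# where the fetch step fails once with a transient error.
calls = {"n": 0}

def fetch(state):
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("transient upstream error")
    return {**state, "doc": "raw text"}

def summarize(state):
    return {**state, "summary": state["doc"].upper()}

result = run_workflow([("fetch", fetch), ("summarize", summarize)], {})
print(result["summary"])  # prints RAW TEXT
```

The design point interviewers usually probe is exactly this: where retries live, how partial failure is surfaced, and what state is passed between steps.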

Software Engineer, Voice Agents / AI - Deepgram for Restaurants

Deepgram
$160,000 – $250,000
United States
Full-time
Remote: No
Company Overview
Deepgram is the leading platform underpinning the emerging trillion-dollar Voice AI economy, providing real-time APIs for speech-to-text (STT) and text-to-speech (TTS), and for building production-grade voice agents at scale. More than 200,000 developers and 1,300+ organizations build voice offerings that are ‘Powered by Deepgram’, including Twilio, Cloudflare, Sierra, Decagon, Vapi, Daily, Cresta, Granola, and Jack in the Box. Deepgram’s voice-native foundation models are accessed through cloud APIs or as self-hosted and on-premises software, with unmatched accuracy, low latency, and cost efficiency. Backed by a recent Series C led by leading global investors and strategic partners, Deepgram has processed over 50,000 years of audio and transcribed more than 1 trillion words. No organization in the world understands voice better than Deepgram.

Company Operating Rhythm
At Deepgram, we expect an AI-first mindset: AI use and comfort aren’t optional; they’re core to how we operate, innovate, and measure performance. Every team member is expected to actively use and experiment with advanced AI tools, and even build their own into their everyday work. We measure how effectively AI is applied to deliver results, and consistent, creative use of the latest AI capabilities is key to success here. Candidates should be comfortable adopting new models and modes quickly, integrating AI into their workflows, and continuously pushing the boundaries of what these technologies can do.

Additionally, we move at the pace of AI. Change is rapid, and you can expect your day-to-day work to evolve just as quickly.
This may not be the right role if you’re not excited to experiment, adapt, think on your feet, and learn constantly, or if you’re seeking something highly prescriptive with a traditional 9-to-5.

The Opportunity
We are seeking a Software Engineer to join Deepgram for Restaurants, a new, vertically focused business unit dedicated to solving the toughest problems in the space, working alongside Deepgram’s core research teams. We’ll be working directly with leading restaurant enterprises and restaurant technology partners to uncover and solve the highest-impact problems in the space. Early focus areas include:
- Ultra-robust ASR in noisy, multi-mic environments
- High-accuracy, menu-aware drive-thru and phone ordering agents
- Real-time analytics and operational intelligence
- Deeper, more reliable in-restaurant hardware integration

We are already partnering with several leading brands and will continue expanding these efforts.

What You’ll Do
- Design, develop, and maintain scalable, high-performance backend systems for our automated order-taking platform
- Collaborate closely with our team to ensure seamless integration of our backend with our ML models and client devices
- Monitor and optimize the performance of our backend systems in production environments
- Build and maintain integrations with third-party restaurant software systems such as POS, loyalty, payment gateways, and customer data platforms
- Implement best practices in system design, code quality, and testing to ensure a reliable, secure, and maintainable system
- Optimize the AI pipeline to improve performance in challenging audio environments, robustly handle ambiguous customer requests, and rapidly scale to new menus
- Push the boundaries of LLMs and voice AI technology to solve one of the technology industry’s historically elusive challenges
- Run experiments to validate the product impact of new functionality

It’s Important to Us That You Have
- A Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
- 4+ years of hands-on experience developing, implementing, and maintaining backend infrastructure in production environments
- A proven track record of building and deploying scalable, high-performance backend systems
- Experience with cloud-based infrastructure and deployment technologies, such as AWS, and with API design
- Excellent problem-solving skills and the ability to adapt to a rapidly evolving startup environment
- Strong communication skills and the ability to work collaboratively with a team
- Willingness to problem-solve across our entire product
- The ability to proactively recognize problems and identify solutions to them
- A results-oriented mindset and care about the impact of your work

It Would Be Great if You Had
- Experience working with audio
- Experience taking a backend system from zero to one
- Experience building and maintaining a system that integrates with many third-party APIs, particularly point-of-sale systems
- Experience with Python, Kotlin, or Java
- Experience with containerization tools such as Docker and Kubernetes
- Experience working in or alongside AI/ML

Benefits & Perks*
Holistic health:
- Medical, dental, vision benefits
- Annual wellness stipend
- Mental health support
- Life, STD, LTD income insurance plans

Work/life blend:
- Unlimited PTO
- Generous paid parental leave
- Flexible schedule
- 12 paid US company holidays
- Quarterly personal productivity stipend
- One-time stipend for home office upgrades
- 401(k) plan with company match
- Tax savings programs

Continuous learning:
- Learning/education stipend
- Participation in talks and conferences
- Employee Resource Groups
- AI enablement workshops/sessions

*For candidates outside of the US, we use an Employer of Record model in many countries, which means benefits are administered locally and governed by country-specific regulations. Because of this, benefits will differ by region; in some cases international employees receive benefits US employees do not, and vice versa.
As we scale, we will continue to evaluate where we can create more alignment, but a 1:1 global benefits structure is not always legally or operationally possible.

Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC, and NVIDIA, Deepgram has raised over $215M in total funding. If you’re looking to work on cutting-edge technology and make a significant impact in the AI industry, we’d love to hear from you!

Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together, and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate.

We are happy to provide accommodations for applicants who need them.

Software Engineer, Product (New Grad)

Sierra
United States
Full-time
Remote: No
About us
At Sierra, we’re creating a platform to help businesses build better, more human customer experiences with AI. We are primarily an in-person company based in San Francisco, with growing offices in Atlanta, New York, London, Paris, Singapore, and Japan.

We are guided by a set of values that are at the core of our actions and define our culture: Trust, Customer Obsession, Craftsmanship, Intensity, and Family. These values are the foundation of our work, and we are committed to upholding them in everything we do.

Our co-founders are Bret Taylor and Clay Bavor. Bret currently serves as Board Chair of OpenAI. Previously, he was co-CEO of Salesforce (which had acquired the company he founded, Quip) and CTO of Facebook. Bret was also one of Google’s earliest product managers and co-creator of Google Maps. Before founding Sierra, Clay spent 18 years at Google, where he most recently led Google Labs. Earlier, he started and led Google’s AR/VR effort, Project Starline, and Google Lens. Before that, Clay led the product and design teams for Google Workspace.

As a New Grad Software Engineer at Sierra, you’ll join a small, highly collaborative engineering organization building the core platform behind AI-powered customer experiences. You’ll work alongside experienced engineers to design, build, and improve production systems that power intelligent agents used by real customers every day.

This role is designed for early-career engineers who are eager to learn, curious about AI systems, and excited to grow by shipping real product. You’ll start with well-scoped projects, receive close mentorship, and steadily take on more ownership as you develop your technical skills and product intuition.

New Grad engineers are placed on teams based on business needs, your interests, and your strengths, with opportunities to learn across different parts of the system over time.
What You’ll Do
- Build and ship production features across backend services, APIs, and user-facing components
- Collaborate with senior engineers to design, implement, and iterate on systems that support AI agents and customer workflows
- Write clean, well-tested code and participate in code reviews to learn best practices and improve system quality
- Help improve the reliability, performance, and scalability of existing systems through debugging, testing, and incremental improvements
- Partner with product managers, designers, and other engineers to understand customer problems and translate them into technical solutions
- Learn how large-scale, real-world systems are built, monitored, and evolved in production

Team Placement
New Grad engineers may join one of several teams, including but not limited to:
- Agent Architecture: help build the core foundations of how AI agents reason, retrieve information, and are evaluated
- Insights & Data: work on tools and systems that analyze conversational data, monitor agent performance, and enable experimentation
- Voice & Real-Time Systems: build bleeding-edge voice AI systems by improving transcription and synthesis quality in cascaded models, and by bringing voice-to-voice models to life

Team placement is flexible and may evolve as the company grows and your interests develop.

Our values
Trust: We build trust with our customers with our accountability, empathy, quality, and responsiveness. We build trust in AI by making it more accessible, safe, and useful. We build trust with each other by showing up for each other professionally and personally, creating an environment that enables all of us to do our best work.

Customer Obsession: We deeply understand our customers’ business goals and relentlessly focus on driving outcomes, not just technical milestones. Everyone at the company knows and spends time with our customers.
When our customer is having an issue, we drop everything and fix it.

Craftsmanship: We get the details right, from the words on the page to the system architecture. We have good taste. When we notice something isn’t right, we take the time to fix it. We are proud of the products we produce. We continuously self-reflect to continuously self-improve.

Intensity: We know we don’t have the luxury of patience. We play to win. We care about our product being the best, and when it isn’t, we fix it. When we fail, we talk about it openly and without blame so we succeed the next time.

Family: We know that balance and intensity are compatible, and we model it in our actions and processes. We are the best technology company for parents. We support and respect each other and celebrate each other’s personal and professional achievements.

What we offer
We want our benefits to reflect our values and offer the following to full-time employees:
- Flexible (unlimited) paid time off
- Medical, dental, and vision benefits for you and your family
- Life insurance and disability benefits
- Retirement plan (e.g., 401(k), pension) with Sierra match
- Parental leave
- Fertility and family-building benefits through Carrot
- Lunch, as well as delicious snacks and coffee to keep you energized
- Discretionary benefit stipend giving people the ability to spend where it matters most
- Free alphorn lessons

These benefits are further detailed in Sierra’s policies, may vary by region, and are subject to change at any time, consistent with the terms of any applicable compensation or benefits plans. Eligible full-time employees can participate in Sierra’s equity plans subject to the terms of the applicable plans and policies.

Be you, with us
We’re working to bring the transformative power of AI to every organization in the world. To do so, it is important to us that the diversity of our employees represents the diversity of our customers.
We believe that our work and culture are better when we encourage, support, and respect different skills and experiences represented within our team. We encourage you to apply even if your experience doesn't precisely match the job description. We strive to evaluate all applicants consistently without regard to race, color, religion, gender, national origin, age, disability, veteran status, pregnancy, gender expression or identity, sexual orientation, citizenship, or any other legally protected class.

Lead Product Designer

Together AI
$200,000 – $280,000
Full-time
Remote: No
About the Role
The Turbo team sits at the intersection of efficient inference (algorithms, architectures, engines) and post-training/RL systems. We build and operate the systems behind Together’s API, including high-performance inference and RL/post-training engines that run at production scale.

Our mandate is to push the frontier of efficient inference and RL-driven training: making models dramatically faster and cheaper to run, while improving their capabilities through RL-based post-training (e.g., GRPO-style objectives). This work lives at the interface of algorithms and systems: asynchronous RL, rollout collection, scheduling, and batching all interact with engine design, creating many knobs to tune across the RL algorithm, training loop, and inference stack. Much of the job is modifying production inference systems (for example, SGLang- or vLLM-style serving stacks and speculative decoding systems such as ATLAS), grounded in a strong understanding of post-training and inference theory rather than purely theoretical algorithm design.

You’ll work across the stack, from RL algorithms and training engines to kernels and serving systems, to build and improve frontier models via RL pipelines. People on this team are often spiky: some are more RL-first, some are more systems-first. Depth in one of these areas, plus the appetite to collaborate across them (and grow toward more full-stack ownership over time), is ideal.

Requirements
We don’t expect anyone to check every box below. People on this team typically have deep expertise in one or more areas and enough breadth (or interest) to work effectively across the stack. The closer you are to full-stack (inference + post-training/RL + systems), the stronger the fit, but being spiky in one area and eager to grow is absolutely okay.
You might be a good fit if you:

Have strong expertise in at least one of the following, and are excited to collaborate across (and grow into) the others:
- Systems-first profile: large-scale inference systems (e.g., SGLang, vLLM, FasterTransformer, TensorRT, custom engines, or similar), GPU performance, distributed serving
- RL-first profile: RL/post-training for LLMs or large models (e.g., GRPO, RLHF/RLAIF, DPO-like methods, reward modeling), and using these to train or fine-tune real models
- Model architecture design for Transformers or other large neural nets
- Distributed systems/high-performance computing for ML

Are comfortable working from algorithms to engines:
- Strong coding ability in Python
- Experience profiling and optimizing performance across GPU, networking, and memory layers
- Able to take a new sampling method, scheduler, or RL update and turn it into a production-grade implementation in the engine and/or training stack

Have a solid research foundation in your area(s) of depth:
- Track record of impactful work in ML systems, RL, or large-scale model training (papers, open-source projects, or production systems)
- Can read new RL/post-training papers, understand their implications for the stack, and design minimal, correct changes in the right layer (training engine vs. inference engine vs. data/API)

Operate well as a full-stack problem solver:
- You naturally ask: “Where in the stack is this really bottlenecked?”
- You enjoy collaborating with infra, research, and product teams, and you care about both scientific quality and user-visible wins

Minimum qualifications
- 3+ years of experience working on ML systems, large-scale model training, inference, or adjacent areas (or equivalent experience via research/open source)
- Advanced degree in Computer Science, EE, or a related field, or equivalent practical experience
- Demonstrated experience owning complex technical projects end-to-end
If you’re excited about the role and strong in some of these areas, we encourage you to apply even if you don’t meet every single requirement.

Responsibilities

Advance inference efficiency end-to-end:
- Design and prototype algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference
- Implement and maintain changes in high-performance inference engines (e.g., SGLang- or vLLM-style systems and Together’s inference stack), including kernel backends, speculative decoding (e.g., ATLAS), quantization, etc.
- Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost

Unify inference with RL/post-training:
- Design and operate RL and post-training pipelines (e.g., RLHF, RLAIF, GRPO, DPO-style methods, reward modeling) where 90+% of the cost is inference, jointly optimizing algorithms and systems
- Make RL and post-training workloads more efficient with inference-aware training loops, for example async RL rollouts, speculative decoding, and other techniques that make large-scale rollout collection and evaluation cheaper
- Use these pipelines to train, evaluate, and iterate on frontier models on top of our inference stack
- Co-design algorithms and infrastructure so that objectives, rollout collection, and evaluation are tightly coupled to efficient inference, and quickly identify bottlenecks across the training engine, inference engine, data pipeline, and user-facing layers
- Run ablations and scale-up experiments to understand trade-offs between model quality, latency, throughput, and cost, and feed these insights back into model, RL, and system design

Own critical systems at production scale:
- Profile, debug, and optimize inference and post-training services under real production workloads
- Drive roadmap items that require real engine modification, changing kernels, memory layouts, scheduling logic, and APIs as needed
- Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously

Provide technical leadership (Staff level):
- Set technical direction for cross-team efforts at the intersection of inference, RL, and post-training
- Mentor other engineers and researchers on full-stack ML systems work and performance engineering

About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.

Compensation
We offer competitive compensation, startup equity, health insurance, and other competitive benefits. The US base salary range for this full-time position is $200,000 - $280,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more. Please see our privacy policy at https://www.together.ai/privacy
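As a rough illustration of the speculative decoding idea this posting mentions (a sketch only, not Together’s ATLAS system), here is a toy greedy variant with made-up stand-in “models”: a cheap draft model proposes a block of tokens, and the target model verifies them, so that the output always matches plain greedy decoding with the target alone.

```python
def speculative_decode(target, draft, prompt, k=4, max_new=8):
    """Greedy speculative decoding sketch: the cheap draft model proposes
    k tokens; the target model checks them and keeps the longest agreeing
    prefix; at the first disagreement the target's own token is substituted.
    Output is identical to greedy decoding with the target model alone."""
    seq = list(prompt)
    while len(seq) - len(prompt) < max_new:
        # Draft proposes k tokens autoregressively (cheap).
        proposal = []
        for _ in range(k):
            proposal.append(draft(seq + proposal))
        # Target verifies; accept tokens until the first disagreement.
        accepted = []
        for tok in proposal:
            expected = target(seq + accepted)  # one check per position
            if tok != expected:
                accepted.append(expected)  # replace the first wrong token
                break
            accepted.append(tok)
        seq.extend(accepted)
    return seq[:len(prompt) + max_new]

# Hypothetical toy "models": next token = (last token + step) % 10.
target_model = lambda seq: (seq[-1] + 1) % 10
good_draft = lambda seq: (seq[-1] + 1) % 10   # always agrees: ~k tokens/step
bad_draft = lambda seq: (seq[-1] + 2) % 10    # always wrong: 1 token/step

print(speculative_decode(target_model, good_draft, [0]))
```

In a real engine the per-position target checks collapse into a single batched forward pass over the whole proposed block, which is where the speedup comes from; the acceptance logic above only illustrates the correctness property (a bad draft costs throughput, never output quality).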

Senior Product Engineer, Product Platform

Replit
$225,000 – $320,000
United States
Full-time
Remote: No
Replit is the agentic software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is democratizing software development by removing traditional barriers to application creation.

About The Role
As a Staff Product Engineer on Replit’s Product Platform team, you’ll build the shared product systems and primitives that power Replit’s core experiences, enabling product teams to ship faster and helping users (and agents) build better software.

The nature of software development has changed, and Replit is at the forefront of that revolution. The Product Platform team builds and scales the primitives that Replit Agent uses to empower over 40 million users to build anything they want.

This role is ideal for a senior, platform-minded web engineer who has shipped at scale, thrives in high-ownership environments, and can define what “good” looks like across reliability, performance, and developer experience.

We’ve hit a significant scale and have escape velocity.
A number of our systems need to be scaled and rebuilt, so you’ll get to take them from 0 → 1 → huge scale quickly. You’ll work closely with product engineers, platform engineers, designers, product managers, and go-to-market partners to deliver foundational capabilities that unlock entire categories of product development. That’s why we’re looking for product engineers with strong product sense and prior distributed systems experience who are excited about building platform primitives at scale!

What you’ll do
- Lead major cross-team platform initiatives, taking foundational systems from 0 → 1 and scaling them to support millions of users
- Build shared, extensible agent primitives that Replit Agent can reuse safely and consistently (meta-programming)
- Identify the highest-leverage technical bottlenecks (performance, reliability, correctness, abuse, observability), then design and ship solutions for our scale
- Raise the bar for engineering excellence through architecture reviews, code quality, reliability standards, and mentorship
- Partner across teams to improve platform adoption, ergonomics, and velocity, turning platform work into measurable outcomes

Core areas you’ll work on
- Primitives that agents and Replit users depend on to build applications, e.g. the Connectors framework, content/configuration primitives (CMS + product surfaces), and data/analytics/events + experimentation primitives
- Replit Agent as a principal in third-party systems: Agent can be fully used within ChatGPT and publishes straight to the iOS App Store, and we’ll be doing loads of that
- Capabilities that platform product teams rely on to ship consistently, e.g. the Identity & Access platform (SSO/SCIM), the localization/i18n platform, and the notifications & communications platform
- Core web platform infrastructure, e.g. performance & page-load optimization, observability and debugging workflows, and caching strategy and reliability

Required skills and experience
- 5+ years of professional software engineering experience
- Understanding of the full agentic software development stack: helping coding agents build, test, and review correct code
- Strong track record leading complex projects with cross-functional stakeholders
- Experience building and operating platform systems that other teams depend on
- Experience operating and scaling systems in production (reliability, performance, incidents, on-call readiness)
- Strong product judgment: you can balance UX, speed, correctness, and long-term maintainability
- Comfort working in modern web stacks such as TypeScript, React, Node.js, and Postgres

Bonus points
- Experience working in environments with a high engineering bar (or a fast-growing startup where you shipped fast without burning out quality)
- Experience with platform and distributed systems patterns (queues, workflows, caching, rate limiting, async processing)
- Familiarity with systems like Redis and Postgres, workflow engines (e.g. Temporal), auth and enterprise identity (SSO, SCIM), abuse protection and edge systems (Cloudflare), cloud platforms (GCP), observability (Datadog, Sentry), localization, and experimentation and event pipelines (Statsig, Segment, analytics/event tracking)
- Excitement about the future of programming, including agent workflows and developer tools
- Exposure to agent ecosystems (e.g. MCP-style patterns, tool integrations, structured automation)

Example Projects You’ll Work On
- Connectors platform for agents: ship a secure connector framework (OAuth/permissions/data access) so agents can integrate with Slack/Notion/GitHub/etc.
- Agent-facing external surfaces: own high-quality embedded experiences (desktop/extension/embeds) that let agents act in-context across tools
- Safety + abuse controls for agent actions: design permissioning, rate limits, and policy enforcement so agents can operate safely at scale
- Real-time notifications platform: design in-app/email surfaces and build reliable delivery/fanout, preferences, and observability
- Core web platform performance + caching: improve latency and reliability via caching strategy (Redis), profiling, and safe fallbacks
- Events + experimentation primitives: standardize tracking/metrics and feature flags/rollouts so teams can ship safely and measure impact

This is a full-time role held from our Foster City, CA office, with an in-office requirement of Monday, Wednesday, and Friday.

Full-Time Employee Benefits Include:
💰 Competitive salary & equity
💹 401(k) program with a 4% match
⚕️ Health, dental, vision, and life insurance
🩼 Short-term and long-term disability
🚼 Paid parental, medical, and caregiver leave
🚗 Commuter benefits
📱 Monthly wellness stipend
🧑‍💻 Autonomous work environment
🖥 In-office setup reimbursement
🏝 Flexible Time Off (FTO) + holidays
🚀 Quarterly team gatherings
☕ In-office amenities

Want to learn more about what we are up to?
- Meet the Replit Agent
- Replit: Make an app for that
- Replit Blog
- Amjad TED Talk
- Interviewing + Culture at Replit
- Operating Principles
- Reasons not to work at Replit

To achieve our mission of making programming more accessible around the world, we need our team to be representative of the world. We welcome your unique perspective and experiences in shaping this product. We encourage people from all kinds of backgrounds to apply, including and especially candidates from underrepresented and non-traditional backgrounds.

System Software Engineer

HP IQ
$200,000 – $340,000
United States
Full-time
Remote: No
Who We Are
HP IQ is HP’s new AI innovation lab. Combining startup agility with HP’s global scale, we’re building intelligent technologies that redefine how the world works, creates, and collaborates. We’re assembling a diverse, world-class team of engineers, designers, researchers, and product minds, focused on creating an intelligent ecosystem across HP’s portfolio. Together, we’re developing intuitive, adaptive solutions that spark creativity, boost productivity, and make collaboration seamless. We create breakthrough solutions that make complex tasks feel effortless, teamwork more natural, and ideas more impactful, always with a human-centric mindset. By embedding AI advancements into every HP product and service, we’re expanding what’s possible for individuals, organisations, and the future of work. Join us as we reinvent work, so people everywhere can do their best work.

About The Role
As a modeling lead for the AI lab, you will be responsible for defining the technical roadmap for the team and supporting modeling needs across the organization. You’ll be expected to define and establish best practices for managing the model lifecycle, from data acquisition to deployment, and to build the tools and platforms that facilitate building and deploying ML models on different devices with specific constraints. You’ll work closely with different teams across the organization to support their modeling needs: translating high-level user needs into specific modeling requirements, creating plans, and technically driving the team to execute on them.

What You Might Do
- Define and drive the AI Lab technical strategy in support of HP’s AI roadmap, owning decisions across models, runtimes, inference engines, and optimization
- Lead on-device AI strategy, including model compression, quantization, distillation, and hardware-aware optimization across CPUs, GPUs, NPUs, and TPUs
Architect and evolve tooling and platforms that support the full model lifecycle from data and training through evaluation, deployment, and monitoring. Establish standards and evaluation frameworks to ensure high quality, safe, and performant Gen AI models in production. Partner closely with cross functional leaders and teams to align technical direction with product and hardware strategy. Mentor a small group of senior engineers while operating as a hands on technical leader who sets direction and moves quickly. Essential Qualifications 12+ years of experience in AI modeling, applied Machine Learning, or large scale ML systems, with demonstrated ownership of technical strategy. Deep expertise in training and fine tuning LLMs or LMMs, and applying state of the art generative AI techniques. Strong systems background with experience in inference engines, model runtimes, and performance optimization on heterogeneous hardware. Proven track record of delivering optimized, production grade AI solutions, including on device deployments. Excellent software engineering skills in Python and or C++, with experience building scalable ML systems. Strong communicator and technical leader with experience influencing across teams; background at companies such as Nvidia, Google, Intel, or Qualcomm is a plus. Salary Range:  $200,000 - $340,000Compensation & Benefits (Full-Time Employees) The salary range for this role is listed above. Final salary offered is based upon multiple factors including individual job-related qualifications, education, experience, knowledge and skills. 
At HP IQ, we offer a competitive and comprehensive benefits package, including: Health insurance Dental insurance Vision insurance Long term/short term disability insurance Employee assistance program Flexible spending account Life insurance Generous time off policies, including;  4-12 weeks fully paid parental leave based on tenure 11 paid holidays Additional flexible paid vacation and sick leave (US benefits overview) Why HP IQ? HP IQ is HP’s new AI innovation lab, building the intelligence to empower humanity—reimagining how we work, create, and connect to shape the future of work. Innovative Work Help shape the future of intelligent computing and workplace transformation. Autonomy and Agility Work with the speed and focus of a startup, backed by HP’s scale. Meaningful Impact Build AI-powered solutions that help people and organisations thrive. Flexible Work Environment Freedom and flexibility to do your best work. Forward-Thinking Culture We learn fast, stay future-focused, and imagine what comes next—together. Equal Opportunity Employer (EEO) Statement HP, Inc. provides equal employment opportunity to all employees and prospective employees, without regard to race, color, religion, sex, national origin, ancestry, citizenship, sexual orientation, age, disability, or status as a protected veteran, marital status, familial status, physical or mental disability, medical condition, pregnancy, genetic predisposition or carrier status, uniformed service status, political affiliation or any other characteristic protected by applicable national, federal, state, and local law(s). Please be assured that you will not be subject to any adverse treatment if you choose to disclose the information requested. This information is provided voluntarily. The information obtained will be kept in strict confidence. 
If you’d like more information about HP’s EEO Policy or your EEO rights as an applicant under the law, please click here: Equal Employment Opportunity is the Law Equal Employment Opportunity is the Law – Supplement

AI Tutor - Software Engineer Specialist

X AI
$45 – $100 / hour
United States
Full-time
Remote: false
About xAI
xAI’s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity. We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company’s mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important. All employees are expected to have strong communication skills and to be able to concisely and accurately share knowledge with their teammates.

About the Role
As an Accounting Expert, you will be instrumental in enhancing the capabilities of our cutting-edge technologies by providing high-quality input and labels using specialized software. Your role involves collaborating closely with our technical team to support the training of new AI tasks, ensuring the implementation of innovative initiatives. You’ll contribute to refining annotation tools and selecting complex problems from corporate accounting domains, with a focus on financial reporting, consolidation, internal controls, and GAAP compliance, where your expertise can drive significant improvements in model performance. This position demands a dynamic approach to learning and adapting in a fast-paced environment, where your ability to interpret and execute tasks based on evolving instructions is crucial.

The AI Tutor’s Role in Advancing xAI’s Mission
As an AI Tutor, you will play an essential role in advancing xAI’s mission by supporting the training and refinement of xAI’s AI models. AI Tutors teach our AI models how people interact and react, as well as how people approach issues and discussions in corporate accounting. To accomplish this, AI Tutors actively participate in gathering or providing data, such as text, voice, and video data, sometimes providing annotations, recording audio, or participating in video sessions. We seek individuals who are comfortable and eager to engage in these activities as a fundamental part of the role, ensuring strong alignment with xAI’s goals and objectives.

Scope
An AI Tutor will provide services that include labeling and annotating data in text, voice, and video formats to support AI model training. At times, this may involve recording audio or video sessions, and tutors are expected to be comfortable with these tasks, as they are fundamental to the role. Such data is a job requirement to advance xAI’s mission, and AI Tutors acknowledge that all work is done for hire and owned by xAI.

Responsibilities
- Use proprietary software applications to provide input/labels on defined projects.
- Support and ensure the delivery of high-quality curated data.
- Play a pivotal role in supporting and contributing to the training of new tasks, working closely with the technical staff to ensure the successful development and implementation of cutting-edge initiatives and technologies.
- Interact with the technical staff to help improve the design of efficient annotation tools.
- Choose problems from corporate accounting fields that align with your expertise, providing rigorous solutions and model critiques where you can confidently provide detailed solutions and evaluate model responses.
- Regularly interpret, analyze, and execute tasks based on given instructions.

Key Qualifications
- Must have 3+ years of Big 4 public accounting experience (audit/assurance) on corporate or SEC clients, or an equivalent senior corporate accounting role (e.g., Controller, Assistant Controller, or Technical Accounting Manager at a public company or large private enterprise with complex GAAP reporting).
- Must possess a Master’s or PhD in Accounting (corporate focus), or equivalent as a licensed CPA.
- Proficiency in reading and writing both informal and professional English.
- Strong ability to navigate corporate accounting information resources, databases, and online resources (e.g., FASB codification, SEC EDGAR, 10-K/10-Q filings, ERP systems).
- Outstanding communication, interpersonal, analytical, and organizational capabilities.
- Solid reading comprehension skills combined with the capacity to exercise autonomous judgment even when presented with limited data or material.
- Strong passion for and commitment to technological advancement and innovation in corporate accounting.

Preferred Qualifications
- 5+ years at a Big 4 firm or in a senior corporate controllership role, with direct involvement in SEC reporting, SOX 404, or complex consolidations.
- Experience drafting or reviewing 10-K/10-Q footnotes, MD&A, or technical accounting memos.
- At least one publication in a reputable accounting journal or outlet.
- Teaching experience as a professor.

Location & Other Expectations
- This position is based in Palo Alto, CA, or fully remote.
- The Palo Alto option is an in-office role requiring 5 days per week; remote positions require strong self-motivation.
- If you are based in the US, please note we are unable to hire in the states of Wyoming and Illinois at this time.
- We are unable to provide visa sponsorship.
- Team members are expected to work 9:00am–5:30pm PST for the first two weeks of training and 9:00am–5:30pm in their own time zone thereafter.
- For those who will be working from a personal device, please note your computer must be a Chromebook, a Mac with macOS 11.0 or later, or a PC with Windows 10 or later.

Compensation
$45/hour – $100/hour. The posted pay range is intended for U.S.-based candidates and depends on factors including relevant experience, skills, education, geographic location, and qualifications. For international candidates, our recruiting team can provide an estimated pay range for your location.

Benefits
Hourly pay is just one part of our total rewards package at xAI. Specific benefits vary by country; depending on your country of residence, you may have access to medical benefits. We do not offer benefits for part-time roles.

xAI is an equal opportunity employer. For details on data processing, view our Recruitment Privacy Notice.

System Architect

Harmattan AI
France
Full-time
Remote: false
About Us
Harmattan AI is a next-generation defense prime building autonomous and scalable defense systems. Following the close of a $200M Series B valuing the company at $1.4 billion, we are expanding our teams and capabilities to deliver mission-critical systems to allied forces. Our work is guided by clear values: building technologies with real-world impact, pursuing excellence in everything we do, setting ambitious goals, and taking on the hardest technical challenges. We operate in a demanding environment where rigor, ownership, and execution are expected.

About the Role
As a System Architect, you will own the end-to-end architecture, system definition, and strategic implementation for our entire portfolio of robotic systems, from long-term vision to field deployment. You will collaborate closely with executive leadership and technical leads, and form a critical partnership with the Product Manager to ensure efficiency. This pivotal role drives strategic programs such as anti-drone warfare, ISR and strike systems, equipment fusion (especially aeronautics), and C2 and AI.

Responsibilities
- System-of-Systems Design & Strategy: Translate complex strategic goals into global, system-of-systems designs, defining and championing the overall system architecture strategy across the enterprise.
- Architectural Sizing & Verification: Ensure all systems meet defined needs and are correctly sized through rigorous verification of scope, complex system-of-systems simulations, and precise system sizing, guiding major technical investments.
- Coordination and Technical Leadership: Coordinate large multidisciplinary engineering organizations, providing overarching technical leadership across cross-functional design efforts (mechanical, electrical, software, GNC, ML, product) and ensuring long-term performance, robustness, and strategic reliability across the enterprise.
- Specification & Integration Governance: Govern system integration standards and validation processes, contributing to the complex process of specification management by ensuring all architectural prerequisites are met and driving multi-system architecture reviews for enterprise design consistency.
- Continuous Improvement: Implement and institutionalize processes to enhance requirements traceability, system documentation standards, and validation workflows across the entire engineering organization.

Candidate Requirements
- 10+ years of experience in systems engineering or architecture, with significant time in a senior or lead role.
- MSc in Engineering (any field); PhD is a plus.
- Domain experience: experience in the defense sector is mandatory, and drone experience is a big plus.
- Strong knowledge in one of these specializations:
  - C2 (Command and Control) & AI
  - ISR (Intelligence, Surveillance, and Reconnaissance)
  - Strike UAVs, loitering munitions
  - Anti-drone warfare
  - UAV equipment & fusion
- Expert in defining and leading the full lifecycle of complex, multi-system and system-of-systems architectures.
- Strategic, analytical, and mission-driven, with expertise in strategic requirements engineering and high-level trade-off analysis.
- Deep engineering understanding across mechanical, electrical, embedded, and enterprise software systems.
- Experience coordinating large, multi-disciplinary engineering organizations and managing complex technical and strategic interfaces.
- Exceptional communication, presentation, and technical documentation skills for both technical and executive audiences.

We look forward to hearing how you can help shape the future of autonomous defense systems at Harmattan AI.

Senior Performance Engineer - Pretraining

Aleph Alpha
Germany
Full-time
Remote: false
Our Mission
Aleph Alpha is one of the few companies in Europe doing serious foundation model pre-training. Our customers, in finance, manufacturing, and public administration, need models that understand German, meet European regulatory requirements, and work reliably in high-stakes settings. We’re building that in Heidelberg. We are hiring a Performance Engineer to grow our pre-training efficiency team. If you are excited about making models fast, this is the role for you!

Team Culture
At Aleph Alpha, we foster a culture built on ownership, autonomy, and empowerment. Teams and individual contributors are trusted to take responsibility for their work and drive meaningful impact. We maintain a flat organizational structure with efficient, supportive management that enables quick decision-making, open communication, and a strong sense of shared purpose.

About the Role
You will engineer the systems required to train foundation models at scale. Your objective is to maximize hardware utilization and training throughput on our large-scale GPU clusters (thousands of NVIDIA Blackwell GPUs). You will work at the intersection of deep learning frameworks, distributed systems, and GPU microarchitecture, eliminating bottlenecks from the Python layer down to the GPU kernel. This role is for Aleph Alpha Research.

Your Responsibilities
- End-to-End Optimization: Profile training loops using PyTorch Profiler, Nsight Systems, and Nsight Compute to identify system- and kernel-level bottlenecks and maximize model throughput.
- Distributed Strategy and Topology: Configure and tune composite parallelism strategies (e.g. TP, DP, HSDP/FSDP, EP), optimizing load balance, minimizing critical-path bottlenecks, and managing communication-to-computation trade-offs for large-scale LLM training.
- Hardware-Aware Modeling: Partner with AI researchers to define model architectures for hardware efficiency without compromising convergence.

You Could Be a Great Fit If You
- Are proficient in Python and the PyTorch library.
- Have a strong engineering background in parallel and/or distributed systems with a proven track record of excellence.
- Have hands-on experience with modern machine learning techniques (especially large language models and their life cycle).
- Deeply understand the CUDA programming model.
- Have experience in distributed programming with APIs like NCCL or MPI.
- Have experience analysing profiling traces with tools such as PyTorch Profiler and NVIDIA Nsight.

Please note this role requires regular on-site collaboration in Heidelberg as a member of the Training Efficiency Team.

Strong Candidates May Also Have
- Contributions to modern distributed training frameworks (e.g., TorchTitan, Megatron-LM, DeepSpeed).
- Familiarity with low-precision training formats (MXFP4, MXFP8) and their impact on numerical stability and throughput.
- A deep understanding of NCCL communication primitives, NVSHMEM, or CUDA IPC and their performance.
- A proven track record of implementing and optimising modern transformer-based model training.
- A proven track record working on the NVIDIA Blackwell architecture.

Compensation and Benefits
- Competitive salary and equity package
- 30 days of paid vacation
- Access to a variety of fitness & wellness offerings via Wellhub
- Mental health support through nilo.health
- JobRad® bike lease
- Substantially subsidized company pension plan for your future security
- Subsidized Germany-wide transportation ticket
- Budget for additional technical equipment
- Flexible working hours for better work-life balance and a hybrid working model

Data Scientist, Preparedness

OpenAI
$347,000 – $400,000
United States
Full-time
Remote: false
About the Team
The Preparedness team is an important part of the Safety Systems org at OpenAI and is guided by OpenAI’s Preparedness Framework. Frontier AI models have the potential to benefit all of humanity, but they also pose increasingly severe risks. To ensure that AI promotes positive change, the Preparedness team helps us prepare for the development of increasingly capable frontier AI models. This team is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models.

The mission of the Preparedness team is to:
- Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards misuse risks whose impact could be catastrophic to our society.
- Ensure we have concrete procedures, infrastructure, and partnerships to mitigate these risks and to safely handle the development of powerful AI systems.

Preparedness tightly connects capability assessment, evaluations, internal red teaming, and mitigations for frontier models, as well as overall coordination on AGI preparedness. This is fast-paced, exciting work with far-reaching importance for the company and for society.

About the Role
We’re hiring a Data Scientist to help build, evaluate, and continuously improve mitigations that prevent extreme harms from AI systems. This role is for an experienced, highly autonomous individual contributor who can take ambiguous problem statements, structure rigorous analyses, and translate findings into actionable product and policy changes. This position goes beyond “running evals.” You’ll help create mitigation intelligence and monitoring systems that enable OpenAI to detect issues early, measure effectiveness over time, and reduce both over-blocking (unnecessary friction) and under-blocking (missed harm).

What You’ll Do
- Evaluate and improve mitigation systems, including classifiers and detection pipelines across domains (e.g., biosecurity, cybersecurity, and emerging risk areas).
- Diagnose false positives and false negatives with deep error analysis, root-cause investigation, and clear recommendations for mitigation adjustments.
- Build monitoring and measurement frameworks to track mitigation effectiveness over time and across user segments and use cases.
- Identify trends in over-blocking vs. under-blocking, quantify customer impact, and propose prioritized interventions.
- Develop insights from customer feedback, complaints, and usage patterns to detect shifts in adversarial behavior and system failure modes.
- Expand risk monitoring into new areas, including cybersecurity threats and model loss-of-control or sabotage scenarios, in partnership with domain experts.
- Communicate results to technical and executive stakeholders with crisp narratives, decision-ready metrics, and clear tradeoffs.

You might thrive in this role if you are:
- An autonomous operator: you can take a problem statement and independently structure the analysis end-to-end.
- Strong at executive-ready communication: concise, clear, and outcome-oriented.
- Skilled at turning analysis into productable changes: you’re comfortable influencing across functions to drive mitigation improvements.

Qualifications
- Significant experience in data science or applied analytics in high-stakes domains (e.g., security, trust & safety, abuse prevention, fraud, platform integrity, or reliability).
- Strong foundations in experimentation, causal thinking, and/or observational inference; ability to design robust measurement under imperfect data.
- Fluency in SQL and Python (or equivalent) for analysis, modeling, and building monitoring workflows.
- Experience building metrics, dashboards, and operational monitoring that meaningfully change outcomes (not just reporting).
- Track record of driving cross-functional impact with engineering, product, and research partners.
- Cybersecurity data science experience (strong preference), including exposure to threat modeling, adversarial dynamics, abuse patterns, or security telemetry.
- Experience with classifier evaluation, calibration, thresholding, and error analysis at scale. Familiarity with detection systems in adversarial settings (e.g., evasion, distribution shift, feedback loops).
- Trust & safety experience is helpful, but not required.
- Genuine interest in AI safety, alignment, and catastrophic risk prevention.

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.OpenAI Global Applicant Privacy PolicyAt OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Software Engineer, Marketing Innovation

OpenAI
$230,000 – $385,000
United States
Full-time
Remote: false
About the Team
Marketing Innovation is a product engineering team embedded within OpenAI’s broader Demand Gen and Marketing organization. The team builds agentic, AI-native systems that directly drive revenue, pipeline, and marketing leverage. Our mandate is to own and scale autonomous, customer-facing and internal systems that operate at massive scale. This includes building autonomous, customer-facing systems that engage with prospects, generate pipeline and revenue, and shape the future of how marketing and revenue teams operate in an AI-native world. We focus on problems that exceed the limits of low-code tools and long-tail workflow automation. When a problem requires a native product experience, deep system integration, and high standards for precision, safety, and reliability, Marketing Innovation steps in.

About the Role
We’re looking for product-minded software engineers to join the Marketing Innovation team. As a Software Engineer on this team, you will build and own autonomous, customer-facing agentic systems that interface directly with enterprise customers, prospects, and revenue-critical workflows. You will partner closely with functional leaders across scaled revenue, demand gen, and marketing to understand desired outcomes, then translate those needs into production-grade systems. You will move quickly from ambiguous problem statements to working software in production. Your work will be measured not by output but by impact: revenue, enterprise pipeline, and marketing efficiency. This role is ideal for engineers who want ownership over real systems, enjoy building 0-to-1 products, and are excited to define new categories of AI-native products and workflows.

In this role, you will:
- Build autonomous and semi-autonomous customer-facing and internal agentic systems that directly drive revenue, pipeline, and marketing efficiency.
- Own end-to-end product execution, from early prototypes to reliable production systems with strong instrumentation and evals.
- Work across the full stack, including APIs, orchestration, data flows, frontend experiences, and deployment.
- Partner closely with marketing, demand gen, and enterprise sales stakeholders to define success metrics and functional requirements.
- Apply OpenAI models and tooling in novel ways, making informed tradeoffs between models, platforms, and architectures.
- Continuously iterate based on live usage, agent behavior, and performance data.

You’ll thrive in this role if you:
- Have 4+ years of experience as a software, product, or ML engineer working on user- or customer-facing systems.
- Have built, deployed, or operated complex systems in production, ideally including agents or automation at scale.
- Are comfortable working without a groomed backlog and enjoy framing problems, proposing solutions, and executing end-to-end.
- Are fluent in Python or JavaScript and comfortable building full-stack applications.
- Have hands-on experience with APIs, cloud infrastructure, orchestration, and production monitoring.
- Are curious about modern AI models and excited to experiment with how evolving capabilities change system design.
- Have familiarity with sales, marketing, or go-to-market workflows, or are eager to learn how revenue systems operate.
- Operate with a strong sense of ownership, move quickly, and care deeply about measurable business outcomes.
- Are collaborative, pragmatic, and motivated to build at the frontier of applied AI.

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.OpenAI Global Applicant Privacy PolicyAt OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

AceUp - Lead ML Engineer (Generative AI & LLM Focus)

Silver.dev
$66,000 – $120,000
Argentina
Full-time
Remote: false
About the Company
AceUp is evolving from a traditional SaaS platform into an AI-first leadership development engine. We are looking for a Full Stack Engineer who excels at bridging established core systems with cutting-edge AI services. You will be the primary developer responsible for bringing our new AI capabilities “to the glass.” While our ML team builds the intelligence in Python, you will build the experience. You will architect the React frontends that users interact with and the Ruby backend logic that orchestrates data between our core platform and our new AI microservices.

The Tech Stack
We are a GCP-native shop. You will be building directly within the Google Cloud ecosystem:
- GenAI & Compute: Vertex AI, Gemini Pro/Ultra, PaLM API, Cloud Functions
- Data & Vector: Firestore, BigQuery, Vertex AI Vector Search
- Orchestration: Cloud Run, Pub/Sub
- Frameworks: Python, LangChain/LangGraph

What You Will Do
- Architect Conversational Agents: Design and build stateful, context-aware conversational agents that can maintain long-running coherent dialogues, handling complex reasoning tasks rather than just single-turn Q&A.
- Build RAG Pipelines: Develop low-latency retrieval systems that ground LLM responses in proprietary data, ensuring high accuracy and minimizing hallucinations.
- Unstructured Data Intelligence: Lead the development of NLP pipelines to extract structured insights (semantic signals, sentiment, action items) from varied unstructured data sources (text, and eventually audio).
- Personalization Architecture: Implement advanced personalization layers that dynamically adapt model behavior and tone based on user history and context.
- LLMOps & Infrastructure: Own the deployment lifecycle of your models. You will be responsible for prompt architecture, evaluation frameworks, latency optimization, and cost management on Vertex AI.
- Technical Mentorship: Act as the technical “North Star” for our existing ML engineers. You will review code, set architectural standards, and guide technical decision-making without the overhead of people management.

Who You Are
- A “Product” Engineer: You care about the end-user experience. You don’t just optimize for accuracy; you optimize for utility, latency, and reliability in a production environment.
- A GCP Specialist: You are comfortable navigating the Google Cloud ecosystem and know which services to use to build scalable, secure AI solutions.
- A Hands-On Architect: You are looking for a role where you can code 70–80% of the time. You thrive in the IDE, not just in meetings.
- A Pragmatic Innovator: You stay up to date with the latest papers (LoRA, CoT, ReAct), but you know when to use a simple solution over a complex one to ship value faster.

Requirements
- Experience: 6+ years of professional engineering experience, with at least 3 years focused on ML/NLP and 1+ years specifically working with Large Language Models (LLMs) and GenAI.
- Technical Fluency: Expert in Python. Strong familiarity with modern AI frameworks (LangChain, LlamaIndex) and GCP services (Vertex AI, Firestore).
- Communication: Conversational English is required. You must be able to explain complex technical trade-offs to Product Managers and Executives.
- Education: B.S. or M.S. in Computer Science, Mathematics, or equivalent practical experience.

Nice to Have
- Experience with audio/speech processing pipelines (ASR, diarization).
- Background in EdTech, HR Tech, or psychology-based applications.

AceUp is proud to be an equal opportunity employer, seeking to create a welcoming and diverse environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.

Interview Process
1. Silver screening interview
2. First meeting with client: intro call + problem solving
3. Take-home challenge
4. Technical interview
5. Interview with Product
Member of Technical Staff, Design

Listen Labs
$150,000 – $300,000
United States
Full-time
Remote
TL;DR: We're seeing strong market demand and have an aggressive 6-month product roadmap, so we are expanding our engineering team. We're looking for someone highly technical (our current team includes 3 IOI medalists) who wants to build a product that is changing how companies make decisions. If you're excited about tackling complex problems end-to-end, we should talk.

Background

Listen Labs is an AI-powered research platform that helps teams uncover insights from customer interviews in hours, not months. We help customers analyze conversations, surface themes, and make faster, smarter product decisions.

Company highlights (entirely product-led):
- World-Class Team: Founded by serial entrepreneurs (previous AI exit), former co-founders, and talent from Jane Street, Twitter, Stripe, Affirm, Bain, Goldman Sachs, and many more Sequoia-backed startups (plus IOI/ICPC backgrounds).
- Hypergrowth: We're a 40-person team backed by Sequoia, growing from $0 to a $14M run-rate in under a year. We move fast, care deeply about craft, and love working with people who take ownership.
- Traction: Rapid growth across segments with enterprise wins at Google, Microsoft, Nestlé, and P&G.
- Performance: Industry-leading win rate driven by a highly differentiated product.
- Market Validation: Consistently winning customers across all segments, with six-figure-plus lands that lead to quick expansions.
- Viral Product: Interviews are shared with tens of thousands of viewers, fueling PLG, organic expansion, and daily inbound from Fortune 500s.

Technical Challenges
- McKinsey On Demand: Building a research agent. Hiring McKinsey is different from buying software: you don't just get tools, you get opinions, experience, and execution. We build Listen with that perspective: you have an AI agent on your side that knows everything about our platform and the best research practices. It helps you set up your project, conduct interviews with your goals in mind, and analyze thousands of responses.
- Database of Humanity: One of our key value props is the ability to find the people you are looking for (e.g., "power users of ChatGPT and Excel"). We are building a database of millions of humans. The more studies you do with Listen, the better we understand you. This enables finding people with unmatched accuracy and, in the long run, extrapolating what a person would say based on all their previous conversations: imagine answering questions for your best friend.
- Realtime Video Interviews: The next version of our AI interviewer will have emotional understanding of video and voice to read between the lines. The goal is to make our interviewers more nuanced and effective than the most senior user researchers. This involves computer vision, speech analysis, and real-time decision making.
- Distributed Information Mining: The most interesting information is not publicly accessible on the web; it lives only in people's minds. We are building an agent that, given a question, finds the right people to talk to, asks the right questions, and returns a report with actionable recommendations. That's what consultants charge millions for. The ceiling is incredibly high, and we are pushing the technical boundaries to help companies, from investment firms to tech companies, make the best decisions.
- Customer Preference Model & Synthetic Personas: We're bringing to life Jeff Bezos' vision of the customer being part of every decision. We're building the most profound understanding of customers, which will allow us to extrapolate to new questions via synthetic personas. This involves complex modeling of human behavior, preferences, and decision-making processes.

What We Look For
- You want to solve problems end-to-end: Our team is split vertically, so every engineer owns a part of the product and makes decisions across the LLM pipeline, infrastructure, backend, and UX (with help!).
- You have a high bar for quality: In a startup, moving fast is essential, but it's even more important to care about your output, obsess over details, and build a product that works, especially in the age of AI. Slop compounds!
- You're opinionated about user experience: You have strong product instincts and care deeply about how people interface with what you build.
- You are a clear thinker and communicator: We have only one meeting a week and expect you to communicate trade-offs, problems, and blockers directly.
- You are highly technical: Most of us started coding as young teenagers and nerd out on details from language design to compilers.
- You want to push LLM capabilities: We continually push the most advanced AI models to their limits and work with the foundation-model companies on their new releases.

Life at Listen Labs
- Competitive Compensation: We're backed by world-class investors, including Sequoia Capital, Conviction, AI Grant, and Pear VC, and offer competitive compensation packages with meaningful equity ownership. Over $30B in market cap has been created in adjacent industries (Medallia, AlphaSense, GLG, Ipsos, Kantar). Our Sequoia partner, Bryan Schreier, was the first investor in Qualtrics, a $12B company tackling problems similar to ours.
- Benefits that Support You: Comprehensive healthcare and dental coverage, flexible time off to recharge, and an environment that values balance and trust.
- Room to Grow: As an early member of the team, you'll have the opportunity to take on new responsibilities, shape processes from scratch, and grow alongside the company. We value people who want to stretch beyond their role and build something lasting.