⚠️ Sorry, this job is no longer available.

The AI job market moves fast. We keep up so you don't have to.

Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.

Edit filters

New AI Opportunities

Showing 6179  of 79 jobs
Tag
Harmattan AI.jpg

GNC Engineer

Harmattan AI
CH.svg
Switzerland
Full-time
Remote
false
About UsHarmattan AI is a next-generation defense prime building autonomous and scalable defense systems. Following the close of a $200M Series B, valuing the company at $1.4 billion, we are expanding our teams and capabilities to deliver mission-critical systems to allied forces.Our work is guided by clear values: building technologies with real-world impact, pursuing excellence in everything we do, setting ambitious goals, and taking on the hardest technical challenges. We operate in a demanding environment where rigor, ownership, and execution are expected.About the RoleAs a GNC Engineer on the UAV team, you’ll design and develop advanced guidance, navigation, and control (GNC) solutions. From integrating and fusing sensor data to developing control laws and flight trajectories, you'll take your work from simulation to embedded implementation—and ultimately to live flight testing.ResponsibilitiesDevelop state-of-the-art navigation and sensor fusion algorithms for UAVsDesign and implement GNC and flight control systemsBuild filtering and estimation strategies for robust and efficient flight performanceRun extensive simulations (Monte Carlo, SITL, HITL) and coverage testingAnalyze test flight data and refine algorithmic performanceSupport full-stack system integration: GNSS, INS/IMU, localization, and fusionMaintain and evolve a flight-proven flight computer across multiple UAV platformsRequirementsEducation : Engineering school or master's degree in computer science, engineering, or a related technical field3+ years of experience in aerospace GNC systemsStrong foundations in control theory, estimation, and sensor integrationProficient in Python and embedded C/C++Experience with simulation tools (e.g., Gazebo) and version control (Git)Familiarity with open-source autopilots (PX4, Ardupilot, Betaflight, etc.)Hands-on electronics and integration skills: debugging, soldering, harnessingBonus: FPV pilot hobbyistWe look forward to hearing how you can help shape the future of autonomous defense systems at Harmattan AI.
No items found.
Hidden link
Together AI.jpg

Customer Support Engineer (Inference), India

Together AI
$200,000 – $280,000
IN.svg
India
Full-time
Remote
false
About the Role The Turbo team sits at the intersection of efficient inference (algorithms, architectures, engines) and post‑training / RL systems. We build and operate the systems behind Together’s API, including high‑performance inference and RL/post‑training engines that can run at production scale. Our mandate is to push the frontier of efficient inference and RL‑driven training: making models dramatically faster and cheaper to run, while improving their capabilities through RL‑based post‑training (e.g., GRPO‑style objectives). This work lives at the interface of algorithms and systems: asynchronous RL, rollout collection, scheduling, and batching all interact with engine design, creating many knobs to tune across the RL algorithm, training loop, and inference stack. Much of the job is modifying production inference systems—for example, SGLang‑ or vLLM‑style serving stacks and speculative decoding systems such as ATLAS—grounded in a strong understanding of post‑training and inference theory, rather than purely theoretical algorithm design. You’ll work across the stack—from RL algorithms and training engines to kernels and serving systems—to build and improve frontier models via RL pipelines. People on this team are often spiky: some are more RL‑first, some are more systems‑first. Depth in one of these areas plus appetite to collaborate across (and grow toward more full‑stack ownership over time) is ideal. Requirements We don’t expect anyone to check every box below. People on this team typically have deep expertise in one or more areas and enough breadth (or interest) to work effectively across the stack. The closer you are to full‑stack (inference + post‑training/RL + systems), the stronger the fit—but being spiky in one area and eager to grow is absolutely okay. You might be a good fit if you: Have strong expertise in at least one of the following, and are excited to collaborate across (and grow into) the others: Systems‑first profile: Large‑scale inference systems (e.g., SGLang, vLLM, FasterTransformer, TensorRT, custom engines, or similar), GPU performance, distributed serving. RL‑first profile: RL / post‑training for LLMs or large models (e.g., GRPO, RLHF/RLAIF, DPO‑like methods, reward modeling), and using these to train or fine‑tune real models. Model architecture design for Transformers or other large neural nets. Distributed systems / high‑performance computing for ML. Are comfortable working from algorithms to engines: Strong coding ability in Python Experience profiling and optimizing performance across GPU, networking, and memory layers. Able to take a new sampling method, scheduler, or RL update and turn it into a production‑grade implementation in the engine and/or training stack. Have a solid research foundation in your area(s) of depth: Track record of impactful work in ML systems, RL, or large‑scale model training (papers, open‑source projects, or production systems). Can read new RL / post‑training papers, understand their implications on the stack, and design minimal, correct changes in the right layer (training engine vs. inference engine vs. data / API). Operate well as a full‑stack problem solver: You naturally ask: “Where in the stack is this really bottlenecked?” You enjoy collaborating with infra, research, and product teams, and you care about both scientific quality and user‑visible wins. Minimum qualifications 3+ years of experience working on ML systems, large‑scale model training, inference, or adjacent areas (or equivalent experience via research / open source). Advanced degree in Computer Science, EE, or a related field, or equivalent practical experience. Demonstrated experience owning complex technical projects end‑to‑end. If you’re excited about the role and strong in some of these areas, we encourage you to apply even if you don’t meet every single requirement. Responsibilities Advance inference efficiency end‑to‑end Design and prototype algorithms, architectures, and scheduling strategies for low‑latency, high‑throughput inference. Implement and maintain changes in high‑performance inference engines (e.g., SGLang‑ or vLLM‑style systems and Together’s inference stack), including kernel backends, speculative decoding (e.g., ATLAS), quantization, etc. Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost. Unify inference with RL / post‑training Design and operate RL and post‑training pipelines (e.g., RLHF, RLAIF, GRPO, DPO‑style methods, reward modeling) where 90+% of the cost is inference, jointly optimizing algorithms and systems. Make RL and post‑training workloads more efficient with inference‑aware training loops—for example, async RL rollouts, speculative decoding, and other techniques that make large‑scale rollout collection and evaluation cheaper. Use these pipelines to train, evaluate, and iterate on frontier models on top of our inference stack. Co‑design algorithms and infrastructure so that objectives, rollout collection, and evaluation are tightly coupled to efficient inference, and quickly identify bottlenecks across the training engine, inference engine, data pipeline, and user‑facing layers. Run ablations and scale‑up experiments to understand trade‑offs between model quality, latency, throughput, and cost, and feed these insights back into model, RL, and system design. Own critical systems at production scale Profile, debug, and optimize inference and post‑training services under real production workloads. Drive roadmap items that require real engine modification—changing kernels, memory layouts, scheduling logic, and APIs as needed. Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously. Provide technical leadership (Staff level) Set technical direction for cross‑team efforts at the intersection of inference, RL, and post‑training. Mentor other engineers and researchers on full‑stack ML systems work and performance engineering. About Together AI Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancement such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers in our journey in building the next generation AI infrastructure. Compensation We offer competitive compensation, startup equity, health insurance and other competitive benefits. The US base salary range for this full-time position is: $200,000 - $280,000 + equity + benefits. Our salary ranges are determined by location, level and role. Individual compensation will be determined by experience, skills, and job-related knowledge. Equal Opportunity Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more. Please see our privacy policy at https://www.together.ai/privacy    
No items found.
Hidden link
Zoox.jpg

Senior Engineering Manager, ML Platform

Zoox
$317,000 – $370,000
US.svg
United States
Full-time
Remote
false
Zoox is on a mission to reimagine transportation and build autonomous robotaxis from the ground up that are safe, reliable, clean, and enjoyable for everyone. With bidirectional driving capabilities and four-wheel steering, our vehicle allows us to maneuver through compact spaces and change directions without needing to reverse. We are still in the early stages of deploying our robotaxis, and it is a great time to join Zoox and have a significant impact on executing this mission.  Our growing Software Infrastructure engineering leadership team is looking for a Senior Engineering Manager, ML Platform. The centralized ML Platform team at Zoox plays a crucial role in enabling innovations across all our Autonomy and Data Science teams to develop and deploy models across our robotaxi and cloud infrastructure, and to work on cutting-edge training and inference optimization techniques. The OpportunityWe are working on many interesting challenges to enable rapid experimentation and scale our multi-modal Foundation models and RL infrastructure, and ensure these models run efficiently on our vehicles, meeting our latency targets. You will get to work across all ML teams within Zoox - Perception, Prediction, Planner, Simulation, Collision Avoidance, and our Advanced Hardware Engineering group, and have the opportunity to significantly push the boundaries of how ML is practiced within Zoox. We build and operate the base layer of ML tools, deep learning frameworks, and inference libraries used by our applied research teams for in- and off-vehicle ML use cases. You will lead a team of strong software engineers and managers and act as a force multiplier for our internal customers. This team has many growth opportunities as we expand our robotaxi deployments and venture into new ML domains. If you want to learn more about our ML Infrastructure, here is one of our past talks at re:Invent.In this role, you will:Vision: Develop and execute a strategic vision for our ML training platform, ensuring scalability, reliability, and performance to support large-scale Foundation and RL models.Technical acumen: Lead the design, implementation, and operation of a robust and efficient ML training platform to enable the training, experimentation, validation, and monitoring of ML models.Hiring: Attract, hire, and inspire a diverse world-class engineering team, fostering a culture of innovation, collaboration, and excellence.Partnership: Collaborate closely with cross-functional teams, including ML researchers, software engineers, data engineers, and hardware engineers to define requirements and align on architectural decisions.Mentorship: Enable the engineers in the team to grow their careers by providing the right opportunities along with clear and timely feedback.Qualifications10+ years of relevant experience, including 4+ years of management experience managing other managers and engineers.Experience building user-friendly ML Infrastructure that enabled large-scale model training and high-throughput, low-latency serving use cases.Experience with training frameworks like PyTorch, JAX, etc., leveraging GPUs for distributed model training.Experience with GPU-accelerated inference using TensorRT, Ray Serve, or similar frameworks. 317,000 - 370,000 a yearBase Salary Range There are three major components to compensation for this position: salary, Amazon Restricted Stock Units (RSUs), and Zoox Stock Appreciation Rights. A sign-on bonus may be offered as part of the compensation package. The listed range applies only to the base salary. Compensation will vary based on geographic location and level. Leveling, as well as positioning within a level, is determined by a range of factors, including, but not limited to, a candidate's relevant years of experience, domain knowledge, and interview performance. The salary range listed in this posting is representative of the range of levels Zoox is considering for this position. Zoox also offers a comprehensive package of benefits, including paid time off (e.g. sick leave, vacation, bereavement), unpaid time off, Zoox Stock Appreciation Rights, Amazon RSUs, health insurance, long-term care insurance, long-term and short-term disability insurance, and life insurance.About ZooxZoox is developing the first ground-up, fully autonomous vehicle fleet and the supporting ecosystem required to bring this technology to market. Sitting at the intersection of robotics, machine learning, and design, Zoox aims to provide the next generation of mobility-as-a-service in urban environments. We’re looking for top talent that shares our passion and wants to be part of a fast-moving and highly execution-oriented team. Follow us on LinkedIn AccommodationsIf you need an accommodation to participate in the application or interview process please reach out to accommodations@zoox.com or your assigned recruiter. A Final Note:You do not need to match every listed expectation to apply for this position. Here at Zoox, we know that diverse perspectives foster the innovation we need to be successful, and we are committed to building a team that encompasses a variety of backgrounds, experiences, and skills.
No items found.
Hidden link
Intrinsic.jpg

Software Engineer, Developer Experience

Intrinsic
GE.svg
Germany
Full-time
Remote
false
Intrinsic is Alphabet’s bet aiming to reimagine the potential of industrial robotics. Our team believes that advances in AI, perception and simulation will redefine what’s possible for industrial robotics in the near future – with software and data at the core.  Our mission is to make industrial robotics intelligent, accessible, and usable for millions more businesses, entrepreneurs, and developers. We are a dynamic team of engineers, roboticists, designers, and technologists who are passionate about unlocking the creative and economic potential of industrial robotics.Role As a Senior AI Research Scientist for Perception for Contact Rich Manipulation you will lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. You will explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Your work will involve creating models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. You will collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities that can be deployed on physical hardware for industrial applications. How your work moves the mission forward Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios. Design algorithms that enable robots to manipulate complex or deformable objects with high precision. Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware. Evaluate model performance in both simulation and real-world environments to ensure robustness and reliability. Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems. Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap. Skills you will need to be successful PhD in Computer Science, Robotics, or a related field with a focus on machine learning or computer vision. 3 years of experience in applied research focused on robotic manipulation or robot learning. Proficiency in programming with Python and C++. Experience with deep learning frameworks such as PyTorch, JAX, or TensorFlow. Experience developing algorithms for vision-based manipulation or contact-rich interaction. Publication record in top-tier robotics or AI conferences (e.g., ICRA, IROS, CVPR, NeurIPS).  Skills that will differentiate your candidacy Experience with reinforcement learning or imitation learning for robotics. Familiarity with physics simulators like MuJoCo, Isaac Sim, or Gazebo. Experience integrating tactile sensors with visual perception systems. Experience in LfD (Learning from Demonstrations), kinesthetic learning. Background in sim-to-real transfer techniques for manipulation policies. Experience with transformer-based architectures or foundation models in a robotics context. Experience deploying machine learning models on edge compute hardwar​e. At Intrinsic, we are proud to be an equal opportunity workplace. Employment at Intrinsic is based solely on a person's merit and qualifications directly related to professional competence. Intrinsic does not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), or any other basis protected by law. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. It is Intrinsic’s policy to comply with all applicable national, state and local laws pertaining to nondiscrimination and equal opportunity. If you have a disability or special need that requires accommodation, please contact us at: candidate-support@intrinsic.ai.
No items found.
Hidden link
OpenAI.jpg

Product Manager, Personalization

OpenAI
$325,000 – $325,000
US.svg
United States
Full-time
Remote
false
About the Team We are the team behind ChatGPT, a rapidly evolving AI companion designed to answer any question and perform any task intuitively. With hundreds of millions of people globally each week, ChatGPT plays a significant role in ensuring that AI benefits all of humanity. And we’re just getting started. We have ambitious plans to further enhance the product by combining research, engineering, and design, making ChatGPT even more intuitive and indispensable in users’ daily lives. About the Role At OpenAI, we believe the most useful AI systems will deeply understand the people using them. The Memory & Personalization team is responsible for building the systems that allow ChatGPT to learn from interactions over time, remembering context, preferences, goals, and workflows to deliver more helpful, tailored experiences. We are looking for a Product Manager to define and scale the next generation of AI personalization, building the foundation that enables ChatGPT to become a truly adaptive assistant for hundreds of millions of users. As a Product Manager for Memory & Personalization, you will define how ChatGPT learns from and adapts to individual users over time. You will work at the intersection of product, research, and engineering to design the systems that capture meaningful signals from user interactions and translate them into more helpful, personalized experiences. This role requires balancing ambitious product innovation with thoughtful safeguards around user control, privacy, and transparency. Your work will shape how hundreds of millions of people experience AI that understands their preferences, workflows, and goals. This position is based in San Francisco. We utilize a hybrid work model with 3 days in the office per week and offer relocation assistance to new employees. In this role, you will:Spearhead the development and implementation of cutting-edge AI features by crafting the vision, strategy, roadmap, and execution plan.Convert user feedback into detailed product requirements, narratives, and technical specifications.Utilize data to deeply understand user needs and guide future product development.Work closely with research, product design, and engineering teams to bring new capabilities to life.You might thrive in this role if you:Have 7-10+ years of product management experience or have successfully started a company.Hold a bachelor’s degree in Computer Science, Engineering, Information Systems, Analytics, Mathematics, Physics, Applied Sciences, or a related field.Have proven experience shipping products in a technical environment, collaborating with multiple cross-functional teams to drive product vision, define requirements, and guide teams to successfully deliver key milestones.Showcase strong leadership, organizational, and execution skills, with excellent communication abilities while working in high ambiguity, fast moving environments.Proven track record of working with LLM research, and translating this into production applicationsPays attention to how the product landscape is evolving across the industryAbout OpenAIOpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.OpenAI Global Applicant Privacy PolicyAt OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
No items found.
Hidden link
Hayden AI.jpg

Intern, Software Engineer - Platform

Haydenai
$45 – $45 / hour
US.svg
United States
Intern
Remote
false
About UsAt Hayden AI, we are on a mission to harness the power of computer vision to transform the way transit systems and other government agencies address real-world challenges.From bus lane and bus stop enforcement to transportation optimization technologies and beyond, our innovative mobile perception system empowers our clients to accelerate transit, enhance street safety, and drive toward a sustainable future.About the Team the Platform TeamThe Platform team serves as the foundational layer that keeps Hayden's operations running reliably and at scale. This team is responsible for ensuring that Perception algorithms run smoothly across both purpose-built edge hardware and public cloud infrastructure. The team manages event processing, starting from device and ending at delivery to partners. It is also responsible for the fleet's health. Security, compliance, data governance, and data privacy is also within the responsibility of the Platform team.About the RoleAs a Platform Engineering Intern at Hayden AI, you’ll work on the foundational systems that power our entire product. This is not a sandbox role, you will contribute directly to the infrastructure, services, and data pipelines that ensure our Perception algorithms run reliably across edge devices and cloud environments.You will partner closely with senior engineers to build and improve the systems that move data from devices in the field all the way through event processing and delivery to our partners. Your work may touch cloud services, backend systems, fleet health monitoring, data tooling, or internal platforms that support security, compliance, and governance.This role is ideal for someone who wants to understand how large-scale AI systems operate in production—from edge hardware to cloud infrastructure—and is excited by reliability, performance, and scalability challenges.You will gain hands-on experience designing, building, testing, and deploying production-grade systems while learning how high-performing infrastructure teams operate.This position is based in San Francisco and follows a hybrid schedule with at least 3 days in-office per week.Key ResponsibilitiesBelow are your primary responsibilities. These represent the core areas where you’ll make an impact. As part of a rapidly evolving team, we look forward to your impact expanding over time.Take ownership of a real project and see it through to completionBuild and ship features with support from senior engineersWrite clean, scalable codeTest your work and iterate quicklyBe involved in everything from design discussions to deploymentCollaborate with engineers in code reviews and team discussionsParticipate in standups, sprint planning, and retrospectivesSupport the team on ad hoc engineering tasks as they come upHelp improve performance, reliability, or usability where needed​​Ask questions, seek feedback, and apply it quicklyDeliverables or project examples:GPS data analysisTrain Deep learning modelCreate AI datasetsLidar/Camera data toolingTest cases for end-to-end system performanceDevelop a cloud service in the event processing pipelineAdd page or a new user flow to the Portal web applicationRequired QualificationsThe qualifications below outline the experience and skills most relevant to success in this role. We recognize that skills and potential come in many forms, and we welcome diverse experiences that advance our mission.Education:Currently in your final year of a Bachelor’s program, or enrolled in a Master’s or PhD program in Computer Science. Technical Experience:Experience in one or more of the following programming languages:Go, PythonPersonal Attributes:Detail-oriented with a high bar for quality and accuracy.Curious and self-driven, motivated to dig into problems and find root causes.Strong communicator who can clearly document findings and surface issues to the right stakeholders.Collaborative team player who thrives in cross-functional environments.Organized and reliable, with the ability to manage multiple tasks and follow through consistently.Comfortable with ambiguity and able to make progress with limited direction.
No items found.
Hidden link
flowith.jpg

Android Developer FT - Shanghai 安卓工程师 (全职) - 上海

Flowith
CN.svg
China
Full-time
Remote
false
About the roleAs an Android Developer at Flowith, you'll be at the forefront of mobile innovation, designing and developing cutting-edge Android applications that integrate advanced AI capabilities. This role combines technical expertise with creative collaboration through our unique Vibe Coding approach, where 70-99% of your time will be dedicated to collaborative programming in a creative, dynamic environment. You'll transform complex requirements into intuitive user experiences while exploring the latest Android technologies to power our innovative products.Responsibilities:Design and develop the flowith's Android platform products Implement and optimize AI features for mobile applicationsCollaborate with product, design, and backend teams to create exceptional user experiencesExplore and implement cutting-edge Android technologies and frameworks Conduct code reviews and performance optimizationsParticipate in team Vibe Coding sessions, working collaboratively in a creative atmosphere工作内容:核心产品开发:主导 Flowith 安卓平台产品的架构设计与核心开发,从底层保障 APP 的极致性能。沉浸式 Vibe Coding:将 70%-99% 的时间投入到充满创意的结对与协作编程中,将灵感转化为代码。AI 能力移动端化:在手机端深度集成、落地并持续优化 AI 核心功能,让复杂的前沿模型在移动设备上跑得又快又稳。跨界体验打磨:与产品、设计及后端研发团队无缝咬合,跨越技术与业务的边界,共同打磨出令人惊艳的用户体验。前沿技术破圈与护航:持续探索并引入最前沿的安卓技术与框架;把控代码质量,主导 Code Review 与深度的性能调优。RequirementsDemonstrated experience building and shipping applications, websites, or hardware projects for personal interest or commercial purposesProficiency in Java/Kotlin with strong knowledge of Android development frameworks and ecosystemSolid understanding of UI design principles, particularly Material Design Familiarity with mobile AI technologies implementation (e.g., TensorFlow Lite)Strong programming fundamentals, clean coding practices, and excellent problem-solving abilitiesAbility to thrive in the Vibe Coding environment, enjoying collaborative creative programmingWorking knowledge of English; preference given to candidates with open-source project experience实战创作者:有过真正主导并打包上线的应用、网站或硬件项目经验—无论这是出于商业目的,还是纯粹的个人狂热与兴趣。安卓原生极客:精通 Java/Kotlin,深入理解 Android 开发框架与技术生态。具备扎实的编程底子、Clean Code(整洁代码)的强迫症,以及绝佳的问题解决能力。端侧 AI 玩家:熟悉并在意移动端 AI 技术的落地实现(如 TensorFlow Lite 等),渴望探索端侧智能的性能边界。审美在线的工程师:扎实掌握 UI 设计原则(尤其是 Material Design),不仅懂代码,更懂交互,对移动端产品的美感有自己的坚持。Vibe Coding 狂热分子:极度适应并享受Vibe Coding,能在高度动态的环境中与团队同频共振。加分项:具备良好的英文工作沟通能力;有活跃的 Open-source(开源)项目贡献经验是巨大的加分项。BenefitsWorkspace, Culture & LifestyleAwesome Teammates: Work alongside a kind, creative, and hardworking crew of occasional "geeks" and visionaries.Building the AGI Future: Participate in the in-house development of rapidly evolving AI agents and explore the future of AGI interactive interfaces.Cool Offices in SH & SF: Enjoy our ultra-open workspaces with the ultimate freedom to seamlessly switch between our Shanghai and San Francisco locations.Pet-Friendly Workplace: Bring your furry friends to work! Come play with our resident Orange Tabby and Golden Retriever Mix, or bring your own pets to hang out.Island Hackathons: Join our annual internal hackathons, where we select a new city or country each year for innovative coding sessions and team bonding.Free AI Tools & Tech Gear: Enjoy free, unlimited access to cutting-edge AI tools, plus the latest tech equipment like Apple Vision Pro and FPV drones.Tech Events: Regularly participate in top-tier global tech meetups and innovation showcases.Parties & Events: Celebrate with monthly birthday bashes and annual milestone partiesFree Snacks & Drinks: Stay fueled with an endless supply of your favorite beverages and unlimited complimentary snacks.Work ArrangementsFlexible Working Hours: Customize your schedule by arriving at the office between 10 AM and 1 PM for a standard 8-hour workday, 5 days a week.Remote Work & Care: Embrace a supportive hybrid work model, featuring 1 additional Work-From-Home (WFH) day per month exclusively for female employees.Comprehensive Benefits PackageCompetitive Compensation: Earn an above-market salary structure with an optional equity/stock options package.Wellness Program: Take care of your body and mind with free gym access and monthly on-site professional massages.Exclusive Swag & Perks: Receive holiday surprise gift boxes, premium custom company apparel (T-shirts, hoodies, and jackets), and occasional exclusive internal brand discounts.
No items found.
Hidden link
Scale AI.jpg

Principal AI Ops Architect, IPS

Scale AI
GB.svg
United Kingdom
Full-time
Remote
false
Role Overview Scale’s rapidly growing International Public Sector team is focused on using AI to address critical challenges facing the public sector around the world. Our core work consists of: Creating custom AI applications that will impact millions of citizens Generating high-quality training data for national LLMs Upskilling and advisory services to spread the impact of AI As a Production AI Ops Lead, you will design and develop the production lifecycle of full-stack AI applications, while supporting end-to-end system reliability, real-time inference observability, sovereign data orchestration, high-security software integration, and the resilient cloud infrastructure required for our international government partners. At Scale, we’re not just building AI solutions—we’re enabling the public sector to transform their operations and better serve citizens through cutting-edge technology. If you’re ready to shape the future of AI in the public sector and be a founding member of our team, we’d love to hear from you. You will: Own the production outcome: Take full accountability for the long-term performance and reliability of AI use cases deployed across international government agencies. Ensure Full-Stack integrity: Oversee the end-to-end health of the platform, ensuring seamless integration between the AI core and all full-stack components, from APIs to UI, to maintain a responsive and production-ready environment. Scale the feedback loop: Build automated systems to monitor model performance and data drift across geographically dispersed environments, ensuring the right levels of reliability. Navigate global compliance: Manage the technical lifecycle within diverse regulatory frameworks. Incident command: Lead the response for production issues in mission-critical environments, ensuring rapid resolution and building the guardrails to prevent them from happening again. Bridge the gap: Translate deep technical performance metrics into clear insights for senior international government officials. Drive product evolution: Partner with our Engineering and ML teams to ensure the lessons learned in the field directly influence the technical architecture and decisions of future use cases. Ideally, you have: Experience: 6+ years in a high-impact technical role (SRE, FDE or MLOps) with experience in the public sector. Global perspective: Familiarity with international government security standards and the complexities of deploying sovereign AI. System architecture proficiency: Proven experience maintaining production-grade applications with a deep understanding of the full request lifecycle-connecting frontend/API layers to the backend and AI core. Modern AI Stack expertise: Proficiency in coding and the modern AI infrastructure, including Kubernetes, vector databases, agentic development, and LLM observability tools. Ownership: You treat every production deployment as your own. You race toward solving hard problems before the customer even sees them. Reliability: You understand that in the public sector, a model failure may be a risk to public safety or privacy. Customer communication: The ability to explain to a high-ranking official why the performance of the system has degraded and how we are fixing it. PLEASE NOTE: Our policy requires a 90-day waiting period before reconsidering candidates for the same role. This allows us to ensure a fair and thorough evaluation of all applicants. About Us: At Scale, our mission is to develop reliable AI systems for the world's most important decisions. Our products provide the high-quality data and full-stack technologies that power the world's leading models, and help enterprises and governments build, deploy, and oversee AI applications that deliver real impact. We work closely with industry leaders like Meta, Cisco, DLA Piper, Mayo Clinic, Time Inc., the Government of Qatar, and U.S. government agencies including the Army and Air Force. We are expanding our team to accelerate the development of AI applications. We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity or Veteran status.  We are committed to working with and providing reasonable accommodations to applicants with physical and mental disabilities. If you need assistance and/or a reasonable accommodation in the application or recruiting process due to a disability, please contact us at accommodations@scale.com. Please see the United States Department of Labor's Know Your Rights poster for additional information. We comply with the United States Department of Labor's Pay Transparency provision.  PLEASE NOTE: We collect, retain and use personal data for our professional business purposes, including notifying you of job opportunities that may be of interest and sharing with our affiliates. We limit the personal data we collect to that which we believe is appropriate and necessary to manage applicants’ needs, provide our services, and comply with applicable laws. Any information we collect in connection with your application will be treated in accordance with our internal policies and programs designed to protect personal data. Please see our privacy policy for additional information.
No items found.
Hidden link
Figure.jpg

Senior/Staff Software Engineer, Developer Tools and Productivity

Figure AI
$150,000 – $250,000
US.svg
United States
Full-time
Remote
false
Figure is an AI Robotics company developing a general purpose humanoid. Our humanoid robot is designed for commercial tasks and the home. We are based in San Jose, CA and require 5 days/week in-office collaboration. It’s time to build. Figure’s vision is to deploy autonomous humanoids at a global scale. Our Helix team is seeking an experienced AI Tooling Engineer to enhance our internal, web-based data and AI training tools. This role focuses on developing intuitive web interfaces that support key AI research functions, including robot data annotation, training dataset visualization, and experiment tracking. The ideal candidate has experience building rich, interactive web interfaces using React and TypeScript. Responsibilities Design and build intuitive web interfaces for robot data annotation, datasets visualization, and experiment tracking Utilize data-driven techniques to optimize interfaces for efficiency and fast iteration cycles Integrate AI models to automate manual tasks Work together with AI researchers, robot operators, and annotators to support new user experiences Requirements Strong software engineering fundamentals Bachelor's or Master's degree in Computer Science, Robotics, Engineering, or a related field Minimum of 4 years of professional, full-time experience building rich, interactive web interfaces Proficiency in React and TypeScript Bonus Qualifications Experience using data stores (Postgres, MySQL, ElasticSearch, Redis, etc.) Experience managing cloud infrastructure (AWS, Azure, GCP) Experience with Tailwind CSS Experience building data annotation and dataset management tools. The US base salary range for this full-time position is between $150,000 - $250,000 annually. The pay offered for this position may vary based on several individual factors, including job-related knowledge, skills, and experience. The total compensation package may also include additional components/benefits depending on the specific role. This information will be shared if an employment offer is extended.
No items found.
Hidden link
Replit.jpg

Offensive Security Engineer

Replit
$188,000 – $313,000
US.svg
United States
Full-time
Remote
false
Replit is the agentic software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is democratizing software development by removing traditional barriers to application creation.About the role We are looking for a senior-level Offensive Security Engineer to serve as a high-impact "adversary-in-residence" for Replit’s cloud-native platform. At Replit, security isn't just about perimeter defense; it’s about the integrity of the code that powers millions of environments.In this role, you will lead advanced "whitebox" penetration testing engagements—diving deep into our source code to identify systemic weaknesses, logic flaws, and architectural gaps. You will simulate sophisticated adversary tactics across our web applications, APIs, and containerized infrastructure, ensuring that our AI-integrated development environment remains the most secure place for the world’s software to live. What You'll DoLead Whitebox Penetration Testing: Execute end-to-end testing with full access to source code. You will perform manual code-level inspections to uncover complex logic flaws and authorization bypasses that automated tools miss.Simulate Adversarial Attacks: Conduct Red and Purple team engagements across our cloud-native stack (K8s, Docker), simulating how a sophisticated actor might move from a code-level exploit to infrastructure-wide impact.Secure AI-Enabled Systems: Perform offensive testing on LLM-backed applications and agentic AI workflows, focusing on prompt injection, data leakage, and abuse of AI-driven components.Vulnerability Research & Chaining: Identify, exploit, and demonstrate realistic business risk by chaining vulnerabilities—from the application layer down through our internal trust boundaries.Build Offensive Tooling: Contribute to internal security frameworks and build AI-assisted testing tools to automate the discovery of common bug classes while maintaining deep manual testing depth.Partner with Engineering: Work closely with product teams and security architects to explain root causes, influence design guardrails, and triage high-priority findings from our Bug Bounty (HackerOne) program.Required Skills & ExperienceExperience: 7+ years of hands-on experience in penetration testing, offensive security, or vulnerability research.Code Fluency: You are a practitioner of whitebox testing. You can navigate large codebases and have a deep understanding of modern application architectures and secure coding pitfalls.Cloud-Native Context: You are comfortable in a cloud-native environment. While your focus is the code, you understand how it interacts with Kubernetes, Docker, and hybrid cloud infrastructure.Engineering Skills: Strong proficiency in Go, Python, or TypeScript. You should be capable of writing custom scripts, payloads, and proof-of-concept exploits.Adversarial Mindset: You enjoy the "hunt" and have a proven track record of manual exploitation beyond automated scanners.Communicator: You can translate a complex code-level exploit into a clear narrative that helps engineering teams understand risk and prioritize fixes.Bonus QualificationsPublic recognition on platforms like HackerOne or Bugcrowd.Experience building or extending AI-based security testing tools.Background in incident response or detection engineering from the defensive side.Published CVEs or security research in the cloud-native or AI space.This is a full-time role that can be held from our Foster City, CA office. The role has an in-office requirement of Monday, Wednesday, and Friday. Full-Time Employee Benefits Include:💰 Competitive Salary & Equity💹 401(k) Program with a 4% match⚕️ Health, Dental, Vision and Life Insurance🩼 Short Term and Long Term Disability🚼 Paid Parental, Medical, Caregiver Leave🚗 Commuter Benefits📱 Monthly Wellness Stipend🧑‍💻 Autonomous Work Environment🖥 In Office Set-Up Reimbursement🏝 Flexible Time Off (FTO) + Holidays🚀 Quarterly Team Gatherings☕ In Office AmenitiesWant to learn more about what we are up to?Meet the Replit AgentReplit: Make an app for thatReplit BlogAmjad TED TalkInterviewing + Culture at ReplitOperating PrinciplesReasons not to work at ReplitTo achieve our mission of making programming more accessible around the world, we need our team to be representative of the world. We welcome your unique perspective and experiences in shaping this product. We encourage people from all kinds of backgrounds to apply, including and especially candidates from underrepresented and non-traditional backgrounds.
No items found.
Hidden link
OpenAI.jpg

Learning Systems Engineer

OpenAI
$239,000 – $325,000
US.svg
United States
Full-time
Remote
false
About the TeamThe Education team at OpenAI is building systems that help make ChatGPT a highly effective learning partner. As AI progress continues to accelerate, our goal is to design learning capabilities that help individuals, developers, educators, and organizations move from curiosity to mastery with the help of AI tools.This team works at the intersection of AI systems, learning science, and large-scale platforms to build the infrastructure for AI-native learning experiences globally.About the RoleOpenAI is seeking a Learning Systems Engineer to help build the infrastructure behind AI-native learning experiences. This role sits at the intersection of backend systems, learning science, research, and product.You will translate pedagogical goals into production systems: learner models, progress tracking, adaptive feedback loops, formative assessments, and analytics to measure whether people are actually learning.Your work will help ensure that millions of people around the world can learn effectively and build real capability with AI. What You'll Work OnBuild AI-Native Learning Infrastructure: Develop core systems for AI education, including dynamic experiences, progress tracking, and assessments.Enable Adaptive Learning: Develop capabilities that allow learning experiences to dynamically adapt to learners’ knowledge, goals, and behaviors over time.Design Data Systems for Insights: Build data pipelines and analytics systems to help educators understand learner outcomes, engagement patterns, and skill development.Empower Educators: Build systems that allow non-engineers to design, configure, and experiment with learning experiences without requiring direct engineering support.What Success Looks Like in the First 12 MonthsHelp launch new AI learning experiences that reach a broad set of learners.Refine infrastructure that allows educators to deliver adaptive learning and assessments.Validate the learning analytics pipeline to provide deeper insights into AI driven learning.Empower education teams to use AI tools to build and iterate at scale.Core Qualifications5–10+ years of experience in software, data, or learning engineering.Experience building data systems or infrastructure that support education & training.Experience working with learning data, analytics pipelines, or educational metrics.Comfort translating learning or pedagogical goals into technical systems.Strong Additional QualificationsBackground in learning science, instructional design, or education research.Experience building LMS platforms or training infrastructure.Experience with learning analytics, learner data models, or educational measurement.Experience building systems that incorporate AI models into educational workflows.Experience with credentialing, certification systems, or competency frameworks.Experience building platforms used by non-technical educators or curriculum teams.Relevant BackgroundsEducation technology engineering.Learning analytics or educational data systems.Instructional technology platforms.Training platforms for technical products.AI tutoring systems or adaptive learning platforms.Why This Role MattersAI will fundamentally transform how people learn, work, and create. This role is essential for building the systems that ensure the benefits of AI are accessible to all, enabling learners worldwide to develop meaningful capability with this foundational technology.About OpenAIOpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.OpenAI Global Applicant Privacy PolicyAt OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
No items found.
Hidden link
OpenAI.jpg

Forward Deployed Engineer - Semiconductor

OpenAI
$162,000 – $302,000
US.svg
United States
Full-time
Remote
false
About the teamOpenAI’s Forward Deployed Engineering team partners with leading semiconductor companies to deploy production-grade AI systems across chip design, verification, and tooling workflows. We operate at the intersection of customer delivery and core platform development, embedding deeply with customers to translate frontier model capabilities into systems that materially reduce design cycles, improve verification quality, and accelerate innovation.Our work turns early, high-touch deployments into repeatable solution patterns, reference architectures, and evaluation practices that scale across the semiconductor ecosystem — from chip designers to EDA vendors and, longer-term, fabrication partners.About the roleWe are hiring a Forward Deployed Engineer (FDE) to lead end-to-end deployments of OpenAI’s models inside semiconductor and chip design organizations. You will work with customers who are deep experts in hardware architecture, RTL, verification, and performance engineering, translating complex workflows, massive codebases, and long-running toolchains into production AI systems.Your focus will span end-to-end semiconductor workflows, from chip design and verification through tooling and manufacturing-adjacent systems. You will help expand OpenAI’s footprint across the stack, shaping how frontier models are applied throughout the semiconductor lifecycle.You will measure success through production adoption, cycle-time reduction, engineer productivity gains, and evaluation-driven feedback loops that inform product, model, and platform strategy. You’ll work closely with Product, Research, GTM, and Partnerships to turn early wins into a durable semiconductor vertical offering.This role operates in environments where correctness, scale, and trust matter — regressions cost weeks, failures block tape-out, and credibility is earned through technical rigor.This role is based in San Francisco. We use a hybrid work model of 3 days in the office per week. We offer relocation assistance. Travel up to 50% is required.In this role you willDesign and ship production AI systems around models, owning integrations with RTL repositories, verification environments, simulators, and internal tooling.Lead discovery and scoping from pre-engagement through production rollout, translating ambiguous engineering pain points into hypothesis-driven use cases with measurable outcomes.Deliver AI-powered verification workflows such as change-aware test selection, directed test generation, and intelligent regression triage, taking them from prototype to daily production use.Build systems that operate over large, evolving codebases and artifacts (RTL, tests, logs, waveforms, traces), where performance, latency, and failure handling shape architecture.Define and run evaluation loops that measure model and system quality against workflow-specific benchmarks (e.g., coverage, false positives, debug time, iteration speed).Own delivery state across multiple workstreams, making trade-offs between scope, speed, and robustness to protect production impact.Distill deployment learnings into hardened primitives, reference implementations, playbooks, and tooling that can be reused across customers.Surface field insights that inform model behavior, tooling gaps, and future product direction across the semiconductor stack. You might thrive in this role if youBring 5+ years of engineering experience in chip design, verification, EDA, or FPGA development (including RTL design, timing closure, and hardware/software co-design), or closely adjacent systems domains such as firmware, distributed systems, compilers, or performance-critical infrastructure.Have worked directly with RTL, verification environments, simulators, or large-scale performance/debug tooling — or have partnered closely with teams who do.Have delivered complex systems end-to-end in environments where scale, correctness, and long feedback loops shaped how you build and ship.Write and review production-grade code in Python and/or systems-adjacent languages, and are comfortable integrating across heterogeneous toolchains.Have experience deploying or experimenting with LLM-powered systems and understand how model behavior, evaluation, and guardrails affect trust and adoption.Communicate clearly with hardware engineers, software engineers, product teams, and executives, translating technical trade-offs into delivery decisions.Apply systems thinking with high execution standards, turning failures, regressions, and unexpected model behavior into improved operating patterns.Stay calm and decisive in technically deep, high-stakes environments where progress depends on credibility and follow-through.Success in this role means shipping AI systems that semiconductor engineers trust in their daily workflows, establishing repeatable deployment patterns across chip design and verification, and helping OpenAI become a long-term partner across the semiconductor ecosystem.About OpenAIOpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.OpenAI Global Applicant Privacy PolicyAt OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
No items found.
Hidden link
Together AI.jpg

Program Manager, Data Center Delivery

Together AI
$200,000 – $280,000
No items found.
Full-time
Remote
false
About the Role The Turbo team sits at the intersection of efficient inference (algorithms, architectures, engines) and post‑training / RL systems. We build and operate the systems behind Together’s API, including high‑performance inference and RL/post‑training engines that can run at production scale. Our mandate is to push the frontier of efficient inference and RL‑driven training: making models dramatically faster and cheaper to run, while improving their capabilities through RL‑based post‑training (e.g., GRPO‑style objectives). This work lives at the interface of algorithms and systems: asynchronous RL, rollout collection, scheduling, and batching all interact with engine design, creating many knobs to tune across the RL algorithm, training loop, and inference stack. Much of the job is modifying production inference systems—for example, SGLang‑ or vLLM‑style serving stacks and speculative decoding systems such as ATLAS—grounded in a strong understanding of post‑training and inference theory, rather than purely theoretical algorithm design. You’ll work across the stack—from RL algorithms and training engines to kernels and serving systems—to build and improve frontier models via RL pipelines. People on this team are often spiky: some are more RL‑first, some are more systems‑first. Depth in one of these areas plus appetite to collaborate across (and grow toward more full‑stack ownership over time) is ideal. Requirements We don’t expect anyone to check every box below. People on this team typically have deep expertise in one or more areas and enough breadth (or interest) to work effectively across the stack. The closer you are to full‑stack (inference + post‑training/RL + systems), the stronger the fit—but being spiky in one area and eager to grow is absolutely okay. You might be a good fit if you: Have strong expertise in at least one of the following, and are excited to collaborate across (and grow into) the others: Systems‑first profile: Large‑scale inference systems (e.g., SGLang, vLLM, FasterTransformer, TensorRT, custom engines, or similar), GPU performance, distributed serving. RL‑first profile: RL / post‑training for LLMs or large models (e.g., GRPO, RLHF/RLAIF, DPO‑like methods, reward modeling), and using these to train or fine‑tune real models. Model architecture design for Transformers or other large neural nets. Distributed systems / high‑performance computing for ML. Are comfortable working from algorithms to engines: Strong coding ability in Python Experience profiling and optimizing performance across GPU, networking, and memory layers. Able to take a new sampling method, scheduler, or RL update and turn it into a production‑grade implementation in the engine and/or training stack. Have a solid research foundation in your area(s) of depth: Track record of impactful work in ML systems, RL, or large‑scale model training (papers, open‑source projects, or production systems). Can read new RL / post‑training papers, understand their implications on the stack, and design minimal, correct changes in the right layer (training engine vs. inference engine vs. data / API). Operate well as a full‑stack problem solver: You naturally ask: “Where in the stack is this really bottlenecked?” You enjoy collaborating with infra, research, and product teams, and you care about both scientific quality and user‑visible wins. Minimum qualifications 3+ years of experience working on ML systems, large‑scale model training, inference, or adjacent areas (or equivalent experience via research / open source). Advanced degree in Computer Science, EE, or a related field, or equivalent practical experience. Demonstrated experience owning complex technical projects end‑to‑end. If you’re excited about the role and strong in some of these areas, we encourage you to apply even if you don’t meet every single requirement. Responsibilities Advance inference efficiency end‑to‑end Design and prototype algorithms, architectures, and scheduling strategies for low‑latency, high‑throughput inference. Implement and maintain changes in high‑performance inference engines (e.g., SGLang‑ or vLLM‑style systems and Together’s inference stack), including kernel backends, speculative decoding (e.g., ATLAS), quantization, etc. Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost. Unify inference with RL / post‑training Design and operate RL and post‑training pipelines (e.g., RLHF, RLAIF, GRPO, DPO‑style methods, reward modeling) where 90+% of the cost is inference, jointly optimizing algorithms and systems. Make RL and post‑training workloads more efficient with inference‑aware training loops—for example, async RL rollouts, speculative decoding, and other techniques that make large‑scale rollout collection and evaluation cheaper. Use these pipelines to train, evaluate, and iterate on frontier models on top of our inference stack. Co‑design algorithms and infrastructure so that objectives, rollout collection, and evaluation are tightly coupled to efficient inference, and quickly identify bottlenecks across the training engine, inference engine, data pipeline, and user‑facing layers. Run ablations and scale‑up experiments to understand trade‑offs between model quality, latency, throughput, and cost, and feed these insights back into model, RL, and system design. Own critical systems at production scale Profile, debug, and optimize inference and post‑training services under real production workloads. Drive roadmap items that require real engine modification—changing kernels, memory layouts, scheduling logic, and APIs as needed. Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously. Provide technical leadership (Staff level) Set technical direction for cross‑team efforts at the intersection of inference, RL, and post‑training. Mentor other engineers and researchers on full‑stack ML systems work and performance engineering. About Together AI Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancement such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers in our journey in building the next generation AI infrastructure. Compensation We offer competitive compensation, startup equity, health insurance and other competitive benefits. The US base salary range for this full-time position is: $200,000 - $280,000 + equity + benefits. Our salary ranges are determined by location, level and role. Individual compensation will be determined by experience, skills, and job-related knowledge. Equal Opportunity Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more. Please see our privacy policy at https://www.together.ai/privacy    
No items found.
Hidden link
OpenAI.jpg

Researcher, Loss of Control

OpenAI
$295,000 – $445,000
US.svg
United States
Full-time
Remote
false
About the teamThe Safety Systems org ensures that OpenAI’s most capable models can be responsibly developed and deployed. We build evaluations, safeguards, and safety frameworks that help our models behave as intended in real-world settings.The Preparedness team is an important part of the Safety Systems org at OpenAI, and is guided by OpenAI’s Preparedness Framework.Frontier AI models have the potential to benefit all of humanity, but also pose increasingly severe risks. To ensure that AI promotes positive change, the Preparedness team helps us prepare for the development of increasingly capable frontier AI models. This team is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models.The mission of the Preparedness team is to:Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards risks whose impact could be catastrophicEnsure we have concrete procedures, infrastructure and partnerships to mitigate these risks and to safely handle the development of powerful AI systemsPreparedness tightly connects capability assessment, evaluations, and internal red teaming, and mitigations for frontier models, as well as overall coordination on AGI preparedness. This is fast paced, exciting work that has far reaching importance for the company and for society.About the roleAs frontier AI systems become more capable, they are increasingly able to pursue long-horizon goals, use tools, adapt to feedback, and operate with greater autonomy. These advances create enormous potential benefits, but they also introduce the risk that models may behave in ways that are misaligned, deceptive, or difficult to supervise or contain. Reducing loss of control risk is therefore a core challenge for safely developing and deploying advanced AI systems.As a Researcher for loss of control mitigations, you will help design and implement an end-to-end mitigation stack to reduce the risk of intentionally subversive or insufficiently controllable model behavior across OpenAI’s products and internal deployments. This role requires strong technical depth and close cross-functional collaboration to ensure safeguards are enforceable, scalable, and effective. You’ll contribute directly to building protections that remain robust as model capabilities, deployment patterns, and threat models evolve.In this role, you will:Design and implement mitigation components for loss of control risk—spanning prevention, monitoring, detection, containment, and enforcement—under the guidance of senior technical and risk leadership.Integrate safeguards across product and research surfaces in partnership with product, engineering, and research teams, helping ensure protections are consistent, low-latency, and resilient as usage and model autonomy increase.Evaluate technical trade-offs within the loss of control domain (coverage, robustness, latency, model utility, and operational complexity) and propose pragmatic, testable solutions.Collaborate closely with risk modeling, evaluations, and policy partners to align mitigation design with anticipated failure modes and high-severity threat scenarios, including deceptive alignment, hidden subgoals, reward hacking, and attempts to evade oversight.Execute rigorous testing and red-teaming workflows, helping stress-test the mitigation stack against increasingly capable and potentially subversive model behaviors—such as sandbagging, monitor evasion, exploit-seeking, unsafe tool use, or strategic deception—and iterate based on findings.You might thrive in this role if you:Have a passion for AI safety and are motivated to make cutting-edge AI models safer for real-world use.Bring demonstrated experience in deep learning and transformer models.Are proficient with frameworks such as PyTorch or TensorFlow.Possess a strong foundation in data structures, algorithms, and software engineering principles.Are familiar with methods for training and fine-tuning large language models, including distillation, supervised fine-tuning, and policy optimization.Excel at working collaboratively with cross-functional teams across research, policy, product, and engineering.Have significant experience designing and evaluating technical safeguards, control mechanisms, or monitoring systems for advanced AI behavior.(Nice to have) Bring background knowledge in alignment, control, interpretability, robustness, adversarial ML, or related fields.About OpenAIOpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.OpenAI Global Applicant Privacy PolicyAt OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
No items found.
Hidden link
The Reflection.jpg

Member of Technical Staff - Research Software Engineer

Reflection
US.svg
United States
Full-time
Remote
false
Our MissionReflection’s mission is to build open superintelligence and make it accessible to all.We’re developing open weight models for individuals, agents, enterprises, and even nation states. Our team of AI researchers and company builders come from DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic and beyond.The Roles MissionBridge the gap between research and production by turning cutting-edge algorithms into scalable training systems. You will design and optimize the core infrastructure behind frontier AI models — from reinforcement learning training loops and distributed GPU training to massive-scale data pipelines.Our systems train models across thousands of GPUs and process petabyte-scale datasets. We care deeply about numerical stability, throughput, and reproducibility.What This Team DoesThis team owns and evolves the core infrastructure behind our training systems.We focus on:Reinforcement learning training infrastructureDistributed training and inference systemsExperiment infrastructure and reproducibilityLarge-scale data pipelinesThe goal is to build the engineering foundation that allows researchers to iterate quickly while training models at massive scale.About the RoleYou will architect and optimize the core training infrastructure that powers our models. This includes RL training loops, distributed GPU systems, and large-scale data pipelines.You will work closely with researchers to transform new ideas into reliable, scalable training systems.Responsibilities include:Designing and optimizing large-scale training loops and data pipelines.Implementing state-of-the-art techniques and ensuring they are numerically stable and computationally efficient.Building internal tooling for launching, monitoring, and reproducing complex experiments.Diagnosing deep bottlenecks across the training stack (GPU memory issues, communication overhead, dataloader stalls).Translating research prototypes into reusable, production-grade infrastructure.What You'll Work WithRL SystemsPPO / GRPO style training loopsRLHF / RLAIF style pipelinesRollout generation and reward modelingOnline learning and evaluation systemsDistributed TrainingGPU parallelism (data, tensor, pipeline, expert)Large-scale distributed training infrastructureCommunication optimization (NCCL, RDMA, GPU interconnects)FSDP / ZeRO and model shardingOrchestration & Runtime SystemsRay, Kubernetes, SlurmDistributed runtimes and async systemsContainerization and sandboxingFrameworksPyTorchJAXMegatron-style training stacksTriton / custom kernelsData InfrastructureLarge-scale dataset curation pipelinesDeduplication and filtering systemsTokenization and preprocessingDistributed data processing frameworksAbout YouYou are a strong software engineer who speaks the language of machine learning.You may not have a PhD, but you know how to implement a research paper.You have deep experience in at least one of the following: Reinforcement Learning Systems, Distributed Training & Inference or Data InfrastructureYou enjoy working at the boundary between:Machine learning algorithmsDistributed systemsHigh-performance computingYou care deeply about performance, numerical stability, and reproducibility.You thrive in high-agency environments and enjoy solving hard technical problems.What We Offer:We believe that to build superintelligence that is truly open, you need to start at the foundation. Joining Reflection means building from the ground up as part of a small talent-dense team. You will help define our future as a company, and help define the frontier of open foundational models.We want you to do the most impactful work of your career with the confidence that you and the people you care about most are supported.Top-tier compensation: Salary and equity structured to recognize and retain the best talent globally.Health & wellness: Comprehensive medical, dental, vision, life, and disability insurance.Life & family: Fully paid parental leave for all new parents, including adoptive and surrogate journeys. Financial support for family planning.Benefits & balance: paid time off when you need it, relocation support, and more perks that optimize your time. Opportunities to connect with teammates: lunch and dinner are provided daily. We have regular off-sites and team celebrations.
No items found.
Hidden link
Eight Sleep.jpg

Data Scientist, Growth

Eight Sleep
US.svg
United States
Full-time
Remote
false
Join the Sleep Fitness MovementAt Eight Sleep, we’re on a mission to fuel human potential through optimal sleep. As the world’s first sleep fitness company, we’re redefining what it means to be well-rested and building the most advanced hardware, software, and AI technology to make it possible. Our products power peak mental, physical, and emotional performance by transforming every night of sleep into a personalized, data-driven recovery experience. We are trusted by high performers, professional athletes, and health-conscious consumers in over 30 countries worldwide. Recognized as one of Fast Company's Most Innovative Companies in 2019, 2022, and 2023, and twice named to TIME's “Best Inventions of the Year.” We operate like a high-performance team: fast, focused, and motivated by impact. We don’t just ship; we iterate, refine, and obsess over the details that help our members sleep better and wake up stronger. Every role at Eight Sleep is a chance to create cutting-edge technology, collaborate with world-class talent, and help shape a future where sleep isn’t passive - it’s a powerful tool for living better. If you’re tired of the ordinary and driven to build at the edge of what’s possible, this is your moment. Join us and lead the movement that’s transforming how the world sleeps and what we’re all capable of when we wake up.High Standards. No Apologies.We operate with intensity because our mission demands it. At Eight Sleep, we bring the same mindset as the world’s top performers: focused, relentless, and always pushing to be in the top 1% of our craft. Think Kobe Bryant’s mamba mentality, applied to bold ideas, next-gen tech, and flawless execution. This isn’t a 9-to-5. Our team is deeply committed, often putting in 60+ hours a week –not because we’re told to, but because we’re invested. We’re here to build fast, push limits, and deliver without compromise. If you thrive under pressure and want to do the most meaningful work of your career, you’ll feel right at home. If you’re looking for something easier –this isn’t it.The RoleWe’re seeking a Data Scientist to join our Data & Analytics team within the Growth organization. This isn't a typical analyst role - you'll be the lead growth data scientist at Eight Sleep, owning complex forecasting challenges that directly impact our supply chain, marketing spend, and business growth. The ideal candidate has deep experience with time series forecasting, experimentation design, and translating ambiguous business problems into data-driven solutions.What You'll BuildAdvanced demand forecasting models to solve our biggest operational challenge: eliminating stock-outs during peak periods (Black Friday, holidays) while avoiding overcommitting capital on inventoryMarketing spend optimization algorithms that go beyond our current Prophet model, incorporating seasonality, macro trends, and cross-channel effects to improve our CAC targetingExperimentation frameworks for incrementality testing across our marketing channels (Meta, Google, YouTube) with proper statistical rigorMarketing attribution models that handle our complex customer journey and multi-touch attribution challengesWhat You'll Need to Succeed 5+ years of data science experience with demonstrable impact on business-critical forecasting and optimization problemsForecasting expertise - you've built production models that handle seasonality, growth trends, and external factors. Bonus points if you've worked with Prophet and know its limitationsStrong experimentation background - you can design proper A/B tests, understand incrementality measurement, and have experience with marketing mix modelingAdvanced Python/R skills plus expert SQL - you'll be building production pipelines and complex models, not just ad-hoc analysisHigh ownership mentality - comfortable with ambiguous problems and 60+ hour weeks during critical periods. This is a startup where your work directly impacts our trajectoryBusiness acumen - you can translate between technical solutions and business stakeholders (Growth, Finance, Operations teams)Strong written and verbal communication - you can translate complex findings into actionable insights. Your written and verbal communication skills are excellentHigh sense of agency - you are an independent self-starter who thrives in a fast-paced environment and is comfortable owning a high level of responsibilityWe will hire for this role anywhere in the US, but working hours will be EST. Note: Employees approved for remote work must perform their duties from a single, company-approved based location #LI-RemoteWhy join Eight Sleep?Innovation in a culture of excellenceJoin us in a workplace where innovation isn’t just encouraged - it’s a standard. Our flagship product, the Pod, is a testament to our culture of excellence, beloved by hundreds of thousands of customers worldwide. At Eight Sleep, you will be part of a team that continuously pushes the boundaries of technology in sleep fitness.Immediate responsibility and accelerated career growthFrom your first day, you’ll take on substantial responsibilities that have a direct impact on our core business and product success. We are a small team that empowers you to own your projects and see the tangible effects of your efforts, enhancing both your professional growth and our company’s trajectory. Your path will be challenging but rewarding, perfect for those who thrive in fast-paced environments aiming for high standards.Collaboration with exceptional talentWork alongside other bright minds like you: at Eight Sleep exceptional intelligence and a passion for breakthroughs are the norms. Our team members are not only experts in their fields but also avid innovators who thrive in our dynamic, fast-paced environment.Equitable compensation and continuous equity investmentWe extend equity participation to every full-time team member, recognizing and rewarding your direct contributions to our success. This includes periodic equity refreshments based on performance, ensuring that as Eight Sleep grows and succeeds, so do you – perfectly aligning your achievements with the broader triumphs of the company. Pay grows rapidly as you accumulate experience with Eight Sleep and translate it into concrete impact.Your own Pod - and other great benefitsEvery Eight Sleep employee receives the very product that defines our mission: a Pod of their own. If you join us you’ll get your own Pod, along with*:Full access to health, vision, and dental insurance for you and your dependentsSupplemental life insuranceFlexible PTOCommuter benefits to ease your daily commutePaid parental leave*List of benefits may vary depending on your location
No items found.
Hidden link
WRITER.jpg

Solutions architect (pre-sales) (West)

Writer
$207,200 – $250,000
US.svg
United States
Full-time
Remote
false
🚀 About WRITERWRITER is where the world's leading enterprises orchestrate AI-powered work. Our vision is to expand human capacity through superintelligence. And we're proving it's possible – through powerful, trustworthy AI that unites IT and business teams together to unlock enterprise-wide transformation. With WRITER's end-to-end platform, hundreds of companies like Mars, Marriott, Uber, and Vanguard are building and deploying AI agents that are grounded in their company's data and fueled by WRITER's enterprise-grade LLMs. Valued at $1.9B and backed by industry-leading investors including Premji Invest, Radical Ventures, and ICONIQ Growth, WRITER is rapidly cementing its position as the leader in enterprise generative AI.Founded in 2020 with office hubs in San Francisco, New York City, Austin, Chicago, and London, our team thinks big and moves fast, and we're looking for smart, hardworking builders and scalers to join us on our journey to create a better future of work with AI.📐 About the roleEvery company grows differently, and at WRITER, our growth is directly tied to empowering our users to create better content, faster, and at an unprecedented scale. As a strategic solutions architect, you'll be at the forefront of this mission, working directly with our largest and most strategic prospects to identify, validate, and build innovative generative AI solutions that unlock massive business value. This isn't just about selling a product; it's about deeply understanding complex enterprise needs and architecting a future where AI transforms how our customers operate. You'll be instrumental in shaping how the world's leading companies adopt and scale AI, making a tangible impact on both their success and WRITER’s continued leadership in the enterprise generative AI space.This is a full-time, hybrid role based out of our San Francisco hub. You will report directly to the VP of solutions architecture.🦸🏻‍♀️ What you'll doDrive strategic technical discovery with Fortune 500 prospects and customers, translating complex business challenges into clear, impactful technical solutions for AI-powered workArchitect and design robust, scalable, and secure generative AI solutions for enterprise clients, leveraging WRITER's platform, APIs, and custom applications to solve critical business problemsLead the development and execution of compelling proofs of concept (PoCs) and demonstrations, building custom templates and integrating WRITER's capabilities to showcase transformative value and accelerate time-to-value for customersServe as a trusted technical advisor to C-suite executives, VPs of Engineering, and AI leaders, guiding their generative AI strategy and collaborating to define enterprise-level architecture roadmapsPartner closely with WRITER's Product and Engineering teams, providing critical feedback from customer engagements to influence our product roadmap and ensure our solutions meet evolving market needsChampion the adoption of WRITER's platform and APIs, educating prospects and partners on the art of the possible with generative AI and empowering them to build their own innovative solutions.⭐️ What you need5+ years of experience in technical customer-facing roles such as solutions architect, enterprise architect, or sales engineering, ideally within a high-growth, B2B SaaS company serving Fortune 500 clientsDeep expertise in generative AI principles, large language models (LLMs), and prompt engineering best practices, with a passion for staying ahead of the curve in this rapidly evolving fieldProficiency in Python for building out WRITER Framework solutions and extensive experience with APIs, microservices, and cloud AI platforms (e.g., Azure AI, AWS SageMaker) to integrate AI capabilities into complex enterprise systemsExceptional communication and presentation skills, able to simplify complex technical concepts into clear, compelling business language for diverse audiences, from technical teams to executive leadershipA highly versatile and tenured problem solver, capable of leading multi-stakeholder discovery sessions, uncovering root causes, and architecting innovative solutions to complex business and technical challengesA builder's mindset and a proven ability to Connect deeply with customer needs, Challenge the status quo with innovative AI solutions, and Own the technical success and impact of client engagements 🍩 Benefits & perks (US Full-time employees)Generous PTO, plus company holidaysMedical, dental, and vision coverage for you and your familyPaid parental leave for all parents (12 weeks)Fertility and family planning supportEarly-detection cancer testing through GalleriFlexible spending account and dependent FSA optionsHealth savings account for eligible plans with company contributionAnnual work-life stipends for:Wellness stipend for gym, massage/chiropractor, personal training, etc.Learning and development stipendCompany-wide off-sites and team off-sitesCompetitive compensation, company stock options and 401kWRITER is an equal-opportunity employer and is committed to diversity. We don't make hiring or employment decisions based on race, color, religion, creed, gender, national origin, age, disability, veteran status, marital status, pregnancy, sex, gender expression or identity, sexual orientation, citizenship, or any other basis protected by applicable local, state or federal law. Under the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.By submitting your application on the application page, you acknowledge and agree to WRITER's Global Candidate Privacy Notice.
No items found.
Hidden link
ElevenLabs.jpg

Forward Deployed Engineer - Strategist - Europe

ElevenLabs
GB.svg
United Kingdom
Full-time
Remote
false
About ElevenLabsElevenLabs is an AI research and product company transforming how we interact with technology.We launched in January 2023 with the first human-like AI voice model. Today, we serve millions of users and thousands of businesses - from fast-growing startups to large enterprises like Deutsche Telekom and Meta. Our investors are some of the world's most prominent, including Andreessen Horowitz, ICONIQ Growth and Sequoia. We've raised $781M in funding and our last valuation was $11B - multiples of 11, always. We have expanded from voice into three main platforms:ElevenAgents enables businesses to deliver seamless and intelligent customer experiences, with the integrations, testing, monitoring, and reliability necessary to deploy voice and chat agents at scale.ElevenCreative empowers creators and marketers to generate and edit speech, music, image, and video across 70+ languages.ElevenAPI gives developers access to our leading AI audio foundational models.Everything we do is the result of the creativity and commitment of our team - builders doing the best work of their lives. We are researchers, engineers, and operators. IOI medalists and ex-founders. If you want to work hard and create lasting positive impact, we want to hear from you.How we workHigh-velocity: Rapid experimentation, lean autonomous teams, and minimal bureaucracy.Impact not job titles: We don’t have job titles. Instead, it’s about the impact you have. No task is above or beneath you.AI first: We use AI to move faster with higher-quality results. We do this across the whole company—from engineering to growth to operations.Excellence everywhere: Everything we do should match the quality of our AI models.Global team: We prioritize your talent, not your location.What we offerInnovative culture: You’ll be part of a generational opportunity to define the trajectory of AI, surrounded by a team pushing the boundaries of what’s possible.Growth paths: Joining ElevenLabs means joining a dynamic team with countless opportunities to drive impact - beyond your immediate role and responsibilities.Learning & development: ElevenLabs proactively supports professional development through an annual discretionary stipend.Social travel: We also provide an annual discretionary stipend to meet up with colleagues each year, however you choose.Annual company offsite: Each year, we bring the entire team together in a new location - past offsites have included Croatia and Italy.Co-working: If you’re not located near one of our main hubs, we offer a monthly co-working stipend.About the roleAs a Forward Deployed Engineer Strategist, you'll work as part of a driven and creative team of Engineers, Product Designers, and other Strategists to deploy our voice AI technology against the most challenging problems our customers face. Your mission is to synthesize disconnected streams of thought into a cohesive understanding of what the most important problem is, what the data means, what the product needs, what users are motivated by, and where the impact could be.No two days are the same, but as a FDE Strategist you can expect to:Meet with strategic customers to understand their critical audio and voice AI needs and locate their biggest pain points.Identify relevant use cases through deep engagement with customer problems and workflows, and work with Engineers to implement our voice and audio AI technology into innovative solutions.Design and architect bespoke integrations for customers, ensuring our technology fits seamlessly into their products and operations.Guide customers on best practices for implementing our voice and audio AI models to maximize their effectiveness.Present the results of our work and proposals for future work to audiences ranging from technical teams to C-suite executives.Collaborate with our Research and Product teams to incorporate field insights into ElevenLabs' software products and AI models.Build and deliver compelling demos of our voice and audio AI technology to new and existing customers.Scope out potential applications in new industries and expand our AI solutions across different sectors globally.Take full ownership of end-to-end execution of major projects for our most strategic partners, working hands-on to deliver high-impact solutions.Collaborate daily with our customers' engineering and executive teams to ensure optimal implementation of ElevenLabs' technologies.RequirementsExperience working with customers in a technical capacity. It's ok if you only worked with customers in student clubs or side projects, as long as you are interested in working closely with them.Basic proficiency in Python and understanding of API integration to implement scripts and help with prototyping/demo building.Excellent communication and problem-solving skills, especially in terms of ability to summarize complex technical concepts and using logic in pursuing optimal solutions.A proven track record of taking ownership of complex projects and delivering results.Adaptability to work across different customer environments and technical use cases.Technical aptitude to quickly understand our voice and audio AI models and their applications.LocationThis role is remote and can be executed globally. If you prefer, you can work from our offices in Bangalore, Dublin, London, New York, San Francisco, Tokyo, and Warsaw.#LI-Remote
No items found.
Hidden link
Snorkel AI.jpg

Senior Strategy & Operations Manager, Expert Contributor Experience

Snorkel AI
$172,000 – $300,000
US.svg
United States
Full-time
Remote
false
About Snorkel At Snorkel, we believe meaningful AI doesn’t start with the model, it starts with the data. We’re on a mission to help enterprises transform expert knowledge into specialized AI at scale. The AI landscape has gone through incredible changes between 2015, when Snorkel started as a research project in the Stanford AI Lab, to the generative AI breakthroughs of today. But one thing has remained constant: the data you use to build AI is the key to achieving differentiation, high performance, and production-ready systems. We work with some of the world’s largest organizations to empower scientists, engineers, financial experts, product creators, journalists, and more to build custom AI with their data faster than ever before. Excited to help us redefine how AI is built? Apply to be the newest Snorkeler!About Snorkel We’re on a mission to democratize AI by building the definitive AI data development platform. The AI landscape has gone through incredible change between 2016, when Snorkel started as a research project in the Stanford AI Lab, to the generative AI breakthroughs of today. But one thing has remained constant: the data you use to build AI is the key to achieving differentiation, high performance, and production-ready systems. We work with some of the world’s largest organizations to empower scientists, engineers, financial experts, product creators, journalists, and more to build custom AI with their data faster than ever before. Excited to help us redefine how AI is built? Apply to be the newest Snorkeler! As an Applied AI Engineer, you’ll research and utilize state-of-the-art Gen AI and machine learning (ML) techniques to successfully deliver solutions to our customers. You will work directly with our customers to understand their business and technical needs and design and deliver AI solutions to solve them - either by leveraging Snorkel Flow or developing custom approaches when needed. You will also help define Snorkel’s Applied AI tooling by translating repeatable real-world challenges into reusable solution recipes, workflows, best practices, and platform-level capabilities that become part of Snorkel Flow’s next generation of AI tooling. We move fast and are constantly prototyping and innovating new ways to deliver value to our customers. This position is ideal for someone who enjoys solving complex problems, bridging the gap between AI technology and business value, working directly with customers, keeping up-to date with AI research, and standardizing bespoke solutions into internal recipes and staying naturally curious about the infrastructure that underpin the Applied AI stack end-to-end. Main Responsibilities Partner with customers to build and deploy impactful Gen AI and machine learning solutions, from use case scoping and data exploration to model development and deployment. This may involve leveraging Snorkel Flow or designing custom approaches using state-of-the-art tools, with the goal of delivering real business value and informing the evolution of the Snorkel platform. Develop and implement state of the art AI systems such as retrieval-augmented generation (RAG), fine-tuning pipelines, prompt engineering recipes and agentic workflows. Create augmented real-world datasets and comprehensive evaluation workflows to ensure model reliability, transparency, and stakeholder trust. A data- and evaluation-first mindset is essential for success in this role. Forge and manage relationships with our customers’ leadership and stakeholders to ensure successful development and deployment of AI projects with Snorkel Flow. Collaborate closely with pre-sales Solutions and Product teams to map customer needs to existing capabilities, prioritize roadmap gaps, and guide successful project setup. Work with other Applied AI Engineers to standardize solutions and contribute to internal tooling and best practices. Lead stakeholder education on quantitative capabilities, helping them to understand the strengths and weaknesses of different approaches and what problems are best-suited for Snorkel AI. Serve as the voice of our customers for new AI paradigms, data science workflows, and share customer feedback to product teams. Conduct one-to-few and one-to-many enablement workshops to transfer knowledge to customers considering or already using Snorkel AI. Annual travel up to 25%. Preferred Qualifications B.S. degree in a quantitative field such as Computer Science, Engineering, Mathematics, Statistics, or comparable degree/experience. 3+ years of customer-facing experience in the design and implementation of AI/ML solutions. Proficiency in Python, including strong grounding in software engineering fundamentals (e.g., modular design, testing, profiling, packaging) and experience with modern Python constructs and libraries for type validation and typed data modeling (e.g., pydantic), building type-safe systems (e.g., mypy), testing (e.g., pytest), packaging and environment configuration (e.g., poetry), API and service frameworks (e.g., FastAPI), serialization and structured data handling (e.g., msgspec), and orchestration tooling relevant to ML deployment (e.g., Ray, Airflow). Expertise across the Applied AI stack, spanning classical ML libraries (e.g., scikit-learn), deep learning frameworks (e.g., PyTorch), foundation-model ecosystems (e.g., Hugging Face Transformers), vector/embedding tooling (e.g., FAISS), data processing frameworks (e.g., pandas, Spark), retrieval/RAG tooling (e.g., Chroma, Weaviate), synthetic dataset curation, evaluation workflows, and LLM orchestration, workflow, agent authoring tools (e.g., LlamaIndex, LangGraph, CrewAI). Experience leading strategic, customer-facing initiatives and collaborating with business stakeholders to ensure ML solutions drive successful business outcomes, with a strong focus on teaching and enablement. Outstanding presentation skills to technical and executive audiences, whether impromptu on a whiteboard or using presentations and demos. Ability to work in a fast-paced environment and balance priorities across multiple projects at once. Compensation range for Tier 1 locations of San Francisco Bay Area $172K - $300K OTE. All offers also include equity in the form of employee stock options. Our compensation ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Locations Redwood City, CA - Hybrid; San Francisco, CA - Hybrid - US; New York, NY - Hybrid #LI-CG1Salary Range $172,000—$300,000 USDBe Your Best at Snorkel Joining Snorkel AI means becoming part of a company that has market proven solutions, robust funding, and is scaling rapidly—offering a unique combination of stability and the excitement of high growth. As a member of our team, you’ll have meaningful opportunities to shape priorities and initiatives, influence key strategic decisions, and directly impact our ongoing success. Whether you’re looking to deepen your technical expertise, explore leadership opportunities, or learn new skills across multiple functions, you’re fully supported in building your career in an environment designed for growth, learning, and shared success. Snorkel AI is proud to be an Equal Employment Opportunity employer and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. Snorkel AI embraces diversity and provides equal employment opportunities to all employees and applicants for employment. Snorkel AI prohibits discrimination and harassment of any type on the basis of race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local law. All employment is decided on the basis of qualifications, performance, merit, and business need. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
No items found.
Hidden link
Arize AI.jpg

Enterprise Account Executive (New York City)

Arize AI
No items found.
Full-time
Remote
false
About Arize AI is rapidly transforming the world. As generative AI reshapes industries, teams need powerful ways to monitor, troubleshoot, and optimize their AI systems. That’s where we come in. Arize AI is the leading AI & Agent Engineering observability and evaluation platform, empowering AI engineers to ship high-performing, reliable agents and applications. From first prototype to production scale, Arize AX unifies build, test, and run in a single workspace—so teams can ship faster with confidence. We’re a Series C company backed by top-tier investors, with over $135M in funding and a rapidly growing customer base of 150+ leading enterprises and Fortune 500 companies. Customers like Booking.com, Uber, Siemens, and PepsiCo leverage Arize to deliver AI that works.Note: The nature of this role requires candidates to be based in the Buenos Aires area, though there isn't an in-office requirement. The Opportunity We’re looking for an Application Engineer who thrives on solving hard problems with code. In this role, you'll have the opportunity to work at the cutting edge of generative AI in a high-impact role with autonomy and ownership. What You’ll Do Debug and fix issues in our platform (and ship PRs with your fixes). Build internal tools and copilots powered by generative AI to supercharge our team. Rapidly prototype proof-of-concepts for customer use cases. Work across Engineering, Product, and Solutions to unblock customers and push the boundaries of AI adoption. What We’re Looking For You have 2-5 years of experience in software. Strong in Python and Golang; comfortable shipping fixes in production systems. Hands-on with generative AI (LLM APIs, frameworks, building copilots or automations) Hands-on with OpenTelimetry and deep familiarity with distributed tracing concepts. Familiarity with AI frameworks (CrewAI, Langchain, Langgraph, DiFy, LiteLLM, etc). Familiarity or eagerness to learn JavaScript/TypeScript. Great debugger, creative problem solver, and fast learner. Independent and resourceful. You create solutions, not dependencies. Bonus Points (but not required!) Experience in a customer-facing role Built copilots, plugins, or custom GenAI-powered applications. Open-sourced or contributed PRs to real codebases. Startup or fast-moving environment experience. Actual compensation is determined based upon a variety of job related factors that may include: transferable work experience, skill sets, and qualifications. Total compensation also includes unlimited paid time off, generous parental leave plan, and others for mental and wellness support.More About Arize Arize’s mission is to make the world’s AI work—and work for people. Our founders came together through a shared frustration: while investments in AI are growing rapidly across every industry, organizations face a critical challenge—understanding whether AI is performing and how to improve it at scale. Learn more about what we're doing here: https://techcrunch.com/2025/02/20/arize-ai-hopes-it-has-first-mover-advantage-in-ai-observability/ https://arize.com/blog/arize-ai-raises-70m-series-c-to-build-the-gold-standard-for-ai-evaluation-observability/ Diversity & Inclusion @ Arize Our company's mission is to make AI work and make AI work for the people, we hope to make an impact in bias industry-wide and that's a big motivator for people who work here. We actively hope that individuals contribute to a good culture Regularly have chats with industry experts, researchers, and ethicists across the ecosystem to advance the use of responsible AI Culturally conscious events such as LGBTQ trivia during pride month We have an active Lady Arizers subgroup
No items found.
Hidden link
No job found
Your search did not match any job. Please try again
Department
Clear
Category
Clear
Country
Clear
Job type
Clear
Remote
Clear
Only remote job
Company size
Clear
Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.