
The AI job market moves fast. We keep up so you don't have to.

Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.


New AI Opportunities


Real Estate, Workplace Programs and User Experience Lead

Intrinsic
United States
Full-time
Remote: No
Intrinsic is an AI robotics group at Google aiming to reimagine the potential of industrial robotics. Our team believes that advances in AI, perception and simulation will redefine what's possible for industrial robotics in the near future, with software and data at the core. Our mission is to make industrial robotics intelligent, accessible, and usable for millions more businesses, entrepreneurs, and developers. We are a dynamic team of engineers, roboticists, designers, and technologists who are passionate about unlocking the creative and economic potential of industrial robotics.

Role

As a Senior AI Research Scientist for Perception for Contact-Rich Manipulation, you will lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. You will explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Your work will involve creating models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. You will collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities that can be deployed on physical hardware for industrial applications.

How your work moves the mission forward
- Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios.
- Design algorithms that enable robots to manipulate complex or deformable objects with high precision.
- Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware.
- Evaluate model performance in both simulation and real-world environments to ensure robustness and reliability.
- Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems.
- Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.

Skills you will need to be successful
- PhD in Computer Science, Robotics, or a related field with a focus on machine learning or computer vision.
- 3 years of experience in applied research focused on robotic manipulation or robot learning.
- Proficiency in programming with Python and C++.
- Experience with deep learning frameworks such as PyTorch, JAX, or TensorFlow.
- Experience developing algorithms for vision-based manipulation or contact-rich interaction.
- Publication record in top-tier robotics or AI conferences (e.g., ICRA, IROS, CVPR, NeurIPS).

Skills that will differentiate your candidacy
- Experience with reinforcement learning or imitation learning for robotics.
- Familiarity with physics simulators like MuJoCo, Isaac Sim, or Gazebo.
- Experience integrating tactile sensors with visual perception systems.
- Experience in LfD (Learning from Demonstrations) and kinesthetic learning.
- Background in sim-to-real transfer techniques for manipulation policies.
- Experience with transformer-based architectures or foundation models in a robotics context.
- Experience deploying machine learning models on edge compute hardware.

At Intrinsic, we are proud to be an equal opportunity workplace. Employment at Intrinsic is based solely on a person's merit and qualifications directly related to professional competence. Intrinsic does not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), or any other basis protected by law. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. It is Intrinsic's policy to comply with all applicable national, state and local laws pertaining to nondiscrimination and equal opportunity. If you have a disability or special need that requires accommodation, please contact us at: candidate-support@intrinsic.ai.
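For a concrete flavor of the work, here is a minimal sketch of a vision-conditioned manipulation policy in PyTorch, one of the frameworks the posting names. The architecture, input size, and joint count are illustrative assumptions, not Intrinsic's actual stack.

```python
# A minimal sketch (not Intrinsic's actual stack) of a vision-conditioned
# policy: a small CNN encoder over camera frames feeding an action head
# that outputs normalized per-joint commands.
import torch
import torch.nn as nn

class VisuomotorPolicy(nn.Module):
    def __init__(self, num_joints: int = 7):  # 7 joints is a hypothetical arm
        super().__init__()
        # Small CNN encoder over RGB observations.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # MLP head mapping visual features to per-joint actions.
        self.head = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, num_joints), nn.Tanh(),  # commands in [-1, 1]
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(rgb))

policy = VisuomotorPolicy()
action = policy(torch.randn(1, 3, 128, 128))  # one camera frame
print(action.shape)  # torch.Size([1, 7])
```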

Machine Learning Engineer (Semantic Scene Understanding)

Harmattan AI
France
Full-time
Remote: No
About Us

Harmattan AI is a next-generation defense prime building autonomous and scalable defense systems. Following the close of a $200M Series B valuing the company at $1.4 billion, we are expanding our teams and capabilities to deliver mission-critical systems to allied forces.

Our work is guided by clear values: building technologies with real-world impact, pursuing excellence in everything we do, setting ambitious goals, and taking on the hardest technical challenges. We operate in a demanding environment where rigor, ownership, and execution are expected.

About the Role

We are looking for a Machine Learning Engineer to join our Semantic Scene Understanding team in Paris. In this role, you will design the core algorithms that extract semantic information in real time from the theatre of operations, as seen through the cameras of our UAVs, to improve the operator's scene understanding.

Responsibilities
- Design and Train: Develop state-of-the-art machine learning algorithms for semantic segmentation, object detection, and classification tailored to aerial imagery.
- Advanced Feature Extraction: Build high-level tactical features on top of base semantic data, such as real-time road vectorization, trafficability analysis, and dynamic obstacle mapping.
- Multi-Agent Fusion: Architect pipelines that temporally and spatially align semantic data from multiple moving UAVs into a cohesive Common Operational Picture (COP).
- Edge Optimization: Optimize and deploy these algorithms directly into our tactical C2 platform, using quantization, pruning, and hardware acceleration to meet strict real-time compute constraints.

Candidate Requirements
- Educational Background: MSc in Computer Science, Machine Learning, or a related field. A PhD is a strong plus.
- Foundational Knowledge: Deep understanding of machine learning theory, linear algebra, and 3D geometry algorithms.
- Core Tech Stack: Expert-level command of Python and deep learning frameworks (PyTorch).
- Performance Engineering: Experience with C++ and inference optimization frameworks (e.g., TensorRT, ONNX Runtime, CUDA) is highly desirable.
- Domain Experience (Plus): A track record of shipping CV/ML algorithms in production, particularly for edge/embedded systems or involving aerial (EO/IR) imagery.
- Strong Ownership: Ability to take a feature from an arXiv paper all the way to a ruggedized tactical PC.
- Adaptability & Mission Focus: Thrives in a fast-paced startup environment and is fully dedicated to building ethical defense technologies that bring a strategic edge to allied nations.
- Communication: Excellent verbal and written communication skills to collaborate effectively with software engineers and hardware teams.

We look forward to hearing how you can help shape the future of autonomous defense systems at Harmattan AI.
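The posting names ONNX Runtime among its inference optimization tools. Below is a minimal sketch of running an exported segmentation model with it; the model file, input size, and single-output assumption are hypothetical, not Harmattan AI's pipeline.

```python
# A minimal sketch (hypothetical model and paths, not Harmattan AI's
# pipeline) of semantic-segmentation inference with ONNX Runtime.
import numpy as np
import onnxruntime as ort

# Load a segmentation network previously exported to ONNX.
session = ort.InferenceSession(
    "segmenter.onnx",  # hypothetical artifact
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
input_name = session.get_inputs()[0].name

# One aerial RGB frame, normalized to [0, 1], NCHW layout.
frame = np.random.rand(1, 3, 512, 512).astype(np.float32)

# Assumes a single output of shape (1, num_classes, H, W);
# argmax over the class axis gives a per-pixel class map.
(logits,) = session.run(None, {input_name: frame})
class_map = logits.argmax(axis=1)
print(class_map.shape)  # (1, 512, 512)
```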

Researcher, Safety & Privacy

OpenAI
$295,000 – $445,000
United States
Full-time
Remote: No
About the Team

Our Safety Systems org ensures that OpenAI's most capable models can be responsibly developed and deployed. We build evaluations, safeguards, and safety frameworks that help our models behave as intended in real-world settings.

About the Role

We are seeking a Researcher in Privacy-Preserving Safety to help design and build the next generation of privacy-preserving safety systems for frontier AI models. This role sits at the intersection of AI safety, security, and privacy, with a focus on developing auditable, privacy-first mechanisms that enable robust harm detection and mitigation without exposing sensitive user data.

You will help define and operationalize frameworks for identifying and addressing frontier risks (e.g., bioweapon instructions, malware creation, suicide/self-harm risks, jailbreaks), while ensuring that privacy guarantees remain intact, even under adversarial conditions. This role is central to our long-term goal of scaling our automated privacy-preserving safety systems to mitigate potential harms while minimizing human review. You'll work on foundational problems such as privacy-preserving monitoring, algorithmic auditing, secure enclaves, and adversarially robust safety enforcement protocols, helping ensure that safety systems scale without compromising user trust.

In this role, you will:
- Design and implement privacy-first architectures for detecting and mitigating harmful model behaviors.
- Build frameworks for auditable, private identification of high-risk content (jailbreaks, cyber threats, or weaponization instructions).
- Develop strict, auditable mechanisms triggered only by harm signals.
- Drive the development of automated safety systems that preserve privacy at every level.

You might thrive in this role if you:
- Are a researcher with a deep interest in privacy, security, and AI safety, motivated by building systems that are both trustworthy and effective at scale.
- Hold a PhD or equivalent experience in Computer Science, Cryptography, Security, Machine Learning, or related fields.
- Have the ability to translate ambiguous problem spaces into formal frameworks and deployable systems.
- Demonstrate proficiency in one or more of: privacy-preserving computation (e.g., secure enclaves, MPC, differential privacy); security and adversarial systems; machine learning safety or alignment; designing robust systems under adversarial threat models.
- Have experience with AI safety, jailbreak detection, or model alignment.
- Are familiar with privacy-preserving machine learning techniques, algorithmic auditing, and/or secure system design.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.

For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement. Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates.

For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
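One of the primitives the role names is differential privacy. As a minimal illustration (not OpenAI's production mechanism), the Laplace mechanism below releases a noisy count whose noise scale is calibrated to the query's sensitivity:

```python
# A minimal sketch of one privacy-preserving primitive the posting names
# (differential privacy): the Laplace mechanism for a count query.
# Illustrative only; not OpenAI's production mechanism.
import numpy as np

def private_count(flags: list[bool], epsilon: float = 1.0) -> float:
    """Return a count with Laplace noise calibrated to sensitivity 1.

    Adding or removing one record changes a count by at most 1, so
    noise drawn from Laplace(0, 1/epsilon) gives epsilon-DP.
    """
    true_count = sum(flags)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# e.g. privately estimate how many sampled items tripped a harm signal
signals = [True, False, True, True, False]
print(private_count(signals, epsilon=0.5))
```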

Forward Deployed Engineer - Sydney

OpenAI
Australia
Full-time
Remote: No
About the team

OpenAI's Forward Deployed Engineering team partners with customers to turn research breakthroughs into production systems. We operate at the intersection of customer delivery and core platform development.

About the role

Forward Deployed Engineers (FDEs) lead complex end-to-end deployments of frontier models in production alongside our most strategic customers. You will own discovery, technical scoping, system design, build, and production rollout, partnering directly with customer engineering and domain teams. You will measure success through production adoption, measurable workflow impact, and eval-driven feedback that changes product and model roadmaps. You'll work closely with our Product, Research, Partnerships, GRC, Security, and GTM teams.

This role is based in Sydney. We use a hybrid work model of 3 days in the office per week. We offer relocation assistance. Travel up to 50% is required.

In this role you will:
- Own technical delivery across multiple deployments from first prototype to stable production.
- Build full-stack systems that deliver customer value and sharpen how we learn.
- Embed closely with customer teams, understand their needs, and guide adoption of what you build.
- Scope work, sequence delivery, and remove blockers early.
- Make trade-offs between scope, speed, and quality; adjust plans to protect delivery.
- Contribute directly in the code when progress or clarity depends on it.
- Codify working patterns into tools, playbooks, or building blocks that others can use.
- Share field feedback that helps Research and Product understand where the models succeed and where they can improve.
- Keep teams moving through clarity and follow-through.

You might thrive in this role if you:
- Bring 5+ years of engineering or technical deployment experience that includes customer-facing work.
- Have scoped and delivered complex systems in fast-moving or ambiguous environments.
- Write and review production-grade code across frontend and backend using Python, JavaScript, or comparable stacks.
- Have built or deployed systems powered by LLMs or generative models and understand how model behaviour affects product experience.
- Simplify complexity and make fast, sound decisions under pressure.
- Communicate clearly with engineers, product teams, and customer stakeholders.
- Spot risks early and adjust without slowing down.
- Model calm and judgment when the stakes are high.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.

For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement. Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates.

For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Forward Deployed AI Engineer

Talent Labs
United States
Full-time
Remote: No
The opportunity

We are looking for a Forward Deployed AI Engineer to serve as the critical bridge between Latent Labs' frontier generative models and the customers who rely on them. You will work directly with pharmaceutical and biotech customers to deploy, integrate and optimise our technology within their scientific workflows. This is a highly technical, customer-facing role that combines deep infrastructure expertise with a passion for solving real-world problems in drug discovery and protein engineering.

You will work closely with our customers, understanding their unique technical environments and ensuring that our generative biology platform integrates seamlessly with their systems. You will own the full lifecycle of customer deployments, from initial technical scoping through to production-grade delivery, and act as the voice of the customer back into our product and research teams.

Who we are

At Latent Labs, we are building frontier models that learn the fundamentals of biology. We pursue ambitious goals with curiosity and are committed to scientific excellence. Before building Latent Labs, our team co-developed DeepMind's Nobel-prize-winning AlphaFold, invented latent diffusion, and built pioneering lab data management systems as well as high-throughput protein screening platforms. At Latent Labs you will be working with some of the brightest minds in generative AI and biology.

Our team is committed to interdisciplinary exchange, continuous learning and collaboration. Team offsites help us foster a culture of trust across our London and San Francisco sites. We're looking for innovators passionate about tackling complex challenges and maximizing positive global impact. Join us on our moonshot mission.

Who you are
- You have a strong CS or ML educational background. You hold a degree (BSc, MSc or PhD) in Computer Science, Machine Learning, or a closely related quantitative field. You have a solid grounding in software engineering principles and modern ML frameworks.
- You have built systems that access large models via APIs. You have significant experience designing, deploying and maintaining infrastructure for large-scale model serving and have hands-on experience building robust API layers around ML models.
- You are customer-facing and delivery-oriented. You have direct experience deploying AI systems for external customers. You can translate complex technical concepts into clear language for non-technical stakeholders and thrive in environments where customer success is the primary measure of your work.
- You are fluent in cloud infrastructure. You have hands-on experience with AWS and ideally other major cloud platforms (GCP, Azure). You are comfortable with containerisation (Docker, Kubernetes), CI/CD pipelines, and cloud-native architectures.
- You are a strong communicator and collaborator. You work effectively across functions, with research scientists and business executives alike. You are comfortable leading technical discussions, writing clear documentation, and presenting solutions to senior stakeholders at partner organisations.
- You are mission-driven and adaptable. You are passionate about making a positive impact on the world, whether it's for patients, customers or beyond. You thrive in a dynamic, fast-paced environment where priorities can shift and you need to context-switch between multiple customer engagements.

What sets you apart
- You have experience with bio or protein design models. You have worked on ML-driven projects in computational biology, protein design, or related life science domains. You understand the unique data challenges and evaluation paradigms of biological modelling.
- You have contributed to generative modelling innovation. You have a track record of novel contributions to generative modelling, whether through publications, open-source work, or impactful product features.
- You have built production enterprise software. You have experience delivering software that meets enterprise-grade requirements for security, compliance, auditability and uptime. You understand the difference between a prototype and a production system.
- You have pharma or biotech industry experience. You understand the regulatory landscape, data governance requirements and scientific workflows common in pharmaceutical and biotech organisations.

Your responsibilities

Customer deployment & integration:
- Drive the end-to-end technical deployment of Latent Labs models into customer environments, ensuring seamless integration with existing scientific and IT infrastructure.
- Design and build production-grade API integrations, data pipelines and model-serving infrastructure tailored to each customer's requirements.
- Work on-site or embedded with pharma and biotech partners to scope technical requirements, troubleshoot issues and deliver solutions.
- Ensure deployments meet enterprise standards for security, performance and reliability.

Customer advocacy & product feedback:
- Serve as the technical point of contact for assigned customers, building trusted relationships with their scientific and engineering teams, including spending time working on-site at international partner locations as needed.
- Gather and synthesise customer feedback, translating it into actionable insights for our product, research and platform teams.
- Collaborate with internal teams to shape the product roadmap based on real-world deployment learnings.
- Create technical documentation, integration guides and best-practice resources for customers.

Self development:
- Stay on top of the latest developments in ML infrastructure, model serving and cloud-native tooling.
- Gain a strong working understanding of protein and cell biology as it relates to our product.
- Participate in knowledge sharing, e.g. organise and present at our internal reading group.

Apply

We offer strongly competitive compensation and benefits packages, including:
- Private health insurance
- Pension contributions
- Generous leave policies (including gender-neutral parental leave)
- Hybrid working
- Travel opportunities and more

We also offer a stimulating work environment, and the opportunity to shape the future of synthetic biology through the application of breakthrough generative models. We welcome applicants from all backgrounds and we are committed to building a team that represents a variety of backgrounds, perspectives, and skills.
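The role centers on building API layers around models. A minimal sketch of such a layer using FastAPI follows; the endpoint, request schema, and placeholder model call are hypothetical, not Latent Labs' API.

```python
# A minimal sketch (hypothetical endpoint and model, not Latent Labs'
# API) of a thin serving layer around a generative model, using FastAPI.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class DesignRequest(BaseModel):
    sequence: str          # input protein sequence
    num_samples: int = 4   # how many candidates to generate

class DesignResponse(BaseModel):
    candidates: list[str]

@app.post("/v1/design", response_model=DesignResponse)
def design(req: DesignRequest) -> DesignResponse:
    # Placeholder for a real model call; here we simply echo the input.
    candidates = [req.sequence for _ in range(req.num_samples)]
    return DesignResponse(candidates=candidates)

# Run locally with: uvicorn app:app --reload
```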

Regional Sales Lead, Singapore

Tenstorrent
$100,000 – $500,000
United States
Full-time
Remote: No
Tenstorrent is leading the industry on cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch, and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.

Tenstorrent is seeking a Physical Design Engineer to lead cross-functional efforts to solve complex physical design challenges and develop end-to-end RTL-to-GDS methodologies across advanced nodes, with a strong focus on PPA and runtime improvements. The engineer will architect, integrate, and deploy AI/ML-driven solutions into production physical design flows, creating custom CAD tools and partnering with internal teams and EDA vendors to drive next-generation, ML-enabled capabilities.

This role is hybrid, based out of Santa Clara, CA; Austin, TX; or Fort Collins, CO. We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.

Who you are
- BS in Electrical or Computer Engineering (or equivalent experience) with 5+ years in physical design CAD methodology at advanced nodes.
- Proven track record improving PPA and/or runtime on high-performance, low-power taped-out designs.
- Hands-on with industry-standard EDA tools (e.g., Fusion Compiler) across synthesis, P&R, STA, signoff, and hierarchical flows.
- Strong Python/Tcl and data skills, with interest or experience in ML frameworks (PyTorch, TensorFlow), and the ability to drive complex projects independently.

What we need
- Lead and contribute to cross-functional efforts solving complex physical design challenges across IPs, projects, and advanced technology nodes.
- Develop and enhance RTL-to-GDS methodologies, including floorplanning, synthesis, P&R, STA, signoff, and assembly.
- Architect and deploy AI/ML-driven solutions in production flows to improve engineering efficiency, turnaround time, and QoR.
- Optimize EDA tools and custom CAD flows using data-driven and ML-based techniques, in close collaboration with verification, extraction, timing, DFT, and EDA vendors.

What you will learn
- How to scale AI/ML-driven methodologies across diverse products and advanced technology nodes in real production flows.
- New ways to blend classical EDA algorithms with modern ML techniques to push PPA and runtime limits.
- Best practices for deploying, validating, and monitoring ML models in production CAD environments.
- How to influence next-generation ML-enabled EDA tools and collaborate deeply with cross-functional teams (PV, extraction, timing, DFT).

Compensation for all engineers at Tenstorrent ranges from $100k to $500k, including base and variable compensation targets. Experience, skills, education, background and location all impact the actual offer made. Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.

This position requires access to technology that requires a U.S. export license for persons whose most recent country of citizenship or permanent residence is a U.S. EAR Country Groups D:1, E1, or E2 country. This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E1, and E2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.
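For a rough sense of the "ML in physical design" theme, here is a minimal sketch, with entirely hypothetical features and data, of fitting a small PyTorch model to predict post-route worst negative slack (WNS) from flow knob settings, so costly P&R runs can be triaged before launch. Illustrative only, not Tenstorrent's flow.

```python
# A minimal sketch (hypothetical knobs and data, not Tenstorrent's flow)
# of ML-assisted QoR prediction: regress measured WNS against P&R knobs.
import torch
import torch.nn as nn

# Columns: target clock period (ns), utilization, max routing layer.
knobs = torch.tensor([[1.0, 0.65, 12.],
                      [0.9, 0.70, 14.],
                      [0.8, 0.75, 14.],
                      [1.1, 0.60, 10.]])
wns = torch.tensor([[0.05], [-0.02], [-0.11], [0.12]])  # measured WNS (ns)

model = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):  # tiny training loop over historical runs
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(knobs), wns)
    loss.backward()
    opt.step()

# Score a candidate knob setting before committing a full P&R run.
print(model(torch.tensor([[0.95, 0.68, 12.]])).item())
```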

Head of ISV Partnerships, Experience GTM

Tenstorrent
$100,000 – $500,000
United States
Full-time
Remote: No

Manager, Commercial Sales - Industry Expansion

Gong
$148,000 – $225,000
Full-time
Remote: No
Gong harnesses the power of AI to transform how revenue teams win. The Gong Revenue AI Operating System unifies data, insights, and workflows into a single, trusted system that observes, guides, and acts alongside the world's most successful revenue teams. Powered by the Gong Revenue Graph, AI-powered intelligence, specialized agents, and trusted applications, Gong helps more than 5,000 companies around the world deeply understand their teams and customers, automate critical sales workflows, and close more deals with less effort. For more information, visit www.gong.io.

At Gong, you will join a company built on innovative products, ambitious goals, and passionate people. We are shaping the future of revenue intelligence and we want people who are excited to build what comes next. You will work with a team that dreams big, moves fast, and cares deeply about the craft and about each other. Here, transparency and trust are core to how we operate, and every person has the opportunity to make a visible impact. If you want to grow, stretch, and do work that truly matters, Gong is the place to do the best work of your career.

Gong is seeking a hands-on Staff, AI Enablement and Innovation professional to own our internal AI operating model. Sitting within our IT organization, this role is the heartbeat of our internal digital transformation. You will empower our internal teams by bridging the gap between high-level business discovery and deep technical execution. You will be the primary architect of Gong's internal agentic strategy, responsible for "mining" the business for efficiency opportunities while simultaneously building the centralized orchestration layer that ensures our enterprise AI spend is governed, consistent, and scalable. This is a high-impact IC (Individual Contributor) role designed for a "scrappy builder" who thrives on turning internal complexity into streamlined, automated excellence.

RESPONSIBILITIES

Strategy & Governance (The "Guardrails"):
- Define the Roadmap: Partner with Security, Legal, and business leaders to define the internal AI roadmap.
- Own the Stack: Operate the enterprise AI stack, including LLMs, vector databases, and gateways.
- Standardization: Enforce consistent patterns for tool calling, prompt versioning, state management, and error handling to prevent fragmented, ad-hoc agent implementations.
- Lifecycle Management: Manage the full model lifecycle, from evaluation and testing to upgrades and deprecations.

Discovery & Execution (The "Gold Mining"):
- Business Partnership: Proactively interview teams (Talent, Support, Sales) to identify manual workflows that can be automated via agentic AI.
- Proof of Efficacy: Build and deploy POCs independently to demonstrate ROI before scaling.

Financial & Performance Operations (The "Numbers"):
- Cost Management: Own the token procurement process and build forecasting/chargeback models to prevent uncontrolled spend.
- Performance Monitoring: Build dashboards to track SLAs/SLOs (latency, accuracy, uptime) and monitor usage, cost, and error rates.
- Optimization: Proactively identify opportunities for cost saving (e.g., model switching) and performance tuning.

QUALIFICATIONS
- The Persona: You are a Senior IT Business Analyst, Technical Implementation Lead, or Solutions Architect.
- Technical Depth: Practical, hands-on experience with the modern AI stack (OpenAI, Gemini, Anthropic, vector DBs). You understand the nuances of state management and prompt versioning.
- Scrappy Builder: You have a "hands-on-keyboard" mentality. You can take an idea from a stakeholder and turn it into a working agentic workflow without needing external engineering resources.
- Business Acumen: Ability to translate complex technical AI patterns into clear business value and ROI for stakeholders.
- Operational Rigor: Experience managing vendor relationships, forecasting technical costs (tokens), and maintaining system uptime/SLAs.

YOU ARE
- Orchestration: Experienced with LangChain or similar agentic frameworks.
- AI Tooling: Prompt Flow, vector databases, and API integration.
- Data & Analytics: Ability to build performance and cost-tracking dashboards (SQL, Tableau, etc.).

PERKS & BENEFITS
- We offer Gongsters a variety of medical, dental, and vision plans, designed to fit you and your family's needs.
- Wellbeing Fund: flexible wellness stipend to support a healthy lifestyle.
- Mental health benefits with covered therapy and coaching.
- 401(k) program to help you invest in your future.
- Education & learning stipend for personal growth and development.
- Flexible vacation time to promote a healthy work-life blend.
- Paid parental leave to support you and your family.
- Company-wide recharge days each quarter.
- Work from home stipend to help you succeed in a remote environment.

The annual salary hiring range for this position is $148,000 - $225,000 USD. Compensation is based on factors unique to each candidate, including, but not limited to, job-related skills, qualification, education, experience, and location. At Gong, we have a location-based compensation structure, which means there may be a different range for candidates in other locations. The total compensation package for this position, in addition to base compensation, may include incentive compensation, bonus, equity, and benefits. Some of our sales compensation programs also offer the potential to achieve above targeted earnings for those who exceed their sales targets.

We are always looking for outstanding Gongsters! So if this sounds like something that interests you regardless of compensation, please reach out. We may have more roles for you to consider and would love to connect.

We have noticed a rise in recruiting impersonations across the industry, where scammers attempt to access candidates' personal and financial information through fake interviews and offers. All Gong recruiting email communications will always come from the @gong.io domain. Any outreach claiming to be from Gong via other sources should be ignored.

Gong is an equal-opportunity employer. We believe that diversity is integral to our success, and do not discriminate based on race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, military status, genetic information, or any other basis protected by applicable law. To review Gong's privacy policy, visit https://www.gong.io/gong-io-job-candidates-privacy-notice/ for more details.
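A minimal sketch of the agentic pattern this role would standardize: a loop that routes model tool calls through a registry with consistent error handling. The llm callable and lookup_ticket tool are hypothetical stand-ins, not Gong's stack or any vendor SDK.

```python
# A minimal sketch of a tool-calling agent loop with a tool registry and
# consistent error handling. The `llm` client is a hypothetical stand-in,
# not a specific vendor SDK.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_ticket": lambda ticket_id: f"status of {ticket_id}: open",
}

def run_agent(llm, user_message: str, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        # llm returns {"tool": ..., "args": ...} or {"final": ...}
        reply = llm(history)
        if "final" in reply:
            return reply["final"]
        tool = TOOLS.get(reply["tool"])
        # Consistent error handling: unknown tools fail loudly, and results
        # are appended back so the model can continue its plan.
        result = tool(reply["args"]) if tool else f"error: no tool {reply['tool']}"
        history.append({"role": "tool", "content": result})
    return "error: step budget exhausted"
```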

Staff Engineer, G&C (R4763)

Shield AI
$180,000 – $280,000
United States
Full-time
Remote: No
Founded in 2015, Shield AI is a venture-backed deep-tech company with the mission of protecting service members and civilians with intelligent systems. Its products include the V-BAT and X-BAT aircraft, Hivemind Enterprise, and the Hivemind Vision product lines. With offices and facilities across the U.S., Europe, the Middle East, and the Asia-Pacific, Shield AI's technology actively supports operations worldwide. For more information, visit www.shield.ai. Follow Shield AI on LinkedIn, X, Instagram, and YouTube.

Job Description

Founded in 2015, Shield AI is a venture-backed defense technology company whose mission is to protect service members and civilians with intelligent systems. In pursuit of this mission, Shield AI is building the world's best AI pilot. Its AI pilot, Hivemind, has flown a fighter jet (F-16), a vertical takeoff and landing drone (V-BAT), and a quadcopter (Nova). The company has offices in San Diego, Dallas, Washington DC and abroad. Shield AI's products and people are currently in the field actively supporting operations with the U.S. Department of Defense and U.S. allies.

As a Guidance and Controls engineer, you will be responsible for creating and maintaining all control and autonomy algorithms within the X-BAT code base. This includes algorithm development, unit tests, component tests, flight software qualification and flight test support. You will also be responsible for helping update and validate the truth models as required.

Required qualifications:
- Typically requires a minimum of 7 years of related experience with a Bachelor's degree; or 3 years and a Master's degree; or a PhD with 2 years of experience; or equivalent experience.
- Proven track record of successfully shipping products, showcasing the ability to navigate through development cycles, overcome obstacles, and deliver high-quality solutions to meet project deadlines and exceed expectations in a fast-paced environment.
- Excellent problem-solving and analytical skills, with a focus on delivering user-centric software solutions.

Preferred qualifications:
- Familiarity with continuous integration/delivery and test-driven development.
- Experience working with robotics and/or control systems, specifically unmanned aerial systems.

$180,000 - $280,000 a year

Full-time regular employee offer package: pay within range listed + bonus + benefits + equity. Temporary employee offer package: pay within range listed above + temporary benefits package (applicable after 60 days of employment). Salary compensation is influenced by a wide array of factors including but not limited to skill set, level of experience, licenses and certifications, and specific work location. All offers are contingent on a cleared background and possible reference check. Military fellows and part-time employees are not eligible for benefits. Please speak to your talent acquisition representative for more information.

Shield AI is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender identity or Veteran status. If you have a disability or special need that requires accommodation, please let us know.
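As a minimal illustration of a guidance-and-control building block (illustrative gains and signals, not Shield AI's flight code), a PD attitude controller tracking a commanded pitch angle might look like:

```python
# A minimal sketch of a PD attitude controller: elevator command from
# pitch error and pitch rate. Illustrative gains; not Shield AI's code.
def pd_pitch_command(theta_cmd: float, theta: float, theta_rate: float,
                     kp: float = 4.0, kd: float = 0.8) -> float:
    """Return an elevator command from pitch error and pitch rate."""
    error = theta_cmd - theta
    # Proportional action drives the error to zero; the rate term damps
    # the response to avoid overshoot.
    return kp * error - kd * theta_rate

# One control step: commanded 5 deg, current 2 deg, pitching up at 1 deg/s.
print(pd_pitch_command(5.0, 2.0, 1.0))  # 11.2
```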

Mathematics PhDs - AI Trainer

Handshake
$75 / hour
United States
Contractor
Remote: No
Opportunity Overview

Handshake is looking for Mathematics PhDs to support AI research through flexible, hourly contract work. This is not a traditional academic or industry research role. You'll draw on your mathematical expertise to evaluate AI-generated content and provide feedback that helps AI better understand mathematical reasoning, proof construction, and technical problem-solving. This is an ongoing, project-based opportunity you can take on alongside anything else you have going on.

Who This Is For

This is a good fit if you're a current or graduated Mathematics PhD with deep expertise in one or more of the following areas:
- Pure or applied mathematics, analysis, algebra, or topology
- Probability, statistics, or stochastic processes
- Discrete mathematics, combinatorics, or number theory
- Computational or numerical mathematics
- Mathematical physics or mathematical biology

What You'll Do

You'll use your mathematical expertise to create domain-relevant questions and review AI-generated responses for accuracy, rigor, and relevance to real-world mathematical research and practice. No prior AI or technical experience is required.

Qualifications

We're looking for people who have:
- Current enrollment in or a completed PhD in Mathematics or a closely related field
- Deep expertise in at least one area of mathematics at the doctoral level
- Strong written communication skills and attention to detail
- The ability to work independently and follow written guidelines

Application Process
1. Create a Handshake account
2. Upload your resume and verify your identity
3. Get matched and onboarded into relevant projects
4. Start working and earning

Work Model and Project Details
- Status: Independent contractor (not a full-time employee role)
- Location: Fully remote; work from anywhere with a reliable internet connection and access to a desktop or laptop computer
- Schedule: Flexible and asynchronous, with no minimum hour requirement. Many contributors work approximately 5–20 hours per week when assigned to an active project
- Duration: The Handshake AI program runs year-round, with projects opening periodically across different areas of expertise. Placement depends on current project needs, with opportunities to be considered for future projects as they become available

Work Authorization

Must be currently residing in the United States. F-1 students who are eligible for CPT or OPT may be eligible for projects on Handshake AI. Work with your Designated School Official to determine your eligibility. If your school requires a CPT course, Handshake AI may not meet your school's requirements. STEM OPT is not supported. See our Help Center article for more information on what types of work authorizations are supported on Handshake AI.

Sr. Partnerships Manager, Model Ecosystem

Together AI
$200,000 – $280,000
Full-time
Remote: No
About the Role

The Turbo team sits at the intersection of efficient inference (algorithms, architectures, engines) and post-training / RL systems. We build and operate the systems behind Together's API, including high-performance inference and RL/post-training engines that can run at production scale. Our mandate is to push the frontier of efficient inference and RL-driven training: making models dramatically faster and cheaper to run, while improving their capabilities through RL-based post-training (e.g., GRPO-style objectives).

This work lives at the interface of algorithms and systems: asynchronous RL, rollout collection, scheduling, and batching all interact with engine design, creating many knobs to tune across the RL algorithm, training loop, and inference stack. Much of the job is modifying production inference systems, for example SGLang- or vLLM-style serving stacks and speculative decoding systems such as ATLAS, grounded in a strong understanding of post-training and inference theory rather than purely theoretical algorithm design. You'll work across the stack, from RL algorithms and training engines to kernels and serving systems, to build and improve frontier models via RL pipelines. People on this team are often spiky: some are more RL-first, some are more systems-first. Depth in one of these areas plus appetite to collaborate across (and grow toward more full-stack ownership over time) is ideal.

Requirements

We don't expect anyone to check every box below. People on this team typically have deep expertise in one or more areas and enough breadth (or interest) to work effectively across the stack. The closer you are to full-stack (inference + post-training/RL + systems), the stronger the fit, but being spiky in one area and eager to grow is absolutely okay.

You might be a good fit if you:
- Have strong expertise in at least one of the following, and are excited to collaborate across (and grow into) the others:
  - Systems-first profile: large-scale inference systems (e.g., SGLang, vLLM, FasterTransformer, TensorRT, custom engines, or similar), GPU performance, distributed serving.
  - RL-first profile: RL / post-training for LLMs or large models (e.g., GRPO, RLHF/RLAIF, DPO-like methods, reward modeling), and using these to train or fine-tune real models.
  - Model architecture design for Transformers or other large neural nets.
  - Distributed systems / high-performance computing for ML.
- Are comfortable working from algorithms to engines: strong coding ability in Python; experience profiling and optimizing performance across GPU, networking, and memory layers; able to take a new sampling method, scheduler, or RL update and turn it into a production-grade implementation in the engine and/or training stack.
- Have a solid research foundation in your area(s) of depth: a track record of impactful work in ML systems, RL, or large-scale model training (papers, open-source projects, or production systems); can read new RL / post-training papers, understand their implications on the stack, and design minimal, correct changes in the right layer (training engine vs. inference engine vs. data / API).
- Operate well as a full-stack problem solver: you naturally ask "Where in the stack is this really bottlenecked?"; you enjoy collaborating with infra, research, and product teams, and you care about both scientific quality and user-visible wins.

Minimum qualifications
- 3+ years of experience working on ML systems, large-scale model training, inference, or adjacent areas (or equivalent experience via research / open source).
- Advanced degree in Computer Science, EE, or a related field, or equivalent practical experience.
- Demonstrated experience owning complex technical projects end-to-end.

If you're excited about the role and strong in some of these areas, we encourage you to apply even if you don't meet every single requirement.

Responsibilities

Advance inference efficiency end-to-end:
- Design and prototype algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference.
- Implement and maintain changes in high-performance inference engines (e.g., SGLang- or vLLM-style systems and Together's inference stack), including kernel backends, speculative decoding (e.g., ATLAS), quantization, etc.
- Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost.

Unify inference with RL / post-training:
- Design and operate RL and post-training pipelines (e.g., RLHF, RLAIF, GRPO, DPO-style methods, reward modeling) where 90+% of the cost is inference, jointly optimizing algorithms and systems.
- Make RL and post-training workloads more efficient with inference-aware training loops, for example async RL rollouts, speculative decoding, and other techniques that make large-scale rollout collection and evaluation cheaper.
- Use these pipelines to train, evaluate, and iterate on frontier models on top of our inference stack.
- Co-design algorithms and infrastructure so that objectives, rollout collection, and evaluation are tightly coupled to efficient inference, and quickly identify bottlenecks across the training engine, inference engine, data pipeline, and user-facing layers.
- Run ablations and scale-up experiments to understand trade-offs between model quality, latency, throughput, and cost, and feed these insights back into model, RL, and system design.

Own critical systems at production scale:
- Profile, debug, and optimize inference and post-training services under real production workloads.
- Drive roadmap items that require real engine modification, changing kernels, memory layouts, scheduling logic, and APIs as needed.
- Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously.

Provide technical leadership (Staff level):
- Set technical direction for cross-team efforts at the intersection of inference, RL, and post-training.
- Mentor other engineers and researchers on full-stack ML systems work and performance engineering.

About Together AI

Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.

Compensation

We offer competitive compensation, startup equity, health insurance and other competitive benefits. The US base salary range for this full-time position is $200,000 - $280,000 + equity + benefits. Our salary ranges are determined by location, level and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Equal Opportunity

Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more. Please see our privacy policy at https://www.together.ai/privacy
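The posting's mention of GRPO-style objectives can be made concrete with a small sketch: GRPO computes advantages relative to a group of rollouts for the same prompt rather than against a learned value baseline. The fragment below is illustrative only, not Together AI's implementation.

```python
# A minimal sketch of the group-relative advantage at the heart of
# GRPO-style objectives: normalize each rollout's reward against the
# other rollouts sampled for the same prompt. Illustrative only.
import torch

def group_relative_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """rewards: (num_prompts, group_size) scalar rewards per rollout."""
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + 1e-6)  # normalize within each group

rewards = torch.tensor([[1.0, 0.0, 0.5, 0.5]])  # 4 rollouts, one prompt
print(group_relative_advantages(rewards))
```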

Customer Support Engineer (GPU Cluster)

Together AI
$200,000 – $280,000
United States
Full-time
Remote: No
About the Role The Turbo team sits at the intersection of efficient inference (algorithms, architectures, engines) and post‑training / RL systems. We build and operate the systems behind Together’s API, including high‑performance inference and RL/post‑training engines that can run at production scale. Our mandate is to push the frontier of efficient inference and RL‑driven training: making models dramatically faster and cheaper to run, while improving their capabilities through RL‑based post‑training (e.g., GRPO‑style objectives). This work lives at the interface of algorithms and systems: asynchronous RL, rollout collection, scheduling, and batching all interact with engine design, creating many knobs to tune across the RL algorithm, training loop, and inference stack. Much of the job is modifying production inference systems—for example, SGLang‑ or vLLM‑style serving stacks and speculative decoding systems such as ATLAS—grounded in a strong understanding of post‑training and inference theory, rather than purely theoretical algorithm design. You’ll work across the stack—from RL algorithms and training engines to kernels and serving systems—to build and improve frontier models via RL pipelines. People on this team are often spiky: some are more RL‑first, some are more systems‑first. Depth in one of these areas plus appetite to collaborate across (and grow toward more full‑stack ownership over time) is ideal. Requirements We don’t expect anyone to check every box below. People on this team typically have deep expertise in one or more areas and enough breadth (or interest) to work effectively across the stack. The closer you are to full‑stack (inference + post‑training/RL + systems), the stronger the fit—but being spiky in one area and eager to grow is absolutely okay. You might be a good fit if you: Have strong expertise in at least one of the following, and are excited to collaborate across (and grow into) the others: Systems‑first profile: Large‑scale inference systems (e.g., SGLang, vLLM, FasterTransformer, TensorRT, custom engines, or similar), GPU performance, distributed serving. RL‑first profile: RL / post‑training for LLMs or large models (e.g., GRPO, RLHF/RLAIF, DPO‑like methods, reward modeling), and using these to train or fine‑tune real models. Model architecture design for Transformers or other large neural nets. Distributed systems / high‑performance computing for ML. Are comfortable working from algorithms to engines: Strong coding ability in Python Experience profiling and optimizing performance across GPU, networking, and memory layers. Able to take a new sampling method, scheduler, or RL update and turn it into a production‑grade implementation in the engine and/or training stack. Have a solid research foundation in your area(s) of depth: Track record of impactful work in ML systems, RL, or large‑scale model training (papers, open‑source projects, or production systems). Can read new RL / post‑training papers, understand their implications on the stack, and design minimal, correct changes in the right layer (training engine vs. inference engine vs. data / API). Operate well as a full‑stack problem solver: You naturally ask: “Where in the stack is this really bottlenecked?” You enjoy collaborating with infra, research, and product teams, and you care about both scientific quality and user‑visible wins. 
Minimum qualifications
- 3+ years of experience working on ML systems, large-scale model training, inference, or adjacent areas (or equivalent experience via research / open source).
- Advanced degree in Computer Science, EE, or a related field, or equivalent practical experience.
- Demonstrated experience owning complex technical projects end-to-end.

If you're excited about the role and strong in some of these areas, we encourage you to apply even if you don't meet every single requirement.

Responsibilities
Advance inference efficiency end-to-end:
- Design and prototype algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference.
- Implement and maintain changes in high-performance inference engines (e.g., SGLang- or vLLM-style systems and Together's inference stack), including kernel backends, speculative decoding (e.g., ATLAS), and quantization.
- Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost.

Unify inference with RL / post-training:
- Design and operate RL and post-training pipelines (e.g., RLHF, RLAIF, GRPO, DPO-style methods, reward modeling) where 90+% of the cost is inference, jointly optimizing algorithms and systems. (A toy GRPO-style computation is sketched after this section.)
- Make RL and post-training workloads more efficient with inference-aware training loops—for example, async RL rollouts, speculative decoding, and other techniques that make large-scale rollout collection and evaluation cheaper.
- Use these pipelines to train, evaluate, and iterate on frontier models on top of our inference stack.
- Co-design algorithms and infrastructure so that objectives, rollout collection, and evaluation are tightly coupled to efficient inference, and quickly identify bottlenecks across the training engine, inference engine, data pipeline, and user-facing layers.
- Run ablations and scale-up experiments to understand trade-offs between model quality, latency, throughput, and cost, and feed these insights back into model, RL, and system design.

Own critical systems at production scale:
- Profile, debug, and optimize inference and post-training services under real production workloads.
- Drive roadmap items that require real engine modification—changing kernels, memory layouts, scheduling logic, and APIs as needed.
- Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously.

Provide technical leadership (Staff level):
- Set technical direction for cross-team efforts at the intersection of inference, RL, and post-training.
- Mentor other engineers and researchers on full-stack ML systems work and performance engineering.
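To ground the GRPO-style objectives named above, here is a minimal, illustrative computation of a group-relative advantage and the resulting policy-gradient surrogate. All tensors are random stand-ins; a production pipeline would add PPO-style clipping, a KL penalty to a reference policy, and batched rollout collection (where most of the cost lives).

```python
# Toy GRPO-style step (illustrative): sample a group of G rollouts per prompt,
# normalize rewards within the group to get relative advantages, and weight
# each rollout's log-probability by its advantage. No clipping or KL terms.
import torch

torch.manual_seed(0)
G = 4                                                      # rollouts per prompt
rewards = torch.tensor([0.2, 1.0, 0.0, 0.6])               # e.g., reward-model scores
logprobs = torch.randn(G, requires_grad=True)              # stand-in log pi(y|x)

adv = (rewards - rewards.mean()) / (rewards.std() + 1e-6)  # group-relative advantage
loss = -(adv * logprobs).mean()                            # policy-gradient surrogate
loss.backward()

print("advantages:", [round(a, 3) for a in adv.tolist()])
print("grad wrt logprobs:", [round(g, 3) for g in logprobs.grad.tolist()])
```

Because advantages are normalized within the group rather than by a learned value function, no critic is needed, which is part of why GRPO-style objectives pair well with cheap, large-batch rollout inference.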
About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.

Compensation
We offer competitive compensation, startup equity, health insurance, and other competitive benefits. The US base salary range for this full-time position is $200,000 – $280,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more. Please see our privacy policy at https://www.together.ai/privacy

Director, Data Center Operations
Together AI
$200,000 – $280,000
Full-time
Remote: false

Staff Analytics Engineer — Data Warehouse
Together AI
$200,000 – $280,000
Full-time
Remote: false

Director, Engineering, Proactive Offense
Horizon3.ai
$240,000 – $285,000
United States
Full-time
Remote: false
Get to Know Us
Horizon3.ai is a fast-growing, remote cybersecurity company dedicated to the mission of enabling organizations to proactively find, fix, and verify exploitable attack vectors before criminals exploit them. Our flagship product, the NodeZero™ platform, delivers production-safe autonomous pentests and other key assessment operations that scale across the largest internal, external, cloud, and hybrid cloud environments. NodeZero has been adopted by organizations of all sizes, from small educational institutions to government agencies and Global 100 enterprises. It is used by IT Ops/SecOps teams, consulting pentesters, and MSSPs and MSPs.

We are a fusion of former U.S. Special Operations cyber operators, startup engineers and operators, and formerly frustrated cybersecurity practitioners. We're committed to helping solve our common security problems: ineffective security tools and false positives resulting in alert fatigue, blind spots, "checkbox" security culture, the cybersecurity skills shortage, and the long lead time and expense of hiring outside consultants. Collectively, we are a team of learn-it-alls, committed to a culture of respect, collaboration, ownership, and results.

As a remote-first company, we require a minimum 25 Mbps consumer-grade broadband connection.

What You'll Do
As Director of Software Engineering – Offensive Security, you'll lead the strategy, design, and development of NodeZero's offensive capabilities, driving innovation in autonomous attack content, exploit development, and platform scalability. You'll manage multiple engineering teams, balancing deep technical expertise with product and organizational leadership. This role sits above our offensive engineering leaders and pods (Attack Engineering) and will play a key role in scaling the offensive product organization as we continue to expand NodeZero's reach and sophistication.

Key Responsibilities
- Leadership & Strategy: Lead and scale Horizon3.ai's Offensive Engineering organization, overseeing teams responsible for exploit development, offensive content, and attack automation within the NodeZero platform. Set clear technical and product direction for how NodeZero identifies, exploits, and validates vulnerabilities across large, complex environments.
- Product Ownership: Partner closely with Product, Precision Defense, and Platform teams to define and deliver offensive capabilities that directly influence the roadmap and enhance customer outcomes. Drive execution from proof-of-concept through production, transforming cutting-edge attack research into scalable, productized features.
- Technical Depth: Stay hands-on enough to guide architectural decisions and evaluate complex exploit and automation approaches. Mentor technical leads in building resilient, modular systems that power NodeZero's offensive testing engine.
- Team Building: Build, mentor, and scale diverse teams of software engineers, exploit developers, and offensive researchers. Foster a culture of collaboration, creativity, and engineering excellence that bridges traditional offensive and product software development.
- Cross-Functional Collaboration: Collaborate across engineering, product, and GTM teams to align offensive innovation with business priorities, and ensure delivery of measurable, impactful capabilities for customers.
This is a highly visible leadership role central to Horizon3.ai's mission of delivering continuous, autonomous security testing at scale.

What You'll Bring
- Proven experience leading and scaling engineering teams in offensive or cybersecurity product development, ideally within a fast-paced startup or growth-stage environment.
- Strong technical background in software development and system architecture, with hands-on experience in offensive security domains such as exploit development, vulnerability research, attack automation, or red teaming.
- Demonstrated success taking offensive capabilities or SaaS products from concept to market, including driving POCs, MVPs, and production launches.
- Deep understanding of distributed systems, automation pipelines, and large-scale SaaS platforms, with the ability to guide architectural and design decisions.
- A product-oriented mindset, skilled at balancing technical excellence, customer impact, and speed to market.
- Exceptional leadership and collaboration skills: experienced in managing managers, aligning cross-functional teams, and partnering effectively with Product and GTM stakeholders.
- Excellent analytical, communication, and storytelling abilities, capable of translating complex offensive engineering concepts into clear, actionable direction.
- High degree of initiative and ownership; creative, detail-oriented, and results-driven.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent experience).

Required Tech Stack Experience
- Deep expertise in offensive security techniques, frameworks, and tooling (e.g., Metasploit, Cobalt Strike, Sliver, or custom exploit frameworks).
- Proficiency in at least one modern object-oriented programming language such as Python, Go, C++, or C#, with experience building and maintaining large-scale software systems.
- Strong understanding of vulnerability research, exploit development, and post-exploitation automation, with the ability to translate offensive tradecraft into scalable product capabilities.
- Solid grasp of platform design, system architecture, and automation pipelines, including CI/CD, containerization, and infrastructure-as-code principles.
- Experience with cloud infrastructure and services (AWS, Azure, GCP), as well as modern DevOps and observability practices.
- Deep familiarity with network protocols, multiple operating systems (Windows, Linux, macOS, Kali, Ubuntu), and common enterprise technologies.
- Hands-on experience building or leading engineering for B2B SaaS or security platforms, ideally within a cyber or offensive security company.
- Working knowledge of databases (PostgreSQL, Neo4j, or similar) and data flow design.
- Awareness of cybersecurity industry standards and trends, with an ability to bridge technical and product perspectives.

Bonus Qualifications
- Offensive security certifications such as OSCP, OSEP, OSED, or GPEN.
- Experience mentoring teams on offensive tradecraft or developing proprietary offensive tooling.

Travel Required
We are a fully remote company, and this job may require up to 10% travel to be successful.

Compensation and Values
At Horizon3, we believe that our people are our greatest asset, and our compensation philosophy reflects this core value. We are committed to fostering an environment where all employees feel valued, respected, and rewarded for their contributions.
Our compensation structure is designed to be fair, competitive, and transparent, ensuring that every team member is recognized and compensated equitably across roles, levels, and locations. In accordance with various states' transparency regulations, we provide the following salary range information for this position:

- Base salary range: $240,000 – $285,000 annually. The exact salary will be determined based on the selected candidate's location, qualifications, experience, and relevant skills.
- Additional compensation: All full-time roles are eligible for an equity package in the form of stock options.

Perks of Horizon3.ai
- Inclusive Team: We value diversity and promote an inclusive culture where everyone can thrive.
- Growth Opportunities: Be part of a dynamic and growing team with numerous career development opportunities.
- Innovative Culture: Work in a collaborative environment that encourages creativity and out-of-the-box thinking.
- Remote Work: We are a 100% remote company. Enjoy the flexibility to work in the way that supports you and brings out your best.
- Competitive Compensation: We offer competitive salary, equity, and benefits. Our benefits include health, vision & dental insurance for you and your family, a flexible vacation policy, and generous parental leave.

You Belong Here
Horizon3 is not just an equal opportunity employer; we are a community that values diversity, equity, and inclusion as fundamental principles of our culture and success. We are dedicated to fostering a workplace where everyone feels welcome and respected, regardless of race, color, religion, sex, national origin, age, disability, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, or any other status protected by law.

Our commitment to diversity and inclusion means we strive to attract, develop, and retain a workforce that reflects the varied communities we serve. We believe that diverse perspectives drive innovation and strengthen our ability to create cutting-edge cybersecurity solutions. At Horizon3, every team member is valued and supported in an environment that encourages personal and professional growth. We welcome candidates from all backgrounds and experiences, and we encourage all qualified individuals to apply. Come be a part of Horizon3, where your unique contributions are recognized and your potential is limitless.

Other Duties
Please note this job description is not designed to cover or contain a comprehensive listing of the activities, duties, or responsibilities required of the employee. Duties, responsibilities, and activities may change at any time with or without notice.

Application Note
In any materials you submit, you may redact or remove age-identifying information such as age, date of birth, or dates of school attendance or graduation. You will not be penalized for redacting or removing this information.

Forward Deployed Engineer, RL Environments
Labelbox
$250,000 – $300,000
United States; Poland
Full-time
Remote: false
Shape the Future of AI
At Labelbox, we're building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we've been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.

About Labelbox
We're the only company offering three integrated solutions for frontier AI development:
- Enterprise Platform & Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale.
- Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models.
- Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling.

Why Join Us
- High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You'll take on expanded responsibilities quickly, with career growth directly tied to your contributions.
- Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.
- Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.
- Continuous Growth: Every role requires continuous learning and evolution. You'll be surrounded by curious minds solving complex problems at the frontier of AI.
- Clear Ownership: You'll know exactly what you're responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.

Role Overview
As an Applied Research Engineer at Labelbox, you will be at the forefront of developing cutting-edge systems and methods to create, analyze, and leverage high-quality human-in-the-loop data for frontier model developers. Your role will involve designing and implementing advanced systems that integrate human feedback into AI training processes, such as Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO); a toy DPO computation is sketched later in this listing. You will also work on innovative techniques to measure and improve human data quality, and develop AI-assisted tools to enhance the data labeling process. Your expertise in machine learning, frontier model training, and advanced human data alignment techniques will be crucial in pushing the boundaries of AI capabilities and delivering state-of-the-art solutions to meet the evolving needs of our customers.

Your Impact
- Advance the field of AI alignment by developing cutting-edge methods, such as RLHF and novel approaches, that ensure AI systems reflect human preferences more accurately.
- Improve the quality of human-in-the-loop data by designing and deploying rigorous measurement and enhancement systems, leading to more reliable AI training.
- Increase efficiency and effectiveness in AI-assisted data labeling by creating tools that leverage active learning and adaptive sampling, reducing manual effort while improving accuracy.
- Shape the next generation of AI models by investigating how different types of human feedback (e.g., demonstrations, preferences, critiques) impact model performance and alignment.
- Optimize human feedback collection by developing novel algorithms that enhance how AI learns from human input, improving model adaptability and responsiveness.
- Bridge research and real-world application by integrating breakthroughs into Labelbox's product suite, making human-AI alignment techniques scalable and impactful for users.
- Drive industry innovation by engaging with customers and the broader AI community to understand evolving data needs and share best practices for training frontier models.
- Contribute to the AI research ecosystem by publishing in top-tier journals, presenting at leading conferences, and influencing the future of human-centric AI.
- Stay ahead of AI advancements by continuously exploring new frontiers in human-AI collaboration, human data quality, and AI alignment, keeping Labelbox at the cutting edge.
- Establish Labelbox as a thought leader in AI by creating technical documentation, blog posts, and educational content that shape the industry's approach to human-centric AI development.

What You Bring
- A strong foundation in AI and machine learning, backed by a Ph.D. or Master's degree in Computer Science, Machine Learning, AI, or a related field.
- Proven experience (3+ years) in solving complex ML challenges and delivering impactful solutions that improve real-world AI applications.
- Expertise in designing and implementing data quality measurement and refinement systems that directly enhance model performance and reliability.
- A deep understanding of frontier AI models, such as large language models and multimodal models, and the human data strategies needed to optimize them.
- Proficiency in Python and experience with deep learning frameworks like PyTorch, JAX, or TensorFlow to prototype and develop cutting-edge solutions.
- A track record of publishing in top-tier AI/ML conferences (e.g., NeurIPS, ICML, ICLR, ACL, EMNLP, NAACL) and contributing to the broader research community.
- The ability to bridge research and application by interpreting new findings and rapidly translating them into functional prototypes.
- Strong analytical and problem-solving skills that enable you to tackle ambiguous AI challenges with structured, data-driven approaches.
- Exceptional communication and collaboration skills, allowing you to work effectively across multidisciplinary teams and with external stakeholders.

Labelbox Applied Research
At Labelbox Applied Research, we're committed to pushing the boundaries of AI and data-centric machine learning, with a particular focus on advanced human-AI interaction techniques. We believe that high-quality human data and sophisticated human feedback integration methods are key to unlocking the next generation of AI capabilities. Our research team works at the intersection of machine learning, human-computer interaction, and AI ethics to develop innovative solutions that can be practically applied in real-world scenarios. We foster an environment of intellectual curiosity, collaboration, and innovation. We encourage our researchers to explore new ideas, engage in open discussions, and contribute to the wider AI community through publications and conference presentations. Our goal is to be at the forefront of human-centric AI development, setting new standards for how AI systems learn from and interact with humans.
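Since the role centers on preference-based alignment methods such as DPO, a toy computation of the DPO loss may be useful. This is an illustrative sketch: the log-probabilities below are random stand-ins for real policy and reference-model outputs, and beta is the usual temperature on the implicit reward.

```python
# Toy DPO loss (illustrative): given sequence log-probs of a preferred
# ("chosen") and dispreferred ("rejected") response under the policy and a
# frozen reference model, DPO pushes the policy's implicit reward
# beta * (log_pi - log_ref) to favor the chosen response.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
beta = 0.1
# Random stand-ins for model outputs, for a batch of 3 preference pairs.
policy_chosen, policy_rejected = torch.randn(3), torch.randn(3)
ref_chosen, ref_rejected = torch.randn(3), torch.randn(3)

margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
loss = -F.logsigmoid(beta * margin).mean()
print("DPO loss:", loss.item())
```

The appeal of DPO over classic RLHF is visible even in this sketch: the preference signal is consumed directly as a supervised loss, with no reward model or rollout sampling in the loop.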
Compensation
Labelbox strives to ensure pay parity across the organization and to discuss compensation transparently. The expected annual base salary range for United States-based candidates is below. This range is not inclusive of any potential equity packages or additional benefits. Exact compensation varies based on a variety of factors, including skills and competencies, experience, and geographical location.

Annual base salary range: $250,000 – $300,000 USD

Life at Labelbox
- Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland.
- Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility.
- Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making.
- Growth: Career advancement opportunities directly tied to your impact.
- Vision: Be part of building the foundation for humanity's most transformative technology.

Our Vision
We believe data will remain crucial in achieving artificial general intelligence. As AI models become more sophisticated, the need for high-quality, specialized training data will only grow. Join us in developing new products and services that enable the next generation of AI breakthroughs. Labelbox is backed by leading investors including SoftBank, Andreessen Horowitz, B Capital, Gradient Ventures, Databricks Ventures, and Kleiner Perkins. Our customers include Fortune 500 enterprises and leading AI labs.

Your Personal Data Privacy
Any personal information you provide Labelbox as part of your application will be processed in accordance with Labelbox's Job Applicant Privacy notice. Any emails from Labelbox team members will originate from a @labelbox.com email address. If you encounter anything that raises suspicions during your interactions, we encourage you to exercise caution and suspend or discontinue communications.

Machine Learning Engineer, Anonymization
Mercor
$130,000 – $500,000
United States
Full-time
Remote: false
About Mercor
Mercor is defining the future of work. We partner with leading AI labs and enterprises to provide the human intelligence essential to AI development. Our vast talent network trains frontier AI models in the same way teachers teach students: by sharing knowledge, experience, and context that can't be captured in code alone. Today, more than 30,000 experts in our network collectively earn over $2 million a day.

Mercor is creating a new category of work where expertise powers AI advancement. Achieving this requires an ambitious, fast-paced, and deeply committed team. You'll work alongside researchers, operators, and AI companies at the forefront of shaping the systems that are redefining society. Mercor is a profitable Series C company valued at $10 billion. We work in person five days a week in our San Francisco, NYC, or London offices.

About the Role
As a Machine Learning Engineer focused on Anonymization at Mercor, you will be critical in designing and implementing our industry-best data privacy pipeline. You'll operate at the intersection of advanced ML techniques, sensitive data handling, and robust backend engineering. Your primary focus will be shipping production systems that employ state-of-the-art anonymization and de-identification methods, ensuring maximum utility of our vast data network while maintaining the highest standards of data integrity and regulatory compliance. This role requires bringing deep statistical and modeling rigor to challenging problems in privacy-preserving data access.

What You'll Do
- Design, implement, and productionize advanced ML models and techniques (like federated learning, differential privacy, or synthetic data generation) for data anonymization.
- Build and maintain the core backend infrastructure and APIs to securely process and serve anonymized data at Mercor's scale.
- Benchmark our anonymization pipeline against industry best practices and regulatory standards (e.g., k-anonymity), continuously running experiments to improve both privacy guarantees and data utility. (A toy k-anonymity check is sketched at the end of this listing.)
- Collaborate cross-functionally with Legal, Security, and Engineering teams to translate compliance requirements into robust, model-driven solutions.
- Act as the subject matter expert on data anonymization, flexing between applied ML, complex data pipeline engineering, and driving architectural decisions for data privacy.

What We're Looking For
- Strong backend engineering skills (e.g., Python/Django or similar) plus a solid foundation in applied ML and statistics.
- Proven experience shipping production systems or ML-driven products end-to-end.
- High ownership and comfort operating in ambiguous, fast-changing environments.
- Demonstrated knowledge of industry best practices and common frameworks for data privacy and security.

Why Mercor
- Impact: Your work powers how the world's leading AI labs train and test their models.
- Learning: Get early insights into frontier model capabilities months before the market.
- Growth: Work on both infrastructure and research-adjacent projects with fast paths to ownership.

Benefits
- Generous equity grant vested over 4 years
- A $10K housing bonus (if you live within 0.5 miles of our office)
- A $1.5K monthly stipend for meals
- Free Equinox membership
- Health insurance
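As a concrete anchor for the k-anonymity benchmark mentioned above, here is a minimal, illustrative check. The records and column names are hypothetical; real pipelines layer on stronger notions such as l-diversity and differential privacy.

```python
# Toy k-anonymity check (illustrative): a table is k-anonymous with respect to
# a set of quasi-identifier columns if every combination of those values is
# shared by at least k records. Records and column names are hypothetical.
from collections import Counter

records = [
    {"zip": "941**", "age_band": "30-39", "role": "engineer"},
    {"zip": "941**", "age_band": "30-39", "role": "designer"},
    {"zip": "941**", "age_band": "30-39", "role": "engineer"},
    {"zip": "100**", "age_band": "40-49", "role": "analyst"},
    {"zip": "100**", "age_band": "40-49", "role": "analyst"},
]
QUASI_IDENTIFIERS = ("zip", "age_band")

def k_anonymity(rows, quasi_ids):
    """Return the k for which the table is k-anonymous
    (the size of the smallest equivalence class)."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return min(groups.values())

print("table is k-anonymous with k =", k_anonymity(records, QUASI_IDENTIFIERS))  # k = 2
```

The privacy-utility trade-off shows up immediately: coarsening the quasi-identifiers (wider ZIP masks, broader age bands) raises k but destroys information the downstream models may need.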

Agentic Solution Engineer
Netomi
India
Full-time
Remote: false
About the Company
Netomi is the leading agentic AI platform for enterprise customer experience. We work with the largest global brands like Delta Airlines, MetLife, MGM, United, and others to enable agentic automation at scale across the entire customer journey. Our no-code platform delivers the fastest time to market, lowest total cost of ownership, and simple, scalable management of AI agents for any CX use case. Backed by WndrCo, Y Combinator, and Index Ventures, we help enterprises drive efficiency, lower costs, and deliver higher-quality customer experiences. Want to be part of the AI revolution and transform how the world's largest global brands do business? Join us!

About the Role
Netomi is looking for a Solution Engineer, a key technical leader at the intersection of pre-sales engineering, AI architecture, and product innovation. This individual will design and implement agentic workflows that leverage the Netomi platform to power real-world enterprise solutions. You'll work directly with enterprise clients and internal stakeholders to translate visionary AI concepts into practical, scalable systems, enabling AI agents to engage, reason, and act autonomously within complex customer ecosystems. (A toy agentic tool-call loop is sketched after the qualifications below.)

Responsibilities
- Partner with Account Executives to discover and scope customer challenges, designing high-value technical solutions that showcase the ROI of Netomi's platform.
- Architect and build agentic workflows that integrate generative AI with APIs, databases, and enterprise tools to power experiences for our customers' end users.
- Develop custom demonstrations, prototypes, and proofs of concept using the Netomi platform, tailored to specific client use cases.
- Design, test, and refine prompts and AI orchestration chains to optimize performance, reasoning, and reliability across varied use cases.
- Communicate complex technical concepts clearly and persuasively to audiences ranging from C-level executives to hands-on engineers.
- Collaborate with product and engineering teams, contributing insights from customer engagements to inform roadmap priorities.
- Document and present solution designs, workflows, and technical configurations for both internal and client-facing reference.

Requirements
- 1-2 years of experience in a customer-facing sales engineering or solutions engineering role, ideally in AI, automation, or enterprise SaaS.
- Hands-on experience with AI prompt design, workflow orchestration, and integrating REST APIs, webhooks, and cloud services (AWS, GCP, or Azure).
- Working knowledge of JavaScript, Python, or related scripting languages for building integrations and automation logic.
- Proven ability to architect and communicate end-to-end technical solutions that align AI capabilities with business outcomes.
- Functional understanding of enterprise software ecosystems and data flow patterns.
- Excellent communication, presentation, and interpersonal skills with a record of collaboration across technical and non-technical teams.

Preferred Qualifications
- Experience in agentic or autonomous AI systems (e.g., LangChain, LlamaIndex, or similar frameworks).
- Familiarity with MLOps, AI governance, and compliance in production-scale deployments.
- Background working in high-growth startup environments.
- Awareness of UX/UI principles for designing customer-facing AI experiences.
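For illustration, the orchestration pattern behind agentic workflows can be reduced to a small tool-call loop. The sketch below is framework-free and is not the Netomi platform: the "model" is a scripted stand-in, and lookup_order is a hypothetical stand-in for a real REST integration.

```python
# Minimal agentic tool-call loop (illustrative; not the Netomi platform or any
# specific framework). An orchestrator asks the model for an action, executes
# the requested tool, feeds the observation back, and repeats until the model
# returns a final answer.
import json

def lookup_order(order_id: str) -> str:
    """Hypothetical stand-in for a real REST/API integration."""
    return json.dumps({"order_id": order_id, "status": "shipped"})

TOOLS = {"lookup_order": lookup_order}

def make_fake_llm(script):
    it = iter(script)
    return lambda messages: next(it)    # replays scripted model outputs

fake_llm = make_fake_llm([
    {"tool": "lookup_order", "args": {"order_id": "A-123"}},
    {"final": "Your order A-123 has shipped."},
])

def run_agent(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]
    while True:
        action = fake_llm(messages)
        if "final" in action:                                   # done reasoning
            return action["final"]
        observation = TOOLS[action["tool"]](**action["args"])   # execute tool
        messages.append({"role": "tool", "content": observation})

print(run_agent("Where is my order A-123?"))
```

Most of the solution-engineering work in practice sits around this loop: schema design for the tools, guardrails on which tools may be called, and prompt iteration so the model picks the right action reliably.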
Netomi is an equal opportunity employer committed to diversity in the workplace. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, disability, veteran status, and other protected characteristics.

AI Factory Customer Engineer
Armada
$154,560 – $193,200
United States
Full-time
Remote: false
About the Company
Armada is a full-stack edge infrastructure company delivering compute, connectivity, and sovereign AI/ML to some of the world's most remote places. Named one of Fast Company's Most Innovative Companies, Armada's solutions are deployed in over 60 countries globally for organizations ranging from energy to defense. With over $200 million in funding, Armada is backed by top investors such as Microsoft (M12) and Founders Fund, and has strategic partnerships including Starlink, Skydio, and NVIDIA. We are looking for the most brilliant minds in the world to join us.

Working at Armada means taking ownership, driving autonomy, and delivering impact. You'll tackle challenges that haven't been solved before and help build something transformative from the ground up. What you do here will not only define your career but help further Armada's mission to bridge the digital divide for customers around the world.

About the Role
At Armada, we are unlocking the limitless potential of AI to transform operations and improve lives in some of the most remote locations on Earth. From the expansive mines of Australia to the oil fields of Northern Canada and the coffee plantations of Colombia, Armada offers a unique opportunity to tackle exciting AI and ML challenges on a global scale. We are actively seeking passionate AI engineers with hands-on expertise across a range of domains, including real-time computer vision, statistical machine learning, natural language processing, transformers, control and navigation, reinforcement learning, and large-scale distributed AI systems.

Ideal candidates will possess strong skills in machine learning (ML), deep learning (DL), and real-time computer vision techniques. You will be responsible for building ML/DL models tailored to specific challenges, preparing datasets for testing, evaluating model performance, and deploying solutions in production environments. Familiarity with containerization and microservices architecture, and the ability to independently deploy ML models into production, is essential. If you are a self-driven individual with a passion for cutting-edge AI, we want to hear from you.

Armada offers an unparalleled opportunity to confront some of the most thrilling AI and ML challenges in the world. Join our dynamic AI Engineering team as we deliver disruptive edge-compute systems capable of autonomous learning, prediction, and adaptation using vast, real-time datasets. We are pioneers in developing high-performance computing solutions for self-driving cars, camera networks, robotics, drones, conversational agents, and real-time monitoring and diagnostic systems. Our vision is to empower AI systems to seamlessly and securely interact with the complexities and uncertainties of the real world, and our mission is to bridge the digital divide in the process.

Location: This role is office-based at our Bellevue, Washington office.

What You'll Do (Key Responsibilities)
- Translate business requirements into requirements for AI/ML models.
- Prepare data to train and evaluate AI/ML/DL models.
- Build AI/ML/DL models by applying state-of-the-art algorithms, especially transformers; in some cases, leverage existing algorithms from academic or industrial research.
- Test and evaluate AI/ML/DL models, benchmark their quality, and publish the models, datasets, and evaluations.
- Deploy the models in production by containerizing them.
- Work with customers and internal employees to refine the quality of the models.
- Establish continuous learning pipelines for models with online learning or transfer learning (a toy transfer-learning setup is sketched after this list).
- Build and deploy containerized applications in cloud or on-premise environments.
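As a concrete anchor for the transfer-learning item above, here is a minimal, illustrative PyTorch setup that freezes a pretrained backbone and trains only a new head. The five-class task and random batch are stand-ins, and torchvision (assumed installed) downloads the pretrained weights on first use.

```python
# Toy transfer-learning setup (illustrative): freeze a pretrained backbone and
# train only a new task-specific head on a small downstream dataset.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                      # freeze the backbone

NUM_CLASSES = 5                                  # hypothetical downstream task
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on random stand-in data (batch of 8 RGB images, 224x224).
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```

The same freeze-and-retrain pattern is what makes continuous learning tractable on constrained edge hardware: only the small head is updated, so per-site adaptation stays cheap.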
Required Qualifications
- BS or MS degree in computer science, computational science/engineering, or a related technical field (or equivalent experience).
- 3+ years of work-related experience in software development with good Python, Java, and/or C/C++ programming skills.
- Familiarity with containers, numeric libraries, and modular software design.
- Hands-on expertise with traditional statistical machine learning techniques as well as deep learning and natural language processing modeling.
- Expertise in supervised, unsupervised, and transfer learning techniques.
- Hands-on expertise in machine learning techniques and algorithms with a strong background in state-of-the-art DNN architectures (Transformers, CNN, R-CNN, RNN, BERT, GAN, autoencoders, etc.) and experience in developing or using major deep learning frameworks (e.g., PyTorch, TensorFlow).
- Experience solving real-world problems with machine learning.

Preferred Experience and Skills
- Demonstrable experience in building, programming, and integrating software and hardware for autonomous or robotic systems.
- Proven experience producing computationally efficient software to meet real-time requirements.
- Background with container platforms such as Kubernetes.
- Strong analytical skills with a bias for action.
- Strong time-management and organization skills to thrive in a fast-paced, dynamic environment.
- Solid written and oral communication skills.
- Good teamwork and interpersonal skills.

Compensation
For U.S.-based candidates: to ensure fairness and transparency, the starting base salary range for this role for candidates in the U.S. is listed below, varying based on location, experience, skills, and qualifications. In addition to base salary, this role will also be offered equity and subsidized benefits (details available upon request). Compensation: $154,560 – $193,200 USD.

Benefits
- Competitive base salary and equity
- Medical, dental, and vision (subsidized cost)
- Health savings accounts (HSA), flexible spending accounts (FSA), and dependent care FSAs (DCFSA)
- Retirement plan options, including 401(k) and Roth 401(k)
- Unlimited paid time off (PTO)
- 14 paid company holidays per year

You're a Great Fit if You're
- A go-getter with a growth mindset. You're intellectually curious, have strong business acumen, and actively seek opportunities to build relevant skills and knowledge.
- A detail-oriented problem-solver. You can independently gather information, solve problems efficiently, and deliver results with a "get-it-done" attitude.
- Someone who thrives in a fast-paced environment. You're energized by an entrepreneurial spirit, capable of working quickly, and excited to contribute to a growing company.
- A collaborative team player. You focus on business success and are motivated by team accomplishment over personal agenda.
- Highly organized and results-driven. Strong prioritization skills and a dedicated work ethic are essential for you.

Equal Opportunity Statement
At Armada, we are committed to fostering a work environment where everyone is given equal opportunities to thrive. As an equal opportunity employer, we strictly prohibit discrimination or harassment based on race, color, gender, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other characteristic protected by law. This policy applies to all employment decisions, including hiring, promotions, and compensation. Our hiring is guided by qualifications, merit, and business needs at the time.

Unsolicited Resumes and Candidates
Armada does not accept unsolicited resumes or candidate submissions from external agencies or recruiters. All candidates must apply directly through our careers page. Any resumes submitted by agencies without a prior signed agreement will be considered unsolicited, and Armada will not be obligated to pay any fees.

AI/ML Physical Design Flow Engineer
Tenstorrent
$100,000 – $500,000
United States
Full-time
Remote: false
Tenstorrent is leading the industry on cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.

Tenstorrent is seeking a Physical Design Engineer to lead cross-functional efforts to solve complex physical design challenges and develop end-to-end RTL-to-GDS methodologies across advanced nodes, with a strong focus on PPA and runtime improvements. The engineer will architect, integrate, and deploy AI/ML-driven solutions into production physical design flows, creating custom CAD tools and partnering with internal teams and EDA vendors to drive next-generation, ML-enabled capabilities.

This role is hybrid, based out of Santa Clara, CA; Austin, TX; or Fort Collins, CO. We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.

Who you are
- BS in Electrical or Computer Engineering (or equivalent experience) with 5+ years in physical design CAD methodology at advanced nodes.
- Proven track record improving PPA and/or runtime on high-performance, low-power taped-out designs.
- Hands-on with industry-standard EDA tools (e.g., Fusion Compiler) across synthesis, P&R, STA, signoff, and hierarchical flows.
- Strong Python/Tcl and data skills, with interest or experience in ML frameworks (PyTorch, TensorFlow), and the ability to drive complex projects independently.

What we need
- Lead and contribute to cross-functional efforts solving complex physical design challenges across IPs, projects, and advanced technology nodes.
- Develop and enhance RTL-to-GDS methodologies, including floorplanning, synthesis, P&R, STA, signoff, and assembly.
- Architect and deploy AI/ML-driven solutions in production flows to improve engineering efficiency, turnaround time, and QoR (a toy QoR-prediction setup is sketched after this section).
- Optimize EDA tools and custom CAD flows using data-driven and ML-based techniques, in close collaboration with verification, extraction, timing, DFT, and EDA vendors.

What you will learn
- How to scale AI/ML-driven methodologies across diverse products and advanced technology nodes in real production flows.
- New ways to blend classical EDA algorithms with modern ML techniques to push PPA and runtime limits.
- Best practices for deploying, validating, and monitoring ML models in production CAD environments.
- How to influence next-generation ML-enabled EDA tools and collaborate deeply with cross-functional teams (PV, extraction, timing, DFT).
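To illustrate the kind of data-driven flow optimization described above, here is a toy regressor that predicts a QoR metric (worst negative slack, say) from flow knobs, so candidate settings can be screened before committing to full P&R runs. The features, labels, and knob names are stand-ins; real training data would come from parsed tool reports.

```python
# Toy ML-for-PPA sketch (illustrative): learn to predict a QoR metric from
# flow knobs such as target frequency, utilization, and effort level.
# Features and labels below are random stand-ins for real tool-report data.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
N_RUNS, N_KNOBS = 256, 3                   # past flow runs x (freq, util, effort)
X = torch.rand(N_RUNS, N_KNOBS)
true_w = torch.tensor([-2.0, -1.0, 0.5])             # hidden "flow behavior"
y = X @ true_w + 0.05 * torch.randn(N_RUNS)          # stand-in WNS labels

model = nn.Sequential(nn.Linear(N_KNOBS, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):                       # fit the surrogate model
    opt.zero_grad()
    loss = F.mse_loss(model(X).squeeze(-1), y)
    loss.backward()
    opt.step()

candidate = torch.tensor([[0.8, 0.6, 0.3]])          # a proposed knob setting
print("predicted WNS for candidate:", model(candidate).item())
```

A surrogate like this earns its keep when a single P&R iteration takes hours: cheap predictions over many knob combinations narrow the search before any real runs are launched.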
Compensation for all engineers at Tenstorrent ranges from $100k to $500k, including base and variable compensation targets. Experience, skills, education, background, and location all impact the actual offer made. Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.

This position requires access to technology that requires a U.S. export license for persons whose most recent country of citizenship or permanent residence is a country in U.S. EAR Country Groups D:1, E:1, or E:2. This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E:1, and E:2). These requirements apply to persons located in the U.S. and in all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or the ability to obtain prior license approval from the U.S. Commerce Department or the applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.