
The AI job market moves fast. We keep up so you don't have to.

Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.


New AI Opportunities


Senior AI Applications Engineer

Ema
United States
Full-time
Remote
Who are we?
Ema is building next-generation AI technology to empower every employee in the enterprise to be their most creative and productive. Our proprietary tech allows enterprises to delegate most repetitive tasks to Ema, the Universal AI employee. We were founded by ex-Google, Coinbase, and Okta executives and serial entrepreneurs, and we are well funded by top investors and angels. Ema is based in Silicon Valley and Bangalore.

Who are you?
The Senior AI Application Engineer will be pivotal in providing world-class post-deployment support for Ema's Agent Assist/Chatbot product. This role requires a strong blend of technical expertise and customer-focused problem-solving. You will configure, troubleshoot, and optimize AI solutions for our customers, ensuring high performance, continuous improvement of customer workflows, and exceptional customer satisfaction. You will independently manage complex deployments and solutions for enterprise customers with minimal guidance.

Roles and responsibilities:
- Configure and integrate AI/GenAI workflows using platform APIs and customer data, ensuring smooth and secure deployment.
- Translate customer requirements into technical solutions, validating workflow correctness and continuously refining prompts for optimal performance.
- Develop and automate tasks using Python and SQL, and build prototypes using open-source agentic frameworks to streamline CX workflows.
- Design, run, and analyze A/A experiments to measure and improve workflow quality and reliability.
- Monitor AI workflows using business metrics and observability dashboards, responding quickly to issues to ensure high availability.
- Provide proactive solutions and regular updates to customers, maintaining high CSAT through effective troubleshooting and communication.
- Diagnose and resolve platform issues related to APIs, data integration, and workflow configurations, collaborating with engineering when needed.
- Utilize APIs and integration protocols (JSON, REST, SOAP) to configure workflows and integrate with CRM/ATS tools.
- Write custom scripts to query databases and create reports for in-depth analysis and to demonstrate ROI to customers.

Ideally, you'd have:

Experience:
- 4+ years of software engineering experience.
- Significant experience in AI/LLM application development, including building agents, integrations, and workflows, and evaluating and improving their performance.
- Experience setting up, configuring, and supporting ML systems or AI agents is a plus.
- Experience building LLM-based evaluation frameworks and conducting UATs with real customers.
- Experience with prompt engineering techniques.
- Demonstrated ability to independently manage complex deployments and solution design for enterprise customers with minimal guidance.
- Familiarity with key support metrics (FRT, AHT, TAT, CSAT) and reporting.
- Strong troubleshooting and problem-solving skills, especially in production environments; ideal candidates can troubleshoot ML performance issues and recommend solutions to improve accuracy metrics and other business KPIs.
- Excellent communication skills to collaborate effectively with both technical and non-technical stakeholders.

Technical skills:
- Proficiency in Python and SQL scripting.
- Understanding of data structures and algorithms is a plus.
- Experience with prompt engineering techniques.
- Proficiency invoking platform APIs to retrieve or push data and to configure workflows.
- Familiarity with JSON, REST, SOAP, or other integration protocols.
- Familiarity with integrating common CRM/ATS tools is a plus.
- Data-driven evaluation skills, including setting up and interpreting A/A experiments.
- Familiarity with logging, dashboard creation, and alerting tools.

Soft skills:
- Excellent communication skills to collaborate effectively with both technical and non-technical stakeholders.
- Demonstrated ability to independently manage complex deployments and solution design for enterprise customers.
- Strong troubleshooting and problem-solving skills, especially in production environments.
- Ownership and initiative: takes responsibility for ongoing workflow management and customer outcomes.
- Curiosity and continuous learning: actively explores new GenAI/automation techniques and best practices to enhance workflows.
- Collaboration: works effectively across multiple teams.
- Customer empathy: prioritizes customer success criteria, ensuring solutions address real business needs.

Ema Unlimited is an equal opportunity employer and is committed to providing equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, sexual orientation, gender identity, or genetics.
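The responsibilities above center on invoking platform APIs over REST with JSON payloads from Python scripts. A minimal sketch of that pattern follows; the endpoint, payload shape, and field names are invented for illustration and do not reflect Ema's actual (non-public) API:

```python
import json
import urllib.request

# Hypothetical base URL; a real deployment would use the vendor's documented endpoint.
API_BASE = "https://platform.example.com/v1"

def build_workflow_payload(name, steps, model="default"):
    """Assemble the JSON body for a workflow-configuration call (illustrative shape)."""
    return {
        "name": name,
        "model": model,
        "steps": [{"order": i, "action": s} for i, s in enumerate(steps, start=1)],
    }

def configure_workflow(payload, token):
    """POST the workflow config over REST with a JSON body and bearer auth."""
    req = urllib.request.Request(
        f"{API_BASE}/workflows",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # would hit the real API
        return json.load(resp)

payload = build_workflow_payload("ticket-triage", ["classify", "draft_reply"])
print(payload["steps"])  # [{'order': 1, 'action': 'classify'}, {'order': 2, 'action': 'draft_reply'}]
```

The same payload-builder/sender split makes the configuration easy to unit-test without network access, which matters when validating workflow correctness before deployment.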

Software Engineer, Codex Core Agents

OpenAI
$230,000 – $325,000
United States
Full-time
Remote
About the Team
The Codex Core Agent team builds the kernel of Codex. We own making the agent better, accelerating research, and making those improvements real in production for our users. That means working across the systems that make Codex actually function as an agent in the real world: the production performance envelope around tokens, latency, reliability, cost, and capacity; the core execution loop and interfaces that turn models into useful behavior; the shared infrastructure that enables other teams to build on Codex; and the feedback loops that turn real-world usage into better models and better agent behavior over time.

About the Role
We're looking for engineers to build the infrastructure that powers Codex agents in production. This role focuses on the systems that let models safely execute code, interact with tools, complete long-running tasks, and operate reliably and efficiently at scale. You'll design and operate the infrastructure behind sandboxed execution, orchestration, stateful workflows, app-server and SDK boundaries, and model rollouts. You'll work at the intersection of distributed systems, developer tooling, and AI, building primitives that make Codex faster, safer, more reliable, and easier for the rest of the organization to build on.

What You'll Do
- Design and build execution environments for AI agents, including sandboxing, isolation, and reproducibility.
- Develop systems for agent orchestration across multi-step, tool-using workflows.
- Build infrastructure for running, testing, and debugging code generated by models.
- Create state and memory systems that allow agents to persist context across long-running tasks.
- Optimize tokens, latency, reliability, and cost across Codex's production fleet.
- Support model rollouts, capacity planning, and the core tradeoffs between quality, speed, and economics to manage a fleet of frontier agents at scale.
- Build shared platform capabilities that unblock product teams, partner teams, and open-source Codex.

You Might Be a Good Fit If You
- Have strong experience in distributed systems or infrastructure engineering.
- Have built systems involving containers, sandboxing, or virtualization.
- Are comfortable working across backend systems, APIs, and developer tooling.
- Care deeply about system reliability, performance, and security.
- Enjoy working on ambiguous, zero-to-one problems.
- Want to help build the systems that turn model capability into a dependable software engineering agent.

Bonus Points
- Experience with code execution platforms, CI/CD systems, or build systems.
- Familiarity with LLMs, agents, or tool-use frameworks.
- Background in security engineering or isolation systems.
- Experience building developer platforms, IDE tooling, or open-source infrastructure.

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss, or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
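The sandboxed-execution problem this role centers on can be illustrated at toy scale. The sketch below only shows process isolation with a timeout and a stripped-down environment; production agent sandboxes rely on containers or microVMs (e.g., gVisor or Firecracker), and nothing here reflects Codex's actual implementation:

```python
import os
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: float = 5.0):
    """Run model-generated Python in a separate process; return (returncode, stdout)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode, no user site dirs
            capture_output=True,
            text=True,
            timeout=timeout,               # bound long-running tasks
            env={"PATH": os.defpath},      # hide the parent's environment variables
        )
        return proc.returncode, proc.stdout
    except subprocess.TimeoutExpired:
        return -1, ""                      # treat a hang as a failure
    finally:
        os.unlink(path)

rc, out = run_untrusted("print(2 + 2)")
```

Even this toy version surfaces the real tradeoffs the posting names: the timeout trades reliability against long-running tasks, and the stripped environment trades isolation against tool access.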

Applied AI Engineer, Codex Core Agent

OpenAI
$230,000 – $325,000
United States
Full-time
Remote
About the Team
The Codex Core Agent team builds the kernel of Codex. We own making the agent better, accelerating research, and making those improvements real in production for our users. That means working across the systems that make Codex actually function as an agent in the real world: the production performance envelope around tokens, latency, reliability, cost, and capacity; the core execution loop and interfaces that turn models into useful behavior; the shared infrastructure that enables other teams to build on Codex; and the feedback loops that turn real-world usage into better models and better agent behavior over time.

About the Role
We're looking for applied AI engineers to help bring Codex agents from impressive demos to dependable tools. This role is about improving agent performance on real software engineering tasks and closing the gap between research capability and real-world usefulness. You'll work closely with research, infrastructure, and product to ensure agents are not just powerful, but useful, steerable, and reliable in practice. The job is not only to improve model behavior in isolation, but to turn those improvements into measurable gains in solve rate, usefulness, and economic value for users.

What You'll Do
- Design and iterate on agent behaviors across real-world coding tasks and long-horizon workflows.
- Work closely with research to develop and run evals that measure agent performance, regressions, failure modes, and edge cases.
- Improve performance through prompting, tool-use strategies, context construction, and model-facing experimentation.
- Analyze failures in production and systematically improve robustness and reliability.
- Build feedback loops and data systems that get better real-task data into evaluation and research.
- Work with product teams to shape user-facing agent experiences and the interfaces the agent depends on.
- Help define what "good" looks like for agents completing complex tasks end-to-end.

You Might Be a Good Fit If You
- Have experience building or shipping machine learning or LLM-powered products.
- Are strong in Python and comfortable with modern ML tooling.
- Have worked on model evaluation, fine-tuning, or prompt design.
- Think in terms of systems and user outcomes, not just model metrics.
- Enjoy debugging messy, real-world failures and turning them into improvements.
- Want to work in the layer that turns research and model potential into systems that actually work for users.

Bonus Points
- Experience with agent frameworks or tool-using LLM systems.
- Research experience with code generation models or developer tooling.
- Experience working with large, messy datasets or production logs.

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss, or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
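The posting's emphasis on evals and solve rate can be made concrete with a toy harness. The Task and agent shapes below are invented for illustration; real coding-agent evals run in sandboxed repos against test suites and track regressions and failure modes over time:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    prompt: str
    check: Callable[[str], bool]   # grader: did the agent's answer pass?

def solve_rate(agent: Callable[[str], str], tasks: List[Task]) -> float:
    """Fraction of tasks whose agent output passes that task's checker."""
    passed = sum(1 for t in tasks if t.check(agent(t.prompt)))
    return passed / len(tasks)

# A stub "agent" that answers arithmetic prompts (demonstration only;
# never eval() untrusted input).
def toy_agent(prompt: str) -> str:
    return str(eval(prompt))

tasks = [
    Task("1 + 1", lambda a: a == "2"),
    Task("2 * 3", lambda a: a == "6"),
    Task("10 - 4", lambda a: a == "6"),
]
print(solve_rate(toy_agent, tasks))  # 1.0
```

The useful property of even a harness this small is that it separates the agent from the grader, so prompting or tool-use changes can be compared on a fixed task set.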

Product Engineer @ Silver.dev

Silver.dev
$36,000 – $48,000
Argentina
Full-time
Remote
About the role
We are looking for a Product Engineer to own the Recruiter Operations area: empowering the Recruiting team with supporting tools, optimizing operational efficiency, and building open-source, applied-AI tools for candidates, as on open.silver.dev.

Silver.dev is at the center of the startup scene in Argentina, and some of the most important companies in the world work with us, including X.com, Ramp, ScaleAI, Motion, and Nubank. Working at Silver is the opportunity to sit at the center of the ecosystem, see everything that happens, and catapult yourself into the international market. You will work side by side with Gabriel Benmergui (CEO) and Lautaro Paskevicius (Founding Engineer) to deliver another record year in hires and opportunities.

Requirements
- 2+ years of experience as a fullstack developer
- Compatibility with our stack: React/NextJS, Typescript, Zod, PostgreSQL & DrizzleORM, TanStack, AISDK, SST (AWS), Vercel & Cloudflare, OpenCode
- English B2+
- Head-first transparency and high agency

Nice-to-have
- Fullstack experience shipping products with revenue
- Experience shipping products with LLMs
- Experience in the talent industry or marketplaces

Benefits
- Cafeteria and lunches included
- High-end Mac computer
- High-end AI plans (e.g., Claude Max 20x)
- 2 weeks of annual PTO, plus Argentine holidays
- Performance bonuses

Interview process
- Live coding (frontend or backend)
- Behavioral interview
- Paid work test (1-2 weeks) at our offices in Palermo, CABA

Engineering Manager, Forward-Deployed Engineering

GrowthX AI
United States
Full-time
Remote
About GrowthX AI
At GrowthX, we're building the modern growth engine for marketing teams. Since launching in 2024, we've grown from concept to eight-figure annual revenue. To date, we've raised a $12M Series A led by Madrona Venture Group, and we partner with 60+ companies ranging from high-growth startups to industry leaders like Lovable, Webflow, Ramp, Superhuman, Strapi, Abnormal Security, and Augment Code.

We've built something different: an operating system that unifies powerful AI workflows with forward-deployed experts to transform content into compounding organic growth. We're building the growth infrastructure marketing teams wish they'd always had: the system that reliably turns context into strategy, strategy into execution, and execution into measurable growth.

About the Role
AI is reshaping go-to-market. Not just content creation. The entire surface area: product experiences, developer tools, documentation, integrations, research systems.

GrowthX runs a high-end AI product studio for this shift. We build AI agents, custom products, and advanced automations for B2B companies on top of Output.ai, our open-source AI workflow framework. Think Palantir's forward-deployed model, but for GTM.

Recent examples:
- For Lovable: an AI agent that creates templates for their product
- For Airbyte: an agent that integrates with new connectors and creates docs pages by sending pull requests
- For Augment: an entire research site for discovering and cataloging MCPs

This is AI + product development + GTM. We have a team of 5 today. We're growing to ~20. We need someone to lead this expansion.

What You'll Own
- Team Leadership. Build a team of AI-native forward-deployed engineers and designers. Set technical standards. Run effective 1:1s, invest in each engineer's growth, and give direct feedback. Build a team that clients trust and engineers want to join.
- Technical Direction. Define architecture patterns for AI agents and client products. Review code. Make build-vs-buy decisions. Push the boundaries of what's possible with Output.ai.
- Client Delivery. Ensure the team ships excellent work. Scope technical feasibility early with the sales team. Jump in on complex projects. Solve the hard problems when they come up.
- Platform Building. Turn client projects into reusable capabilities. An agent pattern built for one client becomes a template for the next. A custom integration becomes a module in Output.ai. Each engagement should make the next one faster and better. The goal is compounding leverage.
- Operations. Run prioritization cycles. Allocate capacity across client pods. Balance urgent client requests with system improvements.

Who You Are
- Technical leader who still codes. You can architect AI agents, debug production issues, and review PRs. You're hands-on when it matters. You're not looking for a role where you just manage people and attend meetings.
- Product thinker. You don't just build what's asked. You understand the client's problem, propose solutions, and ship products that work. You think about UX, not just architecture.
- Experienced people manager. You've built and led engineering teams before. You know how to hire well, give direct feedback, and terminate team members when needed. You've navigated the messy parts of team building.
- Client-facing capable. You can sit in a strategy call, understand what the client actually needs, and translate that into a project. You're comfortable with ambiguity and client expectations.
- AI-native. You've built with LLMs. You understand prompting, context windows, agent architectures, and the current state of what works and what doesn't. This isn't theoretical for you.
- High bar on output quality. AI can generate a lot of code fast; that makes reviewing it more important, not less. You read what it produces, understand it, refactor it, and don't merge output you wouldn't defend in a review. You QA your own work manually before it reaches anyone else.
- Pragmatic over perfect. You ship working solutions and iterate based on real usage. You don't over-engineer. You solve the problem in front of you.
- Excellent communicator. Writing and verbal. You can explain technical decisions to non-technical people. You document well. You run effective meetings.

What Would Make You Exceptional
- Led forward-deployed or professional-services engineering teams before
- Built production AI agents or LLM systems
- Shipped products end-to-end (not just features within a product)
- Experience in developer tools, GTM tech, or B2B SaaS
- Contributed to or maintained open-source projects
- Agency or consulting background with high-touch client delivery

How We Work
- Think First, Build Second. AI made building fast and cheap. Writing code is no longer the bottleneck; thinking is. Before you build anything, invest real time understanding the problem. Study the context. Research what great looks like. Work backwards from the business reason into a detailed plan. The planning can still be fast, but it can't be skipped. Speed without direction is just drift.
- Own the Outcome. Freedom and responsibility go together. We own our work, chase down answers, and take initiative. Don't wait for direction. Don't treat your scope as someone else's problem once it leaves your hands. Every one of us shapes the outcome.
- Work in the Open. We're a remote company. Nobody can see you work. Share the plan before you execute. Share progress while you execute. A rough plan shared early beats a polished plan shared after the work is done wrong. If leadership has to chase you down to find out what happened, the process already failed.
- Chase the Context. Read the docs. Understand the product, the space, the business, the customer. Nobody is going to sit you down and explain everything. The context is there; go get it. We're too small for anyone to just write code without understanding the business.
- Move Fast, Stay Steady. Know when to sprint and when to pump the brakes. Don't over-engineer low-stakes tasks. Don't under-think critical ones. That judgment is what makes senior people here genuinely senior, regardless of title.
- Process for Context, Not Control. We believe good process exists to share context, not enforce control. It should make smart people faster, not make average teams adequate. If you need Scrum, OKRs, and sprint cycles to keep a team productive, we're probably not the right fit, and that's okay.

Small team. High autonomy. No layers. You'll work directly with founders, engineers, and delivery leads.

What Success Looks Like
- 30 days: Understand the team, the clients, and Output.ai. Work on one client project yourself. Build relationships with leads and the team.
- 90 days: Own team operations. Improve at least one major process. Make your first hires. Have a clear understanding of team capacity and client satisfaction.
- 6 months: The team is running smoothly at larger scale. Reusable patterns are documented and accelerating implementations. Clients and delivery teams trust the engineering function.

How to Apply
We're not just looking for a resume and a cover letter. We're looking for how you think. Write a short article (1,000-2,000 words) covering two things:
1. Your story. What you've built, how you think about engineering leadership, and why this role fits where you're headed.
2. Explore Output.ai and share what you find: a critique of our architectural decisions, a product you'd build on top of it, a comparison to frameworks you've used, or something you actually built with it. The format is up to you. We care about the quality of your thinking, not the scope of the project.

Writing is how we work. We're a remote, async team where the quality of your written thinking determines the quality of what gets built. AI made everyone faster at writing code; that makes clear thinking, strong writing, and taste for great architecture the real differentiators. Your article is your first impression. Make it count.

Perks:
🌍 Fully Remote Work
- Work from anywhere: your productivity, your space!
- We help set up your work environment to enable your best work.
- Minimal meetings and async communication for deep focus.
🏖 Ample Time Off
- Support for observing your country's holidays.
- We encourage you to take the time you need to recharge!
📚 Professional Growth
- We're an AI-forward culture and work environment: learn all things AI!
- Access to training, workshops, and mentorship programs.
- Support for external conferences and industry events.
🤝 Collaborative Culture
- Open communication and a feedback-driven environment.
- Clear, concise communication is key to our success.

Work Visa: Applicants must have legal authorization to work in their country of residence. We are not able to provide employment visa sponsorship at this time.

#LI-Remote

Senior Data Engineer

HackerOne
₹3,672,000 – ₹4,131,000
India
Full-time
Remote
HackerOne is a global leader in Continuous Threat Exposure Management (CTEM). The HackerOne Platform unites agentic AI solutions with the ingenuity of the world's largest community of security researchers to continuously discover, validate, prioritize, and remediate exposures across code, cloud, and AI systems. Through solutions like bug bounty, vulnerability disclosure, agentic pentesting, AI red teaming, and code security, HackerOne delivers measurable, continuous reduction of cyber risk for enterprises. Industry leaders, including Anthropic, Crypto.com, General Motors, Goldman Sachs, Lufthansa, Uber, the UK Ministry of Defence, and the U.S. Department of Defense, trust HackerOne to safeguard their digital ecosystems. HackerOne was recognized in Gartner's Emerging Tech Impact Radar: AI Cybersecurity Ecosystem report for its leadership in AI Security Testing and has been named a Most Loved Workplace for Young Professionals (2024).

HackerOne is at a pivotal inflection point in the security industry. Offensive security is no longer optional; it is the standard for forward-thinking companies that want to build trust and resilience in a world where AI-driven innovation and adversaries are moving faster than ever. With the industry shifting, HackerOne stands apart: we combine the ingenuity of the largest security research community with a best-in-class AI-powered platform, trusted by the world's top organizations.

HackerOne Values
HackerOne is dedicated to fostering a strong and inclusive culture. HackerOne is Customer Obsessed and prioritizes customer outcomes in our decisions and actions. We Default to Disclosure by operating with transparency and integrity, ensuring trust and accountability. Employees, researchers, customers, and partners Win Together by fostering empowerment, inclusion, respect, and accountability.

Senior Data Engineer
Location: Pune
Work model: In office

Position Summary
HackerOne is seeking a Senior Data Engineer to lead the discovery, architecture, and development of high-performance, scalable data products and solutions. Joining our growing, distributed organization, you'll be instrumental in building the foundation that powers HackerOne's enterprise transformation from human-powered operations to data-driven, AI-powered, and human-led agentic operations. You'll achieve success by leading with AI-first thinking, demonstrating agility through change, applying first-principles problem solving, and using data to learn and adapt along the way. Leveraging your technological expertise, domain knowledge, and dedication to business objectives, you'll drive innovation to propel HackerOne forward.

Enterprise Data & AI Mission
Enterprise Data & AI provides the data and systems to enable our enterprise transformation from human-powered operations to data-driven, AI-powered, and human-led agentic operations. This data and systems infrastructure includes autonomous AI agents handling routine or repeatable work, and centralized AI and data infrastructure for cross-org leverage.

What You Will Do
- Lead the end-to-end design and delivery of scalable, secure, and intelligent data products and solutions that support HackerOne's transformation into an AI-first organization.
- Partner across business and engineering teams to identify high-leverage opportunities for automation, integration, and system modernization.
- Drive the architecture and execution of platform-level capabilities, leveraging AI and modern tooling to reduce manual effort, improve decision-making, and increase system resilience.
- Provide technical leadership to internal engineers and external development partners, ensuring design quality, operational excellence, and long-term maintainability.
- Shape and contribute to our incident and on-call response strategy, playbooks, and processes, focusing on building systems that fail gracefully and recover quickly.
- Act as a multiplier: mentor other engineers, advocate for technical excellence, and promote a culture of innovation, curiosity, and continuous improvement.
- Champion effective change management and enablement, ensuring systems are not only launched, but adopted, understood, and evolved.

Minimum Qualifications
- 6+ years of experience in a data, engineering, science, or similar role, with a proven track record of leading the design, development, and deployment of AI-first data products and solutions (preferably using Python).
- Extensive hands-on experience building and optimizing data pipelines, products, and solutions.
- Strong SQL for data manipulation, and strong programming skills.
- Knowledge of algorithms and data structures.
- Extensive experience with data technologies and tools such as Airflow, Snowflake, Meltano, Fivetran, dbt, Looker, and AWS.
- Experience with infrastructure-as-code tools such as Terraform or Pulumi.
- Proven track record of successfully championing new initiatives focused on architectural enhancements.
- Proven track record of substantial impact across the company, demonstrating your ability to drive positive change and achieve significant results.
- Passion for working backwards from the customer, and empathy for business stakeholders.
- Excellent communication skills; able to present data-driven narratives in verbal, presentation, and written formats.

Preferred Qualifications
- Proven track record of driving innovation, adopting emerging technologies, and implementing industry best practices.
- Experience building and managing a cloud-deployed data lake.
- Experience working with Kubernetes.
- Experience working with Agile and iterative development processes.
- Understanding of network architecture.

Interview Process
1. Hiring manager round
2. Technical interview (coding)
3. Team interview
4. Leadership round, along with an assignment
(Rounds 2 and 3 will be scheduled in parallel.)

Job Benefits
- Health (medical, vision, dental), life, and disability insurance*
- Equity stock options
- Retirement plans
- Paid public holidays and unlimited PTO
- Paid maternity and parental leave
- Leaves of absence (including caregiver leave and leave under Colorado's Healthy Families and Workplaces Act)
- Employee Assistance Program
*Eligibility may differ by country

We're committed to building a global team! For certain roles outside the United States, India, the U.K., and the Netherlands, we partner with Remote.com as our Employer of Record (EOR). Visa/work permit sponsorship is not available. Employment at HackerOne is contingent on a background check.

HackerOne is an Equal Opportunity Employer in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, pregnancy, disability or veteran status, or any other protected characteristic as outlined by international, federal, state, or local laws. This policy applies to all HackerOne employment practices, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, compensation, benefits, training, and apprenticeship. HackerOne makes hiring decisions based solely on qualifications, merit, and business needs at the time.

For US-based roles only: Pursuant to the San Francisco Fair Chance Ordinance, all qualified applicants with arrest and conviction records will be considered for the position.
Agentic Solution Engineer

Netomi
Canada
Full-time
Remote: false
About the Company
Netomi is the leading agentic AI platform for enterprise customer experience. We work with the largest global brands, such as Delta Airlines, MetLife, MGM, United, and others, to enable agentic automation at scale across the entire customer journey. Our no-code platform delivers the fastest time to market, the lowest total cost of ownership, and simple, scalable management of AI agents for any CX use case. Backed by WndrCo, Y Combinator, and Index Ventures, we help enterprises drive efficiency, lower costs, and deliver higher-quality customer experiences. Want to be part of the AI revolution and transform how the world’s largest global brands do business? Join us!

About the Role
Netomi is looking for a Solution Engineer: a key technical leader at the intersection of pre-sales engineering, AI architecture, and product innovation. This individual will design and implement agentic workflows that leverage the Netomi platform to power real-world enterprise solutions. You’ll work directly with enterprise clients and internal stakeholders to translate visionary AI concepts into practical, scalable systems, enabling AI agents to engage, reason, and act autonomously within complex customer ecosystems.

Responsibilities
- Partner with Account Executives to discover and scope customer challenges, designing high-value technical solutions that showcase the ROI of Netomi’s platform.
- Architect and build agentic workflows that integrate generative AI with APIs, databases, and enterprise tools to power experiences for our customers’ end users.
- Develop custom demonstrations, prototypes, and proofs of concept using the Netomi platform, tailored to specific client use cases.
- Design, test, and refine prompts and AI orchestration chains to optimize performance, reasoning, and reliability across varied use cases.
- Communicate complex technical concepts clearly and persuasively to audiences ranging from C-level executives to hands-on engineers.
- Collaborate with product and engineering teams, contributing insights from customer engagements to inform roadmap priorities.
- Document and present solution designs, workflows, and technical configurations for both internal and client-facing reference.

Requirements
- 1-2 years of experience in a customer-facing sales engineering or solutions engineering role, ideally in AI, automation, or enterprise SaaS.
- Hands-on experience with AI prompt design, workflow orchestration, and integrating REST APIs, webhooks, and cloud services (AWS, GCP, or Azure).
- Working knowledge of JavaScript, Python, or related scripting languages for building integrations and automation logic.
- Proven ability to architect and communicate end-to-end technical solutions that align AI capabilities with business outcomes.
- Functional understanding of enterprise software ecosystems and data flow patterns.
- Excellent communication, presentation, and interpersonal skills, with a record of collaboration across technical and non-technical teams.

Preferred Qualifications
- Experience with agentic or autonomous AI systems (e.g., LangChain, LlamaIndex, or similar frameworks).
- Familiarity with MLOps, AI governance, and compliance in production-scale deployments.
- Background working in high-growth startup environments.
- Awareness of UX/UI principles for designing customer-facing AI experiences.

Netomi is an equal opportunity employer committed to diversity in the workplace. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, disability, veteran status, and other protected characteristics.
Agentic Senior Solution Engineer

Netomi
Canada
Full-time
Remote: false
About the Company
Netomi is the leading agentic AI platform for enterprise customer experience. We work with the largest global brands, such as Delta Airlines, MetLife, MGM, United, and others, to enable agentic automation at scale across the entire customer journey. Our no-code platform delivers the fastest time to market, the lowest total cost of ownership, and simple, scalable management of AI agents for any CX use case. Backed by WndrCo, Y Combinator, and Index Ventures, we help enterprises drive efficiency, lower costs, and deliver higher-quality customer experiences. Want to be part of the AI revolution and transform how the world’s largest global brands do business? Join us!

About the Role
Netomi is looking for a Senior Solution Engineer: a key technical leader at the intersection of pre-sales engineering, AI architecture, and product innovation. This individual will design and implement agentic workflows that leverage the Netomi platform to power real-world enterprise solutions. You’ll work directly with enterprise clients and internal stakeholders to translate visionary AI concepts into practical, scalable systems, enabling AI agents to engage, reason, and act autonomously within complex customer ecosystems.

Responsibilities
- Partner with Account Executives to discover and scope customer challenges, designing high-value technical solutions that showcase the ROI of Netomi’s platform.
- Architect and build agentic workflows that integrate generative AI with APIs, databases, and enterprise tools to power experiences for our customers’ end users.
- Develop custom demonstrations, prototypes, and proofs of concept using the Netomi platform, tailored to specific client use cases.
- Design, test, and refine prompts and AI orchestration chains to optimize performance, reasoning, and reliability across varied use cases.
- Communicate complex technical concepts clearly and persuasively to audiences ranging from C-level executives to hands-on engineers.
- Collaborate with product and engineering teams, contributing insights from customer engagements to inform roadmap priorities.
- Document and present solution designs, workflows, and technical configurations for both internal and client-facing reference.

Requirements
- 4-6 years of experience in a customer-facing sales engineering or solutions engineering role, ideally in AI, automation, or enterprise SaaS.
- Hands-on experience with AI prompt design, workflow orchestration, and integrating REST APIs, webhooks, and cloud services (AWS, GCP, or Azure).
- Working knowledge of JavaScript, Python, or related scripting languages for building integrations and automation logic.
- Proven ability to architect and communicate end-to-end technical solutions that align AI capabilities with business outcomes.
- Functional understanding of enterprise software ecosystems and data flow patterns.
- Excellent communication, presentation, and interpersonal skills, with a record of collaboration across technical and non-technical teams.

Preferred Qualifications
- Experience with agentic or autonomous AI systems (e.g., LangChain, LlamaIndex, or similar frameworks).
- Familiarity with MLOps, AI governance, and compliance in production-scale deployments.
- Background working in high-growth startup environments.
- Awareness of UX/UI principles for designing customer-facing AI experiences.

Netomi is an equal opportunity employer committed to diversity in the workplace. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, disability, veteran status, and other protected characteristics.
Tech Lead Manager SWE, SDK

Intrinsic
United States
Full-time
Remote: false
Intrinsic is an AI robotics group at Google aiming to reimagine the potential of industrial robotics. Our team believes that advances in AI, perception, and simulation will redefine what’s possible for industrial robotics in the near future, with software and data at the core. Our mission is to make industrial robotics intelligent, accessible, and usable for millions more businesses, entrepreneurs, and developers. We are a dynamic team of engineers, roboticists, designers, and technologists who are passionate about unlocking the creative and economic potential of industrial robotics.

Role
As a Senior AI Research Scientist for Perception for Contact-Rich Manipulation, you will lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. You will explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Your work will involve creating models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. You will collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities that can be deployed on physical hardware for industrial applications.

How your work moves the mission forward
- Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios.
- Design algorithms that enable robots to manipulate complex or deformable objects with high precision.
- Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware.
- Evaluate model performance in both simulation and real-world environments to ensure robustness and reliability.
- Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems.
- Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.

Skills you will need to be successful
- PhD in Computer Science, Robotics, or a related field with a focus on machine learning or computer vision.
- 3 years of experience in applied research focused on robotic manipulation or robot learning.
- Proficiency in programming with Python and C++.
- Experience with deep learning frameworks such as PyTorch, JAX, or TensorFlow.
- Experience developing algorithms for vision-based manipulation or contact-rich interaction.
- Publication record in top-tier robotics or AI conferences (e.g., ICRA, IROS, CVPR, NeurIPS).

Skills that will differentiate your candidacy
- Experience with reinforcement learning or imitation learning for robotics.
- Familiarity with physics simulators such as MuJoCo, Isaac Sim, or Gazebo.
- Experience integrating tactile sensors with visual perception systems.
- Experience with LfD (Learning from Demonstration) and kinesthetic learning.
- Background in sim-to-real transfer techniques for manipulation policies.
- Experience with transformer-based architectures or foundation models in a robotics context.
- Experience deploying machine learning models on edge compute hardware.

At Intrinsic, we are proud to be an equal opportunity workplace. Employment at Intrinsic is based solely on a person's merit and qualifications directly related to professional competence. Intrinsic does not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), or any other basis protected by law. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. It is Intrinsic’s policy to comply with all applicable national, state, and local laws pertaining to nondiscrimination and equal opportunity. If you have a disability or special need that requires accommodation, please contact us at candidate-support@intrinsic.ai.
Solutions Product Manager

Intrinsic
Full-time
Remote: false
AI Strategist, Healthcare Solutions

Distyl
$150,000 – $250,000
United States
Full-time
Remote: false
About Distyl AI
Distyl is an applied AI technology company partnering with the world’s most ambitious institutions to rearchitect critical operations for the frontier of AI. Our customers include the largest companies in telecom, healthcare, insurance, manufacturing, consumer goods, and global social organizations. We research and deploy technologies that power AI-native operations, both for our partners and for Distyl itself. Our work spans research into self-constructing systems, the development of the most reliable execution of AI systems, and products that transform mission-critical workflows. As a result, Distyl's technologies affect some of the world's largest operations, from hundreds of millions of consumer interactions to tens of millions of supply chain transactions and millions of patient journeys. Distyl is backed by leading investors, including Lightspeed Venture Partners, Khosla Ventures, Coatue, DST Global, and board members of 20+ Fortune 500 companies. The results reflect this approach: a 100% production deployment success rate for our customers and one of the few enterprise AI companies to run a profitable business.

What We Are Looking For
As an AI Strategist, you'll lead technology solutions in the healthcare vertical and drive the GTM process for healthcare customer engagements, from first meeting to deal signed. You'll drive revenue and new-logo expansion by diagnosing customer problems through an AI lens and designing data-driven, AI-enabled strategies that deliver desired outcomes and help prospective clients see the immediate value of AI adoption. As a core member of Distyl’s Healthcare GTM Team, you'll operate in 0→1 mode, rapidly building demos and proof-of-concept solutions that showcase the Distillery platform’s capabilities to prospective healthcare clients.

You'll work at the intersection of business strategy and AI implementation, bridging the gap between Distyl’s technical AI tools and the business outcomes that get deals signed. Strong communication and storytelling skills are essential: you are the person in the room making the case for AI transformation. You'll also get hands-on, prototyping workflows on the Distillery platform, building demos in days rather than months, and collaborating with healthcare SMEs (clinicians, case managers, claims analysts) to ensure solutions address real operational pain points. This is a high-ownership role for someone who thrives at the intersection of AI strategy, analytics, and execution, and who wants to build the GTM engine of an early-stage AI company from the ground up.

Key Responsibilities
- Drive AI solutioning throughout the pre-sales process on healthcare deals; from initial discovery through deal close, you own the momentum that gets engagements across the finish line.
- Diagnose prospective client problems through an AI lens and identify where AI-enabled strategies can transform operations, optimize workflows, and improve outcomes at scale.
- Research clients and their industry landscape to prepare materials for prospective client meetings, including use-case points of view tailored to each prospect.
- Develop deep expertise in Distyl’s Healthcare Solutions (Utilization Management, Care Management, Cost of Care, Claims, Member/Provider Communications, etc.) to identify and prioritize high-value use cases for each customer.
- Feed insights to AEs to sharpen the pitch and differentiate Distyl’s AI-enabled offerings.
- Conduct technical discovery after Account Executives open the door: identify the right use cases, solution architecture, and feasibility.
- Identify the most valuable problems to solve; embed yourself in the customer’s workflow to understand it better than the users themselves.
- Work closely with engineering to co-design solutions: you can whiteboard an architecture, debate tradeoffs in system design, and bridge the gap between what the customer needs and what the platform can deliver.
- Build demos and competitive positioning docs with internal stakeholders, driving rapid delivery for demos and POC scoping.
- Build out POC proposals: define scope, success criteria, timelines, and resource requirements.
- Navigate noisy enterprise data to identify how AI can create reliable outcomes despite real-world constraints.
- Build compelling storylines and decks that make the ROI of AI transformation concrete and undeniable.
- Create and maintain reusable GTM assets: demo scripts, ROI calculators, competitive positioning materials, and use-case-specific pitch decks aligned for Solutions.
- Own the knowledge transfer, ensuring a clean, formal handoff to the expansion team.
- Maintain living account documentation during the pre-sales phase.

Who You Are
- 8-12 years of experience in sales engineering, solutions consulting, product management, technical growth, chief of staff, or a similar role at the intersection of business and technology, with deep, direct experience in healthcare (payer operations, provider workflows, or health tech). You understand how a prior authorization gets processed, why claims adjudication breaks, and what a medical director actually cares about.
- Technical undergraduate degree (CS, engineering, math, or similar): you don’t just understand technical concepts, you build with them. You can write code to prototype a workflow, interrogate a dataset in Python/SQL, and speak fluently with engineers about system design and API architecture.
- Hands-on experience designing and building GenAI solutions, not just scoping them. You’ve gotten your hands dirty with LLM pipelines, prompt engineering, RAG architectures, or similar, and can prototype on a platform like Distillery without waiting for engineering to hand you a demo.
- Ability to translate complex AI technical capabilities into specific, high-value business outcomes. You can make enterprise executives see why AI adoption matters now.
- Strong engineering-adjacent data skills: you write SQL, work in Python/pandas, and navigate messy enterprise datasets to pull the insights that drive deal narratives. Bonus if you’ve worked with healthcare-specific data formats.
- Exceptional communication and storytelling skills: you can command a room, structure a pitch, build a narrative arc, and make complex AI concepts land with senior executives.
- Deep domain expertise in healthcare operations: you’ve lived in the workflows (UM, care management, claims, network) and understand the regulatory, clinical, and operational constraints that shape AI adoption in this space. Experience navigating NCQA or CMS guidelines, or clinical documentation requirements, is a strong signal.
- Thrive in ambiguity: the ability to identify critical problems in complex situations and define outcomes.
- High ownership and intensity: you operate like a founder, not a task-taker.
- Comfortable engaging with VP/C-level executives at Fortune 500 healthcare organizations.
- You’ll develop deep fluency in Distyl’s Distillery platform, its primitives, and its capabilities, building demos and POCs directly on the platform, not just presenting what someone else built.
- Travel: ability to travel 50%.

What We Offer
- A base salary range of $150K – $250K, depending on experience, location, and level. In addition to base compensation: meaningful equity, along with a comprehensive benefits package.
- 100% covered medical, dental, and vision for employees and dependents.
- 401(k) with additional perks (e.g., commuter benefits, in-office lunch).
- Access to state-of-the-art models, generous usage of modern AI tools, and real-world business problems.
- Ownership of high-impact projects across top enterprises.
- A mission-driven, fast-moving culture that prizes curiosity, pragmatism, and excellence.

Distyl has offices in San Francisco and New York. This role follows a hybrid collaboration model with 3+ days per week (Tuesday–Thursday) in office.
Engineering Manager, Distillation & Detection Platform

OpenAI
$293,000 – $385,000
United States
Full-time
Remote: false
About the role
We’re looking for an engineering manager to lead a team building software systems that detect and prevent harmful misuse of frontier AI models before incidents occur. This is a builder’s role: you’ll lead engineers shipping production services, detection pipelines, and mitigation mechanisms that protect frontier model integrity and reduce high-severity misuse risk. While this work intersects with frontier model development, security, and risk, we’re explicitly seeking someone with a software engineering foundation who is comfortable building reliable systems that can operate at billion-user scale.

In this role you will:
- Lead a team of software engineers building detection and mitigation systems for frontier model misuse, with an emphasis on model IP protection / distillation detection and emerging risk surfaces from autonomous agents.
- Set the technical roadmap and execution strategy: prioritize, design, ship, iterate, measure impact.
- Build production systems: services, pipelines, tooling, instrumentation, and automation that scale with frontier model usage.
- Partner deeply with Research and Product to translate evolving model capabilities into concrete tests, signals, and mitigations that can be deployed at scale.
- Drive strong engineering fundamentals: architecture, reliability, monitoring, performance, and operational excellence.
- Hire and grow an exceptional team across backend, data systems, and applied ML engineering domains as needed.
- Anticipate what breaks at scale as agentic workflows become more capable.

You might thrive in this role if you:
- Have experience building systems in adversarial, fast-evolving environments.
- Are comfortable with ambiguity and novelty.
- Have experience adjacent to security (e.g., abuse prevention, fraud, integrity, platform defense, auth/identity, malware/spam, adversarial environments).
- Communicate clearly and build trust quickly with senior stakeholders; you are pragmatic, collaborative, and calm under scrutiny.
- Have significant experience leading engineering teams and delivering production systems end to end.
- Have strong technical judgment in system design, distributed systems, data pipelines, observability, and operational reliability.
- Have a demonstrated ability to partner cross-functionally with Research, Product, and Security to ship systems that materially reduce risk or abuse at scale.
- Are familiar with model extraction / distillation, adversarial evaluation, or scalable detection/mitigation approaches.
- Have a background in autonomy, high-scale real-time systems, or intelligence-adjacent technical domains (a plus).

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates.

For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss, or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

AI Deployment Engineer, Startups

OpenAI
Singapore
Full-time
Remote: No
AI Deployment Engineer, Startups

About the team
The AI Deployment Engineering team works closely with frontier startups. We are trusted advisors to, and thought partners with, startups to ensure that OpenAI’s technology is deployed safely and effectively, whilst also partnering with engineering, research, and product to turn those insights into evaluation systems, product improvements, and better model behavior.

This team sits at the intersection of customer reality and model quality. We combine hands-on technical depth with strong product judgment, helping translate complex, high-value use cases into clear signals that can improve both the customer experience and the underlying systems.

About the role
We are seeking a technically proficient, product-minded engineer to help push the frontier of advanced AI with our strategic startup customers. You'll work with some of the most exciting AI startups in the world, helping them optimize their own systems and turning those learnings into durable improvements across OpenAI’s research and products. You will partner deeply on complex workflows, identify the gaps that matter, and help transform those gaps into reproducible evaluations and technical insights, helping shape OpenAI's research and product direction.

This role is well suited to engineers who are equally comfortable debugging a workflow, iterating on prompts or agents, designing evaluations, and collaborating across research and product. You should be excited by ambiguous, high-impact problems and motivated by the opportunity to shape how advanced AI systems improve in practice.

This role in particular includes a specific focus on building relationships with the startup ecosystem, including presenting to developer and startup audiences. You’ll need strong one-to-many communication skills, and should enjoy discussing high-level technical trends alongside deeper technical engagements with specific startups.

This role is based in our Singapore office. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In this role, you will:
- Work directly with strategic startup customers to understand critical workflows, uncover failure modes, and identify high-impact opportunities for improvement.
- Prototype and iterate on prompts, agents, and workflow designs to better understand system behavior and unlock customer value.
- Synthesize and deliver valuable feedback to the Product and Research teams, turning real usage patterns into clear, reproducible evals, benchmarks, and technical artifacts that improve model and product quality and ensure customer-grounded learnings influence roadmap and model development.
- Build repeatable tools, patterns, and evaluation approaches that raise the quality bar across multiple use cases.
- Operate with strong judgment in ambiguous environments, balancing immediate technical problem-solving with longer-term system improvement.
- Build relationships within the startup ecosystem, serving as a technical partner to both individual customers and the broader community.
- Create technical presentations, demos, and other forms of community engagement with top developers and startups around the region.

You’ll thrive in this role if you:
- Have strong software engineering and AI fundamentals; for example, experience as a software engineer, ML engineer, data scientist, or equivalent.
- Experience shipping production systems end-to-end is a strong plus.
- Have familiarity with, or interest in, model training pipelines and reinforcement learning.
- Have experience building AI applications, agents, or evaluation systems, and can reason clearly about model behavior in complex workflows.
- Are comfortable working directly with highly technical users and translating their challenges into concrete technical signals.
- Can move fluidly between prototyping, debugging, evaluation design, and cross-functional collaboration.
- Communicate clearly across technical and non-technical audiences.
- Bring high agency, strong product sense, and a bias toward building durable improvements rather than one-off fixes.
- Enjoy having some days where you engage in community events and present to large audiences, and other days where you go deep on specific customer problems.
- Speak multiple languages: strong proficiency in English is required; additional proficiency in Mandarin, Korean, and/or Japanese is a nice-to-have.

AI Deployment Engineer

OpenAI
Japan
Full-time
Remote: No
About the Team
The Technical Success team is responsible for ensuring developers and enterprises are successful in building scalable production applications with the OpenAI API platform. We guide and support customers to achieve maximum benefits, value, and adoption from deploying our highly capable models. OpenAI's customers represent a range of diverse backgrounds and maturity, from early-stage startups to established global enterprises.

About the Role
We are looking for a technically savvy and business-minded AI Deployment Engineer to deeply partner with our most strategic and high-impact platform customers, guiding them through application ideation, development, delivery, and scale to accelerate and maximize the value of what they build with our platform. You will have the opportunity to work on the most novel and creative use cases being built on our API, serving as a critical partner for collecting and delivering high-fidelity feedback to Product and Research teams.

This role is based in Tokyo, Japan. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In this role, you will:
- Deeply embed with our most strategic platform customers, serving as their technical thought partner in ideating and building novel applications on our API.
- Proactively provide guidance to our customers on how to maximize business impact from their applications, accelerating their time to value.
- Experiment and prototype solutions with and for your customers.
- Forge and manage relationships with our customers’ leadership and stakeholders to ensure their application’s successful deployment and scale.
- Contribute to our open-source developer and enterprise resources.
- Scale the AI Deployment Engineering function through sharing knowledge, codifying best practices, and publishing notebooks to our internal and external repositories.
- Validate, synthesize, and deliver high-signal feedback to the Product and Research teams.
- Use your expertise in programming with Python and JavaScript.

You’ll thrive in this role if you:
- Have 5+ years of technical consulting (or equivalent) experience.
- Are proficient in Python and JavaScript.
- Are fluent in both Japanese and English, as this is essential for partnering with customers, providing technical expertise, demonstrating value, and collaborating effectively with teams at headquarters.
- Built and/or delivered prototypes on top of our API platform.
- Led complex technical projects and programs with many stakeholders.
- Can proactively identify opportunities for maximizing our customers’ business value through leveraging the OpenAI API.
- Own problems end-to-end, and are willing to pick up whatever knowledge you're missing to get the job done to ensure both your team and our customers succeed.
- Have a humble attitude and an eagerness to help others with empathy.
- Operate with high horsepower, are adept at frequent context switching and working on multiple projects at once with expansive ownership, and ruthlessly prioritize.
- Thrive in
dynamic environments and can navigate ambiguity with ease.

Member of Technical Staff - Pre-Training Infra

Reflection
United States
Full-time
Remote: No
Our Mission
Reflection’s mission is to build open superintelligence and make it accessible to all. We’re developing open-weight models for individuals, agents, enterprises, and even nation states. Our team of AI researchers and company builders come from DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic, and beyond.

About the Role
- Build and scale distributed training systems that power frontier model pre-training.
- Work closely with research teams to design and operate large-scale training runs for foundation models.
- Develop infrastructure that enables efficient training across thousands of GPUs using modern distributed training frameworks.
- Optimize training throughput, stability, and efficiency for large model training workloads.
- Collaborate directly with pre-training researchers to translate experimental ideas into scalable, production-ready training systems.
- Improve performance of distributed training workloads through optimization of communication, memory usage, and GPU utilization.
- Build and maintain training pipelines that support large-scale datasets, checkpointing, and experiment iteration.
- Debug and resolve performance bottlenecks across distributed training stacks, including model parallelism, GPU communication, and training runtime systems.
- Contribute to the development of systems that enable rapid experimentation and iteration on new training techniques.

Ideal Experience
- Experience building or operating distributed training systems for large machine learning models.
- Strong experience working with modern distributed training frameworks such as Megatron, DeepSpeed, or similar large-scale training systems.
- Familiarity with large-scale model parallelism strategies (data, tensor, pipeline, or expert parallelism).
- Experience optimizing training throughput and GPU utilization in large distributed environments.
- Familiarity with GPU communication libraries such as NCCL and performance tuning for distributed workloads.
- Experience working closely with ML researchers to productionize experimental training workflows.
- Strong debugging skills across GPU compute, distributed training systems, and large-scale ML pipelines.
- Experience working with large datasets and training pipelines used for foundation model pre-training.

What We Offer
We believe that to build superintelligence that is truly open, you need to start at the foundation. Joining Reflection means building from the ground up as part of a small, talent-dense team. You will help define our future as a company, and help define the frontier of open foundational models. We want you to do the most impactful work of your career with the confidence that you and the people you care about most are supported.
- Top-tier compensation: salary and equity structured to recognize and retain the best talent globally.
- Health & wellness: comprehensive medical, dental, vision, life, and disability insurance.
- Life & family: fully paid parental leave for all new parents, including adoptive and surrogate journeys. Financial support for family planning.
- Benefits & balance: paid time off when you need it, relocation support, and more perks that optimize your time.
- Opportunities to connect with teammates: lunch and dinner are provided daily, and we have regular off-sites and team celebrations.

Member of Technical Staff - Mid-Training Infra

Reflection
United States
Full-time
Remote: No
About the Role
- Design, build, and operate large-scale GPU infrastructure for high-throughput model inference and mid-training workloads.
- Develop systems that power synthetic data generation and reinforcement learning pipelines at scale.
- Build high-performance inference platforms capable of serving and evaluating models across thousands of GPUs.
- Optimize throughput, latency, and GPU utilization for large language model inference and rollout workloads.
- Build infrastructure that supports reinforcement learning pipelines, including large-scale rollout generation, evaluation, and policy improvement loops.
- Work closely with research teams to support distributed RL workloads and large-scale model evaluation infrastructure.
- Improve performance of model execution through kernel-level optimization, model parallelism strategies, and GPU runtime improvements.
- Develop distributed systems that enable large-scale synthetic data generation and RL-driven training workflows.
- Diagnose and resolve performance bottlenecks across inference runtimes, GPU kernels, networking, and distributed compute systems.

Ideal Experience
- Experience deploying and operating large-scale GPU systems for inference or model serving.
- Several years of hands-on experience building and running production infrastructure.
- Strong understanding of GPU performance characteristics and optimization techniques.
- Experience working with modern inference frameworks such as SGLang, Megatron, or similar high-performance LLM runtimes.
- Familiarity with distributed reinforcement learning infrastructure or rollout generation systems.
- Experience optimizing throughput for large-scale model execution workloads.
- Experience working with GPU kernels or low-level performance optimization.
- Familiarity with infrastructure used for synthetic data pipelines or RL training workflows.
- Experience debugging performance issues across GPU, networking, and distributed execution layers.

AceUp - Machine Learning Engineer (Generative AI & LLM Focus)

Silver.dev
$72,000 – $96,000
Argentina
Full-time
Remote: No
About the company
AceUp is evolving from a traditional SaaS platform into an AI-first leadership development engine. We are looking for a Machine Learning Engineer who is passionate about Generative AI. You will work closely with our Lead ML Engineer and Product team to turn high-level architectural designs into functioning, production-ready AI features. This is a hands-on coding role where you will live in Python and Vertex AI.

The Tech Stack:
We are a GCP-native shop. You will be building directly within the Google Cloud ecosystem:
- GenAI & Compute: Vertex AI, Gemini Pro/Ultra, PaLM API, Cloud Functions.
- Data & Vector: Firestore, BigQuery, Vertex AI Vector Search.
- Orchestration: Cloud Run, Pub/Sub.
- Frameworks: Python, LangChain/LangGraph.

What You Will Do:
- Build & Optimize LLM Workflows: Implement the specific prompt chains and logical flows that power our conversational agents. You will iterate on system instructions to improve model tone, accuracy, and compliance.
- Develop RAG Retrievers: Write the code that chunks, embeds, and indexes our proprietary content. You will fine-tune retrieval queries to ensure the AI always has the right context.
- Python Backend Development: Build the robust Python services and APIs (using frameworks like FastAPI or GCP Cloud Functions) that wrap our AI models and expose them to the Full Stack team.
- Model Evaluation: Create and run evaluation datasets (“Golden Sets”) to measure the performance of our models. You will be responsible for catching hallucinations or regressions before they reach the user.
- Data Processing Pipelines: Write efficient scripts to clean and structure unstructured data (text logs, transcripts) for downstream analysis.
- Collaborate on Architecture: Work with the Lead ML Engineer to prototype new ideas. You will be the “first implementer” validating whether a new approach (like a specific agentic framework) actually works in practice.

Who You Are:
- A Strong Python Developer: You write clean, modular, and testable Python code. You are comfortable with async programming and type hinting.
- GenAI Curious: You have likely built your own projects using OpenAI APIs or LangChain. You understand concepts like “context window,” “temperature,” and “embeddings” intuitively.
- Problem Solver: You don’t just blame the model when it fails; you investigate the data, the prompt, and the retrieval logic to find the root cause.
- Team Player: You are ready to collaborate with Full Stack engineers to define the JSON contracts and APIs they need to integrate your models.

Requirements:
- Experience: 3+ years of professional software engineering experience.
- ML Experience: 1+ years of experience working with Machine Learning or Data Engineering pipelines.
- LLM Exposure: Demonstrated experience (professional or significant side projects) working with LLM APIs (OpenAI, Anthropic, or Vertex AI).
- Cloud Experience: Familiarity with a cloud provider (GCP preferred, AWS/Azure acceptable).
- Language: Conversational English is required.
- Education: B.S. in Computer Science, Mathematics, or a related field.

Nice to Haves:
- Experience with LangChain or LlamaIndex.
- Experience using Firebase/Firestore.
- Understanding of basic DevOps (Docker, CI/CD).

AceUp is proud to be an equal opportunity employer, seeking to create a welcoming and diverse environment.

Interview process:
1. Silver screening interview
2. Take-home challenge
3. First meeting with client: intro call + problem solving
4. Technical interview
5. Interview with Product

Customer Engineer, Leviathan

Armada
$154,560 – $193,200
United States
Full-time
Remote: No
About the Company
Armada is an edge computing startup that provides computing infrastructure to remote areas where connectivity and cloud infrastructure are limited, as well as areas where data needs to be processed locally for real-time analytics and AI at the edge. We’re looking to bring on the most brilliant minds to help further our mission of bridging the digital divide with advanced technology infrastructure that can be rapidly deployed anywhere.

About the role
At Armada, we are unlocking the limitless potential of AI to transform operations and improve lives in some of the most remote locations on Earth. From the expansive mines of Australia to the oil fields of Northern Canada and the coffee plantations of Colombia, Armada offers a unique opportunity to tackle exciting AI and ML challenges on a global scale.

We are actively seeking passionate AI Engineers with hands-on expertise across a range of domains, including real-time computer vision, statistical machine learning, natural language processing, transformers, control and navigation, reinforcement learning, and large-scale distributed AI systems. Ideal candidates will possess strong skills in machine learning (ML), deep learning (DL), and real-time computer vision techniques. You will be responsible for building ML/DL models tailored to specific challenges, preparing datasets for testing, evaluating model performance, and deploying solutions in production environments. Familiarity with containerization, microservices architecture, and the ability to independently deploy ML models into production is essential. If you are a self-driven individual with a passion for cutting-edge AI, we want to hear from you.

Armada offers an unparalleled opportunity to confront some of the most thrilling AI and ML challenges in the world. Join our dynamic AI Engineering team as we deliver disruptive edge-compute systems capable of autonomous learning, prediction, and adaptation using vast, real-time datasets. We are pioneers in developing high-performance computing solutions for self-driving cars, camera networks, robotics, drones, conversational agents, and real-time monitoring and diagnostic systems. Our vision is to empower AI systems to seamlessly and securely interact with the complexities and uncertainties of the real world, and our mission is to bridge the digital divide in the process.

Location
This role is office-based at our Bellevue, Washington office.

What You'll Do (Key Responsibilities)
- Translate business requirements into requirements for AI/ML models.
- Prepare data to train and evaluate AI/ML/DL models.
- Build AI/ML/DL models by applying state-of-the-art algorithms, especially transformers; in some cases, leverage existing algorithms from academic or industrial research.
- Test and evaluate AI/ML/DL models, benchmark their quality, and publish the models, datasets, and evaluations.
- Deploy the models in production by containerizing them.
- Work with customers and internal employees to refine the quality of the models.
- Establish continuous learning pipelines for models with online learning or transfer learning.
- Build and deploy containerized applications in cloud or on-premise environments.

Required Qualifications
- BS or MS degree in computer science, computational science/engineering, or a related technical field (or equivalent experience).
- 3+ years of work-related experience in software development with good Python, Java, and/or C/C++ programming skills.
- Familiarity with containers, numeric libraries, and modular software design.
- Hands-on expertise with traditional statistical machine learning techniques as well as deep learning and natural language processing modeling.
- Expertise in supervised, unsupervised, and transfer learning techniques.
- Hands-on expertise in machine learning techniques and algorithms with a strong background in state-of-the-art DNN architectures (Transformers, CNN, R-CNN, RNN, BERT, GAN, autoencoders, etc.) and experience in developing or using major deep learning frameworks (e.g., PyTorch, TensorFlow).
- Experience solving real-world problems with machine learning.

Preferred Experience and Skills
- Demonstrable experience in building, programming, and integrating software and hardware for autonomous or robotic systems.
- Proven experience producing computationally efficient software to meet real-time requirements.
- Background with container platforms such as Kubernetes.
- Strong analytical skills with a bias for action.
- Strong time-management and organization skills to thrive in a fast-paced, dynamic environment.
- Solid written and oral communication skills.
- Good teamwork and interpersonal skills.

Compensation
For U.S.-based candidates: to ensure fairness and transparency, the starting base salary range for this role for candidates in the U.S. is listed below, varying based on location, experience, skills, and qualifications. In addition to base salary, this role will also be offered equity and subsidized benefits (details available upon request).

Benefits
- Competitive base salary and equity
- Medical, dental, and vision (subsidized cost)
- Health savings accounts (HSA), flexible spending accounts (FSA), and dependent care FSAs (DCFSA)
- Retirement plan options, including 401(k) and Roth 401(k)
- Unlimited paid time off (PTO)
- 14 paid company holidays per year

Compensation: $154,560–$193,200 USD

You're a Great Fit if You're
- A go-getter with a growth mindset: you're intellectually curious, have strong business acumen, and actively seek opportunities to build relevant skills and knowledge.
- A detail-oriented problem-solver: you can independently gather information, solve problems efficiently, and deliver results with a "get-it-done" attitude.
- Someone who thrives in a fast-paced environment: you're energized by an entrepreneurial spirit, capable of working quickly, and excited to contribute to a growing company.
- A collaborative team player: you focus on business success and are motivated by team accomplishment over personal agenda.
- Highly organized and results-driven: strong prioritization skills and a dedicated work ethic are essential for you.

Equal Opportunity Statement
At Armada, we are committed to fostering a work environment where everyone is given equal opportunities to thrive. As an equal opportunity employer, we strictly prohibit discrimination or harassment based on race, color, gender, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other characteristic protected by law. This policy applies to all employment decisions, including hiring, promotions, and compensation. Our hiring is guided by qualifications, merit, and business needs at the time.

Unsolicited Resumes and Candidates
Armada does not accept unsolicited resumes or candidate submissions from external agencies or recruiters. All candidates must apply directly through our careers page. Any resumes submitted by agencies without a prior signed agreement will be considered unsolicited, and Armada will not be obligated to pay any fees.

Senior Infrastructure Engineer

Bland
$120,000 – $200,000
United States
Full-time
Remote: No
About Bland
At Bland.com, our goal is to empower enterprises to build AI phone agents at scale. Based in San Francisco, we're a quickly growing team striving to change the way customers interact with businesses. We've raised $65 million from Silicon Valley's finest, including Emergence Capital, Scale Venture Partners, YC, and the founders of Twilio, Affirm, ElevenLabs, and many more.

About the Role
As a Senior Infrastructure Engineer at Bland, you'll help us build the backbone that enables millions of AI-powered phone conversations. You're not just keeping servers running; you're architecting distributed systems that handle real-time voice processing, scale ML inference, and integrate with enterprise telephony infrastructure. Your work directly determines whether our platform can handle business-defining call volumes for our customers or leaves them with dead air.

What You'll Do
Contribute to the design of scalable architecture: Build distributed systems using Kubernetes that handle high-volume, real-time voice processing with strict latency and reliability requirements.
Build and support ML infrastructure: Create and optimize the infrastructure supporting our AI models, from training pipelines to real-time inference serving across multiple regions.
Integrate with telephony: Maintain robust connections between our platform and complex enterprise phone systems, SIP trunks, and VoIP infrastructure.
Recognize flaws and control for them: We're building a new type of architecture that takes something from Column A and something from Column B. We're never going to get it perfect, so you'll help us keep a lookout for what we need to solve.
Ensure reliability: Implement monitoring, alerting, and incident response systems that keep our platform running 24/7 with enterprise-grade uptime.
Scale with growth: Anticipate and solve scaling challenges before they become problems; our call volume grows exponentially, and infrastructure needs to stay ahead.
Security and compliance: Implement security best practices and compliance requirements for enterprise customers in regulated industries.

Interesting Problems to Own
Old meets new: Telephone calls have been around for a while. Now, with an explosion in modern technologies, come interesting new ways to wrangle old-school protocols and techniques. You'll have the space to be creative and really own a new, emergent type of architecture.
Sizable call volumes require new approaches: Understand and deeply invest in ensuring that we match any volume of our customers' customers' calls. We need unique solutions that you'll help us discover along the way.
Streaming architectures: On top of building to support our APIs, you'll also help maintain the reliability, failover, and scaling of our important stream-based traffic.

What Makes You a Great Fit
Infrastructure expertise: 5+ years building and scaling distributed systems, with deep knowledge of cloud infrastructure (AWS/GCP preferred).
You "get" the fundamentals, and beyond: For example, you can casually explain how TLS works beyond buzzwords, quickly sketch how different load-balancing strategies work, or tell us about the obscure thing you fell asleep reading about last night. There isn't a blank stare; there's an excitement to share.
Real-time systems experience: You've built systems that handle high-throughput, low-latency workloads: streaming, real-time processing, or similar.
Startup mentality: You've worked at fast-growing companies where you wear multiple hats and solve problems as they come up.
You're opinionated, but not alienating: You accept that opinions drive progress, but you don't let discussions become alienating at the risk of not finding compromises for our customers.
You're familiar with tools and components like: Cloudflare, HAProxy, Go, TypeScript, Datadog, Terraform, Docker, Kubernetes, NVIDIA hardware (NVLink, for example), and anything in between.

Bonus Points If You Have
Experience with telephony systems (SIP, VoIP, WebRTC).
Background in ML infrastructure, model serving, or GPU computing.
Experience with real-time audio/video processing.

Benefits and Pay
Healthcare, dental, vision, all the good stuff.
Meaningful equity in a fast-growing company.
Every tool you need to succeed.
Beautiful office in Jackson Square, SF, with rooftop views.

If you don't have the perfect experience, that's fine! We're a bunch of drop-outs and hackers. Working at a start-up is really hard. We work a lot and we figure things out on the fly.

Compensation Range: $120,000-$200,000

Applied AI, Technical Lead, Forward Deployed AI Engineer - Montreal

Mistral AI
Canada
Full-time
Remote: No
About Mistral
At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.

We democratize AI through high-performance, optimized, open-source, and cutting-edge models, products, and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work.

We are a dynamic, collaborative team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation. Our teams are distributed between France, the USA, the UK, Germany, and Singapore. We are creative, low-ego, and team-spirited.

Join us to be part of a pioneering company shaping the future of AI. Together, we can make a meaningful impact. See more about our culture on mistral.ai/careers.

About the Job
Mistral AI is seeking a Technical Lead, Applied AI to drive the technical strategy, execution, and delivery of complex AI solutions for our enterprise customers. In this role, you will lead project teams of Applied AI Engineers, ensuring the successful deployment of Mistral AI products and the development of high-impact, scalable AI use cases.

You will act as the primary technical point of contact for our most strategic customers, guiding them through the entire lifecycle, from pre-sales to post-implementation, while collaborating closely with research, product, and engineering teams to shape the future of our offerings.

As a Technical Lead, you will bridge the gap between cutting-edge AI research and real-world enterprise applications, ensuring our solutions are robust, scalable, and aligned with both customer needs and Mistral's technological vision.
What you will do
- Deliver, as an IC, the critical lines of code in our complex projects: you'll be hands-on, de-risking the critical parts and staying deeply involved in coding, reviewing, and optimizing AI solutions.
- Lead technical teams of Applied AI Engineers, providing mentorship, technical guidance, and best practices for deploying state-of-the-art GenAI applications across industries.
- Lead technical discussions during pre-sales, translating customer requirements into actionable solutions and communicating Mistral's technological advantages to diverse stakeholders.
- Design and oversee the implementation of complex AI systems, including fine-tuning, RAG, agentic workflows, and custom LLM applications, ensuring alignment with Mistral's product roadmap and open-source initiatives.
- Drive innovation by identifying emerging trends in AI, evaluating new tools and methodologies, and championing best practices for fine-tuning, inference, and deployment.
- Work closely with product managers, researchers, and engineers to ensure seamless integration of customer feedback into Mistral's product development cycle.

How We Work in Applied AI
- We care about people and outputs. What matters is what you ship, not the time you spend on it.
- Bureaucracy is where urgency goes to vanish. You talk to whoever you need to talk to. The best idea wins, whether it comes from a principal engineer or someone in their first week.
- Always ask why. The best solutions come from deep understanding, not from copying what worked before.
- We say what we mean. Feedback is direct, timely, and given because we care.
- No politics. Low ego, high standards.
- We embrace an unstructured environment and find joy in it.

About you
- You are fluent in French and English.
- You hold a PhD or Master's degree in AI, Machine Learning, Computer Science, or a related field.
- You have 7-8+ years of experience in AI/ML, with at least 2+ years in a technical leadership role (e.g., Tech Lead, Engineering Manager, Staff Engineer, or Solutions Architect) focused on AI products or enterprise solutions.
- You have a proven track record of leading teams to deliver complex AI projects, from prototyping to production, in industries such as tech, finance, healthcare, or industrial automation.
- You possess deep expertise in fine-tuning LLMs, advanced RAG, agentic systems, and deploying NLP applications at scale.
- You are proficient in Python, PyTorch, and modern AI frameworks (e.g., LangChain, Hugging Face). Experience with cloud platforms (AWS, GCP, Azure) and MLOps tools is a plus.
- You have strong software engineering skills, including API design, backend/full-stack development, and system architecture.
- You excel in technical communication, with the ability to articulate complex concepts to both technical and non-technical audiences, including executives and engineers.
- You thrive in fast-paced, collaborative environments and are passionate about mentoring and growing technical talent.

Ideally, you have:
- Contributed to open-source projects, particularly in the LLM or AI space.
- Experience in customer-facing roles (e.g., Solutions Architect, Customer Engineer, or Technical Product Manager) with a focus on enterprise AI adoption.
- A track record of driving technical strategy and influencing product direction based on customer needs and market opportunities.

Why join us?
You'll have the opportunity to shape the future of AI adoption in enterprises, work with a world-class team, and contribute to open-source projects that impact millions. If you're excited about leading technical innovation and solving real-world challenges with AI, we'd love to hear from you!