The AI job market moves fast. We keep up so you don't have to.
Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.
New AI Opportunities
Showing 61 – 79 of 79 jobs
ML Engineer, Post-Training and Evaluation
Reflection
1-10
United States
Full-time
Remote: false
Our Mission
Reflection's mission is to build open superintelligence and make it accessible to all. We're developing open-weight models for individuals, agents, enterprises, and even nation states. Our team of AI researchers and company builders comes from DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic, and beyond.

Role Overview
We're looking for a core member of Reflection's Applied AI team to drive model fine-tuning and evaluations for enterprise customers. This team takes Reflection's open-weight models and adapts them to specific customer domains, tasks, and constraints. As an ML Engineer, you will work hands-on with customer data, run fine-tuning workflows, build evaluation harnesses, and deploy adapted models to production. You'll work directly with customers to understand what they need and with research teams to push what's possible.

What You'll Do
- Fine-tune Reflection's open-weight models for customer-specific use cases: prepare datasets, configure training runs (SFT, preference optimization, reinforcement fine-tuning), and iterate based on evals.
- Build and maintain evaluation infrastructure: design eval suites, curate test sets, establish baselines, and measure whether fine-tuned models actually improve on the tasks customers care about.
- Prepare training data from raw customer inputs: inspect data quality, clean and format datasets, identify adversarial or noisy samples, and build reproducible data pipelines.
- Debug and diagnose training and inference issues: interpret loss curves, catch data quality problems, and identify when training dynamics indicate something is wrong.
- Support end-to-end deployments of fine-tuned models across hybrid environments (public cloud, VPC, and on-premises), helping ensure inference performance and reliability in production.
- Contribute to evolving playbooks, evaluation benchmarks, and best practices as part of a growing fine-tuning and evals practice.

What We're Looking For
- Applied ML experience with hands-on fine-tuning of language models. You have prepared datasets, run training loops, evaluated results, and shipped a fine-tuned model. Familiarity with SFT, DPO, RLHF, or similar techniques.
- Understanding of evaluation methodology: how to design evals, interpret training graphs, and tell whether a model is actually better or just overfitting to the benchmark.
- Comfort with training infrastructure: GPUs, compute management, debugging common training failures. You don't need to be an infra engineer, but you should not be afraid of a stack trace from a training loop.
- Strong software engineering fundamentals (Python). You write clean, reproducible code. Experience with data pipelines and version control for datasets and experiments.
- 3+ years of engineering experience with meaningful exposure to applied ML or ML engineering (e.g., MLE, Applied Scientist, Data Scientist who has shipped models to production, or ML-focused SWE).
- Demonstrated ability and interest in working in customer-facing environments, understanding user needs and translating domain requirements into training strategies.
- Self-starter with high agency and ownership, excelling in fast-paced startup environments where playbooks are still being written.

What We Offer
We believe that to build superintelligence that is truly open, you need to start at the foundation. Joining Reflection means building from the ground up as part of a small, talent-dense team. You will help define our future as a company, and help define the frontier of open foundational models. We want you to do the most impactful work of your career with the confidence that you and the people you care about most are supported.
- Top-tier compensation: salary and equity structured to recognize and retain the best talent globally.
- Health & wellness: comprehensive medical, dental, vision, life, and disability insurance.
- Life & family: fully paid parental leave for all new parents, including adoptive and surrogate journeys. Financial support for family planning.
- Benefits & balance: paid time off when you need it, relocation support, and more perks that optimize your time.
- Opportunities to connect with teammates: lunch and dinner are provided daily, plus regular off-sites and team celebrations.
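The evaluation loop this role describes (establish a baseline, fine-tune, and measure whether the adapted model actually improves on the customer's task) can be sketched as a toy harness. This is an illustrative sketch, not Reflection's actual stack: both "models" are stand-in functions, and exact match is just one possible metric.

```python
# Toy eval harness: score a baseline and a fine-tuned model on the same
# test set and report the delta. Both "models" here are stand-in functions;
# a real harness would call actual model endpoints.

def exact_match(prediction: str, reference: str) -> float:
    """1.0 if prediction matches the reference (case-insensitive), else 0.0."""
    return 1.0 if prediction.strip().lower() == reference.strip().lower() else 0.0

def evaluate(model_fn, test_set) -> float:
    """Mean exact-match accuracy of model_fn over the test set."""
    scores = [exact_match(model_fn(ex["prompt"]), ex["reference"]) for ex in test_set]
    return sum(scores) / len(scores)

test_set = [
    {"prompt": "2+2=", "reference": "4"},
    {"prompt": "capital of France?", "reference": "Paris"},
    {"prompt": "3*3=", "reference": "9"},
]

# Hypothetical stand-ins for a base model and a fine-tuned model:
baseline = lambda p: {"2+2=": "4"}.get(p, "unsure")
fine_tuned = lambda p: {"2+2=": "4", "capital of France?": "paris", "3*3=": "9"}[p]

base_acc = evaluate(baseline, test_set)
ft_acc = evaluate(fine_tuned, test_set)
print(f"baseline={base_acc:.2f} fine-tuned={ft_acc:.2f} delta={ft_acc - base_acc:+.2f}")
# → baseline=0.33 fine-tuned=1.00 delta=+0.67
```

The same shape scales up: swap the stand-ins for real inference calls and exact match for task-appropriate metrics, and the baseline run becomes the bar every fine-tune has to clear.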
2026-04-23 1:50
Member of Engineering (Post-training)
Poolside
201-500
United Kingdom
Full-time
Remote: false
ABOUT POOLSIDE
In this decade, the world will create Artificial General Intelligence. Only a small number of companies will achieve this, and their ability to stack advantages and pull ahead will define the winners. These companies will move faster than anyone else. They will attract the world's most capable talent. They will be at the forefront of applied research, engineering, infrastructure, and deployment at scale. They will continue to scale their training to larger and more capable models. They will earn the right to raise large amounts of capital along their journey to enable this. They will create powerful economic engines. They will obsess over the success of their users and customers.
poolside exists to be this company: to build a world where AI will be the engine behind economically valuable work and scientific progress.

ABOUT OUR TEAM
We are a remote-first team spread across Europe and North America that comes together in person for three days once a month, and for longer offsites twice a year. Our R&D and production teams combine research-oriented and engineering-oriented profiles, but everyone cares deeply about the quality of the systems we build and has a strong underlying knowledge of software development. We believe that good engineering leads to faster development iterations, which allows us to compound our efforts.

ABOUT THE ROLE
You would work as part of our Applied Research team, focused on turning pre-trained LLMs into well-aligned and highly capable AI systems for coding and software development. This is a hands-on role where you'll work across a variety of efforts, including building data pipelines and environments for agentic use cases, researching and implementing post-training algorithms, designing experiments, and testing hypotheses. You will have access to thousands of GPUs on this team.

YOUR MISSION
To turn pre-trained LLMs into well-aligned and highly capable AI systems.

RESPONSIBILITIES
- Research and experiment on ways to specialize foundational models for agentic use cases
- Build and maintain data and training pipelines
- Keep up with the latest research and stay familiar with the state of the art in LLMs, alignment, synthetic data generation, and code generation
- Design, analyze, and iterate on training, fine-tuning, and data-generation experiments
- Write high-quality, pragmatic code
- Work as part of a team: plan future steps, discuss, and communicate clearly with your peers

SKILLS & EXPERIENCE
Experience with Large Language Models (LLMs):
- Deep knowledge of Transformers
- Strong deep learning fundamentals
- Good taste in data
- Post-training experience with LLMs
- Has extensively used and probed LLMs, and is familiar with their capabilities and limitations
- Knowledge of distributed training
- Strong machine learning and engineering background

Research experience:
- Experience proposing and evaluating novel research ideas
- Familiar with, or has contributed to, the state of the art in several of the following: fine-tuning and alignment of LLMs, synthetic data generation, continual learning, RLVR, code generation
- Comfortable in a rapidly iterating environment
- Reasonably opinionated

Programming experience:
- Linux
- Strong algorithmic skills
- Python with PyTorch or Jax
- Uses modern tools, including the latest code agents, and is always looking to improve
- Strong critical thinking and the ability to question code-quality policies when applicable
- Prior experience in non-ML programming, especially outside Python, is a nice-to-have

PROCESS
- Intro call with one of our Founding Engineers
- Technical interview(s) with one of our Founding Engineers
- Team-fit call with the People team
- Final interview with one of our Founding Engineers

BENEFITS
- Fully remote work and flexible hours
- 37 days/year of vacation and holidays
- Health insurance allowance for you and dependents
- Company-provided equipment
- Wellbeing, always-be-learning, and home-office allowances
- Frequent team get-togethers
- Great, diverse, and inclusive people-first culture
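Among the topics this role lists, RLVR (reinforcement learning with verifiable rewards) lends itself to a small illustration: for code generation, the reward can be computed by executing a candidate program against tests rather than by querying a learned reward model. The candidates and tests below are hypothetical, and a real pipeline would sandbox execution.

```python
# Toy verifiable reward for code generation: the reward is the fraction of
# unit tests a candidate implementation passes. No learned reward model.

def verifiable_reward(candidate_src: str, tests) -> float:
    """Execute candidate_src (expected to define f) and score it against tests."""
    scope = {}
    try:
        exec(candidate_src, scope)  # toy only; sandbox untrusted code in practice
    except Exception:
        return 0.0  # code that does not even parse or run earns nothing
    f = scope.get("f")
    if not callable(f):
        return 0.0
    passed = 0
    for args, expected in tests:
        try:
            if f(*args) == expected:
                passed += 1
        except Exception:
            pass  # runtime errors simply earn no credit
    return passed / len(tests)

tests = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
good = "def f(a, b):\n    return a + b\n"
bad = "def f(a, b):\n    return a - b\n"
print(verifiable_reward(good, tests), verifiable_reward(bad, tests))
# good earns full reward; bad passes only the (0, 0) case
```

The appeal of this reward shape is that it cannot be gamed the way a learned reward model can: the policy only scores by actually passing the tests.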
2026-04-22 18:35
Senior Engineer, System-Level Design Verification
Tenstorrent
1001-5000
$100,000 – $500,000
United States
Full-time
Remote: false
Tenstorrent is leading the industry on cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch, and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.

Tenstorrent is seeking a Physical Design Engineer to lead cross-functional efforts to solve complex physical design challenges and develop end-to-end RTL-to-GDS methodologies across advanced nodes, with a strong focus on PPA and runtime improvements. The engineer will architect, integrate, and deploy AI/ML-driven solutions into production physical design flows, creating custom CAD tools and partnering with internal teams and EDA vendors to drive next-generation, ML-enabled capabilities.
This role is hybrid, based out of Santa Clara, CA or Austin, TX or Fort Collins, CO.
We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.
Who you are
BS in Electrical or Computer Engineering (or equivalent experience) with 5+ years in Physical Design CAD methodology at advanced nodes.
Proven track record improving PPA and/or runtime on high-performance, low-power taped-out designs.
Hands-on with industry-standard EDA tools (e.g., Fusion Compiler) across synthesis, P&R, STA, signoff, and hierarchical flows.
Strong Python/Tcl and data skills, with interest or experience in ML frameworks (PyTorch, TensorFlow), and the ability to drive complex projects independently.
What we need
Lead and contribute to cross-functional efforts solving complex physical design challenges across IPs, projects, and advanced technology nodes.
Develop and enhance RTL-to-GDS methodologies, including floorplanning, synthesis, P&R, STA, signoff, and assembly.
Architect and deploy AI/ML-driven solutions in production flows to improve engineering efficiency, turnaround time, and QoR.
Optimize EDA tools and custom CAD flows using data-driven and ML-based techniques, in close collaboration with verification, extraction, timing, DFT, and EDA vendors.
What you will learn
How to scale AI/ML-driven methodologies across diverse products and advanced technology nodes in real production flows.
New ways to blend classical EDA algorithms with modern ML techniques to push PPA and runtime limits.
Best practices for deploying, validating, and monitoring ML models in production CAD environments.
How to influence next-generation ML-enabled EDA tools and collaborate deeply with cross-functional teams (PV, extraction, timing, DFT).
Compensation for all engineers at Tenstorrent ranges from $100k - $500k including base and variable compensation targets. Experience, skills, education, background and location all impact the actual offer made.
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.
This position requires access to technology that requires a U.S. export license for persons whose most recent country of citizenship or permanent residence is a U.S. EAR Country Groups D:1, E1, or E2 country. This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E1, and E2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.
2026-04-22 18:06
Staff Engineer, CPU Core Verification
Tenstorrent
1001-5000
$100,000 – $500,000
United States
Full-time
Remote: false
2026-04-22 18:06
Director of Customer Engineering
Tenstorrent
1001-5000
$100,000 – $500,000
United States
Full-time
Remote: false
2026-04-22 18:06
Lead Hardware Solutions Architect
Tenstorrent
1001-5000
$100,000 – $500,000
United States
Full-time
Remote: false
2026-04-22 18:06
Defense / Edge Tech Lead
Deepgram
201-500
$185,000 – $245,000
United States
Full-time
Remote: false
Company Overview
Deepgram is the leading platform underpinning the emerging trillion-dollar Voice AI economy, providing real-time APIs for speech-to-text (STT) and text-to-speech (TTS), and for building production-grade voice agents at scale. More than 200,000 developers and 1,300+ organizations build voice offerings that are 'Powered by Deepgram', including Twilio, Cloudflare, Sierra, Decagon, Vapi, Daily, Cresta, Granola, and Jack in the Box. Deepgram's voice-native foundation models are accessed through cloud APIs or as self-hosted and on-premises software, with unmatched accuracy, low latency, and cost efficiency. Backed by a recent Series C led by leading global investors and strategic partners, Deepgram has processed over 50,000 years of audio and transcribed more than 1 trillion words. No organization in the world understands voice better than Deepgram.

Company Operating Rhythm
At Deepgram, we expect an AI-first mindset: AI use and comfort aren't optional; they're core to how we operate, innovate, and measure performance. Every team member who works at Deepgram is expected to actively use and experiment with advanced AI tools, and even build their own into their everyday work. We measure how effectively AI is applied to deliver results, and consistent, creative use of the latest AI capabilities is key to success here. Candidates should be comfortable adopting new models and modes quickly, integrating AI into their workflows, and continuously pushing the boundaries of what these technologies can do. Additionally, we move at the pace of AI. Change is rapid, and you can expect your day-to-day work to evolve just as quickly. This may not be the right role if you're not excited to experiment, adapt, think on your feet, and learn constantly, or if you're seeking something highly prescriptive with a traditional 9-to-5.

The Opportunity
Deepgram's speech AI models are among the fastest and most accurate in the world, and an increasing number of defense and edge computing customers need those models to run outside the cloud: on devices, on premises, in disconnected environments, and on hardware with strict power and compute constraints. This is the frontier where AI meets the physical world, and it requires a fundamentally different engineering approach.
As the Defense / Edge Tech Lead, you will own the technical direction for deploying Deepgram's models to edge and embedded environments. You will work closely with hardware partners like Qualcomm and Motorola, support defense customer requirements through AWS NatSec partnerships, and drive the model optimization and runtime engineering needed to deliver production-quality speech AI on constrained platforms. You will be the technical point of contact for some of Deepgram's most strategically important partnerships and customers.
This role requires a rare combination of systems engineering depth, model optimization expertise, and the judgment to navigate defense and government customer environments. Note that Deepgram does not currently hold facility clearance; this role does not require an active security clearance, though experience working in or alongside classified programs is highly valued.

What You'll Do
- Lead the technical strategy for edge deployment of Deepgram's STT and TTS models, defining the architecture for on-device, on-premises, and air-gapped inference across diverse hardware targets.
- Optimize models for edge and embedded platforms, driving quantization, pruning, distillation, and runtime optimization to meet strict latency, memory, and power constraints.
- Partner with Qualcomm, Motorola, and other hardware vendors to ensure Deepgram models run efficiently on their chipsets, collaborating on SDK integration, performance benchmarking, and joint go-to-market.
- Support defense customer requirements through AWS NatSec partnerships, translating mission requirements into engineering deliverables and ensuring Deepgram's solutions meet the unique demands of government environments.
- Design and build edge runtime infrastructure, including model packaging, deployment pipelines, OTA update mechanisms, and telemetry for devices operating in low-connectivity or disconnected environments.
- Harden deployments for security-sensitive environments, implementing secure boot chains, encrypted model storage, tamper detection, and audit logging appropriate for defense and government use cases.
- Benchmark and validate performance across target hardware platforms, establishing repeatable test suites for latency, accuracy, power consumption, and resource utilization.
- Collaborate with Research and Engine teams to influence model architectures toward edge-friendly designs from the start, reducing the optimization burden at deployment time.
- Provide technical leadership to cross-functional teams working on defense and edge projects, setting engineering standards, reviewing designs, and mentoring engineers on systems and optimization practices.

You'll Love This Role If
- You find deep satisfaction in making a 300M-parameter model run on hardware with 4GB of RAM, and still hit accuracy targets.
- You want to work at the intersection of AI and hardware, where optimization is not optional but existential.
- You are energized by partnerships with hardware companies and enjoy the back-and-forth of getting a model to sing on a new chipset.
- You understand the unique dynamics of defense and government customers and can navigate their requirements without losing engineering velocity.
- You believe that edge AI is the next major deployment frontier, and you want to define how speech AI gets there.
- You prefer working on hard, constrained problems over open-ended research; you want to ship, not just publish.

It's Important To Us That You Have
- 5+ years of experience in systems engineering, embedded computing, or edge AI deployment, with a track record of delivering production systems on constrained hardware.
- Strong proficiency in C, C++, and/or Rust, with experience writing performance-critical code for resource-constrained environments.
- Hands-on experience with model optimization for edge deployment, including quantization, pruning, knowledge distillation, or architecture-specific compilation.
- Familiarity with edge inference runtimes such as ONNX Runtime, TensorRT, TFLite, or vendor-specific SDKs (Qualcomm SNPE/QNN, MediaTek NeuroPilot, etc.).
- Experience with security-conscious development practices, including secure boot, encrypted storage, code signing, and secure deployment pipelines.
- Strong understanding of hardware-software interaction: CPU/GPU/NPU architectures, memory hierarchies, power management, and how they affect model inference performance.
- Excellent communication skills: you will be the technical face of Deepgram to hardware partners and defense customers, and you need to be credible and clear in both contexts.

It Would Be Great if You Had
- Prior experience working on or alongside classified defense programs: you understand SCIFs, accreditation processes, and
the operational constraints of secure environments, even if you do not currently hold an active clearance.Experience with ML model optimization techniques at depth — custom quantization schemes, mixed-precision inference, neural architecture search for edge targets.Familiarity with ONNX, TensorRT, or similar model compilation and optimization toolchains and their tradeoffs across hardware targets.Defense or govtech industry experience, including familiarity with procurement processes, ITAR, FedRAMP, or DoD software development standards.Experience with real-time audio processing on embedded platforms — DSP pipelines, audio codec optimization, or streaming inference on microcontrollers or edge SoCs.Background in hardware evaluation and benchmarking — systematically comparing accelerators, SoCs, or GPUs for specific workload profiles.Benefits & Perks*Holistic healthMedical, dental, vision benefitsAnnual wellness stipendMental health supportLife, STD, LTD Income Insurance PlansWork/life blendUnlimited PTOGenerous paid parental leaveFlexible schedule12 Paid US company holidaysQuarterly personal productivity stipendOne-time stipend for home office upgrades401(k) plan with company matchTax Savings ProgramsContinuous learningLearning / Education stipendParticipation in talks and conferencesEmployee Resource GroupsAI enablement workshops / sessions*For candidates outside of the US, we use an Employer of Record model in many countries, which means benefits are administered locally and governed by country-specific regulations. Because of this, benefits will differ by region — in some cases international employees receive benefits US employees do not, and vice versa. As we scale, we will continue to evaluate where we can create more alignment, but a 1:1 global benefits structure is not always legally or operationally possible.Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC and NVIDIA, Deepgram has raised over $215M in total funding. 
If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate.We are happy to provide accommodations for applicants who need them.
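The quantization work this role centers on can be illustrated with a small sketch. Below is a generic toy example of symmetric int8 post-training quantization — the basic arithmetic behind shrinking a model's weights for constrained hardware. It is a minimal sketch in plain Python and assumes nothing about Deepgram's actual toolchain; the weight values are invented for illustration.

```python
# Toy symmetric int8 post-training quantization of a weight tensor.
# Not a production pipeline: real toolchains (ONNX Runtime, TensorRT, etc.)
# add calibration, per-channel scales, and fused int8 kernels.

def quantize_int8(weights):
    """Map float weights to int8 values with one symmetric scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [x * scale for x in q]

weights = [0.52, -1.27, 0.003, 0.89]   # illustrative values
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Rounding error per weight is bounded by scale / 2.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

The tradeoff this exposes is the one the posting describes: int8 storage cuts memory roughly 4x versus float32, at the cost of a bounded per-weight error that evaluation then has to show is acceptable for accuracy targets.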
2026-04-22 17:21
Enterprise Account Executive
Gong
1001-5000
$148,000 – $225,000
Singapore
Full-time
Remote
false
Gong harnesses the power of AI to transform how revenue teams win. The Gong Revenue AI Operating System unifies data, insights, and workflows into a single, trusted system that observes, guides, and acts alongside the world’s most successful revenue teams. Powered by the Gong Revenue Graph, AI-powered intelligence, specialized agents, and trusted applications, Gong helps more than 5,000 companies around the world deeply understand their teams and customers, automate critical sales workflows, and close more deals with less effort. For more information, visit www.gong.io.
At Gong, you will join a company built on innovative products, ambitious goals, and passionate people. We are shaping the future of revenue intelligence and we want people who are excited to build what comes next. You will work with a team that dreams big, moves fast, and cares deeply about the craft and about each other. Here, transparency and trust are core to how we operate, and every person has the opportunity to make a visible impact. If you want to grow, stretch, and do work that truly matters, Gong is the place to do the best work of your career.
Gong is seeking a hands-on Staff, AI Enablement and Innovation professional to own our internal AI operating model. Sitting within our IT organization, this role is the heartbeat of our internal digital transformation. You will empower our internal teams by bridging the gap between high-level business discovery and deep technical execution.
You will be the primary architect of Gong’s internal agentic strategy—responsible for "mining" the business for efficiency opportunities while simultaneously building the centralized orchestration layer that ensures our enterprise AI spend is governed, consistent, and scalable. This is a high-impact IC (Individual Contributor) role designed for a "scrappy builder" who thrives on turning internal complexity into streamlined, automated excellence.
RESPONSIBILITIES
Strategy & Governance (The "Guardrails")
Define the Roadmap: Partner with Security, Legal, and business leaders to define the internal AI roadmap.
Own the Stack: Operate the enterprise AI stack, including LLMs, vector databases, and gateways.
Standardization: Enforce consistent patterns for tool calling, prompt versioning, state management, and error handling to prevent fragmented, "ad-hoc" agent implementations.
Lifecycle Management: Manage the full model lifecycle, from evaluation and testing to upgrades and deprecations.
Discovery & Execution (The "Gold Mining")
Business Partnership: Proactively interview teams (Talent, Support, Sales) to identify manual workflows that can be automated via agentic AI.
Proof of Efficacy: Build and deploy POCs independently to demonstrate ROI before scaling.
Financial & Performance Operations (The "Numbers")
Cost Management: Own the token procurement process and build forecasting/chargeback models to prevent uncontrolled spend.
Performance Monitoring: Build dashboards to track SLAs/SLOs (latency, accuracy, uptime) and monitor usage, cost, and error rates.
Optimization: Proactively identify opportunities for cost-saving (e.g., model switching) and performance tuning.
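The forecasting/chargeback models mentioned under Cost Management can start very simply: meter tokens per team and bill them at per-model rates. The sketch below is a hypothetical illustration — the rates, model names, and team names are invented, not Gong's internal figures.

```python
# Toy token chargeback model: aggregate per-team token usage and price it
# at per-model rates. Rates and names below are made up for illustration.

RATES_PER_1K = {"small-model": 0.002, "large-model": 0.06}  # USD per 1K tokens

usage = [
    {"team": "Support", "model": "small-model", "tokens": 500_000},
    {"team": "Support", "model": "large-model", "tokens": 10_000},
    {"team": "Talent",  "model": "small-model", "tokens": 100_000},
]

def chargeback(usage_rows):
    """Return total cost per team in USD."""
    bill = {}
    for row in usage_rows:
        cost = row["tokens"] / 1000 * RATES_PER_1K[row["model"]]
        bill[row["team"]] = bill.get(row["team"], 0.0) + cost
    return bill

bill = chargeback(usage)
# Support: 500K tokens at $0.002/1K plus 10K tokens at $0.06/1K = $1.60
```

The same per-team aggregates feed forecasting (extrapolate usage trends) and the spend alerts that prevent uncontrolled cost growth.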
QUALIFICATIONS
The Persona: You are a Senior IT Business Analyst, Technical Implementation Lead, or Solutions Architect.
Technical Depth: Practical, hands-on experience with the modern AI stack (OpenAI, Gemini, Anthropic, Vector DBs). You understand the nuances of state management and prompt versioning.
Scrappy Builder: You have a "hands-on-keyboard" mentality. You can take an idea from a stakeholder and turn it into a working agentic workflow without needing external engineering resources.
Business Acumen: Ability to translate complex technical AI patterns into clear business value and ROI for stakeholders.
Operational Rigor: Experience managing vendor relationships, forecasting technical costs (tokens), and maintaining system uptime/SLAs.
YOU ARE
Orchestration: experienced with LangChain or similar agentic frameworks.
AI Tooling: familiar with Prompt Flow, vector databases, and API integration.
Data & Analytics: able to build performance and cost-tracking dashboards (SQL, Tableau, etc.).
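The "consistent patterns for tool calling ... and error handling" called for above can be sketched as a single dispatcher that every agent routes through, so results and errors always have one shape. This is a generic, hypothetical pattern — the `lookup_employee` tool is invented for illustration and is not part of any Gong system.

```python
# Minimal centralized tool-calling pattern: tools register once, and all
# calls go through one dispatcher with uniform validation and error shape.

TOOLS = {}

def tool(name):
    """Decorator that registers a function under a stable tool name."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@tool("lookup_employee")
def lookup_employee(email: str) -> dict:
    # Stand-in for an HRIS lookup; a real tool would call the vendor API.
    return {"email": email, "team": "Support"}

def dispatch(call: dict) -> dict:
    """Single entry point: every result or error has the same shape."""
    name, args = call.get("name"), call.get("arguments", {})
    if name not in TOOLS:
        return {"ok": False, "error": f"unknown tool: {name}"}
    try:
        return {"ok": True, "result": TOOLS[name](**args)}
    except TypeError as exc:  # missing or malformed arguments
        return {"ok": False, "error": str(exc)}

good = dispatch({"name": "lookup_employee", "arguments": {"email": "a@x.io"}})
bad = dispatch({"name": "delete_everything"})
```

Because every agent shares one registry and one error shape, new workflows cannot drift into the fragmented "ad-hoc" implementations the role is meant to prevent.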
PERKS & BENEFITS
We offer Gongsters a variety of medical, dental, and vision plans, designed to fit you and your family’s needs.
Wellbeing Fund - flexible wellness stipend to support a healthy lifestyle.
Mental Health benefits with covered therapy and coaching.
401(k) program to help you invest in your future.
Education & learning stipend for personal growth and development.
Flexible vacation time to promote a healthy work-life blend.
Paid parental leave to support you and your family.
Company-wide recharge days each quarter.
Work from home stipend to help you succeed in a remote environment.
The annual salary hiring range for this position is $148,000 - $225,000 USD.
Compensation is based on factors unique to each candidate, including, but not limited to, job-related skills, qualifications, education, experience, and location. At Gong, we have a location-based compensation structure, which means there may be a different range for candidates in other locations. The total compensation package for this position, in addition to base compensation, may include incentive compensation, bonus, equity, and benefits. Some of our sales compensation programs also offer the potential to achieve above targeted earnings for those who exceed their sales targets.
We are always looking for outstanding Gongsters! So if this sounds like something that interests you regardless of compensation, please reach out. We may have more roles for you to consider and would love to connect.
We have noticed a rise in recruiting impersonations across the industry, where scammers attempt to access candidates' personal and financial information through fake interviews and offers. All Gong recruiting email communications will always come from the @gong.io domain. Any outreach claiming to be from Gong via other sources should be ignored.
Gong is an equal-opportunity employer. We believe that diversity is integral to our success, and do not discriminate based on race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, military status, genetic information, or any other basis protected by applicable law.
To review Gong's privacy policy, visit https://www.gong.io/gong-io-job-candidates-privacy-notice/ for more details.
2026-04-22 17:20
Member of Technical Staff - Product Engineer (Internal Data & Agent Platform)
Liquid AI
51-100
United States
Full-time
Remote
false
About Liquid AI
Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.
The Opportunity
We are building the operating system of the company itself. Not the product we sell, but the internal data and agent infrastructure that makes Liquid run at the speed of a 10-person team while scaling well past that. The thesis is simple: replace coordination overhead with visibility, and replace guesswork with informed judgment. You will build the unified company data graph and the agent layer on top of it. This is a founding role. There is no existing team, no legacy system, no playbook.
What We're Looking For
We need someone who:
Builds end-to-end: You are equally comfortable designing a data architecture, writing the integrations, deploying the infrastructure, and iterating on the agent layer. This is not a role where you hand off specs to someone else.
Thinks in systems: You see the connections between a Slack message, a Linear ticket, a GitHub PR, and a Rippling org chart. You can model how information flows through an organization and where the gaps are.
Ships fast with high standards: We need the first version of the data graph running in weeks, not months. You know how to make pragmatic tradeoffs without building something you will regret.
Understands LLMs deeply: You will build agents on top of foundation models. You need real experience with prompt engineering, tool use, evals, and the practical limits of what models can and cannot do today.
Operates with high autonomy: You will have direct access to the leadership team and broad latitude to make decisions. We need someone who thrives with that, not someone who needs a product spec before writing code.
The Work
Build the unified company data graph by integrating systems across execution (GitHub, Linear), communication (Slack, email, Zoom, calendars), model performance (W&B, eval dashboards), and operations (Rippling, Vanta, Ramp, Runway).
Design and ship agents that surface performance signals, resource allocation suggestions, bottleneck detection, and opportunity visibility to leadership.
Start with observability. The first milestone is a real-time map of work, ownership, and impact across the company.
Progress from visibility to recommendations to partial automation, following the progressive autonomy principle: never automate a decision you do not yet understand.
Own the entire stack: data pipelines, APIs, agent orchestration, evals, and the interfaces leadership uses to interact with the system.
Desired Experience
5+ years of software engineering with significant experience building data pipelines, integrations, or internal platforms.
Hands-on experience building with LLMs in production: agent systems, tool use, RAG, or similar.
Strong Python. Comfortable with TypeScript for frontend/tooling as needed.
Experience integrating SaaS APIs (Slack, GitHub, Google Workspace, HRIS systems, or similar).
Track record of shipping systems from zero to one with minimal guidance.
Bonus: experience with data modeling, knowledge graphs, or organizational analytics.
What We Offer
Compensation: Competitive base salary with equity in a unicorn-stage company.
Health: We pay 100% of medical, dental, and vision premiums for employees and dependents.
Financial: 401(k) matching up to 4% of base pay.
Time Off: Unlimited PTO plus company-wide Refill Days throughout the year.
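One way to picture the "unified company data graph" this posting describes is as typed nodes (messages, tickets, PRs) connected by typed edges. The sketch below is a hedged toy model — the node IDs, attributes, and link types are invented for illustration; a production system would sync them from the real Slack, Linear, and GitHub APIs.

```python
# Toy company data graph: typed nodes plus typed edges, with a simple
# neighbor query. All identifiers below are invented placeholders.

graph = {"nodes": {}, "edges": []}

def add_node(node_id, kind, **attrs):
    """Store a node with its type and source-system attributes."""
    graph["nodes"][node_id] = {"kind": kind, **attrs}

def add_edge(src, dst, rel):
    """Link two nodes with a named relationship."""
    graph["edges"].append((src, dst, rel))

add_node("slack:123", "message", author="alice")
add_node("linear:ENG-42", "ticket", owner="alice")
add_node("github:pr/7", "pull_request", author="alice")
add_edge("slack:123", "linear:ENG-42", "discusses")
add_edge("github:pr/7", "linear:ENG-42", "closes")

def neighbors(node_id):
    """Everything directly linked to a node, in either direction."""
    out = [(d, r) for s, d, r in graph["edges"] if s == node_id]
    inc = [(s, r) for s, d, r in graph["edges"] if d == node_id]
    return out + inc

linked = neighbors("linear:ENG-42")
```

Even this toy shows the observability milestone in miniature: starting from one ticket, the graph answers which conversation discussed it and which PR closed it, which is the kind of work/ownership map the first milestone calls for.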
2026-04-22 17:06
AI Solutions Engineer
Baseten
101-200
$165,000 – $330,000
United States
Full-time
Remote
false
ABOUT BASETEN
Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to for shipping AI products.
THE ROLE
As an AI Solutions Engineer at Baseten, you will partner directly with customers to architect, build, and deploy high-scale production AI applications on Baseten’s platform. You’ll own the journey with customers from initial exploration to production deployment, translating ambiguous business goals into reliable, observable services with clear quality, latency, and cost outcomes.
This role is a great fit for entrepreneurial engineers who want a front-row view into how modern companies adopt AI at scale and who enjoy working across product, software development, performance engineering, and customer-facing implementations.
To be clear, this is an engineering role with hands-on coding and software development that also includes aspects of product management, technical customer success, and pre-sales solution engineering.
EXAMPLE INITIATIVES
Take a look at these blog posts written by members of our Forward Deployed Engineering team:
Forward Deployed Engineering on the frontier of AI
The fastest, most accurate Whisper transcription
Deploy production-ready model servers from Docker images
Deploy custom ComfyUI workflows as APIs
RESPONSIBILITIES
Develop and maintain software systems and product features using one or more general-purpose programming languages in a production-level environment, with a preference for Python due to its relevance in ML projects.
Drive customer impact by designing, implementing, and deploying Baseten solutions end-to-end (problem framing → evaluation → production deployment → monitoring). This involves working with customers’ engineering teams at every stage of the customer journey, including sales, implementation, and expansion.
Deliver with velocity: turn vague objectives into clear specs and well-defined PoCs so we can rapidly ship well-tested services and outcomes for our customers.
Optimize and enhance AI/ML projects, contributing to the continuous improvement of our technical stack. This includes developing features and PRDs with other engineering and product orgs.
Own products and customer projects end-to-end, functioning as engineer, project manager, and product manager, with a focus on user empathy, project specification, and end-to-end execution.
Navigate ambiguity and exercise good judgment on tradeoffs and tools needed to solve problems, avoiding unnecessary complexity.
Demonstrate pride, ownership, and accountability for your work, expecting the same from your teammates.
REQUIREMENTS
Bachelor's, Master's, or Ph.D. degree in Computer Science, Engineering, Mathematics, or a related field.
1+ years of professional work experience in a fast-paced, high-growth environment.
Demonstrated experience with one or more general-purpose programming languages in a production-level environment, with a strong preference for Python.
Familiarity with AI/ML pipelines and the lifecycle of ML model development and deployment.
Strong communication skills, particularly on complex technical topics.
Experience in building or optimizing AI/ML projects is highly valued.
BENEFITS
Competitive compensation, including meaningful equity.
100% coverage of medical, dental, and vision insurance for employees and dependents.
Flexible PTO policy, including a company-wide Winter Break (our offices are closed from Christmas Eve to New Year's Day!).
Paid parental leave.
Fertility and family-building stipend through Carrot.
Company-facilitated 401(k).
Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.
Apply now to embark on a rewarding journey in shaping the future of AI! If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative and forward-thinking team, we would love to hear from you.
At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.
We are an Equal Opportunity Employer and will consider qualified applicants with criminal histories in a manner consistent with applicable law (for example, the requirements of the San Francisco Fair Chance Ordinance, where applicable).
2026-04-22 14:51
Applied AI Inference Engineer
Baseten
101-200
$165,000 – $330,000
United States
Full-time
Remote
false
ABOUT BASETEN
Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to for shipping AI products.
THE ROLE
As an Applied AI Inference Engineer at Baseten, you will partner directly with customers to architect, build, and deploy high-scale production AI applications on Baseten’s platform. You’ll own the journey with customers from initial exploration to production deployment, translating ambiguous business goals into reliable, observable services with clear quality, latency, and cost outcomes.
This role is a great fit for entrepreneurial engineers who want a front-row view into how modern companies adopt AI at scale and who enjoy working across product, software development, performance engineering, and customer-facing implementations.
To be clear, this is an engineering role with hands-on coding and software development that also includes aspects of product management, technical customer success, and pre-sales solution engineering.
EXAMPLE INITIATIVES
Take a look at these blog posts written by members of our Forward Deployed Engineering team:
Forward Deployed Engineering on the frontier of AI
The fastest, most accurate Whisper transcription
Deploy production-ready model servers from Docker images
Deploy custom ComfyUI workflows as APIs
RESPONSIBILITIES
Develop and maintain software systems and product features using one or more general-purpose programming languages in a production-level environment, with a preference for Python due to its relevance in ML projects.
Drive customer impact by designing, implementing, and deploying Baseten solutions end-to-end (problem framing → evaluation → production deployment → monitoring). This involves working with customers’ engineering teams at every stage of the customer journey, including sales, implementation, and expansion.
Deliver with velocity: turn vague objectives into clear specs and well-defined PoCs so we can rapidly ship well-tested services and outcomes for our customers.
Optimize and enhance AI/ML projects, contributing to the continuous improvement of our technical stack. This includes developing features and PRDs with other engineering and product orgs.
Own products and customer projects end-to-end, functioning as engineer, project manager, and product manager, with a focus on user empathy, project specification, and end-to-end execution.
Navigate ambiguity and exercise good judgment on tradeoffs and tools needed to solve problems, avoiding unnecessary complexity.
Demonstrate pride, ownership, and accountability for your work, expecting the same from your teammates.
REQUIREMENTS
Bachelor's, Master's, or Ph.D. degree in Computer Science, Engineering, Mathematics, or a related field.
1+ years of professional work experience in a fast-paced, high-growth environment.
Demonstrated experience with one or more general-purpose programming languages in a production-level environment, with a strong preference for Python.
Familiarity with AI/ML pipelines and the lifecycle of ML model development and deployment.
Strong communication skills, particularly on complex technical topics.
Experience in building or optimizing AI/ML projects is highly valued.
BENEFITS
Competitive compensation, including meaningful equity.
100% coverage of medical, dental, and vision insurance for employees and dependents.
Flexible PTO policy, including a company-wide Winter Break (our offices are closed from Christmas Eve to New Year's Day!).
Paid parental leave.
Fertility and family-building stipend through Carrot.
Company-facilitated 401(k).
Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.
Apply now to embark on a rewarding journey in shaping the future of AI! If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative and forward-thinking team, we would love to hear from you.
At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.
We are an Equal Opportunity Employer and will consider qualified applicants with criminal histories in a manner consistent with applicable law (for example, the requirements of the San Francisco Fair Chance Ordinance, where applicable).
2026-04-22 14:51
Senior Software Engineer-Founding Engineer (Ayama)
AIFund
51-100
United States
Full-time
Remote
false
Ayama has partnered with Andrew Ng’s AI Fund and a Fortune 500 energy leader to use AI to reshape how $10+ trillion of heavy assets operate, are managed, and are maintained. With electrification driving rapid growth in energy and reshoring accelerating the build-out of U.S. electronics and IT manufacturing, the company plans to close critical gaps by raising asset uptime and output as well as staff productivity. Ayama is looking for a Senior Software Engineer to help drive the AI revolution into critical assets.
The senior engineer will work across the full stack, including backend, frontend, data pipelines, and AI systems. The problems span optimization, retrieval, and real-time decision support for field teams. This is a 100% hands-on role. You would be in the code and architecture all day, making decisions and shipping. We expect you to be AI-native and use modern AI development tools to move fast while making sound architectural choices.
This is a full-time hybrid role located in the Bay Area. There is no relocation package budgeted for this role.
Responsibilities for the Role Include:
Build and improve production RAG and LLM-based systems
Design and maintain data pipelines that integrate enterprise data sources at scale
Develop full-stack product features end-to-end, from Python backend through React/TypeScript frontend
Work on optimization and scheduling problems with real operational constraints
Build evaluation frameworks that measure whether AI systems are actually improving
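An evaluation framework of the kind named in the last responsibility can start as a fixed labeled test set plus a scoring loop, so "actually improving" becomes a measured comparison rather than an impression. The sketch below is a hedged toy: the systems and test cases are invented placeholders, not Ayama's stack.

```python
# Toy eval harness: score candidate systems against one fixed labeled test
# set and compare. Systems here are stand-in callables, not real models.

def exact_match_accuracy(system, test_set):
    """Fraction of cases where the system output equals the expected answer."""
    hits = sum(1 for query, expected in test_set if system(query) == expected)
    return hits / len(test_set)

# Invented test cases; a real suite would hold curated domain examples.
test_set = [("2+2", "4"), ("capital of France", "Paris"), ("3*3", "9")]

baseline = lambda q: {"2+2": "4"}.get(q, "unknown")
candidate = lambda q: {"2+2": "4", "capital of France": "Paris"}.get(q, "unknown")

base_acc = exact_match_accuracy(baseline, test_set)
cand_acc = exact_match_accuracy(candidate, test_set)
improved = cand_acc > base_acc
```

Keeping the test set fixed across runs is the key design choice: it makes scores comparable release over release, which is what separates an evaluation framework from ad-hoc spot checks.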
Requirements For the Role Include:
5+ years of hands-on software engineering experience
Full-stack depth in Python (Django/FastAPI) and modern React/TypeScript
Production experience building RAG and LLM-based systems, not prototypes
Strong architectural judgment
Comfort with large-scale data processing and cloud infrastructure (GCP preferred)
Fluency with AI-assisted development tools
Nice To Haves Include:
Experience with ERP systems, especially SAP (maintenance workflows, master data, integrations)
Operations research or constraint programming background
Data transformation and orchestration tools (dbt, Prefect, Airflow)
Why Join Ayama:
Exclusive Ecosystem: Unparalleled access to AI Fund’s experts and Dr. Andrew Ng’s network of AI luminaries.
Real-World Impact: We are closing critical performance gaps in energy supply while demand driven by global electrification skyrockets.
Growth Path: Opportunity to evolve from a lead individual contributor to a team leader as the company scales.
2026-04-22 13:51
Staff Engineer, Applications (R4828)
Shield AI
1001-5000
Taiwan
Contractor
Remote
false
Founded in 2015, Shield AI is a venture-backed deep-tech company with the mission of protecting service members and civilians with intelligent systems. Its products include the V-BAT and X-BAT aircraft, Hivemind Enterprise, and the Hivemind Vision product lines. With offices and facilities across the U.S., Europe, the Middle East, and the Asia-Pacific, Shield AI’s technology actively supports operations worldwide. For more information, visit www.shield.ai. Follow Shield AI on LinkedIn, X, Instagram, and YouTube.
Job Description:
Our Applications Engineers are highly technical, customer-facing problem solvers who play a critical role in deploying Shield AI’s Hivemind software in real-world environments.
In this role, you will work closely with customers to understand their requirements, provide technical expertise and customer support during deployment, and ensure successful integration of Hivemind. You’ll also collaborate internally with engineering teams to develop and test new autonomy capabilities.
This role is travel-intensive, with frequent trips (often international, and sometimes lasting multiple weeks) to work directly alongside customers on-site.
What you'll do:
Deploy with customers on site globally (~50% travel) to support software integration and development activities.
Become an expert user of the Hivemind enterprise software stack and its various autonomy modules.
Provide technical support and training to customers on use of Hivemind.
Develop AI & Autonomy applications using the Shield AI enterprise software development kit.
Assist the sales team in pre-sales activities, e.g., demos, conferences, and immersions.
Assist in post-sales deployment and integration of Shield AI enterprise software products.
Develop and maintain technical documentation and training materials.
Help customers debug software/API integration issues.
Collaborate with the product engineering team to address customer feedback and improve products.
Act as a technical leader across engagements by elevating team performance, driving execution across cross-functional teams, and ensuring successful delivery in complex environments.
Required qualifications:
Bachelor’s degree in Engineering, Computer Science, or a related field, and 7+ years of industry experience, OR
Master’s degree in Engineering, Computer Science, or a related field, and 5+ years of industry experience.
Strong technical background in software engineering.
Strong proficiency in writing modern C++ code.
Excellent problem-solving and analytical skills.
Strong communication and interpersonal skills.
Preferred qualifications:
Experience in the defense, aviation, or robotics industry.
Prior experience as a customer-facing solutions engineer, application engineer, or sales engineer.
Experience operating in a fast-paced, startup-like environment.
Advanced technical degree, especially in robotics or autonomy-related fields.
Our international teammates receive a comprehensive total rewards package aligned to your country office location. For full details on compensation and benefits, please consult your talent acquisition partner.
2026-04-22 13:06
Engineer II, Applications (R4789)
Shield AI
1001-5000
Taiwan
Contractor
Remote
false
Founded in 2015, Shield AI is a venture-backed deep-tech company with the mission of protecting service members and civilians with intelligent systems. Its products include the V-BAT and X-BAT aircraft, Hivemind Enterprise, and the Hivemind Vision product lines. With offices and facilities across the U.S., Europe, the Middle East, and the Asia-Pacific, Shield AI’s technology actively supports operations worldwide. For more information, visit www.shield.ai. Follow Shield AI on LinkedIn, X, Instagram, and YouTube.
Job Description:
Our Applications Engineers are highly technical, customer-facing problem solvers who play a critical role in deploying Shield AI’s Hivemind software in real-world environments.
In this role, you will work closely with customers to understand their requirements, provide technical expertise and customer support during deployment, and ensure successful integration of Hivemind. You’ll also collaborate internally with engineering teams to develop and test new autonomy capabilities.
This role is travel-intensive, with frequent trips (often international, and sometimes lasting multiple weeks) to work directly alongside customers on-site.
What you'll do:
Deploy with customers on site globally (~50% travel) to support software integration and development activities.
Become an expert user of the Hivemind enterprise software stack and its various autonomy modules.
Provide technical support and training to customers on use of Hivemind.
Develop AI & Autonomy applications using the Shield AI enterprise software development kit.
Assist the sales team in pre-sales activities, e.g., demos, conferences, and immersions.
Assist in post-sales deployment and integration of Shield AI enterprise software products.
Develop and maintain technical documentation and training materials.
Help customers debug software/API integration issues.
Collaborate with the product engineering team to address customer feedback and improve products.
Required qualifications:
Bachelor’s degree in Engineering, Computer Science, or a related field, and 3+ years of industry experience, OR
Master’s degree in Engineering, Computer Science, or a related field, and 1+ years of industry experience.
Strong technical background in software engineering.
Strong proficiency in writing modern C++ code.
Excellent problem-solving and analytical skills.
Strong communication and interpersonal skills.
Preferred qualifications:
Experience in the defense, aviation, or robotics industry.
Prior experience as a customer-facing solutions engineer, application engineer, or sales engineer.
Experience operating in a fast-paced, startup-like environment.
Advanced technical degree, especially in robotics- or autonomy-related fields.
#LI-FB1
#LB
Our international teammates receive a comprehensive total rewards package aligned to your country office location. For full details on compensation and benefits, please consult your talent acquisition partner.
2026-04-22 13:06
Research Engineer, Data Infrastructure
Mistral AI
501-1000
United States
Full-time
Remote
false
About Mistral
At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.
We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise as well as personal needs. Our offerings include Le Chat, La Plateforme, Mistral Code and Mistral Compute - a suite that brings frontier intelligence to end-users.
We are a dynamic, collaborative team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation. Our teams are distributed between France, USA, UK, Germany and Singapore. We are creative, low-ego and team-spirited.
Join us to be part of a pioneering company shaping the future of AI. Together, we can make a meaningful impact. See more about our culture on https://mistral.ai/careers.
Role Summary
This role focuses on building and operating the next generation of data infrastructure at Mistral AI. You will be a core contributor to our evolution, helping us design and scale massive compute fleets and storage systems designed for high performance and scalability.
You will help us move toward a future of decoupled control and data planes, scaling big data compute and storage platforms while ensuring secure and governed data access for MLOps and research. You will take full lifecycle ownership: from architecting the migration away from legacy orchestrators to implementing production-grade pipelines and participating in on-call rotations for critical training jobs.
What will you do
• Build & Scale: Help us reach our goal of operating massive distributed compute and storage systems
• Global Orchestration: Architect and maintain multi-cluster orchestration layers to optimize workload placement across diverse hardware and regions.
• Design Future-Proof Storage: Architect our transition to modern storage formats to handle fine-tuning datasets at a scale that anticipates exabyte growth.
• Platform Engineering: Contribute to the development of our internal training platform, ensuring seamless model training and fine-tuning capabilities across Kubernetes and SLURM based environments.
• Metadata & Lineage: Implement and manage systems to provide clear visibility and lineage as our data and model pipelines grow in complexity.
• Operational Excellence: Use modern deployment workflows to manage cloud-native deployments, ensuring our data platform can scale.
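The metadata-and-lineage bullet above can be illustrated with a tiny in-memory lineage store. This is a hedged sketch only: the class and dataset names (`LineageRecord`, `LineageStore`, `datasets/sft_v1`) are hypothetical, not Mistral's actual systems.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    """One edge in a lineage graph: a job reads inputs and produces an output."""
    job_name: str
    inputs: list
    output: str
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class LineageStore:
    """In-memory lineage index; a real system would persist and query this."""
    def __init__(self):
        self.records = []

    def track(self, job_name, inputs, output):
        self.records.append(LineageRecord(job_name, list(inputs), output))

    def upstream(self, dataset):
        """Return every dataset that (transitively) feeds `dataset`."""
        parents = set()
        frontier = [dataset]
        while frontier:
            current = frontier.pop()
            for rec in self.records:
                if rec.output == current:
                    for src in rec.inputs:
                        if src not in parents:
                            parents.add(src)
                            frontier.append(src)
        return parents

store = LineageStore()
store.track("clean_logs", ["raw/logs"], "clean/logs")
store.track("build_sft_set", ["clean/logs", "curated/prompts"], "datasets/sft_v1")
print(sorted(store.upstream("datasets/sft_v1")))
# ['clean/logs', 'curated/prompts', 'raw/logs']
```

The transitive walk is the point: given any training dataset, you can answer "which raw sources touched this model?" as pipelines grow in complexity.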
About you
• Have 4+ years of experience in Data Infrastructure, MLOps, or Infrastructure Engineering.
• Have experience or a strong interest in supporting foundational compute and storage platforms.
• Are proficient in Python and enjoy solving the "brittle data lake" problem with modern, columnar storage standards.
• Are well-versed in Kubernetes-native tooling and excited to debug large-scale distributed systems across multi-cluster environments.
• Take pride in building and operating scalable, reliable, and secure systems from the ground up.
• Are comfortable with ambiguity and the challenges of building high-scale infrastructure in a rapid-growth AI environment.
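The "brittle data lake" point above is about moving loosely structured row dumps onto columnar storage standards (e.g. Parquet). As a rough stdlib-only illustration (not an actual Parquet implementation, and with invented field names), column projection is cheap when data is laid out by field:

```python
# Toy contrast between row-oriented and columnar layouts. Real systems use
# formats like Parquet; this sketch only shows why reading one column is
# cheap in a columnar layout.

rows = [  # row-oriented: one dict per record
    {"prompt": "hi", "response": "hello", "score": 0.9},
    {"prompt": "sum 2+2", "response": "4", "score": 1.0},
    {"prompt": "capital of FR", "response": "Paris", "score": 0.8},
]

# Columnar: one list per field. Scanning a single column touches
# only that list, instead of deserializing every record.
columns = {key: [row[key] for row in rows] for key in rows[0]}

def project(table, names):
    """Read only the requested columns (the cheap operation columnar formats optimize)."""
    return {name: table[name] for name in names}

scores = project(columns, ["score"])["score"]
print(sum(scores) / len(scores))  # mean score, computed without touching prompts or responses
```

A schema-enforcing columnar format adds typed columns and statistics on top of this layout, which is what makes large fine-tuning datasets queryable rather than brittle.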
What we offer
💰 Competitive salary and equity.
🚑 Healthcare: Medical/Dental/Vision covered for you and your family.
👴🏻 Pension: 401(k) (6% matching)
🏝️ PTO: 18 days
🚗 Transportation: Reimbursement of office parking charges, or $120/month for public transport
🏀 Sport: $120/month reimbursement for gym membership
🥕 Meal stipend: $400 monthly allowance for meals (solution might evolve as we grow bigger)
🌎 Visa sponsorship
🤝 Coaching: we offer BetterUp coaching on a voluntary basis
By applying, you agree to our Applicant Privacy Policy.
2026-04-22 13:06
Senior Engineer, Applications (R4792)
Shield AI
1001-5000
Taiwan
Full-time
Remote
false
Founded in 2015, Shield AI is a venture-backed deep-tech company with the mission of protecting service members and civilians with intelligent systems. Its products include the V-BAT and X-BAT aircraft, Hivemind Enterprise, and the Hivemind Vision product lines. With offices and facilities across the U.S., Europe, the Middle East, and the Asia-Pacific, Shield AI’s technology actively supports operations worldwide. For more information, visit www.shield.ai. Follow Shield AI on LinkedIn, X, Instagram, and YouTube.
Job Description:
Our Applications Engineers are highly technical, customer-facing problem solvers who play a critical role in deploying Shield AI’s Hivemind software in real-world environments.
In this role, you will work closely with customers to understand their requirements, provide technical expertise and customer support during deployment, and ensure successful integration of Hivemind. You’ll also collaborate internally with engineering teams to develop and test new autonomy capabilities.
This role is travel-intensive, with frequent trips (often international, and sometimes lasting multiple weeks) to work directly alongside customers on-site.
What you'll do:
Deploy with customers on site globally (~50% travel) to support software integration and development activities.
Become an expert user of the Hivemind enterprise software stack and its various autonomy modules.
Provide technical support and training to customers on use of Hivemind.
Develop AI & Autonomy applications using the Shield AI enterprise software development kit.
Assist the sales team in pre-sales activities, e.g., demos, conferences, and immersions.
Assist in post-sales deployment and integration of Shield AI enterprise software products.
Develop and maintain technical documentation and training materials.
Help customers debug software/API integration issues.
Collaborate with the product engineering team to address customer feedback and improve products.
Required qualifications:
Bachelor’s degree in Engineering, Computer Science, or a related field, and 5+ years of industry experience, OR
Master’s degree in Engineering, Computer Science, or a related field, and 3+ years of industry experience.
Strong technical background in software engineering.
Strong proficiency in writing modern C++ code.
Excellent problem-solving and analytical skills.
Strong communication and interpersonal skills.
Preferred qualifications:
Experience in the defense, aviation, or robotics industry.
Prior experience as a customer-facing solutions engineer, application engineer, or sales engineer.
Experience operating in a fast-paced, startup-like environment.
Advanced technical degree, especially in robotics- or autonomy-related fields.
#LI-FB1
#LB
Our international teammates receive a comprehensive total rewards package aligned to your country office location. For full details on compensation and benefits, please consult your talent acquisition partner.
2026-04-22 13:05
Graduation Internship - AI Research - Paris
H Company
201-500
France
Full-time
Remote
false
Graduation Internship - AI Research - Paris
About H: H Company is a next-generation AI research and product company pioneering the future of autonomous, agentic AI. Founded to build intelligence that acts, H Company is creating the foundational infrastructure for autonomous AI systems that drive real-world outcomes across industries.
We have Graduation/Final-year Internships available in our Research lab. Our Models team builds the foundational models that power our cutting-edge agentic technology. Our Data Research team advances multimodal intelligence, building large-scale models that operate across diverse input spaces. Our Agent team defines new learning algorithms and agent paradigms to push the frontiers of agentic systems.
We are excited to tell you more!
Location: Paris
What We Offer:
Join the exciting journey of shaping the future of AI, and be part of the early days of one of the hottest AI startups.
Collaborate with a fun, dynamic, and multicultural team, working alongside world-class AI talent in a highly collaborative environment.
Unlock opportunities for professional growth, continuous learning, and career development.
If you want to change the status quo in AI, join us.
2026-04-22 11:20
Member of Technical Staff (Applied AI Engineer)
Vibecode
1-10
United States
Full-time
Remote
false
At Vibecode, we want to bring the power of AI to the next 100 million people. If you get deeply excited about being able to take the most wonderful technology of our generation and spread it to the masses through inspiration and education, then Vibecode is the place for you.
Since the advent of AI-assisted coding, there are no hard technical requirements anymore, but here are some interesting things we are working on:
Custom memory systems that grow and scale as users use the platform
A custom agent that is at the cutting edge
Bare-metal infrastructure and scalability with concurrency and a high degree of reliability
Cost and output optimization with multiple models
Evaluating LLM performance on a wide domain of tasks
You will be a good fit if you meet all of these criteria:
Deeply curious -- you are fascinated by the world of technology and often find yourself tinkering late into the night or early into the morning
Hard worker -- working long hours comes naturally to you and you gain pleasure from working in an intense team that pushes you to your limits
Low ego / humble -- no problem is too small, and you are always open to others' ideas
2026-04-22 10:36
Member of Technical Staff (Growth & Content)
Vibecode
1-10
United States
Full-time
Remote
false
At Vibecode, we want to bring the power of AI to the next 100 million people. If you get deeply excited about being able to take the most wonderful technology of our generation and spread it to the masses through inspiration and education, then Vibecode is the place for you.
Since the advent of AI-assisted coding, there are no hard technical requirements anymore, but here are some interesting things we are working on:
Researching the hot new thing and running deep evals on it
Having early access to new agents, models, and runtimes and being able to fine-tune our system to them
Inspiring and educating the next generation to use AI via high-quality long-form content on YouTube, X, Instagram, and LinkedIn
Building multiple projects with a combination of AI tools that are unique and exciting
Having good taste and making things look good effortlessly
You will be a good fit if you meet all of these criteria:
Deeply curious -- you are fascinated by the world of technology and often find yourself tinkering late into the night or early into the morning
Hard worker -- working long hours comes naturally to you and you gain pleasure from working in an intense team that pushes you to your limits
Low ego / humble -- no problem is too small, and you are always open to others' ideas
2026-04-22 10:35
Senior Software Engineer, Backend
Harvey
501-1000
India
Full-time
Remote
false
Why Harvey
At Harvey, we’re transforming how legal and professional services operate — not incrementally, but end-to-end. By combining frontier agentic AI, an enterprise-grade platform, and deep domain expertise, we’re reshaping how critical knowledge work gets done for decades to come.
This is a rare chance to help build a generational company at a true inflection point. With 1000+ customers in 60+ countries, strong product-market fit, and world-class investor support, we’re scaling fast and defining a new category in real time. The work is ambitious, the bar is high, and the opportunity for growth — personal, professional, and financial — is unmatched.
Our team is sharp, motivated, and deeply committed to the mission. We move fast, operate with intensity, and take real ownership of the problems we tackle — from early thinking to long-term outcomes. We stay close to our customers — from leadership to engineers — and work together to solve real problems with urgency and care. If you thrive in ambiguity, push for excellence, and want to help shape the future of work alongside others who raise the bar, we invite you to build with us.
At Harvey, the future of professional services is being written today — and we’re just getting started.
Role Overview
As a Product Backend Engineer on Harvey’s product engineering team, you will design and operate the backend systems that turn cutting-edge AI capabilities into seamless, dependable product experiences. You’ll build secure, multi-tenant services, orchestrate interactions with LLMs and agentic tools, and help define the backend architecture that powers Harvey’s rapidly expanding product surface.
Your work will have immediate business and customer impact as you ship the foundational systems behind our next generation of AI products.
What You'll Do
Collaborate closely with Product to prioritize customer-focused work and deliver reliable features quickly in a fast-moving environment.
Design and own backend services and APIs that power Harvey’s web applications, workflows, and integrations.
Model and manage data in Postgres and related data stores to support low-latency, reliable user experiences.
Build secure, multi-tenant, permissions-aware systems with appropriate auditing for enterprise and government customers.
Implement backend features and workflows that use LLMs and agentic tools, orchestrating calls to our AI systems from robust, well-structured services.
Collaborate with frontend, product, and design partners to shape solutions, define clear API contracts, and ship features end-to-end.
Add meaningful logging, metrics, and tracing so services are observable, debuggable, and ready for on-call ownership.
Improve performance and scalability by profiling bottlenecks, tuning queries, and refining service boundaries as usage grows.
Participate in code reviews, technical design discussions, and an on-call rotation for the services you own.
What You Have
5+ years of backend engineering experience building and operating production web applications or SaaS products.
Track record of building fast-growing SaaS products by leveraging PWA technologies.
Track record of shipping highly intuitive products, with strong attention to detail.
Strong programming skills in Python and experience with a modern web framework (e.g., FastAPI or Flask).
Hands-on experience with relational databases (e.g., Postgres).
Solid understanding of API design, authentication/authorization, background job patterns, and robust error handling.
Experience building for cloud infrastructure such as Azure, AWS, or GCP.
Comfort operating services in production using logs, metrics, dashboards, and alerts, including participation in an on-call rotation.
Additional Information
Work eligibility: Must be authorized to work in India. Visa sponsorship is not available for this role.
Depending on your location, an Applicant Privacy Notice may apply to you. You can find all of our Applicant Privacy Notices [here].
#LI-KV1
Harvey is an equal opportunity employer and does not discriminate on the basis of race, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition, or any other basis protected by law.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made by emailing accommodations@harvey.ai.
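The "secure, multi-tenant, permissions-aware" data access this role describes can be sketched in miniature. This is an illustrative pattern only, with the stdlib `sqlite3` standing in for Postgres and an invented `documents` schema; it is not Harvey's actual implementation.

```python
import sqlite3

# Minimal sketch of tenant-scoped data access: every query is filtered by
# tenant_id so one customer can never read another customer's rows.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE documents (id INTEGER PRIMARY KEY, tenant_id TEXT, title TEXT)"
)
conn.executemany(
    "INSERT INTO documents (tenant_id, title) VALUES (?, ?)",
    [("acme", "NDA draft"), ("acme", "Merger memo"), ("globex", "Lease review")],
)

def list_documents(conn, tenant_id):
    """Tenant-scoped read: the WHERE clause is applied unconditionally, and
    parameters are bound (never string-interpolated) to prevent injection."""
    cursor = conn.execute(
        "SELECT title FROM documents WHERE tenant_id = ? ORDER BY id",
        (tenant_id,),
    )
    return [title for (title,) in cursor]

print(list_documents(conn, "acme"))    # ['NDA draft', 'Merger memo']
print(list_documents(conn, "globex"))  # ['Lease review']
```

In a production Postgres service the same idea is usually enforced centrally (e.g. via row-level security or a repository layer that injects the tenant filter) rather than repeated in each query, so a forgotten WHERE clause cannot leak data across tenants.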
2026-04-22 3:20
