AI Factory, Value Engineer
Responsibilities include translating business requirements into requirements for AI/ML models; preparing data to train and evaluate AI/ML/DL models; building models with state-of-the-art algorithms, especially transformers; testing and evaluating models and benchmarking their quality; publishing models and datasets; containerizing models and deploying them to production; working with customers and internal employees to refine model quality; establishing continuous-learning pipelines based on online or transfer learning; and building and deploying containerized applications in cloud or on-premises environments.
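The continuous-learning pipeline mentioned above can be sketched as a single online-learning update applied to a stream of examples. This is a minimal illustration under the assumption of a plain linear model trained with per-example SGD; all names and the synthetic data are hypothetical, not the team's actual pipeline:

```python
import numpy as np

def sgd_update(w, x, y, lr=0.05):
    """One online-learning step for a linear model: move w against the
    gradient of the squared error on a single streamed (x, y) pair."""
    grad = 2.0 * (x @ w - y) * x
    return w - lr * grad

# Simulated stream: examples arrive one at a time, as in a continuous-learning setup.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])   # hypothetical "ground truth" the stream follows
w = np.zeros(2)
for _ in range(500):
    x = rng.normal(size=2)
    w = sgd_update(w, x, x @ true_w)
```

After a few hundred streamed examples `w` converges toward `true_w`; in a real pipeline the same loop shape applies, with the model, loss, and data source swapped for production ones.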
Staff Software Engineer, RLE
Define and drive architecture for scalable, extensible Reinforcement Learning Environments (RLE) systems and data pipelines. Lead development of platform capabilities enabling rapid domain creation. Partner with Research, Product, and Operations to shape strategy and execution. Set standards for reliability, observability, performance, and data quality. Mentor engineers and elevate engineering excellence across the team. Identify and solve systemic bottlenecks in scaling environments and data generation.
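RL-environment platforms of the kind described above are typically built around a small reset/step contract that every new domain implements, which is what makes rapid domain creation possible. A minimal sketch under that assumption — the `GuessGame` domain and all names are invented for illustration:

```python
import random

class Env:
    """Minimal RL-environment interface: reset() -> observation,
    step(action) -> (observation, reward, done)."""
    def reset(self):
        raise NotImplementedError
    def step(self, action):
        raise NotImplementedError

class GuessGame(Env):
    """Toy domain: guess a hidden integer. Reward is negative distance
    to the target; the episode ends on an exact guess."""
    def __init__(self, lo=0, hi=10, seed=None):
        self.lo, self.hi = lo, hi
        self.rng = random.Random(seed)
    def reset(self):
        self.target = self.rng.randint(self.lo, self.hi)
        return (self.lo, self.hi)              # observation: the search bounds
    def step(self, action):
        reward = -abs(action - self.target)
        done = action == self.target
        return (self.lo, self.hi), reward, done
```

New domains then reduce to subclassing `Env`, which is the kind of platform capability the posting refers to.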
Research Engineer, Data Infrastructure
This role builds and operates the next generation of data infrastructure at Mistral AI, as a core contributor to the design and scaling of massive compute fleets and storage systems. Responsibilities include architecting and maintaining multi-cluster orchestration layers that optimize workload placement across diverse hardware and regions; designing storage systems that anticipate exabyte-scale growth; contributing to the internal training platform that supports model training and fine-tuning across Kubernetes and SLURM environments; implementing and managing metadata and lineage systems that provide visibility and traceability for data and model pipelines; and managing cloud-native deployments with modern workflows to ensure scalability and operational excellence. The role carries full lifecycle ownership, from migrating off legacy orchestrators to running production-grade pipelines and joining on-call rotations for critical training jobs.
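A metadata/lineage system of the kind mentioned above usually boils down to recording, for each artifact, the inputs and parameters that produced it, and deriving a content-addressed identifier from them. A minimal sketch under that assumption — the record shape and field names are hypothetical, not Mistral's actual schema:

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field

@dataclass
class LineageRecord:
    """One node in a data/model lineage graph: an artifact plus the
    upstream inputs and parameters that produced it."""
    name: str
    inputs: list = field(default_factory=list)   # names of upstream artifacts
    params: dict = field(default_factory=dict)   # e.g. training hyperparameters

    def fingerprint(self) -> str:
        """Content-addressed ID: identical (name, inputs, params) always
        map to the same artifact version, enabling traceability and dedup."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()[:12]

raw = LineageRecord("raw_corpus")
model = LineageRecord("model_v1", inputs=[raw.name],
                      params={"lr": 1e-4, "steps": 10_000})
```

Because the ID is derived from content rather than assigned, re-running the same pipeline yields the same fingerprint, which is what makes lineage queries ("which models were trained on this dataset?") reliable.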
Defense / Edge Tech Lead
As the Defense / Edge Tech Lead, you will own the technical direction for deploying Deepgram's speech-to-text (STT) and text-to-speech (TTS) models to edge and embedded environments. You will lead the technical strategy for edge deployment, defining the architecture for on-device, on-premises, and air-gapped inference across diverse hardware targets. You will optimize models for edge and embedded platforms through quantization, pruning, distillation, and runtime optimization to meet latency, memory, and power constraints. You will partner with hardware vendors such as Qualcomm and Motorola on SDK integration, performance benchmarking, and joint go-to-market efforts, and support defense customers through AWS NatSec partnerships by translating mission requirements into engineering deliverables. You will design and build edge runtime infrastructure such as model packaging, deployment pipelines, OTA update mechanisms, and telemetry for devices in low- or no-connectivity environments, and harden deployments for security-sensitive environments with secure boot chains, encrypted model storage, tamper detection, and audit logging. You will benchmark and validate performance across hardware platforms, establishing test suites for latency, accuracy, power consumption, and resource utilization, and collaborate with the Research and Engine teams to steer model architectures toward edge-friendly designs. Finally, you will provide technical leadership to cross-functional teams on defense and edge projects, set engineering standards, review designs, and mentor engineers on systems and optimization practices.
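Of the optimization techniques listed above, quantization is the most mechanical: weights are mapped to low-precision integers plus a scale factor, cutting memory roughly 4x for int8. A minimal sketch of symmetric per-tensor int8 post-training quantization, using NumPy on synthetic weights rather than any actual Deepgram model:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w ~= scale * q, q in [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the int8 codes."""
    return q.astype(np.float32) * scale

# Synthetic weight matrix standing in for one layer of an STT/TTS model.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(64, 64)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
err = np.abs(w - w_hat).max()   # rounding error is bounded by scale / 2
```

Real edge pipelines add per-channel scales, activation calibration, and quantization-aware fine-tuning on top of this basic scheme, but the memory and error trade-off is already visible here.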
Chemistry & Python Expert - Freelance AI Trainer
Contributors design original computational chemistry problems that simulate real chemistry research workflows and require Python programming to solve, using libraries such as NumPy, SciPy, and domain-specific chemistry packages. They ensure problems are computationally intensive enough that solving them by hand would take days or weeks; develop problems requiring non-trivial reasoning chains in physical chemistry, quantum chemistry, and molecular modeling; base problems on real research challenges or practical applications from chemistry practice; verify solutions in Python with standard computational chemistry approaches; and document problem statements clearly while providing verified correct answers.
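A problem in the style described above might ask for the lowest vibrational levels of a model potential, which is easy to verify numerically but impractical by hand. A hypothetical example (invented here for illustration, not an actual task from the role): solve the 1-D Schrödinger equation for a harmonic oscillator on a grid and check the spectrum against the analytic levels:

```python
import numpy as np

# Atomic units with hbar = m = omega = 1, so the exact levels are E_n = n + 1/2.
n, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
V = 0.5 * x**2                      # harmonic-oscillator potential

# Hamiltonian: kinetic term -1/2 d^2/dx^2 via a three-point stencil, plus V on the diagonal.
H = (np.diag(np.full(n, 1.0 / dx**2) + V)
     + np.diag(np.full(n - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(n - 1, -0.5 / dx**2), -1))

E = np.linalg.eigvalsh(H)[:3]       # lowest three levels, ~[0.5, 1.5, 2.5]
```

The verified correct answer is the analytic spectrum, so a grader can check submissions automatically; tightening `dx` demonstrates the expected second-order convergence of the stencil.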
Mathematics & Python Expert - Freelance AI Trainer
Contributors design original computational mathematics problems that simulate real mathematical research workflows and require Python programming to solve (using NumPy, SciPy, and SymPy); ensure problems are computationally intensive enough that solving them by hand would take days or weeks; develop problems requiring non-trivial reasoning chains in areas like number theory, combinatorics, graph theory, and numerical analysis; base problems on real research challenges or practical applications from mathematical practice; verify solutions using Python with standard mathematical libraries; and document problem statements clearly while providing verified correct answers.
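A number-theory problem in this style might be: "Among the primes below 10^5, how many are congruent to 1 (mod 4) versus 3 (mod 4)?" — trivial to verify in Python with SymPy, tedious to settle by hand. This is a hypothetical example invented for illustration, not an actual task from the role:

```python
import sympy

# Classify every odd prime below 10**5 by its residue mod 4.
# (The prime 2 is the only prime in neither class.)
count1 = 0
count3 = 0
for p in sympy.primerange(3, 10**5):
    if p % 4 == 1:
        count1 += 1
    else:
        count3 += 1
```

Since pi(10^5) = 9592, the two counts must sum to 9591, which gives the grader an internal consistency check alongside the verified answer.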
Applied AI, Forward Deployed Machine Learning Engineer - Palo Alto
The Applied AI Engineer at Mistral AI is responsible for facilitating the adoption of Mistral AI products among customers, managing daily customer relations involving multiple stakeholders including CEOs, CTOs, data scientists, and software engineers. They onboard customers on products and APIs, provide guidance on prompting, evaluation, and fine-tuning, and ensure the best production integration with back-end and front-end interfaces. They work on state-of-the-art GenAI applications across consumer products and industrial use cases, help deploy production use cases with significant business impact, collaborate with researchers, AI engineers, and product engineers on complex customer projects involving fine-tuning and LLM applications, and contribute to open source codebases related to inference and fine-tuning. Additionally, they participate in pre-sales calls to understand potential clients' needs and challenges, provide technical guidance on products, explain Mistral technologies to various stakeholders, and collaborate with the product and science teams to continuously improve product and model capabilities based on customer feedback.
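Much of the onboarding work described above starts from showing a customer the shape of a chat-completions request. A minimal sketch assuming the common OpenAI-style `/v1/chat/completions` interface; the endpoint, model name, and parameters here are illustrative assumptions, and the request is built but deliberately not sent:

```python
import json

# Illustrative endpoint; a real integration would confirm it against current API docs.
API_URL = "https://api.mistral.ai/v1/chat/completions"

payload = {
    "model": "mistral-small-latest",   # model name is an assumption
    "messages": [
        {"role": "system", "content": "You answer in one short sentence."},
        {"role": "user", "content": "What does fine-tuning change in a model?"},
    ],
    "temperature": 0.2,                # low temperature for reproducible evaluation
}
body = json.dumps(payload)
# The request itself would be POSTed with an "Authorization: Bearer <API key>" header.
```

The same payload shape underpins the prompting and evaluation guidance the role provides: system messages pin behaviour, and a low temperature keeps evaluation runs comparable.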
