Full-stack Developer (Full-Time/Intern) - Shanghai
As a Full-Stack Engineer at Flowith, you will independently or collaboratively lead the full-stack development of Flowith's core modules, working across front-end and back-end boundaries to deliver highly available, scalable system code. You will integrate advanced AI algorithms and complex models deeply into the product flow to create intelligent interactive experiences, and work closely with product managers, designers, and AI engineers in a creative environment to implement innovative AI concepts. You will automate deployments and manage continuous integration on mainstream cloud infrastructure while monitoring and optimizing system performance and resource usage. Additionally, you will participate in the design and evolution of the core architecture, conduct in-depth code reviews, and help build up reusable technical components and best practices to elevate the team's engineering standards.
Head of Product, AI
Own the end-to-end AI product strategy, grounded in technical feasibility and real-world constraints. Translate model capabilities, data limitations, and evaluation results into clear product decisions. Make hard trade-offs across quality, latency, cost, reliability, and user experience. Work daily with ML, backend, and mobile engineers on design, evaluation, and iteration. Define success metrics and feedback loops across offline evaluation, online experiments, and human feedback. Drive execution with clear specifications, risk awareness, and disciplined prioritization. Ensure AI features ship quickly, safely, and reliably into production. Own AI product quality across UX, correctness, and outcomes.
Senior ML Operations (MLOps) Engineer
The Senior ML Operations (MLOps) Engineer at Eight Sleep is responsible for introducing and implementing cutting-edge ML technologies, owning the design and operation of robust ML infrastructure including scalable data, model, and deployment pipelines to ensure reliable model delivery to production. They collaborate cross-functionally with R&D, firmware, data, and backend teams to ensure reliable and scalable ML inference on Pods. They optimize ML systems for cost, scalability, and performance across training and inference, and develop tooling, microservices, and frameworks to streamline data processing, experimentation, and deployment. The role requires effective communication in a remote work environment.
ML Inference Engineer, AI Voices
Work alongside machine learning researchers, engineers, and product managers to bring AI Voices to customers for diverse use cases. Deploy and operate the core ML inference workloads for the AI Voices serving pipeline. Introduce new techniques, tools, and architecture that improve performance, latency, throughput, and efficiency of deployed models. Build tools to identify bottlenecks and sources of instability and design and implement solutions to address the highest priority issues.
Backend Engineer, AI
Build and operate backend systems that serve AI-powered features in production; design inference pipelines, orchestration layers, and service boundaries around models; own production concerns including monitoring, logging, alerting, and incident response; optimize latency and throughput across inference, caching, batching, and streaming.
Safety Engineer
The AI Safety Engineer is responsible for designing and building scalable backend infrastructure for content moderation, abuse detection, and agent guardrails by deploying AI/ML models into production systems. They will architect robust APIs, data pipelines, and service architectures to support real-time and batch moderation workflows. The role includes implementing comprehensive monitoring, alerting, and observability systems, and establishing SLIs, SLOs, and performance benchmarks. The engineer will collaborate with ML engineers to translate research models into production-ready systems and integrate them across the product suite. Additionally, they will drive technical decisions and contribute to the vision for the safety roadmap, building next-generation platform guardrails for scale and precision.
AI / ML Solutions Engineer
The AI / ML Solutions Engineer at Anyscale is responsible for designing, implementing, and scaling machine learning and AI workloads using Ray and Anyscale directly with customers. This includes implementing production AI / ML workloads such as distributed model training, scalable inference and serving, and data preprocessing and feature pipelines. The role involves working hands-on with customer codebases to refactor or adapt existing workloads to Ray. The engineer advises customers on ML system architecture including application design for distributed execution, resource management and scaling strategies, and reliability, fault tolerance, and performance tuning. They guide customers through architectural and operational changes needed to adopt Ray and Anyscale effectively. Additionally, the engineer partners with customer MLE and MLOps teams to integrate Ray into existing platforms and workflows, supports CI/CD, monitoring, retraining, and operational best practices, and helps customers transition from experimentation to production-grade ML systems. They also enable customer teams through working sessions, design reviews, training delivery, and hands-on guidance, contribute feedback to product, engineering, and education teams, and help develop reference architectures, examples, and best practices based on real customer use cases.
AI Outcomes Manager
The AI Outcomes Manager will partner with executive sponsors and end users to identify high-impact use cases and turn them into measurable business outcomes on Glean. They will lead strategic reviews and advise customers on their AI roadmap to ensure maximum value from Glean's platform. The role involves translating business needs into clear problem statements, success metrics, and practical AI solutions while collaborating with Product and R&D to shape priorities. They will conduct discovery workshops, scope pilots, and guide rollouts to drive broad and deep adoption of the Glean platform. Additionally, they will design and build AI agents with and for customers, including rethinking and redesigning underlying business processes to maximize impact and usability. The manager will proactively identify expansion opportunities and drive engagement across teams and functions.
Senior AI Engineer - San Mateo, CA
The role involves training, evaluating, and monitoring new and improved LLMs and other algorithmic models. The engineer will test and deploy content moderation models in production and iterate based on real-world performance metrics and feedback loops. They are expected to develop medium to long-term vision for content understanding-related R&D, collaborating with management, product, policy & operations, and engineering teams. The position requires taking ownership of results delivered to customers, advocating for changes in approach where needed, and leading cross-functional execution.
Member of Technical Staff - ML Research Engineer; Multi-Modal - Audio
Invent and prototype new model architectures that optimize inference speed, including on edge devices; build and maintain evaluation suites for multimodal performance across a range of public and internal tasks; collaborate with the data and infrastructure teams to build scalable pipelines for ingesting and preprocessing large audio datasets; work with the infrastructure team to optimize model training across large-scale GPU clusters; contribute to publications, internal research documents, and thought leadership within the team and the broader ML community; collaborate with the applied research and business teams on client-specific use cases.
