Electrical Engineer & Python Expert - Freelance AI Trainer
Contributors may design rigorous electrical engineering problems reflecting professional practice, evaluate AI solutions for correctness, assumptions, and constraints, validate calculations or simulations using Python (NumPy, Pandas, SciPy), improve AI reasoning to align with industry-standard logic, and apply structured scoring criteria to multi-step problems.
Field Events Marketing Manager
Debug and fix issues in the platform and ship pull requests with fixes. Build internal tools and copilots powered by generative AI to enhance team productivity. Rapidly prototype proof-of-concepts for customer use cases. Collaborate across Engineering, Product, and Solutions teams to unblock customers and advance AI adoption.
Lazo - Head of Engineering
The Head of Engineering at Lazo is responsible for owning the technology strategy and roadmap aligned with business and product OKRs, defining the reference architecture for agentic systems, establishing security and compliance baselines including SOC 2 readiness, and presenting trade-offs, risks, and progress in leadership reviews.

They are also tasked with shipping backend services in Python/TypeScript, driving high-impact PRs and code reviews, orchestrating agents and toolchains, integrating external APIs and databases, and building robust pipelines. The role includes end-to-end DevOps responsibilities such as AWS/GCP management, containerization, IaC, CI/CD, observability, and on-call design, as well as reducing technical debt, improving latency and throughput, and managing infrastructure costs.

The individual defines SLOs and error budgets, reduces MTTR and change-failure rates, implements data access policies and secure data flows for AI features, and drives post-mortems and preventive engineering practices. They hire and mentor engineers, set performance scorecards and integrated operating systems, foster a culture of thoughtful trade-offs and fast feedback, partner with Product and AI teams to turn customer problems into scalable solutions, collaborate with Ops, Growth, and Customer teams on reliability and launch readiness, and manage vendors while evaluating build-vs-buy decisions.
AceUp - Lead ML Engineer (Generative AI & LLM Focus)
Architect conversational agents that are stateful, context-aware, and capable of maintaining long-running coherent dialogues to handle complex reasoning tasks. Build retrieval-augmented generation (RAG) pipelines that ground large language model (LLM) responses in proprietary data to ensure high accuracy and minimize hallucinations. Lead the development of natural language processing (NLP) pipelines to extract structured insights from varied unstructured data sources, initially text and eventually audio. Implement advanced personalization layers that adapt model behavior and tone dynamically based on user history and context. Own the deployment lifecycle of LLMs, including prompt architecture, evaluation frameworks, latency optimization, and cost management on Vertex AI. Provide technical mentorship by reviewing code, setting architectural standards, and guiding technical decision-making for ML engineers, without people management responsibilities.
Prospera AI - AI Backend Engineer
Own and evolve the LLM orchestration pipeline by designing and optimizing the multi-agent orchestration system, implementing parallelization and streaming to reduce response latency, and building robust prompt management with versioning and A/B testing capabilities. Design retrieval-augmented generation (RAG) systems for accurate, contextual responses by working with vector databases, embeddings, and relevance scoring while optimizing for speed and accuracy at scale. Develop production APIs that connect AI capabilities to the frontend, including designing for future integrations with CRMs and advisor tools, implementing authentication, rate limiting, and documentation. Establish code review practices and testing standards, document architecture decisions for future team members, and contribute to technical patents and IP development.
Full Stack AI Engineer – BuilderEx
Design, build, and maintain full-stack applications powering identity and access management (IAM) experiences. Develop and integrate AI/ML models for identity use cases such as fraud detection, anomaly detection, risk-based authentication, and identity verification. Lead and execute SSO migrations across products and platforms, consolidating authentication flows while minimizing user disruption. Drive domain consolidation initiatives by unifying identity systems, services, and user data models across multiple platforms or brands. Improve developer experience (DevEx) by building internal tools, SDKs, APIs, and documentation that simplify identity integrations. Design and evolve secure, scalable APIs supporting authentication, authorization, and identity data services. Partner closely with Security, Platform, and Product teams to implement and standardize protocols and patterns such as OAuth 2.0, OpenID Connect, SAML, JWT, and zero-trust architectures. Ensure AI-powered identity systems are observable, explainable, and production-ready, with robust monitoring and feedback loops. Balance security, performance, and usability while maintaining high standards for privacy and compliance. Contribute to architectural decisions, technical design discussions, and code quality standards.
Full Stack AI Engineer
Design, build, and deploy AI/ML solutions to automate ITSM ticket triage, classification, prioritization, and routing. Develop NLP-based models for ticket summarization, root-cause detection, and resolution recommendation. Implement AI-powered virtual agents and copilots to assist support engineers and end users. Partner with Product Support, SRE, and Engineering teams to understand recurring issues and automate resolution workflows. Build intelligent runbooks and self-healing automation for common incidents and service requests. Enhance knowledge management by auto-generating and updating KB articles from resolved tickets. Integrate AI solutions with ITSM platforms (HALO). Develop APIs, workflows, and event-driven automations across monitoring, logging, and ITSM tools. Ensure seamless handoff between AI systems and human support engineers. Analyze ticket, incident, and operational data to identify automation opportunities. Train, evaluate, and continuously improve ML models using real-world support data. Implement monitoring for model performance, drift, and accuracy in production. Ensure AI solutions meet reliability, security, and compliance standards. Implement guardrails, explainability, and auditability for AI-driven decisions. Contribute to AI governance and responsible AI practices.
Senior ML Operations (MLOps) Engineer
The Senior ML Operations (MLOps) Engineer at Eight Sleep is responsible for introducing and implementing cutting-edge ML technologies, owning the design and operation of robust ML infrastructure including scalable data, model, and deployment pipelines to ensure reliable model delivery to production. They collaborate cross-functionally with R&D, firmware, data, and backend teams to ensure reliable and scalable ML inference on Pods. They optimize ML systems for cost, scalability, and performance across training and inference, and develop tooling, microservices, and frameworks to streamline data processing, experimentation, and deployment. The role requires effective communication in a remote work environment.
Manual Quality Assurance Engineer, Web Core Product
Work alongside machine learning researchers, engineers, and product managers to bring AI Voices to customers for diverse use cases. Deploy and operate the core ML inference workloads for the AI Voices serving pipeline. Introduce new techniques, tools, and architecture that improve performance, latency, throughput, and efficiency of deployed models. Build tools to identify bottlenecks and sources of instability, and design and implement solutions to address the highest-priority issues.
Safety Engineer
The AI Safety Engineer is responsible for designing and building scalable backend infrastructure for content moderation, abuse detection, and agent guardrails by deploying AI/ML models into production systems. They will architect robust APIs, data pipelines, and service architectures to support real-time and batch moderation workflows. The role includes implementing comprehensive monitoring, alerting, and observability systems and establishing SLIs, SLOs, and performance benchmarks. The engineer will collaborate with ML engineers to translate research models into production-ready systems and integrate them across the product suite. Additionally, they will drive technical decisions and contribute to the vision for the safety roadmap, building next-generation platform guardrails for scale and precision.