Senior Python Systems Developer - Functional Testing Project
Create functional black-box tests for large codebases in various source languages. Create and manage Docker environments to ensure 100% reproducible builds and test execution across different platforms. Monitor code coverage and configure automated scoring criteria to meet industry benchmark-level standards. Leverage LLM-powered tools such as Roo Code and Claude to accelerate development cycles, automate repetitive tasks, and improve overall code quality.
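A minimal sketch of what one such black-box test could look like in Python: the suite drives a hypothetical `wordcount` CLI purely through its command-line interface, never importing internals, so the same tests run unchanged inside any Docker image. The module name and flags are illustrative assumptions, not part of the posting.

```python
# Hedged sketch of a black-box functional test. "wordcount" is a
# hypothetical CLI under test; the test only observes inputs and
# outputs, never internals.
import subprocess
import sys

def run_cli(*args: str, stdin: str = "") -> subprocess.CompletedProcess:
    """Invoke the tool exactly as a user would, capturing stdout/stderr."""
    return subprocess.run(
        [sys.executable, "-m", "wordcount", *args],
        input=stdin, capture_output=True, text=True, timeout=30,
    )

def test_counts_words_on_stdin():
    result = run_cli("--stdin", stdin="one two three\n")
    assert result.returncode == 0
    assert result.stdout.strip() == "3"

def test_rejects_missing_file_with_nonzero_exit():
    result = run_cli("does-not-exist.txt")
    assert result.returncode != 0
    assert "does-not-exist.txt" in result.stderr
```

Coverage gating of the kind described above can then be enforced with coverage.py, e.g. `coverage run -m pytest && coverage report --fail-under=90`, where the 90% threshold is an example value.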
Software Engineer, Architecture, Reliability, & Compute
As a Production AI Ops Lead, you will own the production lifecycle of full-stack AI applications, supporting end-to-end system reliability, real-time inference observability, sovereign data orchestration, high-security software integration, and resilient cloud infrastructure for international government partners. You will take full accountability for the long-term performance and reliability of AI use cases deployed across international government agencies, and oversee the end-to-end health of the platform, ensuring seamless integration between AI core and full-stack components. You will build automated systems to monitor model performance and data drift across geographically dispersed environments, manage the technical lifecycle within diverse regulatory frameworks, and lead response for production issues in mission-critical environments, ensuring rapid resolution and prevention. You will also translate technical performance metrics into clear insights for senior international government officials and partner with Engineering and ML teams to ensure field lessons influence future technical architecture and decisions.
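As one concrete, hedged illustration of the drift monitoring mentioned above: the Population Stability Index (PSI) is a common way to flag distribution shift between training-time and live data. The quantile binning and the 0.2 alert threshold below are widespread rules of thumb, not anything specified by the posting.

```python
# Hedged sketch of an automated data-drift check using the Population
# Stability Index. Bin edges come from the reference (training-time)
# sample; the 0.2 threshold is a common rule of thumb.
import numpy as np

def psi(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a live sample."""
    edges = np.quantile(reference, np.linspace(0.0, 1.0, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf      # catch out-of-range values
    ref_counts, _ = np.histogram(reference, edges)
    live_counts, _ = np.histogram(live, edges)
    p = np.clip(ref_counts / ref_counts.sum(), 1e-6, None)
    q = np.clip(live_counts / live_counts.sum(), 1e-6, None)
    return float(np.sum((q - p) * np.log(q / p)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)   # stand-in for training data
prod_scores = rng.normal(0.3, 1.1, 10_000)    # slightly shifted live data
print(f"PSI = {psi(train_scores, prod_scores):.3f}")  # > 0.2 often alerts
```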
Engineering Manager, Active Learning
The Engineering Manager role at Deepgram involves leading the design and implementation of internal data and ML training systems. Responsibilities include recruiting, hiring, training, and supporting top engineering talent to build a world-class team; transforming cross-functional visions into detailed project plans with clarity on commitments, risks, and timelines; defining and owning technical strategy to accelerate ML training pipelines; promoting a strong team engineering culture focused on rigorous engineering standards and continuous improvement; partnering with DataOps and Research teams to design and implement new services, features, or products end to end; and coaching and mentoring engineers to support personal growth while achieving ambitious team goals.
Research Engineer, Machine Learning Systems
The responsibilities include architecting and managing horizontally scalable systems to accelerate the end-to-end training lifecycle for Speech-to-Text (STT) and Text-to-Speech (TTS) models, focusing on optimized data preparation, high-throughput training pipelines, distributed infrastructure, and automated evaluation tooling. The role also involves designing and implementing internal UIs and tools to make ML systems and workflows accessible and transparent to non-technical stakeholders. Additionally, the position requires overseeing and managing training tooling, job orchestration, experiment tracking, and data storage.
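For flavor, a bare-bones sketch of the experiment-tracking piece of such a role: each training run appends an immutable JSON record tying config, git state, and metrics together so runs can be compared and reproduced later. The directory layout and field names are invented for the example, not Deepgram's actual schema.

```python
# Hedged sketch of minimal experiment tracking: one JSON record per run.
import json
import subprocess
import time
import uuid
from pathlib import Path

TRACK_DIR = Path("experiments")  # hypothetical local store

def log_run(config: dict, metrics: dict) -> Path:
    """Persist one training run so it can be compared and reproduced."""
    TRACK_DIR.mkdir(exist_ok=True)
    try:
        commit = subprocess.run(
            ["git", "rev-parse", "HEAD"], capture_output=True, text=True
        ).stdout.strip() or "unknown"
    except OSError:
        commit = "unknown"
    record = {
        "run_id": uuid.uuid4().hex,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "git_commit": commit,
        "config": config,
        "metrics": metrics,
    }
    path = TRACK_DIR / f"{record['run_id']}.json"
    path.write_text(json.dumps(record, indent=2))
    return path

log_run(
    config={"model": "stt-base", "lr": 3e-4, "batch_size": 256},
    metrics={"wer": 0.081, "epoch": 12},
)
```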
Inference Technical Lead, On-Device Transformers
As a Technical Lead on the Future of Computing Research team, you will evaluate and select silicon platforms such as GPUs, NPUs, and specialized accelerators for on-device and edge deployment of OpenAI models. You will work closely with research teams to co-design model architectures that meet real-world deployment constraints including latency, memory, power, and bandwidth. You will analyze and model system performance, identifying tradeoffs between model design, memory hierarchy, compute throughput, and hardware capabilities. You will partner with hardware vendors and internal infrastructure teams to bring up new accelerators and ensure efficient execution of transformer workloads. Additionally, you will build and lead a team of engineers responsible for implementing the low-level inference stack, including kernel development and runtime systems. You will also take nascent research capabilities and mature them into usable form.
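The performance-modeling side of this work often starts with a roofline-style estimate; a hedged back-of-envelope version is sketched below. Autoregressive decode is typically memory-bandwidth bound, so per-token latency is roughly the larger of bytes-moved/bandwidth and FLOPs/throughput. All hardware numbers in the example are placeholders, not any particular accelerator.

```python
# Hedged roofline-style estimate of per-token transformer decode latency.
def decode_latency_ms(params_billion: float, bytes_per_param: float,
                      bandwidth_gb_s: float, tflops: float) -> float:
    """Per-token latency: the slower of the bandwidth and compute bounds."""
    n_params = params_billion * 1e9
    mem_bound_s = (n_params * bytes_per_param) / (bandwidth_gb_s * 1e9)
    compute_bound_s = (2.0 * n_params) / (tflops * 1e12)  # ~2 FLOPs/param/token
    return 1e3 * max(mem_bound_s, compute_bound_s)

# 7B model, int4 weights (0.5 B/param) on a hypothetical 100 GB/s,
# 20 TFLOPS edge NPU: memory-bound at ~35 ms/token.
print(f"{decode_latency_ms(7, 0.5, 100, 20):.1f} ms/token")
```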
Engineering Manager, Go - Assist & Chat
Own the observability and lifecycle management of AI features across the organization. Build tools and infrastructure to enable teams to develop, monitor, and optimize LLM-powered features. Design and implement closed-loop evaluation pipelines that automatically validate prompt changes. Develop comprehensive metrics and dashboards to track LLM usage including cost per feature, token patterns, and latency. Create systems that tie user feedback to specific prompts and LLM calls. Establish best practices and processes for the full lifecycle of prompts, including development, testing, deployment, and monitoring. Collaborate with engineering teams across the organization to ensure they have the tools and visibility needed to build high-quality AI features.
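A minimal sketch of the per-call telemetry such tooling would standardize: a wrapper that records latency, token counts, and estimated cost, tagged by feature so dashboards can aggregate. The client interface and per-token prices are assumptions for illustration, not any vendor's actual API or rates.

```python
# Hedged sketch of per-call LLM telemetry. The `client` interface and
# PRICE_PER_1K rates are illustrative assumptions.
import time
from dataclasses import dataclass

PRICE_PER_1K = {"prompt": 0.003, "completion": 0.015}  # assumed USD rates

@dataclass
class LLMCallRecord:
    feature: str
    prompt_tokens: int
    completion_tokens: int
    latency_s: float

    @property
    def cost_usd(self) -> float:
        return (self.prompt_tokens / 1000 * PRICE_PER_1K["prompt"]
                + self.completion_tokens / 1000 * PRICE_PER_1K["completion"])

def instrumented_call(feature: str, client, prompt: str):
    """Call the model and emit a record dashboards can aggregate by feature."""
    start = time.perf_counter()
    response = client.complete(prompt)  # hypothetical client interface
    record = LLMCallRecord(
        feature=feature,
        prompt_tokens=response.usage.prompt_tokens,
        completion_tokens=response.usage.completion_tokens,
        latency_s=time.perf_counter() - start,
    )
    return response.text, record
```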
Head of Internal Tools Engineering
The Head of Internal Tools Engineering is responsible for owning the end-to-end strategy and roadmap for all internal tools, platforms, and automation, treating internal technology as a product. They make strategic build-vs-buy decisions, map current and next-state process flows, and lead systems transformation for internal teams. They architect and maintain the full engineering lifecycle of internal platforms, build seamless API-first ecosystems integrating various internal systems, ensure system reliability and operational resilience, and design scalable, secure architectures using cloud-native principles and microservices. They lead AI strategy by integrating AI and LLMs into internal workflows and deploying intelligent automation tools. They reduce cognitive load for internal users by providing standardized workflows and self-service capabilities, measure platform success by adoption, satisfaction, and productivity impact, and build, lead, and mentor a high-performing engineering team. They cultivate a collaborative culture, provide technical mentorship, foster psychological safety, partner cross-functionally with leadership across departments, and align internal platform investments with company strategy while demonstrating measurable ROI.
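As a hedged illustration of the "measure platform success by adoption" duty, a few lines of Python can turn a usage-event log into weekly-active-user counts per tool; the event shape and tool names are invented for the example.

```python
# Hedged sketch: weekly active users per internal tool from a usage log.
from collections import defaultdict
from datetime import date

events = [  # stand-in for rows pulled from a platform audit log
    {"user": "ana",  "tool": "deploy-portal", "day": date(2024, 5, 6)},
    {"user": "ben",  "tool": "deploy-portal", "day": date(2024, 5, 7)},
    {"user": "ana",  "tool": "deploy-portal", "day": date(2024, 5, 14)},
    {"user": "cruz", "tool": "secrets-ui",    "day": date(2024, 5, 14)},
]

weekly_users: dict[tuple[str, int], set[str]] = defaultdict(set)
for e in events:
    week = e["day"].isocalendar().week
    weekly_users[(e["tool"], week)].add(e["user"])

for (tool, week), users in sorted(weekly_users.items()):
    print(f"{tool} week {week}: {len(users)} active users")
```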
Freelance AI Evaluation Engineer (Python/Full-Stack)
Create challenging coding test cases that push AI coding systems to their limits. Review and refine realistic coding tasks based on provided production codebases with realistic scope, requirements, and information sources. Write comprehensive functional tests that validate actual end-to-end behavior and edge cases, not just superficial checks. Craft "fair but hard" challenges where the AI has all the context it needs but must work for it, involving information scattered across files and external sources and requiring complex reasoning. Analyze AI failures to understand what the model struggles with versus what it masters. Iterate based on feedback from expert QA reviewers who score work on seven quality criteria.
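To make "end-to-end behavior, not superficial checks" concrete, here is a hedged sketch in pytest: the tests exercise a hypothetical `slugify` function only through its public API, covering edge cases (Unicode, empty input, idempotence) that a superficial check would miss. The module path and expected outputs are assumptions for illustration.

```python
# Hedged sketch of behavior-focused functional tests.
import pytest
from myproject.text import slugify  # hypothetical module under evaluation

@pytest.mark.parametrize("raw, expected", [
    ("Hello, World!", "hello-world"),      # punctuation stripped
    ("  spaced   out  ", "spaced-out"),    # whitespace runs collapse
    ("Crème Brûlée", "creme-brulee"),      # accents transliterated
    ("", ""),                              # empty input is not an error
    ("---", ""),                           # separator-only input collapses
])
def test_slugify_end_to_end(raw, expected):
    assert slugify(raw) == expected

def test_slugify_is_idempotent():
    # A behavioral property, not an implementation detail:
    # slugifying twice must equal slugifying once.
    once = slugify("Already-Slugged Title")
    assert slugify(once) == once
```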
