Senior AI Engineer
Build agent-driven enrollment and parent communication pipelines that scale significantly without proportional headcount growth. Create and manage parallel simulations of students working through the curriculum to identify gaps and generate improvements. Develop automated culture and community agents for engagement, onboarding, and retention at machine scale. Construct real-time operational dashboards that give leadership visibility into enrollment, academic progress, parent satisfaction, and campus operations. Design AI-first workflows for guides, advisors, and operational staff that reduce administrative burden and refocus time on students. Build systems called Brainlifts to capture and compound institutional knowledge over time. Integrate these capabilities into Alpha's broader AI ecosystem, including EPHOR, Alpha GPTs, and Fleet/Swarm infrastructure.
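For a sense of the simulation work, here is a minimal Python sketch of running student personas through curriculum units in parallel and surfacing gaps. The personas, simulate_student() behavior, and GapReport fields are hypothetical stand-ins, not Alpha's actual Brainlift or Fleet/Swarm machinery; a real harness would drive LLM-backed student agents against real curriculum.

```python
# Minimal sketch of a parallel student-simulation harness. All names and the
# random failure model are illustrative stand-ins for LLM-driven student agents.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
import random

@dataclass
class GapReport:
    persona: str
    unit: str
    score: float          # fraction of exercises completed correctly
    stuck_on: list[str]   # exercises the simulated student failed

def simulate_student(persona: str, unit: str) -> GapReport:
    """Stand-in for an LLM agent role-playing a student through one unit."""
    exercises = [f"{unit}-ex{i}" for i in range(1, 6)]
    failed = [ex for ex in exercises if random.random() < 0.2]
    return GapReport(persona, unit, 1 - len(failed) / len(exercises), failed)

personas = ["struggling-reader", "advanced-math", "easily-distracted"]
units = ["fractions-1", "fractions-2"]

with ThreadPoolExecutor(max_workers=8) as pool:
    reports = list(pool.map(
        lambda args: simulate_student(*args),
        [(p, u) for p in personas for u in units],
    ))

# Surface curriculum gaps: units where simulated students got stuck.
for r in sorted(reports, key=lambda r: r.score):
    if r.stuck_on:
        print(f"{r.unit}: {r.persona} failed {r.stuck_on}")
```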
DevOps Engineer, Infrastructure & Security
The role carries full accountability for the long-term performance and reliability of AI use cases deployed across international government agencies. Responsibilities include overseeing the end-to-end health of the platform, ensuring seamless integration between the AI core and all full-stack components, from APIs to UI, and maintaining a responsive, production-ready environment. The job also requires building automated systems to monitor model performance and data drift across geographically dispersed environments, managing the technical lifecycle within diverse regulatory frameworks, and leading incident response for production issues in mission-critical environments with rapid resolution and prevention of recurrence. Additionally, the role requires translating deep technical performance metrics into clear insights for senior international government officials, and partnering with Engineering and ML teams to ensure lessons learned in the field influence the technical architecture and decisions of future use cases.
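As an illustration of the drift-monitoring responsibility, here is a minimal sketch using a two-sample Kolmogorov-Smirnov test over per-environment feature batches. The feature name, window sizes, and 0.05 threshold are assumptions for the example, not a prescribed policy.

```python
# Minimal data-drift check, assuming numeric feature batches are already
# collected per environment. Flags features whose live distribution has
# shifted away from a reference window.
import numpy as np
from scipy.stats import ks_2samp

def drift_report(reference: dict[str, np.ndarray],
                 live: dict[str, np.ndarray],
                 alpha: float = 0.05) -> dict[str, bool]:
    """Flag each feature whose live distribution differs from the reference."""
    flags = {}
    for feature, ref_values in reference.items():
        stat, p_value = ks_2samp(ref_values, live[feature])
        flags[feature] = p_value < alpha  # True means likely drift
    return flags

rng = np.random.default_rng(0)
reference = {"latency_ms": rng.normal(120, 15, 5000)}
live = {"latency_ms": rng.normal(140, 15, 5000)}   # shifted distribution
print(drift_report(reference, live))                # {'latency_ms': True}
```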
Field Engineering Manager, Public Sector
As a Production AI Ops Lead, you will own the production lifecycle of full-stack AI applications, supporting end-to-end system reliability, real-time inference observability, sovereign data orchestration, high-security software integration, and resilient cloud infrastructure for international government partners. Responsibilities include owning the production outcome, with full accountability for the long-term performance and reliability of AI use cases across international government agencies; ensuring full-stack integrity by overseeing all platform components, from APIs to UI, for a production-ready environment; building automated systems to monitor model performance and data drift across dispersed environments; managing the technical lifecycle within diverse regulatory frameworks; leading incident response in mission-critical environments with rapid resolution and guardrails against recurrence; translating technical performance metrics into clear insights for senior government officials; and partnering with engineering and ML teams to influence the technical architecture and decisions for future AI use cases.
Full Stack Engineer
Build and maintain features for the web-based property management platform using TypeScript, React, Node.js, PostgreSQL, and AWS. Contribute to a monorepo architecture, working within two-week sprint cycles to deliver high-quality code. Implement integrations including DocuSign, Plaid, Stripe, and ownership group payout systems. Optimize platform performance and user experience by replacing legacy systems. Build and integrate AI agents using Claude and other AI APIs to automate organizational processes, developing API integrations and custom agents. Collaborate with the CEO on prioritizing automation opportunities. Take ownership of tasks, independently research and implement solutions to challenges, proactively identify and implement improvements, and contribute ideas to platform architecture and development priorities.
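The listing's stack is TypeScript, but for brevity here is a minimal Python sketch of the Claude-based automation it describes (the Anthropic SDK has equivalent TypeScript bindings). The ticket scenario, system prompt, and model id are illustrative; a production agent would add tool use, retries, and validation of the output.

```python
# Minimal sketch of automating an internal process with the Anthropic API.
import os
from anthropic import Anthropic

client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

def draft_tenant_reply(ticket_text: str) -> str:
    """Ask Claude to draft a response to a property-management support ticket."""
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model id
        max_tokens=512,
        system="You draft polite, accurate replies for a property manager.",
        messages=[{"role": "user", "content": ticket_text}],
    )
    return message.content[0].text

print(draft_tenant_reply("My rent autopay through Plaid failed this month."))
```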
Senior Software Engineer, Agents
Design and build AI agents that outperform human agents in managing complex customer interactions and driving customer retention. Identify cross-customer trends that guide the evolution of Decagon’s agent building platform and research efforts. Experiment with and run evaluations on the latest text and voice models, then integrate them at scale with large enterprise-grade customers.
Member of Technical Staff - ML Engineering
Deploy, maintain, and optimize production and research compute clusters. Design and implement scalable and efficient ML inference solutions. Develop dynamic and heterogeneous compute solutions for balancing research and production needs. Contribute to productizing model APIs for external use. Develop infrastructure observability and monitoring solutions.
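For the observability piece, here is a minimal sketch of instrumenting an inference path with prometheus_client so a Prometheus scraper and dashboards can consume request counts and latency histograms. The metric names, labels, and port are illustrative assumptions.

```python
# Minimal inference-observability sketch, assuming a Python serving process.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("inference_requests_total", "Inference requests", ["model"])
LATENCY = Histogram("inference_latency_seconds", "Inference latency", ["model"])

def run_inference(model: str, prompt: str) -> str:
    REQUESTS.labels(model=model).inc()
    with LATENCY.labels(model=model).time():      # records request duration
        time.sleep(random.uniform(0.05, 0.2))    # stand-in for a real model call
        return f"completion for: {prompt}"

if __name__ == "__main__":
    start_http_server(9100)  # metrics served at http://localhost:9100/metrics
    while True:
        run_inference("demo-model", "hello")
```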
Product Manager, Agent Harness & Modelling
Define and own the roadmap for North's agent harness, including the agent loop, context engineering layer, tool orchestration, sandbox execution, and sub-agent delegation. Serve as the primary interface between North engineering and Cohere's Modeling team, ensuring new harness capabilities are validated before being built and that neither team limits future possibilities. Own North's agentic evaluation framework, ensuring evaluations are compatible with both the North harness and Modeling's training infrastructure, serving as a reliable bridge between product and research. Engage enterprise customers to identify real-world agentic failures and translate findings into product and model requirements. Stay current with the open-source and commercial agent ecosystem and drive adoption decisions that align North's architecture with emerging standards.
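For context, the agent loop and tool orchestration named above reduce to a small core pattern, sketched below. call_model() is a hypothetical stand-in for a model call that returns either a tool request or a final answer; North's actual harness adds context engineering, sandboxed execution, and sub-agent delegation around this core.

```python
# Minimal agent-loop/tool-orchestration sketch. All names are illustrative.
TOOLS = {
    "search_docs": lambda query: f"3 documents matched '{query}'",
    "calculator": lambda expression: str(eval(expression)),  # sandbox in practice
}

def call_model(messages: list[dict]) -> dict:
    """Hypothetical model call; pretend it asks for one tool, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "calculator", "args": {"expression": "17 * 23"}}
    return {"answer": f"The result is {messages[-1]['content']}."}

def agent_loop(user_msg: str, max_turns: int = 5) -> str:
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_turns):
        reply = call_model(messages)
        if "answer" in reply:                            # terminal state
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])   # dispatch tool call
        messages.append({"role": "tool", "content": result})
    return "Gave up after max_turns."

print(agent_loop("What is 17 * 23?"))  # -> "The result is 391."
```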
C++ Systems Engineer
Design, build, and optimize the core native runtime that powers LM Studio and the C++ libraries behind the app and its APIs. Work across the runtime, LLM engines, llama.cpp/MLX integrations, build infrastructure, and on-device AI software. Focus on system and library integration, wiring the C++ runtime to GPU backends, vendor SDKs, and operating-system services in support of user-facing applications. Implement and harden system-level code involving threading, memory, files, IPC, and scheduling. Integrate platform acceleration paths such as Metal, CUDA, and Vulkan across macOS, Windows, and Linux. Profile, debug, and tune execution paths to keep local AI fast and dependable and the software maintainable. Extend LLM engine integrations and build platform-aware performance features for desktop operating systems. Implement resilient IPC, resource management, and scheduling logic to support concurrent model execution. Improve build, packaging, and release infrastructure for native components. Collaborate with the team to deliver cohesive and recognizable user experiences.
Research Engineer – Benchmarking, Evals & Failure Analysis
As a Research Engineer at Mercor, you will own the benchmarking pipelines, evaluation systems, and failure-analysis workflows that directly inform how frontier language models are trained and improved. You will design, implement, and maintain benchmarks and metrics for tool use, agentic behavior, and real-world reasoning, ensuring they scale with training and align with product and research goals. You will build and operate LLM evaluation systems, including runs, scoring, dashboards, and reporting, to track and compare model performance at scale. You will conduct systematic failure analysis on model outputs, categorizing failure modes, quantifying their prevalence, and using these insights to influence reward design, data curation, and benchmark design. You will create and refine rubrics, automated evaluators, and scoring frameworks that shape training and evaluation decisions, balancing rigor with scalability. You will quantify data usability and quality, and guide data generation, augmentation, and curation based on evaluation results and failure analysis. You will collaborate with AI researchers, applied AI teams, and data producers to align evaluations with training objectives and to prioritize the most important benchmarks and failure analyses. Finally, you will operate with strong ownership in a fast-paced, high-iteration research environment.
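As a flavor of the failure-analysis work, here is a minimal Python sketch that classifies transcripts into failure modes and quantifies their prevalence. The taxonomy and keyword heuristics are illustrative stand-ins; real pipelines typically combine rubric-based LLM judges with human review before counting.

```python
# Minimal failure-mode bookkeeping sketch for model transcripts.
from collections import Counter

FAILURE_MODES = ("wrong_tool", "hallucinated_fact", "ignored_instruction", "ok")

def classify(transcript: str) -> str:
    """Toy keyword heuristic standing in for a trained or LLM-based classifier."""
    lowered = transcript.lower()
    if "tool_error" in lowered:
        return "wrong_tool"
    if "citation: none" in lowered:
        return "hallucinated_fact"
    if "user asked" in lowered and "instead" in lowered:
        return "ignored_instruction"
    return "ok"

transcripts = [
    "TOOL_ERROR: called web_search with malformed args",
    "Claim about 2023 revenue. citation: none",
    "user asked for JSON, model replied in prose instead",
    "task completed, all checks passed",
]

# Prevalence report: how often each failure mode occurs across transcripts.
counts = Counter(classify(t) for t in transcripts)
total = sum(counts.values())
for mode in FAILURE_MODES:
    print(f"{mode:>20}: {counts[mode]:2d}  ({counts[mode] / total:.0%})")
```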
AI Evaluation Engineer
Design and implement evaluation pipelines to measure the performance and reliability of AI models. Develop automated testing frameworks to assess model outputs at scale. Analyze model performance using both traditional statistical metrics and AI-specific evaluation methods. Evaluate AI systems built on modern architectures, such as LLM-based applications and Retrieval-Augmented Generation (RAG). Identify potential issues related to accuracy, hallucinations, bias, safety, and model drift. Conduct adversarial testing to uncover vulnerabilities and ensure safe model behavior. Collaborate with engineering and AI teams to improve prompt design, model outputs, and system performance. Monitor model performance in production, and help define best practices for AI evaluation and observability.
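To make the pipeline idea concrete, here is a minimal sketch of a RAG-style evaluation run: execute each case, score the answer against a reference, and flag ungrounded claims. run_system() is a hypothetical stand-in for the application under test, and the substring grounding check is a deliberately crude hallucination proxy.

```python
# Minimal evaluation-pipeline sketch for a RAG-style system.
from dataclasses import dataclass

@dataclass
class EvalCase:
    question: str
    reference: str
    retrieved_context: str

def run_system(case: EvalCase) -> str:
    """Stand-in for the deployed RAG application under evaluation."""
    return case.reference if "capital" in case.question else "Paris is in Spain."

def score(case: EvalCase, answer: str) -> dict:
    return {
        "exact_match": answer.strip() == case.reference.strip(),
        "grounded": answer in case.retrieved_context,  # crude hallucination proxy
    }

cases = [
    EvalCase("What is the capital of France?", "Paris",
             "France's capital is Paris. Paris"),
    EvalCase("Where is Paris?", "Paris is in France.",
             "Paris is the capital of France."),
]

# Aggregate per-metric pass rates across the suite.
results = [score(c, run_system(c)) for c in cases]
for metric in ("exact_match", "grounded"):
    rate = sum(r[metric] for r in results) / len(results)
    print(f"{metric}: {rate:.0%}")
```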
