Senior manager, solutions architecture (UK)
Lead and empower a team of highly skilled solutions architects, fostering their technical growth and career development across complex enterprise AI engagements. Drive the successful adoption and deployment of WRITER's generative AI platform by overseeing key pre-sales technical engagements, including use case discovery, proof-of-concept execution, and value realization for strategic customers. Partner closely with sales leadership and go-to-market teams to develop strategic account plans, define compelling technical value propositions, and accelerate pipeline growth for WRITER's solutions. Act as an executive technical sponsor for WRITER's most strategic accounts, building strong relationships with C-level stakeholders and becoming their trusted advisor in AI strategy and implementation. Influence WRITER's product roadmap by gathering critical market insights and customer feedback to ensure the platform continuously addresses evolving enterprise AI needs. Architect robust, scalable, and secure AI solutions for diverse enterprise environments, integrating WRITER's platform with complex customer data ecosystems and existing technical stacks. Transform how customer evaluations and proofs of concept are run, implementing best practices and scalable processes that demonstrate clear ROI and accelerate time-to-value for clients.
Member of Technical Staff - ML Engineering
Deploy, maintain, and optimize production and research compute clusters. Design and implement scalable and efficient ML inference solutions. Develop dynamic and heterogeneous compute solutions for balancing research and production needs. Contribute to productizing model APIs for external use. Develop infrastructure observability and monitoring solutions.
Engineering Leader
As an Engineering Leader at Ema, you will build and lead a high-performance engineering organization by recruiting, hiring, and developing senior engineers across multiple sub-teams, including cloud infrastructure, data platform, ML operations, and developer experience. You will establish engineering standards, a code-review culture, and on-call expectations, and promote a bias toward shipping balanced with production rigor. You will coach and grow senior and staff engineers into technical leaders and manage engineering managers as the organization scales. Your responsibilities include setting the 6–18 month platform roadmap in partnership with engineering teams, making critical architectural decisions such as build versus buy and migration strategies, and driving cross-functional alignment with product, ML/AI research, and go-to-market teams. You will own production health for all platform services, including incident response, postmortems, SLO tracking, and capacity planning. Additionally, you will establish and refine engineering practices to maintain fast shipping without compromising reliability, and participate in executive-level reviews of infrastructure spend, system health, and engineering velocity.
AI Engineer
Build production AI systems by designing, developing, and deploying robust AI applications using LLMs, including prompt engineering, agent workflows, tool use, and full-stack AI products. Work directly with customers by partnering closely with enterprise stakeholders to understand complex problems and translate them into impactful AI solutions. Lead system architecture by designing scalable architectures for production AI systems that balance performance, reliability, cost, and maintainability. Develop internal platform infrastructure by contributing to Distillery, the internal LLM application platform, building reusable infrastructure, tools, and workflows used across customer deployments. Evaluate AI systems rigorously by developing evaluation frameworks that measure model performance across accuracy, latency, cost, reliability, and safety. Ship production-grade systems ensuring they meet high standards for observability, reliability, security, and maintainability. Raise the engineering bar by improving development workflows, evaluation practices, and deployment strategies as the AI platform evolves.
Head of Internal Tools Engineering
The Head of Internal Tools Engineering is responsible for owning the end-to-end strategy and roadmap for all internal tools, platforms, and automation, treating internal technology as a product. They make strategic build-vs-buy decisions, map current and next-state process flows, and lead systems transformation for internal teams. They architect and maintain the full engineering lifecycle of internal platforms, build seamless API-first ecosystems integrating various internal systems, ensure system reliability and operational resilience, and design scalable, secure architectures using cloud-native principles and microservices. They lead AI strategy by integrating AI and LLMs into internal workflows and deploying intelligent automation tools. They reduce cognitive load for internal users by providing standardized workflows and self-service capabilities, measure platform success by adoption, satisfaction, and productivity impact, and build, lead, and mentor a high-performing engineering team. They cultivate a collaborative culture, provide technical mentorship, foster psychological safety, partner cross-functionally with leadership across departments, and align internal platform investments with company strategy while demonstrating measurable ROI.
Head of Internal Tools Engineering
The role involves architecting, building, and scaling the internal technology ecosystem to accelerate workforce productivity, eliminate operational friction, and provide a compounding infrastructure advantage by treating internal tools with product rigor and user-centricity. Responsibilities include owning the end-to-end strategy and roadmap for all internal tools, platforms, and automation; making strategic build-vs-buy decisions; mapping current and next-state process flows and leading systems transformation. The role requires architecting and maintaining the full engineering lifecycle of internal platforms, building API-first ecosystems integrating with various business systems, owning system reliability and operational resilience, and designing scalable, secure cloud-native architectures. The role leads AI adoption and automation integration into internal workflows, including deploying intelligent automation tools, evaluating AI-assisted troubleshooting, and driving continuous experimentation with prototypes. The person will reduce cognitive load for internal users by providing golden paths and standardized workflows, ensuring frictionless onboarding, and measuring platform success via adoption rates, user satisfaction, DORA metrics, and productivity impact. Team leadership duties include building, leading, and mentoring engineers and managers, fostering a collaborative culture rooted in ownership, speed, craftsmanship, and psychological safety. The role partners cross-functionally with various company leadership teams to translate business needs into a unified technical vision, aligning internal platform investments with company strategy and demonstrating measurable ROI.
Solutions Architect (Dallas)
The Solutions Architect is responsible for designing scalable, highly available infrastructure for AI platform deployments, including compute, storage, networking, security, enterprise integration patterns, Infrastructure as Code (Terraform, Helm), multi-region HA/DR strategies, and CI/CD pipelines. They design multi-agent systems using various patterns, implement agent logic using modern frameworks (LangChain/LangGraph), design evaluation frameworks, optimize prompts with A/B testing, and guide deployment and operations. The role involves leading technical maturity assessments, working directly with enterprise customers to understand requirements and present recommendations, and partnering with Engagement Managers and Product/Engineering teams.
Solutions Architect (Austin)
The Solutions Architect is responsible for designing, deploying, and optimizing production-grade AI infrastructure and agent systems, including scalable, secure infrastructure deployments and building reliable agent applications. Responsibilities include infrastructure and platform engineering such as designing scalable and highly available infrastructure for AI platform deployments with compute, storage, networking, security, and enterprise integration patterns. They utilize Infrastructure as Code tools like Terraform and Helm and implement multi-region high availability and disaster recovery strategies as well as CI/CD pipelines. The role also covers agent engineering and development including designing multi-agent systems using various patterns, implementing agent logic using frameworks such as LangChain and LangGraph, designing evaluation frameworks, optimizing prompts with A/B testing, and guiding deployment and operations. Additionally, the role involves customer engagement and assessment, leading technical maturity assessments, working directly with enterprise customers to understand requirements, presenting recommendations, and partnering with Engagement Managers and Product/Engineering teams.
Solutions Architect (NYC)
The Solutions Architect is responsible for designing scalable, highly available infrastructure for AI platform deployments, including compute, storage, networking, security, enterprise integration patterns, Infrastructure as Code (Terraform, Helm), multi-region HA/DR strategies, and CI/CD pipelines. They design multi-agent systems using different patterns, implement agent logic using modern frameworks such as LangChain and LangGraph, design comprehensive evaluation frameworks, optimize prompts with A/B testing, and guide deployment and operations. They lead technical maturity assessments, work directly with enterprise customers to understand requirements and present recommendations, and partner with Engagement Managers and Product/Engineering teams.
Manager/Sr. Manager, Biopharma Marketing
Lead the team responsible for the AI/ML Stack infrastructure that bridges ML research and large-scale production, evolving the stack to meet scalability needs in ML training and inference workloads. Develop and execute the long-term vision and roadmap for the MLOps team to support ML development and deployment across business units, balancing short-term tactical deliveries with long-term architectural transformation. Manage and mentor a team of 6–7+ engineers, and allocate resources strategically to support both existing services and strategic initiatives. Collaborate across machine learning, data science, product engineering, and infrastructure teams to identify and address bottlenecks and facilitate deployment of new solutions. Architect compute and storage pipelines to manage large datasets without fragmentation or latency. Modernize the AI product inference stack to support significant growth in AI runs globally. Work with Site Reliability Engineering to establish comprehensive system observability metrics. Conduct build-vs-buy assessments and technology stack refresh audits to benchmark toolsets and ensure the best ones are in use.
