Deployment Strategist - ANZ
As a Forward Deployed Engineer Strategist, your responsibilities include meeting with strategic customers to understand their critical audio and voice AI needs and identifying their biggest pain points. You will identify relevant use cases through deep engagement with customer problems and workflows, and work with engineers to implement voice and audio AI technology into innovative solutions. You will design and architect bespoke integrations for customers to ensure a seamless fit of the technology into their products and operations, guide customers on best practices for implementing voice and audio AI models, and present results and future proposals to technical teams and C-suite executives. You will collaborate with Research and Product teams to incorporate field insights into software products and AI models, build and deliver demos of voice and audio AI technology to customers, scope out potential applications in new industries, and expand AI solutions globally. You will also take ownership of end-to-end execution of major projects for strategic partners and collaborate daily with customers' engineering and executive teams to ensure optimal implementation.
AI Deployment Engineer - Codex
Serve as the primary technical subject matter expert on OpenAI Codex for a portfolio of customers, embedding deeply with them to enable their engineering teams and build coding workflows. Partner directly with customers to design and implement AI-enhanced development workflows, from rapid prototyping through scalable production rollout. Build high-quality demos, reference implementations, and workflow automations, using Codex itself as part of the development process. Lead large-format workshops, technical deep dives, and hands-on enablement sessions that help engineering organizations adopt AI coding tools effectively and safely. Contribute technical content including examples, guides, patterns, and best practices to the OpenAI Cookbook to help the broader developer community accelerate their work with Codex. Gather high-fidelity product insights from real customer deployments and translate them into clear product proposals and model feedback for internal teams. Influence customer strategy and decision-making by framing how AI coding tools fit into their software development lifecycle, technical roadmap, and organizational workflows. Serve as a trusted advisor on solution architecture, operational readiness, model configuration, security considerations, and best-practice adoption.
Lead Hardware Solutions Architect
Lead and contribute to cross-functional efforts solving complex physical design challenges across IPs, projects, and advanced technology nodes. Develop and enhance RTL-to-GDS methodologies, including floorplanning, synthesis, place and route, static timing analysis, signoff, and assembly. Architect and deploy AI/ML-driven solutions in production flows to improve engineering efficiency, turnaround time, and quality of results. Optimize EDA tools and custom CAD flows using data-driven and machine learning-based techniques in close collaboration with verification, extraction, timing, design for test, and EDA vendors.
Partner AI Deployment Engineer - AWS
The Partner AI Deployment Engineer focused on AWS serves as the senior technical counterpart to AWS field leadership, building trust and credibility across regions and teams. The role influences joint account strategy and technical direction for high-priority opportunities, and shapes how OpenAI engages with AWS by defining engagement models, prioritization frameworks, and best practices. It involves proactively identifying and driving new opportunities and high-impact use cases across the AWS ecosystem, leading technical strategy for large enterprise engagements, guiding customers from ideation through architecture design to production deployment, acting as a technical decision-maker and escalation point to de-risk complex implementations, and prioritizing opportunities and technical resources for maximum impact. Responsibilities also include designing and communicating end-to-end AI architectures using OpenAI and AWS services, building and guiding prototypes, proofs of concept, and reference implementations, establishing best practices for scalable and secure production-ready GenAI systems, and ensuring solutions are repeatable and extensible for partner-led delivery. Additionally, the engineer enables AWS and partners through scalable technical motions such as workshops and playbooks, develops reusable solution patterns and assets for independent deployment, mentors and uplifts partner technical teams, and scales impact by collaborating with GSIs, RSIs, and ISVs. Cross-functionally, the role partners with Alliances, Product, Engineering, GTM, and Enablement teams to align on strategy and execution; acts as a bridge between the field and product, delivering insights for roadmap planning and prioritization; and contributes to internal knowledge systems that define standards and playbooks for the AI Deployment Engineering function.
3P Architect
Define rack- and cluster-level reference architectures for AI infrastructure deployments. Translate workload requirements into clear system design specifications and partner deliverables. Collaborate with performance modeling teams to evaluate architectural tradeoffs and system behaviors. Align internal stakeholders and external partners on critical system attributes including performance, cost, power, reliability, and scalability. Identify gaps in current technology offerings and drive vendors, including ODM/JDM, silicon, and networking suppliers, to close those gaps. Influence and shape vendor roadmaps to meet future infrastructure needs. Track emerging technologies and evaluate their applicability to AI systems. Define and lead proof-of-concept efforts to validate new architectures and technologies. Act as a key interface between OpenAI and external partners, ensuring execution against design intent.
Solution Architect, Agentic AI
Translate customer pain points into AI agents, workflows, and decision-support experiences that solve real business problems. Design end-to-end solution architectures covering data sources, integrations, APIs, orchestration, retrieval, guardrails, human-in-the-loop workflows, and deployment approach. Build or guide technical prototypes, proofs of concept, and pilot solutions that validate business value quickly. Partner with Product, Engineering, Customer Success, and Delivery teams to move solutions from discovery to implementation and adoption. Own technical scoping and solution planning, including requirements, assumptions, dependencies, risks, timelines, and stakeholder alignment. Act as a trusted advisor to customers, explaining technical tradeoffs clearly to both technical and non-technical audiences. Drive measurable business outcomes such as faster time to resolution, improved first-time fix, increased remote resolution, stronger adoption, and better customer experience. Create reusable implementation assets, reference architectures, playbooks, and industry-specific patterns that improve repeatability and speed. Ensure solutions are scalable, secure, explainable, and aligned with customer governance and compliance requirements.
Agent Engineer - NY
The Agent Engineer is responsible for partnering with customers and internal teams to design and deploy scalable AI agent architectures. Initially, the role involves ramping up on Vapi's platform architecture, APIs, and agent capabilities, shadowing customer deployments and technical discovery calls, and learning the technical architecture of current enterprise implementations. Subsequently, the role includes leading technical discovery with customers, designing solution architectures, building rapid prototypes using AI-assisted development tools, and producing architecture diagrams and technical documentation for customer deployments. Ultimately, the engineer owns end-to-end technical design for enterprise deployments, architects complex integrations using APIs, webhooks, and event-driven systems, and partners with engineering and product teams to identify platform improvements based on customer feedback.
AI Deployment Engineer - Codex | APAC
The AI Deployment Engineer serves as the primary technical subject matter expert on OpenAI Codex for a portfolio of customers, embedding deeply with them to enable their engineering teams and build coding workflows. They partner directly with customers to design and implement AI-enhanced development workflows, from rapid prototyping through scalable production rollout. The role involves building high-quality demos, reference implementations, and workflow automations using Codex as part of the development process. The engineer leads large-format workshops, technical deep dives, and hands-on enablement sessions to help engineering organizations adopt AI coding tools effectively and safely. They contribute technical content including examples, guides, patterns, and best practices to the OpenAI Cookbook to help the broader developer community accelerate their work with Codex. Gathering high-fidelity product insights from real customer deployments, they translate these insights into clear product proposals and model feedback for internal teams. The engineer influences customer strategy and decision-making by framing how AI coding tools fit into their software development life cycle, technical roadmap, and organizational workflows. They also serve as a trusted advisor on solution architecture, operational readiness, model configuration, security considerations, and best-practice adoption.
Director, Revenue Transformation
The Director of Revenue Transformation is responsible for owning Gong's internal AI operating model within the IT organization, including defining the internal AI roadmap by partnering with Security, Legal, and business leaders. They operate the enterprise AI stack, enforce consistent standards for tool usage and management, and manage the full AI model lifecycle from evaluation to deprecation. They proactively interview internal teams to identify manual workflows suitable for automation using agentic AI and independently build and deploy proofs of concept to demonstrate ROI before scaling. Additionally, they manage financial aspects such as token procurement and cost forecasting to prevent uncontrolled spend, build dashboards to monitor service levels, usage, cost, and error rates, and identify optimization opportunities for cost-saving and performance tuning.
