GCP AI Jobs

Discover the latest remote and onsite GCP AI roles at top AI companies that are actively hiring. Updated hourly.

Check out 66 new GCP AI role opportunities posted on The Homebase

Manager/Sr. Manager, Biopharma Marketing

New
Top rated
PathAI
Full-time

Lead the team responsible for the AI/ML Stack infrastructure that bridges ML research and large-scale production, evolving the stack to meet scalability needs in ML training and inference workloads. Develop and execute the long-term vision and roadmap for the MLOps team to support ML development and deployment across business units, balancing short-term tactical deliveries with long-term architectural transformation. Manage and mentor a team of 6-7+ engineers, allocating resources to support both existing services and strategic initiatives. Collaborate across machine learning, data science, product engineering, and infrastructure teams to identify and address bottlenecks and facilitate deployment of new solutions. Architect compute and storage pipelines to manage large datasets without fragmentation or latency. Modernize the AI product inference stack to support significant growth in AI runs globally. Work with Site Reliability Engineering to establish comprehensive system observability metrics. Conduct build-vs.-buy assessments and technology stack refresh audits to benchmark toolsets and ensure the best ones are in use.

$181,500 – $278,300 per year (USD)

Boston
Maybe global
Remote
Kubernetes
AWS
GCP
Azure
CI/CD

AI & IT Systems Engineer

Jasper
Full-time

As Jasper undergoes an agentic AI shift, the AI & IT Systems Engineer role involves ensuring the IT infrastructure is robust, secure, and fine-tuned for advanced AI workflows, spending 70-80% of time on AI enablement deployments. Responsibilities include modernizing and improving IT systems to support autonomous AI workflows, building scalable automation infrastructure to enhance efficiency and reduce manual tasks, and operationalizing AI initiatives using tools like Claude, ChatGPT, and Zapier to create intelligent, cross-platform workflows involving platforms like Google Workspace and Slack. The role also requires managing core IT systems such as Identity Providers and Mobile Device Management, streamlining identity and access operations using features like Okta Workflows, and providing cross-functional technical support across departments to implement AI enablement projects. Additionally, the engineer manages a broad SaaS ecosystem, including Google Workspace and Linear, and assists in developing training resources and playbooks to facilitate team adoption of new AI tools.

$135,000 – $155,000 per year (USD)

United States
Maybe global
Remote
Python
Docker
Kubernetes
AWS
GCP

Associate Forward Deployed Engineer

Handshake
Full-time

As an Associate Forward Deployed Engineer, the responsibilities include working alongside senior engineers and directly with customers, who are leading AI labs, to solve pressing technical challenges. The role involves supporting the design and deployment of solutions that impact customer workflows and model performance, working across the stack, shipping quickly, and iterating based on real-world feedback. Tasks include partnering with AI labs and internal teams to identify needs and gather requirements, supporting solution design, working on custom asks from customers to prototype and deploy tailored solutions, tackling complex technical problems with support from senior engineers, taking increasing ownership from concept through deployment, rapidly prototyping, testing, and iterating on tools in response to real-time feedback, contributing to architectural discussions, helping establish best practices for reliability, scalability, and security, and documenting solutions to create technical playbooks for repeatable and scalable deployment processes.

$125,000 – $145,000 per year (USD)

San Francisco, United States
Maybe global
Onsite
JavaScript
TypeScript
AWS
GCP
Docker

Manager, AI Deployment Engineering - Health & Life Sciences

OpenAI
Full-time

The Manager, AI Deployment Engineering for Health & Life Sciences is responsible for owning the strategy and operating model of the HLS AI Deployment Engineering team to ensure alignment with company objectives and customer needs. They hire, mentor, and develop a high-impact team of AI Deployment Engineers focused on production deployments in healthcare and life sciences. This role establishes operating mechanisms, delivery standards, and best practices tailored to regulated environments. They foster a culture of technical excellence, customer empathy, and responsible AI deployment, drive successful enterprise deployments, and oversee end-to-end implementation of generative AI applications in production. The manager guides customers through complex integration efforts spanning R&D, clinical development, regulatory affairs, medical affairs, and IT; develops scalable frameworks for secure, compliant AI adoption under regulations such as HIPAA, GxP, FDA, and EMA; ensures measurable impact through activation, adoption, and workflow transformation; collaborates closely with Sales, Account Directors, Solutions Architects, Product, Security, and Legal teams; serves as a trusted technical advisor to executive and senior technical stakeholders; and provides structured product feedback informed by deployment challenges and industry requirements.

$251,000 – $335,000 per year (USD)

Seattle or San Francisco, United States
Maybe global
Hybrid
Python
MLOps
Docker
Kubernetes
AWS

Senior Engineer, Internal Tools

Bjak
Full-time

The Senior Engineer on the internal tools team is responsible for building and maintaining internal platforms and tools used by various departments such as People, Finance, Ops, Sales, and Engineering. The role involves owning features end-to-end, including requirements gathering, architecture, implementation, testing, deployment, and monitoring. The engineer is expected to write clean, well-tested, production-grade code and build API-first integrations to connect multiple business systems like HRIS, CRM, finance platforms, and developer tools. Responsibilities include designing for reliability, performance, and scalability, eliminating data silos by creating clean data pipelines, and owning services in production with monitoring, alerting, incident response, and post-mortems. The role also involves building AI/LLM-powered features to automate internal workflows, moving prototypes to production, and staying updated on emerging AI technologies. Collaboration includes working directly with business stakeholders to translate pain points into technical solutions, mentoring junior engineers, conducting code and design reviews, influencing technical direction, proposing architectural improvements, and driving best practices across the team.

Undisclosed

New York, United States
Maybe global
Remote
Python
Go
TypeScript
Docker
Kubernetes

Senior Software Engineer, Agent Infrastructure

Cohere
Full-time

Work on building the next generation of agentic AI infrastructure including secure code execution environments, agent state management, model routing and orchestration, identity and authentication, and resource management for long-running agent workflows. Turn emerging ML research ideas into production-ready infrastructure by building core platform capabilities for execution, storage, and state management; prototype and evaluate new technologies; and partner with research teams to align infrastructure with future agent system needs.

Undisclosed

Toronto, Canada
Maybe global
Remote
Python
Kubernetes
Docker
CI/CD
MLOps

Forward Deployed Engineer, Agentic Platform (Public Sector)

Cohere
Full-time

Build and ship features for North, Cohere's AI workspace platform; develop autonomous agents that interact with sensitive enterprise data; experiment rapidly and with high quality to engage customers and deliver solutions that exceed expectations; work across the entire product lifecycle from conceptualization to production; lead end-to-end deployment of North in private cloud and on-premises environments, including planning, configuration, testing, and rollout.

Undisclosed

Ottawa, Canada
Maybe global
Remote
Python
RAG
Docker
Kubernetes
AWS

Platform Product Manager

Distyl
Full-time

Define and drive the product roadmap for Distyl’s internal AI platform, including vertical products, ensuring support for both internal teams and enterprise deployments. Partner closely with platform engineers, forward-deployed engineers, AI researchers, and customers to translate real-world use cases into scalable platform capabilities. Identify opportunities to turn one-off customer solutions into reusable product features. Prioritize development of platform components such as AI model orchestration, agent frameworks, evaluation pipelines, developer tooling, and enterprise data integrations. Gather feedback from internal teams and enterprise clients to improve developer experience and system reliability. Define clear product requirements, success metrics, and technical specifications for platform capabilities. Ensure the platform enables fast, reliable, and secure AI deployments for enterprise customers. Collaborate with leadership to align platform investments with Distyl’s strategic goals.

$180,000 – $250,000 per year (USD)

New York, United States
Maybe global
Hybrid
Python
CI/CD
Docker
Kubernetes
AWS

Chief Technology Officer

Bjak
Full-time

The Chief Technology Officer is responsible for defining the long-term architecture for A1's AI systems, infrastructure, and developer platform, evaluating trade-offs between speed of iteration and long-term system design, and ensuring systems are designed for scalability, reliability, and long-term evolution. They guide key decisions across model integration, data pipelines, distributed systems, and product architecture. The CTO works with engineers to translate product direction into clear technical execution, helps structure engineering workstreams and maintain team alignment on priorities, maintains high engineering standards while encouraging shipping, and establishes engineering culture, development practices, and technical standards across the company. They build and scale a world-class engineering team across key talent hubs including China and the US, identify strong technical leaders, define hiring standards and interview processes, and ensure technical workstreams move forward smoothly across teams and locations. The CTO works closely with product, research, and leadership teams and helps resolve cross-team technical and execution challenges.

Undisclosed

New York, United States
Maybe global
Remote
Python
MLOps
Docker
Kubernetes
AWS

Chief Technology Officer

Bjak
Full-time

The Chief Technology Officer will define the long-term architecture for A1’s AI systems, infrastructure, and developer platform, evaluate trade-offs between speed of iteration and long-term system design, and ensure systems are designed for scalability, reliability, and long-term evolution. They will guide key decisions across model integration, data pipelines, distributed systems, and product architecture. The CTO will work with engineers to translate product direction into clear technical execution, help structure engineering workstreams and keep teams aligned on priorities, maintain high engineering standards while focusing on shipping, and establish engineering culture, development practices, and technical standards. Additionally, they will build and scale a world-class engineering team across key talent hubs including China and the US, identify strong technical leaders, define hiring standards and interview processes, work closely with product, research, and leadership teams, ensure technical workstreams move forward smoothly across teams and locations, and help resolve cross-team technical and execution challenges.

Undisclosed

Beijing, China
Maybe global
Remote
Python
Docker
Kubernetes
CI/CD
AWS

Want to see more AI Engineer jobs?

View all jobs

Access all 4,256 remote & onsite AI jobs.

Join our private AI community to unlock full job access, and connect with founders, hiring managers, and top AI professionals.
(Yes, it’s still free—your best contributions are the price of admission.)

Frequently Asked Questions

Need help with something? Here are our most frequently asked questions.


What are GCP AI jobs?

GCP AI jobs involve working with Google Cloud Platform to develop, deploy, and manage artificial intelligence solutions. These positions typically use Vertex AI for managing resources, models, and training pipelines. Common roles include AI Engineers, Machine Learning Engineers, and Solutions Architects who implement generative AI solutions across data, infrastructure, and AI components.

What roles commonly require GCP skills?

Roles requiring GCP skills include Field Solutions Architects specializing in Generative AI design, Customer Engineers focusing on Cloud AI implementations, Google Cloud AI Engineers working with AI/ML frameworks, Machine Learning Engineers handling cloud expansions, and Product Managers overseeing Google Distributed Cloud AI initiatives. These positions typically involve deploying AI agents and managing cloud-native architecture.

What skills are typically required alongside GCP?

Alongside GCP, professionals typically need experience with containerization technologies, Kubernetes, and cloud-native architecture. A strong understanding of cloud security and IAM access controls is essential. Familiarity with AI/ML frameworks, Vertex AI components (Feature Store, Agent Engine), and Cloud Run for AI agents is valuable. Data processing skills using BigQuery and experience with service agents for logs and storage are also common requirements.

What experience level do GCP AI jobs usually require?

GCP AI positions typically require mid- to senior-level experience, with 3-5 years working in cloud environments. Roles expect practical experience implementing cloud-native architecture, managing containerized applications, and applying AI/ML frameworks within cloud ecosystems. Advanced positions often require hands-on experience with Vertex AI administration, implementing IAM permissions, and designing end-to-end AI solutions on Google Cloud.

What is the salary range for GCP AI jobs?

Salary ranges for GCP AI professionals vary based on location, experience level, and specific role. Entry-level positions start in the upper five-figure range, while mid-level engineers and architects can earn well into six figures. Senior specialists and those with combined expertise in AI architecture, cloud security, and enterprise implementation command premium compensation, especially in technology hubs and at large organizations.

Are GCP AI jobs in demand?

GCP AI jobs show strong demand across multiple industries as organizations accelerate their cloud-based AI initiatives. Companies actively recruit for solutions architects, AI engineers, and machine learning specialists who can implement Vertex AI solutions. The growth in AI chatbot development, generative AI applications, and cloud-native AI services is driving consistent demand for professionals who can design and deploy Google Cloud AI infrastructure.

What is the difference between GCP and AWS in AI roles?

While both platforms support AI workloads, GCP offers Vertex AI with specific administrator and user roles tailored to AI workflows, while AWS uses SageMaker with different permission structures. GCP integrates tightly with Google's AI research through tools like Agent Engine and Feature Store. AWS has broader industry adoption, but GCP often appeals to organizations seeking Google's AI expertise, particularly for generative AI and natural language applications.