Go AI Jobs

Discover the latest remote and onsite Go AI roles across top active AI companies. Updated hourly.

Check out 128 new Go AI role opportunities posted on The Homebase

Full-stack Developer (Full-Time/Intern) - Shanghai

New
Top rated
Flowith
Full-time
Posted

As a Full-Stack Engineer at Flowith, you will independently or collaboratively lead full-stack development of Flowith's core modules, working across front-end and back-end boundaries to deliver highly available, scalable systems. You will integrate advanced AI algorithms and complex models deeply into the product flow to create intelligent interactive experiences, and work closely with product managers, designers, and AI engineers in a creative environment to implement innovative AI concepts. You will automate deployments and manage continuous integration on mainstream cloud infrastructure while monitoring and optimizing system performance and resource usage. Additionally, you will participate in the evolution of the core architecture, conduct in-depth code reviews, and help build up shared technical components and best practices to raise the team's engineering standards.

Undisclosed


Shanghai, China
Maybe global
Hybrid
JavaScript
TypeScript
Python
Go
Java

Full Stack Software Engineer - OpenAI for Finance

New
Top rated
OpenAI
Full-time
Posted

The responsibilities include owning the end-to-end development lifecycle for new enterprise products, collaborating closely with product, design, and external customers to understand problems and implement effective solutions, and working with the research team to improve the next generation of models.

$230,000 – $385,000 per year (USD)

San Francisco, United States
Maybe global
Onsite
TypeScript
Python
JavaScript
Go

Senior Product Designer, Mobile

New
Top rated
Grammarly
Full-time
Posted

Own the observability and lifecycle management of AI features across the organization. Build tools and infrastructure to enable teams to develop, monitor, and optimize LLM-powered features. Design and implement closed-loop evaluation pipelines that automatically validate prompt changes. Develop comprehensive metrics and dashboards to track LLM usage, including cost per feature, token patterns, and latency. Create systems that tie user feedback to specific prompts and LLM calls. Establish best practices and processes for the full lifecycle of prompts, including development, testing, deployment, and monitoring. Collaborate with engineering teams across the organization to ensure they have the tools and visibility needed to build high-quality AI features.

$103,000 – $128,000 per year (USD)

United States, Canada, Mexico, Brazil, Argentina
Maybe global
Remote
Go
Kubernetes
Google Cloud
OpenAI API
MLOps

Software Engineer, Inference Platform

New
Top rated
Fluidstack
Full-time
Posted

Own inference deployments end-to-end, including initial configuration, performance tuning, production SLA maintenance, and incident response. Drive measurable improvements in throughput, time-to-first-token (TTFT), and cost-per-token across diverse model families and customer workload patterns. Build and operate KV cache and scheduling infrastructure to maximize utilization across concurrent requests. Implement and validate disaggregated prefill/decode pipelines and the Kubernetes-based orchestration supporting them at scale. Profile and resolve bottlenecks at the compute, memory, and communication layers, and instrument deployments for end-to-end observability. Partner with customers to translate model architectures, access patterns, and latency requirements into deployment configurations and platform improvements. Contribute to the inference platform architecture and roadmap, focusing on reducing deployment complexity, improving hardware utilization, and expanding support for new model classes and accelerators. Participate in an on-call rotation to maintain production reliability and SLA commitments.

$165,000 – $500,000 per year (USD)

San Francisco, United States
Maybe global
Onsite
Python
Go
PyTorch
JAX
Kubernetes

Field Events Marketing Manager

New
Top rated
Arize AI
Full-time
Posted

Debug and fix issues in the platform and ship pull requests with fixes. Build internal tools and copilots powered by generative AI to enhance the team. Rapidly prototype proof-of-concepts for customer use cases. Collaborate across Engineering, Product, and Solutions teams to unblock customers and advance AI adoption.

Undisclosed


London or Buenos Aires, United Kingdom or Argentina
Maybe global
Remote
Python
Go
JavaScript
TypeScript
OpenAI API

Software Engineer, Agent

New
Top rated
Sierra
Full-time
Posted

Design and deliver production-grade AI agents that are highly performant, reliable, and intuitive, central to driving revenue and used in production environments across various industries such as finance, healthcare, and commerce. Have complete ownership and autonomy over the Agent Development Life Cycle (ADLC) from initial pilot through deployment and continuous iteration, including building, tuning, and evolving AI agents while defining ADLC best practices. Partner with large enterprises and startups to understand business challenges and build AI agents that transform operations at scale. Build and evolve Sierra's core platform by surfacing unmet needs, prototyping new tools and features, and collaborating with research, product, and platform teams to shape the future of AI agent development and Sierra's products.

CA$180,000 – CA$390,000 per year (CAD)

Toronto, Canada
Maybe global
Onsite
Python
TypeScript
Go
MLOps
RAG

Staff Product Designer, Go Enterprise

New
Top rated
Grammarly
Full-time
Posted

Own the observability and lifecycle management of AI features across the organization. Build tools and infrastructure to enable teams to develop, monitor, and optimize LLM-powered features. Design and implement closed-loop evaluation pipelines that automatically validate prompt changes. Develop comprehensive metrics and dashboards to track LLM usage including cost per feature, token patterns, and latency. Create systems that tie user feedback to specific prompts and LLM calls. Establish best practices and processes for the full lifecycle of prompts including development, testing, deployment, and monitoring. Collaborate with engineering teams across the organization to ensure they have the tools and visibility needed to build high-quality AI features.

$103,000 – $128,000 per year (USD)

San Francisco
Maybe global
Hybrid
Go
Kubernetes
Google Cloud
Observability
Metrics

Senior Software Engineer, Managed AI - AI Platform

New
Top rated
Crusoe
Full-time
Posted

Lead the design and implementation of core AI services including resilient fault-tolerant queues, model catalogs, and scheduling mechanisms optimized for cost and performance. Architect and scale infrastructure capable of handling millions of API requests per second. Implement robust monitoring and alerting to ensure system health and 24/7 availability. Collaborate closely with product management, business strategy, and other engineering teams to define the AI platform roadmap. Influence the long-term vision and architectural decisions of the platform. Contribute to open-source AI frameworks and participate in the AI community. Prototype and iterate on emerging technologies and new features.
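The description above centers on resilient, fault-tolerant queues. As a toy illustration only (not Crusoe's actual implementation), the sketch below shows the core retry-with-dead-letter pattern such a queue service typically builds on; the `Task`, `process`, and `drain` names are hypothetical:

```go
package main

import (
	"errors"
	"fmt"
)

// Task is a unit of work with a bounded retry budget.
type Task struct {
	ID      string
	Retries int
}

// process simulates a worker that always fails the task with ID "bad".
func process(t Task) error {
	if t.ID == "bad" {
		return errors.New("transient failure")
	}
	return nil
}

// drain runs every queued task, re-enqueueing failures until each task's
// retry budget (maxRetries) is exhausted; it returns the IDs that never
// succeeded, standing in for a dead-letter queue.
func drain(tasks []Task, maxRetries int) (dead []string) {
	queue := append([]Task(nil), tasks...)
	for len(queue) > 0 {
		t := queue[0]
		queue = queue[1:]
		if err := process(t); err != nil {
			if t.Retries < maxRetries {
				t.Retries++
				queue = append(queue, t) // retry later
			} else {
				dead = append(dead, t.ID) // give up, dead-letter it
			}
		}
	}
	return dead
}

func main() {
	dead := drain([]Task{{ID: "a"}, {ID: "bad"}, {ID: "b"}}, 2)
	fmt.Println(dead) // only the "bad" task exhausts its retries
}
```

A production version would add persistence, visibility timeouts, and backoff between retries, but the bounded-retry/dead-letter loop is the same shape.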

$172,425 – $209,000 per year (USD)

San Francisco or Sunnyvale, United States
Maybe global
Onsite
Go
Python
Kubernetes
CI/CD
AWS

Engineering Manager, Managed AI

New
Top rated
Crusoe
Full-time
Posted

As an Engineering Manager on the Managed AI team at Crusoe, you will lead and scale a team of engineers building next-generation platform infrastructure for Large Language Models (LLMs). Responsibilities include guiding the team through the design and implementation of highly scalable, fault-tolerant infrastructure; leading a team of software engineers; defining and executing the AI roadmap; cultivating a high-performance engineering culture; overseeing architecture and development of core AI services such as fault-tolerant task queues and model management systems; ensuring delivery of scalable systems capable of handling millions of API requests per second; delivering an AI platform capable of handling varied AI loads from training to agentic execution infrastructure; working cross-functionally with product, infrastructure, and GTM stakeholders; representing engineering in strategic discussions; promoting knowledge sharing, mentorship, and evolving engineering processes. This role requires in-office presence in San Francisco or Sunnyvale, CA.

$237,600 – $288,000 per year (USD)

San Francisco or Sunnyvale, United States
Maybe global
Onsite
Python
Go
MLOps
Kubernetes
Docker

Senior Staff Software Engineer, Model LifeCycle

New
Top rated
Crusoe
Full-time
Posted

The Senior Staff Engineer on the Model LifeCycle team at Crusoe is responsible for building a comprehensive managed platform for the entire application development lifecycle, with a focus on machine learning models including Large Language Models (LLMs). Responsibilities include managing fine-tuning systems for large foundation models, covering techniques such as SFT, PEFT, LoRA, and adapters, with multi-node orchestration, checkpointing, failure recovery, and cost-efficient scaling. They implement and maintain end-to-end training pipelines for LLMs, distillation and reinforcement learning pipelines including preference optimization, policy optimization, and reward modeling, and manage agent execution infrastructure. They also handle dataset, model, and experiment management, including versioning, lineage, evaluation, and reproducible fine-tuning at scale. Additionally, they work closely with product, business, and platform teams to shape core abstractions and APIs, influence architectural decisions around training runtimes, scheduling, storage, and model lifecycle management, contribute to and engage with the open-source LLM ecosystem, and take ownership in designing and building core systems from first principles.

$237,600 – $288,000 per year (USD)

San Francisco, United States
Maybe global
Onsite
Python
Go
PyTorch
TensorFlow
MLOps

Want to see more AI Engineer jobs?

View all jobs

Access all 4,256 remote & onsite AI jobs.

Join our private AI community to unlock full job access, and connect with founders, hiring managers, and top AI professionals.
(Yes, it’s still free; your best contributions are the price of admission.)

Frequently Asked Questions

Need help with something? Here are our most frequently asked questions.


What are Go AI jobs?

Go AI jobs involve developing the infrastructure and systems that power AI applications. These positions focus on building high-performance backends, data processing pipelines, real-time AI services, and scalable frameworks that handle LLM requests. Golang is particularly valued for its concurrency capabilities when creating AI-powered chatbots, recommendation engines, computer vision systems, and edge AI applications.

What roles commonly require Go skills?

Backend developers for AI applications frequently need Go skills, as do engineers working on production AI system deployment and cloud infrastructure. The language is especially valuable in roles involving real-time processing in eCommerce, banking, healthcare, and customer service platforms. Engineers building voice transcription systems, IoT applications, robotics, and networked services also commonly require Go expertise.

What skills are typically required alongside Go?

Alongside Go, employers typically seek proficiency in high-performance computing, multithreading, concurrent programming, and memory-efficient data handling. Experience with tools like GoCV for computer vision, Fuego for fuzzy logic, and Gobot for IoT is valuable. Knowledge of vector databases, Google Cloud Profiler, and cross-platform deployment is often required, as is the ability to integrate with Python codebases for AI model training.

What experience level do Go AI jobs usually require?

Requirements vary by company and position, but these roles typically call for strong knowledge of concurrent programming, memory management, and integration with AI services. Since they often involve production systems and scalable infrastructure, mid- to senior-level experience with both Go and AI concepts is commonly expected.

What is the salary range for Go AI jobs?

Compensation varies based on factors like location, company size, experience level, and specific technical requirements. Go developers working on AI applications often command competitive salaries due to the specialized intersection of high-performance programming and artificial intelligence expertise.

Are Go AI jobs in demand?

Yes, demand for Golang in AI application development is increasing, driven by performance requirements in computer vision, real-time systems, and production AI deployments. Startups, enterprises, and cloud providers are adopting Go for building scalable, secure AI applications. The language is particularly sought after for customer service platforms handling millions of LLM requests and for real-time transcription services.

What is the difference between Go and Java in AI roles?

Go typically excels at building high-performance, concurrent systems with efficient memory usage and fast startup times, making it well suited to AI service deployment and orchestration. Java offers robust enterprise features and extensive libraries but may have higher memory requirements. In AI contexts, Go is often preferred for microservices, real-time processing, and lightweight applications where performance is critical.
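The FAQ repeatedly cites Go's concurrency as the reason it is valued for serving LLM traffic. A minimal sketch of that pattern, fanning out prompts concurrently with a bounded number of in-flight requests, is shown below; the `callLLM` helper is a hypothetical stand-in for a real model API call:

```go
package main

import (
	"fmt"
	"sync"
)

// callLLM stands in for a real model API call; here it just echoes the prompt.
func callLLM(prompt string) string {
	return "response to: " + prompt
}

// fanOut sends each prompt to the model concurrently, capped at `limit`
// in-flight requests, and returns responses in the original order.
func fanOut(prompts []string, limit int) []string {
	results := make([]string, len(prompts))
	sem := make(chan struct{}, limit) // buffered channel as a semaphore
	var wg sync.WaitGroup
	for i, p := range prompts {
		wg.Add(1)
		go func(i int, p string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it when done
			results[i] = callLLM(p)
		}(i, p)
	}
	wg.Wait()
	return results
}

func main() {
	out := fanOut([]string{"summarize", "translate", "classify"}, 2)
	fmt.Println(out)
}
```

Writing each goroutine's result to its own slice index avoids locking while preserving request order, which is the kind of lightweight concurrency these roles lean on.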