
The AI job market moves fast. We keep up so you don't have to.

Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.


New AI Opportunities


Computational Protein Design

Talent Labs
United States
Full-time
Remote
We are seeking a Computational Protein Design Scientist to join our team working at the interface of generative AI and synthetic biology. You will play a key role among a team of scientists designing and engineering proteins for specific functions. This is an opportunity to help shape and grow an organization that advances artificial intelligence and applies it to longstanding scientific challenges. Using your blend of computational expertise and in-depth biochemical understanding of proteins, you will generate insights to improve protein functionality and operate at the interface between our machine learning and experimental platform units, working closely to seamlessly integrate AI generations and lab validation data.

Who you are

- You are a computational protein designer. You have a proven track record of successfully leveraging novel computational tools and knowledge of biochemistry or structural biology to design proteins to functional requirements and applications in synthetic biology.
- You are a data scientist. You are a strong data scientist with experience owning data-driven projects to generate biological insights.
- You are a successful scientist. You have a PhD (or equivalent industry experience) in computational biology, bioinformatics, computer science, biochemistry, structural biology, physics, biophysics, bio/chem engineering, or a related field. Your research experience was in protein biochemistry using computational expertise.
- You collaborate with experimentalists. You have experience collaborating with experimental (i.e. wet lab) teams to achieve protein design objectives.
- You are an owner. You have a proven track record of delivering successful commercial and/or academic research projects, demonstrated through publications, patents, and/or commercially impactful outcomes, as well as other contributions to the scientific community.
- You are a connector. You love to connect people and enable them to perform at their highest levels. You have excellent communication and presentation skills, with the ability to convey complex scientific concepts to both technical and non-technical audiences.
- You are a mission-driven innovator. You are passionate about making a positive impact on the world, whether it's for patients, partners, or beyond. You are motivated by the end goal and are flexible in adapting to different approaches and methodologies.
- You thrive in a dynamic and ambiguous environment. You excel in a fast-paced setting where goals must be achieved efficiently and urgently. You have a keen eye for creating, then optimizing, processes to improve speed and repeatability. You are an advocate for lab automation, both through hardware and software.

What sets you apart (preferred but not required)

- You have experience with generative AI. You have experience leveraging generative AI (or other machine learning models) in synthetic biology applications.
- You have experience engineering gene editing tools, such as nucleases and integrases.
- You have experience with homology-based and structural bioinformatics, and are able to answer scientific questions using very large databases.
- You have helped scale a young biotech before. You have worked in startups and helped the company grow.

Your Responsibilities

Leverage our proprietary generative AI models to design proteins for experimental validation:
- Analyze protein design problems based on functional requirements, biochemistry, structural biology, and sequence homology
- Generate designs using our proprietary generative AI models and optimize designs for experimental validation
- Coordinate with our lab-based protein engineers to plan and optimize the design process and validation strategy

Leverage our proprietary data to improve our models:
- Analyze and leverage our experimental results to improve the next round of designs and increase our success rate over validation rounds
- Collaborate with machine learning scientists to fine-tune and prompt our models

Collaboration and communication:
- Be an effective interface between machine learning model development and experimental validation
- Capture bioengineering learnings and feed them back to our machine learning unit, and vice versa
- Foster a collaborative and innovative environment, proactively finding opportunities to innovate and create clarity and alignment between different units

Contribute to our computational tools:
- Help improve the way we use, serve, and integrate our AI models by feeding back to the software engineers and foundational machine learning unit
- Help improve our data management systems and workflows

Scientific excellence and self-development:
- Work to the highest scientific standards (publication-grade work)
- Stay on top of relevant developments in synthetic biology
- Continue building your understanding of generative AI as well as expanded areas of protein and cell biology
- Participate in knowledge sharing, e.g. organize and present at our internal reading group
- Attend and present at conferences when relevant

Apply

We offer strongly competitive compensation and benefits packages, including:
- Private health insurance
- Pension/401(k) contributions
- Generous leave policies (including gender-neutral parental leave)
- Hybrid working
- Travel opportunities and more

We also offer a stimulating work environment and the opportunity to shape the future of synthetic biology through the application of breakthrough generative models. We welcome applicants from all backgrounds and are committed to building a team that represents a variety of backgrounds, perspectives, and skills.

Lead Software Engineer

Eloquent AI
United States
Full-time
Remote
Meet Eloquent AI

At Eloquent AI, we're building the next generation of AI Operators: multimodal, autonomous systems that execute complex workflows across fragmented tools with human-level precision. Our technology goes far beyond chat: it sees, reads, clicks, types, and makes decisions, transforming how work gets done in regulated, high-stakes environments.

We're already powering some of the world's leading financial institutions and insurers, fundamentally changing how millions of people manage their finances every day. From automating compliance reviews to handling customer operations, our Operators are quietly replacing repetitive, manual tasks with intelligent, end-to-end execution.

Headquartered in San Francisco with a global footprint, Eloquent AI is a fast-growing company backed by top-tier investors. Join us to work alongside world-class talent in AI, engineering, and product as we redefine the future of financial services.

Your Role

As a Lead Engineer at Eloquent AI, you will lead the development of AI-powered full-stack applications while overseeing and mentoring other engineers. You'll remain hands-on across the stack, but also take ownership of technical direction, code quality, and delivery standards. You'll work closely with engineers, AI researchers, and product teams to ensure scalable, reliable systems that power real-time AI-driven workflows. This role requires strong engineering fundamentals, leadership capability, and the ability to operate effectively in a fast-paced, AI-first environment.

You will:
- Design and build full-stack applications that power AI-driven workflows for enterprise users.
- Oversee and review the work of other engineers, ensuring high-quality, production-ready code.
- Provide technical guidance, architectural direction, and hands-on support where needed.
- Develop high-performance front-end interfaces for AI agent control, monitoring, and visualisation.
- Build scalable backend services that support real-time AI interactions, knowledge retrieval, and automation.
- Work closely with AI researchers and ML engineers to integrate LLMs, RAG, and automation into production-ready systems.
- Establish engineering best practices across testing, deployment, and performance optimisation.
- Continuously iterate and refine AI-driven products, balancing speed with robustness.

Requirements
- 8+ years of hands-on experience building full-stack production applications.
- Prior experience leading or mentoring engineers.
- Proficiency in React, TypeScript, and Node.js.
- Backend experience using Python.
- Strong knowledge of cloud infrastructure (AWS, GCP, or Azure) and scalable architectures.
- Understanding of AI-powered applications (LLMs, chat interfaces, agentic workflows).
- Ability to work in a fast-paced, high-autonomy environment.
- Strong collaboration skills across engineering, product, and AI teams.

Bonus Points If…
- You have experience building AI-powered applications with LLM integrations.
- You've worked in high-performance startups or enterprise AI environments.
- You have a sharp eye for UI/UX design and have built intuitive, AI-driven interfaces.
- You have experience with GraphQL, WebSockets, or real-time data streaming.
- You've contributed to open-source projects or have built developer tools for AI.

Scientist I, Platform Development and Antibody Screening

Xaira
United Kingdom
Full-time
Remote
About Xaira Therapeutics

Xaira is an innovative biotech startup focused on leveraging AI to transform drug discovery and development. The company is leading the development of generative AI models to design protein and antibody therapeutics, enabling the creation of medicines against historically hard-to-drug molecular targets. It is also developing foundation models for biology and disease to enable better target elucidation and patient stratification. Collectively, these technologies aim to continually enable the identification of novel therapies and to improve success in drug development. Xaira is headquartered in the San Francisco Bay Area, Seattle, and London.

Position Overview

Xaira is seeking enthusiastic and motivated candidates to join our team as Research Engineers. We welcome candidates across the spectrum of experience. Teams thrive when they are diverse (across all axes), and we encourage all eligible applicants to apply. The role is based in our London office, located near Old Street. Our team is highly collaborative, operating on the belief that hard problems are best solved by multiple people working towards a clear goal, bringing and sharing their expertise with the team. We operate a hybrid working culture based on trust. Members of the team are typically in the office 3 days a week.

Key Responsibilities
- Industry experience as a research engineer in an AI-related company.
- Excited to work, learn, and teach within a collaborative team working on challenging problems.

Desirable

Below is a list of qualities/experiences that align with the kinds of things that we are looking for. Please do not read this as an extension of the "requirements" section! We recognise that experiences, opportunities, and life-paths vary.
- Masters (or equivalent)/PhD in an AI-related field.
- Public codebases or contributions to public GitHub repositories.
- Experience building and training neural networks.
- Experience in distributed training and inference.
- Experience profiling and optimising large-scale AI models.
- Knowledge or experience in BioAI.

If you are a motivated individual with a passion for applying AI to advance drug discovery and improve human health, we encourage you to apply and join us in our mission to make a positive difference in the world. Xaira Therapeutics is an equal-opportunity employer. We believe that our strength is in our differences. Our goal to build a diverse and inclusive team began on day one, and it will never end.

TO ALL RECRUITMENT AGENCIES: Xaira Therapeutics does not accept agency resumes. Please do not forward resumes to our jobs alias or employees. Xaira Therapeutics is not responsible for any fees related to unsolicited resumes.

Design Director

Tenstorrent
$100,000 – $500,000
United States
Full-time
Remote
Tenstorrent is leading the industry on cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.

Tenstorrent is accelerating the future of AI and high-performance compute by building industry-leading CPU and AI architectures. As an Automotive and Robotics SoC Architect, you will define scalable, top-down system architectures that unify our CPU and AI technologies for next-generation automotive applications. This senior technical role shapes the architectural direction of our automotive and robotics portfolio, ensuring our products meet the industry's highest expectations for performance, safety, reliability, and security. This position is central to how Tenstorrent delivers world-class automotive solutions and requires strong technical leadership, systems thinking, and cross-functional collaboration. This role is remote, based out of North America.

We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.

Who You Are
- A systems thinker who can architect complex SoCs from concept to execution.
- A strong communicator who can articulate technical direction across engineering teams and external partners.
- Someone with deep knowledge of safety-critical systems and the unique needs of automotive environments.
- An innovator who can identify future use cases and propose next-generation architectural solutions.
- A leader who thrives in a highly technical, cross-functional, fast-moving environment.

What We Need
- Bachelor's, Master's, or Ph.D. in Electrical Engineering, Computer Engineering, or a related field.
- Extensive experience designing complex SoCs, ideally in automotive applications.
- Proficiency in hardware description languages such as Verilog or VHDL.
- Experience with hardware/software co-design and co-verification.
- Knowledge of automotive safety standards (e.g., ISO 26262) and security principles.
- Comfort with up to 25% international travel.
- Experience with cameras, sensors, and similar peripherals is a plus.

What You Will Learn
- How cutting-edge CPU and AI architectures are adapted for automotive-grade environments.
- Best-in-class methodologies for safety-critical SoC design, verification, and system integration.
- How to translate emerging automotive use cases into scalable, future-proof SoC architectures.
- Approaches to hardware-level security, robustness, and cyber-resilience in automotive compute systems.
- Cross-functional collaboration strategies that drive innovation across architecture, software, DV, and product teams.

Compensation for all engineers at Tenstorrent ranges from $100k to $500k, including base and variable compensation targets. Experience, skills, education, background, and location all impact the actual offer made. Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.

This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E:1, and E:2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.

Backend Software Engineer - Engine Team (Voice Agent)

Deepgram
$150,000 – $220,000
United States
Full-time
Remote
Company Overview

Deepgram is the leading platform underpinning the emerging trillion-dollar Voice AI economy, providing real-time APIs for speech-to-text (STT) and text-to-speech (TTS), and for building production-grade voice agents at scale. More than 200,000 developers and 1,300+ organizations build voice offerings that are 'Powered by Deepgram', including Twilio, Cloudflare, Sierra, Decagon, Vapi, Daily, Cresta, Granola, and Jack in the Box. Deepgram's voice-native foundation models are accessed through cloud APIs or as self-hosted and on-premises software, with unmatched accuracy, low latency, and cost efficiency. Backed by a recent Series C led by leading global investors and strategic partners, Deepgram has processed over 50,000 years of audio and transcribed more than 1 trillion words. There is no organization in the world that understands voice better than Deepgram.

Company Operating Rhythm

At Deepgram, we expect an AI-first mindset: AI use and comfort aren't optional, they're core to how we operate, innovate, and measure performance. Every team member who works at Deepgram is expected to actively use and experiment with advanced AI tools, and even build their own into their everyday work. We measure how effectively AI is applied to deliver results, and consistent, creative use of the latest AI capabilities is key to success here. Candidates should be comfortable adopting new models and modes quickly, integrating AI into their workflows, and continuously pushing the boundaries of what these technologies can do. Additionally, we move at the pace of AI. Change is rapid, and you can expect your day-to-day work to evolve just as quickly. This may not be the right role if you're not excited to experiment, adapt, think on your feet, and learn constantly, or if you're seeking something highly prescriptive with a traditional 9-to-5.

Opportunity

Deepgram is looking for a backend software engineer to lead the design and implementation of Deepgram's Voice Agent product. You will design and implement secure, robust, and scalable services for speech processing; build integrations supporting telephony providers, RAG systems, and diverse deployment scenarios; engineer for testability and observability within a complex chain of AI models; and more. Your skill at building highly reusable code that overcomes technical challenges is paired with an intuition for delightful user experiences. You will be a critical voice in Deepgram's Product and Engineering teams, driving high-impact products from start to finish.

What You'll Do
- Improve Deepgram's core inference services, including networking, speech processing, model orchestration, and observability
- Develop integrations with cutting-edge in-house, third-party, and open-source AI models for perception and managing conversational dynamics
- Debug complex system issues involving networking, scheduling, and highly concurrent workloads
- Rapidly customize backend services to support our customer needs
- Partner with Product to design and implement new services, features, and/or products end to end

You'll Love This Role If You
- Thrive in a fast-paced, impact-driven environment where learning new skills on the fly is not only encouraged but a regular necessity
- Enjoy balancing decisions about product and feature maturity to decide when to make minimally invasive changes versus when to incorporate detailed design work

It's Important To Us That You Have
- 3+ years of experience in an industry role
- Programming experience in Rust (or C, C++), with competence in Python
- Excellent communication and organizational skills, both written and verbal
- A high level of experience and understanding of version control, preferably git
- Comprehensive experience with UNIX-style systems

It Would Be Great If You Had
- Experience with low-latency, multi-model orchestration for AI-enabled applications
- Experience with audio processing

Benefits & Perks*

Holistic health
- Medical, dental, vision benefits
- Annual wellness stipend
- Mental health support
- Life, STD, LTD income insurance plans

Work/life blend
- Unlimited PTO
- Generous paid parental leave
- Flexible schedule
- 12 paid US company holidays
- Quarterly personal productivity stipend
- One-time stipend for home office upgrades
- 401(k) plan with company match
- Tax savings programs

Continuous learning
- Learning/education stipend
- Participation in talks and conferences
- Employee Resource Groups
- AI enablement workshops/sessions

*For candidates outside of the US, we use an Employer of Record model in many countries, which means benefits are administered locally and governed by country-specific regulations. Because of this, benefits will differ by region; in some cases international employees receive benefits US employees do not, and vice versa. As we scale, we will continue to evaluate where we can create more alignment, but a 1:1 global benefits structure is not always legally or operationally possible.

Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC, and NVIDIA, Deepgram has raised over $215M in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!

Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together, and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate. We are happy to provide accommodations for applicants who need them.

Staff DevOps Engineer

webAI
United States
Full-time
Remote
About Us

webAI is pioneering the future of artificial intelligence by establishing the first distributed AI infrastructure dedicated to personalized AI. We recognize the evolving demands of a data-driven society for scalability and flexibility, and we firmly believe that the future of AI lies in distributed processing at the edge, bringing computation closer to the source of data generation. Our mission is to build a future where a company's valuable data and intellectual property remain entirely private, enabling the deployment of large-scale AI models directly on standard consumer hardware without compromising the information embedded within those models. We are developing an end-to-end platform that is secure, scalable, and fully under the control of our users, empowering enterprises with AI that understands their unique business. We are a team driven by truth, ownership, tenacity, and humility, and we seek individuals who resonate with these core values and are passionate about shaping the next generation of AI.

About the Role

We are seeking a Staff DevOps Engineer to architect, build, and scale secure infrastructure for deploying AI workloads across cloud and edge environments. This is a high-impact, staff-level individual contributor role where you will drive infrastructure strategy, lead technical initiatives, and serve as the subject matter expert on cloud architecture, security best practices, and platform reliability. You will design scalable, automated infrastructure solutions that enable our AI platform to operate efficiently across diverse deployment scenarios, from public cloud to on-premises and edge computing environments. This role requires deep technical expertise, architectural thinking, and the ability to translate complex requirements into production-ready infrastructure automation.

Responsibilities:
- Design and architect secure, scalable cloud and edge infrastructure for deploying AI workloads across multi-cloud (AWS, Azure, GCP) and hybrid environments
- Build and maintain production-grade Infrastructure as Code (IaC) using Terraform, Ansible, or Pulumi, managing 100+ resources with GitOps workflows and automated validation
- Design and operate production Kubernetes clusters optimized for AI/ML workloads with GPU support, implementing container security, multi-tenancy, and resource optimization
- Implement secure CI/CD pipelines with integrated security controls (SAST, DAST, vulnerability scanning, secrets management) and automated deployment workflows for containerized AI models
- Lead MLOps infrastructure initiatives, including model deployment pipelines, versioning, feature stores, experiment tracking, and monitoring for model performance and drift
- Design comprehensive observability and monitoring using Prometheus, Grafana, ELK, or Datadog, with distributed tracing, APM, and real-time alerting aligned to SLIs/SLOs
- Implement security best practices, including least-privilege access, encryption at rest/in transit, network segmentation, and automated compliance validation
- Lead incident response and reliability initiatives, participate in on-call rotation, conduct post-mortems, and drive continuous improvement for system reliability
- Architect disaster recovery and business continuity strategies with automated backup, failover, and recovery processes
- Develop reusable infrastructure modules and templates to accelerate environment provisioning and standardize deployment patterns across teams
- Mentor mid-level and senior engineers on cloud architecture, DevOps best practices, and platform reliability through design reviews and technical guidance
- Drive technical documentation and knowledge sharing, including runbooks, architecture decision records (ADRs), and infrastructure standards

Qualifications:
- 7+ years of hands-on experience in DevOps, Site Reliability Engineering, or Infrastructure Engineering with a proven track record of architecting production systems
- Expert-level proficiency with Docker, Kubernetes (CKA/CKAD preferred), and cloud-native technologies in production environments
- 5+ years implementing Infrastructure as Code with Terraform, Ansible, or Pulumi, managing large-scale (50+) cloud resources
- Deep experience with cloud platforms (AWS, Azure, or GCP), including compute, networking, storage, and managed services
- Proven experience building and scaling CI/CD pipelines with integrated security controls (GitHub Actions, GitLab CI, Jenkins, ArgoCD)
- Strong programming skills in Python (preferred for automation), Bash, or Go for infrastructure tooling and automation
- Production experience with observability and monitoring tools: Prometheus, Grafana, ELK, CloudWatch, Datadog, or similar
- Experience with MLOps workflows: model deployment automation, versioning, and lifecycle management
- Demonstrated experience with GitOps methodologies and declarative infrastructure management
- Strong understanding of security best practices: encryption, secrets management, identity and access management (IAM), network security
- Excellent written and verbal communication skills for technical documentation and cross-functional collaboration

Preferred Skills:
- Experience architecting multi-cloud or hybrid cloud environments with portability and interoperability considerations
- Hands-on experience deploying large language models (LLMs) or transformer models at scale with model serving infrastructure
- Expertise in Zero Trust architecture and modern security patterns for cloud-native applications
- Experience with service mesh technologies (Istio, Linkerd) for microservices communication and observability
- Strong understanding of AI/ML infrastructure: feature stores, model registries, A/B testing infrastructure, and model monitoring
- Experience with edge computing deployments and distributed system architectures
- Cost optimization expertise: FinOps practices, resource rightsizing, and cloud cost management
- Experience mentoring or leading technical initiatives across engineering teams
- Certifications: CKA, CKAD, Terraform Associate, AWS Solutions Architect, Azure Administrator, or GCP Professional Cloud Architect

Core Values:

We at webAI are committed to living out the core values we have put in place as the foundation on which we operate as a team. We seek individuals who exemplify the following:
- Truth: emphasizing transparency and honesty in every interaction and decision.
- Ownership: taking full responsibility for one's actions and decisions, demonstrating commitment to the success of our clients.
- Tenacity: persisting in the face of challenges and setbacks, continually striving for excellence and improvement.
- Humility: maintaining a respectful and learning-oriented mindset, acknowledging the strengths and contributions of others.

Benefits:
- Competitive salary and performance-based incentives
- Comprehensive health, dental, and vision benefits package
- 401(k) match (US-based only)
- $200/month health and wellness stipend
- $400/year continuing education credit
- $500/year Function Health subscription (US-based only)
- Free parking for in-office employees
- Unlimited approved PTO
- Parental leave for eligible employees
- Supplemental life insurance

webAI is an Equal Opportunity Employer and does not discriminate against any employee or applicant on the basis of age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations, and ordinances. We adhere to these principles in all aspects of employment, including recruitment, hiring, training, compensation, promotion, benefits, social and recreational programs, and discipline. In addition, it is the policy of webAI to provide reasonable accommodation to qualified employees who have protected disabilities to the extent required by applicable laws, regulations, and ordinances where a particular employee works.

Machine Learning Engineer, TTS Systems

Bland
$160,000 – $250,000
United States
Full-time
Remote
ML Engineer, TTS Systems
Location: San Francisco, CA or Remote (US)

About Bland

At Bland.com, we empower enterprises to build and scale AI phone agents. As a fast-growing team in San Francisco, our mission is to advance customer interactions with businesses through natural, reliable, and highly human-like voice technologies. We are backed by $65M in funding from leading Silicon Valley investors, including Emergence Capital, Scale Venture Partners, Y Combinator, and founders of Twilio, Affirm, and ElevenLabs.

The Role: ML Engineer, TTS Systems

As an ML Engineer focused on Text-to-Speech (TTS), you will own the deployment, optimization, and maintenance of our production TTS systems. Your work will transform advanced research models into highly performant, scalable, and robust real-world solutions serving millions of real-time voice interactions daily. You will collaborate with research and engineering teams to implement inference-optimized TTS models, streamline deployment processes, and monitor live systems to ensure best-in-class performance for enterprise clients.

What You Will Do
- Deploy and optimize large-scale TTS models into production environments for reliable, low-latency inference.
- Implement and refine post-training techniques (like DPO, GRPO, and RLHF) and other modern inference techniques to maximize throughput and audio quality.
- Collaborate with cross-functional teams to ensure seamless rollout, A/B testing, and iterative improvement of production models.
- Maintain high availability and scalable infrastructure for multi-speaker, expressive, and controllable TTS use cases.
- Design and document best practices for efficient TTS inference and system reliability.

What Makes You a Great Fit
- Hands-on experience deploying large-scale neural TTS models in cloud or on-prem production settings.
- Deep expertise in TTS inference optimization (e.g., quantization, kernel optimization, batching strategies, GRPO).
- Strong understanding of real-time, low-latency audio processing pipelines and their challenges.
- Working knowledge of distributed systems, GPU acceleration, and scalable production infrastructure.
- Ability to diagnose and resolve quality, performance, and reliability issues in deployed voice systems.
- Comfortable working in fast-paced startup environments and taking full ownership from deployment through system maintenance.

Bonus Points
- Contributions to open-source TTS systems or production audio frameworks.
- Prior work in telephony, streaming, or live enterprise communication environments.

Benefits and Compensation
- Healthcare, dental, vision
- Meaningful equity in a fast-growing company
- Every tool you need to succeed
- Beautiful office in Jackson Square, SF with rooftop views
- Competitive salary: $160,000 to $250,000

If you're passionate about scaling production TTS systems, driving inference excellence, and delivering seamless, human-like voice at scale, we want to hear from you.

Machine Learning Researcher, Audio

Bland
$160,000 – $250,000
United States
Full-time
Machine Learning Researcher, Audio

Location: San Francisco, CA or Remote (US)

About Bland
At Bland.com, our mission is to empower enterprises to build AI phone agents at scale. Based in San Francisco, we are a fast-growing team reimagining how customers interact with businesses through voice. We have raised $65 million from leading Silicon Valley investors, including Emergence Capital, Scale Venture Partners, Y Combinator, and founders of Twilio, Affirm, and ElevenLabs.

Voice is quickly becoming the primary interface between businesses and their customers. We are building the models and infrastructure that make those interactions feel natural, reliable, and genuinely human.

The Role: Machine Learning Researcher, Audio
As a Machine Learning Researcher at Bland, you'll work on foundational research and development across the core components of our voice stack: speech-to-text, large language models, neural audio codecs, and text-to-speech. Your work will define how our agents understand, reason, and speak in real time at enterprise scale.

This is not a narrow research role. You will take ideas from theory to large-scale training to production inference systems serving millions of calls per day. You will design new modeling approaches, validate them with rigorous experimentation, and collaborate with engineering teams to deploy them into real customer environments.

What You Will Do

Build and Scale Next-Generation TTS Systems
- Design and train large-scale text-to-speech models capable of expressive, controllable, human-sounding output.
- Develop neural audio codec-based TTS architectures for efficient, high-fidelity generation.
- Improve prosody modeling, question inflection, emotional expression, and multi-speaker robustness.
- Optimize for real-time, low-latency inference in production.

Advance Speech-to-Text Modeling
- Build and fine-tune large-scale ASR systems robust to accents, noise, telephony artifacts, and code-switching.
- Leverage self-supervised pretraining and large-scale weak supervision.
- Improve transcription accuracy for real-world enterprise scenarios, including structured extraction and conversational nuance.

Pioneer Neural Audio Codecs
- Research and implement neural audio codecs that achieve extreme compression with minimal perceptual loss.
- Explore discrete and continuous latent representations for scalable speech modeling.
- Design codec architectures that enable downstream generative modeling and controllable synthesis.

Develop Scalable Training Pipelines
- Curate and process massive audio datasets across languages, speakers, and environments.
- Design staged training curricula and data filtering strategies.
- Scale training across distributed GPU clusters with a focus on cost, throughput, and reliability.

Run Rigorous Experiments
- Design ablation studies that isolate the impact of architectural changes.
- Measure improvements using both objective metrics and perceptual evaluations.
- Validate ideas quickly through focused experiments that confirm or eliminate hypotheses.

What Makes You a Great Fit

Deep Research Foundations
- Experience with self-supervised learning, multimodal modeling, or generative modeling.
- Ability to derive new formulations and implement them efficiently.

Expertise in Voice Modeling
- Hands-on experience building or scaling TTS, STT, or neural audio codec systems.
- Familiarity with large-scale speech datasets and real-world audio variability.
- Strong intuition for audio quality, prosody, and conversational dynamics.

Systems and Hardware Awareness
- Experience training and serving large models on modern accelerators.
- Knowledge of inference optimization techniques, including quantization, kernel optimization, and memory efficiency.
- Understanding of real-time constraints in telephony or streaming environments.

Experimental Rigor
- Track record of designing controlled experiments and meaningful ablations.
- Comfortable working with both offline benchmarks and live production metrics.
- Ability to move quickly from hypothesis to validation.

Builder Mentality
- Comfortable in fast-moving startup environments.
- Strong ownership mindset from research through deployment.
- Excited by ambiguous, unsolved problems.

How You Show Up
- You treat unsolved problems as opportunities to invent new paradigms.
- You identify the single experiment that can validate an idea in days, not months.
- You measure everything and let data drive decisions.
- You are obsessed with making voice agents sound truly human.
- You use AI tools aggressively to amplify your own impact and accelerate research cycles.

Bonus Points
- Experience with large-scale distributed training.
- Research publications or open-source contributions in speech or language AI.
- Background in real-time speech systems or telephony.
- PhD in ML, AI, or a related field, or equivalent research impact.

Benefits and Compensation
- Healthcare, dental, vision, all the good stuff
- Meaningful equity in a fast-growing company
- Every tool you need to succeed
- Beautiful office in Jackson Square, SF with rooftop views
- Competitive salary: $160,000 to $250,000

If you are energized by building and scaling TTS models, pioneering neural audio codecs, and pushing the boundaries of speech-to-text systems, we would love to hear from you.

Full Stack Product Engineer

Ideogram
Canada
Full-time
About Ideogram
Ideogram’s mission is to make world-class design accessible to everyone, multiplying human creativity. We build proprietary generative media models and AI-native creative workflows, tackling unsolved challenges in graphic design. Our team includes builders with a track record of technology breakthroughs, including early research in diffusion models, Google’s Imagen, and Imagen Video. We care about design, taste, and craft as much as research and engineering, shipping experiences that creatives actually love.

We’ve raised nearly $100M, led by Andreessen Horowitz and Index Ventures. Headquartered in Toronto with a growing team in NYC, we're scaling fast, aiming to triple over the next year. We're a flat team with a culture of high ownership, collaboration, and mentorship. Explore the Ideogram 3.0, Canvas, and Character blog posts, and try Ideogram at ideogram.ai.

About the Role
As a Full-Stack Product Engineer at Ideogram, you'll build the products that put generative AI directly into the hands of creators. You'll work across the entire stack, from crafting delightful user experiences to optimizing backend systems that serve millions, with a relentless focus on shipping features that users love. We're looking for someone who combines product instinct with strong ownership, user empathy, and the ability to move fast in an evolving AI landscape.

What We're Looking For

Product & AI Mindset
- Deep curiosity about generative AI and genuine excitement about its potential to empower creators
- Strong product intuition; you think about user problems first, then architect solutions
- Experience building features where AI is core to the user experience (not just a backend detail)
- Ability to navigate ambiguity and turn open-ended problems into shipped features

AI-Native Full-Stack Execution
- Experience building and shipping full-stack applications with real user impact
- Comfortable working across frontend and backend systems
- Familiarity with cloud infrastructure and modern web technologies
- Can design APIs and data models that support evolving product needs
- Use AI-native engineering tools (e.g., Claude Code, Codex, or similar) to meaningfully accelerate development velocity, debugging, and codebase comprehension

Ownership & Execution
- Self-starter who takes initiative to identify opportunities and drive them to completion
- Operates with urgency; you ship incremental value and iterate based on real user feedback
- Comfortable working with minimal direction in a fast-moving environment
- Takes responsibility for outcomes, not just code; you care about whether users love what you build

Collaboration & Communication
- Can explain technical concepts to both engineers and non-technical stakeholders
- Seeks feedback, acknowledges mistakes, and learns quickly
- Pushes for quality through constructive code review and collaboration
- Bachelor's degree in Computer Science, Engineering, a related field, or equivalent practical experience

Our Stack
We primarily use React and Python. Familiarity with the following technologies is a plus, but not required:
- OpenAPI & gRPC
- Kubernetes
- Redis & Memcached
- GCP, Google Bigtable, Google BigQuery, Google Spanner, Google Pub/Sub
- Docker & Terraform
- Cloudflare

Nice to Have
- Experience integrating ML models into production applications (inference, prompt engineering, fine-tuning workflows)
- Track record of shipping consumer-facing AI products or features
- Contributions to design systems, component libraries, or developer tooling
- Experience with experimentation frameworks and feature flagging
- Familiarity with real-time systems or high-throughput applications

Our Culture
We’re a team of exceptionally talented, curious builders who love solving tough problems and turning bold ideas into reality. We move fast, collaborate deeply, and operate without unnecessary hierarchy, because we believe the best ideas can come from anyone.

Everyone at Ideogram rolls up their sleeves to make our products and our customers successful. We thrive on curiosity, creativity, and shared ownership. We believe that small, dedicated teams working together with trust and purpose can move faster, think bigger, and create amazing things.

Ideogram is committed to welcoming everyone, regardless of gender identity, orientation, or expression. Our mission is to create belonging and remove barriers so everyone can create boldly.

What We Offer
💸 Competitive compensation and equity designed to recognize the value and impact of your contributions to Ideogram’s success.
🌴 4 weeks of vacation to recharge and explore.
🩺 Comprehensive health, vision, and dental coverage starting on day one.
💰 RRSP/401(k) with employer match up to 4% to invest in your future from the moment you join.
💻 Top-of-the-line tools and tech to fuel your creativity and productivity.
📍 Toronto HQ perks: steps from Union Station and the PATH, with daily in-office lunches and dinners.
🔍 Autonomy to explore and experiment: whether you’re testing new ideas, running large-scale experiments, or diving into research, you’ll have access to the compute and resources you need when there’s a clear business or creative use case. We encourage curiosity and bold thinking.
🌱 A culture of learning and growth, where curiosity is encouraged and mentorship is part of the journey.

Senior Engineering Manager, Reinforcement Learning Environments (RLE)

Handshake
$230,000 – $280,000
United States
Full-time
About Handshake
Handshake is the career network for the AI economy. 20 million knowledge workers, 1,600 educational institutions, 1 million employers (including 100% of the Fortune 50), and every foundational AI lab trust Handshake to power career discovery, hiring, and upskilling, from freelance AI training gigs to first internships to full-time careers and beyond. This unique value is driving unparalleled growth; in 2025, we tripled our ARR at scale.

Why join Handshake now:
- Shape how every career evolves in the AI economy, at global scale, with impact your friends, family, and peers can see and feel
- Work hand-in-hand with world-class AI labs, Fortune 500 partners, and the world’s top educational institutions
- Join a team with leadership from Scale AI, Meta, xAI, Notion, Coinbase, and Palantir, among others
- Build a massive, fast-growing business with billions in revenue

About the Role
We’re expanding our team and seeking a Senior Engineering Manager to lead our Reinforcement Learning Environments (RLE) team.

The RLE team builds the sandbox environments where frontier AI models learn complete, end-to-end workflows. These environments simulate real-world professional domains such as software engineering, finance, and legal research, complete with realistic tools, constraints, and feedback loops. Instead of learning from static examples, models practice doing the work: navigating multi-step tasks, using domain-specific tools, handling ambiguity, and optimizing for real outcomes.

Researchers use these environments and the data they generate to train state-of-the-art models with reinforcement learning grounded in execution: not just prediction, but task completion, quality, and robustness in complex workflows.

As a Senior Engineering Manager, you’ll shape the technical direction and long-term strategy of this critical platform. You’ll lead a growing team (currently 9 engineers) and will likely manage an Engineering Manager in the near term. This is a highly strategic role sitting at the intersection of platform engineering, applied AI infrastructure, research tooling, and human-in-the-loop operations systems.

Location: San Francisco, CA | 5 days/week in-office

What You'll Do
- Lead and grow a high-performing team of 8–9 engineers building reinforcement learning environments
- Manage, mentor, and develop senior engineers and future engineering leaders
- Partner closely with research, product, and operations teams to define roadmap and execution priorities
- Drive technical architecture for scalable, reliable, and extensible environment systems
- Build plug-and-play environments that integrate seamlessly with model training pipelines
- Balance platform rigor with operational complexity and data quality requirements
- Establish engineering best practices around reliability, observability, and performance
- Foster a culture of ownership, velocity, and high technical standards

Desired Capabilities
- 3+ years of engineering management experience, with increasing scope and ownership
- Experience managing senior engineers; experience managing an Engineering Manager (or equivalent scope) strongly preferred
- 5+ years of prior hands-on engineering experience
- Strong technical background in platform systems, distributed systems, or full-stack infrastructure
- Experience building internal platforms, data pipelines, or research-facing tools
- Proven ability to operate effectively in fast-paced, ambiguous environments
- Experience driving cross-functional alignment across engineering, research, and operations
- Willingness to work in-office in San Francisco 5 days/week

Extra Credit
- Experience in reinforcement learning, simulation systems, or AI training infrastructure
- Background in human-in-the-loop systems, data annotation platforms, or workflow tooling
- Experience in operations-heavy, tech-enabled organizations
- Familiarity with cloud infrastructure (AWS or GCP), APIs, and modern web stacks (e.g., React, TypeScript, Node.js, Python)
- Experience building systems used by AI researchers or applied ML teams

What Success Looks Like
- RLE becomes the default platform researchers use to train reinforcement learning workflows
- New domains (e.g., finance, legal, SWE) can be launched quickly and reliably
- Environment reliability and data quality are trusted by top AI research partners
- The team scales with strong technical leaders who can independently drive new verticals
- The RLE platform materially accelerates model capability in real-world task completion

Perks
Handshake delivers benefits that help you feel supported and thrive at work and in life. The benefits below are for full-time US employees.
🎯 Ownership: Equity in a fast-growing company
💰 Financial Wellness: 401(k) match, competitive compensation, financial coaching
🍼 Family Support: Paid parental leave, fertility benefits, parental coaching
💝 Wellbeing: Medical, dental, and vision, mental health support, $500 wellness stipend
📚 Growth: $2,000 learning stipend, ongoing development
💻 Remote & Office: Internet, commuting, and free lunch/gym in our SF office
🏝 Time Off: Flexible PTO, 15 holidays + 2 flex days
🤝 Connection: Team outings & referral bonuses

Explore our mission, values, and comprehensive US benefits at joinhandshake.com/careers.

Research Engineer, Core ML

Together AI
$200,000 – $280,000
Full-time
About the Role
The Turbo team sits at the intersection of efficient inference (algorithms, architectures, engines) and post-training / RL systems. We build and operate the systems behind Together’s API, including high-performance inference and RL/post-training engines that run at production scale. Our mandate is to push the frontier of efficient inference and RL-driven training: making models dramatically faster and cheaper to run, while improving their capabilities through RL-based post-training (e.g., GRPO-style objectives).

This work lives at the interface of algorithms and systems: asynchronous RL, rollout collection, scheduling, and batching all interact with engine design, creating many knobs to tune across the RL algorithm, training loop, and inference stack. Much of the job is modifying production inference systems (for example, SGLang- or vLLM-style serving stacks and speculative decoding systems such as ATLAS), grounded in a strong understanding of post-training and inference theory, rather than purely theoretical algorithm design.

You’ll work across the stack, from RL algorithms and training engines to kernels and serving systems, to build and improve frontier models via RL pipelines. People on this team are often spiky: some are more RL-first, some are more systems-first. Depth in one of these areas plus an appetite to collaborate across (and grow toward more full-stack ownership over time) is ideal.

Requirements
We don’t expect anyone to check every box below. People on this team typically have deep expertise in one or more areas and enough breadth (or interest) to work effectively across the stack. The closer you are to full-stack (inference + post-training/RL + systems), the stronger the fit, but being spiky in one area and eager to grow is absolutely okay.

You might be a good fit if you:
- Have strong expertise in at least one of the following, and are excited to collaborate across (and grow into) the others:
  - Systems-first profile: large-scale inference systems (e.g., SGLang, vLLM, FasterTransformer, TensorRT, custom engines, or similar), GPU performance, distributed serving.
  - RL-first profile: RL / post-training for LLMs or large models (e.g., GRPO, RLHF/RLAIF, DPO-like methods, reward modeling), and using these to train or fine-tune real models.
  - Model architecture design for Transformers or other large neural nets.
  - Distributed systems / high-performance computing for ML.
- Are comfortable working from algorithms to engines:
  - Strong coding ability in Python.
  - Experience profiling and optimizing performance across GPU, networking, and memory layers.
  - Able to take a new sampling method, scheduler, or RL update and turn it into a production-grade implementation in the engine and/or training stack.
- Have a solid research foundation in your area(s) of depth:
  - Track record of impactful work in ML systems, RL, or large-scale model training (papers, open-source projects, or production systems).
  - Can read new RL / post-training papers, understand their implications for the stack, and design minimal, correct changes in the right layer (training engine vs. inference engine vs. data / API).
- Operate well as a full-stack problem solver:
  - You naturally ask: “Where in the stack is this really bottlenecked?”
  - You enjoy collaborating with infra, research, and product teams, and you care about both scientific quality and user-visible wins.

Minimum qualifications
- 3+ years of experience working on ML systems, large-scale model training, inference, or adjacent areas (or equivalent experience via research / open source).
- Advanced degree in Computer Science, EE, or a related field, or equivalent practical experience.
- Demonstrated experience owning complex technical projects end-to-end.

If you’re excited about the role and strong in some of these areas, we encourage you to apply even if you don’t meet every single requirement.

Responsibilities

Advance inference efficiency end-to-end
- Design and prototype algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference.
- Implement and maintain changes in high-performance inference engines (e.g., SGLang- or vLLM-style systems and Together’s inference stack), including kernel backends, speculative decoding (e.g., ATLAS), quantization, etc.
- Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost.

Unify inference with RL / post-training
- Design and operate RL and post-training pipelines (e.g., RLHF, RLAIF, GRPO, DPO-style methods, reward modeling) where 90+% of the cost is inference, jointly optimizing algorithms and systems.
- Make RL and post-training workloads more efficient with inference-aware training loops, for example async RL rollouts, speculative decoding, and other techniques that make large-scale rollout collection and evaluation cheaper.
- Use these pipelines to train, evaluate, and iterate on frontier models on top of our inference stack.
- Co-design algorithms and infrastructure so that objectives, rollout collection, and evaluation are tightly coupled to efficient inference, and quickly identify bottlenecks across the training engine, inference engine, data pipeline, and user-facing layers.
- Run ablations and scale-up experiments to understand trade-offs between model quality, latency, throughput, and cost, and feed these insights back into model, RL, and system design.

Own critical systems at production scale
- Profile, debug, and optimize inference and post-training services under real production workloads.
- Drive roadmap items that require real engine modification: changing kernels, memory layouts, scheduling logic, and APIs as needed.
- Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously.

Provide technical leadership (Staff level)
- Set technical direction for cross-team efforts at the intersection of inference, RL, and post-training.
- Mentor other engineers and researchers on full-stack ML systems work and performance engineering.

About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.

Compensation
We offer competitive compensation, startup equity, health insurance, and other competitive benefits. The US base salary range for this full-time position is $200,000–$280,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more. Please see our privacy policy at https://www.together.ai/privacy

Staff Strategic Sourcing Manager (Hardware)

Together AI
$200,000 – $280,000
Full-time

Senior Data Scientist

Faculty
United Kingdom
Full-time
Remote: false
Why Faculty?
We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we’ve worked with over 350 global customers to transform their performance through human-centric AI. You can read about our real-world impact here.

We don’t chase hype cycles. We innovate, build and deploy responsible AI which moves the needle - and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product and delivery expertise to our clients, who span government, finance, retail, energy, life sciences and defence. Our business, and reputation, is growing fast and we’re always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology. AI is an epoch-defining technology; join a company where you’ll be empowered to envision its most powerful applications, and to make them happen.

About the team
The Faculty Frontier™ product is our ambitious vision to create the first enterprise-grade platform that unifies decision intelligence with AI Agents to optimise real-world outcomes of critical processes across large-scale organisations. You will work on highly complex and consequential problems across the real economy, with a particular focus on healthcare and life sciences.

About the role
Join us to shape the future of our Frontier Decision Intelligence Platform. As a Senior Data Scientist, you will lead the design and delivery of AI-powered digital twins that transform how organisations make critical decisions. You will sit at the heart of cross-functional teams, blending technical excellence with commercial insight to solve complex customer problems.
This is an opportunity to mentor emerging talent while driving high-impact, production-grade AI solutions in a fast-paced, entrepreneurial environment.

What you’ll be doing:
- Designing and building computational twins, creating AI-driven digital reflections tailored for each unique Frontier deployment.
- Leading data science efforts within cross-functional teams, partnering with engineers, designers, and commercial leads to ensure successful project outcomes.
- Developing a deep understanding of core customer challenges to ensure every technical solution delivers significant real-world value.
- Performing rigorous exploratory data analysis, model building, validation, and performance monitoring.
- Supporting strong client relationships by working alongside our commercial team to shape the strategic direction of projects.
- Mentoring and developing other data scientists through task leadership and potential line management.

Who we’re looking for:
- You have senior-level experience in data science or quantitative research, supported by a strong foundation in statistics and mathematics.
- You’re proficient in Python and essential libraries like NumPy and Pandas, with familiarity with deep-learning frameworks such as TensorFlow or PyTorch.
- You possess a versatile toolkit—including supervised learning, time-series, and Bayesian methods—and the creativity to develop new algorithms when needed.
- You bring a leadership mindset focused on technical excellence, team growth, and fostering a collaborative, inclusive culture.
- You exhibit scientific rigour and a business-focused approach, successfully translating complex problems into actionable technical strategies.
- You’ve demonstrated success in project planning and delivery, with the communication skills to present persuasively to senior stakeholders.

The Interview Process
- Talent Team Screen (30 minutes)
- Technical Interview (90 minutes)
- Commercial Interview (60 minutes)

Our Recruitment Ethos
We aim to grow the best team - not the most similar one.
We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live. We’re united by a deep intellectual curiosity and desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.

Some of our standout benefits:
- Unlimited annual leave policy
- Private healthcare and dental
- Enhanced parental leave
- Family-friendly flexibility & flexible working
- Sanctus coaching
- Hybrid working (2 days in our Old Street office, London)

If you don’t feel you meet all the requirements but are excited by the role and know you bring some key strengths, please don’t hesitate to apply, as you might be right for this role or for other roles. We are open to conversations about part-time hours.

Interior Designer - Workplace Design

X AI
$45 – $100 / hour
United States
Full-time
Remote: false
About xAI
xAI’s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity. We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company’s mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important. All employees are expected to have strong communication skills and should be able to concisely and accurately share knowledge with their teammates.

About the Role
As an Accounting Expert, you will be instrumental in enhancing the capabilities of our cutting-edge technologies by providing high-quality input and labels using specialized software. Your role involves collaborating closely with our technical team to support the training of new AI tasks, ensuring the implementation of innovative initiatives. You’ll contribute to refining annotation tools and selecting complex problems from corporate accounting domains, with a focus on financial reporting, consolidation, internal controls, and GAAP compliance, where your expertise can drive significant improvements in model performance. This position demands a dynamic approach to learning and adapting in a fast-paced environment, where your ability to interpret and execute tasks based on evolving instructions is crucial.

The AI Tutor’s Role in Advancing xAI’s Mission
As an AI Tutor, you will play an essential role in advancing xAI’s mission by supporting the training and refinement of xAI’s AI models. AI Tutors teach our AI models how people interact and react, as well as how people approach issues and discussions in corporate accounting.
To accomplish this, AI Tutors will actively participate in gathering or providing data such as text, voice, and video, sometimes providing annotations, recording audio, or participating in video sessions. We seek individuals who are comfortable and eager to engage in these activities as a fundamental part of the role, ensuring strong alignment with xAI’s goals and objectives.

Scope
An AI Tutor will provide services that include labeling and annotating data in text, voice, and video formats to support AI model training. At times, this may involve recording audio or video sessions, and tutors are expected to be comfortable with these tasks, as they are fundamental to the role. Providing such data is a requirement of the role and advances xAI’s mission; AI Tutors acknowledge that all work is done for hire and owned by xAI.

Responsibilities
- Use proprietary software applications to provide input/labels on defined projects.
- Support and ensure the delivery of high-quality curated data.
- Play a pivotal role in supporting and contributing to the training of new tasks, working closely with the technical staff to ensure the successful development and implementation of cutting-edge initiatives and technologies.
- Interact with the technical staff to help improve the design of efficient annotation tools.
- Choose problems from corporate accounting fields that align with your expertise, providing rigorous solutions and model critiques where you can confidently provide detailed solutions and evaluate model responses.
- Regularly interpret, analyze, and execute tasks based on given instructions.

Key Qualifications
- Must have 3+ years of Big 4 public accounting experience (audit/assurance) on corporate or SEC clients, or an equivalent senior corporate accounting role (e.g., Controller, Assistant Controller, or Technical Accounting Manager at a public company or large private enterprise with complex GAAP reporting).
- Must possess a Master's or PhD in Accounting (corporate focus), or equivalent credentials as a licensed CPA.
- Proficiency in reading and writing both informal and professional English.
- Strong ability to navigate corporate accounting information resources, databases, and online resources (e.g., FASB Codification, SEC EDGAR, 10-K/10-Q filings, ERP systems).
- Outstanding communication, interpersonal, analytical, and organizational capabilities.
- Solid reading comprehension skills combined with the capacity to exercise autonomous judgment even when presented with limited data/material.
- Strong passion for and commitment to technological advancement and innovation in corporate accounting.

Preferred Qualifications
- 5+ years at a Big 4 firm or in a senior corporate controllership role, with direct involvement in SEC reporting, SOX 404, or complex consolidations.
- Experience drafting or reviewing 10-K/10-Q footnotes, MD&A, or technical accounting memos.
- At least one publication in a reputable accounting journal or outlet.
- Teaching experience as a professor.

Location & Other Expectations
- This position is based in Palo Alto, CA, or fully remote.
- The Palo Alto option is an in-office role requiring 5 days per week; remote positions require strong self-motivation.
- If you are based in the US, please note we are unable to hire in the states of Wyoming and Illinois at this time.
- We are unable to provide visa sponsorship.
- Team members are expected to work from 9:00am - 5:30pm PST for the first two weeks of training and 9:00am - 5:30pm in their own timezone thereafter.
- If you will be working from a personal device, please note your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.

Compensation
$45/hour - $100/hour. The posted pay range is intended for U.S.-based candidates and depends on factors including relevant experience, skills, education, geographic location, and qualifications.
For international candidates, our recruiting team can provide an estimated pay range for your location.

Benefits: Hourly pay is just one part of our total rewards package at xAI. Specific benefits vary by country; depending on your country of residence, you may have access to medical benefits. We do not offer benefits for part-time roles.

xAI is an equal opportunity employer. For details on data processing, view our Recruitment Privacy Notice.

Agent Product Manager

Ema
$135,000 – $200,000
United States
Full-time
Remote: false
About Ema
Ema is at the forefront of the agentic AI revolution, empowering enterprises to reimagine how work gets done. Our platform enables organizations to design, deploy, and manage fleets of AI employees—multi-agent systems with rich human-in-the-loop interfaces—that automate complex workflows, augment decision-making, and unlock new levels of efficiency and growth. We are a team of ambitious innovators building the future of work, and we’re looking for passionate individuals to join us on this mission.

The Role
This is not a traditional, backlog-focused product management role. As an Agentic Solutions Product Manager, you’ll partner directly with enterprise leaders to observe and decode human workflows—what data they use, what applications they rely on, and what SOPs they follow. From this, you’ll craft AI employees: multi-agent workflows with intuitive, UI-driven human-in-the-loop controls that transform how businesses operate. You won’t just manage features; you’ll design and deliver entire AI-powered solutions.
You’ll be a trusted advisor and a strategic co-creator, working at the intersection of business strategy, workflow design, and cutting-edge AI technology.

What You Will Do
- Understand Human Workflows: Partner with enterprise customers to map end-to-end processes, uncover inefficiencies, and identify opportunities where agentic AI can create impact.
- Design AI Employees: Translate workflows into agentic multi-agent systems, integrating data, applications, and UI-driven human oversight.
- Bridge Business and Technology: Work hand-in-hand with engineering and design to turn client requirements into scalable agent capabilities and elegant product experiences.
- Drive Strategic Roadmaps: Own the lifecycle of your AI employees—from concept through deployment—guided by customer feedback, data, and business outcomes.
- Champion Adoption & Value: Ensure customers achieve measurable ROI, advocate for your solutions internally and externally, and evangelize the power of agentic AI.
- Continuously Optimize: Use data and customer insights to refine workflows, enhance capabilities, and identify new areas for automation and transformation.

What We’re Looking For
- Entrepreneurial Mindset: Self-starter who thrives in ambiguity, owns outcomes, and builds solutions from the ground up.
- Proven Client-Facing Experience: 4+ years in consulting, engagement management, product, or as a founder—trusted by senior stakeholders.
- Strategic Product Acumen: Ability to go beyond surface-level requests and solve the real business problem.
- Technical Credibility: Comfortable diving into architectural trade-offs, APIs, and agentic design with engineers.
- Systems Thinking: Natural ability to see the whole picture, anticipate downstream effects, and design resilient solutions.
Preferred Skills
- Experience in user research and workflow mapping, with a data-driven mindset.
- Familiarity with generative AI and agentic AI; hands-on experience designing agent-based systems is a plus.
- Ability to prototype quickly—comfortable with "vibe coding" to visualize solutions.
- Background in product management, consulting, or founding roles.
- Experience in agile development environments and tools (JIRA, Asana, etc.).
- Hands-on experience with APIs and working closely with technical teams.
- Degree in Computer Science, Engineering, Math, or equivalent experience.

For California-Based Candidates
The standard base salary for this position is $135,000 to $200,000 annually. Compensation offered will be determined by factors such as location, level, job-related knowledge, skills, and experience. Certain roles may be eligible for variable compensation, equity, and benefits.

Ema Unlimited is an equal opportunity employer and is committed to providing equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, sexual orientation, gender identity, or genetics.

Software Engineer

Hiya
$140,000 – $205,000
United States
Full-time
Remote: false
About Hiya
At Hiya, we're making calls safe, useful, and human again. Voice is the most human form of communication, yet it's become one of the least trusted. Spam, scams, and AI manipulation have eroded what was once a simple way to connect.

Each month our AI voice technology analyzes 28+ billion calls, protecting over 550 million users and 800+ businesses worldwide. Partnering with a growing global network including AT&T, Samsung, British Telecom EE, Rogers, MasOrange, Bell Canada, MasMovil, and Virgin Media O2, we're not just stopping bad actors; we're helping people feel good and confident about picking up the phone again.

This is a pivotal moment for voice. As new threats and technologies accelerate, so does demand for trusted voice communication. Hiya is growing 40%+ year over year, expanding globally, and defining what voice becomes next. Join us. You won't just work on what voice is today; you'll shape what it becomes tomorrow: smarter, safer, and genuinely worth answering again.

About the Role
The Mobile Experience team owns Hiya's two consumer mobile applications: Hiya AI Phone—an AI-powered phone assistant with real-time scam detection, intelligent call screening, and call transcription—and Hiya Spam Blocker—our established caller ID and spam blocking app with over 10 million downloads. Together, these products protect millions of users from fraud and unwanted calls across iOS and Android.

We're looking for a Software Development Engineer to build and own the backend services and mobile integrations that power both applications. You'll work across the stack: designing Node.js services deployed on AWS, scaling infrastructure that handles millions of lookups and call events, building call screening and spam detection features, and collaborating with mobile engineers on SDK integration.
The work spans real-time systems with strict latency requirements, AI-powered features that require thoughtful backend architecture, and mobile integrations that demand attention to platform-specific behavior. You'll have significant ownership over how these systems evolve—your decisions will directly shape the experience of users who rely on Hiya to know who's calling and stay safe from scams.

What You'll Do
- Own backend services that power core features across both apps—caller ID lookups, spam/fraud detection, call screening, real-time scam alerts, call transcription, and AI-generated summaries. This includes designing service architecture, writing production code, deploying to AWS, and operating systems at scale with high availability and low latency.
- Build and maintain mobile integrations that connect backend services to iOS and Android applications. You'll collaborate closely with mobile engineers to ensure seamless SDK behavior across platforms, debug platform-specific issues, and ship features that work reliably for millions of users.
- Scale infrastructure to support growing user bases and new feature development. You'll make decisions about how to handle increasing load, optimize costs, and maintain reliability as both products evolve.
- Make architectural and implementation decisions that balance speed, quality, and long-term maintainability. You'll encounter ambiguous problems—how to handle edge cases in call screening logic, how to structure services for rapid iteration on AI features, how to maintain a shared backend that serves two distinct products—and you'll drive toward solutions with ownership of the outcome.
- Collaborate across engineering, product, and design to ship features that matter.
The Mobile Experience team operates with high autonomy, which means you'll be involved in shaping what we build, not just how we build it.

What Success Looks Like
- The backend services and mobile integrations you own are reliable, performant, and support the rapid iteration the team needs to improve both Hiya AI Phone and Hiya Spam Blocker. Users experience accurate caller ID, effective spam blocking, and responsive AI features—and those outcomes trace back to work you shipped.
- You've built enough context across the stack—services, infrastructure, mobile integration points—that you can independently identify problems, propose solutions, and drive them to completion without requiring constant direction.
- Your work compounds over time: services are easier to extend, infrastructure scales more efficiently, and the team ships faster because of foundations you've put in place.

What We're Looking For
Required:
- Experience shipping Node.js services in production environments
- Strong knowledge of AWS infrastructure and services (deployment, scaling, monitoring)
- Experience building backend systems that serve mobile applications
- Familiarity with iOS and/or Android development—enough to collaborate effectively on SDK integration and debug platform-specific issues
- Ability to design and operate services with high availability and low latency requirements
- Clear communication and the ability to work effectively with distributed teams (Seattle and London)
- Familiarity with AI-assisted development techniques and workflows

Nice to Have:
- Experience with telephony, VoIP, or voice-related systems (SIP, SS7, or similar protocols)

How We Work
Hiya is not a passive environment. We expect people to take ownership, form opinions, and engage directly with hard problems. We work with a high degree of transparency and autonomy. Context is shared openly, and decisions are discussed, challenged, and then made.
Once a call is made, we commit and move forward. You’ll be expected to work through ambiguity, weigh tradeoffs, and take responsibility for results, while keeping a high bar for quality and customer trust.

Every team member at Hiya is expected to live our core values:
- Serve our customers and partners by holding a high bar for trust and quality
- Own: share in success and open up to failures
- Lead: listen, show up with a point of view, but commit entirely once a decision has been made
- Improve, even if it means changing course or contradicting ourselves
- Do, rather than observe

Our Interview Process
Our standard interview process follows this sequence:
- Initial Screen: We confirm baseline alignment, role interest, relevant experience, and logistics.
- Hiring Manager (HM) Screen: We evaluate role fit, expectations, and execution readiness.
- Take-Home Assignment: You'll complete a role-relevant take-home assignment designed to reflect the kind of work you would do at Hiya. The assignment focuses on how you think, prioritize, and explain your approach. You'll review your work and discuss your reasoning with the interview panel.
- Assignment-Based Interview Loop: Interviewers will explore how you think through the work, ask questions, respond to feedback, and adapt your approach.
Each interviewer focuses on specific competencies and how you make decisions, navigate tradeoffs, and collaborate in real time.
- Future Hiya Value Interview: An independent conversation focused on your long-term potential, judgment under ambiguity, and ability to create sustained value as scope and complexity increase.

How We Invest in You
Compensation & Ownership
- Base Salary: $140,000 - $205,000. Compensation is determined by role scope, skills, experience, location, and market data.
- Equity Compensation: ownership aligned with your impact and the company's growth

Benefits & Support
- Employer-sponsored insurance: medical, dental, and vision (PPO & HDHP); 50% dependent coverage
- Health, flexible spending, and dependent care accounts
- Life, AD&D, and accident coverage, with company-paid life and long-term disability
- 401(k) with 3% company match (via Fidelity)
- Flexible vacation policy and paid company holidays
- Paid parental leave
- Work-from-home equipment stipend
- $1,000 annually to invest in your learning and growth
- $1,000/year in charitable donation matching
- Team lunch 2x per week

Come Work With Us!
We're building a team with diverse perspectives, identities, and professional experiences. We evaluate candidates through a business lens and believe that diversity and unique viewpoints make our company stronger, more dynamic, and a great place to build a career. We've been recognized by Built In, GeekWire, Comparably, G2, Forbes, and Deloitte Technology Fast 500 for our culture, innovation, leadership, compensation, and more. At Hiya, we're a people-centric company focused on helping each employee grow both personally and professionally. We create a culture of support and empowerment that challenges the status quo, resulting in an energized team that's passionate about their work. You'll love working here if you're looking for an innovative challenge that's disrupting an industry. Come join us!

Software engineer, agents

Writer
$140,700 – $292,400
United States
Full-time
Remote: false
🚀 About WRITER
WRITER is where the world's leading enterprises orchestrate AI-powered work. Our vision is to expand human capacity through superintelligence. And we're proving it's possible – through powerful, trustworthy AI that unites IT and business teams to unlock enterprise-wide transformation. With WRITER's end-to-end platform, hundreds of companies like Mars, Marriott, Uber, and Vanguard are building and deploying AI agents that are grounded in their company's data and fueled by WRITER's enterprise-grade LLMs. Valued at $1.9B and backed by industry-leading investors including Premji Invest, Radical Ventures, and ICONIQ Growth, WRITER is rapidly cementing its position as the leader in enterprise generative AI.

Founded in 2020 with office hubs in San Francisco, New York City, Austin, Chicago, and London, our team thinks big and moves fast, and we're looking for smart, hardworking builders and scalers to join us on our journey to create a better future of work with AI.

📐 About the role
We’re seeking a highly skilled fullstack software engineer to join our engineering team building advanced AI-driven agent systems that execute autonomous workflows, orchestrate multi-step tasks, and extend human capacity across enterprise applications. In this role, you’ll play a key part in designing, building, and scaling next-generation AI agents that integrate with enterprise data and services to solve real-world problems. You will collaborate with cross-functional teams to turn complex agent concepts into production-ready systems.
🦸🏻‍♀️ What you'll do
- Design, implement, and maintain scalable, secure agent-driven services and systems that autonomously accomplish tasks using modern AI frameworks.
- Develop and enhance robust infrastructure and high-throughput APIs, focusing on core agent capabilities such as memory, communication channels, skills, intelligent decision logic, security, and workflow management.
- Integrate agent capabilities with backend services, data stores, vector databases, search/retrieval systems, and external APIs.
- Collaborate with product managers, AI researchers, data engineers, and UX teams to translate high-level agent use cases into robust, production-ready software.
- Ensure reliability, monitoring, and observability for all agent components (metrics, logging, CI/CD, fault tolerance).
- Contribute to architectural design decisions and participate in rigorous code reviews to uphold quality and maintainability.

⭐️ What you need
- 3+ years of professional software engineering experience, with strong proficiency (3+ years) in Python in production environments
- Proficiency in native agentic coding, demonstrated through the daily use of tools like Cursor, Claude Code, and other agentic coding platforms
- Demonstrated experience building distributed systems, microservices, or complex backend APIs that support AI/agent workflows
- Solid expertise with systems that integrate AI models, agent frameworks (e.g., LangChain or platform-specific tooling), vector databases, and large-context reasoning services
- Understanding of agent orchestration patterns, state management, and asynchronous workflows
- Experience with cloud platforms (e.g., AWS, GCP, Azure), containerization (Docker, Kubernetes), and operational engineering best practices
- Good grasp of performance optimization, testing frameworks, and CI/CD pipelines
- Excellent communication and collaboration skills, with a “connect + challenge + own” mindset
- Past work on AI agents that coordinate multi-step actions, reasoning, or autonomous decision-making loops
- Contributions to open-source agent toolkits or SDKs
- Experience with frontend technologies (React, TypeScript) for tooling around agent management interfaces

🍩 Benefits & perks (US full-time employees)
- Generous PTO, plus company holidays
- Medical, dental, and vision coverage for you and your family
- Paid parental leave for all parents (12 weeks)
- Fertility and family planning support
- Early-detection cancer testing through Galleri
- Flexible spending account and dependent FSA options
- Health savings account for eligible plans, with company contribution
- Annual work-life stipends for:
  - Wellness (gym, massage/chiropractor, personal training, etc.)
  - Learning and development
- Company-wide off-sites and team off-sites
- Competitive compensation, company stock options, and 401k

WRITER is an equal-opportunity employer and is committed to diversity. We don't make hiring or employment decisions based on race, color, religion, creed, gender, national origin, age, disability, veteran status, marital status, pregnancy, sex, gender expression or identity, sexual orientation, citizenship, or any other basis protected by applicable local, state or federal law. Under the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.

By submitting your application on the application page, you acknowledge and agree to WRITER's Global Candidate Privacy Notice.

Software engineer, agents (UK)

Writer
United Kingdom
Full-time
Remote: false
🚀 About WRITER
WRITER is where the world's leading enterprises orchestrate AI-powered work. Our vision is to expand human capacity through superintelligence. And we're proving it's possible – through powerful, trustworthy AI that unites IT and business teams to unlock enterprise-wide transformation. With WRITER's end-to-end platform, hundreds of companies like Mars, Marriott, Uber, and Vanguard are building and deploying AI agents that are grounded in their company's data and fueled by WRITER's enterprise-grade LLMs. Valued at $1.9B and backed by industry-leading investors including Premji Invest, Radical Ventures, and ICONIQ Growth, WRITER is rapidly cementing its position as the leader in enterprise generative AI.

Founded in 2020 with office hubs in San Francisco, New York City, Austin, Chicago, and London, our team thinks big and moves fast, and we're looking for smart, hardworking builders and scalers to join us on our journey to create a better future of work with AI.

📐 About the role
We’re seeking a highly skilled fullstack software engineer to join our engineering team building advanced AI-driven agent systems that execute autonomous workflows, orchestrate multi-step tasks, and extend human capacity across enterprise applications. In this role, you’ll play a key part in designing, building, and scaling next-generation AI agents that integrate with enterprise data and services to solve real-world problems. You will collaborate with cross-functional teams to turn complex agent concepts into production-ready systems.
🦸🏻‍♀️ What you'll do
- Design, implement, and maintain scalable, secure agent-driven services and systems that autonomously accomplish tasks using modern AI frameworks.
- Develop and enhance robust infrastructure and high-throughput APIs, focusing on core agent capabilities such as memory, communication channels, skills, intelligent decision logic, security, and workflow management.
- Integrate agent capabilities with backend services, data stores, vector databases, search/retrieval systems, and external APIs.
- Collaborate with product managers, AI researchers, data engineers, and UX teams to translate high-level agent use cases into robust, production-ready software.
- Ensure reliability, monitoring, and observability for all agent components (metrics, logging, CI/CD, fault tolerance).
- Contribute to architectural design decisions and participate in rigorous code reviews to uphold quality and maintainability.

⭐️ What you need
- 3+ years of professional software engineering experience, with strong proficiency (3+ years) in Python in production environments
- Proficiency in native agentic coding, demonstrated through the daily use of tools like Cursor, Claude Code, and other agentic coding platforms
- Demonstrated experience building distributed systems, microservices, or complex backend APIs that support AI/agent workflows
- Solid expertise with systems that integrate AI models, agent frameworks (e.g., LangChain or platform-specific tooling), vector databases, and large-context reasoning services
- Understanding of agent orchestration patterns, state management, and asynchronous workflows
- Experience with cloud platforms (e.g., AWS, GCP, Azure), containerization (Docker, Kubernetes), and operational engineering best practices
- Good grasp of performance optimization, testing frameworks, and CI/CD pipelines
- Excellent communication and collaboration skills, with a “connect + challenge + own” mindset
- Past work on AI agents that coordinate multi-step actions, reasoning, or autonomous decision-making loops
- Contributions to open-source agent toolkits or SDKs
- Experience with frontend technologies (React, TypeScript) for tooling around agent management interfaces

🍩 Benefits & perks (UK full-time employees)
- Generous PTO, plus company holidays
- Comprehensive medical and dental insurance
- Paid parental leave for all parents (12 weeks)
- Fertility and family planning support
- Early-detection cancer testing through Galleri
- Competitive pension scheme and company contribution
- Annual work-life stipends for:
  - Wellness (gym, massage/chiropractor, personal training, etc.)
  - Learning and development
- Company-wide off-sites and team off-sites
- Competitive compensation and company stock options

Expert in Residence

Nomic AI
United States
Full-time
Remote: false
About Nomic

Nomic builds domain-specific AI agents for the built world, helping Architecture, Engineering, and Construction (AEC) teams across tasks like drawing reviews, takeoffs, code compliance, RFIs, submittals, specs, QA/QC, coordination, and project documentation. Our goal is to build AI systems that reflect how AEC work is actually done in practice: grounded in real workflows, real constraints, and real project complexity.

About the Role

We're launching a Domain Experts in Residence (DEIR) program to embed experienced AEC practitioners directly into how our AI agents and workflows are designed, evaluated, and shipped. This is not a traditional advisory role. As a Domain Expert in Residence, you'll be hands-on in defining real-world AEC tasks, building datasets, and shaping evaluation criteria that determine how our agents perform across the project lifecycle. This role is ideal for practitioners who want to shape the future of tools used by their profession.

What You'll Do

- Define real-world AEC task specifications for AI agents (e.g., drawing reviews, code compliance, RFIs, submittals, spec review, QA/QC, coordination)
- Create and label gold-standard datasets from real project artifacts (drawings, specs, RFIs, submittals)
- Design evaluation rubrics for agent outputs (severity, discipline, correctness, usefulness)
- Co-design default workflows that ship to customers
- Pressure-test agents against real-world project scenarios and common failure modes
- Partner closely with product, ML, and engineering teams to translate domain practice into scalable AI systems

Who You Are

- 6+ years of professional experience in one or more of: Architecture, Structural Engineering, MEP Engineering, Construction / Project Management, QA/QC or Code Compliance
- Deep familiarity with real AEC workflows and project deliverables
- Comfortable articulating how work is actually done in practice
- Curious about AI tools and automation (no ML background required)
- Motivated to shape the future of AEC tooling

Nice to Have

- Experience with QA/QC processes or design reviews
- Familiarity with building codes (IBC, ADA, NFPA, local codes)
- Experience writing standards, checklists, or internal review frameworks
- Interest in product development

Why This Role Is Different

- You'll help define how AI supports real AEC workflows
- Your expertise will directly shape products used by real firms
- You'll work closely with product and ML teams
- You'll influence the next generation of tools for your profession

Compensation

- Competitive compensation based on experience
- Equity for full-time roles
- Flexible part-time options for specialists

Founding Engineer (AI Engineering)

Momentic
$150,000 – $220,000
United States
Full-time
Remote: false
At Momentic, we're building the future of quality.

We're building the all-in-one quality platform, powered by state-of-the-art agents, to help our customers ensure quality at every stage of the SDLC. Top engineering teams at companies like Notion, Bilt, Quora, and Xero use Momentic to ship high-quality products. Millions of Momentic tests are executed every single day.

Our product has a very large problem space, so there's a ton of stuff to build and take ownership of. We'd be excited to get your help, as we're hiring several extremely talented software engineers across the stack.

The product

- We're building the AI-native automated testing platform
- Our product is incredibly sticky: our customers run us on every pull request and merge, and before each deploy, as quality gates
- Our product is leagues ahead of competitors and the status quo today (think Selenium/Cypress/Playwright)
- Check out the demo video on our website

About us

- We're a lean team of 12 (ex Robinhood, Retool, WeWork, Qualtrics, Assembled)
- Located in-person in San Francisco (650 5th St.)
- Recent Series A raise of $15M led by Standard Capital, with participation from existing investors: Y Combinator, FCVC, and Transpose Platform

You're perfect for this role if…

- You enjoy solving hard problems that have both product and technical ambiguity
- You have strong engineering fundamentals, code efficiently, and know what you're great at and what you're less great at
- You dislike meetings and would much rather focus your time on building, being productive, and shipping code
- You thrive when you have autonomy, own as many of the details as possible, and project-manage your own work
- You're in SF or willing to relocate, you love working in person, and you're serious about joining us to build a culture we'll all love

Must-have qualifications

- Experience with LLM performance tuning, including prompt engineering and context management strategies
- Experience with LLM evaluations and observability
- Experience integrating LLMs into real-world applications using a modern tech stack (Python, TypeScript, etc.)
- 3+ years of experience

Nice to have

- Experience with supervised or reinforcement fine-tuning
- Experience with classical machine learning techniques such as template matching, bounding box detection, and OCR
- Experience running statistical experiments

Sponsorship

We can't sponsor new H-1B visas or green cards (through I-140) at this time. We can transfer existing H-1Bs or TN visas.

Our stack

React, TypeScript, Next.js, Node.js, PostgreSQL, Google Cloud, Kubernetes

Benefits

- Unlimited supply of sparkling water
- Medical, vision, and dental insurance
- 401(k)
- Unlimited PTO
- Fun team offsites and events (we went to French Laundry this year!)