The AI job market moves fast. We keep up so you don't have to.
Fresh roles added daily, reviewed for quality — across every corner of the AI ecosystem.
New AI Opportunities
Senior Business Psychologist
MakiPeople
101-200
France
Full-time
Remote: false
About the Science Team
At the heart of Maki People, the Science team is shaping the future of hiring through innovation, rigour, and collaboration. Led by our Head of Science, Aiden Loe, and working closely with our COO, Paul-Louis Caylar, this team drives the development of high-quality content that sets our platform apart.
We don't just create and validate assessments - we innovate. Our work spans:
Expanding a cutting-edge library of tests and tools.
Designing bespoke activities and experiences for clients.
Evaluating and refining AI-driven scoring algorithms and large language models (LLMs) to ensure fairness, accuracy, and transparency.
Leveraging psychometric expertise to build reliable, valid, and impactful assessments.
Developing tools that analyse candidate and job data to predict performance and potential with precision.
Supporting clients in using assessment data to optimise their workforce strategies, from talent acquisition to development and retention.
Leading original studies to explore emerging psychological and technological trends and sharing insights through publications, presentations, and client reports.
Collaborating with regulatory bodies and industry leaders to establish new standards in ethical AI use and hiring practices.
Equipping internal teams and clients with the knowledge and skills needed to understand and apply psychological and AI-driven insights effectively.
As Maki continues to grow, the Science team is central to understanding user experiences, refining assessments, and driving broader adoption - all while upholding the highest scientific standards.
Your impact as a Senior Business Psychologist will go beyond day-to-day responsibilities - you'll be a key partner in shaping the future of recruitment while driving exceptional outcomes for our clients.
About the Role
Apply psychological theories and methodologies to improve our assessment and selection procedures.
Conduct research and analysis to identify key psychological factors and competencies relevant to various job roles and industries.
Create custom assessments and competency frameworks tailored to client needs and psychometric standards.
Ensure that our test bank is diverse, inclusive, and covers the core competencies aligned with the future of work.
Build strong client relationships, guiding them through the assessment lifecycle - from needs analysis to results interpretation.
Work closely with Onboarding, Customer Success, Psychometricians, and Assessment Operations to ensure high-quality, scalable, and impactful assessments.
Communicate validation studies and statistical insights in clear, actionable ways for non-technical audiences.
Share insights and information with external stakeholders on how AI technologies are integrated into our assessments, highlighting their purpose, functionality, and impact.
Ensure that test development and AI-driven hiring systems adhere to the highest ethical and compliance standards.
Assist in training and educating internal teams on psychological best practices in talent acquisition.
Stay updated on I/O psychology and AI research, applying insights to product development.
Contribute to client proposals by providing expert input on science-related matters, ensuring the scientific rigour, validity, and innovation of our solutions are effectively communicated to win key deals.
Represent Maki People by presenting actionable insights and findings at key industry conferences, bridging scientific rigour with practical business applications.
Our Ideal Candidate
Experience in job analysis and developing competency frameworks for clients.
Excellent understanding of AI applications in assessments.
Deep understanding of ethical considerations and compliance standards in the development and application of assessments, especially those integrating AI technologies.
Proven ability to translate technical concepts for diverse audiences, advocating for an evidence-based approach in business discussions.
Expert knowledge of assessment design, validation studies, and psychometric methods.
Strong interpersonal and relationship-building skills to collaborate effectively with internal teams, clients, and external stakeholders.
Advanced research skills to design and execute studies that drive innovative assessment solutions and contribute to thought leadership.
Proven ability to manage multiple projects simultaneously, ensuring timely delivery and adherence to quality standards.
Familiarity with industry-specific challenges and trends, enabling the customisation of assessments for diverse sectors and job roles.
Strong writing skills for creating professional documentation, including technical reports, white papers, and client-facing materials.
Experience with data analysis or programming tools (e.g. SPSS, JASP, Mplus, R, Python) for psychometric analysis and research.
Strong knowledge of test content creation and review processes.
Expertise in assessment design and item-writing best practices, ensuring cultural fairness and minimising bias in assessments.
Comfortable with public speaking and presenting findings at conferences, webinars, and client meetings to represent the organisation effectively.
Awareness of diversity, equity, and inclusion principles, applying them to ensure assessments promote fair and unbiased hiring decisions.
Understanding of automated scoring systems and their ethical considerations.
Application Process
Stage 1 - Screening assessment (20 min)
Stage 2 - Hiring manager interview (45 min)
Stage 3 - Power skill assessment with our AI agent (15 min)
Stage 4 - Executive interview (45 min)
Stage 5 - Deep-dive technical interview (60 min)
Stage 6 - Interview with Co-founder (30 min)
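The listing above calls for psychometric analysis with tools such as R or Python. As a flavour of that work, here is a minimal, library-free Python sketch of Cronbach's alpha, a standard internal-consistency estimate for a test scale; the item-response matrix is hypothetical illustration data, not anything from Maki's platform.

```python
# Minimal sketch: Cronbach's alpha for the internal consistency of a scale.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(responses):
    """responses: list of respondents, each a list of item scores."""
    k = len(responses[0])                 # number of items
    items = list(zip(*responses))         # column-wise item scores
    item_vars = sum(variance(list(col)) for col in items)
    total_var = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Five respondents rating four Likert items (1-5); hypothetical data.
data = [
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
]
print(round(cronbach_alpha(data), 3))  # → 0.959
```

Values above roughly 0.7-0.8 are conventionally read as acceptable reliability, though the threshold depends on the stakes of the assessment.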
2025-12-24 9:44
Senior Psychometrician
MakiPeople
101-200
France
Full-time
Remote: false
About the Science Team
At the heart of Maki People, the Science team is shaping the future of hiring through innovation, rigour, and collaboration. Led by our Head of Science, Aiden Loe, and working closely with our COO, Paul-Louis Caylar, this team drives the development of high-quality content that sets our platform apart.
We don't just create and validate assessments - we innovate. Our work spans:
Expanding a cutting-edge library of tests and tools.
Designing bespoke activities and experiences for clients.
Evaluating and refining AI-driven scoring algorithms and large language models (LLMs) to ensure fairness, accuracy, and transparency.
Leveraging psychometric expertise to build reliable, valid, and impactful assessments.
Developing tools that analyse candidate and job data to predict performance and potential with precision.
Supporting clients in using assessment data to optimise their workforce strategies, from talent acquisition to development and retention.
Leading original studies to explore emerging psychological and technological trends and sharing insights through publications, presentations, and client reports.
Collaborating with regulatory bodies and industry leaders to establish new standards in ethical AI use and hiring practices.
Equipping internal teams and clients with the knowledge and skills needed to understand and apply psychological and AI-driven insights effectively.
As Maki continues to grow, the Science team is central to understanding user experiences, refining assessments, and driving broader adoption - all while upholding the highest scientific standards.
Your impact as a Senior Psychometrician will go beyond day-to-day responsibilities - you'll be a key partner in shaping the future of recruitment while driving exceptional outcomes for our clients.
About the Role
Design, create, and validate psychometric assessments optimised for AI integration, ensuring tools are scientifically rigorous and actionable.
Explore new ways of generating and evaluating assessments by integrating psychometrics with machine learning algorithms.
Perform complex analyses, focusing on item calibration, test equating, norming, distractor analysis, fairness evaluation, reliability, and validation studies.
Develop and refine automated scoring algorithms to provide precise and unbiased candidate evaluations.
Monitor assessments for potential bias and ensure adherence to ethical guidelines and best practices.
Conduct research to identify novel psychometric solutions that enhance the functionality and effectiveness of AI-driven talent assessments.
Implement improvements to existing assessments, leveraging the latest psychometric advancements to enhance candidate engagement and experience.
Serve as a thought leader in integrating AI and psychometrics, keeping the team informed about the latest innovations and industry trends.
Develop documentation, including test manuals, technical reports, and white papers, to communicate the scientific foundations, methodologies, and validity evidence of assessments to external stakeholders.
Contribute to developing resources and educational materials that help clients understand AI-enhanced assessments and their implications.
Publish innovative research in high-impact academic journals, and represent Maki People by presenting findings at key conferences.
Collaborate with cross-functional teams, including I/O psychologists, AI researchers, and software developers, to seamlessly embed assessments into our platform.
Translate complex psychometric findings into clear, actionable insights for clients and external partners.
Our Ideal Candidate
Proven academic publication record in psychometrics or related fields, advancing assessment methodologies.
Experience in test design for diverse applications, including custom solutions tailored to client needs.
Ability to design, develop, and implement automated scoring systems, ensuring accuracy and fairness.
Expertise in conducting complex statistical analyses.
Deep knowledge of psychometric methods such as factor analysis, SEM, CTT, IRT, CAT, and test equating.
Experience in AI/ML applications for assessment, such as prediction modelling and person-job fit analysis.
Strong technical documentation and research writing skills.
Robust validation study expertise, ensuring assessment reliability, validity, and practical relevance.
Proficiency in automated item generation and recalibration, optimising item pools.
Strong knowledge of integrating emerging technologies such as large language models (LLMs) into psychometric assessment practices.
Experience with programming languages such as R, Python, Julia, or similar, and proficiency in statistical software like SPSS, SAS, JASP, Mplus, or equivalent tools.
Good working knowledge of cloud databases such as BigQuery, with expertise in extracting and analysing relevant data.
Additional Expertise
Strong understanding of ethical considerations in data governance, AI fairness, and compliance.
A proven track record in multi-team collaboration, particularly with software developers and content teams.
Skilled in data visualisation and reporting, turning statistical insights into actionable recommendations.
Demonstrated ability to present complex concepts persuasively in business discussions.
Application Process
Stage 1 - Screening assessment (20 min)
Stage 2 - Hiring manager interview (45 min)
Stage 3 - Power skill assessment with our AI agent (15 min)
Stage 4 - Executive interview (45 min)
Stage 5 - Deep-dive technical interview (60 min)
Stage 6 - Interview with Co-founder (30 min)
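Among the psychometric methods this listing names is IRT. As a minimal illustration of what that involves, here is a Python sketch of the two-parameter logistic (2PL) item response function; the item parameters below are hypothetical, chosen only to show the shape of the curve.

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability that a
    respondent with ability theta answers correctly an item with
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item: discrimination a=1.2, difficulty b=0.5.
for theta in (-1.0, 0.5, 2.0):
    print(theta, round(p_correct(theta, a=1.2, b=0.5), 3))
```

At theta = b the probability is exactly 0.5; a larger discrimination a makes the curve steeper around the difficulty point, which is what item calibration estimates from response data.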
2025-12-24 9:44
AI deployment architect
MakiPeople
101-200
France
Full-time
Remote: false
We are hiring AI deployment architects to configure, deploy, and evolve our agents for enterprise customers. This role sits at the intersection of product, engineering, and customer delivery; you will translate business requirements into robust configurations, ensure each deployment reflects client needs, and iterate rapidly to maintain high performance and system stability.
You will be a primary technical partner to our customers, guiding them through configuration decisions, troubleshooting complex behaviors, and shaping how they design and operationalize AI-driven screening. You will also act as a critical feedback conduit between the field and the product team, surfacing patterns that should be platformized and validating new capabilities with real-world customers.
This is a highly technical, hands-on role. As deployments accelerate across industries and geographies, you will help define the standards, tools, and best practices that make our agents scalable.
What you will do
Configuration and deployment
Build and adapt screening flows based on customer jobs and requirements.
Configure state prompts, tone parameters, voice selection, transitions, and conditional logic.
Set up and maintain custom vocabularies for ASR when relevant.
Prepare and run demos; support pilot implementations from start to finish.
Troubleshooting and iteration
Analyze conversation transcripts and identify sources of errors or drift.
Run isolated state tests for targeted debugging.
Iterate rapidly on prompts and configurations to improve performance.
Use SQL to investigate behavioral patterns, identify systemic issues, and validate improvements.
Client partnership
Advise customers on screening design, personas, and best practices for AI-driven interviews.
Communicate technical concepts, limitations, and trade-offs clearly.
Manage expectations during pilots; build structured feedback loops that drive continuous improvement.
Act as a trusted guide throughout deployment and iteration cycles.
Product collaboration
Surface recurring field issues that should become productized solutions.
Contribute insights that shape new configuration surfaces, evaluation tools, and system-level capabilities.
Partner with product and engineering teams to test new features with selected customers and validate readiness for scale.
What makes this role unique
You sit closest to real-world usage; your insights will directly shape how our agents evolve.
You bridge product and customer needs, ensuring enterprise deployments remain robust, predictable, and high performing.
You influence how AI-driven hiring is operationalized across the world's leading companies.
You will help define repeatable playbooks, tools, and standards that allow the deployment function to scale.
As the first hire fully dedicated to this function, you will shape the boundaries of the role itself and set the industry bar for what excellence looks like in full-deployment engineering for conversational AI.
Preferred experience
Experience
Graduated from a top engineering school or a business/engineering dual-degree program.
Prior client-facing technical experience, ideally supporting enterprise implementations.
Technical skills
Hands-on work with conversational AI, LLM prompting, applied ML models, or workflow-based configuration.
Ability to debug state logic, model outputs, and configuration inconsistencies.
SQL proficiency for analytics and performance investigation.
Comfort with light scripting and structured configuration formats.
Analytical and product mindset
Strong ability to reason about system constraints and edge cases.
Ability to distinguish between local configuration work and product-level feature needs.
Structured approach to experimentation and iteration.
Client-facing abilities
Clear and concise communication with both technical and non-technical stakeholders.
Ability to present, educate, and manage expectations during pilots and demos.
Confidence in advising customers and steering conversations.
Recruitment process
Stage 1 - Screening call with our agent (15 min)
Stage 2 - Hiring manager interview (45 min)
Stage 3 - Implementation lead interview (45 min)
Stage 4 - Deep-dive technical interview (60 min)
Stage 5 - Founder interview (45 min)
Stage 6 - Offer and alignment discussion
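The troubleshooting duties in this listing include using SQL to investigate behavioral patterns in conversation transcripts. Here is a minimal sketch of that kind of query, run against an in-memory SQLite database; the table schema, event rows, and outcome labels are all hypothetical illustrations, not Maki's actual data model.

```python
import sqlite3

# Hypothetical transcript-events table for an AI screening agent.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE transcript_events (
    session_id TEXT,
    state      TEXT,   -- conversation state the agent was in
    outcome    TEXT    -- 'ok', 'retry', or 'error'
);
INSERT INTO transcript_events VALUES
    ('s1', 'greeting',  'ok'),
    ('s1', 'questions', 'error'),
    ('s2', 'greeting',  'ok'),
    ('s2', 'questions', 'retry'),
    ('s3', 'questions', 'error');
""")

# Which conversation states produce the most errors?
# A typical first query when looking for systemic issues.
rows = conn.execute("""
    SELECT state, COUNT(*) AS errors
    FROM transcript_events
    WHERE outcome = 'error'
    GROUP BY state
    ORDER BY errors DESC
""").fetchall()
print(rows)  # → [('questions', 2)]
```

A result concentrated in one state (here, 'questions') points the debugging effort at that state's prompt and transition logic rather than at the agent as a whole.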
2025-12-24 9:44
Senior Machine Learning Engineer
Knowlix
1-10
Germany
Full-time
Remote: false
Full-time | Munich | Seed Stage | Equity + Salary
Help Build a Generational Company
At Knowlix, we're on a mission to 10x small businesses' productivity. We strive to become the dominant small business platform for human-AI and AI-to-AI collaboration.
We're still early, but we're thinking big. This is your opportunity to help shape a product that serves millions and defines a category.
We move fast, ship often, and obsess over quality. If you burn to build, ship, and scale - and you love using AI to make things radically better - this is your seat at the table.
Why Knowlix, Why Now
We're a well-funded seed-stage startup, backed by top-tier investors and experienced founders.
Our tech works - and early users are already feeling the magic.
We're going after a massive market, with a product that changes how people work.
You'll be joining as one of the earliest team members, with real influence and ownership over the trajectory of the company.
We believe speed is a function of focus + iteration.
This isn't a job. It's a chance to help build a once-in-a-generation company if we do things right.
The RoleYou will design and ship advanced ML systems - especially LLM-driven agents and self-improving workflows. You’ll build robust data and training pipelines, enable fast experimentation, and ensure models and agents continuously improve in production. This is a hands-on role in a fast-moving startup where you ship, measure, and iterate quickly.
What You'll Do
Build LLM-based agents, tool-using workflows, and autonomous self-improvement loops.
Design, train, and evaluate ML models across NLP/LLM, vision, and retrieval domains.
Develop data pipelines, training code, experiment tooling, and automated deployment systems.
Use PyTorch for model development and W&B (or similar) for tracking experiments and lineage.
Implement monitoring for performance, drift, safety, and agent behavior.
Optimize inference for latency, throughput, and cost.
Work closely with engineering and product to turn prototypes into reliable production features.
Establish ML engineering best practices and mentor teammates.
What You Bring (Requirements)
5+ years of experience deploying ML systems to production.
Strong Python and deep experience with PyTorch.
Hands-on experience with LLMs, retrieval (RAG), tool use, or agent frameworks (e.g., LangGraph, AutoGen).
Strong command of experiment management, versioning, and reproducibility using tools like W&B, MLflow, or similar.
Solid understanding of data engineering and cloud infrastructure for ML workloads.
Ability to take ambiguous ideas to production in a fast-paced startup environment.
You'll Thrive Here If You... (Preferred)
Have experience with reinforcement learning and self-evaluating or self-correcting agents.
Have publications or patents.
Have experience with model compression and optimization (quantization, distillation).
Are familiar with safety evaluation, alignment techniques, and responsible AI practices.
What You'll Get
A path-defining role in a high-upside company backed by world-class investors.
Deep ownership and equity.
A team that cares deeply about what we're building and the people we build it with.
A rare chance to leave your mark on something that has the opportunity to become generational.
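The requirements above mention retrieval (RAG). As a toy illustration of the retrieval step only, here is a library-free bag-of-words sketch; production systems would use learned embeddings and a vector index, and the documents and query below are made up for the example.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the top-k documents most similar to the query;
    in a RAG pipeline these would be passed to the LLM as context."""
    qv = Counter(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: cosine(qv, Counter(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

# Hypothetical knowledge-base snippets:
docs = [
    "reset a customer password",
    "export invoices as csv",
    "schedule a weekly report",
]
print(retrieve("how do i reset my password", docs))
# → ['reset a customer password']
```

Swapping the token-overlap scorer for embedding similarity is the usual first upgrade; the surrounding retrieve-then-generate structure stays the same.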
2025-12-24 8:29
Performance Application Engineer, Intelligent Factory
Intrinsic
201-500
Full-time
Remote: false
Intrinsic is Alphabet’s bet aiming to reimagine the potential of industrial robotics. Our team believes that advances in AI, perception and simulation will redefine what’s possible for industrial robotics in the near future – with software and data at the core.
Our mission is to make industrial robotics intelligent, accessible, and usable for millions more businesses, entrepreneurs, and developers. We are a dynamic team of engineers, roboticists, designers, and technologists who are passionate about unlocking the creative and economic potential of industrial robotics.
Role
As a Senior AI Research Scientist for Vision-guided robotics you will lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. You will explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Your work will involve creating models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. You will collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities that can be deployed on physical hardware for industrial applications.
How your work moves the mission forward
Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios.
Design algorithms that enable robots to manipulate complex or deformable objects with high precision.
Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware.
Evaluate model performance in both simulation and real-world environments to ensure robustness and reliability.
Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems.
Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.
Skills you will need to be successful
PhD in Computer Science, Robotics, or a related field with a focus on machine learning or computer vision.
3 years of experience in applied research focused on robotic manipulation or robot learning.
Proficiency in programming with Python and C++.
Experience with deep learning frameworks such as PyTorch, JAX, or TensorFlow.
Experience developing algorithms for vision-based manipulation or contact-rich interaction.
Publication record in top-tier robotics or AI conferences (e.g., ICRA, IROS, CVPR, NeurIPS).
Skills that will differentiate your candidacy
Experience with reinforcement learning or imitation learning for robotics.
Familiarity with physics simulators like MuJoCo, Isaac Sim, or Gazebo.
Experience integrating tactile sensors with visual perception systems.
Experience in LfD (Learning from Demonstrations) and kinesthetic learning.
Background in sim-to-real transfer techniques for manipulation policies.
Experience with transformer-based architectures or foundation models in a robotics context.
Experience deploying machine learning models on edge compute hardware.
At Intrinsic, we are proud to be an equal opportunity workplace. Employment at Intrinsic is based solely on a person's merit and qualifications directly related to professional competence. Intrinsic does not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), or any other basis protected by law. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. It is Intrinsic’s policy to comply with all applicable national, state and local laws pertaining to nondiscrimination and equal opportunity.
If you have a disability or special need that requires accommodation, please contact us at: candidate-support@intrinsic.ai.
2025-12-24 8:14
Senior Software Engineer, ML Ops & Infrastructure
Intrinsic
201-500
Germany
Full-time
Remote: false
Intrinsic is Alphabet’s bet aiming to reimagine the potential of industrial robotics. Our team believes that advances in AI, perception and simulation will redefine what’s possible for industrial robotics in the near future – with software and data at the core.
Our mission is to make industrial robotics intelligent, accessible, and usable for millions more businesses, entrepreneurs, and developers. We are a dynamic team of engineers, roboticists, designers, and technologists who are passionate about unlocking the creative and economic potential of industrial robotics.
Role
As a Senior AI Research Scientist for Vision-guided robotics you will lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. You will explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Your work will involve creating models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. You will collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities that can be deployed on physical hardware for industrial applications.
How your work moves the mission forward
Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios.
Design algorithms that enable robots to manipulate complex or deformable objects with high precision.
Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware.
Evaluate model performance in both simulation and real-world environments to ensure robustness and reliability.
Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems.
Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.
Skills you will need to be successful
PhD in Computer Science, Robotics, or a related field with a focus on machine learning or computer vision.
3 years of experience in applied research focused on robotic manipulation or robot learning.
Proficiency in programming with Python and C++.
Experience with deep learning frameworks such as PyTorch, JAX, or TensorFlow.
Experience developing algorithms for vision-based manipulation or contact-rich interaction.
Publication record in top-tier robotics or AI conferences (e.g., ICRA, IROS, CVPR, NeurIPS).
Skills that will differentiate your candidacy
Experience with reinforcement learning or imitation learning for robotics.
Familiarity with physics simulators like MuJoCo, Isaac Sim, or Gazebo.
Experience integrating tactile sensors with visual perception systems.
Experience in LfD (Learning from Demonstrations) and kinesthetic learning.
Background in sim-to-real transfer techniques for manipulation policies.
Experience with transformer-based architectures or foundation models in a robotics context.
Experience deploying machine learning models on edge compute hardware.
At Intrinsic, we are proud to be an equal opportunity workplace. Employment at Intrinsic is based solely on a person's merit and qualifications directly related to professional competence. Intrinsic does not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), or any other basis protected by law. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. It is Intrinsic’s policy to comply with all applicable national, state and local laws pertaining to nondiscrimination and equal opportunity.
If you have a disability or special need that requires accommodation, please contact us at: candidate-support@intrinsic.ai.
2025-12-24 8:14
Robotics Software Engineer, Sensor-based Control and Robot Learning
Intrinsic
201-500
Singapore
Full-time
Remote: false
Intrinsic is Alphabet’s bet aiming to reimagine the potential of industrial robotics. Our team believes that advances in AI, perception and simulation will redefine what’s possible for industrial robotics in the near future – with software and data at the core.
Our mission is to make industrial robotics intelligent, accessible, and usable for millions more businesses, entrepreneurs, and developers. We are a dynamic team of engineers, roboticists, designers, and technologists who are passionate about unlocking the creative and economic potential of industrial robotics.
Role
As a Senior AI Research Scientist for Vision-guided robotics you will lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. You will explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Your work will involve creating models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. You will collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities that can be deployed on physical hardware for industrial applications.
How your work moves the mission forward
Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios.
Design algorithms that enable robots to manipulate complex or deformable objects with high precision.
Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware.
Evaluate model performance in both simulation and real-world environments to ensure robustness and reliability.
Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems.
Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.
Skills you will need to be successful
PhD in Computer Science, Robotics, or a related field with a focus on machine learning or computer vision.
3 years of experience in applied research focused on robotic manipulation or robot learning.
Proficiency in programming with Python and C++.
Experience with deep learning frameworks such as PyTorch, JAX, or TensorFlow.
Experience developing algorithms for vision-based manipulation or contact-rich interaction.
Publication record in top-tier robotics or AI conferences (e.g., ICRA, IROS, CVPR, NeurIPS).
Skills that will differentiate your candidacy
Experience with reinforcement learning or imitation learning for robotics.
Familiarity with physics simulators like MuJoCo, Isaac Sim, or Gazebo.
Experience integrating tactile sensors with visual perception systems.
Experience in LfD (Learning from Demonstrations) and kinesthetic learning.
Background in sim-to-real transfer techniques for manipulation policies.
Experience with transformer-based architectures or foundation models in a robotics context.
Experience deploying machine learning models on edge compute hardware.
At Intrinsic, we are proud to be an equal opportunity workplace. Employment at Intrinsic is based solely on a person's merit and qualifications directly related to professional competence. Intrinsic does not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), or any other basis protected by law. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. It is Intrinsic’s policy to comply with all applicable national, state and local laws pertaining to nondiscrimination and equal opportunity.
If you have a disability or special need that requires accommodation, please contact us at: candidate-support@intrinsic.ai.
2025-12-24 7:59
Full-stack Software Engineer, Perception Experience
Intrinsic
201-500
Germany
Full-time
Remote: false
Senior Cloud Operations Engineer
Intrinsic
201-500
Singapore
Full-time
Remote: false
Robotics Software Engineer, Robot Control and Hardware Integration
Intrinsic
201-500
Germany
Full-time
Remote: false
Robotics Software Engineer, Motion Planning
Intrinsic
201-500
United States
Full-time
Remote: false
Intern: AI-enabled Robotic and Dexterous Manipulation Research
Intrinsic
201-500
Germany
Intern
Remote: false
Senior Research Scientist, Robotic Foundation Models
Intrinsic
201-500
Germany
Full-time
Remote: false
Software Engineer, Simulation
Intrinsic
201-500
Singapore
Full-time
Remote: false
Automation Lab Engineer
Intrinsic
201-500
Singapore
Full-time
Remote: false
Intrinsic is Alphabet’s bet aiming to reimagine the potential of industrial robotics. Our team believes that advances in AI, perception and simulation will redefine what’s possible for industrial robotics in the near future – with software and data at the core.
Our mission is to make industrial robotics intelligent, accessible, and usable for millions more businesses, entrepreneurs, and developers. We are a dynamic team of engineers, roboticists, designers, and technologists who are passionate about unlocking the creative and economic potential of industrial robotics.Role
As a Senior AI Research Scientist for Vision-guided robotics you will lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. You will explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Your work will involve creating models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. You will collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities that can be deployed on physical hardware for industrial applications.
How your work moves the mission forward
Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios.
Design algorithms that enable robots to manipulate complex or deformable objects with high precision.
Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware.
Evaluate model performance in both simulation and real-world environments to ensure robustness and reliability.
Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems.
Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.
Skills you will need to be successful
PhD in Computer Science, Robotics, or a related field with a focus on machine learning or computer vision.
3 years of experience in applied research focused on robotic manipulation or robot learning.
Proficiency in programming with Python and C++.
Experience with deep learning frameworks such as PyTorch, JAX, or TensorFlow.
Experience developing algorithms for vision-based manipulation or contact-rich interaction.
Publication record in top-tier robotics or AI conferences (e.g., ICRA, IROS, CVPR, NeurIPS).
Skills that will differentiate your candidacy
Experience with reinforcement learning or imitation learning for robotics.
Familiarity with physics simulators like MuJoCo, Isaac Sim, or Gazebo.
Experience integrating tactile sensors with visual perception systems.
Experience with learning from demonstration (LfD) and kinesthetic teaching.
Background in sim-to-real transfer techniques for manipulation policies.
Experience with transformer-based architectures or foundation models in a robotics context.
Experience deploying machine learning models on edge compute hardware.
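As a concrete illustration of the imitation-learning item above: in its simplest form, behavior cloning reduces to supervised regression from observations to expert actions. A minimal NumPy sketch on synthetic demonstration data (all dimensions and data are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demonstrations: visual features -> gripper actions.
# In practice these would come from teleoperated demos on hardware.
obs = rng.normal(size=(500, 8))            # 8-dim features per frame
true_w = rng.normal(size=(8, 2))           # unknown expert mapping
acts = obs @ true_w + 0.01 * rng.normal(size=(500, 2))  # 2-DoF actions

# Behavior cloning: fit a policy minimizing mean squared error
# against the expert's actions (here, a linear least-squares fit).
w_hat, *_ = np.linalg.lstsq(obs, acts, rcond=None)

mse = np.mean((obs @ w_hat - acts) ** 2)
print(f"imitation MSE: {mse:.4f}")
```

Real systems replace the linear map with a deep network (e.g. in PyTorch) and the synthetic arrays with logged demonstrations, but the supervised structure is the same.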
At Intrinsic, we are proud to be an equal opportunity workplace. Employment at Intrinsic is based solely on a person's merit and qualifications directly related to professional competence. Intrinsic does not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), or any other basis protected by law. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. It is Intrinsic’s policy to comply with all applicable national, state and local laws pertaining to nondiscrimination and equal opportunity.
If you have a disability or special need that requires accommodation, please contact us at: candidate-support@intrinsic.ai.
2025-12-24 7:59
Senior Research Engineer, Data Engine
Intrinsic
201-500
Germany
Full-time
Remote
false
2025-12-24 7:59
Deep Learning Engineer, Perception
Intrinsic
201-500
Singapore
Full-time
Remote
false
2025-12-24 7:59
Senior Robotics Software Engineer, Intelligent Factory
Intrinsic
201-500
United States
Full-time
Remote
false
2025-12-24 7:59
Machine Learning Engineer Intern: 3D Generative AI (Q1 2026)
Genies
201-500
$40 – $50 / hour
United States
Intern
Remote
false
Genies is an avatar technology company powering the next era of interactive digital identity through AI companions. With the Avatar Framework and intuitive creation tools, Genies enables developers, talent, and creators to generate and deploy game-ready AI companions. The company’s technology stack supports full customization, AI-generated fashion and props, and seamless integration of user-generated content (UGC). Backed by investors including Bob Iger, Silver Lake, BOND, and NEA, Genies’ mission is to become the visual and interactive layer for the LLM-powered internet.
About the opportunity
We are looking for a Backend Software Engineer Intern (LLM) to join our AI Engineering Team based in San Francisco, CA or Los Angeles, CA. The team is responsible for developing the backend infrastructure powering the Genies Avatar AI framework. You will contribute to the next generation of AI-powered 3D avatar entertainment experiences and be involved in designing, coding, and testing software according to requirements and system plans. You will collaborate with senior engineers and other team members to develop software solutions, troubleshoot issues, and maintain the quality of our software. You will also be responsible for documenting your work for future reference and improvement.
Our internship program has a minimum duration of 12 weeks.
Key Responsibilities
Develop and deploy LLM agent systems within our AI-powered avatar framework.
Design and implement scalable and efficient backend systems to support AI applications
Collaborate with AI and NLP experts to integrate LLMs and LLM-based systems and algorithms into our avatar ecosystem.
Work with Docker, Kubernetes, and AWS for AI model deployment and scalability
Contribute to code reviews, debugging, and testing to ensure high-quality deliverables.
Minimum Qualifications
Currently pursuing, or a recent graduate of, a Bachelor's or Master's degree in Computer Science, Engineering, Machine Learning, or a related field.
Coursework or internship experience in areas such as Operating Systems, Data Structures & Algorithms, and Machine Learning
Strong programming skills in Python, Java, or C++
Excellent written and verbal communication skills
Basic understanding of AI/LLM concepts and enthusiasm for learning advanced techniques.
Preferred Qualifications
Experience building ML/LLM-powered software systems.
Previous Computer Science/Software Engineering Internship experience
Solid understanding of LLM agents, retrieval-augmented generation (RAG), and prompt engineering.
Experience with AWS, Docker and Kubernetes
Experience with CI/CD pipelines
Experience with API and schema design
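To illustrate the retrieval-augmented generation (RAG) item above: the retrieval step ranks documents by similarity to the query and prepends the best match to the LLM prompt. A minimal sketch using a toy bag-of-words similarity in place of a learned embedding model (documents and query are invented):

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; real systems use a learned model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Counter returns 0 for missing tokens, so this handles any pair.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "avatars can equip AI-generated fashion and props",
    "deploy models with docker and kubernetes on aws",
    "retrieval augmented generation grounds LLM answers in documents",
]

def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# The retrieved text becomes grounding context in the LLM prompt.
context = retrieve("how does retrieval augmented generation help an llm")[0]
prompt = f"Answer using this context:\n{context}\nQ: ..."
print(context)
```

In production the corpus lives in a vector database and the prompt goes to an LLM API; the rank-then-prepend structure stays the same.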
Here's why you'll love working at Genies:
Salary $40-$50 per hour.
You'll work with a team that you’ll be able to learn from and grow with, including support for your own professional development
You'll be at the helm of your own career, shaping it with your own innovative contributions to a nascent team and product, with flexible hours and a hybrid (office + home) policy
You'll enjoy the culture and perks of a startup, with the stability of being well funded
Flexible paid time off, sick time, and paid company holidays, in addition to paid parental leave, bereavement leave, and jury duty leave for full-time employees
Health & wellness support through programs such as monthly wellness reimbursement
Choice of MacBook or Windows laptop
Genies is an equal opportunity employer committed to promoting an inclusive work environment free of discrimination and harassment. We value diversity, inclusion, and aim to provide a sense of belonging for everyone.
2025-12-24 6:44
Research Engineer Intern, Speech Generation - Spring 2026
Genies
201-500
$40 – $50 / hour
United States
Intern
Remote
false
2025-12-24 6:44
