Machine Learning Engineer - Perception Mapping
As a software engineer on the perception mapping team at Zoox, you will curate, validate, and label datasets for model training and validation. You will research, implement, and train machine learning models to perform semantic map element detection and closely collaborate with validation teams to formulate and execute model validation pipelines. You will integrate models into the greater onboard autonomy system within compute budgets. Additionally, you will serve as a technical leader on the team, maintaining coding and ML development best practices and contributing to architectural decisions.
Robotics Engineer
As a Software Engineer in the Robotics and Automation group, you will design and deploy systems that automate materials science research and discovery laboratories, specializing in robotics, automation, and perception software development. You will architect and develop the software that controls and orchestrates robotic workcells for autonomous materials experimentation, and design scalable control frameworks for flexible automation involving robots, motion systems, sensors, and lab instruments. You will collaborate with hardware, mechatronics, and science teams to translate experimental workflows into reliable automated processes; build and maintain APIs and services for scheduling, execution, monitoring, and data capture; develop simulation, testing, and validation tools to accelerate development and ensure system reliability; integrate 2D and 3D vision systems with robotic manipulation, motion planning, and execution; and optimize system performance, robustness, and throughput under rapid iteration cycles. You will also contribute to technical direction, architecture decisions, and best practices; mentor junior engineers and help establish engineering standards; and foster collaboration and open-mindedness to empower the team to deliver world-class technology at unprecedented speed.
Perception Engineer
Design, implement, and deploy 2D and 3D vision systems for robotic manipulation, inspection, state verification, and sensor fusion. Develop vision-guided automation solutions integrating cameras, lighting, optics, and robots in laboratory and industrial environments. Implement perception pipelines for object detection, segmentation, pose estimation, and feature extraction, and own camera calibration and system-level accuracy validation. Develop novel algorithms for state estimation of fluids and particle flows. Integrate vision outputs with robot motion planning, grasping, and task execution, and tune and harden vision systems for robustness against variability in materials, reflections, and environmental conditions. Collaborate with software, mechatronics, and mechanical teams to translate experimental and operational needs into automated solutions. Contribute to technical direction, architecture decisions, and best practices across the robotics, perception, and automation software stack, and bring an attitude of collaboration and open-mindedness that enables fearless, creative problem solving and empowers the team to ship world-class technology at unprecedented speed.
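To make the "integrate vision outputs with robot motion planning" duty concrete, here is a minimal sketch of one such step: mapping an object position detected in the camera frame into the robot base frame via a rigid transform. The function names, the example rotation, and the offsets are illustrative placeholders, not details of any specific stack.

```python
import numpy as np

def rotation_z(theta: float) -> np.ndarray:
    """Rotation matrix for a rotation of `theta` radians about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def camera_to_base(p_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map a point from the camera frame to the robot base frame: p_base = R @ p_cam + t."""
    return R @ p_cam + t

# Example extrinsics: camera rotated 90 degrees about z, offset 0.5 m along x.
R = rotation_z(np.pi / 2)
t = np.array([0.5, 0.0, 0.0])
p_cam = np.array([1.0, 0.0, 0.0])   # detected object, 1 m along the camera x-axis
p_base = camera_to_base(p_cam, R, t)
```

In practice the extrinsics `R` and `t` come from the camera-calibration and hand-eye-calibration work the posting describes, rather than being hard-coded.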
Signal Processing Intern
As an intern in the DSP team at Zoox, you will be working on the design and implementation of signal processing and machine learning algorithms related to radars, depth cameras, lidars, and audio subsystems. You will collaborate with a team of engineers from diverse backgrounds, working on code, algorithms, and research to create and refine key systems enabling autonomous mobility. The work involves understanding and applying concepts in digital signal processing and algorithm design for radar and lidar processing.
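As a flavor of the radar signal processing mentioned above, the sketch below shows pulse compression of a linear FM chirp via matched filtering: the delayed echo is correlated against the transmitted pulse, and the correlation peak recovers the delay. All waveform parameters and noise levels are arbitrary illustrative values, not drawn from any Zoox system.

```python
import numpy as np

fs = 1e6                                   # sample rate, Hz
t = np.arange(0, 1e-4, 1 / fs)             # 100-sample pulse
chirp = np.exp(1j * np.pi * 1e9 * t**2)    # linear FM sweep, 0 -> 100 kHz

# Received signal: the chirp delayed by 30 samples, plus complex noise.
rng = np.random.default_rng(0)
delay = 30
rx = 0.1 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
rx[delay:delay + len(chirp)] += chirp

# Matched filter: correlate against the transmitted pulse
# (np.correlate conjugates its second argument for complex inputs).
mf = np.correlate(rx, chirp, mode="valid")
peak = int(np.argmax(np.abs(mf)))          # index of the compressed pulse
```

The peak index equals the round-trip delay in samples, which is what a range estimator would convert to distance.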
Senior / Staff Software Engineer - Perception 3D Tracking
The role involves defining the on-vehicle architecture that produces core tracking results from the Perception stack, working with both model and optimization teams to build a highly performant, efficient system that runs on the vehicle, handling Perception data on both the input and output sides of machine-learned models, and integrating tracking output into the larger behavioral system in the Autonomy stack.
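The textbook core of a tracking pipeline like the one described above is a recursive state estimator. As a hedged sketch, here is one predict/update cycle of a constant-velocity Kalman filter; all matrices and noise levels are illustrative placeholders, not details of the actual system.

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.25]])                  # measurement noise covariance

def kf_step(x, P, z):
    """One Kalman predict + update. x: state (2,), P: covariance (2,2), z: measurement (1,)."""
    # Predict: propagate state and covariance through the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend the prediction with the measurement via the Kalman gain.
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x = np.array([0.0, 1.0])                # position 0, velocity 1 m/s
P = np.eye(2)
x, P = kf_step(x, P, np.array([0.12]))  # measurement near the predicted 0.1
```

The updated position lands between the prediction and the measurement, weighted by their relative uncertainties.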
Senior Robotics Software Engineer, Intelligent Factory
Lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. Explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Create models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. Collaborate with a multidisciplinary team to translate cutting-edge concepts into robust capabilities deployable on physical hardware for industrial applications. Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios. Design algorithms enabling robots to manipulate complex or deformable objects with high precision. Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware. Evaluate model performance in simulation and real-world environments to ensure robustness and reliability. Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems. Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.
Multi‑Target Tracking & Sensor Fusion Engineer (R4172)
Design, research, and implement state-of-the-art multi-target tracking and data association algorithms. Develop production-quality C++ software for deployed military aviation platforms, ensuring deterministic, real-time performance. Build and maintain comprehensive unit, integration, and system-level tests to validate algorithm correctness and robustness. Enhance and calibrate sensor models in advanced simulation and hardware-in-the-loop (HWIL) environments. Collaborate on feature planning, decomposition, and milestone execution within an agile development framework. Contribute to flight-test planning, performance analysis, benchmarking, and regression evaluation. For principal-level applicants, provide technical leadership, design reviews, algorithmic mentorship, and subject-matter expertise across the autonomy organization.
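One ingredient of the multi-target tracking work described above is data association: pairing predicted track positions with incoming detections. The sketch below uses greedy nearest-neighbor association with a distance gate; production systems typically use a global assignment (e.g. the Hungarian algorithm) with statistical gating, so this simplified version is illustrative only, and all names and values are placeholders.

```python
import numpy as np

def associate(tracks: np.ndarray, detections: np.ndarray, gate: float) -> dict:
    """Greedily pair each track with its nearest ungated detection.

    tracks, detections: (N, 2) and (M, 2) arrays of 2D positions.
    gate: maximum Euclidean distance allowed for a pairing.
    Returns {track_index: detection_index} for the accepted pairs.
    """
    # Pairwise Euclidean distances, shape (N, M).
    dists = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
    pairs, used = {}, set()
    # Visit candidate pairs in order of increasing distance.
    for flat in np.argsort(dists, axis=None):
        i, j = np.unravel_index(flat, dists.shape)
        if i in pairs or j in used or dists[i, j] > gate:
            continue                     # track/detection taken, or outside gate
        pairs[i] = j
        used.add(j)
    return pairs

tracks = np.array([[0.0, 0.0], [10.0, 0.0]])
detections = np.array([[9.5, 0.2], [0.3, -0.1]])
matches = associate(tracks, detections, gate=2.0)
```

Here each track matches the detection nearest to it, and a detection farther than the gate from every track would be left unassigned, becoming a candidate for new-track initiation.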
Engineering Manager, State Estimation
The Engineering Manager, State Estimation will lead and manage a team of engineers developing advanced mapping, localization, and SLAM algorithms. Responsibilities include setting technical direction and project priorities aligned with organizational goals, driving cross-functional projects from conception to deployment while coordinating with research, hardware, and product teams. The role involves providing technical guidance on algorithm design, system architecture, and implementation, ensuring best practices, performance optimization, and fostering a culture of innovation and collaboration. The manager will develop and maintain project roadmaps, milestones, and documentation, partner with senior leadership for long-term strategy definition in localization, state estimation, and multi-sensor calibration, review and approve designs to ensure technical excellence and scalability, and lead recruitment, hiring, and performance management to grow the engineering team.
