Staff Software Engineer
As a Staff Software Engineer on the Perception team, you will define and drive the long-term vision and architecture for perception systems, architecting complex, scalable, and robust end-to-end perception and robotics systems for deployment on real-world hardware and ensuring their successful integration into Hayden's core product platform. You will spearhead the architectural design, implementation, and long-term ownership of next-generation perception systems; transition research prototypes into production solutions; and deliver high-performance, tested, and maintainable C++ code optimized for edge and robotics platforms. You will also architect and optimize real-time perception pipelines, drive the integration of state-of-the-art ML and CV models, provide technical leadership in complex problem domains, collaborate with Product leadership and Engineering organizations, and contribute to foundational shared infrastructure, tooling, and architectural patterns that scale pilot initiatives into core product capabilities.
Senior State Estimation Engineer
As a Senior State Estimation Engineer at Hayden AI, you will derive and implement novel real-time pose estimation algorithms; research, develop, and implement algorithms for large-scale mapping and related problems; collaborate with other engineers on multi-sensor calibration, both in situ and in the factory; develop software in C++ and/or Python; contribute to high-impact multidisciplinary projects across teams; collaborate with the deep learning, device, and cloud teams to improve overall system architectures; and mentor junior engineers.
Software Engineer Intern, Perception Data
The internship opportunity is within the Perception Data team, focusing on tooling and pipelines around autonomous vehicle perception data, specifically systems that discover, curate, and deliver high-quality training data at scale. Interns will work on Zoox's configurable data mining framework to support large-scale extraction of labeled and unlabeled data from autonomous vehicle logs. Responsibilities include building new mining strategies involving run loading, sample selection, and storage stages, integrating ML-driven data curation such as embedding-based search and vision-language model filtering, scaling pipelines across distributed computing resources, and improving developer experience involving mining configuration and observability. Interns will gain experience with mining targeted, high-quality training data from petabyte-scale autonomous driving logs to improve perception models for safe self-driving.
Robotics Software Engineer - Manufacturing Automation
Lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. Explore the intersection of computer vision and robotic control by designing systems that allow robots to perceive and interact with objects in dynamic environments. Create models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. Collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities that can be deployed on physical hardware for industrial applications. Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios. Design algorithms that enable robots to manipulate complex or deformable objects with high precision. Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware. Evaluate model performance in simulation and real-world environments to ensure robustness and reliability. Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems. Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.
Manager, Software - Perception (R3770)
Lead multidisciplinary teams in autonomy, integration, and testing by aligning technical efforts, resolving cross-functional challenges, and driving mission-focused execution while balancing hands-on technical oversight with performance optimization, innovation, and stakeholder communication. Design and implement advanced perception algorithms for object detection, classification, and multi-target tracking across diverse sensors. Integrate data from vision systems, radars, and other sensors using probabilistic and deterministic fusion techniques to generate accurate situational awareness. Develop and refine state estimation algorithms for localization and pose estimation using IMU, GPS, vision, and other sensing inputs. Interpret sensor ICDs and technical specifications to ensure proper data handling and synchronization. Optimize perception pipelines for performance, robustness, and real-time efficiency in simulation and real-world environments. Collaborate closely with autonomy, systems, and integration teams to interface perception outputs with planning, behaviors, and decision-making modules. Validate algorithms using synthetic data, simulations, and field testing. Coordinate with hardware and sensor teams to integrate perception algorithms with onboard compute platforms and sensor payloads. Drive innovation in airborne sensing techniques for unmanned aircraft operating in complex or contested environments. Travel approximately 10-15% of the year to office locations, customer sites, and flight integration events.
Tech Lead Manager SWE, SDK
Senior Robotics Software Engineer | Manipulation
Architect and evolve Gecko Robotics' ROS2-based control framework and planning systems for articulated manipulators. Develop perception-driven motion planning using visual and other sensor inputs. Design closed-loop inspect → analyze → rework workflows. Optimize robotic inspection throughput within active manufacturing lines. Own system-level integration between robot control stack, industrial hardware, and Gecko's inspection software. Support system deployment and validation in production environments.
Robotics Software Testing Engineer, Factory Orchestration
Senior Robotics Software Engineer, Mobile Robot Orchestration
Robotics Software Engineer
