Robotics Software Testing Engineer, Factory Orchestration
This role leads the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks, exploring the intersection of computer vision and robotic control to design systems that let robots perceive and interact with objects in dynamic environments. Responsibilities include creating models that integrate visual data to guide physical manipulation; collaborating with a multidisciplinary team to translate concepts into deployable robotic capabilities; researching and developing deep learning architectures for visual perception and sensorimotor control; designing algorithms for manipulating complex or deformable objects with precision; optimizing and deploying prototypes onto robotic hardware; evaluating model performance in simulation and real-world environments for robustness; identifying opportunities to apply advances in computer vision and robot learning to industrial problems; and mentoring junior researchers while contributing to the technical direction of the research roadmap.
Senior Robotics Software Engineer, Mobile Robot Orchestration
Lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. Explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Create models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. Collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities that can be deployed on physical hardware for industrial applications. Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios. Design algorithms that enable robots to manipulate complex or deformable objects with high precision. Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware. Evaluate model performance in both simulation and real-world environments to ensure robustness and reliability. Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems. Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.
Robotics Software Engineer
Lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios. Design algorithms that enable robots to manipulate complex or deformable objects with high precision. Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware. Evaluate model performance in both simulation and real-world environments to ensure robustness and reliability. Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems. Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.
Research Engineer, SLAM & Multi-View Geometry
As a SLAM / Multi-View Geometry Engineer on the Robotics team, you will develop systems that enable robots to perceive, track, and reconstruct the world in 3D from multi-camera and multimodal sensor data. You will work on real-time and offline SLAM pipelines used during teleoperation and robot data collection, as well as scalable systems for reconstructing and tracking 3D structure from large datasets. Specific responsibilities include developing and deploying online SLAM systems used during robotic data collection with multi-camera sensor stacks and teleoperation platforms; building systems for large-scale 3D reconstruction and point tracking across massive datasets; working with research and engineering teams to scale multi-view geometry pipelines to large datasets; improving the accuracy, robustness, and scalability of perception systems used in robotics data collection and training pipelines; and collaborating across robotics, perception, and ML teams to integrate geometry-based methods with learned models.
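The multi-view geometry work described above builds on primitives such as two-view triangulation. As an illustrative sketch only (not this team's actual pipeline), here is a minimal linear (DLT) triangulation of a 3D point from two calibrated views:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2D image points (u, v) observed in each view.
    Returns the 3D point in world coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: u * (P[2] @ X) = P[0] @ X, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The least-squares homogeneous solution is the right singular
    # vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Example (hypothetical values): identity intrinsics, second camera
# translated one unit along x; project a known point and recover it.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

Production SLAM systems refine such linear estimates with nonlinear reprojection-error minimization, but the linear solve above is the standard starting point.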
Intern, Software Engineer - Perception
As a Perception Engineering Intern at Hayden AI, responsibilities include taking ownership of a real project and seeing it through to completion; building and shipping features with support from senior engineers; writing clean, scalable code; testing work and iterating quickly; being involved in all phases from design discussions to deployment; collaborating with engineers in code reviews and team discussions; participating in standups, sprint planning, and retrospectives; supporting the team on ad hoc engineering tasks; helping improve performance, reliability, or usability where needed; and asking questions, seeking feedback, and applying it quickly. Example deliverables and projects include GPS data analysis, training deep learning models, creating AI datasets, lidar/camera data tooling, test cases for end-to-end system performance, developing a cloud service in the event processing pipeline, and adding a page or new user flow to the Portal web application.
Sensing Systems Engineer
The Systems Engineer owns the holistic performance of Sesame’s wearable devices across the full stack: hardware selection and integration, firmware, signal processing, and application behavior. They research, evaluate, and recommend optimal sensor technologies and devices for various wearable applications, weighing physical, electrical, and software capabilities against cost, schedule, and user impact. They own the end-to-end performance of sensor systems from prototyping through mass production, managing latency, power consumption, thermal constraints, and reliability. The role includes defining system-level test plans, acceptance criteria, and specifications to ensure a high-quality user experience, and validating each layer of the stack. They design and supervise data collection strategies for the ground-truth datasets needed for algorithm and model development. They also develop, test, and implement signal processing, sensor fusion, and calibration systems that translate raw sensor data into usable outputs, and collaborate with the ML team to enhance Sesame agents’ responses by integrating sensor data.
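Sensor fusion of the kind described above is often prototyped with a simple complementary filter before moving to heavier estimators. A minimal sketch, assuming a single tilt angle fused from gyroscope and accelerometer readings (purely illustrative, not Sesame's actual stack):

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One fusion step for a tilt angle (degrees).

    The gyro integral is accurate short-term but drifts; the
    accelerometer tilt estimate is noisy but drift-free. Blending the
    two with weight alpha keeps the best of both.
    """
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Hypothetical scenario: a stationary device tilted 10 degrees.
# The gyro reads ~0 deg/s; the accelerometer tilt estimate reads ~10.
angle = 0.0
for _ in range(400):
    angle = complementary_filter(angle, gyro_rate=0.0,
                                 accel_angle=10.0, dt=0.01)
# The estimate converges toward the accelerometer's 10-degree reading.
```

The single weight `alpha` is the entire tuning surface here, which is why this filter is a common baseline before investing in a full Kalman or Madgwick-style estimator.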
Service Technician Associate I - Pittsburgh, PA (Contract)
Develop tools for validation and regression testing of image sensors, image processing pipelines, and hardware/software integration. Perform lab and real-world camera data collection and analysis. Participate in tuning sensor parameters and image processing pipelines to optimize image quality. Troubleshoot camera and image quality issues observed on autonomous vehicles. Design new hardware, and the software needed to support it, for the sensor range. Work with the perception software team to assess end-to-end camera performance.
Senior Manager, Perception
Lead high-impact Perception teams, owning the technical roadmap and milestone goals. Collaborate with AI and software leaders and with the simulation, systems design, and mission assurance teams to deliver a dynamic-objects perception system across various sensor modalities and perception pipelines. Build and lead a group of managers and engineers responsible for roadmap, productivity, execution, and impact. Set the vision for and grow a team of software engineers, providing technical leadership through the planning, execution, and delivery of complex technical projects. Collaborate across teams to brainstorm and accelerate perception capability development. Provide summaries, progress updates, and recommendations to executive leadership. Establish best practices and statistical rigor around data-driven decision-making. Stay current with industry and academic trends in AI and perception.
Senior Software Engineer, Pilots
As a Senior Software Engineer on the Pilots team, responsibilities include delivering robust, thoroughly tested, and maintainable C++ code for edge and robotics platforms; designing, implementing, and owning prototype perception systems that may transition into production-grade solutions; constructing and refining real-time perception pipelines, including detection, tracking, and sensor fusion; adapting and integrating ML and CV models for Hayden-specific applications; driving technical decision-making that balances prototyping speed with production readiness; collaborating with the Product team and cross-functional Engineering departments; and contributing to shared infrastructure, tooling, and architectural patterns as pilots mature into foundational products.
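Real-time tracking pipelines like the one described typically sit on a recursive state estimator. As a minimal sketch (the function name and parameters are assumptions for illustration, not Hayden's API, and production trackers would be C++), a constant-velocity Kalman filter over noisy 1D position measurements:

```python
import numpy as np

def kalman_track(measurements, dt=0.1, q=1e-3, r=0.25):
    """Filter 1D position measurements with a constant-velocity model.

    State is [position, velocity]; only position is measured.
    Returns the filtered position estimates.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # measure position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in measurements:
        # Predict: propagate state and covariance through the motion model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: blend in the measurement via the Kalman gain.
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out

# Hypothetical target moving at 2 units/s, sampled every 0.1 s.
meas = [0.2 * k for k in range(50)]
est = kalman_track(meas)
```

The same predict/update structure generalizes to multi-object tracking once paired with a data-association step, which is where most of the engineering effort in a production pipeline goes.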