AI Perception Engineer Jobs

Discover the latest remote and onsite AI Perception Engineer roles at top AI companies. Updated hourly.

Check out 20 new AI Perception Engineer opportunities posted on AI Chopping Block

Robotics Software Engineer - Manufacturing Automation

New
Top rated
Intrinsic
Full-time

Lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. Explore the intersection of computer vision and robotic control by designing systems that allow robots to perceive and interact with objects in dynamic environments. Create models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. Collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities that can be deployed on physical hardware for industrial applications.

Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios. Design algorithms that enable robots to manipulate complex or deformable objects with high precision. Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware. Evaluate model performance in simulation and real-world environments to ensure robustness and reliability. Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems. Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.

Undisclosed

Mountain View, United States
Maybe global
Onsite

Manager, Software - Perception (R3770)

New
Top rated
Shield AI
Full-time

Lead multidisciplinary teams in autonomy, integration, and testing by aligning technical efforts, resolving cross-functional challenges, and driving mission-focused execution while balancing hands-on technical oversight with performance optimization, innovation, and stakeholder communication.

Design and implement advanced perception algorithms for object detection, classification, and multi-target tracking across diverse sensors. Integrate data from vision systems, radars, and other sensors using probabilistic and deterministic fusion techniques to generate accurate situational awareness. Develop and refine state estimation algorithms for localization and pose estimation using IMU, GPS, vision, and other sensing inputs. Interpret sensor ICDs and technical specifications to ensure proper data handling and synchronization. Optimize perception pipelines for performance, robustness, and real-time efficiency in simulation and real-world environments. Collaborate closely with autonomy, systems, and integration teams to interface perception outputs with planning, behaviors, and decision-making modules. Validate algorithms using synthetic data, simulations, and field testing. Coordinate with hardware and sensor teams to integrate perception algorithms with onboard compute platforms and sensor payloads. Drive innovation in airborne sensing techniques for unmanned aircraft operating in complex or contested environments. Travel approximately 10-15% of the year to office locations, customer sites, and flight integration events.
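The state estimation and sensor fusion duties described in roles like this one typically build on recursive Bayesian filtering, for which the linear Kalman filter is the textbook starting point. Below is a minimal sketch; the motion model, noise values, and 1D tracking scenario are illustrative assumptions, not any company's actual stack:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x: state estimate, P: state covariance, z: measurement,
    F: state transition, H: measurement model, Q/R: process/measurement noise."""
    # Predict: propagate the state and its uncertainty forward one step.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement.
    y = z - H @ x_pred                    # innovation (measurement residual)
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Track 1D position + velocity from position-only measurements.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
H = np.array([[1.0, 0.0]])              # we only observe position
Q = 1e-4 * np.eye(2)                    # small process noise
R = np.array([[1.0]])                   # measurement noise
x, P = np.zeros(2), 500.0 * np.eye(2)   # uninformative prior
for t in range(1, 21):
    x, P = kalman_step(x, P, np.array([float(t)]), F, H, Q, R)
```

In practice the same predict/update structure extends to EKF/UKF variants for nonlinear sensor models, and the measurement vector would stack multiple sensor channels (e.g. GPS plus vision) to perform fusion.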

$220,441 – $330,661 per year (USD)

Washington, United States
Maybe global
Onsite

Tech Lead Manager SWE, SDK

New
Top rated
Intrinsic
Full-time

Lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. Explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Create models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. Collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities that can be deployed on physical hardware for industrial applications.

Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios. Design algorithms that enable robots to manipulate complex or deformable objects with high precision. Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware. Evaluate model performance in both simulation and real-world environments to ensure robustness and reliability. Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems. Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.

Undisclosed

Mountain View, United States
Maybe global
Onsite

Senior Robotics Software Engineer | Manipulation

New
Top rated
Gecko Robotics
Full-time

Architect and evolve Gecko Robotics' ROS2-based control framework and planning systems for articulated manipulators. Develop perception-driven motion planning using visual and other sensor inputs. Design closed-loop inspect → analyze → rework workflows. Optimize robotic inspection throughput within active manufacturing lines. Own system-level integration between robot control stack, industrial hardware, and Gecko's inspection software. Support system deployment and validation in production environments.
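Perception-driven motion planning like the work above usually decomposes into turning sensor data into a costmap and then searching it; A* over an occupancy grid is the classic baseline. A minimal sketch follows (the grid, start, and goal are made-up inputs for illustration; the actual ROS2 control framework and planners are not shown):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (1 = obstacle), 4-connected.
    Returns the path as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), start)]   # priority queue ordered by f = g + h
    g_best = {start: 0}
    parent = {start: None}
    closed = set()
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur in closed:
            continue
        closed.add(cur)
        if cur == goal:
            # Reconstruct the path by walking parent pointers back to start.
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g_best[cur] + 1
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    parent[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None

grid = [
    [0, 0, 0],
    [1, 1, 0],   # obstacle wall with a gap on the right
    [0, 0, 0],
]
path = astar(grid, (0, 0), (2, 0))
```

Real manipulator planning searches configuration space rather than a 2D grid, but the costmap-plus-graph-search pattern carries over directly.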

$136,000 – $160,000 per year (USD)

Pittsburgh, United States
Maybe global
Hybrid

Robotics Software Testing Engineer, Factory Orchestration

New
Top rated
Intrinsic
Full-time

The role involves leading the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks, and exploring the intersection of computer vision and robotic control to design systems that allow robots to perceive and interact with objects in dynamic environments.

Responsibilities include creating models that integrate visual data to guide physical manipulation; collaborating with a multidisciplinary team to translate concepts into deployable robotic capabilities; researching and developing deep learning architectures for visual perception and sensorimotor control; designing algorithms for manipulating complex or deformable objects with precision; optimizing and deploying prototypes onto robotic hardware; evaluating model performance in simulation and real-world environments for robustness; identifying opportunities to apply advancements in computer vision and robot learning to industrial problems; and mentoring junior researchers while contributing to the technical direction of the research roadmap.

Undisclosed

Singapore
Maybe global
Onsite

Senior Robotics Software Engineer, Mobile Robot Orchestration

New
Top rated
Intrinsic
Full-time

Lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. Explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Create models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. Collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities that can be deployed on physical hardware for industrial applications.

Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios. Design algorithms that enable robots to manipulate complex or deformable objects with high precision. Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware. Evaluate model performance in both simulation and real-world environments to ensure robustness and reliability. Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems. Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.

Undisclosed

Singapore
Maybe global
Onsite

Robotics Software Engineer

New
Top rated
Intrinsic
Full-time

Lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios. Design algorithms that enable robots to manipulate complex or deformable objects with high precision. Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware. Evaluate model performance in both simulation and real-world environments to ensure robustness and reliability. Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems. Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.

Undisclosed

Mountain View, United States
Maybe global
Onsite

Research Engineer, SLAM & Multi-View Geometry

New
Top rated
OpenAI
Full-time

As a SLAM / Multi-View Geometry Engineer on the Robotics team, you will develop systems that enable robots to perceive, track, and reconstruct the world in 3D from multi-camera and multimodal sensor data. You will work on real-time and offline SLAM pipelines used during teleoperation and robot data collection, as well as scalable systems for reconstructing and tracking 3D structure from large datasets.

Specific responsibilities include developing and deploying online SLAM systems used during robotic data collection with multi-camera sensor stacks and teleoperation platforms; building systems for large-scale 3D reconstruction and point tracking across massive datasets; working with research and engineering teams to scale multi-view geometry pipelines to large datasets; improving the accuracy, robustness, and scalability of perception systems used in robotics data collection and training pipelines; and collaborating across robotics, perception, and ML teams to integrate geometry-based methods with learned models.
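At the core of multi-view geometry work like this is triangulation: recovering a 3D point from its projections in calibrated views. Here is a minimal linear (DLT) sketch in NumPy; the camera poses and test point are invented for illustration, and a production pipeline would add robust estimation and bundle adjustment on top:

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; u1, u2: normalized image coords."""
    # Each observation contributes two rows to the homogeneous system A X = 0.
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two identity-intrinsics cameras: one at the origin, one shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 3.0])
x1 = P1 @ np.append(X_true, 1.0)
x2 = P2 @ np.append(X_true, 1.0)
u1, u2 = x1[:2] / x1[2], x2[:2] / x2[2]   # project into each view
X_est = triangulate(P1, P2, u1, u2)
```

The same two-rows-per-view construction generalizes to N views, which is how point tracks across many cameras get lifted to 3D at scale.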

$380,000 – $445,000 per year (USD)

San Francisco, United States
Maybe global
Hybrid

Intern, Software Engineer - Perception

New
Top rated
Hayden AI
Intern

As a Perception Engineering Intern at Hayden AI, you will take ownership of a real project and see it through to completion: building and shipping features with support from senior engineers, writing clean and scalable code, testing your work and iterating quickly, and staying involved in every phase from design discussions to deployment. You will collaborate with engineers in code reviews and team discussions; participate in standups, sprint planning, and retrospectives; support the team on ad hoc engineering tasks; help improve performance, reliability, or usability where needed; and ask questions, seek feedback, and apply it quickly.

Example deliverables and projects include GPS data analysis, training deep learning models, creating AI datasets, lidar/camera data tooling, test cases for end-to-end system performance, developing a cloud service in the event processing pipeline, and adding a page or new user flow to the Portal web application.

$45 per hour (USD)

San Francisco, United States
Maybe global
Hybrid

Senior Software Engineering Lead, Resilience and Chaos Engineering

New
Top rated
Intrinsic
Full-time

Lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. Explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Create models that integrate visual data to guide physical manipulation for sophisticated handling of diverse items. Collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities deployable on physical hardware for industrial applications.

Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios. Design algorithms that enable robots to manipulate complex or deformable objects with high precision. Work with software engineers to optimize and deploy research prototypes onto physical robotic hardware. Evaluate model performance in simulation and real-world environments to ensure robustness and reliability. Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems. Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.

Undisclosed

Singapore
Maybe global
Onsite

Want to see more AI Perception Engineer jobs?

View all jobs

Access all 4,256 remote & onsite AI jobs.

Join our private AI community to unlock full job access, and connect with founders, hiring managers, and top AI professionals.
(Yes, it’s still free—your best contributions are the price of admission.)

Frequently Asked Questions

Have questions about roles, locations, or requirements for AI Perception Engineer jobs?


What does an AI Perception Engineer do?

AI Perception Engineers research, design, and develop algorithms that help machines understand their environment through sensors like cameras, LiDAR, and radar. They work on object detection, tracking, classification, scene understanding, and sensor fusion algorithms. Their responsibilities include prototyping systems, developing data pipelines, optimizing models for deployment, and conducting performance analysis. They typically work on autonomous vehicles, robotics, or computer vision applications while staying current with research advancements.

What skills are required for an AI Perception Engineer?

Successful AI Perception Engineers need strong programming skills in Python and C++, experience with computer vision libraries like OpenCV, and proficiency in deep learning frameworks like PyTorch. They should understand sensor technologies (cameras, LiDAR, radar), multi-object tracking algorithms, and sensor fusion techniques. Problem-solving abilities, data analysis expertise, and experience with simulation environments are highly valuable, and knowledge of deployment tools such as Docker and AWS enhances their effectiveness.

What qualifications are needed for an AI Perception Engineer role?

Most AI Perception Engineer positions require a Master's or PhD in Computer Science, Electrical Engineering, Computer Vision, or a related field. Employers typically look for 2–4 years of relevant experience, though senior roles may require 4+ years; a Bachelor's degree with at least 3 years of industry experience may suffice in some cases. Demonstrated expertise in machine learning, computer vision, and sensor calibration is essential, along with a portfolio showing experience with perception algorithms.

What is the salary range for AI Perception Engineer jobs?

Compensation varies with education level (Master's vs. PhD), years of experience, geographic location, company size, and industry (autonomous vehicles, robotics, etc.). The listings on this page range from $45 per hour for internships up to $380,000 – $445,000 per year for senior research roles. Specialized knowledge in areas like sensor fusion and multimodal perception, along with deployment experience, can command premium compensation in this highly specialized field.

How long does it take to get hired as an AI Perception Engineer?

Timelines vary by company, but the hiring process typically includes technical assessments of algorithm development skills and computer vision knowledge, often with coding challenges related to perception problems. Given the experience and education requirements above, candidates should expect a competitive and thorough evaluation process.

Are AI Perception Engineer jobs in demand?

Yes. Companies across autonomous vehicles, robotics, and computer vision applications are actively seeking candidates with specialized skills in algorithm development, sensor fusion, and perception systems. The field's technical complexity, which requires both theoretical knowledge and practical implementation skills, creates ongoing demand for qualified engineers, and as perception systems become critical in more industries, this specialized ML role continues to grow in importance.