Sensing Systems Engineer
The Sensing Systems Engineer owns the holistic performance of Sesame's wearable devices across the full stack: hardware selection and integration, firmware, signal processing, and application behavior. They research, evaluate, and recommend optimal sensor technologies and devices for wearable applications, weighing physical, electrical, and software capabilities against cost, schedule, and user impact. They own the end-to-end performance of sensor systems from prototyping through mass production, managing latency, power consumption, thermal constraints, and reliability. The role includes defining system-level test plans, acceptance criteria, and specifications that ensure a high-quality user experience, and validating each layer of the stack. They design and supervise data-collection strategies for the ground-truth data sets needed for algorithm and model development. Additionally, they develop, test, and implement signal processing, sensor fusion, and calibration systems that translate raw sensor data into usable outputs, and collaborate with the ML team to enhance Sesame agents' responses by integrating sensor data.
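To make the sensor-fusion and calibration duties concrete, here is a minimal sketch of one classic fusion technique, the complementary filter, which blends a drifting-but-smooth gyroscope integral with a noisy-but-drift-free accelerometer tilt estimate. The function names, constants, and axis conventions are illustrative assumptions, not Sesame's actual pipeline:

```python
import math

def accel_tilt(ax, az):
    """Tilt angle (radians) estimated from accelerometer x/z components.
    Noisy sample-to-sample, but has no long-term drift."""
    return math.atan2(ax, az)

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse the integrated gyro rate (smooth, drifts over time) with the
    accelerometer tilt estimate (noisy, drift-free) into one angle.
    alpha controls how much the gyro path is trusted per step."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

In a real wearable, `alpha` would be tuned against ground-truth orientation data of exactly the kind this role's data-collection duties are meant to produce.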
Senior Autonomy Systems Test Engineer
As a Senior Autonomy Systems Test Engineer, you will accelerate product development by helping developers build safe and reliable autonomous driving software. You will oversee the development of extensive test plans, build standardized simulated-test design tooling and processes to execute varied scenarios, validate end-to-end behaviors, and create triage pipelines for analyzing issues found during offline and on-vehicle testing. You will regularly participate in in-vehicle testing missions to observe feature behavior in real-world conditions, help triage and root-cause issues seen during testing, analyze test results to catch regressions in existing functionality, and update testing processes to scale QA efficiency. Specific duties: create test strategies and test plans for self-driving behavior features; identify, track, report, and resolve test-strategy, planning, or implementation issues with cross-functional software teams; propose and validate new testing methodologies for the AI stack using automated metrics; design, develop, and execute synthetic and log-based test scenarios on an in-house simulation test framework; and compile triage strategies and results from different QA validation platforms and pipelines.
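The regression-analysis duty can be sketched as a simple per-scenario metric comparison between a baseline build and a candidate build. The scenario names, the higher-is-better score convention, and the tolerance are assumptions for illustration, not the team's actual tooling:

```python
def find_regressions(baseline, candidate, tolerance=0.05):
    """Compare per-scenario scores (higher is better) between a baseline
    build and a candidate build.  Return scenarios whose score dropped by
    more than `tolerance`, or that are missing from the candidate run."""
    regressions = []
    for scenario, base_score in baseline.items():
        cand_score = candidate.get(scenario)
        if cand_score is None:
            regressions.append((scenario, "missing result"))
        elif base_score - cand_score > tolerance:
            regressions.append((scenario, f"dropped {base_score - cand_score:.3f}"))
    return regressions
```

A report like this, run automatically on every candidate build, is what "ensure no regression in existing functionality" looks like at its smallest scale.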
Automotive and Robotics SoC Architect
As an Automotive and Robotics SoC Architect, you will define scalable, top-down system architectures that unify CPU and AI technologies for next-generation automotive applications. This role involves shaping the architectural direction of the automotive and robotics portfolio to ensure products meet the industry's high standards for performance, safety, reliability, and security. The position requires strong technical leadership, systems thinking, and cross-functional collaboration to deliver world-class automotive solutions.
[MS/PhD Intern] AI Engineer (Full-Time Conversion Track)
This position is an internship with the Autonomous Driving Group, where the candidate will engage in research and development aimed at carrying research results through to actual mass-production autonomous driving systems. Responsibilities include end-to-end design, implementation, and validation of core autonomous driving technologies; designing and validating algorithms and models based on real vehicle data; performance analysis and improvement through simulation and real-road experiments; translating research outcomes into deployable system components; and close collaboration with production teams within the AD Group to solve problems. Depending on the specialization, tasks may include implementing perception and prediction ML models; preprocessing and analyzing driving data; evaluating model performance and analyzing results; object-level fusion and tracking using sensor data; improving real-time fusion logic; developing SLAM and localization algorithms; integrating and debugging vehicle software in a Linux environment; designing data pipelines for autonomous-driving data collection and analysis; researching vision-language-action models; and building training and evaluation pipelines in cross-department collaboration.
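The object-level fusion task mentioned above can be illustrated with a deliberately simple sketch: greedy nearest-neighbour pairing of camera and radar detections inside a distance gate, with position averaging for matched pairs. The sensor names, units, and gate value are hypothetical, and real systems weight by per-sensor uncertainty rather than averaging:

```python
def fuse_detections(cam_objs, radar_objs, gate=2.0):
    """Object-level fusion sketch: greedily pair each camera detection
    with the nearest unmatched radar detection within `gate` metres and
    average the positions; unmatched detections pass through unchanged."""
    used = set()
    fused = []
    for cx, cy in cam_objs:
        best, best_d = None, gate
        for i, (rx, ry) in enumerate(radar_objs):
            if i in used:
                continue
            d = ((cx - rx) ** 2 + (cy - ry) ** 2) ** 0.5
            if d < best_d:
                best, best_d = i, d
        if best is None:
            fused.append((cx, cy))       # camera-only object
        else:
            used.add(best)
            rx, ry = radar_objs[best]
            fused.append(((cx + rx) / 2, (cy + ry) / 2))
    # radar-only objects survive as well
    fused.extend(r for i, r in enumerate(radar_objs) if i not in used)
    return fused
```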
Multi-Agents Mission Planning Engineer
Design algorithms that decompose high-level missions into structured, solvable guidance tasks for autonomous robots. Develop and optimize mission- and path-planning frameworks for autonomous systems. Build scalable backend integrations for mission guidance and execution. Run simulations and validation campaigns to assess autonomy consistency across diverse mission types. Partner with AI/ML, backend, and product teams to ensure algorithms are efficient, testable, and deployable in real-time environments.
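A minimal sketch of mission decomposition, under the simplifying assumption that a mission is just a set of goal points: order the goals with a greedy nearest-neighbour heuristic and emit one guidance task per leg. The task schema (`goto`/`return`) is invented for illustration:

```python
import math

def decompose_mission(start, goals):
    """Decompose a high-level mission (visit all goals, then return home)
    into an ordered list of per-leg guidance tasks.  Uses a greedy
    nearest-neighbour ordering; real planners would use TSP solvers,
    cost maps, and feasibility checks."""
    remaining = list(goals)
    position = start
    tasks = []
    while remaining:
        nxt = min(remaining, key=lambda g: math.dist(position, g))
        remaining.remove(nxt)
        tasks.append({"type": "goto", "from": position, "to": nxt})
        position = nxt
    tasks.append({"type": "return", "from": position, "to": start})
    return tasks
```

Each emitted task is then small enough to hand to a path planner as an independent, solvable sub-problem.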
Senior Engineer, XBAT Simulation Modeling
Build and scale simulation frameworks for integrated testing of autonomy, GNC, and embedded systems in C++. Design deterministic, high-performance simulation tools capable of faster-than-real-time execution for development, testing, and release. Implement scenario simulation tooling and formal test infrastructure. Collaborate across autonomy, embedded, GNC, and test engineering to ensure the simulation mirrors real aircraft behavior and mission scenarios. Develop infrastructure for CI integration, parallel simulation execution, and automated regression testing. Profile, optimize, and validate C++ codebases for performance, determinism, and fidelity. Contribute to architecture decisions that define the next generation of aircraft simulation tools within Shield AI. Mentor engineers and guide best practices in C++, simulation architecture, and performance engineering.
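The deterministic, faster-than-real-time requirement comes down to a fixed-timestep loop that never consults the wall clock; a minimal sketch (in Python rather than the role's C++, purely for brevity):

```python
def run_simulation(step_fn, state, dt, duration):
    """Deterministic fixed-timestep loop: advance `state` with a pure
    step function at an exact dt, decoupled from wall-clock time, so the
    run is bit-reproducible and executes as fast as the CPU allows."""
    steps = round(duration / dt)
    t = 0.0
    for i in range(steps):
        state = step_fn(state, t, dt)
        t = (i + 1) * dt   # recompute from the index to limit float drift
    return state, t
```

Because `step_fn` receives `dt` explicitly and nothing reads a real-time clock, the same scenario replays identically in CI, in parallel batch runs, and in debugging sessions.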
Robotics Engineer
As a Software Engineer in the Robotics and Automation group, you will design and deploy systems that automate material-science research and discovery laboratories, specializing in robotics, automation, and perception software development. You will architect and develop software systems that control and orchestrate robotic workcells for autonomous materials experimentation, and design scalable control frameworks for flexible automation involving robots, motion systems, sensors, and lab instruments. Your role involves collaborating with hardware, mechatronics, and science teams to translate experimental workflows into reliable automated processes; building and maintaining APIs and services for scheduling, execution, monitoring, and data capture; developing simulation, testing, and validation tools to accelerate development and ensure system reliability; integrating 2D and 3D vision systems with robotic manipulation, motion planning, and execution; optimizing system performance, robustness, and throughput under rapid iteration cycles; contributing to technical direction, architecture decisions, and best practices; mentoring junior engineers and helping establish engineering standards; and fostering collaboration and open-mindedness so the team can deliver world-class technology at unprecedented speed.
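The scheduling-and-execution services mentioned above rest on ordering workcell steps by their prerequisites; a minimal sketch using a topological sort (the step names are invented examples, not a real lab protocol):

```python
from collections import deque

def schedule_steps(steps):
    """Order experiment steps so each runs only after its prerequisites.
    `steps` maps step name -> list of prerequisite step names.
    Raises on cyclic dependencies, which indicate a malformed workflow."""
    indegree = {s: len(deps) for s, deps in steps.items()}
    dependents = {s: [] for s in steps}
    for s, deps in steps.items():
        for d in deps:
            dependents[d].append(s)
    ready = deque(sorted(s for s, n in indegree.items() if n == 0))
    order = []
    while ready:
        s = ready.popleft()
        order.append(s)
        for nxt in dependents[s]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(steps):
        raise ValueError("cyclic dependency between steps")
    return order
```

A production orchestrator layers resource locking, retries, and monitoring on top, but the dependency ordering is the core invariant.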
Senior AI Research Scientist, Vision-Guided Robotics
As a Senior AI Research Scientist for Vision-guided robotics, you will lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. Your work involves exploring the intersection of computer vision and robotic control to design systems that allow robots to perceive and interact with objects in dynamic environments. You will create models that integrate visual data to guide physical manipulation, advancing beyond simple grasping to sophisticated handling of diverse items. Collaboration with a multidisciplinary team of engineers and researchers is required to translate cutting-edge concepts into robust capabilities deployable on physical hardware for industrial applications. Responsibilities include researching and developing deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios, designing algorithms for manipulation of complex or deformable objects with high precision, collaborating with software engineers to optimize and deploy research prototypes onto robotic hardware, evaluating model performance in simulation and real-world settings to ensure robustness, identifying opportunities to apply state-of-the-art computer vision and robot learning advancements to practical industrial problems, mentoring junior researchers, and contributing to the technical direction of the manipulation research roadmap.
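Setting the deep-learning models aside, the underlying vision-guided control loop can be shown in its simplest proportional form, image-based visual servoing on a single feature point. The direct pixel-to-motion mapping and the gain value are deliberate simplifications for illustration:

```python
def visual_servo_step(target_px, current_px, gain=0.5):
    """Proportional visual-servoing law: map the pixel error between the
    desired and observed feature location into a motion command."""
    ex = target_px[0] - current_px[0]
    ey = target_px[1] - current_px[1]
    return (gain * ex, gain * ey)

def servo_to_target(target_px, start_px, gain=0.5, tol=0.5, max_iters=100):
    """Iterate the proportional law until the feature error is within
    `tol` pixels; returns the trajectory of observed feature positions.
    Assumes (unrealistically) that commanded motion moves the feature
    directly in image space."""
    px = start_px
    trajectory = [px]
    for _ in range(max_iters):
        vx, vy = visual_servo_step(target_px, px, gain)
        px = (px[0] + vx, px[1] + vy)
        trajectory.append(px)
        if abs(target_px[0] - px[0]) < tol and abs(target_px[1] - px[1]) < tol:
            break
    return trajectory
```

Learned sensorimotor policies of the kind this role researches replace the fixed proportional law with a model, but the perceive-compare-act loop structure is the same.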
Multi-Target Tracking & Sensor Fusion Engineer (R4172)
Design, research, and implement state-of-the-art multi-target tracking and data association algorithms. Develop production-quality C++ software for deployed military aviation platforms, ensuring deterministic, real-time performance. Build and maintain comprehensive unit, integration, and system-level tests to validate algorithm correctness and robustness. Enhance and calibrate sensor models in advanced simulation and hardware-in-the-loop (HWIL) environments. Collaborate on feature planning, decomposition, and milestone execution within an agile development framework. Contribute to flight-test planning, performance analysis, benchmarking, and regression evaluation. For principal-level applicants, provide technical leadership, design reviews, algorithmic mentorship, and subject-matter expertise across the autonomy organization.
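The data-association step at the heart of multi-target tracking can be sketched as gating plus greedy nearest-neighbour assignment. Production trackers use GNN, JPDA, or MHT with statistical gating; the Euclidean distances and gate value here are illustrative only:

```python
def associate(tracks, detections, gate=5.0):
    """Greedy nearest-neighbour data association: enumerate every
    track/detection pair inside the gate, sort by distance, and commit
    each pair whose track and detection are both still unassigned."""
    pairs = []
    for ti, (tx, ty) in enumerate(tracks):
        for di, (dx, dy) in enumerate(detections):
            d = ((tx - dx) ** 2 + (ty - dy) ** 2) ** 0.5
            if d <= gate:
                pairs.append((d, ti, di))
    pairs.sort()
    used_t, used_d, matches = set(), set(), []
    for d, ti, di in pairs:
        if ti not in used_t and di not in used_d:
            used_t.add(ti)
            used_d.add(di)
            matches.append((ti, di))
    unmatched_tracks = [i for i in range(len(tracks)) if i not in used_t]
    unmatched_dets = [i for i in range(len(detections)) if i not in used_d]
    return matches, unmatched_tracks, unmatched_dets
```

Unmatched tracks feed track-deletion logic and unmatched detections seed new tracks, which is where the deterministic real-time constraints of the deployed C++ system come in.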
