Inference Technical Lead, On-Device Transformers
As a Technical Lead on the Future of Computing Research team, you will evaluate and select silicon platforms such as GPUs, NPUs, and specialized accelerators for on-device and edge deployment of OpenAI models. You will work closely with research teams to co-design model architectures that meet real-world deployment constraints, including latency, memory, power, and bandwidth. You will analyze and model system performance, identifying tradeoffs between model design, memory hierarchy, compute throughput, and hardware capabilities. You will partner with hardware vendors and internal infrastructure teams to bring up new accelerators and ensure efficient execution of transformer workloads. Additionally, you will build and lead a team of engineers responsible for implementing the low-level inference stack, including kernel development and runtime systems. You will also mature nascent research capabilities into usable, deployable systems.
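As a flavor of the latency/bandwidth tradeoff analysis this role describes (an illustrative sketch, not material from the posting), the snippet below estimates the arithmetic intensity of a single feed-forward matmul during autoregressive decode. The model dimensions are hypothetical 7B-class values chosen for illustration; at batch 1 the weights dominate traffic and the operation is memory-bandwidth-bound.

```python
def decode_matmul_intensity(d_model, d_ff, batch, bytes_per_param=2):
    """Arithmetic intensity (FLOPs per byte moved) of one feed-forward
    matmul during autoregressive decode: (batch, d_model) @ (d_model, d_ff).
    bytes_per_param=2 assumes fp16/bf16 weights and activations."""
    flops = 2 * batch * d_model * d_ff                      # one multiply-add = 2 FLOPs
    weight_bytes = d_model * d_ff * bytes_per_param         # weights streamed from memory
    act_bytes = batch * (d_model + d_ff) * bytes_per_param  # input + output activations
    return flops / (weight_bytes + act_bytes)

# Hypothetical 7B-class dimensions. At batch 1 the intensity is ~1 FLOP/byte,
# far below typical accelerator roofline ridge points, so decode latency is
# set by memory bandwidth; batching amortizes the weight traffic.
low = decode_matmul_intensity(4096, 11008, 1)
high = decode_matmul_intensity(4096, 11008, 64)
```

Analyses like this are what drive decisions such as weight quantization (shrinking `bytes_per_param`) or speculative decoding on bandwidth-limited edge silicon.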
System Architect (US)
As a System Architect, you own the end-to-end architecture, system definition, and strategic implementation for the entire portfolio of robotic and autonomous defense systems, collaborating closely with executive leadership and technical leads and partnering with the Product Manager. You will translate complex strategic goals into system-of-systems designs, define and champion system architecture strategy across the enterprise, and ensure all systems are correctly sized and verified through simulation and system-sizing analyses. You will guide major technical investment decisions, coordinate large multidisciplinary engineering organizations, and provide technical leadership across mechanical, electrical, software, GNC, ML, and product teams. You will also govern system integration standards and validation processes, manage specification and architecture reviews, and implement processes that improve requirements traceability, documentation, and validation workflows across engineering.
Robotics Software Engineer
Lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios. Design algorithms that enable robots to manipulate complex or deformable objects with high precision. Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware. Evaluate model performance in both simulation and real-world environments to ensure robustness and reliability. Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems. Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.
GNC Engineer
Develop state-of-the-art navigation and sensor fusion algorithms for UAVs. Design and implement GNC and flight control systems. Build filtering and estimation strategies for robust and efficient flight performance. Run extensive simulations, including Monte Carlo, SITL, HITL, and coverage testing. Analyze test flight data and refine algorithmic performance. Support full-stack system integration, including GNSS, INS/IMU, localization, and sensor fusion. Maintain and evolve a flight-proven flight computer across multiple UAV platforms.
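The "filtering and estimation strategies" this role mentions typically build on Kalman-style state estimators that fuse noisy sensor measurements (e.g., GNSS positions) with a motion model. Below is a minimal, illustrative sketch of a linear Kalman filter on a 1D constant-velocity model, measuring position only; all parameters are made up for the example and are not from this posting.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict: propagate state and covariance through the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct with the measurement.
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Constant-velocity model: state = [position, velocity]; we measure position only.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # motion model
H = np.array([[1.0, 0.0]])             # measurement model
Q = 1e-3 * np.eye(2)                   # process noise
R = np.array([[0.25]])                 # measurement noise (std 0.5 m)

rng = np.random.default_rng(0)
x, P = np.zeros(2), np.eye(2)
true_pos = 0.0
for _ in range(100):
    true_pos += 1.0 * dt                            # true velocity = 1 m/s
    z = np.array([true_pos + rng.normal(0, 0.5)])   # noisy position fix
    x, P = kalman_step(x, P, z, F, H, Q, R)
```

After the loop, the filter has recovered a velocity estimate near 1 m/s even though velocity was never measured directly; real GNSS/INS fusion stacks extend the same predict/update structure to nonlinear models (EKF/UKF) and many more states.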
Research Engineer, SLAM & Multi-View Geometry
As a SLAM / Multi-View Geometry Engineer on the Robotics team, you will develop systems that enable robots to perceive, track, and reconstruct the world in 3D from multi-camera and multimodal sensor data. You will work on real-time and offline SLAM pipelines used during teleoperation and robot data collection, as well as scalable systems for reconstructing and tracking 3D structure from large datasets. Specific responsibilities include developing and deploying online SLAM systems used during robotic data collection with multi-camera sensor stacks and teleoperation platforms; building systems for large-scale 3D reconstruction and point tracking across massive datasets; working with research and engineering teams to scale multi-view geometry pipelines; improving the accuracy, robustness, and scalability of perception systems used in robotics data collection and training pipelines; and collaborating across robotics, perception, and ML teams to integrate geometry-based methods with learned models.
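At the core of the multi-view geometry work described here is recovering 3D structure from corresponding 2D observations. As an illustrative sketch (not code from this team), the snippet below triangulates a single 3D point from two views via the standard linear (DLT) method, using hypothetical identity intrinsics and a noise-free correspondence.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two 3x4 camera
    projection matrices P1, P2 and normalized pixel observations x1, x2."""
    # Each observation contributes two homogeneous linear constraints on X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]        # dehomogenize

# Two cameras with identity intrinsics, translated 1 m apart along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]  # project into view 1
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]  # project into view 2
X_est = triangulate(P1, P2, x1, x2)
```

With exact correspondences the estimate matches `X_true` to numerical precision; production SLAM and reconstruction pipelines wrap this kind of primitive with robust matching, outlier rejection, and bundle adjustment at scale.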
Intern, Software Engineer - Perception
As a Perception Engineering Intern at Hayden AI, you will take ownership of a real project and see it through to completion, building and shipping features with support from senior engineers. You will write clean, scalable code, test your work, and iterate quickly, staying involved in all phases from design discussions to deployment. You will collaborate with engineers in code reviews and team discussions, participate in standups, sprint planning, and retrospectives, and support the team on ad hoc engineering tasks, helping improve performance, reliability, or usability where needed, while asking questions, seeking feedback, and applying it quickly. Example deliverables include GPS data analysis, training deep learning models, creating AI datasets, lidar/camera data tooling, test cases for end-to-end system performance, developing a cloud service in the event processing pipeline, and adding a page or new user flow to the Portal web application.
Videographer
You will be responsible for defining operational domains and evaluating the reliability of the AI capabilities developed in-house. You will develop and extend the state-of-the-art in uncertainty quantification and uncertainty calibration. This will involve understanding the AI systems built, interfacing with them, and evaluating their robustness in real-world and adversarial scenarios. You will contribute to impactful projects and collaborate with people across several teams and backgrounds.
Senior Cyber Security Engineer (AI Safety)
As a Senior Cyber Security Engineer, you will lead security efforts across key projects, bridging engineering and AI safety. You will design evaluations, test harnesses, and "Capture the Flag" scenarios to test the limits of Frontier AI models, and work on securing agentic AI systems deployed in government settings. Responsibilities include designing and building scaffolds to rigorously test the security and capabilities of frontier AI models and agentic systems, setting technical standards for AI Security across consulting and AI safety business units as the senior technical authority, collaborating with cross-functional teams to integrate security into projects, automating security processes including vulnerability management and secure development lifecycles, and mentoring junior engineers and data scientists to foster technical excellence and continuous security learning.
Software Engineer, Developer Experience
Explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Create models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. Collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities deployable on physical hardware for industrial applications.
Safety Research Internship (Spring/Summer 2026)
As a Cohere Research Intern, you will conduct cutting-edge machine learning research, training and evaluating production large language models. You will focus on research projects aimed at making models better understood, safer, more reliable, more inclusive, and more beneficial for the world. You will disseminate your research results through the production of publications, datasets, and code. Additionally, you will contribute to research initiatives that have practical applications in Cohere's product development. The internship involves collaborating with the Modelling Safety team on implementing novel research ideas related to fairness, safety (including for multiple languages, dialects, and cultural contexts), robustness, generalisation, interpretability, safety for agents with complex read/write actions, and safety for codegen. The project details and topic will be designed collaboratively between the intern and the team, with a goal to publish a paper in a top venue and contribute to open science. The internship may be remote or onsite, with no relocation or housing provided.
