DL Communications Collectives SW Engineer

Posted 24 days ago · 32 Locations · Remote / Hybrid · Mid level · Software
The Role
The role involves designing and implementing optimized communication libraries for Deep Learning systems, collaborating with hardware and software teams to ensure data synchronization across AI accelerators, and optimizing algorithms for distributed computing applications.

We are working on software to improve the Deep Learning ecosystem and help hardware engineers build great Deep Learning parallel systems.

We are looking for a strong candidate with a background in writing systems software for networking devices (and optionally the Linux kernel networking stack or network drivers), someone who has implemented network protocols or worked on OpenMPI. This role involves designing and implementing highly optimized communication collectives libraries similar to UCC (Unified Collective Communication) and NCCL (NVIDIA Collective Communications Library). The ideal candidate will work closely with hardware and software teams to ensure efficient data communication and synchronization across multiple AI accelerators in a distributed system, enabling scalable deep learning and high-performance computing applications.

You will learn technical and organizational skills from industry veterans: how to write performant and readable code; how to structure and communicate projects, ideas, and progress; and how to work effectively with the Open Source community.

We are big proponents of Open Source and Free software and contribute back our improvements to all the great projects we use.


We prefer candidates who work out of one of our offices, but will consider remote candidates as well.

Responsibilities

  • Build the communication components of an AI software stack
  • Port AI software to run on a new hardware platform
  • Profile and tune communications within AI applications
  • Design, develop, and optimize communication collectives (e.g., AllReduce, AllGather, Broadcast, ReduceScatter) for large-scale distributed computing and machine learning frameworks.
  • Implement and optimize communication algorithms (ring, tree, butterfly, etc.) tailored for our architectures and multi-node clusters.
  • Ensure low-latency, high-bandwidth communication across multi-GPU setups, supporting interconnects such as PCIe and InfiniBand.
  • Collaborate with hardware engineers and other software teams to optimize performance.
  • Implement fault tolerance and scalability mechanisms in distributed systems to handle large-scale workloads.
  • Write unit tests and benchmark tools to validate the performance and correctness of collective operations.
  • Stay current with advancements in hardware and networking technologies to continuously improve the library's performance.
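To make the collectives named above concrete, here is a minimal single-process Python sketch of a sum ring AllReduce in its reduce-scatter + all-gather formulation, the approach popularized by libraries such as NCCL. It is purely illustrative: plain lists stand in for GPU buffers, and in-place copies stand in for network transfers.

```python
def ring_allreduce(buffers):
    """In-place sum-AllReduce across `buffers` (one list of floats per rank).

    Simulates the classic ring algorithm: a reduce-scatter phase in
    which partial sums travel around the ring, then an all-gather
    phase that circulates the fully reduced chunks.
    """
    n = len(buffers)                 # number of simulated ranks
    size = len(buffers[0])
    assert size % n == 0, "buffer length must divide evenly into n chunks"
    chunk = size // n

    # Phase 1: reduce-scatter.  In step s, rank r passes chunk
    # (r - s) % n to its neighbour (r + 1) % n, which adds it in
    # place.  After n - 1 steps, rank r owns the fully reduced
    # chunk (r + 1) % n.
    for s in range(n - 1):
        for r in range(n):
            c, dst = (r - s) % n, (r + 1) % n
            for i in range(c * chunk, (c + 1) * chunk):
                buffers[dst][i] += buffers[r][i]

    # Phase 2: all-gather.  Circulate the reduced chunks so every
    # rank ends up with the complete result.
    for s in range(n - 1):
        for r in range(n):
            c, dst = (r + 1 - s) % n, (r + 1) % n
            lo, hi = c * chunk, (c + 1) * chunk
            buffers[dst][lo:hi] = buffers[r][lo:hi]

    return buffers


# Example: three ranks, six elements each.
ranks = [[1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
         [10.0, 20.0, 30.0, 40.0, 50.0, 60.0],
         [100.0, 200.0, 300.0, 400.0, 500.0, 600.0]]
ring_allreduce(ranks)
# every rank now holds the elementwise sum [111.0, 222.0, 333.0, ...]
```

In total each rank sends and receives 2(n-1)/n of its buffer, which is why the ring formulation is bandwidth-optimal for large messages; tree variants trade some bandwidth for lower latency on small messages.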

Requirements

  • Strong understanding of GPU architectures and platforms (NVIDIA CUDA, AMD ROCm) and experience in GPU programming (CUDA, HIP, or similar).
  • Proficiency in designing and implementing parallel and distributed algorithms, particularly communication collectives.
  • Experience with network interconnects (NVLink, PCIe, InfiniBand, RDMA) and an understanding of their performance implications.
  • Hands-on experience with communication collectives libraries like UCC, NCCL, or MPI.
  • Strong knowledge of concurrency, synchronization, and memory consistency models in multi-threaded and distributed environments.
  • Experience with profiling and optimizing low-level performance (memory bandwidth, latency, throughput) on GPU architectures.
  • Familiarity with deep learning frameworks (TensorFlow, PyTorch, etc.) and their use of communication collectives.
  • Strong problem-solving skills and ability to work in a fast-paced, collaborative environment.
  • Network driver experience is a plus.
  • Excellent written and verbal communication skills.
  • Strong organizational skills and high self-motivation.
  • Ability to work well in a team and stay productive under aggressive schedules.

Preferred Qualifications

  • Experience with NumPy, PyTorch, TensorFlow or JAX
  • Experience with Rust
  • Experience with CUDA, OpenCL, OpenGL, or SYCL
  • Coursework or experience with Machine Learning algorithms

Education and Experience

  • Bachelor’s, Master’s, or PhD in Computer Engineering, Software Engineering, or Computer Science

Top Skills

CUDA
HIP
The Company
HQ: Mountain View, CA
287 Employees
On-site Workplace
Year Founded: 2021

What We Do

Rivos is a high-performance RISC-V systems startup targeting integrated system solutions for the enterprise.
