Why work at Nebius
Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.
Where we work
Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 800 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.
AI Studio is a part of Nebius Cloud, one of the world’s largest GPU clouds, running tens of thousands of GPUs. We are building an inference platform that makes every kind of foundation model — text, vision, audio, and emerging multimodal architectures — fast, reliable, and effortless to deploy at massive scale. To keep pace with growing demand, we’re searching for engineers who want to craft world-class APIs and push the outer limits of latency, routing, and throughput.
In this role you will design and implement the services that power inference for both internal and external customers. You’ll integrate and scale model back-ends, create sophisticated request-routing logic for high-throughput, low-latency workloads, and fortify our observability pipeline so the platform stays rock-solid as we charge toward the next growth leap. The work ranges from performance tuning and memory management to multi-tenant scheduling, with opportunities to hunt microseconds at the kernel level with CUDA, Triton, and GPU profiling tools whenever the hardware demands it.
We’re looking for developers who are fluent in Python, Go, or Rust; comfortable with asynchronous programming and distributed architectures; and practiced in API design, load balancing, caching, and queuing, all backed by clean, test-driven code and modern CI/CD. Familiarity with popular inference frameworks and back-ends (such as vLLM, sglang, or ComfyUI) and with serving and orchestration stacks like Kubernetes, Ray, or FastAPI will help you hit the ground running. Equally important is a collaborative mindset: you communicate openly, enjoy working across disciplines, and like mentoring teammates as much as optimizing code paths. If shaping the infrastructure that powers tomorrow’s multimodal AI excites you, we’d love to hear from you.
What we offer
- Competitive salary and comprehensive benefits package.
- Opportunities for professional growth within Nebius.
- Hybrid working arrangements.
- A dynamic and collaborative work environment that values initiative and innovation.
We’re growing and expanding our products every day. If you’re up to the challenge and as excited about AI and ML as we are, join us!