This role is available for candidates based in Latin America
At Glacier, we are taking on one of the world’s most pressing problems: trash. Did you know that in the US, we send over half of our recyclables to the landfill? We're hoping to fix that. In doing so, we’ll also be reducing carbon emissions, energy consumption, and depletion of natural resources.
Glacier builds custom sorting robots that identify and separate recyclables, as well as AI-powered business analytics that enable recyclers to superpower their plants and improve our society's circularity. Together, these two products help divert tons of recyclables (literally!) from landfills every day.
About us:
Our founders come from Facebook engineering and Bain consulting.
We’re backed by top-tier VCs with extensive technical and industry expertise.
We have a fleet in production and a robust pipeline of upcoming deployments.
About the role:
As a DevOps Engineer at Glacier, you'll be responsible for building and maintaining the software infrastructure that powers our robotics fleet. You'll work closely with our software and hardware teams to ensure our systems are reliable, scalable, and secure as we deploy robots across recycling facilities.
Key Responsibilities:
Design, build, and maintain AWS cloud infrastructure supporting our robotics fleet and data pipelines
Develop and implement CI/CD pipelines to streamline deployment processes for both cloud services and edge devices
Monitor system performance and reliability; implement observability tools and respond to incidents
Automate infrastructure provisioning and configuration management using Infrastructure as Code (IaC)
Manage and operationalize machine learning pipelines for computer vision models, including deployment, monitoring, and troubleshooting within the production environment (MLOps)
Collaborate with software engineers to optimize application performance and deployment strategies
Manage databases, serverless functions, and other cloud services at scale
Ensure security best practices across all infrastructure and deployment processes
What You Bring:
3+ years of professional experience in DevOps, CloudOps, or Site Reliability Engineering
3+ years of work experience with Python, Ubuntu/Linux, Git
3+ years working through the command line for both local and remote systems
Strong hands-on experience with AWS services (EC2, Lambda, RDS, S3, CloudWatch, etc.)
Experience building and maintaining CI/CD pipelines (GitHub Actions, Jenkins, or similar)
Proficiency with Infrastructure as Code tools (Terraform, CloudFormation, or similar)
Practical experience with MLOps principles, including deploying, monitoring, and maintaining production computer vision or similar models
Experience with containerization and orchestration (Docker, Kubernetes)
Experience managing and scaling databases (Postgres, DynamoDB, Elasticsearch, or similar)
English fluency (B2 or higher), as you will be working with a US-based team
Nice to have:
Experience with edge computing or IoT device management
Familiarity with robotics or hardware systems
Experience with monitoring and observability tools (Prometheus, Grafana, Kibana, or similar)
Network debugging skills
Experience working with US companies or clients
Why Join Us?
Mission-Driven Work – Be part of a company dedicated to sustainability and ending waste.
Remote Flexibility – Work from anywhere in Latin America with a collaborative, distributed team.
Fast-Paced Growth – Help scale and optimize the infrastructure powering a cutting-edge robotics fleet in the recycling industry.
Technical Impact – Build and own critical infrastructure that directly enables our robots to divert tons of waste from landfills.
If you're excited about building robust, scalable cloud infrastructure that powers real-world robotics systems and want to make a tangible environmental impact, we'd love to hear from you!


