Who is Tenable?
Tenable® is the Exposure Management company. 44,000 organizations around the globe rely on Tenable to understand and reduce cyber risk. Our global employees support 65 percent of the Fortune 500, 45 percent of the Global 2000, and large government agencies. Come be part of our journey!
What makes Tenable such a great place to work?
Ask a member of our team and they’ll answer, “Our people!” We work together to build and innovate best-in-class cybersecurity solutions for our customers; all while creating a culture of belonging, respect, and excellence where we can be our best selves. When you’re part of our #OneTenable team, you can expect to partner with some of the most talented and passionate people in the industry, and have the support and resources you need to do work that truly matters. We deliver results that exceed expectations and we win together!
Your Role:
Tenable's cloud-based exposure management platform helps organizations see, understand, and reduce cyber risk across their entire attack surface. Our SRE teams keep that platform reliable, scalable, and secure — and we're building the next generation of tooling to do it smarter.
This role sits within our SRE Infrastructure Management organization on a focused team dedicated to reducing operational toil through AI-powered automation. You'll build intelligent systems that replace manual workflows — from incident diagnostics to infrastructure provisioning to upgrade automation — using LLMs, agentic architectures, and deep SRE domain knowledge.
This isn't an operations role with some AI on the side. You'll spend most of your time writing production code: designing and building agentic workflows, integrating across observability and infrastructure platforms, and measuring the impact of what you ship against real toil data.
Your Opportunity:
- Design and build AI-powered agentic workflows that automate complex SRE operations — incident investigation, infrastructure provisioning, deployment reliability, and more.
- Improve the accuracy, reliability, and observability of agent pipelines through evaluation frameworks, prompt engineering, retrieval strategies, and structured output validation.
- Build developer tools and internal platforms — CLI tools, IDE plugins, and workflow automation — that engineers across the organization use daily.
- Build tooling that connects across the SRE tech stack — Kubernetes, Terraform, Helm, CI/CD pipelines, observability platforms, and cloud infrastructure APIs.
- Work on a focused team where everyone writes code, owns what they ship, and drives prioritization from measured toil data.
- Participate in SRE on-call rotation — we use on-call as a direct input into what we build, not just a firefighting duty.
- Collaborate with SRE teams across the organization to identify automation opportunities and deliver tooling that gives engineers hours back.
What Success Looks Like:
- Within your first few months, you've shipped an agentic workflow that automates a real SRE toil category — and engineers are using it.
- Within 6 months, you're independently designing and building AI-powered pipelines, contributing to the team's evaluation and accuracy practices, and your work is driving measurable toil reduction.
- Within a year, you've become a go-to contributor on the team — shaping the roadmap, mentoring others on AI + SRE patterns, and building systems that scale the team's impact across engineering.
What You'll Need:
- 5+ years of SRE, platform engineering, or infrastructure engineering experience.
- Strong software engineering skills — you write production-quality code, not just scripts. Python is the primary language for our tooling stack.
- Experience building with LLMs and AI in production or infrastructure contexts — integrating models into real systems, not just experimentation.
- Experience building developer tools or internal platforms — CLI tools, IDE plugins, or workflow automation that other engineers use daily.
- Deep experience with Kubernetes (EKS preferred) — deployment, troubleshooting, Helm chart management, and cluster operations.
- Experience with Infrastructure as Code (Terraform preferred) and CI/CD pipeline development.
- Strong experience with AWS services and APIs.
- Experience with observability platforms (Datadog, Coralogix, or similar) — both as a user during incidents and as an integration target for tooling.
- Solid background in bash scripting and Linux systems.
- Comfortable working on a distributed team with emphasis on asynchronous collaboration and documented decision-making.
- Bachelor's or Master's degree in Computer Science, Engineering, or equivalent experience.
And Ideally:
- Hands-on experience building agentic AI workflows — designing multi-step agent pipelines, working with tool-use patterns, and building retrieval-augmented generation (RAG) or similar context-enrichment approaches.
- Experience evaluating and improving AI agent accuracy — prompt optimization, output validation, evaluation harnesses, handling failure modes and hallucinations in production systems.
- Experience with Claude, OpenAI, or similar LLM APIs and SDKs in production systems.
- Experience with managed AI/ML platforms (AWS Bedrock, SageMaker, or similar) for deploying and orchestrating model-backed systems.
- Experience with Helm chart development and deployment pipeline internals.
- Familiarity with distributed systems patterns — microservices, event-driven architectures, message brokers.
- Experience with Go, Java, or Kotlin.
- Background in measuring and reducing operational toil using data-driven approaches.
- Experience building evaluation and testing frameworks for AI/ML systems.
#LI-AV1
#LI-Hybrid
We’re committed to promoting Equal Employment Opportunity (EEO) at Tenable, in accordance with all equal employment opportunity laws and regulations at the international, federal, state, and local levels. If you need a reasonable accommodation due to a disability during the application or recruiting process, please contact [email protected] for further assistance.
Tenable Data Consent Statement
Tenable is committed to protecting the privacy and security of your personal data. This Notice describes how we collect and use your personal data during and after your working relationship with us, in accordance with the General Data Protection Regulation (“GDPR”). Please click here to review.
For California Residents: The California Consumer Privacy Act (CCPA) requires that Tenable advise you of certain rights related to the collection of your private information. Please click here to review.