Design, build, and maintain scalable data pipelines and cloud solutions. Collaborate with teams on data architecture in Agile environments.
We are looking for a Mid-Senior Data Engineer to design, build, and maintain scalable data pipelines and cloud-based data solutions. This role requires strong hands-on engineering skills, the ability to work autonomously on well-defined problems, and experience operating data pipelines in production environments. You will contribute to the development of modern data platforms, working closely with senior engineers, product teams, and analytics stakeholders in an Agile environment.
Key Responsibilities
- Design, develop, and maintain reliable and scalable data pipelines.
- Implement ETL/ELT workflows for batch and streaming data processing.
- Develop data processing jobs using Apache Spark (Python or Scala).
- Build and maintain cloud-native data solutions on Azure or AWS.
- Implement data transformations and models using DBT or equivalent tools.
- Ensure code quality, testing, and documentation across data pipelines.
- Participate in CI/CD pipelines for data engineering workloads.
- Monitor, troubleshoot, and optimize data pipeline performance and reliability.
- Collaborate with senior engineers on architecture and design decisions.
- Work within Agile teams, contributing to planning, estimation, and delivery.
Requirements
- 3–4 years of experience in Data Engineering or similar roles.
- Strong programming skills in Python or Scala.
- Hands-on experience with Apache Spark for large-scale data processing.
- Experience working with cloud platforms (Azure or AWS).
- Solid knowledge of SQL and relational data modelling.
- Experience building and maintaining production-grade data pipelines.
- Hands-on experience with CI/CD pipelines for data workflows.
- Experience with data testing (unit, integration, data validation).
- Ability to work independently on assigned tasks with guidance when needed.
- Strong collaboration and communication skills.
- Experience with DBT or similar data transformation frameworks.
- Experience with Infrastructure as Code (IaC), preferably Terraform.
- Exposure to NoSQL databases.
- Experience with streaming platforms (e.g. Kafka, Kinesis, Event Hubs).
- Familiarity with data quality, monitoring, and observability tools.
Top Skills
Spark
AWS
Azure
CI/CD
dbt
Kafka
Kinesis
Python
Scala
SQL
Terraform
Ardanis Dublin Office
50 Richmond Street S, The Lennox Building, Iconic Office, Dublin, County Dublin, Ireland, D02 FK02