
Fiserv

Data Engineer - Commerce Hub

Posted 6 Days Ago
In-Office
2 Locations
Senior level

Calling all innovators – find your future at Fiserv.

We’re Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title

Data Engineer - Commerce Hub

What does a successful Data Engineer do?

You will join and support agile data analytics teams in EMEA, designing and building cutting-edge data migration, data integration, data replication, and data streaming systems that make data available in Snowflake with excellent quality and speed. You will architect and implement very large-scale data intelligence solutions around the Snowflake Data Warehouse, so solid experience in architecting, designing, and operationalizing large-scale data and analytics solutions on the Snowflake Cloud Data Warehouse is required. You will also be responsible for deploying, monitoring, troubleshooting, and maintaining critical data-driven solutions in production.

What you will do:

  • Develop ETL pipelines into and out of the data warehouse using a combination of Java/Scala/Python Spark jobs for data transformation and aggregation (see the batch ETL sketch after this list) 

  • Write SQL queries against Snowflake. 

  • Provide production support for data warehouse issues such as data load failures and transformation or translation problems 

  • Develop unit tests for transformations and aggregations (see the unit-test sketch after this list) 

  • Develop production grade real time or batch data integrations between systems 

  • Process events from Kafka in real time using stream processing and land them in the data warehouse (see the streaming sketch after this list) 

  • Design and build data pipelines of medium to high complexity  

  • Translate BI and reporting requirements into database and report designs 

  • Understand data transformation and translation requirements and which tools to leverage to get the job done 

  • Design and build machine learning pipelines of medium to high complexity 

  • Execute practices such as continuous integration and test-driven development to enable the rapid delivery of working code. 

  • Deploy production grade data pipelines, data infrastructure and data artifacts as code. 

  • Develop estimates for data-driven solutions 

  • Communicate technical, product and project information to stakeholders 

  • Establish standards of good practice such as coding standards and data governance 

  • Peer review code developed by others.
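
To make the responsibilities above concrete, the following minimal sketches illustrate the kind of work involved. They are illustrative only, not Fiserv code: every table name, path, topic, and connection option is a placeholder.

A batch ETL job in PySpark that transforms and aggregates hypothetical transaction data and loads the result into Snowflake (assuming the Snowflake Spark connector is on the classpath; authentication is omitted):

    # Illustrative PySpark batch ETL job; all paths, names, and options are
    # placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily-transaction-aggregates").getOrCreate()

    # Read raw events from a staging location.
    raw = spark.read.parquet("s3://example-bucket/raw/transactions/")

    # Transform: normalise currency codes and derive a settlement date.
    cleaned = (
        raw.withColumn("currency", F.upper(F.col("currency")))
           .withColumn("settlement_date", F.to_date("settled_at"))
    )

    # Aggregate: daily totals per merchant and currency.
    daily = cleaned.groupBy("merchant_id", "currency", "settlement_date").agg(
        F.count("*").alias("txn_count"),
        F.sum("amount").alias("total_amount"),
    )

    # Load into Snowflake via the Spark connector (credentials omitted).
    (daily.write.format("snowflake")
          .option("sfURL", "example.snowflakecomputing.com")
          .option("sfUser", "ETL_USER")
          .option("sfDatabase", "ANALYTICS")
          .option("sfSchema", "PUBLIC")
          .option("sfWarehouse", "ETL_WH")
          .option("dbtable", "DAILY_MERCHANT_TOTALS")
          .mode("overwrite")
          .save())

A unit test for the aggregation step, runnable with pytest against a local SparkSession, assuming the transform is factored into a plain function:

    # Illustrative unit test for the aggregation logic.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    def daily_totals(df):
        return df.groupBy("merchant_id").agg(F.sum("amount").alias("total_amount"))

    def test_daily_totals():
        spark = SparkSession.builder.master("local[1]").getOrCreate()
        df = spark.createDataFrame(
            [("m1", 10.0), ("m1", 5.0), ("m2", 7.5)], ["merchant_id", "amount"]
        )
        totals = {r["merchant_id"]: r["total_amount"] for r in daily_totals(df).collect()}
        assert totals == {"m1": 15.0, "m2": 7.5}

A Spark Structured Streaming job that consumes JSON events from a hypothetical Kafka topic and lands micro-batches in a staging area for the warehouse (requires the spark-sql-kafka package):

    # Illustrative Structured Streaming job; broker, topic, schema, and paths
    # are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import (DoubleType, StringType, StructField,
                                   StructType, TimestampType)

    spark = SparkSession.builder.appName("payments-stream").getOrCreate()

    schema = StructType([
        StructField("merchant_id", StringType()),
        StructField("amount", DoubleType()),
        StructField("event_time", TimestampType()),
    ])

    events = (
        spark.readStream.format("kafka")
             .option("kafka.bootstrap.servers", "broker:9092")
             .option("subscribe", "payments")
             .load()
             .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
             .select("e.*")
    )

    # Land micro-batches in a staging area; a foreachBatch sink could instead
    # push each batch straight into Snowflake.
    query = (
        events.writeStream.format("parquet")
              .option("path", "s3://example-bucket/staging/payments/")
              .option("checkpointLocation", "s3://example-bucket/checkpoints/payments/")
              .trigger(processingTime="1 minute")
              .start()
    )
    query.awaitTermination()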

What you will need to have:

  • BSc or BTech / B.E in Computer Science, Engineering, or related discipline.  

  • A relevant professional qualification such as AWS Certified Big Data, SnowPro Core, or another data engineering certification 

  • Strong hands-on development background creating Snowpipe pipelines and building complex data transformations and manipulations with Snowpipe and SnowSQL (see the Snowpipe sketch after this list) 

  • Hands-on experience with Snowflake external tables, staging, task scheduling, and performance tuning 

  • Good understanding of Snowflake Time Travel, zero-copy cloning, network policies, clustering, and tasks 

  • 5+ years' experience working in an enterprise big data environment 

  • Deep knowledge of Spark, Kafka, and data warehouses such as Snowflake, Hive, and Redshift 

  • Hands-on experience in development, deployment and operation of data technologies and platforms such as: 

  • Integration using APIs, micro-services and ETL patterns  

  • Low latency/Streaming, batch and micro batch processing  

  • Data platforms such as Hadoop, Hive, Redshift or Snowflake  

  • Cloud Services such as AWS 

  • Cloud query services such as Athena 

  • DevOps Platforms such as Gitlab  

  • Containerisation technologies such as Docker and Kubernetes  

  • Orchestration solutions such as Airflow (see the Airflow sketch after this list) 

  • Deep knowledge of key non-functional requirements such as availability, scalability, operability, and maintainability  

  • Deep knowledge of SQL 

  • Operating system knowledge, particularly Linux 

  • Ability to plan, highlight, and implement improvements to existing and new applications 
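
Two further sketches illustrate the Snowflake and orchestration skills listed above; again, every name, credential, and command is a placeholder.

A Snowpipe definition issued through snowflake-connector-python; a real external stage would also need a storage integration and cloud event notifications for AUTO_INGEST to trigger:

    # Illustrative Snowpipe setup via snowflake-connector-python; account,
    # credentials, and object names are placeholders.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="example_account",
        user="ETL_USER",
        password="<placeholder>",
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="PUBLIC",
    )
    cur = conn.cursor()

    # External stage over the raw-events bucket.
    cur.execute("""
        CREATE STAGE IF NOT EXISTS raw_events_stage
        URL = 's3://example-bucket/raw/events/'
        FILE_FORMAT = (TYPE = JSON)
    """)

    # Pipe that copies newly arrived files into a landing table.
    cur.execute("""
        CREATE PIPE IF NOT EXISTS raw_events_pipe AUTO_INGEST = TRUE AS
        COPY INTO raw_events (payload)
        FROM (SELECT $1 FROM @raw_events_stage)
    """)

An Airflow DAG (using the Airflow 2.4+ schedule argument) that runs the batch job sketched earlier once a day:

    # Illustrative Airflow DAG; the spark-submit command is a placeholder.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_transaction_aggregates",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        run_etl = BashOperator(
            task_id="run_spark_etl",
            bash_command="spark-submit daily_transaction_aggregates.py",
        )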

What will be nice to have:

  • Migration experience to Snowflake 

  • Hands on experience with Oracle RDBMS 

  • Exposure to StreamSets, dbt, or other ETL tools 

#LI-1IB

Thank you for considering employment with Fiserv.  Please:

  • Apply using your legal name
  • Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable).

Our commitment to Diversity and Inclusion:

Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law. 

Note to agencies:

Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.

Warning about fake job posts:

Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.

Top Skills

Airflow
AWS
Docker
Gitlab
Hadoop
Hive
Java
Kafka
Kubernetes
Python
Redshift
Scala
Snowflake
Spark
SQL

Fiserv Dublin, Dublin, IRL Office

10 Hanover Quay, Dublin, Ireland, D02 K510
