Calling all innovators – find your future at Fiserv.
We’re Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv.
Job Title
Data Engineer - Commerce Hub
What does a successful Data Engineer do?
You will be part of and support agile teams in the data analytics domain in EMEA, designing and building cutting-edge data migration, data integration, data replication, and data streaming systems so that we make data available in Snowflake with amazing quality and speed. You will be responsible for architecting and implementing very large-scale data intelligence solutions around the Snowflake Data Warehouse, so solid experience in architecting, designing, and operationalizing large-scale data and analytics solutions on the Snowflake Cloud Data Warehouse is required. You will also be responsible for the deployment, monitoring, troubleshooting, and maintenance of critical data-driven solutions in production.
What you will do:
Develop ETL pipelines into and out of the data warehouse using a combination of Java/Scala/Python Spark jobs for data transformation and aggregation
Write SQL queries against Snowflake.
Provide production support for data warehouse issues such as data load failures and transformation or translation problems
Develop unit tests for transformations and aggregations
Develop production-grade real-time or batch data integrations between systems
Process real-time event streams from Kafka into the data warehouse using stream processing
Design and build data pipelines of medium to high complexity
Translate BI and reporting requirements into database and report designs
Understand data transformation and translation requirements and which tools to leverage to get the job done
Design and build machine learning pipelines of medium to high complexity
Execute practices such as continuous integration and test-driven development to enable the rapid delivery of working code.
Deploy production-grade data pipelines, data infrastructure, and data artifacts as code
Develop estimates for data-driven solutions
Communicate technical, product and project information to stakeholders
Establish standards of good practice such as coding standards and data governance
Peer review code developed by others.
What you will need to have:
BSc or BTech / B.E in Computer Science, Engineering, or related discipline.
Relevant professional qualifications such as AWS Certified Big Data, SnowPro Core, or other data engineering certifications
Strong hands-on development background in building Snowpipe ingestion and complex data transformations and manipulations using Snowpipe and SnowSQL
Hands-on experience with Snowflake external tables, staging, the Snowflake task scheduler, and performance tuning
Good understanding of Snowflake Time Travel, zero-copy cloning, network policies, clustering, and tasks
5+ years' experience working in an enterprise big data environment
Deep knowledge of Spark, Kafka, and data warehouses such as Snowflake, Hive, and Redshift
Hands-on experience in development, deployment and operation of data technologies and platforms such as:
Integration using APIs, micro-services and ETL patterns
Low-latency/streaming, batch, and micro-batch processing
Data platforms such as Hadoop, Hive, Redshift or Snowflake
Cloud Services such as AWS
Cloud query services such as Athena
DevOps platforms such as GitLab
Containerisation technologies such as Docker and Kubernetes
Orchestration solutions such as Airflow
Deep knowledge of key non-functional requirements such as availability, scalability, operability, and maintainability
Deep knowledge of SQL
OS knowledge particularly Linux
Ability to plan, highlight, and implement possible improvements to existing and new applications
What will be nice to have:
Migration experience to Snowflake
Hands on experience with Oracle RDBMS
Exposure to StreamSets, dbt, or other ETL tools
#LI-1IB
Thank you for considering employment with Fiserv. Please:
- Apply using your legal name
- Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable).
Our commitment to Diversity and Inclusion:
Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.
Note to agencies:
Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.
Warning about fake job posts:
Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
Fiserv Dublin, Dublin, IRL Office
10 Hanover Quay, Dublin, Ireland, D02 K510