
Flywheel Infotech

Data Engineer

Posted 4 Days Ago
Be an Early Applicant
Easy Apply
In-Office
Dublin
Senior level
About Flywheel 
Flywheel's suite of digital commerce solutions accelerates growth across all major digital marketplaces for the world's leading brands. We give clients access to near real-time performance measurement and improve sales, share, and profit. With teams across the Americas, Europe, and APAC, we offer a career with real impact, endless growth opportunities, and the support you need to be the best you can be.
Opportunity 
We’re seeking a Senior Data Engineer who is equal parts builder, teacher, and problem solver. You will architect and deliver modern, cloud-first data platforms; mentor teammates; and drive standards that improve reliability, performance, and developer experience. You’ll work across batch and streaming patterns, structured and semi-structured data, and collaborate closely with analysts, product, and engineering teams to turn requirements into scalable, maintainable solutions.
What you'll do:
  • Design and evolve our data architecture leveraging Apache Iceberg on S3 and Snowflake, balancing performance, reliability, and cost.
  • Lead continuous improvement of schemas, data models, pipelines, and engineering standards; own design docs and review forums.
  • Plan and coordinate data migrations with zero/low downtime patterns, including backfills, cutovers, and data validation.
  • Standardize data contracts and enforce quality checks throughout pipelines and transformations.
  • Implement and operate Iceberg tables: catalog strategy (AWS Glue), partition transforms, schema evolution, and time-travel/snapshot management.
  • Optimize data layout (partitioning, clustering, file sizing, compaction) to improve read/write performance and control costs across engines.
  • Build and maintain batch and streaming pipelines using Airflow, AWS Glue, Step Functions, Lambda, Kinesis, and Snowflake.
  • Design normalized and dimensional models; apply partitioning and clustering strategies appropriate to Iceberg and target engines.
  • Own SQL and Spark performance tuning, job optimization, and cost governance (e.g., Snowflake warehouse sizing, query profile analysis).
  • Establish SLAs, lineage, tests, alerts, and runtime metrics; integrate data quality checks into CI/CD and orchestration.
  • Mentor and pair program; elevate craftsmanship, testing, reliability, and operational excellence.
  • Promote security best practices, governance, and compliance-by-default patterns for sensitive data.
  • Provide documentation, code examples, and training that enable partners to self-serve; champion code reviews and design best practices.
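The "standardize data contracts and enforce quality checks" responsibility above can be sketched in plain Python: a contract declares each field's type and nullability, every record is validated against it, and a batch is rejected when the failure rate crosses a threshold. The field names and the 1% threshold are invented for illustration; they are not Flywheel's actual schemas or policies.

```python
# Hypothetical data contract: field name -> (expected type, nullable).
# These fields are illustrative only, not a real Flywheel schema.
CONTRACT = {
    "order_id": (str, False),
    "sku": (str, False),
    "quantity": (int, False),
    "discount": (float, True),
}

def validate(record: dict) -> list[str]:
    """Return a list of contract violations for one record (empty = valid)."""
    errors = []
    for name, (expected, nullable) in CONTRACT.items():
        if name not in record:
            errors.append(f"missing field: {name}")
            continue
        value = record[name]
        if value is None:
            if not nullable:
                errors.append(f"null not allowed: {name}")
        elif not isinstance(value, expected):
            errors.append(f"wrong type for {name}: {type(value).__name__}")
    return errors

def enforce(records: list[dict], max_failure_rate: float = 0.01) -> list[dict]:
    """Pass through valid records; fail the whole batch if too many violate
    the contract (so bad upstream data halts the pipeline instead of
    silently propagating)."""
    good = [r for r in records if not validate(r)]
    failure_rate = 1 - len(good) / len(records) if records else 0.0
    if failure_rate > max_failure_rate:
        raise ValueError(
            f"batch rejected: {failure_rate:.1%} of records violate the contract"
        )
    return good
```

In practice a check like this would run as a task inside the Airflow DAG (or as a dbt/Great Expectations test), so a contract breach fails the run and fires an alert rather than loading bad rows.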
Who you are:
  • 4+ years of professional data engineering experience delivering production-grade pipelines and data platforms.
  • Strong problem-solving and analytical skills; track record of decomposing complex problems and shipping pragmatic solutions.
  • Excellent communication and documentation; teacher/mentor who raises the bar for the team.
  • Proficiency in Python and SQL; experience with Java or Scala a plus.
  • Solid software engineering practices: Git/GitHub, branching, code review, testing (Pytest/JUnit), and CI/CD.
  • Hands-on experience operating Apache Iceberg on AWS (S3 + Glue); deep familiarity with Parquet/ORC.
  • Experience operating Iceberg with Spark (PySpark/Spark SQL); exposure to Snowflake Iceberg Tables and/or Trino/Presto is a plus.
  • Practical knowledge of Iceberg concepts: metadata/manifest layers, partition transforms, schema evolution, snapshots/time-travel/branching, row-level deletes/merges, compaction, and vacuum/snapshot expiration.
  • AWS expertise: S3, Lambda, EventBridge, Glue, Athena, EMR, Kinesis, Step Functions, SQS; cost/performance governance across services.
  • Snowflake proficiency; experience exposing Iceberg data via Snowflake external or Iceberg tables; Redshift/BigQuery a plus.
  • Strong SQL performance tuning across at least one engine (Snowflake, PostgreSQL/RDS, MySQL).
  • Observability with Datadog and/or CloudWatch; strong log analysis, performance profiling, and incident response.
  • Practical use of AI coding assistants (GitHub Copilot, Amazon Q Developer, ChatGPT, Claude) is a nice-to-have.
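The file-sizing and compaction work these bullets describe largely comes down to bin-packing undersized data files into rewrite groups near a target file size. Below is a minimal sketch using a greedy strategy; real Iceberg compaction (e.g. the `rewrite_data_files` Spark procedure) also accounts for partitions, delete files, and sort order, so this is an illustration of the idea, not a production implementation.

```python
TARGET_BYTES = 128 * 1024 * 1024  # a common target file size for Parquet/Iceberg

def plan_compaction(file_sizes: list[int], target: int = TARGET_BYTES) -> list[list[int]]:
    """Greedily group undersized files into batches of roughly `target` bytes.

    Files at or above the target are left alone; the rest are packed
    smallest-first until a batch would overflow the target.
    """
    small = sorted(s for s in file_sizes if s < target)
    groups, current, current_size = [], [], 0
    for size in small:
        if current and current_size + size > target:
            groups.append(current)
            current, current_size = [], 0
        current.append(size)
        current_size += size
    if current:
        groups.append(current)
    # Only groups with more than one file are worth rewriting.
    return [g for g in groups if len(g) > 1]
```

Fewer, right-sized files cut per-file open/seek overhead and manifest bloat, which is why compaction shows up next to partitioning and clustering in the read/write performance bullet.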

#LI-AD1

Working at Flywheel
We are proud to offer all Flywheelers a competitive rewards package, unparalleled career growth opportunities, and a supportive, fun, and engaging culture.
  • We have office hubs across the globe where team members can go to feel productive, inspired, and connected to others; team members go into Hub Offices three times a week.
  • Flexible vacation time
  • Great learning and development opportunities
  • Benefits that help you live your best life
  • Parental leave and benefits
  • Volunteering opportunities
  • If you’re looking to connect with teammates on a topic of inclusion and identity, chances are there’s an ERG for that.
  • So you know: The hired candidate will be required to complete a background check
  • Learn more about us here: Life at Flywheel
The Interview Process:
Every role starts the same: an introductory call with someone from our Talent Acquisition team. We will be looking for company and values fit as well as your professional experience; there may be some technical, role-specific questions during this call.
 
Every role is different after the initial call, but you can expect to meet several people from the team 1:1 and there might be further skill assessments in the form of a Take Home Assignment/Case Study Presentation or Pair Programming/Live Coding exercise depending on the role. In your initial call, we will walk you through exactly what to expect the process to be.
Inclusive Workforce

Flywheel Commerce Network’s goal is to create a culture where all individuals of all backgrounds feel comfortable in bringing their authentic selves to work. We want all people to feel included and empowered to contribute fully to our vision and goals. Flywheel Commerce Network is an Equal Opportunity Employer and participates in E-Verify. All applicants will receive fair consideration for employment. We do not discriminate based upon race, color, religion, sex, sexual orientation, age, marital status, gender identity, national origin, disability, or any other applicable legally protected characteristics in the location in which the candidate is applying.

If you have any accessibility requirements that would make you more comfortable during the application and interview process, please let us know at [email protected] so that we can support you.

For more information about what data we collect and how we use it, please refer to our Privacy Policy.
 
We leverage AI technology to streamline our hiring workflow, though all candidate decisions are made by our Talent Acquisition Team.
 
IMPORTANT ALERT: Please beware of fraudulent job communications from individuals falsely claiming to be from Flywheel. We've identified fraudulent activity through social media and messaging services purporting to be from Flywheel requesting payments for job- and recruitment-related expenses. Flywheel never asks candidates for personal information, such as bank account data or tax IDs, or for payments via social media or chat-based applications. Report suspected fraud to local authorities immediately. To learn more, click here.
 
Please note, we do not accept unsolicited resumes from 3rd party Recruitment Firms. 
 
#LI-HYBRID
 

Top Skills

Airflow
Apache Iceberg
AWS Glue
CloudWatch
Datadog
Git
Java
JUnit
Kinesis
Lambda
ORC
Parquet
Pytest
Python
S3
Scala
Snowflake
Snowflake External Tables
SQL
Step Functions


