
Junior Data Engineer
Help leading organizations use data and insights to solve their biggest problems and deliver positive impact with modern data engineering technology at Deloitte.
Benefits
- You will receive a profit-sharing bonus on top of your fixed salary.
- Continue your professional growth by joining our development program.
- A work-from-home office setup allowance so you have everything you need for an ergonomic workstation, plus an internet allowance.
- Work part-time (32 hours a week) or full-time (40 hours a week).
Be the true you
- A technical Master's degree in a related field (Data Science, Data Engineering, Artificial Intelligence, Computer Science, Software Engineering, Statistics), or a Bachelor's degree with at least two years' work experience;
- Basic knowledge of, or experience with, designing and implementing data pipelines using modern data engineering technologies and frameworks such as Apache Spark, Apache Airflow, Iceberg, Delta Lake, or Hudi;
- Knowledge of at least one programming language such as Python or SQL; experience with PySpark is an advantage;
- Familiarity with public cloud platforms such as AWS, Azure, and Google Cloud Platform (GCP);
- Strong analytical and problem-solving skills;
- An excellent command of English and Dutch, both written and spoken.
What impact will you make?
Do you want to help shape and accelerate the future of our clients? Join the Platform Development & Integration team within Deloitte Engineering. You will work on a wide range of projects to turn data into actionable insights, modernize data and cloud infrastructures, and help organizations become data-driven. With your commitment and eagerness to learn, you will help ensure projects are delivered on time and to a high standard.

Let's make progress together
Connect your future to Deloitte
How do you do this?
You will be part of the Deloitte Engineering team and work on multidisciplinary projects for a range of clients. Some of your responsibilities will include:
- Analyzing client needs and helping to translate them into technical solutions using modern data engineering technologies such as Apache Spark, Delta Lake, Hudi, PySpark, and Spark SQL;
- Designing, developing, and maintaining scalable data-driven solutions;
- Building robust ETL and ELT data pipelines on public cloud and data platforms (AWS, Azure, GCP, Databricks, Snowflake, etc.);
- Collaborating with senior engineers to design and improve data pipelines, architectures, and data models for our clients;
- Participating in team activities and training sessions to build your skills and support the development of our data engineering capability.

We would like to meet you!
Our application process
Step 1: Preparation
Step 2: CV and motivation
Step 3: The assessment
Step 4: The interview
Step 5: The offer
Questions or doubts? Get in touch.
