
Databricks Data Engineer
As a Databricks Data Engineer at Deloitte, you will design, build, and optimize data pipelines in Databricks. You will apply data engineering best practices and use core Databricks features to deliver scalable, reliable data solutions for our clients.
Benefits
- A profit-sharing bonus on top of your fixed salary.
- Continuous professional growth through our development program.
- A work-from-home office setup allowance for an ergonomically designed workstation, plus an internet allowance.
- Work part-time (32 hours a week) or full-time (40 hours a week).
Be the true you
Naturally, you are keen to discover and learn new things each day, and you are confident in your own worth. But most importantly, be the true you: the one and only you, with your personal strengths, view of the world, and unique personality. You will need the following qualifications for the position of Databricks Data Engineer:
- A technical Master’s degree, preferably in Data Science, Big Data, Econometrics, Physics, Mathematics, Computer Science, Artificial Intelligence, or a related field;
- At least 4 years of work experience in data engineering; consulting experience is a plus;
- Strong hands-on experience with Databricks and Apache Spark (PySpark and Spark SQL);
- Deep understanding of Delta Lake and medallion architecture;
- Proficiency in SQL and one or more data engineering programming languages (e.g., Python, Scala);
- Relevant Databricks certifications (Data Engineer Associate/Professional, Spark Developer Associate, Data Analyst Associate);
- Excellent command of English and Dutch.
What impact will you make?
As a Databricks Data Engineer, you will be a trusted adviser to our clients, designing and building reliable ETL/ELT pipelines in Databricks. You will apply medallion architecture and Delta Lake best practices, optimize clusters for performance and cost, and embed data quality checks to deliver reliable data products. You will work closely with cloud platform engineers to ensure data solutions are secure, scalable, and easy to operate.

Let's make progress together
Connect your future to Deloitte
How do you do this?
- Develop, optimize, and operate ETL/ELT pipelines in Databricks using PySpark, Spark SQL, and Scala;
- Lead migration programmes to Databricks, converting legacy ETL processes into scalable data pipelines;
- Build robust data pipelines on Databricks for real-time and batch processing;
- Configure and manage Databricks clusters and jobs for cost and performance efficiency;
- Apply CI/CD to automate testing and deployment of notebooks, jobs, and pipelines;
- Drive adoption of Databricks Lakehouse architecture and data engineering best practices across our clients.

We would like to meet you!
Our application process
Step 1: Preparation
Step 2: CV and motivation
Step 3: The assessment
Step 4: The interview
Step 5: The offer
Questions or doubts? Get in touch.
