
Databricks Exam Databricks-Certified-Data-Engineer-Associate Topic 4 Question 37 Discussion

Actual exam question from the Databricks-Certified-Data-Engineer-Associate exam
Question #: 37
Topic #: 4
[All Databricks-Certified-Data-Engineer-Associate Questions]

A data engineer and a data analyst are working together on a data pipeline. The data engineer is working on the raw, bronze, and silver layers of the pipeline using Python, and the data analyst is working on the gold layer of the pipeline using SQL. The raw source of the pipeline is a streaming input. They now want to migrate their pipeline to use Delta Live Tables.

Which change will need to be made to the pipeline when migrating to Delta Live Tables?

A. None; the pipeline can have different notebook sources in SQL and Python.
B. The pipeline will need to be written entirely in SQL.
C. The pipeline will need to be written entirely in Python.
D. The pipeline will need to use a batch source in place of a streaming source.

Suggested Answer: A

When migrating to Delta Live Tables (DLT), a pipeline that uses different languages for different layers does not have to be unified into a single language. A DLT pipeline can include both Python and SQL notebook sources (each individual notebook is written in one language), so the data engineer can keep the raw, bronze, and silver layers in Python while the data analyst keeps the gold layer in SQL. This is particularly valuable in collaborative settings, since each stage of processing can use the language best suited to it. DLT also handles streaming sources natively, so the streaming input does not need to be converted to a batch source.

Reference: Databricks documentation on Delta Live Tables (Delta Live Tables Guide)
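To make the multi-language point concrete, here is a minimal sketch of how the same pipeline could be split across two notebooks attached to one DLT pipeline: the engineer's bronze/silver logic in Python, and the analyst's gold logic in SQL. This only runs inside a Databricks Delta Live Tables pipeline (the `dlt` module and `spark` session are provided by the runtime), and the storage path, table names, and column names below are hypothetical.

```python
# Python notebook (data engineer): bronze and silver layers.
# Runs only inside a Databricks DLT pipeline; `dlt` and `spark`
# are supplied by the Databricks runtime, not installable locally.
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Bronze: raw streaming ingest")
def bronze_events():
    # Hypothetical landing path; Auto Loader (cloudFiles) reads the stream.
    return (spark.readStream
            .format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/mnt/raw/events"))

@dlt.table(comment="Silver: cleaned events")
def silver_events():
    # Stream from the bronze table defined above, dropping bad rows.
    return dlt.read_stream("bronze_events").where(col("event_id").isNotNull())
```

```sql
-- SQL notebook (data analyst): gold layer, in the same DLT pipeline.
-- Reads the silver table via the LIVE keyword; column names are hypothetical.
CREATE OR REFRESH LIVE TABLE gold_daily_counts
COMMENT "Gold: daily event counts"
AS SELECT date(event_time) AS event_date, count(*) AS n_events
FROM LIVE.silver_events
GROUP BY date(event_time);
```

Both notebooks are simply added as sources of the same DLT pipeline; DLT resolves the dependency graph (bronze → silver → gold) across languages, which is why answer A requires no structural change.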


Contribute your Thoughts:

Dudley
22 days ago
Option B, easy peasy. If it's good enough for the data engineer, it's good enough for the data analyst. Time to brush up on that SQL, my friend. Oh, and don't forget the backup sunglasses for when the code inevitably blinds you with its brilliance.
upvoted 0 times
Cordelia
23 days ago
Option C, no doubt! Gotta keep that Python love alive, am I right? I mean, who needs SQL when you can just write the whole pipeline in Python? It's like the '90s all over again!
upvoted 0 times
Willie
5 days ago
Agreed, Python is so much more flexible and powerful.
upvoted 0 times
Yasuko
9 days ago
I think we should stick with Python for the pipeline.
upvoted 0 times
King
25 days ago
Whoa, this is a tricky one! I'm going to have to go with option A. Mixing and matching notebook sources in SQL and Python sounds like a recipe for a programmer's paradise. Or a nightmare, depending on your perspective.
upvoted 0 times
Merissa
28 days ago
Hmm, I think option D is the way to go. Using a batch source instead of a streaming one for Delta Live Tables seems like the logical choice. After all, who needs real-time when you can have delayed data, am I right?
upvoted 0 times
Vonda
2 days ago
Data engineer: I agree, switching to a batch source for Delta Live Tables makes sense.
upvoted 0 times
Kiley
1 month ago
Actually, I think the pipeline can still have different notebook sources in SQL & Python even with Delta Live Tables.
upvoted 0 times
Louvenia
1 month ago
I bet the data analyst is secretly hoping for option B, just so they can lord their SQL skills over the data engineer. Classic power move!
upvoted 0 times
Marvel
2 days ago
B: I wonder if we'll have to switch to writing the pipeline entirely in SQL.
upvoted 0 times
Lashaun
9 days ago
A: Yeah, that sounds right. It'll be a big change for us.
upvoted 0 times
Temeka
23 days ago
B: I think we'll need to use a batch source instead of a streaming source.
upvoted 0 times
Mee
28 days ago
A: We need to migrate our pipeline to Delta Live Tables.
upvoted 0 times
Brande
1 month ago
I agree. I believe the pipeline will need to be written entirely in SQL for Delta Live Tables.
upvoted 0 times
Gerardo
2 months ago
Wait, what? Option B wants me to write the entire pipeline in SQL? That's a hard pass. I'm sticking with Python, thanks.
upvoted 0 times
Kiley
2 months ago
I think we will need to make some changes to our pipeline when migrating to Delta Live Tables.
upvoted 0 times
Jenise
2 months ago
Option B? Really? Forcing the whole pipeline to be in SQL seems like a bit of a stretch. This is a tough one.
upvoted 0 times
Azalee
2 months ago
Interesting dilemma. I wonder if they could keep the SQL and Python separation, but just wrap it all in Delta Live Tables. Guess we'll have to see what the experts say!
upvoted 0 times
Harley
1 month ago
Maybe we can find a way to integrate both Python and SQL into Delta Live Tables.
upvoted 0 times
Yen
1 month ago
But what about the SQL part of the pipeline? We can't just abandon that.
upvoted 0 times
Erick
1 month ago
I think we might need to rewrite the pipeline entirely in Python.
upvoted 0 times
Elvis
2 months ago
I'm not sure about that. The question says the data engineer is working with Python, so I think option C might be the right choice.
upvoted 0 times
Francesco
1 month ago
Let's make the necessary changes to migrate to Delta Live Tables.
upvoted 0 times
Jesus
1 month ago
That makes sense since the data engineer is already working with Python.
upvoted 0 times
Kimberely
2 months ago
I agree. I believe the pipeline will need to be written entirely in Python.
upvoted 0 times
Nada
2 months ago
I think we need to switch to using Delta Live Tables for our pipeline.
upvoted 0 times
Hortencia
2 months ago
Hmm, I think option D is the way to go. Migrating to Delta Live Tables likely requires using a batch source instead of a streaming one.
upvoted 0 times
Chau
2 months ago
Yeah, it's important to make sure the pipeline is compatible with Delta Live Tables.
upvoted 0 times
Eura
2 months ago
I agree, using a batch source would make the migration smoother.
upvoted 0 times
