Salesforce Exam Data Architect Topic 1 Question 52 Discussion

Actual exam question for Salesforce's Data Architect exam
Question #: 52
Topic #: 1

Universal Containers has been a Salesforce customer for 10 years and currently has 2 million accounts in the system. Due to an erroneous integration built 3 years ago, an estimated 500,000 duplicate accounts exist in the system.

Which solution should a data architect recommend to remediate the duplication issue?

A) Develop an ETL process that utilizes the merge API to merge the duplicate records.
B) Utilize a data warehouse as the system of truth.
C) Extract the data using Data Loader and use Excel to merge the duplicate records.
D) Implement duplicate rules.

Suggested Answer: D

Implementing duplicate rules (option D) is the best way to remediate the duplication issue. Duplicate rules are a native Salesforce feature: paired with matching rules, they detect duplicate accounts as records are created or edited, and, together with duplicate jobs and reports (where the edition supports them), they can surface the existing duplicates for merging. Just as important, they prevent new duplicates from being created going forward. Developing an ETL process that uses the merge API (option A) is weaker: it demands significant coding and testing effort and does nothing to stop new duplicates from entering Salesforce. Utilizing a data warehouse as the system of truth (option B) adds complexity and cost without addressing the duplicates inside Salesforce itself. Extracting the data with Data Loader and merging in Excel (option C) is time-consuming and error-prone at a 2-million-record scale, and it likewise does nothing to prevent future duplicates.
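
As a rough illustration of option D in practice, the Apex sketch below (not part of the exam material, and simplified) asks the org's active duplicate rules for matches on a small batch of accounts via Datacloud.FindDuplicates, then folds each match into its master record with Database.merge. A real cleanup of 500,000 duplicates would wrap this logic in a batch job with careful master-record selection; findDuplicates accepts at most 50 records per call, and merging one record at a time is kept here only for readability.

    // Minimal sketch, assuming active duplicate/matching rules exist for Account.
    List<Account> candidates = [SELECT Id, Name FROM Account LIMIT 50];
    List<Datacloud.FindDuplicatesResult> results =
        Datacloud.FindDuplicates.findDuplicates(candidates);

    Set<Id> alreadyMerged = new Set<Id>();
    for (Integer i = 0; i < results.size(); i++) {
        Account master = candidates[i];
        if (alreadyMerged.contains(master.Id)) {
            continue; // already consumed as another record's duplicate
        }
        for (Datacloud.DuplicateResult dup : results[i].getDuplicateResults()) {
            for (Datacloud.MatchResult match : dup.getMatchResults()) {
                for (Datacloud.MatchRecord rec : match.getMatchRecords()) {
                    Id dupId = rec.getRecord().Id;
                    if (dupId != master.Id && !alreadyMerged.contains(dupId)) {
                        // Database.merge re-parents related records and deletes
                        // the duplicate. Production code would bulkify this
                        // instead of issuing DML inside a loop.
                        Database.merge(master, dupId);
                        alreadyMerged.add(dupId);
                    }
                }
            }
        }
    }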


Contribute your Thoughts:

Gladys
1 month ago
I'd go with option D. Implement duplicate rules and let Salesforce handle the deduplication automatically. Easy peasy!
upvoted 0 times
Alona
2 months ago
Haha, using a data warehouse as the 'system of truth' for Salesforce data? That's like using a sledgehammer to crack a nut. Option B is way overkill.
upvoted 0 times
Sophia
11 days ago
D) Implement duplicate rules
upvoted 0 times
Lorrine
27 days ago
C) Extract the data using data loader and use excel to merge the duplicate records
upvoted 0 times
Evangelina
1 month ago
B) Utilize a data warehouse as the system of truth
upvoted 0 times
Catarina
1 month ago
A) Develop an ETL process that utilizes the merge API to merge the duplicate records
upvoted 0 times
Cathern
2 months ago
Option C sounds too manual and time-consuming for a 2 million record dataset. Excel is not the right tool for this kind of large-scale data cleanup.
upvoted 0 times
Caprice
21 days ago
D) Implement duplicate rules
upvoted 0 times
Belen
1 month ago
B) Utilize a data warehouse as the system of truth
upvoted 0 times
Alline
1 month ago
A) Develop an ETL process that utilizes the merge API to merge the duplicate records
upvoted 0 times
Ty
2 months ago
But wouldn't using Data Loader and Excel be a quicker fix?
upvoted 0 times
Kerrie
2 months ago
I think option A is the best choice. Using the Salesforce Merge API to automate the deduplication process is the most comprehensive solution here.
upvoted 0 times
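
For anyone weighing Kerrie's point, a minimal sketch of the merge call itself in Apex follows; the account name and the assumption of exactly three matching records are purely illustrative. Database.merge takes a master record plus up to two duplicate Ids per call, re-parents related records such as contacts and opportunities, and deletes the duplicates.

    // Illustrative only: assumes three Account records share the
    // (hypothetical) name 'Acme Corp'.
    List<Account> dupes = [
        SELECT Id FROM Account
        WHERE Name = 'Acme Corp'
        ORDER BY CreatedDate
        LIMIT 3
    ];
    Account master = dupes[0]; // keep the oldest record as the surviving master
    List<Id> duplicateIds = new List<Id>{ dupes[1].Id, dupes[2].Id };

    // allOrNone = false: collect per-record failures instead of
    // throwing on the first error.
    Database.MergeResult[] outcomes = Database.merge(master, duplicateIds, false);
    for (Database.MergeResult res : outcomes) {
        if (!res.isSuccess()) {
            for (Database.Error err : res.getErrors()) {
                System.debug('Merge failed: ' + err.getMessage());
            }
        }
    }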
Bambi
2 months ago
Definitely option D. Implementing duplicate rules is the way to go for a large dataset like this. It's a scalable and efficient solution.
upvoted 0 times
Lavelle
25 days ago
I'm leaning towards option D as well, it seems like the most practical choice.
upvoted 0 times
Afton
1 month ago
I would go with option B, using a data warehouse seems like a solid solution.
upvoted 0 times
Christoper
1 month ago
I think option A could also work well if implemented correctly.
upvoted 0 times
Hoa
2 months ago
I agree, option D is definitely the best choice for this situation.
upvoted 0 times
Leila
2 months ago
I disagree, I believe implementing duplicate rules would be a more efficient solution.
upvoted 0 times
Ty
2 months ago
I think we should develop an ETL process to merge the duplicates.
upvoted 0 times
