
Cloudera Exam CCA175 Topic 5 Question 67 Discussion

Actual exam question for Cloudera's CCA175 exam
Question #: 67
Topic #: 5

Problem Scenario 32: You have been given three files as below.

spark3/sparkdir1/file1.txt

spark3/sparkdir2/file2.txt

spark3/sparkdir3/file3.txt

Each file contains some text.

spark3/sparkdir1/file1.txt

Apache Hadoop is an open-source software framework written in Java for distributed storage and distributed processing of very large data sets on computer clusters built from commodity hardware. All the modules in Hadoop are designed with a fundamental assumption that hardware failures are common and should be automatically handled by the framework

spark3/sparkdir2/file2.txt

The core of Apache Hadoop consists of a storage part known as Hadoop Distributed File System (HDFS) and a processing part called MapReduce. Hadoop splits files into large blocks and distributes them across nodes in a cluster. To process data, Hadoop transfers packaged code for nodes to process in parallel based on the data that needs to be processed.

spark3/sparkdir3/file3.txt

This approach takes advantage of data locality, where nodes manipulate the data they have access to, to allow the dataset to be processed faster and more efficiently than it would be in a more conventional supercomputer architecture that relies on a parallel file system where computation and data are distributed via high-speed networking.

Now write Spark code in Scala that loads all three files from HDFS and performs a word count, filtering out the following words. The result should be sorted by word count in descending order.

Filter words: ("a", "the", "an", "as", "with", "this", "these", "is", "are", "in", "for", "to", "and", "The", "of")

Also, please make sure you load all three files as a single RDD (all three files must be loaded using a single API call).

You have also been given the following codec:

import org.apache.hadoop.io.compress.GzipCodec

Please use the above codec to compress the file while saving it in HDFS.
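
For reference, here is a minimal sketch of one possible solution, written for the spark-shell (where sc is the SparkContext the shell provides). The output path spark3/result is a hypothetical name chosen for illustration.

import org.apache.hadoop.io.compress.GzipCodec

// Load all three files as a single RDD with one API call
// (textFile accepts a comma-separated list of paths).
val content = sc.textFile("spark3/sparkdir1/file1.txt,spark3/sparkdir2/file2.txt,spark3/sparkdir3/file3.txt")

// Split each line into words.
val words = content.flatMap(line => line.split(" "))

// Filter out the words listed in the problem statement.
val stopWords = List("a", "the", "an", "as", "with", "this", "these", "is", "are", "in", "for", "to", "and", "The", "of")
val filtered = words.filter(word => !stopWords.contains(word))

// Count each word, swap to (count, word), and sort by count in descending order.
val counts = filtered.map(word => (word, 1)).reduceByKey(_ + _)
val sorted = counts.map(_.swap).sortByKey(false)

// Save to HDFS, compressed with the given GzipCodec.
sorted.saveAsTextFile("spark3/result", classOf[GzipCodec])

Swapping each pair to (count, word) lets sortByKey order the result by the count; sortBy(_._2, false) on the original (word, count) pairs would work equally well.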

Suggested Answer: A

Contribute your Thoughts:

Kathrine
7 months ago
I see your point, Hildred. It's always good to consider all options before making a decision.
Hildred
7 months ago
I'm not sure, but I think B) Solution might also work.
Emmett
8 months ago
I agree with you, Kathrine. Option A) seems to be the right one.
Kathrine
8 months ago
I think the correct option is A) Solution
Melissa
8 months ago
Haha, 'talse'? I guess that's the 'alternative' way to spell 'false'. Definitely going with A on this one.
Micah
8 months ago
Wait, did they really write 'talse' instead of 'false'? Rookie mistake, but A is still the way to go.
Marylou
8 months ago
Ah, the good old GC details! I'm sure this will come in handy for troubleshooting any performance issues. Option A seems to be the way to go.
Billi
8 months ago
I think the correct answer is A. The conf parameter is used to pass the Spark-related configurations, and the provided example shows the correct usage of the extra Java options.
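
For context, passing Spark configurations with extra Java options on the command line generally looks like the sketch below; the GC-logging flags, class name, and jar are illustrative placeholders, not taken from this question.

spark-submit --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" --class MyApp myapp.jar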
Shawnda
7 months ago
Yes, you are right. Option A correctly specifies the extra Java options needed for the Spark application.
