An architect is planning an HPE Apollo 4530 solution to support the Cloudera distribution of Spark and HDFS.
Each HPE ProLiant XL450 server node in the solution follows these baseline recommendations:
- 15 × 8 TB HDDs (120 TB total)
- 2 × 10-core 2.3 GHz processors
- 128 GB memory (64 GB per processor)
- 2-port 10GbE FlexibleLOM adapter
The architect has planned enough servers to meet the customer's requirements for total capacity.
Which two components in this plan are most likely to cause a bottleneck as an increasing number of Spark applications run?
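For a rough sense of where these baseline specs run out first, a back-of-envelope budget for a single node can be sketched as below. The per-executor sizing and the reserved overhead are illustrative assumptions, not figures from the question or from Cloudera.

```python
# Hypothetical resource budget for one XL450 node under the baseline above.
# Executor sizing values are illustrative assumptions only.

node = {
    "cores": 2 * 10,          # 2 x 10-core processors
    "memory_gb": 128,         # total RAM
    "disk_tb": 15 * 8,        # 15 x 8 TB HDDs
    "network_gbps": 2 * 10,   # 2-port 10GbE FlexibleLOM
}

# Assumed per-executor footprint for a typical in-memory Spark workload.
executor = {"cores": 5, "memory_gb": 20}

# Reserve a slice of the node for the OS, HDFS DataNode, and YARN NodeManager
# (assumed overhead, not a vendor figure).
usable_cores = node["cores"] - 2
usable_memory_gb = node["memory_gb"] - 16

max_executors_by_cores = usable_cores // executor["cores"]
max_executors_by_memory = usable_memory_gb // executor["memory_gb"]

print(f"Executors limited by cores:  {max_executors_by_cores}")
print(f"Executors limited by memory: {max_executors_by_memory}")
# With roughly 18 usable cores and ~112 GB usable RAM, only a few executors
# fit per node, while the 120 TB of disk sized for capacity is far from
# saturated. Under these assumptions, processor cores and memory are the
# resources exhausted first as more Spark applications run concurrently.
```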