Welcome to Pass4Success


Nutanix Exam NCM-MCI Topic 1 Question 1 Discussion

Actual exam question for Nutanix's NCM-MCI exam
Question #: 1
Topic #: 1
[All NCM-MCI Questions]

Task 16

Running NCC on a cluster prior to an upgrade results in the following output:

FAIL: CVM System Partition /home usage at 93% (greater than threshold, 90%)

Identify the CVM with the issue, remove the file causing the storage bloat, and check the health again by running the individual disk usage health check only on the problematic CVM. Do not run the full NCC health check.

Note: Make sure only the individual health check is executed from the affected node

Suggested Answer: A

To identify the CVM with the issue, remove the file causing the storage bloat, and check the health again, you can follow these steps:

Log in to Prism Central and click on Entities on the left menu.

Select Virtual Machines from the drop-down menu and find the NCC health check output file from the list. You can use the date and time information to locate the file. The file name should be something like ncc-output-YYYY-MM-DD-HH-MM-SS.log.

Open the file and look for the line that says FAIL: CVM System Partition /home usage at 93% (greater than threshold, 90%). Note the IP address of the CVM that has this issue; it will be of the form X.X.X.X.
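As a rough sketch, the failing line and the CVM it belongs to could be pulled out of the NCC output with grep. The log file name and the exact layout (node IP on the line preceding the FAIL line) are assumptions for illustration only:

```shell
# Create a stand-in NCC output file (hypothetical format) for demonstration.
printf '%s\n' \
  'Node 10.0.0.12:' \
  'FAIL: CVM System Partition /home usage at 93% (greater than threshold, 90%)' \
  > ncc-output.log

# Print the FAIL line plus the line before it, which names the affected node.
grep -B1 'FAIL: CVM System Partition' ncc-output.log

rm ncc-output.log
```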

Log in to the CVM using SSH or console with the username and password provided.

Run the command du -sh /home/* to see the disk usage of each file and directory under /home. Identify the file that is taking up most of the space. It could be a log file, a backup file, or a temporary file. Make sure it is not a system file or a configuration file that is needed by the CVM.
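To surface the largest items quickly, the du output can be piped through sort. A minimal sketch, using a temporary directory in place of /home so it is safe to run anywhere:

```shell
# Demo: create a directory with one large and one small file, then list
# its contents largest-first (the same pipeline works on /home on a CVM).
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/big.log" bs=1024 count=2048 2>/dev/null   # ~2 MB
dd if=/dev/zero of="$tmp/small.log" bs=1024 count=10 2>/dev/null   # ~10 KB

# -h prints human-readable sizes; sort -rh orders them descending.
du -sh "$tmp"/* | sort -rh | head -5

rm -rf "$tmp"
```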

Run the command rm -f /home/<filename> to remove the file causing the storage bloat. Replace <filename> with the actual name of the file.

Run the command ncc health_checks hardware_checks disk_checks disk_usage_check --cvm_list=X.X.X.X to check the health again by running the individual disk usage health check only on the problematic CVM. Replace X.X.X.X with the IP address of the CVM that you noted down earlier.

Verify that the output shows PASS: CVM System Partition /home usage at XX% (less than threshold, 90%). This means that the issue has been resolved.

# Access the CVM over SSH (for example, with PuTTY)

allssh df -h    # check /home usage on every CVM (the /dev/sdb3 partition) and note the affected CVM's IP

ssh CVM_IP    # connect to the problematic CVM

ls

cd software_downloads

ls

cd nos

ls -l -h    # identify the large file causing the storage bloat

rm file_name    # remove the offending file

df -h    # confirm /home usage has dropped below the threshold

ncc health_checks hardware_checks disk_checks disk_usage_check    # run only the individual disk usage check from this node


Contribute your Thoughts:

Mee
4 months ago
I always make sure to follow the steps provided in the explanation for a step-by-step solution.
upvoted 0 times
...
Quentin
4 months ago
I agree with Cherry. Running individual disk usage health check on the affected CVM is crucial to ensure everything is back to normal.
upvoted 0 times
...
Cherry
5 months ago
I had the same issue before. It's important to identify the problematic CVM and remove the file causing the storage bloat.
upvoted 0 times
...
Erinn
5 months ago
I encountered a problem with high /home usage while running NCC on a cluster before an upgrade.
upvoted 0 times
...
Francine
5 months ago
Got it, we need to follow the step-by-step solution provided in the explanation to resolve this issue.
upvoted 0 times
...
Rickie
5 months ago
Exactly, we shouldn't run the NCC health check again, just the individual health check on the affected node.
upvoted 0 times
...
Katie
5 months ago
After that, we should check the health again by running the individual disk usage health check only on the problematic CVM, right?
upvoted 0 times
...
Francine
6 months ago
I think we need to identify the CVM causing the issue and remove the file causing the storage bloat.
upvoted 0 times
...
Rickie
6 months ago
Yes, it's about the CVM System Partition /home usage being at 93%.
upvoted 0 times
...
Katie
6 months ago
I found that error message while running NCC on the cluster.
upvoted 0 times
Raul
5 months ago
Remember not to run the NCC health check again, just the individual health check on the affected node.
upvoted 0 times
...
Sue
5 months ago
After removing the file, we should run the individual disk usage health check on the problematic CVM.
upvoted 0 times
...
Kate
5 months ago
Once we find the problematic CVM, we can remove the file causing the storage bloat.
upvoted 0 times
...
Belen
5 months ago
Let's identify the CVM with the issue first.
upvoted 0 times
...
...
Mira
7 months ago
I'm a little worried about the 'do not run NCC health check' part. Isn't that the whole reason we're in this situation in the first place?
upvoted 0 times
...
Kara
7 months ago
Okay, so the key steps are: 1) identify the CVM, 2) remove the file causing the bloat, and 3) check the health of just that CVM. Sounds straightforward enough.
upvoted 0 times
...
Julieta
7 months ago
Let's not jump to conclusions. The question says we need to identify the CVM with the issue, so we'll have to do some digging to figure out what's going on.
upvoted 0 times
...
Herminia
7 months ago
I wonder what kind of file could be causing such a huge storage issue? Maybe someone downloaded a bunch of cat videos or something. *chuckles*
upvoted 0 times
...
Paz
7 months ago
Yeah, I agree. We need to identify the CVM with the issue, remove the file causing the storage bloat, and then check the health of just that CVM, not the whole cluster.
upvoted 0 times
...
Sol
7 months ago
Hmm, this seems like a tricky one. Running NCC and finding the CVM system partition at 93% usage is definitely a problem that needs to be addressed before an upgrade.
upvoted 0 times
Kristian
7 months ago
Make sure only the individual health check is executed from the affected node.
upvoted 0 times
...
Raylene
7 months ago
Check the health by running individual disk usage check on the problematic CVM.
upvoted 0 times
...
Elenor
7 months ago
Remove the file causing the storage bloat.
upvoted 0 times
...
Johana
7 months ago
Let's find the CVM with the issue.
upvoted 0 times
...
...
