
Huawei Exam H13-311_V3.5 Topic 5 Question 9 Discussion

Actual exam question for Huawei's H13-311_V3.5 exam
Question #: 9
Topic #: 5

Which of the following activation functions may cause the vanishing gradient problem?

A) Softplus
B) ReLU
C) Sigmoid
D) Tanh

Suggested Answer: C, D

Both the Sigmoid and Tanh activation functions can cause the vanishing gradient problem. These functions saturate: they squash any input into a narrow output range, so their derivatives become very small for large-magnitude inputs. During backpropagation these small gradients are multiplied layer by layer, which slows down learning; in deep neural networks this can prevent the weights of the earlier layers from updating effectively and stall training.

Sigmoid: Outputs values between 0 and 1. For large positive or negative inputs, the gradient becomes very small.

Tanh: Outputs values between -1 and 1. Although it is zero-centered and has a broader output range than Sigmoid, it still saturates, so its gradients vanish for large-magnitude inputs.

ReLU, on the other hand, is far less prone to vanishing gradients: for positive inputs it passes the input through unchanged, so its gradient is exactly 1 and flows backward without shrinking. Softplus, a smooth approximation of ReLU, is likewise less prone to this problem than Sigmoid and Tanh.
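For anyone who wants to see the saturation numerically, here is a minimal Python/NumPy sketch (my own illustration, not taken from the exam or the HCIA-AI materials; the function names are ad hoc) that evaluates each activation's derivative at increasingly large inputs:

```python
# Illustrative sketch only: compare activation derivatives at large inputs.
import numpy as np

def d_sigmoid(x):
    """Sigmoid derivative s(x) * (1 - s(x)); peaks at 0.25 and -> 0 as |x| grows."""
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

def d_tanh(x):
    """Tanh derivative 1 - tanh(x)^2; peaks at 1.0 and -> 0 as |x| grows."""
    return 1.0 - np.tanh(x) ** 2

def d_relu(x):
    """ReLU derivative: exactly 1 for positive inputs, 0 otherwise."""
    return 1.0 if x > 0 else 0.0

for x in (0.0, 2.0, 5.0, 10.0):
    print(f"x = {x:5.1f}   sigmoid' = {d_sigmoid(x):.6f}   "
          f"tanh' = {d_tanh(x):.6f}   relu' = {d_relu(x):.0f}")
```

At x = 10 the Sigmoid and Tanh derivatives have already collapsed to roughly 4.5e-5 and 8e-9, while the ReLU derivative is still 1, which is exactly the saturation behaviour behind suggested answers C and D.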

HCIA-AI References:

Deep Learning Overview: Explains the vanishing gradient problem in deep networks, especially when using Sigmoid and Tanh activation functions.

AI Development Framework: Covers the use of ReLU to address the vanishing gradient issue and its prevalence in modern neural networks.

Contribute your Thoughts:

Marti
21 days ago
Sigmoid, you're just not cut out for the big leagues. Time to find a new activation function.
upvoted 0 times
Royal
10 days ago
Yeah, Sigmoid tends to struggle with that issue.
upvoted 0 times
Lashandra
11 days ago
I think Sigmoid is causing the vanishing gradient problem.
upvoted 0 times
Francisca
1 month ago
I bet the sigmoid function is feeling pretty guilty about that vanishing gradient problem. Shame on you, sigmoid!
upvoted 0 times
Pamella
1 month ago
Gotta go with C on this one. Sigmoid is the classic vanishing gradient villain.
upvoted 0 times
Alba
2 days ago
B) ReLU is actually less likely to cause the vanishing gradient problem compared to C) Sigmoid.
upvoted 0 times
Lynelle
11 days ago
I think D) Tanh can also cause the vanishing gradient problem.
upvoted 0 times
France
17 days ago
I agree, C) Sigmoid is known for causing the vanishing gradient problem.
upvoted 0 times
Catarina
1 month ago
That makes sense, Tanh can indeed cause the vanishing gradient problem.
upvoted 0 times
Alease
1 month ago
I disagree, I believe it's D) Tanh because it saturates at extreme values.
upvoted 0 times
Kristel
1 month ago
Ah, the good old sigmoid. It's like trying to climb a mountain with baby steps - slow and painful.
upvoted 0 times
Carri
2 days ago
Sigmoid is notorious for causing the vanishing gradient problem.
upvoted 0 times
Ozell
4 days ago
D) Tanh
upvoted 0 times
Daron
5 days ago
C) Sigmoid
upvoted 0 times
Fausto
28 days ago
B) ReLU
upvoted 0 times
Margurite
1 month ago
A) Softplus
upvoted 0 times
Catarina
1 month ago
I think the answer is C) Sigmoid.
upvoted 0 times
Bev
2 months ago
The sigmoid function is definitely the culprit here. That gradual slope near zero is a recipe for vanishing gradients.
upvoted 0 times
Vanna
17 days ago
That's right, the gradual slope near zero in the sigmoid function makes it difficult for gradients to propagate.
upvoted 0 times
Cecil
21 days ago
Yes, the sigmoid function is known for causing the vanishing gradient problem.
upvoted 0 times
Tammara
27 days ago
D) Tanh
upvoted 0 times
Ailene
1 month ago
C) Sigmoid
upvoted 0 times
Han
1 month ago
B) ReLU
upvoted 0 times
Rashad
1 month ago
A) Softplus
upvoted 0 times