
Linux Foundation PCA Exam - Topic 2 Question 4 Discussion

Actual exam question for Linux Foundation's PCA exam
Question #: 4
Topic #: 2

What is considered the best practice when working with alerting notifications?

Suggested Answer: B

The Prometheus alerting philosophy emphasizes signal over noise: alerts should focus only on actionable, user-impacting issues. The best practice is to alert on symptoms that indicate potential or actual user-visible problems, not on every internal metric anomaly.

This approach reduces alert fatigue, avoids desensitizing operators, and ensures that high-priority alerts get the attention they deserve. For example, alerting on "service unavailable" or "latency exceeding SLO" is more effective than alerting on "CPU above 80%" or "disk usage increasing," which may not directly affect users.
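
To make the distinction concrete, a symptom-based rule set might look like the sketch below. The metric names (http_requests_total, http_request_duration_seconds_bucket), the job="api" selector, and the 5% error and 500 ms latency thresholds are all illustrative assumptions, not part of the exam question:

```yaml
groups:
  - name: symptom-based-alerts
    rules:
      # Symptom: users are receiving errors. The status label and the
      # 5% threshold are assumptions for illustration.
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{job="api",status=~"5.."}[5m]))
            / sum(rate(http_requests_total{job="api"}[5m])) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "API 5xx error ratio above 5% for 10 minutes"

      # Symptom: responses are slower than the (assumed) 500 ms p99 SLO.
      - alert: LatencyAboveSLO
        expr: |
          histogram_quantile(0.99,
            sum(rate(http_request_duration_seconds_bucket{job="api"}[5m])) by (le))
            > 0.5
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "p99 latency above the 500 ms SLO"
```

Both rules page on what users actually experience (errors and slow responses) rather than on internal resource usage such as CPU or disk.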

Option B correctly reflects this principle: keep alerts meaningful, few, and symptom-based. The other options contradict core best practices by promoting excessive or equal-weight alerting, which can overwhelm operations teams.
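
On the notification side, Alertmanager's grouping and inhibition features help keep the alerts that do fire from flooding the on-call rotation. A minimal sketch, assuming a hypothetical webhook receiver and a page/warning severity labeling scheme:

```yaml
route:
  # Batch related alerts into a single notification instead of
  # sending one page per time series; the group_by keys are illustrative.
  group_by: ['alertname', 'job']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
  receiver: oncall

receivers:
  - name: oncall
    webhook_configs:
      # Placeholder endpoint; substitute a real paging integration.
      - url: 'http://example.internal/alert-hook'

inhibit_rules:
  # While a symptom-level page is firing for a job, suppress the
  # lower-severity cause-level warnings for the same job.
  - source_matchers: ['severity = page']
    target_matchers: ['severity = warning']
    equal: ['job']
```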


Verified from the Prometheus documentation: Alerting Best Practices, Alertmanager Design Philosophy, and Prometheus monitoring and reliability engineering principles.

Contribute your Thoughts:

Nathalie
9 hours ago
I think B strikes the right balance!
upvoted 0 times
Octavio
5 days ago
D just sounds like a recipe for alert fatigue.
upvoted 0 times
Cristina
24 days ago
B) Definitely the way to go. Less is more when it comes to effective alerting.
upvoted 0 times
Sabra
29 days ago
Haha, D) sounds like a recipe for alert fatigue. Good luck keeping up with all those notifications!
upvoted 0 times
Marion
1 month ago
D) Wow, that's a bit overkill. Monitoring every single metric is just going to create a mess.
upvoted 0 times
Julieta
1 month ago
A) I'm not sure I agree with that. Major and minor alerts should be prioritized differently.
upvoted 0 times
Cecilia
1 month ago
C) I disagree, more alerts are better! Catch those problems early before they cause any real damage.
upvoted 0 times
Blondell
2 months ago
B) Sounds like the most reasonable approach to me. Fewer alerts means less noise to sift through.
upvoted 0 times
Michal
2 months ago
I’m leaning towards D, but I remember someone mentioning that generating alerts for every metric could overwhelm the team. It’s tricky!
upvoted 0 times
Carlton
2 months ago
I think we practiced a question similar to this, and I recall that having fewer, more meaningful alerts is generally better. So, B sounds familiar.
upvoted 0 times
Jess
2 months ago
I'm not entirely sure, but I feel like we talked about the importance of prioritizing alerts. A seems a bit excessive to me.
upvoted 0 times
Kayleigh
2 months ago
I'm feeling pretty confident about this one. The answer is B - have as few alerts as possible, but make sure they're for the critical stuff that could cause real problems. Gotta be selective with alerts to avoid alert fatigue.
upvoted 0 times
Yasuko
3 months ago
Okay, I think I've got this. The key is to focus on the most important alerts that could lead to major outages. Minor issues can be handled through other monitoring, but the alerts need to be strategic and impactful.
upvoted 0 times
Annamae
3 months ago
I'm a bit confused on this one. Should we really be generating alerts for every single metric? That seems like it could get out of hand quickly. I'll need to review my notes on best practices for alerting.
upvoted 0 times
Jose
3 months ago
A little surprised that people think minor alerts aren't important.
upvoted 0 times
Jesus
3 months ago
B is definitely the way to go! Less noise, more focus.
upvoted 0 times
Melina
3 months ago
I remember discussing how too many alerts can lead to alert fatigue, so I think B might be the right approach.
upvoted 0 times
Jin
4 months ago
C could work, but it might create alert fatigue.
upvoted 0 times
Alba
4 months ago
I disagree, minor alerts can prevent bigger issues later.
upvoted 0 times
Xuan
4 months ago
I'm pretty sure the best practice is to have as few alerts as possible, but make sure they're for the really critical issues. Catching problems early is important, but too many alerts can be overwhelming.
upvoted 0 times
Edmond
4 months ago
Hmm, this is a tricky one. I'll need to think carefully about the trade-offs between alert volume and alert importance.
upvoted 0 times
