Welcome to Pass4Success


Salesforce ANC-301 Exam - Topic 2 Question 47 Discussion

Actual exam question for Salesforce's ANC-301 exam
Question #: 47
Topic #: 2

A CRM Analytics consultant is reviewing results from an Einstein Discovery story with a business user. They agree with the findings but notice that none of the fields used in the story has a correlation value greater than 4%. The client is now concerned that the model may not be good enough to deploy.

Which action should the consultant take?

Suggested Answer: A
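A quick way to sanity-check the concern outside Einstein Discovery is to compute correlations between each candidate field and the outcome variable yourself. The sketch below uses synthetic data and assumes plain Pearson correlation as the metric; the field names, the data, and the reading of "4%" as |r| < 0.04 are all illustrative assumptions, since Einstein Discovery's own correlation figure may be computed differently.

```python
import numpy as np

# Illustrative sketch only: screen candidate fields by their absolute
# correlation with the outcome variable. Field names and data are made up.
rng = np.random.default_rng(42)
n = 1_000

# Synthetic outcome (e.g., deal amount) and three candidate fields.
outcome = rng.normal(size=n)
fields = {
    "region_code": rng.normal(size=n),                   # unrelated noise
    "lead_source": 0.03 * outcome + rng.normal(size=n),  # very weak signal
    "discount_pct": 0.5 * outcome + rng.normal(size=n),  # stronger candidate
}

# Flag fields whose absolute correlation falls below the 4% threshold
# mentioned in the question.
for name, values in fields.items():
    r = np.corrcoef(values, outcome)[0, 1]
    label = "below 4%" if abs(r) < 0.04 else "above 4%"
    print(f"{name}: r = {r:+.3f} ({label})")
```

If every field in the story lands below the threshold, that supports answer A: look for additional data with a stronger relationship to the outcome variable rather than tuning settings on a model built from weak predictors.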

Contribute your Thoughts:

Clorinda
3 months ago
A is the way to go: find stronger relationships!
upvoted 0 times
Ronny
3 months ago
Wait, can we really trust this model at all?
upvoted 0 times
Flo
3 months ago
I think editing the model settings could help.
upvoted 0 times
Novella
4 months ago
Totally agree, they need better data!
upvoted 0 times
Myra
4 months ago
Correlation values under 4% are pretty low.
upvoted 0 times
Jolanda
4 months ago
Using the right algorithm sounds familiar, but I wonder if that alone would solve the problem if the data isn't strong enough.
upvoted 0 times
Paola
4 months ago
I feel like editing the model accuracy settings could be risky. What if it just masks the underlying issue?
upvoted 0 times
Lorriane
4 months ago
I think we practiced a similar question where identifying additional data helped improve the model. That might be the best approach here.
upvoted 0 times
Barrie
5 months ago
I remember we discussed the importance of correlation values in class, but I'm not sure if a 4% correlation is too low for deployment.
upvoted 0 times
Dawne
5 months ago
For this type of question, I think the best approach would be to identify additional data that could have a stronger relationship with the outcome variable. Low correlation values are a red flag, so we need to dig deeper and see if there are other factors we're missing. Trying a different algorithm is an option too, but I'd start with the data first.
upvoted 0 times
Carline
5 months ago
I'm a little confused by this one. If the correlation values are all below 4%, that does seem pretty low. I'm not sure if editing the model settings would really help, or if we need to look at a different algorithm entirely. Might need to do some more research on this.
upvoted 0 times
Hailey
5 months ago
Okay, so the client is worried the model isn't good enough to deploy. I'd probably try editing the model accuracy settings and rerunning it to see if that improves the correlation. Might be worth a shot before looking for new data.
upvoted 0 times
Merilyn
5 months ago
Hmm, this is an interesting one. I think I'd start by looking at the data and seeing if there are any other variables that might have a stronger relationship with the outcome. The low correlation values are a bit concerning, so I'd want to explore that further.
upvoted 0 times
Lawanda
1 year ago
I like option A - expanding the data sources seems like the surest way to boost the model's performance. Although, the client might be more impressed if the consultant could also do a stand-up comedy routine while they're at it.
upvoted 0 times
Heike
1 year ago
Yeah, let's focus on getting more relevant data for better results.
upvoted 0 times
Edward
1 year ago
I agree, let's try to expand the data sources first.
upvoted 0 times
Jodi
1 year ago
That sounds like a good idea, more data could definitely help.
upvoted 0 times
Ozell
1 year ago
I think we should go with option A and find more data to improve the model.
upvoted 0 times
Nickolas
1 year ago
Option A sounds like the most thorough approach. But if I were the consultant, I'd also bring a bag of lucky charms just in case. You never know when a little extra magic might come in handy!
upvoted 0 times
Afton
1 year ago
This is a tough one, but I'd say option C is the safest bet. The algorithm might need some tweaking to get the best results.
upvoted 0 times
Kimberlie
1 year ago
Let's go with option C and update the model with the appropriate algorithm.
upvoted 0 times
Lisandra
1 year ago
I agree, tweaking the algorithm could improve the model's accuracy.
upvoted 0 times
Francisca
1 year ago
Option C is a good choice. The algorithm might need some adjustments.
upvoted 0 times
Dusti
1 year ago
Hmm, I'm not sure. Editing the model accuracy settings might work, but it could also just be masking the underlying issue. I'd lean towards A or C.
upvoted 0 times
Hannah
1 year ago
I think option A is the way to go. Identifying additional data with stronger correlations could really improve the model's predictive power.
upvoted 0 times
Elmira
1 year ago
Using the appropriate algorithm and updating the model could also be helpful.
upvoted 0 times
Amber
1 year ago
Maybe we should also consider editing the model accuracy settings and rerunning it.
upvoted 0 times
Shakira
1 year ago
I think we should identify additional data with stronger correlations.
upvoted 0 times
Maia
1 year ago
That could definitely improve the model's predictive power.
upvoted 0 times
Lauran
1 year ago
I agree, adding more relevant data could strengthen the relationship with the outcome variable.
upvoted 0 times
Desire
1 year ago
I think we should identify additional data to improve the model.
upvoted 0 times
