You are running a Pentaho MapReduce (PMR) job that is failing on Hadoop. You review the YARN logs and determine that the mappers are generating out-of-memory errors.
I don't recall much about the "Enable blocking" option, but it seems like it wouldn't directly address memory issues. I think we should focus on memory settings instead.
I feel like setting the JVM memory parameters in the User Defined tab is the right approach. We did a similar question in class about tuning memory settings.
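For anyone who wants to see what that tab maps to under the hood: the User Defined tab just passes name/value pairs into the Hadoop job configuration. Here is a minimal sketch of the equivalent in raw MapReduce code, assuming Hadoop 2.x property names; the memory values are illustrative assumptions, not recommendations.

```java
import org.apache.hadoop.conf.Configuration;

public class MapperMemoryTuning {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Container size YARN allocates to each map task, in MB.
        // 4096 is an illustrative value for this sketch.
        conf.set("mapreduce.map.memory.mb", "4096");

        // JVM heap for the mapper process; keep it below the container
        // size so there is headroom for non-heap memory.
        conf.set("mapreduce.map.java.opts", "-Xmx3276m");

        // In PMR, these same name/value pairs would be entered in the
        // User Defined tab of the Pentaho MapReduce job entry.
        System.out.println("mapreduce.map.memory.mb = "
                + conf.get("mapreduce.map.memory.mb"));
    }
}
```

The usual guidance is to keep -Xmx at roughly 75-80% of the container size, so the mapper JVM doesn't get killed by YARN for exceeding its allocation.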
I'm not entirely sure, but I think increasing the number of mapper tasks could just lead to more issues if the mappers are already running out of memory.
I remember we discussed increasing memory settings in the Pentaho server startup script during our last practice session. That might help with the out of memory errors.
This is a good test of our understanding of the indexing process. I'll methodically go through each option and think about how it relates to index time processing to make sure I get this right.
Okay, I think I've got this. The key is to look at the shape of the utilization line on the chart and match it to the options provided. Based on the description, option D seems to be the best fit - a high utilization level that drops off sharply.