
iSQI CT-AI Exam - Topic 7 Question 8 Discussion

Actual exam question for iSQI's CT-AI exam
Question #: 8
Topic #: 7

An ML engineer is trying to determine the correctness of a new open-source implementation 'X' of a supervised regression algorithm. R-Square is one of the functional performance metrics used to determine the quality of the model.

Which ONE of the following would be an APPROPRIATE strategy to achieve this goal?

SELECT ONE OPTION

Suggested Answer: C

A. Add 10% of the rows randomly, create another model, and compare the R-Square scores of both models.

Adding more data to the training set can change the R-Square score, but it does not directly verify the correctness of the implementation.

B. Train various models by changing the order of input features and verify that the R-Square scores of these models vary significantly.

Changing the order of input features should not significantly affect the R-Square score if the implementation is correct; this approach tests model robustness rather than the correctness of the implementation.

C. Compare the R-Square score of the model obtained using two different implementations that utilize two different programming languages while using the same algorithm and the same training and testing data.

This approach directly compares the performance of two implementations of the same algorithm. If both implementations produce similar R-Square scores on the same training and testing data, it suggests that the new implementation 'X' is correct.

D. Drop 10% of the rows randomly, create another model, and compare the R-Square scores of both models.

Dropping data can cause the R-Square score to vary, but it does not directly verify the correctness of the implementation.

Therefore, option C is the most appropriate strategy because it directly compares the performance of the new implementation 'X' with another implementation using the same algorithm and datasets, which helps in verifying the correctness of the implementation.
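The strategy in option C is essentially back-to-back (differential) testing: run two independent implementations of the same algorithm on identical training and test data and check that their R-Square scores agree. A minimal sketch of the idea in Python (the two "implementations" here are hypothetical stand-ins, a closed-form normal-equations solver playing the reference and `np.linalg.lstsq` playing the new implementation 'X'; a real comparison would pit implementations in different languages against each other):

```python
import numpy as np

def r_square(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def fit_reference(X, y):
    # "Reference" implementation: closed-form least squares (normal equations).
    Xb = np.c_[np.ones(len(X)), X]          # prepend intercept column
    return np.linalg.solve(Xb.T @ Xb, Xb.T @ y)

def fit_new_impl(X, y):
    # Stand-in for the new implementation 'X': least squares via lstsq.
    Xb = np.c_[np.ones(len(X)), X]
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, X):
    return np.c_[np.ones(len(X)), X] @ w

# Same synthetic data, same train/test split for both implementations.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 3.0 + rng.normal(scale=0.1, size=200)
X_train, X_test, y_train, y_test = X[:150], X[150:], y[:150], y[150:]

r2_ref = r_square(y_test, predict(fit_reference(X_train, y_train), X_test))
r2_new = r_square(y_test, predict(fit_new_impl(X_train, y_train), X_test))

# If both implementations are correct, their scores should agree closely.
print(abs(r2_ref - r2_new) < 1e-6)
```

A small numerical tolerance is deliberate: two correct implementations can differ by floating-point rounding, so the test checks agreement within a threshold rather than exact equality.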


Contribute your Thoughts:

Xuan
3 months ago
Wait, dropping rows? That sounds risky!
upvoted 0 times
...
William
3 months ago
A is interesting too, but I think C is more reliable.
upvoted 0 times
...
Lon
3 months ago
Not sure if C is valid, different languages can introduce bias.
upvoted 0 times
...
Oren
4 months ago
Totally agree, C is the way to go!
upvoted 0 times
...
Eun
4 months ago
C seems like the best option to compare implementations.
upvoted 0 times
...
Malcom
4 months ago
I feel like comparing R-Square scores from different implementations is a solid strategy. It’s a direct way to see if the new implementation holds up against the original.
upvoted 0 times
...
Makeda
4 months ago
I practiced a question similar to this where we compared models based on different input features. But I think changing the order of features might not be the best way to assess correctness.
upvoted 0 times
...
Walker
4 months ago
I'm not entirely sure, but I think adding or dropping rows randomly might not give a clear picture of the model's performance. It feels a bit risky.
upvoted 0 times
...
Zack
5 months ago
I remember we discussed how comparing models from different implementations can help validate performance metrics like R-Square. Option C seems like a good choice.
upvoted 0 times
...
Wayne
5 months ago
I'm a bit confused by this question. Wouldn't option A or D just introduce unnecessary noise and variability in the R-Square scores? I think I'll go with option C - it seems like the most straightforward and reliable way to evaluate the new implementation.
upvoted 0 times
...
Desiree
5 months ago
Hmm, I'm a bit unsure about this one. Adding or dropping random rows to create new models and compare their R-Square scores doesn't seem like the most reliable approach to me. I'm leaning towards option C, but I'll need to think it through a bit more.
upvoted 0 times
...
Shelba
5 months ago
This seems like a straightforward question about evaluating the performance of a regression model. I think I'll go with option C - comparing the R-Square scores of the same algorithm implemented in different programming languages.
upvoted 0 times
...
Teri
5 months ago
Option C sounds like the best strategy here. Comparing the R-Square scores of the same algorithm implemented in different languages is a good way to assess the correctness of the new open-source implementation. The other options seem a bit too simplistic or potentially biased.
upvoted 0 times
...
Pamella
5 months ago
Alright, I feel pretty confident about this. I'll carefully evaluate each option and select the one that best describes the key considerations for enabling Person Accounts.
upvoted 0 times
...
Shaniqua
5 months ago
This is a good networking question. The key is understanding the purpose of a static route, which is to specify routing to an adjacent network when dynamic routing is not being used. I think option D is the correct answer here.
upvoted 0 times
...
Veronika
2 years ago
This question is making my head spin. Can we just get a calculator and start crunching numbers? That's what real engineers do, right?
upvoted 0 times
Fidelia
1 year ago
Let's focus on comparing the R-Square scores of different models to determine the correctness of the new implementation.
upvoted 0 times
...
Florinda
2 years ago
No, we need to follow a systematic approach to evaluate the model's performance.
upvoted 0 times
...
...
Leah
2 years ago
Jaleesa, you're a riot! Adding or dropping rows randomly is not going to give you any meaningful insights about the algorithm implementation. Option C is definitely the way to go here.
upvoted 0 times
Amie
2 years ago
Definitely, using two different programming languages with the same algorithm and data will provide valuable insights into the performance of the new implementation.
upvoted 0 times
...
Omer
2 years ago
Yeah, comparing the R-Square scores of models obtained using different implementations is a solid way to evaluate the quality of the algorithm.
upvoted 0 times
...
Alex
2 years ago
I agree with you, option C seems like the most appropriate strategy to determine the correctness of the new implementation.
upvoted 0 times
...
...
Aracelis
2 years ago
I agree with Royce, comparing R-Square scores using different programming languages would provide a more robust evaluation.
upvoted 0 times
...
Jaleesa
2 years ago
Hmm, I'm not sure about that. Wouldn't it be better to just throw more data at it and see what happens? I mean, that's how I usually debug my code. Just add more rows, right?
upvoted 0 times
Keshia
2 years ago
C: Comparing the R-Square scores of models from different implementations using the same algorithm and data can provide valuable insights into the quality of the new implementation.
upvoted 0 times
...
Clemencia
2 years ago
B: Training various models with different input feature orders could help determine if the model is robust and reliable.
upvoted 0 times
...
Jaime
2 years ago
A: Adding more data might not necessarily improve the model's performance. It's important to use appropriate strategies to evaluate the correctness of the new implementation.
upvoted 0 times
...
Xochitl
2 years ago
C: Comparing R-Square scores using different implementations can provide valuable insights.
upvoted 0 times
...
Joesph
2 years ago
B: It's important to use appropriate strategies to evaluate the model's correctness.
upvoted 0 times
...
Telma
2 years ago
A: Adding more data might not necessarily improve the model's performance.
upvoted 0 times
...
...
Royce
2 years ago
But changing the order of input features may not necessarily help determine the correctness of the implementation.
upvoted 0 times
...
Sage
2 years ago
I disagree, I believe option B is more appropriate.
upvoted 0 times
...
Royce
2 years ago
I think option C would be a good strategy.
upvoted 0 times
...
Jaclyn
2 years ago
I agree with Ona. Option C is the most appropriate strategy. Comparing the results across different implementations is the best approach to ensure the correctness of the new algorithm.
upvoted 0 times
Edmond
2 years ago
I think comparing the R-Square scores from models using different programming languages is a solid strategy.
upvoted 0 times
...
Zack
2 years ago
Option C is definitely the way to go. It's important to compare the results from different implementations.
upvoted 0 times
...
...
Ona
2 years ago
Option C is the way to go. Comparing the R-Square scores of the same algorithm implemented in different languages is the best way to verify the correctness of the new open-source implementation.
upvoted 0 times
Leeann
2 years ago
Agreed, it's a reliable method to ensure the accuracy of the new open-source implementation.
upvoted 0 times
...
Celeste
2 years ago
Definitely, comparing the scores from different implementations is a good way to validate the new implementation.
upvoted 0 times
...
Jesusa
2 years ago
That sounds like a solid plan.
upvoted 0 times
...
Elly
2 years ago
C) Compare the R-Square score of the model obtained using two different implementations that utilize two different programming languages while using the same algorithm and the same training and testing data.
upvoted 0 times
...
...
