Welcome to Pass4Success


Microsoft AI-102 Exam - Topic 6 Question 86 Discussion

Actual exam question for Microsoft's AI-102 exam
Question #: 86
Topic #: 6
[All AI-102 Questions]

You have an Azure subscription that contains an Azure AI Content Safety resource named CS1. You plan to build an app that will analyze user-generated documents and identify obscure offensive terms. You need to create a dictionary that will contain the offensive terms. The solution must minimize development effort. What should you use?

Suggested Answer: D
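For context on why a custom blocklist minimizes development effort: Azure AI Content Safety lets you attach a list of exact terms to text analysis, so no model training is needed. The sketch below illustrates the underlying idea in plain Python (it is not the Content Safety SDK; the term list and document are made-up placeholders):

```python
import re

def find_blocklist_hits(text, blocklist):
    """Return the blocklist terms present in the text (whole-word, case-insensitive)."""
    hits = []
    for term in blocklist:
        # \b enforces whole-word matching, so a term does not match inside a longer word
        if re.search(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
            hits.append(term)
    return hits

# Hypothetical dictionary of obscure offensive terms (placeholders for illustration)
blocklist = ["termA", "termB", "termC"]
document = "This user-generated document contains termB and TERMC."

print(find_blocklist_hits(document, blocklist))  # → ['termB', 'termC']
```

In the actual service, the dictionary lives in the Content Safety resource and matching happens server-side during text analysis, which is what keeps the development effort low compared with training a custom text classifier.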

Contribute your Thoughts:

Pauline
3 months ago
Wait, can a blacklist really cover all obscure terms? Seems risky.
upvoted 0 times
...
Candida
3 months ago
Not so sure about that, a text classifier might be better for nuance.
upvoted 0 times
...
Joni
3 months ago
Language detection is cool, but not really what you need here.
upvoted 0 times
...
Lauran
3 months ago
Definitely agree, a blacklist is simple and effective!
upvoted 0 times
...
Ozell
4 months ago
I think a blacklist is the way to go for this.
upvoted 0 times
...
Lavonne
4 months ago
A text classifier seems too complex for just identifying offensive terms. I lean towards using a blacklist for simplicity.
upvoted 0 times
...
Shasta
4 months ago
I practiced a similar question where we had to choose between classifiers and moderation tools. I feel like text moderation might be the right choice here too.
upvoted 0 times
...
Gilberto
4 months ago
I'm not entirely sure, but I remember something about text moderation being useful for identifying offensive content.
upvoted 0 times
...
Dalene
4 months ago
I think a blacklist could be the easiest option since it allows you to specify the exact terms you want to filter out.
upvoted 0 times
...
Tamekia
5 months ago
Language detection could be an interesting approach. If we can identify the language of the documents, we might be able to tailor our offensive term detection more effectively. I'll have to research that option a bit more.
upvoted 0 times
...
Brandon
5 months ago
A blacklist seems like the easiest solution to implement. I can just create a list of offensive terms and use that to analyze the documents. Minimizing development effort is key, so I think that's the way to go.
upvoted 0 times
...
Ula
5 months ago
Hmm, I'm a bit confused. I'm not sure if text moderation is the best option here. Maybe a text classifier or a blacklist would be more appropriate. I'll have to think this through a bit more.
upvoted 0 times
...
Stefanie
5 months ago
This seems like a straightforward question. I'd go with text moderation since that's specifically designed for identifying offensive content.
upvoted 0 times
...
Eric
10 months ago
This is a real head-scratcher. I'm torn between the text classifier and the blacklist. Maybe I should just flip a coin and call it a day. At least that way, I won't have to worry about maintaining a list of naughty words for the rest of my life.
upvoted 0 times
...
Tatum
10 months ago
Definitely go with the text moderation option. That's what it's designed for, and it'll probably give you the best results with the least amount of hassle. I mean, why reinvent the wheel, right?
upvoted 0 times
Justa
9 months ago
Using a blacklist would be too manual and time-consuming.
upvoted 0 times
...
Lenna
9 months ago
It's definitely the most efficient option for this scenario.
upvoted 0 times
...
Annette
9 months ago
I agree, text moderation is the way to go.
upvoted 0 times
...
...
Bonita
11 months ago
Hmm, I'm not sure. A blacklist might work, but it could be a pain to maintain. Maybe a text classifier would be more robust and effective in the long run. Just my two cents.
upvoted 0 times
Joesph
9 months ago
Text moderation could also be a good choice.
upvoted 0 times
...
Lewis
9 months ago
I agree, a text classifier would be more robust.
upvoted 0 times
...
Emilio
10 months ago
Yeah, a blacklist might be hard to maintain.
upvoted 0 times
...
Lamar
10 months ago
I think a text classifier would be the best option.
upvoted 0 times
...
...
Earleen
11 months ago
I think option D is the way to go. A blacklist is the simplest solution and will get the job done with minimal effort. Who needs all that fancy AI stuff anyway?
upvoted 0 times
...
Tayna
11 months ago
I'm not sure, but maybe text moderation could also work to identify offensive terms.
upvoted 0 times
...
Kristeen
11 months ago
I agree with Curt, using a blacklist would be the most efficient solution.
upvoted 0 times
...
Curt
11 months ago
I think we should use a blacklist for the offensive terms.
upvoted 0 times
...
