
SISA CSPAI Exam - Topic 1 Question 1 Discussion

Actual exam question for SISA's CSPAI exam
Question #: 1
Topic #: 1

In a machine translation system where context from both early and later words in a sentence is crucial, a team is considering moving from RNN-based models to Transformer models. How does the self-attention mechanism in Transformer architecture support this task?

Suggested Answer: B
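
The answer options themselves are not reproduced on this page, but the suggested answer (B) and the discussion below point to the same idea: self-attention lets every position attend to every other position in the sentence, so context from earlier and later words is available in a single step rather than being passed along a recurrent chain. Below is a minimal NumPy sketch of scaled dot-product self-attention to illustrate that point; the sequence length, model size, and random projection matrices are toy values chosen for illustration and are not taken from the exam material.

```python
# Minimal sketch: scaled dot-product self-attention over a toy "sentence"
# of 5 token embeddings. Every row of the attention-weight matrix mixes
# information from all positions, earlier and later alike.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                      # toy sizes, chosen arbitrarily
x = rng.normal(size=(seq_len, d_model))      # stand-in token embeddings

# In a trained Transformer these projections are learned; random matrices
# are used here purely for illustration.
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v

scores = Q @ K.T / np.sqrt(d_model)          # (seq_len, seq_len): every word scored against every word
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # softmax over all positions
context = weights @ V                        # each output vector draws on the whole sentence

print(weights.round(2))   # row i: how much word i attends to words 0..4, before and after it
```

Contrast this with an RNN, which would have to carry information about word 0 through four sequential hidden-state updates before it could influence word 4; here the dependency is a single attention weight.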

Contribute your Thoughts:

Miss
2 months ago
Yup, B is the way to go! Transformers are awesome!
upvoted 0 times
...
Nu
2 months ago
C makes no sense, constant weights don’t help with translation.
upvoted 0 times
...
France
2 months ago
Wait, can self-attention really handle context that well?
upvoted 0 times
...
Santos
3 months ago
A is totally wrong, it’s not about strict order.
upvoted 0 times
...
Angella
3 months ago
Definitely B! Self-attention is key for long-range dependencies.
upvoted 0 times
...
Sylvie
3 months ago
I thought self-attention was about focusing on recent words only, but that doesn’t seem right for capturing full context.
upvoted 0 times
...
Franklyn
4 months ago
I feel like we practiced a question similar to this, and I recall that processing words simultaneously is a key feature of Transformers.
upvoted 0 times
...
Felix
4 months ago
I’m not entirely sure, but I think the self-attention mechanism helps with long-range dependencies, which is something we discussed in class.
upvoted 0 times
...
Bok
4 months ago
I remember that self-attention allows the model to look at all words at once, which seems important for understanding context.
upvoted 0 times
...
Haydee
4 months ago
Wait, I'm a bit confused. Isn't the whole point of self-attention to focus on the most recent word and speed up translation? I'm not sure how that would help capture context from earlier and later words in the sentence. I'll need to re-read the question and options more closely.
upvoted 0 times
...
Dana
4 months ago
Ah, I see what they're getting at. The self-attention mechanism in Transformers lets the model establish long-range dependencies between words, which is exactly what's needed for a machine translation system that relies on context from across the entire sentence. I feel confident I can explain this well in my answer.
upvoted 0 times
...
Gerry
4 months ago
Hmm, I'm a bit unsure about this one. I know Transformers use self-attention, but I'm not entirely clear on how that specifically supports the task of machine translation with context from both early and later words. I'll need to think this through carefully.
upvoted 0 times
...
Trinidad
5 months ago
This question seems pretty straightforward. The self-attention mechanism in Transformers allows the model to consider all words in the sentence simultaneously, which should help capture the crucial context from both early and later words.
upvoted 0 times
...
Azalee
5 months ago
I agree with Gearldine; the self-attention mechanism in Transformer models considers all the words in the sentence at once.
upvoted 0 times
...
Theresia
5 months ago
The self-attention mechanism in Transformer architecture is perfect for this task! It allows the model to consider all the words in the sentence simultaneously, capturing the crucial context from both early and later words.
upvoted 0 times
Minna
2 months ago
Definitely better than RNNs for this task!
upvoted 0 times
...
Matt
2 months ago
I agree, it’s all about those long-range dependencies.
upvoted 0 times
...
Merri
2 months ago
Totally! It captures context so well.
upvoted 0 times
...
Daisy
3 months ago
The self-attention really changes the game!
upvoted 0 times
...
...
Gearldine
6 months ago
I think option B is the correct answer.
upvoted 0 times
...
