An ML engineer needs to use Amazon SageMaker to fine-tune a large language model (LLM) for text summarization. The ML engineer must follow a low-code/no-code (LCNC) approach.
I'm a bit confused about the difference between deploying on EC2 instances versus a custom API endpoint. Can someone clarify which one would be considered more "low-code"?
I think option D is the way to go here. SageMaker Autopilot should handle the fine-tuning of the LLM, and then we can deploy it using SageMaker JumpStart, which sounds like a pretty low-code approach.
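For anyone curious what the JumpStart side of that flow looks like outside the console, the SageMaker Python SDK exposes it through `JumpStartEstimator`. A minimal sketch, assuming the `sagemaker` package is installed and you have an AWS execution role; the model ID, instance type, and S3 URI below are illustrative placeholders, not the exam's answer:

```python
def fine_tune_and_deploy(role_arn, training_data_s3_uri):
    """Fine-tune a JumpStart LLM on a summarization dataset and deploy it.

    All concrete values (model ID, instance type) are placeholders chosen
    for illustration; pick them from the JumpStart model catalog.
    """
    # Imported inside the function so the sketch can be read (and the
    # module loaded) without the SageMaker SDK installed.
    from sagemaker.jumpstart.estimator import JumpStartEstimator

    estimator = JumpStartEstimator(
        model_id="huggingface-llm-falcon-7b-bf16",  # example LLM from the catalog
        role=role_arn,
        instance_count=1,
        instance_type="ml.g5.12xlarge",  # example GPU training instance
    )

    # Fine-tune on the summarization dataset stored in S3.
    estimator.fit({"training": training_data_s3_uri})

    # Deploy the fine-tuned model behind a real-time SageMaker endpoint.
    predictor = estimator.deploy()
    return predictor
```

The console-based JumpStart flow does the same thing with no code at all, which is why it fits an LCNC requirement; the SDK version is just the programmatic equivalent.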