You have a Fabric tenant that contains a warehouse.
You are designing a star schema model that will contain a customer dimension. The customer dimension table will be a Type 2 slowly changing dimension (SCD).
You need to recommend which columns to add to the table. The columns must NOT already exist in the source.
Which three types of columns should you recommend? Each correct answer presents part of the solution.
NOTE: Each correct answer is worth one point.
For a Type 2 slowly changing dimension (SCD), you typically need to add the following types of columns that do not exist in the source system:
An effective start date and time (E): This column records the date and time from which the data in the row is effective.
An effective end date and time (A): This column indicates the date and time until which the data in the row was effective, allowing you to keep historical records of changes over time.
A surrogate key (C): A surrogate key is a unique identifier for each row in the dimension table. It is necessary in a Type 2 SCD because the business key from the source repeats across the multiple historical versions of each customer, so it can no longer uniquely identify a row on its own.
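As a hedged illustration, here is a minimal T-SQL sketch of such a dimension table; the table and column names are hypothetical, and only CustomerKey, EffectiveStartDate, and EffectiveEndDate are the added columns that do not exist in the source:

-- Hypothetical Type 2 customer dimension for a Fabric warehouse.
CREATE TABLE dbo.DimCustomer
(
    CustomerKey        INT          NOT NULL, -- added: surrogate key
    CustomerID         INT          NOT NULL, -- business key from the source
    CustomerName       VARCHAR(100) NOT NULL, -- attribute from the source
    EffectiveStartDate DATETIME2(3) NOT NULL, -- added: row is valid from this point
    EffectiveEndDate   DATETIME2(3) NULL      -- added: row was valid until this point (NULL = current row)
);

When a tracked attribute changes, the current row's EffectiveEndDate is closed off and a new row with a new surrogate key is inserted, so fact tables always join to the version of the customer that was valid at the time of the transaction.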
You have a Fabric tenant.
You are creating a Fabric Data Factory pipeline.
You have a stored procedure that returns the number of active customers and their average sales for the current month.
You need to add an activity that will execute the stored procedure in a warehouse. The returned values must be available to the downstream activities of the pipeline.
Which type of activity should you add?
In a Fabric Data Factory pipeline, to execute a stored procedure and make the returned values available for downstream activities, the Lookup activity is used. This activity can retrieve a dataset from a data store and pass it on for further processing. Here's how you would use the Lookup activity in this context:
Add a Lookup activity to your pipeline.
Configure the Lookup activity to use the stored procedure by providing the necessary SQL statement or stored procedure name.
In the settings, specify that the activity should use the stored procedure mode.
Once the stored procedure executes, the Lookup activity captures the returned rows and exposes them as the activity's output.
Downstream activities can then reference the output of the Lookup activity.
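As a hedged illustration, the stored procedure might look like the following T-SQL sketch; the procedure, table, and column names here are hypothetical, not taken from the question:

-- Hypothetical procedure returning one summary row for the current month.
CREATE PROCEDURE dbo.GetMonthlyCustomerSummary
AS
BEGIN
    SELECT
        COUNT(DISTINCT CustomerID) AS ActiveCustomers, -- number of active customers
        AVG(SalesAmount)           AS AverageSales     -- average sales this month
    FROM dbo.Sales
    WHERE OrderDate >= DATEFROMPARTS(YEAR(GETDATE()), MONTH(GETDATE()), 1);
END;

A downstream activity can then reference the captured values with an expression such as @activity('Lookup1').output.firstRow.ActiveCustomers, where Lookup1 is the assumed name of the Lookup activity.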
You have a Fabric notebook that has the Python code and output shown in the following exhibit.
Which type of analytics are you performing?
The Python code and output shown in the exhibit display a histogram, which represents the distribution of a dataset. This kind of analysis is descriptive analytics, which is used to describe or summarize the features of a dataset. Descriptive analytics answers the question 'What has happened?' by providing insight into past data through measures such as the mean, median, mode, and standard deviation, and through graphical representations such as histograms.
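Although the exhibit uses Python, the same kind of descriptive summary can be sketched in SQL; the table and column names below are hypothetical:

-- Descriptive statistics summarizing past sales data.
SELECT
    COUNT(*)           AS TotalRows,
    AVG(SalesAmount)   AS MeanSales,
    MIN(SalesAmount)   AS MinSales,
    MAX(SalesAmount)   AS MaxSales,
    STDEV(SalesAmount) AS StdDevSales
FROM dbo.Sales;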
You have an Azure Repos Git repository named Repo1 and a Fabric-enabled Microsoft Power BI Premium capacity. The capacity contains two workspaces named Workspace1 and Workspace2. Git integration is enabled at the workspace level.
You plan to use Microsoft Power BI Desktop and Workspace1 to make version-controlled changes to a semantic model stored in Repo1. The changes will be built and deployed to Workspace2 by using Azure Pipelines.
You need to ensure that report and semantic model definitions are saved as individual text files in a folder hierarchy. The solution must minimize development and maintenance effort.
In which file format should you save the changes?
When working with Power BI Desktop and Git integration for version control, report and semantic model definitions should be saved in the PBIP (Power BI Project) format. Unlike PBIX, which stores everything in a single binary file, a PBIP project saves the report and semantic model definitions as individual text files in a folder hierarchy (for example, a *.Report folder and a *.SemanticModel folder), which makes the changes easy to diff, merge, and track in Repo1. Because Power BI Desktop generates and maintains this structure automatically, the format minimizes development and maintenance effort while supporting builds and deployments to Workspace2 through Azure Pipelines for CI/CD (continuous integration/continuous deployment).
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Fabric tenant that contains a lakehouse named Lakehouse1. Lakehouse1 contains a Delta table named Customer.
When you query Customer, you discover that the query is slow to execute. You suspect that maintenance was NOT performed on the table.
You need to identify whether maintenance tasks were performed on Customer.
Solution: You run the following Spark SQL statement:
DESCRIBE DETAIL customer
Does this meet the goal?
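No, this does not meet the goal. DESCRIBE DETAIL returns only the table's current metadata, such as its format, location, number of files, and size in bytes; it does not list the operations that were run against the table. To identify whether maintenance tasks such as OPTIMIZE or VACUUM were performed on Customer, you would instead query the table's transaction history with a Spark SQL statement such as:

-- Lists the table's transaction log, including maintenance operations
-- such as OPTIMIZE and VACUUM, with their timestamps.
DESCRIBE HISTORY customer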