You have a Microsoft Power BI semantic model.
You need to identify any surrogate key columns in the model that have the Summarize By property set to a value other than None. The solution must minimize effort.
What should you use?
To identify surrogate key columns with the 'Summarize By' property set to a value other than 'None,' the Best Practice Analyzer in Tabular Editor is the most efficient tool. The Best Practice Analyzer can analyze the entire model and provide a report on all columns that do not meet a specified best practice, such as having the 'Summarize By' property set correctly for surrogate key columns. Here's how you would proceed:
Open your Power BI model in Tabular Editor.
Open the Best Practice Analyzer from the Tools menu.
Add or enable a rule that flags columns whose 'Summarize By' property is set to a value other than 'None'.
Run the analysis to get a report of the surrogate key columns that violate the rule.
Review and fix the flagged columns directly within Tabular Editor; for many rules, the Best Practice Analyzer can apply the fix automatically.
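The check itself can be expressed as a Best Practice Analyzer rule. Here is a minimal sketch of such a rule; the rule ID, name, category, and the `EndsWith("Key")` naming convention are assumptions, so adjust the expression to match how surrogate keys are actually named in your model:

```json
{
    "ID": "SURROGATE_KEYS_SUMMARIZE_BY_NONE",
    "Name": "Surrogate key columns should not be summarized",
    "Category": "Model hygiene",
    "Severity": 2,
    "Scope": "DataColumn",
    "Expression": "Name.EndsWith(\"Key\") and SummarizeBy <> \"None\"",
    "FixExpression": "SummarizeBy = AggregateFunction.None"
}
```

The optional FixExpression lets the Best Practice Analyzer set the property to None for every flagged column in one action, which is what keeps the effort minimal.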
You have a Fabric tenant that contains a warehouse.
You are designing a star schema model that will contain a customer dimension. The customer dimension table will be a Type 2 slowly changing dimension (SCD).
You need to recommend which columns to add to the table. The columns must NOT already exist in the source.
Which three types of columns should you recommend? Each correct answer presents part of the solution.
NOTE: Each correct answer is worth one point.
For a Type 2 slowly changing dimension (SCD), you typically need to add the following types of columns that do not exist in the source system:
An effective start date and time: this column records the date and time from which the data in the row is effective.
An effective end date and time: this column indicates when the data in the row stopped being effective, which allows you to keep historical records of changes over time.
A surrogate key: a unique identifier for each row in the table. It is necessary in a Type 2 SCD because the business key from the source repeats across the historical and current versions of the same customer.
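The mechanics these three columns enable can be sketched in a few lines of Python. This is illustrative only; the table, column, and customer names are invented for the example:

```python
from datetime import datetime

# A Type 2 customer dimension: each row carries a surrogate key plus
# effective start/end timestamps; none of these columns exist in the source.
dim_customer = [
    {"CustomerSK": 1, "CustomerID": "C001", "City": "Lisbon",
     "EffectiveStart": datetime(2023, 1, 1), "EffectiveEnd": None},
]

def apply_scd2_change(dim, customer_id, new_city, change_time):
    """Close the current row and insert a new version with a new surrogate key."""
    current = next(r for r in dim
                   if r["CustomerID"] == customer_id and r["EffectiveEnd"] is None)
    current["EffectiveEnd"] = change_time  # expire the old version
    dim.append({
        "CustomerSK": max(r["CustomerSK"] for r in dim) + 1,  # new surrogate key
        "CustomerID": customer_id,  # the business key repeats across versions
        "City": new_city,
        "EffectiveStart": change_time,
        "EffectiveEnd": None,  # open-ended end date marks the current row
    })

apply_scd2_change(dim_customer, "C001", "Porto", datetime(2024, 6, 1))
# The dimension now holds two rows for C001: one historical, one current.
```

Fact tables join to the dimension on the surrogate key, so a sale recorded while the customer lived in Lisbon keeps pointing at the Lisbon version of the row.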
You have a Fabric tenant.
You are creating a Fabric Data Factory pipeline.
You have a stored procedure that returns the number of active customers and their average sales for the current month.
You need to add an activity that will execute the stored procedure in a warehouse. The returned values must be available to the downstream activities of the pipeline.
Which type of activity should you add?
In a Fabric Data Factory pipeline, to execute a stored procedure and make the returned values available for downstream activities, the Lookup activity is used. This activity can retrieve a dataset from a data store and pass it on for further processing. Here's how you would use the Lookup activity in this context:
Add a Lookup activity to your pipeline.
Configure the Lookup activity to use the stored procedure by providing the necessary SQL statement or stored procedure name.
In the settings, specify that the activity should use the stored procedure mode.
Once the stored procedure executes, the Lookup activity captures the returned values and exposes them in its activity output.
Downstream activities can then reference the output of the Lookup activity.
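For example, a downstream activity (such as a Set Variable or If Condition activity) can read the stored procedure's result through the pipeline expression language. The activity name `LookupActiveCustomers` and the column names here are assumptions for illustration:

```
@activity('LookupActiveCustomers').output.firstRow.ActiveCustomers
@activity('LookupActiveCustomers').output.firstRow.AvgSales
```

Note that `firstRow` applies when the Lookup activity is configured to return only the first row; a stored procedure returning one summary row, as in this scenario, fits that setting.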
You have a Fabric notebook that has the Python code and output shown in the following exhibit.
Which type of analytics are you performing?
The Python code and output shown in the exhibit display a histogram, which is a representation of the distribution of data. This kind of analysis is descriptive analytics, which is used to describe or summarize the features of a dataset. Descriptive analytics answers the question of 'what has happened' by providing insight into past data through tools such as mean, median, mode, standard deviation, and graphical representations like histograms.
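As an illustration of the idea, descriptive analytics on a numeric series typically pairs summary statistics with a distribution plot such as a histogram. The data below is invented and the exhibit's actual code may differ; this sketch uses only the standard library:

```python
import statistics

# Invented monthly sales figures standing in for the exhibit's data.
sales = [120, 135, 128, 150, 149, 160, 155, 170, 162, 158]

# Descriptive analytics: summarize what has happened.
mean = statistics.mean(sales)
median = statistics.median(sales)
stdev = statistics.stdev(sales)
print(f"mean={mean:.1f} median={median:.1f} stdev={stdev:.1f}")

# A crude text histogram of the distribution (matplotlib's hist() would
# produce the kind of chart shown in the exhibit).
lo, hi, bins = min(sales), max(sales), 5
width = (hi - lo) / bins
for b in range(bins):
    left = lo + b * width
    count = sum(left <= x < left + width or (b == bins - 1 and x == hi)
                for x in sales)
    print(f"{left:6.1f}-{left + width:6.1f} | " + "#" * count)
```

Nothing here predicts or prescribes anything; every number describes data that already exists, which is what makes the analysis descriptive rather than predictive or prescriptive.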
You have an Azure Repos Git repository named Repo1 and a Fabric-enabled Microsoft Power BI Premium capacity. The capacity contains two workspaces named Workspace1 and Workspace2. Git integration is enabled at the workspace level.
You plan to use Microsoft Power BI Desktop and Workspace1 to make version-controlled changes to a semantic model stored in Repo1. The changes will be built and deployed to Workspace2 by using Azure Pipelines.
You need to ensure that report and semantic model definitions are saved as individual text files in a folder hierarchy. The solution must minimize development and maintenance effort.
In which file format should you save the changes?
When working with Power BI Desktop and Git integration for version control, report and semantic model definitions should be saved in the PBIP (Power BI Project) format. Unlike the binary PBIX format, a PBIP saves the report and semantic model definitions as individual text files in a folder hierarchy, which can be diffed, merged, and tracked in Repo1 like any other source code. Because Power BI Desktop reads and writes these files natively, PBIP also minimizes development and maintenance effort when the changes are built and deployed to Workspace2 by using Azure Pipelines for CI/CD (continuous integration/continuous deployment).
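Saved as a Power BI Project, the repository contents look roughly like this. The project name is illustrative, and the exact files depend on the Power BI Desktop version and on whether the TMDL semantic model format is enabled:

```
SalesModel.pbip
SalesModel.Report/
    definition.pbir
    report.json
SalesModel.SemanticModel/
    definition.pbism
    model.bim        (or a definition/ folder of .tmdl files)
```

Because each definition is a plain text file, an Azure Pipelines build can validate and deploy them without ever handling an opaque binary artifact.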