Welcome to Pass4Success


Microsoft DP-800 Exam Questions

Exam Name: Developing AI-Enabled Database Solutions
Exam Code: DP-800
Related Certification(s): Microsoft SQL AI Developer Associate Certification
Certification Provider: Microsoft
Number of DP-800 practice questions in our database: 61 (updated: Mar. 30, 2026)
Expected DP-800 Exam Topics, as suggested by Microsoft:
  • Topic 1: Design and develop database solutions: This domain covers designing and building database objects such as tables, views, functions, stored procedures, and triggers, along with writing advanced T-SQL code and leveraging AI-assisted tools like GitHub Copilot and MCP for SQL development.
  • Topic 2: Secure, optimize, and deploy database solutions: This domain focuses on implementing data security measures like encryption, masking, and row-level security, optimizing query performance, managing CI/CD pipelines using SQL Database Projects, and integrating SQL solutions with Azure services including Data API builder and monitoring tools.
  • Topic 3: Implement AI capabilities in database solutions: This domain covers designing and managing external AI models and embeddings, implementing full-text, semantic vector, and hybrid search strategies, and building retrieval-augmented generation (RAG) solutions that connect database outputs with language models.

Free Microsoft DP-800 Actual Exam Questions

Note: Premium Questions for DP-800 were last updated on Mar. 30, 2026 (see below)

Question #1

You have an Azure SQL database that contains tables named dbo.ProductDocs and dbo.ProductDocsEmbeddings. dbo.ProductDocs contains product documentation and the following columns:

* DocId (int)

* Title (nvarchar(200))

* Body (nvarchar(max))

* LastModified (datetime2)

The documentation is edited throughout the day. dbo.ProductDocsEmbeddings contains the following columns:

* DocId (int)

* ChunkOrder (int)

* ChunkText (nvarchar(max))

* Embedding (vector(1536))

The current embedding pipeline runs once per night.

You need to ensure that embeddings are updated every time the underlying documentation content changes. The solution must not require a nightly batch process.

What should you include in the solution?

Correct Answer: D

The requirement is to ensure embeddings are updated every time the underlying content changes without relying on a nightly batch job. The right design is to enable change tracking on the source table so an external process can identify which rows changed and regenerate embeddings only for those rows. Microsoft documents that change detection mechanisms are used to pick up new and updated rows incrementally, which is the right pattern when you need near-continuous refresh instead of full nightly rebuilds.

This is better than:

A. Fixed-size chunking, which affects chunk strategy but not change detection.

B. A smaller embedding model, which affects model cost/latency but not update triggering.

C. Table triggers, which would push embedding-maintenance logic directly into write operations; this is generally not the best design for AI-processing pipelines. The question specifically asks for a solution that replaces the nightly batch requirement, not one that performs heavyweight work inline during every transaction.
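As a rough illustration of the change-tracking pattern (a minimal sketch using the table names from the question; the retention settings and the sync-version bookkeeping are assumptions, not part of the question, and change tracking requires a primary key on the tracked table):

```sql
-- Enable change tracking on the database and on the source table (sketch).
ALTER DATABASE CURRENT
    SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.ProductDocs ENABLE CHANGE_TRACKING;

-- An external embedding process then polls for changed rows and regenerates
-- embeddings only for those DocId values. @last_sync is a sync version the
-- pipeline persists between runs (hypothetical bookkeeping, not shown here).
DECLARE @last_sync bigint = 0;

SELECT ct.DocId, ct.SYS_CHANGE_OPERATION
FROM CHANGETABLE(CHANGES dbo.ProductDocs, @last_sync) AS ct;
```

The polling process can run continuously or on a short schedule, which is what replaces the nightly batch without pushing embedding work into the write path.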


Question #2

You have an SDK-style SQL database project stored in a Git repository. The project targets an Azure SQL database.

The CI build fails with unresolved reference errors when the project references system objects.

You need to update the SQL database project to ensure that dotnet build validates successfully by including the correct system objects in the database model for Azure SQL Database.

Solution: Add the Microsoft.SqlServer.Dacpacs.Master NuGet package to the project.

Does this meet the goal?

Correct Answer: B

The package named Microsoft.SqlServer.Dacpacs.Master is the generic master system DACPAC package, but the question requires the correct system objects for Azure SQL Database. Microsoft's system-objects documentation distinguishes platform-specific system references, and for Azure SQL Database the correct package is the Azure-specific master DACPAC, not the generic master package.

So adding Microsoft.SqlServer.Dacpacs.Master does not meet the goal for an Azure SQL Database-targeted SDK-style project. The expected package is the Azure-specific one.
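For reference, a sketch of what the corrected SDK-style project file might contain (the Azure-specific package name and the version range shown here are illustrative; confirm both against Microsoft's system-objects documentation for SQL database projects):

```xml
<!-- Sketch: .sqlproj fragment for a project targeting Azure SQL Database -->
<ItemGroup>
  <!-- Azure-specific master system DACPAC, instead of the generic
       Microsoft.SqlServer.Dacpacs.Master package -->
  <PackageReference Include="Microsoft.SqlServer.Dacpacs.Azure.Master" Version="160.*" />
</ItemGroup>
```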


Question #3

You have a GitHub Actions workflow that builds and deploys an Azure SQL database. The schema is stored in a GitHub repository as an SDK-style SQL database project.

Following a code review, you discover that you need to generate a report that shows whether the production schema has diverged from the model in source control.

Which action should you add to the pipeline?

Correct Answer: A

Microsoft documents that DriftReport creates an XML report showing changes that have been made to the registered database since it was last registered. That is the action intended to detect whether the production schema has diverged from the expected model baseline in your deployment workflow.

This is different from DeployReport, which shows the changes that would be made by a publish action. In other words:

DriftReport answers: Has the deployed database drifted from the registered state/model?

DeployReport answers: What changes would be applied if I published now?

The other options are not the right fit:

Extract creates a DACPAC from an existing database, not a drift analysis report.

Script generates a deployment script, not a schema-drift report.

So to generate a report that shows whether production has diverged from the model in source control, add:

SqlPackage.exe /Action:DriftReport
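Expanded into a fuller pipeline step, the call might look like the following (server name, database name, and output path are placeholders; DriftReport also assumes the database was registered as a data-tier application at deployment time):

```shell
# Sketch: drift-report step in a deployment workflow (placeholder names)
SqlPackage /Action:DriftReport \
  /TargetServerName:"myserver.database.windows.net" \
  /TargetDatabaseName:"ProductionDb" \
  /OutputPath:"drift-report.xml"
```

The resulting XML file can then be published as a build artifact for review.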


Question #4

You have a GitHub Codespaces environment that has GitHub Copilot Chat installed and is connected to a SQL database in Microsoft Fabric named DB1. DB1 contains tables named Sales.Orders and Sales.Customers.

You use GitHub Copilot Chat in the context of DB1.

A company policy prohibits sharing customer Personally Identifiable Information (PII), secrets, and query result sets with any AI service.

You need to use GitHub Copilot Chat to write and review Transact-SQL code for a new stored procedure that will join Sales.Orders to Sales.Customers and return customer names and email addresses. The solution must not share the actual data in the tables with GitHub Copilot Chat.

What should you do?

Correct Answer: D

The correct answer is D because the policy explicitly prohibits sharing customer PII, secrets, and query result sets with any AI service. The safe way to use GitHub Copilot Chat here is to provide only schema-level information such as table names, column names, relationships, and the required procedure behavior, without sharing actual table contents or result sets. That lets Copilot help generate and review the Transact-SQL while avoiding disclosure of customer data. This is consistent with Microsoft and GitHub guidance that content provided in prompts is what the AI can use, so avoiding real data in the prompt is the appropriate control.

The other options violate the requirement:

A pastes real rows containing email addresses, which is direct PII disclosure.

B shares actual query result sets, which the policy forbids.

C provides the connection string so Copilot can validate against the database, which is inappropriate because it exposes connection details and could enable access beyond schema-only assistance.

So the correct approach is to ask Copilot to generate the stored procedure using only the schema and requirements, not real customer data.
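The kind of output this schema-only approach yields might look like the following (a sketch: the column names CustomerName, EmailAddress, and CustomerID are hypothetical, as only the table names appear in the question):

```sql
-- Sketch: procedure generated from schema-level context only, with no
-- real rows or result sets ever shared with the AI service.
CREATE OR ALTER PROCEDURE Sales.usp_GetCustomerOrderContacts
AS
BEGIN
    SET NOCOUNT ON;

    SELECT DISTINCT c.CustomerName, c.EmailAddress
    FROM Sales.Orders AS o
    INNER JOIN Sales.Customers AS c
        ON o.CustomerID = c.CustomerID;
END;
```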


Question #5

Your team is developing an Azure SQL database solution from a locally cloned GitHub repository by using Microsoft Visual Studio Code and GitHub Copilot Chat.

You need to disable the GitHub Copilot repository-level instructions for yourself without affecting other users.

What should you do?

Correct Answer: A

GitHub documents that repository custom instructions for Copilot Chat can be disabled for your own use in the editor settings, and that doing so does not affect other users. In VS Code, this is controlled through settings related to instruction files, where you can disable the use of repository instruction files for your own environment.

The other options are incorrect:

B is not a documented mechanism for disabling repository-level Copilot instructions.

C would remove the repository instruction file itself and therefore affect everyone using that repository, which violates the requirement.
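In VS Code this is a per-user editor setting; a sketch of the relevant user-level settings.json entry follows (the exact setting key may vary by Copilot extension version, so treat this as an assumption to verify):

```json
{
  "github.copilot.chat.codeGeneration.useInstructionFiles": false
}
```

Because this lives in your user settings rather than the repository, disabling it affects only your own environment.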


