Databricks Certified Associate Developer for Apache Spark 3.0 Exam

Certification Provider: Databricks
Exam Name: Databricks Certified Associate Developer for Apache Spark 3.0
Duration: 120 Minutes
Number of questions in our database: 180
Exam Version: Jan. 17, 2022
Databricks Certified Associate Developer for Apache Spark 3.0 Exam Official Topics:
  • Topic 1: Navigate the Spark UI and describe how the catalyst optimizer, partitioning, and caching affect Spark's execution performance
  • Topic 2: Apply the Structured Streaming API to perform analytics on streaming data/ Define the major components of Spark architecture and execution hierarchy
  • Topic 3: Describe how DataFrames are built, transformed, and evaluated in Spark/ Apply the DataFrame API to explore, preprocess, join, and ingest data in Spark

Free Databricks Certified Associate Developer for Apache Spark 3.0 Exam Actual Questions

The questions for Databricks Certified Associate Developer for Apache Spark 3.0 were last updated on Jan. 17, 2022.

Question #1

The code block displayed below contains an error. The code block should configure Spark to split data in 20 parts when exchanging data between executors for joins or aggregations. Find the error.

Code block:

spark.conf.set(spark.sql.shuffle.partitions, 20)

Correct Answer: C

Correct code block:

spark.conf.set('spark.sql.shuffle.partitions', 20)

The code block expresses the option incorrectly.

Correct! The option should be expressed as a string.

The code block sets the wrong option.

No, spark.sql.shuffle.partitions is the correct option for the use case in the question.

The code block sets the incorrect number of parts.

Wrong, the code block correctly states 20 parts.

The code block uses the wrong command for setting an option.

No, in PySpark spark.conf.set() is the correct command for setting an option.

The code block is missing a parameter.

Incorrect, spark.conf.set() takes two parameters.
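
For reference, a minimal sketch (assuming an active SparkSession named spark) that sets the option and reads it back to confirm it took effect:

# configure shuffles for joins and aggregations to use 20 partitions
spark.conf.set('spark.sql.shuffle.partitions', 20)
# returns the value as a string, e.g. '20'
print(spark.conf.get('spark.sql.shuffle.partitions'))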

More info: Configuration - Spark 3.1.2 Documentation


Question #2

The code block displayed below contains multiple errors. The code block should remove column transactionDate from DataFrame transactionsDf and add a column transactionTimestamp in which dates that are expressed as strings in column transactionDate of DataFrame transactionsDf are converted into unix timestamps. Find the errors.

Sample of DataFrame transactionsDf:

+-------------+---------+-----+-------+---------+----+----------------+
|transactionId|predError|value|storeId|productId|   f| transactionDate|
+-------------+---------+-----+-------+---------+----+----------------+
|            1|        3|    4|     25|        1|null|2020-04-26 15:35|
|            2|        6|    7|      2|        2|null|2020-04-13 22:01|
|            3|        3| null|     25|        3|null|2020-04-02 10:53|
+-------------+---------+-----+-------+---------+----+----------------+

Code block:

transactionsDf = transactionsDf.drop("transactionDate")
transactionsDf["transactionTimestamp"] = unix_timestamp("transactionDate", "yyyy-MM-dd")

Correct Answer: E

This question requires a lot of thinking to get right. For solving it, you may take advantage of the digital notepad that is provided to you during the test. You have probably seen that the code block includes multiple errors. In the test, you are usually confronted with a code block that only contains a single error. However, since you are practicing here, this challenging multi-error question will make it easier for you to deal with single-error questions in the real exam.

You can clearly see that column transactionDate should be dropped only after transactionTimestamp has been written. This is because to generate column transactionTimestamp, Spark needs to read the values from column transactionDate.

Values in column transactionDate in the original transactionsDf DataFrame look like 2020-04-26 15:35. So, to convert those correctly, you would have to pass yyyy-MM-dd HH:mm. In other words: the string indicating the date format should be adjusted.

While you might be tempted to change unix_timestamp() to to_unixtime() (in line with the from_unixtime() operator), this function does not exist in Spark. unix_timestamp() is the correct operator to use here.

Also, there is no DataFrame.withColumnReplaced() operator. A similar operator that exists is DataFrame.withColumnRenamed().

Whether you use col() or not is irrelevant with unix_timestamp() - the command is fine with both.

Finally, you cannot assign a column like transactionsDf['columnName'] = ... in Spark. This is Pandas syntax (Pandas is a popular Python package for data analysis), but it is not supported in Spark. So, you need to use Spark's DataFrame.withColumn() syntax instead.
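
Putting these fixes together, a corrected version of the code block could look like the following sketch (it assumes transactionsDf is available and imports unix_timestamp from pyspark.sql.functions):

from pyspark.sql.functions import unix_timestamp

# add the timestamp column first, then drop the original string column
transactionsDf = transactionsDf.withColumn(
    "transactionTimestamp",
    unix_timestamp("transactionDate", "yyyy-MM-dd HH:mm")
).drop("transactionDate")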

More info: pyspark.sql.functions.unix_timestamp - PySpark 3.1.2 documentation

Static notebook | Dynamic notebook: See test 3, Question 28 (Databricks import instructions)


Question #3

Which of the following code blocks returns a new DataFrame in which column attributes of DataFrame itemsDf is renamed to feature0 and column supplier to feature1?

Correct Answer: D

itemsDf.withColumnRenamed('attributes', 'feature0').withColumnRenamed('supplier', 'feature1')

Correct! Spark's DataFrame.withColumnRenamed syntax makes it relatively easy to change the name of a column.

itemsDf.withColumnRenamed(attributes, feature0).withColumnRenamed(supplier, feature1)

Incorrect. In this code block, the Python interpreter will try to use attributes and the other column names as variables. Needless to say, they are undefined, and as a result the block will not run.

itemsDf.withColumnRenamed(col('attributes'), col('feature0'), col('supplier'), col('feature1'))

Wrong. The DataFrame.withColumnRenamed() operator takes exactly two string arguments. So, in this answer, both the use of col() and the use of four arguments are wrong.

itemsDf.withColumnRenamed('attributes', 'feature0')

itemsDf.withColumnRenamed('supplier', 'feature1')

No. In this answer, the returned DataFrame will only have column supplier renamed, since the result of the first line is not written back to itemsDf.

itemsDf.withColumn('attributes', 'feature0').withColumn('supplier', 'feature1')

Incorrect. While withColumn works for adding and naming new columns, you cannot use it to rename existing columns.
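
To make the point about the two-line option concrete, here is a small sketch (assuming itemsDf has columns attributes and supplier) showing that withColumnRenamed returns a new DataFrame instead of modifying itemsDf in place:

renamedDf = itemsDf.withColumnRenamed('attributes', 'feature0') \
                   .withColumnRenamed('supplier', 'feature1')
# renamedDf carries feature0 and feature1, while itemsDf keeps its original column names
print(renamedDf.columns)
print(itemsDf.columns)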

More info: pyspark.sql.DataFrame.withColumnRenamed - PySpark 3.1.2 documentation

Static notebook | Dynamic notebook: See test 3, Question 29 (Databricks import instructions)


Question #4

The code block displayed below contains multiple errors. The code block should return a DataFrame that contains only columns transactionId, predError, value and storeId of DataFrame transactionsDf. Find the errors.

Code block:

transactionsDf.select([col(productId), col(f)])

Sample of transactionsDf:

+-------------+---------+-----+-------+---------+----+
|transactionId|predError|value|storeId|productId|   f|
+-------------+---------+-----+-------+---------+----+
|            1|        3|    4|     25|        1|null|
|            2|        6|    7|      2|        2|null|
|            3|        3| null|     25|        3|null|
+-------------+---------+-----+-------+---------+----+

Correct Answer: B

Correct code block: transactionsDf.drop('productId', 'f')

This question requires a lot of thinking to get right. For solving it, you may take advantage of the digital notepad that is provided to you during the test. You have probably seen that the code block includes multiple errors. In the test, you are usually confronted with a code block that only contains a single error. However, since you are practicing here, this challenging multi-error question will make it easier for you to deal with single-error questions in the real exam.

The select operator should be replaced by a drop operator, the column names should be listed directly as arguments to the operator and not as a list, and all column names should be expressed as strings without being wrapped in a col() operator.

Correct! Here, you need to figure out the many, many things that are wrong with the initial code block. While the question can be solved by using a select statement, a drop statement, given the answer options, is the correct one. Then, you can read in the documentation that drop does not take a list as an argument, but just the column names that should be dropped. Finally, the column names should be expressed as strings and not as Python variable names as in the original code block.
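
To illustrate, both of the following sketches return only columns transactionId, predError, value and storeId (assuming transactionsDf has the columns shown in the sample above):

# the intended solution: drop the two unwanted columns, passing their names as strings
transactionsDf.drop('productId', 'f')

# an equivalent select, listing the columns to keep
transactionsDf.select('transactionId', 'predError', 'value', 'storeId')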

The column names should be listed directly as arguments to the operator and not as a list.

Incorrect. While this is a good first step and part of the correct solution (see above), this modification is insufficient to solve the question.

The column names should be listed directly as arguments to the operator and not as a list and, following the pattern of how column names are expressed in the code block, columns productId and f should be replaced by transactionId, predError, value and storeId.

Wrong. If you use the same pattern as in the original code block (col(productId), col(f)), you are still making a mistake. col(productId) will trigger Python to search for the content of a variable named productId instead of telling Spark to use the column productId - for that, you need to express it as a string.

The select operator should be replaced by a drop operator, the column names should be listed directly as arguments to the operator and not as a list, and all col() operators should be removed.

No. This still leaves you with Python trying to interpret the column names as Python variables (see above).

The select operator should be replaced by a drop operator.

Wrong, this is not enough to solve the question. If you do this, you will still face problems since you are passing a Python list to drop and the column names are still interpreted as Python variables (see above).

More info: pyspark.sql.DataFrame.drop - PySpark 3.1.2 documentation

Static notebook | Dynamic notebook: See test 3, Question 30 (Databricks import instructions)


Question #5

Which of the following code blocks returns a DataFrame with approximately 1,000 rows from the 10,000-row DataFrame itemsDf, without any duplicates, returning the same rows even if the code block is run twice?

Correct Answer: B

itemsDf.sample(fraction=0.1, seed=87238)

Correct. If itemsDf has 10,000 rows, this code block returns about 1,000, since DataFrame.sample() is never guaranteed to return an exact number of rows. To ensure you are not returning duplicates, you should leave the withReplacement parameter at False, which is the default. Since the question specifies that the same rows should be returned even if the code block is run twice, you need to specify a seed. The number passed as the seed does not matter as long as it is an integer.
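
As a minimal sketch of why the seed matters (assuming itemsDf already exists), running the same call twice with the same seed returns the same rows:

sample1 = itemsDf.sample(fraction=0.1, seed=87238)
sample2 = itemsDf.sample(fraction=0.1, seed=87238)
# both samples contain the same roughly 1,000 rows: the fixed seed makes the draw
# reproducible, and withReplacement defaults to False, so there are no duplicates
assert sample1.subtract(sample2).count() == 0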

itemsDf.sample(withReplacement=True, fraction=0.1, seed=23536)

Incorrect. While this code block fulfills almost all requirements, it may return duplicates. This is because withReplacement is set to True.

Here is how to understand what replacement means: Imagine you have a bucket of 10,000 numbered balls and you need to take 1,000 balls at random from the bucket (similar to the problem in the question). Now, if you took those balls with replacement, you would take a ball, note its number, and put it back into the bucket, meaning the next time you take a ball from the bucket there would be a chance you could take the exact same ball again. If you took the balls without replacement, you would leave the ball outside the bucket and not put it back in as you take the next 999 balls.

itemsDf.sample(fraction=1000, seed=98263)

Wrong. The fraction parameter needs to have a value between 0 and 1. In this case, it should be 0.1, since 1,000/10,000 = 0.1.

itemsDf.sampleBy('row', fractions={0: 0.1}, seed=82371)

No, DataFrame.sampleBy() is meant for stratified sampling. This means that based on the values in a column in a DataFrame, you can draw a certain fraction of rows containing those values from the DataFrame (more details linked below). In the scenario at hand, sampleBy is not the right operator to use because you do not have any information about any column that the sampling should depend on.
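
For contrast, here is a sketch of how DataFrame.sampleBy() is typically used for stratified sampling; the DataFrame name someDf and the column category are purely illustrative:

# draw 10% of the rows where category == 'a' and 50% of the rows where category == 'b';
# rows with other category values are not sampled at all
stratified = someDf.sampleBy('category', fractions={'a': 0.1, 'b': 0.5}, seed=0)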

itemsDf.sample(fraction=0.1)

Incorrect. This code block checks all the boxes except that it does not ensure that when you run it a second time, the exact same rows will be returned. In order to achieve this, you would have to specify a seed.

More info:

- pyspark.sql.DataFrame.sample - PySpark 3.1.2 documentation
- pyspark.sql.DataFrame.sampleBy - PySpark 3.1.2 documentation
- Types of Samplings in PySpark 3. The explanations of the sampling... | by Pinar Ersoy | Towards Data Science


