
Snowflake Exam DSA-C02 Topic 3 Question 41 Discussion

Actual exam question for Snowflake's DSA-C02 exam
Question #: 41
Topic #: 3

Which one is not a feature engineering technique used in the ML data science world?

A) Imputation
B) Binning
C) One hot encoding
D) Statistical

Suggested Answer: D

Feature engineering is the pre-processing step of machine learning that transforms raw data into features which can be used to build a predictive model with machine learning or statistical modelling.

What is a feature?

Generally, all machine learning algorithms take input data and generate an output. The input data remains in tabular form, consisting of rows (instances or observations) and columns (variables or attributes), and these attributes are often known as features. For example, an image is an instance in computer vision, but a line in the image could be a feature. Similarly, in NLP, a document can be an observation, and the word count could be a feature. So we can say a feature is an attribute that impacts a problem or is useful for the problem.

What is Feature Engineering?

Feature engineering is the pre-processing step of machine learning that extracts features from raw data. It helps represent the underlying problem to predictive models in a better way, which in turn improves the model's accuracy on unseen data. A predictive model contains predictor variables and an outcome variable, and the feature engineering process selects the most useful predictor variables for the model.

Some of the popular feature engineering techniques include:

1. Imputation

Feature engineering deals with inappropriate data, missing values, human error, general errors, insufficient data sources, and so on. Missing values within a dataset strongly affect an algorithm's performance, and the 'imputation' technique is used to deal with them. Imputation is responsible for handling such irregularities within the dataset.

For example, rows or columns with a large percentage of missing values can be removed entirely. But at the same time, to maintain the data size, it is often necessary to impute the missing data, which can be done as follows:

For numerical data imputation, a default value can be imputed in a column, or missing values can be filled with the mean or median of the column.

For categorical data imputation, missing values can be replaced with the most frequently occurring value in the column.
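As a minimal sketch (not part of the original question), the pandas snippet below applies both rules: the numeric column is filled with its median and the categorical column with its most frequent value. The dataset and column names are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical dataset with missing numeric and categorical values
df = pd.DataFrame({
    "age": [25, np.nan, 34, 41, np.nan],
    "city": ["NYC", "LA", None, "NYC", "LA"],
})

# Numerical imputation: fill missing ages with the column median
df["age"] = df["age"].fillna(df["age"].median())

# Categorical imputation: fill missing cities with the most frequent value
df["city"] = df["city"].fillna(df["city"].mode()[0])

print(df)
```

The median is often preferred over the mean here because it is less sensitive to the very outliers discussed in the next technique.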

2. Handling Outliers

Outliers are deviated values, or data points observed so far away from the other data points that they badly affect the performance of the model. Outliers can be handled with this feature engineering technique, which first identifies the outliers and then removes them.

Standard deviation can be used to identify outliers: every value lies at some distance from the mean, and if a value lies farther away than a chosen threshold, it can be considered an outlier. The Z-score can also be used to detect outliers.
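A minimal Z-score sketch with NumPy, assuming a one-dimensional numeric feature and an illustrative threshold of 2 standard deviations (the values and threshold are made up):

```python
import numpy as np

# Hypothetical feature values; 250.0 is an obvious outlier
values = np.array([10.0, 12.0, 11.0, 9.0, 13.0, 250.0])

# Z-score: how many standard deviations each point lies from the mean
z_scores = (values - values.mean()) / values.std()

# Points beyond the threshold are treated as outliers and dropped
threshold = 2.0
outliers = values[np.abs(z_scores) > threshold]
kept = values[np.abs(z_scores) <= threshold]

print("outliers:", outliers)  # [250.]
print("kept:", kept)          # [10. 12. 11.  9. 13.]
```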

3. Log transform

Logarithm transformation, or log transform, is one of the most commonly used mathematical techniques in machine learning. Log transform helps handle skewed data, making the distribution closer to normal after transformation. It also reduces the effect of outliers on the data: because magnitude differences are normalized, the model becomes more robust.
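A short sketch of a log transform on a right-skewed feature, using NumPy's log1p (which computes log(1 + x) and therefore tolerates zeros); the income figures are invented for illustration:

```python
import numpy as np
import pandas as pd

# Hypothetical right-skewed feature (e.g., income with one extreme value)
income = pd.Series([20_000, 25_000, 30_000, 45_000, 1_500_000])

# log1p = log(1 + x): compresses large values, safe for zero values
income_log = np.log1p(income)

print(income_log.round(2))  # the extreme value is pulled much closer in
```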

4. Binning

In machine learning, overfitting is one of the main issues that degrades model performance, and it occurs due to a large number of parameters and noisy data. However, one of the popular feature engineering techniques, 'binning', can be used to normalize the noisy data. This process involves segmenting different features into bins.
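As a quick illustration (the bin edges and labels below are hypothetical), pandas offers pd.cut for fixed-edge bins and pd.qcut for quantile-based bins:

```python
import pandas as pd

ages = pd.Series([3, 17, 25, 35, 48, 62, 80])

# Fixed-edge bins with descriptive labels (edges chosen for illustration)
age_group = pd.cut(ages, bins=[0, 18, 35, 60, 100],
                   labels=["child", "young_adult", "adult", "senior"])

# Quantile bins: roughly equal counts per bin, which smooths noisy values
age_quartile = pd.qcut(ages, q=4, labels=["q1", "q2", "q3", "q4"])

print(pd.DataFrame({"age": ages, "group": age_group, "quartile": age_quartile}))
```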

5. Feature Split

As the name suggests, feature split is the process of splitting a feature into two or more parts in order to make new features. This technique helps the algorithms better understand and learn the patterns in the dataset.

The feature splitting process enables the new features to be clustered and binned, which results in extracting useful information and improving the performance of the data models.
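A brief pandas sketch of feature splitting, assuming hypothetical name and timestamp columns: a compound string is split into two parts, and a datetime is decomposed into hour and month.

```python
import pandas as pd

df = pd.DataFrame({
    "full_name": ["Ada Lovelace", "Alan Turing"],
    "event_time": pd.to_datetime(["2023-01-15 08:30", "2023-06-02 17:45"]),
})

# Split a compound string feature into two new features
df[["first_name", "last_name"]] = df["full_name"].str.split(" ", n=1, expand=True)

# Split a datetime feature into parts a model can learn from separately
df["event_hour"] = df["event_time"].dt.hour
df["event_month"] = df["event_time"].dt.month

print(df)
```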

6. One hot encoding

One hot encoding is a popular encoding technique in machine learning. It converts categorical data into a form that machine learning algorithms can easily understand and use to make good predictions. It enables the grouping of categorical data without losing any information.
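A minimal one-hot-encoding sketch using pandas.get_dummies; the 'color' column is hypothetical:

```python
import pandas as pd

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# Each category becomes its own 0/1 indicator column, so no artificial
# ordering is imposed on the categories and no information is lost
encoded = pd.get_dummies(df, columns=["color"], prefix="color")

print(encoded)
```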


Contribute your Thoughts:

Colette
23 days ago
Haha, this question is a real brain-teaser! I bet the professor is trying to trip us up with these tricky options.
Antonette
2 days ago
A) Imputation
Tracie
1 month ago
B) Binning is definitely a feature engineering technique, so that can't be the answer. I'm leaning towards A) Imputation.
Whitley
7 days ago
D) Statistical methods are often used in feature engineering as well.
Earlean
13 days ago
C) One hot encoding is also a popular technique used in ML data science.
Bok
15 days ago
A) Imputation is actually a common feature engineering technique.
Kayleigh
1 month ago
Hmm, I'm not sure. I'll have to think about this one a bit more. Maybe I'll ask the professor for a hint during office hours.
Marguerita
15 days ago
I think the answer is D) Statistical.
Lynelle
1 month ago
D) Statistical is not a feature engineering technique in the ML data science world. It's more of a data analysis method.
Glory
1 day ago
D) Statistical
Ilene
16 days ago
C) One hot encoding
Catina
18 days ago
B) Binning
Daryl
1 month ago
A) Imputation
Daren
2 months ago
Actually, both D) Statistical and A) Imputation are used in feature engineering, so the correct answer is B) Binning.
Lachelle
2 months ago
I think option A, Imputation, is the correct answer. Feature engineering techniques usually involve transforming or creating new features from existing ones.
C) One hot encoding is also not a feature engineering technique.
Leota
3 days ago
I agree, Imputation is not a feature engineering technique.
Lavonne
8 days ago
D) Statistical
Artie
19 days ago
C) One hot encoding
Janessa
22 days ago
B) Binning
Doyle
1 month ago
A) Imputation
Selma
2 months ago
I disagree, I believe A) Imputation is not a feature engineering technique.
Daren
2 months ago
I think D) Statistical is not a feature engineering technique.
