I have attached ufo_sightings_large.csv.
Take a look at the UFO dataset’s column types using the dtypes attribute, then convert the columns to the proper types. For example, the date column can be transformed into the datetime type. That will make our feature engineering efforts easier later on.
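A minimal sketch of the type conversion, using a tiny hypothetical frame in place of the real CSV (the column names `date` and `seconds` are assumptions about the dataset):

```python
import pandas as pd

# Toy stand-in for the UFO data; both columns load as strings (object dtype)
ufo = pd.DataFrame({"date": ["11/3/2011 19:21", "10/10/2017 20:15"],
                    "seconds": ["1209600.0", "900.0"]})
print(ufo.dtypes)

# Convert to proper types: datetime for date, float for seconds
ufo["date"] = pd.to_datetime(ufo["date"])
ufo["seconds"] = ufo["seconds"].astype(float)
print(ufo.dtypes)
```

With the real file you would read it first with `pd.read_csv("ufo_sightings_large.csv")` and apply the same conversions.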
Let’s remove some of the rows where certain columns have missing values.
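One way to drop rows with missing values in specific columns is `dropna(subset=...)`; the columns used here are illustrative assumptions:

```python
import pandas as pd

# Toy frame with missing values in a couple of (assumed) columns
ufo = pd.DataFrame({"length_of_time": ["2 weeks", None, "30 minutes"],
                    "state": ["nm", "tx", None],
                    "type": ["light", "circle", "disk"]})

# Keep only rows where both columns are present
ufo_no_missing = ufo.dropna(subset=["length_of_time", "state"])
print(ufo_no_missing.shape)  # only the first row survives
```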
The length_of_time column in the UFO dataset is a text field that has the number of minutes within the string. Here, you’ll extract that number from that text field using regular expressions.
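A sketch of the extraction with `re.search`, pulling the first run of digits out of each string (the sample strings are made up to mimic the field):

```python
import re
import pandas as pd

ufo = pd.DataFrame({"length_of_time": ["2 weeks", "30 minutes", "about 5 minutes"]})

def return_minutes(time_string):
    # Search the text for one or more consecutive digits
    num = re.search(r"\d+", str(time_string))
    if num is not None:
        return int(num.group(0))

ufo["minutes"] = ufo["length_of_time"].apply(return_minutes)
print(ufo[["length_of_time", "minutes"]])
```

Note this simple pattern grabs the first number regardless of its unit ("2 weeks" yields 2), so in practice you may want to filter or convert non-minute units first.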
In this section, you’ll investigate the variance of columns in the UFO dataset to determine which features should be standardized. You can log-normalize the high-variance column.
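A sketch of checking variance and log-normalizing, assuming `seconds` turns out to be the high-variance column (the values below are invented):

```python
import numpy as np
import pandas as pd

ufo = pd.DataFrame({"seconds": [1209600.0, 900.0, 60.0, 300.0],
                    "minutes": [20160.0, 15.0, 1.0, 5.0]})

# Compare the column variances
print(ufo[["seconds", "minutes"]].var())

# seconds has the far larger variance, so take its natural log
ufo["seconds_log"] = np.log(ufo["seconds"])
print(ufo["seconds_log"].var())  # variance drops dramatically
```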
There are a couple of columns in the UFO dataset that need to be encoded before they can be modeled with scikit-learn. You’ll do that transformation here, using both binary and one-hot encoding methods.
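A sketch of both encodings on hypothetical `country` and `type` columns (the specific columns and the us/non-us binary split are assumptions):

```python
import pandas as pd

ufo = pd.DataFrame({"country": ["us", "ca", "us"],
                    "type": ["light", "circle", "disk"]})

# Binary encoding: 1 if the sighting is in the US, 0 otherwise
ufo["country_enc"] = ufo["country"].apply(lambda val: 1 if val == "us" else 0)

# One-hot encoding: one 0/1 column per category of type
type_set = pd.get_dummies(ufo["type"])
ufo = pd.concat([ufo, type_set], axis=1)
print(ufo.head())
```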
Let’s transform the desc column in the UFO dataset into tf/idf vectors, since there’s likely something we can learn from this field.
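A minimal tf/idf sketch with scikit-learn's `TfidfVectorizer`; the example descriptions are placeholders for the real `desc` text:

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

ufo = pd.DataFrame({"desc": ["red light in the sky",
                             "a circle of lights hovering",
                             "bright disk moving fast"]})

# Fit the vectorizer and transform desc into a sparse tf/idf matrix
vec = TfidfVectorizer()
desc_tfidf = vec.fit_transform(ufo["desc"])
print(desc_tfidf.shape)  # (rows, vocabulary size)
```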
Let’s get rid of some of the unnecessary features.
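A sketch of dropping columns with `DataFrame.drop`; which features count as unnecessary here is an assumption (e.g. `seconds` is redundant once `seconds_log` exists, and raw location text may add little after encoding):

```python
import pandas as pd

ufo = pd.DataFrame({"city": ["abq"], "state": ["nm"], "seconds": [900.0],
                    "seconds_log": [6.8], "type": ["light"]})

# Hypothetical list of columns to remove before modeling
to_drop = ["city", "state", "seconds"]
ufo_dropped = ufo.drop(columns=to_drop)
print(ufo_dropped.columns.tolist())
```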
In [9]:
X = ufo.drop(["type"], axis=1)
y = ufo["type"].astype(str)
In [1]:
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Split X and y into training and test sets
train_X, test_X, train_y, test_y = train_test_split(X, y, random_state=42)

knn = KNeighborsClassifier(n_neighbors=5)
# Fit knn to the training set
knn.fit(train_X, train_y)
# Print the accuracy of knn on the test set
print(knn.score(test_X, test_y))