My Solution to the Analytics Edge Kaggle Competition

The Kaggle competition for MIT's course The Analytics Edge on edX is now over.

Here is my solution to the competition. It ranked 244th on the final leaderboard, which is by no means great, but still within the top 10%.

My approach was to minimize hand-done feature engineering, and instead use a couple of automated methods to extract features from the text in the dataset. For the modeling process I used the xgboost package, with cross-validation to pick the number of iterations and avoid overfitting the training set.

0 Housekeeping

Set up working directory

Load Libraries

Function Definitions

Function for dummy encoding
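The original function is not included in this post; as an illustration, a minimal dummy-encoding helper (sketched here in Python, though the solution itself was presumably written in R) could look like this:

```python
def dummy_encode(values, levels=None):
    """One-hot encode a list of categorical values into 0/1 columns.

    Each row gets a 1 in the column of its level and 0 elsewhere.
    """
    if levels is None:
        levels = sorted(set(values))
    return [[1 if v == level else 0 for level in levels] for v in values]

# Hypothetical NewsDesk values from the NYT dataset
rows = dummy_encode(["Business", "Culture", "Business", "Other"])
```

With the levels sorted as `["Business", "Culture", "Other"]`, the first row becomes `[1, 0, 0]`.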

1 Data Preparation

Loading Data

Imputing missing categorical data
I used a really simple approach after inspecting the correlations among the categorical variables: filling in missing NewsDesk values mainly from SectionName.
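The idea of filling NewsDesk from SectionName can be sketched as a simple mode lookup — for each SectionName, take the most common non-missing NewsDesk and use it for rows where NewsDesk is blank (an illustrative Python sketch, not the post's actual R code):

```python
from collections import Counter

def build_impute_map(sections, desks):
    """Most common non-missing NewsDesk for each SectionName."""
    by_section = {}
    for sec, desk in zip(sections, desks):
        if desk:  # skip missing ("") values
            by_section.setdefault(sec, Counter())[desk] += 1
    return {sec: c.most_common(1)[0][0] for sec, c in by_section.items()}

def impute_desks(sections, desks):
    """Fill blank NewsDesk entries from the per-section mode."""
    mapping = build_impute_map(sections, desks)
    return [d if d else mapping.get(s, "") for s, d in zip(sections, desks)]
```

Sections with no non-missing NewsDesk at all stay blank, which is then handled by the "" → "Other" step below.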

Remove training entries whose NewsDesk value does not appear in the testing data.
A few articles in the training data have NewsDesk set to National or Sports, but no testing articles do, so I simply dropped them.

Change "" to "Other". This has no effect on modeling; I just don't like to see "".

Log-transform "WordCount"
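Word counts are heavily right-skewed, so a log transform compresses the long tail. A minimal sketch (using log(1 + x) so that a zero word count stays defined — the post doesn't say which variant was used):

```python
import math

def log_word_count(wc):
    """log(1 + WordCount): keeps 0 defined and compresses the long tail."""
    return math.log1p(wc)
```

For example, `log_word_count(0)` is 0.0, and articles of 1,000 vs. 10,000 words end up roughly 2.3 apart instead of 9,000.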

2 Feature Extraction

QR indicates whether there is a question mark in the headline.
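This is a one-line binary feature; an illustrative Python sketch:

```python
def has_question_mark(headline):
    """1 if the headline contains a question mark, else 0."""
    return int("?" in headline)
```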

Extract the hour and day of the week from the publication date.
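A sketch of this extraction in Python, assuming publication dates formatted like "2014-09-01 22:00:09" (the exact format in the competition data may differ):

```python
from datetime import datetime

def time_features(pubdate, fmt="%Y-%m-%d %H:%M:%S"):
    """Return (hour of day, day of week) with Monday = 0."""
    dt = datetime.strptime(pubdate, fmt)
    return dt.hour, dt.weekday()
```

Both outputs are then treated as categorical features rather than numbers, since e.g. hour 23 and hour 0 are adjacent.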

Combine all headlines and abstracts to form a corpus.

Corpus processing

Build the document ~ TF-IDF matrix and the document ~ TF matrix.
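The two matrices differ only in weighting: TF is the raw count of each term per document, and TF-IDF down-weights terms that appear in many documents. A minimal Python sketch over pre-tokenized documents (the post presumably built these with the tm package in R; this uses the common tf × log(N/df) variant and assumes the vocabulary comes from the corpus, so every df is nonzero):

```python
import math
from collections import Counter

def tf_matrix(docs, vocab):
    """Raw term-frequency counts: one row per document, one column per term."""
    counts = [Counter(doc) for doc in docs]
    return [[c[t] for t in vocab] for c in counts]

def tfidf_matrix(docs, vocab):
    """Weight counts by log(N / document frequency)."""
    n = len(docs)
    df = {t: sum(1 for doc in docs if t in doc) for t in vocab}
    return [[count * math.log(n / df[t]) for count, t in zip(row, vocab)]
            for row in tf_matrix(docs, vocab)]
```

Note that a term appearing in every document gets TF-IDF weight 0 under this scheme, which is exactly the point: it carries no discriminating information.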

Get a frequent-terms matrix as features.
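Keeping only frequent terms prunes the huge sparse vocabulary down to a usable feature set (analogous to tm's removeSparseTerms in R). An illustrative sketch with a hypothetical document-frequency threshold:

```python
from collections import Counter

def frequent_terms(docs, min_docs=2):
    """Terms appearing in at least min_docs documents, sorted alphabetically."""
    df = Counter()
    for doc in docs:
        for t in set(doc):  # count each term once per document
            df[t] += 1
    return sorted(t for t, c in df.items() if c >= min_docs)
```

The surviving terms then define the columns of the TF/TF-IDF matrices above.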

Clustering

PCA (Principal Component Analysis)

LDA (Latent Dirichlet Allocation)

Dummy Encoding

3 Model Fitting

Use cross-validation to pick the number of boosting rounds for xgboost.
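The selection logic itself is simple: xgboost's cross-validation (e.g. xgb.cv) reports a mean validation error per boosting round, and you keep the round where that error bottoms out, before it starts rising again from overfitting. A stdlib sketch of just that step (the actual call to xgboost is omitted):

```python
def pick_num_rounds(cv_errors):
    """Given mean validation error per round (round 1 first),
    return the 1-based round with the lowest error."""
    best = min(range(len(cv_errors)), key=cv_errors.__getitem__)
    return best + 1
```

The final model is then retrained on the full training set with exactly that many rounds.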
