Feature Selection Ordered By Correlation - FSOC

Arturo Heredia Márquez, Adolfo Guzmán Arenas, Gilberto Lorenzo Martínez Luna

Abstract


Data sets have grown in volume and in number of features, yielding longer training and classification times. A large number of features usually means that not all of them are highly correlated with the target class, and that significant correlation may exist between certain pairs of features. Properly removing such “useless” features saves time and effort in data collection and ensures faster learning and classification, with little or no reduction in classification accuracy. However, algorithms that select adequate features tend to be computationally expensive. FSOC achieves this reduction by selecting a subset of the original features; it refrains from reformatting, transforming, or combining existing features into a new set of lower cardinality, since artificially created features mask the relevance of the original ones for decision making and reduce the clarity of the resulting model.
This article presents a new filter-type method, called FSOC (Feature Selection Ordered by Correlation), to select relevant features at a small computational cost. FSOC uses correlations between features and the merit heuristic to identify a subset of "good" features. FSOC's reduced computational cost arises from an ordering of the features. To test it, a statistical analysis was performed on a sample comprising 36 data sets from several repositories, some with millions of objects. The classification percentages (efficiency) of FSOC were similar to those of other feature selection algorithms. Nevertheless, FSOC was up to 42 times faster than other algorithms (CFS, FCBF and ECMBF) at obtaining the selected features.
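To make the idea concrete, the sketch below shows a correlation-ordered, merit-guided forward selection in the spirit described above. It is only an illustrative approximation, not the authors' published FSOC procedure: the function names (merit, fsoc_like_selection), the use of Pearson correlation as the correlation measure, and the greedy "keep the feature only if merit improves" stopping rule are assumptions for this example.

import numpy as np

def merit(corr_fc, corr_ff, subset):
    """CFS-style merit of a subset S of k features:
    k * mean|r_fc| / sqrt(k + k*(k-1) * mean|r_ff|)."""
    k = len(subset)
    if k == 0:
        return 0.0
    r_cf = np.mean([corr_fc[i] for i in subset])
    if k == 1:
        r_ff = 0.0
    else:
        r_ff = np.mean([corr_ff[i, j] for i in subset for j in subset if i < j])
    return (k * r_cf) / np.sqrt(k + k * (k - 1) * r_ff)

def fsoc_like_selection(X, y):
    """Greedy forward selection over features pre-ordered by |correlation with the class|.
    Illustrative sketch only; not the exact FSOC algorithm."""
    n_features = X.shape[1]
    # Absolute Pearson correlation of each feature with the class.
    corr_fc = np.array([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in range(n_features)])
    # Absolute feature-feature correlation matrix.
    corr_ff = np.abs(np.corrcoef(X, rowvar=False))
    order = np.argsort(-corr_fc)          # most class-correlated features first
    selected, best = [], 0.0
    for f in order:
        m = merit(corr_fc, corr_ff, selected + [f])
        if m > best:                       # keep the feature only if merit improves
            selected, best = selected + [f], m
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 500)
    informative = y + 0.3 * rng.normal(size=500)
    redundant = informative + 0.05 * rng.normal(size=500)   # highly correlated with 'informative'
    X = np.column_stack([informative, redundant, rng.normal(size=(500, 3))])
    print(fsoc_like_selection(X, y))       # indices of the selected feature subset

Because the candidates are visited in decreasing order of class correlation, the most promising features are examined first and the greedy pass over the list can stop adding features early, which is one plausible source of the speed-up that the abstract attributes to the ordering.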

Keywords


Feature selection; Data mining; Feature reduction; Pre-processing; Data analysis
