Feature Selection by Maximizing Independent Classification Information
Variable and feature selection have become the focus of much research in application areas where datasets with tens or hundreds of thousands of variables are available. The objective of variable selection is three-fold: improving the prediction performance of the predictors, providing faster and more cost-effective predictors, and providing a better understanding of the underlying process that generated the data. Feature selection approaches based on mutual information can be roughly categorized into two groups. The first group minimizes the redundancy among features; the second group maximizes the new classification information that a candidate feature contributes beyond the already-selected subset. A critical issue is that large new information does not imply little redundancy, and vice versa: features with large new information but high redundancy may be selected by the second group, while features with low redundancy but little relevance to the classes may be highly scored by the first group. Existing approaches fail to balance the importance of the two terms. To address this, a new information term, denoted Independent Classification Information (ICI), is proposed in this paper. This strategy helps find predictive features that provide large new classification information and little redundancy.
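The paper defines the exact ICI criterion; as an illustration of the general idea only, the sketch below implements a greedy forward search whose score rewards conditional (new) classification information and penalises pairwise redundancy. This is a minimal sketch under stated assumptions: the scoring rule, the function names, and the histogram-based mutual-information estimator are choices made for this example, not the paper's implementation, and features are assumed to be discretized.

```python
import numpy as np

def mutual_info(x, y):
    """I(X;Y) in nats for discrete arrays, via the joint contingency table."""
    xs, x_idx = np.unique(x, return_inverse=True)
    ys, y_idx = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (x_idx, y_idx), 1)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def cond_mutual_info(x, y, z):
    """I(X;Y|Z) = sum_z p(z) * I(X;Y | Z=z) for discrete arrays."""
    total = 0.0
    for zv in np.unique(z):
        mask = z == zv
        total += mask.mean() * mutual_info(x[mask], y[mask])
    return total

def greedy_select(X, c, k):
    """Greedy forward selection balancing new information against redundancy.

    Illustrative score for a candidate f given the selected set S
    (an assumption for this sketch, not the paper's exact ICI formula):
        J(f) = I(f;c) + sum_{s in S} [ I(f;c|s) - I(f;s) ]
    i.e. reward classification information that is independent of each
    already-selected feature, and penalise redundancy with it.
    """
    n_features = X.shape[1]
    selected, remaining = [], list(range(n_features))
    relevance = [mutual_info(X[:, j], c) for j in range(n_features)]
    for _ in range(k):
        best_j, best_score = None, -np.inf
        for j in remaining:
            score = relevance[j]
            for s in selected:
                score += cond_mutual_info(X[:, j], c, X[:, s])  # new information
                score -= mutual_info(X[:, j], X[:, s])          # redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Toy usage: 200 samples, 6 discretized features, binary class.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 6))
c = (X[:, 0] + X[:, 1] > 2).astype(int)  # class depends on features 0 and 1
print(greedy_select(X, c, k=3))          # features 0 and 1 should rank early
```

Note the design point this makes concrete: scoring by the conditional term alone can admit a redundant feature, and scoring by redundancy alone can admit an irrelevant one; combining both terms in a single score is what the ICI idea formalises.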