Harman Patil (Editor)

Discretization of continuous features


In statistics and machine learning, discretization refers to the process of converting or partitioning continuous attributes, features, or variables into discretized or nominal attributes/features/variables/intervals. This can be useful when creating probability mass functions – formally, in density estimation. It is a form of discretization in general and also of binning, as in making a histogram. Whenever continuous data is discretized, some amount of discretization error is introduced; the goal is to reduce that error to a level considered negligible for the modeling purposes at hand.

Typically, data is discretized into K partitions of equal width (equal-interval binning) or into K partitions each containing roughly the same number of observations (equal-frequency binning).
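The two unsupervised schemes above can be sketched in a few lines of NumPy. This is a minimal illustration, not a reference implementation; the function names are chosen here for clarity.

```python
import numpy as np

def equal_width_bins(x, k):
    """Assign each value of x to one of k intervals of equal length."""
    edges = np.linspace(x.min(), x.max(), k + 1)
    # Digitize against the interior edges only, so indices run 0..k-1
    # and the maximum value falls in the last bin.
    return np.digitize(x, edges[1:-1])

def equal_frequency_bins(x, k):
    """Place edges at quantiles so each bin holds ~len(x)/k points."""
    edges = np.quantile(x, np.linspace(0, 1, k + 1))
    return np.digitize(x, edges[1:-1])
```

Note how the two schemes differ on skewed data: an outlier stretches the equal-width edges, leaving most points in one bin, while quantile-based edges keep the bins balanced.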

Mechanisms for discretizing continuous data include Fayyad & Irani's MDL method, which uses mutual information to recursively define the best bins, as well as CAIM, CACC, Ameva, and many others.
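A minimal sketch of the entropy-based cut selection at the core of such supervised methods: choose the boundary that minimizes the weighted class entropy of the two resulting intervals (equivalently, maximizes information gain). The helper name `best_cut` is chosen here for illustration, and the MDL stopping criterion that Fayyad & Irani apply on top of this step is omitted.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_cut(values, labels):
    """Return the cut point minimizing the weighted entropy of the
    two intervals it induces (i.e. maximizing information gain)."""
    pairs = sorted(zip(values, labels))
    best_point, best_w = None, float("inf")
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # cannot cut between identical values
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        w = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if w < best_w:
            # Midpoint between the two neighboring values is the cut.
            best_point, best_w = (pairs[i - 1][0] + pairs[i][0]) / 2, w
    return best_point
```

The recursive methods apply this step to each resulting interval in turn, stopping when a criterion (such as MDL) judges further splitting unjustified.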

Many machine learning algorithms are known to produce better models when continuous attributes are discretized.

Software

This is a partial list of software that implements the MDL discretization algorithm:

  • discretize4crf, a tool designed to work with popular CRF implementations (C++)
  • mdlp in the R package discretization
  • Discretize in the R package RWeka
References

Discretization of continuous features, Wikipedia

