An effective noise reduction technique for class imbalance classification
DOI: https://doi.org/10.61841/6hp16v03

Keywords: Data Mining, Knowledge Discovery, Feature Subset, Priority Instance Picking

Abstract
This paper presents an approach to handling noisy instances in data sources using a novel technique: priority instance picking over weak-range feature subsets. The proposed technique identifies noisy instances in a data source earlier than the benchmark C4.5 algorithm, which also removes noisy instances, but only in the final stage, by pruning the constructed decision tree. Experiments on 12 UCI datasets suggest that the proposed approach performs better than the benchmark algorithm.
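The abstract does not detail the mechanics of priority instance picking, so the sketch below is only one plausible reading of the idea: it treats the "weak range" feature subset as the features with the lowest information gain, bins each of them, and flags an instance as noisy when its label disagrees with the bin-majority label on every weak feature, so that noise is removed before tree induction rather than during C4.5's final pruning pass. All function names and the disagreement rule here are assumptions for illustration, not the paper's exact procedure.

```python
# Illustrative sketch only: flag likely-noisy instances BEFORE tree
# induction, instead of relying on C4.5's post-hoc pruning. Interpreting
# "weak range feature subsets" as the lowest-information-gain features is
# an assumption, as is every name below.
import numpy as np

def entropy(y):
    """Shannon entropy of a label vector, in bits."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(x, y, n_bins=4):
    """Gain of one numeric feature after equal-width binning."""
    edges = np.histogram_bin_edges(x, bins=n_bins)
    bins = np.digitize(x, edges[1:-1])
    cond = sum(np.mean(bins == b) * entropy(y[bins == b])
               for b in np.unique(bins))
    return entropy(y) - cond

def flag_noisy(X, y, n_weak=2, n_bins=4):
    """Boolean mask of instances whose label disagrees with the majority
    label of their bin on *every* weak (low-gain) feature."""
    gains = [information_gain(X[:, j], y, n_bins) for j in range(X.shape[1])]
    weak = np.argsort(gains)[:n_weak]   # the "weak range" feature subset
    disagreements = np.zeros(len(y), dtype=int)
    for j in weak:
        edges = np.histogram_bin_edges(X[:, j], bins=n_bins)
        bins = np.digitize(X[:, j], edges[1:-1])
        for b in np.unique(bins):
            idx = np.where(bins == b)[0]
            labels, counts = np.unique(y[idx], return_counts=True)
            disagreements[idx] += (y[idx] != labels[np.argmax(counts)])
    return disagreements == n_weak      # noisy on all weak features

# Usage: drop flagged rows, then train any decision-tree learner on the rest.
# mask = flag_noisy(X, y); X_clean, y_clean = X[~mask], y[~mask]
```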
License

This work is licensed under a Creative Commons Attribution 4.0 International License.
You are free to:
- Share — copy and redistribute the material in any medium or format for any purpose, even commercially.
- Adapt — remix, transform, and build upon the material for any purpose, even commercially.
The licensor cannot revoke these freedoms as long as you follow the license terms.
Under the following terms:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
Notices:
You do not have to comply with the license for elements of the material in the public domain or where your use is permitted by an applicable exception or limitation.
No warranties are given. The license may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.