Clickbait Identification in Social Media Text using LSTM based Approach

Authors

  • Lakshmi Dheeraj, UG Scholar, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India.
  • Dr. Nelson, Professor, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India.

DOI:

https://doi.org/10.61841/7179a798

Keywords:

Clickbaits, Long Short-Term Memory, Social Media Text, Vectorization

Abstract

Capsule networks, originally designed for images, can also be investigated with text data as input, and their limitations in this setting were taken into account when identifying clickbait in social media text. Several existing systems rely on the Long Short-Term Memory (LSTM) approach to identify clickbait; owing to certain disadvantages of these systems, the problem remains a research focus area. The proposed system uses a three-layered architecture: the first layer takes text as input, the input is then vectorized and categorized, and the final layer produces the output. The layered architecture also keeps response time low, making the system efficient, as shown by the experimental results obtained after implementation.
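The three-layered flow described in the abstract (text input → vectorization → categorization → output) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the LSTM categorization layer is replaced here by a hypothetical keyword-based score so the pipeline runs without a deep-learning framework, and the vocabulary and cue words are invented for the example.

```python
import re

def vectorize(headline, vocab, max_len=8):
    """Layer 1: tokenize the headline and map words to integer ids,
    padding or truncating to a fixed length (out-of-vocabulary -> 0)."""
    tokens = re.findall(r"[a-z']+", headline.lower())
    ids = [vocab.get(t, 0) for t in tokens][:max_len]
    return ids + [0] * (max_len - len(ids))

def categorize(vector, bait_ids):
    """Layer 2 stand-in: a real system would feed the vector to an LSTM;
    here we simply score the fraction of tokens that are clickbait cues."""
    nonzero = [i for i in vector if i != 0]
    hits = sum(1 for i in nonzero if i in bait_ids)
    return hits / max(1, len(nonzero))

def classify(headline, vocab, bait_ids, threshold=0.25):
    """Layer 3: threshold the score into a final label."""
    score = categorize(vectorize(headline, vocab), bait_ids)
    return "clickbait" if score >= threshold else "news"

# Toy vocabulary; the cue words below are assumptions for illustration.
vocab = {w: i + 1 for i, w in enumerate(
    ["you", "won't", "believe", "shocking", "this",
     "study", "finds", "rates", "fall"])}
bait_ids = {vocab[w] for w in ("you", "won't", "believe", "shocking")}

print(classify("You won't believe this shocking trick", vocab, bait_ids))
print(classify("Study finds unemployment rates fall", vocab, bait_ids))
```

In the paper's actual system, the middle layer would be an LSTM operating on the vectorized sequence; the sketch only conveys how the three layers hand data to one another.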


References

[1] A. Srivastava, Anuradha, and D. J. Gupta, "Social Network Analysis: Hardly Easy," 2014 International Conference on Reliability Optimization and Information Technology (ICROIT), Faridabad, 2014, pp. 128–135. doi: 10.1109/ICROIT.2014.6798311.

[2] A. Chakraborty, B. Paranjape, S. Kakarla, and N. Ganguly, "Stop Clickbait: Detecting and preventing clickbaits in online news media," 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), San Francisco, CA, 2016, pp. 9–16. doi: 10.1109/ASONAM.2016.7752207.

[3] Jing Ma, Wei Gao, Prasenjit Mitra, Sejeong Kwon, Bernard J. Jansen, Kam-Fai Wong, and Meeyoung Cha. 2016. Detecting rumors from microblogs with recurrent neural networks. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI’16), Gerhard Brewka (Ed.). AAAI Press 3818-3824.

[4] Kim, Jaeyoung & Jang, Sion & Choi, Sungchul & Park, Eunjeong. (2018). Text classification using capsules.

[5] D. Li and J. Qian, "Text sentiment analysis based on long short-term memory," 2016 First IEEE International Conference on Computer Communication and the Internet (ICCCI), Wuhan, 2016, pp. 471–475. doi: 10.1109/CCI.2016.7778967.

[6] Yann LeCun and Yoshua Bengio. 1998. Convolutional networks for images, speech, and time series. In The Handbook of Brain Theory and Neural Networks, Michael A. Arbib (Ed.). MIT Press, Cambridge, MA, USA 255-258.

[7] M. Young, The Technical Writer’s Handbook. Mill Valley, CA: University Science, 1989.

[8] Sabour, S., Frosst, N., & Hinton, G.E. (2017). Dynamic Routing Between Capsules. NIPS.

[9] Geoffrey E. Hinton, Alex Krizhevsky, and Sida D. Wang. 2011. Transforming auto-encoders. In Proceedings of the 21st International Conference on Artificial Neural Networks, Volume Part I (ICANN'11), Timo Honkela, Włodzisław Duch, Mark Girolami, and Samuel Kaski (Eds.), Vol. Part I. Springer-Verlag, Berlin, Heidelberg, 44–51.

[10] Samujjwal Ghosh and Maunendra Sankar Desarkar. 2018. Class Specific TF-IDF Boosting for Short-Text Classification: Application to Short-Texts Generated During Disasters. In Companion Proceedings of the The Web Conference 2018 (WWW ’18). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland, 1629-1637. DOI: https://doi.org/10.1145/3184558.3191621.

[11] Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to Information Retrieval. Cambridge University Press, New York, NY, USA.

[12] Pennington, Jeffrey, Richard Socher, and Christopher D. Manning. Glove: Global Vectors for Word Representation. EMNLP (2014).

[13] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. Neural Comput. 9, 8 (November 1997), 1735–1780. DOI: https://doi.org/10.1162/neco.1997.9.8.1735.

[14] A. Tjandra, S. Sakti, R. Manurung, M. Adriani, and S. Nakamura, ”Gated Recurrent Neural Tensor Network,” 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, 2016, pp. 448–455. doi: 10.1109/IJCNN.2016.7727233.

[15] Mikolov, T., Chen, K., Corrado, G.S., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space. CoRR, abs/1301.3781.

[16] Hastie, T., Tibshirani, R., & Friedman, J. (2001). The Elements of Statistical Learning. New York, NY, USA: Springer New York Inc.

[17] Tianqi Chen and Carlos Guestrin. 2016. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16). ACM, New York, NY, USA, 785–794. DOI: https://doi.org/10.1145/2939672.2939785.


Published

30.04.2020

How to Cite

Dheeraj, L., & Nelson. (2020). Clickbait Identification in Social Media Text using LSTM based Approach. International Journal of Psychosocial Rehabilitation, 24(2), 5152-5157. https://doi.org/10.61841/7179a798