Research on Text Mining Method for Users' Implicit Needs in Recommendation Systems under Few-Sample Conditions
DOI: https://doi.org/10.54691/k7d6n225

Keywords: Few-sample Learning, Recommendation Systems, Implicit Needs, Text Mining, Semantic Analysis

Abstract
The core value of a recommendation system lies in accurately matching supply with demand, and mining users' implicit needs (unspoken latent preferences and demands) is key to improving recommendation quality. Traditional mining methods depend on large volumes of annotated data and therefore perform poorly in few-sample scenarios such as new-user cold start and data-scarce niche domains. Addressing this pain point, this paper constructs a phased framework of "data preprocessing, feature enhancement, need recognition, result optimization", designs a text mining scheme that integrates transfer learning, semantic similarity analysis, and meta-learning, and validates its effectiveness on a public few-sample subset of the Yelp Review Polarity Dataset. Experiments show that the method achieves 81.7% precision, 78.9% recall, and an F1 score of 80.3%, outperforming traditional models. The research provides a practical technical path for mining implicit needs in few-sample scenarios, with real value for improving recommendation accuracy on small and medium-sized platforms.
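Since the F1 score is the harmonic mean of precision and recall, the three reported metrics can be cross-checked against each other. A minimal sketch (treating the reported 81.7% figure as precision, which is the quantity F1 is defined over):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported metrics from the experiment: 81.7% precision, 78.9% recall.
f1 = f1_score(0.817, 0.789)
print(round(f1 * 100, 1))  # 80.3, matching the reported F1 value
```

The agreement (2 × 0.817 × 0.789 / 1.606 ≈ 0.803) confirms the reported figures are internally consistent.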
License
Copyright (c) 2026 Frontiers in Humanities and Social Sciences

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.






