Open Access

BIO Web Conf., Volume 97, 2024
Fifth International Scientific Conference of Alkafeel University (ISCKU 2024)

Article Number: 00121
Number of page(s): 7
DOI: https://doi.org/10.1051/bioconf/20249700121
Published online: 05 April 2024