{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,16]],"date-time":"2026-02-16T18:12:47Z","timestamp":1771265567823,"version":"3.50.1"},"reference-count":23,"publisher":"MDPI AG","issue":"8","license":[{"start":{"date-parts":[[2025,8,8]],"date-time":"2025-08-08T00:00:00Z","timestamp":1754611200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"European Commission: Multi-hazard and Risk-informed System for Enhanced Local and Regional Disaster Risk Management (MEDiate)","award":["101074075"],"award-info":[{"award-number":["101074075"]}]},{"name":"IIASA\u2019s internal IBGF grant","award":["101074075"],"award-info":[{"award-number":["101074075"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Information"],"abstract":"<jats:p>Sentiment analysis is a cornerstone in many contextual data analyses, from opinion mining to public discussion analysis. Gender bias is one of the well-known issues in sentiment analysis models, which can produce different results for the same text depending on the gender it refers to. This gender bias leads to further bias in other text analyses that use such sentiment analysis models. This study reviews existing solutions to reduce gender bias in sentiment analysis and proposes a new method to address this issue. The proposed method offers more practical flexibility as it focuses on sentiment estimation rather than model training. Furthermore, it provides a quantitative measure to investigate the gender bias in sentiment analysis results. The performance of the proposed method across five sentiment analysis models is presented using texts containing gender-specific words. 
The proposed method is applied to a set of social media posts related to Morocco\u2019s 2023 earthquake to estimate the gender-unbiased sentiment of the posts and evaluate the gender-unbiasedness of five different sentiment analysis models in this context. The results show that, although the sentiment estimates from the different models differ considerably, none of the models exhibits a drastically large gender bias.<\/jats:p>","DOI":"10.3390\/info16080679","type":"journal-article","created":{"date-parts":[[2025,8,8]],"date-time":"2025-08-08T13:20:09Z","timestamp":1754659209000},"page":"679","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Identifying and Mitigating Gender Bias in Social Media Sentiment Analysis: A Post-Training Approach on Example of the 2023 Morocco Earthquake"],"prefix":"10.3390","volume":"16","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4109-0690","authenticated-orcid":false,"given":"Mohammad Reza","family":"Yeganegi","sequence":"first","affiliation":[{"name":"Cooperation and Transformative Governance Group, Advancing Systems Analysis Program, International Institute for Applied Systems Analysis (IIASA), 2361 Laxenburg, Austria"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0897-8663","authenticated-orcid":false,"given":"Hossein","family":"Hassani","sequence":"additional","affiliation":[{"name":"Cooperation and Transformative Governance Group, Advancing Systems Analysis Program, International Institute for Applied Systems Analysis (IIASA), 2361 Laxenburg, Austria"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2568-6179","authenticated-orcid":false,"given":"Nadejda","family":"Komendantova","sequence":"additional","affiliation":[{"name":"Cooperation and Transformative Governance Group, Advancing Systems Analysis Program, International Institute for Applied Systems Analysis (IIASA), 2361 Laxenburg, 
Austria"}]}],"member":"1968","published-online":{"date-parts":[[2025,8,8]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Kiritchenko, S., and Mohammad, S. (2018, January 5\u20136). Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems. Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, New Orleans, LA, USA.","DOI":"10.18653\/v1\/S18-2005"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Zhao, J., Wang, T., Yatskar, M., Ordonez, V., and Chang, K.-W. (2018, January 1\u20136). Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), New Orleans, LA, USA.","DOI":"10.18653\/v1\/N18-2003"},{"key":"ref_3","first-page":"149","article-title":"Fairness in Machine Learning: Lessons from Political Philosophy","volume":"Volume 81","author":"Binns","year":"2018","journal-title":"Proceedings of Machine Learning Research, Proceedings of the 1st Conference on Fairness, Accountability and Transparency"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Liang, P.P., Li, I.M., Zheng, E., Lim, Y.C., Salakhutdinov, R., and Morency, L.-P. (2020, January 5\u201310). Towards Debiasing Sentence Representations. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online.","DOI":"10.18653\/v1\/2020.acl-main.488"},{"key":"ref_5","unstructured":"Bolukbasi, T., Chang, K.-W., Zou, J.Y., Saligrama, V., and Kalai, A. (2016). Man Is to Computer Programmer as Woman Is to Homemaker? Debiasing Word Embeddings. 
arXiv."},{"key":"ref_6","first-page":"77","article-title":"Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification","volume":"Volume 81","author":"Buolamwini","year":"2018","journal-title":"Proceedings of Machine Learning Research, Proceedings of the 1st Conference on Fairness, Accountability and Transparency"},{"key":"ref_7","unstructured":"Dastin, J. (Reuters, 2018). Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women, Reuters."},{"key":"ref_8","first-page":"92","article-title":"Analyze, Detect and Remove Gender Stereotyping from Bollywood Movies","volume":"Volume 81","author":"Madaan","year":"2018","journal-title":"Proceedings of Machine Learning Research, Proceedings of the 1st Conference on Fairness, Accountability and Transparency"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Srivastava, B., and Rossi, F. (2018, January 2\u20133). Towards Composable Bias Rating of AI Services. Proceedings of the 2018 AAAI\/ACM Conference on AI, Ethics, and Society, New Orleans, LA, USA.","DOI":"10.1145\/3278721.3278744"},{"key":"ref_10","unstructured":"Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (2017). Optimized Pre-Processing for Discrimination Prevention. Proceedings of the Advances in Neural Information Processing Systems, Curran Associates, Inc."},{"key":"ref_11","unstructured":"Sun, T., Gaut, A., Tang, S., Huang, Y., ElSherief, M., Zhao, J., Mirza, D., Belding, E., Chang, K.-W., and Wang, W.Y. (August, January 28). Mitigating Gender Bias in Natural Language Processing: Literature Review. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Dixon, L., Li, J., Sorensen, J., Thain, N., and Vasserman, L. (2018, January 2\u20133). Measuring and Mitigating Unintended Bias in Text Classification. 
Proceedings of the 2018 AAAI\/ACM Conference on AI, Ethics, and Society, New Orleans, LA, USA.","DOI":"10.1145\/3278721.3278729"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"183","DOI":"10.1126\/science.aal4230","article-title":"Semantics Derived Automatically from Language Corpora Contain Human-like Biases","volume":"356","author":"Caliskan","year":"2017","journal-title":"Science"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Sheng, E., Chang, K.-W., Natarajan, P., and Peng, N. (2019). The Woman Worked as a Babysitter: On Biases in Language Generation. arXiv.","DOI":"10.18653\/v1\/D19-1339"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Zhang, B.H., Lemoine, B., and Mitchell, M. (2018, January 2\u20133). Mitigating Unwanted Biases with Adversarial Learning. Proceedings of the 2018 AAAI\/ACM Conference on AI, Ethics, and Society, New Orleans, LA, USA.","DOI":"10.1145\/3278721.3278779"},{"key":"ref_16","unstructured":"Kaneko, M., and Bollegala, D. (August, January 28). Gender-Preserving Debiasing for Pre-Trained Word Embeddings. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy."},{"key":"ref_17","unstructured":"Guo, Y., Guo, M., Su, J., Yang, Z., Zhu, M., Li, H., Qiu, M., and Liu, S.S. (2024). Bias in Large Language Models: Origin, Evaluation, and Mitigation. arXiv."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"1921","DOI":"10.3390\/make5040093","article-title":"Social Intelligence Mining: Unlocking Insights from X","volume":"5","author":"Hassani","year":"2023","journal-title":"Mach. Learn. Knowl. Extr."},{"key":"ref_19","unstructured":"Rinker, T.W. Sentimentr: Calculate Text Polarity Sentiment; Buffalo, NY, USA. Available online: https:\/\/cran.r-project.org\/web\/packages\/sentimentr\/."},{"key":"ref_20","unstructured":"Loria, S. Textblob Documentation, release 0.15; 2018. 
Available online: https:\/\/textblob.readthedocs.io\/en\/dev\/index.html."},{"key":"ref_21","unstructured":"What is sentiment analysis and opinion mining? Microsoft Azure AI Language\u2014Sentiment Analysis 2025. Available online: https:\/\/learn.microsoft.com\/en-us\/azure\/ai-services\/language-service\/sentiment-opinion-mining\/overview."},{"key":"ref_22","unstructured":"Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv."},{"key":"ref_23","unstructured":"Roehrick, K. Vader: Valence Aware Dictionary and sEntiment Reasoner (VADER). Available online: https:\/\/cran.r-project.org\/web\/packages\/vader\/vader.pdf."}],"container-title":["Information"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2078-2489\/16\/8\/679\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T18:26:38Z","timestamp":1760034398000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2078-2489\/16\/8\/679"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,8,8]]},"references-count":23,"journal-issue":{"issue":"8","published-online":{"date-parts":[[2025,8]]}},"alternative-id":["info16080679"],"URL":"https:\/\/doi.org\/10.3390\/info16080679","relation":{},"ISSN":["2078-2489"],"issn-type":[{"value":"2078-2489","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,8,8]]}}}