This paper investigates the effectiveness of selected tools for sentiment analysis, focusing on both dedicated software libraries (NLTK, Pattern, TextBlob) and large language models (ChatGPT and Gemini). The evaluation was conducted in two stages: sentiment analysis of 30 synthetic opinions of varying linguistic complexity, and analysis of 5 sets of real user reviews collected from the web. The results show that the large language models, although not explicitly designed for sentiment analysis, achieved the highest accuracy, with ChatGPT consistently producing the lowest deviation from human ratings. In contrast, the software libraries showed greater variation, especially in the presence of complex linguistic structures. These findings highlight the potential of large language models in sentiment analysis tasks and underscore their robustness in interpreting nuanced language.
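As background to the library-based stage of the evaluation, the sketch below illustrates the lexicon-based polarity scoring that tools such as TextBlob and Pattern build on: each known word contributes a polarity value, with simple negation handling. The mini-lexicon and scoring rules here are hypothetical simplifications; the real libraries use much larger lexicons and richer rules (intensifiers, part-of-speech context), which is exactly where the complex linguistic structures mentioned above cause variation.

```python
# Hypothetical mini-lexicon; real libraries ship lexicons with
# thousands of entries and per-sense polarity values.
LEXICON = {"great": 1.0, "good": 0.5, "bad": -0.5, "terrible": -1.0}

def polarity(text: str) -> float:
    """Average polarity of known words in text, in [-1.0, 1.0].

    A preceding "not" flips the sign of the next sentiment word,
    a crude stand-in for the negation handling in real tools.
    """
    scores = []
    negate = False
    for raw in text.lower().split():
        tok = raw.strip(".,!?")
        if tok == "not":
            negate = True
            continue
        value = LEXICON.get(tok)
        if value is not None:
            scores.append(-value if negate else value)
        negate = False
    return sum(scores) / len(scores) if scores else 0.0
```

A document with no lexicon words scores a neutral 0.0, which is one reason such libraries can diverge sharply from human ratings on nuanced or sarcastic text.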