Studying Limitations of Generative Transformer based models for Aspect Based Sentiment Analysis
Abstract
Companies can only progress if they understand how their customers feel about their products and services. With companies maintaining an online presence, and with third-party review platforms like Yelp widely available, it becomes critical to sift through online reviews. Analysing millions of online reviews across various platforms is not a trivial task. Aspect-Based Sentiment Analysis (ABSA) is an NLP task useful for automating such analysis. ABSA solutions have historically used discriminative models, but recent advances in the field use generative transformer models (such as T5 and BART). Generative ABSA models treat the ABSA task as a text generation problem. We study the latest generative ABSA models and discuss some of their limitations. We find that state-of-the-art generative ABSA models perform well in standard ABSA settings. However, they struggle in certain real-life scenarios, such as cross-lingual settings and reviews that require coreference resolution. We propose solutions for these limitations and justify why they work.
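To make the generative framing concrete, the following is a minimal illustrative sketch (not the thesis's actual method) of how ABSA can be cast as text generation: gold (aspect, sentiment) pairs are linearized into a target string that a seq2seq model such as T5 or BART would be trained to emit, and the model's output string is parsed back into pairs. The helper names `linearize_targets` and `parse_generated` are assumptions for illustration only.

```python
import re

def linearize_targets(pairs):
    """Turn gold (aspect, sentiment) pairs into the target string a
    generative model (e.g. T5/BART) would be trained to generate.
    This linearization format is an illustrative assumption."""
    return "; ".join(f"({aspect}, {sentiment})" for aspect, sentiment in pairs)

def parse_generated(text):
    """Recover (aspect, sentiment) pairs from a generated string."""
    return re.findall(r"\(([^,()]+),\s*([^()]+)\)", text)

# One hypothetical training instance for a seq2seq ABSA model:
review = "The pasta was delicious but the service was slow."
gold = [("pasta", "positive"), ("service", "negative")]
target = linearize_targets(gold)
# target == "(pasta, positive); (service, negative)"
```

At inference time, the generated string is parsed back: `parse_generated(target)` recovers the original pairs, which is what makes generation-based evaluation against gold annotations possible.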
