Modeling Inflectional Complexity in Natural Language Processing

Institution

University of Alberta (http://id.loc.gov/authorities/names/n79058482)

Degree Level

Doctoral

Degree

Doctor of Philosophy

Department

Department of Computing Science

Supervisor / Co-Supervisor and Their Department(s)

Examining Committee Member(s) and Their Department(s)

Citation for Previous Publication

Link to Related Item

Abstract

Inflectional morphology presents numerous problems for traditional computational models, not least of which is an increase in the number of rare types in any corpus. Although few annotated corpora exist for morphologically complex languages, lay speakers of a language can generate data such as inflection tables, which describe patterns that machine learning algorithms can learn. We investigate four inflectional tasks: inflection generation, stemming, lemmatization, and morphological analysis, and demonstrate that each can be accurately modeled using sequential string transduction methods. Furthermore, expert annotation is unnecessary: our inflectional models are learned from crowd-sourced inflection tables.

We first investigate inflection generation: given a dictionary form and a tag representing inflectional information, we produce the corresponding inflected word-forms. We then refine our predictions by referring to the other forms within a paradigm. Experiments on six diverse languages with varying amounts of training data demonstrate that our approach improves the state of the art in predicting inflected word-forms.

We next investigate stemming: the removal of inflectional prefixes and suffixes from a word. Unlike inflection generation, stemming cannot be learned from inflection tables in a fully supervised manner; instead, we exploit paradigmatic regularity to identify stems in an unsupervised manner with over 85% accuracy. Experiments on English, Dutch, and German show that our stemmers substantially outperform rule-based and unsupervised stemmers such as Snowball and Morfessor, and approach the accuracy of a fully supervised system. Furthermore, the generated stems are more consistent than those annotated by experts.

We also use the inflection tables to learn models that generate lemmas from inflected forms. Unlike stemming, lemmatization restores orthographic changes that occur during inflection. These models are more accurate than Morfette and Lemming on most datasets.

Finally, we extend our lemmatization methods to produce complete morphological analyses: given a word, we return the set of lemma-tag pairs that may have generated it. This task is more ambiguous than inflection generation or lemmatization, which typically produce only a small number of outputs; morphological analysis instead requires a complete list of lemma-tag analyses for a given word-form. Experiments on four languages demonstrate that our system has much higher coverage than a hand-engineered FST analyzer, and is more accurate than a state-of-the-art morphological tagger.
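The four tasks can be summarized by their inputs and outputs. The sketch below only illustrates those interfaces over a toy, crowd-sourced-style inflection table; the German entries, the tag strings, and the naive longest-common-substring stemmer are illustrative assumptions and stand in for the learned sequential string transduction models the abstract describes.

# Illustrative sketch only: shows the input/output of the four tasks
# over a toy inflection table. The thesis learns these mappings with
# sequential string transduction; the table entries, tag strings, and
# the naive longest-common-substring stemmer below are assumptions
# made for illustration.

# Toy crowd-sourced-style inflection tables (German verbs).
INFLECTION_TABLES = {
    "machen": {
        "V;IND;PRS;1;SG": "mache",
        "V;IND;PRS;3;SG": "macht",
        "V;PST;PTCP": "gemacht",
    },
    "sehen": {
        "V;IND;PRS;1;SG": "sehe",
        "V;IND;PRS;3;SG": "sieht",    # stem change: seh- -> sieh-
        "V;PST;PTCP": "gesehen",
    },
}

def generate(lemma: str, tag: str) -> str:
    """Inflection generation: dictionary form + tag -> inflected form."""
    return INFLECTION_TABLES[lemma][tag]

def lemmatize(form: str) -> set:
    """Lemmatization: inflected form -> dictionary form(s), restoring
    orthographic changes (e.g. 'sieht' -> 'sehen')."""
    return {lemma for lemma, forms in INFLECTION_TABLES.items()
            if form in forms.values()}

def analyze(form: str) -> set:
    """Morphological analysis: form -> all (lemma, tag) pairs that
    could have generated it."""
    return {(lemma, tag)
            for lemma, forms in INFLECTION_TABLES.items()
            for tag, f in forms.items() if f == form}

def paradigm_stem(lemma: str) -> str:
    """Unsupervised stem induction: the longest substring shared by
    every form of a paradigm, a naive stand-in for the paradigm-based
    approach the abstract describes."""
    forms = list(INFLECTION_TABLES[lemma].values())
    first = forms[0]
    candidates = (first[i:j]
                  for i in range(len(first))
                  for j in range(i + 1, len(first) + 1))
    shared = [c for c in candidates if all(c in f for f in forms)]
    return max(shared, key=len, default="")

if __name__ == "__main__":
    print(generate("sehen", "V;IND;PRS;3;SG"))  # sieht
    print(lemmatize("gesehen"))                 # {'sehen'}
    print(analyze("macht"))                     # {('machen', 'V;IND;PRS;3;SG')}
    print(paradigm_stem("machen"))              # mach

In the thesis itself, learned string transducers trained on such tables replace these lookups, so the mappings generalize to word-forms never seen in the training tables.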

Item Type

Thesis (http://purl.org/coar/resource_type/c_46ec)

Alternative

License

Other License Text / Link

This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.

Language

English (en)

Location

Time Period

Source