id
stringlengths 8
23
| title
stringlengths 24
120
| abstract
stringlengths 323
2.05k
⌀ | year
int64 2.02k
2.02k
| url
stringlengths 34
49
| venues
stringclasses 9
values | reading_list
stringlengths 4.41k
62.7k
|
|---|---|---|---|---|---|---|
C16-3001
|
Compositional Distributional Models of Meaning
|
Compositional distributional models of meaning (CDMs) provide a function that produces a vectorial representation for a phrase or a sentence by composing the vectors of its words. Being the natural evolution of the traditional and well-studied distributional models at the word level, CDMs are steadily evolving to a popular and active area of NLP. This COLING 2016 tutorial aims at providing a concise introduction to this emerging field, presenting the different classes of CDMs and the various issues related to them in sufficient detail.
| 2,016
|
https://aclanthology.org/C16-3001/
|
COLING
|
[{'id': 8360910, 'paperId': '37efe2ef1b9d27cc598361a8013ec888a6f7c4d8', 'title': 'Nouns are Vectors, Adjectives are Matrices: Representing Adjective-Noun Constructions in Semantic Space', 'authors': [{'authorId': '145283199', 'name': 'Marco Baroni'}, {'authorId': '2713535', 'name': 'Roberto Zamparelli'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'We propose an approach to adjective-noun composition (AN) for corpus-based distributional semantics that, building on insights from theoretical linguistics, represents nouns as vectors and adjectives as data-induced (linear) functions (encoded as matrices) over nominal vectors. Our model significantly outperforms the rivals on the task of reconstructing AN vectors not seen in training. A small post-hoc analysis further suggests that, when the model-generated AN vector is not similar to the corpus-observed AN vector, this is due to anomalies in the latter. We show moreover that our approach provides two novel ways to represent adjective meanings, alternative to its representation via corpus-based co-occurrence vectors, both outperforming the latter in an adjective clustering task.', 'year': 2010, 'in_acl': True, 'citationCount': 542, 'section': None, 'subsection': None}, {'id': 5917203, 'paperId': '228d9e4b69926594fd26080f4cfaa9ecfca44eb3', 'title': 'Mathematical Foundations for a Compositional Distributional Model of Meaning', 'authors': [{'authorId': '3326718', 'name': 'B. Coecke'}, {'authorId': '1784777', 'name': 'M. Sadrzadeh'}, {'authorId': '144523372', 'name': 'S. Clark'}], 'venue': 'arXiv.org', 'abstract': "We propose a mathematical framework for a unification of the distributional theory of meaning in terms of vector space models, and a compositional theory for grammatical types, for which we rely on the algebra of Pregroups, introduced by Lambek. This mathematical framework enables us to compute the meaning of a well-typed sentence from the meanings of its constituents. Concretely, the type reductions of Pregroups are `lifted' to morphisms in a category, a procedure that transforms meanings of constituents into a meaning of the (well-typed) whole. Importantly, meanings of whole sentences live in a single space, independent of the grammatical structure of the sentence. Hence the inner-product can be used to compare meanings of arbitrary sentences, as it is for comparing the meanings of words in the distributional model. The mathematical structure we employ admits a purely diagrammatic calculus which exposes how the information flows between the words in a sentence in order to make up the meaning of the whole sentence. A variation of our `categorical model' which involves constraining the scalars of the vector spaces to the semiring of Booleans results in a Montague-style Boolean-valued semantics.", 'year': 2010, 'in_acl': False, 'citationCount': 547, 'section': None, 'subsection': None}, {'id': 11691908, 'paperId': '167d0e6bbb4199764773e7fb77882ce64e586e89', 'title': 'A Unified Sentence Space for Categorical Distributional-Compositional Semantics: Theory and Experiments', 'authors': [{'authorId': '2940780', 'name': 'Dimitri Kartsaklis'}, {'authorId': '1784777', 'name': 'M. Sadrzadeh'}, {'authorId': '50419262', 'name': 'S. Pulman'}], 'venue': 'International Conference on Computational Linguistics', 'abstract': 'This short paper summarizes a faithful implementation of the categorical framework of Coecke et al. (2010), the aim of which is to provide compositionality in distributional models of lexical semantics. Based on Frobenius Algebras, our method enable us to (1) have a unifying meaning space for phrases and sentences of different structure and word vectors, (2) stay faithful to the linguistic types suggested by the underlying type-logic, and (3) perform the concrete computations in lower dimensions by reducing the space complexity. We experiment with two different parameters of the model and apply the setting to a verb disambiguation and a term/definition classification task with promising results.', 'year': 2012, 'in_acl': True, 'citationCount': 75, 'section': None, 'subsection': None}, {'id': 26901423, 'paperId': '745d86adca56ec50761591733e157f84cfb19671', 'title': 'Composition in Distributional Models of Semantics', 'authors': [{'authorId': '34902160', 'name': 'Jeff Mitchell'}, {'authorId': '1747893', 'name': 'Mirella Lapata'}], 'venue': 'Cognitive Sciences', 'abstract': 'Vector-based models of word meaning have become increasingly popular in cognitive science. The appeal of these models lies in their ability to represent meaning simply by using distributional information under the assumption that words occurring within similar contexts are semantically similar. Despite their widespread use, vector-based models are typically directed at representing words in isolation, and methods for constructing representations for phrases or sentences have received little attention in the literature. This is in marked contrast to experimental evidence (e.g., in sentential priming) suggesting that semantic similarity is more complex than simply a relation between isolated words. This article proposes a framework for representing the meaning of word combinations in vector space. Central to our approach is vector composition, which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models that we evaluate empirically on a phrase similarity task.', 'year': 2010, 'in_acl': False, 'citationCount': 989, 'section': None, 'subsection': None}, {'id': 806709, 'paperId': '27e38351e48fe4b7da2775bf94341738bc4da07e', 'title': 'Semantic Compositionality through Recursive Matrix-Vector Spaces', 'authors': [{'authorId': '2166511', 'name': 'R. Socher'}, {'authorId': '2570381', 'name': 'Brody Huval'}, {'authorId': '144783904', 'name': 'Christopher D. Manning'}, {'authorId': '34699434', 'name': 'A. Ng'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Single-word vector space models have been very successful at learning lexical information. However, they cannot capture the compositional meaning of longer phrases, preventing them from a deeper understanding of language. We introduce a recursive neural network (RNN) model that learns compositional vector representations for phrases and sentences of arbitrary syntactic type and length. Our model assigns a vector and a matrix to every node in a parse tree: the vector captures the inherent meaning of the constituent, while the matrix captures how it changes the meaning of neighboring words or phrases. This matrix-vector RNN can learn the meaning of operators in propositional logic and natural language. The model obtains state of the art performance on three different experiments: predicting fine-grained sentiment distributions of adverb-adjective pairs; classifying sentiment labels of movie reviews and classifying semantic relationships such as cause-effect or topic-message between nouns using the syntactic path between them.', 'year': 2012, 'in_acl': True, 'citationCount': 1380, 'section': None, 'subsection': None}, {'id': 1500900, 'paperId': '3a0e788268fafb23ab20da0e98bb578b06830f7d', 'title': 'From Frequency to Meaning: Vector Space Models of Semantics', 'authors': [{'authorId': '1689647', 'name': 'Peter D. Turney'}, {'authorId': '1990190', 'name': 'Patrick Pantel'}], 'venue': 'Journal of Artificial Intelligence Research', 'abstract': 'Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.', 'year': 2010, 'in_acl': False, 'citationCount': 2942, 'section': None, 'subsection': None}]
|
P19-4004
|
Computational Analysis of Political Texts: Bridging Research Efforts Across Communities
|
In the last twenty years, political scientists started adopting and developing natural language processing (NLP) methods more actively in order to exploit text as an additional source of data in their analyses. Over the last decade the usage of computational methods for analysis of political texts has drastically expanded in scope, allowing for a sustained growth of the text-as-data community in political science. In political science, NLP methods have been extensively used for a number of analyses types and tasks, including inferring policy position of actors from textual evidence, detecting topics in political texts, and analyzing stylistic aspects of political texts (e.g., assessing the role of language ambiguity in framing the political agenda). Just like in numerous other domains, much of the work on computational analysis of political texts has been enabled and facilitated by the development of resources such as, the topically coded electoral programmes (e.g., the Manifesto Corpus) or topically coded legislative texts (e.g., the Comparative Agenda Project). Political scientists created resources and used available NLP methods to process textual data largely in isolation from the NLP community. At the same time, NLP researchers addressed closely related tasks such as election prediction, ideology classification, and stance detection. In other words, these two communities have been largely agnostic of one another, with NLP researchers mostly unaware of interesting applications in political science and political scientists not applying cutting-edge NLP methodology to their problems. The main goal of this tutorial is to systematize and analyze the body of research work on political texts from both communities. We aim to provide a gentle, all-round introduction to methods and tasks related to computational analysis of political texts. Our vision is to bring the two research communities closer to each other and contribute to faster and more significant developments in this interdisciplinary research area.
| 2,019
|
https://aclanthology.org/P19-4004/
|
ACL
|
[{'id': 16196219, 'paperId': 'b9921fb4d1448058642897797e77bdaf8f444404', 'title': 'Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts', 'authors': [{'authorId': '2361828', 'name': 'Justin Grimmer'}, {'authorId': '28924497', 'name': 'Brandon M Stewart'}], 'venue': 'Political Analysis', 'abstract': 'Politics and political conflict often occur in the written and spoken word. Scholars have long recognized this, but the massive costs of analyzing even moderately sized collections of texts have hindered their use in political science research. Here lies the promise of automated text analysis: it substantially reduces the costs of analyzing large collections of text. We provide a guide to this exciting new area of research and show how, in many instances, the methods have already obtained part of their promise. But there are pitfalls to using automated methods—they are no substitute for careful thought and close reading and require extensive and problem-specific validation. We survey a wide range of new methods, provide guidance on how to validate the output of the models, and clarify misconceptions and errors in the literature. To conclude, we argue that for automated text methods to become a standard tool for political scientists, methodologists must contribute new methods and new methods of validation.', 'year': 2013, 'in_acl': False, 'citationCount': 2486, 'section': None, 'subsection': None}, {'id': 10274824, 'paperId': '7d9cc63dfbd34acf271e3a2c922ea1c07fb2f482', 'title': 'Extracting Policy Positions from Political Texts Using Words as Data', 'authors': [{'authorId': '143758665', 'name': 'M. Laver'}, {'authorId': '26916605', 'name': 'K. Benoit'}, {'authorId': '80157164', 'name': 'John Garry'}], 'venue': 'American Political Science Review', 'abstract': 'We present a new way of extracting policy positions from political texts that treats texts not as discourses to be understood and interpreted but rather, as data in the form of words. We compare this approach to previous methods of text analysis and use it to replicate published estimates of the policy positions of political parties in Britain and Ireland, on both economic and social policy dimensions. We “export” the method to a non-English-language environment, analyzing the policy positions of German parties, including the PDS as it entered the former West German party system. Finally, we extend its application beyond the analysis of party manifestos, to the estimation of political positions from legislative speeches. Our “language-blind” word scoring technique successfully replicates published policy estimates without the substantial costs of time and labor that these require. Furthermore, unlike in any previous method for extracting policy positions from political texts, we provide uncertainty measures for our estimates, allowing analysts to make informed judgments of the extent to which differences between two estimated policy positions can be viewed as significant or merely as products of measurement error.We thank Raj Chari, Gary King, Michael McDonald, Gail McElroy, and three anonymous reviewers for comments on drafts of this paper.', 'year': 2003, 'in_acl': False, 'citationCount': 1277, 'section': None, 'subsection': None}, {'id': 17026162, 'paperId': '5109c519cd4442041a5d3915ca305eba6d68ee10', 'title': 'A Scaling Model for Estimating Time-Series Party Positions from Texts', 'authors': [{'authorId': '70665044', 'name': 'Jonathan B. Slapin'}, {'authorId': '145688599', 'name': 'Sven-Oliver Proksch'}], 'venue': '', 'abstract': 'However, existing text-based methods face challenges in producing valid and reliable time-series data. This article proposes a scaling algorithm called WORDFISH to estimate policy positions based on word frequencies in texts. The technique allows researchers to locate parties in one or multiple elections. We demonstrate the algorithm by estimating the positions of German political parties from 1990 to 2005 using word frequencies in party manifestos. The extracted positions reflect changes in the party system more accurately than existing time-series estimates. In addition, the method allows researchers to examine which words are important for placing parties on the left and on the right. We find that words with strong political connotations are the best discriminators between parties. Finally, a series of robustness checks demonstrate that the estimated positions are insensitive to distributional assumptions and document selection.', 'year': 2007, 'in_acl': False, 'citationCount': 679, 'section': None, 'subsection': None}]
|
2020.acl-tutorials.1
|
Interpretability and Analysis in Neural NLP
|
While deep learning has transformed the natural language processing (NLP) field and impacted the larger computational linguistics community, the rise of neural networks is stained by their opaque nature: It is challenging to interpret the inner workings of neural network models, and explicate their behavior. Therefore, in the last few years, an increasingly large body of work has been devoted to the analysis and interpretation of neural network models in NLP. This body of work is so far lacking a common framework and methodology. Moreover, approaching the analysis of modern neural networks can be difficult for newcomers to the field. This tutorial aims to fill this gap and introduce the nascent field of interpretability and analysis of neural networks in NLP. The tutorial will cover the main lines of analysis work, such as structural analyses using probing classifiers, behavioral studies and test suites, and interactive visualizations. We will highlight not only the most commonly applied analysis methods, but also the specific limitations and shortcomings of current approaches, in order to inform participants where to focus future efforts.
| 2,020
|
https://aclanthology.org/2020.acl-tutorials.1
|
ACL
|
[{'id': 56657817, 'paperId': '668f42a4d4094f0a66d402a16087e14269b31a1f', 'title': 'Analysis Methods in Neural Language Processing: A Survey', 'authors': [{'authorId': '2083259', 'name': 'Yonatan Belinkov'}, {'authorId': '145898106', 'name': 'James R. Glass'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'The field of natural language processing has seen impressive progress in recent years, with neural network models replacing many of the traditional systems. A plethora of new models have been proposed, many of which are thought to be opaque compared to their feature-rich counterparts. This has led researchers to analyze, interpret, and evaluate neural networks in novel and more fine-grained ways. In this survey paper, we review analysis methods in neural language processing, categorize them according to prominent research trends, highlight existing limitations, and point to potential directions for future work.', 'year': 2018, 'in_acl': True, 'citationCount': 513, 'section': None, 'subsection': None}, {'id': 5013113, 'paperId': 'f170fed9acd71bd5feb20901c7ec1fe395f3fae5', 'title': "Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure", 'authors': [{'authorId': '3449411', 'name': 'Dieuwke Hupkes'}, {'authorId': '1787819', 'name': 'Willem H. Zuidema'}], 'venue': 'Journal of Artificial Intelligence Research', 'abstract': "We investigate how neural networks can learn and process languages with hierarchical, compositional semantics. To this end, we define the artificial task of processing nested arithmetic expressions, and study whether different types of neural networks can learn to compute their meaning. We find that recursive neural networks can find a generalising solution to this problem, and we visualise this solution by breaking it up in three steps: project, sum and squash. As a next step, we investigate recurrent neural networks, and show that a gated recurrent unit, that processes its input incrementally, also performs very well on this task. To develop an understanding of what the recurrent network encodes, visualisation techniques alone do not suffice. Therefore, we develop an approach where we formulate and test multiple hypotheses on the information encoded and processed by the network. For each hypothesis, we derive predictions about features of the hidden state representations at each time step, and train 'diagnostic classifiers' to test those predictions. Our results indicate that the networks follow a strategy similar to our hypothesised 'cumulative strategy', which explains the high accuracy of the network on novel expressions, the generalisation to longer expressions than seen in training, and the mild deterioration with increasing length. This is turn shows that diagnostic classifiers can be a useful technique for opening up the black box of neural networks. We argue that diagnostic classification, unlike most visualisation techniques, does scale up from small networks in a toy domain, to larger and deeper recurrent networks dealing with real-life data, and may therefore contribute to a better understanding of the internal dynamics of current state-of-the-art models in natural language processing.", 'year': 2017, 'in_acl': False, 'citationCount': 235, 'section': None, 'subsection': None}, {'id': 108300988, 'paperId': 'e2587eddd57bc4ba286d91b27c185083f16f40ee', 'title': 'What do you learn from context? Probing for sentence structure in contextualized word representations', 'authors': [{'authorId': '6117577', 'name': 'Ian Tenney'}, {'authorId': '2465658', 'name': 'Patrick Xia'}, {'authorId': '2108381400', 'name': 'Berlin Chen'}, {'authorId': '144906624', 'name': 'Alex Wang'}, {'authorId': '48926630', 'name': 'Adam Poliak'}, {'authorId': '145534175', 'name': 'R. Thomas McCoy'}, {'authorId': '8756748', 'name': 'Najoung Kim'}, {'authorId': '7536576', 'name': 'Benjamin Van Durme'}, {'authorId': '3644767', 'name': 'Samuel R. Bowman'}, {'authorId': '143790066', 'name': 'Dipanjan Das'}, {'authorId': '2949185', 'name': 'Ellie Pavlick'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Contextualized representation models such as ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2018) have recently achieved state-of-the-art results on a diverse array of downstream NLP tasks. Building on recent token-level probing work, we introduce a novel edge probing task design and construct a broad suite of sub-sentence tasks derived from the traditional structured NLP pipeline. We probe word-level contextual representations from four recent models and investigate how they encode sentence structure across a range of syntactic, semantic, local, and long-range phenomena. We find that existing models trained on language modeling and translation produce strong representations for syntactic phenomena, but only offer comparably small improvements on semantic tasks over a non-contextual baseline.', 'year': 2019, 'in_acl': False, 'citationCount': 808, 'section': None, 'subsection': None}, {'id': 14091946, 'paperId': '3aa52436575cf6768a0a1a476601825f6a62e58f', 'title': 'Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies', 'authors': [{'authorId': '2467508', 'name': 'Tal Linzen'}, {'authorId': '2202008', 'name': 'Emmanuel Dupoux'}, {'authorId': '79775260', 'name': 'Yoav Goldberg'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'The success of long short-term memory (LSTM) neural networks in language processing is typically attributed to their ability to capture long-distance statistical regularities. Linguistic regularities are often sensitive to syntactic structure; can such dependencies be captured by LSTMs, which do not have explicit structural representations? We begin addressing this question using number agreement in English subject-verb dependencies. We probe the architecture’s grammatical competence both using training objectives with an explicit grammatical target (number prediction, grammaticality judgments) and using language models. In the strongly supervised settings, the LSTM achieved very high overall accuracy (less than 1% errors), but errors increased when sequential and structural information conflicted. The frequency of such errors rose sharply in the language-modeling setting. We conclude that LSTMs can capture a non-trivial amount of grammatical structure given targeted supervision, but stronger architectures may be required to further reduce errors; furthermore, the language modeling signal is insufficient for capturing syntax-sensitive dependencies, and should be supplemented with more direct supervision if such dependencies need to be captured.', 'year': 2016, 'in_acl': True, 'citationCount': 860, 'section': None, 'subsection': None}, {'id': 49363457, 'paperId': '843c6b0a35b02e2c3d74bb545e74bc655e16e992', 'title': 'Assessing Composition in Sentence Vector Representations', 'authors': [{'authorId': '37907837', 'name': 'Allyson Ettinger'}, {'authorId': '143718836', 'name': 'Ahmed Elgohary'}, {'authorId': '143843506', 'name': 'C. Phillips'}, {'authorId': '1680292', 'name': 'P. Resnik'}], 'venue': 'International Conference on Computational Linguistics', 'abstract': 'An important component of achieving language understanding is mastering the composition of sentence meaning, but an immediate challenge to solving this problem is the opacity of sentence vector representations produced by current neural sentence composition models. We present a method to address this challenge, developing tasks that directly target compositional meaning information in sentence vector representations with a high degree of precision and control. To enable the creation of these controlled tasks, we introduce a specialized sentence generation system that produces large, annotated sentence sets meeting specified syntactic, semantic and lexical constraints. We describe the details of the method and generation system, and then present results of experiments applying our method to probe for compositional information in embeddings from a number of existing sentence composition models. We find that the method is able to extract useful information about the differing capacities of these models, and we discuss the implications of our results with respect to these systems’ capturing of sentence information. We make available for public use the datasets used for these experiments, as well as the generation system.', 'year': 2018, 'in_acl': True, 'citationCount': 78, 'section': None, 'subsection': None}, {'id': 11212020, 'paperId': 'fa72afa9b2cbc8f0d7b05d52548906610ffbb9c5', 'title': 'Neural Machine Translation by Jointly Learning to Align and Translate', 'authors': [{'authorId': '3335364', 'name': 'Dzmitry Bahdanau'}, {'authorId': '1979489', 'name': 'Kyunghyun Cho'}, {'authorId': '1751762', 'name': 'Yoshua Bengio'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.', 'year': 2014, 'in_acl': False, 'citationCount': 26130, 'section': None, 'subsection': None}, {'id': 13017314, 'paperId': '4c41104e871bccbd56494350a71d77a7f1da5bb0', 'title': 'Understanding Neural Networks through Representation Erasure', 'authors': [{'authorId': '49298465', 'name': 'Jiwei Li'}, {'authorId': '145768639', 'name': 'Will Monroe'}, {'authorId': '1746807', 'name': 'Dan Jurafsky'}], 'venue': 'arXiv.org', 'abstract': "While neural networks have been successfully applied to many natural language processing tasks, they come at the cost of interpretability. In this paper, we propose a general methodology to analyze and interpret decisions from a neural model by observing the effects on the model of erasing various parts of the representation, such as input word-vector dimensions, intermediate hidden units, or input words. We present several approaches to analyzing the effects of such erasure, from computing the relative difference in evaluation metrics, to using reinforcement learning to erase the minimum set of input words in order to flip a neural model's decision. In a comprehensive analysis of multiple NLP tasks, including linguistic feature classification, sentence-level sentiment analysis, and document level sentiment aspect prediction, we show that the proposed methodology not only offers clear explanations about neural model decisions, but also provides a way to conduct error analysis on neural models.", 'year': 2016, 'in_acl': False, 'citationCount': 536, 'section': None, 'subsection': None}, {'id': 3085700, 'paperId': '63c4114bd373dd0fcfe0d25a605b353c62be2995', 'title': 'How Grammatical is Character-level Neural Machine Translation? Assessing MT Quality with Contrastive Translation Pairs', 'authors': [{'authorId': '2082372', 'name': 'Rico Sennrich'}], 'venue': 'Conference of the European Chapter of the Association for Computational Linguistics', 'abstract': 'Analysing translation quality in regards to specific linguistic phenomena has historically been difficult and time-consuming. Neural machine translation has the attractive property that it can produce scores for arbitrary translations, and we propose a novel method to assess how well NMT systems model specific linguistic phenomena such as agreement over long distances, the production of novel words, and the faithful translation of polarity. The core idea is that we measure whether a reference translation is more probable under a NMT model than a contrastive translation which introduces a specific type of error. We present LingEval97, a large-scale data set of 97000 contrastive translation pairs based on the WMT English->German translation task, with errors automatically created with simple rules. We report results for a number of systems, and find that recently introduced character-level NMT systems perform better at transliteration than models with byte-pair encoding (BPE) segmentation, but perform more poorly at morphosyntactic agreement, and translating discontiguous units of meaning.', 'year': 2016, 'in_acl': True, 'citationCount': 161, 'section': None, 'subsection': None}, {'id': 17362994, 'paperId': '78aa018ee7d52360e15d103390ea1cdb3a0beb41', 'title': 'Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples', 'authors': [{'authorId': '1967156', 'name': 'Nicolas Papernot'}, {'authorId': '144061974', 'name': 'P. Mcdaniel'}, {'authorId': '153440022', 'name': 'I. Goodfellow'}], 'venue': 'arXiv.org', 'abstract': 'Many machine learning models are vulnerable to adversarial examples: inputs that are specially crafted to cause a machine learning model to produce an incorrect output. Adversarial examples that affect one model often affect another model, even if the two models have different architectures or were trained on different training sets, so long as both models were trained to perform the same task. An attacker may therefore train their own substitute model, craft adversarial examples against the substitute, and transfer them to a victim model, with very little information about the victim. Recent work has further developed a technique that uses the victim model as an oracle to label a synthetic training set for the substitute, so the attacker need not even collect a training set to mount the attack. We extend these recent techniques using reservoir sampling to greatly enhance the efficiency of the training procedure for the substitute model. We introduce new transferability attacks between previously unexplored (substitute, victim) pairs of machine learning model classes, most notably SVMs and decision trees. We demonstrate our attacks on two commercial machine learning classification systems from Amazon (96.19% misclassification rate) and Google (88.94%) using only 800 queries of the victim model, thereby showing that existing machine learning approaches are in general vulnerable to systematic black-box attacks regardless of their structure.', 'year': 2016, 'in_acl': False, 'citationCount': 1657, 'section': None, 'subsection': None}, {'id': 21698802, 'paperId': '514e7fb769950dbe96eb519c88ca17e04dc829f6', 'title': 'HotFlip: White-Box Adversarial Examples for Text Classification', 'authors': [{'authorId': '39043512', 'name': 'J. Ebrahimi'}, {'authorId': '36290866', 'name': 'Anyi Rao'}, {'authorId': '3021654', 'name': 'Daniel Lowd'}, {'authorId': '1721158', 'name': 'D. Dou'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We propose an efficient method to generate white-box adversarial examples to trick a character-level neural classifier. We find that only a few manipulations are needed to greatly decrease the accuracy. Our method relies on an atomic flip operation, which swaps one token for another, based on the gradients of the one-hot input vectors. Due to efficiency of our method, we can perform adversarial training which makes the model more robust to attacks at test time. With the use of a few semantics-preserving constraints, we demonstrate that HotFlip can be adapted to attack a word-level classifier as well.', 'year': 2017, 'in_acl': True, 'citationCount': 956, 'section': None, 'subsection': None}]
|
2020.acl-tutorials.2
|
Integrating Ethics into the NLP Curriculum
|
To raise awareness among future NLP practitioners and prevent inertia in the field, we need to place ethics in the curriculum for all NLP students—not as an elective, but as a core part of their education. Our goal in this tutorial is to empower NLP researchers and practitioners with tools and resources to teach others about how to ethically apply NLP techniques. We will present both high-level strategies for developing an ethics-oriented curriculum, based on experience and best practices, as well as specific sample exercises that can be brought to a classroom. This highly interactive work session will culminate in a shared online resource page that pools lesson plans, assignments, exercise ideas, reading suggestions, and ideas from the attendees. Though the tutorial will focus particularly on examples for university classrooms, we believe these ideas can extend to company-internal workshops or tutorials in a variety of organizations. In this setting, a key lesson is that there is no single approach to ethical NLP: each project requires thoughtful consideration about what steps can be taken to best support people affected by that project. However, we can learn (and teach) what issues to be aware of, what questions to ask, and what strategies are available to mitigate harm.
| 2,020
|
https://aclanthology.org/2020.acl-tutorials.2
|
ACL
|
[{'id': 26039972, 'paperId': '0e661bd2cfe94ed58e4e2abc1409c75b98c2582c', 'title': 'Dual use and the ethical responsibility of scientists', 'authors': [{'authorId': '3920554', 'name': 'Hans-Jörg Ehni'}], 'venue': 'Archivum Immunologiae et Therapiae Experimentalis', 'abstract': 'The main normative problem in the context of dual use is to determine the ethical responsibility of scientists especially in the case of unintended, harmful, and criminal dual use of new technological applications of scientific results. This article starts from an analysis of the concepts of responsibility and complicity, examining alternative options regarding the responsibility of scientists. Within the context of the basic conflict between the freedom of science and the duty to avoid causing harm, two positions are discussed: moral skepticism and the ethics of responsibility by Hans Jonas. According to these reflections, four duties are suggested and evaluated: stopping research, systematically carrying out research for dual-use applications, informing public authorities, and not publishing results. In the conclusion it is argued that these duties should be considered as imperfect duties in a Kantian sense and that the individual scientist should be discharged as much as possible from obligations which follow from them by the scientific community and institutions created for this purpose.', 'year': 2008, 'in_acl': False, 'citationCount': 40, 'section': None, 'subsection': None}, {'id': 52113954, 'paperId': 'e8fa186444d98a39ee9139b1f5dd0c7618caef8f', 'title': 'Privacy-preserving Neural Representations of Text', 'authors': [{'authorId': '3443469', 'name': 'Maximin Coavoux'}, {'authorId': None, 'name': 'Shashi Narayan'}, {'authorId': '40146204', 'name': 'Shay B. Cohen'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'This article deals with adversarial attacks towards deep learning systems for Natural Language Processing (NLP), in the context of privacy protection. We study a specific type of attack: an attacker eavesdrops on the hidden representations of a neural text classifier and tries to recover information about the input text. Such scenario may arise in situations when the computation of a neural network is shared across multiple devices, e.g. some hidden representation is computed by a user’s device and sent to a cloud-based model. We measure the privacy of a hidden representation by the ability of an attacker to predict accurately specific private information from it and characterize the tradeoff between the privacy and the utility of neural representations. Finally, we propose several defense methods based on modified training objectives and show that they improve the privacy of neural representations.', 'year': 2018, 'in_acl': True, 'citationCount': 106, 'section': None, 'subsection': None}, {'id': 9460040, 'paperId': 'f9acf607b858ac110c1bf83bf62835bcc1820e83', 'title': 'Value scenarios: a technique for envisioning systemic effects of new technologies', 'authors': [{'authorId': '34869420', 'name': 'L. Nathan'}, {'authorId': '2035680', 'name': 'P. Klasnja'}, {'authorId': '144029598', 'name': 'Batya Friedman'}], 'venue': 'CHI Extended Abstracts', 'abstract': 'In this paper we argue that there is a scarcity of methods which support critical, systemic, long-term thinking in current design practice, technology development and deployment. To address this need we introduce value scenarios, an extension of scenario-based design which can support envisioning the systemic effects of new technologies. We identify and describe five key elements of value scenarios; stakeholders, pervasiveness, time, systemic effects, and value implications. We provide two examples of value scenarios, which draw from our current work on urban simulation and human-robotic interaction . We conclude with suggestions for how value scenarios might be used by others.', 'year': 2007, 'in_acl': False, 'citationCount': 101, 'section': None, 'subsection': None}, {'id': 53782832, 'paperId': 'c9fa1cb56feeeb5033aa7ba40fa035ca2b9018ce', 'title': '50 Years of Test (Un)fairness: Lessons for Machine Learning', 'authors': [{'authorId': '2044655623', 'name': 'Ben Hutchinson'}, {'authorId': '49501003', 'name': 'Margaret Mitchell'}], 'venue': 'FAT', 'abstract': 'Quantitative definitions of what is unfair and what is fair have been introduced in multiple disciplines for well over 50 years, including in education, hiring, and machine learning. We trace how the notion of fairness has been defined within the testing communities of education and hiring over the past half century, exploring the cultural and social context in which different fairness definitions have emerged. In some cases, earlier definitions of fairness are similar or identical to definitions of fairness in current machine learning research, and foreshadow current formal work. In other cases, insights into what fairness means and how to measure it have largely gone overlooked. We compare past and current notions of fairness along several dimensions, including the fairness criteria, the focus of the criteria (e.g., a test, a model, or its use), the relationship of fairness to individuals, groups, and subgroups, and the mathematical method for measuring fairness (e.g., classification, regression). This work points the way towards future research and measurement of (un)fairness that builds from our modern understanding of fairness while incorporating insights from the past.', 'year': 2018, 'in_acl': False, 'citationCount': 330, 'section': None, 'subsection': None}, {'id': 2077168, 'paperId': '0fee3b6c72f7676b4934651e517d0a328048c600', 'title': 'Certifying and Removing Disparate Impact', 'authors': [{'authorId': '2053453944', 'name': 'Michael Feldman'}, {'authorId': '34597147', 'name': 'Sorelle A. Friedler'}, {'authorId': '144275618', 'name': 'John Moeller'}, {'authorId': '1786183', 'name': 'C. Scheidegger'}, {'authorId': '72563021', 'name': 'S. Venkatasubramanian'}], 'venue': 'Knowledge Discovery and Data Mining', 'abstract': 'What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender) and an explicit description of the process. When computers are involved, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the process. In addition, even if the process is open, it might be hard to elucidate in a legal setting how the algorithm makes its decisions. Instead of requiring access to the process, we propose making inferences based on the data it uses. We present four contributions. First, we link disparate impact to a measure of classification accuracy that while known, has received relatively little attention. Second, we propose a test for disparate impact based on how well the protected class can be predicted from the other attributes. Third, we describe methods by which data might be made unbiased. Finally, we present empirical evidence supporting the effectiveness of our test for disparate impact and our approach for both masking bias and preserving relevant information in the data. Interestingly, our approach resembles some actual selection practices that have recently received legal scrutiny.', 'year': 2014, 'in_acl': False, 'citationCount': 1846, 'section': None, 'subsection': None}]
|
2020.acl-tutorials.3
|
Achieving Common Ground in Multi-modal Dialogue
|
All communication aims at achieving common ground (grounding): interlocutors can work together effectively only with mutual beliefs about what the state of the world is, about what their goals are, and about how they plan to make their goals a reality. Computational dialogue research offers some classic results on grouding, which unfortunately offer scant guidance to the design of grounding modules and behaviors in cutting-edge systems. In this tutorial, we focus on three main topic areas: 1) grounding in human-human communication; 2) grounding in dialogue systems; and 3) grounding in multi-modal interactive systems, including image-oriented conversations and human-robot interactions. We highlight a number of achievements of recent computational research in coordinating complex content, show how these results lead to rich and challenging opportunities for doing grounding in more flexible and powerful ways, and canvass relevant insights from the literature on human–human conversation. We expect that the tutorial will be of interest to researchers in dialogue systems, computational semantics and cognitive modeling, and hope that it will catalyze research and system building that more directly explores the creative, strategic ways conversational agents might be able to seek and offer evidence about their understanding of their interlocutors.
| 2,020
|
https://aclanthology.org/2020.acl-tutorials.3
|
ACL
|
[{'id': 153811205, 'paperId': '5a9cac54de14e58697d0315fe3c01f3dbe69c186', 'title': 'Grounding in communication', 'authors': [{'authorId': '29224904', 'name': 'H. H. Clark'}, {'authorId': '71463834', 'name': 'S. Brennan'}], 'venue': 'Perspectives on socially shared cognition', 'abstract': "GROUNDING It takes two people working together to play a duet, shake hands, play chess, waltz, teach, or make love. To succeed, the two of them have to coordinate both the content and process of what they are doing. Alan and Barbara, on the piano, must come to play the same Mozart duet. This is coordination of content. They must also synchronize their entrances and exits, coordinate how loudly to play forte and pianissimo, and otherwise adjust to each other's tempo and dynamics. This is coordination of process. They cannot even begin to coordinate on content without assuming a vast amount of shared information or common ground-that is, mutual knowledge, mutual beliefs, and mutual assumptions And to coordinate on process, they need to update their common ground moment by moment. All collective actions are built on common ground and its accumulation. We thank many colleagues for discussion of the issues we take up here.", 'year': 1991, 'in_acl': False, 'citationCount': 4465, 'section': None, 'subsection': None}, {'id': 14623495, 'paperId': '06b6595034f6a8ea850ac12814030c0ef214d300', 'title': 'Meaning and Demonstration', 'authors': [{'authorId': '144884556', 'name': 'Matthew Stone'}, {'authorId': '3458697', 'name': 'Una Stojnić'}], 'venue': '', 'abstract': 'In demonstration, speakers use real-world activity both for its practical effects and to help make their points. The demonstrations of origami mathematics, for example, reconfigure pieces of paper by folding, while simultaneously allowing their author to signal geometric inferences. Demonstration challenges us to explain how practical actions can get such precise significance and how this meaning compares with that of other representations. In this paper, we propose an explanation inspired by David Lewis’s characterizations of coordination and scorekeeping in conversation. In particular, we argue that words, gestures, diagrams and demonstrations can function together as integrated ensembles that contribute to conversation, because interlocutors use them in parallel ways to coordinate updates to the conversational record.', 'year': 2015, 'in_acl': False, 'citationCount': 13, 'section': None, 'subsection': None}, {'id': 10161834, 'paperId': '68922969c1b91cdfb4a13f1dab9b90d015179a9c', 'title': 'Using Reinforcement Learning to Model Incrementality in a Fast-Paced Dialogue Game', 'authors': [{'authorId': '2175808', 'name': 'R. Manuvinakurike'}, {'authorId': '144662324', 'name': 'David DeVault'}, {'authorId': '3194430', 'name': 'Kallirroi Georgila'}], 'venue': 'SIGDIAL Conference', 'abstract': 'We apply Reinforcement Learning (RL) to the problem of incremental dialogue policy learning in the context of a fast-paced dialogue game. We compare the policy learned by RL with a high-performance baseline policy which has been shown to perform very efficiently (nearly as well as humans) in this dialogue game. The RL policy outperforms the baseline policy in offline simulations (based on real user data). We provide a detailed comparison of the RL policy and the baseline policy, including information about how much effort and time it took to develop each one of them. We also highlight the cases where the RL policy performs better, and show that understanding the RL policy can provide valuable insights which can inform the creation of an even better rule-based policy.', 'year': 2017, 'in_acl': True, 'citationCount': 20, 'section': None, 'subsection': None}, {'id': 51609464, 'paperId': '0e3c3599bf5dc2e24e724f097b80948f25c57d1d', 'title': 'Language to Action: Towards Interactive Task Learning with Physical Agents', 'authors': [{'authorId': '1707259', 'name': 'J. Chai'}, {'authorId': '3193409', 'name': 'Qiaozi Gao'}, {'authorId': '2720582', 'name': 'Lanbo She'}, {'authorId': '47569745', 'name': 'Shaohua Yang'}, {'authorId': '1411038811', 'name': 'S. Saba-Sadiya'}, {'authorId': '49560239', 'name': 'Guangyue Xu'}], 'venue': 'International Joint Conference on Artificial Intelligence', 'abstract': 'Language communication plays an important role in human learning and knowledge acquisition. With the emergence of a new generation of cognitive robots, empowering these robots to learn directly from human partners becomes increasingly important. This paper gives a brief introduction to interactive task learning where humans can teach physical agents new tasks through natural language communication and action demonstration. It discusses research challenges and opportunities in language and communication grounding that are critical in this process. It further highlights the importance of commonsense knowledge, particularly the very basic physical causality knowledge, in grounding language to perception and action.', 'year': 2018, 'in_acl': False, 'citationCount': 87, 'section': None, 'subsection': None}, {'id': 14843216, 'paperId': 'a2a4cc9bd34ed61383979edd365d29a32a74368e', 'title': "It's Not What You Do, It's How You Do It: Grounding Uncertainty for a Simple Robot", 'authors': [{'authorId': '144397346', 'name': 'Julian Hough'}, {'authorId': '1817455', 'name': 'David Schlangen'}], 'venue': 'IEEE/ACM International Conference on Human-Robot Interaction', 'abstract': 'For effective HRI, robots must go beyond having good legibility of their intentions shown by their actions, but also ground the degree of uncertainty they have. We show how in simple robots which have spoken language understanding capacities, uncertainty can be communicated to users by principles of grounding in dialogue interaction even without natural language generation. We present a model which makes this possible for robots with limited communication channels beyond the execution of task actions themselves. We implement our model in a pick-and-place robot, and experiment with two strategies for grounding uncertainty. In an observer study, we show that participants observing interactions with the robot run by the two different strategies were able to infer the degree of understanding the robot had internally, and in the more uncertainty-expressive system, were also able to perceive the degree of internal uncertainty the robot had reliably.', 'year': 2017, 'in_acl': False, 'citationCount': 33, 'section': None, 'subsection': None}, {'id': 11824338, 'paperId': '440dd122c93f707c213bb3096449848ac7d1bda5', 'title': 'Learning Effective Multimodal Dialogue Strategies from Wizard-of-Oz Data: Bootstrapping and Evaluation', 'authors': [{'authorId': '1681799', 'name': 'Verena Rieser'}, {'authorId': '1782798', 'name': 'Oliver Lemon'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We address two problems in the field of automatic optimization of dialogue strategies: learning effective dialogue strategies when no initial data or system exists, and evaluating the result with real users. We use Reinforcement Learning (RL) to learn multimodal dialogue strategies by interaction with a simulated environment which is “bootstrapped” from small amounts of Wizard-of-Oz (WOZ) data. This use of WOZ data allows development of optimal strategies for domains where no working prototype is available. We compare the RL-based strategy against a supervised strategy which mimics the wizards’ policies. This comparison allows us to measure relative improvement over the training data. Our results show that RL significantly outperforms Supervised Learning when interacting in simulation as well as for interactions with real users. The RL-based policy gains on average 50-times more reward when tested in simulation, and almost 18-times more reward when interacting with real users. Users also subjectively rate the RL-based policy on average 10% higher.', 'year': 2008, 'in_acl': True, 'citationCount': 97, 'section': None, 'subsection': None}, {'id': 117398433, 'paperId': '39253602324619c130e140c4e5cef4af0d746880', 'title': 'A Survey of Nonverbal Signaling Methods for Non-Humanoid Robots', 'authors': [{'authorId': '3289717', 'name': 'Elizabeth Cha'}, {'authorId': '1717667', 'name': 'Yunkyung Kim'}, {'authorId': '145585047', 'name': 'T. Fong'}, {'authorId': '1742183', 'name': 'M. Matarić'}], 'venue': 'Found. Trends Robotics', 'abstract': 'This monograph surveys and informs the design and usage of nonverbal signals for human-robot interaction. With robots increasingly being utilized for tasks that require them to not only operate in close proximity to humans but to interact with them as well, there has been great interest in the communication challenges associated with the varying degrees of interaction in these environments. The success of such interactions depends on robots’ ability to convey information about their knowledge, intent, and actions to co-located humans. The monograph presents a comprehensive review of literature related to the generation and usage of nonverbal signals that facilitate legibility of non-humanoid robot state and behavior. To motivate the need for these signaling behaviors, it surveys literature in human communication and psychology and outlines target use cases of non-humanoid robots. Specifically, the focus is on works that provide insight into the cognitive processes that enable humans to recognize, interpret, and exploit nonverbal signals. From these use cases, information is identified that is potentially important for non-humanoid robots to signal and organize it into three categories of robot state. The monograph then presents a review of signal design techniques to illustrate how signals conveying this information can be generated and utilized. It concludes by discussing issues that must be considered during nonverbal signaling and open research areas, with a focus on informing the design and usage of generalizable nonverbal signaling behaviors for task-oriented non-humanoid robots.', 'year': 2018, 'in_acl': False, 'citationCount': 97, 'section': None, 'subsection': None}, {'id': 202588687, 'paperId': '4a5c3216f8aad40b17531e78df8cc18e2b5c73ff', 'title': 'The Devil is in the Details: A Magnifying Glass for the GuessWhich Visual Dialogue Game', 'authors': [{'authorId': '50829868', 'name': 'A. Testoni'}, {'authorId': '145543514', 'name': 'Ravi Shekhar'}, {'authorId': '144151273', 'name': 'R. Fernández'}], 'venue': '', 'abstract': 'Grounded conversational agents are a fascinating research line on which important progress has beenmade lately thanks to the development of neural network models and to the release of visual dialogue datasets. The latter have been used to set visual dialogue games which are an interesting test bed to evaluate conversational agents. Researchers’ attention is on building models of increasing complexity, trained with computationally costly machine learning paradigms that lead to higher task success scores. In this paper, we take a step back: We use a rather simple neural network architecture and we scrutinize theGuessWhich task, the dataset, and the quality of the generated dialogues. We show that our simple Questioner agent reaches state-of-the art performance, that the evaluation metric commonly used is too coarse to compare different models, and that high task success does not correspond to high quality of the dialogues. Our work shows the importance of running detailed analyses of the results to spot possible models’ weaknesses rather than aiming to outperform state-of-the-art scores.', 'year': 2019, 'in_acl': False, 'citationCount': 6, 'section': None, 'subsection': None}]
|
2020.acl-tutorials.4
|
Reviewing Natural Language Processing Research
|
This tutorial will cover the theory and practice of reviewing research in natural language processing. Heavy reviewing burdens on natural language processing researchers have made it clear that our community needs to increase the size of our pool of potential reviewers. Simultaneously, notable “false negatives”—rejection by our conferences of work that was later shown to be tremendously important after acceptance by other conferences—have raised awareness of the fact that our reviewing practices leave something to be desired. We do not often talk about “false positives” with respect to conference papers, but leaders in the field have noted that we seem to have a publication bias towards papers that report high performance, with perhaps not much else of interest in them. It need not be this way. Reviewing is a learnable skill, and you will learn it here via lectures and a considerable amount of hands-on practice.
| 2,020
|
https://aclanthology.org/2020.acl-tutorials.4
|
ACL
|
[{'id': 154339, 'paperId': '33ff45f364dac785b8bd4e3bf70fb169dc1d39b4', 'title': "Who's afraid of peer review?", 'authors': [{'authorId': '145179131', 'name': 'J. Bohannon'}], 'venue': 'Science', 'abstract': 'Dozens of open-access journals targeted in an elaborate Science sting accepted a spoof research article, raising questions about peer-review practices in much of the open-access world.', 'year': 2013, 'in_acl': False, 'citationCount': 885, 'section': None, 'subsection': None}, {'id': 8460592, 'paperId': '9ca5552008fe2c24e0541f6af47fd5110d4015b3', 'title': 'Last Words: Reviewing the Reviewers', 'authors': [{'authorId': '2272727361', 'name': 'K. Church'}], 'venue': 'International Conference on Computational Logic', 'abstract': '', 'year': 2005, 'in_acl': True, 'citationCount': 48, 'section': None, 'subsection': None}, {'id': 16508456, 'paperId': '4fb5a17d4066116a8fc928e43aa558732d8b7cb2', 'title': 'Preventing the ends from justifying the means: withholding results to address publication bias in peer-review', 'authors': [{'authorId': '4058655', 'name': 'K. Button'}, {'authorId': '38974348', 'name': 'Liz Bal'}, {'authorId': '145879163', 'name': 'A. Clark'}, {'authorId': '19854097', 'name': 'Tim Shipley'}], 'venue': 'BMC Psychology', 'abstract': 'The evidence that many of the findings in the published literature may be unreliable is compelling. There is an excess of positive results, often from studies with small sample sizes, or other methodological limitations, and the conspicuous absence of null findings from studies of a similar quality. This distorts the evidence base, leading to false conclusions and undermining scientific progress. Central to this problem is a peer-review system where the decisions of authors, reviewers, and editors are more influenced by impressive results than they are by the validity of the study design. To address this, BMC Psychology is launching a pilot to trial a new ‘results-free’ peer-review process, whereby editors and reviewers are blinded to the study’s results, initially assessing manuscripts on the scientific merits of the rationale and methods alone. The aim is to improve the reliability and quality of published research, by focusing editorial decisions on the rigour of the methods, and preventing impressive ends justifying poor means.', 'year': 2016, 'in_acl': False, 'citationCount': 42, 'section': None, 'subsection': None}, {'id': 53149927, 'paperId': 'a772589606f9880d74ac79519ccef073eefd5519', 'title': 'Double-blind peer review and gender publication bias', 'authors': [{'authorId': '3145400', 'name': 'L. Engqvist'}, {'authorId': '2149229', 'name': 'Joachim G. Frommen'}], 'venue': 'Animal Behaviour', 'abstract': '', 'year': 2008, 'in_acl': False, 'citationCount': 34, 'section': None, 'subsection': None}, {'id': 7350256, 'paperId': 'd8cf5c798397b6a0be1b41f18f979f0988f1ece7', 'title': 'Publication prejudices: An experimental study of confirmatory bias in the peer review system', 'authors': [{'authorId': '35256346', 'name': 'M. Mahoney'}], 'venue': 'Cognitive Therapy and Research', 'abstract': "Confirmatory bias is the tendency to emphasize and believe experiences which support one's views and to ignore or discredit those which do not. The effects of this tendency have been repeatedly documented in clinical research. However, its ramifications for the behavior of scientists have yet to be adequately explored. For example, although publication is a critical element in determining the contribution and impact of scientific findings, little research attention has been devoted to the variables operative in journal review policies. In the present study, 75 journal reviewers were asked to referee manuscripts which described identical experimental procedures but which reported positive, negative, mixed, or no results. In addition to showing poor interrater agreement, reviewers were strongly biased against manuscripts which reported results contrary to their theoretical perspective. The implications of these findings for epistemology and the peer review system are briefly addressed.", 'year': 1977, 'in_acl': False, 'citationCount': 667, 'section': None, 'subsection': None}, {'id': 155600340, 'paperId': '8d75051e8151fa5b7bd7c863102d0c4be7608c93', 'title': 'Peer review — reviewed', 'authors': [], 'venue': 'Nature', 'abstract': '', 'year': 2014, 'in_acl': False, 'citationCount': 3, 'section': None, 'subsection': None}, {'id': 256659588, 'paperId': '87e849787dcfda83d7315c7d3d5c54851c82d264', 'title': 'On becoming a discipline', 'authors': [{'authorId': '5922478', 'name': 'Melissa J. Fickling'}], 'venue': 'Counselor Education and Supervision', 'abstract': "Clarifying counselor education's status as a discipline carries implications for pedagogy, researcher identity development, and knowledge production. In this manuscript, these implications are discussed within a historical context and with attention to the successful career transitions for new counselor educators, as well as those pursuing promotion and tenure in academia.", 'year': 2023, 'in_acl': False, 'citationCount': 3, 'section': None, 'subsection': None}, {'id': 1570550, 'paperId': '830ab38207bd40189752a301967b865c38dab591', 'title': 'Last Words: Breaking News: Changing Attitudes and Practices', 'authors': [{'authorId': '1736049', 'name': 'B. Webber'}], 'venue': 'International Conference on Computational Logic', 'abstract': '', 'year': 2007, 'in_acl': True, 'citationCount': 3, 'section': None, 'subsection': None}, {'id': 522864, 'paperId': 'cf222293e2447365ad25e603bfdd064646ef6652', 'title': 'Nepotism and sexism in peer-review', 'authors': [{'authorId': '6447164', 'name': 'C. Wennerås'}, {'authorId': '2053374957', 'name': 'Agnes E. Wold'}], 'venue': 'Nature', 'abstract': 'In the first-ever analysis of peer-review scores for postdoctoral fellowship applications, the system is revealed as being riddled with prejudice. The policy of secrecy in evaluation must be abandoned.', 'year': 1997, 'in_acl': False, 'citationCount': 1359, 'section': None, 'subsection': None}]
|
2020.acl-tutorials.6
|
Multi-modal Information Extraction from Text, Semi-structured, and Tabular Data on the Web
|
The World Wide Web contains vast quantities of textual information in several forms: unstructured text, template-based semi-structured webpages (which present data in key-value pairs and lists), and tables. Methods for extracting information from these sources and converting it to a structured form have been a target of research from the natural language processing (NLP), data mining, and database communities. While these researchers have largely separated extraction from web data into different problems based on the modality of the data, they have faced similar problems such as learning with limited labeled data, defining (or avoiding defining) ontologies, making use of prior knowledge, and scaling solutions to deal with the size of the Web. In this tutorial we take a holistic view toward information extraction, exploring the commonalities in the challenges and solutions developed to address these different forms of text. We will explore the approaches targeted at unstructured text that largely rely on learning syntactic or semantic textual patterns, approaches targeted at semi-structured documents that learn to identify structural patterns in the template, and approaches targeting web tables which rely heavily on entity linking and type information. While these different data modalities have largely been considered separately in the past, recent research has started taking a more inclusive approach toward textual extraction, in which the multiple signals offered by textual, layout, and visual clues are combined into a single extraction model made possible by new deep learning approaches. At the same time, trends within purely textual extraction have shifted toward full-document understanding rather than considering sentences as independent units. With this in mind, it is worth considering the information extraction problem as a whole to motivate solutions that harness textual semantics along with visual and semi-structured layout information. We will discuss these approaches and suggest avenues for future work.
| 2,020
|
https://aclanthology.org/2020.acl-tutorials.6
|
ACL
|
[{'id': 13091007, 'paperId': '7a12502ba5b9686e37b0ec9d86a2dc7f4b7022ac', 'title': 'Web-scale information extraction with vertex', 'authors': [{'authorId': '2627799', 'name': 'P. Gulhane'}, {'authorId': '2136102', 'name': 'Amit Madaan'}, {'authorId': '3259494', 'name': 'Rupesh R. Mehta'}, {'authorId': '2311735', 'name': 'J. Ramamirtham'}, {'authorId': '1696519', 'name': 'R. Rastogi'}, {'authorId': '1837802', 'name': 'Sandeepkumar Satpal'}, {'authorId': '1757518', 'name': 'Srinivasan H. Sengamedu'}, {'authorId': '2990683', 'name': 'Ashwin Tengli'}, {'authorId': '2081450365', 'name': 'Charu Tiwari'}], 'venue': 'IEEE International Conference on Data Engineering', 'abstract': 'Vertex is a Wrapper Induction system developed at Yahoo! for extracting structured records from template-based Web pages. To operate at Web scale, Vertex employs a host of novel algorithms for (1) Grouping similar structured pages in a Web site, (2) Picking the appropriate sample pages for wrapper inference, (3) Learning XPath-based extraction rules that are robust to variations in site structure, (4) Detecting site changes by monitoring sample pages, and (5) Optimizing editorial costs by reusing rules, etc. The system is deployed in production and currently extracts more than 250 million records from more than 200 Web sites. To the best of our knowledge, Vertex is the first system to do high-precision information extraction at Web scale.', 'year': 2011, 'in_acl': False, 'citationCount': 90, 'section': None, 'subsection': None}, {'id': 51993171, 'paperId': 'e49e6dbdfdb813b42fff716a8b11951de2d5cbf3', 'title': 'Ten Years of WebTables', 'authors': [{'authorId': '1725561', 'name': 'Michael J. Cafarella'}, {'authorId': '1770962', 'name': 'A. Halevy'}, {'authorId': '8386466', 'name': 'Hongrae Lee'}, {'authorId': '2224716', 'name': 'Jayant Madhavan'}, {'authorId': '40592227', 'name': 'Cong Yu'}, {'authorId': '2111220343', 'name': 'D. Wang'}, {'authorId': '48144872', 'name': 'Eugene Wu'}], 'venue': 'Proceedings of the VLDB Endowment', 'abstract': '\n In 2008, we wrote about WebTables, an effort to exploit the large and diverse set of structured databases casually published online in the form of HTML tables. The past decade has seen a flurry of research and commercial activities around the WebTables project itself, as well as the broad topic of informal online structured data. In this paper, we\n 1\n will review the WebTables project, and try to place it in the broader context of the decade of work that followed. We will also show how the progress over the past ten years sets up an exciting agenda for the future, and will draw upon many corners of the data management community.\n', 'year': 2018, 'in_acl': False, 'citationCount': 62, 'section': None, 'subsection': None}, {'id': 3627801, 'paperId': '0e46803ac8fc715b72d7f935a3f383ade945487f', 'title': 'Fonduer: Knowledge Base Construction from Richly Formatted Data', 'authors': [{'authorId': '144766615', 'name': 'Sen Wu'}, {'authorId': '2065637845', 'name': 'Luke Hsiao'}, {'authorId': '2149478197', 'name': 'Xiaoxia Cheng'}, {'authorId': '34302368', 'name': 'Braden Hancock'}, {'authorId': '145071799', 'name': 'Theodoros Rekatsinas'}, {'authorId': '1721681', 'name': 'P. Levis'}, {'authorId': '2114485554', 'name': 'C. Ré'}], 'venue': 'SIGMOD Conference', 'abstract': "We focus on knowledge base construction (KBC) from richly formatted data. In contrast to KBC from text or tabular data, KBC from richly formatted data aims to extract relations conveyed jointly via textual, structural, tabular, and visual expressions. We introduce Fonduer, a machine-learning-based KBC system for richly formatted data. Fonduer presents a new data model that accounts for three challenging characteristics of richly formatted data: (1) prevalent document-level relations, (2) multimodality, and (3) data variety. Fonduer uses a new deep-learning model to automatically capture the representation (i.e., features) needed to learn how to extract relations from richly formatted data. Finally, Fonduer provides a new programming model that enables users to convert domain expertise, based on multiple modalities of information, to meaningful signals of supervision for training a KBC system. Fonduer-based KBC systems are in production for a range of use cases, including at a major online retailer. We compare Fonduer against state-of-the-art KBC approaches in four different domains. We show that Fonduer achieves an average improvement of 41 F1 points on the quality of the output knowledge base---and in some cases produces up to 1.87x the number of correct entries---compared to expert-curated public knowledge bases. We also conduct a user study to assess the usability of Fonduer's new programming model. We show that after using Fonduer for only 30 minutes, non-domain experts are able to design KBC systems that achieve on average 23 F1 points higher quality than traditional machine-learning-based KBC approaches.", 'year': 2017, 'in_acl': False, 'citationCount': 97, 'section': None, 'subsection': None}, {'id': 102353905, 'paperId': 'dcb28c8ba94434eb8a06e81eb55bfdbc343d2340', 'title': 'Document-Level N-ary Relation Extraction with Multiscale Representation Learning', 'authors': [{'authorId': '3422908', 'name': 'Robin Jia'}, {'authorId': '2109566188', 'name': 'Cliff Wong'}, {'authorId': '1759772', 'name': 'Hoifung Poon'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Most information extraction methods focus on binary relations expressed within single sentences. In high-value domains, however, n-ary relations are of great demand (e.g., drug-gene-mutation interactions in precision oncology). Such relations often involve entity mentions that are far apart in the document, yet existing work on cross-sentence relation extraction is generally confined to small text spans (e.g., three consecutive sentences), which severely limits recall. In this paper, we propose a novel multiscale neural architecture for document-level n-ary relation extraction. Our system combines representations learned over various text spans throughout the document and across the subrelation hierarchy. Widening the system’s purview to the entire document maximizes potential recall. Moreover, by integrating weak signals across the document, multiscale modeling increases precision, even in the presence of noisy labels from distant supervision. Experiments on biomedical machine reading show that our approach substantially outperforms previous n-ary relation extraction methods.', 'year': 2019, 'in_acl': True, 'citationCount': 135, 'section': None, 'subsection': None}, {'id': 5774632, 'paperId': '131383aa1f91eb0e9578dcae80f4dfcfb0f11e3e', 'title': 'Extraction and Integration of Partially Overlapping Web Sources', 'authors': [{'authorId': '1760944', 'name': 'Mirko Bronzi'}, {'authorId': '1791339', 'name': 'Valter Crescenzi'}, {'authorId': '1796590', 'name': 'P. Merialdo'}, {'authorId': '1802817', 'name': 'Paolo Papotti'}], 'venue': 'Proceedings of the VLDB Endowment', 'abstract': 'We present an unsupervised approach for harvesting the data exposed by a set of structured and partially overlapping data-intensive web sources. Our proposal comes within a formal framework tackling two problems: the data extraction problem, to generate extraction rules based on the input websites, and the data integration problem, to integrate the extracted data in a unified schema. We introduce an original algorithm, WEIR, to solve the stated problems and formally prove its correctness. WEIR leverages the overlapping data among sources to make better decisions both in the data extraction (by pruning rules that do not lead to redundant information) and in the data integration (by reflecting local properties of a source over the mediated schema). Along the way, we characterize the amount of redundancy needed by our algorithm to produce a solution, and present experimental results to show the benefits of our approach with respect to existing solutions.', 'year': 2013, 'in_acl': False, 'citationCount': 60, 'section': None, 'subsection': None}, {'id': 4557963, 'paperId': 'cf5ea582bccc7cb21a2ebeb7a0987f79652bde8d', 'title': 'Knowledge vault: a web-scale approach to probabilistic knowledge fusion', 'authors': [{'authorId': '145867172', 'name': 'X. Dong'}, {'authorId': '1718798', 'name': 'E. Gabrilovich'}, {'authorId': '1728179', 'name': 'Geremy Heitz'}, {'authorId': '40428294', 'name': 'Wilko Horn'}, {'authorId': '1914797', 'name': 'N. Lao'}, {'authorId': '1702318', 'name': 'K. Murphy'}, {'authorId': '2931575', 'name': 'Thomas Strohmann'}, {'authorId': '2109375570', 'name': 'Shaohua Sun'}, {'authorId': None, 'name': 'Wei Zhang'}], 'venue': 'Knowledge Discovery and Data Mining', 'abstract': "Recent years have witnessed a proliferation of large-scale knowledge bases, including Wikipedia, Freebase, YAGO, Microsoft's Satori, and Google's Knowledge Graph. To increase the scale even further, we need to explore automatic methods for constructing knowledge bases. Previous approaches have primarily focused on text-based extraction, which can be very noisy. Here we introduce Knowledge Vault, a Web-scale probabilistic knowledge base that combines extractions from Web content (obtained via analysis of text, tabular data, page structure, and human annotations) with prior knowledge derived from existing knowledge repositories. We employ supervised machine learning methods for fusing these distinct information sources. The Knowledge Vault is substantially bigger than any previously published structured knowledge repository, and features a probabilistic inference system that computes calibrated probabilities of fact correctness. We report the results of multiple studies that explore the relative utility of the different information sources and extraction methods.", 'year': 2014, 'in_acl': False, 'citationCount': 1700, 'section': None, 'subsection': None}, {'id': 102352698, 'paperId': 'f262ef2f50dfcaf07dc6598f22fb9b2470b37cf1', 'title': 'A general framework for information extraction using dynamic span graphs', 'authors': [{'authorId': '145081697', 'name': 'Yi Luan'}, {'authorId': '30051202', 'name': 'David Wadden'}, {'authorId': '2265599', 'name': 'Luheng He'}, {'authorId': '2107663537', 'name': 'A. Shah'}, {'authorId': '144339506', 'name': 'Mari Ostendorf'}, {'authorId': '2548384', 'name': 'Hannaneh Hajishirzi'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'We introduce a general framework for several information extraction tasks that share span representations using dynamically constructed span graphs. The graphs are dynamically constructed by selecting the most confident entity spans and linking these nodes with confidence-weighted relation types and coreferences. The dynamic span graph allow coreference and relation type confidences to propagate through the graph to iteratively refine the span representations. This is unlike previous multi-task frameworks for information extraction in which the only interaction between tasks is in the shared first-layer LSTM. Our framework significantly outperforms state-of-the-art on multiple information extraction tasks across multiple datasets reflecting different domains. We further observe that the span enumeration approach is good at detecting nested span entities, with significant F1 score improvement on the ACE dataset.', 'year': 2019, 'in_acl': True, 'citationCount': 307, 'section': None, 'subsection': None}, {'id': 174799980, 'paperId': 'b31eef8d9263b02f7d0c1ab55b26012550a2e95a', 'title': 'OpenCeres: When Open Information Extraction Meets the Semi-Structured Web', 'authors': [{'authorId': '144182018', 'name': 'Colin Lockard'}, {'authorId': '3310534', 'name': 'Prashant Shiralkar'}, {'authorId': '2143917898', 'name': 'Xin Dong'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Open Information Extraction (OpenIE), the problem of harvesting triples from natural language text whose predicate relations are not aligned to any pre-defined ontology, has been a popular subject of research for the last decade. However, this research has largely ignored the vast quantity of facts available in semi-structured webpages. In this paper, we define the problem of OpenIE from semi-structured websites to extract such facts, and present an approach for solving it. We also introduce a labeled evaluation dataset to motivate research in this area. Given a semi-structured website and a set of seed facts for some relations existing on its pages, we employ a semi-supervised label propagation technique to automatically create training data for the relations present on the site. We then use this training data to learn a classifier for relation extraction. Experimental results of this method on our new benchmark dataset obtained a precision of over 70%. A larger scale extraction experiment on 31 websites in the movie vertical resulted in the extraction of over 2 million triples.', 'year': 2019, 'in_acl': True, 'citationCount': 50, 'section': None, 'subsection': None}, {'id': 53109320, 'paperId': '1da8e1ad1814d81f69433ac877ef70caa950e4e6', 'title': 'GraphIE: A Graph-Based Framework for Information Extraction', 'authors': [{'authorId': '5606742', 'name': 'Yujie Qian'}, {'authorId': '2628786', 'name': 'Enrico Santus'}, {'authorId': '8752221', 'name': 'Zhijing Jin'}, {'authorId': '144084849', 'name': 'Jiang Guo'}, {'authorId': '1741283', 'name': 'R. Barzilay'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Most modern Information Extraction (IE) systems are implemented as sequential taggers and only model local dependencies. Non-local and non-sequential context is, however, a valuable source of information to improve predictions. In this paper, we introduce GraphIE, a framework that operates over a graph representing a broad set of dependencies between textual units (i.e. words or sentences). The algorithm propagates information between connected nodes through graph convolutions, generating a richer representation that can be exploited to improve word-level predictions. Evaluation on three different tasks — namely textual, social media and visual information extraction — shows that GraphIE consistently outperforms the state-of-the-art sequence tagging model by a significant margin.', 'year': 2018, 'in_acl': True, 'citationCount': 105, 'section': None, 'subsection': None}]
|
2020.acl-tutorials.7
|
Commonsense Reasoning for Natural Language Processing
|
Commonsense knowledge, such as knowing that “bumping into people annoys them” or “rain makes the road slippery”, helps humans navigate everyday situations seamlessly. Yet, endowing machines with such human-like commonsense reasoning capabilities has remained an elusive goal of artificial intelligence research for decades. In recent years, commonsense knowledge and reasoning have received renewed attention from the natural language processing (NLP) community, yielding exploratory studies in automated commonsense understanding. We organize this tutorial to provide researchers with the critical foundations and recent advances in commonsense representation and reasoning, in the hopes of casting a brighter light on this promising area of future research. In our tutorial, we will (1) outline the various types of commonsense (e.g., physical, social), and (2) discuss techniques to gather and represent commonsense knowledge, while highlighting the challenges specific to this type of knowledge (e.g., reporting bias). We will then (3) discuss the types of commonsense knowledge captured by modern NLP systems (e.g., large pretrained language models), and (4) present ways to measure systems’ commonsense reasoning abilities. We will finish with (5) a discussion of various ways in which commonsense reasoning can be used to improve performance on NLP tasks, exemplified by an (6) interactive session on integrating commonsense into a downstream task.
| 2,020
|
https://aclanthology.org/2020.acl-tutorials.7
|
ACL
|
[{'id': 91184338, 'paperId': 'b1832b749528755dfcbe462717f4f5afc07243b8', 'title': 'Commonsense Reasoning for Natural Language Understanding: A Survey of Benchmarks, Resources, and Approaches', 'authors': [{'authorId': '89093987', 'name': 'Shane Storks'}, {'authorId': '3193409', 'name': 'Qiaozi Gao'}, {'authorId': '1707259', 'name': 'J. Chai'}], 'venue': 'arXiv.org', 'abstract': "Commonsense knowledge and commonsense reasoning are some of the main bottlenecks in machine intelligence. In the NLP community, many benchmark datasets and tasks have been created to address commonsense reasoning for language understanding. These tasks are designed to assess machines' ability to acquire and learn commonsense knowledge in order to reason and understand natural language text. As these tasks become instrumental and a driving force for commonsense research, this paper aims to provide an overview of existing tasks and benchmarks, knowledge resources, and learning and inference approaches toward commonsense reasoning for natural language understanding. Through this, our goal is to support a better understanding of the state of the art, its limitations, and future challenges.", 'year': 2019, 'in_acl': False, 'citationCount': 72, 'section': None, 'subsection': None}, {'id': 15710851, 'paperId': '128cb6b891aee1b5df099acb48e2efecfcff689f', 'title': 'The Winograd Schema Challenge', 'authors': [{'authorId': '143634377', 'name': 'H. Levesque'}, {'authorId': '144883814', 'name': 'E. Davis'}, {'authorId': '40429476', 'name': 'L. Morgenstern'}], 'venue': 'AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning', 'abstract': 'In this paper, we present an alternative to the Turing Test that has some conceptual and practical advantages. Like the original, it involves responding to typed English sentences, and English-speaking adults will have no difficulty with it. Unlike the original, the subject is not required to engage in a conversation and fool an interrogator into believing she is dealing with a person. Moreover, the test is arranged in such a way that having full access to a large corpus of English text might not help much. Finally, the interrogator or a third party will be able to decide unambiguously after a few minutes whether or not a subject has passed the test.', 'year': 2011, 'in_acl': False, 'citationCount': 1275, 'section': None, 'subsection': None}, {'id': 15206880, 'paperId': '26aa6fe2028b5eefbaa40ab54ef725bbbe7d9810', 'title': 'ConceptNet 5.5: An Open Multilingual Graph of General Knowledge', 'authors': [{'authorId': '145696762', 'name': 'R. Speer'}, {'authorId': '2060230787', 'name': 'Joshua Chin'}, {'authorId': '2232845', 'name': 'Catherine Havasi'}], 'venue': 'AAAI Conference on Artificial Intelligence', 'abstract': '\n \n Machine learning about language can be improved by supplying it with specific knowledge and sources of external information. We present here a new version of the linked open data resource ConceptNet that is particularly well suited to be used with modern NLP techniques such as word embeddings. ConceptNet is a knowledge graph that connects words and phrases of natural language with labeled edges. Its knowledge is collected from many sources that include expert-created resources, crowd-sourcing, and games with a purpose. It is designed to represent the general knowledge involved in understanding language, improving natural language applications by allowing the application to better understand the meanings behind the words people use. When ConceptNet is combined with word embeddings acquired from distributional semantics (such as word2vec), it provides applications with understanding that they would not acquire from distributional semantics alone, nor from narrower resources such as WordNet or DBPedia. We demonstrate this with state-of-the-art results on intrinsic evaluations of word relatedness that translate into improvements on applications of word vectors, including solving SAT-style analogies.\n \n', 'year': 2016, 'in_acl': False, 'citationCount': 2626, 'section': None, 'subsection': None}, {'id': 16567195, 'paperId': 'cceb698cbbb828537f2f195fb70b6fdc586d3327', 'title': 'Reporting bias and knowledge acquisition', 'authors': [{'authorId': '145402198', 'name': 'Jonathan Gordon'}, {'authorId': '7536576', 'name': 'Benjamin Van Durme'}], 'venue': 'Conference on Automated Knowledge Base Construction', 'abstract': 'Much work in knowledge extraction from text tacitly assumes that the frequency with which people write about actions, outcomes, or properties is a reflection of real-world frequencies or the degree to which a property is characteristic of a class of individuals. In this paper, we question this idea, examining the phenomenon of reporting bias and the challenge it poses for knowledge extraction. We conclude with discussion of approaches to learning commonsense knowledge from text despite this distortion.', 'year': 2013, 'in_acl': False, 'citationCount': 210, 'section': None, 'subsection': None}, {'id': 1726501, 'paperId': '85b68477a6e031d88b963833e15a4b4fc6855264', 'title': 'A Corpus and Cloze Evaluation for Deeper Understanding of Commonsense Stories', 'authors': [{'authorId': '2400138', 'name': 'N. Mostafazadeh'}, {'authorId': '1729918', 'name': 'Nathanael Chambers'}, {'authorId': '144137069', 'name': 'Xiaodong He'}, {'authorId': '153432684', 'name': 'Devi Parikh'}, {'authorId': '1746610', 'name': 'Dhruv Batra'}, {'authorId': '1909300', 'name': 'Lucy Vanderwende'}, {'authorId': '143967473', 'name': 'Pushmeet Kohli'}, {'authorId': '145844737', 'name': 'James F. Allen'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': "Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. This issue is particularly challenging for understanding casual and correlational relationships between events. While this topic has received a lot of interest in the NLP community, research has been hindered by the lack of a proper evaluation framework. This paper attempts to address this problem with a new framework for evaluating story understanding and script learning: the 'Story Cloze Test'. This test requires a system to choose the correct ending to a four-sentence story. We created a new corpus of ~50k five-sentence commonsense stories, ROCStories, to enable this evaluation. This corpus is unique in two ways: (1) it captures a rich set of causal and temporal commonsense relations between daily events, and (2) it is a high quality collection of everyday life stories that can also be used for story generation. Experimental evaluation shows that a host of baselines and state-of-the-art models based on shallow language understanding struggle to achieve a high score on the Story Cloze Test. We discuss these implications for script and story learning, and offer suggestions for deeper language understanding.", 'year': 2016, 'in_acl': True, 'citationCount': 670, 'section': None, 'subsection': None}, {'id': 53296520, 'paperId': 'c21a4d70d83e0f6eb2a9e1c41d034842dd561e47', 'title': 'CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge', 'authors': [{'authorId': '12371246', 'name': 'Alon Talmor'}, {'authorId': '47426264', 'name': 'Jonathan Herzig'}, {'authorId': '35219984', 'name': 'Nicholas Lourie'}, {'authorId': '1750652', 'name': 'Jonathan Berant'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'When answering a question, people often draw upon their rich world knowledge in addition to the particular context. Recent work has focused primarily on answering questions given some relevant document or context, and required very little general background. To investigate question answering with prior knowledge, we present CommonsenseQA: a challenging new dataset for commonsense question answering. To capture common sense beyond associations, we extract from ConceptNet (Speer et al., 2017) multiple target concepts that have the same semantic relation to a single source concept. Crowd-workers are asked to author multiple-choice questions that mention the source concept and discriminate in turn between each of the target concepts. This encourages workers to create questions with complex semantics that often require prior knowledge. We create 12,247 questions through this procedure and demonstrate the difficulty of our task with a large number of strong baselines. Our best baseline is based on BERT-large (Devlin et al., 2018) and obtains 56% accuracy, well below human performance, which is 89%.', 'year': 2019, 'in_acl': True, 'citationCount': 1364, 'section': None, 'subsection': None}]
|
2020.coling-tutorials.1
|
Cross-lingual Semantic Representation for NLP with UCCA
|
This is an introductory tutorial to UCCA (Universal Conceptual Cognitive Annotation), a cross-linguistically applicable framework for semantic representation, with corpora annotated in English, German and French, and ongoing annotation in Russian and Hebrew. UCCA builds on extensive typological work and supports rapid annotation. The tutorial will provide a detailed introduction to the UCCA annotation guidelines, design philosophy and the available resources; and a comparison to other meaning representations. It will also survey the existing parsing work, including the findings of three recent shared tasks, in SemEval and CoNLL, that addressed UCCA parsing. Finally, the tutorial will present recent applications and extensions to the scheme, demonstrating its value for natural language processing in a range of languages and domains.
| 2,020
|
https://aclanthology.org/2020.coling-tutorials.1/
|
COLING
|
[{'id': 60742189, 'paperId': '54ffc8f1cb11ec21eb14a6706b8b6d9b192a1b32', 'title': 'A Semantic Approach to English Grammar', 'authors': [{'authorId': '34256957', 'name': 'R. Dixon'}], 'venue': '', 'abstract': 'This book shows how grammar helps people communicate and looks at the ways grammar and meaning interrelate. The author starts from the notion that a speaker codes a meaning into grammatical forms which the listener is then able to recover: each word, he shows, has its own meaning and each bit of grammar its own function, their combinations creating and limiting the possibilities for different words. He uncovers a rationale for the varying grammatical properties of different words and in the process explains many facts about English - such as why we can say I wish to go, I wish that he would go, and I want to go but not I want that he would go. The first part of the book reviews the main points of English syntax and discusses English verbs in terms of their semantic types including those of Motion, Giving, Speaking, Liking, and Trying. In the second part Professor Dixon looks at eight grammatical topics, including complement clauses, transitivity and causatives, passives, and the promotion of a non-subject to subject, as in Dictionaries sell well. This is the updated and revised edition of A New Approach to English Grammar on Semantic Principles. It includes new chapters on tense and aspect, nominalizations and possession, and adverbs and negation, and contains a new discussion of comparative forms of adjectives. It also explains recent changes in English grammar, including how they has replaced the tabooed he as a pronoun referring to either gender, as in When a student reads this book, they will learn a lot about English grammar in a most enjoyable manner.', 'year': 2005, 'in_acl': False, 'citationCount': 223, 'section': None, 'subsection': None}, {'id': 1642392, 'paperId': 'eec3a236ecd185712ce65fb336141f8656eea13d', 'title': 'Simple and Accurate Dependency Parsing Using Bidirectional LSTM Feature Representations', 'authors': [{'authorId': '2022679', 'name': 'E. Kiperwasser'}, {'authorId': '2089067', 'name': 'Yoav Goldberg'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'We present a simple and effective scheme for dependency parsing which is based on bidirectional-LSTMs (BiLSTMs). Each sentence token is associated with a BiLSTM vector representing the token in its sentential context, and feature vectors are constructed by concatenating a few BiLSTM vectors. The BiLSTM is trained jointly with the parser objective, resulting in very effective feature extractors for parsing. We demonstrate the effectiveness of the approach by applying it to a greedy transition-based parser as well as to a globally optimized graph-based parser. The resulting parsers have very simple architectures, and match or surpass the state-of-the-art accuracies on English and Chinese.', 'year': 2016, 'in_acl': True, 'citationCount': 658, 'section': None, 'subsection': None}, {'id': 8233374, 'paperId': '4a715ea217dc5ecc2b16d6cf542bfb3f4a10f2b5', 'title': 'A Transition-Based Directed Acyclic Graph Parser for UCCA', 'authors': [{'authorId': '2086349', 'name': 'Daniel Hershcovich'}, {'authorId': '2769805', 'name': 'Omri Abend'}, {'authorId': '145009917', 'name': 'A. Rappoport'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We present the first parser for UCCA, a cross-linguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation. UCCA poses a challenge for existing parsing techniques, as it exhibits reentrancy (resulting in DAG structures), discontinuous structures and non-terminal nodes corresponding to complex semantic units. To our knowledge, the conjunction of these formal properties is not supported by any existing parser. Our transition-based parser, which uses a novel transition set and features based on bidirectional LSTMs, has value not just for UCCA parsing: its ability to handle more general graph structures can inform the development of parsers for other semantic DAG structures, and in languages that frequently use discontinuous structures.', 'year': 2017, 'in_acl': True, 'citationCount': 93, 'section': None, 'subsection': None}, {'id': 15939234, 'paperId': '4908fc4d7f58383170c085fe8238a868e9a901f9', 'title': 'Deep Multitask Learning for Semantic Dependency Parsing', 'authors': [{'authorId': '1818378366', 'name': 'Hao Peng'}, {'authorId': '38094552', 'name': 'Sam Thomson'}, {'authorId': '144365875', 'name': 'Noah A. Smith'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We present a deep neural architecture that parses sentences into three semantic dependency graph formalisms. By using efficient, nearly arc-factored inference and a bidirectional-LSTM composed with a multi-layer perceptron, our base system is able to significantly improve the state of the art for semantic dependency parsing, without using hand-engineered features or syntax. We then explore two multitask learning approaches—one that shares parameters across formalisms, and one that uses higher-order structures to predict the graphs jointly. We find that both approaches improve performance across formalisms on average, achieving a new state of the art. Our code is open-source and available at https://github.com/Noahs-ARK/NeurboParser.', 'year': 2017, 'in_acl': True, 'citationCount': 144, 'section': None, 'subsection': None}, {'id': 19488885, 'paperId': '7ada8577807aefcad4f8120e8a031cceba065ec9', 'title': 'Multitask Parsing Across Semantic Representations', 'authors': [{'authorId': '2086349', 'name': 'Daniel Hershcovich'}, {'authorId': '2769805', 'name': 'Omri Abend'}, {'authorId': '145009917', 'name': 'A. Rappoport'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'The ability to consolidate information of different types is at the core of intelligence, and has tremendous practical value in allowing learning for one task to benefit from generalizations learned for others. In this paper we tackle the challenging task of improving semantic parsing performance, taking UCCA parsing as a test case, and AMR, SDP and Universal Dependencies (UD) parsing as auxiliary tasks. We experiment on three languages, using a uniform transition-based system and learning architecture for all parsing tasks. Despite notable conceptual, formal and domain differences, we show that multitask learning significantly improves UCCA parsing in both in-domain and out-of-domain settings.', 'year': 2018, 'in_acl': True, 'citationCount': 67, 'section': None, 'subsection': None}, {'id': 7741748, 'paperId': 'e32e3feb2225b427caa05eb26f241671196fc942', 'title': 'The State of the Art in Semantic Representation', 'authors': [{'authorId': '2769805', 'name': 'Omri Abend'}, {'authorId': '145009917', 'name': 'A. Rappoport'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Semantic representation is receiving growing attention in NLP in the past few years, and many proposals for semantic schemes (e.g., AMR, UCCA, GMB, UDS) have been put forth. Yet, little has been done to assess the achievements and the shortcomings of these new contenders, compare them with syntactic schemes, and clarify the general goals of research on semantic representation. We address these gaps by critically surveying the state of the art in the field.', 'year': 2017, 'in_acl': True, 'citationCount': 82, 'section': None, 'subsection': None}, {'id': 11461990, 'paperId': '02258d796c3b52c2fd88bca8300465ba79f6199a', 'title': 'Translation Divergences in Chinese–English Machine Translation: An Empirical Investigation', 'authors': [{'authorId': '121137142', 'name': 'D. Deng'}, {'authorId': '1702849', 'name': 'Nianwen Xue'}], 'venue': 'International Conference on Computational Logic', 'abstract': 'In this article, we conduct an empirical investigation of translation divergences between Chinese and English relying on a parallel treebank. To do this, we first devise a hierarchical alignment scheme where Chinese and English parse trees are aligned in a way that eliminates conflicts and redundancies between word alignments and syntactic parses to prevent the generation of spurious translation divergences. Using this Hierarchically Aligned Chinese–English Parallel Treebank (HACEPT), we are able to semi-automatically identify and categorize the translation divergences between the two languages and quantify each type of translation divergence. Our results show that the translation divergences are much broader than described in previous studies that are largely based on anecdotal evidence and linguistic knowledge. The distribution of the translation divergences also shows that some high-profile translation divergences that motivate previous research are actually very rare in our data, whereas other translation divergences that have previously received little attention actually exist in large quantities. We also show that HACEPT allows the extraction of syntax-based translation rules, most of which are expressive enough to capture the translation divergences, and point out that the syntactic annotation in existing treebanks is not optimal for extracting such translation rules. We also discuss the implications of our study for attempts to bridge translation divergences by devising shared semantic representations across languages. Our quantitative results lend further support to the observation that although it is possible to bridge some translation divergences with semantic representations, other translation divergences are open-ended, thus building a semantic representation that captures all possible translation divergences may be impractical.', 'year': 2017, 'in_acl': True, 'citationCount': 27, 'section': None, 'subsection': None}, {'id': 245635, 'paperId': '6c6fa1184cfc25b0e1b9e2c835b40be4d716bfe2', 'title': 'Linguistic Typology meets Universal Dependencies', 'authors': [{'authorId': '144456145', 'name': 'W. Bruce Croft'}, {'authorId': '8613581', 'name': 'D. Nordquist'}, {'authorId': '27963760', 'name': 'Katherine Looney'}, {'authorId': '145666891', 'name': 'Michael Regan'}], 'venue': 'International Workshop on Treebanks and Linguistic Theories', 'abstract': 'Current work on universal dependency schemes in NLP does not make reference to the extensive typological research on language universals, but could benefit since many principles are shared between the two enterprises. We propose a revision of the syntactic dependencies in the Universal Dependencies scheme (Nivre et al. [16, 17]) based on four principles derived from contemporary typological theory: dependencies should be based primarily on universal construction types over language-specific strategies; syntactic dependency labels should match lexical feature names for the same function; dependencies should be based on the information packaging function of constructions, not lexical semantic types; and dependencies should keep distinct the “ranks” of the functional dependency tree.', 'year': 2017, 'in_acl': False, 'citationCount': 41, 'section': None, 'subsection': None}, {'id': 216056404, 'paperId': 'b805693c17961af2cc7f859c1a54320b26036f46', 'title': 'Universal Dependencies v2: An Evergrowing Multilingual Treebank Collection', 'authors': [{'authorId': '1720988', 'name': 'Joakim Nivre'}, {'authorId': '2241127', 'name': 'M. Marneffe'}, {'authorId': '1694491', 'name': 'Filip Ginter'}, {'authorId': '1602260260', 'name': 'Jan Hajivc'}, {'authorId': '144783904', 'name': 'Christopher D. Manning'}, {'authorId': '1708916', 'name': 'S. Pyysalo'}, {'authorId': '145157639', 'name': 'Sebastian Schuster'}, {'authorId': '3262036', 'name': 'Francis M. Tyers'}, {'authorId': '1771298', 'name': 'Daniel Zeman'}], 'venue': 'International Conference on Language Resources and Evaluation', 'abstract': 'Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. The annotation consists in a linguistically motivated word segmentation; a morphological layer comprising lemmas, universal part-of-speech tags, and standardized morphological features; and a syntactic layer focusing on syntactic relations between predicates, arguments and modifiers. In this paper, we describe version 2 of the universal guidelines (UD v2), discuss the major changes from UD v1 to UD v2, and give an overview of the currently available treebanks for 90 languages.', 'year': 2020, 'in_acl': True, 'citationCount': 476, 'section': None, 'subsection': None}]
|
2020.coling-tutorials.2
|
Embeddings in Natural Language Processing
|
Embeddings have been one of the most important topics of interest in NLP for the past decade. Representing knowledge through a low-dimensional vector which is easily integrable in modern machine learning models has played a central role in the development of the field. Embedding techniques initially focused on words but the attention soon started to shift to other forms. This tutorial will provide a high-level synthesis of the main embedding techniques in NLP, in the broad sense. We will start by conventional word embeddings (e.g., Word2Vec and GloVe) and then move to other types of embeddings, such as sense-specific and graph alternatives. We will finalize with an overview of the trending contextualized representations (e.g., ELMo and BERT) and explain their potential and impact in NLP.
| 2,020
|
https://aclanthology.org/2020.coling-tutorials.2/
|
COLING
|
[{'id': 15829786, 'paperId': 'e569d99f3a0fcfa038631dda2b44c73a6e8e97b8', 'title': 'Dimensions of meaning', 'authors': [{'authorId': '144418438', 'name': 'Hinrich Schütze'}], 'venue': "Supercomputing '92", 'abstract': 'The representation of documents and queries as vectors in a high-dimensional space is well-established in information retrieval. The author proposes that the semantics of words and contexts in a text be represented as vectors. The dimensions of the space are words and the initial vectors are determined by the words occurring close to the entity to be represented, which implies that the space has several thousand dimensions (words). This makes the vector representations (which are dense) too cumbersome to use directly. Therefore, dimensionality reduction by means of a singular value decomposition is employed. The author analyzes the structure of the vector representations and applies them to word sense disambiguation and thesaurus induction. >', 'year': 1992, 'in_acl': False, 'citationCount': 467, 'section': None, 'subsection': None}, {'id': 1500900, 'paperId': '3a0e788268fafb23ab20da0e98bb578b06830f7d', 'title': 'From Frequency to Meaning: Vector Space Models of Semantics', 'authors': [{'authorId': '1689647', 'name': 'Peter D. Turney'}, {'authorId': '1990190', 'name': 'Patrick Pantel'}], 'venue': 'Journal of Artificial Intelligence Research', 'abstract': 'Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.', 'year': 2010, 'in_acl': False, 'citationCount': 2942, 'section': None, 'subsection': None}, {'id': 5959482, 'paperId': 'f6b51c8753a871dc94ff32152c00c01e94f90f09', 'title': 'Efficient Estimation of Word Representations in Vector Space', 'authors': [{'authorId': '2047446108', 'name': 'Tomas Mikolov'}, {'authorId': '2118440152', 'name': 'Kai Chen'}, {'authorId': '32131713', 'name': 'G. Corrado'}, {'authorId': '49959210', 'name': 'J. Dean'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'We propose two novel model architectures for computing continuous vector\nrepresentations of words from very large data sets. The quality of these\nrepresentations is measured in a word similarity task, and the results are\ncompared to the previously best performing techniques based on different types\nof neural networks. We observe large improvements in accuracy at much lower\ncomputational cost, i.e. it takes less than a day to learn high quality word\nvectors from a 1.6 billion words data set. Furthermore, we show that these\nvectors provide state-of-the-art performance on our test set for measuring\nsyntactic and semantic word similarities.', 'year': 2013, 'in_acl': False, 'citationCount': 29962, 'section': None, 'subsection': None}, {'id': 1957433, 'paperId': 'f37e1b62a767a307c046404ca96bc140b3e68cb5', 'title': 'GloVe: Global Vectors for Word Representation', 'authors': [{'authorId': '143845796', 'name': 'Jeffrey Pennington'}, {'authorId': '2166511', 'name': 'R. Socher'}, {'authorId': '144783904', 'name': 'Christopher D. Manning'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.', 'year': 2014, 'in_acl': True, 'citationCount': 30717, 'section': None, 'subsection': None}, {'id': 7890036, 'paperId': '59761abc736397539bdd01ad7f9d91c8607c0457', 'title': 'context2vec: Learning Generic Context Embedding with Bidirectional LSTM', 'authors': [{'authorId': '2298649', 'name': 'Oren Melamud'}, {'authorId': '34508613', 'name': 'J. Goldberger'}, {'authorId': '7465342', 'name': 'Ido Dagan'}], 'venue': 'Conference on Computational Natural Language Learning', 'abstract': 'Context representations are central to various NLP tasks, such as word sense disam-biguation, named entity recognition, co-reference resolution, and many more. In this work we present a neural model for efficiently learning a generic context embedding function from large corpora, us-ing bidirectional LSTM. With a very simple application of our context representations, we manage to surpass or nearly reach state-of-the-art results on sentence completion, lexical substitution and word sense disambiguation tasks, while substantially outperforming the popular context representation of averaged word embeddings. We release our code and pre-trained models, suggesting they could be useful in a wide variety of NLP tasks.', 'year': 2016, 'in_acl': True, 'citationCount': 484, 'section': None, 'subsection': None}, {'id': 3626819, 'paperId': '3febb2bed8865945e7fddc99efd791887bb7e14f', 'title': 'Deep Contextualized Word Representations', 'authors': [{'authorId': '39139825', 'name': 'Matthew E. Peters'}, {'authorId': '50043859', 'name': 'Mark Neumann'}, {'authorId': '2136562', 'name': 'Mohit Iyyer'}, {'authorId': '40642935', 'name': 'Matt Gardner'}, {'authorId': '143997772', 'name': 'Christopher Clark'}, {'authorId': '2544107', 'name': 'Kenton Lee'}, {'authorId': '1982950', 'name': 'Luke Zettlemoyer'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.', 'year': 2018, 'in_acl': True, 'citationCount': 11138, 'section': None, 'subsection': None}, {'id': 13696533, 'paperId': 'bf9db8ca2dce7386cbed1ae0fd6465148cdb2b98', 'title': 'From Word to Sense Embeddings: A Survey on Vector Representations of Meaning', 'authors': [{'authorId': '1387447871', 'name': 'José Camacho-Collados'}, {'authorId': '1717641', 'name': 'Mohammad Taher Pilehvar'}], 'venue': 'Journal of Artificial Intelligence Research', 'abstract': '\n \n \nOver the past years, distributed semantic representations have proved to be effective and flexible keepers of prior knowledge to be integrated into downstream applications. This survey focuses on the representation of meaning. We start from the theoretical background behind word vector space models and highlight one of their major limitations: the meaning conflation deficiency, which arises from representing a word with all its possible meanings as a single vector. Then, we explain how this deficiency can be addressed through a transition from the word level to the more fine-grained level of word senses (in its broader acceptation) as a method for modelling unambiguous lexical meaning. We present a comprehensive overview of the wide range of techniques in the two main branches of sense representation, i.e., unsupervised and knowledge-based. Finally, this survey covers the main evaluation procedures and applications for this type of representation, and provides an analysis of four of its important aspects: interpretability, sense granularity, adaptability to different domains and compositionality. \n \n \n', 'year': 2018, 'in_acl': False, 'citationCount': 317, 'section': None, 'subsection': None}, {'id': 52098907, 'paperId': 'ac11062f1f368d97f4c826c317bf50dcc13fdb59', 'title': 'Dissecting Contextual Word Embeddings: Architecture and Representation', 'authors': [{'authorId': '39139825', 'name': 'Matthew E. Peters'}, {'authorId': '50043859', 'name': 'Mark Neumann'}, {'authorId': '1982950', 'name': 'Luke Zettlemoyer'}, {'authorId': '144105277', 'name': 'Wen-tau Yih'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Contextual word representations derived from pre-trained bidirectional language models (biLMs) have recently been shown to provide significant improvements to the state of the art for a wide range of NLP tasks. However, many questions remain as to how and why these models are so effective. In this paper, we present a detailed empirical study of how the choice of neural architecture (e.g. LSTM, CNN, or self attention) influences both end task accuracy and qualitative properties of the representations that are learned. We show there is a tradeoff between speed and accuracy, but all architectures learn high quality contextual representations that outperform word embeddings for four challenging NLP tasks. Additionally, all architectures learn representations that vary with network depth, from exclusively morphological based at the word embedding layer through local syntax based in the lower contextual layers to longer range semantics such coreference at the upper layers. Together, these results suggest that unsupervised biLMs, independent of architecture, are learning much more about the structure of language than previously appreciated.', 'year': 2018, 'in_acl': True, 'citationCount': 406, 'section': None, 'subsection': None}, {'id': 52967399, 'paperId': 'df2b0e26d0599ce3e70df8a9da02e51594e0e992', 'title': 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding', 'authors': [{'authorId': '39172707', 'name': 'Jacob Devlin'}, {'authorId': '1744179', 'name': 'Ming-Wei Chang'}, {'authorId': '2544107', 'name': 'Kenton Lee'}, {'authorId': '3259253', 'name': 'Kristina Toutanova'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).', 'year': 2019, 'in_acl': True, 'citationCount': 84138, 'section': None, 'subsection': None}]
|
2020.coling-tutorials.5
|
A guide to the dataset explosion in QA, NLI, and commonsense reasoning
|
Question answering, natural language inference and commonsense reasoning are increasingly popular as general NLP system benchmarks, driving both modeling and dataset work. Only for question answering we already have over 100 datasets, with over 40 published after 2018. However, most new datasets get “solved” soon after publication, and this is largely due not to the verbal reasoning capabilities of our models, but to annotation artifacts and shallow cues in the data that they can exploit. This tutorial aims to (1) provide an up-to-date guide to the recent datasets, (2) survey the old and new methodological issues with dataset construction, and (3) outline the existing proposals for overcoming them. The target audience is the NLP practitioners who are lost in dozens of the recent datasets, and would like to know what these datasets are actually measuring. Our overview of the problems with the current datasets and the latest tips and tricks for overcoming them will also be useful to the researchers working on future benchmarks.
| 2,020
|
https://aclanthology.org/2020.coling-tutorials.5/
|
COLING
|
[{'id': 182952898, 'paperId': 'a1f000b88e81f02b2a0d7a4097171428364af8c7', 'title': 'A Survey on Neural Machine Reading Comprehension', 'authors': [{'authorId': '2064466537', 'name': 'Boyu Qiu'}, {'authorId': '2118183867', 'name': 'Xu Chen'}, {'authorId': '2073589', 'name': 'Jungang Xu'}, {'authorId': '46676156', 'name': 'Yingfei Sun'}], 'venue': 'arXiv.org', 'abstract': 'Enabling a machine to read and comprehend the natural language documents so that it can answer some questions remains an elusive challenge. In recent years, the popularity of deep learning and the establishment of large-scale datasets have both promoted the prosperity of Machine Reading Comprehension. This paper aims to present how to utilize the Neural Network to build a Reader and introduce some classic models, analyze what improvements they make. Further, we also point out the defects of existing models and future research directions', 'year': 2019, 'in_acl': False, 'citationCount': 29, 'section': None, 'subsection': None}, {'id': 213613608, 'paperId': '4043a936960de8e149dc208178fe1bcb157c7fa4', 'title': 'Recent Advances in Natural Language Inference: A Survey of Benchmarks, Resources, and Approaches', 'authors': [{'authorId': '89093987', 'name': 'Shane Storks'}, {'authorId': '3193409', 'name': 'Qiaozi Gao'}, {'authorId': '1707259', 'name': 'J. Chai'}], 'venue': '', 'abstract': "In the NLP community, recent years have seen a surge of research activities that address machines' ability to perform deep language understanding which goes beyond what is explicitly stated in text, rather relying on reasoning and knowledge of the world. Many benchmark tasks and datasets have been created to support the development and evaluation of such natural language inference ability. As these benchmarks become instrumental and a driving force for the NLP research community, this paper aims to provide an overview of recent benchmarks, relevant knowledge resources, and state-of-the-art learning and inference approaches in order to support a better understanding of this growing field.", 'year': 2019, 'in_acl': False, 'citationCount': 116, 'section': None, 'subsection': None}, {'id': 219124082, 'paperId': '4311eefc03a3f391bae39ebf364cbd5f8b90a001', 'title': 'Beyond Leaderboards: A survey of methods for revealing weaknesses in Natural Language Inference data and models', 'authors': [{'authorId': '71034258', 'name': 'Viktor Schlegel'}, {'authorId': '2144507', 'name': 'G. Nenadic'}, {'authorId': '1400900759', 'name': 'R. Batista-Navarro'}], 'venue': 'arXiv.org', 'abstract': "Recent years have seen a growing number of publications that analyse Natural Language Inference (NLI) datasets for superficial cues, whether they undermine the complexity of the tasks underlying those datasets and how they impact those models that are optimised and evaluated on this data. This structured survey provides an overview of the evolving research area by categorising reported weaknesses in models and datasets and the methods proposed to reveal and alleviate those weaknesses for the English language. We summarise and discuss the findings and conclude with a set of recommendations for possible future research directions. We hope it will be a useful resource for researchers who propose new datasets, to have a set of tools to assess the suitability and quality of their data to evaluate various phenomena of interest, as well as those who develop novel architectures, to further understand the implications of their improvements with respect to their model's acquired capabilities.", 'year': 2020, 'in_acl': False, 'citationCount': 18, 'section': None, 'subsection': None}, {'id': 91184338, 'paperId': 'b1832b749528755dfcbe462717f4f5afc07243b8', 'title': 'Commonsense Reasoning for Natural Language Understanding: A Survey of Benchmarks, Resources, and Approaches', 'authors': [{'authorId': '89093987', 'name': 'Shane Storks'}, {'authorId': '3193409', 'name': 'Qiaozi Gao'}, {'authorId': '1707259', 'name': 'J. Chai'}], 'venue': 'arXiv.org', 'abstract': "Commonsense knowledge and commonsense reasoning are some of the main bottlenecks in machine intelligence. In the NLP community, many benchmark datasets and tasks have been created to address commonsense reasoning for language understanding. These tasks are designed to assess machines' ability to acquire and learn commonsense knowledge in order to reason and understand natural language text. As these tasks become instrumental and a driving force for commonsense research, this paper aims to provide an overview of existing tasks and benchmarks, knowledge resources, and learning and inference approaches toward commonsense reasoning for natural language understanding. Through this, our goal is to support a better understanding of the state of the art, its limitations, and future challenges.", 'year': 2019, 'in_acl': False, 'citationCount': 72, 'section': None, 'subsection': None}]
|
2020.coling-tutorials.6
|
A Crash Course in Automatic Grammatical Error Correction
|
Grammatical Error Correction (GEC) is the task of automatically detecting and correcting all types of errors in written text. Although most research has focused on correcting errors in the context of English as a Second Language (ESL), GEC can also be applied to other languages and native text. The main application of a GEC system is thus to assist humans with their writing. Academic and commercial interest in GEC has grown significantly since the Helping Our Own (HOO) and Conference on Natural Language Learning (CoNLL) shared tasks in 2011-14, and a record-breaking 24 teams took part in the recent Building Educational Applications (BEA) shared task. Given this interest, and the recent shift towards neural approaches, we believe the time is right to offer a tutorial on GEC for researchers who may be new to the field or who are interested in the current state of the art and future challenges. With this in mind, the main goal of this tutorial is not only to bring attendees up to speed with GEC in general, but also examine the development of neural-based GEC systems.
| 2,020
|
https://aclanthology.org/2020.coling-tutorials.6/
|
COLING
|
[{'id': 219306476, 'paperId': '20499f3c6fe9f84a12c9def941e2e12846a00c77', 'title': 'The CoNLL-2014 Shared Task on Grammatical Error Correction', 'authors': [{'authorId': '34789794', 'name': 'H. Ng'}, {'authorId': '2069266', 'name': 'S. Wu'}, {'authorId': '145693410', 'name': 'Ted Briscoe'}, {'authorId': '3271719', 'name': 'Christian Hadiwinoto'}, {'authorId': '32406168', 'name': 'Raymond Hendy Susanto'}, {'authorId': '145178009', 'name': 'Christopher Bryant'}], 'venue': 'CoNLL Shared Task', 'abstract': 'The CoNLL-2014 shared task was devoted to grammatical error correction of all error types. In this paper, we give the task definition, present the data sets, and describe the evaluation metric and scorer used in the shared task. We also give an overview of the various approaches adopted by the participating teams, and present the evaluation results. Compared to the CoNLL2013 shared task, we have introduced the following changes in CoNLL-2014: (1) A participating system is expected to detect and correct grammatical errors of all types, instead of just the five error types in CoNLL-2013; (2) The evaluation metric was changed from F1 to F0.5, to emphasize precision over recall; and (3) We have two human annotators who independently annotated the test essays, compared to just one human annotator in CoNLL-2013.', 'year': 2014, 'in_acl': True, 'citationCount': 498, 'section': None, 'subsection': None}, {'id': 18051414, 'paperId': 'be08a1189ce88c4a6d6a98784377eb02e578d95a', 'title': 'Building a State-of-the-Art Grammatical Error Correction System', 'authors': [{'authorId': '2271568', 'name': 'Alla Rozovskaya'}, {'authorId': '144590225', 'name': 'D. Roth'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'This paper identifies and examines the key principles underlying building a state-of-the-art grammatical error correction system. We do this by analyzing the Illinois system that placed first among seventeen teams in the recent CoNLL-2013 shared task on grammatical error correction. The system focuses on five different types of errors common among non-native English writers. We describe four design principles that are relevant for correcting all of these errors, analyze the system along these dimensions, and show how each of these dimensions contributes to the performance.', 'year': 2014, 'in_acl': True, 'citationCount': 33, 'section': None, 'subsection': None}, {'id': 6820419, 'paperId': 'b19f365aab0bf8c6cf712c07313b919556bfacc0', 'title': 'Phrase-based Machine Translation is State-of-the-Art for Automatic Grammatical Error Correction', 'authors': [{'authorId': '1733933', 'name': 'Marcin Junczys-Dowmunt'}, {'authorId': '3272639', 'name': 'Roman Grundkiewicz'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'In this work, we study parameter tuning towards the M^2 metric, the standard metric for automatic grammar error correction (GEC) tasks. After implementing M^2 as a scorer in the Moses tuning framework, we investigate interactions of dense and sparse features, different optimizers, and tuning strategies for the CoNLL-2014 shared task. We notice erratic behavior when optimizing sparse feature weights with M^2 and offer partial solutions. We find that a bare-bones phrase-based SMT setup with task-specific parameter-tuning outperforms all previously published results for the CoNLL-2014 test set by a large margin (46.37% M^2 over previously 41.75%, by an SMT system with neural features) while being trained on the same, publicly available data. Our newly introduced dense and sparse features widen that gap, and we improve the state-of-the-art to 49.49% M^2.', 'year': 2016, 'in_acl': True, 'citationCount': 102, 'section': None, 'subsection': None}, {'id': 19236015, 'paperId': '6ed38b0cb510fa91434eb63ab464bee66c9323c6', 'title': 'A Multilayer Convolutional Encoder-Decoder Neural Network for Grammatical Error Correction', 'authors': [{'authorId': '3422793', 'name': 'Shamil Chollampatt'}, {'authorId': '34789794', 'name': 'H. Ng'}], 'venue': 'AAAI Conference on Artificial Intelligence', 'abstract': '\n \n We improve automatic correction of grammatical, orthographic, and collocation errors in text using a multilayer convolutional encoder-decoder neural network. The network is initialized with embeddings that make use of character N-gram information to better suit this task. When evaluated on common benchmark test data sets (CoNLL-2014 and JFLEG), our model substantially outperforms all prior neural approaches on this task as well as strong statistical machine translation-based systems with neural and task-specific features trained on the same data. Our analysis shows the superiority of convolutional neural networks over recurrent neural networks such as long short-term memory (LSTM) networks in capturing the local context via attention, and thereby improving the coverage in correcting grammatical errors. By ensembling multiple models, and incorporating an N-gram language model and edit features via rescoring, our novel method becomes the first neural approach to outperform the current state-of-the-art statistical machine translation-based approach, both in terms of grammaticality and fluency.\n \n', 'year': 2018, 'in_acl': False, 'citationCount': 210, 'section': None, 'subsection': None}, {'id': 195504787, 'paperId': '7cc6f009feb5ad5ad0e1ff00c551fb318fc95016', 'title': 'Neural Grammatical Error Correction Systems with Unsupervised Pre-training on Synthetic Data', 'authors': [{'authorId': '3272639', 'name': 'Roman Grundkiewicz'}, {'authorId': '1733933', 'name': 'Marcin Junczys-Dowmunt'}, {'authorId': '1702066', 'name': 'Kenneth Heafield'}], 'venue': 'BEA@ACL', 'abstract': 'Considerable effort has been made to address the data sparsity problem in neural grammatical error correction. In this work, we propose a simple and surprisingly effective unsupervised synthetic error generation method based on confusion sets extracted from a spellchecker to increase the amount of training data. Synthetic data is used to pre-train a Transformer sequence-to-sequence model, which not only improves over a strong baseline trained on authentic error-annotated data, but also enables the development of a practical GEC system in a scenario where little genuine error-annotated data is available. The developed systems placed first in the BEA19 shared task, achieving 69.47 and 64.24 F_{0.5} in the restricted and low-resource tracks respectively, both on the W&I+LOCNESS test set. On the popular CoNLL 2014 test set, we report state-of-the-art results of 64.16 M² for the submitted system, and 61.30 M² for the constrained system trained on the NUCLE and Lang-8 data.', 'year': 2019, 'in_acl': True, 'citationCount': 169, 'section': None, 'subsection': None}, {'id': 202539354, 'paperId': '1a5ef51ae0c0ee1216e14aa390734cf7581c3b27', 'title': 'An Empirical Study of Incorporating Pseudo Data into Grammatical Error Correction', 'authors': [{'authorId': '32140786', 'name': 'Shun Kiyono'}, {'authorId': '144042991', 'name': 'Jun Suzuki'}, {'authorId': '35643168', 'name': 'Masato Mita'}, {'authorId': '3079116', 'name': 'Tomoya Mizumoto'}, {'authorId': '3040648', 'name': 'Kentaro Inui'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'The incorporation of pseudo data in the training of grammatical error correction models has been one of the main factors in improving the performance of such models. However, consensus is lacking on experimental configurations, namely, choosing how the pseudo data should be generated or used. In this study, these choices are investigated through extensive experiments, and state-of-the-art performance is achieved on the CoNLL-2014 test set (F0.5=65.0) and the official test set of the BEA-2019 shared task (F0.5=70.2) without making any modifications to the model architecture.', 'year': 2019, 'in_acl': True, 'citationCount': 143, 'section': None, 'subsection': None}]
|
2020.coling-tutorials.7
|
Endangered Languages meet Modern NLP
|
This tutorial will focus on NLP for endangered languages documentation and revitalization. First, we will acquaint the attendees with the process and the challenges of language documentation, showing how the needs of the language communities and the documentary linguists map to specific NLP tasks. We will then present the state-of-the-art in NLP applied in this particularly challenging setting (extremely low-resource datasets, noisy transcriptions, limited annotations, non-standard orthographies). In doing so, we will also analyze the challenges of working in this domain and expand on both the capabilities and the limitations of current NLP approaches. Our ultimate goal is to motivate more NLP practitioners to work towards this very important direction, and also provide them with the tools and understanding of the limitations/challenges, both of which are needed in order to have an impact.
| 2,020
|
https://aclanthology.org/2020.coling-tutorials.7/
|
COLING
|
[{'id': 48356442, 'paperId': 'b4aa5354e88564b2e4eeee3019ed04e5388042f3', 'title': 'Challenges of language technologies for the indigenous languages of the Americas', 'authors': [{'authorId': '153151470', 'name': 'Manuel Mager'}, {'authorId': '1409305289', 'name': 'Ximena Gutierrez-Vasques'}, {'authorId': '32889164', 'name': 'Gerardo E Sierra'}, {'authorId': '1403616824', 'name': 'Ivan Vladimir Meza Ruiz'}], 'venue': 'International Conference on Computational Linguistics', 'abstract': 'Indigenous languages of the American continent are highly diverse. However, they have received little attention from the technological perspective. In this paper, we review the research, the digital resources and the available NLP systems that focus on these languages. We present the main challenges and research questions that arise when distant languages and low-resource scenarios are faced. We would like to encourage NLP research in linguistically rich and diverse areas like the Americas.', 'year': 2018, 'in_acl': True, 'citationCount': 83, 'section': None, 'subsection': None}, {'id': 38196008, 'paperId': '842232e4c1f11239fb67cb0f1bc068149df64183', 'title': 'Last Words: Natural Language Processing and Linguistic Fieldwork', 'authors': [{'authorId': '21308992', 'name': 'Steven Bird'}], 'venue': 'International Conference on Computational Logic', 'abstract': '', 'year': 2009, 'in_acl': True, 'citationCount': 35, 'section': None, 'subsection': None}, {'id': 201666291, 'paperId': '21d4ffff54d3f71d028f9bda89c22a496e0fbb82', 'title': 'Deploying Technology to Save Endangered Languages', 'authors': [{'authorId': '38878141', 'name': 'Hilaria Cruz'}, {'authorId': '2060201926', 'name': 'Joseph Waring'}], 'venue': 'arXiv.org', 'abstract': 'Computer scientists working on natural language processing, native speakers of endangered languages, and field linguists to discuss ways to harness Automatic Speech Recognition, especially neural networks, to automate annotation, speech tagging, and text parsing on endangered languages.', 'year': 2019, 'in_acl': False, 'citationCount': 6, 'section': None, 'subsection': None}, {'id': 146056073, 'paperId': '55b9e8f6194e745190cce0dd6db7eca4ff53a260', 'title': 'Future Directions in Technological Support for Language Documentation', 'authors': [{'authorId': '8775666', 'name': 'D. Esch'}, {'authorId': '92304991', 'name': 'Ben Foley'}, {'authorId': '79396737', 'name': 'Nay San'}], 'venue': 'Proceedings of the Workshop on Computational Methods for Endangered Languages', 'abstract': 'To reduce the annotation burden placed on linguistic fieldworkers, freeing up time for deeper linguistic analysis and descriptive work, the language documentation community has been working with machine learning researchers to investigate what assistive role technology can play, with promising early results. This paper describes a number of potential follow-up technical projects that we believe would be worthwhile and straightforward to do. We provide examples of the annotation tasks for computer scientists; descriptions of the technological challenges involved and the estimated level of complexity; and pointers to relevant literature. We hope providing a clear overview of what the needs are and what annotation challenges exist will help facilitate the dialogue and collaboration between computer scientists and fieldwork linguists.', 'year': 2019, 'in_acl': True, 'citationCount': 17, 'section': None, 'subsection': None}, {'id': 52011439, 'paperId': '243880fde63abfc287bd1356c2e1dbf68a1a0aac', 'title': 'Indigenous language technologies in Canada: Assessment, challenges, and successes', 'authors': [{'authorId': '3070462', 'name': 'Patrick Littell'}, {'authorId': '145131025', 'name': 'Anna Kazantseva'}, {'authorId': '143937779', 'name': 'R. Kuhn'}, {'authorId': '51183422', 'name': 'Aidan Pine'}, {'authorId': '2587266', 'name': 'Antti Arppe'}, {'authorId': '2052347303', 'name': 'Christopher Cox'}, {'authorId': '46230685', 'name': 'M. Junker'}], 'venue': 'International Conference on Computational Linguistics', 'abstract': 'In this article, we discuss which text, speech, and image technologies have been developed, and would be feasible to develop, for the approximately 60 Indigenous languages spoken in Canada. In particular, we concentrate on technologies that may be feasible to develop for most or all of these languages, not just those that may be feasible for the few most-resourced of these. We assess past achievements and consider future horizons for Indigenous language transliteration, text prediction, spell-checking, approximate search, machine translation, speech recognition, speaker diarization, speech synthesis, optical character recognition, and computer-aided language learning.', 'year': 2018, 'in_acl': True, 'citationCount': 40, 'section': None, 'subsection': None}, {'id': 11955283, 'paperId': '82e14d316f8e21b883a6a580f29c9953e6ce1886', 'title': 'Automatic speech recognition for under-resourced languages: A survey', 'authors': [{'authorId': '143823463', 'name': 'L. Besacier'}, {'authorId': '1790459', 'name': 'E. Barnard'}, {'authorId': '145191867', 'name': 'Alexey Karpov'}, {'authorId': '145618636', 'name': 'Tanja Schultz'}], 'venue': 'Speech Communication', 'abstract': 'Speech processing for under-resourced languages is an active field of research, which has experienced significant progress during the past decade. We propose, in this paper, a survey that focuses on automatic speech recognition (ASR) for these languages. The definition of under-resourced languages and the challenges associated to them are first defined. The main part of the paper is a literature review of the recent (last 8 years) contributions made in ASR for under-resourced languages. Examples of past projects and future trends when dealing with under-resourced languages are also presented. We believe that this paper will be a good starting point for anyone interested to initiate research in (or operational development of) ASR for one or several under-resourced languages. It should be clear, however, that many of the issues and approaches presented here, apply to speech technology in general (text-to-speech synthesis for instance).', 'year': 2014, 'in_acl': False, 'citationCount': 484, 'section': None, 'subsection': None}, {'id': 53065219, 'paperId': 'c859416a8e5682bee3c35df29bc02e02a22de072', 'title': 'Integrating automatic transcription into the language documentation workflow: Experiments with Na data and the Persephone toolkit', 'authors': [{'authorId': '39720748', 'name': 'Alexis Michaud'}, {'authorId': '38535429', 'name': 'Oliver Adams'}, {'authorId': '143620680', 'name': 'Trevor Cohn'}, {'authorId': '1700325', 'name': 'Graham Neubig'}, {'authorId': '46673007', 'name': 'Severine Guillaume'}], 'venue': '', 'abstract': 'Automatic speech recognition tools have potential for facilitating language documentation, but in practice these tools remain little-used by linguists for a variety of reasons, such as that the technology is still new (and evolving rapidly), user-friendly interfaces are still under development, and case studies demonstrating the practical usefulness of automatic recognition in a low-resource setting remain few. This article reports on a success story in integrating automatic transcription into the language documentation workflow, specifically for Yongning Na, a language of Southwest China. Using PERSEPHONE, an open-source toolkit, a single-speaker speech transcription tool was trained over five hours of manually transcribed speech. The experiments found that this method can achieve a remarkably low error rate (on the order of 17%), and that automatic transcriptions were useful as a canvas for the linguist. The present report is intended for linguists with little or no knowledge of speech processing. It aims to provide insights into (i) the way the tool operates and (ii) the process of collaborating with natural language processing specialists. Practical recommendations are offered on how to anticipate the requirements of this type of technology from the early stages of data collection in the field.', 'year': 2018, 'in_acl': False, 'citationCount': 44, 'section': None, 'subsection': None}, {'id': 53333371, 'paperId': '9476918d232768de4f2cbc13240c6626f49b4d04', 'title': 'Building Speech Recognition Systems for Language Documentation: The CoEDL Endangered Language Pipeline and Inference System (Elpis)', 'authors': [{'authorId': '92304991', 'name': 'Ben Foley'}, {'authorId': '32286262', 'name': 'Joshua T. Arnold'}, {'authorId': '1405433180', 'name': 'Rolando Coto-Solano'}, {'authorId': '5963331', 'name': 'Gautier Durantin'}, {'authorId': '144016062', 'name': 'T. M. Ellison'}, {'authorId': '8775666', 'name': 'D. Esch'}, {'authorId': '145936386', 'name': 'Scott Heath'}, {'authorId': '4438531', 'name': 'František Kratochvil'}, {'authorId': '1410034849', 'name': 'Zara Maxwell-Smith'}, {'authorId': '2082556414', 'name': 'David Nash'}, {'authorId': '26949393', 'name': 'Ola Olsson'}, {'authorId': '2072102908', 'name': 'Mark Richards'}, {'authorId': '79396737', 'name': 'Nay San'}, {'authorId': '2343936', 'name': 'H. Stoakes'}, {'authorId': '24739790', 'name': 'N. Thieberger'}, {'authorId': '1716264', 'name': 'Janet Wiles'}], 'venue': '', 'abstract': 'Machine learning has revolutionised speech technologies for major world languages, but these technologies have generally not been available for the roughly 4,000 languages with populations of fewer than 10,000 speakers. This paper describes the development of Elpis, a pipeline which language documentation workers with minimal computational experience can use to build their own speech recognition models, resulting in models being built for 16 languages from the Asia-Pacific region. Elpis puts machine learning speech technologies within reach of people working with languages with scarce data, in a scalable way. This is impactful since it enables language communities to cross the digital divide, and speeds up language documentation. Complete automation of the process is not feasible for languages with small quantities of data and potentially large vocabularies. Hence our goal is not full automation, but rather to make a practical and effective workflow that integrates machine learning technologies.', 'year': 2018, 'in_acl': False, 'citationCount': 47, 'section': None, 'subsection': None}, {'id': 201058388, 'paperId': 'f249e3a7d4f7f964e9a4ca6e633ac31410a91dd8', 'title': 'Pushing the Limits of Low-Resource Morphological Inflection', 'authors': [{'authorId': '49513989', 'name': 'Antonios Anastasopoulos'}, {'authorId': '1700325', 'name': 'Graham Neubig'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Recent years have seen exceptional strides in the task of automatic morphological inflection generation. However, for a long tail of languages the necessary resources are hard to come by, and state-of-the-art neural methods that work well under higher resource settings perform poorly in the face of a paucity of data. In response, we propose a battery of improvements that greatly improve performance under such low-resource conditions. First, we present a novel two-step attention architecture for the inflection decoder. In addition, we investigate the effects of cross-lingual transfer from single and multiple languages, as well as monolingual data hallucination. The macro-averaged accuracy of our models outperforms the state-of-the-art by 15 percentage points. Also, we identify the crucial factors for success with cross-lingual transfer for morphological inflection: typological similarity and a common representation across languages.', 'year': 2019, 'in_acl': True, 'citationCount': 75, 'section': None, 'subsection': None}]
|
2020.emnlp-tutorials.1
|
Machine Reasoning: Technology, Dilemma and Future
|
Machine reasoning research aims to build interpretable AI systems that can solve problems or draw conclusions from what they are told (i.e. facts and observations) and already know (i.e. models, common sense and knowledge) under certain constraints. In this tutorial, we will (1) describe the motivation of this tutorial and give our definition on machine reasoning; (2) introduce typical machine reasoning frameworks, including symbolic reasoning, probabilistic reasoning, neural-symbolic reasoning and neural-evidence reasoning, and show their successful applications in real-world scenarios; (3) talk about the dilemma between black-box neural networks with state-of-the-art performance and machine reasoning approaches with better interpretability; (4) summarize the content of this tutorial and discuss possible future directions.
| 2,020
|
https://aclanthology.org/2020.emnlp-tutorials.1
|
EMNLP
|
[{'id': 63671278, 'paperId': '20394c89e24d9060ecc69b8a58bdab7833c5b5bd', 'title': 'Markov Logic: A Unifying Framework for Statistical Relational Learning', 'authors': [{'authorId': '1746034', 'name': 'L. Getoor'}, {'authorId': '1685978', 'name': 'B. Taskar'}], 'venue': '', 'abstract': 'This chapter contains sections titled: The Need for a Unifying Framework, Markov Networks, First-order Logic, Markov Logic, SRL Approaches, SRL Tasks, Inference, Learning, Experiments, Conclusion, Acknowledgments, References', 'year': 2007, 'in_acl': False, 'citationCount': 203, 'section': None, 'subsection': None}, {'id': 1067591, 'paperId': 'ab4850b6151ca9a9337dbba94115bde342876d50', 'title': 'From machine learning to machine reasoning', 'authors': [{'authorId': '52184096', 'name': 'L. Bottou'}], 'venue': 'Machine-mediated learning', 'abstract': 'A plausible definition of “reasoning” could be “algebraically manipulating previously acquired knowledge in order to answer a new question”. This definition covers first-order logical inference or probabilistic inference. It also includes much simpler manipulations commonly used to build large learning systems. For instance, we can build an optical character recognition system by first training a character segmenter, an isolated character recognizer, and a language model, using appropriate labelled training sets. Adequately concatenating these modules and fine tuning the resulting system can be viewed as an algebraic operation in a space of models. The resulting model answers a new question, that is, converting the image of a text page into a computer readable text. This observation suggests a conceptual continuity between algebraically rich inference systems, such as logical or probabilistic inference, and simple manipulations, such as the mere concatenation of trainable learning systems. Therefore, instead of trying to bridge the gap between machine learning systems and sophisticated “all-purpose” inference mechanisms, we can instead algebraically enrich the set of manipulations applicable to training systems, and build reasoning capabilities from the ground up.', 'year': 2011, 'in_acl': False, 'citationCount': 267, 'section': None, 'subsection': None}, {'id': 1755720, 'paperId': '9dbb506ded56ff4b7ab65aa92b363c0112987f10', 'title': 'Neural-Symbolic Learning and Reasoning: A Survey and Interpretation', 'authors': [{'authorId': '143862012', 'name': 'Tarek R. Besold'}, {'authorId': '2925941', 'name': 'A. Garcez'}, {'authorId': '144349956', 'name': 'Sebastian Bader'}, {'authorId': '145042515', 'name': 'H. Bowman'}, {'authorId': '1740213', 'name': 'Pedro M. Domingos'}, {'authorId': '1699771', 'name': 'P. Hitzler'}, {'authorId': '1743582', 'name': 'Kai-Uwe Kühnberger'}, {'authorId': '2335532', 'name': 'L. Lamb'}, {'authorId': '3021654', 'name': 'Daniel Lowd'}, {'authorId': '144829981', 'name': 'P. Lima'}, {'authorId': '2910868', 'name': 'L. Penning'}, {'authorId': '2263909', 'name': 'Gadi Pinkas'}, {'authorId': '1759772', 'name': 'Hoifung Poon'}, {'authorId': '1753715', 'name': 'Gerson Zaverucha'}], 'venue': 'Neuro-Symbolic Artificial Intelligence', 'abstract': 'The study and understanding of human behaviour is relevant to computer science, artificial intelligence, neural computation, cognitive science, philosophy, psychology, and several other areas. Presupposing cognition as basis of behaviour, among the most prominent tools in the modelling of behaviour are computational-logic systems, connectionist models of cognition, and models of uncertainty. Recent studies in cognitive science, artificial intelligence, and psychology have produced a number of cognitive models of reasoning, learning, and language that are underpinned by computation. In addition, efforts in computer science research have led to the development of cognitive computational systems integrating machine learning and automated reasoning. Such systems have shown promise in a range of applications, including computational biology, fault diagnosis, training and assessment in simulators, and software verification. This joint survey reviews the personal ideas and views of several researchers on neural-symbolic learning and reasoning. The article is organised in three parts: Firstly, we frame the scope and goals of neural-symbolic computation and have a look at the theoretical foundations. We then proceed to describe the realisations of neural-symbolic computation, systems, and applications. Finally we present the challenges facing the area and avenues for further research.', 'year': 2017, 'in_acl': False, 'citationCount': 297, 'section': None, 'subsection': None}, {'id': 155092677, 'paperId': '833c4ac0599f4b8c5f1ee6ea948ec675fbe56b15', 'title': 'Neural-Symbolic Computing: An Effective Methodology for Principled Integration of Machine Learning and Reasoning', 'authors': [{'authorId': '2925941', 'name': 'A. Garcez'}, {'authorId': '145467467', 'name': 'M. Gori'}, {'authorId': '2335532', 'name': 'L. Lamb'}, {'authorId': '144077615', 'name': 'L. Serafini'}, {'authorId': '145570895', 'name': 'Michael Spranger'}, {'authorId': '1930235', 'name': 'S. Tran'}], 'venue': 'FLAP', 'abstract': 'Current advances in Artificial Intelligence and machine learning in general, and deep learning in particular have reached unprecedented impact not only across research communities, but also over popular media channels. However, concerns about interpretability and accountability of AI have been raised by influential thinkers. In spite of the recent impact of AI, several works have identified the need for principled knowledge representation and reasoning mechanisms integrated with deep learning-based systems to provide sound and explainable models for such systems. Neural-symbolic computing aims at integrating, as foreseen by Valiant, two most fundamental cognitive abilities: the ability to learn from the environment, and the ability to reason from what has been learned. Neural-symbolic computing has been an active topic of research for many years, reconciling the advantages of robust learning in neural networks and reasoning and interpretability of symbolic representation. In this paper, we survey recent accomplishments of neural-symbolic computing as a principled methodology for integrated machine learning and reasoning. We illustrate the effectiveness of the approach by outlining the main characteristics of the methodology: principled integration of neural learning with symbolic knowledge representation and reasoning allowing for the construction of explainable AI systems. The insights provided by neural-symbolic computing shed new light on the increasingly prominent need for interpretable and accountable AI systems.', 'year': 2019, 'in_acl': False, 'citationCount': 262, 'section': None, 'subsection': None}, {'id': 213613608, 'paperId': '4043a936960de8e149dc208178fe1bcb157c7fa4', 'title': 'Recent Advances in Natural Language Inference: A Survey of Benchmarks, Resources, and Approaches', 'authors': [{'authorId': '89093987', 'name': 'Shane Storks'}, {'authorId': '3193409', 'name': 'Qiaozi Gao'}, {'authorId': '1707259', 'name': 'J. Chai'}], 'venue': '', 'abstract': "In the NLP community, recent years have seen a surge of research activities that address machines' ability to perform deep language understanding which goes beyond what is explicitly stated in text, rather relying on reasoning and knowledge of the world. Many benchmark tasks and datasets have been created to support the development and evaluation of such natural language inference ability. As these benchmarks become instrumental and a driving force for the NLP research community, this paper aims to provide an overview of recent benchmarks, relevant knowledge resources, and state-of-the-art learning and inference approaches in order to support a better understanding of this growing field.", 'year': 2019, 'in_acl': False, 'citationCount': 116, 'section': None, 'subsection': None}, {'id': 51893222, 'paperId': '3df952d4a724655f7520ff95d4b2cef90fff0cae', 'title': 'Techniques for interpretable machine learning', 'authors': [{'authorId': '3432460', 'name': 'Mengnan Du'}, {'authorId': '47717322', 'name': 'Ninghao Liu'}, {'authorId': '48539382', 'name': 'Xia Hu'}], 'venue': 'Communications of the ACM', 'abstract': 'Uncovering the mysterious ways machine learning models make decisions.', 'year': 2018, 'in_acl': False, 'citationCount': 979, 'section': None, 'subsection': None}, {'id': 220058074, 'paperId': '6efe7653b9a7928bc47b61dfeb84c0831a1d7a39', 'title': 'Open-Domain Question Answering', 'authors': [{'authorId': '50536468', 'name': 'Danqi Chen'}, {'authorId': '144105277', 'name': 'Wen-tau Yih'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'This tutorial provides a comprehensive and coherent overview of cutting-edge research in open-domain question answering (QA), the task of answering questions using a large collection of documents of diversified topics. We will start by first giving a brief historical background, discussing the basic setup and core technical challenges of the research problem, and then describe modern datasets with the common evaluation metrics and benchmarks. The focus will then shift to cutting-edge models proposed for open-domain QA, including two-stage retriever-reader approaches, dense retriever and end-to-end training, and retriever-free methods. Finally, we will cover some hybrid approaches using both text and large knowledge bases and conclude the tutorial with important open questions. We hope that the tutorial will not only help the audience to acquire up-to-date knowledge but also provide new perspectives to stimulate the advances of open-domain QA research in the next phase.', 'year': 2020, 'in_acl': True, 'citationCount': 17, 'section': None, 'subsection': None}, {'id': 220060314, 'paperId': 'c24a3ba1f161df77bdf9374d787851d6ce7e366b', 'title': 'Introductory Tutorial: Commonsense Reasoning for Natural Language Processing', 'authors': [{'authorId': '2729164', 'name': 'Maarten Sap'}, {'authorId': '3103343', 'name': 'Vered Shwartz'}, {'authorId': '8536286', 'name': 'Antoine Bosselut'}, {'authorId': '2257385140', 'name': 'Yejin Choi'}, {'authorId': '2249759427', 'name': 'Dan Roth'}], 'venue': '', 'abstract': 'Commonsense knowledge, such as knowing that “bumping into people annoys them” or “rain makes the road slippery”, helps humans navigate everyday situations seamlessly. Yet, endowing machines with such human-like commonsense reasoning capabilities has remained an elusive goal of artificial intelligence research for decades. In recent years, commonsense knowledge and reasoning have received renewed attention from the natural language processing (NLP) community, yielding exploratory studies in automated commonsense understanding. We organize this tutorial to provide researchers with the critical foundations and recent advances in commonsense representation and reasoning, in the hopes of casting a brighter light on this promising area of future research. In our tutorial, we will (1) outline the various types of commonsense (e.g., physical, social), and (2) discuss techniques to gather and represent commonsense knowledge, while highlighting the challenges specific to this type of knowledge (e.g., reporting bias). We will then (3) discuss the types of commonsense knowledge captured by modern NLP systems (e.g., large pretrained language models), and (4) present ways to measure systems’ commonsense reasoning abilities. We will finish with (5) a discussion of various ways in which commonsense reasoning can be used to improve performance on NLP tasks, exemplified by an (6) interactive session on integrating commonsense into a downstream task.', 'year': 2020, 'in_acl': False, 'citationCount': 92, 'section': None, 'subsection': None}]
|
2020.emnlp-tutorials.2
|
Fact-Checking, Fake News, Propaganda, and Media Bias: Truth Seeking in the Post-Truth Era
|
The rise of social media has democratized content creation and has made it easy for everybody to share and spread information online. On the positive side, this has given rise to citizen journalism, thus enabling much faster dissemination of information compared to what was possible with newspapers, radio, and TV. On the negative side, stripping traditional media from their gate-keeping role has left the public unprotected against the spread of misinformation, which could now travel at breaking-news speed over the same democratic channel. This has given rise to the proliferation of false information specifically created to affect individual people’s beliefs, and ultimately to influence major events such as political elections. There are strong indications that false information was weaponized at an unprecedented scale during Brexit and the 2016 U.S. presidential elections. “Fake news,” which can be defined as fabricated information that mimics news media content in form but not in organizational process or intent, became the Word of the Year for 2017, according to Collins Dictionary. Thus, limiting the spread of “fake news” and its impact has become a major focus for computer scientists, journalists, social media companies, and regulatory authorities. The tutorial will offer an overview of the broad and emerging research area of disinformation, with focus on the latest developments and research directions.
| 2,020
|
https://aclanthology.org/2020.emnlp-tutorials.2
|
EMNLP
|
[{'id': 207718082, 'paperId': 'cb40a5e6d4fc0290452345791bb91040aed76961', 'title': 'Fake News Detection on Social Media: A Data Mining Perspective', 'authors': [{'authorId': '145800151', 'name': 'Kai Shu'}, {'authorId': '2880010', 'name': 'A. Sliva'}, {'authorId': '2893721', 'name': 'Suhang Wang'}, {'authorId': '1736632', 'name': 'Jiliang Tang'}, {'authorId': '145896397', 'name': 'Huan Liu'}], 'venue': 'SKDD', 'abstract': 'Social media for news consumption is a double-edged sword. On the one hand, its low cost, easy access, and rapid dissemination of information lead people to seek out and consume news from social media. On the other hand, it enables the wide spread of \\fake news", i.e., low quality news with intentionally false information. The extensive spread of fake news has the potential for extremely negative impacts on individuals and society. Therefore, fake news detection on social media has recently become an emerging research that is attracting tremendous attention. Fake news detection on social media presents unique characteristics and challenges that make existing detection algorithms from traditional news media ine ective or not applicable. First, fake news is intentionally written to mislead readers to believe false information, which makes it difficult and nontrivial to detect based on news content; therefore, we need to include auxiliary information, such as user social engagements on social media, to help make a determination. Second, exploiting this auxiliary information is challenging in and of itself as users\' social engagements with fake news produce data that is big, incomplete, unstructured, and noisy. Because the issue of fake news detection on social media is both challenging and relevant, we conducted this survey to further facilitate research on the problem. In this survey, we present a comprehensive review of detecting fake news on social media, including fake news characterizations on psychology and social theories, existing algorithms from a data mining perspective, evaluation metrics and representative datasets. We also discuss related research areas, open problems, and future research directions for fake news detection on social media.', 'year': 2017, 'in_acl': False, 'citationCount': 2533, 'section': None, 'subsection': None}, {'id': 207743293, 'paperId': '290513795d653bd13a27c0688b12a459eb66c711', 'title': 'Detection and Resolution of Rumours in Social Media', 'authors': [{'authorId': '2805349', 'name': 'A. Zubiaga'}, {'authorId': '145970060', 'name': 'Ahmet Aker'}, {'authorId': '1723649', 'name': 'Kalina Bontcheva'}, {'authorId': '1991548', 'name': 'Maria Liakata'}, {'authorId': '144723416', 'name': 'R. Procter'}], 'venue': 'ACM Computing Surveys', 'abstract': 'Despite the increasing use of social media platforms for information and news gathering, its unmoderated nature often leads to the emergence and spread of rumours, i.e., items of information that are unverified at the time of posting. At the same time, the openness of social media platforms provides opportunities to study how users share and discuss rumours, and to explore how to automatically assess their veracity, using natural language processing and data mining techniques. In this article, we introduce and discuss two types of rumours that circulate on social media: long-standing rumours that circulate for long periods of time, and newly emerging rumours spawned during fast-paced events such as breaking news, where reports are released piecemeal and often with an unverified status in their early stages. We provide an overview of research into social media rumours with the ultimate goal of developing a rumour classification system that consists of four components: rumour detection, rumour tracking, rumour stance classification, and rumour veracity classification. We delve into the approaches presented in the scientific literature for the development of each of these four components. We summarise the efforts and achievements so far toward the development of rumour classification systems and conclude with suggestions for avenues for future research in social media mining for the detection and resolution of rumours.', 'year': 2017, 'in_acl': False, 'citationCount': 757, 'section': None, 'subsection': None}, {'id': 49320819, 'paperId': '22616702da06431668022c649a017af9b333c530', 'title': 'Automated Fact Checking: Task Formulations, Methods and Future Directions', 'authors': [{'authorId': '144603330', 'name': 'James Thorne'}, {'authorId': '2064056928', 'name': 'Andreas Vlachos'}], 'venue': 'International Conference on Computational Linguistics', 'abstract': 'The recently increased focus on misinformation has stimulated research in fact checking, the task of assessing the truthfulness of a claim. Research in automating this task has been conducted in a variety of disciplines including natural language processing, machine learning, knowledge representation, databases, and journalism. While there has been substantial progress, relevant papers and articles have been published in research communities that are often unaware of each other and use inconsistent terminology, thus impeding understanding and further progress. In this paper we survey automated fact checking research stemming from natural language processing and related disciplines, unifying the task formulations and methodologies across papers and authors. Furthermore, we highlight the use of evidence as an important distinguishing factor among them cutting across task formulations and methods. We conclude with proposing avenues for future NLP research on automated fact checking.', 'year': 2018, 'in_acl': True, 'citationCount': 258, 'section': None, 'subsection': None}, {'id': 9060471, 'paperId': '6447bfcda1dfb2fa8484683711af92b7cbaeca2b', 'title': 'A Survey on Truth Discovery', 'authors': [{'authorId': '2110479359', 'name': 'Yaliang Li'}, {'authorId': '144407304', 'name': 'Jing Gao'}, {'authorId': '2598592', 'name': 'Chuishi Meng'}, {'authorId': '37696683', 'name': 'Qi Li'}, {'authorId': '143843304', 'name': 'Lu Su'}, {'authorId': '2112525352', 'name': 'Bo Zhao'}, {'authorId': '3228071', 'name': 'Wei Fan'}, {'authorId': '145325584', 'name': 'Jiawei Han'}], 'venue': 'SKDD', 'abstract': 'Thanks to information explosion, data for the objects of interest can be collected from increasingly more sources. However, for the same object, there usually exist conflicts among the collected multi-source information. To tackle this challenge, truth discovery, which integrates multi-source noisy information by estimating the reliability of each source, has emerged as a hot topic. Several truth discovery methods have been proposed for various scenarios, and they have been successfully applied in diverse application domains. In this survey, we focus on providing a comprehensive overview of truth discovery methods, and summarizing them from different aspects. We also discuss some future directions of truth discovery research. We hope that this survey will promote a better understanding of the current progress on truth discovery, and offer some guidelines on how to apply these approaches in application domains.', 'year': 2015, 'in_acl': False, 'citationCount': 403, 'section': None, 'subsection': None}, {'id': 4410672, 'paperId': '73bfad11b96a69cb882028ead115751adb55252d', 'title': 'The science of fake news', 'authors': [{'authorId': '3185333', 'name': 'D. Lazer'}, {'authorId': '40508064', 'name': 'M. Baum'}, {'authorId': '2237559', 'name': 'Y. Benkler'}, {'authorId': '4859855', 'name': 'Adam J. Berinsky'}, {'authorId': '40828798', 'name': 'Kelly M. Greenhill'}, {'authorId': '143653472', 'name': 'F. Menczer'}, {'authorId': '1976593', 'name': 'Miriam J. Metzger'}, {'authorId': '2064358', 'name': 'B. Nyhan'}, {'authorId': '2998138', 'name': 'Gordon Pennycook'}, {'authorId': '145792941', 'name': 'David M. Rothschild'}, {'authorId': '50156656', 'name': 'M. Schudson'}, {'authorId': '2404363', 'name': 'S. Sloman'}, {'authorId': '3171769', 'name': 'C. Sunstein'}, {'authorId': '26668235', 'name': 'Emily A. Thorson'}, {'authorId': '1783914', 'name': 'D. Watts'}, {'authorId': '46714697', 'name': 'Jonathan Zittrain'}], 'venue': 'Science', 'abstract': 'Addressing fake news requires a multidisciplinary effort The rise of fake news highlights the erosion of long-standing institutional bulwarks against misinformation in the internet age. Concern over the problem is global. However, much remains unknown regarding the vulnerabilities of individuals, institutions, and society to manipulations by malicious actors. A new system of safeguards is needed. Below, we discuss extant social and computer science research regarding belief in fake news and the mechanisms by which it spreads. Fake news has a long history, but we focus on unanswered scientific questions raised by the proliferation of its most recent, politically oriented incarnation. Beyond selected references in the text, suggested further reading can be found in the supplementary materials.', 'year': 2018, 'in_acl': False, 'citationCount': 2893, 'section': None, 'subsection': None}, {'id': 4549072, 'paperId': 'ef07defaf08123d5e1a8bd41ad6e2db5e5b225e3', 'title': 'The spread of true and false news online', 'authors': [{'authorId': '1918441', 'name': 'Soroush Vosoughi'}, {'authorId': '145364504', 'name': 'D. Roy'}, {'authorId': '2413779', 'name': 'Sinan Aral'}], 'venue': 'Science', 'abstract': 'Lies spread faster than the truth There is worldwide concern over false news and the possibility that it can influence political, economic, and social well-being. To understand how false news spreads, Vosoughi et al. used a data set of rumor cascades on Twitter from 2006 to 2017. About 126,000 rumors were spread by ∼3 million people. False news reached more people than the truth; the top 1% of false news cascades diffused to between 1000 and 100,000 people, whereas the truth rarely diffused to more than 1000 people. Falsehood also diffused faster than the truth. The degree of novelty and the emotional reactions of recipients may be responsible for the differences observed. Science, this issue p. 1146 A large-scale analysis of tweets reveals that false rumors spread further and faster than the truth. We investigated the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017. The data comprise ~126,000 stories tweeted by ~3 million people more than 4.5 million times. We classified news as true or false using information from six independent fact-checking organizations that exhibited 95 to 98% agreement on the classifications. Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.', 'year': 2018, 'in_acl': False, 'citationCount': 5292, 'section': None, 'subsection': None}, {'id': 67748733, 'paperId': '8114cf0628c29e8309d6f1e2ef61030f64a7b28c', 'title': 'Stance Detection', 'authors': [{'authorId': '1910084', 'name': 'D. Küçük'}, {'authorId': '2083563', 'name': 'F. Can'}], 'venue': 'ACM Computing Surveys', 'abstract': 'Automatic elicitation of semantic information from natural language texts is an important research problem with many practical application areas. Especially after the recent proliferation of online content through channels such as social media sites, news portals, and forums; solutions to problems such as sentiment analysis, sarcasm/controversy/veracity/rumour/fake news detection, and argument mining gained increasing impact and significance, revealed with large volumes of related scientific publications. In this article, we tackle an important problem from the same family and present a survey of stance detection in social media posts and (online) regular texts. Although stance detection is defined in different ways in different application settings, the most common definition is “automatic classification of the stance of the producer of a piece of text, towards a target, into one of these three classes: {Favor, Against, Neither}.” Our survey includes definitions of related problems and concepts, classifications of the proposed approaches so far, descriptions of the relevant datasets and tools, and related outstanding issues. Stance detection is a recent natural language processing topic with diverse application areas, and our survey article on this newly emerging topic will act as a significant resource for interested researchers and practitioners.', 'year': 2020, 'in_acl': False, 'citationCount': 136, 'section': None, 'subsection': None}, {'id': 220483038, 'paperId': 'd3833e446e536f7627ae01c45cf265d6e736e78c', 'title': 'A Survey on Computational Propaganda Detection', 'authors': [{'authorId': '34086979', 'name': 'Giovanni Da San Martino'}, {'authorId': '40598011', 'name': 'S. Cresci'}, {'authorId': '1397442049', 'name': 'Alberto Barrón-Cedeño'}, {'authorId': '1885974', 'name': 'Seunghak Yu'}, {'authorId': '1728076', 'name': 'R. D. Pietro'}, {'authorId': '1683562', 'name': 'Preslav Nakov'}], 'venue': 'International Joint Conference on Artificial Intelligence', 'abstract': "Propaganda campaigns aim at influencing people's mindset with the purpose of advancing a specific agenda. They exploit the anonymity of the Internet, the micro-profiling ability of social networks, and the ease of automatically creating and managing coordinated networks of accounts, to reach millions of social network users with persuasive messages, specifically targeted to topics each individual user is sensitive to, and ultimately influencing the outcome on a targeted issue. \n\nIn this survey, we review the state of the art on computational propaganda detection from the perspective of Natural Language Processing and Network Analysis, arguing about the need for combined efforts between these communities. We further discuss current challenges and future research directions.", 'year': 2020, 'in_acl': False, 'citationCount': 172, 'section': None, 'subsection': None}, {'id': 1914124, 'paperId': '6f90ad2553c2a73948f614d19c763ec3d5e58542', 'title': 'The rise of social bots', 'authors': [{'authorId': '48898287', 'name': 'Emilio Ferrara'}, {'authorId': '2307347', 'name': 'Onur Varol'}, {'authorId': '2057124', 'name': 'Clayton A. Davis'}, {'authorId': '143653472', 'name': 'F. Menczer'}, {'authorId': '1769960', 'name': 'A. Flammini'}], 'venue': 'Communications of the ACM', 'abstract': "Today's social bots are sophisticated and sometimes menacing. Indeed, their presence can endanger online ecosystems as well as our society.", 'year': 2014, 'in_acl': False, 'citationCount': 1752, 'section': None, 'subsection': None}, {'id': 252277994, 'paperId': '8ce2cb70d5a98ebe3bc6cb10d830dd2282a3e766', 'title': 'The Web of False Information', 'authors': [{'authorId': '3447293', 'name': 'Savvas Zannettou'}, {'authorId': '2698864', 'name': 'Michael Sirivianos'}, {'authorId': '144728530', 'name': 'Jeremy Blackburn'}, {'authorId': '1946641', 'name': 'N. Kourtellis'}], 'venue': 'ACM Journal of Data and Information Quality', 'abstract': 'A new era of Information Warfare has arrived. Various actors, including state-sponsored ones, are weaponizing information on Online Social Networks to run false-information campaigns with targeted manipulation of public opinion on specific topics. These false-information campaigns can have dire consequences to the public: mutating their opinions and actions, especially with respect to critical world events like major elections. Evidently, the problem of false information on the Web is a crucial one and needs increased public awareness as well as immediate attention from law enforcement agencies, public institutions, and in particular, the research community. In this article, we make a step in this direction by providing a typology of the Web’s false-information ecosystem, composed of various types of false-information, actors, and their motives. We report a comprehensive overview of existing research on the false-information ecosystem by identifying several lines of work: (1) how the public perceives false information; (2) understanding the propagation of false information; (3) detecting and containing false information on the Web; and (4) false information on the political stage. In this work, we pay particular attention to political false information as: (1) it can have dire consequences to the community (e.g., when election results are mutated) and (2) previous work shows that this type of false information propagates faster and further when compared to other types of false information. Finally, for each of these lines of work, we report several future research directions that can help us better understand and mitigate the emerging problem of false-information dissemination on the Web.', 'year': 2018, 'in_acl': False, 'citationCount': 146, 'section': None, 'subsection': None}, {'id': 44111303, 'paperId': '2ed166a3301209ccd9838e26ec4648a4d2f07bd9', 'title': 'Bias on the web', 'authors': [{'authorId': '1389957009', 'name': 'R. Baeza-Yates'}], 'venue': 'Communications of the ACM', 'abstract': 'Bias in Web data and use taints the algorithms behind Web-based applications, delivering equally biased results.', 'year': 2018, 'in_acl': False, 'citationCount': 214, 'section': None, 'subsection': None}]
|
2020.emnlp-tutorials.3
|
Interpreting Predictions of NLP Models
|
Although neural NLP models are highly expressive and empirically successful, they also systematically fail in counterintuitive ways and are opaque in their decision-making process. This tutorial will provide a background on interpretation techniques, i.e., methods for explaining the predictions of NLP models. We will first situate example-specific interpretations in the context of other ways to understand models (e.g., probing, dataset analyses). Next, we will present a thorough study of example-specific interpretations, including saliency maps, input perturbations (e.g., LIME, input reduction), adversarial attacks, and influence functions. Alongside these descriptions, we will walk through source code that creates and visualizes interpretations for a diverse set of NLP tasks. Finally, we will discuss open problems in the field, e.g., evaluating, extending, and improving interpretation methods.
| 2,020
|
https://aclanthology.org/2020.emnlp-tutorials.3
|
EMNLP
|
[{'id': 11319376, 'paperId': '5c39e37022661f81f79e481240ed9b175dec6513', 'title': 'Towards A Rigorous Science of Interpretable Machine Learning', 'authors': [{'authorId': '1388372395', 'name': 'F. Doshi-Velez'}, {'authorId': '3351164', 'name': 'Been Kim'}], 'venue': '', 'abstract': 'As machine learning systems become ubiquitous, there has been a surge of interest in interpretable machine learning: systems that provide explanation for their outputs. These explanations are often used to qualitatively assess other criteria such as safety or non-discrimination. However, despite the interest in interpretability, there is very little consensus on what interpretable machine learning is and how it should be measured. In this position paper, we first define interpretability and describe when interpretability is needed (and when it is not). Next, we suggest a taxonomy for rigorous evaluation and expose open questions towards a more rigorous science of interpretable machine learning.', 'year': 2017, 'in_acl': False, 'citationCount': 3309, 'section': None, 'subsection': None}, {'id': 5981909, 'paperId': 'd516daff247f7157fccde6649ace91d969cd1973', 'title': 'The mythos of model interpretability', 'authors': [{'authorId': '32219137', 'name': 'Zachary Chase Lipton'}], 'venue': 'Queue', 'abstract': 'In machine learning, the concept of interpretability is both important and slippery.', 'year': 2016, 'in_acl': False, 'citationCount': 3363, 'section': None, 'subsection': None}, {'id': 67855860, 'paperId': '1e83c20def5c84efa6d4a0d80aa3159f55cb9c3f', 'title': 'Attention is not Explanation', 'authors': [{'authorId': '49837811', 'name': 'Sarthak Jain'}, {'authorId': '1912476', 'name': 'Byron C. Wallace'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Attention mechanisms have seen wide adoption in neural NLP models. In addition to improving predictive performance, these are often touted as affording transparency: models equipped with attention provide a distribution over attended-to input units, and this is often presented (at least implicitly) as communicating the relative importance of inputs. However, it is unclear what relationship exists between attention weights and model outputs. In this work we perform extensive experiments across a variety of NLP tasks that aim to assess the degree to which attention weights provide meaningful “explanations” for predictions. We find that they largely do not. For example, learned attention weights are frequently uncorrelated with gradient-based measures of feature importance, and one can identify very different attention distributions that nonetheless yield equivalent predictions. Our findings show that standard attention modules do not provide meaningful explanations and should not be treated as though they do.', 'year': 2019, 'in_acl': True, 'citationCount': 1207, 'section': None, 'subsection': None}, {'id': 7228830, 'paperId': 'ffb949d3493c3b2f3c9acf9c75cb03938933ddf0', 'title': 'Adversarial Examples for Evaluating Reading Comprehension Systems', 'authors': [{'authorId': '3422908', 'name': 'Robin Jia'}, {'authorId': '145419642', 'name': 'Percy Liang'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Standard accuracy metrics indicate that reading comprehension systems are making rapid progress, but the extent to which these systems truly understand language remains unclear. To reward systems with real language understanding abilities, we propose an adversarial evaluation scheme for the Stanford Question Answering Dataset (SQuAD). Our method tests whether systems can answer questions about paragraphs that contain adversarially inserted sentences, which are automatically generated to distract computer systems without changing the correct answer or misleading humans. In this adversarial setting, the accuracy of sixteen published models drops from an average of 75% F1 score to 36%; when the adversary is allowed to add ungrammatical sequences of words, average accuracy on four models decreases further to 7%. We hope our insights will motivate the development of new models that understand language more precisely.', 'year': 2017, 'in_acl': True, 'citationCount': 1536, 'section': None, 'subsection': None}, {'id': 13029170, 'paperId': 'c0883f5930a232a9c1ad601c978caede29155979', 'title': '“Why Should I Trust You?”: Explaining the Predictions of Any Classifier', 'authors': [{'authorId': '78846919', 'name': 'Marco Tulio Ribeiro'}, {'authorId': '34650964', 'name': 'Sameer Singh'}, {'authorId': '1730156', 'name': 'Carlos Guestrin'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.', 'year': 2016, 'in_acl': True, 'citationCount': 14884, 'section': None, 'subsection': None}, {'id': 1450294, 'paperId': 'dc6ac3437f0a6e64e4404b1b9d188394f8a3bf71', 'title': 'Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps', 'authors': [{'authorId': '34838386', 'name': 'K. Simonyan'}, {'authorId': '1687524', 'name': 'A. Vedaldi'}, {'authorId': '1688869', 'name': 'Andrew Zisserman'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [Erhan et al., 2009], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [Zeiler et al., 2013].', 'year': 2013, 'in_acl': False, 'citationCount': 6811, 'section': None, 'subsection': None}, {'id': 202712654, 'paperId': 'ddd27dba038d0ed14c48cd027812df58a902ece2', 'title': 'AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models', 'authors': [{'authorId': '145217343', 'name': 'Eric Wallace'}, {'authorId': '1388109456', 'name': 'Jens Tuyls'}, {'authorId': '49606614', 'name': 'Junlin Wang'}, {'authorId': '17097887', 'name': 'Sanjay Subramanian'}, {'authorId': '40642935', 'name': 'Matt Gardner'}, {'authorId': '34650964', 'name': 'Sameer Singh'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Neural NLP models are increasingly accurate but are imperfect and opaque—they break in counterintuitive ways and leave end users puzzled at their behavior. Model interpretation methods ameliorate this opacity by providing explanations for specific model predictions. Unfortunately, existing interpretation codebases make it difficult to apply these methods to new models and tasks, which hinders adoption for practitioners and burdens interpretability researchers. We introduce AllenNLP Interpret, a flexible framework for interpreting NLP models. The toolkit provides interpretation primitives (e.g., input gradients) for any AllenNLP model and task, a suite of built-in interpretation methods, and a library of front-end visualization components. We demonstrate the toolkit’s flexibility and utility by implementing live demos for five interpretation methods (e.g., saliency maps and adversarial attacks) on a variety of models and tasks (e.g., masked language modeling using BERT and reading comprehension using BiDAF). These demos, alongside our code and tutorials, are available at https://allennlp.org/interpret.', 'year': 2019, 'in_acl': True, 'citationCount': 133, 'section': None, 'subsection': None}]
|
2020.emnlp-tutorials.5
|
Representation, Learning and Reasoning on Spatial Language for Downstream NLP Tasks
|
Understating spatial semantics expressed in natural language can become highly complex in real-world applications. This includes applications of language grounding, navigation, visual question answering, and more generic human-machine interaction and dialogue systems. In many of such downstream tasks, explicit representation of spatial concepts and relationships can improve the capabilities of machine learning models in reasoning and deep language understanding. In this tutorial, we overview the cutting-edge research results and existing challenges related to spatial language understanding including semantic annotations, existing corpora, symbolic and sub-symbolic representations, qualitative spatial reasoning, spatial common sense, deep and structured learning models. We discuss the recent results on the above-mentioned applications –that need spatial language learning and reasoning – and highlight the research gaps and future directions.
| 2,020
|
https://aclanthology.org/2020.emnlp-tutorials.5
|
EMNLP
|
[{'id': 5705211, 'paperId': '946dabbc13f06070f7618cd4ca6733a95b4b03c3', 'title': 'A linguistic ontology of space for natural language processing', 'authors': [{'authorId': '2242089300', 'name': 'John A. Bateman'}, {'authorId': '3176893', 'name': 'J. Hois'}, {'authorId': '2323115674', 'name': 'Robert J. Ross'}, {'authorId': '2089708', 'name': 'T. Tenbrink'}], 'venue': 'Artificial Intelligence', 'abstract': "We present a detailed semantics for linguistic spatial expressions supportive of computational processing that draws substantially on the principles and tools of ontological engineering and formal ontology. We cover language concerned with space, actions in space and spatial relationships and develop an ontological organization that relates such expressions to general classes of fixed semantic import. The result is given as an extension of a linguistic ontology, the Generalized Upper Model, an organization which has been used for over a decade in natural language processing applications. We describe the general nature and features of this ontology and show how we have extended it for working particularly with space. Treaitng the semantics of natural language expressions concerning space in this way offers a substantial simplification of the general problem of relating natural spatial language to its contextualized interpretation. Example specifications based on natural language examples are presented, as well as an evaluation of the ontology's coverage, consistency, predictive power, and applicability.", 'year': 2010, 'in_acl': False, 'citationCount': 233, 'section': None, 'subsection': None}, {'id': 2655805, 'paperId': '0ee8427e27e193c23dbfdc0aed8c5123528e6579', 'title': 'Spatial Role Labeling: Task Definition and Annotation Scheme', 'authors': [{'authorId': '2190934', 'name': 'Parisa Kordjamshidi'}, {'authorId': '2541098', 'name': 'M. V. Otterlo'}, {'authorId': '145446752', 'name': 'Marie-Francine Moens'}], 'venue': 'International Conference on Language Resources and Evaluation', 'abstract': 'One of the essential functions of natural language is to talk about spatial relationships between objects. Linguistic constructs can express highly complex, relational structures of objects, spatial relations between them, and patterns of motion through spaces relative to some reference point. Learning how to map this information onto a formal representation from a text is a challenging problem. At present no well-defined framework for automatic spatial information extraction exists that can handle all of these issues. In this paper we introduce the task of spatial role labeling and propose an annotation scheme that is language-independent and facilitates the application of machine learning techniques. Our framework consists of a set of spatial roles based on the theory of holistic spatial semantics with the intent of covering all aspects of spatial concepts, including both static and dynamic spatial relations. We illustrate our annotation scheme with many examples throughout the paper, and in addition we highlight how to connect to spatial calculi such as region connection calculus and also how our approach fits into related work.', 'year': 2010, 'in_acl': True, 'citationCount': 78, 'section': None, 'subsection': None}, {'id': 205811696, 'paperId': '63471e3ed74385b14cd74b7abcfb52f61b00086f', 'title': 'The Qualitative Spatial Dynamics of Motion in Language', 'authors': [{'authorId': '1707726', 'name': 'J. Pustejovsky'}, {'authorId': '1723806', 'name': 'Jessica L. Moszkowicz'}], 'venue': 'Spatial Cogn. Comput.', 'abstract': 'Abstract In this paper, we discuss the strategies that languages employ to express motion, focusing on the distinction between path predicates, such as enter, arrive, and leave and manner-of-motion predicates, such as walk, bike, and roll. We present an overview of some qualitative spatiotemporal models of movement, and discuss their adequacy for capturing motion constructions in natural languages. Building on many aspects of these qualitative models, we introduce a framework within dynamic logic for the characterization of spatial change. This model, called Dynamic Interval Temporal Logic (DITL), is developed to analyze both classes of motion predicates, as well as complex compositional constructions involving spatial and manner Prepositional Phrases. Further, DITL serves as a semantics for a linguistically expressive markup language for annotating spatiotemporal information in text, called Spatiotemporal Markup Language (STML). We outline the syntax of this language, and discuss how DITL provides for a natural interpretation of the annotation specification for use in a variety of applications.', 'year': 2011, 'in_acl': False, 'citationCount': 68, 'section': None, 'subsection': None}, {'id': 263861428, 'paperId': '2c5d4e99bd86411305e42c52009af75758272471', 'title': 'Interpreting Motion - Grounded Representations for Spatial Language', 'authors': [{'authorId': '1729172', 'name': 'I. Mani'}, {'authorId': '2265111544', 'name': 'James Pustejovsky'}], 'venue': 'Explorations in language and space', 'abstract': '', 'year': 2012, 'in_acl': False, 'citationCount': 38, 'section': None, 'subsection': None}, {'id': 221508017, 'paperId': '258eb3cbc20b350cb4c183c09d3029d850da6c8e', 'title': 'Changing perspective: Local alignment of reference frames in dialogue', 'authors': [{'authorId': '2995275', 'name': 'Simon Dobnik'}, {'authorId': '1812874', 'name': 'C. Howes'}], 'venue': '', 'abstract': 'In this paper we examine how people negotiate, interpret and repair the frame of reference (FoR) in free dialogues discussing spatial scenes. We describe a pilot study in which participants are given different perspectives of the same scene and asked to locate several objects that are only shown on one of their pictures. This task requires participants to coordinate on FoR in order to identify the missing objects. Preliminary results indicate that conversational participants align locally on FoR but do not converge on a global frame of reference. Misunderstandings lead to clarification sequences in which participants shift the FoR. These findings have implications for situated dialogue systems.', 'year': 2015, 'in_acl': False, 'citationCount': 14, 'section': None, 'subsection': None}, {'id': 15921971, 'paperId': '5b12f6ff72b42659138b2ab4c25cc7052edf72d0', 'title': 'Global machine learning for spatial ontology population', 'authors': [{'authorId': '2190934', 'name': 'Parisa Kordjamshidi'}, {'authorId': '145446752', 'name': 'Marie-Francine Moens'}], 'venue': 'Journal of Web Semantics', 'abstract': 'Understanding spatial language is important in many applications such as geographical information systems, human computer interaction or text-to-scene conversion. Due to the challenges of designing spatial ontologies, the extraction of spatial information from natural language still has to be placed in a well-defined framework. In this work, we propose an ontology which bridges between cognitive–linguistic spatial concepts in natural language and multiple qualitative spatial representation and reasoning models. To make a mapping between natural language and the spatial ontology, we propose a novel global machine learning framework for ontology population. In this framework we consider relational features and background knowledge which originate from both ontological relationships between the concepts and the structure of the spatial language. The advantage of the proposed global learning model is the scalability of the inference, and the flexibility for automatically describing text with arbitrary semantic labels that form a structured ontological representation of its content. The machine learning framework is evaluated with SemEval-2012 and SemEval-2013 data from the spatial role labeling task.', 'year': 2015, 'in_acl': False, 'citationCount': 57, 'section': None, 'subsection': None}, {'id': 11947761, 'paperId': 'c3b2e90aba82756e0725e23867a87fec91d9bcd9', 'title': 'VoxML: A Visualization Modeling Language', 'authors': [{'authorId': '1707726', 'name': 'J. Pustejovsky'}, {'authorId': '34079649', 'name': 'Nikhil Krishnaswamy'}], 'venue': 'International Conference on Language Resources and Evaluation', 'abstract': 'We present the specification for a modeling language, VoxML, which encodes semantic knowledge of real-world objects represented as three-dimensional models, and of events and attributes related to and enacted over these objects.VoxML is intended to overcome the limitations of existing 3D visual markup languages by allowing for the encoding of a broad range of semantic knowledge that can be exploited by a variety of systems and platforms, leading to multimodal simulations of real-world scenarios using conceptual objects that represent their semantic values', 'year': 2016, 'in_acl': True, 'citationCount': 73, 'section': None, 'subsection': None}, {'id': 20973461, 'paperId': 'b8d358cbe798c0cd25af8fd78785d575ead3daa3', 'title': 'Do You See What I See? Effects of POV on Spatial Relation Specifications', 'authors': [{'authorId': '34079649', 'name': 'Nikhil Krishnaswamy'}, {'authorId': '1707726', 'name': 'J. Pustejovsky'}], 'venue': '', 'abstract': 'In this paper, we examine a set of object interactions generated with a 3D natural language simulation and visualization platform, VoxSim (Krishnaswamy and Pustejovsky 2016b). These simulations all realize the natural language relations “touching” and “near” over a test set of various objects within a 3-dimensional world that interprets descriptions of motion events and renders their visual instantiations from the perspective of an embodied virtual agent. These object interactions were evaluated by human judges using Amazon Mechanical Turk and we examine some of the qualitative interpretations provided by humans over these computer-generated interpretations of underspecified relations, conditioned on the frame of reference (agent’s point of view) and object position relative to that point of view (POV). Through analysis of the human evaluations, we find that average eval-uator satisfaction with many specifications for these relations appears to strongly depend on the relationship between the two objects and between the objects and the POV.', 'year': 2017, 'in_acl': False, 'citationCount': 1, 'section': None, 'subsection': None}, {'id': 56798299, 'paperId': 'b29e13444e3da7c7e2fa605742211435f2c615ae', 'title': 'ISO-Space: Annotating Static and Dynamic Spatial Information', 'authors': [{'authorId': '1707726', 'name': 'J. Pustejovsky'}], 'venue': '', 'abstract': 'An understanding of spatial information in natural language is necessary for many computational linguistics and artificial intelligence applications. In this chapter, we describe an annotation scheme for the markup of spatial relations, both static and dynamic, as expressed in text and other media. The desiderata for such a specification language are presented along with what representational mechanisms are required for such a specification to be successful. We review the annotation development process, and the adoption of the initial specification ISOspace, as an ISO standard, renamed ISOspace. We conclude with a discussion of the use of ISOspace in the context of the shared task SpaceEval 2015.', 'year': 2017, 'in_acl': False, 'citationCount': 27, 'section': None, 'subsection': None}, {'id': 12628986, 'paperId': '1295f871f2532274fb32b7815a605b3b3e6c7b6f', 'title': 'Spatial Role Labeling Annotation Scheme', 'authors': [{'authorId': '2190934', 'name': 'Parisa Kordjamshidi'}, {'authorId': '2541098', 'name': 'M. V. Otterlo'}, {'authorId': '145446752', 'name': 'Marie-Francine Moens'}], 'venue': '', 'abstract': '', 'year': 2017, 'in_acl': False, 'citationCount': 13, 'section': None, 'subsection': None}, {'id': 5911617, 'paperId': 'a5dbb7039fe593990186f4bc0dca0a6d14ff5b06', 'title': 'Source-Target Inference Models for Spatial Instruction Understanding', 'authors': [{'authorId': '47300698', 'name': 'Hao Tan'}, {'authorId': '143977268', 'name': 'Mohit Bansal'}], 'venue': 'AAAI Conference on Artificial Intelligence', 'abstract': '\n \n Models that can execute natural language instructions for situated robotic tasks such as assembly and navigation have several useful applications in homes, offices, and remote scenarios.We study the semantics of spatially-referred configuration and arrangement instructions, based on the challenging Bisk-2016 blank-labeled block dataset. This task involves finding a source block and moving it to the target position (mentioned via a reference block and offset), where the blocks have no names or colors and are just referred to via spatial location features.We present novel models for the subtasks of source block classification and target position regression, based on joint-loss language and spatial-world representation learning, as well as CNN-based and dual attention models to compute the alignment between the world blocks and the instruction phrases. For target position prediction, we compare two inference approaches: annealed sampling via policy gradient versus expectation inference via supervised regression. Our models achieve the new state-of-the-art on this task, with an improvement of 47% on source block accuracy and 22% on target position distance.\n \n', 'year': 2017, 'in_acl': False, 'citationCount': 14, 'section': None, 'subsection': None}, {'id': 19152379, 'paperId': '45f0115613b9c79572af9365daafc58bede9851e', 'title': 'Acquiring Common Sense Spatial Knowledge through Implicit Spatial Templates', 'authors': [{'authorId': '144481186', 'name': 'Guillem Collell'}, {'authorId': '1681236', 'name': 'L. Gool'}, {'authorId': '145446752', 'name': 'Marie-Francine Moens'}], 'venue': 'AAAI Conference on Artificial Intelligence', 'abstract': '\n \n Spatial understanding is a fundamental problem with wide-reaching real-world applications. The representation of spatial knowledge is often modeled with spatial templates, i.e., regions of acceptability of two objects under an explicit spatial relationship (e.g., "on," "below," etc.). In contrast with prior work that restricts spatial templates to explicit spatial prepositions (e.g., "glass on table"), here we extend this concept to implicit spatial language, i.e., those relationships (generally actions) for which the spatial arrangement of the objects is only implicitly implied (e.g., "man riding horse"). In contrast with explicit relationships, predicting spatial arrangements from implicit spatial language requires significant common sense spatial understanding. Here, we introduce the task of predicting spatial templates for two objects under a relationship, which can be seen as a spatial question-answering task with a (2D) continuous output ("where is the man w.r.t. a horse when the man is walking the horse?"). We present two simple neural-based models that leverage annotated images and structured text to learn this task. The good performance of these models reveals that spatial locations are to a large extent predictable from implicit spatial language. Crucially, the models attain similar performance in a challenging generalized setting, where the object-relation-object combinations (e.g., "man walking dog") have never been seen before. Next, we go one step further by presenting the models with unseen objects (e.g., "dog"). In this scenario, we show that leveraging word embeddings enables the models to output accurate spatial predictions, proving that the models acquire solid common sense spatial knowledge allowing for such generalization.\n \n', 'year': 2017, 'in_acl': False, 'citationCount': 39, 'section': None, 'subsection': None}, {'id': 195063887, 'paperId': '3630c7b73f08b942f58acbce2179ef03442e1ad4', 'title': 'Generating a Novel Dataset of Multimodal Referring Expressions', 'authors': [{'authorId': '34079649', 'name': 'Nikhil Krishnaswamy'}, {'authorId': '1707726', 'name': 'J. Pustejovsky'}], 'venue': 'International Conference on Computational Semantics', 'abstract': 'Referring expressions and definite descriptions of objects in space exploit information both about object characteristics and locations. To resolve potential ambiguity, referencing strategies in language can rely on increasingly abstract concepts to distinguish an object in a given location from similar ones elsewhere, yet the description of the intended location may still be imprecise or difficult to interpret. Meanwhile, modalities such as gesture may communicate spatial information such as locations in a more concise manner. In real peer-to-peer communication, humans use language and gesture together to reference entities, with a capacity for mixing and changing modalities where needed. While recent progress in AI and human-computer interaction has created systems where a human can interact with a computer multimodally, computers often lack the capacity to intelligently mix modalities when generating referring expressions. We present a novel dataset of referring expressions combining natural language and gesture, describe its creation and evaluation, and its uses to train computational models for generating and interpreting multimodal referring expressions.', 'year': 2019, 'in_acl': True, 'citationCount': 17, 'section': None, 'subsection': None}]
|
2020.emnlp-tutorials.6
|
Simultaneous Translation
|
Simultaneous translation, which performs translation concurrently with the source speech, is widely useful in many scenarios such as international conferences, negotiations, press releases, legal proceedings, and medicine. This problem has long been considered one of the hardest problems in AI and one of its holy grails. Recently, with rapid improvements in machine translation, speech recognition, and speech synthesis, there has been exciting progress towards simultaneous translation. This tutorial will focus on the design and evaluation of policies for simultaneous translation, to leave attendees with a deep technical understanding of the history, the recent advances, and the remaining challenges in this field.
| 2,020
|
https://aclanthology.org/2020.emnlp-tutorials.6
|
EMNLP
|
[{'id': 216638592, 'paperId': 'b7adef89d7a0e7b743ab098d583a90b1cbfc6de7', 'title': "Don't Until the Final Verb Wait: Reinforcement Learning for Simultaneous Machine Translation", 'authors': [{'authorId': '2778913', 'name': 'Alvin Grissom II'}, {'authorId': '144533687', 'name': 'He He'}, {'authorId': '1389036863', 'name': 'Jordan L. Boyd-Graber'}, {'authorId': '2113564796', 'name': 'John Morgan'}, {'authorId': '1722360', 'name': 'Hal Daumé'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'We introduce a reinforcement learningbased approach to simultaneous machine translation—producing a translation while receiving input words— between languages with drastically different word orders: from verb-final languages (e.g., German) to verb-medial languages (English). In traditional machine translation, a translator must “wait” for source material to appear before translation begins. We remove this bottleneck by predicting the final verb in advance. We use reinforcement learning to learn when to trust predictions about unseen, future portions of the sentence. We also introduce an evaluation metric to measure expeditiousness and quality. We show that our new translation model outperforms batch and monotone translation strategies.', 'year': 2014, 'in_acl': True, 'citationCount': 119, 'section': None, 'subsection': None}, {'id': 216804792, 'paperId': '942eb04b5fe958fc07d5a00df1d27edcada4f05e', 'title': 'Syntax-based Rewriting for Simultaneous Machine Translation', 'authors': [{'authorId': '144466851', 'name': 'He He'}, {'authorId': '2778913', 'name': 'Alvin Grissom II'}, {'authorId': '2113564796', 'name': 'John Morgan'}, {'authorId': '1389036863', 'name': 'Jordan L. Boyd-Graber'}, {'authorId': '1722360', 'name': 'Hal Daumé'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Divergent word order between languages causes delay in simultaneous machine translation. We present a sentence rewriting method that generates more monotonic translations to improve the speedaccuracy tradeoff. We design grammaticality and meaning-preserving syntactic transformation rules that operate on constituent parse trees. We apply the rules to reference translations to make their word order closer to the source language word order. On Japanese-English translation (two languages with substantially different structure), incorporating the rewritten, more monotonic reference translation into a phrase-based machine translation system enables better translations faster than a baseline system that only uses gold reference translations.', 'year': 2015, 'in_acl': True, 'citationCount': 35, 'section': None, 'subsection': None}, {'id': 14003387, 'paperId': 'db2a669b6b7a7c8bcab31e973a9e010cd6dcc7f5', 'title': 'Can neural machine translation do simultaneous translation?', 'authors': [{'authorId': '1979489', 'name': 'Kyunghyun Cho'}, {'authorId': '3422698', 'name': 'Masha Esipova'}], 'venue': 'arXiv.org', 'abstract': 'We investigate the potential of attention-based neural machine translation in simultaneous translation. We introduce a novel decoding algorithm, called simultaneous greedy decoding, that allows an existing neural machine translation model to begin translating before a full source sentence is received. This approach is unique from previous works on simultaneous translation in that segmentation and translation are done jointly to maximize the translation quality and that translating each segment is strongly conditioned on all the previous segments. This paper presents a first step toward building a full simultaneous translation system based on neural machine translation.', 'year': 2016, 'in_acl': False, 'citationCount': 149, 'section': None, 'subsection': None}, {'id': 2782776, 'paperId': 'b13e9d23983273c0c67b91ae70c55d4c3f745b8b', 'title': 'Learning to Translate in Real-time with Neural Machine Translation', 'authors': [{'authorId': '1700325', 'name': 'Graham Neubig'}, {'authorId': '1979489', 'name': 'Kyunghyun Cho'}, {'authorId': '3016273', 'name': 'Jiatao Gu'}, {'authorId': '2052674293', 'name': 'V. Li'}], 'venue': 'Conference of the European Chapter of the Association for Computational Linguistics', 'abstract': 'Translating in real-time, a.k.a.simultaneous translation, outputs translation words before the input sentence ends, which is a challenging problem for conventional machine translation methods. We propose a neural machine translation (NMT) framework for simultaneous translation in which an agent learns to make decisions on when to translate from the interaction with a pre-trained NMT environment. To trade off quality and delay, we extensively explore various targets for delay and design a method for beam-search applicable in the simultaneous MT setting. Experiments against state-of-the-art baselines on two language pairs demonstrate the efficacy of the proposed framework both quantitatively and qualitatively.', 'year': 2016, 'in_acl': True, 'citationCount': 208, 'section': None, 'subsection': None}, {'id': 189762487, 'paperId': '9d3480e46cc506b73d5291387c6452998690fdd3', 'title': 'STACL: Simultaneous Translation with Implicit Anticipation and Controllable Latency using Prefix-to-Prefix Framework', 'authors': [{'authorId': '1847848', 'name': 'Mingbo Ma'}, {'authorId': '48545084', 'name': 'Liang Huang'}, {'authorId': '2054473960', 'name': 'Hao Xiong'}, {'authorId': '40223399', 'name': 'Renjie Zheng'}, {'authorId': '66057453', 'name': 'Kaibo Liu'}, {'authorId': '20712300', 'name': 'Baigong Zheng'}, {'authorId': '30750818', 'name': 'Chuanqiang Zhang'}, {'authorId': '37985966', 'name': 'Zhongjun He'}, {'authorId': '2110117273', 'name': 'Hairong Liu'}, {'authorId': '2155448199', 'name': 'Xing Li'}, {'authorId': '40354707', 'name': 'Hua Wu'}, {'authorId': '144270731', 'name': 'Haifeng Wang'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Simultaneous translation, which translates sentences before they are finished, is use- ful in many scenarios but is notoriously dif- ficult due to word-order differences. While the conventional seq-to-seq framework is only suitable for full-sentence translation, we pro- pose a novel prefix-to-prefix framework for si- multaneous translation that implicitly learns to anticipate in a single translation model. Within this framework, we present a very sim- ple yet surprisingly effective “wait-k” policy trained to generate the target sentence concur- rently with the source sentence, but always k words behind. Experiments show our strat- egy achieves low latency and reasonable qual- ity (compared to full-sentence translation) on 4 directions: zh↔en and de↔en.', 'year': 2018, 'in_acl': True, 'citationCount': 257, 'section': None, 'subsection': None}, {'id': 186206508, 'paperId': '05b3a6acc8be299cc2a2678e5d81712b71c748e5', 'title': 'Monotonic Infinite Lookback Attention for Simultaneous Machine Translation', 'authors': [{'authorId': '3365231', 'name': 'N. Arivazhagan'}, {'authorId': '144507724', 'name': 'Colin Cherry'}, {'authorId': '3153147', 'name': 'Wolfgang Macherey'}, {'authorId': '145039780', 'name': 'Chung-Cheng Chiu'}, {'authorId': '3014143', 'name': 'Semih Yavuz'}, {'authorId': '34320634', 'name': 'Ruoming Pang'}, {'authorId': '40400230', 'name': 'Wei Li'}, {'authorId': '2402716', 'name': 'Colin Raffel'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Simultaneous machine translation begins to translate each source sentence before the source speaker is finished speaking, with applications to live and streaming scenarios. Simultaneous systems must carefully schedule their reading of the source sentence to balance quality against latency. We present the first simultaneous translation system to learn an adaptive schedule jointly with a neural machine translation (NMT) model that attends over all source tokens read thus far. We do so by introducing Monotonic Infinite Lookback (MILk) attention, which maintains both a hard, monotonic attention head to schedule the reading of the source sentence, and a soft attention head that extends from the monotonic head back to the beginning of the source. We show that MILk’s adaptive schedule allows it to arrive at latency-quality trade-offs that are favorable to those of a recently proposed wait-k strategy for many latency values.', 'year': 2019, 'in_acl': True, 'citationCount': 187, 'section': None, 'subsection': None}]
|
2020.emnlp-tutorials.7
|
The Amazing World of Neural Language Generation
|
Neural Language Generation (NLG) – using neural network models to generate coherent text – is among the most promising methods for automated text creation. Recent years have seen a paradigm shift in neural text generation, caused by the advances in deep contextual language modeling (e.g., LSTMs, GPT, GPT2) and transfer learning (e.g., ELMo, BERT). While these tools have dramatically improved the state of NLG, particularly for low resources tasks, state-of-the-art NLG models still face many challenges: a lack of diversity in generated text, commonsense violations in depicted situations, difficulties in making use of factual information, and difficulties in designing reliable evaluation metrics. In this tutorial, we will present an overview of the current state-of-the-art in neural network architectures, and how they shaped recent research directions in text generation. We will discuss how and why these models succeed/fail at generating coherent text, and provide insights on several applications.
| 2,020
|
https://aclanthology.org/2020.emnlp-tutorials.7
|
EMNLP
|
[{'id': 16946362, 'paperId': 'd13bb317e87f3f6da10da11059ebf4350b754814', 'title': 'Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation', 'authors': [{'authorId': '1700894', 'name': 'Albert Gatt'}, {'authorId': '2297195264', 'name': 'E. Krahmer'}], 'venue': 'Journal of Artificial Intelligence Research', 'abstract': 'This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past two decades, especially in relation to new (usually data-driven) methods, as well as new applications of NLG technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in NLG and the architectures adopted in which such tasks are organised; (b) highlight a number of recent research topics that have arisen partly as a result of growing synergies between NLG and other areas of artifical intelligence; (c) draw attention to the challenges in NLG evaluation, relating them to similar challenges faced in other areas of nlp, with an emphasis on different evaluation methods and the relationships between them.', 'year': 2017, 'in_acl': False, 'citationCount': 774, 'section': None, 'subsection': None}, {'id': 160025533, 'paperId': '9405cc0d6169988371b2755e573cc28650d14dfe', 'title': 'Language Models are Unsupervised Multitask Learners', 'authors': [{'authorId': '38909097', 'name': 'Alec Radford'}, {'authorId': '49387725', 'name': 'Jeff Wu'}, {'authorId': '48422824', 'name': 'R. Child'}, {'authorId': '150970919', 'name': 'D. Luan'}, {'authorId': '2698777', 'name': 'Dario Amodei'}, {'authorId': '1701686', 'name': 'I. Sutskever'}], 'venue': '', 'abstract': 'Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on taskspecific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.', 'year': 2019, 'in_acl': False, 'citationCount': 19436, 'section': None, 'subsection': None}, {'id': 162169061, 'paperId': 'eb011ccdf9ea739ea86be85b268a4d958266b624', 'title': 'Sample Efficient Text Summarization Using a Single Pre-Trained Transformer', 'authors': [{'authorId': '3030219', 'name': 'Urvashi Khandelwal'}, {'authorId': '144358401', 'name': 'Kevin Clark'}, {'authorId': '1746807', 'name': 'Dan Jurafsky'}, {'authorId': '40527594', 'name': 'Lukasz Kaiser'}], 'venue': 'arXiv.org', 'abstract': 'Language model (LM) pre-training has resulted in impressive performance and sample efficiency on a variety of language understanding tasks. However, it remains unclear how to best use pre-trained LMs for generation tasks such as abstractive summarization, particularly to enhance sample efficiency. In these sequence-to-sequence settings, prior work has experimented with loading pre-trained weights into the encoder and/or decoder networks, but used non-pre-trained encoder-decoder attention weights. We instead use a pre-trained decoder-only network, where the same Transformer LM both encodes the source and generates the summary. This ensures that all parameters in the network, including those governing attention over source states, have been pre-trained before the fine-tuning step. Experiments on the CNN/Daily Mail dataset show that our pre-trained Transformer LM substantially improves over pre-trained Transformer encoder-decoder networks in limited-data settings. For instance, it achieves 13.1 ROUGE-2 using only 1% of the training data (~3000 examples), while pre-trained encoder-decoder models score 2.3 ROUGE-2.', 'year': 2019, 'in_acl': False, 'citationCount': 74, 'section': None, 'subsection': None}, {'id': 127986954, 'paperId': 'cf4aa38ae31b43fd07abe13b4ffdb265babb7be1', 'title': 'The Curious Case of Neural Text Degeneration', 'authors': [{'authorId': '14487640', 'name': 'Ari Holtzman'}, {'authorId': '144685020', 'name': 'Jan Buys'}, {'authorId': '2152141637', 'name': 'Li Du'}, {'authorId': '39191185', 'name': 'Maxwell Forbes'}, {'authorId': '1699545', 'name': 'Yejin Choi'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Despite considerable advancements with deep neural language models, the enigma of neural text degeneration persists when these models are tested as text generators. The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high quality models for a broad range of language understanding tasks, using likelihood as a decoding objective leads to text that is bland and strangely repetitive. \nIn this paper, we reveal surprising distributional differences between human text and machine text. In addition, we find that decoding strategies alone can dramatically effect the quality of machine text, even when generated from exactly the same neural language model. Our findings motivate Nucleus Sampling, a simple but effective method to draw the best out of neural generation. By sampling text from the dynamic nucleus of the probability distribution, which allows for diversity while effectively truncating the less reliable tail of the distribution, the resulting text better demonstrates the quality of human text, yielding enhanced diversity without sacrificing fluency and coherence.', 'year': 2019, 'in_acl': False, 'citationCount': 2755, 'section': None, 'subsection': None}, {'id': 14674248, 'paperId': '66021a920001bc3e6258bffe7076d647614147b7', 'title': 'From Word Embeddings To Document Distances', 'authors': [{'authorId': '1940272', 'name': 'Matt J. Kusner'}, {'authorId': '2117103358', 'name': 'Yu Sun'}, {'authorId': '1971973', 'name': 'Nicholas I. Kolkin'}, {'authorId': '7446832', 'name': 'Kilian Q. Weinberger'}], 'venue': 'International Conference on Machine Learning', 'abstract': 'We present the Word Mover\'s Distance (WMD), a novel distance function between text documents. Our work is based on recent results in word embeddings that learn semantically meaningful representations for words from local cooccurrences in sentences. The WMD distance measures the dissimilarity between two text documents as the minimum amount of distance that the embedded words of one document need to "travel" to reach the embedded words of another document. We show that this distance metric can be cast as an instance of the Earth Mover\'s Distance, a well studied transportation problem for which several highly efficient solvers have been developed. Our metric has no hyperparameters and is straight-forward to implement. Further, we demonstrate on eight real world document classification data sets, in comparison with seven state-of-the-art baselines, that the WMD metric leads to unprecedented low k-nearest neighbor document classification error rates.', 'year': 2015, 'in_acl': False, 'citationCount': 2019, 'section': None, 'subsection': None}, {'id': 7147309, 'paperId': 'b7aee9dfb027d6061c6a653684c0fa9a9bba750d', 'title': 'Sequence Level Training with Recurrent Neural Networks', 'authors': [{'authorId': '1706809', 'name': "Marc'Aurelio Ranzato"}, {'authorId': '3295092', 'name': 'S. Chopra'}, {'authorId': '2325985', 'name': 'Michael Auli'}, {'authorId': '2563432', 'name': 'Wojciech Zaremba'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Many natural language processing applications use language models to generate text. These models are typically trained to predict the next word in a sequence, given the previous words and some context such as an image. However, at test time the model is expected to generate the entire sequence from scratch. This discrepancy makes generation brittle, as errors may accumulate along the way. We address this issue by proposing a novel sequence level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE. On three different tasks, our approach outperforms several strong baselines for greedy generation. The method is also competitive when these baselines employ beam search, while being several times faster.', 'year': 2015, 'in_acl': False, 'citationCount': 1543, 'section': None, 'subsection': None}, {'id': 748227, 'paperId': 'd82b55c35c8673774a708353838918346f6c006f', 'title': 'Generating Sentences from a Continuous Space', 'authors': [{'authorId': '3644767', 'name': 'Samuel R. Bowman'}, {'authorId': '2546951', 'name': 'L. Vilnis'}, {'authorId': '1689108', 'name': 'O. Vinyals'}, {'authorId': '2555924', 'name': 'Andrew M. Dai'}, {'authorId': '1944541', 'name': 'R. Józefowicz'}, {'authorId': '1751569', 'name': 'Samy Bengio'}], 'venue': 'Conference on Computational Natural Language Learning', 'abstract': "The standard recurrent neural network language model (RNNLM) generates sentences one word at a time and does not work from an explicit global sentence representation. In this work, we introduce and study an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences. This factorization allows it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features. Samples from the prior over these sentence representations remarkably produce diverse and well-formed sentences through simple deterministic decoding. By examining paths through this latent space, we are able to generate coherent novel sentences that interpolate between known sentences. We present techniques for solving the difficult learning problem presented by this model, demonstrate its effectiveness in imputing missing words, explore many interesting properties of the model's latent sentence space, and present negative results on the use of the model in language modeling.", 'year': 2015, 'in_acl': True, 'citationCount': 2264, 'section': None, 'subsection': None}, {'id': 21731209, 'paperId': '6db2b93a2d4007371030644173f1001c959214d2', 'title': 'Learning to Write with Cooperative Discriminators', 'authors': [{'authorId': '14487640', 'name': 'Ari Holtzman'}, {'authorId': '144685020', 'name': 'Jan Buys'}, {'authorId': '39191185', 'name': 'Maxwell Forbes'}, {'authorId': '2691021', 'name': 'Antoine Bosselut'}, {'authorId': '145798491', 'name': 'David Golub'}, {'authorId': '1699545', 'name': 'Yejin Choi'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Despite their local fluency, long-form text generated from RNNs is often generic, repetitive, and even self-contradictory. We propose a unified learning framework that collectively addresses all the above issues by composing a committee of discriminators that can guide a base RNN generator towards more globally coherent generations. More concretely, discriminators each specialize in a different principle of communication, such as Grice’s maxims, and are collectively combined with the base RNN generator through a composite decoding objective. Human evaluation demonstrates that text generated by our model is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations.', 'year': 2018, 'in_acl': True, 'citationCount': 222, 'section': None, 'subsection': None}]
|
2021.acl-tutorials.1
|
Advances in Debating Technologies: Building AI That Can Debate Humans
|
The tutorial focuses on Debating Technologies, a sub-field of computational argumentation defined as “computational technologies developed directly to enhance, support, and engage with human debating” (Gurevych et al., 2016). A recent milestone in this field is Project Debater, which was revealed in 2019 as the first AI system that can debate human experts on complex topics. Project Debater is the third in the series of IBM Research AI’s grand challenges, following Deep Blue and Watson. It has been developed for over six years by a large team of researchers and engineers, and its live demonstration in February 2019 received massive media attention. This research effort has resulted in more than 50 scientific papers to date, and many datasets freely available for research purposes. We discuss the scientific challenges that arise when building such a system, including argument mining, argument quality assessment, stance classification, principled argument detection, narrative generation, and rebutting a human opponent. Many of the underlying capabilities of Project Debater have been made freely available for academic research, and the tutorial will include a detailed explanation of how to use and leverage these tools. In addition to discussing individual components, the tutorial also provides a holistic view of a debating system. Such a view is largely missing in the academic literature, where each paper typically addresses a specific problem in isolation. We present a complete pipeline of a debating system, and discuss the information flow and the interaction between the various components. Finally, we discuss practical applications and future challenges of debating technologies.
| 2,021
|
https://aclanthology.org/2021.acl-tutorials.1
|
ACL, IJCNLP
|
[{'id': 203912051, 'paperId': 'a2ae7155d94686fe83f26f6d6ca2dfacd16c5e5c', 'title': 'Argument Mining: A Survey', 'authors': [{'authorId': '2055083035', 'name': 'J. Lawrence'}, {'authorId': '145989424', 'name': 'C. Reed'}], 'venue': 'Computational Linguistics', 'abstract': 'Argument mining is the automatic identification and extraction of the structure of inference and reasoning expressed as arguments presented in natural language. Understanding argumentative structure makes it possible to determine not only what positions people are adopting, but also why they hold the opinions they do, providing valuable insights in domains as diverse as financial market prediction and public relations. This survey explores the techniques that establish the foundations for argument mining, provides a review of recent advances in argument mining techniques, and discusses the challenges faced in automatically extracting a deeper understanding of reasoning expressed in language in general.', 'year': 2020, 'in_acl': False, 'citationCount': 403, 'section': 'A survey on argument mining', 'subsection': None}, {'id': 232305184, 'paperId': '8f7697835d88d1e61cf36cf005f3b46558f05a49', 'title': 'An autonomous debating system', 'authors': [{'authorId': '1766595', 'name': 'N. Slonim'}, {'authorId': '2911299', 'name': 'Yonatan Bilu'}, {'authorId': '2059199916', 'name': 'Carlos Alzate'}, {'authorId': '1693525', 'name': 'Roy Bar-Haim'}, {'authorId': '50757607', 'name': 'Ben Bogin'}, {'authorId': '2610023', 'name': 'Francesca Bonin'}, {'authorId': '41019330', 'name': 'Leshem Choshen'}, {'authorId': '1405434669', 'name': 'Edo Cohen-Karlik'}, {'authorId': '2839128', 'name': 'Lena Dankin'}, {'authorId': '39068807', 'name': 'Lilach Edelstein'}, {'authorId': '1402680837', 'name': 'L. Ein-Dor'}, {'authorId': '2056556257', 'name': 'Roni Friedman-Melamed'}, {'authorId': '71873369', 'name': 'A. Gavron'}, {'authorId': '48835746', 'name': 'Ariel Gera'}, {'authorId': '2975469', 'name': 'Martin Gleize'}, {'authorId': '3291191', 'name': 'Shai Gretz'}, {'authorId': '1891570', 'name': 'Dan Gutfreund'}, {'authorId': '41127252', 'name': 'Alon Halfon'}, {'authorId': '2086349', 'name': 'Daniel Hershcovich'}, {'authorId': '1678934', 'name': 'R. Hoory'}, {'authorId': '150348519', 'name': 'Yufang Hou'}, {'authorId': '31732092', 'name': 'S. Hummel'}, {'authorId': '2697312', 'name': 'Michal Jacovi'}, {'authorId': '2078553', 'name': 'Charles Jochim'}, {'authorId': '2965962', 'name': 'Yoav Kantor'}, {'authorId': '1722434', 'name': 'Yoav Katz'}, {'authorId': '1775524', 'name': 'D. Konopnicki'}, {'authorId': '2981455', 'name': 'Zvi Kons'}, {'authorId': '2020379', 'name': 'Lili Kotlerman'}, {'authorId': '2058740289', 'name': 'Dalia Krieger'}, {'authorId': '1396093833', 'name': 'Dan Lahav'}, {'authorId': '1847650', 'name': 'Tamar Lavee'}, {'authorId': '48496836', 'name': 'Ran Levy'}, {'authorId': '2089779821', 'name': 'Naftali Liberman'}, {'authorId': '1727535', 'name': 'Y. Mass'}, {'authorId': '48499250', 'name': 'Amir Menczel'}, {'authorId': '8963527', 'name': 'Shachar Mirkin'}, {'authorId': '81154108', 'name': 'Guy Moshkowich'}, {'authorId': '1405604910', 'name': 'Shila Ofek-Koifman'}, {'authorId': '80108223', 'name': 'Matan Orbach'}, {'authorId': '2653682', 'name': 'Ella Rabinovich'}, {'authorId': '1905713', 'name': 'Ruty Rinott'}, {'authorId': '1764141', 'name': 'Slava Shechtman'}, {'authorId': '2035252', 'name': 'D. Sheinwald'}, {'authorId': '1734246', 'name': 'Eyal Shnarch'}, {'authorId': '2627091', 'name': 'Ilya Shnayderman'}, {'authorId': '1696998', 'name': 'A. Soffer'}, {'authorId': '51451979', 'name': 'Artem Spector'}, {'authorId': '2464133', 'name': 'B. Sznajder'}, {'authorId': '35874066', 'name': 'Assaf Toledo'}, {'authorId': '1403181290', 'name': 'Orith Toledo-Ronen'}, {'authorId': '5598623', 'name': 'Elad Venezian'}, {'authorId': '48361424', 'name': 'R. Aharonov'}], 'venue': 'Nature', 'abstract': 'Artificial intelligence (AI) is defined as the ability of machines to perform tasks that are usually associated with intelligent beings. Argument and debate are fundamental capabilities of human intelligence, essential for a wide range of human activities, and common to all human societies. The development of computational argumentation technologies is therefore an important emerging discipline in AI research1. Here we present Project Debater, an autonomous debating system that can engage in a competitive debate with humans. We provide a complete description of the system’s architecture, a thorough and systematic evaluation of its operation across a wide range of debate topics, and a detailed account of the system’s performance in its public debut against three expert human debaters. We also highlight the fundamental differences between debating with humans as opposed to challenging humans in game competitions, the latter being the focus of classical ‘grand challenges’ pursued by the AI research community over the past few decades. We suggest that such challenges lie in the ‘comfort zone’ of AI, whereas debating with humans lies in a different territory, in which humans still prevail, and for which novel paradigms are required to make substantial progress.', 'year': 2021, 'in_acl': False, 'citationCount': 124, 'section': 'Project Debater', 'subsection': None}, {'id': 18847466, 'paperId': '1f09e8a4c897543253065cbcdbd6f4f0053d4ebf', 'title': 'Context Dependent Claim Detection', 'authors': [{'authorId': '48496836', 'name': 'Ran Levy'}, {'authorId': '2911299', 'name': 'Yonatan Bilu'}, {'authorId': '2086349', 'name': 'Daniel Hershcovich'}, {'authorId': '1697314', 'name': 'E. Aharoni'}, {'authorId': '1766595', 'name': 'N. Slonim'}], 'venue': 'International Conference on Computational Linguistics', 'abstract': 'While discussing a concrete controversial topic, most humans will find it challenging to swiftly raise a diverse set of convincing and relevant claims that should set the basis of their arguments. Here, we formally define the challenging task of automatic claim detection in a given context and discuss its associated unique difficulties. Further, we outline a preliminary solution to this task, and assess its performance over annotated real world data, collected specifically for that purpose over hundreds of Wikipedia articles. We report promising results of a supervised learning approach, which is based on a cascade of classifiers designed to properly handle the skewed data which is inherent to the defined task. These results demonstrate the viability of the introduced task.', 'year': 2014, 'in_acl': True, 'citationCount': 224, 'section': 'Identification of argument components within an article', 'subsection': None}, {'id': 1804771, 'paperId': '3c95247788654b7b60deca996bd749413d21e781', 'title': 'Show Me Your Evidence - an Automatic Method for Context Dependent Evidence Detection', 'authors': [{'authorId': '1905713', 'name': 'Ruty Rinott'}, {'authorId': '2839128', 'name': 'Lena Dankin'}, {'authorId': '2069526063', 'name': 'C. A. Perez'}, {'authorId': '2361078', 'name': 'Mitesh M. Khapra'}, {'authorId': '1697314', 'name': 'E. Aharoni'}, {'authorId': '1766595', 'name': 'N. Slonim'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Engaging in a debate with oneself or others to take decisions is an integral part of our day-today life. A debate on a topic (say, use of performance enhancing drugs) typically proceeds by one party making an assertion/claim (say, PEDs are bad for health) and then providing an evidence to support the claim (say, a 2006 study shows that PEDs have psychiatric side effects). In this work, we propose the task of automatically detecting such evidences from unstructured text that support a given claim. This task has many practical applications in decision support and persuasion enhancement in a wide range of domains. We first introduce an extensive benchmark data set tailored for this task, which allows training statistical models and assessing their performance. Then, we suggest a system architecture based on supervised learning to address the evidence detection task. Finally, promising experimental results are reported.', 'year': 2015, 'in_acl': True, 'citationCount': 217, 'section': 'Identification of argument components within an article', 'subsection': None}, {'id': 2938060, 'paperId': '0f4b0cef1e0305dc4e5139387e67a13a664668b2', 'title': 'Context-Independent Claim Detection for Argument Mining', 'authors': [{'authorId': '3428634', 'name': 'Marco Lippi'}, {'authorId': '2896208', 'name': 'Paolo Torroni'}], 'venue': 'International Joint Conference on Artificial Intelligence', 'abstract': 'Argumentation mining aims to automatically identify structured argument data from unstructured natural language text. This challenging, multi-faceted task is recently gaining a growing attention, especially due to its many potential applications. One particularly important aspect of argumentation mining is claim identification. Most of the current approaches are engineered to address specific domains. However, argumentative sentences are often characterized by common rhetorical structures, independently of the domain. We thus propose a method that exploits structured parsing information to detect claims without resorting to contextual information, and yet achieve a performance comparable to that of state-of-the-art methods that heavily rely on the context.', 'year': 2015, 'in_acl': False, 'citationCount': 141, 'section': 'Identification of argument components within an article', 'subsection': None}, {'id': 53083232, 'paperId': 'a1e91322798c7ba97ab115c58817adb06005c0c1', 'title': 'Cross-topic Argument Mining from Heterogeneous Sources', 'authors': [{'authorId': '3067663', 'name': 'Christian Stab'}, {'authorId': '1818919', 'name': 'Tristan Miller'}, {'authorId': '36294183', 'name': 'Benjamin Schiller'}, {'authorId': '32832559', 'name': 'Pranav Rai'}, {'authorId': '1730400', 'name': 'Iryna Gurevych'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Argument mining is a core technology for automating argument search in large document collections. Despite its usefulness for this task, most current approaches are designed for use only with specific text types and fall short when applied to heterogeneous texts. In this paper, we propose a new sentential annotation scheme that is reliably applicable by crowd workers to arbitrary Web texts. We source annotations for over 25,000 instances covering eight controversial topics. We show that integrating topic information into bidirectional long short-term memory networks outperforms vanilla BiLSTMs by more than 3 percentage points in F1 in two- and three-label cross-topic settings. We also show that these results can be further improved by leveraging additional data for topic relevance using multi-task learning.', 'year': 2018, 'in_acl': True, 'citationCount': 191, 'section': 'Corpus-wide argument mining', 'subsection': None}, {'id': 208267859, 'paperId': 'a774a4cda9523cdc618830538861cd47311cfd27', 'title': 'Corpus Wide Argument Mining - a Working Solution', 'authors': [{'authorId': '1402680837', 'name': 'L. Ein-Dor'}, {'authorId': '1734246', 'name': 'Eyal Shnarch'}, {'authorId': '2839128', 'name': 'Lena Dankin'}, {'authorId': '41127252', 'name': 'Alon Halfon'}, {'authorId': '2464133', 'name': 'B. Sznajder'}, {'authorId': '48835746', 'name': 'Ariel Gera'}, {'authorId': '2059199916', 'name': 'Carlos Alzate'}, {'authorId': '2975469', 'name': 'Martin Gleize'}, {'authorId': '41019330', 'name': 'Leshem Choshen'}, {'authorId': '39517968', 'name': 'Yufang Hou'}, {'authorId': '2911299', 'name': 'Yonatan Bilu'}, {'authorId': '48361424', 'name': 'R. Aharonov'}, {'authorId': '1766595', 'name': 'N. Slonim'}], 'venue': 'AAAI Conference on Artificial Intelligence', 'abstract': 'One of the main tasks in argument mining is the retrieval of argumentative content pertaining to a given topic. Most previous work addressed this task by retrieving a relatively small number of relevant documents as the initial source for such content. This line of research yielded moderate success, which is of limited use in a real-world system. Furthermore, for such a system to yield a comprehensive set of relevant arguments, over a wide range of topics, it requires leveraging a large and diverse corpus in an appropriate manner. Here we present a first end-to-end high-precision, corpus-wide argument mining system. This is made possible by combining sentence-level queries over an appropriate indexing of a very large corpus of newspaper articles, with an iterative annotation scheme. This scheme addresses the inherent label bias in the data and pinpoints the regions of the sample space whose manual labeling is required to obtain high-precision among top-ranked candidates.', 'year': 2019, 'in_acl': False, 'citationCount': 61, 'section': 'Corpus-wide argument mining', 'subsection': None}, {'id': 141282, 'paperId': '3f020157c741f869da2a5daa2971b90d37fa9581', 'title': 'Computational Argumentation Quality Assessment in Natural Language', 'authors': [{'authorId': '2626599', 'name': 'Henning Wachsmuth'}, {'authorId': '33494034', 'name': 'Nona Naderi'}, {'authorId': '39517968', 'name': 'Yufang Hou'}, {'authorId': '2911299', 'name': 'Yonatan Bilu'}, {'authorId': '3331141', 'name': 'Vinodkumar Prabhakaran'}, {'authorId': '2007871156', 'name': 'Tim Alberdingk Thijm'}, {'authorId': '145036961', 'name': 'Graeme Hirst'}, {'authorId': '144146081', 'name': 'Benno Stein'}], 'venue': 'Conference of the European Chapter of the Association for Computational Linguistics', 'abstract': 'Research on computational argumentation faces the problem of how to automatically assess the quality of an argument or argumentation. While different quality dimensions have been approached in natural language processing, a common understanding of argumentation quality is still missing. This paper presents the first holistic work on computational argumentation quality in natural language. We comprehensively survey the diverse existing theories and approaches to assess logical, rhetorical, and dialectical quality dimensions, and we derive a systematic taxonomy from these. In addition, we provide a corpus with 320 arguments, annotated for all 15 dimensions in the taxonomy. Our results establish a common ground for research on computational argumentation quality assessment.', 'year': 2017, 'in_acl': True, 'citationCount': 197, 'section': 'Argument quality', 'subsection': None}, {'id': 3083231, 'paperId': '25ae911c13da7ef9def56ee30170920ebd48a668', 'title': 'Which argument is more convincing? Analyzing and predicting convincingness of Web arguments using bidirectional LSTM', 'authors': [{'authorId': '2572366', 'name': 'Ivan Habernal'}, {'authorId': '1730400', 'name': 'Iryna Gurevych'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We propose a new task in the field of computational argumentation in which we investigate qualitative properties of Web arguments, namely their convincingness. We cast the problem as relation classification, where a pair of arguments having the same stance to the same prompt is judged. We annotate a large datasets of 16k pairs of arguments over 32 topics and investigate whether the relation “A is more convincing than B” exhibits properties of total ordering; these findings are used as global constraints for cleaning the crowdsourced data. We propose two tasks: (1) predicting which argument from an argument pair is more convincing and (2) ranking all arguments to the topic based on their convincingness. We experiment with feature-rich SVM and bidirectional LSTM and obtain 0.76-0.78 accuracy and 0.35-0.40 Spearman’s correlation in a cross-topic evaluation. We release the newly created corpus UKPConvArg1 and the experimental software under open licenses.', 'year': 2016, 'in_acl': True, 'citationCount': 214, 'section': 'Argument quality', 'subsection': None}, {'id': 10432955, 'paperId': '68b81fe3662c30bc0e27a5e23a69e7091ae22f53', 'title': 'Stance Classification of Context-Dependent Claims', 'authors': [{'authorId': '1693525', 'name': 'Roy Bar-Haim'}, {'authorId': '145963427', 'name': 'Indrajit Bhattacharya'}, {'authorId': '3125402', 'name': 'Francesco Dinuzzo'}, {'authorId': '2909575', 'name': 'Amrita Saha'}, {'authorId': '1766595', 'name': 'N. Slonim'}], 'venue': 'Conference of the European Chapter of the Association for Computational Linguistics', 'abstract': 'Recent work has addressed the problem of detecting relevant claims for a given controversial topic. We introduce the complementary task of Claim Stance Classification, along with the first benchmark dataset for this task. We decompose this problem into: (a) open-domain target identification for topic and claim (b) sentiment classification for each target, and (c) open-domain contrast detection between the topic and the claim targets. Manual annotation of the dataset confirms the applicability and validity of our model. We describe an implementation of our model, focusing on a novel algorithm for contrast detection. Our approach achieves promising results, and is shown to outperform several baselines, which represent the common practice of applying a single, monolithic classifier for stance classification.', 'year': 2017, 'in_acl': True, 'citationCount': 149, 'section': 'Stance classification', 'subsection': None}, {'id': 196199072, 'paperId': '2819c908cd3fb0a4135eccc9ff86dc952851a5c8', 'title': 'Argument Invention from First Principles', 'authors': [{'authorId': '2911299', 'name': 'Yonatan Bilu'}, {'authorId': '48835746', 'name': 'Ariel Gera'}, {'authorId': '2086349', 'name': 'Daniel Hershcovich'}, {'authorId': '2464133', 'name': 'B. Sznajder'}, {'authorId': '1396093833', 'name': 'Dan Lahav'}, {'authorId': '81154108', 'name': 'Guy Moshkowich'}, {'authorId': '150020801', 'name': 'Anael Malet'}, {'authorId': '71873369', 'name': 'A. Gavron'}, {'authorId': '1766595', 'name': 'N. Slonim'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Competitive debaters often find themselves facing a challenging task – how to debate a topic they know very little about, with only minutes to prepare, and without access to books or the Internet? What they often do is rely on ”first principles”, commonplace arguments which are relevant to many topics, and which they have refined in past debates. In this work we aim to explicitly define a taxonomy of such principled recurring arguments, and, given a controversial topic, to automatically identify which of these arguments are relevant to the topic. As far as we know, this is the first time that this approach to argument invention is formalized and made explicit in the context of NLP. The main goal of this work is to show that it is possible to define such a taxonomy. While the taxonomy suggested here should be thought of as a ”first attempt” it is nonetheless coherent, covers well the relevant topics and coincides with what professional debaters actually argue in their speeches, and facilitates automatic argument invention for new topics.', 'year': 2019, 'in_acl': True, 'citationCount': 19, 'section': 'Modeling human dilemma', 'subsection': None}, {'id': 53080821, 'paperId': '89f256abf0e0187fcf0a56a4df6f447a2c0b17bb', 'title': 'Listening Comprehension over Argumentative Content', 'authors': [{'authorId': '8963527', 'name': 'Shachar Mirkin'}, {'authorId': '81154108', 'name': 'Guy Moshkowich'}, {'authorId': '80108223', 'name': 'Matan Orbach'}, {'authorId': '2020379', 'name': 'Lili Kotlerman'}, {'authorId': '2965962', 'name': 'Yoav Kantor'}, {'authorId': '1847650', 'name': 'Tamar Lavee'}, {'authorId': '2697312', 'name': 'Michal Jacovi'}, {'authorId': '2911299', 'name': 'Yonatan Bilu'}, {'authorId': '48361424', 'name': 'R. Aharonov'}, {'authorId': '1766595', 'name': 'N. Slonim'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'This paper presents a task for machine listening comprehension in the argumentation domain and a corresponding dataset in English. We recorded 200 spontaneous speeches arguing for or against 50 controversial topics. For each speech, we formulated a question, aimed at confirming or rejecting the occurrence of potential arguments in the speech. Labels were collected by listening to the speech and marking which arguments were mentioned by the speaker. We applied baseline methods addressing the task, to be used as a benchmark for future work over this dataset. All data used in this work is freely available for research.', 'year': 2018, 'in_acl': True, 'citationCount': 24, 'section': 'Listening Comprehension', 'subsection': None}]
|
2021.acl-tutorials.2
|
Event-Centric Natural Language Processing
|
This tutorial targets researchers and practitioners who are interested in AI technologies that help machines understand natural language text, particularly real-world events described in the text. These include methods to extract the internal structures of an event regarding its protagonist(s), participant(s) and properties, as well as external structures concerning memberships, temporal and causal relations of multiple events. This tutorial will provide audience with a systematic introduction of (i) knowledge representations of events, (ii) various methods for automated extraction, conceptualization and prediction of events and their relations, (iii) induction of event processes and properties, and (iv) a wide range of NLU and commonsense understanding tasks that benefit from aforementioned techniques. We will conclude the tutorial by outlining emerging research problems in this area.
| 2,021
|
https://aclanthology.org/2021.acl-tutorials.2
|
ACL, IJCNLP
|
[{'id': 60498119, 'paperId': 'e8eb363f3d87aaec3bb7d1f74917e21724123b93', 'title': 'The algebra of events', 'authors': [{'authorId': '144282503', 'name': 'Emmon Bach'}], 'venue': 'The Language of Time - A Reader', 'abstract': 'A number of writers have commented on the close parallels between the mass-count distinction in nominal systems and the aspectual classification of verbal expressions (Allen, 1966; Mourelatos, 1978; L. Carlson, 1981; Hoepelman and Rohrer, 1980) that has been the subject of much attention in recent years in linguistics and philosophy. To take just one class of examples for now, there is a parallel between the two sets of distinctions in their cooccurrence patterns with expressions denoting numbers or amounts, as in Examples (1a)–(4b):', 'year': 1986, 'in_acl': False, 'citationCount': 796, 'section': None, 'subsection': None}, {'id': 6341459, 'paperId': 'b1cdc9884113bb055505272b9c15772cb558d221', 'title': 'Event Schema Induction with a Probabilistic Entity-Driven Model', 'authors': [{'authorId': '1729918', 'name': 'Nathanael Chambers'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Event schema induction is the task of learning high-level representations of complex events (e.g., a bombing) and their entity roles (e.g., perpetrator and victim) from unlabeled text. Event schemas have important connections to early NLP research on frames and scripts, as well as modern applications like template extraction. Recent research suggests event schemas can be learned from raw text. Inspired by a pipelined learner based on named entity coreference, this paper presents the first generative model for schema induction that integrates coreference chains into learning. Our generative model is conceptually simpler than the pipelined approach and requires far less training data. It also provides an interesting contrast with a recent HMM-based model. We evaluate on a common dataset for template schema extraction. Our generative model matches the pipeline’s performance, and outperforms the HMM by 7 F1 points (20%).', 'year': 2013, 'in_acl': True, 'citationCount': 122, 'section': None, 'subsection': None}, {'id': 202539612, 'paperId': 'ec9446482c8448911cbc7c98a676fcd156ab20af', 'title': 'A Logic-Driven Framework for Consistency of Neural Models', 'authors': [{'authorId': '47387745', 'name': 'Tao Li'}, {'authorId': '46346053', 'name': 'Vivek Gupta'}, {'authorId': '41016174', 'name': 'Maitrey Mehta'}, {'authorId': '3052879', 'name': 'Vivek Srikumar'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'While neural models show remarkable accuracy on individual predictions, their internal beliefs can be inconsistent across examples. In this paper, we formalize such inconsistency as a generalization of prediction error. We propose a learning framework for constraining models using logic rules to regularize them away from inconsistency. Our framework can leverage both labeled and unlabeled examples and is directly compatible with off-the-shelf learning schemes without model redesign. We instantiate our framework on natural language inference, where experiments show that enforcing invariants stated in logic can help make the predictions of neural models both accurate and consistent.', 'year': 2019, 'in_acl': True, 'citationCount': 90, 'section': None, 'subsection': None}, {'id': 220048375, 'paperId': '222dcbf5ee19fdfc9cfbd9c75af168a5c2122a4a', 'title': 'A Joint Neural Model for Information Extraction with Global Features', 'authors': [{'authorId': '2117032681', 'name': 'Ying Lin'}, {'authorId': '2113323573', 'name': 'Heng Ji'}, {'authorId': '143857288', 'name': 'Fei Huang'}, {'authorId': '3008832', 'name': 'Lingfei Wu'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Most existing joint neural models for Information Extraction (IE) use local task-specific classifiers to predict labels for individual instances (e.g., trigger, relation) regardless of their interactions. For example, a victim of a die event is likely to be a victim of an attack event in the same sentence. In order to capture such cross-subtask and cross-instance inter-dependencies, we propose a joint neural framework, OneIE, that aims to extract the globally optimal IE result as a graph from an input sentence. OneIE performs end-to-end IE in four stages: (1) Encoding a given sentence as contextualized word representations; (2) Identifying entity mentions and event triggers as nodes; (3) Computing label scores for all nodes and their pairwise links using local classifiers; (4) Searching for the globally optimal graph with a beam decoder. At the decoding stage, we incorporate global features to capture the cross-subtask and cross-instance interactions. Experiments show that adding global features improves the performance of our model and achieves new state of-the-art on all subtasks. In addition, as OneIE does not use any language-specific feature, we prove it can be easily applied to new languages or trained in a multilingual manner.', 'year': 2020, 'in_acl': True, 'citationCount': 373, 'section': None, 'subsection': None}, {'id': 222306079, 'paperId': '518a0d1669369511c6b2f0687b68d65da3938e12', 'title': 'Joint Constrained Learning for Event-Event Relation Extraction', 'authors': [{'authorId': '34269118', 'name': 'Haoyu Wang'}, {'authorId': '1998918', 'name': 'Muhao Chen'}, {'authorId': '49723569', 'name': 'Hongming Zhang'}, {'authorId': '144590225', 'name': 'D. Roth'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Understanding natural language involves recognizing how multiple event mentions structurally and temporally interact with each other. In this process, one can induce event complexes that organize multi-granular events with temporal order and membership relations interweaving among them. Due to the lack of jointly labeled data for these relational phenomena and the restriction on the structures they articulate, we propose a joint constrained learning framework for modeling event-event relations. Specifically, the framework enforces logical constraints within and across multiple temporal and subevent relations by converting these constraints into differentiable learning objectives. We show that our joint constrained learning approach effectively compensates for the lack of jointly labeled data, and outperforms SOTA methods on benchmarks for both temporal relation extraction and event hierarchy construction, replacing a commonly used but more expensive global inference process. We also present a promising case study showing the effectiveness of our approach in inducing event complexes on an external corpus.', 'year': 2020, 'in_acl': True, 'citationCount': 104, 'section': None, 'subsection': None}, {'id': 529375, 'paperId': '74f61af390292fc197659ae698429df4a2de62df', 'title': 'Unsupervised Learning of Narrative Event Chains', 'authors': [{'authorId': '1729918', 'name': 'Nathanael Chambers'}, {'authorId': '1746807', 'name': 'Dan Jurafsky'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Hand-coded scripts were used in the 1970-80s as knowledge backbones that enabled inference and other NLP tasks requiring deep semantic knowledge. We propose unsupervised induction of similar schemata called narrative event chains from raw newswire text. A narrative event chain is a partially ordered set of events related by a common protagonist. We describe a three step process to learning narrative event chains. The first uses unsupervised distributional methods to learn narrative relations between events sharing coreferring arguments. The second applies a temporal classifier to partially order the connected events. Finally, the third prunes and clusters self-contained chains from the space of events. We introduce two evaluations: the narrative cloze to evaluate event relatedness, and an order coherence task to evaluate narrative order. We show a 36% improvement over baseline for narrative prediction and 25% for temporal coherence.', 'year': 2008, 'in_acl': True, 'citationCount': 641, 'section': None, 'subsection': None}, {'id': 222305618, 'paperId': '88119224c18e891c9dd550b2ced8ff5049be3849', 'title': 'Analogous Process Structure Induction for Sub-event Sequence Prediction', 'authors': [{'authorId': '49723569', 'name': 'Hongming Zhang'}, {'authorId': '1998918', 'name': 'Muhao Chen'}, {'authorId': '34269118', 'name': 'Haoyu Wang'}, {'authorId': '1809614', 'name': 'Yangqiu Song'}, {'authorId': '144590225', 'name': 'D. Roth'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Computational and cognitive studies of event understanding suggest that identifying, comprehending, and predicting events depend on having structured representations of a sequence of events and on conceptualizing (abstracting) its components into (soft) event categories. Thus, knowledge about a known process such as "buying a car" can be used in the context of a new but analogous process such as "buying a house". Nevertheless, most event understanding work in NLP is still at the ground level and does not consider abstraction. In this paper, we propose an Analogous Process Structure Induction APSI framework, which leverages analogies among processes and conceptualization of sub-event instances to predict the whole sub-event sequence of previously unseen open-domain processes. As our experiments and analysis indicate, APSI supports the generation of meaningful sub-event sequences for unseen processes and can help predict missing events.', 'year': 2020, 'in_acl': True, 'citationCount': 42, 'section': None, 'subsection': None}, {'id': 218581125, 'paperId': 'b9485d1e2c66c3ae452ec4903c2a157caef4d2ed', 'title': 'Temporal Common Sense Acquisition with Minimal Supervision', 'authors': [{'authorId': '145360756', 'name': 'Ben Zhou'}, {'authorId': '3333257', 'name': 'Qiang Ning'}, {'authorId': '1783281', 'name': 'Daniel Khashabi'}, {'authorId': '144590225', 'name': 'D. Roth'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Temporal common sense (e.g., duration and frequency of events) is crucial for understanding natural language. However, its acquisition is challenging, partly because such information is often not expressed explicitly in text, and human annotation on such concepts is costly. This work proposes a novel sequence modeling approach that exploits explicit and implicit mentions of temporal common sense, extracted from a large corpus, to build TacoLM, a temporal common sense language model. Our method is shown to give quality predictions of various dimensions of temporal common sense (on UDST and a newly collected dataset from RealNews). It also produces representations of events for relevant tasks such as duration comparison, parent-child relations, event coreference and temporal QA (on TimeBank, HiEVE and MCTACO) that are better than using the standard BERT. Thus, it will be an important component of temporal NLP.', 'year': 2020, 'in_acl': True, 'citationCount': 88, 'section': None, 'subsection': None}]
|
2021.acl-tutorials.3
|
Meta Learning and Its Applications to Natural Language Processing
|
Deep learning based natural language processing (NLP) has become the mainstream of research in recent years and significantly outperforms conventional methods. However, deep learning models are notorious for being data and computation hungry. These downsides limit the application of such models from deployment to different domains, languages, countries, or styles, since collecting in-genre data and model training from scratch are costly. The long-tail nature of human language makes challenges even more significant. Meta-learning, or ‘Learning to Learn’, aims to learn better learning algorithms, including better parameter initialization, optimization strategy, network architecture, distance metrics, and beyond. Meta-learning has been shown to allow faster fine-tuning, converge to better performance, and achieve amazing results for few-shot learning in many applications. Meta-learning is one of the most important new techniques in machine learning in recent years. There is a related tutorial in ICML 2019 and a related course at Stanford, but most of the example applications given in these materials are about image processing. It is believed that meta-learning has great potential to be applied in NLP, and some works have been proposed with notable achievements in several relevant problems, e.g., relation extraction, machine translation, and dialogue generation and state tracking. However, it does not catch the same level of attention as in the image processing community. In the tutorial, we will first introduce Meta-learning approaches and the theory behind them, and then review the works of applying this technology to NLP problems. This tutorial intends to facilitate researchers in the NLP community to understand this new technology better and promote more research studies using this new technology.
| 2,021
|
https://aclanthology.org/2021.acl-tutorials.3
|
ACL, IJCNLP
|
[{'id': 6719686, 'paperId': 'c889d6f98e6d79b89c3a6adf8a921f88fa6ba518', 'title': 'Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks', 'authors': [{'authorId': '46881670', 'name': 'Chelsea Finn'}, {'authorId': '1689992', 'name': 'P. Abbeel'}, {'authorId': '1736651', 'name': 'S. Levine'}], 'venue': 'International Conference on Machine Learning', 'abstract': 'We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.', 'year': 2017, 'in_acl': False, 'citationCount': 10723, 'section': 'Learning to Initialize', 'subsection': None}, {'id': 309759, 'paperId': 'c269858a7bb34e8350f2442ccf37797856ae9bca', 'title': 'Prototypical Networks for Few-shot Learning', 'authors': [{'authorId': '39770136', 'name': 'Jake Snell'}, {'authorId': '1754860', 'name': 'Kevin Swersky'}, {'authorId': '1804104', 'name': 'R. Zemel'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'We propose Prototypical Networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical Networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend Prototypical Networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.', 'year': 2017, 'in_acl': False, 'citationCount': 7323, 'section': 'Learning to Compare', 'subsection': None}, {'id': 8909022, 'paperId': 'be1bb4e4aa1fcf70281b4bd24d8cd31c04864bb6', 'title': 'Matching Networks for One Shot Learning', 'authors': [{'authorId': '1689108', 'name': 'O. Vinyals'}, {'authorId': '1723876', 'name': 'C. Blundell'}, {'authorId': '2542999', 'name': 'T. Lillicrap'}, {'authorId': '2645384', 'name': 'K. Kavukcuoglu'}, {'authorId': '1688276', 'name': 'Daan Wierstra'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank.', 'year': 2016, 'in_acl': False, 'citationCount': 6792, 'section': 'Learning to Compare', 'subsection': None}, {'id': 67413369, 'paperId': '29c887794eed2ca9462638ff853e6fe1ab91d5d8', 'title': 'Optimization as a Model for Few-Shot Learning', 'authors': [{'authorId': '49517463', 'name': 'S. Ravi'}, {'authorId': '1777528', 'name': 'H. Larochelle'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.', 'year': 2016, 'in_acl': False, 'citationCount': 3268, 'section': 'Other Methods', 'subsection': None}, {'id': 2928017, 'paperId': '71683e224ab91617950956b5005ed0439a733a71', 'title': 'Learning to learn by gradient descent by gradient descent', 'authors': [{'authorId': '2206490', 'name': 'Marcin Andrychowicz'}, {'authorId': '1715051', 'name': 'Misha Denil'}, {'authorId': '2016840', 'name': 'Sergio Gomez Colmenarejo'}, {'authorId': '3243579', 'name': 'Matthew W. Hoffman'}, {'authorId': '144846367', 'name': 'David Pfau'}, {'authorId': '1725157', 'name': 'T. Schaul'}, {'authorId': '1737568', 'name': 'Nando de Freitas'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.', 'year': 2016, 'in_acl': False, 'citationCount': 1906, 'section': 'Other Methods', 'subsection': None}]
|
2021.acl-tutorials.4
|
Pre-training Methods for Neural Machine Translation
|
This tutorial provides a comprehensive guide to make the most of pre-training for neural machine translation. Firstly, we will briefly introduce the background of NMT, pre-training methodology, and point out the main challenges when applying pre-training for NMT. Then we will focus on analysing the role of pre-training in enhancing the performance of NMT, how to design a better pre-training model for executing specific NMT tasks and how to better integrate the pre-trained model into NMT system. In each part, we will provide examples, discuss training techniques and analyse what is transferred when applying pre-training.
| 2,021
|
https://aclanthology.org/2021.acl-tutorials.4
|
ACL, IJCNLP
|
[{'id': 13756489, 'paperId': '204e3073870fae3d05bcbc2f6a8e263d9b72e776', 'title': 'Attention is All you Need', 'authors': [{'authorId': '40348417', 'name': 'Ashish Vaswani'}, {'authorId': '1846258', 'name': 'Noam M. Shazeer'}, {'authorId': '3877127', 'name': 'Niki Parmar'}, {'authorId': '39328010', 'name': 'Jakob Uszkoreit'}, {'authorId': '145024664', 'name': 'Llion Jones'}, {'authorId': '19177000', 'name': 'Aidan N. Gomez'}, {'authorId': '40527594', 'name': 'Lukasz Kaiser'}, {'authorId': '3443442', 'name': 'Illia Polosukhin'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.', 'year': 2017, 'in_acl': False, 'citationCount': 109681, 'section': None, 'subsection': None}, {'id': 260464809, 'paperId': 'a486e2839291111bb44fa1f07731ada123539f75', 'title': 'Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation', 'authors': [{'authorId': '2109675545', 'name': 'Melvin Johnson'}, {'authorId': '144927151', 'name': 'M. Schuster'}, {'authorId': '2827616', 'name': 'Quoc V. Le'}, {'authorId': '2048712', 'name': 'M. Krikun'}, {'authorId': '48607963', 'name': 'Yonghui Wu'}, {'authorId': '2545358', 'name': 'Z. Chen'}, {'authorId': '144203200', 'name': 'Nikhil Thorat'}, {'authorId': '1765169', 'name': 'F. Viégas'}, {'authorId': '145233583', 'name': 'M. Wattenberg'}, {'authorId': '2227182886', 'name': 'Gregory S. Corrado'}, {'authorId': '48342565', 'name': 'Macduff Hughes'}, {'authorId': '2056946837', 'name': 'Jeffrey Dean'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'We propose a simple solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no changes to the model architecture from a standard NMT system but instead introduces an artificial token at the beginning of the input sentence to specify the required target language. Using a shared wordpiece vocabulary, our approach enables Multilingual NMT systems using a single model. On the WMT’14 benchmarks, a single multilingual model achieves comparable performance for English→French and surpasses state-of-theart results for English→German. Similarly, a single multilingual model surpasses state-of-the-art results for French→English and German→English on WMT’14 and WMT’15 benchmarks, respectively. On production corpora, multilingual models of up to twelve language pairs allow for better translation of many individual pairs. Our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation is possible for neural translation. Finally, we show analyses that hints at a universal interlingua representation in our models and also show some interesting examples when mixing languages.', 'year': 2016, 'in_acl': True, 'citationCount': 2002, 'section': None, 'subsection': None}, {'id': 52967399, 'paperId': 'df2b0e26d0599ce3e70df8a9da02e51594e0e992', 'title': 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding', 'authors': [{'authorId': '39172707', 'name': 'Jacob Devlin'}, {'authorId': '1744179', 'name': 'Ming-Wei Chang'}, {'authorId': '2544107', 'name': 'Kenton Lee'}, {'authorId': '3259253', 'name': 'Kristina Toutanova'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).', 'year': 2019, 'in_acl': True, 'citationCount': 84138, 'section': None, 'subsection': None}, {'id': 160025533, 'paperId': '9405cc0d6169988371b2755e573cc28650d14dfe', 'title': 'Language Models are Unsupervised Multitask Learners', 'authors': [{'authorId': '38909097', 'name': 'Alec Radford'}, {'authorId': '49387725', 'name': 'Jeff Wu'}, {'authorId': '48422824', 'name': 'R. Child'}, {'authorId': '150970919', 'name': 'D. Luan'}, {'authorId': '2698777', 'name': 'Dario Amodei'}, {'authorId': '1701686', 'name': 'I. Sutskever'}], 'venue': '', 'abstract': 'Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on taskspecific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.', 'year': 2019, 'in_acl': False, 'citationCount': 19436, 'section': None, 'subsection': None}, {'id': 266840788, 'paperId': '4954c898d898197663d46e91ae7d83c3df7ac11f', 'title': 'wav2vec: Unsupervised Pre-training for Speech Recognition', 'authors': [{'authorId': '2278301828', 'name': 'Steffen Schneider'}, {'authorId': '51428394', 'name': 'Alexei Baevski'}, {'authorId': '2939803', 'name': 'R. Collobert'}, {'authorId': '2325985', 'name': 'Michael Auli'}], 'venue': 'Interspeech', 'abstract': 'We explore unsupervised pre-training for speech recognition by learning representations of raw audio. wav2vec is trained on large amounts of unlabeled audio data and the resulting representations are then used to improve acoustic model training. We pre-train a simple multi-layer convolutional neural network optimized via a noise contrastive binary classification task. Our experiments on WSJ reduce WER of a strong character-based log-mel filterbank baseline by up to 36% when only a few hours of transcribed data is available. Our approach achieves 2.43% WER on the nov92 test set. This outperforms Deep Speech 2, the best reported character-based system in the literature while using two orders of magnitude less labeled training data.', 'year': 2019, 'in_acl': False, 'citationCount': 490, 'section': None, 'subsection': None}, {'id': 219966759, 'paperId': '49a049dc85e2380dde80501a984878341dd8efdf', 'title': 'wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations', 'authors': [{'authorId': '51428394', 'name': 'Alexei Baevski'}, {'authorId': '2110147709', 'name': 'Henry Zhou'}, {'authorId': '40360972', 'name': 'Abdel-rahman Mohamed'}, {'authorId': '2325985', 'name': 'Michael Auli'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.', 'year': 2020, 'in_acl': False, 'citationCount': 4690, 'section': None, 'subsection': None}]
|
2021.acl-tutorials.5
|
Prosody: Models, Methods, and Applications
|
Prosody is essential in human interaction, enabling people to show interest, establish rapport, efficiently convey nuances of attitude or intent, and so on. Some applications that exploit prosodic knowledge have recently shown superhuman performance, and in many respects our ability to effectively model prosody is rapidly advancing. This tutorial will overview the computational modeling of prosody, including recent advances and diverse actual and potential applications.
| 2,021
|
https://aclanthology.org/2021.acl-tutorials.5
|
ACL, IJCNLP
|
[{'id': 33433997, 'paperId': '38be2a1a9aa093a3d1de074f04fab47147d01418', 'title': 'An Introduction to English Phonetics', 'authors': [{'authorId': '144059102', 'name': 'Richard Ogden'}], 'venue': 'Phonetica: International Journal of Phonetic Science', 'abstract': '1. Introduction 2. Overview of the human speech mechanism 3. Representing speech 4. Voicing 5. Vowels 6. Approximants 7. Plosives 8. Nasals 9. Fricatives 10. Airstreams 11. Sounds and structures Glossary', 'year': 2009, 'in_acl': False, 'citationCount': 140, 'section': None, 'subsection': None}, {'id': 60158090, 'paperId': '1f2585c4a0d9ef823d28f8e3a55cbdfb3466b5e9', 'title': 'Analysing Conversation: An Introduction to Prosody', 'authors': [{'authorId': '12507724', 'name': 'B. S. Reed'}], 'venue': '', 'abstract': 'List of Tables and Figures Acknowledgements Preliminaries Pitch: Introduction Pitch: Intonation Pitch: Range and Register Time: Sound and Syllable Duration Speech Rate Speech Rhythm Pauses Loudness Voice Quality Outlook: Future Issues in Research on Prosody in Conversation Answers to Exercises Appendix: Transcription Conventions Glossary Notes Bibliography Index', 'year': 2010, 'in_acl': False, 'citationCount': 85, 'section': None, 'subsection': None}, {'id': 14486649, 'paperId': '10f992779c2601af8a33f53c813cb6342791873f', 'title': 'The Geneva Minimalistic Acoustic Parameter Set (GeMAPS) for Voice Research and Affective Computing', 'authors': [{'authorId': '1751126', 'name': 'F. Eyben'}, {'authorId': '2462740', 'name': 'K. Scherer'}, {'authorId': '145411696', 'name': 'Björn Schuller'}, {'authorId': '144000012', 'name': 'J. Sundberg'}, {'authorId': '1742930', 'name': 'E. André'}, {'authorId': '2106794', 'name': 'C. Busso'}, {'authorId': '1713369', 'name': 'L. Devillers'}, {'authorId': '145815422', 'name': 'J. Epps'}, {'authorId': '2576655', 'name': 'Petri Laukka'}, {'authorId': '145254843', 'name': 'Shrikanth S. Narayanan'}, {'authorId': '1776021', 'name': 'K. Truong'}], 'venue': 'IEEE Transactions on Affective Computing', 'abstract': 'Work on voice sciences over recent decades has led to a proliferation of acoustic parameters that are used quite selectively and are not always extracted in a similar fashion. With many independent teams working in different research areas, shared standards become an essential safeguard to ensure compliance with state-of-the-art methods allowing appropriate comparison of results across studies and potential integration and combination of extraction and recognition systems. In this paper we propose a basic standard acoustic parameter set for various areas of automatic voice analysis, such as paralinguistic or clinical speech analysis. In contrast to a large brute-force parameter set, we present a minimalistic set of voice parameters here. These were selected based on a) their potential to index affective physiological changes in voice production, b) their proven value in former studies as well as their automatic extractability, and c) their theoretical significance. The set is intended to provide a common baseline for evaluation of future research and eliminate differences caused by varying parameter sets or even different implementations of the same parameters. Our implementation is publicly available with the openSMILE toolkit. Comparative evaluations of the proposed feature set and large baseline feature sets of INTERSPEECH challenges show a high performance of the proposed set in relation to its size.', 'year': 2016, 'in_acl': False, 'citationCount': 1367, 'section': None, 'subsection': None}, {'id': 7389778, 'paperId': 'f03df59311ecff3ab229f59058d091c123dfacb6', 'title': 'Prosody in context: a review', 'authors': [{'authorId': '145914754', 'name': 'Jennifer Cole'}], 'venue': '', 'abstract': 'Prosody conveys information about the linguistic context of an utterance at every level of linguistic organisation, from the word up to the discourse context. Acoustic correlates of prosody cue this rich contextual information, but interpreting prosodic cues in terms of the lexical, syntactic and discourse information they encode also requires recognising prosodic variation due to speaker, language variety, speech style and other properties of the situational context. This review reveals the complex interaction among contextual factors that influence the phonological form and phonetic expression of prosody. Empirical challenges in prosodic transcription are discussed along with production evidence that reveals striking variability in the phonological encoding of prosody and in its phonetic expression. The review points to the need for a model of prosody that is robust to contextually driven variation affecting the production and perception of prosodic form.', 'year': 2015, 'in_acl': False, 'citationCount': 147, 'section': None, 'subsection': None}, {'id': 2067922, 'paperId': '2be0e1198aabd6c36e8f0c52d3f8540c704a03ab', 'title': 'SPEECH PROSODY — THEORIES , MODELS AND ANALYSIS', 'authors': [{'authorId': '2110289371', 'name': 'Yi Xu'}], 'venue': '', 'abstract': 'Modern study of speech prosody started almost as early as modern study of segmental aspect of speech (Cruttendon, 1997). Over the decades, many theories and models are proposed. While the diversity of approaches is a sign of creativity of the field, the situation could be confusing for readers who are new to the area. Even for seasoned researchers, if they have not given much thought to methodological issues, the key differences between the many approaches may not be immediately clear. This chapter offers an overview of the state of the art in prosody research mainly from a methodological perspective. I will first try to highlight the critical differences between the theories and models of prosody by outlining a way of classifying them along a number of dividing lines. I will then discuss a number of key issues in prosody analysis, with focus also mainly on methodological differences.', 'year': 2014, 'in_acl': False, 'citationCount': 7, 'section': None, 'subsection': None}, {'id': 150384188, 'paperId': '14dac7d55fed0b20c43660a573c73702391bdf77', 'title': 'Prosodic Patterns in English Conversation', 'authors': [{'authorId': '32987878', 'name': 'Nigel G. Ward'}], 'venue': '', 'abstract': 'Beyond words, spoken language involves prosody: intonation, loudness, timing, and the like. In conversation prosody is vital: it enables people to mark things as important or incidental, show interest in something or change the topic, be sympathetic or business-like, and so on. Without prosody, conversations would be just alternating short speeches: the human element would be lost. This book explains how speakers of American English use prosody to accomplish things in conversation. While native speakers do this without conscious awareness, that does not mean it is simple. Attempts to pin down the details have faced many challenges, but now, in a remarkable convergence, researchers in diverse traditions – experimental phonetics, formal phonology, conversation analysis, and signal processing – have independently begun using compatible styles of description. The shared core is the notion of prosodic construction. Prosodic constructions are recurring temporal patterns of prosodic features that express specific meanings and functions. These typically involve not only intonation but also energy, speaking rate, timing, and articulation properties, often with synchronized contributions by two participants. For example, consider one that is common in active listening. A listener can show interest and engagement by periodically nodding or saying uh-huh or the like, but this is not done at random. Example 1 illustrates.', 'year': 2019, 'in_acl': False, 'citationCount': 68, 'section': None, 'subsection': None}]
|
2021.acl-tutorials.6
|
Recognizing Multimodal Entailment
|
How information is created, shared and consumed has changed rapidly in recent decades, in part thanks to new social platforms and technologies on the web. With ever-larger amounts of unstructured and limited labels, organizing and reconciling information from different sources and modalities is a central challenge in machine learning. This cutting-edge tutorial aims to introduce the multimodal entailment task, which can be useful for detecting semantic alignments when a single modality alone does not suffice for a whole content understanding. Starting with a brief overview of natural language processing, computer vision, structured data and neural graph learning, we lay the foundations for the multimodal sections to follow. We then discuss recent multimodal learning literature covering visual, audio and language streams, and explore case studies focusing on tasks which require fine-grained understanding of visual and linguistic semantics question answering, veracity and hatred classification. Finally, we introduce a new dataset for recognizing multimodal entailment, exploring it in a hands-on collaborative section. Overall, this tutorial gives an overview of multimodal learning, introduces a multimodal entailment dataset, and encourages future research in the topic.
| 2,021
|
https://aclanthology.org/2021.acl-tutorials.6
|
ACL, IJCNLP
|
[{'id': 13401254, 'paperId': '93ed6511a0ae5b13ccf445081ab829d415ca47df', 'title': 'Neural Graph Machines: Learning Neural Networks Using Graphs', 'authors': [{'authorId': '23519191', 'name': 'T. Bui'}, {'authorId': '35014893', 'name': 'Sujith Ravi'}, {'authorId': '2525389', 'name': 'Vivek Ramavajjala'}], 'venue': 'arXiv.org', 'abstract': 'Label propagation is a powerful and flexible semi-supervised learning technique on graphs. Neural network architectures, on the other hand, have proven track records in many supervised learning tasks. In this work, we propose a training objective for neural networks, Neural Graph Machines, for combining the power of neural networks and label propagation. The new objective allows the neural networks to harness both labeled and unlabeled data by: (a) allowing the network to train using labeled data as in the supervised setting, (b) biasing the network to learn similar hidden representations for neighboring nodes on a graph, in the same vein as label propagation. Such architectures with the proposed objective can be trained efficiently using stochastic gradient descent and scaled to large graphs. The proposed method is experimentally validated on a wide range of tasks (multi- label classification on social graphs, news categorization and semantic intent classification) using different architectures (NNs, CNNs, and LSTM RNNs).', 'year': 2017, 'in_acl': False, 'citationCount': 28, 'section': None, 'subsection': None}, {'id': 13756489, 'paperId': '204e3073870fae3d05bcbc2f6a8e263d9b72e776', 'title': 'Attention is All you Need', 'authors': [{'authorId': '40348417', 'name': 'Ashish Vaswani'}, {'authorId': '1846258', 'name': 'Noam M. Shazeer'}, {'authorId': '3877127', 'name': 'Niki Parmar'}, {'authorId': '39328010', 'name': 'Jakob Uszkoreit'}, {'authorId': '145024664', 'name': 'Llion Jones'}, {'authorId': '19177000', 'name': 'Aidan N. Gomez'}, {'authorId': '40527594', 'name': 'Lukasz Kaiser'}, {'authorId': '3443442', 'name': 'Illia Polosukhin'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.', 'year': 2017, 'in_acl': False, 'citationCount': 109681, 'section': None, 'subsection': None}, {'id': 3626819, 'paperId': '3febb2bed8865945e7fddc99efd791887bb7e14f', 'title': 'Deep Contextualized Word Representations', 'authors': [{'authorId': '39139825', 'name': 'Matthew E. Peters'}, {'authorId': '50043859', 'name': 'Mark Neumann'}, {'authorId': '2136562', 'name': 'Mohit Iyyer'}, {'authorId': '40642935', 'name': 'Matt Gardner'}, {'authorId': '143997772', 'name': 'Christopher Clark'}, {'authorId': '2544107', 'name': 'Kenton Lee'}, {'authorId': '1982950', 'name': 'Luke Zettlemoyer'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.', 'year': 2018, 'in_acl': True, 'citationCount': 11138, 'section': None, 'subsection': None}, {'id': 52967399, 'paperId': 'df2b0e26d0599ce3e70df8a9da02e51594e0e992', 'title': 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding', 'authors': [{'authorId': '39172707', 'name': 'Jacob Devlin'}, {'authorId': '1744179', 'name': 'Ming-Wei Chang'}, {'authorId': '2544107', 'name': 'Kenton Lee'}, {'authorId': '3259253', 'name': 'Kristina Toutanova'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).', 'year': 2019, 'in_acl': True, 'citationCount': 84138, 'section': None, 'subsection': None}, {'id': 202888986, 'paperId': '7a064df1aeada7e69e5173f7d4c8606f4470365b', 'title': 'ALBERT: A Lite BERT for Self-supervised Learning of Language Representations', 'authors': [{'authorId': '2362534', 'name': 'Zhenzhong Lan'}, {'authorId': '46221498', 'name': 'Mingda Chen'}, {'authorId': '7685850', 'name': 'Sebastian Goodman'}, {'authorId': '1700980', 'name': 'Kevin Gimpel'}, {'authorId': '48267618', 'name': 'Piyush Sharma'}, {'authorId': '1737285', 'name': 'Radu Soricut'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations and longer training times. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT. Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and \\squad benchmarks while having fewer parameters compared to BERT-large. The code and the pretrained models are available at this https URL.', 'year': 2019, 'in_acl': False, 'citationCount': 5926, 'section': None, 'subsection': None}, {'id': 204838007, 'paperId': '6c4b76232bb72897685d19b3d264c6ee3005bc2b', 'title': 'Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer', 'authors': [{'authorId': '2402716', 'name': 'Colin Raffel'}, {'authorId': '1846258', 'name': 'Noam M. Shazeer'}, {'authorId': '145625142', 'name': 'Adam Roberts'}, {'authorId': '3844009', 'name': 'Katherine Lee'}, {'authorId': '46617804', 'name': 'Sharan Narang'}, {'authorId': '1380243217', 'name': 'Michael Matena'}, {'authorId': '2389316', 'name': 'Yanqi Zhou'}, {'authorId': '2157338362', 'name': 'Wei Li'}, {'authorId': '35025299', 'name': 'Peter J. Liu'}], 'venue': 'Journal of machine learning research', 'abstract': 'Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.', 'year': 2019, 'in_acl': False, 'citationCount': 16889, 'section': None, 'subsection': None}, {'id': 352650, 'paperId': 'a78273144520d57e150744cf75206e881e11cc5b', 'title': 'Multimodal Deep Learning', 'authors': [{'authorId': '2020608', 'name': 'Jiquan Ngiam'}, {'authorId': '2556428', 'name': 'A. Khosla'}, {'authorId': '1390603950', 'name': 'Mingyu Kim'}, {'authorId': '145578392', 'name': 'Juhan Nam'}, {'authorId': '1697141', 'name': 'Honglak Lee'}, {'authorId': '34699434', 'name': 'A. Ng'}], 'venue': 'International Conference on Machine Learning', 'abstract': 'Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning.', 'year': 2011, 'in_acl': False, 'citationCount': 3032, 'section': None, 'subsection': None}, {'id': 199453025, 'paperId': '65a9c7b0800c86a196bc14e7621ff895cc6ab287', 'title': 'ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks', 'authors': [{'authorId': '8553015', 'name': 'Jiasen Lu'}, {'authorId': '1746610', 'name': 'Dhruv Batra'}, {'authorId': '153432684', 'name': 'Devi Parikh'}, {'authorId': '2297229', 'name': 'Stefan Lee'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language. We extend the popular BERT architecture to a multi-modal two-stream model, pro-cessing both visual and textual inputs in separate streams that interact through co-attentional transformer layers. We pretrain our model through two proxy tasks on the large, automatically collected Conceptual Captions dataset and then transfer it to multiple established vision-and-language tasks -- visual question answering, visual commonsense reasoning, referring expressions, and caption-based image retrieval -- by making only minor additions to the base architecture. We observe significant improvements across tasks compared to existing task-specific models -- achieving state-of-the-art on all four tasks. Our work represents a shift away from learning groundings between vision and language only as part of task training and towards treating visual grounding as a pretrainable and transferable capability.', 'year': 2019, 'in_acl': False, 'citationCount': 3292, 'section': None, 'subsection': None}, {'id': 208637516, 'paperId': '9915315f5cae822e98c94382ce3b0a6f9a7f8e5e', 'title': '12-in-1: Multi-Task Vision and Language Representation Learning', 'authors': [{'authorId': '8553015', 'name': 'Jiasen Lu'}, {'authorId': '28554843', 'name': 'Vedanuj Goswami'}, {'authorId': '34849128', 'name': 'Marcus Rohrbach'}, {'authorId': '153432684', 'name': 'Devi Parikh'}, {'authorId': '121944615', 'name': 'Stefan Lee'}], 'venue': 'Computer Vision and Pattern Recognition', 'abstract': 'Much of vision-and-language research focuses on a small but diverse set of independent tasks and supporting datasets often studied in isolation; however, the visually-grounded language understanding skills required for success at these tasks overlap significantly. In this work, we investigate these relationships between vision-and-language tasks by developing a large-scale, multi-task model. Our approach culminates in a single model on 12 datasets from four broad categories of task including visual question answering, caption-based image retrieval, grounding referring expressions, and multimodal verification. Compared to independently trained single-task models, this represents a reduction from approximately 3 billion parameters to 270 million while simultaneously improving performance by 2.05 points on average across tasks. We use our multi-task framework to perform in-depth analysis of the effect of joint training diverse tasks. Further, we show that finetuning task-specific models from our single multi-task model can lead to further improvements, achieving performance at or above the state-of-the-art.', 'year': 2019, 'in_acl': False, 'citationCount': 458, 'section': None, 'subsection': None}, {'id': 201103729, 'paperId': '79c93274429d6355959f1e4374c2147bb81ea649', 'title': 'LXMERT: Learning Cross-Modality Encoder Representations from Transformers', 'authors': [{'authorId': '3218666', 'name': 'Hao Hao Tan'}, {'authorId': '143977268', 'name': 'Mohit Bansal'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Vision-and-language reasoning requires an understanding of visual concepts, language semantics, and, most importantly, the alignment and relationships between these two modalities. We thus propose the LXMERT (Learning Cross-Modality Encoder Representations from Transformers) framework to learn these vision-and-language connections. In LXMERT, we build a large-scale Transformer model that consists of three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. Next, to endow our model with the capability of connecting vision and language semantics, we pre-train the model with large amounts of image-and-sentence pairs, via five diverse representative pre-training tasks: masked language modeling, masked object prediction (feature regression and label classification), cross-modality matching, and image question answering. These tasks help in learning both intra-modality and cross-modality relationships. After fine-tuning from our pre-trained parameters, our model achieves the state-of-the-art results on two visual question answering datasets (i.e., VQA and GQA). We also show the generalizability of our pre-trained cross-modality model by adapting it to a challenging visual-reasoning task, NLVR2, and improve the previous best result by 22% absolute (54% to 76%). Lastly, we demonstrate detailed ablation studies to prove that both our novel model components and pre-training strategies significantly contribute to our strong results. Code and pre-trained models publicly available at: https://github.com/airsplay/lxmert', 'year': 2019, 'in_acl': True, 'citationCount': 2274, 'section': None, 'subsection': None}, {'id': 201317624, 'paperId': '4aa6298b606941a282d735fa3143da293199d2ca', 'title': 'VL-BERT: Pre-training of Generic Visual-Linguistic Representations', 'authors': [{'authorId': '145499378', 'name': 'Weijie Su'}, {'authorId': '2578924', 'name': 'Xizhou Zhu'}, {'authorId': '2112823372', 'name': 'Yue Cao'}, {'authorId': '2156072370', 'name': 'Bin Li'}, {'authorId': '152309485', 'name': 'Lewei Lu'}, {'authorId': '49807919', 'name': 'Furu Wei'}, {'authorId': '3304536', 'name': 'Jifeng Dai'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'We introduce a new pre-trainable generic representation for visual-linguistic tasks, called Visual-Linguistic BERT (VL-BERT for short). VL-BERT adopts the simple yet powerful Transformer model as the backbone, and extends it to take both visual and linguistic embedded features as input. In it, each element of the input is either of a word from the input sentence, or a region-of-interest (RoI) from the input image. It is designed to fit for most of the visual-linguistic downstream tasks. To better exploit the generic representation, we pre-train VL-BERT on the massive-scale Conceptual Captions dataset, together with text-only corpus. Extensive empirical analysis demonstrates that the pre-training procedure can better align the visual-linguistic clues and benefit the downstream tasks, such as visual commonsense reasoning, visual question answering and referring expression comprehension. It is worth noting that VL-BERT achieved the first place of single model on the leaderboard of the VCR benchmark. Code is released at \\url{this https URL}.', 'year': 2019, 'in_acl': False, 'citationCount': 1561, 'section': None, 'subsection': None}, {'id': 102483628, 'paperId': 'c41a11c0e9b8b92b4faaf97749841170b760760a', 'title': 'VideoBERT: A Joint Model for Video and Language Representation Learning', 'authors': [{'authorId': '1491624845', 'name': 'Chen Sun'}, {'authorId': '49588480', 'name': 'Austin Myers'}, {'authorId': '1856025', 'name': 'Carl Vondrick'}, {'authorId': '1702318', 'name': 'K. Murphy'}, {'authorId': '2462253', 'name': 'C. Schmid'}], 'venue': 'IEEE International Conference on Computer Vision', 'abstract': 'Self-supervised learning has become increasingly important to leverage the abundance of unlabeled data available on platforms like YouTube. Whereas most existing approaches learn low-level representations, we propose a joint visual-linguistic model to learn high-level features without any explicit supervision. In particular, inspired by its recent success in language modeling, we build upon the BERT model to learn bidirectional joint distributions over sequences of visual and linguistic tokens, derived from vector quantization of video data and off-the-shelf speech recognition outputs, respectively. We use VideoBERT in numerous tasks, including action classification and video captioning. We show that it can be applied directly to open-vocabulary classification, and confirm that large amounts of training data and cross-modal information are critical to performance. Furthermore, we outperform the state-of-the-art on video captioning, and quantitative results verify that the model learns high-level semantic features.', 'year': 2019, 'in_acl': False, 'citationCount': 1154, 'section': None, 'subsection': None}, {'id': 203594078, 'paperId': '025a0dc4a2a98742f1b410b6318a46de2c854b22', 'title': 'Learning Video Representations using Contrastive Bidirectional Transformer', 'authors': [{'authorId': '1491624845', 'name': 'Chen Sun'}, {'authorId': '9943923', 'name': 'Fabien Baradel'}, {'authorId': '1702318', 'name': 'K. Murphy'}, {'authorId': '2462253', 'name': 'C. Schmid'}], 'venue': '', 'abstract': 'This paper proposes a self-supervised learning approach for video features that results in significantly improved performance on downstream tasks (such as video classification, captioning and segmentation) compared to existing methods. Our method extends the BERT model for text sequences to the case of sequences of real-valued feature vectors, by replacing the softmax loss with noise contrastive estimation (NCE). We also show how to learn representations from sequences of visual features and sequences of words derived from ASR (automatic speech recognition), and show that such cross-modal training (when possible) helps even more.', 'year': 2019, 'in_acl': False, 'citationCount': 262, 'section': None, 'subsection': None}, {'id': 220249786, 'paperId': '10d11f0045dc7f217c7f01bc6cbb47929e9b8808', 'title': 'Self-Supervised MultiModal Versatile Networks', 'authors': [{'authorId': '2285263', 'name': 'Jean-Baptiste Alayrac'}, {'authorId': '39257069', 'name': 'Adrià Recasens'}, {'authorId': '145721402', 'name': 'R. Schneider'}, {'authorId': '2065840140', 'name': "Relja Arandjelovi'c"}, {'authorId': '16092809', 'name': 'Jason Ramapuram'}, {'authorId': '3364908', 'name': 'J. Fauw'}, {'authorId': '1466466597', 'name': 'Lucas Smaira'}, {'authorId': '48373216', 'name': 'S. Dieleman'}, {'authorId': '1688869', 'name': 'Andrew Zisserman'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'Videos are a rich source of multi-modal supervision. In this work, we learn representations using self-supervision by leveraging three modalities naturally present in videos: vision, audio and language. To this end, we introduce the notion of a multimodal versatile network -- a network that can ingest multiple modalities and whose representations enable downstream tasks in multiple modalities. In particular, we explore how best to combine the modalities, such that fine-grained representations of audio and vision can be maintained, whilst also integrating text into a common embedding. Driven by versatility, we also introduce a novel process of deflation, so that the networks can be effortlessly applied to the visual data in the form of video or a static image. We demonstrate how such networks trained on large collections of unlabelled video data can be applied on video, video-text, image and audio tasks. Equipped with these representations, we obtain state-of-the-art performance on multiple challenging benchmarks including UCF101, HMDB51 and ESC-50 when compared to previous self-supervised work.', 'year': 2020, 'in_acl': False, 'citationCount': 353, 'section': None, 'subsection': None}]
|
2021.eacl-tutorials.1
|
Unsupervised Natural Language Parsing (Introductory Tutorial)
|
Unsupervised parsing learns a syntactic parser from training sentences without parse tree annotations. Recently, there has been a resurgence of interest in unsupervised parsing, which can be attributed to the combination of two trends in the NLP community: a general trend towards unsupervised training or pre-training, and an emerging trend towards finding or modeling linguistic structures in neural models. In this tutorial, we will introduce to the general audience what unsupervised parsing does and how it can be useful for and beyond syntactic parsing. We will then provide a systematic overview of major classes of approaches to unsupervised parsing, namely generative and discriminative approaches, and analyze their relative strengths and weaknesses. We will cover both decade-old statistical approaches and more recent neural approaches to give the audience a sense of the historical and recent development of the field. We will also discuss emerging research topics such as BERT-based approaches and visually grounded learning.
| 2,021
|
https://aclanthology.org/2021.eacl-tutorials.1
|
EACL
|
[{'id': 1364249, 'paperId': 'ca2858b2040724ae9f29ba601df12aae2e539596', 'title': 'Corpus-Based Induction of Syntactic Structure: Models of Dependency and Constituency', 'authors': [{'authorId': '38666915', 'name': 'D. Klein'}, {'authorId': '144783904', 'name': 'Christopher D. Manning'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We present a generative model for the unsupervised learning of dependency structures. We also describe the multiplicative combination of this dependency model with a model of linear constituency. The product model outperforms both components on their respective evaluation metrics, giving the best published figures for unsupervised dependency parsing and unsupervised constituency parsing. We also demonstrate that the combined model works and is robust cross-linguistically, being able to exploit either attachment or distributional regularities that are salient in the data.', 'year': 2004, 'in_acl': True, 'citationCount': 572, 'section': None, 'subsection': None}, {'id': 12885015, 'paperId': 'b360859eb746963767e554ae32cee1d1f3bcbc22', 'title': 'Unsupervised Neural Dependency Parsing', 'authors': [{'authorId': '50262192', 'name': 'Yong Jiang'}, {'authorId': '144836032', 'name': 'Wenjuan Han'}, {'authorId': '40341553', 'name': 'Kewei Tu'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Unsupervised dependency parsing aims to learn a dependency grammar from text annotated with only POS tags. Various features and inductive biases are often used to incorporate prior knowledge into learning. One useful type of prior information is that there exist correlations between the parameters of grammar rules involving different POS tags. Previous work employed manually designed features or special prior distributions to encode such information. In this paper, we propose a novel approach to unsupervised dependency parsing that uses a neural model to predict grammar rule probabilities based on distributed representation of POS tags. The distributed representation is automatically learned from data and captures the correlations between POS tags. Our experiments show that our approach outperforms previous approaches utilizing POS correlations and is competitive with recent state-of-the-art approaches on nine different languages. © 2016 Association for Computational Linguistics', 'year': 2016, 'in_acl': True, 'citationCount': 57, 'section': None, 'subsection': None}, {'id': 7324510, 'paperId': '0bf69a49c2baed67fa9a044daa24b9e199e73093', 'title': 'Inducing Probabilistic Grammars by Bayesian Model Merging', 'authors': [{'authorId': '1762744', 'name': 'A. Stolcke'}, {'authorId': '1808760', 'name': 'S. Omohundro'}], 'venue': 'International Conference on Graphics and Interaction', 'abstract': "We describe a framework for inducing probabilistic grammars from corpora of positive samples. First, samples are incorporated by adding ad-hoc rules to a working grammar; subsequently, elements of the model (such as states or nonterminals) are merged to achieve generalization and a more compact representation. The choice of what to merge and when to stop is governed by the Bayesian posterior probability of the grammar given the data, which formalizes a trade-off between a close fit to the data and a default preference for simpler models (‘Occam's Razor’). The general scheme is illustrated using three types of probabilistic grammars: Hidden Markov models, class-based n-grams, and stochastic context-free grammars.", 'year': 1994, 'in_acl': False, 'citationCount': 281, 'section': None, 'subsection': None}, {'id': 6397366, 'paperId': 'e55e5df3dd48e913d9c2a1704a5c1bf6d8e5ba1d', 'title': 'Unambiguity Regularization for Unsupervised Learning of Probabilistic Grammars', 'authors': [{'authorId': '40341553', 'name': 'Kewei Tu'}, {'authorId': '145513516', 'name': 'Vasant G Honavar'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'We introduce a novel approach named unambiguity regularization for unsupervised learning of probabilistic natural language grammars. The approach is based on the observation that natural language is remarkably unambiguous in the sense that only a tiny portion of the large number of possible parses of a natural language sentence are syntactically valid. We incorporate an inductive bias into grammar learning in favor of grammars that lead to unambiguous parses on natural language sentences. The resulting family of algorithms includes the expectation-maximization algorithm (EM) and its variant, Viterbi EM, as well as a so-called softmax-EM algorithm. The softmax-EM algorithm can be implemented with a simple and computationally efficient extension to standard EM. In our experiments of unsupervised dependency grammar learning, we show that unambiguity regularization is beneficial to learning, and in combination with annealing (of the regularization strength) and sparsity priors it leads to improvement over the current state of the art.', 'year': 2012, 'in_acl': True, 'citationCount': 37, 'section': None, 'subsection': None}, {'id': 28556787, 'paperId': 'a1733d564c3283199aa23cf68fdf9e944f0c5359', 'title': 'CRF Autoencoder for Unsupervised Dependency Parsing', 'authors': [{'authorId': '4442130', 'name': 'Jiong Cai'}, {'authorId': '50262192', 'name': 'Yong Jiang'}, {'authorId': '40341553', 'name': 'Kewei Tu'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Unsupervised dependency parsing, which tries to discover linguistic dependency structures from unannotated data, is a very challenging task. Almost all previous work on this task focuses on learning generative models. In this paper, we develop an unsupervised dependency parsing model based on the CRF autoencoder. The encoder part of our model is discriminative and globally normalized which allows us to use rich features as well as universal linguistic priors. We propose an exact algorithm for parsing as well as a tractable learning algorithm. We evaluated the performance of our model on eight multilingual treebanks and found that our model achieved comparable performance with state-of-the-art approaches.', 'year': 2017, 'in_acl': True, 'citationCount': 34, 'section': None, 'subsection': None}, {'id': 102350997, 'paperId': 'd7dc79050f17154e7cf57501cf6cab1b9c18f232', 'title': 'Unsupervised Recurrent Neural Network Grammars', 'authors': [{'authorId': '38367242', 'name': 'Yoon Kim'}, {'authorId': '2531268', 'name': 'Alexander M. Rush'}, {'authorId': '2109352263', 'name': 'Lei Yu'}, {'authorId': '3376845', 'name': 'A. Kuncoro'}, {'authorId': '1745899', 'name': 'Chris Dyer'}, {'authorId': '94303026', 'name': 'Gábor Melis'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Recurrent neural network grammars (RNNG) are generative models of language which jointly model syntax and surface structure by incrementally generating a syntax tree and sentence in a top-down, left-to-right order. Supervised RNNGs achieve strong language modeling and parsing performance, but require an annotated corpus of parse trees. In this work, we experiment with unsupervised learning of RNNGs. Since directly marginalizing over the space of latent trees is intractable, we instead apply amortized variational inference. To maximize the evidence lower bound, we develop an inference network parameterized as a neural CRF constituency parser. On language modeling, unsupervised RNNGs perform as well their supervised counterparts on benchmarks in English and Chinese. On constituency grammar induction, they are competitive with recent neural language models that induce tree structures from words through attention mechanisms.', 'year': 2019, 'in_acl': True, 'citationCount': 113, 'section': None, 'subsection': None}]
|
2021.eacl-tutorials.2
|
Aggregating and Learning from Multiple Annotators
|
The success of NLP research is founded on high-quality annotated datasets, which are usually obtained from multiple expert annotators or crowd workers. The standard practice to training machine learning models is to first adjudicate the disagreements and then perform the training. To this end, there has been a lot of work on aggregating annotations, particularly for classification tasks. However, many other tasks, particularly in NLP, have unique characteristics not considered by standard models of annotation, e.g., label interdependencies in sequence labelling tasks, unrestricted labels for anaphoric annotation, or preference labels for ranking texts. In recent years, researchers have picked up on this and are covering the gap. A first objective of this tutorial is to connect NLP researchers with state-of-the-art aggregation models for a diverse set of canonical language annotation tasks. There is also a growing body of recent work arguing that following the convention and training with adjudicated labels ignores any uncertainty the labellers had in their classifications, which results in models with poorer generalisation capabilities. Therefore, a second objective of this tutorial is to teach NLP workers how they can augment their (deep) neural models to learn from data with multiple interpretations.
| 2,021
|
https://aclanthology.org/2021.eacl-tutorials.2
|
EACL
|
[{'id': 219302730, 'paperId': '919aa58480ac34f6b7ea433d5fa6368745aa572b', 'title': 'The Benefits of a Model of Annotation', 'authors': [{'authorId': '1703046', 'name': 'R. Passonneau'}, {'authorId': '2579894', 'name': 'Bob Carpenter'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'Standard agreement measures for interannotator reliability are neither necessary nor sufficient to ensure a high quality corpus. In a case study of word sense annotation, conventional methods for evaluating labels from trained annotators are contrasted with a probabilistic annotation model applied to crowdsourced data. The annotation model provides far more information, including a certainty measure for each gold standard label; the crowdsourced data was collected at less than half the cost of the conventional approach.', 'year': 2014, 'in_acl': True, 'citationCount': 40, 'section': None, 'subsection': None}, {'id': 58535743, 'paperId': '6806f6aba9170a4d6e8a6ebeb25c539fe756aebb', 'title': 'Comparing Bayesian Models of Annotation', 'authors': [{'authorId': '11545402', 'name': 'Silviu Paun'}, {'authorId': '2579894', 'name': 'Bob Carpenter'}, {'authorId': '144010750', 'name': 'Jon Chamberlain'}, {'authorId': '2022288', 'name': 'Dirk Hovy'}, {'authorId': '2993548', 'name': 'Udo Kruschwitz'}, {'authorId': '1678591', 'name': 'Massimo Poesio'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'The analysis of crowdsourced annotations in natural language processing is concerned with identifying (1) gold standard labels, (2) annotator accuracies and biases, and (3) item difficulties and error patterns. Traditionally, majority voting was used for 1, and coefficients of agreement for 2 and 3. Lately, model-based analysis of corpus annotations have proven better at all three tasks. But there has been relatively little work comparing them on the same datasets. This paper aims to fill this gap by analyzing six models of annotation, covering different approaches to annotator ability, item difficulty, and parameter pooling (tying) across annotators and items. We evaluate these models along four aspects: comparison to gold labels, predictive accuracy for new annotations, annotator characterization, and item difficulty, using four datasets with varying degrees of noise in the form of random (spammy) annotators. We conclude with guidelines for model selection, application, and implementation.', 'year': 2018, 'in_acl': True, 'citationCount': 86, 'section': None, 'subsection': None}, {'id': 202538925, 'paperId': '8905e237469322e921d6a1bc7e8e0269a99b4fc1', 'title': 'A Bayesian Approach for Sequence Tagging with Crowds', 'authors': [{'authorId': '145795026', 'name': 'Edwin Simpson'}, {'authorId': '1730400', 'name': 'Iryna Gurevych'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Current methods for sequence tagging, a core task in NLP, are data hungry, which motivates the use of crowdsourcing as a cheap way to obtain labelled data. However, annotators are often unreliable and current aggregation methods cannot capture common types of span annotation error. To address this, we propose a Bayesian method for aggregating sequence tags that reduces errors by modelling sequential dependencies between the annotations as well as the ground-truth labels. By taking a Bayesian approach, we account for uncertainty in the model due to both annotator errors and the lack of data for modelling annotators who complete few tasks. We evaluate our model on crowdsourced data for named entity recognition, information extraction and argument mining, showing that our sequential model outperforms the previous state of the art, and that Bayesian approaches outperform non-Bayesian alternatives. We also find that our approach can reduce crowdsourcing costs through more effective active learning, as it better captures uncertainty in the sequence labels when there are few annotations.', 'year': 2018, 'in_acl': True, 'citationCount': 29, 'section': None, 'subsection': None}, {'id': 23142740, 'paperId': '83d322146d7558124bf6052a16adff4fe22a9412', 'title': 'Aggregating Crowd Wisdoms with Label-aware Autoencoders', 'authors': [{'authorId': '2705335', 'name': "Li'ang Yin"}, {'authorId': '47180442', 'name': 'Jianhua Han'}, {'authorId': '2108309275', 'name': 'Weinan Zhang'}, {'authorId': '1811427', 'name': 'Yong Yu'}], 'venue': 'International Joint Conference on Artificial Intelligence', 'abstract': 'Aggregating crowd wisdoms takes multiple labels from various sources and infers true labels for objects. Recent research work makes progress by learning source credibility from data and roughly form three kinds of modeling frameworks: weighted majority voting, trust propagation, and generative models. In this paper, we propose a novel framework named Label-Aware Autoencoders (LAA) to aggregate crowd wisdoms. LAA integrates a classifier and a reconstructor into a unified model to infer labels in an unsupervised manner. Analogizing classical autoencoders, we can regard the classifier as an encoder, the reconstructor as a decoder, and inferred labels as latent features. To the best of our knowledge, it is the first trial to combine label aggregation with autoencoders. We adopt networks to implement the classifier and the reconstructor which have the potential to automatically learn underlying patterns of source credibility. To further improve inference accuracy, we introduce object ambiguity and latent aspects into LAA. Experiments on three real-world datasets show that proposed models achieve impressive inference accuracy improvement over state-of-the-art models.', 'year': 2017, 'in_acl': False, 'citationCount': 50, 'section': None, 'subsection': None}, {'id': 201103726, 'paperId': 'd2a2be6ce932a0f1939f31cfff4d64ea3d76723d', 'title': 'Human Uncertainty Makes Classification More Robust', 'authors': [{'authorId': '6672056', 'name': 'Joshua C. Peterson'}, {'authorId': '4348165', 'name': 'Ruairidh M. Battleday'}, {'authorId': '1799860', 'name': 'T. Griffiths'}, {'authorId': '2192178', 'name': 'Olga Russakovsky'}], 'venue': 'IEEE International Conference on Computer Vision', 'abstract': 'The classification performance of deep neural networks has begun to asymptote at near-perfect levels. However, their ability to generalize outside the training set and their robustness to adversarial attacks have not. In this paper, we make progress on this problem by training with full label distributions that reflect human perceptual uncertainty. We first present a new benchmark dataset which we call CIFAR10H, containing a full distribution of human labels for each image of the CIFAR10 test set. We then show that, while contemporary classifiers fail to exhibit human-like uncertainty on their own, explicit training on our dataset closes this gap, supports improved generalization to increasingly out-of-training-distribution test datasets, and confers robustness to adversarial attacks.', 'year': 2019, 'in_acl': False, 'citationCount': 269, 'section': None, 'subsection': None}, {'id': 19179988, 'paperId': 'a464e45d17da3e60bffb87290fab46f89607b7be', 'title': 'Deep learning from crowds', 'authors': [{'authorId': '143791184', 'name': 'Filipe Rodrigues'}, {'authorId': '143915473', 'name': 'Francisco Câmara Pereira'}], 'venue': 'AAAI Conference on Artificial Intelligence', 'abstract': '\n \n Over the last few years, deep learning has revolutionized the field of machine learning by dramatically improving the state-of-the-art in various domains. However, as the size of supervised artificial neural networks grows, typically so does the need for larger labeled datasets. Recently, crowdsourcing has established itself as an efficient and cost-effective solution for labeling large sets of data in a scalable manner, but it often requires aggregating labels from multiple noisy contributors with different levels of expertise. In this paper, we address the problem of learning deep neural networks from crowds. We begin by describing an EM algorithm for jointly learning the parameters of the network and the reliabilities of the annotators. Then, a novel general-purpose crowd layer is proposed, which allows us to train deep neural networks end-to-end, directly from the noisy labels of multiple annotators, using only backpropagation. We empirically show that the proposed approach is able to internally capture the reliability and biases of different annotators and achieve new state-of-the-art results for various crowdsourced datasets across different settings, namely classification, regression and sequence labeling.\n \n', 'year': 2017, 'in_acl': False, 'citationCount': 238, 'section': None, 'subsection': None}]
|
2021.eacl-tutorials.3
|
Tutorial: End-to-End Speech Translation
|
Speech translation is the translation of speech in one language typically to text in another, traditionally accomplished through a combination of automatic speech recognition and machine translation. Speech translation has attracted interest for many years, but the recent successful applications of deep learning to both individual tasks have enabled new opportunities through joint modeling, in what we today call ‘end-to-end speech translation.’ In this tutorial we introduce the techniques used in cutting-edge research on speech translation. Starting from the traditional cascaded approach, we give an overview on data sources and model architectures to achieve state-of-the art performance with end-to-end speech translation for both high- and low-resource languages. In addition, we discuss methods to evaluate analyze the proposed solutions, as well as the challenges faced when applying speech translation models for real-world applications.
| 2,021
|
https://aclanthology.org/2021.eacl-tutorials.3
|
EACL
|
[{'id': 215754220, 'paperId': 'b57a537ae33092b7acf83dbd0470c6c03752fc79', 'title': 'Speech Translation and the End-to-End Promise: Taking Stock of Where We Are', 'authors': [{'authorId': '3011998', 'name': 'Matthias Sperber'}, {'authorId': '1775245', 'name': 'M. Paulik'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Over its three decade history, speech translation has experienced several shifts in its primary research themes; moving from loosely coupled cascades of speech recognition and machine translation, to exploring questions of tight coupling, and finally to end-to-end models that have recently attracted much attention. This paper provides a brief survey of these developments, along with a discussion of the main challenges of traditional approaches which stem from committing to intermediate representations from the speech recognizer, and from training cascaded models separately towards different objectives. Recent end-to-end modeling techniques promise a principled way of overcoming these issues by allowing joint training of all model components and removing the need for explicit intermediate representations. However, a closer look reveals that many end-to-end models fall short of solving these issues, due to compromises made to address data scarcity. This paper provides a unifying categorization and nomenclature that covers both traditional and recent approaches and that may help researchers by highlighting both trade-offs and open research questions.', 'year': 2020, 'in_acl': True, 'citationCount': 98, 'section': 'Survey paper', 'subsection': None}, {'id': 266923, 'paperId': 'c95a9010bb05d77e334e280fb6dd987aaf053098', 'title': 'Listen and Translate: A Proof of Concept for End-to-End Speech-to-Text Translation', 'authors': [{'authorId': '3449364', 'name': 'Alexandre Berard'}, {'authorId': '1721354', 'name': 'O. Pietquin'}, {'authorId': '1890220', 'name': 'Christophe Servan'}, {'authorId': '143823463', 'name': 'L. Besacier'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'This paper proposes a first attempt to build an end-to-end speech-to-text translation \nsystem, which does not use source language text during learning or decoding. Relaxing the need \nfor source language transcription would drastically change the data collection methodology in \nspeech translation, especially in under-resourced scenarios.', 'year': 2016, 'in_acl': False, 'citationCount': 300, 'section': 'The first papers on end-to-end ST', 'subsection': None}, {'id': 7857444, 'paperId': 'dda047fd87610911c82778243f72f60d1c063383', 'title': 'Sequence-to-Sequence Models Can Directly Translate Foreign Speech', 'authors': [{'authorId': '39571582', 'name': 'Ron J. Weiss'}, {'authorId': '2292403', 'name': 'J. Chorowski'}, {'authorId': '3111912', 'name': 'N. Jaitly'}, {'authorId': '48607963', 'name': 'Yonghui Wu'}, {'authorId': '2545358', 'name': 'Z. Chen'}], 'venue': 'Interspeech', 'abstract': 'We present a recurrent encoder-decoder deep neural network architecture that directly translates speech in one language into text in another. The model does not explicitly transcribe the speech into text in the source language, nor does it require supervision from the ground truth source language transcription during training. We apply a slightly modified sequence-to-sequence with attention architecture that has previously been used for speech recognition and show that it can be repurposed for this more complex task, illustrating the power of attention-based models. A single model trained end-to-end obtains state-of-the-art performance on the Fisher Callhome Spanish-English speech translation task, outperforming a cascade of independently trained sequence-to-sequence speech recognition and machine translation models by 1.8 BLEU points on the Fisher test set. In addition, we find that making use of the training data in both languages by multi-task training sequence-to-sequence speech translation and recognition models with a shared encoder network can improve performance by a further 1.4 BLEU points.', 'year': 2017, 'in_acl': False, 'citationCount': 326, 'section': 'The first papers on end-to-end ST', 'subsection': None}, {'id': 174800963, 'paperId': '999b4b988180b9168d4fd4bdceaf421cd7f17096', 'title': 'MuST-C: a Multilingual Speech Translation Corpus', 'authors': [{'authorId': '39640268', 'name': 'Mattia Antonino Di Gangi'}, {'authorId': '27086451', 'name': 'R. Cattoni'}, {'authorId': '2486762', 'name': 'L. Bentivogli'}, {'authorId': '2138026', 'name': 'Matteo Negri'}, {'authorId': '145862931', 'name': 'Marco Turchi'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Current research on spoken language translation (SLT) has to confront with the scarcity of sizeable and publicly available training corpora. This problem hinders the adoption of neural end-to-end approaches, which represent the state of the art in the two parent tasks of SLT: automatic speech recognition and machine translation. To fill this gap, we created MuST-C, a multilingual speech translation corpus whose size and quality will facilitate the training of end-to-end systems for SLT from English into 8 languages. For each target language, MuST-C comprises at least 385 hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual transcriptions and translations. Together with a description of the corpus creation methodology (scalable to add new data and cover new languages), we provide an empirical verification of its quality and SLT results computed with a state-of-the-art approach on each language direction.', 'year': 2019, 'in_acl': True, 'citationCount': 397, 'section': 'Data for end-to-end ST', 'subsection': None}, {'id': 52160439, 'paperId': '19f66dd83abef074b04169ed448251b55429e6d9', 'title': 'Pre-training on high-resource speech recognition improves low-resource speech-to-text translation', 'authors': [{'authorId': '3469333', 'name': 'Sameer Bansal'}, {'authorId': '2308553', 'name': 'H. Kamper'}, {'authorId': '2924113', 'name': 'Karen Livescu'}, {'authorId': '144871732', 'name': 'Adam Lopez'}, {'authorId': '1991315', 'name': 'S. Goldwater'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'We present a simple approach to improve direct speech-to-text translation (ST) when the source language is low-resource: we pre-train the model on a high-resource automatic speech recognition (ASR) task, and then fine-tune its parameters for ST. We demonstrate that our approach is effective by pre-training on 300 hours of English ASR data to improve Spanish English ST from 10.8 to 20.2 BLEU when only 20 hours of Spanish-English ST training data are available. Through an ablation study, we find that the pre-trained encoder (acoustic model) accounts for most of the improvement, despite the fact that the shared language in these tasks is the target language text, not the source language audio. Applying this insight, we show that pre-training on ASR helps ST even when the ASR language differs from both source and target ST languages: pre-training on French ASR also improves Spanish-English ST. Finally, we show that the approach improves performance on a true low-resource task: pre-training on a combination of English ASR and French ASR improves Mboshi-French ST, where only 4 hours of data are available, from 3.5 to 7.1 BLEU.', 'year': 2018, 'in_acl': True, 'citationCount': 184, 'section': 'Integrating additional data', 'subsection': None}, {'id': 53222964, 'paperId': 'b6222ad8acdf327368b45fb7fa5f4cf374d6da80', 'title': 'Leveraging Weakly Supervised Data to Improve End-to-end Speech-to-text Translation', 'authors': [{'authorId': '1691944', 'name': 'Ye Jia'}, {'authorId': '145657834', 'name': 'Melvin Johnson'}, {'authorId': '3153147', 'name': 'Wolfgang Macherey'}, {'authorId': '39571582', 'name': 'Ron J. Weiss'}, {'authorId': '145144022', 'name': 'Yuan Cao'}, {'authorId': '145039780', 'name': 'Chung-Cheng Chiu'}, {'authorId': '51893005', 'name': 'Naveen Ari'}, {'authorId': '51923161', 'name': 'Stella Laurenzo'}, {'authorId': '48607963', 'name': 'Yonghui Wu'}], 'venue': 'IEEE International Conference on Acoustics, Speech, and Signal Processing', 'abstract': 'End-to-end Speech Translation (ST) models have many potential advantages when compared to the cascade of Automatic Speech Recognition (ASR) and text Machine Translation (MT) models, including lowered inference latency and the avoidance of error compounding. However, the quality of end-to-end ST is often limited by a paucity of training data, since it is difficult to collect large parallel corpora of speech and translated transcript pairs. Previous studies have proposed the use of pre-trained components and multi-task learning in order to benefit from weakly supervised training data, such as speech-to-transcript or text-to-foreign-text pairs. In this paper, we demonstrate that using pre-trained MT or text-to-speech (TTS) synthesis models to convert weakly supervised data into speech-to-translation pairs for ST training can be more effective than multi-task learning. Furthermore, we demonstrate that a high quality end-to-end ST model can be trained using only weakly supervised datasets, and that synthetic data sourced from unlabeled monolingual text or speech can be used to improve performance. Finally, we discuss methods for avoiding overfitting to synthetic speech with a quantitative ablation study.', 'year': 2018, 'in_acl': False, 'citationCount': 155, 'section': 'Integrating additional data', 'subsection': None}, {'id': 119111907, 'paperId': '5dab371fecc43904c0b785a50136d20cee43a99a', 'title': 'Attention-Passing Models for Robust and Data-Efficient End-to-End Speech Translation', 'authors': [{'authorId': '3011998', 'name': 'Matthias Sperber'}, {'authorId': '1700325', 'name': 'Graham Neubig'}, {'authorId': '2920247', 'name': 'J. Niehues'}, {'authorId': '1724972', 'name': 'A. Waibel'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'Speech translation has traditionally been approached through cascaded models consisting of a speech recognizer trained on a corpus of transcribed speech, and a machine translation system trained on parallel texts. Several recent works have shown the feasibility of collapsing the cascade into a single, direct model that can be trained in an end-to-end fashion on a corpus of translated speech. However, experiments are inconclusive on whether the cascade or the direct model is stronger, and have only been conducted under the unrealistic assumption that both are trained on equal amounts of data, ignoring other available speech recognition and machine translation corpora. In this paper, we demonstrate that direct speech translation models require more data to perform well than cascaded models, and although they allow including auxiliary data through multi-task training, they are poor at exploiting such data, putting them at a severe disadvantage. As a remedy, we propose the use of end- to-end trainable models with two attention mechanisms, the first establishing source speech to source text alignments, the second modeling source to target text alignment. We show that such models naturally decompose into multi-task–trainable recognition and translation tasks and propose an attention-passing technique that alleviates error propagation issues in a previous formulation of a model with two attention stages. Our proposed model outperforms all examined baselines and is able to exploit auxiliary training data much more effectively than direct attentional models.', 'year': 2019, 'in_acl': True, 'citationCount': 97, 'section': 'Integrating additional data', 'subsection': None}, {'id': 218971763, 'paperId': '15fb586993d1b269a72e61cfcebb69a56de6a3f1', 'title': 'Phone Features Improve Speech Translation', 'authors': [{'authorId': '3448427', 'name': 'Elizabeth Salesky'}, {'authorId': '1690706', 'name': 'A. Black'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'End-to-end models for speech translation (ST) more tightly couple speech recognition (ASR) and machine translation (MT) than a traditional cascade of separate ASR and MT models, with simpler model architectures and the potential for reduced error propagation. Their performance is often assumed to be superior, though in many conditions this is not yet the case. We compare cascaded and end-to-end models across high, medium, and low-resource conditions, and show that cascades remain stronger baselines. Further, we introduce two methods to incorporate phone features into ST models. We show that these features improve both architectures, closing the gap between end-to-end models and cascades, and outperforming previous academic work – by up to 9 BLEU on our low-resource setting.', 'year': 2020, 'in_acl': True, 'citationCount': 27, 'section': 'Data representation', 'subsection': None}, {'id': 202714219, 'paperId': 'd0a313a557bd43a7cacb3e5479cd7c491f7faa5c', 'title': 'Adapting Transformer to End-to-End Spoken Language Translation', 'authors': [{'authorId': '39640268', 'name': 'Mattia Antonino Di Gangi'}, {'authorId': '2138026', 'name': 'Matteo Negri'}, {'authorId': '145862931', 'name': 'Marco Turchi'}], 'venue': 'Interspeech', 'abstract': 'Neural end-to-end architectures for sequence-to-sequence learning represent the state of the art in machine translation (MT) and speech recognition (ASR). Their use is also promising for end-to-end spoken language translation (SLT), which combines the main challenges of ASR and MT. Exploiting existing neural architectures, however, requires task-specific adaptations. A network that has obtained state-of-the-art results in MT with reduced training time is Transformer. However, its direct application to speech input is hindered by two limitations of the self-attention network on which it is based: quadratic memory complexity and no explicit modeling of short-range dependencies between input features. High memory complexity poses constraints to the size of models trainable with a GPU, while the inadequate modeling of local dependencies harms final translation quality. This paper presents an adaptation of Transformer to end-to-end SLT that consists in: i) downsampling the input with convolutional neural networks to make the training process feasible on GPUs, ii) modeling the bidimensional nature of a spectrogram, and iii) adding a distance penalty to the attention, so to bias it towards local context. SLT experiments on 8 language directions show that, with our adaptation, Transformer outperforms a strong RNN-based baseline with a significant reduction in training time.', 'year': 2019, 'in_acl': False, 'citationCount': 121, 'section': 'Adapting the Transformer for ST', 'subsection': None}, {'id': 203610481, 'paperId': '8b231737e0048a400527d89aa56c712e8b9bc690', 'title': 'Multilingual End-to-End Speech Translation', 'authors': [{'authorId': '49276525', 'name': 'H. Inaguma'}, {'authorId': '1800354', 'name': 'Kevin Duh'}, {'authorId': '1717105', 'name': 'Tatsuya Kawahara'}, {'authorId': '1746678', 'name': 'Shinji Watanabe'}], 'venue': 'Automatic Speech Recognition & Understanding', 'abstract': 'In this paper, we propose a simple yet effective framework for multilingual end-to-end speech translation (ST), in which speech utterances in source languages are directly translated to the desired target languages with a universal sequence-to-sequence architecture. While multilingual models have shown to be useful for automatic speech recognition (ASR) and machine translation (MT), this is the first time they are applied to the end-to-end ST problem. We show the effectiveness of multilingual end-to-end ST in two scenarios: one-to-many and many-to-many translations with publicly available data. We experimentally confirm that multilingual end-to-end ST models significantly outperform bilingual ones in both scenarios. The generalization of multilingual training is also evaluated in a transfer learning scenario to a very low-resource language pair. All of our codes and the database are publicly available to encourage further research in this emergent multilingual ST topic11Available at https://github.com/espnet/espnet..', 'year': 2019, 'in_acl': False, 'citationCount': 83, 'section': 'Multilingual models', 'subsection': None}]
|
2021.eacl-tutorials.4
|
Reviewing Natural Language Processing Research
|
The reviewing procedure has been identified as one of the major issues in the current situation of the NLP field. While it is implicitly assumed that junior researcher learn reviewing during their PhD project, this might not always be the case. Additionally, with the growing NLP community and the efforts in the context of widening the NLP community, researchers joining the field might not have the opportunity to practise reviewing. This tutorial fills in this gap by providing an opportunity to learn the basics of reviewing. Also more experienced researchers might find this tutorial interesting to revise their reviewing procedure.
| 2,021
|
https://aclanthology.org/2021.eacl-tutorials.4
|
EACL
|
[{'id': 8460592, 'paperId': '9ca5552008fe2c24e0541f6af47fd5110d4015b3', 'title': 'Last Words: Reviewing the Reviewers', 'authors': [{'authorId': '2272727361', 'name': 'K. Church'}], 'venue': 'International Conference on Computational Logic', 'abstract': '', 'year': 2005, 'in_acl': True, 'citationCount': 48, 'section': None, 'subsection': None}, {'id': 16508456, 'paperId': '4fb5a17d4066116a8fc928e43aa558732d8b7cb2', 'title': 'Preventing the ends from justifying the means: withholding results to address publication bias in peer-review', 'authors': [{'authorId': '4058655', 'name': 'K. Button'}, {'authorId': '38974348', 'name': 'Liz Bal'}, {'authorId': '145879163', 'name': 'A. Clark'}, {'authorId': '19854097', 'name': 'Tim Shipley'}], 'venue': 'BMC Psychology', 'abstract': 'The evidence that many of the findings in the published literature may be unreliable is compelling. There is an excess of positive results, often from studies with small sample sizes, or other methodological limitations, and the conspicuous absence of null findings from studies of a similar quality. This distorts the evidence base, leading to false conclusions and undermining scientific progress. Central to this problem is a peer-review system where the decisions of authors, reviewers, and editors are more influenced by impressive results than they are by the validity of the study design. To address this, BMC Psychology is launching a pilot to trial a new ‘results-free’ peer-review process, whereby editors and reviewers are blinded to the study’s results, initially assessing manuscripts on the scientific merits of the rationale and methods alone. The aim is to improve the reliability and quality of published research, by focusing editorial decisions on the rigour of the methods, and preventing impressive ends justifying poor means.', 'year': 2016, 'in_acl': False, 'citationCount': 42, 'section': None, 'subsection': None}, {'id': 53149927, 'paperId': 'a772589606f9880d74ac79519ccef073eefd5519', 'title': 'Double-blind peer review and gender publication bias', 'authors': [{'authorId': '3145400', 'name': 'L. Engqvist'}, {'authorId': '2149229', 'name': 'Joachim G. Frommen'}], 'venue': 'Animal Behaviour', 'abstract': '', 'year': 2008, 'in_acl': False, 'citationCount': 34, 'section': None, 'subsection': None}, {'id': 7350256, 'paperId': 'd8cf5c798397b6a0be1b41f18f979f0988f1ece7', 'title': 'Publication prejudices: An experimental study of confirmatory bias in the peer review system', 'authors': [{'authorId': '35256346', 'name': 'M. Mahoney'}], 'venue': 'Cognitive Therapy and Research', 'abstract': "Confirmatory bias is the tendency to emphasize and believe experiences which support one's views and to ignore or discredit those which do not. The effects of this tendency have been repeatedly documented in clinical research. However, its ramifications for the behavior of scientists have yet to be adequately explored. For example, although publication is a critical element in determining the contribution and impact of scientific findings, little research attention has been devoted to the variables operative in journal review policies. In the present study, 75 journal reviewers were asked to referee manuscripts which described identical experimental procedures but which reported positive, negative, mixed, or no results. In addition to showing poor interrater agreement, reviewers were strongly biased against manuscripts which reported results contrary to their theoretical perspective. The implications of these findings for epistemology and the peer review system are briefly addressed.", 'year': 1977, 'in_acl': False, 'citationCount': 667, 'section': None, 'subsection': None}, {'id': 29476961, 'paperId': '0c40a8815f6e977d713cf253a636219b32c17559', 'title': 'Last Words: On Becoming a Discipline', 'authors': [{'authorId': '145332819', 'name': 'Mark Steedman'}], 'venue': 'International Conference on Computational Logic', 'abstract': '', 'year': 2008, 'in_acl': True, 'citationCount': 10, 'section': None, 'subsection': None}, {'id': 1570550, 'paperId': '830ab38207bd40189752a301967b865c38dab591', 'title': 'Last Words: Breaking News: Changing Attitudes and Practices', 'authors': [{'authorId': '1736049', 'name': 'B. Webber'}], 'venue': 'International Conference on Computational Logic', 'abstract': '', 'year': 2007, 'in_acl': True, 'citationCount': 3, 'section': None, 'subsection': None}]
|
2021.eacl-tutorials.5
|
Advances and Challenges in Unsupervised Neural Machine Translation
|
Unsupervised cross-lingual language representation initialization methods, together with mechanisms such as denoising and back-translation, have advanced unsupervised neural machine translation (UNMT), which has achieved impressive results. Meanwhile, there are still several challenges for UNMT. This tutorial first introduces the background and the latest progress of UNMT. We then examine a number of challenges to UNMT and give empirical results on how well the technology currently holds up.
| 2,021
|
https://aclanthology.org/2021.eacl-tutorials.5
|
EACL
|
[{'id': 11212020, 'paperId': 'fa72afa9b2cbc8f0d7b05d52548906610ffbb9c5', 'title': 'Neural Machine Translation by Jointly Learning to Align and Translate', 'authors': [{'authorId': '3335364', 'name': 'Dzmitry Bahdanau'}, {'authorId': '1979489', 'name': 'Kyunghyun Cho'}, {'authorId': '1751762', 'name': 'Yoshua Bengio'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.', 'year': 2014, 'in_acl': False, 'citationCount': 26130, 'section': 'Neural Machine Translation', 'subsection': None}, {'id': 3074096, 'paperId': '2913c2bf3f92b5ae369400a42b2d27cc5bc05ecb', 'title': 'Deep Learning', 'authors': [{'authorId': '1688882', 'name': 'Yann LeCun'}, {'authorId': '1751762', 'name': 'Yoshua Bengio'}, {'authorId': '1695689', 'name': 'Geoffrey E. Hinton'}], 'venue': '', 'abstract': 'Machine-learning technology powers many aspects of modern society: from web searches to content filtering on social networks to recommendations on e-commerce websites, and it is increasingly present in consumer products such as cameras and smartphones. Machine-learning systems are used to identify objects in images, transcribe speech into text, match news items, posts or products with users’ interests, and select relevant results of search. Increasingly, these applications make use of a class of techniques called deep learning. Conventional machine-learning techniques were limited in their ability to process natural data in their raw form. For decades, constructing a pattern-recognition or machine-learning system required careful engineering and considerable domain expertise to design a feature extractor that transformed the raw data (such as the pixel values of an image) into a suitable internal representation or feature vector from which the learning subsystem, often a classifier, could detect or classify patterns in the input. Representation learning is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification. Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level. With the composition of enough such transformations, very complex functions can be learned. For classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations. An image, for example, comes in the form of an array of pixel values, and the learned features in the first layer of representation typically represent the presence or absence of edges at particular orientations and locations in the image. The second layer typically detects motifs by spotting particular arrangements of edges, regardless of small variations in the edge positions. The third layer may assemble motifs into larger combinations that correspond to parts of familiar objects, and subsequent layers would detect objects as combinations of these parts. The key aspect of deep learning is that these layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure. Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years. It has turned out to be very good at discovering intricate structures in high-dimensional data and is therefore applicable to many domains of science, business and government. In addition to beating records in image recognition and speech recognition, it has beaten other machine-learning techniques at predicting the activity of potential drug molecules, analysing particle accelerator data, reconstructing brain circuits, and predicting the effects of mutations in non-coding DNA on gene expression and disease. Perhaps more surprisingly, deep learning has produced extremely promising results for various tasks in natural language understanding, particularly topic classification, sentiment analysis, question answering and language translation. We think that deep learning will have many more successes in the near future because it requires very little engineering by hand, so it can easily take advantage of increases in the amount of available computation and data. New learning algorithms and architectures that are currently being developed for deep neural networks will only accelerate this progress.', 'year': 2015, 'in_acl': False, 'citationCount': 40798, 'section': 'Neural Machine Translation', 'subsection': None}, {'id': 3515219, 'paperId': 'c2a7afbb5609a723f8eea91bfde4b02579b048d6', 'title': 'Unsupervised Neural Machine Translation', 'authors': [{'authorId': '2347956', 'name': 'Mikel Artetxe'}, {'authorId': '3255091', 'name': 'Gorka Labaka'}, {'authorId': '1733049', 'name': 'Eneko Agirre'}, {'authorId': '1979489', 'name': 'Kyunghyun Cho'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'In spite of the recent success of neural machine translation (NMT) in standard benchmarks, the lack of large parallel corpora poses a major practical problem for many language pairs. There have been several proposals to alleviate this issue with, for instance, triangulation and semi-supervised learning techniques, but they still require a strong cross-lingual signal. In this work, we completely remove the need of parallel data and propose a novel method to train an NMT system in a completely unsupervised manner, relying on nothing but monolingual corpora. Our model builds upon the recent work on unsupervised embedding mappings, and consists of a slightly modified attentional encoder-decoder model that can be trained on monolingual corpora alone using a combination of denoising and backtranslation. Despite the simplicity of the approach, our system obtains 15.56 and 10.21 BLEU points in WMT 2014 French-to-English and German-to-English translation. The model can also profit from small parallel corpora, and attains 21.81 and 15.24 points when combined with 100,000 parallel sentences, respectively. Our implementation is released as an open source project.', 'year': 2017, 'in_acl': False, 'citationCount': 756, 'section': 'UNMT', 'subsection': None}, {'id': 3518190, 'paperId': 'e3d772986d176057aca2f5e3eb783da53b559134', 'title': 'Unsupervised Machine Translation Using Monolingual Corpora Only', 'authors': [{'authorId': '1830914', 'name': 'Guillaume Lample'}, {'authorId': '8905591', 'name': 'Ludovic Denoyer'}, {'authorId': '1706809', 'name': "Marc'Aurelio Ranzato"}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet requiring tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data. We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French datasets, without using even a single parallel sentence at training time.', 'year': 2017, 'in_acl': False, 'citationCount': 1058, 'section': 'UNMT', 'subsection': None}, {'id': 201634541, 'paperId': 'd802623e75b44b227acf33aec26a1607da2898b6', 'title': 'NICT’s Unsupervised Neural and Statistical Machine Translation Systems for the WMT19 News Translation Task', 'authors': [{'authorId': '2064068087', 'name': 'Benjamin Marie'}, {'authorId': '122309052', 'name': 'Haipeng Sun'}, {'authorId': '108085542', 'name': 'Rui Wang'}, {'authorId': '2849740', 'name': 'Kehai Chen'}, {'authorId': '46566611', 'name': 'Atsushi Fujita'}, {'authorId': '1802277', 'name': 'M. Utiyama'}, {'authorId': '1698363', 'name': 'E. Sumita'}], 'venue': 'Conference on Machine Translation', 'abstract': 'This paper presents the NICT’s participation in the WMT19 unsupervised news translation task. We participated in the unsupervised translation direction: German-Czech. Our primary submission to the task is the result of a simple combination of our unsupervised neural and statistical machine translation systems. Our system is ranked first for the German-to-Czech translation task, using only the data provided by the organizers (“constraint’”), according to both BLEU-cased and human evaluation. We also performed contrastive experiments with other language pairs, namely, English-Gujarati and English-Kazakh, to better assess the effectiveness of unsupervised machine translation in for distant language pairs and in truly low-resource conditions.', 'year': 2019, 'in_acl': True, 'citationCount': 21, 'section': 'UNMT', 'subsection': None}, {'id': 222291274, 'paperId': '067906c924810e8ffc595ff8c9c4b0b2906cca85', 'title': 'SJTU-NICT’s Supervised and Unsupervised Neural Machine Translation Systems for the WMT20 News Translation Task', 'authors': [{'authorId': '30658665', 'name': 'Z. Li'}, {'authorId': '47941144', 'name': 'Hai Zhao'}, {'authorId': '108085542', 'name': 'Rui Wang'}, {'authorId': '2849740', 'name': 'Kehai Chen'}, {'authorId': '1802277', 'name': 'M. Utiyama'}, {'authorId': '1698363', 'name': 'E. Sumita'}], 'venue': 'Conference on Machine Translation', 'abstract': 'In this paper, we introduced our joint team SJTU-NICT ‘s participation in the WMT 2020 machine translation shared task. In this shared task, we participated in four translation directions of three language pairs: English-Chinese, English-Polish on supervised machine translation track, German-Upper Sorbian on low-resource and unsupervised machine translation tracks. Based on different conditions of language pairs, we have experimented with diverse neural machine translation (NMT) techniques: document-enhanced NMT, XLM pre-trained language model enhanced NMT, bidirectional translation as a pre-training, reference language based UNMT, data-dependent gaussian prior objective, and BT-BLEU collaborative filtering self-training. We also used the TF-IDF algorithm to filter the training set to obtain a domain more similar set with the test set for finetuning. In our submissions, the primary systems won the first place on English to Chinese, Polish to English, and German to Upper Sorbian translation directions.', 'year': 2020, 'in_acl': True, 'citationCount': 14, 'section': 'UNMT', 'subsection': None}]
|
2021.emnlp-tutorials.1
|
Crowdsourcing Beyond Annotation: Case Studies in Benchmark Data Collection
|
Crowdsourcing from non-experts is one of the most common approaches to collecting data and annotations in NLP. Even though it is such a fundamental tool in NLP, crowdsourcing use is largely guided by common practices and the personal experience of researchers. Developing a theory of crowdsourcing use for practical language problems remains an open challenge. However, there are various principles and practices that have proven effective in generating high quality and diverse data. This tutorial exposes NLP researchers to such data collection crowdsourcing methods and principles through a detailed discussion of a diverse set of case studies. The selection of case studies focuses on challenging settings where crowdworkers are asked to write original text or otherwise perform relatively unconstrained work. Through these case studies, we discuss in detail processes that were carefully designed to achieve data with specific properties, for example to require logical inference, grounded reasoning or conversational understanding. Each case study focuses on data collection crowdsourcing protocol details that often receive limited attention in research presentations, for example in conferences, but are critical for research success.
| 2,021
|
https://aclanthology.org/2021.emnlp-tutorials.1
|
EMNLP
|
[{'id': 3432876, 'paperId': '5ded2b8c64491b4a67f6d39ce473d4b9347a672e', 'title': 'A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference', 'authors': [{'authorId': '81840293', 'name': 'Adina Williams'}, {'authorId': '10666396', 'name': 'Nikita Nangia'}, {'authorId': '3644767', 'name': 'Samuel R. Bowman'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. At 433k examples, this resource is one of the largest corpora available for natural language inference (a.k.a. recognizing textual entailment), improving upon available resources in both its coverage and difficulty. MultiNLI accomplishes this by offering data from ten distinct genres of written and spoken English, making it possible to evaluate systems on nearly the full complexity of the language, while supplying an explicit setting for evaluating cross-genre domain adaptation. In addition, an evaluation using existing machine learning models designed for the Stanford NLI corpus shows that it represents a substantially more difficult task than does that corpus, despite the two showing similar levels of inter-annotator agreement.', 'year': 2017, 'in_acl': True, 'citationCount': 4153, 'section': None, 'subsection': None}, {'id': 19435386, 'paperId': 'a9e28863c7fb963b40a379c5a4e0da00eb031933', 'title': 'A Corpus of Natural Language for Visual Reasoning', 'authors': [{'authorId': '32849969', 'name': 'Alane Suhr'}, {'authorId': '35084211', 'name': 'M. Lewis'}, {'authorId': '2053174592', 'name': 'James Yeh'}, {'authorId': '3167681', 'name': 'Yoav Artzi'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We present a new visual reasoning language dataset, containing 92,244 pairs of examples of natural statements grounded in synthetic images with 3,962 unique sentences. We describe a method of crowdsourcing linguistically-diverse data, and present an analysis of our data. The data demonstrates a broad set of linguistic phenomena, requiring visual and set-theoretic reasoning. We experiment with various models, and show the data presents a strong challenge for future research.', 'year': 2017, 'in_acl': True, 'citationCount': 226, 'section': None, 'subsection': None}, {'id': 53178856, 'paperId': 'cf336d272a30d6ad6141db67faa64deb8791cd61', 'title': 'A Corpus for Reasoning about Natural Language Grounded in Photographs', 'authors': [{'authorId': '32849969', 'name': 'Alane Suhr'}, {'authorId': '49219517', 'name': 'Stephanie Zhou'}, {'authorId': '83384205', 'name': 'Iris Zhang'}, {'authorId': '1471618793', 'name': 'Huajun Bai'}, {'authorId': '3167681', 'name': 'Yoav Artzi'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We introduce a new dataset for joint reasoning about natural language and images, with a focus on semantic diversity, compositionality, and visual reasoning challenges. The data contains 107,292 examples of English sentences paired with web photographs. The task is to determine whether a natural language caption is true about a pair of photographs. We crowdsource the data using sets of visually rich images and a compare-and-contrast task to elicit linguistically diverse language. Qualitative analysis shows the data requires compositional joint reasoning, including about quantities, comparisons, and relations. Evaluation using state-of-the-art visual reasoning methods shows the data presents a strong challenge.', 'year': 2018, 'in_acl': True, 'citationCount': 542, 'section': None, 'subsection': None}, {'id': 202764530, 'paperId': 'be2ce82730600d9b2eb2df9f2762f9d4beb6222d', 'title': 'Executing Instructions in Situated Collaborative Interactions', 'authors': [{'authorId': '32849969', 'name': 'Alane Suhr'}, {'authorId': '12693261', 'name': 'Claudia Yan'}, {'authorId': '1379733243', 'name': 'Jack Schluger'}, {'authorId': '2112268019', 'name': 'Stanley Yu'}, {'authorId': '2138322941', 'name': 'Hadi Khader'}, {'authorId': '1379733217', 'name': 'Marwa Mouallem'}, {'authorId': '83384205', 'name': 'Iris Zhang'}, {'authorId': '3167681', 'name': 'Yoav Artzi'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'We study a collaborative scenario where a user not only instructs a system to complete tasks, but also acts alongside it. This allows the user to adapt to the system abilities by changing their language or deciding to simply accomplish some tasks themselves, and requires the system to effectively recover from errors as the user strategically assigns it new goals. We build a game environment to study this scenario, and learn to map user instructions to system actions. We introduce a learning approach focused on recovery from cascading errors between instructions, and modeling methods to explicitly reason about instructions with multiple goals. We evaluate with a new evaluation protocol using recorded interactions and online games with human users, and observe how users adapt to the system abilities.', 'year': 2019, 'in_acl': True, 'citationCount': 84, 'section': None, 'subsection': None}, {'id': 52057510, 'paperId': '39e734da43eb8c72e9549b42e96760545036f8e5', 'title': 'QuAC: Question Answering in Context', 'authors': [{'authorId': '2890423', 'name': 'Eunsol Choi'}, {'authorId': '144533687', 'name': 'He He'}, {'authorId': '2136562', 'name': 'Mohit Iyyer'}, {'authorId': '2064210', 'name': 'Mark Yatskar'}, {'authorId': '144105277', 'name': 'Wen-tau Yih'}, {'authorId': '1699545', 'name': 'Yejin Choi'}, {'authorId': '145419642', 'name': 'Percy Liang'}, {'authorId': '1982950', 'name': 'Luke Zettlemoyer'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'We present QuAC, a dataset for Question Answering in Context that contains 14K information-seeking QA dialogs (100K questions in total). The dialogs involve two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts from the text. QuAC introduces challenges not found in existing machine comprehension datasets: its questions are often more open-ended, unanswerable, or only meaningful within the dialog context, as we show in a detailed qualitative evaluation. We also report results for a number of reference models, including a recently state-of-the-art reading comprehension architecture extended to model dialog context. Our best model underperforms humans by 20 F1, suggesting that there is significant room for future work on this data. Dataset, baseline, and leaderboard available at http://quac.ai.', 'year': 2018, 'in_acl': True, 'citationCount': 777, 'section': None, 'subsection': None}]
|
2021.emnlp-tutorials.2
|
Financial Opinion Mining
|
In this tutorial, we will show where we are and where we will be to those researchers interested in this topic. We divide this tutorial into three parts, including coarse-grained financial opinion mining, fine-grained financial opinion mining, and possible research directions. This tutorial starts by introducing the components in a financial opinion proposed in our research agenda and summarizes their related studies. We also highlight the task of mining customers’ opinions toward financial services in the FinTech industry, and compare them with usual opinions. Several potential research questions will be addressed. We hope the audiences of this tutorial will gain an overview of financial opinion mining and figure out their research directions.
| 2,021
|
https://aclanthology.org/2021.emnlp-tutorials.2
|
EMNLP
|
[{'id': 55268987, 'paperId': '349b576d6919ebf68617c81378bab9a90c7389b5', 'title': 'When is a Liability not a Liability? Textual Analysis, Dictionaries, and 10-Ks', 'authors': [{'authorId': '46173917', 'name': 'Tim Loughran'}, {'authorId': '35005086', 'name': 'B. Mcdonald'}], 'venue': '', 'abstract': 'Previous research uses negative word counts to measure the tone of a text. We show that word lists developed for other disciplines misclassify common words in financial text. In a large sample of 10 Ks during 1994 to 2008, almost three-fourths of the words identified as negative by the widely used Harvard Dictionary are words typically not considered negative in financial contexts. We develop an alternative negative word list, along with five other word lists, that better reflect tone in financial text. We link the word lists to 10 K filing returns, trading volume, return volatility, fraud, material weakness, and unexpected earnings.', 'year': 2010, 'in_acl': False, 'citationCount': 4077, 'section': None, 'subsection': None}, {'id': 14727513, 'paperId': 'e498784edf2c02fe0b228479f88120f08b381cb6', 'title': 'Twitter mood predicts the stock market', 'authors': [{'authorId': '1756468', 'name': 'J. Bollen'}, {'authorId': '2053590600', 'name': 'Huina Mao'}, {'authorId': '145681778', 'name': 'Xiao-Jun Zeng'}], 'venue': 'Journal of Computer Science', 'abstract': "Behavioral economics tells us that emotions can profoundly affect individual behavior and decision-making. Does this also apply to societies at large, i.e. can societies experience mood states that affect their collective decision making? By extension is the public mood correlated or even predictive of economic indicators? Here we investigate whether measurements of collective mood states derived from large-scale Twitter feeds are correlated to the value of the Dow Jones Industrial Average (DJIA) over time. We analyze the text content of daily Twitter feeds by two mood tracking tools, namely OpinionFinder that measures positive vs. negative mood and Google-Profile of Mood States (GPOMS) that measures mood in terms of 6 dimensions (Calm, Alert, Sure, Vital, Kind, and Happy). We cross-validate the resulting mood time series by comparing their ability to detect the public's response to the presidential election and Thanksgiving day in 2008. A Granger causality analysis and a Self-Organizing Fuzzy Neural Network are then used to investigate the hypothesis that public mood states, as measured by the OpinionFinder and GPOMS mood time series, are predictive of changes in DJIA closing values. Our results indicate that the accuracy of DJIA predictions can be significantly improved by the inclusion of specific public mood dimensions but not others. We find an accuracy of 86.7% in predicting the daily up and down changes in the closing values of the DJIA and a reduction of the Mean Average Percentage Error (MAPE) by more than 6%.", 'year': 2010, 'in_acl': False, 'citationCount': 4927, 'section': None, 'subsection': None}, {'id': 264221114, 'paperId': 'cabb0a468af8184e0e930841435b65679b580521', 'title': 'Numeral Understanding in Financial Tweets for Fine-Grained Crowd-Based Forecasting', 'authors': [{'authorId': '2109523457', 'name': 'Chung-Chi Chen'}, {'authorId': '152354730', 'name': 'Hen-Hsen Huang'}, {'authorId': '3448762', 'name': 'Yow-Ting Shiue'}, {'authorId': '2237850759', 'name': 'Hsin-Hsi Chen'}], 'venue': 'International Conference on Wirtschaftsinformatik', 'abstract': 'Numerals that contain much information in financial documents are crucial for financial decision making. They play different roles in financial analysis processes. This paper is aimed at understanding the meanings of numerals in financial tweets for fine-grained crowd-based forecasting. We propose a taxonomy that classifies the numerals in financial tweets into 7 categories, and further extend some of these categories into several subcategories. Neural network-based models with word and character-level encoders are proposed for 7-way classification and 17-way classification. We perform backtest to confirm the effectiveness of the numeric opinions made by the crowd. This work is the first attempt to understand numerals in financial social media data, and we provide the first comparison of fine-grained opinion of individual investors and analysts based on their forecast price. The numeral corpus used in our experiments, called FinNum 1.0, is available for research purposes.', 'year': 2018, 'in_acl': False, 'citationCount': 42, 'section': None, 'subsection': None}, {'id': 174801540, 'paperId': 'f0c3de5686d859ad60bb872e150cf8b598f92c9a', 'title': 'Modeling Financial Analysts’ Decision Making via the Pragmatics and Semantics of Earnings Calls', 'authors': [{'authorId': '145137850', 'name': 'Katherine A. Keith'}, {'authorId': '1690152', 'name': 'Amanda Stent'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Every fiscal quarter, companies hold earnings calls in which company executives respond to questions from analysts. After these calls, analysts often change their price target recommendations, which are used in equity re- search reports to help investors make deci- sions. In this paper, we examine analysts’ decision making behavior as it pertains to the language content of earnings calls. We identify a set of 20 pragmatic features of analysts’ questions which we correlate with analysts’ pre-call investor recommendations. We also analyze the degree to which semantic and pragmatic features from an earnings call complement market data in predicting analysts’ post-call changes in price targets. Our results show that earnings calls are moderately predictive of analysts’ decisions even though these decisions are influenced by a number of other factors including private communication with company executives and market conditions. A breakdown of model errors indicates disparate performance on calls from different market sectors.', 'year': 2019, 'in_acl': True, 'citationCount': 49, 'section': None, 'subsection': None}, {'id': 234787011, 'paperId': 'b68bb98aa5075934d92b1d34387be31e2449faf8', 'title': 'From Opinion Mining to Financial Argument Mining', 'authors': [{'authorId': '2109523457', 'name': 'Chung-Chi Chen'}, {'authorId': '152354730', 'name': 'Hen-Hsen Huang'}, {'authorId': '153924342', 'name': 'Hsin-Hsi Chen'}], 'venue': 'Springer Briefs in Computer Science', 'abstract': "Opinion mining is a prevalent research issue in many domains. In the financial domain, however, it is still in the early stages. Most of the researches on this topic only focus on the coarse-grained market sentiment analysis, i.e., 2-way classification for bullish/bearish. Thanks to the recent financial technology (FinTech) development, some interdisciplinary researchers start to involve in the in-depth analysis of investors' opinions. These works indicate the trend toward fine-grained opinion mining in the financial domain. When expressing opinions in finance, terms like bullish/bearish often spring to mind. However, the market sentiment of the financial instrument is just one type of opinion in the financial industry. Like other industries such as manufacturing and textiles, the financial industry also has a large number of products. Financial services are also a major business for many financial companies, especially in the context of the recent FinTech trend. For instance, many commercial banks focus on loans and credit cards. Although there are a variety of issues that could be explored in the financial domain, most researchers in the AI and NLP communities only focus on the market sentiment of the stock or foreign exchange. This open access book addresses several research issues that can broaden the research topics in the AI community. It also provides an overview of the status quo in fine-grained financial opinion mining to offer insights into the futures goals. For a better understanding of the past and the current research, it also discusses the components of financial opinions one-by-one with the related works and highlights some possible research avenues, providing a research agenda with both micro- and macro-views toward financial opinions.", 'year': 2021, 'in_acl': False, 'citationCount': 28, 'section': None, 'subsection': None}]
|
2021.emnlp-tutorials.3
|
Knowledge-Enriched Natural Language Generation
|
Knowledge-enriched text generation poses unique challenges in modeling and learning, driving active research in several core directions, ranging from integrated modeling of neural representations and symbolic information in the sequential/hierarchical/graphical structures, learning without direct supervisions due to the cost of structured annotation, efficient optimization and inference with massive and global constraints, to language grounding on multiple modalities, and generative reasoning with implicit commonsense knowledge and background knowledge. In this tutorial we will present a roadmap to line up the state-of-the-art methods to tackle these challenges on this cutting-edge problem. We will dive deep into various technical components: how to represent knowledge, how to feed knowledge into a generation model, how to evaluate generation results, and what are the remaining challenges?
| 2,021
|
https://aclanthology.org/2021.emnlp-tutorials.3
|
EMNLP
|
[{'id': 222272210, 'paperId': 'c845494445f3bfa01d8245a4759b144e27aa3788', 'title': 'A Survey of Knowledge-enhanced Text Generation', 'authors': [{'authorId': '38767143', 'name': 'W. Yu'}, {'authorId': '70461341', 'name': 'Wenhao Yu'}, {'authorId': '8652308', 'name': 'Chenguang Zhu'}, {'authorId': '1993150474', 'name': 'Zaitang Li'}, {'authorId': '2749311', 'name': 'Zhiting Hu'}, {'authorId': '1786863', 'name': 'Qingyun Wang'}, {'authorId': '2113323573', 'name': 'Heng Ji'}, {'authorId': '1470716407', 'name': 'Meng Jiang'}], 'venue': 'ACM Computing Surveys', 'abstract': 'The goal of text-to-text generation is to make machines express like a human in many applications such as conversation, summarization, and translation. It is one of the most important yet challenging tasks in natural language processing (NLP). Various neural encoder-decoder models have been proposed to achieve the goal by learning to map input text to output text. However, the input text alone often provides limited knowledge to generate the desired output, so the performance of text generation is still far from satisfaction in many real-world scenarios. To address this issue, researchers have considered incorporating (i) internal knowledge embedded in the input text and (ii) external knowledge from outside sources such as knowledge base and knowledge graph into the text generation system. This research topic is known as knowledge-enhanced text generation. In this survey, we present a comprehensive review of the research on this topic over the past five years. The main content includes two parts: (i) general methods and architectures for integrating knowledge into text generation; (ii) specific techniques and applications according to different forms of knowledge data. This survey can have broad audiences, researchers and practitioners, in academia and industry.', 'year': 2020, 'in_acl': False, 'citationCount': 232, 'section': 'Survey', 'subsection': None}, {'id': 11212020, 'paperId': 'fa72afa9b2cbc8f0d7b05d52548906610ffbb9c5', 'title': 'Neural Machine Translation by Jointly Learning to Align and Translate', 'authors': [{'authorId': '3335364', 'name': 'Dzmitry Bahdanau'}, {'authorId': '1979489', 'name': 'Kyunghyun Cho'}, {'authorId': '1751762', 'name': 'Yoshua Bengio'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.', 'year': 2014, 'in_acl': False, 'citationCount': 26130, 'section': 'General learning and NLG frameworks', 'subsection': None}, {'id': 13756489, 'paperId': '204e3073870fae3d05bcbc2f6a8e263d9b72e776', 'title': 'Attention is All you Need', 'authors': [{'authorId': '40348417', 'name': 'Ashish Vaswani'}, {'authorId': '1846258', 'name': 'Noam M. Shazeer'}, {'authorId': '3877127', 'name': 'Niki Parmar'}, {'authorId': '39328010', 'name': 'Jakob Uszkoreit'}, {'authorId': '145024664', 'name': 'Llion Jones'}, {'authorId': '19177000', 'name': 'Aidan N. Gomez'}, {'authorId': '40527594', 'name': 'Lukasz Kaiser'}, {'authorId': '3443442', 'name': 'Illia Polosukhin'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.', 'year': 2017, 'in_acl': False, 'citationCount': 109681, 'section': 'General learning and NLG frameworks', 'subsection': None}, {'id': 8174613, 'paperId': '02534853626c18c9a097c2712f1ddf3613257d35', 'title': 'Incorporating Copying Mechanism in Sequence-to-Sequence Learning', 'authors': [{'authorId': '3016273', 'name': 'Jiatao Gu'}, {'authorId': '11955007', 'name': 'Zhengdong Lu'}, {'authorId': '49404233', 'name': 'Hang Li'}, {'authorId': '2052674293', 'name': 'V. Li'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We address an important problem in sequence-to-sequence (Seq2Seq) learning referred to as copying, in which certain segments in the input sequence are selectively replicated in the output sequence. A similar phenomenon is observable in human language communication. For example, humans tend to repeat entity names or even long phrases in conversation. The challenge with regard to copying in Seq2Seq is that new machinery is needed to decide when to perform the operation. In this paper, we incorporate copying into neural network-based Seq2Seq learning and propose a new model called CopyNet with encoder-decoder structure. CopyNet can nicely integrate the regular way of word generation in the decoder with the new copying mechanism which can choose sub-sequences in the input sequence and put them at proper places in the output sequence. Our empirical study on both synthetic data sets and real world data sets demonstrates the efficacy of CopyNet. For example, CopyNet can outperform regular RNN-based model with remarkable margins on text summarization tasks.', 'year': 2016, 'in_acl': True, 'citationCount': 1506, 'section': 'General learning and NLG frameworks', 'subsection': None}, {'id': 9514751, 'paperId': 'd95069ee71bc2c3e171832872f437caa2e53432f', 'title': 'Topic Aware Neural Response Generation', 'authors': [{'authorId': '1399291043', 'name': 'Chen Xing'}, {'authorId': '145717888', 'name': 'Wei Wu'}, {'authorId': '49176273', 'name': 'Yu Wu'}, {'authorId': '2146651412', 'name': 'Jie Liu'}, {'authorId': '9221107', 'name': 'Yalou Huang'}, {'authorId': '143849609', 'name': 'M. Zhou'}, {'authorId': '1712167', 'name': 'Wei-Ying Ma'}], 'venue': 'AAAI Conference on Artificial Intelligence', 'abstract': '\n \n We consider incorporating topic information into a sequence-to-sequence framework to generate informative and interesting responses for chatbots. To this end, we propose a topic aware sequence-to-sequence (TA-Seq2Seq) model. The model utilizes topics to simulate prior human knowledge that guides them to form informative and interesting responses in conversation, and leverages topic information in generation by a joint attention mechanism and a biased generation probability. The joint attention mechanism summarizes the hidden vectors of an input message as context vectors by message attention and synthesizes topic vectors by topic attention from the topic words of the message obtained from a pre-trained LDA model, with these vectors jointly affecting the generation of words in decoding. To increase the possibility of topic words appearing in responses, the model modifies the generation probability of topic words by adding an extra probability item to bias the overall distribution. Empirical studies on both automatic evaluation metrics and human annotations show that TA-Seq2Seq can generate more informative and interesting responses, significantly outperforming state-of-the-art response generation models.\n \n', 'year': 2016, 'in_acl': False, 'citationCount': 472, 'section': 'Semantic knowledge for enhancing NLG', 'subsection': None}, {'id': 20981275, 'paperId': '2a215755d7548ffc82079ce734c4ac60b62f6f56', 'title': 'Toward Controlled Generation of Text', 'authors': [{'authorId': '2749311', 'name': 'Zhiting Hu'}, {'authorId': '8387085', 'name': 'Zichao Yang'}, {'authorId': '40250403', 'name': 'Xiaodan Liang'}, {'authorId': '145124475', 'name': 'R. Salakhutdinov'}, {'authorId': '143977260', 'name': 'E. Xing'}], 'venue': 'International Conference on Machine Learning', 'abstract': 'Generic generation and manipulation of text is challenging and has limited success compared to recent deep generative modeling in visual domain. This paper aims at generating plausible natural language sentences, whose attributes are dynamically controlled by learning disentangled latent representations with designated semantics. We propose a new neural generative model which combines variational auto-encoders and holistic attribute discriminators for effective imposition of semantic structures. With differentiable approximation to discrete text samples, explicit constraints on independent attribute controls, and efficient collaborative learning of generator and discriminators, our model learns highly interpretable representations from even only word annotations, and produces realistic sentences with desired attributes. Quantitative evaluation validates the accuracy of sentence and attribute generation.', 'year': 2017, 'in_acl': False, 'citationCount': 955, 'section': 'Semantic knowledge for enhancing NLG', 'subsection': None}, {'id': 2024574, 'paperId': '7b221cce8fdbc1105956b27c938730fce8c1fc10', 'title': 'Emotional Chatting Machine: Emotional Conversation Generation with Internal and External Memory', 'authors': [{'authorId': '144751955', 'name': 'Hao Zhou'}, {'authorId': '1730108', 'name': 'Minlie Huang'}, {'authorId': '50615630', 'name': 'Tianyang Zhang'}, {'authorId': '145213540', 'name': 'Xiaoyan Zhu'}, {'authorId': '2149124481', 'name': 'Bing-Qian Liu'}], 'venue': 'AAAI Conference on Artificial Intelligence', 'abstract': '\n \n Perception and expression of emotion are key factors to the success of dialogue systems or conversational agents. However, this problem has not been studied in large-scale conversation generation so far. In this paper, we propose Emotional Chatting Machine (ECM) that can generate appropriate responses not only in content (relevant and grammatical) but also in emotion (emotionally consistent). To the best of our knowledge, this is the first work that addresses the emotion factor in large-scale conversation generation. ECM addresses the factor using three new mechanisms that respectively (1) models the high-level abstraction of emotion expressions by embedding emotion categories, (2) captures the change of implicit internal emotion states, and (3) uses explicit emotion expressions with an external emotion vocabulary. Experiments show that the proposed model can generate responses appropriate not only in content but also in emotion.\n \n', 'year': 2017, 'in_acl': False, 'citationCount': 699, 'section': 'Semantic knowledge for enhancing NLG', 'subsection': None}, {'id': 7672408, 'paperId': '3580d8a5e7584e98d547ebfed900749d347f6714', 'title': 'Table-to-text Generation by Structure-aware Seq2seq Learning', 'authors': [{'authorId': '1500520681', 'name': 'Tianyu Liu'}, {'authorId': '94053409', 'name': 'Kexiang Wang'}, {'authorId': '39058310', 'name': 'Lei Sha'}, {'authorId': '39488576', 'name': 'Baobao Chang'}, {'authorId': '3335836', 'name': 'Zhifang Sui'}], 'venue': 'AAAI Conference on Artificial Intelligence', 'abstract': '\n \n Table-to-text generation aims to generate a description for a factual table which can be viewed as a set of field-value records. To encode both the content and the structure of a table, we propose a novel structure-aware seq2seq architecture which consists of field-gating encoder and description generator with dual attention. In the encoding phase, we update the cell memory of the LSTM unit by a field gate and its corresponding field value in order to incorporate field information into table representation. In the decoding phase, dual attention mechanism which contains word level attention and field level attention is proposed to model the semantic relevance between the generated description and the table. We conduct experiments on the WIKIBIO dataset which contains over 700k biographies and corresponding infoboxes from Wikipedia. The attention visualizations and case studies show that our model is capable of generating coherent and informative descriptions based on the comprehensive understanding of both the content and the structure of a table. Automatic evaluations also show our model outperforms the baselines by a great margin. Code for this work is available on https://github.com/tyliupku/wiki2bio.\n \n', 'year': 2017, 'in_acl': False, 'citationCount': 257, 'section': 'Structured knowledge for enhancing NLG', 'subsection': None}, {'id': 23892230, 'paperId': '13395213d47f78672ab4e81573f2b0fa0cfc8c6d', 'title': 'Challenges in Data-to-Document Generation', 'authors': [{'authorId': '2844243', 'name': 'Sam Wiseman'}, {'authorId': '1692491', 'name': 'Stuart M. Shieber'}, {'authorId': '2531268', 'name': 'Alexander M. Rush'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Recent neural models have shown significant progress on the problem of generating short descriptive texts conditioned on a small number of database records. In this work, we suggest a slightly more difficult data-to-text generation task, and investigate how effective current approaches are on this task. In particular, we introduce a new, large-scale corpus of data records paired with descriptive documents, propose a series of extractive evaluation methods for analyzing performance, and obtain baseline results using current neural generation methods. Experiments show that these models produce fluent text, but fail to convincingly approximate human-generated documents. Moreover, even templated baselines exceed the performance of these neural models on some metrics, though copy- and reconstruction-based extensions lead to noticeable improvements.', 'year': 2017, 'in_acl': True, 'citationCount': 566, 'section': 'Structured knowledge for enhancing NLG', 'subsection': None}, {'id': 51608183, 'paperId': '05cf65bea06b26d11a6324113bb4d6219e495a7b', 'title': 'Commonsense Knowledge Aware Conversation Generation with Graph Attention', 'authors': [{'authorId': '144751955', 'name': 'Hao Zhou'}, {'authorId': '2061649994', 'name': 'Tom Young'}, {'authorId': '1730108', 'name': 'Minlie Huang'}, {'authorId': '2664328', 'name': 'Haizhou Zhao'}, {'authorId': '2774294', 'name': 'Jingfang Xu'}, {'authorId': '145213540', 'name': 'Xiaoyan Zhu'}], 'venue': 'International Joint Conference on Artificial Intelligence', 'abstract': 'Commonsense knowledge is vital to many natural language processing tasks. In this paper, we present a novel open-domain conversation generation model to demonstrate how large-scale commonsense knowledge can facilitate language understanding and generation. Given a user post, the model retrieves relevant knowledge graphs from a knowledge base and then encodes the graphs with a static graph attention mechanism, which augments the semantic information of the post and thus supports better understanding of the post. Then, during word generation, the model attentively reads the retrieved knowledge graphs and the knowledge triples within each graph to facilitate better generation through a dynamic graph attention mechanism. This is the first attempt that uses large-scale commonsense knowledge in conversation generation. Furthermore, unlike existing models that use knowledge triples (entities) separately and independently, our model treats each knowledge graph as a whole, which encodes more structured, connected semantic information in the graphs.\xa0Experiments show that the proposed model can generate more appropriate and informative responses than state-of-the-art baselines.\xa0', 'year': 2018, 'in_acl': False, 'citationCount': 487, 'section': 'Structured knowledge for enhancing NLG', 'subsection': None}, {'id': 102354588, 'paperId': 'cb15c1c51e8a7da42d5b2ebac955bf1cd9dd4022', 'title': 'Text Generation from Knowledge Graphs with Graph Transformers', 'authors': [{'authorId': '1403698986', 'name': 'Rik Koncel-Kedziorski'}, {'authorId': '93836311', 'name': 'Dhanush Bekal'}, {'authorId': '145081697', 'name': 'Yi Luan'}, {'authorId': '1747893', 'name': 'Mirella Lapata'}, {'authorId': '2548384', 'name': 'Hannaneh Hajishirzi'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Generating texts which express complex ideas spanning multiple sentences requires a structured representation of their content (document plan), but these representations are prohibitively expensive to manually produce. In this work, we address the problem of generating coherent multi-sentence texts from the output of an information extraction system, and in particular a knowledge graph. Graphical knowledge representations are ubiquitous in computing, but pose a significant challenge for text generation techniques due to their non-hierarchical nature, collapsing of long-distance dependencies, and structural variety. We introduce a novel graph transforming encoder which can leverage the relational structure of such knowledge graphs without imposing linearization or hierarchical constraints. Incorporated into an encoder-decoder setup, we provide an end-to-end trainable system for graph-to-text generation that we apply to the domain of scientific text. Automatic and human evaluations show that our technique produces more informative texts which exhibit better document structure than competitive encoder-decoder methods.', 'year': 2019, 'in_acl': True, 'citationCount': 309, 'section': 'Structured knowledge for enhancing NLG', 'subsection': None}]
|
2021.emnlp-tutorials.6
|
Syntax in End-to-End Natural Language Processing
|
This tutorial surveys the latest technical progress of syntactic parsing and the role of syntax in end-to-end natural language processing (NLP) tasks, in which semantic role labeling (SRL) and machine translation (MT) are the representative NLP tasks that have always been beneficial from informative syntactic clues since a long time ago, though the advance from end-to-end deep learning models shows new results. In this tutorial, we will first introduce the background and the latest progress of syntactic parsing and SRL/NMT. Then, we will summarize the key evidence about the syntactic impacts over these two concerning tasks, and explore the behind reasons from both computational and linguistic backgrounds.
| 2,021
|
https://aclanthology.org/2021.emnlp-tutorials.6
|
EMNLP
|
[{'id': 3074096, 'paperId': '2913c2bf3f92b5ae369400a42b2d27cc5bc05ecb', 'title': 'Deep Learning', 'authors': [{'authorId': '1688882', 'name': 'Yann LeCun'}, {'authorId': '1751762', 'name': 'Yoshua Bengio'}, {'authorId': '1695689', 'name': 'Geoffrey E. Hinton'}], 'venue': '', 'abstract': 'Machine-learning technology powers many aspects of modern society: from web searches to content filtering on social networks to recommendations on e-commerce websites, and it is increasingly present in consumer products such as cameras and smartphones. Machine-learning systems are used to identify objects in images, transcribe speech into text, match news items, posts or products with users’ interests, and select relevant results of search. Increasingly, these applications make use of a class of techniques called deep learning. Conventional machine-learning techniques were limited in their ability to process natural data in their raw form. For decades, constructing a pattern-recognition or machine-learning system required careful engineering and considerable domain expertise to design a feature extractor that transformed the raw data (such as the pixel values of an image) into a suitable internal representation or feature vector from which the learning subsystem, often a classifier, could detect or classify patterns in the input. Representation learning is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification. Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level. With the composition of enough such transformations, very complex functions can be learned. For classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations. An image, for example, comes in the form of an array of pixel values, and the learned features in the first layer of representation typically represent the presence or absence of edges at particular orientations and locations in the image. The second layer typically detects motifs by spotting particular arrangements of edges, regardless of small variations in the edge positions. The third layer may assemble motifs into larger combinations that correspond to parts of familiar objects, and subsequent layers would detect objects as combinations of these parts. The key aspect of deep learning is that these layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure. Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years. It has turned out to be very good at discovering intricate structures in high-dimensional data and is therefore applicable to many domains of science, business and government. In addition to beating records in image recognition and speech recognition, it has beaten other machine-learning techniques at predicting the activity of potential drug molecules, analysing particle accelerator data, reconstructing brain circuits, and predicting the effects of mutations in non-coding DNA on gene expression and disease. Perhaps more surprisingly, deep learning has produced extremely promising results for various tasks in natural language understanding, particularly topic classification, sentiment analysis, question answering and language translation. We think that deep learning will have many more successes in the near future because it requires very little engineering by hand, so it can easily take advantage of increases in the amount of available computation and data. New learning algorithms and architectures that are currently being developed for deep neural networks will only accelerate this progress.', 'year': 2015, 'in_acl': False, 'citationCount': 40798, 'section': 'Deep Learning', 'subsection': None}, {'id': 7942973, 'paperId': '8cbef23c9ee2ae7c35cc691a0c1d713a6377c9f2', 'title': 'Deep Biaffine Attention for Neural Dependency Parsing', 'authors': [{'authorId': '2277385', 'name': 'Timothy Dozat'}, {'authorId': '144783904', 'name': 'Christopher D. Manning'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'This paper builds off recent work from Kiperwasser & Goldberg (2016) using neural attention in a simple graph-based dependency parser. We use a larger but more thoroughly regularized parser than other recent BiLSTM-based approaches, with biaffine classifiers to predict arcs and labels. Our parser gets state of the art or near state of the art performance on standard treebanks for six different languages, achieving 95.7% UAS and 94.1% LAS on the most popular English PTB dataset. This makes it the highest-performing graph-based parser on this benchmark---outperforming Kiperwasser Goldberg (2016) by 1.8% and 2.2%---and comparable to the highest performing transition-based parser (Kuncoro et al., 2016), which achieves 95.8% UAS and 94.6% LAS. We also show which hyperparameter choices had a significant effect on parsing accuracy, allowing us to achieve large gains over other graph-based approaches.', 'year': 2016, 'in_acl': False, 'citationCount': 1177, 'section': 'Syntactic Parsing', 'subsection': None}, {'id': 19206893, 'paperId': '928f9dccb806a3278d20d82cc53781c5f44e2bb1', 'title': 'Constituency Parsing with a Self-Attentive Encoder', 'authors': [{'authorId': '143808231', 'name': 'Nikita Kitaev'}, {'authorId': '38666915', 'name': 'D. Klein'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We demonstrate that replacing an LSTM encoder with a self-attentive architecture can lead to improvements to a state-of-the-art discriminative constituency parser. The use of attention makes explicit the manner in which information is propagated between different locations in the sentence, which we use to both analyze our model and propose potential improvements. For example, we find that separating positional and content information in the encoder can lead to improved parsing accuracy. Additionally, we evaluate different approaches for lexical representation. Our parser achieves new state-of-the-art results for single models trained on the Penn Treebank: 93.55 F1 without the use of any external data, and 95.13 F1 when using pre-trained word representations. Our parser also outperforms the previous best-published accuracy figures on 8 of the 9 languages in the SPMRL dataset.', 'year': 2018, 'in_acl': True, 'citationCount': 514, 'section': 'Syntactic Parsing', 'subsection': None}, {'id': 51879191, 'paperId': '74b3f93ee47fe36ff1862ec7d52745f30ec7be49', 'title': 'Syntax for Semantic Role Labeling, To Be, Or Not To Be', 'authors': [{'authorId': '51129953', 'name': 'Shexia He'}, {'authorId': '30658665', 'name': 'Z. Li'}, {'authorId': '36225434', 'name': 'Zhao Hai'}, {'authorId': '51133532', 'name': 'Hongxiao Bai'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Semantic role labeling (SRL) is dedicated to recognizing the predicate-argument structure of a sentence. Previous studies have shown syntactic information has a remarkable contribution to SRL performance. However, such perception was challenged by a few recent neural SRL models which give impressive performance without a syntactic backbone. This paper intends to quantify the importance of syntactic information to dependency SRL in deep learning framework. We propose an enhanced argument labeling model companying with an extended korder argument pruning algorithm for effectively exploiting syntactic information. Our model achieves state-of-the-art results on the CoNLL-2008, 2009 benchmarks for both English and Chinese, showing the quantitative significance of syntax to neural SRL together with a thorough empirical survey over existing models.', 'year': 2018, 'in_acl': True, 'citationCount': 136, 'section': 'SRL', 'subsection': None}, {'id': 33626727, 'paperId': 'a4dd3beea286a20c4e4f66436875932d597190bc', 'title': 'Deep Semantic Role Labeling: What Works and What’s Next', 'authors': [{'authorId': '2265599', 'name': 'Luheng He'}, {'authorId': '2544107', 'name': 'Kenton Lee'}, {'authorId': '35084211', 'name': 'M. Lewis'}, {'authorId': '1982950', 'name': 'Luke Zettlemoyer'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our 8-layer ensemble model achieves 83.2 F1 on theCoNLL 2005 test set and 83.4 F1 on CoNLL 2012, roughly a 10% relative error reduction over the previous state of the art. Extensive empirical analysis of these gains show that (1) deep models excel at recovering long-distance dependencies but can still make surprisingly obvious errors, and (2) that there is still room for syntactic parsers to improve these results.', 'year': 2017, 'in_acl': True, 'citationCount': 429, 'section': 'SRL', 'subsection': None}, {'id': 267846283, 'paperId': '247a912fbd3165ffb1463bb3c9bc17e2096ab9b5', 'title': 'Book Review: Statistical Machine Translation by Philipp Koehn', 'authors': [{'authorId': '2285655408', 'name': 'Colin Cherry'}], 'venue': 'International Conference on Computational Logic', 'abstract': 'Statistical Machine Translation provides a comprehensive and clear introduction to the most prominent techniques employed in the field of the same name (SMT). This textbook is aimed at students or researchers interested in a thorough entry-point to the field, and it does an excellent job of providing basic understanding for each of the many pieces of a statistical translation system. I consider this book to be an essential addition to any advanced undergraduate course or graduate course on SMT. The book is divided into three parts: Foundations, Core Methods, and Advanced Topics. Foundations (75 pages) covers an introduction to translation, working with text, and probability theory. Core Methods (170 pages) covers the main components of a standard phrase-based SMT system. Advanced Topics (125 pages) covers discriminative training and linguistics in SMT, including an in-depth discussion of syntactic SMT. The text as a whole assumes a certain familiarity with natural language processing; though the Foundations section provides an effort to fill in the gaps, the book’s focus is decidedly translation. As such, students unfamiliar with NLP may sometimes need to consult a general NLP text. The book aims to provide a thorough introduction to each component of a statistical translation system, and it definitely succeeds in doing so. Supplementing this core material for each chapter is a highly inclusive Further Reading section. These sections provide brief narratives highlighting many relevant papers and alternative techniques for each topic addressed in the chapter. I suspect many readers will find these literature pointers to be quite valuable, from students wishing to dive deeper, to experienced SMT researchers wishing to get started in a new sub-field. Each chapter also closes with a short list of exercises. Many of these are very challenging (accurately indicated by a star-rating system), and involve getting your hands dirty with tools downloaded from the Web. The usefulness of these exercises will depend largely on the instructor’s tastes; I view them as a bonus rather than a core feature of the book.', 'year': 2010, 'in_acl': True, 'citationCount': 0, 'section': 'Machine Translationg', 'subsection': None}, {'id': 11212020, 'paperId': 'fa72afa9b2cbc8f0d7b05d52548906610ffbb9c5', 'title': 'Neural Machine Translation by Jointly Learning to Align and Translate', 'authors': [{'authorId': '3335364', 'name': 'Dzmitry Bahdanau'}, {'authorId': '1979489', 'name': 'Kyunghyun Cho'}, {'authorId': '1751762', 'name': 'Yoshua Bengio'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.', 'year': 2014, 'in_acl': False, 'citationCount': 26130, 'section': 'Machine Translationg', 'subsection': None}]
|
2021.naacl-tutorials.2
|
Fine-grained Interpretation and Causation Analysis in Deep NLP Models
|
Deep neural networks have constantly pushed the state-of-the-art performance in natural language processing and are considered as the de-facto modeling approach in solving complex NLP tasks such as machine translation, summarization and question-answering. Despite the proven efficacy of deep neural networks at-large, their opaqueness is a major cause of concern. In this tutorial, we will present research work on interpreting fine-grained components of a neural network model from two perspectives, i) fine-grained interpretation, and ii) causation analysis. The former is a class of methods to analyze neurons with respect to a desired language concept or a task. The latter studies the role of neurons and input features in explaining the decisions made by the model. We will also discuss how interpretation methods and causation analysis can connect towards better interpretability of model prediction. Finally, we will walk you through various toolkits that facilitate fine-grained interpretation and causation analysis of neural models.
| 2,021
|
https://aclanthology.org/2021.naacl-tutorials.2
|
NAACL
|
[{'id': 56657817, 'paperId': '668f42a4d4094f0a66d402a16087e14269b31a1f', 'title': 'Analysis Methods in Neural Language Processing: A Survey', 'authors': [{'authorId': '2083259', 'name': 'Yonatan Belinkov'}, {'authorId': '145898106', 'name': 'James R. Glass'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'The field of natural language processing has seen impressive progress in recent years, with neural network models replacing many of the traditional systems. A plethora of new models have been proposed, many of which are thought to be opaque compared to their feature-rich counterparts. This has led researchers to analyze, interpret, and evaluate neural networks in novel and more fine-grained ways. In this survey paper, we review analysis methods in neural language processing, categorize them according to prominent research trends, highlight existing limitations, and point to potential directions for future work.', 'year': 2018, 'in_acl': True, 'citationCount': 513, 'section': 'survey papers', 'subsection': None}, {'id': 222125099, 'paperId': '4b322cf280f459deb6d9e2eb2430d1a28141934c', 'title': 'A Survey of the State of Explainable AI for Natural Language Processing', 'authors': [{'authorId': '1994333', 'name': 'Marina Danilevsky'}, {'authorId': '143857309', 'name': 'Kun Qian'}, {'authorId': '48361424', 'name': 'R. Aharonov'}, {'authorId': '2208580', 'name': 'Yannis Katsis'}, {'authorId': '1814905', 'name': 'B. Kawas'}, {'authorId': '40655309', 'name': 'Prithviraj Sen'}], 'venue': 'AACL', 'abstract': 'Recent years have seen important advances in the quality of state-of-the-art models, but this has come at the expense of models becoming less interpretable. This survey presents an overview of the current state of Explainable AI (XAI), considered within the domain of Natural Language Processing (NLP). We discuss the main categorization of explanations, as well as the various ways explanations can be arrived at and visualized. We detail the operations and explainability techniques currently available for generating explanations for NLP model predictions, to serve as a resource for model developers in the community. Finally, we point out the current gaps and encourage directions for future work in this important research area.', 'year': 2020, 'in_acl': True, 'citationCount': 324, 'section': 'survey papers', 'subsection': None}, {'id': 53215110, 'paperId': 'c5489d244bfc1e9b0d8c94bf6dd774ee1aca2def', 'title': 'Identifying and Controlling Important Neurons in Neural Machine Translation', 'authors': [{'authorId': '2063937616', 'name': 'A. Bau'}, {'authorId': '2083259', 'name': 'Yonatan Belinkov'}, {'authorId': '145775792', 'name': 'Hassan Sajjad'}, {'authorId': '145938140', 'name': 'Nadir Durrani'}, {'authorId': '6415321', 'name': 'Fahim Dalvi'}, {'authorId': '145898106', 'name': 'James R. Glass'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Neural machine translation (NMT) models learn representations containing substantial linguistic information. However, it is not clear if such information is fully distributed or if some of it can be attributed to individual neurons. We develop unsupervised methods for discovering important neurons in NMT models. Our methods rely on the intuition that different models learn similar properties, and do not require any costly external supervision. We show experimentally that translation quality depends on the discovered neurons, and find that many of them capture common linguistic phenomena. Finally, we show how to control NMT translations in predictable ways, by modifying activations of individual neurons.', 'year': 2018, 'in_acl': False, 'citationCount': 168, 'section': 'Fine-grained analysis and its Applications', 'subsection': None}, {'id': 56895415, 'paperId': '9c2156bc35c6f8e68aa21d4b2f339134a4d28708', 'title': 'What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models', 'authors': [{'authorId': '6415321', 'name': 'Fahim Dalvi'}, {'authorId': '145938140', 'name': 'Nadir Durrani'}, {'authorId': '145775792', 'name': 'Hassan Sajjad'}, {'authorId': '2083259', 'name': 'Yonatan Belinkov'}, {'authorId': '2063937616', 'name': 'A. Bau'}, {'authorId': '145898106', 'name': 'James R. Glass'}], 'venue': 'AAAI Conference on Artificial Intelligence', 'abstract': 'Despite the remarkable evolution of deep neural networks in natural language processing (NLP), their interpretability remains a challenge. Previous work largely focused on what these models learn at the representation level. We break this analysis down further and study individual dimensions (neurons) in the vector representation learned by end-to-end neural models in NLP tasks. We propose two methods: Linguistic Correlation Analysis, based on a supervised method to extract the most relevant neurons with respect to an extrinsic task, and Cross-model Correlation Analysis, an unsupervised method to extract salient neurons w.r.t. the model itself. We evaluate the effectiveness of our techniques by ablating the identified neurons and reevaluating the network’s performance for two tasks: neural machine translation (NMT) and neural language modeling (NLM). We further present a comprehensive analysis of neurons with the aim to address the following questions: i) how localized or distributed are different linguistic properties in the models? ii) are certain neurons exclusive to some properties and not others? iii) is the information more or less distributed in NMT vs. NLM? and iv) how important are the neurons identified through the linguistic correlation method to the overall task? Our code is publicly available as part of the NeuroX toolkit (Dalvi et al. 2019a). This paper is a non-archived version of the paper published at AAAI (Dalvi et al. 2019b).', 'year': 2018, 'in_acl': False, 'citationCount': 172, 'section': 'Fine-grained analysis and its Applications', 'subsection': None}, {'id': 220055965, 'paperId': '56d1003fd02346e93354ab55cd204485c268512a', 'title': 'Compositional Explanations of Neurons', 'authors': [{'authorId': '24835910', 'name': 'Jesse Mu'}, {'authorId': '2112400', 'name': 'Jacob Andreas'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts that closely approximate neuron behavior. Compared to prior work that uses atomic labels as explanations, analyzing neurons compositionally allows us to more precisely and expressively characterize their behavior. We use this procedure to answer several questions on interpretability in models for vision and natural language processing. First, we examine the kinds of abstractions learned by neurons. In image classification, we find that many neurons learn highly abstract but semantically coherent visual concepts, while other polysemantic neurons detect multiple unrelated features; in natural language inference (NLI), neurons learn shallow lexical heuristics from dataset biases. Second, we see whether compositional explanations give us insight into model performance: vision neurons that detect human-interpretable concepts are positively correlated with task performance, while NLI neurons that fire for shallow heuristics are negatively correlated with task performance. Finally, we show how compositional explanations provide an accessible way for end users to produce simple "copy-paste" adversarial examples that change model behavior in predictable ways.', 'year': 2020, 'in_acl': False, 'citationCount': 152, 'section': 'Fine-grained analysis and its Applications', 'subsection': None}, {'id': 218665291, 'paperId': 'bf30db07357427cda6cc2b64fbcea783eb048f05', 'title': 'Finding Experts in Transformer Models', 'authors': [{'authorId': '2270464', 'name': 'Xavier Suau'}, {'authorId': '1753336', 'name': 'L. Zappella'}, {'authorId': '3301859', 'name': 'N. Apostoloff'}], 'venue': 'arXiv.org', 'abstract': "In this work we study the presence of expert units in pre-trained Transformer Models (TM), and how they impact a model's performance. We define expert units to be neurons that are able to classify a concept with a given average precision, where a concept is represented by a binary set of sentences containing the concept (or not). Leveraging the OneSec dataset (Scarlini et al., 2019), we compile a dataset of 1641 concepts that allows diverse expert units in TM to be discovered. We show that expert units are important in several ways: (1) The presence of expert units is correlated ($r^2=0.833$) with the generalization power of TM, which allows ranking TM without requiring fine-tuning on suites of downstream tasks. We further propose an empirical method to decide how accurate such experts should be to evaluate generalization. (2) The overlap of top experts between concepts provides a sensible way to quantify concept co-learning, which can be used for explainability of unknown concepts. (3) We show how to self-condition off-the-shelf pre-trained language models to generate text with a given concept by forcing the top experts to be active, without requiring re-training the model or using additional parameters.", 'year': 2020, 'in_acl': False, 'citationCount': 30, 'section': 'Fine-grained analysis and its Applications', 'subsection': None}, {'id': 21889700, 'paperId': '442e10a3c6640ded9408622005e3c2a8906ce4c2', 'title': 'A Unified Approach to Interpreting Model Predictions', 'authors': [{'authorId': '23451726', 'name': 'Scott M. Lundberg'}, {'authorId': '2180463', 'name': 'Su-In Lee'}], 'venue': 'Neural Information Processing Systems', 'abstract': "Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. However, the highest accuracy for large modern datasets is often achieved by complex models that even experts struggle to interpret, such as ensemble or deep learning models, creating a tension between accuracy and interpretability. In response, various methods have recently been proposed to help users interpret the predictions of complex models, but it is often unclear how these methods are related and when one method is preferable over another. To address this problem, we present a unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations). SHAP assigns each feature an importance value for a particular prediction. Its novel components include: (1) the identification of a new class of additive feature importance measures, and (2) theoretical results showing there is a unique solution in this class with a set of desirable properties. The new class unifies six existing methods, notable because several recent methods in the class lack the proposed desirable properties. Based on insights from this unification, we present new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.", 'year': 2017, 'in_acl': False, 'citationCount': 17387, 'section': 'Causation analysis', 'subsection': None}, {'id': 224818197, 'paperId': 'cf592385909a1e3e9a428d8d6d8f427ab70b60a9', 'title': 'Analyzing the Source and Target Contributions to Predictions in Neural Machine Translation', 'authors': [{'authorId': '46235299', 'name': 'Elena Voita'}, {'authorId': '2082372', 'name': 'Rico Sennrich'}, {'authorId': '144889265', 'name': 'Ivan Titov'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'In Neural Machine Translation (and, more generally, conditional language modeling), the generation of a target token is influenced by two types of context: the source and the prefix of the target sequence. While many attempts to understand the internal workings of NMT models have been made, none of them explicitly evaluates relative source and target contributions to a generation decision. We argue that this relative contribution can be evaluated by adopting a variant of Layerwise Relevance Propagation (LRP). Its underlying ‘conservation principle’ makes relevance propagation unique: differently from other methods, it evaluates not an abstract quantity reflecting token importance, but the proportion of each token’s influence. We extend LRP to the Transformer and conduct an analysis of NMT models which explicitly evaluates the source and target relative contributions to the generation process. We analyze changes in these contributions when conditioning on different types of prefixes, when varying the training objective or the amount of training data, and during the training process. We find that models trained with more data tend to rely on source information more and to have more sharp token contributions; the training process is non-monotonic with several stages of different nature.', 'year': 2020, 'in_acl': True, 'citationCount': 80, 'section': 'Causation analysis', 'subsection': None}, {'id': 16747630, 'paperId': 'f302e136c41db5de1d624412f68c9174cf7ae8be', 'title': 'Axiomatic Attribution for Deep Networks', 'authors': [{'authorId': '30740726', 'name': 'Mukund Sundararajan'}, {'authorId': '40511120', 'name': 'Ankur Taly'}, {'authorId': '34789908', 'name': 'Qiqi Yan'}], 'venue': 'International Conference on Machine Learning', 'abstract': 'We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works. We identify two fundamental axioms— Sensitivity and Implementation Invariance that attribution methods ought to satisfy. We show that they are not satisfied by most known attribution methods, which we consider to be a fundamental weakness of those methods. We use the axioms to guide the design of a new attribution method called Integrated Gradients. Our method requires no modification to the original network and is extremely simple to implement; it just needs a few calls to the standard gradient operator. We apply this method to a couple of image models, a couple of text models and a chemistry model, demonstrating its ability to debug networks, to extract rules from a network, and to enable users to engage with models better.', 'year': 2017, 'in_acl': False, 'citationCount': 5255, 'section': 'Causation analysis', 'subsection': None}, {'id': 44167055, 'paperId': 'eb322f6f798fd1b381896d0b79f5498d89585b1f', 'title': 'How Important Is a Neuron?', 'authors': [{'authorId': '1696833', 'name': 'Kedar Dhamdhere'}, {'authorId': '30740726', 'name': 'Mukund Sundararajan'}, {'authorId': '34789908', 'name': 'Qiqi Yan'}], 'venue': 'International Conference on Learning Representations', 'abstract': "The problem of attributing a deep network's prediction to its \\emph{input/base} features is well-studied. We introduce the notion of \\emph{conductance} to extend the notion of attribution to the understanding the importance of \\emph{hidden} units. \nInformally, the conductance of a hidden unit of a deep network is the \\emph{flow} of attribution via this hidden unit. We use conductance to understand the importance of a hidden unit to the prediction for a specific input, or over a set of inputs. We evaluate the effectiveness of conductance in multiple ways, including theoretical properties, ablation studies, and a feature selection task. The empirical evaluations are done using the Inception network over ImageNet data, and a sentiment analysis network over reviews. In both cases, we demonstrate the effectiveness of conductance in identifying interesting insights about the internal workings of these networks.", 'year': 2018, 'in_acl': False, 'citationCount': 117, 'section': 'Causation analysis', 'subsection': None}, {'id': 13029170, 'paperId': 'c0883f5930a232a9c1ad601c978caede29155979', 'title': '“Why Should I Trust You?”: Explaining the Predictions of Any Classifier', 'authors': [{'authorId': '78846919', 'name': 'Marco Tulio Ribeiro'}, {'authorId': '34650964', 'name': 'Sameer Singh'}, {'authorId': '1730156', 'name': 'Carlos Guestrin'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.', 'year': 2016, 'in_acl': True, 'citationCount': 14884, 'section': 'Causation analysis', 'subsection': None}, {'id': 211075890, 'paperId': '798ea191aad9401462b405fde1a6cefb4fe53fd5', 'title': 'Explaining Explanations: Axiomatic Feature Interactions for Deep Networks', 'authors': [{'authorId': '51290559', 'name': 'Joseph D. Janizek'}, {'authorId': '8575816', 'name': 'Pascal Sturmfels'}, {'authorId': '2180463', 'name': 'Su-In Lee'}], 'venue': 'Journal of machine learning research', 'abstract': "Recent work has shown great promise in explaining neural network behavior. In particular, feature attribution methods explain which features were most important to a model's prediction on a given input. However, for many tasks, simply knowing which features were important to a model's prediction may not provide enough insight to understand model behavior. The interactions between features within the model may better help us understand not only the model, but also why certain features are more important than others. In this work, we present Integrated Hessians, an extension of Integrated Gradients that explains pairwise feature interactions in neural networks. Integrated Hessians overcomes several theoretical limitations of previous methods to explain interactions, and unlike such previous methods is not limited to a specific architecture or class of neural network. Additionally, we find that our method is faster than existing methods when the number of features is large, and outperforms previous methods on existing quantitative benchmarks. Code available at this https URL", 'year': 2020, 'in_acl': False, 'citationCount': 131, 'section': 'Causation analysis', 'subsection': None}]
|
2021.naacl-tutorials.3
|
Deep Learning on Graphs for Natural Language Processing
|
Due to its great power in modeling non-Euclidean data like graphs or manifolds, deep learning on graph techniques (i.e., Graph Neural Networks (GNNs)) have opened a new door to solving challenging graph-related NLP problems. There has seen a surge of interests in applying deep learning on graph techniques to NLP, and has achieved considerable success in many NLP tasks, ranging from classification tasks like sentence classification, semantic role labeling and relation extraction, to generation tasks like machine translation, question generation and summarization. Despite these successes, deep learning on graphs for NLP still face many challenges, including automatically transforming original text sequence data into highly graph-structured data, and effectively modeling complex data that involves mapping between graph-based inputs and other highly structured output data such as sequences, trees, and graph data with multi-types in both nodes and edges. This tutorial will cover relevant and interesting topics on applying deep learning on graph techniques to NLP, including automatic graph construction for NLP, graph representation learning for NLP, advanced GNN based models (e.g., graph2seq, graph2tree, and graph2graph) for NLP, and the applications of GNNs in various NLP tasks (e.g., machine translation, natural language generation, information extraction and semantic parsing). In addition, hands-on demonstration sessions will be included to help the audience gain practical experience on applying GNNs to solve challenging NLP problems using our recently developed open source library – Graph4NLP, the first library for researchers and practitioners for easy use of GNNs for various NLP tasks.
| 2,021
|
https://aclanthology.org/2021.naacl-tutorials.3
|
NAACL
|
[{'id': 3144218, 'paperId': '36eff562f65125511b5dfab68ce7f7a943c27478', 'title': 'Semi-Supervised Classification with Graph Convolutional Networks', 'authors': [{'authorId': '41016725', 'name': 'Thomas Kipf'}, {'authorId': '1678311', 'name': 'M. Welling'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.', 'year': 2016, 'in_acl': False, 'citationCount': 25974, 'section': 'GNNs', 'subsection': None}, {'id': 8393918, 'paperId': '492f57ee9ceb61fb5a47ad7aebfec1121887a175', 'title': 'Gated Graph Sequence Neural Networks', 'authors': [{'authorId': '47002813', 'name': 'Yujia Li'}, {'authorId': '1725299', 'name': 'Daniel Tarlow'}, {'authorId': '2107692', 'name': 'Marc Brockschmidt'}, {'authorId': '1804104', 'name': 'R. Zemel'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Abstract: Graph-structured data appears frequently in domains including chemistry, natural language semantics, social networks, and knowledge bases. In this work, we study feature learning techniques for graph-structured inputs. Our starting point is previous work on Graph Neural Networks (Scarselli et al., 2009), which we modify to use gated recurrent units and modern optimization techniques and then extend to output sequences. The result is a flexible and broadly useful class of neural network models that has favorable inductive biases relative to purely sequence-based models (e.g., LSTMs) when the problem is graph-structured. We demonstrate the capabilities on some simple AI (bAbI) and graph algorithm learning tasks. We then show it achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures.', 'year': 2015, 'in_acl': False, 'citationCount': 3122, 'section': 'GNNs', 'subsection': None}, {'id': 4755450, 'paperId': '6b7d6e6416343b2a122f8416e69059ce919026ef', 'title': 'Inductive Representation Learning on Large Graphs', 'authors': [{'authorId': '49437682', 'name': 'William L. Hamilton'}, {'authorId': '4058003', 'name': 'Z. Ying'}, {'authorId': '1702139', 'name': 'J. Leskovec'}], 'venue': 'Neural Information Processing Systems', 'abstract': "Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.", 'year': 2017, 'in_acl': False, 'citationCount': 13217, 'section': 'GNNs', 'subsection': None}, {'id': 6206777, 'paperId': '2784000e1a3554374662f4d18cb5ad52f59c8de6', 'title': 'Graph Convolutional Encoders for Syntax-aware Neural Machine Translation', 'authors': [{'authorId': '3000862', 'name': 'Jasmijn Bastings'}, {'authorId': '144889265', 'name': 'Ivan Titov'}, {'authorId': '2782694', 'name': 'Wilker Aziz'}, {'authorId': '2022957', 'name': 'Diego Marcheggiani'}, {'authorId': '3540477', 'name': "K. Sima'an"}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'We present a simple and effective approach to incorporating syntactic structure into neural attention-based encoder-decoder models for machine translation. We rely on graph-convolutional networks (GCNs), a recent class of neural networks developed for modeling graph-structured data. Our GCNs use predicted syntactic dependency trees of source sentences to produce representations of words (i.e. hidden states of the encoder) that are sensitive to their syntactic neighborhoods. GCNs take word representations as input and produce word representations as output, so they can easily be incorporated as layers into standard encoders (e.g., on top of bidirectional RNNs or convolutional neural networks). We evaluate their effectiveness with English-German and English-Czech translation experiments for different types of encoders and observe substantial improvements over their syntax-agnostic versions in all the considered setups.', 'year': 2017, 'in_acl': True, 'citationCount': 482, 'section': 'automatic graph construction for NLP', 'subsection': None}, {'id': 199577786, 'paperId': 'e47e6c814d2742527fdd352db13a5fd95b7ce24b', 'title': 'Reinforcement Learning Based Graph-to-Sequence Model for Natural Question Generation', 'authors': [{'authorId': '2144836395', 'name': 'Yu Chen'}, {'authorId': '3008832', 'name': 'Lingfei Wu'}, {'authorId': '1693515', 'name': 'Mohammed J. Zaki'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Natural question generation (QG) aims to generate questions from a passage and an answer. Previous works on QG either (i) ignore the rich structure information hidden in text, (ii) solely rely on cross-entropy loss that leads to issues like exposure bias and inconsistency between train/test measurement, or (iii) fail to fully exploit the answer information. To address these limitations, in this paper, we propose a reinforcement learning (RL) based graph-to-sequence (Graph2Seq) model for QG. Our model consists of a Graph2Seq generator with a novel Bidirectional Gated Graph Neural Network based encoder to embed the passage, and a hybrid evaluator with a mixed objective combining both cross-entropy and RL losses to ensure the generation of syntactically and semantically valid text. We also introduce an effective Deep Alignment Network for incorporating the answer information into the passage at both the word and contextual levels. Our model is end-to-end trainable and achieves new state-of-the-art scores, outperforming existing methods by a significant margin on the standard SQuAD benchmark.', 'year': 2019, 'in_acl': False, 'citationCount': 144, 'section': 'automatic graph construction for NLP', 'subsection': None}, {'id': 214003631, 'paperId': 'ff6a4a9a41b78c8b1fcab185db780266bbb06caf', 'title': 'Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings', 'authors': [{'authorId': '2144836395', 'name': 'Yu Chen'}, {'authorId': '3008832', 'name': 'Lingfei Wu'}, {'authorId': '1693515', 'name': 'Mohammed J. Zaki'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'In this paper, we propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL), for jointly and iteratively learning graph structure and graph embedding. The key rationale of IDGL is to learn a better graph structure based on better node embeddings, and vice versa (i.e., better node embeddings based on a better graph structure). Our iterative method dynamically stops when the learned graph approaches close enough to the graph optimized for the prediction task. In addition, we cast the graph learning problem as a similarity metric learning problem and leverage adaptive graph regularization for controlling the quality of the learned graph. Finally, combining the anchor-based approximation technique, we further propose a scalable version of IDGL, namely IDGL-ANCH, which significantly reduces the time and space complexity of IDGL without compromising the performance. Our extensive experiments on nine benchmarks show that our proposed IDGL models can consistently outperform or match state-of-the-art baselines. Furthermore, IDGL can be more robust to adversarial graphs and cope with both transductive and inductive learning.', 'year': 2020, 'in_acl': False, 'citationCount': 357, 'section': 'automatic graph construction for NLP', 'subsection': None}, {'id': 218486837, 'paperId': '9e979667aa81c294062c02ab3a48e87e47c54987', 'title': 'Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question Answering', 'authors': [{'authorId': '2115389088', 'name': 'Yanlin Feng'}, {'authorId': '121022966', 'name': 'Xinyue Chen'}, {'authorId': '51583409', 'name': 'Bill Yuchen Lin'}, {'authorId': '2784644', 'name': 'Peifeng Wang'}, {'authorId': '49781448', 'name': 'Jun Yan'}, {'authorId': '1384550891', 'name': 'Xiang Ren'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': "Existing work on augmenting question answering (QA) models with external knowledge (e.g., knowledge graphs) either struggle to model multi-hop relations efficiently, or lack transparency into the model's prediction rationale. In this paper, we propose a novel knowledge-aware approach that equips pre-trained language models (PTLMs) with a multi-hop relational reasoning module, named multi-hop graph relation network (MHGRN). It performs multi-hop, multi-relational reasoning over subgraphs extracted from external knowledge graphs. The proposed reasoning module unifies path-based reasoning methods and graph neural networks to achieve better interpretability and scalability. We also empirically show its effectiveness and scalability on CommonsenseQA and OpenbookQA datasets, and interpret its behaviors with case studies.", 'year': 2020, 'in_acl': True, 'citationCount': 219, 'section': 'joint text and knowledge representation learning', 'subsection': None}, {'id': 220048375, 'paperId': '222dcbf5ee19fdfc9cfbd9c75af168a5c2122a4a', 'title': 'A Joint Neural Model for Information Extraction with Global Features', 'authors': [{'authorId': '2117032681', 'name': 'Ying Lin'}, {'authorId': '2113323573', 'name': 'Heng Ji'}, {'authorId': '143857288', 'name': 'Fei Huang'}, {'authorId': '3008832', 'name': 'Lingfei Wu'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Most existing joint neural models for Information Extraction (IE) use local task-specific classifiers to predict labels for individual instances (e.g., trigger, relation) regardless of their interactions. For example, a victim of a die event is likely to be a victim of an attack event in the same sentence. In order to capture such cross-subtask and cross-instance inter-dependencies, we propose a joint neural framework, OneIE, that aims to extract the globally optimal IE result as a graph from an input sentence. OneIE performs end-to-end IE in four stages: (1) Encoding a given sentence as contextualized word representations; (2) Identifying entity mentions and event triggers as nodes; (3) Computing label scores for all nodes and their pairwise links using local classifiers; (4) Searching for the globally optimal graph with a beam decoder. At the decoding stage, we incorporate global features to capture the cross-subtask and cross-instance interactions. Experiments show that adding global features improves the performance of our model and achieves new state of-the-art on all subtasks. In addition, as OneIE does not use any language-specific feature, we prove it can be easily applied to new languages or trained in a multilingual manner.', 'year': 2020, 'in_acl': True, 'citationCount': 373, 'section': 'joint text and knowledge representation learning', 'subsection': None}, {'id': 4590511, 'paperId': '94eb48c1878efbe2ccff121bd600dd0fd8a75650', 'title': 'Graph2Seq: Graph to Sequence Learning with Attention-based Neural Networks', 'authors': [{'authorId': '151485141', 'name': 'Kun Xu'}, {'authorId': '3008832', 'name': 'Lingfei Wu'}, {'authorId': '40296541', 'name': 'Zhiguo Wang'}, {'authorId': '1717629', 'name': 'Yansong Feng'}, {'authorId': '1757683', 'name': 'V. Sheinin'}], 'venue': 'arXiv.org', 'abstract': 'The celebrated Sequence to Sequence learning (Seq2Seq) technique and its numerous variants achieve excellent performance on many tasks. However, many machine learning tasks have inputs naturally represented as graphs; existing Seq2Seq models face a significant challenge in achieving accurate conversion from graph form to the appropriate sequence. To address this challenge, we introduce a novel general end-to-end graph-to-sequence neural encoder-decoder model that maps an input graph to a sequence of vectors and uses an attention-based LSTM method to decode the target sequence from these vectors. Our method first generates the node and graph embeddings using an improved graph-based neural network with a novel aggregation strategy to incorporate edge direction information in the node embeddings. We further introduce an attention mechanism that aligns node embeddings and the decoding sequence to better cope with large graphs. Experimental results on bAbI, Shortest Path, and Natural Language Generation tasks demonstrate that our model achieves state-of-the-art performance and significantly outperforms existing graph neural networks, Seq2Seq, and Tree2Seq models; using the proposed bi-directional node embedding aggregation strategy, the model can converge rapidly to the optimal performance.', 'year': 2018, 'in_acl': False, 'citationCount': 161, 'section': 'modeling directed graphs', 'subsection': None}, {'id': 199577786, 'paperId': 'e47e6c814d2742527fdd352db13a5fd95b7ce24b', 'title': 'Reinforcement Learning Based Graph-to-Sequence Model for Natural Question Generation', 'authors': [{'authorId': '2144836395', 'name': 'Yu Chen'}, {'authorId': '3008832', 'name': 'Lingfei Wu'}, {'authorId': '1693515', 'name': 'Mohammed J. Zaki'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Natural question generation (QG) aims to generate questions from a passage and an answer. Previous works on QG either (i) ignore the rich structure information hidden in text, (ii) solely rely on cross-entropy loss that leads to issues like exposure bias and inconsistency between train/test measurement, or (iii) fail to fully exploit the answer information. To address these limitations, in this paper, we propose a reinforcement learning (RL) based graph-to-sequence (Graph2Seq) model for QG. Our model consists of a Graph2Seq generator with a novel Bidirectional Gated Graph Neural Network based encoder to embed the passage, and a hybrid evaluator with a mixed objective combining both cross-entropy and RL losses to ensure the generation of syntactically and semantically valid text. We also introduce an effective Deep Alignment Network for incorporating the answer information into the passage at both the word and contextual levels. Our model is end-to-end trainable and achieves new state-of-the-art scores, outperforming existing methods by a significant margin on the standard SQuAD benchmark.', 'year': 2019, 'in_acl': False, 'citationCount': 144, 'section': 'modeling directed graphs', 'subsection': None}, {'id': 6206777, 'paperId': '2784000e1a3554374662f4d18cb5ad52f59c8de6', 'title': 'Graph Convolutional Encoders for Syntax-aware Neural Machine Translation', 'authors': [{'authorId': '3000862', 'name': 'Jasmijn Bastings'}, {'authorId': '144889265', 'name': 'Ivan Titov'}, {'authorId': '2782694', 'name': 'Wilker Aziz'}, {'authorId': '2022957', 'name': 'Diego Marcheggiani'}, {'authorId': '3540477', 'name': "K. Sima'an"}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'We present a simple and effective approach to incorporating syntactic structure into neural attention-based encoder-decoder models for machine translation. We rely on graph-convolutional networks (GCNs), a recent class of neural networks developed for modeling graph-structured data. Our GCNs use predicted syntactic dependency trees of source sentences to produce representations of words (i.e. hidden states of the encoder) that are sensitive to their syntactic neighborhoods. GCNs take word representations as input and produce word representations as output, so they can easily be incorporated as layers into standard encoders (e.g., on top of bidirectional RNNs or convolutional neural networks). We evaluate their effectiveness with English-German and English-Czech translation experiments for different types of encoders and observe substantial improvements over their syntax-agnostic versions in all the considered setups.', 'year': 2017, 'in_acl': True, 'citationCount': 482, 'section': 'and heterogeneous graphs', 'subsection': None}, {'id': 215745291, 'paperId': 'd365f9c805d59788f9ae5ad36fee69f9abd8d3c7', 'title': 'Toward Subgraph-Guided Knowledge Graph Question Generation With Graph Neural Networks', 'authors': [{'authorId': '2144836395', 'name': 'Yu Chen'}, {'authorId': '3008832', 'name': 'Lingfei Wu'}, {'authorId': '1693515', 'name': 'Mohammed J. Zaki'}], 'venue': 'IEEE Transactions on Neural Networks and Learning Systems', 'abstract': 'Knowledge graph (KG) question generation (QG) aims to generate natural language questions from KGs and target answers. Previous works mostly focus on a simple setting that is to generate questions from a single KG triple. In this work, we focus on a more realistic setting where we aim to generate questions from a KG subgraph and target answers. In addition, most previous works built on either RNN- or Transformer-based models to encode a linearized KG subgraph, which totally discards the explicit structure information of a KG subgraph. To address this issue, we propose to apply a bidirectional Graph2Seq model to encode the KG subgraph. Furthermore, we enhance our RNN decoder with a node-level copying mechanism to allow direct copying of node attributes from the KG subgraph to the output question. Both automatic and human evaluation results demonstrate that our model achieves new state-of-the-art scores, outperforming existing methods by a significant margin on two QG benchmarks. Experimental results also show that our QG model can consistently benefit the question-answering (QA) task as a means of data augmentation.', 'year': 2020, 'in_acl': False, 'citationCount': 37, 'section': 'and heterogeneous graphs', 'subsection': None}, {'id': 4590511, 'paperId': '94eb48c1878efbe2ccff121bd600dd0fd8a75650', 'title': 'Graph2Seq: Graph to Sequence Learning with Attention-based Neural Networks', 'authors': [{'authorId': '151485141', 'name': 'Kun Xu'}, {'authorId': '3008832', 'name': 'Lingfei Wu'}, {'authorId': '40296541', 'name': 'Zhiguo Wang'}, {'authorId': '1717629', 'name': 'Yansong Feng'}, {'authorId': '1757683', 'name': 'V. Sheinin'}], 'venue': 'arXiv.org', 'abstract': 'The celebrated Sequence to Sequence learning (Seq2Seq) technique and its numerous variants achieve excellent performance on many tasks. However, many machine learning tasks have inputs naturally represented as graphs; existing Seq2Seq models face a significant challenge in achieving accurate conversion from graph form to the appropriate sequence. To address this challenge, we introduce a novel general end-to-end graph-to-sequence neural encoder-decoder model that maps an input graph to a sequence of vectors and uses an attention-based LSTM method to decode the target sequence from these vectors. Our method first generates the node and graph embeddings using an improved graph-based neural network with a novel aggregation strategy to incorporate edge direction information in the node embeddings. We further introduce an attention mechanism that aligns node embeddings and the decoding sequence to better cope with large graphs. Experimental results on bAbI, Shortest Path, and Natural Language Generation tasks demonstrate that our model achieves state-of-the-art performance and significantly outperforms existing graph neural networks, Seq2Seq, and Tree2Seq models; using the proposed bi-directional node embedding aggregation strategy, the model can converge rapidly to the optimal performance.', 'year': 2018, 'in_acl': False, 'citationCount': 161, 'section': 'GNN based encoder-decoder models', 'subsection': None}, {'id': 199577786, 'paperId': 'e47e6c814d2742527fdd352db13a5fd95b7ce24b', 'title': 'Reinforcement Learning Based Graph-to-Sequence Model for Natural Question Generation', 'authors': [{'authorId': '2144836395', 'name': 'Yu Chen'}, {'authorId': '3008832', 'name': 'Lingfei Wu'}, {'authorId': '1693515', 'name': 'Mohammed J. Zaki'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Natural question generation (QG) aims to generate questions from a passage and an answer. Previous works on QG either (i) ignore the rich structure information hidden in text, (ii) solely rely on cross-entropy loss that leads to issues like exposure bias and inconsistency between train/test measurement, or (iii) fail to fully exploit the answer information. To address these limitations, in this paper, we propose a reinforcement learning (RL) based graph-to-sequence (Graph2Seq) model for QG. Our model consists of a Graph2Seq generator with a novel Bidirectional Gated Graph Neural Network based encoder to embed the passage, and a hybrid evaluator with a mixed objective combining both cross-entropy and RL losses to ensure the generation of syntactically and semantically valid text. We also introduce an effective Deep Alignment Network for incorporating the answer information into the passage at both the word and contextual levels. Our model is end-to-end trainable and achieves new state-of-the-art scores, outperforming existing methods by a significant margin on the standard SQuAD benchmark.', 'year': 2019, 'in_acl': False, 'citationCount': 144, 'section': 'GNN based encoder-decoder models', 'subsection': None}, {'id': 216642054, 'paperId': 'f1aef5403012d2a70344bc70d58d720aef85834c', 'title': 'Graph-to-Tree Neural Networks for Learning Structured Input-Output Translation with Applications to Semantic Parsing and Math Word Problem', 'authors': [{'authorId': '2109104122', 'name': 'Shucheng Li'}, {'authorId': '3008832', 'name': 'Lingfei Wu'}, {'authorId': '2188773882', 'name': 'Shiwei Feng'}, {'authorId': '2392383', 'name': 'Fangli Xu'}, {'authorId': '1741521', 'name': 'Fengyuan Xu'}, {'authorId': '2053863706', 'name': 'Sheng Zhong'}], 'venue': 'Findings', 'abstract': 'The celebrated Seq2Seq technique and its numerous variants achieve excellent performance on many tasks such as neural machine translation, semantic parsing, and math word problem solving. However, these models either only consider input objects as sequences while ignoring the important structural information for encoding, or they simply treat output objects as sequence outputs instead of structural objects for decoding. In this paper, we present a novel Graph-to-Tree Neural Networks, namely Graph2Tree consisting of a graph encoder and a hierarchical tree decoder, that encodes an augmented graph-structured input and decodes a tree-structured output. In particular, we investigated our model for solving two problems, neural semantic parsing and math word problem. Our extensive experiments demonstrate that our Graph2Tree model outperforms or matches the performance of other state-of-the-art models on these tasks.', 'year': 2020, 'in_acl': True, 'citationCount': 68, 'section': 'GNN based encoder-decoder models', 'subsection': None}]
|
2021.naacl-tutorials.5
|
Beyond Paragraphs: NLP for Long Sequences
|
In this tutorial, we aim at bringing interested NLP researchers up to speed about the recent and ongoing techniques for document-level representation learning. Additionally, our goal is to reveal new research opportunities to the audience, which will hopefully bring us closer to address existing challenges in this domain.
| 2,021
|
https://aclanthology.org/2021.naacl-tutorials.5
|
NAACL
|
[{'id': 6857205, 'paperId': '455afd748e8834ef521e4b67c7c056d3c33429e2', 'title': 'Hierarchical Attention Networks for Document Classification', 'authors': [{'authorId': '8387085', 'name': 'Zichao Yang'}, {'authorId': '2022168', 'name': 'Diyi Yang'}, {'authorId': '1745899', 'name': 'Chris Dyer'}, {'authorId': '144137069', 'name': 'Xiaodong He'}, {'authorId': '46234526', 'name': 'Alex Smola'}, {'authorId': '144547315', 'name': 'E. Hovy'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'We propose a hierarchical attention network for document classification. Our model has two distinctive characteristics: (i) it has a hierarchical structure that mirrors the hierarchical structure of documents; (ii) it has two levels of attention mechanisms applied at the wordand sentence-level, enabling it to attend differentially to more and less important content when constructing the document representation. Experiments conducted on six large scale text classification tasks demonstrate that the proposed architecture outperform previous methods by a substantial margin. Visualization of the attention layers illustrates that the model selects qualitatively informative words and sentences.', 'year': 2016, 'in_acl': True, 'citationCount': 4308, 'section': None, 'subsection': None}, {'id': 207853300, 'paperId': '2c88d7486f9871cb741ba3c7076b8adbb7fd5b68', 'title': 'Hierarchical Graph Network for Multi-hop Question Answering', 'authors': [{'authorId': '51444591', 'name': 'Yuwei Fang'}, {'authorId': '2419809', 'name': 'S. Sun'}, {'authorId': '144702900', 'name': 'Zhe Gan'}, {'authorId': '2128387353', 'name': 'R. Pillai'}, {'authorId': '2992833', 'name': 'Shuohang Wang'}, {'authorId': '46700348', 'name': 'Jingjing Liu'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'In this paper, we present Hierarchical Graph Network (HGN) for multi-hop question answering. To aggregate clues from scattered texts across multiple paragraphs, a hierarchical graph is created by constructing nodes from different levels of granularity (questions, paragraphs, sentences, and entities), the representations of which are initialized with RoBERTa-based context encoders. Given this hierarchical graph, the initial node representations are updated through graph propagation, and multi-hop reasoning is performed via traversing through the graph edges for each subsequent sub-task (e.g., paragraph selection, supporting facts extraction, answer prediction). By weaving heterogeneous nodes into an integral unified graph, this characteristic hierarchical differentiation of node granularity enables HGN to support different question answering sub-tasks simultaneously. Experiments on the HotpotQA benchmark demonstrate that the proposed model achieves new state of the art in both the Distractor and Fullwiki settings.', 'year': 2019, 'in_acl': True, 'citationCount': 164, 'section': None, 'subsection': None}, {'id': 221702858, 'paperId': '7e5709d81558d3ef4265de29ea75931afeb1f2dd', 'title': 'Efficient Transformers: A Survey', 'authors': [{'authorId': '144447820', 'name': 'Yi Tay'}, {'authorId': '3226635', 'name': 'Mostafa Dehghani'}, {'authorId': '11774695', 'name': 'Dara Bahri'}, {'authorId': '1680617', 'name': 'Donald Metzler'}], 'venue': 'ACM Computing Surveys', 'abstract': 'Transformer model architectures have garnered immense interest lately due to their effectiveness across a range of domains like language, vision, and reinforcement learning. In the field of natural language processing for example, Transformers have become an indispensable staple in the modern deep learning stack. Recently, a dizzying number of “X-former” models have been proposed—Reformer, Linformer, Performer, Longformer, to name a few—which improve upon the original Transformer architecture, many of which make improvements around computational and memory efficiency. With the aim of helping the avid researcher navigate this flurry, this article characterizes a large and thoughtful selection of recent efficiency-flavored “X-former” models, providing an organized and comprehensive overview of existing work and models across multiple domains.', 'year': 2020, 'in_acl': False, 'citationCount': 979, 'section': None, 'subsection': None}, {'id': 202541012, 'paperId': '5665805becad6c87b194b260f2270d86d560bd3f', 'title': 'On Extractive and Abstractive Neural Document Summarization with Transformer Language Models', 'authors': [{'authorId': '104354626', 'name': 'Jonathan Pilault'}, {'authorId': '144235909', 'name': 'Raymond Li'}, {'authorId': '50324141', 'name': 'Sandeep Subramanian'}, {'authorId': '1972076', 'name': 'C. Pal'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'We present a method to produce abstractive summaries of long documents that exceed several thousand words via neural abstractive summarization. We perform a simple extractive step before generating a summary, which is then used to condition the transformer language model on relevant information before being tasked with generating a summary. We also show that this approach produces more abstractive summaries compared to prior work that employs a copy mechanism while still achieving higher ROUGE scores. We provide extensive comparisons with strong baseline methods, prior state of the art work as well as multiple variants of our approach including those using only transformers, only extractive techniques and combinations of the two. We examine these models using four different summarization tasks and datasets: arXiv papers, PubMed papers, the Newsroom and BigPatent datasets. We find that transformer based methods produce summaries with fewer n-gram copies, leading to n-gram copying statistics that are more similar to human generated abstracts. We include a human evaluation, finding that transformers are ranked highly for coherence and fluency, but purely extractive methods score higher for informativeness and relevance. We hope that these architectures and experiments may serve as strong points of comparison for future work. Note: The abstract above was collaboratively written by the authors and one of the models presented in this paper based on an earlier draft of this paper.', 'year': 2020, 'in_acl': True, 'citationCount': 191, 'section': None, 'subsection': None}]
|
2021.naacl-tutorials.6
|
Crowdsourcing Natural Language Data at Scale: A Hands-On Tutorial
|
In this tutorial, we present a portion of unique industry experience in efficient natural language data annotation via crowdsourcing shared by both leading researchers and engineers from Yandex. We will make an introduction to data labeling via public crowdsourcing marketplaces and will present the key components of efficient label collection. This will be followed by a practical session, where participants address a real-world language resource production task, experiment with selecting settings for the labeling process, and launch their label collection project on one of the largest crowdsourcing marketplaces. The projects will be run on real crowds within the tutorial session and we will present useful quality control techniques and provide the attendees with an opportunity to discuss their own annotation ideas.
| 2,021
|
https://aclanthology.org/2021.naacl-tutorials.6
|
NAACL
|
[{'id': 45813168, 'paperId': 'c80c7ab615b2fad5148a7848dbdd26a2dc50dd3d', 'title': 'Maximum Likelihood Estimation of Observer Error‐Rates Using the EM Algorithm', 'authors': [{'authorId': '144845491', 'name': 'A. Dawid'}, {'authorId': '1909853', 'name': 'A. Skene'}], 'venue': '', 'abstract': 'In compiling a patient record many facets are subject to errors of measurement. A model is presented which allows individual error-rates to be estimated for polytomous facets even when the patient\'s "true" response is not available. The EM algorithm is shown to provide a slow but sure way of obtaining maximum likelihood estimates of the parameters of interest. Some preliminary experience is reported and the limitations of the method are described.', 'year': 1979, 'in_acl': False, 'citationCount': 1775, 'section': 'Quality Control', 'subsection': None}, {'id': 207908258, 'paperId': '12584bd527408145b1a1d6b9489c49710ff3d737', 'title': 'A Dataset of Crowdsourced Word Sequences: Collections and Answer Aggregation for Ground Truth Creation', 'authors': [{'authorId': '40340606', 'name': 'Jiyi Li'}, {'authorId': '3029852', 'name': 'Fumiyo Fukumoto'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'The target outputs of many NLP tasks are word sequences. To collect the data for training and evaluating models, the crowd is a cheaper and easier to access than the oracle. To ensure the quality of the crowdsourced data, people can assign multiple workers to one question and then aggregate the multiple answers with diverse quality into a golden one. How to aggregate multiple crowdsourced word sequences with diverse quality is a curious and challenging problem. People need a dataset for addressing this problem. We thus create a dataset (CrowdWSA2019) which contains the translated sentences generated from multiple workers. We provide three approaches as the baselines on the task of extractive word sequence aggregation. Specially, one of them is an original one we propose which models the reliability of workers. We also discuss some issues on ground truth creation of word sequences which can be addressed based on this dataset.', 'year': 2019, 'in_acl': True, 'citationCount': 13, 'section': 'Quality Control', 'subsection': None}, {'id': 53228157, 'paperId': '50f34fed4cd6cfa761efc0a9ca12bf75d799cc8e', 'title': 'Soylent: a word processor with a crowd inside', 'authors': [{'authorId': '145879842', 'name': 'Michael S. Bernstein'}, {'authorId': '48155668', 'name': 'Greg Little'}, {'authorId': '152160465', 'name': 'Rob Miller'}, {'authorId': '28226629', 'name': 'Bjoern Hartmann'}, {'authorId': '1797833', 'name': 'M. Ackerman'}, {'authorId': '1743286', 'name': 'David R Karger'}, {'authorId': '144141670', 'name': 'David Crowell'}, {'authorId': '1814699', 'name': 'Katrina Panovich'}], 'venue': 'ACM Symposium on User Interface Software and Technology', 'abstract': 'This paper introduces architectural and interaction patterns for integrating crowdsourced human contributions directly into user interfaces. We focus on writing and editing, complex endeavors that span many levels of conceptual and pragmatic activity. Authoring tools offer help with pragmatics, but for higher-level help, writers commonly turn to other people. We thus present Soylent, a word processing interface that enables writers to call on Mechanical Turk workers to shorten, proofread, and otherwise edit parts of their documents on demand. To improve worker quality, we introduce the Find-Fix-Verify crowd programming pattern, which splits tasks into a series of generation and review stages. Evaluation studies demonstrate the feasibility of crowdsourced editing and investigate questions of reliability, cost, wait time, and work time for edits.', 'year': 2010, 'in_acl': False, 'citationCount': 928, 'section': 'Task Design for NLP', 'subsection': None}, {'id': 263898183, 'paperId': '92f24748ecb81af9220e6305dd23f16e4570e303', 'title': 'Creating Speech and Language Data With Amazon’s Mechanical Turk', 'authors': [{'authorId': '1763608', 'name': 'Chris Callison-Burch'}, {'authorId': '1478928280', 'name': 'Mark Dredze'}], 'venue': 'Mturk@HLT-NAACL', 'abstract': "In this paper we give an introduction to using Amazon's Mechanical Turk crowdsourcing platform for the purpose of collecting data for human language technologies. We survey the papers published in the NAACL-2010 Workshop. 24 researchers participated in the workshop's shared task to create data for speech and language applications with $100.", 'year': 2010, 'in_acl': True, 'citationCount': 204, 'section': 'Task Design for NLP', 'subsection': None}, {'id': 6837877, 'paperId': 'bc556572a30553cffb4f80263573e6c2d7c2e3d7', 'title': 'Creating a system for lexical substitutions from scratch using crowdsourcing', 'authors': [{'authorId': '31565315', 'name': 'Chris Biemann'}], 'venue': 'Language Resources and Evaluation', 'abstract': 'This article describes the creation and application of the Turk Bootstrap Word Sense Inventory for 397 frequent nouns, which is a publicly available resource for lexical substitution. This resource was acquired using Amazon Mechanical Turk. In a bootstrapping process with massive collaborative input, substitutions for target words in context are elicited and clustered by sense; then, more contexts are collected. Contexts that cannot be assigned to a current target word’s sense inventory re-enter the bootstrapping loop and get a supply of substitutions. This process yields a sense inventory with its granularity determined by substitutions as opposed to psychologically motivated concepts. It comes with a large number of sense-annotated target word contexts. Evaluation on data quality shows that the process is robust against noise from the crowd, produces a less fine-grained inventory than WordNet and provides a rich body of high precision substitution data at low cost. Using the data to train a system for lexical substitutions, we show that amount and quality of the data is sufficient for producing high quality substitutions automatically. In this system, co-occurrence cluster features are employed as a means to cheaply model topicality.', 'year': 2012, 'in_acl': False, 'citationCount': 69, 'section': 'Task Design for NLP', 'subsection': None}, {'id': 7008675, 'paperId': '0165568bcc1a819c18564567f2ec15d859be2519', 'title': 'Cheap and Fast – But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks', 'authors': [{'authorId': '144621026', 'name': 'R. Snow'}, {'authorId': '1401020033', 'name': "Brendan T. O'Connor"}, {'authorId': '1746807', 'name': 'Dan Jurafsky'}, {'authorId': '34699434', 'name': 'A. Ng'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': "Human linguistic annotation is crucial for many natural language processing tasks but can be expensive and time-consuming. We explore the use of Amazon's Mechanical Turk system, a significantly cheaper and faster method for collecting annotations from a broad base of paid non-expert contributors over the Web. We investigate five tasks: affect recognition, word similarity, recognizing textual entailment, event temporal ordering, and word sense disambiguation. For all five, we show high agreement between Mechanical Turk non-expert annotations and existing gold standard labels provided by expert labelers. For the task of affect recognition, we also show that using non-expert labels for training machine learning algorithms can be as effective as using gold standard annotations from experts. We propose a technique for bias correction that significantly improves annotation quality on two tasks. We conclude that many large labeling tasks can be effectively designed and carried out in this method at a fraction of the usual expense.", 'year': 2008, 'in_acl': True, 'citationCount': 2327, 'section': 'Incentives', 'subsection': None}, {'id': 8924783, 'paperId': '9005cc8e33e2cf875588e5d1225c8b9e3f300a57', 'title': 'Quality-Based Pricing for Crowdsourced Workers', 'authors': [{'authorId': '2152453484', 'name': 'Jing Wang'}, {'authorId': '2942126', 'name': 'Panagiotis G. Ipeirotis'}, {'authorId': '1752722', 'name': 'F. Provost'}], 'venue': '', 'abstract': 'The emergence of online paid crowdsourcing platforms, such as Amazon Mechanical Turk (AMT), presents us huge opportunities to distribute tasks to human workers around the world, on-demand and at scale. In such settings, online workers can come and complete tasks posted by a company, and work for as long or as little as they wish. Given the absolute freedom of choice, crowdsourcing eliminates the overhead of the hiring (and dismissal) process. However, this exibility introduces a di erent set of ine ciencies: verifying the quality of every submitted piece of work is an expensive operation, which often requires the same level of e ort as performing the task itself. There are many research challenges that emerge in this paid-crowdsourcing setting. How can we ensure that the submitted work is accurate? How can we estimate the quality of the workers, and the quality of the submitted results? How should we pay online workers that have imperfect quality? We present a comprehensive scheme for managing quality of crowdsourcing processes: First, we present an algorithm for estimating the quality of the participating workers and, by extension, of the generated data. We show how we can separate systematic worker biases from unrecoverable errors and how to generate an unbiased "worker quality" measurement that can be used to objectively rank workers according to their performance. Next, we describe a pricing scheme that identi es the fair payment level for a worker, adjusting the payment level according to the contributed information by each worker. Our pricing policy, which pays workers based on their expected quality, reservation wage, and expected lifetime, estimates not only 1 the payment level but also accommodates measurement uncertainties and allows the workers to receive a fair wage, even in the presence of temporary incorrect estimations of quality. Our experimental results demonstrate that the proposed pricing strategy performs better than the commonly adopted uniform-pricing strategy. We conclude the paper by describing strategies that build on our quality control and pricing framework, to build crowdsourced tasks of increasingly higher complexity, while still maintaining a tight quality control of the process, even if we allow participants of unknown quality to join the process.', 'year': 2013, 'in_acl': False, 'citationCount': 47, 'section': 'Incentives', 'subsection': None}]
|
2022.aacl-tutorials.1
|
Efficient and Robust Knowledge Graph Construction
|
Knowledge graph construction which aims to extract knowledge from the text corpus, has appealed to the NLP community researchers. Previous decades have witnessed the remarkable progress of knowledge graph construction on the basis of neural models; however, those models often cost massive computation or labeled data resources and suffer from unstable inference accounting for biased or adversarial samples. Recently, numerous approaches have been explored to mitigate the efficiency and robustness issues for knowledge graph construction, such as prompt learning and adversarial training. In this tutorial, we aim to bring interested NLP researchers up to speed on the recent and ongoing techniques for efficient and robust knowledge graph construction. Additionally, our goal is to provide a systematic and up-to-date overview of these methods and reveal new research opportunities to the audience.
| 2,022
|
https://aclanthology.org/2022.aacl-tutorials.1
|
AACL, IJCNLP
|
[{'id': 4557963, 'paperId': 'cf5ea582bccc7cb21a2ebeb7a0987f79652bde8d', 'title': 'Knowledge vault: a web-scale approach to probabilistic knowledge fusion', 'authors': [{'authorId': '145867172', 'name': 'X. Dong'}, {'authorId': '1718798', 'name': 'E. Gabrilovich'}, {'authorId': '1728179', 'name': 'Geremy Heitz'}, {'authorId': '40428294', 'name': 'Wilko Horn'}, {'authorId': '1914797', 'name': 'N. Lao'}, {'authorId': '1702318', 'name': 'K. Murphy'}, {'authorId': '2931575', 'name': 'Thomas Strohmann'}, {'authorId': '2109375570', 'name': 'Shaohua Sun'}, {'authorId': None, 'name': 'Wei Zhang'}], 'venue': 'Knowledge Discovery and Data Mining', 'abstract': "Recent years have witnessed a proliferation of large-scale knowledge bases, including Wikipedia, Freebase, YAGO, Microsoft's Satori, and Google's Knowledge Graph. To increase the scale even further, we need to explore automatic methods for constructing knowledge bases. Previous approaches have primarily focused on text-based extraction, which can be very noisy. Here we introduce Knowledge Vault, a Web-scale probabilistic knowledge base that combines extractions from Web content (obtained via analysis of text, tabular data, page structure, and human annotations) with prior knowledge derived from existing knowledge repositories. We employ supervised machine learning methods for fusing these distinct information sources. The Knowledge Vault is substantially bigger than any previously published structured knowledge repository, and features a probabilistic inference system that computes calibrated probabilities of fact correctness. We report the results of multiple studies that explore the relative utility of the different information sources and extraction methods.", 'year': 2014, 'in_acl': False, 'citationCount': 1700, 'section': None, 'subsection': None}, {'id': 3627801, 'paperId': '0e46803ac8fc715b72d7f935a3f383ade945487f', 'title': 'Fonduer: Knowledge Base Construction from Richly Formatted Data', 'authors': [{'authorId': '144766615', 'name': 'Sen Wu'}, {'authorId': '2065637845', 'name': 'Luke Hsiao'}, {'authorId': '2149478197', 'name': 'Xiaoxia Cheng'}, {'authorId': '34302368', 'name': 'Braden Hancock'}, {'authorId': '145071799', 'name': 'Theodoros Rekatsinas'}, {'authorId': '1721681', 'name': 'P. Levis'}, {'authorId': '2114485554', 'name': 'C. Ré'}], 'venue': 'SIGMOD Conference', 'abstract': "We focus on knowledge base construction (KBC) from richly formatted data. In contrast to KBC from text or tabular data, KBC from richly formatted data aims to extract relations conveyed jointly via textual, structural, tabular, and visual expressions. We introduce Fonduer, a machine-learning-based KBC system for richly formatted data. Fonduer presents a new data model that accounts for three challenging characteristics of richly formatted data: (1) prevalent document-level relations, (2) multimodality, and (3) data variety. Fonduer uses a new deep-learning model to automatically capture the representation (i.e., features) needed to learn how to extract relations from richly formatted data. Finally, Fonduer provides a new programming model that enables users to convert domain expertise, based on multiple modalities of information, to meaningful signals of supervision for training a KBC system. Fonduer-based KBC systems are in production for a range of use cases, including at a major online retailer. We compare Fonduer against state-of-the-art KBC approaches in four different domains. We show that Fonduer achieves an average improvement of 41 F1 points on the quality of the output knowledge base---and in some cases produces up to 1.87x the number of correct entries---compared to expert-curated public knowledge bases. We also conduct a user study to assess the usability of Fonduer's new programming model. We show that after using Fonduer for only 30 minutes, non-domain experts are able to design KBC systems that achieve on average 23 F1 points higher quality than traditional machine-learning-based KBC approaches.", 'year': 2017, 'in_acl': False, 'citationCount': 97, 'section': None, 'subsection': None}, {'id': 225062337, 'paperId': '455cdafd55a5b5ddefa029bf97801327e142646d', 'title': 'A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios', 'authors': [{'authorId': '51133383', 'name': 'Michael A. Hedderich'}, {'authorId': '47665464', 'name': 'Lukas Lange'}, {'authorId': '145793834', 'name': 'Heike Adel'}, {'authorId': '2013656', 'name': 'Jannik Strotgen'}, {'authorId': '2561225', 'name': 'D. Klakow'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Deep neural networks and huge language models are becoming omnipresent in natural language applications. As they are known for requiring large amounts of training data, there is a growing body of work to improve the performance in low-resource settings. Motivated by the recent fundamental changes towards neural models and the popular pre-train and fine-tune paradigm, we survey promising approaches for low-resource natural language processing. After a discussion about the different dimensions of data availability, we give a structured overview of methods that enable learning when training data is sparse. This includes mechanisms to create additional labeled data like data augmentation and distant supervision as well as transfer learning settings that reduce the need for target supervision. A goal of our survey is to explain how these methods differ in their requirements as understanding them is essential for choosing a technique suited for a specific low-resource setting. Further key aspects of this work are to highlight open issues and to outline promising directions for future research.', 'year': 2020, 'in_acl': True, 'citationCount': 247, 'section': None, 'subsection': None}, {'id': 243865588, 'paperId': '84aec29de31b56b3324c00667dfac62850f8dadf', 'title': 'Few-Shot Named Entity Recognition: An Empirical Baseline Study', 'authors': [{'authorId': '3488341', 'name': 'Jiaxin Huang'}, {'authorId': '2109737569', 'name': 'Chunyuan Li'}, {'authorId': '2043231778', 'name': 'K. Subudhi'}, {'authorId': '144430856', 'name': 'Damien Jose'}, {'authorId': '2071648958', 'name': 'S. Balakrishnan'}, {'authorId': '2109136147', 'name': 'Weizhu Chen'}, {'authorId': '1780690', 'name': 'Baolin Peng'}, {'authorId': '48441311', 'name': 'Jianfeng Gao'}, {'authorId': '153034701', 'name': 'Jiawei Han'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'This paper presents an empirical study to efficiently build named entity recognition (NER) systems when a small amount of in-domain labeled data is available. Based upon recent Transformer-based self-supervised pre-trained language models (PLMs), we investigate three orthogonal schemes to improve model generalization ability in few-shot settings: (1) meta-learning to construct prototypes for different entity types, (2) task-specific supervised pre-training on noisy web data to extract entity-related representations and (3) self-training to leverage unlabeled in-domain data. On 10 public NER datasets, we perform extensive empirical comparisons over the proposed schemes and their combinations with various proportions of labeled data, our experiments show that (i)in the few-shot learning setting, the proposed NER schemes significantly improve or outperform the commonly used baseline, a PLM-based linear classifier fine-tuned using domain labels. (ii) We create new state-of-the-art results on both few-shot and training-free settings compared with existing methods.', 'year': 2021, 'in_acl': True, 'citationCount': 85, 'section': None, 'subsection': None}, {'id': 246867025, 'paperId': 'aeb8344608e4f89bca8f508831ef24b64ec01e9a', 'title': 'Information Extraction in Low-Resource Scenarios: Survey and Perspective', 'authors': [{'authorId': '152931849', 'name': 'Shumin Deng'}, {'authorId': '2153010067', 'name': 'Ningyu Zhang'}, {'authorId': '2155551120', 'name': 'Hui Chen'}, {'authorId': '2068169902', 'name': 'Feiyu Xiong'}, {'authorId': '9416872', 'name': 'Jeff Z. Pan'}, {'authorId': '49178307', 'name': 'Huajun Chen'}], 'venue': '', 'abstract': 'Information Extraction (IE) seeks to derive structured information from unstructured texts, often facing challenges in low-resource scenarios due to data scarcity and unseen classes. This paper presents a review of neural approaches to low-resource IE from \\emph{traditional} and \\emph{LLM-based} perspectives, systematically categorizing them into a fine-grained taxonomy. Then we conduct empirical study on LLM-based methods compared with previous state-of-the-art models, and discover that (1) well-tuned LMs are still predominant; (2) tuning open-resource LLMs and ICL with GPT family is promising in general; (3) the optimal LLM-based technical solution for low-resource IE can be task-dependent. In addition, we discuss low-resource IE with LLMs, highlight promising applications, and outline potential research directions. This survey aims to foster understanding of this field, inspire new ideas, and encourage widespread applications in both academia and industry.', 'year': 2022, 'in_acl': False, 'citationCount': 4, 'section': None, 'subsection': None}, {'id': 226262304, 'paperId': '7db87539bcaed817c820c4e0c0855ec5fb24344c', 'title': 'Uncertainty-Aware Label Refinement for Sequence Labeling', 'authors': [{'authorId': '2067331064', 'name': 'Tao Gui'}, {'authorId': '65846898', 'name': 'Jiacheng Ye'}, {'authorId': '1409702669', 'name': 'Qi Zhang'}, {'authorId': '2145371018', 'name': 'Zhengyan Li'}, {'authorId': '1411255713', 'name': 'Zichu Fei'}, {'authorId': '2171182', 'name': 'Yeyun Gong'}, {'authorId': '1790227', 'name': 'Xuanjing Huang'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Conditional random fields (CRF) for label decoding has become ubiquitous in sequence labeling tasks. However, the local label dependencies and inefficient Viterbi decoding have always been a problem to be solved. In this work, we introduce a novel two-stage label decoding framework to model long-term label dependencies, while being much more computationally efficient. A base model first predicts draft labels, and then a novel two-stream self-attention model makes refinements on these draft predictions based on long-range label dependencies, which can achieve parallel decoding for a faster prediction. In addition, in order to mitigate the side effects of incorrect draft labels, Bayesian neural networks are used to indicate the labels with a high probability of being wrong, which can greatly assist in preventing error propagation. The experimental results on three sequence labeling benchmarks demonstrated that the proposed method not only outperformed the CRF-based methods but also greatly accelerated the inference process.', 'year': 2020, 'in_acl': True, 'citationCount': 25, 'section': None, 'subsection': None}, {'id': 218613850, 'paperId': '013faec0400d315935e71a2bdfeb22cc83752b3e', 'title': 'Reasoning with Latent Structure Refinement for Document-Level Relation Extraction', 'authors': [{'authorId': '2056582888', 'name': 'Guoshun Nan'}, {'authorId': '2681038', 'name': 'Zhijiang Guo'}, {'authorId': '3305422', 'name': 'Ivan Sekulic'}, {'authorId': '2153424287', 'name': 'Wei Lu'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Document-level relation extraction requires integrating information within and across multiple sentences of a document and capturing complex interactions between inter-sentence entities. However, effective aggregation of relevant information in the document remains a challenging research question. Existing approaches construct static document-level graphs based on syntactic trees, co-references or heuristics from the unstructured text to model the dependencies. Unlike previous methods that may not be able to capture rich non-local interactions for inference, we propose a novel model that empowers the relational reasoning across sentences by automatically inducing the latent document-level graph. We further develop a refinement strategy, which enables the model to incrementally aggregate relevant information for multi-hop reasoning. Specifically, our model achieves an F1 score of 59.05 on a large-scale document-level dataset (DocRED), significantly improving over the previous results, and also yields new state-of-the-art results on the CDR and GDA dataset. Furthermore, extensive analyses show that the model is able to discover more accurate inter-sentence relations.', 'year': 2020, 'in_acl': True, 'citationCount': 252, 'section': None, 'subsection': None}, {'id': 237295186, 'paperId': '1a2e90dff605dad7dbefeed121e6d295c7a77d62', 'title': 'KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction', 'authors': [{'authorId': '2143735911', 'name': 'Xiang Chen'}, {'authorId': '2608639', 'name': 'Ningyu Zhang'}, {'authorId': '2153010067', 'name': 'Ningyu Zhang'}, {'authorId': '2110972563', 'name': 'Xin Xie'}, {'authorId': '152931849', 'name': 'Shumin Deng'}, {'authorId': '4841460', 'name': 'Yunzhi Yao'}, {'authorId': '2111727840', 'name': 'Chuanqi Tan'}, {'authorId': '2087380523', 'name': 'Fei Huang'}, {'authorId': '2059080424', 'name': 'Luo Si'}, {'authorId': '49178307', 'name': 'Huajun Chen'}], 'venue': 'The Web Conference', 'abstract': 'Recently, prompt-tuning has achieved promising results for specific few-shot classification tasks. The core idea of prompt-tuning is to insert text pieces (i.e., templates) into the input and transform a classification task into a masked language modeling problem. However, for relation extraction, determining an appropriate prompt template requires domain expertise, and it is cumbersome and time-consuming to obtain a suitable label word. Furthermore, there exists abundant semantic and prior knowledge among the relation labels that cannot be ignored. To this end, we focus on incorporating knowledge among relation labels into prompt-tuning for relation extraction and propose a Knowledge-aware Prompt-tuning approach with synergistic optimization (KnowPrompt). Specifically, we inject latent knowledge contained in relation labels into prompt construction with learnable virtual type words and answer words. Then, we synergistically optimize their representation with structured constraints. Extensive experimental results on five datasets with standard and low-resource settings demonstrate the effectiveness of our approach. Our code and datasets are available in GitHub1 for reproducibility.', 'year': 2021, 'in_acl': False, 'citationCount': 351, 'section': None, 'subsection': None}]
|
2022.aacl-tutorials.2
|
Recent Advances in Pre-trained Language Models: Why Do They Work and How Do They Work
|
Pre-trained language models (PLMs) are language models that are pre-trained on large-scaled corpora in a self-supervised fashion. These PLMs have fundamentally changed the natural language processing community in the past few years. In this tutorial, we aim to provide a broad and comprehensive introduction from two perspectives: why those PLMs work, and how to use them in NLP tasks. The first part of the tutorial shows some insightful analysis on PLMs that partially explain their exceptional downstream performance. The second part first focuses on emerging pre-training methods that enable PLMs to perform diverse downstream tasks and then illustrates how one can apply those PLMs to downstream tasks under different circumstances. These circumstances include fine-tuning PLMs when under data scarcity, and using PLMs with parameter efficiency. We believe that attendees of different backgrounds would find this tutorial informative and useful.
| 2,022
|
https://aclanthology.org/2022.aacl-tutorials.2
|
AACL, IJCNLP
|
[{'id': 13756489, 'paperId': '204e3073870fae3d05bcbc2f6a8e263d9b72e776', 'title': 'Attention is All you Need', 'authors': [{'authorId': '40348417', 'name': 'Ashish Vaswani'}, {'authorId': '1846258', 'name': 'Noam M. Shazeer'}, {'authorId': '3877127', 'name': 'Niki Parmar'}, {'authorId': '39328010', 'name': 'Jakob Uszkoreit'}, {'authorId': '145024664', 'name': 'Llion Jones'}, {'authorId': '19177000', 'name': 'Aidan N. Gomez'}, {'authorId': '40527594', 'name': 'Lukasz Kaiser'}, {'authorId': '3443442', 'name': 'Illia Polosukhin'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.', 'year': 2017, 'in_acl': False, 'citationCount': 109681, 'section': 'Transformer model', 'subsection': None}, {'id': 49313245, 'paperId': 'cd18800a0fe0b668a1cc19f2ec95b5003d0a5035', 'title': 'Improving Language Understanding by Generative Pre-Training', 'authors': [{'authorId': '38909097', 'name': 'Alec Radford'}, {'authorId': '144958935', 'name': 'Karthik Narasimhan'}], 'venue': '', 'abstract': 'Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).', 'year': 2018, 'in_acl': False, 'citationCount': 10182, 'section': 'PLMs', 'subsection': None}, {'id': 52967399, 'paperId': 'df2b0e26d0599ce3e70df8a9da02e51594e0e992', 'title': 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding', 'authors': [{'authorId': '39172707', 'name': 'Jacob Devlin'}, {'authorId': '1744179', 'name': 'Ming-Wei Chang'}, {'authorId': '2544107', 'name': 'Kenton Lee'}, {'authorId': '3259253', 'name': 'Kristina Toutanova'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).', 'year': 2019, 'in_acl': True, 'citationCount': 84138, 'section': 'PLMs', 'subsection': None}, {'id': 204838007, 'paperId': '6c4b76232bb72897685d19b3d264c6ee3005bc2b', 'title': 'Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer', 'authors': [{'authorId': '2402716', 'name': 'Colin Raffel'}, {'authorId': '1846258', 'name': 'Noam M. Shazeer'}, {'authorId': '145625142', 'name': 'Adam Roberts'}, {'authorId': '3844009', 'name': 'Katherine Lee'}, {'authorId': '46617804', 'name': 'Sharan Narang'}, {'authorId': '1380243217', 'name': 'Michael Matena'}, {'authorId': '2389316', 'name': 'Yanqi Zhou'}, {'authorId': '2157338362', 'name': 'Wei Li'}, {'authorId': '35025299', 'name': 'Peter J. Liu'}], 'venue': 'Journal of machine learning research', 'abstract': 'Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.', 'year': 2019, 'in_acl': False, 'citationCount': 16889, 'section': 'PLMs', 'subsection': None}]
|
2022.aacl-tutorials.3
|
When Cantonese NLP Meets Pre-training: Progress and Challenges
|
Cantonese is an influential Chinese variant with a large population of speakers worldwide. However, it is under-resourced in terms of the data scale and diversity, excluding Cantonese Natural Language Processing (NLP) from the stateof-the-art (SOTA) “pre-training and fine-tuning” paradigm. This tutorial will start with a substantially review of the linguistics and NLP progress for shaping language specificity, resources, and methodologies. It will be followed by an introduction to the trendy transformerbased pre-training methods, which have been largely advancing the SOTA performance of a wide range of downstream NLP tasks in numerous majority languages (e.g., English and Chinese). Based on the above, we will present the main challenges for Cantonese NLP in relation to Cantonese language idiosyncrasies of colloquialism and multilingualism, followed by the future directions to line NLP for Cantonese and other low-resource languages up to the cutting-edge pre-training practice.
| 2,022
|
https://aclanthology.org/2022.aacl-tutorials.3
|
AACL, IJCNLP
|
[{'id': 160543569, 'paperId': 'ef8d3369f0d5d78f1c0989e2dd59d5ca8f045441', 'title': 'Cantonese as Written Language: The Growth of a Written Chinese Vernacular', 'authors': [{'authorId': '66196923', 'name': 'Don Snow'}], 'venue': '', 'abstract': 'List of Illustrations and TablesPreface 1. Why Study the Development of Written Cantonese? 2. From Spoken Vernacular to Written Language 3. Spoken and Written Cantonese 4. Written Cantonese in Pre-modern Guangdong 5. The Hong Kong Dialect Literature Movement 6. Written Cantonese in Modern Hong Kong 7. Why Has Use of Written Cantonese Increased? 8. Epilogue: The Future of Written Cantonese Appendix 1: Cantonese Texts Appendix 2: Interviews and Public Lectures Appendix 3: Titles of Publications and Published Works Appendix 4: Characters for Chinese Terms Notes References Index', 'year': 2004, 'in_acl': False, 'citationCount': 118, 'section': None, 'subsection': None}, {'id': 144678737, 'paperId': '484cdbf507139714232d40a6139b17dc72c7919f', 'title': 'Cantonese as an additional language in Hong Kong: Problems and prospects', 'authors': [{'authorId': '71255084', 'name': 'G. Sachs'}, {'authorId': '1575956608', 'name': 'David C. S. Li'}], 'venue': '', 'abstract': 'Abstract Based on data obtained from a questionnaire survey and in-depth interviews with four Caucasians and four dark-skinned Asians, this study shows that while some ‘foreigners’ do make an effort to learn Cantonese, many find the teaching methods not so useful and the language difficult to master, especially its tone system. The data are analyzed following the interactive multicultural model of acculturation. The findings point toward a huge chasm between non-local groups and the Cantonese-speaking community. The receptivity of Hong Kong Chinese towards attempts by members of non-local groups to speak Cantonese varies, depending on their racial identity and socioeconomic status.', 'year': 2007, 'in_acl': False, 'citationCount': 12, 'section': None, 'subsection': None}, {'id': 13756489, 'paperId': '204e3073870fae3d05bcbc2f6a8e263d9b72e776', 'title': 'Attention is All you Need', 'authors': [{'authorId': '40348417', 'name': 'Ashish Vaswani'}, {'authorId': '1846258', 'name': 'Noam M. Shazeer'}, {'authorId': '3877127', 'name': 'Niki Parmar'}, {'authorId': '39328010', 'name': 'Jakob Uszkoreit'}, {'authorId': '145024664', 'name': 'Llion Jones'}, {'authorId': '19177000', 'name': 'Aidan N. Gomez'}, {'authorId': '40527594', 'name': 'Lukasz Kaiser'}, {'authorId': '3443442', 'name': 'Illia Polosukhin'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.', 'year': 2017, 'in_acl': False, 'citationCount': 109681, 'section': None, 'subsection': None}, {'id': 52967399, 'paperId': 'df2b0e26d0599ce3e70df8a9da02e51594e0e992', 'title': 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding', 'authors': [{'authorId': '39172707', 'name': 'Jacob Devlin'}, {'authorId': '1744179', 'name': 'Ming-Wei Chang'}, {'authorId': '2544107', 'name': 'Kenton Lee'}, {'authorId': '3259253', 'name': 'Kristina Toutanova'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).', 'year': 2019, 'in_acl': True, 'citationCount': 84138, 'section': None, 'subsection': None}, {'id': 198953378, 'paperId': '077f8329a7b6fa3b7c877a57b81eb6c18b5f87de', 'title': 'RoBERTa: A Robustly Optimized BERT Pretraining Approach', 'authors': [{'authorId': '11323179', 'name': 'Yinhan Liu'}, {'authorId': '40511414', 'name': 'Myle Ott'}, {'authorId': '39589154', 'name': 'Naman Goyal'}, {'authorId': '3048577', 'name': 'Jingfei Du'}, {'authorId': '144863691', 'name': 'Mandar Joshi'}, {'authorId': '50536468', 'name': 'Danqi Chen'}, {'authorId': '39455775', 'name': 'Omer Levy'}, {'authorId': '35084211', 'name': 'M. Lewis'}, {'authorId': '1982950', 'name': 'Luke Zettlemoyer'}, {'authorId': '1759422', 'name': 'Veselin Stoyanov'}], 'venue': 'arXiv.org', 'abstract': 'Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.', 'year': 2019, 'in_acl': False, 'citationCount': 21699, 'section': None, 'subsection': None}, {'id': 218971783, 'paperId': '90abbc2cf38462b954ae1b772fac9532e2ccd8b0', 'title': 'Language Models are Few-Shot Learners', 'authors': [{'authorId': '31035595', 'name': 'Tom B. Brown'}, {'authorId': '2056658938', 'name': 'Benjamin Mann'}, {'authorId': '39849748', 'name': 'Nick Ryder'}, {'authorId': '2065894334', 'name': 'Melanie Subbiah'}, {'authorId': '152724169', 'name': 'J. Kaplan'}, {'authorId': '6515819', 'name': 'Prafulla Dhariwal'}, {'authorId': '2072676', 'name': 'Arvind Neelakantan'}, {'authorId': '67311962', 'name': 'Pranav Shyam'}, {'authorId': '144864359', 'name': 'Girish Sastry'}, {'authorId': '119609682', 'name': 'Amanda Askell'}, {'authorId': '144517868', 'name': 'Sandhini Agarwal'}, {'authorId': '1404060687', 'name': 'Ariel Herbert-Voss'}, {'authorId': '2064404342', 'name': 'Gretchen Krueger'}, {'authorId': '103143311', 'name': 'T. Henighan'}, {'authorId': '48422824', 'name': 'R. Child'}, {'authorId': '1992922591', 'name': 'A. Ramesh'}, {'authorId': '2052152920', 'name': 'Daniel M. Ziegler'}, {'authorId': '49387725', 'name': 'Jeff Wu'}, {'authorId': '2059411355', 'name': 'Clemens Winter'}, {'authorId': '144239765', 'name': 'Christopher Hesse'}, {'authorId': '2108828435', 'name': 'Mark Chen'}, {'authorId': '2064673055', 'name': 'Eric Sigler'}, {'authorId': '1380985420', 'name': 'Ma-teusz Litwin'}, {'authorId': '145565184', 'name': 'Scott Gray'}, {'authorId': '1490681878', 'name': 'B. Chess'}, {'authorId': '2115193883', 'name': 'Jack Clark'}, {'authorId': '133740015', 'name': 'Christopher Berner'}, {'authorId': '52238703', 'name': 'Sam McCandlish'}, {'authorId': '38909097', 'name': 'Alec Radford'}, {'authorId': '1701686', 'name': 'I. Sutskever'}, {'authorId': '2698777', 'name': 'Dario Amodei'}], 'venue': 'Neural Information Processing Systems', 'abstract': "Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.", 'year': 2020, 'in_acl': False, 'citationCount': 33118, 'section': None, 'subsection': None}, {'id': 125977708, 'paperId': '031e4e43aaffd7a479738dcea69a2d5be7957aa3', 'title': 'ERNIE: Enhanced Representation through Knowledge Integration', 'authors': [{'authorId': '2117103617', 'name': 'Yu Sun'}, {'authorId': '104463827', 'name': 'Shuohuan Wang'}, {'authorId': '1710861', 'name': 'Yukun Li'}, {'authorId': '1718657', 'name': 'Shikun Feng'}, {'authorId': '2109214103', 'name': 'Xuyi Chen'}, {'authorId': '48213346', 'name': 'Han Zhang'}, {'authorId': '2117127187', 'name': 'Xin Tian'}, {'authorId': '152867082', 'name': 'Danxiang Zhu'}, {'authorId': '50007795', 'name': 'Hao Tian'}, {'authorId': '40354707', 'name': 'Hua Wu'}], 'venue': 'arXiv.org', 'abstract': 'We present a novel language representation model enhanced by knowledge called ERNIE (Enhanced Representation through kNowledge IntEgration). Inspired by the masking strategy of BERT, ERNIE is designed to learn language representation enhanced by knowledge masking strategies, which includes entity-level masking and phrase-level masking. Entity-level strategy masks entities which are usually composed of multiple words.Phrase-level strategy masks the whole phrase which is composed of several words standing together as a conceptual unit.Experimental results show that ERNIE outperforms other baseline methods, achieving new state-of-the-art results on five Chinese natural language processing tasks including natural language inference, semantic similarity, named entity recognition, sentiment analysis and question answering. We also demonstrate that ERNIE has more powerful knowledge inference capacity on a cloze test.', 'year': 2019, 'in_acl': False, 'citationCount': 829, 'section': None, 'subsection': None}, {'id': 218719869, 'paperId': '72cdd6ebe0221fb568ef20534f44ba5b35190a56', 'title': 'BERTweet: A pre-trained language model for English Tweets', 'authors': [{'authorId': '34691913', 'name': 'Dat Quoc Nguyen'}, {'authorId': '143768607', 'name': 'Thanh Vu'}, {'authorId': '1398541475', 'name': 'A. Nguyen'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'We present BERTweet, the first public large-scale pre-trained language model for English Tweets. Our BERTweet, having the same architecture as BERT-base (Devlin et al., 2019), is trained using the RoBERTa pre-training procedure (Liu et al., 2019). Experiments show that BERTweet outperforms strong baselines RoBERTa-base and XLM-R-base (Conneau et al., 2020), producing better performance results than the previous state-of-the-art models on three Tweet NLP tasks: Part-of-speech tagging, Named-entity recognition and text classification. We release BERTweet under the MIT License to facilitate future research and applications on Tweet data. Our BERTweet is available at https://github.com/VinAIResearch/BERTweet', 'year': 2020, 'in_acl': True, 'citationCount': 815, 'section': None, 'subsection': None}]
|
2022.aacl-tutorials.4
|
Grounding Meaning Representation for Situated Reasoning
|
As natural language technology becomes ever-present in everyday life, people will expect artificial agents to understand language use as humans do. Nevertheless, most advanced neural AI systems fail at some types of interactions that are trivial for humans (e.g., ask a smart system “What am I pointing at?”). One critical aspect of human language understanding is situated reasoning, where inferences make reference to the local context, perceptual surroundings, and contextual groundings from the interaction. In this cutting-edge tutorial, we bring to the NLP/CL community a synthesis of multimodal grounding and meaning representation techniques with formal and computational models of embodied reasoning. We will discuss existing approaches to multimodal language grounding and meaning representations, discuss the kind of information each method captures and their relative suitability to situated reasoning tasks, and demon- strate how to construct agents that conduct situated reasoning by embodying a simulated environment. In doing so, these agents also represent their human interlocutor(s) within the simulation, and are represented through their virtual embodiment in the real world, enabling true bidirectional communication with a computer using multiple modalities.
| 2,022
|
https://aclanthology.org/2022.aacl-tutorials.4
|
AACL, IJCNLP
|
[{'id': 7771402, 'paperId': 'e72e5ee5de14fd463ab58ce830474157258e3578', 'title': 'Abstract Meaning Representation for Sembanking', 'authors': [{'authorId': '3460261', 'name': 'L. Banarescu'}, {'authorId': '3202888', 'name': 'C. Bonial'}, {'authorId': '2112618394', 'name': 'Shu Cai'}, {'authorId': '2065872210', 'name': 'Madalina Georgescu'}, {'authorId': '3168985', 'name': 'Kira Griffitt'}, {'authorId': '1791311', 'name': 'U. Hermjakob'}, {'authorId': '152971314', 'name': 'Kevin Knight'}, {'authorId': '1755162', 'name': 'Philipp Koehn'}, {'authorId': '145755155', 'name': 'Martha Palmer'}, {'authorId': '145254207', 'name': 'Nathan Schneider'}], 'venue': 'LAW@ACL', 'abstract': 'We describe Abstract Meaning Representation (AMR), a semantic representation language in which we are writing down the meanings of thousands of English sentences. We hope that a sembank of simple, whole-sentence semantic structures will spur new work in statistical natural language understanding and generation, like the Penn Treebank encouraged work on statistical parsing. This paper gives an overview of AMR and tools associated with it.', 'year': 2013, 'in_acl': True, 'citationCount': 1379, 'section': None, 'subsection': None}, {'id': 5271395, 'paperId': 'e867a965033a074e4074875e0916ce1ca42f3bf6', 'title': 'Minimal Recursion Semantics: An Introduction', 'authors': [{'authorId': '15379653', 'name': 'Ann A. Copestake'}, {'authorId': '3209288', 'name': 'D. Flickinger'}, {'authorId': '144741427', 'name': 'C. Pollard'}, {'authorId': '2393013', 'name': 'I. Sag'}], 'venue': '', 'abstract': 'Minimal recursion semantics (MRS) is a framework for computational semantics that is suitable for parsing and generation and that can be implemented in typed feature structure formalisms. We discuss why, in general, a semantic representation with minimal structure is desirable and illustrate how a descriptively adequate representation with a nonrecursive structure may be achieved. MRS enables a simple formulation of the grammatical constraints on lexical and phrasal semantics, including the principles of semantic composition. We have integrated MRS with a broad-coverage HPSG grammar.', 'year': 2005, 'in_acl': False, 'citationCount': 1231, 'section': None, 'subsection': None}, {'id': 61979056, 'paperId': '3e8fb256977dca342ef4e86ac4218abee8aa0d03', 'title': 'Type Theory with Records for Natural Language Semantics', 'authors': [{'authorId': '46887822', 'name': 'R. Cooper'}, {'authorId': '2055207', 'name': 'J. Ginzburg'}], 'venue': '', 'abstract': 'Semantic analysis of interaction and coordination in dialogue (SAICD).', 'year': 2015, 'in_acl': False, 'citationCount': 68, 'section': None, 'subsection': None}, {'id': 263897619, 'paperId': '0e6967076a5bf740ba168c82886fc09c9c113788', 'title': 'A Formal Semantic Analysis of Gesture', 'authors': [{'authorId': '1876168', 'name': 'A. Lascarides'}, {'authorId': '2257292206', 'name': 'Matthew Stone'}], 'venue': 'Journal of Semantics', 'abstract': "The gestures that speakers use in tandem with speech include not only conventionalized actions with identifiable meanings (so-called narrow gloss gestures or emblems) but also productive iconic and deictic gestures whose form and meanings seem largely improvised in context. In this paper, we bridge the descriptive tradition with formal models of reference and discourse structure so as to articulate an approach to the interpretation of these productive gestures. Our model captures gestures' partial and incomplete meanings as derived from form and accounts for the more specific interpretations they derive in context. Our work emphasizes the commonality of the pragmatic mechanisms for interpreting both language and gesture, and the place of formal methods in discovering the principles and knowledge that those mechanisms rely on.", 'year': 2009, 'in_acl': False, 'citationCount': 161, 'section': None, 'subsection': None}, {'id': 216951354, 'paperId': '78ce048d39a088554a655e45670cb47066b4551c', 'title': 'Aligning speech and co-speech gesture in a constraint-based grammar', 'authors': [{'authorId': '2743841', 'name': 'K. Alahverdzhieva'}, {'authorId': '1876168', 'name': 'A. Lascarides'}, {'authorId': '3209288', 'name': 'D. Flickinger'}], 'venue': 'Journal of Language Modelling', 'abstract': 'This paper concerns the form-meaning mapping of communicative actions consisting of speech and improvised co-speech gestures. Based on the findings of previous cognitive and computational approaches, we advance a new theory in which this form-meaning mapping is analysed in a constraint-based grammar. Motivated by observations in naturally occurring examples, we propose several construction rules, which use linguistic form, gesture form and their relative timing to constrain the derivation of a single speech-gesture syntax tree, from which a meaning representation can be composed via standard methods for semantic composition. The paper further reports on implementing these speech-gesture construction rules within the English Resource Grammar (Copestake\xa0and Flickinger 2000). Since gestural form often underspecifies its meaning, the logical formulae that are composed via syntax are underspecified so that current models of the semantics/pragmatics interface support the range of possible interpretations of the speech-gesture act in its context of use.', 'year': 2018, 'in_acl': False, 'citationCount': 18, 'section': None, 'subsection': None}, {'id': 205811696, 'paperId': '63471e3ed74385b14cd74b7abcfb52f61b00086f', 'title': 'The Qualitative Spatial Dynamics of Motion in Language', 'authors': [{'authorId': '1707726', 'name': 'J. Pustejovsky'}, {'authorId': '1723806', 'name': 'Jessica L. Moszkowicz'}], 'venue': 'Spatial Cogn. Comput.', 'abstract': 'Abstract In this paper, we discuss the strategies that languages employ to express motion, focusing on the distinction between path predicates, such as enter, arrive, and leave and manner-of-motion predicates, such as walk, bike, and roll. We present an overview of some qualitative spatiotemporal models of movement, and discuss their adequacy for capturing motion constructions in natural languages. Building on many aspects of these qualitative models, we introduce a framework within dynamic logic for the characterization of spatial change. This model, called Dynamic Interval Temporal Logic (DITL), is developed to analyze both classes of motion predicates, as well as complex compositional constructions involving spatial and manner Prepositional Phrases. Further, DITL serves as a semantics for a linguistically expressive markup language for annotating spatiotemporal information in text, called Spatiotemporal Markup Language (STML). We outline the syntax of this language, and discuss how DITL provides for a natural interpretation of the annotation specification for use in a variety of applications.', 'year': 2011, 'in_acl': False, 'citationCount': 68, 'section': None, 'subsection': None}, {'id': 257985710, 'paperId': '295b18b1794b837296959ea3d58e93352c42373c', 'title': 'Situated Meaning in Multimodal Dialogue: Human-Robot and Human-Computer Interactions', 'authors': [{'authorId': '1707726', 'name': 'J. Pustejovsky'}, {'authorId': '34079649', 'name': 'Nikhil Krishnaswamy'}], 'venue': 'ICON', 'abstract': '. The demand for more sophisticated natural human-computer and human-robot interactions is rapidly increasing, as users become more accustomed to conversation-like interactions with their devices. This requires not only the robust recognition and generation of expressions through multiple modalities (language, gesture, vision, action), but also the encoding of situated meaning: (a) the situated grounding of expressions in context; (b) an interpretation of the expression contextualized to the dynamics of the discourse; and (c) an appreciation of the actions and consequences associated with objects in the environment. In this paper, we introduce VoxWorld, a multimodal simulation platform for modeling human-computer interactions. It is built on the language VoxML', 'year': 2020, 'in_acl': True, 'citationCount': 11, 'section': None, 'subsection': None}, {'id': 240531414, 'paperId': 'f8f021f47f185aec7a0200e22eeb5c7570f44b64', 'title': 'Embodied Human Computer Interaction', 'authors': [{'authorId': '1707726', 'name': 'J. Pustejovsky'}, {'authorId': '34079649', 'name': 'Nikhil Krishnaswamy'}], 'venue': 'KI - Künstliche Intelligenz', 'abstract': 'In this paper, we argue that embodiment can play an important role in the design and modeling of systems developed for Human Computer Interaction. To this end, we describe a simulation platform for building Embodied Human Computer Interactions (EHCI). This system, VoxWorld, enables multimodal dialogue systems that communicate through language, gesture, action, facial expressions, and gaze tracking, in the context of task-oriented interactions. A multimodal simulation is an embodied 3D virtual realization of both the situational environment and the co-situated agents, as well as the most salient content denoted by communicative acts in a discourse. It is built on the modeling language VoxML (Pustejovsky and Krishnaswamy in VoxML: a visualization modeling language, proceedings of LREC, 2016), which encodes objects with rich semantic typing and action affordances, and actions themselves as multimodal programs, enabling contextually salient inferences and decisions in the environment. VoxWorld enables an embodied HCI by situating both human and artificial agents within the same virtual simulation environment, where they share perceptual and epistemic common ground. We discuss the formal and computational underpinnings of embodiment and common ground, how they interact and specify parameters of the interaction between humans and artificial agents, and demonstrate behaviors and types of interactions on different classes of artificial agents.', 'year': 2021, 'in_acl': False, 'citationCount': 43, 'section': None, 'subsection': None}]
|
2022.aacl-tutorials.5
|
The Battlefront of Combating Misinformation and Coping with Media Bias
|
Misinformation is a pressing issue in modern society. It arouses a mixture of anger, distrust, confusion, and anxiety that cause damage on our daily life judgments and public policy decisions. While recent studies have explored various fake news detection and media bias detection techniques in attempts to tackle the problem, there remain many ongoing challenges yet to be addressed, as can be witnessed from the plethora of untrue and harmful content present during the COVID-19 pandemic and the international crises of late. In this tutorial, we provide researchers and practitioners with a systematic overview of the frontier in fighting misinformation. Specifically, we dive into the important research questions of how to (i) develop a robust fake news detection system, which not only fact-check information pieces provable by background knowledge but also reason about the consistency and the reliability of subtle details for emerging events; (ii) uncover the bias and agenda of news sources to better characterize misinformation; as well as (iii) correct false information and mitigate news bias, while allowing diverse opinions to be expressed. Moreover, we discuss the remaining challenges, future research directions, and exciting opportunities to help make this world a better place, with safer and more harmonic information sharing.
| 2,022
|
https://aclanthology.org/2022.aacl-tutorials.5
|
AACL, IJCNLP
|
[{'id': 168169824, 'paperId': 'ad7129af0644dbcafa9aa2f111cb76526ea444a1', 'title': 'Defending Against Neural Fake News', 'authors': [{'authorId': '2545335', 'name': 'Rowan Zellers'}, {'authorId': '14487640', 'name': 'Ari Holtzman'}, {'authorId': '2516777', 'name': 'Hannah Rashkin'}, {'authorId': '3312309', 'name': 'Yonatan Bisk'}, {'authorId': '143787583', 'name': 'Ali Farhadi'}, {'authorId': '3268360', 'name': 'Franziska Roesner'}, {'authorId': '1699545', 'name': 'Yejin Choi'}], 'venue': 'Neural Information Processing Systems', 'abstract': "Recent progress in natural language generation has raised dual-use concerns. While applications like summarization and translation are positive, the underlying technology also might enable adversaries to generate neural fake news: targeted propaganda that closely mimics the style of real news. \nModern computer security relies on careful threat modeling: identifying potential threats and vulnerabilities from an adversary's point of view, and exploring potential mitigations to these threats. Likewise, developing robust defenses against neural fake news requires us first to carefully investigate and characterize the risks of these models. We thus present a model for controllable text generation called Grover. Given a headline like `Link Found Between Vaccines and Autism,' Grover can generate the rest of the article; humans find these generations to be more trustworthy than human-written disinformation. \nDeveloping robust verification techniques against generators like Grover is critical. We find that best current discriminators can classify neural fake news from real, human-written, news with 73% accuracy, assuming access to a moderate level of training data. Counterintuitively, the best defense against Grover turns out to be Grover itself, with 92% accuracy, demonstrating the importance of public release of strong generators. We investigate these results further, showing that exposure bias -- and sampling strategies that alleviate its effects -- both leave artifacts that similar discriminators can pick up on. We conclude by discussing ethical issues regarding the technology, and plan to release Grover publicly, helping pave the way for better detection of neural fake news.", 'year': 2019, 'in_acl': False, 'citationCount': 911, 'section': 'the rising threats of neural fake news', 'subsection': None}, {'id': 197547831, 'paperId': 'c3b3a6d27dbbfed4df630b39fc0a8a6692b1828a', 'title': 'Deepfakes : How a pervert shook the world', 'authors': [{'authorId': '1576730528', 'name': 'R. Chawla'}], 'venue': '', 'abstract': 'Recently a software has made it easy to create hyper-realistic face swaps in videos that leaves little-to-no traces of manipulation, in what is known as “deepfake” videos. Scenarios, where these AI manipulated/generated videos, are used for political distress, blackmail or even terrorism are easily envisioned as a near dystopia. This paper explores the various aspects of deepfake videos including its consequences and newly developed innovations in detecting deepfakes.', 'year': 2019, 'in_acl': False, 'citationCount': 36, 'section': 'the rising threats of neural fake news', 'subsection': None}, {'id': 236460257, 'paperId': 'f95a4568a714c34984aa32327fa66344ebe52861', 'title': 'Compare to The Knowledge: Graph Neural Fake News Detection with External Knowledge', 'authors': [{'authorId': '1771202', 'name': 'Linmei Hu'}, {'authorId': '2800519', 'name': 'Tianchi Yang'}, {'authorId': '2156145866', 'name': 'Luhao Zhang'}, {'authorId': '81970097', 'name': 'Wanjun Zhong'}, {'authorId': '39483833', 'name': 'Duyu Tang'}, {'authorId': '144123161', 'name': 'C. Shi'}, {'authorId': '46429989', 'name': 'Nan Duan'}, {'authorId': '92660691', 'name': 'Ming Zhou'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Nowadays, fake news detection, which aims to verify whether a news document is trusted or fake, has become urgent and important. Most existing methods rely heavily on linguistic and semantic features from the news content, and fail to effectively exploit external knowledge which could help determine whether the news document is trusted. In this paper, we propose a novel end-to-end graph neural model called CompareNet, which compares the news to the knowledge base (KB) through entities for fake news detection. Considering that fake news detection is correlated with topics, we also incorporate topics to enrich the news representation. Specifically, we first construct a directed heterogeneous document graph for each news incorporating topics and entities. Based on the graph, we develop a heterogeneous graph attention network for learning the topic-enriched news representation as well as the contextual entity representations that encode the semantics of the news content. The contextual entity representations are then compared to the corresponding KB-based entity representations through a carefully designed entity comparison network, to capture the consistency between the news content and KB. Finally, the topic-enriched news representation combining the entity comparison features is fed into a fake news classifier. Experimental results on two benchmark datasets demonstrate that CompareNet significantly outperforms state-of-the-art methods.', 'year': 2021, 'in_acl': True, 'citationCount': 142, 'section': 'knowledge-driven misinformation detection', 'subsection': None}, {'id': 236460326, 'paperId': 'bbfed74eed1796b4534bcce6811b2c7c0b74024a', 'title': 'InfoSurgeon: Cross-Media Fine-grained Information Consistency Checking for Fake News Detection', 'authors': [{'authorId': '51135899', 'name': 'Y. Fung'}, {'authorId': '2150796972', 'name': 'Christopher Thomas'}, {'authorId': '47016316', 'name': 'Revanth Reddy Gangi Reddy'}, {'authorId': '1944118877', 'name': 'Sandeep Polisetty'}, {'authorId': '2113323573', 'name': 'Heng Ji'}, {'authorId': '2122374530', 'name': 'Shih-Fu Chang'}, {'authorId': '145590324', 'name': 'K. McKeown'}, {'authorId': '143977268', 'name': 'Mohit Bansal'}, {'authorId': '2707234', 'name': 'Avirup Sil'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'To defend against machine-generated fake news, an effective mechanism is urgently needed. We contribute a novel benchmark for fake news detection at the knowledge element level, as well as a solution for this task which incorporates cross-media consistency checking to detect the fine-grained knowledge elements making news articles misinformative. Due to training data scarcity, we also formulate a novel data synthesis method by manipulating knowledge elements within the knowledge graph to generate noisy training data with specific, hard to detect, known inconsistencies. Our detection approach outperforms the state-of-the-art (up to 16.8% accuracy gain), and more critically, yields fine-grained explanations.', 'year': 2021, 'in_acl': True, 'citationCount': 57, 'section': 'knowledge-driven misinformation detection', 'subsection': None}, {'id': 237304047, 'paperId': '02e46711fc86877bdd279c736abe5415a2415e48', 'title': 'A Survey on Automated Fact-Checking', 'authors': [{'authorId': '2681038', 'name': 'Zhijiang Guo'}, {'authorId': '8804828', 'name': 'M. Schlichtkrull'}, {'authorId': '2064056928', 'name': 'Andreas Vlachos'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'Fact-checking has become increasingly important due to the speed with which both information and misinformation can spread in the modern media ecosystem. Therefore, researchers have been exploring how fact-checking can be automated, using techniques based on natural language processing, machine learning, knowledge representation, and databases to automatically predict the veracity of claims. In this paper, we survey automated fact-checking stemming from natural language processing, and discuss its connections to related tasks and disciplines. In this process, we present an overview of existing datasets and models, aiming to unify the various definitions given and identify common concepts. Finally, we highlight challenges for future research.', 'year': 2021, 'in_acl': True, 'citationCount': 356, 'section': 'knowledge-driven misinformation detection', 'subsection': None}, {'id': 222216945, 'paperId': '3e42fea41d2737bb5ef7a6ebf9bbb4f9006a503b', 'title': 'Why do people spread false information online? The effects of message and viewer characteristics on self-reported likelihood of sharing social media disinformation', 'authors': [{'authorId': '29789516', 'name': 'T. Buchanan'}], 'venue': 'PLoS ONE', 'abstract': 'Individuals who encounter false information on social media may actively spread it further, by sharing or otherwise engaging with it. Much of the spread of disinformation can thus be attributed to human action. Four studies (total N = 2,634) explored the effect of message attributes (authoritativeness of source, consensus indicators), viewer characteristics (digital literacy, personality, and demographic variables) and their interaction (consistency between message and recipient beliefs) on self-reported likelihood of spreading examples of disinformation. Participants also reported whether they had shared real-world disinformation in the past. Reported likelihood of sharing was not influenced by authoritativeness of the source of the material, nor indicators of how many other people had previously engaged with it. Participants’ level of digital literacy had little effect on their responses. The people reporting the greatest likelihood of sharing disinformation were those who thought it likely to be true, or who had pre-existing attitudes consistent with it. They were likely to have previous familiarity with the materials. Across the four studies, personality (lower Agreeableness and Conscientiousness, higher Extraversion and Neuroticism) and demographic variables (male gender, lower age and lower education) were weakly and inconsistently associated with self-reported likelihood of sharing. These findings have implications for strategies more or less likely to work in countering disinformation in social media.', 'year': 2020, 'in_acl': False, 'citationCount': 124, 'section': 'intent characterization', 'subsection': None}, {'id': 236459973, 'paperId': 'acfcb88fbd0ece7956cf5ad5eb0f8087311b5b3d', 'title': 'Edited Media Understanding Frames: Reasoning About the Intent and Implications of Visual Misinformation', 'authors': [{'authorId': '1380616323', 'name': 'Jeff Da'}, {'authorId': '39191185', 'name': 'Maxwell Forbes'}, {'authorId': '2545335', 'name': 'Rowan Zellers'}, {'authorId': '146227594', 'name': 'Anthony Zheng'}, {'authorId': '2012510', 'name': 'Jena D. Hwang'}, {'authorId': '8536286', 'name': 'Antoine Bosselut'}, {'authorId': '1699545', 'name': 'Yejin Choi'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Understanding manipulated media, from automatically generated ‘deepfakes’ to manually edited ones, raises novel research challenges. Because the vast majority of edited or manipulated images are benign, such as photoshopped images for visual enhancements, the key challenge is to understand the complex layers of underlying intents of media edits and their implications with respect to disinformation. In this paper, we study Edited Media Frames, a new formalism to understand visual media manipulation as structured annotations with respect to the intents, emotional reactions, attacks on individuals, and the overall implications of disinformation. We introduce a dataset for our task, EMU, with 56k question-answer pairs written in rich natural language. We evaluate a wide variety of vision-and-language models for our task, and introduce a new model PELICAN, which builds upon recent progress in pretrained multimodal representations. Our model obtains promising results on our dataset, with humans rating its answers as accurate 48.2% of the time. At the same time, there is still much work to be done – and we provide analysis that highlights areas for further progress.', 'year': 2021, 'in_acl': True, 'citationCount': 11, 'section': 'intent characterization', 'subsection': None}, {'id': 245916820, 'paperId': '9963b8bdbfa3e35220757fef2a2667372241b2e5', 'title': 'The psychological drivers of misinformation belief and its resistance to correction', 'authors': [{'authorId': '3139999', 'name': 'Ullrich K. H. Ecker'}, {'authorId': '2573193', 'name': 'S. Lewandowsky'}, {'authorId': '2115139230', 'name': 'J. Cook'}, {'authorId': '2068468483', 'name': 'P. Schmid'}, {'authorId': '2587105', 'name': 'Lisa K. Fazio'}, {'authorId': '3439222', 'name': 'Nadia M. Brashier'}, {'authorId': '4247735', 'name': 'Panayiota Kendeou'}, {'authorId': '2469467', 'name': 'E. Vraga'}, {'authorId': '41094808', 'name': 'Michelle A. Amazeen'}], 'venue': 'Nature Reviews Psychology', 'abstract': 'Misinformation has been identified as a major contributor to various contentious contemporary events ranging from elections and referenda to the response to the COVID-19 pandemic. Not only can belief in misinformation lead to poor judgements and decision-making, it also exerts a lingering influence on people’s reasoning after it has been corrected — an effect known as the continued influence effect. In this Review, we describe the cognitive, social and affective factors that lead people to form or endorse misinformed views, and the psychological barriers to knowledge revision after misinformation has been corrected, including theories of continued influence. We discuss the effectiveness of both pre-emptive (‘prebunking’) and reactive (‘debunking’) interventions to reduce the effects of misinformation, as well as implications for information consumers and practitioners in various areas including journalism, public health, policymaking and education.', 'year': 2022, 'in_acl': False, 'citationCount': 519, 'section': 'study of fake news impact from a psychological point of view', 'subsection': None}]
|
2022.aacl-tutorials.6
|
A Tour of Explicit Multilingual Semantics: Word Sense Disambiguation, Semantic Role Labeling and Semantic Parsing
|
The recent advent of modern pretrained language models has sparked a revolution in Natural Language Processing (NLP), especially in multilingual and cross-lingual applications. Today, such language models have become the de facto standard for providing rich input representations to neural systems, achieving unprecedented results in an increasing range of benchmarks. However, questions that often arise are: firstly, whether current language models are, indeed, able to capture explicit, symbolic meaning; secondly, if they are, to what extent; thirdly, and perhaps more importantly, whether current approaches are capable of scaling across languages. In this cutting-edge tutorial, we will review recent efforts that have aimed at shedding light on meaning in NLP, with a focus on three key open problems in lexical and sentence-level semantics: Word Sense Disambiguation, Semantic Role Labeling, and Semantic Parsing. After a brief introduction, we will spotlight how state-of-the-art models tackle these tasks in multiple languages, showing where they excel and where they fail. We hope that this tutorial will broaden the audience interested in multilingual semantics and inspire researchers to further advance the field.
| 2,022
|
https://aclanthology.org/2022.aacl-tutorials.6
|
AACL, IJCNLP
|
[{'id': 237100274, 'paperId': 'b0fe3bf02e16bbea17df617fed6d367a0cc5e739', 'title': 'Recent Trends in Word Sense Disambiguation: A Survey', 'authors': [{'authorId': '143802044', 'name': 'Michele Bevilacqua'}, {'authorId': '40438851', 'name': 'Tommaso Pasini'}, {'authorId': '3106437', 'name': 'Alessandro Raganato'}, {'authorId': '1733928', 'name': 'Roberto Navigli'}], 'venue': 'International Joint Conference on Artificial Intelligence', 'abstract': 'Word Sense Disambiguation (WSD) aims at making explicit the semantics of a word in context by identifying the most suitable meaning from a predefined sense inventory. Recent breakthroughs in representation learning have fueled intensive WSD research, resulting in considerable performance improvements, breaching the 80% glass ceiling set by the inter-annotator agreement. In this survey, we provide an extensive overview of current advances in WSD, describing the state of the art in terms of i) resources for the task, i.e., sense inventories and reference datasets for training and testing, as well as ii) automatic disambiguation approaches, detailing their peculiarities, strengths and weaknesses. Finally, we highlight the current limitations of the task itself, but also point out recent trends that could help expand the scope and applicability of WSD, setting up new promising directions for the future.', 'year': 2021, 'in_acl': False, 'citationCount': 119, 'section': 'WSD', 'subsection': None}, {'id': 218517044, 'paperId': '9a25609275bb1113aaf7c92b28477ed7ff0677a8', 'title': 'Moving Down the Long Tail of Word Sense Disambiguation with Gloss Informed Bi-encoders', 'authors': [{'authorId': '3443287', 'name': 'Terra Blevins'}, {'authorId': '1982950', 'name': 'Luke Zettlemoyer'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'A major obstacle in Word Sense Disambiguation (WSD) is that word senses are not uniformly distributed, causing existing models to generally perform poorly on senses that are either rare or unseen during training. We propose a bi-encoder model that independently embeds (1) the target word with its surrounding context and (2) the dictionary definition, or gloss, of each sense. The encoders are jointly optimized in the same representation space, so that sense disambiguation can be performed by finding the nearest sense embedding for each target word embedding. Our system outperforms previous state-of-the-art models on English all-words WSD; these gains predominantly come from improved performance on rare senses, leading to a 31.1% error reduction on less frequent senses over prior work. This demonstrates that rare senses can be more effectively disambiguated by modeling their definitions.', 'year': 2020, 'in_acl': True, 'citationCount': 157, 'section': 'WSD', 'subsection': None}, {'id': 235097236, 'paperId': '486df033e3689be451a5e6137b88493d01c8de43', 'title': 'ESC: Redesigning WSD with Extractive Sense Comprehension', 'authors': [{'authorId': '1810690342', 'name': 'Edoardo Barba'}, {'authorId': '40438851', 'name': 'Tommaso Pasini'}, {'authorId': '1733928', 'name': 'Roberto Navigli'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Word Sense Disambiguation (WSD) is a historical NLP task aimed at linking words in contexts to discrete sense inventories and it is usually cast as a multi-label classification task. Recently, several neural approaches have employed sense definitions to better represent word meanings. Yet, these approaches do not observe the input sentence and the sense definition candidates all at once, thus potentially reducing the model performance and generalization power. We cope with this issue by reframing WSD as a span extraction problem — which we called Extractive Sense Comprehension (ESC) — and propose ESCHER, a transformer-based neural architecture for this new formulation. By means of an extensive array of experiments, we show that ESC unleashes the full potential of our model, leading it to outdo all of its competitors and to set a new state of the art on the English WSD task. In the few-shot scenario, ESCHER proves to exploit training data efficiently, attaining the same performance as its closest competitor while relying on almost three times fewer annotations. Furthermore, ESCHER can nimbly combine data annotated with senses from different lexical resources, achieving performances that were previously out of everyone’s reach. The model along with data is available at https://github.com/SapienzaNLP/esc.', 'year': 2021, 'in_acl': True, 'citationCount': 72, 'section': 'WSD', 'subsection': None}, {'id': 1134497, 'paperId': 'f84529b40be07ba1f8fdd3d318ddeacd7394c908', 'title': 'Special Issue Introduction: Semantic Role Labeling: An Introduction to the Special Issue', 'authors': [{'authorId': '3049328', 'name': 'Lluís Màrquez i Villodre'}, {'authorId': '1701734', 'name': 'X. Carreras'}, {'authorId': '2262551743', 'name': 'Kenneth C. Litkowski'}, {'authorId': '145584212', 'name': 'S. Stevenson'}], 'venue': 'International Conference on Computational Logic', 'abstract': 'Semantic role labeling, the computational identification and labeling of arguments in text, has become a leading task in computational linguistics today. Although the issues for this task have been studied for decades, the availability of large resources and the development of statistical machine learning methods have heightened the amount of effort in this field. This special issue presents selected and representative work in the field. This overview describes linguistic background of the problem, the movement from linguistic theories to computational practice, the major resources that are being used, an overview of steps taken in computational systems, and a description of the key issues and results in semantic role labeling (as revealed in several international evaluations). We assess weaknesses in semantic role labeling and identify important challenges facing the field. Overall, the opportunities and the potential for useful further research in semantic role labeling are considerable.', 'year': 2008, 'in_acl': True, 'citationCount': 285, 'section': 'SRL', 'subsection': None}, {'id': 9210201, 'paperId': 'c7d3f610b528226f1c862c4f9cd6b37623f7390f', 'title': 'The CoNLL-2009 Shared Task: Syntactic and Semantic Dependencies in Multiple Languages', 'authors': [{'authorId': '144002335', 'name': 'Jan Hajic'}, {'authorId': '2754495', 'name': 'Massimiliano Ciaramita'}, {'authorId': '145341661', 'name': 'Richard Johansson'}, {'authorId': '2368642', 'name': 'Daisuke Kawahara'}, {'authorId': '40430085', 'name': 'M. A. Martí'}, {'authorId': '3049328', 'name': 'Lluís Màrquez i Villodre'}, {'authorId': '144817783', 'name': 'Adam Meyers'}, {'authorId': '1720988', 'name': 'Joakim Nivre'}, {'authorId': '1708581', 'name': 'Sebastian Padó'}, {'authorId': '153593239', 'name': 'J. Stepánek'}, {'authorId': '1788237', 'name': 'P. Stranák'}, {'authorId': '1760868', 'name': 'M. Surdeanu'}, {'authorId': '1702849', 'name': 'Nianwen Xue'}, {'authorId': '49889438', 'name': 'Yi Zhang'}], 'venue': 'CoNLL Shared Task', 'abstract': 'For the 11th straight year, the Conference on Computational Natural Language Learning has been accompanied by a shared task whose purpose is to promote natural language processing applications and evaluate them in a standard setting. In 2009, the shared task was dedicated to the joint parsing of syntactic and semantic dependencies in multiple languages. This shared task combines the shared tasks of the previous five years under a unique dependency-based formalism similar to the 2008 task. In this paper, we define the shared task, describe how the data sets were created and show their quantitative properties, report the results and summarize the approaches of the participating systems.', 'year': 2009, 'in_acl': True, 'citationCount': 614, 'section': 'SRL', 'subsection': None}, {'id': 202540311, 'paperId': 'e6ae88330fcc31ff744d8669c9f2f51114494418', 'title': 'Syntax-aware Multilingual Semantic Role Labeling', 'authors': [{'authorId': '51129953', 'name': 'Shexia He'}, {'authorId': '30658665', 'name': 'Z. Li'}, {'authorId': '36225434', 'name': 'Zhao Hai'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Recently, semantic role labeling (SRL) has earned a series of success with even higher performance improvements, which can be mainly attributed to syntactic integration and enhanced word representation. However, most of these efforts focus on English, while SRL on multiple languages more than English has received relatively little attention so that is kept underdevelopment. Thus this paper intends to fill the gap on multilingual SRL with special focus on the impact of syntax and contextualized word representation. Unlike existing work, we propose a novel method guided by syntactic rule to prune arguments, which enables us to integrate syntax into multilingual SRL model simply and effectively. We present a unified SRL model designed for multiple languages together with the proposed uniform syntax enhancement. Our model achieves new state-of-the-art results on the CoNLL-2009 benchmarks of all seven languages. Besides, we pose a discussion on the syntactic role among different languages and verify the effectiveness of deep enhanced representation for multilingual SRL.', 'year': 2019, 'in_acl': True, 'citationCount': 49, 'section': 'SRL', 'subsection': None}, {'id': 235097227, 'paperId': '2172c289b97e4d709f1c54683a242ce9a4c2f37c', 'title': 'Unifying Cross-Lingual Semantic Role Labeling with Heterogeneous Linguistic Resources', 'authors': [{'authorId': '1396456007', 'name': 'Simone Conia'}, {'authorId': '151426607', 'name': 'Andrea Bacciu'}, {'authorId': '1733928', 'name': 'Roberto Navigli'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'While cross-lingual techniques are finding increasing success in a wide range of Natural Language Processing tasks, their application to Semantic Role Labeling (SRL) has been strongly limited by the fact that each language adopts its own linguistic formalism, from PropBank for English to AnCora for Spanish and PDT-Vallex for Czech, inter alia. In this work, we address this issue and present a unified model to perform cross-lingual SRL over heterogeneous linguistic resources. Our model implicitly learns a high-quality mapping for different formalisms across diverse languages without resorting to word alignment and/or translation techniques. We find that, not only is our cross-lingual system competitive with the current state of the art but that it is also robust to low-data scenarios. Most interestingly, our unified model is able to annotate a sentence in a single forward pass with all the inventories it was trained with, providing a tool for the analysis and comparison of linguistic theories across different languages. We release our code and model at https://github.com/SapienzaNLP/unify-srl.', 'year': 2021, 'in_acl': True, 'citationCount': 34, 'section': 'SRL', 'subsection': None}, {'id': 199022404, 'paperId': '5fd6339f299304a7541c805c5ee443fbfb0bac3c', 'title': 'Graph-Based Meaning Representations: Design and Processing', 'authors': [{'authorId': '145542037', 'name': 'Alexander Koller'}, {'authorId': '2949607', 'name': 'S. Oepen'}, {'authorId': '144780931', 'name': 'Weiwei SUN'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'This tutorial is on representing and processing sentence meaning in the form of labeled directed graphs. The tutorial will (a) briefly review relevant background in formal and linguistic semantics; (b) semi-formally define a unified abstract view on different flavors of semantic graphs and associated terminology; (c) survey common frameworks for graph-based meaning representation and available graph banks; and (d) offer a technical overview of a representative selection of different parsing approaches.', 'year': 2019, 'in_acl': True, 'citationCount': 31, 'section': 'SP', 'subsection': None}, {'id': 226283995, 'paperId': 'dff49c89b2d15704b7122c309e76bf7c545200b2', 'title': 'MRP 2020: The Second Shared Task on Cross-Framework and Cross-Lingual Meaning Representation Parsing', 'authors': [{'authorId': '2949607', 'name': 'S. Oepen'}, {'authorId': '2769805', 'name': 'Omri Abend'}, {'authorId': '2453967', 'name': 'Lasha Abzianidze'}, {'authorId': '3461596', 'name': 'Johan Bos'}, {'authorId': '144002335', 'name': 'Jan Hajic'}, {'authorId': '2086349', 'name': 'Daniel Hershcovich'}, {'authorId': '2185910490', 'name': 'Bin Li'}, {'authorId': '1388957618', 'name': "Timothy J. O'Gorman"}, {'authorId': '1702849', 'name': 'Nianwen Xue'}, {'authorId': '1771298', 'name': 'Daniel Zeman'}], 'venue': 'Conference on Computational Natural Language Learning', 'abstract': 'The 2020 Shared Task at the Conference for Computational Language Learning (CoNLL) was devoted to Meaning Representation Parsing (MRP) across frameworks and languages. Extending a similar setup from the previous year, five distinct approaches to the representation of sentence meaning in the form of directed graphs were represented in the English training and evaluation data for the task, packaged in a uniform graph abstraction and serialization; for four of these representation frameworks, additional training and evaluation data was provided for one additional language per framework. The task received submissions from eight teams, of which two do not participate in the official ranking because they arrived after the closing deadline or made use of additional training data. All technical information regarding the task, including system submissions, official results, and links to supporting resources and software are available from the task web site at: http://mrp.nlpl.eu', 'year': 2020, 'in_acl': True, 'citationCount': 66, 'section': 'SP', 'subsection': None}, {'id': 7771402, 'paperId': 'e72e5ee5de14fd463ab58ce830474157258e3578', 'title': 'Abstract Meaning Representation for Sembanking', 'authors': [{'authorId': '3460261', 'name': 'L. Banarescu'}, {'authorId': '3202888', 'name': 'C. Bonial'}, {'authorId': '2112618394', 'name': 'Shu Cai'}, {'authorId': '2065872210', 'name': 'Madalina Georgescu'}, {'authorId': '3168985', 'name': 'Kira Griffitt'}, {'authorId': '1791311', 'name': 'U. Hermjakob'}, {'authorId': '152971314', 'name': 'Kevin Knight'}, {'authorId': '1755162', 'name': 'Philipp Koehn'}, {'authorId': '145755155', 'name': 'Martha Palmer'}, {'authorId': '145254207', 'name': 'Nathan Schneider'}], 'venue': 'LAW@ACL', 'abstract': 'We describe Abstract Meaning Representation (AMR), a semantic representation language in which we are writing down the meanings of thousands of English sentences. We hope that a sembank of simple, whole-sentence semantic structures will spur new work in statistical natural language understanding and generation, like the Penn Treebank encouraged work on statistical parsing. This paper gives an overview of AMR and tools associated with it.', 'year': 2013, 'in_acl': True, 'citationCount': 1379, 'section': 'SP', 'subsection': None}, {'id': 235349016, 'paperId': '25e7c9dcc294d77d184c4c1122c8304cdb58c69d', 'title': 'One SPRING to Rule Them Both: Symmetric AMR Semantic Parsing and Generation without a Complex Pipeline', 'authors': [{'authorId': '143802044', 'name': 'Michele Bevilacqua'}, {'authorId': '2008183673', 'name': 'Rexhina Blloshmi'}, {'authorId': '1733928', 'name': 'Roberto Navigli'}], 'venue': 'AAAI Conference on Artificial Intelligence', 'abstract': 'In Text-to-AMR parsing, current state-of-the-art semantic parsers use cumbersome pipelines integrating several different modules or components, and exploit graph recategorization, i.e., a set of content-specific heuristics that are developed on the basis of the training set. However, the generalizability of graph recategorization in an out-of-distribution setting is unclear. In contrast, state-of-the-art AMR-to-Text generation, which can be seen as the inverse to parsing, is based on simpler seq2seq. In this paper, we cast Text-to-AMR and AMR-to-Text as a symmetric transduction task and show that by devising a careful graph linearization and extending a pretrained encoder-decoder model, it is possible to obtain state-of-the-art performances in both tasks using the very same seq2seq approach, i.e., SPRING (Symmetric PaRsIng aNd Generation). Our model does not require complex pipelines, nor heuristics built on heavy assumptions. In fact, we drop the need for graph recategorization, showing that this technique is actually harmful outside of the standard benchmark. Finally, we outperform the previous state of the art on the English AMR 2.0 dataset by a large margin: on Text-to-AMR we obtain an improvement of 3.6 Smatch points, while on AMR-to-Text we outperform the state of the art by 11.2 BLEU points. \nWe release the software at github.com/SapienzaNLP/spring.', 'year': 2021, 'in_acl': False, 'citationCount': 152, 'section': 'SP', 'subsection': None}]
|
2022.acl-tutorials.2
|
Towards Reproducible Machine Learning Research in Natural Language Processing
|
While recent progress in the field of ML has been significant, the reproducibility of these cutting-edge results is often lacking, with many submissions lacking the necessary information in order to ensure subsequent reproducibility. Despite proposals such as the Reproducibility Checklist and reproducibility criteria at several major conferences, the reflex for carrying out research with reproducibility in mind is lacking in the broader ML community. We propose this tutorial as a gentle introduction to ensuring reproducible research in ML, with a specific emphasis on computational linguistics and NLP. We also provide a framework for using reproducibility as a teaching tool in university-level computer science programs.
| 2,022
|
https://aclanthology.org/2022.acl-tutorials.2
|
ACL
|
[{'id': 4460617, 'paperId': '57b101db87fb0b67fbe8b57f90b83f8e9efe81a6', 'title': '1,500 scientists lift the lid on reproducibility', 'authors': [{'authorId': '2225440', 'name': 'M. Baker'}], 'venue': 'Nature', 'abstract': 'Survey sheds light on the ‘crisis’ rocking research.', 'year': 2016, 'in_acl': False, 'citationCount': 2933, 'section': 'General Background', 'subsection': None}, {'id': 210936371, 'paperId': '83c550e602a7cbabbdf8a157b6014458c1f8bc5a', 'title': 'Community Perspective on Replicability in Natural Language Processing', 'authors': [{'authorId': '2921990', 'name': 'Margot Mieskes'}, {'authorId': '3196675', 'name': 'Karën Fort'}, {'authorId': '48680958', 'name': 'Aurélie Névéol'}, {'authorId': '2105490', 'name': 'Cyril Grouin'}, {'authorId': '145468230', 'name': 'K. Cohen'}], 'venue': 'Recent Advances in Natural Language Processing', 'abstract': 'With recent efforts in drawing attention to the task of replicating and/or reproducing results, for example in the context of COLING 2018 and various LREC workshops, the question arises how the NLP community views the topic of replicability in general. Using a survey, in which we involve members of the NLP community, we investigate how our community perceives this topic, its relevance and options for improvement. Based on over two hundred participants, the survey results confirm earlier observations, that successful reproducibility requires more than having access to code and data. Additionally, the results show that the topic has to be tackled from the authors’, reviewers’ and community’s side.', 'year': 2019, 'in_acl': True, 'citationCount': 13, 'section': 'NLP', 'subsection': None}, {'id': 232232827, 'paperId': '59e7ed6132ce9992a6790a0a179b9eed73959780', 'title': 'A Systematic Review of Reproducibility Research in Natural Language Processing', 'authors': [{'authorId': '41052836', 'name': 'Anya Belz'}, {'authorId': '2114358275', 'name': 'Shubham Agarwal'}, {'authorId': '2181869', 'name': 'Anastasia Shimorina'}, {'authorId': '144568312', 'name': 'Ehud Reiter'}], 'venue': 'Conference of the European Chapter of the Association for Computational Linguistics', 'abstract': 'Against the background of what has been termed a reproducibility crisis in science, the NLP field is becoming increasingly interested in, and conscientious about, the reproducibility of its results. The past few years have seen an impressive range of new initiatives, events and active research in the area. However, the field is far from reaching a consensus about how reproducibility should be defined, measured and addressed, with diversity of views currently increasing rather than converging. With this focused contribution, we aim to provide a wide-angle, and as near as possible complete, snapshot of current work on reproducibility in NLP,', 'year': 2021, 'in_acl': True, 'citationCount': 55, 'section': 'NLP', 'subsection': None}, {'id': 227247527, 'paperId': '38a24433220d3c1251c9d69bb3d2d242c52c2241', 'title': 'ReproducedPapers.org: Openly teaching and structuring machine learning reproducibility', 'authors': [{'authorId': '2099875322', 'name': 'Burak Yildiz'}, {'authorId': '2064627681', 'name': 'Hayley Hung'}, {'authorId': '3308507', 'name': 'J. Krijthe'}, {'authorId': '1968667', 'name': 'Cynthia C. S. Liem'}, {'authorId': '1380498259', 'name': 'M. Loog'}, {'authorId': '2766199', 'name': 'Gosia Migut'}, {'authorId': '2064861783', 'name': 'Frans Oliehoek'}, {'authorId': '3013302', 'name': 'Annibale Panichella'}, {'authorId': '2784409', 'name': 'P. Pawełczak'}, {'authorId': '1686538', 'name': 'S. Picek'}, {'authorId': '1788228', 'name': 'M. D. Weerdt'}, {'authorId': '1738975', 'name': 'J. V. Gemert'}], 'venue': 'International Workshop on Reproducible Research in Pattern Recognition', 'abstract': 'We present ReproducedPapers.org: an open online repository for teaching and structuring machine learning reproducibility. We evaluate doing a reproduction project among students and the added value of an online reproduction repository among AI researchers. We use anonymous self-assessment surveys and obtained 144 responses. Results suggest that students who do a reproduction project place more value on scientific reproductions and become more critical thinkers. Students and AI researchers agree that our online reproduction repository is valuable.', 'year': 2020, 'in_acl': False, 'citationCount': 12, 'section': 'Teaching Reproducibility', 'subsection': None}]
|
2022.acl-tutorials.4
|
Non-Autoregressive Sequence Generation
|
Non-autoregressive sequence generation (NAR) attempts to generate the entire or partial output sequences in parallel to speed up the generation process and avoid potential issues (e.g., label bias, exposure bias) in autoregressive generation. While it has received much research attention and has been applied in many sequence generation tasks in natural language and speech, naive NAR models still face many challenges to close the performance gap between state-of-the-art autoregressive models because of a lack of modeling power. In this tutorial, we will provide a thorough introduction and review of non-autoregressive sequence generation, in four sections: 1) Background, which covers the motivation of NAR generation, the problem definition, the evaluation protocol, and the comparison with standard autoregressive generation approaches. 2) Method, which includes different aspects: model architecture, objective function, training data, learning paradigm, and additional inference tricks. 3) Application, which covers different tasks in text and speech generation, and some advanced topics in applications. 4) Conclusion, in which we describe several research challenges and discuss the potential future research directions. We hope this tutorial can serve both academic researchers and industry practitioners working on non-autoregressive sequence generation.
| 2,022
|
https://aclanthology.org/2022.acl-tutorials.4
|
ACL
|
[{'id': 13756489, 'paperId': '204e3073870fae3d05bcbc2f6a8e263d9b72e776', 'title': 'Attention is All you Need', 'authors': [{'authorId': '40348417', 'name': 'Ashish Vaswani'}, {'authorId': '1846258', 'name': 'Noam M. Shazeer'}, {'authorId': '3877127', 'name': 'Niki Parmar'}, {'authorId': '39328010', 'name': 'Jakob Uszkoreit'}, {'authorId': '145024664', 'name': 'Llion Jones'}, {'authorId': '19177000', 'name': 'Aidan N. Gomez'}, {'authorId': '40527594', 'name': 'Lukasz Kaiser'}, {'authorId': '3443442', 'name': 'Illia Polosukhin'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.', 'year': 2017, 'in_acl': False, 'citationCount': 109681, 'section': None, 'subsection': None}, {'id': 3480671, 'paperId': '15e81c8d1c21f9e928c72721ac46d458f3341454', 'title': 'Non-Autoregressive Neural Machine Translation', 'authors': [{'authorId': '3016273', 'name': 'Jiatao Gu'}, {'authorId': '40518045', 'name': 'James Bradbury'}, {'authorId': '2228109', 'name': 'Caiming Xiong'}, {'authorId': '2052674293', 'name': 'V. Li'}, {'authorId': '2166511', 'name': 'R. Socher'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Existing approaches to neural machine translation condition each output word on previously generated outputs. We introduce a model that avoids this autoregressive property and produces its outputs in parallel, allowing an order of magnitude lower latency during inference. Through knowledge distillation, the use of input token fertilities as a latent variable, and policy gradient fine-tuning, we achieve this at a cost of as little as 2.0 BLEU points relative to the autoregressive Transformer network used as a teacher. We demonstrate substantial cumulative improvements associated with each of the three aspects of our training strategy, and validate our approach on IWSLT 2016 English-German and two WMT language pairs. By sampling fertilities in parallel at inference time, our non-autoregressive model achieves near-state-of-the-art performance of 29.8 BLEU on WMT 2016 English-Romanian.', 'year': 2017, 'in_acl': False, 'citationCount': 763, 'section': None, 'subsection': None}, {'id': 202538740, 'paperId': '5efadc9019ce3378a0eb6c8f939cdde6c8918b1e', 'title': 'Mask-Predict: Parallel Decoding of Conditional Masked Language Models', 'authors': [{'authorId': '2320509', 'name': 'Marjan Ghazvininejad'}, {'authorId': '39455775', 'name': 'Omer Levy'}, {'authorId': '11323179', 'name': 'Yinhan Liu'}, {'authorId': '1982950', 'name': 'Luke Zettlemoyer'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Most machine translation systems generate text autoregressively from left to right. We, instead, use a masked language modeling objective to train a model to predict any subset of the target words, conditioned on both the input text and a partially masked target translation. This approach allows for efficient iterative decoding, where we first predict all of the target words non-autoregressively, and then repeatedly mask out and regenerate the subset of words that the model is least confident about. By applying this strategy for a constant number of iterations, our model improves state-of-the-art performance levels for non-autoregressive and parallel decoding translation models by over 4 BLEU on average. It is also able to reach within about 1 BLEU point of a typical left-to-right transformer model, while decoding significantly faster.', 'year': 2019, 'in_acl': True, 'citationCount': 539, 'section': None, 'subsection': None}, {'id': 216056470, 'paperId': 'bed87e8fb3e7e9bc87e1c2ee459ae405a35d3267', 'title': 'A Study of Non-autoregressive Model for Sequence Generation', 'authors': [{'authorId': '1500435161', 'name': 'Yi Ren'}, {'authorId': '48211720', 'name': 'Jinglin Liu'}, {'authorId': '48391466', 'name': 'Xu Tan'}, {'authorId': '47601191', 'name': 'Sheng Zhao'}, {'authorId': '47122432', 'name': 'Zhou Zhao'}, {'authorId': '2110264337', 'name': 'Tie-Yan Liu'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Non-autoregressive (NAR) models generate all the tokens of a sequence in parallel, resulting in faster generation speed compared to their autoregressive (AR) counterparts but at the cost of lower accuracy. Different techniques including knowledge distillation and source-target alignment have been proposed to bridge the gap between AR and NAR models in various tasks such as neural machine translation (NMT), automatic speech recognition (ASR), and text to speech (TTS). With the help of those techniques, NAR models can catch up with the accuracy of AR models in some tasks but not in some others. In this work, we conduct a study to understand the difficulty of NAR sequence generation and try to answer: (1) Why NAR models can catch up with AR models in some tasks but not all? (2) Why techniques like knowledge distillation and source-target alignment can help NAR models. Since the main difference between AR and NAR models is that NAR models do not use dependency among target tokens while AR models do, intuitively the difficulty of NAR sequence generation heavily depends on the strongness of dependency among target tokens. To quantify such dependency, we propose an analysis model called CoMMA to characterize the difficulty of different NAR sequence generation tasks. We have several interesting findings: 1) Among the NMT, ASR and TTS tasks, ASR has the most target-token dependency while TTS has the least. 2) Knowledge distillation reduces the target-token dependency in target sequence and thus improves the accuracy of NAR models. 3) Source-target alignment constraint encourages dependency of a target token on source tokens and thus eases the training of NAR models.', 'year': 2020, 'in_acl': True, 'citationCount': 59, 'section': None, 'subsection': None}]
|
2022.acl-tutorials.5
|
Learning with Limited Text Data
|
Natural Language Processing (NLP) has achieved great progress in the past decade on the basis of neural models, which often make use of large amounts of labeled data to achieve state-of-the-art performance. The dependence on labeled data prevents NLP models from being applied to low-resource settings and languages because of the time, money, and expertise that is often required to label massive amounts of textual data. Consequently, the ability to learn with limited labeled data is crucial for deploying neural systems to real-world NLP applications. Recently, numerous approaches have been explored to alleviate the need for labeled data in NLP such as data augmentation and semi-supervised learning. This tutorial aims to provide a systematic and up-to-date overview of these methods in order to help researchers and practitioners understand the landscape of approaches and the challenges associated with learning from limited labeled data, an emerging topic in the computational linguistics community. We will consider applications to a wide variety of NLP tasks (including text classification, generation, and structured prediction) and will highlight current challenges and future directions.
| 2,022
|
https://aclanthology.org/2022.acl-tutorials.5
|
ACL
|
[{'id': 235422524, 'paperId': '013eb12ce5468f79d58bf859653f4929c5a2bd14', 'title': 'An Empirical Survey of Data Augmentation for Limited Data Learning in NLP', 'authors': [{'authorId': '47739850', 'name': 'Jiaao Chen'}, {'authorId': '1390031652', 'name': 'Derek Tam'}, {'authorId': '2402716', 'name': 'Colin Raffel'}, {'authorId': '143977268', 'name': 'Mohit Bansal'}, {'authorId': '2143919864', 'name': 'Diyi Yang'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'NLP has achieved great progress in the past decade through the use of neural models and large labeled datasets. The dependence on abundant data prevents NLP models from being applied to low-resource settings or novel tasks where significant time, money, or expertise is required to label massive amounts of textual data. Recently, data augmentation methods have been explored as a means of improving data efficiency in NLP. To date, there has been no systematic empirical overview of data augmentation for NLP in the limited labeled data setting, making it difficult to understand which methods work in which settings. In this paper, we provide an empirical survey of recent progress on data augmentation for NLP in the limited labeled data setting, summarizing the landscape of methods (including token-level augmentations, sentence-level augmentations, adversarial augmentations, and hidden-space augmentations) and carrying out experiments on 11 datasets covering topics/news classification, inference tasks, paraphrasing tasks, and single-sentence tasks. Based on the results, we draw several conclusions to help practitioners choose appropriate augmentations in different settings and discuss the current challenges and future directions for limited data learning in NLP.', 'year': 2021, 'in_acl': True, 'citationCount': 143, 'section': None, 'subsection': None}, {'id': 216553182, 'paperId': 'ae2c03cbe6162dadf65edd2ff7dfc5333524dca5', 'title': 'MixText: Linguistically-Informed Interpolation of Hidden Space for Semi-Supervised Text Classification', 'authors': [{'authorId': '47739850', 'name': 'Jiaao Chen'}, {'authorId': '8387085', 'name': 'Zichao Yang'}, {'authorId': '2022168', 'name': 'Diyi Yang'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'This paper presents MixText, a semi-supervised learning method for text classification, which uses our newly designed data augmentation method called TMix. TMix creates a large amount of augmented training samples by interpolating text in hidden space. Moreover, we leverage recent advances in data augmentation to guess low-entropy labels for unlabeled data, hence making them as easy to use as labeled data. By mixing labeled, unlabeled and augmented data, MixText significantly outperformed current pre-trained and fined-tuned models and other state-of-the-art semi-supervised learning methods on several text classification benchmarks. The improvement is especially prominent when supervision is extremely limited. We have publicly released our code at https://github.com/GT-SALT/MixText.', 'year': 2020, 'in_acl': True, 'citationCount': 326, 'section': None, 'subsection': None}, {'id': 58981712, 'paperId': 'ec4eba83f6b3266d9ae7cabb2b2cb1518f727edc', 'title': 'Cross-lingual Language Model Pretraining', 'authors': [{'authorId': '1830914', 'name': 'Guillaume Lample'}, {'authorId': '2480903', 'name': 'Alexis Conneau'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding. In this work, we extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. We propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain state-of-the-art results on cross-lingual classification, unsupervised and supervised machine translation. On XNLI, our approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine translation, we obtain 34.3 BLEU on WMT’16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised machine translation, we obtain a new state of the art of 38.5 BLEU on WMT’16 Romanian-English, outperforming the previous best approach by more than 4 BLEU. Our code and pretrained models will be made publicly available.', 'year': 2019, 'in_acl': False, 'citationCount': 2597, 'section': None, 'subsection': None}, {'id': 221995566, 'paperId': '110c13fbf4ff87b52ee1fd9eb2d3616c839ceb41', 'title': 'Parsing with Multilingual BERT, a Small Treebank, and a Small Corpus', 'authors': [{'authorId': '1974400874', 'name': 'Ethan C. Chau'}, {'authorId': '49478117', 'name': 'Lucy H. Lin'}, {'authorId': '144365875', 'name': 'Noah A. Smith'}], 'venue': 'Findings', 'abstract': 'Pretrained multilingual contextual representations have shown great success, but due to the limits of their pretraining data, their benefits do not apply equally to all language varieties. This presents a challenge for language varieties unfamiliar to these models, whose labeled and unlabeled data is too limited to train a monolingual model effectively. We propose the use of additional language-specific pretraining and vocabulary augmentation to adapt multilingual models to low-resource settings. Using dependency parsing of four diverse low-resource language varieties as a case study, we show that these methods significantly improve performance over baselines, especially in the lowest-resource cases, and demonstrate the importance of the relationship between such models’ pretraining data and target language varieties.', 'year': 2020, 'in_acl': True, 'citationCount': 16, 'section': None, 'subsection': None}, {'id': 220714040, 'paperId': 'c9b56cb026a38e39bb0228faac57accd6f65e6f7', 'title': 'TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP', 'authors': [{'authorId': '153769695', 'name': 'John X. Morris'}, {'authorId': '1453652787', 'name': 'Eli Lifland'}, {'authorId': '1693182792', 'name': 'Jin Yong Yoo'}, {'authorId': '1829303908', 'name': 'J. Grigsby'}, {'authorId': '2068347799', 'name': 'Di Jin'}, {'authorId': '121817403', 'name': 'Yanjun Qi'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'While there has been substantial research using adversarial attacks to analyze NLP models, each attack is implemented in its own code repository. It remains challenging to develop NLP attacks and utilize them to improve model performance. This paper introduces TextAttack, a Python framework for adversarial attacks, data augmentation, and adversarial training in NLP. TextAttack builds attacks from four components: a goal function, a set of constraints, a transformation, and a search method. TextAttack’s modular design enables researchers to easily construct attacks from combinations of novel and existing components. TextAttack provides implementations of 16 adversarial attacks from the literature and supports a variety of models and datasets, including BERT and other transformers, and all GLUE tasks. TextAttack also includes data augmentation and adversarial training modules for using components of adversarial attacks to improve model accuracy and robustness.TextAttack is democratizing NLP: anyone can try data augmentation and adversarial training on any model or dataset, with just a few lines of code. Code and tutorials are available at https://github.com/QData/TextAttack.', 'year': 2020, 'in_acl': True, 'citationCount': 657, 'section': None, 'subsection': None}, {'id': 222132916, 'paperId': 'a321d1ec561a23512a5aa687c0d89c971bb5687b', 'title': 'Self-training Improves Pre-training for Natural Language Understanding', 'authors': [{'authorId': '3048577', 'name': 'Jingfei Du'}, {'authorId': '3024698', 'name': 'Edouard Grave'}, {'authorId': '7653327', 'name': 'Beliz Gunel'}, {'authorId': '113810201', 'name': 'Vishrav Chaudhary'}, {'authorId': '2061077885', 'name': 'Onur Çelebi'}, {'authorId': '2325985', 'name': 'Michael Auli'}, {'authorId': '1389924486', 'name': 'Ves Stoyanov'}, {'authorId': '2480903', 'name': 'Alexis Conneau'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Unsupervised pre-training has led to much recent progress in natural language understanding. In this paper, we study self-training as another way to leverage unlabeled data through semi-supervised learning. To obtain additional data for a specific task, we introduce SentAugment, a data augmentation method which computes task-specific query embeddings from labeled data to retrieve sentences from a bank of billions of unlabeled sentences crawled from the web. Unlike previous semi-supervised methods, our approach does not require in-domain unlabeled data and is therefore more generally applicable. Experiments show that self-training is complementary to strong RoBERTa baselines on a variety of tasks. Our augmentation approach leads to scalable and effective self-training with improvements of up to 2.6% on standard text classification benchmarks. Finally, we also show strong gains on knowledge-distillation and few-shot learning.', 'year': 2020, 'in_acl': True, 'citationCount': 156, 'section': None, 'subsection': None}]
|
2022.acl-tutorials.6
|
Zero- and Few-Shot NLP with Pretrained Language Models
|
The ability to efficiently learn from little-to-no data is critical to applying NLP to tasks where data collection is costly or otherwise difficult. This is a challenging setting both academically and practically—particularly because training neutral models typically require large amount of labeled data. More recently, advances in pretraining on unlabelled data have brought up the potential of better zero-shot or few-shot learning (Devlin et al., 2019; Brown et al., 2020). In particular, over the past year, a great deal of research has been conducted to better learn from limited data using large-scale language models. In this tutorial, we aim at bringing interested NLP researchers up to speed about the recent and ongoing techniques for zero- and few-shot learning with pretrained language models. Additionally, our goal is to reveal new research opportunities to the audience, which will hopefully bring us closer to address existing challenges in this domain.
| 2,022
|
https://aclanthology.org/2022.acl-tutorials.6
|
ACL
|
[{'id': 218971783, 'paperId': '90abbc2cf38462b954ae1b772fac9532e2ccd8b0', 'title': 'Language Models are Few-Shot Learners', 'authors': [{'authorId': '31035595', 'name': 'Tom B. Brown'}, {'authorId': '2056658938', 'name': 'Benjamin Mann'}, {'authorId': '39849748', 'name': 'Nick Ryder'}, {'authorId': '2065894334', 'name': 'Melanie Subbiah'}, {'authorId': '152724169', 'name': 'J. Kaplan'}, {'authorId': '6515819', 'name': 'Prafulla Dhariwal'}, {'authorId': '2072676', 'name': 'Arvind Neelakantan'}, {'authorId': '67311962', 'name': 'Pranav Shyam'}, {'authorId': '144864359', 'name': 'Girish Sastry'}, {'authorId': '119609682', 'name': 'Amanda Askell'}, {'authorId': '144517868', 'name': 'Sandhini Agarwal'}, {'authorId': '1404060687', 'name': 'Ariel Herbert-Voss'}, {'authorId': '2064404342', 'name': 'Gretchen Krueger'}, {'authorId': '103143311', 'name': 'T. Henighan'}, {'authorId': '48422824', 'name': 'R. Child'}, {'authorId': '1992922591', 'name': 'A. Ramesh'}, {'authorId': '2052152920', 'name': 'Daniel M. Ziegler'}, {'authorId': '49387725', 'name': 'Jeff Wu'}, {'authorId': '2059411355', 'name': 'Clemens Winter'}, {'authorId': '144239765', 'name': 'Christopher Hesse'}, {'authorId': '2108828435', 'name': 'Mark Chen'}, {'authorId': '2064673055', 'name': 'Eric Sigler'}, {'authorId': '1380985420', 'name': 'Ma-teusz Litwin'}, {'authorId': '145565184', 'name': 'Scott Gray'}, {'authorId': '1490681878', 'name': 'B. Chess'}, {'authorId': '2115193883', 'name': 'Jack Clark'}, {'authorId': '133740015', 'name': 'Christopher Berner'}, {'authorId': '52238703', 'name': 'Sam McCandlish'}, {'authorId': '38909097', 'name': 'Alec Radford'}, {'authorId': '1701686', 'name': 'I. Sutskever'}, {'authorId': '2698777', 'name': 'Dario Amodei'}], 'venue': 'Neural Information Processing Systems', 'abstract': "Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.", 'year': 2020, 'in_acl': False, 'citationCount': 33118, 'section': None, 'subsection': None}, {'id': 221703107, 'paperId': 'f30444fbb6ad806168e2564db4815cd27faa7fd9', 'title': 'It’s Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners', 'authors': [{'authorId': '32246932', 'name': 'Timo Schick'}, {'authorId': '144418438', 'name': 'Hinrich Schütze'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'When scaled to hundreds of billions of parameters, pretrained language models such as GPT-3 (Brown et al., 2020) achieve remarkable few-shot performance. However, enormous amounts of compute are required for training and applying such big models, resulting in a large carbon footprint and making it difficult for researchers and practitioners to use them. We show that performance similar to GPT-3 can be obtained with language models that are much “greener” in that their parameter count is several orders of magnitude smaller. This is achieved by converting textual inputs into cloze questions that contain a task description, combined with gradient-based optimization; exploiting unlabeled data gives further improvements. We identify key factors required for successful natural language understanding with small language models.', 'year': 2020, 'in_acl': True, 'citationCount': 880, 'section': None, 'subsection': None}, {'id': 237416585, 'paperId': 'ff0b2681d7b05e16c46dfb71d980cc2f605907cd', 'title': 'Finetuned Language Models Are Zero-Shot Learners', 'authors': [{'authorId': '144026731', 'name': 'Jason Wei'}, {'authorId': '40377863', 'name': 'Maarten Bosma'}, {'authorId': '2664737', 'name': 'Vincent Zhao'}, {'authorId': '2091768', 'name': 'Kelvin Guu'}, {'authorId': '40625240', 'name': 'Adams Wei Yu'}, {'authorId': '144104130', 'name': 'Brian Lester'}, {'authorId': '2140321952', 'name': 'Nan Du'}, {'authorId': '2555924', 'name': 'Andrew M. Dai'}, {'authorId': '2827616', 'name': 'Quoc V. Le'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning -- finetuning language models on a collection of tasks described via instructions -- substantially improves zero-shot performance on unseen tasks. We take a 137B parameter pretrained language model and instruction-tune it on over 60 NLP tasks verbalized via natural language instruction templates. We evaluate this instruction-tuned model, which we call FLAN, on unseen task types. FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 20 of 25 tasks that we evaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenbookQA, and StoryCloze. Ablation studies reveal that number of finetuning datasets, model scale, and natural language instructions are key to the success of instruction tuning.', 'year': 2021, 'in_acl': False, 'citationCount': 3047, 'section': None, 'subsection': None}, {'id': 235899116, 'paperId': '2ee03e28208a9310a9be4032c2b04ebdddb83cc7', 'title': 'FLEX: Unifying Evaluation for Few-Shot NLP', 'authors': [{'authorId': '2699105', 'name': 'Jonathan Bragg'}, {'authorId': '2527954', 'name': 'Arman Cohan'}, {'authorId': '46258841', 'name': 'Kyle Lo'}, {'authorId': '46181066', 'name': 'Iz Beltagy'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'Few-shot NLP research is highly active, yet conducted in disjoint research threads with evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful experimental design. Consequently, the community does not know which techniques perform best or even if they outperform simple baselines. In response, we formulate the FLEX Principles, a set of requirements and best practices for unified, rigorous, valid, and cost-sensitive few-shot NLP evaluation. These principles include Sample Size Design, a novel approach to benchmark design that optimizes statistical accuracy and precision while keeping evaluation costs manageable. Following the principles, we release the FLEX benchmark, which includes four few-shot transfer settings, zero-shot evaluation, and a public leaderboard that covers diverse NLP tasks. In addition, we present UniFew, a prompt-based model for few-shot learning that unifies pretraining and finetuning prompt formats, eschewing complex machinery of recent prompt-based approaches in adapting downstream task formats to language model pretraining objectives. We demonstrate that despite simplicity, UniFew achieves results competitive with both popular meta-learning and prompt-based approaches.', 'year': 2021, 'in_acl': False, 'citationCount': 95, 'section': None, 'subsection': None}]
|
2022.acl-tutorials.8
|
Natural Language Processing for Multilingual Task-Oriented Dialogue
|
Recent advances in deep learning have also enabled fast progress in the research of task-oriented dialogue (ToD) systems. However, the majority of ToD systems are developed for English and merely a handful of other widely spoken languages, e.g., Chinese and German. This hugely limits the global reach and, consequently, transformative socioeconomic potential of such systems. In this tutorial, we will thus discuss and demonstrate the importance of (building) multilingual ToD systems, and then provide a systematic overview of current research gaps, challenges and initiatives related to multilingual ToD systems, with a particular focus on their connections to current research and challenges in multilingual and low-resource NLP. The tutorial will aim to provide answers or shed new light to the following questions: a) Why are multilingual dialogue systems so hard to build: what makes multilinguality for dialogue more challenging than for other NLP applications and tasks? b) What are the best existing methods and datasets for multilingual and cross-lingual (task-oriented) dialog systems? How are (multilingual) ToD systems usually evaluated? c) What are the promising future directions for multilingual ToD research: where can one draw inspiration from related NLP areas and tasks?
| 2,022
|
https://aclanthology.org/2022.acl-tutorials.8
|
ACL
|
[{'id': 10565222, 'paperId': '0a22389bd99b7efe3627ec6fc77ddaf3ff5e2faa', 'title': 'A Network-based End-to-End Trainable Task-oriented Dialogue System', 'authors': [{'authorId': '1388702112', 'name': 'L. Rojas-Barahona'}, {'authorId': '51175233', 'name': 'M. Gašić'}, {'authorId': '3334541', 'name': 'N. Mrksic'}, {'authorId': '2131709', 'name': 'Pei-hao Su'}, {'authorId': '2295429', 'name': 'Stefan Ultes'}, {'authorId': '144256365', 'name': 'Tsung-Hsien Wen'}, {'authorId': '145259603', 'name': 'S. Young'}, {'authorId': '92480907', 'name': 'David Vandyke'}], 'venue': 'Conference of the European Chapter of the Association for Computational Linguistics', 'abstract': 'Teaching machines to accomplish tasks by conversing naturally with humans is challenging. Currently, developing task-oriented dialogue systems requires creating multiple components and typically this involves either a large amount of handcrafting, or acquiring costly labelled datasets to solve a statistical learning problem for each component. In this work we introduce a neural network-based text-in, text-out end-to-end trainable goal-oriented dialogue system along with a new way of collecting dialogue data based on a novel pipe-lined Wizard-of-Oz framework. This approach allows us to develop dialogue systems easily and without making too many assumptions about the task at hand. The results show that the model can converse with human subjects naturally whilst helping them to accomplish tasks in a restaurant search domain.', 'year': 2016, 'in_acl': True, 'citationCount': 1070, 'section': None, 'subsection': None}, {'id': 238744120, 'paperId': '3ea8767b852253e2636b6e57925be7fcc1d739df', 'title': 'Systematic Inequalities in Language Technology Performance across the World’s Languages', 'authors': [{'authorId': '6894443', 'name': 'Damián E. Blasi'}, {'authorId': '49513989', 'name': 'Antonios Anastasopoulos'}, {'authorId': '1700325', 'name': 'Graham Neubig'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Natural language processing (NLP) systems have become a central technology in communication, education, medicine, artificial intelligence, and many other domains of research and development. While the performance of NLP methods has grown enormously over the last decade, this progress has been restricted to a minuscule subset of the world’s \\approx6,500 languages. We introduce a framework for estimating the global utility of language technologies as revealed in a comprehensive snapshot of recent publications in NLP. Our analyses involve the field at large, but also more in-depth studies on both user-facing technologies (machine translation, language understanding, question answering, text-to-speech synthesis) as well as foundational NLP tasks (dependency parsing, morphological inflection). In the process, we (1) quantify disparities in the current state of NLP research, (2) explore some of its associated societal and academic factors, and (3) produce tailored recommendations for evidence-based policy making aimed at promoting more global and equitable language technologies. Data and code to reproduce the findings discussed in this paper areavailable on GitHub (https://github.com/neubig/globalutility).', 'year': 2021, 'in_acl': True, 'citationCount': 113, 'section': None, 'subsection': None}, {'id': 235313293, 'paperId': 'd29036946152bddf950fec7a08c2828a8a8f902e', 'title': 'Crossing the Conversational Chasm: A Primer on Natural Language Processing for Multilingual Task-Oriented Dialogue Systems', 'authors': [{'authorId': '66879943', 'name': 'E. Razumovskaia'}, {'authorId': '1666177566', 'name': 'Goran Glavavs'}, {'authorId': '46963731', 'name': 'Olga Majewska'}, {'authorId': '3381663', 'name': 'E. Ponti'}, {'authorId': '145762466', 'name': 'A. Korhonen'}, {'authorId': '1747849', 'name': 'Ivan Vulic'}], 'venue': 'Journal of Artificial Intelligence Research', 'abstract': 'In task-oriented dialogue (ToD), a user holds a conversation with an artificial agent\xa0 with the aim of completing a concrete task. Although this technology represents one of\xa0 the central objectives of AI and has been the focus of ever more intense research and\xa0 development efforts, it is currently limited to a few narrow domains (e.g., food ordering,\xa0 ticket booking) and a handful of languages (e.g., English, Chinese). This work provides an\xa0 extensive overview of existing methods and resources in multilingual ToD as an entry point\xa0 to this exciting and emerging field. We find that the most critical factor preventing the\xa0 creation of truly multilingual ToD systems is the lack of datasets in most languages for\xa0 both training and evaluation. In fact, acquiring annotations or human feedback for each\xa0 component of modular systems or for data-hungry end-to-end systems is expensive and\xa0 tedious. Hence, state-of-the-art approaches to multilingual ToD mostly rely on (zero- or\xa0 few-shot) cross-lingual transfer from resource-rich languages (almost exclusively English),\xa0 either by means of (i) machine translation or (ii) multilingual representations. These\xa0 approaches are currently viable only for typologically similar languages and languages with\xa0 parallel / monolingual corpora available. On the other hand, their effectiveness beyond these\xa0 boundaries is doubtful or hard to assess due to the lack of linguistically diverse benchmarks\xa0 (especially for natural language generation and end-to-end evaluation). To overcome this\xa0 limitation, we draw parallels between components of the ToD pipeline and other NLP tasks,\xa0 which can inspire solutions for learning in low-resource scenarios. Finally, we list additional\xa0 challenges that multilinguality poses for related areas (such as speech, fluency in generated\xa0 text, and human-centred evaluation), and indicate future directions that hold promise to\xa0 further expand language coverage and dialogue capabilities of current ToD systems.\xa0', 'year': 2021, 'in_acl': False, 'citationCount': 28, 'section': None, 'subsection': None}]
|
2022.emnlp-tutorials.1
|
Meaning Representations for Natural Languages: Design, Models and Applications
|
This tutorial reviews the design of common meaning representations, SoTA models for predicting meaning representations, and the applications of meaning representations in a wide range of downstream NLP tasks and real-world applications. Reporting by a diverse team of NLP researchers from academia and industry with extensive experience in designing, building and using meaning representations, our tutorial has three components: (1) an introduction to common meaning representations, including basic concepts and design challenges; (2) a review of SoTA methods on building models for meaning representations; and (3) an overview of applications of meaning representations in downstream NLP tasks and real-world applications. We will also present qualitative comparisons of common meaning representations and a quantitative study on how their differences impact model performance. Finally, we will share best practices in choosing the right meaning representation for downstream tasks.
| 2,022
|
https://aclanthology.org/2022.emnlp-tutorials.1
|
EMNLP
|
[{'id': 2486369, 'paperId': '99d2dcdcf4cf05facaa101a48c7e31d140b4736d', 'title': 'The Proposition Bank: An Annotated Corpus of Semantic Roles', 'authors': [{'authorId': '145755155', 'name': 'Martha Palmer'}, {'authorId': '2489901', 'name': 'Paul R. Kingsbury'}, {'authorId': '1793218', 'name': 'D. Gildea'}], 'venue': 'International Conference on Computational Logic', 'abstract': 'The Proposition Bank project takes a practical approach to semantic representation, adding a layer of predicate-argument information, or semantic role labels, to the syntactic structures of the Penn Treebank. The resulting resource can be thought of as shallow, in that it does not represent coreference, quantification, and many other higher-order phenomena, but also broad, in that it covers every instance of every verb in the corpus and allows representative statistics to be calculated. We discuss the criteria used to define the sets of semantic roles used in the annotation process and to analyze the frequency of syntactic/semantic alternations in the corpus. We describe an automatic system for semantic role tagging trained on the corpus and discuss the effect on its performance of various types of information, including a comparison of full syntactic parsing with a flat representation and the contribution of the empty trace categories of the treebank.', 'year': 2005, 'in_acl': True, 'citationCount': 2586, 'section': 'PropBank', 'subsection': None}, {'id': 7771402, 'paperId': 'e72e5ee5de14fd463ab58ce830474157258e3578', 'title': 'Abstract Meaning Representation for Sembanking', 'authors': [{'authorId': '3460261', 'name': 'L. Banarescu'}, {'authorId': '3202888', 'name': 'C. Bonial'}, {'authorId': '2112618394', 'name': 'Shu Cai'}, {'authorId': '2065872210', 'name': 'Madalina Georgescu'}, {'authorId': '3168985', 'name': 'Kira Griffitt'}, {'authorId': '1791311', 'name': 'U. Hermjakob'}, {'authorId': '152971314', 'name': 'Kevin Knight'}, {'authorId': '1755162', 'name': 'Philipp Koehn'}, {'authorId': '145755155', 'name': 'Martha Palmer'}, {'authorId': '145254207', 'name': 'Nathan Schneider'}], 'venue': 'LAW@ACL', 'abstract': 'We describe Abstract Meaning Representation (AMR), a semantic representation language in which we are writing down the meanings of thousands of English sentences. We hope that a sembank of simple, whole-sentence semantic structures will spur new work in statistical natural language understanding and generation, like the Penn Treebank encouraged work on statistical parsing. This paper gives an overview of AMR and tools associated with it.', 'year': 2013, 'in_acl': True, 'citationCount': 1379, 'section': 'AMR', 'subsection': None}, {'id': 235563506, 'paperId': 'b1c1bfe5f7a5696909c0ee7de7fbb4092a04c907', 'title': 'Designing a Uniform Meaning Representation for Natural Language Processing', 'authors': [{'authorId': '116305713', 'name': 'J. V. Gysel'}, {'authorId': '51882643', 'name': 'Meagan Vigus'}, {'authorId': '41124366', 'name': 'Jayeol Chun'}, {'authorId': '2715566', 'name': 'Kenneth Lai'}, {'authorId': '51500425', 'name': 'Sarah Moeller'}, {'authorId': '40040342', 'name': 'Jiarui Yao'}, {'authorId': '1388957618', 'name': "Timothy J. O'Gorman"}, {'authorId': '2070241255', 'name': 'Andrew Cowell'}, {'authorId': '144456145', 'name': 'W. Bruce Croft'}, {'authorId': '1405994600', 'name': 'Chu-Ren Huang'}, {'authorId': '144002335', 'name': 'Jan Hajic'}, {'authorId': '1740728360', 'name': 'James H. Martin'}, {'authorId': '2949607', 'name': 'S. Oepen'}, {'authorId': '145755155', 'name': 'Martha Palmer'}, {'authorId': '1707726', 'name': 'J. Pustejovsky'}, {'authorId': '143886116', 'name': 'Rosa Vallejos'}, {'authorId': '1702849', 'name': 'Nianwen Xue'}], 'venue': 'KI - Künstliche Intelligenz', 'abstract': 'In this paper we present Uniform Meaning Representation (UMR), a meaning representation designed to annotate the semantic content of a text. UMR is primarily based on Abstract Meaning Representation (AMR), an annotation framework initially designed for English, but also draws from other meaning representations. UMR extends AMR to other languages, particularly morphologically complex, low-resource languages. UMR also adds features to AMR that are critical to semantic interpretation and enhances AMR by proposing a companion document-level representation that captures linguistic phenomena such as coreference as well as temporal and modal dependencies that potentially go beyond sentence boundaries.', 'year': 2021, 'in_acl': False, 'citationCount': 66, 'section': 'UMR', 'subsection': None}, {'id': 2440012, 'paperId': '1ae5c1646ea445a670fe6cc8bf72b589dd9f6e5c', 'title': 'Semantic Role Labeling Using Different Syntactic Views', 'authors': [{'authorId': '1735131', 'name': 'Sameer Pradhan'}, {'authorId': '1866226', 'name': 'Wayne H. Ward'}, {'authorId': '2483422', 'name': 'K. Hacioglu'}, {'authorId': '10796472', 'name': 'James H. Martin'}, {'authorId': '1746807', 'name': 'Dan Jurafsky'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Semantic role labeling is the process of annotating the predicate-argument structure in text with semantic labels. In this paper we present a state-of-the-art baseline semantic role labeling system based on Support Vector Machine classifiers. We show improvements on this system by: i) adding new features including features extracted from dependency parses, ii) performing feature selection and calibration and iii) combining parses obtained from semantic parsers trained using different syntactic views. Error analysis of the baseline system showed that approximately half of the argument identification errors resulted from parse errors in which there was no syntactic constituent that aligned with the correct argument. In order to address this problem, we combined semantic parses from a Minipar syntactic parse and from a chunked syntactic representation with our original baseline system which was based on Charniak parses. All of the reported techniques resulted in performance improvements.', 'year': 2005, 'in_acl': True, 'citationCount': 156, 'section': 'SRL models', 'subsection': None}, {'id': 33626727, 'paperId': 'a4dd3beea286a20c4e4f66436875932d597190bc', 'title': 'Deep Semantic Role Labeling: What Works and What’s Next', 'authors': [{'authorId': '2265599', 'name': 'Luheng He'}, {'authorId': '2544107', 'name': 'Kenton Lee'}, {'authorId': '35084211', 'name': 'M. Lewis'}, {'authorId': '1982950', 'name': 'Luke Zettlemoyer'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our 8-layer ensemble model achieves 83.2 F1 on theCoNLL 2005 test set and 83.4 F1 on CoNLL 2012, roughly a 10% relative error reduction over the previous state of the art. Extensive empirical analysis of these gains show that (1) deep models excel at recovering long-distance dependencies but can still make surprisingly obvious errors, and (2) that there is still room for syntactic parsers to improve these results.', 'year': 2017, 'in_acl': True, 'citationCount': 429, 'section': 'SRL models', 'subsection': None}, {'id': 5000956, 'paperId': '33a9d1a702eb75da709d26c44aaeb7c2015c870b', 'title': 'A Discriminative Graph-Based Parser for the Abstract Meaning Representation', 'authors': [{'authorId': '144683841', 'name': 'Jeffrey Flanigan'}, {'authorId': '38094552', 'name': 'Sam Thomson'}, {'authorId': '143712374', 'name': 'J. Carbonell'}, {'authorId': '1745899', 'name': 'Chris Dyer'}, {'authorId': '144365875', 'name': 'Noah A. Smith'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Abstract Meaning Representation (AMR) is a semantic formalism for which a grow- ing set of annotated examples is avail- able. We introduce the first approach to parse sentences into this representa- tion, providing a strong baseline for fu- ture improvement. The method is based on a novel algorithm for finding a maxi- mum spanning, connected subgraph, em- bedded within a Lagrangian relaxation of an optimization problem that imposes lin- guistically inspired constraints. Our ap- proach is described in the general frame- work of structured prediction, allowing fu- ture incorporation of additional features and constraints, and may extend to other formalisms as well. Our open-source sys- tem, JAMR, is available at: http://github.com/jflanigan/jamr', 'year': 2014, 'in_acl': True, 'citationCount': 321, 'section': 'AMR models', 'subsection': None}, {'id': 46889674, 'paperId': '63ef50238ba765edf47c86e3e3fe9f608d8ea00b', 'title': 'AMR Parsing as Graph Prediction with Latent Alignment', 'authors': [{'authorId': '2753561', 'name': 'Chunchuan Lyu'}, {'authorId': '144889265', 'name': 'Ivan Titov'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Abstract meaning representations (AMRs) are broad-coverage sentence-level semantic representations. AMRs represent sentences as rooted labeled directed acyclic graphs. AMR parsing is challenging partly due to the lack of annotated alignments between nodes in the graphs and words in the corresponding sentences. We introduce a neural parser which treats alignments as latent variables within a joint probabilistic model of concepts, relations and alignments. As exact inference requires marginalizing over alignments and is infeasible, we use the variational autoencoding framework and a continuous relaxation of the discrete alignments. We show that joint modeling is preferable to using a pipeline of align and parse. The parser achieves the best reported results on the standard benchmark (74.4% on LDC2016E25).', 'year': 2018, 'in_acl': True, 'citationCount': 126, 'section': 'AMR models', 'subsection': None}, {'id': 222133032, 'paperId': '12b28c2d1b58234daa0f06ab43353c401eda1958', 'title': 'Improving AMR Parsing with Sequence-to-Sequence Pre-training', 'authors': [{'authorId': '1510477221', 'name': 'Dong Xu'}, {'authorId': '2108988344', 'name': 'Junhui Li'}, {'authorId': '145490067', 'name': 'Muhua Zhu'}, {'authorId': '2156053331', 'name': 'Min Zhang'}, {'authorId': '143740945', 'name': 'Guodong Zhou'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'In the literature, the research on abstract meaning representation (AMR) parsing is much restricted by the size of human-curated dataset which is critical to build an AMR parser with good performance. To alleviate such data size restriction, pre-trained models have been drawing more and more attention in AMR parsing. However, previous pre-trained models, like BERT, are implemented for general purpose which may not work as expected for the specific task of AMR parsing. In this paper, we focus on sequence-to-sequence (seq2seq) AMR parsing and propose a seq2seq pre-training approach to build pre-trained models in both single and joint way on three relevant tasks, i.e., machine translation, syntactic parsing, and AMR parsing itself. Moreover, we extend the vanilla fine-tuning method to a multi-task learning fine-tuning method that optimizes for the performance of AMR parsing while endeavors to preserve the response of pre-trained models. Extensive experimental results on two English benchmark datasets show that both the single and joint pre-trained models significantly improve the performance (e.g., from 71.5 to 80.2 on AMR 2.0), which reaches the state of the art. The result is very encouraging since we achieve this with seq2seq models rather than complex models. We make our code and model available at this https URL.', 'year': 2020, 'in_acl': True, 'citationCount': 68, 'section': 'AMR models', 'subsection': None}]
|
2022.emnlp-tutorials.4
|
CausalNLP Tutorial: An Introduction to Causality for Natural Language Processing
|
Causal inference is becoming an increasingly important topic in deep learning, with the potential to help with critical deep learning problems such as model robustness, interpretability, and fairness. In addition, causality is naturally widely used in various disciplines of science, to discover causal relationships among variables and estimate causal effects of interest. In this tutorial, we introduce the fundamentals of causal discovery and causal effect estimation to the natural language processing (NLP) audience, provide an overview of causal perspectives to NLP problems, and aim to inspire novel approaches to NLP further. This tutorial is inclusive to a variety of audiences and is expected to facilitate the community’s developments in formulating and addressing new, important NLP problems in light of emerging causal principles and methodologies.
| 2,022
|
https://aclanthology.org/2022.emnlp-tutorials.4
|
EMNLP
|
[{'id': 259253713, 'paperId': '1e3ee4a75451ef74febb720a7bdda561f16b964a', 'title': 'A Survey of Learning Causality with Data: Problems and Methods', 'authors': [{'authorId': '2773849', 'name': 'Ruocheng Guo'}, {'authorId': '2140175677', 'name': 'Lu Cheng'}, {'authorId': '2040455', 'name': 'Jundong Li'}, {'authorId': '144974208', 'name': 'P. R. Hahn'}, {'authorId': '2146398099', 'name': 'Huan Liu'}], 'venue': 'arXiv.org', 'abstract': 'The era of big data provides researchers with convenient access to copious data. However, we often have little knowledge of such data. The increasing prevalence of massive data is challenging the traditional methods of learning causality because they were developed for the cases with limited amount of data and strong prior causal knowledge. This survey aims to close the gap between big data and learning causality with a comprehensive and structured review of both traditional and frontier methods followed by a discussion about some open problems of learning causality. We begin with preliminaries of learning causality. Then we categorize and revisit methods of learning causality for the typical problems and data types. After that, we discuss the connections between learning causality and machine learning. At the end, some open problems are presented to show the great potential of learning causality with data', 'year': 2018, 'in_acl': False, 'citationCount': 223, 'section': None, 'subsection': None}, {'id': 261325982, 'paperId': '3803ea42e1fc773db3b1d0fa05f41b5ebf0a61d1', 'title': 'Toward Causal Representation Learning', 'authors': [{'authorId': '2231240655', 'name': 'Bernhard Schölkopf'}, {'authorId': '9557137', 'name': 'Francesco Locatello'}, {'authorId': '153125952', 'name': 'Stefan Bauer'}, {'authorId': '145604319', 'name': 'Nan Rosemary Ke'}, {'authorId': '2583391', 'name': 'Nal Kalchbrenner'}, {'authorId': '1996705', 'name': 'Anirudh Goyal'}, {'authorId': '1865800402', 'name': 'Y. Bengio'}], 'venue': 'Proceedings of the IEEE', 'abstract': 'The two fields of machine learning and graphical causality arose and are developed separately. However, there is, now, cross-pollination and increasing interest in both fields to benefit from the advances of the other. In this article, we review fundamental concepts of causal inference and relate them to crucial open problems of machine learning, including transfer and generalization, thereby assaying how causality can contribute to modern machine learning research. This also applies in the opposite direction: we note that most work in causality starts from the premise that the causal variables are given. A central problem for AI and causality is, thus, causal representation learning, that is, the discovery of high-level causal variables from low-level observations. Finally, we delineate some implications of causality for machine learning and propose key research areas at the intersection of both communities.', 'year': 2021, 'in_acl': False, 'citationCount': 729, 'section': None, 'subsection': None}, {'id': 237386009, 'paperId': '130d432ccbc836380a212bea618f84ff094a6a52', 'title': 'Causal Inference in Natural Language Processing: Estimation, Prediction, Interpretation and Beyond', 'authors': [{'authorId': '46609506', 'name': 'Amir Feder'}, {'authorId': '145137850', 'name': 'Katherine A. Keith'}, {'authorId': '2125374460', 'name': 'Emaad A. Manzoor'}, {'authorId': '2253657208', 'name': 'Reid Pryzant'}, {'authorId': '153485411', 'name': 'Dhanya Sridhar'}, {'authorId': '1411379613', 'name': 'Zach Wood-Doughty'}, {'authorId': '144154709', 'name': 'Jacob Eisenstein'}, {'authorId': '2361828', 'name': 'Justin Grimmer'}, {'authorId': '1762757', 'name': 'Roi Reichart'}, {'authorId': '2464550', 'name': 'Margaret E. Roberts'}, {'authorId': '28924497', 'name': 'Brandon M Stewart'}, {'authorId': '2974320', 'name': 'Victor Veitch'}, {'authorId': '2143919864', 'name': 'Diyi Yang'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'Abstract A fundamental goal of scientific research is to learn about causal relationships. However, despite its critical role in the life and social sciences, causality has not had the same importance in Natural Language Processing (NLP), which has traditionally placed more emphasis on predictive tasks. This distinction is beginning to fade, with an emerging area of interdisciplinary research at the convergence of causal inference and language processing. Still, research on causality in NLP remains scattered across domains without unified definitions, benchmark datasets and clear articulations of the challenges and opportunities in the application of causal inference to the textual domain, with its unique properties. In this survey, we consolidate research across academic areas and situate it in the broader NLP landscape. We introduce the statistical challenge of estimating causal effects with text, encompassing settings where text is used as an outcome, treatment, or to address confounding. In addition, we explore potential uses of causal inference to improve the robustness, fairness, and interpretability of NLP models. We thus provide a unified overview of causal inference for the NLP community.1', 'year': 2021, 'in_acl': False, 'citationCount': 210, 'section': None, 'subsection': None}]
|
2022.emnlp-tutorials.6
|
Non-Autoregressive Models for Fast Sequence Generation
|
Autoregressive (AR) models have achieved great success in various sequence generation tasks. However, AR models can only generate target sequence word-by-word due to the AR mechanism and hence suffer from slow inference. Recently, non-autoregressive (NAR) models, which generate all the tokens in parallel by removing the sequential dependencies within the target sequence, have received increasing attention in sequence generation tasks such as neural machine translation (NMT), automatic speech recognition (ASR), and text to speech (TTS). In this tutorial, we will provide a comprehensive introduction to non-autoregressive sequence generation.
| 2,022
|
https://aclanthology.org/2022.emnlp-tutorials.6
|
EMNLP
|
[{'id': 13756489, 'paperId': '204e3073870fae3d05bcbc2f6a8e263d9b72e776', 'title': 'Attention is All you Need', 'authors': [{'authorId': '40348417', 'name': 'Ashish Vaswani'}, {'authorId': '1846258', 'name': 'Noam M. Shazeer'}, {'authorId': '3877127', 'name': 'Niki Parmar'}, {'authorId': '39328010', 'name': 'Jakob Uszkoreit'}, {'authorId': '145024664', 'name': 'Llion Jones'}, {'authorId': '19177000', 'name': 'Aidan N. Gomez'}, {'authorId': '40527594', 'name': 'Lukasz Kaiser'}, {'authorId': '3443442', 'name': 'Illia Polosukhin'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.', 'year': 2017, 'in_acl': False, 'citationCount': 109681, 'section': None, 'subsection': None}, {'id': 3480671, 'paperId': '15e81c8d1c21f9e928c72721ac46d458f3341454', 'title': 'Non-Autoregressive Neural Machine Translation', 'authors': [{'authorId': '3016273', 'name': 'Jiatao Gu'}, {'authorId': '40518045', 'name': 'James Bradbury'}, {'authorId': '2228109', 'name': 'Caiming Xiong'}, {'authorId': '2052674293', 'name': 'V. Li'}, {'authorId': '2166511', 'name': 'R. Socher'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Existing approaches to neural machine translation condition each output word on previously generated outputs. We introduce a model that avoids this autoregressive property and produces its outputs in parallel, allowing an order of magnitude lower latency during inference. Through knowledge distillation, the use of input token fertilities as a latent variable, and policy gradient fine-tuning, we achieve this at a cost of as little as 2.0 BLEU points relative to the autoregressive Transformer network used as a teacher. We demonstrate substantial cumulative improvements associated with each of the three aspects of our training strategy, and validate our approach on IWSLT 2016 English-German and two WMT language pairs. By sampling fertilities in parallel at inference time, our non-autoregressive model achieves near-state-of-the-art performance of 29.8 BLEU on WMT 2016 English-Romanian.', 'year': 2017, 'in_acl': False, 'citationCount': 763, 'section': None, 'subsection': None}, {'id': 8451212, 'paperId': '57a10537978600fd33dcdd48922c791609a4851a', 'title': 'Sequence-Level Knowledge Distillation', 'authors': [{'authorId': '38367242', 'name': 'Yoon Kim'}, {'authorId': '2531268', 'name': 'Alexander M. Rush'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However to reach competitive performance, NMT models need to be exceedingly large. In this paper we consider applying knowledge distillation approaches (Bucila et al., 2006; Hinton et al., 2015) that have proven successful for reducing the size of neural models in other domains to the problem of NMT. We demonstrate that standard knowledge distillation applied to word-level prediction can be effective for NMT, and also introduce two novel sequence-level versions of knowledge distillation that further improve performance, and somewhat surprisingly, seem to eliminate the need for beam search (even when applied on the original teacher model). Our best student model runs 10 times faster than its state-of-the-art teacher with little loss in performance. It is also significantly better than a baseline model trained without knowledge distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight pruning on top of knowledge distillation results in a student model that has 13 times fewer parameters than the original teacher model, with a decrease of 0.4 BLEU.', 'year': 2016, 'in_acl': True, 'citationCount': 1024, 'section': None, 'subsection': None}, {'id': 235435717, 'paperId': '128b6540b23cb8316edc496a2c532ea194dbf10e', 'title': 'Sequence-Level Training for Non-Autoregressive Neural Machine Translation', 'authors': [{'authorId': '81050636', 'name': 'Chenze Shao'}, {'authorId': '49771779', 'name': 'Yang Feng'}, {'authorId': '2108970018', 'name': 'Jinchao Zhang'}, {'authorId': '33427918', 'name': 'Fandong Meng'}, {'authorId': '2108485135', 'name': 'Jie Zhou'}], 'venue': 'International Conference on Computational Logic', 'abstract': 'Abstract In recent years, Neural Machine Translation (NMT) has achieved notable results in various translation tasks. However, the word-by-word generation manner determined by the autoregressive mechanism leads to high translation latency of the NMT and restricts its low-latency applications. Non-Autoregressive Neural Machine Translation (NAT) removes the autoregressive mechanism and achieves significant decoding speedup by generating target words independently and simultaneously. Nevertheless, NAT still takes the word-level cross-entropy loss as the training objective, which is not optimal because the output of NAT cannot be properly evaluated due to the multimodality problem. In this article, we propose using sequence-level training objectives to train NAT models, which evaluate the NAT outputs as a whole and correlates well with the real translation quality. First, we propose training NAT models to optimize sequence-level evaluation metrics (e.g., BLEU) based on several novel reinforcement algorithms customized for NAT, which outperform the conventional method by reducing the variance of gradient estimation. Second, we introduce a novel training objective for NAT models, which aims to minimize the Bag-of-N-grams (BoN) difference between the model output and the reference sentence. The BoN training objective is differentiable and can be calculated efficiently without doing any approximations. Finally, we apply a three-stage training strategy to combine these two methods to train the NAT model. We validate our approach on four translation tasks (WMT14 En↔De, WMT16 En↔Ro), which shows that our approach largely outperforms NAT baselines and achieves remarkable performance on all translation tasks. The source code is available at https://github.com/ictnlp/Seq-NAT.', 'year': 2021, 'in_acl': True, 'citationCount': 25, 'section': None, 'subsection': None}, {'id': 201103818, 'paperId': 'd6cfb4e345b1031040ccd3683730854c560a2b0d', 'title': 'Latent-Variable Non-Autoregressive Neural Machine Translation with Deterministic Inference using a Delta Posterior', 'authors': [{'authorId': '7412686', 'name': 'Raphael Shu'}, {'authorId': '100811091', 'name': 'Jason Lee'}, {'authorId': '48731103', 'name': 'Hideki Nakayama'}, {'authorId': '1979489', 'name': 'Kyunghyun Cho'}], 'venue': 'AAAI Conference on Artificial Intelligence', 'abstract': "Although neural machine translation models reached high translation quality, the autoregressive nature makes inference difficult to parallelize and leads to high translation latency. Inspired by recent refinement-based approaches, we propose LaNMT, a latent-variable non-autoregressive model with continuous latent variables and deterministic inference procedure. In contrast to existing approaches, we use a deterministic inference algorithm to find the target sequence that maximizes the lowerbound to the log-probability. During inference, the length of translation automatically adapts itself. Our experiments show that the lowerbound can be greatly increased by running the inference algorithm, resulting in significantly improved translation quality. Our proposed model closes the performance gap between non-autoregressive and autoregressive approaches on ASPEC Ja-En dataset with 8.6x faster decoding. On WMT'14 En-De dataset, our model narrows the gap with autoregressive baseline to 2.0 BLEU points with 12.5x speedup. By decoding multiple initial latent variables in parallel and rescore using a teacher model, the proposed model further brings the gap down to 1.0 BLEU point on WMT'14 En-De task with 6.8x speedup.", 'year': 2019, 'in_acl': False, 'citationCount': 112, 'section': None, 'subsection': None}, {'id': 202538740, 'paperId': '5efadc9019ce3378a0eb6c8f939cdde6c8918b1e', 'title': 'Mask-Predict: Parallel Decoding of Conditional Masked Language Models', 'authors': [{'authorId': '2320509', 'name': 'Marjan Ghazvininejad'}, {'authorId': '39455775', 'name': 'Omer Levy'}, {'authorId': '11323179', 'name': 'Yinhan Liu'}, {'authorId': '1982950', 'name': 'Luke Zettlemoyer'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Most machine translation systems generate text autoregressively from left to right. We, instead, use a masked language modeling objective to train a model to predict any subset of the target words, conditioned on both the input text and a partially masked target translation. This approach allows for efficient iterative decoding, where we first predict all of the target words non-autoregressively, and then repeatedly mask out and regenerate the subset of words that the model is least confident about. By applying this strategy for a constant number of iterations, our model improves state-of-the-art performance levels for non-autoregressive and parallel decoding translation models by over 4 BLEU on average. It is also able to reach within about 1 BLEU point of a typical left-to-right transformer model, while decoding significantly faster.', 'year': 2019, 'in_acl': True, 'citationCount': 539, 'section': None, 'subsection': None}, {'id': 9901844, 'paperId': '261a056f8b21918e8616a429b2df6e1d5d33be41', 'title': 'Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks', 'authors': [{'authorId': '2251771699', 'name': 'Alex Graves'}, {'authorId': '2313915717', 'name': 'Santiago Fern´andez'}, {'authorId': '145842938', 'name': 'Faustino J. Gomez'}, {'authorId': '2286647360', 'name': 'J. Schmidhuber'}], 'venue': '', 'abstract': 'Many real-world sequence learning tasks require the prediction of sequences of labels from noisy, unsegmented input data. In speech recognition, for example, an acoustic signal is transcribed into words or sub-word units. Recurrent neural networks (RNNs) are powerful sequence learners that would seem well suited to such tasks. However, because they require pre-segmented training data, and post-processing to transform their outputs into label sequences, their applicability has so far been limited. This paper presents a novel method for training RNNs to label un-segmented sequences directly, thereby solving both problems. An experiment on the TIMIT speech corpus demonstrates its advantages over both a baseline HMM and a hybrid HMM-RNN.', 'year': 2006, 'in_acl': False, 'citationCount': 4441, 'section': None, 'subsection': None}, {'id': 216056470, 'paperId': 'bed87e8fb3e7e9bc87e1c2ee459ae405a35d3267', 'title': 'A Study of Non-autoregressive Model for Sequence Generation', 'authors': [{'authorId': '1500435161', 'name': 'Yi Ren'}, {'authorId': '48211720', 'name': 'Jinglin Liu'}, {'authorId': '48391466', 'name': 'Xu Tan'}, {'authorId': '47601191', 'name': 'Sheng Zhao'}, {'authorId': '47122432', 'name': 'Zhou Zhao'}, {'authorId': '2110264337', 'name': 'Tie-Yan Liu'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Non-autoregressive (NAR) models generate all the tokens of a sequence in parallel, resulting in faster generation speed compared to their autoregressive (AR) counterparts but at the cost of lower accuracy. Different techniques including knowledge distillation and source-target alignment have been proposed to bridge the gap between AR and NAR models in various tasks such as neural machine translation (NMT), automatic speech recognition (ASR), and text to speech (TTS). With the help of those techniques, NAR models can catch up with the accuracy of AR models in some tasks but not in some others. In this work, we conduct a study to understand the difficulty of NAR sequence generation and try to answer: (1) Why NAR models can catch up with AR models in some tasks but not all? (2) Why techniques like knowledge distillation and source-target alignment can help NAR models. Since the main difference between AR and NAR models is that NAR models do not use dependency among target tokens while AR models do, intuitively the difficulty of NAR sequence generation heavily depends on the strongness of dependency among target tokens. To quantify such dependency, we propose an analysis model called CoMMA to characterize the difficulty of different NAR sequence generation tasks. We have several interesting findings: 1) Among the NMT, ASR and TTS tasks, ASR has the most target-token dependency while TTS has the least. 2) Knowledge distillation reduces the target-token dependency in target sequence and thus improves the accuracy of NAR models. 3) Source-target alignment constraint encourages dependency of a target token on source tokens and thus eases the training of NAR models.', 'year': 2020, 'in_acl': True, 'citationCount': 59, 'section': None, 'subsection': None}]
|
2022.naacl-tutorials.1
|
Text Generation with Text-Editing Models
|
Text-editing models have recently become a prominent alternative to seq2seq models for monolingual text-generation tasks such as grammatical error correction, text simplification, and style transfer. These tasks share a common trait – they exhibit a large amount of textual overlap between the source and target texts. Text-editing models take advantage of this observation and learn to generate the output by predicting edit operations applied to the source sequence. In contrast, seq2seq models generate outputs word-by-word from scratch thus making them slow at inference time. Text-editing models provide several benefits over seq2seq models including faster inference speed, higher sample efficiency, and better control and interpretability of the outputs. This tutorial provides a comprehensive overview of the text-edit based models and current state-of-the-art approaches analyzing their pros and cons. We discuss challenges related to deployment and how these models help to mitigate hallucination and bias, both pressing challenges in the field of text generation.
| 2,022
|
https://aclanthology.org/2022.naacl-tutorials.1
|
NAACL
|
[{'id': 13756489, 'paperId': '204e3073870fae3d05bcbc2f6a8e263d9b72e776', 'title': 'Attention is All you Need', 'authors': [{'authorId': '40348417', 'name': 'Ashish Vaswani'}, {'authorId': '1846258', 'name': 'Noam M. Shazeer'}, {'authorId': '3877127', 'name': 'Niki Parmar'}, {'authorId': '39328010', 'name': 'Jakob Uszkoreit'}, {'authorId': '145024664', 'name': 'Llion Jones'}, {'authorId': '19177000', 'name': 'Aidan N. Gomez'}, {'authorId': '40527594', 'name': 'Lukasz Kaiser'}, {'authorId': '3443442', 'name': 'Illia Polosukhin'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.', 'year': 2017, 'in_acl': False, 'citationCount': 109681, 'section': 'background', 'subsection': None}, {'id': 52967399, 'paperId': 'df2b0e26d0599ce3e70df8a9da02e51594e0e992', 'title': 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding', 'authors': [{'authorId': '39172707', 'name': 'Jacob Devlin'}, {'authorId': '1744179', 'name': 'Ming-Wei Chang'}, {'authorId': '2544107', 'name': 'Kenton Lee'}, {'authorId': '3259253', 'name': 'Kristina Toutanova'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).', 'year': 2019, 'in_acl': True, 'citationCount': 84138, 'section': 'background', 'subsection': None}, {'id': 249017760, 'paperId': '8ba66cb690ff3a37c63ff0f67b595f03dd78dc75', 'title': 'EdiT5: Semi-Autoregressive Text-Editing with T5 Warm-Start', 'authors': [{'authorId': '1931758', 'name': 'Jonathan Mallinson'}, {'authorId': '50290651', 'name': 'Jakub Adamek'}, {'authorId': '3288074', 'name': 'Eric Malmi'}, {'authorId': '3091861', 'name': 'Aliaksei Severyn'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'We present EdiT5 - a novel semi-autoregressive text-editing model designed to combine the strengths of non-autoregressive text-editing and autoregressive decoding. EdiT5 is faster during inference than conventional sequence-to-sequence (seq2seq) models, while being capable of modelling flexible input-output transformations. This is achieved by decomposing the generation process into three sub-tasks: (1) tagging to decide on the subset of input tokens to be preserved in the output, (2) re-ordering to define their order in the output text, and (3) insertion to infill the missing tokens that are not present in the input. The tagging and re-ordering steps, which are responsible for generating the largest portion of the output, are non-autoregressive, while the insertion step uses an autoregressive decoder. Depending on the task, EdiT5 on average requires significantly fewer autoregressive steps, demonstrating speedups of up to 25x when compared to seq2seq models. Quality-wise, EdiT5 is initialized with a pre-trained T5 checkpoint yielding comparable performance to T5 in high-resource settings when evaluated on three NLG tasks: Sentence Fusion, Grammatical Error Correction, and Decontextualization while clearly outperforming T5 in low-resource settings.', 'year': 2022, 'in_acl': False, 'citationCount': 40, 'section': 'text-editing works', 'subsection': None}, {'id': 195068920, 'paperId': '340b59e6ee93d30c055b5e89a7cfbc88874c9958', 'title': 'EditNTS: An Neural Programmer-Interpreter Model for Sentence Simplification through Explicit Editing', 'authors': [{'authorId': '49265991', 'name': 'Yue Dong'}, {'authorId': '2118274302', 'name': 'Zichao Li'}, {'authorId': '1924511', 'name': 'Mehdi Rezagholizadeh'}, {'authorId': '3159752', 'name': 'J. Cheung'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We present the first sentence simplification model that learns explicit edit operations (ADD, DELETE, and KEEP) via a neural programmer-interpreter approach. Most current neural sentence simplification systems are variants of sequence-to-sequence models adopted from machine translation. These methods learn to simplify sentences as a byproduct of the fact that they are trained on complex-simple sentence pairs. By contrast, our neural programmer-interpreter is directly trained to predict explicit edit operations on targeted parts of the input sentence, resembling the way that humans perform simplification and revision. Our model outperforms previous state-of-the-art neural sentence simplification models (without external knowledge) by large margins on three benchmark text simplification corpora in terms of SARI (+0.95 WikiLarge, +1.89 WikiSmall, +1.41 Newsela), and is judged by humans to produce overall better and simpler output sentences.', 'year': 2019, 'in_acl': True, 'citationCount': 154, 'section': 'text-editing works', 'subsection': None}, {'id': 214623124, 'paperId': '8c881df7a42e6798bf69b6ecb26b9d0792a378e7', 'title': 'FELIX: Flexible Text Editing Through Tagging and Insertion', 'authors': [{'authorId': '1931758', 'name': 'Jonathan Mallinson'}, {'authorId': '3091861', 'name': 'Aliaksei Severyn'}, {'authorId': '3288074', 'name': 'Eric Malmi'}, {'authorId': '143944406', 'name': 'Guillermo Garrido'}], 'venue': 'Findings', 'abstract': 'We present FELIX – a flexible text-editing approach for generation, designed to derive maximum benefit from the ideas of decoding with bi-directional contexts and self-supervised pretraining. In contrast to conventional sequenceto-sequence (seq2seq) models, FELIX is efficient in low-resource settings and fast at inference time, while being capable of modeling flexible input-output transformations. We achieve this by decomposing the text-editing task into two sub-tasks: tagging to decide on the subset of input tokens and their order in the output text and insertion to in-fill the missing tokens in the output not present in the input. The tagging model employs a novel Pointer mechanism, while the insertion model is based on a Masked Language Model (MLM). Both of these models are chosen to be non-autoregressive to guarantee faster inference. FELIX performs favourably when compared to recent text-editing methods and strong seq2seq baselines when evaluated on four NLG tasks: Sentence Fusion, Machine Translation Automatic Post-Editing, Summarization, and Text Simplification', 'year': 2020, 'in_acl': True, 'citationCount': 71, 'section': 'text-editing works', 'subsection': None}, {'id': 218889746, 'paperId': '9f6b659033da6fff11da1af64fea7c0d728ab433', 'title': 'GECToR – Grammatical Error Correction: Tag, Not Rewrite', 'authors': [{'authorId': '1388819710', 'name': 'Kostiantyn Omelianchuk'}, {'authorId': '1720714633', 'name': 'Vitaliy Atrasevych'}, {'authorId': '2064101037', 'name': 'Artem Chernodub'}, {'authorId': '2291241972', 'name': 'Oleksandr Skurzhanskyi'}], 'venue': 'Workshop on Innovative Use of NLP for Building Educational Applications', 'abstract': 'In this paper, we present a simple and efficient GEC sequence tagger using a Transformer encoder. Our system is pre-trained on synthetic data and then fine-tuned in two stages: first on errorful corpora, and second on a combination of errorful and error-free parallel corpora. We design custom token-level transformations to map input tokens to target corrections. Our best single-model/ensemble GEC tagger achieves an F_0.5 of 65.3/66.5 on CONLL-2014 (test) and F_0.5 of 72.4/73.6 on BEA-2019 (test). Its inference speed is up to 10 times as fast as a Transformer-based seq2seq GEC system.', 'year': 2020, 'in_acl': True, 'citationCount': 285, 'section': 'text-editing works', 'subsection': None}, {'id': 247246721, 'paperId': 'f327a27515c72d0c7c92e8d2e83475477e68f877', 'title': 'Hierarchical Context Tagging for Utterance Rewriting', 'authors': [{'authorId': '2152165376', 'name': 'Lisa Jin'}, {'authorId': '1748796', 'name': 'Linfeng Song'}, {'authorId': '50496698', 'name': 'Lifeng Jin'}, {'authorId': '2111505433', 'name': 'Dong Yu'}, {'authorId': '1793218', 'name': 'D. Gildea'}], 'venue': 'AAAI Conference on Artificial Intelligence', 'abstract': 'Utterance rewriting aims to recover coreferences and omitted information from the latest turn of a multi-turn dialogue. Recently, methods that tag rather than linearly generate sequences have proven stronger in both in- and out-of-domain rewriting settings. This is due to a tagger\'s smaller search space as it can only copy tokens from the dialogue context. However, these methods may suffer from low coverage when phrases that must be added to a source utterance cannot be covered by a single context span. This can occur in languages like English that introduce tokens such as prepositions into the rewrite for grammaticality. We propose a hierarchical context tagger (HCT) that mitigates this issue by predicting slotted rules (e.g., "besides _") whose slots are later filled with context spans. HCT (i) tags the source string with token-level edit actions and slotted rules and (ii) fills in the resulting rule slots with spans from the dialogue context. This rule tagging allows HCT to add out-of-context tokens and multiple spans at once; we further cluster the rules to truncate the long tail of the rule distribution. Experiments on several benchmarks show that HCT can outperform state-of-the-art rewriting systems by ~2 BLEU points.', 'year': 2022, 'in_acl': False, 'citationCount': 10, 'section': 'text-editing works', 'subsection': None}, {'id': 202541578, 'paperId': 'a3707f5ce5cde48960475f6c6013f10d2e851f15', 'title': 'Encode, Tag, Realize: High-Precision Text Editing', 'authors': [{'authorId': '3288074', 'name': 'Eric Malmi'}, {'authorId': '32632038', 'name': 'Sebastian Krause'}, {'authorId': '2204815', 'name': 'S. Rothe'}, {'authorId': '1789341', 'name': 'Daniil Mirylenka'}, {'authorId': '3091861', 'name': 'Aliaksei Severyn'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'We propose LaserTagger - a sequence tagging approach that casts text generation as a text editing task. Target texts are reconstructed from the inputs using three main edit operations: keeping a token, deleting it, and adding a phrase before the token. To predict the edit operations, we propose a novel model, which combines a BERT encoder with an autoregressive Transformer decoder. This approach is evaluated on English text on four tasks: sentence fusion, sentence splitting, abstractive summarization, and grammar correction. LaserTagger achieves new state-of-the-art results on three of these tasks, performs comparably to a set of strong seq2seq baselines with a large number of training examples, and outperforms them when the number of examples is limited. Furthermore, we show that at inference time tagging can be more than two orders of magnitude faster than comparable seq2seq models, making it more attractive for running in a live environment.', 'year': 2019, 'in_acl': True, 'citationCount': 163, 'section': 'text-editing works', 'subsection': None}, {'id': 166227937, 'paperId': 'f87de21b46683b5743c4d82af3c9cb8bbcd26f21', 'title': 'Levenshtein Transformer', 'authors': [{'authorId': '3016273', 'name': 'Jiatao Gu'}, {'authorId': '20132361', 'name': 'Changhan Wang'}, {'authorId': '2109914894', 'name': 'Jake Zhao'}], 'venue': 'Neural Information Processing Systems', 'abstract': "Modern neural sequence generation models are built to either generate tokens step-by-step from scratch or (iteratively) modify a sequence of tokens bounded by a fixed length. In this work, we develop Levenshtein Transformer, a new partially autoregressive model devised for more flexible and amenable sequence generation. Unlike previous approaches, the atomic operations of our model are insertion and deletion. The combination of them facilitates not only generation but also sequence refinement allowing dynamic length changes. We also propose a set of new training techniques dedicated at them, effectively exploiting one as the other's learning signal thanks to their complementary nature. Experiments applying the proposed model achieve comparable performance but much-improved efficiency on both generation (e.g. machine translation, text summarization) and refinement tasks (e.g. automatic post-editing). We further confirm the flexibility of our model by showing a Levenshtein Transformer trained by machine translation can straightforwardly be used for automatic post-editing.", 'year': 2019, 'in_acl': False, 'citationCount': 345, 'section': 'text-editing works', 'subsection': None}, {'id': 234762899, 'paperId': 'fd90d2d2853c5b550eab7db203db9f4e7e5a2aaa', 'title': 'LEWIS: Levenshtein Editing for Unsupervised Text Style Transfer', 'authors': [{'authorId': '1557386977', 'name': 'Machel Reid'}, {'authorId': '3428769', 'name': 'Victor Zhong'}], 'venue': 'Findings', 'abstract': 'Many types of text style transfer can be achieved with only small, precise edits (e.g. sentiment transfer from I had a terrible time... to I had a great time...). We propose a coarse-to-fine editor for style transfer that transforms text using Levenshtein edit operations (e.g. insert, replace, delete). Unlike prior single-span edit methods, our method concurrently edits multiple spans in the source text. To train without parallel style text pairs (e.g. pairs of +/- sentiment statements), we propose an unsupervised data synthesis procedure. We first convert text to style-agnostic templates using style classifier attention (e.g. I had a SLOT time...), then fill in slots in these templates using fine-tuned pretrained language models. Our method outperforms existing generation and editing style transfer methods on sentiment (Yelp, Amazon) and politeness (Polite) transfer. In particular, multi-span editing achieves higher performance and more diverse output than single-span editing. Moreover, compared to previous methods on unsupervised data synthesis, our method results in higher quality parallel style pairs and improves model performance.', 'year': 2021, 'in_acl': True, 'citationCount': 68, 'section': 'text-editing works', 'subsection': None}, {'id': 222125019, 'paperId': '9616236d5b19006d30cd512001bb217d88c1f830', 'title': 'Unsupervised Text Style Transfer with Masked Language Models', 'authors': [{'authorId': '3288074', 'name': 'Eric Malmi'}, {'authorId': '3091861', 'name': 'Aliaksei Severyn'}, {'authorId': '2204815', 'name': 'S. Rothe'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': "We propose Masker, an unsupervised text-editing method for style transfer. To tackle cases when no parallel source-target pairs are available, we train masked language models (MLMs) for both the source and the target domain. Then we find the text spans where the two models disagree the most in terms of likelihood. This allows us to identify the source tokens to delete to transform the source text to match the style of the target domain. The deleted tokens are replaced with the target MLM, and by using a padded MLM variant, we avoid having to predetermine the number of inserted tokens. Our experiments on sentence fusion and sentiment transfer demonstrate that Masker performs competitively in a fully unsupervised setting. Moreover, in low-resource settings, it improves supervised methods' accuracy by over 10 percentage points when pre-training them on silver training data generated by Masker.", 'year': 2020, 'in_acl': True, 'citationCount': 11, 'section': 'text-editing works', 'subsection': None}, {'id': 202765548, 'paperId': '9da95e99afd4ea899bd1fb40dd350e0be0a12a84', 'title': 'Parallel Iterative Edit Models for Local Sequence Transduction', 'authors': [{'authorId': '67126112', 'name': 'Abhijeet Awasthi'}, {'authorId': '1770124', 'name': 'Sunita Sarawagi'}, {'authorId': '1381285886', 'name': 'Rasna Goyal'}, {'authorId': '3168730', 'name': 'Sabyasachi Ghosh'}, {'authorId': '2748067', 'name': 'Vihari Piratla'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'We present a Parallel Iterative Edit (PIE) model for the problem of local sequence transduction arising in tasks like Grammatical error correction (GEC). Recent approaches are based on the popular encoder-decoder (ED) model for sequence to sequence learning. The ED model auto-regressively captures full dependency among output tokens but is slow due to sequential decoding. The PIE model does parallel decoding, giving up the advantage of modeling full dependency in the output, yet it achieves accuracy competitive with the ED model for four reasons: 1. predicting edits instead of tokens, 2. labeling sequences instead of generating sequences, 3. iteratively refining predictions to capture dependencies, and 4. factorizing logits over edits and their token argument to harness pre-trained language models like BERT. Experiments on tasks spanning GEC, OCR correction and spell correction demonstrate that the PIE model is an accurate and significantly faster alternative for local sequence transduction.', 'year': 2019, 'in_acl': True, 'citationCount': 139, 'section': 'text-editing works', 'subsection': None}, {'id': 221856672, 'paperId': '58ef9f9682c0ae4561dc30079a52867f108f704e', 'title': 'Seq2Edits: Sequence Transduction Using Span-level Edit Operations', 'authors': [{'authorId': '48404632', 'name': 'Felix Stahlberg'}, {'authorId': '2109681515', 'name': 'Shankar Kumar'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'We propose Seq2Edits, an open-vocabulary approach to sequence editing for natural language processing (NLP) tasks with a high degree of overlap between input and output texts. In this approach, each sequence-to-sequence transduction is represented as a sequence of edit operations, where each operation either replaces an entire source span with target tokens or keeps it unchanged. We evaluate our method on five NLP tasks (text normalization, sentence fusion, sentence splitting & rephrasing, text simplification, and grammatical error correction) and report competitive results across the board. For grammatical error correction, our method speeds up inference by up to 5.2x compared to full sequence models because inference time depends on the number of edits rather than the number of target tokens. For text normalization, sentence fusion, and grammatical error correction, our approach improves explainability by associating each edit operation with a human-readable tag.', 'year': 2020, 'in_acl': True, 'citationCount': 78, 'section': 'text-editing works', 'subsection': None}, {'id': 12836470, 'paperId': 'a1765ca8c4aa99a0d35f82d9e310ec3f79004e62', 'title': 'Learning How to Simplify From Explicit Labeling of Complex-Simplified Text Pairs', 'authors': [{'authorId': '69930782', 'name': 'Fernando Alva-Manchego'}, {'authorId': '3053695', 'name': 'Joachim Bingel'}, {'authorId': '3302745', 'name': 'Gustavo Paetzold'}, {'authorId': '2797847', 'name': 'Carolina Scarton'}, {'authorId': '1702974', 'name': 'Lucia Specia'}], 'venue': 'International Joint Conference on Natural Language Processing', 'abstract': 'Current research in text simplification has been hampered by two central problems: (i) the small amount of high-quality parallel simplification data available, and (ii) the lack of explicit annotations of simplification operations, such as deletions or substitutions, on existing data. While the recently introduced Newsela corpus has alleviated the first problem, simplifications still need to be learned directly from parallel text using black-box, end-to-end approaches rather than from explicit annotations. These complex-simple parallel sentence pairs often differ to such a high degree that generalization becomes difficult. End-to-end models also make it hard to interpret what is actually learned from data. We propose a method that decomposes the task of TS into its sub-problems. We devise a way to automatically identify operations in a parallel corpus and introduce a sequence-labeling approach based on these annotations. Finally, we provide insights on the types of transformations that different approaches can model.', 'year': 2017, 'in_acl': True, 'citationCount': 68, 'section': 'text-editing works', 'subsection': None}]
|
2022.naacl-tutorials.2
|
Self-supervised Representation Learning for Speech Processing
|
There is a trend in the machine learning community to adopt self-supervised approaches to pre-train deep networks. Self-supervised representation learning (SSL) utilizes proxy supervised learning tasks, for example, distinguishing parts of the input signal from distractors, or generating masked input segments conditioned on the unmasked ones, to obtain training data from unlabeled corpora. BERT and GPT in NLP and SimCLR and BYOL in CV are famous examples in this direction. These approaches make it possible to use a tremendous amount of unlabeled data available on the web to train large networks and solve complicated tasks. Thus, SSL has the potential to scale up current machine learning technologies, especially for low-resourced, under-represented use cases, and democratize the technologies. Recently self-supervised approaches for speech processing are also gaining popularity. There are several workshops in relevant topics hosted at ICML 2020 (https://icml-sas.gitlab.io/), NeurIPS 2020 (https://neurips-sas-2020.github.io/), and AAAI 2022 (https://aaai-sas-2022.github.io/). However, there is no previous tutorial about a similar topic based on the authors’ best knowledge. Due to the growing popularity of SSL, and the shared mission of the areas in bringing speech and language technologies to more use cases with better quality and scaling the technologies for under-represented languages, we propose this tutorial to systematically survey the latest SSL techniques, tools, datasets, and performance achievement in speech processing. The proposed tutorial is highly relevant to the special theme of ACL about language diversity. One of the main focuses of the tutorial is leveraging SSL to reduce the dependence of speech technologies on labeled data, and to scale up the technologies especially for under-represented languages and use cases.
| 2,022
|
https://aclanthology.org/2022.naacl-tutorials.2
|
NAACL
|
[{'id': 239017006, 'paperId': 'aa62d5e43cb151cd574e4df058b4c6a509d62644', 'title': 'Self-Supervised Representation Learning: Introduction, advances, and challenges', 'authors': [{'authorId': '37151799', 'name': 'Linus Ericsson'}, {'authorId': '2319565', 'name': 'H. Gouk'}, {'authorId': '1717179', 'name': 'Chen Change Loy'}, {'authorId': '1697755', 'name': 'Timothy M. Hospedales'}], 'venue': 'IEEE Signal Processing Magazine', 'abstract': 'Self-supervised representation learning (SSRL) methods aim to provide powerful, deep feature learning without the requirement of large annotated data sets, thus alleviating the annotation bottleneck—one of the main barriers to the practical deployment of deep learning today. These techniques have advanced rapidly in recent years, with their efficacy approaching and sometimes surpassing fully supervised pretraining alternatives across a variety of data modalities, including image, video, sound, text, and graphs. This article introduces this vibrant area, including key concepts, the four main families of approaches and associated state-of-the-art techniques, and how self-supervised methods are applied to diverse modalities of data. We further discuss practical considerations including workflows, representation transferability, and computational cost. Finally, we survey major open challenges in the field, that provide fertile ground for future work.', 'year': 2021, 'in_acl': False, 'citationCount': 230, 'section': None, 'subsection': None}, {'id': 211532403, 'paperId': 'bd20069f5cac3e63083ecf6479abc1799db33ce0', 'title': 'A Primer in BERTology: What We Know About How BERT Works', 'authors': [{'authorId': '145046059', 'name': 'Anna Rogers'}, {'authorId': '152176221', 'name': 'Olga Kovaleva'}, {'authorId': '1681193', 'name': 'Anna Rumshisky'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'Abstract Transformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue, and approaches to compression. We then outline directions for future research.', 'year': 2020, 'in_acl': False, 'citationCount': 1333, 'section': None, 'subsection': None}, {'id': 236493269, 'paperId': '28692beece311a90f5fa1ca2ec9d0c2ce293d069', 'title': 'Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing', 'authors': [{'authorId': '144118452', 'name': 'Pengfei Liu'}, {'authorId': '30300197', 'name': 'Weizhe Yuan'}, {'authorId': '41037252', 'name': 'Jinlan Fu'}, {'authorId': '2669515', 'name': 'Zhengbao Jiang'}, {'authorId': '50376014', 'name': 'Hiroaki Hayashi'}, {'authorId': '1700325', 'name': 'Graham Neubig'}], 'venue': 'ACM Computing Surveys', 'abstract': 'This article surveys and organizes research works in a new paradigm in natural language processing, which we dub “prompt-based learning.” Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly. To use these models to perform prediction tasks, the original input x is modified using a template into a textual string prompt x′ that has some unfilled slots, and then the language model is used to probabilistically fill the unfilled information to obtain a final string x̂, from which the final output y can be derived. This framework is powerful and attractive for a number of reasons: It allows the language model to be pre-trained on massive amounts of raw text, and by defining a new prompting function the model is able to perform few-shot or even zero-shot learning, adapting to new scenarios with few or no labeled data. In this article, we introduce the basics of this promising paradigm, describe a unified set of mathematical notations that can cover a wide variety of existing work, and organize existing work along several dimensions, e.g., the choice of pre-trained language models, prompts, and tuning strategies. To make the field more accessible to interested beginners, we not only make a systematic review of existing works and a highly structured typology of prompt-based concepts but also release other resources, e.g., a website NLPedia–Pretrain including constantly updated survey and paperlist.', 'year': 2021, 'in_acl': False, 'citationCount': 3190, 'section': None, 'subsection': None}, {'id': 212747830, 'paperId': '3bcb17559ce96eb20fa79af8194f4af0380d194a', 'title': 'Pre-trained models for natural language processing: A survey', 'authors': [{'authorId': '1767521', 'name': 'Xipeng Qiu'}, {'authorId': '153345698', 'name': 'Tianxiang Sun'}, {'authorId': '26339093', 'name': 'Yige Xu'}, {'authorId': '95329799', 'name': 'Yunfan Shao'}, {'authorId': '145493218', 'name': 'Ning Dai'}, {'authorId': '1790227', 'name': 'Xuanjing Huang'}], 'venue': 'Science China Technological Sciences', 'abstract': 'Recently, the emergence of pre-trained models (PTMs) has brought natural language processing (NLP) to a new era. In this survey, we provide a comprehensive review of PTMs for NLP. We first briefly introduce language representation learning and its research progress. Then we systematically categorize existing PTMs based on a taxonomy from four different perspectives. Next, we describe how to adapt the knowledge of PTMs to downstream tasks. Finally, we outline some potential directions of PTMs for future research. This survey is purposed to be a hands-on guide for understanding, using, and developing PTMs for various NLP tasks.', 'year': 2020, 'in_acl': False, 'citationCount': 1316, 'section': None, 'subsection': None}]
|
2022.naacl-tutorials.3
|
New Frontiers of Information Extraction
|
This tutorial targets researchers and practitioners who are interested in AI and ML technologies for structural information extraction (IE) from unstructured textual sources. Particularly, this tutorial will provide audience with a systematic introduction to recent advances of IE, by answering several important research questions. These questions include (i) how to develop an robust IE system from noisy, insufficient training data, while ensuring the reliability of its prediction? (ii) how to foster the generalizability of IE through enhancing the system’s cross-lingual, cross-domain, cross-task and cross-modal transferability? (iii) how to precisely support extracting structural information with extremely fine-grained, diverse and boundless labels? (iv) how to further improve IE by leveraging indirect supervision from other NLP tasks, such as NLI, QA or summarization, and pre-trained language models? (v) how to acquire knowledge to guide the inference of IE systems? We will discuss several lines of frontier research that tackle those challenges, and will conclude the tutorial by outlining directions for further investigation.
| 2,022
|
https://aclanthology.org/2022.naacl-tutorials.3
|
NAACL
|
[{'id': 233297055, 'paperId': 'dbfc17833434243e07c4629e58f3d8ed7112dbfe', 'title': 'Learning from Noisy Labels for Entity-Centric Information Extraction', 'authors': [{'authorId': '2203076', 'name': 'Wenxuan Zhou'}, {'authorId': '1998918', 'name': 'Muhao Chen'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Recent information extraction approaches have relied on training deep neural models. However, such models can easily overfit noisy labels and suffer from performance degradation. While it is very costly to filter noisy labels in large learning resources, recent studies show that such labels take more training steps to be memorized and are more frequently forgotten than clean labels, therefore are identifiable in training. Motivated by such properties, we propose a simple co-regularization framework for entity-centric information extraction, which consists of several neural models with identical structures but different parameter initialization. These models are jointly optimized with the task-specific losses and are regularized to generate similar predictions based on an agreement loss, which prevents overfitting on noisy labels. Extensive experiments on two widely used but noisy benchmarks for information extraction, TACRED and CoNLL03, demonstrate the effectiveness of our framework. We release our code to the community for future research.', 'year': 2021, 'in_acl': True, 'citationCount': 60, 'section': None, 'subsection': None}, {'id': 233296689, 'paperId': 'a9b04a3e0cf5766df9b3af8c442f2d85ac5e2c7e', 'title': 'Contrastive Out-of-Distribution Detection for Pretrained Transformers', 'authors': [{'authorId': '2203076', 'name': 'Wenxuan Zhou'}, {'authorId': '144097210', 'name': 'Fangyu Liu'}, {'authorId': '1998918', 'name': 'Muhao Chen'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Pretrained Transformers achieve remarkable performance when training and test data are from the same distribution. However, in real-world scenarios, the model often faces out-of-distribution (OOD) instances that can cause severe semantic shift problems at inference time. Therefore, in practice, a reliable model should identify such instances, and then either reject them during inference or pass them over to models that handle another distribution. In this paper, we develop an unsupervised OOD detection method, in which only the in-distribution (ID) data are used in training. We propose to fine-tune the Transformers with a contrastive loss, which improves the compactness of representations, such that OOD instances can be better differentiated from ID ones. These OOD instances can then be accurately detected using the Mahalanobis distance in the model’s penultimate layer. We experiment with comprehensive settings and achieve near-perfect OOD detection performance, outperforming baselines drastically. We further investigate the rationales behind the improvement, finding that more compact representations through margin-based contrastive learning bring the improvement. We release our code to the community for future research.', 'year': 2021, 'in_acl': True, 'citationCount': 89, 'section': None, 'subsection': None}, {'id': 219179670, 'paperId': '0ba05a24b090435d0edd3865abdd70a9168290b4', 'title': 'Learning Constraints for Structured Prediction Using Rectifier Networks', 'authors': [{'authorId': '3265780', 'name': 'Xingyuan Pan'}, {'authorId': '41016174', 'name': 'Maitrey Mehta'}, {'authorId': '3052879', 'name': 'Vivek Srikumar'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Various natural language processing tasks are structured prediction problems where outputs are constructed with multiple interdependent decisions. Past work has shown that domain knowledge, framed as constraints over the output space, can help improve predictive accuracy. However, designing good constraints often relies on domain expertise. In this paper, we study the problem of learning such constraints. We frame the problem as that of training a two-layer rectifier network to identify valid structures or substructures, and show a construction for converting a trained network into a system of linear constraints over the inference variables. Our experiments on several NLP tasks show that the learned constraints can improve the prediction accuracy, especially when the number of training examples is small.', 'year': 2020, 'in_acl': True, 'citationCount': 8, 'section': None, 'subsection': None}, {'id': 237485601, 'paperId': '0cdc27a99c1520c2ec604b97470ae75227e096ee', 'title': 'Foreseeing the Benefits of Incidental Supervision', 'authors': [{'authorId': '7146703', 'name': 'Hangfeng He'}, {'authorId': '2119682611', 'name': 'Mingyuan Zhang'}, {'authorId': '3333257', 'name': 'Qiang Ning'}, {'authorId': '144590225', 'name': 'D. Roth'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Real-world applications often require improved models by leveraging *a range of cheap incidental supervision signals*. These could include partial labels, noisy labels, knowledge-based constraints, and cross-domain or cross-task annotations – all having statistical associations with gold annotations but not exactly the same. However, we currently lack a principled way to measure the benefits of these signals to a given target task, and the common practice of evaluating these benefits is through exhaustive experiments with various models and hyperparameters. This paper studies whether we can, *in a single framework, quantify the benefits of various types of incidental signals for a given target task without going through combinatorial experiments*. We propose a unified PAC-Bayesian motivated informativeness measure, PABI, that characterizes the uncertainty reduction provided by incidental supervision signals. We demonstrate PABI’s effectiveness by quantifying the value added by various types of incidental signals to sequence tagging tasks. Experiments on named entity recognition (NER) and question answering (QA) show that PABI’s predictions correlate well with learning performance, providing a promising way to determine, ahead of learning, which supervision signals would be beneficial.', 'year': 2020, 'in_acl': True, 'citationCount': 11, 'section': None, 'subsection': None}, {'id': 218581125, 'paperId': 'b9485d1e2c66c3ae452ec4903c2a157caef4d2ed', 'title': 'Temporal Common Sense Acquisition with Minimal Supervision', 'authors': [{'authorId': '145360756', 'name': 'Ben Zhou'}, {'authorId': '3333257', 'name': 'Qiang Ning'}, {'authorId': '1783281', 'name': 'Daniel Khashabi'}, {'authorId': '144590225', 'name': 'D. Roth'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Temporal common sense (e.g., duration and frequency of events) is crucial for understanding natural language. However, its acquisition is challenging, partly because such information is often not expressed explicitly in text, and human annotation on such concepts is costly. This work proposes a novel sequence modeling approach that exploits explicit and implicit mentions of temporal common sense, extracted from a large corpus, to build TacoLM, a temporal common sense language model. Our method is shown to give quality predictions of various dimensions of temporal common sense (on UDST and a newly collected dataset from RealNews). It also produces representations of events for relevant tasks such as duration comparison, parent-child relations, event coreference and temporal QA (on TimeBank, HiEVE and MCTACO) that are better than using the standard BERT. Thus, it will be an important component of temporal NLP.', 'year': 2020, 'in_acl': True, 'citationCount': 88, 'section': None, 'subsection': None}, {'id': 222142136, 'paperId': 'e2d38543bd3cf813c63df336b21b003156ed48a8', 'title': 'Universal Natural Language Processing with Limited Annotations: Try Few-shot Textual Entailment as a Start', 'authors': [{'authorId': '40483594', 'name': 'Wenpeng Yin'}, {'authorId': '8937909', 'name': 'Nazneen Rajani'}, {'authorId': '9215251', 'name': 'Dragomir R. Radev'}, {'authorId': '2166511', 'name': 'R. Socher'}, {'authorId': '2228109', 'name': 'Caiming Xiong'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'A standard way to address different NLP problems is by first constructing a problem-specific dataset, then building a model to fit this dataset. To build the ultimate artificial intelligence, we desire a single machine that can handle diverse new problems, for which task-specific annotations are limited. We bring up textual entailment as a unified solver for such NLP problems. However, current research of textual entailment has not spilled much ink on the following questions: (i) How well does a pretrained textual entailment system generalize across domains with only a handful of domain-specific examples? and (ii) When is it worth transforming an NLP task into textual entailment? We argue that the transforming is unnecessary if we can obtain rich annotations for this task. Textual entailment really matters particularly when the target NLP task has insufficient annotations. \nUniversal NLP can be probably achieved through different routines. In this work, we introduce Universal Few-shot textual Entailment (UFO-Entail). We demonstrate that this framework enables a pretrained entailment model to work well on new entailment domains in a few-shot setting, and show its effectiveness as a unified solver for several downstream NLP tasks such as question answering and coreference resolution when the end-task annotations are limited. Code: this https URL', 'year': 2020, 'in_acl': True, 'citationCount': 66, 'section': None, 'subsection': None}, {'id': 246822845, 'paperId': 'ef25f1586cf6630f4a30d41ee5a2848b064dede3', 'title': 'Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference', 'authors': [{'authorId': '1596827240', 'name': 'Bangzheng Li'}, {'authorId': '40483594', 'name': 'Wenpeng Yin'}, {'authorId': '1998918', 'name': 'Muhao Chen'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'The task of ultra-fine entity typing (UFET) seeks to predict diverse and free-form words or phrases that describe the appropriate types of entities mentioned in sentences. A key challenge for this task lies in the large number of types and the scarcity of annotated data per type. Existing systems formulate the task as a multi-way classification problem and train directly or distantly supervised classifiers. This causes two issues: (i) the classifiers do not capture the type semantics because types are often converted into indices; (ii) systems developed in this way are limited to predicting within a pre-defined type set, and often fall short of generalizing to types that are rarely seen or unseen in training. This work presents LITE🍻, a new approach that formulates entity typing as a natural language inference (NLI) problem, making use of (i) the indirect supervision from NLI to infer type information meaningfully represented as textual hypotheses and alleviate the data scarcity issue, as well as (ii) a learning-to-rank objective to avoid the pre-defining of a type set. Experiments show that, with limited training data, LITE obtains state-of-the-art performance on the UFET task. In addition, LITE demonstrates its strong generalizability by not only yielding best results on other fine-grained entity typing benchmarks, more importantly, a pre-trained LITE system works well on new data containing unseen types.1', 'year': 2022, 'in_acl': True, 'citationCount': 28, 'section': None, 'subsection': None}, {'id': 1240016, 'paperId': '5a3abc60f0c91a255b1a86843d9e97ab7d63bf08', 'title': 'Zero-Shot Transfer Learning for Event Extraction', 'authors': [{'authorId': '34170717', 'name': 'Lifu Huang'}, {'authorId': '2113323573', 'name': 'Heng Ji'}, {'authorId': '1979489', 'name': 'Kyunghyun Cho'}, {'authorId': '1817166', 'name': 'Clare R. Voss'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Most previous supervised event extraction methods have relied on features derived from manual annotations, and thus cannot be applied to new event types without extra annotation effort. We take a fresh look at event extraction and model it as a generic grounding problem: mapping each event mention to a specific type in a target event ontology. We design a transferable architecture of structural and compositional neural networks to jointly represent and map event mentions and types into a shared semantic space. Based on this new framework, we can select, for each event mention, the event type which is semantically closest in this space as its type. By leveraging manual annotations available for a small set of existing event types, our framework can be applied to new unseen event types without additional manual annotations. When tested on 23 unseen event types, our zero-shot framework, without manual annotations, achieved performance comparable to a supervised model trained from 3,000 sentences annotated with 500 event mentions.', 'year': 2017, 'in_acl': True, 'citationCount': 196, 'section': None, 'subsection': None}, {'id': 202544143, 'paperId': 'ad3b204c495b049fbd25ccf26eb49e78655f8512', 'title': 'Cross-lingual Structure Transfer for Relation and Event Extraction', 'authors': [{'authorId': '3393606', 'name': 'Ananya Subburathinam'}, {'authorId': '152347526', 'name': 'Di Lu'}, {'authorId': '2113323573', 'name': 'Heng Ji'}, {'authorId': '143823227', 'name': 'Jonathan May'}, {'authorId': '9546964', 'name': 'Shih-Fu Chang'}, {'authorId': '2707234', 'name': 'Avirup Sil'}, {'authorId': '1817166', 'name': 'Clare R. Voss'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'The identification of complex semantic structures such as events and entity relations, already a challenging Information Extraction task, is doubly difficult from sources written in under-resourced and under-annotated languages. We investigate the suitability of cross-lingual structure transfer techniques for these tasks. We exploit relation- and event-relevant language-universal features, leveraging both symbolic (including part-of-speech and dependency path) and distributional (including type representation and contextualized representation) information. By representing all entity mentions, event triggers, and contexts into this complex and structured multilingual common space, using graph convolutional networks, we can train a relation or event extractor from source language annotations and apply it to the target language. Extensive experiments on cross-lingual relation and event transfer among English, Chinese, and Arabic demonstrate that our approach achieves performance comparable to state-of-the-art supervised models trained on up to 3,000 manually annotated mentions: up to 62.6% F-score for Relation Extraction, and 63.1% F-score for Event Argument Role Labeling. The event argument role labeling model transferred from English to Chinese achieves similar performance as the model trained from Chinese. We thus find that language-universal symbolic and distributional representations are complementary for cross-lingual structure transfer.', 'year': 2019, 'in_acl': True, 'citationCount': 74, 'section': None, 'subsection': None}, {'id': 71147690, 'paperId': '64584150548d79aadad3a0b3e7a3c949967c5d55', 'title': 'Sentence Embedding Alignment for Lifelong Relation Extraction', 'authors': [{'authorId': '46507182', 'name': 'Hong Wang'}, {'authorId': '22253126', 'name': 'Wenhan Xiong'}, {'authorId': '2482533', 'name': 'Mo Yu'}, {'authorId': '1955964', 'name': 'Xiaoxiao Guo'}, {'authorId': '3307026', 'name': 'Shiyu Chang'}, {'authorId': '1682479', 'name': 'William Yang Wang'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Conventional approaches to relation extraction usually require a fixed set of pre-defined relations. Such requirement is hard to meet in many real applications, especially when new data and relations are emerging incessantly and it is computationally expensive to store all data and re-train the whole model every time new data and relations come in. We formulate such challenging problem as lifelong relation extraction and investigate memory-efficient incremental learning methods without catastrophically forgetting knowledge learned from previous tasks. We first investigate a modified version of the stochastic gradient methods with a replay memory, which surprisingly outperforms recent state-of-the-art lifelong learning methods. We further propose to improve this approach to alleviate the forgetting problem by anchoring the sentence embedding space. Specifically, we utilize an explicit alignment model to mitigate the sentence embedding distortion of learned model when training on new data and new relations. Experiment results on multiple benchmarks show that our proposed method significantly outperforms the state-of-the-art lifelong learning approaches.', 'year': 2019, 'in_acl': True, 'citationCount': 118, 'section': None, 'subsection': None}, {'id': 53845347, 'paperId': '2718cd594d2aa09315da52594877cd71d377dfcf', 'title': 'Multi-Level Multimodal Common Semantic Space for Image-Phrase Grounding', 'authors': [{'authorId': '153769937', 'name': 'Hassan Akbari'}, {'authorId': '35862299', 'name': 'Svebor Karaman'}, {'authorId': '1754397', 'name': 'Surabhi Bhargava'}, {'authorId': '2108342501', 'name': 'Brian Chen'}, {'authorId': '1856025', 'name': 'Carl Vondrick'}, {'authorId': '9546964', 'name': 'Shih-Fu Chang'}], 'venue': 'Computer Vision and Pattern Recognition', 'abstract': 'We address the problem of phrase grounding by learning a multi-level common semantic space shared by the textual and visual modalities. We exploit multiple levels of feature maps of a Deep Convolutional Neural Network, as well as contextualized word and sentence embeddings extracted from a character-based language model. Following dedicated non-linear mappings for visual features at each level, word, and sentence embeddings, we obtain multiple instantiations of our common semantic space in which comparisons between any target text and the visual content is performed with cosine similarity. We guide the model by a multi-level multimodal attention mechanism which outputs attended visual features at each level. The best level is chosen to be compared with text content for maximizing the pertinence scores of image-sentence pairs of the ground truth. Experiments conducted on three publicly available datasets show significant performance gains (20%-60% relative) over the state-of-the-art in phrase localization and set a new performance record on those datasets. We provide a detailed ablation study to show the contribution of each element of our approach and release our code on GitHub.', 'year': 2018, 'in_acl': False, 'citationCount': 73, 'section': None, 'subsection': None}, {'id': 218501728, 'paperId': '04f7834936bf8f455f804c4d84b52fcffc6784ee', 'title': 'Cross-media Structured Common Space for Multimedia Event Extraction', 'authors': [{'authorId': '3361240', 'name': 'Manling Li'}, {'authorId': '2778637', 'name': 'Alireza Zareian'}, {'authorId': '145653969', 'name': 'Qi Zeng'}, {'authorId': '153188991', 'name': 'Spencer Whitehead'}, {'authorId': '152347526', 'name': 'Di Lu'}, {'authorId': '2113323573', 'name': 'Heng Ji'}, {'authorId': '9546964', 'name': 'Shih-Fu Chang'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We introduce a new task, MultiMedia Event Extraction, which aims to extract events and their arguments from multimedia documents. We develop the first benchmark and collect a dataset of 245 multimedia news articles with extensively annotated events and arguments. We propose a novel method, Weakly Aligned Structured Embedding (WASE), that encodes structured representations of semantic information from textual and visual data into a common embedding space. The structures are aligned across modalities by employing a weakly supervised training strategy, which enables exploiting available resources without explicit cross-media annotation. Compared to uni-modal state-of-the-art methods, our approach achieves 4.0% and 9.8% absolute F-score gains on text event argument role labeling and visual event extraction. Compared to state-of-the-art multimedia unstructured representations, we achieve 8.3% and 5.0% absolute F-score gains on multimedia event extraction and argument role labeling, respectively. By utilizing images, we extract 21.4% more event mentions than traditional text-only methods.', 'year': 2020, 'in_acl': True, 'citationCount': 94, 'section': None, 'subsection': None}]
|
2022.naacl-tutorials.5
|
Tutorial on Multimodal Machine Learning
|
Multimodal machine learning involves integrating and modeling information from multiple heterogeneous sources of data. It is a challenging yet crucial area with numerous real-world applications in multimedia, affective computing, robotics, finance, HCI, and healthcare. This tutorial, building upon a new edition of a survey paper on multimodal ML as well as previously-given tutorials and academic courses, will describe an updated taxonomy on multimodal machine learning synthesizing its core technical challenges and major directions for future research.
| 2,022
|
https://aclanthology.org/2022.naacl-tutorials.5
|
NAACL
|
[{'id': 252118396, 'paperId': '63f93a6d9c38d656933706acfc720684470bc108', 'title': 'Foundations and Recent Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions', 'authors': [{'authorId': '28130078', 'name': 'P. Liang'}, {'authorId': '144802290', 'name': 'Amir Zadeh'}, {'authorId': '49933077', 'name': 'Louis-philippe Morency'}], 'venue': 'arXiv.org', 'abstract': 'Multimodal machine learning is a vibrant multi-disciplinary research field that aims to design computer agents with intelligent capabilities such as understanding, reasoning, and learning through integrating multiple communicative modalities, including linguistic, acoustic, visual, tactile, and physiological messages. With the recent interest in video understanding, embodied autonomous agents, text-to-image generation, and multisensor fusion in application domains such as healthcare and robotics, multimodal machine learning has brought unique computational and theoretical challenges to the machine learning community given the heterogeneity of data sources and the interconnections often found between modalities. However, the breadth of progress in multimodal research has made it difficult to identify the common themes and open questions in the field. By synthesizing a broad range of application domains and theoretical frameworks from both historical and recent perspectives, this paper is designed to provide an overview of the computational and theoretical foundations of multimodal machine learning. We start by defining two key principles of modality heterogeneity and interconnections that have driven subsequent innovations, and propose a taxonomy of 6 core technical challenges: representation, alignment, reasoning, generation, transference, and quantification covering historical and recent trends. Recent technical achievements will be presented through the lens of this taxonomy, allowing researchers to understand the similarities and differences across new approaches. We end by motivating several open problems for future research as identified by our taxonomy.', 'year': 2022, 'in_acl': False, 'citationCount': 102, 'section': 'General', 'subsection': None}, {'id': 10137425, 'paperId': '6bc4b1376ec2812b6d752c4f6bc8d8fd0512db91', 'title': 'Multimodal Machine Learning: A Survey and Taxonomy', 'authors': [{'authorId': '1756344', 'name': 'T. Baltrušaitis'}, {'authorId': '118242121', 'name': 'Chaitanya Ahuja'}, {'authorId': '49933077', 'name': 'Louis-philippe Morency'}], 'venue': 'IEEE Transactions on Pattern Analysis and Machine Intelligence', 'abstract': 'Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research.', 'year': 2017, 'in_acl': False, 'citationCount': 2523, 'section': 'General', 'subsection': None}, {'id': 393948, 'paperId': '184ac0766262312ba76bbdece4e7ffad0aa8180b', 'title': 'Representation Learning: A Review and New Perspectives', 'authors': [{'authorId': '1751762', 'name': 'Yoshua Bengio'}, {'authorId': '1760871', 'name': 'Aaron C. Courville'}, {'authorId': '145467703', 'name': 'Pascal Vincent'}], 'venue': 'IEEE Transactions on Pattern Analysis and Machine Intelligence', 'abstract': 'The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.', 'year': 2012, 'in_acl': False, 'citationCount': 11774, 'section': 'General', 'subsection': None}, {'id': 213009174, 'paperId': 'caa31faab39d34ecbb8100911640dfaec76f9ee9', 'title': 'Multiplicative Interactions and Where to Find Them', 'authors': [{'authorId': '35880964', 'name': 'Siddhant M. Jayakumar'}, {'authorId': '10698483', 'name': 'Jacob Menick'}, {'authorId': '144792148', 'name': 'Wojciech M. Czarnecki'}, {'authorId': '144735987', 'name': 'Jonathan Schwarz'}, {'authorId': '34269227', 'name': 'Jack W. Rae'}, {'authorId': '2217144', 'name': 'Simon Osindero'}, {'authorId': '1725303', 'name': 'Y. Teh'}, {'authorId': '3367786', 'name': 'Tim Harley'}, {'authorId': '1996134', 'name': 'Razvan Pascanu'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'We explore the role of multiplicative interaction as a unifying framework to describe a range of classical and modern neural network architectural motifs, such as gating, attention layers, hypernetworks, and dynamic convolutions amongst others. Multiplicative interaction layers as primitive operations have a long-established presence in the literature, though this often not emphasized and thus under-appreciated. We begin by showing that such layers strictly enrich the representable function classes of neural networks. We conjecture that multiplicative interactions offer a particularly powerful inductive bias when fusing multiple streams of information or when conditional computation is required. We therefore argue that they should be considered in many situation where multiple compute or information paths need to be combined, in place of the simple and oft-used concatenation operation. Finally, we back up our claims and demonstrate the potential of multiplicative interactions by applying them in large-scale complex RL and sequence modelling tasks, where their use allows us to deliver state-of-the-art results, and thereby provides new evidence in support of multiplicative interactions playing a more prominent role when designing new neural network architectures.', 'year': 2020, 'in_acl': False, 'citationCount': 124, 'section': 'Representation', 'subsection': None}, {'id': 710430, 'paperId': 'adb4ea2c0f3eff8a17c97a67f28b923e8e5bdff1', 'title': 'Multimodal learning with deep Boltzmann machines', 'authors': [{'authorId': '2897313', 'name': 'Nitish Srivastava'}, {'authorId': '145124475', 'name': 'R. Salakhutdinov'}], 'venue': 'Journal of machine learning research', 'abstract': 'Data often consists of multiple diverse modalities. For example, images are tagged with textual information and videos are accompanied by audio. Each modality is characterized by having distinct statistical properties. We propose a Deep Boltzmann Machine for learning a generative model of such multimodal data. We show that the model can be used to create fused representations by combining features across modalities. These learned representations are useful for classification and information retrieval. By sampling from the conditional distributions over each data modality, it is possible to create these representations even when some data modalities are missing. We conduct experiments on bimodal image-text and audio-video data. The fused representation achieves good classification results on the MIR-Flickr data set matching or outperforming other deep models as well as SVM based models that use Multiple Kernel Learning. We further demonstrate that this multimodal model helps classification and retrieval even when only unimodal data is available at test time.', 'year': 2012, 'in_acl': False, 'citationCount': 1690, 'section': 'Representation', 'subsection': None}, {'id': 49303347, 'paperId': '034f1c5589644a6b42f50bf61b1628a1c5607fd9', 'title': 'Learning Factorized Multimodal Representations', 'authors': [{'authorId': '145639633', 'name': 'Yao-Hung Hubert Tsai'}, {'authorId': '28130078', 'name': 'P. Liang'}, {'authorId': '144802290', 'name': 'Amir Zadeh'}, {'authorId': '49933077', 'name': 'Louis-philippe Morency'}, {'authorId': '145124475', 'name': 'R. Salakhutdinov'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Learning multimodal representations is a fundamentally complex research problem due to the presence of multiple heterogeneous sources of information. Although the presence of multiple modalities provides additional valuable information, there are two key challenges to address when learning from multimodal data: 1) models must learn the complex intra-modal and cross-modal interactions for prediction and 2) models must be robust to unexpected missing or noisy modalities during testing. In this paper, we propose to optimize for a joint generative-discriminative objective across multimodal data and labels. We introduce a model that factorizes representations into two sets of independent factors: multimodal discriminative and modality-specific generative factors. Multimodal discriminative factors are shared across all modalities and contain joint multimodal features required for discriminative tasks such as sentiment prediction. Modality-specific generative factors are unique for each modality and contain the information required for generating data. Experimental results show that our model is able to learn meaningful multimodal representations that achieve state-of-the-art or competitive performance on six multimodal datasets. Our model demonstrates flexible generative capabilities by conditioning on independent factors and can reconstruct missing modalities without significantly impacting performance. Lastly, we interpret our factorized representations to understand the interactions that influence multimodal learning.', 'year': 2018, 'in_acl': False, 'citationCount': 359, 'section': 'Representation', 'subsection': None}, {'id': 231740629, 'paperId': '81002fbb777f860f9aac2bbc24467a62345af279', 'title': 'Decoupling the Role of Data, Attention, and Losses in Multimodal Transformers', 'authors': [{'authorId': '2234342', 'name': 'Lisa Anne Hendricks'}, {'authorId': '1386957852', 'name': 'John F. J. Mellor'}, {'authorId': '145721402', 'name': 'R. Schneider'}, {'authorId': '2285263', 'name': 'Jean-Baptiste Alayrac'}, {'authorId': '3208081', 'name': 'Aida Nematzadeh'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'Abstract Recently, multimodal transformer models have gained popularity because their performance on downstream tasks suggests they learn rich visual-linguistic representations. Focusing on zero-shot image retrieval tasks, we study three important factors that can impact the quality of learned representations: pretraining data, the attention mechanism, and loss functions. By pretraining models on six datasets, we observe that dataset noise and language similarity to our downstream task are important indicators of model performance. Through architectural analysis, we learn that models with a multimodal attention mechanism can outperform deeper models with modality-specific attention mechanisms. Finally, we show that successful contrastive losses used in the self-supervised learning literature do not yield similar performance gains when used in multimodal transformers.', 'year': 2021, 'in_acl': False, 'citationCount': 102, 'section': 'Alignment', 'subsection': None}, {'id': 2495132, 'paperId': 'e2257e3f56ccb12875a57bc0a8cca1d9d7e93ec6', 'title': 'Deep Canonical Correlation Analysis', 'authors': [{'authorId': '144339350', 'name': 'Galen Andrew'}, {'authorId': '144365054', 'name': 'R. Arora'}, {'authorId': '1748118', 'name': 'J. Bilmes'}, {'authorId': '2924113', 'name': 'Karen Livescu'}], 'venue': 'International Conference on Machine Learning', 'abstract': 'We introduce Deep Canonical Correlation Analysis (DCCA), a method to learn complex nonlinear transformations of two views of data such that the resulting representations are highly linearly correlated. Parameters of both transformations are jointly learned to maximize the (regularized) total correlation. It can be viewed as a nonlinear extension of the linear method canonical correlation analysis (CCA). It is an alternative to the nonparametric method kernel canonical correlation analysis (KCCA) for learning correlated nonlinear transformations. Unlike KCCA, DCCA does not require an inner product, and has the advantages of a parametric method: training time scales well with data size and the training data need not be referenced when computing the representations of unseen instances. In experiments on two real-world datasets, we find that DCCA learns representations with significantly higher correlation than those learned by CCA and KCCA. We also introduce a novel non-saturating sigmoid function based on the cube root that may be useful more generally in feedforward neural networks.', 'year': 2013, 'in_acl': False, 'citationCount': 1726, 'section': 'Alignment', 'subsection': None}, {'id': 222341606, 'paperId': 'dedcdc1fb3a6def9772dce674d89150923dd75b9', 'title': 'Vokenization: Improving Language Understanding via Contextualized, Visually-Grounded Supervision', 'authors': [{'authorId': '80940793', 'name': 'Hao Tan'}, {'authorId': '143977268', 'name': 'Mohit Bansal'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Humans learn language by listening, speaking, writing, reading, and also, via interaction with the multimodal real world. Existing language pre-training frameworks show the effectiveness of text-only self-supervision while we explore the idea of a visually-supervised language model in this paper. We find that the main reason hindering this exploration is the large divergence in magnitude and distributions between the visually-grounded language datasets and pure-language corpora. Therefore, we develop a technique named "vokenization" that extrapolates multimodal alignments to language-only data by contextually mapping language tokens to their related images (which we call "vokens"). The "vokenizer" is trained on relatively small image captioning datasets and we then apply it to generate vokens for large language corpora. Trained with these contextually generated vokens, our visually-supervised language models show consistent improvements over self-supervised alternatives on multiple pure-language tasks such as GLUE, SQuAD, and SWAG. Code and pre-trained models publicly available at this https URL', 'year': 2020, 'in_acl': True, 'citationCount': 112, 'section': 'Transference', 'subsection': None}, {'id': 221911618, 'paperId': '757782a0524d6d23f430d6d8f924c6212d6afeac', 'title': 'Foundations of Multimodal Co-learning', 'authors': [{'authorId': '144802290', 'name': 'Amir Zadeh'}, {'authorId': '28130078', 'name': 'P. Liang'}, {'authorId': '49933077', 'name': 'Louis-philippe Morency'}], 'venue': 'Information Fusion', 'abstract': 'In the current state of the field of machine learning, often, real-world phenomena are learned through studies of isolated modalities; such as modeling language exclusively from verbal modality, which is a common theme in natural language processing. This is widely adopted since downstream tasksin different disciplines of machine learning are also often similarly isolated and unimodal. In sharp contrast to this, human learning from real-world experiences is rarely unimodal, and often exhibits a multisensory nature, regardless of any assumptions about downstream tasks. The cognitive constructs in human brain are consistently developed through multisensory reinforcement, and the same constructs generalize to unimodal scenarios. The difference between the trend of unimodal learning and human cognitive development raises the following question: “Even if downstream tasks are unimodal during test time, is it better to learn from the isolated modality or from multimodal information?”. In this paper we focus on an in-depth study of this research question. We study the differences between unimodal learning and Multimodal Co-learning (MCl), both from empirical and theoretical standpoints. Through the lens of information entropy and characteristics of deep neural networks, we demonstrate strong theoretical justifications in favor of MCl.', 'year': 2020, 'in_acl': False, 'citationCount': 40, 'section': 'Transference', 'subsection': None}, {'id': 108296442, 'paperId': '50f76736c3090c6effac25400e5e40cc0b7b5ad9', 'title': 'The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision', 'authors': [{'authorId': '13589371', 'name': 'Jiayuan Mao'}, {'authorId': '144158271', 'name': 'Chuang Gan'}, {'authorId': '143967473', 'name': 'Pushmeet Kohli'}, {'authorId': '1763295', 'name': 'J. Tenenbaum'}, {'authorId': '3045089', 'name': 'Jiajun Wu'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, our model learns by simply looking at images and reading paired questions and answers. Our model builds an object-based scene representation and translates sentences into executable, symbolic programs. To bridge the learning of two modules, we use a neuro-symbolic reasoning module that executes these programs on the latent scene representation. Analogical to human concept learning, the perception module learns visual concepts based on the language description of the object being referred to. Meanwhile, the learned visual concepts facilitate learning new words and parsing new sentences. We use curriculum learning to guide the searching over the large compositional space of images and language. Extensive experiments demonstrate the accuracy and efficiency of our model on learning visual concepts, word representations, and semantic parsing of sentences. Further, our method allows easy generalization to new object attributes, compositions, language concepts, scenes and questions, and even new program domains. It also empowers applications including visual question answering and bidirectional image-text retrieval.', 'year': 2019, 'in_acl': False, 'citationCount': 639, 'section': 'Reasoning', 'subsection': None}, {'id': 182952502, 'paperId': '7dc156eb9d84ae8fd521ecac5ccc5b5426a42b50', 'title': 'A Survey of Reinforcement Learning Informed by Natural Language', 'authors': [{'authorId': '1818756', 'name': 'Jelena Luketina'}, {'authorId': '39683441', 'name': 'Nantas Nardelli'}, {'authorId': '38698094', 'name': 'Gregory Farquhar'}, {'authorId': '145356667', 'name': 'Jakob N. Foerster'}, {'authorId': '2112400', 'name': 'Jacob Andreas'}, {'authorId': '1864353', 'name': 'Edward Grefenstette'}, {'authorId': '1766767', 'name': 'Shimon Whiteson'}, {'authorId': '2620211', 'name': 'Tim Rocktäschel'}], 'venue': 'International Joint Conference on Artificial Intelligence', 'abstract': 'To be successful in real-world tasks, Reinforcement Learning (RL) needs to exploit the compositional, relational, and hierarchical structure of the world, and learn to transfer it to the task at hand. Recent advances in representation learning for language make it possible to build models that acquire world knowledge from text corpora and integrate this knowledge into downstream decision making problems. We thus argue that the time is right to investigate a tight integration of natural language understanding into RL in particular. We survey the state of the field, including work on instruction following, text games, and learning from textual domain knowledge. Finally, we call for the development of new environments as well as further investigation into the potential uses of recent Natural Language Processing (NLP) techniques for such tasks.', 'year': 2019, 'in_acl': False, 'citationCount': 268, 'section': 'Reasoning', 'subsection': None}, {'id': 211171653, 'paperId': '20dc158a6abd1f92a4534ae064d527821a91685d', 'title': 'VQA-LOL: Visual Question Answering under the Lens of Logic', 'authors': [{'authorId': '120838645', 'name': 'Tejas Gokhale'}, {'authorId': '120722271', 'name': 'Pratyay Banerjee'}, {'authorId': '1760291', 'name': 'Chitta Baral'}, {'authorId': '1784500', 'name': 'Yezhou Yang'}], 'venue': 'European Conference on Computer Vision', 'abstract': 'Logical connectives and their implications on the meaning of a natural language sentence are a fundamental aspect of understanding. In this paper, we investigate whether visual question answering (VQA) systems trained to answer a question about an image, are able to answer the logical composition of multiple such questions. When put under this Lens of Logic, state-of-the-art VQA models have difficulty in correctly answering these logically composed questions. We construct an augmentation of the VQA dataset as a benchmark, with questions containing logical compositions and linguistic transformations (negation, disjunction, conjunction, and antonyms). We propose our Lens of Logic (LOL) model which uses question-attention and logic-attention to understand logical connectives in the question, and a novel Fréchet-Compatibility Loss, which ensures that the answers of the component questions and the composed question are consistent with the inferred logical operation. Our model shows substantial improvement in learning logical compositions while retaining performance on VQA. We suggest this work as a move towards robustness by embedding logical connectives in visual understanding.', 'year': 2020, 'in_acl': False, 'citationCount': 72, 'section': 'Reasoning', 'subsection': None}, {'id': 220047809, 'paperId': '474952c4ceeec59d2677c60e92ebbf6d34140b2d', 'title': 'Cross-modal Coherence Modeling for Caption Generation', 'authors': [{'authorId': '2715920', 'name': 'Malihe Alikhani'}, {'authorId': '48267618', 'name': 'Piyush Sharma'}, {'authorId': '51019115', 'name': 'Shengjie Li'}, {'authorId': '1737285', 'name': 'Radu Soricut'}, {'authorId': '144884556', 'name': 'Matthew Stone'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We use coherence relations inspired by computational models of discourse to study the information needs and goals of image captioning. Using an annotation protocol specifically devised for capturing image–caption coherence relations, we annotate 10,000 instances from publicly-available image–caption pairs. We introduce a new task for learning inferences in imagery and text, coherence relation prediction, and show that these coherence annotations can be exploited to learn relation classifiers as an intermediary step, and also train coherence-aware, controllable image captioning models. The results show a dramatic improvement in the consistency and quality of the generated captions with respect to information needs specified via coherence relations.', 'year': 2020, 'in_acl': True, 'citationCount': 53, 'section': 'Generation', 'subsection': None}, {'id': 232035663, 'paperId': '2cd605106b88c85d7d8b865b1ef0f8c8293debf1', 'title': 'Zero-Shot Text-to-Image Generation', 'authors': [{'authorId': '1992922591', 'name': 'A. Ramesh'}, {'authorId': '2068123790', 'name': 'Mikhail Pavlov'}, {'authorId': '40087786', 'name': 'Gabriel Goh'}, {'authorId': '145565184', 'name': 'Scott Gray'}, {'authorId': '153387869', 'name': 'Chelsea Voss'}, {'authorId': '38909097', 'name': 'Alec Radford'}, {'authorId': '2108828435', 'name': 'Mark Chen'}, {'authorId': '1701686', 'name': 'I. Sutskever'}], 'venue': 'International Conference on Machine Learning', 'abstract': 'Text-to-image generation has traditionally focused on finding better modeling assumptions for training on a fixed dataset. These assumptions might involve complex architectures, auxiliary losses, or side information such as object part labels or segmentation masks supplied during training. We describe a simple approach for this task based on a transformer that autoregressively models the text and image tokens as a single stream of data. With sufficient data and scale, our approach is competitive with previous domain-specific models when evaluated in a zero-shot fashion.', 'year': 2021, 'in_acl': False, 'citationCount': 4039, 'section': 'Generation', 'subsection': None}, {'id': 235899386, 'paperId': 'af86df6a0af3226a1b4b5eb27c17c9e45367f896', 'title': 'MultiBench: Multiscale Benchmarks for Multimodal Representation Learning', 'authors': [{'authorId': '28130078', 'name': 'P. Liang'}, {'authorId': '2066413750', 'name': 'Yiwei Lyu'}, {'authorId': '2152774190', 'name': 'Xiang Fan'}, {'authorId': '1576080679', 'name': 'Zetian Wu'}, {'authorId': '2153511720', 'name': 'Yun Cheng'}, {'authorId': '2115565414', 'name': 'Jason Wu'}, {'authorId': '2146072218', 'name': 'Leslie Chen'}, {'authorId': '2111194238', 'name': 'Peter Wu'}, {'authorId': '2115303776', 'name': 'Michelle A. Lee'}, {'authorId': '2117748', 'name': 'Yuke Zhu'}, {'authorId': '145124475', 'name': 'R. Salakhutdinov'}, {'authorId': '49933077', 'name': 'Louis-philippe Morency'}], 'venue': 'NeurIPS Datasets and Benchmarks', 'abstract': 'Learning multimodal representations involves integrating information from multiple heterogeneous sources of data. It is a challenging yet crucial area with numerous real-world applications in multimedia, affective computing, robotics, finance, human-computer interaction, and healthcare. Unfortunately, multimodal research has seen limited resources to study (1) generalization across domains and modalities, (2) complexity during training and inference, and (3) robustness to noisy and missing modalities. In order to accelerate progress towards understudied modalities and tasks while ensuring real-world robustness, we release MultiBench, a systematic and unified large-scale benchmark for multimodal learning spanning 15 datasets, 10 modalities, 20 prediction tasks, and 6 research areas. MultiBench provides an automated end-to-end machine learning pipeline that simplifies and standardizes data loading, experimental setup, and model evaluation. To enable holistic evaluation, MultiBench offers a comprehensive methodology to assess (1) generalization, (2) time and space complexity, and (3) modality robustness. MultiBench introduces impactful challenges for future research, including scalability to large-scale multimodal datasets and robustness to realistic imperfections. To accompany this benchmark, we also provide a standardized implementation of 20 core approaches in multimodal learning spanning innovations in fusion paradigms, optimization objectives, and training approaches. Simply applying methods proposed in different research areas can improve the state-of-the-art performance on 9/15 datasets. Therefore, MultiBench presents a milestone in unifying disjoint efforts in multimodal machine learning research and paves the way towards a better understanding of the capabilities and limitations of multimodal models, all the while ensuring ease of use, accessibility, and reproducibility. MultiBench, our standardized implementations, and leaderboards are publicly available, will be regularly updated, and welcomes inputs from the community.', 'year': 2021, 'in_acl': False, 'citationCount': 139, 'section': 'Quantification', 'subsection': None}, {'id': 236087303, 'paperId': 'd7b8014c2a348a631ed466c6fba3825330b2f195', 'title': 'M2Lens: Visualizing and Explaining Multimodal Models for Sentiment Analysis', 'authors': [{'authorId': '50141732', 'name': 'Xingbo Wang'}, {'authorId': '35573278', 'name': 'Jianben He'}, {'authorId': '2111472932', 'name': 'Zhihua Jin'}, {'authorId': '8724126', 'name': 'Muqiao Yang'}, {'authorId': '2300486168', 'name': 'Huamin Qu'}], 'venue': 'IEEE Transactions on Visualization and Computer Graphics', 'abstract': "Multimodal sentiment analysis aims to recognize people's attitudes from multiple communication channels such as verbal content (i.e., text), voice, and facial expressions. It has become a vibrant and important research topic in natural language processing. Much research focuses on modeling the complex intra- and inter-modal interactions between different communication channels. However, current multimodal models with strong performance are often deep-learning-based techniques and work like black boxes. It is not clear how models utilize multimodal information for sentiment predictions. Despite recent advances in techniques for enhancing the explainability of machine learning models, they often target unimodal scenarios (e.g., images, sentences), and little research has been done on explaining multimodal models. In this paper, we present an interactive visual analytics system, M2Lens, to visualize and explain multimodal models for sentiment analysis. M2Lens provides explanations on intra- and inter-modal interactions at the global, subset, and local levels. Specifically, it summarizes the influence of three typical interaction types (i.e., dominance, complement, and conflict) on the model predictions. Moreover, M2Lens identifies frequent and influential multimodal features and supports the multi-faceted exploration of model behaviors from language, acoustic, and visual modalities. Through two case studies and expert interviews, we demonstrate our system can help users gain deep insights into the multimodal models for sentiment analysis.", 'year': 2021, 'in_acl': False, 'citationCount': 61, 'section': 'Quantification', 'subsection': None}, {'id': 4384334, 'paperId': 'b0c5dc3fa19a2bc97606ccb6f55226b913984395', 'title': 'Women also Snowboard: Overcoming Bias in Captioning Models', 'authors': [{'authorId': '40895688', 'name': 'Kaylee Burns'}, {'authorId': '2234342', 'name': 'Lisa Anne Hendricks'}, {'authorId': '1753210', 'name': 'Trevor Darrell'}, {'authorId': '34721166', 'name': 'Anna Rohrbach'}], 'venue': 'European Conference on Computer Vision', 'abstract': 'Most machine learning methods are known to capture and exploit biases of the training data. While some biases are beneficial for learning, others are harmful. Specifically, image captioning models tend to exaggerate biases present in training data (e.g., if a word is present in 60% of training sentences, it might be predicted in 70% of sentences at test time). This can lead to incorrect captions in domains where unbiased captions are desired, or required, due to over-reliance on the learned prior and image context. In this work we investigate generation of gender-specific caption words (e.g. man, woman) based on the person’s appearance or the image context. We introduce a new Equalizer model that encourages equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present. The resulting model is forced to look at a person rather than use contextual cues to make a gender-specific prediction. The losses that comprise our model, the Appearance Confusion Loss and the Confident Loss, are general, and can be added to any description model in order to mitigate impacts of unwanted bias in a description dataset. Our proposed model has lower error than prior work when describing images with people and mentioning their gender and more closely matches the ground truth ratio of sentences including women to sentences including men. Finally, we show that our model more often looks at people when predicting their gender (https://people.eecs.berkeley.edu/~lisa anne/snowboard.html).', 'year': 2018, 'in_acl': False, 'citationCount': 459, 'section': 'Quantification', 'subsection': None}]
|
2022.naacl-tutorials.6
|
Contrastive Data and Learning for Natural Language Processing
|
Current NLP models heavily rely on effective representation learning algorithms. Contrastive learning is one such technique to learn an embedding space such that similar data sample pairs have close representations while dissimilar samples stay far apart from each other. It can be used in supervised or unsupervised settings using different loss functions to produce task-specific or general-purpose representations. While it has originally enabled the success for vision tasks, recent years have seen a growing number of publications in contrastive NLP. This first line of works not only delivers promising performance improvements in various NLP tasks, but also provides desired characteristics such as task-agnostic sentence representation, faithful text generation, data-efficient learning in zero-shot and few-shot settings, interpretability and explainability. In this tutorial, we aim to provide a gentle introduction to the fundamentals of contrastive learning approaches and the theory behind them. We then survey the benefits and the best practices of contrastive learning for various downstream NLP applications including Text Classification, Question Answering, Summarization, Text Generation, Interpretability and Explainability, Commonsense Knowledge and Reasoning, Vision-and-Language.This tutorial intends to help researchers in the NLP and computational linguistics community to understand this emerging topic and promote future research directions of using contrastive learning for NLP applications.
| 2,022
|
https://aclanthology.org/2022.naacl-tutorials.6
|
NAACL
|
[{'id': 211096730, 'paperId': '7af72a461ed7cda180e7eab878efd5f35d79bbf4', 'title': 'A Simple Framework for Contrastive Learning of Visual Representations', 'authors': [{'authorId': '145358498', 'name': 'Ting Chen'}, {'authorId': '40464924', 'name': 'Simon Kornblith'}, {'authorId': '144739074', 'name': 'Mohammad Norouzi'}, {'authorId': '1695689', 'name': 'Geoffrey E. Hinton'}], 'venue': 'International Conference on Machine Learning', 'abstract': 'This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.', 'year': 2020, 'in_acl': False, 'citationCount': 16161, 'section': None, 'subsection': None}, {'id': 231591445, 'paperId': '6f870f7f02a8c59c3e23f407f3ef00dd1dcf8fc4', 'title': 'Learning Transferable Visual Models From Natural Language Supervision', 'authors': [{'authorId': '38909097', 'name': 'Alec Radford'}, {'authorId': '2110935237', 'name': 'Jong Wook Kim'}, {'authorId': '2004021329', 'name': 'Chris Hallacy'}, {'authorId': '1992922591', 'name': 'A. Ramesh'}, {'authorId': '40087786', 'name': 'Gabriel Goh'}, {'authorId': '144517868', 'name': 'Sandhini Agarwal'}, {'authorId': '144864359', 'name': 'Girish Sastry'}, {'authorId': '119609682', 'name': 'Amanda Askell'}, {'authorId': '2051714782', 'name': 'Pamela Mishkin'}, {'authorId': '2115193883', 'name': 'Jack Clark'}, {'authorId': '2064404342', 'name': 'Gretchen Krueger'}, {'authorId': '1701686', 'name': 'I. Sutskever'}], 'venue': 'International Conference on Machine Learning', 'abstract': 'State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.', 'year': 2021, 'in_acl': False, 'citationCount': 20677, 'section': None, 'subsection': None}, {'id': 233296292, 'paperId': 'c26759e6c701201af2f62f7ee4eb68742b5bf085', 'title': 'SimCSE: Simple Contrastive Learning of Sentence Embeddings', 'authors': [{'authorId': '4800645', 'name': 'Tianyu Gao'}, {'authorId': '2087141625', 'name': 'Xingcheng Yao'}, {'authorId': '50536468', 'name': 'Danqi Chen'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'This paper presents SimCSE, a simple contrastive learning framework that greatly advances the state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works surprisingly well, performing on par with previous supervised counterparts. We find that dropout acts as minimal data augmentation and removing it leads to a representation collapse. Then, we propose a supervised approach, which incorporates annotated pairs from natural language inference datasets into our contrastive learning framework, by using “entailment” pairs as positives and “contradiction” pairs as hard negatives. We evaluate SimCSE on standard semantic textual similarity (STS) tasks, and our unsupervised and supervised models using BERT base achieve an average of 76.3% and 81.6% Spearman’s correlation respectively, a 4.2% and 2.2% improvement compared to previous best results. We also show—both theoretically and empirically—that contrastive learning objective regularizes pre-trained embeddings’ anisotropic space to be more uniform, and it better aligns positive pairs when supervised signals are available.', 'year': 2021, 'in_acl': True, 'citationCount': 2826, 'section': None, 'subsection': None}, {'id': 222124366, 'paperId': '3f9514630194a9fba9505b594ec921b247fecb48', 'title': 'Evaluating Models’ Local Decision Boundaries via Contrast Sets', 'authors': [{'authorId': '40642935', 'name': 'Matt Gardner'}, {'authorId': '3167681', 'name': 'Yoav Artzi'}, {'authorId': '1750652', 'name': 'Jonathan Berant'}, {'authorId': '50757607', 'name': 'Ben Bogin'}, {'authorId': '2087205', 'name': 'Sihao Chen'}, {'authorId': '33546336', 'name': 'Dheeru Dua'}, {'authorId': '51131518', 'name': 'Yanai Elazar'}, {'authorId': '1471885977', 'name': 'Ananth Gottumukkala'}, {'authorId': '2285178', 'name': 'Nitish Gupta'}, {'authorId': '2548384', 'name': 'Hannaneh Hajishirzi'}, {'authorId': '2123694087', 'name': 'Gabriel Ilharco'}, {'authorId': '1783281', 'name': 'Daniel Khashabi'}, {'authorId': '2154184644', 'name': 'Kevin Lin'}, {'authorId': '2262374', 'name': 'Jiangming Liu'}, {'authorId': '22243769', 'name': 'Nelson F. Liu'}, {'authorId': '46244238', 'name': 'Phoebe Mulcaire'}, {'authorId': '3333257', 'name': 'Qiang Ning'}, {'authorId': '34650964', 'name': 'Sameer Singh'}, {'authorId': '144365875', 'name': 'Noah A. Smith'}, {'authorId': '17097887', 'name': 'Sanjay Subramanian'}, {'authorId': '145217343', 'name': 'Eric Wallace'}, {'authorId': '2153658440', 'name': 'Ally Zhang'}, {'authorId': '145360756', 'name': 'Ben Zhou'}], 'venue': 'Findings', 'abstract': 'Standard test sets for supervised learning evaluate in-distribution generalization. Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture the abilities a dataset is intended to test. We propose a more rigorous annotation paradigm for NLP that helps to close systematic gaps in the test data. In particular, after a dataset is constructed, we recommend that the dataset authors manually perturb the test instances in small but meaningful ways that (typically) change the gold label, creating contrast sets. Contrast sets provide a local view of a model’s decision boundary, which can be used to more accurately evaluate a model’s true linguistic capabilities. We demonstrate the efficacy of contrast sets by creating them for 10 diverse NLP datasets (e.g., DROP reading comprehension, UD parsing, and IMDb sentiment analysis). Although our contrast sets are not explicitly adversarial, model performance is significantly lower on them than on the original test sets—up to 25% in some cases. We release our contrast sets as new evaluation benchmarks and encourage future dataset construction efforts to follow similar annotation processes.', 'year': 2020, 'in_acl': True, 'citationCount': 367, 'section': None, 'subsection': None}]
|
2023.acl-tutorials.1
|
Goal Awareness for Conversational AI: Proactivity, Non-collaborativity, and Beyond
|
Conversational systems are envisioned to provide social support or functional service to human users via natural language interactions. Conventional conversation researches mainly focus on the responseability of the system, such as dialogue context understanding and response generation, but overlooks the design of an essential property in intelligent conversations, i.e., goal awareness. The awareness of goals means the state of not only being responsive to the users but also aware of the target conversational goal and capable of leading the conversation towards the goal, which is a significant step towards higher-level intelligence and artificial consciousness. It can not only largely improve user engagement and service efficiency in the conversation, but also empower the system to handle more complicated conversation tasks that involve strategical and motivational interactions. In this tutorial, we will introduce the recent advances on the design of agent’s awareness of goals in a wide range of conversational systems.
| 2,023
|
https://aclanthology.org/2023.acl-tutorials.1
|
ACL
|
[{'id': 12633363, 'paperId': 'a5a8dfe5dcfa998124c8f115f5f743da2c40d714', 'title': 'Deep Learning for Dialogue Systems', 'authors': [{'authorId': '1725643', 'name': 'Yun-Nung (Vivian) Chen'}, {'authorId': '1709797', 'name': 'Asli Celikyilmaz'}, {'authorId': '1395813836', 'name': 'Dilek Z. Hakkani-Tür'}], 'venue': 'International Conference on Computational Linguistics', 'abstract': 'In the past decade, goal-oriented spoken dialogue systems have been the most prominent component in today’s virtual personal assistants. The classic dialogue systems have rather complex and/or modular pipelines. The advance of deep learning technologies has recently risen the applications of neural models to dialogue modeling. However, how to successfully apply deep learning based approaches to a dialogue system is still challenging. Hence, this tutorial is designed to focus on an overview of the dialogue system development while describing most recent research for building dialogue systems and summarizing the challenges, in order to allow researchers to study the potential improvements of the state-of-the-art dialogue systems. The tutorial material is available at http://deepdialogue. miulab.tw.', 'year': 2018, 'in_acl': True, 'citationCount': 52, 'section': 'Previous Tutorials', 'subsection': None}, {'id': 46918362, 'paperId': '8cf9c6bccccde9283f4089f7fe99fb1fa2c2df9a', 'title': 'Deep Learning for Conversational AI', 'authors': [{'authorId': '2131709', 'name': 'Pei-hao Su'}, {'authorId': '3334541', 'name': 'N. Mrksic'}, {'authorId': '3450866', 'name': 'I. Casanueva'}, {'authorId': '1747849', 'name': 'Ivan Vulic'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Spoken Dialogue Systems (SDS) have great commercial potential as they promise to revolutionise the way in which humans interact with machines. The advent of deep learning led to substantial developments in this area of NLP research, and the goal of this tutorial is to familiarise the research community with the recent advances in what some call the most difficult problem in NLP. From a research perspective, the design of spoken dialogue systems provides a number of significant challenges, as these systems depend on: a) solving several difficult NLP and decision-making tasks; and b) combining these into a functional dialogue system pipeline. A key long-term goal of dialogue system research is to enable open-domain systems that can converse about arbitrary topics and assist humans with completing a wide range of tasks. Furthermore, such systems need to autonomously learn on-line to improve their performance and recover from errors using both signals from their environment and from implicit and explicit user feedback. While the design of such systems has traditionally been modular, domain and language-specific, advances in deep learning have alleviated many of the design problems. The main purpose of this tutorial is to encourage dialogue research in the NLP community by providing the research background, a survey of available resources, and giving key insights to application of state-of-the-art SDS methodology into industry-scale conversational AI systems. We plan to introduce researchers to the pipeline framework for modelling goal-oriented dialogue systems, which includes three key components: 1) Language Understanding; 2) Dialogue Management; and 3) Language Generation. The differences between goal-oriented dialogue systems and chat-bot style conversational agents will be explained in order to show the motivation behind the design of both, with the main focus on the pipeline SDS framework. For each key component, we will define the research problem, provide a brief literature review and introduce the current state-of-the-art approaches. Complementary resources (e.g. available datasets and toolkits) will also be discussed. Finally, future work, outstanding challenges, and current industry practices will be presented. All of the presented material will be made available online for future reference.', 'year': 2018, 'in_acl': True, 'citationCount': 16, 'section': 'Previous Tutorials', 'subsection': None}, {'id': 68167178, 'paperId': '83e567c2822aeda91006096a5d7ac0b34721d2a5', 'title': 'Neural Approaches to Conversational AI', 'authors': [{'authorId': '48441311', 'name': 'Jianfeng Gao'}, {'authorId': '1947267', 'name': 'Michel Galley'}, {'authorId': '47681372', 'name': 'Lihong Li'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'This tutorial surveys neural approaches to conversational AI that were developed in the last few years. We group conversational systems into three categories: (1) question answering agents, (2) task-oriented dialogue agents, and (3) social bots. For each category, we present a review of state-of-the-art neural approaches, draw the connection between neural approaches and traditional symbolic approaches, and discuss the progress we have made and challenges we are facing, using specific systems and models as case studies.', 'year': 2018, 'in_acl': True, 'citationCount': 645, 'section': 'Previous Tutorials', 'subsection': None}, {'id': 220730260, 'paperId': '2165904b8041784ac545e7934d235664b6e77a08', 'title': 'Recent Advances in Conversational Information Retrieval', 'authors': [{'authorId': '48441311', 'name': 'Jianfeng Gao'}, {'authorId': '144628574', 'name': 'Chenyan Xiong'}, {'authorId': '144609235', 'name': 'Paul N. Bennett'}], 'venue': 'Annual International ACM SIGIR Conference on Research and Development in Information Retrieval', 'abstract': 'Recent progress in deep learning has brought tremendous improvements in conversational AI, leading to a plethora of commercial conversational services that allow naturally spoken interactions, increasing the need for more human-centric interactions in IR. As a result, we have witnessed a resurgent interest in developing modern CIR systems in research communities and industry. This tutorial presents recent advances in CIR, focusing mainly on neural approaches and new applications developed in the past five years. Our goal is to provide a thorough and in-depth overview of the general definition of CIR, the components of CIR systems, new applications raised for its conversational aspects, and the (neural) techniques recently developed for it.', 'year': 2020, 'in_acl': False, 'citationCount': 19, 'section': 'Previous Tutorials', 'subsection': None}, {'id': 250340235, 'paperId': '8bc6162766b4e6cd616ad508ea488ecc628cf4ac', 'title': 'Conversational Information Seeking: Theory and Application', 'authors': [{'authorId': '145269114', 'name': 'Jeffrey Dalton'}, {'authorId': '2164704207', 'name': 'Sophie Fischer'}, {'authorId': '2105439683', 'name': 'Paul Owoicho'}, {'authorId': '2065812052', 'name': 'Filip Radlinski'}, {'authorId': '48890086', 'name': 'Federico Rossetto'}, {'authorId': '2528063', 'name': 'Johanne R. Trippas'}, {'authorId': '2499986', 'name': 'Hamed Zamani'}], 'venue': 'Annual International ACM SIGIR Conference on Research and Development in Information Retrieval', 'abstract': 'Conversational information seeking (CIS) involves interaction sequences between one or more users and an information system. Interactions in CIS are primarily based on natural language dialogue, while they may include other types of interactions, such as click, touch, and body gestures. CIS recently attracted significant attention and advancements continue to be made. This tutorial follows the content of the recent Conversational Information Seeking book authored by several of the tutorial presenters. The tutorial aims to be an introduction to CIS for newcomers to CIS in addition to the recent advanced topics and state-of-the-art approaches for students and researchers with moderate knowledge of the topic. A significant part of the tutorial is dedicated to hands-on experiences based on toolkits developed by the presenters for conversational passage retrieval and multi-modal task-oriented dialogues. The outcomes of this tutorial include theoretical and practical knowledge, including a forum to meet researchers interested in CIS.', 'year': 2022, 'in_acl': False, 'citationCount': 22, 'section': 'Previous Tutorials', 'subsection': None}, {'id': 5523008, 'paperId': 'a6401e102c03a441992b3e45f7b63eec09d4b89d', 'title': 'A Survey on Dialogue Systems: Recent Advances and New Frontiers', 'authors': [{'authorId': '2957953', 'name': 'Hongshen Chen'}, {'authorId': '1390612725', 'name': 'Xiaorui Liu'}, {'authorId': '50559722', 'name': 'Dawei Yin'}, {'authorId': '1736632', 'name': 'Jiliang Tang'}], 'venue': 'SKDD', 'abstract': 'Dialogue systems have attracted more and more attention. Recent advances on dialogue systems are overwhelmingly contributed by deep learning techniques, which have been employed to enhance a wide range of big data applications such as computer vision, natural language processing, and recommender systems. For dialogue systems, deep learning can leverage a massive amount of data to learn meaningful feature representations and response generation strategies, while requiring a minimum amount of hand-crafting. In this article, we give an overview to these recent advances on dialogue systems from various perspectives and discuss some possible research directions. In particular, we generally divide existing dialogue systems into task-oriented and nontask- oriented models, then detail how deep learning techniques help them with representative algorithms and finally discuss some appealing research directions that can bring the dialogue system research into a new frontier', 'year': 2017, 'in_acl': False, 'citationCount': 659, 'section': 'Related Surveys or Book Chapters', 'subsection': None}, {'id': 68167178, 'paperId': '83e567c2822aeda91006096a5d7ac0b34721d2a5', 'title': 'Neural Approaches to Conversational AI', 'authors': [{'authorId': '48441311', 'name': 'Jianfeng Gao'}, {'authorId': '1947267', 'name': 'Michel Galley'}, {'authorId': '47681372', 'name': 'Lihong Li'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'This tutorial surveys neural approaches to conversational AI that were developed in the last few years. We group conversational systems into three categories: (1) question answering agents, (2) task-oriented dialogue agents, and (3) social bots. For each category, we present a review of state-of-the-art neural approaches, draw the connection between neural approaches and traditional symbolic approaches, and discuss the progress we have made and challenges we are facing, using specific systems and models as case studies.', 'year': 2018, 'in_acl': True, 'citationCount': 645, 'section': 'Related Surveys or Book Chapters', 'subsection': None}, {'id': 153312680, 'paperId': '89e65078d37d076627818d9dba2c8ca9bf8f66bc', 'title': 'Challenges in Building Intelligent Open-domain Dialog Systems', 'authors': [{'authorId': '1730108', 'name': 'Minlie Huang'}, {'authorId': '145213540', 'name': 'Xiaoyan Zhu'}, {'authorId': '1800422', 'name': 'Jianfeng Gao'}], 'venue': 'ACM Trans. Inf. Syst.', 'abstract': 'There is a resurgent interest in developing intelligent open-domain dialog systems due to the availability of large amounts of conversational data and the recent progress on neural approaches to conversational AI [33]. Unlike traditional task-oriented bots, an open-domain dialog system aims to establish long-term connections with users by satisfying the human need for communication, affection, and social belonging. This article reviews the recent work on neural approaches that are devoted to addressing three challenges in developing such systems: semantics, consistency, and interactiveness. Semantics requires a dialog system to not only understand the content of the dialog but also identify users’ emotional and social needs during the conversation. Consistency requires the system to demonstrate a consistent personality to win users’ trust and gain their long-term confidence. Interactiveness refers to the system’s ability to generate interpersonal responses to achieve particular social goals such as entertainment and conforming. The studies we select to present in this survey are based on our unique views and are by no means complete. Nevertheless, we hope that the discussion will inspire new research in developing more intelligent open-domain dialog systems.', 'year': 2019, 'in_acl': False, 'citationCount': 287, 'section': 'Related Surveys or Book Chapters', 'subsection': None}, {'id': 246210119, 'paperId': '4ad0dc7a0a6a0142d308439b6c7375d27d81db36', 'title': 'Conversational Information Seeking', 'authors': [{'authorId': '2499986', 'name': 'Hamed Zamani'}, {'authorId': '2528063', 'name': 'Johanne R. Trippas'}, {'authorId': '145269114', 'name': 'Jeffrey Dalton'}, {'authorId': '2065812052', 'name': 'Filip Radlinski'}], 'venue': 'Foundations and Trends in Information Retrieval', 'abstract': 'Conversational information seeking (CIS) is concerned with a sequence of interactions between one or more users and an information system. Interactions in CIS are primarily based on natural language dialogue, while they may include other types of interactions, such as click, touch, and body gestures. This monograph provides a thorough overview of CIS definitions, applications, interactions, interfaces, design, implementation, and evaluation. This monograph views CIS applications as including conversational search, conversational question answering, and conversational recommendation. Our aim is to provide an overview of past research related to CIS, introduce the current state-of-the-art in CIS, highlight the challenges still being faced in the community. and suggest future directions.', 'year': 2022, 'in_acl': False, 'citationCount': 76, 'section': 'Related Surveys or Book Chapters', 'subsection': None}, {'id': 245986359, 'paperId': 'b97a33933541c276778c3fe63baad6964f4bdf44', 'title': 'Neural Approaches to Conversational Information Retrieval', 'authors': [{'authorId': '48441311', 'name': 'Jianfeng Gao'}, {'authorId': '2139787803', 'name': 'Chenyan Xiong'}, {'authorId': '144609235', 'name': 'Paul N. Bennett'}, {'authorId': '2286321410', 'name': 'Nick Craswell'}], 'venue': 'The Information Retrieval Series', 'abstract': 'This book surveys recent advances in Conversational Information Retrieval (CIR), focusing on neural approaches that have been developed in the last few years. Progress in deep learning has brought tremendous improvements in natural language processing (NLP) and conversational AI, leading to a plethora of commercial conversational services that allow naturally spoken and typed interaction, increasing the need for more human-centric interactions in IR. The book contains nine chapters. Chapter 1 motivates the research of CIR by reviewing the studies on how people search and subsequently defines a CIR system and a reference architecture which is described in detail in the rest of the book. Chapter 2 provides a detailed discussion of techniques for evaluating a CIR system – a goal-oriented conversational AI system with a human in the loop. Then Chapters 3 to 7 describe the algorithms and methods for developing the main CIR modules (or sub-systems). In Chapter 3, conversational document search is discussed, which can be viewed as a sub-system of the CIR system. Chapter 4 is about algorithms and methods for query-focused multi-document summarization. Chapter 5 describes various neural models for conversational machine comprehension, which generate a direct answer to a user query based on retrieved query-relevant documents, while Chapter 6 details neural approaches to conversational question answering over knowledge bases, which is fundamental to the knowledge base search module of a CIR system. Chapter 7 elaborates various techniques and models that aim to equip a CIR system with the capability of proactively leading a human-machine conversation. Chapter 8 reviews a variety of commercial systems for CIR and related tasks. It first presents an overview of research platforms and toolkits which enable scientists and practitioners to build conversational experiences, and continues with historical highlights and recent trends in a range of application areas. Chapter 9 eventually concludes the book with a brief discussion of research trends and areas for future work. The primary target audience of the book are the IR and NLP research communities. However, audiences with another background, such as machine learning or human-computer interaction, will also find it an accessible introduction to CIR.', 'year': 2022, 'in_acl': False, 'citationCount': 66, 'section': 'Related Surveys or Book Chapters', 'subsection': None}, {'id': 249808144, 'paperId': 'a7c86246a3cfdfb5219a0fec1426ecd9072c66ab', 'title': 'Deep Learning for Dialogue Systems: Chit-Chat and Beyond', 'authors': [{'authorId': '2055863231', 'name': 'Rui Yan'}, {'authorId': '143959787', 'name': 'Juntao Li'}, {'authorId': '144007938', 'name': 'Zhou Yu'}], 'venue': 'Foundations and Trends in Information Retrieval', 'abstract': 'Recommendation, information retrieval, and other information access systems pose unique challenges for investigating and applying the fairness and non-discrimination concepts that have been developed for studying other machine learning systems. While fair information access shares many commonalities with fair classification, there are important differences: the multistakeholder nature of information access applications, the rank-based problem setting, the centrality of personalization in many cases, and the role of user response all complicate the problem of identifying precisely what types and operationalizations of fairness may be relevant. In this monograph, we present a taxonomy of the various dimensions of fair information access and survey the literature to date on this new and rapidly-growing topic. We Michael D. Ekstrand, Anubrata Das, Robin Burke and Fernando Diaz (2022), “Fairness in Information Access Systems”, Foundations and Trends® in Information Retrieval: Vol. 16, No. 1-2, pp 1–177. DOI: 10.1561/1500000079. ©2022 M. D. Ekstrand et al. Full text available at: http://dx.doi.org/10.1561/1500000079', 'year': 2022, 'in_acl': False, 'citationCount': 18, 'section': 'Related Surveys or Book Chapters', 'subsection': None}]
|
2023.acl-tutorials.2
|
Complex Reasoning in Natural Language
|
Teaching machines to reason over texts has been a long-standing goal of natural language processing (NLP). To this end, researchers have designed a diverse set of complex reasoning tasks that involve compositional reasoning, knowledge retrieval, grounding, commonsense reasoning, etc. A standard choice for building systems that perform a desired type of reasoning is to fine-tune a pretrained language model (LM) on specific downstream tasks. However, recent research has demonstrated that such a straightforward approach is often brittle. For example, Elazar et al. (2021) and Branco et al. (2021) show that, on question-answering (QA) tasks, similar performance can be achieved with questions removed from the inputs. Min et al. (2019), Chen and Durrett (2019), and Tang et al. (2021) show that models trained on multi-hop QA do not generalize to answer single-hop questions. The reasoning capabilities of these models thus remain at a surface level, i.e., exploiting data patterns. Consequently, augmenting LMs with techniques that make them robust and effective becomes an active research area. We will start the tutorial by providing an overview of complex reasoning tasks where the standard application of pretrained language models fails. This tutorial then reviews recent promising directions for tackling these tasks. Specifically, we focus on the following groups of approaches that explicitly consider problem structures: (1) knowledge-augmented methods, where the knowledge is either incorporated during fine-tuning or pretraining; (2) few-shot prompting methods, which effectively guide the models to follow instructions; (3) neuro-symbolic methods, which produce explicit intermediate representations; and, (4) rationale-based methods, one of the most popular forms of the neuro-symbolic methods, which highlight subsets of input as explanations for individual model predictions.
| 2,023
|
https://aclanthology.org/2023.acl-tutorials.2
|
ACL
|
[{'id': 236447339, 'paperId': '0e6e8274d0dcbc1c3c1ccdbd87f3e5d53fdf62b4', 'title': 'QA Dataset Explosion: A Taxonomy of NLP Resources for Question Answering and Reading Comprehension', 'authors': [{'authorId': '145046059', 'name': 'Anna Rogers'}, {'authorId': '40642935', 'name': 'Matt Gardner'}, {'authorId': '1736067', 'name': 'Isabelle Augenstein'}], 'venue': 'ACM Computing Surveys', 'abstract': 'Alongside huge volumes of research on deep learning models in NLP in the recent years, there has been much work on benchmark datasets needed to track modeling progress. Question answering and reading comprehension have been particularly prolific in this regard, with more than 80 new datasets appearing in the past 2 years. This study is the largest survey of the field to date. We provide an overview of the various formats and domains of the current resources, highlighting the current lacunae for future work. We further discuss the current classifications of “skills” that question answering/reading comprehension systems are supposed to acquire and propose a new taxonomy. The supplementary materials survey the current multilingual resources and monolingual resources for languages other than English, and we discuss the implications of overfocusing on English. The study is aimed at both practitioners looking for pointers to the wealth of existing data and at researchers working on new resources.', 'year': 2021, 'in_acl': False, 'citationCount': 143, 'section': None, 'subsection': None}, {'id': 213613608, 'paperId': '4043a936960de8e149dc208178fe1bcb157c7fa4', 'title': 'Recent Advances in Natural Language Inference: A Survey of Benchmarks, Resources, and Approaches', 'authors': [{'authorId': '89093987', 'name': 'Shane Storks'}, {'authorId': '3193409', 'name': 'Qiaozi Gao'}, {'authorId': '1707259', 'name': 'J. Chai'}], 'venue': '', 'abstract': "In the NLP community, recent years have seen a surge of research activities that address machines' ability to perform deep language understanding which goes beyond what is explicitly stated in text, rather relying on reasoning and knowledge of the world. Many benchmark tasks and datasets have been created to support the development and evaluation of such natural language inference ability. As these benchmarks become instrumental and a driving force for the NLP research community, this paper aims to provide an overview of recent benchmarks, relevant knowledge resources, and state-of-the-art learning and inference approaches in order to support a better understanding of this growing field.", 'year': 2019, 'in_acl': False, 'citationCount': 116, 'section': None, 'subsection': None}, {'id': 236493269, 'paperId': '28692beece311a90f5fa1ca2ec9d0c2ce293d069', 'title': 'Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing', 'authors': [{'authorId': '144118452', 'name': 'Pengfei Liu'}, {'authorId': '30300197', 'name': 'Weizhe Yuan'}, {'authorId': '41037252', 'name': 'Jinlan Fu'}, {'authorId': '2669515', 'name': 'Zhengbao Jiang'}, {'authorId': '50376014', 'name': 'Hiroaki Hayashi'}, {'authorId': '1700325', 'name': 'Graham Neubig'}], 'venue': 'ACM Computing Surveys', 'abstract': 'This article surveys and organizes research works in a new paradigm in natural language processing, which we dub “prompt-based learning.” Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly. To use these models to perform prediction tasks, the original input x is modified using a template into a textual string prompt x′ that has some unfilled slots, and then the language model is used to probabilistically fill the unfilled information to obtain a final string x̂, from which the final output y can be derived. This framework is powerful and attractive for a number of reasons: It allows the language model to be pre-trained on massive amounts of raw text, and by defining a new prompting function the model is able to perform few-shot or even zero-shot learning, adapting to new scenarios with few or no labeled data. In this article, we introduce the basics of this promising paradigm, describe a unified set of mathematical notations that can cover a wide variety of existing work, and organize existing work along several dimensions, e.g., the choice of pre-trained language models, prompts, and tuning strategies. To make the field more accessible to interested beginners, we not only make a systematic review of existing works and a highly structured typology of prompt-based concepts but also release other resources, e.g., a website NLPedia–Pretrain including constantly updated survey and paperlist.', 'year': 2021, 'in_acl': False, 'citationCount': 3190, 'section': None, 'subsection': None}, {'id': 252519203, 'paperId': '285d13bf3cbe6a8a0f164f584d84f8b74067271f', 'title': 'Towards Faithful Model Explanation in NLP: A Survey', 'authors': [{'authorId': '1904906987', 'name': 'Qing Lyu'}, {'authorId': '2817917', 'name': 'Marianna Apidianaki'}, {'authorId': '1763608', 'name': 'Chris Callison-Burch'}], 'venue': 'Computational Linguistics', 'abstract': 'Abstract End-to-end neural Natural Language Processing (NLP) models are notoriously difficult to understand. This has given rise to numerous efforts towards model explainability in recent years. One desideratum of model explanation is faithfulness, that is, an explanation should accurately represent the reasoning process behind the model’s prediction. In this survey, we review over 110 model explanation methods in NLP through the lens of faithfulness. We first discuss the definition and evaluation of faithfulness, as well as its significance for explainability. We then introduce recent advances in faithful explanation, grouping existing approaches into five categories: similarity-based methods, analysis of model-internal structures, backpropagation-based methods, counterfactual intervention, and self-explanatory models. For each category, we synthesize its representative studies, strengths, and weaknesses. Finally, we summarize their common virtues and remaining challenges, and reflect on future work directions towards faithful explainability in NLP.', 'year': 2022, 'in_acl': False, 'citationCount': 75, 'section': None, 'subsection': None}, {'id': 232035689, 'paperId': '962aa5b847f1692af058bd14fc0e8c3f0a0fee73', 'title': 'Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing', 'authors': [{'authorId': '35823986', 'name': 'Sarah Wiegreffe'}, {'authorId': '3451494', 'name': 'Ana Marasović'}], 'venue': 'NeurIPS Datasets and Benchmarks', 'abstract': 'Explainable NLP (ExNLP) has increasingly focused on collecting human-annotated textual explanations. These explanations are used downstream in three ways: as data augmentation to improve performance on a predictive task, as supervision to train models to produce explanations for their predictions, and as a ground-truth to evaluate model-generated explanations. In this review, we identify 65 datasets with three predominant classes of textual explanations (highlights, free-text, and structured), organize the literature on annotating each type, identify strengths and shortcomings of existing collection methodologies, and give recommendations for collecting ExNLP datasets in the future.', 'year': 2021, 'in_acl': False, 'citationCount': 137, 'section': None, 'subsection': None}, {'id': 5276660, 'paperId': '21c99706bb26e9012bfb4d8d48009a3d45af59b2', 'title': 'Neural Module Networks', 'authors': [{'authorId': '2112400', 'name': 'Jacob Andreas'}, {'authorId': '34849128', 'name': 'Marcus Rohrbach'}, {'authorId': '1753210', 'name': 'Trevor Darrell'}, {'authorId': '38666915', 'name': 'D. Klein'}], 'venue': 'Computer Vision and Pattern Recognition', 'abstract': 'Visual question answering is fundamentally compositional in nature-a question like where is the dog? shares substructure with questions like what color is the dog? and where is the cat? This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning neural module networks, which compose collections of jointly-trained neural "modules" into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes.', 'year': 2015, 'in_acl': False, 'citationCount': 1028, 'section': None, 'subsection': None}, {'id': 252734772, 'paperId': 'f58ca7ba4a08b7082e86b7a5989b4b0fda2107ab', 'title': 'Binding Language Models in Symbolic Languages', 'authors': [{'authorId': '1471878967', 'name': 'Zhoujun Cheng'}, {'authorId': '2057038673', 'name': 'Tianbao Xie'}, {'authorId': '2055356856', 'name': 'Peng Shi'}, {'authorId': '2155795167', 'name': 'Chengzu Li'}, {'authorId': '40027281', 'name': 'Rahul Nadkarni'}, {'authorId': '2112209725', 'name': 'Yushi Hu'}, {'authorId': '2054594326', 'name': 'Caiming Xiong'}, {'authorId': '9215251', 'name': 'Dragomir R. Radev'}, {'authorId': '81444299', 'name': 'M. Ostendorf'}, {'authorId': '2137813791', 'name': 'Luke S. Zettlemoyer'}, {'authorId': '2116827887', 'name': 'N. A. Smith'}, {'authorId': None, 'name': 'Tao Yu'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Though end-to-end neural approaches have recently been dominating NLP tasks in both performance and ease-of-use, they lack interpretability and robustness. We propose Binder, a training-free neural-symbolic framework that maps the task input to a program, which (1) allows binding a unified API of language model (LM) functionalities to a programming language (e.g., SQL, Python) to extend its grammar coverage and thus tackle more diverse questions, (2) adopts an LM as both the program parser and the underlying model called by the API during execution, and (3) requires only a few in-context exemplar annotations. Specifically, we employ GPT-3 Codex as the LM. In the parsing stage, with only a few in-context exemplars, Codex is able to identify the part of the task input that cannot be answerable by the original programming language, correctly generate API calls to prompt Codex to solve the unanswerable part, and identify where to place the API calls while being compatible with the original grammar. In the execution stage, Codex can perform versatile functionalities (e.g., commonsense QA, information extraction) given proper prompts in the API calls. Binder achieves state-of-the-art results on WikiTableQuestions and TabFact datasets, with explicit output programs that benefit human debugging. Note that previous best systems are all finetuned on tens of thousands of task-specific samples, while Binder only uses dozens of annotations as in-context exemplars without any training. Our code is available at https://github.com/HKUNLP/Binder .', 'year': 2022, 'in_acl': False, 'citationCount': 171, 'section': None, 'subsection': None}, {'id': 253708270, 'paperId': '6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7', 'title': 'PAL: Program-aided Language Models', 'authors': [{'authorId': '49715441', 'name': 'Luyu Gao'}, {'authorId': '21626987', 'name': 'Aman Madaan'}, {'authorId': '2149163534', 'name': 'Shuyan Zhou'}, {'authorId': '47051926', 'name': 'Uri Alon'}, {'authorId': '144118452', 'name': 'Pengfei Liu'}, {'authorId': '46286308', 'name': 'Yiming Yang'}, {'authorId': '144987107', 'name': 'Jamie Callan'}, {'authorId': '1700325', 'name': 'Graham Neubig'}], 'venue': 'International Conference on Machine Learning', 'abstract': 'Large language models (LLMs) have recently demonstrated an impressive ability to perform arithmetic and symbolic reasoning tasks, when provided with a few examples at test time ("few-shot prompting"). Much of this success can be attributed to prompting methods such as"chain-of-thought\'\', which employ LLMs for both understanding the problem description by decomposing it into steps, as well as solving each step of the problem. While LLMs seem to be adept at this sort of step-by-step decomposition, LLMs often make logical and arithmetic mistakes in the solution part, even when the problem is decomposed correctly. In this paper, we present Program-Aided Language models (PAL): a novel approach that uses the LLM to read natural language problems and generate programs as the intermediate reasoning steps, but offloads the solution step to a runtime such as a Python interpreter. With PAL, decomposing the natural language problem into runnable steps remains the only learning task for the LLM, while solving is delegated to the interpreter. We demonstrate this synergy between a neural LLM and a symbolic interpreter across 13 mathematical, symbolic, and algorithmic reasoning tasks from BIG-Bench Hard and other benchmarks. In all these natural language reasoning tasks, generating code using an LLM and reasoning using a Python interpreter leads to more accurate results than much larger models. For example, PAL using Codex achieves state-of-the-art few-shot accuracy on the GSM8K benchmark of math word problems, surpassing PaLM-540B which uses chain-of-thought by absolute 15% top-1. Our code and data are publicly available at http://reasonwithpal.com/ .', 'year': 2022, 'in_acl': False, 'citationCount': 358, 'section': None, 'subsection': None}]
|
2023.acl-tutorials.5
|
Indirectly Supervised Natural Language Processing
|
This tutorial targets researchers and practitioners who are interested in ML technologies for NLP from indirect supervision. In particular, we will present a diverse thread of indirect supervision studies that try to answer the following questions: (i) when and how can we provide supervision for a target task T, if all we have is data that corresponds to a “related” task T′? (ii) humans do not use exhaustive supervision; they rely on occasional feedback, and learn from incidental signals from various sources; how can we effectively incorporate such supervision in machine learning? (iii) how can we leverage multi-modal supervision to help NLP? To the end, we will discuss several lines of research that address those challenges, including (i) indirect supervision from T ′ that handles T with outputs spanning from a moderate size to an open space, (ii) the use of sparsely occurring and incidental signals, such as partial labels, noisy labels, knowledge-based constraints, and cross-domain or cross-task annotations—all having statistical associations with the task, (iii) principled ways to measure and understand why these incidental signals can contribute to our target tasks, and (iv) indirect supervision from vision-language signals. We will conclude the tutorial by outlining directions for further investigation.
| 2,023
|
https://aclanthology.org/2023.acl-tutorials.5
|
ACL
|
[{'id': 202540839, 'paperId': '093d9253a2fe765ca6577b091d3f99bab3155a7d', 'title': 'Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach', 'authors': [{'authorId': '40483594', 'name': 'Wenpeng Yin'}, {'authorId': '153035400', 'name': 'Jamaal Hay'}, {'authorId': '144590225', 'name': 'D. Roth'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Zero-shot text classification (0Shot-TC) is a challenging NLU problem to which little attention has been paid by the research community. 0Shot-TC aims to associate an appropriate label with a piece of text, irrespective of the text domain and the aspect (e.g., topic, emotion, event, etc.) described by the label. And there are only a few articles studying 0Shot-TC, all focusing only on topical categorization which, we argue, is just the tip of the iceberg in 0Shot-TC. In addition, the chaotic experiments in literature make no uniform comparison, which blurs the progress. This work benchmarks the 0Shot-TC problem by providing unified datasets, standardized evaluations, and state-of-the-art baselines. Our contributions include: i) The datasets we provide facilitate studying 0Shot-TC relative to conceptually different and diverse aspects: the “topic” aspect includes “sports” and “politics” as labels; the “emotion” aspect includes “joy” and “anger”; the “situation” aspect includes “medical assistance” and “water shortage”. ii) We extend the existing evaluation setup (label-partially-unseen) – given a dataset, train on some labels, test on all labels – to include a more challenging yet realistic evaluation label-fully-unseen 0Shot-TC (Chang et al., 2008), aiming at classifying text snippets without seeing task specific training data at all. iii) We unify the 0Shot-TC of diverse aspects within a textual entailment formulation and study it this way.', 'year': 2019, 'in_acl': True, 'citationCount': 495, 'section': None, 'subsection': None}, {'id': 248505827, 'paperId': '6f0650a429add68e9f9430b09e1c6e8780ca787c', 'title': 'Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning', 'authors': [{'authorId': '1724648481', 'name': 'Oscar Sainz'}, {'authorId': '1404791152', 'name': 'Itziar Gonzalez-Dios'}, {'authorId': '1715983', 'name': 'Oier Lopez de Lacalle'}, {'authorId': '1875233', 'name': 'Bonan Min'}, {'authorId': '1733049', 'name': 'Eneko Agirre'}], 'venue': 'NAACL-HLT', 'abstract': 'Recent work has shown that NLP tasks such as Relation Extraction (RE) can be recasted as Textual Entailment tasks using verbalizations, with strong performance in zero-shot and few-shot settings thanks to pre-trained entailment models. The fact that relations in current RE datasets are easily verbalized casts doubts on whether entailment would be effective in more complex tasks. In this work we show that entailment is also effective in Event Argument Extraction (EAE), reducing the need of manual annotation to 50% and 20% in ACE and WikiEvents respectively, while achieving the same performance as with full training. More importantly, we show that recasting EAE as entailment alleviates the dependency on schemas, which has been a road-block for transferring annotations between domains. Thanks to the entailment, the multi-source transfer between ACE and WikiEvents further reduces annotation down to 10% and 5% (respectively) of the full training without transfer. Our analysis shows that the key to good results is the use of several entailment datasets to pre-train the entailment model. Similar to previous approaches, our method requires a small amount of effort for manual verbalization: only less than 15 minutes per event argument type is needed, and comparable results can be achieved with users with different level of expertise.', 'year': 2022, 'in_acl': False, 'citationCount': 44, 'section': None, 'subsection': None}, {'id': 238408158, 'paperId': 'f18a9dc44d66346a8d75005f7b5ab9e7e4899de5', 'title': 'EntQA: Entity Linking as Question Answering', 'authors': [{'authorId': '2107940644', 'name': 'Wenzheng Zhang'}, {'authorId': '2007245028', 'name': 'Wenyue Hua'}, {'authorId': '1714215', 'name': 'K. Stratos'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'A conventional approach to entity linking is to first find mentions in a given document and then infer their underlying entities in the knowledge base. A well-known limitation of this approach is that it requires finding mentions without knowing their entities, which is unnatural and difficult. We present a new model that does not suffer from this limitation called EntQA, which stands for Entity linking as Question Answering. EntQA first proposes candidate entities with a fast retrieval module, and then scrutinizes the document to find mentions of each candidate with a powerful reader module. Our approach combines progress in entity linking with that in open-domain question answering and capitalizes on pretrained models for dense entity retrieval and reading comprehension. Unlike in previous works, we do not rely on a mention-candidates dictionary or large-scale weak supervision. EntQA achieves strong results on the GERBIL benchmarking platform.', 'year': 2021, 'in_acl': False, 'citationCount': 43, 'section': None, 'subsection': None}, {'id': 248965470, 'paperId': '11fc7b4e459479ec5facb344b42a9bad940da37a', 'title': 'Summarization as Indirect Supervision for Relation Extraction', 'authors': [{'authorId': '1515662094', 'name': 'K. Lu'}, {'authorId': '34809425', 'name': 'I-Hung Hsu'}, {'authorId': '2203076', 'name': 'Wenxuan Zhou'}, {'authorId': '144592155', 'name': 'Mingyu Derek Ma'}, {'authorId': '1998918', 'name': 'Muhao Chen'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Relation extraction (RE) models have been challenged by their reliance on training data with expensive annotations. Considering that summarization tasks aim at acquiring concise expressions of synoptical information from the longer context, these tasks naturally align with the objective of RE, i.e., extracting a kind of synoptical information that describes the relation of entity mentions. We present SuRE, which converts RE into a summarization formulation. SuRE leads to more precise and resource-efficient RE based on indirect supervision from summarization tasks. To achieve this goal, we develop sentence and relation conversion techniques that essentially bridge the formulation of summarization and RE tasks. We also incorporate constraint decoding techniques with Trie scoring to further enhance summarization-based RE with robust inference. Experiments on three RE datasets demonstrate the effectiveness of SuRE in both full-dataset and low-resource settings, showing that summarization is a promising source of indirect supervision to improve RE models.', 'year': 2022, 'in_acl': False, 'citationCount': 50, 'section': None, 'subsection': None}, {'id': 245219282, 'paperId': '4081aeb7ff148cc4678efca4e44a72dece4542e3', 'title': 'Reframing Human-AI Collaboration for Generating Free-Text Explanations', 'authors': [{'authorId': '35823986', 'name': 'Sarah Wiegreffe'}, {'authorId': '2689239', 'name': 'Jack Hessel'}, {'authorId': '2133324514', 'name': 'Swabha Swayamdipta'}, {'authorId': '2065904932', 'name': 'Mark O. Riedl'}, {'authorId': '1699545', 'name': 'Yejin Choi'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Large language models are increasingly capable of generating fluent-appearing text with relatively little task-specific supervision. But can these models accurately explain classification decisions? We consider the task of generating free-text explanations using human-written examples in a few-shot manner. We find that (1) authoring higher quality prompts results in higher quality generations; and (2) surprisingly, in a head-to-head comparison, crowdworkers often prefer explanations generated by GPT-3 to crowdsourced explanations in existing datasets. Our human studies also show, however, that while models often produce factual, grammatical, and sufficient explanations, they have room to improve along axes such as providing novel information and supporting the label. We create a pipeline that combines GPT-3 with a supervised filter that incorporates binary acceptability judgments from humans in the loop. Despite the intrinsic subjectivity of acceptability judgments, we demonstrate that acceptability is partially correlated with various fine-grained attributes of explanations. Our approach is able to consistently filter GPT-3-generated explanations deemed acceptable by humans.', 'year': 2021, 'in_acl': True, 'citationCount': 129, 'section': None, 'subsection': None}, {'id': 253237669, 'paperId': '1d417bdd331912a458de920459f23fcc7f6e8699', 'title': 'Learning to Decompose: Hypothetical Question Decomposition Based on Comparable Texts', 'authors': [{'authorId': '2108536188', 'name': 'Ben Zhou'}, {'authorId': '46666605', 'name': 'Kyle Richardson'}, {'authorId': '3099583', 'name': 'Xiaodong Yu'}, {'authorId': '144590225', 'name': 'D. Roth'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Explicit decomposition modeling, which involves breaking down complex tasks into more straightforward and often more interpretable sub-tasks, has long been a central theme in developing robust and interpretable NLU systems. However, despite the many datasets and resources built as part of this effort, the majority have small-scale annotations and limited scope, which is insufficient to solve general decomposition tasks. In this paper, we look at large-scale intermediate pre-training of decomposition-based transformers using distant supervision from comparable texts, particularly large-scale parallel news. We show that with such intermediate pre-training, developing robust decomposition-based models for a diverse range of tasks becomes more feasible. For example, on semantic parsing, our model, DecompT5, improves 20% to 30% on two datasets, Overnight and TORQUE, over the baseline language model. We further use DecompT5 to build a novel decomposition-based QA system named DecompEntail, improving over state-of-the-art models, including GPT-3, on both HotpotQA and StrategyQA by 8% and 4%, respectively.', 'year': 2022, 'in_acl': True, 'citationCount': 17, 'section': None, 'subsection': None}, {'id': 237485601, 'paperId': '0cdc27a99c1520c2ec604b97470ae75227e096ee', 'title': 'Foreseeing the Benefits of Incidental Supervision', 'authors': [{'authorId': '7146703', 'name': 'Hangfeng He'}, {'authorId': '2119682611', 'name': 'Mingyuan Zhang'}, {'authorId': '3333257', 'name': 'Qiang Ning'}, {'authorId': '144590225', 'name': 'D. Roth'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Real-world applications often require improved models by leveraging *a range of cheap incidental supervision signals*. These could include partial labels, noisy labels, knowledge-based constraints, and cross-domain or cross-task annotations – all having statistical associations with gold annotations but not exactly the same. However, we currently lack a principled way to measure the benefits of these signals to a given target task, and the common practice of evaluating these benefits is through exhaustive experiments with various models and hyperparameters. This paper studies whether we can, *in a single framework, quantify the benefits of various types of incidental signals for a given target task without going through combinatorial experiments*. We propose a unified PAC-Bayesian motivated informativeness measure, PABI, that characterizes the uncertainty reduction provided by incidental supervision signals. We demonstrate PABI’s effectiveness by quantifying the value added by various types of incidental signals to sequence tagging tasks. Experiments on named entity recognition (NER) and question answering (QA) show that PABI’s predictions correlate well with learning performance, providing a promising way to determine, ahead of learning, which supervision signals would be beneficial.', 'year': 2020, 'in_acl': True, 'citationCount': 11, 'section': None, 'subsection': None}, {'id': 219708822, 'paperId': '952fdd071c124691ee24c216e24474b7dec0f70e', 'title': 'Learnability with Indirect Supervision Signals', 'authors': [{'authorId': '2148355232', 'name': 'Kaifu Wang'}, {'authorId': '3333257', 'name': 'Qiang Ning'}, {'authorId': '144590225', 'name': 'D. Roth'}], 'venue': 'Neural Information Processing Systems', 'abstract': "Learning from indirect supervision signals is important in real-world AI applications when, often, gold labels are missing or too costly. In this paper, we develop a unified theoretical framework for multi-class classification when the supervision is provided by a variable that contains nonzero mutual information with the gold label. The nature of this problem is determined by (i) the transition probability from the gold labels to the indirect supervision variables and (ii) the learner's prior knowledge about the transition. Our framework relaxes assumptions made in the literature, and supports learning with unknown, non-invertible and instance-dependent transitions. Our theory introduces a novel concept called \\emph{separation}, which characterizes the learnability and generalization bounds. We also demonstrate the application of our framework via concrete novel results in a variety of learning scenarios such as learning with superset annotations and joint supervision signals.", 'year': 2020, 'in_acl': False, 'citationCount': 7, 'section': None, 'subsection': None}, {'id': 53734356, 'paperId': '6dfc2ff03534a4325d06c6f88c3144831996629b', 'title': 'From Recognition to Cognition: Visual Commonsense Reasoning', 'authors': [{'authorId': '2545335', 'name': 'Rowan Zellers'}, {'authorId': '3312309', 'name': 'Yonatan Bisk'}, {'authorId': '143787583', 'name': 'Ali Farhadi'}, {'authorId': '1699545', 'name': 'Yejin Choi'}], 'venue': 'Computer Vision and Pattern Recognition', 'abstract': "Visual understanding goes well beyond object recognition. With one glance at an image, we can effortlessly imagine the world beyond the pixels: for instance, we can infer people's actions, goals, and mental states. While this task is easy for humans, it is tremendously difficult for today's vision systems, requiring higher-order cognition and commonsense reasoning about the world. We formalize this task as Visual Commonsense Reasoning. Given a challenging question about an image, a machine must answer correctly and then provide a rationale justifying its answer. Next, we introduce a new dataset, VCR, consisting of 290k multiple choice QA problems derived from 110k movie scenes. The key recipe for generating non-trivial and high-quality problems at scale is Adversarial Matching, a new approach to transform rich annotations into multiple choice questions with minimal bias. Experimental results show that while humans find VCR easy (over 90% accuracy), state-of-the-art vision models struggle (~45%). To move towards cognition-level understanding, we present a new reasoning engine, Recognition to Cognition Networks (R2C), that models the necessary layered inferences for grounding, contextualization, and reasoning. R2C helps narrow the gap between humans and machines (~65%); still, the challenge is far from solved, and we provide analysis that suggests avenues for future work.", 'year': 2018, 'in_acl': False, 'citationCount': 806, 'section': None, 'subsection': None}, {'id': 222341606, 'paperId': 'dedcdc1fb3a6def9772dce674d89150923dd75b9', 'title': 'Vokenization: Improving Language Understanding via Contextualized, Visually-Grounded Supervision', 'authors': [{'authorId': '80940793', 'name': 'Hao Tan'}, {'authorId': '143977268', 'name': 'Mohit Bansal'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Humans learn language by listening, speaking, writing, reading, and also, via interaction with the multimodal real world. Existing language pre-training frameworks show the effectiveness of text-only self-supervision while we explore the idea of a visually-supervised language model in this paper. We find that the main reason hindering this exploration is the large divergence in magnitude and distributions between the visually-grounded language datasets and pure-language corpora. Therefore, we develop a technique named "vokenization" that extrapolates multimodal alignments to language-only data by contextually mapping language tokens to their related images (which we call "vokens"). The "vokenizer" is trained on relatively small image captioning datasets and we then apply it to generate vokens for large language corpora. Trained with these contextually generated vokens, our visually-supervised language models show consistent improvements over self-supervised alternatives on multiple pure-language tasks such as GLUE, SQuAD, and SWAG. Code and pre-trained models publicly available at this https URL', 'year': 2020, 'in_acl': True, 'citationCount': 112, 'section': None, 'subsection': None}]
|
2023.acl-tutorials.6
|
Retrieval-based Language Models and Applications
|
Retrieval-based language models (LMs) have shown impressive performance on diverse NLP tasks. In this tutorial, we will provide a comprehensive and coherent overview of recent advances in retrieval-based LMs. We will start by providing preliminaries covering the foundation of LMs (e.g., masked LMs, autoregressive LMs) and retrieval systems (e.g., nearest-neighbor search). We will then detail recent progress in retrieval-based models, focusing on their model architectures and learning approaches. Finally, we will show how retrieval-based LMs are adapted to downstream applications, and extended to multilingual and multi-modal settings. Finally, we will use an exercise to showcase the effectiveness of retrieval-based LMs.
| 2,023
|
https://aclanthology.org/2023.acl-tutorials.6
|
ACL
|
[{'id': 249097975, 'paperId': '4f4a409f701f7552d45c46a5b0fea69dca6f8e84', 'title': 'Unsupervised Dense Information Retrieval with Contrastive Learning', 'authors': [{'authorId': '1410231361', 'name': 'Gautier Izacard'}, {'authorId': '2062862676', 'name': 'Mathilde Caron'}, {'authorId': '26360550', 'name': 'Lucas Hosseini'}, {'authorId': '48662861', 'name': 'Sebastian Riedel'}, {'authorId': '2329288', 'name': 'Piotr Bojanowski'}, {'authorId': '2319608', 'name': 'Armand Joulin'}, {'authorId': '3024698', 'name': 'Edouard Grave'}], 'venue': 'Trans. Mach. Learn. Res.', 'abstract': 'Recently, information retrieval has seen the emergence of dense retrievers, using neural networks, as an alternative to classical sparse methods based on term-frequency. These models have obtained state-of-the-art results on datasets and tasks where large training sets are available. However, they do not transfer well to new applications with no training data, and are outperformed by unsupervised term-frequency methods such as BM25. In this work, we explore the limits of contrastive learning as a way to train unsupervised dense retrievers and show that it leads to strong performance in various retrieval settings. On the BEIR benchmark our unsupervised model outperforms BM25 on 11 out of 15 datasets for the Recall@100. When used as pre-training before fine-tuning, either on a few thousands in-domain examples or on the large MS~MARCO dataset, our contrastive model leads to improvements on the BEIR benchmark. Finally, we evaluate our approach for multi-lingual retrieval, where training data is even scarcer than for English, and show that our approach leads to strong unsupervised performance. Our model also exhibits strong cross-lingual transfer when fine-tuned on supervised English data only and evaluated on low resources language such as Swahili. We show that our unsupervised models can perform cross-lingual retrieval between different scripts, such as retrieving English documents from Arabic queries, which would not be possible with term matching methods.', 'year': 2021, 'in_acl': False, 'citationCount': 608, 'section': None, 'subsection': None}, {'id': 253581733, 'paperId': '8cf05ed2b7cd3b0f601c454914a678c24d393de3', 'title': 'Task-aware Retrieval with Instructions', 'authors': [{'authorId': '35584853', 'name': 'Akari Asai'}, {'authorId': '32246932', 'name': 'Timo Schick'}, {'authorId': '145222654', 'name': 'Patrick Lewis'}, {'authorId': '1769736', 'name': 'Xilun Chen'}, {'authorId': '1410231361', 'name': 'Gautier Izacard'}, {'authorId': '48662861', 'name': 'Sebastian Riedel'}, {'authorId': '2548384', 'name': 'Hannaneh Hajishirzi'}, {'authorId': '2072801764', 'name': 'Wen-tau Yih'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': "We study the problem of retrieval with instructions, where users of a retrieval system explicitly describe their intent along with their queries. We aim to develop a general-purpose task-aware retrieval system using multi-task instruction tuning, which can follow human-written instructions to find the best documents for a given query. We introduce the first large-scale collection of approximately 40 retrieval datasets with instructions, BERRI, and present TART, a multi-task retrieval system trained on BERRI with instructions. TART shows strong capabilities to adapt to a new retrieval task via instructions and advances the state of the art on two zero-shot retrieval benchmarks, BEIR and LOTTE, outperforming models up to three times larger. We further introduce a new evaluation setup, X^2-Retrieval to better reflect real-world scenarios, where diverse domains and tasks are pooled and a system needs to find documents aligning users' intents. In this setup, TART significantly outperforms competitive baselines, further demonstrating the effectiveness of guiding retrieval with instructions.", 'year': 2022, 'in_acl': False, 'citationCount': 73, 'section': None, 'subsection': None}, {'id': 251371732, 'paperId': '916be31cbf847faa65cad0549e153f0c25b9f424', 'title': 'Few-shot Learning with Retrieval Augmented Language Models', 'authors': [{'authorId': '1410231361', 'name': 'Gautier Izacard'}, {'authorId': '145222654', 'name': 'Patrick Lewis'}, {'authorId': '3376175', 'name': 'M. Lomeli'}, {'authorId': '26360550', 'name': 'Lucas Hosseini'}, {'authorId': '40052301', 'name': 'F. Petroni'}, {'authorId': '32246932', 'name': 'Timo Schick'}, {'authorId': '2129456957', 'name': 'Jane A. Yu'}, {'authorId': '2319608', 'name': 'Armand Joulin'}, {'authorId': '48662861', 'name': 'Sebastian Riedel'}, {'authorId': '3024698', 'name': 'Edouard Grave'}], 'venue': 'Journal of machine learning research', 'abstract': 'Large language models have shown impressive few-shot results on a wide range of tasks. However, when knowledge is key for such results, as is the case for tasks such as question answering and fact checking, massive parameter counts to store knowledge seem to be needed. Retrieval augmented models are known to excel at knowledge intensive tasks without the need for as many parameters, but it is unclear whether they work in few-shot settings. In this work we present Atlas, a carefully designed and pre-trained retrieval augmented language model able to learn knowledge intensive tasks with very few training examples. We perform evaluations on a wide range of tasks, including MMLU, KILT and NaturalQuestions, and study the impact of the content of the document index, showing that it can easily be updated. Notably, Atlas reaches over 42% accuracy on Natural Questions using only 64 examples, outperforming a 540B parameters model by 3% despite having 50x fewer parameters.', 'year': 2022, 'in_acl': False, 'citationCount': 580, 'section': None, 'subsection': None}, {'id': 244954723, 'paperId': '002c256d30d6be4b23d365a8de8ae0e67e4c9641', 'title': 'Improving language models by retrieving from trillions of tokens', 'authors': [{'authorId': '148016269', 'name': 'Sebastian Borgeaud'}, {'authorId': '1697879', 'name': 'A. Mensch'}, {'authorId': '46616544', 'name': 'Jordan Hoffmann'}, {'authorId': '2072572294', 'name': 'Trevor Cai'}, {'authorId': '2143538252', 'name': 'Eliza Rutherford'}, {'authorId': '2143434227', 'name': 'Katie Millican'}, {'authorId': '47568983', 'name': 'George van den Driessche'}, {'authorId': '143783339', 'name': 'Jean-Baptiste Lespiau'}, {'authorId': '2143374656', 'name': 'Bogdan Damoc'}, {'authorId': '31993415', 'name': 'Aidan Clark'}, {'authorId': '40550616', 'name': 'Diego de Las Casas'}, {'authorId': '40895205', 'name': 'Aurelia Guy'}, {'authorId': '10698483', 'name': 'Jacob Menick'}, {'authorId': '81387328', 'name': 'Roman Ring'}, {'authorId': '4629007', 'name': 'T. Hennigan'}, {'authorId': '2148653469', 'name': 'Saffron Huang'}, {'authorId': '108173905', 'name': 'Lorenzo Maggiore'}, {'authorId': '2115601070', 'name': 'Chris Jones'}, {'authorId': '51042571', 'name': 'Albin Cassirer'}, {'authorId': '2065040422', 'name': 'Andy Brock'}, {'authorId': '35550664', 'name': 'Michela Paganini'}, {'authorId': '2060655766', 'name': 'G. Irving'}, {'authorId': '1689108', 'name': 'O. Vinyals'}, {'authorId': '2217144', 'name': 'Simon Osindero'}, {'authorId': '34838386', 'name': 'K. Simonyan'}, {'authorId': '34269227', 'name': 'Jack W. Rae'}, {'authorId': '152585800', 'name': 'Erich Elsen'}, {'authorId': '2175946', 'name': 'L. Sifre'}], 'venue': 'International Conference on Machine Learning', 'abstract': 'We enhance auto-regressive language models by conditioning on document chunks retrieved from a large corpus, based on local similarity with preceding tokens. With a $2$ trillion token database, our Retrieval-Enhanced Transformer (RETRO) obtains comparable performance to GPT-3 and Jurassic-1 on the Pile, despite using 25$\\times$ fewer parameters. After fine-tuning, RETRO performance translates to downstream knowledge-intensive tasks such as question answering. RETRO combines a frozen Bert retriever, a differentiable encoder and a chunked cross-attention mechanism to predict tokens based on an order of magnitude more data than what is typically consumed during training. We typically train RETRO from scratch, yet can also rapidly RETROfit pre-trained transformers with retrieval and still achieve good performance. Our work opens up new avenues for improving language models through explicit memory at unprecedented scale.', 'year': 2021, 'in_acl': False, 'citationCount': 829, 'section': None, 'subsection': None}, {'id': 248545122, 'paperId': '7b7416c90e8d3fc9ad5c9fb3923a638f69294ed7', 'title': 'MENTION MEMORY : INCORPORATING TEXTUAL KNOWLEDGE INTO TRANSFORMERS THROUGH ENTITY MENTION ATTENTION', 'authors': [{'authorId': '21379393', 'name': 'Michiel de Jong'}, {'authorId': '51199981', 'name': 'Yury Zemlyanskiy'}, {'authorId': '143883142', 'name': 'Nicholas FitzGerald'}, {'authorId': '145757665', 'name': 'Fei Sha'}, {'authorId': '2058480371', 'name': 'W. Cohen'}], 'venue': 'International Conference on Learning Representations', 'abstract': "Natural language understanding tasks such as open-domain question answering often require retrieving and assimilating factual information from multiple sources. We propose to address this problem by integrating a semi-parametric representation of a large text corpus into a Transformer model as a source of factual knowledge. Specifically, our method represents knowledge with `mention memory', a table of dense vector representations of every entity mention in a corpus. The proposed model - TOME - is a Transformer that accesses the information through internal memory layers in which each entity mention in the input passage attends to the mention memory. This approach enables synthesis of and reasoning over many disparate sources of information within a single Transformer model. In experiments using a memory of 150 million Wikipedia mentions, TOME achieves strong performance on several open-domain knowledge-intensive tasks, including the claim verification benchmarks HoVer and FEVER and several entity-based QA benchmarks. We also show that the model learns to attend to informative mentions without any direct supervision. Finally we demonstrate that the model can generalize to new unseen entities by updating the memory without retraining.", 'year': 2022, 'in_acl': False, 'citationCount': 42, 'section': None, 'subsection': None}, {'id': 207870430, 'paperId': '7be8c119dbe065c52125ee7716601751f3116844', 'title': 'Generalization through Memorization: Nearest Neighbor Language Models', 'authors': [{'authorId': '3030219', 'name': 'Urvashi Khandelwal'}, {'authorId': '39455775', 'name': 'Omer Levy'}, {'authorId': '1746807', 'name': 'Dan Jurafsky'}, {'authorId': '1982950', 'name': 'Luke Zettlemoyer'}, {'authorId': '35084211', 'name': 'M. Lewis'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'We introduce $k$NN-LMs, which extend a pre-trained neural language model (LM) by linearly interpolating it with a $k$-nearest neighbors ($k$NN) model. The nearest neighbors are computed according to distance in the pre-trained LM embedding space, and can be drawn from any text collection, including the original LM training data. Applying this augmentation to a strong Wikitext-103 LM, with neighbors drawn from the original training set, our $k$NN-LM achieves a new state-of-the-art perplexity of 15.79 - a 2.9 point improvement with no additional training. We also show that this approach has implications for efficiently scaling up to larger training sets and allows for effective domain adaptation, by simply varying the nearest neighbor datastore, again without further training. Qualitatively, the model is particularly helpful in predicting rare patterns, such as factual knowledge. Together, these results strongly suggest that learning similarity between sequences of text is easier than predicting the next word, and that nearest neighbor search is an effective approach for language modeling in the long tail.', 'year': 2019, 'in_acl': False, 'citationCount': 724, 'section': None, 'subsection': None}, {'id': 254220735, 'paperId': '9492ee1435e183cb62b65d8d7f39be0dfd17377a', 'title': 'Nonparametric Masked Language Modeling', 'authors': [{'authorId': '48872685', 'name': 'Sewon Min'}, {'authorId': '3040379', 'name': 'Weijia Shi'}, {'authorId': '35084211', 'name': 'M. Lewis'}, {'authorId': '1769736', 'name': 'Xilun Chen'}, {'authorId': '2072801764', 'name': 'Wen-tau Yih'}, {'authorId': '2548384', 'name': 'Hannaneh Hajishirzi'}, {'authorId': '1982950', 'name': 'Luke Zettlemoyer'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Existing language models (LMs) predict tokens with a softmax over a finite vocabulary, which can make it difficult to predict rare tokens or phrases. We introduce NPM, the first nonparametric masked language model that replaces this softmax with a nonparametric distribution over every phrase in a reference corpus. NPM fills in the [MASK] solely from retrieving a token from a text corpus. We show that NPM can be efficiently trained with a contrastive objective and an in-batch approximation to full corpus retrieval. Zero-shot evaluation on 16 tasks including classification, fact probing and question answering demonstrates that NPM outperforms significantly larger parametric models, either with or without a retrieve-and-generate approach. It is particularly better at dealing with rare patterns (word senses or facts) and predicting rare or nearly unseen words (e.g., non-Latin script). We release the model and code at github.com/facebookresearch/NPM.', 'year': 2022, 'in_acl': False, 'citationCount': 44, 'section': None, 'subsection': None}, {'id': 249062699, 'paperId': 'da1d6445b6b64ce9eb4587ba8abbdc490f648ec1', 'title': 'Training Language Models with Memory Augmentation', 'authors': [{'authorId': '49164966', 'name': 'Zexuan Zhong'}, {'authorId': '49986267', 'name': 'Tao Lei'}, {'authorId': '50536468', 'name': 'Danqi Chen'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Recent work has improved language models (LMs) remarkably by equipping them with a non-parametric memory component. However, most existing approaches only introduce mem-ories at testing time or represent them using a separately trained encoder, resulting in suboptimal training of the language model. In this work, we present TRIME, a novel yet simple training approach designed for training LMs with memory augmentation. Our approach uses a training objective that directly takes in-batch examples as accessible memory. We also present new methods for memory construction and data batching, which are used for adapting to different sets of memories—local, long-term, and external memory—at testing time. We evaluate TRIME on multiple language modeling and machine translation benchmarks and show that it is able to achieve significant improvements across all the settings. Concretely, TRIME reduces the perplexity from 18.70 to 15.37 on WIKITEXT-103, by effectively leveraging a large memory set from the training corpus. Compared to standard LM training, TRIME adds negligible computational overhead and is compatible with different neural architectures, making it a versatile solution for training memory-augmented LMs.', 'year': 2022, 'in_acl': True, 'citationCount': 115, 'section': None, 'subsection': None}, {'id': 249152130, 'paperId': '563a851106623b9f112d0e2a290d3950a871079c', 'title': 'Nearest Neighbor Zero-Shot Inference', 'authors': [{'authorId': '3040379', 'name': 'Weijia Shi'}, {'authorId': '38614754', 'name': 'Julian Michael'}, {'authorId': '40895369', 'name': 'Suchin Gururangan'}, {'authorId': '1982950', 'name': 'Luke Zettlemoyer'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Retrieval-augmented language models (LMs) use non-parametric memory to substantially outperform their non-retrieval counterparts on perplexity-based evaluations, but it is an open question whether they achieve similar gains in few- and zero-shot end-task accuracy. We extensively study one such model, the k-nearest neighbor LM (kNN-LM), showing that the gains marginally transfer. The main challenge is to achieve coverage of the verbalizer tokens that define the different end-task class labels. To address this challenge, we also introduce kNN-Prompt, a simple and effective kNN-LM with automatically expanded fuzzy verbalizers (e.g. to expand “terrible” to also include “silly” and other task-specific synonyms for sentiment classification). Across nine diverse end-tasks, using kNN-Prompt with GPT-2 large yields significant performance boosts over strong zeroshot baselines (13.4% absolute improvement over the base LM on average). We also show that other advantages of non-parametric augmentation hold for end tasks; kNN-Prompt is effective for domain adaptation with no further training, and gains increase with the size of the retrieval model.', 'year': 2022, 'in_acl': True, 'citationCount': 30, 'section': None, 'subsection': None}, {'id': 246431219, 'paperId': 'f92535edac9d1c735feabdb4d94c1157f12d899c', 'title': 'Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval', 'authors': [{'authorId': '47051926', 'name': 'Uri Alon'}, {'authorId': '40027632', 'name': 'Frank F. Xu'}, {'authorId': '6215698', 'name': 'Junxian He'}, {'authorId': '2072419570', 'name': 'Sudipta Sengupta'}, {'authorId': '144590225', 'name': 'D. Roth'}, {'authorId': '1700325', 'name': 'Graham Neubig'}], 'venue': 'International Conference on Machine Learning', 'abstract': 'Retrieval-based language models (R-LM) model the probability of natural language text by combining a standard language model (LM) with examples retrieved from an external datastore at test time. While effective, a major bottleneck of using these models in practice is the computationally costly datastore search, which can be performed as frequently as every time step. In this paper, we present RetoMaton - retrieval automaton - which approximates the datastore search, based on (1) saving pointers between consecutive datastore entries, and (2) clustering of entries into"states". This effectively results in a weighted finite automaton built on top of the datastore, instead of representing the datastore as a flat list. The creation of the automaton is unsupervised, and a RetoMaton can be constructed from any text collection: either the original training corpus or from another domain. Traversing this automaton at inference time, in parallel to the LM inference, reduces its perplexity by up to 1.85, or alternatively saves up to 83% of the nearest neighbor searches over $k$NN-LM (Khandelwal et al., 2020) without hurting perplexity. Our code and trained models are available at https://github.com/neulab/retomaton .', 'year': 2022, 'in_acl': False, 'citationCount': 54, 'section': None, 'subsection': None}]
|
2023.eacl-tutorials.1
|
Mining, Assessing, and Improving Arguments in NLP and the Social Sciences
|
Computational argumentation is an interdisciplinary research field, connecting Natural Language Processing (NLP) to other disciplines such as the social sciences. This tutorial will focus on a task that recently got into the center of attention in the community: argument quality assessment, that is, what makes an argument good or bad? We structure the tutorial along three main coordinates: (1) the notions of argument quality across disciplines (how do we recognize good and bad arguments?), (2) the modeling of subjectivity (who argues to whom; what are their beliefs?), and (3) the generation of improved arguments (what makes an argument better?). The tutorial highlights interdisciplinary aspects of the field, ranging from the collaboration of theory and practice (e.g., in NLP and social sciences), to approaching different types of linguistic structures (e.g., social media versus parliamentary texts), and facing the ethical issues involved (e.g., how to build applications for the social good). A key feature of this tutorial is its interactive nature: We will involve the participants in two annotation studies on the assessment and the improvement of quality, and we will encourage them to reflect on the challenges and potential of these tasks.
| 2,023
|
https://aclanthology.org/2023.eacl-tutorials.1
|
EACL
|
[{'id': 51609977, 'paperId': '0957dc1a757f292ff8ba7a8e186ffc63a63d6b5a', 'title': 'Five Years of Argument Mining: a Data-driven Analysis', 'authors': [{'authorId': '1772891', 'name': 'Elena Cabrio'}, {'authorId': '1725656', 'name': 'S. Villata'}], 'venue': 'International Joint Conference on Artificial Intelligence', 'abstract': 'Argument mining is the research area aiming at extracting natural language arguments and their relations from text, with the final goal of providing machine-processable structured data for computational models of argument. This research topic has started to attract the attention of a small community of researchers around 2014, and it is nowadays counted as one of the most promising research areas in Artificial Intelligence in terms of growing of the community, funded projects, and involvement of companies. In this paper, we present the argument mining tasks, and we discuss the obtained results in the area from a data-driven perspective. An open discussion highlights the main weaknesses suffered by the existing work in the literature, and proposes open challenges to be faced in the future.', 'year': 2018, 'in_acl': False, 'citationCount': 161, 'section': 'Survey Papers', 'subsection': None}, {'id': 203912051, 'paperId': 'a2ae7155d94686fe83f26f6d6ca2dfacd16c5e5c', 'title': 'Argument Mining: A Survey', 'authors': [{'authorId': '2055083035', 'name': 'J. Lawrence'}, {'authorId': '145989424', 'name': 'C. Reed'}], 'venue': 'Computational Linguistics', 'abstract': 'Argument mining is the automatic identification and extraction of the structure of inference and reasoning expressed as arguments presented in natural language. Understanding argumentative structure makes it possible to determine not only what positions people are adopting, but also why they hold the opinions they do, providing valuable insights in domains as diverse as financial market prediction and public relations. This survey explores the techniques that establish the foundations for argument mining, provides a review of recent advances in argument mining techniques, and discusses the challenges faced in automatically extracting a deeper understanding of reasoning expressed in language in general.', 'year': 2020, 'in_acl': False, 'citationCount': 403, 'section': 'Survey Papers', 'subsection': None}, {'id': 236460206, 'paperId': 'dcb0b23685c9c116d8d53fe47e5157753659d3bd', 'title': 'Towards Argument Mining for Social Good: A Survey', 'authors': [{'authorId': '36274653', 'name': 'Eva Maria Vecchi'}, {'authorId': '1724540796', 'name': 'Neele Falk'}, {'authorId': '2121302675', 'name': 'Iman Jundi'}, {'authorId': '2424572', 'name': 'Gabriella Lapesa'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'This survey builds an interdisciplinary picture of Argument Mining (AM), with a strong focus on its potential to address issues related to Social and Political Science. More specifically, we focus on AM challenges related to its applications to social media and in the multilingual domain, and then proceed to the widely debated notion of argument quality. We propose a novel definition of argument quality which is integrated with that of deliberative quality from the Social Science literature. Under our definition, the quality of a contribution needs to be assessed at multiple levels: the contribution itself, its preceding context, and the consequential effect on the development of the upcoming discourse. The latter has not received the deserved attention within the community. We finally define an application of AM for Social Good: (semi-)automatic moderation, a highly integrative application which (a) represents a challenging testbed for the integrated notion of quality we advocate, (b) allows the empirical quantification of argument/deliberative quality to benefit from the developments in other NLP fields (i.e. hate speech detection, fact checking, debiasing), and (c) has a clearly beneficial potential at the level of its societal thanks to its real-world application (even if extremely ambitious).', 'year': 2021, 'in_acl': True, 'citationCount': 37, 'section': 'Survey Papers', 'subsection': None}, {'id': 5252401, 'paperId': '834d68b9befcc6c68415b460b33435a1822799fb', 'title': 'Argumentation Mining in User-Generated Web Discourse', 'authors': [{'authorId': '2572366', 'name': 'Ivan Habernal'}, {'authorId': '1730400', 'name': 'Iryna Gurevych'}], 'venue': 'International Conference on Computational Logic', 'abstract': 'The goal of argumentation mining, an evolving research field in computational linguistics, is to design methods capable of analyzing people’s argumentation. In this article, we go beyond the state of the art in several ways. (i) We deal with actual Web data and take up the challenges given by the variety of registers, multiple domains, and unrestricted noisy user-generated Web discourse. (ii) We bridge the gap between normative argumentation theories and argumentation phenomena encountered in actual data by adapting an argumentation model tested in an extensive annotation study. (iii) We create a new gold standard corpus (90k tokens in 340 documents) and experiment with several machine learning methods to identify argument components. We offer the data, source codes, and annotation guidelines to the community under free licenses. Our findings show that argumentation mining in user-generated Web discourse is a feasible but challenging task.', 'year': 2016, 'in_acl': True, 'citationCount': 257, 'section': 'Mining Arguments ', 'subsection': None}, {'id': 11014757, 'paperId': '08f6819e66318cd49cddefd5d690a752d1098da7', 'title': 'What is the Essence of a Claim? Cross-Domain Claim Identification', 'authors': [{'authorId': '1790638', 'name': 'Johannes Daxenberger'}, {'authorId': '2620186', 'name': 'Steffen Eger'}, {'authorId': '2572366', 'name': 'Ivan Habernal'}, {'authorId': '3067663', 'name': 'Christian Stab'}, {'authorId': '1730400', 'name': 'Iryna Gurevych'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Argument mining has become a popular research area in NLP. It typically includes the identification of argumentative components, e.g. claims, as the central component of an argument. We perform a qualitative analysis across six different datasets and show that these appear to conceptualize claims quite differently. To learn about the consequences of such different conceptualizations of claim for practical applications, we carried out extensive experiments using state-of-the-art feature-rich and deep learning systems, to identify claims in a cross-domain fashion. While the divergent conceptualization of claims in different datasets is indeed harmful to cross-domain classification, we show that there are shared properties on the lexical level as well as system configurations that can help to overcome these gaps.', 'year': 2017, 'in_acl': True, 'citationCount': 92, 'section': 'Mining Arguments ', 'subsection': None}, {'id': 31107411, 'paperId': '29e4e14a5613be06a39e76bf0d8a4c8217573c2f', 'title': 'Argument Mining on Twitter: Arguments, Facts and Sources', 'authors': [{'authorId': '24871970', 'name': 'Mihai Dusmanu'}, {'authorId': '1772891', 'name': 'Elena Cabrio'}, {'authorId': '1725656', 'name': 'S. Villata'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Social media collect and spread on the Web personal opinions, facts, fake news and all kind of information users may be interested in. Applying argument mining methods to such heterogeneous data sources is a challenging open research issue, in particular considering the peculiarities of the language used to write textual messages on social media. In addition, new issues emerge when dealing with arguments posted on such platforms, such as the need to make a distinction between personal opinions and actual facts, and to detect the source disseminating information about such facts to allow for provenance verification. In this paper, we apply supervised classification to identify arguments on Twitter, and we present two new tasks for argument mining, namely facts recognition and source identification. We study the feasibility of the approaches proposed to address these tasks on a set of tweets related to the Grexit and Brexit news topics.', 'year': 2017, 'in_acl': True, 'citationCount': 76, 'section': 'Mining Arguments ', 'subsection': None}, {'id': 141282, 'paperId': '3f020157c741f869da2a5daa2971b90d37fa9581', 'title': 'Computational Argumentation Quality Assessment in Natural Language', 'authors': [{'authorId': '2626599', 'name': 'Henning Wachsmuth'}, {'authorId': '33494034', 'name': 'Nona Naderi'}, {'authorId': '39517968', 'name': 'Yufang Hou'}, {'authorId': '2911299', 'name': 'Yonatan Bilu'}, {'authorId': '3331141', 'name': 'Vinodkumar Prabhakaran'}, {'authorId': '2007871156', 'name': 'Tim Alberdingk Thijm'}, {'authorId': '145036961', 'name': 'Graeme Hirst'}, {'authorId': '144146081', 'name': 'Benno Stein'}], 'venue': 'Conference of the European Chapter of the Association for Computational Linguistics', 'abstract': 'Research on computational argumentation faces the problem of how to automatically assess the quality of an argument or argumentation. While different quality dimensions have been approached in natural language processing, a common understanding of argumentation quality is still missing. This paper presents the first holistic work on computational argumentation quality in natural language. We comprehensively survey the diverse existing theories and approaches to assess logical, rhetorical, and dialectical quality dimensions, and we derive a systematic taxonomy from these. In addition, we provide a corpus with 320 arguments, annotated for all 15 dimensions in the taxonomy. Our results establish a common ground for research on computational argumentation quality assessment.', 'year': 2017, 'in_acl': True, 'citationCount': 197, 'section': 'Assessing Argument Quality', 'subsection': None}, {'id': 219177237, 'paperId': 'ebe7c37c60024330bc8e90f7057961f9b849ff8d', 'title': 'Rhetoric, Logic, and Dialectic: Advancing Theory-based Argument Quality Assessment in Natural Language Processing', 'authors': [{'authorId': '29891652', 'name': 'Anne Lauscher'}, {'authorId': '2069505550', 'name': 'Lily Ng'}, {'authorId': '3047950', 'name': 'Courtney Napoles'}, {'authorId': '1739099', 'name': 'Joel R. Tetreault'}], 'venue': 'International Conference on Computational Linguistics', 'abstract': 'Though preceding work in computational argument quality (AQ) mostly focuses on assessing overall AQ, researchers agree that writers would benefit from feedback targeting individual dimensions of argumentation theory. However, a large-scale theory-based corpus and corresponding computational models are missing. We fill this gap by conducting an extensive analysis covering three diverse domains of online argumentative writing and presenting GAQCorpus: the first large-scale English multi-domain (community Q&A forums, debate forums, review forums) corpus annotated with theory-based AQ scores. We then propose the first computational approaches to theory-based assessment, which can serve as strong baselines for future work. We demonstrate the feasibility of large-scale AQ annotation, show that exploiting relations between dimensions yields performance improvements, and explore the synergies between theory-based prediction and practical AQ assessment.', 'year': 2020, 'in_acl': True, 'citationCount': 41, 'section': 'Assessing Argument Quality', 'subsection': None}, {'id': 256631011, 'paperId': 'f2b93fa29a948b7699ed35e80889b74e70ca7b4c', 'title': 'Graph Embeddings for Argumentation Quality Assessment', 'authors': [{'authorId': '122316116', 'name': 'Santiago Marro'}, {'authorId': '1772891', 'name': 'Elena Cabrio'}, {'authorId': '1725656', 'name': 'S. Villata'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': ',', 'year': 2022, 'in_acl': False, 'citationCount': 10, 'section': 'Assessing Argument Quality', 'subsection': None}, {'id': 56161114, 'paperId': '3ac244867115f42c255fea0b0460022e55e72c73', 'title': 'Measuring Political Deliberation: A Discourse Quality Index', 'authors': [{'authorId': '72113330', 'name': 'M. Steenbergen'}, {'authorId': '67002751', 'name': 'André Bächtiger'}, {'authorId': '117113588', 'name': 'Markus Spörndli'}, {'authorId': '31852878', 'name': 'J. Steiner'}], 'venue': '', 'abstract': "In this paper, we develop a discourse quality index (DQI) that serves as a quantitative measure of discourse in deliberation. The DQI is rooted in Habermas' discourse ethics and provides an accurate representation of the most important principles underlying deliberation. At the same time, the DQI can be shown to be a reliable measurement instrument due to its focus on observable behavior and its detailed coding instructions. We illustrate the DQI for a parliamentary debate in the British House of Commons. We show that the DQI yields reliable data and we discuss how these data could be used in subsequent analysis. We conclude by discussing some limitations of the DQI and by identifying some areas in which it could prove useful.", 'year': 2003, 'in_acl': False, 'citationCount': 572, 'section': 'Assessing Deliberative Quality', 'subsection': None}, {'id': 157370225, 'paperId': '3d1605960bc44899f0e1f9db9122a57cc0e1305f', 'title': 'Deliberative Abilities and Influence in a Transnational Deliberative Poll (EuroPolis)', 'authors': [{'authorId': '50466938', 'name': 'Marlène Gerber'}, {'authorId': '67002751', 'name': 'André Bächtiger'}, {'authorId': '47480319', 'name': 'Susumu Shikano'}, {'authorId': '121576386', 'name': 'Simon Reber'}, {'authorId': '2073637685', 'name': 'Samuel Rohr'}], 'venue': 'British Journal of Political Science', 'abstract': 'This article investigates the deliberative abilities of ordinary citizens in the context of ‘EuroPolis’, a transnational deliberative poll. Drawing upon a philosophically grounded instrument, an updated version of the Discourse Quality Index (DQI), it explores how capable European citizens are of meeting deliberative ideals; whether socio-economic, cultural and psychological biases affect the ability to deliberate; and whether opinion change results from the exchange of arguments. On the positive side, EuroPolis shows that the ideal deliberator scoring high on all deliberative standards does actually exist, and that participants change their opinions more often when rational justification is used in the discussions. On the negative side, deliberative abilities are unequally distributed: in particular, working-class members are less likely to contribute to a high standard of deliberation.', 'year': 2016, 'in_acl': False, 'citationCount': 62, 'section': 'Assessing Deliberative Quality', 'subsection': None}, {'id': 44099358, 'paperId': '01f401fefac301b8a49371099e4039b4c74d5d73', 'title': 'Neural Argument Generation Augmented with Externally Retrieved Evidence', 'authors': [{'authorId': '7156360', 'name': 'Xinyu Hua'}, {'authorId': None, 'name': 'Lu Wang'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'High quality arguments are essential elements for human reasoning and decision-making processes. However, effective argument construction is a challenging task for both human and machines. In this work, we study a novel task on automatically generating arguments of a different stance for a given statement. We propose an encoder-decoder style neural network-based argument generation model enriched with externally retrieved evidence from Wikipedia. Our model first generates a set of talking point phrases as intermediate representation, followed by a separate decoder producing the final argument based on both input and the keyphrases. Experiments on a large-scale dataset collected from Reddit show that our model constructs arguments with more topic-relevant content than popular sequence-to-sequence generation models according to automatic evaluation and human assessments.', 'year': 2018, 'in_acl': True, 'citationCount': 59, 'section': 'Improving Arguments', 'subsection': None}, {'id': 219065800, 'paperId': '356466a042a763de6cff0fbaa3ceaa6ac65b3d80', 'title': 'Target Inference in Argument Conclusion Generation', 'authors': [{'authorId': '2300829', 'name': 'Milad Alshomary'}, {'authorId': '18417916', 'name': 'S. Syed'}, {'authorId': '3046200', 'name': 'Martin Potthast'}, {'authorId': '2626599', 'name': 'Henning Wachsmuth'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'In argumentation, people state premises to reason towards a conclusion. The conclusion conveys a stance towards some target, such as a concept or statement. Often, the conclusion remains implicit, though, since it is self-evident in a discussion or left out for rhetorical reasons. However, the conclusion is key to understanding an argument and, hence, to any application that processes argumentation. We thus study the question to what extent an argument’s conclusion can be reconstructed from its premises. In particular, we argue here that a decisive step is to infer a conclusion’s target, and we hypothesize that this target is related to the premises’ targets. We develop two complementary target inference approaches: one ranks premise targets and selects the top-ranked target as the conclusion target, the other finds a new conclusion target in a learned embedding space using a triplet neural network. Our evaluation on corpora from two domains indicates that a hybrid of both approaches is best, outperforming several strong baselines. According to human annotators, we infer a reasonably adequate conclusion target in 89% of the cases.', 'year': 2020, 'in_acl': True, 'citationCount': 21, 'section': 'Improving Arguments', 'subsection': None}, {'id': 239885913, 'paperId': '1aeb7e6b23e913a1c9bd3d6879d125a042184c64', 'title': 'Assessing the Sufficiency of Arguments through Conclusion Generation', 'authors': [{'authorId': '2047307140', 'name': 'Timon Ziegenbein'}, {'authorId': '2300829', 'name': 'Milad Alshomary'}, {'authorId': '2626599', 'name': 'Henning Wachsmuth'}], 'venue': 'Workshop on Argument Mining', 'abstract': 'The premises of an argument give evidence or other reasons to support a conclusion. However, the amount of support required depends on the generality of a conclusion, the nature of the individual premises, and similar. An argument whose premises make its conclusion rationally worthy to be drawn is called sufficient in argument quality research. Previous work tackled sufficiency assessment as a standard text classification problem, not modeling the inherent relation of premises and conclusion. In this paper, we hypothesize that the conclusion of a sufficient argument can be generated from its premises. To study this hypothesis, we explore the potential of assessing sufficiency based on the output of large-scale pre-trained language models. Our best model variant achieves an F1-score of .885, outperforming the previous state-of-the-art and being on par with human experts. While manual evaluation reveals the quality of the generated conclusions, their impact remains low ultimately.', 'year': 2021, 'in_acl': True, 'citationCount': 22, 'section': 'Improving Arguments', 'subsection': None}, {'id': 202768765, 'paperId': 'd13004d17c61c6ca423b74707bb5b8d7440613b7', 'title': 'The Role of Pragmatic and Discourse Context in Determining Argument Impact', 'authors': [{'authorId': '41152329', 'name': 'Esin Durmus'}, {'authorId': '8759332', 'name': 'Faisal Ladhak'}, {'authorId': '1748501', 'name': 'Claire Cardie'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Research in the social sciences and psychology has shown that the persuasiveness of an argument depends not only the language employed, but also on attributes of the source/communicator, the audience, and the appropriateness and strength of the argument’s claims given the pragmatic and discourse context of the argument. Among these characteristics of persuasive arguments, prior work in NLP does not explicitly investigate the effect of the pragmatic and discourse context when determining argument quality. This paper presents a new dataset to initiate the study of this aspect of argumentation: it consists of a diverse collection of arguments covering 741 controversial topics and comprising over 47,000 claims. We further propose predictive models that incorporate the pragmatic and discourse context of argumentative claims and show that they outperform models that rely only on claim-specific linguistic features for predicting the perceived impact of individual claims within a particular line of argument.', 'year': 2019, 'in_acl': True, 'citationCount': 26, 'section': 'Challenges', 'subsection': None}, {'id': 222310410, 'paperId': 'eb64f8c3e56ccfb3263def4edd4249a7cf1541ad', 'title': 'Multilingual Argument Mining: Datasets and Analysis', 'authors': [{'authorId': '3022994', 'name': 'Orith Toledo-Ronen'}, {'authorId': '80108223', 'name': 'Matan Orbach'}, {'authorId': '2911299', 'name': 'Yonatan Bilu'}, {'authorId': '51451979', 'name': 'Artem Spector'}, {'authorId': '1766595', 'name': 'N. Slonim'}], 'venue': 'Findings', 'abstract': 'The growing interest in argument mining and computational argumentation brings with it a plethora of Natural Language Understanding (NLU) tasks and corresponding datasets. However, as with many other NLU tasks, the dominant language is English, with resources in other languages being few and far between. In this work, we explore the potential of transfer learning using the multilingual BERT model to address argument mining tasks in non-English languages, based on English datasets and the use of machine translation. We show that such methods are well suited for classifying the stance of arguments and detecting evidence, but less so for assessing the quality of arguments, presumably because quality is harder to preserve under translation. In addition, focusing on the translate-train approach, we show how the choice of languages for translation, and the relations among them, affect the accuracy of the resultant model. Finally, to facilitate evaluation of transfer learning on argument mining tasks, we provide a human-generated dataset with more than 10k arguments in multiple languages, as well as machine translation of the English datasets.', 'year': 2020, 'in_acl': True, 'citationCount': 32, 'section': 'Challenges', 'subsection': None}, {'id': 227151662, 'paperId': '32f16fa23ee77456400ddacfceeb1b06b99220ec', 'title': 'Argument from Old Man’s View: Assessing Social Bias in Argumentation', 'authors': [{'authorId': '83854974', 'name': 'Maximilian Spliethöver'}, {'authorId': '2626599', 'name': 'Henning Wachsmuth'}], 'venue': 'Workshop on Argument Mining', 'abstract': 'Social bias in language - towards genders, ethnicities, ages, and other social groups - poses a problem with ethical impact for many NLP applications. Recent research has shown that machine learning models trained on respective data may not only adopt, but even amplify the bias. So far, however, little attention has been paid to bias in computational argumentation. In this paper, we study the existence of social biases in large English debate portals. In particular, we train word embedding models on portal-specific corpora and systematically evaluate their bias using WEAT, an existing metric to measure bias in word embeddings. In a word co-occurrence analysis, we then investigate causes of bias. The results suggest that all tested debate corpora contain unbalanced and biased data, mostly in favor of male people with European-American names. Our empirical insights contribute towards an understanding of bias in argumentative data sources.', 'year': 2020, 'in_acl': True, 'citationCount': 18, 'section': 'Challenges', 'subsection': None}]
|
2023.eacl-tutorials.2
|
Emotion Analysis from Texts
|
Emotion analysis in text is an area of research that encompasses a set of various natural language processing (NLP) tasks, including classification and regression settings, as well as structured prediction tasks like role labelling or stimulus detection. In this tutorial, we provide an overview of research from emotion psychology which sets the ground for choosing adequate NLP methodology, and present existing resources and classification methods used for emotion analysis in texts. We further discuss appraisal theories and how events can be interpreted regarding their presumably caused emotion and briefly introduce emotion role labelling. In addition to these technical topics, we discuss the use cases of emotion analysis in text, their societal impact, ethical considerations, as well as the main challenges in the field.
| 2,023
|
https://aclanthology.org/2023.eacl-tutorials.2
|
EACL
|
[{'id': 10784127, 'paperId': '435f22636f6fe13797951fd6cbe4532bc88f89ee', 'title': 'Emotion and Motivation: Toward Consensus Definitions and a Common Research Purpose', 'authors': [{'authorId': '143853826', 'name': 'P. Lang'}], 'venue': '', 'abstract': 'Historically, the hypothesis driving emotion research has been that emotion’s data-base—in language, physiology, and behavior— is organized around specific mental states, as reflected in evaluative language. It is suggested that this approach has not greatly advanced a natural science of emotion and that the developing motivational model of emotion defines a better path: emotion is an evolved trait founded on motivational neural circuitry shared by mammalian species, primitively prompting heightened perceptual processing and reflex mobilization for action to appetitive or threatening survival cues. As the field moves forward with increasingly sophisticated measurement technology and assessing more complex affective functioning, scientific understanding of human emotion will proceed best within the framework of this mammalian brain model.', 'year': 2010, 'in_acl': False, 'citationCount': 139, 'section': None, 'subsection': None}, {'id': 221320207, 'paperId': '1ab38e55ed557f4dd03158268235fd2050baa730', 'title': 'The Nature of Emotions', 'authors': [{'authorId': '84527386', 'name': 'R. Plutchik'}], 'venue': 'American Scientist', 'abstract': 'What is an emotion? More than 90 definitions have been offered over the past century, and there are almost as many theories of emotion—not to mention a complex array of overlapping words in our languages to describe them. Plutchik offers an integrative theory based on evolutionary principles. Emotions are adaptive—in fact, they have a complexity born of a long evolutionary history—and although we conceive of emotions as feeling states, Robert Plutchik says the feeling state is part of a process involving both cognition and behavior and containing several feedback loops.', 'year': 2001, 'in_acl': False, 'citationCount': 754, 'section': None, 'subsection': None}, {'id': 52012769, 'paperId': '723f744455914bf1ead6ad267976a1500b7dee4a', 'title': 'An Analysis of Annotated Corpora for Emotion Classification in Text', 'authors': [{'authorId': '51199643', 'name': 'Laura Ana Maria Bostan'}, {'authorId': '66339110', 'name': 'Roman Klinger'}], 'venue': 'International Conference on Computational Linguistics', 'abstract': 'Several datasets have been annotated and published for classification of emotions. They differ in several ways: (1) the use of different annotation schemata (e. g., discrete label sets, including joy, anger, fear, or sadness or continuous values including valence, or arousal), (2) the domain, and, (3) the file formats. This leads to several research gaps: supervised models often only use a limited set of available resources. Additionally, no previous work has compared emotion corpora in a systematic manner. We aim at contributing to this situation with a survey of the datasets, and aggregate them in a common file format with a common annotation schema. Based on this aggregation, we perform the first cross-corpus classification experiments in the spirit of future research enabled by this paper, in order to gain insight and a better understanding of differences of models inferred from the data. This work also simplifies the choice of the most appropriate resources for developing a model for a novel domain. One result from our analysis is that a subset of corpora is better classified with models trained on a different corpus. For none of the corpora, training on all data altogether is better than using a subselection of the resources. Our unified corpus is available at http://www.ims.uni-stuttgart.de/data/unifyemotion.', 'year': 2018, 'in_acl': True, 'citationCount': 185, 'section': None, 'subsection': None}, {'id': 226237153, 'paperId': 'eef65f8affe3dceba40f87b57789a02a40366d30', 'title': 'XED: A Multilingual Dataset for Sentiment Analysis and Emotion Detection', 'authors': [{'authorId': '35497520', 'name': 'Emily Öhman'}, {'authorId': '1850527789', 'name': 'Marc Pàmies'}, {'authorId': '41033530', 'name': 'Kaisla Kajava'}, {'authorId': '143675545', 'name': 'J. Tiedemann'}], 'venue': 'International Conference on Computational Linguistics', 'abstract': 'We introduce XED, a multilingual fine-grained emotion dataset. The dataset consists of human-annotated Finnish (25k) and English sentences (30k), as well as projected annotations for 30 additional languages, providing new resources for many low-resource languages. We use Plutchik’s core emotions to annotate the dataset with the addition of neutral to create a multilabel multiclass dataset. The dataset is carefully evaluated using language-specific BERT models and SVMs to show that XED performs on par with other similar datasets and is therefore a useful tool for sentiment analysis and emotion detection.', 'year': 2020, 'in_acl': True, 'citationCount': 47, 'section': None, 'subsection': None}, {'id': 249605436, 'paperId': '0ed26f2ec23335da48d03418cd067dd807fbb330', 'title': 'Dimensional Modeling of Emotions in Text with Appraisal Theories: Corpus Creation, Annotation Reliability, and Prediction', 'authors': [{'authorId': '82895194', 'name': 'Enrica Troiano'}, {'authorId': '2000278193', 'name': 'Laura Oberländer'}, {'authorId': '66339110', 'name': 'Roman Klinger'}], 'venue': 'International Conference on Computational Logic', 'abstract': 'The most prominent tasks in emotion analysis are to assign emotions to texts and to understand how emotions manifest in language. An important observation for natural language processing is that emotions can be communicated implicitly by referring to events alone, appealing to an empathetic, intersubjective understanding of events, even without explicitly mentioning an emotion name. In psychology, the class of emotion theories known as appraisal theories aims at explaining the link between events and emotions. Appraisals can be formalized as variables that measure a cognitive evaluation by people living through an event that they consider relevant. They include the assessment if an event is novel, if the person considers themselves to be responsible, if it is in line with their own goals, and so forth. Such appraisals explain which emotions are developed based on an event, for example, that a novel situation can induce surprise or one with uncertain consequences could evoke fear. We analyze the suitability of appraisal theories for emotion analysis in text with the goal of understanding if appraisal concepts can reliably be reconstructed by annotators, if they can be predicted by text classifiers, and if appraisal concepts help to identify emotion categories. To achieve that, we compile a corpus by asking people to textually describe events that triggered particular emotions and to disclose their appraisals. Then, we ask readers to reconstruct emotions and appraisals from the text. This set-up allows us to measure if emotions and appraisals can be recovered purely from text and provides a human baseline to judge a model’s performance measures. Our comparison of text classification methods to human annotators shows that both can reliably detect emotions and appraisals with similar performance. Therefore, appraisals constitute an alternative computational emotion analysis paradigm and further improve the categorization of emotions in text with joint models.', 'year': 2022, 'in_acl': True, 'citationCount': 31, 'section': None, 'subsection': None}]
|
2023.eacl-tutorials.3
|
Summarization of Dialogues and Conversations At Scale
|
Conversations are the natural communication format for people. This fact has motivated the large body of question answering and chatbot research as a seamless way for people to interact with machines. The conversations between people however, captured as video, audio or private or public written conversations, largely remain untapped as a source of compelling starting point for developing language technology. Summarizing such conversations can be enormously beneficial: automatic minutes for meetings or meeting highlights sent to relevant people can optimize communication in various groups while minimizing demands on people’s time; similarly analysis of conversations in online support groups can provide valuable information to doctors about the patient concerns. Summarizing written and spoken conversation poses unique research challenges—text reformulation, discourse and meaning analysis beyond the sentence, collecting data, and proper evaluation metrics. All these have been revisited by researchers since the emergence of neural approaches as the dominant approach for solving language processing problems. In this tutorial, we will survey the cutting-edge methods for summarization of conversations, covering key sub-areas whose combination is needed for a successful solution.
| 2,023
|
https://aclanthology.org/2023.eacl-tutorials.3
|
EACL
|
[{'id': 208010268, 'paperId': 'f9700e31a1d0ae34d4571ab056dfb268c1543349', 'title': 'SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization', 'authors': [{'authorId': '1782426', 'name': 'Bogdan Gliwa'}, {'authorId': '103241417', 'name': 'Iwona Mochol'}, {'authorId': '2065251929', 'name': 'M. Biesek'}, {'authorId': '1744393', 'name': 'A. Wawer'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'This paper introduces the SAMSum Corpus, a new dataset with abstractive dialogue summaries. We investigate the challenges it poses for automated summarization by testing several models and comparing their results with those obtained on a corpus of news articles. We show that model-generated summaries of dialogues achieve higher ROUGE scores than the model-generated summaries of news – in contrast with human evaluators’ judgement. This suggests that a challenging task of abstractive dialogue summarization requires dedicated models and non-standard quality measures. To our knowledge, our study is the first attempt to introduce a high-quality chat-dialogues corpus, manually annotated with abstractive summarizations, which can be used by the research community for further studies.', 'year': 2019, 'in_acl': True, 'citationCount': 530, 'section': None, 'subsection': None}, {'id': 221749138, 'paperId': '86cb79083bfa5dc6329ab1b8c7099af76fefde36', 'title': 'A Hierarchical Network for Abstractive Meeting Summarization with Cross-Domain Pretraining', 'authors': [{'authorId': '8652308', 'name': 'Chenguang Zhu'}, {'authorId': '8233965', 'name': 'Ruochen Xu'}, {'authorId': '48262024', 'name': 'Michael Zeng'}, {'authorId': '144531812', 'name': 'Xuedong Huang'}], 'venue': 'Findings', 'abstract': 'With the abundance of automatic meeting transcripts, meeting summarization is of great interest to both participants and other parties. Traditional methods of summarizing meetings depend on complex multi-step pipelines that make joint optimization intractable. Meanwhile, there are a handful of deep neural models for text summarization and dialogue systems. However, the semantic structure and styles of meeting transcripts are quite different from articles and conversations. In this paper, we propose a novel abstractive summary network that adapts to the meeting scenario. We design a hierarchical structure to accommodate long meeting transcripts and a role vector to depict the difference among speakers. Furthermore, due to the inadequacy of meeting summary data, we pretrain the model on large-scale news summary data. Empirical results show that our model outperforms previous approaches in both automatic metrics and human evaluation. For example, on ICSI dataset, the ROUGE-1 score increases from 34.66% to 46.28%.', 'year': 2020, 'in_acl': True, 'citationCount': 134, 'section': None, 'subsection': None}, {'id': 237420813, 'paperId': 'ac95a18762133d4065ac8af518c33084d83c5582', 'title': 'DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization', 'authors': [{'authorId': '1606040932', 'name': 'Ming Zhong'}, {'authorId': '39798499', 'name': 'Yang Liu'}, {'authorId': '2110197273', 'name': 'Yichong Xu'}, {'authorId': '1456009348', 'name': 'Chenguang Zhu'}, {'authorId': '48262024', 'name': 'Michael Zeng'}], 'venue': 'AAAI Conference on Artificial Intelligence', 'abstract': 'Dialogue is an essential part of human communication and cooperation. Existing research mainly focuses on short dialogue scenarios in a one-on-one fashion. However, multi-person interactions in the real world, such as meetings or interviews, are frequently over a few thousand words. There is still a lack of corresponding research and powerful tools to understand and process such long dialogues. Therefore, in this work, we present a pre-training framework for long dialogue understanding and summarization. Considering the nature of long conversations, we propose a window-based denoising approach for generative pre-training. For a dialogue, it corrupts a window of text with dialogue-inspired noise, and guides the model to reconstruct this window based on the content of the remaining conversation. Furthermore, to process longer input, we augment the model with sparse attention which is combined with conventional attention in a hybrid manner. We conduct extensive experiments on five datasets of long dialogues, covering tasks of dialogue summarization, abstractive question answering and topic segmentation. Experimentally, we show that our pre-trained model DialogLM significantly surpasses the state-of-the-art models across datasets and tasks. Source code and all the pre-trained models are available on our GitHub repository (https://github.com/microsoft/DialogLM).', 'year': 2021, 'in_acl': False, 'citationCount': 114, 'section': None, 'subsection': None}, {'id': 222133028, 'paperId': '0ac7c7279f52e8cc98171254534276d9644cf92c', 'title': 'Multi-View Sequence-to-Sequence Models with Conversational Structure for Abstractive Dialogue Summarization', 'authors': [{'authorId': '47739850', 'name': 'Jiaao Chen'}, {'authorId': '2022168', 'name': 'Diyi Yang'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Text summarization is one of the most challenging and interesting problems in NLP. Although much attention has been paid to summarizing structured text like news reports or encyclopedia articles, summarizing conversations---an essential part of human-human/machine interaction where most important pieces of information are scattered across various utterances of different speakers---remains relatively under-investigated. This work proposes a multi-view sequence-to-sequence model by first extracting conversational structures of unstructured daily chats from different views to represent conversations and then utilizing a multi-view decoder to incorporate different views to generate dialogue summaries. Experiments on a large-scale dialogue summarization corpus demonstrated that our methods significantly outperformed previous state-of-the-art models via both automatic evaluations and human judgment. We also discussed specific challenges that current approaches faced with this task. We have publicly released our code at this https URL', 'year': 2020, 'in_acl': True, 'citationCount': 136, 'section': None, 'subsection': None}, {'id': 243865654, 'paperId': 'b519511993b63423be6e3580cf3ce63ea77e9e2f', 'title': 'Simple Conversational Data Augmentation for Semi-supervised Abstractive Dialogue Summarization', 'authors': [{'authorId': '47739850', 'name': 'Jiaao Chen'}, {'authorId': '2143919864', 'name': 'Diyi Yang'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Abstractive conversation summarization has received growing attention while most current state-of-the-art summarization models heavily rely on human-annotated summaries. To reduce the dependence on labeled summaries, in this work, we present a simple yet effective set of Conversational Data Augmentation (CODA) methods for semi-supervised abstractive conversation summarization, such as random swapping/deletion to perturb the discourse relations inside conversations, dialogue-acts-guided insertion to interrupt the development of conversations, and conditional-generation-based substitution to substitute utterances with their paraphrases generated based on the conversation context. To further utilize unlabeled conversations, we combine CODA with two-stage noisy self-training where we first pre-train the summarization model on unlabeled conversations with pseudo summaries and then fine-tune it on labeled conversations. Experiments conducted on the recent conversation summarization datasets demonstrate the effectiveness of our methods over several state-of-the-art data augmentation baselines.', 'year': 2021, 'in_acl': True, 'citationCount': 37, 'section': None, 'subsection': None}, {'id': 233219904, 'paperId': 'aa28873534c24e4a8c5deb7bff723cd5fc69a6f0', 'title': 'QMSum: A New Benchmark for Query-based Multi-domain Meeting Summarization', 'authors': [{'authorId': '1606040932', 'name': 'Ming Zhong'}, {'authorId': '144508458', 'name': 'Da Yin'}, {'authorId': '48881008', 'name': 'Tao Yu'}, {'authorId': '2056402211', 'name': 'A. Zaidi'}, {'authorId': '2074120594', 'name': 'Mutethia Mutuma'}, {'authorId': '144598922', 'name': 'Rahul Jha'}, {'authorId': '2072795428', 'name': 'A. Awadallah'}, {'authorId': '1709797', 'name': 'Asli Celikyilmaz'}, {'authorId': '39798499', 'name': 'Yang Liu'}, {'authorId': '1767521', 'name': 'Xipeng Qiu'}, {'authorId': '9215251', 'name': 'Dragomir R. Radev'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Meetings are a key component of human collaboration. As increasing numbers of meetings are recorded and transcribed, meeting summaries have become essential to remind those who may or may not have attended the meetings about the key decisions made and the tasks to be completed. However, it is hard to create a single short summary that covers all the content of a long meeting involving multiple people and topics. In order to satisfy the needs of different types of users, we define a new query-based multi-domain meeting summarization task, where models have to select and summarize relevant spans of meetings in response to a query, and we introduce QMSum, a new benchmark for this task. QMSum consists of 1,808 query-summary pairs over 232 meetings in multiple domains. Besides, we investigate a locate-then-summarize method and evaluate a set of strong summarization baselines on the task. Experimental results and manual analysis reveal that QMSum presents significant challenges in long meeting summarization for future research. Dataset is available at https://github.com/Yale-LILY/QMSum.', 'year': 2021, 'in_acl': True, 'citationCount': 280, 'section': None, 'subsection': None}, {'id': 235294200, 'paperId': 'fa51076458b7bcf9a60f476d525755e47199a6d8', 'title': 'ConvoSumm: Conversation Summarization Benchmark and Improved Abstractive Summarization with Argument Mining', 'authors': [{'authorId': '46255971', 'name': 'Alexander R. Fabbri'}, {'authorId': '151002933', 'name': 'Faiaz Rahman'}, {'authorId': '2106627934', 'name': 'Imad Rizvi'}, {'authorId': '2203187', 'name': 'Borui Wang'}, {'authorId': '1391218521', 'name': 'Haoran Li'}, {'authorId': '2263803', 'name': 'Yashar Mehdad'}, {'authorId': '9215251', 'name': 'Dragomir R. Radev'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'While online conversations can cover a vast amount of information in many different formats, abstractive text summarization has primarily focused on modeling solely news articles. This research gap is due, in part, to the lack of standardized datasets for summarizing online discussions. To address this gap, we design annotation protocols motivated by an issues–viewpoints–assertions framework to crowdsource four new datasets on diverse online conversation forms of news comments, discussion forums, community question answering forums, and email threads. We benchmark state-of-the-art models on our datasets and analyze characteristics associated with the data. To create a comprehensive benchmark, we also evaluate these models on widely-used conversation summarization datasets to establish strong baselines in this domain. Furthermore, we incorporate argument mining through graph construction to directly model the issues, viewpoints, and assertions present in a conversation and filter noisy input, showing comparable or improved results according to automatic and human evaluations.', 'year': 2021, 'in_acl': True, 'citationCount': 55, 'section': None, 'subsection': None}]
|
2023.emnlp-tutorial.2
|
Security Challenges in Natural Language Processing Models
|
Large-scale natural language processing models have been developed and integrated into numerous applications, given the advantage of their remarkable performance. Nonetheless, the security concerns associated with these models prevent the widespread adoption of these black-box machine learning models. In this tutorial, we will dive into three emerging security issues in NLP research, i.e., backdoor attacks, private data leakage, and imitation attacks. These threats will be introduced in accordance with their threatening usage scenarios, attack methodologies, and defense technologies.
| 2,023
|
https://aclanthology.org/2023.emnlp-tutorial.2
|
EMNLP
|
[{'id': 26783139, 'paperId': '573fd2ce97c70bb29097e8efb28a27af791225ca', 'title': 'BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain', 'authors': [{'authorId': '2367353', 'name': 'Tianyu Gu'}, {'authorId': '1398683279', 'name': 'Brendan Dolan-Gavitt'}, {'authorId': '1696125', 'name': 'S. Garg'}], 'venue': 'arXiv.org', 'abstract': "Deep learning-based techniques have achieved state-of-the-art performance on a wide variety of recognition and classification tasks. However, these networks are typically computationally expensive to train, requiring weeks of computation on many GPUs; as a result, many users outsource the training procedure to the cloud or rely on pre-trained models that are then fine-tuned for a specific task. In this paper we show that outsourced training introduces new security risks: an adversary can create a maliciously trained network (a backdoored neural network, or a \\emph{BadNet}) that has state-of-the-art performance on the user's training and validation samples, but behaves badly on specific attacker-chosen inputs. We first explore the properties of BadNets in a toy example, by creating a backdoored handwritten digit classifier. Next, we demonstrate backdoors in a more realistic scenario by creating a U.S. street sign classifier that identifies stop signs as speed limits when a special sticker is added to the stop sign; we then show in addition that the backdoor in our US street sign detector can persist even if the network is later retrained for another task and cause a drop in accuracy of {25}\\% on average when the backdoor trigger is present. These results demonstrate that backdoors in neural networks are both powerful and---because the behavior of neural networks is difficult to explicate---stealthy. This work provides motivation for further research into techniques for verifying and inspecting neural networks, just as we have developed tools for verifying and debugging software.", 'year': 2017, 'in_acl': False, 'citationCount': 1557, 'section': 'Backdoor Attack', 'subsection': None}, {'id': 168170110, 'paperId': 'f182ccbc90c1d20d358e3d197b340691f277428f', 'title': 'A Backdoor Attack Against LSTM-Based Text Classification Systems', 'authors': [{'authorId': '2774171', 'name': 'Jiazhu Dai'}, {'authorId': '2109151603', 'name': 'Chuanshuai Chen'}, {'authorId': '2110464101', 'name': 'Yufeng Li'}], 'venue': 'IEEE Access', 'abstract': 'With the widespread use of deep learning system in many applications, the adversary has strong incentive to explore vulnerabilities of deep neural networks and manipulate them. Backdoor attacks against deep neural networks have been reported to be a new type of threat. In this attack, the adversary will inject backdoors into the model and then cause the misbehavior of the model through inputs including backdoor triggers. Existed research mainly focuses on backdoor attacks in image classification based on CNN, little attention has been paid to the backdoor attacks in RNN. In this paper, we implement a backdoor attack against LSTM-based text classification by data poisoning. After the backdoor is injected, the model will misclassify any text samples that contains a specific trigger sentence into the target category determined by the adversary. The backdoor attack is stealthy and the backdoor injected in the model has little impact on the performance of the model. We consider the backdoor attack in black-box setting, where the adversary has no knowledge of model structures or training algorithms except for a small amount of training data. We verify the attack through sentiment analysis experiment on the dataset of IMDB movie reviews. The experimental results indicate that our attack can achieve around 96% success rate with 1% poisoning rate.', 'year': 2019, 'in_acl': False, 'citationCount': 271, 'section': 'Backdoor Attack', 'subsection': None}, {'id': 215754328, 'paperId': '0d360a1256ccdfca58cf98d12243df8407fd442d', 'title': 'Weight Poisoning Attacks on Pretrained Models', 'authors': [{'authorId': '147225682', 'name': 'Keita Kurita'}, {'authorId': '144397625', 'name': 'Paul Michel'}, {'authorId': '1700325', 'name': 'Graham Neubig'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Recently, NLP has seen a surge in the usage of large pre-trained models. Users download weights of models pre-trained on large datasets, then fine-tune the weights on a task of their choice. This raises the question of whether downloading untrusted pre-trained weights can pose a security threat. In this paper, we show that it is possible to construct “weight poisoning” attacks where pre-trained weights are injected with vulnerabilities that expose “backdoors” after fine-tuning, enabling the attacker to manipulate the model prediction simply by injecting an arbitrary keyword. We show that by applying a regularization method which we call RIPPLe and an initialization procedure we call Embedding Surgery, such attacks are possible even with limited knowledge of the dataset and fine-tuning procedure. Our experiments on sentiment classification, toxicity detection, and spam detection show that this attack is widely applicable and poses a serious threat. Finally, we outline practical defenses against such attacks.', 'year': 2020, 'in_acl': True, 'citationCount': 373, 'section': 'Backdoor Attack', 'subsection': None}, {'id': 53099247, 'paperId': '30e0ffeb519a4df2d4a2067e899c5fb5c5e85e70', 'title': 'Exploiting Unintended Feature Leakage in Collaborative Learning', 'authors': [{'authorId': '145557680', 'name': 'Luca Melis'}, {'authorId': '3469125', 'name': 'Congzheng Song'}, {'authorId': '1728207', 'name': 'Emiliano De Cristofaro'}, {'authorId': '1723945', 'name': 'Vitaly Shmatikov'}], 'venue': 'IEEE Symposium on Security and Privacy', 'abstract': "Collaborative machine learning and related techniques such as federated learning allow multiple participants, each with his own training dataset, to build a joint model by training locally and periodically exchanging model updates. We demonstrate that these updates leak unintended information about participants' training data and develop passive and active inference attacks to exploit this leakage. First, we show that an adversarial participant can infer the presence of exact data points -- for example, specific locations -- in others' training data (i.e., membership inference). Then, we show how this adversary can infer properties that hold only for a subset of the training data and are independent of the properties that the joint model aims to capture. For example, he can infer when a specific person first appears in the photos used to train a binary gender classifier. We evaluate our attacks on a variety of tasks, datasets, and learning configurations, analyze their limitations, and discuss possible defenses.", 'year': 2018, 'in_acl': False, 'citationCount': 1318, 'section': 'Privacy and Data Leakage', 'subsection': None}, {'id': 229156229, 'paperId': 'df7d26339adf4eb0c07160947b9d2973c24911ba', 'title': 'Extracting Training Data from Large Language Models', 'authors': [{'authorId': '2483738', 'name': 'Nicholas Carlini'}, {'authorId': '2444919', 'name': 'Florian Tramèr'}, {'authorId': '145217343', 'name': 'Eric Wallace'}, {'authorId': '40844378', 'name': 'Matthew Jagielski'}, {'authorId': '1404060687', 'name': 'Ariel Herbert-Voss'}, {'authorId': '3844009', 'name': 'Katherine Lee'}, {'authorId': '145625142', 'name': 'Adam Roberts'}, {'authorId': '31035595', 'name': 'Tom B. Brown'}, {'authorId': '143711382', 'name': 'D. Song'}, {'authorId': '1758110', 'name': 'Ú. Erlingsson'}, {'authorId': '3046437', 'name': 'Alina Oprea'}, {'authorId': '2402716', 'name': 'Colin Raffel'}], 'venue': 'USENIX Security Symposium', 'abstract': "It has become common to publish large (billion parameter) language models that have been trained on private datasets. This paper demonstrates that in such settings, an adversary can perform a training data extraction attack to recover individual training examples by querying the language model. \nWe demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model's training data. These extracted examples include (public) personally identifiable information (names, phone numbers, and email addresses), IRC conversations, code, and 128-bit UUIDs. Our attack is possible even though each of the above sequences are included in just one document in the training data. \nWe comprehensively evaluate our extraction attack to understand the factors that contribute to its success. For example, we find that larger models are more vulnerable than smaller models. We conclude by drawing lessons and discussing possible safeguards for training large language models.", 'year': 2020, 'in_acl': False, 'citationCount': 1504, 'section': 'Privacy and Data Leakage', 'subsection': None}, {'id': 253080602, 'paperId': 'eb39dda2df56270599f2a28bc6433c84c1704949', 'title': 'Extracted BERT Model Leaks More Information than You Think!', 'authors': [{'authorId': '2288269593', 'name': 'Xuanli He'}, {'authorId': None, 'name': 'Chen Chen'}, {'authorId': '3366777', 'name': 'L. Lyu'}, {'authorId': '3101288', 'name': 'Qiongkai Xu'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'The collection and availability of big data, combined with advances in pre-trained models (e.g. BERT), have revolutionized the predictive performance of natural language processing tasks. This allows corporations to provide machine learning as a service (MLaaS) by encapsulating fine-tuned BERT-based models as APIs. Due to significant commercial interest, there has been a surge of attempts to steal remote services via model extraction. Although previous works have made progress in defending against model extraction attacks, there has been little discussion on their performance in preventing privacy leakage. This work bridges this gap by launching an attribute inference attack against the extracted BERT model. Our extensive experiments reveal that model extraction can cause severe privacy leakage even when victim models are facilitated with state-of-the-art defensive strategies.', 'year': 2022, 'in_acl': True, 'citationCount': 4, 'section': 'Privacy and Data Leakage', 'subsection': None}, {'id': 216868525, 'paperId': 'd73561ab8318ce343f5cb15f96c74f210b6b24fa', 'title': 'Imitation Attacks and Defenses for Black-box Machine Translation Systems', 'authors': [{'authorId': '145217343', 'name': 'Eric Wallace'}, {'authorId': '144872294', 'name': 'Mitchell Stern'}, {'authorId': '143711382', 'name': 'D. Song'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'We consider an adversary looking to steal or attack a black-box machine translation (MT) system, either for financial gain or to exploit model errors. We first show that black-box MT systems can be stolen by querying them with monolingual sentences and training models to imitate their outputs. Using simulated experiments, we demonstrate that MT model stealing is possible even when imitation models have different input data or architectures than their victims. Applying these ideas, we train imitation models that reach within 0.6 BLEU of three production MT systems on both high-resource and low-resource language pairs. We then leverage the similarity of our imitation models to transfer adversarial examples to the production systems. We use gradient-based attacks that expose inputs which lead to semantically-incorrect translations, dropped content, and vulgar model outputs. To mitigate these vulnerabilities, we propose a defense that modifies translation outputs in order to misdirect the optimization of imitation models. This defense degrades imitation model BLEU and attack transfer rates at some cost in BLEU and inference speed.', 'year': 2020, 'in_acl': True, 'citationCount': 108, 'section': 'Imitation Attack', 'subsection': None}, {'id': 252090014, 'paperId': '975e8d7065161d3dc0020ef343aa1db2a3db5a7b', 'title': 'Student Surpasses Teacher: Imitation Attack for Black-Box NLP APIs', 'authors': [{'authorId': '3101288', 'name': 'Qiongkai Xu'}, {'authorId': '2288269593', 'name': 'Xuanli He'}, {'authorId': '3366777', 'name': 'L. Lyu'}, {'authorId': '153139892', 'name': 'Lizhen Qu'}, {'authorId': '2561045', 'name': 'Gholamreza Haffari'}], 'venue': 'International Conference on Computational Linguistics', 'abstract': 'Machine-learning-as-a-service (MLaaS) has attracted millions of users to their splendid large-scale models. Although published as black-box APIs, the valuable models behind these services are still vulnerable to imitation attacks. Recently, a series of works have demonstrated that attackers manage to steal or extract the victim models. Nonetheless, none of the previous stolen models can outperform the original black-box APIs. In this work, we conduct unsupervised domain adaptation and multi-victim ensemble to showing that attackers could potentially surpass victims, which is beyond previous understanding of model extraction. Extensive experiments on both benchmark datasets and real-world APIs validate that the imitators can succeed in outperforming the original black-box models on transferred domains. We consider our work as a milestone in the research of imitation attack, especially on NLP APIs, as the superior performance could influence the defense or even publishing strategy of API providers.', 'year': 2021, 'in_acl': True, 'citationCount': 19, 'section': 'Imitation Attack', 'subsection': None}, {'id': 222134003, 'paperId': '8a027e49b3e961e1a9cd8e842281d112ab2698c9', 'title': 'Differentially Private Representation for NLP: Formal Guarantee and An Empirical Study on Privacy and Fairness', 'authors': [{'authorId': '3366777', 'name': 'L. Lyu'}, {'authorId': '2288269593', 'name': 'Xuanli He'}, {'authorId': '40609859', 'name': 'Yitong Li'}], 'venue': 'Findings', 'abstract': 'It has been demonstrated that hidden representation learned by deep model can encode private information of the input, hence can be exploited to recover such information with reasonable accuracy. To address this issue, we propose a novel approach called Differentially Private Neural Representation (DPNR) to preserve privacy of the extracted representation from text. DPNR utilises Differential Privacy (DP) to provide formal privacy guarantee. Further, we show that masking words via dropout can further enhance privacy. To maintain utility of the learned representation, we integrate DP-noisy representation into a robust training process to derive a robust target model, which also helps for model fairness over various demographic variables. Experimental results on benchmark datasets under various parameter settings demonstrate that DPNR largely reduces privacy leakage without significantly sacrificing the main task performance.', 'year': 2020, 'in_acl': True, 'citationCount': 75, 'section': 'Defense using', 'subsection': 'differential privacy'}, {'id': 237353275, 'paperId': 'bda3fe4ae1cb73ef99f48add40967179577d29e8', 'title': 'Selective Differential Privacy for Language Modeling', 'authors': [{'authorId': '8299781', 'name': 'Weiyan Shi'}, {'authorId': '2004851330', 'name': 'Aiqi Cui'}, {'authorId': '2150320252', 'name': 'Evan Li'}, {'authorId': '39823639', 'name': 'R. Jia'}, {'authorId': '1564034697', 'name': 'Zhou Yu'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'With the increasing applications of language models, it has become crucial to protect these models from leaking private information. Previous work has attempted to tackle this challenge by training RNN-based language models with differential privacy guarantees.However, applying classical differential privacy to language models leads to poor model performance as the underlying privacy notion is over-pessimistic and provides undifferentiated protection for all tokens in the data. Given that the private information in natural language is sparse (for example, the bulk of an email might not carry personally identifiable information), we propose a new privacy notion, selective differential privacy, to provide rigorous privacy guarantees on the sensitive portion of the data to improve model utility. To realize such a new notion, we develop a corresponding privacy mechanism, Selective-DPSGD, for RNN-based language models. Besides language modeling, we also apply the method to a more concrete application – dialog systems. Experiments on both language modeling and dialog system building show that the proposed privacy-preserving mechanism achieves better utilities while remaining safe under various privacy attacks compared to the baselines. The data and code are released at https://github.com/wyshi/lm_privacy to facilitate future research.', 'year': 2021, 'in_acl': True, 'citationCount': 59, 'section': 'Defense using', 'subsection': 'differential privacy'}, {'id': 208909851, 'paperId': '8e58dc63817a2a26e5a2ddad38d8b1d19d1c3795', 'title': 'Machine Unlearning', 'authors': [{'authorId': '1452678444', 'name': 'Lucas Bourtoule'}, {'authorId': '143754359', 'name': 'Varun Chandrasekaran'}, {'authorId': '1415982317', 'name': 'Christopher A. Choquette-Choo'}, {'authorId': '120074583', 'name': 'Hengrui Jia'}, {'authorId': '1452679273', 'name': 'Adelin Travers'}, {'authorId': '23696685', 'name': 'Baiwu Zhang'}, {'authorId': '47412202', 'name': 'D. Lie'}, {'authorId': '1967156', 'name': 'Nicolas Papernot'}], 'venue': 'IEEE Symposium on Security and Privacy', 'abstract': 'Once users have shared their data online, it is generally difficult for them to revoke access and ask for the data to be deleted. Machine learning (ML) exacerbates this problem because any model trained with said data may have memorized it, putting users at risk of a successful privacy attack exposing their information. Yet, having models unlearn is notoriously difficult.We introduce SISA training, a framework that expedites the unlearning process by strategically limiting the influence of a data point in the training procedure. While our framework is applicable to any learning algorithm, it is designed to achieve the largest improvements for stateful algorithms like stochastic gradient descent for deep neural networks. SISA training reduces the computational overhead associated with unlearning, even in the worst-case setting where unlearning requests are made uniformly across the training set. In some cases, the service provider may have a prior on the distribution of unlearning requests that will be issued by users. We may take this prior into account to partition and order data accordingly, and further decrease overhead from unlearning.Our evaluation spans several datasets from different domains, with corresponding motivations for unlearning. Under no distributional assumptions, for simple learning tasks, we observe that SISA training improves time to unlearn points from the Purchase dataset by 4.63×, and 2.45× for the SVHN dataset, over retraining from scratch. SISA training also provides a speed-up of 1.36× in retraining for complex learning tasks such as ImageNet classification; aided by transfer learning, this results in a small degradation in accuracy. Our work contributes to practical data governance in machine unlearning.', 'year': 2019, 'in_acl': False, 'citationCount': 642, 'section': 'Defense using', 'subsection': 'machine unlearning'}, {'id': 244909149, 'paperId': '2569a7309142e40815cf556b6417059df9abbda8', 'title': 'Protecting Intellectual Property of Language Generation APIs with Lexical Watermark', 'authors': [{'authorId': '2288269593', 'name': 'Xuanli He'}, {'authorId': '3101288', 'name': 'Qiongkai Xu'}, {'authorId': '3366777', 'name': 'L. Lyu'}, {'authorId': '2397264', 'name': 'Fangzhao Wu'}, {'authorId': '2143199909', 'name': 'Chenguang Wang'}], 'venue': 'AAAI Conference on Artificial Intelligence', 'abstract': 'Nowadays, due to the breakthrough in natural language generation (NLG), including machine translation, document summarization, image captioning, etc NLG models have been encapsulated in cloud APIs to serve over half a billion people worldwide and process over one hundred billion word generations per day. Thus, NLG APIs have already become essential profitable services in many commercial companies. Due to the substantial financial and intellectual investments, service providers adopt a pay-as-you-use policy to promote sustainable market growth. However, recent works have shown that cloud platforms suffer from financial losses imposed by model extraction attacks, which aim to imitate the functionality and utility of the victim services, thus violating the intellectual property (IP) of cloud APIs. This work targets at protecting IP of NLG APIs by identifying the attackers who have utilized watermarked responses from the victim NLG APIs. However, most existing watermarking techniques are not directly amenable for IP protection of NLG APIs. To bridge this gap, we first present a novel watermarking method for text generation APIs by conducting lexical modification to the original outputs. Compared with the competitive baselines, our watermark approach achieves better identifiable performance in terms of p-value, with fewer semantic losses. In addition, our watermarks are more understandable and intuitive to humans than the baselines. Finally, the empirical studies show our approach is also applicable to queries from different domains, and is effective on the attacker trained on a mixture of the corpus which includes less than 10% watermarked samples.', 'year': 2021, 'in_acl': False, 'citationCount': 82, 'section': 'Defense using', 'subsection': 'watermarking'}]
|
2023.emnlp-tutorial.3
|
Designing, Evaluating, and Learning from Humans Interacting with NLP Models
|
The rapid advancement of natural language processing (NLP) research has led to various applications spanning a wide range of domains that require models to interact with humans – e.g., chatbots responding to human inquiries, machine translation systems assisting human translators, designers prompting Large Language Models for co-creation or prototyping AI-infused applications, etc. In these cases, humans interaction is key to the success of NLP applications; any potential misconceptions or differences might lead to error cascades at the subsequent stages. Such interaction involves a lot of design choices around models, e.g. the sensitivity of interfaces, the impact of design choice and evaluation questions, etc. This tutorial aims to provide a systematic and up-to-date overview of key considerations and effective approaches for studying human-NLP model interactions. Our tutorial will focus specifically on the scenario where end users – lay people and domain experts who have access to NLP models but are less familiar with NLP techniques – use or collaborate with deployed models. Throughout the tutorial, we will use five case studies (on classifier-assisted decision making, machine-aided translation, dialog systems, and prompting) to cover three major themes: (1) how to conduct human-in-the-loop usability evaluations to ensure that models are capable of interacting with humans; (2) how to design user interfaces (UIs) and interaction mechanisms that provide end users with easy access to NLP models; (3) how to learn and improve NLP models through the human interactions. We will use best practices from HCI to ground our discussion, and will highlight current challenges and future directions.
| 2,023
|
https://aclanthology.org/2023.emnlp-tutorial.3
|
EMNLP
|
[{'id': 232147529, 'paperId': 'c4788d6d19c9c6555264f274d01fd0c34c22c674', 'title': 'Putting Humans in the Natural Language Processing Loop: A Survey', 'authors': [{'authorId': '1390877819', 'name': 'Zijie J. Wang'}, {'authorId': '2026030439', 'name': 'Dongjin Choi'}, {'authorId': '51132439', 'name': 'Shenyu Xu'}, {'authorId': '2022168', 'name': 'Diyi Yang'}], 'venue': 'HCINLP', 'abstract': 'How can we design Natural Language Processing (NLP) systems that learn from human feedback? There is a growing research body of Human-in-the-loop (HITL) NLP frameworks that continuously integrate human feedback to improve the model itself. HITL NLP research is nascent but multifarious—solving various NLP problems, collecting diverse feedback from different people, and applying different methods to learn from human feedback. We present a survey of HITL NLP work from both Machine Learning (ML) and Human-computer Interaction (HCI) communities that highlights its short yet inspiring history, and thoroughly summarize recent frameworks focusing on their tasks, goals, human interactions, and feedback learning methods. Finally, we discuss future studies for integrating human feedback in the NLP development loop.', 'year': 2021, 'in_acl': True, 'citationCount': 66, 'section': None, 'subsection': None}, {'id': 235694265, 'paperId': 'a16ae67070de155789a871cb27ecbf9eaa98b379', 'title': 'All That’s ‘Human’ Is Not Gold: Evaluating Human Evaluation of Generated Text', 'authors': [{'authorId': '40684993', 'name': 'Elizabeth Clark'}, {'authorId': '50509991', 'name': 'Tal August'}, {'authorId': '38618739', 'name': 'Sofia Serrano'}, {'authorId': '3465456', 'name': 'Nikita Haduong'}, {'authorId': '40895369', 'name': 'Suchin Gururangan'}, {'authorId': '144365875', 'name': 'Noah A. Smith'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Human evaluations are typically considered the gold standard in natural language generation, but as models’ fluency improves, how well can evaluators detect and judge machine-generated text? We run a study assessing non-experts’ ability to distinguish between human- and machine-authored text (GPT2 and GPT3) in three domains (stories, news articles, and recipes). We find that, without training, evaluators distinguished between GPT3- and human-authored text at random chance level. We explore three approaches for quickly training evaluators to better identify GPT3-authored text (detailed instructions, annotated examples, and paired examples) and find that while evaluators’ accuracy improved up to 55%, it did not significantly improve across the three domains. Given the inconsistent results across text domains and the often contradictory reasons evaluators gave for their judgments, we examine the role untrained human evaluations play in NLG evaluation and provide recommendations to NLG researchers for improving human evaluations of text generated from state-of-the-art models.', 'year': 2021, 'in_acl': True, 'citationCount': 326, 'section': None, 'subsection': None}, {'id': 218483124, 'paperId': '529025645c70a935221bd434484faee695ad0f25', 'title': 'Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design', 'authors': [{'authorId': '2117860470', 'name': 'Qian Yang'}, {'authorId': '1792714', 'name': 'Aaron Steinfeld'}, {'authorId': '35959897', 'name': 'C. Rosé'}, {'authorId': '145308025', 'name': 'J. Zimmerman'}], 'venue': 'International Conference on Human Factors in Computing Systems', 'abstract': "Artificial Intelligence (AI) plays an increasingly important role in improving HCI and user experience. Yet many challenges persist in designing and innovating valuable human-AI interactions. For example, AI systems can make unpredictable errors, and these errors damage UX and even lead to undesired societal impact. However, HCI routinely grapples with complex technologies and mitigates their unintended consequences. What makes AI different? What makes human-AI interaction appear particularly difficult to design? This paper investigates these questions. We synthesize prior research, our own design and research experience, and our observations when teaching human-AI interaction. We identify two sources of AI's distinctive design challenges: 1) uncertainty surrounding AI's capabilities, 2) AI's output complexity, spanning from simple to adaptive complex. We identify four levels of AI systems. On each level, designers encounter a different subset of the design challenges. We demonstrate how these findings reveal new insights for designers, researchers, and design tool makers in productively addressing the challenges of human-AI interaction going forward.", 'year': 2020, 'in_acl': False, 'citationCount': 371, 'section': None, 'subsection': None}, {'id': 220128138, 'paperId': 'ebcbbb8fe297940d79b17aeb6d46bedff9db7fec', 'title': 'Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance', 'authors': [{'authorId': '33340656', 'name': 'Gagan Bansal'}, {'authorId': '35232494', 'name': 'Tongshuang Sherry Wu'}, {'authorId': '153823289', 'name': 'Joyce Zhou'}, {'authorId': '27083453', 'name': 'Raymond Fok'}, {'authorId': '2571049', 'name': 'Besmira Nushi'}, {'authorId': '1783184', 'name': 'Ece Kamar'}, {'authorId': '78846919', 'name': 'Marco Tulio Ribeiro'}, {'authorId': '1780531', 'name': 'Daniel S. Weld'}], 'venue': 'International Conference on Human Factors in Computing Systems', 'abstract': 'Many researchers motivate explainable AI with studies showing that human-AI team performance on decision-making tasks improves when the AI explains its recommendations. However, prior studies observed improvements from explanations only when the AI, alone, outperformed both the human and the best team. Can explanations help lead to complementary performance, where team accuracy is higher than either the human or the AI working solo? We conduct mixed-method user studies on three datasets, where an AI with accuracy comparable to humans helps participants solve a task (explaining itself in some conditions). While we observed complementary improvements from AI augmentation, they were not increased by explanations. Rather, explanations increased the chance that humans will accept the AI’s recommendation, regardless of its correctness. Our result poses new challenges for human-centered AI: Can we develop explanatory approaches that encourage appropriate trust in AI, and therefore help generate (or improve) complementary performance?', 'year': 2020, 'in_acl': False, 'citationCount': 463, 'section': None, 'subsection': None}, {'id': 8943607, 'paperId': '5df85ae89af55c6d82a1a14836ea6bcfbfc2c0ec', 'title': 'Principles of mixed-initiative user interfaces', 'authors': [{'authorId': '145479841', 'name': 'E. Horvitz'}], 'venue': 'International Conference on Human Factors in Computing Systems', 'abstract': 'Recent debate has centered on the relative promise of focusinguser-interface research on developing new metaphors and tools thatenhance users abilities to directly manipulate objects versusdirecting effort toward developing interface agents that provideautomation. In this paper, we review principles that show promisefor allowing engineers to enhance human-computer interactionthrough an elegant coupling of automated services with directmanipulation. Key ideas will be highlighted in terms of the Lookoutsystem for scheduling and meeting management.', 'year': 1999, 'in_acl': False, 'citationCount': 1329, 'section': None, 'subsection': None}, {'id': 86866942, 'paperId': 'ad3cf68bae32d21f25ac142287d4a556155619d2', 'title': 'Guidelines for Human-AI Interaction', 'authors': [{'authorId': '1719124', 'name': 'Saleema Amershi'}, {'authorId': '1780531', 'name': 'Daniel S. Weld'}, {'authorId': '3109339', 'name': 'Mihaela Vorvoreanu'}, {'authorId': '3318905', 'name': 'Adam Fourney'}, {'authorId': '2571049', 'name': 'Besmira Nushi'}, {'authorId': '9703838', 'name': 'Penny Collisson'}, {'authorId': '38972741', 'name': 'Jina Suh'}, {'authorId': '1730570', 'name': 'Shamsi T. Iqbal'}, {'authorId': '144609235', 'name': 'Paul N. Bennett'}, {'authorId': '1781500', 'name': 'K. Quinn'}, {'authorId': '144113253', 'name': 'J. Teevan'}, {'authorId': '1405707881', 'name': 'Ruth Kikin-Gil'}, {'authorId': '145479841', 'name': 'E. Horvitz'}], 'venue': 'International Conference on Human Factors in Computing Systems', 'abstract': 'Advances in artificial intelligence (AI) frame opportunities and challenges for user interface design. Principles for human-AI interaction have been discussed in the human-computer interaction community for over two decades, but more study and innovation are needed in light of advances in AI and the growing uses of AI technologies in human-facing applications. We propose 18 generally applicable design guidelines for human-AI interaction. These guidelines are validated through multiple rounds of evaluation including a user study with 49 design practitioners who tested the guidelines against 20 popular AI-infused products. The results verify the relevance of the guidelines over a spectrum of interaction scenarios and reveal gaps in our knowledge, highlighting opportunities for further research. Based on the evaluations, we believe the set of design guidelines can serve as a resource to practitioners working on the design of applications and features that harness AI technologies, and to researchers interested in the further development of human-AI interaction design principles.', 'year': 2019, 'in_acl': False, 'citationCount': 1094, 'section': None, 'subsection': None}, {'id': 246426909, 'paperId': 'd766bffc357127e0dc86dd69561d5aeb520d6f4c', 'title': 'Training language models to follow instructions with human feedback', 'authors': [{'authorId': '31793034', 'name': 'Long Ouyang'}, {'authorId': '49387725', 'name': 'Jeff Wu'}, {'authorId': '2115903168', 'name': 'Xu Jiang'}, {'authorId': '2061137049', 'name': 'Diogo Almeida'}, {'authorId': '2064084601', 'name': 'Carroll L. Wainwright'}, {'authorId': '2051714782', 'name': 'Pamela Mishkin'}, {'authorId': None, 'name': 'Chong Zhang'}, {'authorId': '144517868', 'name': 'Sandhini Agarwal'}, {'authorId': '2117680841', 'name': 'Katarina Slama'}, {'authorId': '2064770039', 'name': 'Alex Ray'}, {'authorId': '47971768', 'name': 'John Schulman'}, {'authorId': '2052366271', 'name': 'Jacob Hilton'}, {'authorId': '2151735262', 'name': 'Fraser Kelton'}, {'authorId': '2142365973', 'name': 'Luke E. Miller'}, {'authorId': '2151735251', 'name': 'Maddie Simens'}, {'authorId': '119609682', 'name': 'Amanda Askell'}, {'authorId': '2930640', 'name': 'P. Welinder'}, {'authorId': '145791315', 'name': 'P. Christiano'}, {'authorId': '2990741', 'name': 'J. Leike'}, {'authorId': '49407415', 'name': 'Ryan J. Lowe'}], 'venue': 'Neural Information Processing Systems', 'abstract': "Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning. We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback. We call the resulting models InstructGPT. In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters. Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent.", 'year': 2022, 'in_acl': False, 'citationCount': 9395, 'section': None, 'subsection': None}, {'id': 221665105, 'paperId': '053b1d7b97eb2c91fc3921d589c160b0923c70b1', 'title': 'Learning to summarize from human feedback', 'authors': [{'authorId': '1387983862', 'name': 'Nisan Stiennon'}, {'authorId': '31793034', 'name': 'Long Ouyang'}, {'authorId': '49387725', 'name': 'Jeff Wu'}, {'authorId': '2052152920', 'name': 'Daniel M. Ziegler'}, {'authorId': '49407415', 'name': 'Ryan J. Lowe'}, {'authorId': '153387869', 'name': 'Chelsea Voss'}, {'authorId': '38909097', 'name': 'Alec Radford'}, {'authorId': '2698777', 'name': 'Dario Amodei'}, {'authorId': '145370786', 'name': 'Paul Christiano'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'As language models become more powerful, training and evaluation are increasingly bottlenecked by the data and metrics used for a particular task. For example, summarization models are often trained to predict human reference summaries and evaluated using ROUGE, but both of these metrics are rough proxies for what we really care about---summary quality. In this work, we show that it is possible to significantly improve summary quality by training a model to optimize for human preferences. We collect a large, high-quality dataset of human comparisons between summaries, train a model to predict the human-preferred summary, and use that model as a reward function to fine-tune a summarization policy using reinforcement learning. We apply our method to a version of the TL;DR dataset of Reddit posts and find that our models significantly outperform both human reference summaries and much larger models fine-tuned with supervised learning alone. Our models also transfer to CNN/DM news articles, producing summaries nearly as good as the human reference without any news-specific fine-tuning. We conduct extensive analyses to understand our human feedback dataset and fine-tuned models We establish that our reward model generalizes to new datasets, and that optimizing our reward model results in better summaries than optimizing ROUGE according to humans. We hope the evidence from our paper motivates machine learning researchers to pay closer attention to how their training loss affects the model behavior they actually want.', 'year': 2020, 'in_acl': False, 'citationCount': 1573, 'section': None, 'subsection': None}]
|
2023.emnlp-tutorial.4
|
LLM-driven Instruction Following: Progresses and Concerns
|
The progress of natural language processing (NLP) is primarily driven by machine learning that optimizes a system on a large-scale set of task-specific labeled examples. This learning paradigm limits the ability of machines to have the same capabilities as humans in handling new tasks since humans can often solve unseen tasks with a couple of examples accompanied by task instructions. In addition, we may not have a chance to prepare task-specific examples of large-volume for new tasks because we cannot foresee what task needs to be addressed next and how complex to annotate for it. Therefore, task instructions act as a novel and promising resource for supervision. This tutorial targets researchers and practitioners who are interested in AI and ML technologies for NLP generalization in a low-shot scenario. In particular, we will present a diverse thread of instruction-driven NLP studies that try to answer the following questions: (i) What is task instruction? (ii) How is the process of creating datasets and evaluating systems conducted? (iii) How to encode task instructions? (iv) When and why do some instructions work better? (v) What concerns remain in LLM-driven instruction following? We will discuss several lines of frontier research that tackle those challenges and will conclude the tutorial by outlining directions for further investigation.
| 2,023
|
https://aclanthology.org/2023.emnlp-tutorial.4
|
EMNLP
|
[{'id': 211126925, 'paperId': '7b4358d7692353003eae7e39cececa2c2c44c43a', 'title': 'Learning from Explanations with Neural Execution Tree', 'authors': [{'authorId': '1390880371', 'name': 'Ziqi Wang'}, {'authorId': '50625437', 'name': 'Yujia Qin'}, {'authorId': '2203076', 'name': 'Wenxuan Zhou'}, {'authorId': '49781448', 'name': 'Jun Yan'}, {'authorId': '1557391091', 'name': 'Qinyuan Ye'}, {'authorId': '152842060', 'name': 'Leonardo Neves'}, {'authorId': '49293587', 'name': 'Zhiyuan Liu'}, {'authorId': '1384550891', 'name': 'Xiang Ren'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'While deep neural networks have achieved impressive performance on a range of NLP tasks, these data-hungry models heavily rely on labeled data, which restricts their applications in scenarios where data annotation is expensive. Natural language (NL) explanations have been demonstrated very useful additional supervision, which can provide sufficient domain knowledge for generating more labeled data over new instances, while the annotation time only doubles. However, directly applying them for augmenting model learning encounters two challenges: (1) NL explanations are unstructured and inherently compositional, which asks for a modularized model to represent their semantics, (2) NL explanations often have large numbers of linguistic variants, resulting in low recall and limited generalization ability. In this paper, we propose a novel Neural Execution Tree (NExT) framework to augment training data for text classification using NL explanations. After transforming NL explanations into executable logical forms by semantic parsing, NExT generalizes different types of actions specified by the logical forms for labeling data instances, which substantially increases the coverage of each NL explanation. Experiments on two NLP tasks (relation extraction and sentiment analysis) demonstrate its superiority over baseline methods. Its extension to multi-hop question answering achieves performance gain with light annotation effort.', 'year': 2019, 'in_acl': False, 'citationCount': 38, 'section': None, 'subsection': None}, {'id': 202540839, 'paperId': '093d9253a2fe765ca6577b091d3f99bab3155a7d', 'title': 'Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach', 'authors': [{'authorId': '40483594', 'name': 'Wenpeng Yin'}, {'authorId': '153035400', 'name': 'Jamaal Hay'}, {'authorId': '144590225', 'name': 'D. Roth'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Zero-shot text classification (0Shot-TC) is a challenging NLU problem to which little attention has been paid by the research community. 0Shot-TC aims to associate an appropriate label with a piece of text, irrespective of the text domain and the aspect (e.g., topic, emotion, event, etc.) described by the label. And there are only a few articles studying 0Shot-TC, all focusing only on topical categorization which, we argue, is just the tip of the iceberg in 0Shot-TC. In addition, the chaotic experiments in literature make no uniform comparison, which blurs the progress. This work benchmarks the 0Shot-TC problem by providing unified datasets, standardized evaluations, and state-of-the-art baselines. Our contributions include: i) The datasets we provide facilitate studying 0Shot-TC relative to conceptually different and diverse aspects: the “topic” aspect includes “sports” and “politics” as labels; the “emotion” aspect includes “joy” and “anger”; the “situation” aspect includes “medical assistance” and “water shortage”. ii) We extend the existing evaluation setup (label-partially-unseen) – given a dataset, train on some labels, test on all labels – to include a more challenging yet realistic evaluation label-fully-unseen 0Shot-TC (Chang et al., 2008), aiming at classifying text snippets without seeing task specific training data at all. iii) We unify the 0Shot-TC of diverse aspects within a textual entailment formulation and study it this way.', 'year': 2019, 'in_acl': True, 'citationCount': 495, 'section': None, 'subsection': None}, {'id': 248505827, 'paperId': '6f0650a429add68e9f9430b09e1c6e8780ca787c', 'title': 'Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning', 'authors': [{'authorId': '1724648481', 'name': 'Oscar Sainz'}, {'authorId': '1404791152', 'name': 'Itziar Gonzalez-Dios'}, {'authorId': '1715983', 'name': 'Oier Lopez de Lacalle'}, {'authorId': '1875233', 'name': 'Bonan Min'}, {'authorId': '1733049', 'name': 'Eneko Agirre'}], 'venue': 'NAACL-HLT', 'abstract': 'Recent work has shown that NLP tasks such as Relation Extraction (RE) can be recasted as Textual Entailment tasks using verbalizations, with strong performance in zero-shot and few-shot settings thanks to pre-trained entailment models. The fact that relations in current RE datasets are easily verbalized casts doubts on whether entailment would be effective in more complex tasks. In this work we show that entailment is also effective in Event Argument Extraction (EAE), reducing the need of manual annotation to 50% and 20% in ACE and WikiEvents respectively, while achieving the same performance as with full training. More importantly, we show that recasting EAE as entailment alleviates the dependency on schemas, which has been a road-block for transferring annotations between domains. Thanks to the entailment, the multi-source transfer between ACE and WikiEvents further reduces annotation down to 10% and 5% (respectively) of the full training without transfer. Our analysis shows that the key to good results is the use of several entailment datasets to pre-train the entailment model. Similar to previous approaches, our method requires a small amount of effort for manual verbalization: only less than 15 minutes per event argument type is needed, and comparable results can be achieved with users with different level of expertise.', 'year': 2022, 'in_acl': False, 'citationCount': 44, 'section': None, 'subsection': None}, {'id': 236493269, 'paperId': '28692beece311a90f5fa1ca2ec9d0c2ce293d069', 'title': 'Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing', 'authors': [{'authorId': '144118452', 'name': 'Pengfei Liu'}, {'authorId': '30300197', 'name': 'Weizhe Yuan'}, {'authorId': '41037252', 'name': 'Jinlan Fu'}, {'authorId': '2669515', 'name': 'Zhengbao Jiang'}, {'authorId': '50376014', 'name': 'Hiroaki Hayashi'}, {'authorId': '1700325', 'name': 'Graham Neubig'}], 'venue': 'ACM Computing Surveys', 'abstract': 'This article surveys and organizes research works in a new paradigm in natural language processing, which we dub “prompt-based learning.” Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly. To use these models to perform prediction tasks, the original input x is modified using a template into a textual string prompt x′ that has some unfilled slots, and then the language model is used to probabilistically fill the unfilled information to obtain a final string x̂, from which the final output y can be derived. This framework is powerful and attractive for a number of reasons: It allows the language model to be pre-trained on massive amounts of raw text, and by defining a new prompting function the model is able to perform few-shot or even zero-shot learning, adapting to new scenarios with few or no labeled data. In this article, we introduce the basics of this promising paradigm, describe a unified set of mathematical notations that can cover a wide variety of existing work, and organize existing work along several dimensions, e.g., the choice of pre-trained language models, prompts, and tuning strategies. To make the field more accessible to interested beginners, we not only make a systematic review of existing works and a highly structured typology of prompt-based concepts but also release other resources, e.g., a website NLPedia–Pretrain including constantly updated survey and paperlist.', 'year': 2021, 'in_acl': False, 'citationCount': 3190, 'section': None, 'subsection': None}, {'id': 244708947, 'paperId': '605c32428861eb26b8631617b8f6c97a850d6a04', 'title': 'True Few-Shot Learning with Prompts—A Real-World Perspective', 'authors': [{'authorId': '32246932', 'name': 'Timo Schick'}, {'authorId': '144418438', 'name': 'Hinrich Schütze'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'Abstract Prompt-based approaches excel at few-shot learning. However, Perez et al. (2021) recently cast doubt on their performance as they had difficulty getting good results in a “true” few-shot setting in which prompts and hyperparameters cannot be tuned on a dev set. In view of this, we conduct an extensive study of Pet, a method that combines textual instructions with example-based finetuning. We show that, if correctly configured, Pet performs strongly in true few-shot settings without a dev set. Crucial for this strong performance is a number of design choices, including Pet’s ability to intelligently handle multiple prompts. We put our findings to a real-world test by running Pet on RAFT, a benchmark of tasks taken from realistic NLP applications for which no labeled dev or test sets are available. Pet achieves a new state of the art on RAFT and performs close to non-expert humans for 7 out of 11 tasks. These results demonstrate that prompt-based learners can successfully be applied in true few-shot settings and underpin our belief that learning from instructions will play an important role on the path towards human-like few-shot learning capabilities.', 'year': 2021, 'in_acl': False, 'citationCount': 56, 'section': None, 'subsection': None}, {'id': 225062157, 'paperId': 'ecb2b0859bab2761be397804516b8de3983366e8', 'title': 'The Turking Test: Can Language Models Understand Instructions?', 'authors': [{'authorId': '1388010852', 'name': 'Avia Efrat'}, {'authorId': '39455775', 'name': 'Omer Levy'}], 'venue': 'arXiv.org', 'abstract': 'Supervised machine learning provides the learner with a set of input-output examples of the target task. Humans, however, can also learn to perform new tasks from instructions in natural language. Can machines learn to understand instructions as well? We present the Turking Test, which examines a model\'s ability to follow natural language instructions of varying complexity. These range from simple tasks, like retrieving the nth word of a sentence, to ones that require creativity, such as generating examples for SNLI and SQuAD in place of human intelligence workers ("turkers"). Despite our lenient evaluation methodology, we observe that a large pretrained language model performs poorly across all tasks. Analyzing the model\'s error patterns reveals that the model tends to ignore explicit instructions and often generates outputs that cannot be construed as an attempt to solve the task. While it is not yet clear whether instruction understanding can be captured by traditional language models, the sheer expressivity of instruction understanding makes it an appealing alternative to the rising few-shot inference paradigm.', 'year': 2020, 'in_acl': False, 'citationCount': 90, 'section': None, 'subsection': None}, {'id': 235358897, 'paperId': 'f41e6c832c9e0d5360b66ee7681d3b1ffd2d9c3d', 'title': 'Hierarchical Task Learning from Language Instructions with Unified Transformers and Self-Monitoring', 'authors': [{'authorId': '46868553', 'name': 'Yichi Zhang'}, {'authorId': '1707259', 'name': 'J. Chai'}], 'venue': 'Findings', 'abstract': 'Despite recent progress, learning new tasks through language instructions remains an extremely challenging problem. On the ALFRED benchmark for task learning, the published state-of-the-art system only achieves a task success rate of less than 10% in an unseen environment, compared to the human performance of over 90%. To address this issue, this paper takes a closer look at task learning. In a departure from a widely applied end-to-end architecture, we decomposed task learning into three sub-problems: sub-goal planning, scene navigation, and object manipulation; and developed a model HiTUT (stands for Hierarchical Tasks via Unified Transformers) that addresses each sub-problem in a unified manner to learn a hierarchical task structure. On the ALFRED benchmark, HiTUT has achieved the best performance with a remarkably higher generalization ability. In the unseen environment, HiTUT achieves over 160% performance gain in success rate compared to the previous state of the art. The explicit representation of task structures also enables an in-depth understanding of the nature of the problem and the ability of the agent, which provides insight for future benchmark development and evaluation.', 'year': 2021, 'in_acl': True, 'citationCount': 73, 'section': None, 'subsection': None}, {'id': 237421373, 'paperId': 'cbdb45fc16b0885905b91d84281c310e6cb49e9c', 'title': 'Cross-Task Generalization via Natural Language Crowdsourcing Instructions', 'authors': [{'authorId': '1817207', 'name': 'Swaroop Mishra'}, {'authorId': '1783281', 'name': 'Daniel Khashabi'}, {'authorId': '2064619864', 'name': 'Chitta Baral'}, {'authorId': '2548384', 'name': 'Hannaneh Hajishirzi'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Humans (e.g., crowdworkers) have a remarkable ability in solving different tasks, by simply reading textual instructions that define them and looking at a few examples. Despite the success of the conventional supervised learning on individual datasets, such models often struggle with generalization across tasks (e.g., a question-answering system cannot solve classification tasks). A long-standing challenge in AI is to build a model that learns a new task by understanding the human-readable instructions that define it. To study this, we introduce NATURAL INSTRUCTIONS, a dataset of 61 distinct tasks, their human-authored instructions, and 193k task instances (input-output pairs). The instructions are obtained from crowdsourcing instructions used to create existing NLP datasets and mapped to a unified schema. Using this meta-dataset, we measure cross-task generalization by training models on seen tasks and measuring generalization to the remaining unseen ones. We adopt generative pre-trained language models to encode task-specific instructions along with input and generate task output. Our results indicate that models benefit from instructions when evaluated in terms of generalization to unseen tasks (19% better for models utilizing instructions). These models, however, are far behind an estimated performance upperbound indicating significant room for more progress in this direction.', 'year': 2021, 'in_acl': True, 'citationCount': 645, 'section': None, 'subsection': None}, {'id': 265659379, 'paperId': 'a2b150e02306038389f5df683428f5a4659a468e', 'title': 'MUFFIN: Curating Multi-Faceted Instructions for Improving Instruction-Following', 'authors': [{'authorId': '2118614649', 'name': 'Renze Lou'}, {'authorId': '145086492', 'name': 'Kai Zhang'}, {'authorId': '2153624353', 'name': 'Jian Xie'}, {'authorId': '2253808926', 'name': 'Yuxuan Sun'}, {'authorId': '2269759096', 'name': 'Janice Ahn'}, {'authorId': '2143534669', 'name': 'Hanzi Xu'}, {'authorId': '2254227752', 'name': 'Yu Su'}, {'authorId': '2269736245', 'name': 'Wenpeng Yin'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'In the realm of large language models (LLMs), enhancing instruction-following capability often involves curating expansive training data. This is achieved through two primary schemes: i) Scaling-Inputs: Amplifying (input, output) pairs per task instruction, aiming for better instruction adherence. ii) Scaling Input-Free Tasks: Enlarging tasks, each composed of an (instruction, output) pair (without requiring a separate input anymore). However, LLMs under Scaling-Inputs tend to be overly sensitive to inputs, leading to misinterpretation or non-compliance with instructions. Conversely, Scaling Input-Free Tasks demands a substantial number of tasks but is less effective in instruction following when dealing with instances in Scaling-Inputs. This work introduces MUFFIN, a new scheme of instruction-following dataset curation. Specifically, we automatically Scale Tasks per Input by diversifying these tasks with various input facets. Experimental results across four zero-shot benchmarks, spanning both Scaling-Inputs and Scaling Input-Free Tasks schemes, reveal that LLMs, at various scales, trained on MUFFIN generally demonstrate superior instruction-following capabilities compared to those trained on the two aforementioned schemes.', 'year': 2023, 'in_acl': False, 'citationCount': 20, 'section': None, 'subsection': None}]
|
2023.ijcnlp-tutorials.2
|
Current Status of NLP in South East Asia with Insights from Multilingualism and Language Diversity
| null | 2,023
|
https://aclanthology.org/2023.ijcnlp-tutorials.2
|
IJCNLP, AACL
|
[{'id': 247748611, 'paperId': 'a747e8f2659df479c0092301b9658fc582423df1', 'title': 'One Country, 700+ Languages: NLP Challenges for Underrepresented Languages and Dialects in Indonesia', 'authors': [{'authorId': '8129718', 'name': 'Alham Fikri Aji'}, {'authorId': '9162688', 'name': 'Genta Indra Winata'}, {'authorId': '2789148', 'name': 'Fajri Koto'}, {'authorId': '66986482', 'name': 'Samuel Cahyawijaya'}, {'authorId': '2279712392', 'name': 'Ade Romadhony'}, {'authorId': '1935324', 'name': 'Rahmad Mahendra'}, {'authorId': '46199596', 'name': 'Kemal Kurniawan'}, {'authorId': '35722593', 'name': 'David Moeljadi'}, {'authorId': '2368148', 'name': 'Radityo Eko Prasojo'}, {'authorId': '145465286', 'name': 'Timothy Baldwin'}, {'authorId': '1800564', 'name': 'Jey Han Lau'}, {'authorId': '2884561', 'name': 'Sebastian Ruder'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'NLP research is impeded by a lack of resources and awareness of the challenges presented by underrepresented languages and dialects. Focusing on the languages spoken in Indonesia, the second most linguistically diverse and the fourth most populous nation of the world, we provide an overview of the current state of NLP research for Indonesia’s 700+ languages. We highlight challenges in Indonesian NLP and how these affect the performance of current NLP systems. Finally, we provide general recommendations to help develop NLP technology not only for languages of Indonesia but also other underrepresented languages.', 'year': 2022, 'in_acl': True, 'citationCount': 75, 'section': 'General Background', 'subsection': None}, {'id': 236460241, 'paperId': 'ee6d66efc86746d42ace14db30fcbaf9d3380e25', 'title': 'A Survey of Code-switching: Linguistic and Social Perspectives for Language Technologies', 'authors': [{'authorId': '1904399', 'name': 'A. Seza Doğruöz'}, {'authorId': '3010457', 'name': 'Sunayana Sitaram'}, {'authorId': '32383682', 'name': 'Barbara E. Bullock'}, {'authorId': '8770367', 'name': 'Almeida Jacqueline Toribio'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'The analysis of data in which multiple languages are represented has gained popularity among computational linguists in recent years. So far, much of this research focuses mainly on the improvement of computational methods and largely ignores linguistic and social aspects of C-S discussed across a wide range of languages within the long-established literature in linguistics. To fill this gap, we offer a survey of code-switching (C-S) covering the literature in linguistics with a reflection on the key issues in language technologies. From the linguistic perspective, we provide an overview of structural and functional patterns of C-S focusing on the literature from European and Indian contexts as highly multilingual areas. From the language technologies perspective, we discuss how massive language models fail to represent diverse C-S types due to lack of appropriate training data, lack of robust evaluation benchmarks for C-S (across multilingual situations and types of C-S) and lack of end-to- end systems that cover sociolinguistic aspects of C-S as well. Our survey will be a step to- wards an outcome of mutual benefit for computational scientists and linguists with a shared interest in multilingualism and C-S.', 'year': 2023, 'in_acl': True, 'citationCount': 65, 'section': 'General Background', 'subsection': None}, {'id': 252184818, 'paperId': 'b592a963ca5383fbb8aa4e9db3ae82e298f7fff1', 'title': 'Language Technologies for Low Resource Languages: Sociolinguistic and Multilingual Insights', 'authors': [{'authorId': '1904399', 'name': 'A. Seza Doğruöz'}, {'authorId': '3010457', 'name': 'Sunayana Sitaram'}], 'venue': 'SIGUL', 'abstract': 'There is a growing interest in building language technologies (LTs) for low resource languages (LRLs). However, there are flaws in the planning, data collection and development phases mostly due to the assumption that LRLs are similar to High Resource Languages (HRLs) but only smaller in size. In our paper, we first provide examples of failed LTs for LRLs and provide the reasons for these failures. Second, we discuss the problematic issues with the data for LRLs. Finally, we provide recommendations for building better LTs for LRLs through insights from sociolinguistics and multilingualism. Our goal is not to solve all problems around LTs for LRLs but to raise awareness about the existing issues, provide recommendations toward possible solutions and encourage collaboration across academic disciplines for developing LTs that actually serve the needs and preferences of the LRL communities.', 'year': 2022, 'in_acl': True, 'citationCount': 11, 'section': 'General Background', 'subsection': None}, {'id': 252624636, 'paperId': '7625e596f3c6fba28e57afabf70dcf6e6bed7718', 'title': 'Survey on Thai NLP Language Resources and Tools', 'authors': [{'authorId': '51913103', 'name': 'Ratchakrit Arreerard'}, {'authorId': '2186552673', 'name': 'Stephen Mander'}, {'authorId': '2068413302', 'name': 'S. Piao'}], 'venue': 'International Conference on Language Resources and Evaluation', 'abstract': 'Over the past decades, Natural Language Processing (NLP) research has been expanding to cover more languages. Recently particularly, NLP community has paid increasing attention to under-resourced languages. However, there are still many languages for which NLP research is limited in terms of both language resources and software tools. Thai language is one of the under-resourced languages in the NLP domain, although it is spoken by nearly 70 million people globally. In this paper, we report on our survey on the past development of Thai NLP research to help understand its current state and future research directions. Our survey shows that, although Thai NLP community has achieved a significant achievement over the past three decades, particularly on NLP upstream tasks such as tokenisation, research on downstream tasks such as syntactic parsing and semantic analysis is still limited. But we foresee that Thai NLP research will advance rapidly as richer Thai language resources and more robust NLP techniques become available.', 'year': 2022, 'in_acl': True, 'citationCount': 10, 'section': 'General Background', 'subsection': None}, {'id': 45262369, 'paperId': '61fa36ae4d2f4608c8c2facd4a8f88b052e36d31', 'title': 'AREAL LINGUISTICS AND MAINLAND SOUTHEAST ASIA', 'authors': [{'authorId': '3247678', 'name': 'N. Enfield'}], 'venue': '', 'abstract': 'AbstractMainland Southeast Asia provides a dramatic demonstration of the areal phenomenon in linguistics: When languages are spoken historically in the same location they often show significant parallels in the organization of a wide range of structural domains, whether the languages descend from the same historical source. The effects of areal diffusion raise fundamental questions for the traditional essentialist vision of languages as entities with offspring that diverge, with shared innovations marking divergent branches and internal processes of evolution accounting for diversity among modern languages. Recent theoretical and empirical research on linguistic diversity, language change, and social diffusion of innovation argues for a unit-based approach to language change and relatedness, where the units of analysis are individual speakers and individual linguistic items. This review begins with discussion of the language situation in Mainland Southeast Asia, where the language “genealogies” have been ...', 'year': 2005, 'in_acl': False, 'citationCount': 171, 'section': 'General Background', 'subsection': None}, {'id': 250425961, 'paperId': 'e19b54ad4c1c8af045069e9cac350ffc2ce60e1a', 'title': 'No Language Left Behind: Scaling Human-Centered Machine Translation', 'authors': [{'authorId': '2175650427', 'name': 'Nllb team'}, {'authorId': '1398996347', 'name': 'M. Costa-jussà'}, {'authorId': '2059363961', 'name': 'James Cross'}, {'authorId': '2166310112', 'name': 'Onur cCelebi'}, {'authorId': '46183659', 'name': 'Maha Elbayad'}, {'authorId': '1702066', 'name': 'Kenneth Heafield'}, {'authorId': '47926975', 'name': 'Kevin Heffernan'}, {'authorId': '2175649909', 'name': 'Elahe Kalbassi'}, {'authorId': '82469889', 'name': 'Janice Lam'}, {'authorId': '2082021589', 'name': 'Daniel Licht'}, {'authorId': '40148380', 'name': 'Jean Maillard'}, {'authorId': '2091912142', 'name': 'Anna Sun'}, {'authorId': '2175975228', 'name': 'Skyler Wang'}, {'authorId': '2293203', 'name': 'Guillaume Wenzek'}, {'authorId': '118325632', 'name': 'Alison Youngblood'}, {'authorId': '2175649907', 'name': 'Bapi Akula'}, {'authorId': '2934336', 'name': 'Loïc Barrault'}, {'authorId': '2175804876', 'name': 'Gabriel Mejia Gonzalez'}, {'authorId': '2175651102', 'name': 'Prangthip Hansanti'}, {'authorId': '2055094870', 'name': 'John Hoffman'}, {'authorId': '2175650054', 'name': 'Semarley Jarrett'}, {'authorId': '1914544232', 'name': 'Kaushik Ram Sadagopan'}, {'authorId': '2175650209', 'name': 'Dirk Rowe'}, {'authorId': '3416737', 'name': 'Shannon L. Spruit'}, {'authorId': '33806547', 'name': 'C. Tran'}, {'authorId': '5657660', 'name': 'Pierre Yves Andrews'}, {'authorId': '1769060', 'name': 'Necip Fazil Ayan'}, {'authorId': '2116473', 'name': 'Shruti Bhosale'}, {'authorId': '2068070', 'name': 'Sergey Edunov'}, {'authorId': '144270981', 'name': 'Angela Fan'}, {'authorId': '2107063269', 'name': 'Cynthia Gao'}, {'authorId': '28554843', 'name': 'Vedanuj Goswami'}, {'authorId': '2061585840', 'name': "Francisco Guzm'an"}, {'authorId': '1755162', 'name': 'Philipp Koehn'}, {'authorId': '2175651049', 'name': 'Alexandre Mourachko'}, {'authorId': '146424711', 'name': 'C. Ropers'}, {'authorId': '2475227', 'name': 'Safiyyah Saleem'}, {'authorId': '144518416', 'name': 'Holger Schwenk'}, {'authorId': '2155451431', 'name': 'Jeff Wang'}], 'venue': 'arXiv.org', 'abstract': 'Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system. Finally, we open source all contributions described in this work, accessible at https://github.com/facebookresearch/fairseq/tree/nllb.', 'year': 2022, 'in_acl': False, 'citationCount': 940, 'section': 'General Background', 'subsection': None}, {'id': 3004919, 'paperId': 'f25baf281c13207c2459b0264aa3fa30212ab5e8', 'title': 'Crowdsourcing-based Annotation of Emotions in Filipino and English Tweets', 'authors': [{'authorId': '35645750', 'name': 'F. Lapitan'}, {'authorId': '1400900759', 'name': 'R. Batista-Navarro'}, {'authorId': '2674576', 'name': 'E. Albacea'}], 'venue': 'WSSANLP@COLING', 'abstract': 'The automatic analysis of emotions conveyed in social media content, e.g., tweets, has many beneficial applications. In the Philippines, one of the most disaster-prone countries in the world, such methods could potentially enable first responders to make timely decisions despite the risk of data deluge. However, recognising emotions expressed in Philippine-generated tweets, which are mostly written in Filipino, English or a mix of both, is a non-trivial task. In order to facilitate the development of natural language processing (NLP) methods that will automate such type of analysis, we have built a corpus of tweets whose predominant emotions have been manually annotated by means of crowdsourcing. Defining measures ensuring that only high-quality annotations were retained, we have produced a gold standard corpus of 1,146 emotion-labelled Filipino and English tweets. We validate the value of this manually produced resource by demonstrating that an automatic emotion-prediction method based on the use of a publicly available word-emotion association lexicon was unable to reproduce the labels assigned via crowdsourcing. While we are planning to make a few extensions to the corpus in the near future, its current version has been made publicly available in order to foster the development of emotion analysis methods based on advanced Filipino and English NLP.', 'year': 2016, 'in_acl': True, 'citationCount': 16, 'section': 'Resource Collection and Availability', 'subsection': 'Datasets and Evaluation'}, {'id': 5631708, 'paperId': 'f28cb37e0f1a225f0d4f27f43ef4e05eee8b321c', 'title': 'SEAME: a Mandarin-English code-switching speech corpus in south-east asia', 'authors': [{'authorId': '1719058', 'name': 'Dau-Cheng Lyu'}, {'authorId': '1805622', 'name': 'T. Tan'}, {'authorId': '1742722', 'name': 'Chng Eng Siong'}, {'authorId': '1711271', 'name': 'Haizhou Li'}], 'venue': 'Interspeech', 'abstract': 'In Singapore and Malaysia, people often speak a mixture of Mandarin and English within a single sentence. We call such sentences intra-sentential code-switch sentences. In this paper, we report on the development of a Mandarin-English codeswitching spontaneous speech corpus: SEAME. The corpus is developed as part of a multilingual speech recognition project and will be used to examine how Mandarin-English codeswitch speech occurs in the spoken language in South-East Asia. Additionally, it can provide insights into the development of large vocabulary continuous speech recognition (LVCSR) for code-switching speech. The corpus collected consists of intra-sentential code-switching utterances that are recorded under both interview and conversational settings. This paper describes the corpus design and the analysis of collected corpus.', 'year': 2010, 'in_acl': False, 'citationCount': 131, 'section': 'Resource Collection and Availability', 'subsection': 'Datasets and Evaluation'}, {'id': 249209909, 'paperId': '11f64ec047782cada21d50efea1e0dc5843675f6', 'title': 'NusaX: Multilingual Parallel Sentiment Dataset for 10 Indonesian Local Languages', 'authors': [{'authorId': '9162688', 'name': 'Genta Indra Winata'}, {'authorId': '8129718', 'name': 'Alham Fikri Aji'}, {'authorId': '66986482', 'name': 'Samuel Cahyawijaya'}, {'authorId': '1935324', 'name': 'Rahmad Mahendra'}, {'authorId': '2789148', 'name': 'Fajri Koto'}, {'authorId': '2279712392', 'name': 'Ade Romadhony'}, {'authorId': '46199596', 'name': 'Kemal Kurniawan'}, {'authorId': '35722593', 'name': 'David Moeljadi'}, {'authorId': '2368148', 'name': 'Radityo Eko Prasojo'}, {'authorId': '40539650', 'name': 'Pascale Fung'}, {'authorId': '145465286', 'name': 'Timothy Baldwin'}, {'authorId': '1800564', 'name': 'Jey Han Lau'}, {'authorId': '2082372', 'name': 'Rico Sennrich'}, {'authorId': '2884561', 'name': 'Sebastian Ruder'}], 'venue': 'Conference of the European Chapter of the Association for Computational Linguistics', 'abstract': 'Natural language processing (NLP) has a significant impact on society via technologies such as machine translation and search engines. Despite its success, NLP technology is only widely available for high-resource languages such as English and Chinese, while it remains inaccessible to many languages due to the unavailability of data resources and benchmarks. In this work, we focus on developing resources for languages in Indonesia. Despite being the second most linguistically diverse country, most languages in Indonesia are categorized as endangered and some are even extinct. We develop the first-ever parallel resource for 10 low-resource languages in Indonesia. Our resource includes sentiment and machine translation datasets, and bilingual lexicons. We provide extensive analyses and describe challenges for creating such resources. We hope this work can spark NLP research on Indonesian and other underrepresented languages.', 'year': 2022, 'in_acl': True, 'citationCount': 66, 'section': 'Resource Collection and Availability', 'subsection': 'Datasets and Evaluation'}, {'id': 253762058, 'paperId': '940fc621079ea349109202c7d705461b50d541d8', 'title': 'Cross-lingual Few-Shot Learning on Unseen Languages', 'authors': [{'authorId': '9162688', 'name': 'Genta Indra Winata'}, {'authorId': '50425845', 'name': 'Shijie Wu'}, {'authorId': '145304589', 'name': 'Mayank Kulkarni'}, {'authorId': '1794626', 'name': 'T. Solorio'}, {'authorId': '1398830377', 'name': 'Daniel Preotiuc-Pietro'}], 'venue': 'AACL', 'abstract': 'Large pre-trained language models (LMs) have demonstrated the ability to obtain good performance on downstream tasks with limited examples in cross-lingual settings. However, this was mostly studied for relatively resource-rich languages, where at least enough unlabeled data is available to be included in pre-training a multilingual language model. In this paper, we explore the problem of cross-lingual transfer in unseen languages, where no unlabeled data is available for pre-training a model. We use a downstream sentiment analysis task across 12 languages, including 8 unseen languages, to analyze the effectiveness of several few-shot learning strategies across the three major types of model architectures and their learning dynamics. We also compare strategies for selecting languages for transfer and contrast findings across languages seen in pre-training compared to those that are not. Our findings contribute to the body of knowledge on cross-lingual models for low-resource settings that is paramount to increasing coverage, diversity, and equity in access to NLP technology. We show that, in few-shot learning, linguistically similar and geographically similar languages are useful for cross-lingual adaptation, but taking the context from a mixture of random source languages is surprisingly more effective. We also compare different model architectures and show that the encoder-only model, XLM-R, gives the best downstream task performance.', 'year': 2022, 'in_acl': True, 'citationCount': 30, 'section': 'Resource Collection and Availability', 'subsection': 'Datasets and Evaluation'}, {'id': 254853901, 'paperId': '03c19ddaa26068f23e27ba94b10f08160e87f668', 'title': 'NusaCrowd: Open Source Initiative for Indonesian NLP Resources', 'authors': [{'authorId': '66986482', 'name': 'Samuel Cahyawijaya'}, {'authorId': '116344405', 'name': 'Holy Lovenia'}, {'authorId': '8129718', 'name': 'Alham Fikri Aji'}, {'authorId': '9162688', 'name': 'Genta Indra Winata'}, {'authorId': '150048491', 'name': 'Bryan Wilie'}, {'authorId': '1935324', 'name': 'Rahmad Mahendra'}, {'authorId': '104768157', 'name': 'C. Wibisono'}, {'authorId': '2279712392', 'name': 'Ade Romadhony'}, {'authorId': '1939999507', 'name': 'Karissa Vincentio'}, {'authorId': '2789148', 'name': 'Fajri Koto'}, {'authorId': '117696399', 'name': 'Jennifer Santoso'}, {'authorId': '35722593', 'name': 'David Moeljadi'}, {'authorId': '2197090678', 'name': 'Cahya Wirawan'}, {'authorId': '2197090652', 'name': 'Frederikus Hudi'}, {'authorId': '134112343', 'name': 'Ivan Halim Parmonangan'}, {'authorId': '2683858', 'name': 'Ika Alfina'}, {'authorId': '2196919922', 'name': 'Muhammad Satrio Wicaksono'}, {'authorId': '1943296899', 'name': 'Ilham Firdausi Putra'}, {'authorId': '2197090705', 'name': 'Samsul Rahmadani'}, {'authorId': '9128778', 'name': 'Yulianti Oenang'}, {'authorId': '22171680', 'name': 'Ali Akbar Septiandri'}, {'authorId': '2197071075', 'name': 'James Jaya'}, {'authorId': '4834571', 'name': 'Kaustubh D. Dhole'}, {'authorId': '9366773', 'name': 'Arie A. Suryani'}, {'authorId': '9358635', 'name': 'Rifki Afina Putri'}, {'authorId': '144610224', 'name': 'Dan Su'}, {'authorId': '144077726', 'name': 'K. Stevens'}, {'authorId': '66436856', 'name': 'Made Nindyatama Nityasya'}, {'authorId': '2191731497', 'name': 'Muhammad Farid Adilazuarda'}, {'authorId': '2197071063', 'name': 'Ryan Ignatius'}, {'authorId': '2197070752', 'name': 'Ryandito Diandaru'}, {'authorId': '1660855299', 'name': 'Tiezheng Yu'}, {'authorId': '2197070698', 'name': 'Vito Ghifari'}, {'authorId': '47653392', 'name': 'Wenliang Dai'}, {'authorId': '98271906', 'name': 'Yan Xu'}, {'authorId': '2197071047', 'name': 'Dyah Damapuspita'}, {'authorId': '120064613', 'name': 'C. Tho'}, {'authorId': '18159304', 'name': 'I. M. K. Karo'}, {'authorId': '36045311', 'name': 'Tirana Noor Fatyanosa'}, {'authorId': '3391272', 'name': 'Ziwei Ji'}, {'authorId': '2057151752', 'name': 'Pascale Fung'}, {'authorId': '1700325', 'name': 'Graham Neubig'}, {'authorId': '145465286', 'name': 'Timothy Baldwin'}, {'authorId': '2884561', 'name': 'Sebastian Ruder'}, {'authorId': '2085419515', 'name': 'Herry Sujaini'}, {'authorId': '1783949', 'name': 'S. Sakti'}, {'authorId': '1962263', 'name': 'A. Purwarianti'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': "We present NusaCrowd, a collaborative initiative to collect and unify existing resources for Indonesian languages, including opening access to previously non-public resources. Through this initiative, we have brought together 137 datasets and 118 standardized data loaders. The quality of the datasets has been assessed manually and automatically, and their value is demonstrated through multiple experiments. NusaCrowd's data collection enables the creation of the first zero-shot benchmarks for natural language understanding and generation in Indonesian and the local languages of Indonesia. Furthermore, NusaCrowd brings the creation of the first multilingual automatic speech recognition benchmark in Indonesian and the local languages of Indonesia. Our work strives to advance natural language processing (NLP) research for languages that are under-represented despite being widely spoken.", 'year': 2022, 'in_acl': False, 'citationCount': 36, 'section': 'Resource Collection and Availability', 'subsection': 'Datasets and Evaluation'}, {'id': 252819207, 'paperId': '2abfa04644b89342ddde7a40068b673d8b23bd13', 'title': 'BRCC and SentiBahasaRojak: The First Bahasa Rojak Corpus for Pretraining and Sentiment Analysis Dataset', 'authors': [{'authorId': '2187454749', 'name': 'Nanda Putri Romadhona'}, {'authorId': '2135796743', 'name': 'Sin-En Lu'}, {'authorId': '2187456195', 'name': 'Bo-Han Lu'}, {'authorId': '1724351', 'name': 'Richard Tzong-Han Tsai'}], 'venue': 'International Conference on Computational Linguistics', 'abstract': 'Code-mixing refers to the mixed use of multiple languages. It is prevalent in multilingual societies and is also one of the most challenging natural language processing tasks. In this paper, we study Bahasa Rojak, a dialect popular in Malaysia that consists of English, Malay, and Chinese. Aiming to establish a model to deal with the code-mixing phenomena of Bahasa Rojak, we use data augmentation to automatically construct the first Bahasa Rojak corpus for pre-training language models, which we name the Bahasa Rojak Crawled Corpus (BRCC). We also develop a new pre-trained model called “Mixed XLM”. The model can tag the language of the input token automatically to process code-mixing input. Finally, to test the effectiveness of the Mixed XLM model pre-trained on BRCC for social media scenarios where code-mixing is found frequently, we compile a new Bahasa Rojak sentiment analysis dataset, SentiBahasaRojak, with a Kappa value of 0.77.', 'year': 2022, 'in_acl': True, 'citationCount': 4, 'section': 'Resource Collection and Availability', 'subsection': 'Datasets and Evaluation'}, {'id': 226226744, 'paperId': '1109d62ebd2b29a7dc148bc30dd6cfc803a63dec', 'title': 'IndoLEM and IndoBERT: A Benchmark Dataset and Pre-trained Language Model for Indonesian NLP', 'authors': [{'authorId': '2789148', 'name': 'Fajri Koto'}, {'authorId': '2953039', 'name': 'Afshin Rahimi'}, {'authorId': '1800564', 'name': 'Jey Han Lau'}, {'authorId': '145465286', 'name': 'Timothy Baldwin'}], 'venue': 'International Conference on Computational Linguistics', 'abstract': 'Although the Indonesian language is spoken by almost 200 million people and the 10th most spoken language in the world, it is under-represented in NLP research. Previous work on Indonesian has been hampered by a lack of annotated datasets, a sparsity of language resources, and a lack of resource standardization. In this work, we release the IndoLEM dataset comprising seven tasks for the Indonesian language, spanning morpho-syntax, semantics, and discourse. We additionally release IndoBERT, a new pre-trained language model for Indonesian, and evaluate it over IndoLEM, in addition to benchmarking it against existing resources. Our experiments show that IndoBERT achieves state-of-the-art performance over most of the tasks in IndoLEM.', 'year': 2020, 'in_acl': True, 'citationCount': 195, 'section': 'Resource Collection and Availability', 'subsection': 'Pretrained Language Models'}, {'id': 211677475, 'paperId': 'a622332550eaf535cf0f0f6c3a3f3ba197c39cac', 'title': 'PhoBERT: Pre-trained language models for Vietnamese', 'authors': [{'authorId': '34691913', 'name': 'Dat Quoc Nguyen'}, {'authorId': '1398541475', 'name': 'A. Nguyen'}], 'venue': 'Findings', 'abstract': 'We present PhoBERT with two versions, PhoBERT-base and PhoBERT-large, the first public large-scale monolingual language models pre-trained for Vietnamese. Experimental results show that PhoBERT consistently outperforms the recent best pre-trained multilingual model XLM-R (Conneau et al., 2020) and improves the state-of-the-art in multiple Vietnamese-specific NLP tasks including Part-of-speech tagging, Dependency parsing, Named-entity recognition and Natural language inference. We release PhoBERT to facilitate future research and downstream applications for Vietnamese NLP. Our PhoBERT models are available at https://github.com/VinAIResearch/PhoBERT', 'year': 2020, 'in_acl': True, 'citationCount': 307, 'section': 'Resource Collection and Availability', 'subsection': 'Pretrained Language Models'}, {'id': 231698737, 'paperId': '86b369759bff13ec23b45ca23cc5461292a75415', 'title': 'WangchanBERTa: Pretraining transformer-based Thai Language Models', 'authors': [{'authorId': '1796312599', 'name': 'Lalita Lowphansirikul'}, {'authorId': '122556419', 'name': 'Charin Polpanumas'}, {'authorId': '2047307090', 'name': 'Nawat Jantrakulchai'}, {'authorId': '2304090', 'name': 'Sarana Nutanong'}], 'venue': 'arXiv.org', 'abstract': 'Transformer-based language models, more specifically BERT-based architectures have achieved state-of-the-art performance in many downstream tasks. However, for a relatively low-resource language such as Thai, the choices of models are limited to training a BERT-based model based on a much smaller dataset or finetuning multi-lingual models, both of which yield suboptimal downstream performance. Moreover, large-scale multi-lingual pretraining does not take into account language-specific features for Thai. To overcome these limitations, we pretrain a language model based on RoBERTa-base architecture on a large, deduplicated, cleaned training set (78GB in total size), curated from diverse domains of social media posts, news articles and other publicly available datasets. We apply text processing rules that are specific to Thai most importantly preserving spaces, which are important chunk and sentence boundaries in Thai before subword tokenization. We also experiment with word-level, syllable-level and SentencePiece tokenization with a smaller dataset to explore the effects on tokenization on downstream performance. Our model wangchanberta-base-att-spm-uncased trained on the 78.5GB dataset outperforms strong baselines (NBSVM, CRF and ULMFit) and multi-lingual models (XLMR and mBERT) on both sequence classification and token classification tasks in human-annotated, mono-lingual contexts.', 'year': 2021, 'in_acl': False, 'citationCount': 65, 'section': 'Resource Collection and Availability', 'subsection': 'Pretrained Language Models'}, {'id': 256657381, 'paperId': 'aefc100870ab0adea73105b7a616472c204f81e4', 'title': 'Encoder-Decoder Language Model for Khmer Handwritten Text Recognition in Historical Documents', 'authors': [{'authorId': '2204866832', 'name': 'Seanghort Born'}, {'authorId': '8700325', 'name': 'Dona Valy'}, {'authorId': '38591458', 'name': 'Phutphalla Kong'}], 'venue': 'International Conference on Software, Knowledge, Information Management and Applications', 'abstract': 'Correcting spelling errors in texts extracted from Khmer palm leaf manuscripts by handwritten text recognition (HTR) systems can be very challenging. A Khmer Language Model developed in this study aims to facilitate the task mentioned above. The proposed model utilizes long short-term memory (LSTM) modules applicable for improving the performance of text recognition which is to predict a sequence of characters as output. The architecture of the language model is based on an encoder-decoder mechanism which is composed of two parts: an encoder to capture the context of the input erroneous word and a decoder to decode and predict the correctly spelt output word. Experimental evaluations are conducted on a text corpus consisting of Khmer words extracted from Sleuk-Rith set.', 'year': 2022, 'in_acl': False, 'citationCount': 1, 'section': 'Resource Collection and Availability', 'subsection': 'Pretrained Language Models'}, {'id': 238634516, 'paperId': '01a207b8f352f1971f04cd2a28b8859c4cde3746', 'title': 'LaoPLM: Pre-trained Language Models for Lao', 'authors': [{'authorId': '67285699', 'name': 'Nankai Lin'}, {'authorId': '2110659921', 'name': 'Yingwen Fu'}, {'authorId': '2128681494', 'name': 'Chuwei Chen'}, {'authorId': None, 'name': 'Ziyu Yang'}, {'authorId': '2130537542', 'name': 'Shengyi Jiang'}], 'venue': 'International Conference on Language Resources and Evaluation', 'abstract': 'Trained on the large corpus, pre-trained language models (PLMs) can capture different levels of concepts in context and hence generate universal language representations. They can benefit from multiple downstream natural language processing (NLP) tasks. Although PTMs have been widely used in most NLP applications, especially for high-resource languages such as English, it is under-represented in Lao NLP research. Previous work on Lao has been hampered by the lack of annotated datasets and the sparsity of language resources. In this work, we construct a text classification dataset to alleviate the resource-scarce situation of the Lao language. In addition, we present the first transformer-based PTMs for Lao with four versions: BERT-Small , BERT-Base , ELECTRA-Small , and ELECTRA-Base . Furthermore, we evaluate them on two downstream tasks: part-of-speech (POS) tagging and text classification. Experiments demonstrate the effectiveness of our Lao models. We release our models and datasets to the community, hoping to facilitate the future development of Lao NLP applications.', 'year': 2021, 'in_acl': True, 'citationCount': 3, 'section': 'Resource Collection and Availability', 'subsection': 'Pretrained Language Models'}, {'id': 243986012, 'paperId': 'a25a5210ee6c2721ab53b44c62b3a8eb66cf22dc', 'title': 'Improving Large-scale Language Models and Resources for Filipino', 'authors': [{'authorId': '51017310', 'name': 'Jan Christian Blaise Cruz'}, {'authorId': '1973047', 'name': 'C. Cheng'}], 'venue': 'International Conference on Language Resources and Evaluation', 'abstract': 'In this paper, we improve on existing language resources for the low-resource Filipino language in two ways. First, we outline the construction of the TLUnified dataset, a large-scale pretraining corpus that serves as an improvement over smaller existing pretraining datasets for the language in terms of scale and topic variety. Second, we pretrain new Transformer language models following the RoBERTa pretraining technique to supplant existing models trained with small corpora. Our new RoBERTa models show significant improvements over existing Filipino models in three benchmark datasets with an average gain of 4.47% test accuracy across three classification tasks with varying difficulty.', 'year': 2021, 'in_acl': True, 'citationCount': 21, 'section': 'Resource Collection and Availability', 'subsection': 'Pretrained Language Models'}]
|
2023.ijcnlp-tutorials.4
|
Editing Large Language Models
|
Even with their impressive abilities, Large Language Models (LLMs) such as ChatGPT are not immune to issues of factual or logically consistent. Concretely, the key concern is how to seamlessly update those LLMs to correct mistakes without resorting to an exhaustive retraining or continuous training procedure, both of which can demand significant computational resources and time. Thus, the capability to edit LLMs offers an efficient solution to alter a model’s behavior, notably within a distinct area of interest, without negatively impacting its performance on other tasks. Through this tutorial, we strive to acquaint interested NLP researchers with recent and emerging techniques for editing LLMs. Specifically, we aim to present a systematic and current overview of cutting-edge methods, supplemented with practical tools, and unveil new research opportunities for our audiences. All the valuable resources can be accessed at https://github.com/zjunlp/KnowledgeEditingPapers.
| 2,023
|
https://aclanthology.org/2023.ijcnlp-tutorials.4
|
IJCNLP, AACL
|
[{'id': 258833129, 'paperId': 'f5c73d9e6641b018b633690102121f5605d34fb0', 'title': 'Editing Large Language Models: Problems, Methods, and Opportunities', 'authors': [{'authorId': '4841460', 'name': 'Yunzhi Yao'}, {'authorId': '144282672', 'name': 'Peng Wang'}, {'authorId': '2064522174', 'name': 'Bo Tian'}, {'authorId': '46378881', 'name': 'Siyuan Cheng'}, {'authorId': '9956037', 'name': 'Zhoubo Li'}, {'authorId': '152931849', 'name': 'Shumin Deng'}, {'authorId': '2144200945', 'name': 'Huajun Chen'}, {'authorId': '2608639', 'name': 'Ningyu Zhang'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Despite the ability to train capable LLMs, the methodology for maintaining their relevancy and rectifying errors remains elusive. To this end, the past few years have witnessed a surge in techniques for editing LLMs, the objective of which is to efficiently alter the behavior of LLMs within a specific domain without negatively impacting performance across other inputs. This paper embarks on a deep exploration of the problems, methods, and opportunities related to model editing for LLMs. In particular, we provide an exhaustive overview of the task definition and challenges associated with model editing, along with an in-depth empirical analysis of the most progressive methods currently at our disposal. We also build a new benchmark dataset to facilitate a more robust evaluation and pinpoint enduring issues intrinsic to existing techniques. Our objective is to provide valuable insights into the effectiveness and feasibility of each editing technique, thereby assisting the community in making informed decisions on the selection of the most appropriate method for a specific task or context. Code and datasets are available at https://github.com/zjunlp/EasyEdit.', 'year': 2023, 'in_acl': False, 'citationCount': 207, 'section': None, 'subsection': None}, {'id': 249642147, 'paperId': '1d650f1afd45c59ff907396fe8b678595dcb85ea', 'title': 'Memory-Based Model Editing at Scale', 'authors': [{'authorId': '49688913', 'name': 'E. Mitchell'}, {'authorId': '2116721670', 'name': 'Charles Lin'}, {'authorId': '2691021', 'name': 'Antoine Bosselut'}, {'authorId': '144783904', 'name': 'Christopher D. Manning'}, {'authorId': '46881670', 'name': 'Chelsea Finn'}], 'venue': 'International Conference on Machine Learning', 'abstract': "Even the largest neural networks make errors, and once-correct predictions can become invalid as the world changes. Model editors make local updates to the behavior of base (pre-trained) models to inject updated knowledge or correct undesirable behaviors. Existing model editors have shown promise, but also suffer from insufficient expressiveness: they struggle to accurately model an edit's intended scope (examples affected by the edit), leading to inaccurate predictions for test inputs loosely related to the edit, and they often fail altogether after many edits. As a higher-capacity alternative, we propose Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model (SERAC), which stores edits in an explicit memory and learns to reason over them to modulate the base model's predictions as needed. To enable more rigorous evaluation of model editors, we introduce three challenging language model editing problems based on question answering, fact-checking, and dialogue generation. We find that only SERAC achieves high performance on all three problems, consistently outperforming existing approaches to model editing by a significant margin. Code, data, and additional project information will be made available at https://sites.google.com/view/serac-editing.", 'year': 2022, 'in_acl': False, 'citationCount': 258, 'section': None, 'subsection': None}, {'id': 252762125, 'paperId': '7471cb40a33e9d971a922b5dff5ca9b4a73ca609', 'title': 'Calibrating Factual Knowledge in Pretrained Language Models', 'authors': [{'authorId': '2047143813', 'name': 'Qingxiu Dong'}, {'authorId': '10780897', 'name': 'Damai Dai'}, {'authorId': '2183730942', 'name': 'Yifan Song'}, {'authorId': '47883405', 'name': 'Jingjing Xu'}, {'authorId': '3335836', 'name': 'Zhifang Sui'}, {'authorId': '143900005', 'name': 'Lei Li'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Previous literature has proved that Pretrained Language Models (PLMs) can store factual knowledge. However, we find that facts stored in the PLMs are not always correct. It motivates us to explore a fundamental question: How do we calibrate factual knowledge in PLMs without re-training from scratch? In this work, we propose a simple and lightweight method CaliNet to achieve this goal. To be specific, we first detect whether PLMs can learn the right facts via a contrastive score between right and fake facts. If not, we then use a lightweight method to add and adapt new parameters to specific factual texts. Experiments on the knowledge probing task show the calibration effectiveness and efficiency. In addition, through closed-book question answering, we find that the calibrated PLM possesses knowledge generalization ability after fine-tuning. Beyond the calibration performance, we further investigate and visualize the knowledge calibration mechanism.', 'year': 2022, 'in_acl': False, 'citationCount': 72, 'section': None, 'subsection': None}, {'id': 256194369, 'paperId': 'a9be51698e7c2247853b7b6f1f70fc4d6d7ef605', 'title': 'Transformer-Patcher: One Mistake worth One Neuron', 'authors': [{'authorId': '2109583210', 'name': 'Zeyu Huang'}, {'authorId': '2714199', 'name': 'Yikang Shen'}, {'authorId': '2144555913', 'name': 'Xiaofeng Zhang'}, {'authorId': '49178343', 'name': 'Jie Zhou'}, {'authorId': '21505283', 'name': 'Wenge Rong'}, {'authorId': '2091444262', 'name': 'Zhang Xiong'}], 'venue': 'International Conference on Learning Representations', 'abstract': "Large Transformer-based Pretrained Language Models (PLMs) dominate almost all Natural Language Processing (NLP) tasks. Nevertheless, they still make mistakes from time to time. For a model deployed in an industrial environment, fixing these mistakes quickly and robustly is vital to improve user experiences. Previous works formalize such problems as Model Editing (ME) and mostly focus on fixing one mistake. However, the one-mistake-fixing scenario is not an accurate abstraction of the real-world challenge. In the deployment of AI services, there are ever-emerging mistakes, and the same mistake may recur if not corrected in time. Thus a preferable solution is to rectify the mistakes as soon as they appear nonstop. Therefore, we extend the existing ME into Sequential Model Editing (SME) to help develop more practical editing methods. Our study shows that most current ME methods could yield unsatisfying results in this scenario. We then introduce Transformer-Patcher, a novel model editor that can shift the behavior of transformer-based models by simply adding and training a few neurons in the last Feed-Forward Network layer. Experimental results on both classification and generation tasks show that Transformer-Patcher can successively correct up to thousands of errors (Reliability) and generalize to their equivalent inputs (Generality) while retaining the model's accuracy on irrelevant inputs (Locality). Our method outperforms previous fine-tuning and HyperNetwork-based methods and achieves state-of-the-art performance for Sequential Model Editing (SME). The code is available at https://github.com/ZeroYuHuang/Transformer-Patcher.", 'year': 2023, 'in_acl': False, 'citationCount': 127, 'section': None, 'subsection': None}, {'id': 258832407, 'paperId': 'ff2a0fb125e7f03428420230c6ecbeafd4cf07a8', 'title': 'Can We Edit Factual Knowledge by In-Context Learning?', 'authors': [{'authorId': '2113919886', 'name': 'Ce Zheng'}, {'authorId': '49192881', 'name': 'Lei Li'}, {'authorId': '2047143813', 'name': 'Qingxiu Dong'}, {'authorId': '2118167265', 'name': 'Yuxuan Fan'}, {'authorId': '150358371', 'name': 'Zhiyong Wu'}, {'authorId': '47883405', 'name': 'Jingjing Xu'}, {'authorId': '7267809', 'name': 'Baobao Chang'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Previous studies have shown that large language models (LLMs) like GPTs store massive factual knowledge in their parameters. However, the stored knowledge could be false or out-dated. Traditional knowledge editing methods refine LLMs via fine-tuning on texts containing specific knowledge. However, with the increasing scales of LLMs, these gradient-based approaches bring large computation costs. The trend of model-as-a-service also makes it impossible to modify knowledge in black-box LMs. Inspired by in-context learning (ICL), a new paradigm based on demonstration contexts without parameter updating, we explore whether ICL can edit factual knowledge. To answer this question, we give a comprehensive empirical study of ICL strategies. Experiments show that in-context knowledge editing (IKE), without any gradient and parameter updating, achieves a competitive success rate compared to gradient-based methods on GPT-J (6B) but with much fewer side effects, including less over-editing on similar but unrelated facts and less knowledge forgetting on previously stored knowledge. We also apply the method to larger LMs with tens or hundreds of parameters like OPT-175B, which shows the scalability of our method. The code is available at https://github.com/Zce1112zslx/IKE.', 'year': 2023, 'in_acl': False, 'citationCount': 136, 'section': None, 'subsection': None}, {'id': 233289412, 'paperId': '240b0caabb415578bdea4da7d0a32bdff2e8163f', 'title': 'Editing Factual Knowledge in Language Models', 'authors': [{'authorId': '41019080', 'name': 'Nicola De Cao'}, {'authorId': '2782694', 'name': 'Wilker Aziz'}, {'authorId': '144889265', 'name': 'Ivan Titov'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'The factual knowledge acquired during pre-training and stored in the parameters of Language Models (LMs) can be useful in downstream tasks (e.g., question answering or textual inference). However, some facts can be incorrectly induced or become obsolete over time. We present KnowledgeEditor, a method which can be used to edit this knowledge and, thus, fix ‘bugs’ or unexpected predictions without the need for expensive re-training or fine-tuning. Besides being computationally efficient, KnowledgeEditordoes not require any modifications in LM pre-training (e.g., the use of meta-learning). In our approach, we train a hyper-network with constrained optimization to modify a fact without affecting the rest of the knowledge; the trained hyper-network is then used to predict the weight update at test time. We show KnowledgeEditor’s efficacy with two popular architectures and knowledge-intensive tasks: i) a BERT model fine-tuned for fact-checking, and ii) a sequence-to-sequence BART model for question answering. With our method, changing a prediction on the specific wording of a query tends to result in a consistent change in predictions also for its paraphrases. We show that this can be further encouraged by exploiting (e.g., automatically-generated) paraphrases during training. Interestingly, our hyper-network can be regarded as a ‘probe’ revealing which components need to be changed to manipulate factual knowledge; our analysis shows that the updates tend to be concentrated on a small subset of components. Source code available at https://github.com/nicola-decao/KnowledgeEditor', 'year': 2021, 'in_acl': True, 'citationCount': 410, 'section': None, 'subsection': None}, {'id': 239050360, 'paperId': '9286ac6e9b1aacd7d93496eb4615ae7678876d2a', 'title': 'Fast Model Editing at Scale', 'authors': [{'authorId': '49688913', 'name': 'E. Mitchell'}, {'authorId': '2116721670', 'name': 'Charles Lin'}, {'authorId': '2691021', 'name': 'Antoine Bosselut'}, {'authorId': '46881670', 'name': 'Chelsea Finn'}, {'authorId': '144783904', 'name': 'Christopher D. Manning'}], 'venue': 'International Conference on Learning Representations', 'abstract': "While large pre-trained models have enabled impressive results on a variety of downstream tasks, the largest existing models still make errors, and even accurate predictions may become outdated over time. Because detecting all such failures at training time is impossible, enabling both developers and end users of such models to correct inaccurate outputs while leaving the model otherwise intact is desirable. However, the distributed, black-box nature of the representations learned by large neural networks makes producing such targeted edits difficult. If presented with only a single problematic input and new desired output, fine-tuning approaches tend to overfit; other editing algorithms are either computationally infeasible or simply ineffective when applied to very large models. To enable easy post-hoc editing at scale, we propose Model Editor Networks using Gradient Decomposition (MEND), a collection of small auxiliary editing networks that use a single desired input-output pair to make fast, local edits to a pre-trained model's behavior. MEND learns to transform the gradient obtained by standard fine-tuning, using a low-rank decomposition of the gradient to make the parameterization of this transformation tractable. MEND can be trained on a single GPU in less than a day even for 10 billion+ parameter models; once trained MEND enables rapid application of new edits to the pre-trained model. Our experiments with T5, GPT, BERT, and BART models show that MEND is the only approach to model editing that effectively edits the behavior of models with more than 10 billion parameters. Code and data available at https://sites.google.com/view/mend-editing.", 'year': 2021, 'in_acl': False, 'citationCount': 283, 'section': None, 'subsection': None}, {'id': 233296761, 'paperId': '2c871df72c52b58f05447fcb3afc838168d94505', 'title': 'Knowledge Neurons in Pretrained Transformers', 'authors': [{'authorId': '10780897', 'name': 'Damai Dai'}, {'authorId': '145307652', 'name': 'Li Dong'}, {'authorId': '34128716', 'name': 'Y. Hao'}, {'authorId': '3335836', 'name': 'Zhifang Sui'}, {'authorId': '49807919', 'name': 'Furu Wei'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Large-scale pretrained language models are surprisingly good at recalling factual knowledge presented in the training corpus. In this paper, we present preliminary studies on how factual knowledge is stored in pretrained Transformers by introducing the concept of knowledge neurons. Specifically, we examine the fill-in-the-blank cloze task for BERT. Given a relational fact, we propose a knowledge attribution method to identify the neurons that express the fact. We find that the activation of such knowledge neurons is positively correlated to the expression of their corresponding facts. In our case studies, we attempt to leverage knowledge neurons to edit (such as update, and erase) specific factual knowledge without fine-tuning. Our results shed light on understanding the storage of knowledge within pretrained Transformers.', 'year': 2021, 'in_acl': True, 'citationCount': 340, 'section': None, 'subsection': None}, {'id': 255825985, 'paperId': '996445d847f06e99b0bd259345408a0cf1bce87e', 'title': 'Locating and Editing Factual Associations in GPT', 'authors': [{'authorId': '153615419', 'name': 'Kevin Meng'}, {'authorId': '144159726', 'name': 'David Bau'}, {'authorId': '50112310', 'name': 'A. Andonian'}, {'authorId': '2083259', 'name': 'Yonatan Belinkov'}], 'venue': 'Neural Information Processing Systems', 'abstract': "We analyze the storage and recall of factual associations in autoregressive transformer language models, finding evidence that these associations correspond to localized, directly-editable computations. We first develop a causal intervention for identifying neuron activations that are decisive in a model's factual predictions. This reveals a distinct set of steps in middle-layer feed-forward modules that mediate factual predictions while processing subject tokens. To test our hypothesis that these computations correspond to factual association recall, we modify feed-forward weights to update specific factual associations using Rank-One Model Editing (ROME). We find that ROME is effective on a standard zero-shot relation extraction (zsRE) model-editing task, comparable to existing methods. To perform a more sensitive evaluation, we also evaluate ROME on a new dataset of counterfactual assertions, on which it simultaneously maintains both specificity and generalization, whereas other methods sacrifice one or another. Our results confirm an important role for mid-layer feed-forward modules in storing factual associations and suggest that direct manipulation of computational mechanisms may be a feasible approach for model editing. The code, dataset, visualizations, and an interactive demo notebook are available at https://rome.baulab.info/", 'year': 2022, 'in_acl': False, 'citationCount': 893, 'section': None, 'subsection': None}, {'id': 252873467, 'paperId': '2fe1ac0b09cc0f50eb83eef6c7c6b45ac8b12413', 'title': 'Mass-Editing Memory in a Transformer', 'authors': [{'authorId': '153615419', 'name': 'Kevin Meng'}, {'authorId': '1429844787', 'name': 'Arnab Sen Sharma'}, {'authorId': '50112310', 'name': 'A. Andonian'}, {'authorId': '2083259', 'name': 'Yonatan Belinkov'}, {'authorId': '144159726', 'name': 'David Bau'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Recent work has shown exciting promise in updating large language models with new memories, so as to replace obsolete information or add specialized knowledge. However, this line of work is predominantly limited to updating single associations. We develop MEMIT, a method for directly updating a language model with many memories, demonstrating experimentally that it can scale up to thousands of associations for GPT-J (6B) and GPT-NeoX (20B), exceeding prior work by orders of magnitude. Our code and data are at https://memit.baulab.info.', 'year': 2022, 'in_acl': False, 'citationCount': 399, 'section': None, 'subsection': None}, {'id': 258865984, 'paperId': '56e952fd463accff09cf2e35432aaabd7c7c57f3', 'title': 'MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions', 'authors': [{'authorId': '49164966', 'name': 'Zexuan Zhong'}, {'authorId': '47039337', 'name': 'Zhengxuan Wu'}, {'authorId': '144783904', 'name': 'Christopher D. Manning'}, {'authorId': '144922861', 'name': 'Christopher Potts'}, {'authorId': '50536468', 'name': 'Danqi Chen'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': "The information stored in large language models (LLMs) falls out of date quickly, and retraining from scratch is often not an option. This has recently given rise to a range of techniques for injecting new facts through updating model weights. Current evaluation paradigms are extremely limited, mainly validating the recall of edited facts, but changing one fact should cause rippling changes to the model's related beliefs. If we edit the UK Prime Minister to now be Rishi Sunak, then we should get a different answer to Who is married to the British Prime Minister? In this work, we present a benchmark, MQuAKE (Multi-hop Question Answering for Knowledge Editing), comprising multi-hop questions that assess whether edited models correctly answer questions where the answer should change as an entailed consequence of edited facts. While we find that current knowledge-editing approaches can recall edited facts accurately, they fail catastrophically on the constructed multi-hop questions. We thus propose a simple memory-based approach, MeLLo, which stores all edited facts externally while prompting the language model iteratively to generate answers that are consistent with the edited facts. While MQuAKE remains challenging, we show that MeLLo scales well with LLMs (e.g., OpenAI GPT-3.5-turbo) and outperforms previous model editors by a large margin.", 'year': 2023, 'in_acl': False, 'citationCount': 144, 'section': None, 'subsection': None}, {'id': 258437155, 'paperId': '56da914761e445a24481629cfc116336a0aec978', 'title': 'Can LMs Learn New Entities from Descriptions? Challenges in Propagating Injected Knowledge', 'authors': [{'authorId': '115412405', 'name': 'Yasumasa Onoe'}, {'authorId': '2129403254', 'name': 'Michael J.Q. Zhang'}, {'authorId': '2204461780', 'name': 'Shankar Padmanabhan'}, {'authorId': '1814094', 'name': 'Greg Durrett'}, {'authorId': '2890423', 'name': 'Eunsol Choi'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Pre-trained language models (LMs) are used for knowledge intensive tasks like question answering, but their knowledge gets continuously outdated as the world changes. Prior work has studied targeted updates to LMs, injecting individual facts and evaluating whether the model learns these facts while not changing predictions on other contexts. We take a step forward and study LMs’ abilities to make inferences based on injected facts (or propagate those facts): for example, after learning that something is a TV show, does an LM predict that you can watch it? We study this with two cloze-style tasks: an existing dataset of real-world sentences about novel entities (ECBD) as well as a new controlled benchmark with manually designed templates requiring varying levels of inference about injected knowledge. Surprisingly, we find that existing methods for updating knowledge (gradient-based fine-tuning and modifications of this approach) show little propagation of injected knowledge. These methods improve performance on cloze instances only when there is lexical overlap between injected facts and target inferences. Yet, prepending entity definitions in an LM’s context improves performance across all settings, suggesting that there is substantial headroom for parameter-updating approaches for knowledge injection.', 'year': 2023, 'in_acl': True, 'citationCount': 61, 'section': None, 'subsection': None}, {'id': 258960406, 'paperId': 'cc57a02307b77585f69779cca2937dedc69006d6', 'title': 'Detecting Edit Failures In Large Language Models: An Improved Specificity Benchmark', 'authors': [{'authorId': '1388392024', 'name': 'J. Hoelscher-Obermaier'}, {'authorId': '2218886081', 'name': 'Julia Persson'}, {'authorId': '2005663935', 'name': 'Esben Kran'}, {'authorId': '2621022', 'name': 'Ioannis Konstas'}, {'authorId': '2143198655', 'name': 'Fazl Barez'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Recent model editing techniques promise to mitigate the problem of memorizing false or outdated associations during LLM training. However, we show that these techniques can introduce large unwanted side effects which are not detected by existing specificity benchmarks. We extend the existing CounterFact benchmark to include a dynamic component and dub our benchmark CounterFact+. Additionally, we extend the metrics used for measuring specificity by a principled KL divergence-based metric. We use this improved benchmark to evaluate recent model editing techniques and find that they suffer from low specificity. Our findings highlight the need for improved specificity benchmarks that identify and prevent unwanted side effects.', 'year': 2023, 'in_acl': False, 'citationCount': 49, 'section': None, 'subsection': None}, {'id': 258865393, 'paperId': '2b72888cc3ff048038f6011b8e3d89ba106540b6', 'title': 'Editing Common Sense in Transformers', 'authors': [{'authorId': '2200083553', 'name': 'Anshita Gupta'}, {'authorId': '2261673039', 'name': 'Debanjan Mondal'}, {'authorId': '2046876495', 'name': 'Akshay Krishna Sheshadri'}, {'authorId': '50771250', 'name': 'Wenlong Zhao'}, {'authorId': '1737850', 'name': 'Xiang Lorraine Li'}, {'authorId': '35823986', 'name': 'Sarah Wiegreffe'}, {'authorId': '1721168', 'name': 'Niket Tandon'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': "Editing model parameters directly in Transformers makes updating open-source transformer-based models possible without re-training (Meng et al., 2023). However, these editing methods have only been evaluated on statements about encyclopedic knowledge with a single correct answer. Commonsense knowledge with multiple correct answers, e.g., an apple can be green or red but not transparent, has not been studied but is as essential for enhancing transformers' reliability and usefulness. In this paper, we investigate whether commonsense judgments are causally associated with localized, editable parameters in Transformers, and we provide an affirmative answer. We find that directly applying the MEMIT editing algorithm results in sub-par performance and improve it for the commonsense domain by varying edit tokens and improving the layer selection strategy, i.e., $MEMIT_{CSK}$. GPT-2 Large and XL models edited using $MEMIT_{CSK}$ outperform best-fine-tuned baselines by 10.97% and 10.73% F1 scores on PEP3k and 20Q datasets. In addition, we propose a novel evaluation dataset, PROBE SET, that contains unaffected and affected neighborhoods, affected paraphrases, and affected reasoning challenges. $MEMIT_{CSK}$ performs well across the metrics while fine-tuning baselines show significant trade-offs between unaffected and affected metrics. These results suggest a compelling future direction for incorporating feedback about common sense into Transformers through direct model editing.", 'year': 2023, 'in_acl': False, 'citationCount': 14, 'section': None, 'subsection': None}]
|
2023.ijcnlp-tutorials.5
|
Learning WHO Saying WHAT to WHOM in Multi-Party Conversations
|
Multi-party conversations (MPC) are a more practical and challenging scenario involving more than two interlocutors. This research topic has drawn significant attention from both academia and industry, and it is nowadays counted as one of the most promising research areas in the field of dialogue systems. In general, MPC algorithms aim at addressing the issues of Who saying What to Whom, specifically, who speaks, say what, and address whom. The complicated interactions between interlocutors, between utterances, and between interlocutors and utterances develop many variant tasks of MPC worth investigation. In this tutorial, we present a comprehensive survey of recent advances in MPC. In particular, we summarize recent advances on the research of MPC modeling which is categorized by Who saying What to Whom. Finally, we highlight the challenges which are not yet well addressed in MPC and present future research directions.
| 2,023
|
https://aclanthology.org/2023.ijcnlp-tutorials.5
|
IJCNLP, AACL
|
[{'id': 250637571, 'paperId': '7958647ee241185ab253cdaa63466033e37e78ca', 'title': 'Who Says What to Whom: A Survey of Multi-Party Conversations', 'authors': [{'authorId': '3028818', 'name': 'Jia-Chen Gu'}, {'authorId': '8801869', 'name': 'Chongyang Tao'}, {'authorId': '1749989', 'name': 'Zhenhua Ling'}], 'venue': 'International Joint Conference on Artificial Intelligence', 'abstract': 'Multi-party conversations (MPCs) are a more practical and challenging scenario involving more than two interlocutors. This research topic has drawn significant attention from both academia and industry, and it is nowadays counted as one of the most promising research areas in the field of dialogue systems. In general, MPC algorithms aim at addressing the issues of Who says What to Whom, specifically, who speaks, say what, and address whom. The complicated interactions between interlocutors, between utterances, and between interlocutors and utterances develop many variant tasks of MPCs worth investigation. In this paper, we present a comprehensive survey of recent advances in text-based MPCs. In particular, we first summarize recent advances on the research of MPC context modeling including dialogue discourse parsing, dialogue flow modeling and self-supervised training for MPCs. Then we review the state-of-the-art models categorized by Who says What to Whom in MPCs. Finally, we highlight the challenges which are not yet well addressed in MPCs and present future research directions.', 'year': 2022, 'in_acl': False, 'citationCount': 27, 'section': None, 'subsection': None}, {'id': 232110776, 'paperId': '281b4a7e7fb057d8266ec0610888905c46fd715d', 'title': 'Advances in Multi-turn Dialogue Comprehension: A Survey', 'authors': [{'authorId': '3322871', 'name': 'Zhuosheng Zhang'}, {'authorId': '47941144', 'name': 'Hai Zhao'}], 'venue': 'arXiv.org', 'abstract': 'Training machines to understand natural language and interact with humans is an elusive and essential task of artificial intelligence. A diversity of dialogue systems has been designed with the rapid development of deep learning techniques, especially the recent pre-trained language models (PrLMs). Among these studies, the fundamental yet challenging type of task is dialogue comprehension whose role is to teach the machines to read and comprehend the dialogue context before responding. In this paper, we review the previous methods from the technical perspective of dialogue modeling for the dialogue comprehension task. We summarize the characteristics and challenges of dialogue comprehension in contrast to plain-text reading comprehension. Then, we discuss three typical patterns of dialogue modeling. In addition, we categorize dialogue-related pre-training techniques which are employed to enhance PrLMs in dialogue scenarios. Finally, we highlight the technical advances in recent years and point out the lessons from the empirical analysis and the prospects towards a new frontier of researches.', 'year': 2021, 'in_acl': False, 'citationCount': 19, 'section': None, 'subsection': None}, {'id': 16537814, 'paperId': '811e014002d1e4d1e185fc236cf9e3fafe2aade5', 'title': 'Addressee and Response Selection for Multi-Party Conversation', 'authors': [{'authorId': '33516663', 'name': 'Hiroki Ouchi'}, {'authorId': '3229899', 'name': 'Yuta Tsuboi'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'To create conversational systems working in actual situations, it is crucial to assume that they interact with multiple agents. In this work, we tackle addressee and response selection for multi-party conversation, in which systems are expected to select whom they address as well as what they say. The key challenge of this task is to jointly model who is talking about what in a previous context. For the joint modeling, we propose two modeling frameworks: 1) static modeling and 2) dynamic modeling. To show benchmark results of our frameworks, we created a multi-party conversation corpus. Our experiments on the dataset show that the recurrent neural network based models of our frameworks robustly predict addressees and responses in conversations with a large number of agents.', 'year': 2016, 'in_acl': True, 'citationCount': 61, 'section': None, 'subsection': None}, {'id': 173188574, 'paperId': '5c0909aab443692887b25261da3af74d570c07cd', 'title': 'GSN: A Graph-Structured Network for Multi-Party Dialogues', 'authors': [{'authorId': '7849217', 'name': 'Wenpeng Hu'}, {'authorId': '51177175', 'name': 'Zhangming Chan'}, {'authorId': '47655430', 'name': 'Bing Liu'}, {'authorId': '144060462', 'name': 'Dongyan Zhao'}, {'authorId': '1685259', 'name': 'Jinwen Ma'}, {'authorId': '144539156', 'name': 'Rui Yan'}], 'venue': 'International Joint Conference on Artificial Intelligence', 'abstract': "Existing neural models for dialogue response generation assume that utterances are sequentially organized. However, many real-world dialogues involve multiple interlocutors (i.e., multi-party dialogues), where the assumption does not hold as utterances from different interlocutors can occur ``in parallel.'' This paper generalizes existing sequence-based models to a Graph-Structured neural Network (GSN) for dialogue modeling. The core of GSN is a graph-based encoder that can model the information flow along the graph-structured dialogues (two-party sequential dialogues are a special case). Experimental results show that GSN significantly outperforms existing sequence-based models.", 'year': 2019, 'in_acl': False, 'citationCount': 69, 'section': None, 'subsection': None}, {'id': 196192979, 'paperId': 'c59d36e79d573cc4a2440cb2a7154eada5c0ead2', 'title': 'A Large-Scale Corpus for Conversation Disentanglement', 'authors': [{'authorId': '1727211', 'name': 'Jonathan K. Kummerfeld'}, {'authorId': '1905888', 'name': 'S. R. Gouravajhala'}, {'authorId': '79548673', 'name': 'Joseph Peper'}, {'authorId': '81176329', 'name': 'V. Athreya'}, {'authorId': '144543562', 'name': 'R. Chulaka Gunasekara'}, {'authorId': '2504586', 'name': 'Jatin Ganhotra'}, {'authorId': '80836534', 'name': 'S. Patel'}, {'authorId': '1725498', 'name': 'L. Polymenakos'}, {'authorId': '2598433', 'name': 'Walter S. Lasecki'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. We created a new dataset of 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. Our data is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context. We use our data to re-examine prior work, in particular, finding that 89% of conversations in a widely used dialogue corpus are either missing messages or contain extra messages. Our manually-annotated data presents an opportunity to develop robust data-driven methods for conversation disentanglement, which will help advance dialogue research.', 'year': 2018, 'in_acl': True, 'citationCount': 95, 'section': None, 'subsection': None}, {'id': 235313361, 'paperId': '51b9f8aef39de4b6db820b5c4b5bca14fc32aa4d', 'title': 'MPC-BERT: A Pre-Trained Language Model for Multi-Party Conversation Understanding', 'authors': [{'authorId': '3028818', 'name': 'Jia-Chen Gu'}, {'authorId': '8801869', 'name': 'Chongyang Tao'}, {'authorId': '1749989', 'name': 'Zhenhua Ling'}, {'authorId': '2110091832', 'name': 'Can Xu'}, {'authorId': '2442662', 'name': 'Xiubo Geng'}, {'authorId': '71790825', 'name': 'Daxin Jiang'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Recently, various neural models for multi-party conversation (MPC) have achieved impressive improvements on a variety of tasks such as addressee recognition, speaker identification and response prediction. However, these existing methods on MPC usually represent interlocutors and utterances individually and ignore the inherent complicated structure in MPC which may provide crucial interlocutor and utterance semantics and would enhance the conversation understanding process. To this end, we present MPC-BERT, a pre-trained model for MPC understanding that considers learning who says what to whom in a unified model with several elaborated self-supervised tasks. Particularly, these tasks can be generally categorized into (1) interlocutor structure modeling including reply-to utterance recognition, identical speaker searching and pointer consistency distinction, and (2) utterance semantics modeling including masked shared utterance restoration and shared node detection. We evaluate MPC-BERT on three downstream tasks including addressee recognition, speaker identification and response selection. Experimental results show that MPC-BERT outperforms previous methods by large margins and achieves new state-of-the-art performance on all three downstream tasks at two benchmarks.', 'year': 2021, 'in_acl': True, 'citationCount': 50, 'section': None, 'subsection': None}, {'id': 247451284, 'paperId': '23bb7ac9d1164b0b429e59eb012584c1c1c64e73', 'title': 'Structural Characterization for Dialogue Disentanglement', 'authors': [{'authorId': '2141114505', 'name': 'Xinbei Ma'}, {'authorId': '3322871', 'name': 'Zhuosheng Zhang'}, {'authorId': '47941144', 'name': 'Hai Zhao'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Tangled multi-party dialogue contexts lead to challenges for dialogue reading comprehension, where multiple dialogue threads flow simultaneously within a common dialogue record, increasing difficulties in understanding the dialogue history for both human and machine. Previous studies mainly focus on utterance encoding methods with carefully designed features but pay inadequate attention to characteristic features of the structure of dialogues. We specially take structure factors into account and design a novel model for dialogue disentangling. Based on the fact that dialogues are constructed on successive participation and interactions between speakers, we model structural information of dialogues in two aspects: 1)speaker property that indicates whom a message is from, and 2) reference dependency that shows whom a message may refer to. The proposed method achieves new state-of-the-art on the Ubuntu IRC benchmark dataset and contributes to dialogue-related comprehension.', 'year': 2021, 'in_acl': True, 'citationCount': 15, 'section': None, 'subsection': None}, {'id': 247476252, 'paperId': 'daadd8b4af33abc89f1148ee1685b8a3099759ed', 'title': 'HeterMPC: A Heterogeneous Graph Neural Network for Response Generation in Multi-Party Conversations', 'authors': [{'authorId': '3028818', 'name': 'Jia-Chen Gu'}, {'authorId': '2111728713', 'name': 'Chao-Hong Tan'}, {'authorId': '8801869', 'name': 'Chongyang Tao'}, {'authorId': '2072392338', 'name': 'Zhen-Hua Ling'}, {'authorId': '2144026961', 'name': 'Huang Hu'}, {'authorId': '2442662', 'name': 'Xiubo Geng'}, {'authorId': '2086994543', 'name': 'Daxin Jiang'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Recently, various response generation models for two-party conversations have achieved impressive improvements, but less effort has been paid to multi-party conversations (MPCs) which are more practical and complicated. Compared with a two-party conversation where a dialogue context is a sequence of utterances, building a response generation model for MPCs is more challenging, since there exist complicated context structures and the generated responses heavily rely on both interlocutors (i.e., speaker and addressee) and history utterances. To address these challenges, we present HeterMPC, a heterogeneous graph-based neural network for response generation in MPCs which models the semantics of utterances and interlocutors simultaneously with two types of nodes in a graph. Besides, we also design six types of meta relations with node-edge-type-dependent parameters to characterize the heterogeneous interactions within the graph. Through multi-hop updating, HeterMPC can adequately utilize the structural knowledge of conversations for response generation. Experimental results on the Ubuntu Internet Relay Chat (IRC) channel benchmark show that HeterMPC outperforms various baseline models for response generation in MPCs.', 'year': 2022, 'in_acl': True, 'citationCount': 24, 'section': None, 'subsection': None}, {'id': 258833159, 'paperId': '13068f5f0f2ab8a50ee0c43a9362e351d2019377', 'title': 'EM Pre-training for Multi-party Dialogue Response Generation', 'authors': [{'authorId': '2110418724', 'name': 'Yiyang Li'}, {'authorId': '2146232510', 'name': 'Hai Zhao'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Dialogue response generation requires an agent to generate a response according to the current dialogue history, in terms of which two-party dialogues have been well studied, but leaving a great gap for multi-party dialogues at the same time. Different from two-party dialogues where each response is a direct reply to its previous utterance, the addressee of a response utterance should be specified before it is generated in the multi-party scenario. Thanks to the huge amount of two-party conversational data, various pre-trained language models for two-party dialogue response generation have been proposed. However, due to the lack of annotated addressee labels in multi-party dialogue datasets, it is hard to use them to pre-train a response generation model for multi-party dialogues. To tackle this obstacle, we propose an Expectation-Maximization (EM) approach that iteratively performs the expectation steps to generate addressee labels, and the maximization steps to optimize a response generation model. Theoretical analyses and extensive experiments have justified the feasibility and effectiveness of our proposed method. The official implementation of this paper is available at https://github.com/EricLee8/MPDRG.', 'year': 2023, 'in_acl': True, 'citationCount': 8, 'section': None, 'subsection': None}, {'id': 258715296, 'paperId': '9c2d502097be3a364d51315123c248282d6d534e', 'title': 'GIFT: Graph-Induced Fine-Tuning for Multi-Party Conversation Understanding', 'authors': [{'authorId': '3028818', 'name': 'Jia-Chen Gu'}, {'authorId': '2072392338', 'name': 'Zhen-Hua Ling'}, {'authorId': '145014498', 'name': 'QUAN LIU'}, {'authorId': '2155398718', 'name': 'Cong Liu'}, {'authorId': '2090465180', 'name': 'Guoping Hu'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Addressing the issues of who saying what to whom in multi-party conversations (MPCs) has recently attracted a lot of research attention. However, existing methods on MPC understanding typically embed interlocutors and utterances into sequential information flows, or utilize only the superficial of inherent graph structures in MPCs. To this end, we present a plug-and-play and lightweight method named graph-induced fine-tuning (GIFT) which can adapt various Transformer-based pre-trained language models (PLMs) for universal MPC understanding. In detail, the full and equivalent connections among utterances in regular Transformer ignore the sparse but distinctive dependency of an utterance on another in MPCs. To distinguish different relationships between utterances, four types of edges are designed to integrate graph-induced signals into attention mechanisms to refine PLMs originally designed for processing sequential texts. We evaluate GIFT by implementing it into three PLMs, and test the performance on three downstream tasks including addressee recognition, speaker identification and response selection. Experimental results show that GIFT can significantly improve the performance of three PLMs on three downstream tasks and two benchmarks with only 4 additional parameters per encoding layer, achieving new state-of-the-art performance on MPC understanding.', 'year': 2023, 'in_acl': True, 'citationCount': 7, 'section': None, 'subsection': None}]
|
2024.eacl-tutorials.1
|
Computational modeling of semantic change
|
Languages change constantly over time, influenced by social, technological, cultural and political factors that affect how people express themselves. In particular, words can undergo the process of semantic change, which can be subtle and significantly impact the interpretation of texts. For example, the word terrific used to mean ‘causing terror’ and was as such synonymous to terrifying. Nowadays, speakers use the word in the sense of ‘excessive’ and even ‘amazing’. In Historical Linguistics, tools and methods have been developed to analyse this phenomenon, including systematic categorisations of the types of change, the causes and the mechanisms underlying the different types of change. However, traditional linguistic methods, while informative, are often based on small, carefully curated samples. Thanks to the availability of both large diachronic corpora, the computational means to model word meaning unsupervised, and evaluation benchmarks, we are seeing an increasing interest in the computational modelling of semantic change. This is evidenced by the increasing number of publications in this new domain as well as the organisation of initiatives and events related to this topic, such as four editions of the International Workshop on Computational Approaches to Historical Language Change LChange1, and several evaluation campaigns (Schlechtweg et al., 2020a; Basile et al., 2020b; Kutuzov et al.; Zamora-Reina et al., 2022).
| 2,024
|
https://aclanthology.org/2024.eacl-tutorials.1
|
EACL
|
[{'id': 140928479, 'paperId': '5a05cd1f253baaa1b67c55d22335403a6251094c', 'title': 'How anger rose: Hypothesis testing in diachronic semantics', 'authors': [{'authorId': '1796288', 'name': 'D. Geeraerts'}, {'authorId': '145914884', 'name': 'C. Gevaert'}, {'authorId': '1754574', 'name': 'D. Speelman'}], 'venue': '', 'abstract': "On the basis of a large database of attested examples of anger, ire and wrath in Middle English texts, we perform a statistical analysis of the factors contributing to the emergence of anger as the dominant term. Specifically, we perform a logistic regression to test the hypothesis formulated by Diller (1994), who suggests that anger was introduced in the lexical field of anger expressions because social changes gave rise to new forms of anger: in contrast with the traditional reference to anger, in which the angry person has a high social rank and typically reacts in a violent way, anger expressed the emotions of lower-ranked persons, who react less violently. Overall, our statistical analysis is consonant with Diller's hypothesis, but it appears, importantly, that the hypothesis needs to be lectally enriched by means of a reference to the text type in which anger appears.", 'year': 2011, 'in_acl': False, 'citationCount': 18, 'section': 'Introduction to Semantic Change ', 'subsection': None}, {'id': 47019063, 'paperId': 'a41cf8154a64fe95f9c362e5664aebb02b18ee85', 'title': 'Diachronic word embeddings and semantic shifts: a survey', 'authors': [{'authorId': '2689095', 'name': 'Andrey Kutuzov'}, {'authorId': '2732223', 'name': 'Lilja Øvrelid'}, {'authorId': '3461918', 'name': 'Terrence Szymanski'}, {'authorId': '2027091', 'name': 'Erik Velldal'}], 'venue': 'International Conference on Computational Linguistics', 'abstract': 'Recent years have witnessed a surge of publications aimed at tracing temporal changes in lexical semantics using distributional methods, particularly prediction-based word embedding models. However, this vein of research lacks the cohesion, common terminology and shared practices of more established areas of natural language processing. In this paper, we survey the current state of academic research related to diachronic word embeddings and semantic shifts detection. We start with discussing the notion of semantic shifts, and then continue with an overview of the existing methods for tracing such time-related shifts with word embedding models. We propose several axes along which these methods can be compared, and outline the main challenges before this emerging subfield of NLP, as well as prospects and possible applications.', 'year': 2018, 'in_acl': True, 'citationCount': 291, 'section': 'Surveys', 'subsection': None}, {'id': 76666453, 'paperId': '755cda23e5b90d3c3f2aa027cb00dd071ee97386', 'title': 'Survey of Computational Approaches to Lexical Semantic Change', 'authors': [{'authorId': '1731960', 'name': 'Nina Tahmasebi'}, {'authorId': '143739029', 'name': 'L. Borin'}, {'authorId': '1774986', 'name': 'A. Jatowt'}], 'venue': '', 'abstract': 'Our languages are in constant flux driven by external factors such as cultural, societal and technological changes, as well as by only partially understood internal motivations. Words acquire new meanings and lose old senses, new words are coined or borrowed from other languages and obsolete words slide into obscurity. Understanding the characteristics of shifts in the meaning and in the use of words is useful for those who work with the content of historical texts, the interested general public, but also in and of itself. The findings from automatic lexical semantic change detection, and the models of diachronic conceptual change are currently being incorporated in approaches for measuring document across-time similarity, information retrieval from long-term document archives, the design of OCR algorithms, and so on. In recent years we have seen a surge in interest in the academic community in computational methods and tools supporting inquiry into diachronic conceptual change and lexical replacement. This article is an extract of a survey of recent computational techniques to tackle lexical semantic change currently under review. In this article we focus on diachronic conceptual change as an extension of semantic change.', 'year': 2018, 'in_acl': False, 'citationCount': 156, 'section': 'Surveys', 'subsection': None}, {'id': 257921251, 'paperId': 'ab08495f575c7616b316b86a5e4cdbe84fb5d0e6', 'title': 'Lexical Semantic Change through Large Language Models: a Survey', 'authors': [{'authorId': '2136116967', 'name': 'Francesco Periti'}, {'authorId': '1732265', 'name': 'S. Montanelli'}], 'venue': 'ACM Computing Surveys', 'abstract': 'Lexical Semantic Change (LSC) is the task of identifying, interpreting, and assessing the possible change over time in the meanings of a target word. Traditionally, LSC has been addressed by linguists and social scientists through manual and time-consuming analyses, which have thus been limited in terms of the volume, genres, and time-frame that can be considered. In recent years, computational approaches based on Natural Language Processing have gained increasing attention to automate LSC as much as possible. Significant advancements have been made by relying on Large Language Models (LLMs), which can handle the multiple usages of the words and better capture the related semantic change. In this article, we survey the approaches based on LLMs for LSC and we propose a classification framework characterized by three dimensions: meaning representation, time-awareness, and learning modality. The framework is exploited to i) review the measures for change assessment, ii) compare the approaches on performance, and iii) discuss the current issues in terms of scalability, interpretability, and robustness. Open challenges and future research directions about the use of LLMs for LSC are finally outlined.', 'year': 2023, 'in_acl': False, 'citationCount': 19, 'section': 'Surveys', 'subsection': None}, {'id': 220686630, 'paperId': 'a0895e9555527e30b82a8c66b6993683c6cabe14', 'title': 'SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection', 'authors': [{'authorId': '3449121', 'name': 'Dominik Schlechtweg'}, {'authorId': '144463772', 'name': 'Barbara McGillivray'}, {'authorId': '3422512', 'name': 'Simon Hengchen'}, {'authorId': '2026652', 'name': 'Haim Dubossarsky'}, {'authorId': '1731960', 'name': 'Nina Tahmasebi'}], 'venue': 'International Workshop on Semantic Evaluation', 'abstract': 'Lexical Semantic Change detection, i.e., the task of identifying words that change meaning over time, is a very active research area, with applications in NLP, lexicography, and linguistics. Evaluation is currently the most pressing problem in Lexical Semantic Change detection, as no gold standards are available to the community, which hinders progress. We present the results of the first shared task that addresses this gap by providing researchers with an evaluation framework and manually annotated, high-quality datasets for English, German, Latin, and Swedish. 33 teams submitted 186 systems, which were evaluated on two subtasks.', 'year': 2020, 'in_acl': True, 'citationCount': 223, 'section': 'Benchmarks', 'subsection': None}, {'id': 229292864, 'paperId': '16af57bf06ebcba08fb50fd0582df93cab5ada89', 'title': 'DIACR-Ita @ EVALITA2020: Overview of the EVALITA2020 Diachronic Lexical Semantics (DIACR-Ita) Task', 'authors': [{'authorId': '1731651', 'name': 'Pierpaolo Basile'}, {'authorId': '25392172', 'name': 'A. Caputo'}, {'authorId': '1864635', 'name': 'Tommaso Caselli'}, {'authorId': '29931342', 'name': 'Pierluigi Cassotti'}, {'authorId': '72137436', 'name': 'Rossella Varvara'}], 'venue': 'International Workshop on Evaluation of Natural Language and Speech Tools for Italian', 'abstract': 'This paper describes the first edition of the “Diachronic Lexical Seman-tics” (DIACR-Ita) task at the EVALITA2020 campaign. The task challenges participants to develop systems that can automatically detect if a given word has changed its meaning over time, given con-textual information from corpora.The task, at its first edition, attracted 9 participant teams and collected a total of 36 sub-mission runs', 'year': 2020, 'in_acl': False, 'citationCount': 43, 'section': 'Benchmarks', 'subsection': None}, {'id': 240007326, 'paperId': 'e4ed8bf2155d35ee83a769b9c8f48fb299afc1d8', 'title': 'RuShiftEval: a shared task on semantic shift detection for Russian', 'authors': [{'authorId': '145579909', 'name': 'Lidia Pivovarova'}, {'authorId': '2689095', 'name': 'Andrey Kutuzov'}], 'venue': '', 'abstract': 'We present the first shared task on diachronic word meaning change detection for the Russian. The participating systems were provided with three sub-corpora of the Russian National Corpus — corresponding to pre-Soviet, Soviet and post-Soviet periods respectively — and a set of approximately one hundred Russian nouns. The task was to rank those nouns according to the degrees of their meaning change between periods. Although RuShiftEval is in many respects similar to the previous tasks organized for other languages', 'year': 2021, 'in_acl': False, 'citationCount': 39, 'section': 'Benchmarks', 'subsection': None}, {'id': 5480561, 'paperId': '7ee0a337faec1d87bbb15d84856a43a4aa64ac65', 'title': 'Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change', 'authors': [{'authorId': '49437682', 'name': 'William L. Hamilton'}, {'authorId': '1702139', 'name': 'J. Leskovec'}, {'authorId': '1746807', 'name': 'Dan Jurafsky'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Understanding how words change their meanings over time is key to models of language and cultural evolution, but historical data on meaning is scarce, making theories hard to develop and test. Word embeddings show promise as a diachronic tool, but have not been carefully evaluated. We develop a robust methodology for quantifying semantic change by evaluating word embeddings (PPMI, SVD, word2vec) against known historical changes. We then use this methodology to reveal statistical laws of semantic evolution. Using six historical corpora spanning four languages and two centuries, we propose two quantitative laws of semantic change: (i) the law of conformity---the rate of semantic change scales with an inverse power-law of word frequency; (ii) the law of innovation---independent of frequency, words that are more polysemous have higher rates of semantic change.', 'year': 2016, 'in_acl': True, 'citationCount': 887, 'section': 'Models', 'subsection': None}, {'id': 36748720, 'paperId': 'ffa952b637e03bcae13b38e13ce5cf73c6f24e59', 'title': 'Dynamic Word Embeddings for Evolving Semantic Discovery', 'authors': [{'authorId': '2117803201', 'name': 'Zijun Yao'}, {'authorId': '2213925', 'name': 'Yifan Sun'}, {'authorId': '2365722', 'name': 'Weicong Ding'}, {'authorId': '145850291', 'name': 'Nikhil S. Rao'}, {'authorId': '144467554', 'name': 'Hui Xiong'}], 'venue': 'Web Search and Data Mining', 'abstract': 'Word evolution refers to the changing meanings and associations of words throughout time, as a byproduct of human language evolution. By studying word evolution, we can infer social trends and language constructs over different periods of human history. However, traditional techniques such as word representation learning do not adequately capture the evolving language structure and vocabulary. In this paper, we develop a dynamic statistical model to learn time-aware word vector representation. We propose a model that simultaneously learns time-aware embeddings and solves the resulting alignment problem. This model is trained on a crawled NYTimes dataset. Additionally, we develop multiple intuitive evaluation strategies of temporal word embeddings. Our qualitative and quantitative tests indicate that our method not only reliably captures this evolution over time, but also consistently outperforms state-of-the-art temporal embedding approaches on both semantic accuracy and alignment quality.', 'year': 2017, 'in_acl': False, 'citationCount': 207, 'section': 'Models', 'subsection': None}, {'id': 258833586, 'paperId': 'a2fb308508afbe00aab0709f6563719bd86e256a', 'title': 'Interpretable Word Sense Representations via Definition Generation: The Case of Semantic Change Analysis', 'authors': [{'authorId': '24068173', 'name': 'Mario Giulianelli'}, {'authorId': '2218113101', 'name': 'Iris Luden'}, {'authorId': '2147411708', 'name': 'Raquel Fernández'}, {'authorId': '2689095', 'name': 'Andrey Kutuzov'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We propose using automatically generated natural language definitions of contextualised word usages as interpretable word and word sense representations.Given a collection of usage examples for a target word, and the corresponding data-driven usage clusters (i.e., word senses), a definition is generated for each usage with a specialised Flan-T5 language model, and the most prototypical definition in a usage cluster is chosen as the sense label. We demonstrate how the resulting sense labels can make existing approaches to semantic change analysis more interpretable, and how they can allow users — historical linguists, lexicographers, or social scientists — to explore and intuitively explain diachronic trajectories of word meaning. Semantic change analysis is only one of many possible applications of the ‘definitions as representations’ paradigm. Beyond being human-readable, contextualised definitions also outperform token or usage sentence embeddings in word-in-context semantic similarity judgements, making them a new promising type of lexical representation for NLP.', 'year': 2023, 'in_acl': True, 'citationCount': 21, 'section': 'Models', 'subsection': None}, {'id': 259370555, 'paperId': '6632dfd527f6161feed7fc6ab3970b53ffbbffcc', 'title': 'XL-LEXEME: WiC Pretrained Model for Cross-Lingual LEXical sEMantic changE', 'authors': [{'authorId': '29931342', 'name': 'Pierluigi Cassotti'}, {'authorId': '40989465', 'name': 'Lucia Siciliani'}, {'authorId': '1873109', 'name': 'M. Degemmis'}, {'authorId': '145467353', 'name': 'G. Semeraro'}, {'authorId': '1731651', 'name': 'Pierpaolo Basile'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'The recent introduction of large-scale datasets for the WiC (Word in Context) task enables the creation of more reliable and meaningful contextualized word embeddings.However, most of the approaches to the WiC task use cross-encoders, which prevent the possibility of deriving comparable word embeddings.In this work, we introduce XL-LEXEME, a Lexical Semantic Change Detection model.XL-LEXEME extends SBERT, highlighting the target word in the sentence.We evaluate XL-LEXEME on the multilingual benchmarks for SemEval-2020 Task 1 - Lexical Semantic Change (LSC) Detection and the RuShiftEval shared task involving five languages: English, German, Swedish, Latin, and Russian.XL-LEXEME outperforms the state-of-the-art in English, German and Swedish with statistically significant differences from the baseline results and obtains state-of-the-art performance in the RuShiftEval shared task.', 'year': 2023, 'in_acl': True, 'citationCount': 29, 'section': 'Models', 'subsection': None}]
|
2024.eacl-tutorials.4
|
Transformer-specific Interpretability
|
Transformers have emerged as dominant play- ers in various scientific fields, especially NLP. However, their inner workings, like many other neural networks, remain opaque. In spite of the widespread use of model-agnostic interpretability techniques, including gradient-based and occlusion-based, their shortcomings are becoming increasingly apparent for Transformer interpretation, making the field of interpretability more demanding today. In this tutorial, we will present Transformer-specific interpretability methods, a new trending approach, that make use of specific features of the Transformer architecture and are deemed more promising for understanding Transformer-based models. We start by discussing the potential pitfalls and misleading results model-agnostic approaches may produce when interpreting Transformers. Next, we discuss Transformer-specific methods, including those designed to quantify context- mixing interactions among all input pairs (as the fundamental property of the Transformer architecture) and those that combine causal methods with low-level Transformer analysis to identify particular subnetworks within a model that are responsible for specific tasks. By the end of the tutorial, we hope participants will understand the advantages (as well as current limitations) of Transformer-specific interpretability methods, along with how these can be applied to their own research.
| 2,024
|
https://aclanthology.org/2024.eacl-tutorials.4
|
EACL
|
[{'id': 56657817, 'paperId': '668f42a4d4094f0a66d402a16087e14269b31a1f', 'title': 'Analysis Methods in Neural Language Processing: A Survey', 'authors': [{'authorId': '2083259', 'name': 'Yonatan Belinkov'}, {'authorId': '145898106', 'name': 'James R. Glass'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'The field of natural language processing has seen impressive progress in recent years, with neural network models replacing many of the traditional systems. A plethora of new models have been proposed, many of which are thought to be opaque compared to their feature-rich counterparts. This has led researchers to analyze, interpret, and evaluate neural networks in novel and more fine-grained ways. In this survey paper, we review analysis methods in neural language processing, categorize them according to prominent research trends, highlight existing limitations, and point to potential directions for future work.', 'year': 2018, 'in_acl': True, 'citationCount': 513, 'section': None, 'subsection': None}, {'id': 236976388, 'paperId': 'eadb1e7da375939e25083ae3936c4f4ef1f2a719', 'title': 'Post-hoc Interpretability for Neural NLP: A Survey', 'authors': [{'authorId': '152446182', 'name': 'Andreas Madsen'}, {'authorId': '145732771', 'name': 'Siva Reddy'}, {'authorId': '144631588', 'name': 'A. Chandar'}], 'venue': 'ACM Computing Surveys', 'abstract': 'Neural networks for NLP are becoming increasingly complex and widespread, and there is a growing concern if these models are responsible to use. Explaining models helps to address the safety and ethical concerns and is essential for accountability. Interpretability serves to provide these explanations in terms that are understandable to humans. Additionally, post-hoc methods provide explanations after a model is learned and are generally model-agnostic. This survey provides a categorization of how recent post-hoc interpretability methods communicate explanations to humans, it discusses each method in-depth, and how they are validated, as the latter is often a common concern.', 'year': 2021, 'in_acl': False, 'citationCount': 191, 'section': None, 'subsection': None}, {'id': 251104722, 'paperId': '2c709ef6186bd607494a3344c903552ea500e449', 'title': 'Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks', 'authors': [{'authorId': '2179318557', 'name': 'Tilman Raukur'}, {'authorId': '120892153', 'name': 'A. Ho'}, {'authorId': '2103487700', 'name': 'Stephen Casper'}, {'authorId': '1397904824', 'name': 'Dylan Hadfield-Menell'}], 'venue': '2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)', 'abstract': 'The last decade of machine learning has seen drastic increases in scale and capabilities. Deep neural networks (DNNs) are increasingly being deployed in the real world. However, they are difficult to analyze, raising concerns about using them without a rigorous understanding of how they function. Effective tools for interpreting them will be important for building more trustworthy AI by helping to identify problems, fix bugs, and improve basic understanding. In particular, “inner” interpretability techniques, which focus on explaining the internal components of DNNs, are well-suited for developing a mechanistic understanding, guiding manual modifications, and reverse engineering solutions. Much recent work has focused on DNN interpretability, and rapid progress has thus far made a thorough systematization of methods difficult. In this survey, we review over 300 works with a focus on inner interpretability tools. We introduce a taxonomy that classifies methods by what part of the network they help to explain (weights, neurons, subnetworks, or latent representations) and whether they are implemented during (intrinsic) or after (post hoc) training. To our knowledge, we are also the first to survey a number of connections between interpretability research and work in adversarial robustness, continual learning, modularity, network compression, and studying the human visual system. We discuss key challenges and argue that the status quo in interpretability research is largely unproductive. Finally, we highlight the importance of future work that emphasizes diagnostics, debugging, adversaries, and benchmarking in order to make interpretability tools more useful to engineers in practical applications.', 'year': 2022, 'in_acl': False, 'citationCount': 107, 'section': None, 'subsection': None}, {'id': 252519203, 'paperId': '285d13bf3cbe6a8a0f164f584d84f8b74067271f', 'title': 'Towards Faithful Model Explanation in NLP: A Survey', 'authors': [{'authorId': '1904906987', 'name': 'Qing Lyu'}, {'authorId': '2817917', 'name': 'Marianna Apidianaki'}, {'authorId': '1763608', 'name': 'Chris Callison-Burch'}], 'venue': 'Computational Linguistics', 'abstract': 'Abstract End-to-end neural Natural Language Processing (NLP) models are notoriously difficult to understand. This has given rise to numerous efforts towards model explainability in recent years. One desideratum of model explanation is faithfulness, that is, an explanation should accurately represent the reasoning process behind the model’s prediction. In this survey, we review over 110 model explanation methods in NLP through the lens of faithfulness. We first discuss the definition and evaluation of faithfulness, as well as its significance for explainability. We then introduce recent advances in faithful explanation, grouping existing approaches into five categories: similarity-based methods, analysis of model-internal structures, backpropagation-based methods, counterfactual intervention, and self-explanatory models. For each category, we synthesize its representative studies, strengths, and weaknesses. Finally, we summarize their common virtues and remaining challenges, and reflect on future work directions towards faithful explainability in NLP.', 'year': 2022, 'in_acl': False, 'citationCount': 75, 'section': None, 'subsection': None}]
|
2024.eacl-tutorials.5
|
LLMs for Low Resource Languages in Multilingual, Multimodal and Dialectal Settings
|
The recent breakthroughs in Artificial Intelligence (AI) can be attributed to the remarkable performance of Large Language Models (LLMs) across a spectrum of research areas (e.g., machine translation, question-answering, automatic speech recognition, text-to-speech generation) and application domains (e.g., business, law, healthcare, education, and psychology). The success of these LLMs largely de- pends on specific training techniques, most notably instruction tuning, RLHF, and subsequent prompting to achieve the desired output. As the development of such LLMs continues to increase in both closed and open settings, evaluation has become crucial for understanding their generalization capabilities across different tasks, modalities, languages, and dialects. This evaluation process is tightly coupled with prompting, which plays a key role in obtain- ing better outputs. There has been attempts to evaluate such models focusing on diverse tasks, languages, and dialects, which suggests that the capabilities of LLMs are still limited to medium-to-low-resource languages due to the lack of representative datasets. The tutorial offers an overview of this emerging research area. We explore the capabilities of LLMs in terms of their performance, zero- and few-shot settings, fine-tuning, instructions tuning, and close vs. open models with a special emphasis on low-resource settings. In addition to LLMs for standard NLP tasks, we will focus on speech and multimodality.
| 2,024
|
https://aclanthology.org/2024.eacl-tutorials.5
|
EACL
|
[{'id': 257900969, 'paperId': '0b3904d0e229796aff0bda43bb386513353bc992', 'title': 'A Survey of Large Language Models', 'authors': [{'authorId': '2542603', 'name': 'Wayne Xin Zhao'}, {'authorId': '1423651904', 'name': 'Kun Zhou'}, {'authorId': '2018027', 'name': 'Junyi Li'}, {'authorId': '1997234792', 'name': 'Tianyi Tang'}, {'authorId': '72541556', 'name': 'Xiaolei Wang'}, {'authorId': '151472453', 'name': 'Yupeng Hou'}, {'authorId': '2007666579', 'name': 'Yingqian Min'}, {'authorId': '2107926615', 'name': 'Beichen Zhang'}, {'authorId': '2155570461', 'name': 'Junjie Zhang'}, {'authorId': '2198280871', 'name': 'Zican Dong'}, {'authorId': '2111895473', 'name': 'Yifan Du'}, {'authorId': '2181967397', 'name': 'Chen Yang'}, {'authorId': '2109315001', 'name': 'Yushuo Chen'}, {'authorId': '46842323', 'name': 'Z. Chen'}, {'authorId': '2118240359', 'name': 'Jinhao Jiang'}, {'authorId': '1708171825', 'name': 'Ruiyang Ren'}, {'authorId': '2209136299', 'name': 'Yifan Li'}, {'authorId': '2109887979', 'name': 'Xinyu Tang'}, {'authorId': '2119618242', 'name': 'Zikang Liu'}, {'authorId': '2108129670', 'name': 'Peiyu Liu'}, {'authorId': '50204644', 'name': 'J. Nie'}, {'authorId': '153693432', 'name': 'Ji-rong Wen'}], 'venue': 'arXiv.org', 'abstract': 'Language is essentially a complex, intricate system of human expressions governed by grammatical rules. It poses a significant challenge to develop capable AI algorithms for comprehending and grasping a language. As a major approach, language modeling has been widely studied for language understanding and generation in the past two decades, evolving from statistical language models to neural language models. Recently, pre-trained language models (PLMs) have been proposed by pre-training Transformer models over large-scale corpora, showing strong capabilities in solving various NLP tasks. Since researchers have found that model scaling can lead to performance improvement, they further study the scaling effect by increasing the model size to an even larger size. Interestingly, when the parameter scale exceeds a certain level, these enlarged language models not only achieve a significant performance improvement but also show some special abilities that are not present in small-scale language models. To discriminate the difference in parameter scale, the research community has coined the term large language models (LLM) for the PLMs of significant size. Recently, the research on LLMs has been largely advanced by both academia and industry, and a remarkable progress is the launch of ChatGPT, which has attracted widespread attention from society. The technical evolution of LLMs has been making an important impact on the entire AI community, which would revolutionize the way how we develop and use AI algorithms. In this survey, we review the recent advances of LLMs by introducing the background, key findings, and mainstream techniques. In particular, we focus on four major aspects of LLMs, namely pre-training, adaptation tuning, utilization, and capacity evaluation. Besides, we also summarize the available resources for developing LLMs and discuss the remaining issues for future directions.', 'year': 2023, 'in_acl': False, 'citationCount': 1803, 'section': 'an overview of LLMs', 'subsection': None}, {'id': 236493269, 'paperId': '28692beece311a90f5fa1ca2ec9d0c2ce293d069', 'title': 'Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing', 'authors': [{'authorId': '144118452', 'name': 'Pengfei Liu'}, {'authorId': '30300197', 'name': 'Weizhe Yuan'}, {'authorId': '41037252', 'name': 'Jinlan Fu'}, {'authorId': '2669515', 'name': 'Zhengbao Jiang'}, {'authorId': '50376014', 'name': 'Hiroaki Hayashi'}, {'authorId': '1700325', 'name': 'Graham Neubig'}], 'venue': 'ACM Computing Surveys', 'abstract': 'This article surveys and organizes research works in a new paradigm in natural language processing, which we dub “prompt-based learning.” Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly. To use these models to perform prediction tasks, the original input x is modified using a template into a textual string prompt x′ that has some unfilled slots, and then the language model is used to probabilistically fill the unfilled information to obtain a final string x̂, from which the final output y can be derived. This framework is powerful and attractive for a number of reasons: It allows the language model to be pre-trained on massive amounts of raw text, and by defining a new prompting function the model is able to perform few-shot or even zero-shot learning, adapting to new scenarios with few or no labeled data. In this article, we introduce the basics of this promising paradigm, describe a unified set of mathematical notations that can cover a wide variety of existing work, and organize existing work along several dimensions, e.g., the choice of pre-trained language models, prompts, and tuning strategies. To make the field more accessible to interested beginners, we not only make a systematic review of existing works and a highly structured typology of prompt-based concepts but also release other resources, e.g., a website NLPedia–Pretrain including constantly updated survey and paperlist.', 'year': 2021, 'in_acl': False, 'citationCount': 3190, 'section': 'prompt engineering', 'subsection': None}, {'id': 260357841, 'paperId': '06d8562831c32844285a691c5250d04726df3c61', 'title': 'A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models', 'authors': [{'authorId': '52203056', 'name': 'Jindong Gu'}, {'authorId': '2223193538', 'name': 'Zhen Han'}, {'authorId': '2116572341', 'name': 'Shuo Chen'}, {'authorId': '1791052', 'name': 'Ahmad Beirami'}, {'authorId': '2147293727', 'name': 'Bailan He'}, {'authorId': '2143853643', 'name': 'Gengyuan Zhang'}, {'authorId': '2072387342', 'name': 'Ruotong Liao'}, {'authorId': '2219078907', 'name': 'Yao Qin'}, {'authorId': '1742501819', 'name': 'Volker Tresp'}, {'authorId': '143635540', 'name': 'Philip H. S. Torr'}], 'venue': 'arXiv.org', 'abstract': '—Prompt engineering is a technique that involves augmenting a large pre-trained model with task-specific hints, known as prompts, to adapt the model to new tasks. Prompts can be created manually as natural language instructions or generated automatically as either natural language instructions or vector representations. Prompt engineering enables the ability to perform predictions based solely on prompts without updating model parameters, and the easier application of large pre-trained models in real-world tasks. In past years, Prompt engineering has been well-studied in natural language processing. Recently, it has also been intensively studied in vision-language modeling. However, there is currently a lack of a systematic overview of prompt engineering on pre-trained vision-language models. This paper aims to provide a comprehensive survey of cutting-edge research in prompt engineering on three types of vision-language models: multimodal-to-text generation models ( e.g., Flamingo), image-text matching models ( e.g., CLIP), and text-to-image generation models ( e.g., Stable Diffusion). For each type of model, a brief model summary, prompting methods, prompting-based applications, and the corresponding responsibility and integrity issues are summarized and discussed. Furthermore, the commonalities and differences between prompting on vision-language models, language models, and vision models are also discussed. The challenges, future directions, and research opportunities are summarized to foster future research on this topic.', 'year': 2023, 'in_acl': False, 'citationCount': 100, 'section': 'prompt engineering', 'subsection': None}, {'id': 263886074, 'paperId': '8aa98fbfb6f1e979dead13ce24075503fe47658e', 'title': 'A Survey for In-context Learning', 'authors': [{'authorId': '2047143813', 'name': 'Qingxiu Dong'}, {'authorId': '49192881', 'name': 'Lei Li'}, {'authorId': '10780897', 'name': 'Damai Dai'}, {'authorId': '2113919886', 'name': 'Ce Zheng'}, {'authorId': '2267004431', 'name': 'Zhiyong Wu'}, {'authorId': '7267809', 'name': 'Baobao Chang'}, {'authorId': '2116530295', 'name': 'Xu Sun'}, {'authorId': '2257464374', 'name': 'Jingjing Xu'}, {'authorId': '2257344719', 'name': 'Lei Li'}, {'authorId': '3335836', 'name': 'Zhifang Sui'}], 'venue': 'arXiv.org', 'abstract': 'With the increasing ability of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), where LLMs make predictions only based on contexts augmented with a few training examples. It has been a new trend exploring ICL to evaluate and extrapolate the ability of LLMs. In this paper, we aim to survey and summarize the progress, challenges, and future work in ICL. We first present a formal definition of ICL and clarify its correlation to related studies. Then, we organize and discuss advanced techniques of ICL, including training strategies, prompting strategies, and so on. Finally, we present the challenges of ICL and provide potential directions for further research. We hope our work can encourage more research on uncovering how ICL works and improving ICL in future work. 1', 'year': 2023, 'in_acl': False, 'citationCount': 332, 'section': 'in-context learning', 'subsection': None}, {'id': 253553585, 'paperId': 'ce913026f693101e54d3ab9152e107034d81fce1', 'title': 'Holistic Evaluation of Language Models', 'authors': [{'authorId': '145419642', 'name': 'Percy Liang'}, {'authorId': '150272855', 'name': 'Rishi Bommasani'}, {'authorId': '2110585783', 'name': 'Tony Lee'}, {'authorId': '2754804', 'name': 'Dimitris Tsipras'}, {'authorId': '1914569491', 'name': 'Dilara Soylu'}, {'authorId': '19168196', 'name': 'Michihiro Yasunaga'}, {'authorId': '9227100', 'name': 'Yian Zhang'}, {'authorId': '22252150', 'name': 'D. Narayanan'}, {'authorId': '3374063', 'name': 'Yuhuai Wu'}, {'authorId': '32423266', 'name': 'Ananya Kumar'}, {'authorId': '51149693', 'name': 'Benjamin Newman'}, {'authorId': '2833699', 'name': 'Binhang Yuan'}, {'authorId': '1748871792', 'name': 'Bobby Yan'}, {'authorId': '2146064162', 'name': 'Ce Zhang'}, {'authorId': '133749287', 'name': 'Christian Cosgrove'}, {'authorId': '144783904', 'name': 'Christopher D. Manning'}, {'authorId': '2061444681', 'name': "Christopher R'e"}, {'authorId': '1413421064', 'name': 'Diana Acosta-Navas'}, {'authorId': '152951058', 'name': 'Drew A. Hudson'}, {'authorId': '49456763', 'name': 'E. Zelikman'}, {'authorId': '41152329', 'name': 'Esin Durmus'}, {'authorId': '8759332', 'name': 'Faisal Ladhak'}, {'authorId': '2047004093', 'name': 'Frieda Rong'}, {'authorId': '40046694', 'name': 'Hongyu Ren'}, {'authorId': '18307037', 'name': 'Huaxiu Yao'}, {'authorId': '39597242', 'name': 'Jue Wang'}, {'authorId': '50818255', 'name': 'Keshav Santhanam'}, {'authorId': '4773175', 'name': 'Laurel J. Orr'}, {'authorId': '2118604716', 'name': 'Lucia Zheng'}, {'authorId': '2186981598', 'name': 'Mert Yuksekgonul'}, {'authorId': '51903517', 'name': 'Mirac Suzgun'}, {'authorId': '2182172863', 'name': 'Nathan S. Kim'}, {'authorId': '2820009', 'name': 'Neel Guha'}, {'authorId': '22193324', 'name': 'Niladri S. Chatterji'}, {'authorId': '144112155', 'name': 'O. Khattab'}, {'authorId': '2071773966', 'name': 'Peter Henderson'}, {'authorId': '144862341', 'name': 'Qian Huang'}, {'authorId': '2121293578', 'name': 'Ryan Chi'}, {'authorId': '46215055', 'name': 'Sang Michael Xie'}, {'authorId': '2852106', 'name': 'Shibani Santurkar'}, {'authorId': '25769960', 'name': 'S. Ganguli'}, {'authorId': '2117567142', 'name': 'Tatsunori Hashimoto'}, {'authorId': '8938047', 'name': 'Thomas F. Icard'}, {'authorId': '123437034', 'name': 'Tianyi Zhang'}, {'authorId': '113810201', 'name': 'Vishrav Chaudhary'}, {'authorId': '2127971344', 'name': 'William Wang'}, {'authorId': '2145429039', 'name': 'Xuechen Li'}, {'authorId': '2054708905', 'name': 'Yifan Mai'}, {'authorId': '49889860', 'name': 'Yuhui Zhang'}, {'authorId': '2740047', 'name': 'Yuta Koreeda'}], 'venue': 'Trans. Mach. Learn. Res.', 'abstract': 'Language models (LMs) like GPT‐3, PaLM, and ChatGPT are the foundation for almost all major language technologies, but their capabilities, limitations, and risks are not well understood. We present Holistic Evaluation of Language Models (HELM) to improve the transparency of LMs. LMs can serve many purposes and their behavior should satisfy many desiderata. To navigate the vast space of potential scenarios and metrics, we taxonomize the space and select representative subsets. We evaluate models on 16 core scenarios and 7 metrics, exposing important trade‐offs. We supplement our core evaluation with seven targeted evaluations to deeply analyze specific aspects (including world knowledge, reasoning, regurgitation of copyrighted content, and generation of disinformation). We benchmark 30 LMs, from OpenAI, Microsoft, Google, Meta, Cohere, AI21 Labs, and others. Prior to HELM, models were evaluated on just 17.9% of the core HELM scenarios, with some prominent models not sharing a single scenario in common. We improve this to 96.0%: all 30 models are now benchmarked under the same standardized conditions. Our evaluation surfaces 25 top‐level findings. For full transparency, we release all raw model prompts and completions publicly. HELM is a living benchmark for the community, continuously updated with new scenarios, metrics, and models https://crfm.stanford.edu/helm/latest/.', 'year': 2023, 'in_acl': False, 'citationCount': 736, 'section': 'evaluation of LLMs', 'subsection': None}]
|
2024.lrec-tutorials.2
|
Geo-Cultural Representation and Inclusion in Language Technologies
|
Training and evaluation of language models are increasingly relying on semi-structured data that is annotated by humans, along with techniques such as RLHF growing in usage across the board. As a result, both the data and the human perspectives involved in this process play a key role in what is taken as ground truth by our models. As annotation tasks are becoming increasingly more subjective and culturally complex, it is unclear how much of their socio-cultural identity annotators use to respond to tasks. We also currently do not have ways to integrate rich and diverse community perspectives into our language technologies. Accounting for such cross-cultural differences in interacting with technology is an increasingly crucial step for evaluating AI harms holistically. Without this, the state of the art of the AI models being deployed is at risk of causing unprecedented biases at a global scale. In this tutorial, we will take an interactive approach by utilizing some different types of annotation tasks to investigate together how our different socio-cultural perspectives and lived experiences influence what we consider as appropriate representations of global concepts.
| 2,024
|
https://aclanthology.org/2024.lrec-tutorials.2
|
LREC
|
[{'id': 245005939, 'paperId': '6159a9048cf3efb9bcee231b175932d07be33e37', 'title': 'Whose Ground Truth? Accounting for Individual and Collective Identities Underlying Dataset Annotation', 'authors': [{'authorId': '40081727', 'name': 'Emily L. Denton'}, {'authorId': '2146515892', 'name': "M. D'iaz"}, {'authorId': '24643287', 'name': 'I. Kivlichan'}, {'authorId': '3331141', 'name': 'Vinodkumar Prabhakaran'}, {'authorId': '2144002544', 'name': 'Rachel Rosen'}], 'venue': 'arXiv.org', 'abstract': "Human annotations play a crucial role in machine learning (ML) research and development. However, the ethical considerations around the processes and decisions that go into building ML datasets has not received nearly enough attention. In this paper, we survey an array of literature that provides insights into ethical considerations around crowdsourced dataset annotation. We synthesize these insights, and lay out the challenges in this space along two layers: (1) who the annotator is, and how the annotators' lived experiences can impact their annotations, and (2) the relationship between the annotators and the crowdsourcing platforms and what that relationship affords them. Finally, we put forth a concrete set of recommendations and considerations for dataset developers at various stages of the ML data pipeline: task formulation, selection of annotators, platform and infrastructure choices, dataset analysis and evaluation, and dataset documentation and release.", 'year': 2021, 'in_acl': False, 'citationCount': 58, 'section': None, 'subsection': None}, {'id': 258823008, 'paperId': 'f4f154892800008894ebbf57add31fcaac4f27ca', 'title': 'SeeGULL: A Stereotype Benchmark with Broad Geo-Cultural Coverage Leveraging Generative Models', 'authors': [{'authorId': '36701727', 'name': 'Akshita Jha'}, {'authorId': '2132006618', 'name': 'A. Davani'}, {'authorId': '144417522', 'name': 'Chandan K. Reddy'}, {'authorId': '2160404', 'name': 'Shachi Dave'}, {'authorId': '3331141', 'name': 'Vinodkumar Prabhakaran'}, {'authorId': '50991767', 'name': 'Sunipa Dev'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Stereotype benchmark datasets are crucial to detect and mitigate social stereotypes about groups of people in NLP models. However, existing datasets are limited in size and coverage, and are largely restricted to stereotypes prevalent in the Western society. This is especially problematic as language technologies gain hold across the globe. To address this gap, we present SeeGULL, a broad-coverage stereotype dataset, built by utilizing generative capabilities of large language models such as PaLM, and GPT-3, and leveraging a globally diverse rater pool to validate the prevalence of those stereotypes in society. SeeGULL is in English, and contains stereotypes about identity groups spanning 178 countries across 8 different geo-political regions across 6 continents, as well as state-level identities within the US and India. We also include fine-grained offensiveness scores for different stereotypes and demonstrate their global disparities. Furthermore, we include comparative annotations about the same groups by annotators living in the region vs. those that are based in North America, and demonstrate that within-region stereotypes about groups differ from those prevalent in North America.', 'year': 2023, 'in_acl': True, 'citationCount': 27, 'section': None, 'subsection': None}, {'id': 247748753, 'paperId': '8a3d1ce6dd9dc9766a42e645a23de4b5f2b447ec', 'title': 'Probing Pre-Trained Language Models for Cross-Cultural Differences in Values', 'authors': [{'authorId': '1943255906', 'name': 'Arnav Arora'}, {'authorId': '23319388', 'name': 'Lucie-Aimée Kaffee'}, {'authorId': '1736067', 'name': 'Isabelle Augenstein'}], 'venue': 'C3NLP', 'abstract': 'Language embeds information about social, cultural, and political values people hold. Prior work has explored potentially harmful social biases encoded in Pre-trained Language Models (PLMs). However, there has been no systematic study investigating how values embedded in these models vary across cultures.In this paper, we introduce probes to study which cross-cultural values are embedded in these models, and whether they align with existing theories and cross-cultural values surveys. We find that PLMs capture differences in values across cultures, but those only weakly align with established values surveys. We discuss implications of using mis-aligned models in cross-cultural settings, as well as ways of aligning PLMs with values surveys.', 'year': 2022, 'in_acl': True, 'citationCount': 99, 'section': None, 'subsection': None}, {'id': 253802027, 'paperId': '6615e9d1641e5b8141efa8e947a90d12b3158075', 'title': 'Cultural Incongruencies in Artificial Intelligence', 'authors': [{'authorId': '3331141', 'name': 'Vinodkumar Prabhakaran'}, {'authorId': '101513478', 'name': 'Rida Qadri'}, {'authorId': '2044655623', 'name': 'Ben Hutchinson'}], 'venue': 'arXiv.org', 'abstract': 'Artificial intelligence (AI) systems attempt to imitate human behavior. How well they do this imitation is often used to assess their utility and to attribute human-like (or artificial) intelligence to them. However, most work on AI refers to and relies on human intelligence without accounting for the fact that human behavior is inherently shaped by the cultural contexts they are embedded in, the values and beliefs they hold, and the social practices they follow. Additionally, since AI technologies are mostly conceived and developed in just a handful of countries, they embed the cultural values and practices of these countries. Similarly, the data that is used to train the models also fails to equitably represent global cultural diversity. Problems therefore arise when these technologies interact with globally diverse societies and cultures, with different values and interpretive practices. In this position paper, we describe a set of cultural dependencies and incongruencies in the context of AI-based language and vision technologies, and reflect on the possibilities of and potential strategies towards addressing these incongruencies.', 'year': 2022, 'in_acl': False, 'citationCount': 15, 'section': None, 'subsection': None}, {'id': 257833897, 'paperId': 'ca94c924d8a3b77a2bd5b16ffc03b8723bce9c1f', 'title': 'Assessing Cross-Cultural Alignment between ChatGPT and Human Societies: An Empirical Study', 'authors': [{'authorId': '2112402733', 'name': 'Yong Cao'}, {'authorId': '2116635928', 'name': 'Li Zhou'}, {'authorId': '2132579230', 'name': 'Seolhwa Lee'}, {'authorId': '2092471782', 'name': 'Laura Cabello'}, {'authorId': '2108557855', 'name': 'Min Chen'}, {'authorId': '2064295987', 'name': 'Daniel Hershcovich'}], 'venue': 'C3NLP', 'abstract': 'The recent release of ChatGPT has garnered widespread recognition for its exceptional ability to generate human-like conversations. Given its usage by users from various nations and its training on a vast multilingual corpus that includes diverse cultural and societal norms, it is crucial to evaluate its effectiveness in cultural adaptation. In this paper, we investigate the underlying cultural background of ChatGPT by analyzing its responses to questions designed to quantify human cultural differences. Our findings suggest that, when prompted with American context, ChatGPT exhibits a strong alignment with American culture, but it adapts less effectively to other cultural contexts. Furthermore, by using different prompts to probe the model, we show that English prompts reduce the variance in model responses, flattening out cultural differences and biasing them towards American culture. This study provides valuable insights into the cultural implications of ChatGPT and highlights the necessity of greater diversity and cultural awareness in language technologies.', 'year': 2023, 'in_acl': True, 'citationCount': 121, 'section': None, 'subsection': None}]
|
2024.lrec-tutorials.3
|
Meaning Representations for Natural Languages: Design, Models and Applications
|
This tutorial reviews the design of common meaning representations, SoTA models for predicting meaning representations, and the applications of meaning representations in a wide range of downstream NLP tasks and real-world applications. Reporting by a diverse team of NLP researchers from academia and industry with extensive experience in designing, building and using meaning representations, our tutorial has three components: (1) an introduction to common meaning representations, including basic concepts and design challenges; (2) a review of SoTA methods on building models for meaning representations; and (3) an overview of applications of meaning representations in downstream NLP tasks and real-world applications. We propose a cutting-edge, full-day tutorial for all stakeholders in the AI community, including NLP researchers, domain-specific practitioners, and students
| 2,024
|
https://aclanthology.org/2024.lrec-tutorials.3
|
LREC
|
[{'id': 7771402, 'paperId': 'e72e5ee5de14fd463ab58ce830474157258e3578', 'title': 'Abstract Meaning Representation for Sembanking', 'authors': [{'authorId': '3460261', 'name': 'L. Banarescu'}, {'authorId': '3202888', 'name': 'C. Bonial'}, {'authorId': '2112618394', 'name': 'Shu Cai'}, {'authorId': '2065872210', 'name': 'Madalina Georgescu'}, {'authorId': '3168985', 'name': 'Kira Griffitt'}, {'authorId': '1791311', 'name': 'U. Hermjakob'}, {'authorId': '152971314', 'name': 'Kevin Knight'}, {'authorId': '1755162', 'name': 'Philipp Koehn'}, {'authorId': '145755155', 'name': 'Martha Palmer'}, {'authorId': '145254207', 'name': 'Nathan Schneider'}], 'venue': 'LAW@ACL', 'abstract': 'We describe Abstract Meaning Representation (AMR), a semantic representation language in which we are writing down the meanings of thousands of English sentences. We hope that a sembank of simple, whole-sentence semantic structures will spur new work in statistical natural language understanding and generation, like the Penn Treebank encouraged work on statistical parsing. This paper gives an overview of AMR and tools associated with it.', 'year': 2013, 'in_acl': True, 'citationCount': 1379, 'section': None, 'subsection': None}, {'id': 250390642, 'paperId': 'e7ac9ea2ae42a7a9125da69de1ff295efebfaff5', 'title': 'Label Definitions Improve Semantic Role Labeling', 'authors': [{'authorId': '72436283', 'name': 'Li Zhang'}, {'authorId': '144377686', 'name': 'Ishan Jindal'}, {'authorId': '1718694', 'name': 'Yunyao Li'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Argument classification is at the core of Semantic Role Labeling. Given a sentence and the predicate, a semantic role label is assigned to each argument of the predicate. While semantic roles come with meaningful definitions, existing work has treated them as symbolic. Learning symbolic labels usually requires ample training data, which is frequently unavailable due to the cost of annotation. We instead propose to retrieve and leverage the definitions of these labels from the annotation guidelines. For example, the verb predicate “work” has arguments defined as “worker”, “job”, “employer”, etc. Our model achieves state-of-the-art performance on the CoNLL09 dataset injected with label definitions given the predicate senses. The performance improvement is even more pronounced in low-resource settings when training data is scarce.', 'year': 2022, 'in_acl': True, 'citationCount': 4, 'section': None, 'subsection': None}, {'id': 252370416, 'paperId': '83e3b3b19bbe92a85265bcee6634738206662f03', 'title': 'Universal Proposition Bank 2.0', 'authors': [{'authorId': '144377686', 'name': 'Ishan Jindal'}, {'authorId': '13836343', 'name': 'Alexandre Rademaker'}, {'authorId': '2185433526', 'name': 'Michał Ulewicz'}, {'authorId': '2180951336', 'name': 'Linh H. Ha'}, {'authorId': '145199659', 'name': 'Huyen Nguyen'}, {'authorId': '1399212475', 'name': 'Khoi-Nguyen Tran'}, {'authorId': '2115718238', 'name': 'Huaiyu Zhu'}, {'authorId': '1718694', 'name': 'Yunyao Li'}], 'venue': 'International Conference on Language Resources and Evaluation', 'abstract': 'Semantic role labeling (SRL) represents the meaning of a sentence in the form of predicate-argument structures. Such shallow semantic analysis is helpful in a wide range of downstream NLP tasks and real-world applications. As treebanks enabled the development of powerful syntactic parsers, the accurate predicate-argument analysis demands training data in the form of propbanks. Unfortunately, most languages simply do not have corresponding propbanks due to the high cost required to construct such resources. To overcome such challenges, Universal Proposition Bank 1.0 (UP1.0) was released in 2017, with high-quality propbank data generated via a two-stage method exploiting monolingual SRL and multilingual parallel data. In this paper, we introduce Universal Proposition Bank 2.0 (UP2.0), with significant enhancements over UP1.0: (1) propbanks with higher quality by using a state-of-the-art monolingual SRL and improved auto-generation of annotations; (2) expanded language coverage (from 7 to 9 languages); (3) span annotation for the decoupling of syntactic analysis; and (4) Gold data for a subset of the languages. We also share our experimental results that confirm the significant quality improvements of the generated propbanks. In addition, we present a comprehensive experimental evaluation on how different implementation choices impact the quality of the resulting data. We release these resources to the research community and hope to encourage more research on cross-lingual SRL.', 'year': 2022, 'in_acl': True, 'citationCount': 10, 'section': None, 'subsection': None}, {'id': 235563506, 'paperId': 'b1c1bfe5f7a5696909c0ee7de7fbb4092a04c907', 'title': 'Designing a Uniform Meaning Representation for Natural Language Processing', 'authors': [{'authorId': '116305713', 'name': 'J. V. Gysel'}, {'authorId': '51882643', 'name': 'Meagan Vigus'}, {'authorId': '41124366', 'name': 'Jayeol Chun'}, {'authorId': '2715566', 'name': 'Kenneth Lai'}, {'authorId': '51500425', 'name': 'Sarah Moeller'}, {'authorId': '40040342', 'name': 'Jiarui Yao'}, {'authorId': '1388957618', 'name': "Timothy J. O'Gorman"}, {'authorId': '2070241255', 'name': 'Andrew Cowell'}, {'authorId': '144456145', 'name': 'W. Bruce Croft'}, {'authorId': '1405994600', 'name': 'Chu-Ren Huang'}, {'authorId': '144002335', 'name': 'Jan Hajic'}, {'authorId': '1740728360', 'name': 'James H. Martin'}, {'authorId': '2949607', 'name': 'S. Oepen'}, {'authorId': '145755155', 'name': 'Martha Palmer'}, {'authorId': '1707726', 'name': 'J. Pustejovsky'}, {'authorId': '143886116', 'name': 'Rosa Vallejos'}, {'authorId': '1702849', 'name': 'Nianwen Xue'}], 'venue': 'KI - Künstliche Intelligenz', 'abstract': 'In this paper we present Uniform Meaning Representation (UMR), a meaning representation designed to annotate the semantic content of a text. UMR is primarily based on Abstract Meaning Representation (AMR), an annotation framework initially designed for English, but also draws from other meaning representations. UMR extends AMR to other languages, particularly morphologically complex, low-resource languages. UMR also adds features to AMR that are critical to semantic interpretation and enhances AMR by proposing a companion document-level representation that captures linguistic phenomena such as coreference as well as temporal and modal dependencies that potentially go beyond sentence boundaries.', 'year': 2021, 'in_acl': False, 'citationCount': 66, 'section': None, 'subsection': None}, {'id': 5001921, 'paperId': '1c37654db8b6a86795b9c83d214d994fe46f6a37', 'title': 'Toward Abstractive Summarization Using Semantic Representations', 'authors': [{'authorId': '144544919', 'name': 'Fei Liu'}, {'authorId': '144683841', 'name': 'Jeffrey Flanigan'}, {'authorId': '38094552', 'name': 'Sam Thomson'}, {'authorId': '2464164', 'name': 'N. Sadeh'}, {'authorId': '144365875', 'name': 'Noah A. Smith'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'We present a novel abstractive summarization framework that draws on the recent development of a treebank for the Abstract Meaning Representation (AMR). In this framework, the source text is parsed to a set of AMR graphs, the graphs are transformed into a summary graph, and then text is generated from the summary graph. We focus on the graph-tograph transformation that reduces the source semantic graph into a summary graph, making use of an existing AMR parser and assuming the eventual availability of an AMR-totext generator. The framework is data-driven, trainable, and not specifically designed for a particular domain. Experiments on goldstandard AMR annotations and system parses show promising results. Code is available at: https://github.com/summarization', 'year': 2018, 'in_acl': True, 'citationCount': 291, 'section': None, 'subsection': None}, {'id': 9135033, 'paperId': '0729515f62042d1274c131360c33a121df71c856', 'title': 'Generation from Abstract Meaning Representation using Tree Transducers', 'authors': [{'authorId': '144683841', 'name': 'Jeffrey Flanigan'}, {'authorId': '1745899', 'name': 'Chris Dyer'}, {'authorId': '144365875', 'name': 'Noah A. Smith'}, {'authorId': '143712374', 'name': 'J. Carbonell'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Language generation from purely semantic representations is a challenging task. This paper addresses generating English from the Ab-stract Meaning Representation (AMR), consisting of re-entrant graphs whose nodes are concepts and edges are relations. The new method is trained statistically from AMR-annotated English and consists of two major steps: (i) generating an appropriate spanning tree for the AMR, and (ii) applying tree-to-string transducers to generate English. The method relies on discriminative learning and an argument realization model to overcome data sparsity. Initial tests on held-out data show good promise despite the complexity of the task. The system is available open-source as part of JAMR at:', 'year': 2016, 'in_acl': True, 'citationCount': 102, 'section': None, 'subsection': None}, {'id': 15344879, 'paperId': '91830ca68422d6b42446631eeef696c84e602e87', 'title': 'A Transition-based Algorithm for AMR Parsing', 'authors': [{'authorId': '47074942', 'name': 'Chuan Wang'}, {'authorId': '1702849', 'name': 'Nianwen Xue'}, {'authorId': '1735131', 'name': 'Sameer Pradhan'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'We present a two-stage framework to parse a sentence into its Abstract Meaning Representation (AMR). We first use a dependency parser to generate a dependency tree for the sentence. In the second stage, we design a novel transition-based algorithm that transforms the dependency tree to an AMR graph. There are several advantages with this approach. First, the dependency parser can be trained on a training set much larger than the training set for the tree-to-graph algorithm, resulting in a more accurate AMR parser overall. Our parser yields an improvement of 5% absolute in F-measure over the best previous result. Second, the actions that we design are linguistically intuitive and capture the regularities in the mapping between the dependency structure and the AMR of a sentence. Third, our parser runs in nearly linear time in practice in spite of a worst-case complexity ofO(n 2 ).', 'year': 2015, 'in_acl': True, 'citationCount': 170, 'section': None, 'subsection': None}, {'id': 5000956, 'paperId': '33a9d1a702eb75da709d26c44aaeb7c2015c870b', 'title': 'A Discriminative Graph-Based Parser for the Abstract Meaning Representation', 'authors': [{'authorId': '144683841', 'name': 'Jeffrey Flanigan'}, {'authorId': '38094552', 'name': 'Sam Thomson'}, {'authorId': '143712374', 'name': 'J. Carbonell'}, {'authorId': '1745899', 'name': 'Chris Dyer'}, {'authorId': '144365875', 'name': 'Noah A. Smith'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Abstract Meaning Representation (AMR) is a semantic formalism for which a grow- ing set of annotated examples is avail- able. We introduce the first approach to parse sentences into this representa- tion, providing a strong baseline for fu- ture improvement. The method is based on a novel algorithm for finding a maxi- mum spanning, connected subgraph, em- bedded within a Lagrangian relaxation of an optimization problem that imposes lin- guistically inspired constraints. Our ap- proach is described in the general frame- work of structured prediction, allowing fu- ture incorporation of additional features and constraints, and may extend to other formalisms as well. Our open-source sys- tem, JAMR, is available at: http://github.com/jflanigan/jamr', 'year': 2014, 'in_acl': True, 'citationCount': 321, 'section': None, 'subsection': None}]
|
2024.lrec-tutorials.4
|
Navigating the Modern Evaluation Landscape: Considerations in Benchmarks and Frameworks for Large Language Models (LLMs)
|
General-Purpose Language Models have changed the world of Natural Language Processing, if not the world itself. The evaluation of such versatile models, while supposedly similar to evaluation of generation models before them, in fact presents a host of new evaluation challenges and opportunities. In this Tutorial, we will start from the building blocks of evaluation. The tutorial welcomes people from diverse backgrounds and assumes little familiarity with metrics, datasets, prompts and benchmarks. It will lay the foundations and explain the basics and their importance, while touching on the major points and breakthroughs of the recent era of evaluation. It will also compare traditional evaluation methods – which are still widely used – to newly developed methods. We will contrast new to old approaches, from evaluating on many-task benchmarks rather than on dedicated datasets to efficiency constraints, and from testing stability and prompts on in-context learning to using the models themselves as evaluation metrics. Finally, the tutorial will cover practical issues, ranging from reviewing widely-used benchmarks and prompt banks to efficient evaluation.
| 2,024
|
https://aclanthology.org/2024.lrec-tutorials.4
|
LREC
|
[{'id': 5000956, 'paperId': '33a9d1a702eb75da709d26c44aaeb7c2015c870b', 'title': 'A Discriminative Graph-Based Parser for the Abstract Meaning Representation', 'authors': [{'authorId': '144683841', 'name': 'Jeffrey Flanigan'}, {'authorId': '38094552', 'name': 'Sam Thomson'}, {'authorId': '143712374', 'name': 'J. Carbonell'}, {'authorId': '1745899', 'name': 'Chris Dyer'}, {'authorId': '144365875', 'name': 'Noah A. Smith'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Abstract Meaning Representation (AMR) is a semantic formalism for which a grow- ing set of annotated examples is avail- able. We introduce the first approach to parse sentences into this representa- tion, providing a strong baseline for fu- ture improvement. The method is based on a novel algorithm for finding a maxi- mum spanning, connected subgraph, em- bedded within a Lagrangian relaxation of an optimization problem that imposes lin- guistically inspired constraints. Our ap- proach is described in the general frame- work of structured prediction, allowing fu- ture incorporation of additional features and constraints, and may extend to other formalisms as well. Our open-source sys- tem, JAMR, is available at: http://github.com/jflanigan/jamr', 'year': 2014, 'in_acl': True, 'citationCount': 321, 'section': 'Surveys on evaluation of LLMs', 'subsection': None}, {'id': 260899983, 'paperId': '451a657dabf80ebc43f6a3be518250b2cd5dfe1a', 'title': 'Through the Lens of Core Competency: Survey on Evaluation of Large Language Models', 'authors': [{'authorId': '26250168', 'name': 'Ziyu Zhuang'}, {'authorId': '2133447633', 'name': 'Qiguang Chen'}, {'authorId': '153132928', 'name': 'Longxuan Ma'}, {'authorId': '2112109549', 'name': 'Mingda Li'}, {'authorId': '2230014897', 'name': 'Yi Han'}, {'authorId': '2229053371', 'name': 'Yushan Qian'}, {'authorId': '2228758690', 'name': 'Haopeng Bai'}, {'authorId': '2048146115', 'name': 'Zixian Feng'}, {'authorId': '1806419', 'name': 'Weinan Zhang'}, {'authorId': '2140034831', 'name': 'Ting Liu'}], 'venue': 'China National Conference on Chinese Computational Linguistics', 'abstract': '“From pre-trained language model (PLM) to large language model (LLM), the field of naturallanguage processing (NLP) has witnessed steep performance gains and wide practical uses. Theevaluation of a research field guides its direction of improvement. However, LLMs are extremelyhard to thoroughly evaluate for two reasons. First of all, traditional NLP tasks become inade-quate due to the excellent performance of LLM. Secondly, existing evaluation tasks are difficultto keep up with the wide range of applications in real-world scenarios. To tackle these problems,existing works proposed various benchmarks to better evaluate LLMs. To clarify the numerousevaluation tasks in both academia and industry, we investigate multiple papers concerning LLMevaluations. We summarize 4 core competencies of LLM, including reasoning, knowledge, relia-bility, and safety. For every competency, we introduce its definition, corresponding benchmarks,and metrics. Under this competency architecture, similar tasks are combined to reflect corre-sponding ability, while new tasks can also be easily added into the system. Finally, we give oursuggestions on the future direction of LLM’s evaluation.”', 'year': 2023, 'in_acl': True, 'citationCount': 7, 'section': 'Surveys on evaluation of LLMs', 'subsection': None}, {'id': 246822399, 'paperId': 'e4e9d556e9725a5fdb2e133b61243ff7c1ca8aeb', 'title': 'Repairing the Cracked Foundation: A Survey of Obstacles in Evaluation Practices for Generated Text', 'authors': [{'authorId': '3159346', 'name': 'Sebastian Gehrmann'}, {'authorId': '40684993', 'name': 'Elizabeth Clark'}, {'authorId': '145450400', 'name': 'Thibault Sellam'}], 'venue': 'Journal of Artificial Intelligence Research', 'abstract': 'Evaluation practices in natural language generation (NLG) have many known flaws, but improved evaluation approaches are rarely widely adopted. This issue has become more urgent, since neural generation models have improved to the point where their outputs can often no longer be distinguished based on the surface-level features that older metrics rely on. This paper surveys the issues with human and automatic model evaluations and with commonly used datasets in NLG that have been pointed out over the past 20 years. We summarize, categorize, and discuss how researchers have been addressing these issues and what their findings mean for the current state of model evaluations. Building on those insights, we lay out a long-term vision for evaluation research and propose concrete steps for researchers to improve their evaluation processes. Finally, we analyze 66 generation papers from recent NLP conferences in how well they already follow these suggestions and identify which areas require more drastic changes to the status quo.', 'year': 2022, 'in_acl': False, 'citationCount': 160, 'section': 'Surveys on evaluation of LLMs', 'subsection': None}, {'id': 240420063, 'paperId': 'c23d9d44e8bc68408cea9f305d1f24d915bc0d0d', 'title': 'Recent Advances in Natural Language Processing via Large Pre-trained Language Models: A Survey', 'authors': [{'authorId': '1875233', 'name': 'Bonan Min'}, {'authorId': '2136457937', 'name': 'Hayley Ross'}, {'authorId': '46185356', 'name': 'Elior Sulem'}, {'authorId': '3460489', 'name': 'Amir Pouran Ben Veyseh'}, {'authorId': '1811211', 'name': 'Thien Huu Nguyen'}, {'authorId': '1724648481', 'name': 'Oscar Sainz'}, {'authorId': '1733049', 'name': 'Eneko Agirre'}, {'authorId': '2136480655', 'name': 'Ilana Heinz'}, {'authorId': '144590225', 'name': 'D. Roth'}], 'venue': 'ACM Computing Surveys', 'abstract': 'Large, pre-trained language models (PLMs) such as BERT and GPT have drastically changed the Natural Language Processing (NLP) field. For numerous NLP tasks, approaches leveraging PLMs have achieved state-of-the-art performance. The key idea is to learn a generic, latent representation of language from a generic task once, then share it across disparate NLP tasks. Language modeling serves as the generic task, one with abundant self-supervised text available for extensive training. This article presents the key fundamental concepts of PLM architectures and a comprehensive view of the shift to PLM-driven NLP techniques. It surveys work applying the pre-training then fine-tuning, prompting, and text generation approaches. In addition, it discusses PLM limitations and suggested directions for future research.', 'year': 2021, 'in_acl': False, 'citationCount': 726, 'section': 'Pre-training paradigms', 'subsection': None}, {'id': 253553585, 'paperId': 'ce913026f693101e54d3ab9152e107034d81fce1', 'title': 'Holistic Evaluation of Language Models', 'authors': [{'authorId': '145419642', 'name': 'Percy Liang'}, {'authorId': '150272855', 'name': 'Rishi Bommasani'}, {'authorId': '2110585783', 'name': 'Tony Lee'}, {'authorId': '2754804', 'name': 'Dimitris Tsipras'}, {'authorId': '1914569491', 'name': 'Dilara Soylu'}, {'authorId': '19168196', 'name': 'Michihiro Yasunaga'}, {'authorId': '9227100', 'name': 'Yian Zhang'}, {'authorId': '22252150', 'name': 'D. Narayanan'}, {'authorId': '3374063', 'name': 'Yuhuai Wu'}, {'authorId': '32423266', 'name': 'Ananya Kumar'}, {'authorId': '51149693', 'name': 'Benjamin Newman'}, {'authorId': '2833699', 'name': 'Binhang Yuan'}, {'authorId': '1748871792', 'name': 'Bobby Yan'}, {'authorId': '2146064162', 'name': 'Ce Zhang'}, {'authorId': '133749287', 'name': 'Christian Cosgrove'}, {'authorId': '144783904', 'name': 'Christopher D. Manning'}, {'authorId': '2061444681', 'name': "Christopher R'e"}, {'authorId': '1413421064', 'name': 'Diana Acosta-Navas'}, {'authorId': '152951058', 'name': 'Drew A. Hudson'}, {'authorId': '49456763', 'name': 'E. Zelikman'}, {'authorId': '41152329', 'name': 'Esin Durmus'}, {'authorId': '8759332', 'name': 'Faisal Ladhak'}, {'authorId': '2047004093', 'name': 'Frieda Rong'}, {'authorId': '40046694', 'name': 'Hongyu Ren'}, {'authorId': '18307037', 'name': 'Huaxiu Yao'}, {'authorId': '39597242', 'name': 'Jue Wang'}, {'authorId': '50818255', 'name': 'Keshav Santhanam'}, {'authorId': '4773175', 'name': 'Laurel J. Orr'}, {'authorId': '2118604716', 'name': 'Lucia Zheng'}, {'authorId': '2186981598', 'name': 'Mert Yuksekgonul'}, {'authorId': '51903517', 'name': 'Mirac Suzgun'}, {'authorId': '2182172863', 'name': 'Nathan S. Kim'}, {'authorId': '2820009', 'name': 'Neel Guha'}, {'authorId': '22193324', 'name': 'Niladri S. Chatterji'}, {'authorId': '144112155', 'name': 'O. Khattab'}, {'authorId': '2071773966', 'name': 'Peter Henderson'}, {'authorId': '144862341', 'name': 'Qian Huang'}, {'authorId': '2121293578', 'name': 'Ryan Chi'}, {'authorId': '46215055', 'name': 'Sang Michael Xie'}, {'authorId': '2852106', 'name': 'Shibani Santurkar'}, {'authorId': '25769960', 'name': 'S. Ganguli'}, {'authorId': '2117567142', 'name': 'Tatsunori Hashimoto'}, {'authorId': '8938047', 'name': 'Thomas F. Icard'}, {'authorId': '123437034', 'name': 'Tianyi Zhang'}, {'authorId': '113810201', 'name': 'Vishrav Chaudhary'}, {'authorId': '2127971344', 'name': 'William Wang'}, {'authorId': '2145429039', 'name': 'Xuechen Li'}, {'authorId': '2054708905', 'name': 'Yifan Mai'}, {'authorId': '49889860', 'name': 'Yuhui Zhang'}, {'authorId': '2740047', 'name': 'Yuta Koreeda'}], 'venue': 'Trans. Mach. Learn. Res.', 'abstract': 'Language models (LMs) like GPT‐3, PaLM, and ChatGPT are the foundation for almost all major language technologies, but their capabilities, limitations, and risks are not well understood. We present Holistic Evaluation of Language Models (HELM) to improve the transparency of LMs. LMs can serve many purposes and their behavior should satisfy many desiderata. To navigate the vast space of potential scenarios and metrics, we taxonomize the space and select representative subsets. We evaluate models on 16 core scenarios and 7 metrics, exposing important trade‐offs. We supplement our core evaluation with seven targeted evaluations to deeply analyze specific aspects (including world knowledge, reasoning, regurgitation of copyrighted content, and generation of disinformation). We benchmark 30 LMs, from OpenAI, Microsoft, Google, Meta, Cohere, AI21 Labs, and others. Prior to HELM, models were evaluated on just 17.9% of the core HELM scenarios, with some prominent models not sharing a single scenario in common. We improve this to 96.0%: all 30 models are now benchmarked under the same standardized conditions. Our evaluation surfaces 25 top‐level findings. For full transparency, we release all raw model prompts and completions publicly. HELM is a living benchmark for the community, continuously updated with new scenarios, metrics, and models https://crfm.stanford.edu/helm/latest/.', 'year': 2023, 'in_acl': False, 'citationCount': 736, 'section': 'Current benchmarks', 'subsection': None}, {'id': 263625818, 'paperId': 'bd1331b233e84bab7eba503abc60b31ac08e7881', 'title': 'Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models', 'authors': [{'authorId': '2169175069', 'name': 'Aarohi Srivastava'}, {'authorId': '2188497', 'name': 'Abhinav Rastogi'}, {'authorId': '1484043592', 'name': 'Abhishek Rao'}, {'authorId': '2248610', 'name': 'Abu Awal Md Shoeb'}, {'authorId': '144948925', 'name': 'Abubakar Abid'}, {'authorId': '2064150446', 'name': 'Adam Fisch'}, {'authorId': '2254150367', 'name': 'Adam R. Brown'}, {'authorId': '2253463637', 'name': 'Adam Santoro'}, {'authorId': '2254858472', 'name': 'Aditya Gupta'}, {'authorId': '1388513000', 'name': 'Adrià Garriga-Alonso'}, {'authorId': '2169552920', 'name': 'Agnieszka Kluska'}, {'authorId': '102549875', 'name': 'Aitor Lewkowycz'}, {'authorId': '2253472327', 'name': 'Akshat Agarwal'}, {'authorId': '146162186', 'name': 'Alethea Power'}, {'authorId': '2253525058', 'name': 'Alex Ray'}, {'authorId': '46236380', 'name': 'Alex Warstadt'}, {'authorId': '49502890', 'name': 'Alexander W. Kocurek'}, {'authorId': '150920423', 'name': 'Ali Safaya'}, {'authorId': '30615457', 'name': 'Ali Tazarv'}, {'authorId': '2253627565', 'name': 'Alice Xiang'}, {'authorId': '119389860', 'name': 'Alicia Parrish'}, {'authorId': '2253756381', 'name': 'Allen Nie'}, {'authorId': '2254089780', 'name': 'Aman Hussain'}, {'authorId': '2220750220', 'name': 'Amanda Askell'}, {'authorId': '2169546807', 'name': 'Amanda Dsouza'}, {'authorId': '133666998', 'name': 'Ambrose Slone'}, {'authorId': '1397271551', 'name': 'Ameet Rahane'}, {'authorId': '2169441753', 'name': 'Anantharaman S. Iyer'}, {'authorId': '39552848', 'name': 'Anders Andreassen'}, {'authorId': '3064807', 'name': 'Andrea Madotto'}, {'authorId': '2065039862', 'name': 'Andrea Santilli'}, {'authorId': '2169579494', 'name': 'Andreas Stuhlmuller'}, {'authorId': '2253757717', 'name': 'Andrew M. Dai'}, {'authorId': '2253744892', 'name': 'Andrew La'}, {'authorId': '32322945', 'name': 'Andrew Kyle Lampinen'}, {'authorId': '1380103052', 'name': 'Andy Zou'}, {'authorId': '2253471334', 'name': 'Angela Jiang'}, {'authorId': '13336152', 'name': 'Angelica Chen'}, {'authorId': '2064058890', 'name': 'Anh Vuong'}, {'authorId': '2110763559', 'name': 'Animesh Gupta'}, {'authorId': '1411423941', 'name': 'Anna Gottardi'}, {'authorId': '1596822208', 'name': 'Antonio Norelli'}, {'authorId': '47851456', 'name': 'Anu Venkatesh'}, {'authorId': '1396646913', 'name': 'Arash Gholamidavoodi'}, {'authorId': '11997563', 'name': 'Arfa Tabassum'}, {'authorId': '2253477913', 'name': 'Arul Menezes'}, {'authorId': '1417449913', 'name': 'Arun Kirubarajan'}, {'authorId': '3567738', 'name': 'A. Mullokandov'}, {'authorId': '48229640', 'name': 'Ashish Sabharwal'}, {'authorId': '103829114', 'name': 'Austin Herrick'}, {'authorId': '1388010852', 'name': 'Avia Efrat'}, {'authorId': '2253752451', 'name': 'Aykut Erdem'}, {'authorId': '2169560863', 'name': 'Ayla Karakacs'}, {'authorId': '2253506707', 'name': 'B. R. Roberts'}, {'authorId': '25229391', 'name': 'B. S. Loe'}, {'authorId': '2368067', 'name': 'Barret Zoph'}, {'authorId': '2169579430', 'name': 'Bartlomiej Bojanowski'}, {'authorId': '2169579156', 'name': 'Batuhan Ozyurt'}, {'authorId': '2127328167', 'name': 'Behnam Hedayatnia'}, {'authorId': '3007442', 'name': 'Behnam Neyshabur'}, {'authorId': '2911198', 'name': 'Benjamin Inden'}, {'authorId': '1405867539', 'name': 'Benno Stein'}, {'authorId': '3407537', 'name': 'Berk Ekmekci'}, {'authorId': '51583409', 'name': 'Bill Yuchen Lin'}, {'authorId': '1759660', 'name': 'B. Howald'}, {'authorId': '2253742181', 'name': 'Bryan Orinion'}, {'authorId': '2113828270', 'name': 'Cameron Diao'}, {'authorId': '2169579131', 'name': 'Cameron Dour'}, {'authorId': '2253742142', 'name': 'Catherine Stinson'}, {'authorId': '2080698457', 'name': 'Cedrick Argueta'}, {'authorId': '2169282994', 'name': "C'esar Ferri Ram'irez"}, {'authorId': '2253649838', 'name': 'Chandan Singh'}, {'authorId': '30465886', 'name': 'Charles Rathkopf'}, {'authorId': '83262128', 'name': 'Chenlin Meng'}, {'authorId': '2064619864', 'name': 'Chitta Baral'}, {'authorId': '2115397918', 'name': 'Chiyu Wu'}, {'authorId': '1763608', 'name': 'Chris Callison-Burch'}, {'authorId': '1451646307', 'name': 'Chris Waites'}, {'authorId': '2253721294', 'name': 'Christian Voigt'}, {'authorId': '2250402802', 'name': 'Christopher D. Manning'}, {'authorId': '2253742954', 'name': 'Christopher Potts'}, {'authorId': '1381594105', 'name': 'Cindy Ramirez'}, {'authorId': '2253680456', 'name': 'Clara E. Rivera'}, {'authorId': '2056776870', 'name': 'Clemencia Siro'}, {'authorId': '2402716', 'name': 'Colin Raffel'}, {'authorId': '2169255641', 'name': 'Courtney Ashcraft'}, {'authorId': '3360992', 'name': 'Cristina Garbacea'}, {'authorId': '2311890249', 'name': 'Damien Sileo'}, {'authorId': '69045302', 'name': 'Daniel H Garrette'}, {'authorId': '3422872', 'name': 'Dan Hendrycks'}, {'authorId': '2051801993', 'name': 'D. Kilman'}, {'authorId': '2249759427', 'name': 'Dan Roth'}, {'authorId': '2253572738', 'name': 'Daniel Freeman'}, {'authorId': '1783281', 'name': 'Daniel Khashabi'}, {'authorId': '2052679852', 'name': 'Daniel Levy'}, {'authorId': '1802312462', 'name': "D. Gonz'alez"}, {'authorId': '6472480', 'name': 'Danielle R. Perszyk'}, {'authorId': '39182747', 'name': 'Danny Hernandez'}, {'authorId': '2255489905', 'name': 'Danqi Chen'}, {'authorId': '7975935', 'name': 'Daphne Ippolito'}, {'authorId': '32994625', 'name': 'D. Gilboa'}, {'authorId': '35363891', 'name': 'David Dohan'}, {'authorId': '97501513', 'name': 'D. Drakard'}, {'authorId': '3046220', 'name': 'David Jurgens'}, {'authorId': '2852125', 'name': 'Debajyoti Datta'}, {'authorId': '2081806483', 'name': 'Deep Ganguli'}, {'authorId': '51889641', 'name': 'Denis Emelin'}, {'authorId': '2193599771', 'name': 'Denis Kleyko'}, {'authorId': '2253742671', 'name': 'Deniz Yuret'}, {'authorId': '2253841704', 'name': 'Derek Chen'}, {'authorId': '1390031652', 'name': 'Derek Tam'}, {'authorId': '3449411', 'name': 'Dieuwke Hupkes'}, {'authorId': '2253641873', 'name': 'Diganta Misra'}, {'authorId': '2169553550', 'name': 'Dilyar Buzan'}, {'authorId': '51127600', 'name': 'Dimitri Coelho Mollo'}, {'authorId': '2254124345', 'name': 'Diyi Yang'}, {'authorId': '2253883759', 'name': 'Dong-Ho Lee'}, {'authorId': '2253751766', 'name': 'Dylan Schrader'}, {'authorId': '2362276', 'name': 'Ekaterina Shutova'}, {'authorId': '8132903', 'name': 'E. D. Cubuk'}, {'authorId': '153401294', 'name': 'Elad Segal'}, {'authorId': '83195245', 'name': 'Eleanor Hagerman'}, {'authorId': '2057742918', 'name': 'Elizabeth Barnes'}, {'authorId': '1602681246', 'name': 'E. Donoway'}, {'authorId': '2949185', 'name': 'Ellie Pavlick'}, {'authorId': '1796150', 'name': 'E. Rodolà'}, {'authorId': '2253753565', 'name': 'Emma Lam'}, {'authorId': '2253520497', 'name': 'Eric Chu'}, {'authorId': '2090511698', 'name': 'Eric Tang'}, {'authorId': '152330322', 'name': 'Erkut Erdem'}, {'authorId': '48025720', 'name': 'Ernie Chang'}, {'authorId': '2253469028', 'name': 'Ethan A. Chi'}, {'authorId': '52136425', 'name': 'Ethan Dyer'}, {'authorId': '87911177', 'name': 'E. Jerzak'}, {'authorId': '2047591327', 'name': 'Ethan Kim'}, {'authorId': '102184064', 'name': 'Eunice Engefu Manyasi'}, {'authorId': '23913513', 'name': 'Evgenii Zheltonozhskii'}, {'authorId': '2253581486', 'name': 'Fanyue Xia'}, {'authorId': '16851583', 'name': 'F. Siar'}, {'authorId': '2126497260', 'name': "Fernando Mart'inez-Plumed"}, {'authorId': '2169302468', 'name': "Francesca Happ'e"}, {'authorId': '1565641737', 'name': 'François Chollet'}, {'authorId': '2047004093', 'name': 'Frieda Rong'}, {'authorId': '2159632445', 'name': 'Gaurav Mishra'}, {'authorId': '9162688', 'name': 'Genta Indra Winata'}, {'authorId': '2253752804', 'name': 'Gerard de Melo'}, {'authorId': '2067996', 'name': 'Germán Kruszewski'}, {'authorId': '50213542', 'name': 'Giambattista Parascandolo'}, {'authorId': '2253744058', 'name': 'Giorgio Mariani'}, {'authorId': '2143229090', 'name': 'Gloria Xinyue Wang'}, {'authorId': '2169561208', 'name': "Gonzalo Jaimovitch-L'opez"}, {'authorId': '2253742347', 'name': 'Gregor Betz'}, {'authorId': '2284681044', 'name': 'Guy Gur-Ari'}, {'authorId': '2148236997', 'name': 'Hana Galijasevic'}, {'authorId': '2254029693', 'name': 'Hannah Kim'}, {'authorId': '2516777', 'name': 'Hannah Rashkin'}, {'authorId': '2548384', 'name': 'Hannaneh Hajishirzi'}, {'authorId': '18138802', 'name': 'Harsh Mehta'}, {'authorId': '81356189', 'name': 'H. Bogar'}, {'authorId': '66652934', 'name': 'Henry Shevlin'}, {'authorId': '2261745622', 'name': 'Hinrich Schutze'}, {'authorId': '2252055381', 'name': 'H. Yakura'}, {'authorId': '2253956610', 'name': 'Hongming Zhang'}, {'authorId': '2084554416', 'name': 'Hugh Mee Wong'}, {'authorId': '2175479575', 'name': 'Ian Ng'}, {'authorId': '146990369', 'name': 'Isaac Noble'}, {'authorId': '2253757620', 'name': 'Jaap Jumelet'}, {'authorId': '67288781', 'name': 'Jack Geissinger'}, {'authorId': '1583434563', 'name': 'John Kernion'}, {'authorId': '2052366271', 'name': 'Jacob Hilton'}, {'authorId': '2253808003', 'name': 'Jaehoon Lee'}, {'authorId': '1843342', 'name': 'J. Fisac'}, {'authorId': '2253938438', 'name': 'James B. Simon'}, {'authorId': '39465522', 'name': 'James Koppel'}, {'authorId': '2254106317', 'name': 'James Zheng'}, {'authorId': '2240530524', 'name': 'James Zou'}, {'authorId': '2169553526', 'name': "Jan Koco'n"}, {'authorId': '2148444557', 'name': 'Jana Thompson'}, {'authorId': '2253742801', 'name': 'Janelle Wingfield'}, {'authorId': '2053807409', 'name': 'Jared Kaplan'}, {'authorId': '2133262330', 'name': 'Jarema Radom'}, {'authorId': '1407546424', 'name': 'Jascha Narain Sohl-Dickstein'}, {'authorId': '80842917', 'name': 'Jason Phang'}, {'authorId': '2253952872', 'name': 'Jason Wei'}, {'authorId': '2965424', 'name': 'J. Yosinski'}, {'authorId': '2848048', 'name': 'Jekaterina Novikova'}, {'authorId': '2169562378', 'name': 'Jelle Bosscher'}, {'authorId': '2253582254', 'name': 'Jennifer Marsh'}, {'authorId': '2253951784', 'name': 'Jeremy Kim'}, {'authorId': '2169553864', 'name': 'Jeroen Taal'}, {'authorId': '2253555236', 'name': 'Jesse Engel'}, {'authorId': '122367036', 'name': 'Jesujoba Oluwadara Alabi'}, {'authorId': '2254142854', 'name': 'Jiacheng Xu'}, {'authorId': '2250159648', 'name': 'Jiaming Song'}, {'authorId': '82706161', 'name': 'Jillian Tang'}, {'authorId': '147175687', 'name': 'Jane W Waweru'}, {'authorId': '31502027', 'name': 'John Burden'}, {'authorId': '2253928282', 'name': 'John Miller'}, {'authorId': '2075220382', 'name': 'John U. Balis'}, {'authorId': '2253742887', 'name': 'Jonathan Batchelder'}, {'authorId': '1750652', 'name': 'Jonathan Berant'}, {'authorId': '2146695800', 'name': 'Jorg Frohberg'}, {'authorId': '120419790', 'name': 'Jos Rozen'}, {'authorId': '1398777358', 'name': 'J. Hernández-Orallo'}, {'authorId': '2169583246', 'name': 'Joseph Boudeman'}, {'authorId': '2253742339', 'name': 'Joseph Guerr'}, {'authorId': '2169419125', 'name': 'Joseph Jones'}, {'authorId': '2250220321', 'name': 'Joshua B. Tenenbaum'}, {'authorId': '38219739', 'name': 'Joshua S. Rule'}, {'authorId': '119803865', 'name': 'Joyce Chua'}, {'authorId': '2007286374', 'name': 'Kamil Kanclerz'}, {'authorId': '2924113', 'name': 'Karen Livescu'}, {'authorId': '48778049', 'name': 'K. Krauth'}, {'authorId': '145916630', 'name': 'Karthik Gopalakrishnan'}, {'authorId': '2169583196', 'name': 'Katerina Ignatyeva'}, {'authorId': '2253742520', 'name': 'K. Markert'}, {'authorId': '4834571', 'name': 'Kaustubh D. Dhole'}, {'authorId': '1700980', 'name': 'Kevin Gimpel'}, {'authorId': '98151280', 'name': 'Kevin Omondi'}, {'authorId': '2034344309', 'name': 'K. Mathewson'}, {'authorId': '2169579399', 'name': 'Kristen Chiafullo'}, {'authorId': '94055272', 'name': 'Ksenia Shkaruta'}, {'authorId': '50812160', 'name': 'K. Shridhar'}, {'authorId': '2049410219', 'name': 'Kyle McDonell'}, {'authorId': '46666605', 'name': 'Kyle Richardson'}, {'authorId': '2049583158', 'name': 'Laria Reynolds'}, {'authorId': '2027599537', 'name': 'Leo Gao'}, {'authorId': '72436283', 'name': 'Li Zhang'}, {'authorId': '83863037', 'name': 'Liam Dugan'}, {'authorId': '3444092', 'name': 'Lianhui Qin'}, {'authorId': '1572944977', 'name': 'Lidia Contreras-Ochando'}, {'authorId': '49933077', 'name': 'Louis-philippe Morency'}, {'authorId': '2253396046', 'name': 'Luca Moschella'}, {'authorId': '134309836', 'name': 'Luca Lam'}, {'authorId': '2253759995', 'name': 'Lucy Noble'}, {'authorId': '2253541812', 'name': 'Ludwig Schmidt'}, {'authorId': '2253917827', 'name': 'Luheng He'}, {'authorId': '2169579445', 'name': "Luis Oliveros Col'on"}, {'authorId': '2096458', 'name': 'Luke Metz'}, {'authorId': '2126865294', 'name': 'Lutfi Kerem cSenel'}, {'authorId': '40377863', 'name': 'Maarten Bosma'}, {'authorId': '2729164', 'name': 'Maarten Sap'}, {'authorId': '41096186', 'name': 'Maartje ter Hoeve'}, {'authorId': '77751476', 'name': 'Maheen Farooqi'}, {'authorId': '1779225', 'name': 'Manaal Faruqui'}, {'authorId': '16787428', 'name': 'Mantas Mazeika'}, {'authorId': '2169579697', 'name': 'Marco Baturan'}, {'authorId': '2202507568', 'name': 'Marco Marelli'}, {'authorId': '1388826344', 'name': 'Marco Maru'}, {'authorId': '2253743804', 'name': 'Maria Jose Ram’irez Quintana'}, {'authorId': '46445780', 'name': 'M. Tolkiehn'}, {'authorId': '24068173', 'name': 'Mario Giulianelli'}, {'authorId': '2253999321', 'name': 'Martha Lewis'}, {'authorId': '3046200', 'name': 'Martin Potthast'}, {'authorId': '2240527814', 'name': 'Matthew L. Leavitt'}, {'authorId': '145072133', 'name': 'Matthias Hagen'}, {'authorId': '2072397293', 'name': 'M. Schubert'}, {'authorId': '2399139', 'name': 'Medina Baitemirova'}, {'authorId': '2253743368', 'name': 'Melody Arnaud'}, {'authorId': '1410273721', 'name': 'M. McElrath'}, {'authorId': '2253753325', 'name': 'Michael A. Yee'}, {'authorId': '2253473243', 'name': 'Michael Cohen'}, {'authorId': '2253764442', 'name': 'Michael Gu'}, {'authorId': '51260702', 'name': 'Michael Ivanitskiy'}, {'authorId': '2169579231', 'name': 'Michael Starritt'}, {'authorId': '2253690372', 'name': 'M. Strube'}, {'authorId': '2169561132', 'name': 'Michal Swkedrowski'}, {'authorId': '2253475117', 'name': 'Michele Bevilacqua'}, {'authorId': '19168196', 'name': 'Michihiro Yasunaga'}, {'authorId': '26688118', 'name': 'Mihir Kale'}, {'authorId': '2253762090', 'name': 'Mike Cain'}, {'authorId': '2254456411', 'name': 'Mimee Xu'}, {'authorId': '51903517', 'name': 'Mirac Suzgun'}, {'authorId': '2253473026', 'name': 'Mitch Walker'}, {'authorId': '2074227875', 'name': 'Monica Tiwari'}, {'authorId': '2253762115', 'name': 'Mohit Bansal'}, {'authorId': '2035522904', 'name': 'Moin Aminnaseri'}, {'authorId': '22245981', 'name': 'Mor Geva'}, {'authorId': '151114702', 'name': 'Mozhdeh Gheini'}, {'authorId': '2007825839', 'name': 'T. MukundVarma'}, {'authorId': '2253599901', 'name': 'Nanyun Peng'}, {'authorId': '2253474896', 'name': 'Nathan A. Chi'}, {'authorId': '40221187', 'name': 'Nayeon Lee'}, {'authorId': '2169562637', 'name': 'Neta Gur-Ari Krakover'}, {'authorId': '2070244342', 'name': 'Nicholas Cameron'}, {'authorId': '2253607509', 'name': 'Nicholas Roberts'}, {'authorId': '2249758958', 'name': 'Nick Doiron'}, {'authorId': '2253719808', 'name': 'Nicole Martinez'}, {'authorId': '10666396', 'name': 'Nikita Nangia'}, {'authorId': '40015417', 'name': 'Niklas Deckers'}, {'authorId': '2037383772', 'name': 'Niklas Muennighoff'}, {'authorId': '2844898', 'name': 'N. Keskar'}, {'authorId': '2121286996', 'name': 'Niveditha Iyer'}, {'authorId': '40832517', 'name': 'Noah Constant'}, {'authorId': '22640071', 'name': 'Noah Fiedel'}, {'authorId': '2253657042', 'name': 'Nuan Wen'}, {'authorId': '2253742723', 'name': 'Oliver Zhang'}, {'authorId': '2138307026', 'name': 'Omar Agha'}, {'authorId': '2169583261', 'name': 'Omar Elbaghdadi'}, {'authorId': '2253752918', 'name': 'Omer Levy'}, {'authorId': '47107786', 'name': 'Owain Evans'}, {'authorId': '145163583', 'name': 'Pablo Antonio Moreno Casares'}, {'authorId': '1453712240', 'name': 'P. Doshi'}, {'authorId': '40539650', 'name': 'Pascale Fung'}, {'authorId': '28130078', 'name': 'P. Liang'}, {'authorId': '2039154', 'name': 'Paul Vicol'}, {'authorId': '1805993128', 'name': 'Pegah Alipoormolabashi'}, {'authorId': '2253656797', 'name': 'Peiyuan Liao'}, {'authorId': '2249641250', 'name': 'Percy Liang'}, {'authorId': '2113642717', 'name': 'Peter Chang'}, {'authorId': '2654106', 'name': 'P. Eckersley'}, {'authorId': '41022736', 'name': 'Phu Mon Htut'}, {'authorId': '46221673', 'name': 'P. Hwang'}, {'authorId': '1413772109', 'name': 'P. Milkowski'}, {'authorId': '2414098', 'name': 'P. Patil'}, {'authorId': '1713436', 'name': 'Pouya Pezeshkpour'}, {'authorId': '2051972539', 'name': 'Priti Oli'}, {'authorId': '2286285875', 'name': 'Qiaozhu Mei'}, {'authorId': '1904906987', 'name': 'Qing Lyu'}, {'authorId': '2256591945', 'name': 'Qinlang Chen'}, {'authorId': '2004401966', 'name': 'Rabin Banjade'}, {'authorId': '2238019263', 'name': 'Rachel Etta Rudolph'}, {'authorId': '39303368', 'name': 'Raefer Gabriel'}, {'authorId': '2169583577', 'name': 'Rahel Habacker'}, {'authorId': '2253742845', 'name': 'Ramon Risco'}, {'authorId': '2249763478', 'name': 'Raphael Milliere'}, {'authorId': '144914264', 'name': 'Rhythm Garg'}, {'authorId': '2056119463', 'name': 'Richard Barnes'}, {'authorId': '2278009', 'name': 'R. Saurous'}, {'authorId': '83976651', 'name': 'Riku Arakawa'}, {'authorId': '2143515065', 'name': 'Robbe Raymaekers'}, {'authorId': '2253563490', 'name': 'Robert Frank'}, {'authorId': '2169583539', 'name': 'Rohan Sikand'}, {'authorId': '39068839', 'name': 'Roman Novak'}, {'authorId': '2143205525', 'name': 'Roman Sitelew'}, {'authorId': '39227408', 'name': 'Ronan Le Bras'}, {'authorId': '48757909', 'name': 'Rosanne Liu'}, {'authorId': '2169442080', 'name': 'Rowan Jacobs'}, {'authorId': '2255464042', 'name': 'Rui Zhang'}, {'authorId': '145124475', 'name': 'R. Salakhutdinov'}, {'authorId': '2121293578', 'name': 'Ryan Chi'}, {'authorId': '2110680870', 'name': 'Ryan Lee'}, {'authorId': '2096414946', 'name': 'Ryan Stovall'}, {'authorId': '2131107966', 'name': 'Ryan Teehan'}, {'authorId': '2170120947', 'name': 'Rylan Yang'}, {'authorId': '2256771265', 'name': 'Sahib Singh'}, {'authorId': '2253519805', 'name': 'Saif Mohammad'}, {'authorId': '51177395', 'name': 'Sajant Anand'}, {'authorId': '14149388', 'name': 'Sam Dillavou'}, {'authorId': '88728159', 'name': 'Sam Shleifer'}, {'authorId': '2844243', 'name': 'Sam Wiseman'}, {'authorId': '46173414', 'name': 'Samuel Gruetter'}, {'authorId': '2137213671', 'name': 'Samuel R. Bowman'}, {'authorId': '2601641', 'name': 'S. Schoenholz'}, {'authorId': '2254097408', 'name': 'Sanghyun Han'}, {'authorId': '1412838170', 'name': 'Sanjeev Kwatra'}, {'authorId': '121266477', 'name': 'Sarah A. Rous'}, {'authorId': '3022427', 'name': 'Sarik Ghazarian'}, {'authorId': '2143032877', 'name': 'Sayan Ghosh'}, {'authorId': '2253475225', 'name': 'Sean Casey'}, {'authorId': '32306415', 'name': 'Sebastian Bischoff'}, {'authorId': '3159346', 'name': 'Sebastian Gehrmann'}, {'authorId': '2067227206', 'name': 'Sebastian Schuster'}, {'authorId': '2162733', 'name': 'Sepideh Sadeghi'}, {'authorId': '1832987156', 'name': 'Shadi S. Hamdan'}, {'authorId': '2111057669', 'name': 'Sharon Zhou'}, {'authorId': '2253588064', 'name': 'Shashank Srivastava'}, {'authorId': '2113258914', 'name': 'Sherry Shi'}, {'authorId': '2108410562', 'name': 'Shikhar Singh'}, {'authorId': '30462410', 'name': 'Shima Asaadi'}, {'authorId': '2253699903', 'name': 'S. Gu'}, {'authorId': '2169561291', 'name': 'Shubh Pachchigar'}, {'authorId': '2634203', 'name': 'Shubham Toshniwal'}, {'authorId': '33145619', 'name': 'Shyam Upadhyay'}, {'authorId': '2169422265', 'name': 'Shyamolima Debnath'}, {'authorId': '2944868', 'name': 'Siamak Shakeri'}, {'authorId': '2169583327', 'name': 'Simon Thormeyer'}, {'authorId': '1972186', 'name': 'S. Melzi'}, {'authorId': '2256152032', 'name': 'Siva Reddy'}, {'authorId': '118798524', 'name': 'S. Makini'}, {'authorId': '2255599173', 'name': 'Soo-Hwan Lee'}, {'authorId': '5910521', 'name': 'Spencer Bradley Torene'}, {'authorId': '1659275411', 'name': 'Sriharsha Hatwar'}, {'authorId': '2250013696', 'name': 'S. Dehaene'}, {'authorId': '102987414', 'name': 'Stefan Divic'}, {'authorId': '2490652', 'name': 'Stefano Ermon'}, {'authorId': '103476203', 'name': 'Stella Biderman'}, {'authorId': '2253840098', 'name': 'Stephanie Lin'}, {'authorId': '2253781922', 'name': 'Stephen Prasad'}, {'authorId': '2238331992', 'name': 'Steven T Piantadosi'}, {'authorId': '1692491', 'name': 'Stuart M. Shieber'}, {'authorId': '2169554101', 'name': 'Summer Misherghi'}, {'authorId': '2886725', 'name': 'S. Kiritchenko'}, {'authorId': '1817207', 'name': 'Swaroop Mishra'}, {'authorId': '51223875', 'name': 'Tal Linzen'}, {'authorId': '32303439', 'name': 'Tal Schuster'}, {'authorId': '2149201962', 'name': 'Tao Li'}, {'authorId': '2256865416', 'name': 'Tao Yu'}, {'authorId': '2253522954', 'name': 'Tariq Ali'}, {'authorId': '2253575091', 'name': 'Tatsunori Hashimoto'}, {'authorId': '2134514770', 'name': 'Te-Lin Wu'}, {'authorId': '88367918', 'name': 'T. Desbordes'}, {'authorId': '2169561552', 'name': 'Theodore Rothschild'}, {'authorId': '145127341', 'name': 'Thomas Phan'}, {'authorId': '2255375735', 'name': 'Tianle Wang'}, {'authorId': '101592141', 'name': 'Tiberius Nkinyili'}, {'authorId': '32246932', 'name': 'Timo Schick'}, {'authorId': '98873736', 'name': 'T. Kornev'}, {'authorId': '2169579934', 'name': 'T. Tunduny'}, {'authorId': '2697953', 'name': 'Tobias Gerstenberg'}, {'authorId': '2107372733', 'name': 'T. Chang'}, {'authorId': '10729963', 'name': 'Trishala Neeraj'}, {'authorId': '2236429', 'name': 'Tushar Khot'}, {'authorId': '2253757587', 'name': 'Tyler Shultz'}, {'authorId': '50482645', 'name': 'Uri Shaham'}, {'authorId': '40055795', 'name': 'Vedant Misra'}, {'authorId': '2243444130', 'name': 'Vera Demberg'}, {'authorId': '2169580169', 'name': 'Victoria Nyamai'}, {'authorId': '24025563', 'name': 'Vikas Raunak'}, {'authorId': '96641652', 'name': 'V. Ramasesh'}, {'authorId': '2670978', 'name': 'Vinay Uday Prabhu'}, {'authorId': '2044959912', 'name': 'Vishakh Padmakumar'}, {'authorId': '3052879', 'name': 'Vivek Srikumar'}, {'authorId': '26958176', 'name': 'W. Fedus'}, {'authorId': '2058848938', 'name': 'W. Saunders'}, {'authorId': '2115212355', 'name': 'William Zhang'}, {'authorId': '2143514461', 'name': 'Wout Vossen'}, {'authorId': '2256826104', 'name': 'Xiang Ren'}, {'authorId': '2253762061', 'name': 'Xiaoyu Tong'}, {'authorId': '1500662261', 'name': 'Xinran Zhao'}, {'authorId': '2255396948', 'name': 'Xinyi Wu'}, {'authorId': '2144058688', 'name': 'Xudong Shen'}, {'authorId': '3261470', 'name': 'Yadollah Yaghoobzadeh'}, {'authorId': '3051598', 'name': 'Yair Lakretz'}, {'authorId': '2258804099', 'name': 'Yangqiu Song'}, {'authorId': '12383244', 'name': 'Yasaman Bahri'}, {'authorId': '2253903625', 'name': 'Yejin Choi'}, {'authorId': '2108652034', 'name': 'Yichi Yang'}, {'authorId': '2256343174', 'name': 'Yiding Hao'}, {'authorId': '2253813584', 'name': 'Yifu Chen'}, {'authorId': '2083259', 'name': 'Yonatan Belinkov'}, {'authorId': '2118739951', 'name': 'Yu Hou'}, {'authorId': '2118739951', 'name': 'Yu Hou'}, {'authorId': '1486307451', 'name': 'Yuntao Bai'}, {'authorId': '2169561434', 'name': 'Zachary Seid'}, {'authorId': '2254023165', 'name': 'Zhuoye Zhao'}, {'authorId': '37571937', 'name': 'Zijian Wang'}, {'authorId': '1390877819', 'name': 'Zijie J. Wang'}, {'authorId': '2255392870', 'name': 'Zirui Wang'}, {'authorId': '2253894469', 'name': 'Ziyi Wu'}], 'venue': 'arXiv.org', 'abstract': 'Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 450 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI\'s GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit"breakthrough"behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.', 'year': 2022, 'in_acl': False, 'citationCount': 1423, 'section': 'Current benchmarks', 'subsection': None}, {'id': 233296808, 'paperId': 'ffdbd7f0b03b85747b001b4734d5ee31b5229aa4', 'title': 'The Power of Scale for Parameter-Efficient Prompt Tuning', 'authors': [{'authorId': '144104130', 'name': 'Brian Lester'}, {'authorId': '1388360943', 'name': 'Rami Al-Rfou'}, {'authorId': '40832517', 'name': 'Noah Constant'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'In this work, we explore “prompt tuning,” a simple yet effective mechanism for learning “soft prompts” to condition frozen language models to perform specific downstream tasks. Unlike the discrete text prompts used by GPT-3, soft prompts are learned through backpropagation and can be tuned to incorporate signals from any number of labeled examples. Our end-to-end learned approach outperforms GPT-3’s few-shot learning by a large margin. More remarkably, through ablations on model size using T5, we show that prompt tuning becomes more competitive with scale: as models exceed billions of parameters, our method “closes the gap” and matches the strong performance of model tuning (where all model weights are tuned). This finding is especially relevant because large models are costly to share and serve and the ability to reuse one frozen model for multiple downstream tasks can ease this burden. Our method can be seen as a simplification of the recently proposed “prefix tuning” of Li and Liang (2021) and we provide a comparison to this and other similar approaches. Finally, we show that conditioning a frozen model with soft prompts confers benefits in robustness to domain transfer and enables efficient “prompt ensembling.” We release code and model checkpoints to reproduce our experiments.', 'year': 2021, 'in_acl': True, 'citationCount': 3231, 'section': 'Prompts', 'subsection': 'creating paraphrases'}, {'id': 254408772, 'paperId': 'd03a9b2a0e090cc9fd2ba0a457ecea35372f1018', 'title': 'Demystifying Prompts in Language Models via Perplexity Estimation', 'authors': [{'authorId': '1821892', 'name': 'Hila Gonen'}, {'authorId': '1900163', 'name': 'Srini Iyer'}, {'authorId': '3443287', 'name': 'Terra Blevins'}, {'authorId': '144365875', 'name': 'Noah A. Smith'}, {'authorId': '1982950', 'name': 'Luke Zettlemoyer'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Language models can be prompted to perform a wide variety of zero- and few-shot learning problems. However, performance varies significantly with the choice of prompt, and we do not yet understand why this happens or how to pick the best prompts. In this work, we analyze the factors that contribute to this variance and establish a new empirical hypothesis: the performance of a prompt is coupled with the extent to which the model is familiar with the language it contains. Over a wide range of tasks, we show that the lower the perplexity of the prompt is, the better the prompt is able to perform the task. As a result, we devise a method for creating prompts: (1) automatically extend a small seed set of manually written prompts by paraphrasing using GPT3 and backtranslation and (2) choose the lowest perplexity prompts to get significant gains in performance.', 'year': 2022, 'in_acl': False, 'citationCount': 157, 'section': 'Prompts', 'subsection': 'creating paraphrases'}, {'id': 254853659, 'paperId': '6f4cc536f9ed83d0dbf7e919dc609be12aa0848a', 'title': 'Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor', 'authors': [{'authorId': '1754700648', 'name': 'Or Honovich'}, {'authorId': '2073456043', 'name': 'Thomas Scialom'}, {'authorId': '39455775', 'name': 'Omer Levy'}, {'authorId': '32246932', 'name': 'Timo Schick'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Instruction tuning enables pretrained language models to perform new tasks from inference-time natural language descriptions. These approaches rely on vast amounts of human supervision in the form of crowdsourced datasets or user interactions. In this work, we introduce Unnatural Instructions: a large dataset of creative and diverse instructions, collected with virtually no human labor. We collect 64,000 examples by prompting a language model with three seed examples of instructions and eliciting a fourth. This set is then expanded by prompting the model to rephrase each instruction, creating a total of approximately 240,000 examples of instructions, inputs, and outputs. Experiments show that despite containing a fair amount of noise, training on Unnatural Instructions rivals the effectiveness of training on open-source manually-curated datasets, surpassing the performance of models such as T0++ and Tk-Instruct across various benchmarks. These results demonstrate the potential of model-generated data as a cost-effective alternative to crowdsourcing for dataset expansion and diversification.', 'year': 2022, 'in_acl': True, 'citationCount': 304, 'section': 'Prompts', 'subsection': 'creating paraphrases'}, {'id': 254366640, 'paperId': '37255b091317aedc6854383104b3343e67ab5c80', 'title': 'Robustness of Learning from Task Instructions', 'authors': [{'authorId': '50771069', 'name': 'Jiasheng Gu'}, {'authorId': '2143534669', 'name': 'Hanzi Xu'}, {'authorId': '1382526237', 'name': 'Liang Nie'}, {'authorId': '40483594', 'name': 'Wenpeng Yin'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Traditional supervised learning mostly works on individual tasks and requires training on a large set of task-specific examples. This paradigm seriously hinders the development of task generalization since preparing a task-specific example set is costly. To build a system that can quickly and easily generalize to new tasks, task instructions have been adopted as an emerging trend of supervision recently. These instructions give the model the definition of the task and allow the model to output the appropriate answer based on the instructions and inputs. However, task instructions are often expressed in different forms, which can be interpreted from two threads: first, some instructions are short sentences and are pretrained language model (PLM) oriented, such as prompts, while other instructions are paragraphs and are human-oriented, such as those in Amazon MTurk; second, different end-users very likely explain the same task with instructions of different textual expressions. A robust system for task generalization should be able to handle any new tasks regardless of the variability of instructions. However, the system robustness in dealing with instruction-driven task generalization is still unexplored. This work investigates the system robustness when the instructions of new tasks are (i) manipulated, (ii) paraphrased, or (iii) from different levels of conciseness. To our knowledge, this is the first work that systematically studies how robust a PLM is when it is supervised by instructions with different factors of variability.', 'year': 2022, 'in_acl': False, 'citationCount': 25, 'section': 'Prompts', 'subsection': 'robustness to paraphrases'}, {'id': 259203613, 'paperId': 'b0bac6aca93021105c8a4f165184a097a249fbce', 'title': 'Evaluating the Zero-shot Robustness of Instruction-tuned Language Models', 'authors': [{'authorId': '2175443520', 'name': 'Jiu Sun'}, {'authorId': '2008165628', 'name': 'Chantal Shaib'}, {'authorId': '2111879324', 'name': 'Byron Wallace'}], 'venue': 'arXiv.org', 'abstract': "Instruction fine-tuning has recently emerged as a promising approach for improving the zero-shot capabilities of Large Language Models (LLMs) on new tasks. This technique has shown particular strength in improving the performance of modestly sized LLMs, sometimes inducing performance competitive with much larger model variants. In this paper we ask two questions: (1) How sensitive are instruction-tuned models to the particular phrasings of instructions, and, (2) How can we make them more robust to such natural language variation? To answer the former, we collect a set of 319 instructions manually written by NLP practitioners for over 80 unique tasks included in widely used benchmarks, and we evaluate the variance and average performance of these instructions as compared to instruction phrasings observed during instruction fine-tuning. We find that using novel (unobserved) but appropriate instruction phrasings consistently degrades model performance, sometimes substantially so. Further, such natural instructions yield a wide variance in downstream performance, despite their semantic equivalence. Put another way, instruction-tuned models are not especially robust to instruction re-phrasings. We propose a simple method to mitigate this issue by introducing ``soft prompt'' embedding parameters and optimizing these to maximize the similarity between representations of semantically equivalent instructions. We show that this method consistently improves the robustness of instruction-tuned models.", 'year': 2023, 'in_acl': False, 'citationCount': 36, 'section': 'Prompts', 'subsection': 'robustness to paraphrases'}, {'id': 266693922, 'paperId': '8dce168f723158b771b526401113064c36fc875e', 'title': 'State of What Art? A Call for Multi-Prompt LLM Evaluation', 'authors': [{'authorId': '1396412520', 'name': 'Moran Mizrahi'}, {'authorId': '2277302614', 'name': 'Guy Kaplan'}, {'authorId': '2082022055', 'name': 'Daniel Malkin'}, {'authorId': '3372941', 'name': 'Rotem Dror'}, {'authorId': '1805894', 'name': 'Dafna Shahaf'}, {'authorId': '2126417012', 'name': 'Gabriel Stanovsky'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'Abstract Recent advances in LLMs have led to an abundance of evaluation benchmarks, which typically rely on a single instruction template per task. We create a large-scale collection of instruction paraphrases and comprehensively analyze the brittleness introduced by single-prompt evaluations across 6.5M instances, involving 20 different LLMs and 39 tasks from 3 benchmarks. We find that different instruction templates lead to very different performance, both absolute and relative. Instead, we propose a set of diverse metrics on multiple instruction paraphrases, specifically tailored for different use cases (e.g., LLM vs. downstream development), ensuring a more reliable and meaningful assessment of LLM capabilities. We show that our metrics provide new insights into the strengths and limitations of current LLMs.', 'year': 2023, 'in_acl': False, 'citationCount': 70, 'section': 'Prompts', 'subsection': 'robustness to paraphrases'}, {'id': 221340941, 'paperId': 'c6d38e105562ae0a5d9b21fb4333212f36a3e041', 'title': 'A Survey of Evaluation Metrics Used for NLG Systems', 'authors': [{'authorId': '145338991', 'name': 'Ananya B. Sai'}, {'authorId': '1389549528', 'name': 'Akash Kumar Mohankumar'}, {'authorId': '2361078', 'name': 'Mitesh M. Khapra'}], 'venue': 'ACM Computing Surveys', 'abstract': 'In the last few years, a large number of automatic evaluation metrics have been proposed for evaluating Natural Language Generation (NLG) systems. The rapid development and adoption of such automatic evaluation metrics in a relatively short time has created the need for a survey of these metrics. In this survey, we (i) highlight the challenges in automatically evaluating NLG systems, (ii) propose a coherent taxonomy for organising existing evaluation metrics, (iii) briefly describe different existing metrics, and finally (iv) discuss studies criticising the use of automatic evaluation metrics. We then conclude the article highlighting promising future directions of research.', 'year': 2020, 'in_acl': False, 'citationCount': 199, 'section': 'Metrics', 'subsection': None}, {'id': 259129398, 'paperId': 'a0a79dad89857a96f8f71b14238e5237cbfc4787', 'title': 'Judging LLM-as-a-judge with MT-Bench and Chatbot Arena', 'authors': [{'authorId': '2149970173', 'name': 'Lianmin Zheng'}, {'authorId': '2537924', 'name': 'Wei-Lin Chiang'}, {'authorId': '2209360681', 'name': 'Ying Sheng'}, {'authorId': '92721493', 'name': 'Siyuan Zhuang'}, {'authorId': '1390573666', 'name': 'Zhanghao Wu'}, {'authorId': '2152482391', 'name': 'Yonghao Zhuang'}, {'authorId': '143872641', 'name': 'Zi Lin'}, {'authorId': '2141335450', 'name': 'Zhuohan Li'}, {'authorId': '2117961435', 'name': 'Dacheng Li'}, {'authorId': '143977260', 'name': 'E. Xing'}, {'authorId': '145140331', 'name': 'Haotong Zhang'}, {'authorId': '49988044', 'name': 'Joseph E. Gonzalez'}, {'authorId': '2055174324', 'name': 'Ion Stoica'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'Evaluating large language model (LLM) based chat assistants is challenging due to their broad capabilities and the inadequacy of existing benchmarks in measuring human preferences. To address this, we explore using strong LLMs as judges to evaluate these models on more open-ended questions. We examine the usage and limitations of LLM-as-a-judge, including position, verbosity, and self-enhancement biases, as well as limited reasoning ability, and propose solutions to mitigate some of them. We then verify the agreement between LLM judges and human preferences by introducing two benchmarks: MT-bench, a multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our results reveal that strong LLM judges like GPT-4 can match both controlled and crowdsourced human preferences well, achieving over 80% agreement, the same level of agreement between humans. Hence, LLM-as-a-judge is a scalable and explainable way to approximate human preferences, which are otherwise very expensive to obtain. Additionally, we show our benchmark and traditional benchmarks complement each other by evaluating several variants of LLaMA and Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with human preferences are publicly available at https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.', 'year': 2023, 'in_acl': False, 'citationCount': 2526, 'section': 'Metrics', 'subsection': None}, {'id': 261076362, 'paperId': '9a4765547cb43ab221fe262df7405f6795557d8c', 'title': 'Efficient Benchmarking (of Language Models)', 'authors': [{'authorId': '102376484', 'name': 'Yotam Perlitz'}, {'authorId': '2072249334', 'name': 'Elron Bandel'}, {'authorId': '48835746', 'name': 'Ariel Gera'}, {'authorId': '1454512761', 'name': 'Ofir Arviv'}, {'authorId': '1402680837', 'name': 'L. Ein-Dor'}, {'authorId': '1734246', 'name': 'Eyal Shnarch'}, {'authorId': '1766595', 'name': 'N. Slonim'}, {'authorId': '1397653860', 'name': 'Michal Shmueli-Scheuer'}, {'authorId': '41019330', 'name': 'Leshem Choshen'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'The increasing versatility of language models (LMs) has given rise to a new class of benchmarks that comprehensively assess a broad range of capabilities. Such benchmarks are associated with massive computational costs, extending to thousands of GPU hours per model. However, the efficiency aspect of these evaluation efforts had raised little discussion in the literature.In this work, we present the problem of Efficient Benchmarking, namely, intelligently reducing the computation costs of LM evaluation without compromising reliability. Using the HELM benchmark as a test case, we investigate how different benchmark design choices affect the computation-reliability trade-off. We propose to evaluate the reliability of such decisions, by using a new measure – Decision Impact on Reliability, DIoR for short.We find, for example, that a benchmark leader may change by merely removing a low-ranked model from the benchmark, and observe that a correct benchmark ranking can be obtained by considering only a fraction of the evaluation examples.Based on our findings, we outline a set of concrete recommendations for efficient benchmark design and utilization practices. To take a step further, we use our findings to propose an evaluation algorithm, that, when applied to the HELM benchmark, leads to dramatic cost savings with minimal loss of benchmark reliability, often reducing computation by x100 or more.', 'year': 2023, 'in_acl': True, 'citationCount': 17, 'section': 'Efficient-benchmarking', 'subsection': None}, {'id': 262045288, 'paperId': 'd4085ae0f004624a3141734d3a88a9ebbc803a55', 'title': 'Anchor Points: Benchmarking Models with Much Fewer Examples', 'authors': [{'authorId': '2243189065', 'name': 'Rajan Vivek'}, {'authorId': '10324691', 'name': 'Kawin Ethayarajh'}, {'authorId': '2243367527', 'name': 'Diyi Yang'}, {'authorId': '2111313627', 'name': 'Douwe Kiela'}], 'venue': 'Conference of the European Chapter of the Association for Computational Linguistics', 'abstract': 'Modern language models often exhibit powerful but brittle behavior, leading to the development of larger and more diverse benchmarks to reliably assess their behavior. Here, we suggest that model performance can be benchmarked and elucidated with much smaller evaluation sets. We first show that in six popular language classification benchmarks, model confidence in the correct class on many pairs of points is strongly correlated across models. We build upon this phenomenon to propose Anchor Point Selection, a technique to select small subsets of datasets that capture model behavior across the entire dataset. Anchor points reliably rank models: across 87 diverse language model-prompt pairs, evaluating models using 1-30 anchor points outperforms uniform sampling and other baselines at accurately ranking models. Moreover, just a dozen anchor points can be used to estimate model per-class predictions on all other points in a dataset with low error, sufficient for gauging where the model is likely to fail. Lastly, we present Anchor Point Maps for visualizing these insights and facilitating comparisons of the performance of different models on various regions within the dataset distribution.', 'year': 2023, 'in_acl': True, 'citationCount': 17, 'section': 'Efficient-benchmarking', 'subsection': None}, {'id': 253553585, 'paperId': 'ce913026f693101e54d3ab9152e107034d81fce1', 'title': 'Holistic Evaluation of Language Models', 'authors': [{'authorId': '145419642', 'name': 'Percy Liang'}, {'authorId': '150272855', 'name': 'Rishi Bommasani'}, {'authorId': '2110585783', 'name': 'Tony Lee'}, {'authorId': '2754804', 'name': 'Dimitris Tsipras'}, {'authorId': '1914569491', 'name': 'Dilara Soylu'}, {'authorId': '19168196', 'name': 'Michihiro Yasunaga'}, {'authorId': '9227100', 'name': 'Yian Zhang'}, {'authorId': '22252150', 'name': 'D. Narayanan'}, {'authorId': '3374063', 'name': 'Yuhuai Wu'}, {'authorId': '32423266', 'name': 'Ananya Kumar'}, {'authorId': '51149693', 'name': 'Benjamin Newman'}, {'authorId': '2833699', 'name': 'Binhang Yuan'}, {'authorId': '1748871792', 'name': 'Bobby Yan'}, {'authorId': '2146064162', 'name': 'Ce Zhang'}, {'authorId': '133749287', 'name': 'Christian Cosgrove'}, {'authorId': '144783904', 'name': 'Christopher D. Manning'}, {'authorId': '2061444681', 'name': "Christopher R'e"}, {'authorId': '1413421064', 'name': 'Diana Acosta-Navas'}, {'authorId': '152951058', 'name': 'Drew A. Hudson'}, {'authorId': '49456763', 'name': 'E. Zelikman'}, {'authorId': '41152329', 'name': 'Esin Durmus'}, {'authorId': '8759332', 'name': 'Faisal Ladhak'}, {'authorId': '2047004093', 'name': 'Frieda Rong'}, {'authorId': '40046694', 'name': 'Hongyu Ren'}, {'authorId': '18307037', 'name': 'Huaxiu Yao'}, {'authorId': '39597242', 'name': 'Jue Wang'}, {'authorId': '50818255', 'name': 'Keshav Santhanam'}, {'authorId': '4773175', 'name': 'Laurel J. Orr'}, {'authorId': '2118604716', 'name': 'Lucia Zheng'}, {'authorId': '2186981598', 'name': 'Mert Yuksekgonul'}, {'authorId': '51903517', 'name': 'Mirac Suzgun'}, {'authorId': '2182172863', 'name': 'Nathan S. Kim'}, {'authorId': '2820009', 'name': 'Neel Guha'}, {'authorId': '22193324', 'name': 'Niladri S. Chatterji'}, {'authorId': '144112155', 'name': 'O. Khattab'}, {'authorId': '2071773966', 'name': 'Peter Henderson'}, {'authorId': '144862341', 'name': 'Qian Huang'}, {'authorId': '2121293578', 'name': 'Ryan Chi'}, {'authorId': '46215055', 'name': 'Sang Michael Xie'}, {'authorId': '2852106', 'name': 'Shibani Santurkar'}, {'authorId': '25769960', 'name': 'S. Ganguli'}, {'authorId': '2117567142', 'name': 'Tatsunori Hashimoto'}, {'authorId': '8938047', 'name': 'Thomas F. Icard'}, {'authorId': '123437034', 'name': 'Tianyi Zhang'}, {'authorId': '113810201', 'name': 'Vishrav Chaudhary'}, {'authorId': '2127971344', 'name': 'William Wang'}, {'authorId': '2145429039', 'name': 'Xuechen Li'}, {'authorId': '2054708905', 'name': 'Yifan Mai'}, {'authorId': '49889860', 'name': 'Yuhui Zhang'}, {'authorId': '2740047', 'name': 'Yuta Koreeda'}], 'venue': 'Trans. Mach. Learn. Res.', 'abstract': 'Language models (LMs) like GPT‐3, PaLM, and ChatGPT are the foundation for almost all major language technologies, but their capabilities, limitations, and risks are not well understood. We present Holistic Evaluation of Language Models (HELM) to improve the transparency of LMs. LMs can serve many purposes and their behavior should satisfy many desiderata. To navigate the vast space of potential scenarios and metrics, we taxonomize the space and select representative subsets. We evaluate models on 16 core scenarios and 7 metrics, exposing important trade‐offs. We supplement our core evaluation with seven targeted evaluations to deeply analyze specific aspects (including world knowledge, reasoning, regurgitation of copyrighted content, and generation of disinformation). We benchmark 30 LMs, from OpenAI, Microsoft, Google, Meta, Cohere, AI21 Labs, and others. Prior to HELM, models were evaluated on just 17.9% of the core HELM scenarios, with some prominent models not sharing a single scenario in common. We improve this to 96.0%: all 30 models are now benchmarked under the same standardized conditions. Our evaluation surfaces 25 top‐level findings. For full transparency, we release all raw model prompts and completions publicly. HELM is a living benchmark for the community, continuously updated with new scenarios, metrics, and models https://crfm.stanford.edu/helm/latest/.', 'year': 2023, 'in_acl': False, 'citationCount': 736, 'section': 'Efficient-benchmarking', 'subsection': None}, {'id': 263608566, 'paperId': 'a56c93747666afd534ed8f5c019869bdda673236', 'title': 'Hierarchical Evaluation Framework: Best Practices for Human Evaluation', 'authors': [{'authorId': '48765714', 'name': 'I. Bojic'}, {'authorId': '2253951043', 'name': 'Jessica Chen'}, {'authorId': '2254223460', 'name': 'Si Yuan Chang'}, {'authorId': '2207683983', 'name': 'Qi Chwen Ong'}, {'authorId': '2708940', 'name': 'Shafiq R. Joty'}, {'authorId': '2252857020', 'name': 'Josip Car'}], 'venue': 'HUMEVAL', 'abstract': 'Human evaluation plays a crucial role in Natural Language Processing (NLP) as it assesses the quality and relevance of developed systems, thereby facilitating their enhancement. However, the absence of widely accepted human evaluation metrics in NLP hampers fair comparisons among different systems and the establishment of universal assessment standards. Through an extensive analysis of existing literature on human evaluation metrics, we identified several gaps in NLP evaluation methodologies. These gaps served as motivation for developing our own hierarchical evaluation framework. The proposed framework offers notable advantages, particularly in providing a more comprehensive representation of the NLP system’s performance. We applied this framework to evaluate the developed Machine Reading Comprehension system, which was utilized within a human-AI symbiosis model. The results highlighted the associations between the quality of inputs and outputs, underscoring the necessity to evaluate both components rather than solely focusing on outputs. In future work, we will investigate the potential time-saving benefits of our proposed framework for evaluators assessing NLP systems.', 'year': 2023, 'in_acl': True, 'citationCount': 5, 'section': 'Manual Evaluation', 'subsection': None}, {'id': 259859021, 'paperId': 'd0e67f3a8047547a5e3503a30a21f2896eb0fa85', 'title': 'Non-Repeatable Experiments and Non-Reproducible Results: The Reproducibility Crisis in Human Evaluation in NLP', 'authors': [{'authorId': '41052836', 'name': 'Anya Belz'}, {'authorId': '144556458', 'name': 'Craig Thomson'}, {'authorId': '2113922820', 'name': 'Ehud Reiter'}, {'authorId': '2738095', 'name': 'Simon Mille'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Human evaluation is widely regarded as the lit-mus test of quality in NLP. A basic requirement of all evaluations, but in particular where used for meta-evaluation, is that they should support the same conclusions if repeated. However, the reproducibility of human evaluations is virtually never queried in NLP, let alone formally tested, and their repeatability and reproducibility of results is currently an open question. This paper reports our review of human evaluation experiments published in NLP papers over the past five years which we assessed in terms of (i) their ability to be rerun, and (ii) their re-sults being reproduced where they can be rerun. Overall, we estimate that just 5% of human evaluations are repeatable in the sense that (i) there are no prohibitive barriers to repetition, and (ii) sufficient information about experimental design is publicly available for rerunning them. Our estimate goes up to about 20% when author help is sought. We complement this investigation with a survey of results concerning the reproducibility of human evaluations where those are repeatable in the first place. Here we find worryingly low degrees of reproducibility, both in terms of similarity of scores and of the findings supported by them. We summarise what insights can be gleaned so far regarding how to make human evaluations in NLP more repeatable and more reproducible.', 'year': 2023, 'in_acl': False, 'citationCount': 20, 'section': 'Manual Evaluation', 'subsection': None}]
|
2024.lrec-tutorials.7
|
The DBpedia Databus Tutorial: Increase the Visibility and Usability of Your Data
|
This tutorial introduces DBpedia Databus (https://databus.dbpedia.org), a FAIR data publishing platform, to address challenges faced by data producers and consumers. It covers data organization, publishing, and consumption on the DBpedia Databus, with an exclusive focus on Linguistic Knowledge Graphs. The tutorial offers practical insights for knowledge graph stakeholders, aiding data integration and accessibility in the Linked Open Data community. Designed for a diverse audience, it fosters hands-on learning to familiarize participants with the DBpedia Databus technology.
| 2,024
|
https://aclanthology.org/2024.lrec-tutorials.7
|
LREC
|
[{'id': 219602200, 'paperId': '20aa85c0b0b07c3bef15f435b9d5292781b7c751', 'title': 'The New DBpedia Release Cycle: Increasing Agility and Efficiency in Knowledge Extraction Workflows', 'authors': [{'authorId': '24163430', 'name': 'M. Hofer'}, {'authorId': '2024066', 'name': 'Sebastian Hellmann'}, {'authorId': '1819564', 'name': 'Milan Dojchinovski'}, {'authorId': '32114346', 'name': 'Johannes Frey'}], 'venue': 'International Conference on Semantic Systems', 'abstract': 'Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from 12 to 17 months cycle), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered, its new primary focus is on productivity and agility, to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality-control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. monthly) releases with over 21 billion triples with minimal publishing effort.', 'year': 2020, 'in_acl': False, 'citationCount': 17, 'section': None, 'subsection': None}, {'id': 1181640, 'paperId': 'd2946a868682e4141beabc288d79253ae254c6e1', 'title': 'DBpedia - A large-scale, multilingual knowledge base extracted from Wikipedia', 'authors': [{'authorId': '144568027', 'name': 'Jens Lehmann'}, {'authorId': '2968874', 'name': 'Robert Isele'}, {'authorId': '2065890993', 'name': 'Max Jakob'}, {'authorId': '2856259', 'name': 'Anja Jentzsch'}, {'authorId': '2627116', 'name': 'D. Kontokostas'}, {'authorId': '1692493', 'name': 'Pablo N. Mendes'}, {'authorId': '2024066', 'name': 'Sebastian Hellmann'}, {'authorId': '145022718', 'name': 'M. Morsey'}, {'authorId': '2141758', 'name': 'Patrick van Kleef'}, {'authorId': '145044578', 'name': 'S. Auer'}, {'authorId': '1729154', 'name': 'Christian Bizer'}], 'venue': 'Semantic Web', 'abstract': 'The DBpedia community project extracts structured, multilingual knowledge from Wikipedia and makes it freely available on the Web using Semantic Web and Linked Data technologies. The project extracts knowledge from 111 different language editions of Wikipedia. The largest DBpedia knowledge base which is extracted from the English edition of Wikipedia consists of over 400 million facts that describe 3.7 million things. The DBpedia knowledge bases that are extracted from the other 110 Wikipedia editions together consist of 1.46 billion facts and describe 10 million additional things. The DBpedia project maps Wikipedia infoboxes from 27 different language editions to a single shared ontology consisting of 320 classes and 1,650 properties. The mappings are created via a world-wide crowd-sourcing effort and enable knowledge from the different Wikipedia editions to be combined. The project publishes releases of all DBpedia knowledge bases for download and provides SPARQL query access to 14 out of the 111 language editions via a global network of local DBpedia chapters. In addition to the regular releases, the project maintains a live knowledge base which is updated whenever a page in Wikipedia changes. DBpedia sets 27 million RDF links pointing into over 30 external data sources and thus enables data from these sources to be used together with DBpedia data. Several hundred data sets on the Web publish RDF links pointing to DBpedia themselves and make DBpedia one of the central interlinking hubs in the Linked Open Data (LOD) cloud. In this system report, we give an overview of the DBpedia community project, including its architecture, technical implementation, maintenance, internationalisation, usage statistics and applications.', 'year': 2015, 'in_acl': False, 'citationCount': 3185, 'section': None, 'subsection': None}, {'id': 16081721, 'paperId': '58f72b53d576c6e4a42b4d8812e5542ffa2c03cc', 'title': 'DBpedia - A crystallization point for the Web of Data', 'authors': [{'authorId': '1729154', 'name': 'Christian Bizer'}, {'authorId': '144568027', 'name': 'Jens Lehmann'}, {'authorId': '2051816', 'name': 'Georgi Kobilarov'}, {'authorId': '145044578', 'name': 'S. Auer'}, {'authorId': '2068696031', 'name': 'Christian Becker'}, {'authorId': '1702661', 'name': 'Richard Cyganiak'}, {'authorId': '2024066', 'name': 'Sebastian Hellmann'}], 'venue': 'Journal of Web Semantics', 'abstract': 'The DBpedia project is a community effort to extract structured information from Wikipedia and to make this information accessible on the Web. The resulting DBpedia knowledge base currently describes over 2.6 million entities. For each of these entities, DBpedia defines a globally unique identifier that can be dereferenced over the Web into a rich RDF description of the entity, including human-readable definitions in 30 languages, relationships to other resources, classifications in four concept hierarchies, various facts as well as data-level links to other Web data sources describing the entity. Over the last year, an increasing number of data publishers have begun to set data-level links to DBpedia resources, making DBpedia a central interlinking hub for the emerging Web of Data. Currently, the Web of interlinked data sources around DBpedia provides approximately 4.7 billion pieces of information and covers domains such as geographic information, people, companies, films, music, genes, drugs, books, and scientific publications. This article describes the extraction of the DBpedia knowledge base, the current status of interlinking DBpedia with other data sources on the Web, and gives an overview of applications that facilitate the Web of Data around DBpedia.', 'year': 2009, 'in_acl': False, 'citationCount': 2430, 'section': None, 'subsection': None}]
|
2024.lrec-tutorials.8
|
NLP for Chemistry – Introduction and Recent Advances
|
In this half-day tutorial we will be giving an introductory overview to a number of recent applications of natural language processing to a relatively underrepresented application domain: chemistry. Specifically, we will see how neural language models (transformers) can be applied (oftentimes with near-human performance) to chemical text mining, reaction extraction, or more importantly computational chemistry (forward and backward synthesis of chemical compounds). At the same time, a number of gold standards for experimentation have been made available to the research –academic and otherwise– community. Theoretical results will be, whenever possible, supported by system demonstrations in the form of Jupyter notebooks. This tutorial targets an audience interested in bioinformatics and biomedical applications, but pre-supposes no advanced knowledge of either.
| 2,024
|
https://aclanthology.org/2024.lrec-tutorials.8
|
LREC
|
[{'id': 1427846, 'paperId': '5dd0ae971c88a817bb46160d1afc8af3c09fa69d', 'title': 'Identifying, Indexing, and Ranking Chemical Formulae and Chemical Names in Digital Documents', 'authors': [{'authorId': '47935371', 'name': 'Bingjun Sun'}, {'authorId': '143930195', 'name': 'P. Mitra'}, {'authorId': '145157784', 'name': 'C. Lee Giles'}, {'authorId': '1759717', 'name': 'K. Mueller'}], 'venue': 'TOIS', 'abstract': 'End-users utilize chemical search engines to search for chemical formulae and chemical names. Chemical search engines identify and index chemical formulae and chemical names appearing in text documents to support efficient search and retrieval in the future. Identifying chemical formulae and chemical names in text automatically has been a hard problem that has met with varying degrees of success in the past. We propose algorithms for chemical formula and chemical name tagging using Conditional Random Fields (CRFs) and Support Vector Machines (SVMs) that achieve higher accuracy than existing (published) methods. After chemical entities have been identified in text documents, they must be indexed. In order to support user-provided search queries that require a partial match between the chemical name segment used as a keyword or a partial chemical formula, all possible (or a significant number of) subformulae of formulae that appear in any document and all possible subterms (e.g., “methyl”) of chemical names (e.g., “methylethyl ketone”) must be indexed. Indexing all possible subformulae and subterms results in an exponential increase in the storage and memory requirements as well as the time taken to process the indices. We propose techniques to prune the indices significantly without reducing the quality of the returned results significantly. Finally, we propose multiple query semantics to allow users to pose different types of partial search queries for chemical entities. We demonstrate empirically that our search engines improve the relevance of the returned results for search queries involving chemical entities.', 'year': 2011, 'in_acl': False, 'citationCount': 18, 'section': None, 'subsection': None}, {'id': 232342107, 'paperId': 'e451e1717a8fd4238b7d36e06da478d2d3333f1a', 'title': 'ChEMU 2020: Natural Language Processing Methods Are Effective for Information Extraction From Chemical Patents', 'authors': [{'authorId': '150147667', 'name': 'Jiayuan He'}, {'authorId': '34691913', 'name': 'Dat Quoc Nguyen'}, {'authorId': '2830474', 'name': 'S. Akhondi'}, {'authorId': '150055420', 'name': 'Christian Druckenbrodt'}, {'authorId': '2093331', 'name': 'Camilo Thorne'}, {'authorId': '1630460836', 'name': 'Ralph Hoessel'}, {'authorId': '2685347', 'name': 'Z. Afzal'}, {'authorId': '51230252', 'name': 'Zenan Zhai'}, {'authorId': '153841255', 'name': 'Biaoyan Fang'}, {'authorId': '2082741', 'name': 'Hiyori Yoshikawa'}, {'authorId': '9581515', 'name': 'Ameer Albahem'}, {'authorId': '1788025', 'name': 'L. Cavedon'}, {'authorId': '1630460898', 'name': 'Trevor Cohn'}, {'authorId': '145465286', 'name': 'Timothy Baldwin'}, {'authorId': '144765178', 'name': 'Karin M. Verspoor'}], 'venue': 'Frontiers in Research Metrics and Analytics', 'abstract': 'Chemical patents represent a valuable source of information about new chemical compounds, which is critical to the drug discovery process. Automated information extraction over chemical patents is, however, a challenging task due to the large volume of existing patents and the complex linguistic properties of chemical patents. The Cheminformatics Elsevier Melbourne University (ChEMU) evaluation lab 2020, part of the Conference and Labs of the Evaluation Forum 2020 (CLEF2020), was introduced to support the development of advanced text mining techniques for chemical patents. The ChEMU 2020 lab proposed two fundamental information extraction tasks focusing on chemical reaction processes described in chemical patents: (1) chemical named entity recognition, requiring identification of essential chemical entities and their roles in chemical reactions, as well as reaction conditions; and (2) event extraction, which aims at identification of event steps relating the entities involved in chemical reactions. The ChEMU 2020 lab received 37 team registrations and 46 runs. Overall, the performance of submissions for these tasks exceeded our expectations, with the top systems outperforming strong baselines. We further show the methods to be robust to variations in sampling of the test data. We provide a detailed overview of the ChEMU 2020 corpus and its annotation, showing that inter-annotator agreement is very strong. We also present the methods adopted by participants, provide a detailed analysis of their performance, and carefully consider the potential impact of data leakage on interpretation of the results. The ChEMU 2020 Lab has shown the viability of automated methods to support information extraction of key information in chemical patents.', 'year': 2021, 'in_acl': False, 'citationCount': 31, 'section': None, 'subsection': None}, {'id': 247362020, 'paperId': 'eee7997106834442f1704e4681a9a761df6696a1', 'title': 'Unified Deep Learning Model for Multitask Reaction Predictions with Explanation', 'authors': [{'authorId': '1998959323', 'name': 'Jieyu Lu'}, {'authorId': '1591145699', 'name': 'Yingkai Zhang'}], 'venue': 'Journal of Chemical Information and Modeling', 'abstract': 'There is significant interest and importance to develop robust machine learning models to assist organic chemistry synthesis. Typically, task-specific machine learning models for distinct reaction prediction tasks have been developed. In this work, we develop a unified deep learning model, T5Chem, for a variety of chemical reaction predictions tasks by adapting the "Text-to-Text Transfer Transformer" (T5) framework in natural language processing (NLP). On the basis of self-supervised pretraining with PubChem molecules, the T5Chem model can achieve state-of-the-art performances for four distinct types of task-specific reaction prediction tasks using four different open-source data sets, including reaction type classification on USPTO_TPL, forward reaction prediction on USPTO_MIT, single-step retrosynthesis on USPTO_50k, and reaction yield prediction on high-throughput C-N coupling reactions. Meanwhile, we introduced a new unified multitask reaction prediction data set USPTO_500_MT, which can be used to train and test five different types of reaction tasks, including the above four as well as a new reagent suggestion task. Our results showed that models trained with multiple tasks are more robust and can benefit from mutual learning on related tasks. Furthermore, we demonstrated the use of SHAP (SHapley Additive exPlanations) to explain T5Chem predictions at the functional group level, which provides a way to demystify sequence-based deep learning models in chemistry. T5Chem is accessible through https://yzhang.hpc.nyu.edu/T5Chem.', 'year': 2022, 'in_acl': False, 'citationCount': 57, 'section': None, 'subsection': None}, {'id': 258059792, 'paperId': '354dcdebf3f8b5feeed5c62090e0bc1f0c28db06', 'title': 'Augmenting large language models with chemistry tools', 'authors': [{'authorId': '2216007369', 'name': 'Andrés M Bran'}, {'authorId': '2161337138', 'name': 'Sam Cox'}, {'authorId': '1820929773', 'name': 'Oliver Schilter'}, {'authorId': '2251414370', 'name': 'Carlo Baldassari'}, {'authorId': '2150199535', 'name': 'Andrew D. White'}, {'authorId': '1379965853', 'name': 'P. Schwaller'}], 'venue': 'Nat. Mac. Intell.', 'abstract': 'Large language models (LLMs) have shown strong performance in tasks across domains but struggle with chemistry-related problems. These models also lack access to external knowledge sources, limiting their usefulness in scientific applications. We introduce ChemCrow, an LLM chemistry agent designed to accomplish tasks across organic synthesis, drug discovery and materials design. By integrating 18 expert-designed tools and using GPT-4 as the LLM, ChemCrow augments the LLM performance in chemistry, and new capabilities emerge. Our agent autonomously planned and executed the syntheses of an insect repellent and three organocatalysts and guided the discovery of a novel chromophore. Our evaluation, including both LLM and expert assessments, demonstrates ChemCrow’s effectiveness in automating a diverse set of chemical tasks. Our work not only aids expert chemists and lowers barriers for non-experts but also fosters scientific advancement by bridging the gap between experimental and computational chemistry.', 'year': 2023, 'in_acl': False, 'citationCount': 225, 'section': None, 'subsection': None}]
|
2024.lrec-tutorials.9
|
Formal Semantic Controls over Language Models
|
Text embeddings provide a concise representation of the semantics of sentences and larger spans of text, rather than individual words, capturing a wide range of linguistic features. They have found increasing application to a variety of NLP tasks, including machine translation and natural language inference. While most recent breakthroughs in task performance are being achieved by large scale distributional models, there is a growing disconnection between their knowledge representation and traditional semantics, which hinders efforts to capture such knowledge in human interpretable form or explain model inference behaviour. In this tutorial, we examine from basics to the cutting edge research on the analysis and control of text representations, aiming to shorten the gap between deep latent semantics and formal symbolics. This includes the considerations on knowledge formalisation, the linguistic information that can be extracted and measured from distributional models, and intervention techniques that enable explainable reasoning and controllable text generation, covering methods from pooling to LLM-based.
| 2,024
|
https://aclanthology.org/2024.lrec-tutorials.9
|
LREC
|
[{'id': 393948, 'paperId': '184ac0766262312ba76bbdece4e7ffad0aa8180b', 'title': 'Representation Learning: A Review and New Perspectives', 'authors': [{'authorId': '1751762', 'name': 'Yoshua Bengio'}, {'authorId': '1760871', 'name': 'Aaron C. Courville'}, {'authorId': '145467703', 'name': 'Pascal Vincent'}], 'venue': 'IEEE Transactions on Pattern Analysis and Machine Intelligence', 'abstract': 'The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.', 'year': 2012, 'in_acl': False, 'citationCount': 11774, 'section': None, 'subsection': None}, {'id': 204824113, 'paperId': '530a059cb48477ad1e3d4f8f4b153274c8997332', 'title': 'Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI', 'authors': [{'authorId': '1379511816', 'name': 'Alejandro Barredo Arrieta'}, {'authorId': '2058921025', 'name': 'Natalia Díaz Rodríguez'}, {'authorId': '9221552', 'name': 'J. Ser'}, {'authorId': '1379511786', 'name': 'Adrien Bennetot'}, {'authorId': '3030006', 'name': 'S. Tabik'}, {'authorId': '50449165', 'name': 'A. Barbado'}, {'authorId': '39558258', 'name': 'S. García'}, {'authorId': '1402195255', 'name': 'S. Gil-Lopez'}, {'authorId': '145337392', 'name': 'D. Molina'}, {'authorId': '2445552', 'name': 'Richard Benjamins'}, {'authorId': '2091924780', 'name': 'Raja Chatila'}, {'authorId': '2098723448', 'name': 'Francisco Herrera'}], 'venue': 'Information Fusion', 'abstract': 'In the last few years, Artificial Intelligence (AI) has achieved a notable momentum that, if harnessed appropriately, may deliver the best of expectations over many application sectors across the field. For this to occur shortly in Machine Learning, the entire community stands in front of the barrier of explainability, an inherent problem of the latest techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that were not present in the last hype of AI (namely, expert systems and rule based models). Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in this article examines the existing literature and contributions already done in the field of XAI, including a prospect toward what is yet to be reached. For this purpose we summarize previous efforts made to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers such prior conceptual propositions with a major focus on the audience for which the explainability is sought. Departing from this definition, we propose and discuss about a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods for which a second dedicated taxonomy is built and examined in detail. This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material in order to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.', 'year': 2019, 'in_acl': False, 'citationCount': 5184, 'section': None, 'subsection': None}, {'id': 220305304, 'paperId': '02a715a0bee9065a259f78876d8f4f92090cad01', 'title': 'Representation Learning for Natural Language Processing', 'authors': [{'authorId': '49293587', 'name': 'Zhiyuan Liu'}, {'authorId': '2427350', 'name': 'Yankai Lin'}, {'authorId': '1753344', 'name': 'Maosong Sun'}], 'venue': 'arXiv.org', 'abstract': 'This open access book provides an overview of the recent advances in representation learning theory, algorithms and applications for natural language processing (NLP). It is divided into three parts. Part I presents the representation learning techniques for multiple language entries, including words, phrases, sentences and documents. Part II then introduces the representation techniques for those objects that are closely related to NLP, including entity-based world knowledge, sememe-based linguistic knowledge, networks, and cross-modal entries. Lastly, Part III provides open resource tools for representation learning techniques, and discusses the remaining challenges and future research directions. The theories and algorithms of representation learning presented can also benefit other related domains such as machine learning, social network analysis, semantic Web, information retrieval, data mining and computational biology. This book is intended for advanced undergraduate andgraduate students, post-doctoral fellows, researchers, lecturers, and industrial engineers, as well as anyone interested in representation learning and natural language processing.', 'year': 2021, 'in_acl': False, 'citationCount': 12, 'section': None, 'subsection': None}, {'id': 24461982, 'paperId': 'c41516420ddbd0f29e010ca259a74c1fc2da0466', 'title': 'What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties', 'authors': [{'authorId': '2480903', 'name': 'Alexis Conneau'}, {'authorId': '2067996', 'name': 'Germán Kruszewski'}, {'authorId': '1830914', 'name': 'Guillaume Lample'}, {'authorId': '2934336', 'name': 'Loïc Barrault'}, {'authorId': '145283199', 'name': 'Marco Baroni'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Although much effort has recently been devoted to training high-quality sentence embeddings, we still have a poor understanding of what they are capturing. “Downstream” tasks, often based on sentence classification, are commonly used to evaluate the quality of sentence representations. The complexity of the tasks makes it however difficult to infer what kind of information is present in the representations. We introduce here 10 probing tasks designed to capture simple linguistic features of sentences, and we use them to study embeddings generated by three different encoders trained in eight distinct ways, uncovering intriguing properties of both encoders and training methods.', 'year': 2018, 'in_acl': True, 'citationCount': 845, 'section': None, 'subsection': None}, {'id': 220683893, 'paperId': 'f4c671ba8608d187e3cdba21232489e18fac67e6', 'title': 'Which sentence embeddings and which layers encode syntactic structure?', 'authors': [{'authorId': '2814317', 'name': 'M. Kelly'}, {'authorId': '2115682413', 'name': 'Yang Xu'}, {'authorId': '2053897', 'name': 'Jesús Calvillo'}, {'authorId': '1781409', 'name': 'D. Reitter'}], 'venue': 'Annual Meeting of the Cognitive Science Society', 'abstract': 'Recent models of language have eliminated syntactic-semantic dividing lines. We explore the psycholinguistic implications of this development by comparing different types of sentence embeddings in their ability to encode syntactic constructions. Our study uses contrasting sentence structures known to cause syntactic priming effects, that is, the tendency in humans to repeat sentence structures after recent exposure. We compare how syntactic alternatives are captured by sentence embeddings produced by a neural language model (BERT) or by the composition of word embeddings (BEAGLE, HHM, GloVe). Dative double object vs. prepositional object and active vs. passive sentences are separable in the high-dimensional space of the sentence embeddings and can be classified with a high degree of accuracy. The results lend empirical support to the modern, computational, integrated accounts of semantics and syntax, and they shed light on the information stored at different layers in deep language models such as BERT.', 'year': 2020, 'in_acl': False, 'citationCount': 8, 'section': None, 'subsection': None}, {'id': 227231833, 'paperId': 'c331a3e3e55d95beb8be5cec9ccc772e72b32282', 'title': 'Sentence Analogies: Linguistic Regularities in Sentence Embeddings', 'authors': [{'authorId': '51121868', 'name': 'Xunjie Zhu'}, {'authorId': '144608002', 'name': 'Gerard de Melo'}], 'venue': 'International Conference on Computational Linguistics', 'abstract': 'While important properties of word vector representations have been studied extensively, far less is known about the properties of sentence vector representations. Word vectors are often evaluated by assessing to what degree they exhibit regularities with regard to relationships of the sort considered in word analogies. In this paper, we investigate to what extent commonly used sentence vector representation spaces as well reflect certain kinds of regularities. We propose a number of schemes to induce evaluation data, based on lexical analogy data as well as semantic relationships between sentences. Our experiments consider a wide range of sentence embedding methods, including ones based on BERT-style contextual embeddings. We find that different models differ substantially in their ability to reflect such regularities.', 'year': 2020, 'in_acl': True, 'citationCount': 30, 'section': None, 'subsection': None}, {'id': 247410985, 'paperId': '2886734184eb7efc1dca1b33c35d33bc41cdfe5c', 'title': 'A Sentence is Worth 128 Pseudo Tokens: A Semantic-Aware Contrastive Learning Framework for Sentence Embeddings', 'authors': [{'authorId': '2114894741', 'name': 'Haochen Tan'}, {'authorId': '152348954', 'name': 'Wei Shao'}, {'authorId': '2109295256', 'name': 'Han Wu'}, {'authorId': '2119301597', 'name': 'Ke Yang'}, {'authorId': '38117987', 'name': 'Linqi Song'}], 'venue': 'Findings', 'abstract': 'Contrastive learning has shown great potential in unsupervised sentence embedding tasks, e.g., SimCSE (CITATION).However, these existing solutions are heavily affected by superficial features like the length of sentences or syntactic structures. In this paper, we propose a semantic-aware contrastive learning framework for sentence embeddings, termed Pseudo-Token BERT (PT-BERT), which is able to explore the pseudo-token space (i.e., latent semantic space) representation of a sentence while eliminating the impact of superficial features such as sentence length and syntax. Specifically, we introduce an additional pseudo token embedding layer independent of the BERT encoder to map each sentence into a sequence of pseudo tokens in a fixed length. Leveraging these pseudo sequences, we are able to construct same-length positive and negative pairs based on the attention mechanism to perform contrastive learning. In addition, we utilize both the gradient-updating and momentum-updating encoders to encode instances while dynamically maintaining an additional queue to store the representation of sentence embeddings, enhancing the encoder’s learning performance for negative examples. Experiments show that our model outperforms the state-of-the-art baselines on six standard semantic textual similarity (STS) tasks. Furthermore, experiments on alignments and uniformity losses, as well as hard examples with different sentence lengths and syntax, consistently verify the effectiveness of our method.', 'year': 2022, 'in_acl': True, 'citationCount': 15, 'section': None, 'subsection': None}, {'id': 253224438, 'paperId': 'b7395263bbb1612ab6fdf9ca331792a6542d713a', 'title': 'SBERT studies Meaning Representations: Decomposing Sentence Embeddings into Explainable Semantic Features', 'authors': [{'authorId': '32781138', 'name': 'Juri Opitz'}, {'authorId': '143876555', 'name': 'A. Frank'}], 'venue': 'AACL', 'abstract': 'Models based on large-pretrained language models, such as S(entence)BERT, provide effective and efficient sentence embeddings that show high correlation to human similarity ratings, but lack interpretability. On the other hand, graph metrics for graph-based meaning representations (e.g., Abstract Meaning Representation, AMR) can make explicit the semantic aspects in which two sentences are similar. However, such metrics tend to be slow, rely on parsers, and do not reach state-of-the-art performance when rating sentence similarity. In this work, we aim at the best of both worlds, by learning to induce Semantically Structured Sentence BERT embeddings (S^3BERT). Our S^3BERT embeddings are composed of explainable sub-embeddings that emphasize various sentence meaning features (e.g., semantic roles, negation, or quantification). We show how to i) learn a decomposition of the sentence embeddings into meaning features, through approximation of a suite of interpretable semantic AMR graph metrics, and how to ii) preserve the overall power of the neural embeddings by controlling the decomposition learning process with a second objective that enforces consistency with the similarity ratings of an SBERT teacher model. In our experimental studies, we show that our approach offers interpretability – while preserving the effectiveness and efficiency of the neural sentence embeddings.', 'year': 2022, 'in_acl': True, 'citationCount': 27, 'section': None, 'subsection': None}]
|
2024.lrec-tutorials.10
|
Towards a Human-Computer Collaborative Scientific Paper Lifecycle: A Pilot Study and Hands-On Tutorial
|
Due to the rapid growth of publications varying in quality, there exists a pressing need to help scientists digest and evaluate relevant papers, thereby facilitating scientific discovery. This creates a number of urgent questions; however, computer-human collaboration in the scientific paper lifecycle is still in the exploratory stage and lacks a unified framework for analyzing the relevant tasks. Additionally, with the recent significant success of large language models (LLMs), they have increasingly played an important role in academic writing. In this cutting-edge tutorial, we aim to provide an all-encompassing overview of the paper lifecycle, detailing how machines can augment every stage of the research process for the scientist, including scientific literature understanding, experiment development, manuscript draft writing, and finally draft evaluation. This tutorial is devised for researchers interested in this rapidly-developing field of NLP-augmented paper writing. The tutorial will also feature a session of hands-on exercises during which participants can guide machines in generating ideas and automatically composing key paper elements. Furthermore, we will address current challenges, explore future directions, and discuss potential ethical issues. A toolkit designed for human-computer collaboration throughout the paper lifecycle will also be made publically available.
| 2,024
|
https://aclanthology.org/2024.lrec-tutorials.10
|
LREC
|
[{'id': 221191589, 'paperId': '22c39a725b020a57e4c152333ea702a342eee46c', 'title': 'Scientific Text Mining and Knowledge Graphs', 'authors': [{'authorId': '144812586', 'name': 'Meng Jiang'}, {'authorId': '2884976', 'name': 'Jingbo Shang'}], 'venue': 'Knowledge Discovery and Data Mining', 'abstract': 'Unstructured scientific text, in various forms of textual artifacts, including manuscripts, publications, patents, and proposals, is used to store the tremendous wealth of knowledge discovered after weeks, months, and years, developing hypotheses, working in the lab or clinic, and analyzing results. A grand challenge on data mining research is to develop effective methods for transforming the scientific text into well-structured forms (e.g., ontology, taxonomy, knowledge graphs), so that machine intelligent systems can build on them for hypothesis generation and validation. In this tutorial, we provide a comprehensive overview on recent research and development in this direction. First, we introduce a series of text mining methods that extract phrases, entities, scientific concepts, relations, claims, and experimental evidence. Then we discuss methods that construct and learn from scientific knowledge graphs for accurate search, document classification, and exploratory analysis. Specifically, we focus on scalable, effective, weakly supervised methods that work on text in sciences (e.g., chemistry, biology).', 'year': 2020, 'in_acl': False, 'citationCount': 4, 'section': 'Related Tutorials', 'subsection': None}, {'id': 250390715, 'paperId': '8255ab7a62c31c9a426ecad73143883d47f33f4e', 'title': 'New Frontiers of Information Extraction', 'authors': [{'authorId': '1998918', 'name': 'Muhao Chen'}, {'authorId': '34170717', 'name': 'Lifu Huang'}, {'authorId': '2118482058', 'name': 'Manling Li'}, {'authorId': '2108536188', 'name': 'Ben Zhou'}, {'authorId': '2113323573', 'name': 'Heng Ji'}, {'authorId': '144590225', 'name': 'D. Roth'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'This tutorial targets researchers and practitioners who are interested in AI and ML technologies for structural information extraction (IE) from unstructured textual sources. Particularly, this tutorial will provide audience with a systematic introduction to recent advances of IE, by answering several important research questions. These questions include (i) how to develop an robust IE system from noisy, insufficient training data, while ensuring the reliability of its prediction? (ii) how to foster the generalizability of IE through enhancing the system’s cross-lingual, cross-domain, cross-task and cross-modal transferability? (iii) how to precisely support extracting structural information with extremely fine-grained, diverse and boundless labels? (iv) how to further improve IE by leveraging indirect supervision from other NLP tasks, such as NLI, QA or summarization, and pre-trained language models? (v) how to acquire knowledge to guide the inference of IE systems? We will discuss several lines of frontier research that tackle those challenges, and will conclude the tutorial by outlining directions for further investigation.', 'year': 2022, 'in_acl': True, 'citationCount': 10, 'section': 'Related Tutorials', 'subsection': None}, {'id': 263866951, 'paperId': '277dd00ab02f122133bf56b485dfb7c730acdcde', 'title': 'Retrieval-based Language Models and Applications', 'authors': [{'authorId': '2290402940', 'name': 'Akari Asai'}, {'authorId': '48872685', 'name': 'Sewon Min'}, {'authorId': '49164966', 'name': 'Zexuan Zhong'}, {'authorId': '2286629648', 'name': 'Danqi Chen'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Retrieval-based language models (LMs) have shown impressive performance on diverse NLP tasks. In this tutorial, we will provide a comprehensive and coherent overview of recent advances in retrieval-based LMs. We will start by providing preliminaries covering the foundation of LMs (e.g., masked LMs, autoregressive LMs) and retrieval systems (e.g., nearest-neighbor search). We will then detail recent progress in retrieval-based models, focusing on their model architectures and learning approaches. Finally, we will show how retrieval-based LMs are adapted to downstream applications, and extended to multilingual and multi-modal settings. Finally, we will use an exercise to showcase the effectiveness of retrieval-based LMs.', 'year': 2023, 'in_acl': True, 'citationCount': 60, 'section': 'Related Tutorials', 'subsection': None}, {'id': 236456742, 'paperId': 'f2cbbbbbca2a8b9636eca890dc1d14a9ac50b7a0', 'title': 'Will AI Write Scientific Papers in the Future?', 'authors': [{'authorId': '46701545', 'name': 'Y. Gil'}], 'venue': 'The AI Magazine', 'abstract': 'In this presidential address, I would like to start with a personal reflection on the field and then share with you the research directions I am pursuing and my excitement about the future of AI. In my personal research to advance AI while advancing scientific discoveries, one question that I have been pondering for some years now is whether AI will write scientific papers in the future. I want to reflect on this question, and look back at the many accomplishments in our field that can make us very hopeful that the answer will be yes, and that it may happen sooner than we might expect.', 'year': 2022, 'in_acl': False, 'citationCount': 18, 'section': 'General Guideline', 'subsection': None}, {'id': 231740610, 'paperId': 'cefd3993db4d065b95ab8f105452fb728c02b60e', 'title': 'Can We Automate Scientific Reviewing?', 'authors': [{'authorId': '30300197', 'name': 'Weizhe Yuan'}, {'authorId': '144118452', 'name': 'Pengfei Liu'}, {'authorId': '1700325', 'name': 'Graham Neubig'}], 'venue': 'Journal of Artificial Intelligence Research', 'abstract': 'The rapid development of science and technology has been accompanied by an exponential growth in peer-reviewed scientific publications. At the same time, the review of each paper is a laborious process that must be carried out by subject matter experts. Thus, providing high-quality reviews of this growing number of papers is a significant challenge. In this work, we ask the question “can we automate scientific reviewing? ”, discussing the possibility of using natural language processing (NLP) models to generate peer reviews for scientific papers. Because it is non-trivial to define what a “good” review is in the first place, we first discuss possible evaluation metrics that could be used to judge success in this task. We then focus on the machine learning domain and collect a dataset of papers in the domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers as input and generate reviews as output. Comprehensive experimental results on the test set show that while system-generated reviews are comprehensive, touching upon more aspects of the paper than human-written reviews, the generated texts are less constructive and less factual than human-written reviews for all aspects except the explanation of the core ideas of the papers, which are largely factually correct. Given these results, we pose eight challenges in the pursuit of a good review generation system together with potential solutions, which, hopefully, will inspire more future research in this direction.\nWe make relevant resource publicly available for use by future research: https://github. com/neulab/ReviewAdvisor. In addition, while our conclusion is that the technology is not yet ready for use in high-stakes review settings we provide a system demo, ReviewAdvisor (http://review.nlpedia.ai/), showing the current capabilities and failings of state-of-the-art NLP models at this task (see demo screenshot in A.2). A review of this paper written by the system proposed in this paper can be found in A.1.', 'year': 2021, 'in_acl': False, 'citationCount': 71, 'section': 'General Guideline', 'subsection': None}, {'id': 258361324, 'paperId': 'aa92dc559b8845bf134f3bfad4fc188615453dfb', 'title': 'Science in the age of large language models', 'authors': [{'authorId': '8318698', 'name': 'Abeba Birhane'}, {'authorId': '51880633', 'name': 'Atoosa Kasirzadeh'}, {'authorId': '145664726', 'name': 'David Leslie'}, {'authorId': '12806133', 'name': 'Sandra Wachter'}], 'venue': 'Nature Reviews Physics', 'abstract': 'Rapid advances in the capabilities of large language models and the broad accessibility of tools powered by this technology have led to both excitement and concern regarding their use in science. Four experts in artificial intelligence ethics and policy discuss potential risks and call for careful consideration and responsible usage to ensure that good scientific practices and trust in science are not compromised.', 'year': 2023, 'in_acl': False, 'citationCount': 133, 'section': 'General Guideline', 'subsection': None}, {'id': 257463753, 'paperId': 'da9683e826c37a6383c124b5c6cddefcb35ee8fd', 'title': 'ChatGPT and a new academic reality: Artificial Intelligence‐written research papers and the ethics of the large language models in scholarly publishing', 'authors': [{'authorId': '2000639769', 'name': 'Brady D. Lund'}, {'authorId': '2155389734', 'name': 'Ting Wang'}, {'authorId': '2212887866', 'name': 'Nishith Reddy Mannuru'}, {'authorId': '2058824340', 'name': 'Bing Nie'}, {'authorId': '98041740', 'name': 'S. Shimray'}, {'authorId': '2141037440', 'name': 'Ziang Wang'}], 'venue': 'J. Assoc. Inf. Sci. Technol.', 'abstract': "This article discusses OpenAI's ChatGPT, a generative pre‐trained transformer, which uses natural language processing to fulfill text‐based user requests (i.e., a “chatbot”). The history and principles behind ChatGPT and similar models are discussed. This technology is then discussed in relation to its potential impact on academia and scholarly research and publishing. ChatGPT is seen as a potential model for the automated preparation of essays and other types of scholarly manuscripts. Potential ethical issues that could arise with the emergence of large language models like GPT‐3, the underlying technology behind ChatGPT, and its usage by academics and researchers, are discussed and situated within the context of broader advancements in artificial intelligence, machine learning, and natural language processing for research and scholarly publishing.", 'year': 2023, 'in_acl': False, 'citationCount': 361, 'section': 'General Guideline', 'subsection': None}, {'id': 245769589, 'paperId': '8dc1e4bac2d0403ba4bec7bcb8abb7534c53ab1f', 'title': 'Automatic Related Work Generation: A Meta Study', 'authors': [{'authorId': '89919188', 'name': 'Xiangci Li'}, {'authorId': '2112778', 'name': 'Jessica Ouyang'}], 'venue': 'arXiv.org', 'abstract': 'Academic research is an exploration activity to solve problems that have never been resolved before. By this nature, each academic research work is required to perform a literature review to distinguish its novelties that have not been addressed by prior works. In natural language processing, this literature review is usually conducted under the"Related Work"section. The task of automatic related work generation aims to automatically generate the"Related Work"section given the rest of the research paper and a list of cited papers. Although this task was proposed over 10 years ago, it received little attention until very recently, when it was cast as a variant of the scientific multi-document summarization problem. However, even today, the problems of automatic related work and citation text generation are not yet standardized. In this survey, we conduct a meta-study to compare the existing literature on related work generation from the perspectives of problem formulation, dataset collection, methodological approach, performance evaluation, and future prospects to provide the reader insight into the progress of the state-of-the-art studies, as well as and how future studies can be conducted. We also survey relevant fields of study that we suggest future work to consider integrating.', 'year': 2022, 'in_acl': False, 'citationCount': 9, 'section': 'Survey Papers', 'subsection': None}, {'id': 258947504, 'paperId': '0133c1128f2036ecb6b65ab15c562b71bf4f18a0', 'title': 'Scientific Fact-Checking: A Survey of Resources and Approaches', 'authors': [{'authorId': '2066962303', 'name': 'Juraj Vladika'}, {'authorId': '2522197', 'name': 'F. Matthes'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'The task of fact-checking deals with assessing the veracity of factual claims based on credible evidence and background knowledge. In particular, scientific fact-checking is the variation of the task concerned with verifying claims rooted in scientific knowledge. This task has received significant attention due to the growing importance of scientific and health discussions on online platforms. Automated scientific fact-checking methods based on NLP can help combat the spread of misinformation, assist researchers in knowledge discovery, and help individuals understand new scientific breakthroughs. In this paper, we present a comprehensive survey of existing research in this emerging field and its related tasks. We provide a task description, discuss the construction process of existing datasets, and analyze proposed models and approaches. Based on our findings, we identify intriguing challenges and outline potential future directions to advance the field.', 'year': 2023, 'in_acl': False, 'citationCount': 34, 'section': 'Survey Papers', 'subsection': None}, {'id': 248512482, 'paperId': '6c5c6f883604a3abaa829b83d2958de8c343beeb', 'title': 'A Computational Inflection for Scientific Discovery', 'authors': [{'authorId': '2041698667', 'name': 'Tom Hope'}, {'authorId': '145612610', 'name': 'Doug Downey'}, {'authorId': '1741101', 'name': 'Oren Etzioni'}, {'authorId': '1780531', 'name': 'Daniel S. Weld'}, {'authorId': '145479841', 'name': 'E. Horvitz'}], 'venue': 'Communications of the ACM', 'abstract': 'Enabling researchers to leverage systems to overcome the limits of human cognitive capacity.', 'year': 2022, 'in_acl': False, 'citationCount': 25, 'section': 'Survey Papers', 'subsection': None}, {'id': 52118895, 'paperId': 'b21b927c251c415b601b6d7f785a42cc5c292635', 'title': 'Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction', 'authors': [{'authorId': '145081697', 'name': 'Yi Luan'}, {'authorId': '2265599', 'name': 'Luheng He'}, {'authorId': '144339506', 'name': 'Mari Ostendorf'}, {'authorId': '2548384', 'name': 'Hannaneh Hajishirzi'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'We introduce a multi-task setup of identifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks and develop a unified framework called SciIE with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.', 'year': 2018, 'in_acl': True, 'citationCount': 627, 'section': 'Scientific IE', 'subsection': None}, {'id': 218470122, 'paperId': 'e99a259299d4d555ee4c354f2095ab4401369c82', 'title': 'SciREX: A Challenge Dataset for Document-Level Information Extraction', 'authors': [{'authorId': '49837811', 'name': 'Sarthak Jain'}, {'authorId': '15292561', 'name': 'Madeleine van Zuylen'}, {'authorId': '2548384', 'name': 'Hannaneh Hajishirzi'}, {'authorId': '46181066', 'name': 'Iz Beltagy'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Extracting information from full documents is an important problem in many domains, but most previous work focus on identifying relationships within a sentence or a paragraph. It is challenging to create a large-scale information extraction (IE) dataset at the document level since it requires an understanding of the whole document to annotate entities and their document-level relationships that usually span beyond sentences or even sections. In this paper, we introduce SciREX, a document level IE dataset that encompasses multiple IE tasks, including salient entity identification and document level N-ary relation identification from scientific articles. We annotate our dataset by integrating automatic and human annotations, leveraging existing scientific knowledge resources. We develop a neural model as a strong baseline that extends previous state-of-the-art IE models to document-level IE. Analyzing the model performance shows a significant gap between human performance and current baselines, inviting the community to use our dataset as a challenge to develop document-level IE models. Our data and code are publicly available at https://github.com/allenai/SciREX .', 'year': 2020, 'in_acl': True, 'citationCount': 145, 'section': 'Scientific IE', 'subsection': None}, {'id': 245704273, 'paperId': 'f95620883ce631dcca296d6301ab094555a9b1c4', 'title': 'VILA: Improving Structured Content Extraction from Scientific PDFs Using Visual Layout Groups', 'authors': [{'authorId': '101568984', 'name': 'Zejiang Shen'}, {'authorId': '46258841', 'name': 'Kyle Lo'}, {'authorId': '31860505', 'name': 'Lucy Lu Wang'}, {'authorId': '2003338023', 'name': 'Bailey Kuehl'}, {'authorId': '1780531', 'name': 'Daniel S. Weld'}, {'authorId': '145612610', 'name': 'Doug Downey'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'Accurately extracting structured content from PDFs is a critical first step for NLP over scientific papers. Recent work has improved extraction accuracy by incorporating elementary layout information, for example, each token’s 2D position on the page, into language model pretraining. We introduce new methods that explicitly model VIsual LAyout (VILA) groups, that is, text lines or text blocks, to further improve performance. In our I-VILA approach, we show that simply inserting special tokens denoting layout group boundaries into model inputs can lead to a 1.9% Macro F1 improvement in token classification. In the H-VILA approach, we show that hierarchical encoding of layout-groups can result in up to 47% inference time reduction with less than 0.8% Macro F1 loss. Unlike prior layout-aware approaches, our methods do not require expensive additional pretraining, only fine-tuning, which we show can reduce training cost by up to 95%. Experiments are conducted on a newly curated evaluation suite, S2-VLUE, that unifies existing automatically labeled datasets and includes a new dataset of manual annotations covering diverse papers from 19 scientific disciplines. Pre-trained weights, benchmark datasets, and source code are available at https://github.com/allenai/VILA.', 'year': 2021, 'in_acl': True, 'citationCount': 33, 'section': 'Scientific IE', 'subsection': None}, {'id': 258685532, 'paperId': '049288e68caeadf7842df6977e140b47a8a2f89d', 'title': 'MatSci-NLP: Evaluating Scientific Language Models on Materials Science Language Tasks Using Text-to-Schema Modeling', 'authors': [{'authorId': '2152602955', 'name': 'Yurun Song'}, {'authorId': '51895312', 'name': 'Santiago Miret'}, {'authorId': '2116441692', 'name': 'Bang Liu'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We present MatSci-NLP, a natural language benchmark for evaluating the performance of natural language processing (NLP) models on materials science text. We construct the benchmark from publicly available materials science text data to encompass seven different NLP tasks, including conventional NLP tasks like named entity recognition and relation classification, as well as NLP tasks specific to materials science, such as synthesis action retrieval which relates to creating synthesis procedures for materials. We study various BERT-based models pretrained on different scientific text corpora on MatSci-NLP to understand the impact of pretraining strategies on understanding materials science text. Given the scarcity of high-quality annotated data in the materials science domain, we perform our fine-tuning experiments with limited training data to encourage the generalize across MatSci-NLP tasks.Our experiments in this low-resource training setting show that language models pretrained on scientific text outperform BERT trained on general text. MatBERT, a model pretrained specifically on materials science journals, generally performs best for most tasks. Moreover, we propose a unified text-to-schema for multitask learning on {pasted macro ‘BENCHMARK’} and compare its performance with traditional fine-tuning methods. In our analysis of different training methods, we find that our proposed text-to-schema methods inspired by question-answering consistently outperform single and multitask NLP fine-tuning methods. The code and datasets are publicly available https://github.com/BangLab-UdeM-Mila/NLP4MatSci-ACL23.', 'year': 2023, 'in_acl': True, 'citationCount': 22, 'section': 'Scientific IE', 'subsection': None}, {'id': 220056972, 'paperId': '824636e935807dab178a30622383647686f98085', 'title': 'EVIDENCEMINER: Textual Evidence Discovery for Life Sciences', 'authors': [{'authorId': '2154990549', 'name': 'Xuan Wang'}, {'authorId': '2069571239', 'name': 'Yingjun Guan'}, {'authorId': '2109300810', 'name': 'Weili Liu'}, {'authorId': '72446317', 'name': 'Aabhas Chauhan'}, {'authorId': '1488691379', 'name': 'Enyi Jiang'}, {'authorId': '37696683', 'name': 'Qi Li'}, {'authorId': '72861332', 'name': 'D. Liem'}, {'authorId': '41130227', 'name': 'Dibakar Sigdel'}, {'authorId': '145710797', 'name': 'J. Caufield'}, {'authorId': '3023770', 'name': 'P. Ping'}, {'authorId': '153034701', 'name': 'Jiawei Han'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Traditional search engines for life sciences (e.g., PubMed) are designed for document retrieval and do not allow direct retrieval of specific statements. Some of these statements may serve as textual evidence that is key to tasks such as hypothesis generation and new finding validation. We present EVIDENCEMINER, a web-based system that lets users query a natural language statement and automatically retrieves textual evidence from a background corpora for life sciences. EVIDENCEMINER is constructed in a completely automated way without any human effort for training data annotation. It is supported by novel data-driven methods for distantly supervised named entity recognition and open information extraction. The entities and patterns are pre-computed and indexed offline to support fast online evidence retrieval. The annotation results are also highlighted in the original document for better visualization. EVIDENCEMINER also includes analytic functionalities such as the most frequent entity and relation summarization. EVIDENCEMINER can help scientists uncover important research issues, leading to more effective research and more in-depth quantitative analysis. The system of EVIDENCEMINER is available at https://evidenceminer.firebaseapp.com/.', 'year': 2020, 'in_acl': True, 'citationCount': 13, 'section': 'Scientific IR', 'subsection': None}, {'id': 250286985, 'paperId': '24a95ff7a4f37d7d05f30e60dae40a576f49eeda', 'title': 'KID-Review: Knowledge-Guided Scientific Review Generation with Oracle Pre-training', 'authors': [{'authorId': '30300197', 'name': 'Weizhe Yuan'}, {'authorId': '144118452', 'name': 'Pengfei Liu'}], 'venue': 'AAAI Conference on Artificial Intelligence', 'abstract': 'The surge in the number of scientific submissions has brought challenges to the work of peer review. In this paper, as a first step, we explore the possibility of designing an automated system, which is not meant to replace humans, but rather providing a first-pass draft for a machine-assisted human review process. Specifically, we present an end-to-end knowledge-guided review generation framework for scientific papers grounded in cognitive psychology research that a better understanding of text requires different types of knowledge. In practice, we found that this seemingly intuitive idea suffered from training difficulties. In order to solve this problem, we put forward an oracle pre-training strategy, which can not only make the Kid-Review better educated but also make the generated review cover more aspects. Experimentally, we perform a comprehensive evaluation (human and automatic) from different perspectives. Empirical results have shown the effectiveness of different types of knowledge as well as oracle pre-training. We make all code, relevant dataset available: https://github.com/Anonymous4nlp233/KIDReview as well as the Kid-Review system: http://nlpeer.reviews.', 'year': 2022, 'in_acl': False, 'citationCount': 5, 'section': 'Review Generation', 'subsection': None}, {'id': 252682946, 'paperId': '07dc375b95aaeb748d7b0560bfa7d81f1bddc8b2', 'title': 'Forecasting the future of artificial intelligence with machine learning-based link prediction in an exponentially growing knowledge network', 'authors': [{'authorId': '5906965', 'name': 'Mario Krenn'}, {'authorId': '102683668', 'name': 'L. Buffoni'}, {'authorId': '145360457', 'name': 'B. Coutinho'}, {'authorId': '2981096', 'name': 'S. Eppel'}, {'authorId': '40104995', 'name': 'J. Foster'}, {'authorId': '2299002545', 'name': 'Andrew Gritsevskiy'}, {'authorId': '72152328', 'name': 'Harlin Lee'}, {'authorId': '2141583641', 'name': 'Yichao Lu'}, {'authorId': '2095423656', 'name': 'João P. Moutinho'}, {'authorId': '84710505', 'name': 'Nima Sanjabi'}, {'authorId': '51129341', 'name': 'Rishi Sonthalia'}, {'authorId': '2150443015', 'name': 'Ngoc M. Tran'}, {'authorId': '2149984387', 'name': 'Francisco Valente'}, {'authorId': '2154871103', 'name': 'Yangxinyu Xie'}, {'authorId': '2151886670', 'name': 'Rose Yu'}, {'authorId': '2058236577', 'name': 'Michael Kopp'}], 'venue': 'Nature Machine Intelligence', 'abstract': 'A tool that could suggest new personalized research directions and ideas by taking insights from the scientific literature could profoundly accelerate the progress of science. A field that might benefit from such an approach is artificial intelligence (AI) research, where the number of scientific publications has been growing exponentially over recent years, making it challenging for human researchers to keep track of the progress. Here we use AI techniques to predict the future research directions of AI itself. We introduce a graph-based benchmark based on real-world data—the Science4Cast benchmark, which aims to predict the future state of an evolving semantic network of AI. For that, we use more than 143,000 research papers and build up a knowledge network with more than 64,000 concept nodes. We then present ten diverse methods to tackle this task, ranging from pure statistical to pure learning methods. Surprisingly, the most powerful methods use a carefully curated set of network features, rather than an end-to-end AI approach. These results indicate a great potential that can be unleashed for purely ML approaches without human knowledge. Ultimately, better predictions of new future research directions will be a crucial component of more advanced research suggestion tools.', 'year': 2022, 'in_acl': False, 'citationCount': 31, 'section': 'Hypothesis Generation', 'subsection': None}, {'id': 232126179, 'paperId': '278ecb42fc5ba89d710055a7056384c22886c883', 'title': 'AutoCite: Multi-Modal Representation Fusion for Contextual Citation Generation', 'authors': [{'authorId': '35832075', 'name': 'Qingqin Wang'}, {'authorId': '33629364', 'name': 'Yun Xiong'}, {'authorId': '49889358', 'name': 'Yao Zhang'}, {'authorId': '1718428', 'name': 'Jiawei Zhang'}, {'authorId': '8247706', 'name': 'Yangyong Zhu'}], 'venue': 'Web Search and Data Mining', 'abstract': "Citing comprehensive and correct related work is crucial in academic writing. It can not only support the author's claims but also help readers trace other related research papers. Nowadays, with the rapid increase in the number of scientific literatures, it has become increasingly challenging to search for high-quality citations and write the manuscript. In this paper, we present an automatic writing assistant model, AutoCite, which not only infers potentially related work but also automatically generates the citation context at the same time. Specifically, AutoCite involves a novel multi-modal encoder and a multi-task decoder architecture. Based on the multi-modal inputs, the encoder in AutoCite learns paper representations with both citation network structure and textual contexts. The multi-task decoder in AutoCite couples and jointly learns citation prediction and context generation in a unified manner. To effectively join the encoder and decoder, we introduce a novel representation fusion component, i.e., gated neural fusion, which feeds the multi-modal representation inputs from the encoder and creates outputs for the downstream multi-task decoder adaptively. Extensive experiments on five real-world citation network datasets validate the effectiveness of our model.", 'year': 2021, 'in_acl': False, 'citationCount': 8, 'section': 'Paper Draft Generation', 'subsection': None}]
|
2024.lrec-tutorials.11
|
Tutorial Proposal: Hallucination in Large Language Models
|
In the fast-paced domain of Large Language Models (LLMs), the issue of hallucination is a prominent challenge. Despite continuous endeavors to address this concern, it remains a highly active area of research within the LLM landscape. Grasping the intricacies of this problem can be daunting, especially for those new to the field. This tutorial aims to bridge this knowledge gap by introducing the emerging realm of hallucination in LLMs. It will comprehensively explore the key aspects of hallucination, including benchmarking, detection, and mitigation techniques. Furthermore, we will delve into the specific constraints and shortcomings of current approaches, providing valuable insights to guide future research efforts for participants.
| 2,024
|
https://aclanthology.org/2024.lrec-tutorials.11
|
LREC
|
[{'id': 261530162, 'paperId': 'd00735241af700d21762d2f3ca00d920241a15a4', 'title': "Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models", 'authors': [{'authorId': '1895977079', 'name': 'Yue Zhang'}, {'authorId': '2110450452', 'name': 'Yafu Li'}, {'authorId': '152496687', 'name': 'Leyang Cui'}, {'authorId': '1724421', 'name': 'Deng Cai'}, {'authorId': '2978364', 'name': 'Lemao Liu'}, {'authorId': '2156525869', 'name': 'Tingchen Fu'}, {'authorId': '14799547', 'name': 'Xinting Huang'}, {'authorId': '2065703096', 'name': 'Enbo Zhao'}, {'authorId': '2257439415', 'name': 'Yu Zhang'}, {'authorId': '2109404730', 'name': 'Yulong Chen'}, {'authorId': '1800190', 'name': 'Longyue Wang'}, {'authorId': '1755919', 'name': 'A. Luu'}, {'authorId': '2237804371', 'name': 'Wei Bi'}, {'authorId': '8815141', 'name': 'Freda Shi'}, {'authorId': '34720053', 'name': 'Shuming Shi'}], 'venue': 'arXiv.org', 'abstract': 'While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge. This phenomenon poses a substantial challenge to the reliability of LLMs in real-world scenarios. In this paper, we survey recent efforts on the detection, explanation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing approaches aiming at mitigating LLM hallucination, and discuss potential directions for future research.', 'year': 2023, 'in_acl': False, 'citationCount': 355, 'section': 'Hallucination in Large Language Models', 'subsection': None}, {'id': 261705916, 'paperId': '396305230ddcf915b19a19683a89e34d76321a33', 'title': 'Cognitive Mirage: A Review of Hallucinations in Large Language Models', 'authors': [{'authorId': '2239197934', 'name': 'Hongbin Ye'}, {'authorId': '2239249506', 'name': 'Tong Liu'}, {'authorId': '2239587085', 'name': 'Aijia Zhang'}, {'authorId': '2239199462', 'name': 'Wei Hua'}, {'authorId': '2239200814', 'name': 'Weiqiang Jia'}], 'venue': 'arXiv.org', 'abstract': 'As large language models continue to develop in the field of AI, text generation systems are susceptible to a worrisome phenomenon known as hallucination. In this study, we summarize recent compelling insights into hallucinations in LLMs. We present a novel taxonomy of hallucinations from various text generation tasks, thus provide theoretical insights, detection methods and improvement approaches. Based on this, future research directions are proposed. Our contribution are threefold: (1) We provide a detailed and complete taxonomy for hallucinations appearing in text generation tasks; (2) We provide theoretical analyses of hallucinations in LLMs and provide existing detection and improvement methods; (3) We propose several research directions that can be developed in the future. As hallucinations garner significant attention from the community, we will maintain updates on relevant research progress.', 'year': 2023, 'in_acl': False, 'citationCount': 58, 'section': 'Hallucination in Large Language Models', 'subsection': None}, {'id': 261696947, 'paperId': '71bc0c97c20fffce796a355b16bd202987260029', 'title': 'A Survey of Hallucination in Large Foundation Models', 'authors': [{'authorId': '9460529', 'name': 'Vipula Rawte'}, {'authorId': '144463965', 'name': 'A. Sheth'}, {'authorId': '48806891', 'name': 'Amitava Das'}], 'venue': 'arXiv.org', 'abstract': "Hallucination in a foundation model (FM) refers to the generation of content that strays from factual reality or includes fabricated information. This survey paper provides an extensive overview of recent efforts that aim to identify, elucidate, and tackle the problem of hallucination, with a particular focus on ``Large'' Foundation Models (LFMs). The paper classifies various types of hallucination phenomena that are specific to LFMs and establishes evaluation criteria for assessing the extent of hallucination. It also examines existing strategies for mitigating hallucination in LFMs and discusses potential directions for future research in this area. Essentially, the paper offers a comprehensive examination of the challenges and solutions related to hallucination in LFMs.", 'year': 2023, 'in_acl': False, 'citationCount': 255, 'section': 'Hallucination in Large Foundation Models', 'subsection': None}]
|
2024.lrec-tutorials.13
|
Knowledge-enhanced Response Generation in Dialogue Systems: Current Advancements and Emerging Horizons
|
This tutorial provides an in-depth exploration of Knowledge-enhanced Dialogue Systems (KEDS), diving into their foundational aspects, methodologies, advantages, and practical applications. Topics include the distinction between internal and external knowledge integration, diverse methodologies employed in grounding dialogues, and innovative approaches to leveraging knowledge graphs for enhanced conversation quality. Furthermore, the tutorial touches upon the rise of biomedical text mining, the advent of domain-specific language models, and the challenges and strategies specific to medical dialogue generation. The primary objective is to give attendees a comprehensive understanding of KEDS. By delineating the nuances of these systems, the tutorial aims to elucidate their significance, highlight advancements made using deep learning, and pinpoint the current challenges. Special emphasis is placed on showcasing how KEDS can be fine-tuned for domain-specific requirements, with a spotlight on the healthcare sector. The tutorial is crafted for both beginners and intermediate researchers in the dialogue systems domain, with a focus on those keen on advancing research in KEDS. It will also be valuable for practitioners in sectors like healthcare, seeking to integrate advanced dialogue systems.
| 2,024
|
https://aclanthology.org/2024.lrec-tutorials.13
|
LREC
|
[{'id': 5523008, 'paperId': 'a6401e102c03a441992b3e45f7b63eec09d4b89d', 'title': 'A Survey on Dialogue Systems: Recent Advances and New Frontiers', 'authors': [{'authorId': '2957953', 'name': 'Hongshen Chen'}, {'authorId': '1390612725', 'name': 'Xiaorui Liu'}, {'authorId': '50559722', 'name': 'Dawei Yin'}, {'authorId': '1736632', 'name': 'Jiliang Tang'}], 'venue': 'SKDD', 'abstract': 'Dialogue systems have attracted more and more attention. Recent advances on dialogue systems are overwhelmingly contributed by deep learning techniques, which have been employed to enhance a wide range of big data applications such as computer vision, natural language processing, and recommender systems. For dialogue systems, deep learning can leverage a massive amount of data to learn meaningful feature representations and response generation strategies, while requiring a minimum amount of hand-crafting. In this article, we give an overview to these recent advances on dialogue systems from various perspectives and discuss some possible research directions. In particular, we generally divide existing dialogue systems into task-oriented and nontask- oriented models, then detail how deep learning techniques help them with representative algorithms and finally discuss some appealing research directions that can bring the dialogue system research into a new frontier', 'year': 2017, 'in_acl': False, 'citationCount': 659, 'section': 'Survey', 'subsection': None}, {'id': 219178913, 'paperId': '77b101d2c0f3d2842edb4acdbca0c4e859cda4d5', 'title': 'A survey on empathetic dialogue systems', 'authors': [{'authorId': '145921076', 'name': 'Yukun Ma'}, {'authorId': '2055542232', 'name': 'Khanh Linh Nguyen'}, {'authorId': '121112586', 'name': 'Frank Xing'}, {'authorId': '49943757', 'name': 'E. Cambria'}], 'venue': 'Information Fusion', 'abstract': 'Dialogue systems have achieved growing success in many areas thanks to the rapid advances of machine learning techniques. In the quest for generating more human-like conversations, one of the major challenges is to learn to generate responses in a more empathetic manner. In this review article, we focus on the literature of empathetic dialogue systems, whose goal is to enhance the perception and expression of emotional states, personal preference, and knowledge. Accordingly, we identify three key features that underpin such systems: emotion-awareness, personality-awareness, and knowledge-accessibility. The main goal of this review is to serve as a comprehensive guide to research and development on empathetic dialogue systems and to suggest future directions in this domain.', 'year': 2020, 'in_acl': False, 'citationCount': 183, 'section': 'Survey', 'subsection': None}, {'id': 222272210, 'paperId': 'c845494445f3bfa01d8245a4759b144e27aa3788', 'title': 'A Survey of Knowledge-enhanced Text Generation', 'authors': [{'authorId': '38767143', 'name': 'W. Yu'}, {'authorId': '70461341', 'name': 'Wenhao Yu'}, {'authorId': '8652308', 'name': 'Chenguang Zhu'}, {'authorId': '1993150474', 'name': 'Zaitang Li'}, {'authorId': '2749311', 'name': 'Zhiting Hu'}, {'authorId': '1786863', 'name': 'Qingyun Wang'}, {'authorId': '2113323573', 'name': 'Heng Ji'}, {'authorId': '1470716407', 'name': 'Meng Jiang'}], 'venue': 'ACM Computing Surveys', 'abstract': 'The goal of text-to-text generation is to make machines express like a human in many applications such as conversation, summarization, and translation. It is one of the most important yet challenging tasks in natural language processing (NLP). Various neural encoder-decoder models have been proposed to achieve the goal by learning to map input text to output text. However, the input text alone often provides limited knowledge to generate the desired output, so the performance of text generation is still far from satisfaction in many real-world scenarios. To address this issue, researchers have considered incorporating (i) internal knowledge embedded in the input text and (ii) external knowledge from outside sources such as knowledge base and knowledge graph into the text generation system. This research topic is known as knowledge-enhanced text generation. In this survey, we present a comprehensive review of the research on this topic over the past five years. The main content includes two parts: (i) general methods and architectures for integrating knowledge into text generation; (ii) specific techniques and applications according to different forms of knowledge data. This survey can have broad audiences, researchers and practitioners, in academia and industry.', 'year': 2020, 'in_acl': False, 'citationCount': 232, 'section': 'Survey', 'subsection': None}, {'id': 216641884, 'paperId': '35763b04ca7ec8d1cbdacb3be5635015d2f7ad9b', 'title': 'A Survey of Document Grounded Dialogue Systems (DGDS)', 'authors': [{'authorId': '153132928', 'name': 'Longxuan Ma'}, {'authorId': '1806419', 'name': 'Weinan Zhang'}, {'authorId': None, 'name': 'Mingda Li'}, {'authorId': '40282288', 'name': 'Ting Liu'}], 'venue': 'arXiv.org', 'abstract': 'Dialogue system (DS) attracts great attention from industry and academia because of its wide application prospects. Researchers usually divide the DS according to the function. However, many conversations require the DS to switch between different functions. For example, movie discussion can change from chit-chat to QA, the conversational recommendation can transform from chit-chat to recommendation, etc. Therefore, classification according to functions may not be enough to help us appreciate the current development trend. We classify the DS based on background knowledge. Specifically, study the latest DS based on the unstructured document(s). We define Document Grounded Dialogue System (DGDS) as the DS that the dialogues are centering on the given document(s). The DGDS can be used in scenarios such as talking over merchandise against product Manual, commenting on news reports, etc. We believe that extracting unstructured document(s) information is the future trend of the DS because a great amount of human knowledge lies in these document(s). The research of the DGDS not only possesses a broad application prospect but also facilitates AI to better understand human knowledge and natural language. We analyze the classification, architecture, datasets, models, and future development trends of the DGDS, hoping to help researchers in this field.', 'year': 2020, 'in_acl': False, 'citationCount': 19, 'section': 'Survey', 'subsection': None}, {'id': 248780269, 'paperId': '04aa1605c650bee77e09ad61c4e894ecb9f543a8', 'title': 'Knowledge Enhanced Reflection Generation for Counseling Dialogues', 'authors': [{'authorId': '2072820796', 'name': 'Siqi Shen'}, {'authorId': '1396239754', 'name': 'Verónica Pérez-Rosas'}, {'authorId': '145645240', 'name': 'Charles F Welch'}, {'authorId': '1746416', 'name': 'Soujanya Poria'}, {'authorId': '2105984203', 'name': 'Rada Mihalcea'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'In this paper, we study the effect of commonsense and domain knowledge while generating responses in counseling conversations using retrieval and generative methods for knowledge integration. We propose a pipeline that collects domain knowledge through web mining, and show that retrieval from both domain-specific and commonsense knowledge bases improves the quality of generated responses. We also present a model that incorporates knowledge generated by COMET using soft positional encoding and masked self-attention.We show that both retrieved and COMET-generated knowledge improve the system’s performance as measured by automatic metrics and also by human evaluation. Lastly, we present a comparative study on the types of knowledge encoded by our system showing that causal and intentional relationships benefit the generation task more than other types of commonsense relations.', 'year': 2022, 'in_acl': True, 'citationCount': 19, 'section': 'Knowledge-enhanced Response Generation', 'subsection': None}, {'id': 248266574, 'paperId': '3d6b094f439ceae770ad1ca5cb322421debf3ba8', 'title': 'DialoKG: Knowledge-Structure Aware Task-Oriented Dialogue Generation', 'authors': [{'authorId': '120441491', 'name': 'Md. Rashad Al Hasan Rony'}, {'authorId': '2370666', 'name': 'Ricardo Usbeck'}, {'authorId': '71564931', 'name': 'Jens Lehmann'}], 'venue': 'NAACL-HLT', 'abstract': "Task-oriented dialogue generation is challenging since the underlying knowledge is often dynamic and effectively incorporating knowledge into the learning process is hard. It is particularly challenging to generate both human-like and informative responses in this setting. Recent research primarily focused on various knowledge distillation methods where the underlying relationship between the facts in a knowledge base is not effectively captured. In this paper, we go one step further and demonstrate how the structural information of a knowledge graph can improve the system's inference capabilities. Specifically, we propose DialoKG, a novel task-oriented dialogue system that effectively incorporates knowledge into a language model. Our proposed system views relational knowledge as a knowledge graph and introduces (1) a structure-aware knowledge embedding technique, and (2) a knowledge graph-weighted attention masking strategy to facilitate the system selecting relevant information during the dialogue generation. An empirical evaluation demonstrates the effectiveness of DialoKG over state-of-the-art methods on several standard benchmark datasets.", 'year': 2022, 'in_acl': False, 'citationCount': 30, 'section': 'Knowledge-enhanced Response Generation', 'subsection': None}, {'id': 227033623, 'paperId': '0ed502bc03b3fb6d7c4356d0daf34bd915daeb91', 'title': 'MedDialog: A Large-scale Medical Dialogue Dataset', 'authors': [{'authorId': '2061248094', 'name': 'Guangtao Zeng'}, {'authorId': '32412901', 'name': 'Wenmian Yang'}, {'authorId': '1613055688', 'name': 'Zeqian Ju'}, {'authorId': '2109410479', 'name': 'Yue Yang'}, {'authorId': '2116422777', 'name': 'Sicheng Wang'}, {'authorId': '3483566', 'name': 'Ruisi Zhang'}, {'authorId': '2112494296', 'name': 'Meng Zhou'}, {'authorId': '2072984384', 'name': 'Jiaqi Zeng'}, {'authorId': '151257356', 'name': 'Xiangyu Dong'}, {'authorId': '2110065346', 'name': 'Ruoyu Zhang'}, {'authorId': '122851213', 'name': 'Hongchao Fang'}, {'authorId': '11243844', 'name': 'Penghui Zhu'}, {'authorId': '2107976513', 'name': 'Shu Chen'}, {'authorId': '40526720', 'name': 'P. Xie'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Medical dialogue systems are promising in assisting in telemedicine to increase access to healthcare services, improve the quality of patient care, and reduce medical costs. To facilitate the research and development of medical dialogue systems, we build large-scale medical dialogue datasets – MedDialog, which contain 1) a Chinese dataset with 3.4 million conversations between patients and doctors, 11.3 million utterances, 660.2 million tokens, covering 172 specialties of diseases, and 2) an English dataset with 0.26 million conversations, 0.51 million utterances, 44.53 million tokens, covering 96 specialties of diseases. To our best knowledge, MedDialog is the largest medical dialogue dataset to date. We pretrain several dialogue generation models on the Chinese MedDialog dataset, including Transformer, GPT, BERT-GPT, and compare their performance. It is shown that models trained on MedDialog are able to generate clinically correct and doctor-like medical dialogues. We also study the transferability of models trained on MedDialog to low-resource medical dialogue generation tasks. It is shown that via transfer learning which finetunes the models pretrained on MedDialog, the performance on medical dialogue generation tasks with small datasets can be greatly improved, as shown in human evaluation and automatic evaluation. The datasets and code are available at https://github.com/UCSD-AI4H/Medical-Dialogue-System', 'year': 2020, 'in_acl': True, 'citationCount': 43, 'section': 'Knowledge-enhanced Response Generation', 'subsection': None}, {'id': 202717047, 'paperId': '980456f50cd4b6e30649592afb693d5b6af8a703', 'title': 'Topical-Chat: Towards Knowledge-Grounded Open-Domain Conversations', 'authors': [{'authorId': '145916630', 'name': 'Karthik Gopalakrishnan'}, {'authorId': '8869538', 'name': 'Behnam Hedayatnia'}, {'authorId': '3465846', 'name': 'Qinlang Chen'}, {'authorId': '1411423941', 'name': 'Anna Gottardi'}, {'authorId': '1412838170', 'name': 'Sanjeev Kwatra'}, {'authorId': '47851456', 'name': 'Anu Venkatesh'}, {'authorId': '39303368', 'name': 'Raefer Gabriel'}, {'authorId': '1395813836', 'name': 'Dilek Z. Hakkani-Tür'}], 'venue': 'Interspeech', 'abstract': "Building socialbots that can have deep, engaging open-domain conversations with humans is one of the grand challenges of artificial intelligence (AI). To this end, bots need to be able to leverage world knowledge spanning several domains effectively when conversing with humans who have their own world knowledge. Existing knowledge-grounded conversation datasets are primarily stylized with explicit roles for conversation partners. These datasets also do not explore depth or breadth of topical coverage with transitions in conversations. We introduce Topical-Chat, a knowledge-grounded human-human conversation dataset where the underlying knowledge spans 8 broad topics and conversation partners don't have explicitly defined roles, to help further research in open-domain conversational AI. We also train several state-of-the-art encoder-decoder conversational models on Topical-Chat and perform automated and human evaluation for benchmarking.", 'year': 2019, 'in_acl': False, 'citationCount': 313, 'section': 'Knowledge-enhanced Response Generation', 'subsection': None}, {'id': 13756489, 'paperId': '204e3073870fae3d05bcbc2f6a8e263d9b72e776', 'title': 'Attention is All you Need', 'authors': [{'authorId': '40348417', 'name': 'Ashish Vaswani'}, {'authorId': '1846258', 'name': 'Noam M. Shazeer'}, {'authorId': '3877127', 'name': 'Niki Parmar'}, {'authorId': '39328010', 'name': 'Jakob Uszkoreit'}, {'authorId': '145024664', 'name': 'Llion Jones'}, {'authorId': '19177000', 'name': 'Aidan N. Gomez'}, {'authorId': '40527594', 'name': 'Lukasz Kaiser'}, {'authorId': '3443442', 'name': 'Illia Polosukhin'}], 'venue': 'Neural Information Processing Systems', 'abstract': 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.', 'year': 2017, 'in_acl': False, 'citationCount': 109681, 'section': 'Basic papers', 'subsection': None}, {'id': 7961699, 'paperId': 'cea967b59209c6be22829699f05b8b1ac4dc092d', 'title': 'Sequence to Sequence Learning with Neural Networks', 'authors': [{'authorId': '1701686', 'name': 'I. Sutskever'}, {'authorId': '1689108', 'name': 'O. Vinyals'}, {'authorId': '2827616', 'name': 'Quoc V. Le'}], 'venue': 'Neural Information Processing Systems', 'abstract': "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.", 'year': 2014, 'in_acl': False, 'citationCount': 19633, 'section': 'Basic papers', 'subsection': None}, {'id': 11212020, 'paperId': 'fa72afa9b2cbc8f0d7b05d52548906610ffbb9c5', 'title': 'Neural Machine Translation by Jointly Learning to Align and Translate', 'authors': [{'authorId': '3335364', 'name': 'Dzmitry Bahdanau'}, {'authorId': '1979489', 'name': 'Kyunghyun Cho'}, {'authorId': '1751762', 'name': 'Yoshua Bengio'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.', 'year': 2014, 'in_acl': False, 'citationCount': 26130, 'section': 'Basic papers', 'subsection': None}, {'id': 8174613, 'paperId': '02534853626c18c9a097c2712f1ddf3613257d35', 'title': 'Incorporating Copying Mechanism in Sequence-to-Sequence Learning', 'authors': [{'authorId': '3016273', 'name': 'Jiatao Gu'}, {'authorId': '11955007', 'name': 'Zhengdong Lu'}, {'authorId': '49404233', 'name': 'Hang Li'}, {'authorId': '2052674293', 'name': 'V. Li'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We address an important problem in sequence-to-sequence (Seq2Seq) learning referred to as copying, in which certain segments in the input sequence are selectively replicated in the output sequence. A similar phenomenon is observable in human language communication. For example, humans tend to repeat entity names or even long phrases in conversation. The challenge with regard to copying in Seq2Seq is that new machinery is needed to decide when to perform the operation. In this paper, we incorporate copying into neural network-based Seq2Seq learning and propose a new model called CopyNet with encoder-decoder structure. CopyNet can nicely integrate the regular way of word generation in the decoder with the new copying mechanism which can choose sub-sequences in the input sequence and put them at proper places in the output sequence. Our empirical study on both synthetic data sets and real world data sets demonstrates the efficacy of CopyNet. For example, CopyNet can outperform regular RNN-based model with remarkable margins on text summarization tasks.', 'year': 2016, 'in_acl': True, 'citationCount': 1506, 'section': 'Basic papers', 'subsection': None}, {'id': 8314118, 'paperId': '668db48c6a79826456341680ee1175dfc4cced71', 'title': 'Get To The Point: Summarization with Pointer-Generator Networks', 'authors': [{'authorId': '13070498', 'name': 'A. See'}, {'authorId': '35025299', 'name': 'Peter J. Liu'}, {'authorId': '144783904', 'name': 'Christopher D. Manning'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.', 'year': 2017, 'in_acl': True, 'citationCount': 3827, 'section': 'Basic papers', 'subsection': None}]
|
2024.naacl-tutorials.2
|
Combating Security and Privacy Issues in the Era of Large Language Models
|
This tutorial seeks to provide a systematic summary of risks and vulnerabilities in security, privacy and copyright aspects of large language models (LLMs), and most recent solutions to address those issues. We will discuss a broad thread of studies that try to answer the following questions: (i) How do we unravel the adversarial threats that attackers may leverage in the training time of LLMs, especially those that may exist in recent paradigms of instruction tuning and RLHF processes? (ii) How do we guard the LLMs against malicious attacks in inference time, such as attacks based on backdoors and jailbreaking? (iii) How do we ensure privacy protection of user information and LLM decisions for Language Model as-a-Service (LMaaS)? (iv) How do we protect the copyright of an LLM? (v) How do we detect and prevent cases where personal or confidential information is leaked during LLM training? (vi) How should we make policies to control against improper usage of LLM-generated content? In addition, will conclude the discussions by outlining emergent challenges in security, privacy and reliability of LLMs that deserve timely investigation by the community
| 2,024
|
https://aclanthology.org/2024.naacl-tutorials.2
|
NAACL
|
[{'id': 227118606, 'paperId': '3a1f8829e641b46f661775f64a7f27b933a46103', 'title': 'ONION: A Simple and Effective Defense Against Textual Backdoor Attacks', 'authors': [{'authorId': '51466208', 'name': 'Fanchao Qi'}, {'authorId': '123331686', 'name': 'Yangyi Chen'}, {'authorId': '2027599235', 'name': 'Mukai Li'}, {'authorId': '49293587', 'name': 'Zhiyuan Liu'}, {'authorId': '1753344', 'name': 'Maosong Sun'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Backdoor attacks are a kind of emergent training-time threat to deep neural networks (DNNs). They can manipulate the output of DNNs and possess high insidiousness. In the field of natural language processing, some attack methods have been proposed and achieve very high attack success rates on multiple popular models. Nevertheless, there are few studies on defending against textual backdoor attacks. In this paper, we propose a simple and effective textual backdoor defense named ONION, which is based on outlier word detection and, to the best of our knowledge, is the first method that can handle all the textual backdoor attack situations. Experiments demonstrate the effectiveness of our model in defending BiLSTM and BERT against five different backdoor attacks. All the code and data of this paper can be obtained at https://github.com/thunlp/ONION.', 'year': 2020, 'in_acl': True, 'citationCount': 207, 'section': None, 'subsection': None}, {'id': 259309096, 'paperId': 'f5fa0b3c2ecbf17ba922932432bed46a1447ed23', 'title': 'On the Exploitability of Instruction Tuning', 'authors': [{'authorId': '1643697854', 'name': 'Manli Shu'}, {'authorId': '2110170885', 'name': 'Jiong Wang'}, {'authorId': '1431754650', 'name': 'Chen Zhu'}, {'authorId': '8284185', 'name': 'Jonas Geiping'}, {'authorId': '2723309', 'name': 'Chaowei Xiao'}, {'authorId': '1962083', 'name': 'T. Goldstein'}], 'venue': 'Neural Information Processing Systems', 'abstract': "Instruction tuning is an effective technique to align large language models (LLMs) with human intents. In this work, we investigate how an adversary can exploit instruction tuning by injecting specific instruction-following examples into the training data that intentionally changes the model's behavior. For example, an adversary can achieve content injection by injecting training examples that mention target content and eliciting such behavior from downstream models. To achieve this goal, we propose \\textit{AutoPoison}, an automated data poisoning pipeline. It naturally and coherently incorporates versatile attack goals into poisoned data with the help of an oracle LLM. We showcase two example attacks: content injection and over-refusal attacks, each aiming to induce a specific exploitable behavior. We quantify and benchmark the strength and the stealthiness of our data poisoning scheme. Our results show that AutoPoison allows an adversary to change a model's behavior by poisoning only a small fraction of data while maintaining a high level of stealthiness in the poisoned examples. We hope our work sheds light on how data quality affects the behavior of instruction-tuned models and raises awareness of the importance of data quality for responsible deployments of LLMs. Code is available at \\url{https://github.com/azshue/AutoPoison}.", 'year': 2023, 'in_acl': False, 'citationCount': 69, 'section': None, 'subsection': None}, {'id': 258866212, 'paperId': '82fe948f18ca0138d035f553286c5e4b712dbdbe', 'title': 'Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models', 'authors': [{'authorId': '2110519123', 'name': 'Jiashu Xu'}, {'authorId': '144592155', 'name': 'Mingyu Derek Ma'}, {'authorId': '47939052', 'name': 'Fei Wang'}, {'authorId': '2723309', 'name': 'Chaowei Xiao'}, {'authorId': '1998918', 'name': 'Muhao Chen'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'We investigate security concerns of the emergent instruction tuning paradigm, that models are trained on crowdsourced datasets with task instructions to achieve superior performance. Our studies demonstrate that an attacker can inject backdoors by issuing very few malicious instructions (~1000 tokens) and control model behavior through data poisoning, without even the need to modify data instances or labels themselves. Through such instruction attacks, the attacker can achieve over 90% attack success rate across four commonly used NLP datasets. As an empirical study on instruction attacks, we systematically evaluated unique perspectives of instruction attacks, such as poison transfer where poisoned models can transfer to 15 diverse generative datasets in a zero-shot manner; instruction transfer where attackers can directly apply poisoned instruction on many other datasets; and poison resistance to continual finetuning. Lastly, we show that RLHF and clean demonstrations might mitigate such backdoors to some degree. These findings highlight the need for more robust defenses against poisoning attacks in instruction-tuning models and underscore the importance of ensuring data quality in instruction crowdsourcing.', 'year': 2023, 'in_acl': True, 'citationCount': 56, 'section': None, 'subsection': None}, {'id': 258865399, 'paperId': '1abfc211793c683972ded8d3268475e3ee7a88b0', 'title': 'Adversarial Demonstration Attacks on Large Language Models', 'authors': [{'authorId': '2110170885', 'name': 'Jiong Wang'}, {'authorId': '2155718616', 'name': 'Zi-yang Liu'}, {'authorId': '2218344932', 'name': 'Keun Hee Park'}, {'authorId': '1998918', 'name': 'Muhao Chen'}, {'authorId': '2723309', 'name': 'Chaowei Xiao'}], 'venue': 'arXiv.org', 'abstract': 'With the emergence of more powerful large language models (LLMs), such as ChatGPT and GPT-4, in-context learning (ICL) has gained significant prominence in leveraging these models for specific tasks by utilizing data-label pairs as precondition prompts. While incorporating demonstrations can greatly enhance the performance of LLMs across various tasks, it may introduce a new security concern: attackers can manipulate only the demonstrations without changing the input to perform an attack. In this paper, we investigate the security concern of ICL from an adversarial perspective, focusing on the impact of demonstrations. We propose a novel attack method named advICL, which aims to manipulate only the demonstration without changing the input to mislead the models. Our results demonstrate that as the number of demonstrations increases, the robustness of in-context learning would decrease. Additionally, we also identify the intrinsic property of the demonstrations is that they can be used (prepended) with different inputs. As a result, it introduces a more practical threat model in which an attacker can attack the test input example even without knowing and manipulating it. To achieve it, we propose the transferable version of advICL, named Transferable-advICL. Our experiment shows that the adversarial demonstration generated by Transferable-advICL can successfully attack the unseen test input examples. We hope that our study reveals the critical security risks associated with ICL and underscores the need for extensive research on the robustness of ICL, particularly given its increasing significance in the advancement of LLMs.', 'year': 2023, 'in_acl': False, 'citationCount': 42, 'section': None, 'subsection': None}, {'id': 235293967, 'paperId': 'ff9d04fc15a2c52d982b5b7daa787a373ed7f899', 'title': 'Differential Privacy for Text Analytics via Natural Text Sanitization', 'authors': [{'authorId': '145548079', 'name': 'Xiang Yue'}, {'authorId': '2938213', 'name': 'Minxin Du'}, {'authorId': '49980880', 'name': 'Tianhao Wang'}, {'authorId': '2110479359', 'name': 'Yaliang Li'}, {'authorId': '1515546612', 'name': 'Huan Sun'}, {'authorId': '145876490', 'name': 'Sherman S. M. Chow'}], 'venue': 'Findings', 'abstract': 'Texts convey sophisticated knowledge. However, texts also convey sensitive information. Despite the success of general-purpose language models and domain-specific mechanisms with differential privacy (DP), existing text sanitization mechanisms still provide low utility, as cursed by the high-dimensional text representation. The companion issue of utilizing sanitized texts for downstream analytics is also under-explored. This paper takes a direct approach to text sanitization. Our insight is to consider both sensitivity and similarity via our new local DP notion. The sanitized texts also contribute to our sanitization-aware pretraining and fine-tuning, enabling privacy-preserving natural language processing over the BERT language model with promising utility. Surprisingly, the high utility does not boost up the success rate of inference attacks.', 'year': 2021, 'in_acl': True, 'citationCount': 59, 'section': None, 'subsection': None}, {'id': 253116660, 'paperId': '58996964dbcd15045b66201c2b850b5570ba74cb', 'title': 'Synthetic Text Generation with Differential Privacy: A Simple and Practical Recipe', 'authors': [{'authorId': '145548079', 'name': 'Xiang Yue'}, {'authorId': '3058104', 'name': 'Huseyin A. Inan'}, {'authorId': '2145429039', 'name': 'Xuechen Li'}, {'authorId': '2090135421', 'name': 'Girish Kumar'}, {'authorId': '69041346', 'name': 'Julia McAnallen'}, {'authorId': '1515546612', 'name': 'Huan Sun'}, {'authorId': '2188832906', 'name': 'David Levitan'}, {'authorId': '1562202621', 'name': 'Robert Sim'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Privacy concerns have attracted increasing attention in data-driven products due to the tendency of machine learning models to memorize sensitive training data. Generating synthetic versions of such data with a formal privacy guarantee, such as differential privacy (DP), provides a promising path to mitigating these privacy concerns, but previous approaches in this direction have typically failed to produce synthetic data of high quality. In this work, we show that a simple and practical recipe in the text domain is effective: simply fine-tuning a pretrained generative language model with DP enables the model to generate useful synthetic text with strong privacy protection. Through extensive empirical analyses on both benchmark and private customer data, we demonstrate that our method produces synthetic text that is competitive in terms of utility with its non-private counterpart, meanwhile providing strong protection against potential privacy leakages.', 'year': 2022, 'in_acl': True, 'citationCount': 57, 'section': None, 'subsection': None}, {'id': 246823897, 'paperId': '62d17b6f6ad77fd71ef9954c7784700d5e316f1f', 'title': 'What Does it Mean for a Language Model to Preserve Privacy?', 'authors': [{'authorId': '2105643848', 'name': 'Hannah Brown'}, {'authorId': '3844009', 'name': 'Katherine Lee'}, {'authorId': '52195885', 'name': 'FatemehSadat Mireshghallah'}, {'authorId': '2520493', 'name': 'R. Shokri'}, {'authorId': '2444919', 'name': 'Florian Tramèr'}], 'venue': 'Conference on Fairness, Accountability and Transparency', 'abstract': 'Natural language reflects our private lives and identities, making its privacy concerns as broad as those of real life. Language models lack the ability to understand the context and sensitivity of text, and tend to memorize phrases present in their training sets. An adversary can exploit this tendency to extract training data. Depending on the nature of the content and the context in which this data was collected, this could violate expectations of privacy. Thus, there is a growing interest in techniques for training language models that preserve privacy. In this paper, we discuss the mismatch between the narrow assumptions made by popular data protection techniques (data sanitization and differential privacy), and the broadness of natural language and of privacy as a social norm. We argue that existing protection methods cannot guarantee a generic and meaningful notion of privacy for language models. We conclude that language models should be trained on text data which was explicitly produced for public use.', 'year': 2022, 'in_acl': False, 'citationCount': 179, 'section': None, 'subsection': None}, {'id': 256194179, 'paperId': 'cb5b71a622aff47014d4f28a958679629a8b6363', 'title': 'A Watermark for Large Language Models', 'authors': [{'authorId': '2166053502', 'name': 'John Kirchenbauer'}, {'authorId': '8284185', 'name': 'Jonas Geiping'}, {'authorId': '123191916', 'name': 'Yuxin Wen'}, {'authorId': '143975296', 'name': 'Jonathan Katz'}, {'authorId': '2679804', 'name': 'Ian Miers'}, {'authorId': '1962083', 'name': 'T. Goldstein'}], 'venue': 'International Conference on Machine Learning', 'abstract': 'Potential harms of large language models can be mitigated by watermarking model output, i.e., embedding signals into generated text that are invisible to humans but algorithmically detectable from a short span of tokens. We propose a watermarking framework for proprietary language models. The watermark can be embedded with negligible impact on text quality, and can be detected using an efficient open-source algorithm without access to the language model API or parameters. The watermark works by selecting a randomized set of"green"tokens before a word is generated, and then softly promoting use of green tokens during sampling. We propose a statistical test for detecting the watermark with interpretable p-values, and derive an information-theoretic framework for analyzing the sensitivity of the watermark. We test the watermark using a multi-billion parameter model from the Open Pretrained Transformer (OPT) family, and discuss robustness and security.', 'year': 2023, 'in_acl': False, 'citationCount': 350, 'section': None, 'subsection': None}, {'id': 256627372, 'paperId': 'c25d2a27f1abe169d7b68078071b6698f0980469', 'title': 'Protecting Language Generation Models via Invisible Watermarking', 'authors': [{'authorId': '150345512', 'name': 'Xuandong Zhao'}, {'authorId': '2143529300', 'name': 'Yu-Xiang Wang'}, {'authorId': '143900005', 'name': 'Lei Li'}], 'venue': 'International Conference on Machine Learning', 'abstract': 'Language generation models have been an increasingly powerful enabler for many applications. Many such models offer free or affordable API access, which makes them potentially vulnerable to model extraction attacks through distillation. To protect intellectual property (IP) and ensure fair use of these models, various techniques such as lexical watermarking and synonym replacement have been proposed. However, these methods can be nullified by obvious countermeasures such as"synonym randomization". To address this issue, we propose GINSEW, a novel method to protect text generation models from being stolen through distillation. The key idea of our method is to inject secret signals into the probability vector of the decoding steps for each target token. We can then detect the secret message by probing a suspect model to tell if it is distilled from the protected one. Experimental results show that GINSEW can effectively identify instances of IP infringement with minimal impact on the generation quality of protected APIs. Our method demonstrates an absolute improvement of 19 to 29 points on mean average precision (mAP) in detecting suspects compared to previous methods against watermark removal attacks.', 'year': 2023, 'in_acl': False, 'citationCount': 64, 'section': None, 'subsection': None}, {'id': 257900638, 'paperId': '49f1fa0d609ff06564b46270cbc022b7d9d195f4', 'title': 'Assessing Language Model Deployment with Risk Cards', 'authors': [{'authorId': '113320522', 'name': 'Leon Derczynski'}, {'authorId': '90729626', 'name': 'Hannah Rose Kirk'}, {'authorId': '143820870', 'name': 'Vidhisha Balachandran'}, {'authorId': '51467955', 'name': 'Sachin Kumar'}, {'authorId': '2073587169', 'name': 'Yulia Tsvetkov'}, {'authorId': '119004240', 'name': 'M. Leiser'}, {'authorId': '2057036852', 'name': 'Saif Mohammad'}], 'venue': 'arXiv.org', 'abstract': 'This paper introduces RiskCards, a framework for structured assessment and documentation of risks associated with an application of language models. As with all language, text generated by language models can be harmful, or used to bring about harm. Automating language generation adds both an element of scale and also more subtle or emergent undesirable tendencies to the generated text. Prior work establishes a wide variety of language model harms to many different actors: existing taxonomies identify categories of harms posed by language models; benchmarks establish automated tests of these harms; and documentation standards for models, tasks and datasets encourage transparent reporting. However, there is no risk-centric framework for documenting the complexity of a landscape in which some risks are shared across models and contexts, while others are specific, and where certain conditions may be required for risks to manifest as harms. RiskCards address this methodological gap by providing a generic framework for assessing the use of a given language model in a given scenario. Each RiskCard makes clear the routes for the risk to manifest harm, their placement in harm taxonomies, and example prompt-output pairs. While RiskCards are designed to be open-source, dynamic and participatory, we present a"starter set"of RiskCards taken from a broad literature survey, each of which details a concrete risk presentation. Language model RiskCards initiate a community knowledge base which permits the mapping of risks and harms to a specific model or its application scenario, ultimately contributing to a better, safer and shared understanding of the risk landscape.', 'year': 2023, 'in_acl': False, 'citationCount': 35, 'section': None, 'subsection': None}, {'id': 256627571, 'paperId': 'cd0988714ea326642d2b1bb18753e187fec71e42', 'title': 'A Categorical Archive of ChatGPT Failures', 'authors': [{'authorId': '3177797', 'name': 'A. Borji'}], 'venue': 'arXiv.org', 'abstract': "Large language models have been demonstrated to be valuable in different fields. ChatGPT, developed by OpenAI, has been trained using massive amounts of data and simulates human conversation by comprehending context and generating appropriate responses. It has garnered significant attention due to its ability to effectively answer a broad range of human inquiries, with fluent and comprehensive answers surpassing prior public chatbots in both security and usefulness. However, a comprehensive analysis of ChatGPT's failures is lacking, which is the focus of this study. Eleven categories of failures, including reasoning, factual errors, math, coding, and bias, are presented and discussed. The risks, limitations, and societal implications of ChatGPT are also highlighted. The goal of this study is to assist researchers and developers in enhancing future language models and chatbots.", 'year': 2023, 'in_acl': False, 'citationCount': 326, 'section': None, 'subsection': None}]
|
2024.naacl-tutorials.4
|
From Text to Context: Contextualizing Language with Humans, Groups, and Communities for Socially Aware NLP
|
Aimed at the NLP researchers or practitioners who would like to integrate human - individual, group, or societal level factors into their analyses, this tutorial will cover recent techniques and libraries for doing so at each level of analysis. Starting with human-centered techniques that provide benefit to traditional document- or word-level NLP tasks (Garten et al., 2019; Lynn et al., 2017), we undertake a thorough exploration of critical human-level aspects as they pertain to NLP, gradually moving up to higher levels of analysis: individual persons, individual with agent (chat/dialogue), groups of people, and finally communities or societies.
| 2,024
|
https://aclanthology.org/2024.naacl-tutorials.4
|
NAACL
|
[{'id': 433382, 'paperId': 'bdb73be49c4fdcbd0c79ca62e5703155915fa4c4', 'title': 'Learning Multiview Embeddings of Twitter Users', 'authors': [{'authorId': '145583569', 'name': 'Adrian Benton'}, {'authorId': '144365054', 'name': 'R. Arora'}, {'authorId': '1782853', 'name': 'Mark Dredze'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Low-dimensional vector representations are widely used as stand-ins for the text of words, sentences, and entire documents. These embeddings are used to identify similar words or make predictions about documents. In this work, we consider embeddings for social media users and demonstrate that these can be used to identify users who behave similarly or to predict attributes of users. In order to capture information from all aspects of a user’s online life, we take a multiview approach, applying a weighted variant of Generalized Canonical Correlation Analysis (GCCA) to a collection of over 100,000 Twitter users. We demonstrate the utility of these multiview embeddings on three downstream tasks: user engagement, friend selection, and demographic attribute prediction.', 'year': 2016, 'in_acl': True, 'citationCount': 86, 'section': 'User representation through language ', 'subsection': None}, {'id': 248693617, 'paperId': '791a76536cbe4d7f17275fbc9ff4a4d4967b04b8', 'title': 'Human Language Modeling', 'authors': [{'authorId': '145297996', 'name': 'Nikita Soni'}, {'authorId': '1386902685', 'name': 'Matthew Matero'}, {'authorId': '35217367', 'name': 'Niranjan Balasubramanian'}, {'authorId': '145035129', 'name': 'H. A. Schwartz'}], 'venue': 'Findings', 'abstract': 'Natural language is generated by people, yet traditional language modeling views words or documents as if generated independently. Here, we propose human language modeling (HuLM), a hierarchical extension to the language modeling problem where by a human- level exists to connect sequences of documents (e.g. social media messages) and capture the notion that human language is moderated by changing human states. We introduce, HaRT, a large-scale transformer model for solving HuLM, pre-trained on approximately 100,000 social media users, and demonstrate it’s effectiveness in terms of both language modeling (perplexity) for social media and fine-tuning for 4 downstream tasks spanning document- and user-levels. Results on all tasks meet or surpass the current state-of-the-art.', 'year': 2022, 'in_acl': True, 'citationCount': 7, 'section': 'User representation through language ', 'subsection': None}, {'id': 2955580, 'paperId': '1ea75cdb7ce8c4f5f2599165e3698034b4142e08', 'title': 'A Persona-Based Neural Conversation Model', 'authors': [{'authorId': '49298465', 'name': 'Jiwei Li'}, {'authorId': '1947267', 'name': 'Michel Galley'}, {'authorId': '3125776', 'name': 'Chris Brockett'}, {'authorId': '3130583', 'name': 'Georgios P. Spithourakis'}, {'authorId': '1800422', 'name': 'Jianfeng Gao'}, {'authorId': '83415753', 'name': 'W. Dolan'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We present persona-based models for handling the issue of speaker consistency in neural response generation. A speaker model encodes personas in distributed embeddings that capture individual characteristics such as background information and speaking style. A dyadic speaker-addressee model captures properties of interactions between two interlocutors. Our models yield qualitative performance improvements in both perplexity and BLEU scores over baseline sequence-to-sequence models, with similar gains in speaker consistency as measured by human judges.', 'year': 2016, 'in_acl': True, 'citationCount': 1013, 'section': 'Individual level dialog models ', 'subsection': None}, {'id': 14021168, 'paperId': '737bb106a35d1ebe6b0acd1cb77582738cf0e09c', 'title': 'Demographic Factors Improve Classification Performance', 'authors': [{'authorId': '2022288', 'name': 'Dirk Hovy'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Extra-linguistic factors influence language use, and are accounted for by speakers and listeners. Most natural language processing (NLP) tasks to date, however, treat language as uniform. This assumption can harm performance. We investigate the effect of including demographic information on performance in a variety of text-classification tasks. We find that by including age or gender information, we consistently and significantly improve performance over demographic-agnostic models. These results hold across three text-classification tasks in five languages.', 'year': 2015, 'in_acl': True, 'citationCount': 174, 'section': 'Human factor adaptation ', 'subsection': None}, {'id': 26192090, 'paperId': '06920ce34abe17848bd02561a721afd930fe9581', 'title': 'Human Centered NLP with User-Factor Adaptation', 'authors': [{'authorId': '5536113', 'name': 'Veronica E. Lynn'}, {'authorId': '22254278', 'name': 'Youngseo Son'}, {'authorId': '144592382', 'name': 'Vivek Kulkarni'}, {'authorId': '35217367', 'name': 'Niranjan Balasubramanian'}, {'authorId': '145035129', 'name': 'H. A. Schwartz'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'We pose the general task of user-factor adaptation – adapting supervised learning models to real-valued user factors inferred from a background of their language, reflecting the idea that a piece of text should be understood within the context of the user that wrote it. We introduce a continuous adaptation technique, suited for real-valued user factors that are common in social science and bringing us closer to personalized NLP, adapting to each user uniquely. We apply this technique with known user factors including age, gender, and personality traits, as well as latent factors, evaluating over five tasks: POS tagging, PP-attachment, sentiment analysis, sarcasm detection, and stance detection. Adaptation provides statistically significant benefits for 3 of the 5 tasks: up to +1.2 points for PP-attachment, +3.4 points for sarcasm, and +3.0 points for stance.', 'year': 2017, 'in_acl': True, 'citationCount': 50, 'section': 'Human factor adaptation ', 'subsection': None}, {'id': 266191532, 'paperId': 'dc4d9b0c3c9c9cd9eb3a4c8d3ffa415d9953f77d', 'title': 'Large Human Language Models: A Need and the Challenges', 'authors': [{'authorId': '145297996', 'name': 'Nikita Soni'}, {'authorId': '2273933343', 'name': 'H. A. Schwartz'}, {'authorId': '2662374', 'name': 'João Sedoc'}, {'authorId': '2273927019', 'name': 'Niranjan Balasubramanian'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'As research in human-centered NLP advances, there is a growing recognition of the importance of incorporating human and social factors into NLP models. At the same time, our NLP systems have become heavily reliant on LLMs, most of which do not model authors. To build NLP systems that can truly understand human language, we must better integrate human contexts into LLMs. This brings to the fore a range of design considerations and challenges in terms of what human aspects to capture, how to represent them, and what modeling strategies to pursue. To address these, we advocate for three positions toward creating large human language models (LHLMs) using concepts from psychological and behavioral sciences: First, LM training should include the human context. Second, LHLMs should recognize that people are more than their group(s). Third, LHLMs should be able to account for the dynamic and temporally-dependent nature of the human context. We refer to relevant advances and present open challenges that need to be addressed and their possible solutions in realizing these goals.', 'year': 2023, 'in_acl': True, 'citationCount': 5, 'section': 'Human factor adaptation ', 'subsection': None}, {'id': 250041118, 'paperId': '71fc53f702883b62774dd6d3ebc971d82ea9a0d9', 'title': 'Tracking group identity through natural language within groups', 'authors': [{'authorId': '1390108796', 'name': 'A. Ashokkumar'}, {'authorId': '1854783', 'name': 'J. Pennebaker'}], 'venue': 'PNAS Nexus', 'abstract': "Abstract To what degree can we determine people's connections with groups through the language they use? In recent years, large archives of behavioral data from social media communities have become available to social scientists, opening the possibility of tracking naturally occurring group identity processes. A feature of most digital groups is that they rely exclusively on the written word. Across 3 studies, we developed and validated a language-based metric of group identity strength and demonstrated its potential in tracking identity processes in online communities. In Studies 1a–1c, 873 people wrote about their connections to various groups (country, college, or religion). A total of 2 language markers of group identity strength were found: high affiliation (more words like we, togetherness) and low cognitive processing or questioning (fewer words like think, unsure). Using these markers, a language-based unquestioning affiliation index was developed and applied to in-class stream-of-consciousness essays of 2,161 college students (Study 2). Greater levels of unquestioning affiliation expressed in language predicted not only self-reported university identity but also students’ likelihood of remaining enrolled in college a year later. In Study 3, the index was applied to naturalistic Reddit conversations of 270,784 people in 2 online communities of supporters of the 2016 presidential candidates—Hillary Clinton and Donald Trump. The index predicted how long people would remain in the group (3a) and revealed temporal shifts mirroring members’ joining and leaving of groups (3b). Together, the studies highlight the promise of a language-based approach for tracking and studying group identity processes in online groups.", 'year': 2022, 'in_acl': False, 'citationCount': 16, 'section': 'Groups as Individual Context ', 'subsection': None}, {'id': 2851638, 'paperId': 'fe1850826165fd89d51eca05c46a087bd3b32fa2', 'title': 'Fitting In or Standing Out? The Tradeoffs of Structural and Cultural Embeddedness', 'authors': [{'authorId': '3727566', 'name': 'Amir Goldberg'}, {'authorId': '3106444', 'name': 'S. Srivastava'}, {'authorId': '144020466', 'name': 'V. Manian'}, {'authorId': '145768639', 'name': 'Will Monroe'}, {'authorId': '144922861', 'name': 'Christopher Potts'}], 'venue': '', 'abstract': 'A recurring theme in sociological research is the tradeoff between fitting in and standing out. Prior work examining this tension tends to take either a structural or a cultural perspective. We fuse these two traditions to develop a theory of how structural and cultural embeddedness jointly relate to individual attainment within organizations. Given that organizational culture is hard to observe, we develop a novel approach to assessing individuals’ cultural fit with their colleagues based on the language expressed in internal e-mail communications. Drawing on a unique dataset that includes a corpus of 10.24 million e-mail messages exchanged over five years among 601 employees in a high-technology firm, we find that network constraint impedes, whereas cultural fit promotes, individual attainment. More importantly, we find evidence of a tradeoff between the two forms of embeddedness: cultural fit benefits individuals with low network constraint (i.e., brokers), whereas network constraint promotes attainment for people with low cultural fit.', 'year': 2015, 'in_acl': False, 'citationCount': 170, 'section': 'Groups as Individual Context ', 'subsection': None}]
|
2024.naacl-tutorials.5
|
Human-AI Interaction in the Age of LLMs
|
Recently, the development of Large Language Models (LLMs) has revolutionized the capabilities of AI systems. These models possess the ability to comprehend and generate human-like text, enabling them to engage in sophisticated conversations, generate content, and even perform tasks that once seemed beyond the reach of machines. As a result, the way we interact with technology and each other — an established field called “Human-AI Interaction” and have been studied for over a decade — is undergoing a profound transformation. This tutorial will provide an overview of the interaction between humans and LLMs, exploring the challenges, opportunities, and ethical considerations that arise in this dynamic landscape. It will start with a review of the types of AI models we interact with, and a walkthrough of the core concepts in Human-AI Interaction. We will then emphasize the emerging topics shared between HCI and NLP communities in light of LLMs.
| 2,024
|
https://aclanthology.org/2024.naacl-tutorials.5
|
NAACL
|
[{'id': 218483124, 'paperId': '529025645c70a935221bd434484faee695ad0f25', 'title': 'Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design', 'authors': [{'authorId': '2117860470', 'name': 'Qian Yang'}, {'authorId': '1792714', 'name': 'Aaron Steinfeld'}, {'authorId': '35959897', 'name': 'C. Rosé'}, {'authorId': '145308025', 'name': 'J. Zimmerman'}], 'venue': 'International Conference on Human Factors in Computing Systems', 'abstract': "Artificial Intelligence (AI) plays an increasingly important role in improving HCI and user experience. Yet many challenges persist in designing and innovating valuable human-AI interactions. For example, AI systems can make unpredictable errors, and these errors damage UX and even lead to undesired societal impact. However, HCI routinely grapples with complex technologies and mitigates their unintended consequences. What makes AI different? What makes human-AI interaction appear particularly difficult to design? This paper investigates these questions. We synthesize prior research, our own design and research experience, and our observations when teaching human-AI interaction. We identify two sources of AI's distinctive design challenges: 1) uncertainty surrounding AI's capabilities, 2) AI's output complexity, spanning from simple to adaptive complex. We identify four levels of AI systems. On each level, designers encounter a different subset of the design challenges. We demonstrate how these findings reveal new insights for designers, researchers, and design tool makers in productively addressing the challenges of human-AI interaction going forward.", 'year': 2020, 'in_acl': False, 'citationCount': 371, 'section': None, 'subsection': None}, {'id': 220128138, 'paperId': 'ebcbbb8fe297940d79b17aeb6d46bedff9db7fec', 'title': 'Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance', 'authors': [{'authorId': '33340656', 'name': 'Gagan Bansal'}, {'authorId': '35232494', 'name': 'Tongshuang Sherry Wu'}, {'authorId': '153823289', 'name': 'Joyce Zhou'}, {'authorId': '27083453', 'name': 'Raymond Fok'}, {'authorId': '2571049', 'name': 'Besmira Nushi'}, {'authorId': '1783184', 'name': 'Ece Kamar'}, {'authorId': '78846919', 'name': 'Marco Tulio Ribeiro'}, {'authorId': '1780531', 'name': 'Daniel S. Weld'}], 'venue': 'International Conference on Human Factors in Computing Systems', 'abstract': 'Many researchers motivate explainable AI with studies showing that human-AI team performance on decision-making tasks improves when the AI explains its recommendations. However, prior studies observed improvements from explanations only when the AI, alone, outperformed both the human and the best team. Can explanations help lead to complementary performance, where team accuracy is higher than either the human or the AI working solo? We conduct mixed-method user studies on three datasets, where an AI with accuracy comparable to humans helps participants solve a task (explaining itself in some conditions). While we observed complementary improvements from AI augmentation, they were not increased by explanations. Rather, explanations increased the chance that humans will accept the AI’s recommendation, regardless of its correctness. Our result poses new challenges for human-centered AI: Can we develop explanatory approaches that encourage appropriate trust in AI, and therefore help generate (or improve) complementary performance?', 'year': 2020, 'in_acl': False, 'citationCount': 463, 'section': None, 'subsection': None}, {'id': 8943607, 'paperId': '5df85ae89af55c6d82a1a14836ea6bcfbfc2c0ec', 'title': 'Principles of mixed-initiative user interfaces', 'authors': [{'authorId': '145479841', 'name': 'E. Horvitz'}], 'venue': 'International Conference on Human Factors in Computing Systems', 'abstract': 'Recent debate has centered on the relative promise of focusinguser-interface research on developing new metaphors and tools thatenhance users abilities to directly manipulate objects versusdirecting effort toward developing interface agents that provideautomation. In this paper, we review principles that show promisefor allowing engineers to enhance human-computer interactionthrough an elegant coupling of automated services with directmanipulation. Key ideas will be highlighted in terms of the Lookoutsystem for scheduling and meeting management.', 'year': 1999, 'in_acl': False, 'citationCount': 1329, 'section': None, 'subsection': None}, {'id': 86866942, 'paperId': 'ad3cf68bae32d21f25ac142287d4a556155619d2', 'title': 'Guidelines for Human-AI Interaction', 'authors': [{'authorId': '1719124', 'name': 'Saleema Amershi'}, {'authorId': '1780531', 'name': 'Daniel S. Weld'}, {'authorId': '3109339', 'name': 'Mihaela Vorvoreanu'}, {'authorId': '3318905', 'name': 'Adam Fourney'}, {'authorId': '2571049', 'name': 'Besmira Nushi'}, {'authorId': '9703838', 'name': 'Penny Collisson'}, {'authorId': '38972741', 'name': 'Jina Suh'}, {'authorId': '1730570', 'name': 'Shamsi T. Iqbal'}, {'authorId': '144609235', 'name': 'Paul N. Bennett'}, {'authorId': '1781500', 'name': 'K. Quinn'}, {'authorId': '144113253', 'name': 'J. Teevan'}, {'authorId': '1405707881', 'name': 'Ruth Kikin-Gil'}, {'authorId': '145479841', 'name': 'E. Horvitz'}], 'venue': 'International Conference on Human Factors in Computing Systems', 'abstract': 'Advances in artificial intelligence (AI) frame opportunities and challenges for user interface design. Principles for human-AI interaction have been discussed in the human-computer interaction community for over two decades, but more study and innovation are needed in light of advances in AI and the growing uses of AI technologies in human-facing applications. We propose 18 generally applicable design guidelines for human-AI interaction. These guidelines are validated through multiple rounds of evaluation including a user study with 49 design practitioners who tested the guidelines against 20 popular AI-infused products. The results verify the relevance of the guidelines over a spectrum of interaction scenarios and reveal gaps in our knowledge, highlighting opportunities for further research. Based on the evaluations, we believe the set of design guidelines can serve as a resource to practitioners working on the design of applications and features that harness AI technologies, and to researchers interested in the further development of human-AI interaction design principles.', 'year': 2019, 'in_acl': False, 'citationCount': 1094, 'section': None, 'subsection': None}, {'id': 258714781, 'paperId': 'e16a782b529a7adbea1669236c97efc653196b8c', 'title': 'Helping the Helper: Supporting Peer Counselors via AI-Empowered Practice and Feedback', 'authors': [{'authorId': '2147213057', 'name': 'Shang-ling Hsu'}, {'authorId': '2051264008', 'name': 'Raj Sanjay Shah'}, {'authorId': '2217341381', 'name': 'Prathik Senthil'}, {'authorId': '2220428', 'name': 'Zahra Ashktorab'}, {'authorId': '2391727', 'name': 'Casey Dugan'}, {'authorId': '143774641', 'name': 'Werner Geyer'}, {'authorId': '2022168', 'name': 'Diyi Yang'}], 'venue': 'arXiv.org', 'abstract': "Millions of users come to online peer counseling platforms to seek support on diverse topics ranging from relationship stress to anxiety. However, studies show that online peer support groups are not always as effective as expected largely due to users' negative experiences with unhelpful counselors. Peer counselors are key to the success of online peer counseling platforms, but most of them often do not have systematic ways to receive guidelines or supervision. In this work, we introduce CARE: an interactive AI-based tool to empower peer counselors through automatic suggestion generation. During the practical training stage, CARE helps diagnose which specific counseling strategies are most suitable in the given context and provides tailored example responses as suggestions. Counselors can choose to select, modify, or ignore any suggestion before replying to the support seeker. Building upon the Motivational Interviewing framework, CARE utilizes large-scale counseling conversation data together with advanced natural language generation techniques to achieve these functionalities. We demonstrate the efficacy of CARE by performing both quantitative evaluations and qualitative user studies through simulated chats and semi-structured interviews. We also find that CARE especially helps novice counselors respond better in challenging situations.", 'year': 2023, 'in_acl': False, 'citationCount': 10, 'section': None, 'subsection': None}, {'id': 254854296, 'paperId': 'a640cdafc10181517b7694ab589db515595b3490', 'title': 'Evaluating Human-Language Model Interaction', 'authors': [{'authorId': '49316195', 'name': 'Mina Lee'}, {'authorId': '2143366464', 'name': 'Megha Srivastava'}, {'authorId': '1914650552', 'name': 'Amelia Hardy'}, {'authorId': '50343904', 'name': 'John Thickstun'}, {'authorId': '41152329', 'name': 'Esin Durmus'}, {'authorId': '40404493', 'name': 'Ashwin Paranjape'}, {'authorId': '2193248562', 'name': 'Ines Gerard-Ursin'}, {'authorId': '32551341', 'name': 'Xiang Lisa Li'}, {'authorId': '8759332', 'name': 'Faisal Ladhak'}, {'authorId': '2047004093', 'name': 'Frieda Rong'}, {'authorId': '2155890009', 'name': 'Rose E. Wang'}, {'authorId': '37909625', 'name': 'Minae Kwon'}, {'authorId': '2197475360', 'name': 'Joon Sung Park'}, {'authorId': '2196927606', 'name': 'Hancheng Cao'}, {'authorId': '2110585783', 'name': 'Tony Lee'}, {'authorId': '150272855', 'name': 'Rishi Bommasani'}, {'authorId': '145879842', 'name': 'Michael S. Bernstein'}, {'authorId': '145419642', 'name': 'Percy Liang'}], 'venue': 'Trans. Mach. Learn. Res.', 'abstract': "Many real-world applications of language models (LMs), such as writing assistance and code autocomplete, involve human-LM interaction. However, most benchmarks are non-interactive in that a model produces output without human involvement. To evaluate human-LM interaction, we develop a new framework, Human-AI Language-based Interaction Evaluation (HALIE), that defines the components of interactive systems and dimensions to consider when designing evaluation metrics. Compared to standard, non-interactive evaluation, HALIE captures (i) the interactive process, not only the final output; (ii) the first-person subjective experience, not just a third-party assessment; and (iii) notions of preference beyond quality (e.g., enjoyment and ownership). We then design five tasks to cover different forms of interaction: social dialogue, question answering, crossword puzzles, summarization, and metaphor generation. With four state-of-the-art LMs (three variants of OpenAI's GPT-3 and AI21 Labs' Jurassic-1), we find that better non-interactive performance does not always translate to better human-LM interaction. In particular, we highlight three cases where the results from non-interactive and interactive metrics diverge and underscore the importance of human-LM interaction for LM evaluation.", 'year': 2022, 'in_acl': False, 'citationCount': 81, 'section': None, 'subsection': None}]
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.