| id | title | abstract | year | url | venues | reading_list |
|---|---|---|---|---|---|---|
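The reading_list cells below are Python-literal strings (single quotes, `True`/`False`), not JSON, so `json.loads` will fail on them. A minimal parsing sketch, assuming the cell has been read as a raw string (e.g. from a CSV export or a dataframe column); the compact output fields are a choice for illustration, while the input keys (`title`, `year`, `citationCount`, `in_acl`) are the ones visible in the rows below:

```python
import ast

# Python-literal cells parse safely with ast.literal_eval (never eval()).
def parse_reading_list(cell):
    papers = ast.literal_eval(cell)
    # Keep only the fields most analyses need; keys match the rows below.
    return [
        {"title": p["title"],
         "year": p["year"],
         "citations": p["citationCount"],
         "in_acl": p["in_acl"]}
        for p in papers
    ]

# A trimmed single-entry cell, copied from the first row below.
example = ("[{'title': 'Composition in Distributional Models of Semantics', "
           "'year': 2010, 'citationCount': 989, 'in_acl': False}]")
```

Each parsed entry is then an ordinary dict, e.g. `parse_reading_list(example)[0]["citations"]` gives the citation count.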
C16-3001
|
Compositional Distributional Models of Meaning
|
Compositional distributional models of meaning (CDMs) provide a function that produces a vectorial representation for a phrase or a sentence by composing the vectors of its words. Being the natural evolution of the traditional and well-studied distributional models at the word level, CDMs are steadily evolving into a popular and active area of NLP. This COLING 2016 tutorial aims at providing a concise introduction to this emerging field, presenting the different classes of CDMs and the various issues related to them in sufficient detail.
| 2016
|
https://aclanthology.org/C16-3001/
|
COLING
|
[{'id': 8360910, 'paperId': '37efe2ef1b9d27cc598361a8013ec888a6f7c4d8', 'title': 'Nouns are Vectors, Adjectives are Matrices: Representing Adjective-Noun Constructions in Semantic Space', 'authors': [{'authorId': '145283199', 'name': 'Marco Baroni'}, {'authorId': '2713535', 'name': 'Roberto Zamparelli'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'We propose an approach to adjective-noun composition (AN) for corpus-based distributional semantics that, building on insights from theoretical linguistics, represents nouns as vectors and adjectives as data-induced (linear) functions (encoded as matrices) over nominal vectors. Our model significantly outperforms the rivals on the task of reconstructing AN vectors not seen in training. A small post-hoc analysis further suggests that, when the model-generated AN vector is not similar to the corpus-observed AN vector, this is due to anomalies in the latter. We show moreover that our approach provides two novel ways to represent adjective meanings, alternative to its representation via corpus-based co-occurrence vectors, both outperforming the latter in an adjective clustering task.', 'year': 2010, 'in_acl': True, 'citationCount': 542, 'section': None, 'subsection': None}, {'id': 5917203, 'paperId': '228d9e4b69926594fd26080f4cfaa9ecfca44eb3', 'title': 'Mathematical Foundations for a Compositional Distributional Model of Meaning', 'authors': [{'authorId': '3326718', 'name': 'B. Coecke'}, {'authorId': '1784777', 'name': 'M. Sadrzadeh'}, {'authorId': '144523372', 'name': 'S. Clark'}], 'venue': 'arXiv.org', 'abstract': "We propose a mathematical framework for a unification of the distributional theory of meaning in terms of vector space models, and a compositional theory for grammatical types, for which we rely on the algebra of Pregroups, introduced by Lambek. This mathematical framework enables us to compute the meaning of a well-typed sentence from the meanings of its constituents. 
Concretely, the type reductions of Pregroups are `lifted' to morphisms in a category, a procedure that transforms meanings of constituents into a meaning of the (well-typed) whole. Importantly, meanings of whole sentences live in a single space, independent of the grammatical structure of the sentence. Hence the inner-product can be used to compare meanings of arbitrary sentences, as it is for comparing the meanings of words in the distributional model. The mathematical structure we employ admits a purely diagrammatic calculus which exposes how the information flows between the words in a sentence in order to make up the meaning of the whole sentence. A variation of our `categorical model' which involves constraining the scalars of the vector spaces to the semiring of Booleans results in a Montague-style Boolean-valued semantics.", 'year': 2010, 'in_acl': False, 'citationCount': 547, 'section': None, 'subsection': None}, {'id': 11691908, 'paperId': '167d0e6bbb4199764773e7fb77882ce64e586e89', 'title': 'A Unified Sentence Space for Categorical Distributional-Compositional Semantics: Theory and Experiments', 'authors': [{'authorId': '2940780', 'name': 'Dimitri Kartsaklis'}, {'authorId': '1784777', 'name': 'M. Sadrzadeh'}, {'authorId': '50419262', 'name': 'S. Pulman'}], 'venue': 'International Conference on Computational Linguistics', 'abstract': 'This short paper summarizes a faithful implementation of the categorical framework of Coecke et al. (2010), the aim of which is to provide compositionality in distributional models of lexical semantics. Based on Frobenius Algebras, our method enable us to (1) have a unifying meaning space for phrases and sentences of different structure and word vectors, (2) stay faithful to the linguistic types suggested by the underlying type-logic, and (3) perform the concrete computations in lower dimensions by reducing the space complexity. 
We experiment with two different parameters of the model and apply the setting to a verb disambiguation and a term/definition classification task with promising results.', 'year': 2012, 'in_acl': True, 'citationCount': 75, 'section': None, 'subsection': None}, {'id': 26901423, 'paperId': '745d86adca56ec50761591733e157f84cfb19671', 'title': 'Composition in Distributional Models of Semantics', 'authors': [{'authorId': '34902160', 'name': 'Jeff Mitchell'}, {'authorId': '1747893', 'name': 'Mirella Lapata'}], 'venue': 'Cognitive Sciences', 'abstract': 'Vector-based models of word meaning have become increasingly popular in cognitive science. The appeal of these models lies in their ability to represent meaning simply by using distributional information under the assumption that words occurring within similar contexts are semantically similar. Despite their widespread use, vector-based models are typically directed at representing words in isolation, and methods for constructing representations for phrases or sentences have received little attention in the literature. This is in marked contrast to experimental evidence (e.g., in sentential priming) suggesting that semantic similarity is more complex than simply a relation between isolated words. This article proposes a framework for representing the meaning of word combinations in vector space. Central to our approach is vector composition, which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models that we evaluate empirically on a phrase similarity task.', 'year': 2010, 'in_acl': False, 'citationCount': 989, 'section': None, 'subsection': None}, {'id': 806709, 'paperId': '27e38351e48fe4b7da2775bf94341738bc4da07e', 'title': 'Semantic Compositionality through Recursive Matrix-Vector Spaces', 'authors': [{'authorId': '2166511', 'name': 'R. 
Socher'}, {'authorId': '2570381', 'name': 'Brody Huval'}, {'authorId': '144783904', 'name': 'Christopher D. Manning'}, {'authorId': '34699434', 'name': 'A. Ng'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Single-word vector space models have been very successful at learning lexical information. However, they cannot capture the compositional meaning of longer phrases, preventing them from a deeper understanding of language. We introduce a recursive neural network (RNN) model that learns compositional vector representations for phrases and sentences of arbitrary syntactic type and length. Our model assigns a vector and a matrix to every node in a parse tree: the vector captures the inherent meaning of the constituent, while the matrix captures how it changes the meaning of neighboring words or phrases. This matrix-vector RNN can learn the meaning of operators in propositional logic and natural language. The model obtains state of the art performance on three different experiments: predicting fine-grained sentiment distributions of adverb-adjective pairs; classifying sentiment labels of movie reviews and classifying semantic relationships such as cause-effect or topic-message between nouns using the syntactic path between them.', 'year': 2012, 'in_acl': True, 'citationCount': 1380, 'section': None, 'subsection': None}, {'id': 1500900, 'paperId': '3a0e788268fafb23ab20da0e98bb578b06830f7d', 'title': 'From Frequency to Meaning: Vector Space Models of Semantics', 'authors': [{'authorId': '1689647', 'name': 'Peter D. Turney'}, {'authorId': '1990190', 'name': 'Patrick Pantel'}], 'venue': 'Journal of Artificial Intelligence Research', 'abstract': 'Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. 
Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.', 'year': 2010, 'in_acl': False, 'citationCount': 2942, 'section': None, 'subsection': None}]
|
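The additive and multiplicative composition functions described in the Mitchell and Lapata entry of this reading list can be sketched in a few lines. The vectors below are invented toy values, not trained distributional vectors:

```python
import numpy as np

# Toy 3-dimensional word vectors; a real CDM would use vectors learned
# from corpus co-occurrence statistics. These numbers are illustrative.
red = np.array([0.8, 0.1, 0.3])
car = np.array([0.2, 0.9, 0.4])

additive = red + car        # additive model:       p = u + v
multiplicative = red * car  # multiplicative model: p = u * v (element-wise)

def cosine(u, v):
    """Compare composed phrase vectors, as in phrase-similarity tasks."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

Both composed vectors live in the same space as the word vectors, so `cosine` can compare a phrase against a word or another phrase directly.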
P19-4004
|
Computational Analysis of Political Texts: Bridging Research Efforts Across Communities
|
In the last twenty years, political scientists started adopting and developing natural language processing (NLP) methods more actively in order to exploit text as an additional source of data in their analyses. Over the last decade the usage of computational methods for analysis of political texts has drastically expanded in scope, allowing for a sustained growth of the text-as-data community in political science. In political science, NLP methods have been extensively used for a number of analysis types and tasks, including inferring policy positions of actors from textual evidence, detecting topics in political texts, and analyzing stylistic aspects of political texts (e.g., assessing the role of language ambiguity in framing the political agenda). Just like in numerous other domains, much of the work on computational analysis of political texts has been enabled and facilitated by the development of resources such as the topically coded electoral programmes (e.g., the Manifesto Corpus) or topically coded legislative texts (e.g., the Comparative Agendas Project). Political scientists created resources and used available NLP methods to process textual data largely in isolation from the NLP community. At the same time, NLP researchers addressed closely related tasks such as election prediction, ideology classification, and stance detection. In other words, these two communities have been largely agnostic of one another, with NLP researchers mostly unaware of interesting applications in political science and political scientists not applying cutting-edge NLP methodology to their problems. The main goal of this tutorial is to systematize and analyze the body of research work on political texts from both communities. We aim to provide a gentle, all-round introduction to methods and tasks related to computational analysis of political texts. 
Our vision is to bring the two research communities closer to each other and contribute to faster and more significant developments in this interdisciplinary research area.
| 2019
|
https://aclanthology.org/P19-4004/
|
ACL
|
[{'id': 16196219, 'paperId': 'b9921fb4d1448058642897797e77bdaf8f444404', 'title': 'Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts', 'authors': [{'authorId': '2361828', 'name': 'Justin Grimmer'}, {'authorId': '28924497', 'name': 'Brandon M Stewart'}], 'venue': 'Political Analysis', 'abstract': 'Politics and political conflict often occur in the written and spoken word. Scholars have long recognized this, but the massive costs of analyzing even moderately sized collections of texts have hindered their use in political science research. Here lies the promise of automated text analysis: it substantially reduces the costs of analyzing large collections of text. We provide a guide to this exciting new area of research and show how, in many instances, the methods have already obtained part of their promise. But there are pitfalls to using automated methods—they are no substitute for careful thought and close reading and require extensive and problem-specific validation. We survey a wide range of new methods, provide guidance on how to validate the output of the models, and clarify misconceptions and errors in the literature. To conclude, we argue that for automated text methods to become a standard tool for political scientists, methodologists must contribute new methods and new methods of validation.', 'year': 2013, 'in_acl': False, 'citationCount': 2486, 'section': None, 'subsection': None}, {'id': 10274824, 'paperId': '7d9cc63dfbd34acf271e3a2c922ea1c07fb2f482', 'title': 'Extracting Policy Positions from Political Texts Using Words as Data', 'authors': [{'authorId': '143758665', 'name': 'M. Laver'}, {'authorId': '26916605', 'name': 'K. 
Benoit'}, {'authorId': '80157164', 'name': 'John Garry'}], 'venue': 'American Political Science Review', 'abstract': 'We present a new way of extracting policy positions from political texts that treats texts not as discourses to be understood and interpreted but rather, as data in the form of words. We compare this approach to previous methods of text analysis and use it to replicate published estimates of the policy positions of political parties in Britain and Ireland, on both economic and social policy dimensions. We “export” the method to a non-English-language environment, analyzing the policy positions of German parties, including the PDS as it entered the former West German party system. Finally, we extend its application beyond the analysis of party manifestos, to the estimation of political positions from legislative speeches. Our “language-blind” word scoring technique successfully replicates published policy estimates without the substantial costs of time and labor that these require. Furthermore, unlike in any previous method for extracting policy positions from political texts, we provide uncertainty measures for our estimates, allowing analysts to make informed judgments of the extent to which differences between two estimated policy positions can be viewed as significant or merely as products of measurement error.We thank Raj Chari, Gary King, Michael McDonald, Gail McElroy, and three anonymous reviewers for comments on drafts of this paper.', 'year': 2003, 'in_acl': False, 'citationCount': 1277, 'section': None, 'subsection': None}, {'id': 17026162, 'paperId': '5109c519cd4442041a5d3915ca305eba6d68ee10', 'title': 'A Scaling Model for Estimating Time-Series Party Positions from Texts', 'authors': [{'authorId': '70665044', 'name': 'Jonathan B. Slapin'}, {'authorId': '145688599', 'name': 'Sven-Oliver Proksch'}], 'venue': '', 'abstract': 'However, existing text-based methods face challenges in producing valid and reliable time-series data. 
This article proposes a scaling algorithm called WORDFISH to estimate policy positions based on word frequencies in texts. The technique allows researchers to locate parties in one or multiple elections. We demonstrate the algorithm by estimating the positions of German political parties from 1990 to 2005 using word frequencies in party manifestos. The extracted positions reflect changes in the party system more accurately than existing time-series estimates. In addition, the method allows researchers to examine which words are important for placing parties on the left and on the right. We find that words with strong political connotations are the best discriminators between parties. Finally, a series of robustness checks demonstrate that the estimated positions are insensitive to distributional assumptions and document selection.', 'year': 2007, 'in_acl': False, 'citationCount': 679, 'section': None, 'subsection': None}]
|
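The "words as data" idea behind the Laver, Benoit and Garry entry (and, in spirit, WORDFISH) can be illustrated with a deliberately simplified sketch: score each word by the relative frequency with which it appears in reference texts of known position, then place a new text at the average score of its words. This is a toy reduction, not the published estimator, and the reference texts, positions, and vocabulary are all invented:

```python
from collections import Counter

# Two reference texts with stipulated positions (-1 = left, +1 = right).
refs = {
    -1.0: "welfare public health equality welfare".split(),
    +1.0: "tax cuts market enterprise tax".split(),
}

# Word score: position-weighted P(reference | word), estimated from
# relative word frequencies within each reference text.
counts = {pos: Counter(words) for pos, words in refs.items()}
vocab = set().union(*counts.values())
word_scores = {}
for w in vocab:
    rel = {pos: counts[pos][w] / len(refs[pos]) for pos in refs}
    total = sum(rel.values())
    word_scores[w] = sum(pos * rel[pos] / total for pos in refs)

def score_text(text):
    """Position of a new text: mean score of its scorable words."""
    words = [w for w in text.split() if w in word_scores]
    return sum(word_scores[w] for w in words) / len(words)
```

Words unique to one reference text get that text's full position; words absent from both references are simply ignored, which is one of the simplifications (the published method also propagates uncertainty, which this sketch drops entirely).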
2020.acl-tutorials.1
|
Interpretability and Analysis in Neural NLP
|
While deep learning has transformed the natural language processing (NLP) field and impacted the larger computational linguistics community, the rise of neural networks is stained by their opaque nature: It is challenging to interpret the inner workings of neural network models, and explicate their behavior. Therefore, in the last few years, an increasingly large body of work has been devoted to the analysis and interpretation of neural network models in NLP. This body of work is so far lacking a common framework and methodology. Moreover, approaching the analysis of modern neural networks can be difficult for newcomers to the field. This tutorial aims to fill this gap and introduce the nascent field of interpretability and analysis of neural networks in NLP. The tutorial will cover the main lines of analysis work, such as structural analyses using probing classifiers, behavioral studies and test suites, and interactive visualizations. We will highlight not only the most commonly applied analysis methods, but also the specific limitations and shortcomings of current approaches, in order to inform participants where to focus future efforts.
| 2020
|
https://aclanthology.org/2020.acl-tutorials.1
|
ACL
|
[{'id': 56657817, 'paperId': '668f42a4d4094f0a66d402a16087e14269b31a1f', 'title': 'Analysis Methods in Neural Language Processing: A Survey', 'authors': [{'authorId': '2083259', 'name': 'Yonatan Belinkov'}, {'authorId': '145898106', 'name': 'James R. Glass'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'The field of natural language processing has seen impressive progress in recent years, with neural network models replacing many of the traditional systems. A plethora of new models have been proposed, many of which are thought to be opaque compared to their feature-rich counterparts. This has led researchers to analyze, interpret, and evaluate neural networks in novel and more fine-grained ways. In this survey paper, we review analysis methods in neural language processing, categorize them according to prominent research trends, highlight existing limitations, and point to potential directions for future work.', 'year': 2018, 'in_acl': True, 'citationCount': 513, 'section': None, 'subsection': None}, {'id': 5013113, 'paperId': 'f170fed9acd71bd5feb20901c7ec1fe395f3fae5', 'title': "Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure", 'authors': [{'authorId': '3449411', 'name': 'Dieuwke Hupkes'}, {'authorId': '1787819', 'name': 'Willem H. Zuidema'}], 'venue': 'Journal of Artificial Intelligence Research', 'abstract': "We investigate how neural networks can learn and process languages with hierarchical, compositional semantics. To this end, we define the artificial task of processing nested arithmetic expressions, and study whether different types of neural networks can learn to compute their meaning. We find that recursive neural networks can find a generalising solution to this problem, and we visualise this solution by breaking it up in three steps: project, sum and squash. 
As a next step, we investigate recurrent neural networks, and show that a gated recurrent unit, that processes its input incrementally, also performs very well on this task. To develop an understanding of what the recurrent network encodes, visualisation techniques alone do not suffice. Therefore, we develop an approach where we formulate and test multiple hypotheses on the information encoded and processed by the network. For each hypothesis, we derive predictions about features of the hidden state representations at each time step, and train 'diagnostic classifiers' to test those predictions. Our results indicate that the networks follow a strategy similar to our hypothesised 'cumulative strategy', which explains the high accuracy of the network on novel expressions, the generalisation to longer expressions than seen in training, and the mild deterioration with increasing length. This is turn shows that diagnostic classifiers can be a useful technique for opening up the black box of neural networks. We argue that diagnostic classification, unlike most visualisation techniques, does scale up from small networks in a toy domain, to larger and deeper recurrent networks dealing with real-life data, and may therefore contribute to a better understanding of the internal dynamics of current state-of-the-art models in natural language processing.", 'year': 2017, 'in_acl': False, 'citationCount': 235, 'section': None, 'subsection': None}, {'id': 108300988, 'paperId': 'e2587eddd57bc4ba286d91b27c185083f16f40ee', 'title': 'What do you learn from context? Probing for sentence structure in contextualized word representations', 'authors': [{'authorId': '6117577', 'name': 'Ian Tenney'}, {'authorId': '2465658', 'name': 'Patrick Xia'}, {'authorId': '2108381400', 'name': 'Berlin Chen'}, {'authorId': '144906624', 'name': 'Alex Wang'}, {'authorId': '48926630', 'name': 'Adam Poliak'}, {'authorId': '145534175', 'name': 'R. 
Thomas McCoy'}, {'authorId': '8756748', 'name': 'Najoung Kim'}, {'authorId': '7536576', 'name': 'Benjamin Van Durme'}, {'authorId': '3644767', 'name': 'Samuel R. Bowman'}, {'authorId': '143790066', 'name': 'Dipanjan Das'}, {'authorId': '2949185', 'name': 'Ellie Pavlick'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Contextualized representation models such as ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2018) have recently achieved state-of-the-art results on a diverse array of downstream NLP tasks. Building on recent token-level probing work, we introduce a novel edge probing task design and construct a broad suite of sub-sentence tasks derived from the traditional structured NLP pipeline. We probe word-level contextual representations from four recent models and investigate how they encode sentence structure across a range of syntactic, semantic, local, and long-range phenomena. We find that existing models trained on language modeling and translation produce strong representations for syntactic phenomena, but only offer comparably small improvements on semantic tasks over a non-contextual baseline.', 'year': 2019, 'in_acl': False, 'citationCount': 808, 'section': None, 'subsection': None}, {'id': 14091946, 'paperId': '3aa52436575cf6768a0a1a476601825f6a62e58f', 'title': 'Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies', 'authors': [{'authorId': '2467508', 'name': 'Tal Linzen'}, {'authorId': '2202008', 'name': 'Emmanuel Dupoux'}, {'authorId': '79775260', 'name': 'Yoav Goldberg'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'The success of long short-term memory (LSTM) neural networks in language processing is typically attributed to their ability to capture long-distance statistical regularities. 
Linguistic regularities are often sensitive to syntactic structure; can such dependencies be captured by LSTMs, which do not have explicit structural representations? We begin addressing this question using number agreement in English subject-verb dependencies. We probe the architecture’s grammatical competence both using training objectives with an explicit grammatical target (number prediction, grammaticality judgments) and using language models. In the strongly supervised settings, the LSTM achieved very high overall accuracy (less than 1% errors), but errors increased when sequential and structural information conflicted. The frequency of such errors rose sharply in the language-modeling setting. We conclude that LSTMs can capture a non-trivial amount of grammatical structure given targeted supervision, but stronger architectures may be required to further reduce errors; furthermore, the language modeling signal is insufficient for capturing syntax-sensitive dependencies, and should be supplemented with more direct supervision if such dependencies need to be captured.', 'year': 2016, 'in_acl': True, 'citationCount': 860, 'section': None, 'subsection': None}, {'id': 49363457, 'paperId': '843c6b0a35b02e2c3d74bb545e74bc655e16e992', 'title': 'Assessing Composition in Sentence Vector Representations', 'authors': [{'authorId': '37907837', 'name': 'Allyson Ettinger'}, {'authorId': '143718836', 'name': 'Ahmed Elgohary'}, {'authorId': '143843506', 'name': 'C. Phillips'}, {'authorId': '1680292', 'name': 'P. Resnik'}], 'venue': 'International Conference on Computational Linguistics', 'abstract': 'An important component of achieving language understanding is mastering the composition of sentence meaning, but an immediate challenge to solving this problem is the opacity of sentence vector representations produced by current neural sentence composition models. 
We present a method to address this challenge, developing tasks that directly target compositional meaning information in sentence vector representations with a high degree of precision and control. To enable the creation of these controlled tasks, we introduce a specialized sentence generation system that produces large, annotated sentence sets meeting specified syntactic, semantic and lexical constraints. We describe the details of the method and generation system, and then present results of experiments applying our method to probe for compositional information in embeddings from a number of existing sentence composition models. We find that the method is able to extract useful information about the differing capacities of these models, and we discuss the implications of our results with respect to these systems’ capturing of sentence information. We make available for public use the datasets used for these experiments, as well as the generation system.', 'year': 2018, 'in_acl': True, 'citationCount': 78, 'section': None, 'subsection': None}, {'id': 11212020, 'paperId': 'fa72afa9b2cbc8f0d7b05d52548906610ffbb9c5', 'title': 'Neural Machine Translation by Jointly Learning to Align and Translate', 'authors': [{'authorId': '3335364', 'name': 'Dzmitry Bahdanau'}, {'authorId': '1979489', 'name': 'Kyunghyun Cho'}, {'authorId': '1751762', 'name': 'Yoshua Bengio'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. 
In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.', 'year': 2014, 'in_acl': False, 'citationCount': 26130, 'section': None, 'subsection': None}, {'id': 13017314, 'paperId': '4c41104e871bccbd56494350a71d77a7f1da5bb0', 'title': 'Understanding Neural Networks through Representation Erasure', 'authors': [{'authorId': '49298465', 'name': 'Jiwei Li'}, {'authorId': '145768639', 'name': 'Will Monroe'}, {'authorId': '1746807', 'name': 'Dan Jurafsky'}], 'venue': 'arXiv.org', 'abstract': "While neural networks have been successfully applied to many natural language processing tasks, they come at the cost of interpretability. In this paper, we propose a general methodology to analyze and interpret decisions from a neural model by observing the effects on the model of erasing various parts of the representation, such as input word-vector dimensions, intermediate hidden units, or input words. We present several approaches to analyzing the effects of such erasure, from computing the relative difference in evaluation metrics, to using reinforcement learning to erase the minimum set of input words in order to flip a neural model's decision. 
In a comprehensive analysis of multiple NLP tasks, including linguistic feature classification, sentence-level sentiment analysis, and document level sentiment aspect prediction, we show that the proposed methodology not only offers clear explanations about neural model decisions, but also provides a way to conduct error analysis on neural models.", 'year': 2016, 'in_acl': False, 'citationCount': 536, 'section': None, 'subsection': None}, {'id': 3085700, 'paperId': '63c4114bd373dd0fcfe0d25a605b353c62be2995', 'title': 'How Grammatical is Character-level Neural Machine Translation? Assessing MT Quality with Contrastive Translation Pairs', 'authors': [{'authorId': '2082372', 'name': 'Rico Sennrich'}], 'venue': 'Conference of the European Chapter of the Association for Computational Linguistics', 'abstract': 'Analysing translation quality in regards to specific linguistic phenomena has historically been difficult and time-consuming. Neural machine translation has the attractive property that it can produce scores for arbitrary translations, and we propose a novel method to assess how well NMT systems model specific linguistic phenomena such as agreement over long distances, the production of novel words, and the faithful translation of polarity. The core idea is that we measure whether a reference translation is more probable under a NMT model than a contrastive translation which introduces a specific type of error. We present LingEval97, a large-scale data set of 97000 contrastive translation pairs based on the WMT English->German translation task, with errors automatically created with simple rules. 
We report results for a number of systems, and find that recently introduced character-level NMT systems perform better at transliteration than models with byte-pair encoding (BPE) segmentation, but perform more poorly at morphosyntactic agreement, and translating discontiguous units of meaning.', 'year': 2016, 'in_acl': True, 'citationCount': 161, 'section': None, 'subsection': None}, {'id': 17362994, 'paperId': '78aa018ee7d52360e15d103390ea1cdb3a0beb41', 'title': 'Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples', 'authors': [{'authorId': '1967156', 'name': 'Nicolas Papernot'}, {'authorId': '144061974', 'name': 'P. Mcdaniel'}, {'authorId': '153440022', 'name': 'I. Goodfellow'}], 'venue': 'arXiv.org', 'abstract': 'Many machine learning models are vulnerable to adversarial examples: inputs that are specially crafted to cause a machine learning model to produce an incorrect output. Adversarial examples that affect one model often affect another model, even if the two models have different architectures or were trained on different training sets, so long as both models were trained to perform the same task. An attacker may therefore train their own substitute model, craft adversarial examples against the substitute, and transfer them to a victim model, with very little information about the victim. Recent work has further developed a technique that uses the victim model as an oracle to label a synthetic training set for the substitute, so the attacker need not even collect a training set to mount the attack. We extend these recent techniques using reservoir sampling to greatly enhance the efficiency of the training procedure for the substitute model. We introduce new transferability attacks between previously unexplored (substitute, victim) pairs of machine learning model classes, most notably SVMs and decision trees. 
We demonstrate our attacks on two commercial machine learning classification systems from Amazon (96.19% misclassification rate) and Google (88.94%) using only 800 queries of the victim model, thereby showing that existing machine learning approaches are in general vulnerable to systematic black-box attacks regardless of their structure.', 'year': 2016, 'in_acl': False, 'citationCount': 1657, 'section': None, 'subsection': None}, {'id': 21698802, 'paperId': '514e7fb769950dbe96eb519c88ca17e04dc829f6', 'title': 'HotFlip: White-Box Adversarial Examples for Text Classification', 'authors': [{'authorId': '39043512', 'name': 'J. Ebrahimi'}, {'authorId': '36290866', 'name': 'Anyi Rao'}, {'authorId': '3021654', 'name': 'Daniel Lowd'}, {'authorId': '1721158', 'name': 'D. Dou'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We propose an efficient method to generate white-box adversarial examples to trick a character-level neural classifier. We find that only a few manipulations are needed to greatly decrease the accuracy. Our method relies on an atomic flip operation, which swaps one token for another, based on the gradients of the one-hot input vectors. Due to efficiency of our method, we can perform adversarial training which makes the model more robust to attacks at test time. With the use of a few semantics-preserving constraints, we demonstrate that HotFlip can be adapted to attack a word-level classifier as well.', 'year': 2017, 'in_acl': True, 'citationCount': 956, 'section': None, 'subsection': None}]
|
2020.acl-tutorials.2
|
Integrating Ethics into the NLP Curriculum
|
To raise awareness among future NLP practitioners and prevent inertia in the field, we need to place ethics in the curriculum for all NLP students—not as an elective, but as a core part of their education. Our goal in this tutorial is to empower NLP researchers and practitioners with tools and resources to teach others about how to ethically apply NLP techniques. We will present both high-level strategies for developing an ethics-oriented curriculum, based on experience and best practices, and specific sample exercises that can be brought to a classroom. This highly interactive work session will culminate in a shared online resource page that pools lesson plans, assignments, exercise ideas, reading suggestions, and ideas from the attendees. Though the tutorial will focus particularly on examples for university classrooms, we believe these ideas can extend to company-internal workshops or tutorials in a variety of organizations. In this setting, a key lesson is that there is no single approach to ethical NLP: each project requires thoughtful consideration about what steps can be taken to best support people affected by that project. However, we can learn (and teach) what issues to be aware of, what questions to ask, and what strategies are available to mitigate harm.
| 2020
|
https://aclanthology.org/2020.acl-tutorials.2
|
ACL
|
[{'id': 26039972, 'paperId': '0e661bd2cfe94ed58e4e2abc1409c75b98c2582c', 'title': 'Dual use and the ethical responsibility of scientists', 'authors': [{'authorId': '3920554', 'name': 'Hans-Jörg Ehni'}], 'venue': 'Archivum Immunologiae et Therapiae Experimentalis', 'abstract': 'The main normative problem in the context of dual use is to determine the ethical responsibility of scientists especially in the case of unintended, harmful, and criminal dual use of new technological applications of scientific results. This article starts from an analysis of the concepts of responsibility and complicity, examining alternative options regarding the responsibility of scientists. Within the context of the basic conflict between the freedom of science and the duty to avoid causing harm, two positions are discussed: moral skepticism and the ethics of responsibility by Hans Jonas. According to these reflections, four duties are suggested and evaluated: stopping research, systematically carrying out research for dual-use applications, informing public authorities, and not publishing results. In the conclusion it is argued that these duties should be considered as imperfect duties in a Kantian sense and that the individual scientist should be discharged as much as possible from obligations which follow from them by the scientific community and institutions created for this purpose.', 'year': 2008, 'in_acl': False, 'citationCount': 40, 'section': None, 'subsection': None}, {'id': 52113954, 'paperId': 'e8fa186444d98a39ee9139b1f5dd0c7618caef8f', 'title': 'Privacy-preserving Neural Representations of Text', 'authors': [{'authorId': '3443469', 'name': 'Maximin Coavoux'}, {'authorId': None, 'name': 'Shashi Narayan'}, {'authorId': '40146204', 'name': 'Shay B. 
Cohen'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'This article deals with adversarial attacks towards deep learning systems for Natural Language Processing (NLP), in the context of privacy protection. We study a specific type of attack: an attacker eavesdrops on the hidden representations of a neural text classifier and tries to recover information about the input text. Such scenario may arise in situations when the computation of a neural network is shared across multiple devices, e.g. some hidden representation is computed by a user’s device and sent to a cloud-based model. We measure the privacy of a hidden representation by the ability of an attacker to predict accurately specific private information from it and characterize the tradeoff between the privacy and the utility of neural representations. Finally, we propose several defense methods based on modified training objectives and show that they improve the privacy of neural representations.', 'year': 2018, 'in_acl': True, 'citationCount': 106, 'section': None, 'subsection': None}, {'id': 9460040, 'paperId': 'f9acf607b858ac110c1bf83bf62835bcc1820e83', 'title': 'Value scenarios: a technique for envisioning systemic effects of new technologies', 'authors': [{'authorId': '34869420', 'name': 'L. Nathan'}, {'authorId': '2035680', 'name': 'P. Klasnja'}, {'authorId': '144029598', 'name': 'Batya Friedman'}], 'venue': 'CHI Extended Abstracts', 'abstract': 'In this paper we argue that there is a scarcity of methods which support critical, systemic, long-term thinking in current design practice, technology development and deployment. To address this need we introduce value scenarios, an extension of scenario-based design which can support envisioning the systemic effects of new technologies. We identify and describe five key elements of value scenarios; stakeholders, pervasiveness, time, systemic effects, and value implications. 
We provide two examples of value scenarios, which draw from our current work on urban simulation and human-robotic interaction. We conclude with suggestions for how value scenarios might be used by others.', 'year': 2007, 'in_acl': False, 'citationCount': 101, 'section': None, 'subsection': None}, {'id': 53782832, 'paperId': 'c9fa1cb56feeeb5033aa7ba40fa035ca2b9018ce', 'title': '50 Years of Test (Un)fairness: Lessons for Machine Learning', 'authors': [{'authorId': '2044655623', 'name': 'Ben Hutchinson'}, {'authorId': '49501003', 'name': 'Margaret Mitchell'}], 'venue': 'FAT', 'abstract': 'Quantitative definitions of what is unfair and what is fair have been introduced in multiple disciplines for well over 50 years, including in education, hiring, and machine learning. We trace how the notion of fairness has been defined within the testing communities of education and hiring over the past half century, exploring the cultural and social context in which different fairness definitions have emerged. In some cases, earlier definitions of fairness are similar or identical to definitions of fairness in current machine learning research, and foreshadow current formal work. In other cases, insights into what fairness means and how to measure it have largely gone overlooked. We compare past and current notions of fairness along several dimensions, including the fairness criteria, the focus of the criteria (e.g., a test, a model, or its use), the relationship of fairness to individuals, groups, and subgroups, and the mathematical method for measuring fairness (e.g., classification, regression). 
This work points the way towards future research and measurement of (un)fairness that builds from our modern understanding of fairness while incorporating insights from the past.', 'year': 2018, 'in_acl': False, 'citationCount': 330, 'section': None, 'subsection': None}, {'id': 2077168, 'paperId': '0fee3b6c72f7676b4934651e517d0a328048c600', 'title': 'Certifying and Removing Disparate Impact', 'authors': [{'authorId': '2053453944', 'name': 'Michael Feldman'}, {'authorId': '34597147', 'name': 'Sorelle A. Friedler'}, {'authorId': '144275618', 'name': 'John Moeller'}, {'authorId': '1786183', 'name': 'C. Scheidegger'}, {'authorId': '72563021', 'name': 'S. Venkatasubramanian'}], 'venue': 'Knowledge Discovery and Data Mining', 'abstract': 'What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender) and an explicit description of the process. When computers are involved, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the process. In addition, even if the process is open, it might be hard to elucidate in a legal setting how the algorithm makes its decisions. Instead of requiring access to the process, we propose making inferences based on the data it uses. We present four contributions. First, we link disparate impact to a measure of classification accuracy that while known, has received relatively little attention. Second, we propose a test for disparate impact based on how well the protected class can be predicted from the other attributes. Third, we describe methods by which data might be made unbiased. 
Finally, we present empirical evidence supporting the effectiveness of our test for disparate impact and our approach for both masking bias and preserving relevant information in the data. Interestingly, our approach resembles some actual selection practices that have recently received legal scrutiny.', 'year': 2014, 'in_acl': False, 'citationCount': 1846, 'section': None, 'subsection': None}]
|
2020.acl-tutorials.3
|
Achieving Common Ground in Multi-modal Dialogue
|
All communication aims at achieving common ground (grounding): interlocutors can work together effectively only with mutual beliefs about what the state of the world is, about what their goals are, and about how they plan to make their goals a reality. Computational dialogue research offers some classic results on grounding, which unfortunately offer scant guidance to the design of grounding modules and behaviors in cutting-edge systems. In this tutorial, we focus on three main topic areas: 1) grounding in human-human communication; 2) grounding in dialogue systems; and 3) grounding in multi-modal interactive systems, including image-oriented conversations and human-robot interactions. We highlight a number of achievements of recent computational research in coordinating complex content, show how these results lead to rich and challenging opportunities for doing grounding in more flexible and powerful ways, and canvass relevant insights from the literature on human–human conversation. We expect that the tutorial will be of interest to researchers in dialogue systems, computational semantics and cognitive modeling, and hope that it will catalyze research and system building that more directly explores the creative, strategic ways conversational agents might be able to seek and offer evidence about their understanding of their interlocutors.
| 2020
|
https://aclanthology.org/2020.acl-tutorials.3
|
ACL
|
[{'id': 153811205, 'paperId': '5a9cac54de14e58697d0315fe3c01f3dbe69c186', 'title': 'Grounding in communication', 'authors': [{'authorId': '29224904', 'name': 'H. H. Clark'}, {'authorId': '71463834', 'name': 'S. Brennan'}], 'venue': 'Perspectives on socially shared cognition', 'abstract': "GROUNDING It takes two people working together to play a duet, shake hands, play chess, waltz, teach, or make love. To succeed, the two of them have to coordinate both the content and process of what they are doing. Alan and Barbara, on the piano, must come to play the same Mozart duet. This is coordination of content. They must also synchronize their entrances and exits, coordinate how loudly to play forte and pianissimo, and otherwise adjust to each other's tempo and dynamics. This is coordination of process. They cannot even begin to coordinate on content without assuming a vast amount of shared information or common ground, that is, mutual knowledge, mutual beliefs, and mutual assumptions. And to coordinate on process, they need to update their common ground moment by moment. All collective actions are built on common ground and its accumulation. We thank many colleagues for discussion of the issues we take up here.", 'year': 1991, 'in_acl': False, 'citationCount': 4465, 'section': None, 'subsection': None}, {'id': 14623495, 'paperId': '06b6595034f6a8ea850ac12814030c0ef214d300', 'title': 'Meaning and Demonstration', 'authors': [{'authorId': '144884556', 'name': 'Matthew Stone'}, {'authorId': '3458697', 'name': 'Una Stojnić'}], 'venue': '', 'abstract': 'In demonstration, speakers use real-world activity both for its practical effects and to help make their points. The demonstrations of origami mathematics, for example, reconfigure pieces of paper by folding, while simultaneously allowing their author to signal geometric inferences. 
Demonstration challenges us to explain how practical actions can get such precise significance and how this meaning compares with that of other representations. In this paper, we propose an explanation inspired by David Lewis’s characterizations of coordination and scorekeeping in conversation. In particular, we argue that words, gestures, diagrams and demonstrations can function together as integrated ensembles that contribute to conversation, because interlocutors use them in parallel ways to coordinate updates to the conversational record.', 'year': 2015, 'in_acl': False, 'citationCount': 13, 'section': None, 'subsection': None}, {'id': 10161834, 'paperId': '68922969c1b91cdfb4a13f1dab9b90d015179a9c', 'title': 'Using Reinforcement Learning to Model Incrementality in a Fast-Paced Dialogue Game', 'authors': [{'authorId': '2175808', 'name': 'R. Manuvinakurike'}, {'authorId': '144662324', 'name': 'David DeVault'}, {'authorId': '3194430', 'name': 'Kallirroi Georgila'}], 'venue': 'SIGDIAL Conference', 'abstract': 'We apply Reinforcement Learning (RL) to the problem of incremental dialogue policy learning in the context of a fast-paced dialogue game. We compare the policy learned by RL with a high-performance baseline policy which has been shown to perform very efficiently (nearly as well as humans) in this dialogue game. The RL policy outperforms the baseline policy in offline simulations (based on real user data). We provide a detailed comparison of the RL policy and the baseline policy, including information about how much effort and time it took to develop each one of them. 
We also highlight the cases where the RL policy performs better, and show that understanding the RL policy can provide valuable insights which can inform the creation of an even better rule-based policy.', 'year': 2017, 'in_acl': True, 'citationCount': 20, 'section': None, 'subsection': None}, {'id': 51609464, 'paperId': '0e3c3599bf5dc2e24e724f097b80948f25c57d1d', 'title': 'Language to Action: Towards Interactive Task Learning with Physical Agents', 'authors': [{'authorId': '1707259', 'name': 'J. Chai'}, {'authorId': '3193409', 'name': 'Qiaozi Gao'}, {'authorId': '2720582', 'name': 'Lanbo She'}, {'authorId': '47569745', 'name': 'Shaohua Yang'}, {'authorId': '1411038811', 'name': 'S. Saba-Sadiya'}, {'authorId': '49560239', 'name': 'Guangyue Xu'}], 'venue': 'International Joint Conference on Artificial Intelligence', 'abstract': 'Language communication plays an important role in human learning and knowledge acquisition. With the emergence of a new generation of cognitive robots, empowering these robots to learn directly from human partners becomes increasingly important. This paper gives a brief introduction to interactive task learning where humans can teach physical agents new tasks through natural language communication and action demonstration. It discusses research challenges and opportunities in language and communication grounding that are critical in this process. 
It further highlights the importance of commonsense knowledge, particularly the very basic physical causality knowledge, in grounding language to perception and action.', 'year': 2018, 'in_acl': False, 'citationCount': 87, 'section': None, 'subsection': None}, {'id': 14843216, 'paperId': 'a2a4cc9bd34ed61383979edd365d29a32a74368e', 'title': "It's Not What You Do, It's How You Do It: Grounding Uncertainty for a Simple Robot", 'authors': [{'authorId': '144397346', 'name': 'Julian Hough'}, {'authorId': '1817455', 'name': 'David Schlangen'}], 'venue': 'IEEE/ACM International Conference on Human-Robot Interaction', 'abstract': 'For effective HRI, robots must go beyond having good legibility of their intentions shown by their actions, but also ground the degree of uncertainty they have. We show how in simple robots which have spoken language understanding capacities, uncertainty can be communicated to users by principles of grounding in dialogue interaction even without natural language generation. We present a model which makes this possible for robots with limited communication channels beyond the execution of task actions themselves. We implement our model in a pick-and-place robot, and experiment with two strategies for grounding uncertainty. 
In an observer study, we show that participants observing interactions with the robot run by the two different strategies were able to infer the degree of understanding the robot had internally, and in the more uncertainty-expressive system, were also able to perceive the degree of internal uncertainty the robot had reliably.', 'year': 2017, 'in_acl': False, 'citationCount': 33, 'section': None, 'subsection': None}, {'id': 11824338, 'paperId': '440dd122c93f707c213bb3096449848ac7d1bda5', 'title': 'Learning Effective Multimodal Dialogue Strategies from Wizard-of-Oz Data: Bootstrapping and Evaluation', 'authors': [{'authorId': '1681799', 'name': 'Verena Rieser'}, {'authorId': '1782798', 'name': 'Oliver Lemon'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We address two problems in the field of automatic optimization of dialogue strategies: learning effective dialogue strategies when no initial data or system exists, and evaluating the result with real users. We use Reinforcement Learning (RL) to learn multimodal dialogue strategies by interaction with a simulated environment which is “bootstrapped” from small amounts of Wizard-of-Oz (WOZ) data. This use of WOZ data allows development of optimal strategies for domains where no working prototype is available. We compare the RL-based strategy against a supervised strategy which mimics the wizards’ policies. This comparison allows us to measure relative improvement over the training data. Our results show that RL significantly outperforms Supervised Learning when interacting in simulation as well as for interactions with real users. The RL-based policy gains on average 50-times more reward when tested in simulation, and almost 18-times more reward when interacting with real users. 
Users also subjectively rate the RL-based policy on average 10% higher.', 'year': 2008, 'in_acl': True, 'citationCount': 97, 'section': None, 'subsection': None}, {'id': 117398433, 'paperId': '39253602324619c130e140c4e5cef4af0d746880', 'title': 'A Survey of Nonverbal Signaling Methods for Non-Humanoid Robots', 'authors': [{'authorId': '3289717', 'name': 'Elizabeth Cha'}, {'authorId': '1717667', 'name': 'Yunkyung Kim'}, {'authorId': '145585047', 'name': 'T. Fong'}, {'authorId': '1742183', 'name': 'M. Matarić'}], 'venue': 'Found. Trends Robotics', 'abstract': 'This monograph surveys and informs the design and usage of nonverbal signals for human-robot interaction. With robots increasingly being utilized for tasks that require them to not only operate in close proximity to humans but to interact with them as well, there has been great interest in the communication challenges associated with the varying degrees of interaction in these environments. The success of such interactions depends on robots’ ability to convey information about their knowledge, intent, and actions to co-located humans. The monograph presents a comprehensive review of literature related to the generation and usage of nonverbal signals that facilitate legibility of non-humanoid robot state and behavior. To motivate the need for these signaling behaviors, it surveys literature in human communication and psychology and outlines target use cases of non-humanoid robots. Specifically, the focus is on works that provide insight into the cognitive processes that enable humans to recognize, interpret, and exploit nonverbal signals. From these use cases, information is identified that is potentially important for non-humanoid robots to signal and organize it into three categories of robot state. The monograph then presents a review of signal design techniques to illustrate how signals conveying this information can be generated and utilized. 
It concludes by discussing issues that must be considered during nonverbal signaling and open research areas, with a focus on informing the design and usage of generalizable nonverbal signaling behaviors for task-oriented non-humanoid robots.', 'year': 2018, 'in_acl': False, 'citationCount': 97, 'section': None, 'subsection': None}, {'id': 202588687, 'paperId': '4a5c3216f8aad40b17531e78df8cc18e2b5c73ff', 'title': 'The Devil is in the Details: A Magnifying Glass for the GuessWhich Visual Dialogue Game', 'authors': [{'authorId': '50829868', 'name': 'A. Testoni'}, {'authorId': '145543514', 'name': 'Ravi Shekhar'}, {'authorId': '144151273', 'name': 'R. Fernández'}], 'venue': '', 'abstract': 'Grounded conversational agents are a fascinating research line on which important progress has beenmade lately thanks to the development of neural network models and to the release of visual dialogue datasets. The latter have been used to set visual dialogue games which are an interesting test bed to evaluate conversational agents. Researchers’ attention is on building models of increasing complexity, trained with computationally costly machine learning paradigms that lead to higher task success scores. In this paper, we take a step back: We use a rather simple neural network architecture and we scrutinize theGuessWhich task, the dataset, and the quality of the generated dialogues. We show that our simple Questioner agent reaches state-of-the art performance, that the evaluation metric commonly used is too coarse to compare different models, and that high task success does not correspond to high quality of the dialogues. Our work shows the importance of running detailed analyses of the results to spot possible models’ weaknesses rather than aiming to outperform state-of-the-art scores.', 'year': 2019, 'in_acl': False, 'citationCount': 6, 'section': None, 'subsection': None}]
|
2020.acl-tutorials.4
|
Reviewing Natural Language Processing Research
|
This tutorial will cover the theory and practice of reviewing research in natural language processing. Heavy reviewing burdens on natural language processing researchers have made it clear that our community needs to increase the size of our pool of potential reviewers. Simultaneously, notable “false negatives”—rejection by our conferences of work that was later shown to be tremendously important after acceptance by other conferences—have raised awareness of the fact that our reviewing practices leave something to be desired. We do not often talk about “false positives” with respect to conference papers, but leaders in the field have noted that we seem to have a publication bias towards papers that report high performance, with perhaps not much else of interest in them. It need not be this way. Reviewing is a learnable skill, and you will learn it here via lectures and a considerable amount of hands-on practice.
| 2020
|
https://aclanthology.org/2020.acl-tutorials.4
|
ACL
|
[{'id': 154339, 'paperId': '33ff45f364dac785b8bd4e3bf70fb169dc1d39b4', 'title': "Who's afraid of peer review?", 'authors': [{'authorId': '145179131', 'name': 'J. Bohannon'}], 'venue': 'Science', 'abstract': 'Dozens of open-access journals targeted in an elaborate Science sting accepted a spoof research article, raising questions about peer-review practices in much of the open-access world.', 'year': 2013, 'in_acl': False, 'citationCount': 885, 'section': None, 'subsection': None}, {'id': 8460592, 'paperId': '9ca5552008fe2c24e0541f6af47fd5110d4015b3', 'title': 'Last Words: Reviewing the Reviewers', 'authors': [{'authorId': '2272727361', 'name': 'K. Church'}], 'venue': 'International Conference on Computational Logic', 'abstract': '', 'year': 2005, 'in_acl': True, 'citationCount': 48, 'section': None, 'subsection': None}, {'id': 16508456, 'paperId': '4fb5a17d4066116a8fc928e43aa558732d8b7cb2', 'title': 'Preventing the ends from justifying the means: withholding results to address publication bias in peer-review', 'authors': [{'authorId': '4058655', 'name': 'K. Button'}, {'authorId': '38974348', 'name': 'Liz Bal'}, {'authorId': '145879163', 'name': 'A. Clark'}, {'authorId': '19854097', 'name': 'Tim Shipley'}], 'venue': 'BMC Psychology', 'abstract': 'The evidence that many of the findings in the published literature may be unreliable is compelling. There is an excess of positive results, often from studies with small sample sizes, or other methodological limitations, and the conspicuous absence of null findings from studies of a similar quality. This distorts the evidence base, leading to false conclusions and undermining scientific progress. Central to this problem is a peer-review system where the decisions of authors, reviewers, and editors are more influenced by impressive results than they are by the validity of the study design. 
To address this, BMC Psychology is launching a pilot to trial a new ‘results-free’ peer-review process, whereby editors and reviewers are blinded to the study’s results, initially assessing manuscripts on the scientific merits of the rationale and methods alone. The aim is to improve the reliability and quality of published research, by focusing editorial decisions on the rigour of the methods, and preventing impressive ends justifying poor means.', 'year': 2016, 'in_acl': False, 'citationCount': 42, 'section': None, 'subsection': None}, {'id': 53149927, 'paperId': 'a772589606f9880d74ac79519ccef073eefd5519', 'title': 'Double-blind peer review and gender publication bias', 'authors': [{'authorId': '3145400', 'name': 'L. Engqvist'}, {'authorId': '2149229', 'name': 'Joachim G. Frommen'}], 'venue': 'Animal Behaviour', 'abstract': '', 'year': 2008, 'in_acl': False, 'citationCount': 34, 'section': None, 'subsection': None}, {'id': 7350256, 'paperId': 'd8cf5c798397b6a0be1b41f18f979f0988f1ece7', 'title': 'Publication prejudices: An experimental study of confirmatory bias in the peer review system', 'authors': [{'authorId': '35256346', 'name': 'M. Mahoney'}], 'venue': 'Cognitive Therapy and Research', 'abstract': "Confirmatory bias is the tendency to emphasize and believe experiences which support one's views and to ignore or discredit those which do not. The effects of this tendency have been repeatedly documented in clinical research. However, its ramifications for the behavior of scientists have yet to be adequately explored. For example, although publication is a critical element in determining the contribution and impact of scientific findings, little research attention has been devoted to the variables operative in journal review policies. In the present study, 75 journal reviewers were asked to referee manuscripts which described identical experimental procedures but which reported positive, negative, mixed, or no results. 
In addition to showing poor interrater agreement, reviewers were strongly biased against manuscripts which reported results contrary to their theoretical perspective. The implications of these findings for epistemology and the peer review system are briefly addressed.", 'year': 1977, 'in_acl': False, 'citationCount': 667, 'section': None, 'subsection': None}, {'id': 155600340, 'paperId': '8d75051e8151fa5b7bd7c863102d0c4be7608c93', 'title': 'Peer review — reviewed', 'authors': [], 'venue': 'Nature', 'abstract': '', 'year': 2014, 'in_acl': False, 'citationCount': 3, 'section': None, 'subsection': None}, {'id': 256659588, 'paperId': '87e849787dcfda83d7315c7d3d5c54851c82d264', 'title': 'On becoming a discipline', 'authors': [{'authorId': '5922478', 'name': 'Melissa J. Fickling'}], 'venue': 'Counselor Education and Supervision', 'abstract': "Clarifying counselor education's status as a discipline carries implications for pedagogy, researcher identity development, and knowledge production. In this manuscript, these implications are discussed within a historical context and with attention to the successful career transitions for new counselor educators, as well as those pursuing promotion and tenure in academia.", 'year': 2023, 'in_acl': False, 'citationCount': 3, 'section': None, 'subsection': None}, {'id': 1570550, 'paperId': '830ab38207bd40189752a301967b865c38dab591', 'title': 'Last Words: Breaking News: Changing Attitudes and Practices', 'authors': [{'authorId': '1736049', 'name': 'B. Webber'}], 'venue': 'International Conference on Computational Logic', 'abstract': '', 'year': 2007, 'in_acl': True, 'citationCount': 3, 'section': None, 'subsection': None}, {'id': 522864, 'paperId': 'cf222293e2447365ad25e603bfdd064646ef6652', 'title': 'Nepotism and sexism in peer-review', 'authors': [{'authorId': '6447164', 'name': 'C. Wennerås'}, {'authorId': '2053374957', 'name': 'Agnes E. 
Wold'}], 'venue': 'Nature', 'abstract': 'In the first-ever analysis of peer-review scores for postdoctoral fellowship applications, the system is revealed as being riddled with prejudice. The policy of secrecy in evaluation must be abandoned.', 'year': 1997, 'in_acl': False, 'citationCount': 1359, 'section': None, 'subsection': None}]
|
2020.acl-tutorials.6
|
Multi-modal Information Extraction from Text, Semi-structured, and Tabular Data on the Web
|
The World Wide Web contains vast quantities of textual information in several forms: unstructured text, template-based semi-structured webpages (which present data in key-value pairs and lists), and tables. Methods for extracting information from these sources and converting it to a structured form have been a target of research from the natural language processing (NLP), data mining, and database communities. While these researchers have largely separated extraction from web data into different problems based on the modality of the data, they have faced similar problems such as learning with limited labeled data, defining (or avoiding defining) ontologies, making use of prior knowledge, and scaling solutions to deal with the size of the Web. In this tutorial we take a holistic view toward information extraction, exploring the commonalities in the challenges and solutions developed to address these different forms of text. We will explore the approaches targeted at unstructured text that largely rely on learning syntactic or semantic textual patterns, approaches targeted at semi-structured documents that learn to identify structural patterns in the template, and approaches targeting web tables which rely heavily on entity linking and type information. While these different data modalities have largely been considered separately in the past, recent research has started taking a more inclusive approach toward textual extraction, in which the multiple signals offered by textual, layout, and visual clues are combined into a single extraction model made possible by new deep learning approaches. At the same time, trends within purely textual extraction have shifted toward full-document understanding rather than considering sentences as independent units. With this in mind, it is worth considering the information extraction problem as a whole to motivate solutions that harness textual semantics along with visual and semi-structured layout information. We will discuss these approaches and suggest avenues for future work.
| 2020
|
https://aclanthology.org/2020.acl-tutorials.6
|
ACL
|
[{'id': 13091007, 'paperId': '7a12502ba5b9686e37b0ec9d86a2dc7f4b7022ac', 'title': 'Web-scale information extraction with vertex', 'authors': [{'authorId': '2627799', 'name': 'P. Gulhane'}, {'authorId': '2136102', 'name': 'Amit Madaan'}, {'authorId': '3259494', 'name': 'Rupesh R. Mehta'}, {'authorId': '2311735', 'name': 'J. Ramamirtham'}, {'authorId': '1696519', 'name': 'R. Rastogi'}, {'authorId': '1837802', 'name': 'Sandeepkumar Satpal'}, {'authorId': '1757518', 'name': 'Srinivasan H. Sengamedu'}, {'authorId': '2990683', 'name': 'Ashwin Tengli'}, {'authorId': '2081450365', 'name': 'Charu Tiwari'}], 'venue': 'IEEE International Conference on Data Engineering', 'abstract': 'Vertex is a Wrapper Induction system developed at Yahoo! for extracting structured records from template-based Web pages. To operate at Web scale, Vertex employs a host of novel algorithms for (1) Grouping similar structured pages in a Web site, (2) Picking the appropriate sample pages for wrapper inference, (3) Learning XPath-based extraction rules that are robust to variations in site structure, (4) Detecting site changes by monitoring sample pages, and (5) Optimizing editorial costs by reusing rules, etc. The system is deployed in production and currently extracts more than 250 million records from more than 200 Web sites. To the best of our knowledge, Vertex is the first system to do high-precision information extraction at Web scale.', 'year': 2011, 'in_acl': False, 'citationCount': 90, 'section': None, 'subsection': None}, {'id': 51993171, 'paperId': 'e49e6dbdfdb813b42fff716a8b11951de2d5cbf3', 'title': 'Ten Years of WebTables', 'authors': [{'authorId': '1725561', 'name': 'Michael J. Cafarella'}, {'authorId': '1770962', 'name': 'A. Halevy'}, {'authorId': '8386466', 'name': 'Hongrae Lee'}, {'authorId': '2224716', 'name': 'Jayant Madhavan'}, {'authorId': '40592227', 'name': 'Cong Yu'}, {'authorId': '2111220343', 'name': 'D. 
Wang'}, {'authorId': '48144872', 'name': 'Eugene Wu'}], 'venue': 'Proceedings of the VLDB Endowment', 'abstract': '\n In 2008, we wrote about WebTables, an effort to exploit the large and diverse set of structured databases casually published online in the form of HTML tables. The past decade has seen a flurry of research and commercial activities around the WebTables project itself, as well as the broad topic of informal online structured data. In this paper, we\n 1\n will review the WebTables project, and try to place it in the broader context of the decade of work that followed. We will also show how the progress over the past ten years sets up an exciting agenda for the future, and will draw upon many corners of the data management community.\n', 'year': 2018, 'in_acl': False, 'citationCount': 62, 'section': None, 'subsection': None}, {'id': 3627801, 'paperId': '0e46803ac8fc715b72d7f935a3f383ade945487f', 'title': 'Fonduer: Knowledge Base Construction from Richly Formatted Data', 'authors': [{'authorId': '144766615', 'name': 'Sen Wu'}, {'authorId': '2065637845', 'name': 'Luke Hsiao'}, {'authorId': '2149478197', 'name': 'Xiaoxia Cheng'}, {'authorId': '34302368', 'name': 'Braden Hancock'}, {'authorId': '145071799', 'name': 'Theodoros Rekatsinas'}, {'authorId': '1721681', 'name': 'P. Levis'}, {'authorId': '2114485554', 'name': 'C. Ré'}], 'venue': 'SIGMOD Conference', 'abstract': "We focus on knowledge base construction (KBC) from richly formatted data. In contrast to KBC from text or tabular data, KBC from richly formatted data aims to extract relations conveyed jointly via textual, structural, tabular, and visual expressions. We introduce Fonduer, a machine-learning-based KBC system for richly formatted data. Fonduer presents a new data model that accounts for three challenging characteristics of richly formatted data: (1) prevalent document-level relations, (2) multimodality, and (3) data variety. 
Fonduer uses a new deep-learning model to automatically capture the representation (i.e., features) needed to learn how to extract relations from richly formatted data. Finally, Fonduer provides a new programming model that enables users to convert domain expertise, based on multiple modalities of information, to meaningful signals of supervision for training a KBC system. Fonduer-based KBC systems are in production for a range of use cases, including at a major online retailer. We compare Fonduer against state-of-the-art KBC approaches in four different domains. We show that Fonduer achieves an average improvement of 41 F1 points on the quality of the output knowledge base---and in some cases produces up to 1.87x the number of correct entries---compared to expert-curated public knowledge bases. We also conduct a user study to assess the usability of Fonduer's new programming model. We show that after using Fonduer for only 30 minutes, non-domain experts are able to design KBC systems that achieve on average 23 F1 points higher quality than traditional machine-learning-based KBC approaches.", 'year': 2017, 'in_acl': False, 'citationCount': 97, 'section': None, 'subsection': None}, {'id': 102353905, 'paperId': 'dcb28c8ba94434eb8a06e81eb55bfdbc343d2340', 'title': 'Document-Level N-ary Relation Extraction with Multiscale Representation Learning', 'authors': [{'authorId': '3422908', 'name': 'Robin Jia'}, {'authorId': '2109566188', 'name': 'Cliff Wong'}, {'authorId': '1759772', 'name': 'Hoifung Poon'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Most information extraction methods focus on binary relations expressed within single sentences. In high-value domains, however, n-ary relations are of great demand (e.g., drug-gene-mutation interactions in precision oncology). 
Such relations often involve entity mentions that are far apart in the document, yet existing work on cross-sentence relation extraction is generally confined to small text spans (e.g., three consecutive sentences), which severely limits recall. In this paper, we propose a novel multiscale neural architecture for document-level n-ary relation extraction. Our system combines representations learned over various text spans throughout the document and across the subrelation hierarchy. Widening the system’s purview to the entire document maximizes potential recall. Moreover, by integrating weak signals across the document, multiscale modeling increases precision, even in the presence of noisy labels from distant supervision. Experiments on biomedical machine reading show that our approach substantially outperforms previous n-ary relation extraction methods.', 'year': 2019, 'in_acl': True, 'citationCount': 135, 'section': None, 'subsection': None}, {'id': 5774632, 'paperId': '131383aa1f91eb0e9578dcae80f4dfcfb0f11e3e', 'title': 'Extraction and Integration of Partially Overlapping Web Sources', 'authors': [{'authorId': '1760944', 'name': 'Mirko Bronzi'}, {'authorId': '1791339', 'name': 'Valter Crescenzi'}, {'authorId': '1796590', 'name': 'P. Merialdo'}, {'authorId': '1802817', 'name': 'Paolo Papotti'}], 'venue': 'Proceedings of the VLDB Endowment', 'abstract': 'We present an unsupervised approach for harvesting the data exposed by a set of structured and partially overlapping data-intensive web sources. Our proposal comes within a formal framework tackling two problems: the data extraction problem, to generate extraction rules based on the input websites, and the data integration problem, to integrate the extracted data in a unified schema. We introduce an original algorithm, WEIR, to solve the stated problems and formally prove its correctness. 
WEIR leverages the overlapping data among sources to make better decisions both in the data extraction (by pruning rules that do not lead to redundant information) and in the data integration (by reflecting local properties of a source over the mediated schema). Along the way, we characterize the amount of redundancy needed by our algorithm to produce a solution, and present experimental results to show the benefits of our approach with respect to existing solutions.', 'year': 2013, 'in_acl': False, 'citationCount': 60, 'section': None, 'subsection': None}, {'id': 4557963, 'paperId': 'cf5ea582bccc7cb21a2ebeb7a0987f79652bde8d', 'title': 'Knowledge vault: a web-scale approach to probabilistic knowledge fusion', 'authors': [{'authorId': '145867172', 'name': 'X. Dong'}, {'authorId': '1718798', 'name': 'E. Gabrilovich'}, {'authorId': '1728179', 'name': 'Geremy Heitz'}, {'authorId': '40428294', 'name': 'Wilko Horn'}, {'authorId': '1914797', 'name': 'N. Lao'}, {'authorId': '1702318', 'name': 'K. Murphy'}, {'authorId': '2931575', 'name': 'Thomas Strohmann'}, {'authorId': '2109375570', 'name': 'Shaohua Sun'}, {'authorId': None, 'name': 'Wei Zhang'}], 'venue': 'Knowledge Discovery and Data Mining', 'abstract': "Recent years have witnessed a proliferation of large-scale knowledge bases, including Wikipedia, Freebase, YAGO, Microsoft's Satori, and Google's Knowledge Graph. To increase the scale even further, we need to explore automatic methods for constructing knowledge bases. Previous approaches have primarily focused on text-based extraction, which can be very noisy. Here we introduce Knowledge Vault, a Web-scale probabilistic knowledge base that combines extractions from Web content (obtained via analysis of text, tabular data, page structure, and human annotations) with prior knowledge derived from existing knowledge repositories. We employ supervised machine learning methods for fusing these distinct information sources. 
The Knowledge Vault is substantially bigger than any previously published structured knowledge repository, and features a probabilistic inference system that computes calibrated probabilities of fact correctness. We report the results of multiple studies that explore the relative utility of the different information sources and extraction methods.", 'year': 2014, 'in_acl': False, 'citationCount': 1700, 'section': None, 'subsection': None}, {'id': 102352698, 'paperId': 'f262ef2f50dfcaf07dc6598f22fb9b2470b37cf1', 'title': 'A general framework for information extraction using dynamic span graphs', 'authors': [{'authorId': '145081697', 'name': 'Yi Luan'}, {'authorId': '30051202', 'name': 'David Wadden'}, {'authorId': '2265599', 'name': 'Luheng He'}, {'authorId': '2107663537', 'name': 'A. Shah'}, {'authorId': '144339506', 'name': 'Mari Ostendorf'}, {'authorId': '2548384', 'name': 'Hannaneh Hajishirzi'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'We introduce a general framework for several information extraction tasks that share span representations using dynamically constructed span graphs. The graphs are dynamically constructed by selecting the most confident entity spans and linking these nodes with confidence-weighted relation types and coreferences. The dynamic span graph allow coreference and relation type confidences to propagate through the graph to iteratively refine the span representations. This is unlike previous multi-task frameworks for information extraction in which the only interaction between tasks is in the shared first-layer LSTM. Our framework significantly outperforms state-of-the-art on multiple information extraction tasks across multiple datasets reflecting different domains. 
We further observe that the span enumeration approach is good at detecting nested span entities, with significant F1 score improvement on the ACE dataset.', 'year': 2019, 'in_acl': True, 'citationCount': 307, 'section': None, 'subsection': None}, {'id': 174799980, 'paperId': 'b31eef8d9263b02f7d0c1ab55b26012550a2e95a', 'title': 'OpenCeres: When Open Information Extraction Meets the Semi-Structured Web', 'authors': [{'authorId': '144182018', 'name': 'Colin Lockard'}, {'authorId': '3310534', 'name': 'Prashant Shiralkar'}, {'authorId': '2143917898', 'name': 'Xin Dong'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Open Information Extraction (OpenIE), the problem of harvesting triples from natural language text whose predicate relations are not aligned to any pre-defined ontology, has been a popular subject of research for the last decade. However, this research has largely ignored the vast quantity of facts available in semi-structured webpages. In this paper, we define the problem of OpenIE from semi-structured websites to extract such facts, and present an approach for solving it. We also introduce a labeled evaluation dataset to motivate research in this area. Given a semi-structured website and a set of seed facts for some relations existing on its pages, we employ a semi-supervised label propagation technique to automatically create training data for the relations present on the site. We then use this training data to learn a classifier for relation extraction. Experimental results of this method on our new benchmark dataset obtained a precision of over 70%. 
A larger scale extraction experiment on 31 websites in the movie vertical resulted in the extraction of over 2 million triples.', 'year': 2019, 'in_acl': True, 'citationCount': 50, 'section': None, 'subsection': None}, {'id': 53109320, 'paperId': '1da8e1ad1814d81f69433ac877ef70caa950e4e6', 'title': 'GraphIE: A Graph-Based Framework for Information Extraction', 'authors': [{'authorId': '5606742', 'name': 'Yujie Qian'}, {'authorId': '2628786', 'name': 'Enrico Santus'}, {'authorId': '8752221', 'name': 'Zhijing Jin'}, {'authorId': '144084849', 'name': 'Jiang Guo'}, {'authorId': '1741283', 'name': 'R. Barzilay'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Most modern Information Extraction (IE) systems are implemented as sequential taggers and only model local dependencies. Non-local and non-sequential context is, however, a valuable source of information to improve predictions. In this paper, we introduce GraphIE, a framework that operates over a graph representing a broad set of dependencies between textual units (i.e. words or sentences). The algorithm propagates information between connected nodes through graph convolutions, generating a richer representation that can be exploited to improve word-level predictions. Evaluation on three different tasks — namely textual, social media and visual information extraction — shows that GraphIE consistently outperforms the state-of-the-art sequence tagging model by a significant margin.', 'year': 2018, 'in_acl': True, 'citationCount': 105, 'section': None, 'subsection': None}]
|
2020.acl-tutorials.7
|
Commonsense Reasoning for Natural Language Processing
|
Commonsense knowledge, such as knowing that “bumping into people annoys them” or “rain makes the road slippery”, helps humans navigate everyday situations seamlessly. Yet, endowing machines with such human-like commonsense reasoning capabilities has remained an elusive goal of artificial intelligence research for decades. In recent years, commonsense knowledge and reasoning have received renewed attention from the natural language processing (NLP) community, yielding exploratory studies in automated commonsense understanding. We organize this tutorial to provide researchers with the critical foundations and recent advances in commonsense representation and reasoning, in the hopes of casting a brighter light on this promising area of future research. In our tutorial, we will (1) outline the various types of commonsense (e.g., physical, social), and (2) discuss techniques to gather and represent commonsense knowledge, while highlighting the challenges specific to this type of knowledge (e.g., reporting bias). We will then (3) discuss the types of commonsense knowledge captured by modern NLP systems (e.g., large pretrained language models), and (4) present ways to measure systems’ commonsense reasoning abilities. We will finish with (5) a discussion of various ways in which commonsense reasoning can be used to improve performance on NLP tasks, exemplified by an (6) interactive session on integrating commonsense into a downstream task.
| 2020
|
https://aclanthology.org/2020.acl-tutorials.7
|
ACL
|
[{'id': 91184338, 'paperId': 'b1832b749528755dfcbe462717f4f5afc07243b8', 'title': 'Commonsense Reasoning for Natural Language Understanding: A Survey of Benchmarks, Resources, and Approaches', 'authors': [{'authorId': '89093987', 'name': 'Shane Storks'}, {'authorId': '3193409', 'name': 'Qiaozi Gao'}, {'authorId': '1707259', 'name': 'J. Chai'}], 'venue': 'arXiv.org', 'abstract': "Commonsense knowledge and commonsense reasoning are some of the main bottlenecks in machine intelligence. In the NLP community, many benchmark datasets and tasks have been created to address commonsense reasoning for language understanding. These tasks are designed to assess machines' ability to acquire and learn commonsense knowledge in order to reason and understand natural language text. As these tasks become instrumental and a driving force for commonsense research, this paper aims to provide an overview of existing tasks and benchmarks, knowledge resources, and learning and inference approaches toward commonsense reasoning for natural language understanding. Through this, our goal is to support a better understanding of the state of the art, its limitations, and future challenges.", 'year': 2019, 'in_acl': False, 'citationCount': 72, 'section': None, 'subsection': None}, {'id': 15710851, 'paperId': '128cb6b891aee1b5df099acb48e2efecfcff689f', 'title': 'The Winograd Schema Challenge', 'authors': [{'authorId': '143634377', 'name': 'H. Levesque'}, {'authorId': '144883814', 'name': 'E. Davis'}, {'authorId': '40429476', 'name': 'L. Morgenstern'}], 'venue': 'AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning', 'abstract': 'In this paper, we present an alternative to the Turing Test that has some conceptual and practical advantages. Like the original, it involves responding to typed English sentences, and English-speaking adults will have no difficulty with it. 
Unlike the original, the subject is not required to engage in a conversation and fool an interrogator into believing she is dealing with a person. Moreover, the test is arranged in such a way that having full access to a large corpus of English text might not help much. Finally, the interrogator or a third party will be able to decide unambiguously after a few minutes whether or not a subject has passed the test.', 'year': 2011, 'in_acl': False, 'citationCount': 1275, 'section': None, 'subsection': None}, {'id': 15206880, 'paperId': '26aa6fe2028b5eefbaa40ab54ef725bbbe7d9810', 'title': 'ConceptNet 5.5: An Open Multilingual Graph of General Knowledge', 'authors': [{'authorId': '145696762', 'name': 'R. Speer'}, {'authorId': '2060230787', 'name': 'Joshua Chin'}, {'authorId': '2232845', 'name': 'Catherine Havasi'}], 'venue': 'AAAI Conference on Artificial Intelligence', 'abstract': '\n \n Machine learning about language can be improved by supplying it with specific knowledge and sources of external information. We present here a new version of the linked open data resource ConceptNet that is particularly well suited to be used with modern NLP techniques such as word embeddings. ConceptNet is a knowledge graph that connects words and phrases of natural language with labeled edges. Its knowledge is collected from many sources that include expert-created resources, crowd-sourcing, and games with a purpose. It is designed to represent the general knowledge involved in understanding language, improving natural language applications by allowing the application to better understand the meanings behind the words people use. When ConceptNet is combined with word embeddings acquired from distributional semantics (such as word2vec), it provides applications with understanding that they would not acquire from distributional semantics alone, nor from narrower resources such as WordNet or DBPedia. 
We demonstrate this with state-of-the-art results on intrinsic evaluations of word relatedness that translate into improvements on applications of word vectors, including solving SAT-style analogies.\n \n', 'year': 2016, 'in_acl': False, 'citationCount': 2626, 'section': None, 'subsection': None}, {'id': 16567195, 'paperId': 'cceb698cbbb828537f2f195fb70b6fdc586d3327', 'title': 'Reporting bias and knowledge acquisition', 'authors': [{'authorId': '145402198', 'name': 'Jonathan Gordon'}, {'authorId': '7536576', 'name': 'Benjamin Van Durme'}], 'venue': 'Conference on Automated Knowledge Base Construction', 'abstract': 'Much work in knowledge extraction from text tacitly assumes that the frequency with which people write about actions, outcomes, or properties is a reflection of real-world frequencies or the degree to which a property is characteristic of a class of individuals. In this paper, we question this idea, examining the phenomenon of reporting bias and the challenge it poses for knowledge extraction. We conclude with discussion of approaches to learning commonsense knowledge from text despite this distortion.', 'year': 2013, 'in_acl': False, 'citationCount': 210, 'section': None, 'subsection': None}, {'id': 1726501, 'paperId': '85b68477a6e031d88b963833e15a4b4fc6855264', 'title': 'A Corpus and Cloze Evaluation for Deeper Understanding of Commonsense Stories', 'authors': [{'authorId': '2400138', 'name': 'N. Mostafazadeh'}, {'authorId': '1729918', 'name': 'Nathanael Chambers'}, {'authorId': '144137069', 'name': 'Xiaodong He'}, {'authorId': '153432684', 'name': 'Devi Parikh'}, {'authorId': '1746610', 'name': 'Dhruv Batra'}, {'authorId': '1909300', 'name': 'Lucy Vanderwende'}, {'authorId': '143967473', 'name': 'Pushmeet Kohli'}, {'authorId': '145844737', 'name': 'James F. 
Allen'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': "Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. This issue is particularly challenging for understanding causal and correlational relationships between events. While this topic has received a lot of interest in the NLP community, research has been hindered by the lack of a proper evaluation framework. This paper attempts to address this problem with a new framework for evaluating story understanding and script learning: the 'Story Cloze Test'. This test requires a system to choose the correct ending to a four-sentence story. We created a new corpus of ~50k five-sentence commonsense stories, ROCStories, to enable this evaluation. This corpus is unique in two ways: (1) it captures a rich set of causal and temporal commonsense relations between daily events, and (2) it is a high quality collection of everyday life stories that can also be used for story generation. Experimental evaluation shows that a host of baselines and state-of-the-art models based on shallow language understanding struggle to achieve a high score on the Story Cloze Test. 
We discuss these implications for script and story learning, and offer suggestions for deeper language understanding.", 'year': 2016, 'in_acl': True, 'citationCount': 670, 'section': None, 'subsection': None}, {'id': 53296520, 'paperId': 'c21a4d70d83e0f6eb2a9e1c41d034842dd561e47', 'title': 'CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge', 'authors': [{'authorId': '12371246', 'name': 'Alon Talmor'}, {'authorId': '47426264', 'name': 'Jonathan Herzig'}, {'authorId': '35219984', 'name': 'Nicholas Lourie'}, {'authorId': '1750652', 'name': 'Jonathan Berant'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'When answering a question, people often draw upon their rich world knowledge in addition to the particular context. Recent work has focused primarily on answering questions given some relevant document or context, and required very little general background. To investigate question answering with prior knowledge, we present CommonsenseQA: a challenging new dataset for commonsense question answering. To capture common sense beyond associations, we extract from ConceptNet (Speer et al., 2017) multiple target concepts that have the same semantic relation to a single source concept. Crowd-workers are asked to author multiple-choice questions that mention the source concept and discriminate in turn between each of the target concepts. This encourages workers to create questions with complex semantics that often require prior knowledge. We create 12,247 questions through this procedure and demonstrate the difficulty of our task with a large number of strong baselines. Our best baseline is based on BERT-large (Devlin et al., 2018) and obtains 56% accuracy, well below human performance, which is 89%.', 'year': 2019, 'in_acl': True, 'citationCount': 1364, 'section': None, 'subsection': None}]
|
2020.coling-tutorials.1
|
Cross-lingual Semantic Representation for NLP with UCCA
|
This is an introductory tutorial to UCCA (Universal Conceptual Cognitive Annotation), a cross-linguistically applicable framework for semantic representation, with corpora annotated in English, German and French, and ongoing annotation in Russian and Hebrew. UCCA builds on extensive typological work and supports rapid annotation. The tutorial will provide a detailed introduction to the UCCA annotation guidelines, design philosophy and the available resources; and a comparison to other meaning representations. It will also survey the existing parsing work, including the findings of three recent shared tasks, in SemEval and CoNLL, that addressed UCCA parsing. Finally, the tutorial will present recent applications and extensions to the scheme, demonstrating its value for natural language processing in a range of languages and domains.
| 2020
|
https://aclanthology.org/2020.coling-tutorials.1/
|
COLING
|
[{'id': 60742189, 'paperId': '54ffc8f1cb11ec21eb14a6706b8b6d9b192a1b32', 'title': 'A Semantic Approach to English Grammar', 'authors': [{'authorId': '34256957', 'name': 'R. Dixon'}], 'venue': '', 'abstract': 'This book shows how grammar helps people communicate and looks at the ways grammar and meaning interrelate. The author starts from the notion that a speaker codes a meaning into grammatical forms which the listener is then able to recover: each word, he shows, has its own meaning and each bit of grammar its own function, their combinations creating and limiting the possibilities for different words. He uncovers a rationale for the varying grammatical properties of different words and in the process explains many facts about English - such as why we can say I wish to go, I wish that he would go, and I want to go but not I want that he would go. The first part of the book reviews the main points of English syntax and discusses English verbs in terms of their semantic types including those of Motion, Giving, Speaking, Liking, and Trying. In the second part Professor Dixon looks at eight grammatical topics, including complement clauses, transitivity and causatives, passives, and the promotion of a non-subject to subject, as in Dictionaries sell well. This is the updated and revised edition of A New Approach to English Grammar on Semantic Principles. It includes new chapters on tense and aspect, nominalizations and possession, and adverbs and negation, and contains a new discussion of comparative forms of adjectives. 
It also explains recent changes in English grammar, including how they has replaced the tabooed he as a pronoun referring to either gender, as in When a student reads this book, they will learn a lot about English grammar in a most enjoyable manner.', 'year': 2005, 'in_acl': False, 'citationCount': 223, 'section': None, 'subsection': None}, {'id': 1642392, 'paperId': 'eec3a236ecd185712ce65fb336141f8656eea13d', 'title': 'Simple and Accurate Dependency Parsing Using Bidirectional LSTM Feature Representations', 'authors': [{'authorId': '2022679', 'name': 'E. Kiperwasser'}, {'authorId': '2089067', 'name': 'Yoav Goldberg'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'We present a simple and effective scheme for dependency parsing which is based on bidirectional-LSTMs (BiLSTMs). Each sentence token is associated with a BiLSTM vector representing the token in its sentential context, and feature vectors are constructed by concatenating a few BiLSTM vectors. The BiLSTM is trained jointly with the parser objective, resulting in very effective feature extractors for parsing. We demonstrate the effectiveness of the approach by applying it to a greedy transition-based parser as well as to a globally optimized graph-based parser. The resulting parsers have very simple architectures, and match or surpass the state-of-the-art accuracies on English and Chinese.', 'year': 2016, 'in_acl': True, 'citationCount': 658, 'section': None, 'subsection': None}, {'id': 8233374, 'paperId': '4a715ea217dc5ecc2b16d6cf542bfb3f4a10f2b5', 'title': 'A Transition-Based Directed Acyclic Graph Parser for UCCA', 'authors': [{'authorId': '2086349', 'name': 'Daniel Hershcovich'}, {'authorId': '2769805', 'name': 'Omri Abend'}, {'authorId': '145009917', 'name': 'A. 
Rappoport'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We present the first parser for UCCA, a cross-linguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation. UCCA poses a challenge for existing parsing techniques, as it exhibits reentrancy (resulting in DAG structures), discontinuous structures and non-terminal nodes corresponding to complex semantic units. To our knowledge, the conjunction of these formal properties is not supported by any existing parser. Our transition-based parser, which uses a novel transition set and features based on bidirectional LSTMs, has value not just for UCCA parsing: its ability to handle more general graph structures can inform the development of parsers for other semantic DAG structures, and in languages that frequently use discontinuous structures.', 'year': 2017, 'in_acl': True, 'citationCount': 93, 'section': None, 'subsection': None}, {'id': 15939234, 'paperId': '4908fc4d7f58383170c085fe8238a868e9a901f9', 'title': 'Deep Multitask Learning for Semantic Dependency Parsing', 'authors': [{'authorId': '1818378366', 'name': 'Hao Peng'}, {'authorId': '38094552', 'name': 'Sam Thomson'}, {'authorId': '144365875', 'name': 'Noah A. Smith'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'We present a deep neural architecture that parses sentences into three semantic dependency graph formalisms. By using efficient, nearly arc-factored inference and a bidirectional-LSTM composed with a multi-layer perceptron, our base system is able to significantly improve the state of the art for semantic dependency parsing, without using hand-engineered features or syntax. We then explore two multitask learning approaches—one that shares parameters across formalisms, and one that uses higher-order structures to predict the graphs jointly. 
We find that both approaches improve performance across formalisms on average, achieving a new state of the art. Our code is open-source and available at https://github.com/Noahs-ARK/NeurboParser.', 'year': 2017, 'in_acl': True, 'citationCount': 144, 'section': None, 'subsection': None}, {'id': 19488885, 'paperId': '7ada8577807aefcad4f8120e8a031cceba065ec9', 'title': 'Multitask Parsing Across Semantic Representations', 'authors': [{'authorId': '2086349', 'name': 'Daniel Hershcovich'}, {'authorId': '2769805', 'name': 'Omri Abend'}, {'authorId': '145009917', 'name': 'A. Rappoport'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'The ability to consolidate information of different types is at the core of intelligence, and has tremendous practical value in allowing learning for one task to benefit from generalizations learned for others. In this paper we tackle the challenging task of improving semantic parsing performance, taking UCCA parsing as a test case, and AMR, SDP and Universal Dependencies (UD) parsing as auxiliary tasks. We experiment on three languages, using a uniform transition-based system and learning architecture for all parsing tasks. Despite notable conceptual, formal and domain differences, we show that multitask learning significantly improves UCCA parsing in both in-domain and out-of-domain settings.', 'year': 2018, 'in_acl': True, 'citationCount': 67, 'section': None, 'subsection': None}, {'id': 7741748, 'paperId': 'e32e3feb2225b427caa05eb26f241671196fc942', 'title': 'The State of the Art in Semantic Representation', 'authors': [{'authorId': '2769805', 'name': 'Omri Abend'}, {'authorId': '145009917', 'name': 'A. Rappoport'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'Semantic representation is receiving growing attention in NLP in the past few years, and many proposals for semantic schemes (e.g., AMR, UCCA, GMB, UDS) have been put forth. 
Yet, little has been done to assess the achievements and the shortcomings of these new contenders, compare them with syntactic schemes, and clarify the general goals of research on semantic representation. We address these gaps by critically surveying the state of the art in the field.', 'year': 2017, 'in_acl': True, 'citationCount': 82, 'section': None, 'subsection': None}, {'id': 11461990, 'paperId': '02258d796c3b52c2fd88bca8300465ba79f6199a', 'title': 'Translation Divergences in Chinese–English Machine Translation: An Empirical Investigation', 'authors': [{'authorId': '121137142', 'name': 'D. Deng'}, {'authorId': '1702849', 'name': 'Nianwen Xue'}], 'venue': 'International Conference on Computational Logic', 'abstract': 'In this article, we conduct an empirical investigation of translation divergences between Chinese and English relying on a parallel treebank. To do this, we first devise a hierarchical alignment scheme where Chinese and English parse trees are aligned in a way that eliminates conflicts and redundancies between word alignments and syntactic parses to prevent the generation of spurious translation divergences. Using this Hierarchically Aligned Chinese–English Parallel Treebank (HACEPT), we are able to semi-automatically identify and categorize the translation divergences between the two languages and quantify each type of translation divergence. Our results show that the translation divergences are much broader than described in previous studies that are largely based on anecdotal evidence and linguistic knowledge. The distribution of the translation divergences also shows that some high-profile translation divergences that motivate previous research are actually very rare in our data, whereas other translation divergences that have previously received little attention actually exist in large quantities. 
We also show that HACEPT allows the extraction of syntax-based translation rules, most of which are expressive enough to capture the translation divergences, and point out that the syntactic annotation in existing treebanks is not optimal for extracting such translation rules. We also discuss the implications of our study for attempts to bridge translation divergences by devising shared semantic representations across languages. Our quantitative results lend further support to the observation that although it is possible to bridge some translation divergences with semantic representations, other translation divergences are open-ended, thus building a semantic representation that captures all possible translation divergences may be impractical.', 'year': 2017, 'in_acl': True, 'citationCount': 27, 'section': None, 'subsection': None}, {'id': 245635, 'paperId': '6c6fa1184cfc25b0e1b9e2c835b40be4d716bfe2', 'title': 'Linguistic Typology meets Universal Dependencies', 'authors': [{'authorId': '144456145', 'name': 'W. Bruce Croft'}, {'authorId': '8613581', 'name': 'D. Nordquist'}, {'authorId': '27963760', 'name': 'Katherine Looney'}, {'authorId': '145666891', 'name': 'Michael Regan'}], 'venue': 'International Workshop on Treebanks and Linguistic Theories', 'abstract': 'Current work on universal dependency schemes in NLP does not make reference to the extensive typological research on language universals, but could benefit since many principles are shared between the two enterprises. We propose a revision of the syntactic dependencies in the Universal Dependencies scheme (Nivre et al. 
[16, 17]) based on four principles derived from contemporary typological theory: dependencies should be based primarily on universal construction types over language-specific strategies; syntactic dependency labels should match lexical feature names for the same function; dependencies should be based on the information packaging function of constructions, not lexical semantic types; and dependencies should keep distinct the “ranks” of the functional dependency tree.', 'year': 2017, 'in_acl': False, 'citationCount': 41, 'section': None, 'subsection': None}, {'id': 216056404, 'paperId': 'b805693c17961af2cc7f859c1a54320b26036f46', 'title': 'Universal Dependencies v2: An Evergrowing Multilingual Treebank Collection', 'authors': [{'authorId': '1720988', 'name': 'Joakim Nivre'}, {'authorId': '2241127', 'name': 'M. Marneffe'}, {'authorId': '1694491', 'name': 'Filip Ginter'}, {'authorId': '1602260260', 'name': 'Jan Hajivc'}, {'authorId': '144783904', 'name': 'Christopher D. Manning'}, {'authorId': '1708916', 'name': 'S. Pyysalo'}, {'authorId': '145157639', 'name': 'Sebastian Schuster'}, {'authorId': '3262036', 'name': 'Francis M. Tyers'}, {'authorId': '1771298', 'name': 'Daniel Zeman'}], 'venue': 'International Conference on Language Resources and Evaluation', 'abstract': 'Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. The annotation consists in a linguistically motivated word segmentation; a morphological layer comprising lemmas, universal part-of-speech tags, and standardized morphological features; and a syntactic layer focusing on syntactic relations between predicates, arguments and modifiers. 
In this paper, we describe version 2 of the universal guidelines (UD v2), discuss the major changes from UD v1 to UD v2, and give an overview of the currently available treebanks for 90 languages.', 'year': 2020, 'in_acl': True, 'citationCount': 476, 'section': None, 'subsection': None}]
|
2020.coling-tutorials.2
|
Embeddings in Natural Language Processing
|
Embeddings have been one of the most important topics of interest in NLP for the past decade. Representing knowledge through low-dimensional vectors that are easily integrable into modern machine learning models has played a central role in the development of the field. Embedding techniques initially focused on words, but attention soon started to shift to other forms. This tutorial will provide a high-level synthesis of the main embedding techniques in NLP, in the broad sense. We will start with conventional word embeddings (e.g., Word2Vec and GloVe) and then move to other types of embeddings, such as sense-specific and graph alternatives. We will conclude with an overview of the trending contextualized representations (e.g., ELMo and BERT) and explain their potential and impact in NLP.
| 2,020
|
https://aclanthology.org/2020.coling-tutorials.2/
|
COLING
|
[{'id': 15829786, 'paperId': 'e569d99f3a0fcfa038631dda2b44c73a6e8e97b8', 'title': 'Dimensions of meaning', 'authors': [{'authorId': '144418438', 'name': 'Hinrich Schütze'}], 'venue': "Supercomputing '92", 'abstract': 'The representation of documents and queries as vectors in a high-dimensional space is well-established in information retrieval. The author proposes that the semantics of words and contexts in a text be represented as vectors. The dimensions of the space are words and the initial vectors are determined by the words occurring close to the entity to be represented, which implies that the space has several thousand dimensions (words). This makes the vector representations (which are dense) too cumbersome to use directly. Therefore, dimensionality reduction by means of a singular value decomposition is employed. The author analyzes the structure of the vector representations and applies them to word sense disambiguation and thesaurus induction.', 'year': 1992, 'in_acl': False, 'citationCount': 467, 'section': None, 'subsection': None}, {'id': 1500900, 'paperId': '3a0e788268fafb23ab20da0e98bb578b06830f7d', 'title': 'From Frequency to Meaning: Vector Space Models of Semantics', 'authors': [{'authorId': '1689647', 'name': 'Peter D. Turney'}, {'authorId': '1990190', 'name': 'Patrick Pantel'}], 'venue': 'Journal of Artificial Intelligence Research', 'abstract': 'Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM.
There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.', 'year': 2010, 'in_acl': False, 'citationCount': 2942, 'section': None, 'subsection': None}, {'id': 5959482, 'paperId': 'f6b51c8753a871dc94ff32152c00c01e94f90f09', 'title': 'Efficient Estimation of Word Representations in Vector Space', 'authors': [{'authorId': '2047446108', 'name': 'Tomas Mikolov'}, {'authorId': '2118440152', 'name': 'Kai Chen'}, {'authorId': '32131713', 'name': 'G. Corrado'}, {'authorId': '49959210', 'name': 'J. Dean'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set.
Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities.', 'year': 2013, 'in_acl': False, 'citationCount': 29962, 'section': None, 'subsection': None}, {'id': 1957433, 'paperId': 'f37e1b62a767a307c046404ca96bc140b3e68cb5', 'title': 'GloVe: Global Vectors for Word Representation', 'authors': [{'authorId': '143845796', 'name': 'Jeffrey Pennington'}, {'authorId': '2166511', 'name': 'R. Socher'}, {'authorId': '144783904', 'name': 'Christopher D. Manning'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.', 'year': 2014, 'in_acl': True, 'citationCount': 30717, 'section': None, 'subsection': None}, {'id': 7890036, 'paperId': '59761abc736397539bdd01ad7f9d91c8607c0457', 'title': 'context2vec: Learning Generic Context Embedding with Bidirectional LSTM', 'authors': [{'authorId': '2298649', 'name': 'Oren Melamud'}, {'authorId': '34508613', 'name': 'J.
Goldberger'}, {'authorId': '7465342', 'name': 'Ido Dagan'}], 'venue': 'Conference on Computational Natural Language Learning', 'abstract': 'Context representations are central to various NLP tasks, such as word sense disambiguation, named entity recognition, co-reference resolution, and many more. In this work we present a neural model for efficiently learning a generic context embedding function from large corpora, using bidirectional LSTM. With a very simple application of our context representations, we manage to surpass or nearly reach state-of-the-art results on sentence completion, lexical substitution and word sense disambiguation tasks, while substantially outperforming the popular context representation of averaged word embeddings. We release our code and pre-trained models, suggesting they could be useful in a wide variety of NLP tasks.', 'year': 2016, 'in_acl': True, 'citationCount': 484, 'section': None, 'subsection': None}, {'id': 3626819, 'paperId': '3febb2bed8865945e7fddc99efd791887bb7e14f', 'title': 'Deep Contextualized Word Representations', 'authors': [{'authorId': '39139825', 'name': 'Matthew E. Peters'}, {'authorId': '50043859', 'name': 'Mark Neumann'}, {'authorId': '2136562', 'name': 'Mohit Iyyer'}, {'authorId': '40642935', 'name': 'Matt Gardner'}, {'authorId': '143997772', 'name': 'Christopher Clark'}, {'authorId': '2544107', 'name': 'Kenton Lee'}, {'authorId': '1982950', 'name': 'Luke Zettlemoyer'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus.
We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.', 'year': 2018, 'in_acl': True, 'citationCount': 11138, 'section': None, 'subsection': None}, {'id': 13696533, 'paperId': 'bf9db8ca2dce7386cbed1ae0fd6465148cdb2b98', 'title': 'From Word to Sense Embeddings: A Survey on Vector Representations of Meaning', 'authors': [{'authorId': '1387447871', 'name': 'José Camacho-Collados'}, {'authorId': '1717641', 'name': 'Mohammad Taher Pilehvar'}], 'venue': 'Journal of Artificial Intelligence Research', 'abstract': 'Over the past years, distributed semantic representations have proved to be effective and flexible keepers of prior knowledge to be integrated into downstream applications. This survey focuses on the representation of meaning. We start from the theoretical background behind word vector space models and highlight one of their major limitations: the meaning conflation deficiency, which arises from representing a word with all its possible meanings as a single vector. Then, we explain how this deficiency can be addressed through a transition from the word level to the more fine-grained level of word senses (in its broader acceptation) as a method for modelling unambiguous lexical meaning. We present a comprehensive overview of the wide range of techniques in the two main branches of sense representation, i.e., unsupervised and knowledge-based. Finally, this survey covers the main evaluation procedures and applications for this type of representation, and provides an analysis of four of its important aspects: interpretability, sense granularity, adaptability to different domains and compositionality.
', 'year': 2018, 'in_acl': False, 'citationCount': 317, 'section': None, 'subsection': None}, {'id': 52098907, 'paperId': 'ac11062f1f368d97f4c826c317bf50dcc13fdb59', 'title': 'Dissecting Contextual Word Embeddings: Architecture and Representation', 'authors': [{'authorId': '39139825', 'name': 'Matthew E. Peters'}, {'authorId': '50043859', 'name': 'Mark Neumann'}, {'authorId': '1982950', 'name': 'Luke Zettlemoyer'}, {'authorId': '144105277', 'name': 'Wen-tau Yih'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Contextual word representations derived from pre-trained bidirectional language models (biLMs) have recently been shown to provide significant improvements to the state of the art for a wide range of NLP tasks. However, many questions remain as to how and why these models are so effective. In this paper, we present a detailed empirical study of how the choice of neural architecture (e.g. LSTM, CNN, or self attention) influences both end task accuracy and qualitative properties of the representations that are learned. We show there is a tradeoff between speed and accuracy, but all architectures learn high quality contextual representations that outperform word embeddings for four challenging NLP tasks. Additionally, all architectures learn representations that vary with network depth, from exclusively morphological based at the word embedding layer through local syntax based in the lower contextual layers to longer range semantics such as coreference at the upper layers.
Together, these results suggest that unsupervised biLMs, independent of architecture, are learning much more about the structure of language than previously appreciated.', 'year': 2018, 'in_acl': True, 'citationCount': 406, 'section': None, 'subsection': None}, {'id': 52967399, 'paperId': 'df2b0e26d0599ce3e70df8a9da02e51594e0e992', 'title': 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding', 'authors': [{'authorId': '39172707', 'name': 'Jacob Devlin'}, {'authorId': '1744179', 'name': 'Ming-Wei Chang'}, {'authorId': '2544107', 'name': 'Kenton Lee'}, {'authorId': '3259253', 'name': 'Kristina Toutanova'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).', 'year': 2019, 'in_acl': True, 'citationCount': 84138, 'section': None, 'subsection': None}]
|
2020.coling-tutorials.5
|
A guide to the dataset explosion in QA, NLI, and commonsense reasoning
|
Question answering, natural language inference and commonsense reasoning are increasingly popular as general NLP system benchmarks, driving both modeling and dataset work. For question answering alone, there are already over 100 datasets, with over 40 published after 2018. However, most new datasets get “solved” soon after publication, largely not because of the verbal reasoning capabilities of our models, but because of annotation artifacts and shallow cues in the data that they can exploit. This tutorial aims to (1) provide an up-to-date guide to the recent datasets, (2) survey the old and new methodological issues with dataset construction, and (3) outline the existing proposals for overcoming them. The target audience is NLP practitioners who are lost in the dozens of recent datasets and would like to know what these datasets are actually measuring. Our overview of the problems with current datasets, and the latest tips and tricks for overcoming them, will also be useful to researchers working on future benchmarks.
| 2,020
|
https://aclanthology.org/2020.coling-tutorials.5/
|
COLING
|
[{'id': 182952898, 'paperId': 'a1f000b88e81f02b2a0d7a4097171428364af8c7', 'title': 'A Survey on Neural Machine Reading Comprehension', 'authors': [{'authorId': '2064466537', 'name': 'Boyu Qiu'}, {'authorId': '2118183867', 'name': 'Xu Chen'}, {'authorId': '2073589', 'name': 'Jungang Xu'}, {'authorId': '46676156', 'name': 'Yingfei Sun'}], 'venue': 'arXiv.org', 'abstract': 'Enabling a machine to read and comprehend the natural language documents so that it can answer some questions remains an elusive challenge. In recent years, the popularity of deep learning and the establishment of large-scale datasets have both promoted the prosperity of Machine Reading Comprehension. This paper aims to present how to utilize the Neural Network to build a Reader and introduce some classic models, analyze what improvements they make. Further, we also point out the defects of existing models and future research directions', 'year': 2019, 'in_acl': False, 'citationCount': 29, 'section': None, 'subsection': None}, {'id': 213613608, 'paperId': '4043a936960de8e149dc208178fe1bcb157c7fa4', 'title': 'Recent Advances in Natural Language Inference: A Survey of Benchmarks, Resources, and Approaches', 'authors': [{'authorId': '89093987', 'name': 'Shane Storks'}, {'authorId': '3193409', 'name': 'Qiaozi Gao'}, {'authorId': '1707259', 'name': 'J. Chai'}], 'venue': '', 'abstract': "In the NLP community, recent years have seen a surge of research activities that address machines' ability to perform deep language understanding which goes beyond what is explicitly stated in text, rather relying on reasoning and knowledge of the world. Many benchmark tasks and datasets have been created to support the development and evaluation of such natural language inference ability. 
As these benchmarks become instrumental and a driving force for the NLP research community, this paper aims to provide an overview of recent benchmarks, relevant knowledge resources, and state-of-the-art learning and inference approaches in order to support a better understanding of this growing field.", 'year': 2019, 'in_acl': False, 'citationCount': 116, 'section': None, 'subsection': None}, {'id': 219124082, 'paperId': '4311eefc03a3f391bae39ebf364cbd5f8b90a001', 'title': 'Beyond Leaderboards: A survey of methods for revealing weaknesses in Natural Language Inference data and models', 'authors': [{'authorId': '71034258', 'name': 'Viktor Schlegel'}, {'authorId': '2144507', 'name': 'G. Nenadic'}, {'authorId': '1400900759', 'name': 'R. Batista-Navarro'}], 'venue': 'arXiv.org', 'abstract': "Recent years have seen a growing number of publications that analyse Natural Language Inference (NLI) datasets for superficial cues, whether they undermine the complexity of the tasks underlying those datasets and how they impact those models that are optimised and evaluated on this data. This structured survey provides an overview of the evolving research area by categorising reported weaknesses in models and datasets and the methods proposed to reveal and alleviate those weaknesses for the English language. We summarise and discuss the findings and conclude with a set of recommendations for possible future research directions. 
We hope it will be a useful resource for researchers who propose new datasets, to have a set of tools to assess the suitability and quality of their data to evaluate various phenomena of interest, as well as those who develop novel architectures, to further understand the implications of their improvements with respect to their model's acquired capabilities.", 'year': 2020, 'in_acl': False, 'citationCount': 18, 'section': None, 'subsection': None}, {'id': 91184338, 'paperId': 'b1832b749528755dfcbe462717f4f5afc07243b8', 'title': 'Commonsense Reasoning for Natural Language Understanding: A Survey of Benchmarks, Resources, and Approaches', 'authors': [{'authorId': '89093987', 'name': 'Shane Storks'}, {'authorId': '3193409', 'name': 'Qiaozi Gao'}, {'authorId': '1707259', 'name': 'J. Chai'}], 'venue': 'arXiv.org', 'abstract': "Commonsense knowledge and commonsense reasoning are some of the main bottlenecks in machine intelligence. In the NLP community, many benchmark datasets and tasks have been created to address commonsense reasoning for language understanding. These tasks are designed to assess machines' ability to acquire and learn commonsense knowledge in order to reason and understand natural language text. As these tasks become instrumental and a driving force for commonsense research, this paper aims to provide an overview of existing tasks and benchmarks, knowledge resources, and learning and inference approaches toward commonsense reasoning for natural language understanding. Through this, our goal is to support a better understanding of the state of the art, its limitations, and future challenges.", 'year': 2019, 'in_acl': False, 'citationCount': 72, 'section': None, 'subsection': None}]
|
2020.coling-tutorials.6
|
A Crash Course in Automatic Grammatical Error Correction
|
Grammatical Error Correction (GEC) is the task of automatically detecting and correcting all types of errors in written text. Although most research has focused on correcting errors in the context of English as a Second Language (ESL), GEC can also be applied to other languages and native text. The main application of a GEC system is thus to assist humans with their writing. Academic and commercial interest in GEC has grown significantly since the Helping Our Own (HOO) and Conference on Natural Language Learning (CoNLL) shared tasks in 2011-14, and a record-breaking 24 teams took part in the recent Building Educational Applications (BEA) shared task. Given this interest, and the recent shift towards neural approaches, we believe the time is right to offer a tutorial on GEC for researchers who may be new to the field or who are interested in the current state of the art and future challenges. With this in mind, the main goal of this tutorial is not only to bring attendees up to speed with GEC in general, but also examine the development of neural-based GEC systems.
| 2,020
|
https://aclanthology.org/2020.coling-tutorials.6/
|
COLING
|
[{'id': 219306476, 'paperId': '20499f3c6fe9f84a12c9def941e2e12846a00c77', 'title': 'The CoNLL-2014 Shared Task on Grammatical Error Correction', 'authors': [{'authorId': '34789794', 'name': 'H. Ng'}, {'authorId': '2069266', 'name': 'S. Wu'}, {'authorId': '145693410', 'name': 'Ted Briscoe'}, {'authorId': '3271719', 'name': 'Christian Hadiwinoto'}, {'authorId': '32406168', 'name': 'Raymond Hendy Susanto'}, {'authorId': '145178009', 'name': 'Christopher Bryant'}], 'venue': 'CoNLL Shared Task', 'abstract': 'The CoNLL-2014 shared task was devoted to grammatical error correction of all error types. In this paper, we give the task definition, present the data sets, and describe the evaluation metric and scorer used in the shared task. We also give an overview of the various approaches adopted by the participating teams, and present the evaluation results. Compared to the CoNLL2013 shared task, we have introduced the following changes in CoNLL-2014: (1) A participating system is expected to detect and correct grammatical errors of all types, instead of just the five error types in CoNLL-2013; (2) The evaluation metric was changed from F1 to F0.5, to emphasize precision over recall; and (3) We have two human annotators who independently annotated the test essays, compared to just one human annotator in CoNLL-2013.', 'year': 2014, 'in_acl': True, 'citationCount': 498, 'section': None, 'subsection': None}, {'id': 18051414, 'paperId': 'be08a1189ce88c4a6d6a98784377eb02e578d95a', 'title': 'Building a State-of-the-Art Grammatical Error Correction System', 'authors': [{'authorId': '2271568', 'name': 'Alla Rozovskaya'}, {'authorId': '144590225', 'name': 'D. Roth'}], 'venue': 'Transactions of the Association for Computational Linguistics', 'abstract': 'This paper identifies and examines the key principles underlying building a state-of-the-art grammatical error correction system. 
We do this by analyzing the Illinois system that placed first among seventeen teams in the recent CoNLL-2013 shared task on grammatical error correction. The system focuses on five different types of errors common among non-native English writers. We describe four design principles that are relevant for correcting all of these errors, analyze the system along these dimensions, and show how each of these dimensions contributes to the performance.', 'year': 2014, 'in_acl': True, 'citationCount': 33, 'section': None, 'subsection': None}, {'id': 6820419, 'paperId': 'b19f365aab0bf8c6cf712c07313b919556bfacc0', 'title': 'Phrase-based Machine Translation is State-of-the-Art for Automatic Grammatical Error Correction', 'authors': [{'authorId': '1733933', 'name': 'Marcin Junczys-Dowmunt'}, {'authorId': '3272639', 'name': 'Roman Grundkiewicz'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'In this work, we study parameter tuning towards the M^2 metric, the standard metric for automatic grammar error correction (GEC) tasks. After implementing M^2 as a scorer in the Moses tuning framework, we investigate interactions of dense and sparse features, different optimizers, and tuning strategies for the CoNLL-2014 shared task. We notice erratic behavior when optimizing sparse feature weights with M^2 and offer partial solutions. We find that a bare-bones phrase-based SMT setup with task-specific parameter-tuning outperforms all previously published results for the CoNLL-2014 test set by a large margin (46.37% M^2 over previously 41.75%, by an SMT system with neural features) while being trained on the same, publicly available data. 
Our newly introduced dense and sparse features widen that gap, and we improve the state-of-the-art to 49.49% M^2.', 'year': 2016, 'in_acl': True, 'citationCount': 102, 'section': None, 'subsection': None}, {'id': 19236015, 'paperId': '6ed38b0cb510fa91434eb63ab464bee66c9323c6', 'title': 'A Multilayer Convolutional Encoder-Decoder Neural Network for Grammatical Error Correction', 'authors': [{'authorId': '3422793', 'name': 'Shamil Chollampatt'}, {'authorId': '34789794', 'name': 'H. Ng'}], 'venue': 'AAAI Conference on Artificial Intelligence', 'abstract': 'We improve automatic correction of grammatical, orthographic, and collocation errors in text using a multilayer convolutional encoder-decoder neural network. The network is initialized with embeddings that make use of character N-gram information to better suit this task. When evaluated on common benchmark test data sets (CoNLL-2014 and JFLEG), our model substantially outperforms all prior neural approaches on this task as well as strong statistical machine translation-based systems with neural and task-specific features trained on the same data. Our analysis shows the superiority of convolutional neural networks over recurrent neural networks such as long short-term memory (LSTM) networks in capturing the local context via attention, and thereby improving the coverage in correcting grammatical errors.
By ensembling multiple models, and incorporating an N-gram language model and edit features via rescoring, our novel method becomes the first neural approach to outperform the current state-of-the-art statistical machine translation-based approach, both in terms of grammaticality and fluency.', 'year': 2018, 'in_acl': False, 'citationCount': 210, 'section': None, 'subsection': None}, {'id': 195504787, 'paperId': '7cc6f009feb5ad5ad0e1ff00c551fb318fc95016', 'title': 'Neural Grammatical Error Correction Systems with Unsupervised Pre-training on Synthetic Data', 'authors': [{'authorId': '3272639', 'name': 'Roman Grundkiewicz'}, {'authorId': '1733933', 'name': 'Marcin Junczys-Dowmunt'}, {'authorId': '1702066', 'name': 'Kenneth Heafield'}], 'venue': 'BEA@ACL', 'abstract': 'Considerable effort has been made to address the data sparsity problem in neural grammatical error correction. In this work, we propose a simple and surprisingly effective unsupervised synthetic error generation method based on confusion sets extracted from a spellchecker to increase the amount of training data. Synthetic data is used to pre-train a Transformer sequence-to-sequence model, which not only improves over a strong baseline trained on authentic error-annotated data, but also enables the development of a practical GEC system in a scenario where little genuine error-annotated data is available. The developed systems placed first in the BEA19 shared task, achieving 69.47 and 64.24 F_{0.5} in the restricted and low-resource tracks respectively, both on the W&I+LOCNESS test set.
On the popular CoNLL 2014 test set, we report state-of-the-art results of 64.16 M² for the submitted system, and 61.30 M² for the constrained system trained on the NUCLE and Lang-8 data.', 'year': 2019, 'in_acl': True, 'citationCount': 169, 'section': None, 'subsection': None}, {'id': 202539354, 'paperId': '1a5ef51ae0c0ee1216e14aa390734cf7581c3b27', 'title': 'An Empirical Study of Incorporating Pseudo Data into Grammatical Error Correction', 'authors': [{'authorId': '32140786', 'name': 'Shun Kiyono'}, {'authorId': '144042991', 'name': 'Jun Suzuki'}, {'authorId': '35643168', 'name': 'Masato Mita'}, {'authorId': '3079116', 'name': 'Tomoya Mizumoto'}, {'authorId': '3040648', 'name': 'Kentaro Inui'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'The incorporation of pseudo data in the training of grammatical error correction models has been one of the main factors in improving the performance of such models. However, consensus is lacking on experimental configurations, namely, choosing how the pseudo data should be generated or used. In this study, these choices are investigated through extensive experiments, and state-of-the-art performance is achieved on the CoNLL-2014 test set (F0.5=65.0) and the official test set of the BEA-2019 shared task (F0.5=70.2) without making any modifications to the model architecture.', 'year': 2019, 'in_acl': True, 'citationCount': 143, 'section': None, 'subsection': None}]
|
2020.coling-tutorials.7
|
Endangered Languages meet Modern NLP
|
This tutorial will focus on NLP for endangered languages documentation and revitalization. First, we will acquaint the attendees with the process and the challenges of language documentation, showing how the needs of the language communities and the documentary linguists map to specific NLP tasks. We will then present the state-of-the-art in NLP applied in this particularly challenging setting (extremely low-resource datasets, noisy transcriptions, limited annotations, non-standard orthographies). In doing so, we will also analyze the challenges of working in this domain and expand on both the capabilities and the limitations of current NLP approaches. Our ultimate goal is to motivate more NLP practitioners to work towards this very important direction, and also provide them with the tools and understanding of the limitations/challenges, both of which are needed in order to have an impact.
| 2,020
|
https://aclanthology.org/2020.coling-tutorials.7/
|
COLING
|
[{'id': 48356442, 'paperId': 'b4aa5354e88564b2e4eeee3019ed04e5388042f3', 'title': 'Challenges of language technologies for the indigenous languages of the Americas', 'authors': [{'authorId': '153151470', 'name': 'Manuel Mager'}, {'authorId': '1409305289', 'name': 'Ximena Gutierrez-Vasques'}, {'authorId': '32889164', 'name': 'Gerardo E Sierra'}, {'authorId': '1403616824', 'name': 'Ivan Vladimir Meza Ruiz'}], 'venue': 'International Conference on Computational Linguistics', 'abstract': 'Indigenous languages of the American continent are highly diverse. However, they have received little attention from the technological perspective. In this paper, we review the research, the digital resources and the available NLP systems that focus on these languages. We present the main challenges and research questions that arise when distant languages and low-resource scenarios are faced. We would like to encourage NLP research in linguistically rich and diverse areas like the Americas.', 'year': 2018, 'in_acl': True, 'citationCount': 83, 'section': None, 'subsection': None}, {'id': 38196008, 'paperId': '842232e4c1f11239fb67cb0f1bc068149df64183', 'title': 'Last Words: Natural Language Processing and Linguistic Fieldwork', 'authors': [{'authorId': '21308992', 'name': 'Steven Bird'}], 'venue': 'International Conference on Computational Logic', 'abstract': '', 'year': 2009, 'in_acl': True, 'citationCount': 35, 'section': None, 'subsection': None}, {'id': 201666291, 'paperId': '21d4ffff54d3f71d028f9bda89c22a496e0fbb82', 'title': 'Deploying Technology to Save Endangered Languages', 'authors': [{'authorId': '38878141', 'name': 'Hilaria Cruz'}, {'authorId': '2060201926', 'name': 'Joseph Waring'}], 'venue': 'arXiv.org', 'abstract': 'Computer scientists working on natural language processing, native speakers of endangered languages, and field linguists met to discuss ways to harness Automatic Speech Recognition, especially neural networks, to automate annotation, speech tagging, and text
parsing on endangered languages.', 'year': 2019, 'in_acl': False, 'citationCount': 6, 'section': None, 'subsection': None}, {'id': 146056073, 'paperId': '55b9e8f6194e745190cce0dd6db7eca4ff53a260', 'title': 'Future Directions in Technological Support for Language Documentation', 'authors': [{'authorId': '8775666', 'name': 'D. Esch'}, {'authorId': '92304991', 'name': 'Ben Foley'}, {'authorId': '79396737', 'name': 'Nay San'}], 'venue': 'Proceedings of the Workshop on Computational Methods for Endangered Languages', 'abstract': 'To reduce the annotation burden placed on linguistic fieldworkers, freeing up time for deeper linguistic analysis and descriptive work, the language documentation community has been working with machine learning researchers to investigate what assistive role technology can play, with promising early results. This paper describes a number of potential follow-up technical projects that we believe would be worthwhile and straightforward to do. We provide examples of the annotation tasks for computer scientists; descriptions of the technological challenges involved and the estimated level of complexity; and pointers to relevant literature. We hope providing a clear overview of what the needs are and what annotation challenges exist will help facilitate the dialogue and collaboration between computer scientists and fieldwork linguists.', 'year': 2019, 'in_acl': True, 'citationCount': 17, 'section': None, 'subsection': None}, {'id': 52011439, 'paperId': '243880fde63abfc287bd1356c2e1dbf68a1a0aac', 'title': 'Indigenous language technologies in Canada: Assessment, challenges, and successes', 'authors': [{'authorId': '3070462', 'name': 'Patrick Littell'}, {'authorId': '145131025', 'name': 'Anna Kazantseva'}, {'authorId': '143937779', 'name': 'R. Kuhn'}, {'authorId': '51183422', 'name': 'Aidan Pine'}, {'authorId': '2587266', 'name': 'Antti Arppe'}, {'authorId': '2052347303', 'name': 'Christopher Cox'}, {'authorId': '46230685', 'name': 'M. 
Junker'}], 'venue': 'International Conference on Computational Linguistics', 'abstract': 'In this article, we discuss which text, speech, and image technologies have been developed, and would be feasible to develop, for the approximately 60 Indigenous languages spoken in Canada. In particular, we concentrate on technologies that may be feasible to develop for most or all of these languages, not just those that may be feasible for the few most-resourced of these. We assess past achievements and consider future horizons for Indigenous language transliteration, text prediction, spell-checking, approximate search, machine translation, speech recognition, speaker diarization, speech synthesis, optical character recognition, and computer-aided language learning.', 'year': 2018, 'in_acl': True, 'citationCount': 40, 'section': None, 'subsection': None}, {'id': 11955283, 'paperId': '82e14d316f8e21b883a6a580f29c9953e6ce1886', 'title': 'Automatic speech recognition for under-resourced languages: A survey', 'authors': [{'authorId': '143823463', 'name': 'L. Besacier'}, {'authorId': '1790459', 'name': 'E. Barnard'}, {'authorId': '145191867', 'name': 'Alexey Karpov'}, {'authorId': '145618636', 'name': 'Tanja Schultz'}], 'venue': 'Speech Communication', 'abstract': 'Speech processing for under-resourced languages is an active field of research, which has experienced significant progress during the past decade. We propose, in this paper, a survey that focuses on automatic speech recognition (ASR) for these languages. The definition of under-resourced languages and the challenges associated to them are first defined. The main part of the paper is a literature review of the recent (last 8 years) contributions made in ASR for under-resourced languages. Examples of past projects and future trends when dealing with under-resourced languages are also presented. 
We believe that this paper will be a good starting point for anyone interested to initiate research in (or operational development of) ASR for one or several under-resourced languages. It should be clear, however, that many of the issues and approaches presented here, apply to speech technology in general (text-to-speech synthesis for instance).', 'year': 2014, 'in_acl': False, 'citationCount': 484, 'section': None, 'subsection': None}, {'id': 53065219, 'paperId': 'c859416a8e5682bee3c35df29bc02e02a22de072', 'title': 'Integrating automatic transcription into the language documentation workflow: Experiments with Na data and the Persephone toolkit', 'authors': [{'authorId': '39720748', 'name': 'Alexis Michaud'}, {'authorId': '38535429', 'name': 'Oliver Adams'}, {'authorId': '143620680', 'name': 'Trevor Cohn'}, {'authorId': '1700325', 'name': 'Graham Neubig'}, {'authorId': '46673007', 'name': 'Severine Guillaume'}], 'venue': '', 'abstract': 'Automatic speech recognition tools have potential for facilitating language documentation, but in practice these tools remain little-used by linguists for a variety of reasons, such as that the technology is still new (and evolving rapidly), user-friendly interfaces are still under development, and case studies demonstrating the practical usefulness of automatic recognition in a low-resource setting remain few. This article reports on a success story in integrating automatic transcription into the language documentation workflow, specifically for Yongning Na, a language of Southwest China. Using PERSEPHONE, an open-source toolkit, a single-speaker speech transcription tool was trained over five hours of manually transcribed speech. The experiments found that this method can achieve a remarkably low error rate (on the order of 17%), and that automatic transcriptions were useful as a canvas for the linguist. The present report is intended for linguists with little or no knowledge of speech processing. 
It aims to provide insights into (i) the way the tool operates and (ii) the process of collaborating with natural language processing specialists. Practical recommendations are offered on how to anticipate the requirements of this type of technology from the early stages of data collection in the field.', 'year': 2018, 'in_acl': False, 'citationCount': 44, 'section': None, 'subsection': None}, {'id': 53333371, 'paperId': '9476918d232768de4f2cbc13240c6626f49b4d04', 'title': 'Building Speech Recognition Systems for Language Documentation: The CoEDL Endangered Language Pipeline and Inference System (Elpis)', 'authors': [{'authorId': '92304991', 'name': 'Ben Foley'}, {'authorId': '32286262', 'name': 'Joshua T. Arnold'}, {'authorId': '1405433180', 'name': 'Rolando Coto-Solano'}, {'authorId': '5963331', 'name': 'Gautier Durantin'}, {'authorId': '144016062', 'name': 'T. M. Ellison'}, {'authorId': '8775666', 'name': 'D. Esch'}, {'authorId': '145936386', 'name': 'Scott Heath'}, {'authorId': '4438531', 'name': 'František Kratochvil'}, {'authorId': '1410034849', 'name': 'Zara Maxwell-Smith'}, {'authorId': '2082556414', 'name': 'David Nash'}, {'authorId': '26949393', 'name': 'Ola Olsson'}, {'authorId': '2072102908', 'name': 'Mark Richards'}, {'authorId': '79396737', 'name': 'Nay San'}, {'authorId': '2343936', 'name': 'H. Stoakes'}, {'authorId': '24739790', 'name': 'N. Thieberger'}, {'authorId': '1716264', 'name': 'Janet Wiles'}], 'venue': '', 'abstract': 'Machine learning has revolutionised speech technologies for major world languages, but these technologies have generally not been available for the roughly 4,000 languages with populations of fewer than 10,000 speakers. This paper describes the development of Elpis, a pipeline which language documentation workers with minimal computational experience can use to build their own speech recognition models, resulting in models being built for 16 languages from the Asia-Pacific region. 
Elpis puts machine learning speech technologies within reach of people working with languages with scarce data, in a scalable way. This is impactful since it enables language communities to cross the digital divide, and speeds up language documentation. Complete automation of the process is not feasible for languages with small quantities of data and potentially large vocabularies. Hence our goal is not full automation, but rather to make a practical and effective workflow that integrates machine learning technologies.', 'year': 2018, 'in_acl': False, 'citationCount': 47, 'section': None, 'subsection': None}, {'id': 201058388, 'paperId': 'f249e3a7d4f7f964e9a4ca6e633ac31410a91dd8', 'title': 'Pushing the Limits of Low-Resource Morphological Inflection', 'authors': [{'authorId': '49513989', 'name': 'Antonios Anastasopoulos'}, {'authorId': '1700325', 'name': 'Graham Neubig'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Recent years have seen exceptional strides in the task of automatic morphological inflection generation. However, for a long tail of languages the necessary resources are hard to come by, and state-of-the-art neural methods that work well under higher resource settings perform poorly in the face of a paucity of data. In response, we propose a battery of improvements that greatly improve performance under such low-resource conditions. First, we present a novel two-step attention architecture for the inflection decoder. In addition, we investigate the effects of cross-lingual transfer from single and multiple languages, as well as monolingual data hallucination. The macro-averaged accuracy of our models outperforms the state-of-the-art by 15 percentage points. 
Also, we identify the crucial factors for success with cross-lingual transfer for morphological inflection: typological similarity and a common representation across languages.', 'year': 2019, 'in_acl': True, 'citationCount': 75, 'section': None, 'subsection': None}]
|
2020.emnlp-tutorials.1
|
Machine Reasoning: Technology, Dilemma and Future
|
Machine reasoning research aims to build interpretable AI systems that can solve problems or draw conclusions from what they are told (i.e. facts and observations) and already know (i.e. models, common sense and knowledge) under certain constraints. In this tutorial, we will (1) describe the motivation of this tutorial and give our definition on machine reasoning; (2) introduce typical machine reasoning frameworks, including symbolic reasoning, probabilistic reasoning, neural-symbolic reasoning and neural-evidence reasoning, and show their successful applications in real-world scenarios; (3) talk about the dilemma between black-box neural networks with state-of-the-art performance and machine reasoning approaches with better interpretability; (4) summarize the content of this tutorial and discuss possible future directions.
| 2,020
|
https://aclanthology.org/2020.emnlp-tutorials.1/
|
EMNLP
|
[{'id': 63671278, 'paperId': '20394c89e24d9060ecc69b8a58bdab7833c5b5bd', 'title': 'Markov Logic: A Unifying Framework for Statistical Relational Learning', 'authors': [{'authorId': '1746034', 'name': 'L. Getoor'}, {'authorId': '1685978', 'name': 'B. Taskar'}], 'venue': '', 'abstract': 'This chapter contains sections titled: The Need for a Unifying Framework, Markov Networks, First-order Logic, Markov Logic, SRL Approaches, SRL Tasks, Inference, Learning, Experiments, Conclusion, Acknowledgments, References', 'year': 2007, 'in_acl': False, 'citationCount': 203, 'section': None, 'subsection': None}, {'id': 1067591, 'paperId': 'ab4850b6151ca9a9337dbba94115bde342876d50', 'title': 'From machine learning to machine reasoning', 'authors': [{'authorId': '52184096', 'name': 'L. Bottou'}], 'venue': 'Machine-mediated learning', 'abstract': 'A plausible definition of “reasoning” could be “algebraically manipulating previously acquired knowledge in order to answer a new question”. This definition covers first-order logical inference or probabilistic inference. It also includes much simpler manipulations commonly used to build large learning systems. For instance, we can build an optical character recognition system by first training a character segmenter, an isolated character recognizer, and a language model, using appropriate labelled training sets. Adequately concatenating these modules and fine tuning the resulting system can be viewed as an algebraic operation in a space of models. The resulting model answers a new question, that is, converting the image of a text page into a computer readable text. This observation suggests a conceptual continuity between algebraically rich inference systems, such as logical or probabilistic inference, and simple manipulations, such as the mere concatenation of trainable learning systems. 
Therefore, instead of trying to bridge the gap between machine learning systems and sophisticated “all-purpose” inference mechanisms, we can instead algebraically enrich the set of manipulations applicable to training systems, and build reasoning capabilities from the ground up.', 'year': 2011, 'in_acl': False, 'citationCount': 267, 'section': None, 'subsection': None}, {'id': 1755720, 'paperId': '9dbb506ded56ff4b7ab65aa92b363c0112987f10', 'title': 'Neural-Symbolic Learning and Reasoning: A Survey and Interpretation', 'authors': [{'authorId': '143862012', 'name': 'Tarek R. Besold'}, {'authorId': '2925941', 'name': 'A. Garcez'}, {'authorId': '144349956', 'name': 'Sebastian Bader'}, {'authorId': '145042515', 'name': 'H. Bowman'}, {'authorId': '1740213', 'name': 'Pedro M. Domingos'}, {'authorId': '1699771', 'name': 'P. Hitzler'}, {'authorId': '1743582', 'name': 'Kai-Uwe Kühnberger'}, {'authorId': '2335532', 'name': 'L. Lamb'}, {'authorId': '3021654', 'name': 'Daniel Lowd'}, {'authorId': '144829981', 'name': 'P. Lima'}, {'authorId': '2910868', 'name': 'L. Penning'}, {'authorId': '2263909', 'name': 'Gadi Pinkas'}, {'authorId': '1759772', 'name': 'Hoifung Poon'}, {'authorId': '1753715', 'name': 'Gerson Zaverucha'}], 'venue': 'Neuro-Symbolic Artificial Intelligence', 'abstract': 'The study and understanding of human behaviour is relevant to computer science, artificial intelligence, neural computation, cognitive science, philosophy, psychology, and several other areas. Presupposing cognition as basis of behaviour, among the most prominent tools in the modelling of behaviour are computational-logic systems, connectionist models of cognition, and models of uncertainty. Recent studies in cognitive science, artificial intelligence, and psychology have produced a number of cognitive models of reasoning, learning, and language that are underpinned by computation. 
In addition, efforts in computer science research have led to the development of cognitive computational systems integrating machine learning and automated reasoning. Such systems have shown promise in a range of applications, including computational biology, fault diagnosis, training and assessment in simulators, and software verification. This joint survey reviews the personal ideas and views of several researchers on neural-symbolic learning and reasoning. The article is organised in three parts: Firstly, we frame the scope and goals of neural-symbolic computation and have a look at the theoretical foundations. We then proceed to describe the realisations of neural-symbolic computation, systems, and applications. Finally we present the challenges facing the area and avenues for further research.', 'year': 2017, 'in_acl': False, 'citationCount': 297, 'section': None, 'subsection': None}, {'id': 155092677, 'paperId': '833c4ac0599f4b8c5f1ee6ea948ec675fbe56b15', 'title': 'Neural-Symbolic Computing: An Effective Methodology for Principled Integration of Machine Learning and Reasoning', 'authors': [{'authorId': '2925941', 'name': 'A. Garcez'}, {'authorId': '145467467', 'name': 'M. Gori'}, {'authorId': '2335532', 'name': 'L. Lamb'}, {'authorId': '144077615', 'name': 'L. Serafini'}, {'authorId': '145570895', 'name': 'Michael Spranger'}, {'authorId': '1930235', 'name': 'S. Tran'}], 'venue': 'FLAP', 'abstract': 'Current advances in Artificial Intelligence and machine learning in general, and deep learning in particular have reached unprecedented impact not only across research communities, but also over popular media channels. However, concerns about interpretability and accountability of AI have been raised by influential thinkers. In spite of the recent impact of AI, several works have identified the need for principled knowledge representation and reasoning mechanisms integrated with deep learning-based systems to provide sound and explainable models for such systems. 
Neural-symbolic computing aims at integrating, as foreseen by Valiant, two most fundamental cognitive abilities: the ability to learn from the environment, and the ability to reason from what has been learned. Neural-symbolic computing has been an active topic of research for many years, reconciling the advantages of robust learning in neural networks and reasoning and interpretability of symbolic representation. In this paper, we survey recent accomplishments of neural-symbolic computing as a principled methodology for integrated machine learning and reasoning. We illustrate the effectiveness of the approach by outlining the main characteristics of the methodology: principled integration of neural learning with symbolic knowledge representation and reasoning allowing for the construction of explainable AI systems. The insights provided by neural-symbolic computing shed new light on the increasingly prominent need for interpretable and accountable AI systems.', 'year': 2019, 'in_acl': False, 'citationCount': 262, 'section': None, 'subsection': None}, {'id': 213613608, 'paperId': '4043a936960de8e149dc208178fe1bcb157c7fa4', 'title': 'Recent Advances in Natural Language Inference: A Survey of Benchmarks, Resources, and Approaches', 'authors': [{'authorId': '89093987', 'name': 'Shane Storks'}, {'authorId': '3193409', 'name': 'Qiaozi Gao'}, {'authorId': '1707259', 'name': 'J. Chai'}], 'venue': '', 'abstract': "In the NLP community, recent years have seen a surge of research activities that address machines' ability to perform deep language understanding which goes beyond what is explicitly stated in text, rather relying on reasoning and knowledge of the world. Many benchmark tasks and datasets have been created to support the development and evaluation of such natural language inference ability. 
As these benchmarks become instrumental and a driving force for the NLP research community, this paper aims to provide an overview of recent benchmarks, relevant knowledge resources, and state-of-the-art learning and inference approaches in order to support a better understanding of this growing field.", 'year': 2019, 'in_acl': False, 'citationCount': 116, 'section': None, 'subsection': None}, {'id': 51893222, 'paperId': '3df952d4a724655f7520ff95d4b2cef90fff0cae', 'title': 'Techniques for interpretable machine learning', 'authors': [{'authorId': '3432460', 'name': 'Mengnan Du'}, {'authorId': '47717322', 'name': 'Ninghao Liu'}, {'authorId': '48539382', 'name': 'Xia Hu'}], 'venue': 'Communications of the ACM', 'abstract': 'Uncovering the mysterious ways machine learning models make decisions.', 'year': 2018, 'in_acl': False, 'citationCount': 979, 'section': None, 'subsection': None}, {'id': 220058074, 'paperId': '6efe7653b9a7928bc47b61dfeb84c0831a1d7a39', 'title': 'Open-Domain Question Answering', 'authors': [{'authorId': '50536468', 'name': 'Danqi Chen'}, {'authorId': '144105277', 'name': 'Wen-tau Yih'}], 'venue': 'Annual Meeting of the Association for Computational Linguistics', 'abstract': 'This tutorial provides a comprehensive and coherent overview of cutting-edge research in open-domain question answering (QA), the task of answering questions using a large collection of documents of diversified topics. We will start by first giving a brief historical background, discussing the basic setup and core technical challenges of the research problem, and then describe modern datasets with the common evaluation metrics and benchmarks. The focus will then shift to cutting-edge models proposed for open-domain QA, including two-stage retriever-reader approaches, dense retriever and end-to-end training, and retriever-free methods. Finally, we will cover some hybrid approaches using both text and large knowledge bases and conclude the tutorial with important open questions. 
We hope that the tutorial will not only help the audience to acquire up-to-date knowledge but also provide new perspectives to stimulate the advances of open-domain QA research in the next phase.', 'year': 2020, 'in_acl': True, 'citationCount': 17, 'section': None, 'subsection': None}, {'id': 220060314, 'paperId': 'c24a3ba1f161df77bdf9374d787851d6ce7e366b', 'title': 'Introductory Tutorial: Commonsense Reasoning for Natural Language Processing', 'authors': [{'authorId': '2729164', 'name': 'Maarten Sap'}, {'authorId': '3103343', 'name': 'Vered Shwartz'}, {'authorId': '8536286', 'name': 'Antoine Bosselut'}, {'authorId': '2257385140', 'name': 'Yejin Choi'}, {'authorId': '2249759427', 'name': 'Dan Roth'}], 'venue': '', 'abstract': 'Commonsense knowledge, such as knowing that “bumping into people annoys them” or “rain makes the road slippery”, helps humans navigate everyday situations seamlessly. Yet, endowing machines with such human-like commonsense reasoning capabilities has remained an elusive goal of artificial intelligence research for decades. In recent years, commonsense knowledge and reasoning have received renewed attention from the natural language processing (NLP) community, yielding exploratory studies in automated commonsense understanding. We organize this tutorial to provide researchers with the critical foundations and recent advances in commonsense representation and reasoning, in the hopes of casting a brighter light on this promising area of future research. In our tutorial, we will (1) outline the various types of commonsense (e.g., physical, social), and (2) discuss techniques to gather and represent commonsense knowledge, while highlighting the challenges specific to this type of knowledge (e.g., reporting bias). We will then (3) discuss the types of commonsense knowledge captured by modern NLP systems (e.g., large pretrained language models), and (4) present ways to measure systems’ commonsense reasoning abilities. 
We will finish with (5) a discussion of various ways in which commonsense reasoning can be used to improve performance on NLP tasks, exemplified by an (6) interactive session on integrating commonsense into a downstream task.', 'year': 2020, 'in_acl': False, 'citationCount': 92, 'section': None, 'subsection': None}]
|
2020.emnlp-tutorials.2
|
Fact-Checking, Fake News, Propaganda, and Media Bias: Truth Seeking in the Post-Truth Era
|
The rise of social media has democratized content creation and has made it easy for everybody to share and spread information online. On the positive side, this has given rise to citizen journalism, thus enabling much faster dissemination of information compared to what was possible with newspapers, radio, and TV. On the negative side, stripping traditional media from their gate-keeping role has left the public unprotected against the spread of misinformation, which could now travel at breaking-news speed over the same democratic channel. This has given rise to the proliferation of false information specifically created to affect individual people’s beliefs, and ultimately to influence major events such as political elections. There are strong indications that false information was weaponized at an unprecedented scale during Brexit and the 2016 U.S. presidential elections. “Fake news,” which can be defined as fabricated information that mimics news media content in form but not in organizational process or intent, became the Word of the Year for 2017, according to Collins Dictionary. Thus, limiting the spread of “fake news” and its impact has become a major focus for computer scientists, journalists, social media companies, and regulatory authorities. The tutorial will offer an overview of the broad and emerging research area of disinformation, with focus on the latest developments and research directions.
| 2,020
|
https://aclanthology.org/2020.emnlp-tutorials.2/
|
EMNLP
|
[{'id': 207718082, 'paperId': 'cb40a5e6d4fc0290452345791bb91040aed76961', 'title': 'Fake News Detection on Social Media: A Data Mining Perspective', 'authors': [{'authorId': '145800151', 'name': 'Kai Shu'}, {'authorId': '2880010', 'name': 'A. Sliva'}, {'authorId': '2893721', 'name': 'Suhang Wang'}, {'authorId': '1736632', 'name': 'Jiliang Tang'}, {'authorId': '145896397', 'name': 'Huan Liu'}], 'venue': 'SKDD', 'abstract': 'Social media for news consumption is a double-edged sword. On the one hand, its low cost, easy access, and rapid dissemination of information lead people to seek out and consume news from social media. On the other hand, it enables the wide spread of "fake news", i.e., low quality news with intentionally false information. The extensive spread of fake news has the potential for extremely negative impacts on individuals and society. Therefore, fake news detection on social media has recently become an emerging research that is attracting tremendous attention. Fake news detection on social media presents unique characteristics and challenges that make existing detection algorithms from traditional news media ineffective or not applicable. First, fake news is intentionally written to mislead readers to believe false information, which makes it difficult and nontrivial to detect based on news content; therefore, we need to include auxiliary information, such as user social engagements on social media, to help make a determination. Second, exploiting this auxiliary information is challenging in and of itself as users\' social engagements with fake news produce data that is big, incomplete, unstructured, and noisy. Because the issue of fake news detection on social media is both challenging and relevant, we conducted this survey to further facilitate research on the problem.
In this survey, we present a comprehensive review of detecting fake news on social media, including fake news characterizations on psychology and social theories, existing algorithms from a data mining perspective, evaluation metrics and representative datasets. We also discuss related research areas, open problems, and future research directions for fake news detection on social media.', 'year': 2017, 'in_acl': False, 'citationCount': 2533, 'section': None, 'subsection': None}, {'id': 207743293, 'paperId': '290513795d653bd13a27c0688b12a459eb66c711', 'title': 'Detection and Resolution of Rumours in Social Media', 'authors': [{'authorId': '2805349', 'name': 'A. Zubiaga'}, {'authorId': '145970060', 'name': 'Ahmet Aker'}, {'authorId': '1723649', 'name': 'Kalina Bontcheva'}, {'authorId': '1991548', 'name': 'Maria Liakata'}, {'authorId': '144723416', 'name': 'R. Procter'}], 'venue': 'ACM Computing Surveys', 'abstract': 'Despite the increasing use of social media platforms for information and news gathering, its unmoderated nature often leads to the emergence and spread of rumours, i.e., items of information that are unverified at the time of posting. At the same time, the openness of social media platforms provides opportunities to study how users share and discuss rumours, and to explore how to automatically assess their veracity, using natural language processing and data mining techniques. In this article, we introduce and discuss two types of rumours that circulate on social media: long-standing rumours that circulate for long periods of time, and newly emerging rumours spawned during fast-paced events such as breaking news, where reports are released piecemeal and often with an unverified status in their early stages. 
We provide an overview of research into social media rumours with the ultimate goal of developing a rumour classification system that consists of four components: rumour detection, rumour tracking, rumour stance classification, and rumour veracity classification. We delve into the approaches presented in the scientific literature for the development of each of these four components. We summarise the efforts and achievements so far toward the development of rumour classification systems and conclude with suggestions for avenues for future research in social media mining for the detection and resolution of rumours.', 'year': 2017, 'in_acl': False, 'citationCount': 757, 'section': None, 'subsection': None}, {'id': 49320819, 'paperId': '22616702da06431668022c649a017af9b333c530', 'title': 'Automated Fact Checking: Task Formulations, Methods and Future Directions', 'authors': [{'authorId': '144603330', 'name': 'James Thorne'}, {'authorId': '2064056928', 'name': 'Andreas Vlachos'}], 'venue': 'International Conference on Computational Linguistics', 'abstract': 'The recently increased focus on misinformation has stimulated research in fact checking, the task of assessing the truthfulness of a claim. Research in automating this task has been conducted in a variety of disciplines including natural language processing, machine learning, knowledge representation, databases, and journalism. While there has been substantial progress, relevant papers and articles have been published in research communities that are often unaware of each other and use inconsistent terminology, thus impeding understanding and further progress. In this paper we survey automated fact checking research stemming from natural language processing and related disciplines, unifying the task formulations and methodologies across papers and authors. Furthermore, we highlight the use of evidence as an important distinguishing factor among them cutting across task formulations and methods. 
We conclude with proposing avenues for future NLP research on automated fact checking.', 'year': 2018, 'in_acl': True, 'citationCount': 258, 'section': None, 'subsection': None}, {'id': 9060471, 'paperId': '6447bfcda1dfb2fa8484683711af92b7cbaeca2b', 'title': 'A Survey on Truth Discovery', 'authors': [{'authorId': '2110479359', 'name': 'Yaliang Li'}, {'authorId': '144407304', 'name': 'Jing Gao'}, {'authorId': '2598592', 'name': 'Chuishi Meng'}, {'authorId': '37696683', 'name': 'Qi Li'}, {'authorId': '143843304', 'name': 'Lu Su'}, {'authorId': '2112525352', 'name': 'Bo Zhao'}, {'authorId': '3228071', 'name': 'Wei Fan'}, {'authorId': '145325584', 'name': 'Jiawei Han'}], 'venue': 'SKDD', 'abstract': 'Thanks to information explosion, data for the objects of interest can be collected from increasingly more sources. However, for the same object, there usually exist conflicts among the collected multi-source information. To tackle this challenge, truth discovery, which integrates multi-source noisy information by estimating the reliability of each source, has emerged as a hot topic. Several truth discovery methods have been proposed for various scenarios, and they have been successfully applied in diverse application domains. In this survey, we focus on providing a comprehensive overview of truth discovery methods, and summarizing them from different aspects. We also discuss some future directions of truth discovery research. We hope that this survey will promote a better understanding of the current progress on truth discovery, and offer some guidelines on how to apply these approaches in application domains.', 'year': 2015, 'in_acl': False, 'citationCount': 403, 'section': None, 'subsection': None}, {'id': 4410672, 'paperId': '73bfad11b96a69cb882028ead115751adb55252d', 'title': 'The science of fake news', 'authors': [{'authorId': '3185333', 'name': 'D. Lazer'}, {'authorId': '40508064', 'name': 'M. Baum'}, {'authorId': '2237559', 'name': 'Y. 
Benkler'}, {'authorId': '4859855', 'name': 'Adam J. Berinsky'}, {'authorId': '40828798', 'name': 'Kelly M. Greenhill'}, {'authorId': '143653472', 'name': 'F. Menczer'}, {'authorId': '1976593', 'name': 'Miriam J. Metzger'}, {'authorId': '2064358', 'name': 'B. Nyhan'}, {'authorId': '2998138', 'name': 'Gordon Pennycook'}, {'authorId': '145792941', 'name': 'David M. Rothschild'}, {'authorId': '50156656', 'name': 'M. Schudson'}, {'authorId': '2404363', 'name': 'S. Sloman'}, {'authorId': '3171769', 'name': 'C. Sunstein'}, {'authorId': '26668235', 'name': 'Emily A. Thorson'}, {'authorId': '1783914', 'name': 'D. Watts'}, {'authorId': '46714697', 'name': 'Jonathan Zittrain'}], 'venue': 'Science', 'abstract': 'Addressing fake news requires a multidisciplinary effort The rise of fake news highlights the erosion of long-standing institutional bulwarks against misinformation in the internet age. Concern over the problem is global. However, much remains unknown regarding the vulnerabilities of individuals, institutions, and society to manipulations by malicious actors. A new system of safeguards is needed. Below, we discuss extant social and computer science research regarding belief in fake news and the mechanisms by which it spreads. Fake news has a long history, but we focus on unanswered scientific questions raised by the proliferation of its most recent, politically oriented incarnation. Beyond selected references in the text, suggested further reading can be found in the supplementary materials.', 'year': 2018, 'in_acl': False, 'citationCount': 2893, 'section': None, 'subsection': None}, {'id': 4549072, 'paperId': 'ef07defaf08123d5e1a8bd41ad6e2db5e5b225e3', 'title': 'The spread of true and false news online', 'authors': [{'authorId': '1918441', 'name': 'Soroush Vosoughi'}, {'authorId': '145364504', 'name': 'D. 
Roy'}, {'authorId': '2413779', 'name': 'Sinan Aral'}], 'venue': 'Science', 'abstract': 'Lies spread faster than the truth There is worldwide concern over false news and the possibility that it can influence political, economic, and social well-being. To understand how false news spreads, Vosoughi et al. used a data set of rumor cascades on Twitter from 2006 to 2017. About 126,000 rumors were spread by ∼3 million people. False news reached more people than the truth; the top 1% of false news cascades diffused to between 1000 and 100,000 people, whereas the truth rarely diffused to more than 1000 people. Falsehood also diffused faster than the truth. The degree of novelty and the emotional reactions of recipients may be responsible for the differences observed. Science, this issue p. 1146 A large-scale analysis of tweets reveals that false rumors spread further and faster than the truth. We investigated the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017. The data comprise ~126,000 stories tweeted by ~3 million people more than 4.5 million times. We classified news as true or false using information from six independent fact-checking organizations that exhibited 95 to 98% agreement on the classifications. Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. 
Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.', 'year': 2018, 'in_acl': False, 'citationCount': 5292, 'section': None, 'subsection': None}, {'id': 67748733, 'paperId': '8114cf0628c29e8309d6f1e2ef61030f64a7b28c', 'title': 'Stance Detection', 'authors': [{'authorId': '1910084', 'name': 'D. Küçük'}, {'authorId': '2083563', 'name': 'F. Can'}], 'venue': 'ACM Computing Surveys', 'abstract': 'Automatic elicitation of semantic information from natural language texts is an important research problem with many practical application areas. Especially after the recent proliferation of online content through channels such as social media sites, news portals, and forums; solutions to problems such as sentiment analysis, sarcasm/controversy/veracity/rumour/fake news detection, and argument mining gained increasing impact and significance, revealed with large volumes of related scientific publications. In this article, we tackle an important problem from the same family and present a survey of stance detection in social media posts and (online) regular texts. Although stance detection is defined in different ways in different application settings, the most common definition is “automatic classification of the stance of the producer of a piece of text, towards a target, into one of these three classes: {Favor, Against, Neither}.” Our survey includes definitions of related problems and concepts, classifications of the proposed approaches so far, descriptions of the relevant datasets and tools, and related outstanding issues. 
Stance detection is a recent natural language processing topic with diverse application areas, and our survey article on this newly emerging topic will act as a significant resource for interested researchers and practitioners.', 'year': 2020, 'in_acl': False, 'citationCount': 136, 'section': None, 'subsection': None}, {'id': 220483038, 'paperId': 'd3833e446e536f7627ae01c45cf265d6e736e78c', 'title': 'A Survey on Computational Propaganda Detection', 'authors': [{'authorId': '34086979', 'name': 'Giovanni Da San Martino'}, {'authorId': '40598011', 'name': 'S. Cresci'}, {'authorId': '1397442049', 'name': 'Alberto Barrón-Cedeño'}, {'authorId': '1885974', 'name': 'Seunghak Yu'}, {'authorId': '1728076', 'name': 'R. D. Pietro'}, {'authorId': '1683562', 'name': 'Preslav Nakov'}], 'venue': 'International Joint Conference on Artificial Intelligence', 'abstract': "Propaganda campaigns aim at influencing people's mindset with the purpose of advancing a specific agenda. They exploit the anonymity of the Internet, the micro-profiling ability of social networks, and the ease of automatically creating and managing coordinated networks of accounts, to reach millions of social network users with persuasive messages, specifically targeted to topics each individual user is sensitive to, and ultimately influencing the outcome on a targeted issue. \n\nIn this survey, we review the state of the art on computational propaganda detection from the perspective of Natural Language Processing and Network Analysis, arguing about the need for combined efforts between these communities. 
We further discuss current challenges and future research directions.", 'year': 2020, 'in_acl': False, 'citationCount': 172, 'section': None, 'subsection': None}, {'id': 1914124, 'paperId': '6f90ad2553c2a73948f614d19c763ec3d5e58542', 'title': 'The rise of social bots', 'authors': [{'authorId': '48898287', 'name': 'Emilio Ferrara'}, {'authorId': '2307347', 'name': 'Onur Varol'}, {'authorId': '2057124', 'name': 'Clayton A. Davis'}, {'authorId': '143653472', 'name': 'F. Menczer'}, {'authorId': '1769960', 'name': 'A. Flammini'}], 'venue': 'Communications of the ACM', 'abstract': "Today's social bots are sophisticated and sometimes menacing. Indeed, their presence can endanger online ecosystems as well as our society.", 'year': 2014, 'in_acl': False, 'citationCount': 1752, 'section': None, 'subsection': None}, {'id': 252277994, 'paperId': '8ce2cb70d5a98ebe3bc6cb10d830dd2282a3e766', 'title': 'The Web of False Information', 'authors': [{'authorId': '3447293', 'name': 'Savvas Zannettou'}, {'authorId': '2698864', 'name': 'Michael Sirivianos'}, {'authorId': '144728530', 'name': 'Jeremy Blackburn'}, {'authorId': '1946641', 'name': 'N. Kourtellis'}], 'venue': 'ACM Journal of Data and Information Quality', 'abstract': 'A new era of Information Warfare has arrived. Various actors, including state-sponsored ones, are weaponizing information on Online Social Networks to run false-information campaigns with targeted manipulation of public opinion on specific topics. These false-information campaigns can have dire consequences to the public: mutating their opinions and actions, especially with respect to critical world events like major elections. Evidently, the problem of false information on the Web is a crucial one and needs increased public awareness as well as immediate attention from law enforcement agencies, public institutions, and in particular, the research community. 
In this article, we make a step in this direction by providing a typology of the Web’s false-information ecosystem, composed of various types of false-information, actors, and their motives. We report a comprehensive overview of existing research on the false-information ecosystem by identifying several lines of work: (1) how the public perceives false information; (2) understanding the propagation of false information; (3) detecting and containing false information on the Web; and (4) false information on the political stage. In this work, we pay particular attention to political false information as: (1) it can have dire consequences to the community (e.g., when election results are mutated) and (2) previous work shows that this type of false information propagates faster and further when compared to other types of false information. Finally, for each of these lines of work, we report several future research directions that can help us better understand and mitigate the emerging problem of false-information dissemination on the Web.', 'year': 2018, 'in_acl': False, 'citationCount': 146, 'section': None, 'subsection': None}, {'id': 44111303, 'paperId': '2ed166a3301209ccd9838e26ec4648a4d2f07bd9', 'title': 'Bias on the web', 'authors': [{'authorId': '1389957009', 'name': 'R. Baeza-Yates'}], 'venue': 'Communications of the ACM', 'abstract': 'Bias in Web data and use taints the algorithms behind Web-based applications, delivering equally biased results.', 'year': 2018, 'in_acl': False, 'citationCount': 214, 'section': None, 'subsection': None}]
id: 2020.emnlp-tutorials.3
title: Interpreting Predictions of NLP Models
abstract: Although neural NLP models are highly expressive and empirically successful, they also systematically fail in counterintuitive ways and are opaque in their decision-making process. This tutorial will provide a background on interpretation techniques, i.e., methods for explaining the predictions of NLP models. We will first situate example-specific interpretations in the context of other ways to understand models (e.g., probing, dataset analyses). Next, we will present a thorough study of example-specific interpretations, including saliency maps, input perturbations (e.g., LIME, input reduction), adversarial attacks, and influence functions. Alongside these descriptions, we will walk through source code that creates and visualizes interpretations for a diverse set of NLP tasks. Finally, we will discuss open problems in the field, e.g., evaluating, extending, and improving interpretation methods.
year: 2020
url: https://aclanthology.org/2020.emnlp-tutorials.3
venues: EMNLP
reading_list:
[{'id': 11319376, 'paperId': '5c39e37022661f81f79e481240ed9b175dec6513', 'title': 'Towards A Rigorous Science of Interpretable Machine Learning', 'authors': [{'authorId': '1388372395', 'name': 'F. Doshi-Velez'}, {'authorId': '3351164', 'name': 'Been Kim'}], 'venue': '', 'abstract': 'As machine learning systems become ubiquitous, there has been a surge of interest in interpretable machine learning: systems that provide explanation for their outputs. These explanations are often used to qualitatively assess other criteria such as safety or non-discrimination. However, despite the interest in interpretability, there is very little consensus on what interpretable machine learning is and how it should be measured. In this position paper, we first define interpretability and describe when interpretability is needed (and when it is not). Next, we suggest a taxonomy for rigorous evaluation and expose open questions towards a more rigorous science of interpretable machine learning.', 'year': 2017, 'in_acl': False, 'citationCount': 3309, 'section': None, 'subsection': None}, {'id': 5981909, 'paperId': 'd516daff247f7157fccde6649ace91d969cd1973', 'title': 'The mythos of model interpretability', 'authors': [{'authorId': '32219137', 'name': 'Zachary Chase Lipton'}], 'venue': 'Queue', 'abstract': 'In machine learning, the concept of interpretability is both important and slippery.', 'year': 2016, 'in_acl': False, 'citationCount': 3363, 'section': None, 'subsection': None}, {'id': 67855860, 'paperId': '1e83c20def5c84efa6d4a0d80aa3159f55cb9c3f', 'title': 'Attention is not Explanation', 'authors': [{'authorId': '49837811', 'name': 'Sarthak Jain'}, {'authorId': '1912476', 'name': 'Byron C. Wallace'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Attention mechanisms have seen wide adoption in neural NLP models. 
In addition to improving predictive performance, these are often touted as affording transparency: models equipped with attention provide a distribution over attended-to input units, and this is often presented (at least implicitly) as communicating the relative importance of inputs. However, it is unclear what relationship exists between attention weights and model outputs. In this work we perform extensive experiments across a variety of NLP tasks that aim to assess the degree to which attention weights provide meaningful “explanations” for predictions. We find that they largely do not. For example, learned attention weights are frequently uncorrelated with gradient-based measures of feature importance, and one can identify very different attention distributions that nonetheless yield equivalent predictions. Our findings show that standard attention modules do not provide meaningful explanations and should not be treated as though they do.', 'year': 2019, 'in_acl': True, 'citationCount': 1207, 'section': None, 'subsection': None}, {'id': 7228830, 'paperId': 'ffb949d3493c3b2f3c9acf9c75cb03938933ddf0', 'title': 'Adversarial Examples for Evaluating Reading Comprehension Systems', 'authors': [{'authorId': '3422908', 'name': 'Robin Jia'}, {'authorId': '145419642', 'name': 'Percy Liang'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Standard accuracy metrics indicate that reading comprehension systems are making rapid progress, but the extent to which these systems truly understand language remains unclear. To reward systems with real language understanding abilities, we propose an adversarial evaluation scheme for the Stanford Question Answering Dataset (SQuAD). Our method tests whether systems can answer questions about paragraphs that contain adversarially inserted sentences, which are automatically generated to distract computer systems without changing the correct answer or misleading humans. 
In this adversarial setting, the accuracy of sixteen published models drops from an average of 75% F1 score to 36%; when the adversary is allowed to add ungrammatical sequences of words, average accuracy on four models decreases further to 7%. We hope our insights will motivate the development of new models that understand language more precisely.', 'year': 2017, 'in_acl': True, 'citationCount': 1536, 'section': None, 'subsection': None}, {'id': 13029170, 'paperId': 'c0883f5930a232a9c1ad601c978caede29155979', 'title': '“Why Should I Trust You?”: Explaining the Predictions of Any Classifier', 'authors': [{'authorId': '78846919', 'name': 'Marco Tulio Ribeiro'}, {'authorId': '34650964', 'name': 'Sameer Singh'}, {'authorId': '1730156', 'name': 'Carlos Guestrin'}], 'venue': 'North American Chapter of the Association for Computational Linguistics', 'abstract': 'Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). 
We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.', 'year': 2016, 'in_acl': True, 'citationCount': 14884, 'section': None, 'subsection': None}, {'id': 1450294, 'paperId': 'dc6ac3437f0a6e64e4404b1b9d188394f8a3bf71', 'title': 'Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps', 'authors': [{'authorId': '34838386', 'name': 'K. Simonyan'}, {'authorId': '1687524', 'name': 'A. Vedaldi'}, {'authorId': '1688869', 'name': 'Andrew Zisserman'}], 'venue': 'International Conference on Learning Representations', 'abstract': 'This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [Erhan et al., 2009], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. 
Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [Zeiler et al., 2013].', 'year': 2013, 'in_acl': False, 'citationCount': 6811, 'section': None, 'subsection': None}, {'id': 202712654, 'paperId': 'ddd27dba038d0ed14c48cd027812df58a902ece2', 'title': 'AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models', 'authors': [{'authorId': '145217343', 'name': 'Eric Wallace'}, {'authorId': '1388109456', 'name': 'Jens Tuyls'}, {'authorId': '49606614', 'name': 'Junlin Wang'}, {'authorId': '17097887', 'name': 'Sanjay Subramanian'}, {'authorId': '40642935', 'name': 'Matt Gardner'}, {'authorId': '34650964', 'name': 'Sameer Singh'}], 'venue': 'Conference on Empirical Methods in Natural Language Processing', 'abstract': 'Neural NLP models are increasingly accurate but are imperfect and opaque—they break in counterintuitive ways and leave end users puzzled at their behavior. Model interpretation methods ameliorate this opacity by providing explanations for specific model predictions. Unfortunately, existing interpretation codebases make it difficult to apply these methods to new models and tasks, which hinders adoption for practitioners and burdens interpretability researchers. We introduce AllenNLP Interpret, a flexible framework for interpreting NLP models. The toolkit provides interpretation primitives (e.g., input gradients) for any AllenNLP model and task, a suite of built-in interpretation methods, and a library of front-end visualization components. We demonstrate the toolkit’s flexibility and utility by implementing live demos for five interpretation methods (e.g., saliency maps and adversarial attacks) on a variety of models and tasks (e.g., masked language modeling using BERT and reading comprehension using BiDAF). 
These demos, alongside our code and tutorials, are available at https://allennlp.org/interpret.', 'year': 2019, 'in_acl': True, 'citationCount': 133, 'section': None, 'subsection': None}]
ACL-rlg: A Dataset for Reading List Generation
About
ACL-rlg is the largest dataset of expert-crafted reading lists, containing 85 reading lists manually extracted from tutorial papers submitted to ACL-related conferences between 2020 and 2024. Data were sourced from the ACL Anthology and cross-referenced with Semantic Scholar, enabling the extraction of metadata for articles outside the ACL collection.
Content
The following data fields are available:
| Field | Type | Description |
|---|---|---|
| id | string | Unique identifier of the tutorial paper in the ACL Anthology. |
| title | string | Title of the tutorial paper. |
| abstract | string | Abstract of the tutorial paper. |
| year | int64 | Year of publication. |
| url | string | ACL Anthology link to the paper. |
| venues | string | Name of the venues the tutorial paper is published in. |
| reading_list | list[object] | Reading list provided by the authors of the paper. Each record includes: • corpusid (int64): Semantic Scholar corpus ID. • paperId (string): Semantic Scholar paper ID. • title (string): Title of the referenced paper. • abstract (string): Abstract of the referenced paper. • authors (list[object]): Information about the referenced paper's authors. • venue (string): Name of the venue the referenced paper is published in. • year (int64): Year of publication of the referenced paper. • in_acl (bool): Boolean indicating whether the referenced paper is in the ACL Anthology. • citationCount (int64): Citation count of the paper, extracted from the Semantic Scholar API. • section (string): Name of the reading-list section the referenced paper is listed in. • subsection (string): Name of the reading-list subsection the referenced paper is listed in. |
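The nested `reading_list` field lends itself to simple aggregate queries once a record is loaded. The sketch below uses a minimal hand-made record rather than the real dataset (loading it from the Hugging Face Hub would require the repository id, which is not shown here) to compute the fraction of referenced papers that are indexed in the ACL Anthology:

```python
# Minimal illustration of the ACL-rlg record schema described in the table above.
# The record is hand-made for demonstration; real records carry full metadata
# (abstract, authors, venue, section, ...) for every referenced paper.
record = {
    "id": "2020.emnlp-tutorials.3",
    "title": "Interpreting Predictions of NLP Models",
    "year": 2020,
    "venues": "EMNLP",
    "reading_list": [
        {"title": "Attention is not Explanation", "in_acl": True, "citationCount": 1207},
        {"title": "The mythos of model interpretability", "in_acl": False, "citationCount": 3363},
    ],
}

def acl_share(rec):
    """Fraction of papers in a tutorial's reading list that appear in the ACL Anthology."""
    refs = rec["reading_list"]
    return sum(r["in_acl"] for r in refs) / len(refs) if refs else 0.0

print(acl_share(record))  # 0.5
```

The same pattern extends to other per-list statistics, e.g. averaging `citationCount` or grouping referenced papers by their `section` label.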
Licence
Dataset: CC BY-NC 4.0
Under this licence you may use, share, and adapt the dataset for non-commercial research or educational purposes only.
Citation
Julien Aubert-Béduchaud, Florian Boudin, Béatrice Daille, and Richard Dufour. 2025. ACL-rlg: A Dataset for Reading List Generation. In Proceedings of the 31st International Conference on Computational Linguistics, pages 4910–4919, Abu Dhabi, UAE. Association for Computational Linguistics.
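For convenience, a BibTeX entry assembled from the citation above (the entry key is a guess; the official key in the ACL Anthology may differ):

```bibtex
@inproceedings{aubert-beduchaud-etal-2025-acl,
  title     = {{ACL}-rlg: A Dataset for Reading List Generation},
  author    = {Aubert-B{\'e}duchaud, Julien and Boudin, Florian and Daille, B{\'e}atrice and Dufour, Richard},
  booktitle = {Proceedings of the 31st International Conference on Computational Linguistics},
  year      = {2025},
  address   = {Abu Dhabi, UAE},
  publisher = {Association for Computational Linguistics},
  pages     = {4910--4919},
}
```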