by Writings, Papers and Blogs on Text Models · June 1st, 2024
In this study, researchers exploit rich, naturally-occurring structures on Wikipedia for various NLP tasks.
Author: Mingda Chen
3.1 Improving Language Representation Learning via Sentence Ordering Prediction
3.2 Improving In-Context Few-Shot Learning via Self-Supervised Training
4.2 Learning Discourse-Aware Sentence Representations from Document Structures
5 Disentangling Latent Representations for Interpretability and Controllability
5.1 Disentangling Semantics and Syntax in Sentence Representations
5.2 Controllable Paraphrase Generation with a Syntactic Exemplar
In this chapter, we describe our contributions to exploiting rich, naturally occurring structures on Wikipedia for various NLP tasks. In Section 4.1, we use hyperlinks to learn entity representations. Unlike most prior work, the resulting models represent entities with contextualized representations rather than a fixed set of vectors. In Section 4.2, we use article structures (e.g., paragraph positions and section titles) to make sentence representations aware of the broader context in which they are situated, leading to improvements across various discourse-related tasks. In Section 4.3, we use article category hierarchies to learn concept hierarchies that improve model performance on textual entailment tasks.
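To make the hyperlink idea in Section 4.1 concrete, the sketch below mines Wikipedia-style links as free entity-mention annotations: each `[[Target|anchor]]` link yields a (context, mention span, entity title) triple that could supervise contextualized entity representations. This is a minimal illustration, not the authors' actual pipeline; the function name and the simplified wikitext grammar are assumptions.

```python
import re

# Hypothetical sketch: treat [[Target|anchor]] wikitext links as entity-mention
# supervision. Real Wikipedia markup has many more cases (templates, nested
# links, redirects); this handles only the basic link form for illustration.
LINK = re.compile(r"\[\[([^\]|]+)(?:\|([^\]]+))?\]\]")

def mine_entity_mentions(wikitext):
    """Return (plain_text, [(start, end, entity_title), ...])."""
    mentions, pieces, cursor = [], [], 0
    for m in LINK.finditer(wikitext):
        pieces.append(wikitext[cursor:m.start()])
        anchor = m.group(2) or m.group(1)  # display text falls back to title
        start = sum(len(p) for p in pieces)
        pieces.append(anchor)
        mentions.append((start, start + len(anchor), m.group(1)))
        cursor = m.end()
    pieces.append(wikitext[cursor:])
    return "".join(pieces), mentions

text, spans = mine_entity_mentions(
    "The [[Eiffel Tower]] stands in [[Paris|the French capital]]."
)
# text  -> "The Eiffel Tower stands in the French capital."
# spans -> [(4, 16, 'Eiffel Tower'), (27, 45, 'Paris')]
```

Each recovered span pairs surface context with a canonical entity title, which is exactly the kind of naturally occurring annotation the chapter exploits in place of manually labeled entity data.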
The material in this chapter is adapted from Chen et al. (2019a), Chen et al. (2019b), and Chen et al. (2020a).
This paper is available on arXiv under a CC 4.0 license.