by Writings, Papers and Blogs on Text Models, June 1st, 2024
In this study, researchers build evaluation tasks from naturally-occurring textual resources.
Author:
(1) Mingda Chen.
3.1 Improving Language Representation Learning via Sentence Ordering Prediction
3.2 Improving In-Context Few-Shot Learning via Self-Supervised Training
4.2 Learning Discourse-Aware Sentence Representations from Document Structures
5 Disentangling Latent Representations for Interpretability and Controllability
5.1 Disentangling Semantics and Syntax in Sentence Representations
5.2 Controllable Paraphrase Generation with a Syntactic Exemplar
In this chapter, we showed that naturally-occurring textual resources can be tailored to build datasets for long-form data-to-text generation, long-form text summarization, and story generation with constraints. For each dataset, we conducted experiments to characterize its challenges. We also proposed new metrics (both automatic and human-evaluation) and models for these tasks to promote research in these directions.
This paper is available on arXiv under a CC 4.0 license.