oai:arXiv.org:2405.17764
Computer Science
2024
9/10/2024
Generative models have gained significant prominence in Natural Language Processing (NLP), especially in tackling the complex task of modeling and evaluating long text sequences.
This task is crucial for advancing various downstream applications, such as text generation and machine translation.
Recent methods that use stochastic processes to capture the intrinsic dynamics of sequences have shown superior performance in generative modeling.
However, accurately encoding both the temporal and structural dependencies of text, and leveraging that encoded information for sequence evaluation, remain open research problems.
In this paper, we propose a novel approach to learning the stochastic dynamics of long text sequences, using a negative log-likelihood-based encoder that outperforms contrastive learning methods (a minimal sketch of this objective appears after the abstract).
We also introduce a likelihood-based evaluation metric for long-text assessment, which measures sequence coherence and can be applied to downstream tasks such as human-AI discrimination (see the companion scoring sketch below).
Our encoder effectively preserves sequence coherence and remains robust on out-of-domain datasets.
Additionally, the proposed evaluation metric captures both temporal and structural information comprehensively.
Theoretical analysis demonstrates the advantages of our metric for sequence evaluation, and experimental results highlight its flexibility and strong performance across a variety of tasks and diverse NLP applications.
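To make the abstract's two components concrete, here is a minimal sketch, assuming the stochastic process is a Brownian bridge over a latent trajectory of sentence embeddings; the encoder architecture, latent dimension, and variance parameter sigma2 are illustrative assumptions, not the authors' exact configuration. The encoder maps per-sentence features to latents and is trained by minimizing the bridge's negative log-likelihood rather than a contrastive objective:

```python
import torch
import torch.nn as nn

class BridgeEncoder(nn.Module):
    """Illustrative MLP head mapping frozen sentence features to a low-dim latent."""
    def __init__(self, in_dim: int = 768, latent_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.GELU(), nn.Linear(256, latent_dim)
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (T+1, in_dim) sentence features -> (T+1, latent_dim) trajectory
        return self.net(feats)

def bridge_nll(z: torch.Tensor, sigma2: float = 1.0, eps: float = 1e-6) -> torch.Tensor:
    """Negative log-likelihood of the interior latents under a Brownian bridge
    pinned at the first (t=0) and last (t=T) sentences of one document.

    At time t, the bridge marginal is Gaussian with
      mean = (1 - t/T) * z_0 + (t/T) * z_T
      var  = sigma2 * t * (T - t) / T   (per latent dimension)

    Assumes the document has at least three sentences (T >= 2).
    """
    T = z.size(0) - 1
    t = torch.arange(1, T, device=z.device, dtype=z.dtype)  # interior times 1..T-1
    alpha = (t / T).unsqueeze(-1)                            # (T-1, 1)
    mean = (1.0 - alpha) * z[0] + alpha * z[-1]              # bridge mean
    var = (sigma2 * t * (T - t) / T).unsqueeze(-1) + eps     # bridge variance
    sq = (z[1:-1] - mean) ** 2
    nll = 0.5 * (sq / var + torch.log(2.0 * torch.pi * var))
    return nll.sum(dim=-1).mean()                            # average over sentences
```

The evaluation metric then reuses the same model: a document is scored by the negative bridge NLL of its encoded trajectory, so texts whose latent paths are consistent with the learned dynamics score higher. Thresholding this score is one hypothetical way to instantiate the human-AI discrimination use case; the threshold value below is illustrative, not taken from the paper.

```python
@torch.no_grad()
def coherence_score(encoder: BridgeEncoder, feats: torch.Tensor, sigma2: float = 1.0) -> float:
    """Higher score = latent trajectory more consistent with the learned bridge."""
    z = encoder(feats)
    return -bridge_nll(z, sigma2).item()

# Hypothetical usage: flag a document as machine-generated if its coherence
# score falls below a threshold tuned on held-out validation data.
def looks_human(encoder: BridgeEncoder, feats: torch.Tensor, threshold: float = -25.0) -> bool:
    return coherence_score(encoder, feats) >= threshold
```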
Zhang, Tianhao; Lin, Zhexiao; Sheng, Zhecheng; Jiang, Chen; Kang, Dongyeop (2024). On the Sequence Evaluation based on Stochastic Processes.