Iravani, Sahar and Conrad, T. O. F. (2024) Towards More Effective Table-to-Text Generation: Assessing In-Context Learning and Self-Evaluation with Open-Source Models. Computational Linguistics. (Submitted)
Full text not available from this repository.
Official URL: https://arxiv.org/abs/2410.12878
Abstract
Table processing, a key task in natural language processing, has benefited significantly from recent advances in language models (LMs). However, the capabilities of LMs in table-to-text generation, which transforms structured data into coherent narrative text, require in-depth investigation, especially with current open-source models. This study explores the effectiveness of various in-context learning strategies in LMs across benchmark datasets, focusing on the impact of providing examples to the model. More importantly, we examine a real-world use case, offering valuable insights into practical applications. To complement traditional evaluation metrics, we employ a large language model (LLM) self-evaluation approach using chain-of-thought reasoning and assess its correlation with human-aligned metrics like BERTScore. Our findings highlight the significant impact of examples in improving table-to-text generation and suggest that, while LLM self-evaluation shows promise, its alignment with human judgment remains limited, pointing to the need for more reliable evaluation methods.
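The abstract describes correlating LLM self-evaluation scores with a human-aligned metric such as BERTScore. As an illustration only, and not code from the paper, the following minimal Python sketch shows one way such a correlation check could be set up, using the bert-score package and SciPy; the candidate texts, references, and self-evaluation scores below are hypothetical placeholders.

```python
# Minimal sketch (illustrative, not from the paper): comparing
# hypothetical LLM self-evaluation scores against BERTScore.
from bert_score import score as bert_score  # pip install bert-score
from scipy.stats import spearmanr

# Hypothetical generated texts and references for a table-to-text task.
candidates = [
    "The team won 3 of its 5 matches in 2020.",
    "Revenue rose to 4.2 million in the third quarter.",
    "The player scored 12 goals across 30 appearances.",
]
references = [
    "In 2020, the team won three out of five games.",
    "Third-quarter revenue increased to 4.2 million.",
    "He netted 12 times in 30 matches.",
]

# BERTScore F1 serves as the human-aligned reference metric.
_, _, f1 = bert_score(candidates, references, lang="en")

# Hypothetical self-evaluation scores elicited from an LLM via
# chain-of-thought prompting, rescaled to [0, 1].
llm_self_scores = [0.8, 0.6, 0.9]

# Spearman's rank correlation quantifies how well LLM self-evaluation
# tracks BERTScore; a real study would use many more examples.
rho, p_value = spearmanr(f1.tolist(), llm_self_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```

A high rank correlation on a sufficiently large sample would indicate that the LLM's self-evaluation orders outputs similarly to BERTScore; the abstract reports that this alignment is currently limited.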
| Item Type: | Article |
|---|---|
| Subjects: | Mathematical and Computer Sciences > Artificial Intelligence > Speech and Natural Language Processing; Linguistics |
| Divisions: | Department of Mathematics and Computer Science > Institute of Mathematics; Department of Mathematics and Computer Science > Institute of Mathematics > Comp. Proteomics Group |
| ID Code: | 3187 |
| Deposited By: | Admin Administrator |
| Deposited On: | 07 Nov 2024 12:41 |
| Last Modified: | 07 Nov 2024 12:41 |