Characterizing Truthfulness in Large Language Model Generations with Local Intrinsic Dimension
Fan Yin, Jayanth Srinivasa, and Kai-Wei Chang, in ICML, 2024.
Abstract
We study how to characterize and predict the truthfulness of texts generated by large language models (LLMs), a crucial step toward building trust between humans and LLMs. Although several approaches based on entropy or verbalized uncertainty have been proposed to calibrate model predictions, these methods are often intractable, sensitive to hyperparameters, and less reliable when applied to generative tasks with LLMs. In this paper, we suggest investigating internal activations and quantifying an LLM's truthfulness using the local intrinsic dimension (LID) of model activations. Through experiments on four question answering (QA) datasets, we demonstrate the effectiveness of our proposed method. Additionally, we study intrinsic dimensions in LLMs and their relations with model layers, autoregressive language modeling, and the training of LLMs, revealing that intrinsic dimensions can be a powerful approach to understanding LLMs.
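To make the idea concrete, below is a minimal sketch of one standard LID estimator, the maximum-likelihood estimator of Levina and Bickel (2004), applied to a set of activation vectors. The function name, the choice of k, and the use of NumPy are illustrative assumptions; the paper's exact estimator, which layer's activations are read, and how LID is aggregated into a truthfulness score are specified in the full text.

import numpy as np

# Maximum-likelihood LID estimate at a query point, from distances to its
# k nearest neighbors among a set of reference activation vectors.
def lid_mle(query, reference, k=20):
    dists = np.sort(np.linalg.norm(reference - query, axis=1))
    dists = dists[dists > 0][:k]  # drop any zero self-distance, keep k nearest
    m = len(dists)
    # LID_hat = (m - 1) / sum_i log(d_m / d_i), with d_m the farthest kept neighbor
    return (m - 1) / np.sum(np.log(dists[-1] / dists[:-1]))

# Illustrative use (hypothetical variables): estimate LID for a generated
# answer's hidden state against hidden states collected from other examples,
# e.g. lid = lid_mle(hidden_state, reference_hidden_states, k=20).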
Bib Entry
@inproceedings{yin2024charactering,
  title = {Characterizing Truthfulness in Large Language Model Generations with Local Intrinsic Dimension},
  author = {Yin, Fan and Srinivasa, Jayanth and Chang, Kai-Wei},
  booktitle = {ICML},
  year = {2024}
}
Related Publications
- Control Large Language Models via Divide and Conquer, EMNLP, 2024
- Re-ReST: Reflection-Reinforced Self-Training for Language Agents, EMNLP, 2024
- Agent Lumos: Unified and Modular Training for Open-Source Language Agents, ACL, 2024
- TrustLLM: Trustworthiness in Large Language Models, ICML, 2024
- The steerability of large language models toward data-driven personas, NAACL, 2024
- AI-Assisted Summarization of Radiologic Reports: Evaluating GPT3davinci, BARTcnn, LongT5booksum, LEDbooksum, LEDlegal, and LEDclinical, American Journal of Neuroradiology, 2024
- Understanding and Mitigating Spurious Correlations in Text Classification with Neighborhood Analysis, EACL-Findings, 2024
- Few-Shot Representation Learning for Out-Of-Vocabulary Words, ACL, 2019
- Learning Word Embeddings for Low-resource Languages by PU Learning, NAACL, 2018
- Co-training Embeddings of Knowledge Graphs and Entity Descriptions for Cross-lingual Entity Alignment, IJCAI, 2018
- Beyond Bilingual: Multi-sense Word Embeddings using Multilingual Context, ACL RepL4NLP Workshop, 2017