On the Paradox of Learning to Reason from Data
Honghua Zhang, Liunian Harold Li, Tao Meng, Kai-Wei Chang, and Guy Van den Broeck, in IJCAI, 2023.
Top-10 cited paper at IJCAI 23
Code | Download the full text
Abstract
Logical reasoning is needed in a wide range of NLP tasks. Can a BERT model be trained end-to-end to solve logical reasoning problems presented in natural language? We attempt to answer this question in a confined problem space where there exists a set of parameters that perfectly simulates logical reasoning. We make observations that seem to contradict each other: BERT attains near-perfect accuracy on in-distribution test examples while failing to generalize to other data distributions over the exact same problem space. Our study provides an explanation for this paradox: instead of learning to emulate the correct reasoning function, BERT has in fact learned statistical features that inherently exist in logical reasoning problems. We also show that it is infeasible to jointly remove statistical features from data, illustrating the difficulty of learning to reason in general. Our result naturally extends to other neural models and unveils the fundamental difference between learning to reason and learning to achieve high performance on NLP benchmarks using statistical features.
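The paradox described above can be illustrated with a toy sketch (a hypothetical setup invented for illustration, not the paper's actual benchmark or model): suppose a spurious statistical feature, here "number of rules," correlates with the label in the training distribution but not elsewhere. A classifier that merely thresholds this feature scores highly in-distribution yet drops to chance on a different distribution over the same problem space.

```python
import random

random.seed(0)

def sample(dist):
    # Each "problem" is abstracted to one statistical feature: rule count.
    # In-distribution, provable problems tend to have more rules (a spurious
    # correlation); out-of-distribution, rule count is independent of the label.
    label = random.randint(0, 1)
    if dist == "in":
        n_rules = random.gauss(12 if label else 8, 1)
    else:
        n_rules = random.gauss(10, 1)
    return n_rules, label

def classify(n_rules):
    # A "model" that latched onto the statistical feature instead of reasoning.
    return int(n_rules > 10)

def accuracy(dist, n=10_000):
    hits = sum(classify(x) == y for x, y in (sample(dist) for _ in range(n)))
    return hits / n

print(f"in-distribution accuracy:     {accuracy('in'):.3f}")   # high
print(f"out-of-distribution accuracy: {accuracy('out'):.3f}")  # near chance
```

The feature and the numbers are made up, but the mechanism mirrors the paper's finding: near-perfect test accuracy is consistent with the model never emulating the reasoning function at all.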
Can language models learn to reason by end-to-end training? We show that near-perfect test accuracy is deceiving: instead, they tend to learn statistical features inherent to reasoning problems. See more in https://t.co/2F1s1cB9TE @LiLiunian @TaoMeng10 @kaiwei_chang @guyvdb
— Honghua Zhang (@HonghuaZhang2) May 24, 2022
Bib Entry
@inproceedings{zhang2023on,
title = {On the Paradox of Learning to Reason from Data},
author = {Zhang, Honghua and Li, Liunian Harold and Meng, Tao and Chang, Kai-Wei and Van den Broeck, Guy},
booktitle = {IJCAI},
year = {2023}
}
Related Publications
- AVIS: Autonomous Visual Information Seeking with Large Language Models, NeurIPS, 2023
- Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models, NeurIPS, 2023
- A Survey of Deep Learning for Mathematical Reasoning, ACL, 2023
- Symbolic Chain-of-Thought Distillation: Small Models Can Also "Think" Step-by-Step, ACL, 2023
- Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning, ICLR, 2023
- Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering, NeurIPS, 2022
- Semantic Probabilistic Layers for Neuro-Symbolic Learning, NeurIPS, 2022
- Neuro-Symbolic Entropy Regularization, UAI, 2022