TY - GEN
T1 - Probing LLMs for Logical Reasoning
AU - Manigrasso, Francesco
AU - Schouten, Stefan
AU - Morra, Lia
AU - Bloem, Peter
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
PY - 2024
Y1 - 2024
N2 - Recently, the question of what types of computation and cognition large language models (LLMs) are capable of has received increasing attention. With models clearly capable of convincingly faking true reasoning behavior, the question of whether they are also capable of real reasoning, and how the difference should be defined, becomes increasingly vexed. Here we introduce a new tool, Logic Tensor Probes (LTP), that may help to shed light on the problem. Logic Tensor Networks (LTN) are a neuro-symbolic framework for differentiable fuzzy logic. Applied to a pretrained LLM with frozen weights, an LTP uses the LTN framework as a diagnostic tool, allowing logical deductions to be detected and localized within the LLM and enabling first-order logic to serve as a versatile modeling language for investigating its internal mechanisms. The LTP makes deductions from basic assertions and tracks whether the model makes the same deductions from the natural-language equivalent and, if so, where in the model this happens. We validate our approach through proof-of-concept experiments on hand-crafted knowledge bases derived from WordNet and on smaller samples from FrameNet.
AB - Recently, the question of what types of computation and cognition large language models (LLMs) are capable of has received increasing attention. With models clearly capable of convincingly faking true reasoning behavior, the question of whether they are also capable of real reasoning, and how the difference should be defined, becomes increasingly vexed. Here we introduce a new tool, Logic Tensor Probes (LTP), that may help to shed light on the problem. Logic Tensor Networks (LTN) are a neuro-symbolic framework for differentiable fuzzy logic. Applied to a pretrained LLM with frozen weights, an LTP uses the LTN framework as a diagnostic tool, allowing logical deductions to be detected and localized within the LLM and enabling first-order logic to serve as a versatile modeling language for investigating its internal mechanisms. The LTP makes deductions from basic assertions and tracks whether the model makes the same deductions from the natural-language equivalent and, if so, where in the model this happens. We validate our approach through proof-of-concept experiments on hand-crafted knowledge bases derived from WordNet and on smaller samples from FrameNet.
KW - Logic Tensor Networks
KW - NeuroSymbolic AI
KW - Probing Large Language Models
UR - http://www.scopus.com/inward/record.url?scp=85204627104&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85204627104&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-71167-1_14
DO - 10.1007/978-3-031-71167-1_14
M3 - Conference contribution
AN - SCOPUS:85204627104
SN - 9783031711664
VL - 1
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 257
EP - 278
BT - Neural-Symbolic Learning and Reasoning
A2 - Besold, Tarek R.
A2 - d’Avila Garcez, Artur
A2 - Jimenez-Ruiz, Ernesto
A2 - Madhyastha, Pranava
A2 - Wagner, Benedikt
A2 - Confalonieri, Roberto
PB - Springer Science and Business Media Deutschland GmbH
T2 - 18th International Conference on Neural-Symbolic Learning and Reasoning, NeSy 2024
Y2 - 9 September 2024 through 12 September 2024
ER -