Investigating LLMs' internal processing through tacit knowledge
The impressive performance of LLMs has led some to argue that these systems acquire “knowledge” of language or even exhibit signs of “intelligence”. Yet, it remains unclear what it takes to attribute cognitive capacities and states to LLMs. In this talk, I focus on the notion of knowledge of language and argue that knowledge should not be attributed based on behavior alone, but instead based on the internal processing and representations of LLMs. Specifically, taking inspiration from debates between symbolic and connectionist AI in the 1980s and 90s, I argue that LLMs might acquire tacit knowledge as defined by Davies (1990). Tacit knowledge, in this context, refers to implicitly represented rules or structures that causally affect—and potentially explain—the system’s behavior.
The goals of my contribution are twofold. First, I motivate and introduce Davies' account of tacit knowledge, including the constraints it places on attributing tacit knowledge to a system. Second, I take a closer look at the recent literature on LLMs and show that there is preliminary evidence that LLMs do in fact acquire tacit knowledge, although more research is needed. Taken together, if tacit knowledge can indeed be attributed to LLMs, this would help us both better explain and potentially improve their behavior.
Speaker
Céline Budding is a final-year PhD candidate in the Philosophy & Ethics group at Eindhoven University of Technology, with a background in neuroscience and explainable AI. In her PhD project, she combines these fields with research in philosophy to study what LLMs learn and which methods we can best use to explain their behavior.
Please note: the subtitles were primarily generated by AI with human review and may contain some errors.