All text fed into an LLM is unstructured and treated on equal footing. Although we can pose complex queries in natural language and receive answers in the same medium, we have no accessible inventory of what the model "knows." This lack of insight into an LLM's 'reasoning' stands in contrast with traditional knowledge bases built on explicit facts and logic. This article considers the lessons we can draw from this discrepancy.