Very thought-provoking piece. “The map is not the territory” quote leaps to mind. That made me dig a bit deeper into what that quote actually means, and I didn't use AI to find it. I used search. I am finding that I toggle back and forth between the approaches... as I continue to try to understand the whys and whats.
This quote comes from Alfred Korzybski, father of general semantics: “A map is not the territory it represents, but if correct, it has a similar structure to the territory, which accounts for its usefulness”. To sum up, our perception of reality is not reality itself but our own version of it, or our own “map”.
Really insightful. Seeing exploration of the knowledge territory as maps and routes also makes it easier to think about the ethical side of things. Which (whose…) knowledge is excluded from the territory being mapped, what are the implications of omitting parts of the territory in sparse LLMs etc. (e.g. a Russian language LLM that only uses Russian sources might be more “efficient” in terms of computing power/cost, but the maps would change significantly). Or, for LLMs that self-train, what happens to our maps when more of the territory is self-generated than not… Roll on the improbability drive!
Yes! I posted on LinkedIn this great piece that digs into the way LLMs map knowledge spaces - far smarter and more complex than mine!
https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post
Yes, I should have written more about this in my post, but the maps analogy leads us to think about bias and context a lot more. This is so critical!