There is a glaring flaw in popular conceptions of definitions.
How much information can be contained in one word?
“Red” tells you the basic color in the spectrum – but there are so many shades the one word can’t distinguish.
“Chair” tells you it’s something for one person to sit on, but every detail is unspecified. Whether stools are included is ambiguous.
– Context even lets multiple word-senses share the same “chair” characters (the chair of a committee, say). Even more information fits in this one word.
Yet the point is obvious: not that much information fits in one word.
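One can even put a toy number on it. A back-of-the-envelope sketch (the counts below are made up purely for illustration): if ‘red’ narrows the space of 24-bit RGB colors down to some reddish region, the word carries roughly log2(total / region) bits of color information.

```python
import math

# How much color information does the word "red" carry?
# Both counts are made-up illustrative numbers, not measurements.
total_colors = 256 ** 3          # all 24-bit RGB colors
reddish_shades = 1_000_000       # hypothetical size of the "red" region

bits = math.log2(total_colors / reddish_shades)
print(f"'red' conveys roughly {bits:.1f} bits about color")  # ~4 bits
```

A handful of bits under any plausible counts: enough to rule out most of the spectrum, nowhere near enough to pick a shade.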
A trade-off between specificity and generality (applicability).
Too specific, and extra words are needed: even chemical elements have different isotopes (hence multi-word names).
Moreover, humans don’t represent words (concepts) in terms of these nit-picky (or logical) details. If you write a book specifying the exact meaning of a single word (concept), it will likely fail to represent reality accurately for most people. And if you try to catalog all the ways in which people really conceptualize the word? At best, you now have a taxonomy of human understandings. Hope it’s complete, but is it at all applicable?
Are definitions pointers / guides for humans or formal classification algorithms?
If the information-content in demarcating some concept is sufficiently small, perhaps one could say it is well-definable: one word will suffice without complaints.
‘Red’, ‘chair’, and ‘carrot’ suffice, but ‘pizza’ is debatable :p
What if one word just doesn’t suffice for a concept humans find intuitive? (Except for those damned borderline cases people can hardly agree on. Maybe borders are somewhat fuzzy in the first place though . . ..) If a few words suffice that’s okay.
What about ‘art’? Is it well-definable? It’s intuitive to humans, but will an informationally-concise definition classify only art and no non-art? Probably not. A classification algorithm that does well will probably contain more information than a sensible definition allows. Humans don’t really have full consensus in the first place, but it seems likely there doesn’t exist a simple formulation that will unify us.
Well, that is, until we accept the limitations of definitions and words, and tolerate the errors in our less-than-precise definitions.
Yep. All this rambling to lead up to why I’m fine with the somewhat lame dictionary definitions. Take ‘art’: many try to do better, but only succeed marginally. Don’t get me wrong: they had many cool insights, but they are not definitionally helpful.
To not berate ‘art’ too much, one can look at ‘consciousness’ too. There are multiple words and concepts in there. Even the basic dictionary definitions reek of an attempt, to use an overly Buddhist perspective, to point at something with the wrong pointers / tools. Like trying to point at your pointer finger with your pointer finger (if that’s too easy, try your thumb with your thumb). Although one can try to extract concreteness: consciousness -> thoughts -> mental activity -> activity. Hey, this kind of leads to some sort of panpsychism. But then maybe we’re abstracting away from the very thing we couldn’t point at, the thing we could only find synonyms for.
‘Consciousness’ has some of the ill-definable issues ‘art’ does, but it leads us to another way a word can be ill-definable.
With ‘art’ the problem may be that the information inherent in the human concept is too variegated / nuanced / complicated / fuzzy for nice definitions like we have for ‘carbon’. But what if the problem is that we don’t even have the relevant information for defining a concept or part of experience? We have enough to know it exists, but not enough to pin it down in a clean definition. (Was having this information about ‘consciousness’ evolutionarily useful anyway? :P)
As this is primarily ranting, my other pretty obvious insight on definitions was that they really are little more than ‘pointing’. Glorified pointing. Just fucking assigning labels to things experienced and communicating to others what label goes with what. Before you have any labels, you have to, uhh, point or something. Then you can use descriptions / definitions for recursive / chained pointing. Pointing via prior pointers.
Pointing really just adds one example to some class of concepts, doesn’t it?
‘Carrot’ examples fit pretty cleanly into one cluster, with sub-clusters.
‘Chair’ examples fit a bit less cleanly, depending on how people extrapolate from the examples: some of our models include stools, bean bags, etc. Some, perhaps not.
‘Art’ examples fit pretty poorly, and aren’t even cleanly separable. (Nor do we want to stomach removing many artworks for definitional clarity :p)
‘Consciousness’ doesn’t even have any good examples. We don’t yet even share a frame in which to point at our experiences of consciousness. Yet :>.
Hey-ho, the first Captain Obvious remark falls right out of the second.
Hmm, if anything, one issue here seems to be the weakness of string-of-words definitions: a series of 50 points, grunts and gestures is much better than one, but how much information can one really fit in there compared to the extrapolated models inside a human brain? Probably not much, except for the simplest / cleanest-cut labels.
Another is failing to recognize when the class of examples given for a certain concept is ill-definable (i.e., not linearly separable, or lacking information).
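The clustering talk above can be made literal. A toy sketch, entirely my own illustration with made-up points: treat examples of a concept as labeled points in some feature space, and ask whether one straight line can split members from non-members (the ‘linearly separable’ sense used above). A bare-bones perceptron finds such a line when one exists, and keeps making mistakes forever when none does.

```python
def perceptron(points, labels, epochs=100):
    """Try to learn a linear boundary w.x + b; return last epoch's error count."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), y in zip(points, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != y:  # misclassified: nudge the boundary toward this point
                w[0] += y * x1
                w[1] += y * x2
                b += y
                errors += 1
        if errors == 0:
            return 0  # found a clean separating line
    return errors

# 'Carrot'-like: one tight cluster vs. the rest (made-up points) -- separable.
carrot = [(1, 1), (1.2, 0.9), (0.9, 1.1), (3, 3), (3.1, 2.8), (2.9, 3.2)]
carrot_y = [1, 1, 1, -1, -1, -1]

# 'Art'-like: members and non-members interleaved (XOR pattern) -- no line works.
art = [(0, 0), (1, 1), (0, 1), (1, 0)]
art_y = [1, 1, -1, -1]

print(perceptron(carrot, carrot_y))  # 0: a one-line "definition" exists
print(perceptron(art, art_y))        # > 0: no linear definition ever fits
```

‘Carrot’-style example sets get a clean one-line boundary; ‘art’-style interleaved ones never do, no matter how long you train, which is roughly the earlier complaint about definitions that only succeed marginally.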
How should this all affect how we treat definitions / words?
Arguments over definitions? (Obviously first verify the same concepts are even being discussed . . ..)