The meaning of life is only valuable insofar as it falls inside the logical spectrum of one’s values. If one can’t logically reach the conclusion from one’s current values, one will likely devalue even a good idea. On the other hand, what if one’s values do support the idea? Will this lead to a new value? And when values conflict, one can eventually lose values.
This paints an interesting picture.
To add new values, we need the support of most of our current values. Likewise, when changing or removing a value, we need the support of other strong values.
(In a short time frame, some values will just temporarily overrule another value rather than actually changing it.)
The question here is: are there attractor values that our groups of values tend toward over time or in certain environments?
Alas, I haven’t worked these value dynamics out :p.
This could be used to attempt to work around reason’s dependence on our current values, though it would still be more a description of how your values will likely change than anything prescriptive. Hmm . . .
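As a very rough way of poking at these dynamics, here is a toy simulation sketch. Everything in it is my own illustrative assumption, not something worked out in the text above: values are nodes, the support weights and threshold are made up, and the update rule (a value is held when it gets enough net support from the other currently held values) is just one possible formalization of "values need the support of other values." Running it from different starting sets shows what an attractor could look like in this framing.

```python
# Toy model of value dynamics. ALL numbers and names here are illustrative
# assumptions: SUPPORT[a][b] is how strongly holding value `a` supports
# value `b` (negative = conflict), and a value is held at the next step
# iff the total support it receives from the other held values clears
# a threshold. Values that lose support get dropped, mirroring the idea
# that conflicting values can eventually be lost.

SUPPORT = {
    "honesty":   {"loyalty": 0.4, "ambition": -0.2, "curiosity": 0.5},
    "loyalty":   {"honesty": 0.6, "ambition": 0.3,  "curiosity": 0.0},
    "ambition":  {"honesty": -0.3, "loyalty": 0.2,  "curiosity": 0.4},
    "curiosity": {"honesty": 0.5, "loyalty": 0.0,   "ambition": 0.3},
}
THRESHOLD = 0.3  # net support a value needs to be adopted or kept

def step(held):
    """One update: recompute which values clear the support threshold."""
    return frozenset(
        v for v in SUPPORT
        if sum(SUPPORT[h].get(v, 0.0) for h in held if h != v) >= THRESHOLD
    )

def attractor(held, max_iters=100):
    """Iterate until the held set stops changing (a fixed point)."""
    held = frozenset(held)
    for _ in range(max_iters):
        nxt = step(held)
        if nxt == held:
            return held
        held = nxt
    return held  # gave up: the dynamics may be cycling, not settling

# Different starting value sets can flow to the same attractor:
print(sorted(attractor({"honesty", "curiosity"})))
print(sorted(attractor({"honesty", "loyalty"})))
```

With these (arbitrary) weights, both starting sets grow toward the same stable set of values, while other weight choices produce cycles instead of fixed points, where two values keep overruling each other without either actually changing, much like the short-time-frame overruling noted above.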