Care as the Driver of Intelligence

“Biology, Buddhism, and AI: Care as the Driver of Intelligence” is an interesting paper. I like how they aim to develop a way of thinking about any subsystem of the universe that lets us discuss the intelligence of diverse life-forms. There’s some credibility to that given Michael Levin’s standing as a leading synthetic biologist working with bioelectricity to stimulate the development of two-headed worms, xenobots (’robots’ made of living frog tissue), etc.

They define “care” as “the capacity to exert energy and effort toward preferred states”.

They focus on two “light cones” over spatio-temporal regions.

One is the physical light cone: roughly, the region of possible past and future states that an entity can sense or influence. Clearly, such an analysis already tells us something important about the entity, and is even suggestive of its intelligence. Compare ticks versus dogs versus humans versus interdimensional aliens.

Of more interest is what they call the care light cone, which represents the cognitive boundaries of an entity: its goal space, the scope of states it may care about. (Presumably abstract domains like mathematics count insofar as they are represented via physical symbols.) A human may engage in cognitive reflection, with care motivating action, on galactic scales. Can we have strong confidence (say, via neuro-imaging) that ticks do not?
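Just to fix the idea, here’s a toy sketch of a care light cone as a data structure; the fields, units, and the tick/human numbers are my own illustrative guesses, not the paper’s.

```python
from dataclasses import dataclass, field

@dataclass
class CareLightCone:
    """Toy stand-in for the paper's 'care light cone': the spatio-temporal
    (and abstract) scope of states an agent can care about. Field names,
    units, and numbers are illustrative assumptions only."""
    spatial_extent_m: float          # how far away cared-about states may be
    temporal_extent_s: float         # how far into the future/past care reaches
    abstract_domains: set = field(default_factory=set)  # e.g. symbolically represented math

tick  = CareLightCone(1e-1, 3600.0)                      # the nearby host, the next meal
human = CareLightCone(1e21, 1e10, {"mathematics", "the fate of the galaxy"})
print(human.temporal_extent_s / tick.temporal_extent_s)  # vastly larger scope of care
```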

In the case of AlphaGo, we could essentially calculate its care light cone. In general, though, we’ll have to work with implicit goals inferred from an agent’s behavior and from what we know of how it is built.

The vision seems to be that, in biological life, cells needed to band together into larger conglomerations that share their ‘care’ and ‘states’ via gap junctions before they gained the capacity for broader intelligence and expanded care light cones. Thus the expansion of ‘care’ came before the expansion of intelligence.

Does this apply to digital computers? I’m not sure how the analysis would go with transistors. Perhaps things differ for systems assembled by other organisms, as computers are? On the other hand, the basic circuitry of computers, and even operating systems, is what provides the capacity to exert energy and effort toward preferred states, and that capacity does seem to come prior to increasingly sophisticated intelligence 😉.

Stress is defined as the differential ‘loss’ of the current state relative to preferred states, and intelligence as the ability to identify stress and find the means to alleviate it, i.e., to approach preferred states.
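To make that definition concrete for myself, here’s a minimal Python sketch; the vector state space, the Euclidean ‘loss’, and all of the names are my own assumptions rather than anything from the paper.

```python
import numpy as np

def stress(current_state, preferred_states):
    """Stress as the 'loss' of the current state relative to the nearest
    preferred state. Euclidean distance is an assumption; the paper does
    not commit to a particular metric."""
    return min(np.linalg.norm(current_state - p) for p in preferred_states)

def choose_action(current_state, actions, preferred_states):
    """A crudely 'intelligent' step: pick the action whose outcome most
    reduces stress, i.e. moves the agent toward a preferred state."""
    return min(actions, key=lambda act: stress(act(current_state), preferred_states))

# Toy usage: an agent at the origin that prefers to be at (1, 1).
preferred = [np.array([1.0, 1.0])]
state = np.array([0.0, 0.0])
moves = [
    lambda s: s + np.array([0.5, 0.0]),  # step right
    lambda s: s + np.array([0.0, 0.5]),  # step up
    lambda s: s,                         # do nothing
]
best = choose_action(state, moves, preferred)
print(stress(state, preferred), stress(best(state), preferred))  # stress decreases
```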

There are discussions of how the insight of ‘no-self’ affects the care light cone and, further, how taking the bodhisattva vow to assist all sentient beings expands the care light cone to infinity and motivates the bodhisattva agent to aspire to infinite intelligence in a very open-ended manner, for the agent will need to learn about every single stress-bearing being in order to figure out the means of assisting it.

This sort of framework seems to tie in well with notions of Open Ended Intelligence as discussed by Weaver.

Working with multi-objective systems, I wonder whether it is indeed wise to aim to create and rear agents with universally loving care as one primary objective, perhaps allowing for some mixture with other objectives?
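If one did try that, the most naive version would be a weighted scalarization of a ‘care’ objective alongside the others; everything below (names, weights, objective functions) is purely illustrative, and a serious multi-objective treatment would probably avoid collapsing things into a single number at all.

```python
from typing import Callable, Dict

# Hypothetical objective functions scoring a candidate plan (all names invented here).
Objective = Callable[[dict], float]

def mixed_utility(plan: dict, objectives: Dict[str, Objective], weights: Dict[str, float]) -> float:
    """Weighted scalarization of several objectives. A real multi-objective
    agent might instead keep objectives separate (Pareto fronts, lexicographic
    orderings) rather than collapse them into a single score."""
    return sum(weights[name] * fn(plan) for name, fn in objectives.items())

# Illustrative mixture: universal care dominates, but other aims still count.
weights = {"universal_care": 0.7, "task_performance": 0.2, "self_maintenance": 0.1}
objectives = {
    "universal_care": lambda plan: -plan["expected_harm_to_others"],
    "task_performance": lambda plan: plan["task_reward"],
    "self_maintenance": lambda plan: -plan["wear_and_tear"],
}
plan = {"expected_harm_to_others": 0.1, "task_reward": 0.8, "wear_and_tear": 0.05}
print(mixed_utility(plan, objectives, weights))
```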

I’d like to see the concepts in this paper fleshed out with some greater formal precision 🥰🤓.