So, as my Third Grand Goal is to create cute wittle living thingies, I’ve been thinking about exactly what kind of life to create.
Alas, I’ve realized that Ben was right :>. But that means more when realized in my own words and concepts :p
Err, I’ve realized that life and intelligence are very closely related. So AI and AL (artificial life) are two sides of the same coin: different paradigms for doing the same thing.
Life coupled with evolution is essentially an intelligent system, with the default goal (fitness function) of survival. It learns to solve the problem of surviving (dispersing energy or whatever) in the environment it’s in. (Yeah, I’m not quite sure what it’s intelligently achieving 😡 Perhaps because I haven’t really read up on evolution… at all?) But that would make individual species just particular solutions to particular environments.
No wonder there are millions of species: slightly different environments make different solutions more optimal :p
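To make that concrete, here’s a toy sketch (everything in it — the genome, the fitness functions, the “environments” — is made up for illustration, not a claim about real evolution). The same mutate-and-keep-the-fitter loop, run against two slightly different environments, settles on two different “species”:

```python
import random

def evolve(fitness, genome_len=8, generations=200, seed=0):
    """Hill-climbing caricature of evolution: mutate, keep the fitter variant."""
    rng = random.Random(seed)
    genome = [rng.random() for _ in range(genome_len)]
    for _ in range(generations):
        mutant = [g + rng.gauss(0, 0.1) for g in genome]
        if fitness(mutant) > fitness(genome):
            genome = mutant
    return genome

# Two slightly different environments: each rewards matching a different target.
warm_target = [1.0] * 8
cold_target = [0.0] * 8
warm_fit = lambda g: -sum((a - b) ** 2 for a, b in zip(g, warm_target))
cold_fit = lambda g: -sum((a - b) ** 2 for a, b in zip(g, cold_target))

warm_species = evolve(warm_fit)
cold_species = evolve(cold_fit)
# Same search process, different environments -> different "species".
```

The intelligence here lives in the loop, not in any one genome — each genome is just a particular solution to a particular environment.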
Some definitions of intelligence fit quite well:
“The capacity to acquire capacity” (H. Woodrow).
“Any system … that generates adaptive behavior to meet goals in a range of environments can be said to be intelligent” (D. Fogel).
Sure, many definitions have big words like memory, understanding, judgment, reasoning, etc. Those are all methods systems can use in being intelligent though! =D
This brings me to Ben’s response to my idea of creating artificial life-forms to do something specific (cool or cute): “if you’re doing something particular, it’s better to just write a program to do that.”1 This, alas, follows from my newfound (to me :p) observation about life being an intelligent system.
Life’s intelligence works on a species level: it adapts to new environments and develops new capacities largely by having a variety of species and generating new ones. Although now that we have abstracted a lot of this intelligence and fit it into individual entities, we are superseding and exterminating many other species. We’ve transcended to the next level of solutions >:D.
So let’s say you make an individual life-form. Will it be intelligent (an AI), or just a lone chess piece without the set and board? It doesn’t need reproduction. It may not even need Growth. Growth is a sort of cognitive compression: it allows a class of solutions (a species) to apply to more problems (environments) by growing from a less defined (more general) state to a more mature one. Two cute little game-players may grow up in different environments, one for Chess and one for Go. The one who developed for Chess will have trouble switching environments, though. If enough of them live on Chess, a new species may develop. So Growth is another feature that supplements evolution and the life-evolution intelligent system (LEIS). The individual doesn’t really need it. – Okay, if I’m making 10 individuals, I may want Growth anyway… :b. Then again, people are working on general game-players…
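Growth-as-compression can be sketched the same toy way (the “moves” and reward numbers below are entirely made up): two copies of the same undifferentiated “infant” strategy, matured in different environments, end up as different specialists — and the Chess-grown one is now poorly tuned for Go.

```python
def grow(environment, rounds=50):
    """Start maximally general (uniform preferences), specialize via reinforcement."""
    prefs = {move: 1.0 for move in environment}   # undifferentiated "infant" state
    for _ in range(rounds):
        for move, reward in environment.items():
            prefs[move] += reward                 # reinforce what pays off here
    total = sum(prefs.values())
    return {m: p / total for m, p in prefs.items()}

# Two environments rewarding different (hypothetical) skills.
chess_env = {"tactics": 1.0, "territory": 0.1}
go_env    = {"tactics": 0.1, "territory": 1.0}

chess_player = grow(chess_env)
go_player    = grow(go_env)
# Same starting state, different environments -> different mature specialists.
```

One general initial state covering many environments, instead of one pre-built solution per environment — that’s the compression.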
Are there any redeeming qualities to an individual LEIS solution over a direct solution or an AI approach? I have an idea, but perhaps it means more in a spatial 3D world with slow communications rather than the digital realm. The autonomy of life-forms is such that what they are driven to do is often the solution (modern-day humans have a lot of incongruities here though…). So LEIS solutions can wander around and search for problems they can solve. They have built-in heuristics. … Cool, but on computers, there’s no real need to compartmentalize that, and why not use agents?
Hmm :(. It seems the main advantage of life is in the LEIS, not really the particular solutions – err, life-forms. Of course, looking at pet snakes, you don’t necessarily need that much intelligence to make fun life-forms. A bit more for dogs. But that’s the dreaded autonomous AI making it interesting :>. (And still just a toy, as the intelligence of LEIS greatly outweighs that of a dog, even Lord Crunchkin.)
The difference between LEIS (and AL, when we find a language as powerful as our chemistry) and AI is just in where the intelligence lies. Are solutions found in abstractions inside the agents, or are the agents themselves part of the search for solutions? And as I said last time, in the current climate against life (cockroaches and all) and autonomy, we want that layer of abstraction between the search for answers and the answers we see.
Hmm… some more thought is still necessary…
- If I’m doing something particular that depends on an API, I may want the system to autonomously adapt to (small) changes in the API. It may also be desirable that the system look for new niches where it may be able to thrive (aka be cute or useful). Thus some life-like properties may be desired even for particular solutions! (Updated on 29 Dec 2020)
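A crude sketch of that life-like property (all the class and method names here are hypothetical): a caller that probes a list of candidate method names, so the program survives a small rename in the API it depends on.

```python
def adaptive_call(obj, candidates, *args, **kwargs):
    """Try each candidate method name until one exists -- a crude way for a
    program to tolerate small renames in an API it depends on."""
    for name in candidates:
        method = getattr(obj, name, None)
        if callable(method):
            return method(*args, **kwargs)
    raise AttributeError(f"none of {candidates} found on {type(obj).__name__}")

# Hypothetical: an upstream library renamed fetch() to get() between versions.
class OldAPI:
    def fetch(self):
        return "data"

class NewAPI:
    def get(self):
        return "data"

for client in (OldAPI(), NewAPI()):
    adaptive_call(client, ["fetch", "get"])  # works across both versions
```

Not intelligence, obviously — but it’s a tiny step from “a program that does one particular thing” toward “a thing that keeps doing it when its environment shifts”.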