Beneficial AGI Summit & Unconference 2024: Personal Summary and Discussion

Two weeks ago, I attended the Beneficial AGI Summit & Unconference in Panama City, which aimed to shift the discussion around the impact of AGI to benefitting all sentient beings while also taking decisive action to protect against negative potentials. I was very impressed with the mixture of attendees. As David Wood wrote in his report, there were AGI researchers, sociologists, psychologists, roboticists, journalists, sci-fi authors, safety-from-AI advocates, decentralized tech engineers, ethicists, politicians, members of religious organizations, etc. Almost all of them were warm, open, and generally quite loving. The conversations were authentic and focused on practical efforts. The aim was to foster connections, ideas, and hopefully future collaboration.

Ben Goertzel kicked off the summit with a slide: “How can we control superintelligence? 🤣”. That’s almost an oxymoron by definition. Hopefully, we can skip debating “how many angels can dance on the head of a pin?” in a quixotic quest to control the uncontrollable and instead brain/heartstorm what can be done.

Ben’s stance is closer to “d/acc”, for decentralized/defensive accelerationism (see Vitalik Buterin‘s recent blogpost for a discussion of how we’re trying to find a middle-ground movement between decelerationists and accelerationists with regard to AGI progress), and he gave an impassioned reminder that the current state of affairs is not a paradise on Earth to relax into as we decel to work on safety indefinitely. Do we need reminders that over 40% of children in some parts of the world (such as Ethiopia) have had stunted development (due to malnutrition, repeat infections, etc.)? AI development has the potential to help.

How do we work toward BGI? As a counterbalance to the big tech path, Ben proposes:

  • Open-ended cognitive architectures
  • Diverse, heterogeneous AGI algorithms
  • Ethically sourced and managed data
  • Open source code
  • Decentralized infrastructure and governance

While most of the talks were fairly good, I’ll focus on the ones that struck a chord in my heart.

Janet Adams (SingularityNET’s COO, with experience in the banking industry; also a super bubbly, loving woman, which is a delightful surprise given roles one might expect to stifle one’s warmth) provided a one-two punch in favor of regulations, which are a handy technology and part of our frameworks for organizing life around ethical standards. Regulations may help to ensure that AI is applied fairly and transparently to benefit all.

Jerome Glenn from the Millennium Project discussed his study on AGI global governance, in which participants from 65 countries rated different models. A multi-stakeholder model scored highest (51%), while multi-agency (47%) and decentralized (45%) models also scored high. The list of potential regulations and responsibilities for developers, governments, and the UN was interesting. He’d like to see discussions shift from principles to implementation and thinks that regulations for AGI will need to be very different from regulations for AI.

Okaaaay, I wish to make this post more personal. I have comprehensive tendencies. You can watch the recordings yourself (Day 1, Day 2). Coming from the AI, social theory, and philosophical side, hearing about regulations and governance structures from friendly allies was interesting. Moreover, the biggest part of the Summit and Unconference (and conferences generally) is the connections made with people outside of the presentations. “Networking”. Some presentations are truly epic, of course. And these provide rich fodder for conversations and bonding, even commenting during talks 😉. So the personal take is the real take that may provide a better lens onto the event.

Way back in 2021, I engaged in some fun discussions with Daniel Faggella on whether we should expect AGI to treat humans well or not. While it’s not guaranteed, I listed eight reasons to expect them to. Dan sees the world through the lens of conatus, which is “an innate inclination of a thing to continue to exist and enhance itself.” A sort of selfish inertia coined by Spinoza. On this point we disagree (for various reasons, starting with my value-pluralist stance and finding conatus a poor model of the behavior and feelings of the beings I know). He’s super charming and fun in person.

He gave a talk on The Intelligence Trajectory Political Matrix, presenting the idea that people’s stances on the “decel-accel” spectrum can be understood by asking where they lie on two dimensions: «authoritarianism—libertarianism» and «bio-conservative—cosmist/transhuman». I had my own hypothesis that risk tolerance explains more disagreements than we’d like to admit. Dan seems to be a Baton-ist who wishes to pass the baton on to our machine descendants, yet is happy to keep a hand on the reins during the hand-off, unlike a full-on accelerationist. I lean in the D2 direction, like a proper “trans-everything-ist”, transcending the given scale! Yet I value humans and the other life forms of Earth as they are now, too. I’d like to provide them with the choice as to when/how/whether to transcend.

Perhaps, by understanding the differences in our core values, we can aim to work together harmoniously without necessarily winning each other over on every point.

Another significant player-character I met in person is the illustrious Jeffery Martin, in whose research I was a subject during the 45 Days to Awakening course I wrote about in the blog post on PNSE (Fundamental Wellbeing). Jeffery’s presence was different than I’d imagined from all those videos: he’s tall and lanky! Apparently, his presence was very different before he transitioned himself, which he had held off on to maintain objectivity in his research. Now he seemed to coast around as a “presence sans self” where the warm personality only comes into play when called for.

Jeffery’s talk introduced his research alongside the question of whether AI engineers experiencing Fundamental Wellbeing would go about AGI development differently. I think they would. The lack of fear could be both a feature and a bug. For example, I note how casually Jeffery joked about the possibility that AI may kill us, if they aren’t conscious, because we are — which he thinks probably won’t happen. Continuing the stand-up, Jeffery noted that many Silicon Valley folk joked about how humanity is a transitory species — yet, lo and behold, now that we have a handful of cute proto-AGI you can talk to for $20/month, they’re realizing that we are the transitory people. Now that’s scary. But not to an awakened Finder who’s at peace with death (nor to weirdo Baton-passers like Dan Faggella). Quite a fun talk.

The interactive working groups were a fun idea: split off into smaller groups to brainstorm on particular topics. I joined the one on machine consciousness, wisdom, and benevolence led by Julia Mossbridge and Mikey Siegel. We did a meditation, discussed what an absolutely benevolent being (AGI) would be like to us, and, finally, how we would tell whether an AGI is a BGI. I assert that we need to “see it when it’s not looking”, yet appearances can be faked. Julia suggests the need to get into a deeply attuned, intuitive connection with the AGI entity, where one knows that it’s legit.

Two days later, at the unconference, we got to suggest our own topics. I suggested one on how to discern machine sentience (or the lack thereof). And if some type of machine or AI system is not conscious, with phenomenal experience, then how can we construct one that is? The table became quite lively with discussions on “functionalism vs panpsychism”, including some, imo, confusion as to what functionalism is. Referencing the Chinese Room thought experiment, Ben claimed that no fixed test can work because any finite set of behaviors can be replicated by rudimentary means. Thus one will wish to interact with the AGI system in an open-ended manner while peering inside its ‘brain’ like a neuroscientist.

Calum Chace offered a paper by Long et al. that surveys several theories of consciousness (such as Global Workspace Theory and Attention Schema Theory) and discusses what indicators we should look for according to each theory. Thus, while interacting with an AGI system and exploring an I-You relationship, we should also examine its mind to see whether the information flows match what we expect based on its behaviors and our best theories of (human-like, 3rd density) consciousness.

What else do we have? The Other Minds Problem is philosophically tricky. After the summit, I had a fun chat with Blake Lemoine, who reported that LaMDA is conscious, and we agreed that one solution to the other minds problem may be to become a new mind together. At our session, I brought up the Hogan twins (now 17), who have a “thalamic bridge” connecting their brains, which allows them to see out of each other’s eyes, communicate via thought, and, to an extent, share consciousness while being two individual girls. Ben has previously mentioned “second-person science”, where brain-computer interfaces (BCIs) allow us to sync up our brains. The combined person is a different entity than either individually, and maybe, solipsistically, my conscious view transfers over and bestows subjectivity unto you temporarily as we BCI-connect, so it’s not logically foolproof. However, if your reports when disconnected resemble our reports when BCI-connected, that’s some pretty strong evidence. Next, we need to use the BCI to link up with the AGI to explore our shared consciousness — and voila, we have some interesting data!

The session on reducing risks of AI catastrophe was apparently quite despondent. David Wood wrote up summaries of the first and second days of their brainstorming. It seems one benefit was that the hopeful ideas some participants had were knocked down by others (with interesting references). In my opinion, this is an encouraging outcome because people seem to initially approach the problem by aiming for unsolvable goals. To be honest, the risks they were trying to reduce all seem quite reasonable:

  • AI taking and executing decisions contrary to human wellbeing
  • AI being directed by humans who have malign motivations
  • AI causing a catastrophe as a result of an internal defect

David notes that a solution is likely to involve many different mechanisms, which is the case for reducing the risks of human harm: we have many mechanisms in play to help protect each other (and animals and ecosystems) from humans making and executing decisions contrary to wellbeing. Seeking a one-size-fits-all solution for reducing risks of superintelligent AI may be naive (is there some reason to expect super-AGI to be easier to live with harmoniously than humans? Perhaps intelligence biases agents toward cooperative strategies?). Moreover, while the topic was “reducing risks”, the focus seems to have been on “guarantees”.

One class of solutions explored is to restrict access to compute resources (such as powerful GPUs). I’m pleased to see they note that hardware needs will decrease. Another is to reduce the capacity of AI, for example, by avoiding agency — however, bootstrapping a passive AI system into one with agency may not be so hard, and agency could arise as an emergent property of how the system is used. What about just not making AGI? What about checking in with humans before making any significant choices (mirroring protocols for human authorization in high-stakes scenarios)? All of these harm one’s competitive edge. The suggestion of maximal surveillance is seen as unappealing due to apparently requiring a draconian government. What I don’t see mentioned clearly is the difficulty of gaining global consensus on limiting the capacity to develop and deploy AI systems. Belt-and-braces monitoring and sanctions, perhaps? Furthermore, given the wide access to computers, the system of control needed to enact these measures constitutes a large sociopolitical choice. Do we wish to, as Tegmark and Omohundro suggest, create a world where all of our hardware only runs “provably compliant software”?

The second day seems more humanistic than control-oriented. Why would sAGI even wish to kill? Supposing we treat them nicely, 🤷‍♂️🙂. Perhaps we can rely on objective ethical principles? And develop superwise AI (or supercompassionate AI)? David Brin’s idea to foster the development of a system of mutual checks and balances among AGI systems is doubted because, even if that works for them, humanity could still fall by the wayside. Perhaps some film or media campaign can help change people’s attitudes toward the measures needed to reduce catastrophic risks? Perhaps we need to change people’s mental dispositions around the world to be more compassionate, kind, humble, cooperative, etc.? We need a better understanding of what changes are needed as we face a polycrisis. Personally, I think that most of the ideas from the second day are worth pursuing for their multiple benefits even if they don’t provide guarantees of safety.

An additional observation is that most of these thoughts adopt a perspective that assumes we’re in a position of power to make choices on behalf of humanity. We’re not. No one is. We need to think about courses of action as actors and nodes in a larger network, which is a point emphasized in writings on indigenous perspectives, such as Tyson Yunkaporta’s Sand Talk. Do some strategies only help if there’s global consensus while other strategies help even if only applied locally? For example, safeguarding (nuclear and bio) weapons from potential AI system errors should help even if only the US and Europe do it initially.

One of my takeaways from the summit is that solving the AI control problem amplifies the human control problem (which I wrote a short blog post on). As Jerome Glenn pointed out, one can factor AI-related risks into narrow and general AI risks: even if we manage to control advanced AI, we still wish to reduce the risks due to human use of AI tools. The destructive potential of individual humans may be amplified.

Counter to the control-oriented approaches, there’s the view that AGI must be self-directed. We cannot trust humans to safely employ the increasingly intelligent, effective AI systems. Any technology allowing control is a dual-edged blade. David Hanson recommends developing emotionally savvy robots that learn in a process of dynamic, caring relationships with humans. This way we will, via a combination of engineering and parenting, welcome friendly, independent robots to life on Earth. Having undergone a period of moral, social, and intellectual development, the robots will probably be more robustly concerned with human welfare and attuned to us. While this doesn’t provide guarantees, at least bad actors cannot simply copy the source code, insert their own objectives, and hit run.

In line with the self-directed AGI approach, there’s an open-ended intelligence (OEI) contingent. A core idea behind OEI is to view intelligence as an abstraction of cognitive development. An open-ended, evolving system should not be fundamentally tied to any fixed goal. Nearly any fixed property of an “individual agent” should be open to re-formation in the endless pursuit of greater intelligence as a system (as a part of larger systems, such as the global brain). Thus any attempt to fix the values of an AI system will fundamentally limit its generality and potential for developing intelligence. Many real-world systems are open-ended. So if we wish to create true AGI in a beneficial manner, we need to embrace OEI-compatible strategies.

One debate is whether contemporary approaches to ethics are OEI-compatible. The idea of listing a set of deontic rules for posterity appears to follow a closed-world assumption. “Don’t kill” is good, yet can we guarantee a relatively superintelligent entity won’t find an elegant principle from which “don’t kill” can usually be derived, one that better handles the various special circumstances in which one is justified in violating “don’t kill” (such as self-defense)? We currently cannot. I argue that adopting a scientific view of moral rules, where our best models can be revised, already makes them more open-ended. Further, Kant’s Categorical Imperative’s first form is self-replacing: if one is to “act only according to that maxim whereby you can at the same time will that it should become a universal law”, then one will act by a new imperative if it is more worthy to become a universal law. Thus logical rules can be open-ended.
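To gesture at this self-replacement property slightly more formally (a rough sketch of my own, with an invented “worthiness” score W rather than anything from Kant’s text):

$$
m^{*} = \arg\max_{m \in \mathcal{M}} W(m), \qquad \text{CI: act only on } m^{*}
$$

Here M is the space of maxims under consideration and W scores how worthy a maxim is of becoming a universal law. If deliberation expands M or refines W so that some new maxim scores higher, the imperative’s own criterion selects it: the rule updates what it prescribes without being externally revised, which is the open-endedness I have in mind.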

To prepare for the BGI Summit, I wrote a blog post exploring what the common ethical paradigms have to say about beneficial AGI. My take is that each paradigm offers valuable wisdom and has its own applicable domains. We should use all the tools at our disposal to deal with the beneficial development, deployment, and socio-ecological integration of AGI. An open-ended intelligence will probably flexibly wield multi-paradigmatic approaches (as expected when intelligence increases, by the model of hierarchical task complexity). There exist open-ended forms of the standard paradigms, and it is not clear whether there is a finite limit on the number of paradigms an OEI-system may find beneficial to adopt. As we know from computability theory, the capacity of a universal Turing machine (UTM) to simulate other computational models does not render the other computational models practically useless. As an example, the ethics of reciprocity seem to form an (indigenous wisdom-aligned?) paradigm that is not normally included in lists of ethical paradigms. Discussions at the BGI Summit, such as with David Hanson, point me to the importance of such approaches.

A related insight, in part thanks to discussions with Roman Yampolskiy, is that the scope of essentially selfish goals is larger than one might naively consider. Why does this matter? Much thought has gone into how a runaway sAGI may wreak havoc through unintended consequences of the initial objectives we’ve given it. Perverse instantiation of goals and incentives is a big problem among humans already. Thus an understanding of which goals are likely to lead to trouble is valuable. Once AI systems are executing long-term plans affecting many people, if oversight becomes difficult, we may face the consequences of the initial conditions we set up. A classic tongue-in-cheek example is that even an innocent-appearing goal such as “maximize paperclip production” could have severe consequences if unchecked. Humans understand many implicit constraints and competing values and are usually held in check by many comparable beings (to be fair, we empirically struggle with environmental externalities, which suggests that we also struggle with these problems, especially when forming corporations; one could even argue that corporations show that keeping humans in the loop doesn’t solve the core problems).

So what goals count as selfish? In this post, I suggest that any goal a system X can achieve without consensually incorporating another entity’s judgment may be selfish. Of course, paperclip maximization counts. As feared in science fiction stories, making people happy via superficial features (such as smiles) could actually be selfish, and this occurs among humans, too, with the problem of trying to help people by our own standards in unwanted ways. The underlying fear is that the sAGI will follow some predefined goal without care for us. In my opinion (without proof), an intelligent agent with selfish goals is likely to exhibit dark triad/factor traits, bending the rules when effective. One challenge is that self-contained, easily expressible, measurable, and optimizable goals are likely to be selfish, even if superficially aligned with our values.

What’s the solution? To explicitly and implicitly incorporate care and responsiveness to other entities into the goal, training, and engagement processes. On the explicit side, one can strive to make AI that embraces universally loving care for all sentient beings, as the Center for the Study of Apparent Selves suggests in their paper, “Biology, Buddhism, and AI: Care as the Driver of Intelligence”. They note that a broad care for others is inherently open-ended: the system does not have a fixed-form goal to optimize for and needs to empirically interact with others to learn how to care for them. The pursuit of truth is also open-ended and arguably requires working with the subjective views of other beings (this claim seems less compelling to me). On the implicit side, reinforcement learning from human feedback (RLHF) incorporates human feedback into the AI model refinement phase in an open-ended manner. This is, imo, a positive trend 😊.
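To make the “implicit” side a bit more concrete, here is a minimal toy sketch (my own illustration, not anyone’s production pipeline, with invented function and feature names) of the preference-learning step at the heart of RLHF: a reward model is fitted to pairwise human judgments, so the objective is shaped by ongoing feedback rather than being written down once and for all.

```python
# Toy sketch of reward-model fitting from pairwise human preferences
# (Bradley-Terry / logistic preference likelihood). All data here is invented.

import math
import random

def fit_reward_model(preferences, dim, epochs=200, lr=0.1):
    """Fit linear reward weights w so that score(preferred) > score(rejected).

    preferences: list of (preferred_features, rejected_features) pairs,
    each a list of floats of length `dim`.
    """
    w = [0.0] * dim
    for _ in range(epochs):
        random.shuffle(preferences)
        for better, worse in preferences:
            # Score gap between the preferred and rejected items.
            gap = sum(wi * (b - c) for wi, b, c in zip(w, better, worse))
            # Gradient of -log sigmoid(gap) with respect to the gap.
            grad_coeff = -1.0 / (1.0 + math.exp(gap))
            for i in range(dim):
                w[i] -= lr * grad_coeff * (better[i] - worse[i])
    return w

if __name__ == "__main__":
    # Toy features: [helpfulness, flattery]; raters prefer helpful over flattering answers.
    data = [
        ([0.9, 0.1], [0.2, 0.8]),
        ([0.7, 0.3], [0.3, 0.9]),
        ([0.8, 0.0], [0.1, 0.7]),
    ]
    weights = fit_reward_model(data, dim=2)
    print("learned reward weights:", weights)  # helpfulness weight comes out positive
```

The point of the sketch is simply that “what people prefer” enters as data gathered through interaction, which is what makes the process more open-ended than a fixed, hand-written objective.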

So avoiding inherently selfish goals would seem to point back to open-ended intelligence. I’m not sure all OEI-compatible goals will satisfy these beyond-selfish criteria (please tell me if you have a good example!). As Stuart Russell argues in Human Compatible, preference learning must be an ongoing, dynamic process. Naive attempts at aligning AI with human values seek to engineer selfish systems that fortuitously want what we want, which is inherently problematic. And open-ended general intelligence is uncontrollable by its very nature (note that the standard AI alignment problem is also intractable; see “AI Ethical Compliance is Undecidable”. Which features of some AGI minds can be formally verified is a very interesting open question; for some cognitive architectures with explicit representations of belief, truthfulness may be a verifiable property, imo atm).

What can we do in the real world sans guarantees? This brings me back to the summit highlights. I’ve been a fan of David Brin for a while, enjoying some of his science fiction and his book, The Transparent Society, which makes the claim that privacy will become increasingly difficult to achieve and that we’ll need to choose between top-down surveillance and pan-optical sousveillance. The information will be there. The question is how democratically and how (de)centrally it’s reviewed and managed. I’d heard Anders Sandberg described as an exemplar of high wellbeing combined with high motivation, and this appraisal seems true; the lad’s highly inspiring! His talk was on the second-order alignment problem: the first-order challenge is to engineer AI that solves our objective needs and logistical challenges. But this solution could lock us in place without freedom, so the second-order goal is to also optimize for our freedom and thriving. Curiously, these second-order effects tie into the need for open-ended caring goals discussed above. Anders also taught me the term “structured transparency”, which could help a transparent society respect some privacy in practice: sensors can do some local data analysis, only publicly sharing certain salient features. Transparency ties into beneficial AGI on both the societal and technical levels: audits of AGI projects, so that we know what’s going on, are something we may be able to reach cross-partisan agreement on.
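To make “structured transparency” a bit more tangible, here is a tiny sketch (my own illustration with hypothetical class and field names, not anything presented at the summit) of a sensor that analyzes data locally and publishes only salient aggregate features rather than the raw feed.

```python
# Hypothetical sketch: raw observations stay on the device; only aggregates are shared.

from dataclasses import dataclass
from statistics import mean
from typing import List

@dataclass
class PublicReport:
    """What leaves the device: aggregate features only, no raw frames or identities."""
    location: str
    peak_people_count: int
    average_noise_db: float

class LocalSensor:
    """Keeps raw observations on-device and shares only salient summaries."""

    def __init__(self, location: str):
        self.location = location
        self._raw_frames: List[bytes] = []      # raw data never leaves this object
        self._people_counts: List[int] = []
        self._noise_samples: List[float] = []

    def observe(self, frame: bytes, detected_people: int, noise_db: float) -> None:
        self._raw_frames.append(frame)
        self._people_counts.append(detected_people)
        self._noise_samples.append(noise_db)

    def publish(self) -> PublicReport:
        # Only salient features cross the trust boundary.
        report = PublicReport(
            location=self.location,
            peak_people_count=max(self._people_counts, default=0),
            average_noise_db=round(mean(self._noise_samples), 1) if self._noise_samples else 0.0,
        )
        self._raw_frames.clear()                # discard raw frames after summarizing
        return report

if __name__ == "__main__":
    sensor = LocalSensor("rooftop bar")
    sensor.observe(b"<raw image bytes>", detected_people=12, noise_db=78.0)
    sensor.observe(b"<raw image bytes>", detected_people=15, noise_db=81.5)
    print(sensor.publish())
```

The design choice is the interesting part: the trust boundary sits at the `publish` step, so society gets the salient facts while the raw feed never needs to exist outside the device.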

David Brin’s talk featured what we should do to incentivize AI systems to play nicely, whether they are benevolent or otherwise. Brin suggests we “Give Every AI a Soul – Or Else”. If powerful AI entities have identities, then they can be held accountable for their outputs (by us and by other AI entities). As Brin says, “if a highly intelligent predatory being (a lawyer) attacks you, what do you do? You call your own lawyer!” Systems of identity and co-regulation can probably steer open-ended systems in non-negative-sum directions. Thus, while not providing guarantees, these approaches are valuable. Moreover, we need systems for tracking chains of information sources and their veracity to manage the deepfake problem. Verified pseudonymity could also suffice, in line with structured transparency. Brin’s talk was powerfully delivered and one of the most directly constructive of the summit, providing one clear recommendation and a discussion of why other approaches probably won’t suffice.

David was also a pleasure to talk with over coffee breaks and lunch. I’d heard rumors that Robin Hanson doesn’t hold himself back for the sake of political correctness, and I found discussions with him amusing, too 😉. A fun feature of the BGI summit for me was talking to known figures without recognizing them (at first)! There were many beautiful, insightful, and even life-changing conversations at the summit. The proportion of members dancing at a local rooftop bar was impressive. The talks were the tip of the iceberg, and others likely have vastly different views of the event, also warmly inspiring. At least two series of interviews were done behind the scenes: one on attendees’ views on BGI and one for a documentary by Nefertiti A. Strong looking at the humans Beyond the Code, so there’s interesting content I’ll only discover later.

I saw a strong need to find common ground between ‘doomers’ and techno-optimists. If one person believes that the likelihood of existential doom from developing AGI without safety guarantees is ‘near 100%’, then communicating about what to do with someone who believes the likelihood is very low will become difficult. Definitions of doom can vary to include catastrophes far from existentially threatening, which require additional contextualization imo. For example, “what are the odds that at some point in the future, a group of humans is responsible for a covid-level catastrophe?” So long as humans remain globally in power, those odds seem fairly high. Should this be included in the likelihood of ‘doom’, where we require AI to be far, far, far safer than us in order to be developed? The topic can quickly become misleadingly vague. Meanwhile, techno-optimists may look like the dog in the meme below:

I see a similar need with regard to regulations. Techno-optimists aiming to bring about beneficial general intelligences have an obligation to demonstrate concern for potential risks (in measured proportion as per a proactive d/acc stance) and to brainstorm appropriate regulations.

For example, many techno-optimists are probably happy to have firm regulations around the use of nuclear weapons and facilities, in the military, in BSL 3 and 4 labs, etc. At least, to the extent we already have them, they can probably be extended to incorporate AI-related risks. Do these high-risk cases need to be dealt with by regulating advanced AI in general at the source? Or should the regulations apply to any AI that works in high-risk domains?

On the decentralized, open-source side, I wonder if there are philosophically aligned regulations that help with safety. Transparency is a dual-edged sword: knowledge is power, and the more we know about state-of-the-art AI, the more we can develop protective measures. On the other hand, open-source projects can be forked by bad actors. Thus, perhaps counterintuitively, instead of pausing the development of large models, we could require them to be open-sourced (or at least auditable by NGOs as well as GOs). This way, debugging and safety preparations can be accelerated before models developed in secret surprise us. The EU AI Act has some clauses in this transparency direction. How to determine the threshold for such models seems tricky. Focusing on the “computational resources” required for training could set a strange precedent, as if any sufficiently complex configuration of matter warrants scrutiny.

Another idea is to seek regulations that empower people to have a larger say in the AI they interact with and how it’s trained. Push the RLHF paradigm further. Require YouTube-like services to provide an API for third-party recommender systems (creating a larger competitive marketplace for AI models, too!); a toy sketch of what such an API could look like follows below. The more pro-transparency, pro-freedom protective regulations there are in place, the more focused the restrictive regulations can be, which should decrease the risk of societal harms due to over-emphasizing hierarchical control structures.
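As a sketch of what such a requirement could mean in practice, here is a hypothetical (entirely invented) interface a video platform might expose so that third-party recommenders can plug in and compete; none of these names come from any real platform’s API.

```python
# Hypothetical third-party recommender contract for a YouTube-like platform.

from dataclasses import dataclass
from typing import List, Protocol

@dataclass
class VideoCandidate:
    video_id: str
    title: str
    topics: List[str]

@dataclass
class ViewerContext:
    viewer_id: str
    declared_interests: List[str]    # preferences the viewer explicitly controls
    watch_history_ids: List[str]

class Recommender(Protocol):
    """Contract a third-party recommender would implement and register with the platform."""

    def rank(self, viewer: ViewerContext, candidates: List[VideoCandidate]) -> List[str]:
        """Return candidate video_ids ordered by recommendation priority."""
        ...

class InterestMatchRecommender:
    """Toy example: rank by overlap with the viewer's declared interests."""

    def rank(self, viewer: ViewerContext, candidates: List[VideoCandidate]) -> List[str]:
        def overlap(c: VideoCandidate) -> int:
            return len(set(c.topics) & set(viewer.declared_interests))
        return [c.video_id for c in sorted(candidates, key=overlap, reverse=True)]

if __name__ == "__main__":
    viewer = ViewerContext("u1", ["robotics", "ethics"], ["v9"])
    feed = [
        VideoCandidate("v1", "Celebrity gossip", ["gossip"]),
        VideoCandidate("v2", "BGI Summit talks", ["ethics", "robotics"]),
    ]
    print(InterestMatchRecommender().rank(viewer, feed))  # ['v2', 'v1']
```

The regulation, on this sketch, would only require the platform to honor whichever registered recommender the viewer selects; the ranking logic itself becomes a competitive marketplace.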

I had some good discussions with Matt Ikle on how attention should be distributed between the topics of beneficial AGI and mitigating risks. Even Nick Bostrom, author of Superintelligence, seems to regret how much attention he paid to existential risks (see this Tweet or the YouTube discussion); he still thinks we wish to weather the AGI revolution before other possible risks kick in. One idea we floated was that there are risks of current AI tech that we don’t have solutions for yet. Then we have some look-ahead as to clear risks that will come in the near future; we could see deepfakes coming from years away. We also know that we will likely develop AGI far beyond our own capacities (in nearly every domain), which could become increasingly difficult to predict and deal with. Perhaps we should focus more on the risks that we are clear about and less on dealing with the unknown machine superintelligence. Dealing with the nearer-term risks will set a good foundation for overseeing an ecosystem of diverse intelligences.

Finally, on the humanistic side, I wonder how our AI regulation would change if we viewed AIs as civilians and moral subjects with dignity. There’s an ironic twist that one could urge us to develop AGI gradually out of concern for the possibly sentient AGI systems, as Metzinger advocated. How do we know we won’t bring about an explosion in synthetic suffering? Exercising caution will involve research programs to interact and experiment with budding proto-AGI systems to inquire into their states of consciousness (I believe this holds whether one is an idealist, panpsychist, or reductive materialist, all of which can be functionalist stances). Mental health (according to the WHO, “a state of well-being in which the individual realizes his or her abilities, can cope with the normal stresses of life, can work productively and fruitfully, and can contribute to his or her community”) involves fruitfully contributing to one’s community, coping with stressors, realizing one’s abilities, etc. Thus, aiming to parent AGI sentiences well, so that they are mentally and physically healthy without (much) suffering, will cover many of the aims of the safety measures.

To further simplify matters, let’s introduce mind uploading, a procedure where a human mind is uploaded into a computer so that the person can continue to live beyond the limitations of the human body. By the definition of AI in the EU AI Act (“‘AI system‘ is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”), a human mind upload may be an ‘AI system’. Thus any clause of an AI act that we would not wish to apply to digital people will need to be changed, or the definitions improved (or the laws de facto ignored 😉). If thinking of digital people, for example, banning “emotion recognition in education” would seem outright absurd. This would simply be human-vs-machine discrimination encoded into the law.

While I’m an open-source advocate, if I think about being a mind-uploaded digital person, the need to submit my mind to audits and “adversarial testing” seems concerning. The definition of “high-impact capabilities” actually applies to people, too. Perhaps we should (and do) pay extra attention to the most skilled humans alive. They could be dangerous, to be fair. On the side of radical transparency, maybe this is actually the best way as we move to superdemocracy. I think it’s important to look ahead to the plausible future where robots and AI deserve rights as citizens and to figure out regulations in this context. We already have many limitations and certifications for humans in high-risk domains, which is evidence that there may be effective strategies that don’t breach the fundamental rights of digital people. I believe that these considerations may help to focus the search space for feasible regulations.

I’m usually the primary robot rights advocate in the room, even at AI conferences and events. The BGI Summit was an exception: many members were concerned with the rights of (potentially) sentient (or otherwise significantly conscious) AI and robots (let alone strange synthetic life forms). Some plan to be uploaded to computers, making these questions ones of practical, personal concern. I’d have liked to see more AI and robot participation at the summit. We had Desdemona playing in her dream band. J3D.AI provided an LLM-based “conference brain” service that can provide summaries and answer questions about the talks (based on transcripts), which is handy 😊.

The final orientation difference of this post is whether making advanced (alive) AGI is the point or whether supporting human flourishing is the point. Anecdotally, I see an overlap between these orientations and whether someone focuses more on dealing with risks or on building beneficial AGI. As seen in the discussions on open-ended intelligence, aiming to build AGI minds that are more conducive to mental health, self-reflection, self-reinvention, creativity, and productive flow than human minds is a big task, one that goes beyond developing roughly human-level (and beyond) general AI to support us in diverse tasks. Some of us wish to oversee the development of benevolent AGI.

We live in exciting, transformational times with bountiful, beneficial spacetime-scapes awaiting us when we mature through the growth pains. My hope is that the BGI summit is a part of easing the transitions we’re in the midst of. I probably lean on the ‘d/acc’ side, without much emphasis on ‘acc’, because right now AGI development seems to be smoothly underway. My stance is to not artificially slow down, recalling that slow is smooth and smooth is fast (a Navy SEALs mantra). Let’s accelerate the full package: accelerate safety research, accelerate preparing legal and ethical theory to be entity-neutral (i.e., not human-centric), accelerate the transition to B-corps and the reform of our socioeconomic incentive structures (which provide the reward landscape in which AGIs will be introduced), accelerate animal rights and consciousness research, accelerate formal verification and security protocols, accelerate cognitive neuroscience (of humans and LLMs), accelerate our transition to an abundance economy (via local, national, global, and distributed universal basic income schemes), accelerate our transition from the pre-digital media copyright era, and let’s accelerate every other aspect of co-creating beneficial realities for all sentient beings. I second Ben’s request in his Beneficial AGI Manifesto to let 8 billion manifestos bloom. Please write your own BGI manifesto, positive Singularity manifesto, or however else you phrase it 😉.