There are three cutting-edge thinking clusters that I believe we should unite:
1) The Incentive Tensors:
Bostrom, Daniel Schmachtenberger (closer to the blade), David Sloan Wilson, Bret Weinstein, Joon Yun, Thiel, Eric Weinstein (trailing).
They are trying to find the basins and attractors that might stabilize future evolution (cultural, technological, and memetic) away from Moloch (bad incentive structures) and Azathoth (evolutionary constraints).
Related keywords: X-risks, catastrophic risks, incentive alignment, basins of attraction, exponential tech, differential progress, Singleton, transhumanism, multipolar equilibria.
2) The G Must Rise Clan:
Michael Anthony Woodley of Menie, @Edward Dutton, Curt Doolittle, Emil O. W. Kirkegaard, Alexander Kruel, etc.
They have caught up with the research on correlations between intelligence and genes to the point where they can use the genomes of ancient populations to estimate those populations' G, and can model the mechanisms that produce intelligence in populations. They see that we are falling about one point per decade, and they want to make G rise.
Keywords: Social Epistasis Models, Intelligence decline, Woodley Effect, Anti-Flynn effect, Differential reproduction.
3) The Individual x Group Differentiators:
Ellen Clarke, David Sloan Wilson again, the Price equation, Coase's theory of the firm, Stuart Armstrong's Anthropic Decision Theory, Eörs Szathmáry, Deacon, Tononi.
Working in different disciplines, from economics to corporations to biological organisms to artificial agents, they try to differentiate what is an individual versus what is a group. When do many individuals become one group, through loss of autonomy and degeneration, for instance? To what extent is functional identity or similarity sufficient for something to count as one entity rather than a member of a group, or a copy, etc.?
Keywords: Major Evolutionary Transitions, Type 1/Type 2 objects (in Clarke's sense), autonomy loss, degeneration, differentiation, autopoiesis, autocatalysis, synergy, merger.
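For readers less familiar with this foot of the triad, the Price equation listed above is the standard tool for splitting selection into between-group and within-group components, which is exactly the individual-versus-group question this cluster studies. In its multilevel form (writing \(w\) for fitness, \(z\) for a trait, with groups indexed by \(k\)):

```latex
\bar{w}\,\Delta\bar{z}
  \;=\;
  \underbrace{\operatorname{Cov}_k\!\left(w_k,\, z_k\right)}_{\text{between-group selection}}
  \;+\;
  \underbrace{\operatorname{E}_k\!\left[\,w_k\,\Delta z_k\,\right]}_{\text{within-group change}}
```

When the first term dominates, the group behaves as the effective unit of selection (a Type 1 object, roughly, in Clarke's terms); when the second dominates, the members do. A Major Evolutionary Transition can be read as a shift of weight from the second term to the first.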
The reason I think these people should try to think together and understand each other's fields is, basically, that we lack the appropriate tools to steer the future if any foot of this triad is ignored.
We can only design the right incentive structures and alignment by recognizing the on-the-ground reality of reproduction: the fall in G over the last century and a half, and the expected continuation of this process under current biogeographical and mating dynamics. That continuation follows both from the dynamics themselves and from the astronomical, and thus prohibitive, cost of transitioning to a system where selection bypasses sex, sexual selection, etc.; e.g., genetic engineering is a dead end.
Designing incentive structures, and tensoring them along directions, also requires understanding to what extent an agent is one or many, how hard it will work to protect or advance (Steve Omohundro comes to mind) its own survival and reproduction, and what it considers part of itself, part of a larger group, or a different entity.
Uniting these three paradigms was, and is, the bulk of my PhD thesis, but seeing the stellar conversation between Schmachtenberger and Eric made me realize we are probably closer to a point where that debate is legible to a wide audience than we were five years ago, when I began writing.
So I'd urge people who understand one foot of the triad well to teach their foot to those in the other two, and everyone to try to learn the feet they are less familiar with.
The G Must Rise people seem strongly mired in politics, and I suppose that sometimes prevents their memes from becoming widespread among people who want to save the world, EAs, etc. The focus on differential reproduction and on keeping intelligence afloat is insufficient if we don't also consider the risks and damages of losing autonomy and individual intelligence by offloading that intelligence to higher levels, hive minds, etc. Forming superorganisms has trade-offs and isn't a panacea, as the G Must Rise people sometimes seem to advocate.
reply: We must also make sure that as exponential technology progresses, most people remain UNDER the waterline of destructive capability, which correlates with intelligence.
Counter: It seems plausible that the waterline for steering the future correctly is actually higher than the waterline of destructive capability, and thus we have reason to keep some fraction of the population at a distinctively high level of G, even if they require monitoring. Further, the steering-the-future waterline probably admits a much smaller fraction of the population (though it sadly does not appear to be a fraction so small that mere assortative-mating effects suffice to generate enough good, smart, wise minds). It might be possible to sustain a substantial population between the two lines while still keeping a few people above both of them, whose job would be modeling the world and steering deep-future decisions: infinite games, non-rivalrous games, universal compersion, differential AI safety, and any other methods of preventing catastrophe that require bound large intelligences and not just hive-type/market-type/superorganism-type intelligence.