I am wondering whether to give in and do a newspaper-clippings-and-ball-of-string map to show the connections between the American Rationalists, American or Right Libertarians, Effective Altruism, 'human biodiversity' (sic), neoreaction, and the American pundit-economists with blogs (plus a few figures with lives and influence off the Internet such as Steven Pinker and Peter Thiel). I am so not surprised to learn that the rationalists started writing Harry Potter fanfic and ended up shilling the FTX ponzi scheme.
A lot of effort has been put into spreading these ideas in the California and New York tech spaces. This Tumblr post is not bad but does not get into the 'scientific' racism or the connections with economists with a PhD and a blog or a newspaper column: https://leviathan-supersystem.tumblr.com/post/180724263214/what-is-lesswrong-and-can-you-summarize-why-its (This RationalWiki entry (https://rationalwiki.org/wiki/Scott_Alexander#Race_and_IQ) is not bad on them and race theories, but it focuses on one prominent figure rather than the faction within that space which likes to cite Razib Khan and has racist cranks posting in their comments.) OTOH, you can waste your life documenting people on the Internet who push terrible ideas or terrible people.
Edit: thinky professional centre-left mag Vox discovered neoreaction a few weeks ago https://www.vox.com/policy-and-politics/23373795/curtis-yarvin-neoreaction-redpill-moldbug It also fails to draw the whole network of connections (S. Alexander and R. Hanson are not just "ideas bloggers" but part of specific subcultures where there is sympathy for neoreaction).
Edit: also, back in the Before Times, Dominic Cummings' blog seemed to be drawing on some of these communities (although I don't remember any sign that they noticed him).
Edit: Back in the Internet Feminism Wars of the early 2010s, a famous rationalist blogger wrote an essay with an infamous paragraph comparing feminists to Voldemort. I am told that was a response to an essay by journalist Laurie Penny who went on to skewer cryptocurrency scammers! (https://lauriepenny.substack.com/p/ship-of-fools) So this is a tiny tiny space with dense connections and far too much public drama. (Which is one reason why descriptions of these spaces are cluttered with personal attacks and misleading insinuations).
David Gerard cites the following two posts as early attempts to move 'race science' into rationalist discourse:
https://www.lesswrong.com/posts/faHbrHuPziFH7Ef7p/why-are-individual-iq-differences-ok
https://www.lesswrong.com/posts/BahoNzY2pzSeM2Dtk/beware-of-stephen-j-gould
He mentions someone called Aella (https://nitter.ca/davidgerard/status/1556391089124286467) whom I had never heard of.
Edit: someone spelled out Cummings' connections to the rationalist movement without being quoted on that movement's connections to shady, and not just weird, ideas https://www.theguardian.com/technology/2021/may/27/demis-hassabis-the-deep-mind-dominic-cummings-turned-to-as-the-pandemic-hit
Quote: as well as being a generally respected scientist, (Cummings advisor Dr. Demis) Hassabis is linked to the rationalist movement, which has guided much of Cummings' thinking.
"We know that Dom is rationalist-influenced from his own blogroll and comments," says Tom Chivers, author of a book on the movement, The AI Does Not Hate You. While Hassabis is not himself a member of the community, his involvement in advanced AI research brings him into the same circles.
"What rationalism implies from a policy perspective is a big question," Chivers says, "but you can see something like it in the effective altruist mode of thinking: trying to separate emotional responses from outcomes. And, by extension, it can lead to serious thought about long-term existential risks, AI and bio-terror, because they have the potential to crush human flourishing in the long term."
A blogger in Australia has also noticed that figures and tropes from the Social Media Right from the early 2010s are being talked about again (https://camestrosfelapton.wordpress.com/2022/11/27/the-weird-attempt-at-a-2014-revival/). Two people tried to restart the Internet Feminism Wars from that period with me in the past week, and I am sorry but I won't touch that with a dragonlance.
I guess my view with some of this weird Very Online Politics stuff is that it's maybe worth working back to it from anything with hitting power (politicians, tech barons, mass media, etc) that it might have influenced, but probably not working forward from it, starting with it as a core premise. So I think the Cummings or Thiel connection might have some interest as a product of this milieu and its influence on wider society, more than the milieu being interesting in and of itself, for example.
I never really know if I should learn more about some of this stuff: I suspect it might be information my brain doesn't actually need, in that I'm not sure what I'd usefully do with it if I did know how some of these groups fitted together? My interest in politics is a fairly practical (or at least policy-level) one, and I don't think my own political movement (here meaning "the UK tradition of radical liberalism") has been all that drastically influenced by the weirder end of blogosphere currents.
Quote from: Jubal on November 29, 2022, 11:17:42 PM
I guess my view with some of this weird Very Online Politics stuff is that it's maybe worth working back to it from anything with hitting power (politicians, tech barons, mass media, etc) that it might have influenced, but probably not working forward from it, starting with it as a core premise. So I think the Cummings or Thiel connection might have some interest as a product of this milieu and its influence on wider society, more than the milieu being interesting in and of itself, for example.
I never really know if I should learn more about some of this stuff: I suspect it might be information my brain doesn't actually need, in that I'm not sure what I'd usefully do with it if I did know how some of these groups fitted together? My interest in politics is a fairly practical (or at least policy-level) one, and I don't think my own political movement (here meaning "the UK tradition of radical liberalism") has been all that drastically influenced by the weirder end of blogosphere currents.
One thing I noticed is that some American spaces in the 2010s which proudly stated that they were focusing on politics because that was much more important than geekery didn't seem to start acting on politics. They just kept talking about politics online and yelling at people who had hurting wrong labels. But to do electoral politics, you need to build coalitions with people you are different from around common interests! And those coalitions have to be built around electoral districts, not weird global ideologies.
I would also respectfully suggest that many of these figures have serious hitting power in the form of a receptive audience of thousands of professionals, many of whom build and maintain New Media systems. The average racist with logorrhea does not count, but S. Alexander probably does, and so do many of the pundit-economists such as Yglesias.
I agree that if you try to learn about these spaces and their influence you will hear far more than you want to hear about who bedded whom, who snubbed whom on Tumblr, etc. The RationalWiki article I linked has that problem, so do David Gerard's birdsite posts. And many of these people's most notable achievement is writing or talking endlessly online.
Quote: And those coalitions have to be built around electoral districts, not weird global ideologies.
In fairness this is most true in the Anglosphere: you can have a much more ideologically-driven coalition in theory in a lot of other countries, though there are limits to that (one of the reasons that the Austrian right-liberal NEOS never gets above ten or twelve percent is that it's not socially conservative enough for the conservatives, but anyone on the left thinks their economic policies are absolutely nuts).
I guess I agree that pundits like Yglesias do have a meaningful amount of power, but I do wonder how much: their political preferences aren't terribly well represented in actual policy or electoral results, as far as I can tell. Though maybe their professional-leaning and media-type audiences do mean they have outsize narrative power, or possibly outsize financial power or executive power over all the bits of government nobody really looks at much (they might conceivably reach and influence a much higher percentage of political donors or special advisors than voters).
On the open web, the classic example was that right or American libertarians were big, whereas that ideology basically only exists as an organized movement in the USA, and even in the USA has very little influence on policy. It was just fashionable with white American men in the IT industry and SF fandom, and that demographic had an outsized influence on open web culture.
I would argue that in systems with proportional representation, the relevant electoral districts are "the areas across which votes are distributed." Even if you want a national or state policy, then you need to organize people within your nation or state. Organizing a bunch of fellow travellers from Switzerland and Oregon won't help you get policies enacted in New Hampshire or Quebec.
Yes, I think that's very fair on both counts. Though "we need to get a chunk of 1% of the ten million or so Dutch voters" is maybe at least in theory a much easier lift for a niche ideology than "we need forty percent of voters in a specific geographical block of 50-60 thousand people in England". That said, the Dutch parliament does actually rather lack e.g. a weird overly-online neoliberal party, though it has various weird far-right brands, a splitting constellation of small parties on the left, and some long-standing niche religious parties. I guess maybe the wide array of "normal" parties give most people some bit of ideological flotsam to hang onto.
This is bringing up big nihilistic topics I don't have energy to tackle.
In the USA, electoral politics have to be 'big tent.' So eg. if you are a Democrat who wants to get things done, you have to be willing to work with Black Christians even if you are neither, and you absolutely can't define yourself by being angry at either even if that goes well on social media.
I don't see any way of knowing who is actually influential this century. Journalists on birdsite say that they sometimes write opinion pieces for American policy magazines with an intended readership of one (and people in Washington DC have seen very specific ads posted along commuter routes to specific agencies and departments). Journalists in Canada say that all important federal policy decisions are made behind closed doors at the PMO by appointed (not elected) advisors whose only interest in evidence is the evidence of polls. I would expect that politicians have the same lazy epistemology as most people, so their opinions come from their friends and newspapers and magazines and social media. There are lots of policies which are widely supported but can't get enacted because of status quo bias or because a small group is very firmly for the current policy (eg. marijuana legalization / decriminalization had won all the arguments by the 1990s but took until the 2010s to be enacted).
As a heuristic, I would assume that anyone who earns their living sharing opinions on policy is influential (unless almost all their income comes from a single patron). So the pundit economists would qualify, and arguably S. Alexander since the NYT doxed him (although he gets paid through Substack, which is funded by other people's money, and it's possible to create a bunch of fake subscribers to funnel money his way and make him look influential - the same scam as buying birdsite followers). But I agree that their influence is mainly in New York State and California tech communities, where people are often not very good at electoral politics (although they build media systems and allocate capital).
As far as I can tell, most people are not interested in electoral politics at all! That is why newspapers used to print the story of a trial every single day, to catch the people who just flipped through a paper every so often rather than read the same paper every day.
I have a conversation about anything serious with people I don't live with every month or so during the pandemic. My friends are scattered around the world. Yes, the Internet is written by crazy people with too much time on their hands. But how on earth could I know what most people in my area think?
The Old Media are mostly dead and get most of their info from social media and press releases these days anyways.
The Effective Altruism movement came up on Mastodon. It seems to have a number of factions. There are groups like GiveWell whose point is "if you claim to be doing good with donations, prove it! and isn't it generally easier to reduce suffering and death in poor places than rich places?" There are a lot of well-funded charities whose mission is "raising awareness" or "advocacy" or who can't produce photos of all the schools they say they built. https://www.givewell.org/charities/top-charities As far as I can tell, they are still around and still have the same basic approach of reducing sickness and death among people alive today (these outcomes are easy to measure, whereas it's hard to measure the effect of "raising awareness").
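A toy illustration (every number and charity name here is invented) of why the first approach can be audited while "raising awareness" can't:

# toy cost-effectiveness comparison; all figures made up
charities = [
    {"name": "Bednet Fund", "donations": 1_000_000, "verified_outcomes": 400},    # independently counted cases averted
    {"name": "Awareness Org", "donations": 1_000_000, "verified_outcomes": None}, # nothing countable to verify
]
for c in charities:
    if c["verified_outcomes"]:
        print(f"{c['name']}: {c['donations'] / c['verified_outcomes']:,.0f} dollars per verified outcome")
    else:
        print(f"{c['name']}: cost per outcome undefined, nothing to count")

With the first charity a donor can argue about whether 2,500 dollars per outcome is good; with the second there is nothing to argue about at all.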
There are the Longtermists, who like speculative risks and who have been infiltrated by the people who are worried that chatbots will be like the AI in Terminator or Reign of Steel or The Forbin Project. "Longterm" or "existential risk" means the future of humanity and avoiding human extinction (or keeping humans alive long enough to transition into digital minds). If you believe that the welfare of trillions of potential future humans is an end that can justify any harm to actually existing human beings today, you can talk yourself into doing terrible things. This is a known danger of Utilitarianism (https://crookedtimber.org/2022/11/13/a-rant-on-ftx-william-macaskill-and-utilitarianism/) as well as Christianity, Communism, and other belief systems which envision an end. Eg. someone got on twitter and started calling for bombing unlicensed AI research facilities, even in the territory of other nuclear powers, and there are a lot of people whispering "why help millions of poor brown people when we should be focused on stopping engineered bioweapons from wiping out humanity oh look I have the blueprints for a lab to do that right here and for just a billion dollars plus operating costs ..."
There are the 80,000 hours people who argue that the best way to do good is to get a high-paying job and donate the proceeds (80,000 hours is the time you spend in a 40-year career). This can clearly be an honourable way to live, but because humans are rationalizing not rational, this can become an excuse to live in luxury on other people's work doing all kinds of damage in the name of the Cause. What is publishing propaganda for a tobacco company or mining a few mountaintops if you build some nice libraries?
And there are the grifters like Sam Bankman-Fried, who wanted to donate to improve their reputations, or to suck up donors' money. They seemed to find longtermism and 80,000 hours useful smokescreens.
If you read things published before Sam Bankman-Fried's Ponzi scheme collapsed, you can find hints that the last three factions were gaining more influence because they brought money and charisma: https://www.newyorker.com/magazine/2022/08/15/the-reluctant-prophet-of-effective-altruism And a lot of people are outraged by the grifters and frightened by the Longtermist / AI risk movement, so they have launched a propaganda counterattack with the whole Effective Altruism movement as a target. I don't have the contacts in that space to say how much of it the Longtermists and the grifters control; I suspect the answer is "more than I would have guessed."
The New Yorker estimated the EA movement's assets at around USD 30 billion in August 2022. That is also much more than I would have guessed in the early 2010s when groups like GiveWell seemed to be a small part of the charitable sector.
The New Yorker article also has a hint of sexuality:
Quote: In graduate school, "I started giving three per cent, and then five per cent, of my income," he (Effective Altruism philosopher William MacAskill) said. This wasn't much—he was then living on a university stipend. "I think it's O.K. to tell you this: I supplemented my income with nude modelling for life-drawing classes." The postures left him free to philosophize. Later, he moved on to bachelorette parties, where he could make twice the money "for way easier poses."
...
When MacAskill took his vow of relative poverty, he worried that it would make him less attractive to date: "It was all so weird and unusual that I thought, Out of all the people I could be in a relationship with, I've just cut out ninety-nine per cent of them." This prediction was incorrect; in 2013, he married another Scottish philosopher and early E.A., and the two of them took her grandmother's surname, MacAskill.
That is harmless, but compare the polyamorous household which ran Bankman-Fried's ponzi scheme (and his girlfriend Caroline Ellison making comments about a Chinese harem) and the reports that LessWrong guru Eliezer Yudkowsky encourages female fans to compete for his attention at sex parties. I don't like sticking my nose into people's private lives, but powerful men among the grifters and LessWrong 'rationalists' use power in ways that raise red flags (cp. Ayn Rand deciding that Objectivism demanded that she trade husbands with one of her students, and that the student could not object without rejecting Objectivism).
Edit: some good search keywords to bring up cultlike behaviour among the 'rationalists' (if cults are not a trigger for you) are "Leverage Geoff Anders"
Yeah. I feel like the pushback against EA stuff is a bit linked to the big pushback against Net Zero in environmentalist circles, and in general against the idea of "I can do a bad thing and balance it with a good thing", which utilitarianism does carry a risk of. It probably helps that these causes tend to be headed by the traditional villains of the left, in that it's largely a movement among people in high-paying and somewhat more damaging sectors.
I have not seen that connection! Net zero seemed like a figleaf for "maybe we can save some of the ways things are done right now if we invent magical technology." (or for scams pretending to plant trees in Brazil)
I agree that the 'scientific' racists with lots of time to post on the Internet, the LessWrong and SlateStar 'rationalists', and the Longtermists are not likely to cause more destruction than one airliner crash. Rich people give money to strange or scammy things all the time, so while spending $80 million to prevent Skynet seems like a bad use of that money, it's probably no worse than a donation to a church, or We Charity, or the PAC for Electing Bad People.
I'm not sure it's a very visible connection, just two things that have the same zeitgeist and underlying argument of "the things that the rich people are doing that nominally are about saving the planet are actually about protecting their ability to do harm in a deniable way". And yeah, in terms of bad impacts 80 million to prevent skynet is probably a lot better than the Electing Bad People PAC, but it certainly doesn't count as a good use of money either.
And while you can quibble about the GiveWell style of Effective Altruism, I think it's true that donors who follow their strategy will reduce suffering and death more than donors who give more or less at random to worthy-sounding causes with honest-looking local representatives.
The problem with risks which have not happened yet is that it's not clear how to tell how likely they are, or how damaging different outcomes would be, or what actions that we can take today might actually reduce them. Humans are terrible at predicting the future, and since chaos theory we know that many aspects of the future are inherently unpredictable. Eg. nobody predicted how much containerization would change the logistics of transport: not the longshoremen, not the shipping companies, not the business press, not militaries with logistics problems, not the port authorities. Many of them believed that it could reduce transport costs, but not that it would almost completely replace breakbulk and enable whole new modes of manufacturing based on importing subassemblies by sea. And if the Port of London and the Longshoremen of New York had funded a Containerization Research Institute in the 1920s, it's not clear that they could have stopped the transition (or that that would have been good!)
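The chaos point is easy to demonstrate with the textbook logistic map (a standard example, nothing specific to these movements): two runs that start one part in a million apart have nothing to do with each other after a few dozen steps.

# the logistic map x -> r*x*(1-x); r = 4 is the classic chaotic regime
def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.300000, 0.300001  # "measurement error" of one part in a million
for _ in range(50):
    a, b = logistic(a), logistic(b)
print(abs(a - b))  # typically of order 0.1 to 1: the tiny initial error has swamped the forecast

If your model of the future has any sensitivity like that, better measurements and bigger research budgets do not buy you longer forecasts.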
Yes. I don't think trying to work out what will happen in the future is a wholly valueless exercise, but in general, I think it's a good rule of thumb that you're likely to get a better future first and foremost by producing a better now: if we had a society more robust at fixing its present problems, that'd be likely to be a society better able to cope with the strain of any new problems.
This essay on AI cult longtermism came up on Mastodon https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo One of its points is that this movement simultaneously believes that humans have a duty to turn the universe into simulated humans, and that the greatest threats to humanity's future are radical near-future technologies. So they have the problem that they push for aggressively developing dangerous technologies (it's hard to imagine humans expanding outside the solar system without radical biological and information and energy and propulsion technologies), but also see these technologies as something they must control even at the risk of nuclear war or unchecked global warming.
In my view, long-term predictions (ie. centuries not trillions of years) of a system like the human species are obvious quackery.
I am sure there are forms of longtermism focused on 10,000 year clocks (https://kk.org/thetechnium/neal-stephenson-and-the-10000y/) and seed banks in the Arctic and other practical things, not on conquering the universe and stopping Skynet.
Another handy essay on the connections between Bostrom's Longtermism, the "speculative threats" kind of Effective Altruism, the rationalist movement, Steven Pinker, and race-and-IQ 'science' (sic) https://www.truthdig.com/dig-series/eugenics/ Again, there are the longtermists who are building 10,000-year clocks in Texas and seedbanks in the Arctic, and the Effective Altruists who point out "before you donate to We Charity, look if someone has independently verified their accomplishments and whether others are doing the same for less money", but dangerous quacks are trying to appropriate the names for their own uses.
Edit: Timnit Gebru @timnitGebru@dair-community.social uses the term TESCREAL for these weird Internet and California / New York / Oxford spaces (although again, not all Effective Altruists!)
Quote from: https://dair-community.social/@timnitGebru/110096711168347951
#TESCREAL stands for transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and longtermism. Émile P. Torres @xriskology@mastodon.bida.im coined it in our upcoming paper. Great to see everyone following the bandwagon of a secular religious cult.
I'm not sure I know what extropianism or cosmism are...
Quote from: Jubal on June 03, 2023, 08:14:34 PM
I'm not sure I know what extropianism or cosmism are...
I could not define them either but I imagine she says something in her paper.
There is a lot of overlap in these spaces, and ideas that don't seem obviously related, like race 'science', keep coming up in them. I think that is one reason for the social media offensive focusing on people and painting with a broad brush, so the eugenicists and racists and builders of hierarchies can't just rebrand.
I have trouble getting too angry with Scott Alexander, because clever lonely dudes with blogs rarely do much harm (and the NYT did not need to publish his legal name to show his connections with shady people and advocacy of dubious ideas), but I would not recommend entrusting any of these people with a hot dog stand; they often push terrible ideas or get grifted by terrible people. The AI Foom people have a thought experiment, "what if you lock the AI in a box and it persuades someone to let it out?", and I think the David Gerards of the world are scared that someone is trying to let these ideas out of weird Internet communities and geeky clubs in San Francisco, New York City, and Oxford and let them control serious money. And they are not polite Canadians, so they play dirty.
I see that Maciej Ceglowski was suspicious of Nick Bostrom's ideas before that was cool https://idlewords.com/talks/superintelligence.htm (and he moved in those same software and venture capital circles in California)
Edit: I think I have finally found the essay which lays out the connections between these people without personal attacks, unverified claims, or misunderstanding arguments https://aiascendant.substack.com/p/extropias-children-chapter-1-the-wunderkind Professor Nick Bostrom's 1996 email endorsing 'scientific' racism was on, you guessed it, the Extropians mailing list which Extropia's Children begins with. (1996 is a long time ago and I have no idea of Bostrom's current views, but the idea of a racial hierarchy of IQ comes up frequently in these spaces, and it is one reason to be suspicious of them, given that even 1996-Bostrom said "I have begun to believe that I won't have much success with most people if I speak like that" and given that he did not wholeheartedly renounce these ideas in his apology at the start of 2023 (https://thetab.com/uk/oxford/2023/01/12/senior-oxford-uni-academic-argues-blacks-are-more-stupid-than-whites-in-unearthed-emails-29768))
Edit edit: and unsurprisingly, defenders of Longtermism accuse Émile P. Torres (they/them) of misrepresenting their arguments. Torres and others certainly spend a vast amount of time and energy criticizing these movements, but these movements do seem to control billions of dollars and influence policy. And what I have seen does not make me think that sitting down and reading key works by Longtermist thinkers would make me wiser or happier. We all have limited time and attention. https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty {lots of twitter drama on this one}
Quote from: Jubal on April 30, 2023, 11:45:24 PM
Yes. I don't think trying to work out what will happen in the future is a wholly valueless exercise, but in general, I think it's a good rule of thumb that you're likely to get a better future first and foremost by producing a better now: if we had a society more robust at fixing its present problems, that'd be likely to be a society better able to cope with the strain of any new problems.
Activisty types sometimes object to bednet Effective Altruism on the grounds that "it does not address the underlying causes, just the symptoms." And while that seems true, "cure children in Botswana of parasites which will stunt their growth and health" is much more tractable for busy people in London or Chicago than "solve the global, national, and local inequities which lead to so many children in Botswana getting infected in the first place." It's also much easier to know whether your actions are improving what you say you want to improve.
Quote from: dubsartur on June 08, 2023, 06:56:03 AM
Quote from: Jubal on April 30, 2023, 11:45:24 PM
Yes. I don't think trying to work out what will happen in the future is a wholly valueless exercise, but in general, I think it's a good rule of thumb that you're likely to get a better future first and foremost by producing a better now: if we had a society more robust at fixing its present problems, that'd be likely to be a society better able to cope with the strain of any new problems.
Activisty types sometimes object to bednet Effective Altruism on the grounds that "it does not address the underlying causes, just the symptoms." And while that seems true, "cure children in Botswana of parasites which will stunt their growth and health" is much more tractable for busy people in London or Chicago than "solve the global, national, and local inequities which lead to so many children in Botswana getting infected in the first place." It's also much easier to know whether your actions are improving what you say you want to improve.
Yeah, I think this is a complex problem that internet discourse tries to simplify too often: one doesn't want to never try and fix underlying problems, that's really bad, but it's also morally bad to simply tell people they have to sit and die while you spend all the money on funding the grand progressive takeover of the world which may or may not happen. But it wrongly becomes an either/or for too many people.
It's also probably the case that the people who go into "bednetting" EA are better at solving well-defined, context-independent problems than at squishy things like land reform in Malaysia or getting the right people elected and appointed in Poughkeepsie. Just like the average person who glues themself to a crosswalk in Berlin probably lacks the money and skills and personality to build a windbreak forest in the Sahel. That does not mean that one type of action against climate change or poverty is hurting wrong.
Some aspects of rationalism, longtermism, etc. are hostile to this kind of thinking but other parts of EA seem to be open to dividing donations between a few strategies. But I didn't know anything about Longtermist EA before fall 2022!
For an interesting counterpoint, a friend on Facebook shared this article which pushes back on the TESCREAL idea by splitting out the component parts and suggests that Gebru etc are joining more dots than actually exist on some of this:
https://medium.com/institute-for-ethics-and-emerging-technologies/conspiracy-theories-left-futurism-and-the-attack-on-tescreal-456972fe02aa
I think there are some fair points in there but I'm not sure I'm convinced as a whole - in that I didn't think all these people and groups were in a single evil cabal anyway, and that my concern is not nefarious scheming so much as wasting vast amounts of money in ways that tackle imagined problems over and above present ones and fail to recognise the actual legal and social adaptations we need urgently. The authors of this piece do recognise that, but I think they brush past the organisational, reputational, and movement scale problems that things like EA have right now.
Some of the apologia from EA advocates I know feels like the Lib Dems who in 2015 were just outraged that the electorate was rejecting them because they had tried their hardest to stop Tory overreach: but a lot of the criticisms of them, and especially of some of their most prominent advocates, were also substantially true and people knew it, so that wasn't going to be enough for people to trust them again for a while.
My first thought is that in the part I know best (the rationalists and economists with blogs), sneering at people who see connections and mutual influence as conspiracy theorists is dead wrong. It's a fact that leading thinkers hung out on the Extropians mailing list in the 1990s and later became publicly enthusiastic for ideas which former buddies had promoted in the 1990s! It's a fact that of the three most prominent rationalist bloggers who are not economists, two have expressed enthusiastic support for scientific racism (and a very young Bostrom did so, and his recantation suggests that he is still interested, just not sure about the 'genetic' part). It's elementary that people often adopt ideas from their friends, family, and lovers; one basic form of political lobbying is to organize nice meals or parties, invite fellow travellers and the people you want them to influence, and let nature take its course. Not every person in this space has the same terrible ideas or sinister goals, and it's not reasonable to ask a member of the public to keep straight the difference between Eliezer Yudkowsky and Robin Hanson.
Edit: A random look at Caroline Ellison's Tumblr (https://web.archive.org/web/20191224190021/http://worldoptimization.tumblr.com/) showed me a post which begins "btw a link from SSC sent me down a rabbit hole of reading (scientific racist blogger) hbd chick and related links lately and the whole intellectual edifice is pretty fascinating. I don't have a great summary, and epistemic status tentative so you should just read the blog and follow the rabbit hole yourself. "
I cannot speak to longtermism, since I have not read key works and do not know the key figures, so I can't say how well the Internet criticisms represent it. There may well be some conspiratorial thinking in Torres and co's belief that small passages show a hidden agenda. But I have seen Caroline Ellison's tumblr blog computing the suffering of fish on a scale with the suffering caused by specific human diseases, and everything I know about singularitarianism screams "run. Do not engage. It's a trap for minds like yours, in the way that a confidence game is a trap."
OTOH, I agree that I have not heard of any major sinister Transhumanist groups and I have never heard of Cosmism or Extropianism. I also agree that some of the critics have a beef against utilitarianism, which can be a useful ethical framework if you don't go too far.
"Perhaps the best example of grounded, careful thinking on these topics is Nick Bostrom's book 2014 Superintelligence," Ceglowski is not an intellectual but his takeaway from that book was that it was designed to catch people with a weakness for clever ideas.
"An attack on rationalism has to be understood in light of the postmodernist critique of rationality." No, most of us who run screaming from those people (and especially from the LessWrong crowd) are scientists and makers who, as Evans said, deeply distrust their building of castles on the clouds before they set a single stone upon a stone on earth. I agree that its common for people in these spaces or adjacent ones (eg. Michael Shermer or Richard Carrier) to ignore Hume and argue that the one true morality can be deduced from the study of the world by formal logic. But Kant and Hume are not postmodernist thinkers!
"we see its connections to reactionary (as opposed to liberal or centrist) political views as exaggerated" The Rationalism of the Rationally Speaking podcast is full of young sheltered Right Libertarians, polls of the SlateStarCodex readers show that active commentators skew right or right libertarian while readers are more like a sample of the US population. See also Robin Hanson and the Marginal Revolution guy, or Peter Thiel's funding of MetaMed and Yudkowski's foundation (this essay describes Thiel as a Transhumanist but he has funded Yudkowski's flavour or rationalists).
One of the key points of Dan Davies' Lying for Money is that fraudsters want you to be overwhelmed with a million details, while successful prosecutors want you to focus on the broad outlines of the scheme. I think that is what critics like Timnit Gebru or Maciej Ceglowski are doing. It's fair to tell the average person to run screaming whenever a 'rationalist' or longtermist wants them to do something in the real world. It's not reasonable to spend endless time arguing semantics about individual thinkers' politics or exactly which of these terrible ideas they support at a given time. I think they are tarring the innocent and the guilty with the same brush, but I think they would say "yes, this is unjust, but it will force the decent people to distance themselves from the rationalists and longtermists if they want to get anything done offline."
Edit: Hughes, the author of the Medium piece, co-founded his organization in Boston (yellow flag for this family of ideas; it's not as infected as SoCal, NYC, or Oxford, but close) with Nicholas Bostrom (red flag!) https://en.wikipedia.org/wiki/Institute_for_Ethics_and_Emerging_Technologies
Edit: so TL;DR I think that Evans' approach to these spaces (https://aiascendant.substack.com/p/extropias-children-chapter-1-the-wunderkind) as a social space where people adopt each others' unusual ideas and support each others' hilariously doomed projects is the best I have found; maybe supplement it with one of the early criticisms of the singularity or the AI as god by a pop culture figure such as Doctorow. Don't let the drama and the "he said, she said" distract you from the key point that many rationalists and longtermists support some disturbing things and have a history of failure whenever they try to do anything other than post on the Internet and hold geeky social events.
On a nerdy level, apparently the late Daniel Ellsberg had some writing on the limits of quantified probability as a model for rational decisions, eg. in a paper "Risk, Ambiguity and the Savage Axioms." The rationalists are prone to doing arithmetic on made-up numbers as if it proved anything, and to waving around the term "Bayesian" when they mean "updating your opinions as you learn new things." One reason I thought the bed-net EA worked relatively well is that they had actual numbers to calculate on and seemed to put thought into the source of those numbers.
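For anyone who has not met the jargon, a minimal sketch of an honest Bayesian update (all numbers invented), and of why arithmetic on made-up numbers just hands you back your assumptions:

# conjugate beta-binomial update: Beta(a, b) prior over the probability of an event
a, b = 2, 8  # a made-up prior with mean 0.2

def update(a, b, successes, failures):
    # evidence shifts the posterior toward the observed frequency
    return a + successes, b + failures

a1, b1 = update(a, b, successes=70, failures=30)
print(a1 / (a1 + b1))  # ~0.65: a hundred real observations mostly swamp the prior

a2, b2 = update(a, b, successes=0, failures=0)
print(a2 / (a2 + b2))  # 0.2: with no data, the "posterior" is just the prior restated

The bed-net people are in the first situation; anyone "estimating" the probability of a speculative risk is in the second.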
If they let a lot of that money be diverted to buying castles and paying friends to sit in a room imagining how to deal with malign superintelligences, that seems like a fair criticism (especially if donors thought they were contributing to bed-netting and actually got a bunch of autodidacts with dreams about things which might happen in the future). And so does the connection with Sam Bankman-Fried's FTX fraud.
Back in April 2018, SBF barely survived accusations by the board of Alameda that he was a serial liar who refused to implement basic corporate controls against fraud and embezzlement and had sexual and romantic relations with subordinates (the board and half the staff left instead). People involved at the time say that MacAskill and the rest of the Oxford EA movement were thoroughly informed but still accepted SBF's money and spoke in public about how wonderful his businesses were: https://time.com/6262810/sam-bankman-fried-effective-altruism-alameda-ftx/
Edit: economist John Quiggin talks about what happens if we try to maximize average utility rather than total utility (because total utility is what leads the Longtermists to dream of conquering the universe and turning the solar system into a giant computer simulating minds, just as minimizing suffering leads to the conclusion that humanity should end itself) https://johnquiggin.com/2023/07/30/against-the-repugnant-conclusion/ There is an Isaac Asimov story where humanity has reduced the biosphere to humans, algae tanks, and a few lab animals, and someone notices that they could reach perfection by euthanizing the animals and authorizing a few more human births.
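The arithmetic behind that dispute fits in a few lines (a toy model, every number made up):

# total utilitarianism prefers a vast population of barely-tolerable lives
# (Parfit's "repugnant conclusion"); average utilitarianism does not
worlds = {
    "small and happy": {"population": 1_000, "utility_each": 100.0},
    "vast and miserable": {"population": 10_000_000, "utility_each": 0.1},
}
for name, w in worlds.items():
    total = w["population"] * w["utility_each"]
    print(f"{name}: total={total:,.0f}, average={w['utility_each']}")
# totals: 100,000 vs 1,000,000, so the vast miserable world "wins" on total utility
# averages: 100 vs 0.1, so the small happy world wins on average utility

Once you let the population variable grow without bound, the total-utility column dominates anything happening to people alive today, which is how the Longtermists' sums always come out in favour of trillions of hypothetical future minds.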
Interesting thread. Behind the Bastards did an SBF update recently (https://youtu.be/S54GrXDjokg?si=snIvNiDfdW9B279q), which was my onboarding for Effective Altruism.
I don't find it very surprising that there is a connection there with Utilitarian thinking, through his parents. While I do think we owe some debt to consequentialist arguments for the improvement of social equality, I'm very suspicious of the attempts this tradition makes at 'solving' morality like a mathematical equation. We cannot help but fall into all kinds of absurd paradoxes once we seek to ground morality in outcomes alone.
This is certainly an interesting topic I've vaguely encountered floating around. To me, the main concern is that this vague constellation of beliefs is held by, and seems to be viral among, people with such an outsized degree of wealth. These people have the power to waste (not only their own) time, energy and resources on what seem to me to be ultimately doomed ventures, based in many cases on false premises. For example, transhumanism is based on technology that is so nascent at present that it's barely even experimental, and singularitarianism requires that the pace of technology improvement is exponential, despite (in my opinion) the distinct observation that it is stagnating. By fuelling research into such sci-fi technologies, we lose the opportunity to instead spend those resources on things that would help people more immediately, with less risk, and as a sure deal. It's also concerning that the desire for such futuristic tech is causing corners to be cut, such as the tragic cruelty shown at Elon Musk's Neuralink.
However, I get the impression that while these beliefs are truly held by many, they also provide a utilitarian purpose of driving hype in technology to the end of lining the pockets of their adherents. Sam Altman may well believe that the singularity is coming, say, but I rather get the impression that hand-wringing over the field of "AI" requiring regulations is more to do with driving up the public perception of OpenAI's chatbots than it is genuine concern over the future of humanity. I'm not overly familiar with Effective Altruism, but I had the feeling that it was always more about PR and justification of amassing vast wealth than it was about actually helping people.
Does anyone else know the story of Matthew White's atrocitology? He is a librarian who ran a classic 90s and oughties website where he collected statistics about historical disasters from books and encyclopedias. But he was an uncritical compiler, so most of his sources were just books by people who had read earlier books and picked a number which felt right: garbage in, garbage out. Most of the books he used did not have data, did not have a rigorous method for estimating, and did not have a rigorous method for choosing between earlier estimates; they just made up a number or picked between earlier numbers.
Steven Pinker loved the site when he was writing The Better Angels of our Nature (published 2011) because it was full of numbers and citations and he wanted numbers and did not care how they were created. He got White a contract to turn his website into a book with a big trade publisher (https://en.wikipedia.org/wiki/The_Great_Big_Book_of_Horrible_Things), and he used White's numbers in his book. If you knew that part of the early Internet, you knew to be very skeptical of this big data which was bad data (http://publishingarchaeology.blogspot.com/2018/12/when-big-data-are-bad-data.html). But Pinker's books probably reach 1000 times as many people as careful scholarship with reliable methods.
I have briefly talked about how Pinker seems to toy with race 'science' and has occasionally supported people who push race 'science', although again I am not doing the strings-and-pushpins-on-a-wall thing. But this shows how an obscure person with red flags (White's lack of achievements endorsed by trained historians, the racists' racism) plus a famous person with credentials and Old Media connections equals a misinformation explosion.
Edit: "According to White, the Atlas (his webpage that includes the atrocity statistics) been used as source by many authors, including in 377 books and 183 scholarly articles" ia cthulhu cthulhu ftaghn
Edit: In his 2011 book, White estimated that Genghis Khan killed 40 million people in China (about 2/3 of his total estimate from WW II, which also involved mass slaughter in China) based on a book by McEvedy and Jones which is such bullarmadillo that there are articles dedicated to explaining the problems. ia cthulhu cthulhu ftaghn
Molly White's latest post on the trial of Sam Bankman Fried is clear, focused, and without too many random acronyms or angry asides https://newsletter.mollywhite.net/p/the-fraud-was-in-the-code
I greatly enjoyed that Molly White piece, her writing on the cryptocurrency world in general is well worth reading. It's almost surprising to see such clear and readable code for doing fraud---you'd expect something more obfuscated than an "infinite money" flag.
On the topic of Sam Bankman-Fried, the linked tweet in https://quomodocumque.wordpress.com/2023/10/08/underestimating-shakespeare-and-real-numbers/ to me exemplifies the nonsense within the over-quantification that comes up in "rationalist" spaces. It's utterly alien to me that the unlikelihood of Shakespeare being literally the greatest writer who has ever lived or will ever live is taken as a reason to dismiss his works. As the article itself suggests, I don't think we can assign a real number to "literary greatness," and even if we could, there would still be plenty of reason to read works not by the Empirical Greatest Writer of All Time.
I can't see that quote because it's a screenshot behind a scriptwall on twitter and not archived on nitter.
One of the many reasons I wonder what happened to education in the USA is that "garbage in, garbage out" is classical computer science (Charles Babbage is supposed to have made a joke about it, and technicians were including it in their 'introduction to electronic computing' for journalists in 1957). The rationalists often have little or no formal education in probability, statistics, or computer science, and the crypto grifters often know even less about the technical side of blockchain, but Sam Bankman-Fried is supposed to have an undergraduate degree in physics. Making up numbers to express opinions in pseudo-statistics is fashionable among the rationalists, and SBF was trying to appeal to them or blend in with them.
Which gets us back to how these communities are interlinked, channel money and influence, and often have a front which looks mostly harmless and a covert wing which is dubious or criminal. Where most rationalists and effective altruists stand on a "useful idiots to swindlers" spectrum is a judgment call, but there are groups at the core of these spaces with clear, sinister goals.
Quote from: dubsartur on October 13, 2023, 05:59:35 PM
I can't see that quote because it's a screenshot behind a scriptwall on twitter and not archived on nitter.
Voila, here it is :)
(https://i.imgur.com/KaMigfx.png)
People in these weird Internet communities tend to simultaneously dismiss the value of education and worship pop science books, so suggesting that going to university makes you a better writer would be controversial there. But SBF was just a clever superficial dude, high on prescription stimulants, running a bunch of fraud and embezzlement, so I'm not sure how much thought went into that tweet.
In general, I see tweets as the equivalents of what a drunk friend says in private at 11 pm.
Edit: Scott Alexander essay from 2017 describing how many US persons in finance, competitive exam-taking, etc. want Adderall; others in the rationalist subculture experimented on themselves to self-optimize with various substances (https://rationalwiki.org/wiki/Nootropics) https://slatestarcodex.com/2017/12/28/adderall-risks-much-more-than-you-wanted-to-know/ (Alexander did not discuss the common side effect of impulsivity, just uncommon side effects such as psychosis and Parkinson's disease, and he said that in his professional opinion as a psychiatrist Adderall will help most people focus.) FTX's pet psychiatrist had some weasel words about how his patients among the staff used these drugs at similar rates to other finance workers.
David Gerard misrepresents this in service of a higher truth as "he (= Scott Alexander) told the rationalist subculture it made you into a financial genius" ("focus better" is not the same as "genius"). IME he does this a lot; he knows these people's Internet posts and the Internet gossip about their private lives, but you can't trust him on the details.
I hope the fact that a subculture that grew out of Southern California encourages experimenting with mind-altering substances and unconventional sexual relationships does not surprise readers. Or that this movement has gurus and Leaders who get accused of taking financial and sexual advantage of their position.
Edit: one of the American tech billionaires just successfully seeded a manifesto (https://a16z.com/the-techno-optimist-manifesto/) onto Hacker News, only for the posters there to notice "hey, he cites the Futurist Manifesto as a model and a lot of their followers became Fascists! And if 'Our enemy is the ivory tower, the know-it-all credentialed expert worldview, indulging in abstract theories, luxury beliefs, social engineering, disconnected from the real world, delusional, unelected, and unaccountable – playing God with everyone else's lives, with total insulation from the consequences.' how is that different from venture capitalists deciding who to shower with money and connections and publicity because they worked for the right company in 1997?" So even the wannabe Silicon Valley startup people are getting the message.
One useful thing from Gerard (https://davidgerard.co.uk/blockchain/2023/10/26/the-beautiful-mind-of-sam-bankman-fried/): SBF spilled his guts in a text thread with journalist Kelsey Piper (https://www.vox.com/future-perfect/23462333/sam-bankman-fried-ftx-cryptocurrency-effective-altruism-crypto-bahamas-philanthropy) (currently at Vox). Piper promptly published part of the exchange.
Piper was one of Scott Alexander's examples (https://slatestarcodex.com/2017/12/28/adderall-risks-much-more-than-you-wanted-to-know/) of someone who desperately needed Adderall but was kept from it because people with ADHD have trouble completing long bureaucratic procedures, and he had her on his blogroll under Rationality. So she seems to have been part of the SoCal rationalist subculture in 2017. And SBF has complained that he is not getting the heavy dose of Adderall which he was accustomed to. So SBF texted indiscreet things to a fellow traveller who was also a journalist and shared his interest in prescription stimulants, not to a random journalist.
This is a small densely-connected space and the connections are not always obvious to a casual observer even if they are documented in endless detail online. I already discussed how little interest the media had in exploring what Dominic Cummings' connections to the rationalists implied about his ideas and associations, because those were blog posts not tweets.
Edit: I also note that the online-and-tabloid, Moscow-based newspaper The eXile (https://en.wikipedia.org/wiki/The_eXile) (1997-2008) had both journalist Matt Taibbi (https://en.wikipedia.org/wiki/Matt_Taibbi) (who I don't know, but he seems to have slipped into that Glenn Greenwald space of 'calls himself a lefty but really likes Putin, Assad, and conspiracy theories about lefty Anglos') and pottymouthed commentator John Dolan (https://en.wikipedia.org/wiki/John_Dolan_(writer)) alias Gary 'War Nerd' Brecher (today, YouTuber LazerPig has a similar voice). Dolan supposedly taught as an adjunct English professor in BC circa 2006-2008 and was interviewed by understimulated Internet racist Steve Sailer in 2003 (https://web.archive.org/web/20170808073034/http://www.exile.ru/articles/detail.php?ARTICLE_ID=7021). Around that time Christopher Wylie (https://en.wikipedia.org/wiki/Christopher_Wylie), the Cambridge Analytica whistleblower, was leaving school in the same city.
I genuinely wonder if a prosopography of internet thinkers and movers and shakers of the 2000s-2010s would actually be a useful piece of research work for someone to build (I'm not volunteering, I should hasten to add, though it's the sort of project where I'd happily give thoughts on the data modelling).
Quote from: Jubal on October 27, 2023, 12:21:08 AM
I genuinely wonder if a prosopography of internet thinkers and movers and shakers of the 2000s-2010s would actually be a useful piece of research work for someone to build (I'm not volunteering, I should hasten to add, though it's the sort of project where I'd happily give thoughts on the data modelling).
I think some people tried something like that with their projects on RationalWiki, but they didn't have any thoughts on visualization or organization, and they didn't have a Rankean goal of just figuring out who was connected to whom. Jon Evans, "Extropia's Children" (2022) tried to be a prose account in the style of Nevala-Lee's Astounding!
Edit: the Pinkerite blog has an essay with diagrams on what it sees as Steven Pinker's racist and far-right connections (they make a good case that Pinker is intrigued by race 'science' and quietly boosts some people who believe in it).
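If anyone did want to build that prosopography, the data model is the easy part; sourcing every edge is the work. A minimal sketch (a hypothetical schema, not any existing dataset):

from dataclasses import dataclass

@dataclass
class Connection:
    person_a: str
    person_b: str
    kind: str    # "co-posted", "funded", "blogrolled", "quoted", ...
    venue: str   # mailing list, blog, forum, company
    year: int
    source: str  # a citation for the claim, crucial given all the gossip

# e.g. Connection("thinker A", "thinker B", "co-posted", "Extropians list", 1996, "archive URL")

From a table like that you could generate the ball-of-string diagram mechanically instead of arguing from impressions.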
Piper's Vox article certainly did not spell out that she had social connections to Sam Bankman-Fried, although she describes her section of the newsroom as "effective altruism-inspired".
I'm just trying to understand how these weird Internet people were so much more successful at offline networking and getting resources for big projects than my weird Internet people. Ok "do crimes" and "lie a lot" can help for getting access to resources!
Edit: a few years ago I would have pegged all these spaces as somewhere between the Los Angeles SF Society in the 1960s and the kind of people who hang out in comment threads, as big on talk and unconventional opinions but mostly harmless. There are worse things in the world than shy people with opinions I think are bad. But it seems like they got their hands on some levers of power.
Edit: anyone familiar with the rationalists sees Piper's page at Vox (https://www.vox.com/authors/kelsey-piper) and her Twitter account (https://nitter.net/KelseyTuoc), sees "Tyler Cowen Bryan Alexander AI risk existential risk YIMBY effective altruism," let alone the endorsement by Scott Alexander, and hears "here be a rationalist." And that is obviously relevant to a story where an EA- and rationalist-linked fraudster sends her texts confessing to misdeeds. But it took another clever angry lonely person in Australia to spell out the connection.
Edit: David Gerard's blog post has a beautiful sad moment where, after criticizing rationalists for disagreeing with (all?) psychologists about IQ, and criticizing the rationalists as lacking offline achievements, he has to criticize psychiatrist Scott Alexander for talking about Adderall in the wrong way. To my knowledge Gerard has no verifiable expertise in psychiatry; he is just a sysadmin and journalist. Gerard thinks offline achievements and expertise are valuable until someone he hates has them. (And because he obsessively follows the rationalists' internet posts but seems to rarely attend their parties, he blames a lot on Internet posts and less on the fact that drugs are popular in the USA, Adderall and cocaine are especially popular with bankers in the USA, and "it's medicinal!" is a really popular excuse for getting zonked.)
I have seen this kind of pseudo-skepticism, where people cite 'the authorities' when they mean Wikipedia or what a friend told them experts think, a lot since smartphones came out. Gerard rightfully criticizes the rationalists for reading pop science books and thinking that makes them experts, but that does not stop him from pronouncing about the g/IQ construct with the confidence of Eliezer Yudkowsky.
Patrick McKenzie has a good long rant about how on one hand cryptocurrency scams grow out of the small worlds of Internet subcultures and colourful characters, but on the other hand they spread through Anglo institutions like cholera through an army camp. Neither journalists, nor police forces, nor financial regulators did much to stop them; two Canadian pension funds invested sums in the $100 million range in crypto companies which any graduate of an accounting program could see were frauds. Different fraudsters tried to buy small Pacific islands with UN membership, and got both major US parties to start proposing legislation written by the fraudsters by convincing each party that the fraudsters would give them lots of money and let them crush the other party. Even now, the CBC of all places is printing stories full of quotes from crypto advocates which don't mention the stream of bankruptcies and convictions for defrauding people of billions of dollars (this story (https://www.cbc.ca/news/politics/cryptocurrency-political-conversation-waning-1.7011672) was published one day (!) after one of those convictions of a fraudster who cost a Canadian pension fund big money).
So on one hand there are stories about individual people, like "had Kelsey Piper had informal chats with SBF before he sent her those indiscreet texts?" or "how close was Dominic Cummings to these spaces?", but on the other hand there are stories of how some confidence men with the right lingo can run wild through established institutions in the rich Anglo countries. Most of us as individuals just have to know to keep away from anyone associated with rationalism, LessWrong, longtermism, longtermist Effective Altruism, blockchain, cryptocurrency, NFTs, etc., but our institutions struggle to do that much.
https://www.bitsaboutmoney.com/archive/a-review-of-number-go-up-on-crypto-shenanigans/
Edit: Molly White has a handy list of some of the aspects of crypto fraud which are open for investigation. There are 80 billion Tether tokens in circulation, each theoretically worth 1 USD, but since a theft in 2016 (!) they have not had enough actual money or money-like assets to redeem them. Some was stolen, some was embezzled, some is in bank accounts frozen by governments that don't like money laundering, and some may have never existed at all: their current wording gives them room to back Tether tokens with other cryptocurrency or equity in other crypto companies rather than boring old US dollars that their customers can pay their boring old taxes and mortgages with. And this is the version of events that they have more or less admitted in court; it's possible that the rot goes further back, because crypto companies are run by incompetent people and crooks, and crypto software makes it easy to, for example, lose large amounts of cryptocurrency on an old hard drive.
https://newsletter.mollywhite.net/p/the-stones-left-unturned
If you assume that most of those 80 billion Tethers are not actually backed with anything you can sell for USD, then Tether alone could be a USD 80 billion bezzle (the gap between when the embezzler takes the money and when the victim realizes it is missing), which is two to four Madoffs' worth. And when it collapses there will be fireworks.
The board of OpenAI just removed its CEO, a random billionaire. The chairman of the board then quit in protest. The board consisted of six people: Greg Brockman (chairman and president), Ilya Sutskever (chief scientist), Sam Altman (CEO), Adam D'Angelo, Tasha McCauley, and Helen Toner. Of them, McCauley and Toner are involved in rationalism or effective altruism, and Sutskever throws around the term 'AI safety', which can be a LessWrong term of art for their specific dreams of how machine intelligence could go bad.
Microsoft bought 49% of OpenAI for USD 13 billion. So Effective Altruists and LessWrong rationalists made up about half of the board of a company implicitly valued around USD 26 billion, in a sector which could have very big consequences (OpenAI's charter (https://openai.com/charter) presumes that "artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work" is coming in the next few decades).
NYT (https://www.nytimes.com/2023/11/19/business/media/openai-sam-altman-why.html) - Wired (https://www.wired.com/story/what-openai-really-wants/) - Castor and Gerard (https://davidgerard.co.uk/blockchain/2023/11/18/pivot-to-ai-replacing-sam-altman-with-a-very-small-shell-script/) (with G's trademark personal attacks, gossip, and statements that don't match my observations of these movements online)
Yeah, I think a lot of how much this particular tiff matters is quite dependent on how much you believe maximalist claims about AI's capabilities. From what I'm reading it sounds like there seems to be a bit of a struggle between the people who want to run OpenAI as a Rationalist/EA nonprofit (possibly aligned to Sutskever) and those who want to run it more like a standard business (possibly aligned to Altman).
At the moment I'm definitely seeing LLMs and generative models being used more and more by people in general and by colleagues in neighbouring fields, but most of what I'm hearing about in terms of job losses is in things like content writing, and I think that may be an outcome of the SEO-isation of content anyway, something I wonder if there'll be a backlash to as the current generation of internet systems ends up creaking under the strain of autogenerated garbage.
I see another AI lead has resigned (https://www.bbc.com/news/technology-67446000) over the exploitative nature of using copyrighted materials in training sets.
Quote from: Jubal on November 19, 2023, 11:28:01 PMYeah, I think a lot of how much this particular tiff matters is quite dependent on how much you believe maximalist claims about AI's capabilities. ...
I am interested because it shows that Rationalists and Effective Altruists now control significant money and power, and because LLMs are already disrupting things (eg. Clarkesworld magazine had to close submissions due to a deluge of LLM spam which overwhelmed their slush readers, and it's not clear that essay assignments have a future).
I don't think the Internet needs my speculations about the politics between people I first heard of today, or about the future of spicy autocomplete.
The problem with copyright is that copyright is a racket too. I have never seen a justification for copyright longer than 20 or 30 years after the creator's death; it's just an excuse for corporations to collect rent. But it's concerning that these programs often spit out their original training data, and that some are designed to take all the money and recognition out of things that some people really love to do.
Quote from: dubsartur on November 20, 2023, 02:05:59 AMThe problem with copyright is that copyright is a racket too. ...
Agreed on the general awfulness of copyright, though I think a lot of the biggest negatives of AI are avoided if it can't train on reasonably copyrighted (author still alive or very recently deceased) content, because an AI whose data set is mostly ca. fifty years old is not going to produce the kinds of outputs that let it pass, for nonspecialists, as an up-to-date artist, journalist, or scientist. I think I'd find AI a lot more fun if what it could mostly produce was rehashed variants of 19th-century book illustrations or even 1940s-era comics, the kind of stuff that is or should be in the public domain anyway, and where there isn't an issue of people providing vast quantities of unpaid labour to the AI developers.
This essay is billed as a criticism of Effective Altruism but is really a criticism of Utilitarianism. https://www.theintrinsicperspective.com/p/why-i-am-not-an-effective-altruist
I still say that if you look at things like We Charity, "raising awareness," or scams run by churches and gurus, you have to ask "couldn't we be doing this better? shouldn't we ask to verify the results? we say we want to reduce suffering but do our actions help to achieve that?" And in actual emergencies, nobody starts raising deep philosophical questions about the ethics of triage, they just start allotting their resources with the desperate efficiency of Homer's widow spinning someone else's wool to feed her children.
Likewise, every day we make decisions which will create suffering somewhere far enough away that we don't have to think about it. Every time our local council votes on bus lanes vs. bike lanes vs. automobile lanes it is making a choice like that, because commutes take time, and people die in car crashes, from air pollution, and from not being able to get to a hospital quickly enough. Isn't it better to explicitly consider the tradeoffs than to let some rich guy decide on a whim?
Edit: the essayist wants you to know that he was a bold cryptocurrency trader (ie. gambler or fraudster). Ia ia Cthulhu fhtagn.
Edit: another essay in an internet magazine aimed at Americans with university education and above-average incomes https://www.currentaffairs.org/2023/05/why-effective-altruism-and-longtermism-are-toxic-ideologies
'Cecil Adams' of the Straight Dope column (1973-2018) returned in 2023 with a column on longtermism https://boards.straightdope.com/t/straight-dope-1-13-2023-is-longtermism-the-worlds-most-dangerous-belief-system/978173 Apparently some people believe 'Cecil' is a team of writers or a series of writers.
Edit: short life of Napoleon Hill, the early-20th-century swindler and self-help author (specialty: how to get rich quick). Many aspects of his career overlap with dubious people in the spaces in these threads. https://gizmodo.com/the-untold-story-of-napoleon-hill-the-greatest-self-he-1789385645
QuoteThe Royal Fraternity of the Master Metaphysicians was founded by James B. Schafer and is largely forgotten today. Born around 1896, Schafer came from Michigan to New York sometime around 1930 and by the mid-30s had amassed a following through his speeches on the spiritual potential hidden in the material world. He explained to crowds of hundreds at Carnegie Hall each Sunday morning that the human mind had the ability to change everything around it. If you could simply imagine it, those thoughts could become real. By some estimates Schafer counted nearly 10,000 people amongst his followers by the end of the decade. ...
Schafer's intentions with the cult were unclear. He seemed to believe every word he breathed, but he also saw that his status afforded him access to a great deal of money and women. There's a strange psycho-sexual component to the Master Metaphysicians that's always hinted at in news articles of the day, but never said outright.
An animal-rights activist has alleged that she had an affair with Peter Singer, the utilitarian philosopher with many crappy opinions, and that he had a series of physical relationships with female co-authors. This is obviously a deeply personal conflict (the plaintiff sued in 2022 over events in 2002-2004, and represents herself) but it adds to the suspicion that many influential men in this space are in it for the chicks, and that some of the public controversies are shaped by private interpersonal drama.
Singer was one of the early advocates for Effective Altruism, although he seems to have aged out of the spotlight by 2022 (as a famous tenured professor at a rich private university he does not need the money or attention).
Teacher-student relationships (not just mentor-mentee relationships) seem very common between tenured faculty and students at wealthy US universities.
Docket 22CV01792 at https://portal.sbcourts.org/CASBCIVILPORTAL/ Many content warnings (cosmetic surgery, messy breakup with someone who likes to share crappy opinions)
Quote from: dubsartur on January 04, 2024, 06:48:59 PM
Teacher-student relationships (not just mentor-mentee relationships) seem very common between tenured faculty and students at wealthy US universities.
This is one of those things where I'm not sure how much it's the case everywhere, or if the US has a specific problem, or if the US is just more on it with calling out the problem, or what. I've definitely seen a lot of US scholars take a very forceful public stance against staff having relationships with anyone in a study position, whereas it just doesn't seem to be something European academics discuss. That may mean there's more quietly sweeping it under the table, or it may mean there's less of a problem, and I'm not sure what the balance of those things is (though I've heard enough stories from the European side to suspect that there's a lot of sweeping it under the table going on).
Quote from: Jubal on January 04, 2024, 07:13:10 PMThis is one of those things where I'm not sure how much it's the case everywhere, or if the US has a specific problem, or if the US is just more on it with calling out the problem, or what. ...
It is hard for me because I mostly see the version from people who like to share strong opinions on old or social media. My understanding is that universities with such policies imposed them in response to a lot of bad behaviour (and essays by tenured professors insisting that there was nothing wrong with it), and that in practice these policies are more often enforced against TAs than tenured faculty. The social media discourse on the topic adds jealousy and discomfort with the fact that humans vary and are not infinitely malleable (it is easier to forbid than to engage with the complexities of a relationship between a yoga instructor and her cutest student, or a 25 year old and a 19 year old). A lot of bad people have discovered that sexually frustrated people are easy to line up behind a CAUSE, so they contrive reasons to sexually frustrate junior members of their community. And a certain kind of bad person learns to climb in an organization both to get sexual access to more people and to use power in the organization to cover up any complaints ("the most active people in our community spend so much energy getting together, breaking up, and talking about it that our official activities are stagnant" is a common complaint).
I also believe that some of the absolutist discourse during the Internet feminism wars grew out of cases like the ones discussed here, where a powerful person was using his place in the community to get laid and either treating partners badly or favouring them, and nobody with concerns dared name names, only speak of general principles.
An American financier named Eric Falkenstein thinks that the California Effective Altruism movement of young, unattached, ideologically committed people was used to create a network of trusted people at key international locations (http://falkenblog.blogspot.com/2023/03/ftxs-mythical-origin-story.html), in the same way that, say, Armenians across the 16th/17th-century Old World or Chinese diasporas in Southeast Asia dealt efficiently with each other across long distances because they spoke each other's language, were married into each other's families, worshipped the same way, and so on. Wikipedia says that Falkenstein converted to Christianity at the age of 51 (https://en.wikipedia.org/wiki/Eric_Falkenstein), which might be why he does not use the term 'affinity fraud' (when someone joins a church or a club, makes friends, convinces them to invest in a venture, and runs off with the money).
A casual reader might get the impression that William MacAskill the philosopher was part of the California branch, but as far as I know he never lived in CA and just saw fellow travellers at events.
He found an FTX white paper from around 2019 on how to spot fake trading, which is amusing.
An (rolls dice) effective altruist from (rolls dice) New York with a background in (rolls dice) trading assets at Jane Street has written a longform retrospective on SBF which starts "anyone could have been fooled!" but then moves on to "wait, after SBF offered me a job, after one conversation with someone familiar with financial fraud I had several dozen questions for him, and the first time I talked to a friend outside the world of finance he said 'this business sounds like a scam.'" It does not ask why effective altruists and LessWrong rationalists keep being involved in major frauds, scams, and cult-like movements beyond "moving people to another country, working them long hours, and encouraging them to date each other makes it easy to manipulate them." https://asteriskmag.com/issues/05/michael-lewis-s-blind-side
After the collapse of MetaMed (the startup which promised to revolutionize medical care through the power of LessWrong Rationalism!) Sarah Constantin wrote essays like: https://srconstantin.github.io/2017/08/08/the-craft-is-not-the-community.html
QuoteIt seems to me that the increasingly ill-named "Rationalist Community" in Berkeley has, in practice, a core value of "unconditional tolerance of weirdos." It is a haven for outcasts and a paradise for bohemians. It is a social community based on warm connections of mutual support and fun between people who don't fit in with the broader society.
We've built, over the years, a number of sharehouses, a serious plan for a baugruppe, preliminary plans for an unschooling center, and the beginnings of mutual aid organizations and dispute resolution mechanisms. We're actually doing this. It takes time, but there's visible progress on the ground.
I live on a street with my friends as neighbors. Hardly anybody in my generation gets to say that.
What we're not doing well at, as a community, is external-facing projects.
I have heard the same kind of phrasing from people in other geeky cultures which emerged out of California, such as the Society for Creative Anachronism. And the way these communities have sometimes ended up covering for members who commit violent crimes, let alone a bit of embezzlement, has been written about elsewhere.
Edit: she has another post from 2017, "EA has a Lying Problem": https://srconstantin.github.io/2017/01/17/ea-has-a-lying-problem.html
Quote
if there are signs that EA orgs, as they grow and professionalize, are deliberately targeting growth among less-critical, less-intellectually-engaged, lower-integrity donors, while being dismissive towards intelligent and serious critics, which I think some of the discussions I've quoted on the GWWC pledge suggest, then it makes me worry that they're trying to get money out of people's weaknesses rather than gaining from their strengths.
I think that somehow these movements were good at creating both online spaces and social scenes in key areas such as Oxford, Greater NYC, and the SF Bay Area (I did not know about Berkeley specifically). I have to be honest that this kind of commune culture is totally beyond my experience. But it would be relevant to know (for example): did Dominic Cummings just read their web postings, or was he part of the face-to-face culture? And how did this geeky California community end up controlling real money, when the Los Angeles SF society mostly just held meetings and argued with each other? Close-knit nerdy communities have been full of drama since Plato died and his students had to decide who was in charge of the Academy, or the Pythagoreans tossed someone off a boat for proving there are irrational numbers.
A software person in the USA just told his followers that big parts of this (gestures to the thread) are just a typical California apocalyptic cult as has been common since the 1930s. That person has a cryptocurrency address and wants you to know that spicy autocorrect will change everything for the good as creatives become AI-feeders. A typical California Ideology is that if we turn everything into data and feed it into the computer our problems will be solved, and if actually existing computers don't seem so helpful we just need to give them more power.
So there are a lot of messages about the impending doom or rebirth of the world circulating in parts of these spaces, and someone can reject one of them ("my company which is currently raising funds with several well-known VC firms is not building Skynet" or "anomalous sensor readings on classified hardware are neither aliens nor angels") but fall for others.
The NXIVM (https://en.wikipedia.org/wiki/NXIVM) cult / self-help movement / pyramid scheme was also based in New York (near Albany, not NYC) and had many tropes which will be familiar to anyone who has looked into all of this (lots of bad Latin, 'rationality', 'doing well by doing good', a male Leader surrounded by adoring women).
Cathy O'Neil interviewed someone who dropped out of the Effective Altruism movement while still practicing some of the belief system. The interviewee reports that a philosophy professor thinks EA is gaining major influence in UK philosophy departments through donations. Contrast the LessWrongers, whose preferred way to interact with academe is to read pop science books and computer science and psychology papers, and who tend to be dismissive of philosophy, history, philology, etc. https://mathbabe.org/2024/03/16/an-interview-with-someone-who-left-effective-altruism/ (And Sam Bankman "if you wrote a book you made a mistake" Fried is the son of two professors.)
Interviewee, like the people above, noticed that many of the movement leaders are thinky talky people not doers ... except that some EA people now control big money!
Edit: an American on how he spent a year working for a crypto company trying to decide whether it was as scammy and fly-by-night as it seemed, then left when he decided the answer was "yes" https://johnsundman.substack.com/p/100-bafflegab
An ex-Mormon blogger has a long essay looking at David Gerard's very active Internet postings since the oughties. Apparently he is an active Wikipedia editor as well as RationalWiki editor and uses Wikipedia to fight his beefs with other weird Internet people and movements.
So again, be suspicious of anyone who wants you to feel angry or scared about weird Internet people, but some of these movements have acquired offline power and they have a long history that is very well documented.
Some of the ghouls and renfields flocking to Orange Julius want cryptocurrency deregulated (say, a law requiring that any US bank open an account for any legal business registered in its state). The cryptocurrency Tether appears to already be one of the biggest frauds in world history. (They claim to back crypto currencies with other assets, but as the number of Tethers has rapidly increased there has been no visible flow of those meatspace assets into the organization, and their approach to accounting is not one you learn in college.) If they get a foothold at a bank in a rich country again, that will be very bad.
Zeke Faux, "Anyone Seen Tether's Billions?," Bloomberg Businessweek, 7 October 2021 https://www.bloomberg.com/news/features/2021-10-07/crypto-mystery-where-s-the-69-billion-backing-the-stablecoin-tether
Edit: accused CEO-assassin Luigi Mangione was interested in pop philosophy such as LessWrong Rationalism, stoicism, and Effective Altruism https://www.nbcnews.com/news/rcna183996 and in a splinter group called This Part of Twitter https://sfstandard.com/2024/12/10/this-one-internet-subculture-explains-murder-suspect-luigi-mangiones-odd-politics/ Note the description of people they admire as "agentic" and "generative", and Maciej Ceglowski's old warnings that people in Silicon Valley were acting the way they imagined a digital superintelligence would act, and that this was an actual present danger whereas the digital superintelligences are hypothetical. Generative AI will supposedly become useful when it gets agentic models.
I like this quote on the LessWrongers or TPOTers from a journalist: "It's a very verbal culture. People really love to have long-form discussions, state their opinions," she said. "Really, just people who like to talk a lot."
I have talked about how the TESCREAL idea is a bit of a simplification, and how not everyone in these spaces is awful, just strange or naive. But then S. Alexander comes out with a long blog post which starts from Richard Lynn's estimates of national IQs (which just happen to reproduce the racial stereotypes of Lynn's own culture), briefly mentions that Lynn made up his figures for poor countries, and then goes on babbling about how they are nevertheless true and important and only misguided sentimentalists would deny it https://archive.ph/BJ6bd
Mixed in with the Internet racists in the comments are a few things which give hope like:
QuoteI spent 18 months in a country where people are supposed to have an iq of about 70, according to the map. My neighbors and friends were mostly non-literate. They did not seem less intelligent than the people I know in my current (US) neighborhood or the people I grew up with (in the US). Most of them would not have performed well on IQ tests, though. They'd never attended school and had no familiarity with puzzle-solving. This was 35 years ago and most people had not seen movies or even photographs. I remember sitting with one older woman and helping her interpret a black-and-white photograph: this is the arm, here's where it connects to the body, etc. It's hard for people from literate societies with tons of exposure to text & graphical representations to see the extent of the gap.
So many people in these spaces think the race theorists are nuts too, but if you hang out in these spaces you will meet someone enthusiastic and articulate who wants you to hear the good word of 'race science,' just like if you hang out with working-class Brits you will meet some who read the Daily Mail and want you to know about sinister foreigners.
White American men with college degrees tend to be really, really into race-and-IQ, the way white American men without college degrees are into Ancient Aliens and cryptozoology, or European men in 1913 were into nationalism. And as we have seen, these subcultures are hotbeds of race-and-IQ thinking, and the leaders read each other's Internet writing on it even if not all of the believers post about it under their main handle. If you were on the Internet before smartphones you know how this goes: it was one of those things, like Star Wars v. Star Trek or right-Libertarianism or creationism, that a few Americans always wanted to talk about. I have been pitched by at least one believer over email.
Even if you accept the validity of g/IQ (Cosma Shalizi's counter-argument (http://www.bactra.org/weblog/523.html), Nassim Nicholas Taleb's (https://medium.com/incerto/iq-is-largely-a-pseudoscientific-swindle-f131c101ba39), there are others), and you accept that g/IQ measures something like smarts and not just an obscure psychometric construct, the differences on Lynn's chart seem implausibly high, the variation inside Africa surprisingly low, and my own experience traveling and living with people born all over the world does not suggest that people anywhere are much smarter or wiser than people anywhere else.
A long list of ideas from the garbage-bin of history are on the march. They circulated in very influential online communities while it was hard to say them on TV or in the newspaper, and many fellow travelers don't believe they need to hide or dissimulate any more. If you want to know what is happening in the rich Anglo countries it is a very good idea to familiarize yourself with these people and spaces, so you can see when a proposed bill implements a Substack post that paraphrases a crank from the comment threads. British journalism, as far as I can tell, failed to do this under the Tories. Ask yourself whether journalists in your country will do better, or whether you need to spend a weekend reading strange websites.
If you cannot read this list and give pocket bios of a dozen names, you are not prepared to understand what is happening: https://www.manifest.is/#speakers One of the guests had the following people endorse his podcast.
(https://www.bookandsword.com/wp-content/uploads/2025/01/patel-endorsements.png)
His publicist seems to like Broadway musicals. The podcaster studied with Scott Aaronson (https://en.wikipedia.org/wiki/Scott_Aaronson) who is another figure on the edge of these spaces.
(https://www.bookandsword.com/wp-content/uploads/2025/01/puff-piece-on-patel.png)
Note that they are networking face to face, that the speakers are quite young, and that they have Internet writers, (new media) journalists, academics, and people in finance or banking. This is a type of Internet movement that I have no direct experience with, an Internet movement with support from offline money and institutions. And what was basically harmless while it was brilliant sad people writing to other brilliant sad people is going to get very strange as people in charge of billions of dollars or government agencies try to bring it to life.
I don't know how "unschooling centre and mutual aid" goes together with "let's invite all the Internet's tireless racists to give us talks" in Berkeley, but somehow they do.
A trans woman named Ziz founded a cult in Vallejo, CA which was involved in a series of unfortunate events between 2022 and 2025, including six or seven violent deaths (one a border patrol agent shot while trying to arrest a cult member in Vermont, another a landlord whose property some cult members had been squatting on). The cult leader once attended a workshop at the Center for Applied Rationality (the group is based in Berkeley, not far from Vallejo; Julia Galef from NYC is involved), managed to get herself banned, and picketed one of their events with her followers. Her patter and self-presentation as a self-help blogger should sound familiar to anyone who knows the LessWrongers, as should her persona as a Sith Lord fighting non-vegans, much as Yudkowsky got famous for "what if Harry Potter were a child prodigy and scientist?" Rationalist internet personality Aella says that the cultists were well known to the Bay Area LessWronger community.
Per a blogger called Max Read, someone on Twitter claims that one of the cultists killed by police in Vermont was a German citizen who had worked for Jane Street Capital, whose former employees also include Sam Bankman-Fried. See previous discussion about how these movements seem to have deliberately colonized institutions in the Bay Area, NYC, and England to get access to money and recruits.
https://www.sfgate.com/bayarea/article/bay-area-death-cult-zizian-murders-20064333.php
https://www.sfgate.com/bayarea/article/leader-alleged-bay-area-death-cult-faked-death-sf-20066610.php
https://x.com/aella_girl/status/1884481375690223684?mx=2
https://news.ycombinator.com/item?id=42877910
https://maxread.substack.com/p/the-zizians-and-the-rationalist-death
So one step from the Bay Area LessWrongers gets you charities and 'research institutes' with tens of millions of dollars to play with; another step gets you classic California cults. And the public message ("here are the intricate Internet discussions! here are our institutes using other people's money to work on serious-sounding issues! here are our face-to-face events with cute articulate girls and guys! we are cool and powerful, engage with us!") hides a lot of darkness and bad character. These groups are actively recruiting in geeky spaces online, so if you are the kind of person who posts on a forum like this it's wise to know some of the lingo and names which raise red flags (today they seem especially active on Substack, because the Other People who fund that site are the same reactionary tech-bros who are intrigued by parts of all this).
Jon Evans, who wrote Extropia's Children, says that Artificial General Intelligence will exist when "the return on AI inference investment is within 25% of the average return on the equivalent average human salary for at least 50% of occupations, as taxonomized by the US Bureau of Labor Statistics, for which interaction with the physical world is not essential." Does that make sense at all to any of you?
We didn't redirect the postage budget to email servers, or the accounting and clerical budget to Excel and Quicken and Lotus 1-2-3, or the translation budget to machine translation; mostly we stopped paying much for those things and spent the money somewhere else. His definition seems to take for granted the "big chatbot" model where you pour in chips and electricity and data and get out money (all the "big chatbot" companies are losing vast sums of money, but they promise investors that they will make huge profits one day), and the neoliberal model that human beings are engines for producing USD (1990 purchasing-power parity).
It's notoriously hard for the leaders of big organizations to tell the rate of return on internal spending, which is why big organizations are organized more like families than downtowns. I think that businesses often have a task they need doing, and they look for the cheapest and most hassle-free way to complete it. So for many processes they are not thinking about "return on investment" so much as "will electronic labels on the shelves which we can update from a central server be cheaper than printed labels that someone has to walk around and update?"
Occupations and their roles also change with technology (no typesetters before the 15th century or after the 20th century).
Evans is selling his own front end to a chatbot from the San Francisco Bay Area, so his take on all this is not entirely disinterested, and the level of confidence he expresses may be more "American salesman" than "German academic."
Quote from: dubsartur on February 04, 2025, 03:59:53 PMJon Evans who wrote Extropia's Children says that Artificial General Intelligence will exist when "the return on AI inference investment is within 25% of the average return on the equivalent average human salary for at least 50% of occupations..." Does that make sense at all to any of you?
It makes sense in the sense that I understand the meaning of the sentence: I assume the occupations he refers to are e.g. programmers, writers, artists, and his logic is that if, per pound a company spends, an AI replacement returns at least three-quarters of what the same spending on salaries would return, for at least half of those (non-physical) occupations, that's his bar for AGI. So it makes sense in the purely grammatical, parseable sense of "sense". It also does not make sense in the sense that it's utter nonsense as a definition of AGI.
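For what it's worth, here is a minimal sketch of the criterion as I read it, in Python; the occupations and return-per-dollar figures are invented for illustration (Evans publishes no dataset), so this mostly shows how much work the fuzzy terms are doing.
Code:
# A sketch of Evans' AGI criterion as I read it. The occupations and
# return-per-dollar figures below are invented for illustration.

# (occupation, return per dollar of human salary, return per dollar of AI inference)
occupations = [
    ("copywriter",  1.10, 1.05),
    ("paralegal",   1.20, 0.80),
    ("illustrator", 1.15, 1.00),
    ("programmer",  1.30, 1.10),
]

def meets_evans_bar(occs, tolerance=0.25, share=0.50):
    # AI counts as "good enough" at an occupation if its return per dollar
    # is within `tolerance` of the human return per dollar.
    good_enough = [name for (name, human, ai) in occs if ai >= human * (1 - tolerance)]
    # AGI is declared when that holds for at least `share` of occupations.
    return len(good_enough) / len(occs) >= share

print(meets_evans_bar(occupations))  # True: 3 of 4 toy occupations qualify
Every term in that little function (what counts as "return", which occupations make the list, why 25% and 50%) is doing unacknowledged work, which is exactly why the definition parses but does not bite.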
Quote from: dubsartur on February 03, 2025, 02:25:18 AMtoday they seem especially active on Substack
It amazes me how much Substack is nominally set up for the sorts of things I like - interesting deep-dive reporting and blogging across politics and culture and art - and yet somehow seems to attract all the people I'd rather not bother reading, whereas the people I do want to read mostly have regular blogs.
I was trying to remember the group associated with the Bay Area LessWrongers which reinvented Scientology, and came up with Leverage Research and this (rambling, not wholly substantiated, Substack) blog post https://www.aipanic.news/p/when-effective-altruism-takes-a-dark Leverage Research had communal living, a process like Scientology auditing or Maoist self-criticism, warnings that the End Times are Nigh and actions today will determine the fate of humanity, and a Leader who allegedly bedded at least three disciples.
(https://www.bookandsword.com/wp-content/uploads/2025/02/doing-good-better-how-can-i-make-the-biggest-difference-4-638.webp)
The blogger also claims that 'bednetting' Effective Altruism was always just a cover for longtermist/Skynet EA (milk before meat (https://rationalwiki.org/wiki/Bait-and-switch#Usage_by_cults_and_extremists)). She cites a young researcher named Mollie Gleiberman in Antwerp. https://ideas.repec.org/p/iob/dpaper/2023.01.html The people from the movement whom I heard back then seemed sincere about causes like giving money to poor people in Africa and Asia, and showed no sign of worries about AI. I remember criticisms that all their money seemed to go to a few projects, like fighting malaria and parasites in Africa, which had measurable immediate impact. It's good to assume that your enemies are as confused and divided as the people you know. I don't think Yudkowsky talks about racial hierarchies or yells about feminists, and Scott Alexander wants the Internet to know that despite being high status in this sexually-charged subculture he only recently found a long-term relationship.
The blogger has some gossip about Michael Vassar, who founded MetaMed, one of the first LessWrong projects to find wealthy patrons, including Estonian software developer Jaan Tallinn and Xanatos cosplayer Peter Thiel. Tallinn also sponsors the Center for Applied Rationality with $100,000 in scholarships a year, and became a billionaire with help from cryptocurrency speculation. Let people familiar with Scientology read Scott Alexander's description of Vassar and his disciples (https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=4j2GS4yWu6stGvZWs) and ponder (although this is a tiff among former friends, and Alexander is a very clever, very twisted person).
QuoteOccasionally it would also cause full-blown psychosis, which they would discourage people from seeking treatment for, because they thought psychiatrists were especially evil and corrupt and traumatizing and unable to understand that psychosis is just breaking mental shackles.
As someone on Hacker News said, the LessWrong cultural norm of open-mindedly considering any idea (especially any idea in the form of a rambling Internet essay with colourful language) makes them very vulnerable to recruitment by cults and white supremacists. And Scott Alexander has been very interested in putting a selection of ideas from the Alt Right in front of his audience since 2014, and Yudkowsky slowed down once he had groupies and an admiring audience for his speeches. So the idea that this openness was deliberately built into the culture to create a pool of recruits should not be rejected out of hand.
I have talked about projection (every accusation is a confession: Nick Bostrom the longtermist popularized the term 'Pascal's Mugging' for how long chains of logic based on made-up numbers can go astray), about how many people in California are acting the way they imagine a superintelligence would act, and about how a lot of AI-foom discourse is about creating an army of mechanical servants with hidden flaws which let the wise manipulate them. An awful lot of human culture is an excuse to gather money or adoration, and Sam Bankman-Fried admitted that he attached himself to Effective Altruism for the money and kudos.
Why does all of this matter? First, these movements still control billions of dollars and have the ear of oligarchs. Vitalik Buterin gave one EA organization cryptocurrency which, thanks to price fluctuations, they were able to sell through FTX for USD 658m. The collapse of FTX cost them some funds and donors but they have not burned through everything. They are very active on the Internet for young smart technical people, so if you read a forum like this it's wise to learn the signs that something is a LessWronger/longtermist project, so you know what to do when you are talking to a new student group and they start using the buzzwords.
Second, as power centralizes and the media are defunded, the world today is full of shadowy forces manipulating policy and the media. Most of us can't attend a prayer breakfast in Washington or an Institute for Social Justice soiree in Ottawa, but these movements post endlessly online and move their money in the form of cryptocurrency, the least discreet means of payment since we stopped rolling giant stone wheels from village to village. So you can study how some of these people used GiveWell EA as a milk-before-meat scam, or how Timnit Gebru, Emile P. Torres, and David Gerard tried to push a counter-message on the media in the service of a higher truth and their own passions, and then think about how much else of what you read on your feeds or hear on the radio comes out of similar shadow battles that are not planned in public on web forums and in comment threads.
Justin Trudeau attended a conference in Paris on AI policy recently and I promise that representatives of some of these movements were speaking there and looking serious and respectable.
I agree that if money and power were not so centralized, all this would be of no importance to people outside the Bay Area, NYC, Boston, and Oxford, because these people agree that their full program has no appeal to most people. But we live in a world where you can get billions of dollars by impressing a few rich people.
Edit: a characteristic of the early smartphone age is online movements optimized for social media bursting into meatspace, which works differently. We saw that with the Tiki-Torch Brigade and Tumblr-flavoured social justice eight years ago; this is one of the latest outbreaks. And because longtermists and LessWrongers write so much in public (so very, very much), they may be easier to understand than whatever has been festering on whatever replaced 4chan (one theory (https://www.unpopularfront.news/p/groyperfication) that maybe focuses too much on the people who really, really like party politics on their smartphones).
On 29 January LessWronger, Effective Altruist, and (Effective-Altruism-funded) journalist Kelsey Piper https://www.vox.com/authors/kelsey-piper mentioned that she helped James Damore get a job after Google.
https://xcancel.com/KelseyTuoc/status/1884702831451754548#m
QuoteJames Damore was a Google software engineer who wrote a memo arguing that, while diversity and inclusion were good goals, bias was not the main reason there weren't more women in tech, and differences in personality between men and women probably explained a lot of it. ... The internet was outraged. He got fired from Google. And he applied to the tech hiring startup I worked at, Triplebyte, which offered background-blind screening to anyone who wanted to be a software engineer. We really believed in the mission, at Triplebyte. I think I ended up kind of badly calibrated about how earnest to expect people to be everywhere else. We found people working as janitors and line cooks and homemakers who could code, and we got them 6 figure jobs, and we were proud of it. ... I'd been at Triplebyte for like six months at this point, it was my first job after graduation, and I was honestly way out of my lane, but I made a pretty big fuss internally. (It helped that I suspected a lot of people agreed with me but I was a woman and it was safer for me to say it.) I said that we were not in the business of deciding who had good politics, that we shared this country with many people who profoundly disagreed with each other, that companies could assess for themselves if he worked respectfully with female engineers, and that we should put him on the site and let them decide. We did. ... James Damore was egregiously wronged. To my knowledge he's a good software engineer with extremely reasonable, approximately accurate opinions about the reasons there were fewer women in software engineering, which he shared in good faith, and a lot of people who should've known better really did try to drive him out of the industry for it. It was wrong.
Piper consulted with Scott Alexander on how to get prescription medication (I hope she was not a patient, since he shared her name and diagnosis), and was a confidant of Sam Bankman-Fried until he confessed his crimes to her. She is also one of the few women whose ideas Scott Alexander acknowledges. So this is a very small world, and everyone dates, hires, parties with, or exchanges messages with everyone else. The only special thing is that they got the attention of big money, which is now trying to bring some of their weird Internet ideas to life.
Has anyone done a proper network graph of these people?
Quote from: Jubal on February 14, 2025, 10:05:31 PMHas anyone done a proper network graph of these people?
I suppose I could apply for a grant funded by some software billionaire ... oh, bother.
The long blog series on the Extropians was by a fellow traveler but OK, and the paper on the Effective Altruists was by an academic in Antwerp; neither had a visual display, let alone an editable one. The paper clued me in that Piper's job with Vox is paid for by a donation from one of the big Effective Altruism sponsors.
Quote from: dubsartur on February 14, 2025, 10:16:48 PMI suppose I could apply for a grant funded by some software billionaire ... oh, bother.
I wonder if you actually could get a billionaire to pay for it by enthusing about it as a "social chain powered networking display tool" or something! The right buzzwords can get one surprisingly far with these things...
Quote from: Jubal on February 14, 2025, 10:20:15 PMI wonder if you actually could get a billionaire to pay for it by enthusing about it as a "social chain powered networking display tool" or something! ...
Let me shelve that for a moment.
Scott Alexander published http://slatestarcodex.com/2014/09/05/mapmaker-mapmaker-make-me-a-map/ but it's about websites and social media accounts, not who is living in whose spare bedroom. Many of these people had pseudonymous Tumblrs, LessWrong profiles, etc. It's also from before they got very rich people to bankroll them; back then a million or so was big money in these circles.
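If anyone ever does build the graph, the natural data structure is a multigraph whose edges are typed relationships (dated, funded, shared a house, endorsed). Here is a minimal sketch in Python with networkx; the handful of edges are examples already mentioned in this thread, not a vetted research dataset.
Code:
# Sketch of a relationship graph for these spaces, using networkx.
# The edges below are a few examples from this thread, not a complete
# or verified dataset.
import networkx as nx

G = nx.MultiGraph()  # MultiGraph because two people can be linked in several ways

ties = [
    ("Kelsey Piper", "Sam Bankman-Fried", "received his confessional texts"),
    ("Kelsey Piper", "Scott Alexander", "consulted him about medication"),
    ("Michael Vassar", "MetaMed", "founded"),
    ("Peter Thiel", "MetaMed", "patron"),
    ("Jaan Tallinn", "MetaMed", "patron"),
    ("Jaan Tallinn", "Center for Applied Rationality", "funds scholarships"),
]
for a, b, kind in ties:
    G.add_edge(a, b, relation=kind)

# Rank nodes by number of documented ties, most connected first
for name, degree in sorted(G.degree, key=lambda pair: -pair[1]):
    print(degree, name)

# nx.write_gexf(G, "network.gexf")  # would export it for visual editing in Gephi
Exporting to something like Gephi would at least give the visual, editable display that the blog series and the Antwerp paper lack.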
As early as 2015, The Atlantic published a story on how computer science professor Scott Aaronson blogged that a younger self had felt there was no acceptable feminist way to express sexual interest in a woman, then writers Laurie Penny and Amanda Marcotte (https://en.wikipedia.org/wiki/Amanda_Marcotte) wrote snarky replies, then Scott Alexander replied to them with frustration (https://slatestarcodex.com/2015/01/01/untitled/), and many were the posts that followed.
Aaronson, Penny, and Alexander are all Jewish by ancestry, blazingly neurodivergent, and very clever. All had white-collar careers with a lot of autonomy (professor, writer for chattering magazines and news outlets for the educated, psychiatrist). Marcotte went to university in Austin TX, the same city where Aaronson worked. One of the Zizians was educated at Oxford, Nick Bostrom's employer and sometimes William MacAskill's. Penny recently thanked Scott Alexander on her blog for endorsing Kamala Harris so they still follow each other's writing.
So even back then this was a small world. What changed is that a few businessmen, fraudsters, and cryptocurrency speculators have put billions of dollars behind all this, and now what was just interpersonal drama or an excuse to post and post and post has bigger consequences. David Gerard likes to tell the story that Grimes and Elon Musk first bonded over the story of Roko's Basilisk (https://en.wikipedia.org/wiki/Roko's_basilisk#legacy), and the breakup of that relationship has something to do with Musk's radicalization. Peter Thiel saw something of value in the LessWrongers back in 2012, and bankrupted another chattering web magazine with snarky feminist writing (Gawker Media (https://en.wikipedia.org/wiki/Gawker), host of Jezebel). Musk, Thiel, Bezos, Tallinn, and Zuckerberg (the five billionaires who have expressed interest in this space) are obviously part of another small world (I note without comment that Tallinn has six children and wrote a Bachelor's thesis about the physics of interstellar travel; let him who is brave pull those strings).
Eliezer Yudkowsky has now joined Alexander, Bostrom, and Gwern in endorsing a project to genetically engineer people for IQ and create "superbabies" (yes, he has also said that superhuman computer intelligence is coming in the next two years, and that is not many generations, Yud, but don't try to understand this, it will break your brain). The project is run by a software developer who is sure that the field of genetics is too cautious and unwilling to admit the truth. Sure sounds like someone I would let experiment on my future children. /s
In the original post, an academic said they were not sure what kinds of policies Cummings' rationalist sympathies would imply. One of Dominic Cummings' twenty-something cronies proposed sterilizing the underclass! "Whoever has ears, let him hear." https://www.politicshome.com/news/article/new-downing-street-adviser-called-for-universal-contraception-to-stop-permanent-underclass
I mentioned Libertarians on the first page.
A useful way of thinking of LessWrong is as comment-section Libertarians (https://leviathan-supersystem.tumblr.com/post/180724263214/what-is-lesswrong-and-can-you-summarize-why-its) who are conning moderates and liberals and being conned by neofascists and oligarchs, because that is the function of Libertarians. The British longtermists are slightly different but also into eugenics, transhumanism, etc. I suspect that if you get them in a bar and they like you, most will share the same ideas about the poors and the coloureds that their American friends post on main, because this all comes out of the same mailing lists and early blogs.
All of this (waves with immense fatigue) is old and moderate compared to what was bubbling on Twitter and whatever is replacing it. I just talk about it because I knew about some of these people ten or fifteen years ago and saw them being promoted on Hacker News and American right-wing Twitter, and because it's so easy to pull strings when they all post like they are getting paid by the word. Nazis and eugenicists are not interesting: you mock them mercilessly, you fight them in the courts, and if necessary you liquidate them with naval blockades, air strikes, and Ordnance QF 25-pounder fire.
David S. Holz, the CEO of Midjourney, has an active Twitter account (in 2025!) where he markets himself as an active transhumanist and trades posts with Roko of Basilisk fame.
The company seems to be private, so it's hard to know its revenue and expenses (which have little to do with the stock price of publicly-traded "tech" or "AI" companies anyway).
He also seems to believe that there are people who still admire any Bay Area "tech" company? That is a disconnect.
Quote from: dubsartur on March 14, 2025, 04:57:29 PMthe CEO of Midjourney has an active Twitter account (in 2025!)
This doesn't really surprise me at all, I think - it's that sort of tech-capitalist milieu that still does cling to Twitter (and probably Twitter where any kneejerk admirers of Bay Area tech still hang out).
Quote from: Jubal on March 14, 2025, 05:58:47 PMThis doesn't really surprise me at all, I think - it's that sort of tech-capitalist milieu that still does cling to Twitter. ...
I guess Apple is still pretty popular, and some consumers like Amazon, but even then it's known for being rapaciously capitalist. :pangolin: And some people really like generative AI / LLMs. But I don't think many people look at MS or Meta or Palantir or Netflix or Stripe and think "that is cool, I want to be a part of that mission." They think maybe "that has a lot of money, I want a piece of it."
Birdsite Guy did turn Twitter from a journalism / elite site to a tech / far right site, but that made it less useful for influencing officials and journalists.
Quote from: dubsartur on March 14, 2025, 07:20:15 PMI don't think many people look at MS or Meta or Palantir or Netflix or Stripe and think "that is cool I want to be a part of that mission." They think maybe "that has a lot of money I want a piece of it."
I think a lot of people are surprisingly good at starting from the latter proposition and then convincing themselves of the former as post-hoc self justification. I don't see those people very much but I understand they hang out on LinkedIn quite a lot.
BTW the Pinkerite blog has a chart of racialist networks around Emil Kirkegaard https://www.pinkerite.com/2024/12/elon-musk-supports-german-neo-nazi.html (Kirkegaard has no visible means of support except for family, an inheritance from racialist Richard Lynn, and American racialist Wickliffe Draper's Pioneer Fund)
A business book on OpenAI suggests that Sam Altman was chronically lying to the board of OpenAI (or doing things without telling them) because they were LessWrong-type doomers and their understanding of AI safety was getting in the way of making all the money (OpenAI dominates the chatbot business even though it does not own the best technology for some use cases and is losing billions of dollars a year; but Sam Altman knows that if enough money passes through your hands you can make some stick to your fingers) https://pivot-to-ai.com/2025/04/06/how-sam-altman-got-fired-from-openai-in-2023-not-being-an-ai-doom-crank-and-lying-a-lot/
Almost everyone in these spaces is crazy, greedy, or a Nazi. So whichever faction wins, the results will be bad. About the least harmful result is the chatbot companies which take rich people's money and spend it building nothing (one company raised $2 billion on a promise not to release technology before it has superintelligence, which is very South Sea Bubble).
And people at chatbot companies lie like the Hells Angels are at their door about a debt they can't pay.
Daniela Amodei, president of Anthropic the chatbot company (ClaudeAI), is married to Holden Karnofsky, a founder of the EA spending organization GiveWell and an Anthropic staffer. Amanda Askell was an early employee at Anthropic; her ex-husband is William MacAskill. Daniela's brother Dario Amodei is CEO of Anthropic and once shared a group house with Karnofsky while both worked for Open Philanthropy, another EA organization. So the org charts of these companies and foundations look more like Saudi Arabia or Qing China than contemporary Canada: it's all roommates and lovers and members of secret societies hiring each other. The closest thing I can think of in respectable circles in the USA is Old Media journalism (https://smuhlberger.blogspot.com/2009/08/imperial-decadence-fisher-king-bleeds.htm). https://forum.effectivealtruism.org/posts/53Gc35vDLK2u5nBxP/anthropic-is-not-being-consistently-candid-about-their See previous discussion about how all this (gestures) has social dynamics similar to geeky subcultures but much more ambitious goals and bigger budgets and influence (although a $20 million endowment and some six-figure to low-seven-figure grants to fund EA propaganda at Vox Media is not that much in absolute terms! Just big by the standards of FOSS or the SCA or your local amateur dramatic society).
Many people in these spaces hate public universities and civil servants, and I note that those are institutions in the USA which were dragged kicking and screaming away from hiring your friend's failson and your ex-wife's bridge partner. Publicly-traded companies can't get away with this easily any more either. But American so-called Libertarians tend to be very comfortable with organizing a country like one patriarchal family, and IQ-fetishists and believers in secret revelations can also be persuaded that clearly they know best and rules are for lesser creatures.
I note without further comment that there is a Mormon transhumanist association, if Czarist Russian cosmicists are not your jam https://www.transfigurism.org/
The Economist had an article a year ago with some details
https://archive.is/5sdci The major EA organizations had spent perhaps $2-3 billion by the end of 2022. That year was the only year where spending on longtermist causes was comparable to spending on global poverty reduction (animal welfare can be in between: it can mean "humane farming" or it can mean sitting around talking about genetically engineering lions to eat grass). A major donor is Dustin Moskovitz, a Facebook co-founder whose publicist decided not to make him famous like Zuck (his money is behind something called Open Philanthropy). The Economist estimates about 10,000 core EA members; GiveWell reported around 40,000 donors in 2022.
They don't spell out that the movement looks big on social media because it spends money paying people to post on the Internet, not organizing prayer breakfasts or art gallery cocktail parties or golf tournaments. And it's connected to social media companies like Substack, Twitter, and Facebook.
These days a lot of the people who used to call themselves rationalists or Effective Altruists are trying not to use labels in public, e.g. Rethink Priorities, which funds something called Apollo Research https://rethinkpriorities.org/about-us/ (earlier charitable filings are very honest that it is a longtermist Effective Altruist organization). A lot of scare stories about "AI" are funded by rationalist and longtermist EA organizations.
This space has lots of organizations, which sometimes change names, but staffers and money mostly move between them.
Caroline Ellison the fraudster, incidentally, was a regular face-to-face and Internet contact of Scott Alexander and friends, and SBF sometimes posted on Alexander's blog. She will be out of prison by the end of 2026.
Longtermists tend to pooh-pooh climate change because it's unlikely to make humans extinct, and actions to reduce it might prevent us from building the cosmic computer. A Vox article has some of them dismissing antibiotic resistance because genetically engineered bioweapons invented by Friend Computer are more exciting https://www.vox.com/2015/4/24/8457895/givewell-open-philanthropy-charity (warning: the author is funded by EA foundations and self-identifies as an EA https://forum.effectivealtruism.org/users/dylan-matthews-1). This is obviously totally opposed to the logic of empirical, short-termist Effective Altruism, because you can't measure hypothetical future threats.
An Old Media article (https://archive.is/4zSZT) on Curtis Yarvin mentions that after his wife died suddenly, he dated people in 2021 and 2022, including Caroline Ellison. People involved in the community say that while Scott Alexander was writing a Neo-Reactionary FAQ he attended events with Curtis Yarvin, who does not have a day job so had lots of time to hang around blathering. Yarvin also dated a feminist sex blogger (http://clarissethorn.com/blog/) who went by the handle Clarisse Thorn (RationalWiki has an article on a similar type of person called Ozy Brennan). My Early Social Media community was mostly people 1000 km or more away, so this merging of online and meatspace life (and of different communities) feels alien.
According to the article Yarvin knows Razib Khan, Steve Sailer, and many other cranks well.
Came across someone in a history FB group advertising his book on "medieval history" which actually seems to be an explicit ultra-Catholic argument for the restoration of monarchy, entitled Missing Monarchy. Kind of rare to see those sorts of sentiments outside their own bubbles.
Unfortunately, the refusal of Americans to control their oligarchs means that you just need to amuse one rich person to make a fringe Internet idea into a global sensation. And Amazon etc. don't care what they publish or advertise as long as it does not cross a few simple lines.
As far as I am concerned, the whole country is like a pet owner that lets their pit bull chase all the smaller dogs and people at the park. Utter abandonment of personal responsibility.
He Jiankui, the Chinese scientist who claimed he performed human genetic engineering with CRISPR and spent three years in prison for his trouble, is moving to America:
Quote"My new lab in Austin, Texas is being prepared and I'll be settling down there," he said. ... "My mission is still in the field of embryonic genetic editing." ... The controversial figure said he had given up trying to win over conventional academia and intended to spread his influence among his online fans instead. ... His growing rejection of mainstream academia has largely been attributed to his second wife, Cathy Tie – a 29-year-old Chinese-Canadian bioinformatician and entrepreneur who founded Ranomics, a genetic screening company, and Locke Bio, a telemedicine company, in Toronto. ... He also said that Tie had helped to create a cryptocurrency called meme coin to enable public support of his research.
Elon Musk has connections to Austin and several figures in this thread live or have lived there.
At least the Kaiser just let portugaling Lenin out! Would be funny if ICE deports He to a gulag in Honduras for stealing jobs from American mad scientists.
Has anyone else encountered the idea of "the big sort" or "the great sort" on American right-wing social media? The idea that the Invisible Hand of the Market just allots capable people to good jobs? (And don't talk about the state of public education and public health in many places, or how many jobs in, say, high-status journalism are reserved for the children of the last generation of celebrities and elected officials (https://smuhlberger.blogspot.com/2009/08/imperial-decadence-fisher-king-bleeds.htm) - people like that also tend to be anxious about chatbots taking jobs.)
I've not seen it named but have vaguely seen the idea alongside other free-market-type discussions. I find myself wondering whether the people who imagine it to be true have ever actually interacted with another person whilst working.
An establishment rag in NYC just shared a leak on the current socialist Democratic candidate for Mayor of NYC from Crémieux Recueil, a race-crank Twitter and Substack personality who the Guardian thinks (https://www.theguardian.com/us-news/2025/mar/03/natal-conference-austin-texas-eugenics) is a graduate student named Jordan Lasker. Recueil was among those who recently sued RationalWiki to stop it saying documented, true things about him. Said rag described him as "an academic and an opponent of affirmative action," not "a blogger obsessed with telling the Internet about the hereditary inferiority of black and Black people."
Crémieux is part of the network in this thread, but I am not going on Twitter to work out the details (RationalWiki mentions Richard Hanania, Aporia Magazine, the Natal Conference, a Libertarian charter city called Infinita, and something called Manifest in Berkeley where LessWrongers and race 'scientists' network).
I see indications that said candidate is not a great person, but the small-c conservative establishment siding with racists against socialists is how you get Nazi Germany.
Crémieux is sad (https://archive.ph/5kRsa) that guesses about the mayoral race have not moved on the prediction market Polymarket, so he presumably tried insider trading and shorted the candidate's chances before the leak https://www.dewereldmorgen.be/community/rassenbiologische-hit-job-op-zohran-mamdani-in-the-new-york-times/ Manifest was billed as being about prediction markets, but attendees noticed that "there are five HBD bloggers among the invited guests and Curtis Yarvin has organized a party."
Supposedly Crémieux has a Reddit handle which begins with a six-letter slur for trans people. Do none of these people have a healthy sexuality, based on respecting the people they want to take to bed, not on hating and fearing people who are different from them or seeing them as resources to harvest? And why did they all spend their youth calling for despicable policies rather than arguing about BSG versus Doctor Who and writing fanfic like normal nerds?
Quote from: Jubal on June 10, 2025, 10:27:02 AMCame across someone in a history FB group advertising his book on "medieval history" which actually seems to be an explicit ultra-Catholic argument for the restoration of monarchy, entitled Missing Monarchy. Kind of rare to see those sorts of sentiments outside their own bubbles.
I am told there is a far-right book by a social media personality with a title or handle like Bronze Age Pervert (and I can kind of grok the John Ringos who fantasize about being warlords who cut a swathe through their enemies by day and amuse themselves with captive women by night, or the people who start a band and go on tour and meet a groupie, but my thoughts on sophists who invent patter and use drugs and cult tactics to get much younger people dependent on them, like some of the figures in earlier posts in this thread, are not printable). Jordan Peterson has thoughts on Ancient Near Eastern literature.
A characteristic of the current phase of American social media is fringe figures gaining influence in old institutions. Fifteen or twenty years ago, someone very influential on 4chan or Tumblr (although there was a Tumblr-to-Twitter-to-the-headlines pipeline) would not have been given access to the NYT or been appointed by a close associate of POTUS. The visible mental collapse of prominent figures like Rudy Giuliani and Elon Musk also feels new (plenty of prominent people were deeply messed up, but they kept it out of the headlines, and if they got into the news, like Tiger Woods and his wife, they often took a break).
On whether someone with Indian parents, born in Uganda and resident in the USA, is African-American (Ugandan), I have applied to American universities which try to ask your race without saying race (my ethnicity is Canadian and Anglo, but there are never boxes for either). My thoughts there are also not for the Internet, except that I imagine people interpret the stupid question many different ways, and that people with all kinds of ancestry might get lumped in as "black" or "Hispanic" in the USA depending on circumstances (a pale-skinned Mexican in a business suit who speaks excellent English might be safer than a working-class tourist from Calabria drunk on real American whiskey). One way I heard it explained is that if you leave it blank they can guess your race; if you enter it they have to trust it. Lots of people have toyed with "is a white South African or a Tunisian African-American?" or, more interestingly, "is an immigrant from Kenya African-American if she is not part of Black culture but has more encounters with police than most of us on this forum can imagine?"