"Bizarre and Dangerous Utopian Ideology Has Quietly Taken Hold of Tech World"
So longtermism is an ideology that emerged out of the effective altruism community....
So if you’re an effective altruist and your goal is to positively influence the greatest number of people possible, and if most people who could exist will exist in the far future (once we colonize space and create these digital worlds in which trillions and trillions of people live), then you should be focused on the very far future. Even if there’s only a small probability that you’ll positively influence, say, 1% of these 10^58 digital people in the future, that is still a much greater value in expectation, a much greater expected value, than focusing on current people and contemporary problems....
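The expected-value argument above can be sketched in a few lines. This is an illustrative toy calculation, not from the transcript: the 1% probability and 10^58 figure come from the passage, while the 8 billion figure for the current world population is an assumption added here.

```python
# A minimal sketch of the longtermist expected-value comparison described above.
# Numbers: 1% chance and 10**58 digital people are from the passage;
# 8 billion current people is an illustrative assumption.

def expected_value(probability, people_affected):
    """Expected number of people helped."""
    return probability * people_affected

future_ev = expected_value(0.01, 10**58)     # tiny chance, astronomical population
present_ev = expected_value(1.0, 8 * 10**9)  # certainty, roughly today's population

print(future_ev > present_ev)   # the far-future bet dominates in expectation
print(future_ev / present_ev)   # by a factor of roughly 1.25e46
```

The point of the sketch is that any nonzero probability multiplied by an astronomically large population swamps a certain benefit to a merely planet-sized one, which is exactly the move the passage describes.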
One really has to recognize the influence of this TESCREAL bundle of ideologies. The acronym stands for Transhumanism, Extropianism — it’s a mouthful — Singularitarianism, Cosmism, Rationalism, Effective Altruism, and longtermism. The way I’ve described it is that transhumanism is the backbone of the bundle, and longtermism is kind of the galaxy brain atop the bundle. It sort of binds together a lot of the themes and important ideas that are central to these other ideologies.
Transhumanism in its modern form emerged in the late 1980s and 1990s. The central aim of transhumanism is to develop advanced technologies that would enable us to radically modify, or they would say radically enhance, ourselves to ultimately become a posthuman species. So by becoming posthuman, we could end up living forever. We could maybe abolish all suffering, radically enhance our cognitive systems, augment our cognitive systems so that we ourselves become super intelligent, and ultimately usher in this kind of utopian world of immortality and endless pleasure....
So this ideology is everywhere. It’s even infiltrating major international governing bodies like the United Nations. There was a UN Dispatch article from just last year that noted that foreign policy circles in general and the United Nations in particular are increasingly embracing the longtermism ideology. If you embrace longtermism, there is a sense in which you embrace the core commitments of many of the other TESCREAL ideologies.
July 20, 2023
Pause Giant AI Experiments: An Open Letter
"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, 'Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.' Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control..."
The "Godfather of AI," Dr. Geoffrey Hinton, has quit his job at Google, as reported in today's New York Times, so that he can talk about the risks of artificial intelligence. "It is hard to see how you can prevent the bad actors from using it for bad things," Dr. Hinton said. "... gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity."
"Hinton is best known for an algorithm called backpropagation, which he first proposed with two colleagues in the 1980s. The technique, which allows artificial neural networks to learn, today underpins nearly all machine-learning models. In a nutshell, backpropagation is a way to adjust the connections between artificial neurons over and over until a neural network produces the desired output."
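The "adjust the connections over and over" idea can be made concrete with a deliberately tiny example. This is a minimal sketch added here for illustration, not code from the quoted article: a two-weight linear "network" where the chain rule carries the error signal from the output weight back to the earlier weight, which is the essence of backpropagation.

```python
# Minimal backpropagation sketch (illustrative, two chained weights):
# network computes y = w2 * (w1 * x); we fit targets y = 2x by repeatedly
# adjusting both connection weights with gradients from the chain rule.

w1, w2 = 0.5, 0.5
lr = 0.05
data = [(1.0, 2.0), (2.0, 4.0)]  # samples of the target function y = 2x

for epoch in range(2000):
    for x, target in data:
        h = w1 * x                 # forward pass: hidden activation
        y = w2 * h                 # forward pass: network output
        error = y - target         # how far from the desired output
        # backward pass: chain rule sends the error back through the layers
        grad_w2 = error * h        # d(loss)/d(w2) for loss = 0.5 * error**2
        grad_w1 = error * w2 * x   # d(loss)/d(w1), propagated through w2
        w2 -= lr * grad_w2         # adjust the connections...
        w1 -= lr * grad_w1         # ...over and over

print(round(w1 * w2, 3))  # the product of the weights converges to 2.0
```

Real networks interleave nonlinear activations and millions of weights, but the update rule is the same: compute the output error, propagate its gradient backward through each connection, and nudge every weight downhill.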
Deep learning has proven a stunning success for countless problems of interest, but this success belies the fact that, at a fundamental level, we do not understand why it works so well. Many empirical phenomena, well-known to deep learning practitioners, remain mysteries to theoreticians. Perhaps the greatest of these mysteries has been the question of generalization: why do the functions learned by neural networks generalize so well to unseen data? From the perspective of classical ML, neural nets’ high performance is a surprise given that they are so overparameterized that they could easily represent countless poorly-generalizing functions.
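The claim that an overparameterized model class "could easily represent countless poorly-generalizing functions" has a simple concrete illustration. The example below is added here and is not from the BAIR post: two functions fit the same training data exactly, yet behave completely differently away from it.

```python
# Toy illustration of the generalization puzzle: many functions interpolate
# the same training set perfectly but generalize very differently.

train = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]  # samples from y = x

def simple(x):
    return x  # the well-generalizing interpolant

def wild(x):
    # also fits every training point: the extra term vanishes at x = 0, 1, 2
    return x + 100 * x * (x - 1) * (x - 2)

assert all(simple(x) == y for x, y in train)  # zero training error
assert all(wild(x) == y for x, y in train)    # also zero training error

print(simple(3.0), wild(3.0))  # 3.0 vs 603.0: identical on train, wildly apart off it
```

A neural network flexible enough to represent both kinds of interpolant could, in principle, learn either; the empirical mystery the passage describes is why training reliably finds functions like `simple` rather than functions like `wild`.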
Berkeley AI Research (BAIR)
It’s very hard to make a fully aligned AI that can do whatever we want. But it might be easier to align a narrow AI that thinks only about specific domains and isn’t able to consider the real world in all of its complexity. If you’re clever, such an AI could still be superintelligent, and could still perform some kind of pivotal action that at least buys us time.
Yudkowsky, Machine Intelligence Research Institute (MIRI)
British inventor Clive Sinclair has said he thinks artificial intelligence will doom mankind.
"Once you start to make machines that are rivaling and surpassing humans with intelligence, it's going to be very difficult for us to survive. It's just an inevitability."
The Washington Post, By Peter Holley, December 2, 2014.
Artificial intelligence and nanotechnology have been named alongside nuclear war, ecological catastrophe and super-volcano eruptions as “risks that threaten human civilization” in a report by the Global Challenges Foundation.
“Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), and would probably act to boost their own intelligence and acquire maximal resources for almost all initial AI motivations,” suggest authors Dennis Pamlin and Stuart Armstrong.
When it comes to discussing the dangers of artificial intelligence, the renowned theoretical physicist Stephen Hawking doesn't exactly mince words:
"I think the development of full artificial intelligence could spell the end of the human race," the Cambridge University professor told the BBC in an interview....