Selected citations
Deep learning has proven a stunning success for countless problems of interest, but this success belies the fact that, at a fundamental level, we do not understand why it works so well. Many empirical phenomena, well-known to deep learning practitioners, remain mysteries to theoreticians. Perhaps the greatest of these mysteries has been the question of generalization: why do the functions learned by neural networks generalize so well to unseen data? From the perspective of classical ML, neural nets’ high performance is a surprise given that they are so overparameterized that they could easily represent countless poorly-generalizing functions.
Berkeley AI Research (BAIR)
It’s very hard to make a fully-aligned AI that can do whatever we want. But it might be easier to align a narrow AI that’s only capable of thinking about specific domains and isn’t able to consider the real world in all of its complexity. But if you’re clever, this AI could still be superintelligent and could still do some kind of pivotal action that could at least buy us time.
Yudkowsky, Machine Intelligence Research Institute (MIRI)
British inventor Clive Sinclair has said he thinks artificial intelligence will doom mankind.
"Once you start to make machines that are rivaling and surpassing humans with intelligence, it's going to be very difficult for us to survive. It's just an inevitability."
The Washington Post, By Peter Holley, December 2, 2014.
Artificial intelligence and nanotechnology have been named alongside nuclear war, ecological catastrophe and super-volcano eruptions as “risks that threaten human civilization” in a report by the Global Challenges Foundation.
“Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), and would probably act to boost their own intelligence and acquire maximal resources for almost all initial AI motivations,” suggest authors Dennis Pamlin and Stuart Armstrong.
The Guardian, By Stuart Dredge, February 18, 2015.
When it comes to discussing the dangers of artificial intelligence, the renowned theoretical physicist Stephen Hawking doesn't exactly mince words:
"I think the development of full artificial intelligence could spell the end of the human race," the Cambridge University professor told the BBC in an interview.