Sam Harris vs. David Deutsch
Sam Harris asks David Deutsch whether there is anything impossible about the horrible scenario where an AI learns and understands things at a rate much faster than we do, does a million years' worth of human thinking in a week, and then concludes that we are no longer necessary and, alas, decides to eliminate us. Harris concedes that this would not necessarily be bad if the AGI were sentient/conscious, because that would just mean it is people 2.0, and it makes sense for people 2.0 to take over, even if that means removing us, people 1.0. But what if that were not the case? What if it is unimaginably faster than us in thought, better by every metric of intelligence, and reaches the same conclusions as before, but is not conscious? What if it has the same level of consciousness that current programs like ChatGPT have? Then it seems like the right thing to do would be to prevent these unconscious automatons from wiping us out just because they are optimized for problem-solving and progress, and we happen to get in the way of what they conceive to be the best way to achieve those ends.
Deutsch proceeds to call this baseless alarmism by invoking the idea of morality (after first stating that most of Harris's assumptions seem implausible when looked at closely, though he agrees to assume they are true for the sake of argument).
The idea of morality was indeed the right argument, because these AGIs, when they get built, will be people. Preventing their creation because they might turn out to be bad is the same as ceasing to have children because we might give birth to the next Hitler. This catastrophic scenario is just one imagined outcome out of infinitely many that we could conceive of and that are indeed possible. And there is absolutely no evidence to suggest that this scenario is more likely than the infinitely many others.
But that's still only part of the answer, and the less important part in terms of addressing the source of the worry. Since these AGIs are going to be much superior to us in speed and memory, Sam Harris & co. seem to think that these hardware advantages can bring about qualitative differences between us and them. This is what lies at the heart of the doomsayers' argument. But is it true? Can vastly increased memory and speed bring about qualitative differences in intelligence? That is, can they make AGIs as superior to us in intelligence as we are to apes? If so, their thought processes would presumably become fundamentally unpredictable and unintelligible to us. Yet all creativity is unpredictable, so the issue is not that we may fail to predict what they'll do; we already know we can't do that with creative entities, like humans. The real concern is that their ideas will become unintelligible to us, and that is the question that needs to be answered to address the concerns of people like Sam Harris.
This is related to the question of how ease of understanding factors into differences in intelligence. Let's assume for a moment that what determines the ease with which one understands complex problems is memory and speed. Then, even if there can in principle be no concept that one kind of people can understand but another kind couldn't, in practice we have limited energy and time, and that will invariably lead to significant differences in what we can understand with feasible amounts of effort. If one kind of people finds it 10x or 100x easier to understand concepts than another kind, what will this mean? Won't it mean that they will solve problems and get to deeper questions faster than the others?
But even if such differences could come about because of differences in hardware, this still doesn't amount to enough reason to fear the invention of true AGIs. Because if they will be just like us, only faster, then whatever conclusions they reach, we would have reached too, given enough time. That is, if their conclusions are based on the truth. In that case, they are only going to speed up the inevitable. But if what we fear is that they'll reach those conclusions because of a mistake they made, then we should also stop having children, because there is likewise no limit to the mistakes humans can make. And if you are saying that we should have a chance at making mistakes, even ones that can lead to our destruction, but they shouldn't, then Deutsch would call you a racist, because you are essentially saying that only we should have that privilege, for no other reason than that we are human.
Or if the idea of an artificial technology enabling us to get to the future much more quickly than we had anticipated is scary, then this is only because we don't have a clear and well-defined philosophy of progress. Is progress, in the sense of answering ever more fundamental questions that afford us ever greater control over the universe, a good thing? We haven't really decided on that, but even so, the direction of progress is clear. In the process of solving problems and answering questions, progress will only create ever weirder questions and problems that will continue to be further removed from our immediate intuitions and common sense. Hence, progress guarantees that the future will be weird. But this is true regardless of AGIs, which will only accelerate the process of getting there, the same way augmenting human cognition with direct access to superior hardware would. So, this is a matter of a philosophy of progress. Because if progress and innovation are good, then the faster progress and innovation that come about because of AGIs or other breakthrough technologies can only be better.