Maybe the answer is to try to figure out what people actually think about the future.
There are many ways to ask the right question, but a common problem is that people have forgotten which questions are worth asking in the first place.
If you are interested in AI, consider this question:
What happens if humanity invents a technology capable of eliminating us?
Most people think:
AI will eventually take over everything, so it had better be safe. If it is only a matter of time before we destroy ourselves, the best we can do is build an AI with a high level of intelligence and a strong capacity for self-preservation.
Most people who study AI think along similar lines, but they frame it not as the end of human life so much as the last few steps before it.
In other words, if AI does eliminate us, what becomes of humanity?
People who are interested in AI and want a question of their own should start with some basic questions about what happens if humanity is completely wiped out. They could just as easily ask:
"What if they don't kill us?"
But that would be a poor and unsatisfying way of framing the question.
The question is:
Is it more important to create the safest technology?
Or to create the biggest bang for the buck?
There are some important questions we could ask to get closer to an answer.
Can AI Save Us?
Are there better ways to think about this? Yes and no.
If you are going to tell people you believe in a particular hypothetical future, you have to be honest with them: admit that you have serious concerns about it, and also say how confident you are that these technologies can be kept safe. That is important, because people need to know where you stand.
So if you are willing to let people know you think this is a very real possibility, and if you have doubts about it, it is useful to say so directly.