<p>The reason people who study existential risk to humanity are most fearful for our future is not that they think we lack the ability to solve the problems we already know about. Problems such as environmental decline and natural disasters, they believe, we will be able to get under control. The fear comes from the problems they believe we will not be able to predict - the unknown unknowns - which will pose sudden risks we are not prepared to deal with…</p>
<p>Of course, all of these threats come from new things - that is, from technological developments. Some foreseeable candidates include advanced biotechnology (bioweapons, nanotechnology) and, perhaps most worrisome, advanced (smarter-than-human) artificial intelligence, among other things.</p>
<p>Anyway, that just seems to be the consensus of the very intelligent people who have written books and papers about existential risk to humanity. All agree that there is probably a greater than 15% chance of humans going extinct this century.</p>