It can be harder than it seems. I found this on "The Statistics Forum" of the ASA and Chance.
Many have questioned why the Fukushima reactor was built to withstand only a magnitude-8.3 earthquake in a part of the world known for seismic activity. It turns out they had a seemingly well-understood theory and 300 years of data.
Here is an excerpt from the NYT article:
This was not the first time scientists have underestimated the ferocity of an earthquake fault. Many were also caught by surprise by the magnitude-9.1 quake in 2004 off Sumatra, which set off tsunamis radiating across the Indian Ocean, killing more than 200,000 people.
Sometimes, scientists are blindsided by earthquakes because they occur along undiscovered faults…
In California, for instance, scientists have cataloged 1,400 faults, yet for smaller earthquakes — magnitude 6.7 or less — about one in three still occur on previously unknown faults.
In other words, we have good reason to believe our knowledge is incomplete. Hubris is often called overconfidence bias ( http://en.wikipedia.org/wiki/Overconfidence_effect ), where we substantially overestimate our certainty. Subjective confidence intervals are not very reliable.
The Wikipedia article restates a well-known study in which weather forecasters show no overconfidence but clinical psychologists do. We often make fun of weather forecasters, but their repeated experience with forecasting and prompt outcome feedback means that they are well calibrated. If they say there is a 70% chance of rain, it will rain about 70% of the time.
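The calibration idea above can be checked directly: group a forecaster's predictions by the probability they stated, then compare each stated probability with the observed frequency of the event. A minimal sketch, with entirely hypothetical forecast data (the function name and the numbers are my own for illustration):

```python
# Sketch: checking whether probabilistic forecasts are calibrated.
# A forecaster is well calibrated if, among all days they say "70% chance
# of rain", it actually rains about 70% of the time.

def calibration_table(forecasts, outcomes):
    """Group outcomes by stated probability and return the observed
    frequency of the event within each group."""
    groups = {}
    for p, happened in zip(forecasts, outcomes):
        groups.setdefault(p, []).append(happened)
    return {p: sum(obs) / len(obs) for p, obs in sorted(groups.items())}

# Hypothetical data: stated rain probabilities and whether it rained (1) or not (0).
forecasts = [0.7] * 10 + [0.3] * 10
outcomes = [1] * 7 + [0] * 3 + [1] * 3 + [0] * 7  # 7 of the ten "70%" days rained

print(calibration_table(forecasts, outcomes))
# → {0.3: 0.3, 0.7: 0.7}  (stated probabilities match observed frequencies)
```

A perfectly calibrated forecaster produces a table whose observed frequencies equal the stated probabilities; large gaps between the two are the signature of overconfidence.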
Historic data may not be enough. Although, perhaps, had they examined the data for predictability, they would have determined that the earthquake process is neither stable nor fully understood.