Here is a problem quite a long way from the problem of induction, but that introduces an incredibly useful tool for thinking about many things. It is a problem most people get wrong.
Suppose you decide to check yourself out for some disease. Suppose this disease is quite rare in the population: only about one in a thousand people suffer from it. But you go to your doctor, who says he has a good test for it. The test is in fact over 99 per cent reliable! Faced with this, you take the test. Then - horrors! - you test positive. You have tested positive, and the test is better than 99 per cent reliable. How bad is your situation, or in other words, what is the chance you have the disease?
Most people say, it's terrible: you are virtually certain to have the disease.
But suppose, being a thinker, you ask the doctor a bit more about this 99 per cent reliability. Suppose you get this information:
- If you have the disease, the test will say you have it.
- The test sometimes, but very rarely, gives 'false positives'. In only a very few cases - around 1 per cent - does it say that someone has the disease when they do not.

These two together make up the better than 99 per cent reliability. You might think that you are still virtually certain to have the disease. But in fact this is entirely wrong. Given the facts, your chance of having the disease is a little less than ten per cent.
Why? Well, suppose 1,000 people take the test. Given the general incidence of the disease (the 'base rate'), one of them might be expected to have it. The test will say he has it. It will also say that 1 per cent of the rest of those tested, i.e. roughly ten people, have it. So eleven people might be expected to test positive, of whom only one will have the disease. It is true the news was bad - you have gone from a 1 in 1,000 chance of disease to a 1 in 11 chance - but it is still far more probable that you are healthy than not. Getting this answer wrong is called the fallacy of ignoring the base rate.
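The arithmetic in the paragraph above is an instance of Bayes' theorem. Here is a minimal sketch in Python, using the numbers from the example (a base rate of 1 in 1,000, a test that always detects the disease, and a 1 per cent false positive rate):

```python
# Numbers taken from the example in the text.
base_rate = 1 / 1000            # chance a random person has the disease
sensitivity = 1.0               # if you have it, the test says so
false_positive_rate = 0.01      # 1 per cent of healthy people test positive

# Probability of testing positive at all: true positives plus false positives.
p_positive = (sensitivity * base_rate
              + false_positive_rate * (1 - base_rate))

# Bayes' theorem: probability of disease given a positive test.
p_disease_given_positive = sensitivity * base_rate / p_positive

print(round(p_disease_given_positive, 3))  # about 0.091, i.e. roughly 1 in 11
```

The result, about 9 per cent, matches the informal count: of the eleven or so people expected to test positive in a group of 1,000, only one actually has the disease.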