BD (technologically challenged barbie doll)
Joined Sep 1, 2011 · Messages: 25,323
I'm realising how much stats have leaked out of my brain.
From (long-term) memory: the significance level is a test of the data where 95% means a 5% chance that the null hypothesis is true; the confidence level is the probability that, if the test were repeated again and again, the results would be the same; and a confidence interval is a range within which a result would be expected to fall 95% of the time, constructed from the significance level (or perhaps the confidence level, or both/either; I forget the details). All closely related yet different. I suspect I've been conflating one or more of them at times.
Kind of, but not quite.
The null hypothesis is either true, or it isn't - there's no probability assigned to it. A given vaccine either reduces cases of Covid, or it doesn't.
For the p-value, we assume the null hypothesis is correct; the p-value then denotes the probability of observing our data (or data more 'extreme') given that the null hypothesis is true. So we assume our Covid vaccine does nothing, and then, when we see the difference in case rates between the placebo and vaccine cohorts, the p-value indicates how likely a difference at least that large was to arise under the assumption that there should be no difference.
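That idea can be made concrete with a quick simulation. A minimal sketch, using made-up case counts (not from any real trial): if the vaccine does nothing, the arm labels are meaningless, so we can pool everyone, reshuffle the labels many times, and count how often a difference as extreme as the observed one shows up by chance. That fraction is an approximate p-value.

```python
import random

random.seed(0)

# Hypothetical numbers, purely for illustration:
# 100 cases among 10,000 placebo participants,
# 60 cases among 10,000 vaccinated participants.
n_per_arm = 10_000
cases_placebo, cases_vaccine = 100, 60
observed_diff = cases_placebo - cases_vaccine

# Under the null hypothesis the labels don't matter, so pool
# everyone: 160 cases and 19,840 non-cases in one big list.
pooled = ([1] * (cases_placebo + cases_vaccine)
          + [0] * (2 * n_per_arm - cases_placebo - cases_vaccine))

def shuffled_diff():
    # Randomly reassign participants to the two arms and
    # recompute the difference in case counts.
    random.shuffle(pooled)
    arm_a = sum(pooled[:n_per_arm])
    arm_b = sum(pooled[n_per_arm:])
    return arm_a - arm_b

n_sims = 1_000
extreme = sum(1 for _ in range(n_sims)
              if abs(shuffled_diff()) >= observed_diff)
p_value = extreme / n_sims
print(f"approximate two-sided p-value: {p_value:.3f}")
```

With these (invented) numbers the shuffled difference almost never reaches 40, so the approximate p-value comes out very small: that difference would be very surprising if the vaccine truly did nothing.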
I don't mean to be a nitpicky bore, but I thought it might be interesting. I've heard complaints from a lot of statisticians that p-values are so often misunderstood and misused, especially in medical statistics, to the point where research gets designed in whatever way gives the best chance of a 'statistically significant' result, even if the corresponding research is no good. I've experienced something similar on a project I was helping with, where p-values were treated as the be-all and end-all, despite us trying to explain why they were pointless in our case.
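One reason chasing 'statistically significant' results goes wrong is multiple testing: even when no effect exists anywhere, each test still has a 5% chance of dipping under p < 0.05, so checking enough outcomes will usually "find" something. A small sketch (simulated p-values, not real data; under a true null with a continuous test statistic, p-values are uniform on [0, 1]):

```python
import random

random.seed(1)

# Simulate 20 independent tests of outcomes where the null
# hypothesis is true for every single one. Under the null, each
# p-value is uniformly distributed on [0, 1].
n_outcomes = 20
p_values = [random.random() for _ in range(n_outcomes)]
false_positives = [p for p in p_values if p < 0.05]
print(f"'significant' results out of {n_outcomes} null tests: "
      f"{len(false_positives)}")

# Analytically: the chance of at least one false positive across
# 20 independent tests at the 5% level is 1 - 0.95**20, about 0.64.
prob_at_least_one = 1 - 0.95 ** n_outcomes
print(f"chance of >= 1 false positive: {prob_at_least_one:.2f}")
```

So a researcher who tests twenty outcomes and reports only the 'significant' one is, more often than not, reporting noise.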