# Generalizing to a Population: P Values

Generalizing to a population is supposed to be done by generating a p value from a test statistic. So let's find out what this p is, and what's so special about 0.05. I'll also deal with the related topics of one-tailed vs two-tailed tests, and hypothesis testing.

## What is a P Value?

It's difficult, this one. P is short for probability. And what's it got to do with statistical significance? I've already defined statistical significance in terms of confidence intervals.

The other approach to statistical significance--the one that involves p values--is a bit convoluted. First you assume there is no effect in the population. Then you see if the value you get for the effect in your sample is the sort of value you would expect for no effect in the population.

If the value you get is unlikely for no effect, you conclude there is an effect, and you say the result is "statistically significant". Let's take an example. You are interested in the correlation between two things, say height and weight, and you have a sample of 20 subjects.
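The procedure just described can be sketched in code. This is only an illustration, using made-up height and weight data for 20 subjects; the variable names and the simulated relationship are my own, not from the article.

```python
# Sketch: test whether an observed sample correlation is "unlikely"
# under the assumption of zero correlation in the population.
# Data are simulated purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
height = rng.normal(175, 7, size=20)               # 20 subjects
weight = 0.5 * height + rng.normal(0, 8, size=20)  # loosely tied to height

r, p = stats.pearsonr(height, weight)
print(f"sample correlation r = {r:.2f}, two-tailed p = {p:.3f}")
# By convention, p < 0.05 would be called "statistically significant".
```

The program is doing exactly what the text describes: asking how probable a correlation at least this extreme would be if the population correlation were zero.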

OK, assume there is no correlation in the population. Now, what are some unlikely values for a correlation with a sample of 20? It depends on what we mean by "unlikely": let's say values so extreme that they would turn up less than 5% of the time. In that case, with 20 subjects, all correlations more positive than 0.44 or more negative than -0.44 are unlikely.

What did you get in your sample? If it falls between -0.44 and 0.44, that's not an unlikely value, so the result is not statistically significant. But wait a minute. What about the p value?

That's the way it used to be done before computers. You looked up a table of threshold values for correlations or for some other statistic to see whether your value was more or less than the threshold value, for your sample size. Stats programs could do it that way, but they don't.
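The table-lookup approach just described can be reproduced in a few lines. This is a sketch, not the article's spreadsheet: it uses the standard relation between a correlation and the t statistic with n - 2 degrees of freedom to find the threshold correlation for a given sample size.

```python
# Threshold (critical) correlation for a two-tailed test at a given
# alpha, for a sample of n subjects -- what you would once have looked
# up in a table.
from scipy import stats

def critical_r(n, alpha=0.05):
    df = n - 2
    t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-tailed threshold for t
    return t_crit / (t_crit**2 + df) ** 0.5   # convert t back to r

print(f"critical r for n=20: {critical_r(20):.2f}")  # → 0.44
```

Note how the threshold gets bigger for smaller samples: with fewer subjects, a larger correlation is needed before it counts as unlikely.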


Instead, they work out the exact probability of getting values more extreme than your observed value, assuming there is no effect in the population. That's the p value. A bit of thought will satisfy you that if the p value is less than 0.05, the observed value must be beyond the threshold value, so the result is statistically significant; if the observed value falls short of the threshold, the p value is greater than 0.05, and the correlation is therefore not statistically significant. Here's our example summarized in a diagram: the curve shows the probability of getting a particular value of the correlation in a sample of 20, when the correlation in the population is zero.

For a particular observed value, the p value is the probability of getting anything more extreme, positive or negative. That probability is the sum of the shaded areas under the probability curve. The total area under a probability curve is 1, which means absolute certainty, because you have to get a value of some kind. Results falling in that shaded area are not really unlikely, are they?

No, we need a smaller area before we get excited about the result. In the example, that would happen only for correlations beyond the threshold of 0.44 (or -0.44), where the shaded area is 0.05 or less. Bigger correlations have even smaller p values and are statistically significant.

## Test Statistics

The stats program works out the p value either directly for the statistic you're interested in (e.g. a correlation), or for a test statistic derived from it. A test statistic is just another kind of effect statistic, one that is easier for statisticians and computers to handle.

Common test statistics are t, F, and chi-squared. You don't ever need to know how these statistics are defined, or what their values are.
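For the curious, though, here is roughly how a program gets from a correlation to a t statistic and from there to a p value. The observed r of 0.25 and sample size of 20 are made-up illustrations, not values from the article.

```python
# Sketch: convert an effect statistic (a correlation r) into a test
# statistic (t with n - 2 degrees of freedom), then into a p value.
from scipy import stats

def p_from_r(r, n):
    df = n - 2
    t = r * (df ** 0.5) / (1 - r**2) ** 0.5  # r expressed as a t statistic
    return 2 * stats.t.sf(abs(t), df)        # two-tailed p value

print(f"r = 0.25 with n = 20 gives p = {p_from_r(0.25, 20):.2f}")
```

The point stands, though: the t value itself tells you nothing a reader needs; only the p value (or better, the confidence interval) matters.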


All you need is the p value, or better still, the confidence limits or interval for your effect statistic.

## P Values and Confidence Intervals

Speaking of confidence intervals, let's bring them back into the picture. It's possible to show that the two definitions of statistical significance are compatible: getting a p value of less than 0.05 is the same thing as having a 95% confidence interval that doesn't overlap zero.

I won't try to explain it, other than to say that you have to slide the confidence interval sideways to prove it. The relationship between p values and confidence intervals also provides us with a more sensible way to think about what the "p" in "p value" stands for.

I've already said that it's the probability of a more extreme positive or negative result than what you observed, when the population value is null. But hey, what does that really mean? I get lost every time I try to wrap my brain around it. Here's something much better: for an observed positive effect, half the p value is the chance that the true value of the effect is negative (less than zero). Maybe it's better to turn it around and say there is a probability of one minus half the p value that the true value is positive.

Check your understanding by working out how to interpret a p value of exactly 1. So, if you want to include p values in your next paper, this interpretation gives you a new way to describe them in the Methods section. But even with this interpretation, p values are not a great way to generalize an outcome from a sample to a population, because what matters is clinical significance, not statistical significance.
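This interpretation is simple enough to write down directly. A minimal sketch, with an illustrative p value of 0.20 that is my own choice rather than the article's:

```python
# For an observed positive effect with two-tailed p value p:
# chance the true value is negative (< 0) is p / 2,
# chance the true value is positive (> 0) is 1 - p / 2.
def chances_from_p(p):
    chance_negative = p / 2
    chance_positive = 1 - p / 2
    return chance_negative, chance_positive

neg, pos = chances_from_p(0.20)  # e.g. an observed p of 0.20
print(f"chance true value < 0: {neg:.2f}, chance > 0: {pos:.2f}")
# A p of exactly 1 gives 0.5 and 0.5: the true value is equally
# likely to be positive or negative -- the answer to the exercise above.
```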

## Clinical vs Statistical Significance

As we've just seen, the p value gives you a way to talk about the probability that the effect has any positive or negative value. To recap, if you observe a positive effect, and it's statistically significant, then the true value of the effect is likely to be positive. But if you're going to all the trouble of using probabilities to describe magnitudes of effects, it's better to talk about the probability that the effect is substantially positive or negative.

Because we want to know the probability that the true value is big enough to count for something in the world. In other words, we want to know the probability of clinical or practical significance. To work out that probability, you will have to think about and take into account the smallest clinically important positive and negative values of the effect; that is, the smallest values that matter to your subjects.

For more on that topic, see the page about a scale of magnitudes. Then it's a relatively simple matter to calculate the probability that the true value of the effect is greater than the positive value, and the probability that the true value is less than the negative value.
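The calculation just described can be sketched as follows. Like the article's spreadsheet, it assumes the effect statistic is normally distributed; the observed value, p value, and smallest important value below are illustrative numbers of my own, not the article's.

```python
# From the observed value and its two-tailed p value, recover the
# standard error, then ask how likely the true value is to lie beyond
# the smallest clinically important value in either direction.
from scipy import stats

def clinical_chances(observed, p, smallest_important):
    z = stats.norm.ppf(1 - p / 2)   # z score matching the p value
    se = abs(observed) / z          # implied standard error of the effect
    chance_positive = stats.norm.sf(smallest_important, observed, se)
    chance_negative = stats.norm.cdf(-smallest_important, observed, se)
    chance_trivial = 1 - chance_positive - chance_negative
    return chance_positive, chance_trivial, chance_negative

pos, triv, neg = clinical_chances(observed=10, p=0.20, smallest_important=4)
print(f"clinically positive: {pos:.2f}, trivial: {triv:.2f}, negative: {neg:.2f}")
```

The three chances always add to 1: the true value has to be clinically positive, trivial, or negative.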

I have now included the calculations in the spreadsheet for confidence limits and likelihoods. I've called the smallest clinically important value a "threshold value for chances [of a clinically important effect]". You have to choose a threshold value on the basis of experience or understanding.

You also have to include the observed value of the statistic and the p value provided by your stats program. For changes or differences between means you also have to provide the number of degrees of freedom for the effect, but the exact value isn't crucial.

The spreadsheet then gives you the chances (expressed as probabilities and odds) that the true value is clinically positive (greater than the smallest positive clinically important value), clinically negative (less than the negative of the smallest important value), and clinically trivial (between the positive and negative smallest important values).

The spreadsheet also works out confidence limits, as explained in the next section below. Use the spreadsheet to play around with some p values, observed values of a statistic, and smallest clinically important values to see what the chances are like. I've got an example there showing what even a modest p value can mean for the chances of a clinically important effect. I have written two short articles on this topic at the Sportscience site.

The first article introduces the topic, pretty much as above. The second article summarizes a Powerpoint slide show I have been using for a seminar with the title Statistical vs Clinical or Practical Significance, in which I explain hypothesis testing, P values, statistical significance, confidence limits, probabilities of clinical significance, a qualitative scale for interpreting clinical probabilities, and some examples of how to use the probabilities in practice.

Download the presentation (91 KB) by right-clicking on this link.


View it as a full slide show so you see each slide build.

## Confidence Limits from a P Value

Stats programs often don't give you confidence limits, but they always give you the p value. So here's a clever way to derive the confidence limits from the p value. It works for differences between means in descriptive or experimental studies, and for any normally distributed statistic from a sample.

Best of all, it's on a spreadsheet! I explain how it works in the next paragraph, but it's a bit tricky and you don't have to understand it to use the spreadsheet. Link back to the previous page to download the spreadsheet. I'll explain with an example. Suppose you've done a controlled experiment on the effect of a drug on time to run 10,000 m. Suppose the overall difference between the means you're interested in is 46 seconds, with some two-tailed p value from your stats program. By the definition of the p value, the chance of getting a difference more extreme than 46 seconds in either direction, when there is no effect, is p; or to put it another way, the area under the sampling distribution between -46 and 46 is 1 - p.
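The trick can be sketched in code: from just the observed difference and its p value, recover the implied standard error, then build a 95% confidence interval. This assumes a normally distributed statistic, as the text does; the p value of 0.10 is a stand-in of my own, since the article's value is not shown here.

```python
# Derive 95% confidence limits for a difference between means from
# the observed difference and its two-tailed p value alone.
from scipy import stats

def ci_from_p(diff, p, level=0.95):
    z_p = stats.norm.ppf(1 - p / 2)         # z score matching the p value
    se = abs(diff) / z_p                    # implied standard error
    z_ci = stats.norm.ppf(0.5 + level / 2)  # 1.96 for a 95% interval
    return diff - z_ci * se, diff + z_ci * se

lower, upper = ci_from_p(46, 0.10)
print(f"95% CI: {lower:.0f} to {upper:.0f} seconds")
```

Notice that a p value bigger than 0.05 gives an interval that overlaps zero, consistent with the earlier link between the two definitions of statistical significance.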


We know that the chance of the true value being between 0 and 92 (that is, 46 ± 46) is 1 - p: just slide the distribution sideways so it is centered on the observed 46 seconds, and the area between 0 and 92 is the same as the area between -46 and 46 was. To work out the confidence limits, we use the fact that the distribution is normal. That allows us to calculate how many standard deviations (also known as the z score) we have to go on each side of the mean to enclose that area. And we know that 1.96 standard deviations on each side encloses 95% of the area, which gives the 95% confidence limits. Fine, except that it's not really a normal distribution: for small samples it's a t distribution, which is a bit wider. Exactly how much wider depends on the number of subjects, or more precisely, the number of degrees of freedom. With your own data, search around in the output from the analysis until you find the degrees of freedom for the error term or the residuals.
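The t-distribution adjustment just described is easy to see numerically: with few degrees of freedom the 95% multiplier is noticeably larger than the normal distribution's 1.96, and it shrinks back toward 1.96 as the degrees of freedom grow. The degrees of freedom below are illustrative choices.

```python
# Compare the normal 95% multiplier with the t multiplier at a few
# illustrative degrees of freedom.
from scipy import stats

print(f"normal multiplier for 95%: {stats.norm.ppf(0.975):.2f}")  # → 1.96
for df in (5, 18, 100):
    t_mult = stats.t.ppf(0.975, df)  # wider than normal for small df
    print(f"df = {df:3d}: multiplier = {t_mult:.2f}")
```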