This expression, p-value, must be very familiar to students, researchers, professors, doctors, other scholars, and of course statisticians. If you type it into Google you will get more than 1,300,000 web pages containing this term. There are also many definitions of it:
1. “In statistical hypothesis testing, the p-value of a random variable T used as a test statistic is the probability that T will assume a value ‘at least as extreme’ as the observed value, given that the null hypothesis being considered is true. ‘More extreme’ would mean less favorable to the null hypothesis; in some cases that means greater than, in some cases less than, and in some cases further away from a specified center.”
2. From a journal of medicine:
“Probability that an observed difference between groups occurred by chance alone. A result is conventionally regarded as ‘statistically significant’ if the likelihood that it is due to chance alone is less than five times out of 100 (P < 0.05).”
3. “1. the probability of making a Type I error; 2. the significance level of a hypothesis test.”
Some of these definitions share the same general idea, but quite a few are misleading, like the third definition above, which actually describes the level of significance (alpha) rather than the p-value.
It is also common for people to use this value without clearly knowing what it means. For many researchers and students, having a p-value smaller than 0.05 means that their research is a success and is ready to publish, which is not true. Even with a significant result, if the assumptions and other requirements of the test are not satisfied, the result will be incorrect.
A better definition of the p-value would be: “the probability, under the null hypothesis, of observing a result at least as extreme as the one obtained, in the direction of the alternative hypothesis. More extreme can mean larger, smaller, or both.”
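As an illustration of how “more extreme” depends on the direction of the alternative hypothesis, here is a small sketch in Python for a z-statistic whose null distribution is standard normal. The function names and the observed value 1.96 are my own choices for the example, not from the text above.

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_value(z: float, alternative: str = "two-sided") -> float:
    """P-value for an observed z statistic under H0: Z ~ N(0, 1).

    'greater'   : P(Z >= z)     -- "more extreme" means larger
    'less'      : P(Z <= z)     -- "more extreme" means smaller
    'two-sided' : P(|Z| >= |z|) -- "more extreme" in both directions
    """
    if alternative == "greater":
        return 1.0 - normal_cdf(z)
    if alternative == "less":
        return normal_cdf(z)
    return 2.0 * (1.0 - normal_cdf(abs(z)))

z_obs = 1.96
print(round(p_value(z_obs, "greater"), 3))    # one-sided: ≈ 0.025
print(round(p_value(z_obs, "two-sided"), 3))  # two-sided: ≈ 0.05
```

The same observed statistic gives different p-values depending on which direction counts as “extreme,” which is exactly why the definition must mention the alternative hypothesis.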
And the last thing: the p-value is only a number that helps in drawing a conclusion; the decision is still in the researcher's hands.