As nutritionists, we are often exposed to trial reports comparing two or more diets. To read these reports correctly and make sound decisions, we need to look at the statistical analysis supporting the conclusions. The purpose of statistics is to confirm whether the differences we want to demonstrate are larger than the differences arising from the variability within each group.

If the trial shows a difference but the statistical analysis concludes that this difference is not significant, we cannot say with certainty where the difference comes from. It may come from the product effect, but it could just as well come from the differences between the animals in each group. Conversely, when the trial shows no difference and the results are not statistically different, we cannot draw a firm conclusion either: an effect of the product may have been hidden by the variability between individuals.

> Interpretation of the p-value: Trial results are often reported with a p-value, typically expressed as “p < 5%” or “p < 1%”. The p-value is the probability of obtaining a difference at least as large as the one observed between two groups that are actually equivalent.

For example, if the trial concludes that there is a statistically significant difference of +3% between treatment group A and control group B with a p-value of 5%, this means there was only a 5% chance of observing such a difference if the treatment had no effect at all, i.e. if we had compared group A with itself. We can therefore be confident that the observed difference reflects a real treatment effect rather than group variability. Trials never give 100% certainty, but the p-value helps to measure the degree of confidence in the results.
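This idea of “the probability of getting such a difference between two equivalent groups” can be made concrete with a permutation test: we repeatedly reshuffle the animals between the two groups and count how often chance alone produces a difference as large as the one observed. The sketch below uses hypothetical weight-gain figures, not data from any real trial:

```python
import random
import statistics

def permutation_p_value(group_a, group_b, n_permutations=10_000, seed=42):
    """Estimate the probability of seeing a mean difference at least as
    large as the observed one if the two groups were actually equivalent."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    at_least_as_extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # reassign animals to groups at random
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            at_least_as_extreme += 1
    return at_least_as_extreme / n_permutations

# Hypothetical daily weight gains (g/day) for two pens -- illustrative only.
control   = [510, 495, 502, 488, 507, 499, 493, 505]
treatment = [524, 517, 530, 509, 521, 515, 528, 512]

p = permutation_p_value(treatment, control)
print(f"estimated p-value: {p:.4f}")
```

A small p-value here means that random reshuffling almost never reproduces the observed difference, so group variability alone is an unlikely explanation for it.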

To illustrate the statistical differences between groups, each bar of a graph is labelled with a letter: a, b, c, etc.

Bars labelled ‘a’ and ‘b’ are statistically different from each other. But if a bar between them is labelled ‘ab’, it means that this bar is not statistically different from either the ‘a’ bar or the ‘b’ bar.
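The rule behind these labels is that groups sharing a letter are not significantly different. A minimal sketch of how such labels could be derived from pairwise test results is shown below; this is a simplified illustration for small cases, not the full compact-letter-display algorithm used by statistical packages, and the group names and significance results are hypothetical:

```python
def letter_labels(groups, significant):
    """Assign letters so that groups sharing a letter are not significantly
    different. Minimal sketch: works for simple cases like the one below."""
    letters = {g: set() for g in groups}
    current = "a"
    for i, g in enumerate(groups):
        if not letters[g]:                  # group still needs its own letter
            letters[g].add(current)
            for h in groups[i + 1:]:        # share it with compatible groups
                pair = tuple(sorted((g, h)))
                if not significant[pair]:
                    letters[h].add(current)
            current = chr(ord(current) + 1)
    return {g: "".join(sorted(s)) for g, s in letters.items()}

# Hypothetical pairwise test outcomes: True = significantly different.
significant = {
    ("A", "B"): True,   # A and B differ
    ("A", "C"): False,  # C is indistinguishable from A...
    ("B", "C"): False,  # ...and from B
}
print(letter_labels(["A", "B", "C"], significant))
```

Here group A gets ‘a’, group B gets ‘b’, and group C ends up as ‘ab’, exactly the intermediate-bar situation described above.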

When you are confronted with results that are not significantly different, or with a p-value that is too high, it can mean either that the two groups tested are genuinely not different, or that the experimental design needs to be revised before a possible difference between them can be confirmed. To increase the chances of obtaining statistically meaningful results, the experimental design must minimize the variability within each group of animals. To do so, we need to:

  1. increase the number of replicates (number of pens for collective measurements, number of animals for individual measurements)

  2. increase the number of animals per pen, which reduces the variability between pens

  3. ensure that groups are homogeneous in terms of weight, sex, and age, so that we measure the effect of the treatments and avoid bias
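The benefit of adding replicates can be quantified: the standard error of a group mean shrinks in proportion to 1/√n, so a real treatment effect stands out more clearly against group variability as n grows. The simulation below checks this with hypothetical numbers (a true mean of 500 g/day and a standard deviation of 15):

```python
import math
import random
import statistics

rng = random.Random(0)

def simulated_sem(n_replicates, true_mean=500.0, sd=15.0, n_trials=2000):
    """Simulate many trials and measure how much the group mean varies
    from trial to trial (its standard error)."""
    means = [
        statistics.mean(rng.gauss(true_mean, sd) for _ in range(n_replicates))
        for _ in range(n_trials)
    ]
    return statistics.stdev(means)

for n in (4, 16, 64):
    theory = 15.0 / math.sqrt(n)
    print(f"n={n:3d}  observed SEM = {simulated_sem(n):5.2f}  sd/sqrt(n) = {theory:5.2f}")
```

Quadrupling the number of replicates halves the standard error, which is why point 1 above is usually the most effective lever for detecting a treatment effect.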

In conclusion, only the statistical analysis can demonstrate whether the results of a trial are conclusive. Without statistical differences, we can only speculate about what the results might have been with a better experimental design. A numerical difference gives a hint, but only a statistical difference allows us to draw conclusions.

