Regression: Patterns of Variation
Regression involves determining the degree of relationship between the patterns of variation of two or more variables through the calculation of the correlation coefficient, r. The value of r can vary between +1.0 (perfect positive correlation) and −1.0 (perfect negative correlation). When r = 0, there is zero correlation, meaning that the variation of one variable cannot be used to explain any of the variation in the other variable. The coefficient of determination, r^{2}, is a measure of how well the variation of one variable explains the variation of the other, and corresponds to the proportion of the variation explained by a best-fit regression line calculated for the data.
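The calculation of r and r^{2} can be sketched in a few lines of Python. The girth and weight values below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical girth and weight measurements (illustrative values only).
girth = np.array([210.0, 250.0, 300.0, 340.0, 410.0, 470.0])
weight = np.array([70.0, 240.0, 530.0, 790.0, 1210.0, 1530.0])

# Pearson correlation coefficient r, taken from the off-diagonal
# entry of the 2x2 correlation matrix.
r = np.corrcoef(girth, weight)[0, 1]

# Coefficient of determination: the proportion of the variation in
# weight explained by a best-fit line on girth.
r_squared = r ** 2
```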
In simple linear regression, a single dependent variable, Y, is considered to be a function of an independent variable, X, and the relationship between the variables is defined by a straight line. (Note: many biological relationships are known to be non-linear, and other models apply to them.) When a best-fit regression line is calculated, its linear equation (y = mx + b) defines how the variation in the X variable explains the variation in the Y variable. Regression analysis also involves measuring the amount of variation not accounted for by the regression equation; this variation is known as the residual. A statistical test called the F-test is used to compare the variation explained by the regression line to the residual variation, and the p-value that results from the F-test corresponds to the probability that the slope of the regression line is zero (i.e., the null hypothesis).
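The partitioning of variation described above (explained by the line versus residual) and the resulting F statistic can be sketched with NumPy. The data values here are hypothetical:

```python
import numpy as np

# Hypothetical X and Y measurements (illustrative values only).
x = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
y = np.array([3.1, 5.2, 6.8, 9.1, 10.9, 13.2])
n = len(x)

# Best-fit line y = mx + b by least squares.
m, b = np.polyfit(x, y, 1)
predicted = m * x + b

# Partition the total variation in y into the part explained by the
# regression line and the residual (unexplained) part.
ss_total = np.sum((y - y.mean()) ** 2)
ss_residual = np.sum((y - predicted) ** 2)
ss_explained = ss_total - ss_residual

# F compares explained variation to residual variation; the p-value
# would come from the F distribution with (1, n - 2) degrees of freedom.
F = ss_explained / (ss_residual / (n - 2))
r_squared = ss_explained / ss_total
```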
As the value of r^{2} increases, one can place more confidence in the predictive value of the regression line. Particularly when many data points are used to generate a regression, the regression may be statistically significant yet have a very low r^{2}, indicating that little of the variation in the dependent variable can be explained by variation in the independent variable.
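This effect can be demonstrated with simulated data: a weak but genuine trend, measured on many points, yields a tiny p-value yet a small r^{2}. The slope, noise level, and sample size below are arbitrary choices for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated data: a real but weak linear trend buried in large scatter.
n = 2000
x = rng.uniform(0, 10, n)
y = 0.1 * x + rng.normal(0, 2.0, n)  # true slope 0.1, heavy noise

# linregress reports r and the p-value for the null hypothesis that
# the slope is zero.
result = stats.linregress(x, y)

# Expect a highly significant p-value alongside a very low r^{2}.
r_squared = result.rvalue ** 2
```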
In the example below, we used regression analysis to explore the relationship between trunk girth and weight of trees, using trunk girth as the independent variable. Calculating a regression by hand is tedious and time-consuming; statistics software and many spreadsheet packages will do a regression analysis for you. A popular statistics application used in the Biology Department is StatView. The StatView output for this example is reproduced below.

The first table summarizes the analysis, indicating that there are 104 data points in the analysis. Furthermore, our r-value is 0.916 and our coefficient of determination, r^{2}, is 0.840. These values are high, indicating that knowing the girth of a tree will allow us to make an accurate estimate of its weight.
The second table confirms our hunch of a significant relationship between tree girth and weight. The F-value in the table is 533.679, with a p-value < 0.0001. The p-value gives the probability that the slope is zero, which would indicate that there is no correlation between the two variables. The low p-value indicates that the probability that the two variables are not related is vanishingly small.
The third table gives us the coefficients for our regression equation. Remember that the formula for a straight line is y = mx + b, where m is the slope and b is the y-intercept. From the table, we see that the y-intercept is −1225.413 and m, the trunk girth coefficient, is 5.874. Therefore, the equation for our line is:
weight = 5.874(tree girth) − 1225.413
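Given this equation, predicting a tree's weight from its girth is a one-line calculation. Note that girth must be in the same units used to fit the regression:

```python
# Predicted weight from the fitted regression equation
# weight = 5.874(girth) - 1225.413 (units as in the original data).
def predicted_weight(girth):
    return 5.874 * girth - 1225.413
```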
Finally, StatView provides a plot of weight versus trunk girth. It is easy to see why the r^{2} value is so high.