Watershed Management Quotes

We've searched our database for all the quotes and captions related to Watershed Management. Here they are! All 13 of them:

Scholars who have studied the development of leaders have situated resilience, the ability to sustain ambition in the face of frustration, at the heart of potential leadership growth. More important than what happened to them was how they responded to these reversals, how they managed in various ways to put themselves back together, how these watershed experiences at first impeded, then deepened, and finally and decisively molded their leadership.
Doris Kearns Goodwin (Leadership: In Turbulent Times)
The violence exercised in the service of human commodification relied on a scientific empiricism always seeking to find the limits of human capacity for suffering, that point where material and social poverty threatened to consume entirely the lives it was meant to garner for sale in the Americas. In this regard, the economic enterprise of human trafficking marked a watershed in what would become an enduring project in the modern Western world: probing the limits up to which it is possible to discipline the body without extinguishing the life within. The aim in the case being economic efficiency rather than punishment, this was a regime whose intent was not to torture but rather to manage the depletion of life that resulted from the conditions of saltwater slavery. But for the Africans who were starved, sorted, and warped to make them into saltwater slaves, torture was the result. It takes no great insight to point to the role of violence in the Atlantic slave trade. But to understand what happened to Africans in this system of human trafficking requires us to ask precisely what kind of violence it requires to achieve its end, the transformation of African captives into Atlantic commodities.
Stephanie E. Smallwood (Saltwater Slavery: A Middle Passage from Africa to American Diaspora)
*THE COMMONS, which are creative - so unleash their potential* The commons are shareable resources of society or nature that people choose to use and govern through self-organising, instead of relying on the state or market for doing so. Think of how a village community might manage its only freshwater well and its nearby forest, or how Internet users worldwide collaboratively curate Wikipedia. Natural commons have traditionally emerged in communities seeking to steward Earth's 'common pool' resources, such as grazing land, fisheries, watersheds and forests. Cultural commons serve to keep alive a community's language, heritage and rituals, myths and music, traditional knowledge and practice. And the fast-growing digital commons are stewarded collaboratively online, co-creating open-source software, social networks, information and knowledge. ...In the 1970s, the little-known political scientist Elinor Ostrom started seeking out real-life examples of natural commons to find out what made them work - and she went on to win a Nobel-Memorial prize for what she discovered. Rather than being left 'open access', those successful commons were governed by clearly defined communities with collectively agreed rules and punitive sanctions for those who broke them...she realised, the commons can turn out to be a triumph, outperforming both state and market in sustainably stewarding and equitably harvesting Earth's resources... The triumph of the commons is certainly evident in the digital commons, which are fast turning into one of the most dynamic areas of the global economy. (p.82-3)
Kate Raworth (Doughnut Economics: Seven Ways to Think Like a 21st-Century Economist)
In a 2006 interview by Jim Gray, Amazon CTO Werner Vogels recalled another watershed moment: We went through a period of serious introspection and concluded that a service-oriented architecture would give us the level of isolation that would allow us to build many software components rapidly and independently. By the way, this was way before service-oriented was a buzzword. For us service orientation means encapsulating the data with the business logic that operates on the data, with the only access through a published service interface. No direct database access is allowed from outside the service, and there’s no data sharing among the services.3 That’s a lot to unpack for non–software engineers, but the basic idea is this: If multiple teams have direct access to a shared block of software code or some part of a database, they slow each other down. Whether they’re allowed to change the way the code works, change how the data are organized, or merely build something that uses the shared code or data, everybody is at risk if anybody makes a change. Managing that risk requires a lot of time spent in coordination. The solution is to encapsulate, that is, assign ownership of a given block of code or part of a database to one team. Anyone else who wants something from that walled-off area must make a well-documented service request via an API.
Colin Bryar (Working Backwards: Insights, Stories, and Secrets from Inside Amazon)
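To make the encapsulation idea concrete, here is a minimal Python sketch of a team-owned service with a published interface. All names (OrderService, place_order, get_status) are hypothetical illustrations, not Amazon's actual code: callers make well-documented requests through the interface and never touch the underlying data store.

# Hypothetical sketch of service encapsulation; not Amazon's code.
class OrderService:
    """Published service interface: the only supported access path."""

    def __init__(self) -> None:
        self.__orders = {}  # team-owned data store; no outside access

    def place_order(self, order_id: str, item: str) -> None:
        # Business logic lives with the data it operates on.
        self.__orders[order_id] = {"item": item, "status": "placed"}

    def get_status(self, order_id: str) -> str:
        # A well-documented service request; the dict itself never leaks.
        return self.__orders[order_id]["status"]

# Another team depends only on the interface, so the owning team can
# reorganize its data without breaking anyone:
service = OrderService()
service.place_order("o-1", "book")
print(service.get_status("o-1"))  # -> placed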
significantly different? Because the variances are equal, we read the t-test statistics from the top line, which states “equal variances assumed.” (If variances had been unequal, then we would read the test statistics from the second line, “equal variances not assumed.”) The t-test statistic for equal variances for this test is 2.576, which is significant at p = .011.13 Thus, we conclude that the means are significantly different; the 10th graders report feeling safer one year after the anger management program was implemented. Working Example 2 In the preceding example, the variables were both normally distributed, but this is not always the case. Many variables are highly skewed and not normally distributed. Consider another example. The U.S. Environmental Protection Agency (EPA) collects information about the water quality of watersheds, including information about the sources and nature of pollution. One such measure is the percentage of samples that exceed pollution limits for ammonia, dissolved oxygen, phosphorus, and pH.14 A manager wants to know whether watersheds in the East have higher levels of pollution than those in the Midwest. Figure 12.4 Untransformed Variable: Watershed Pollution An index variable of such pollution is constructed. The index variable is called “pollution,” and the first step is to examine it for test assumptions. Analysis indicates that the range of this variable has a low value of 0.00 percent and a high value of 59.17 percent. These are plausible values (any value above 100.00 percent is implausible). A boxplot (not shown) demonstrates that the variable has two values greater than 50.00 percent that are indicated as outliers for the Midwest region. However, the histograms shown in Figure 12.4 do not suggest that these values are unusually large; rather, the peak in both histograms is located off to the left. The distributions are heavily skewed.15 Because the samples each have fewer than 50 observations, the Shapiro-Wilk test for normality is used. The respective test statistics for East and Midwest are .969 (p = .355) and .931 (p = .007). Visual inspection confirms that the Midwest distribution is indeed nonnormal. The Shapiro-Wilk test statistics are given only for completeness; they have no substantive interpretation. We must now either transform the variable so that it becomes normal for purposes of testing, or use a nonparametric alternative. The second option is discussed later in this chapter. We also show the consequences of ignoring the problem. To transform the variable, we try the recommended transformations, such as x½ and log(x), and then examine the transformed variable for normality. If none of these transformations work, we might modify them, such as using x⅓ instead of x½ (recall that the latter is √x).16 Thus, some experimentation is required. In our case, we find that x½ works. The new Shapiro-Wilk test statistics for East and Midwest are, respectively, .969 (p = .361) and .987 (p = .883). Visual inspection of Figure 12.5 shows these two distributions to be quite normal, indeed. Figure 12.5 Transformed Variable: Watershed Pollution The results of the t-test for the transformed variable are shown in Table
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
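The workflow in this excerpt (check normality, transform, re-check, then pick the t-test line based on Levene's test) can be sketched in Python with scipy. The data below are invented placeholders, not the EPA samples from the text:

import numpy as np
from scipy import stats

# Placeholder pollution percentages (skewed, like the text's variable):
east = np.array([0.0, 1.2, 2.5, 4.9, 8.1, 12.4, 20.3, 33.0, 40.2, 59.2])
midwest = np.array([0.0, 0.4, 1.1, 2.2, 3.5, 5.0, 7.7, 11.9, 18.6, 51.0])

# Shapiro-Wilk normality test per group (appropriate for n < 50):
for name, g in [("East", east), ("Midwest", midwest)]:
    w, p = stats.shapiro(g)
    print(f"{name}: W = {w:.3f}, p = {p:.3f}")

# Square-root transformation, then re-test normality the same way:
east_t, midwest_t = np.sqrt(east), np.sqrt(midwest)

# Levene's test decides which t-test "line" to read:
lev_w, lev_p = stats.levene(east_t, midwest_t)
t, p = stats.ttest_ind(east_t, midwest_t, equal_var=(lev_p > .05))
print(f"t = {t:.3f}, p = {p:.3f}")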
categorical and the dependent variable is continuous. The logic of this approach is shown graphically in Figure 13.1. The overall group mean is the grand mean (the mean of means). The boxplots represent the scores of observations within each group. (As before, the horizontal lines indicate means, rather than medians.) Recall that variance is a measure of dispersion. In both parts of the figure, w is the within-group variance, and b is the between-group variance. Each graph has three within-group variances and three between-group variances, although only one of each is shown. Note in part A that the between-group variances are larger than the within-group variances, which results in a large F-test statistic using the above formula, making it easier to reject the null hypothesis. Conversely, in part B the within-group variances are larger than the between-group variances, causing a smaller F-test statistic and making it more difficult to reject the null hypothesis. The hypotheses are written as follows: H0: No differences between any of the group means exist in the population. HA: At least one difference between group means exists in the population. Note how the alternate hypothesis is phrased, because the logical opposite of “no differences between any of the group means” is that at least one pair of means differs. H0 is also called the global F-test because it tests for differences among any means. The formulas for calculating the between-group variances and within-group variances are quite cumbersome for all but the simplest of designs.1 In any event, statistical software calculates the F-test statistic and reports the level at which it is significant.2 When the preceding null hypothesis is rejected, analysts will also want to know which differences are significant. For example, analysts will want to know which pairs of differences in watershed pollution are significant across regions. Although one approach might be to use the t-test to sequentially test each pair of differences, this should not be done. It would not only be a most tedious undertaking but would also inadvertently and adversely affect the level of significance: the chance of finding a significant pair by chance alone increases as more pairs are examined. Specifically, the probability of rejecting the null hypothesis in one of two tests is [1 – 0.95² =] .098, the probability of rejecting it in one of three tests is [1 – 0.95³ =] .143, and so forth. Thus, sequential testing of differences does not reflect the true level of significance for such tests and should not be used. Post-hoc tests test all possible group differences and yet maintain the true level of significance. Post-hoc tests vary in their methods of calculating test statistics and holding experiment-wide error rates constant. Three popular post-hoc tests are the Tukey, Bonferroni, and Scheffe tests.
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
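Two pieces of this passage lend themselves to a short Python illustration: the familywise error arithmetic for sequential t-tests, and a global F-test followed by a Tukey post-hoc test (via statsmodels). The group data are invented:

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Chance of at least one false rejection grows with the number of tests:
for k in (2, 3):
    print(f"one of {k} tests: {1 - 0.95 ** k:.3f}")  # .098, .143

# Global F-test across three invented groups:
g1 = np.array([2.0, 3.1, 4.2, 5.0, 3.8])
g2 = np.array([4.1, 5.3, 6.0, 7.2, 5.9])
g3 = np.array([6.3, 7.0, 8.4, 9.1, 7.7])
f, p = stats.f_oneway(g1, g2, g3)
print(f"global F = {f:.2f}, p = {p:.3f}")

# Tukey post-hoc test: all pairwise differences at once, with the
# experiment-wide error rate held constant.
values = np.concatenate([g1, g2, g3])
labels = ["g1"] * 5 + ["g2"] * 5 + ["g3"] * 5
print(pairwise_tukeyhsd(values, labels, alpha=0.05))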
The Scheffe test is the most conservative, the Tukey test is best when many comparisons are made (when there are many groups), and the Bonferroni test is preferred when few comparisons are made. However, these post-hoc tests often support the same conclusions.3 To illustrate, let’s say the independent variable has three categories. Then, a post-hoc test will examine hypotheses for whether μ₁ = μ₂, μ₁ = μ₃, and μ₂ = μ₃. In addition, these tests will also examine which categories have means that are not significantly different from each other, hence, providing homogeneous subsets. An example of this approach is given later in this chapter. Knowing such subsets can be useful when the independent variable has many categories (for example, classes of employees). Figure 13.1 ANOVA: Significant and Insignificant Differences Eta-squared (η²) is a measure of association for mixed nominal-interval variables and is appropriate for ANOVA. Its values range from zero to one, and it is interpreted as the percentage of variation explained. It is a directional measure, and computer programs produce two statistics, alternating specification of the dependent variable. Finally, ANOVA can be used for testing interval-ordinal relationships. We can ask whether the change in means follows a linear pattern that is either increasing or decreasing. For example, assume we want to know whether incomes increase according to the political orientation of respondents, when measured on a seven-point Likert scale that ranges from very liberal to very conservative. If a linear pattern of increase exists, then a linear relationship is said to exist between these variables. Most statistical software packages can test for a variety of progressive relationships. ANOVA Assumptions ANOVA assumptions are essentially the same as those of the t-test: (1) the dependent variable is continuous, and the independent variable is ordinal or nominal, (2) the groups have equal variances, (3) observations are independent, and (4) the variable is normally distributed in each of the groups. The assumptions are tested in a similar manner. Relative to the t-test, ANOVA requires a little more concern regarding the assumptions of normality and homogeneity. First, like the t-test, ANOVA is not robust for the presence of outliers, and analysts examine the presence of outliers for each group. Also, ANOVA appears to be less robust than the t-test for deviations from normality. Second, regarding groups having equal variances, our main concern with homogeneity is that there are no substantial differences in the amount of variance across the groups; the test of homogeneity is a strict test, testing for any departure from equal variances, and in practice, groups may have neither equal variances nor substantial differences in the amount of variances. In these instances, a visual finding of no substantial differences suffices. Other strategies for dealing with heterogeneity are variable transformations and the removal of outliers, which increase variance, especially in small groups. Such outliers are detected by examining boxplots for each group separately. Also, some statistical software packages (such as SPSS), now offer post-hoc tests when equal variances are not assumed.4 A Working Example The U.S. 
Environmental Protection Agency (EPA) measured the percentage of wetland loss in watersheds between 1982 and 1992, the most recent period for which data are available (government statistics are sometimes a little old).5 An analyst wants to know whether watersheds with large surrounding populations have
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
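Eta-squared, described in the excerpt as the percentage of variation explained, is straightforward to compute by hand as the between-group sum of squares over the total sum of squares. A minimal numpy sketch with invented groups:

import numpy as np

def eta_squared(groups):
    # eta^2 = SS_between / SS_total, ranging from 0 to 1.
    pooled = np.concatenate(groups)
    grand_mean = pooled.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_total = ((pooled - grand_mean) ** 2).sum()
    return ss_between / ss_total

groups = [np.array([1.0, 2, 3]), np.array([2.0, 4, 6]), np.array([5.0, 7, 9])]
print(f"eta-squared = {eta_squared(groups):.3f}")  # share of variation explained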
suffered greater wetland loss than watersheds with smaller surrounding populations. Most watersheds have suffered no or only very modest losses (less than 3 percent during the decade in question), and few watersheds have suffered more than a 4 percent loss. The distribution is thus heavily skewed toward watersheds with little wetland loss (that is, to the left) and is clearly not normally distributed.6 To increase normality, the variable is transformed by twice taking the square root, that is, x^0.25. The transformed variable is then normally distributed: the Kolmogorov-Smirnov statistic is 0.82 (p = .51 > .05). The variable also appears visually normal for each of the population subgroups. There are four population groups, designed to ensure an adequate number of observations in each. Boxplot analysis of the transformed variable indicates four large and three small outliers (not shown). Examination suggests that these are plausible and representative values, which are therefore retained. Later, however, we will examine the effect of these seven observations on the robustness of statistical results. Descriptive analysis of the variables is shown in Table 13.1. Generally, large populations tend to have larger average wetland losses, but the standard deviations are large relative to (the difference between) these means, raising considerable doubt as to whether these differences are indeed statistically significant. Also, the untransformed variable shows that the mean wetland loss is less among watersheds with “Medium I” populations than in those with “Small” populations (1.77 versus 2.52). The transformed variable shows the opposite order (1.06 versus 0.97). Further investigation shows this to be the effect of the three small outliers and two large outliers on the calculation of the mean of the untransformed variable in the “Small” group. Variable transformation minimizes this effect. These outliers also increase the standard deviation of the “Small” group. Using ANOVA, we find that the transformed variable has unequal variances across the four groups (Levene’s statistic = 2.83, p = .041 < .05). Visual inspection, shown in Figure 13.2, indicates that differences are not substantial for observations within the group interquartile ranges, the areas indicated by the boxes. The differences seem mostly caused by observations located in the whiskers of the “Small” group, which include the five outliers mentioned earlier. (The other two outliers remain outliers and are shown.) For now, we conclude that no substantial differences in variances exist, but we later test the robustness of this conclusion with consideration of these observations (see Figure 13.2). Table 13.1 Variable Transformation We now proceed with the ANOVA analysis. First, Table 13.2 shows that the global F-test statistic is 2.91, p = .038 < .05. Thus, at least one pair of means is significantly different. (The term sum of squares is explained in note 1.) Getting Started Try ANOVA on some data of your choice. Second, which pairs are significantly different? We use the Bonferroni post-hoc test because relatively few comparisons are made (there are only four groups). The computer-generated results (not shown in Table 13.2) indicate that the only significant difference concerns the means of the “Small” and “Large” groups. This difference (1.26 - 0.97 = 0.29 [of transformed values]) is significant at the 5 percent level (p = .028). The Tukey and Scheffe tests lead to the same conclusion (respectively, p = .024 and .044). 
(It should be noted that post-hoc tests also exist for when equal variances are not assumed. In our example, these tests lead to the same result.7) This result is consistent with a visual reexamination of Figure 13.2, which shows that differences between group means are indeed small. The Tukey and
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
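The pipeline in this working example (fourth-root transformation, Levene's test, global F-test, Bonferroni-adjusted pairwise comparisons) can be sketched in Python. The data are randomly generated stand-ins, not the EPA wetland-loss values; the Bonferroni adjustment shown is the standard multiply-by-number-of-pairs rule:

import numpy as np
from itertools import combinations
from scipy import stats

# Placeholder skewed data for four population groups:
rng = np.random.default_rng(1)
groups = {name: rng.gamma(1.5, 1.0, 25)
          for name in ("Small", "Medium I", "Medium II", "Large")}

# Twice taking the square root is the x^0.25 transformation:
transformed = {k: v ** 0.25 for k, v in groups.items()}

lev = stats.levene(*transformed.values())
f, p = stats.f_oneway(*transformed.values())
print(f"Levene p = {lev.pvalue:.3f}; global F = {f:.2f}, p = {p:.3f}")

# Bonferroni post-hoc: scale each pairwise p-value by the number
# of comparisons (six pairs for four groups), capped at 1.
pairs = list(combinations(transformed, 2))
for a, b in pairs:
    t, p_raw = stats.ttest_ind(transformed[a], transformed[b])
    print(f"{a} vs. {b}: adjusted p = {min(1.0, p_raw * len(pairs)):.3f}")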
Scheffe tests also produce “homogeneous subsets,” that is, groups that have statistically identical means. Both the three largest and the three smallest populations have identical means. The Tukey levels of statistical significance are, respectively, .725 and .165 (both > .05). This is shown in Table 13.3. Figure 13.2 Group Boxplots Table 13.2 ANOVA Table Third, is the increase in means linear? This test is an option on many statistical software packages that produces an additional line of output in the ANOVA table, called the “linear term for unweighted sum of squares,” with the appropriate F-test. Here, that F-test statistic is 7.85, p = .006 < .01, and so we conclude that the apparent linear increase is indeed significant: wetland loss is linearly associated with the increased surrounding population of watersheds.8 Figure 13.2 does not clearly show this, but the enlarged Y-axis in Figure 13.3 does. Fourth, are our findings robust? One concern is that the statistical validity is affected by observations that statistically (although not substantively) are outliers. Removing the seven outliers identified earlier does not affect our conclusions. The resulting variable remains normally distributed, and there are no (new) outliers for any group. The resulting variable has equal variances across the groups (Levene’s test = 1.03, p = .38 > .05). The global F-test is 3.44 (p = .019 < .05), and the Bonferroni post-hoc test similarly finds that only the differences between the “Small” and “Large” group means are significant (p = .031). The increase remains linear (F = 6.74, p = .011 < .05). Thus, we conclude that the presence of observations with large values does not alter our conclusions. Table 13.3 Homogeneous Subsets Figure 13.3 Watershed Loss, by Population We also test the robustness of conclusions for different variable transformations. The extreme skewness of the untransformed variable allows for only a limited range of root transformations that produce normality. Within this range (power 0.222 through 0.275), the preceding conclusions are replicated fully. Natural log and base-10 log transformations also result in normality and replicate these results, except that the post-hoc tests fail to identify that the means of the “Large” and “Small” groups are significantly different. However, the global F-test is (marginally) significant (F = 2.80, p = .043 < .05), which suggests that this difference is too small to detect with this transformation. A single, independent-samples t-test for this difference is significant (t = 2.47, p = .017 < .05), suggesting that this problem may have been exacerbated by the limited number of observations. In sum, we find converging evidence for our conclusions. As this example also shows, when using statistics, analysts frequently must exercise judgment and justify their decisions.9 Finally, what is the practical significance of this analysis? The wetland loss among watersheds with large surrounding populations is [(3.21 – 2.52)/2.52 =] 27.4 percent greater than among those surrounded by small populations. It is up to managers and elected officials to determine whether a difference of this magnitude warrants intervention in watersheds with large surrounding populations.10
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
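The robustness check across transformations described above can also be automated. A hedged sketch, again with placeholder data: loop over candidate powers and log transforms, flag which ones pass Shapiro-Wilk in every group, and re-run the global F-test under each:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
groups = [rng.gamma(1.5, 1.0, 25) for _ in range(4)]  # placeholder data

transforms = {f"x^{pw}": (lambda x, pw=pw: x ** pw)
              for pw in (0.222, 0.25, 0.275)}
transforms["ln(x+1)"] = lambda x: np.log(x + 1)      # +1 avoids log(0)
transforms["log10(x+1)"] = lambda x: np.log10(x + 1)

for name, fn in transforms.items():
    t_groups = [fn(g) for g in groups]
    all_normal = all(stats.shapiro(g).pvalue > .05 for g in t_groups)
    f, p = stats.f_oneway(*t_groups)
    print(f"{name}: normal in all groups = {all_normal}, F = {f:.2f}, p = {p:.3f}")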
Beyond One-Way ANOVA The approach described in the preceding section is called one-way ANOVA. This scenario is easily generalized to accommodate more than one independent variable. These independent variables are either discrete (called factors) or continuous (called covariates). These approaches are called n-way ANOVA or ANCOVA (the “C” indicates the presence of covariates). Two-way ANOVA, for example, allows for testing of the effect of two different independent variables on the dependent variable, as well as the interaction of these two independent variables. An interaction effect between two variables describes the way that variables “work together” to have an effect on the dependent variable. This is perhaps best illustrated by an example. Suppose that an analyst wants to know whether the number of health care information workshops attended, as well as a person’s education, are associated with healthy lifestyle behaviors. Although we can surely theorize how attending health care information workshops and a person’s education can each affect an individual’s healthy lifestyle behaviors, it is also easy to see that the level of education can affect a person’s propensity for attending health care information workshops, as well. Hence, an interaction effect could also exist between these two independent variables (factors). The effects of each independent variable on the dependent variable are called main effects (as distinct from interaction effects). To continue the earlier example, suppose that in addition to population, an analyst also wants to consider a measure of the watershed’s preexisting condition, such as the number of plant and animal species at risk in the watershed. Two-way ANOVA produces the results shown in Table 13.4, using the transformed variable mentioned earlier. The first row, labeled “model,” refers to the combined effects of all main and interaction effects in the model on the dependent variable. This is the global F-test. The “model” row shows that the two main effects and the single interaction effect, when considered together, are significantly associated with changes in the dependent variable (p < .001). However, the results also show a reduced significance level of “population” (now, p = .064), which seems related to the interaction effect (p = .076). Although neither effect is significant at conventional levels, the results do suggest that an interaction effect is present between population and watershed condition (of which the number of at-risk species is an indicator) on watershed wetland loss. Post-hoc tests are only provided separately for each of the independent variables (factors), and the results show the same homogeneous grouping for both of the independent variables. Table 13.4 Two-Way ANOVA Results As we noted earlier, ANOVA is a family of statistical techniques that allow for a broad range of rather complex experimental designs. Complete coverage of these techniques is well beyond the scope of this book, but in general, many of these techniques aim to discern the effect of variables in the presence of other (control) variables. ANOVA is but one approach for addressing control variables. A far more common approach in public policy, economics, political science, and public administration (as well as in many other fields) is multiple regression (see Chapter 15). Many analysts feel that ANOVA and regression are largely equivalent. 
Historically, the preference for ANOVA stems from its uses in medical and agricultural research, with applications in education and psychology. Finally, the ANOVA approach can be generalized to allow for testing on two or more dependent variables. This approach is called multiple analysis of variance, or MANOVA. Regression-based analysis can also be used for dealing with multiple dependent variables, as mentioned in Chapter 17.
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
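A two-way ANOVA with an interaction term is a one-liner in the statsmodels formula interface. This sketch assumes an invented data frame with hypothetical columns loss, pop, and condition standing in for wetland loss, population group, and watershed condition:

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "loss": rng.gamma(1.5, 1.0, 120) ** 0.25,  # transformed outcome
    "pop": rng.choice(["Small", "Medium", "Large"], 120),
    "condition": rng.choice(["Good", "At risk"], 120),
})

# C() marks a factor; '*' expands to both main effects plus interaction.
model = smf.ols("loss ~ C(pop) * C(condition)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects, interaction, residual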
A NONPARAMETRIC ALTERNATIVE A nonparametric alternative to one-way ANOVA is Kruskal-Wallis’ H test of one-way ANOVA. Instead of using the actual values of the variables, Kruskal-Wallis’ H test assigns ranks to the variables, as shown in Chapter 11. As a nonparametric method, Kruskal-Wallis’ H test does not assume normal populations, but the test does assume similarly shaped distributions for each group. This test is applied readily to our one-way ANOVA example, and the results are shown in Table 13.5. Table 13.5 Kruskal-Wallis’ H-Test of One-Way ANOVA Kruskal-Wallis’ H one-way ANOVA test shows that population is significantly associated with watershed loss (p = .013). This is one instance in which the general rule that nonparametric tests have higher levels of significance is not seen. Although Kruskal-Wallis’ H test does not report mean values of the dependent variable, the pattern of mean ranks is consistent with Figure 13.2. A limitation of this nonparametric test is that it does not provide post-hoc tests or analysis of homogeneous groups, nor are there nonparametric counterparts to n-way ANOVA tests such as the two-way ANOVA described earlier. SUMMARY One-way ANOVA extends the t-test by allowing analysts to test whether two or more groups have different means of a continuous variable. The t-test is limited to only two groups. One-way ANOVA can be used, for example, when analysts want to know if the mean of a variable varies across regions, racial or ethnic groups, population or employee categories, or another grouping with multiple categories. ANOVA is a family of statistical techniques, and one-way ANOVA is the most basic of these methods. ANOVA is a parametric test that makes the following assumptions: The dependent variable is continuous. The independent variable is ordinal or nominal. The groups have equal variances. The variable is normally distributed in each of the groups. Relative to the t-test, ANOVA requires more attention to the assumptions of normality and homogeneity. ANOVA is not robust for the presence of outliers, and it appears to be less robust than the t-test for deviations from normality. Variable transformations and the removal of outliers are to be expected when using ANOVA. ANOVA also includes three other types of tests of interest: post-hoc tests of mean differences among categories, tests of homogeneous subsets, and tests for the linearity of mean differences across categories. Two-way ANOVA addresses the effect of two independent variables on a continuous dependent variable. When using two-way ANOVA, the analyst is able to distinguish main effects from interaction effects. Kruskal-Wallis’ H test is a nonparametric alternative to one-way ANOVA. KEY TERMS   Analysis of variance (ANOVA) ANOVA assumptions Covariates Factors Global F-test Homogeneous subsets Interaction effect Kruskal-Wallis’ H test of one-way ANOVA Main effect One-way ANOVA Post-hoc test Two-way ANOVA Notes   1. The between-group variance is
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
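The Kruskal-Wallis H test is a single call in scipy; because it ranks the pooled values, no normality assumption is needed. A minimal sketch with invented groups:

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
groups = [rng.gamma(1.5, 1.0, 25) for _ in range(4)]  # placeholder data

h, p = stats.kruskal(*groups)
print(f"H = {h:.2f}, p = {p:.3f}")  # reject H0 of equal distributions if p < .05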
12.2. The transformed variable has equal variances across the two groups (Levene’s test, p = .119), and the t-test statistic is –1.308 (df = 85, p = .194). Thus, the differences in pollution between watersheds in the East and Midwest are not significant. (The negative sign of the t-test statistic, –1.308, merely reflects the order of the groups for calculating the difference: the testing variable has a larger value in the Midwest than in the East. Reversing the order of the groups results in a positive sign.) Table 12.2 Independent-Samples T-Test: Output For comparison, results for the untransformed variable are shown as well. The untransformed variable has unequal variances across the two groups (Levene’s test, p = .036), and the t-test statistic is –1.801 (df = 80.6, p = .075). Although this result also shows that differences are insignificant, the level of significance is higher; there are instances in which using nonnormal variables could lead to rejecting the null hypothesis. While our finding of insignificant differences is indeed robust, analysts cannot know this in advance. Thus, analysts will need to deal with nonnormality. Variable transformation is one approach to the problem of nonnormality, but transforming variables can be a time-intensive and somewhat artful activity. The search for alternatives has led many analysts to consider nonparametric methods. TWO T-TEST VARIATIONS Paired-Samples T-Test Analysts often use the paired t-test when applying before and after tests to assess student or client progress. Paired t-tests are used when analysts have a dependent rather than an independent sample (see the third t-test assumption, described earlier in this chapter). The paired-samples t-test tests the null hypothesis that the mean difference between the before and after test scores is zero. Consider the following data from Table 12.3. Table 12.3 Paired-Samples Data The mean “before” score is 3.39, and the mean “after” score is 3.87; the mean difference is 0.48. The paired t-test tests the null hypothesis by testing whether the mean of the difference variable (“difference”) is zero. The paired t-test statistic is calculated as t = D̄/(sD/√n), where D is the difference between before and after measurements, D̄ is the mean of these differences, and sD is the standard deviation of these differences. Regarding t-test assumptions, the variables are continuous, and the issue of heterogeneity (unequal variances) is moot because this test involves only one variable, D; no Levene’s test statistics are produced. We do test the normality of D and find that it is normally distributed (Shapiro-Wilk = .925, p = .402). Thus, the assumptions are satisfied. We proceed with testing whether the difference between before and after scores is statistically significant. We find that the paired t-test yields a t-test statistic of 2.43, which is significant at the 5 percent level (df = 9, p = .038 < .05).17 Hence, we conclude that the increase between the before and after scores is significant at the 5 percent level.18 One-Sample T-Test Finally, the one-sample t-test tests whether the mean of a single variable is different from a prespecified value (norm). For example, suppose we want to know whether the mean of the before group in Table 12.3 is different from the value of, say, 3.5? Testing against a norm is akin to the purpose of the chi-square goodness-of-fit test described in Chapter 11, but here we are dealing with a continuous variable rather than a categorical one, and we are testing the mean rather than its distribution. 
The one-sample t-test assumes that the single variable is continuous and normally distributed. As with the paired t-test, the issue of heterogeneity is moot because there is only one variable. The Shapiro-Wilk test shows that the variable “before” is normal (.917, p = .336). The one-sample t-test statistic for testing against the test value of 3.5 is –0.515 (df = 9, p = .619 > .05). Hence, the mean of 3.39 is not significantly different from the norm of 3.5.
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
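Both t-test variations are single calls in scipy. The before/after scores below are made up (the excerpt's Table 12.3 values are not reproduced):

import numpy as np
from scipy import stats

before = np.array([3.1, 3.4, 3.0, 3.6, 3.2, 3.5, 3.8, 3.3, 3.6, 3.4])
after = np.array([3.6, 3.9, 3.4, 4.1, 3.5, 3.9, 4.3, 3.7, 4.0, 3.8])

# Paired t-test: H0 is that the mean of the differences D is zero.
t, p = stats.ttest_rel(after, before)
print(f"paired: t = {t:.2f}, df = {len(before) - 1}, p = {p:.3f}")

# One-sample t-test: is the 'before' mean different from the norm 3.5?
t1, p1 = stats.ttest_1samp(before, popmean=3.5)
print(f"one-sample: t = {t1:.2f}, p = {p1:.3f}")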
When rain falls, it starts its inevitable journey to the sea. If this journey is rapid, the rain carries topsoil and pollutants with it to streams and then rivers, which terminate in the earth’s oceans. If rainwater is slowed by vegetation, more of it seeps into the ground rather than rushing into local streams. Such infiltration not only replenishes water tables but also scrubs the water clean of its nitrogen, phosphorus, and heavy metal pollutants. Moreover, slow and steady discharge from water tables into streams and rivers reduces the destructive pulse of stormwater that scours our tributaries of their biota. By virtue of their copious leaf surface area and large root systems, oaks impede rainwater from the moment it condenses out of clouds. Much of the water intercepted by leafy oak canopies (up to 3,000 gallons per tree annually) evaporates before it ever reaches the ground (Cotrone 2014). All this makes oaks one of our very best tools in responsible watershed management.
Douglas W. Tallamy (The Nature of Oaks: The Rich Ecology of Our Most Essential Native Trees)