Sample Of Indicative Quotes

We've searched our database for all the quotes and captions related to Sample Of Indicative. Here they are! All 30 of them:

One substantial hunk of junk led to the formation of the Moon. The unexpected scarcity of iron and other higher-mass elements in the Moon, derived from lunar samples returned by Apollo astronauts, indicates that the Moon most likely burst forth from Earth’s iron-poor crust and mantle after a glancing collision with a wayward Mars-sized protoplanet.
Neil deGrasse Tyson (Astrophysics for People in a Hurry (Astrophysics for People in a Hurry Series))
The Bible indicates that human beings can attain genuine knowledge of God, the self, and the world. God created both the world and humankind and his objective existence is the fixed reference point that makes authentic knowledge possible.
Kenneth R. Samples (A World of Difference: Putting Christian Truth-Claims to the Worldview Test (Reasons to Believe))
As it happens, there’s a way of presenting data, called the funnel plot, that indicates whether or not the scientific literature is biased in this way. (If statistics don’t excite you, feel free to skip straight to the probably unsurprising conclusion in the last sentence of this paragraph.) You plot the data points from all your studies according to the effect sizes, running along the horizontal axis, and the sample size (roughly) running up the vertical axis. Why do this? The results from very large studies, being more “precise,” should tend to cluster close to the “true” size of the effect. Smaller studies, by contrast, being subject to more random error because of their small, idiosyncratic samples, will be scattered over a wider range of effect sizes. Some small studies will greatly overestimate a difference; others will greatly underestimate it (or even “flip” it in the wrong direction). The next part is simple but brilliant. If there isn’t publication bias toward reports of greater male risk taking, these over- and underestimates of the sex difference should be symmetrical around the “true” value indicated by the very large studies. This, with quite a bit of imagination, will make the plot of the data look like an upside-down funnel. (Personally, my vote would have been to call it the candlestick plot, but I wasn’t consulted.) But if there is bias, then there will be an empty area in the plot where the smaller samples that underestimated the difference, found no differences, or yielded greater female risk taking should be. In other words, the overestimates of male risk taking get published, but various kinds of “underestimates” do not. When Nelson plotted the data she’d been examining, this is exactly what she found: “Confirmation bias is strongly indicated.” This
Cordelia Fine (Testosterone Rex: Myths of Sex, Science, and Society)
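A funnel plot like the one Fine describes is easy to simulate. The minimal Python sketch below is not from the book; it uses synthetic studies whose effect-size estimates scatter more widely when samples are small, then plots effect size against sample size so the expected upside-down funnel shape is visible. With publication bias, points would be missing from one side of the funnel's base.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

true_effect = 0.3                                # hypothetical "true" effect size
sample_sizes = rng.integers(20, 2000, size=200)  # hypothetical study sizes

# Each simulated study estimates the effect with error that shrinks as n grows.
estimates = rng.normal(loc=true_effect, scale=1.0 / np.sqrt(sample_sizes))

plt.scatter(estimates, sample_sizes, alpha=0.6)
plt.axvline(true_effect, linestyle="--", label="true effect")
plt.xlabel("Estimated effect size")
plt.ylabel("Sample size")
plt.title("Simulated funnel plot (no publication bias)")
plt.legend()
plt.show()
```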
Several researchers demonstrate the ways people fail to label trauma as such or underreport traumatic experiences. In a sample of 1,526 university students, Rausch and Knutson (1991) found that although participants reported receiving punitive treatment similar to that of their siblings, they were more than twice as likely to identify their siblings’ experiences as abusive as they were to label their own in this way. The authors reported that participants were likely to interpret parental treatment toward themselves but not parental treatment toward their siblings as deserved and therefore not abusive. Other studies similarly indicate that those reporting abuse experiences often do not demonstrate a metaconsciousness of having been abused (Goldsmith & Freyd, in press; Koss, 1998; Varia & Abidin, 1999; Weinbach & Curtiss, 1986)." KNOWING AND NOT KNOWING ABOUT TRAUMA: IMPLICATIONS FOR THERAPY (2004)
Jennifer J. Freyd
THE BUCCAL CELL SMEAR TEST (EXATest) Using cells gently scraped from an area in the mouth between the bottom teeth and the back of the tongue provides an accurate means of measuring the amount of magnesium in the cells of the body. Measuring cellular magnesium in this way indicates the amount of magnesium in heart and muscle cells, the two major body tissues affected by magnesium deficiency. The buccal cell smear test can be used to sample many things in cells; however, IntraCellular Diagnostics has developed a testing procedure called EXATest specifically to identify the amounts of certain minerals in the cell. The company sends sampling kits to your doctor’s office, where a simple procedure, which takes about 60 seconds, is performed. Your doctor uses a wooden spatula to scrape off superficial layers of cells under your tongue. The scrapings are carefully placed on a microscope slide and sent back to the lab. A special electron microscope then measures the amount of magnesium and other minerals in the sample on the slide. The results are sent back to your doctor. The test is expensive but may be covered by Medicare and insurance.
Carolyn Dean (The Magnesium Miracle (Revised and Updated))
recent research indicates that unstructured play in natural settings is essential for children’s healthy growth. As any parent or early childhood educator will attest, play is an innate drive. It is also the primary vehicle for youngsters to experience and explore their surroundings. Compared to kids confined indoors, children who regularly play in nature show heightened motor control—including balance, coordination, and agility. They tend to engage more in imaginative and creative play, which in turn fosters language, abstract reasoning, and problem-solving skills, together with a sense of wonder. Nature play is superior at engendering a sense of self and a sense of place, allowing children to recognize both their independence and interdependence. Play in outdoor settings also exceeds indoor alternatives in fostering cognitive, emotional, and moral development. And individuals who spend abundant time playing outdoors as children are more likely to grow up with a strong attachment to place and an environmental ethic. When asked to identify the most significant environment of their childhoods, 96.5 percent of a large sample of adults named an outdoor environment. In
Scott D. Sampson (How to Raise a Wild Child: The Art and Science of Falling in Love with Nature)
We see three men standing around a vat of vinegar. Each has dipped his finger into the vinegar and has tasted it. The expression on each man's face shows his individual reaction. Since the painting is allegorical, we are to understand that these are no ordinary vinegar tasters, but are instead representatives of the "Three Teachings" of China, and that the vinegar they are sampling represents the Essence of Life. The three masters are K'ung Fu-tse (Confucius), Buddha, and Lao-tse, author of the oldest existing book of Taoism. The first has a sour look on his face, the second wears a bitter expression, but the third man is smiling. To K'ung Fu-tse (kung FOOdsuh), life seemed rather sour. He believed that the present was out of step with the past, and that the government of man on earth was out of harmony with the Way of Heaven, the government of the universe. Therefore, he emphasized reverence for the Ancestors, as well as for the ancient rituals and ceremonies in which the emperor, as the Son of Heaven, acted as intermediary between limitless heaven and limited earth. Under Confucianism, the use of precisely measured court music, prescribed steps, actions, and phrases all added up to an extremely complex system of rituals, each used for a particular purpose at a particular time. A saying was recorded about K'ung Fu-tse: "If the mat was not straight, the Master would not sit." This ought to give an indication of the extent to which things were carried out under Confucianism. To Buddha, the second figure in the painting, life on earth was bitter, filled with attachments and desires that led to suffering. The world was seen as a setter of traps, a generator of illusions, a revolving wheel of pain for all creatures. In order to find peace, the Buddhist considered it necessary to transcend "the world of dust" and reach Nirvana, literally a state of "no wind." Although the essentially optimistic attitude of the Chinese altered Buddhism considerably after it was brought in from its native India, the devout Buddhist often saw the way to Nirvana interrupted all the same by the bitter wind of everyday existence. To Lao-tse (LAOdsuh), the harmony that naturally existed between heaven and earth from the very beginning could be found by anyone at any time, but not by following the rules of the Confucianists. As he stated in his Tao Te Ching (DAO DEH JEENG), the "Tao Virtue Book," earth was in essence a reflection of heaven, run by the same laws - not by the laws of men. These laws affected not only the spinning of distant planets, but the activities of the birds in the forest and the fish in the sea. According to Lao-tse, the more man interfered with the natural balance produced and governed by the universal laws, the further away the harmony retreated into the distance. The more forcing, the more trouble. Whether heavy or light, wet or dry, fast or slow, everything had its own nature already within it, which could not be violated without causing difficulties. When abstract and arbitrary rules were imposed from the outside, struggle was inevitable. Only then did life become sour. To Lao-tse, the world was not a setter of traps but a teacher of valuable lessons. Its lessons needed to be learned, just as its laws needed to be followed; then all would go well. Rather than turn away from "the world of dust," Lao-tse advised others to "join the dust of the world." What he saw operating behind everything in heaven and earth he called Tao (DAO), "the Way."
A basic principle of Lao-tse's teaching was that this Way of the Universe could not be adequately described in words, and that it would be insulting both to its unlimited power and to the intelligent human mind to attempt to do so. Still, its nature could be understood, and those who cared the most about it, and the life from which it was inseparable, understood it best.
Benjamin Hoff (The Tao of Pooh)
I well remember the first great hemp shop that was opened in San Francisco around 1976. It was essentially a long wooden bar with stools for the customers. On the bar itself were a few large crocks containing the basic and cheaper forms of the weed—Panama Red, Acapulco Gold, Indian Ganja, and Domestic Green. But against the wall behind the bar stood a long cabinet furnished with hundreds of small drawers that a local guitar maker had decorated with intricate ivory inlays in the Italian style. Each drawer carried a label indicating the precise field and year of the product, so that one could purchase all the different varieties from Mexico, Lebanon, Morocco, Egypt, India, and Vietnam, as well as the carefully tended plants of devout cannabinologists here at home. Business was conducted with leisure and courtesy, and the salesmen offered small samples for testing at the bar, along with sensitive and expert discussion of their special effects. I might add that the stronger psychedelics, such as LSD, were coming to be used only rarely—for psychotherapy, for retreats in religious institutions, and in our special hospitals for the dying.
Alan W. Watts (Cloud-hidden, Whereabouts Unknown)
In the opinion of the A. C. Nielsen Company, the ideal radio research service must: 1. Measure the entertainment value of the program (probably best indicated by the size of the audience, bearing in mind the scope of the broadcasting facilities). 2. Measure the sales effectiveness of the program. 3. Cover the entire radio audience; that is: a. All geographical sections. b. All sizes of cities. c. Farms. d. All income classes. e. All occupations. f. All races. g. All sizes of family. h. Telephone and non-telephone homes, etc., etc. 4. Sample each of the foregoing sections of the audience in its proper proportion; that is, there must be scientific, controlled sampling — not wholly random sampling. 5. Cover a sufficiently large sample to give reliable results. 6. Cover all types of programs. 7. Cover all hours of the day. 8. Permit complete analysis of each program; for example: a. Variations in audience size at each instant during the broadcast. b. Average duration of listening. c. Detection of entertainment features or commercials which cause gain or loss of audience. d. Audience turnover from day to day or week to week, etc., etc. 9. Reveal the true popularity and listening areas of each station and each network; that is, furnish an "Audit Bureau of Circulations" for radio. A study was made by A. C. Nielsen Company of all possible methods of meeting these specifications. After careful investigation, they decided to use a graphic recording instrument known as the "audimeter" for accurately measuring radio listening. . . . The audimeter is installed in radio receivers in homes.
Judith C. Waller (Radio: The Fifth Estate)
The three types of bone resorption tests are: N-telopeptide (NTX). This marker measures the small molecules of bone collagen being excreted through the urine. High levels of NTX are associated with rapid bone resorption and low bone mass in both men and women. By testing every several months, it’s easy to monitor and determine the effectiveness of your nutritional therapy. Substantial drops in NTX indicate a reduction in bone loss and less risk for fracture. C-telopeptide (CTX). This is a similar marker to that of NTX, but CTX measures a different part of the collagen molecule. This marker can be tested from either a urine or blood sample. Deoxypyridinoline (DPD). This marker is tested, like NTX, by using a urine sample. Biological and analytical variability can be a problem with bone resorption markers, but
R. Keith Mccormick (The Whole-Body Approach to Osteoporosis: How to Improve Bone Strength and Reduce Your Fracture Risk (The New Harbinger Whole-Body Healing Series))
When studies using mental ability test scores from children are considered, the heritability of mental ability is typically found to be about .40, and the effect of the common or shared environment is found to be almost as strong, about .35. In contrast, when studies using mental ability test scores from adults (or older adolescents) are considered, estimates of the heritability of mental ability are much higher, typically about .65, whereas estimates of common or shared environment effects are much lower, probably under .20 (see review by Haworth et al., 2010). These findings indicate that differences among children in their levels of mental ability are attributable almost as much to their common environment—that is, to features of their family or household circumstances—as to their genetic inheritances. However, the findings also suggest that as children grow up, the differences among them in mental ability become less strongly related to the features of their common environments, and more strongly related to their genetic inheritances. In other words, the effect on one's mental ability of the family or household in which one is reared tends to become less important as one grows up, so that by adulthood one's level of mental ability is heavily dependent on one's genetic characteristics. It is as if one's level of mental ability—relative to that of other persons of the same age—can be raised (or lowered) during childhood by a particularly good (or poor) home environment, but then gradually returns to the level that one's genes tend to produce. The aforementioned findings are based mainly on samples of participants who belong to the broad middle class of modern Western countries. There is some evidence, though, that the heritability of IQ tends to be somewhat lower (at least until young adulthood, and perhaps beyond) when studies are conducted using participants of less enriched environments, such as those in economically underdeveloped countries or in the lowest socioeconomic classes of some Western countries (see review by Nisbett et al., 2012). One recent study (Tucker-Drob & Bates, 2016) found that in the United States, additive genetic influences had a weaker influence on IQ among persons of low socioeconomic status than among persons of high socioeconomic status. (Interestingly, Tucker-Drob and Bates did not find this effect in western European countries or in Australia, where socioeconomic status differences tend to be smaller.) The above findings suggest that whenever the heritability of IQ is discussed, it is important to consider the ages of the persons being examined as well as their socioeconomic status and their country.
Michael C. Ashton (Individual Differences and Personality)
Correlations have a hypothesis test. As with any hypothesis test, this test takes sample data and evaluates two mutually exclusive statements about the population from which the sample was drawn. For Pearson correlations, the two hypotheses are the following: Null hypothesis: There is no linear relationship between the two variables. ρ = 0. Alternative hypothesis: There is a linear relationship between the two variables. ρ ≠ 0. A correlation of zero indicates that no linear relationship exists. If your p-value is less than your significance level, the sample contains sufficient evidence to reject the null hypothesis and conclude that the correlation does not equal zero. In other words, the sample data support the notion that the relationship exists in the population.
Jim Frost (Regression Analysis: An Intuitive Guide for Using and Interpreting Linear Models)
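As a rough illustration of the hypothesis test Frost describes, the sketch below runs scipy.stats.pearsonr on made-up paired data and applies the usual decision rule against a 0.05 significance level. The data and threshold are illustrative assumptions, not taken from the book.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical paired measurements with a built-in linear relationship.
x = rng.normal(size=50)
y = 0.6 * x + rng.normal(scale=0.8, size=50)

r, p_value = stats.pearsonr(x, y)
print(f"r = {r:.3f}, p = {p_value:.4f}")

alpha = 0.05  # significance level (illustrative choice)
if p_value < alpha:
    print("Reject H0: the sample supports a nonzero linear relationship.")
else:
    print("Fail to reject H0: no evidence of a linear relationship.")
```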
P-values and coefficients are the key regression output. Collectively, these statistics indicate whether the variables are statistically significant and describe the relationships between the independent variables and the dependent variable. Low p-values (typically < 0.05) indicate that the independent variable is statistically significant. Regression analysis is a form of inferential statistics. Consequently, the p-values help determine whether the relationships that you observe in your sample also exist in the larger population. The coefficients for the independent variables represent the average change in the dependent variable given a one-unit change in the independent variable (IV) while controlling for the other IVs.
Jim Frost (Regression Analysis: An Intuitive Guide for Using and Interpreting Linear Models)
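A minimal sketch of reading coefficients and p-values from a fitted model, using statsmodels on synthetic data; the variable names and numbers are assumptions for illustration, not Frost's example.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 200

# Two hypothetical independent variables (IVs) and a dependent variable.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 + 1.5 * x1 - 0.5 * x2 + rng.normal(scale=1.0, size=n)

X = sm.add_constant(np.column_stack([x1, x2]))  # intercept + IVs
model = sm.OLS(y, X).fit()

# Coefficients: average change in y per one-unit change in each IV,
# holding the other IV constant. P-values below 0.05 suggest significance.
print(model.params)
print(model.pvalues)
```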
When Bouchard’s twin-processing operation was in full swing, he amassed a staff of eighteen—psychologists, psychiatrists, ophthalmologists, cardiologists, pathologists, geneticists, even dentists. Several of his collaborators were highly distinguished: David Lykken was a widely recognized expert on personality, and Auke Tellegen, a Dutch psychologist on the Minnesota faculty, was an expert on personality measuring. In scheduling his twin-evaluations, Bouchard tried limiting the testing to one pair of twins at a time so that he and his colleagues could devote the entire week—with a grueling fifty hours of tests—to two genetically identical individuals. Because it is not a simple matter to determine zygosity—that is, whether twins are identical or fraternal—this was always the first item of business. It was done primarily by comparing blood samples, fingerprint ridge counts, electrocardiograms, and brain waves. As much background information as possible was collected from oral histories and, when possible, from interviews with relatives and spouses. I.Q. was tested with three different instruments: the Wechsler Adult Intelligence Scale, a Raven, Mill-Hill composite test, and the first principal components of two multiple abilities batteries. The Minnesota team also administered four personality inventories (lengthy questionnaires aimed at characterizing and measuring personality traits) and three tests of occupational interests. In all the many personality facets so laboriously measured, the Minnesota team was looking for degrees of concordance and degrees of difference between the separated twins. If there was no connection between the mean scores of all twins sets on a series of related tests—I.Q. tests, for instance—the concordance figure would be zero percent. If the scores of every twin matched his or her twin exactly, the concordance figure would be 100 percent. Statistically, any concordance above 30 percent was considered significant, or rather indicated the presence of some degree of genetic influence. As the week of testing progressed, the twins were wired with electrodes, X-rayed, run on treadmills, hooked up for twenty-four hours with monitoring devices. They were videotaped and a series of questionnaires and interviews elicited their family backgrounds, educations, sexual histories, major life events, and they were assessed for psychiatric problems such as phobias and anxieties. An effort was made to avoid adding questions to the tests once the program was under way because that meant tampering with someone else’s test; it also would necessitate returning to the twins already tested with more questions. But the researchers were tempted. In interviews, a few traits not on the tests appeared similar in enough twin pairs to raise suspicions of a genetic component. One of these was religiosity. The twins might follow different faiths, but if one was religious, his or her twin more often than not was religious as well. Conversely, when one was a nonbeliever, the other generally was too. Because this discovery was considered too intriguing to pass by, an entire additional test was added, an existing instrument that included questions relating to spiritual beliefs. Bouchard would later insist that while he and his colleagues had fully expected to find traits with a high degree of heritability, they also expected to find traits that had no genetic component. He was certain, he says, that they would find some traits that proved to be purely environmental. 
They were astonished when they did not. While the degree of heritability varied widely—from the low thirties to the high seventies— every trait they measured showed at least some degree of genetic influence. Many showed a lot.
William Wright (Born That Way: Genes, Behavior, Personality)
By convention, your heart rate during sedentary activities is between its resting level and 40 percent of maximum; light activities such as cooking and slow walking boost your heart rate to between 40 and 54 percent of maximum; moderate activities like rapid walking, yoga, and working in the garden speed your heart rate to 55 to 69 percent of maximum; vigorous activities such as running, jumping jacks, and climbing a mountain demand heart rates of 70 percent or higher. Large samples of Americans asked to wear heart rate monitors indicate that a typical adult engages in about five and a half hours of light activity, just twenty minutes of moderate activity, and less than one minute of vigorous activity. In contrast, a typical Hadza adult spends nearly four hours doing light activities, two hours doing moderate-intensity activities, and twenty minutes doing vigorous activities. Altogether, twenty-first-century Americans elevate their heart rates to moderate levels between half and one-tenth as much as nonindustrial people.
Daniel E. Lieberman (Exercised: Why Something We Never Evolved to Do Is Healthy and Rewarding)
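As a small worked example of the percent-of-maximum cut-offs quoted above, the sketch below converts them into beats per minute for an assumed maximum heart rate. The 220-minus-age estimate of maximum heart rate is an assumption added here for illustration; it is not part of Lieberman's passage.

```python
def heart_rate_zones(max_hr: float) -> dict:
    """Convert the percent-of-maximum cut-offs quoted above into bpm bands."""
    return {
        "sedentary (below 40%)": (0.0, 0.40 * max_hr),
        "light (40-54%)": (0.40 * max_hr, 0.55 * max_hr),
        "moderate (55-69%)": (0.55 * max_hr, 0.70 * max_hr),
        "vigorous (70% and up)": (0.70 * max_hr, max_hr),
    }

# Assumed example: a 40-year-old, using the common 220 - age estimate of maximum.
max_hr = 220 - 40
for zone, (low, high) in heart_rate_zones(max_hr).items():
    print(f"{zone}: {low:.0f} to {high:.0f} bpm")
```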
The null hypothesis of normality is that the variable is normally distributed: thus, we do not want to reject the null hypothesis. A problem with statistical tests of normality is that they are very sensitive to small samples and minor deviations from normality. The extreme sensitivity of these tests implies the following: whereas failure to reject the null hypothesis indicates normal distribution of a variable, rejecting the null hypothesis does not indicate that the variable is not normally distributed. It is acceptable to consider variables as being normally distributed when they visually appear to be so, even when the null hypothesis of normality is rejected by normality tests. Of course, variables are preferred that are supported by both visual inspection and normality tests. In Greater Depth … Box 12.1: Why Normality? The reasons for the normality assumption are twofold: First, the features of the normal distribution are well-established and are used in many parametric tests for making inferences and hypothesis testing. Second, probability theory suggests that random samples will often be normally distributed, and that the means of these samples can be used as estimates of population means. The latter reason is informed by the central limit theorem, which states that an infinite number of relatively large samples will be normally distributed, regardless of the distribution of the population. An infinite number of samples is also called a sampling distribution. The central limit theorem is usually illustrated as follows. Assume that we know the population distribution, which has only six data elements with the following values: 1, 2, 3, 4, 5, or 6. Next, we write each of these six numbers on a separate sheet of paper, and draw repeated samples of three numbers each (that is, n = 3). We
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
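The central limit theorem illustration in this passage (a six-value population, repeated samples of n = 3) is easy to simulate. The sketch below is a rough companion, not Berman's own example code; it also runs a Shapiro-Wilk test on a small batch of sample means, with the caveat that so coarse and discrete a population only loosely approximates normality at n = 3.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
population = np.array([1, 2, 3, 4, 5, 6])  # the six-value population from the example

def sampling_distribution(n: int, draws: int = 10_000) -> np.ndarray:
    """Means of repeated random samples of size n drawn from the population."""
    return np.array([rng.choice(population, size=n, replace=True).mean()
                     for _ in range(draws)])

means_n3 = sampling_distribution(3)
print("population mean:", population.mean())                      # 3.5
print("mean of sample means (n=3):", round(means_n3.mean(), 3))   # close to 3.5

# Shapiro-Wilk on one small batch of sample means; with such a coarse population
# the normal approximation at n = 3 is only rough, so p may be small.
w, p = stats.shapiro(means_n3[:50])
print(f"Shapiro-Wilk on 50 sample means: W = {w:.3f}, p = {p:.3f}")
```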
significantly different? Because the variances are equal, we read the t-test statistics from the top line, which states “equal variances assumed.” (If variances had been unequal, then we would read the test statistics from the second line, “equal variances not assumed.”). The t-test statistic for equal variances for this test is 2.576, which is significant at p = .011. Thus, we conclude that the means are significantly different; the 10th graders report feeling safer one year after the anger management program was implemented. Working Example 2 In the preceding example, the variables were both normally distributed, but this is not always the case. Many variables are highly skewed and not normally distributed. Consider another example. The U.S. Environmental Protection Agency (EPA) collects information about the water quality of watersheds, including information about the sources and nature of pollution. One such measure is the percentage of samples that exceed pollution limits for ammonia, dissolved oxygen, phosphorus, and pH. A manager wants to know whether watersheds in the East have higher levels of pollution than those in the Midwest. (Figure 12.4: Untransformed Variable: Watershed Pollution.) An index variable of such pollution is constructed. The index variable is called “pollution,” and the first step is to examine it for test assumptions. Analysis indicates that the range of this variable has a low value of 0.00 percent and a high value of 59.17 percent. These are plausible values (any value above 100.00 percent is implausible). A boxplot (not shown) demonstrates that the variable has two values greater than 50.00 percent that are indicated as outliers for the Midwest region. However, the histograms shown in Figure 12.4 do not suggest that these values are unusually large; rather, the peak in both histograms is located off to the left. The distributions are heavily skewed. Because the samples each have fewer than 50 observations, the Shapiro-Wilk test for normality is used. The respective test statistics for East and Midwest are .969 (p = .355) and .931 (p = .007). Visual inspection confirms that the Midwest distribution is indeed nonnormal. The Shapiro-Wilk test statistics are given only for completeness; they have no substantive interpretation. We must now either transform the variable so that it becomes normal for purposes of testing, or use a nonparametric alternative. The second option is discussed later in this chapter. We also show the consequences of ignoring the problem. To transform the variable, we try the recommended transformations, such as log(x) and x½, and then examine the transformed variable for normality. If none of these transformations work, we might modify them, such as using x⅓ instead of x½ (recall that the latter is √x). Thus, some experimentation is required. In our case, we find that the x½ works. The new Shapiro-Wilk test statistics for East and Midwest are, respectively, .969 (p = .361) and .987 (p = .883). Visual inspection of Figure 12.5 shows these two distributions to be quite normal, indeed. (Figure 12.5: Transformed Variable: Watershed Pollution.) The results of the t-test for the transformed variable are shown in Table
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
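A minimal sketch of the workflow in this working example: test each group for normality with Shapiro-Wilk, apply a square-root transformation, re-test, then run the equal-variances t-test. The data below are synthetic, right-skewed stand-ins, not the EPA watershed figures.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic stand-ins for a skewed pollution index (percent of samples over limits).
east = rng.gamma(shape=2.0, scale=6.0, size=45).clip(0, 100)
midwest = rng.gamma(shape=1.5, scale=8.0, size=45).clip(0, 100)

for name, data in [("East", east), ("Midwest", midwest)]:
    w, p = stats.shapiro(data)
    print(f"{name} raw: Shapiro-Wilk p = {p:.3f}")

# Square-root transformation, as in the working example, then re-check normality.
east_t, midwest_t = np.sqrt(east), np.sqrt(midwest)
for name, data in [("East", east_t), ("Midwest", midwest_t)]:
    w, p = stats.shapiro(data)
    print(f"{name} sqrt-transformed: Shapiro-Wilk p = {p:.3f}")

t, p = stats.ttest_ind(east_t, midwest_t, equal_var=True)
print(f"t = {t:.3f}, p = {p:.3f}")
```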
Remedies exist for correcting substantial departures from normality, but these remedies may make matters worse when departures from normality are minimal. The first course of action is to identify and remove any outliers that may affect the mean and standard deviation. The second course of action is variable transformation, which involves transforming the variable, often by taking log(x) of each observation, and then testing the transformed variable for normality. Variable transformation may address excessive skewness by adjusting the measurement scale, thereby helping variables to better approximate normality. Substantively, we strongly prefer to make conclusions that satisfy test assumptions, regardless of which measurement scale is chosen. Keep in mind that when variables are transformed, the units in which results are expressed are transformed, as well. An example of variable transformation is provided in the second working example. Typically, analysts have different ways to address test violations. Examination of the causes of assumption violations often helps analysts to better understand their data. Different approaches may be successful for addressing test assumptions. Analysts should not merely go by the result of one approach that supports their case, ignoring others that perhaps do not. Rather, analysts should rely on the weight of robust, converging results to support their final test conclusions. Working Example 1 Earlier we discussed efforts to reduce high school violence by enrolling violence-prone students into classes that address anger management. Now, after some time, administrators and managers want to know whether the program is effective. As part of this assessment, students are asked to report their perception of safety at school. An index variable is constructed from different items measuring safety (see Chapter 3). Each item is measured on a seven-point Likert scale (1 = strongly disagree to 7 = strongly agree), and the index is constructed such that a high value indicates that students feel safe. The survey was initially administered at the beginning of the program. Now, almost a year later, the survey is implemented again. Administrators want to know whether students who did not participate in the anger management program feel that the climate is now safer. The analysis included here focuses on 10th graders. For practical purposes, the samples of 10th graders at the beginning of the program and one year later are regarded as independent samples; the subjects are not matched. Descriptive analysis shows that the mean perception of
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
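The two remedies described here, setting aside outliers and transforming the variable, can be sketched in a few lines. The example below uses synthetic skewed data and a simple z-score rule for flagging outliers; the rule and cut-off are illustrative assumptions, not Berman's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# A skewed, positive variable (synthetic) with two extreme outliers appended.
x = np.concatenate([rng.lognormal(mean=1.0, sigma=0.6, size=100), [60.0, 75.0]])

w, p = stats.shapiro(x)
print(f"raw: Shapiro-Wilk p = {p:.4f}")

# Remedy 1: identify and set aside extreme outliers (simple z-score rule).
z = (x - x.mean()) / x.std()
x_trimmed = x[np.abs(z) < 3]

# Remedy 2: log-transform; results are then expressed in log units.
x_log = np.log(x_trimmed)
w, p = stats.shapiro(x_log)
print(f"trimmed + log(x): Shapiro-Wilk p = {p:.4f}")
```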
knowledge of cardiovascular disease and whether such knowledge reduces behaviors that put people at risk for cardiovascular disease. Simple regression is used to analyze the relationship between two continuous variables. Continuous variables assume that the distances between ordered categories are determinable. In simple regression, one variable is defined as the dependent variable and the other as the independent variable (see Chapter 2 for the definitions). In the current example, the level of knowledge obtained from workshops and other sources might be measured on a continuous scale and treated as an independent variable, and behaviors that put people at risk for cardiovascular disease might also be measured on a continuous scale and treated as a dependent variable. Scatterplot The relationship between two continuous variables can be portrayed in a scatterplot. A scatterplot is merely a plot of the data points for two continuous variables, as shown in Figure 14.1 (without the straight line). By convention, the dependent variable is shown on the vertical (or Y-) axis, and the independent variable on the horizontal (or X-) axis. The relationship between the two variables is estimated as a straight line relationship. The line is defined by the equation y = a + bx, where a is the intercept (or constant), and b is the slope. The slope, b, is defined as the change in y divided by the change in x, or (y2 – y1)/(x2 – x1). The line is calculated mathematically such that the sum of distances from each observation to the line is minimized. By definition, the slope indicates the change in y as a result of a unit change in x. The straight line, defined by y = a + bx, is also called the regression line, and the slope (b) is called the regression coefficient. A positive regression coefficient indicates a positive relationship between the variables, shown by the upward slope in Figure 14.1. A negative regression coefficient indicates a negative relationship between the variables and is indicated by a downward-sloping line. Test of Significance The test of significance of the regression coefficient is a key test that tells us whether the slope (b) is statistically different from zero. The slope is calculated from a sample, and we wish to know whether it is significant. When the regression line is horizontal (b = 0), no relationship exists between the two variables. Then, changes in the independent variable have no effect on the dependent variable. The following hypotheses are thus stated: H0: b = 0, or the two variables are unrelated. HA: b ≠ 0, or the two variables are (positively or negatively) related. To determine whether the slope equals zero, a t-test is performed. The test statistic is defined as the slope, b, divided by the standard error of the slope, se(b), that is, t = b/se(b). The standard error of the slope is a measure of the distribution of the observations around the regression slope, which is based on the standard deviation of those observations to the regression line. Thus, a regression line with a small slope is more likely to be statistically significant when observations lie closely around it (that is, the standard error of the observations around the line is also small, resulting in a larger test statistic). By contrast, the same regression line might be statistically insignificant when observations are scattered widely around it. Observations that lie farther from the
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
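A compact way to reproduce the pieces this passage names (the slope b, the intercept a, se(b), and the t-test of H0: b = 0) is scipy.stats.linregress. The data below are hypothetical knowledge and risk-behavior scores invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Hypothetical data: knowledge score (independent) and risk-behavior score (dependent).
knowledge = rng.uniform(0, 10, size=80)
risk = 8.0 - 0.4 * knowledge + rng.normal(scale=1.5, size=80)

result = stats.linregress(knowledge, risk)
print(f"slope b = {result.slope:.3f}, intercept a = {result.intercept:.3f}")
print(f"standard error of the slope se(b) = {result.stderr:.3f}")
print(f"t-test of H0: b = 0 -> p = {result.pvalue:.4f}")
```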
Table 14.1 also shows R-square (R2), which is called the coefficient of determination. R-square is of great interest: its value is interpreted as the percentage of variation in the dependent variable that is explained by the independent variable. R-square varies from zero to one, and is called a goodness-of-fit measure. In our example, teamwork explains only 7.4 percent of the variation in productivity. Although teamwork is significantly associated with productivity, it is quite likely that other factors also affect it. It is conceivable that other factors might be more strongly associated with productivity and that, when controlled for other factors, teamwork is no longer significant. Typically, values of R2 below 0.20 are considered to indicate weak relationships, those between 0.20 and 0.40 indicate moderate relationships, and those above 0.40 indicate strong relationships. Values of R2 above 0.65 are considered to indicate very strong relationships. R is called the multiple correlation coefficient and is always 0 ≤ R ≤ 1. To summarize up to this point, simple regression provides three critically important pieces of information about bivariate relationships involving two continuous variables: (1) the level of significance at which two variables are associated, if at all (t-statistic), (2) whether the relationship between the two variables is positive or negative (b), and (3) the strength of the relationship (R2). Key Point R-square is a measure of the strength of the relationship. Its value goes from 0 to 1. The primary purpose of regression analysis is hypothesis testing, not prediction. In our example, the regression model is used to test the hypothesis that teamwork is related to productivity. However, if the analyst wants to predict the variable “productivity,” the regression output also shows the SEE, or the standard error of the estimate (see Table 14.1). This is a measure of the spread of y values around the regression line as calculated for the mean value of the independent variable, only, and assuming a large sample. The standard error of the estimate has an interpretation in terms of the normal curve, that is, 68 percent of y values lie within one standard error from the calculated value of y, as calculated for the mean value of x using the preceding regression model. Thus, if the mean index value of the variable “teamwork” is 5.0, then the calculated (or predicted) value of “productivity” is [4.026 + 0.223*5 =] 5.141. Because SEE = 0.825, it follows that 68 percent of productivity values will lie ±0.825 from 5.141 when “teamwork” = 5. Predictions of y for other values of x have larger standard errors. Assumptions and Notation There are three simple regression assumptions. First, simple regression assumes that the relationship between two variables is linear. The linearity of bivariate relationships is easily determined through visual inspection, as shown in Figure 14.2. In fact, all analysis of relationships involving continuous variables should begin with a scatterplot. When variable
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
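The prediction arithmetic in this passage can be checked directly. The sketch below simply reuses the quoted coefficients (intercept 4.026, slope 0.223, SEE 0.825) to reproduce the predicted productivity of 5.141 at teamwork = 5 and the 68 percent band of plus or minus one SEE.

```python
# Coefficients quoted in the passage above (productivity regressed on teamwork).
a, b = 4.026, 0.223   # intercept and slope
see = 0.825           # standard error of the estimate

teamwork = 5.0
predicted = a + b * teamwork
print(f"predicted productivity: {predicted:.3f}")   # 5.141

# Roughly 68% of productivity values are expected within +/- 1 SEE of the
# prediction, at the mean value of the independent variable.
print(f"68% band: {predicted - see:.3f} to {predicted + see:.3f}")
```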
COEFFICIENT The nonparametric alternative, Spearman’s rank correlation coefficient (r, or “rho”), looks at correlation among the ranks of the data rather than among the values. The ranks of data are determined as shown in Table 14.2 (adapted from Table 11.8). (Table 14.2: Ranks of Two Variables.) In Greater Depth … Box 14.1: Crime and Poverty. An analyst wants to examine empirically the relationship between crime and income in cities across the United States. The CD that accompanies the workbook Exercising Essential Statistics includes a Community Indicators dataset with assorted indicators of conditions in 98 cities such as Akron, Ohio; Phoenix, Arizona; New Orleans, Louisiana; and Seattle, Washington. The measures include median household income, total population (both from the 2000 U.S. Census), and total violent crimes (FBI, Uniform Crime Reporting, 2004). In the sample, household income ranges from $26,309 (Newark, New Jersey) to $71,765 (San Jose, California), and the median household income is $42,316. Per-capita violent crime ranges from 0.15 percent (Glendale, California) to 2.04 percent (Las Vegas, Nevada), and the median violent crime rate per capita is 0.78 percent. There are four types of violent crimes: murder and nonnegligent manslaughter, forcible rape, robbery, and aggravated assault. A measure of total violent crime per capita is calculated because larger cities are apt to have more crime. The analyst wants to examine whether income is associated with per-capita violent crime. The scatterplot of these two continuous variables shows that a negative relationship appears to be present. The Pearson’s correlation coefficient is –.532 (p < .01), and the Spearman’s correlation coefficient is –.552 (p < .01). The simple regression model shows R2 = .283. The regression model is as follows (t-test statistic in parentheses): The regression line is shown on the scatterplot. Interpreting these results, we see that the R-square value of .283 indicates a moderate relationship between these two variables. Clearly, some cities with modest median household incomes have a high crime rate. However, removing these cities does not greatly alter the findings. Also, an assumption of regression is that the error term is normally distributed, and further examination of the error shows that it is somewhat skewed. The techniques for examining the distribution of the error term are discussed in Chapter 15, but again, addressing this problem does not significantly alter the finding that the two variables are significantly related to each other, and that the relationship is of moderate strength. With this result in hand, further analysis shows, for example, by how much violent crime decreases for each increase in household income. For each increase of $10,000 in average household income, the violent crime rate drops 0.25 percent. For a city experiencing the median 0.78 percent crime rate, this would be a considerable improvement, indeed. Note also that the scatterplot shows considerable variation in the crime rate for cities at or below the median household income, in contrast to those well above it. Policy analysts may well wish to examine conditions that give rise to variation in crime rates among cities with lower incomes.
Because Spearman’s rank correlation coefficient examines correlation among the ranks of variables, it can also be used with ordinal-level data. For the data in Table 14.2, Spearman’s rank correlation coefficient is .900 (p = .035). Spearman’s rho-squared coefficient has a “percent variation explained” interpretation, similar
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
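A minimal sketch contrasting Spearman's rank correlation with Pearson's, using scipy on a small set of made-up paired observations (not the Table 14.2 data, which are not reproduced in the excerpt):

```python
import numpy as np
from scipy import stats

# Hypothetical paired observations (e.g., an ordinal rating and a score).
x = np.array([2, 5, 7, 1, 8, 3, 9, 4])
y = np.array([10, 22, 31, 8, 45, 15, 60, 18])

rho, p_s = stats.spearmanr(x, y)   # correlation computed on the ranks
r, p_p = stats.pearsonr(x, y)      # correlation computed on the values

print(f"Spearman rho = {rho:.3f} (p = {p_s:.3f})")
print(f"Pearson r    = {r:.3f} (p = {p_p:.3f})")
print(f"rho squared  = {rho ** 2:.3f}")  # 'percent variation explained' reading
```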
T-TESTS FOR INDEPENDENT SAMPLES T-tests are used to test whether the means of a continuous variable differ across two different groups. For example, do men and women differ in their levels of income, when measured as a continuous variable? Does crime vary between two parts of town? Do rich people live longer than poor people? Do high-performing students commit fewer acts of violence than do low-performing students? The t-test approach is shown graphically in Figure 12.1, which illustrates the incomes of men and women as boxplots (the lines in the middle of the boxes indicate the means rather than the medians). When the two groups are independent samples, the t-test is called the independent-samples t-test. Sometimes the continuous variable is called a “test variable” and the dichotomous variable is called a “grouping variable.” The t-test tests whether the difference of the means is significantly different from zero, that is, whether men and women have different incomes. The following hypotheses are posited: Key Point The independent-samples t-test is used when one variable is dichotomous and the other is continuous. H0: Men and women do not have different mean incomes (in the population). HA: Men and women do have different mean incomes (in the population). Alternatively, using the Greek letter μ (mu) to refer to differences in the population, H0: μm = μf, and HA: μm ≠ μf. The formula for calculating the t-test test statistic (a tongue twister?) is t = (x̄1 – x̄2)/[sp√(1/n1 + 1/n2)]. As always, the computer calculates the test statistic and reports at what level it is significant. Such calculations are seldom done by hand. To further conceptual understanding of this formula, it is useful to relate it to the discussion of hypothesis testing in Chapter 10. First, note that the difference of means, x̄1 – x̄2, appears in the numerator: the larger the difference of means, the larger the t-test test statistic, and the more likely we might reject the null hypothesis. Second, sp is the pooled variance of the two groups, that is, the weighted average of the variances of each group. Increases in the standard deviation decrease the test statistic. Thus, it is easier to reject the null hypotheses when two populations are clustered narrowly around their means than when they are spread widely around them. Finally, more observations (that is, increased information or larger n1 and n2) increase the size of the test statistic, making it easier to reject the null hypothesis. (Figure 12.1: The T-Test: Mean Incomes by Gender.)
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
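A minimal sketch of the independent-samples t-test described here, using scipy on synthetic incomes for two groups. The Levene test used to choose between the "equal variances assumed" and "not assumed" results is a common convention added here for illustration; it is not part of the quoted passage.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Hypothetical continuous test variable (income) split by a dichotomous grouping variable.
income_men = rng.normal(loc=52_000, scale=9_000, size=120)
income_women = rng.normal(loc=49_000, scale=9_500, size=130)

# Check the equal-variances assumption first (Levene's test), then run the t-test.
lev_stat, lev_p = stats.levene(income_men, income_women)
equal_var = lev_p > 0.05

t, p = stats.ttest_ind(income_men, income_women, equal_var=equal_var)
print(f"Levene p = {lev_p:.3f} -> equal variances {'assumed' if equal_var else 'not assumed'}")
print(f"t = {t:.3f}, p = {p:.4f}")
```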
Three individuals were involved: the one running the experiment, the subject of the experiment (a volunteer), and a confederate pretending to be a volunteer. These three people fill three distinct roles: the Experimenter (an authoritative role), the Teacher (a role intended to obey the orders of the Experimenter), and the Learner (the recipient of stimulus from the Teacher). The subject and the actor both drew slips of paper to determine their roles, but unknown to the subject, both slips said "teacher". The actor would always claim to have drawn the slip that read "learner", thus guaranteeing that the subject would always be the "teacher". Next, the "teacher" and "learner" were taken into an adjacent room where the "learner" was strapped into what appeared to be an electric chair. The experimenter told the participants this was to ensure that the "learner" would not escape.[1] The "teacher" and "learner" were then separated into different rooms where they could communicate but not see each other. In one version of the experiment, the confederate was sure to mention to the participant that he had a heart condition.[1] At some point prior to the actual test, the "teacher" was given a sample electric shock from the electroshock generator in order to experience firsthand what the shock that the "learner" would supposedly receive during the experiment would feel like. The "teacher" was then given a list of word pairs that he was to teach the learner. The teacher began by reading the list of word pairs to the learner. The teacher would then read the first word of each pair and read four possible answers. The learner would press a button to indicate his response. If the answer was incorrect, the teacher would administer a shock to the learner, with the voltage increasing in 15-volt increments for each wrong answer. If correct, the teacher would read the next word pair.[1] The subjects believed that for each wrong answer, the learner was receiving actual shocks. In reality, there were no shocks. After the confederate was separated from the subject, the confederate set up a tape recorder integrated with the electroshock generator, which played prerecorded sounds for each shock level. After a number of voltage-level increases, the actor started to bang on the wall that separated him from the subject. After several times banging on the wall and complaining about his heart condition, all responses by the learner would cease.[1] At this point, many people indicated their desire to stop the experiment and check on the learner. Some test subjects paused at 135 volts and began to question the purpose of the experiment. Most continued after being assured that they would not be held responsible. A few subjects began to laugh nervously or exhibit other signs of extreme stress once they heard the screams of pain coming from the learner.[1] If at any time the subject indicated his desire to halt the experiment, he was given a succession of verbal prods by the experimenter, in this order:[1] Please continue. The experiment requires that you continue. It is absolutely essential that you continue. You have no other choice, you must go on.
Wikipedia :-)
Business Plan Samples This sample business plan is intended to provide you with a template that can be used as a reference for when you’re hard at work on your plan. It's always easier to write something if you can read an example first, so here's an executive summary example that you can use as a model for your own business plan's executive summary. The Executive Summary is where you explain the general idea behind your company; it’s where you give the reader (most likely an investor, or someone else you need on board) a clear indication of why you’ve sent this Business Plan to them. In the Business Plan section, you will want to get the reader’s attention by letting them know what you do. It’s vitally important to set up a strong foundation and a thorough business plan in the beginning. Nonprofit business plans have several different features and quirks that you’ll need to include to receive funding, and having a strong passion to help your community is only a fragment of what it takes to run a not-for-profit organization.
Business Plan Writers
The Rooster taught me to wake up early and be a leader. The Butterfly encouraged me to allow a period of struggles to develop strong and look beautiful. The Squirrel showed me to be alert and fast all the time. The Dog influenced me to give up my life for my best friend. The Cat told me to exercise every day. Otherwise, I will be lazy and crazy. The Fox illustrated me to be subtle and keep my place organized and neat. The Snake demonstrated to me to hold my peace even if I am capable of attack, harm, or kill. The Monkey stimulated me to be vocal and communicate. The Tiger cultivated me to be active and fast. The Lion cultured me not to be lazy especially if I have strength and power that could be used. The Eagle was my sample for patience, beauty, courage, bravery, honor, pride, grace, and determination. The Rat skilled me to find my way out no matter what or how long it takes. The Chameleon revealed to me the ability to change my color for beauty and protection. The Fish display to live in peace even if I have to live a short life. The Delphin enhanced me to be the source of kindness, peace, harmony, and protection. The Shark enthused me to live as active and restful as I can be. The Octopus exhibited me to be silent and intelligent. The Elephant experienced me with the value of cooperation and family. To care for others and respect elders. The Pig indicated to me to act smart, clean, and shameless. The Panda appears to me as life is full of white and black times but my thick fur will enable me to survive. The Kangaroo enthused me to live with pride even if I am unable to walk backward. The Penguin influenced me to never underestimate a person. The Deer reveals the ability to sense the presence of hunters before they sense you. The Turtle brightened me to realize that I will get there no matter how long it takes me while having a shell of protection above me. The Rabbit reassured me to allow myself to be playful and silly. The Bat proved to me that I can fly even in darkness. The Alligator/crocodile alerted me that threat exists. The Ant moved me to be organized, active, and social with others. The Bee educated me to be the source of honey and cure for others. The Horse my best intelligent friend with who I bond. Trained me to recover fast from tough conditions. The Whale prompted me to take care of my young ones and show them life abilities. The Crab/Lobster enlightened me not to follow them when they make resolutions depending on previous undesirable events.
Isaac Nash (The Herok)
The hypothesis of “discriminative grandparental investment” predicts that behavioral and psychological indicators of investment should follow the degree of certainty inherent in the different types of grandparental relationships: most for MoMo, least for FaFa, and in between these two for MoFa and FaMo. Studies from different cultures have tested the hypothesis of discriminative grandparental solicitude. In one study conducted in the United States, evolutionary psychologist Todd DeKay (1995) studied a sample of 120 undergraduates. Each student completed a questionnaire that included information on biographical background and then evaluated each of the four grandparents on the following dimensions: grandparent’s physical similarity to self, grandparent’s personality similarity to self, time spent with grandparent while growing up, knowledge acquired from grandparent, gifts received from grandparent, and emotional closeness to grandparent. Figure 8.2 summarizes the results from this study. Findings show that the mother’s mother is closer to, spends more time with, and invests most resources in the grandchild, whereas father’s father scores lowest on these dimensions. Findings presumably reflect evolved psychological mechanisms sensitive to the degree of certainty of genetic relatedness.
David M. Buss (Evolutionary Psychology: The New Science of the Mind)
P-values indicate the strength of the sample evidence against the null hypothesis. If the p-value is less than the significance level, your results are statistically significant.
Jim Frost (Hypothesis Testing: An Intuitive Guide for Making Data Driven Decisions)
Once you’ve answered this question for yourself, replace a quick response with one that takes the time to describe the process you identified, points out the current step, and emphasizes the step that comes next. I call this the process-centric approach to e-mail, and it’s designed to minimize both the number of e-mails you receive and the amount of mental clutter they generate. To better explain this process and why it works consider the following process-centric responses to the sample e-mails from earlier: Process-Centric Response to E-mail #1: “I’d love to grab coffee. Let’s meet at the Starbucks on campus. Below I listed two days next week when I’m free. For each day, I listed three times. If any of those day and time combinations work for you, let me know. I’ll consider your reply confirmation for the meeting. If none of those date and time combinations work, give me a call at the number below and we’ll hash out a time that works. Looking forward to it.” Process-Centric Response to E-mail #2: “I agree that we should return to this problem. Here’s what I suggest… “Sometime in the next week e-mail me everything you remember about our discussion on the problem. Once I receive that message, I’ll start a shared directory for the project and add to it a document that summarizes what you sent me, combined with my own memory of our past discussion. In the document, I’ll highlight the two or three most promising next steps. “We can then take a crack at those next steps for a few weeks and check back in. I suggest we schedule a phone call for a month from now for this purpose. Below I listed some dates and times when I’m available for a call. When you respond with your notes, indicate the date and time combination that works best for you and we’ll consider that reply confirmation for the call. I look forward to digging into this problem.” Process-Centric
Cal Newport (Deep Work: Rules for Focused Success in a Distracted World)
But by 1989–90, it became obvious that the majority of thrown materials didn’t fall into the reactor’s pit and didn’t fulfill their tasks. The combination of rated and measured curves should, most likely, be considered a result of the “hypnotic influence” of high science upon the results of incorrect measuring. Let’s consider some facts. The first one. Consider the Central Hall of the reactor. It’s covered by huge hills of thrown materials. This could be observed from the helicopters before completion of the Shelter that encased the reactor; and it was proved by the exploratory groups that got inside the hall after a long preparatory period. But this doesn’t exclude the fact that the major part of the materials landed in the reactor’s pit. The second fact. In the middle of 1988, with the help of optical instruments and TV cameras, researchers managed to see what was inside the pit of the reactor. They found practically no thrown materials. But here one can object that these materials fell into an area of extraordinarily high temperatures, and they melted and spread over the lower rooms of the reactor. Such a process could take place. On the lower floors, they did discover great accumulations of solid lava-like masses that contained nuclear fuel. The third fact. The presence of lead would indicate that those lava-like masses contained not only materials of the reactor itself—concrete, dolomite, sand, steel, zirconium, etc.—but also materials thrown from the helicopters. But there is no lead in the reactor and the nearest rooms, even though over two thousand tons of it was thrown in! After investigation of dozens of samples, it was found that the quantity of lead in the lava masses was too small. That meant the lead didn’t get into the pit. The other components of the thrown materials fell in such a small quantity, they couldn’t influence the behavior of the release. These are the known facts.
Alexander Borovoi (My Chernobyl: The Human Story of a Scientist and the Nuclear Power Plant Catastrophe)