Regression Testing Quotes

We've searched our database for all the quotes and captions related to Regression Testing. Here they are! All 52 of them:

Grief turns out to be a place none of us know until we reach it. We anticipate (we know) that someone close to us could die, but we do not look beyond the few days or weeks that immediately follow such an imagined death. We misconstrue the nature of even those few days or weeks. We might expect if the death is sudden to feel shock. We do not expect this shock to be obliterative, dislocating to both body and mind. We might expect that we will be prostrate, inconsolable, crazy with loss. We do not expect to be literally crazy, cool customers who believe that their husband is about to return and need his shoes. In the version of grief we imagine, the model will be "healing." A certain forward movement will prevail. The worst days will be the earliest days. We imagine that the moment to most severely test us will be the funeral, after which this hypothetical healing will take place. When we anticipate the funeral we wonder about failing to "get through it," rise to the occasion, exhibit the "strength" that invariably gets mentioned as the correct response to death. We anticipate needing to steel ourselves for the moment: will I be able to greet people, will I be able to leave the scene, will I be able even to get dressed that day? We have no way of knowing that this will not be the issue. We have no way of knowing that the funeral itself will be anodyne, a kind of narcotic regression in which we are wrapped in the care of others and the gravity and meaning of the occasion. Nor can we know ahead of the fact (and here lies the heart of the difference between grief as we imagine it and grief as it is) the unending absence that follows, the void, the very opposite of meaning, the relentless succession of moments during which we will confront the experience of meaninglessness itself.
Joan Didion (The Year of Magical Thinking)
...maintaining good regression tests is the key to refactoring with confidence.
Andrew Hunt (The Pragmatic Programmer: From Journeyman to Master)
No bug is considered properly fixed without an automated regression test.
Anonymous
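As a concrete illustration of that practice (not from any of the quoted books), here is a minimal pytest-style regression test pinned to a hypothetical bug; the function name and bug number are invented.

```python
# Hypothetical example: a regression test pinned to a specific bug report.
# parse_price and bug #1234 are made up for illustration.

def parse_price(text: str) -> float:
    """Parse a price string such as '$1,234.56' into a float."""
    return float(text.replace("$", "").replace(",", ""))

def test_parse_price_handles_thousands_separator_bug_1234():
    # Bug #1234: values with a thousands separator were parsed incorrectly.
    # This test fails if the old behavior ever comes back.
    assert parse_price("$1,234.56") == 1234.56
```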
The tail is the time period from “code slush” (true code freezes are rare) or “feature freeze” to actual deployment. This is the time period when companies do some or all of the following: beta testing, regression testing, product integration, integration testing, documentation, defect fixing. The worst “tail” I’ve encountered was 18 months—18 months from feature freeze to product release, and most of that time was spent in QA.
Jim Highsmith (Adaptive Leadership)
Be wary, though, of the way news media use the word “significant,” because to statisticians it doesn’t mean “noteworthy.” In statistics, the word “significant” means that the results passed mathematical tests such as t-tests, chi-square tests, regression, and principal components analysis (there are hundreds). Statistical significance tests quantify how easily pure chance can explain the results. With a very large number of observations, even small differences that are trivial in magnitude can be beyond what our models of chance and randomness can explain. These tests don’t know what’s noteworthy and what’s not—that’s a human judgment.
Daniel J. Levitin (A Field Guide to Lies: Critical Thinking in the Information Age)
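To see the sample-size point in practice, here is a small simulation sketch (not from the book) using scipy: the same trivial 0.02 shift in means is lost in noise with 100 observations per group but becomes "significant" with a million.

```python
# Sketch: a tiny, practically meaningless difference becomes statistically
# significant once the sample is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
small_a, small_b = rng.normal(0.00, 1, 100), rng.normal(0.02, 1, 100)
large_a, large_b = rng.normal(0.00, 1, 1_000_000), rng.normal(0.02, 1, 1_000_000)

print(stats.ttest_ind(small_a, small_b).pvalue)  # small sample: the 0.02 shift is usually lost in noise
print(stats.ttest_ind(large_a, large_b).pvalue)  # huge sample: tiny p-value for the same trivial shift
```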
Grief turns out to be a place none of us know until we reach it...We might expect that we will be prostrate, inconsolable, crazy with loss. We do not expect to be literally crazy, cool customers who believe that their husband is about to return and need his shoes. In the version of grief we imagine, the model will be 'healing.' A certain forward movement will prevail. The worst days will be the earliest days. We imagine that the moment to most severely test us will be the funeral, after which this hypothetical healing will take place. When we anticipate the funeral we wonder about failing to 'get through it,' rise to the occasion, exhibit the 'strength' that invariably gets mentioned as the correct response to death. We anticipate needing to steel ourselves for the moment: will I be able to greet people, will I be able to leave the scene, will I be able even to get dressed that day? We have no way of knowing that this will not be the issue. We have no way of knowing that the funeral itself will be anodyne, a kind of narcotic regression in which we are wrapped in the care of others and the gravity and meaning of the occasion. Nor can we know ahead of the fact the unending absence that follows, the void, the very opposite of meaning, the relentless succession of moments during which we will confront the experience of meaninglessness itself.
Joan Didion
When the Axcom team started testing the approach, they quickly began to see improved results. The firm began incorporating higher dimensional kernel regression approaches, which seemed to work best for trending models, or those predicting how long certain investments would keep moving in a trend.
Gregory Zuckerman (The Man Who Solved the Market: How Jim Simons Launched the Quant Revolution)
In principle, more analytic power can be achieved by varying multiple things at once in an uncorrelated (random) way, and doing standard analysis, such as multiple linear regression. In practice, though, A/B testing is widely used, because A/B tests are easy to deploy, easy to understand, and easy to explain to management.
Christopher D. Manning (Introduction to Information Retrieval)
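The contrast can be sketched in a few lines (a hypothetical example, not from the book): two factors are randomized independently and a single linear regression estimates both effects at once. Column names and effect sizes are invented.

```python
# Sketch: randomize two factors independently and analyze both with one
# multiple linear regression, instead of running two separate A/B tests.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "new_ranker": rng.integers(0, 2, n),    # factor 1, assigned at random
    "new_snippets": rng.integers(0, 2, n),  # factor 2, assigned independently
})
df["clicks"] = (1.0 + 0.05 * df["new_ranker"] + 0.02 * df["new_snippets"]
                + rng.normal(0, 1, n))

# One model estimates both effects (and their standard errors) simultaneously.
print(smf.ols("clicks ~ new_ranker + new_snippets", data=df).fit().summary())
```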
Thought Control * Require members to internalize the group’s doctrine as truth * Adopt the group’s “map of reality” as reality * Instill black and white thinking * Decide between good versus evil * Organize people into us versus them (insiders versus outsiders) * Change a person’s name and identity * Use loaded language and clichés to constrict knowledge, stop critical thoughts, and reduce complexities into platitudinous buzzwords * Encourage only “good and proper” thoughts * Use hypnotic techniques to alter mental states, undermine critical thinking, and even to age-regress the member to childhood states * Manipulate memories to create false ones * Teach thought stopping techniques that shut down reality testing by stopping negative thoughts and allowing only positive thoughts. These techniques include: * Denial, rationalization, justification, wishful thinking * Chanting * Meditating * Praying * Speaking in tongues * Singing or humming * Reject rational analysis, critical thinking, constructive criticism * Forbid critical questions about leader, doctrine, or policy * Label alternative belief systems as illegitimate, evil, or not useful * Instill new “map of reality” Emotional Control * Manipulate and narrow the range of feelings—some emotions and/or needs are deemed as evil, wrong, or selfish * Teach emotion stopping techniques to block feelings of hopelessness, anger, or doubt * Make the person feel that problems are always their own fault, never the leader’s or the group’s fault * Promote feelings of guilt or unworthiness, such as: * Identity guilt * You are not living up to your potential * Your family is deficient * Your past is suspect * Your affiliations are unwise * Your thoughts, feelings, actions are irrelevant or selfish * Social guilt * Historical guilt * Instill fear, such as fear of: * Thinking independently * The outside world * Enemies * Losing one’s salvation * Leaving * Orchestrate emotional highs and lows through love bombing and by offering praise one moment, and then declaring a person is a horrible sinner * Ritualistic and sometimes public confession of sins * Phobia indoctrination: inculcate irrational fears about leaving the group or questioning the leader’s authority * No happiness or fulfillment possible outside the group * Terrible consequences if you leave: hell, demon possession, incurable diseases, accidents, suicide, insanity, 10,000 reincarnations, etc. * Shun those who leave and inspire fear of being rejected by friends and family * Never a legitimate reason to leave; those who leave are weak, undisciplined, unspiritual, worldly, brainwashed by family or counselor, or seduced by money, sex, or rock and roll * Threaten harm to ex-member and family (threats of cutting off friends/family)
Steven Hassan
The prediction of false rape-related beliefs (rape myth acceptance [RMA]) was examined using the Illinois Rape Myth Acceptance Scale (Payne, Lonsway, & Fitzgerald, 1999) among a nonclinical sample of 258 male and female college students. Predictor variables included measures of attitudes toward women, gender role identity (GRI), sexual trauma history, and posttraumatic stress disorder (PTSD) symptom severity. Using linear regression and testing interaction effects, negative attitudes toward women significantly predicted greater RMA for individuals without a sexual trauma history. However, neither attitudes toward women nor GRI were significant predictors of RMA for individuals with a sexual trauma history." Rape Myth Acceptance, Sexual Trauma History, and Posttraumatic Stress Disorder Shannon N. Baugher, PhD, Jon D. Elhai, PhD, James R. Monroe, PhD, Ruth Dakota, Matt J. Gray, PhD
Shannon N. Baugher
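For readers unfamiliar with "testing interaction effects," here is a generic sketch with simulated data (not the study's data); the variable names only echo the abstract above.

```python
# Sketch of an interaction term in linear regression: does the slope of
# "attitudes" on the outcome differ between the two trauma-history groups?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 258
df = pd.DataFrame({
    "attitudes": rng.normal(0, 1, n),         # attitudes measure (simulated)
    "trauma_history": rng.integers(0, 2, n),  # 0 = no history, 1 = history
})
df["rma"] = 0.5 * df["attitudes"] * (1 - df["trauma_history"]) + rng.normal(0, 1, n)

# The attitudes:trauma_history term tests whether the slope differs by group.
print(smf.ols("rma ~ attitudes * trauma_history", data=df).fit().summary())
```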
Modern statistics is built on the idea of models — probability models in particular. [...] The standard approach to any new problem is to identify the sources of variation, to describe those sources by probability distributions and then to use the model thus created to estimate, predict or test hypotheses about the undetermined parts of that model. […] A statistical model involves the identification of those elements of our problem which are subject to uncontrolled variation and a specification of that variation in terms of probability distributions. Therein lies the strength of the statistical approach and the source of many misunderstandings. Paradoxically, misunderstandings arise both from the lack of an adequate model and from over reliance on a model. […] At one level is the failure to recognise that there are many aspects of a model which cannot be tested empirically. At a higher level is the failure to recognise that any model is, necessarily, an assumption in itself. The model is not the real world itself but a representation of that world as perceived by ourselves. This point is emphasised when, as may easily happen, two or more models make exactly the same predictions about the data. Even worse, two models may make predictions which are so close that no data we are ever likely to have can ever distinguish between them. […] All model-dependent inference is necessarily conditional on the model. This stricture needs, especially, to be borne in mind when using Bayesian methods. Such methods are totally model-dependent and thus all are vulnerable to this criticism. The problem can apparently be circumvented, of course, by embedding the model in a larger model in which any uncertainties are, themselves, expressed in probability distributions. However, in doing this we are embarking on a potentially infinite regress which quickly gets lost in a fog of uncertainty.
David J. Bartholomew (Unobserved Variables: Models and Misunderstandings (SpringerBriefs in Statistics))
Simple Regression   CHAPTER OBJECTIVES After reading this chapter, you should be able to Use simple regression to test the statistical significance of a bivariate relationship involving one dependent and one independent variable Use Pearson’s correlation coefficient as a measure of association between two continuous variables Interpret statistics associated with regression analysis Write up the model of simple regression Assess assumptions of simple regression This chapter completes our discussion of statistical techniques for studying relationships between two variables by focusing on those that are continuous. Several approaches are examined: simple regression; the Pearson’s correlation coefficient; and a nonparametric alternative, Spearman’s rank correlation coefficient. Although all three techniques can be used, we focus particularly on simple regression. Regression allows us to predict outcomes based on knowledge of an independent variable. It is also the foundation for studying relationships among three or more variables, including control variables mentioned in Chapter 2 on research design (and also in Appendix 10.1). Regression can also be used in time series analysis, discussed in Chapter 17. We begin with simple regression. SIMPLE REGRESSION Let’s first look at an example. Say that you are a manager or analyst involved with a regional consortium of 15 local public agencies (in cities and counties) that provide low-income adults with health education about cardiovascular diseases, in an effort to reduce such diseases. The funding for this health education comes from a federal grant that requires annual analysis and performance outcome reporting. In Chapter 4, we used a logic model to specify that a performance outcome is the result of inputs, activities, and outputs. Following the development of such a model, you decide to conduct a survey among participants who attend such training events to collect data about the number of events they attended, their knowledge of cardiovascular disease, and a variety of habits such as smoking that are linked to cardiovascular disease. Some things that you might want to know are whether attending workshops increases
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
knowledge of cardiovascular disease and whether such knowledge reduces behaviors that put people at risk for cardiovascular disease. Simple regression is used to analyze the relationship between two continuous variables. Continuous variables assume that the distances between ordered categories are determinable.1 In simple regression, one variable is defined as the dependent variable and the other as the independent variable (see Chapter 2 for the definitions). In the current example, the level of knowledge obtained from workshops and other sources might be measured on a continuous scale and treated as an independent variable, and behaviors that put people at risk for cardiovascular disease might also be measured on a continuous scale and treated as a dependent variable. Scatterplot The relationship between two continuous variables can be portrayed in a scatterplot. A scatterplot is merely a plot of the data points for two continuous variables, as shown in Figure 14.1 (without the straight line). By convention, the dependent variable is shown on the vertical (or Y-) axis, and the independent variable on the horizontal (or X-) axis. The relationship between the two variables is estimated as a straight line relationship. The line is defined by the equation y = a + bx, where a is the intercept (or constant), and b is the slope. The slope, b, is defined as ∆y/∆x, or (y2 – y1)/(x2 – x1). The line is calculated mathematically such that the sum of distances from each observation to the line is minimized.2 By definition, the slope indicates the change in y as a result of a unit change in x. The straight line, defined by y = a + bx, is also called the regression line, and the slope (b) is called the regression coefficient. A positive regression coefficient indicates a positive relationship between the variables, shown by the upward slope in Figure 14.1. A negative regression coefficient indicates a negative relationship between the variables and is indicated by a downward-sloping line. Test of Significance The test of significance of the regression coefficient is a key test that tells us whether the slope (b) is statistically different from zero. The slope is calculated from a sample, and we wish to know whether it is significant. When the regression line is horizontal (b = 0), no relationship exists between the two variables. Then, changes in the independent variable have no effect on the dependent variable. The following hypotheses are thus stated: H0: b = 0, or the two variables are unrelated. HA: b ≠ 0, or the two variables are (positively or negatively) related. To determine whether the slope equals zero, a t-test is performed. The test statistic is defined as the slope, b, divided by the standard error of the slope, se(b). The standard error of the slope is a measure of the distribution of the observations around the regression slope, which is based on the standard deviation of those observations to the regression line: t = b / se(b). Thus, a regression line with a small slope is more likely to be statistically significant when observations lie closely around it (that is, the standard error of the observations around the line is also small, resulting in a larger test statistic). By contrast, the same regression line might be statistically insignificant when observations are scattered widely around it. Observations that lie farther from the
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
regression line will have larger standard deviations and, hence, larger standard errors. The computer calculates the slope, intercept, standard error of the slope, and the level at which the slope is statistically significant. Key Point The significance of the slope tests the relationship. Consider the following example. A management analyst with the Department of Defense wishes to evaluate the impact of teamwork on the productivity of naval shipyard repair facilities. Although all shipyards are required to use teamwork management strategies, these strategies are assumed to vary in practice. Coincidentally, a recently implemented employee survey asked about the perceived use and effectiveness of teamwork. These items have been aggregated into a single index variable that measures teamwork. Employees were also asked questions about perceived performance, as measured by productivity, customer orientation, planning and scheduling, and employee motivation. These items were combined into an index measure of work productivity. Both index measures are continuous variables. The analyst wants to know whether a relationship exists between perceived productivity and teamwork. Table 14.1 shows the computer output obtained from a simple regression. The slope, b, is 0.223; the slope coefficient of teamwork is positive; and the slope is significant at the 1 percent level. Thus, perceptions of teamwork are positively associated with productivity. The t-test statistic, 5.053, is calculated as 0.223/0.044 (rounding errors explain the difference from the printed value of t). Other statistics shown in Table 14.1 are discussed below. The appropriate notation for this relationship is shown below. Either the t-test statistic or the standard error should be shown in parentheses, directly below the regression coefficient; analysts should state which statistic is shown. Here, we show the t-test statistic:3 The level of significance of the regression coefficient is indicated with asterisks, which conforms to the p-value legend that should also be shown. Typically, two asterisks are used to indicate a 1 percent level of significance, one asterisk for a 5 percent level of significance, and no asterisk for coefficients that are insignificant.4 Table 14.1 Simple Regression Output Note: SEE = standard error of the estimate; SE = standard error; Sig. = significance.
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
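Output of the kind summarized in Table 14.1 can be reproduced in outline with statsmodels; the data below are simulated, so the coefficients will not match the book's values.

```python
# Sketch: simple regression output (intercept, slope, se(b), t = b/se(b), p-value).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
teamwork = rng.uniform(1, 7, 300)                       # index variable (simulated)
productivity = 4.0 + 0.2 * teamwork + rng.normal(0, 0.8, 300)

X = sm.add_constant(teamwork)                           # adds the intercept a
fit = sm.OLS(productivity, X).fit()
print(fit.params)    # intercept (a) and slope (b)
print(fit.bse)       # standard errors, including se(b)
print(fit.tvalues)   # t = b / se(b), the test of significance of the slope
print(fit.pvalues)
```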
Table 14.1 also shows R-square (R2), which is called the coefficient of determination. R-square is of great interest: its value is interpreted as the percentage of variation in the dependent variable that is explained by the independent variable. R-square varies from zero to one, and is called a goodness-of-fit measure.5 In our example, teamwork explains only 7.4 percent of the variation in productivity. Although teamwork is significantly associated with productivity, it is quite likely that other factors also affect it. It is conceivable that other factors might be more strongly associated with productivity and that, when controlled for other factors, teamwork is no longer significant. Typically, values of R2 below 0.20 are considered to indicate weak relationships, those between 0.20 and 0.40 indicate moderate relationships, and those above 0.40 indicate strong relationships. Values of R2 above 0.65 are considered to indicate very strong relationships. R is called the multiple correlation coefficient and is always 0 ≤ R ≤ 1. To summarize up to this point, simple regression provides three critically important pieces of information about bivariate relationships involving two continuous variables: (1) the level of significance at which two variables are associated, if at all (t-statistic), (2) whether the relationship between the two variables is positive or negative (b), and (3) the strength of the relationship (R2). Key Point R-square is a measure of the strength of the relationship. Its value goes from 0 to 1. The primary purpose of regression analysis is hypothesis testing, not prediction. In our example, the regression model is used to test the hypothesis that teamwork is related to productivity. However, if the analyst wants to predict the variable “productivity,” the regression output also shows the SEE, or the standard error of the estimate (see Table 14.1). This is a measure of the spread of y values around the regression line as calculated for the mean value of the independent variable, only, and assuming a large sample. The standard error of the estimate has an interpretation in terms of the normal curve, that is, 68 percent of y values lie within one standard error from the calculated value of y, as calculated for the mean value of x using the preceding regression model. Thus, if the mean index value of the variable “teamwork” is 5.0, then the calculated (or predicted) value of “productivity” is [4.026 + 0.223*5 =] 5.141. Because SEE = 0.825, it follows that 68 percent of productivity values will lie ±0.825 from 5.141 when “teamwork” = 5. Predictions of y for other values of x have larger standard errors.6 Assumptions and Notation There are three simple regression assumptions. First, simple regression assumes that the relationship between two variables is linear. The linearity of bivariate relationships is easily determined through visual inspection, as shown in Figure 14.2. In fact, all analysis of relationships involving continuous variables should begin with a scatterplot. When variable
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
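The prediction arithmetic above can be checked directly with the reported coefficients (a quick sketch, not part of the original text):

```python
# Worked check of the prediction and its one-standard-error band at x = 5.
a, b, see = 4.026, 0.223, 0.825   # intercept, slope, standard error of the estimate (from the text)
teamwork = 5.0
predicted = a + b * teamwork
print(predicted)                          # 5.141
print(predicted - see, predicted + see)   # band expected to cover ~68% of productivity values at x = 5
```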
(e). Hence the expressions are equivalent, as is y = ŷ + e. Certain assumptions about e are important, such as that it is normally distributed. When error term assumptions are violated, incorrect conclusions may be made about the statistical significance of relationships. This important issue is discussed in greater detail in Chapter 15 and, for time series data, in Chapter 17. Hence, the above is a pertinent but incomplete list of assumptions. Getting Started Conduct a simple regression, and practice writing up your results. PEARSON’S CORRELATION COEFFICIENT Pearson’s correlation coefficient, r, measures the association (significance, direction, and strength) between two continuous variables; it is a measure of association for two continuous variables. Also called the Pearson’s product-moment correlation coefficient, it does not assume a causal relationship, as does simple regression. The correlation coefficient indicates the extent to which the observations lie closely or loosely clustered around the regression line. The coefficient r ranges from –1 to +1. The sign indicates the direction of the relationship, which, in simple regression, is always the same as the slope coefficient. A “–1” indicates a perfect negative relationship, that is, that all observations lie exactly on a downward-sloping regression line; a “+1” indicates a perfect positive relationship, whereby all observations lie exactly on an upward-sloping regression line. Of course, such values are rarely obtained in practice because observations seldom lie exactly on a line. An r value of zero indicates that observations are so widely scattered that it is impossible to draw any well-fitting line. Figure 14.2 illustrates some values of r. Key Point Pearson’s correlation coefficient, r, ranges from –1 to +1. It is important to avoid confusion between Pearson’s correlation coefficient and the coefficient of determination. For the two-variable, simple regression model, r2 = R2, but whereas 0 ≤ R ≤ 1, r ranges from –1 to +1. Hence, the sign of r tells us whether a relationship is positive or negative, but the sign of R, in regression output tables such as Table 14.1, is always positive and cannot inform us about the direction of the relationship. In simple regression, the regression coefficient, b, informs us about the direction of the relationship. Statistical software programs usually show r rather than r2. Note also that the Pearson’s correlation coefficient can be used only to assess the association between two continuous variables, whereas regression can be extended to deal with more than two variables, as discussed in Chapter 15. Pearson’s correlation coefficient assumes that both variables are normally distributed. When Pearson’s correlation coefficients are calculated, a standard error of r can be determined, which then allows us to test the statistical significance of the bivariate correlation. For bivariate relationships, this is the same level of significance as shown for the slope of the regression coefficient. For the variables given earlier in this chapter, the value of r is .272 and the statistical significance of r is p ≤ .01. Use of the Pearson’s correlation coefficient assumes that the variables are normally distributed and that there are no significant departures from linearity.7 It is important not to confuse the correlation coefficient, r, with the regression coefficient, b. Comparing the measures r and b (the slope) sometimes causes confusion. 
The key point is that r does not indicate the regression slope but rather the extent to which observations lie close to it. A steep regression line (large b) can have observations scattered loosely or closely around it, as can a shallow (more horizontal) regression line. The purposes of these two statistics are very different.8 SPEARMAN’S RANK CORRELATION
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
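A short sketch of the distinction (simulated data, not from the book): the sign of r matches the direction of the slope, and squaring r gives R².

```python
# Sketch: r carries the sign of the relationship; r² equals R² in simple regression.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.normal(0, 1, 200)
y = 2.0 - 0.5 * x + rng.normal(0, 1, 200)   # negative relationship

res = stats.linregress(x, y)
print(res.slope)          # b: negative, gives the direction
print(res.rvalue)         # Pearson's r: also negative
print(res.rvalue ** 2)    # r² = R², strength of the relationship (always non-negative)
```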
COEFFICIENT The nonparametric alternative, Spearman’s rank correlation coefficient (ρ, or “rho”), looks at correlation among the ranks of the data rather than among the values. The ranks of data are determined as shown in Table 14.2 (adapted from Table 11.8): Table 14.2 Ranks of Two Variables In Greater Depth … Box 14.1 Crime and Poverty An analyst wants to examine empirically the relationship between crime and income in cities across the United States. The CD that accompanies the workbook Exercising Essential Statistics includes a Community Indicators dataset with assorted indicators of conditions in 98 cities such as Akron, Ohio; Phoenix, Arizona; New Orleans, Louisiana; and Seattle, Washington. The measures include median household income, total population (both from the 2000 U.S. Census), and total violent crimes (FBI, Uniform Crime Reporting, 2004). In the sample, household income ranges from $26,309 (Newark, New Jersey) to $71,765 (San Jose, California), and the median household income is $42,316. Per-capita violent crime ranges from 0.15 percent (Glendale, California) to 2.04 percent (Las Vegas, Nevada), and the median violent crime rate per capita is 0.78 percent. There are four types of violent crimes: murder and nonnegligent manslaughter, forcible rape, robbery, and aggravated assault. A measure of total violent crime per capita is calculated because larger cities are apt to have more crime. The analyst wants to examine whether income is associated with per-capita violent crime. The scatterplot of these two continuous variables shows that a negative relationship appears to be present: The Pearson’s correlation coefficient is –.532 (p < .01), and the Spearman’s correlation coefficient is –.552 (p < .01). The simple regression model shows R2 = .283. The regression model is as follows (t-test statistic in parentheses): The regression line is shown on the scatterplot. Interpreting these results, we see that the R-square value of .283 indicates a moderate relationship between these two variables. Clearly, some cities with modest median household incomes have a high crime rate. However, removing these cities does not greatly alter the findings. Also, an assumption of regression is that the error term is normally distributed, and further examination of the error shows that it is somewhat skewed. The techniques for examining the distribution of the error term are discussed in Chapter 15, but again, addressing this problem does not significantly alter the finding that the two variables are significantly related to each other, and that the relationship is of moderate strength. With this result in hand, further analysis shows, for example, by how much violent crime decreases for each increase in household income. For each increase of $10,000 in average household income, the violent crime rate drops 0.25 percent. For a city experiencing the median 0.78 percent crime rate, this would be a considerable improvement, indeed. Note also that the scatterplot shows considerable variation in the crime rate for cities at or below the median household income, in contrast to those well above it. Policy analysts may well wish to examine conditions that give rise to variation in crime rates among cities with lower incomes. 
Because Spearman’s rank correlation coefficient examines correlation among the ranks of variables, it can also be used with ordinal-level data.9 For the data in Table 14.2, Spearman’s rank correlation coefficient is .900 (p = .035).10 Spearman’s rho-squared coefficient has a “percent variation explained” interpretation, similar
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
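A minimal sketch of the two coefficients side by side, with simulated income and crime figures rather than the Community Indicators data:

```python
# Sketch: Pearson's r (values) versus Spearman's rho (ranks) for the same pair of variables.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
income = rng.uniform(26, 72, 98)                        # median household income, $1,000s (simulated)
crime = 2.0 - 0.02 * income + rng.normal(0, 0.2, 98)    # per-capita violent crime, % (simulated)

print(stats.pearsonr(income, crime))    # (r, p-value) on the raw values
print(stats.spearmanr(income, crime))   # correlation of the ranks, usable with ordinal data too
```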
to the measures described earlier. Hence, 90 percent of the variation in one variable can be explained by the other. For the variables given earlier, the Spearman’s rank correlation coefficient is .274 (p < .01), which is comparable to r reported in preceding sections. Box 14.1 illustrates another use of the statistics described in this chapter, in a study of the relationship between crime and poverty. SUMMARY When analysts examine relationships between two continuous variables, they can use simple regression or the Pearson’s correlation coefficient. Both measures show (1) the statistical significance of the relationship, (2) the direction of the relationship (that is, whether it is positive or negative), and (3) the strength of the relationship. Simple regression assumes a causal and linear relationship between the continuous variables. The statistical significance and direction of the slope coefficient is used to assess the statistical significance and direction of the relationship. The coefficient of determination, R2, is used to assess the strength of relationships; R2 is interpreted as the percent variation explained. Regression is a foundation for studying relationships involving three or more variables, such as control variables. The Pearson’s correlation coefficient does not assume causality between two continuous variables. A nonparametric alternative to testing the relationship between two continuous variables is the Spearman’s rank correlation coefficient, which examines correlation among the ranks of the data rather than among the values themselves. As such, this measure can also be used to study relationships in which one or both variables are ordinal. KEY TERMS   Coefficient of determination, R2 Error term Observed value of y Pearson’s correlation coefficient, r Predicted value of the dependent variable y, ŷ Regression coefficient Regression line Scatterplot Simple regression assumptions Spearman’s rank correlation coefficient Standard error of the estimate Test of significance of the regression coefficient Notes   1. See Chapter 3 for a definition of continuous variables. Although the distinction between ordinal and continuous is theoretical (namely, whether or not the distance between categories can be measured), in practice ordinal-level variables with seven or more categories (including Likert variables) are sometimes analyzed using statistics appropriate for interval-level variables. This practice has many critics because it violates an assumption of regression (interval data), but it is often
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
regression as dummy variables Explain the importance of the error term plot Identify assumptions of regression, and know how to test and correct assumption violations Multiple regression is one of the most widely used multivariate statistical techniques for analyzing three or more variables. This chapter uses multiple regression to examine such relationships, and thereby extends the discussion in Chapter 14. The popularity of multiple regression is due largely to the ease with which it takes control variables (or rival hypotheses) into account. In Chapter 10, we discussed briefly how contingency tables can be used for this purpose, but doing so is often a cumbersome and sometimes inconclusive effort. By contrast, multiple regression easily incorporates multiple independent variables. Another reason for its popularity is that it also takes into account nominal independent variables. However, multiple regression is no substitute for bivariate analysis. Indeed, managers or analysts with an interest in a specific bivariate relationship will conduct a bivariate analysis first, before examining whether the relationship is robust in the presence of numerous control variables. And before conducting bivariate analysis, analysts need to conduct univariate analysis to better understand their variables. Thus, multiple regression is usually one of the last steps of analysis. Indeed, multiple regression is often used to test the robustness of bivariate relationships when control variables are taken into account. The flexibility with which multiple regression takes control variables into account comes at a price, though. Regression, like the t-test, is based on numerous assumptions. Regression results cannot be assumed to be robust in the face of assumption violations. Testing of assumptions is always part of multiple regression analysis. Multiple regression is carried out in the following sequence: (1) model specification (that is, identification of dependent and independent variables), (2) testing of regression assumptions, (3) correction of assumption violations, if any, and (4) reporting of the results of the final regression model. This chapter examines these four steps and discusses essential concepts related to simple and multiple regression. Chapters 16 and 17 extend this discussion by examining the use of logistic regression and time series analysis. MODEL SPECIFICATION Multiple regression is an extension of simple regression, but an important difference exists between the two methods: multiple regression aims for full model specification. This means that analysts seek to account for all of the variables that affect the dependent variable; by contrast, simple regression examines the effect of only one independent variable. Philosophically, the phrase identifying the key difference—“all of the variables that affect the dependent variable”—is divided into two parts. The first part involves identifying the variables that are of most (theoretical and practical) relevance in explaining the dependent
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
single or index variables. As an example, consider the dependent variable “high school violence,” discussed in Chapter 2. We ask: “What are the most important, distinct factors affecting or causing high school violence?” Some plausible factors are (1) student access to weapons, (2) student isolation from others, (3) peer groups that are prone to violence, (4) lack of enforcement of school nonviolence policies, (5) participation in anger management programs, and (6) familiarity with warning signals (among teachers and staff). Perhaps you can think of other factors. Then, following the strategies discussed in Chapter 3—conceptualization, operationalization, and index variable construction—we use either single variables or index measures as independent variables to measure each of these factors. This approach provides for the inclusion of programs or policies as independent variables, as well as variables that measure salient rival hypotheses. The strategy of full model specification requires that analysts not overlook important factors. Thus, analysts do well to carefully justify their model and to consult past studies and interview those who have direct experience with, or other opinions about, the research subject. Doing so might lead analysts to include additional variables, such as the socioeconomic status of students’ parents. Then, after a fully specified model has been identified, analysts often include additional variables of interest. These may be variables of lesser relevance, speculative consequences, or variables that analysts want to test for their lack of impact, such as rival hypotheses. Demographic variables, such as the age of students, might be added. When additional variables are included, analysts should identify which independent variables constitute the nomothetic explanation, and which serve some other purpose. Remember, all variables included in models must be theoretically justified. Analysts must argue how each variable could plausibly affect their dependent variable. The second part of “all of the variables that affect the dependent variable” acknowledges all of the other variables that are not identified (or included) in the model. They are omitted; these variables are not among “the most important factors” that affect the dependent variable. The cumulative effect of these other variables is, by definition, contained in the error term, described later in this chapter. The assumption of full model specification is that these other variables are justifiably omitted only when their cumulative effect on the dependent variable is zero. This approach is plausible because each of these many unknown variables may have a different magnitude, thus making it possible that their effects cancel each other out. The argument, quite clearly, is not that each of these other factors has no impact on the dependent variable—but only that their cumulative effect is zero. The validity of multiple regression models centers on examining the behavior of the error term in this regard. If the cumulative effect of all the other variables is not zero, then additional independent variables may have to be considered. The specification of the multiple regression model is as follows:
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
Thus, multiple regression requires two important tasks: (1) specification of independent variables and (2) testing of the error term. An important difference between simple regression and multiple regression is the interpretation of the regression coefficients in multiple regression (b1, b2, b3, …) in the preceding multiple regression model. Although multiple regression produces the same basic statistics discussed in Chapter 14 (see Table 14.1), each of the regression coefficients is interpreted as its effect on the dependent variable, controlled for the effects of all of the other independent variables included in the regression. This phrase is used frequently when explaining multiple regression results. In our example, the regression coefficient b1 shows the effect of x1 on y, controlled for all other variables included in the model. Regression coefficient b2 shows the effect of x2 on y, also controlled for all other variables in the model, including x1. Multiple regression is indeed an important and relatively simple way of taking control variables into account (and much easier than the approach shown in Appendix 10.1). Key Point The regression coefficient is the effect on the dependent variable, controlled for all other independent variables in the model. Note also that the model given here is very different from estimating separate simple regression models for each of the independent variables. The regression coefficients in simple regression do not control for other independent variables, because they are not in the model. The word independent also means that each independent variable should be relatively unaffected by other independent variables in the model. To ensure that independent variables are indeed independent, it is useful to think of the distinctively different types (or categories) of factors that affect a dependent variable. This was the approach taken in the preceding example. There is also a statistical reason for ensuring that independent variables are as independent as possible. When two independent variables are highly correlated with each other (r2 > .60), it sometimes becomes statistically impossible to distinguish the effect of each independent variable on the dependent variable, controlled for the other. The variables are statistically too similar to discern disparate effects. This problem is called multicollinearity and is discussed later in this chapter. This problem is avoided by choosing independent variables that are not highly correlated with each other. A WORKING EXAMPLE Previously (see Chapter 14), the management analyst with the Department of Defense found a statistically significant relationship between teamwork and perceived facility productivity (p <.01). The analyst now wishes to examine whether the impact of teamwork on productivity is robust when controlled for other factors that also affect productivity. This interest is heightened by the low R-square (R2 = 0.074) in Table 14.1, suggesting a weak relationship between teamwork and perceived productivity. A multiple regression model is specified to include the effects of other factors that affect perceived productivity. 
Thinking about other categories of variables that could affect productivity, the analyst hypothesizes the following: (1) the extent to which employees have adequate technical knowledge to do their jobs, (2) perceptions of having adequate authority to do one’s job well (for example, decision-making flexibility), (3) perceptions that rewards and recognition are distributed fairly (always important for motivation), and (4) the number of sick days. Various items from the employee survey are used to measure these concepts (as discussed in the workbook documentation for the Productivity dataset). After including these factors as additional independent variables, the result shown in Table 15.1 is
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
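A sketch of a model along these lines in statsmodels (simulated survey data; the variable names echo the working example but every number is invented): each coefficient is the effect of that predictor controlled for the others, and a quick correlation matrix among predictors is a first check for multicollinearity.

```python
# Sketch: multiple regression with control variables, plus a pairwise correlation check.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 400
df = pd.DataFrame({
    "teamwork": rng.uniform(1, 7, n),
    "knowledge": rng.uniform(1, 7, n),
    "authority": rng.uniform(1, 7, n),
    "fair_rewards": rng.uniform(1, 7, n),
    "sick_days": rng.poisson(5, n),
})
df["productivity"] = (3 + 0.2 * df["teamwork"] + 0.25 * df["authority"]
                      + 0.25 * df["knowledge"] + 0.1 * df["fair_rewards"]
                      - 0.05 * df["sick_days"] + rng.normal(0, 1, n))

# Each coefficient is the effect on productivity controlled for the other predictors.
fit = smf.ols("productivity ~ teamwork + knowledge + authority + fair_rewards + sick_days",
              data=df).fit()
print(fit.summary())

# High pairwise correlations among predictors are a warning sign for multicollinearity.
print(df[["teamwork", "knowledge", "authority", "fair_rewards", "sick_days"]].corr().round(2))
```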
regression results. Standardized Coefficients The question arises as to which independent variable has the greatest impact on explaining the dependent variable. The slope of the coefficients (b) does not answer this question because each slope is measured in different units (recall from Chapter 14 that b = ∆y/∆x). Comparing different slope coefficients is tantamount to comparing apples and oranges. However, based on the regression coefficient (or slope), it is possible to calculate the standardized coefficient, β (beta). Beta is defined as the change produced in the dependent variable by a unit of change in the independent variable when both variables are measured in terms of standard deviation units. Beta is unit-less and thus allows for comparison of the impact of different independent variables on explaining the dependent variable. Analysts compare the relative values of beta coefficients; beta has no inherent meaning. It is appropriate to compare betas across independent variables in the same regression, not across different regressions. Based on Table 15.1, we conclude that the impact of having adequate authority on explaining productivity is [(0.288 – 0.202)/0.202 =] 42.6 percent greater than teamwork, and about equal to that of knowledge. The impact of having adequate authority is two-and-a-half times greater than that of perceptions of fair rewards and recognition.4 F-Test Table 15.1 also features an analysis of variance (ANOVA) table. The global F-test examines the overall effect of all independent variables jointly on the dependent variable. The null hypothesis is that the overall effect of all independent variables jointly on the dependent variables is statistically insignificant. The alternate hypothesis is that this overall effect is statistically significant. The null hypothesis implies that none of the regression coefficients is statistically significant; the alternate hypothesis implies that at least one of the regression coefficients is statistically significant. The
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
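Standardized coefficients and the global F-test can be sketched as follows (simulated data; variable names are placeholders): z-scoring every variable before the regression yields unit-less betas, and the F statistic is part of the standard output.

```python
# Sketch: standardized (beta) coefficients via z-scored variables, plus the global F-test.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 400
df = pd.DataFrame(rng.uniform(1, 7, (n, 4)),
                  columns=["teamwork", "knowledge", "authority", "rewards"])
df["productivity"] = (3 + 0.2 * df["teamwork"] + 0.25 * df["knowledge"]
                      + 0.3 * df["authority"] + 0.1 * df["rewards"] + rng.normal(0, 1, n))

z = (df - df.mean()) / df.std()       # every variable in standard deviation units
fit = smf.ols("productivity ~ teamwork + knowledge + authority + rewards", data=z).fit()
print(fit.params.round(3))            # unit-less betas, comparable with each other
print(fit.fvalue, fit.f_pvalue)       # global F-test: all predictors jointly
```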
SUMMARY A vast array of additional statistical methods exists. In this concluding chapter, we summarized some of these methods (path analysis, survival analysis, and factor analysis) and briefly mentioned other related techniques. This chapter can help managers and analysts become familiar with these additional techniques and increase their access to research literature in which these techniques are used. Managers and analysts who would like more information about these techniques will likely consult other texts or on-line sources. In many instances, managers will need only simple approaches to calculate the means of their variables, produce a few good graphs that tell the story, make simple forecasts, and test for significant differences among a few groups. Why, then, bother with these more advanced techniques? They are part of the analytical world in which managers operate. Through research and consulting, managers cannot help but come in contact with them. It is hoped that this chapter whets the appetite and provides a useful reference for managers and students alike. KEY TERMS   Endogenous variables Exogenous variables Factor analysis Indirect effects Loading Path analysis Recursive models Survival analysis Notes 1. Two types of feedback loops are illustrated as follows: 2. When feedback loops are present, error terms for the different models will be correlated with exogenous variables, violating an error term assumption for such models. Then, alternative estimation methodologies are necessary, such as two-stage least squares and others discussed later in this chapter. 3. Some models may show double-headed arrows among error terms. These show the correlation between error terms, which is of no importance in estimating the beta coefficients. 4. In SPSS, survival analysis is available through the add-on module in SPSS Advanced Models. 5. The functions used to estimate probabilities are rather complex. They are so-called Weibull distributions, which are defined as h(t) = αλ(λt)^(α–1), where α and λ are chosen to best fit the data. 6. Hence, the SSL is greater than the squared loadings reported. For example, because the loadings of variables in groups B and C are not shown for factor 1, the SSL of shown loadings is 3.27 rather than the reported 4.084. If one assumes the other loadings are each .25, then the SSL of the not reported loadings is [12*.25² =] .75, bringing the SSL of factor 1 to [3.27 + .75 =] 4.02, which is very close to the 4.084 value reported in the table. 7. Readers who are interested in multinomial logistic regression can consult on-line sources or the SPSS manual, Regression Models 10.0 or higher. The statistics of discriminant analysis are very dissimilar from those of logistic regression, and readers are advised to consult a separate text on that topic. Discriminant analysis is not often used in public
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
Beyond One-Way ANOVA The approach described in the preceding section is called one-way ANOVA. This scenario is easily generalized to accommodate more than one independent variable. These independent variables are either discrete (called factors) or continuous (called covariates). These approaches are called n-way ANOVA or ANCOVA (the “C” indicates the presence of covariates). Two-way ANOVA, for example, allows for testing of the effect of two different independent variables on the dependent variable, as well as the interaction of these two independent variables. An interaction effect between two variables describes the way that variables “work together” to have an effect on the dependent variable. This is perhaps best illustrated by an example. Suppose that an analyst wants to know whether the number of health care information workshops attended, as well as a person’s education, are associated with healthy lifestyle behaviors. Although we can surely theorize how attending health care information workshops and a person’s education can each affect an individual’s healthy lifestyle behaviors, it is also easy to see that the level of education can affect a person’s propensity for attending health care information workshops, as well. Hence, an interaction effect could also exist between these two independent variables (factors). The effects of each independent variable on the dependent variable are called main effects (as distinct from interaction effects). To continue the earlier example, suppose that in addition to population, an analyst also wants to consider a measure of the watershed’s preexisting condition, such as the number of plant and animal species at risk in the watershed. Two-way ANOVA produces the results shown in Table 13.4, using the transformed variable mentioned earlier. The first row, labeled “model,” refers to the combined effects of all main and interaction effects in the model on the dependent variable. This is the global F-test. The “model” row shows that the two main effects and the single interaction effect, when considered together, are significantly associated with changes in the dependent variable (p < .000). However, the results also show a reduced significance level of “population” (now, p = .064), which seems related to the interaction effect (p = .076). Although neither effect is significant at conventional levels, the results do suggest that an interaction effect is present between population and watershed condition (of which the number of at-risk species is an indicator) on watershed wetland loss. Post-hoc tests are only provided separately for each of the independent variables (factors), and the results show the same homogeneous grouping for both of the independent variables. Table 13.4 Two-Way ANOVA Results As we noted earlier, ANOVA is a family of statistical techniques that allow for a broad range of rather complex experimental designs. Complete coverage of these techniques is well beyond the scope of this book, but in general, many of these techniques aim to discern the effect of variables in the presence of other (control) variables. ANOVA is but one approach for addressing control variables. A far more common approach in public policy, economics, political science, and public administration (as well as in many other fields) is multiple regression (see Chapter 15). Many analysts feel that ANOVA and regression are largely equivalent.
Historically, the preference for ANOVA stems from its uses in medical and agricultural research, with applications in education and psychology. Finally, the ANOVA approach can be generalized to allow for testing on two or more dependent variables. This approach is called multiple analysis of variance, or MANOVA. Regression-based analysis can also be used for dealing with multiple dependent variables, as mentioned in Chapter 17.
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
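A generic two-way ANOVA with an interaction term looks like this in statsmodels (simulated data and invented factor names, not the watershed example):

```python
# Sketch: two-way ANOVA with two discrete factors and their interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(8)
n = 240
df = pd.DataFrame({
    "workshops": rng.choice(["none", "some", "many"], n),  # factor 1 (simulated)
    "education": rng.choice(["low", "high"], n),           # factor 2 (simulated)
})
df["healthy_behaviors"] = (rng.normal(5, 1, n)
                           + (df["workshops"] == "many") * 0.8
                           + (df["education"] == "high") * 0.5)

# C() marks discrete factors; * expands to both main effects plus their interaction.
model = smf.ols("healthy_behaviors ~ C(workshops) * C(education)", data=df).fit()
print(anova_lm(model, typ=2))
```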
If you write a test for every bug you fix and run it in your CI system, the system catches Bug Regressions when the offending code is checked in. This strategy effectively stops Bug Regressions.
Anonymous
If you use plain text to create synthetic data to drive system tests, then it is a simple matter to add, update, or modify the test data without having to create any special tools to do so. Similarly, plain text output from regression tests can be trivially analyzed (with diff, for instance) or subjected to more thorough scrutiny with Perl, Python, or some other scripting tool.
Andrew Hunt (The Pragmatic Programmer)
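A minimal sketch of that workflow in Python (file names are invented): compare current output against a saved golden file and fail with a readable diff when anything changes.

```python
# Sketch: plain-text regression check against a "golden" expected-output file.
import difflib
from pathlib import Path

def check_against_golden(current_output: str, golden_path: str = "expected_report.txt") -> None:
    expected = Path(golden_path).read_text()
    if current_output != expected:
        diff = difflib.unified_diff(expected.splitlines(), current_output.splitlines(),
                                    fromfile="expected", tofile="actual", lineterm="")
        raise AssertionError("Regression detected:\n" + "\n".join(diff))
```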
I speak about technical debt quite a bit in this chapter, so I wanted to leave you with a few thoughts on how to measure it. There are, of course, the static code-analysis tools that provide a view of technical debt based on coding issues. I think that is a good starting point. I would add to this the cost to deploy the application (e.g., how many hours of people does it require to deploy into a test or production environment), the cost of regression testing the product (how many hours or people time does it take to validate nothing has broken), and the cost of creating a new environment with the application. If you are more ambitious, you can also look for measures of complexity and dependencies with other applications, but I have not yet seen a good repeatable way for measuring those. The first four I mention are relatively easy to determine and should therefore be the basis for your measure of technical debt.
Mirco Hering (DevOps for the Modern Enterprise: Winning Practices to Transform Legacy IT Organizations)
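As a toy illustration only (the figures and weighting are placeholders, not the author's), the four measures can be rolled up into a single hours-based proxy:

```python
# Toy sketch: a technical-debt proxy built from the people-hours listed above.
debt_signals = {
    "static_analysis_findings_hours": 40,  # estimated effort to clear code-analysis issues
    "deploy_to_test_or_prod_hours": 12,    # people-hours per deployment
    "regression_testing_hours": 30,        # people-hours to validate nothing has broken
    "new_environment_hours": 16,           # people-hours to stand up a new environment
}
print("technical debt proxy (hours):", sum(debt_signals.values()))
```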
Correlations have a hypothesis test. As with any hypothesis test, this test takes sample data and evaluates two mutually exclusive statements about the population from which the sample was drawn. For Pearson correlations, the two hypotheses are the following: Null hypothesis: There is no linear relationship between the two variables. ρ = 0. Alternative hypothesis: There is a linear relationship between the two variables. ρ ≠ 0. A correlation of zero indicates that no linear relationship exists. If your p-value is less than your significance level, the sample contains sufficient evidence to reject the null hypothesis and conclude that the correlation does not equal zero. In other words, the sample data support the notion that the relationship exists in the population.
Jim Frost (Regression Analysis: An Intuitive Guide for Using and Interpreting Linear Models)
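In code, the test looks like this sketch (simulated data): scipy returns both r and the p-value for the null hypothesis that ρ = 0.

```python
# Sketch: hypothesis test for a Pearson correlation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
x = rng.normal(size=50)
y = 0.4 * x + rng.normal(size=50)

r, p = stats.pearsonr(x, y)
alpha = 0.05
print(f"r = {r:.3f}, p = {p:.4f}")
print("reject H0: evidence of a linear relationship" if p < alpha else "fail to reject H0")
```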
here are some steps to identify and track code that should be reviewed carefully: Tagging user stories for security features or business workflows which handle money or sensitive data. Grepping source code for calls to dangerous function calls like crypto functions. Scanning code review comments (if you are using a collaborative code review tool like Gerrit). Tracking code check-in to identify code that is changed often: code with a high rate of churn tends to have more defects. Reviewing bug reports and static analysis to identify problem areas in code: code with a history of bugs, or code that has high complexity and low automated test coverage. Looking out for code that has recently undergone large-scale “root canal” refactoring. While day-to-day, in-phase refactoring can do a lot to simplify code and make it easier to understand and safer to change, major refactoring or redesign work can accidentally change the trust model of an application and introduce regressions.
Laura Bell (Agile Application Security: Enabling Security in a Continuous Delivery Pipeline)
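The churn heuristic in that list is easy to approximate; a rough sketch, assuming it runs inside a git working copy (the 90-day window is arbitrary):

```python
import subprocess
from collections import Counter

log = subprocess.run(
    ["git", "log", "--since=90 days ago", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

churn = Counter(line for line in log.splitlines() if line.strip())
for path, changes in churn.most_common(10):
    print(f"{changes:4d}  {path}")   # frequently changed files: candidates for closer review
```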
The p-value for each independent variable tests the null hypothesis that the variable has no relationship with the dependent variable.
Jim Frost (Regression Analysis: An Intuitive Guide for Using and Interpreting Linear Models)
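A short demonstration with statsmodels (assumed available), using synthetic data so the expected outcome is known in advance:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)                   # pure noise, unrelated to y
y = 2.0 + 1.5 * x1 + rng.normal(size=200)   # only x1 drives y

X = sm.add_constant(np.column_stack([x1, x2]))
model = sm.OLS(y, X).fit()
print(model.pvalues)   # expect a tiny p-value for x1 and a large one for x2
```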
Identify some of the actual individuals who are your best customers. Evaluate those with the highest customer lifetime value (CLV) and develop hypotheses about their shared traits. Although demographics and psychographics might be the most obvious, you’ll find additional insights if you examine their behavior. What channels did they come through? What messages resonated? How did they onboard? How recently, frequently, and deeply have they engaged? Compare best customers and worst customers—those you acquired who weren’t ultimately profitable or who weren’t satisfied with your offering. Notice people who exhaust your free trial but don’t convert to paid, or who join but cancel within the first few months. The best customers have the greatest customer lifetime value (CLV); they will spend more with you over time than anyone else. Produce either a qualitative write-up of your best customer or use regression analysis to prioritize characteristics. Share these conclusions with your frontline team—retail workers, customer support, sales—to accrue early insights. With a concrete conception of your best customer, you can discern if the customer segment is sufficiently large to justify addressing. Test and adjust as needed. Then make these best customers and their forever promise as “real” as possible to the team. If you have actual customers who fit the profile, talk about them, invite them in, or have their pictures on your wall. You’re going to feel their pain, share their objectives, and design experiences for them. It’s important to know them well.
Robbie Kellman Baxter (The Forever Transaction: How to Build a Subscription Model So Compelling, Your Customers Will Never Want to Leave)
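One way to make the "regression analysis to prioritize characteristics" step concrete, assuming scikit-learn and a customer export with invented column names; coefficient magnitudes then rank which traits separate best customers from worst:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("customers.csv")           # hypothetical export of customer data
traits = ["came_via_referral", "used_onboarding_tour", "visits_per_month"]
X, y = df[traits], df["is_best_customer"]   # 1 = high CLV, 0 = unprofitable or churned

model = LogisticRegression(max_iter=1000).fit(X, y)
for trait, coef in sorted(zip(traits, model.coef_[0]), key=lambda pair: -abs(pair[1])):
    print(f"{trait:25s} {coef:+.2f}")
```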
National Security, in practice, must always fall short of the logically Empedoclean infinite regress it requires for perfect “security.” In that gap between the ideal of “One Nation under surveillance with wire taps and urine tests for all,” and the strictly limited real situation of finite resources and finite funding, there is ample encouragement for paranoias of all sorts to flourish, both among the citizens and among the police.
Robert Anton Wilson (Prometheus Rising)
Just as a hologram is so structured that each part contains the whole, Finnegans Wake is structured in puns and synchronicities that "contain" and reflect each other, creating the closest approximation of an infinite regress ever achieved in any art-form. The absent is everpresent, the dead are all alive, and the abyss of uncertainty appears in every multi-meaningful sentence. (This will be illustrated below.)
Robert Anton Wilson (Coincidance: A Head Test)
Once we get the regression results, we would calculate a t-statistic, which is the ratio of the observed coefficient to the standard error for that coefficient.* This t-statistic is then evaluated against whatever t-distribution is appropriate for the size of the data sample (since this is largely what determines the number of degrees of freedom). When the t-statistic is sufficiently large, meaning that our observed coefficient is far from what the null hypothesis would predict, we can reject the null hypothesis at some level of statistical significance. Again, this is the same basic process of statistical inference that we have been employing throughout the book. The fewer the degrees of freedom (and therefore the “fatter” the tails of the relevant t-distribution), the higher the t-statistic will have to be in order for us to reject the null hypothesis at some given level of significance. In the hypothetical regression example described above, if we had four degrees of freedom, we would need a t-statistic of at least 2.13 to reject the null hypothesis at the .05 level (in a one-tailed test). However, if we have 20,000 degrees of freedom (which essentially allows us to use the normal distribution), we would need only a t-statistic of 1.65 to reject the null hypothesis at the .05 level in the same one-tailed test.
Charles Wheelan (Naked Statistics: Stripping the Dread from the Data)
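The two critical values quoted above can be reproduced directly, assuming SciPy:

```python
from scipy import stats

print(stats.t.ppf(0.95, 4))        # ~2.13 with 4 degrees of freedom
print(stats.t.ppf(0.95, 20000))    # ~1.645 with 20,000 degrees of freedom
print(stats.norm.ppf(0.95))        # ~1.645, the normal-distribution limit
```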
Unit testing is one of the most important components in legacy code work. System-level regression tests are great, but small, localized tests are invaluable. They can give you feedback as you develop and allow you to refactor with much more safety.
Michael C. Feathers (Working Effectively with Legacy Code)
processed, and transformed into a format that is suitable for analysis. This often involves removing duplicate data, correcting errors, and dealing with missing values. After data is prepared, exploratory data analysis is performed to better understand the data and identify patterns, trends, and outliers. Descriptive statistics, data visualization, and data clustering techniques are often used to explore data. Once the data is understood, statistical methods such as hypothesis testing and regression analysis can be applied to identify relationships and make predictions.
Brian Murray (Data Analysis for Beginners: The ABCs of Data Analysis. An Easy-to-Understand Guide for Beginners)
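A compressed sketch of that workflow with pandas and statsmodels (both assumed available); the file and column names are placeholders:

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("measurements.csv")                   # hypothetical raw data
df = df.drop_duplicates().dropna(subset=["x", "y"])    # cleaning: duplicates and missing values

print(df.describe())                                   # exploratory summary statistics

X = sm.add_constant(df["x"])                           # simple linear regression of y on x
print(sm.OLS(df["y"], X).fit().summary())              # coefficients plus hypothesis tests
```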
The ‘quantitative revolution’ in geography required the discipline to adopt an explicitly scientific approach, including numerical and statistical methods, and mathematical modelling, so ‘numeracy’ became another necessary skill. Its immediate impact was greatest on human geography as physical geographers were already using these methods. A new lexicon encompassing the language of statistics and its array of techniques entered geography as a whole. Terms such as random sampling, correlation, regression, tests of statistical significance, probability, multivariate analysis, and simulation became part both of research and undergraduate teaching. Correlation and regression are procedures to measure the strength and form, respectively, of the relationships between two or more sets of variables. Significance tests measure the confidence that can be placed in those relationships. Multivariate methods enable the analysis of many variables or factors simultaneously – an appropriate approach for many complex geographical data sets. Simulation is often linked to probability and is a set of techniques capable of extrapolating or projecting future trends.
John A. Matthews (Geography: A Very Short Introduction)
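As a toy illustration of the "simulation" entry in that lexicon, a Monte Carlo projection of a future trend; every number here is invented:

```python
import numpy as np

rng = np.random.default_rng(42)
current_population = 100_000
growth_rates = rng.normal(loc=0.01, scale=0.02, size=(10_000, 20))  # 10,000 runs x 20 years

projections = current_population * np.prod(1 + growth_rates, axis=1)
print(f"Median 20-year projection: {np.median(projections):,.0f}")
print(f"Probability of decline:    {np.mean(projections < current_population):.2f}")
```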
The Core Debugging Process The core of the debugging process consists of four steps: Reproduce: Find a way to reliably and conveniently reproduce the problem on demand. Diagnose: Construct hypotheses, and test them by performing experiments until you are confident that you have identified the underlying cause of the bug. Fix: Design and implement changes that fix the problem, avoid introducing regressions, and maintain or improve the overall quality of the software. Reflect: Learn the lessons of the bug. Where did things go wrong? Are there any other examples of the same problem that will also need fixing? What can you do to ensure that the same problem doesn’t happen again?
Paul Butcher
Qacraft offers regression testing services, backed by vast industry experience, that ensure any enhancements, fixes, configuration changes, or upgrades have not affected the functioning of the system.
qacraft
​The vital condition of every true state is a well-defined climate: the climate of the highest possible tension, but not of forced agitation. It will be desirable that everyone stay at his post, that he takes pleasure in an activity in conformity with his own nature and vocation, which is therefore free and desired for itself before considering utilitarian purposes and the unhealthy desire to live above one’s proper condition. If it is not possible to ask everyone to follow an ‘ascetic and military vision of life’, it will be possible to aim at a climate of concentrated intensity, of personal life, that will encourage people to prefer a greater margin of liberty, as opposed to comfort and prosperity paid for with the consequent limitation of liberty through the evitable economic and social influences. Autarchy, in the terms we have emphasised, is a valid Fascist formula. A course of virile, measured austerity is also valid and, finally, an internal discipline through which one develops a taste and an anti-bourgeois orientation of life, but no schoolmarmish and impertinent intrusion by what is public into the field of private life. Here, too, the principle should be liberty connected with equal responsibility and, in general, giving prominence to the principles of ‘great morality’ as opposed to the principles of conformist ‘little morality’. A doctrine of the state can only propose values to test the elective affinities and the dominant or latent vocations of a nation. If a people cannot or does not want to acknowledge the values that we have called ‘traditional’, and which define a true Right, it deserves to be left to itself. At most, we can point out to it the illusions and suggestions of which it has been or is the victim, which are due to a general action which has often been systematically organised, and to regressive processes. If not even this leads to a sensible result, this people will suffer the fate that it has created, by making use of its ‘liberty’.​
Julius Evola (Fascism Viewed from the Right)
More traditional and conformist values ultimately give way to more progressive ones. Cultures that go against that progression regress or fail (ahem . . . Japan). It’s constructive that conservatives challenge new liberal technologies and values. That’s how we test these things and separate what’s productive and acceptable from what’s nonproductive and unacceptable. It demonstrates the ultimate principle of cycles and progress: the play of opposites. Like male and female, boom and bust, inflation and deflation, liberals and conservatives aren’t right or wrong. They are yin and yang. Inseparable. Together, they create the energy and innovation necessary for real life to function and evolve, just as opposite poles create energy in a battery. This dynamic has created the differences and comparative advantages in our global culture today . . . the very ones the world’s citizens are revolting against. And as this revolution runs its course, we’ll ultimately move back toward globalizing . . . to our mutual advantage and pain. The backlash against globalization is necessary at this extreme point, and it will take decades to work out. But it’s not the ultimate result. It’s just the pause that refreshes.
Harry S. Dent (Zero Hour: Turn the Greatest Political and Financial Upheaval in Modern History to Your Advantage)
When the automation test pack is being designed, the most important decision is to plan the Test Scheduling of those Automated Test Scripts. The objective of test automation is to reduce the amount of time spent in Regression Testing.
Narayanan Palani (Software Automation Testing Secrets Revealed: Revised Edition - Part 1)
We cannot achieve deployments on demand if every code deployment requires two weeks to set up our test environments and data sets, and another four weeks to manually execute all our regression tests. The countermeasure is to automate our tests so we can execute deployments safely and to parallelize them so the test rate can keep up with our code development rate.
Gene Kim (The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations)
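A minimal sketch of the parallelization countermeasure using only the Python standard library; the suite paths and the choice of pytest are placeholders:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

suites = ["tests/api", "tests/billing", "tests/reports"]   # hypothetical independent suites


def run_suite(path: str) -> int:
    # Each suite runs in its own process, so the threads simply wait on it.
    return subprocess.run(["pytest", path]).returncode


with ThreadPoolExecutor(max_workers=len(suites)) as pool:
    results = list(pool.map(run_suite, suites))

print("PASS" if all(code == 0 for code in results) else "FAIL")
```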
THE DSM-V: A VERITABLE SMORGASBORD OF “DIAGNOSES” When DSM-V was published in May 2013 it included some three hundred disorders in its 945 pages. It offers a veritable smorgasbord of possible labels for the problems associated with severe early-life trauma, including some new ones such as Disruptive Mood Regulation Disorder, Non-suicidal Self Injury, Intermittent Explosive Disorder, Dysregulated Social Engagement Disorder, and Disruptive Impulse Control Disorder. Before the late nineteenth century doctors classified illnesses according to their surface manifestations, like fevers and pustules, which was not unreasonable, given that they had little else to go on. This changed when scientists like Louis Pasteur and Robert Koch discovered that many diseases were caused by bacteria that were invisible to the naked eye. Medicine then was transformed by its attempts to discover ways to get rid of those organisms rather than just treating the boils and the fevers that they caused. With DSM-V psychiatry firmly regressed to early-nineteenth-century medical practice. Despite the fact that we know the origin of many of the problems it identifies, its “diagnoses” describe surface phenomena that completely ignore the underlying causes. Even before DSM-V was released, the American Journal of Psychiatry published the results of validity tests of various new diagnoses, which indicated that the DSM largely lacks what in the world of science is known as “reliability”—the ability to produce consistent, replicable results. In other words, it lacks scientific validity. Oddly, the lack of reliability and validity did not keep the DSM-V from meeting its deadline for publication, despite the near-universal consensus that it represented no improvement over the previous diagnostic system. Could the fact that the APA had earned $100 million on the DSM-IV and is slated to take in a similar amount with the DSM-V (because all mental health practitioners, many lawyers, and other professionals will be obliged to purchase the latest edition) be the reason we have this new diagnostic system?
Bessel van der Kolk (The Body Keeps the Score: Brain, Mind, and Body in the Healing of Trauma)
Can Strawberries Reverse the Development of Esophageal Cancer? Esophageal cancer joins pancreatic cancer as one of the gravest diagnoses imaginable. The five-year survival rate is less than 20 percent, with most people dying within the first year after diagnosis. This underscores the need to prevent, stop, or reverse the disease process as early as possible. Researchers decided to put berries to the test. In a randomized clinical trial of powdered strawberries in patients with precancerous lesions in their esophagus, subjects ate one to two ounces of freeze-dried strawberries every day for six months—that’s the daily equivalent of about a pound of fresh strawberries. All of the study participants started out with either mild or moderate precancerous disease, but, amazingly, the progression of the disease was reversed in about 80 percent of the patients in the high-dose strawberry group. Most of these precancerous lesions either regressed from moderate to mild or disappeared entirely. Half of those on the high-dose strawberry treatment walked away disease-free.
Michael Greger (How Not to Die: Discover the Foods Scientifically Proven to Prevent and Reverse Disease)
What worries me is that common sense seems to be dwindling to the point of extinction. The minds of men whom our contemporaries consider educated are regressing to the level of the most ignorant peasant on a Mediaeval manor. There is something terrifying in the spectacle of men who hold degrees in the genuine sciences and assemble vast arrays of elaborate scientific equipment to “prove” the authenticity of a “Holy Shroud,” and thus make it necessary to assemble more equipment and conduct long and painstaking research to prove what any half-way educated and rational man would have known from the very first. And the same sotie is performed whenever some prestidigitator claims that he can bend spoons by thinking about them. Is there any limit to the gullibility of “highly qualified scientists”? I sometimes have a vision of scores of great scientists and tons of elaborate and very expensive laboratory equipment assembled about a pond into which they drop horsehairs to determine whether the percentage that turn into tadpoles is significant by the binomial formula. If hairs from Standard-breeds don’t work, get some from Appaloosas. Then try Percherons and Arabians: their hairs may make tadpoles better. And no one can say that the hairs of horses do not turn into tadpoles until you have made exhaustive scientific tests of hairs from every known breed of horses – and then someone will turn up to prove that the negative results are all wrong, because tadpoles come from the hairs of horses who eat the variety of four-leaved clover that grows in a hidden valley in Afghanistan, so the assembled scientists and their equipment will start all over.
Revilo P. Oliver (Is There Intelligent Life on Earth?)
In typical DevOps transformations, as we progress from deployment lead times measured in months or quarters to lead times measured in minutes, the constraint usually follows this progression: Environment creation: We cannot achieve deployments on-demand if we always have to wait weeks or months for production or test environments. The countermeasure is to create environments that are on demand and completely self-serviced, so that they are always available when we need them. Code deployment: We cannot achieve deployments on demand if each of our production code deployments take weeks or months to perform (i.e., each deployment requires 1,300 manual, error-prone steps, involving up to three hundred engineers). The countermeasure is to automate our deployments as much as possible, with the goal of being completely automated so they can be done self-service by any developer. Test setup and run: We cannot achieve deployments on demand if every code deployment requires two weeks to set up our test environments and data sets, and another four weeks to manually execute all our regression tests. The countermeasure is to automate our tests so we can execute deployments safely and to parallelize them so the test rate can keep up with our code development rate. Overly tight architecture: We cannot achieve deployments on demand if overly tight architecture means that every time we want to make a code change we have to send our engineers to scores of committee meetings in order to get permission to make our changes. Our countermeasure is to create more loosely-coupled architecture so that changes can be made safely and with more autonomy, increasing developer productivity.
Gene Kim (The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations)
As a general observation: Every problem created by human beings started as a solution to a previous problem. Therefore, in thinking about any solution for any problem, it is important to think about how it (the solution itself) would become a future problem.
Chade-Meng Tan
Scarcity of resources brings clarity to execution and it creates a strong sense of ownership by those on a project. Imagine raising a child with a large staff of help: one person for feeding, one for diapering, one for entertainment, and so on. None of these people is as vested in the child’s life as a single, overworked parent. It is the scarcity of the parenting resource that brings clarity and efficiency to the process of raising children. When resources are scarce, you are forced to optimize. You are quick to see process inefficiencies and not repeat them. You create a feeding schedule and stick to it. You place the various diapering implements in close proximity to streamline steps in the process. It’s the same concept for software-testing projects at Google. Because you can’t simply throw people at a problem, the tool chain gets streamlined. Automation that serves no real purpose gets deprecated. Tests that find no regressions aren’t written. Developers who demand certain types of activity from testers have to participate in it. There are no make-work tasks. There is no busy work in an attempt to add value where you are not needed.
James A. Whittaker (How Google Tests Software)
BITE tries to address many of these issues and lets the engineer focus on actual exploratory and regression testing—not the process and mechanics. Modern jet fighters have dealt with this information overload problem by building Heads Up Displays (HUDs). HUDs streamline information and put it in context, right over the pilot’s field of view. Much like moving from propeller-driven aircraft to jets, the frequency with which we ship new versions of software at Google also adds to the amount of data and the premium on the speed at which we can make decisions. We’ve taken a similar approach with BITE for regression and manual testing. We implemented BITE as a browser extension because it allows us to watch what the tester is doing (see Figure 3.35) and examine the inside of the web application (DOM). It also enables us to project a unified user experience in the toolbar for quick access to data while overlaying that data on the web application at the same time, much like a HUD.
James A. Whittaker (How Google Tests Software)