Policy Analysis Quotes


Facts are rarely self-explanatory; their significance, analysis, and interpretation—at least in the foreign policy world—depend on context and relevance.
Henry Kissinger (World Order)
As a rule, capitalism is blamed for the undesired effects of a policy directed at its elimination. The man who sips his morning coffee does not say, "Capitalism has brought this beverage to my breakfast table." But when he reads in the papers that the government of Brazil has ordered part of the coffee crop destroyed, he does not say, "That is government for you"; he exclaims, "That is capitalism for you."
Ludwig von Mises (Interventionism: An Economic Analysis)
Three conclusions emerge from Richelieu’s career. First, the indispensable element of a successful foreign policy is a long-term strategic concept based on a careful analysis of all relevant factors. Second, the statesman must distill that vision by analyzing and shaping an array of ambiguous, often conflicting pressures into a coherent and purposeful direction. He (or she) must know where this strategy is leading and why. And, third, he must act at the outer edge of the possible, bridging the gap between his society’s experiences and its aspirations.
Henry Kissinger (World Order: Reflections on the Character of Nations and the Course of History)
It is time to think differently about gun violence as a public health problem. From that perspective, a fair and effective policy should start with risk, not mental illness.
Daniel W. Webster (Updated Evidence and Policy Developments on Reducing Gun Violence in America: Informing Policy with Evidence and Analysis)
Management is, in the end, the most creative of all the arts—for its medium is human talent itself.
Robert S. McNamara
Scholars have long debated whether capital markets lead to appropriate levels of saving and investment for future generations.
David L. Weimer (Policy Analysis: Concepts and Practice)
No data are excluded on subjective or arbitrary grounds. No one piece of data is more highly valued than another. The consequences of this policy have to be accepted, even if they prove awkward.
Jennifer K. McArthur (Place-Names in the Knossos Tablets Identification and Location (Suplementos a MINOS, #9))
In every section of the entire area where the word science may properly be applied, the limiting factor is a human one. We shall have rapid or slow advance in this direction or in that depending on the number of really first-class men who are engaged in the work in question. ... So in the last analysis, the future of science in this country will be determined by our basic educational policy.
James Bryant Conant
A careful analysis of the September 11th attacks reveals that deficiencies in U.S. intelligence collection and information sharing, not immigration laws, prevented the terrorists’ plans from being discovered.
Bill Ong Hing (Deporting our Souls: Values, Morality, and Immigration Policy)
Ron Rivest, one of the inventors of RSA, thinks that restricting cryptography would be foolhardy: It is poor policy to clamp down indiscriminately on a technology just because some criminals might be able to use it to their advantage. For example, any U.S. citizen can freely buy a pair of gloves, even though a burglar might use them to ransack a house without leaving fingerprints. Cryptography is a data-protection technology, just as gloves are a hand-protection technology. Cryptography protects data from hackers, corporate spies, and con artists, whereas gloves protect hands from cuts, scrapes, heat, cold, and infection. The former can frustrate FBI wiretapping, and the latter can thwart FBI fingerprint analysis. Cryptography and gloves are both dirt-cheap and widely available. In fact, you can download good cryptographic software from the Internet for less than the price of a good pair of gloves.
Simon Singh (The Code Book: The Science of Secrecy from Ancient Egypt to Quantum Cryptography)
Therefore, Orientalism is not a mere political subject matter or field that is reflected passively by culture, scholarship, or institutions; nor is it a large and diffuse collection of texts about the Orient; nor is it representative and expressive of some nefarious “Western” imperialist plot to hold down the “Oriental” world. It is rather a distribution of geopolitical awareness into aesthetic, scholarly, economic, sociological, historical, and philological texts; it is an elaboration not only of a basic geographical distinction (the world is made up of two unequal halves, Orient and Occident) but also of a whole series of “interests” which, by such means as scholarly discovery, philological reconstruction, psychological analysis, landscape and sociological description, it not only creates but also maintains; it is, rather than expresses, a certain will or intention to understand, in some cases to control, manipulate, even to incorporate, what is a manifestly different (or alternative and novel) world; it is, above all, a discourse that is by no means in direct, corresponding relationship with political power in the raw, but rather is produced and exists in an uneven exchange with various kinds of power, shaped to a degree by the exchange with power political (as with a colonial or imperial establishment), power intellectual (as with reigning sciences like comparative linguistics or anatomy, or any of the modern policy sciences), power cultural (as with orthodoxies and canons of taste, texts, values), power moral (as with ideas about what “we” do and what “they” cannot do or understand as “we” do). Indeed, my real argument is that Orientalism is—and does not simply represent—a considerable dimension of modern political-intellectual culture, and as such has less to do with the Orient than it does with “our” world.
Edward W. Said (Orientalism)
The U.S.-led coalition dropped about twelve thousand bombs on Afghanistan that autumn, about 40 percent of them “dumb,” or unguided, according to an analysis by Carl Conetta of the Center for International Policy. Hank Crumpton at the Counterterrorist Center estimated that the campaign killed “at least ten thousand” foreign and Taliban fighters, “perhaps double or triple that number.” By the conservative estimate of Boston University political scientist Neta Crawford, between 1,500 and 2,375 Afghan civilians also died.
Steve Coll (Directorate S: The C.I.A. and America's Secret Wars in Afghanistan and Pakistan, 2001-2016)
The only recourse we have against bad ideas is to be vigilant, resist the seduction of the “obvious,” be skeptical of promised miracles, question the evidence, be patient with complexity and honest about what we know and what we can know. Without that vigilance, conversations about multifaceted problems turn into slogans and caricatures and policy analysis gets replaced by quack remedies. The call to action is not just for academic economists—it is for all of us who want a better, saner, more humane world. Economics is too important to be left to economists.
Abhijit V. Banerjee (Good Economics for Hard Times: Better Answers to Our Biggest Problems)
QUESTIONS TO CONSIDER
• As you survey your company-wide policies and procedures, ask: What is the purpose of this policy or procedure? Does it achieve that result?
• Are there any approval mechanisms you can eliminate?
• What percentage of its time does management spend on problem solving and team building?
• Have you done a cost-benefit analysis of the incentives and perks you offer employees?
• Could you replace approvals and permissions with analysis of spending patterns and a focus on accuracy and predictability?
• Is your decision-making system clear and communicated widely?
Patty McCord (Powerful: Building a Culture of Freedom and Responsibility)
Washington — perhaps as many global powers have done in the past — uses what I might call the “immaculate conception” theory of crises abroad. That is, we believe we are essentially out there, just minding our own business, trying to help make the world right, only to be endlessly faced with a series of spontaneous, nasty challenges from abroad to which we must react. There is not the slightest consideration that perhaps US policies themselves may have at least contributed to a series of unfolding events. This presents a huge paradox: how can America on the one hand pride itself on being the world’s sole global superpower, with over seven hundred military bases abroad and the Pentagon’s huge global footprint, and yet, on the other hand, be oblivious to and unacknowledging of the magnitude of its own role — for better or for worse — as the dominant force charting the course of world events? This Alice-in-Wonderland delusion affects not just policy makers, but even the glut of think tanks that abound in Washington. In what may otherwise often be intelligent analysis of a foreign situation, the focus of each study is invariably the other country, the other culture, the negative intentions of other players; the impact of US actions and perceptions are quite absent from the equation. It is hard to point to serious analysis from mainstream publications or think tanks that address the role of the United States itself in helping create current problems or crises, through policies of omission or commission. We’re not even talking about blame here; we’re addressing the logical and self-evident fact that the actions of the world’s sole global superpower have huge consequences in the unfolding of international politics. They require examination.
Graham E. Fuller (A World Without Islam)
Trump is Trump. I came to understand that he believed he could run the Executive Branch and establish national-security policies on instinct, relying on personal relationships with foreign leaders, and with made-for-television showmanship always top of mind. Now, instinct, personal relations, and showmanship are elements of any President’s repertoire. But they are not all of it, by a long stretch. Analysis, planning, intellectual discipline and rigor, evaluation of results, course corrections, and the like are the blocking and tackling of presidential decision-making, the unglamorous side of the job. Appearance takes you only so far.
John Bolton (The Room Where It Happened: A White House Memoir)
In autumn 1937, the New York Times delivered its analysis of the economy’s downturn: “The cause is attributed by some to taxation and alleged federal curbs on industry; by others, to the demoralization of production caused by strikes.” Both the taxes and the strikes were the result of Roosevelt policy; the strikes had been made possible by the Wagner Act the year before. As scholars have long noted, the high wages generated by New Deal legislation helped those workers who earned them. But the inflexibility of those wages also prevented companies from hiring additional workers. Hence the persistent shortage of jobs in the latter part of the 1930s.
Amity Shlaes (The Forgotten Man: A New History of the Great Depression)
An estimated 3.5 million people with serious mental illnesses are going without treatment (Kessler et al. 2001). That is scandalous. But mentally ill people are not the cause of the violence problem. If schizophrenia, bipolar disorder, and depression were cured, our society’s problem of violence would diminish by only about 4% (Swanson 1994).
Daniel W. Webster (Updated Evidence and Policy Developments on Reducing Gun Violence in America: Informing Policy with Evidence and Analysis)
What is clear is that this growing river of money will dramatically expand the size and influence of a new power elite of living donors that already wields enormous clout. One analysis by the scholar Kristin Goss found that nearly half of America’s top two hundred philanthropists—including many Giving Pledge members—have expressed an interest in shaping public policy.
David Callahan (The Givers: Wealth, Power, and Philanthropy in a New Gilded Age)
The evidence showing that patients of color, black patients especially, are undertreated for pain in the United States is particularly robust. A 2012 meta-analysis of twenty years of published research found that, across all the studies, black patients were 22 percent less likely than whites to get any pain medication and 29 percent less likely to be treated with opioids. Latino patients were also 22 percent less likely to receive opioids. As is the case with gender disparities, racial/ethnic disparities were most pronounced 'when a cause of pain could not be readily verified.' But black patients were less likely to get opioids after traumatic injuries or surgery too, and the authors warned that the gap 'does not appear to be closing with time or existing policy initiatives.'
Maya Dusenbery (Doing Harm: The Truth About How Bad Medicine and Lazy Science Leave Women Dismissed, Misdiagnosed, and Sick)
White supremacists are the ones supporting policies that benefit racist power against the interests of the majority of White people. White supremacists claim to be pro-White but refuse to acknowledge that climate change is having a disastrous impact on the earth White people inhabit. They oppose affirmative-action programs, despite White women being their primary beneficiaries. White supremacists rage against Obamacare even as 43 percent of the people who gained lifesaving health insurance from 2010 to 2015 were White. They heil Adolf Hitler’s Nazis, even though it was the Nazis who launched a world war that destroyed the lives of more than forty million White people and ruined Europe. They wave Confederate flags and defend Confederate monuments, even though the Confederacy started a civil war that ended with more than five hundred thousand White American lives lost—more than every other American war combined. White supremacists love what America used to be, even though America used to be—and still is—teeming with millions of struggling White people. White supremacists blame non-White people for the struggles of White people when any objective analysis of their plight primarily implicates the rich White Trumps they support.
Ibram X. Kendi (How to Be an Antiracist)
Education without social action is a one-sided value because it has no true power potential. Social action without education is a weak expression of pure energy. Deeds uninformed by educated thought can take false directions. When we go into action and confront our adversaries, we must be as armed with knowledge as they. Our policies should have the strength of deep analysis beneath them to be able to challenge the clever sophistries of our opponents.
Martin Luther King Jr. (Where Do We Go from Here: Chaos or Community?)
This is not necessarily true, however, of measures merely restricting the allowed methods of production, so long as these restrictions affect all potential producers equally and are not used as an indirect way of controlling prices and quantities. Though all such controls of the methods of production impose extra costs (i.e., make it necessary to use more resources to produce a given output), they may be well worth while. To prohibit the use of certain poisonous substances or to require special precautions in their use, to limit working hours or to require certain sanitary arrangements, is fully compatible with the preservation of competition. The only question here is whether in the particular instance the advantages gained are greater than the social costs which they impose. Nor is the preservation of competition incompatible with an extensive system of social services — so long as the organization of these services is not designed in such a way as to make competition ineffective over wide fields.
Friedrich A. Hayek (The Road to Serfdom)
Over the years Tomsen had concluded that America’s failed policies in Afghanistan flowed in part from the compartmented, top secret isolation in which the CIA always sought to work. The agency saw the president as its client. By keeping the State Department and other policy makers at a distance, it preserved a certain freedom to operate. But when the agency was wrong—the Bay of Pigs, Gulbuddin Hekmatyar—there was little check on its analysis. Conversely, when it was on the right track—as with Massoud in the late 1990s—it often had trouble finding allies in political Washington.
Steve Coll (Ghost Wars: The Secret History of the CIA, Afghanistan & Bin Laden from the Soviet Invasion to September 10, 2001)
1 – Thinking Straight
Maxim 1 – When you are having trouble getting your thinking straight, go to an extreme case
Maxim 2 – When you are having trouble getting your thinking straight, go to a simple case
Maxim 3 – Don’t take refuge in complexity
Maxim 4 – When trying to understand a complex real-world situation, think of an everyday analogue
2 – Tackling Uncertainty
Maxim 5 – The world is much more uncertain than you think
Maxim 6 – Think probabilistically about the world
Maxim 7 – Uncertainty is the friend of the status quo
3 – Making Decisions
Maxim 8 – Good decisions sometimes have poor outcomes
Maxim 9 – Some good decisions have a high probability of a bad outcome
Maxim 10 – Errors of commission should be weighted the same as errors of omission
Maxim 11 – Don’t be limited by the options you have in front of you
Maxim 12 – Information is only valuable if it can change your decision
4 – Understanding Policy
Maxim 13 – Long division is the most important tool for policy analysis
Maxim 14 – Elasticities are a powerful tool for understanding many important things in life
Maxim 15 – Heterogeneity in the population explains many phenomena
Maxim 16 – Capitalize on complementarities
Dan Levy (Maxims for Thinking Analytically: The wisdom of legendary Harvard Professor Richard Zeckhauser)
For such divisions are generalities whose use historically and actually has been to press the importance of the distinction between some men and some other men, usually towards not especially admirable ends. When one uses categories like Oriental and Western as both the starting and the end points of analysis, research, public policy (as the categories were used by Balfour and Cromer), the result is usually to polarize the distinction—the Oriental becomes more Oriental, the Westerner more Western—and limit the human encounter between different cultures, traditions, and societies.
Edward W. Said (Orientalism)
Free spirits, the ambitious, ex-socialists, drug users, and sexual eccentrics often find an attractive political philosophy in libertarianism, the idea that individual freedom should be the sole rule of ethics and government. Libertarianism offers its believers a clear conscience to do things society presently restrains, like make more money, have more sex, or take more drugs. It promises a consistent formula for ethics, a rigorous framework for policy analysis, a foundation in American history, and the application of capitalist efficiencies to the whole of society. But while it contains substantial grains of truth, as a whole it is a seductive mistake. . . . The most fundamental problem with libertarianism is very simple: freedom, though a good thing, is simply not the only good thing in life. . . . Libertarians try to get around this fact that freedom is not the only good thing by trying to reduce all other goods to it through the concept of choice, claiming that everything that is good is so because we choose to partake of it. Therefore freedom, by giving us choice, supposedly embraces all other goods. But this violates common sense by denying that anything is good by nature, independently of whether we choose it. . . . So even if the libertarian principle of “an it harm none, do as thou wilt,” is true, it does not license the behavior libertarians claim. Consider pornography: libertarians say it should be permitted because if someone doesn’t like it, he can choose not to view it. But what he can’t do is choose not to live in a culture that has been vulgarized by it. . . . There is no need to embrace outright libertarianism just because we want a healthy portion of freedom, and the alternative to libertarianism is not the USSR, it is America’s traditional liberties. . . . Paradoxically, people exercise their freedom not to be libertarians. 
The political corollary of this is that since no electorate will support libertarianism, a libertarian government could never be achieved democratically but would have to be imposed by some kind of authoritarian state, which rather puts the lie to libertarians’ claim that under any other philosophy, busybodies who claim to know what’s best for other people impose their values on the rest of us. . . . Libertarians are also naïve about the range and perversity of human desires they propose to unleash. They can imagine nothing more threatening than a bit of Sunday-afternoon sadomasochism, followed by some recreational drug use and work on Monday. They assume that if people are given freedom, they will gravitate towards essentially bourgeois lives, but this takes for granted things like the deferral of gratification that were pounded into them as children without their being free to refuse. They forget that for much of the population, preaching maximum freedom merely results in drunkenness, drugs, failure to hold a job, and pregnancy out of wedlock. Society is dependent upon inculcated self-restraint if it is not to slide into barbarism, and libertarians attack this self-restraint. Ironically, this often results in internal restraints being replaced by the external restraints of police and prison, resulting in less freedom, not more. This contempt for self-restraint is emblematic of a deeper problem: libertarianism has a lot to say about freedom but little about learning to handle it. Freedom without judgment is dangerous at best, useless at worst. Yet libertarianism is philosophically incapable of evolving a theory of how to use freedom well because of its root dogma that all free choices are equal, which it cannot abandon except at the cost of admitting that there are other goods than freedom. Conservatives should know better.
Robert Locke
What is missing—what is foregone—in the typical discussion and analysis of historical or current nuclear policies is the recognition that what is being discussed is dizzyingly insane and immoral: in its almost-incalculable and inconceivable destructiveness and deliberate murderousness, its disproportionality of risked and planned destructiveness to either declared or unacknowledged objectives, the infeasibility of its secretly pursued aims (damage limitation to the United States and allies, “victory” in two-sided nuclear war), its criminality (to a degree that explodes ordinary visions of law, justice, crime), its lack of wisdom or compassion, its sinfulness and evil.
Daniel Ellsberg (The Doomsday Machine: Confessions of a Nuclear War Planner)
Complex operations, in which agencies assume complementary roles and operate in close proximity, often with similar missions but conflicting mandates, accentuate these tensions. The tensions are evident in the processes of analyzing complex environments, planning for complex interventions, and implementing complex operations. Many reports and analyses forecast that these complex operations are precisely those that will demand our attention most in the indefinite future. As essayists Barton and O'Connell note, our intelligence and understanding of the root cause of conflict, multiplicity of motivations and grievances, and disposition of actors is often inadequate. Moreover, the problems that complex operations are intended and implemented to address are convoluted, and often inscrutable. They exhibit many if not all the characteristics of "wicked problems," as enumerated by Rittel and Webber in 1973: they defy definitive formulations; any proposed solution or intervention causes the problem to mutate, so there is no second chance at a solution; every situation is unique; each wicked problem can be considered a symptom of another problem. As a result, policy objectives are often compound and ambiguous. The requirements of stability, for example, in Afghanistan today, may conflict with the requirements for democratic governance. Efforts to establish an equitable social contract may well exacerbate inter-communal tensions that can lead to violence. The rule of law, as we understand it, may displace indigenous conflict management and stabilization systems. The law of unintended consequences may indeed be the only law of the land. The complexity of the challenges we face in the current global environment would suggest the obvious benefit of joint analysis - bringing to bear on any given problem the analytic tools of military, diplomatic and development analysts.
Instead, efforts to analyze jointly are most often an afterthought, initiated long after a problem has escalated to a level of urgency that negates much of the utility of deliberate planning.
Michael Miklaucic (Commanding Heights: Strategic Lessons from Complex Operations)
Thought Control
* Require members to internalize the group’s doctrine as truth
* Adopt the group’s “map of reality” as reality
* Instill black and white thinking
* Decide between good versus evil
* Organize people into us versus them (insiders versus outsiders)
* Change a person’s name and identity
* Use loaded language and clichés to constrict knowledge, stop critical thoughts, and reduce complexities into platitudinous buzzwords
* Encourage only “good and proper” thoughts
* Use hypnotic techniques to alter mental states, undermine critical thinking, and even to age-regress the member to childhood states
* Manipulate memories to create false ones
* Teach thought stopping techniques that shut down reality testing by stopping negative thoughts and allowing only positive thoughts. These techniques include:
  * Denial, rationalization, justification, wishful thinking
  * Chanting
  * Meditating
  * Praying
  * Speaking in tongues
  * Singing or humming
* Reject rational analysis, critical thinking, constructive criticism
* Forbid critical questions about leader, doctrine, or policy
* Label alternative belief systems as illegitimate, evil, or not useful
* Instill new “map of reality”
Emotional Control
* Manipulate and narrow the range of feelings—some emotions and/or needs are deemed as evil, wrong, or selfish
* Teach emotion stopping techniques to block feelings of hopelessness, anger, or doubt
* Make the person feel that problems are always their own fault, never the leader’s or the group’s fault
* Promote feelings of guilt or unworthiness, such as:
  * Identity guilt
  * You are not living up to your potential
  * Your family is deficient
  * Your past is suspect
  * Your affiliations are unwise
  * Your thoughts, feelings, actions are irrelevant or selfish
  * Social guilt
  * Historical guilt
* Instill fear, such as fear of:
  * Thinking independently
  * The outside world
  * Enemies
  * Losing one’s salvation
  * Leaving
* Orchestrate emotional highs and lows through love bombing and by offering praise one moment, and then declaring a person is a horrible sinner
* Ritualistic and sometimes public confession of sins
* Phobia indoctrination: inculcate irrational fears about leaving the group or questioning the leader’s authority
* No happiness or fulfillment possible outside the group
* Terrible consequences if you leave: hell, demon possession, incurable diseases, accidents, suicide, insanity, 10,000 reincarnations, etc.
* Shun those who leave and inspire fear of being rejected by friends and family
* Never a legitimate reason to leave; those who leave are weak, undisciplined, unspiritual, worldly, brainwashed by family or counselor, or seduced by money, sex, or rock and roll
* Threaten harm to ex-member and family (threats of cutting off friends/family)
Steven Hassan
Now everyone knows that to try to say something in the mainstream Western media that is critical of U.S. policy or Israel is extremely difficult; conversely, to say things that are hostile to the Arabs as a people and culture, or Islam as a religion, is laughably easy. For in effect there is a cultural war between spokespersons for the West and those of the Muslim and Arab world. In so inflamed a situation, the hardest thing to do as an intellectual is to be critical, to refuse to adopt a rhetorical style that is the verbal equivalent of carpet-bombing, and to focus instead on those issues like U.S. support for unpopular client regimes, which for a person writing in the U.S. are somewhat more likely to be affected by critical discussion. Of course, on the other hand, there is a virtual certainty of getting an audience if as an Arab intellectual you passionately, even slavishly support U.S. policy, you attack its critics, and if they happen to be Arabs, you invent evidence to show their villainy; if they are American you confect stories and situations that prove their duplicity; you spin out stories concerning Arabs and Muslims that have the effect of defaming their tradition, defacing their history, accentuating their weaknesses, of which of course there are plenty. Above all, you attack the officially approved enemies - Saddam Hussein, Baathism, Arab nationalism, the Palestinian movement, Arab views of Israel. And of course this earns you the expected accolades: you are characterized as courageous, you are outspoken and passionate, and on and on. The new god of course is the West. Arabs, you say, should try to be more like the West, should regard the West as a source and a reference point. Gone is the history of what the West actually did. Gone are the Gulf War's destructive results. We Arabs and Muslims are the sick ones, our problems are our own, totally self-inflicted. A number of things stand out about these kinds of performance.
In the first place, there is no universalism here at all. Because you serve a god uncritically, all the devils are always on the other side: this was as true when you were a Trotskyist as it is now when you are a recanting former Trotskyist. You do not think of politics in terms of interrelationships or of common histories such as, for instance, the long and complicated dynamic that has bound the Arabs and Muslims to the West and vice versa. Real intellectual analysis forbids calling one side innocent, the other evil. Indeed the notion of a side is, where cultures are at issue, highly problematic, since most cultures aren't watertight little packages, all homogenous, and all either good or evil. But if your eye is on your patron, you cannot think as an intellectual, but only as a disciple or acolyte. In the back of your mind there is the thought that you must please and not displease.
Edward W. Said (Representations of the Intellectual)
If we look at the way an industrial producer creates new products, we see a long list of trials and errors and eventually improvement in quality at a lower cost. Urban policies and strategies, by contrast, often do not follow this logic; they are often repeated even when it is well known that they failed. For instance, policies like rent control, greenbelts, new light rail transports, among others, are constantly repeated in spite of a near consensus on their failure to achieve their objectives. A quantitative evaluation of the failure of these policies is usually well documented through special reports or academic papers; it is seldom produced internally by cities, however, and the information does not seem to reach urban decision makers. Only a systematic analysis of data through indicators allows urban policies to be improved over time and failing policies to be abandoned. But as Angus Deaton wrote: 'without data, anyone who does anything is free to claim success.'
Alain Bertaud (Order without Design: How Markets Shape Cities)
In the United States, both of the dominant parties have shifted toward free-market capitalism. Even though analysis of roll call votes shows that since the 1970s, Republicans have drifted farther to the right than Democrats have moved to the left, the latter were instrumental in implementing financial deregulation in the 1990s and focused increasingly on cultural issues such as gender, race, and sexual identity rather than traditional social welfare policies. Political polarization in Congress, which had bottomed out in the 1940s, has been rapidly growing since the 1980s. Between 1913 and 2008, the development of top income shares closely tracked the degree of polarization but with a lag of about a decade: changes in the latter preceded changes in the former but generally moved in the same direction—first down, then up. The same has been true of wages and education levels in the financial sector relative to all other sectors of the American economy, an index that likewise tracks partisan polarization with a time lag. Thus elite incomes in general and those in the finance sector in particular have been highly sensitive to the degree of legislative cohesion and have benefited from worsening gridlock.
Walter Scheidel (The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century (The Princeton Economic History of the Western World Book 74))
Page 244: The Jewish involvement in influencing immigration policy in the United States is especially noteworthy as an aspect of ethnic conflict. ... Throughout much of the period from 1881 to 1965, one Jewish interest in liberal immigration policies stemmed from a desire to provide a sanctuary for Jews fleeing from anti-Semitic persecutions in Europe and elsewhere. ... There is also evidence that Jews, much more than any other European-derived ethnic group in the United States, have viewed liberal immigration policies as a mechanism of ensuring that the United States would be a pluralistic rather than a unitary, homogeneous society (e.g., Cohen 1972). ... Pluralism serves internal Jewish interests because it legitimates the internal Jewish interest in rationalizing ... Jewish group commitment and non-assimilation, what Howard Sachar (1992, 427) terms its function in “legitimizing the preservation of a minority culture in the midst of a majority’s host society.” ... Ethnic and religious pluralism also serves external Jewish interests because Jews become just one of many ethnic groups. This results in the diffusion of political and cultural influence among the various ethnic and religious groups, and it becomes difficult or impossible to develop unified, cohesive groups of gentiles united in their opposition to Judaism. Historically, major anti-Semitic movements have tended to erupt in societies that have been, apart from the Jews, religiously or ethnically homogeneous.
Kevin B. MacDonald (The Culture of Critique: An Evolutionary Analysis of Jewish Involvement in Twentieth-Century Intellectual and Political Movements)
Military analysis is not an exact science. To return to the wisdom of Sun Tzu, and paraphrase the great Chinese political philosopher, it is at least as close to art. But many logical methods offer insight into military problems, even if solutions to those problems ultimately require the use of judgement and of broader political and strategic considerations as well. Military affairs may not be as amenable to quantification and formal methodological treatment as economics, for example. However, even if our main goal in analysis is generally to illuminate choices, bound problems, and rule out bad options rather than arrive unambiguously at clear policy choices, the discipline of military analysis has a great deal to offer. Moreover, simple back-of-the-envelope methodologies often provide substantial insight without requiring the churning of giant computer models or access to the classified data of official Pentagon studies, allowing generalists and outsiders to play important roles in defense analytical debates. We have seen all too often (in the broad course of history as well as in modern times) what happens when we make key defense policy decisions based solely on instinct, ideology, and impression. To avoid cavalier, careless, and agenda-driven decision-making, we therefore need to study the science of war as well, even as we also remember the cautions of Clausewitz and avoid hubris in our predictions about how any war or other major military endeavor will ultimately unfold.
Michael O'Hanlon
Contrary to “the mantra,” White supremacists are the ones supporting policies that benefit racist power against the interests of the majority of White people. White supremacists claim to be pro-White but refuse to acknowledge that climate change is having a disastrous impact on the earth White people inhabit. They oppose affirmative-action programs, despite White women being their primary beneficiaries. White supremacists rage against Obamacare even as 43 percent of the people who gained lifesaving health insurance from 2010 to 2015 were White. They heil Adolf Hitler’s Nazis, even though it was the Nazis who launched a world war that destroyed the lives of more than forty million White people and ruined Europe. They wave Confederate flags and defend Confederate monuments, even though the Confederacy started a civil war that ended with more than five hundred thousand White American lives lost—more than every other American war combined. White supremacists love what America used to be, even though America used to be—and still is—teeming with millions of struggling White people. White supremacists blame non-White people for the struggles of White people when any objective analysis of their plight primarily implicates the rich White Trumps they support. White supremacist is code for anti-White, and White supremacy is nothing short of an ongoing program of genocide against the White race. In fact, it’s more than that: White supremacist is code for anti-human, a nuclear ideology that poses an existential threat to human existence.
Ibram X. Kendi (How to Be an Antiracist)
There is an excellent short book (126 pages) by Faustino Ballvé, Essentials of Economics (Irvington-on-Hudson, N.Y.: Foundation for Economic Education), which briefly summarizes principles and policies. A book that does that at somewhat greater length (327 pages) is Understanding the Dollar Crisis by Percy L. Greaves (Belmont, Mass.: Western Islands, 1973). Bettina Bien Greaves has assembled two volumes of readings on Free Market Economics (Foundation for Economic Education). The reader who aims at a thorough understanding, and feels prepared for it, should next read Human Action by Ludwig von Mises (Chicago: Contemporary Books, 1949, 1966, 907 pages). This book extended the logical unity and precision of economics beyond that of any previous work. A two-volume work written thirteen years after Human Action by a student of Mises is Murray N. Rothbard’s Man, Economy, and State (Mission, Kan.: Sheed, Andrews and McMeel, 1962, 987 pages). This contains much original and penetrating material; its exposition is admirably lucid; and its arrangement makes it in some respects more suitable for textbook use than Mises’ great work. Short books that discuss special economic subjects in a simple way are Planning for Freedom by Ludwig von Mises (South Holland, Ill.: Libertarian Press, 1952), and Capitalism and Freedom by Milton Friedman (Chicago: University of Chicago Press, 1962). There is an excellent pamphlet by Murray N. Rothbard, What Has Government Done to Our Money? (Santa Ana, Calif.: Rampart College, 1964, 1974, 62 pages). On the urgent subject of inflation, a book by the present author has recently been published, The Inflation Crisis, and How to Resolve It (New Rochelle, N.Y.: Arlington House, 1978). Among recent works which discuss current ideologies and developments from a point of view similar to that of this volume are the present author’s The Failure of the “New Economics”: An Analysis of the Keynesian Fallacies (Arlington House, 1959); F. A. 
Hayek, The Road to Serfdom (1945) and the same author’s monumental Constitution of Liberty (Chicago: University of Chicago Press, 1960). Ludwig von Mises’ Socialism: An Economic and Sociological Analysis (London: Jonathan Cape, 1936, 1969) is the most thorough and devastating critique of collectivistic doctrines ever written. The reader should not overlook, of course, Frederic Bastiat’s Economic Sophisms (ca. 1844), and particularly his essay on “What Is Seen and What Is Not Seen.” Those who are interested in working through the economic classics might find it most profitable to do this in the reverse of their historical order. Presented in this order, the chief works to be consulted, with the dates of their first editions, are: Philip Wicksteed, The Common Sense of Political Economy, 1911; John Bates Clark, The Distribution of Wealth, 1899; Eugen von Böhm-Bawerk, The Positive Theory of Capital, 1888; Carl Menger, Principles of Economics, 1871; W. Stanley Jevons, The Theory of Political Economy, 1871; John Stuart Mill, Principles of Political Economy, 1848; David Ricardo, Principles of Political Economy and Taxation, 1817; and Adam Smith, The Wealth of Nations, 1776.
Henry Hazlitt (Economics in One Lesson: The Shortest and Surest Way to Understand Basic Economics)
The government has a great need to restore its credibility, to make people forget its history and rewrite it. The intelligentsia have to a remarkable degree undertaken this task. It is also necessary to establish the "lessons" that have to be drawn from the war, to ensure that these are conceived on the narrowest grounds, in terms of such socially neutral categories as "stupidity" or "error" or "ignorance" or perhaps "cost." Why? Because soon it will be necessary to justify other confrontations, perhaps other U.S. interventions in the world, other Vietnams. But this time, these will have to be successful interventions, which don't slip out of control. Chile, for example. It is even possible for the press to criticize successful interventions - the Dominican Republic, Chile, etc. - as long as these criticisms don't exceed "civilized limits," that is to say, as long as they don't serve to arouse popular movements capable of hindering these enterprises, and are not accompanied by any rational analysis of the motives of U.S. imperialism, something which is complete anathema, intolerable to liberal ideology. How is the liberal press proceeding with regard to Vietnam, that sector which supported the "doves"? By stressing the "stupidity" of the U.S. intervention; that's a politically neutral term. It would have been sufficient to find an "intelligent" policy. The war was thus a tragic error in which good intentions were transmuted into bad policies, because of a generation of incompetent and arrogant officials. The war's savagery is also denounced, but that, too, is used as a neutral category...Presumably the goals were legitimate - it would have been all right to do the same thing, but more humanely... The "responsible" doves were opposed to the war - on a pragmatic basis. Now it is necessary to reconstruct the system of beliefs according to which the United States is the benefactor of humanity, historically committed to freedom, self-determination, and human rights. 
With regard to this doctrine, the "responsible" doves share the same presuppositions as the hawks. They do not question the right of the United States to intervene in other countries. Their criticism is actually very convenient for the state, which is quite willing to be chided for its errors, as long as the fundamental right of forceful intervention is not brought into question. ... The resources of imperialist ideology are quite vast. It tolerates - indeed, encourages - a variety of forms of opposition, such as those I have just illustrated. It is permissible to criticize the lapses of the intellectuals and of government advisers, and even to accuse them of an abstract desire for "domination," again a socially neutral category not linked in any way to concrete social and economic structures. But to relate that abstract "desire for domination" to the employment of force by the United States government in order to preserve a certain system of world order, specifically, to ensure that the countries of the world remain open insofar as possible to exploitation by U.S.-based corporations - that is extremely impolite, that is to argue in an unacceptable way.
Noam Chomsky (The Chomsky-Foucault Debate: On Human Nature)
In the absence of expert [senior military] advice, we have seen each successive administration fail in the business of strategy - yielding a United States twice as rich as the Soviet Union but much less strong. Only the manner of the failure has changed. In the 1960s, under Robert S. McNamara, we witnessed the wholesale substitution of civilian mathematical analysis for military expertise. The new breed of the "systems analysts" introduced new standards of intellectual discipline and greatly improved bookkeeping methods, but also a trained incapacity to understand the most important aspects of military power, which happen to be nonmeasurable. Because morale is nonmeasurable it was ignored, in large and small ways, with disastrous effects. We have seen how the pursuit of business-type efficiency in the placement of each soldier destroys the cohesion that makes fighting units effective; we may recall how the Pueblo was left virtually disarmed when it encountered the North Koreans (strong armament was judged as not "cost effective" for ships of that kind). Because tactics, the operational art of war, and strategy itself are not reducible to precise numbers, money was allocated to forces and single weapons according to "firepower" scores, computer simulations, and mathematical studies - all of which maximize efficiency - but often at the expense of combat effectiveness. An even greater defect of the McNamara approach to military decisions was its businesslike "linear" logic, which is right for commerce or engineering but almost always fails in the realm of strategy. Because its essence is the clash of antagonistic and outmaneuvering wills, strategy usually proceeds by paradox rather than conventional "linear" logic. 
That much is clear even from the most shopworn of Latin tags: si vis pacem, para bellum (if you want peace, prepare for war), whose business equivalent would be orders of "if you want sales, add to your purchasing staff," or some other, equally absurd advice. Where paradox rules, straightforward linear logic is self-defeating, sometimes quite literally. Let a general choose the best path for his advance, the shortest and best-roaded, and it then becomes the worst path of all paths, because the enemy will await him there in greatest strength... Linear logic is all very well in commerce and engineering, where there is lively opposition, to be sure, but no open-ended scope for maneuver; a competitor beaten in the marketplace will not bomb our factory instead, and the river duly bridged will not deliberately carve out a new course. But such reactions are merely normal in strategy. Military men are not trained in paradoxical thinking, but they do not have to be. Unlike the business-school expert, who searches for optimal solutions in the abstract and then presents them with all the authority of charts and computer printouts, even the most ordinary military mind can recall the existence of a maneuvering antagonist now and then, and will therefore seek robust solutions rather than "best" solutions - those, in other words, which are not optimal but can remain adequate even when the enemy reacts to outmaneuver the first approach.
Edward N. Luttwak
The largest and most rigorous study that is currently available in this area is the third one commissioned by the British Home Office (Kelly, Lovett, & Regan, 2005). The analysis was based on the 2,643 sexual assault cases (where the outcome was known) that were reported to British police over a 15-year period of time. Of these, 8% were classified by the police department as false reports. Yet the researchers noted that some of these classifications were based simply on the personal judgments of the police investigators, based on the victim’s mental illness, inconsistent statements, drinking or drug use. These classifications were thus made in violation of the explicit policies of their own police agencies. The researchers therefore supplemented the information contained in the police files by collecting many different types of additional data, including: reports from forensic examiners, questionnaires completed by police investigators, interviews with victims and victim service providers, and content analyses of the statements made by victims and witnesses. They then proceeded to evaluate each case using the official criteria for establishing a false allegation, which was that there must be either “a clear and credible admission by the complainant” or “strong evidential grounds” (Kelly, Lovett, & Regan, 2005). On the basis of this analysis, the percentage of false reports dropped to 2.5%." Lonsway, Kimberly A., Joanne Archambault, and David Lisak. "False reports: Moving beyond the issue to successfully investigate and prosecute non-stranger sexual assault." The Voice 3.1 (2009): 1-11.
David Lisak
What to Do Tonight Tell your child, “You’re the expert on you. Nobody really knows you better than you know yourself, because nobody really knows what it feels like to be you.” Give your child a choice about something you may have previously decided for her. Or ask her opinion about something. (If they’re young, you can frame it as, “Do you think we should do it this way or that way?”) Have a family meeting where you problem solve together about what chores need to be done and who should do them. Give them options. Could they walk the dog instead of doing the dinner dishes? Take out the trash instead of cleaning the toilet? Do they want to do it each Sunday or each Wednesday? Morning or night? Keep a consistent schedule, but let them choose that schedule. Make a list of things your child would like to be in charge of, and make a plan to shift responsibility for some of these things from you to him or her. Ask your child whether something in his life isn’t working for him (his homework routine, bedtime, management of electronics) and if he has any ideas about how to make it work better. Do a cost-benefit analysis of any decision you make for your child that she sees differently. Tell your child about decisions you’ve made that, in retrospect, were not the best decisions—and how you were able to learn and grow from them. Have a talk in which you point out that your kid has got a good mind. Recall some times when he’s made a good decision or felt strongly about something and turned out to be right. If he’ll let you, make a list together of the things he’s decided for himself that have worked well. Tell your teen you want him to have lots of practice running his own life before he goes off to college—and that you want to see that he can run his life without running it into the ground before he goes away. 
Emphasize logical and natural consequences, and encourage the use of family meetings to discuss family rules or family policies more generally (e.g., no gaming during the week).
William Stixrud (The Self-Driven Child: The Science and Sense of Giving Your Kids More Control Over Their Lives)
The information in this topic of decision making, and how to create and nurture it, is beneficial to every cop in their quest to master tactics and tactical decision making, and is a must read for every cop wanting to be more effective and safe on the street. My purpose is to get cops thinking about this critical question: In mastering tactics shouldn’t we be blending policy and procedure with people and ideas? It should be understood that while teaching people procedures helps them perform tasks more skillfully, this doesn’t always apply. Procedures are most useful in well-ordered situations when they can substitute for skill, not augment it. In complex situations, in the shadows of the unknown, uncertain and unpredictable and complex world of law enforcement conflict, procedures are less likely to substitute for expertise and may even stifle its development. Here is a different way of putting it, as Klein explains: In complex situations, people will need judgment skills to follow procedures effectively and to go beyond them when necessary.3 For stable and well-structured tasks i.e. evidence collection and handling, follow-up investigations, booking procedures and report writing, we should be able to construct comprehensive procedure guides. Even for complex tasks we might try to identify the procedures because that is one road to progress. But we also have to discover the kinds of expertise that comes into play for difficult jobs such as, robbery response, active shooter and armed gunman situations, hostage and barricade situations, domestic disputes, drug and alcohol related calls and pretty much any other call that deals with emotionally charged people in conflict. Klein states, “to be successful we need both analysis (policy and procedure) and intuition (people and ideas).”4 Either one alone can get us into trouble. Experts certainly aren’t perfect, but analysis can fail. Intuition isn’t magic either. 
Klein defines intuition as, “ways we use our experience without consciously thinking things out”. Intuition includes tacit knowledge that we can’t describe. It includes our ability to recognize patterns stored in memory. We have been building these patterns up all our lives from birth to present, and we can rapidly match a situation to a pattern or notice that something is off, that some sort of anomaly is warning us to be careful.5
Fred Leland (Adaptive Leadership Handbook - Law Enforcement & Security)
Growth was so rapid that it took in generations of Westerners, not just Lincoln Steffens. It took in the Central Intelligence Agency of the United States. It even took in the Soviet Union’s own leaders, such as Nikita Khrushchev, who famously boasted in a speech to Western diplomats in 1956 that “we will bury you [the West].” As late as 1977, a leading academic textbook by an English economist argued that Soviet-style economies were superior to capitalist ones in terms of economic growth, providing full employment and price stability and even in producing people with altruistic motivation. Poor old Western capitalism did better only at providing political freedom. Indeed, the most widely used university textbook in economics, written by Nobel Prize–winner Paul Samuelson, repeatedly predicted the coming economic dominance of the Soviet Union. In the 1961 edition, Samuelson predicted that Soviet national income would overtake that of the United States possibly by 1984, but probably by 1997. In the 1980 edition there was little change in the analysis, though the two dates were delayed to 2002 and 2012. Though the policies of Stalin and subsequent Soviet leaders could produce rapid economic growth, they could not do so in a sustained way. By the 1970s, economic growth had all but stopped. The most important lesson is that extractive institutions cannot generate sustained technological change for two reasons: the lack of economic incentives and resistance by the elites. In addition, once all the very inefficiently used resources had been reallocated to industry, there were few economic gains to be had by fiat. Then the Soviet system hit a roadblock, with lack of innovation and poor economic incentives preventing any further progress. The only area in which the Soviets did manage to sustain some innovation was through enormous efforts in military and aerospace technology. As a result they managed to put the first dog, Laika, and the first man, Yuri Gagarin, in space. 
They also left the world the AK-47 as one of their legacies. Gosplan was the supposedly all-powerful planning agency in charge of the central planning of the Soviet economy. One of the benefits of the sequence of five-year plans written and administered by Gosplan was supposed to have been the long time horizon necessary for rational investment and innovation. In reality, what got implemented in Soviet industry had little to do with the five-year plans, which were frequently revised and rewritten or simply ignored. The development of industry took place on the basis of commands by Stalin and the Politburo, who changed their minds frequently and often completely revised their previous decisions. All plans were labeled “draft” or “preliminary.” Only one copy of a plan labeled “final”—that for light industry in 1939—has ever come to light. Stalin himself said in 1937 that “only bureaucrats can think that planning work ends with the creation of the plan. The creation of the plan is just the beginning. The real direction of the plan develops only after the putting together of the plan.” Stalin wanted to maximize his discretion to reward people or groups who were politically loyal, and punish those who were not. As for Gosplan, its main role was to provide Stalin with information so he could better monitor his friends and enemies. It actually tried to avoid making decisions. If you made a decision that turned out badly, you might get shot. Better to avoid all responsibility. An example of what could happen
Daron Acemoğlu (Why Nations Fail: The Origins of Power, Prosperity and Poverty)
International labor mobility What’s the problem? Increased levels of migration from poor to rich countries would provide substantial benefits for the poorest people in the world, as well as substantial increases in global economic output. However, almost all developed countries pose heavy restrictions on who can enter the country to work. Scale: Very large. Eighty-five percent of the global variation in earnings is due to location rather than other factors: the extremely poor are poor simply because they don’t live in an environment that enables them to be productive. Economists Michael Clemens, Claudio Montenegro, and Lant Pritchett have estimated what they call the place premium—the wage gain for foreign workers who move to the United States. For an average person in Haiti, relocation to the United States would increase income by about 680 percent; for a Nigerian, it would increase income by 1,000 percent. Some other developing countries have comparatively lower place premiums, but they are still high enough to dramatically benefit migrants. Most migrants would also earn enough to send remittances to family members, thus helping many of those who do not migrate. An estimated six hundred million people worldwide would migrate if they were able to. Several economists have estimated that the total economic gains from free mobility of labor across borders would be greater than a 50 percent increase in world GDP. Even if these estimates were extremely optimistic, the economic gains from substantially increased immigration would be measured in trillions of dollars per year. (I discuss some objections to increased levels of immigration in the endnotes.) Neglectedness: Very neglected. Though a number of organizations work on immigration issues, very few focus on the benefits to future migrants of relaxing migration policy, instead focusing on migrants who are currently living in the United States. Tractability: Not very tractable. 
Increased levels of immigration are incredibly unpopular in developed countries, with the majority of people in Germany, Italy, the Netherlands, Norway, Sweden, and the United Kingdom favoring reduced immigration. Among developed countries, Canada is most sympathetic to increased levels of immigration; but even there only 20 percent of people favor increasing immigration, while 42 percent favor reducing it. This makes political change on this issue in the near term seem unlikely. What promising organizations are working on it? ImmigrationWorks (accepts donations) organizes, represents, and advocates on behalf of small-business owners who would benefit from being able to hire lower-skill migrant workers more easily, with the aim of “bringing America’s annual legal intake of foreign workers more realistically into line with the country’s labor needs.” The Center for Global Development (accepts donations) conducts policy-relevant research and policy analysis on topics relevant to improving the lives of the global poor, including on immigration reform, then makes recommendations to policy makers.
William MacAskill (Doing Good Better: How Effective Altruism Can Help You Make a Difference)
Simple Regression

CHAPTER OBJECTIVES

After reading this chapter, you should be able to
- Use simple regression to test the statistical significance of a bivariate relationship involving one dependent and one independent variable
- Use Pearson’s correlation coefficient as a measure of association between two continuous variables
- Interpret statistics associated with regression analysis
- Write up the model of simple regression
- Assess assumptions of simple regression

This chapter completes our discussion of statistical techniques for studying relationships between two variables by focusing on those that are continuous. Several approaches are examined: simple regression; the Pearson’s correlation coefficient; and a nonparametric alternative, Spearman’s rank correlation coefficient. Although all three techniques can be used, we focus particularly on simple regression. Regression allows us to predict outcomes based on knowledge of an independent variable. It is also the foundation for studying relationships among three or more variables, including control variables mentioned in Chapter 2 on research design (and also in Appendix 10.1). Regression can also be used in time series analysis, discussed in Chapter 17. We begin with simple regression.

SIMPLE REGRESSION

Let’s first look at an example. Say that you are a manager or analyst involved with a regional consortium of 15 local public agencies (in cities and counties) that provide low-income adults with health education about cardiovascular diseases, in an effort to reduce such diseases. The funding for this health education comes from a federal grant that requires annual analysis and performance outcome reporting. In Chapter 4, we used a logic model to specify that a performance outcome is the result of inputs, activities, and outputs. 
Following the development of such a model, you decide to conduct a survey among participants who attend such training events to collect data about the number of events they attended, their knowledge of cardiovascular disease, and a variety of habits such as smoking that are linked to cardiovascular disease. Some things that you might want to know are whether attending workshops increases
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
Table 14.1 also shows R-square (R2), which is called the coefficient of determination. R-square is of great interest: its value is interpreted as the percentage of variation in the dependent variable that is explained by the independent variable. R-square varies from zero to one, and is called a goodness-of-fit measure.5 In our example, teamwork explains only 7.4 percent of the variation in productivity. Although teamwork is significantly associated with productivity, it is quite likely that other factors also affect it. It is conceivable that other factors might be more strongly associated with productivity and that, when controlled for other factors, teamwork is no longer significant. Typically, values of R2 below 0.20 are considered to indicate weak relationships, those between 0.20 and 0.40 indicate moderate relationships, and those above 0.40 indicate strong relationships. Values of R2 above 0.65 are considered to indicate very strong relationships. R is called the multiple correlation coefficient and is always 0 ≤ R ≤ 1. To summarize up to this point, simple regression provides three critically important pieces of information about bivariate relationships involving two continuous variables: (1) the level of significance at which two variables are associated, if at all (t-statistic), (2) whether the relationship between the two variables is positive or negative (b), and (3) the strength of the relationship (R2). Key Point R-square is a measure of the strength of the relationship. Its value goes from 0 to 1. The primary purpose of regression analysis is hypothesis testing, not prediction. In our example, the regression model is used to test the hypothesis that teamwork is related to productivity. However, if the analyst wants to predict the variable “productivity,” the regression output also shows the SEE, or the standard error of the estimate (see Table 14.1). 
This is a measure of the spread of y values around the regression line as calculated for the mean value of the independent variable only, and assuming a large sample. The standard error of the estimate has an interpretation in terms of the normal curve, that is, 68 percent of y values lie within one standard error from the calculated value of y, as calculated for the mean value of x using the preceding regression model. Thus, if the mean index value of the variable “teamwork” is 5.0, then the calculated (or predicted) value of “productivity” is [4.026 + 0.223*5 =] 5.141. Because SEE = 0.825, it follows that 68 percent of productivity values will lie ±0.825 from 5.141 when “teamwork” = 5. Predictions of y for other values of x have larger standard errors.6 Assumptions and Notation There are three simple regression assumptions. First, simple regression assumes that the relationship between two variables is linear. The linearity of bivariate relationships is easily determined through visual inspection, as shown in Figure 14.2. In fact, all analysis of relationships involving continuous variables should begin with a scatterplot. When variable
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
relationships are nonlinear (parabolic or otherwise heavily curved), it is not appropriate to use linear regression. Then, one or both variables must be transformed, as discussed in Chapter 12. Second, simple regression assumes that the linear relationship is constant over the range of observations. This assumption is violated when the relationship is “broken,” for example, by having an upward slope for the first half of independent variable values and a downward slope over the remaining values. Then, analysts should consider using two regression models each for these different, linear relationships. The linearity assumption is also violated when no relationship is present in part of the independent variable values. This is particularly problematic because regression analysis will calculate a regression slope based on all observations. In this case, analysts may be misled into believing that the linear pattern holds for all observations. Hence, regression results always should be verified through visual inspection. Third, simple regression assumes that the variables are continuous. In Chapter 15, we will see that regression can also be used for nominal and dichotomous independent variables. The dependent variable, however, must be continuous. When the dependent variable is dichotomous, logistic regression should be used (Chapter 16). Figure 14.2 Three Examples of r The following notations are commonly used in regression analysis. The predicted value of y (defined, based on the regression model, as y = a + bx) is typically different from the observed value of y. The predicted value of the dependent variable y is sometimes indicated as ŷ (pronounced “y-hat”). Only when R2 = 1 are the observed and predicted values identical for each observation. The difference between y and ŷ is called the regression error or error term
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
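The quantities the excerpts above walk through (the slope b, the intercept a, R-square, the standard error of the estimate, and the slope's t-statistic) can be sketched from first principles. The teamwork/productivity values below are hypothetical, invented purely for illustration; they are not the book's survey data.

```python
import numpy as np

# Hypothetical data: "teamwork" index (x) and "productivity" index (y).
x = np.array([3.0, 4.0, 5.0, 5.0, 6.0, 7.0, 4.5, 5.5, 6.5, 3.5])
y = np.array([4.5, 4.8, 5.0, 5.3, 5.2, 5.9, 4.9, 5.4, 5.6, 4.6])

n = len(x)
b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)  # slope
a = y.mean() - b * x.mean()                                                # intercept
y_hat = a + b * x                                                          # predicted values (y-hat)

ss_res = np.sum((y - y_hat) ** 2)      # residual (error) sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)   # total sum of squares
r_squared = 1 - ss_res / ss_tot        # coefficient of determination

see = np.sqrt(ss_res / (n - 2))                     # standard error of the estimate
se_b = see / np.sqrt(np.sum((x - x.mean()) ** 2))   # standard error of the slope
t_stat = b / se_b                                    # t-statistic for the slope

print(f"y-hat = {a:.3f} + {b:.3f}x, R^2 = {r_squared:.3f}, SEE = {see:.3f}, t = {t_stat:.2f}")
```

Note that the fitted line always passes through the point of means (x-bar, y-bar), which is why predictions are most precise there, as the text observes.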
COEFFICIENT The nonparametric alternative, Spearman’s rank correlation coefficient (ρ, or “rho”), looks at correlation among the ranks of the data rather than among the values. The ranks of data are determined as shown in Table 14.2 (adapted from Table 11.8): Table 14.2 Ranks of Two Variables In Greater Depth … Box 14.1 Crime and Poverty An analyst wants to examine empirically the relationship between crime and income in cities across the United States. The CD that accompanies the workbook Exercising Essential Statistics includes a Community Indicators dataset with assorted indicators of conditions in 98 cities such as Akron, Ohio; Phoenix, Arizona; New Orleans, Louisiana; and Seattle, Washington. The measures include median household income, total population (both from the 2000 U.S. Census), and total violent crimes (FBI, Uniform Crime Reporting, 2004). In the sample, household income ranges from $26,309 (Newark, New Jersey) to $71,765 (San Jose, California), and the median household income is $42,316. Per-capita violent crime ranges from 0.15 percent (Glendale, California) to 2.04 percent (Las Vegas, Nevada), and the median violent crime rate per capita is 0.78 percent. There are four types of violent crimes: murder and nonnegligent manslaughter, forcible rape, robbery, and aggravated assault. A measure of total violent crime per capita is calculated because larger cities are apt to have more crime. The analyst wants to examine whether income is associated with per-capita violent crime. The scatterplot of these two continuous variables shows that a negative relationship appears to be present: The Pearson’s correlation coefficient is –.532 (p < .01), and the Spearman’s correlation coefficient is –.552 (p < .01). The simple regression model shows R2 = .283. The regression model is as follows (t-test statistic in parentheses): The regression line is shown on the scatterplot. 
Interpreting these results, we see that the R-square value of .283 indicates a moderate relationship between these two variables. Clearly, some cities with modest median household incomes have a high crime rate. However, removing these cities does not greatly alter the findings. Also, an assumption of regression is that the error term is normally distributed, and further examination of the error shows that it is somewhat skewed. The techniques for examining the distribution of the error term are discussed in Chapter 15, but again, addressing this problem does not significantly alter the finding that the two variables are significantly related to each other, and that the relationship is of moderate strength. With this result in hand, further analysis shows, for example, by how much violent crime decreases for each increase in household income. For each increase of $10,000 in average household income, the violent crime rate drops 0.25 percent. For a city experiencing the median 0.78 percent crime rate, this would be a considerable improvement, indeed. Note also that the scatterplot shows considerable variation in the crime rate for cities at or below the median household income, in contrast to those well above it. Policy analysts may well wish to examine conditions that give rise to variation in crime rates among cities with lower incomes. Because Spearman’s rank correlation coefficient examines correlation among the ranks of variables, it can also be used with ordinal-level data.9 For the data in Table 14.2, Spearman’s rank correlation coefficient is .900 (p = .035).10 Spearman’s rho-squared coefficient has a “percent variation explained” interpretation, similar
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
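The contrast in the box above (Pearson's coefficient correlates the values themselves, Spearman's correlates their ranks) can be sketched with `scipy.stats`. The income and crime figures below are invented for illustration, patterned loosely on the ranges quoted in the box; they are not the Community Indicators dataset.

```python
import numpy as np
from scipy import stats

# Hypothetical city-level data: median household income (dollars)
# and violent crime per capita (percent). Illustrative values only.
income = np.array([26309, 31000, 35500, 40000, 42316, 48000, 55000, 62000, 68000, 71765])
crime  = np.array([ 1.90,  2.04,  1.40,  1.10,  0.78,  0.70,  0.45,  0.40,  0.20,  0.15])

pearson_r, pearson_p = stats.pearsonr(income, crime)     # correlation of values
spearman_r, spearman_p = stats.spearmanr(income, crime)  # correlation of ranks

print(f"Pearson r   = {pearson_r:.3f} (p = {pearson_p:.4f})")
print(f"Spearman rho = {spearman_r:.3f} (p = {spearman_p:.4f})")
```

Because Spearman's coefficient depends only on ranks, it is unchanged by any monotonic transformation of either variable, which is what makes it usable with ordinal-level data.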
regression as dummy variables Explain the importance of the error term plot Identify assumptions of regression, and know how to test and correct assumption violations Multiple regression is one of the most widely used multivariate statistical techniques for analyzing three or more variables. This chapter uses multiple regression to examine such relationships, and thereby extends the discussion in Chapter 14. The popularity of multiple regression is due largely to the ease with which it takes control variables (or rival hypotheses) into account. In Chapter 10, we discussed briefly how contingency tables can be used for this purpose, but doing so is often a cumbersome and sometimes inconclusive effort. By contrast, multiple regression easily incorporates multiple independent variables. Another reason for its popularity is that it also takes into account nominal independent variables. However, multiple regression is no substitute for bivariate analysis. Indeed, managers or analysts with an interest in a specific bivariate relationship will conduct a bivariate analysis first, before examining whether the relationship is robust in the presence of numerous control variables. And before conducting bivariate analysis, analysts need to conduct univariate analysis to better understand their variables. Thus, multiple regression is usually one of the last steps of analysis. Indeed, multiple regression is often used to test the robustness of bivariate relationships when control variables are taken into account. The flexibility with which multiple regression takes control variables into account comes at a price, though. Regression, like the t-test, is based on numerous assumptions. Regression results cannot be assumed to be robust in the face of assumption violations. Testing of assumptions is always part of multiple regression analysis. 
Multiple regression is carried out in the following sequence: (1) model specification (that is, identification of dependent and independent variables), (2) testing of regression assumptions, (3) correction of assumption violations, if any, and (4) reporting of the results of the final regression model. This chapter examines these four steps and discusses essential concepts related to simple and multiple regression. Chapters 16 and 17 extend this discussion by examining the use of logistic regression and time series analysis. MODEL SPECIFICATION Multiple regression is an extension of simple regression, but an important difference exists between the two methods: multiple regression aims for full model specification. This means that analysts seek to account for all of the variables that affect the dependent variable; by contrast, simple regression examines the effect of only one independent variable. Philosophically, the phrase identifying the key difference—“all of the variables that affect the dependent variable”—is divided into two parts. The first part involves identifying the variables that are of most (theoretical and practical) relevance in explaining the dependent
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
regression results. Standardized Coefficients The question arises as to which independent variable has the greatest impact on explaining the dependent variable. The slope of the coefficients (b) does not answer this question because each slope is measured in different units (recall from Chapter 14 that b = ∆y/∆x). Comparing different slope coefficients is tantamount to comparing apples and oranges. However, based on the regression coefficient (or slope), it is possible to calculate the standardized coefficient, β (beta). Beta is defined as the change produced in the dependent variable by a unit of change in the independent variable when both variables are measured in terms of standard deviation units. Beta is unit-less and thus allows for comparison of the impact of different independent variables on explaining the dependent variable. Analysts compare the relative values of beta coefficients; beta has no inherent meaning. It is appropriate to compare betas across independent variables in the same regression, not across different regressions. Based on Table 15.1, we conclude that the impact of having adequate authority on explaining productivity is [(0.288 – 0.202)/0.202 =] 42.6 percent greater than teamwork, and about equal to that of knowledge. The impact of having adequate authority is two-and-a-half times greater than that of perceptions of fair rewards and recognition.4 F-Test Table 15.1 also features an analysis of variance (ANOVA) table. The global F-test examines the overall effect of all independent variables jointly on the dependent variable. The null hypothesis is that the overall effect of all independent variables jointly on the dependent variables is statistically insignificant. The alternate hypothesis is that this overall effect is statistically significant. 
The null hypothesis implies that none of the regression coefficients is statistically significant; the alternate hypothesis implies that at least one of the regression coefficients is statistically significant. The
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
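The standardized coefficient described above is computed as beta = b · (s_x / s_y), that is, the unstandardized slope rescaled by the ratio of the standard deviations of the independent and dependent variables. A minimal sketch follows; the slopes and standard deviations are hypothetical stand-ins, not the values of Table 15.1.

```python
# Convert unstandardized slopes (b) to standardized betas: beta = b * (sd_x / sd_y).
# All numbers below are hypothetical, for illustration only.
sd_y = 1.00  # standard deviation of the dependent variable (productivity)
slopes = {"teamwork": 0.20, "authority": 0.25, "rewards": 0.10}  # b, in original units
sds    = {"teamwork": 1.01, "authority": 1.15, "rewards": 1.05}  # sd of each x

betas = {name: slopes[name] * sds[name] / sd_y for name in slopes}

# Betas are unit-less, so ranking them compares the variables' relative impact.
for name, beta in sorted(betas.items(), key=lambda kv: -kv[1]):
    print(f"{name}: beta = {beta:.3f}")
```

As the text stresses, only the relative ordering of betas within one regression is meaningful; a single beta has no inherent interpretation.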
Assume that a welfare manager in our earlier example (see discussion of path analysis) takes a snapshot of the status of the welfare clients. Some clients may have obtained employment and others not yet. Clients will also vary as to the amount of time that they have been receiving welfare. Examine the data in Table 18.2. It shows that neither of the two clients, who have yet to complete their first week on welfare, has found employment; one of the three clients who have completed one week of welfare has found employment. Censored observations are observations for which the specified outcome has yet to occur. It is assumed that all clients who have not yet found employment are still waiting for this event to occur. Thus, the sample should not include clients who are not seeking employment. Note, however, that a censored observation is very different from one that has missing data, which might occur because the manager does not know whether the client has found employment. As with regression, records with missing data are excluded from analysis. A censored
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
observation is simply an observation for which a specified outcome has not yet occurred. Assume that data exist from a random sample of 100 clients who are seeking, or have found, employment. Survival analysis is the statistical procedure for analyzing these data. The name of this procedure stems from its use in medical research. In clinical trials, researchers want to know the survival (or disease) rate of patients as a function of the duration of their treatment. For patients in the middle of their trial, the specified outcome may not have occurred yet. We obtain the following results (also called a life table) from analyzing hypothetical data from welfare records (see Table 18.3). In the context shown in the table, the word terminal signifies that the event has occurred. That is, the client has found employment. At start time zero, 100 cases enter the interval. During the first period, there are no terminal cases and nine censored cases. Thus, 91 cases enter the next period. In this second period, 2 clients find employment and 14 do not, resulting in 75 cases that enter the following period. The column labeled “Cumulative proportion surviving until end of interval” is an estimate of probability of surviving (not finding employment) until the end of the stated interval.5 The column labeled “Probability density” is an estimate of the probability of the terminal event occurring (that is, finding employment) during the time interval. The results also report that “the median survival time is 5.19.” That is, half of the clients find employment in 5.19 weeks. Table 18.2 Censored Observations Note: Obs = observations (clients); Emp = employment; 0 = has not yet found employment; 1 = has found employment. Table 18.3 Life Table Results
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
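The life-table bookkeeping described above can be sketched as follows. The interval counts are hypothetical but follow the same logic as Table 18.3 (cases entering an interval, minus terminal and censored cases, carry into the next interval). The half-interval exposure adjustment for censored cases is a common actuarial convention and an assumption here; the excerpt does not specify the estimator used.

```python
# Each row: (entering, terminal, censored) counts for one weekly interval.
# Hypothetical data patterned on the text's example of 100 welfare clients.
intervals = [(100, 0, 9), (91, 2, 14), (75, 8, 5), (62, 20, 2), (40, 25, 1)]

surviving = 1.0  # cumulative proportion "surviving" (not yet employed)
for week, (entering, terminal, censored) in enumerate(intervals, start=1):
    exposed = entering - censored / 2.0   # actuarial half-interval adjustment (assumption)
    p_event = terminal / exposed          # P(finding employment during this interval)
    surviving *= (1 - p_event)            # cumulative proportion surviving to interval end
    print(f"week {week}: P(event) = {p_event:.3f}, cumulative surviving = {surviving:.3f}")
```

The week at which the cumulative surviving proportion first drops below 0.5 corresponds to the reported median survival time.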
Note: The median survival time is 5.19. Survival analysis can also examine survival rates for different “treatments” or conditions. Assume that data are available about the number of dependents that each client has. Table 18.3 is readily produced for each subset of this condition. For example, by comparing the survival rates of those with and those without dependents, the probability density figure, which shows the likelihood of an event occurring, can be obtained (Figure 18.5). This figure suggests that having dependents is associated with clients’ finding employment somewhat faster. Beyond Life Tables Life tables require that the interval (time) variable be measured on a discrete scale. When the time variable is continuous, Kaplan-Meier survival analysis is used. This procedure is quite analogous to life tables analysis. Cox regression is similar to Kaplan-Meier but allows for consideration of a larger number of independent variables (called covariates). In all instances, the purpose is to examine the effect of treatment on the survival of observations, that is, the occurrence of a dichotomous event. Figure 18.5 Probability Density FACTOR ANALYSIS A variety of statistical techniques help analysts to explore relationships in their data. These exploratory techniques typically aim to create groups of variables (or observations) that are related to each
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
other and distinct from other groups. These techniques usually precede regression and other analyses. Factor analysis is a well-established technique that often aids in creating index variables. Earlier, Chapter 3 discussed the use of Cronbach alpha to empirically justify the selection of variables that make up an index. However, in that approach analysts must still justify that variables used in different index variables are indeed distinct. By contrast, factor analysis analyzes a large number of variables (often 20 to 30) and classifies them into groups based on empirical similarities and dissimilarities. This empirical assessment can aid analysts’ judgments regarding variables that might be grouped together. Factor analysis uses correlations among variables to identify subgroups. These subgroups (called factors) are characterized by relatively high within-group correlation among variables and low between-group correlation among variables. Most factor analysis consists of roughly four steps: (1) determining that the group of variables has enough correlation to allow for factor analysis, (2) determining how many factors should be used for classifying (or grouping) the variables, (3) improving the interpretation of correlations and factors (through a process called rotation), and (4) naming the factors and, possibly, creating index variables for subsequent analysis. Most factor analysis is used for grouping of variables (R-type factor analysis) rather than observations (Q-type). Often, discriminant analysis is used for grouping of observations, mentioned later in this chapter. The terminology of factor analysis differs greatly from that used elsewhere in this book, and the discussion that follows is offered as an aid in understanding tables that might be encountered in research that uses this technique. An important task in factor analysis is determining how many common factors should be identified. 
Theoretically, there are as many factors as variables, but only a few factors account for most of the variance in the data. The percentage of variation explained by each factor is defined as the eigenvalue divided by the number of variables, whereby the
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
eigenvalue of a factor is the sum of correlations (r) of each variable with that factor. This correlation is also called loading in factor analysis. Analysts can define (or “extract”) how many factors they wish to use, or they can define a statistical criterion (typically requiring each factor to have an eigenvalue of at least 1.0). The method of identifying factors is called principal component analysis (PCA). The results of PCA often make it difficult to interpret the factors, in which case the analyst will use rotation (a statistical technique that distributes the explained variance across factors). Rotation causes variables to load higher on one factor, and less on others, bringing the pattern of groups better into focus for interpretation. Several different methods of rotation are commonly used (for example, Varimax, Promax), but the purpose of this procedure is always to understand which variables belong together. Typically, for purposes of interpretation, factor loadings are considered only if their values are at least .50, and only these values might be shown in tables. Table 18.4 shows the result of a factor analysis. The table shows various items related to managerial professionalism, and the factor analysis identifies three distinct groups for these items. Such tables are commonly seen in research articles. The labels for each group (for example, “A. Commitment to performance”) are provided by the authors; note that the three groupings are conceptually distinct. The table also shows that, combined, these three factors account for 61.97 percent of the total variance. The table shows only loadings greater than .50; those below this value are not shown.6 Based on these results, the authors then create index variables for the three groups. Each group has high internal reliability (see Chapter 3); the Cronbach alpha scores are, respectively, 0.87, 0.83, and 0.88. 
This table shows a fairly typical use of factor analysis, providing statistical support for a grouping scheme. Beyond Factor Analysis A variety of exploratory techniques exist. Some seek purely to classify, whereas
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
others seek to create and predict classifications through independent variables. Table 18.4 Factor Analysis Note: Factor analysis with Varimax rotation. Source: E. Berman and J. West. (2003). “What Is Managerial Mediocrity? Definition, Prevalence and Negative Impact (Part 1).” Public Performance & Management Review, 27 (December): 7–27. Multidimensional scaling and cluster analysis aim to identify key dimensions along which observations (rather than variables) differ. These techniques differ from factor analysis in that they allow for a hierarchy of classification dimensions. Some also use graphics to aid in visualizing the extent of differences and to help in identifying the similarity or dissimilarity of observations. Network analysis is a descriptive technique used to portray relationships among actors. A graphic representation can be made of the frequency with which actors interact with each other, distinguishing frequent interactions from those that are infrequent. Discriminant analysis is used when the dependent variable is nominal with two or more categories. For example, we might want to know how parents choose among three types of school vouchers. Discriminant analysis calculates regression lines that distinguish (discriminate) among the nominal groups (the categories of the dependent variable), as well as other
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
regression lines that describe the relationship of the independent variables for each group (called classification functions). The emphasis in discriminant analysis is the ability of the independent variables to correctly predict values of the nominal variable (for example, group membership). Discriminant analysis is one strategy for dealing with dependent variables that are nominal with three or more categories. Multinomial logistic regression and ordinal regression have been developed in recent years to address nominal and ordinal dependent variables in logistic regression. Multinomial logistic regression calculates functions that compare the probability of a nominal value occurring relative to a base reference group. The calculation of such probabilities makes this technique an interesting alternative to discriminant analysis. When the nominal dependent variable has three values (say, 1, 2, and 3), one logistic regression predicts the likelihood of 2 versus 1 occurring, and the other logistic regression predicts the likelihood of 3 versus 1 occurring, assuming that “1” is the base reference group.7 When the dependent variable is ordinal, ordinal regression can be used. Like multinomial logistic regression, ordinal regression often is used to predict event probability or group membership. Ordinal regression assumes that the slope coefficients are identical for each value of the dependent variable; when this assumption is not met, multinomial logistic regression should be considered. Both multinomial logistic regression and ordinal regression are relatively recent developments and are not yet widely used. Statistics, like other fields of science, continues to push its frontiers forward and thereby develop new techniques for managers and analysts. Key Point Advanced statistical tools are available. Understanding the proper circumstances under which these tools apply is a prerequisite for using them.
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
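The factor-extraction arithmetic described earlier (eigenvalues of the item correlation matrix, percent of variance explained as eigenvalue divided by the number of variables, and the eigenvalue-of-at-least-1.0 retention criterion) can be sketched with NumPy. The six survey items below are synthetic, built so that items 0-2 and items 3-5 each share one underlying factor; this illustrates only the extraction step, not rotation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: two latent factors, three noisy items loading on each.
n = 200
f1, f2 = rng.normal(size=(2, n))
items = np.column_stack([
    f1 + 0.5 * rng.normal(size=n),
    f1 + 0.5 * rng.normal(size=n),
    f1 + 0.5 * rng.normal(size=n),
    f2 + 0.5 * rng.normal(size=n),
    f2 + 0.5 * rng.normal(size=n),
    f2 + 0.5 * rng.normal(size=n),
])

corr = np.corrcoef(items, rowvar=False)            # item correlation matrix
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
pct_explained = eigenvalues / corr.shape[0]        # eigenvalue / number of variables

kept = int((eigenvalues >= 1.0).sum())             # eigenvalue >= 1.0 criterion
print(f"eigenvalues: {np.round(eigenvalues, 2)}")
print(f"variance explained by first factor: {pct_explained[0]:.1%}")
print(f"factors retained: {kept}")
```

Because the correlation matrix has ones on its diagonal, the eigenvalues always sum to the number of variables, which is why dividing by that number yields a percent-of-variance interpretation.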
SUMMARY A vast array of additional statistical methods exists. In this concluding chapter, we summarized some of these methods (path analysis, survival analysis, and factor analysis) and briefly mentioned other related techniques. This chapter can help managers and analysts become familiar with these additional techniques and increase their access to research literature in which these techniques are used. Managers and analysts who would like more information about these techniques will likely consult other texts or on-line sources. In many instances, managers will need only simple approaches to calculate the means of their variables, produce a few good graphs that tell the story, make simple forecasts, and test for significant differences among a few groups. Why, then, bother with these more advanced techniques? They are part of the analytical world in which managers operate. Through research and consulting, managers cannot help but come in contact with them. It is hoped that this chapter whets the appetite and provides a useful reference for managers and students alike. KEY TERMS   Endogenous variables Exogenous variables Factor analysis Indirect effects Loading Path analysis Recursive models Survival analysis Notes 1. Two types of feedback loops are illustrated as follows: 2. When feedback loops are present, error terms for the different models will be correlated with exogenous variables, violating an error term assumption for such models. Then, alternative estimation methodologies are necessary, such as two-stage least squares and others discussed later in this chapter. 3. Some models may show double-headed arrows among error terms. These show the correlation between error terms, which is of no importance in estimating the beta coefficients. 4. In SPSS, survival analysis is available through the add-on module in SPSS Advanced Models. 5. The functions used to estimate probabilities are rather complex. 
They are so-called Weibull distributions, which are defined as h(t) = αλ(λt)^(α–1), where α and λ are chosen to best fit the data. 6. Hence, the SSL is greater than the squared loadings reported. For example, because the loadings of variables in groups B and C are not shown for factor 1, the SSL of shown loadings is 3.27 rather than the reported 4.084. If one assumes the other loadings are each .25, then the SSL of the not reported loadings is [12*.25² =] .75, bringing the SSL of factor 1 to [3.27 + .75 =] 4.02, which is very close to the 4.084 value reported in the table. 7. Readers who are interested in multinomial logistic regression can consult on-line sources or the SPSS manual, Regression Models 10.0 or higher. The statistics of discriminant analysis are very dissimilar from those of logistic regression, and readers are advised to consult a separate text on that topic. Discriminant analysis is not often used in public
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
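The Weibull hazard from note 5, h(t) = αλ(λt)^(α−1), is straightforward to evaluate once the shape α and rate λ are chosen. The values below are illustrative, not fitted to any data; a shape greater than 1 means the hazard (here, the chance of finding employment) rises with time.

```python
# Weibull hazard function: h(t) = alpha * lam * (lam * t) ** (alpha - 1).
def weibull_hazard(t, alpha, lam):
    return alpha * lam * (lam * t) ** (alpha - 1)

alpha, lam = 1.5, 0.2  # illustrative shape and rate (not fitted values)
for t in (1, 3, 5):
    print(f"t = {t}: h(t) = {weibull_hazard(t, alpha, lam):.4f}")
```

With α = 1 the hazard collapses to the constant λ, recovering the simpler exponential model; this flexibility is why the Weibull family is a common choice for survival data.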
The population is angry, frustrated, bitter—and for good reasons. For the past generation, policies have been initiated that have led to an extremely sharp concentration of wealth in a tiny sector of the population. In fact, the wealth distribution is very heavily weighted by, literally, the top tenth of one percent of the population, a fraction so small that they’re not even picked up on the census. You have to do statistical analysis just to detect them. And they have benefited enormously. This is mostly from the financial sector—hedge fund managers, CEOs of financial corporations, and so on.
Noam Chomsky (Occupy: Reflections on Class War, Rebellion and Solidarity)
Remedies exist for correcting substantial departures from normality, but these remedies may make matters worse when departures from normality are minimal. The first course of action is to identify and remove any outliers that may affect the mean and standard deviation. The second course of action is variable transformation, which involves transforming the variable, often by taking log(x) of each observation, and then testing the transformed variable for normality. Variable transformation may address excessive skewness by adjusting the measurement scale, thereby helping variables to better approximate normality.8 Substantively, we strongly prefer to make conclusions that satisfy test assumptions, regardless of which measurement scale is chosen.9 Keep in mind that when variables are transformed, the units in which results are expressed are transformed, as well. An example of variable transformation is provided in the second working example. Typically, analysts have different ways to address test violations. Examination of the causes of assumption violations often helps analysts to better understand their data. Different approaches may be successful for addressing test assumptions. Analysts should not merely go by the result of one approach that supports their case, ignoring others that perhaps do not. Rather, analysts should rely on the weight of robust, converging results to support their final test conclusions. Working Example 1 Earlier we discussed efforts to reduce high school violence by enrolling violence-prone students into classes that address anger management. Now, after some time, administrators and managers want to know whether the program is effective. As part of this assessment, students are asked to report their perception of safety at school. An index variable is constructed from different items measuring safety (see Chapter 3). 
Each item is measured on a seven-point Likert scale (1 = strongly disagree to 7 = strongly agree), and the index is constructed such that a high value indicates that students feel safe.10 The survey was initially administered at the beginning of the program. Now, almost a year later, the survey is implemented again.11 Administrators want to know whether students who did not participate in the anger management program feel that the climate is now safer. The analysis included here focuses on 10th graders. For practical purposes, the samples of 10th graders at the beginning of the program and one year later are regarded as independent samples; the subjects are not matched. Descriptive analysis shows that the mean perception of
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
safety at the beginning of the program was 4.40 (standard deviation, SD = 1.00), and one year later, 4.80 (SD = 0.94). The mean safety score increased among 10th graders, but is the increase statistically significant? Among other concerns is that the standard deviations are considerable for both samples. As part of the analysis, we conduct a t-test to answer the question of whether the means of these two distributions are significantly different. First, we examine whether test assumptions are met. The samples are independent, and the variables meet the requirement that one is continuous (the index variable) and the other dichotomous. The assumption of equality of variances is answered as part of conducting the t-test, and so the remaining question is whether the variables are normally distributed. The distributions are shown in the histograms in Figure 12.3.12 Are these normal distributions? Visually, they are not the textbook ideal—real-life data seldom are. The Kolmogorov-Smirnov tests for both distributions are insignificant (both p > .05). Hence, we conclude that the two distributions can be considered normal. Having satisfied these t-test assumptions, we next conduct the t-test for two independent samples. Table 12.1 shows the t-test results. The top part of Table 12.1 shows the descriptive statistics, and the bottom part reports the test statistics. Recall that the t-test is a two-step test. We first test whether variances are equal. This is shown as the “Levene’s test for equality of variances.” The null hypothesis of the Levene’s test is that variances are equal; this is rejected when the p-value of this Levene’s test statistic is less than .05. The Levene’s test uses an F-test statistic (discussed in Chapters 13 and 15), which, other than its p-value, need not concern us here. In Table 12.1, the level of significance is .675, which exceeds .05. Hence, we accept the null hypothesis—the variances of the two distributions shown in Figure 12.3 are equal. 
Figure 12.3 Perception of High School Safety among 10th Graders Table 12.1 Independent-Samples T-Test: Output Note: SD = standard deviation. Now we go to the second step, the main purpose. Are the two means (4.40 and 4.80)
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
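The two-step logic this excerpt describes (check equality of variances, then read the "equal variances assumed" t statistic) can be sketched in a few lines. This is a minimal pooled-variance t-test; the scores below are hypothetical, not the textbook's survey data, and the Levene step is only noted in a comment:

```python
from math import sqrt

def pooled_t(a, b):
    """Independent-samples t-test, equal variances assumed (pooled)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Sums of squared deviations within each group
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    df = na + nb - 2
    pooled_var = (ssa + ssb) / df            # pooled variance estimate
    se = sqrt(pooled_var * (1 / na + 1 / nb))
    return (ma - mb) / se, df

# Hypothetical before/after safety scores. In practice, run Levene's test
# first; if it rejects equal variances, use the Welch (unpooled) version.
before = [4, 5, 6, 5, 4]
after = [6, 7, 8, 7, 6]
t, df = pooled_t(before, after)
```

With these made-up scores, t is about -3.78 on 8 degrees of freedom; the sign simply reflects which group mean is larger.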
significantly different? Because the variances are equal, we read the t-test statistics from the top line, which states “equal variances assumed.” (If variances had been unequal, then we would read the test statistics from the second line, “equal variances not assumed.”). The t-test statistic for equal variances for this test is 2.576, which is significant at p = .011.13 Thus, we conclude that the means are significantly different; the 10th graders report feeling safer one year after the anger management program was implemented. Working Example 2 In the preceding example, the variables were both normally distributed, but this is not always the case. Many variables are highly skewed and not normally distributed. Consider another example. The U.S. Environmental Protection Agency (EPA) collects information about the water quality of watersheds, including information about the sources and nature of pollution. One such measure is the percentage of samples that exceed pollution limits for ammonia, dissolved oxygen, phosphorus, and pH.14 A manager wants to know whether watersheds in the East have higher levels of pollution than those in the Midwest. Figure 12.4 Untransformed Variable: Watershed Pollution An index variable of such pollution is constructed. The index variable is called “pollution,” and the first step is to examine it for test assumptions. Analysis indicates that the range of this variable has a low value of 0.00 percent and a high value of 59.17 percent. These are plausible values (any value above 100.00 percent is implausible). A boxplot (not shown) demonstrates that the variable has two values greater than 50.00 percent that are indicated as outliers for the Midwest region. However, the histograms shown in Figure 12.4 do not suggest that these values are unusually large; rather, the peak in both histograms is located off to the left. 
The distributions are heavily skewed.15 Because the samples each have fewer than 50 observations, the Shapiro-Wilk test for normality is used. The respective test statistics for East and Midwest are .969 (p = .355) and .931 (p = .007). Visual inspection confirms that the Midwest distribution is indeed nonnormal. The Shapiro-Wilk test statistics are given only for completeness; they have no substantive interpretation. We must now either transform the variable so that it becomes normal for purposes of testing, or use a nonparametric alternative. The second option is discussed later in this chapter. We also show the consequences of ignoring the problem. To transform the variable, we try the recommended transformations, such as log(x) and x½, and then examine the transformed variable for normality. If none of these transformations work, we might modify them, such as using x⅓ instead of x½ (recall that the latter is √x).16 Thus, some experimentation is required. In our case, we find that the x½ transformation works. The new Shapiro-Wilk test statistics for East and Midwest are, respectively, .969 (p = .361) and .987 (p = .883). Visual inspection of Figure 12.5 shows these two distributions to be quite normal, indeed. Figure 12.5 Transformed Variable: Watershed Pollution The results of the t-test for the transformed variable are shown in Table
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
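The "some experimentation is required" step above can be sketched as trying several power and log transformations and checking which one best tames the skew. The data here are hypothetical right-skewed percentages, and the skewness check is a stand-in for the excerpt's formal Shapiro-Wilk test:

```python
from math import log, sqrt

def skewness(xs):
    """Sample skewness: average third standardized moment."""
    n = len(xs)
    m = sum(xs) / n
    s = sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    return sum(((x - m) / s) ** 3 for x in xs) / n

# Hypothetical pollution-style percentages, heavily skewed to the right
x = [0.5, 1, 1, 2, 2, 3, 4, 5, 8, 12, 20, 35, 59]

candidates = {
    "x^0.5": [v ** 0.5 for v in x],
    "x^0.25": [v ** 0.25 for v in x],
    "log(x)": [log(v) for v in x],   # only valid when all values are > 0
}
# Pick the transformation whose result is closest to symmetric
best = min(candidates, key=lambda k: abs(skewness(candidates[k])))
```

Note that, as the excerpt warns, whichever transformation wins also changes the units in which any means and differences are reported.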
different from 3.5. However, it is different from larger values, such as 4.0 (t = 2.89, df = 9, p = .019). Another example of this is provided in Box 12.2. Finally, note that the one-sample t-test is identical to the paired-samples t-test for testing whether the mean D = 0. Indeed, the one-sample t-test for D = 0 produces the same results (t = 2.43, df = 9, p = .038). In Greater Depth … Box 12.2 Use of the T-Test in Performance Management: An Example Performance benchmarking is an increasingly popular tool in performance management. Public and nonprofit officials compare the performance of their agencies with performance benchmarks and draw lessons from the comparison. Let us say that a city government requires its fire and medical response unit to maintain an average response time of 360 seconds (6 minutes) to emergency requests. The city manager has suspected that the growth in population and demands for the services have slowed down the responses recently. He draws a sample of 10 response times in the most recent month: 230, 450, 378, 430, 270, 470, 390, 300, 470, and 530 seconds, for a sample mean of 392 seconds. He performs a one-sample t-test to compare the mean of this sample with the performance benchmark of 360 seconds. The null hypothesis of this test is that the sample mean is equal to 360 seconds, and the alternate hypothesis is that they are different. The result (t = 1.030, df = 9, p = .330) shows a failure to reject the null hypothesis at the 5 percent level, which means that we don’t have sufficient evidence to say that the average response time is different from the benchmark 360 seconds. We cannot say that current performance of 392 seconds is significantly different from the 360-second benchmark. Perhaps more data (samples) are needed to reach such a conclusion, or perhaps too much variability exists for such a conclusion to be reached. 
NONPARAMETRIC ALTERNATIVES TO T-TESTS The tests described in the preceding sections have nonparametric alternatives. The chief advantage of these tests is that they do not require continuous variables to be normally distributed. The chief disadvantage is that they are less likely to reject the null hypothesis. A further, minor disadvantage is that these tests do not provide descriptive information about variable means; separate analysis is required for that. Nonparametric alternatives to the independent-samples test are the Mann-Whitney and Wilcoxon tests. The Mann-Whitney and Wilcoxon tests are equivalent and are thus discussed jointly. Both are simplifications of the more general Kruskal-Wallis’ H test, discussed in Chapter 11.19 The Mann-Whitney and Wilcoxon tests assign ranks to the testing variable in the exact manner shown in Table 12.4. The sum of the ranks of each group is computed, shown in the table. Then a test is performed to determine the statistical significance of the difference between the sums, 22.5 and 32.5. Although the Mann-Whitney U and Wilcoxon W test statistics are calculated differently, they both have the same level of statistical significance: p = .295. Technically, this is not a test of different means but of different distributions; the lack of significance implies that groups 1 and 2 can be regarded as coming from the same population.20 Table 12.4 Rankings of
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
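The Box 12.2 benchmarking example above can be reproduced directly, since the excerpt gives the ten response times. A minimal one-sample t-test sketch:

```python
from math import sqrt

def one_sample_t(xs, mu0):
    """One-sample t-test of H0: population mean equals mu0."""
    n = len(xs)
    mean = sum(xs) / n
    sd = sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))  # sample SD
    se = sd / sqrt(n)                                      # standard error
    return (mean - mu0) / se, n - 1

# Response times (seconds) from the Box 12.2 example; benchmark is 360
times = [230, 450, 378, 430, 270, 470, 390, 300, 470, 530]
t, df = one_sample_t(times, 360)   # t ≈ 1.030, df = 9, matching the text
```

The computed t of about 1.030 on 9 degrees of freedom agrees with the excerpt's reported result; with p = .330, the 392-second sample mean cannot be declared different from the 360-second benchmark.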
For comparison, we use the Mann-Whitney test to compare the two samples of 10th graders discussed earlier in this chapter. The sum of ranks for the “before” group is 69.55, and for the “one year later” group, 86.57. The test statistic is significant at p = .019, yielding the same conclusion as the independent-samples t-test, p = .011. This comparison also shows that nonparametric tests do have larger p-values. As mentioned earlier, the Mann-Whitney test (as a nonparametric test) does not calculate the group means; separate, descriptive analysis needs to be undertaken for that information. A nonparametric alternative to the paired-samples t-test is the Wilcoxon signed rank test. This test assigns ranks based on the absolute values of the differences between the paired observations (Table 12.5). The signs of the differences are retained (thus, some values are positive and others are negative). For the data in Table 12.5, there are seven positive ranks (with mean rank = 6.57) and three negative ranks (with mean rank = 3.00). The Wilcoxon signed rank test statistic is normally distributed. The Wilcoxon signed rank test statistic, Z, for a difference between these values is 1.89 (p = .059 > .05). Hence, according to this test, the differences between the before and after scores are not significant. Getting Started Calculate a t-test and a Mann-Whitney test on data of your choice. Again, nonparametric tests result in larger p-values. The paired-samples t-test finds that p = .038 < .05, providing sufficient statistical evidence to conclude that the differences are significant. It might also be noted that a doubling of the data in Table 12.5 results in finding a significant difference between the before and after scores with the Wilcoxon signed rank test, Z = 2.694, p = .007. Table 12.5 Wilcoxon Signed Rank Test The Wilcoxon signed rank test can also be adapted as a nonparametric alternative to the one-sample t-test. 
In that case, analysts create a second variable that, for each observation, is the test value. For example, if in Table 12.5 we wish to test whether the mean of variable “before” is different from, say, 4.0, we create a second variable with 10 observations for which each value is, say, 4.0. Then using the Wilcoxon signed rank test for the “before” variable and this new,
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
Analysis of Variance (ANOVA)   CHAPTER OBJECTIVES After reading this chapter, you should be able to Use one-way ANOVA when the dependent variable is continuous and the independent variable is nominal or ordinal with two or more categories Understand the assumptions of ANOVA and how to test for them Use post-hoc tests Understand some extensions of one-way ANOVA This chapter provides an essential introduction to analysis of variance (ANOVA). ANOVA is a family of statistical techniques, the most basic of which is the one-way ANOVA, which provides an essential expansion of the t-test discussed in Chapter 12. One-way ANOVA allows analysts to test the effect of an ordinal or nominal variable with two or more categories on a continuous variable, rather than only two categories as is the case with the t-test. Thus, one-way ANOVA enables analysts to deal with problems such as whether the variable “region” (north, south, east, west) or “race” (Caucasian, African American, Hispanic, Asian, etc.) affects policy outcomes or any other matter that is measured on a continuous scale. One-way ANOVA also allows analysts to quickly determine subsets of categories with similar levels of the dependent variable. This chapter also addresses some extensions of one-way ANOVA and a nonparametric alternative. ANALYSIS OF VARIANCE Whereas the t-test is used for testing differences between two groups on a continuous variable (Chapter 12), one-way ANOVA is used for testing the means of a continuous variable across more than two groups. For example, we may wish to test whether income levels differ among three or more ethnic groups, or whether the counts of fish vary across three or more lakes. Applications of ANOVA often arise in medical and agricultural research, in which treatments are given to different groups of patients, animals, or crops. 
The F-test statistic compares the variances within each group against those that exist between each group and the overall mean: F = between-group variance / within-group variance. Key Point ANOVA extends the t-test; it is used when the independent variable is nominal or ordinal with two or more categories.
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
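The global F statistic just described can be computed by hand from the between-group and within-group sums of squares. A minimal sketch, using three small hypothetical groups:

```python
def one_way_f(groups):
    """Global F: between-group variance over within-group variance."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand = sum(sum(g) for g in groups) / n  # grand mean
    means = [sum(g) / len(g) for g in groups]
    # Variation of group means around the grand mean
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    # Variation of observations around their own group mean
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    ms_between = ss_between / (k - 1)        # df between = k - 1
    ms_within = ss_within / (n - k)          # df within = n - k
    return ms_between / ms_within

# Three hypothetical groups; the third clearly differs from the first two
f = one_way_f([[1, 2, 3], [2, 3, 4], [6, 7, 8]])
```

Here F works out to exactly 21.0 on (2, 6) degrees of freedom; a large F signals that at least one group mean differs, which is what the post-hoc tests then localize.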
suffered greater wetland loss than watersheds with smaller surrounding populations. Most watersheds have suffered no or only very modest losses (less than 3 percent during the decade in question), and few watersheds have suffered more than a 4 percent loss. The distribution is thus heavily skewed toward watersheds with little wetland losses (that is, to the left) and is clearly not normally distributed.6 To increase normality, the variable is transformed by twice taking the square root, x^0.25. The transformed variable is then normally distributed: the Kolmogorov-Smirnov statistic is 0.82 (p = .51 > .05). The variable also appears visually normal for each of the population subgroups. There are four population groups, designed to ensure an adequate number of observations in each. Boxplot analysis of the transformed variable indicates four large and three small outliers (not shown). Examination suggests that these are plausible and representative values, which are therefore retained. Later, however, we will examine the effect of these seven observations on the robustness of statistical results. Descriptive analysis of the variables is shown in Table 13.1. Generally, large populations tend to have larger average wetland losses, but the standard deviations are large relative to (the difference between) these means, raising considerable question as to whether these differences are indeed statistically significant. Also, the untransformed variable shows that the mean wetland loss is less among watersheds with “Medium I” populations than in those with “Small” populations (1.77 versus 2.52). The transformed variable shows the opposite order (1.06 versus 0.97). Further investigation shows this to be the effect of the three small outliers and two large outliers on the calculation of the mean of the untransformed variable in the “Small” group. Variable transformation minimizes this effect. These outliers also increase the standard deviation of the “Small” group. 
Using ANOVA, we find that the transformed variable has unequal variances across the four groups (Levene’s statistic = 2.83, p = .041 < .05). Visual inspection, shown in Figure 13.2, indicates that differences are not substantial for observations within the group interquartile ranges, the areas indicated by the boxes. The differences seem mostly caused by observations located in the whiskers of the “Small” group, which include the five outliers mentioned earlier. (The other two outliers remain outliers and are shown.) For now, we conclude that no substantial differences in variances exist, but we later test the robustness of this conclusion with consideration of these observations (see Figure 13.2). Table 13.1 Variable Transformation We now proceed with the ANOVA analysis. First, Table 13.2 shows that the global F-test statistic is 2.91, p = .038 < .05. Thus, at least one pair of means is significantly different. (The term sum of squares is explained in note 1.) Getting Started Try ANOVA on some data of your choice. Second, which pairs are significantly different? We use the Bonferroni post-hoc test because relatively few comparisons are made (there are only four groups). The computer-generated results (not shown in Table 13.2) indicate that the only significant difference concerns the means of the “Small” and “Large” groups. This difference (1.26 - 0.97 = 0.29 [of transformed values]) is significant at the 5 percent level (p = .028). The Tukey and Scheffe tests lead to the same conclusion (respectively, p = .024 and .044). (It should be noted that post-hoc tests also exist for when equal variances are not assumed. In our example, these tests lead to the same result.7) This result is consistent with a visual reexamination of Figure 13.2, which shows that differences between group means are indeed small. The Tukey and
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
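The Bonferroni post-hoc idea mentioned in this excerpt is simply to run every pairwise comparison but divide the significance threshold by the number of comparisons. A sketch with three hypothetical groups (converting each t to a p-value would additionally require the t distribution, omitted here for brevity):

```python
from itertools import combinations
from math import sqrt

def pooled_t(a, b):
    """Pairwise pooled-variance t statistic."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ss = sum((x - ma) ** 2 for x in a) + sum((x - mb) ** 2 for x in b)
    se = sqrt(ss / (na + nb - 2) * (1 / na + 1 / nb))
    return (ma - mb) / se

# Hypothetical group data (labels loosely echo the watershed example)
groups = {"Small": [1, 2, 2, 3], "Medium": [2, 3, 3, 4], "Large": [5, 6, 6, 7]}
pairs = list(combinations(groups, 2))
# Bonferroni: divide the familywise alpha by the number of comparisons
alpha_adj = 0.05 / len(pairs)
t_stats = {(g1, g2): pooled_t(groups[g1], groups[g2]) for g1, g2 in pairs}
```

With four groups, as in the excerpt, there would be six pairs and each comparison would be judged at alpha = .05/6 ≈ .0083, which is why Bonferroni is reasonable only when "relatively few comparisons are made."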
Scheffe tests also produce “homogeneous subsets,” that is, groups that have statistically identical means. Both the three largest and the three smallest populations have identical means. The Tukey levels of statistical significance are, respectively, .725 and .165 (both > .05). This is shown in Table 13.3. Figure 13.2 Group Boxplots Table 13.2 ANOVA Table Third, is the increase in means linear? This test is an option on many statistical software packages that produces an additional line of output in the ANOVA table, called the “linear term for unweighted sum of squares,” with the appropriate F-test. Here, that F-test statistic is 7.85, p = .006 < .01, and so we conclude that the apparent linear increase is indeed significant: wetland loss is linearly associated with the increased surrounding population of watersheds.8 Figure 13.2 does not clearly show this, but the enlarged Y-axis in Figure 13.3 does. Fourth, are our findings robust? One concern is that the statistical validity is affected by observations that statistically (although not substantively) are outliers. Removing the seven outliers identified earlier does not affect our conclusions. The resulting variable remains normally distributed, and there are no (new) outliers for any group. The resulting variable has equal variances across the groups (Levene’s test = 1.03, p = .38 > .05). The global F-test is 3.44 (p = .019 < .05), and the Bonferroni post-hoc test similarly finds that only the differences between the “Small” and “Large” group means are significant (p = .031). The increase remains linear (F = 6.74, p = .011 < .05). Thus, we conclude that the presence of observations with large values does not alter our conclusions. Table 13.3 Homogeneous Subsets Figure 13.3 Watershed Loss, by Population We also test the robustness of conclusions for different variable transformations. The extreme skewness of the untransformed variable allows for only a limited range of root transformations that produce normality. 
Within this range (power 0.222 through 0.275), the preceding conclusions are replicated fully. Natural log and base-10 log transformations also result in normality and replicate these results, except that the post-hoc tests fail to identify that the means of the “Large” and “Small” groups are significantly different. However, the global F-test is (marginally) significant (F = 2.80, p = .043 < .05), which suggests that this difference is too small to detect with this transformation. A single, independent-samples t-test for this difference is significant (t = 2.47, p = .017 < .05), suggesting that this problem may have been exacerbated by the limited number of observations. In sum, we find converging evidence for our conclusions. As this example also shows, when using statistics, analysts frequently must exercise judgment and justify their decisions.9 Finally, what is the practical significance of this analysis? The wetland loss among watersheds with large surrounding populations is [(3.21 – 2.52)/2.52 =] 27.4 percent greater than among those surrounded by small populations. It is up to managers and elected officials to determine whether a difference of this magnitude warrants intervention in watersheds with large surrounding populations.10
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
Beyond One-Way ANOVA The approach described in the preceding section is called one-way ANOVA. This scenario is easily generalized to accommodate more than one independent variable. These independent variables are either discrete (called factors) or continuous (called covariates). These approaches are called n-way ANOVA or ANCOVA (the “C” indicates the presence of covariates). Two-way ANOVA, for example, allows for testing of the effect of two different independent variables on the dependent variable, as well as the interaction of these two independent variables. An interaction effect between two variables describes the way that variables “work together” to have an effect on the dependent variable. This is perhaps best illustrated by an example. Suppose that an analyst wants to know whether the number of health care information workshops attended, as well as a person’s education, are associated with healthy lifestyle behaviors. Although we can surely theorize how attending health care information workshops and a person’s education can each affect an individual’s healthy lifestyle behaviors, it is also easy to see that the level of education can affect a person’s propensity for attending health care information workshops, as well. Hence, an interaction effect could also exist between these two independent variables (factors). The effects of each independent variable on the dependent variable are called main effects (as distinct from interaction effects). To continue the earlier example, suppose that in addition to population, an analyst also wants to consider a measure of the watershed’s preexisting condition, such as the number of plant and animal species at risk in the watershed. Two-way ANOVA produces the results shown in Table 13.4, using the transformed variable mentioned earlier. The first row, labeled “model,” refers to the combined effects of all main and interaction effects in the model on the dependent variable. This is the global F-test. 
The “model” row shows that the two main effects and the single interaction effect, when considered together, are significantly associated with changes in the dependent variable (p < .001). However, the results also show a reduced significance level of “population” (now, p = .064), which seems related to the interaction effect (p = .076). Although neither effect is significant at conventional levels, the results do suggest that an interaction effect is present between population and watershed condition (of which the number of at-risk species is an indicator) on watershed wetland loss. Post-hoc tests are only provided separately for each of the independent variables (factors), and the results show the same homogeneous grouping for both of the independent variables. Table 13.4 Two-Way ANOVA Results As we noted earlier, ANOVA is a family of statistical techniques that allow for a broad range of rather complex experimental designs. Complete coverage of these techniques is well beyond the scope of this book, but in general, many of these techniques aim to discern the effect of variables in the presence of other (control) variables. ANOVA is but one approach for addressing control variables. A far more common approach in public policy, economics, political science, and public administration (as well as in many other fields) is multiple regression (see Chapter 15). Many analysts feel that ANOVA and regression are largely equivalent. Historically, the preference for ANOVA stems from its uses in medical and agricultural research, with applications in education and psychology. Finally, the ANOVA approach can be generalized to allow for testing on two or more dependent variables. This approach is called multiple analysis of variance, or MANOVA. Regression-based analysis can also be used for dealing with multiple dependent variables, as mentioned in Chapter 17.
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
A NONPARAMETRIC ALTERNATIVE A nonparametric alternative to one-way ANOVA is Kruskal-Wallis’ H test of one-way ANOVA. Instead of using the actual values of the variables, Kruskal-Wallis’ H test assigns ranks to the variables, as shown in Chapter 11. As a nonparametric method, Kruskal-Wallis’ H test does not assume normal populations, but the test does assume similarly shaped distributions for each group. This test is applied readily to our one-way ANOVA example, and the results are shown in Table 13.5. Table 13.5 Kruskal-Wallis’ H-Test of One-Way ANOVA Kruskal-Wallis’ H one-way ANOVA test shows that population is significantly associated with watershed loss (p = .013). This is one instance in which the general rule that nonparametric tests have higher levels of significance is not seen. Although Kruskal-Wallis’ H test does not report mean values of the dependent variable, the pattern of mean ranks is consistent with Figure 13.2. A limitation of this nonparametric test is that it does not provide post-hoc tests or analysis of homogeneous groups, nor are there nonparametric n-way ANOVA tests such as for the two-way ANOVA test described earlier. SUMMARY One-way ANOVA extends the t-test by allowing analysts to test whether two or more groups have different means of a continuous variable. The t-test is limited to only two groups. One-way ANOVA can be used, for example, when analysts want to know if the mean of a variable varies across regions, racial or ethnic groups, population or employee categories, or another grouping with multiple categories. ANOVA is a family of statistical techniques, and one-way ANOVA is the most basic of these methods. ANOVA is a parametric test that makes the following assumptions: The dependent variable is continuous. The independent variable is ordinal or nominal. The groups have equal variances. The variable is normally distributed in each of the groups. 
Relative to the t-test, ANOVA requires more attention to the assumptions of normality and homogeneity. ANOVA is not robust for the presence of outliers, and it appears to be less robust than the t-test for deviations from normality. Variable transformations and the removal of outliers are to be expected when using ANOVA. ANOVA also includes three other types of tests of interest: post-hoc tests of mean differences among categories, tests of homogeneous subsets, and tests for the linearity of mean differences across categories. Two-way ANOVA addresses the effect of two independent variables on a continuous dependent variable. When using two-way ANOVA, the analyst is able to distinguish main effects from interaction effects. Kruskal-Wallis’ H test is a nonparametric alternative to one-way ANOVA. KEY TERMS   Analysis of variance (ANOVA) ANOVA assumptions Covariates Factors Global F-test Homogeneous subsets Interaction effect Kruskal-Wallis’ H test of one-way ANOVA Main effect One-way ANOVA Post-hoc test Two-way ANOVA Notes   1. The between-group variance is
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
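The rank-based logic of Kruskal-Wallis' H described in this excerpt can be sketched directly from its textbook formula, H = 12/(N(N+1)) · Σ R_j²/n_j − 3(N+1), where R_j is the rank sum of group j. The groups below are hypothetical, and the sketch assumes no tied values (real data would need average ranks and a tie correction):

```python
def kruskal_h(groups):
    """Kruskal-Wallis H statistic (no tie correction)."""
    # Pool all observations, tagging each with its group index, then rank
    pooled = sorted((x, gi) for gi, g in enumerate(groups) for x in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):  # assumes no ties
        rank_sums[gi] += rank
    return 12 / (n * (n + 1)) * sum(
        rs ** 2 / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3 * (n + 1)

# Three hypothetical groups with no tied values
h = kruskal_h([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```

For these nine observations the rank sums are 6, 15, and 24, giving H = 7.2; H is then referred to a chi-square distribution with (number of groups − 1) degrees of freedom.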
To date, I don’t know what changed in her. Could I have found out, by requesting information or talking to her in the corridor? Maybe. But could I have done that and not gotten involved? Process is king, I believe, and so these things have to play themselves out; there’s no right answer. Sure, it takes some organizational cold-bloodedness, and it might leave the reader, as well as many Semco employees, miffed or unconvinced. That, however, is the price for giving up policies, procedures, missions, and credos. Just as our aversion to long-term analysis is based on the realization that it can be a waste of time and energy to attempt to foresee every possible twist and turn of the road ahead, finding the root cause of every problem can also be unproductive.
Ricardo Semler (The Seven-Day Weekend: Changing the Way Work Works)
The situation was similar in the Soviet Union, with industry playing the role of sugar in the Caribbean. Industrial growth in the Soviet Union was further facilitated because its technology was so backward relative to what was available in Europe and the United States, so large gains could be reaped by reallocating resources to the industrial sector, even if all this was done inefficiently and by force. Before 1928 most Russians lived in the countryside. The technology used by peasants was primitive, and there were few incentives to be productive. Indeed, the last vestiges of Russian feudalism were eradicated only shortly before the First World War. There was thus huge unrealized economic potential from reallocating this labor from agriculture to industry. Stalinist industrialization was one brutal way of unlocking this potential. By fiat, Stalin moved these very poorly used resources into industry, where they could be employed more productively, even if industry itself was very inefficiently organized relative to what could have been achieved. In fact, between 1928 and 1960 national income grew at 6 percent a year, probably the most rapid spurt of economic growth in history up until then. This quick economic growth was not created by technological change, but by reallocating labor and by capital accumulation through the creation of new tools and factories. Growth was so rapid that it took in generations of Westerners, not just Lincoln Steffens. It took in the Central Intelligence Agency of the United States. It even took in the Soviet Union’s own leaders, such as Nikita Khrushchev, who famously boasted in a speech to Western diplomats in 1956 that “we will bury you [the West].” As late as 1977, a leading academic textbook by an English economist argued that Soviet-style economies were superior to capitalist ones in terms of economic growth, providing full employment and price stability and even in producing people with altruistic motivation. 
Poor old Western capitalism did better only at providing political freedom. Indeed, the most widely used university textbook in economics, written by Nobel Prize–winner Paul Samuelson, repeatedly predicted the coming economic dominance of the Soviet Union. In the 1961 edition, Samuelson predicted that Soviet national income would overtake that of the United States possibly by 1984, but probably by 1997. In the 1980 edition there was little change in the analysis, though the two dates were delayed to 2002 and 2012. Though the policies of Stalin and subsequent Soviet leaders could produce rapid economic growth, they could not do so in a sustained way. By the 1970s, economic growth had all but stopped. The most important lesson is that extractive institutions cannot generate sustained technological change for two reasons: the lack of economic incentives and resistance by the elites. In addition, once all the very inefficiently used resources had been reallocated to industry, there were few economic gains to be had by fiat. Then the Soviet system hit a roadblock, with lack of innovation and poor economic incentives preventing any further progress. The only area in which the Soviets did manage to sustain some innovation was through enormous efforts in military and aerospace technology. As a result they managed to put the first dog, Leika, and the first man, Yuri Gagarin, in space. They also left the world the AK-47 as one of their legacies. Gosplan was the supposedly all-powerful planning agency in charge of the central planning of the Soviet economy.
Daron Acemoğlu (Why Nations Fail: The Origins of Power, Prosperity and Poverty)
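The 6 percent annual growth rate the passage cites compounds to a striking cumulative figure over the 1928–1960 period; a quick back-of-the-envelope check (illustrative only, using the 32-year span and the rate stated in the quote):

```python
# Cumulative effect of 6% annual growth over the 1928-1960 span
# cited in the passage for the Soviet economy.
rate = 0.06
years = 1960 - 1928  # 32 years
multiple = (1 + rate) ** years
print(f"National income multiple over {years} years: {multiple:.2f}x")
```

Sustained 6 percent growth multiplies national income roughly six-and-a-half-fold over those 32 years, which helps explain why observers from Steffens to Samuelson were taken in.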
If policies were based upon climate science rather than climate studies, this simple, straightforward analysis would spell the end of any onerous climate policy. However, while our similar studies can be scientifically cited,7 to date, there has been an understandable reluctance to publish this in the tier-1 scientific literature, such as Nature or Science, as that would indicate a massive, unexplainable, and persistent failure of the studies driving global climate policy. Paltridge recently speculated that when this is ultimately permitted, the cost to all science (not just climate science) is going to be dear and lasting, much to the detriment of our society and our public policies.8 It will provoke serious doubt that the present incentive structure in science—which requires that the practitioners keep their problems ‘important’—has far-reaching and disastrous unintended consequences.
Alan Moran (Climate Change: The Facts)
In insisting that peasant activity contrary to Communist policies could be defined as kulak while at the same time maintaining that his approach to the peasantry was based on scientific Marxist class analysis, Lenin provided his successors with conceptualizations that would be used in collectivization when Stalin launched a war against all peasants.
Lynne Viola (Peasant Rebels Under Stalin: Collectivization and the Culture of Peasant Resistance)
The unprecedented bull market in Treasury bonds, supported by the belief that Treasury bonds are “insurance policies” in the case of financial collapse, could end as badly as the bull market in technology stocks did at the turn of the century. When economic growth increases, Treasury bondholders will receive the double blow of rising interest rates and loss of safe-haven status. One of the prime lessons learned from long-term analysis is that no asset class can stay permanently detached from fundamentals. Stocks had their comeuppance when the technology bubble burst and the financial system crashed. It is quite likely that bondholders will suffer a similar fate as the liquidity created by the world’s central banks turns into stronger economic growth and higher inflation.
Jeremy J. Siegel (Stocks for the Long Run: The Definitive Guide to Financial Market Returns & Long-Term Investment Strategies)
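Siegel's "double blow" of rising rates can be made concrete with a simple discounted-cash-flow sketch. The 30-year maturity, $1,000 face value, and 3%-to-5% yield move below are assumed purely for illustration, not taken from the text:

```python
def bond_price(face, coupon_rate, yield_rate, years):
    """Price a bond as the present value of its annual coupons
    plus the face value repaid at maturity."""
    coupons = sum(face * coupon_rate / (1 + yield_rate) ** t
                  for t in range(1, years + 1))
    principal = face / (1 + yield_rate) ** years
    return coupons + principal

# Hypothetical 30-year Treasury with a 3% coupon, bought at par.
p0 = bond_price(1000, 0.03, 0.03, 30)  # yield equals coupon: priced at par
p1 = bond_price(1000, 0.03, 0.05, 30)  # price after yields rise to 5%
print(f"Price at 3% yield: {p0:.2f}")
print(f"Price at 5% yield: {p1:.2f}")  # roughly 30% below par
```

A two-point rise in yields erases nearly a third of the bond's market value, before any loss of safe-haven premium is counted.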
Westerners, not just Lincoln Steffens. It took in the Central Intelligence Agency of the United States. It even took in the Soviet Union’s own leaders, such as Nikita Khrushchev, who famously boasted in a speech to Western diplomats in 1956 that “we will bury you [the West].” As late as 1977, a leading academic textbook by an English economist argued that Soviet-style economies were superior to capitalist ones in terms of economic growth, providing full employment and price stability and even in producing people with altruistic motivation. Poor old Western capitalism did better only at providing political freedom. Indeed, the most widely used university textbook in economics, written by Nobel Prize–winner Paul Samuelson, repeatedly predicted the coming economic dominance of the Soviet Union. In the 1961 edition, Samuelson predicted that Soviet national income would overtake that of the United States possibly by 1984, but probably by 1997. In the 1980 edition there was little change in the analysis, though the two dates were delayed to 2002 and 2012. Though the policies of Stalin and subsequent Soviet leaders could produce rapid economic growth, they could not do so in a sustained way. By the 1970s, economic growth had all but stopped. The most important lesson is that extractive institutions cannot generate sustained technological change for two reasons: the lack of economic incentives and resistance by the elites. In addition, once all the very inefficiently used resources had been reallocated to industry, there were few economic gains to be had by fiat. Then the Soviet system hit a roadblock, with lack of innovation and poor economic incentives preventing any further progress. The only area in which the Soviets did manage to sustain some innovation was through enormous efforts in military and aerospace technology. As a result they managed to put the first dog, Leika, and the first man, Yuri Gagarin, in space. 
They also left the world the AK-47 as one of their legacies. Gosplan was the supposedly all-powerful planning agency in charge of the central planning of the Soviet economy. One of the benefits of the sequence of five-year plans written and administered by Gosplan was supposed to have been the long time horizon necessary for rational investment and innovation. In reality, what got implemented in Soviet industry had little to do with the five-year plans, which were frequently revised and rewritten or simply ignored. The development of industry took place on the basis of commands by Stalin and the Politburo, who changed their minds frequently and often completely revised their previous decisions. All plans were labeled “draft” or “preliminary.” Only one copy of a plan labeled “final”—that for light industry in 1939—has ever come to light. Stalin himself said in 1937 that “only bureaucrats can think that planning work ends with the creation of the plan. The creation of the plan is just the beginning. The real direction of the plan develops only after the putting together of the plan.” Stalin wanted to maximize his discretion to reward people or groups who were politically loyal, and punish those who were not. As for Gosplan, its main role was to provide Stalin with information so he could better monitor his friends and enemies. It actually tried to avoid making decisions. If you made a decision that turned out badly, you might get shot. Better to avoid all responsibility. An example of what could happen
Daron Acemoğlu (Why Nations Fail: The Origins of Power, Prosperity and Poverty)
For example, in October 2012, Barclaycard asked the Ring community if the company should change its late-fee policy. The existing policy allowed members a three-day grace period before charging a late-payment penalty. Barclaycard proposed eliminating the three-day grace period and allowing one late payment per year. From its analysis, Barclaycard estimated it would generate 15 percent more late-fee revenue, which would result in higher profits for the community. Members voted overwhelmingly to adopt the policy. They were willing to punish those members who were habitually late payers to generate more profit for the community. This behavior may seem counterintuitive, but because most members paid their bills on time, they wouldn’t be adversely affected.
Brian Burke (Gamify: How Gamification Motivates People to Do Extraordinary Things)
Association for Policy Analysis and Management. This list of thanks would be incomplete without special mention of Jo Turpin, who provided stellar research assistance and spent countless hours editing the entire manuscript. Her work helped us find that plain voice we had so long forgotten after writing so many pages in scholarly journals for our research colleagues.     We also thank the five anonymous reviewers of an earlier draft of the manuscript, senior staff especially, Jennifer Burnszynski, our long-time colleague, Elaine Sorensen, and Debra Pontisso
Ronald B. Mincy (Failing Our Fathers: Confronting the Crisis of Economically Vulnerable Nonresident Fathers)
Public policy formulation has undergone a metamorphic change during the last three or four decades due to the rapidly globalising world. There are at least four ways in which globalization is affecting policy formulation in each country. Firstly, thanks to social and electronic media, small issues which a decade or so ago could only find a place in the back pages of a national newspaper become breaking news on major global channels, creating advocacy and sympathy movements in different parts of the world. Secondly, with the rapidly globalizing world, global issues like environmental degradation, climate change, GMOs, etc., which were once discussed only in the corridors of power, are being debated in the drawing rooms of countries and creating strong advocacy movements among the population. Thirdly, centers of actual power and decision making are shifting from the local to the global level with the outreach of domestic interest groups to their sympathizers in international organizations, multinational corporations, and the governments of global powers. This outreach enables them to force their own government to accede to their demands because of the economic and political clout of the global players. Lastly, whether approached by domestic interests or not, global state and non-state actors are increasingly penetrating those domains which were hitherto exclusively reserved for the domestic state machinery. They not only interfere in policy formulation but now act directly, through their proxies in the form of nongovernmental organizations, in domestic policy formulation and implementation.
Shahid Hussain Raja (Public Policy Formulation and Analysis: A Handbook 2nd Edition)
To further support this claim, consider again the work of Michael Scheuer, the former head of CIA's Bin Laden Unit. Scheuer has provided a comprehensive analysis of Osama Bin Laden, al-Qaeda, and the war on terror as presently undertaken by the United States.42 One of Scheuer's central claims is that al-Qaeda, and other terrorist organizations, are not motivated by a fundamental hatred for the American identity and way of life, but instead by U.S. interventions and policies in the larger Middle East region. It is Scheuer's contention that these interventions are in fact the driving force behind the backlash against the United States.43 In other words, these interventions in the Middle East have generated negative unintended consequences, such as the 9/11 attacks, that in turn led to further interventions, such as the overall war on terror and the invasion of and subsequent reconstruction efforts in Afghanistan and Iraq.
Christopher J. Coyne (After War: The Political Economy of Exporting Democracy)
Analysis predating [Milton] Friedman's gave a different answer to question of the Fed's policy errors [during the Great Depression] and new scholarship is validating the old wisdom. It now appears that Friedman will be merely an interlude between the sounder analysis of economists contemporary to the Great Depression and those who have rediscovered their insights. --Jeff Herbener
David Howden (The Fed at One Hundred: A Critical View on the Federal Reserve System)
...while social analysis must always be part of curriculum development and teaching, education policy and practice need to be protected against the dangers of fads, obsessions and moral panics.
Rob Gilbert and Pam Gilbert
In essence, then, the common picture of economic thought after Smith needs to be reversed. In the conventional view, Adam Smith, the towering founder, by his theoretical genius and by the sheer weight of his knowledge of institutional facts, single-handedly created the discipline of political economy as well as the public policy of the free market, and did so out of a jumble of mercantilist fallacies and earlier absurd scholastic notions of a 'just price'. The real story is almost the opposite. Before Smith, centuries of scholastic analysis had developed an excellent value theory and monetary theory, along with corresponding free market and hard-money conclusions. Originally embedded among the scholastics in a systematic framework of property rights and contract law based on natural law theory, economic theory
Anonymous
Sticking with the $2 trillion infrastructure proposal, MMT would have us begin by asking if it would be safe for Congress to authorize $2 trillion in new spending without offsets. A careful analysis of the economy’s existing (and anticipated) slack would guide lawmakers in making that determination. If the CBO and other independent analysts concluded it would risk pushing inflation above some desired inflation rate, then lawmakers could begin to assemble a menu of options to identify the most effective ways to mitigate that risk. Perhaps one-third, one-half, or three-fourths of the spending would need to be offset. It’s also possible that none would require offsets. Or perhaps the economy is so close to its full employment potential that PAYGO is the right policy. The point is, Congress should work backward to arrive at the answer rather than beginning with the presumption that every new dollar of spending needs to be fully offset. That helps to protect us from unwarranted tax increases and undesired inflation. It also ensures that there is always a check on any new spending. The best way to fight inflation is before it happens. In one sense, we have gotten lucky. Congress routinely makes large fiscal commitments without pausing to evaluate inflation risks. It can add hundreds of billions of dollars to the defense budget or pass tax cuts that add trillions to the fiscal deficit over time, and for the most part, we come out unscathed—at least in terms of inflation. That’s because there’s normally enough slack to absorb bigger deficits. Although excess capacity has served as a sort of insurance policy against a Congress that ignores inflation risk, maintaining idle resources comes at a price. It depresses our collective well-being by depriving us of the array of things we could have enjoyed if we had put our resources to good use. MMT aims to change that. 
MMT is about harnessing the power of the public purse to build an economy that lives up to its full potential while maintaining appropriate checks on that power. No one would think of Spider-Man as a superhero if he refused to use his powers to protect and serve. With great power comes great responsibility. The power of the purse belongs to all of us. It is wielded by democratically elected members of Congress, but we should think of it as a power that exists to serve us all. Overspending is an abuse of power, but so is refusing to act when more can be done to elevate the human condition without risking inflation.
Stephanie Kelton (The Deficit Myth: Modern Monetary Theory and the Birth of the People's Economy)
Contrary to “the mantra,” White supremacists are the ones supporting policies that benefit racist power against the interests of the majority of White people. White supremacists claim to be pro-White but refuse to acknowledge that climate change is having a disastrous impact on the earth White people inhabit. They oppose affirmative-action programs, despite White women being their primary beneficiaries. White supremacists rage against Obamacare even as 43 percent of the people who gained lifesaving health insurance from 2010 to 2015 were White. They heil Adolf Hitler’s Nazis, even though it was the Nazis who launched a world war that destroyed the lives of more than forty million White people and ruined Europe. They wave Confederate flags and defend Confederate monuments, even though the Confederacy started a civil war that ended with more than five hundred thousand White American lives lost—more than every other American war combined. White supremacists love what America used to be, even though America used to be—and still is—teeming with millions of struggling White people. White supremacists blame non-White people for the struggles of White people when any objective analysis of their plight primarily implicates the rich White Trumps they support.
Ibram X. Kendi (How to Be an Antiracist)
An incapacitous patient who becomes pregnant may be forced, against her will, to have an abortion. The judgment will typically be expressed in the language of the best interests both of the mother and of the welfare (were it to be born) of the child. What’s happening here? The maternal best interests part of the analysis is fairly straightforward. This isn’t really an abortion against the mother’s will. She’s got no (rightly directed) will. But what about the interests of the putative child? A couple of points. First: it is given a voice in the debate (although for other purposes it has no legal existence) because it is convenient for it to have it. It will obligingly deliver a speech saying that it doesn’t want to exist, and will then shut up. It’s allowed no other speech. Second: in the law of the UK and in many other jurisdictions a child cannot bring a claim based on the assertion ‘It were better that my mother had not borne me.’ It’s regarded as offensive to public policy: see, for instance, McKay v Essex AHA (1982).
Charles Foster (Medical Law: A Very Short Introduction)
and authors who present information challenging to those in power. Censorship leads instead to greater distrust of both government institutions and large corporations. There is no ideology or politics in pointing out the obvious: scientific errors and public policy errors do occur—and can have devastating consequences. Errors might result from flawed analysis, haste, arrogance, and sometimes, corruption. Whatever the cause, the solutions come from open-minded exploration,
Robert F. Kennedy Jr. (The Real Anthony Fauci: Bill Gates, Big Pharma, and the Global War on Democracy and Public Health)
This obstacle, which should be relentlessly combatted as a sign of narrow-minded party fanaticism and backward political culture, is reinforced for a journal like ours through the fact that in the social sciences the stimulus to the posing of scientific problems is in actuality always given by practical "questions." Hence the very recognition of the existence of a scientific problem coincides, personally, with the possession of specially oriented motives and values. A journal which has come into existence under the influence of a general interest in a concrete problem will always include among its contributors persons who are personally interested in these problems because certain concrete situations seem to be incompatible with, or seem to threaten, the realization of certain ideal values in which they believe. A bond of similar ideals will hold this circle of contributors together and it will be the basis of a further recruitment. This in turn will tend to give the journal, at least in its treatment of questions of practical social policy, a certain "character" which of course inevitably accompanies every collaboration of vigorously sensitive persons whose evaluative standpoint regarding the problems cannot be entirely expressed even in purely theoretical analysis; in the criticism of practical recommendations and measures it quite legitimately finds expression under the particular conditions above discussed.
Max Weber (The Theory of Social and Economic Organization)