“
Reaction is just that—an action you have taken before. When you “re-act,” what you do is assess the incoming data, search your memory bank for the same or nearly the same experience, and act the way you did before. This is all the work of the mind, not of your soul.
”
Neale Donald Walsch (The Complete Conversations with God)
“
It is well and good to opine or theorize about a subject, as humankind is wont to do, but when moral posturing is replaced by an honest assessment of the data, the result is often a new, surprising insight.
”
Steven D. Levitt (Freakonomics: A Rogue Economist Explores the Hidden Side of Everything)
“
When the tragedies of others become for us diversions, sad stories with which to enthrall our friends, interesting bits of data to toss out at cocktail parties, a means of presenting a pose of political concern, or whatever…when this happens we commit the gravest of sins, condemn ourselves to ignominy, and consign the world to a dangerous course. We begin to justify our casual overview of pain and suffering by portraying ourselves as do-gooders incapacitated by the inexorable forces of poverty, famine, and war. “What can I do?” we say, “I’m only one person, and these things are beyond my control. I care about the world’s trouble, but there are no solutions.” Yet no matter how accurate this assessment, most of us are relying on it to be true, using it to mask our indulgence, our deep-seated lack of concern, our pathological self-involvement.
”
Lucius Shepard (The Best of Lucius Shepard)
“
The important thing with Elon,” he says, “is that if you told him the risks and showed him the engineering data, he would make a quick assessment and let the responsibility shift from your shoulders to his.
”
Walter Isaacson (Elon Musk)
“
Maintaining good accounting records is vital to the successful management of a business. It's really good to be able to assess business-specific financial data to inform decisions. So every business should invest in good accounting software like Intuit, Quicken, or Freshbooks... Or any of the many apps out there.
”
Hendrith Vanlon Smith Jr.
“
But the history of science—by far the most successful claim to knowledge accessible to humans—teaches that the most we can hope for is successive improvement in our understanding, learning from our mistakes, an asymptotic approach to the Universe, but with the proviso that absolute certainty will always elude us. We will always be mired in error. The most each generation can hope for is to reduce the error bars a little, and to add to the body of data to which error bars apply. The error bar is a pervasive, visible self-assessment of the reliability of our knowledge.
”
Carl Sagan (The Demon-Haunted World: Science as a Candle in the Dark)
“
We have noted that gut feelings are an important part of the body’s sensory apparatus, helping us to evaluate the environment and assess whether a situation is safe. Gut feelings magnify perceptions that the emotional centres of the brain find important and relay through the hypothalamus. Pain in the gut is one signal the body uses to send messages that are difficult for us to ignore. Thus, pain is also a mode of perception. Physiologically, the pain pathways channel information that we have blocked from reaching us by more direct routes. Pain is a powerful secondary mode of perception to alert us when our primary modes have shut down. It provides us with data that we ignore at our peril.
”
Gabor Maté (When the Body Says No: The Cost of Hidden Stress)
“
But why should we accept that the way men do things, the way men see themselves, is the correct way? Recent research has emerged showing that while women tend to assess their intelligence accurately, men of average intelligence think they are more intelligent than two-thirds of people. This being the case, perhaps it wasn’t that women’s rates of putting themselves up for promotion were too low. Perhaps it was that men’s were too high.
”
Caroline Criado Pérez (Invisible Women: Data Bias in a World Designed for Men)
“
In fact, the greatest savings from wellness programs come from the penalties assessed on the workers. In other words, like scheduling algorithms, they provide corporations with yet another tool to raid their employees’ paychecks.
”
Cathy O'Neil (Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy)
“
Uncertainty is an acid, corrosive to authority. Once the monopoly on information is lost, so too is our trust. Every presidential statement, every CIA assessment, every investigative report by a great newspaper, suddenly acquired an arbitrary aspect, and seemed grounded in moral predilection rather than intellectual rigor. When proof for and against approaches infinity, a cloud of suspicion about cherry-picking data will hang over every authoritative judgment.
”
Martin Gurri (The Revolt of the Public and the Crisis of Authority in the New Millennium)
“
impressive series of studies by Thomas Åstebro sheds light on what happens when optimists receive bad news. He drew his data from a Canadian organization—the Inventor’s Assistance Program—which collects a small fee to provide inventors with an objective assessment of the commercial prospects of their idea.
”
Daniel Kahneman (Thinking, Fast and Slow)
“
During sustained stress, the amygdala processes emotional sensory information more rapidly and less accurately, dominates hippocampal function, and disrupts frontocortical function; we’re more fearful, our thinking is muddled, and we assess risks poorly and act impulsively out of habit, rather than incorporating new data.
”
Robert M. Sapolsky (Behave: The Biology of Humans at Our Best and Worst)
“
The witch-hunt narrative is now the conventional wisdom about these cases. That view is so widely endorsed and firmly entrenched that there would seem to be nothing left to say about these cases. But a close examination of the witch-hunt canon leads to some unsettling questions: Why is there so little in the way of academic scholarship about these cases? Almost all of the major witch-hunt writings have been in magazines, often without any footnotes to verify or assess the claims made. Why hasn't anyone writing about these cases said anything about how difficult they are to research? There are so many roadblocks and limitations to researching these cases that it would seem incumbent on any serious writer to address the limitations of data sources. Many of these cases seem to have been researched in a matter of days or weeks. Nevertheless, the cases are described in a definitive way that belies their length and complexity, along with the inherent difficulty in researching original trial court documents. This book is based on the first systematic examination of court records in these cases.
”
Ross E. Cheit (The Witch-Hunt Narrative: Politics, Psychology, and the Sexual Abuse of Children)
“
Assessment can be either formal and/or informal measures that gather information. In education, meaningful assessment is data that guides and informs the teacher and/or stakeholders of students' abilities, strategies, performance, content knowledge, feelings and/or attitudes. Information obtained is used to make educational judgements or evaluative statements. Most useful assessment is data which is used to adjust curriculum in order to benefit the students. Assessment should be used to inform instruction. Diagnosis and assessment should document literacy in real-world contexts using data as performance indicators of students' growth and development.
”
Dan Greathouse & Kathleen Donalson
“
Given the central place that technology holds in our lives, it is astonishing that technology companies have not put more resources into fixing this global problem. Advanced computer systems and artificial intelligence (AI) could play a much bigger role in shaping diagnosis and prescription. While the up-front costs of using such technology may be sizeable, the long-term benefits to the health-care system need to be factored into value assessments.
We believe that AI platforms could improve on the empirical prescription approach. Physicians work long hours under stressful conditions and have to keep up to date on the latest medical research. To make this work more manageable, the health-care system encourages doctors to specialize. However, the vast majority of antibiotics are prescribed either by generalists (e.g., general practitioners or emergency physicians) or by specialists in fields other than infectious disease, largely because of the need to treat infections quickly. An AI system can process far more information than a single human, and, even more important, it can remember everything with perfect accuracy. Such a system could theoretically enable a generalist doctor to be as effective as, or even superior to, a specialist at prescribing. The system would guide doctors and patients to different treatment options, assigning each a probability of success based on real-world data. The physician could then consider which treatment was most appropriate.
”
William Hall (Superbugs: An Arms Race against Bacteria)
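The system the authors describe boils down to ranking candidate treatments by an estimated probability of success, with the physician making the final call. A minimal sketch of that ranking step; the drug names and probabilities below are invented placeholders, not clinical data:

```python
# Toy ranking of treatment options by estimated success probability.
# Names and numbers are invented for illustration, not clinical guidance.
options = {
    "drug_A": 0.82,
    "drug_B": 0.74,
    "drug_C": 0.61,
}

# Present options from most to least promising, as the passage describes.
for drug, p_success in sorted(options.items(), key=lambda kv: -kv[1]):
    print(f"{drug}: estimated probability of success {p_success:.0%}")
# The physician, not the system, chooses from this ranked list.
```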
“
The umbrella assertion made by Team B—and the most inflammatory—was that the previous National Intelligence Estimates “substantially misperceived the motivations behind Soviet strategic programs, and thereby tended consistently to underestimate their intensity, scope, and implicit threat.” Soviet military leaders weren’t simply trying to defend their territory and their people; they were readying a First Strike option, and the US intelligence community had missed it. What led to this “grave and dangerous flaw” in threat assessment, according to Team B, was an overreliance on hard technical facts, and a lamentable tendency to downplay “the large body of soft data.” This “soft” data, the ideological leader of Team B, Richard Pipes, would later say, included “his deep knowledge of the Russian soul.
”
Rachel Maddow (Drift: The Unmooring of American Military Power)
“
I know that the consequences of scientific illiteracy are far more dangerous in our time than in any that has come before. It’s perilous and foolhardy for the average citizen to remain ignorant about global warming, say, or ozone depletion, air pollution, toxic and radioactive wastes, acid rain, topsoil erosion, tropical deforestation, exponential population growth. Jobs and wages depend on science and technology. If our nation can’t manufacture, at high quality and low price, products people want to buy, then industries will continue to drift away and transfer a little more prosperity to other parts of the world. Consider the social ramifications of fission and fusion power, supercomputers, data “highways,” abortion, radon, massive reductions in strategic weapons, addiction, government eavesdropping on the lives of its citizens, high-resolution TV, airline and airport safety, fetal tissue transplants, health costs, food additives, drugs to ameliorate mania or depression or schizophrenia, animal rights, superconductivity, morning-after pills, alleged hereditary antisocial predispositions, space stations, going to Mars, finding cures for AIDS and cancer. How can we affect national policy—or even make intelligent decisions in our own lives—if we don’t grasp the underlying issues? As I write, Congress is dissolving its own Office of Technology Assessment—the only organization specifically tasked to provide advice to the House and Senate on science and technology. Its competence and integrity over the years have been exemplary. Of the 535 members of the U.S. Congress, rarely in the twentieth century have as many as one percent had any significant background in science. The last scientifically literate President may have been Thomas Jefferson.* So how do Americans decide these matters? How do they instruct their representatives? Who in fact makes these decisions, and on what basis? —
”
Carl Sagan (The Demon-Haunted World: Science as a Candle in the Dark)
“
At the first trans health conference I ever attended, a parent asked about long-term health risks for people taking hormones. The doctor gave a full assessment of issues that trans men face; many of them mimic the risks that would be inherited from father to son if they'd been born male, now that testosterone is a factor.
"What about trans women?" another parent asked.
The doctor took a deep breath. "Those outcomes are murkier. Because trans women are so discriminated against, they're at far greater risk for issues like alcoholism, poverty, homelessness, and lack of access to good healthcare. All of these issues impact their overall health so much that it's hard to gather data on what their health outcomes would be if these issues weren't present."
This was stunning: a group of people is treated so badly by our culture that we can't clearly study their health. The burden of this abuse is that substantial and pervasive. Your generation will be healthier. The signs are already there.
”
Carolyn Hays (A Girlhood: Letter to My Transgender Daughter)
“
Imagine you're sitting having dinner in a restaurant. At some point during the meal, your companion leans over and whispers that they've spotted Lady Gaga eating at the table opposite. Before having a look for yourself, you'll no doubt have some sense of how much you believe your friend's theory. You'll take into account all of your prior knowledge: perhaps the quality of the establishment, the distance you are from Gaga's home in Malibu, your friend's eyesight. That sort of thing. If pushed, it's a belief that you could put a number on. A probability of sorts. As you turn to look at the woman, you'll automatically use each piece of evidence in front of you to update your belief in your friend's hypothesis. Perhaps the platinum-blonde hair is consistent with what you would expect from Gaga, so your belief goes up. But the fact that she's sitting on her own with no bodyguards isn't, so your belief goes down. The point is, each new observation adds to your overall assessment. This is all Bayes' theorem does: offers a systematic way to update your belief in a hypothesis on the basis of the evidence. It accepts that you can't ever be completely certain about the theory you are considering, but allows you to make a best guess from the information available. So, once you realize the woman at the table opposite is wearing a dress made of meat -- a fashion choice that you're unlikely to chance upon in the non-Gaga population -- that might be enough to tip your belief over the threshold and lead you to conclude that it is indeed Lady Gaga in the restaurant. But Bayes' theorem isn't just an equation for the way humans already make decisions. It's much more important than that. To quote Sharon Bertsch McGrayne, author of The Theory That Would Not Die: 'Bayes runs counter to the deeply held conviction that modern science requires objectivity and precision. By providing a mechanism to measure your belief in something, Bayes allows you to draw sensible conclusions from sketchy observations, from messy, incomplete and approximate data -- even from ignorance.'
”
Hannah Fry (Hello World: Being Human in the Age of Algorithms)
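Fry's restaurant scene is Bayes' rule applied three times in a row: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|not H)P(not H)]. A minimal sketch of those updates, with every probability invented for illustration:

```python
# Minimal Bayes-rule sketch of the "is that Lady Gaga?" scene.
# All numbers are invented for illustration.

def update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | evidence) from the prior P(H) and the two likelihoods."""
    numerator = p_evidence_given_h * prior
    marginal = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / marginal

belief = 0.01  # prior: a 1% chance it's really Lady Gaga

# Platinum-blonde hair: expected for Gaga, less common otherwise -> belief rises.
belief = update(belief, p_evidence_given_h=0.9, p_evidence_given_not_h=0.1)

# No bodyguards: unlikely for Gaga, normal for anyone else -> belief falls.
belief = update(belief, p_evidence_given_h=0.05, p_evidence_given_not_h=0.95)

# A dress made of meat: vanishingly rare in the non-Gaga population.
belief = update(belief, p_evidence_given_h=0.2, p_evidence_given_not_h=1e-6)

print(f"posterior belief: {belief:.3f}")  # ends well above 0.9 after the meat dress
```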
“
There was little effort to conceal this method of doing business. It was common knowledge, from senior managers and heads of research and development to the people responsible for formulation and the clinical people. Essentially, Ranbaxy’s manufacturing standards boiled down to whatever the company could get away with. As Thakur knew from his years of training, a well-made drug is not one that passes its final test. Its quality must be assessed at each step of production and lies in all the data that accompanies it. Each of those test results, recorded along the way, helps to create an essential roadmap of quality. But because Ranbaxy was fixated on results, regulations and requirements were viewed with indifference. Good manufacturing practices were stop signs and inconvenient detours. So Ranbaxy was driving any way it chose to arrive at favorable results, then moving around road signs, rearranging traffic lights, and adjusting mileage after the fact. As the company’s head of analytical research would later tell an auditor: “It is not in Indian culture to record the data while we conduct our experiments.
”
Katherine Eban (Bottle of Lies: The Inside Story of the Generic Drug Boom)
“
Though Hoover conceded that some might deem him a “fanatic,” he reacted with fury to any violations of the rules. In the spring of 1925, when White was still based in Houston, Hoover expressed outrage to him that several agents in the San Francisco field office were drinking liquor. He immediately fired these agents and ordered White—who, unlike his brother Doc and many of the other Cowboys, wasn’t much of a drinker—to inform all of his personnel that they would meet a similar fate if caught using intoxicants. He told White, “I believe that when a man becomes a part of the forces of this Bureau he must so conduct himself as to remove the slightest possibility of causing criticism or attack upon the Bureau.” The new policies, which were collected into a thick manual, the bible of Hoover’s bureau, went beyond codes of conduct. They dictated how agents gathered and processed information. In the past, agents had filed reports by phone or telegram, or by briefing a superior in person. As a result, critical information, including entire case files, was often lost. Before joining the Justice Department, Hoover had been a clerk at the Library of Congress—“I’m sure he would be the Chief Librarian if he’d stayed with us,” a co-worker said—and Hoover had mastered how to classify reams of data using its Dewey decimal–like system. Hoover adopted a similar model, with its classifications and numbered subdivisions, to organize the bureau’s Central Files and General Indices. (Hoover’s “Personal File,” which included information that could be used to blackmail politicians, would be stored separately, in his secretary’s office.) Agents were now expected to standardize the way they filed their case reports, on single sheets of paper. This cut down not only on paperwork—another statistical measurement of efficiency—but also on the time it took for a prosecutor to assess whether a case should be pursued.
”
David Grann (Killers of the Flower Moon: The Osage Murders and the Birth of the FBI)
“
In 1942, Merton set out four scientific values, now known as the ‘Mertonian Norms’. None of them have snappy names, but all of them are good aspirations for scientists. First, universalism: scientific knowledge is scientific knowledge, no matter who comes up with it – so long as their methods for finding that knowledge are sound. The race, sex, age, gender, sexuality, income, social background, nationality, popularity, or any other status of a scientist should have no bearing on how their factual claims are assessed. You also can’t judge someone’s research based on what a pleasant or unpleasant person they are – which should come as a relief for some of my more disagreeable colleagues. Second, and relatedly, disinterestedness: scientists aren’t in it for the money, for political or ideological reasons, or to enhance their own ego or reputation (or the reputation of their university, country, or anything else). They’re in it to advance our understanding of the universe by discovering things and making things – full stop.20 As Charles Darwin once wrote, a scientist ‘ought to have no wishes, no affections, – a mere heart of stone.’
The next two norms remind us of the social nature of science. The third is communality: scientists should share knowledge with each other. This principle underlies the whole idea of publishing your results in a journal for others to see – we’re all in this together; we have to know the details of other scientists’ work so that we can assess and build on it. Lastly, there’s organised scepticism: nothing is sacred, and a scientific claim should never be accepted at face value. We should suspend judgement on any given finding until we’ve properly checked all the data and methodology. The most obvious embodiment of the norm of organised scepticism is peer review itself.
20. Robert K. Merton, ‘The Normative Structure of Science’ (1942), in The Sociology of Science: Empirical and Theoretical Investigations (Chicago and London: University of Chicago Press, 1973), pp. 267–278.
”
Stuart Ritchie (Science Fictions)
“
As Graedon scrutinized the FDA’s standards for bioequivalence and the data that companies had to submit, he found that generics were much less equivalent than commonly assumed. The FDA’s statistical formula that defined bioequivalence as a range—a generic drug’s concentration in the blood could not fall below 80 percent or rise above 125 percent of the brand name’s concentration, using a 90 percent confidence interval—still allowed for a potential outside range of 45 percent among generics labeled as being the same. Patients getting switched from one generic to another might be on the low end one day, the high end the next. The FDA allowed drug companies to use different additional ingredients, known as excipients, that could be of lower quality. Those differences could affect a drug’s bioavailability, the amount of drug potentially absorbed into the bloodstream. But there was another problem that really drew Graedon’s attention. Generic drug companies submitted the results of patients’ blood tests in the form of bioequivalence curves. The graphs consisted of a vertical axis called Cmax, which mapped the maximum concentration of drug in the blood, and a horizontal axis called Tmax, the time to maximum concentration. The resulting curve looked like an upside-down U. The FDA was using the highest point on that curve, peak drug concentration, to assess the rate of absorption into the blood. But peak drug concentration, the point at which the blood had absorbed the largest amount of drug, was a single number at one point in time. The FDA was using that point as a stand-in for “rate of absorption.” So long as the generic hit a similar peak of drug concentration in the blood as the brand name, it could be deemed bioequivalent, even if the two curves reflecting the time to that peak looked totally different. Two different curves indicated two entirely different experiences in the body, Graedon realized. The measurement of time to maximum concentration, the horizontal axis, was crucial for time-release drugs, which had not been widely available when the FDA first created its bioequivalence standard in 1992. That standard had not been meaningfully updated since then. “The time to Tmax can vary all over the place and they don’t give a damn,” Graedon emailed a reporter. That “seems pretty bizarre to us.” Though the FDA asserted that it wouldn’t approve generics with “clinically significant” differences in release rates, the agency didn’t disclose data filed by the companies, so it was impossible to know how dramatic the differences were.
”
Katherine Eban (Bottle of Lies: The Inside Story of the Generic Drug Boom)
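The two measurements Graedon focused on are easy to compute from a concentration-time series: Cmax is the curve's peak and Tmax is when that peak occurs. A simplified sketch with invented curves (the actual FDA test uses 90 percent confidence intervals on log-transformed measures, not the single point comparison shown here):

```python
# Cmax/Tmax from two invented concentration-time curves, plus a simplified
# version of the 80%-125% window the passage describes.
hours = [0, 1, 2, 3, 4, 6, 8, 12]
brand   = [0, 40, 75, 90, 80, 55, 30, 10]   # invented: peaks at 3 h
generic = [0, 70, 95, 80, 60, 35, 15, 5]    # invented: similar peak, much earlier

def cmax_tmax(curve):
    """Peak concentration and the time at which it occurs."""
    cmax = max(curve)
    return cmax, hours[curve.index(cmax)]

c_brand, t_brand = cmax_tmax(brand)
c_gen, t_gen = cmax_tmax(generic)

ratio = c_gen / c_brand
verdict = "within" if 0.80 <= ratio <= 1.25 else "outside"
print(f"brand:   Cmax = {c_brand}, Tmax = {t_brand} h")
print(f"generic: Cmax = {c_gen}, Tmax = {t_gen} h")
print(f"Cmax ratio = {ratio:.2f} -> {verdict} the 80%-125% window")
# Cmax passes (ratio ~1.06) even though Tmax differs (2 h vs 3 h):
# Graedon's point about two different experiences in the body.
```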
“
Henry, there’s something I would like to tell you, for what it’s worth, something I wish I had been told years ago. You’ve been a consultant for a long time, and you’ve dealt a great deal with top secret information. But you’re about to receive a whole slew of special clearances, maybe fifteen or twenty of them, that are higher than top secret.
I’ve had a number of these myself, and I’ve known other people who have just acquired them, and I have a pretty good sense of what the effects of receiving these clearances are on a person who didn’t previously know they even existed. And the effects of reading the information that they will make available to you.
First, you’ll be exhilarated by some of this new information, and by having it all—so much! incredible!—suddenly available to you. But second, almost as fast, you will feel like a fool for having studied, written, talked about these subjects, criticized and analyzed decisions made by presidents for years without having known of the existence of all this information, which presidents and others had and you didn’t, and which must have influenced their decisions in ways you couldn’t even guess. In particular, you’ll feel foolish for having literally rubbed shoulders for over a decade with some officials and consultants who did have access to all this information you didn’t know about and didn’t know they had, and you’ll be stunned that they kept that secret from you so well.
You will feel like a fool, and that will last for about two weeks. Then, after you’ve started reading all this daily intelligence input and become used to using what amounts to whole libraries of hidden information, which is much more closely held than mere top secret data, you will forget there ever was a time when you didn’t have it, and you’ll be aware only of the fact that you have it now and most others don’t … and that all those other people are fools.
Over a longer period of time—not too long, but a matter of two or three years—you’ll eventually become aware of the limitations of this information. There is a great deal that it doesn’t tell you, it’s often inaccurate, and it can lead you astray just as much as the New York Times can. But that takes a while to learn.
In the meantime it will have become very hard for you to learn from anybody who doesn’t have these clearances. Because you’ll be thinking as you listen to them: “What would this man be telling me if he knew what I know? Would he be giving me the same advice, or would it totally change his predictions and recommendations?” And that mental exercise is so torturous that after a while you give it up and just stop listening. I’ve seen this with my superiors, my colleagues … and with myself.
You will deal with a person who doesn’t have those clearances only from the point of view of what you want him to believe and what impression you want him to go away with, since you’ll have to lie carefully to him about what you know. In effect, you will have to manipulate him. You’ll give up trying to assess what he has to say. The danger is, you’ll become something like a moron. You’ll become incapable of learning from most people in the world, no matter how much experience they may have in their particular areas that may be much greater than yours.
”
Greg Grandin (Kissinger's Shadow: The Long Reach of America's Most Controversial Statesman)
“
Well before the end of the 20th century, however, print had lost its former dominance. This resulted in, among other things, a different kind of person getting elected as leader, one who can present himself and his programs in a polished way, as Lee Kuan Yew observed in 2000, adding, “Satellite television has allowed me to follow the American presidential campaign. I am amazed at the way media professionals can give a candidate a new image and transform him, at least superficially, into a different personality. Winning an election becomes, in large measure, a contest in packaging and advertising.” Just as the benefits of the print era were inextricable from its costs, so it is with the visual age. With screens in every home, entertainment is omnipresent and boredom a rarity. More substantively, injustice visualized is more visceral than injustice described. Television played a crucial role in the American civil rights movement, yet the costs of television are substantial, privileging emotional display over self-command, changing the kinds of people and arguments that are taken seriously in public life. The shift from print to visual culture continues with the contemporary entrenchment of the Internet and social media, which bring with them four biases that make it more difficult for leaders to develop their capabilities than in the age of print. These are immediacy, intensity, polarity, and conformity. Although the Internet makes news and data more immediately accessible than ever, this surfeit of information has hardly made us individually more knowledgeable, let alone wiser. As the cost of accessing information becomes negligible, as with the Internet, the incentives to remember it seem to weaken. While forgetting any one fact may not matter, the systematic failure to internalize information brings about a change in perception, and a weakening of analytical ability. Facts are rarely self-explanatory; their significance and interpretation depend on context and relevance. For information to be transmuted into something approaching wisdom, it must be placed within a broader context of history and experience. As a general rule, images speak at a more emotional register of intensity than do words. Television and social media rely on images that inflame the passions, threatening to overwhelm leadership with the combination of personal and mass emotion. Social media, in particular, have encouraged users to become image-conscious spin doctors. All this engenders a more populist politics that celebrates utterances perceived to be authentic over the polished sound bites of the television era, not to mention the more analytical output of print. The architects of the Internet thought of their invention as an ingenious means of connecting the world. In reality, it has also yielded a new way to divide humanity into warring tribes. Polarity and conformity rely upon, and reinforce, each other. One is shunted into a group, and then the group polices one's thinking. Small wonder that on many contemporary social media platforms, users are divided into followers and influencers. There are no leaders. What are the consequences for leadership? In our present circumstances, Lee's gloomy assessment of visual media's effects is relevant: “From such a process, I doubt if a Churchill or Roosevelt or a de Gaulle can emerge.”
It is not that changes in communications technology have made inspired leadership and deep thinking about world order impossible, but that in an age dominated by television and the Internet, thoughtful leaders must struggle against the tide.
”
Henry Kissinger (Leadership : Six Studies in World Strategy)
“
it is not uncommon for experts in DNA analysis to testify at a criminal trial that a DNA sample taken from a crime scene matches that taken from a suspect. How certain are such matches? When DNA evidence was first introduced, a number of experts testified that false positives are impossible in DNA testing. Today DNA experts regularly testify that the odds of a random person’s matching the crime sample are less than 1 in 1 million or 1 in 1 billion. With those odds one could hardly blame a juror for thinking, throw away the key. But there is another statistic that is often not presented to the jury, one having to do with the fact that labs make errors, for instance, in collecting or handling a sample, by accidentally mixing or swapping samples, or by misinterpreting or incorrectly reporting results. Each of these errors is rare but not nearly as rare as a random match. The Philadelphia City Crime Laboratory, for instance, admitted that it had swapped the reference sample of the defendant and the victim in a rape case, and a testing firm called Cellmark Diagnostics admitted a similar error.20 Unfortunately, the power of statistics relating to DNA presented in court is such that in Oklahoma a court sentenced a man named Timothy Durham to more than 3,100 years in prison even though eleven witnesses had placed him in another state at the time of the crime. It turned out that in the initial analysis the lab had failed to completely separate the DNA of the rapist and that of the victim in the fluid they tested, and the combination of the victim’s and the rapist’s DNA produced a positive result when compared with Durham’s. A later retest turned up the error, and Durham was released after spending nearly four years in prison.21 Estimates of the error rate due to human causes vary, but many experts put it at around 1 percent. However, since the error rate of many labs has never been measured, courts often do not allow testimony on this overall statistic. Even if courts did allow testimony regarding false positives, how would jurors assess it? Most jurors assume that given the two types of error—the 1 in 1 billion accidental match and the 1 in 100 lab-error match—the overall error rate must be somewhere in between, say 1 in 500 million, which is still for most jurors beyond a reasonable doubt. But employing the laws of probability, we find a much different answer. The way to think of it is this: Since both errors are very unlikely, we can ignore the possibility that there is both an accidental match and a lab error. Therefore, we seek the probability that one error or the other occurred. That is given by our sum rule: it is the probability of a lab error (1 in 100) + the probability of an accidental match (1 in 1 billion). Since the latter is 10 million times smaller than the former, to a very good approximation the chance of both errors is the same as the chance of the more probable error—that is, the chances are 1 in 100. Given both possible causes, therefore, we should ignore the fancy expert testimony about the odds of accidental matches and focus instead on the much higher laboratory error rate—the very data courts often do not allow attorneys to present! And so the oft-repeated claims of DNA infallibility are exaggerated.
”
Leonard Mlodinow (The Drunkard's Walk: How Randomness Rules Our Lives)
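Mlodinow's sum-rule approximation can be checked in a few lines, using the two error rates quoted in the passage:

```python
# The two error sources from the passage.
p_random_match = 1 / 1_000_000_000  # accidental DNA match: 1 in 1 billion
p_lab_error = 1 / 100               # human/lab error: roughly 1 in 100

# Exact probability of at least one error (inclusion-exclusion).
p_either_exact = p_random_match + p_lab_error - p_random_match * p_lab_error

# The approximation in the text: the joint event is negligible,
# so the sum rule reduces to adding the two probabilities.
p_either_approx = p_random_match + p_lab_error

print(f"exact:  {p_either_exact:.12f}")
print(f"approx: {p_either_approx:.12f}")
# Both are ~0.01: the 1-in-100 lab error swamps the 1-in-a-billion match,
# which is the passage's point about where jurors should focus.
```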
“
Simple Regression. CHAPTER OBJECTIVES: After reading this chapter, you should be able to (1) use simple regression to test the statistical significance of a bivariate relationship involving one dependent and one independent variable; (2) use Pearson’s correlation coefficient as a measure of association between two continuous variables; (3) interpret statistics associated with regression analysis; (4) write up the model of simple regression; and (5) assess assumptions of simple regression. This chapter completes our discussion of statistical techniques for studying relationships between two variables by focusing on those that are continuous. Several approaches are examined: simple regression; the Pearson’s correlation coefficient; and a nonparametric alternative, Spearman’s rank correlation coefficient. Although all three techniques can be used, we focus particularly on simple regression. Regression allows us to predict outcomes based on knowledge of an independent variable. It is also the foundation for studying relationships among three or more variables, including control variables mentioned in Chapter 2 on research design (and also in Appendix 10.1). Regression can also be used in time series analysis, discussed in Chapter 17. We begin with simple regression. SIMPLE REGRESSION: Let’s first look at an example. Say that you are a manager or analyst involved with a regional consortium of 15 local public agencies (in cities and counties) that provide low-income adults with health education about cardiovascular diseases, in an effort to reduce such diseases. The funding for this health education comes from a federal grant that requires annual analysis and performance outcome reporting. In Chapter 4, we used a logic model to specify that a performance outcome is the result of inputs, activities, and outputs. Following the development of such a model, you decide to conduct a survey among participants who attend such training events to collect data about the number of events they attended, their knowledge of cardiovascular disease, and a variety of habits such as smoking that are linked to cardiovascular disease. Some things that you might want to know are whether attending workshops increases
”
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
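A minimal sketch of the analysis the chapter sets up, using invented workshop-attendance data and scipy's linregress in place of the book's statistical software output:

```python
# Hypothetical data for the health-education example: workshops attended (x)
# versus a knowledge score (y). All values are invented for illustration.
from scipy import stats

workshops = [1, 2, 2, 3, 4, 4, 5, 6, 7, 8]
knowledge = [52, 55, 53, 60, 61, 65, 64, 70, 72, 75]

result = stats.linregress(workshops, knowledge)

print(f"slope b       = {result.slope:.2f}")      # change in score per workshop
print(f"intercept a   = {result.intercept:.2f}")
print(f"r             = {result.rvalue:.3f}")
print(f"R-squared     = {result.rvalue**2:.3f}")  # percent variation explained
print(f"p-value (b=0) = {result.pvalue:.4f}")     # test of the slope's significance
```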
“
(e). Hence the expressions are equivalent, as is y = ŷ + e. Certain assumptions about e are important, such as that it is normally distributed. When error term assumptions are violated, incorrect conclusions may be made about the statistical significance of relationships. This important issue is discussed in greater detail in Chapter 15 and, for time series data, in Chapter 17. Hence, the above is a pertinent but incomplete list of assumptions. Getting Started: Conduct a simple regression, and practice writing up your results. PEARSON’S CORRELATION COEFFICIENT: Pearson’s correlation coefficient, r, measures the association (significance, direction, and strength) between two continuous variables. Also called the Pearson’s product-moment correlation coefficient, it does not assume a causal relationship, as does simple regression. The correlation coefficient indicates the extent to which the observations lie closely or loosely clustered around the regression line. The coefficient r ranges from –1 to +1. The sign indicates the direction of the relationship, which, in simple regression, is always the same as the slope coefficient. A “–1” indicates a perfect negative relationship, that is, that all observations lie exactly on a downward-sloping regression line; a “+1” indicates a perfect positive relationship, whereby all observations lie exactly on an upward-sloping regression line. Of course, such values are rarely obtained in practice because observations seldom lie exactly on a line. An r value of zero indicates that observations are so widely scattered that it is impossible to draw any well-fitting line. Figure 14.2 illustrates some values of r. Key Point: Pearson’s correlation coefficient, r, ranges from –1 to +1. It is important to avoid confusion between Pearson’s correlation coefficient and the coefficient of determination. For the two-variable, simple regression model, r² = R², but whereas 0 ≤ R ≤ 1, r ranges from –1 to +1. Hence, the sign of r tells us whether a relationship is positive or negative, but the sign of R, in regression output tables such as Table 14.1, is always positive and cannot inform us about the direction of the relationship. In simple regression, the regression coefficient, b, informs us about the direction of the relationship. Statistical software programs usually show r rather than r². Note also that the Pearson’s correlation coefficient can be used only to assess the association between two continuous variables, whereas regression can be extended to deal with more than two variables, as discussed in Chapter 15. Pearson’s correlation coefficient assumes that both variables are normally distributed. When Pearson’s correlation coefficients are calculated, a standard error of r can be determined, which then allows us to test the statistical significance of the bivariate correlation. For bivariate relationships, this is the same level of significance as shown for the slope of the regression coefficient. For the variables given earlier in this chapter, the value of r is .272 and the statistical significance of r is p ≤ .01. Use of the Pearson’s correlation coefficient assumes that the variables are normally distributed and that there are no significant departures from linearity.7 It is important not to confuse the correlation coefficient, r, with the regression coefficient, b. Comparing the measures r and b (the slope) sometimes causes confusion.
The key point is that r does not indicate the regression slope but rather the extent to which observations lie close to it. A steep regression line (large b) can have observations scattered loosely or closely around it, as can a shallow (more horizontal) regression line. The purposes of these two statistics are very different.8 SPEARMAN’S RANK CORRELATION
”
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
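The distinction the passage draws (b is the line's steepness; r is how tightly the points cluster around it) shows up immediately if you fit two invented datasets that share a slope but differ in scatter:

```python
# Same slope, different scatter: b measures steepness, r measures clustering.
from scipy import stats

x = list(range(10))
noise = [4, -5, 3, -4, 2, 2, -4, 3, -5, 4]           # symmetric, sums to zero
tight = [2 * xi + 1 for xi in x]                     # points exactly on y = 2x + 1
loose = [2 * xi + 1 + e for xi, e in zip(x, noise)]  # same line plus scatter

for label, y in (("tight", tight), ("loose", loose)):
    fit = stats.linregress(x, y)
    r, _ = stats.pearsonr(x, y)
    print(f"{label}: b = {fit.slope:.2f}, r = {r:.2f}")
# tight: b = 2.00, r = 1.00
# loose: b = 2.00, r ~ 0.84 -- identical slope, looser cloud around the line
```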
“
to the measures described earlier. Hence, 90 percent of the variation in one variable can be explained by the other. For the variables given earlier, the Spearman’s rank correlation coefficient is .274 (p < .01), which is comparable to r reported in preceding sections. Box 14.1 illustrates another use of the statistics described in this chapter, in a study of the relationship between crime and poverty.
SUMMARY: When analysts examine relationships between two continuous variables, they can use simple regression or the Pearson’s correlation coefficient. Both measures show (1) the statistical significance of the relationship, (2) the direction of the relationship (that is, whether it is positive or negative), and (3) the strength of the relationship. Simple regression assumes a causal and linear relationship between the continuous variables. The statistical significance and direction of the slope coefficient is used to assess the statistical significance and direction of the relationship. The coefficient of determination, R², is used to assess the strength of relationships; R² is interpreted as the percent variation explained. Regression is a foundation for studying relationships involving three or more variables, such as control variables. The Pearson’s correlation coefficient does not assume causality between two continuous variables. A nonparametric alternative to testing the relationship between two continuous variables is the Spearman’s rank correlation coefficient, which examines correlation among the ranks of the data rather than among the values themselves. As such, this measure can also be used to study relationships in which one or both variables are ordinal.
KEY TERMS: coefficient of determination (R²); error term; observed value of y; Pearson’s correlation coefficient (r); predicted value of the dependent variable y (ŷ); regression coefficient; regression line; scatterplot; simple regression assumptions; Spearman’s rank correlation coefficient; standard error of the estimate; test of significance of the regression coefficient.
Notes: 1. See Chapter 3 for a definition of continuous variables. Although the distinction between ordinal and continuous is theoretical (namely, whether or not the distance between categories can be measured), in practice ordinal-level variables with seven or more categories (including Likert variables) are sometimes analyzed using statistics appropriate for interval-level variables. This practice has many critics because it violates an assumption of regression (interval data), but it is often
”
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
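A short sketch of the summary's nonparametric alternative: Spearman's coefficient correlates ranks rather than values, so a monotone but nonlinear relationship (invented here) earns a perfect rho while Pearson's r falls short of 1:

```python
# Pearson vs Spearman on a monotone but nonlinear relationship.
from scipy import stats

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [xi ** 3 for xi in x]  # strictly increasing, but far from linear

r, p_r = stats.pearsonr(x, y)
rho, p_rho = stats.spearmanr(x, y)

print(f"Pearson  r   = {r:.3f} (p = {p_r:.4f})")    # below 1: penalized for curvature
print(f"Spearman rho = {rho:.3f} (p = {p_rho:.4f})")  # exactly 1: the ranks agree perfectly
```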
“
other and distinct from other groups. These techniques usually precede regression and other analyses. Factor analysis is a well-established technique that often aids in creating index variables. Earlier, Chapter 3 discussed the use of Cronbach alpha to empirically justify the selection of variables that make up an index. However, in that approach analysts must still justify that variables used in different index variables are indeed distinct. By contrast, factor analysis analyzes a large number of variables (often 20 to 30) and classifies them into groups based on empirical similarities and dissimilarities. This empirical assessment can aid analysts’ judgments regarding variables that might be grouped together. Factor analysis uses correlations among variables to identify subgroups. These subgroups (called factors) are characterized by relatively high within-group correlation among variables and low between-group correlation among variables. Most factor analysis consists of roughly four steps: (1) determining that the group of variables has enough correlation to allow for factor analysis, (2) determining how many factors should be used for classifying (or grouping) the variables, (3) improving the interpretation of correlations and factors (through a process called rotation), and (4) naming the factors and, possibly, creating index variables for subsequent analysis. Most factor analysis is used for grouping of variables (R-type factor analysis) rather than observations (Q-type). Often, discriminant analysis is used for grouping of observations, mentioned later in this chapter. The terminology of factor analysis differs greatly from that used elsewhere in this book, and the discussion that follows is offered as an aid in understanding tables that might be encountered in research that uses this technique. An important task in factor analysis is determining how many common factors should be identified. Theoretically, there are as many factors as variables, but only a few factors account for most of the variance in the data. The percentage of variation explained by each factor is defined as the eigenvalue divided by the number of variables, whereby the
”
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
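The rule at the end of the passage (percent of variance explained by a factor equals its eigenvalue divided by the number of variables) can be sketched directly from a correlation matrix. Six invented survey items built from two latent factors:

```python
# Eigenvalues of a correlation matrix, and percent variance per factor.
# Six hypothetical survey items: items 0-2 share one latent factor,
# items 3-5 share another. All data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 200
f1 = rng.normal(size=n)  # latent factor behind items 0-2
f2 = rng.normal(size=n)  # latent factor behind items 3-5
items = np.column_stack(
    [f1 + 0.5 * rng.normal(size=n) for _ in range(3)]
    + [f2 + 0.5 * rng.normal(size=n) for _ in range(3)]
)

corr = np.corrcoef(items, rowvar=False)       # 6 x 6 correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted, largest first

for i, ev in enumerate(eigenvalues, start=1):
    share = 100 * ev / items.shape[1]         # eigenvalue / number of variables
    print(f"factor {i}: eigenvalue {ev:.2f}, {share:.1f}% of variance")
# The first two eigenvalues dominate: two factors carry most of the variance,
# matching the two groups the data were built from.
```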
“
Remedies exist for correcting substantial departures from normality, but these remedies may make matters worse when departures from normality are minimal. The first course of action is to identify and remove any outliers that may affect the mean and standard deviation. The second course of action is variable transformation, which involves transforming the variable, often by taking log(x) of each observation, and then testing the transformed variable for normality. Variable transformation may address excessive skewness by adjusting the measurement scale, thereby helping variables to better approximate normality.8 Substantively, we strongly prefer to make conclusions that satisfy test assumptions, regardless of which measurement scale is chosen.9 Keep in mind that when variables are transformed, the units in which results are expressed are transformed, as well. An example of variable transformation is provided in the second working example. Typically, analysts have different ways to address test violations. Examination of the causes of assumption violations often helps analysts to better understand their data. Different approaches may be successful for addressing test assumptions. Analysts should not merely go by the result of one approach that supports their case, ignoring others that perhaps do not. Rather, analysts should rely on the weight of robust, converging results to support their final test conclusions. Working Example 1: Earlier we discussed efforts to reduce high school violence by enrolling violence-prone students into classes that address anger management. Now, after some time, administrators and managers want to know whether the program is effective. As part of this assessment, students are asked to report their perception of safety at school. An index variable is constructed from different items measuring safety (see Chapter 3). Each item is measured on a seven-point Likert scale (1 = strongly disagree to 7 = strongly agree), and the index is constructed such that a high value indicates that students feel safe.10 The survey was initially administered at the beginning of the program. Now, almost a year later, the survey is implemented again.11 Administrators want to know whether students who did not participate in the anger management program feel that the climate is now safer. The analysis included here focuses on 10th graders. For practical purposes, the samples of 10th graders at the beginning of the program and one year later are regarded as independent samples; the subjects are not matched. Descriptive analysis shows that the mean perception of
”
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
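A sketch of the transformation remedy described above: test the raw variable for normality, take log(x), and retest. The values are invented to be right-skewed, so the logged version should sit closer to normal:

```python
# Log-transforming a right-skewed variable and re-testing normality
# with the Shapiro-Wilk test. Values are invented for illustration.
import math
from scipy import stats

raw = [1.2, 1.5, 2.1, 2.4, 3.0, 3.8, 4.9, 6.5, 9.8, 15.2, 24.7, 41.3]

w_raw, p_raw = stats.shapiro(raw)
logged = [math.log(v) for v in raw]
w_log, p_log = stats.shapiro(logged)

print(f"raw:    W = {w_raw:.3f}, p = {p_raw:.4f}")  # heavy right tail: expect a small p
print(f"logged: W = {w_log:.3f}, p = {p_log:.4f}")  # closer to normal: expect a larger p
# Remember the passage's caveat: results are now expressed in log units.
```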
“
12.2. The transformed variable has equal variances across the two groups (Levene’s test, p = .119), and the t-test statistic is –1.308 (df = 85, p = .194). Thus, the differences in pollution between watersheds in the East and Midwest are not significant. (The negative sign of the t-test statistic, –1.308, merely reflects the order of the groups for calculating the difference: the testing variable has a larger value in the Midwest than in the East. Reversing the order of the groups results in a positive sign.)
Table 12.2: Independent-Samples T-Test Output
For comparison, results for the untransformed variable are shown as well. The untransformed variable has unequal variances across the two groups (Levene’s test, p = .036), and the t-test statistic is –1.801 (df = 80.6, p = .075). Although this result also shows that differences are insignificant, the level of significance is higher; there are instances in which using nonnormal variables could lead to rejecting the null hypothesis. While our finding of insignificant differences is indeed robust, analysts cannot know this in advance. Thus, analysts will need to deal with nonnormality. Variable transformation is one approach to the problem of nonnormality, but transforming variables can be a time-intensive and somewhat artful activity. The search for alternatives has led many analysts to consider nonparametric methods. TWO T-TEST VARIATIONS: Paired-Samples T-Test. Analysts often use the paired t-test when applying before and after tests to assess student or client progress. Paired t-tests are used when analysts have a dependent rather than an independent sample (see the third t-test assumption, described earlier in this chapter). The paired-samples t-test tests the null hypothesis that the mean difference between the before and after test scores is zero. Consider the following data from Table 12.3.
Table 12.3: Paired-Samples Data
The mean “before” score is 3.39, and the mean “after” score is 3.87; the mean difference is 0.48. The paired t-test tests the null hypothesis by testing whether the mean of the difference variable (“difference”) is zero. The paired t-test statistic is calculated as t = D̄ / (sD/√n), where D is the difference between before and after measurements, sD is the standard deviation of these differences, and n is the number of pairs. Regarding t-test assumptions, the variables are continuous, and the issue of heterogeneity (unequal variances) is moot because this test involves only one variable, D; no Levene’s test statistics are produced. We do test the normality of D and find that it is normally distributed (Shapiro-Wilk = .925, p = .402). Thus, the assumptions are satisfied. We proceed with testing whether the difference between before and after scores is statistically significant. We find that the paired t-test yields a t-test statistic of 2.43, which is significant at the 5 percent level (df = 9, p = .038 < .05).17 Hence, we conclude that the increase between the before and after scores is significant at the 5 percent level.18 One-Sample T-Test: Finally, the one-sample t-test tests whether the mean of a single variable is different from a prespecified value (norm). For example, suppose we want to know whether the mean of the before group in Table 12.3 is different from the value of, say, 3.5. Testing against a norm is akin to the purpose of the chi-square goodness-of-fit test described in Chapter 11, but here we are dealing with a continuous variable rather than a categorical one, and we are testing the mean rather than its distribution.
The one-sample t-test assumes that the single variable is continuous and normally distributed. As with the paired t-test, the issue of heterogeneity is moot because there is only one variable. The Shapiro-Wilk test shows that the variable “before” is normal (.917, p = .336). The one-sample t-test statistic for testing against the test value of 3.5 is –0.515 (df = 9, p = .619 > .05). Hence, the mean of 3.39 is not significantly
”
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
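The paired t statistic reconstructed above, t = D̄/(sD/√n), is short enough to compute by hand and check against scipy. The before/after scores here are invented stand-ins, not the book's Table 12.3:

```python
# Paired t-test by hand and via scipy: t = mean(D) / (sd(D) / sqrt(n)).
# Scores are invented; the book's actual table is not reproduced in the excerpt.
import math
from scipy import stats

before = [3.1, 3.4, 2.9, 3.6, 3.3, 3.8, 3.2, 3.5, 3.6, 3.5]
after  = [3.6, 3.9, 3.4, 4.1, 3.7, 4.3, 3.5, 4.0, 4.2, 4.0]

diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t_by_hand = mean_d / (sd_d / math.sqrt(n))

t_scipy, p = stats.ttest_rel(after, before)  # same test, library version
print(f"by hand: t = {t_by_hand:.3f}")
print(f"scipy:   t = {t_scipy:.3f}, p = {p:.4f} (df = {n - 1})")
```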
“
Putting your house in order is fun! The process of assessing how you feel about the things you own, identifying those that have fulfilled their purpose, expressing your gratitude, and bidding them farewell, is really about examining your inner self, a rite of passage to a new life. The yardstick by which you judge is your intuitive sense of attraction, and therefore there’s no need for complex theories or numerical data. All you need to do is follow the right order.
”
Marie Kondō (The Life-Changing Magic of Tidying Up: The Japanese Art of Decluttering and Organizing (Magic Cleaning #1))
“
why has almost everyone done the calendar thing, but almost no one has moved everything else in their life into a similar zone, by capturing it all and creating the habit of assessing it all appropriately? Three reasons: First, the data that is entered onto a calendar has already been thought through and determined; it’s been translated down to the physical action level. You agreed to call Jim at noon on Monday: there is no more thinking required about what the appropriate action is, or where and when you’re going to do it. Second, you know where those kinds of actions need to be parked (calendar), and it’s a familiar and available tool. And third, if you lose track of calendar actions and commitments, you will encounter obvious and rapid negative feedback from people you consider important.
”
David Allen (Making It All Work: Winning At The Game Of Work And The Business Of Life)
“
Beyond objective assessment data, there is subjective information that best comes from the school professionals who work with the students every day. These observational data are vital to identifying students for additional help and determining why each student is struggling. For this reason, the third way a school should identify students for additional support is to create a systematic and timely process for staff to recommend and discuss students who need help.
”
Austin Buffum (Simplifying Response to Intervention: Four Essential Guiding Principles (What Principals Need to Know))
“
I recommend you do a detailed time study for yourself to see where you spend your time. Make an estimate of how many hours each week you take for the major activities of your life: work, school, rest, entertainment, hobbies, spouse, children, commuting, church, God, friends, and so on. Then, over a typical period of your life, take two weeks and do a detailed time study. Keep track of how you spend your time, using fifteen- to thirty-minute increments. After you have gathered the raw data, categorize them carefully into the major groups: rest, work/school, church/God, family, and recreation. Create subcategories as appropriate for anything that might consume multiple hours per week, like listing commuting under work or TV under recreation. Finally, with the summary in hand, make the difficult assessments about how you are using your time. Ask yourself:
• Any surprises? Areas where I just couldn’t imagine I was wasting—er, uh, um, spending—so much of my time?
• Is this where I want my time to go?
• Am I putting as much time as I’d like into the areas I want as the priorities in my life?
• How much time am I really spending with my spouse? Children? Friends?
• Did I realize how much time I was spending at work?
• If I wanted to spend more time on XYZ or ABC, in what areas would I consciously choose to spend less time?
”
”
Pat Gelsinger (The Juggling Act: Bringing Balance to Your Faith, Family, and Work)
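If it helps to see the tallying step concretely, here is a minimal Python sketch of the categorize-and-summarize step Gelsinger describes. The log entries, the category map, and the group roll-ups are illustrative assumptions, not data from the book.

```python
from collections import defaultdict

# Hypothetical time-log entries: (activity, hours) pairs, recorded in the
# fifteen- to thirty-minute increments the quote suggests.
log = [
    ("commuting", 0.5), ("work", 4.0), ("TV", 1.5),
    ("church", 2.0), ("children", 1.0), ("work", 3.5),
]

# Roll each activity up into one of the major groups; subcategories such as
# commuting (under work) or TV (under recreation) follow the quote's examples.
CATEGORY = {
    "work": "work/school", "commuting": "work/school",
    "TV": "recreation", "hobbies": "recreation",
    "church": "church/God", "children": "family", "sleep": "rest",
}

totals = defaultdict(float)
for activity, hours in log:
    totals[CATEGORY.get(activity, "other")] += hours

# The summary to assess: hours per major group, largest first.
for group, hours in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{group:12s} {hours:5.1f} h")
```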
“
researchers analyzed data on more than six thousand children in Hong Kong, where smoking is not confined to those in lower economic brackets and where most smokers are men. The children were assessed when they were seven years old and again when they were eleven. Those whose fathers smoked when the mothers were pregnant were more likely to be overweight or obese. It was the first evidence supporting the idea that childhood obesity could be affected by a mother’s exposure to her husband’s smoking while she was pregnant.
”
”
Paul Raeburn (Do Fathers Matter?: What Science Is Telling Us About the Parent We've Overlooked)
“
The three tenets of upstream data are: data management, quantification of uncertainty, and risk assessment.
”
”
Keith Holdaway (Harness Oil and Gas Big Data with Analytics: Optimize Exploration and Production with Data-Driven Models (Wiley and SAS Business Series))
“
In the first part of this work, we examined the impact of using a dump or slice style entry on officer performance. We found that, compared to the slice conditions, officers took approximately twice as long to respond to a second gunman in the dump conditions. Once the officers in the dump conditions detected the second gunman in the room, they were almost 5 times more likely to violate the universal firearms safety rules and commit a priority of fire violation. The first officer also momentarily stalled in the doorway during 18% of the dump entries but never stalled during a slice entry. We did observe more instances of the officers in the slice entry shooting at the innocent suspect in the room, but this difference was not large enough to be confident that it was not the product of chance assignment error. Taken together, we argued that the data suggested that the slice was a better entry style than the dump to teach patrol officers.
”
”
Pete J. Blair (Evaluating Police Tactics: An Empirical Assessment of Room Entry Techniques (Real World Criminology))
“
Staff will need to receive adequate training from IT staff on ways to effectively use databases. Beyond the ability to navigate a database system, staff will need additional skills such as developing queries or using spreadsheets to analyze and present data. In other words, rather than simply concentrating on the business functions that technology supports, student affairs staff should integrate assessment functions into their understanding of technology tools.
”
”
John H. Schuh (Assessment Methods for Student Affairs)
“
Unless a school has clearly identified the essential standards that every student must master, as well as unwrapped the standards into specific student learning targets, it would be nearly impossible to have the curricular focus and targeted assessment data necessary to target interventions to this level.
”
”
Austin Buffum (Simplifying Response to Intervention: Four Essential Guiding Principles (What Principals Need to Know))
“
The answer to information asymmetry is not always the provision of more information, especially when most of this ‘information’ is simply noise, or boilerplate (standardised documentation bolted on to every report). Companies justifiably complain about the ever-increasing volume of data they are required to produce, while users of accounting find less and less of relevance in them. The notion that all investors have, or could have, identical access to corporate data is a fantasy, but the attempt to make it a reality generates a raft of regulation which inhibits engagement between companies and their investors and impedes the collection of substantive information that is helpful in assessing the fundamental value of securities. In the terms popularised by the American computer scientist Clifford Stoll, ‘data is not information, information is not knowledge, knowledge is not understanding, understanding is not wisdom’.9
”
”
John Kay (Other People's Money: The Real Business of Finance)
“
FLATOW: So you would - how would you treat a patient like Sybil if she showed up in your office?
BRAND: Well, first I would start with a very thorough assessment, using the current standardized measures that we have available to us that assess for the range of dissociative disorders but also the whole range of other psychological disorders, too. I would need to know what I'm working with, and I'd be very careful and make my decisions slowly, based on data about what she has. And furthermore, with therapists who are well-trained in dissociative disorders, we do keep an eye open for suggestibility. But that research, too, is not anywhere near as strong as what the other two people in the interview are suggesting. It shows - for example, there's eight studies that have a total of 11 samples. In the three clinical samples that have looked at the correlation between dissociation and suggestibility, all three clinical samples found non-significant correlations. So it's just not as strong as what people think. That's a myth that's not backed up by science.
Exploring Multiple Personalities In 'Sybil Exposed' October 21, 2011 by Ira Flatow
”
”
Bethany L. Brand
“
One of the reasons for its success is that science has built-in, error-correcting machinery at its very heart. Some may consider this an overbroad characterization, but to me every time we exercise self-criticism, every time we test our ideas against the outside world, we are doing science. When we are self-indulgent and uncritical, when we confuse hopes and facts, we slide into pseudoscience and superstition. Every time a scientific paper presents a bit of data, it's accompanied by an error bar - a quiet but insistent reminder that no knowledge is complete or perfect. It's a calibration of how much we trust what we think we know. If the error bars are small, the accuracy of our empirical knowledge is high; if the error bars are large, then so is the uncertainty in our knowledge. Except in pure mathematics nothing is known for certain (although much is certainly false). Moreover, scientists are usually careful to characterize the veridical status of their attempts to understand the world - ranging from conjectures and hypotheses, which are highly tentative, all the way up to laws of Nature which are repeatedly and systematically confirmed through many interrogations of how the world works. But even laws of Nature are not absolutely certain. There may be new circumstances never before examined - inside black holes, say, or within the electron, or close to the speed of light - where even our vaunted laws of Nature break down and, however valid they may be in ordinary circumstances, need correction. Humans may crave absolute certainty; they may aspire to it; they may pretend, as partisans of certain religions do, to have attained it. But the history of science - by far the most successful claim to knowledge accessible to humans - teaches that the most we can hope for is successive improvement in our understanding, learning from our mistakes, an asymptotic approach to the Universe, but with the proviso that absolute certainty will always elude us. We will always be mired in error. The most each generation can hope for is to reduce the error bars a little, and to add to the body of data to which error bars apply. The error bar is a pervasive, visible self-assessment of the reliability of our knowledge.
”
”
Anonymous
“
Another vital component of the UDL is the constant flow of data from student work. Daily tracking for each lesson, as well as mid- and end-of-module assessment tasks, is essential for determining students’ understandings at benchmark points. Such data flow keeps teaching practice firmly grounded in student learning and makes incremental progress possible. When feedback is provided, students understand that making mistakes is part of the learning process.
”
”
Peggy Grant (Personalized Learning: A Guide to Engaging Students with Technology)
“
Without assessment, acceleration will be a mirage.
”
”
Wisdom Kwashie Mensah (THE HONEYMOON: A SACRED AND UNFORGETTABLE SAVOUR OF A BLISSFUL MARITAL JOURNEY)
“
Are damaging data practices and systems capable of reform?
Re-evaluate your relationship to data and assess whether existing practices and systems are capable of reform. If reform seems possible, question who is best placed to undertake this work. When reform fails, or efforts to reform risk keeping a damaging system alive for longer, consider if an abolitionist approach might put data in the hands of those most in need.
”
”
Kevin Guyan (Queer Data: Using Gender, Sex and Sexuality Data for Action (Bloomsbury Studies in Digital Cultures))
“
Does your project create more good than harm? And for whom?
Assess what your project intends to achieve and its potential to cause harm; only continue when the potential benefits outweigh the potential dangers. Disaggregate the differential impacts among LGBTQ people to ensure that the project does not only benefit the least marginalized individuals, for whom sexual orientation is the only characteristic that excludes them from full inclusion.
”
”
Kevin Guyan (Queer Data: Using Gender, Sex and Sexuality Data for Action (Bloomsbury Studies in Digital Cultures))
“
Educators’ lives are filled with opportunities to develop their own social awareness during student and adult interactions. They participate in work groups, such as co-teaching, professional learning programs, faculty meetings, team meetings, data analysis teams, developing common assessments, lesson-study groups, and curriculum development committees. The checklist in the figure below can be modified to fit any type of group activity. It can be reviewed by the supervisor or coach and the educator prior to the activity. After the activity, the educator can be asked to confidentially self-assess his or her skills, thereby increasing self-awareness of his/her relationship skills and self-management skills.
”
”
William Ribas (Social-Emotional Learning in the Classroom second edition: Practice Guide for Integrating All SEL Skills into Instruction and Classroom Management)
“
American pragmatist philosopher Charles Sanders Peirce’s observation that no new idea in the history of the world has been proven in advance analytically, which means that if you insist on rigorous proof of the merits of an idea during its development, you will kill it if it is truly a breakthrough idea, because there will be no proof of its breakthrough characteristics in advance. If you are going to screen innovation projects, therefore, a better model is one that has you assess them on the strength of their logic—the theory of why the idea is a good one—not on the strength of the existing data. Then, as you get further into each project that passes the logic test, you need to look for ways to create data that enables you to test and adjust—or perhaps kill—the idea as you develop it.
”
”
Roger L. Martin (A New Way to Think: Your Guide to Superior Management Effectiveness)
“
This graph shows all the observations together with a line that represents the fitted relationship. As is traditional, the Y-axis displays the dependent variable, which is weight. The X-axis shows the independent variable, which is height. The line is the fitted line. If you enter the full range of height values that are on the X-axis into the regression equation that the chart displays, you will obtain the line shown on the graph. This line produces a smaller SSE than any other line you can draw through these observations. Visually, we see that the fitted line has a positive slope that corresponds to the positive correlation we obtained earlier. The line follows the data points, which indicates that the model fits the data. The slope of the line equals the coefficient that I circled. This coefficient indicates how much mean weight tends to increase as we increase height. We can also enter a height value into the equation and obtain a prediction for the mean weight. Each point on the fitted line represents the mean weight for a given height. However, like any mean, there is variability around the mean. Notice how there is a spread of data points around the line. You can assess this variability by picking a spot on the line and observing the range of data points above and below that point. Finally, the vertical distance between each data point and the line is the residual for that observation.
”
”
Jim Frost (Regression Analysis: An Intuitive Guide for Using and Interpreting Linear Models)
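A minimal sketch of the mechanics this passage describes (fitted line, slope coefficient, prediction of mean weight, residuals, and the SSE the line minimizes), using NumPy's least-squares fit; the height and weight values are invented for demonstration.

```python
import numpy as np

# Invented (height cm, weight kg) observations for illustration.
height = np.array([152.0, 160.0, 165.0, 170.0, 175.0, 180.0, 188.0])
weight = np.array([51.0, 55.0, 58.0, 64.0, 66.0, 71.0, 77.0])

# Degree-1 least-squares fit; the slope is the coefficient the quote
# describes, and this line minimizes the SSE over all possible lines.
slope, intercept = np.polyfit(height, weight, 1)

fitted = intercept + slope * height   # points on the fitted line (mean weights)
residuals = weight - fitted           # vertical distance of each point from the line
sse = float(np.sum(residuals**2))     # the quantity the fitted line minimizes

print(f"weight = {intercept:.1f} + {slope:.2f} * height   (SSE = {sse:.1f})")
print(f"predicted mean weight at 172 cm: {intercept + slope * 172:.1f} kg")
```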
“
Censoring Measuring response rate and progression offers more problems than measurement error. We may not be considering the right denominator of patients. In 2017, the FDA approved the first cellular cancer therapy, called tisagenlecleucel (Kymriah, Novartis), or CAR-T, for short. A CAR-T is a chimeric antigen receptor T-cell, basically a genetically modified cell taken from a patient that is trained to attack cancer cells and then placed back in the patient. In the data submitted to the FDA, 88 patients had the cells removed, but 18% (16/88) did not receive the cells because some patients died and some patients’ cells could not be manufactured.17 Unfortunately, the FDA excluded these patients from the denominator and assessed response only in patients who got the cells. This violates a principle called intention to treat, that is, you should judge a drug based on all patients allocated to get it, irrespective of whether or not they received it. Why? Because therapies that take a long time to give (this CAR-T took approximately 22 days to make) may exclude the sickest patients who die while waiting, thus distorting their benefit. In fact, if I have a patient in my office and we decide to treat with tisagenlecleucel, the response rate from the package overestimates her chances of success, as I am unsure she will live long enough to receive the cells.
”
”
Vinayak K. Prasad (Malignant: How Bad Policy and Bad Evidence Harm People with Cancer)
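The denominator effect here is simple arithmetic. In the sketch below the allocation counts come from the passage, but the responder count is hypothetical, chosen only to show how excluding never-treated patients inflates the reported rate.

```python
# Counts from the passage: 88 patients had cells collected; 16 of them (18%)
# never received the cells and were excluded from the reported denominator.
allocated = 88
never_received = 16
treated = allocated - never_received   # 72 patients actually infused

responders = 59  # hypothetical responder count, for illustration only

per_protocol = responders / treated          # rate when the 16 are excluded
intention_to_treat = responders / allocated  # rate over everyone allocated

print(f"per-protocol response rate:       {per_protocol:.0%}")       # 82%
print(f"intention-to-treat response rate: {intention_to_treat:.0%}")  # 67%
```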
“
Project evaluation is a critical task in the management of a project, undertaken primarily to assess and maximize project outcomes. Systematic data collection is integrated with the evaluation process, which may be conducted at various project phases; the evaluation approach depends upon the type of project, the project vision, the provisions, the timeline, and the phase of the project.
”
”
Henrietta Newton Martin
“
GCMs perform poorly when their projections are assessed against empirical data.
”
”
Craig D. Idso (Why Scientists Disagree About Global Warming: The NIPCC Report on Scientific Consensus)
“
Identify Your Strengths With Strengths Finder 2.0
One tool that can help you remember your achievements is the ‘Strengths Finder’ assessment. The father of Strengths Psychology, Donald O. Clifton, Ph.D., along with Tom Rath and a team of scientists at The Gallup Organization, created StrengthsFinder.
You can take this assessment by purchasing the Strengths Finder 2.0 book.
The value of SF 2.0 is that it helps you understand your unique strengths. Once you have this knowledge, you can review past activities and understand what these strengths enabled you to do.
Here’s what I mean: in the paragraphs below, I’ve listed some of the strengths identified by my Strengths Finder assessment, along with accomplishments where these strengths were used.
“You can see repercussions more clearly than others can.”
In a prior role, I witnessed products being implemented in the sales system at breakneck speed. While quick implementation seemed good, I knew speed increased the likelihood of revenue impacting errors.
I conducted an audit and uncovered a misconfigured product. While the customer had paid for the product, the revenue had never been recognized. As a result of my work, we were able to add another $7.2 million that went straight to the bottom line.
“You automatically pinpoint trends, notice problems, or identify opportunities many people overlook.”
At my former employer, leadership did not audit certain product manager decisions. On my own initiative, I instituted an auditing process. This led to the discovery that one product manager’s decisions cost the company more than $5M.
“Because of your strengths, you can reconfigure factual information or data in ways that reveal trends, raise issues, identify opportunities, or offer solutions.”
In a former position, product managers were responsible for driving revenue, yet there was no revenue reporting at the product level. After researching the issue, I found a report used to process monthly journal entries which when reconfigured, provided product managers with monthly product revenue.
“You entertain ideas about the best ways to…increase productivity.”
A few years back, I was trained by the former Operations Manager when I took on that role. After examining the tasks, I found I could reduce the time to perform the role by 66%. As a result, I was able to tell my Director I could take on some of the responsibilities of the two managers she had to let go.
“You entertain ideas about the best ways to…solve a problem.”
About twenty years ago I worked for a division where legacy systems were being replaced by a new company-wide ERP system. When I discovered no one had budgeted for training in my department, I took it upon myself to identify how to extract the data my department needed to perform its role, documented those learnings and that became the basis for a two day training class.
“Sorting through lots of information rarely intimidates you. You welcome the abundance of information. Like a detective, you sort through it and identify key pieces of evidence. Following these leads, you bring the big picture into view.”
I am listing these strengths to help you see the value of taking the Strengths Finder Assessment.
”
”
Clark Finnical
“
When assessing and prioritizing the opportunity space, it’s important that we find the right balance between being data-informed and not getting stuck in analysis paralysis. It’s easy to fall into the trap of wanting more data, spending just a little bit more time, trying to get to a more perfect decision. However, we’ll learn more by making a decision and then seeing the consequences of having made that decision than we will from trying to think our way to the perfect decision. Jeff Bezos, founder and CEO of Amazon, made this exact argument in his 2015 letter to shareholders,33 where he introduced the idea of Level 1 and Level 2 decisions. He describes a Level 1 decision as one that is hard to reverse, whereas a Level 2 decision is one that is easy to reverse. Bezos argues that we should be slow and cautious when making Level 1 decisions, but that we should move fast and not wait for perfect data when making Level 2 decisions.
”
”
Teresa Torres (Continuous Discovery Habits: Discover Products that Create Customer Value and Business Value)
“
Solvay Business School Professor Paul Verdin and I developed a perspective that frames an organization's strategy as a hypothesis rather than a plan.62 Like all hypotheses, it starts with situation assessment and analysis - strategy's classic tools. Also, like all hypotheses, it must be tested through action. When strategy is seen as a hypothesis to be continually tested, encounters with customers provide valuable data of ongoing interest to senior executives.
”
”
Amy C. Edmondson (The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth)
“
Imagine if Wells Fargo had adopted an agile approach to strategy: the company's top management would then have taken repeated instances of missed targets or false accounts as useful data to help it assess the efficacy of the original cross-selling strategy. This learning would then have triggered much-needed strategic adaptation.
”
”
Amy C. Edmondson (The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth)
“
On the subject of climate change, the data clearly show that Earth is warming up, and that humans are contributing to the warming. That’s the fairest assessment of the evidence.
”
”
Neil deGrasse Tyson (StarTalk: Everything You Ever Need to Know About Space Travel, Sci-Fi, the Human Race, the Universe, and Beyond (Astrophysics for People in a Hurry Series))
“
exposed outlets, cords, fans, etc.
• Safe cribs
• Written emergency plan
• Disposable towels available
• Eating area away from diaper area
• Toys washed each day
• Teacher knows about infant illnesses
Fun
• Toys can be reached by kids
• Floor space available for crawlers to play
• 3 different types of “large-muscle materials” available (balls, rocking horse)
• 3 types of music materials available
• “Special activities” (i.e., water play, sponge painting)
• 3 materials for outdoor infant play
Individualization
• Kid has own crib
• Each infant is assigned to one of the teachers
• Child development is assessed formally at least every 6 months
• Infants offered toys appropriate for their development level
• Teachers have at least 1 hour a week for team planning
”
”
Emily Oster (Cribsheet: A Data-Driven Guide to Better, More Relaxed Parenting, from Birth to Preschool (The ParentData Series Book 2))
“
Trademark
A trademark is fundamentally a form of intellectual property consisting of designs, logos, and marks. Businesses use various designs, logos, or words to distinguish their products and services from others. Marks that distinguish a product or service from others and help customers identify its brand, quality, and even source are known as trademarks.
Unlike patents, a trademark is registered for ten years, and thereafter it can be renewed for a further ten years on payment of renewal fees.
Trademark Objection
After a trademark application is filed, an Examiner/Registrar or a third party can raise a trademark objection. Under Sections 9 (Absolute Grounds of Refusal) and 11 (Relative Grounds of Refusal) of the Act, the grounds of objection are:
• The application contains incorrect information, or
• A similar or identical trademark already exists.
Whenever the Registrar raises an objection, the applicant has an opportunity, within 30 days of the objection, to send a written reply along with strong evidence, facts, and reasons why the mark should be granted to him.
If the Examiner/Registrar finds the reply adequate, all of the concerns in the examination report are addressed, and no conflict remains, he may give the applicant permission to publish the application in the Trademark Journal before registration.
How to respond to an objection
The trademark examination report is posted on the Trademark Office website along with the details of the application, and the applicant or an agent then has the opportunity to send the written reply, known as a trademark objection reply.
The reply can be submitted as a "Reply to the examination report," either online or by post or in person, along with supporting documents or an affidavit.
Once the application is marked as objected, the applicant is served a notice stating the grounds of objection. The requirements are:
• A counter-statement to the objection must be filed,
• It must be filed within two months of the notice,
• If the applicant fails to file the counter-statement within that time, the status of the application becomes abandoned.
After the counter-statement is filed, the Registrar will call the applicant for a hearing. If the ruling is in the applicant's favor, the mark proceeds to registration; if the reply is not found satisfactory, the application is rejected.
Trademark Objection Reply Fees
Although I have gone through various sites, finding a well-drafted formal reply is quite difficult. Professional Utilities, however, provides expert-drafted replies, and its trademark objection reply fees are really affordable: the service costs just 1,499/-.
”
”
Shweta Sharma
“
Recent research has emerged showing that while women tend to assess their intelligence accurately, men of average intelligence think they are more intelligent than two-thirds of people. This being the case, perhaps it wasn’t that women’s rates of putting themselves up for promotion were too low. Perhaps it was that men’s were too high.
”
”
Caroline Criado Pérez (Invisible Women: Data Bias in a World Designed for Men)
“
But everything, absolutely all the library work, had also been data. Collectible information that could be assessed and analyzed, that inferences could be made from. Some might argue that information and data, numbers and charts and statistics, aren't concerned with what feels "good" or "bad" (or any number of things in between), but I disagree. All data is tied back to emotions - to some original question, concern, desire, hypothesis that can be traced back to the feelings of a researcher, or a scientist, or whoever formed a hypothesis, asked a question, became interested in measuring something, tried to solve a problem, or cure a virus, and so forth.
”
”
Amanda Oliver (Overdue: Reckoning with the Public Library)
“
Structured methods for learning
• Organizational climate and employee satisfaction surveys
Uses: Learning about culture and morale. Many organizations do such surveys regularly, and a database may already be available. If not, consider setting up a regular survey of employee perceptions.
Useful for: Managers at all levels, if the analysis is available specifically for your unit or group. Usefulness depends on the granularity of the collection and analysis. This also assumes the survey instrument is a good one and the data have been collected carefully and analyzed rigorously.
• Structured sets of interviews with slices of the organization or unit
Uses: Identifying shared and divergent perceptions of opportunities and problems. You can interview people at the same level in different departments (a horizontal slice) or bore down through multiple levels (a vertical slice). Whichever dimension you choose, ask everybody the same questions, and look for similarities and differences in people’s responses.
Useful for: Most useful for managers leading groups of people from different functional backgrounds. Can be useful at lower levels if the unit is experiencing significant problems.
• Focus groups
Uses: Probing issues that preoccupy key groups of employees, such as morale issues among frontline production or service workers. Gathering groups of people who work together also lets you see how they interact and identify who displays leadership. Fostering discussion promotes deeper insight.
Useful for: Most useful for managers of large groups of people who perform a similar function, such as sales managers or plant managers. Can be useful for senior managers as a way of getting quick insights into the perceptions of key employee constituencies.
• Analysis of critical past decisions
Uses: Illuminating decision-making patterns and sources of power and influence. Select an important recent decision, and look into how it was made. Who exerted influence at each stage? Talk with the people involved, probe their perceptions, and note what is and is not said.
Useful for: Most useful for higher-level managers of business units or project groups.
• Process analysis
Uses: Examining interactions among departments or functions and assessing the efficiency of a process. Select an important process, such as delivery of products to customers or distributors, and assign a cross-functional group to chart the process and identify bottlenecks and problems.
Useful for: Most useful for managers of units or groups in which the work of multiple functional specialties must be integrated. Can be useful for lower-level managers as a way of understanding how their groups fit into larger processes.
• Plant and market tours
Uses: Learning firsthand from people close to the product. Plant tours let you meet production personnel informally and listen to their concerns. Meetings with sales and production staff help you assess technical capabilities. Market tours can introduce you to customers, whose comments can reveal problems and opportunities.
Useful for: Most useful for managers of business units.
• Pilot projects
Uses: Gaining deep insight into technical capabilities, culture, and politics. Although these insights are not the primary purpose of pilot projects, you can learn a lot from how the organization or group responds to your pilot initiatives.
Useful for: Managers at all levels. The size of the pilot projects and their impact will increase as you rise through the organization.
”
”
Michael D. Watkins (The First 90 Days: Proven Strategies for Getting Up to Speed Faster and Smarter)
“
Learning Plan Template
Before Entry
• Find out whatever you can about the organization’s strategy, structure, performance, and people.
• Look for external assessments of the performance of the organization. You will learn how knowledgeable, fairly unbiased people view it. If you are a manager at a lower level, talk to people who deal with your new group as suppliers or customers.
• Find external observers who know the organization well, including former employees, recent retirees, and people who have transacted business with the organization. Ask these people open-ended questions about history, politics, and culture.
• Talk with your predecessor if possible.
• Talk to your new boss.
• As you begin to learn about the organization, write down your first impressions and eventually some hypotheses.
• Compile an initial set of questions to guide your structured inquiry after you arrive.
Soon After Entry
• Review detailed operating plans, performance data, and personnel data.
• Meet one-on-one with your direct reports and ask them the questions you compiled. You will learn about convergent and divergent views and about your reports as people.
• Assess how things are going at key interfaces. You will hear how salespeople, purchasing agents, customer service representatives, and others perceive your organization’s dealings with external constituencies. You will also learn about problems they see that others do not.
• Test strategic alignment from the top down. Ask people at the top what the company’s vision and strategy are. Then see how far down into the organizational hierarchy those beliefs penetrate. You will learn how well the previous leader drove vision and strategy down through the organization.
• Test awareness of challenges and opportunities from the bottom up. Start by asking frontline people how they view the company’s challenges and opportunities. Then work your way up. You will learn how well the people at the top check the pulse of the organization.
• Update your questions and hypotheses.
• Meet with your boss to discuss your hypotheses and findings.
By the End of the First Month
• Gather your team to feed back to them your preliminary findings. You will elicit confirmation and challenges of your assessments and will learn more about the group and its dynamics.
• Now analyze key interfaces from the outside in. You will learn how people on the outside (suppliers, customers, distributors, and others) perceive your organization and its strengths and weaknesses.
• Analyze a couple of key processes. Convene representatives of the responsible groups to map out and evaluate the processes you selected. You will learn about productivity, quality, and reliability.
• Meet with key integrators. You will learn how things work at interfaces among functional areas. What problems do they perceive that others do not?
• Seek out the natural historians. They can fill you in on the history, culture, and politics of the organization, and they are also potential allies and influencers.
• Update your questions and hypotheses.
• Meet with your boss again to discuss your observations.
”
”
Michael D. Watkins (The First 90 Days: Proven Strategies for Getting Up to Speed Faster and Smarter)
“
Little by little, as the team members stitched together small pieces of information, they stumbled into Ranbaxy’s secret: the company manipulated almost every aspect of its manufacturing process to quickly produce impressive-looking data that would bolster its bottom line. Each member of Thakur’s team came back with similar examples. At the behest of managers, the company’s scientists substituted lower-purity ingredients for higher ones to reduce costs. They altered test parameters so that formulations with higher impurities could be approved. They faked dissolution studies. To generate optimal results, they crushed up brand-name drugs into capsules so that they could be tested in lieu of the company’s own drugs. They superimposed brand-name test results onto their own in applications. For some markets, the company fraudulently mixed and matched data streams, taking its best data from manufacturing in one market and presenting it to regulators elsewhere as data unique to the drugs in their markets. For other markets, the company simply invented data. Document forgery was pervasive. The company even forged its own standard operating procedures, which FDA investigators rely on to assess whether a company is following its own policies. In one instance, employees backdated documents and then artificially aged them in a steamy room overnight in an attempt to fool regulators during inspections.
”
”
Katherine Eban (Bottle of Lies: The Inside Story of the Generic Drug Boom)
“
Today health officials are under pressure to introduce new vaccines, whatever national needs or priorities might be, and irrespective of whether or not they even have the data on which to base an assessment of the burden of disease.
”
”
Stuart S. Blume (Immunization: How Vaccines Became Controversial)
“
Before Entry
• Read internal and external perspectives on the market and consumers. You won’t become an expert, but that’s OK; awareness is what you’re after.
• Identify local consultants who can brief you on the state of the market and the competitive environment.
• Learn the language—it’s not about fluency; it’s about respect.
• Develop some hypotheses about the business situation you are entering.
– Use the STARS model to talk with your new boss and other stakeholders about the situation.
– Assess the leadership team—is it functioning well, and does it comprise a good mix of new and veteran, or local and expatriate, talent?
– Assess the overall organization using any available corporate performance and talent-pool data.
– If possible, talk to some team members to gather their insights and test some of your early hypotheses.
After Entry
Your first day, first week, and first month are absolutely critical. Without the following four-phase plan, you risk getting drawn into fighting fires rather than proactively leading change.
1. Diagnose the situation and align the leadership team around some early priorities.
2. Establish strategic direction and align the organization around it.
3. Repair critical processes and strive for execution consistency.
4. Develop local leadership talent to lay the foundation for your eventual exit.
”
”
Michael D. Watkins (Master Your Next Move, with a New Introduction: The Essential Companion to "The First 90 Days")
“
You learn to assess the big picture and what you are trying to accomplish. You learn to be persuasive. You learn to look for data or external examples to support your proposals. You anticipate questions and have answers ready. You create a detailed action plan to implement your proposal.
”
”
Reggie Fils-Aimé (Disrupting the Game: From the Bronx to the Top of Nintendo)
“
Research consistently shows that tougher individuals are able to perceive stressful situations as challenges instead of threats. A challenge is something that’s difficult, but manageable. On the other hand, a threat is something we’re just trying to survive, to get through. This difference in appraisals isn’t because of an unshakable confidence or because tougher individuals downplay the difficulty. Rather, those who can see situations as a challenge developed the ability to quickly and accurately assess the situation and their ability to cope with it. An honest appraisal is all about giving your mind better data to predict with.
”
”
Steve Magness (Do Hard Things: Why We Get Resilience Wrong and the Surprising Science of Real Toughness)
“
Progressive tackled risk assessment in a different way, building a massive database with more granular indicators that better predicted the probability of accidents. It used this data to spot the good risks in pools that looked like bad drivers to other insurers. For example, among drivers cited for drinking, those with children were least likely to reoffend; among motorcyclists, Harley owners aged forty-plus were likely to ride their bikes less often. Progressive used information like this to set prices so that even the worst customers could be profitable.
”
”
Joan Magretta (Understanding Michael Porter: The Essential Guide to Competition and Strategy)
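A toy sketch of the pooling idea, under loud assumptions: the segment keys mirror the quote's examples, but every probability and price below is invented, not Progressive's data. The point is only that granular pools let an insurer find profitable risks inside classes that look uniformly bad.

```python
# Toy illustration of pricing from granular risk pools rather than broad ones.
accident_prob = {
    ("dui_citation", "has_children"): 0.04,  # the better risk inside a "bad" pool
    ("dui_citation", "no_children"):  0.11,
    ("motorcycle", "harley_40plus"):  0.03,  # rides the bike less often
    ("motorcycle", "other"):          0.08,
}

def annual_premium(segment, avg_claim=12_000, margin=1.25):
    """Price each segment off its own predicted loss, so that even the
    worst customers can be profitable."""
    return accident_prob[segment] * avg_claim * margin

for segment in accident_prob:
    print(segment, f"${annual_premium(segment):,.0f}")
```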
“
Before we explore the account setup, let's take a closer look at how Immediate Momentum functions. Understanding the mechanics of this trading software is crucial to comprehend its potential benefits.
According to Immediate Momentum's official website, the software harnesses sophisticated algorithms to analyze cryptocurrency price movements with pinpoint accuracy. It relies on technical indicators and historical data to identify lucrative trading opportunities by monitoring market trends. Immediate Momentum operates fully automatically, executing every action on behalf of traders.
Users have the flexibility to fine-tune trade parameters to align with their risk tolerance, investment objectives, and experience level. This customization empowers the software to analyze market trends and generate precise trade signals.
Immediate Momentum continually assesses price fluctuations, notifying users of any significant value changes in the cryptocurrencies they're trading. All it takes is twenty minutes to set up the software's parameters, after which it takes over the trading process with efficiency.
”
”
William
“
To reverse student underachievement, a well-structured reversal plan is essential. This usually begins with interviewing the student, parents, or guardians to collect crucial data. This data serves as the foundation for creating a detailed reversal plan, based on the four rules of reversal.
”
”
Asuni LadyZeal
“
The rapid learning approach relies heavily on data to assess students’ progress, make informed decisions about teaching and intervention, and ensure continuous development.
”
”
Asuni LadyZeal
“
In the context of rapid learning, data becomes a dynamic tool with multifaceted applications. From assessments and diagnosis to feedback and collaboration with parents, data serves as a valuable resource for tailoring instruction, enhancing individualized learning experiences, and ensuring a responsive educational environment.
”
”
Asuni LadyZeal
“
Data collected at various stages of the rapid learning session, including assessments, diagnosis, feedback, practice, and collaboration with parents, plays a pivotal role in shaping the learning experience. This information is instrumental in tailoring instruction to individual needs and creating a supportive and responsive learning environment.
”
”
Asuni LadyZeal
“
Whether gathered from assessments, feedback, practice sessions, or collaboration with parents, data is a key driver in shaping the rapid learning environment.
”
”
Asuni LadyZeal
“
Rapid learning involves a systematic cycle of data collection, analysis, and application. This approach enables educators to continually assess student progress, make informed instructional decisions, and foster an environment of continuous improvement.
”
”
Asuni LadyZeal
“
Data is the cornerstone of adaptability in rapid learning. Collected from assessments, feedback mechanisms, practice sessions, and collaboration with parents, it is the catalyst for tailoring instruction, fostering a supportive learning environment, and ensuring that the rapid learning experience is optimized for each student.
”
”
Asuni LadyZeal
“
Data collection in rapid learning is not a singular event; it spans every stage of the learning process. From initial assessments to ongoing feedback, data serves as a compass, guiding educators in tailoring instruction to individual needs and cultivating a responsive and supportive learning atmosphere.
”
”
Asuni LadyZeal
“
As the prefrontal cortex further develops, teens become better equipped to resist impulses and assess potential risks. At the same time, they develop the ability to put themselves in another person’s shoes, a capacity that is often called theory of mind, or mentalizing. This uniquely human superpower allows us to understand other people’s intentions and beliefs. In doing so, we can extrapolate from this data to understand and predict behavior while also better integrating ourselves into society. Today, scientists attribute this remarkable capacity to the puberty-fueled brain revamp.
”
”
Lisa Mosconi (The Menopause Brain)
“
Onora O’Neill argues that if we want to demonstrate trustworthiness, we need the basis of our decisions to be “intelligently open.” She proposes a checklist of four properties that intelligently open decisions should have. Information should be accessible: that implies it’s not hiding deep in some secret data vault. Decisions should be understandable—capable of being explained clearly and in plain language. Information should be usable—which may mean something as simple as making data available in a standard digital format. And decisions should be assessable—meaning that anyone with the time and expertise has the detail required to rigorously test any claims or decisions if they wish to.
”
”
Tim Harford (The Data Detective: Ten Easy Rules to Make Sense of Statistics)
“
I’ve argued that we need to be skeptical of both hype and hysteria. We should ask tough questions on a case-by-case basis whenever we have reason for concern. Are the underlying data accessible? Has the performance of the algorithm been assessed rigorously—for example, by running a randomized trial to see if people make better decisions with or without algorithmic advice? Have independent experts been given a chance to evaluate the algorithm? What have they concluded? We should not simply trust that algorithms are doing a better job than humans, nor should we assume that if the algorithms are flawed, the humans would be flawless.
”
”
Tim Harford (The Data Detective: Ten Easy Rules to Make Sense of Statistics)
“
However when assessing how well a candidate exhibits the Amazon Leadership Principles, we adopted a technique called Behavioral Interviewing. This involves assigning one or more of the 14 Leadership Principles to each member of the interview panel, who in turn poses questions that map to their assigned leadership principle, seeking to elicit two kinds of data.
”
”
Colin Bryar (Working Backwards: Insights, Stories, and Secrets from Inside Amazon)
“
There are several good assessments available, but for retirement or transition coaching I use the Birkman, as it will give you a new look at your usual behaviors and strengths, your working style, and your motivational needs. The Birkman assessment is non-judgmental and empowering, and gives a “beyond the surface” look at ourselves, with records and data on 8 million-plus people across six continents and 23 languages.
”
”
Retirement Coaches Association (Thriving Throughout Your Retirement Transition)
“
Switching to a plant-based diet can cut cadmium (and lead) levels in half within just three months, and lower mercury levels by 20 percent, as measured in hair samples, but the heavy metal levels bounce back when an omnivorous diet is resumed.5492 Whether this helps account for the data showing two to three times lower dementia rates in vegetarians5493 is unclear. Although blood levels of mercury are correlated with Alzheimer’s risk, brain mercury levels, assessed on autopsy, do not correlate with brain pathology.
”
”
Michael Greger (How Not to Age: The Scientific Approach to Getting Healthier as You Get Older)
“
The evidence that sleep is important is irrefutable. Some strategies you might use in your consultant role include:
• Often when the advice comes from a third, nonparental party, kids are more willing to take it seriously. With a school-aged child, tell her that you want to get her pediatrician’s advice about sleep—or the advice of another adult the child respects.
• If you have a teenager, ask her if she would be open to your sharing articles about sleep with her.
• With school-aged kids and younger, you can enforce an agreed-upon lights-out time. Remind them that as a responsible parent, it’s right for you to enforce limits on bedtime and technology use in the evening (more on this later).
• Because technology and peer pressure can make it very difficult for teens to go to bed early, say, “I know this is hard for you. I’m not trying to control you. But if you’d like to get to bed earlier and need help doing it, I’m happy to give you an incentive.” An incentive is okay in this case because you’re not offering it as a means to get her to do what you want her to do, but to help her do what she wants to do on her own but finds challenging. It’s a subtle but important distinction.26
• For older kids, make privileges like driving contingent on getting enough sleep—since driving while sleep deprived is so dangerous.
How to chart their sleep is more complicated. Reliable tools for assessing when a child falls asleep and how long he stays asleep, such as the actigraph, require extensive training and are not something parents can use at home to track their kids’ sleep. Moreover, Fitbits are unfortunately unreliable in gathering data. But you can ask your child to keep a sleep log where she records what time she turned out the lights, and (in the morning) how long she thinks it took her to fall asleep, and whether she was up during the night. She may not know how long it took her to fall asleep; that’s okay. Just ask, “Was it easier to fall asleep than last night or harder?” Helping kids figure out if they’ve gotten enough rest is a process, and trust, communication, and collaborative problem solving are key to that process.
• Encourage your child to do screen-time homework earlier and save reading homework for later so she gets less late light exposure.
• Ask questions such as “If you knew you’d be better at everything you do if you slept an extra hour and a half, would that change your sense of how important sleep is?” And “If you knew you’d be at risk for developing depression if you didn’t sleep enough, would that change your mind?”
• Talk to her about your own attempts to get to bed earlier. Ask, “Would you be open to us supporting each other in getting the sleep we need? I’ll remind you and you remind me?
”
”
William Stixrud (The Self-Driven Child: The Science and Sense of Giving Your Kids More Control Over Their Lives)
“
As studies have shown, there’s a difference between data, information, and knowledge. Always assess what you're gaining.
”
”
Mitta Xinindlu
“
Six decades of study, however, have revealed conflicting, confusing, and inconclusive data.17 That’s right: there has never been a human study that successfully links low serotonin levels and depression. Imaging studies, blood and urine tests, postmortem suicide assessments, and even animal research have never validated the link between neurotransmitter levels and depression.18 In other words, the serotonin theory of depression is a total myth that has been unjustly supported by the manipulation of data. Much to the contrary, high serotonin levels have been linked to a range of problems, including schizophrenia and autism.19
”
”
Kelly Brogan (A Mind of Your Own: The Truth About Depression and How Women Can Heal Their Bodies to Reclaim Their Lives)
“
Perhaps the most obvious difference between modern social and personality psychology is that the former is based almost exclusively on experiments, whereas the latter is usually based on correlational studies. […] In summary, over the past 50 years social psychology has concentrated on the perceptual and cognitive processes of person perceivers, with scant attention to the persons being perceived. Personality psychology has had the reverse orientation, closely examining self-reports of individuals for indications of their personality traits, but rarely examining how these people actually come off in social interaction. […] individuals trained in either social or personality psychology are often more ignorant of the other field than they should be. Personality psychologists sometimes reveal an imperfect understanding of the concerns and methods of their social psychological brethren, and they in particular fail to comprehend the way in which so much of the self-report data they gather fails to overcome the skepticism of those trained in other methods. For their part, social psychologists are often unfamiliar with basic findings and concepts of personality psychology, misunderstand common statistics such as correlation coefficients and other measures of effect size, and are sometimes breathtakingly ignorant of basic psychometric principles. This is revealed, for example, when social psychologists, assuring themselves that they would not deign to measure any entity so fictitious as a trait, proceed to construct their own self-report scales to measure individual difference constructs called schemas or strategies or construals (never a trait). But they often fail to perform the most elementary analyses to confirm the internal consistency or the convergent and discriminant validity of their new measures, probably because they do not know that they should. […] an astonishing number of research articles currently published in major journals demonstrate a complete innocence of psychometric principles. Social psychologists and cognitive behaviorists who overtly eschew any sympathy with the dreaded concept of ‘‘trait’’ freely report the use of self-report assessment instruments of completely unknown and unexamined reliability, convergent validity, or discriminant validity. It is almost as if they believe that as long as the individual difference construct is called a ‘‘strategy,’’ ‘‘schema,’’ or ‘‘implicit theory,’’ then none of these concepts is relevant. But I suspect the real cause of the omission is that many investigators are unfamiliar with these basic concepts, because through no fault of their own they were never taught them.
”
”
David C. Funder (Personality Judgment: A Realistic Approach to Person Perception)
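As one concrete instance of the “elementary analyses” the passage says are often skipped, here is a sketch of Cronbach's alpha, a standard internal-consistency check for a newly constructed self-report scale. The ratings matrix is invented for illustration; real scale validation would also examine convergent and discriminant validity.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency of a scale. Rows are respondents, columns are items:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented 5-point ratings: 6 respondents x 4 items of a hypothetical scale.
ratings = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])

print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```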
“
A computational procedure is said to have a top-down organization if it has been constructed according to some well-defined and clearly understood fixed computational procedure (which may include some preassigned store of knowledge), where this procedure specifically provides a clear-cut solution to some problem at hand. (Euclid's algorithm for finding the highest common factor of two natural numbers, as described in ENM, p. 31, is a simple example of a top-down algorithm.) This is to be contrasted with a bottom-up organization, where such clearly defined rules of operation and knowledge store are not specified in advance, but instead there is a procedure laid down for the way that the system is to 'learn' and to improve its performance according to its 'experience'. Thus, with a bottom-up system, these rules of operation are subject to continual modification. One must allow that the system is to be run many times, performing its actions upon a continuing input of data. On each run, an assessment is made - perhaps by the system itself - and it modifies its operations, in the light of this assessment, with a view to improving the quality of its output. For example, the input data for the system might be a number of photographs of human faces, appropriately digitized, and the system's task is to decide which photographs represent the same individuals and which do not. After each run, the system's performance is compared with the correct answers. Its rules of operation are then modified in such a way as to lead to a probable improvement in its performance on the next run.
”
”
Roger Penrose (Shadows of the Mind: A Search for the Missing Science of Consciousness)
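Since the passage names Euclid's algorithm as its top-down example, here is that algorithm in Python next to a minimal bottom-up contrast. The threshold learner is a made-up toy; its only point is that its rule of operation is revised by assessing its own output, rather than being fixed in advance.

```python
def gcd(a: int, b: int) -> int:
    """Top-down: Euclid's algorithm, a fixed, fully specified procedure
    that delivers a clear-cut solution with no learning or modification."""
    while b:
        a, b = b, a % b
    return a

def learn_threshold(samples, labels, runs=50, lr=0.1):
    """Bottom-up: the decision rule (a single threshold) is not given in
    advance; after each judgment the system adjusts it in light of an
    assessment against the correct answer."""
    t = 0.0
    for _ in range(runs):
        for x, y in zip(samples, labels):
            prediction = 1 if x > t else 0
            t += lr * (prediction - y)  # nudge the threshold to reduce error
    return t

print(gcd(48, 18))                                  # 6
print(learn_threshold([1, 2, 4, 5], [0, 0, 1, 1]))  # converges to 2.0, the class boundary
```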
“
Be careful about caching in too many places! The more caches between you and the source of fresh data, the more stale the data can be, and the harder it can be to determine the freshness of the data that a client eventually sees. This can be especially problematic with a microservice architecture where you have multiple services involved in a call chain. Again, the more caching you have, the harder it will be to assess the freshness of any piece of data. So if you think a cache is a good idea, keep it simple, stick to one, and think carefully before adding more!
”
”
Sam Newman (Building Microservices: Designing Fine-Grained Systems)
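In that spirit, a minimal sketch of the “keep it simple, stick to one” option: a single cache with one time-to-live, so the worst-case staleness a client can see is always known. The fetch callable and TTL value are stand-ins for illustration, not an API from the book.

```python
import time

class TTLCache:
    """A single cache with one time-to-live: worst-case staleness is simply
    the TTL, which stops being knowable once caches stack up in a call chain."""

    def __init__(self, fetch, ttl_seconds=30.0):
        self.fetch = fetch        # stand-in loader for the source of fresh data
        self.ttl = ttl_seconds
        self.store = {}           # key -> (value, fetched_at)

    def get(self, key):
        hit = self.store.get(key)
        if hit is not None:
            value, fetched_at = hit
            if time.monotonic() - fetched_at < self.ttl:
                return value      # still inside the freshness bound
        value = self.fetch(key)   # miss or stale: go back to the source
        self.store[key] = (value, time.monotonic())
        return value

cache = TTLCache(fetch=lambda key: f"fresh:{key}", ttl_seconds=30.0)
print(cache.get("user/42"))  # first call fetches; repeat calls within 30s reuse it
```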
“
Passage Five: From Business Manager to Group Manager This is another leadership passage that at first glance doesn’t seem overly arduous. The assumption is that if you can run one business successfully, you can do the same with two or more businesses. The flaw in this reasoning begins with what is valued at each leadership level. A business manager values the success of his own business. A group manager values the success of other people’s businesses. This is a critical distinction because some people only derive satisfaction when they’re the ones receiving the lion’s share of the credit. As you might imagine, a group manager who doesn’t value the success of others will fail to inspire and support the performance of the business managers who report to him. Or his actions might be dictated by his frustration; he’s convinced he could operate the various businesses better than any of his managers and wishes he could be doing so. In either instance, the leadership pipeline becomes clogged with business managers who aren’t operating at peak capacity because they’re not being properly supported or their authority is being usurped. This level also requires a critical shift in four skill sets. First, group managers must become proficient at evaluating strategy for capital allocation and deployment purposes. This is a sophisticated business skill that involves learning to ask the right questions, analyze the right data, and apply the right corporate perspective to understand which strategy has the greatest probability of success and therefore should be funded. The second skill cluster involves development of business managers. As part of this development, group managers need to know which of the function managers are ready to become business managers. Coaching new business managers is also an important role for this level. The third skill set has to do with portfolio strategy. This is quite different from business strategy and demands a perceptual shift. This is the first time managers have to ask these questions: Do I have the right collection of businesses? What businesses should be added, subtracted, or changed to position us properly and ensure current and future earnings? Fourth, group managers must become astute about assessing whether they have the right core capabilities. This means avoiding wishful thinking and instead taking a hard, objective look at their range of resources and making a judgment based on analysis and experience. Leadership becomes more holistic at this level. People may master the required skills, but they won’t perform at full leadership capacity if they don’t begin to see themselves as broad-gauged executives. By broad-gauged, we mean that managers need to factor in the complexities of running multiple businesses, thinking in terms of community, industry, government,
”
”
Ram Charan (The Leadership Pipeline: How to Build the Leadership Powered Company (Jossey-Bass Leadership Series Book 391))
“
When scientific proposals are brought forward, they are not judged by hunches or gut feelings. Only one standard is relevant: a proposal's ability to explain or predict experimental data and astronomical observations.
Therein lies the singular beauty of science. As we struggle toward deeper understanding, we must give our creative imagination ample room to explore. We must be willing to step outside conventional ideas and established frameworks. But unlike the wealth of other human activities through which the creative impulse is channeled, science supplies a final reckoning, a built-in assessment of what's right and what's not.
A complication of scientific life in the late twentieth and early twenty-first centuries is that some of our theoretical ideas have soared past our ability to test or observe. String theory has for some time been the poster child for this situation; the possibility that we're part of a multiverse provides an even more sprawling example. I've laid out a general prescription for how a multiverse proposal might be testable, but at our current level of understanding none of the multiverse theories we've encountered yet meet the criteria. With ongoing research, this situation could greatly improve.
”
”
Brian Greene (The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos)