Percentage Calculate Quotes

We've searched our database for all the quotes and captions related to Percentage Calculate. Here they are! All 41 of them:

Morris Kleiner has calculated that the percentage of jobs subject to occupational licensing has expanded from 10 percent in 1970 to 30 percent in 2008.
Robert J. Gordon (The Rise and Fall of American Growth: The U.S. Standard of Living since the Civil War (The Princeton Economic History of the Western World Book 70))
And what percentage of people take up the option to die off?’ She looked at me, her glance telling me to be calm. ‘Oh, a hundred per cent, of course. Over many thousands of years, calculated by old time, of course. But yes, everyone takes the option, sooner or later.’ ‘So it’s just like the first time round? You always die in the end?’ ‘Yes, except don’t forget the quality of life here is much better. People die when they decide they’ve had enough, not before. The second time round it’s altogether more satisfying because it’s willed.’ She paused, then added, ‘As I say, we cater for what people want.’ I hadn’t been blaming her. I’m not that sort. I just wanted to find out how the system worked. ‘So … even people, religious people, who come here to worship God throughout eternity … they end up throwing in the towel after a few years, hundred years, thousand years?’ ‘Certainly. As I said, there are still a few Old Heaveners around, but their numbers are diminishing all the time.
Julian Barnes (A History of the World in 10½ Chapters)
The thought of suicide was entertained by nearly everyone, if only for a brief time. It was born of the hopelessness of the situation, the constant danger of death looming over us daily and hourly, and the closeness of the deaths suffered by many of the others. From personal convictions which will be mentioned later, I made myself a firm promise, on my first evening in camp, that I would not “run into the wire.” This was a phrase used in camp to describe the most popular method of suicide—touching the electrically charged barbed-wire fence. It was not entirely difficult for me to make this decision. There was little point in committing suicide, since, for the average inmate, life expectation, calculating objectively and counting all likely chances, was very poor. He could not with any assurance expect to be among the small percentage of men who survived all the selections. The prisoner of Auschwitz, in the first phase of shock, did not fear death. Even the gas chambers lost their horrors for him after the first few days—after all, they spared him the act of committing suicide.
Viktor E. Frankl (Man's Search for Meaning)
Suppose you were to total up all the wars over the past two hundred years that occurred between very large and very small countries. Let’s say that one side has to be at least ten times larger in population and armed might than the other. How often do you think the bigger side wins? Most of us, I think, would put that number at close to 100 percent. A tenfold difference is a lot. But the actual answer may surprise you. When the political scientist Ivan Arreguín-Toft did the calculation a few years ago, what he came up with was 71.5 percent. Just under a third of the time, the weaker country wins. Arreguín-Toft then asked the question slightly differently. What happens in wars between the strong and the weak when the weak side does as David did and refuses to fight the way the bigger side wants to fight, using unconventional or guerrilla tactics? The answer: in those cases, the weaker party’s winning percentage climbs from 28.5 percent to 63.6 percent. To put that in perspective, the United States’ population is ten times the size of Canada’s. If the two countries went to war and Canada chose to fight unconventionally, history would suggest that you ought to put your money on Canada.
Malcolm Gladwell (David and Goliath: Underdogs, Misfits, and the Art of Battling Giants)
The twentieth century’s first major discovery about vision came about, once again, because of war. Russia had long coveted a warm-water port on the Pacific Ocean, so in 1904 the czar sent hundreds of thousands of troops to Manchuria and Korea to bully one away from the Japanese. These soldiers were armed with high-speed rifles whose tiny, quarter-inch bullets rocketed from the muzzle at fourteen hundred miles per hour. Fast enough to penetrate the skull but small enough to avoid messy shattering, these bullets made clean, precise wounds like worm tracks through an apple. Japanese soldiers who were shot through the back of the brain—through the vision centers, in the occipital lobe—often woke up to find themselves with tiny blind spots, as if they were wearing glasses spattered with black paint. Tatsuji Inouye, a Japanese ophthalmologist, had the uncomfortable job of calculating how much of a pension these speckled-blind soldiers should receive, based on the percentage of vision lost. Inouye could have gotten away
Sam Kean (The Tale of the Dueling Neurosurgeons: The History of the Human Brain as Revealed by True Stories of Trauma, Madness, and Recovery)
...one of the most powerful examples of group feeling and belief affecting a broad geographic area was documented as a daring experiment during the war between Lebanon and Israel that began in 1982. It was during that time that researchers trained a group of people to "feel" peace in their bodies while believing that it was already present within them, rather than simply thinking about it in their minds or praying "for" it to occur. For this particular experiment, those involved used a form of meditation known as TM (Transcendental Meditation) to achieve that feeling. At appointed times on specific days of the month, these people were positioned throughout the war-torn areas of the Middle East. During the window of time when they were feeling peace, terrorist activities ceased, the rate of crimes against people went down, the number of emergency-room visits declined, and the incidence of traffic accidents dropped. When the participants' feelings changed, the statistics were reversed. This study confirmed the earlier findings: When a small percentage of the population achieved peace within themselves, it was reflected in the world around them. The experiments took into account the days of the week, holidays, and even lunar cycles; and the data was so consistent that the researchers were able to identify how many people are needed to share the experience of peace before it's mirrored in their world. The number is the square root of one percent of the population. This formula produces figures that are smaller than we might expect. For example, in a city of one million people, the number is about 100. In a world of 6 billion, it's just under 8,000. This calculation represents only the minimum needed to begin the process. The more people involved in feeling peace, the faster the effect is created. The study became known as the International Peace Project in the Middle East...
Gregg Braden (The Spontaneous Healing of Belief: Shattering the Paradigm of False Limits)
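Braden's square-root-of-one-percent rule is easy to replay numerically. A minimal Python sketch (the function name is ours) that reproduces both worked examples from the quote:

```python
import math

def peace_threshold(population: int) -> int:
    """Minimum group size per the rule quoted above: the square root
    of one percent of the population, rounded up."""
    return math.ceil(math.sqrt(0.01 * population))

print(peace_threshold(1_000_000))      # 100 for a city of one million
print(peace_threshold(6_000_000_000))  # 7746 -- "just under 8,000" for 6 billion
```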
Dr. Lydia Ciarallo in the Department of Pediatrics, Brown University School of Medicine, treated thirty-one asthma patients ages six to eighteen who were deteriorating on conventional treatments. One group was given magnesium sulfate and another group was given saline solution, both intravenously. At fifty minutes the magnesium group had a significantly greater percentage of improvement in lung function, and more magnesium patients than placebo patients were discharged from the emergency department and did not need hospitalization.4 Another study showed a correlation between intracellular magnesium levels and airway spasm. The investigators found that patients who had low cellular magnesium levels had increased bronchial spasm. This finding confirmed not only that magnesium was useful in the treatment of asthma by dilating the bronchial tubes but that lack of magnesium was probably a cause of this condition.5 A team of researchers identified magnesium deficiency as surprisingly common, finding it in 65 percent of an intensive-care population of asthmatics and in 11 percent of an outpatient asthma population. They supported the use of magnesium to help prevent asthma attacks. Magnesium has several antiasthmatic actions. As a calcium antagonist, it relaxes airways and smooth muscles and dilates the lungs. It also reduces airway inflammation, inhibits chemicals that cause spasm, and increases anti-inflammatory substances such as nitric oxide.6 The same study established that a lower dietary magnesium intake was associated with impaired lung function, bronchial hyperreactivity, and an increased risk of wheezing. The study included 2,633 randomly selected adults ages eighteen to seventy. Dietary magnesium intake was calculated by a food frequency questionnaire, and lung function and allergic tendency were evaluated. The investigators concluded that low magnesium intake may be involved in the development of both asthma and chronic obstructive airway disease.
Carolyn Dean (The Magnesium Miracle (Revised and Updated))
The Peterson Foundation calculates that, since 2010, fiscal uncertainty—i.e., gridlock—might have slowed America’s GDP growth by one percentage point and stopped the creation of two million jobs.
John Micklethwait (The Fourth Revolution: The Global Race to Reinvent the State)
Edible yield factor (EYF) is the figure that assists in the calculation of how many products will remain after preparation. The EYF is expressed as a percentage and equals the EP divided by the AP.
Ruby Parker Puckett (Foodservice Manual for Health Care Institutions (J-B AHA Press Book 150))
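Puckett's EYF definition is a single division. A hedged sketch (the function name and the sample weights are ours):

```python
def edible_yield_factor(edible_portion: float, as_purchased: float) -> float:
    """EYF as a percentage: edible portion (EP) divided by the
    as-purchased quantity (AP), per the definition quoted above."""
    return edible_portion / as_purchased * 100

# Hypothetical example: 7.0 kg of usable product from a 10.0 kg purchase
print(edible_yield_factor(7.0, 10.0))  # 70.0 percent
```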
There are only eight numbers. They repeat.” “That’s right, and it’s amazing that we could calculate this with only one hundred examples. Despite every block being a slightly different size and shape, calculating the ratios gets us only eight different ones, and the same thing happens when we divide the circumference by the ratio as a percentage. Furthermore, the eight unique numbers that result from that calculation are all Fibonacci numbers.
J.C. Ryan (The 10th Cycle (Rossler Foundation, #1))
Conversion rates for lead-generation sites are usually calculated based on the percentage of visitors who begin and complete the lead generation form or process. Average conversion rates are often higher for lead-generation websites since the commitment required (such as providing an e-mail address) is a lot lower than for an e-commerce purchase.
Tim Ash (Landing Page Optimization: The Definitive Guide to Testing and Tuning for Conversions)
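Ash's definition of a lead-generation conversion rate translates directly into code. A minimal sketch with made-up traffic numbers:

```python
def conversion_rate(completions: int, visitors: int) -> float:
    """Percentage of visitors who complete the lead-generation form."""
    return completions / visitors * 100

# Hypothetical month: 240 completed forms from 4,000 visitors
print(conversion_rate(240, 4_000))  # 6.0 percent
```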
But, as I indicated in the historical overview, many of these same physicists quickly realized that the story for nature's remaining force, gravity, was far subtler. Whenever the equations of general relativity commingled with those of quantum theory, the mathematics balked. Use the combined equations to calculate the quantum probability of some physical process (such as the chance of two electrons ricocheting off each other, given both their electromagnetic repulsion and their gravitational attraction) and you'd typically get the answer infinity. While some things in the universe can be infinite, such as the extent of space and the quantity of matter that may fill it, probabilities are not among them. By definition, the value of a probability must be between 0 and 1 (or, in terms of percentages, between 0 and 100). An infinite probability does not mean that something is very likely to happen, or is certain to happen; rather, it's meaningless, like speaking of the thirteenth egg in an even dozen. An infinite probability sends a clear mathematical message: the combined equations are nonsense.
Brian Greene (The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos)
But the current investment banking model—whether applied in a standalone institution such as Goldman or in a broad financial conglomerate such as Deutsche Bank—is at the heart of the problems the finance sector poses for the real economy. Investment banks today engage in securities issuance, corporate advice and asset management; they make markets in equities and FICC, and trade in these markets on their own account. It is only necessary to list these functions to see that each of these activities conflicts with all the others. Each should be undertaken in distinct institutions. And with lower volumes of inter-bank trading, a diminished role for public equity markets and much more direct investment by asset managers the scale of most of these activities should be much reduced. Among all the actors in the finance sector today, only the asset manager, who typically earns a fee calculated as a percentage of funds under management, is rewarded for idleness. The profits of a segregated deposit-taking bank would similarly depend primarily on the scale of the deposit base, and secondarily on its success in making good loans. Dedicated channels of capital allocation have a more appropriate incentive structure than activities focused on trading and transactions. Whenever
John Kay (Other People's Money: The Real Business of Finance)
Bill Gates made a convincing argument for why improving human health is money well spent, and won’t lead to overpopulation, in his 2018 video “Does Saving More Lives Lead to Overpopulation?”56 The short answer is: No. If we were to stop all deaths—every single one around the globe—right now, we would add about 150,000 people to our planet each day. That would be 55 million people each year. That might sound like a lot, but it would be less than a single percentage point. At that rate, we would add a billion people to our ranks every eighteen years, which is still considerably slower than the rate at which the last few billion people have come along and easily countered by the global decline in family sizes. It’s still an increase, but it’s not the sort of exponential growth many people fret about when they first encounter the idea of slowing aging. Recall, these calculations are what we’d face if we ended all deaths right away. And although I’m very optimistic about the prospects for prolonged vitality, I’m not that optimistic. I don’t know any reputable scientist who is.
David A. Sinclair (Lifespan: Why We Age―and Why We Don't Have To)
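Sinclair's arithmetic can be checked line by line; the world-population figure below is our assumption for the book's late-2010s context:

```python
extra_per_day = 150_000                  # deaths hypothetically prevented per day
extra_per_year = extra_per_day * 365     # 54,750,000 -- "about 55 million"

world_population = 7_600_000_000         # assumed world population, late 2010s
print(extra_per_year / world_population * 100)  # ~0.72 -- under one percentage point
print(1_000_000_000 / extra_per_year)           # ~18.3 years to add a billion people
```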
Nearly all the bull markets had a number of well-defined characteristics in common, such as (1) a historically high price level, (2) high price/earnings ratios, (3) low dividend yields as against bond yields, (4) much speculation on margin, and (5) many offerings of new common-stock issues of poor quality. Thus to the student of stock-market history it appeared that the intelligent investor should have been able to identify the recurrent bear and bull markets, to buy in the former and sell in the latter, and to do so for the most part at reasonably short intervals of time. Various methods were developed for determining buying and selling levels of the general market, based on either value factors or percentage movements of prices or both. But we must point out that even prior to the unprecedented bull market that began in 1949, there were sufficient variations in the successive market cycles to complicate and sometimes frustrate the desirable process of buying low and selling high. The most notable of these departures, of course, was the great bull market of the late 1920s, which threw all calculations badly out
Benjamin Graham (The Intelligent Investor)
Though the question of global warming had not yet emerged to public perception, natural gas—methane—is about thirty times more effective than carbon dioxide as a greenhouse gas. No one has calculated how much the vast waste of natural gas across the decades of the twentieth century—in the United States and throughout the world—contributed to global warming. The percentage was certainly more than zero.
Richard Rhodes (Energy: A Human History)
Calculating risks shrewdly is the main ingredient for consistent superior performance. Pros play percentage ball, and that’s why, in the long run, they are more consistent than amateurs. Therefore, it could be said that the difference between an amateur and a pro lies in consistency. Relying on the probabilities based on a positive mathematical expectation to win (your “edge”) will lead to success.
Mark Minervini (Think & Trade Like a Champion: The Secrets, Rules & Blunt Truths of a Stock Market Wizard)
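Minervini's "positive mathematical expectation" is the standard trading-expectancy formula. A minimal sketch with hypothetical trade statistics:

```python
def expectancy(win_rate: float, avg_win: float, avg_loss: float) -> float:
    """Expected profit per trade: p(win) * avg_win - p(loss) * avg_loss."""
    return win_rate * avg_win - (1 - win_rate) * avg_loss

# Hypothetical system: 40% winners averaging $300, losers averaging $100
print(expectancy(0.40, 300, 100))  # 60.0 -- a positive edge of $60 per trade
```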
Bain’s analysis produced a startling bottom-line conclusion: by 2030, employers will need 20 to 25 percent fewer employees, a percentage that would equal 30 to 40 million displaced workers in the United States. Bain acknowledged that some of these workers will be reabsorbed into new professions that barely exist today (such as robot repair technician), but predicted that this reabsorption would fail to make a meaningful dent in the massive and growing trend of displacement. And automation’s impact will be felt far wider than even this 20 to 25 percent of displaced workers. The study calculated that if we include both displacement and wage suppression, a full 80 percent of all workers will be affected.
Kai-Fu Lee (AI Superpowers: China, Silicon Valley, and the New World Order)
So for a survey of 1,000 people (the industry standard), the margin of error is generally quoted as ± 3%: if 400 of them said they preferred coffee, and 600 of them said they preferred tea, then you could roughly estimate the underlying percentage of people in the population who prefer coffee as 40 ± 3%, or between 37% and 43%. Of course, this is only accurate if the polling company really did take a random sample, and everyone replied, and they all had an opinion either way and they all told the truth. So although we can calculate margins of error, we must remember that they only hold if our assumptions are roughly correct. But can we rely on these assumptions?
David Spiegelhalter (The Art of Statistics: Learning from Data)
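Spiegelhalter's ± 3% for a sample of 1,000 follows from the usual 95% margin-of-error formula for a proportion. A sketch reproducing the coffee example:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

p_hat = 400 / 1000                    # 40% of the sample preferred coffee
moe = margin_of_error(p_hat, 1000)
print(f"{p_hat:.0%} +/- {moe:.1%}")   # 40% +/- 3.0%, i.e. between 37% and 43%
```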
In the past, school revenue automatically increased when the value of property rose. In 1996 the legislature “floated” the tax, that is, each year they fixed the amount schools would receive and then technicians calculated the rate needed to produce that amount. School property tax eventually fell about one-third as a result. In 2007, a year with record revenue, legislators cut state income taxes and $150–200 million from annual school revenues. None of these measures had a large immediate effect. All were technical, the gradual, long-term reductions were scarcely noticed by the public. By 2006 the Utah Foundation estimated that the changes together cost schools $1.3 billion, one-third of school spending. In 1996, before the changes, Utah had been fifth among the states in percentage of total personal income devoted to public schools. By 2014, Utah had fallen to thirty-seventh.29 After the mid-1990s, Utah’s economy grew more than the American economy, but Utah school spending fell further behind spending in other states. Republican politicians never say they hold down educational spending on purpose. They always say they spend as much as Utah can afford.
Rod Decker (Utah Politics: The Elephant in the Room)
According to Poterba’s calculations, shown in Table 1.5, taxable investors in stocks might lose as much as 3.5 percentage points per year to taxes. In the context of a pre-tax return of 12.7 percent per year, the tax burden dramatically reduces the rewards for investing in equities. The absolute level of the tax impact on bond and cash returns falls below the impact on equity returns, but taxes consume a greater portion of current-income-intensive assets. According to Poterba’s estimates, 28 percent of gross equity returns go to the tax man, while taxes consume 38 percent of bond returns and 42 percent of cash returns.
[Table 1.5: Taxes Materially Reduce Investment Returns. Pre-Tax and After-Tax Returns (Percent), 1926 to 1996. Source: James M. Poterba, “Taxation, Risk-Taking, and Household Portfolio Behavior,” NBER Working Paper Series, Working Paper 8340 (National Bureau of Economic Research, 2001), 90.]
Tax laws currently favor long-term gains over dividend and interest income in two ways: capital gains face lower tax rates and incur tax only when realized. The provision in the tax code that causes taxes to be due only upon realization of gains allows investors to delay payment of taxes far into the future. Deferral of capital gains taxes creates enormous economic value to investors.*
David F. Swensen (Unconventional Success: A Fundamental Approach to Personal Investment)
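The 28 percent figure is consistent with the other numbers Swensen quotes: a 3.5-point drag on a 12.7 percent pre-tax return. A one-line check:

```python
pre_tax_return = 12.7   # percent per year, per the quote
tax_drag = 3.5          # percentage points lost to taxes

print(tax_drag / pre_tax_return * 100)  # ~27.6 -- roughly the 28 percent cited
```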
In The Better Angels of Our Nature, Pinker calculates the average homicide rate among eight primitive societies, arriving at an alarming 14 per cent. This figure appeared in respected journals like Science and was endlessly regurgitated by newspapers and on TV. When other scientists took a look at his source material, however, they discovered that Pinker mixed up some things. This may get a little technical, but we need to understand where he went wrong. The question we want to answer is: which peoples still hunting and gathering today are representative of how humans lived 50,000 years ago? After all, we were nomads for 95 per cent of human history, roving the world in small, relatively egalitarian groups. Pinker chose to focus almost exclusively on hybrid cultures. These are people who hunt and gather, but who also ride horses or live together in settlements or engage in farming on the side. Now these activities are all relatively recent. Humans didn’t start farming until 10,000 years ago and horses weren’t domesticated until 5,000 years ago. If you want to figure out how our distant ancestors lived 50,000 years ago, it doesn’t make sense to extrapolate from people who keep horses and tend vegetable plots. But even if we get on board with Pinker’s methods, the data is problematic. According to the psychologist, 30 per cent of deaths among the Aché in Paraguay (tribe 1 on his list) and 21 per cent of deaths among the Hiwi in Venezuela and Colombia (tribe 3) are attributable to warfare. These people are out for blood, it would seem. The anthropologist Douglas Fry was sceptical, however. Reviewing the original sources, he discovered that all forty-six cases of what Pinker categorised as Aché ‘war mortality’ actually concerned a tribe member listed as ‘shot by Paraguayan’. The Aché were in fact not killing each other, but being ‘relentlessly pursued by slave traders and attacked by Paraguayan frontiersmen’, reads the original source, whereas they themselves ‘desire a peaceful relationship with their more powerful neighbors’. It was the same with the Hiwi. All the men, women and children enumerated by Pinker as war deaths were murdered in 1968 by local cattle ranchers.40 There go the iron-clad homicide rates. Far from habitually slaughtering one another, these nomadic foragers were the victims of ‘civilised’ farmers wielding advanced weaponry. ‘Bar charts and numeric tables depicting percentages […] convey an air of scientific objectivity,’ Fry writes. ‘But in this case it is all an illusion.
Rutger Bregman (Humankind: A Hopeful History)
Jeffrey and Arnott calculate the tax burden imposed by portfolio turnover. Using a 35 percent capital gains tax rate and a 6 percent pre-tax growth rate (roughly equivalent to the long-term capital appreciation of U.S. equities), the authors conclude that even modest levels of turnover create material costs. For example, as shown in Table 8.14, a turnover rate of 10 percent leads to a tax bill that reduces returns by more than a full percentage point, a steep price to pay relative to the 6 percent pre-tax rate of appreciation.
David F. Swensen (Unconventional Success: A Fundamental Approach to Personal Investment)
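The effect of turnover can be approximated with a toy simulation. This is our own rough model (proportional cost basis, tax paid on each year's realized gains, net proceeds reinvested, no final liquidation), not Jeffrey and Arnott's exact calculation, but it lands in the same neighborhood:

```python
def after_tax_growth(turnover: float, growth: float = 0.06,
                     cap_gains_rate: float = 0.35, years: int = 25) -> float:
    """Annualized after-tax return under yearly turnover -- a rough sketch."""
    value, basis = 1.0, 1.0
    for _ in range(years):
        value *= 1 + growth                  # pre-tax appreciation
        sold = turnover * value              # portion of the portfolio sold
        sold_basis = turnover * basis        # proportional share of cost basis
        tax = cap_gains_rate * (sold - sold_basis)
        value -= tax                         # reinvest proceeds net of tax
        basis = basis - sold_basis + (sold - tax)
    return value ** (1 / years) - 1

print(f"{after_tax_growth(0.10):.2%}")  # roughly 4.9% -- over a point below 6%
```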
Welcome, in other words, to the Land of Plenty. To the good life, where almost everyone is rich, safe, and healthy. Where there’s only one thing we lack: a reason to get out of bed in the morning. Because, after all, you can’t really improve on paradise. Back in 1989, the American philosopher Francis Fukuyama already noted that we had arrived in an era where life has been reduced to “economic calculation, the endless solving of technical problems, environmental concerns, and the satisfaction of sophisticated consumer demands.”18 Notching up our purchasing power another percentage point, or shaving a couple off our carbon emissions; perhaps a new gadget – that’s about the extent of our vision. We live in an era of wealth and overabundance, but how bleak it is. There is “neither art nor philosophy,” Fukuyama says. All that’s left is the “perpetual caretaking of the museum of human history.” According to Oscar Wilde, upon reaching the Land of Plenty, we should once more fix our gaze on the farthest horizon and rehoist the sails. “Progress is the realization of Utopias,” he wrote. But
Rutger Bregman (Utopia for Realists: And How We Can Get There)
He wasn’t serving up these insults for effect—well, not entirely. And his behavior wasn’t carefully calculated; it was tit for tat, and he likely would have said what he’d said even if no one was left standing with him. (This very lack of calculation, this inability to be political, was part of his political charm.) It was just his good luck that the Trumpian 35 percent—that standing percentage of people who, according to most polls, seemed to support him no matter what (who would, in his estimation, let him get away with shooting someone on Fifth Avenue)—was largely unfazed and maybe even buoyed by every new expression of Trumpness.
Michael Wolff (Fire and Fury: Inside the Trump White House)
Measuring the Acquisition Network Effect To increase the Acquisition Effect, you have to be able to directly measure it. The good news is that viral growth can be rolled up into one number. Here’s how you calculate it: Let’s say you’ve built a new productivity tool for sharing notes, and after it launches, 1,000 users download the new app. A percentage of these users invite their colleagues and friends, and over the next month, 500 users download and sign up—what happens next? Well, those 500 users then invite their friends, and get 250 to sign up, who create another 125 sign-ups, and so on. Pay attention to the ratios between each set of users—1000 to 500 to 250. This ratio is often called the viral factor, and in this case can be calculated at 0.5, because each cohort of users generates 0.5 of the next cohort. In this example, things are looking good—starting with 1,000 users with a viral factor of 0.5 leads to a total of 2,000 users by the end of the amplification—meaning an amplification rate of 2x. A higher ratio is better, since it means each cohort is more efficiently bringing on the next batch of users.
Andrew Chen (The Cold Start Problem: How to Start and Scale Network Effects)
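Chen's cohort arithmetic is a geometric series. A minimal sketch:

```python
def total_users(initial: int, viral_factor: float, rounds: int = 30) -> float:
    """Sum the cohorts: each cohort is viral_factor times the previous one."""
    total, cohort = 0.0, float(initial)
    for _ in range(rounds):
        total += cohort
        cohort *= viral_factor
    return total

print(total_users(1000, 0.5))  # ~2000 users: an amplification rate of 2x
```

For a viral factor below 1 the series converges to initial / (1 - viral_factor), which is where the 2x amplification in the quote comes from.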
… the English colonies in North America accounted for only a tiny fraction of the hideous traffic in human beings. David Brion Davis, in his magisterial 2006 history Inhuman Bondage: The Rise and Fall of Slavery in the New World, concludes that colonial North America ‘surprisingly received only 5 to 6 percent of the African slaves shipped across the Atlantic.’ Hugh Thomas in The Slave Trade calculates the percentage as slightly lower, at 4.4 percent.
Michael Medved (The 10 Big Lies about America)
Ancient Master
Requirements:
Talent attribute two or more Tiers above lowest-Tier attribute
Know three or more forms of Magic
Race: Most
Focus: Magic
Zeal or Conviction one Tier lower than Willpower
Restrictions:
Must never reject an opportunity to learn a new type of magic (but see below).
May not voluntarily increase Zeal or Conviction
May not use or learn Divine Magic
Some part of him was impressed at the depth of the class system, but that part was small indeed. Most of him was howling “get to the kewl powerz.” The knowledge slid into his mind, and he began to smile.
Passive Abilities:
Calculate aether-derived %RESOURCE% using an improved formula: 50+(Talent*50)
Increased facility with improvised magic
Decreased ability to use known spellforms
Base aether to %RESOURCE% conversion ratio is 100%
Basic Abilities:
%RESOURCE%bolt (3 %RESOURCE% / damage, global cooldown, attack spell): Fires a bolt of %RESOURCE% energy at the target
Gnostic Reflection (100 %RESOURCE%, 30s cooldown, mental trigger): Absorbs the energy of one spell targeting the caster, then targets the spell’s source with an identical spell using the caster’s parameters. Unknown magic types will not be replicated but can contribute to learning that type of magic.
%RESOURCE% Metamorphosis (100 percent of current %RESOURCE%, 1/day, mental trigger): Converts all surrounding energy in a (Tier*Talent) meter radius as well as the caster’s physical form into %RESOURCE% for up to 60 seconds. During this time, damage to Health is applied to %RESOURCE%, only abilities or effects which use %RESOURCE% will function within the ability’s area, %RESOURCE% pool is doubled, and %RESOURCE% regeneration is halted. When the effect expires, caster returns to physical form with a percentage of %RESOURCE% based on their Tier remaining.
Gregory Blackburn (Unbound (Arcana Unlocked #1))
[Aza Raskin] designed something that distinctly changed how the web works. It's called 'infinite scroll.' Older readers will remember that it used to be that the internet was divided into pages, and when you got to the bottom of one page, you had to decide to click a button to get to the next page. It was an active choice. It gave you a moment to pause and ask: Do I want to carry on looking at this? Aza designed the code that means you don't have to ask that question any more. ...It downloads a chunk of status updates for you to read through ...when you get to the bottom, it will automatically load another chunk for you to flick through. ...'At the outset, it looks like a really good invention,' he told me. He believed he was making life easier for everyone. He had been taught that increased speed and efficiency of access were always advances. His invention quickly spread all over the internet ...But then Aza watched as the people around him changed. They seemed to be unable to pull themselves away from their devices, flicking through and through and through, thanks in part to the code he had designed. He found himself infinitely scrolling through what he often realised afterwards was crap, and he wondered if he was making good use of his life. ...Aza sat down and did a calculation. At a conservative estimate, infinite scroll makes you spend 50 percent more of your time on sites like Twitter. (For many people, Aza believes, it's vastly more.) Sticking with this low-ball percentage, Aza wanted to know what it meant, in practice, if billions of people were spending 50 percent more time on a string of social media sites. When he was done, he stared at the sums. Every day, as a direct result of his invention, the combined total of 200,000 more total human lifetimes - every moment from birth to death - is now spent scrolling through a screen. These hours would otherwise have been spent on some other activity. When he described this to me, he sounded a little stunned. That time is 'just completely gone. It's like their entire life - poof. That time, which could have been used for solving climate change, for spending time with their family, for strengthening social bonds. For whatever it is that makes their life well-lived. It's just...' He trailed off.
Johann Hari (Stolen Focus: Why You Can't Pay Attention— and How to Think Deeply Again)
Net Promoter Scores. An NPS survey asks, on a scale of 0 to 10, how likely a customer is to refer the product to a friend or colleague. The score is calculated as the percentage of all customers who are “promoters” (scoring 9 or 10), minus the percentage who are “detractors” (scoring 0–6). NPS scores over 50 are considered excellent. A declining NPS can serve as an early warning sign of problems and can allow managers to take corrective actions before severe damage is done.
Tom Eisenmann (Why Startups Fail: A New Roadmap for Entrepreneurial Success)
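Eisenmann's NPS definition maps one-to-one onto code. A minimal sketch with a hypothetical ten-customer survey:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return (promoters - detractors) / len(scores) * 100

# Hypothetical survey: four promoters, four passives, two detractors
print(nps([10, 9, 9, 10, 8, 8, 7, 7, 6, 3]))  # (40% - 20%) -> +20.0
```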
A couple recently came to my office. Let’s call them Mark and Elizabeth Schuler. They came in for a consultation at Elizabeth’s request. Mark’s best friend was a stockbroker who had handled the couple’s investment portfolio for decades. All they wanted from me was a second opinion. If all went well, they planned to stop working within five years. After a quick chat about their goals, I organized the mess of financial paperwork they’d brought and set about assessing their situation. As my team and I prepared their “Retirement Map Review,” it was immediately apparent the Schulers were carrying significant market risk. We scheduled a follow-up appointment for two weeks later. When they returned, I asked them to estimate their comfortable risk tolerance. In other words, how much of their savings could they comfortably afford to have exposed to stock market losses? Elizabeth laughed at the question. “We’re not comfortable losing any of it,” she said. I had to laugh too. Of course, no one wants to lose any of their money. But with assets housed in mutual funds, 401(k)s, and stocks, there’s always going to be some measure of risk, not to mention fees to maintain such accounts. We always stand to lose something. So how much could they tolerate losing and still be okay to retire? The Schulers had to think about that for a while. After some quick calculations and hurried deliberation, they finally came up with a number. “I guess if we’re just roughly estimating,” Mark said, “I could see us subjecting about 10 percent of our retirement savings to the market’s ups and downs and still being all right.” Can you guess what percentage of their assets were at risk? After a careful examination of the Schulers’ portfolio, my team and I discovered 100 percent of their portfolio was actually invested in individual stocks—an investment option with very high risk! In fact, a large chunk of the Schulers’ money was invested in Pacific Gas & Electric Company (PG&E), a utility company that has been around for over one hundred years. Does that name sound familiar? When I met with the Schulers, PG&E stock was soaring. But you may remember the company name from several 2019 news headlines in which the electric and natural gas giant was accused of negligence that contributed to 30 billion dollars’ worth of damage caused by California wildfires. In the wake of that disaster, the company’s stock dropped by more than 60 percent in a matter of months. That’s how volatile individual stocks can be.
John Hagensen (The Retirement Flight Plan: Arriving Safely at Financial Success)
Studies show that the training intensity can be calculated more precisely based on a percentage of the best times than on the heart rate (Olbrecht 1997).
Jan Olbrecht (The Science of Winning: Planning, Periodizing and Optimizing Swim Training)
Keeping track of the 5s leads to a very simple winning system. Suppose the player bets small whenever any 5s are left and bets big whenever all the 5s are gone. The likelihood of all the 5s being gone increases as fewer cards remain. When twenty-six cards are left, this will happen about 5 percent of the time, and if only thirteen cards are left, 30 percent of the time. Since the player then has a 3.29 percent edge on his bets, if these are very big compared with his other bets he wins in the long run. For actual casino play, I built a much more powerful winning strategy based on the fluctuation in the percentage of Ten-value cards in the deck, even though my calculations showed that the impact of a Ten was less than that of a 5, since there were four times as many Tens. The larger fluctuations in “Ten-richness” that resulted gave the player more and better opportunities.
Edward O. Thorp (A Man for All Markets: From Las Vegas to Wall Street, How I Beat the Dealer and the Market)
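Thorp's 5 percent and 30 percent figures are straightforward combinatorics: the chance that all four 5s sit among the cards already dealt. A sketch:

```python
from math import comb

def all_fives_gone(cards_left: int, deck: int = 52, fives: int = 4) -> float:
    """Probability that every 5 is among the dealt cards when cards_left remain."""
    dealt = deck - cards_left
    return comb(dealt, fives) / comb(deck, fives)

print(f"{all_fives_gone(26):.1%}")  # ~5.5% with twenty-six cards left
print(f"{all_fives_gone(13):.1%}")  # ~30.4% with thirteen cards left
```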
Figuring out how to allocate your assets doesn’t need to be difficult. Obviously, as my grandmother liked to remind me, you don’t want to keep all your eggs in one basket. But how do you know what proportion of your nest egg should be invested in equities vs. fixed-income securities? There are all sorts of ways to calculate this. For my part, I prefer the following simple rule of thumb. Take your age and subtract it from 110. The number you get is the percentage of your assets that should go into equities; the remainder should go into bonds or other fixed-income investments.
David Bach (Smart Couples Finish Rich: 9 Steps to Creating a Rich Future for You and Your Partner)
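Bach's rule of thumb in code form, with a hypothetical forty-year-old investor:

```python
def equity_percentage(age: int) -> int:
    """Bach's rule of thumb: 110 minus your age goes into equities;
    the remainder goes into bonds or other fixed income."""
    return 110 - age

age = 40  # hypothetical investor
print(equity_percentage(age), 100 - equity_percentage(age))  # 70% equities, 30% fixed income
```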
How much capacity is sufficient? When you’ve raised your AeT to be within 10 percent (elite athletes can have a Z3 spread of 6–7 percent or only 10 beats) of your LT as measured by either heart rate or pace. With more than a 10 percent spread between thresholds, an athlete still has aerobic deficiency and needs to build more aerobic base. Those who have less than a 10 percent spread between thresholds will need to reduce or even drop Z2 training, substituting Z3 workouts. Here’s how to do the 10 percent test: Determine your AeT using one of the methods described on pages 152 to 155 (AeT Testing). Then do the LT test (see page 155). Calculate the percentage difference between the AeT heart rate and the LT heart rate by dividing the higher heart rate by the lower heart rate. We know this is not the conventional way to calculate percentage, but it works well for our purposes. Example: Suppose your AeT heart rate is 128 as determined by a laboratory test. Your LT hill-climb test shows an average heart rate of 150. 150/128 = 1.17. This shows that the LT heart rate is 17 percent greater than the AeT heart rate. You still have a lot of potential to improve your aerobic base with Z1–2 and should not be too eager to move to adding Z3 or higher intensity yet.
Steve House (Training for the Uphill Athlete: A Manual for Mountain Runners and Ski Mountaineers)
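House's 10 percent test reduces to a single division. A sketch using the example numbers from the passage:

```python
def threshold_spread(aet_hr: int, lt_hr: int) -> float:
    """Percent by which lactate-threshold HR exceeds aerobic-threshold HR
    (the book's higher-divided-by-lower convention)."""
    return (lt_hr / aet_hr - 1) * 100

spread = threshold_spread(128, 150)
print(f"{spread:.0f}%")  # 17% -- above 10%, so keep building the aerobic base
```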
Table 14.1 also shows R-square (R2), which is called the coefficient of determination. R-square is of great interest: its value is interpreted as the percentage of variation in the dependent variable that is explained by the independent variable. R-square varies from zero to one, and is called a goodness-of-fit measure.5 In our example, teamwork explains only 7.4 percent of the variation in productivity. Although teamwork is significantly associated with productivity, it is quite likely that other factors also affect it. It is conceivable that other factors might be more strongly associated with productivity and that, when controlled for other factors, teamwork is no longer significant. Typically, values of R2 below 0.20 are considered to indicate weak relationships, those between 0.20 and 0.40 indicate moderate relationships, and those above 0.40 indicate strong relationships. Values of R2 above 0.65 are considered to indicate very strong relationships. R is called the multiple correlation coefficient and is always 0 ≤ R ≤ 1.
To summarize up to this point, simple regression provides three critically important pieces of information about bivariate relationships involving two continuous variables: (1) the level of significance at which two variables are associated, if at all (t-statistic), (2) whether the relationship between the two variables is positive or negative (b), and (3) the strength of the relationship (R2).
Key Point: R-square is a measure of the strength of the relationship. Its value goes from 0 to 1.
The primary purpose of regression analysis is hypothesis testing, not prediction. In our example, the regression model is used to test the hypothesis that teamwork is related to productivity. However, if the analyst wants to predict the variable “productivity,” the regression output also shows the SEE, or the standard error of the estimate (see Table 14.1). This is a measure of the spread of y values around the regression line as calculated for the mean value of the independent variable, only, and assuming a large sample. The standard error of the estimate has an interpretation in terms of the normal curve: that is, 68 percent of y values lie within one standard error from the calculated value of y, as calculated for the mean value of x using the preceding regression model. Thus, if the mean index value of the variable “teamwork” is 5.0, then the calculated (or predicted) value of “productivity” is [4.026 + 0.223*5 =] 5.141. Because SEE = 0.825, it follows that 68 percent of productivity values will lie ±0.825 from 5.141 when “teamwork” = 5. Predictions of y for other values of x have larger standard errors.6
Assumptions and Notation
There are three simple regression assumptions. First, simple regression assumes that the relationship between two variables is linear. The linearity of bivariate relationships is easily determined through visual inspection, as shown in Figure 14.2. In fact, all analysis of relationships involving continuous variables should begin with a scatterplot. When variable
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
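The prediction and one-standard-error band described in the passage can be replayed directly from the quoted coefficients:

```python
intercept, slope = 4.026, 0.223   # regression coefficients from the passage
see = 0.825                       # standard error of the estimate (SEE)

teamwork = 5.0                               # mean value of the predictor
productivity = intercept + slope * teamwork  # predicted value: 5.141
print(productivity)
# About 68% of productivity values at teamwork = 5 fall within one SEE:
print(productivity - see, productivity + see)  # roughly 4.32 to 5.97
```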
The Economist has produced a more sophisticated set of ‘back-of-the-envelope’ estimates in an interactive basic income calculator for all OECD countries.4 This purports to show how much could be paid as a basic income by switching spending on non-health transfers, leaving tax revenues and other public spending unchanged. Interestingly, even on this very restrictive basis, a cluster of seven west European countries could already pay over $10,000 per person per year. The United States could pay $6,300 and Britain $5,800. Obviously, for most countries, the level of basic income that could be financed from this tax-neutral welfare-switching exercise would be modest – though, especially for bottom-ranked countries such as South Korea ($2,200) or Mexico (only $900), this largely reflects their current low tax take and welfare spending. The Economist’s interactive calculator also aims to calculate what tax rises would be needed to pay a basic income of a given amount. For the UK, the calculator estimates that the cost of a basic income of one-third average GDP per head would require a 15 percentage point rise in tax take. Its calculations can again be questioned in their own terms. However, all these back-of-the-envelope exercises are flawed in more fundamental ways. First, they do not allow for clawing the basic income back in tax from higher-income earners, which could be done with no net cost to the affluent or to the Exchequer, simply by tweaking tax rates and allowances so that the extra tax take equals the basic income paid. Second, they do not take account of administrative savings from removal of means testing and behaviour conditions. Administration accounted for £8 billion of the £172 billion 2013–14 budget of the UK’s Department of Work and Pensions, much of which will have gone to pay staff in local job centres to monitor and sanction benefit recipients. This does not include hundreds of millions of pounds paid to private contractors to carry out so-called ‘work assessment’ tests on people with disabilities, which have led to denial of benefits to some of society’s most vulnerable people. Third, they compare the cost of a basic income with the existing welfare budget and assume that all other areas of public spending remain intact. Yet governments can always choose to realign spending priorities. The UK government could save billions by scrapping the plan to replace the Trident nuclear missile system, now estimated to cost more than £200 billion over its lifetime. It could save further billions by ending subsidies that go predominantly to corporations and the affluent.
Guy Standing (Basic Income: And How We Can Make It Happen)
As independent as the kid appeared, a twelve-year-old had no business living on her own in an abandoned house. “What are your favorite subjects?” “Math.” “Really? Good for you. I’m terrible at math, though I can do percentages very well.” “Why?” “I’m a bartender. I can calculate tips off the top of my head. I suppose you do much higher math than that.” “Trigonometry and geometry are pretty fun.” Lucy sipped her soda. “Fun? I’m impressed. So do
Mary Ellen Taylor (Winter Cottage)
We have used the same model for analyzing the off spinners, the left-arm spinners and the fast bowlers too but it is only in this chapter that we will burden the readers with the details and logic of our model building. Our premise is that every aspect of bowling performance—wickets, strike rate (SR), bowling average, five wickets in an innings, 10 wickets in a match and the wickets taken in away matches—has a bearing on determining the overall value or effectiveness of the bowler. In order to arrive at a composite overall effectiveness index, we used the SR, bowling average, five wickets in an innings, 10 wickets in a match and the proportion of wickets taken away from home to create a relative index and converted each bowler’s performance in each of these factors into his individual index score. To calculate the index for a particular parameter, let us demonstrate with the example of Warne’s index for SR. His SR is 57.4. The cumulative SR of 38 players in our list is 2760.7 and so Warne’s SR Index is 57.4/2760.7 expressed as a percentage which is 2.7. A similar index for each parameter is calculated for each of the players. The aggregate of the index for the five parameters—SR, bowling average, five wickets/innings, 10 wickets/match and proportion of away wickets—of each player provides us a score for each bowler. And as you would have noted from the way we calculated the SR Index, the lower this score the better is the bowler’s rating; thus the one with the lowest score is best in class and the ranks would progressively go down as the individual index scores went up. Let us call this aggregated score as “Bowler Index Score.” But this “Bowler Index Score” does not recognize or give weightage to the number of wickets that a bowler had taken. The number of wickets reflects a bowler’s longevity at the highest level of the game. Since 38 bowlers in our list range from an extreme high of 708 wickets to an extreme low of 40 wickets, we decided to convert the wickets to their logarithmic value. (Log W for 100 wickets has a value of 2.0, for 200 wickets would have a value of 2.3 and for 400 would be 2.6 and so on.) In order to retain consistency in the convention of lowest figures indicating highest degree of effectiveness, we created an overall Effectiveness Index by dividing the Bowler Index Score by the Log value of the wickets taken. Thus, Effectiveness Index = Bowler Index Score/Log W.
S. Giridhar (Mid-Wicket Tales: From Trumper to Tendulkar)
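The two formulas defined in the passage are simple to sketch. The aggregate score fed to the Effectiveness Index below is a hypothetical value, not one computed from the book's data:

```python
import math

def index_score(value: float, cumulative: float) -> float:
    """A bowler's figure for one parameter as a percentage of the
    38-player cumulative total (lower is better)."""
    return value / cumulative * 100

def effectiveness_index(bowler_index_score: float, wickets: int) -> float:
    """Effectiveness Index = Bowler Index Score / log10(wickets)."""
    return bowler_index_score / math.log10(wickets)

print(index_score(57.4, 2760.7))       # Warne's strike-rate share of the total
print(effectiveness_index(12.0, 708))  # hypothetical aggregate over 708 wickets
```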
Correlation is enough,” 2 then-Wired editor in chief Chris Anderson famously declared in 2008. We can, he implied, solve innovation problems by the sheer brute force of the data deluge. Ever since Michael Lewis chronicled the Oakland A’s unlikely success in Moneyball (who knew on-base percentage was a better indicator of offensive success than batting averages?), organizations have been trying to find the Moneyball equivalent of customer data that will lead to innovation success. Yet few have. Innovation processes in many companies are structured and disciplined, and the talent applying them is highly skilled. There are careful stage-gates, rapid iterations, and checks and balances built into most organizations’ innovation processes. Risks are carefully calculated and mitigated. Principles like six-sigma have pervaded innovation process design so we now have precise measurements and strict requirements for new products to meet at each stage of their development. From the outside, it looks like companies have mastered an awfully precise, scientific process. But for most of them, innovation is still painfully hit or miss. And worst of all, all this activity gives the illusion of progress, without actually causing it. Companies are spending exponentially more to achieve only modest incremental innovations while completely missing the mark on the breakthrough innovations critical to long-term, sustainable growth. As Yogi Berra famously observed: “We’re lost, but we’re making good time!” What’s gone so wrong? Here is the fundamental problem: the masses and masses of data that companies accumulate are not organized in a way that enables them to reliably predict which ideas will succeed. Instead the data is along the lines of “this customer looks like that one,” “this product has similar performance attributes as that one,” and “these people behaved the same way in the past,” or “68 percent of customers say they prefer version A over version B.” None of that data, however, actually tells you why customers make the choices that they do.
Clayton M. Christensen (Competing Against Luck: The Story of Innovation and Customer Choice)