Difference Between Single And Double Quotes

We've searched our database for all the quotes and captions related to Difference Between Single And Double. Here they are! All 13 of them:

[The] structural theory is of extreme simplicity. It assumes that the molecule is held together by links between one atom and the next: that every kind of atom can form a definite small number of such links: that these can be single, double or triple: that the groups may take up any position possible by rotation round the line of a single but not round that of a double link: finally that with all the elements of the first short period [of the periodic table], and with many others as well, the angles between the valencies are approximately those formed by joining the centre of a regular tetrahedron to its angular points. No assumption whatever is made as to the mechanism of the linkage. Through the whole development of organic chemistry this theory has always proved capable of providing a different structure for every different compound that can be isolated. Among the hundreds of thousands of known substances, there are never more isomeric forms than the theory permits.
Nevil Vincent Sidgwick
We already have eight hundred million people living in hunger—and population is growing by eighty million a year. Over a billion people are in poverty—and present industrial strategies are making them poorer, not richer. The percentage of old people will double by 2050—and already there aren’t enough young people to care for them. Cancer rates are projected to increase by seventy percent in the next fifteen years. Within two decades our oceans will contain more microplastics than fish. Fossil fuels will run out before the end of the century. Do you have an answer to those problems? Because I do. Robot farmers will increase food production twentyfold. Robot carers will give our seniors a dignified old age. Robot divers will clear up the mess humans have made of our seas. And so on, and so on—but every single step has to be costed and paid for by the profits of the last.” He paused for breath, then went on, “My vision is a society where autonomous, intelligent bots are as commonplace as computers are now. Think about that—how different our world could be. A world where disease, hunger, manufacturing, design, are all taken care of by AI. That’s the revolution we’re shooting for. The shopbots get us to the next level, that’s all. And you know what? This is not some binary choice between idealism or realism, because for some of us idealism is just long-range realism. This shit has to happen. And you need to ask yourself, do you want to be part of that change? Or do you want to stand on the sidelines and bitch about the details?” We had all heard this speech, or some version of it, either in our job interviews, or at company events, or in passionate late-night tirades. And on every single one of us it had had a deep and transformative effect. Most of us had come to Silicon Valley back in those heady days when it seemed a new generation finally had the tools and the intelligence to change the world. 
The hippies had tried and failed; the yuppies and bankers had had their turn. Now it was down to us techies. We were fired up, we were zealous, we felt the nobility of our calling…only to discover that the general public, and our backers along with them, were more interested in 140 characters, fitness trackers, and Grumpy Cat videos. The greatest, most powerful deep-learning computers in humanity’s existence were inside Google and Facebook—and all humanity had to show for it were adwords, sponsored links, and teenagers hooked on sending one another pictures of their genitals.
J.P. Delaney (The Perfect Wife)
In her book The Government-Citizen Disconnect, the political scientist Suzanne Mettler reports that 96 percent of American adults have relied on a major government program at some point in their lives. Rich, middle-class, and poor families depend on different kinds of programs, but the average rich and middle-class family draws on the same number of government benefits as the average poor family. Student loans look like they were issued from a bank, but the only reason banks hand out money to eighteen-year-olds with no jobs, no credit, and no collateral is because the federal government guarantees the loans and pays half their interest. Financial advisers at Edward Jones or Prudential can help you sign up for 529 college savings plans, but those plans' generous tax benefits will cost the federal government an estimated $28.5 billion between 2017 and 2026. For most Americans under the age of sixty-five, health insurance appears to come from their jobs, but supporting this arrangement is one of the single largest tax breaks issued by the federal government, one that exempts the cost of employer-sponsored health insurance from taxable incomes. In 2022, this benefit is estimated to have cost the government $316 billion for those under sixty-five. By 2032, its price tag is projected to exceed $600 billion. Almost half of all Americans receive government-subsidized health benefits through their employers, and over a third are enrolled in government-subsidized retirement benefits. These participation rates, driven primarily by rich and middle-class Americans, far exceed those of even the largest programs directed at low-income families, such as food stamps (14 percent of Americans) and the Earned Income Tax Credit (19 percent). Altogether, the United States spent $1.8 trillion on tax breaks in 2021. That amount exceeded total spending on law enforcement, education, housing, healthcare, diplomacy, and everything else that makes up our discretionary budget.
Roughly half the benefits of the thirteen largest individual tax breaks accrue to the richest families, those with incomes that put them in the top 20 percent. The top 1 percent of income earners take home more than all middle-class families and double that of families in the bottom 20 percent. I can't tell you how many times someone has informed me that we should reduce military spending and redirect the savings to the poor. When this suggestion is made in a public venue, it always garners applause. I've met far fewer people who have suggested we boost aid to the poor by reducing tax breaks that mostly benefit the upper class, even though we spend over twice as much on them as on the military and national defense.
Matthew Desmond (Poverty, by America)
It is worth remembering that the rise of what we call literary fiction happened at a time when the revealed, authenticated account of the beginning was losing its authority. Now that changes in things as they are change beginnings to make them fit, beginnings have lost their mythical rigidity. There are, it is true, modern attempts to restore this rigidity. But on the whole there is a correlation between subtlety and variety in our fictions and remoteness and doubtfulness about ends and origins. There is a necessary relation between the fictions by which we order our world and the increasing complexity of what we take to be the 'real' history of that world. I propose in this talk to ask some questions about an early and very interesting example of this relation. There was a long-established opinion that the beginning was as described in Genesis, and that the end is to be as obscurely predicted in Revelation. But what if this came to seem doubtful? Supposing reason proved capable of a quite different account of the matter, an account contradicting that of faith? On the argument of these talks so far as they have gone, you would expect two developments: there should be generated fictions of concord between the old and the new explanations; and there should be consequential changes in fictive accounts of the world. And of course I should not be troubling you with all this if I did not think that such developments occurred. The changes to which I refer came with a new wave of Greek influence on Christian philosophy. The provision of accommodations between Greek and Hebrew thought is an old story, and a story of concord-fictions--necessary, as Berdyaev says, because to the Greeks the world was a cosmos, but to the Hebrews a history. But this is too enormous a tract in the history of ideas for me to wander in.
I shall make do with my single illustration, and speak of what happened in the thirteenth century when Christian philosophers grappled with the view of the Aristotelians that nothing can come of nothing--ex nihilo nihil fit--so that the world must be thought to be eternal. In the Bible the world is made out of nothing. For the Aristotelians, however, it is eternal, without beginning or end. To examine the Aristotelian arguments impartially one would need to behave as if the Bible might be wrong. And this was done. The thirteenth-century rediscovery of Aristotle led to the invention of double-truth. It takes a good deal of sophistication to do what certain philosophers then did, namely, to pursue with vigour rational enquiries the validity of which one is obliged to deny. And the eternity of the world was, of course, more than a question in a scholarly game. It called into question all that might seem ragged and implausible in the usual accounts of the temporal structure of the world, the relation of time to eternity (certainly untidy and discordant compared with the Neo-Platonic version) and of heaven to hell.
Frank Kermode (The Sense of an Ending: Studies in the Theory of Fiction)
One key characteristic of structure is its richness. To illustrate, recall the comparison that John Rawls drew between checkers and chess when he was describing the Aristotelian principle (see page 386). Both games are played on a board with 64 squares, but they have different structures. Checkers has one kind of piece, while chess has six different kinds of pieces. The movement of any checker piece is restricted to a single square per turn unless it is capturing, while movement in chess is different for each piece. In checkers, the goal is to capture all the opponents’ pieces. In chess, the goal is to trap one particular piece. The structure of chess is objectively richer than the structure of checkers. It is no coincidence that chess has thousands of books written about tactics and strategy for every aspect of the game while checkers has a fraction of that number. The nature of accomplishment in checkers and chess is also objectively different, as reflected in their relative places in Western culture.[1] I measure the richness of a structure by three aspects: principles, craft, and tools. The scientific method offers convenient examples. Conceptually, a scientific experiment proceeds according to principles such as replicability, falsifiability, and the role of the hypothesis that apply across different scientific disciplines. The actual conduct of a classic scientific experiment involves craft—the generation of a hypothesis to be tested or a topic to be explored, the creation of the methods for doing so, and meticulous observance of protocols and procedures during the actual work. The details of craft differ not only across disciplines but within disciplines. They also have a family resemblance, in the sense that a meticulous scientist behaves in ways that are recognizable to scientists in every field—“meticulous” being one of the defining characteristics of craft practiced at a high level. Tools play a double role. 
Sometimes they are created in direct response to needs generated by principles and craft—accurate thermometers are an example—but at least as often, a tool turns out to have unanticipated uses that alter both principles and craft, independently expanding the realm of things a discipline can achieve. An example is the invention of the diffraction grating to study spectra of light, which 40 years later turned out to enable astronomers to study the composition of the stars.
Charles Murray (Human Accomplishment: The Pursuit of Excellence in the Arts and Sciences, 800 B.C. to 1950)
Every Day Take Your Daily Doses Black Cumin (Nigella sativa) (¼ tsp) As noted in the Appetite Suppression section, a systematic review and meta-analysis of randomized, controlled weight-loss trials found that about a quarter teaspoon of black cumin powder every day appears to reduce body mass index within a span of a couple of months. Note that black cumin is different from regular cumin, for which the dosing is different. (See below.) Garlic Powder (¼ tsp) Randomized, double-blind, placebo-controlled studies have found that as little as a daily quarter teaspoon of garlic powder can reduce body fat at a cost of perhaps two cents a day. Ground Ginger (1 tsp) or Cayenne Pepper (½ tsp) Randomized controlled trials have found that ¼ teaspoon to 1½ teaspoons a day of ground ginger significantly decreased body weight for just pennies a day. It can be as easy as stirring the ground spice into a cup of hot water. Note: Ginger may work better in the morning than evening. Chai tea is a tasty way to combine the green tea and ginger tweaks into a single beverage. Alternately, for BAT activation, you can add one raw jalapeño pepper or a half teaspoon of red pepper powder (or, presumably, crushed red pepper flakes) into your daily diet. To help beat the heat, you can very thinly slice or finely chop the jalapeño to reduce its bite to little prickles, or mix the red pepper into soup or the whole-food vegetable smoothie I featured in one of my cooking videos on NutritionFacts.org. Nutritional Yeast (2 tsp) Two teaspoons of baker’s, brewer’s, or nutritional yeast contains roughly the amount of beta 1,3/1,6 glucans found in randomized, double-blind, placebo-controlled clinical trials to facilitate weight loss. Cumin (Cuminum cyminum) (½ tsp with lunch and dinner) Overweight women randomized to add a half teaspoon of cumin to their lunches and dinners beat out the control group by four more pounds and an extra inch off their waists.
There is also evidence to support the use of the spice saffron, but a pinch a day would cost a dollar, whereas a teaspoon of cumin costs less than ten cents. Green Tea (3 cups) Drink three cups a day between meals (waiting at least an hour after a meal so as to not interfere with iron absorption). During meals, drink water, black coffee, or hibiscus tea mixed 6:1 with lemon verbena, but never exceed three cups of fluid an hour (important given my water preloading advice). Take advantage of the reinforcing effect of caffeine by drinking your green tea along with something healthy you wish you liked more, but don’t consume large amounts of caffeine within six hours of bedtime. Taking your tea without sweetener is best, but if you typically sweeten your tea with honey or sugar, try yacon syrup instead.
Michael Greger (How Not to Diet)
Hyphen This word comes from two Greek words together meaning ‘under one’, which gets nobody anywhere and merely prompts the reflection that argument by etymology only serves the purpose of intimidating ignorant antagonists. On, then. This is one more case in which matters have not improved since Fowler’s day, since he wrote in 1926: The chaos prevailing among writers or printers or both regarding the use of hyphens is discreditable to English education … The wrong use or wrong non-use of hyphens makes the words, if strictly interpreted, mean something different from what the writers intended. It is no adequate answer to such criticisms to say that actual misunderstanding is unlikely; to have to depend on one’s employer’s readiness to take the will for the deed is surely a humiliation that no decent craftsman should be willing to put up with. And so say all of us who may be reading this book. The references there to ‘printers’ need updating to something like ‘editors’, meaning those who declare copy fit to print. Such people now often get it wrong by preserving in midcolumn a hyphen originally put at the end of a line to signal a word-break: inter-fere, say, is acceptable split between lines but not as part of a single line. This mistake is comparatively rare and seldom causes confusion; even so, time spent wondering whether an exactor may not be an ex-actor is time avoidably wasted. The hyphen is properly and necessarily used to join the halves of a two-word adjectival phrase, as in fair-haired children, last-ditch resistance, falling-down drunk, over-familiar reference. Breaches of this rule are rare and not troublesome. Hyphens are also required when a phrase of more than two words is used adjectivally, as in middle-of-the-road policy, too-good-to-be-true story, no-holds-barred contest.
No hard-and-fast rule can be devised that lays down when a two-word phrase is to be hyphenated and when the two words are to be run into one, though there will be a rough consensus that, for example, book-plate and bookseller are each properly set out and that bookplate and book-seller might seem respectively new-fangled and fussy. A hyphen is not required when a normal adverb (i.e. one ending in -ly) plus an adjective or other modifier are used in an adjectival role, as in Jack’s equally detestable brother, a beautifully kept garden, her abnormally sensitive hearing. A hyphen is required, however, when the adverb lacks a final -ly, like well, ill, seldom, altogether or one of those words like tight and slow that double as adjectives. To avoid ambiguity here we must write a well-kept garden, an ill-considered objection, a tight-fisted policy. The commonest fault in the use of the hyphen, and the hardest to eradicate, is found when an adjectival phrase is used predicatively. So a gent may write of a hard-to-conquer mountain peak but not of a mountain peak that remains hard-to-conquer, an often-proposed solution but not of one that is often-proposed. For some reason this fault is especially common when numbers, including fractions, are concerned, and we read every other day of criminals being imprisoned for two-and-a-half years, a woman becoming a mother-of-three and even of some unfortunate being stabbed six-times. And the Tories have been in power for a decade-and-a-half. Finally, there seems no end to the list of common phrases that some berk will bung a superfluous hyphen into the middle of: artificial-leg, daily-help, false-teeth, taxi-firm, martial-law, rainy-day, airport-lounge, first-wicket, piano-concerto, lung-cancer, cavalry-regiment, overseas-service. I hope I need not add that of course one none the less writes of a false-teeth problem, a first-wicket stand, etc. 
The only guide is: omit the hyphen whenever possible, so avoid not only mechanically propelled vehicle users (a beauty from MEU) but also a man eating tiger. And no one is right and no-one is wrong.
Kingsley Amis (The King's English: A Guide to Modern Usage)
The structure of de Prony’s computing office cannot be easily seen in Smith’s example. His computing staff had two distinct classes of workers. The larger of these was a staff of nearly ninety computers. These workers were quite different from Smith’s pin makers or even from the computers at the British Nautical Almanac and the Connaissance des Temps. Many of de Prony’s computers were former servants or wig dressers, who had lost their jobs when the Revolution rendered the elegant styles of Louis XVI unfashionable or even treasonous. They were not trained in mathematics and held no special interest in science. De Prony reported that most of them “had no knowledge of arithmetic beyond the two first rules [of addition and subtraction].” They were little different from manual workers and could not discern whether they were computing trigonometric functions, logarithms, or the orbit of Halley’s comet. One labor historian has described them as intellectual machines, “grasping and releasing a single piece of ‘data’ over and over again.” The second class of workers prepared instructions for the computation and oversaw the actual calculations. De Prony had no special title for this group of workers, but subsequent computing organizations came to use the term “planning committee” or merely “planners,” as they were the ones who actually planned the calculations. There were eight planners in de Prony’s organization. Most of them were experienced computers who had worked for either the Bureau du Cadastre or the Paris Observatory. A few had made interesting contributions to mathematical theory, but the majority had dealt only with the problems of practical mathematics. They took the basic equations for the trigonometric functions and reduced them to the fundamental operations of addition and subtraction. From this reduction, they prepared worksheets for the computers.
Unlike Nevil Maskelyne’s worksheets, which gave general equations to the computers, these sheets identified every operation of the calculation and left nothing for the workers to interpret. Each step of the calculation was followed by a blank space for the computers to fill with a number. Each table required hundreds of these sheets, all identical except for a single unique starting value at the top of the page. Once the computers had completed their sheets, they returned their results to the planners. The planners assembled the tables and checked the final values. The task of checking the results was a substantial burden in itself. The group did not double-compute, as that would have obviously doubled the workload. Instead the planners checked the final values by taking differences between adjacent values in order to identify miscalculated numbers. This procedure, known as “differencing,” was an important innovation for human computers. As one observer noted, differencing removed the “necessity of repeating, or even of examining, the whole of the work done by the [computing] section.” The entire operation was overseen by a handful of accomplished scientists, who “had little or nothing to do with the actual numerical work.” This group included some of France’s most accomplished mathematicians, such as Adrien-Marie Legendre (1752–1833) and Lazare-Nicolas-Marguerite Carnot (1753–1823). These scientists researched the appropriate formulas for the calculations and identified potential problems. Each formula was an approximation, as no trigonometric function can be written as an exact combination of additions and subtractions. The mathematicians analyzed the quality of the approximations and verified that all the formulas produced values adequately close to the true values of the trigonometric functions.
David Alan Grier (When Computers Were Human)
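The "differencing" check Grier describes lends itself to a short sketch. The toy below (the sine table, step size, and injected error are all invented for illustration, not taken from de Prony's actual worksheets) shows how taking differences between adjacent table entries exposes a single miscalculated value without recomputing anything:

```python
import math

# Toy stand-in for one of de Prony's trigonometric tables: sin(x)
# tabulated at a fine, even step.
step = 0.01
table = [math.sin(i * step) for i in range(200)]

# Simulate a single computing error in one entry.
table[120] += 0.001

def second_differences(values):
    """Second finite differences: values[i] - 2*values[i+1] + values[i+2]."""
    first = [b - a for a, b in zip(values, values[1:])]
    return [b - a for a, b in zip(first, first[1:])]

# For a smooth function at a fine step, second differences are tiny and vary
# slowly; a lone error e in entry k leaves the telltale pattern (+e, -2e, +e)
# at difference positions k-2, k-1, k, with the biggest spike at k-1.
diffs = second_differences(table)
suspect = max(range(len(diffs)), key=lambda i: abs(diffs[i])) + 1
print(suspect)  # -> 120: the corrupted entry, found by differencing alone
```

The same idea scales to any order of differences: the smoother the tabulated function relative to the step size, the more conspicuously a single bad entry stands out against its neighbors.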
When you measure a single spiked wave, such as that in Figure 8.9, the device registers the spike's location. If it's spiked at Strawberry Fields, that's what the device reads; if you look at the result, your brain registers that location and you become aware of it. If it's spiked at Grant's Tomb, that's what the device registers; if you look, your brain registers that location and you become aware of it. When you measure the double spiked wave in Figure 8.10, Schrödinger's math tells you to combine the two results you just found. But, says Everett, be careful and precise when you combine them. The combined result, he argued, does not yield a meter and a mind each simultaneously registering two locations. That's sloppy thinking. Instead, proceeding slowly and literally, we find that the combined result is a device and a mind registering Strawberry Fields, and a device and a mind registering Grant's Tomb. And what does that mean? I'll use broad strokes in painting the general picture, which I'll refine shortly. To accommodate Everett's suggested outcome, the device and you and everything else must split upon measurement, yielding two devices, two yous, and two everything elses-the only difference between the two being that one device and one you registers Strawberry Fields, while the other device and the other you registers Grant's Tomb. As in Figure 8.12, this implies that we now have two parallel realities, two parallel worlds. To the you occupying each, the measurement and your mental impression of the result are sharp and unique and thus feel like life as usual. The peculiarity, of course, is that there are two of you who feel this way.
Brian Greene (The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos)
Why did people make such a fateful miscalculation? For the same reason that people throughout history have miscalculated. People were unable to fathom the full consequences of their decisions. Whenever they decided to do a bit of extra work – say, to hoe the fields instead of scattering seeds on the surface – people thought, ‘Yes, we will have to work harder. But the harvest will be so bountiful! We won’t have to worry any more about lean years. Our children will never go to sleep hungry.’ It made sense. If you worked harder, you would have a better life. That was the plan. The first part of the plan went smoothly. People indeed worked harder. But people did not foresee that the number of children would increase, meaning that the extra wheat would have to be shared between more children. Neither did the early farmers understand that feeding children with more porridge and less breast milk would weaken their immune system, and that permanent settlements would be hotbeds for infectious diseases. They did not foresee that by increasing their dependence on a single source of food, they were actually exposing themselves even more to the depredations of drought. Nor did the farmers foresee that in good years their bulging granaries would tempt thieves and enemies, compelling them to start building walls and doing guard duty. Then why didn’t humans abandon farming when the plan backfired? Partly because it took generations for the small changes to accumulate and transform society and, by then, nobody remembered that they had ever lived differently. And partly because population growth burned humanity’s boats. If the adoption of ploughing increased a village’s population from 100 to 110, which ten people would have volunteered to starve so that the others could go back to the good old times? There was no going back. The trap snapped shut. The pursuit of an easier life resulted in much hardship, and not for the last time. It happens to us today. 
How many young college graduates have taken demanding jobs in high-powered firms, vowing that they will work hard to earn money that will enable them to retire and pursue their real interests when they are thirty-five? But by the time they reach that age, they have large mortgages, children to school, houses in the suburbs that necessitate at least two cars per family, and a sense that life is not worth living without really good wine and expensive holidays abroad. What are they supposed to do, go back to digging up roots? No, they double their efforts and keep slaving away.
Yuval Noah Harari (Sapiens: A Brief History of Humankind)
Afterward, she’d graciously analyzed every single nuance with me until she was convinced we were going to have a double wedding—our childhood dream—and I confirmed she was insane.
Rachel Higginson (The Difference Between Us (Opposites Attract, #2))
Scientists and engineers tend to divide their work into two large categories, sometimes described as basic research and directed research. Some of the most crucial inventions and discoveries of the modern world have come about through basic research—that is, work that was not directed toward any particular use. Albert Einstein’s picture of the universe, Alexander Fleming’s discovery of penicillin, Niels Bohr’s blueprint of the atomic nucleus, the Watson-Crick “double helix” model of DNA—all these have had enormous practical implications, but they all came out of basic research. There are just as many basic tools of modern life—the electric light, the telephone, vitamin pills, the Internet—that resulted from a clearly focused effort to solve a particular problem. In a sense, this distinction between basic and directed research encompasses the difference between science and engineering. Scientists, on the whole, are driven by the thirst for knowledge; their motivation, as the Nobel laureate Richard Feynman put it, is “the joy of finding things out.” Engineers, in contrast, are solution-driven. Their joy is making things work. The monolithic idea was an engineering solution. It worked around the tyranny of numbers by reducing the numbers to one: a complete circuit would consist of just one part—a single (“monolithic”) block of semiconductor material containing all the components and all the interconnections of the most complex circuit designs. The tangible product of that idea, known to engineers as the monolithic integrated circuit and to the world at large as the semiconductor chip, has changed the world as fundamentally as did the telephone, the light bulb, and the horseless carriage. The integrated circuit is the heart of clocks, computers, cameras, and calculators, of pacemakers and Palm Pilots, of deep-space probes and deep-sea sensors, of toasters, typewriters, cell phones, and Internet servers. 
The National Academy of Sciences declared the integrated circuit the progenitor of the “Second Industrial Revolution.” The first Industrial Revolution enhanced man’s physical prowess and freed people from the drudgery of backbreaking manual labor; the revolution spawned by the chip enhances our intellectual prowess and frees people from the drudgery of mind-numbing computational labor. A British physicist, Sir Ieuan Maddock, Her Majesty’s Chief Science Advisor, called the integrated circuit “the most remarkable technology ever to hit mankind.” A California businessman, Jerry Sanders, founder of Advanced Micro Devices, Inc., offered a more pointed assessment: “Integrated circuits are the crude oil of the eighties.”
T.R. Reid (The Chip: How Two Americans Invented the Microchip and Launched a Revolution)
This was one instance in which the medical profession totally rejected something that they were not ready to accept because, in part, there was no framework for understanding the new concept. It was only later, when the causative agents were clearly identified through the work of Robert Koch, Louis Pasteur, and Joseph Lister, that it became accepted that germs cause disease. Koch discovered that a microscopic agent was the cause of tuberculosis and showed this with total certainty, leading to a revolution in medicine. All of a sudden, science and technology seemed to hold tremendous power, once it was understood that so many diseases were caused by infectious agents, and powerful new technologies could be developed to treat them with great specificity. This naturally gave rise to a “find it and fix it” culture. Over the last one hundred years, medical science has given rise to many wonderful things. However, it is now very heavily focused on disease. We spend almost no time on health. We make an assumption that for every disease, there is a defect that we need to find and fix. We don’t deal with people throughout their lives, but only when they’re sick. In the United States, we have become accustomed to assuming that one’s health is managed by one’s doctor, and that individuals have little responsibility or control over their health. Where does this leave us? On the one hand, life expectancy in 1900 was forty years. Today it’s eighty years. We have doubled life expectancy in a hundred years. That’s almost miraculous. On the other hand, in 1900 the most likely cause of death for a young man between the ages of fifteen and twenty-five would have been infection. Today, it’s murder, suicide, drug abuse, or violent accidents. We have made tremendous progress, but some of the consequences of our progress are absolutely terrifying. In addition, we have a tremendous accumulation of chronic diseases, many of which are fostered by people’s own behavior. 
One of the problems with Western medicine is that it tends to make a reductionist assumption that for every disease, there is a single causative factor that we need to find and fix. We now are learning that there are often multiple factors, rather than a single reductionist cause of disease. People are born with a baseline risk, and then environmental factors impinge on that risk over time. There is a tremendous difference in susceptibility to different diseases, yet we often have a lot of control over environmental factors that contribute to disease progression.
Jon Kabat-Zinn (The Mind's Own Physician: A Scientific Dialogue with the Dalai Lama on the Healing Power of Meditation)