“
Another key commitment for succeeding with this strategy is to support your commitment to shutting down with a strict shutdown ritual that you use at the end of the workday to maximize the probability that you succeed. In more detail, this ritual should ensure that every incomplete task, goal, or project has been reviewed and that for each you have confirmed that either (1) you have a plan you trust for its completion, or (2) it’s captured in a place where it will be revisited when the time is right. The process should be an algorithm: a series of steps you always conduct, one after another. When you’re done, have a set phrase you say that indicates completion (to end my own ritual, I say, “Shutdown complete”). This final step sounds cheesy, but it provides a simple cue to your mind that it’s safe to release work-related thoughts for the rest of the day.
”
Cal Newport (Deep Work: Rules for Focused Success in a Distracted World)
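Read literally, Newport's ritual is a checklist with a success condition: every open item must have a trusted plan or be captured for later, and only then is the set phrase uttered. A playful sketch of that algorithm as literal code, in Java; the task fields and items are invented for illustration:

```java
import java.util.List;

// Hypothetical sketch of the shutdown ritual as a checklist algorithm:
// every incomplete item must either have a trusted plan or be captured
// somewhere it will be revisited; only then is the set phrase spoken.
public class ShutdownRitual {
    record Task(String name, boolean hasTrustedPlan, boolean capturedForLater) {}

    public static void main(String[] args) {
        List<Task> openItems = List.of(
            new Task("draft report", true,  false),
            new Task("email vendor", false, true));

        for (Task t : openItems) {
            // A task with neither a plan nor a capture blocks the ritual.
            if (!t.hasTrustedPlan() && !t.capturedForLater())
                throw new IllegalStateException("unresolved: " + t.name());
        }
        System.out.println("Shutdown complete");
    }
}
```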
“
What’s amazing is that things like hashtag design—these essentially ad hoc experiments in digital architecture—have shaped so much of our political discourse. Our world would be different if Anonymous hadn’t been the default username on 4chan, or if every social media platform didn’t center on the personal profile, or if YouTube algorithms didn’t show viewers increasingly extreme content to retain their attention, or if hashtags and retweets simply didn’t exist. It’s because of the hashtag, the retweet, and the profile that solidarity on the internet gets inextricably tangled up with visibility, identity, and self-promotion. It’s telling that the most mainstream gestures of solidarity are pure representation, like viral reposts or avatar photos with cause-related filters, and meanwhile the actual mechanisms through which political solidarity is enacted, like strikes and boycotts, still exist on the fringe.
”
Jia Tolentino (Trick Mirror)
“
When Charles Darwin was trying to decide whether he should propose to his cousin Emma Wedgwood, he got out a pencil and paper and weighed every possible consequence. In favor of marriage he listed children, companionship, and the “charms of music & female chit-chat.” Against marriage he listed the “terrible loss of time,” lack of freedom to go where he wished, the burden of visiting relatives, the expense and anxiety provoked by children, the concern that “perhaps my wife won’t like London,” and having less money to spend on books. Weighing one column against the other produced a narrow margin of victory, and at the bottom Darwin scrawled, “Marry—Marry—Marry Q.E.D.” Quod erat demonstrandum, the mathematical sign-off that Darwin himself restated in English: “It being proved necessary to Marry.”
”
Brian Christian (Algorithms to Live By: The Computer Science of Human Decisions)
“
“If you have nothing to hide, then you have nothing to fear.” This is a dangerously narrow conception of the value of privacy. Privacy is an essential human need, and central to our ability to control how we relate to the world. Being stripped of privacy is fundamentally dehumanizing, and it makes no difference whether the surveillance is conducted by an undercover policeman following us around or by a computer algorithm tracking our every move.
”
Bruce Schneier (Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World)
“
Over the next three decades, scholars and fans, aided by computational algorithms, will knit together the books of the world into a single networked literature. A reader will be able to generate a social graph of an idea, or a timeline of a concept, or a networked map of influence for any notion in the library. We’ll come to understand that no work, no idea stands alone, but that all good, true, and beautiful things are ecosystems of intertwined parts and related entities, past and present.
”
Kevin Kelly (The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future)
“
within thirty seconds of learning their name and where they lived, she would implement her social algorithm and calculate precisely where they stood in her constellation based on who their family was, who else they were related to, what their approximate net worth might be, how the fortune was derived, and what family scandals might have occurred within the past fifty years.
”
Kevin Kwan (Crazy Rich Asians (Crazy Rich Asians, #1))
“
In fact, AI might make centralized systems far more efficient than diffused systems, because machine learning works better the more information it can analyze. If you disregard all privacy concerns and concentrate all the information relating to a billion people in one database, you can train much better algorithms than if you respect individual privacy and have in your database only partial information on a million people.
”
Yuval Noah Harari (21 Lessons for the 21st Century)
“
The longer someone ignores an email before finally responding, the more relative social power that person has. Map these response times across an entire organization and you get a remarkably accurate chart of the actual social standing. The boss leaves emails unanswered for hours or days; those lower down respond within minutes. There’s an algorithm for this, a data mining method called “automated social hierarchy detection,” developed at Columbia University. When applied to the archive of email traffic at Enron Corporation before it folded, the method correctly identified the roles of top-level managers and their subordinates just by how long it took them to answer a given person’s emails. Intelligence agencies have been applying the same metric to suspected terrorist gangs, piecing together the chain of influence to spot the central figures.
”
Daniel Goleman (Focus: The Hidden Driver of Excellence)
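A minimal sketch of the core signal behind such hierarchy detection, assuming only a log of who replied and how long they took; all names and latencies here are invented, and the published method uses far richer features than a simple average:

```java
import java.time.Duration;
import java.util.*;

// Toy version of the idea: rank people by average email reply latency.
// Longer average latency is read as higher inferred social standing.
public class ReplyLatencyRank {
    record Reply(String responder, Duration latency) {}

    public static void main(String[] args) {
        List<Reply> log = List.of(
            new Reply("boss",    Duration.ofHours(26)),
            new Reply("boss",    Duration.ofHours(9)),
            new Reply("manager", Duration.ofHours(2)),
            new Reply("intern",  Duration.ofMinutes(4)),
            new Reply("intern",  Duration.ofMinutes(11)));

        // Accumulate total minutes and reply counts per person.
        Map<String, Double> avgMinutes = new HashMap<>();
        Map<String, Integer> counts = new HashMap<>();
        for (Reply r : log) {
            avgMinutes.merge(r.responder(), (double) r.latency().toMinutes(), Double::sum);
            counts.merge(r.responder(), 1, Integer::sum);
        }
        avgMinutes.replaceAll((who, total) -> total / counts.get(who));

        // Print slowest (highest inferred standing) first.
        avgMinutes.entrySet().stream()
            .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
            .forEach(e -> System.out.printf("%-8s %6.0f min%n", e.getKey(), e.getValue()));
    }
}
```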
“
In the economic sphere too, the ability to hold a hammer or press a button is becoming less valuable than before. In the past, there were many things only humans could do. But now robots and computers are catching up, and may soon outperform humans in most tasks. True, computers function very differently from humans, and it seems unlikely that computers will become humanlike any time soon. In particular, it doesn’t seem that computers are about to gain consciousness, and to start experiencing emotions and sensations. Over the last decades there has been an immense advance in computer intelligence, but there has been exactly zero advance in computer consciousness. As far as we know, computers in 2016 are no more conscious than their prototypes in the 1950s. However, we are on the brink of a momentous revolution. Humans are in danger of losing their value, because intelligence is decoupling from consciousness.
Until today, high intelligence always went hand in hand with a developed consciousness. Only conscious beings could perform tasks that required a lot of intelligence, such as playing chess, driving cars, diagnosing diseases or identifying terrorists. However, we are now developing new types of non-conscious intelligence that can perform such tasks far better than humans. For all these tasks are based on pattern recognition, and non-conscious algorithms may soon excel human consciousness in recognising patterns. This raises a novel question: which of the two is really important, intelligence or consciousness? As long as they went hand in hand, debating their relative value was just a pastime for philosophers. But in the twenty-first century, this is becoming an urgent political and economic issue. And it is sobering to realise that, at least for armies and corporations, the answer is straightforward: intelligence is mandatory but consciousness is optional.
”
Yuval Noah Harari (Homo Deus: A History of Tomorrow)
“
Study after study shows that diverse teams perform better. In a 2014 report for Scientific American, Columbia professor Katherine W. Phillips examined a broad cross section of research related to diversity and organizational performance. And over and over, she found that the simple act of interacting in a diverse group improves performance, because it “forces group members to prepare better, to anticipate alternative viewpoints and to expect that reaching consensus will take effort.”
”
Sara Wachter-Boettcher (Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech)
“
Giselle’s features solidified. Whether she was beautiful, hot, pretty, or none of the above changed according to who was doing the looking and to whom she was being compared. Beauty was relative, reflective; you learned early to rank and code, to tally appearance and desirability in seconds, a subconscious algorithm that input body parts, clothing, attitude, makeup.
”
Lisa Ko (Memory Piece)
“
It’s important to note, as we endeavor to understand relative harms, that they are entirely dependent on context. For example, if a high-risk score for a given defendant qualified him for a reentry program that would help him find a job upon release from prison, we’d be much less worried about false positives. Or in the case of the child abuse algorithm, if we are sure that a high-risk score leads to a thorough and fair-minded investigation of the situation at home, we’d be less worried about children unnecessarily removed from their parents. In the end, how an algorithm will be used should affect how it is constructed and optimized.
”
Cathy O'Neil (Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy)
“
It is best to be the CEO; it is satisfactory to be an early employee, maybe the fifth or sixth or perhaps the tenth. Alternately, one may become an engineer devising precious algorithms in the cloisters of Google and its like. Otherwise, one becomes a mere employee. A coder of websites at Facebook is no one in particular. A manager at Microsoft is no one. A person (think woman) working in customer relations is a particular type of no one, banished to the bottom, as always, for having spoken directly to a non-technical human being. All these and others are ways for strivers to fall by the wayside — as the startup culture sees it — while their betters race ahead of them. Those left behind may see themselves as ordinary, even failures.
”
Ellen Ullman (Life in Code: A Personal History of Technology)
“
In the late twentieth century democracies usually outperformed dictatorships because democracies were better at data-processing. Democracy diffuses the power to process information and make decisions among many people and institutions, whereas dictatorship concentrates information and power in one place. Given twentieth-century technology, it was inefficient to concentrate too much information and power in one place. Nobody had the ability to process all the information fast enough and make the right decisions. This is part of the reason why the Soviet Union made far worse decisions than the United States, and why the Soviet economy lagged far behind the American economy.
However, soon AI might swing the pendulum in the opposite direction. AI makes it possible to process enormous amounts of information centrally. Indeed, AI might make centralised systems far more efficient than diffused systems, because machine learning works better the more information it can analyse. If you concentrate all the information relating to a billion people in one database, disregarding all privacy concerns, you can train much better algorithms than if you respect individual privacy and have in your database only partial information on a million people. For example, if an authoritarian government orders all its citizens to have their DNA scanned and to share all their medical data with some central authority, it would gain an immense advantage in genetics and medical research over societies in which medical data is strictly private. The main handicap of authoritarian regimes in the twentieth century – the attempt to concentrate all information in one place – might become their decisive advantage in the twenty-first century.
”
Yuval Noah Harari (21 Lessons for the 21st Century)
“
By the end of this decade, permutations and combinations of genetic variants will be used to predict variations in human phenotype, illness, and destiny. Some diseases might never be amenable to such a genetic test, but perhaps the severest variants of schizophrenia or heart disease, or the most penetrant forms of familial cancer, say, will be predictable by the combined effect of a handful of mutations. And once an understanding of “process” has been built into predictive algorithms, the interactions between various gene variants could be used to compute ultimate effects on a whole host of physical and mental characteristics beyond disease alone. Computational algorithms could determine the probability of the development of heart disease or asthma or sexual orientation and assign a level of relative risk for various fates to each genome. The genome will thus be read not in absolutes, but in likelihoods – like a report card that does not contain grades but probabilities, or a resume that does not list past experiences but future propensities. It will become a manual for previvorship.
”
Siddhartha Mukherjee (The Gene: An Intimate History)
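A toy sketch of the kind of predictive combination Mukherjee describes: per-variant effects added on a log-odds scale, then squashed into a probability. Every number here (the baseline, the effect sizes, the genotype) is invented for illustration, and real polygenic models use thousands of variants:

```java
// Hypothetical polygenic risk sketch: combine per-variant effect sizes
// additively in log-odds, then map to a probability with a logistic.
public class RiskScore {
    static double logistic(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    public static void main(String[] args) {
        double baseline = -3.0;                  // invented log-odds with no risk alleles
        double[] effectSizes = {0.8, 0.5, 1.2};  // invented per-variant log-odds contributions
        int[] genotype = {1, 0, 2};              // copies of each risk allele carried (0, 1, or 2)

        double logOdds = baseline;
        for (int i = 0; i < effectSizes.length; i++)
            logOdds += effectSizes[i] * genotype[i];

        System.out.printf("relative risk (as probability): %.3f%n", logistic(logOdds));
    }
}
```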
“
Search engine query data is not the product of a designed statistical experiment and finding a way to meaningfully analyse such data and extract useful knowledge is a new and challenging field that would benefit from collaboration. For the 2012–13 flu season, Google made significant changes to its algorithms and started to use a relatively new mathematical technique called Elasticnet, which provides a rigorous means of selecting and reducing the number of predictors required. In 2011, Google launched a similar program for tracking Dengue fever, but they are no longer publishing predictions and, in 2015, Google Flu Trends was withdrawn. They are, however, now sharing their data with academic researchers...
Google Flu Trends, one of the earlier attempts at using big data for epidemic prediction, provided useful insights to researchers who came after them...
The Delphi Research Group at Carnegie Mellon University won the CDC’s challenge to ‘Predict the Flu’ in both 2014–15 and 2015–16 for the most accurate forecasters. The group successfully used data from Google, Twitter, and Wikipedia for monitoring flu outbreaks.
”
Dawn E. Holmes (Big Data: A Very Short Introduction (Very Short Introductions))
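For reference, the elastic net the passage mentions is usually posed as the penalized least-squares problem below; the form shown follows the common glmnet parameterization, an assumption, since the passage does not give Google's exact formulation. The L1 term is what selects and reduces the number of predictors by driving coefficients exactly to zero, while the L2 term stabilizes correlated predictors:

```latex
\hat{\beta} \;=\; \arg\min_{\beta}\;
\frac{1}{2n}\,\lVert y - X\beta \rVert_2^2
\;+\; \lambda\!\left( \alpha\,\lVert \beta \rVert_1
  \;+\; \frac{1-\alpha}{2}\,\lVert \beta \rVert_2^2 \right),
\qquad 0 \le \alpha \le 1 .
```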
“
"Why, exactly, is Marduk handing Hammurabi a one and a zero in this picture?" Hiro asks.
"They were emblems of royal power," the Librarian says. "Their origin is obscure."
"Enki must have been responsible for that one," Hiro says.
"Enki's most important role is as the creator and guardian of the me and the gis-hur, the 'key words' and 'patterns' that rule the universe."
"Tell me more about the me."
"To quote Kramer and Maier again, '[They believed in] the existence from time primordial of a fundamental, unalterable, comprehensive assortment of powers and duties, norms and standards, rules and regulations, known as me, relating to the cosmos and its components, to gods and humans, to cities and countries, and to the varied aspects of civilized life.'"
"Kind of like the Torah."
"Yes, but they have a kind of mystical or magical force. And they often deal with banal subjects -- not just religion."
"Examples?"
"In one myth, the goddess Inanna goes to Eridu and tricks Enki into giving her ninety-four me and brings them back to her home town of Uruk, where they are greeted with much commotion and rejoicing."
"Inanna is the person that Juanita's obsessed with."
"Yes, sir. She is hailed as a savior because 'she brought the perfect execution of the me.'"
"Execution? Like executing a computer program?"
"Yes. Apparently, they are like algorithms for carrying out certain activities essential to the society. Some of them have to do with the workings of priesthood and kingship. Some explain how to carry out religious ceremonies. Some relate to the arts of war and diplomacy. Many of them are about the arts and crafts: music, carpentry, smithing, tanning, building, farming, even such simple tasks as lighting fires."
"The operating system of society."
"I'm sorry?"
"When you first turn on a computer, it is an inert collection of circuits that can't really do anything. To start up the machine, you have to infuse those circuits with a collection of rules that tell it how to function. How to be a computer. It sounds as though these me served as the operating system of the society, organizing an inert collection of people into a functioning system."
"As you wish. In any case, Enki was the guardian of the me."
"So he was a good guy, really."
"He was the most beloved of the gods."
"He sounds like kind of a hacker."
”
Neal Stephenson (Snow Crash)
“
At this point, the cautious reader might wish to read over the whole argument again, as presented above, just to make sure that I have not indulged in any 'sleight of hand'! Admittedly there is an air of the conjuring trick about the argument, but it is perfectly legitimate, and it only gains in strength the more minutely it is examined. We have found a computation Ck(k) that we know does not stop; yet the given computational procedure A is not powerful enough to ascertain that fact. This is the Gödel(–Turing) theorem in the form that I require. It applies to any computational procedure A whatever for ascertaining that computations do not stop, so long as we know it to be sound. We deduce that no knowably sound set of computational rules (such as A) can ever suffice for ascertaining that computations do not stop, since there are some non-stopping computations (such as Ck(k)) that must elude these rules. Moreover, since from the knowledge of A and of its soundness, we can actually construct a computation Ck(k) that we can see does not ever stop, we deduce that A cannot be a formalization of the procedures available to mathematicians for ascertaining that computations do not stop, no matter what A is.
Hence:
(G) Human mathematicians are not using a knowably sound algorithm in order to ascertain mathematical truth.
It seems to me that this conclusion is inescapable. However, many people have tried to argue against it – bringing in objections like those summarized in the queries Q1–Q20 of 2.6 and 2.10 below – and certainly many would argue against the stronger deduction that there must be something fundamentally non-computational in our thought processes. The reader may indeed wonder what on earth mathematical reasoning like this, concerning the abstract nature of computations, can have to say about the workings of the human mind. What, after all, does any of this have to do with the issue of conscious awareness? The answer is that the argument indeed says something very significant about the mental quality of understanding – in relation to the general issue of computation – and, as was argued in 1.12, the quality of understanding is something dependent upon conscious awareness. It is true that, for the most part, the foregoing reasoning has been presented as just a piece of mathematics, but there is the essential point that the algorithm A enters the argument at two quite different levels. At the one level, it is being treated as just some algorithm that has certain properties, but at the other, we attempt to regard A as being actually 'the algorithm that we ourselves use' in coming to believe that a computation will not stop. The argument is not simply about computations. It is also about how we use our conscious understanding in order to infer the validity of some mathematical claim – here the non-stopping character of Ck(k). It is the interplay between the two different levels at which the algorithm A is being considered – as a putative instance of conscious activity and as a computation itself – that allows us to arrive at a conclusion expressing a fundamental conflict between such conscious activity and mere computation.
”
Roger Penrose (Shadows of the Mind: A Search for the Missing Science of Consciousness)
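For readers who want Ck(k) pinned down, here is a compressed restatement of the construction, assuming the standard enumeration C0, C1, C2, ... of all computations of one parameter; this is a sketch of the argument as quoted, not of Penrose's full treatment:

```latex
% Soundness of A: if A(q,n) halts, that halt certifies C_q(n) never stops.
A(q,n)\ \text{halts} \;\Longrightarrow\; C_q(n)\ \text{does not stop}.
% Running A on the diagonal pair (n,n) is itself a computation of the
% single parameter n, so it occurs somewhere in the enumeration:
\exists\, k:\quad C_k(n) = A(n,n)\ \text{for all } n,
\qquad\text{so in particular}\quad C_k(k) = A(k,k).
% If A(k,k) stopped, soundness would force C_k(k) = A(k,k) not to stop,
% a contradiction. Hence A(k,k), and with it C_k(k), never stops; yet A,
% being the very procedure in question, can never certify that fact.
```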
“
By now, though it had been a steep learning curve, he was fairly well versed on the basics of how clearing worked: When a customer bought shares in a stock on Robinhood — say, GameStop — at a specific price, the order was first sent to Robinhood's in-house clearing brokerage, which in turn bundled the trade to a market maker for execution. The trade was then brought to a clearinghouse, which oversaw the trade all the way to settlement.
During this time period, the trade itself needed to be 'insured' against anything that might go wrong, such as some sort of systemic collapse or a default by either party — although in reality, in regulated markets, this seemed extremely unlikely. While the customer's money was temporarily put aside, essentially in an untouchable safe, for the two days it took for the clearing agency to verify that both parties were able to provide what they had agreed upon — the brokerage house, Robinhood — had to insure the deal with a deposit; money of its own, separate from the money that the customer had provided, that could be used to guarantee the value of the trade. In financial parlance, this 'collateral' was known as VAR — or value at risk.
For a single trade of a simple asset, it would have been relatively easy to know how much the brokerage would need to deposit to insure the situation; the risk of something going wrong would be small, and the total value would be simple to calculate. If GME was trading at $400 a share and a customer wanted ten shares, there was $4000 at risk, plus or minus some nominal amount due to minute vagaries in market fluctuations during the two-day period before settlement. In such a simple situation, Robinhood might be asked to put up $4000 and change — in addition to the $4000 of the customer's buy order, which remained locked in the safe.
The deposit requirement calculation grew more complicated as layers were added onto the trading situation. A single trade had low inherent risk; multiplied to millions of trades, the risk profile began to change. The more volatile the stock — in price and/or volume — the riskier a buy or sell became.
Of course, the NSCC did not make these calculations by hand; they used sophisticated algorithms to digest the numerous inputs coming in from the trade — type of equity, volume, current volatility, where it fit into a brokerage's portfolio as a whole — and spit out a 'recommendation' of what sort of deposit would protect the trade. And this process was entirely automated; the brokerage house would continually run its trading activity through the federal clearing system and would receive its updated deposit requirements as often as every fifteen minutes while the market was open. Premarket during a trading week, that number would come in at 5:11 a.m. East Coast time, usually right as Jim, in Orlando, was finishing his morning coffee. Robinhood would then have until 10:00 a.m. to satisfy the deposit requirement for the upcoming day of trading — or risk being in default, which could lead to an immediate shutdown of all operations.
Usually, the deposit requirement was tied closely to the actual dollars being 'spent' on the trades; a near equal number of buys and sells in a brokerage house's trading profile lowered its overall risk, and though volatility was common, especially in the past half-decade, even a two-day settlement period came with an acceptable level of confidence that nobody would fail to deliver on their trades.
”
Ben Mezrich (The Antisocial Network: The GameStop Short Squeeze and the Ragtag Group of Amateur Traders That Brought Wall Street to Its Knees)
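An illustrative-only sketch of the deposit idea the passage walks through: scale the dollar exposure by a volatility factor over the two-day settlement window, so that a manic stock demands a far larger deposit than a calm one. The real NSCC calculation is far more complex; the formula and every number here are invented for illustration:

```java
// Hypothetical sketch of a clearing deposit ("VAR") requirement:
// deposit grows with dollar exposure and with the stock's volatility
// over the two-day settlement window. Not the actual NSCC formula.
public class DepositSketch {
    static double depositRequirement(double price, long netShares, double dailyVolatility) {
        double exposure = Math.abs(price * netShares);      // dollars at risk before settlement
        double twoDayVol = dailyVolatility * Math.sqrt(2);  // scale one-day volatility to two days
        return exposure * twoDayVol;                        // larger swings -> larger deposit
    }

    public static void main(String[] args) {
        // Ten shares of a $400 stock: a calm stock versus a meme stock.
        System.out.printf("calm:  $%.0f%n", depositRequirement(400, 10, 0.02));
        System.out.printf("manic: $%.0f%n", depositRequirement(400, 10, 0.60));
    }
}
```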
“
For example, consider a stack (which is a first-in, last-out list). You might have a program that requires three different types of stacks. One stack is used for integer values, one for floating-point values, and one for characters. In this case, the algorithm that implements each stack is the same, even though the data being stored differs. In a non-object-oriented language, you would be required to create three different sets of stack routines, with each set using different names. However, because of polymorphism, in Java you can create one general set of stack routines that works for all three specific situations. This way, once you know how to use one stack, you can use them all. More generally, the concept of polymorphism is often expressed by the phrase “one interface, multiple methods.” This means that it is possible to design a generic interface to a group of related activities. Polymorphism helps reduce complexity by allowing the same interface to be used to specify a general class of action.
”
Herbert Schildt (Java: A Beginner's Guide)
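A minimal sketch of the "one interface, multiple methods" idea for the stack example, using Java generics, which is one natural way in modern Java to get a single set of stack routines serving integers, floating-point values, and characters alike:

```java
import java.util.ArrayList;
import java.util.List;

// One generic stack: the same push/pop interface works for Integer,
// Double, and Character without three separate sets of routines.
public class Stack<T> {
    private final List<T> items = new ArrayList<>();

    public void push(T item) { items.add(item); }

    public T pop() {
        if (items.isEmpty()) throw new IllegalStateException("stack is empty");
        return items.remove(items.size() - 1);   // last in, first out
    }

    public boolean isEmpty() { return items.isEmpty(); }

    public static void main(String[] args) {
        Stack<Integer> ints = new Stack<>();
        Stack<Character> chars = new Stack<>();
        ints.push(42);
        chars.push('x');
        System.out.println(ints.pop() + " " + chars.pop()); // 42 x
    }
}
```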
“
This is completely wrong, Manson argues, because happiness is a fleeting state. Once we solve our momentary happiness algorithm, a new algorithm will inevitably appear, whispering to us that yeah, X is okay, but if we could just achieve something even better, then we’d really have it made. Everything is relative.
”
Worth Books (Summary and Analysis of The Subtle Art of Not Giving a F*ck: A Counterintuitive Approach to Living a Good Life: Based on the Book by Mark Manson (Smart Summaries))
“
It’s easy for you to tell what it’s a photo of, but to program a function that inputs nothing but the colors of all the pixels of an image and outputs an accurate caption such as “A group of young people playing a game of frisbee” had eluded all the world’s AI researchers for decades. Yet a team at Google led by Ilya Sutskever did precisely that in 2014. Input a different set of pixel colors, and it replies “A herd of elephants walking across a dry grass field,” again correctly. How did they do it? Deep Blue–style, by programming handcrafted algorithms for detecting frisbees, faces and the like? No, by creating a relatively simple neural network with no knowledge whatsoever about the physical world or its contents, and then letting it learn by exposing it to massive amounts of data. AI visionary Jeff Hawkins wrote in 2004 that “no computer can…see as well as a mouse,” but those days are now long gone.
”
Max Tegmark (Life 3.0: Being Human in the Age of Artificial Intelligence)
“
From time to time, a complex algorithm will lead to a longer routine, and in those circumstances, the routine should be allowed to grow organically up to 100–200 lines. (A line is a noncomment, nonblank line of source code.) Decades of evidence say that routines of such length are no more error prone than shorter routines. Let issues such as the routine's cohesion, depth of nesting, number of variables, number of decision points, number of comments needed to explain the routine, and other complexity-related considerations dictate the length of the routine rather than imposing a length restriction per se.
”
Steve McConnell (Code Complete)
“
This curve, which looks like an elongated S, is variously known as the logistic, sigmoid, or S curve. Peruse it closely, because it’s the most important curve in the world. At first the output increases slowly with the input, so slowly it seems constant. Then it starts to change faster, then very fast, then slower and slower until it becomes almost constant again. The transfer curve of a transistor, which relates its input and output voltages, is also an S curve. So both computers and the brain are filled with S curves. But it doesn’t end there. The S curve is the shape of phase transitions of all kinds: the probability of an electron flipping its spin as a function of the applied field, the magnetization of iron, the writing of a bit of memory to a hard disk, an ion channel opening in a cell, ice melting, water evaporating, the inflationary expansion of the early universe, punctuated equilibria in evolution, paradigm shifts in science, the spread of new technologies, white flight from multiethnic neighborhoods, rumors, epidemics, revolutions, the fall of empires, and much more. The Tipping Point could equally well (if less appealingly) be entitled The S Curve. An earthquake is a phase transition in the relative position of two adjacent tectonic plates. A bump in the night is just the sound of the microscopic tectonic plates in your house’s walls shifting, so don’t be scared. Joseph Schumpeter said that the economy evolves by cracks and leaps: S curves are the shape of creative destruction. The effect of financial gains and losses on your happiness follows an S curve, so don’t sweat the big stuff. The probability that a random logical formula is satisfiable—the quintessential NP-complete problem—undergoes a phase transition from almost 1 to almost 0 as the formula’s length increases. Statistical physicists spend their lives studying phase transitions.
”
Pedro Domingos (The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World)
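For reference, the curve itself in its standard logistic form: near-flat at both extremes and steepest at the midpoint, exactly the slow-fast-slow shape the passage describes:

```latex
\sigma(x) \;=\; \frac{1}{1 + e^{-x}},
\qquad \sigma(x)\to 0 \ \text{as}\ x\to-\infty,
\quad \sigma(0)=\tfrac{1}{2},
\quad \sigma(x)\to 1 \ \text{as}\ x\to+\infty .
```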
“
The S curve is not just important as a model in its own right; it’s also the jack-of-all-trades of mathematics. If you zoom in on its midsection, it approximates a straight line. Many phenomena we think of as linear are in fact S curves, because nothing can grow without limit. Because of relativity, and contra Newton, acceleration does not increase linearly with force, but follows an S curve centered at zero. So does electric current as a function of voltage in the resistors found in electronic circuits, or in a light bulb (until the filament melts, which is itself another phase transition). If you zoom out from an S curve, it approximates a step function, with the output suddenly changing from zero to one at the threshold. So depending on the input voltages, the same curve represents the workings of a transistor in both digital computers and analog devices like amplifiers and radio tuners. The early part of an S curve is effectively an exponential, and near the saturation point it approximates exponential decay. When someone talks about exponential growth, ask yourself: How soon will it turn into an S curve? When will the population bomb peter out, Moore’s law lose steam, or the singularity fail to happen? Differentiate an S curve and you get a bell curve: slow, fast, slow becomes low, high, low. Add a succession of staggered upward and downward S curves, and you get something close to a sine wave. In fact, every function can be closely approximated by a sum of S curves: when the function goes up, you add an S curve; when it goes down, you subtract one. Children’s learning is not a steady improvement but an accumulation of S curves. So is technological change. Squint at the New York City skyline and you can see a sum of S curves unfolding across the horizon, each as sharp as a skyscraper’s corner. Most importantly for us, S curves lead to a new solution to the credit-assignment problem. If the universe is a symphony of phase transitions, let’s model it with one. That’s what the brain does: it tunes the system of phase transitions inside to the one outside. So let’s replace the perceptron’s step function with an S curve and see what happens.
”
Pedro Domingos (The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World)
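A minimal sketch of the swap the passage ends on: the same weighted sum passed through a hard step (the classic perceptron) versus through the S curve (a logistic unit). The step output is all-or-nothing; the sigmoid output is graded and differentiable, which is what makes credit assignment tractable. Weights and inputs are invented for illustration:

```java
// Same weighted sum, two output functions: a hard threshold (perceptron)
// and a smooth S curve (logistic unit).
public class SigmoidUnit {
    static double weightedSum(double[] w, double[] x, double bias) {
        double s = bias;
        for (int i = 0; i < w.length; i++) s += w[i] * x[i];
        return s;
    }

    static int step(double s) { return s >= 0 ? 1 : 0; }               // all-or-nothing
    static double sigmoid(double s) { return 1 / (1 + Math.exp(-s)); } // graded, differentiable

    public static void main(String[] args) {
        double[] w = {0.9, -0.4}, x = {1.0, 0.5};
        double s = weightedSum(w, x, -0.3);
        System.out.println("step:    " + step(s));
        System.out.println("sigmoid: " + sigmoid(s));
    }
}
```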
“
Fiscal Numbers (the latter uniquely identifies a particular hospitalization for patients who might have been admitted multiple times), which allowed us to merge information from many different hospital sources. The data were finally organized into a comprehensive relational database. More information on database merger, in particular, how database integrity was ensured, is available at the MIMIC-II web site [1]. The database user guide is also online [2]. An additional task was to convert the patient waveform data from Philips’ proprietary format into an open-source format. With assistance from the medical equipment vendor, the waveforms, trends, and alarms were translated into WFDB, an open data format that is used for publicly available databases on the National Institutes of Health-sponsored PhysioNet web site [3]. All data that were integrated into the MIMIC-II database were de-identified in compliance with Health Insurance Portability and Accountability Act standards to facilitate public access to MIMIC-II. Deletion of protected health information from structured data sources was straightforward (e.g., database fields that provide the patient name, date of birth, etc.). We also removed protected health information from the discharge summaries, diagnostic reports, and the approximately 700,000 free-text nursing and respiratory notes in MIMIC-II using an automated algorithm that has been shown to have superior performance in comparison to clinicians in detecting protected health information [4]. This algorithm accommodates the broad spectrum of writing styles in our data set, including personal variations in syntax, abbreviations, and spelling. We have posted the algorithm in open-source form as a general tool to be used by others for de-identification of free-text notes [5].
”
Mit Critical Data (Secondary Analysis of Electronic Health Records)
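A toy illustration of rule-based scrubbing of identifiers in free text. The actual de-identification algorithm the passage references is published separately and is far more sophisticated (it handles names, variant spellings, and abbreviations); the two patterns below are invented purely to show the replace-with-tag mechanic:

```java
import java.util.regex.Pattern;

// Hypothetical mini-scrubber: replace date-like and phone-like strings
// with bracketed tags. Not the MIMIC-II algorithm, just the general idea.
public class DeidSketch {
    private static final Pattern DATE  = Pattern.compile("\\b\\d{1,2}/\\d{1,2}/\\d{2,4}\\b");
    private static final Pattern PHONE = Pattern.compile("\\b\\d{3}-\\d{3}-\\d{4}\\b");

    static String scrub(String note) {
        String noDates = DATE.matcher(note).replaceAll("[**DATE**]");
        return PHONE.matcher(noDates).replaceAll("[**PHONE**]");
    }

    public static void main(String[] args) {
        System.out.println(scrub("Seen on 3/14/2004; callback 617-555-0123."));
        // -> Seen on [**DATE**]; callback [**PHONE**].
    }
}
```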
“
Chasing tax cheats using normal procedures was not an option. It would take decades just to identify anything like the majority of them and centuries to prosecute them successfully; the more we caught, the more clogged up the judicial system would become. We needed a different approach. Once Danis was on board a couple of days later, together we thought of one: we would extract historical and real-time data from the banks on all transfers taking place within Greece as well as in and out of the country and commission software to compare the money flows associated with each tax file number with the tax returns of that same file number. The algorithm would be designed to flag up any instance where declared income seemed to be substantially lower than actual income. Having identified the most likely offenders in this way, we would make them an offer they could not refuse. The plan was to convene a press conference at which I would make it clear that anyone caught by the new system would be subject to 45 per cent tax, large penalties on 100 per cent of their undeclared income and criminal prosecution. But as our government sought to establish a new relationship of trust between state and citizenry, there would be an opportunity to make amends anonymously and at minimum cost. I would announce that for the next fortnight a new portal would be open on the ministry’s website on which anyone could register any previously undeclared income for the period 2000–14. Only 15 per cent of this sum would be required in tax arrears, payable via web banking or debit card. In return for payment, the taxpayer would receive an electronic receipt guaranteeing immunity from prosecution for previous non-disclosure. Alongside this I resolved to propose a simple deal to the finance minister of Switzerland, where so many of Greece’s tax cheats kept their untaxed money. In a rare example of the raw power of the European Union being used as a force for good, Switzerland had recently been forced to disclose all banking information pertaining to EU citizens by 2017. Naturally, the Swiss feared that large EU-domiciled depositors who did not want their bank balances to be reported to their country’s tax authorities might shift their money before the revelation deadline to some other jurisdiction, such as the Cayman Islands, Singapore or Panama. My proposals were thus very much in the Swiss finance minister’s interests: a 15 per cent tax rate was a relatively small price to pay for legalizing a stash and allowing it to remain in safe, conveniently located Switzerland. I would pass a law through Greece’s parliament that would allow for the taxation of money in Swiss bank accounts at this exceptionally low rate, and in return the Swiss finance minister would require all his country’s banks to send their Greek customers a friendly letter informing them that, unless they produced the electronic receipt and immunity certificate provided by my ministry’s web page, their bank account would be closed within weeks. To my great surprise and delight, my Swiss counterpart agreed to the proposal.
”
Yanis Varoufakis (Adults in the Room: My Battle with Europe's Deep Establishment)
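A minimal sketch of the flagging rule described: compare observed money flows per tax file number with declared income and flag any large gap. The threshold and every figure are invented for illustration; the real system would have worked on full transfer histories rather than a single aggregate:

```java
import java.util.List;

// Hypothetical flagging rule: observed flows far above declared income
// mark a tax file number as a likely offender.
public class TaxFlagSketch {
    record Taxpayer(String fileNumber, double declaredIncome, double observedFlows) {}

    public static void main(String[] args) {
        List<Taxpayer> records = List.of(
            new Taxpayer("GR-001", 20_000, 22_500),
            new Taxpayer("GR-002", 15_000, 180_000),
            new Taxpayer("GR-003", 55_000, 61_000));

        double ratioThreshold = 2.0; // invented: flag flows over twice declared income
        for (Taxpayer t : records) {
            if (t.observedFlows() > ratioThreshold * t.declaredIncome()) {
                System.out.println("flag " + t.fileNumber()
                    + ": declared " + t.declaredIncome()
                    + ", observed " + t.observedFlows());
            }
        }
    }
}
```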
“
a machine which learns from patterns in human-generated data, and autonomously manipulates human language, knowledge and relations, is more than a machine. It is a social agent: a participant in society, simultaneously participated in by it. As such, it becomes a legitimate object of sociological research.
”
Massimo Airoldi (Machine Habitus: Toward a Sociology of Algorithms)
“
More fundamentally, productivity gains from automation may always be somewhat limited, especially compared to the introduction of new products and tasks that transform the production process, such as those in the early Ford factories. Automation is about substituting cheaper machines or algorithms for human labor, and reducing production costs by 10 or even 20 percent in a few tasks will have relatively small consequences for TFP or the efficiency of the production process. In contrast, introducing new technologies, such as electrification, novel designs, or new production tasks, has been at the root of transformative TFP gains throughout much of the twentieth century.
”
Simon Johnson (Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity)
“
As noted previously, there are a number of other algorithms beyond search, prime factors and Monte Carlo methods, though many of these apply only to very specialist mathematical problems and may never have practical applications. As yet, though, the range available is relatively limited. Some of this may be due to the limitations that are imposed in dreaming up algorithms without an actual device to run them on, but it is entirely possible that the list will always be fairly short, as we shouldn’t underestimate the difficulties of getting quantum algorithms that will run. However, Lov Grover commented in an interview with the author a while ago: ‘Not everyone agrees with this, but I believe that there are many more quantum algorithms to be discovered.’
Even if Grover is right, quantum computers are never going to supplant conventional computers as general-purpose machines. They are always likely to be specialist in application. And, as we shall see, it is not easy to get quantum computers to work at all, let alone develop them into robust desktop devices like a familiar PC.
”
Brian Clegg (Quantum Computing: The transformative technology of the Qubit Revolution)
“
is the advancement of the mathematical tools called algorithms and their related sophisticated software. Never before has so much mental power been computerized and made available to so many—power to deconstruct and predict patterns and changes
”
Ram Charan (The Attacker's Advantage: Turning Uncertainty into Breakthrough Opportunities)
“
Classical mathematicians have trouble understanding the set ℝ of Constructive real numbers because it seems to be both countable and uncountable. The Cantor diagonal argument—an algorithm that, given a sequence of real numbers, produces a real number different from every number in the sequence—is completely Constructive, and seems to show that the set is uncountable. But every real number is given by an algorithm that is described by a finite sequence of symbols, and the set of all finite sequences of symbols is countable.
The situation clarifies if we see that Cantor discovered a difference in complexity rather than in size. The set ℝ is not bigger, but more complex than the set ℕ. Its complexity is related to the fact that real numbers are algorithms, and to the undecidability of the halting problem shown by Turing. Given a set of symbols purporting to describe the algorithm for a real number, Turing showed that we have no algorithm that decides whether it actually computes a real number or goes into an infinite loop. So we have no way to make a list of all real numbers.
”
Newcomb Greenleaf
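A constructive sketch of the diagonal algorithm the passage invokes: given reals presented digit by digit, build a number whose nth digit differs from the nth digit of the nth real, avoiding the digits 0 and 9 so that equality cannot sneak in through dual decimal expansions (0.4999... = 0.5000...). The sample digit function standing in for an enumerated list is invented for illustration:

```java
import java.util.function.BiFunction;

// Cantor's diagonal construction, made concrete: the output differs
// from the n-th listed real at its n-th decimal digit.
public class Diagonal {
    public static void main(String[] args) {
        // digit.apply(n, k) = k-th decimal digit of the n-th real in some
        // enumerated list; this particular function is invented.
        BiFunction<Integer, Integer, Integer> digit = (n, k) -> (n * 7 + k * 3) % 10;

        StringBuilder diag = new StringBuilder("0.");
        for (int n = 0; n < 20; n++) {
            int d = digit.apply(n, n);        // n-th digit of the n-th real
            diag.append(d == 5 ? '4' : '5');  // pick a different digit, never 0 or 9
        }
        System.out.println(diag); // disagrees with every listed real somewhere
    }
}
```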