Algorithms Related Quotes

We've searched our database for all the quotes and captions related to Algorithms Related. Here they are! All 66 of them:

Another key commitment for succeeding with this strategy is to support your commitment to shutting down with a strict shutdown ritual that you use at the end of the workday to maximize the probability that you succeed. In more detail, this ritual should ensure that every incomplete task, goal, or project has been reviewed and that for each you have confirmed that either (1) you have a plan you trust for its completion, or (2) it’s captured in a place where it will be revisited when the time is right. The process should be an algorithm: a series of steps you always conduct, one after another. When you’re done, have a set phrase you say that indicates completion (to end my own ritual, I say, “Shutdown complete”). This final step sounds cheesy, but it provides a simple cue to your mind that it’s safe to release work-related thoughts for the rest of the day.
Cal Newport (Deep Work: Rules for Focused Success in a Distracted World)
What’s amazing is that things like hashtag design—these essentially ad hoc experiments in digital architecture—have shaped so much of our political discourse. Our world would be different if Anonymous hadn’t been the default username on 4chan, or if every social media platform didn’t center on the personal profile, or if YouTube algorithms didn’t show viewers increasingly extreme content to retain their attention, or if hashtags and retweets simply didn’t exist. It’s because of the hashtag, the retweet, and the profile that solidarity on the internet gets inextricably tangled up with visibility, identity, and self-promotion. It’s telling that the most mainstream gestures of solidarity are pure representation, like viral reposts or avatar photos with cause-related filters, and meanwhile the actual mechanisms through which political solidarity is enacted, like strikes and boycotts, still exist on the fringe.
Jia Tolentino (Trick Mirror)
“If you have nothing to hide, then you have nothing to fear.” This is a dangerously narrow conception of the value of privacy. Privacy is an essential human need, and central to our ability to control how we relate to the world. Being stripped of privacy is fundamentally dehumanizing, and it makes no difference whether the surveillance is conducted by an undercover policeman following us around or by a computer algorithm tracking our every move.
Bruce Schneier (Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World)
When Charles Darwin was trying to decide whether he should propose to his cousin Emma Wedgwood, he got out a pencil and paper and weighed every possible consequence. In favor of marriage he listed children, companionship, and the 'charms of music and female chit-chat.' Against marriage he listed the 'terrible loss of time,' lack of freedom to go where he wished, the burden of visiting relatives, the expense and anxiety provoked by children, the concern that 'perhaps my wife won't like London,' and having less money to spend on books. Weighing one column against the other produced a narrow margin of victory, and at the bottom Darwin scrawled, 'Marry—Marry—Marry Q.E.D.' Quod erat demonstrandum, the mathematical sign-off that Darwin himself restated in English: 'It being proved necessary to Marry.'
Brian Christian (Algorithms to Live By: The Computer Science of Human Decisions)
Over the next three decades, scholars and fans, aided by computational algorithms, will knit together the books of the world into a single networked literature. A reader will be able to generate a social graph of an idea, or a timeline of a concept, or a networked map of influence for any notion in the library. We’ll come to understand that no work, no idea stands alone, but that all good, true, and beautiful things are ecosystems of intertwined parts and related entities, past and present.
Kevin Kelly (The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future)
within thirty seconds of learning their name and where they lived, she would implement her social algorithm and calculate precisely where they stood in her constellation based on who their family was, who else they were related to, what their approximate net worth might be, how the fortune was derived, and what family scandals might have occurred within the past fifty years.
Kevin Kwan (Crazy Rich Asians (Crazy Rich Asians, #1))
In fact, AI might make centralized systems far more efficient than diffused systems, because machine learning works better the more information it can analyze. If you disregard all privacy concerns and concentrate all the information relating to a billion people in one database, you can train much better algorithms than if you respect individual privacy and have in your database only partial information on a million people.
Yuval Noah Harari (21 Lessons for the 21st Century)
The longer someone ignores an email before finally responding, the more relative social power that person has. Map these response times across an entire organization and you get a remarkably accurate chart of the actual social standing. The boss leaves emails unanswered for hours or days; those lower down respond within minutes. There’s an algorithm for this, a data mining method called “automated social hierarchy detection,” developed at Columbia University.8 When applied to the archive of email traffic at Enron Corporation before it folded, the method correctly identified the roles of top-level managers and their subordinates just by how long it took them to answer a given person’s emails. Intelligence agencies have been applying the same metric to suspected terrorist gangs, piecing together the chain of influence to spot the central figures.
Daniel Goleman (Focus: The Hidden Driver of Excellence)
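A reader curious how "automated social hierarchy detection" might work can get the flavor from a minimal sketch: rank people by how long they typically take to reply. The names and latencies below are invented for illustration; the actual Columbia method is considerably more sophisticated.

```python
from statistics import median

def rank_by_response_time(reply_latencies):
    """Rank people by median email response latency (in hours).

    Longer typical latency is treated as a proxy for higher social
    standing, per the hierarchy-detection idea: slowest responders
    come first in the returned list.
    """
    medians = {person: median(times) for person, times in reply_latencies.items()}
    return sorted(medians, key=medians.get, reverse=True)

# Invented reply latencies, in hours.
latencies = {
    "ceo":     [26.0, 48.0, 30.0],
    "manager": [4.0, 6.0, 3.5],
    "intern":  [0.2, 0.1, 0.3],
}
print(rank_by_response_time(latencies))  # ['ceo', 'manager', 'intern']
```

The real method works on who-replies-to-whom graphs across a whole archive, but the core signal is exactly this asymmetry in response times.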
Study after study shows that diverse teams perform better. In a 2014 report for Scientific American, Columbia professor Katherine W. Phillips examined a broad cross section of research related to diversity and organizational performance. And over and over, she found that the simple act of interacting in a diverse group improves performance, because it “forces group members to prepare better, to anticipate alternative viewpoints and to expect that reaching consensus will take effort.”
Sara Wachter-Boettcher (Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech)
In the economic sphere too, the ability to hold a hammer or press a button is becoming less valuable than before. In the past, there were many things only humans could do. But now robots and computers are catching up, and may soon outperform humans in most tasks. True, computers function very differently from humans, and it seems unlikely that computers will become humanlike any time soon. In particular, it doesn’t seem that computers are about to gain consciousness, and to start experiencing emotions and sensations. Over the last decades there has been an immense advance in computer intelligence, but there has been exactly zero advance in computer consciousness. As far as we know, computers in 2016 are no more conscious than their prototypes in the 1950s. However, we are on the brink of a momentous revolution. Humans are in danger of losing their value, because intelligence is decoupling from consciousness. Until today, high intelligence always went hand in hand with a developed consciousness. Only conscious beings could perform tasks that required a lot of intelligence, such as playing chess, driving cars, diagnosing diseases or identifying terrorists. However, we are now developing new types of non-conscious intelligence that can perform such tasks far better than humans. For all these tasks are based on pattern recognition, and non-conscious algorithms may soon excel human consciousness in recognising patterns. This raises a novel question: which of the two is really important, intelligence or consciousness? As long as they went hand in hand, debating their relative value was just a pastime for philosophers. But in the twenty-first century, this is becoming an urgent political and economic issue. And it is sobering to realise that, at least for armies and corporations, the answer is straightforward: intelligence is mandatory but consciousness is optional.
Yuval Noah Harari (Homo Deus: A History of Tomorrow)
It’s important to note, as we endeavor to understand relative harms, that they are entirely dependent on context. For example, if a high-risk score for a given defendant qualified him for a reentry program that would help him find a job upon release from prison, we’d be much less worried about false positives. Or in the case of the child abuse algorithm, if we are sure that a high-risk score leads to a thorough and fair-minded investigation of the situation at home, we’d be less worried about children unnecessarily removed from their parents. In the end, how an algorithm will be used should affect how it is constructed and optimized.
Cathy O'Neil (Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy)
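O'Neil's point that harms depend on how an algorithm is used can be made concrete: the same risk scores call for different decision thresholds depending on the relative costs of false positives and false negatives. A toy sketch, with invented cases and costs:

```python
def expected_cost(threshold, scored_cases, cost_fp, cost_fn):
    """Total cost of intervening on every risk score >= threshold.

    scored_cases: (risk_score, truly_high_risk) pairs.
    cost_fp: harm of intervening where no intervention was needed.
    cost_fn: harm of failing to intervene where it was.
    """
    cost = 0.0
    for score, truly_high_risk in scored_cases:
        flagged = score >= threshold
        if flagged and not truly_high_risk:
            cost += cost_fp          # false positive
        elif not flagged and truly_high_risk:
            cost += cost_fn          # false negative
    return cost

def best_threshold(scored_cases, cost_fp, cost_fn):
    """Pick the grid threshold minimising expected cost."""
    grid = [i / 10 for i in range(1, 10)]
    return min(grid, key=lambda t: expected_cost(t, scored_cases, cost_fp, cost_fn))

# Invented scores and ground truth, purely for illustration.
cases = [(0.9, True), (0.8, False), (0.4, True), (0.2, False)]
benign = best_threshold(cases, cost_fp=1, cost_fn=10)    # e.g. reentry program: flag freely
punitive = best_threshold(cases, cost_fp=10, cost_fn=1)  # e.g. child removal: flag warily
print(benign, punitive)  # the benign use justifies a lower threshold
```

The construction-should-follow-use principle falls out directly: the optimal threshold, and hence the tuning of the model behind it, shifts with the cost structure of the intervention.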
It is best to be the CEO; it is satisfactory to be an early employee, maybe the fifth or sixth or perhaps the tenth. Alternately, one may become an engineer devising precious algorithms in the cloisters of Google and its like. Otherwise, one becomes a mere employee. A coder of websites at Facebook is no one in particular. A manager at Microsoft is no one. A person (think woman) working in customer relations is a particular type of no one, banished to the bottom, as always, for having spoken directly to a non-technical human being. All these and others are ways for strivers to fall by the wayside — as the startup culture sees it — while their betters race ahead of them. Those left behind may see themselves as ordinary, even failures.
Ellen Ullman (Life in Code: A Personal History of Technology)
In the late twentieth century democracies usually outperformed dictatorships because democracies were better at data-processing. Democracy diffuses the power to process information and make decisions among many people and institutions, whereas dictatorship concentrates information and power in one place. Given twentieth-century technology, it was inefficient to concentrate too much information and power in one place. Nobody had the ability to process all the information fast enough and make the right decisions. This is part of the reason why the Soviet Union made far worse decisions than the United States, and why the Soviet economy lagged far behind the American economy. However, soon AI might swing the pendulum in the opposite direction. AI makes it possible to process enormous amounts of information centrally. Indeed, AI might make centralised systems far more efficient than diffused systems, because machine learning works better the more information it can analyse. If you concentrate all the information relating to a billion people in one database, disregarding all privacy concerns, you can train much better algorithms than if you respect individual privacy and have in your database only partial information on a million people. For example, if an authoritarian government orders all its citizens to have their DNA scanned and to share all their medical data with some central authority, it would gain an immense advantage in genetics and medical research over societies in which medical data is strictly private. The main handicap of authoritarian regimes in the twentieth century – the attempt to concentrate all information in one place – might become their decisive advantage in the twenty-first century.
Yuval Noah Harari (21 Lessons for the 21st Century)
By the end of this decade, permutations and combinations of genetic variants will be used to predict variations in human phenotype, illness, and destiny. Some diseases might never be amenable to such a genetic test, but perhaps the severest variants of schizophrenia or heart disease, or the most penetrant forms of familial cancer, say, will be predictable by the combined effect of a handful of mutations. And once an understanding of "process" has been built into predictive algorithms, the interactions between various gene variants could be used to compute ultimate effects on a whole host of physical and mental characteristics beyond disease alone. Computational algorithms could determine the probability of the development of heart disease or asthma or sexual orientation and assign a level of relative risk for various fates to each genome. The genome will thus be read not in absolutes, but in likelihoods-like a report card that does not contain grades but probabilities, or a resume that does not list past experiences but future propensities. It will become a manual for previvorship.
Siddhartha Mukherjee (The Gene: An Intimate History)
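The "report card of probabilities" Mukherjee describes is, in its simplest form, an additive model on the log-odds scale: each variant shifts the odds, and the logistic function turns the total into a likelihood. A hypothetical sketch (the variant names and effect sizes below are invented, not real genetics):

```python
from math import exp

def risk_probability(variants, log_odds_weights, baseline_log_odds):
    """Combine variant effects additively on the log-odds scale,
    then map the total to a probability with the logistic function."""
    z = baseline_log_odds + sum(log_odds_weights[v] for v in variants)
    return 1 / (1 + exp(-z))

# Hypothetical effect sizes, for illustration only.
weights = {"variant_a": 0.9, "variant_b": 0.4, "variant_c": -0.3}
p_background = risk_probability([], weights, baseline_log_odds=-2.0)
p_carrier = risk_probability(["variant_a", "variant_b"], weights, baseline_log_odds=-2.0)
print(round(p_background, 3), round(p_carrier, 3))
```

The output is a relative risk, not a verdict: the carrier's probability is higher than the baseline, but both remain likelihoods rather than grades.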
Search engine query data is not the product of a designed statistical experiment and finding a way to meaningfully analyse such data and extract useful knowledge is a new and challenging field that would benefit from collaboration. For the 2012–13 flu season, Google made significant changes to its algorithms and started to use a relatively new mathematical technique called Elasticnet, which provides a rigorous means of selecting and reducing the number of predictors required. In 2011, Google launched a similar program for tracking Dengue fever, but they are no longer publishing predictions and, in 2015, Google Flu Trends was withdrawn. They are, however, now sharing their data with academic researchers... Google Flu Trends, one of the earlier attempts at using big data for epidemic prediction, provided useful insights to researchers who came after them... The Delphi Research Group at Carnegie Mellon University won the CDC’s challenge to ‘Predict the Flu’ in both 2014–15 and 2015–16 for the most accurate forecasters. The group successfully used data from Google, Twitter, and Wikipedia for monitoring flu outbreaks.
Dawn E. Holmes (Big Data: A Very Short Introduction (Very Short Introductions))
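Elastic net, the technique Holmes mentions, blends L1 and L2 penalties so that weak predictors are driven exactly to zero while correlated ones stay under control, which is how it "selects and reduces the number of predictors." A minimal pure-Python coordinate-descent sketch on invented data (Google's production models were, of course, far larger):

```python
def soft_threshold(x, t):
    """L1 proximal step: shrink x toward zero by t, clipping at zero."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def elastic_net(X, y, l1, l2, iters=200):
    """Coordinate descent for (1/2n)||y - Xw||^2 + l1*||w||_1 + (l2/2)*||w||^2."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        for j in range(d):
            # Correlation of feature j with the partial residual.
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * w[k] for k in range(d) if k != j))
                      for i in range(n)) / n
            norm_j = sum(X[i][j] ** 2 for i in range(n)) / n
            w[j] = soft_threshold(rho, l1) / (norm_j + l2)
    return w

# Invented data: y depends on feature 0 only; feature 1 is noise.
X = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.05], [4.0, 0.0]]
y = [2.0, 4.0, 6.0, 8.0]
w = elastic_net(X, y, l1=0.1, l2=0.01)
print(w)  # weight 0 lands near 2; weight 1 is driven exactly to zero
```

The L1 term does the predictor selection (exact zeros); the L2 term stabilizes the fit when predictors are correlated.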
Why, exactly, is Marduk handing Hammurabi a one and a zero in this picture?" Hiro asks. "They were emblems of royal power," the Librarian says. "Their origin is obscure." "Enki must have been responsible for that one," Hiro says. "Enki's most important role is as the creator and guardian of the me and the gis-hur, the 'key words' and 'patterns' that rule the universe." "Tell me more about the me." "To quote Kramer and Maier again, '[They believed in] the existence from time primordial of a fundamental, unalterable, comprehensive assortment of powers and duties, norms and standards, rules and regulations, known as me, relating to the cosmos and its components, to gods and humans, to cities and countries, and to the varied aspects of civilized life.'" "Kind of like the Torah." "Yes, but they have a kind of mystical or magical force. And they often deal with banal subjects -- not just religion." "Examples?" "In one myth, the goddess Inanna goes to Eridu and tricks Enki into giving her ninety-four me and brings them back to her home town of Uruk, where they are greeted with much commotion and rejoicing." "Inanna is the person that Juanita's obsessed with." "Yes, sir. She is hailed as a savior because 'she brought the perfect execution of the me.'" "Execution? Like executing a computer program?" "Yes. Apparently, they are like algorithms for carrying out certain activities essential to the society. Some of them have to do with the workings of priesthood and kingship. Some explain how to carry out religious ceremonies. Some relate to the arts of war and diplomacy. Many of them are about the arts and crafts: music, carpentry, smithing, tanning, building, farming, even such simple tasks as lighting fires." "The operating system of society." "I'm sorry?" "When you first turn on a computer, it is an inert collection of circuits that can't really do anything. To start up the machine, you have to infuse those circuits with a collection of rules that tell it how to function. 
How to be a computer. It sounds as though these me served as the operating system of the society, organizing an inert collection of people into a functioning system." "As you wish. In any case, Enki was the guardian of the me." "So he was a good guy, really." "He was the most beloved of the gods." "He sounds like kind of a hacker.
Neal Stephenson (Snow Crash)
At this point, the cautious reader might wish to read over the whole argument again, as presented above, just to make sure that I have not indulged in any 'sleight of hand'! Admittedly there is an air of the conjuring trick about the argument, but it is perfectly legitimate, and it only gains in strength the more minutely it is examined. We have found a computation Ck(k) that we know does not stop; yet the given computational procedure A is not powerful enough to ascertain that fact. This is the Gödel(-Turing) theorem in the form that I require. It applies to any computational procedure A whatever for ascertaining that computations do not stop, so long as we know it to be sound. We deduce that no knowably sound set of computational rules (such as A) can ever suffice for ascertaining that computations do not stop, since there are some non-stopping computations (such as Ck(k)) that must elude these rules. Moreover, since from the knowledge of A and of its soundness, we can actually construct a computation Ck(k) that we can see does not ever stop, we deduce that A cannot be a formalization of the procedures available to mathematicians for ascertaining that computations do not stop, no matter what A is. Hence: (G) Human mathematicians are not using a knowably sound algorithm in order to ascertain mathematical truth. It seems to me that this conclusion is inescapable. However, many people have tried to argue against it, bringing in objections like those summarized in the queries Q1-Q20 of 2.6 and 2.10 below, and certainly many would argue against the stronger deduction that there must be something fundamentally non-computational in our thought processes. The reader may indeed wonder what on earth mathematical reasoning like this, concerning the abstract nature of computations, can have to say about the workings of the human mind. What, after all, does any of this have to do with the issue of conscious awareness?
The answer is that the argument indeed says something very significant about the mental quality of understanding, in relation to the general issue of computation, and, as was argued in 1.12, the quality of understanding is something dependent upon conscious awareness. It is true that, for the most part, the foregoing reasoning has been presented as just a piece of mathematics, but there is the essential point that the algorithm A enters the argument at two quite different levels. At the one level, it is being treated as just some algorithm that has certain properties, but at the other, we attempt to regard A as being actually 'the algorithm that we ourselves use' in coming to believe that a computation will not stop. The argument is not simply about computations. It is also about how we use our conscious understanding in order to infer the validity of some mathematical claim, here the non-stopping character of Ck(k). It is the interplay between the two different levels at which the algorithm A is being considered, as a putative instance of conscious activity and as a computation itself, that allows us to arrive at a conclusion expressing a fundamental conflict between such conscious activity and mere computation.
Roger Penrose (Shadows of the Mind: A Search for the Missing Science of Consciousness)
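The diagonal step of the quoted argument can be written compactly in the quote's own notation, with Cq(n) an enumeration of computations and A(q, n) the knowably sound procedure:

```latex
% Diagonal step of the Godel-Turing argument, in the quote's notation.
\[
\begin{aligned}
&\text{Soundness of } A:\quad A(q,n)\ \text{halts} \;\Rightarrow\; C_q(n)\ \text{never stops}.\\
&\text{Choose } k \text{ so that } C_k(n) = A(n,n) \text{ for every } n, \text{ and set } n = k:\\
&\qquad A(k,k)\ \text{halts} \;\Rightarrow\; C_k(k) = A(k,k)\ \text{never stops},\\
&\text{a contradiction unless } A(k,k) \text{ never halts.}\\
&\text{Hence } C_k(k) \text{ never stops, yet } A \text{ cannot ascertain this fact, although we just have.}
\end{aligned}
\]
```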
For example, consider a stack (which is a first-in, last-out list). You might have a program that requires three different types of stacks. One stack is used for integer values, one for floating-point values, and one for characters. In this case, the algorithm that implements each stack is the same, even though the data being stored differs. In a non-object-oriented language, you would be required to create three different sets of stack routines, with each set using different names. However, because of polymorphism, in Java you can create one general set of stack routines that works for all three specific situations. This way, once you know how to use one stack, you can use them all. More generally, the concept of polymorphism is often expressed by the phrase “one interface, multiple methods.” This means that it is possible to design a generic interface to a group of related activities. Polymorphism helps reduce complexity by allowing the same interface to be used to specify a general class of action.
Herbert Schildt (Java: A Beginner's Guide)
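Schildt's example is Java; the same "one interface, multiple methods" idea can be sketched in Python with generics, where a single stack implementation serves integers, floats, and characters alike:

```python
from typing import Generic, List, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    """A single first-in, last-out stack usable for any element type."""

    def __init__(self) -> None:
        self._items: List[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()  # last item pushed comes off first

    def is_empty(self) -> bool:
        return not self._items

# One implementation, three element types.
ints: Stack[int] = Stack()
floats: Stack[float] = Stack()
chars: Stack[str] = Stack()
for n in (1, 2, 3):
    ints.push(n)
floats.push(1.5)
chars.push("a"); chars.push("b")
print(ints.pop(), floats.pop(), chars.pop())  # 3 1.5 b
```

In Java the direct analogue is a generic class `Stack<T>`; either way, learning the interface once is enough to use every specialization.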
To the degree that they have access to the devices we use to mediate our relation to everyday life, companies deploy algorithms based on correlations found in large data sets to shape our opportunities—our sense of what feels possible. Undesirable outcomes need not be forbidden and policed if instead they can simply be made improbable.
Anonymous
This curve, which looks like an elongated S, is variously known as the logistic, sigmoid, or S curve. Peruse it closely, because it’s the most important curve in the world. At first the output increases slowly with the input, so slowly it seems constant. Then it starts to change faster, then very fast, then slower and slower until it becomes almost constant again. The transfer curve of a transistor, which relates its input and output voltages, is also an S curve. So both computers and the brain are filled with S curves. But it doesn’t end there. The S curve is the shape of phase transitions of all kinds: the probability of an electron flipping its spin as a function of the applied field, the magnetization of iron, the writing of a bit of memory to a hard disk, an ion channel opening in a cell, ice melting, water evaporating, the inflationary expansion of the early universe, punctuated equilibria in evolution, paradigm shifts in science, the spread of new technologies, white flight from multiethnic neighborhoods, rumors, epidemics, revolutions, the fall of empires, and much more. The Tipping Point could equally well (if less appealingly) be entitled The S Curve. An earthquake is a phase transition in the relative position of two adjacent tectonic plates. A bump in the night is just the sound of the microscopic tectonic plates in your house’s walls shifting, so don’t be scared. Joseph Schumpeter said that the economy evolves by cracks and leaps: S curves are the shape of creative destruction. The effect of financial gains and losses on your happiness follows an S curve, so don’t sweat the big stuff. The probability that a random logical formula is satisfiable—the quintessential NP-complete problem—undergoes a phase transition from almost 1 to almost 0 as the formula’s length increases. Statistical physicists spend their lives studying phase transitions.
Pedro Domingos (The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World)
The S curve is not just important as a model in its own right; it’s also the jack-of-all-trades of mathematics. If you zoom in on its midsection, it approximates a straight line. Many phenomena we think of as linear are in fact S curves, because nothing can grow without limit. Because of relativity, and contra Newton, acceleration does not increase linearly with force, but follows an S curve centered at zero. So does electric current as a function of voltage in the resistors found in electronic circuits, or in a light bulb (until the filament melts, which is itself another phase transition). If you zoom out from an S curve, it approximates a step function, with the output suddenly changing from zero to one at the threshold. So depending on the input voltages, the same curve represents the workings of a transistor in both digital computers and analog devices like amplifiers and radio tuners. The early part of an S curve is effectively an exponential, and near the saturation point it approximates exponential decay. When someone talks about exponential growth, ask yourself: How soon will it turn into an S curve? When will the population bomb peter out, Moore’s law lose steam, or the singularity fail to happen? Differentiate an S curve and you get a bell curve: slow, fast, slow becomes low, high, low. Add a succession of staggered upward and downward S curves, and you get something close to a sine wave. In fact, every function can be closely approximated by a sum of S curves: when the function goes up, you add an S curve; when it goes down, you subtract one. Children’s learning is not a steady improvement but an accumulation of S curves. So is technological change. Squint at the New York City skyline and you can see a sum of S curves unfolding across the horizon, each as sharp as a skyscraper’s corner. Most importantly for us, S curves lead to a new solution to the credit-assignment problem. If the universe is a symphony of phase transitions, let’s model it with one. 
That’s what the brain does: it tunes the system of phase transitions inside to the one outside. So let’s replace the perceptron’s step function with an S curve and see what happens.
Pedro Domingos (The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World)
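The properties Domingos lists for the S curve (a nearly linear midsection, step-like behavior zoomed out, a bell-curve derivative) are easy to verify numerically for the logistic function:

```python
from math import exp

def sigmoid(x):
    """The logistic S curve."""
    return 1 / (1 + exp(-x))

# The midsection is nearly linear: sigmoid(x) ~ 0.5 + x/4 for small x.
assert abs(sigmoid(0.1) - (0.5 + 0.1 / 4)) < 1e-3

# Zoomed out, it approximates a step function.
assert sigmoid(-10) < 0.001 and sigmoid(10) > 0.999

# Its derivative s(x) * (1 - s(x)) is a bell curve peaking at x = 0:
# slow, fast, slow becomes low, high, low.
deriv = lambda x: sigmoid(x) * (1 - sigmoid(x))
assert deriv(0) > deriv(1) > deriv(2)
```

Replacing the perceptron's hard step with this smooth curve is precisely what makes the derivative well behaved everywhere, which is what the credit-assignment solution alluded to above requires.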
There are also books that contain collections of papers or chapters on particular aspects of knowledge discovery—for example, Relational Data Mining edited by Dzeroski and Lavrac [De01]; Mining Graph Data edited by Cook and Holder [CH07]; Data Streams: Models and Algorithms edited by Aggarwal [Agg06]; Next Generation of Data Mining edited by Kargupta, Han, Yu, et al. [KHY+08]; Multimedia Data Mining: A Systematic Introduction to Concepts and Theory edited by Z. Zhang and R. Zhang [ZZ09]; Geographic Data Mining and Knowledge Discovery edited by Miller and Han [MH09]; and Link Mining: Models, Algorithms and Applications edited by Yu, Han, and Faloutsos [YHF10]. There are many tutorial notes on data mining in major databases, data mining, machine learning, statistics, and Web technology conferences.
Vipin Kumar (Introduction to Data Mining)
When we unpack the master equations of QCD, we get equations that are related by symmetry-symmetry among the colors, symmetry among different directions in space, and the symmetry of special relativity between systems moving at constant velocity. Their complete content is out front, and the algorithms that unpack them flow from the unambiguous mathematics of symmetry. So let me assure you that you really should be impressed. It's a genuinely elegant theory.
Frank Wilczek (The Lightness of Being: Mass, Ether, and the Unification of Forces)
On the same meditation retreat that led me to see a weed that lacked essence-of-weed, I also had an interesting encounter with a reptile. I was walking through the woods and, looking down, saw a lizard frozen in its tracks, presumably by the sight of me. As I watched it look around nervously and calculate its next move, my first thought was that this lizard’s behavior was governed by a relatively simple algorithm: see large creature, freeze; if creature approaches, run. But then I realized that, though my own behavioral algorithms are much more complicated than that, there could well be a being so intelligent that, to it, I look as simple-minded as the lizard looks to me. The more I thought about it, the more that lizard and I seemed to have in common. We were both thrown into a world we didn’t choose, under the guidance of behavioral algorithms we didn’t choose, and were trying to make the best of the situation. I felt a kind of kinship with the lizard that I’d never felt with a lizard.
Robert Wright (Why Buddhism Is True: The Science and Philosophy of Meditation and Enlightenment)
When you use predictive thinking, by contrast, you’re usually foreseeing events that are considered to be highly likely. It can work well if 1) you are very familiar with the parameters of events likely to occur relatively soon; 2) those parameters are fairly constant, stable, and easy to measure; and 3) you’re able to use established algorithms to make decisions about short-term outcomes. Predictive thinking prioritizes your deductive mind. An example might be the way in which traffic control experts at airports are able to decide when to ground planes and when to allow them to fly. Long-term weather patterns are not easy to anticipate but meteorologists can predict short-term hour-by-hour weather with relative accuracy and thus traffic control officers can use
Luc de Brabandere (Thinking in New Boxes: A New Paradigm for Business Creativity)
In game theory, as in applications of other technologies that use RPT [Revealed Preference Theory], the purpose of the machinery is to tell us what happens when patterns of behavior instantiate some particular strategic vector, payoff matrix, and distribution of information—for example, a PD [Prisoner's Dilemma]—that we’re empirically motivated to regard as a correct model of a target situation. The motivational history that produced this vector in a given case is irrelevant to which game is instantiated, or to the location of its equilibrium or equilibria. As Binmore (1994, pp. 95–256) emphasizes at length, if, in the case of any putative PD, there is any available story that would rationalize cooperation by either player, then it follows as a matter of logic that the modeler has assigned at least one of them the wrong utility function (or has mistakenly assumed perfect information, or has failed to detect a commitment action) and so made a mistake in taking their game as an instance of the (one-shot) PD. Perhaps she has not observed enough of their behavior to have inferred an accurate model of the agents they instantiate. The game theorist’s solution algorithms, in themselves, are not empirical hypotheses about anything. Applications of them will be only as good, for purposes of either normative strategic advice or empirical explanation, as the empirical model of the players constructed from the intentional stance is accurate. It is a much-cited fact from the experimental economics literature that when people are brought into laboratories and set into situations contrived to induce PDs, substantial numbers cooperate. What follows from this, by proper use of RPT, not in discredit of it, is that the experimental setup has failed to induce a PD after all. The players’ behavior indicates that their preferences have been misrepresented in the specification of their game as a PD. 
A game is a mathematical representation of a situation, and the operation of solving a game is an exercise in deductive reasoning. Like any deductive argument, it adds no new empirical information not already contained in the premises. However, it can be of explanatory value in revealing structural relations among facts that we otherwise might not have noticed.
Don Ross
When an infant is designated as marketable, the algorithm immediately identifies the parents from the gene pool—with the exception of relatively rare cases in which the sire is bankliving. The precise moment the infant is accepted into the receptacle, they are both upgraded to gifters and entitled to a bonus for one year as a reward for gifting.
Eli K.P. William (The Naked World (Jubilee Cycle, #2))
Moravec’s Paradox. Hans Moravec was a professor of mine at Carnegie Mellon University, and his work on artificial intelligence and robotics led him to a fundamental truth about combining the two: contrary to popular assumptions, it is relatively easy for AI to mimic the high-level intellectual or computational abilities of an adult, but it’s far harder to give a robot the perception and sensorimotor skills of a toddler. Algorithms can blow humans out of the water when it comes to making predictions based on data, but robots still can’t perform the cleaning duties of a hotel maid. In essence, AI is great at thinking, but robots are bad at moving their fingers.
Kai-Fu Lee (AI Superpowers: China, Silicon Valley, and the New World Order)
Humans live in society and exercise their free will in socio-economic relations. Unlike the dials in a well-functioning clock which do not intersect, humans have the potential to be compassionate or not to be. But, why should I part with my time to help some stranger I might never meet again or for someone who lives miles away from me? Why should I part with my wealth if it is scarce, legally belongs to me and so nobody could question what I would decide to do with it in life? Science seeks cause-effect relations in physical realities. Mathematics is one of the tools to guide this search in complex relationships. Some scientists exclusively focus on material processes. So, they only extract the deeper meaning as regularity and algorithm itself. Science can only help us thus far. The reflection on nature and its regularity around us requires a philosophical underpinning for deeper meaning.
Salman Ahmed Shaikh (Reflections on the Origins in the Post COVID-19 World)
This book doesn’t address problems related to family dynamics, to untenable pressures placed on young people, especially young women (please read Sherry Turkle on those topics), the way scammers can use social media to abuse you, the way social media algorithms might discriminate against you for racist or other horrible reasons (please read Cathy O’Neil on that topic), or the way your loss of privacy can bite you personally and harm society in surprising ways.
Jaron Lanier (Ten Arguments for Deleting Your Social Media Accounts Right Now)
The privacy issue was reignited in early 2014, when the Wall Street Journal reported that Facebook had conducted a massive social-science experiment on nearly seven hundred thousand of its users. To determine whether it could alter the emotional state of its users and prompt them to post either more positive or negative content, the site’s data scientists enabled an algorithm, for one week, to automatically omit content that contained words associated with either positive or negative emotions from the central news feeds of 689,003 users. As it turned out, the experiment was very “successful” in that it was relatively easy to manipulate users’ emotions, but the backlash from the blogosphere was horrendous. “Apparently what many of us feared is already a reality: Facebook is using us as lab rats, and not just to figure out which ads we’ll respond to but to actually change our emotions,” wrote Sophie Weiner on AnimalNewYork.com.
Jonathan Taplin (Move Fast and Break Things: How Facebook, Google, and Amazon Cornered Culture and Undermined Democracy)
As I’ve said throughout this book, networked products tend to start from humble beginnings—rather than big splashy launches—and YouTube was no different. Jawed’s first video is a good example. Steve described the earliest days of content and how it grew: In the earliest days, there was very little content to organize. Getting to the first 1,000 videos was the hardest part of YouTube’s life, and we were just focused on that. Organizing the videos was an afterthought—we just had a list of recent videos that had been uploaded, and you could just browse through those. We had the idea that everyone who uploaded a video would share it with, say, 10 people, and then 5 of them would actually view it, and then at least one would upload another video. After we built some key features—video embedding and real-time transcoding—it started to work. In other words, the early days were just about solving the Cold Start Problem, not designing the fancy recommendations algorithms that YouTube is now known for. And even once there were more videos, the attempt at discoverability focused on relatively basic curation—just showing popular videos in different categories and countries. Steve described this to me: Once we got a lot more videos, we had to redesign YouTube to make it easier to discover the best videos. At first, we had a page on YouTube to see just the top 100 videos overall, sorted by day, week, or month. Eventually it was broken out by country. The homepage was the only place where YouTube as a company would have control of things, since we would choose the 10 videos. These were often documentaries, or semi-professionally produced content so that people—particularly advertisers—who came to the YouTube front page would think we had great content. Eventually it made sense to create a categorization system for videos, but in the early years everything was grouped in with each other.
Even while the number of videos was rapidly growing, so too were all the other forms of content on the site. YouTube wasn’t just the videos, it was also the comments left by viewers: Early on, we saw that there were 100x more viewers than creators. Every social product at that time had comments, so we added them to YouTube, which was a way for the viewers to participate, too. It seems naive now, but we were just thinking about raw growth at that time—the raw number of videos, the raw number of comments—so we didn’t think much about the quality. We weren’t thinking about fake news or anything like that. The thought was, just get as many comments as possible out there, and the more controversial the better! Keep in mind that the vast majority of videos had zero comments, so getting feedback for our creators usually made the experience better for them. Of course now we know that once you get to a certain level of engagement, you need a different solution over time.
Andrew Chen (The Cold Start Problem: How to Start and Scale Network Effects)
The code is a metaphor that works well for the genetic code or the rules of a cellular automaton. The code is bad metaphor for the continuously changing states of neurons as they run through their algorithmic programs. Imagine looking for a code in space and time for rule 110, iteration 1,234, position 820-870. To study the mechanism that led to that state, we look at the point before, the relation of how it happened, in all available detail. Is that useful? In the case of the rule 110 automaton, the same rule applies everywhere, so the instance reveals something general about the whole. This is the hope of the biological experiment as well. But what happens if the rule changes with every iteration, as discussed for transcription factor cascades? To describe changing rules of algorithmic growth for every instance in time and space and in different systems is not only a rather large endeavor, it also suffers from the same danger of undefined depth. It is a description of the system, a series of bits of endpoint information, not a description of a code sufficient to create the system. The code is the 'extremely small amount of information to be specified genetically,' as Willshaw and von der Malsburg put it, that is sufficient to encode the unfolding of information under the influence of time and energy. The self-assembling brain.
Peter Robin Hiesinger (The Self-Assembling Brain: How Neural Networks Grow Smarter)
Because the authorities have forced all residents of Xinjiang to register for a new state-issued ID card, they have a base library of high-definition images of each person’s face, and in addition, they have collected tens of millions of images of the faces of residents who pass through the checkpoints. The Face++ and similar algorithms such as those from companies like YITU and Sensetime run extremely fast. As the Shawan study notes, in 0.8 seconds it can run a match of a face, and register and record notification alarms related to up to three hundred thousand targeted people.
Darren Byler (In the Camps: Life in China's High-Tech Penal Colony)
At a surface level, the Kalman filter resembles the kind of time series analysis that’s common in finance. The key difference is that the Kalman filter is used on reproducible systems while finance is typically a non-reproducible system. If you’re using the Kalman filter to guide a drone from point A to point B, but you have a bug in your code and the drone crashes, you can simply pick up the drone21, put it back on the launch pad at point A, and try again. Because you can repeat the experiment over and over, you can eventually get very precise measurements and a functioning guidance algorithm. That’s a reproducible system. In finance, however, you usually can’t just keep re-running a trading algorithm that makes money and get the same result. Eventually your counterparties will adapt and get wise. A key difference relative to our drone example is the presence of animate objects (other humans) who won’t always do the same thing given the same input.
Balaji S. Srinivasan (The Network State: How To Start a New Country)
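The estimator Srinivasan contrasts with financial time series analysis can be made concrete with a minimal one-dimensional Kalman filter. This is an illustrative sketch only; the noise parameters and measurements below are invented, not drawn from the quote.

```python
# Minimal 1-D Kalman filter: repeatedly blend a prediction with a noisy
# measurement, weighting by relative uncertainty. In a reproducible system
# you can rerun this on fresh measurements and converge on a precise estimate.

def kalman_1d(measurements, process_var=1e-3, meas_var=0.25, x0=0.0, p0=1.0):
    """Estimate a slowly varying state from noisy measurements."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += process_var           # predict: uncertainty grows between steps
        k = p / (p + meas_var)     # Kalman gain: trust in the new measurement
        x += k * (z - x)           # update the estimate toward the measurement
        p *= (1 - k)               # uncertainty shrinks after incorporating data
        estimates.append(x)
    return estimates

noisy_readings = [0.9, 1.1, 1.0, 0.95, 1.05, 1.02]
print(kalman_1d(noisy_readings)[-1])  # settles near the underlying value of 1.0
```

The filter works here precisely because the same experiment can be repeated: rerunning on new readings from the same system keeps tightening the estimate, which is the reproducibility the quote says financial markets lack.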
Volume is key. Twitter now estimates that Russia used more than fifty thousand automated accounts or bots to Tweet election-related content during the 2016 presidential campaign. Twitter and Facebook are the best-known disinformation superhighways, but there are many others. Russian officers have infiltrated everything from 4chan to Pinterest.
Amy B. Zegart (Spies, Lies, and Algorithms: The History and Future of American Intelligence)
Like most of the women in her crowd, Eleanor could meet another Asian anywhere in the world—say, over dim sum at Royal China in London, shopping in the lingerie department of David Jones in Sydney—and within thirty seconds of learning their name and where they lived, she would implement her social algorithm and calculate precisely where they stood in her constellation based on who their family was, who else they were related to, what their approximate net worth might be, how the fortune was derived, and what family scandals might have occurred within the past fifty years
Kevin Kwan (Crazy Rich Asians (Crazy Rich Asians, #1))
In more detail, this ritual should ensure that every incomplete task, goal, or project has been reviewed and that for each you have confirmed that either (1) you have a plan you trust for its completion, or (2) it’s captured in a place where it will be revisited when the time is right. The process should be an algorithm: a series of steps you always conduct, one after another. When you’re done, have a set phrase you say that indicates completion (to end my own ritual, I say, “Shutdown complete”). This final step sounds cheesy, but it provides a simple cue to your mind that it’s safe to release work-related thoughts for the rest of the day.
Cal Newport (Deep Work: Rules for Focused Success in a Distracted World)
More fundamentally, productivity gains from automation may always be somewhat limited, especially compared to the introduction of new products and tasks that transform the production process, such as those in the early Ford factories. Automation is about substituting cheaper machines or algorithms for human labor, and reducing production costs by 10 or even 20 percent in a few tasks will have relatively small consequences for TFP or the efficiency of the production process. In contrast, introducing new technologies, such as electrification, novel designs, or new production tasks, has been at the root of transformative TFP gains throughout much of the twentieth century.
Simon Johnson (Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity)
Everyone will have a detailed model of themselves, and these models will always be talking to each other. (…) Our model will go to millions of meetings so we don’t have to, and by Saturday we will meet the best candidates at a party organized by OkCupid (…). Our digital half will have a model of the world: not just the world at large, but the world as it relates to us. (…) Tomorrow’s cyberspace will be a vast parallel world that chooses only the most promising things to experience in the real world. It will be like a new global subconscious, the collective id of the human race.
Pedro Domingos (The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World)
These two viewpoints offer us different ways of orienting to the world that lead to strikingly different values, ways of relating, and behaviors ... The essence of the right-hemisphere perspective involves attending to relationship, embodiment, and what is unfolding in the unique moment in the space between. We could say that from this viewpoint, the central metaphor here is living beings in relationship with each other in this moment. In contrast, the left-hemisphere viewpoint steps out of the relational moment to focus on division, fixity, disembodiment, and the creation of algorithms (standardized step-by-step solutions to problems that do not take individuality and context into account). The central metaphor here is the machine, with our bodies, our brains and our very selves viewed as mechanisms to be analyzed and shaped. We might immediately sense that the perspective of each hemisphere has substantial consequences for how we are able to be present with one another.
Bonnie Badenoch
As noted previously, there are a number of other algorithms beyond search, prime factors and Monte Carlo methods, though many of these apply only to very specialist mathematical problems and may never have practical applications. As yet, though, the range available is relatively limited. Some of this may be due to the limitations that are imposed in dreaming up algorithms without an actual device to run them on, but it is entirely possible that the list will always be fairly short, as we shouldn’t underestimate the difficulties of getting quantum algorithms that will run. However, Lov Grover commented in an interview with the author a while ago: ‘Not everyone agrees with this, but I believe that there are many more quantum algorithms to be discovered.’ Even if Grover is right, quantum computers are never going to supplant conventional computers as general-purpose machines. They are always likely to be specialist in application. And, as we shall see, it is not easy to get quantum computers to work at all, let alone develop them into robust desktop devices like a familiar PC.
Brian Clegg (Quantum Computing: The transformative technology of the Qubit Revolution)
The book, All I Really Need to Know I Learned in Kindergarten, was written in 1986 by a minister, Robert Fulghum, and it’s full of simple-sounding life advice, like “share everything,” “play fair,” and “clean up after your own mess.” Chen believes that these skills—the elementary, pre-literate skills of treating other people well, acting ethically, and behaving in prosocial ways, all of which I consider “analog ethics”—are badly needed for an age in which our value will come from our ability to relate to other people. He writes: While I know that we’ll need to layer on top of that foundation a set of practical and technical know-how, I agree with [Fulghum] that a foundation rich in EQ [emotional quotient] and compassion and imagination and creativity is the perfect springboard to prepare people—the doctors with the best bedside manner, the sales reps solving my actual problems, crisis counselors who really understand when we’re in crisis—for a machine-learning powered future in which humans and algorithms are better together. Research has indicated that teaching analog ethics can be effective. One 2015 study that tracked children from kindergarten through young adulthood found that people who had developed strong prosocial, noncognitive skills—traits like positivity, empathy, and regulating one’s own emotions—were more likely to be successful as adults. Another study in 2017 found that kids who participated in “social-emotional” learning programs were more likely to graduate from college, were arrested
Kevin Roose (Futureproof: 9 Rules for Surviving in the Age of AI)
The shortcomings of the system are best understood as the result of taking this ocean of data, and the decision points produced by our algorithms, as a near enough substitute for perfect certainty. My own best results are often due to pretending I know relatively little, and acting accordingly, though it’s easier said than done. Far easier.
William Gibson (The Peripheral (Jackpot #1))
A high-profile example of this type of data bias appeared in Google’s “Flu Trends” program. The program, which started in 2008, intended to leverage online searches and user location monitoring to pinpoint regional flu outbreaks. Google collected and used this information to tip off and alert health authorities in regions they identified. Over time the project failed to accurately predict flu cases due to changes in Google’s search engine algorithm. A new algorithm update in 2012 caused Google’s search engine to suggest a medical diagnosis when users searched for the terms “cough” and “fever.” Google, therefore, inserted a false bias into its results by prompting users with a cough or a fever to search for flu-related results (equivalent to a research assistant lingering over respondents’ shoulder whispering to check the “flu” box to explain their symptoms). This increased the volume of searches for flu-related terms and led Google to predict an exaggerated flu outbreak twice as severe as public health officials anticipated.
Oliver Theobald (Statistics for Absolute Beginners: A Plain English Introduction)
a machine which learns from patterns in human-generated data, and autonomously manipulates human language, knowledge and relations, is more than a machine. It is a social agent: a participant in society, simultaneously participated in by it. As such, it becomes a legitimate object of sociological research.
Massimo Airoldi (Machine Habitus: Toward a Sociology of Algorithms)
By now, though, it had been a steep learning curve; he was fairly well versed on the basics of how clearing worked: When a customer bought shares in a stock on Robinhood — say, GameStop — at a specific price, the order was first sent to Robinhood's in-house clearing brokerage, who in turn bundled the trade to a market maker for execution. The trade was then brought to a clearinghouse, who oversaw the trade all the way to the settlement. During this time period, the trade itself needed to be 'insured' against anything that might go wrong, such as some sort of systemic collapse or a default by either party — although in reality, in regulated markets, this seemed extremely unlikely. While the customer's money was temporarily put aside, essentially in an untouchable safe, for the two days it took for the clearing agency to verify that both parties were able to provide what they had agreed upon — the brokerage house, Robinhood — had to insure the deal with a deposit; money of its own, separate from the money that the customer had provided, that could be used to guarantee the value of the trade. In financial parlance, this 'collateral' was known as VAR — or value at risk. For a single trade of a simple asset, it would have been relatively easy to know how much the brokerage would need to deposit to insure the situation; the risk of something going wrong would be small, and the total value would be simple to calculate. If GME was trading at $400 a share and a customer wanted ten shares, there was $4000 at risk, plus or minus some nominal amount due to minute vagaries in market fluctuations during the two-day period before settlement. In such a simple situation, Robinhood might be asked to put up $4000 and change — in addition to the $4000 of the customer's buy order, which remained locked in the safe. The deposit requirement calculation grew more complicated as layers were added onto the trading situation.
A single trade had low inherent risk; multiplied to millions of trades, the risk profile began to change. The more volatile the stock — in price and/or volume — the riskier a buy or sell became. Of course, the NSCC did not make these calculations by hand; they used sophisticated algorithms to digest the numerous inputs coming in from the trade — type of equity, volume, current volatility, where it fit into a brokerage's portfolio as a whole — and spit out a 'recommendation' of what sort of deposit would protect the trade. And this process was entirely automated; the brokerage house would continually run its trading activity through the federal clearing system and would receive its updated deposit requirements as often as every fifteen minutes while the market was open. Premarket during a trading week, that number would come in at 5:11 a.m. East Coast time, usually right as Jim, in Orlando, was finishing his morning coffee. Robinhood would then have until 10:00 a.m. to satisfy the deposit requirement for the upcoming day of trading — or risk being in default, which could lead to an immediate shutdown of all operations. Usually, the deposit requirement was tied closely to the actual dollars being 'spent' on the trades; a near equal number of buys and sells in a brokerage house's trading profile lowered its overall risk, and though volatility was common, especially in the past half-decade, even a two-day settlement period came with an acceptable level of confidence that nobody would fail to deliver on their trades.
Ben Mezrich (The Antisocial Network: The GameStop Short Squeeze and the Ragtag Group of Amateur Traders That Brought Wall Street to Its Knees)
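The deposit arithmetic Mezrich walks through can be sketched in a few lines. This is a toy illustration of the idea only, not the NSCC's actual model (whose inputs and weightings are far more sophisticated); the volatility multiplier is an invented parameter.

```python
# Toy sketch of the "value at risk" deposit described in the quote: the base
# exposure is simply shares times price, and riskier (more volatile) stocks
# attract a larger requirement. The multiplier values are hypothetical.

def naive_deposit(shares: int, price: float, volatility_multiplier: float = 1.0) -> float:
    """Dollar deposit needed to insure a single unsettled trade."""
    return shares * price * volatility_multiplier

# The simple case from the text: 10 shares of GME at $400 -> $4,000 at risk.
base = naive_deposit(10, 400.0)
# A highly volatile stock might attract a surcharge (hypothetical 50% here).
volatile = naive_deposit(10, 400.0, 1.5)
print(base, volatile)  # 4000.0 6000.0
```

The real calculation, as the next passage notes, aggregates millions of such trades and nets buys against sells across the brokerage's whole portfolio, which is why it is automated and rerun as often as every fifteen minutes.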
NOTE: “Discussion” with people in my time period is oddly unsettling because instead of listening, they scan conversations for a few keywords and regurgitate three or four bits of text they memorized from Twitter; they struggle to tell if this text is related or not. My contemporaries think “debate” means reciting scripts louder or meaner than their opponents; they never try to understand the other side, they just cut and paste on top of each other. These people aren’t controlled by algorithms, they are algorithms!
Ben Hamilton ("Sorry Guys, We Stormed the Capitol": Eye-Witness Accounts of January 6th (The Chasing History Project))
is the advancement of the mathematical tools called algorithms and their related sophisticated software. Never before has so much mental power been computerized and made available to so many—power to deconstruct and predict patterns and changes
Ram Charan (The Attacker's Advantage: Turning Uncertainty into Breakthrough Opportunities)
Soon after that, Eno briefly joined a group called the Scratch Orchestra, led by the late British avant-garde composer Cornelius Cardew. There was one Cardew piece that would be a formative experience for Eno—a piece known as “Paragraph 7,” part of a larger Cardew masterwork called The Great Learning. Explaining “Paragraph 7” could easily take up a book of its own. “Paragraph 7”’s score is designed to be performed by a group of singers, and it can be done by anyone, trained or untrained. The words are from a text by Confucius, broken up into 24 short chunks, each of which has a number. There are only a few simple rules. The number tells the singer how many times to repeat that chunk of text; an additional number tells each singer how many times to repeat it loudly or softly. Each singer chooses a note with which to sing each chunk—any note—with the caveats to not hit the same note twice in a row, and to try to match notes with a note sung by someone else in the group. Each note is held “for the length of a breath,” and each singer goes through the text at his own pace. Despite the seeming vagueness of the score’s few instructions, the piece sounds very similar—and very beautiful—each time it is performed. It starts out in discord, but rapidly and predictably resolves into a tranquil pool of sound. “Paragraph 7,” and 1960s tape loop pieces like Steve Reich’s “It’s Gonna Rain,” sparked Eno’s fascination with music that wasn’t obsessively organized from the start, but instead grew and mutated in intriguing ways from a limited set of initial constraints. “Paragraph 7” also reinforced Eno’s interest in music compositions that seemed to have the capacity to regulate themselves; the idea of a self-regulating system was at the very heart of cybernetics. Another appealing facet of “Paragraph 7” for Eno was that it was both process and product—an elegant and endlessly beguiling process that yielded a lush, calming result. 
Some of Cage’s pieces, and other process-driven pieces by other avant-gardists, embraced process to the point of extreme fetishism, and the resulting product could be jarring or painful to listen to. “Paragraph 7,” meanwhile, was easier on the ears—a shimmering cloud of sonics. In an essay titled “Generating and Organizing Variety in the Arts,” published in Studio International in 1976, a 28-year-old Eno connected his interest in “Paragraph 7” to his interest in cybernetics. He attempted to analyze how the design of the score’s few instructions naturally reduced the “variety” of possible inputs, leading to a remarkably consistent output. In the essay, Eno also wrote about algorithms—a cutting-edge concept for an electronic-music composer to be writing about, in an era when typewriters, not computers, were still en vogue. (In 1976, on the other side of the Atlantic, Steve Jobs and Steve Wozniak were busy building a primitive personal computer in a garage that they called the Apple I.) Eno also talked about the related concept of a “heuristic,” using managerial-cybernetics champion Stafford Beer’s definition. “To use Beer’s example: If you wish to tell someone how to reach the top of a mountain that is shrouded in mist, the heuristic ‘keep going up’ will get him there,” Eno wrote. Eno connected Beer’s concept of a “heuristic” to music. Brecht’s Fluxus scores, for instance, could be described as heuristics.
Geeta Dayal (Brian Eno's Another Green World (33 1/3 Book 67))
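The rules Dayal summarizes (sing each chunk on a chosen note, never repeat your previous note, prefer a note someone else is already holding) lend themselves to a toy simulation of how discord resolves into consensus. The note names and singer count below are placeholders; only the rules come from the text.

```python
# Toy simulation of the "Paragraph 7" note-choosing rules: each singer moves
# through the chunks, never repeating their own previous note and matching a
# note already "in the air" whenever possible.
import random

NOTES = ["C", "D", "E", "F", "G", "A", "B"]

def choose_note(previous, notes_in_air):
    """Pick a note: never the singer's previous one, preferring a match."""
    candidates = [n for n in notes_in_air if n != previous]
    if candidates:
        return random.choice(candidates)
    return random.choice([n for n in NOTES if n != previous])

def simulate(num_singers=8, num_chunks=24, seed=0):
    random.seed(seed)
    current = {s: None for s in range(num_singers)}
    history = []
    for _ in range(num_chunks):
        in_air = {n for n in current.values() if n is not None}
        for s in range(num_singers):
            current[s] = choose_note(current[s], in_air)
        history.append(dict(current))
    return history

history = simulate()
# The pool of distinct sounding notes tends to narrow as singers match
# each other -- the self-regulating convergence Eno found so appealing.
print(len(set(history[0].values())), "->", len(set(history[-1].values())))
```

Because every new note must (when possible) match one already being sung, variety can rarely grow and usually shrinks, which is one way to see why performances of the piece sound so consistent despite the freedom in the score.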
New opportunities for New York as a high-tech hub are related to the evolution of the Internet, according to Chris Dixon: “Imagine the Internet as a house. The first phase— laying the foundation, the bricks—happened in the ‘90s. No wonder that Boston and California, heavy tech places with MIT and Stanford, dominated the scene at that time. The house has been built, now it’s more about interior design. Many interesting, recent companies haven’t been started by technologists but by design and product-oriented people, which has helped New York a lot. New York City has always been a consumer media kind of city, and the Internet is in need of those kinds of skills now. Actually, when I say design, it’s more about product-focused people. I’d put Facebook in that category. Everything requires engineers, but unlike Google, their breakthrough was not as scientific. It was a well-designed product that people liked to use. Google had a significant scientific breakthrough with their search algorithm. That’s not what drives Facebook. In The Social Network movie, when they write equations on the wall that’s just not what it is, it’s not about that. Every company has engineering problems, but Facebook is product-design driven.
Maria Teresa Cometto (Tech and the City: The Making of New York's Startup Community)
When Charles Darwin was trying to decide whether he should propose to his cousin Emma Wedgwood, he got out a pencil and paper and weighed every possible consequence. In favor of marriage he listed children, companionship, and the “charms of music & female chit-chat.” Against marriage he listed the “terrible loss of time,” lack of freedom to go where he wished, the burden of visiting relatives, the expense and anxiety provoked by children, the concern that “perhaps my wife won’t like London,” and having less money to spend on books. Weighing
Brian Christian (Algorithms to Live By: The Computer Science of Human Decisions)
Fiscal Numbers (the latter uniquely identifies a particular hospitalization for patients who might have been admitted multiple times), which allowed us to merge information from many different hospital sources. The data were finally organized into a comprehensive relational database. More information on database merger, in particular, how database integrity was ensured, is available at the MIMIC-II web site [1]. The database user guide is also online [2]. An additional task was to convert the patient waveform data from Philips’ proprietary format into an open-source format. With assistance from the medical equipment vendor, the waveforms, trends, and alarms were translated into WFDB, an open data format that is used for publicly available databases on the National Institutes of Health-sponsored PhysioNet web site [3]. All data that were integrated into the MIMIC-II database were de-identified in compliance with Health Insurance Portability and Accountability Act standards to facilitate public access to MIMIC-II. Deletion of protected health information from structured data sources was straightforward (e.g., database fields that provide the patient name, date of birth, etc.). We also removed protected health information from the discharge summaries, diagnostic reports, and the approximately 700,000 free-text nursing and respiratory notes in MIMIC-II using an automated algorithm that has been shown to have superior performance in comparison to clinicians in detecting protected health information [4]. This algorithm accommodates the broad spectrum of writing styles in our data set, including personal variations in syntax, abbreviations, and spelling. We have posted the algorithm in open-source form as a general tool to be used by others for de-identification of free-text notes [5].
Mit Critical Data (Secondary Analysis of Electronic Health Records)
we are on the brink of a momentous revolution. Humans are in danger of losing their value, because intelligence is decoupling from consciousness. Until today, high intelligence always went hand in hand with a developed consciousness. Only conscious beings could perform tasks that required a lot of intelligence, such as playing chess, driving cars, diagnosing diseases or identifying terrorists. However, we are now developing new types of non-conscious intelligence that can perform such tasks far better than humans. For all these tasks are based on pattern recognition, and non-conscious algorithms may soon excel human consciousness in recognising patterns. This raises a novel question: which of the two is really important, intelligence or consciousness? As long as they went hand in hand, debating their relative value was just a pastime for philosophers. But in the twenty-first century, this is becoming an urgent political and economic issue. And it is sobering to realise that, at least for armies and corporations, the answer is straightforward: intelligence is mandatory but consciousness is optional.
Yuval Noah Harari
Chasing tax cheats using normal procedures was not an option. It would take decades just to identify anything like the majority of them and centuries to prosecute them successfully; the more we caught, the more clogged up the judicial system would become. We needed a different approach. Once Danis was on board a couple of days later, together we thought of one: we would extract historical and real-time data from the banks on all transfers taking place within Greece as well as in and out of the country and commission software to compare the money flows associated with each tax file number with the tax returns of that same file number. The algorithm would be designed to flag up any instance where declared income seemed to be substantially lower than actual income. Having identified the most likely offenders in this way, we would make them an offer they could not refuse. The plan was to convene a press conference at which I would make it clear that anyone caught by the new system would be subject to 45 per cent tax, large penalties on 100 per cent of their undeclared income and criminal prosecution. But as our government sought to establish a new relationship of trust between state and citizenry, there would be an opportunity to make amends anonymously and at minimum cost. I would announce that for the next fortnight a new portal would be open on the ministry’s website on which anyone could register any previously undeclared income for the period 2000–14. Only 15 per cent of this sum would be required in tax arrears, payable via web banking or debit card. 
In return for payment, the taxpayer would receive an electronic receipt guaranteeing immunity from prosecution for previous non-disclosure. Alongside this I resolved to propose a simple deal to the finance minister of Switzerland, where so many of Greece’s tax cheats kept their untaxed money. In a rare example of the raw power of the European Union being used as a force for good, Switzerland had recently been forced to disclose all banking information pertaining to EU citizens by 2017. Naturally, the Swiss feared that large EU-domiciled depositors who did not want their bank balances to be reported to their country’s tax authorities might shift their money before the revelation deadline to some other jurisdiction, such as the Cayman Islands, Singapore or Panama. My proposals were thus very much in the Swiss finance minister’s interests: a 15 per cent tax rate was a relatively small price to pay for legalizing a stash and allowing it to remain in safe, conveniently located Switzerland. I would pass a law through Greece’s parliament that would allow for the taxation of money in Swiss bank accounts at this exceptionally low rate, and in return the Swiss finance minister would require all his country’s banks to send their Greek customers a friendly letter informing them that, unless they produced the electronic receipt and immunity certificate provided by my ministry’s web page, their bank account would be closed within weeks. To my great surprise and delight, my Swiss counterpart agreed to the proposal.
Yanis Varoufakis (Adults in the Room: My Battle with Europe's Deep Establishment)
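The flagging logic Varoufakis describes — compare income inferred from bank transfers against the tax return filed under the same tax file number, flag files where declared income is substantially lower, and compute the 15 per cent amnesty payment — can be sketched as follows. The function names and the "substantially lower" threshold are illustrative assumptions, not the ministry's actual parameters:

```python
def flag_likely_offenders(bank_inflows, declared_income, threshold=0.7):
    """Return tax file numbers whose declared income is substantially
    lower than the income implied by their bank transfers.

    bank_inflows: {tax_file_number: total inflows observed in bank data}
    declared_income: {tax_file_number: income on the tax return}
    threshold: declared/actual ratio below which a file is flagged
    """
    flagged = {}
    for tfn, inflow in bank_inflows.items():
        declared = declared_income.get(tfn, 0.0)
        if inflow > 0 and declared / inflow < threshold:
            flagged[tfn] = inflow - declared  # estimated undeclared income
    return flagged

def amnesty_due(undeclared, rate=0.15):
    """Tax arrears owed under the proposed 15% voluntary-disclosure rate."""
    return undeclared * rate
```

For example, a file number with 100,000 in observed inflows but only 30,000 declared would be flagged with 70,000 of estimated undeclared income, and would owe 10,500 under the 15 per cent amnesty rather than the 45 per cent rate plus penalties.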
From time to time, a complex algorithm will lead to a longer routine, and in those circumstances, the routine should be allowed to grow organically up to 100–200 lines. (A line is a noncomment, nonblank line of source code.) Decades of evidence say that routines of such length are no more error prone than shorter routines. Let issues such as the routine's cohesion, depth of nesting, number of variables, number of decision points, number of comments needed to explain the routine, and other complexity-related considerations dictate the length of the routine rather than imposing a length restriction per se.
Steve McConnell (Code Complete)
It is best to be the CEO; it is satisfactory to be an early employee, maybe the fifth or sixth or perhaps the tenth. Alternately, one may become an engineer devising precious algorithms in the cloisters of Google and its like. Otherwise one becomes a mere employee. A coder of websites at Facebook is no one in particular. A manager at Microsoft is no one. A person (think woman) working in customer relations is a particular type of no one,
Ellen Ullman (Life in Code: A Personal History of Technology)
It’s easy for you to tell what it’s a photo of, but to program a function that inputs nothing but the colors of all the pixels of an image and outputs an accurate caption such as “A group of young people playing a game of frisbee” had eluded all the world’s AI researchers for decades. Yet a team at Google led by Ilya Sutskever did precisely that in 2014. Input a different set of pixel colors, and it replies “A herd of elephants walking across a dry grass field,” again correctly. How did they do it? Deep Blue–style, by programming handcrafted algorithms for detecting frisbees, faces and the like? No, by creating a relatively simple neural network with no knowledge whatsoever about the physical world or its contents, and then letting it learn by exposing it to massive amounts of data. AI visionary Jeff Hawkins wrote in 2004 that “no computer can…see as well as a mouse,” but those days are now long gone.
Max Tegmark (Life 3.0: Being Human in the Age of Artificial Intelligence)
This is completely wrong, Manson argues, because happiness is a fleeting state. Once we solve our momentary happiness algorithm, a new algorithm will inevitably appear, whispering to us that yeah, X is okay, but if we could just achieve something even better, then we’d really have it made. Everything is relative.
Worth Books (Summary and Analysis of The Subtle Art of Not Giving a F*ck: A Counterintuitive Approach to Living a Good Life: Based on the Book by Mark Manson (Smart Summaries))
The procession to the flight and then the numbing experience of flying itself involves a kind of stripping-away of the self and surroundings until everything becomes smooth and uniform. It’s a recognizable feeling—that slight separation from reality that happens when the plane takes off, or the clean burst of anonymity when opening the door of a hotel room for the first time. “The space of non-place creates neither singular identity nor relations; only solitude and similitude,” Augé writes. He describes “the passive joys of identity-loss.”
Kyle Chayka (Filterworld: How Algorithms Flattened Culture)
Classical mathematicians have trouble understanding the set ℝ of Constructive real numbers because it seems to be both countable and uncountable. The Cantor diagonal argument—an algorithm that, given a sequence of real numbers, produces a real number different from every number in the sequence—is completely Constructive, and seems to show that the set is uncountable. But every real number is given by an algorithm that is described by a finite sequence of symbols, and the set of all finite sequences of symbols is countable. The situation clarifies if we see that Cantor discovered a difference in complexity rather than in size. The set ℝ is not bigger, but more complex than the set ℕ. Its complexity is related to the fact that real numbers are algorithms, and to the undecidability of the halting problem shown by Turing. Given a set of symbols purporting to describe the algorithm for a real number, Turing showed that we have no algorithm that decides whether it actually computes a real number or goes into an infinite loop. So we have no way to make a list of all real numbers.
Newcomb Greenleaf
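The diagonal argument Greenleaf calls "completely Constructive" really is an algorithm, and a short sketch makes the point. Here a real in [0, 1) is represented as a function from a digit index to a decimal digit, and the diagonal procedure builds, from any sequence of such reals, a real that differs from every member of the sequence. The digit-tweaking rule (using only 4 and 5, avoiding 0 and 9) is one standard choice that sidesteps the 0.1999… = 0.2000… ambiguity; the representation is an assumption for illustration:

```python
def diagonal(sequence_of_reals):
    """Given sequence_of_reals(n) -> (digit function of the n-th real),
    return a digit function for a real differing from each of them."""
    def digit(n):
        d = sequence_of_reals(n)(n)   # n-th digit of the n-th real
        return 4 if d == 5 else 5     # differs from d; never 0 or 9
    return digit

# Example sequence: the n-th real has the constant digit n % 10.
seq = lambda n: (lambda k: n % 10)
d = diagonal(seq)
# For every n, d(n) differs from the n-th digit of the n-th real,
# so the diagonal real is not in the sequence.
```

Note what the sketch does not give you, which is Greenleaf's point: there is no algorithm that enumerates all such digit functions, because deciding whether an arbitrary program computes one runs into the undecidability of the halting problem.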
Lovelace defined as an ‘operation’ the control of material and symbolic entities beyond the second-order language of mathematics (like the idea, discussed in chapter 1, of an algorithmic thinking beyond the boundary of computer science). In a visionary way, Lovelace seemed to suggest that mathematics is not the universal theory par excellence but a particular case of the science of operations. Following this insight, she envisioned the capacity of numerical computers qua universal machines to represent and manipulate numerical relations in the most diverse disciplines and generate, among other things, complex musical artefacts: [The Analytical Engine] might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine … Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.
Matteo Pasquinelli (The Eye of the Master: A Social History of Artificial Intelligence)
The scale shift of labour composition from the nineteenth to the twentieth centuries affected also the logic of automation, that is, the scientific paradigms involved in this transformation. The relatively simple industrial division of labour and its seemingly rectilinear assembly lines could easily be compared to a simple algorithm, a rule-based procedure with an ‘if/then’ structure which has its equivalent in the logical form of deduction. Deduction, not by coincidence, is the logical form that via Leibniz, Babbage, Shannon, and Turing innervated into electromechanical computation and eventually symbolic AI. Deductive logic is useful for modelling simple processes, but not systems with a multitude of autonomous agents, such as society, the market, or the brain.
Matteo Pasquinelli (The Eye of the Master: A Social History of Artificial Intelligence)
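The ‘if/then’ deductive structure Pasquinelli describes — the logical skeleton running from Leibniz through symbolic AI — can be illustrated with a toy forward-chaining loop over rules of the form "if all premises hold, then conclude X". The assembly-line rule set here is invented purely for illustration:

```python
def forward_chain(facts, rules):
    """Repeatedly apply if/then rules until no new fact can be derived.

    facts: set of known atoms
    rules: list of (premises, conclusion) pairs
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# A rectilinear "assembly line" of rules, each step triggering the next.
rules = [({"part_arrived"}, "assemble"),
         ({"assemble"}, "inspect"),
         ({"inspect"}, "ship")]
derived = forward_chain({"part_arrived"}, rules)
```

The rigidity is the point: every conclusion follows mechanically from the premises, which is exactly why, as the quote argues, this form models assembly lines well and societies, markets, or brains poorly.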