“
Programming is a science dressed up as art, because most of us don’t understand the physics of software and it’s rarely, if ever, taught. The physics of software is not algorithms, data structures, languages, and abstractions. These are just tools we make, use, and throw away. The real physics of software is the physics of people. Specifically, it’s about our limitations when it comes to complexity and our desire to work together to solve large problems in pieces. This is the science of programming: make building blocks that people can understand and use easily, and people will work together to solve the very largest problems.
”
Pieter Hintjens (ZeroMQ: Messaging for Many Applications)
“
All programs transform data, converting an input into an output. And yet when we think about design, we rarely think about creating transformations. Instead we worry about classes and modules, data structures and algorithms, languages and frameworks.
”
Andrew Hunt (The Pragmatic Programmer: Your Journey to Mastery, 20th Anniversary Edition)
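Hunt's point is easiest to see in code. Below is a minimal sketch (not from the book; the pipeline stages and names are illustrative) of a small word-count program expressed as a chain of input-to-output transformations rather than as a set of classes.

```python
# A minimal sketch of "design as transformation" (illustrative, not from
# the book): a word-count program written as a chain of input -> output
# steps rather than as classes and mutable state.
from collections import Counter
from typing import List, Tuple

def to_words(text: str) -> List[str]:
    """Transform raw text into a list of lowercase words."""
    return text.lower().split()

def to_counts(words: List[str]) -> Counter:
    """Transform a word list into word -> frequency counts."""
    return Counter(words)

def top_n(counts: Counter, n: int = 3) -> List[Tuple[str, int]]:
    """Transform counts into the n most common (word, count) pairs."""
    return counts.most_common(n)

if __name__ == "__main__":
    text = "the quick brown fox jumps over the lazy dog the end"
    print(top_n(to_counts(to_words(text))))   # [('the', 3), ...]
```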
“
Programming is a science dressed up as art, because most of us don't understand the physics of software, and it's rarely if ever taught. The physics of software is not algorithms, data structures, languages and abstractions. These are just tools we make, use, throw away. The real physics of software is the physics of people.
”
ØMQ - The Guide
“
As a thought experiment, von Neumann's analysis was simplicity itself. He was saying that the genetic material of any self-reproducing system, whether natural or artificial, must function very much like a stored program in a computer: on the one hand, it had to serve as live, executable machine code, a kind of algorithm that could be carried out to guide the construction of the system's offspring; on the other hand, it had to serve as passive data, a description that could be duplicated and passed along to the offspring.
As a scientific prediction, that same analysis was breathtaking: in 1953, when James Watson and Francis Crick finally determined the molecular structure of DNA, it would fulfill von Neumann's two requirements exactly. As a genetic program, DNA encodes the instructions for making all the enzymes and structural proteins that the cell needs in order to function. And as a repository of genetic data, the DNA double helix unwinds and makes a copy of itself every time the cell divides in two. Nature thus built the dual role of the genetic material into the structure of the DNA molecule itself.
”
M. Mitchell Waldrop (The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal)
“
The main ones are the symbolists, connectionists, evolutionaries, Bayesians, and analogizers. Each tribe has a set of core beliefs, and a particular problem that it cares most about. It has found a solution to that problem, based on ideas from its allied fields of science, and it has a master algorithm that embodies it. For symbolists, all intelligence can be reduced to manipulating symbols, in the same way that a mathematician solves equations by replacing expressions by other expressions. Symbolists understand that you can’t learn from scratch: you need some initial knowledge to go with the data. They’ve figured out how to incorporate preexisting knowledge into learning, and how to combine different pieces of knowledge on the fly in order to solve new problems. Their master algorithm is inverse deduction, which figures out what knowledge is missing in order to make a deduction go through, and then makes it as general as possible. For connectionists, learning is what the brain does, and so what we need to do is reverse engineer it. The brain learns by adjusting the strengths of connections between neurons, and the crucial problem is figuring out which connections are to blame for which errors and changing them accordingly. The connectionists’ master algorithm is backpropagation, which compares a system’s output with the desired one and then successively changes the connections in layer after layer of neurons so as to bring the output closer to what it should be. Evolutionaries believe that the mother of all learning is natural selection. If it made us, it can make anything, and all we need to do is simulate it on the computer. The key problem that evolutionaries solve is learning structure: not just adjusting parameters, like backpropagation does, but creating the brain that those adjustments can then fine-tune. The evolutionaries’ master algorithm is genetic programming, which mates and evolves computer programs in the same way that nature mates and evolves organisms. Bayesians are concerned above all with uncertainty. All learned knowledge is uncertain, and learning itself is a form of uncertain inference. The problem then becomes how to deal with noisy, incomplete, and even contradictory information without falling apart. The solution is probabilistic inference, and the master algorithm is Bayes’ theorem and its derivates. Bayes’ theorem tells us how to incorporate new evidence into our beliefs, and probabilistic inference algorithms do that as efficiently as possible. For analogizers, the key to learning is recognizing similarities between situations and thereby inferring other similarities. If two patients have similar symptoms, perhaps they have the same disease. The key problem is judging how similar two things are. The analogizers’ master algorithm is the support vector machine, which figures out which experiences to remember and how to combine them to make new predictions.
”
Pedro Domingos (The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World)
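Of the five master algorithms Domingos lists, Bayes' theorem is the simplest to show in a few lines. Below is a minimal sketch of Bayesian updating; the disease-test numbers are invented for illustration.

```python
# A tiny illustration of the Bayesians' master algorithm: Bayes' theorem
# turning a prior belief plus new evidence into an updated belief.
# The disease/test numbers below are invented for illustration.

def bayes_update(prior: float, sensitivity: float, false_positive_rate: float) -> float:
    """Return P(hypothesis | positive evidence) via Bayes' theorem."""
    p_evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_evidence

# Prior: 1% of patients have the disease; the test detects 95% of true
# cases but also fires on 5% of healthy patients.
posterior = bayes_update(prior=0.01, sensitivity=0.95, false_positive_rate=0.05)
print(f"P(disease | positive test) = {posterior:.3f}")   # ~0.161
```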
“
Turing showed that just as the uncertainties of physics stem from using electrons and photons to measure themselves, the limitations of computers stem from recursive self-reference. Just as quantum theory fell into self-referential loops of uncertainty because it measured atoms and electrons using instruments composed of atoms and electrons, computer logic could not escape self-referential loops as its own logical structures informed its own algorithms.
”
George Gilder (Life After Google: The Fall of Big Data and the Rise of the Blockchain Economy)
“
A few books that I've read....
Pascal, an Introduction to the Art and Science of Programming
by Walter Savitch
Programming algorithms
Introduction to Algorithms, 3rd Edition (The MIT Press)
Data Structures and Algorithms in Java
Author: Michael T. Goodrich - Roberto Tamassia - Michael H. Goldwasser
The Algorithm Design Manual
Author: Steven S. Skiena
Algorithm Design
Author: Jon Kleinberg - Éva Tardos
Algorithms + Data Structures = Programs
Book by Niklaus Wirth
Discrete Math
Discrete Mathematics and Its Applications
Author: Kenneth H. Rosen
Computer Org
Structured Computer Organization
Andrew S. Tanenbaum
Introduction to Assembly Language Programming: From 8086 to Pentium Processors (Undergraduate Texts in Computer Science)
Author: Sivarama P. Dandamudi
Distributed Systems
Distributed Systems: Concepts and Design
Author: George Coulouris - Jean Dollimore - Tim Kindberg - Gordon Blair
Distributed Systems: An Algorithmic Approach, Second Edition (Chapman & Hall/CRC Computer and Information Science Series)
Author: Sukumar Ghosh
Mathematical Reasoning
Mathematical Reasoning: Writing and Proof Version 2.1
Author: Ted Sundstrom
An Introduction to Mathematical Reasoning: Numbers, Sets and Functions
Author: Peter J. Eccles
Differential Equations
Differential Equations (with DE Tools Printed Access Card)
Author: Paul Blanchard - Robert L. Devaney - Glen R. Hall
Calculus
Calculus: Early Transcendentals
Author: James Stewart
And more....
”
Michael Gitabaum
“
I took 17 computer science classes and made an A in 11 of them, was 1 point away from an A in 3 of them, and the rest of them didn't matter.
Math is a tool for physics, chemistry, biology/basic computation and nothing else.
CS I(Pascal Vax),
CS II(Pascal Vax),
Sr. Software Engineering,
Sr. Distributed Systems,
Sr. Research,
Sr. Operating Systems,
Sr. Unix Operating Systems,
Data Structures,
Sr. Object Oriented A&D,
CS (perl/linux),
Sr. Java Programming,
Information Systems Design,
Jr. Unix Operating Systems,
Microprocessors,
Programming Algorithms,
Calculus I, II, III, B
Differential Equations, TI-89
Mathematical Reasoning, 92
C++ Programming,
Assembly 8086,
Digital Computer Organization,
Discrete Math I, II, B
Statistics for the Engineering & Sciences (w/permutations & combinatorics) --
A-American Literature
A-United States History 1865
CLEP-full year English
CLEP-full year biology
A-Psychology
A-Environmental Ethics
”
Michael Gitabaum
“
Logic. Rationality. Reasoning. Thought. Analysis. Calculation. Decision-making. All this is within the mind of a human being, correct? Humanity
”
Code Well Academy (Javascript Artificial Intelligence: Made Easy, w/ Essential Programming; Create your * Problem Solving * Algorithms! TODAY! w/ Machine Learning & Data Structures (Artificial Intelligence Series))
“
Each tribe’s solution to its central problem is a brilliant, hard-won advance. But the true Master Algorithm must solve all five problems, not just one. For example, to cure cancer we need to understand the metabolic networks in the cell: which genes regulate which others, which chemical reactions the resulting proteins control, and how adding a new molecule to the mix would affect the network. It would be silly to try to learn all of this from scratch, ignoring all the knowledge that biologists have painstakingly accumulated over the decades. Symbolists know how to combine this knowledge with data from DNA sequencers, gene expression microarrays, and so on, to produce results that you couldn’t get with either alone. But the knowledge we obtain by inverse deduction is purely qualitative; we need to learn not just who interacts with whom, but how much, and backpropagation can do that. Nevertheless, both inverse deduction and backpropagation would be lost in space without some basic structure on which to hang the interactions and parameters they find, and genetic programming can discover it. At this point, if we had complete knowledge of the metabolism and all the data relevant to a given patient, we could figure out a treatment for her. But in reality the information we have is always very incomplete, and even incorrect in places; we need to make headway despite that, and that’s what probabilistic inference is for. In the hardest cases, the patient’s cancer looks very different from previous ones, and all our learned knowledge fails. Similarity-based algorithms can save the day by seeing analogies between superficially very different situations, zeroing in on their essential similarities and ignoring the rest. In this book we will synthesize a single algorithm with all these capabilities:
”
Pedro Domingos (The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World)
“
For the hardest problems—the ones we really want to solve but haven’t been able to, like curing cancer—pure nature-inspired approaches are probably too uninformed to succeed, even given massive amounts of data. We can in principle learn a complete model of a cell’s metabolic networks by a combination of structure search, with or without crossover, and parameter learning via backpropagation, but there are too many bad local optima to get stuck in. We need to reason with larger chunks, assembling and reassembling them as needed and using inverse deduction to fill in the gaps. And we need our learning to be guided by the goal of optimally diagnosing cancer and finding the best drugs to cure it. Optimal learning is the Bayesians’ central goal, and they are in no doubt that they’ve figured out how to reach it. This way, please …
”
Pedro Domingos (The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World)
“
The rate of time flow perceived by an observer in the simulated universe is completely independent of the rate at which a computer runs the simulation, a point emphasized in Greg Egan's science-fiction novel Permutation City. Moreover, as we discussed in the last chapter and as stressed by Einstein, it's arguably more natural to view our Universe not from the frog perspective as a three-dimensional space where things happen, but from the bird perspective as a four-dimensional spacetime that merely is. There should therefore be no need for the computer to compute anything at all-it could simply store all the four-dimensional data, that is, encode all properties of the mathematical structure that is our Universe. Individual time slices could then be read out sequentially if desired, and the "simulated" world should still feel as real to its inhabitants as in the case where only three-dimensional data is stored and evolved. In conclusion: the role of the simulating computer isn't to compute the history of our Universe, but to specify it.
How specify it? The way in which the data are stored (the type of computer, the data format, etc.) should be irrelevant, so the extent to which the inhabitants of the simulated universe perceive themselves as real should be independent of whatever method is used for data compression. The physical laws that we've discovered provide great means of data compression, since they make it sufficient to store the initial data at some time together with the equations and a program computing the future from these initial data. As emphasized on pages 340-344, the initial data might be extremely simple: popular initial states from quantum field theory with intimidating names such as the Hawking-Hartle wavefunction or the inflationary Bunch-Davies vacuum have very low algorithmic complexity, since they can be defined in brief physics papers, yet simulating their time evolution would simulate not merely one universe like ours, but a vast decohering collection of parallel ones. It's therefore plausible that our Universe (and even the whole Level III multiverse) could be simulated by quite a short computer program.
”
Max Tegmark (Our Mathematical Universe: My Quest for the Ultimate Nature of Reality)
“
Smart entrepreneurs have grabbed this opportunity with a vengeance. Now online lesson-plan marketplaces such as Gooru Learning, Teachers Pay Teachers, and Share My Lesson allow teachers who want to devote more of their time to other tasks the ability to purchase high-quality (and many lesser-quality) lesson plans, ready to go. With sensors, data, and A.I., we can begin, even today, testing for the learning efficacy of different lectures, styles, and more. And, because humans do a poor job of incorporating massive amounts of information to make iterative decisions, in the very near future, computers will start doing more and more of the lesson planning. They will write the basic lessons and learn what works and what doesn’t for specific students. Creative teachers will continue, though, to be incredibly valuable: they will learn how to steer and curate algorithmic and heuristically updated lesson creation in ways that computers could not necessarily imagine. All of this is, of course, a somewhat bittersweet development. Teaching is an idealistic profession. You probably remember a special teacher who shaped your life, encouraged your interests, and made school exciting. The movies and pop culture are filled with paeans to unselfish, underpaid teachers fighting the good fight and helping their charges. But it is becoming clearer that teaching, like many other white-collar jobs that have resisted robots, is something that robots can do—possibly, in structured curricula, better than humans can. The
”
Vivek Wadhwa (The Driver in the Driverless Car: How Our Technology Choices Will Create the Future)
“
Fiscal Numbers (the latter uniquely identifies a particular hospitalization for patients who might have been admitted multiple times), which allowed us to merge information from many different hospital sources. The data were finally organized into a comprehensive relational database. More information on database merger, in particular, how database integrity was ensured, is available at the MIMIC-II web site [1]. The database user guide is also online [2]. An additional task was to convert the patient waveform data from Philips’ proprietary format into an open-source format. With assistance from the medical equipment vendor, the waveforms, trends, and alarms were translated into WFDB, an open data format that is used for publicly available databases on the National Institutes of Health-sponsored PhysioNet web site [3]. All data that were integrated into the MIMIC-II database were de-identified in compliance with Health Insurance Portability and Accountability Act standards to facilitate public access to MIMIC-II. Deletion of protected health information from structured data sources was straightforward (e.g., database fields that provide the patient name, date of birth, etc.). We also removed protected health information from the discharge summaries, diagnostic reports, and the approximately 700,000 free-text nursing and respiratory notes in MIMIC-II using an automated algorithm that has been shown to have superior performance in comparison to clinicians in detecting protected health information [4]. This algorithm accommodates the broad spectrum of writing styles in our data set, including personal variations in syntax, abbreviations, and spelling. We have posted the algorithm in open-source form as a general tool to be used by others for de-identification of free-text notes [5].
”
MIT Critical Data (Secondary Analysis of Electronic Health Records)
“
As leaders, if you don’t transform and use this technology differently—if you don’t reinvent yourself, change your organization structure; if you don’t talk about speed of innovation—you’re going to get disrupted. And it’ll be a brutal disruption, where the majority of companies will not exist in a meaningful way 10 to 15 years from now.
”
Paul Leonardi (The Digital Mindset: What It Really Takes to Thrive in the Age of Data, Algorithms, and AI)
“
Learn Data Science at SLA to extract meaningful insights from structured and unstructured data using scientific methods, algorithms, and systematic processes. Get hands-on with popular tools and technologies used to analyze data efficiently. Earn an industry-accredited certificate and get placement assistance at our leading Data Science Training Institute in Chennai. Equip yourself with the key concepts of Data Science such as Probability, Statistics, Machine Learning Techniques, Data Analytics Basics, and Data Visualization processes. We are extremely dedicated to serving you better.
”
Data Science Course in Chennai
“
I must take a moment and give an especially hearty cheer to anyone who champions structured data, richer data, data that gives us more handles to grab on to the things we are describing and thus enables us to serve them up in different ways within different contexts for our different constituent groups. However sophisticated the relevancy algorithms and myriad features of any discovery product might be, at the most basic level, these systems rely on the data we feed them.
”
Joseph Janes (Library 2020: Today's Leading Visionaries Describe Tomorrow's Library)
“
When you are vertically separating use cases from one another, you will run into this issue, and your temptation will be to couple the use cases because they have similar screen structures, or similar algorithms, or similar database queries and/or schemas. Be careful. Resist the temptation to commit the sin of knee-jerk elimination of duplication. Make sure the duplication is real. By the same token, when you are separating layers horizontally, you might notice that the data structure of a particular database record is very similar to the data structure of a particular screen view. You may be tempted to simply pass the database record up to the UI, rather than to create a view model that looks the same and copy the elements across. Be careful: This duplication is almost certainly accidental. Creating the separate view model is not a lot of effort, and it will help you keep the layers properly decoupled.
”
Robert C. Martin (Clean Architecture: A Craftsman's Guide to Software Structure and Design)
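A minimal sketch of the view-model point above, using hypothetical names (UserRecord, UserViewModel) that are not from the book: even when the two structures happen to look alike today, keeping them separate and copying the elements across stops the UI from depending on the database schema.

```python
# Hypothetical sketch: a database record and a view model kept as
# separate structures, with an explicit copy step between them.
from dataclasses import dataclass

@dataclass
class UserRecord:                 # shape dictated by the database schema
    id: int
    email: str
    password_hash: str
    created_at: str

@dataclass
class UserViewModel:              # shape dictated by what the screen shows
    display_name: str
    joined: str

def to_view_model(record: UserRecord) -> UserViewModel:
    """Copy the elements across so screens never see schema details."""
    return UserViewModel(display_name=record.email.split("@")[0],
                         joined=record.created_at)

record = UserRecord(id=1, email="ada@example.com",
                    password_hash="(hash)", created_at="2024-01-01")
print(to_view_model(record))      # UserViewModel(display_name='ada', joined='2024-01-01')
```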
“
Naturally occurring processes are often informally modeled by priority queues. Single people maintain a priority queue of potential dating candidates, mentally if not explicitly. One’s impression on meeting a new person maps directly to an attractiveness or desirability score, which serves as the key field for inserting this new entry into the “little black book” priority queue data structure. Dating is the process of extracting the most desirable person from the data structure (Find-Maximum), spending an evening to evaluate them better, and then reinserting them into the priority queue with a possibly revised score.
”
Steven S. Skiena (The Algorithm Design Manual)
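Skiena's analogy maps directly onto priority-queue operations. Here is a playful sketch using Python's heapq module (a min-heap, so scores are negated to get Find-Maximum behaviour); the names and scores are invented.

```python
# Playful sketch of the "little black book" priority queue, using heapq.
# heapq is a min-heap, so desirability scores are negated so that the
# maximum pops first. Names and scores are invented.
import heapq

little_black_book = []                       # the priority queue

def insert(name: str, desirability: float) -> None:
    heapq.heappush(little_black_book, (-desirability, name))

def find_maximum() -> str:
    """Extract the currently most desirable candidate."""
    _, name = heapq.heappop(little_black_book)
    return name

for name, score in [("Alex", 7.0), ("Sam", 9.0), ("Kim", 8.0)]:
    insert(name, score)

date = find_maximum()                        # "Sam" comes out first
insert(date, 6.5)                            # reinsert with a revised score
print(date, sorted(little_black_book))
```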
“
The core product team at most modern tech companies is called the triad: Engineer (or Tech Lead), Designer, and Product Manager. Engineers are responsible for the technical solution. They'll plan the data structures and algorithms that will make things fast, scalable, and maintainable. They'll write the code and tests. Designers are responsible for the solution from the user experience perspective. What will it look like? What are the flows, screens, and buttons? They'll make mockups or prototypes of how the feature should work. Product managers are responsible for selecting and defining which problems the team is going to solve, then ensuring the team solves them. They'll define what success looks like, and plan how to get there.
”
Jackie Bavaro (Cracking the PM Career: The Skills, Frameworks, and Practices To Become a Great Product Manager (Cracking the Interview & Career))
“
Programs, after all, are concrete formulations of abstract algorithms based on particular representations and structures of data.
”
Niklaus Wirth
“
The task of composition of operations is often considered the heart of the art of programming. However, it will become evident that the appropriate composition of data is equally fundamental and essential.
”
Niklaus Wirth (Algorithms + Data Structures = Programs (Prentice-Hall Series in Automatic Computation))
“
Natural Language Generation - An Overview | Yellowfin
Natural Language Generation (NLG) is a branch of artificial intelligence that focuses on transforming structured data into human-readable text. By using algorithms and linguistic rules, NLG systems can produce coherent narratives, summaries, and reports, enhancing communication in various applications like chatbots, automated journalism, and personalized content creation. This technology improves efficiency and accessibility in data interpretation. For more information, visit Yellowfin blog.
”
Yellowfin blog
“
Machine learning tends to be more focused on developing efficient algorithms that scale to large data in order to optimize the predictive model. Statistics generally pays more attention to the probabilistic theory and underlying structure of the model.
”
Peter Bruce (Practical Statistics for Data Scientists: 50 Essential Concepts)
“
If you know the sequence ahead of time,” says Tarjan, who splits his time between Princeton and Silicon Valley, “you can customize the data structure to minimize the total time for the entire sequence. That’s the optimum offline algorithm:
”
Brian Christian (Algorithms to Live By: The Computer Science of Human Decisions)
“
The most recent statistic is that YouTube adds nearly 600 hours of content every minute, as the product continues to grow its network into the many billions of users across web and mobile. To me, the key learning from the YouTube story is the journey that every networked product has to take. When they started out, they needed very little organization, but as the network grew, more and more structure was applied—first by editors, moderators, and users—and then by data and algorithms. The earliest iterations weren’t sophisticated, just whatever got the job done. Algorithms came later, and even years later, keeping the network healthy is still an everyday battle.
”
Andrew Chen (The Cold Start Problem: How to Start and Scale Network Effects)
“
Unleashing Reliable Insights from Generative AI by Disentangling Language Fluency and Knowledge Acquisition
Generative AI carries immense potential but also comes with significant risks. One of these risks lies in its limited ability to identify misinformation and inaccuracies within the contextual framework.
This deficiency can lead to mistaken associations of correlation with causation, reliance on incomplete or inaccurate data, and a lack of awareness of sensitive dependencies between information sets.
With society’s increasing fascination with and dependence on Generative AI, there is a concern that an unintended consequence will be an unhealthy influence on societal views of politics, culture, and science.
Humans acquire language and communication skills from a diverse range of sources, including raw, unfiltered, and unstructured content. However, when it comes to knowledge acquisition, humans typically rely on transparent, trusted, and structured sources.
In contrast, large language models (LLMs) such as ChatGPT draw from an array of opaque, unattested sources of raw, unfiltered, and unstructured content for language and communication training. LLMs treat this information as the absolute source of truth used in their responses.
While this approach has demonstrated effectiveness in generating natural language, it also introduces inconsistencies and deficiencies in response integrity.
While Generative AI can provide information, it does not inherently yield knowledge.
To unlock the true value of generative AI, it is crucial to disaggregate the process of language fluency training from the acquisition of knowledge used in responses. This disaggregation enables LLMs to not only generate coherent and fluent language but also deliver accurate and reliable information.
However, in a culture that obsesses over information from self-proclaimed influencers and prioritizes virality over transparency and accuracy, distinguishing reliable information from misinformation and knowledge from ignorance has become increasingly challenging. This presents a significant obstacle for AI algorithms striving to provide accurate and trustworthy responses.
Generative AI shows great promise, but addressing information integrity is crucial for ensuring accurate and reliable responses. By disaggregating language fluency training from knowledge acquisition, large language models can offer valuable insights.
However, overcoming the prevailing challenges of identifying reliable information and distinguishing knowledge from ignorance remains a critical endeavour for advancing AI algorithms. It is essential to acknowledge that resolving this is an immediate challenge, one that needs open dialogue that includes a broad set of disciplines, not just technologists.
Technology alone cannot provide a complete solution.
”
Tom Golway
“
value, I can do three things,” he says. “I can improve the algorithm itself, make it more sophisticated. I can throw more and better data at the algorithm so that the existing code produces better results. And I can change the speed of experimentation to get more results faster. “We focused on data and speed, not on a better algorithm.” Candela describes this decision as “dramatic” and “hard.” Computer scientists, especially academic-minded ones, are rewarded for inventing new algorithms or improving existing ones. A better statistical model is the goal. Getting cited in a journal is validation. Wowing your peers gives you cred. It requires a shift in thinking to get those engineers to focus on business impact before optimal statistical model. He thinks many companies are making the mistake of structuring their efforts around building the best algorithms, or hiring developers who claim to have the best algorithms, because that’s how many AI developers think.
”
Harvard Business Review (Artificial Intelligence: The Insights You Need from Harvard Business Review (HBR Insights))
“
The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions.
”
Niklaus Wirth (Algorithms and Data Structures)
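A one-function example makes Wirth's point concrete: the finite definition below contains no explicit repetition, yet it describes a computation for every non-negative n.

```python
# One finite recursive definition, with no explicit loop, describing an
# unbounded family of computations: n! = n * (n-1)!, with 0! = 1.
def factorial(n: int) -> int:
    return 1 if n == 0 else n * factorial(n - 1)

print([factorial(n) for n in range(6)])      # [1, 1, 2, 6, 24, 120]
```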
“
WHY STUDY DISCRETE MATHEMATICS? There are several important reasons for studying discrete mathematics. First, through this course you can develop your mathematical maturity: that is, your ability to understand and create mathematical arguments. You will not get very far in your studies in the mathematical sciences without these skills. Second, discrete mathematics is the gateway to more advanced courses in all parts of the mathematical sciences. Discrete mathematics provides the mathematical foundations for many computer science courses, including data structures, algorithms, database theory, automata theory, formal languages, compiler theory, computer security, and operating systems. Students find these courses much more difficult when they have not had the appropriate mathematical foundations from discrete mathematics.
”
Kenneth H. Rosen (Discrete Mathematics and Its Applications)
“
Time Complexity: O(n). Space Complexity: O(n). Problem-13 Give an algorithm for deleting an element (assuming data is given) from binary tree.
”
Narasimha Karumanchi (Data Structures and Algorithmic Thinking with Python: Data Structure and Algorithmic Puzzles)
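For the stated problem, one common approach consistent with the quoted O(n) time and O(n) space bounds is sketched below: locate the node holding the value with a level-order traversal, copy the data of the deepest, rightmost node into it, then unlink that deepest node. This is a generic sketch, not necessarily the book's exact solution.

```python
# Sketch of one common O(n)-time, O(n)-space approach (not necessarily
# the book's exact solution): overwrite the target node's data with the
# deepest, rightmost node's data, then remove that deepest node.
from collections import deque

class Node:
    def __init__(self, data):
        self.data, self.left, self.right = data, None, None

def delete_value(root, value):
    if root is None:
        return None
    if root.left is None and root.right is None:
        return None if root.data == value else root
    target = deepest = parent = None
    queue = deque([(root, None)])
    while queue:                              # level-order traversal
        node, par = queue.popleft()
        if node.data == value:
            target = node
        deepest, parent = node, par           # last popped = deepest, rightmost
        if node.left:
            queue.append((node.left, node))
        if node.right:
            queue.append((node.right, node))
    if target is not None:
        target.data = deepest.data            # overwrite, then unlink deepest
        if parent.right is deepest:
            parent.right = None
        else:
            parent.left = None
    return root
```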
“
Here’s the problem: The future isn’t made up of predetermined, structured data alone. It changes as a result of people, and what we are learning, breaking, achieving, feeling, saying, thinking, and building in the present. Algorithms can’t account for the introduction of new qualitative variables, such as hardheaded CEOs, temperamental developers, or the eruption of mob justice within online communities.
”
Amy Webb (The Signals Are Talking: Why Today's Fringe Is Tomorrow's Mainstream)
“
Rules for Building High-Performance Code We’ve got the following rules for creating high-performance software: Know where you’re going (understand the objective of the software). Make a big map (have an overall program design firmly in mind, so the various parts of the program and the data structures work well together). Make lots of little maps (design an algorithm for each separate part of the overall design). Know the territory (understand exactly how the computer carries out each task). Know when it matters (identify the portions of your programs where performance matters, and don’t waste your time optimizing the rest). Always consider the alternatives (don’t get stuck on a single approach; odds are there’s a better way, if you’re clever and inventive enough). Know how to turn on the juice (optimize the code as best you know how when it does matter).
”
Anonymous
“
There's a certain "algorithmic objectivity" or even "epistemic purity" that has been attributed to automated content--an adherence to a consistent factual rendition that confers a halo of authority. But the apparent authority of automated content belies the messy, complex reality of datafication, which contorts the beautiful complexity of the world into a structured data scheme that invariably excludes nuance and context.
”
Nicholas Diakopoulos (Automating the News: How Algorithms Are Rewriting the Media)
“
When recommendation algorithms are based only on data about what you and other platform users already like, then these algorithms are less capable of providing the kind of surprise that might not be immediately pleasurable, that Montesquieu described. The feed structure also discourages users from spending too much time with any one piece of content. If you find something boring, perhaps too subtle, you just keep scrolling, and there’s no time for a greater sense of admiration to develop—one is increasingly encouraged to lean into impatience and superficiality in all things.
”
Kyle Chayka (Filterworld: How Algorithms Flattened Culture)