Relational Database Quotes

We've searched our database for all the quotes and captions related to Relational Database. Here they are! All 50 of them:

Hate Poem I hate you truly. Truly I do. Everything about me hates everything about you. The flick of my wrist hates you. The way I hold my pencil hates you. The sound made by my tiniest bones were they trapped in the jaws of a moray eel hates you. Each corpuscle singing in its capillary hates you. Look out! Fore! I hate you. The blue-green jewel of sock lint I’m digging from under my third toenail, left foot, hates you. The history of this keychain hates you. My sigh in the background as you explain relational databases hates you. The goldfish of my genius hates you. My aorta hates you. Also my ancestors. A closed window is both a closed window and an obvious symbol of how I hate you. My voice curt as a hairshirt: hate. My hesitation when you invite me for a drive: hate. My pleasant “good morning”: hate. You know how when I’m sleepy I nuzzle my head under your arm? Hate. The whites of my target-eyes articulate hate. My wit practices it. My breasts relaxing in their holster from morning to night hate you. Layers of hate, a parfait. Hours after our latest row, brandishing the sharp glee of hate, I dissect you cell by cell, so that I might hate each one individually and at leisure. My lungs, duplicitous twins, expand with the utter validity of my hate, which can never have enough of you, Breathlessly, like two idealists in a broken submarine.
Julie Sheehan
There’s an old saying in the relational database world: on a long enough timeline, all fields become optional.
Eric Redmond
Big data is the most disruptive force this industry has seen since the introduction of the relational database.
Jeffrey Needham (Disruptive Possibilities: How Big Data Changes Everything)
“Facebook in particular is the most appalling spying machine that has ever been invented,” Wikileaks founder Julian Assange said in 2011. “Here we have the world’s most comprehensive database about people, their relationships, their names, their addresses, their locations, and the communications with each other, their relatives, all sitting within the United States, all accessible to US intelligence.”
Sarah Kendzior (They Knew: How a Culture of Conspiracy Keeps America Complacent)
In fact, AI might make centralized systems far more efficient than diffused systems, because machine learning works better the more information it can analyze. If you disregard all privacy concerns and concentrate all the information relating to a billion people in one database, you can train much better algorithms than if you respect individual privacy and have in your database only partial information on a million people.
Yuval Noah Harari (21 Lessons for the 21st Century)
Big data is based on the feedback economy where the Internet of Things places sensors on more and more equipment. More and more data is being generated as medical records are digitized, more stores have loyalty cards to track consumer purchases, and people are wearing health-tracking devices. Generally, big data is more about looking at behavior, rather than monitoring transactions, which is the domain of traditional relational databases. As the cost of storage is dropping, companies track more and more data to look for patterns and build predictive models.
Neil Dunlop
Database Management System [Origin: Data + Latin basus "low, mean, vile, menial, degrading, counterfeit."] A complex set of interrelational data structures allowing data to be lost in many convenient sequences while retaining a complete record of the logical relations between the missing items. -- From The Devil's DP Dictionary
Stan Kelly-Bootle
The biochemist's approach pivots on concentration: find the protein by looking where it's most likely to be concentrated, and distill it out of the mix. The geneticist's approach, in contrast, pivots on information: find the gene by searching for differences in "databases" created by two closely related cells and multiply the gene in bacteria via cloning. The biochemist distills forms; the gene cloner amplifies information.
Siddhartha Mukherjee (The Gene: An Intimate History)
In fact, as these companies offered more and more (simply because they could), they found that demand actually followed supply. The act of vastly increasing choice seemed to unlock demand for that choice. Whether it was latent demand for niche goods that was already there or a creation of new demand, we don't yet know. But what we do know is that for the companies for which we have the most complete data - Netflix, Amazon, Rhapsody - sales of products not offered by their bricks-and-mortar competitors amounted to between a quarter and nearly half of total revenues - and that percentage is rising each year. In other words, the fastest-growing part of their businesses is sales of products that aren't available in traditional, physical retail stores at all. These infinite-shelf-space businesses have effectively learned a lesson in new math: A very, very big number (the products in the Tail) multiplied by a relatively small number (the sales of each) is still equal to a very, very big number. And, again, that very, very big number is only getting bigger. What's more, these millions of fringe sales are an efficient, cost-effective business. With no shelf space to pay for - and in the case of purely digital services like iTunes, no manufacturing costs and hardly any distribution fees - a niche product sold is just another sale, with the same (or better) margins as a hit. For the first time in history, hits and niches are on equal economic footing, both just entries in a database called up on demand, both equally worthy of being carried. Suddenly, popularity no longer has a monopoly on profitability.
Chris Anderson (The Long Tail: Why the Future of Business is Selling Less of More)
To see how this would work in a more relatable scenario, imagine a hacker accessing the computer system of your bank, transferring all your funds from your account into his own, and deleting all evidence of the transaction. Existing technology would not be able to pick this up, and you would likely be out of pocket. In the case of a blockchain currency like Bitcoin, having one server hacked with a false transaction inserted into the database would not be consistent with the same record across the other copies of the database. The blockchain would identify the transaction as illegitimate and would ultimately reject it, meaning the money in your account would be kept safe.
Chris Lambert (Cryptocurrency: How I Turned $400 into $100,000 by Trading Cryptocurrency for 6 months (Crypto Trading Secrets Book 1))
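To make that consensus check concrete, here is a minimal Python sketch (not from the book; the transaction strings and the validate_ledger helper are illustrative) of how honest replicas outvote a single tampered copy of the ledger:

```python
from collections import Counter

def validate_ledger(replicas: list[list[str]]) -> list[str]:
    """Accept the transaction history that a majority of replicas agree on.

    Each replica is modeled as an ordered list of transaction records; a
    tampered copy (one hacked server) differs from the honest majority
    and is simply outvoted.
    """
    tally = Counter(tuple(ledger) for ledger in replicas)
    consensus, votes = tally.most_common(1)[0]
    if votes <= len(replicas) // 2:
        raise ValueError("no majority agreement between replicas")
    return list(consensus)

honest = ["alice->bob:10", "bob->carol:5"]
tampered = ["alice->hacker:9999"]  # forged history on one compromised node
replicas = [honest, honest, honest, tampered]
print(validate_ledger(replicas))  # the forged copy is rejected by majority vote
```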
In the late twentieth century democracies usually outperformed dictatorships because democracies were better at data-processing. Democracy diffuses the power to process information and make decisions among many people and institutions, whereas dictatorship concentrates information and power in one place. Given twentieth-century technology, it was inefficient to concentrate too much information and power in one place. Nobody had the ability to process all the information fast enough and make the right decisions. This is part of the reason why the Soviet Union made far worse decisions than the United States, and why the Soviet economy lagged far behind the American economy. However, soon AI might swing the pendulum in the opposite direction. AI makes it possible to process enormous amounts of information centrally. Indeed, AI might make centralised systems far more efficient than diffused systems, because machine learning works better the more information it can analyse. If you concentrate all the information relating to a billion people in one database, disregarding all privacy concerns, you can train much better algorithms than if you respect individual privacy and have in your database only partial information on a million people. For example, if an authoritarian government orders all its citizens to have their DNA scanned and to share all their medical data with some central authority, it would gain an immense advantage in genetics and medical research over societies in which medical data is strictly private. The main handicap of authoritarian regimes in the twentieth century – the attempt to concentrate all information in one place – might become their decisive advantage in the twenty-first century.
Yuval Noah Harari (21 Lessons for the 21st Century)
Despite the advancements of systematic experimental pipelines, literature-curated protein-interaction data continue to be the primary data for investigation of focused biological mechanisms. Notwithstanding the variable quality of curated interactions available in public databases, the impact of inspection bias on the ability of literature maps to provide insightful information remains equivocal. The problems posed by inspection bias extend beyond mapping of protein interactions to the development of pharmacological agents and other aspects of modern biomedicine. Essentially the same 10% of the proteome is being investigated today as was being investigated before the announcement of completion of the reference genome sequence. One way forward, at least with regard to interactome mapping, is to continue the transition toward systematic and relatively unbiased experimental interactome mapping. With continued advancement of systematic protein-interaction mapping efforts, the expectation is that interactome 'deserts', the zones of the interactome space where biomedical knowledge researchers simply do not look for interactions owing to the lack of prior knowledge, might eventually become more populated. Efforts at mapping protein interactions will continue to be instrumental for furthering biomedical research.
Joseph Loscalzo (Network Medicine: Complex Systems in Human Disease and Therapeutics)
For almost all astronomical objects, gravitation dominates, and they have the same unexpected behavior. Gravitation reverses the usual relation between energy and temperature. In the domain of astronomy, when heat flows from hotter to cooler objects, the hot objects get hotter and the cool objects get cooler. As a result, temperature differences in the astronomical universe tend to increase rather than decrease as time goes on. There is no final state of uniform temperature, and there is no heat death. Gravitation gives us a universe hospitable to life. Information and order can continue to grow for billions of years in the future, as they have evidently grown in the past. The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists. Scientists find the vision attractive, since it gives them a purpose for their existence and an unending supply of jobs. The vision is less attractive to artists and writers and ordinary people. Ordinary people are more interested in friends and family than in science. Ordinary people may not welcome a future spent swimming in an unending flood of information. A darker view of the information-dominated universe was described in the famous story “The Library of Babel,” written by Jorge Luis Borges in 1941. Borges imagined his library, with an infinite array of books and shelves and mirrors, as a metaphor for the universe. Gleick’s book has an epilogue entitled “The Return of Meaning,” expressing the concerns of people who feel alienated from the prevailing scientific culture. The enormous success of information theory came from Shannon’s decision to separate information from meaning. His central dogma, “Meaning is irrelevant,” declared that information could be handled with greater freedom if it was treated as a mathematical abstraction independent of meaning. The consequence of this freedom is the flood of information in which we are drowning. The immense size of modern databases gives us a feeling of meaninglessness. Information in such quantities reminds us of Borges’s library extending infinitely in all directions. It is our task as humans to bring meaning back into this wasteland. As finite creatures who think and feel, we can create islands of meaning in the sea of information. Gleick ends his book with Borges’s image of the human condition: We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.
Freeman Dyson (Dreams of Earth and Sky)
Ultimately, the World Top Incomes Database (WTID), which is based on the joint work of some thirty researchers around the world, is the largest historical database available concerning the evolution of income inequality; it is the primary source of data for this book. The book’s second most important source of data, on which I will actually draw first, concerns wealth, including both the distribution of wealth and its relation to income. Wealth also generates income and is therefore important on the income study side of things as well. Indeed, income consists of two components: income from labor (wages, salaries, bonuses, earnings from nonwage labor, and other remuneration statutorily classified as labor related) and income from capital (rent, dividends, interest, profits, capital gains, royalties, and other income derived from the mere fact of owning capital in the form of land, real estate, financial instruments, industrial equipment, etc., again regardless of its precise legal classification). The WTID contains a great deal of information about the evolution of income from capital over the course of the twentieth century. It is nevertheless essential to complete this information by looking at sources directly concerned with wealth. Here I rely on three distinct types of historical data and methodology, each of which is complementary to the others. In the first place, just as income tax returns allow us to study changes in income inequality, estate tax returns enable us to study changes in the inequality of wealth.
Thomas Piketty (Capital in the Twenty-First Century)
When you launch an AWS resource like an Amazon EC2 instance or Amazon Relational Database Service (Amazon RDS) DB instance, you start with a default configuration. You can then execute automated bootstrapping actions: that is, scripts that install software or copy data to bring that resource to a particular state.
Amazon Web Services (Architecting for the AWS Cloud: Best Practices (AWS Whitepaper))
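As a concrete illustration of such bootstrapping, here is a hedged Python sketch using boto3; the AMI ID is a placeholder, and the user-data script is only one example of an automated action:

```python
import boto3

# Bootstrap script passed as EC2 user data: it runs at first boot and brings
# the default instance to a particular state (here: install a web server).
USER_DATA = """#!/bin/bash
yum install -y httpd
systemctl enable --now httpd
"""

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID, not a real image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=USER_DATA,
)
print(response["Instances"][0]["InstanceId"])
```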
Organizations seeking to commercialize open source software realized this, of course, and deliberately incorporated it as part of their market approach. In a 2013 piece on Pando Daily, venture capitalist Danny Rimer quotes then-MySQL CEO Mårten Mickos as saying, “The relational database market is a $9 billion a year market. I want to shrink it to $3 billion and take a third of the market.” While MySQL may not have succeeded in shrinking the market to three billion, it is interesting to note that growing usage of MySQL was concurrent with a declining ability of Oracle to sell new licenses. Which may explain both why Sun valued MySQL at one third of a $3 billion market and why Oracle later acquired Sun and MySQL. The downward price pressure imposed by open source alternatives has become sufficiently visible, in fact, as to begin raising alarm bells among financial analysts.
The legacy providers of data management systems have all fallen on hard times over the last year or two, and while many are quick to dismiss legacy vendor revenue shortfalls to macroeconomic issues, we argue that these macroeconomic issues are actually accelerating a technology transition from legacy products to alternative data management systems like Hadoop and NoSQL that typically sell for dimes on the dollar. We believe these macro issues are real, and rather than just causing delays in big deals for the legacy vendors, enterprises are struggling to control costs and are increasingly looking at lower cost solutions as alternatives to traditional products.
— Peter Goldmacher, Cowen and Company
Stephen O’Grady (The Software Paradox: The Rise and Fall of the Commercial Software Market)
Biological databases impose particular limitations on how biological objects can be related to one another. In other words, the structure of a database predetermines the sorts of biological relationships that can be 'discovered'. To use the language of Bowker and Star, the database 'torques,' or twists, objects into particular conformations with respect to one another. The creation of a database generates a particular and rigid structure of relationships between biological objects, and these relationships guide biologists in thinking about how living systems work. The evolution of GenBank from flat-file to relational to federated database paralleled biologists' moves from gene-centric to alignment-centric to multielement views of biological action.
Hallam Stevens (Life Out of Sequence: A Data-Driven History of Bioinformatics)
Fiscal Numbers (the latter uniquely identifies a particular hospitalization for patients who might have been admitted multiple times), which allowed us to merge information from many different hospital sources. The data were finally organized into a comprehensive relational database. More information on database merger, in particular, how database integrity was ensured, is available at the MIMIC-II web site [1]. The database user guide is also online [2]. An additional task was to convert the patient waveform data from Philips’ proprietary format into an open-source format. With assistance from the medical equipment vendor, the waveforms, trends, and alarms were translated into WFDB, an open data format that is used for publicly available databases on the National Institutes of Health-sponsored PhysioNet web site [3]. All data that were integrated into the MIMIC-II database were de-identified in compliance with Health Insurance Portability and Accountability Act standards to facilitate public access to MIMIC-II. Deletion of protected health information from structured data sources was straightforward (e.g., database fields that provide the patient name, date of birth, etc.). We also removed protected health information from the discharge summaries, diagnostic reports, and the approximately 700,000 free-text nursing and respiratory notes in MIMIC-II using an automated algorithm that has been shown to have superior performance in comparison to clinicians in detecting protected health information [4]. This algorithm accommodates the broad spectrum of writing styles in our data set, including personal variations in syntax, abbreviations, and spelling. We have posted the algorithm in open-source form as a general tool to be used by others for de-identification of free-text notes [5].
MIT Critical Data (Secondary Analysis of Electronic Health Records)
Historically, data started out being represented as one big tree (the hierarchical model), but that wasn’t good for representing many-to-many relationships, so the relational model was invented to solve that problem. More recently, developers found that some applications don’t fit well in the relational model either. New nonrelational “NoSQL” datastores have diverged in two main directions:
•Document databases target use cases where data comes in self-contained documents and relationships between one document and another are rare.
•Graph databases go in the opposite direction, targeting use cases where anything is potentially related to everything.
Martin Kleppmann (Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems)
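A small Python sketch (illustrative names only, not from the book) of the same idea: the document model embeds related data in one self-contained record, while the graph model makes every relationship an explicit edge:

```python
# Document style: each record is self-contained; nested data lives inside it.
blog_post = {
    "title": "Why NoSQL diverged",
    "author": {"name": "Ada", "email": "ada@example.com"},  # embedded, not joined
    "comments": [
        {"who": "Grace", "text": "Nice summary."},
    ],
}

# Graph style: everything is a vertex, and any vertex may relate to any other.
vertices = {"ada", "grace", "post-1", "london"}
edges = [
    ("ada", "wrote", "post-1"),
    ("grace", "commented_on", "post-1"),
    ("ada", "lives_in", "london"),
    ("grace", "knows", "ada"),  # arbitrary cross-links are cheap here
]

# Traversal is the natural query in a graph: who is connected to post-1?
print([src for src, _, dst in edges if dst == "post-1"])
```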
All three models (document, relational, and graph) are widely used today, and each is good in its respective domain. One model can be emulated in terms of another model—for example, graph data can be represented in a relational database—but the result is often awkward. That’s why we have different systems for different purposes, not a single one-size-fits-all solution.
Martin Kleppmann (Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems)
Experience makes it easier to avoid Absence Blindness. Experience is valuable primarily because the expert has a larger mental database of related patterns and thus a higher chance of noticing an absence. By noticing violations of expected patterns, experienced people are more likely to get an “odd feeling” that things “aren’t quite right,” which is often enough warning to find an issue before it becomes serious.
Josh Kaufman (The Personal MBA: A World-Class Business Education in a Single Volume)
No sequence
In a relational database, sequences are usually used to generate unique values for a surrogate key. Cassandra has no sequences because they are extremely difficult to implement in a peer-to-peer distributed system. There are, however, workarounds, which are as follows:
•Using part of the data to generate a unique key
•Using a UUID
In most cases, the best practice is to select the second workaround.
C.Y. Kan (Cassandra Data Modeling and Analysis)
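A short Python sketch of both workarounds (names are illustrative); the standard uuid module stands in for the key generation a Cassandra client would do:

```python
import uuid

# Workaround 1: derive a unique key from the data itself (a natural composite key).
user_id, ts = "alice", "2024-05-01T12:00:00"
composite_key = f"{user_id}:{ts}"

# Workaround 2 (usually preferred): a UUID needs no coordination between nodes,
# so any peer can generate one without a central sequence.
random_id = uuid.uuid4()  # random UUID
time_id = uuid.uuid1()    # time-based UUID, similar in spirit to Cassandra's timeuuid
print(composite_key, random_id, time_id)
```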
The most important difference is that a relational database models data by relationships whereas Cassandra models data by query.
C.Y. Kan (Cassandra Data Modeling and Analysis)
Column
Column is the smallest data model element and storage unit in Cassandra. Though it also exists in a relational database, it is a different thing in Cassandra. A column is a name-value pair with a timestamp and an optional Time-To-Live (TTL) value.
C.Y. Kan (Cassandra Data Modeling and Analysis)
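Since the book's figure is not reproduced here, a minimal Python dataclass can stand in for the structure being described; the fields follow the quote, and the is_live helper is an illustrative reading of TTL semantics, not Cassandra's actual implementation:

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Column:
    """A Cassandra-style column: a name-value pair plus a timestamp
    and an optional Time-To-Live, per the description above."""
    name: str
    value: bytes
    timestamp: float = field(default_factory=time.time)
    ttl: Optional[int] = None  # seconds; None means the column never expires

    def is_live(self, now: Optional[float] = None) -> bool:
        if self.ttl is None:
            return True
        return ((now or time.time()) - self.timestamp) < self.ttl

col = Column(name="email", value=b"ada@example.com", ttl=3600)
print(col.is_live())  # True until one hour after the write
```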
There are also books that contain collections of papers or chapters on particular aspects of knowledge discovery—for example, Relational Data Mining edited by Dzeroski and Lavrac [De01]; Mining Graph Data edited by Cook and Holder [CH07]; Data Streams: Models and Algorithms edited by Aggarwal [Agg06]; Next Generation of Data Mining edited by Kargupta, Han, Yu, et al. [KHY+08]; Multimedia Data Mining: A Systematic Introduction to Concepts and Theory edited by Z. Zhang and R. Zhang [ZZ09]; Geographic Data Mining and Knowledge Discovery edited by Miller and Han [MH09]; and Link Mining: Models, Algorithms and Applications edited by Yu, Han, and Faloutsos [YHF10]. There are many tutorial notes on data mining in major databases, data mining, machine learning, statistics, and Web technology conferences.
Vipin Kumar (Introduction to Data Mining)
influence on banking with a relatively small investment. “Money is low bandwidth,” he said, during a speech at Stanford University in 2003, to describe his thinking. “You don’t need some sort of big infrastructure improvement to do things with it. It’s really just an entry in a database.” The
Ashlee Vance (Elon Musk: How the Billionaire CEO of SpaceX and Tesla is Shaping our Future)
Each business process is represented by a dimensional model that consists of a fact table containing the event's numeric measurements surrounded by a halo of dimension tables that contain the textual context that was true at the moment the event occurred. This characteristic star-like structure is often called a star join, a term dating back to the earliest days of relational databases.
[Figure 1.5: Fact and dimension tables in a dimensional model.]
The first thing to notice about the dimensional schema is its simplicity and symmetry. Obviously, business users benefit from the simplicity because the data is easier to understand and navigate. The charm of the design in Figure 1.5 is that it is highly recognizable to business users. We have observed literally hundreds of instances in which users immediately agree that the dimensional model is their business. Furthermore, the reduced number of tables and use of meaningful business descriptors make it easy to navigate and less likely that mistakes will occur.
The simplicity of a dimensional model also has performance benefits. Database optimizers process these simple schemas with fewer joins more efficiently. A database engine can make strong assumptions about first constraining the heavily indexed dimension tables, and then attacking the fact table all at once with the Cartesian product of the dimension table keys satisfying the user's constraints. Amazingly, using this approach, the optimizer can evaluate arbitrary n-way joins to a fact table in a single pass through the fact table's index.
Finally, dimensional models are gracefully extensible to accommodate change. The predictable framework of a dimensional model withstands unexpected changes in user behavior. Every dimension is equivalent; all dimensions are symmetrically-equal entry points into the fact table. The dimensional model has no built-in bias regarding expected query patterns. There are no preferences for the business questions asked this month versus the questions asked next month. You certainly don't want to adjust schemas if business users suggest new ways to analyze their business.
Ralph Kimball (The Data Warehouse Toolkit: The Definitive Guide to Dimensional Modeling)
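A runnable sketch of a star join using Python's built-in sqlite3 (the table and column names are invented for illustration): two heavily constrained dimension tables surround one fact table of numeric measurements:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension tables: textual context surrounding the event.
    CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, day TEXT);
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
    -- Fact table: numeric measurements keyed by the dimensions.
    CREATE TABLE fact_sales (
        date_key INTEGER REFERENCES dim_date(date_key),
        product_key INTEGER REFERENCES dim_product(product_key),
        amount REAL
    );
    INSERT INTO dim_date VALUES (1, '2024-05-01'), (2, '2024-05-02');
    INSERT INTO dim_product VALUES (10, 'widget'), (11, 'gadget');
    INSERT INTO fact_sales VALUES (1, 10, 9.99), (1, 11, 24.50), (2, 10, 9.99);
""")

# The classic star join: constrain the dimensions, then aggregate the fact table.
rows = conn.execute("""
    SELECT d.day, p.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_date d    ON f.date_key = d.date_key
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY d.day, p.name
""").fetchall()
print(rows)
```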
Dimensional models implemented in relational database management systems are referred to as star schemas because of their resemblance to a star-like structure. Dimensional models implemented in multidimensional database environments are referred to as online analytical processing (OLAP) cubes, as illustrated in Figure 1.1.
[Figure 1.1: Star schema versus OLAP cube.]
If your DW/BI environment includes either star schemas or OLAP cubes, it leverages dimensional concepts. Both stars and cubes have a common logical design with recognizable dimensions; however, the physical implementation differs. When data is loaded into an OLAP cube, it is stored and indexed using formats and techniques that are designed for dimensional data. Performance aggregations or precalculated summary tables are often created and managed by the OLAP cube engine. Consequently, cubes deliver superior query performance because of the precalculations, indexing strategies, and other optimizations. Business users can drill down or up by adding or removing attributes from their analyses with excellent performance without issuing new queries. OLAP cubes also provide more analytically robust functions that exceed those available with SQL. The downside is that you pay a load performance price for these capabilities, especially with large data sets.
Ralph Kimball (The Data Warehouse Toolkit: The Definitive Guide to Dimensional Modeling)
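As a rough illustration of precalculated dimensional aggregates (pandas standing in for a cube engine; the data is invented), this sketch builds a small summary "cube" up front, after which slicing by region or product requires no further queries against the detail rows:

```python
import pandas as pd

sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West"],
    "product": ["widget", "gadget", "widget", "gadget"],
    "amount":  [100.0, 250.0, 80.0, 300.0],
})

# Precompute a summary slice: totals by region x product, with grand totals.
# A real OLAP engine builds and indexes many such aggregates at load time,
# which is why drilling up or down afterwards is so fast.
cube = pd.pivot_table(sales, values="amount", index="region",
                      columns="product", aggfunc="sum", margins=True)
print(cube)
```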
One class of software, broadly known as analytical databases, dispenses with the relational model. Using innovations in on-disk storage and the use of distributed memory, analytical databases combine the flexibility of SQL with the speed of traditional OLAP systems.
Anonymous
Another recent study, this one on academic research, provides real-world evidence of the way the tools we use to sift information online influence our mental habits and frame our thinking. James Evans, a sociologist at the University of Chicago, assembled an enormous database on 34 million scholarly articles published in academic journals from 1945 through 2005. He analyzed the citations included in the articles to see if patterns of citation, and hence of research, have changed as journals have shifted from being printed on paper to being published online. Considering how much easier it is to search digital text than printed text, the common assumption has been that making journals available on the Net would significantly broaden the scope of scholarly research, leading to a much more diverse set of citations. But that’s not at all what Evans discovered. As more journals moved online, scholars actually cited fewer articles than they had before. And as old issues of printed journals were digitized and uploaded to the Web, scholars cited more recent articles with increasing frequency. A broadening of available information led, as Evans described it, to a “narrowing of science and scholarship.” In explaining the counterintuitive findings in a 2008 Science article, Evans noted that automated information-filtering tools, such as search engines, tend to serve as amplifiers of popularity, quickly establishing and then continually reinforcing a consensus about what information is important and what isn’t. The ease of following hyperlinks, moreover, leads online researchers to “bypass many of the marginally related articles that print researchers” would routinely skim as they flipped through the pages of a journal or a book. The quicker that scholars are able to “find prevailing opinion,” wrote Evans, the more likely they are “to follow it, leading to more citations referencing fewer articles.” Though much less efficient than searching the Web, old-fashioned library research probably served to widen scholars’ horizons: “By drawing researchers through unrelated articles, print browsing and perusal may have facilitated broader comparisons and led researchers into the past.” The easy way may not always be the best way, but the easy way is the way our computers and search engines encourage us to take.
Nicholas Carr (The Shallows: What the Internet is Doing to Our Brains)
A SQL database is self-describing because it tells us everything we need to know regarding the relations between the data items inside and what they’re made of. We can find this metadata in a data dictionary, which describes all the elements that make up the database.
Mark Reed (SQL: 3 books 1 - The Ultimate Beginner, Intermediate & Expert Guides To Master SQL Programming Quickly with Practical Exercises)
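For example, with Python's built-in sqlite3, the data dictionary can be read back out of the database itself (SQLite calls its catalog sqlite_master; other systems expose a similar information_schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")

# The database describes itself: its catalog is queryable like any other table.
for name, sql in conn.execute("SELECT name, sql FROM sqlite_master"):
    print(name, "->", sql)
```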
Describing Redis as a database tells only part of the truth. Redis is a very fast non-relational database that can store mappings between keys and five different types of values, can persist key-value data held in memory to disk, can use replication to scale read performance, and can use client-side sharding to scale write performance.
Josiah L. Carlson (Redis in Action (Chinese Edition))
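A brief sketch with the redis-py client (assuming a Redis server on localhost; key names are illustrative) touching each of the five value types the quote mentions:

```python
import redis

r = redis.Redis(host="localhost", port=6379)  # assumes a local Redis server

# The five core value types the quote refers to:
r.set("page:title", "hello")                             # string
r.lpush("recent:visitors", "ada", "grace")               # list
r.sadd("tags", "db", "nosql", "fast")                    # set
r.hset("user:1", mapping={"name": "Ada", "lang": "en"})  # hash
r.zadd("leaderboard", {"ada": 42, "grace": 57})          # sorted set

print(r.zrange("leaderboard", 0, -1, withscores=True))
```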
...SQL is very far from being the “perfect” relational language—it suffers from numerous sins of both omission and commission. ...the overriding issue is simply that SQL fails in all too many ways to support the relational model properly. As a consequence, it is not at all clear that today's SQL products really deserve to be called “relational” at all! Indeed, as far as this writer is aware, there is no product on the market today that supports the relational model in its entirety. This is not to say that some parts of the model are unimportant; on the contrary, every detail of the model is important, and important, moreover, for genuinely practical reasons. Indeed, the point cannot be stressed too strongly that the purpose of relational theory is not just “theory for its own sake”; rather, the purpose is to provide a base on which to build systems that are 100 percent practical. But the sad fact is that the vendors have not yet really stepped up to the challenge of implementing the theory in its entirety. As a consequence, the “relational” products of today regrettably all fail, in one way or another, to deliver on the full promise of relational technology.
C.J. Date (An Introduction to Database Systems)
The Alvarez hypothesis did much to focus paleontological thinking on mass extinction, while further impetus came from another project that took shape about the same time. When I was a graduate student in the 1970s, my friend and fellow student Jack Sepkoski started to tabulate fossil diversity through time. Jack wasn’t the first to try this, but his perseverance and attention to detail enabled him to put together a remarkable database of the first and last appearances of every order, family and, eventually, genus of marine animals found in the fossil record. (Jack stayed away from tabulating species, correctly intuiting that the record at that level of detail would be prone to biases related to sediment abundance and the habits of collectors.) Jack’s data showed that the course of biological diversification never did run smooth.
Andrew H. Knoll (A Brief History of Earth: Four Billion Years in Eight Chapters)
Non-relational databases might be the right choice if:
•Your application requires super-low latency.
•Your data are unstructured, or you do not have any relational data.
•You only need to serialize and deserialize data (JSON, XML, YAML, etc.).
•You need to store a massive amount of data.
Alex Xu (System Design Interview – An insider's guide)
One of the problems with Domain Model is the interface with relational databases. In many ways this approach treats the relational database like a crazy aunt who’s shut up in an attic and whom nobody wants to talk about.
Martin Fowler (Patterns of Enterprise Application Architecture)
Because the chances of finding a coincidental one in a million match are relatively high if you run the sample through a database with samples from a million people.
Charles Wheelan (Naked Statistics: Stripping the Dread from the Data)
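The arithmetic behind that claim, as a short Python check: even a one-in-a-million false-match rate gives better-than-even odds of some coincidental hit across a million comparisons:

```python
# Probability of at least one coincidental match when a profile with a
# one-in-a-million false-match rate is compared against a million people.
p, n = 1e-6, 1_000_000
at_least_one = 1 - (1 - p) ** n
print(f"{at_least_one:.1%}")  # ~63.2% -- far from a safe identification
```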
-Do you know the difference between intellectual telepathy and emotional reincarnation?
-Yes: telepathy is reading thoughts; emotional reincarnation is reading feelings and sensations.
-Did it ever occur to you that someone might be practicing telepathy on you against your will?
-Some people have this talent, or so they claim.
Baibars: It is not a talent, but a knowledge. Physiognomy was never a talent, but rather an experience. People who travel a lot, social people with an appetite for information and details, are the owners of physiognomy, which they acquire as a result of their experiences, all of which are stored in the subconscious mind, and the latter gives them results in the form of empathic feelings. We call it physiognomy, or talent. And basically, it is based on data: we never hear of anyone who has earned this talent while sitting at home; its owner is always a frequent traveler. The more data you have, the more precisely you are able to telepath with your target, and now telepathy is happening at every moment. With the technical revolution and the development and diversification of every means of information, social networking sites are not the first and will not be the last. With the development of computers, and their ability to process huge amounts of data in a relatively acceptable time, and with the development of artificial intelligence software and self-learning software, our privacy has come to be violated by many parties around the world: not only the intelligence services, but even study and research centers and decision-making institutions. They all collect an awful lot of data every day, and everyone in this world has a share of it. These software systems and computers would stand powerless if you stripped them of their database, which must be constantly updated. Telepathy has become available, easy, and possible as never before. Physiognomy has become electronic in the literal sense of the word. However, our feelings and our emotions remain our impenetrable fortress. If you decide to make your entire electronic life a made-up story, contrary to the reality of what you feel, such as expressing joy when you feel sad, this software will predict something other than what you really feel: it will fail. The more cunning and deceitful you are in reincarnation, the more helpless it stands in knowing the truth of your feelings, which no one else knows. All that is required of you is to express the opposite of what you feel. Human randomness, spontaneity, and the decisions people think are free have been programmed by a package of factors surrounding them, which were imposed on them, including society, environment, conditions, and education. The challenge is to act neither spontaneously nor randomly, and here lies the meaning of real free will. Can you imagine that? Your spontaneity is pre-programmed, and your random decisions that you think are absolutely free are in fact not free, and until you are able to imagine this and believe in it, you will remain a slave to the system. To be free you must first overcome it; you must rebel against what you think is your free self.
He was silent for a moment, took a breath from his cigarette, and what he was about to say now would have meant inevitable madness a few years ago…
-But did it ever occur to you, Robert, that there is someone who can know the truth of your feelings, no matter how hard you try to fake them? Someone who knows it even before you feel it!
A long moment of silence…
Ahmad I. AlKhalel
What design theory does is state in a precise way what certain aspects of common sense consist of. In my opinion, that’s the real achievement—or one of the real achievements, anyway—of the theory: It formalizes certain commonsense principles, thereby opening the door to the possibility of mechanizing those principles (that is, incorporating them into computerized design tools). Critics of the theory often miss this point; they claim, quite rightly, that the ideas are mostly just common sense, but they don’t seem to realize it’s a significant achievement to state what common sense means in a precise and formal way.
C.J. Date (Database Design and Relational Theory: Normal Forms and All That Jazz)
The overall objective of logical design is to achieve a design that’s (a) hardware independent, for obvious reasons; (b) operating system and DBMS independent, again for obvious reasons; and finally, and perhaps a little controversially, (c) application independent (in other words, we’re concerned primarily with what the data is, rather than with how it’s going to be used). Application independence in this sense is desirable for the very good reason that it’s normally—perhaps always—the case that not all uses to which the data will be put are known at design time; thus, we want a design that’ll be robust, in the sense that it won’t be invalidated by the advent of application requirements that weren’t foreseen at the time of the original design.
C.J. Date (Database Design and Relational Theory: Normal Forms and All That Jazz)
I booted up my laptop and went into the FBI’s VICAP database. The Violent Criminal Apprehension Program was a national Web site with one purpose: to help law enforcement agents link up scattered bits of intel related to serial homicides. The site had a kick-ass search engine, and new information was always being plugged in by cops around the country
James Patterson (4th of July (Women's Murder Club, #4))
Victims included U.S. state and local entities, such as state boards of elections (SBOEs), secretaries of state, and county governments, as well as individuals who worked for those entities. The GRU also targeted private technology firms responsible for manufacturing and administering election-related software and hardware, such as voter registration software and electronic polling stations. The GRU continued to target these victims through the elections in November 2016. While the investigation identified evidence that the GRU targeted these individuals and entities, the Office did not investigate further. The Office did not, for instance, obtain or examine servers or other relevant items belonging to these victims. The Office understands that the FBI, the U.S. Department of Homeland Security, and the states have separately investigated that activity.
By at least the summer of 2016, GRU officers sought access to state and local computer networks by exploiting known software vulnerabilities on websites of state and local governmental entities. GRU officers, for example, targeted state and local databases of registered voters using a technique known as "SQL injection," by which malicious code was sent to the state or local website in order to run commands (such as exfiltrating the database contents). In one instance in approximately June 2016, the GRU compromised the computer network of the Illinois State Board of Elections by exploiting a vulnerability in the SBOE's website. The GRU then gained access to a database containing information on millions of registered Illinois voters, and extracted data related to thousands of U.S. voters before the malicious activity was identified. GRU officers [REDACTED: Investigative Technique] scanned state and local websites for vulnerabilities. For example, over a two-day period in July 2016, GRU officers [REDACTED: Investigative Technique] for vulnerabilities on websites of more than two dozen states.
Robert S. Mueller III (The Mueller Report)
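For illustration only, a minimal Python/sqlite3 sketch of why SQL injection works and how parameterized queries close the hole (the table and data are invented; this is not the actual code at issue in the report):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE voters (name TEXT, address TEXT)")
conn.execute("INSERT INTO voters VALUES ('Ada Lovelace', '1 Example St')")

user_input = "x' OR '1'='1"  # attacker-controlled form field

# VULNERABLE: string concatenation lets the input rewrite the query itself,
# here turning a narrow lookup into "return every row" (exfiltration).
query = "SELECT * FROM voters WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())  # dumps the whole table

# SAFE: a parameterized query treats the input strictly as data, never as SQL.
print(conn.execute("SELECT * FROM voters WHERE name = ?", (user_input,)).fetchall())  # []
```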
The Database of Insects and their Foodplants records three beetles, six bugs, twenty-four macro-moths and four micro-moths feeding on Nothofagus species, but none of those is confined to that genus. All the moths are common or fairly common polyphagous species that have spread to the alien trees, often being characteristic of native Fagaceae and recorded also from Sweet Chestnut. The latter species has been here for far longer and has accrued a longer list of feeders: 8, 25, 17 and 23, respectively, for the above four insect groups. Figures for Sycamore (16, 25, 33 and 25 respectively) are even higher. One other genus of trees that is grown on a small scale in forest plots, and as specimens in parks and gardens, is the gums (Eucalyptus). This, however, does not provide as much for our wildlife; no Lepidoptera have been found feeding on gums, and the only gall relates to a single record. Eucalyptus woodland is much more of a wildlife desert than the much-derided conifer plantations, and we are fortunate that it is scarcely suited to our climate.
Clive A. Stace
All one must do is remember basic math. If one system that administers medical payments requires hundreds of duplicate services, equipment, software, & databases, and must make profits for passive investors, and must pay thousands of executives millions of dollars, then it is mathematically impossible for that system to be more efficient than one that must provide the same medical payments without those expenses and overhead. Not even an inordinate amount of waste and fraud in any single-payer system would likely match the legalized fraud of the private healthcare insurance system. It is simply basic math.
Egberto Willies (It’s Worth It: How to Talk To Your Right-Wing Relatives, Friends, and Neighbors (Our Politics Made Easy & Ready For Action))
The same pattern, liberalization followed by an increase in the earnings of skilled workers relative to the unskilled, as well as other measures of inequality, was found in Colombia, Brazil, Argentina, and India. Finally, inequality exploded in China as it gradually opened up starting in the 1980s and eventually joined the World Trade Organization (WTO) in 2001. According to the World Inequality Database team, in 1978 the bottom 50 percent and the top 10 percent of the population both took home the same share of Chinese income (27 percent).
Abhijit V. Banerjee (Good Economics for Hard Times: Better Answers to Our Biggest Problems)
...note that relational systems require only that the database be perceived by the user as tables. Tables are the logical structure in a relational system, not the physical structure. At the physical level, in fact, the system is free to store the data any way it likes—using sequential files, indexing, hashing, pointer chains, compression, and so on—provided only that it can map that stored representation to tables at the logical level. Another way of saying the same thing is that tables represent an abstraction of the way the data is physically stored—an abstraction in which numerous storage level details (such as stored record placement, stored record sequence, stored data value representations, stored record prefixes, stored access structures such as indexes, and so forth) are all hidden from the user. ... The Information Principle: The entire information content of the database is represented in one and only one way—namely, as explicit values in column positions in rows in tables. This method of representation is the only method available (at the logical level, that is) in a relational system. In particular, there are no pointers connecting one table to another.
C.J. Date (An Introduction to Database Systems)
...since there is so much confusion surrounding it in the industry. You will often hear claims to the effect that relational attributes can only be of very simple types (numbers, strings, and so forth). The truth is, however, that there is absolutely nothing in the relational model to support such claims. ...in fact, types can be as simple or as complex as we like, and so we can have attributes whose values are numbers, or strings, or dates, or times, or audio recordings, or maps, or video recordings, or geometric points (etc.). The foregoing message is so important‒and so widely misunderstood‒that we state it again in different terms: The question of what data types are supported is orthogonal to the question of support for the relational model.
C.J. Date (An Introduction to Database Systems)
■ Types are (sets of) things we can talk about.
■ Relations are (sets of) things we say about the things we can talk about.
(There is a nice analogy here that might help you appreciate and remember these important points: Types are to relations as nouns are to sentences.)
Thus, in the example, the things we can talk about are employee numbers, names, department numbers, and money values, and the things we say are true utterances of the form “The employee with the specified employee number has the specified name, works in the specified department, and earns the specified salary.” It follows from all of the foregoing that:
1. Types and relations are both necessary (without types, we have nothing to talk about; without relations, we cannot say anything).
2. Types and relations are sufficient, as well as necessary—i.e., we do not need anything else, logically speaking.
3. Types and relations are not the same thing. It is an unfortunate fact that certain commercial products—not relational ones, by definition!—are confused over this very point.
C.J. Date (An Introduction to Database Systems)
The constraints of a toolset help to define patterns for solving problems. In the case of MongoDB, one of those constraints is the lack of atomic multidocument update operations. The patterns we use in MongoDB to mitigate the lack of atomic multidocument update operations include document embedding and complex updates for basic operations, with optimistic update with compensation available for when we really need a two-phase commit protocol. When designing your application to use MongoDB, more than in relational databases, you must keep in mind which updates you need to be atomic and design your schema appropriately.
Rick Copeland (MongoDB Applied Design Patterns)
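A hedged pymongo sketch of the optimistic-update pattern described above (the collection, field names, and retry policy are illustrative; assumes a local MongoDB server):

```python
from pymongo import MongoClient

client = MongoClient()  # assumes a MongoDB server on localhost
orders = client.shop.orders

def add_item_optimistically(order_id, item, max_retries=3):
    """Optimistic concurrency: items are embedded in the order document and
    every update is guarded by a version field; if another writer got there
    first, the guarded update matches nothing and we retry."""
    for _ in range(max_retries):
        doc = orders.find_one({"_id": order_id})
        if doc is None:
            return False
        result = orders.update_one(
            {"_id": order_id, "version": doc["version"]},  # unchanged since read
            {"$push": {"items": item}, "$inc": {"version": 1}},
        )
        if result.modified_count == 1:
            return True
    return False  # out of retries: compensate (surface the error, roll back, etc.)
```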
Wikipedia: Unofficial Collaborator
The great range of circumstances that led to collaboration with the Stasi makes any overall moral evaluation of the spying activities extremely difficult. There were those that volunteered willingly and without moral scruples to pass detailed reports to the Stasi out of selfish motives, from self-regard, or from the urge to exercise power over others. Others collaborated with the Stasi out of a sincerely held sense of duty that the GDR was the better Germany and that it must be defended from the assaults of its enemies. Others were to a lesser or greater extent themselves victims of state persecution and had been broken or blackmailed into collaboration. Many informants believed that they could protect friends or relations by passing on only positive information about them, while others thought that provided they reported nothing suspicious or otherwise punishable, then no harm would be done by providing the Stasi with reports. These failed to accept that the Stasi could use apparently innocuous information to support their covert operations and interrogations.
A further problem in any moral evaluation is presented by the extent to which information from informal collaborators was also used for combating non-political criminality. Moral judgements on collaboration involving criminal police who belonged to the Stasi need to be considered on a case by case basis, according to individual circumstances.
A belief has gained traction that any informal collaborator (IM) who refused the Stasi further collaboration and extracted himself (in the now outdated Stasi jargon of the time "sich dekonspirierte") from a role as an IM need have no fear of serious consequences for his life, and could in this way safely cut himself off from communication with the Stasi. This is untrue. Furthermore, even people who declared unequivocally that they were not available for spying activities could nevertheless, over the years, find themselves exposed to high-pressure "recruitment" tactics. It was not uncommon for an IM trying to break out of a collaborative relationship with the Stasi to find his employment opportunities destroyed. The Stasi would often identify refusal to collaborate, using another jargon term, as "enemy-negative conduct" ("feindlich-negativen Haltung"), which frequently resulted in what they termed "Zersetzungsmaßnahmen", a term for which no very direct English translation is available, but for one form of which a definition has been provided that begins: "a systematic degradation of reputation, image, and prestige in a database on one part true, verifiable and degrading, and on the other part false, plausible, irrefutable, and always degrading; a systematic organization of social and professional failures for demolishing the self-confidence of the individual.
Wikipedia Contributors