Computer Science Short Quotes

We've searched our database for all the quotes and captions related to Computer Science Short. Here they are! All 29 of them:

We can only see a short distance ahead, but we can see plenty there that needs to be done.
Alan M. Turing (Computing machinery and intelligence)
Tomorrow at the press conference would be dreadful. She would be surrounded by nice young men who spoke Big Business or Computer or Bachelor on the Make, and she would not understand a word they said. (Short story: "Blued Moon")
Connie Willis (Best Science Fiction of the Year 14)
And before you say this is all far-fetched, just think how far the human race has come in the past ten years. If someone had told your parents, for example, that they would be able to carry their entire music library in their pocket, would they have believed it? Now we have phones with more computing power than was used to send some of the first rockets into space. We have electron microscopes that can see individual atoms. We routinely cure diseases that only fifty years ago were fatal. And the rate of change is increasing. Today we are able to do what your parents would have dismissed as impossible and your grandparents as nothing short of magical.
Nicolas Flamel
An operating system is the great facilitator; it is the great protector; it is the great illusionist.
Subrata Dasgupta (Computer Science: A Very Short Introduction (Very Short Introductions))
the groundbreakers in many sciences were devout believers. Witness the accomplishments of Nicolaus Copernicus (a priest) in astronomy, Blaise Pascal (a lay apologist) in mathematics, Gregor Mendel (a monk) in genetics, Louis Pasteur in biology, Antoine Lavoisier in chemistry, John von Neumann in computer science, and Enrico Fermi and Erwin Schrödinger in physics. That’s a short list, and it includes only Roman Catholics; a long list could continue for pages. A roster that included other believers—Protestants, Jews, and unconventional theists like Albert Einstein, Fred Hoyle, and Paul Davies—could fill a book.
Scott Hahn (Reasons to Believe: How to Understand, Explain, and Defend the Catholic Faith)
Mind instantiates itself into matter. In a mathematical sense, matter is an “in-formed” pattern of mind. Time is emergent, and so is space. If space-time is emergent, so is mass-energy. All interactions in our physical world are computed by the larger consciousness system. In short, mind is more fundamental than matter. All realities are observer-centric virtualities.
Alex M. Vikoulov (Theology of Digital Physics: Phenomenal Consciousness, The Cosmic Self & The Pantheistic Interpretation of Our Holographic Reality (The Science and Philosophy of Information Book 4))
In the medium term, AI may automate our jobs, to bring both great prosperity and equality. Looking further ahead, there are no fundamental limits to what can be achieved. There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it may play out differently than in the movies. As mathematician Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, in what science-fiction writer Vernor Vinge called a technological singularity. One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders and potentially subduing us with weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.
Stephen Hawking
In 1997 an IBM computer called Deep Blue defeated the world chess champion Garry Kasparov, and unlike its predecessors, it did not just evaluate trillions of moves by brute force but was fitted with strategies that intelligently responded to patterns in the game. [Y]ou might still object that chess is an artificial world with discrete moves and a clear winner, perfectly suited to the rule-crunching of a computer. People, on the other hand, live in a messy world offering unlimited moves and nebulous goals. Surely this requires human creativity and intuition — which is why everyone knows that computers will never compose a symphony, write a story, or paint a picture. But everyone may be wrong. Recent artificial intelligence systems have written credible short stories, composed convincing Mozart-like symphonies, drawn appealing pictures of people and landscapes, and conceived clever ideas for advertisements. None of this is to say that the brain works like a digital computer, that artificial intelligence will ever duplicate the human mind, or that computers are conscious in the sense of having first-person subjective experience. But it does suggest that reasoning, intelligence, imagination, and creativity are forms of information processing, a well-understood physical process. Cognitive science, with the help of the computational theory of mind, has exorcised at least one ghost from the machine.
Steven Pinker (The Blank Slate: The Modern Denial of Human Nature)
Often interfaces are assumed to be synonymous with media itself. But what would it mean to say that “interface” and “media” are two names for the same thing? The answer is found in the remediation or layer model of media, broached already in the introduction, wherein media are essentially nothing but formal containers housing other pieces of media. This is a claim most clearly elaborated on the opening pages of Marshall McLuhan’s Understanding Media. McLuhan liked to articulate this claim in terms of media history: a new medium is invented, and as such its role is as a container for a previous media format. So, film is invented at the tail end of the nineteenth century as a container for photography, music, and various theatrical formats like vaudeville. What is video but a container for film. What is the Web but a container for text, image, video clips, and so on. Like the layers of an onion, one format encircles another, and it is media all the way down. This definition is well-established today, and it is a very short leap from there to the idea of interface, for the interface becomes the point of transition between different mediatic layers within any nested system. The interface is an “agitation” or generative friction between different formats. In computer science, this happens very literally; an “interface” is the name given to the way in which one glob of code can interact with another. Since any given format finds its identity merely in the fact that it is a container for another format, the concept of interface and medium quickly collapse into one and the same thing.
Alexander R. Galloway
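Galloway's computer-science aside (an "interface" as the name for the way one glob of code can interact with another) maps directly onto a familiar programming construct. A minimal Python sketch, with all class names invented for illustration: a shared `Medium` interface lets a newer format contain an older one without knowing its internals, mirroring the layer model of media.

```python
from abc import ABC, abstractmethod

class Medium(ABC):
    """A media format; the interface is the contract other code talks to."""
    @abstractmethod
    def render(self) -> str: ...

class Photograph(Medium):
    def __init__(self, caption: str):
        self.caption = caption
    def render(self) -> str:
        return f"photo[{self.caption}]"

class Film(Medium):
    """A newer medium acting as a container for an older one."""
    def __init__(self, frames: list[Medium]):
        self.frames = frames
    def render(self) -> str:
        # Film interacts with Photograph only through the Medium interface.
        return "film(" + ", ".join(f.render() for f in self.frames) + ")"

movie = Film([Photograph("still 1"), Photograph("still 2")])
print(movie.render())  # film(photo[still 1], photo[still 2])
```

`Film` never inspects what its frames are made of; it only relies on the `Medium` contract, which is what lets the layers nest "all the way down."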
Isaac Asimov’s short story “The Fun They Had” describes a school of the future that uses advanced technology to revolutionize the educational experience, enhancing individualized learning and providing students with personalized instruction and robot teachers. Such science fiction has gone on to inspire very real innovation. In a 1984 Newsweek interview, Apple’s co-founder Steve Jobs predicted computers were going to be a bicycle for our minds, extending our capabilities, knowledge, and creativity, much the way a ten-speed amplifies our physical abilities. For decades, we have been fascinated by the idea that we can use computers to help educate people. What connects these science fiction narratives is that they all imagined computers might eventually emulate what we view as intelligence. Real-life researchers have been working for more than sixty years to make this AI vision a reality. In 1962, the checkers master Robert Nealey played the game against an IBM 7094 computer, and the computer beat him. A few years prior, in 1957, the psychologist Frank Rosenblatt created Perceptron, the first artificial neural network, a computer simulation of a collection of neurons and synapses trained to perform certain tasks. In the decades following such innovations in early AI, we had the computation power to tackle systems only as complex as the brain of an earthworm or insect. We also had limited techniques and data to train these networks. The technology has come a long way in the ensuing decades, driving some of the most common products and apps today, from the recommendation engines on movie streaming services to voice-controlled personal assistants such as Siri and Alexa. AI has gotten so good at mimicking human behavior that oftentimes we cannot distinguish between human and machine responses. 
Meanwhile, not only has the computation power developed enough to tackle systems approaching the complexity of the human brain, but there have been significant breakthroughs in structuring and training these neural networks.
Salman Khan (Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing))
In order for A to apply to computations generally, we shall need a way of coding all the different computations C(n) so that A can use this coding for its action. All the possible different computations C can in fact be listed, say as C0, C1, C2, C3, C4, C5,..., and we can refer to Cq as the qth computation. When such a computation is applied to a particular number n, we shall write C0(n), C1(n), C2(n), C3(n), C4(n), C5(n),.... We can take this ordering as being given, say, as some kind of numerical ordering of computer programs. (To be explicit, we could, if desired, take this ordering as being provided by the Turing-machine numbering given in ENM, so that then the computation Cq(n) is the action of the qth Turing machine Tq acting on n.) One technical thing that is important here is that this listing is computable, i.e. there is a single computation Cx that gives us Cq when it is presented with q, or, more precisely, the computation Cx acts on the pair of numbers q, n (i.e. q followed by n) to give Cq(n). The procedure A can now be thought of as a particular computation that, when presented with the pair of numbers q,n, tries to ascertain that the computation Cq(n) will never ultimately halt. Thus, when the computation A terminates, we shall have a demonstration that Cq(n) does not halt. Although, as stated earlier, we are shortly going to try to imagine that A might be a formalization of all the procedures that are available to human mathematicians for validly deciding that computations never will halt, it is not at all necessary for us to think of A in this way just now. A is just any sound set of computational rules for ascertaining that some computations Cq(n) do not ever halt. Being dependent upon the two numbers q and n, the computation that A performs can be written A(q,n), and we have: (H) If A(q,n) stops, then Cq(n) does not stop. Now let us consider the particular statements (H) for which q is put equal to n. 
This may seem an odd thing to do, but it is perfectly legitimate. (This is the first step in the powerful 'diagonal slash', a procedure discovered by the highly original and influential nineteenth-century Danish/Russian/German mathematician Georg Cantor, central to the arguments of both Gödel and Turing.) With q equal to n, we now have: (I) If A(n,n) stops, then Cn(n) does not stop. We now notice that A(n,n) depends upon just one number n, not two, so it must be one of the computations C0,C1,C2,C3,... (as applied to n), since this was supposed to be a listing of all the computations that can be performed on a single natural number n. Let us suppose that it is in fact Ck, so we have: (J) A(n,n) = Ck(n) Now examine the particular value n=k. (This is the second part of Cantor's diagonal slash!) We have, from (J), (K) A(k,k) = Ck(k) and, from (I), with n=k: (L) If A(k,k) stops, then Ck(k) does not stop. Substituting (K) in (L), we find: (M) If Ck(k) stops, then Ck(k) does not stop. From this, we must deduce that the computation Ck(k) does not in fact stop. (For if it did, then it would not, according to (M)!) But A(k,k) cannot stop either, since by (K) it is the same as Ck(k). Thus, our procedure A is incapable of ascertaining that this particular computation Ck(k) does not stop, even though it does not. Moreover, if we know that A is sound, then we know that Ck(k) does not stop. Thus, we know something that A is unable to ascertain. It follows that A cannot encapsulate our understanding.
Roger Penrose (Shadows of the Mind: A Search for the Missing Science of Consciousness)
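Penrose's diagonal argument can be sketched in code. The toy below is an illustration, not Penrose's construction: non-halting is modelled by a sentinel value rather than an actual infinite loop, and the checker A knows only a fixed set of loopers. The listing C places the diagonal computation at index k, and the assertions trace steps (J) through (M): Ck(k) does not stop, yet A cannot ascertain that fact.

```python
LOOPS = "does-not-stop"   # sentinel standing in for "runs forever"

# A sound checker: A(q, n) "stops" (returns "certified") only when it can
# ascertain that computation q applied to n never halts; otherwise A itself
# never reaches a conclusion (modelled by returning LOOPS).
KNOWN_LOOPERS = {0}       # toy knowledge base: computation 0 never halts

def A(q, n):
    if q in KNOWN_LOOPERS:
        return "certified"    # A stops: C_q(n) provably never stops
    return LOOPS              # A cannot decide; it never stops

# The listing C of computations on one number n. Index k holds the
# diagonal computation n -> A(n, n), as in step (J) of the argument.
C = {
    0: lambda n: LOOPS,       # genuinely never halts (A certifies this)
    1: lambda n: n + 1,       # always halts
    2: lambda n: A(n, n),     # the diagonal: this is C_k, with k = 2
}
k = 2

# (M): if C_k(k) stopped, it would not stop -- so C_k(k) cannot stop.
assert C[k](k) == LOOPS       # C_k(k) indeed never stops...
assert A(k, k) == LOOPS       # ...yet A is unable to ascertain that.
```

The same pattern holds for any sound A: whatever finite or infinite knowledge it encodes, the diagonal computation built from A itself always escapes it.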
Marturano recommended something radical: do only one thing at a time. When you’re on the phone, be on the phone. When you’re in a meeting, be there. Set aside an hour to check your email, and then shut off your computer monitor and focus on the task at hand. Another tip: take short mindfulness breaks throughout the day. She called them “purposeful pauses.” So, for example, instead of fidgeting or tapping your fingers while your computer boots up, try to watch your breath for a few minutes. When driving, turn off the radio and feel your hands on the wheel. Or when walking between meetings, leave your phone in your pocket and just notice the sensations of your legs moving. “If I’m a corporate samurai,” I said, “I’d be a little worried about taking all these pauses that you recommend because I’d be thinking, ‘Well, my rivals aren’t pausing. They’re working all the time.’ ” “Yeah, but that assumes that those pauses aren’t helping you. Those pauses are the ways to make you a more clear thinker and for you to be more focused on what’s important.” This was another attack on my work style. I had long assumed that ceaseless planning was the recipe for effectiveness, but Marturano’s point was that too much mental churning was counterproductive. When you lurch from one thing to the next, constantly scheming, or reacting to incoming fire, the mind gets exhausted. You get sloppy and make bad decisions. I could see how the counterintuitive act of stopping, even for a few seconds, could be a source of strength, not weakness. This was a practical complement to Joseph’s “is this useful?” mantra. It was the opposite of zoning out, it was zoning in. In fact, I looked into it and found there was science to suggest that pausing could be a key ingredient in creativity and innovation. Studies showed that the best way to engineer an epiphany was to work hard, focus, research, and think about a problem—and then let go. Do something else. 
That didn’t necessarily mean meditate, but do something that relaxes and distracts you; let your unconscious mind go to work, making connections from disparate parts of the brain. This, too, was massively counterintuitive for me. My impulse when presented with a thorny problem was to bulldoze my way through it, to swarm it with thought. But the best solutions often come when you allow yourself to get comfortable with ambiguity. This is why people have aha moments in the shower. It was why Kabat-Zinn had a vision while on retreat. It was why Don Draper from Mad Men, when asked how he comes up with his great slogans, said he spends all day thinking and then goes to the movies.
Dan Harris (10% Happier)
The rate of time flow perceived by an observer in the simulated universe is completely independent of the rate at which a computer runs the simulation, a point emphasized in Greg Egan's science-fiction novel Permutation City. Moreover, as we discussed in the last chapter and as stressed by Einstein, it's arguably more natural to view our Universe not from the frog perspective as a three-dimensional space where things happen, but from the bird perspective as a four-dimensional spacetime that merely is. There should therefore be no need for the computer to compute anything at all: it could simply store all the four-dimensional data, that is, encode all properties of the mathematical structure that is our Universe. Individual time slices could then be read out sequentially if desired, and the "simulated" world should still feel as real to its inhabitants as in the case where only three-dimensional data is stored and evolved. In conclusion: the role of the simulating computer isn't to compute the history of our Universe, but to specify it. How specify it? The way in which the data are stored (the type of computer, the data format, etc.) should be irrelevant, so the extent to which the inhabitants of the simulated universe perceive themselves as real should be independent of whatever method is used for data compression. The physical laws that we've discovered provide great means of data compression, since they make it sufficient to store the initial data at some time together with the equations and a program computing the future from these initial data.
As emphasized on pages 340-344, the initial data might be extremely simple: popular initial states from quantum field theory with intimidating names such as the Hawking-Hartle wavefunction or the inflationary Bunch-Davies vacuum have very low algorithmic complexity, since they can be defined in brief physics papers, yet simulating their time evolution would simulate not merely one universe like ours, but a vast decohering collection of parallel ones. It's therefore plausible that our Universe (and even the whole Level III multiverse) could be simulated by quite a short computer program.
Max Tegmark (Our Mathematical Universe: My Quest for the Ultimate Nature of Reality)
Atoms, elements, and molecules are three important concepts in physics, chemistry, and biology. Mathematics comes in where counting starts; when counting and measurement began, integers were required. Stephen Hawking says integers were created by God and everything else is the work of man. Man sees patterns in everything, and these are searched for and applied to other sciences for engineering, management, and application problems. Physics is needed to understand the physical nature of why something happens; chemistry explains its chemical nature; biology explains why it happened. Biology touches medicine, plants, and animals. Medicine explains how these atoms, elements, and molecules interplay with each other through bonding. Human emotions and responses arise from biochemistry and hormones, i.e. anatomy and physiology. Physiology deals with each and every organ and its functions. When the atoms in elements are disturbed, whatever they made, i.e. the macromolecules DNA, RNA, and protein, along with other micro- and macronutrients, is affected; this disturbs the physiology of different organs on different scales, and diseases are born from this imbalance, this disturbance of homeostasis. There are many technical words that are hard to explain in a single paragraph. But in short: these atoms in elements and molecules interplay because of ecological stimulus, i.e. the so-called god. When opposite sexes meet, various responses are triggered in the body of each; this too is hormones, acting because of the atoms inside elements and the continuous generation or degeneration of the cell cycle. There is a god cell called the totipotent stem cell; the lesser gods are the pluripotent, multipotent, and unipotent stem cells. So finally, each and every organ system, including brain cells, is affected by the interplay of atoms inside elements and their bonding into complex molecules, all ruled by ecological stimulus, i.e. god.
So everything is basically biology and medicine, even for animals, plants, microbes, and other life forms; only the process differs in each living organism. The biggest mysteries are the brain and DNA. The brain has many unexplained phenomena, and even dreams are not completely understood by science; that is where spiritualism and the soul come in. DNA is a long molecule with many applications: genetic engineering, genomics, personal medicine, DNA as a tool for data storage, DNA in panspermia theory, and many more. So everything that happens to women, men, and other sexes is because of biology, medicine, and ecology. In ecology, every organism is interconnected and interdependent. Now physics: it touches all technical aspects, but it needs mathematics and statistics to lay the foundation for why and how something happened, and chemistry and biology are later included inside physics as well. Mathematics gave rise to computers, which allow fast calculation for applications in any science. As physiological imbalances lead to diseases, disorders, and genetic mutations, the old concept of evolution was taken up again to understand how new biology evolves. For evolution and disease mechanisms, epidemiology and statistics were required, and statistics is now treated as a data tool across all the sciences. The ultimate science is to break the atom to see what is inside, as at CERN, but this creates many mysterious, unanswerable questions. Laws of physics were discovered and invented with mathematics to understand the universe from the atom up. The theory of everything is a long search with no answers yet. While searching inside atoms, many hypotheses such as wormholes and time travel were born, but none has been realized, as far as I know. The atom is a universe, and humans are universes; they contain everything that the universe has. Ecology is the god that affects humans and climate. In business, computerized AI applications are trying to figure out human emotions from the way people write, read, text, and post on social media, and so on.
Art tries to figure out human emotions in an artistic way.
Ganapathy K
Turing was able to show that there are certain classes of problem that do not have any algorithmic solution (in particular the 'halting problem' that I shall describe shortly). However, Hilbert's actual tenth problem had to wait until 1970 before the Russian mathematician Yuri Matiyasevich, providing proofs that completed certain arguments that had been earlier put forward by the Americans Julia Robinson, Martin Davis, and Hilary Putnam, showed that there can be no computer program (algorithm) which decides yes/no systematically to the question of whether a system of Diophantine equations has a solution. It may be remarked that whenever the answer happens to be 'yes', then that fact can, in principle, be ascertained by the particular computer program that just slavishly tries all sets of integers one after the other. It is the answer 'no', on the other hand, that eludes any systematic treatment. Various sets of rules for correctly giving the answer 'no' can be provided, like the argument using even and odd numbers that rules out solutions to the second system given above, but Matiyasevich's theorem showed that these can never be exhaustive.
Roger Penrose (Shadows of the Mind: A Search for the Missing Science of Consciousness)
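The 'yes' side Penrose describes (slavishly trying all sets of integers) is easy to sketch. The Python below is illustrative, not from the book: it searches integer tuples in expanding shells and will find a solution whenever one exists, but for an unsolvable equation an unbounded search never returns, which is exactly why 'no' eludes systematic treatment; a bounded cutoff cannot distinguish "no solution" from "none found yet."

```python
from itertools import count, product

def has_solution(eq, n_vars, max_radius=None):
    """Search integer tuples in expanding shells |x_i| <= r. Returns the
    first solution found (the 'yes' answer is semi-decidable). Without
    max_radius this loops forever on unsolvable equations; with a cutoff,
    None only means "none found yet", never a proof of 'no'."""
    radii = count(0) if max_radius is None else range(max_radius + 1)
    for r in radii:
        for tup in product(range(-r, r + 1), repeat=n_vars):
            # Visit each tuple exactly once: only on the shell where it first appears.
            if max(map(abs, tup), default=0) == r and eq(*tup) == 0:
                return tup
    return None

# x^2 + y^2 = 25 has integer solutions; the search finds one and halts:
print(has_solution(lambda x, y: x*x + y*y - 25, 2))        # -> (-4, -3)

# x^2 + y^2 + 1 = 0 has none; a cutoff gives only an inconclusive None:
print(has_solution(lambda x, y: x*x + y*y + 1, 2, max_radius=10))
```

This is the asymmetry in the quote: 'yes' answers surface eventually by brute force, while certifying 'no' requires genuine rule sets (parity arguments and the like), which Matiyasevich proved can never be exhaustive.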
Only years later would scientists again need to harness the power of multiple processors at once, when massively parallel processing would become an integral part of supercomputing. Years later, too, the genealogy of Shoch’s worm would come full circle. Soon after he published a paper about the worm citing The Shockwave Rider, he received a letter from John Brunner himself. It seemed that most science fiction writers harbored an unspoken ambition to write a book that actually predicted the future. Their model was Arthur C. Clarke, the prolific author of 2001: A Space Odyssey, who had become world-famous for forecasting the invention of the geosynchronous communications satellite in an earlier short story. “Apparently they’re all jealous of Arthur Clarke,” Shoch reflected. “Brunner wrote that his editor had sent him my paper. He said he was ‘really delighted to learn, that like Arthur C. Clarke, I predicted an event of the future.’” Shoch briefly considered replying that he had only borrowed the tapeworm’s name but that the concept was his own and that, unfortunately, Brunner did not really invent the worm. But he let it pass.
Michael A. Hiltzik (Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age)
The Very Real Difference Between Game Design and 3D Game Development You Always Wanted to Know
Getting into the gaming industry is a dream for many people. Beyond the fact that this field is always relevant, dynamic, and alive, it is a real paradise for those who love games. Turning your hobby into work is probably the best thing that can happen in your career.
What is game design? A 3D game designer is a creative person who dreams up the overall design of a video game. Game design is a large field, drawing on computer science and programming, creative writing, and graphic design. Game designers take the creative lead in imagining and bringing to life video game worlds: stories, characters, gameplay, rules, interfaces, dialogue, and environments. Game designers settle questions such as:
• the target audience;
• genre;
• main plot;
• alternative scenarios;
• maps;
• levels;
• characters;
• game process;
• user interface;
• rules and restrictions;
• the primary and secondary goals, etc.
Without this information, further work on the game is impossible. Once the concept has been chosen, the game designers work closely with the artists and developers to ensure that the overall picture of the game is harmonized and that the implementation is in line with the original ideas. A game designer's role on a game development outsourcing team differs from the specialized roles of graphic designers and programmers, who have specific tasks to accomplish in the division of labor that goes into creating a video game; students can major in those specific disciplines if desired. The game designer generates the ideas and concepts for games.
They define the layout and overall functionality of the game; in short, they are responsible for creating its vision. Game designers produce innovative ideas and need a knack for extraordinary, creative vision so that their game can survive in a competitive market. The field of game design always needs artists of all types, who may be drawn to multiple art forms, original game design, and computer animation. The game designer is the artist who uses his or her talents to bring the characters and plot to life.
What is game development? Game developers use their creative talent and skills to create the games that keep us glued to the screen for hours or even days, erasing every other thought from our minds. They are responsible for turning the vision into a reality: they convert the ideas and designs into the actual game, turning all the layouts and sketches into the finished product. This may involve concept generation, design, build, test, and release. While you create a game, it is important to think about the game mechanics, rewards, player engagement, and level design. 3D game development brings these ideas to life: developers take games from the conceptual phase, through development, and into reality. The development side of games typically involves the programming, coding, rendering, engineering, and testing of the game and all of its elements (sound, levels, characters, and other assets). The stages of 3D game development are:
• High concept
• Pitch
• Concept
• Game design document
• Prototype
• Production
• Design
• Level creation
• Programming
GameYan
A popular misconception is that decision analysis is unemotional, dehumanizing, and obsessive because it uses numbers and arithmetic in order to guide important life decisions. Isn’t this turning over important human decisions “to a machine,” sometimes literally a computer — which now picks our quarterbacks, our chief executive officers, and even our lovers? Aren’t the “mathematicizers” of life, who admittedly have done well in the basic sciences, moving into a context where such uses of numbers are irrelevant and irreverent? Don’t we suffer enough from the tyranny of numbers when our opportunities in life are controlled by numerical scores on aptitude tests and numbers entered on rating forms by interviewers and supervisors? In short, isn’t the human spirit better expressed by intuitive choices than by analytic number crunching? Our answer to all these concerns is an unqualified “no.” There is absolutely nothing in the von Neumann and Morgenstern theory — or in this book — that requires the adoption of “inhumanly” stable or easily accessed values. In fact, the whole idea of utility is that it provides a measure of what is truly personally important to individuals reaching decisions. As presented here, the aim of analyzing expected utility is to help us achieve what is really important to us. As James March (1978) points out, one goal in life may be to discover what our values are. That goal might require action that is playful, or even arbitrary. Does such action violate the dictates of either rationality or expected utility theory? No. Upon examination, an individual valuing such an approach will be found to have a utility associated with the existential experimentation that follows from it. All that the decision analyst does is help to make this value explicit so that the individual can understand it and incorporate it into action in a noncontradictory manner.
Reid Hastie (Rational Choice in an Uncertain World: The Psychology of Judgement and Decision Making)
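The expected-utility calculation the passage defends is short enough to state directly. A minimal sketch (the options and numbers are invented for illustration, not from the book): utility measures whatever is truly personally important to the individual, and the analysis merely weights it by probability.

```python
def expected_utility(lottery):
    """lottery: list of (probability, utility) pairs; probabilities sum to 1."""
    assert abs(sum(p for p, _ in lottery) - 1.0) < 1e-9
    return sum(p * u for p, u in lottery)

# Illustrative choice between a safe and a risky option:
safe  = [(1.0, 50.0)]                  # certain utility of 50
risky = [(0.6, 100.0), (0.4, 0.0)]     # utility 100 with p = 0.6, else 0

options = [("safe", safe), ("risky", risky)]
best = max(options, key=lambda named: expected_utility(named[1]))
print(best[0], expected_utility(risky))  # risky 60.0
```

Nothing here dictates what the utilities must be; as the quote insists, even a value placed on playful experimentation can be entered as a utility and weighed like any other.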
Quantum computing is not only faster than conventional computing, but its workload obeys a different scaling law—rendering Moore’s Law little more than a quaint memory. Formulated by Intel founder Gordon Moore, Moore’s Law observes that the number of transistors in a device’s integrated circuit doubles approximately every two years. Some early supercomputers ran on around 13,000 transistors; the Xbox One in your living room contains 5 billion. But Intel in recent years has reported that the pace of advancement has slowed, creating tremendous demand for alternative ways to provide faster and faster processing to fuel the growth of AI. The short-term results are innovative accelerators like graphics-processing unit (GPU) farms, tensor-processing unit (TPU) chips, and field-programmable gate arrays (FPGAs) in the cloud. But the dream is a quantum computer. Today we have an urgent need to solve problems that would tie up classical computers for centuries, but that could be solved by a quantum computer in a few minutes or hours. For example, the speed and accuracy with which quantum computing could break today’s highest levels of encryption is mind-boggling. It would take a classical computer 1 billion years to break today’s RSA-2048 encryption, but a quantum computer could crack it in about a hundred seconds, or less than two minutes. Fortunately, quantum computing will also revolutionize classical computing encryption, leading to ever more secure computing. To get there we need three scientific and engineering breakthroughs. The math breakthrough we’re working on is a topological qubit. The superconducting breakthrough we need is a fabrication process to yield thousands of topological qubits that are both highly reliable and stable. The computer science breakthrough we need is new computational methods for programming the quantum computer.
Satya Nadella (Hit Refresh)
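Moore's Law as stated in the passage invites a quick back-of-the-envelope check. A sketch using only the two transistor counts quoted (the doubling model is the passage's; the helper name is mine): doubling every two years turns 13,000 transistors into 5 billion in roughly 37 years.

```python
import math

def years_to_grow(start, target, doubling_period_years=2.0):
    """Years for `start` transistors to reach `target` if the count
    doubles every `doubling_period_years` (Moore's Law as stated)."""
    return doubling_period_years * math.log2(target / start)

# Figures from the passage: ~13,000 transistors (early supercomputer)
# versus 5 billion (Xbox One):
print(round(years_to_grow(13_000, 5_000_000_000), 1))  # 37.1
```

That span of a few decades matches the gap between early supercomputers and today's consoles, which is what makes the reported slowdown of this doubling so consequential for AI workloads.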
The twenty-first-century shift into real-time analytics has only made the danger of metrics more intense. Avinash Kaushik, digital marketing evangelist at Google, warns that trying to get website users to see as many ads as possible naturally devolves into trying to cram sites with ads: “When you are paid on a [cost per thousand impressions] basis the incentive is to figure out how to show the most possible ads on every page [and] ensure the visitor sees the most possible pages on the site.… That incentive removes a focus from the important entity, your customer, and places it on the secondary entity, your advertiser.” The website might gain a little more money in the short term, but ad-crammed articles, slow-loading multi-page slide shows, and sensationalist clickbait headlines will drive away readers in the long run. Kaushik’s conclusion: “Friends don’t let friends measure Page Views. Ever.”
Brian Christian (Algorithms to Live By: The Computer Science of Human Decisions)
Getting to fifty-fifty is incredibly complex and nuanced, requiring many detailed solutions that will take decades to fully play out. To accelerate the process, change needs to start at the top. Like Stewart Butterfield, CEOs need to make hiring and retaining women an explicit priority. In addition, here is the bare minimum of what we can do at an individual and a systemic level: First of all, people, be nice to each other. Treat one another with respect and dignity, including those of the opposite sex. That should be pretty simple. Don’t enable assholes. Stop making excuses for bad behavior, or ignoring it. CEOs must embrace and champion the need to reach a fair representation of gender within their companies, and develop a comprehensive plan to get there. Be long-term focused, not short-term. It may take three weeks to find a white man for the job, but three months to find a woman. Those three months could save three years of playing catch-up in the future. Invest in not just diversity but inclusion. Even if your company is small, everything counts. And take the time to educate your employees about why this is important. Companies need to appoint more women to their boards. And boards need to hold company leadership to account to get to fifty-fifty in their employee ranks, starting with company executives. Venture capital firms need to hire more women partners, and limited partners should pressure them to do so and, at the very least, ask them what their plans around diversity are. Investors, both men and women, need to start funding more women and diverse teams, period. LPs need to fund more women VCs, who can establish new firms with new cultural norms. Stop funding partnerships that look and act the same. Most important, stop blaming everybody else for the problem or pretending that it is too hard for us to solve. It’s time to look in the mirror. This is an industry, after all, that prides itself on disruption and revolutionary new ways of thinking.
Let’s put that spirit of innovation and embrace of radical change to good use. Seeing a more inclusive workforce in Silicon Valley will encourage more girls and women studying computer science now.
Emily Chang (Brotopia: Breaking Up the Boys' Club of Silicon Valley)
Dynamically speaking, a globular cluster is a big many-body problem. The two-body problem is easy. Newton solved it completely. Each body—the earth and the moon, for example—travels in a perfect ellipse around the system’s joint center of gravity. Add just one more gravitational object, however, and everything changes. The three-body problem is hard, and worse than hard. As Poincaré discovered, it is most often impossible. The orbits can be calculated numerically for a while, and with powerful computers they can be tracked for a long while before uncertainties begin to take over. But the equations cannot be solved analytically, which means that long-term questions about a three-body system cannot be answered. Is the solar system stable? It certainly appears to be, in the short term, but even today no one knows for sure that some planetary orbits could not become more and more eccentric until the planets fly off from the system forever.
James Gleick (Chaos: Making a New Science)
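The "calculated numerically" step Gleick describes can be sketched in a few lines: advancing a planar three-body system under Newtonian gravity with a simple leapfrog integrator. The masses, starting positions, and G = 1 units below are illustrative choices, not from the text.

```python
def accelerations(pos, masses, G=1.0):
    """Pairwise Newtonian gravitational acceleration on each body."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def step(pos, vel, masses, dt):
    """One leapfrog (kick-drift-kick) step, a standard choice for orbits."""
    a = accelerations(pos, masses)
    vel = [[v[0] + 0.5 * dt * ai[0], v[1] + 0.5 * dt * ai[1]]
           for v, ai in zip(vel, a)]
    pos = [[p[0] + dt * v[0], p[1] + dt * v[1]] for p, v in zip(pos, vel)]
    a = accelerations(pos, masses)
    vel = [[v[0] + 0.5 * dt * ai[0], v[1] + 0.5 * dt * ai[1]]
           for v, ai in zip(vel, a)]
    return pos, vel

# Illustrative three-body configuration in G=1 units.
masses = [1.0, 1.0, 1.0]
pos = [[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]]
vel = [[0.0, -0.5], [0.0, 0.5], [0.0, 0.0]]
for _ in range(1000):
    pos, vel = step(pos, vel, masses, dt=0.001)
```

Poincaré's point appears the moment you perturb the starting positions slightly and rerun: the trajectories diverge over time, which is why the long-term questions resist analytic answers.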
Researchers have found some interesting facts about computer-generated short social media posts:3 •The average person is twice as likely to be fooled by these posts as a security researcher is. •Computer-generated posts that are contrary to popular belief are more likely to be accepted as true. •It is easier to deceive people about entertainment topics than about science topics. •It is easier to fool people about pornographic topics than any other topic.
Steven Shwartz (Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity)
It is often said that philosophy makes no progress, but to a large extent the creation of autonomous disciplines is how philosophy progresses. Mathematics in antiquity; physics in the Renaissance; biology after Darwin; logic in the early 20th century; computer science in mid-20th century; cognitive science still more recently; in each case, so much progress was made, so many controversies resolved, so many confusions clarified, that a self-contained subject was created and equipped to progress further. The philosopher Daniel Dennett defines philosophy as what we do when we don’t know what questions to ask; when we understand enough to work out what the questions are and can start answering them, a new science buds off from philosophy.
David Wallace (Philosophy of Physics: A Very Short Introduction (Very Short Introductions))
When thinking about how to incorporate lecture videos, many online faculty imagine posting videos of their classroom lectures in the course. This is certainly one way to do it, and some institutions are investing in elaborate lecture-capture systems to facilitate this process. But lecture capture requires expensive tech and a team of skilled professionals. The small teaching way is to record short narrated slideshow videos or webcam-style videos speaking directly to the camera on your computer monitor. The key word here is short. “Traditional in-person lectures usually last an hour, but students have much shorter attention spans when watching educational videos online,” writes Philip Guo in a blog post about a study he and his colleagues conducted (Guo, 2013). The researchers compiled data from 6.9 million video-watching sessions to track engagement patterns of online students. Their findings led to a strong recommendation that online class videos should be no longer than six minutes.
Flower Darby (Small Teaching Online: Applying Learning Science in Online Classes)
With the exception of a few supporters at Bell Laboratories who understood digital technology, AT&T continued to resist the idea. The most outspoken skeptics were some of AT&T’s most senior technical people. “After I heard the melodic refrain of ‘bullshit’ often enough,” Baran recalled, “I was motivated to go away and write a series of detailed memoranda papers, to show, for example, that algorithms were possible that allowed a short message to contain all the information it needed to find its own way through the network.” With each objection answered, another was raised and another piece of a report had to be written. By the time Baran had answered all of the concerns raised by the defense, communications, and computer science communities, nearly four years had passed and his volumes numbered eleven.
Katie Hafner (Where Wizards Stay Up Late: The Origins Of The Internet)
So when it comes to poetry, make sure you’ve got a comfortable seat. Something normally distributed that’s gone on seemingly too long is bound to end shortly; but the longer something in a power-law distribution has gone on, the longer you can expect it to keep going.
Brian Christian (Algorithms to Live By: The Computer Science of Human Decisions)
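The contrast in the quote can be illustrated empirically. The sketch below uses made-up distributions (a normal variable and a Pareto power law; both parameter choices are arbitrary) to show that the expected remaining duration shrinks with age under a normal distribution but grows under a power law.

```python
import random

random.seed(0)  # deterministic for illustration

def expected_remaining(samples, t):
    """Average remaining duration, given survival past time t."""
    survivors = [x - t for x in samples if x > t]
    return sum(survivors) / len(survivors)

# Made-up distributions: normal(100, 15) vs. a Pareto power law (alpha=1.5).
normal = [random.gauss(100, 15) for _ in range(100_000)]
power = [100 * random.paretovariate(1.5) for _ in range(100_000)]

for t in (110, 140):
    print(f"t={t}: normal leaves {expected_remaining(normal, t):.1f}, "
          f"power law leaves {expected_remaining(power, t):.1f}")
```

The normal sample's expected remainder drops as t grows (it is "bound to end shortly"), while the power-law sample's expected remainder increases with t, exactly the "keep going" behavior the quote describes.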
In short, the mathematics of self-organizing lists suggests something radical: the big pile of papers on your desk, far from being a guilt-inducing fester of chaos, is actually one of the most well-designed and efficient structures available. What might appear to others to be an unorganized mess is, in fact, a self-organizing mess.
Brian Christian (Algorithms to Live By: The Computer Science of Human Decisions)
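The "self-organizing mess" Christian describes corresponds to a classic data structure, the move-to-front list. A minimal sketch (the class name and the paper labels are mine, not the book's):

```python
class MoveToFrontList:
    """Linear list that moves each accessed item to the front, so
    frequently used items drift toward the top, like the pile of papers."""

    def __init__(self, items):
        self.items = list(items)

    def access(self, item):
        """Find item by linear scan, then move it to the front.
        Returns the number of items scanned past (the access cost)."""
        i = self.items.index(item)  # raises ValueError if absent
        self.items.insert(0, self.items.pop(i))
        return i

papers = MoveToFrontList(["taxes", "draft", "receipts", "notes"])
papers.access("receipts")
papers.access("receipts")
print(papers.items)  # → ['receipts', 'taxes', 'draft', 'notes']
```

The efficiency claim is that move-to-front is competitive: whatever your actual access pattern turns out to be, the scan costs stay within a constant factor of the best possible fixed ordering.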
So far this does not tell us anything very general about structure except that it is hierarchical. But we can say more. Each assembly or subassembly or part has a task to perform. If it did not it would not be there. Each therefore is a means to a purpose. Each therefore, by my earlier definition, is a technology. This means that the assemblies, subassemblies, and individual parts are all executables-are all technologies. It follows that a technology consists of building blocks that are technologies, which consist of further building blocks that are technologies, which consist of yet further building blocks that are technologies, with the pattern repeating all the way down to the fundamental level of elemental components. Technologies, in other words, have a recursive structure. They consist of technologies within technologies all the way down to the elemental parts. Recursiveness will be the second principle we will be working with. It is not a very familiar concept outside mathematics, physics, and computer science, where it means that structures consist of components that are in some way similar to themselves. In our context of course it does not mean that a jet engine consists of systems and parts that are little jet engines. That would be absurd. It means simply that a jet engine (or more generally, any technology) consists of component building blocks that are also technologies, and these consist of sub-parts that are also technologies, in a repeating (or recurring) pattern. Technologies, then, are built from a hierarchy of technologies, and this has implications for how we should think of them, as we will see shortly. It also means that whatever we can say in general about technologies-singular must hold also for assemblies or subsystems at lower levels as well. In particular, because a technology consists of a main assembly and supporting assemblies, each assembly or subsystem must be organized this way too.
W. Brian Arthur (The Nature of Technology: What It Is and How It Evolves)
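Arthur's recursive structure maps naturally onto a tree data type: a technology whose parts are themselves technologies, down to elemental components at the leaves. A minimal sketch, with illustrative jet-engine part names that are not from the text:

```python
from dataclasses import dataclass, field

@dataclass
class Technology:
    """A technology is built from parts that are themselves technologies."""
    name: str
    parts: list["Technology"] = field(default_factory=list)

    def depth(self) -> int:
        """Levels of technologies-within-technologies below this node;
        an elemental part (no sub-parts) has depth 1."""
        return 1 + max((p.depth() for p in self.parts), default=0)

engine = Technology("jet engine", [
    Technology("compressor", [Technology("blade"), Technology("disk")]),
    Technology("combustor", [Technology("fuel injector")]),
])
print(engine.depth())  # → 3
```

Because every node is the same type, anything we can say about the whole (it has a purpose, it is an executable) applies unchanged at every level, which is Arthur's point about statements holding "all the way down."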