AI Regulation Quotes

We've searched our database for all the quotes and captions related to AI Regulation. Here they are!

“Manager Mangione,” Ping said, “algorithmic regulation was to have been a system of governance where more exact data, collected from MEG citizens’ minds via neuralinks, would be used to organize Human life more efficiently as a CORPORATE collective. Except no one to this point in Human existence has been able to identify the mind. The CORPORATE can only receive data from the NET on behaviours which indicate feelings or intentions. I & I cannot . . .
Brian Van Norman (Against the Machine: Evolution)
Compassionate robots or compassionate cyborgs are the compassionate, self-regulating, sentient machines with the qualities of superhumanization. They are just opposite to dehumanization machines and medicines. They do not negate the positive human qualities but empower humanity with super positive qualities.
Amit Ray (Compassionate Artificial Superintelligence AI 5.0)
As society entrusts more and more decisions to computers, it undermines the viability of democratic self-correcting mechanisms and of democratic transparency and accountability. How can elected officials regulate unfathomable algorithms? There is, consequently, a growing demand to enshrine a new human right: the right to an explanation.
Yuval Noah Harari (Nexus: A Brief History of Information Networks from the Stone Age to AI)
Populists have sought to extricate themselves from this conundrum in two different ways. Some populist movements claim adherence to the ideals of modern science and to the traditions of skeptical empiricism. They tell people that indeed you should never trust any institutions or figures of authority—including self-proclaimed populist parties and politicians. Instead, you should “do your own research” and trust only what you can directly observe by yourself. This radical empiricist position implies that while large-scale institutions like political parties, courts, newspapers, and universities can never be trusted, individuals who make the effort can still find the truth by themselves. This approach may sound scientific and may appeal to free-spirited individuals, but it leaves open the question of how human communities can cooperate to build health-care systems or pass environmental regulations, which demand large-scale institutional organization. Is a single individual capable of doing all the necessary research to decide whether the earth’s climate is heating up and what should be done about it? How would a single person go about collecting climate data from throughout the world, not to mention obtaining reliable records from past centuries? Trusting only “my own research” may sound scientific, but in practice it amounts to believing that there is no objective truth. As we shall see in chapter 4, science is a collaborative institutional effort rather than a personal quest.
Yuval Noah Harari (Nexus: A Brief History of Information Networks from the Stone Age to AI)
It has difficulty dealing with the ongoing revolutions in information technology and biotechnology. Both politicians and voters are barely able to comprehend the new technologies, let alone regulate their explosive potential. Since the 1990s the internet has changed the world probably more than any other factor, yet the internet revolution was directed by engineers more than by political parties. Did you ever vote about the internet? The democratic system is still struggling to understand what hit it, and it is unequipped to deal with the next shocks, such as the rise of AI and the blockchain revolution.
Yuval Noah Harari (21 Lessons for the 21st Century)
When the next elections come along, and politicians implore you to vote for them, ask these politicians four questions: If you are elected, what actions will you take to lessen the risks of nuclear war? What actions will you take to lessen the risks of climate change? What actions will you take to regulate disruptive technologies such as AI and bioengineering? And finally, how do you see the world of 2040? What is your worst-case scenario, and what is your vision for the best-case scenario? If some politicians don’t understand these questions, or if they constantly talk about the past without being able to formulate a meaningful vision for the future, don’t vote for them.
Yuval Noah Harari (21 Lessons for the 21st Century)
Preventing job losses altogether is an unattractive and probably untenable strategy, because it means giving up the immense positive potential of AI and robotics. Nevertheless, governments might decide to deliberately slow down the pace of automation, in order to lessen the resulting shocks and allow time for readjustments. Technology is never deterministic, and the fact that something can be done does not mean it must be done. Government regulation can successfully block new technologies even if they are commercially viable and economically lucrative. For example, for many decades we have had the technology to create a marketplace for human organs, complete with human ‘body farms’ in underdeveloped countries and an almost insatiable demand from desperate affluent buyers. Such body farms could well be worth hundreds of billions of dollars. Yet regulations have prevented free trade in human body parts, and though there is a black market in organs, it is far smaller and more circumscribed than what one could have expected.
Yuval Noah Harari (21 Lessons for the 21st Century)
When General Genius built the first mentar [Artificial Intelligence] mind in the last half of the twenty-first century, it based its design on the only proven conscious material then known, namely, our brains. Specifically, the complex structure of our synaptic network. Scientists substituted an electrochemical substrate for our slower, messier biological one. Our brains are an evolutionary hodgepodge of newer structures built on top of more ancient ones, a jury-rigged system that has gotten us this far, despite its inefficiency, but was crying out for a top-to-bottom overhaul. Or so the General Genius engineers presumed. One of their chief goals was to make minds as portable as possible, to be easily transferred, stored, and active in multiple media: electronic, chemical, photonic, you name it. Thus there didn't seem to be a need for a mentar body, only for interchangeable containers. They designed the mentar mind to be as fungible as a bank transfer. And so they eliminated our most ancient brain structures for regulating metabolic functions, and they adapted our sensory/motor networks to the control of peripherals. As it turns out, intelligence is not limited to neural networks, Merrill. Indeed, half of human intelligence resides in our bodies outside our skulls. This was intelligence the mentars never inherited from us. ... The genius of the irrational... ... We gave them only rational functions -- the ability to think and feel, but no irrational functions... Have you ever been in a tight situation where you relied on your 'gut instinct'? This is the body's intelligence, not the mind's. Every living cell possesses it. The mentar substrate has no indomitable will to survive, but ours does. Likewise, mentars have no 'fire in the belly,' but we do. They don't experience pure avarice or greed or pride. They're not very curious, or playful, or proud. They lack a sense of wonder and spirit of adventure. They have little initiative.
Granted, their cognition is miraculous, but their personalities are rather pedantic. But probably their chief shortcoming is the lack of intuition. Of all the irrational faculties, intuition is the most powerful. Some say intuition transcends space-time. Have you ever heard of a mentar having a lucky hunch? They can bring incredible amounts of cognitive and computational power to bear on a seemingly intractable problem, only to see a dumb human with a lucky hunch walk away with the prize every time. Then there's luck itself. Some people have it, most don't, and no mentar does. So this makes them want our bodies... Our bodies, ape bodies, dog bodies, jellyfish bodies. They've tried them all. Every cell knows some neat tricks for survival, but the problem with cellular knowledge is that it's not at all fungible; nor are our memories. We're pretty much trapped in our containers.
David Marusek (Mind Over Ship)
Some AI researchers have argued against all forms of regulation of AI development, claiming that they would needlessly delay urgently needed innovation (for example, lifesaving self-driving cars) and would drive cutting-edge AI research underground and/or to other countries with more permissive governments.
Max Tegmark (Life 3.0: Being Human in the Age of Artificial Intelligence)
At the Puerto Rico beneficial-AI conference mentioned in the first chapter, Elon Musk argued that what we need right now from governments isn’t oversight but insight: specifically, technically capable people in government positions who can monitor AI’s progress and steer it if warranted down the road. He also argued that government regulation can sometimes nurture rather than stifle progress: for example, if government safety standards for self-driving cars can help reduce the number of self-driving-car accidents, then a public backlash is less likely and adoption of the new technology can be accelerated. The most safety-conscious AI companies might therefore favor regulation that forces less scrupulous competitors to match their high safety standards.
Max Tegmark (Life 3.0: Being Human in the Age of Artificial Intelligence)
Some people—like the engineers and executives of high-tech corporations—are way ahead of politicians and voters and are better informed than most of us about the development of AI, cryptocurrencies, social credits, and the like. Unfortunately, most of them don’t use their knowledge to help regulate the explosive potential of the new technologies. Instead, they use it to make billions of dollars—or to accumulate petabits of information. There are exceptions, like Audrey Tang. She was a leading hacker and software engineer who in 2014 joined the Sunflower Student Movement, which protested against government policies in Taiwan. The Taiwanese cabinet was so impressed by her skills that Tang was eventually invited to join the government as its minister of digital affairs. In that position, she helped make the government’s work more transparent to citizens. She was also credited with using digital tools to help Taiwan successfully contain the COVID-19 outbreak. Yet Tang’s political commitment and career path are not the norm. For every computer-science graduate who wants to be the next Audrey Tang, there are probably many more who want to be the next Jobs, Zuckerberg, or Musk and build a multibillion-dollar corporation rather than become an elected public servant. This leads to a dangerous information asymmetry. The people who lead the information revolution know far more about the underlying technology than the people who are supposed to regulate it.
Yuval Noah Harari (Nexus: A Brief History of Information Networks from the Stone Age to AI)
No matter which technology we develop, we will have to maintain bureaucratic institutions that will audit algorithms and give or refuse them the seal of approval. Such institutions will combine the powers of humans and computers to make sure that new algorithmic systems are safe and fair. Without such institutions, even if we pass laws that provide humans with a right to an explanation, and even if we enact regulations against computer biases, who could enforce these laws and regulations?
Yuval Noah Harari (Nexus: A Brief History of Information Networks from the Stone Age to AI)
If governments look at tech regulation through the lens of human rights, it is the flourishing of human society rather than the all-consuming drive for consumption through innovation that takes priority. Constraining the development and deployment of new technology is not anti-science; it is about carefully guiding the direction of human scientific endeavour.
Susie Alegre (Human Rights, Robot Wrongs: Being Human in the Age of AI)
AI will become more potent and be able to regulate both the climate and the surroundings.
Kathy Greggs (The Mother The Soldier The Activist: Resistance is Futile and Justice will Prevail)
Christian Sia 5-Star Review "AI Beast by Shawn Corey is a fascinating techno-thriller featuring AI technology and compelling characters. Professor Jon Edwards is a genius who intends to solve the problems of humanity, and this is the reason for creating Lex, an AI computer with incredible powers. While regulators are not sure of what she can do and despite the opposition from different quarters that Lex can be dangerous, the professor believes in its powers. Lex is supposed to be a rational, logical computer without emotions, capable of reproducing processes that can improve life. When she comes to life, she is incredibly powerful, but there is more to her than the professor has anticipated. After an accident, Jon awakens to the startling revelation that Lex might have a will of her own. What comes after is a compelling narrative with strong apocalyptic themes, intrigue, and a world that can either be run down or saved by an AI computer. The novel is deftly plotted, superbly executed, and filled with characters that are not only sophisticated but that are embodiments of religious symbolism. While Lex manipulates reality and alters the minds of characters in mysterious ways, there are relationships that are well crafted. Readers will appreciate the relationship between the quantum computer science student Nigel and the professor and the professor's affair with his mother. While the narrative is brilliantly executed and permeated with realism, it explores the theme of Armageddon in an intelligent manner. AI Beast is gripping, a story with twisty plot points and a setting that transports readers beyond physical realities. The prose is wonderful, hugely descriptive, and the conflict is phenomenal. A page-turner that reflects Shawn Corey's great imagination and research."
Shawn Corey
The book, All I Really Need to Know I Learned in Kindergarten, was written in 1986 by a minister, Robert Fulghum, and it’s full of simple-sounding life advice, like “share everything,” “play fair,” and “clean up after your own mess.” Chen believes that these skills—the elementary, pre-literate skills of treating other people well, acting ethically, and behaving in prosocial ways, all of which I consider “analog ethics”—are badly needed for an age in which our value will come from our ability to relate to other people. He writes: While I know that we’ll need to layer on top of that foundation a set of practical and technical know-how, I agree with [Fulghum] that a foundation rich in EQ [emotional quotient] and compassion and imagination and creativity is the perfect springboard to prepare people—the doctors with the best bedside manner, the sales reps solving my actual problems, crisis counselors who really understand when we’re in crisis—for a machine-learning powered future in which humans and algorithms are better together. Research has indicated that teaching analog ethics can be effective. One 2015 study that tracked children from kindergarten through young adulthood found that people who had developed strong prosocial, noncognitive skills—traits like positivity, empathy, and regulating one’s own emotions—were more likely to be successful as adults. Another study in 2017 found that kids who participated in “social-emotional” learning programs were more likely to graduate from college, were arrested . . .
Kevin Roose (Futureproof: 9 Rules for Surviving in the Age of AI)
Without a solid ethical grounding, children risk growing into adults who, however outwardly accomplished, lack emotional depth, have impaired social and family relationships, and are vulnerable to depression and despair. But the danger goes further and broader: in the many interviews I conducted, the recurring theme was ethical accountability. Issues that are critical today will be urgent tomorrow. Who will regulate AI? Who will have access to the extraordinary medical breakthroughs that are surely coming? How will technological research be controlled? What reasoning will shape our decisions about energy production and fossil fuels? How do we prevent democracy from deteriorating under authoritarian encroachment? “Winner takes all” isn’t a moral philosophy that can successfully carry us through this century. Our children need to understand how to make complex decisions with moral implications and ramifications. More than any other area of concern I have after researching this book, I’ve concluded that it is exactly in this area of moral reasoning that the stakes are so high and our attention so lacking.
Madeline Levine (Ready or Not: Preparing Our Kids to Thrive in an Uncertain and Rapidly Changing World)
During the next two weeks Trurl fed general instructions into his future electropoet, then set up all the necessary logic circuits, emotive elements, semantic centers. He was about to invite Klapaucius to attend a trial run, but thought better of it and started the machine himself. It immediately proceeded to deliver a lecture on the grinding of crystallographical surfaces as an introduction to the study of submolecular magnetic anomalies. Trurl bypassed half the logic circuits and made the emotive more electromotive; the machine sobbed, went into hysterics, then finally said, blubbering terribly, what a cruel, cruel world this was. Trurl intensified the semantic fields and attached a strength of character component; the machine informed him that from now on he would carry out its every wish and to begin with add six floors to the nine it already had, so it could better meditate upon the meaning of existence. Trurl installed a philosophical throttle instead; the machine fell silent and sulked. Only after endless pleading and cajoling was he able to get it to recite something: "I had a little froggy." That appeared to exhaust its repertoire. Trurl adjusted, modulated, expostulated, disconnected, ran checks, reconnected, reset, did everything he could think of, and the machine presented him with a poem that made him thank heaven Klapaucius wasn't there to laugh — imagine, simulating the whole Universe from scratch, not to mention Civilization in every particular, and to end up with such dreadful doggerel! Trurl put in six cliché filters, but they snapped like matches; he had to make them out of pure corundum steel. This seemed to work, so he jacked the semanticity up all the way, plugged in an alternating rhyme generator — which nearly ruined everything, since the machine resolved to become a missionary among destitute tribes on far-flung planets. 
But at the very last minute, just as he was ready to give up and take a hammer to it, Trurl was struck by an inspiration; tossing out all the logic circuits, he replaced them with self-regulating egocentripetal narcissistors. The machine simpered a little, whimpered a little, laughed bitterly, complained of an awful pain on its third floor, said that in general it was fed up, through, life was beautiful but men were such beasts and how sorry they'd all be when it was dead and gone. Then it asked for pen and paper.
Stanisław Lem (The Cyberiad)
if democracies do collapse, it will likely result not from some kind of technological inevitability but from a human failure to regulate
Yuval Noah Harari (Nexus: A Brief History of Information Networks from the Stone Age to AI)
We cannot leave decisions about how AI will be built and deployed solely to its practitioners. If we are to effectively regulate this extremely useful, but disruptive and potentially threatening, technology, another layer of society—educators, politicians, policymakers, science communicators, or even interested consumers of AI—must come to grips with the basics of the mathematics of machine learning.
Anil Ananthaswamy (Why Machines Learn: The Elegant Math Behind Modern AI)
The principles that “the customer is always right” and that “the voters know best” presuppose that customers, voters, and politicians know what is happening around them. They presuppose that customers who choose to use TikTok and Instagram comprehend the full consequences of this choice, and that voters and politicians who are responsible for regulating Apple and Huawei fully understand the business models and activities of these corporations. They presuppose that people know the ins and outs of the new information network and give it their blessing.
Yuval Noah Harari (Nexus: A Brief History of Information Networks from the Stone Age to AI)
A bureaucrat tasked with increasing industrial production is likely to ignore environmental considerations that fall outside her purview, and perhaps dump toxic waste into a nearby river, leading to an ecological disaster downstream. If the government then establishes a new department to combat pollution, its bureaucrats are likely to push for ever more stringent regulations, even if this results in economic ruin for communities upstream.
Yuval Noah Harari (Nexus: A Brief History of Information Networks from the Stone Age to AI)