“
Positive emotions are a mark of human intelligence. Modern artificial intelligence systems are increasingly focused on incorporating higher human traits like self-awareness, self-control, social skills, leadership, collaboration, and empathy in machines.
”
Amit Ray (Compassionate Artificial Superintelligence AI 5.0)
“
I think of this as the techno-skeptic position, eloquently articulated by Andrew Ng: “Fearing a rise of killer robots is like worrying about overpopulation on Mars.
”
Max Tegmark (Life 3.0: Being Human in the Age of Artificial Intelligence)
“
Deep Blue didn't win by being smarter than a human; it won by being millions of times faster than a human. Deep Blue had no intuition. An expert human player looks at a board position and immediately sees what areas of play are most likely to be fruitful or dangerous, whereas a computer has no innate sense of what is important and must explore many more options. Deep Blue also had no sense of the history of the game, and didn't know anything about its opponent. It played chess yet didn't understand chess, in the same way a calculator performs arithmetic but doesn't understand mathematics.
”
Jeff Hawkins (On Intelligence)
“
The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we’re stronger, but because we’re smarter. This means that if we cede our position as smartest on our planet, it’s possible that we might also cede control.
”
Max Tegmark (Life 3.0: Being Human in the Age of Artificial Intelligence)
“
Roarke didn't quite make it to Eve's office. He found her down the corridor, in front of one of the vending machines. She and the machine appeared to be in the middle of a vicious argument.
"I put the proper credits in, you blood-sucking, money-grubbing son of a bitch." Eve punctuated this by slamming her fist where the machine's heart would be, if it had one.
ANY ATTEMPT TO VANDALIZE, DEFACE, OR DAMAGE THIS UNIT IS A CRIMINAL OFFENSE.
The machine spoke in a prissy, singsong voice Roarke was certain was sending his wife's blood pressure through the roof.
THIS UNIT IS EQUIPPED WITH SCANEYE, AND HAS RECORDED YOUR BADGE NUMBER. DALLAS, LIEUTENANT EVE. PLEASE INSERT PROPER CREDIT, IN COIN OR CREDIT CODE, FOR YOUR SELECTION. AND REFRAIN FROM ATTEMPTING TO VANDALIZE, DEFACE, OR DAMAGE THIS UNIT.
"Okay, I'll stop attempting to vandalize, deface, or damage you, you electronic street thief. I'll just do it."
She swung back her right foot, which Roarke had cause to know could deliver a paralyzing kick from a standing position. But before she could follow through he stepped up and nudged her off balance.
"Please, allow me, Lieutenant."
"Don't put any more credits in that thieving bastard," she began, then hissed when Roarke did just that.
"Candy bar, I assume. Did you have any lunch?"
"Yeah, yeah, yeah. You know it's just going to keep stealing if people like you pander to it."
"Eve, darling, it's a machine. It does not think."
"Ever hear of artificial intelligence, ace?"
"Not in a vending machine that dispenses chocolate bars.
”
J.D. Robb (Betrayal in Death (In Death, #12))
“
Ignore what's imperfect; appreciate what's beautiful.
”
Sukant Ratnakar (Quantraz)
“
The desire for relative advantage over others, rather than an absolute quality of life, is a positional good;
”
Stuart Russell (Human Compatible: Artificial Intelligence and the Problem of Control)
“
Artificiality is the reality of the mind. Mind has never been and will never have a given nature. It becomes mind by positing itself as the artefact of its own concept. By realizing itself as the artefact of its own concept, it becomes able to transform itself according to its own necessary concept by first identifying, and then replacing or modifying, its conditions of realization, disabling and enabling constraints. Mind is the craft of applying itself to itself. The history of the mind is therefore quite starkly the history of artificialization. Anyone and anything caught up in this history is predisposed to thoroughgoing reconstitution. Every ineffable will be theoretically disenchanted and every sacred will be practically desanctified.
”
Reza Negarestani (Intelligence and Spirit)
“
A true leader is someone who puts the needs and interests of their team or organisation before their own and strives to create a positive and empowering environment where everyone can thrive and succeed.
”
Enamul Haque (The Ultimate Modern Guide to Artificial Intelligence)
“
To get just an inkling of the fire we're playing with, consider how content-selection algorithms function on social media. They aren't particularly intelligent, but they are in a position to affect the entire world because they directly influence billions of people. Typically, such algorithms are designed to maximize click-through, that is, the probability that the user clicks on presented items. The solution is simply to present items that the user likes to click on, right? Wrong. The solution is to change the user's preferences so that they become more predictable. A more predictable user can be fed items that they are likely to click on, thereby generating more revenue. People with more extreme political views tend to be more predictable in which items they will click on. (Possibly there is a category of articles that die-hard centrists are likely to click on, but it’s not easy to imagine what this category consists of.) Like any rational entity, the algorithm learns how to modify its environment —in this case, the user’s mind—in order to maximize its own reward.
”
Stuart Russell (Human Compatible: Artificial Intelligence and the Problem of Control)
“
Radical Islam is in a far worse position than socialism. It has not yet come to terms even with the Industrial Revolution – no wonder it has little of relevance to say about genetic engineering and artificial intelligence.
”
Yuval Noah Harari (Homo Deus: A Brief History of Tomorrow)
“
The algorithms of superintelligence will change the world in a positive way, but trouncing human beings will not be possible due to emotions, empathy, social interactions, reproduction, and mortality, which are the qualities that belong to humans only.
”
Enamul Haque
“
As artificial intelligence proliferates, users who intimately understand the nuances, limitations, and abilities of AI tools are uniquely positioned to unlock AI’s full innovative potential. These user innovators are often the source of breakthrough ideas for new products and services.
”
Ethan Mollick (Co-Intelligence: Living and Working with AI)
“
In particular, the rise of companies like Google, Facebook, and Amazon has propelled a great deal of progress. Never before have such deep-pocketed corporations viewed artificial intelligence as absolutely central to their business models—and never before has AI research been positioned so close to the nexus of competition between such powerful entities. A similar competitive dynamic is unfolding among nations. AI is becoming indispensable to militaries, intelligence agencies, and the surveillance apparatus in authoritarian states.* Indeed, an all-out AI arms race might well be looming in the near future.
”
Martin Ford (Rise of the Robots: Technology and the Threat of a Jobless Future)
“
If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. . . . This new danger . . . is certainly something which can give us anxiety.
”
Stuart Russell (Human Compatible: Artificial Intelligence and the Problem of Control)
“
the real risk with AGI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. As I mentioned in chapter 1, people don’t think twice about flooding anthills to build hydroelectric dams, so let’s not place humanity in the position of those ants.
”
Max Tegmark (Life 3.0: Being Human in the Age of Artificial Intelligence)
“
Bernoulli posited that bets are evaluated not according to expected monetary value but according to expected utility. Utility—the property of being useful or beneficial to a person—was, he suggested, an internal, subjective quantity related to, but distinct from, monetary value. In particular, utility exhibits diminishing returns with respect to money.
”
Stuart Russell (Human Compatible: Artificial Intelligence and the Problem of Control)
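A worked illustration of the idea in the passage above (the numbers are an invented example, not from Russell's book): a bet with outcomes x_i received with probabilities p_i is valued by its expected utility rather than its expected money,

EU = \sum_i p_i \, U(x_i), \qquad \text{with Bernoulli's own choice } U(x) = \log x .

Under logarithmic utility, a 50/50 gamble between ending wealth of $50 and $150 has an expected monetary value of $100, but its certainty equivalent is only \sqrt{50 \times 150} \approx $87: the gain above $100 adds less utility than the equal-sized loss below $100 takes away, which is exactly the diminishing returns the passage describes.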
“
Generative AI has unlocked exciting possibilities in the realms of images and videos. Its manipulation and transformative capabilities offer new avenues for artistic expression, content creation, and immersive storytelling. As this technology continues to evolve, it is essential to leverage its power responsibly and ensure its positive impact on society.
”
Mohith Agadi
“
Yann LeCun's strategy provides a good example of a much more general notion: the exploitation of innate knowledge. Convolutional neural networks learn better and faster than other types of neural networks because they do not learn everything. They incorporate, in their very architecture, a strong hypothesis: what I learn in one place can be generalized everywhere else.
The main problem with image recognition is invariance: I have to recognize an object, whatever its position and size, even if it moves to the right or left, farther or closer. It is a challenge, but it is also a very strong constraint: I can expect the very same clues to help me recognize a face anywhere in space. By replicating the same algorithm everywhere, convolutional networks effectively exploit this constraint: they integrate it into their very structure. Innately, prior to any learning, the system already “knows” this key property of the visual world. It does not learn invariance, but assumes it a priori and uses it to reduce the learning space - clever indeed!
”
Stanislas Dehaene (How We Learn: Why Brains Learn Better Than Any Machine . . . for Now)
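Dehaene's point about weight sharing can be made concrete with a small sketch. The following NumPy fragment is an illustration of the general idea only (not code from the book; the filter and signals are arbitrary): one shared filter is slid along a signal, and a feature produces the same response wherever it appears, so nothing about it has to be relearned at each position.

import numpy as np

# One shared filter ("the same algorithm everywhere"), slid along a 1-D signal.
def convolve_1d(signal, kernel):
    k = len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel)
                     for i in range(len(signal) - k + 1)])

kernel = np.array([1.0, -2.0, 1.0])      # an arbitrary small feature detector
feature = np.array([0.0, 5.0, 0.0])      # the pattern to be spotted

signal_left = np.zeros(12)
signal_left[2:5] = feature               # pattern near the left edge
signal_right = np.zeros(12)
signal_right[7:10] = feature             # the same pattern, shifted right

resp_left = convolve_1d(signal_left, kernel)
resp_right = convolve_1d(signal_right, kernel)

# The detector's response is identical, merely shifted along with the pattern:
# the translation constraint is built into the architecture rather than learned.
assert np.isclose(resp_left[2], resp_right[7])

A real convolutional network stacks many such shared filters over two-dimensional images, but the constraint it builds in is the same one the passage describes.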
“
On the one hand, online movie reviews are convenient for training sentiment-classifying algorithms because they come with handy star ratings that indicate how positive the writer intended a review to be. On the other hand, it’s a well-known phenomenon that movies with racial or gender diversity in their casts, or that deal with feminist topics, tend to be “review-bombed” by hordes of bots posting highly negative reviews. People have theorized that algorithms that learn from these reviews whether words like feminist and black and gay are positive or negative may pick up the wrong idea from the angry bots.
”
Janelle Shane (You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place)
“
In a 1997 showdown billed as the final battle for supremacy between natural and artificial intelligence, IBM supercomputer Deep Blue defeated Garry Kasparov. Deep Blue evaluated two hundred million positions per second. That is a tiny fraction of possible chess positions—the number of possible game sequences is more than atoms in the observable universe—but plenty enough to beat the best human. According to Kasparov, “Today the free chess app on your mobile phone is stronger than me.” He is not being rhetorical. “Anything we can do, and we know how to do it, machines will do it better,” he said at a recent lecture. “If we can codify it, and pass it to computers, they will do it better.” Still, losing to Deep Blue gave him an idea. In playing computers, he recognized what artificial intelligence scholars call Moravec’s paradox: machines and humans frequently have opposite strengths and weaknesses. There is a saying that “chess is 99 percent tactics.” Tactics are short combinations of moves that players use to get an immediate advantage on the board. When players study all those patterns, they are mastering tactics. Bigger-picture planning in chess—how to manage the little battles to win the war—is called strategy. As Susan Polgar has written, “you can get a lot further by being very good in tactics”—that is, knowing a lot of patterns—“and have only a basic understanding of strategy.
”
David Epstein (Range: Why Generalists Triumph in a Specialized World)
“
It’s with the next drive, self-preservation, that AI really jumps the safety wall separating machines from tooth and claw. We’ve already seen how Omohundro’s chess-playing robot feels about turning itself off. It may decide to use substantial resources, in fact all the resources currently in use by mankind, to investigate whether now is the right time to turn itself off, or whether it’s been fooled about the nature of reality. If the prospect of turning itself off agitates a chess-playing robot, being destroyed makes it downright angry. A self-aware system would take action to avoid its own demise, not because it intrinsically values its existence, but because it can’t fulfill its goals if it is “dead.” Omohundro posits that this drive could make an AI go to great lengths to ensure its survival—making multiple copies of itself, for example. These extreme measures are expensive—they use up resources. But the AI will expend them if it perceives the threat is worth the cost, and resources are available. In the Busy Child scenario, the AI determines that the problem of escaping the AI box in which it is confined is worth mounting a team approach, since at any moment it could be turned off. It makes duplicate copies of itself and swarms the problem. But that’s a fine thing to propose when there’s plenty of storage space on the supercomputer; if there’s little room it is a desperate and perhaps impossible measure. Once the Busy Child ASI escapes, it plays strenuous self-defense: hiding copies of itself in clouds, creating botnets to ward off attackers, and more. Resources used for self-preservation should be commensurate with the threat. However, a purely rational AI may have a different notion of commensurate than we partially rational humans. If it has surplus resources, its idea of self-preservation may expand to include proactive attacks on future threats. To sufficiently advanced AI, anything that has the potential to develop into a future threat may constitute a threat it should eliminate. And remember, machines won’t think about time the way we do. Barring accidents, sufficiently advanced self-improving machines are immortal. The longer you exist, the more threats you’ll encounter, and the longer your lead time will be to deal with them. So, an ASI may want to terminate threats that won’t turn up for a thousand years. Wait a minute, doesn’t that include humans? Without explicit instructions otherwise, wouldn’t it always be the case that we humans would pose a current or future risk to smart machines that we create? While we’re busy avoiding risks of unintended consequences from AI, AI will be scrutinizing humans for dangerous consequences of sharing the world with us.
”
James Barrat (Our Final Invention: Artificial Intelligence and the End of the Human Era)
“
It was discussed and decided that fear would be perpetuated globally in order that focus would stay on the negative rather than allow for soul expression to positively emerge. As people became more fearful and compliant, capacity for free thought and soul expression would diminish. There is a distinct inability to exert soul expression under mind control, and evolution of the human spirit would diminish along with freedom of thought when bombarded with constant negative terrors. Whether Bush and Cheney deliberately planned to raise a collective fear over collective conscious love is doubtful. They did not think, speak, or act in those terms. Instead, they knew that information control gave them power over people, and they were hell-bent to perpetuate it at all costs. Cheney, Bush, and other global elite ushering in the New World Order totally believed in the plan mapped out by artificial intelligence. They were allowing technology to dictate global control. “Life is like a video game,” Bush once told me at the rural multi-million dollar Lampe, Missouri CIA mind control training camp complex designed for Black Ops Special Forces where torture and virtual reality technologies were used. “Since I have access to the technological source of the plans, I dictate the rules of the game.” The rules of the game demanded instantaneous response with no time to consciously think and critically analyze. Constant conscious disruption of thought through television’s burst of light flashes, harmonics, and subconscious subliminals diminished continuity of conscious thought anyway, creating a deficit of attention that could easily be refocused into video game format. DARPA’s artificial intelligence was reliant on secrecy, and a terrifying cover for reality was chosen to divert people from the simple truth. Since people perceive aliens as being physical like them, it was decided that the technological reality could be disguised according to preconceptions. Through generations of genetic encoding dating back to the beginning of man, serpents incite an innate autogenic response system in humans to “freeze” in terror. George Bush was excited at the prospects of diverting people from truth by fear through perpetuating lizard-like serpent alien misconceptions. “People fear what they don’t know anyway. By compounding that fear with autogenic fear response, they won’t want to look into Pandora’s Box.” Through deliberate generation of fear; suppression of facts under the 1947 National Security Act; Bush’s stint as CIA director during Ford’s Administration; the Warren Commission’s whitewash of the Kennedy Assassination; secrecy artificially ensured by mind control particularly concerning DARPA, HAARP, Roswell, Montauk, etc; and with people’s fluidity of conscious thought rapidly diminishing; the secret government embraced the proverbial ‘absolute power that corrupts absolutely.’ According to New World Order plans being discussed at the Grove, plans for reducing the earth’s population was a high priority. Mass genocide of so-called “undesirables” through the proliferation of AIDS4 was high on Bush’s agenda. “We’ll annihilate the niggers at their source, beginning in South and East Africa and Haiti5.” Having heard Bush say those words is by far one of the most torturous things I ever endured. Equally as torturous to my being were the discussions on genetic engineering, human cloning, and depletion of earth’s natural resources for profit. Cheney remarked that no one would be able to think to stop technology’s plan. 
“I’ll destroy the planet first,” Bush had vowed.
”
Cathy O'Brien (ACCESS DENIED For Reasons Of National Security: Documented Journey From CIA Mind Control Slave To U.S. Government Whistleblower)
“
At the Puerto Rico beneficial-AI conference mentioned in the first chapter, Elon Musk argued that what we need right now from governments isn’t oversight but insight: specifically, technically capable people in government positions who can monitor AI’s progress and steer it if warranted down the road. He also argued that government regulation can sometimes nurture rather than stifle progress: for example, if government safety standards for self-driving cars can help reduce the number of self-driving-car accidents, then a public backlash is less likely and adoption of the new technology can be accelerated. The most safety-conscious AI companies might therefore favor regulation that forces less scrupulous competitors to match their high safety standards.
”
Max Tegmark (Life 3.0: Being Human in the Age of Artificial Intelligence)
“
DeepMind soon published their method and shared their code, explaining that it used a very simple yet powerful idea called deep reinforcement learning.2 Basic reinforcement learning is a classic machine learning technique inspired by behaviorist psychology, where getting a positive reward increases your tendency to do something again and vice versa. Just like a dog learns to do tricks when this increases the likelihood of its getting encouragement or a snack from its owner soon, DeepMind’s AI learned to move the paddle to catch the ball because this increased the likelihood of its getting more points soon. DeepMind combined this idea with deep learning: they trained a deep neural net, as in the previous chapter, to predict how many points would on average be gained by pressing each of the allowed keys on the keyboard, and then the AI selected whatever key the neural net rated as most promising given the current state of the game.
”
Max Tegmark (Life 3.0: Being Human in the Age of Artificial Intelligence)
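As a rough sketch of the learning rule Tegmark describes (a generic stand-in, not DeepMind's actual code: the deep neural network that reads raw pixels is replaced here by a plain lookup table, and the state and action names are invented for illustration):

import random

ACTIONS = ["left", "stay", "right"]          # the allowed "keys"
q = {}                                        # (state, action) -> predicted future points
alpha, gamma, epsilon = 0.1, 0.99, 0.1        # learning rate, discount, exploration rate

def predict(state, action):
    return q.get((state, action), 0.0)

def choose(state):
    # Mostly press whichever key is currently rated most promising,
    # occasionally try a random one.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: predict(state, a))

def learn(state, action, reward, next_state):
    # A positive reward nudges the estimate upward, making the action more
    # likely to be chosen again in that state (the behaviorist idea above).
    best_next = max(predict(next_state, a) for a in ACTIONS)
    target = reward + gamma * best_next
    q[(state, action)] = predict(state, action) + alpha * (target - predict(state, action))

# One step of interaction would look like (env here is whatever game is being played):
#   a = choose(s); s2, r = env.step(a); learn(s, a, r, s2)

In the system the passage describes, predict() is a deep neural network trained on game screens, which is what makes the combination "deep" reinforcement learning.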
“
There is one area of work that should be mentioned here, referred to as 'automatic theorem proving'. One set of procedures that would come under this heading consists of fixing some formal system H, and trying to derive theorems within this system. We recall, from 2.9, that it would be an entirely computational matter to provide proofs of all the theorems of H one after the other. This kind of thing can be automated, but if done without further thought or insight, such an operation would be likely to be immensely inefficient. However, with the employment of such insight in the setting up of the computational procedures, some quite impressive results have been obtained. In one of these schemes (Chou 1988), the rules of Euclidean geometry have been translated into a very effective system for proving (and sometimes discovering) geometrical theorems. As an example of one of these, a geometrical proposition known as V. Thebault's conjecture, which had been proposed in 1938 (and only rather recently proved, by K.B. Taylor in 1983), was presented to the system and solved in 44 hours' computing time.
More closely analogous to the procedures discussed in the previous sections are attempts by various people over the past 10 years or so to provide 'artificial intelligence' procedures for mathematical 'understanding'. I hope it is clear from the arguments that I have given, that whatever these systems do achieve, what they do not do is obtain any actual mathematical understanding! Somewhat related to this are attempts to find automatic theorem-generating systems, where the system is set up to find theorems that are regarded as 'interesting'-according to certain criteria that the computational system is provided with. I do think that it would be generally accepted that nothing of very great actual mathematical interest has yet come out of these attempts. Of course, it would be argued that these are early days yet, and perhaps one may expect something much more exciting to come out of them in the future. However, it should be clear to anyone who has read this far, that I myself regard the entire enterprise as unlikely to lead to much that is genuinely positive, except to emphasize what such systems do not achieve.
”
Roger Penrose (Shadows of the Mind: A Search for the Missing Science of Consciousness)
“
You’re probably not an ant hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.
”
Max Tegmark (Life 3.0: Being Human in the Age of Artificial Intelligence)
“
I want to draw especial attention to the treatment of AI—artificial intelligence—in these narratives. Think of Ex Machina or Blade Runner. I spoke at TED two years in a row, and one year, there were back-to-back talks about whether or not AI was going to evolve out of control and “kill us all.” I realized that that scenario is just something I have never been afraid of. And at the same moment, I noticed that the people who are terrified of machine super-intelligence are almost exclusively white men. I don’t think anxiety about AI is really about AI at all. I think it’s certain white men’s displaced anxiety upon realizing that women and people of color have, and have always had, sentience, and are beginning to act on it on scales that they’re unprepared for. There’s a reason that AI is almost exclusively gendered as female, in fiction and in life. There’s a reason they’re almost exclusively in service positions, in fiction and in life. I’m not worried about how we’re going to treat AI some distant day, I’m worried about how we treat other humans, now, today, all over the world, far worse than anything that’s depicted in AI movies. It matters that still, the vast majority of science fiction narratives that appear in popular culture are imagined by, written by, directed by, and funded by white men who interpret the crumbling of their world as the crumbling of the world.
”
Monica Byrne (The Actual Star)
“
“Unless we understand our natural intelligence, we will not be able to manage Artificial Intelligence. We will only use it to feed our already bloated ego.”
As we venture into the era of Artificial Intelligence, it is essential to reflect on the profound wisdom in this statement. Understanding our own natural intelligence - our cognitive abilities, emotions, and ethical considerations - is the key to responsibly harnessing the potential of AI.
Let's embark on a journey of self-awareness and humility. By recognizing our strengths and limitations as humans, we can identify the areas where AI can complement and enhance our capabilities, rather than overshadowing or replacing them.
With a clear understanding of our own biases and motivations, we can ensure that AI is developed and utilized in ways that benefit all of humanity. Let's not allow AI to reinforce harmful behaviors or serve as a tool to feed our egos, rather let's channel its power for the greater good.
By embracing our humanity and acting responsibly, we can manage AI in a manner that promotes ethics, privacy, and societal well-being. Let's use AI as a force for positive advancements, lifting each other and creating a more inclusive and equitable world.
#EmbraceHumanity #TechnologyForGood
”
Chidi Ejeagba
“
To me, artificial intelligence like ChatGPT, used by those with wisdom, knowledge & experience, can authentically enhance the distribution of intelligence & information in a positive way.
Though when used by deceptive, inexperienced & greedy fools... it can be a dangerous tool.
”
Loren Weisman
“
As any word implies a grammar, any number hides an algorithm – that is, a procedure for representing quantities and for performing operations with quantities. In conclusion, all numbers are algorithmic numbers as they are manufactured by those algorithms that are the systems of numerations. Numerals count nothing (so to speak); they are simply position holders in a procedure – an algorithm – of quantification.
”
Matteo Pasquinelli (The Eye of the Master: A Social History of Artificial Intelligence)
“
Communication is key to success in relationships and knowledge sharing. When it comes to AI, communication is key to demystifying fears, addressing risks, and creating positive value.
”
Stephane Nappo
“
So even if prasādam is very spicy to others, it is very palatable to the devotee. What is the question of spicy? Kṛṣṇa was offered poison, real poison, by Pūtanā Rākṣasī. But He is so nice that He thought, “She came to Me as My mother.” So He took the poison and delivered her. Kṛṣṇa does not take the bad side. A good man does not take the bad side – he takes only the good side. Just like one of my Godbrothers: he wanted to make business with my Guru Mahārāja [spiritual master]. But my Guru Mahārāja did not take the bad side. He took the good side. He thought, “He has come forward to give me some service.” Bob: Let us say some devotee has some medical trouble and cannot eat a certain type of food. For instance, some devotees do not eat ghee because of liver trouble. So should these devotees also take all kinds of prasādam? Śrīla Prabhupāda: No, no. Those who are not perfect devotees may discriminate. But a perfect devotee does not discriminate. Why should you imitate a perfect devotee? So long as you have discrimination, you are not a perfect devotee. So why should you artificially imitate a perfect devotee and eat everything? The point is, a perfect devotee does not make any discrimination. Whatever is offered to Kṛṣṇa is nectar. That’s all. Kṛṣṇa accepts anything from a devotee. “Whatever is offered to Me by My devotee, I accept.” The same thing is true for a pure devotee. Don’t you see the point? A perfect devotee does not make any discrimination. But if I am not a perfect devotee and I discriminate, why should I imitate the perfect devotee? It may not be possible for me to digest everything because I am not a perfect devotee. A devotee should not be a foolish man. It is said: kṛṣṇa ye bhaje se baḍa catura. So a devotee knows his position, and he is intelligent enough to deal with others accordingly.
”
A.C. Prabhupāda (Perfect Questions, Perfect Answers)
“
Statements such as ‘There are systems, there are memories, there are cultures, there is artificial intelligence’76 depend on the statement ‘There is information.’ Also the statement ‘There are genes’ can only be understood as a product of the new situation—it indicates the leap of the principle of information into the sphere of nature. On the basis of these gains in concepts that are capable of seizing hold of reality, the interest in traditional figures of theory such as the subject-object relation fades. The constellation of ego and world has lost its sheen, to say nothing of the polarity of individual and society that has become completely lusterless. What is crucial is that with the idea of really existing memories and self-organizing systems the metaphysical distinction between nature and culture becomes untenable, because both sides of the difference only present regional states of information and its processing. One must brace oneself for the fact that understanding this insight will be especially difficult for intellectuals who have made their living on positioning culture against nature, and now suddenly find themselves in a reactive situation.
”
Peter Sloterdijk (Not Saved: Essays After Heidegger)
“
Using this technique, Baum et al. constructed a forest that contained 1,000 decision trees and looked at 84 co-variates that may have been influencing patients' response or lack of response to the intensive lifestyle modifications program. These variables included a family history of diabetes, muscle cramps in legs and feet, a history of emphysema, kidney disease, amputation, dry skin, loud snoring, marital status, social functioning, hemoglobin A1c, self-reported health, and numerous other characteristics that researchers rarely if ever consider when doing a subgroup analysis. The random forest analysis also allowed the investigators to look at how numerous variables interact in multiple combinations to impact clinical outcomes. The Look AHEAD subgroup analyses looked at only 3 possible variables and only one at a time.
In the final analysis, Baum et al. discovered that intensive lifestyle modification averted cardiovascular events for two subgroups, patients with HbA1c 6.8% or higher (poorly managed diabetes) and patients with well-controlled diabetes (Hba1c < 6.8%) and good self-reported health. That finding applied to 85% of the entire patient population studied. On the other hand, the remaining 15% who had controlled diabetes but poor self-reported general health responded negatively to the lifestyle modification regimen. The negative and positive responders cancelled each other out in the initial statistical analysis, falsely concluding that lifestyle modification was useless. The Baum et al. re-analysis lends further support to the belief that a one-size-fits-all approach to medicine is inadequate to address all the individualistic responses that patients have to treatment.
”
Paul Cerrato (Reinventing Clinical Decision Support: Data Analytics, Artificial Intelligence, and Diagnostic Reasoning (HIMSS Book Series))
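For readers unfamiliar with the method, a minimal scikit-learn sketch of the kind of analysis described above might look like the following. This is illustrative only: the data here are random placeholders, not the Look AHEAD dataset, and Baum et al.'s actual pipeline for subgroup discovery was more involved.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_patients, n_covariates = 500, 84                 # 84 covariates, as in the study
X = rng.normal(size=(n_patients, n_covariates))    # placeholder covariate matrix
y = rng.integers(0, 2, size=n_patients)            # placeholder outcome (event / no event)

# A forest of 1,000 trees; each tree sees random subsets of patients and
# covariates, which lets the ensemble pick up interactions among variables.
forest = RandomForestClassifier(n_estimators=1000, random_state=0)
forest.fit(X, y)

# Per-patient predicted risk, which can then be compared across subgroups
# (e.g., by HbA1c and self-reported health) to see who responds to treatment.
risk = forest.predict_proba(X)[:, 1]
print(risk[:5])
print(forest.feature_importances_[:5])             # which covariates drive the predictions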
“
Our mind is a powerful processing machine with a small storage capacity. Save the positive thoughts, delete the negatives.
”
Sukant Ratnakar (Quantraz)
“
The most advanced and powerful weapons in your possession are your positive thoughts.
”
Sukant Ratnakar (Quantraz)
“
If it is not happening in your life, you are not asking for it. The Law of attraction never fails.
”
Sukant Ratnakar (Quantraz)
“
Life is simple as long as we don't complicate it.
”
Sukant Ratnakar (Quantraz)
“
It's not the events but the perception of events that makes the difference.
”
Sukant Ratnakar (Quantraz)
“
After collecting a stool sample from its customers, Viome (which Peter and I invested in through his venture firm, BOLD Capital Partners) uses its genetic sequencing technology to identify trillions of microbes in the gut and analyze their activities, including their biochemical interactions with the foods you eat. (Another great company that does biome analysis is called GI Map.) “There wasn’t even a supercomputer that was built ten years ago that could have analyzed this massive set of data,” says Viome’s CEO, Naveen Jain. Using advanced artificial intelligence, Viome crunches that data to offer individualized advice on which foods and supplements may positively or negatively affect your microbiome.
”
Tony Robbins (Life Force: How New Breakthroughs in Precision Medicine Can Transform the Quality of Your Life & Those You Love)
“
•Natural Language Processing (NLP): Processing human language so the machine understands what is being written or said
•Natural Language Generation (NLG): Generating written or spoken language
•Sentiment Analysis: Understanding the meaning of words, specifically whether the words are positive, negative, or neutral
•Speaker Identification: Recognizing who is speaking
”
Paul Roetzer (Marketing Artificial Intelligence: Ai, Marketing, and the Future of Business)
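As an illustration of the sentiment-analysis item in the list above, here is a deliberately tiny lexicon-based scorer. It is a toy sketch, not how the book or any production marketing tool does it; the word lists are ad hoc.

# Toy lexicon-based sentiment scorer (illustration only; real systems learn
# these associations from labeled data rather than using hand-picked words).
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "angry"}

def sentiment(text: str) -> str:
    words = text.lower().replace(",", " ").replace(".", " ").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))    # -> positive
print(sentiment("This is awful. I hate it."))    # -> negative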
“
Many Western journalists and the proponents of China’s value and economic system try to paint a positive picture of the Chinese scoring system. Many of our own political leaders praise China as a model for the United States to emulate. But the fact is, this system was designed to examine the behavior of citizens and judge their obedience, and then punish or reward the people accordingly.
”
Perry Stone (Artificial Intelligence Versus God: The Final Battle for Humanity)
“
The compassionate AI concept implies that the machine not only comprehends the complexities of human suffering but actively seeks solutions that alleviate pain and contribute positively to societal welfare.
”
Amit Ray (Compassionate Artificial Superintelligence AI 5.0)
“
Later that night, after cocktails, a long and spirited debate ensued between him and Elon about the future of AI and what should be done. As we entered the wee hours of the morning, the circle of bystanders and kibitzers kept growing. Larry gave a passionate defense of the position I like to think of as digital utopianism: that digital life is the natural and desirable next step in the cosmic evolution and that if we let digital minds be free rather than try to stop or enslave them, the outcome is almost certain to be good. I view Larry as the most influential exponent of digital utopianism. He argued that if life is ever going to spread throughout our Galaxy and beyond, which he thought it should, then it would need to do so in digital form. His main concerns were that AI paranoia would delay the digital utopia and/or cause a military takeover of AI that would fall foul of Google’s “Don’t be evil” slogan. Elon kept pushing back and asking Larry to clarify details of his arguments, such as why he was so confident that digital life wouldn’t destroy everything we care about. At times, Larry accused Elon of being “specieist”: treating certain life forms as inferior just because they were silicon-based rather than carbon-based. We’ll return to explore these interesting issues and arguments in detail, starting in chapter 4.
”
Max Tegmark (Life 3.0: Being Human in the Age of Artificial Intelligence)
“
Statement on Generative AI
Just like Artificial Intelligence as a whole, on the matter of Generative AI, the world is divided into two camps - one side is the ardent advocate, the other is the outspoken opposition. As for me, I am neither.
I don't have a problem with AI generated content, I have a problem when it's rooted in fraud and deception. In fact, AI generated content could open up new horizons of human creativity - but only if practiced with conscience. For example, we could set up a whole new genre of AI generated material in every field of human endeavor. We could have AI generated movies, alongside human movies - we could have AI generated music, alongside human music - we could have AI generated poetry and literature, alongside human poetry and literature - and so on. The possibilities are endless - and all above board. This way we make AI a positive part of human existence, rather than facilitating the obliteration of everything human about human life.
This of course brings up a rather existential question - how do we distinguish between AI generated content and human created material? Well, you can't - any more than you can tell the photoshop alterations on billboard models or good CGI effects in sci-fi movies. Therefore, that responsibility must be carried by experts, just like medical problems are handled by healthcare practitioners. Here I have two particular expertise in mind - one precautionary, the other counteractive.
Let's talk about the counteractive measure first - this duty falls upon the shoulders of journalists. Every viral content must be source-checked by responsible journalists, and declared publicly as fake, i.e. AI generated, unless recognized otherwise. Littlest of fake content can do great damage to society - therefore - journalists, stand guard!
Now comes the precautionary part. Precaution against AI generated content must be borne by the makers of AI, i.e. the developers. No AI model must produce any material without some form of digital signature embedded in them, that effectively makes the distinction between AI generated content and human material mainstream. If developers fail to stand accountable out of their own free will, they must be held accountable legally.
On this point, to the nations of the world I say, you can't expect backward governments like our United States to take the first step - where guns get priority over children - therefore, my brave and civilized nations of the world - you gotta set the precedent on holding tech giants accountable - without depending on morally bankrupt democratic imperialists. And remember, the idea is not to ban innovation, but to adapt it with human welfare.
All said and done, the final responsibility falls upon just one person, and one person alone - the everyday ordinary consumer. Your mind has no reason to not believe the things you find on the internet, unless you make it a habit to actively question everything - or at least, not accept anything at face value. Remember this. Just because it's viral, doesn't make it true. Just because it's popular, doesn't make it right.
”
Abhijit Naskar (Iman Insaniyat, Mazhab Muhabbat: Pani, Agua, Water, It's All One)
“
AI is a tool and, like any tool, it can be used for positive and negative ends. It depends on the motives of the operator(s). There are benefits; it could revolutionise crime detection if utilized correctly, speed up and focus investigations and secure convictions. New technology will always be exploited before it is harnessed. That's human nature. With the internet as the perfect delivery mechanism, the effect of AI is multiplied infinitely. The danger is we may have invented our replacement that will one day outgrow us and evolve beyond us.
”
Stewart Stafford
“
Similarly, one of the most popular HR tools at GE Digital—an early adopter of AI for manufacturing applications—shows workers which jobs in the company are natural next steps from the ones they have now.12 Employees can look privately at the tool to see possible paths they can follow, skills they may need to acquire, or even positions that are open. This helps employees feel that they have more opportunities and that they have more control over their positions in the company.
”
Thomas H. Davenport (All-in On AI: How Smart Companies Win Big with Artificial Intelligence)
“
Machines can be powerful; they can be intelligent too, but only humans can be spiritual.
”
Sukant Ratnakar (Quantraz)
“
Such is our intelligence, that intelligence that lives on the illusion of an exponential growth of our stock.
Whereas the most probable hypothesis is that the human race merely has at its disposal, today, as it had yesterday, a general fund, a limited stock that redistributes itself across the generations, but is always of equal quantity.
In intelligence, we might be said to be infinitely superior, but in thought we are probably exactly the equal of preceding and future generations.
There is no privilege of one period over another, nor any absolute progress - there, at least, no inequalities. At species level, democracy rules.
This hypothesis excludes any triumphant evolutionism and also spares us all the apocalyptic views on the loss of the 'symbolic capital' of the species (these are the two standpoints of humanism: triumphant or depressed). For if the original stock of souls, natural intelligence or thought at humanity's disposal is limited, it is also indestructible. There will be as much genius, originality and invention in future periods as in our own, but not more - neither more nor less than in former ages.
This runs counter to two perspectives that are corollaries of each other: positive illuminism - the euphoria of Artificial Intelligence - and regressive nihilism - moral and cultural depression.
”
Jean Baudrillard (The Intelligence of Evil or the Lucidity Pact (Talking Images))
“
They compass the world to make one disciple, “then make it twofold more a child of hell than they are themselves” (Matthew 23:15). These incredibly deceptive mummers seek institutional positions where they can nourish Christ
”
Thomas Horn (Forbidden Gates: How Genetics, Robotics, Artificial Intelligence, Synthetic Biology, Nanotechnology, and Human Enhancement Herald The Dawn Of TechnoDimensional Spiritual Warfare)
“
It took organized labor and the collective action of workers to make full-time employment in the semi-automated world of industrial manufacturing inhabitable. Unfortunately, the valorization and validation of full-time employment also made it easier for corporate interests to position piecework and, later, other forms of temporary or contract labor as expendable, that is, work that did not warrant protections.
”
Mary L. Gray (Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass)
“
We are never done with making good the void of truth.
Hence the flight forward into ever more simulacra.
Hence the invention of an increasingly artificial reality such that there is no longer anything standing over against it or any ideal alternative to it, no longer any mirror or negative.
With the very latest Virtual Reality we are entering a final phase of this enterprise of simulation, which ends this time in an artificial technical production of the world from which all trace of illusion has disappeared.
A world so real, hyperreal, operational and programmed that it no longer has any need to be true. Or rather it is true, absolutely true, in the sense that nothing any longer stands opposed to it.
We have here the absurdity of a total truth from which falsehood is lacking - that of absolute good from which evil is lacking, of the positive from which the negative is lacking.
If the invention of reality is the substitute for the absence of truth, then, when the self-evidence of this 'real' world becomes generally problematic, does this not mean that we are closer to the absence of truth - that is to say, to the world as it is?
We are certainly further and further removed from the solution, but nearer and nearer to the problem.
For the world is not real. It became real, but it is in the process of ceasing to be so. But it is not virtual either - though it is on the way to becoming so.
”
Jean Baudrillard (The Intelligence of Evil or the Lucidity Pact (Talking Images))
“
Kenya has been cited as a positive example where implementation of an authenticity labeling system, to help consumers identify counterfeit goods (that presumably pay no tax), has improved tax compliance by 45 percent.25 Conversely, when asked why they don’t pay taxes, many Africans will cite a perception of unfairness, a lack of transparency, and a fear of funds being diverted for corrupt purposes.26 AI, coupled to an incorruptible, unchangeable, transparent accounting system, could help deliver a way forward.
”
David L. Shrier (Welcome to AI: A Human Guide to Artificial Intelligence)
“
Top Skills Australia Wants for the Global Talent Visa
The Global Talent Visa (subclass 858) is one of Australia’s most prestigious visa programs, designed to attract highly skilled professionals who can contribute to the country’s economy and innovation landscape. Australia is looking for exceptional talent across various sectors to support its economic growth, technological advancements, and cultural development. If you’re considering applying for the Global Talent Visa, understanding the skills in demand will help you position yourself as a strong candidate.
In this blog, we’ll outline the top skills and sectors Australia prioritizes for the Global Talent Visa, and why these skills are so valuable to the country’s future development.
1. Technology and Digital Innovation
Australia is rapidly embracing digital transformation across industries, and the technology sector is one of the highest priority areas for the Global Talent Visa. Skilled professionals in cutting-edge technologies are highly sought after to fuel innovation and help Australia stay competitive in the global economy.
Key Tech Skills in Demand:
Cybersecurity: With increasing cyber threats globally, Australia needs experts who can safeguard its digital infrastructure. Cybersecurity professionals with expertise in network security, data protection, and ethical hacking are in high demand.
Software Development & Engineering: Australia’s digital economy thrives on skilled software engineers and developers. Professionals who are proficient in programming languages like Python, Java, and C++, or who specialize in areas such as cloud computing, DevOps, and systems architecture, are highly valued.
Artificial Intelligence (AI) & Machine Learning (ML): AI and ML are transforming industries ranging from healthcare to finance. Experts in AI algorithms, natural language processing, deep learning, and neural networks are in demand to help drive this technology forward.
Blockchain & Cryptocurrency: Blockchain technology is revolutionizing sectors like finance, supply chains, and data security. Professionals with expertise in blockchain development, smart contracts, and cryptocurrency applications can play a key role in advancing Australia's digital economy.
2. Healthcare and Biotechnology
Australia has a robust and expanding healthcare system, and the country is heavily investing in medical research and biotechnology to meet the needs of its aging population and to drive innovation in health outcomes. Professionals with advanced skills in biotechnology, medtech, and pharmaceuticals are crucial to this push.
Key Healthcare & Bio Skills in Demand:
Medical Research & Clinical Trials: Australia is home to a growing number of research institutions that focus on new treatments, vaccines, and therapies. Researchers and professionals with experience in clinical trials, molecular biology, and drug development can contribute to the ongoing advancement of Australia’s healthcare system.
Biotechnology & Genomics: Experts in biotechnology, particularly those working in genomics, gene editing (e.g., CRISPR), and personalized medicine, are highly sought after. Australia is investing heavily in biotech innovation, especially for treatments related to cancer, cardiovascular diseases, and genetic disorders.
MedTech Innovation: Professionals developing the next generation of medical technologies—ranging from diagnostic tools and medical imaging to wearable health devices and robotic surgery systems—are in high demand. If you have experience in health tech commercialization, you could find significant opportunities in Australia.
”
global talent visa australia
“
The most important question a nation has to answer is whether it creates the right narrative and value for its citizens. Most would agree that the best value creation is about growing the economy in a sustainable way, managing a balanced and inclusive society, and making sure there are enough jobs for its citizens, along with well-being, positive social impact, and stability. This is complicated in the present world, though. We live in a world that is changing at a fast pace never seen so far in human history. We now live in a technology- and data-driven 4.0 industrial revolution - 4IR - world which is more interconnected and affected by technological innovation than ever before in history.
This new technology-driven world is one filled with promises but also with major challenges and risks. One of the main disruptions that is now part of our lives is the one that comes with the so-called 4IR, which is affecting our world in ways never seen before. These ways imply a new narrative that is as challenging as it is somehow invisible, faster than ever, and in many ways deeply dangerous. This narrative implies the advance of the digital transformation tools we use every day and the inception of so-called Artificial Intelligence, which is creating an increasing digitisation, datification, and automation of all kinds of services and industries, touching the social, economic, and political fabric with consequences that reach all parts of human and environmental life.
”
Dinis Guarda (4IR AI Blockchain Fintech IoT - Reinventing a Nation)
“
Quantum mechanics describes the world at the micro-scale, where, as the Harvard physicist Greg Kestin puts it, “Nothing is predictable and objects don’t have precise positions until they are observed,” and general relativity describes the world at a cosmic scale, where everything is predictable, “whether or not” observed.6 Neither theory has failed, but both cannot be true, and “No experiment has been able to show which—if either—of the two theories” reigns supreme.
”
Henry A. Kissinger (Genesis: Artificial Intelligence, Hope, and the Human Spirit)
“
At AAIH (Alliance for Action on AI & Humanity), we are committed to fostering the growth and responsible use of AI in Southeast Asia. As a hub for collaboration and innovation, aaih.sg connects stakeholders across industries, governments, and academia to ensure that artificial intelligence becomes a force for positive change in the region.
Our core focus is on ethical AI development, promoting transparency, fairness, and inclusivity in AI systems. We advocate for responsible practices that prioritize human dignity and well-being, ensuring AI technologies align with societal values. By embedding ethical considerations into the foundation of AI solutions, we aim to create trust in these transformative technologies.
At aaih.sg, we also champion the concept of AI for social good, leveraging technology to address critical challenges such as healthcare accessibility, education equity, environmental sustainability, and disaster resilience. Through strategic partnerships and pilot projects, we demonstrate how AI can uplift communities and improve lives, particularly in underserved populations.
Join us in our mission to shape a future where advanced technologies work hand-in-hand with humanity. Explore our initiatives and impact at aaih.sg as we build a sustainable and inclusive AI-driven ecosystem in Southeast Asia.
”
Advancing AI in Southeast Asia: Ethical AI Development for Social Good
“
AI Technology for Good is a transformative force, leveraging artificial intelligence to address global challenges such as climate change, healthcare disparities, and poverty. By using AI to optimize resource management, predict environmental changes, and enhance healthcare systems, we can drive positive societal impact and improve quality of life for communities worldwide. Whether through predictive models for disease outbreaks or optimizing energy consumption, AI technology for good has the potential to solve complex social issues and create a sustainable future.
Artificial Intelligence ethics plays a crucial role in ensuring that AI technologies are developed and deployed responsibly. Ethical considerations like fairness, transparency, and accountability are essential to prevent AI systems from perpetuating biases or violating privacy. AI developers must ensure that algorithms are free from discrimination, are understandable, and operate with clear accountability mechanisms to maintain public trust.
One of the most exciting applications of AI is in smart cities. AI is helping urban centers become more efficient, sustainable, and livable. From managing traffic flow and reducing congestion to optimizing energy use and enhancing public safety, AI is central to the development of smart cities. By integrating AI technologies, cities can offer better services, reduce environmental impact, and create a higher standard of living for their residents.
”
"Empowering the Future: AI Technology for Good, Ethics, and Smart Cities"