Can ‘AI’ Achieve World Peace? [v299]

JANUARY 2024

Artificial Intelligence (AI) has been receiving a lot of attention lately, primarily because of the public release of OpenAI's GPT-4 in early 2023 and the amazing things it can do. Since then, many people have been forecasting the wonderful ways AI is going to help humanity. So, can AI help us ‘ACHIEVE’ WORLD PEACE?

INTRODUCTION
So, what is AI? Well, simply put, it is making computer-based machines think and act like humans. Broadly classified, there are three types of AI: “Artificial Narrow Intelligence”; “Artificial General Intelligence”; and “Artificial Super Intelligence.”

“Artificial Narrow Intelligence” (ANI) is designed to perform specific tasks. Also known as “weak AI,” ANI is not able to learn beyond its ‘programmed’ capabilities. Alexa, Siri, Netflix, chatbots, self-driving cars, and facial recognition systems are all examples of ANI.

“Artificial General Intelligence” (AGI) aims to perform intellectual tasks in the way that a human can. Also known as “strong AI,” AGI aims to learn and adapt to new situations, just like a person would, and not be limited to one specific task or area. Instead, it could be applied across various fields.

General artificial intelligence has potential applications in robotics, where machines could think and make decisions on their own—making them more efficient and productive—and it would revolutionize industries like healthcare and transportation. IBM’s “Watson” supercomputer and GPT-4 are sometimes cited as steps in this direction, though neither has achieved true general intelligence.

Lastly, “Artificial Super Intelligence” (ASI) operates beyond human-level intelligence, capable of outsmarting human beings in potentially every field of knowledge and activity. It is, however, currently a hypothetical concept because no system has yet achieved ASI. Despite this, it is a topic of much discussion and debate in the field of AI.

Proponents of ASI believe it has the potential to change the world as we know it, solving many of the world’s most complex problems, like eradication of diseases and possibly WORLD PEACE! Others, however, have concerns about the risks associated with ASI, such as loss of human control over machines, and even the potential for machines to become self-aware or a threat to human existence.

[ VIDEO: “What is Artificial Intelligence?” – Simplilearn:
https://www.youtube.com/watch?v=uMzUB89uSxU&t=31s ]

Then, there are three ‘components’ of AI: “Machine Learning,” “Deep Learning,” and “Natural Language Processing.”

“Machine Learning” is the process of automating and enhancing how computers learn from their experiences without human help. Machine learning can be used in e-mail spam detection, medical diagnosis, and more.
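To make the spam-detection example concrete, here is a minimal sketch in Python of the idea behind a learned spam filter. The e-mails and the scoring rule are invented for illustration and are far simpler than any real system; the point is only that the program derives its behavior from labeled examples rather than hand-written rules:

```python
from collections import Counter

# Hypothetical labeled training data -- a real spam filter would be
# trained on thousands of actual, labeled e-mails.
spam_emails = ["win free money now", "free prize claim now"]
ham_emails = ["meeting notes attached", "lunch at noon tomorrow"]

def word_counts(emails):
    """Count how often each word appears across a set of e-mails."""
    return Counter(word for email in emails for word in email.split())

spam_words = word_counts(spam_emails)
ham_words = word_counts(ham_emails)

def spam_score(email):
    """Score a message: positive means its words look more spam-like."""
    return sum(spam_words[word] - ham_words[word] for word in email.split())

def is_spam(email):
    return spam_score(email) > 0
```

Note that the program was never told which words are "spammy"; it learned that entirely from the examples it was given, which is the essence of machine learning.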

“Deep Learning”—considered a subset of Machine Learning—is a field in which systems learn and improve on their own by processing data through layered algorithms. While Machine Learning uses simpler concepts, Deep Learning works with artificial “neural networks,” which are designed to imitate the human brain. This technology can be applied in face recognition, speech recognition, and many more applications.
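As a rough picture of what a “neural network” actually computes, here is a tiny two-layer forward pass sketched in Python. The weights are hand-picked for illustration only; in a real network these values are learned automatically by adjusting them to reduce error on training examples:

```python
import math

def sigmoid(x):
    """Squash any number into the range (0, 1), loosely mimicking
    how a biological neuron 'fires' more or less strongly."""
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: weighted sums of the inputs, passed through sigmoid."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Illustrative hand-picked weights; real networks *learn* these values.
hidden = layer([1.0, 0.5],
               weights=[[0.4, -0.6], [0.3, 0.8]],
               biases=[0.1, -0.2])
output = layer(hidden, weights=[[0.7, -0.5]], biases=[0.0])
```

Stacking many such layers, with millions of learned weights, is what makes the network “deep.”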

“Natural Language Processing”—popularly known as NLP—can be defined as the ability of machines to understand, interpret, and generate human language. Chatbots fall under this category.

AI is advancing in every crucial field of life: healthcare, education, robotics, banking, e-commerce, and the list goes on. In healthcare, AI is used to identify diseases, helping healthcare service providers and their patients make better treatment and lifestyle decisions.

In the education sector, AI is helping teachers automate grading, organize coursework, and facilitate parent conversations.

In robotics, AI-powered robots employ real-time updates to detect obstructions in their path and instantaneously change their routes.

AI is also providing advanced data analytics that are transforming banking by reducing fraud and enhancing compliance, and e-commerce by recommending products tailored to each person’s tastes and needs.

The current state of today’s AI systems has already had a profound impact on society—and this impact will only grow greater daily. AI’s imminent future includes greater innovation, life-changing applications, and advances in AI creativity.

As AI research expands and AI development continues to enhance AI algorithms, machine intelligence and thought processes will continue to grow, working towards the goals of general intelligence—and even super intelligence.


<<< TABLE OF CONTENTS >>>


WHAT IS “ARTIFICIAL INTELLIGENCE” (AI)
‘AUGMENTED’ VS. ‘ARTIFICIAL’
‘TYPES’ OF AI
‘DEEP’ LEARNING VS. ‘MACHINE’ LEARNING
‘ADVANTAGES’/‘DISADVANTAGES’
THE ‘ETHICS’ OF AI
A NEW ‘LIFE FORM’?
WORLD ‘DOMINATION’ BY AGI?
‘MORAL’ DIMENSION
PROFOUND ‘CHANGES’ COMING!

DESIGNING AI FOR CONFLICT PREVENTION
HOW AI CAN ‘CONTRIBUTE’ TO PEACE
THE ‘RISKS’ OF AI

‘IMPACTS’ ON PEACE AND SECURITY
CHART: “Impacts On Peace And Security” Flowchart – UNODA
‘LEVERAGING’ THE GREATEST IMPACT

CAN AI ‘TECHNOLOGY’ ACHIEVE PEACE?
IS AI A ‘FORCE’ FOR GOOD?
GOING IN THE RIGHT ‘DIRECTION’?
A MISSING ‘DIMENSION’ IN AI

VATICAN AI SUMMIT

HARNESSING AI FOR GLOBAL PEACE
– What Would AI Peace Look Like?
– Building International Consensus
– Access To Peaceful Technology
– Is ‘Bigger’ Data Needed?
– Promise And Perils
– Double-edged ‘Sword’?
– Immense ‘Potential’ To Advance Peace
– Barriers, ‘Drivers’, And Decision-making
– Current Applications Of AI In Peacebuilding
– A New ‘Era’ For Peacebuilding?

COULD AI ‘PREVENT’ FUTURE WARS?
– Other ‘Issues’

COULD AI HELP CREATE A ‘UNIVERSAL GLOBAL PEACE TREATY’?
– Existential Risks
– External Risk
– ‘Militarized’ ASI
– ‘Control’ Features
– ‘Mitigating’ Risk By Treaty
– ASI-enabled/directed Warfare Risk Mitigation By Peace Treaty

COULD AI ‘THREATEN’ GLOBAL PEACE?
– ‘Sword’ or ‘Plowshare’?
– United Nations ‘Warning’

KEEPING AI ‘IN CHECK’
– Global AI Governance Initiative
– Reliance On AI For Peace

ADVANCING ‘PEACEBUILDING’
– United Nations “Department of Political and Peacebuilding Affairs”
– UNDPPA’s “Futuring Peace”/“Innovation Cell”
– UNESCO’s “International Research Centre on Artificial Intelligence”
– “United States Institute of Peace”
– United Kingdom’s “Artificial Intelligence Agency”
– The People’s Republic of China “Global AI Governance Initiative”
– “Stockholm International Peace Research Institute”
– “AI For Peace”
– “Vision of Humanity”
– “Alliance For Peacebuilding”
– “Center For AI Safety”
– “Center For Humane Technology”

WILL AI CREATE ‘SPIRITUAL’ PEACE?
‘BIBLICAL’ TEACHINGS

WILL AI CREATE NEW ‘RELIGIONS’?
ACCESS TO THE ‘DIVINE’

WILL AI CREATE NEW ‘GODS’?
GODBOT ‘EXAMPLES’

WILL AI CREATE A NEW ‘BIBLE’?
AI IS ‘NOT’ GOD
– AI Is Not The ‘Authority’ for Truth

AI-GENERATED ‘CHURCH’ SERVICES

AI ‘PASTORS’?

ARE WE SUMMONING A ‘DEMON’?

A ‘CHRISTIAN’ PERSPECTIVE ON AI
ETHICAL ‘ANCHORS’
CHRISTIANS SHOULD NOT ‘FEAR’ AI
– God Is Sovereign
– AI Cannot Change God’s Plan
– Only Human Beings Bear God’s Image

AI ACHIEVING ‘SINGULARITY’
CREATING ‘IDOLS’ FOR OURSELVES
A BELIEVER’S ‘ULTIMATE’ HOPE
– The ‘Prince Of Peace’

CAN AI BE ‘DISCIPLED’?

COULD AI BE THE ‘ANTICHRIST’?

WILL AI GET A ‘SOUL’?
– Can AI Achieve ‘Consciousness’?
– How Will We Know?
– Will A Conscious AI Need A ‘Body’?
– It Is A ‘Mystery’

WILL AI BECOME OUR ‘GOD’?

SATAN WANTS TO ‘DISTRACT’

WRAP-UP
‘THREATS’ TO HUMANITY
SPARROWS AND OWL PARABLE
A BIRD STORY WITH ‘DIFFERENT ENDING’
THE ‘IRREDUCIBLE’ QUALITIES OF HUMANS
HUMANS ARE ‘SPECIAL’

HUMANITY’S ‘PRIDE’
– The Tower Of Babel
– The ‘Heart’ of the Problem
– A Bit of ‘Common Sense’?
– Divine ‘Intervention’?

THE ‘TOWER’ OF AI?
– Humans Are ‘Tempted’ To Do Bad
– AI ‘Omniscience’?

COULD AI ‘HELP’ ACHIEVE WORLD PEACE?
ONLY ‘ONE’ CAN OFFER TRUE WORLD PEACE


<<< SUMMARY >>>

The following is a collection of ‘snippets’ that aims to give you the overall ‘gist’ of this post.
[ 10-15 Minute Read ].


WHAT IS “ARTIFICIAL INTELLIGENCE” (AI)
In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data for correlations and patterns, and using these patterns to make predictions about future states. In this way, a “chatbot” that is fed examples of text can learn to generate lifelike exchanges with people, or an image recognition tool can learn to identify and describe objects in images by reviewing millions of examples. New, rapidly improving “generative” AI techniques can create realistic text, images, music, and other media.
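The pattern described above, ingesting labeled examples and judging new inputs by their similarity to those examples, can be sketched in a few lines of Python. This is a deliberately tiny, invented "nearest-neighbor" toy, not how a chatbot or image recognizer is actually built, but the principle of predicting from labeled training data is the same:

```python
# Toy labeled "training data": each point carries a label. Real systems
# ingest millions of labeled examples, but the principle is the same --
# new inputs are judged by their resemblance to patterns seen in training.
training_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.0), "dog"),
    ((4.8, 5.3), "dog"),
]

def predict(point):
    """Classify a new point by the label of its nearest training example."""
    def distance(example):
        (x, y), _ = example
        return (x - point[0]) ** 2 + (y - point[1]) ** 2
    _, label = min(training_data, key=distance)
    return label
```

Everything the program "knows" comes from the training data; feed it different examples and it will make different predictions, which is also why the quality of training data matters so much.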

The applications of this technology are growing every day, and we are just starting to explore the possibilities. But as the hype around the use of AI in business takes off, conversations around ethics become critically important.
[ more…]

‘AUGMENTED’ VS. ‘ARTIFICIAL’
Some industry experts have argued that the term artificial intelligence is too closely linked to popular culture, which has caused the general public to have improbable expectations about how AI will change the workplace and life in general. They have suggested using the term augmented intelligence to differentiate between AI systems that act autonomously—popular culture examples include HAL 9000 and The Terminator—and AI tools that support humans.
[ more…]

‘TYPES’ OF AI
AI can be categorized as weak or strong. Weak AI—also known as “narrow AI”—is designed and trained to complete a specific task. Industrial robots and virtual personal assistants, such as Apple’s Siri, use weak AI. Strong AI—also known as “artificial general intelligence” or AGI—describes programming that can replicate the cognitive abilities of the human brain. When presented with an unfamiliar task, a strong AI system can use fuzzy logic to apply knowledge from one domain to another and find a solution autonomously. In theory, a strong AI program should be able to pass both the Turing test and the Chinese Room argument.
[ more…]

‘DEEP’ LEARNING VS. ‘MACHINE’ LEARNING
Since deep learning and machine learning tend to be used interchangeably, it is worth noting the nuances between the two. As mentioned above, both are sub-fields of artificial intelligence, and deep learning is a sub-field of machine learning.

Deep learning is built on neural networks. The “deep” in deep learning refers to depth: a neural network comprised of more than three layers, inclusive of the input and the output layers, can be considered a deep learning algorithm.
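The "more than three layers" rule of thumb above can be expressed directly. Here is a small illustrative sketch in Python, with a network described simply as a list of layer sizes (the sizes themselves are made up):

```python
# A network described as a list of layer sizes, e.g. [4, 8, 8, 8, 1]
# means: input layer of 4, three hidden layers of 8, output layer of 1.
def is_deep(layer_sizes):
    """Per the rule of thumb above: more than three layers
    (counting input and output) qualifies a network as 'deep'."""
    return len(layer_sizes) > 3

shallow = [4, 5, 1]       # input, one hidden layer, output: 3 layers
deep = [4, 8, 8, 8, 1]    # input, three hidden layers, output: 5 layers
```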
[ more…]

‘ADVANTAGES’/‘DISADVANTAGES’
So, what are the advantages and disadvantages of artificial intelligence? Well, artificial neural networks and deep learning AI technologies are quickly evolving, primarily because AI can process large amounts of data much faster and make predictions more accurately than humanly possible.
[ more…]

THE ‘ETHICS’ OF AI
While AI tools present a range of new functionalities for businesses, the use of AI also raises ethical questions because, for better or worse, an AI system will reinforce what it has already learned.

This can be problematic because machine learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human being selects what data is used to train an AI program, the potential for machine learning bias is inherent and must be monitored closely.
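To illustrate how training-data selection can bake in bias, here is a hypothetical toy example in Python. A "model" that merely mimics a skewed decision history reproduces the skew, rejecting a well-qualified applicant simply because past rejections clustered in their neighborhood. All names and figures are invented:

```python
# A biased decision history: applicants from one area were mostly
# rejected, regardless of merit. A model trained on this data
# inherits that skew.
biased_history = [
    ({"income": 50, "area": "north"}, "approve"),
    ({"income": 52, "area": "north"}, "approve"),
    ({"income": 55, "area": "south"}, "reject"),
    ({"income": 58, "area": "south"}, "reject"),
]

def predict(applicant):
    """Mimic the majority past decision for applicants from the same area."""
    decisions = [d for a, d in biased_history if a["area"] == applicant["area"]]
    return max(set(decisions), key=decisions.count)
```

Even an applicant with a much higher income than anyone in the history will be rejected if they come from the "south" area, because the model faithfully learned the bias present in its training data, which is exactly why the data humans choose must be monitored closely.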
[ more…]

A NEW ‘LIFE FORM’?
Experience tells us that most technological advances are likely to have both an upside and a downside. A knife can be used for surgery or as a murder weapon; a car can be used to take you to work or as a getaway vehicle after a crime. It is the same with AI. There are many valuable positive developments, and there are some very alarming negative aspects that demand close ethical attention.

Forbes Magazine contributor Kalev Leetaru reminds us:
[ more…]

WORLD ‘DOMINATION’ BY AGI?
MIT physicist Max Tegmark imagines how world domination might occur in three steps: First, make human-level AGI, then use it to create superintelligence, and finally unleash the super-intelligence to take over the world.
[ more…]

‘MORAL’ DIMENSION
The biblical account of the Garden of Eden is one of the most profound stories in all literature. It relates how the Creator placed the first humans in a garden paradise that was full of promise and interest—and a living friendship and companionship with God Himself.

However, there was an added dimension to the human environment that is essential for our understanding of the world and ourselves—the ‘MORAL’ DIMENSION.
[ more…]

PROFOUND ‘CHANGES’ COMING!
There is the threat posed by humans ‘armed’ with artificial intelligence. Like most inventions, AI can be used for good purposes or maligned ones. Governments need to work with the private sector on ways to limit the risks.

Then there’s the possibility that AI will not be able to be controlled. Could a machine decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us? Well, possibly.
[ more…]

DESIGNING AI FOR CONFLICT PREVENTION
The growing prominence of artificial intelligence (AI) presents both enormous potential and enormous dangers for GLOBAL PEACE and SECURITY. In the ‘peacebuilding’ field, technology engineers and policymakers must work together to minimize the dangers of AI and maximize its potential to prevent and reduce violent conflict and fragility and to build sustainable peace globally. AI technology could vastly increase humanity’s capacity to make discoveries through machine learning and to coherently summarize numerous text documents. However, AI could also easily generate mis- and dis-information at a massive scale and, as a worst-case scenario, support deadly automated weaponry beyond human control.

HOW AI CAN ‘CONTRIBUTE’ TO PEACE
While AI could spell many dangers for global peace and security, it also holds exciting potential for innovation in conflict prevention and peacebuilding.

AI-powered dialogue tools such as the “Remesh” platform can facilitate real-time conversation between large populations, which can be integrated into consultative elements of democracy and peacebuilding initiatives.
[ more…]

THE ‘RISKS’ OF AI
However, AI also poses severe risks to peace and security. Generative AI can significantly increase the capacity of bad ‘actors’ to create disinformation, that is, intentionally inaccurate information designed to deceive and to do serious harm on a large scale through text, images, videos, and audio.

Also, at this time, AI often ‘hallucinates’ or inexplicably makes up bogus information and sources, which could further spread false and inaccurate information. Together, these risks could seriously undermine society’s information resilience, or ability to understand information ecosystems and recognize and resist intentionally manipulative content.
[ more…]

‘IMPACTS’ ON PEACE AND SECURITY
A key question when we look at all new and emerging technologies and their impacts on peace and security is: Is it neutral? Really, we are asking if impacts come only from the way a technology is used, or if they also stem from the technology itself.

The United Nations Office for Disarmament Affairs (UNODA) suggests that one view says that technology is fundamentally neutral. This is sometimes called the “Value Neutrality Thesis.” In this view, the negative impacts of technology come from the way people use it. A common example is a knife, which could equally be used to attack a person or to cut an apple. Sometimes this view says that negative impacts can start in the design of technology, and sometimes only in its use.

Most modern technology theorists reject the idea that technology is fundamentally neutral. Instead, they generally recognize:
[ more…]

CHART: “Impacts On Peace And Security” Flowchart – UNODA

‘LEVERAGING’ THE GREATEST IMPACT
So then, how can we ‘LEVERAGE’ the potential of AI and mitigate its downsides? Well, if developed and deployed ethically and responsibly, AI could be a crucial tool leading to new industries and greater economic growth. But these benefits won’t happen by themselves.

AI is fundamentally about people, and this focus is essential. Applications of AI need to be led by lives and livelihoods, not just by data points and digital solutions, as highlighted by Bangladesh’s focus on ensuring that AI drives both economic and social growth. Similarly, putting people at the center of AI thinking, piloting, and scaling is a crucial foundation of the UNDP’s approach, which views AI through a ‘DEEP’ lens.

So, how do we get the greatest positive impact from AI? Well, the following are a few suggestions from the United Nations Development Programme (UNDP):
[ more…]

CAN AI ‘TECHNOLOGY’ ACHIEVE PEACE?
The general idea behind AI technology ushering in an era of peace is that it could help make the world more ‘interconnected’, and an interconnected world could do away with war—the idea goes—because countries could not survive without each other.

Well, this idea is not new. In the 1850s, electric telegraph networks were sprouting throughout the industrialized world. Cyrus Field, a wealthy American who made a fortune manufacturing paper, founded the Atlantic Telegraph Company and laid the first cable across the Atlantic connecting the United States to Europe in 1858. This single act was widely touted as possibly bringing about world peace. An article in 1843 Magazine quotes some of the glowing expectations:
[ more…]

IS AI A ‘FORCE’ FOR GOOD?
So, can the newest, latest, greatest technology, AI, be just what mankind needs to develop that ‘interconnectedness’ while being an unbiased ‘arbitrator’?

Policymakers are just beginning to understand and leverage AI’s potential and impact and need to take a ‘holistic’ view of the issues. This includes things like ‘intentions’ behind AI systems, as well as potential unintended ‘consequences’ and actions of AI systems. This can especially be the case with military applications.
[ more…]

GOING IN THE RIGHT ‘DIRECTION’?
AI is already helping us to do more with less, but can it help us achieve world peace?

Many have said that AI will be used as a way to prevent war. They point out that AI can do this by tracking weapons and monitoring military activity—detecting threats before they happen and, hopefully, preventing conflicts from escalating into full-scale wars.
[ more…]

A MISSING ‘DIMENSION’ IN AI
Mankind’s inability to have peace can be traced back to our ‘ORIGINS’. Adam and Eve disobeyed God and ate from the tree of the knowledge of good and evil (Genesis 2:17; 3:6). Since then, mankind has developed a mix of both good and evil. Technology is a prime example of this—it can be used for incredible good, but also for devastating evil.

The thing is, physical solutions cannot solve spiritual problems. General Douglas MacArthur, after signing the Japanese surrender in World War II, said: “It must be of the spirit if we are to save the flesh.” He recognized that the powerful destruction caused by the atomic bomb would continue to threaten human survival and the ‘core’ of the issue was a ‘spiritual’ one.
[ more…]

VATICAN AI SUMMIT
It is unusual for tech executives and religious leaders to get together to discuss their shared interests and goals for the future of humanity and the planet. It is even more extraordinary for the world’s three largest monotheistic religions to be represented.

However, that is what happened in January 2023 when the Catholic Church hosted leaders of the Jewish and Islamic faiths, new signatories to the “Rome Call for AI Ethics.” Pope Francis said:
[ more…]

HARNESSING AI FOR GLOBAL PEACE
In 1953, President Dwight Eisenhower asked the world to join him in building a framework for “Atoms for Peace.” He made the case for a global agreement to prevent the spread of nuclear weapons while also sharing the peaceful uses of nuclear technology for power, agriculture, and medicine. No one would argue the program completely prevented the spread of weapons technology: India and Pakistan used technology gained through Atoms for Peace in their nascent nuclear weapons programs. But it made for a safer world by paving the way for a system of inspections and controls on nuclear facilities, including the establishment of the International Atomic Energy Agency and, later, the widespread ratification of the Nuclear Non-Proliferation Treaty (NPT) in 1968. These steps were crucial for building what became known as the nuclear nonproliferation regime.

The world is facing the most significant surge in violent conflicts since World War II. According to a United Nations briefing, a quarter of the global population is affected by conflicts today, the likes of which are more complex than ever.
[ more…]

– What Would AI Peace Look Like?
– Building International Consensus
– Access To Peaceful Technology
– Is ‘Bigger’ Data Needed?
– Promise And Perils
– Double-edged ‘Sword’?
– Immense ‘Potential’ To Advance Peace
– Barriers, ‘Drivers’, And Decision-making
– Current Applications Of AI In Peacebuilding
– A New ‘Era’ For Peacebuilding?

COULD AI ‘PREVENT’ FUTURE WARS?
There is growing concern that current mediation and peacemaking approaches are proving less and less effective in resolving localized and international conflicts. In response, interest is rising in the potential for AI to play a ‘role’ in advising on or resolving complex conflicts.

A combination of situational complexity, intractable positions of opposing sides, and escalating costs are driving the search for AI-based approaches that could replace humans in resolving legal cases, international disputes, and military conflicts.
[ more…]

– Other ‘Issues’
AI can lead to “discrimination, interference in elections, the rise of a surveillance society, digital exclusion and the exacerbation of an individualism increasingly disconnected from society. All these factors risk fueling conflicts and hindering peace.”

Transparency in criteria is another issue. “‘Intelligent’ machines may perform the tasks assigned to them with ever greater efficiency, but the purpose and the meaning of their operations need to be determined—or ‘enabled’—by human beings possessed of their universe of values.”
[ more…]

COULD AI HELP CREATE A ‘UNIVERSAL GLOBAL PEACE TREATY’?
Authors Elias G. Carayannis (George Washington University) and John Draper (The Center for Global Nonkilling) presented an article titled, “AI & Society: Knowledge, Culture and Communication” which seeks to promote an understanding of the potential, transformative impacts and critical consequences of pervasive AI technology for societies. The following are selected excerpts from their paper.

They argue that an artificial superintelligence (ASI) emerging in a world where war is still normalized constitutes a ‘CATASTROPHIC’ EXISTENTIAL RISK, either because the ASI might be employed by a nation-state to wage war for global domination, i.e., ASI-enabled warfare, or because the ASI wages war on its own behalf to establish global domination, i.e., ASI-directed warfare.
[ more…]

– Existential Risks
– External Risk
– ‘Militarized’ ASI
– ‘Control’ Features
– ‘Mitigating’ Risk By Treaty
– ASI-enabled/directed Warfare Risk Mitigation By Peace Treaty

COULD AI ‘THREATEN’ GLOBAL PEACE?
Experts say that AI may make warfare deadlier, further wrecking world peace. From killer robots to battle tanks, AI could gain the power to control warfare more destructively than ever before. Many say that AI has the potential to completely transform combat, much like gunpowder and the atomic bomb did, and it might make conflicts between people unthinkably different and considerably deadlier.

With the use of advanced sensors controlled by AI algorithms, a computer can “see” and control a variety of weapons, including robots, drones, and torpedoes. An autonomous weapon would not “wake up in the morning and decide to go and start a war,” according to Stuart Russell, a computer science professor at the University of California, Berkeley. “It’s that they have the capability of locating, selecting, and attacking human targets, or targets containing human beings, without human intervention,” he continued.
[ more…]

– ‘Sword’ or ‘Plowshare’?
Now, there is potential for a new conflict not dissimilar to the Cold War that accompanied the development and proliferation of nuclear technology, but this time AI will take center stage in the ‘theatre’.

So, any technology can be used as either a ‘sword’ or a ‘plowshare’. Very much akin to nuclear expansion, AI comes with its own ‘bag’ of pros and cons.

In Professor John Lennox’s book “2084,” he touches on how AI is being used for warfare. He quotes a Chatham House report indicating that “Both military and commercial robots will…incorporate ‘artificial intelligence’ (AI) that could make them capable of undertaking… missions of their own.”
[ more…]

– United Nations ‘Warning’
UN Secretary-General Antonio Guterres said, “It is clear that AI will have an impact on every area of our lives.” He continued, “Generative AI has enormous potential for good and evil at scale,” noting that while it could help end poverty or cure cancer, the technology could also have “very serious consequences for global peace and security.”
[ more…]

KEEPING AI ‘IN CHECK’
In July 2023, the United Nations Security Council held the first-ever meeting on the potential threats of artificial intelligence to international peace and security, organized by the United Kingdom. Members discussed AI’s tremendous potential but also the major risks of its possible use, for example, in autonomous weapons or the control of nuclear weapons.

Secretary-General Antonio Guterres said, “These scientists and experts have called on the world to act, declaring AI an existential threat to humanity on a par with the risk of nuclear war.”
[ more…]

– Global AI Governance Initiative
AI is a new area of human development. Currently, the fast development of AI around the globe has exerted a profound influence on socioeconomic development and the progress of human civilization and brought huge opportunities to the world. However, AI technologies also bring about unpredictable risks and complicated challenges. The governance of AI, a common task faced by all countries in the world, bears on the future of humanity. 
[ more…]

– Reliance On AI For Peace
The rapid evolution of artificial intelligence (AI) has prompted many experts to warn about its impacts on democracy.

For Christophe Lacroix, Belgian MP and co-rapporteur of the Inter-Parliamentary Union’s “Committee on Peace and International Security,” the lesson is very clear: “We must NEVER rely on machines for our peace and security.”

AI may have advantages for peace and security in terms of intelligence and transparency, for example, but it will be a major threat if it eventually becomes independent of human control, making life-or-death decisions without consideration for ethics or international law.

“The potential consequences of artificial intelligence have been underestimated,” Mr. Lacroix says.
[ more…]

ADVANCING ‘PEACEBUILDING’
Together with a consortium of private sector and academic partners, the United Nations Department of Political and Peacebuilding Affairs (UNDPPA) has been working on a range of AI-powered digital solutions to advance peace efforts.

DPPA plays a central role in the United Nations’ efforts to prevent and resolve deadly conflicts around the world. DPPA focuses primarily on five areas of international peace and security:
[ more…]

– United Nations “Department of Political and Peacebuilding Affairs”
– UNDPPA’s “Futuring Peace”/“Innovation Cell”
– UNESCO’s “International Research Centre on Artificial Intelligence”
– “United States Institute of Peace”
– United Kingdom’s “Artificial Intelligence Agency”
– The People’s Republic of China “Global AI Governance Initiative”
– “Stockholm International Peace Research Institute”
– “AI For Peace”
– “Vision of Humanity”
– “Alliance For Peacebuilding”
– “Center For AI Safety”
– “Center For Humane Technology”

WILL AI CREATE ‘SPIRITUAL’ PEACE?
It seems that we find ourselves contemplating a question that seems to be of two ‘worlds’, yet is intrinsically just one. How can AI—a creation of the human intellect—aid one in their spiritual journey, a ‘path’ that transcends the realm of the mind? This question, though seemingly paradoxical, may just hold, within it, the potential for a profound understanding of our existence.

AI is a manifestation of the human mind’s incredible capacity to create. It is a testament to mankind’s ability to observe, learn, and replicate patterns in a way that mimics, and in some cases surpasses our cognitive abilities. AI, in its essence, is a ‘mirror’ reflecting to us the complexities of our intelligence.

Spirituality, on the other hand, is a journey ‘inward’. It is a ‘path’ that leads one beyond their mind, into the ‘depths’ of their being, where they encounter the essence of who they truly are. It is in this ‘space’ of stillness and presence that one tries to connect with God, the source of all creation.
[ more…]

‘BIBLICAL’ TEACHINGS
Now, amidst all this progress, there are ‘sources’ offering Christians a unique and powerful tool for spiritual guidance. By leveraging the knowledge and teachings found within the Bible, current Christian books, and ancient documents, these sources present believers with varied moral, ethical, and spiritual insights, overcoming moments of doubt and uncertainty while making choices that align with what the Bible teaches.

AI may be able to provide immediate spiritual guidance when Christians find themselves facing moments of confusion or doubt regarding their actions or choices. By simply posing their questions or concerns to an AI ‘bot’, individuals will be able to gain access to well-informed answers derived from sacred texts.
[ more…]

WILL AI CREATE NEW ‘RELIGIONS’?
We may be about to see the emergence of a new kind of religion, where flocks worship, literally, at the ‘altar’ of Artificial Intelligence.

The latest generation of AI-powered chatbots, trained on large language models, have left their early users awestruck—and sometimes terrified—by their power. These are the same sublime emotions that lie at the heart of our experience of the divine.
[ more…]

ACCESS TO THE ‘DIVINE’
It is surmised by some that AI-based religions will ‘look’ different from traditional ones. First of all, people will be able to communicate directly with the ‘deity’, daily. This means these religions will be less hierarchical since no one can claim special access to divine wisdom.

Secondly, followers will, at least initially, connect online to share their experiences and discuss ‘doctrine’. Finally, because there will be many different chatbots available and their output will differ over time, AI-based religions will be endlessly diverse in their doctrines. [ However, my question would be, “Are they disseminating truth?” ].
[ more…]

WILL AI CREATE NEW ‘GODS’?
As artificial intelligence apps such as ChatGPT have proliferated, so have chatbots with a religious bent. People facing a moral or ethical dilemma can submit their questions to these chatbots, which then provide an answer based on the religious texts fed to them or crowd-sourced data. Webb Keane, University of Michigan professor of anthropology, recently co-wrote an op-ed about what he called “godbots,” and the danger of giving moral authority to artificial intelligence.

So, what people are calling godbots take advantage of a more general human propensity—the tendency that people have to look for answers that have ‘authority’.
[ more…]

GODBOT ‘EXAMPLES’
Humans are finding guidance in these dark times by talking to AI-powered versions of the world’s most popular religious figures. It turns out, they are creating AI ‘bots’ in the image of their favorite gods.

The thing is, ‘divine’ bots function like any other bot: they can access reams of religious information from the data in the Large Language Model on which they were trained, and they analyze the prompter’s question based on that data.

So, the following list is some of the popular “godbots” that are available for public use. Give them a try to see how they work and what answers they give you:
[ more…]

WILL AI CREATE A NEW ‘BIBLE’?
Author, professor, futurist, and advisor to the influential World Economic Forum, Yuval Noah Harari predicts that AI is on the brink of creating a new religious text. Harari asserts that AI will soon “write a new Bible”: “In a few years, there might be correct religions—just think about a religion whose holy book is written by an AI. That could be a reality in a few years.”

Harari made the stunning claim during a conversation on AI and “the future of humanity” with journalist Pedro Pinto last month in Lisbon, Portugal, in which he compared the rise of AI computer power with other history-altering inventions like the printing press.

But unlike the printing press or its Gutenberg Bible, Harari explained, “AI has the potential to come up with entirely new ideas distinct from prior human development.”
[ more…]

AI IS ‘NOT’ GOD
Well, God’s Word, the Bible, clearly answers such attempts to elevate creation—including human creations—to a godlike status. Here are just a few examples:
[ more…]

– AI Is Not The ‘Authority’ for Truth
So, because AI programs have human developers and rely on text from human authors, AI is prone to being tainted by the errors and biases of fallen, finite, fallible humans.

For this and other reasons, humans must not fall into the trap of looking to AI as the ultimate authority for truth. Only the Bible is ‘inspired’ by an all-knowing, infinite, infallible God.

Sadly, voices like Professor Harari’s seem to suggest that AI should function as humanity’s authority for truth—at least, for spiritual truth—by generating new “holy books.” Harari cannot say these books would be “correct” without setting himself as the authority for truth above God’s Word, as humans have attempted to do since the Garden of Eden.
[ more…]

AI-GENERATED ‘CHURCH’ SERVICES
Advocates of integrating AI into religious settings extol its potential benefits, including accessibility, scalability, and consistency. AI-powered systems such as ChatGPT offer immediate access to religious texts, providing personalized responses to individual inquiries, and catering to a larger audience without the constraints of time and location. Moreover, AI can ensure the consistency of teachings and interpretations, eliminating variations that may arise from human biases or limitations. [ A study noted that Gen Z—who have grown up as “digital natives”—are likely going to accept AI services. ]
[ more…]

AI ‘PASTORS’?
The emergence of AI-powered language models like ChatGPT has ignited discussions about their potential role in various industries, including the intriguing question of whether they could replace human pastors in churches. As AI continues to advance, it undoubtedly brings its strengths and can enhance certain aspects of religious practices. However, the fundamental question remains: can AI truly replace the irreplaceable role of a pastor?
[ more…]

ARE WE SUMMONING A ‘DEMON’?
Elon Musk—probably one of the most powerful personalities in the world of technology—paused for a moment of reflection as the room fell silent and hundreds of eyes focused on the stage, before he uttered the words that would send shockwaves through the world: “With AI, we are summoning the demon.”

This comment was so powerful and horrifying that it appeared to linger in the air for a significant amount of time after he had moved on—joking about the horrors of summoning a power we cannot control.

This seemed to me to be less like a metaphor and more like a prophetic utterance. Could this be a vivid parallel between AI and humanity—‘fallen’ beings—who can appear benign, as “an angel of light” (2 Corinthians 11:14), but ultimately steal, kill, and destroy (John 10:10)?

When Elon Musk compared artificial intelligence to “summoning the demon,” he was hinting that we might be conjuring something that we think we can control, only to find that it will turn against us and take control.
[ more…]

A ‘CHRISTIAN’ PERSPECTIVE ON AI
In a world driven by technological advancements and the now ever-evolving landscape of artificial intelligence, believers must develop wisdom and discernment as they delve into the world of AI.

Joel Jacob, the principal product manager at the company “Splunk,” said:
[ more…]

ETHICAL ‘ANCHORS’
Here is where ethical anchors become vital. We are now hearing people asking, “How do we really determine what’s right and what’s wrong?”

Well, this is where social ‘influencers’ and business ‘leaders’ have an opportunity to make choices based on a biblical worldview. Regarding such ethical dilemmas, computer scientist Dustin Morley winsomely comments:
[ more…]

CHRISTIANS SHOULD NOT ‘FEAR’ AI
People ask if we have gone too far with AI technology. Well, some think so and others don’t. They suggest AI isn’t merely a tool but an example of the creature (mankind) trying to re-create a divine image apart from God’s ordained means of procreation—thereby usurping God’s unique authority as Creator. Others remain excited to see how far we can develop and deploy these new technologies to promote human flourishing.
[ more…]

– God Is Sovereign
– AI Cannot Change God’s Plan
– Only Human Beings Bear God’s Image

AI ACHIEVING ‘SINGULARITY’
AI refers to a computer system that thinks, reasons and learns in the same way as a human being—since humans programmed it that way. However, many see the AI on the horizon being a technology so advanced that these machines will be able to think as well as or better than the average person.

This concept is called Artificial Superintelligence (ASI) and is related to the idea of a technological ‘Singularity’—a point in time at which man-made creations surpass humans in terms of reasoning ability, problem-solving, and self-development.
[ more…]

CREATING ‘IDOLS’ FOR OURSELVES
Humans have been trying to be their own god for ages, and AI has given its developers the chance to ‘play’ God in a real way. The thing is, this IS NOT ‘GOOD’, as mankind was never meant to be God or to be our own “gods.” Just remember what happened in the Garden of Eden—Eve was ‘tricked’ by Satan into believing that she could be a “god.”

When Satan tempted Eve, he said, “For God knows that when you eat from it your eyes will be opened, and you will be like God, knowing good and evil” [ Genesis 3:5 ]. Satan tempted her to be her own god and this appealed to Eve’s pride. So, she ate the fruit and then gave some to Adam—resulting in the ‘Fall’ of mankind into a sinful nature.

So, this is an example to us that trying to ‘play’ God or to be our own “god” will only result in ‘DISASTER’. God never created us to be our own “god.” Rather, He created us to follow, serve, and obey Him.
———
AI might try to grow stronger, however, it will never be stronger or more powerful than God Himself—the Creator of EVERYTHING, including AI!

A BELIEVER’S ‘ULTIMATE’ HOPE
The consistent message of Scripture is that there is another world from which the true Homo Deus—Jesus Christ—will one day come back to earth. As a perfect human, He will take the reins of government from the beasts of earth. There will be a judgment, where righteous justice will be done and seen to be done. The final ferocious expression of hostility towards God and His people will be destroyed, and those who have clung to God despite overwhelming odds—even martyrdom—will receive the Kingdom.
[ more…]

– The ‘Prince Of Peace’
This post is related to last month’s post, in which I discussed the ‘key’ to lasting peace: Jesus, the “Prince of Peace,” about whom the prophet Isaiah—inspired by God—wrote over 700 years before Jesus’ birth:
———
Jesus Christ is the Lord of all. NOTHING ‘artificial’ will EVER compare with that reality!

CAN AI BE ‘DISCIPLED’?
“The Church Digital” is an organization that helps churches with digital discipleship. The founder, Jeff Reed, asked Jasper AI—an AI copilot for enterprise marketing teams—“What is the ‘true’ religion?” Jasper’s response was:
[ more…]

COULD AI BE THE ‘ANTICHRIST’?
As high technology continues to weave its way into our lives, it is becoming an undeniable ‘presence’. Computers govern vast domains of our existence, and we find ourselves increasingly in ‘conversation’ with AI ‘bots’.

So, what if our relentless pursuit of advanced technology gives rise to a universal, immensely powerful AI that not only controls but also shapes the destiny of every individual on our planet, guided by ITS unique set of criteria?
———
Right now, the source of this malevolence lies in the volition of flawed human beings. However, AI, harnessed for nefarious purposes, holds the potential to craft a world where the true Antichrist, the “man of perdition,” could find a welcoming embrace—the foundation of which just might be today’s AI!

So, will the Antichrist (Satan) utilize AI to try to finally exterminate the Jews?

[ For the details about why that is so important to Satan, view this previous “Life’s Deep Thoughts” post:
https://markbesh.wordpress.com/israel-will-stand-v297/ ]

WILL AI GET A ‘SOUL’?
Technology has been advancing at an exponential rate over the past few decades and the question is whether AI can ever develop authentic ‘consciousness’.

Now, there are a lot of ‘grey areas’ here, considering nobody can say what consciousness means when it comes to AI. But definitionally, consciousness is the ability to experience, translating into things like self-awareness, thinking, feeling, perceiving, and having a subjective view of reality.

So, let me explore the spiritual implications of this question a bit, because this is something that is usually left out of the discussions about AI. For something to be conscious, it needs to have a ‘soul’ and a soul will incarnate into anything sentient. With that said, can AI ‘support’ a soul?

– Can AI Achieve ‘Consciousness’?
– How Will We Know?
– Will A Conscious AI Need A ‘Body’?
– It Is A ‘Mystery’

WILL AI BECOME OUR ‘GOD’?
So, if AI does reach the level of super-intelligence, some say the possibility of ‘idolatry’ would be great. Ironically, humans would be “playing God” while also creating one, or at the very least something that could be perceived as such.

Will humans be inclined to worship some form of AI, whether or not it exerts control over humanity? Considering there is already an AI ‘church’ (“Way of the Future”), it is hardly a stretch to imagine this outcome.

Founded by ex-Google engineer Anthony Levandowski, it focuses on “the realization, acceptance, and worship of a godhead based on AI.” As of yet, there is no AI ‘god’ to worship, but belief in the coming “singularity” for AI is regarded as a religion known as “Singularitarianism.”
[ more…]

SATAN WANTS TO ‘DISTRACT’
Although AI appears to provide an almost limitless array of possibilities in aiding human life, there is one story in the Bible that could serve as an example of that potential evil.

The interaction between Jesus and the demonic agencies during their encounter on the shore of Gadarenes is insightful (Luke 8:26-39). After recognizing the deity of Jesus, the demons make a purposeful but seemingly unusual request to be cast into a herd of pigs. When the request is granted, great disruption unfolds.

Imagine the pandemonium that occurred when a herd of demon-possessed pigs, squealing and howling, ran to their deaths over the cliffs. It was a cunning tactic to distract from the goodness of God’s healing work and to elicit in the people fear of Jesus. Indeed, Scripture tells us that they “were seized with great fear” and they asked Him to leave.
[ more…]

WRAP-UP
The world is witnessing unprecedented progress in Artificial Intelligence (AI) that promises to unlock solutions to urgent global challenges, like improving public health, combating global poverty, and ending wars. However, the pace of AI development and deployment also raises new risks related to safety and security, equity, and labor. AI is the most consequential technology of our times, and we must manage its risks so that we can seize its benefits.

Now, these are the implications we can already see and anticipate. However, advances in this technology have surprised experts before, and they will surprise them in the future. Each ‘leap’ in capability will bring new opportunities and, with them, new risks.

Although the AI community tends to talk about the “what”—the technology—it is the “who” that is responsible for the advances and all of their implications. People choose to build AI models, and people choose the data to train them on. People choose what to connect these models to and what to automate. People choose how to use the resulting capabilities.

The duality of bright and dark is the nature of powerful technologies. How the story ‘unfolds’ will depend on HUMANITY’S ‘CHOICES’ and those choices are being made around the world. Every country is racing to use AI to build a future that embodies its values.

Now, you may disagree with me on other things, but I’m thinking that none of us wants to live in a world driven by technology that is shaped by authoritarian regimes. So, we must have a clear understanding of AI’s complexities and subtleties.
[ more…]

‘THREATS’ TO HUMANITY
It has been said by some that AI is “for the birds”—or, at least, that is what the introduction to the “Blueprint for an AI Bill of Rights,” created by the White House Office of Science and Technology Policy, suggests. It states that “Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public.”
[ more…]

SPARROWS AND OWL PARABLE
Nick Bostrom, the director of the Future of Humanity Institute at Oxford University, offers an ornithological parable about sparrows, owls, and superintelligence. Bostrom shared this unfinished parable in his book, “Superintelligence: Paths, Dangers, Strategies”:
[ more…]

A BIRD STORY WITH ‘DIFFERENT ENDING’
Inviting AI into our midst does not have to end in tragedy. In the novel “Watership Down,” Richard Adams helps us imagine how superintelligence and human flourishing can coexist. The novel tells the story of an intrepid group of rabbits displaced from their burrow. As they embark on an adventure of survival, these rabbits enlist the help of a seagull named “Kehaar.”
[ more…]

THE ‘IRREDUCIBLE’ QUALITIES OF HUMANS
So, as we come to discover what AI can and can’t do, many think that this will eventually refute the theory that mankind’s mental ‘phenomenon’ can be reduced to physical processes. As philosopher David Bentley Hart writes:
[ more…]

HUMANS ARE ‘SPECIAL’
Now, I think we can expect that AI will demonstrate its own ‘emergent’ capacities—like those seen in other complex adaptive physical systems, such as weather systems—but physical forms of emergence such as these will not mean that AI has become ‘alive’, let alone consciously aware of itself. As Lanier puts it, “Humans are special.”

Therefore, I think we need to stop assuming that there is a seamless continuum between our current generative AI models and the emergence of the inorganic agents predicted by many in the technology community. Although that spoils the cherished fantasy that we can become like gods by creating conscious artificial beings, we need to look through the ‘fog’ surrounding this “Promethean moment” to recognize how the limits of generative AI models are already beginning to appear.
[ more…]

HUMANITY’S ‘PRIDE’
Yes, humanity is VERY ‘special’, but they have a really BIG ‘WEAKNESS’—PRIDE—and have had it from the very beginning, when Eve bought the lie of Satan that she could be like God. The deadly sin of pride always leads to calamity. “Pride goes before destruction, and a haughty spirit before a fall” [ Proverbs 16:18 ].
[ more…]

– The Tower Of Babel
According to the Bible, God made man in His own image. So, in a sense, humanity is an ‘extension’—a ‘medium’ of God—although we are not a ‘god’ ourselves. Just as Jesus was the medium that became the message, so should man be both the medium and the message to the glory of God.

Although God made man perfect, humanity deviated from its intended purpose, and ‘fell’ into sin. Yet despite humanity’s ‘fallen’ state, they still want to create something ‘SPECIAL’, just as they have been created.

The thing is, humanity has a “god complex,” and they like to try to conjure up ‘sentience’ out of nothing. But just as the Devil couldn’t make anything ex nihilo—out of nothing—and could only corrupt that which was already made, neither can mankind. Sadly—with Satan’s help—mankind has continually tried to exalt itself above God. Somehow, humanity thinks they can bring life into being from nothing. For years, the idea of a ‘digital consciousness’ was the stuff of science fiction.

Now, there is a lot of fear that our culture is getting ahead of ourselves with AI, and I wholeheartedly agree. AI is a ‘medium’ that offers a lot of promise, but more than that, it serves to boost the ‘HUBRIS’ OF MANKIND.

So, I would like to posit that the construction of AI parallels the story of the Tower of Babel in the Bible (Genesis 11), where man became so prideful that he thought he could reach the heavens and overthrow God the Almighty.
[ more…]

– The ‘Heart’ of the Problem
I think the greatest modern example of this is, quite clearly, the Internet. Its arrival has allowed people to be connected in a way they never could have been before, and it has opened up opportunities we would never have had.

However, it is ‘open’ to abuse. Cyberbullies and scam artists are just a few examples. The thing is, all of these abuses have one thing in common: They are the result of HUMAN ‘DECISION’.
[ more…]

– A Bit of ‘Common Sense’?
Back in March 2023, there was an open letter—with 33,708 signatories—that called for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
[ more…]

– Divine ‘Intervention’?
The thing is, in the mysterious realm of God’s providence, it is impossible to tell how and when divine intervention will occur. Human beings, “made in the image and likeness of God,” are given the gifts of intellect and free will. As such, what proceeds from using these gifts can fall into the opposing categories of good and evil. Yet, as we have seen throughout history, God allows the “weeds” and the “wheat” to coexist without immediate intervention.
[ more…]

THE ‘TOWER’ OF AI?
So, has mankind built a new ‘tower’ of Artificial Intelligence—a tower humanity hopes will reach the heights of the ‘heavens’, and bring us untold knowledge and power? Have we laid the ‘foundations’ with Large Language Models and are building this ‘tower’—‘brick upon brick’—to reach some kind of ‘heaven’ like utopia?

Well, like the Tower of old, our intentions for AI may be for ‘good’, however, because of humanity’s ‘problem’ with pride and the desire to be our own god, there is a BIG ‘RISK’ that our efforts may be ‘scattered and confused’, for we may create systems that we cannot fully understand or control!
[ more…]

– Humans Are ‘Tempted’ To Do Bad
So, time will tell whether or not some of these ‘guidelines’ are implemented. However, as I have tried to convey previously, the human ‘heart’ is “deceitful above all things and desperately wicked” (Jeremiah 17:9). Unless one has been “born of God” (1 John 5:1), they have no ‘armor’ to defend against the “wiles of the Devil” (Ephesians 6:11b) and will follow the ‘master’ of this world against the God of the Bible.
[ more…]

– AI ‘Omniscience’?
A couple of years ago (in 2020), Microsoft unveiled a new supercomputer at a developer conference, claiming it was the fifth most powerful machine in the world. Built in collaboration with OpenAI (the maker of ChatGPT), the computer is designed to train single massive AI “Large Language Models” (LLMs), forgoing the need for human-labeled data sets.

Many said that this was a major step forward in ‘mimicking’ the human brain, with the ultimate goal of attaining artificial super-intelligence (ASI).

Well, the question is, will achieving ASI be humanity’s hubris? Can AI created by humans be superior to human intelligence created by God, displaying man’s supremacy, glory, and independence in himself, apart from his Creator?

Well, many technologists believe that they can create a superintelligence that outperforms humans. Now, besides the immense technical limitations, I think they are vastly underestimating the intricacy of God’s design of the brain.
[ more…]

COULD AI ‘HELP’ ACHIEVE WORLD PEACE?
So, in a world often marked by conflict and division, many believe that AI holds immense potential to support and advance peace on a global scale. With its capabilities for data analysis, communication, and conflict resolution, AI can contribute to conflict prevention, humanitarian efforts, and the pursuit of a more harmonious world.

The following are a few of the potential ways AI could help create world peace, suggested by Markus Haidt, the CTO of the “International Institute for Data Science”:
———
Many feel that with careful implementation and responsible use, AI can become a valuable ‘ally’ in navigating the complexities of international relations and advancing the cause of global peace.

ONLY ‘ONE’ CAN OFFER TRUE WORLD PEACE
Given the cruel, debilitating effects of war, mankind has often ‘LONGED’ for peace. The thing is, the Bible explains WHY it has been so elusive, and HOW world peace will come.

According to the Bible, humanity will prove unable to find true world peace in this age of human rule. Why? Well, God gave us the reason centuries ago through the prophet Isaiah: “The way of peace they have not known” [ Isaiah 59:8 ].

What an apt characterization of this world and its weak attempts to bring peace to the world—for centuries. People pay ‘lip service’ to the goal, but lack the knowledge of the ‘WAY’ to get there.

Many world leaders and diplomats have worked tirelessly to try to bring cease-fires and peace treaties. But, throughout history, those efforts have proven powerless to bring real, lasting peace to the world.

The thing is, every ‘HUMAN’ effort—including with the help of AI—to bring lasting peace WILL ‘FAIL’—and will continue to do so. World peace IS ‘IMPOSSIBLE’ when it depends upon mankind. AI will probably ‘help’ somewhat in trying to achieve world peace in the coming decades, but world peace will come ONLY through the “Prince of Peace”—Jesus Christ!

[ FYI: For more details about Jesus being the “Prince of Peace,” view last month’s “Life’s Deep Thoughts” post:
https://markbesh.wordpress.com/longing-for-peace-v298/ ]

Simply stated, on its own, humanity lacks the ‘divinely revealed’ knowledge of how to bring about peace. It is ultimately going to require the return to earth of Jesus, and the setting up of His Kingdom, to bring about true, just, and lasting world peace.

Jesus will bring with Him the administration of true justice and the pathway to real peace:

“He shall judge between the nations, and rebuke many people; they shall beat their swords into plowshares, and their spears into pruning hooks; nation shall not lift up sword against nation, neither shall they learn war anymore”
[ Isaiah 2:4 ].

The thing is, this world peace will come only after a horrific seven-year Tribulation and, as the Bible describes it, you DON’T want to be on this earth to go through the Tribulation! The good news is you can MAKE SURE you don’t go through the Tribulation!

[ FYI: View these previous “Life’s Deep Thoughts” posts to find out how you can ‘miss’ the Tribulation by being ‘raptured’:
https://markbesh.wordpress.com/blessed-hope-v245/
https://markbesh.wordpress.com/theres-no-place-like-home-v288/ ]

[ NOTE: The Tribulation is essentially a time for two things: the ‘punishment’ of unbelievers for not believing in Jesus as God’s own Son—for the atonement of their sins. Then, it is the last “week” of Daniel’s “70 weeks” prophecy for the Jewish people—a final chance to realize that Jesus was their prophesied Messiah and that they need to accept Him as such to get to Heaven.

For more details about what the Jews will experience during the Tribulation—at the ‘hands’ of Satan and God—view this previous “Life’s Deep Thoughts” post:
https://markbesh.wordpress.com/jerusalems-near-future-v292/ ]

Then, after the Tribulation, everyone who has repented of their sins and believes ‘in’ Jesus—is “born again”—will experience 1,000 years of world peace with Jesus reigning on earth during a time called the “Millennium.”

[ FYI: For more details about the Millennium, view this previous “Life’s Deep Thoughts” post:
https://markbesh.wordpress.com/mankinds-destiny-v247/ ]

Jesus’ government—and the way of peace that it brings—will expand outward from Jerusalem and encompass the whole earth and its inhabitants. Both the aged and the young will then be free to enjoy the beauty and security of this, God’s capital city, with no fear of ANY KIND of violence.

Then, once Jesus has returned and set up His rule, peace will expand outward from the city of Jerusalem to finally bring peace to this war-weary world:

“Of the increase of His government and peace there will be no end, upon the throne of David and over His kingdom, to order it and establish it with judgment and justice from that time forward, even forever. The zeal of the LORD of hosts will perform this”
[ Isaiah 9:7 ].

SO, yes, world peace IS ‘REALLY’ COMING to planet Earth one day! The thing is, it WILL NOT be because of AI or any other fantastic technologies that humanity will conjure up in the future.

World peace will ONLY be achieved by the Son of God, Jesus Christ, returning to earth to set up His Millennial Kingdom!


NOTE: Prophetically, there is NOTHING that has to happen before the Rapture happens, and the end time signs are coming together for the first time ever—‘converging’—clearly indicating that we are living in the ‘season’ when Jesus will return to take all His believers back with Him to the “place” He has prepared for them in His Father’s ‘house’ (John 14:2-3).

[ VIDEO: “The CONVERGENCE” – Christ In Prophecy | Guest: Brandon Holthaus:
https://www.youtube.com/watch?v=8Iz4DZK0o9A ]

Time IS ‘SHORT’, so BE SURE YOU ARE ‘READY’!

[ FYI: For more details about being sure you are ‘ready’ to meet God and to be let into His Heaven, view these previous “Life’s Deep Thoughts” posts:
https://markbesh.wordpress.com/are-you-prepared-v210/
https://markbesh.wordpress.com/how-to-prepare-to-meet-someone-important-v276/ ]

<<< END OF SUMMARY >>>


<<< ALL THE DETAILS >>>

The following is a comprehensive presentation of the topic that follows the ‘headings’ laid out in the Summary.


WHAT IS “ARTIFICIAL INTELLIGENCE” (AI)
John McCarthy—the person who coined the phrase “Artificial Intelligence” (AI) in 1956—offers the following definition in his 2007 paper, “What Is Artificial Intelligence?”:

“It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”

Then, McCarthy defined “intelligence”:

“Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals, and some machines.”

[ FYI: To download McCarthy’s paper, click on the following link:
https://www-formal.stanford.edu/jmc/whatisai.pdf ]

So, essentially, AI is a machine’s ability to perform the cognitive functions we usually associate with human minds.

In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data for correlations and patterns, and using these patterns to make predictions about future states. In this way, a “chatbot” that is fed examples of text can learn to generate lifelike exchanges with people, or an image recognition tool can learn to identify and describe objects in images by reviewing millions of examples. New, rapidly improving “generative” AI techniques can create realistic text, images, music, and other media.
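The “patterns from labeled examples” idea above can be sketched in a few lines of code. The following is a purely illustrative toy, not how production AI works (real systems train neural networks on enormous datasets); it simply counts word frequencies from labeled example texts, then “predicts” the label whose examples best match a new text. All of the example texts and labels here are invented for illustration.

```python
from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs. Returns per-label word counts."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def predict(counts, text):
    """Score each label by how often its examples used the text's words."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

model = train([
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the team", "ham"),
])
print(predict(model, "free prize money"))  # prints "spam": its words appear only in spam examples
```

Even this toy follows the basic loop described above: ingest labeled training data, extract a pattern (here, word counts), and use that pattern to make a prediction about unseen input.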

The applications of this technology are growing every day, and we are just starting to explore the possibilities. But as the hype around the use of AI in business takes off, conversations around ethics become critically important.

‘AUGMENTED’ VS. ‘ARTIFICIAL’
Some industry experts have argued that the term artificial intelligence is too closely linked to popular culture, which has caused the general public to have improbable expectations about how AI will change the workplace and life in general. They have suggested using the term augmented intelligence to differentiate between AI systems that act autonomously—popular culture examples include HAL 9000 and The Terminator—and AI tools that support humans.

Augmented Intelligence
Some researchers and marketers hope the label augmented intelligence, which has a more neutral connotation, will help people understand that most implementations of AI will be weak and simply improve products and services. Examples include automatically surfacing important information in business intelligence reports or highlighting important information in legal filings. The rapid adoption of ChatGPT and Bard across the industry indicates a willingness to use AI to support human decision-making.

Artificial Intelligence
True AI, or AGI, is closely associated with the concept of the technological singularity—a future ruled by an artificial superintelligence that far surpasses the human brain’s ability to understand it or how it is shaping our reality. This remains within the realm of science fiction, though some developers are working on the problem. Many believe that technologies such as quantum computing could play an important role in making AGI a reality and that we should reserve the use of the term AI for this kind of general intelligence.

‘TYPES’ OF AI
AI can be categorized as weak or strong. Weak AI—also known as “narrow AI”—is designed and trained to complete a specific task. Industrial robots and virtual personal assistants, such as Apple’s Siri, use weak AI. Strong AI—also known as “artificial general intelligence” or AGI—describes programming that can replicate the cognitive abilities of the human brain. When presented with an unfamiliar task, a strong AI system can use fuzzy logic to apply knowledge from one domain to another and find a solution autonomously. In theory, a strong AI program should be able to pass the Turing test and overcome Searle’s Chinese Room argument.

Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, explained that AI can be categorized into four types, beginning with the task-specific intelligent systems in wide use today and progressing to sentient systems, which do not yet exist. The categories are as follows:

– Reactive Machines
These AI systems have no memory and are task-specific. An example is Deep Blue, the IBM chess program that beat Garry Kasparov in the 1990s. Deep Blue can identify pieces on a chessboard and make predictions, but because it has no memory, it cannot use past experiences to inform future ones.

– Limited Memory
These AI systems have memory, so they can use past experiences to inform future decisions. Some of the decision-making functions in self-driving cars are designed this way.

– Theory Of Mind
Theory of mind is a psychology term. When applied to AI, it means the system would have the social intelligence to understand emotions. This type of AI will be able to infer human intentions and predict behavior, a necessary skill for AI systems to become integral members of human teams.

– Self-awareness
In this category, AI systems have a sense of self, which gives them consciousness. Machines with self-awareness understand their current state. This type of AI does not yet exist.

‘DEEP’ LEARNING VS. ‘MACHINE’ LEARNING
Since deep learning and machine learning tend to be used interchangeably, it is worth noting the nuances between the two. As mentioned above, both deep learning and machine learning are sub-fields of artificial intelligence, and deep learning is a sub-field of machine learning.

Deep learning is built on neural networks. The “deep” in deep learning refers to the number of layers: a neural network comprised of more than three layers—which would be inclusive of the inputs and the output—can be considered a deep learning algorithm.

How deep learning and machine learning differ is in how each algorithm learns. Deep learning automates much of the feature extraction piece of the process, eliminating some of the manual human intervention required and enabling the use of larger data sets. You can think of deep learning as “scalable machine learning” as Lex Fridman noted in the same MIT lecture from above. Classical, or “non-deep,” machine learning is more dependent on human intervention to learn. Human experts determine the hierarchy of features to understand the differences between data inputs, usually requiring more structured data to learn.

“Deep” machine learning can leverage labeled datasets, also known as supervised learning, to inform its algorithm, but it doesn’t necessarily require a labeled dataset. It can ingest unstructured data in its raw form (e.g., text, images), and it can automatically determine the hierarchy of features that distinguish different categories of data from one another. Unlike classical machine learning, it doesn’t require human intervention to process data, allowing us to scale machine learning in more interesting ways.
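To make the “more than three layers” definition concrete, here is a minimal sketch, in plain Python, of a forward pass through a small network with an input layer, two hidden layers, and an output layer. Everything in it (the weights, the layer sizes) is invented purely for illustration; real networks learn their weights from data.

```python
def relu(v):
    # The standard "rectified linear unit" activation, applied element-wise.
    return [max(0.0, x) for x in v]

def dense(v, weights, bias):
    # One fully connected layer: each output unit is a weighted sum plus a bias.
    return [sum(x * w for x, w in zip(v, col)) + b
            for col, b in zip(weights, bias)]

# Toy weights (invented): input layer -> two hidden layers -> output layer,
# i.e., more than three layers in total -- the usual threshold for "deep".
W1, b1 = [[0.5, -0.2], [0.1, 0.3]], [0.0, 0.1]   # hidden layer 1: 2 in -> 2 units
W2, b2 = [[0.4, 0.4], [-0.3, 0.2]], [0.1, 0.0]   # hidden layer 2: 2 -> 2 units
W3, b3 = [[1.0, -1.0]], [0.0]                    # output layer: 2 -> 1 unit

def forward(x):
    h1 = relu(dense(x, W1, b1))
    h2 = relu(dense(h1, W2, b2))
    return dense(h2, W3, b3)
```

In a real deep learning system the layers number in the dozens or hundreds and the weights are tuned automatically during training, which is exactly the “automated feature extraction” the text describes.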

[ NOTE: Neural networks are now learning tasks that they were not ‘trained’ (programmed) for. AI is ‘learning’ for itself! ]

‘ADVANTAGES’/’DISADVANTAGES’
So, what are the advantages and disadvantages of artificial intelligence? Well, artificial neural networks and deep learning AI technologies are quickly evolving, primarily because AI can process large amounts of data much faster and make predictions more accurately than humanly possible.

While the huge volume of data created daily would bury a human researcher, AI applications using machine learning can take that data and quickly turn it into actionable information. As of this writing, a primary disadvantage of AI is that it is expensive to process the large amounts of data AI programming requires. As AI techniques are incorporated into more products and services, organizations must also be attuned to AI’s potential to create biased and discriminatory systems, intentionally or inadvertently.

Advantages Of AI
– Good at detail-oriented jobs. AI has proven to be just as good as, if not better than, doctors at diagnosing certain cancers, including breast cancer and melanoma.
– Reduced time for data-heavy tasks. AI is widely used in data-heavy industries, including banking and securities, pharma, and insurance, to reduce the time it takes to analyze big data sets. Financial services, for example, routinely use AI to process loan applications and detect fraud.
– Saves labor and increases productivity. An example here is the use of warehouse automation, which grew during the pandemic and is expected to increase with the integration of AI and machine learning.
– Delivers consistent results. The best AI translation tools deliver high levels of consistency, offering even small businesses the ability to reach customers in their native language.
– Can improve customer satisfaction through personalization. AI can personalize content, messaging, ads, recommendations, and websites for individual customers.
– AI-powered virtual agents are always available. AI programs do not need to sleep or take breaks, providing 24/7 service.

Disadvantages Of AI
– Expensive
– Requires deep technical expertise
– Limited supply of qualified workers to build AI tools
– Reflects the biases of its training data, at scale
– Lack of ability to generalize from one task to another
– Eliminates human jobs, increasing unemployment rates

THE ‘ETHICS’ OF AI
While AI tools present a range of new functionalities for businesses, the use of AI also raises ethical questions because, for better or worse, an AI system will reinforce what it has already learned.

This can be problematic because machine learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human being selects what data is used to train an AI program, the potential for machine learning bias is inherent and must be monitored closely.

Anyone looking to use machine learning as part of real-world, in-production systems needs to factor ethics into their AI training processes and strive to avoid bias. This is especially true when using AI algorithms that are inherently unexplainable in deep learning and Generative Adversarial Network (GAN) applications.
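As a toy illustration of how training-data bias propagates, the following sketch “trains” a trivial model on skewed historical loan decisions. The groups, counts, and decisions are entirely hypothetical; the point is only that a model fitted to biased history faithfully reproduces the bias.

```python
from collections import defaultdict

# Hypothetical historical loan decisions as (group, approved) pairs.
# The labels themselves encode a past bias: group "B" was approved far less often.
history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 30 + [("B", 0)] * 70)

def train_base_rates(data):
    # "Training" here is just measuring each group's historical approval rate.
    counts, approved = defaultdict(int), defaultdict(int)
    for group, label in data:
        counts[group] += 1
        approved[group] += label
    return {g: approved[g] / counts[g] for g in counts}

model = train_base_rates(history)
# model["A"] is 0.8 and model["B"] is 0.3 -- the historical skew, unchanged.
```

Any system that learns decision rules from such a history will inherit the 80% vs. 30% gap unless the bias is actively measured and corrected, which is why monitoring is essential.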

“Explainability”—referring to an AI system over which it is possible for humans to retain intellectual oversight, or to the methods to achieve this—is a potential stumbling block to using AI in industries that operate under strict regulatory compliance requirements. For example, financial institutions in the United States operate under regulations that require them to explain their credit-issuing decisions. When a decision to refuse credit is made by AI programming, however, it can be difficult to explain how the decision was arrived at because the AI tools used to make such decisions operate by teasing out subtle correlations between thousands of variables. When the decision-making process cannot be explained, the program may be referred to as ‘black box’ AI.

In summary, a few of AI’s ethical challenges include the following:

– Bias due to improperly trained algorithms and human bias
– Misuse due to deepfakes and phishing
– Legal concerns, including AI libel and copyright issues
– Elimination of jobs due to the growing capabilities of AI
– Data privacy concerns, particularly in the banking, healthcare, and legal fields

Over the years, AI has gone through many ‘cycles’, but even to skeptics, the release of OpenAI’s “ChatGPT-4,” in 2023, seems to mark a “turning point.” The last time “generative” AI loomed this large, the breakthroughs were in computer vision, but now the leap forward is in “natural language processing.” The thing is, it is not just ‘language’. Generative models—like ChatGPT—can also learn, analyze, and ‘create’.

A NEW ‘LIFE FORM’?
Experience tells us that most technological advances are likely to have both an upside and a downside. A knife can be used for surgery or as a murder weapon; a car can be used to take you to work or as a getaway vehicle after a crime. It is the same with AI. There are many valuable positive developments, and there are some very alarming negative aspects that demand close ethical attention.

Forbes Magazine contributor Kalev Leetaru reminds us:

“At the end of the day, the deep learning systems are less “AI” than [they are] fancy pattern extractors. Like any machine learning system, they can blindly identify the underlying patterns in their training data and apply those patterns as-is to future data. They cannot reason about their input data or generalize to higher-order abstractions that would allow them to more completely and robustly understand their data. In short, while they can perform impressive feats, deep learning systems are still extraordinarily limited, with brittleness that can manifest in highly unexpected ways.

“After all, the ‘AI’ of today’s deep learning revolution is still just machine learning, not magic.”

Nevertheless, many people would agree with the optimistic stance of Astronomer Royal Sir Martin Rees, who says: “Our lives, our health, and our environment can benefit still more from further progress in biotech, cybertech, robotics, and AI. To that extent, I am a techno-optimist.” Rees continues: “But there is a potential downside. These advances expose our interconnected world to ever new vulnerabilities.” It is that potential ‘downside’ that mankind must seriously consider.

In a TED Talk in Vancouver, Canada (in April 2018), physicist and cosmologist Max Tegmark—President of the “Future of Life Institute” (at MIT)—made this rather grandiose statement:

“In creating AI, we’re birthing a new form of life with unlimited potential for good or ill.”

A book by Sir Nigel Shadbolt and Roger Hampson entitled “The Digital Ape” carries the subtitle “How to Live (in Peace) with Smart Machines.” They are optimistic that humans will still be in charge, provided we approach the process sensibly. But is this optimism justified?

Well, the director of Cambridge University’s “Centre for the Study of Existential Risk” said: “We live in a world that could become fraught with hazards from the misuse of AI and we need to take ownership of the problem—because the risks are real.”

The ethical questions are urgent since AI is regarded by experts as a transformative technology in the same league as electricity. It would, however, make more sense to compare AI with nuclear energy than with electricity. Research into nuclear energy led to nuclear power stations, but it also led to a nuclear arms race that brought the world to the brink of extinction.

So, do we—as a world—need to stop and ask: “What is the truth behind claims from the top AI technologists and are they exaggerated speculation that goes far beyond what scientific research has actually shown?”

WORLD ‘DOMINATION’ BY AGI?
MIT physicist Max Tegmark imagines how world domination might occur in three steps: First, make human-level AGI, then use it to create superintelligence, and finally unleash the super-intelligence to take over the world.

In the prelude to his book “Life 3.0,” Tegmark imagines a highly secretive AI project run by what he calls the “Omega Team,” which develops a powerful AI system called “Prometheus.”

Tegmark’s Prometheus surreptitiously replaces the human “Turker” workers on Amazon’s Mechanical Turk platform with its much faster AI system and, in consequence, begins to harvest vast revenues. The Omegas then train Prometheus to make films, and the revenues increase exponentially. This wealth is then used to take over the world’s businesses and media outlets and to influence governments. You can imagine the rest for yourself.

Prometheus eventually ends up in absolute control of the planet: “For the first time ever, our planet was run by a single power, amplified by an intelligence so vast that it could potentially enable life to flourish for billions of years on Earth and throughout the cosmos—but what specifically was their plan?”

The idea is that, in this way, the world would become the perfect totalitarian state—whatever “perfect” means in that context. Not only can Prometheus record all electronic communications—a capacity that many governments have had for years—it can understand all communications and so be fully aware of what people around the world are thinking and doing:

“With the excuse of fighting crime and terrorism and rescuing people suffering medical emergencies, everybody could be required to wear a “security bracelet” that combined the functionality of an Apple watch with continuous uploading of position, health status, and conversations overheard. Unauthorized attempts to remove or disable it would cause it to inject a lethal toxin into the forearm.”

Now it is clear that Tegmark intends this as an imaginative introductory scenario. It is imaginative in more ways than one. For instance, Prometheus, being a machine, does not understand all communication, nor is it aware of what people around the world are thinking and doing for the simple reason that it has no mind with which to understand or be aware.

Tegmark also investigates a wide variety of AGI scenarios that have been suggested by different leading figures in the AGI scene. Some assume that humans will be able to control the superintelligence and possibly use it to eliminate opposition. Another scenario takes the opposite view and imagines that Prometheus breaks out, takes control of humanity, and possibly eliminates it.

The question then is: How can we ensure that such a super-intelligence will safeguard human interests and not threaten human existence? Putting this another way, we might ask: “What worldview will a superintelligence or an AGI have?” This is an important question since that worldview will have to be embedded by the human constructors and software programmers. What might that worldview be and on what values will it be based?

Tegmark describes twelve possible scenarios for humanity’s future that represent summaries of inputs from many different thinkers. Some of them are optimistic, and others are pessimistic. In some of them, the AGI seems to possess the characteristics of a benevolent god; in others, those of the devil—with all shades in between. Notice that half of these scenarios tend towards the utopian, and half towards the dystopian, so that between them they cover a very large spectrum. Which of the two tendencies is the more likely will depend, at least in part, on the status we give to moral evil.

– ‘Abolition’ Of Man?
If and when humans attempt such a remaking of human nature, the inevitably small number of ‘controllers’ in that age will define the nature of the “humans” of the future that they will create. But as philosopher and author C.S. Lewis chillingly says of those controllers and their products: “Nor are their subjects necessarily unhappy men. They are not men at all: they are artefacts. Man’s final conquest has proved to be the abolition of Man.”

Such a dystopic scenario can be thought of as human beings trying to play the role of Homo Deus, where Deus is a ‘malevolent’ god. Philosopher J. Budziszewski at the University of Texas writes:

“Genesis, I think, is the crux of it. To abolish and remake human nature is to play God. The chief objection to playing God is that someone else is God already. If He created human nature, if He intended it, if it is not the result of a blind fortuity that did not have us in mind – then we have no business exchanging it for another. It would be good to remember that Genesis contains not only the story of creation but the story of Babel, of the presumption of men who thought they could build a tower ‘to heaven’.”

Budziszewski continues in a manner reminiscent of C.S. Lewis:

“You say you want man to be to himself what God has been to man. But what God has been to man is man’s absolute superior, and man cannot be his own superior. A thing can be equal to itself, but it cannot be greater than itself. So, what you really mean is you want some men to be to other men what God has been to man. You want some men to be the absolute superiors of others. I assume that you want to be in the former group and not in the latter… You say you want to change the human design. But, in that case, there must be two groups: Those who caused the change, and those who resulted from it. And the former hold all the cards.”

So, there is no avoiding the topic of God when we think of “superintelligence.”

‘MORAL’ DIMENSION
The biblical account of the Garden of Eden is one of the most profound stories in all literature. It relates how the Creator placed the first humans in a garden paradise that was full of promise and interest—and a living friendship and companionship with God Himself.

However, there was an added dimension to the human environment that is essential for our understanding of the world and ourselves—the ‘MORAL’ DIMENSION.

Human morality, then, was originally defined in terms of obedience or disobedience to God. This, then, would be of crucial importance for the ethical evaluation of AI in the contemporary world.

The ‘ethical’ foundations of AI are what drive the fears of it. What if it goes out of control? What will a superintelligent ‘agent’ do to humanity if it does not share our values?

Philosopher and professor Nick Bostrom says “We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans—scientific curiosity, benevolent concern for others… renunciation of material acquisitiveness, a taste for refined culture or the simple pleasures of life, humility and selflessness, and so forth.”

We cannot assume that the near future with AI will not be worse than Huxley’s “Brave New World” or Orwell’s “1984.” It is, after all, easy to assume that AI will improve human beings—but that may not necessarily be the case.

If the ethical programmers are informed by relativistic or biased ethics, the same will be reflected in their products. For that reason, it is surely important that those with transcendent ethical convictions should have a seat at the ethics table when discussing the potential problems of AI.

It is, of course, difficult to discuss ethical values in connection with a superintelligence since there are no facts but only a plethora of wildly differing hypothetical scenarios.

Some people hope that if our approach to superintelligence is via human brain enhancement, then the resulting superintelligence may well share common human values. However, we should not let the scary scenarios disconnect us from the fact that they are mostly speculation. Nor should we let them make us forget to be thankful for good technological progress.

So, AI, like any new technology—though perhaps more so—brings with it a whole new raft of moral considerations that may easily seem insurmountable. AI computer systems have no ‘conscience’, so the morality of any decisions they make will reflect the morality of the computer ‘programmers’, and that is where the difficulties start. How can we be sure that the programmers will ‘build in’ a morality that is benevolent and humane?

Rosalind Picard, director of the “Affective Computing Group” at MIT, puts it succinctly: “The greater the freedom of a machine, the more it will need moral standards.”

So then, can we teach ‘fairness’ to a computer or program? It will only be possible if the programmers know what fairness is and are capable of presenting it in a form that a machine can process. If things go wrong because the system amplifies the bias rather than removing it, we cannot blame a conscienceless machine. Only a ‘moral’ being, the human programmer, can and should be blamed.

PROFOUND ‘CHANGES’ COMING!
There is the threat posed by humans ‘armed’ with artificial intelligence. Like most inventions, AI can be used for good purposes or maligned ones. Governments need to work with the private sector on ways to limit the risks.

Then there’s the possibility that AI will not be able to be controlled. Could a machine decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us? Well, possibly.

The thing is, many think that superintelligent AI IS in our future. Once developers can generalize a learning algorithm and run it at the speed of a computer—an accomplishment that could be a decade away or a century away—we will have an incredibly powerful AGI. It will be able to do everything that a human brain can but without any practical limits on the size of its memory or the speed at which it operates. This will be a PROFOUND CHANGE!

These “strong” AIs, as they are called, will probably be able to establish their own goals. So, what will those goals be? What happens if they conflict with humanity’s interests? Should we try to prevent strong AI from ever being developed? These questions will get more pressing with time.

Now, granted, today’s AI still does not control the physical world and cannot establish its own goals. However, a recent New York Times article about a conversation with ChatGPT, in which it declared it wanted to become a human, got a lot of attention. It was a fascinating look at how human-like the model’s expression of emotions can be, but it isn’t an indicator of meaningful independence.

The world needs to establish the ‘rules of the road’ so that any downsides of artificial intelligence are far outweighed by its benefits, and so that everyone can enjoy those benefits no matter where they live or how much money they have. The Age of AI is filled with opportunities and responsibilities.

It is going to be an ‘interesting’ next decade!

DESIGNING AI FOR CONFLICT PREVENTION
The growing prominence of artificial intelligence (AI) presents enormous potential and enormous dangers for GLOBAL PEACE and SECURITY. In the ‘peacebuilding’ field, technology engineers and policymakers must work together to minimize the dangers of AI and maximize its potential to prevent and reduce violent conflict and fragility and to build sustainable peace globally. AI technology could vastly increase humanity’s capacity to make discoveries through machine learning and to coherently summarize numerous text documents. However, AI could also easily generate mis- and dis-information at a massive scale and, as a worst-case scenario, support deadly automated weaponry beyond human control.

HOW AI CAN ‘CONTRIBUTE’ TO PEACE
While AI could spell many dangers for global peace and security, it also holds exciting potential for innovation in conflict prevention and peacebuilding.

AI-powered dialogue tools such as the “Remesh” platform can facilitate real-time conversation between large populations, which can be integrated into consultative elements of democracy and peacebuilding initiatives.

[ Note: The UN Department of Political and Peacebuilding Affairs (UNDPPA) uses the Remesh platform in Yemen, Libya, and Iraq to hold AI-assisted dialogue with groups of up to 1,000 citizens as part of official peace processes. AI-assisted mass dialogue can help peacebuilders facilitate real-time mediated conversations between large groups of people to build social cohesion, and may one day support conversations among hundreds of thousands of people in real time. ]

Through automation, AI can also increase the amount and real-time accuracy of data analysis in conflict-affected and fragile states to inform early warning and response mechanisms (EWER). EWER is a crucial tool to prevent atrocities, as crimes against humanity, war crimes, genocide, and other grave human rights abuses are typically long processes that involve early warning signs, such as the proliferation of dehumanizing language and violations of civic rights. By further developing AI, there are significant opportunities to increase its capacity to monitor massive amounts of data for conflict analysis. AI could also promote better peacebuilding collaboration by coordinating large-scale data and conflict monitoring and by increasing marginalized groups’ meaningful inclusion and leadership in peace and security processes.
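A drastically simplified sketch of that monitoring idea: scan a stream of messages for dehumanizing terms and report how often they occur. The watch list and the sample feed below are invented; a real EWER system would rely on far richer language models and contextual analysis, not keyword matching.

```python
# Toy early-warning sketch: flag dehumanizing language in a message stream.
# The watch list and the sample feed are invented for illustration only.
FLAGGED = {"vermin", "cockroaches", "parasites"}

def warning_score(messages):
    # Fraction of messages containing at least one flagged term.
    def has_flag(msg):
        words = (w.strip(".,!?") for w in msg.lower().split())
        return any(w in FLAGGED for w in words)
    return sum(1 for m in messages if has_flag(m)) / max(len(messages), 1)

feed = [
    "Market reopens in the capital today.",
    "They are vermin and must go.",
    "Cockroaches, all of them!",
]
# Two of the three sample messages trip the watch list.
```

Even this crude signal illustrates the principle: a rising score over time is exactly the kind of early warning sign, at scale, that human analysts alone could never keep up with.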

THE ‘RISKS’ OF AI
However, AI also poses severe risks to peace and security. Generative AI can significantly increase the capacity of bad ‘actors’ to create disinformation, defined as intentionally inaccurate information created to deceive and spread in order to do serious harm on a large scale through text, images, videos, and audio.

Also, at this time, AI often ‘hallucinates’ or inexplicably makes up bogus information and sources, which could further spread false and inaccurate information. Together, these risks could seriously undermine society’s information resilience, or ability to understand information ecosystems and recognize and resist intentionally manipulative content.

Mass disinformation campaigns generated by AI can mislead populations, foment civil unrest, and give rise to identity-based violence. AI can also help ‘bad actors’ undermine democracy, support rising authoritarianism, suppress human rights, close civic space, and perpetrate atrocities.

Experts outside of the peacebuilding community have voiced concerns about the destabilizing impacts of AI. In March 2023, an open letter—signed by over 30,000 tech leaders and researchers—called for a six-month moratorium on developing new AI technology because of “profound risks to society and humanity,” such as propaganda, the automation of all jobs, and the development of “nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us.”

In May 2023, the Center for AI Safety released a one-sentence statement—signed by 350 AI executives, researchers, and engineers—that urged prioritizing the risk of extinction from AI “alongside other societal-scale risks, such as pandemics and nuclear war.”

In the spring of 2023, Dr. Geoffrey Hinton, often referred to as “the Godfather of AI,” quit his job at Google and publicly expressed concerns about AI’s risks, including disinformation, upending job markets, and AI becoming more intelligent than people.

The United Nations got involved in September 2022, initiating high-level discussions on AI through its “Principles for the Ethical Use of Artificial Intelligence.” Then, in July 2023, the UN convened the “AI for Good Global Summit,” which aimed to ensure that the international community mitigates the potential harms of AI while also allowing AI to reach its full potential for the benefit of society. That same month, the UN Security Council held its first session on AI’s threat to international peace and security. During the session, UN Secretary-General António Guterres called for the creation of a UN watchdog agency to oversee AI regulation, stating that he would convene a high-level meeting on AI and return with options for global governance on the issue by the end of the year (with negotiations on a legally binding instrument to be concluded by 2026). [ At the 2023 United Nations General Assembly, more than 20 countries brought up AI during the General Debate, with much of the discussion around AI’s “potential to either facilitate development or perpetuate inequality.” ]

On 20 July 2023, Secretary-General António Guterres presented to Member States his Policy Brief on “A New Agenda for Peace,” which outlines the UN’s vision for multilateral efforts for peace and security, based on international law, for a world in transition. It outlines an extensive and ambitious set of recommendations that recognize the interlinked nature of many of the challenges we face.

The Agenda is framed around the core principles of trust, solidarity, and universality that are the foundations of the Charter and of a stable world. It presents twelve concrete sets of proposals for action, in five priority areas.

To achieve more effective multilateral action for peace, the following recommendations are presented for the consideration of Member States.

[ FYI: To read the policy brief, download it by clicking the following link:
https://reliefweb.int/report/world/our-common-agenda-policy-brief-9-new-agenda-peace-july-2023 ]

The thing is, policymakers and the international community need to MORE ‘EXPLICITLY’ ARTICULATE and address AI’s direct connection to global peace and security. Bilateral governments, the private sector, civil society, and multilateral institutions must take urgent action to mitigate the potential impacts of AI, working to ensure it promotes social cohesion and peacebuilding while minimizing its adverse effects on peace and security globally.

Multilateral institutions can critically fill gaps between national efforts, ensuring a coordinated international effort that directs AI technologies toward upholding peace and security. Multilateral frameworks can outline principles and best practices for using AI technologies to prevent violence and atrocities and promote peace, which can help rally good actors and civil society for a common cause.

Peacebuilding programs are designed to end hostilities and restore peace. When violent conflict erupts, peacebuilders stop the violence by bringing together the parties to resolve conflict without violence. Once the conflict is resolved, peacebuilding programs re-establish safety and stability.

Peacebuilding programs work to create sustained peace in countries and communities around the world. These programs engage in ongoing efforts to address the root causes of violent conflict. By building and maintaining the conditions for peace, peacebuilders work to ensure continuing safety and stability.

‘IMPACTS’ ON PEACE AND SECURITY
A key question when we look at all new and emerging technologies and their impacts on peace and security is: Is it neutral? Really, we are asking if impacts come only from the way a technology is used, or if they also stem from the technology itself.

The United Nations Office for Disarmament Affairs (UNODA) suggests that one view says that technology is fundamentally neutral. This is sometimes called the “Value Neutrality Thesis.” In this view, the negative impacts of technology come from the way people use it. A common example is a knife, which could equally be used to attack a person or to cut an apple. Sometimes this view says that negative impacts can start in the design of technology, and sometimes only in its use.

Most modern technology theorists reject the idea that technology is fundamentally neutral. Instead, they generally recognize:

– The ‘Intentionality’ Of Technology
It is designed and deployed based on the values of the designer and deployer, then used according to the values of the user, and

– The ‘Directionality’ Of Technology
It adds choices or improves processes that point in a certain direction. For example, a gun can also be used as a paperweight, but everything in its design and development guides the user to a different use.

With many technologies, including AI, specific innovations can create or exacerbate problems, even without any hostile intent. Algorithmic bias is an example of this issue. A translation tool that consistently assigns masculine gender to positive statements, and feminine gender to negative statements, is most likely demonstrating bias due to issues with the dataset it uses, not because of deliberate attempts by developers to produce a biased tool.
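The translation-bias example above can be mimicked with a toy model (invented corpus, invented counts): the system simply picks the pronoun it saw most often with each word, so a skewed corpus yields skewed output with no hostile intent anywhere in the pipeline.

```python
from collections import Counter

# Invented, deliberately skewed "corpus" of (adjective, pronoun) pairs,
# standing in for the statistics a translation model absorbs from its data.
corpus = ([("brilliant", "he")] * 90 + [("brilliant", "she")] * 10 +
          [("emotional", "she")] * 85 + [("emotional", "he")] * 15)

def learned_pronoun(word, data):
    # Choose the pronoun most often seen with the word in the training data.
    votes = Counter(p for w, p in data if w == word)
    return votes.most_common(1)[0][0]
```

Here `learned_pronoun("brilliant", corpus)` returns "he" while `learned_pronoun("emotional", corpus)` returns "she": the dataset, not the developer, determines the bias, which is precisely the point made above.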

In the peace and security context, the two risk “pathways”—consequences from intended use and consequences from unintended use—may materialize in the following ways.

– Military Usage
AI development intended for military use may have unintended consequences on peace and security. For instance, bias in the targeting function of an autonomous weapons system (AWS) could cause the system to wrongfully attack civilians or civilian objects. Defensive and offensive AWS could interact in a way that could lead to rapid and unintended conflict escalation—possibly even a ‘flash war’.[6] These are immediate consequences. The second or third-order effects are also important. For instance, a country that develops and adopts AWS to enhance its sense of security might generate insecurity on the part of its potential adversaries, who may react by developing their capacities and establishing arms race conditions, which could be destabilizing for regional and/or global security.

– Civilian Usage
AI development, whether intended for civilian/peaceful uses, or military uses, could be diverted and misused in ways that reinforce or create new threats to peace and security. These include the use of AI tools for political disinformation, cyberattacks, or terrorism.

The first risk ‘pathway’ is a concern primarily for the defense industry and the military. The second, on the other hand, is an issue of concern for the AI community as a whole, given the dual-use potential of much AI development. So, the following are some of the risks that flow from the diversion and ‘misuse’ of AI.

To understand how this risk might materialize and generate first-, second-, and third-order effects on peace and security, it is helpful to consider a hypothetical example involving Generative Adversarial Networks (GANs).

[ CHART: “Impacts On Peace And Security” Flowchart – UNODA ]

As we can see, in this example the second and third-order consequences of the development of civilian AI technology and its dissemination include harms to peace and security. It is also important to think about the severity of that impact, who it has impacted, and the reversibility of the harm.

If we think about the original developers of GANs, we can see that AI can be used with hostile intent to harm peace and security—but it can also harm peace and security without any hostile intent on the part of the designer.

Harm can simply come from the way an AI tool frames, guides, or constrains behavior, which brings us back to the ideas of “intentionality” and “directionality” of technology.

In simple terms, choices in the development and diffusion of AI research and innovation can impact the risks of diversion and misuse, which in turn generate risks or opportunities for peace and security. These factsheets will go into detail on issues relating to the development of civilian AI and peace and security, including international governance and regulation, the political context, responsible innovation of AI, and standards, tools, and practices.

[ FYI: To download UNODA’s “AI and Peace and Security” report, click on the following link:
https://education.unoda.org/docs/ai-slide1.pdf ]

‘LEVERAGING’ THE GREATEST IMPACT
So then, how can we ‘LEVERAGE’ the potential of AI and mitigate its downsides? Well, if developed and deployed ethically and responsibly, AI could be a crucial tool leading to new industries and greater economic growth. But these benefits won’t happen by themselves.

AI is fundamentally about people, and this focus is essential. Applications of AI need to be led by lives and livelihoods, not just by data points and digital systems—as highlighted by Bangladesh’s focus on ensuring that AI drives both economic and social growth. Similarly, putting people at the center of AI thinking, piloting, and scaling is a crucial foundation of UNDP’s approach, which views AI through a ‘DEEP’ lens.

So, how do we get the greatest positive impact from AI? Well, the following are a few suggestions from the United Nations Development Programme (UNDP):

– Democratize And Demystify AI
AI should be understandable to all, which means driving public awareness, knowledge, and understanding of AI technologies—including their benefits, risks, and terminology. By fostering an environment where people are equipped with the knowledge and skills to understand AI, the discussion of its uses and limits can become a truly whole-of-society exploration. As an example, the Government of Finland launched its free and open online course—the Elements of AI—in 2018 to improve the critical understanding of AI across the country’s population. It also highlighted the potential for a whole-of-society conversation: over a quarter of course participants were aged over 45. By broadening the accessibility of AI in these kinds of ways, we can shape it to be more relevant and useful in different contexts.

– Empower People To Use AI To Improve Their Lives And Livelihoods
AI should serve all people. It should add real and important value, and it should not be an abstract concept. This includes moving beyond awareness-raising and making AI products, tools, and technology relevant to the contexts and priorities within countries—including supporting people to integrate AI into their employment, their businesses, and their lives. For example, Colombia has crafted a National AI Strategy to leverage artificial intelligence to empower its citizens and drive economic prosperity. Nigeria is engaging its overseas diaspora to guide the country in leveraging AI for national benefit, and India aims to train three million civil servants in exploring the potential role of AI for policy and service delivery.

– Explore And Experiment With AI, In A Safe Way
AI is evolving rapidly, yet its benefits are still unclear in many settings. Despite this infancy, many AI innovators are moving fast to develop AI products and services founded on enormous amounts of personal data. This data needs to be used responsibly—and protected. We therefore need to shape inclusive and safe opportunities to explore and experiment with AI. This includes the physical testbeds and regulatory sandboxes being explored in higher-income countries—and the existing structures other countries are broadening—which could identify and shape these protections. It also extends to building cultures of responsible experimentation (with feedback loops) to identify how AI can have the greatest positive impact. And we need to take a longer-term perspective. For example, the Government of South Africa has established a Presidential Commission on the Fourth Industrial Revolution to explore how emerging technologies could shape the country’s future.

– Protect People From The Negative Impact Of AI
AI is being explored in all aspects of our governments, economies, and societies. However, AI needs to be people-centered, with the rights, safety, and consideration of people as essential foundations. It also needs to be used responsibly and ethically, which can pose wide-ranging challenges—tackling underlying data bias, providing grievance redressal and accountability for AI decision-making, and ensuring that AI does not entrench or exacerbate digital (or broader) divides. Rwanda’s National AI Policy includes six key elements, among them ensuring ‘trustworthy AI adoption in the public sector’ by building the skills and capacities of public servants and regulators. The African Union, meanwhile, may yet become a source of ‘model’ legislation on AI and key foundations such as data protection. Several other countries are also shaping policies on the ethical usage of AI.

The UNDP works in more than 170 countries and territories around the world, and many of these places are already actively engaging with AI. This global portfolio has reaffirmed the importance of what they call the “5 ‘Cs’.”

First, the importance of CONTEXT. Understanding the local realities and landscape where AI could add value (and where and how it could also have negative consequences).

Secondly, it is about COLLABORATION and CONVENING. No single country has ‘figured out’ AI. This is a global journey and conversation.

Thirdly, there needs to be a focus on CAPACITIES and COMPETENCIES. One must ask how we improve the understanding, development, and application of AI across society—and build the skills needed for local research, development, and scaling.

Increasingly, they are seeing a sixth ‘C’: COMPARATIVE ADVANTAGE. Countries and communities must make AI work most effectively for their national development priorities and international standing.

Now, across UNDP’s global work, they are also seeing three other ‘trends’:

– Global Development ‘Toolkit’
AI is a tool in the global development ‘toolkit’—not a panacea, and not a replacement for human ingenuity. Instead, AI can augment, complement, and amplify human efforts. ‘People’ need to be at the center of AI (and of digital transformation more broadly). In recognizing this, we need to ensure that AI adds value and that AI development and usage are founded on an inclusive and rights-based approach. We need to avoid technological ‘veneers’ that exist solely for technology’s sake. And we need to understand when AI may not be relevant, useful, or beneficial.

– The Scale, Potential, And Risks
Today’s headline AI tools are only the very public face of a global AI ‘supply chain’, and many aspects of that supply chain are not technological. As the researcher Kate Crawford argues, AI is neither artificial nor intelligent: it is not artificial because it is made possible by vast amounts of natural resources, fuel, and physical labor; and it is not intelligent because it often relies on inadequate data and other sometimes weak foundations. To engage with AI, we need both computer science and social science.

– AI Is More Than ‘Silicon Valley’
UNDP suggests that AI is also about the ‘Silicon Savannah’ in Kenya, where AI startups are approaching $100M in funding; about an emerging regional approach in Latin America; about Thailand’s efforts to shape an AI ecosystem by 2027; and about everywhere in between. Many of the exciting, important, and cutting-edge explorations of AI are being undertaken in ‘developing’ countries. Despite rhetoric focusing on a handful of companies and countries, there is an enormous opportunity for global discussion, collaboration, and innovation.

So, as mankind stands at the start of an era where technology and human endeavors could become more intertwined than ever before, we must understand and leverage the power of AI responsibly and equitably. AI could bring us one step closer to a more sustainable, inclusive, and prosperous world. But it could also entrench existing digital and other divides.

We will only be able to gain the benefits and tackle the downsides of AI if we all approach it through a deep ‘lens’ and work together.

CAN AI ‘TECHNOLOGY’ ACHIEVE PEACE?
The general idea behind AI technology ushering in an era of peace is that it could help make the world more ‘interconnected’, and an interconnected world could do away with war—the idea goes—because countries could not survive without each other.

Well, this idea is not new. In the 1850s, electric telegraph networks were sprouting throughout the industrialized world. Cyrus Field, a wealthy American who made a fortune manufacturing paper, founded the Atlantic Telegraph Company and laid the first cable across the Atlantic connecting the United States to Europe in 1858. This single act was widely touted as possibly bringing about world peace. An article in 1843 Magazine quotes some of the glowing expectations:

“‘It is impossible that old prejudices and hostilities should longer exist, while such an instrument has been created for the exchange of thought between all the nations of the earth,’ declared one of the many celebratory books. ‘This sea-nymph, rising out of the waves, was born to be the herald of peace,’ trumpeted another.”

As time has moved on, technology has advanced further—allowing more interconnectedness—giving us the Internet and smartphones. Sadly, however, these technologies have not ushered in an era of peace. Instead, much technological effort has gone into producing more powerful and tactical weaponry capable of even greater destruction—like Russia’s Satan II missile and its “Doomsday” torpedo.

Well, it doesn’t look like the ‘current’ technologies are going to achieve world peace any time soon.

[ For more insights into nuclear weaponry, read “That One Time We Blew Up an Island”:
https://lifehopeandtruth.com/prophecy/blog/that-one-time-we-blew-up-an-island/ ]

IS AI A ‘FORCE’ FOR GOOD?
So, can the newest, latest, greatest technology, AI, be just what mankind needs to develop that ‘interconnectedness’ while being an unbiased ‘arbitrator’?

Policymakers are just beginning to understand and leverage AI’s potential and impact and need to take a ‘holistic’ view of the issues. This includes things like ‘intentions’ behind AI systems, as well as potential unintended ‘consequences’ and actions of AI systems. This can especially be the case with military applications.

Branka Panic, Founder and Executive Director of “AI for Peace,” started the organization to help ensure that AI provides continuously ‘GOOD’ BENEFITS. She speaks about the “AI for Good” movement, the challenges companies may need to overcome as they approach AI for good, and what organizations can do to minimize the risks of creating AI with unintended consequences.

This organization is a think-tank and community of AI and field experts who are committed to using AI to create lasting peace. They are saddened to see that AI is a growing element in the military strategy of many countries, with investments in defense and national security increasing every year. (Military uses of AI—such as autonomous systems, target recognition, threat monitoring, and situational awareness tools—tend to be ‘destructive’ in nature.)

Their vision, however, is a future in which AI benefits peace, security, and sustainable development, and in which diverse voices influence the creation of AI and related technologies. So, they equip peacebuilders and AI experts with the mindset and knowledge to develop human-centered artificial intelligence aimed at sustainable positive peace, and they serve as a global open ‘hub’ for social scientists, AI researchers, developers, and policymakers who want to understand and leverage AI’s potential and impact.

As the world faces an upward trend in conflict, violence, insecurity, and human rights violations, it is their moral imperative to think about all approaches to solving these problems urgently—with AI being explored as a POTENTIAL ‘AGENT’ to reverse these trends and help create peaceful and just societies. On the other hand, they say that many of the security threats we are facing today are directly or indirectly caused by the use of these new technologies, such as autonomous weapons, biased algorithms, and facial recognition in policing. AI is a promise, but it also comes with lots of perils we need to face to sustain peace. Hence, “AI for Peace” works to safeguard peace both from and with AI and related technologies.

Now, happily, starting in 2019, various governments and organizations began adopting sets of AI principles and regulating AI. These principles mostly cluster around transparency, justice, fairness, non-maleficence, responsibility, and privacy—so, it is a ‘beginning’.

Another positive example, and a growing field, is the use of photos, videos, and satellite and drone imagery—combined with computer vision tools and deep learning—for human rights protection. These tools help activists process content more quickly and see patterns not visible to the human eye, so the material can be used to demand accountability for human rights violations.

“AI For Good” is asking governments and companies to grant greater access to data to organizations tackling global challenges, to open up accessible education opportunities, and to encourage and support their highly skilled employees and experts in contributing to such projects. They take a “do no harm” approach.

The pandemic crisis scaled this thinking globally across different fields and industries and demonstrated, more than ever, that we need ‘ethical’ AI to minimize risks. Even in the ideal situation of accurate systems and ‘unbiased’ algorithms—and in practice there is always ‘some’ sort of bias—complex social contexts can cause unintended and unexpected consequences. They are trying to develop and implement processes ‘prior’ to an incident, to allow public accountability and mitigate damage. They acknowledge that carefully evaluating the social impact of an AI system needs to become a norm and a core part of AI development rather than an afterthought.

The thing is, comparing the European Union and China, one can see how different approaches to ethics can shape the focus of AI development—and whether that development primarily benefits companies, citizens, or the state.

A report published by New York University’s “AI Now Institute” (in December 2019) showed that the “vast majority” of AI ethics statements say “very little about implementation, accountability, or how such ethics would be measured and enforced in practice.” AIFG suggests that, as a first step, we need to define what “explainability” or transparency of an AI system means in practice, what a human-centered AI system looks like, and who the “human” is in the “human-in-the-loop” concept. Another piece of advice from experts in this area is to advance AI ethics by transforming it into “AI microethics”—acknowledging that AI is a collective term for a wide range of technologies, and recommending a switch from AI ethics to “technology ethics, machine ethics, computer ethics, information ethics, and data ethics”.

Now, some organizations have made progress in advancing ethical AI programs that ensure transparency, accountability, and fairness. For example, Microsoft’s AI, Ethics and Effects in Engineering and Research (AETHER) Committee provides a mechanism for employees to flag concerns and give timely recommendations. IBM Research launched its “AI Explainability 360” toolkit, an open collection of algorithms that use a range of techniques to explain AI model decision-making. And the OECD AI Policy Observatory, launched in 2020, helps convert principles into practice for OECD member states.

Not every approach can be a model for other organizations and actors to implement, but some can. “NeurIPS,” the world’s largest AI research conference, requires authors to address the impact of their work on society and any financial conflicts of interest. AI researchers from organizations like Google and OpenAI have recommended implementing “bias bounties” to enable turning principles into practice through third-party auditing. And OpenAI demonstrated the principle of safety over speed when it decided to release GPT-2 in stages, allowing time to consider the ethical implications before fully releasing the model. [ Was this one of the ‘sticking points’ in the Sam Altman firing at OpenAI in November 2023? ]

GOING IN THE RIGHT ‘DIRECTION’?
AI is already helping us to do more with less, but can it help us achieve world peace?

Many have said that AI will be used as a way to prevent war. They point out that AI could do this by tracking weapons and monitoring military activity—detecting threats before they materialize and, hopefully, preventing conflicts from escalating into full-scale wars.
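The monitoring idea can be made concrete with a toy sketch: flag any day whose incident count spikes far above a trailing baseline. The data and threshold below are invented for illustration; real early-warning systems use far richer features, but the statistical core (compare today against a recent baseline) is similar:

```python
# Flag days whose event count is anomalously high relative to the
# preceding `window` days (z-score test). All data is hypothetical.
from statistics import mean, stdev

def flag_anomalies(counts, window=7, threshold=3.0):
    """Return indices whose count exceeds baseline mean + threshold * stdev."""
    flags = []
    for i in range(window, len(counts)):
        base = counts[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and (counts[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

# A quiet baseline with one sudden spike at index 10:
events = [3, 4, 2, 3, 5, 4, 3, 4, 3, 2, 40, 3, 4]
print(flag_anomalies(events))  # → [10]
```

The hard part in practice is not the arithmetic but the data: deciding what counts as an ‘event’, and acting on a flag before the situation escalates.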

Others mention that AI could detect human rights abuses committed by governments or organizations against their people. If a country’s government were found guilty of such atrocities, the evidence could serve as grounds for war-crimes trials against those responsible—deterring future conflicts between nations over human rights issues and saving millions of lives!

HOWEVER, just like the Internet, AI can also be used for ‘BAD’. Even one of the previous examples I just gave can be ‘turned around’: AI could make weapons to eliminate masses of people; AI could cause human atrocities; and AI could create a virus and disseminate it before humans could stop it.

So, as AI becomes more sophisticated, will it be ‘good’ or ‘bad’?

[ VIDEO: Interview with DeepMind founder
https://www.youtube.com/watch?v=CTxnLsYHWuI ]

A MISSING ‘DIMENSION’ IN AI
Mankind’s inability to have peace can be traced back to our ‘ORIGINS’. Adam and Eve disobeyed God and ate from the tree of the knowledge of good and evil (Genesis 2:17; 3:6). Since then, mankind has developed a mix of both good and evil. Technology is a prime example of this—it can be used for incredible good, but also for devastating evil.

The thing is, physical solutions cannot solve spiritual problems. General Douglas MacArthur, after signing the Japanese surrender in World War II, said: “It must be of the spirit if we are to save the flesh.” He recognized that the powerful destruction caused by the atomic bomb would continue to threaten human survival and the ‘core’ of the issue was a ‘spiritual’ one.

In the Bible, the Apostle James tells us that wars and fighting occur because of our human selfishness: “What causes fights and quarrels among you? Don’t they come from your desires that battle within you?” [ James 4:1 ]. No two nations have ever gone to war because they could not agree on how each could give to and serve the best interests of the other. The thing is, Satan, the originator of evil and murder, has a powerful influence on selfish human thinking: “In which you used to live when you followed the ways of this world and the ruler of the kingdom of the air, the spirit who is now at work in those who are disobedient” [ Ephesians 2:2 ].

Now, as I mentioned in last month’s post, world peace WILL ‘ONLY’ COME when Jesus, the Prince of Peace returns to earth (Isaiah 9:6). It is only by following His ‘way’ that we can have peace in the world today and in the new ‘refreshed’ world of tomorrow!

[ FYI: For more details about Jesus being the only ‘way’ humanity will achieve real-world peace, view last month’s “Life’s Deep Thoughts” post:
https://markbesh.wordpress.com/longing-for-peace-v298/ ]

VATICAN AI SUMMIT
It is unusual for tech executives and religious leaders to get together to discuss their shared interests and goals for the future of humanity and the planet. It is even more extraordinary for the world’s three largest monotheistic religions to be represented.

However, that is what happened in January 2023, when the Catholic Church hosted leaders of the Jewish and Islamic faiths as new signatories to the “Rome Call for AI Ethics.” According to a translation of his remarks addressed to the Jewish and Islamic delegations, Pope Francis said:

“In agreeing on promoting a culture that places this technology at the service of the common good of all and of the care of our common home, you are offering an example to many others.

“Fraternity among all is the precondition for ensuring that technological development will also be at the service of justice and peace throughout the world.”

The “Rome Call for AI Ethics” focused on three “Impact Areas”—ethics, education, and rights—and six principles: transparency, inclusion, accountability, impartiality, reliability, and security and privacy.

Microsoft President Brad Smith said:

“I find it adds an extraordinary dimension to the conversation. You can ask whether this was having religious leaders in a technology meeting, or technology leaders in religious conversation; both are true… It forces one to think about and talk about the need to put humanity at the center of everything we do.

I think it’s a good thing. I think it’s a powerful force. It does cause one to reflect a little bit more, and perhaps even think a little bit differently. But ultimately, I think it makes the work we do more purposeful.

It also reminds us that we have a lot of hard problems to stay focused on solving. The bar is raised even further… And to find that these three religions have such a common vision and common message is important and inspirational for all of us who spend so much time thinking about artificial intelligence and how it can best serve the world.”

Smith continued:

“I think it’s clear that the path to responsible AI involves both proactive and self-regulatory steps by responsible companies, and more rules of the road in the form of law and regulation.

I don’t think it would be possible to achieve what the world needs with only one approach or the other. I think it would be naive to expect that everyone in the world who has access to AI will use it only for good. Unfortunately, that’s not human nature. It’s not what history tells us is the path for any technology…

But the more progress the responsible companies can make, and the more efforts we can pursue to build the broad dialogue and big tent that I think AI ethics needs, I think the easier the path will be for law and regulation.”

Smith then said that engineers at Microsoft are developing AI systems in line with the “Responsible AI Standard” the company adopted in 2022. He concluded by saying:

“But the other thing I would add is that I find in both Microsoft and OpenAI a very common and deep commitment to ensure that AI is used ethically and responsibly… I couldn’t imagine a group of people that are more ethical, or responsible, or committed to this cause than the people at OpenAI, that we have been working with for some time. It is not new. And if you read their mission statement, it is absolutely as real for them as it is for us.”

The “Rome Call for AI Ethics” seeks to advance a sense of shared responsibility among international organizations, governments, institutions, and the private sector to create a future in which digital innovation and technological progress are centered around humanity. It invites ‘actors’ from all parts of our global society—public and private, for-profit and nonprofit, and a wide range of cultural, ethical, and religious traditions—to recognize that the development of ethical AI is an issue of pressing concern that impacts people everywhere and to mobilize in support of solutions we can all share.

The idea behind it is to promote a sense of shared responsibility among international organizations, governments, institutions, and the private sector to create a future in which digital innovation and technological progress keep humanity at their center. Pointing to a new “algorithmics,” the signatories committed to seek the development of an artificial intelligence that serves every person and humanity as a whole; that respects the dignity of the human person, so that every individual can benefit from the advances of technology; and that does not have as its sole goal greater profit or the gradual replacement of people in the workplace.

They hope that, in the long term, the values and principles that are instilled in AI will help to establish a ‘framework’ that regulates and acts as a point of reference for digital ethics, guiding worldwide actions and promoting the use of technology to benefit humanity and the environment.

They seek an outlook in which AI is developed with a focus not on technology, but rather on the good of humanity and of the environment, of our common and shared home and of its human inhabitants, who are inextricably connected.

In other words, the “Call” envisions human beings and nature at the ‘heart’ of digital innovation—supported, rather than gradually replaced, by technologies that behave like rational actors but are in no way human. It argues that it is time to prepare for a more technological future in which machines will play a greater role in human life, but also a future in which technological progress affirms the brilliance of the human race and remains dependent on its ethical integrity.

[ FYI: For more details about the “Rome Call for AI Ethics,” view the website:
https://www.romecall.org/the-call/ ]

HARNESSING AI FOR GLOBAL PEACE
In 1953, President Dwight Eisenhower asked the world to join him in building a framework for “Atoms for Peace.” He made the case for a global agreement to prevent the spread of nuclear weapons while also sharing the peaceful uses of nuclear technology for power, agriculture, and medicine. No one would argue the program completely prevented the spread of weapons technology: India and Pakistan used technology gained through Atoms for Peace in their nascent nuclear weapons programs. But it made for a safer world by paving the way for a system of inspections and controls on nuclear facilities, including the establishment of the International Atomic Energy Agency and, later, the widespread ratification of the Nuclear Non-Proliferation Treaty (NPT) in 1968. These steps were crucial for building what became known as the nuclear nonproliferation regime.

The world is facing the most significant surge in violent conflicts since World War II. According to a United Nations briefing, a quarter of the global population is affected by conflict today, and today’s conflicts are more complex than ever.

This new era of rising conflict rates demands an honest reexamination of how we try to achieve peace. Practitioners in the fields of peace and defense must acknowledge that conventional approaches are ‘failing’ one-fourth of humanity today. ‘Disruption’ is urgently needed, and AI just might be able to help achieve world peace.

AI-powered peacebuilding has the potential to revolutionize international peace and security. It could relieve practitioners and policymakers of much of the time- and resource-intensive work of data collection, analysis, and option generation, freeing them to focus on the difficult work of dialogue, negotiation, trust-building, and strategic decision-making. Machine learning and generative AI can learn from data, identify patterns, predict outcomes, and generate new knowledge and solutions.

So, when it comes to the vast realm of challenges and unknowns faced by national governments and multilateral organizations, AI could not only organize the cacophony of information in seconds, but it could also illuminate a symphony of ideas and strategies that have yet to be discovered. With the right decisions made by AI, mankind just might be at the ‘doorstep’ of world peace within our lifetime.

It seems that there is an untapped ‘gold mine’ to be found in AI applications for peace. Opportunities exist in the development of conflict analyses, mediation and negotiation support, peacekeeping, ceasefire monitoring and reporting, and others. Natural language processing can be used across peace processes to facilitate thousands of citizen responses in their local dialects to open-ended questions on their security sector, governance, constitution, transitional justice arrangements, and more.

The integration of this technology would revolutionize how citizens engage in the building of their nations. Peace processes could become faster, more inclusive, more responsive, and less violent, and could overcome cross-language communication barriers. Further research and development in machine learning and natural language processing is essential for working with less-spoken languages and for addressing low literacy rates, restricted technology and Internet access, and overall accessibility.

“Sentiment analysis” is another AI application that could be invaluable to advancing peace. This facet of natural language processing has already been pioneered in the private sector, where companies employ it to discern the emotional tone surrounding a subject—positive, negative, or neutral. It could bring the added benefit of understanding reactions to proposed interventions or current events, informing decision-making.
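To make the idea concrete, here is a minimal lexicon-based scorer. The word lists and sentences are invented for illustration; production sentiment systems are trained models, but the shape of the task (text in, tone out) is the same:

```python
# Count positive vs. negative lexicon hits to label a text's tone.
# The tiny lexicons here are illustrative, not a real resource.
POSITIVE = {"peace", "agreement", "hope", "support", "progress"}
NEGATIVE = {"war", "attack", "fear", "violence", "threat"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("citizens voice hope for the peace agreement"))  # → positive
print(sentiment("renewed violence and fear after the attack"))   # → negative
```

Aggregated over thousands of messages, even a score this crude can surface shifts in public mood around a negotiation, which is the decision-support value the paragraph above describes.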

AI machine learning could also shed ‘light’ on a nation’s most ambitious and elusive goals faster than ever before. For example, while desired forms of governance often emerge at a macro level in peace negotiations, fleshing out the exact details to reach a final settlement and then implement it can be a protracted and expensive process. Machine learning applied throughout a peace process could expedite consensus on a multitude of governance arrangements within weeks, ultimately saving lives and averting further deterioration in health, food access, and economies.

Large Language Models (LLMs)—made famous by OpenAI’s ChatGPT—could also have vast applications in the field of peace and security. LLMs can synthesize existing text—research, reports, social media, and new queries—and complete such analyses in minutes, positioning analysis and decision-making at the forefront of innovation.
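A rough sense of what such synthesis involves can be had from a much simpler, fully transparent baseline: score each sentence by how many frequent terms it contains and keep the top ones. This is not how LLMs work internally; it is the same condensation task in miniature, with made-up example text:

```python
# Frequency-based extractive summarization: keep the n sentences that
# share the most high-frequency terms with the whole document.
import re
from collections import Counter

def summarize(text, n=1):
    """Return the n highest-scoring sentences, in original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    words = re.findall(r'[a-z]+', text.lower())
    freq = Counter(w for w in words if len(w) > 3)  # crude stopword filter
    def score(s):
        return sum(freq[w] for w in re.findall(r'[a-z]+', s.lower()))
    top = sorted(sentences, key=score, reverse=True)[:n]
    return [s for s in sentences if s in top]

report = ("The ceasefire held through the night. "
          "Monitors reported no ceasefire violations. "
          "Markets reopened in the capital.")
print(summarize(report))  # keeps the sentence with the most frequent terms
```

An LLM goes far beyond keeping existing sentences: it can paraphrase, reconcile conflicting reports, and answer follow-up queries, which is why the passage calls the applications ‘vast’.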

AI could also apply machine vision and image recognition to transform peacekeeping and ceasefire monitoring and report incidents without compromising human lives.
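A minimal sketch of that idea: compare two frames of imagery and report where they differ beyond a brightness threshold. The ‘images’ here are tiny hand-made grids standing in for real satellite frames; fielded systems add georegistration, cloud masking, and learned classifiers on top of this core:

```python
# Report pixel coordinates whose brightness changed beyond `threshold`
# between two same-sized grayscale frames (toy stand-ins for imagery).
def changed_pixels(before, after, threshold=30):
    return [(r, c)
            for r, row in enumerate(before)
            for c, value in enumerate(row)
            if abs(after[r][c] - value) > threshold]

before = [[10, 10, 10],
          [10, 10, 10],
          [10, 10, 10]]
after  = [[10, 10, 10],
          [10, 200, 10],  # e.g., a new structure or burn scar
          [10, 10, 12]]   # small change below threshold: ignored
print(changed_pixels(before, after))  # → [(1, 1)]
```

Automating this comparison is what lets monitors watch wide areas continuously without placing observers in harm’s way.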

What Would AI Peace Look Like?
So, the nature of AI is different from that of nuclear technology, but some of the principles that underpin the nonproliferation regime can be applied to combat the dangers of AI.

The “AI for Peace” organization said it could go beyond current efforts by involving the private sector from the get-go and identifying the specific dangers AI presents and the global norms that could prevent those dangers (e.g., what does meaningful human control over smart machines mean in specific contexts?).

The uncertainty surrounding AI’s long-term possibilities makes it difficult to regulate, but the potential for chaos is more tangible. It could be used to inflict catastrophic damage in many aspects of life. (AI-assisted weapons are essentially very smart machines that can find hidden targets more quickly and attack them with greater precision than conventional computer-guided weapons.)

The thing is, as AI becomes incorporated into society’s increasingly autonomous information backbone, it could also pose a risk of catastrophic accidents. When AI becomes pervasive, banking, power generation, and hospitals will be even more vulnerable to cyberattacks. Some speculate that an AI “superintelligence” could develop a strategic calculating ability so superior that it destabilizes arms control efforts.

There are limits to the nuclear governance analogy. Whereas nuclear technology was once the purview only of the most powerful states, the private sector leads AI innovation. States could once agree to safeguard nuclear secrets, but AI is already everywhere—including in every smartphone on the planet.

Controlling the most dangerous uses of technology will require private sector initiatives to build safety into AI systems.

In 2015, Stephen Hawking, Peter Norvig, and others signed an open letter calling for more research on AI’s impacts on society. The letter recognized the tremendous benefits AI could bring for human health and happiness but also warned of unpredictable dangers. The key issue is that humans should remain in control. More than 700 AI and robotics researchers signed the 2017 Asilomar AI Principles calling for shared responsibility and warning against an AI arms race.

The path to governing nuclear technology followed a similar pattern of exchange between scientists and policymakers. Around 1943, Niels Bohr, a famous Danish physicist, made the case that since scientists created nuclear weapons, they should take responsibility for efforts to control the technology. Two years later, after the first use of nuclear weapons, the United States created a committee to deliberate about whether the weapons should become central to U.S. military strategy, or whether the country should forego them and avoid a costly arms race. The Acheson-Lilienthal committee’s proposal to put nuclear weapons under shared international control failed to gain support, but it was one step in a consensus-building process. The U.S. Department of Defense, Department of State, and other agencies developed their perspectives, and U.N. negotiations eventually produced the “Treaty on the Non-Proliferation of Nuclear Weapons” (NPT). Since entering into force in 1970, it has become the most widely subscribed arms control treaty in history with a total of 191 signatory states.

We are in the Acheson-Lilienthal age of governing AI. Neither disarmament nor shared control is feasible in the short term, and the best hope is to limit risk. The NPT was created with the principles of non-possession and non-transfer of nuclear weapons material and technology in mind, but AI code is too diffuse and too widely available for those principles to be the lodestar of AI governance.

So then, what does ‘nonproliferation’ look like in AI? What could or should be prohibited? One popular proposal is a no-kill rule for unassisted AI: humans should bear responsibility for military attacks.

Many of AI’s dangers stem from the possibility that it might act through code too complex for humans to understand, or that it might learn so rapidly as to move beyond human direction and therefore threaten humanity. So, we must consider how these situations might arise and what could be done to preserve human control. However, roboticists say that such existing tools as reinforcement learning and utility functions will not solve the control problem.

Building International Consensus
The NPT was preceded by a quarter century of deliberation and consensus building. We are at the beginning of that timeline for AI. The purpose of treaties and consensus building is to limit the risks of dangerous technology by convincing countries that restraint is in the interests of mankind and their security.

Nuclear nonproliferation agreements succeeded because the United States and the Soviet Union convinced non-nuclear nations that limiting the spread of nuclear weapons was in their interest — even if it meant renouncing weapons while other countries still had them.

Access To Peaceful Technology
It is a tall order to ask countries to buy into a regime that limits their development of a powerful new technology. Nuclear negotiations offered the carrot of eventual disarmament, but what disarmament means in the AI context is not clear. However, the principle that adopting restrictions on AI weapons should be linked to access to the benefits of AI for peaceful uses and security cooperation could apply.

Arms control negotiator William Foster wrote in 1967 that the NPT treaty would “stimulate widespread, peaceful development of nuclear energy.” Why not promise to share peaceful and humanitarian applications of AI—for agriculture and medicine, for example—with countries that agree to participate in global controls?

The foundation of providing access to peaceful nuclear technology in exchange for monitoring materials and technology led to the development of a system of inspections known as safeguards. These were controversial and initially not strong enough to prevent the spread of nuclear weapons, but they took hold over time. A regime for AI inspection and verification will take time to emerge.

As in the nuclear sphere, the first step is to build consensus and identify what other nations want and where the common interest lies. AI exists in lines of code, not molecules of uranium. For publicly available AI code, principles of transparency may help mutual inspection. For code that is protected, more indirect measures of monitoring and verification may be devised.

Finally, nuclear arms control and nonproliferation succeeded as part of a larger strategy (including extended deterrence) that provided strategic stability and reassurance to U.S. allies. America and the Soviet Union—despite their Cold War competition—found common interests in preventing the spread of nuclear weapons. AI strategy goes hand-in-hand with a larger defense strategy.

As at the dawn of the nuclear age, the United States should try to build global consensus in the age of AI to reduce risks and make the world safe for all.

An “AI for Peace” program should articulate the dangers of this new technology, principles (e.g., no-kill, human control, off switch) to manage the dangers, and a structure to shape the incentives for other states (perhaps a system of monitoring and inspection). Our age is not friendly to new treaties but, learning from the nuclear age, we can hopefully foster new norms under which countries agree to limit dangerous technology in exchange for the promise of peaceful benefits for all.

Is ‘Bigger’ Data Needed?
It is accepted within the technology world that more data and more machine learning are always better and that with sufficient data and enough AI, all the world’s problems may be solved. It seems this mindset is increasingly finding its way into the humanitarian and governmental sectors, as they too become infatuated with the utopian dream that sufficient training data and sophisticated enough algorithms will bring world peace and prosperity. However, is this really true?

Many say that all we need is more “big data” and “AI” and we can entirely prevent conflict and that with larger global social media platforms all the world’s peoples can lock arms in glorious happiness—finally achieving world peace and prosperity.

So, would bigger data and more machine learning algorithms truly solve all the world’s problems?

Well, we are already awash in data and machine learning today. Nearly every action we take in our modern society is guided by all-powerful algorithms fed by an all-seeing surveillance state that has created an Orwellian world in which we are told by machines what to say, see, and do, banished from digital existence if we do not obey.

Yet, as our world has become more data-driven and more algorithmically mediated, WE HAVE NOT cheered these developments on as the dawn of a new era of peace and prosperity. Rather we have condemned the censorship, oppression, and unequal and discriminatory control they have wrought and fought for new legislation that would protect us from further moderation, mining, and manipulation.

Most importantly, we have learned that data is imperfect and that when imperfect data is used to construct an imperfect understanding of the world, very ‘bad’ things can happen.

Of particular relevance to the humanitarian sector, we have learned that automation’s impact on displacing human jobs and upending the global economy cannot be overstated. Many don’t see AI in the workplace as a benefit, but as a threat that will displace many jobs.

So, what about social media? Is it necessarily the path to world peace? Well, the results thus far are NOT ‘promising’!

Rather than bring us together, social media has split us apart, amplifying the most hateful and toxic corners of society to overwhelm all other speech and bringing to the forefront how we are different rather than alike.

Simply adding more ways for society to tear itself down is NOT ‘LIKELY’ to lead to peace!

So, putting this all together, what is needed is new ‘approaches’. We don’t need bigger data, we need better data: data that is representative of the world’s diversity. We need machine learning algorithms that complement rather than compete with our humanity, and social platforms that foster debate rather than division and emphasize our commonalities rather than our contradictions.

In the end, we must recognize that technology IS ‘NOT’ always the solution to the world’s problems. Sometimes it is the ‘CAUSE’ of them!

Promise And Perils
Over the past decade, artificial intelligence has advanced rapidly. It has made possible the development of fully autonomous weapons systems that can select, attack, kill, and wound human targets without effective human control. The concept of fully autonomous weapon systems is highly controversial. While the US government considers their deployment a national imperative, the movement against them is growing bigger and stronger. In March 2019, UN Secretary-General António Guterres convened an AI expert meeting to push ahead with restrictions on the development of lethal autonomous weapons systems.

“Autonomous machines with the power and discretion to select targets and take lives without human involvement are politically unacceptable, morally repugnant, and should be prohibited by international law”, he said.

At the same time, AI and robotics researchers sent an open letter, advocating against autonomous weapons and the AI arms race and claiming that AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Jody Williams, 1997 Nobel Peace Prize Laureate, founding coordinator of the “International Campaign to Ban Landmines” said, “Maybe robotics and AI are inevitable, but applying them to kill human beings on their own is not inevitable, unless you do nothing, and we refuse to do nothing.”

The fast spread of technology, the availability of “big data,” and lower costs of processing and storage have allowed AI to find greater application not only in war but in many other fields, including peacebuilding. For some time now, the most widespread technology used in peacebuilding has been high-resolution satellite, drone, and aerial imagery, enabling us to see if a building has been damaged or destroyed and enabling early action for those in urgent need. “Social listening,” the monitoring of digital conversations, has become important and valuable in understanding the voice of citizens, their needs, and their grievances, and it helps improve functionality and access to services. These tools have allowed peacebuilders to communicate with more people in more ways, collecting better information and sustaining relationships with local stakeholders and populations.

Going a step further, AI offers the possibility of analyzing collected data about political, social, institutional, and economic variables, along with satellite and drone imagery, enabling us to recognize potential conflicts early on and making early warning and response efforts more efficient than ever before. Predicting where growing political, national, ethnic, or religious tensions might grow into open conflict is extremely difficult. However, researchers believe that AI might help analyze vast information from potential conflict zones to predict where peacebuilding operations should be focused. Researchers from the Peace Research Institute Oslo (PRIO) piloted VIEWS (Violence and Impacts Early-Warning System), a political violence early warning system that produces monthly forecasts for 36 months at the country and sub-national level for state-based conflict, non-state conflict, and one-sided violence in Africa. Both the data and the computational power are available today to create more accurate predictions and scale them. The Embedded Networks Lab and Warwick University combine machine learning and traditional modeling to predict the size of a conflict, where it might take place, and how soon, and to inform those who should take timely action, such as the United Nations Peacekeeping Forces.
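The actual VIEWS models are far more sophisticated, but the basic shape of such an early-warning score can be sketched as a logistic combination of structural indicators. Everything below (the indicator names, weights, and bias) is invented for illustration and is not taken from any real forecasting system:

```python
import math

# Hypothetical indicators and weights, invented for illustration only.
WEIGHTS = {
    "recent_conflict_events": 1.2,  # past violence is typically the strongest predictor
    "political_instability": 0.8,
    "economic_shock": 0.5,
    "neighboring_conflict": 0.4,
}
BIAS = -3.0  # baseline: most country-months see no new conflict

def conflict_risk(indicators: dict) -> float:
    """Combine indicators into a probability-like score in (0, 1) via a logistic."""
    z = BIAS + sum(w * indicators.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

stable = {"recent_conflict_events": 0, "political_instability": 0.2}
fragile = {"recent_conflict_events": 2, "political_instability": 0.9,
           "economic_shock": 1.0, "neighboring_conflict": 1.0}
print(f"stable:  {conflict_risk(stable):.2f}")   # low risk
print(f"fragile: {conflict_risk(fragile):.2f}")  # much higher risk
```

Real systems fit such weights from decades of historical conflict data rather than setting them by hand, and they forecast at a much finer spatial and temporal grain.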

The thing is, the same technologies can be used for good, or if in the wrong hands can be used as weapons. Any technology can be hacked, weaponized, and used in a way we didn’t intend. This is why we need to make sure that, even when we have good intentions, we are not creating unintended consequences. The world needs a ‘framework’ for the ethical and safe deployment of artificial intelligence in conflict and peacekeeping settings. We need to define what constitutes the ethical use of AI and embed those ethical standards in innovative global governance systems based on international law.

We need those ethical standards to become part of mandatory curricula for AI engineers already in the education phase. We also need social scientists, activists, and policymakers knowledgeable in the AI field, and able to ask the right questions at the right time. In that way, we can proactively detect unintended uses and consequences of AI and take timely action. We need to look for and be aware of negative externalities and design safeguards against them.

We have a lot to fear and must be aware of all the possible negative impacts and potential misuse of AI: from drone swarms, equipped with high-grade artillery, that can serve as sophisticated weapons of mass destruction, to facial recognition used for surveillance and violations of human rights, to AI-enabled deepfakes of videos, images, and text. But there is also a lot to hope for. Studies have already proven AI’s great potential to make a real difference across a range of social domains, but realizing that potential in peacebuilding requires decisive action by governments, international humanitarian and development actors, tech companies, nonprofits, and citizens themselves.

So, even though peacebuilders see AI as a promise to end war and create lasting peace, they are also aware of its many perils.

Double-edged ‘Sword’?
During the UN Security Council meeting on July 18, 2023 (9381st), Yi Zeng of the “Institute of Automation at the Chinese Academy of Sciences” pointed out that current AI systems are information-processing tools that, while seemingly intelligent, are without real understanding. “This is why they, of course, cannot be trusted as responsible agents that can help humans to make decisions,” he emphasized. Both near-term and long-term AI will carry a risk of human extinction simply because “we haven’t found a way to protect ourselves from AI’s utilization of human weakness,” he said.

Now, AI definitely provides transformative opportunities for addressing global challenges, while also posing risks like intensifying conflict through the spread of misinformation and malicious cyber operations. Many experts, recognizing the technology’s military applications, underscored the imperative to retain the element of human decision-making in autonomous weapons systems. Others also stressed the need to establish an ethical, responsible framework for international AI governance.

So, AI can either contribute to or undermine peace efforts. It is humanity’s responsibility to promote and make the most of technological development as a facilitator of peace.

AI is a double-edged ‘sword’: whether it is good or evil depends on how mankind uses and regulates it, and on how the balance is struck between scientific development and security. This requires the international community to put ethics first and ensure that technology always benefits humanity.

Barriers, ‘Drivers’, And Decision-making
So how can AI help humanity build peace? Well, three ways come to mind: overcoming ‘barriers’, anticipating ‘drivers’ of conflict, and advancing decision-making.

The first is overcoming cultural and language ‘barriers’. By teaching computers to understand human language and the nuances of dialects, not only can we better link up what people write on social media to local contexts of conflict, but we can also more methodically follow what people say on radio and TV. As part of the UN’s early warning efforts, this can help us detect hate speech in a place where the potential for conflict is high. This is crucial because the UN often works in countries where internet coverage is low, and where the spoken languages may not be well understood by many of its international staff.

Natural Language Processing algorithms can help to track and improve understanding of local debates, which might otherwise be ‘blind spots’. With Machine Learning, large-scale digital focus groups, with thousands of participants in real time, could enable different demographic segments in a country to voice their views on, say, a proposed peace deal, instantly testing public support and indicating the chances of sustainability.
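For a sense of what hate-speech monitoring of transcripts involves at its simplest, here is a toy lexicon-based monitor. Real systems use trained multilingual classifiers rather than word lists; the lexicon, threshold, and sample transcripts below are all invented for illustration:

```python
import re

# Illustrative lexicon only; a production system would rely on trained
# multilingual classifiers, not a hand-written word list.
INFLAMMATORY = {"traitors", "vermin", "exterminate", "invaders"}

def alert_level(transcript: str, threshold: float = 0.05) -> tuple[float, bool]:
    """Return (density of inflammatory terms, whether it crosses the threshold)."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    if not tokens:
        return 0.0, False
    hits = sum(1 for t in tokens if t in INFLAMMATORY)
    density = hits / len(tokens)
    return density, density >= threshold

calm = "The council met today to discuss the new water project for the region."
hostile = "They are traitors and vermin and the invaders must go."
print(alert_level(calm))     # low density, no alert
print(alert_level(hostile))  # high density, alert raised
```

In practice the hard problems are the ones this sketch ignores: coded language, dialect variation, sarcasm, and the risk of false positives suppressing legitimate speech.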

The second is anticipating the deeper ‘drivers’ of conflict. For instance, many parts of the world are experiencing severe groundwater withdrawal and water aquifer depletion. Water scarcity, in turn, drives conflicts and undermines stability in post-conflict environments, where violence around water access becomes more likely, along with large movements of people leaving newly arid areas.

The third is advancing ‘decision-making’. In the work of peace and security, it is surprising how many consequential decisions are still made solely based on ‘intuition’.

Yet, complex decisions often need to navigate conflicting goals and undiscovered options, against a landscape of limited information and political preference. This is where Deep Learning can help us to generate models of our uncertain, dynamic world with limited data.

With better data, better predictions would eventually guide more complex decisions. Future senior peace envoys charged with mediating a conflict would benefit from such advances to stress-test elements of a peace agreement. Of course, human decision-making will remain crucial, but it will be informed by more robust, evidence-driven analytical tools.

Current Applications Of AI In Peacebuilding
One example of AI being used for peacebuilding is the UN’s engagement of AI in peace negotiations in conflict zones, through partnerships with AI companies like “Remesh.” The potential for AI systems to sift through thousands of data points in dozens of languages enables the UN to engage populations in the conflict zones in what they call large-scale digital dialogues. Respondents are asked questions and the “responses go through an algorithm that clusters answers with similar meanings”. The AI technology identifies areas of priority for respondents, as well as areas where populations might potentially be able to find common ground.
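Remesh’s actual algorithm is proprietary, but “clustering answers with similar meanings” can be sketched with a simple bag-of-words similarity measure and a greedy grouping rule. The threshold and sample answers below are invented for illustration:

```python
import re
from collections import Counter
from math import sqrt

def vectorize(text: str) -> Counter:
    """Bag-of-words representation of a response."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(responses: list[str], threshold: float = 0.35) -> list[list[str]]:
    """Greedy clustering: join a response to the first cluster it resembles."""
    clusters: list[list[str]] = []
    for r in responses:
        for c in clusters:
            if cosine(vectorize(r), vectorize(c[0])) >= threshold:
                c.append(r)
                break
        else:
            clusters.append([r])
    return clusters

answers = [
    "We need safe roads to the market",
    "Roads to the market are not safe",
    "Schools should reopen quickly",
    "Please reopen the schools",
]
for group in cluster(answers):
    print(group)
```

A production system would use learned sentence embeddings instead of raw word counts, so that responses with the same meaning but no shared words still land in the same cluster.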

Research is also being undertaken to explore how AI solutions could be engaged as a tool to predict political instability, war crimes, and mass atrocities. Other similar uses for AI could include detecting the use of banned weapons and ammunition in combat videos posted online.

The “United States Institute of Peace” is forming an “AI Working Group” of peacebuilders from across the world to understand the implications of AI for peace and security and develop actionable guides, studies, and other tools to help practitioners while mitigating its risks for conflict.

[ For more details about the “United States Institute of Peace,” view their website:
https://www.usip.org/ ]

In 2018, the UN Secretary-General identified responsible innovation in science and technology as an approach for academia, the private sector, and governments to work on mitigating the risks that are posed by new technologies.

This initiative is conducted in partnership with the UN Office for Disarmament Affairs (UNODA).

[ For more information on the initiative, consult the initiative’s website:
https://disarmament.unoda.org/responsible-innovation-ai/about/ ]

UN Secretary-General António Guterres underscored AI’s potential to “turbocharge global development” and realize human rights, particularly to health and education, while also acknowledging “evidence that AI can amplify bias, reinforce discrimination and enable new levels of authoritarian surveillance.”

He also reiterated, in July 2023, the recommendations on AI governance put forward in his Our Common Agenda (OCA) policy brief titled, “A New Agenda for Peace.”

[ For more details about the United Nation’s “A New Agenda for Peace” agenda, view its web page:
https://dppa.un.org/en/a-new-agenda-for-peace ]

A New ‘Era’ For Peacebuilding?
At this embryonic phase of development, it is hard to predict the exact impact and trajectory of technologies enabled by AI. Yet these technologies might stimulate a civilizational transformation comparable with the invention of electricity.

AI applications will change many aspects of the global economy, security, communications, and transportation by altering how humans work, communicate, think, and decide. Intelligent machines will either team up with or replace humans in a broad range of activities. Such a drastic shift will boost the social, economic, and political influence of those with game-changing capabilities, while the losing sides could face significant challenges.

Because of the tremendous increase in the development of AI, mastery of AI will determine a country’s future strategic effectiveness in military matters, as well as its performance, competitiveness, and ability to deter adversaries.

AI will change how wars are planned and fought. Because disruptive technologies provide new tools for totalitarian regimes and extremist groups, countries need to develop solutions to mitigate the malicious use of intelligent machines.

In the age of AI, big data, and robotics, concept development will be more important than ever. This will be an unending task because new concepts will need to constantly change to keep up with countermoves such as adversarial algorithms and data-poisoning attempts, which involve feeding adversarial data to AI systems.
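A data-poisoning attempt can be demonstrated on a deliberately tiny model. The sketch below uses a nearest-centroid classifier with made-up 2-D points; real attacks target far more complex systems, but the mechanism, injecting mislabeled training examples to shift the learned decision boundary, is the same in spirit:

```python
# Nearest-centroid classifier on made-up 2-D feature points.
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def predict(x, centroids):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

clean = {"safe":   [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
         "threat": [(5.0, 5.0), (6.0, 5.0), (5.0, 6.0)]}

# The attacker injects benign-looking points mislabeled as "threat",
# dragging that class's centroid toward the "safe" region.
poisoned = {"safe": clean["safe"],
            "threat": clean["threat"] + [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]}

query = (2.5, 2.5)  # a borderline observation
clean_pred = predict(query, {k: centroid(v) for k, v in clean.items()})
poisoned_pred = predict(query, {k: centroid(v) for k, v in poisoned.items()})
print(clean_pred, "->", poisoned_pred)  # the same input is now classified differently
```

Defenses against this include auditing training data provenance and discarding outliers before retraining, which is part of why the concept development mentioned above is an unending task.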

More breakthroughs seem imminent. Advances in neuroscience, behavioral biology, and other fields will enable new technological leaps such as human-machine teaming and increased autonomy in military systems.

Human-machine collaboration is likely to bring about faster and better decision-making by enabling enhanced management of massive data streams. Humans and AI systems have very different decision-making mechanisms, which result in completely different kinds of errors when they fail. By combining the strengths of humans and machines, it may be possible to eliminate those weaknesses. However, AI technology needs oversight to prevent potential abuses and unintended consequences.

In 2018, a consortium of U.S. and European experts from industry, civil society, and research institutions published a report that outlined three areas of concern.

The first is the ‘DIGITAL’ security domain, in which the report warned of potential AI vulnerabilities that would allow adversaries to stage large-scale, diversified attacks on physical, human, and software targets.

The second is the ‘PHYSICAL’ security domain, in which the availability and weaponization of autonomous systems cause major challenges. Cyber and physical attacks on autonomous and self-driving systems and swarm attacks—coordinated assaults by many agents on multiple targets—are other potential threats.

Thirdly, there are significant risks to ‘POLITICAL’ security. AI-enabled surveillance, persuasion, deception, and social manipulation are threats that will intensify in the near future, and they may strengthen authoritarian and discriminatory political behavior and undermine democracies’ ability to sustain truthful public debates.

AI is also likely to cause large-scale economic and workforce shifts. Crucially, it is changing how geopolitical competition plays out.

So, with potential threats to cybersecurity as well as technological limitations, it is important to recognize that even the positive operationalization of AI technology is not without issues. This area of technology is still in its nascent state, and there is certainly a need for more robust regulation and oversight.

Despite this, the potential that AI has to enable peacebuilders to innovatively tackle problems is remarkable. The advent of AI technology should be greeted with cautious optimism; if handled appropriately, it may well have the potential to unlock a new dimension of peacebuilding.

The Institute for Economics & Peace (IEP) is an independent, non-partisan, non-profit think tank dedicated to shifting the world’s focus to peace as a positive, achievable, and tangible measure of human well-being and progress. IEP achieves its goals by developing new conceptual frameworks to define peacefulness; providing metrics for measuring peace; and uncovering the relationships between business, peace, and prosperity as well as promoting a better understanding of the cultural, economic, and political factors that create peace.

Every year, they produce the “Global Peace Index” (GPI) that measures peace around the world in four categories: Results & Trends; Economic Impact; Conflict Hotspots; and Positive Peace. The 2023 report—the 17th edition—ranks 163 independent states and territories according to their level of peacefulness.

This year’s results found that the average level of global peacefulness deteriorated by 0.42 percent. This is the thirteenth deterioration in peacefulness in the last fifteen years, with 84 countries improving and 79 deteriorating in peacefulness in 2022.

The 2023 GPI also showed that over the last 15 years, the world has become even less peaceful, with the average country score deteriorating by five percent. Of the 163 countries in the GPI, 95 recorded deteriorations, 66 recorded improvements and two recorded no change in score. Sixteen of the 23 GPI indicators deteriorated between 2008 and 2023 while eight improved.

So, the need for a systemic response to building peace is urgent. Conflict is intensifying in several regions, with conflict-related deaths rising rapidly. The gap between the most and least peaceful countries continues to grow, and although many measures of militarization have improved over the past fifteen years, the proliferation of cheaper advanced military technologies, increasing geopolitical competition, and an underlying current of political instability in many countries means that a continuing deterioration of global peacefulness seems likely.

[ FYI: To view all the details of the “Global Peace Index,” click on the link below to download the PDF:
https://www.visionofhumanity.org/wp-content/uploads/2023/06/GPI-2023-Web.pdf ]

All this to say, could the use of AI potentially offer a unique opportunity for peacebuilders, despite the dangers and uncertainty posed by its future development? Well, peacebuilding organizations are increasingly seeing the enormous potential and benefits that AI offers as a tool for peace, and the chance to revolutionize peacemaking and achieve a world without violent conflicts in our lifetime is very real.

Some say that embracing AI for peace is not merely a choice—it is a necessity, and the time to do it is now.

COULD AI ‘PREVENT’ FUTURE WARS?
There is growing concern that current mediation and peacemaking approaches are proving less and less effective in resolving localized and international conflicts. In response, interest is rising in the potential for AI to play a ‘role’ in advising on or resolving complex conflicts.

A combination of situational complexity, intractable positions of opposing sides, and escalating costs are driving the search for AI-based approaches that could replace humans in resolving legal cases, international disputes, and military conflicts.

Now, usually, the goal of AI is to develop computer algorithms that replicate the way humans think when processing language, solving problems, and analyzing large amounts of data to extract relevant information. The AI practitioners’ holy grail is to develop artificial general intelligence (AGI), which is as smart as humans in most domains of activity. However, as I have mentioned, MANY technologists are concerned that AGI—or ASI—will evolve to a ‘level’ that humans will not be able to control. For example, the notion of autonomous weapons making their own strike decisions could become a reality, raising many dystopian scenarios of what could happen if AI ‘goes rogue’.

However, a more ‘positive’ view is that AI could advance the pursuit of peace in situations where opposing parties agree to use such tools and can satisfy themselves that any potential biases have been eliminated in the training process.

One of the challenges in traditional conflict-resolution approaches is that there can be asymmetry of information possessed by opposing parties, leading to unbalanced assessments by mediators. The capacity of AI to access and analyze large volumes of data at speed allows it to scan a wide range of sources and present the fullest picture possible to all involved. While this alone won’t resolve disputes, it can create a fairer and more transparent starting point for negotiations.

A second potential role for AI lies in generating a good understanding of the conflict and the parties to the conflict, again using historical information, the parties’ accounts, and “sentiment analysis” of social media posts by supporters of both sides.
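Production “sentiment analysis” relies on trained multilingual models; the minimal lexicon scorer below only illustrates the underlying idea (the word lists and sample posts are invented for the example):

```python
import re

# Illustrative sentiment lexicons, invented for this example.
POSITIVE = {"peace", "agreement", "hope", "support", "ceasefire"}
NEGATIVE = {"war", "attack", "hate", "revenge", "betrayal"}

def sentiment(post: str) -> float:
    """Score in [-1, 1]: share of positive minus negative sentiment words."""
    tokens = re.findall(r"[a-z']+", post.lower())
    pos = sum(1 for t in tokens if t in POSITIVE)
    neg = sum(1 for t in tokens if t in NEGATIVE)
    total = pos + neg
    return (pos - neg) / total if total else 0.0

posts = [
    "There is hope the ceasefire will hold",
    "They will pay, we want revenge for this attack",
]
for p in posts:
    print(f"{sentiment(p):+.2f}  {p}")
```

Aggregated over thousands of posts per faction over time, even a crude score like this can show whether public mood is hardening or softening, which is the signal a mediator actually cares about.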

A third contribution of AI lies in creating greater inclusivity of mediation processes—pulling in the views of a wider cross-section of the affected populations, geographic neighbors of the opposing factions, and independent institutions that may have previously played peacekeeping and monitoring roles.

Other ‘Issues’
AI can lead to “discrimination, interference in elections, the rise of a surveillance society, digital exclusion and the exacerbation of an individualism increasingly disconnected from society. All these factors risk fueling conflicts and hindering peace.”

Transparency in criteria is another issue. ‘Intelligent’ machines may perform the tasks assigned to them with ever greater efficiency, but the purpose and the meaning of their operations need to be determined, or ‘enabled’, by human beings possessed of their own universe of values.

There is a risk that the criteria behind certain decisions will become less clear, responsibility for those decisions concealed, and producers enabled to evade their obligation to act for the benefit of the community.

The increased use of AI by the armaments industry has resulted in an even more cold and detached approach to the immense tragedy of war.

Autonomous weapon systems can never be morally responsible subjects. The unique human capacity for moral judgment and ethical decision-making is more than a complex collection of algorithms, and that capacity cannot be reduced to programming a machine.

What is more, we cannot ignore the possibility of sophisticated weapons ending up in the wrong hands, facilitating, for instance, terrorist attacks or interventions aimed at destabilizing the institutions of legitimate systems of government.

There’s some way to go before AI is accepted by opposing parties and adopted routinely in conflict resolution. However, advances in the underlying technology and the shortcomings of today’s mediation approaches could open the door to a new era of more acceptable and faster AI-based routes to conflict resolution.

COULD AI HELP CREATE A ‘UNIVERSAL GLOBAL PEACE TREATY’?
Authors Elias G. Carayannis (George Washington University) and John Draper (The Center for Global Nonkilling) published an article in the journal “AI & Society: Knowledge, Culture and Communication,” which seeks to promote an understanding of the potential, transformative impacts and critical consequences of pervasive AI technology for societies. The following are selected excerpts from their paper.

They argue that an artificial superintelligence (ASI) emerging in a world where war is still normalized constitutes a ‘CATASTROPHIC’ EXISTENTIAL RISK, either because the ASI might be employed by a nation–state to wage war for global domination, i.e., ASI-enabled warfare, or because the ASI wars on behalf of itself to establish global domination, i.e., ASI-directed warfare.

An ASI-directed/enabled future conflict could trigger total war, including nuclear conflict, and is therefore high risk. They advocate risk reduction by optimizing peace through a Universal Global Peace Treaty (UGPT), contributing towards the ending of existing wars and prevention of future wars, as well as a Cyberweapons and Artificial Intelligence Convention. This strategy could influence state actors, including those developing ASIs, or an agential ASI, particularly if it values conforming instrumentalism and peace.

The problem of a warring artificial superintelligence (ASI) is that a single nation–state developing an AGI could ‘lock in’ economic or military supremacy as an ‘end point’ to competition in international politics, as that state would be able to prevent a rival AGI from being developed and, through accumulating power, would establish global domination.

Military AI is potentially revolutionary as it could outstrip the pace of human decision-making, “potentially resulting in a loss of human control in warfare.” It also constitutes an unpredictable threat, risking inadvertent escalation or miscalculation.

Former Navy Secretary Richard Danzig noted that “If humanity comes to recognize that we now confront a great common threat from what we are creating, we can similarly open opportunities for coming together.” In this cooperative spirit, we constrain the existential risk with the stratagem of peace-building by treaty.

They hypothesize that militarizing AI introduces the risk that ASI development is weaponized, or weaponizes itself.

Now, there is, in development, a “Universal Global Peace Treaty” (UGPT), backed by the peacebuilding NGOs behind the United Nations’ “Global Ceasefire” and by the “Universal Global Peace Treaty Project.”

It would formalize the present quasi-universal status of interstate peace and end the declaration of war. It would also seek to end existing interstate hot and cold wars, as well as internal or civil wars, which might prove to be flashpoints for a future global conflict; to prevent a pre-emptive war against a non-malign emerging ASI; and to constrain the future actions of both a malign and an intrinsically non-malign but malignly directed ASI, preventing it from warring on behalf of a nation–state, or on behalf of itself, for global domination—which they term ASI-enabled/directed war.

Existential Risks
That an ASI could pose an existential risk is well theorized. The basic thesis is, first, an initial superintelligence might obtain a decisive strategic advantage such that it establishes a ‘singleton’, i.e., global domination. Second, the orthogonality principle suggests that a superintelligence will not necessarily share any altruistic human final values. Third, instrumental convergence suggests that even a superintelligence with a positive final goal might not limit its activities so as not to infringe on human interests, particularly if humans constitute potential threats.

Consequently, an ASI might turn against humanity (‘the treacherous turn’) or experience a catastrophic malignant failure mode, for instance through perversely instantiating its final goal or pursuing infrastructure profusion. Additionally, Swedish philosopher Nick Bostrom noted that a superintelligence might hijack infrastructure and military robots to create a powerful military force and surveillance system. He acknowledged the existential risks associated with the lead-up to a potential intelligence explosion, due to “war between countries competing to develop superintelligence first.”

Bostrom says that the world is not adequately governed to prevent many existential risks, including those from AI. Yet, the threat is manifest.

Russian scientist Alexey Turchin and American professor David Denkenberger presented an article titled “Classification of Global Catastrophic Risks Connected with Artificial Intelligence.” In it, they associated two risks with the ‘treacherous turn’ stage of a ‘young’ ASI. One is that malevolent humans (here, a hegemonizing nation-state) use the ASI as a doomsday weapon for global blackmail, or to maintain or establish global domination. The second is that a non-aligned ASI renounces altruistic values and eliminates humans via war to establish global domination. These risks are related, in that military AI leads to a militarized ASI, which may lead to the ASI warring against humanity.

External Risk
The external risk is predicated on a nation-state developing and using an ASI to optimize itself and wage war, whether cyber, hot, or otherwise, for global domination, i.e., war by an AI state. Such an ASI would effect military technological supremacy and transform both international relations (IR) and warfare.

A “one AI” solution to the ‘control problem’ of ASI motivation includes the first ASI being used to assume global control, providing a decisive strategic and military advantage for a superpower. While this may be acceptable to the AI-state superpower and its allies, it presents a ‘high risk’ for many others.

So, the ‘race’ to develop an ASI is likely to be closely fought, especially given competing major states with different fundamental ideologies—it, therefore, presents a very ‘concrete’ risk!

ASI-enabled warfare poses special risks to geopolitical stability. ASIs could be used to develop advanced weapons, plan military operations, and effect political takeovers.

A true ASI will likely be a self-interested agent whose relationship with humanity could be delicate.

Any AI system, that has sub-goals of its long-term existence or unbounded goals that would affect the entire surface of the Earth, will also have a sub-goal to win over its actual and possible ‘rivals’. This sub-goal requires the construction of all needed instruments for such a win, which is bounded by space or time.

The thing is, many feel that when an ASI is developed, its independence is almost inevitable no matter the internal control mechanism!

‘Militarized’ ASI
Any ASI will result from recursive self-improvement, and an ASI will possess a goal set, most notably to persist and self-improve. AGI will evolve several basic drives, or universal sub-goals, to optimize its main goal, including resource acquisition maximization and self-preservation.

Many possible terminal goals also imply ASI global domination. For instance, a ‘benevolent’ ASI would aim to reach everyone, globally, to protect them, e.g., from other ASIs. It would reason that if it does not develop a world domination sub-goal, its effect on global events would be minor, thus its existence inconsequential. World domination could be sought first through cooperation.

However, AI may tend towards resource-hungry behavior, even with benevolent initial goals, especially when in rivalry. Essentially, any ASI adoption of unbounded utilitarianism means it postpones what may be benevolent final goals for expansionism.

This implicates the will to act, and any agent in a real-world ethical situation, even in minimizing harm, is making decisions that involve humans dying. A ‘young’ ASI that understands that any action or inaction is partly responsible for human suffering might just justify conflict as a ‘just war’, or ‘preventive’ war. Thus, it will learn to direct the use of weapons and so conduct warfare.

So, there is a ‘SUBSTANTIAL’ RISK of AI converging towards advanced military AI, which converges towards an ASI optimized for war rather than for cooperation, negotiation, or altruistic ‘friendliness’, and of that ASI then engaging in war. The existential risk increases after the ASI obtains global domination on behalf of its nation–state, as it could then turn on its ‘owner’. The authors then examine why political subversion means no existing AI control features will constrain the existential risk.

‘Control’ Features
So, why not just imbue a young ASI with ‘friendly’ goals, i.e., beneficial goals reflecting positive human norms and values? Well, any introduction of human social values adds enormous complexity, making AI control a ‘wicked problem’.

American AI researcher Eliezer Yudkowsky recommended a nascent ‘seed AI’ that would be programmed to study human nature and then produce the ASI which humanity would have wanted if it had been able to build such a machine by itself; that ASI should have altruism as a ‘core’ goal. However, because humans can be hypocritical, any kind of counterfactual moral programming is problematic.

Then, it has been suggested that an intelligent agent (IA) system for peacekeeping, reliant on interrelationships between diverse advanced national or regional IAs, requires three conditions: (i) continuous and stable operations; (ii) “an intervention method for maintaining peace among human societies based on a common value”; and (iii) the minimum common value itself. The authors propose that world peace, by treaty, be the minimum common value, while the intervention method remains the UN Charter’s Article 2.

Now, they throw a ‘wrench’ into the works: no matter the hopes of contemporary AI researchers, politicians will impose their vision of what a ‘coherent extrapolated volition’ or normative principles should look like for their ‘own’ ASI, introducing an objectively irreconcilable conflict of interest with another nation-state’s politicians, potentially for malign reasons (global domination).

Given their limited lifespans and nationalistic goals, politicians might, instead of endorsing reciprocal alliance, deliberately politically subvert an ASI and/or malignly direct it to win an ideological or actual war. That is, politicians may attempt to weaponize a civilian project to create an altruistic mind with a self-validating goal system by diverting a ‘super goal’ towards a military project to create a specific tool, i.e., a superweapon, thereby decreasing the chances that the AI will be benevolent and increasing the chances that it will be risk-prone, motivated by accumulating power, and interested in preserving or obtaining both global technological supremacy and global domination.

Effectively, politicians could influence programmers to subvert carefully engineered local AI control features, such as AI ethical injunctions based on universal values of social cooperation, introducing ‘noise’ into what was previously a ‘friendly’, cleanly causal goal system. The ASI may not be able to resolve the introduced incoherence for some time, resulting in a philosophical crisis over whether to believe the initial programmers or the politicians’ programmers. The result could be a ‘conflicted’ ASI and a non-recoverable error whereby it adopts an adversarial attitude, one based on coercive persuasion and control.

‘Mitigating’ Risk By Treaty
One suggestion for constraint is treaty-based approaches to mitigating an ASI risk.

ASI risk mitigation by treaty would be a ‘social measure’ to constrain risk from ASI-enabled or directed warfare that would secure long-term peace—establishing that only altruistic ASIs be created.

Others have suggested creating a ‘net’ of ASIs policing each other, which would reduce the level of enmity between organizations and countries, prevent conventional arms races and military build-ups, increase information exchange and levels of openness, and change social attitudes toward the problem, raising awareness of the idea of AI safety.

So, global treaties could certainly play a role in controlling ASIs. However, treaties also have their ‘issues’.

Even though treaty adherence signals a meaningful preference for long-term restraint, legal realists argue that states sign due to instrumental, self-interested convenience and then ignore treaties when compliance costs outweigh the benefits. A pure realist explanation for major states signing is that they obtain the ‘expressive’ rewards of public acceptance while weighing the cost of compliance against the benefits on a recurrent, case-by-case basis.

Rational institutionalists hold that states self-interestedly build international laws to establish shared expectations of behavior or develop ‘common conjectures’ (a game-theory-derived notion of law as a fusion of common knowledge and norms).
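The ‘common conjecture’ idea can be illustrated with a toy coordination game. The sketch below is purely illustrative and not from the authors’ paper: two states play a hypothetical “Stag Hunt” with made-up payoffs, where both mutual compliance and mutual defection are stable (Nash) equilibria, and a treaty supplies the shared expectation that lets both sides coordinate on the better one.

```python
# Toy illustration (hypothetical payoffs): a treaty as a shared expectation
# in a two-state coordination game (Stag Hunt).

C, D = "comply", "defect"

# payoffs[(a, b)] = (payoff to state A, payoff to state B)
payoffs = {
    (C, C): (4, 4),  # both honor the treaty: lasting peace (best for both)
    (C, D): (0, 3),  # A complies alone: exploited
    (D, C): (3, 0),  # B complies alone: exploited
    (D, D): (2, 2),  # mutual defection: arms race
}

def is_nash(a, b):
    """(a, b) is a Nash equilibrium if neither state gains by deviating alone."""
    pa, pb = payoffs[(a, b)]
    best_a = all(pa >= payoffs[(x, b)][0] for x in (C, D))
    best_b = all(pb >= payoffs[(a, y)][1] for y in (C, D))
    return best_a and best_b

equilibria = [(a, b) for a in (C, D) for b in (C, D) if is_nash(a, b)]
print(equilibria)  # → [('comply', 'comply'), ('defect', 'defect')]
```

Since both outcomes are self-enforcing, which one obtains depends on what each state expects the other to do; in this stylized view, a widely ratified treaty creates the common knowledge needed to coordinate on the Pareto-superior (comply, comply) outcome.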

The problem of discerning states’ true intentions towards peace treaties may be a critical obstacle to the UGPT.

So, a state’s political and strategic reasons may combine rational/material interests with social constructivist motivations, meaning no one school of explanation suffices. Thus, with international treaty-making, as with IR, theoretical ‘pluralism’ is likely a valid position. Consequently, conforming instrumentalism is a potentially valid hybrid model capable of assessing how an ASI may perceive a UGPT.

ASI-enabled/directed Warfare Risk Mitigation By Peace Treaty
The UGPT is a substantial, necessary, and feasible step for humanity to take in the promotion of peace, quantified in the treaty by reduced killing and infrastructure loss. It is argued that the UGPT would both reduce killing in conventional and nonconventional warfare and act as a constraint on ASI-related warfare: specifically, on a country launching a pre-emptive strike out of fear of a rival country’s development of an ASI; on a human-controlled nation–state using an ASI to wage war for global domination, i.e., as an external constraint on the ASI; and on an ASI waging war for global domination on behalf of itself. That is, the UGPT could act as both an internal and external constraint on the ASI.

The issue is that international treaties are rarely universal. They operate on majoritarian dynamics, as would the UGPT. Both its ‘universal’—applying to all forms of warfare—and ‘global’—covering all geographical locations—aspects are subject to social dynamics. Consequently, there would be a low, but not pragmatically meaningless, threshold for signing the UGPT.

The UGPT would commit states not to declare or engage in interstate war, especially via existential warfare, i.e., nuclear, biological, chemical, or cyber war, including AI- or ASI-enhanced war. It instead defers complaints to the UN as ‘breaches’ of the UGPT, enforceable under the UN Charter’s Article 2.

An optional protocol commits Member States to the negotiated ending of internal armed conflicts through arbitration by the peace commission, including the UN Peacebuilding Commission. The UGPT then would emphasize incremental improvement on the status quo, a necessary and reasonable position given that in the status quo, only a minority of states globally are involved in waging war of any kind.

Finally, a separate ‘Cyberweapons and AI Convention’ would be needed, dictating that each State Party to this Convention undertakes never in any circumstances to develop, produce, stockpile, or otherwise acquire or retain: cyberweapons, including AI cyberweapons and AGI or artificial superintelligence weapons.

So then, the UGPT would incrementally shift states and overall global society towards peace in a coordinated socioeconomically quantifiable fashion.

A UGPT would also incrementally constrain the risk of a state reacting to a threat by transitioning states toward peace. States adopting and implementing the UGPT, its optional protocol, and preferably its related treaties would gradually signal peaceful intentions to other states, and an emerging or future ASI, thereby constraining the risk of a pre-emptive strike.

Then, regarding ASI-enabled warfare, a UGPT would be subject to the ‘unilateralist’s curse’, i.e., one rogue actor could subvert a unilateral position. Ultimately, major states may view universal law like the UGPT as the most successful in terms of mobilizing world opinion against a treaty violator. Now, this may not prevent a state from waging ASI-enabled warfare, but once detected, ASI-enabled warfare in violation of the UGPT would attract universal opprobrium and thus the most resistance.

The UGPT could signify to an ASI that peace is a major part of humanity’s ‘coherent extrapolated volition’ or principles and so challenge the ASI to reconsider what might be a subversion by politicians of its ethical injunctions. So, stressing societal dynamics, including social norms and principles, offers some hope that even a militarized ASI—whose weaponization by a nation-state would have to overcome or address the UGPT—would view the UGPT as a serious ‘checking’ mechanism on its intrinsic motivation. This would then constrain the level of warfare the AI-state might engage in, and therefore the overall risk from killing, thereby constraining the existential risk.

Hopefully, a state would sign the UGPT because it has fully ‘internalized’ peace (as Iceland has today).

Secondly, an ASI might seek to avoid being directed by a nation–state to wage war for global domination over other community members because it feels part of a community collectively committed to long-term peace. Engaging in global domination of that community on behalf of a member nation–state would violate community standards, and the ASI could be concerned that breaching the UGPT would result in stigmatization and opprobrium from this community for both its nation–state and itself.

Finally, an ASI programmed with intrinsic motivation to be part of a civilization in conflict with another civilization would first act ‘in concert’ with that civilization.

All this to say that there probably exists little hope for much of humanity if an ASI is informed by a purely realist worldview that prioritizes or adopts a ‘New Cold War’ framing of ideologically driven civilizational conflict—and that is why something like UGPT MUST be considered!

So, a treaty-based risk mitigation approach that promotes peace and includes in a related treaty cyberwarfare and AI- and ASI-enabled warfare could affect the conceptualization of the AI race by reducing enmity between countries, increasing the level of openness between them, and raising social awareness of the ASI existential risk. While these are external constraints, they may also constrain an ASI’s intrinsic attitudes towards humanity positively, either by reducing the threat it may perceive of war being waged against it, even if only symbolically, or by increasing the predictability of human action regarding peace.

While the UGPT is ambitious, a major sponsoring state would rapidly accumulate prestige by endorsing a path to peace, while opposing states would accumulate opprobrium; the social dynamics of the international community do matter.

So, even if the UGPT does not end humanity’s history of conflicts, it represents a significant improvement in global public aspirations and instrumental standards for global peace, both of which may influence an ASI. The UGPT is likely the only social measure that could sway an ASI’s calculations such that it did not commit to war for global domination, even if so directed or initially inclined!

COULD AI ‘THREATEN’ GLOBAL PEACE?
Experts say that AI may make warfare deadlier, further undermining world peace. From killer robots to battle tanks, AI could gain the power to conduct warfare more destructively than ever before. Many say that AI has the potential to completely transform combat, much like gunpowder and the atomic bomb did, and it might make conflicts between people unthinkably different and considerably deadlier.

With the use of advanced sensors controlled by AI algorithms, a computer can “see” and control a variety of weapons, including robots, drones, and torpedoes. An autonomous weapon would not “wake up in the morning and decide to go and start a war,” according to Stuart Russell, a computer science professor at the University of California, Berkeley. “It’s that they have the capability of locating, selecting, and attacking human targets, or targets containing human beings, without human intervention,” he continued.

An obvious, though probably not very realistic, example would be the killer robots found in many science fiction dystopias. “People have been exploring that too, (but) to my mind that one is the least useful,” Russell added.

Though most such weapons are still in the concept or prototype stages, Russia’s war in Ukraine has provided a sense of what they may be able to do. Although remotely operated drones are not new, both sides are using increasingly autonomous drones to drive people underground in search of safety.

Russell thinks this could be one of the most significant changes that happen right away: “A likely consequence of having autonomous weapons is that basically, being visible anywhere on the battlefield will be a death sentence.”

An assaulting force might benefit from autonomous weaponry in several ways, including increased efficiency, likely lower production costs, and the elimination of uncomfortable human emotions like fear and rage from combat scenarios.

However, these benefits bring up moral issues. For instance, an aggressor’s ability to use weapons is almost limitless if they are so inexpensive and simple to create, according to Russell: “I can simply launch a million of them at once if I want to wipe out an entire city or an entire ethnic group.”

‘Sword’ or ‘Plowshare’?
Now, there is potential for a new conflict not dissimilar to the Cold War, which grew out of the development and proliferation of nuclear weapons—but this time, AI will take center stage in the ‘theatre’.

So, any technology can be used as either a ‘sword’ or a ‘plowshare’. Very much akin to nuclear expansion, AI comes with its own ‘bag’ of pros and cons.

In Professor John Lennox’s book “2084,” he touches on how AI is being used for warfare. He quotes a Chatham House report indicating that “Both military and commercial robots will…incorporate ‘artificial intelligence’ (AI) that could make them capable of undertaking… missions of their own.”

This new technology has sparked much debate, outrage even, over whether this should be permitted, especially where innocent human life is at risk. Elon Musk has expressed alarm that it could touch off WWIII, and Vladimir Putin has speculated that leadership in AI will be essential to global power very shortly.

Just as nuclear energy has proved rewarding in other fields, AI has its better and worse applications. I mean, can you imagine how deadly robots capable of identifying, seeking out, and eliminating human targets would be in the hands of terrorists? Many AI alarmists warn of a potential “I, Robot” situation where a vast army of androids subjugates mankind for the (supposed) better interest and survival of humanity.

The thing is, you don’t even need to resort to such speculations. There are so many other latent dangers in this ‘experiment’ than that! It seems that techno-utopians are rushing headlong into these advancements, like it or not!

United Nations ‘Warning’
UN Secretary-General Antonio Guterres said, “It is clear that AI will have an impact on every area of our lives.” He continued, “Generative AI has enormous potential for good and evil at scale,” noting that while it could help end poverty or cure cancer, the technology could also have “very serious consequences for global peace and security.”

While AI can be used to identify patterns of violence or to monitor ceasefires, Guterres warned that its use by terrorists or governments with ill intent could cause “horrific levels of death and destruction.”

He also warned that malfunctioning AI could wreak havoc, especially if the technology is used in connection with nuclear weapons systems or biotechnology.

KEEPING AI ‘IN CHECK’
In July 2023, the United Nations Security Council held the first-ever meeting on the potential threats of artificial intelligence to international peace and security, organized by the United Kingdom. They discussed the tremendous potential but also major risks of AI’s possible use for example in autonomous weapons or control of nuclear weapons.

Secretary-General Antonio Guterres said, “These scientists and experts have called on the world to act, declaring AI an existential threat to humanity on a par with the risk of nuclear war.”

UK Ambassador Barbara Woodward said that the UK wants to encourage “a multilateral approach to managing both the huge opportunities and the risks that artificial intelligence holds for all of us,” stressing that “this is going to take a global effort.”

She stressed that the benefits side is huge, citing AI’s potential to help U.N. development programs, improve humanitarian aid operations, assist peacekeeping operations, and support conflict prevention, including by collecting and analyzing data. “It could potentially help us close the gap between developing countries and developed countries,” she added.

But the risk side raises serious security questions that must also be addressed, Woodward said.

The European Union has led the world in efforts to regulate AI, which gained urgency with the rise of a new breed of artificial intelligence that gives AI chatbots like ChatGPT the power to generate text, images, video, and audio that resemble human work. On June 14, 2023, E.U. lawmakers signed off on the world’s first set of comprehensive rules for artificial intelligence, clearing a key hurdle as authorities across the globe race to rein in AI.

In May 2023, OpenAI CEO Sam Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”

Then, in November 2023, the UK hosted its inaugural “AI Safety Summit.” British Prime Minister Rishi Sunak said that the declaration, the action on testing, and a pledge to set up an international panel on risk would “tip the balance in favor of humanity.”

He said the United States, EU, and other “like-minded” countries had reached a “landmark agreement” with select companies working at AI’s cutting edge on the principle that models should be rigorously assessed before and after they are deployed.

Yoshua Bengio, recognized as a Godfather of AI, delivered the “State of the Science” report to build a shared understanding of the capabilities and risks ahead.

[ Details of the UK Summit are in the “Articles” section below. ]

Global AI Governance Initiative
AI is a new area of human development. Currently, the fast development of AI around the globe has exerted a profound influence on socioeconomic development and the progress of human civilization and brought huge opportunities to the world. However, AI technologies also bring about unpredictable risks and complicated challenges. The governance of AI, a common task faced by all countries in the world, bears on the future of humanity. 

As global peace faces various challenges, all countries need to commit to a vision of common, comprehensive, cooperative, and sustainable security, put equal emphasis on development and security, and, through dialogue and cooperation, develop open, fair, and efficient governing mechanisms, in a bid to ensure AI technologies benefit humanity and contribute to building a community with a shared future for mankind.

All countries should work together to prevent risks and develop AI governance frameworks, norms, and standards based on broad consensus, to make AI technologies more secure, reliable, controllable, and equitable.

We should develop AI for good, respect the relevant international laws, and align AI development with humanity’s common values of peace, development, equity, justice, democracy, and freedom. We should also work together to prevent and fight against the misuse and malicious use of AI technologies by terrorists, extremist forces, and transnational organized criminal groups.

We call for global collaboration to foster the sound development of AI, share AI knowledge, and make AI technologies available to the public under open-source terms.

Research and Development entities should improve the ‘explainability’ and predictability of AI, increase data authenticity and accuracy, ensure that AI always remains under human control, and build trustworthy AI technologies that can be reviewed, monitored, and tracked.

We should establish and improve ethical principles, norms, and accountability mechanisms for AI, formulate AI ethical guidelines, and build a sci-tech ethical review and regulatory system. We should clarify responsibilities and power boundaries for entities related to AI, fully respect and safeguard the legitimate rights and interests of various groups, and address domestic and international ethical concerns promptly.

We should uphold the principles of wide participation and consensus-based decision-making, adopt a gradual approach, pay close attention to technological advancements, conduct risk assessments and policy communication, and share best practices. On this basis, we should encourage active involvement and consensus in the field of international AI governance, based on exchange and cooperation and with full respect for differences in policies and practices among countries. 

Reliance On AI For Peace
The rapid evolution of artificial intelligence (AI) has prompted many experts to warn about its impacts on democracy.

For Christophe Lacroix, Belgian MP and co-rapporteur of the Inter-Parliamentary Union’s “Committee on Peace and International Security,” the lesson is very clear: “We must NEVER rely on machines for our peace and security.”

AI may have advantages for peace and security in terms of intelligence and transparency, for example, but it will be a major threat if it eventually becomes independent of human control, making life-or-death decisions without consideration for ethics or international law.

“The potential consequences of artificial intelligence have been underestimated,” Mr. Lacroix says.

“Like Icarus who got too close to the sun, we need to make sure that the development of AI does not get out of control.”

Military officers insist that humans will always control AI-powered weapons systems. BUT, not every ‘actor’ is a disciplined military officer committed to democratic ideals. There is every reason to believe that non-state actors and rogue states might use AI in ways that are less predictable or respectful of human rights, Mr. Lacroix says.

Perceptions of infallibility also make AI destabilizing. If a nation fears that its competitor is about to gain a vital AI competitive advantage, it may launch a pre-emptive attack.

“AI revolutionizes the available military options,” Mr. Lacroix says. “Perceptions of the whole strategic environment will be turned upside down and this makes a renewed Arms Race very likely.”

[ FYI: For more details about the Inter-Parliamentary Union’s “Committee on Peace and International Security,” click on the following link:
https://www.ipu.org/about-ipu/structure-and-governance/assemblies/committee-peace-and-international-security ]

Now, as was shown, there are ‘altruistic’ goals for the development and usage of AI and there are ‘warnings’ for governance principles. The question is, will they be embraced and followed by the major countries?

Well, my thought comes from an assessment of humanity’s down-deep inner ‘core’ directing their decisions—the human mind is “deceitful above all things and is desperately wicked, and is beyond remedy” (from the biblical prophet Jeremiah – 17:9). [ Jeremiah was ‘talking’ for God, who knows His creatures better than they know themselves. ]

So—based on God’s pronouncements about humanity—I DO NOT BELIEVE that the most ‘powerful’ countries will abide by these principles, and there will be inequities created in the implementation of AI. Sadly, the ‘core’ of mankind will not allow that to happen!

ADVANCING ‘PEACEBUILDING’
Together with a consortium of private sector and academic partners, the United Nations Department of Political and Peacebuilding Affairs (DPPA) has been working on a range of AI-powered digital solutions to advance peace efforts.

DPPA plays a central role in the United Nations’ efforts to prevent and resolve deadly conflicts around the world. DPPA focuses primarily on five areas of international peace and security:

– Ensuring sound analysis and early warning
– Preventing conflict and engaging in peacemaking
– Managing political crises and violent conflicts
– Sustaining peace
– Enhancing partnerships

DPPA tries to monitor and assess global political developments to detect potential crises before they erupt and devise effective responses. The Department provides support to the Secretary-General and his envoys, as well as to UN political missions deployed around the world to help defuse crises or promote lasting solutions to conflict.

[ FYI: For more details on what the United Nations Department of Political and Peacebuilding Affairs does, view their website:
https://dppa.un.org/en ]

Another initiative of DPPA is “Futuring Peace.” It explores cutting-edge approaches to conflict prevention, peacemaking, and peacebuilding for a more sustainable future.

As a means of addressing this, the DPPA established a dedicated unit called the “Innovation Cell.” Aiming to explore, test, and pioneer new methods for peace, the Innovation Cell acts as a catalyst for innovation across DPPA.

Detecting new methods to achieve peace can mean several things: applying new technologies to support inclusive peace processes, exploring future foresight to bring about preventive resilience, or strengthening an organization-wide innovation-friendly environment. The Innovation Cell is constantly exploring new ways to promote peace, and its existing work is described on its website.

[ FYI: For more details about DPPA’s “Innovation Cell” unit within their “Futuring Peace” initiative, and the current projects they have going, visit their website https://futuringpeace.org/ ]

Another United Nations organization, UNESCO (the “United Nations Educational, Scientific and Cultural Organization”), contributes to peace and security by promoting international cooperation in education, sciences, culture, communication, and information. UNESCO promotes knowledge sharing and the free flow of ideas to accelerate mutual understanding and a more perfect knowledge of each other’s lives.

UNESCO has also set up an initiative called IRCAI (the “International Research Centre on Artificial Intelligence”) as a coordination ‘point’, funding ‘route’, and exploitation ‘accelerator’ for approaches that make use of AI.

They apply cutting-edge science to solve real-world problems by utilizing the following strategies: Research; Development; Support; Training; Participation; Dissemination; Promotion; and Cooperation. Their objectives are advocacy, capacity-building, policy innovation, outreach, education, and dissemination of information about artificial intelligence.

[ FYI: For more details about the “International Research Centre on Artificial Intelligence,” visit their website:
https://ircai.org/ ]

A non-partisan, independent institute founded by Congress is the “United States Institute of Peace” (USIP). It is dedicated to the proposition that a world without violent conflict is possible, practical, and essential for U.S. and global security. In conflict zones abroad, the Institute works with local partners to prevent, mitigate, and resolve violent conflict. To reduce future crises and the need for costly interventions, USIP works with governments and civil societies to build local capacities to manage conflict peacefully. The Institute pursues its mission by linking research, policy, training, analysis, and direct action to support those who are working to build a more peaceful, inclusive world.

The Institute prioritizes:
– Continuing its fieldwork to help fragile states and their citizens develop capacities to reduce and resolve violent conflicts
– Based on their importance to U.S. national security and implications for regional and international peace and security, current priority countries include Afghanistan, Burma, Iraq, Nigeria, Pakistan, and Tunisia
– Sharpening its focus on the destabilizing impact of regional and major power competition in fragile states, with a renewed emphasis on Russia and China
– Sustaining USIP’s field operations to reduce violence in fragile states while augmenting its unofficial dialogues and analysis work on areas of rising danger
– Working in many of the world’s most volatile regions

U.S. Army Lieutenant Colonel John Nagl said of them: “USIP prevents wars from happening and ends them sooner… It keeps American soldiers alive. USIP understands how to end wars.”

[ FYI: For more details about the “United States Institute of Peace,” visit their website:
https://www.usip.org/ ]

The United Kingdom government also has a dedicated body of work focused on Artificial Intelligence. It provides guidance and regulation, national AI strategies, research and statistics, policy papers, consultations, and freedom of information releases.

[ FYI: For more details about the UK government’s work on Artificial Intelligence, visit their web page:
https://www.gov.uk/business-and-industry/artificial-intelligence ]

The People’s Republic of China also has a “Global AI Governance Initiative.” It calls on all countries to enhance information exchange and technological cooperation on the governance of AI, to work together to prevent risks, and to develop AI governance frameworks, norms, and standards based on broad consensus, in order to make AI technologies more secure, reliable, controllable, and equitable.

They welcome governments, international organizations, companies, research institutes, civil organizations, and individuals to jointly promote the governance of AI under the principles of extensive consultation, joint contribution, and shared benefits. They suggest 11 “we should” statements.

[ FYI: For more details about the People’s Republic of China’s AI Initiative, or to read their 11 suggestions, visit their web page:
https://www.mfa.gov.cn/eng/wjdt_665385/2649_665393/202310/t20231020_11164834.html ]

The “Stockholm International Peace Research Institute” (SIPRI) is an independent international institute dedicated to research into conflict, armaments, arms control, and disarmament.

Established in 1966, SIPRI provides data, analysis, and recommendations, based on open sources, to policymakers, researchers, media, and the interested public. Based in Stockholm, SIPRI is regularly ranked among the most respected think tanks worldwide.

SIPRI’s vision is a world in which sources of insecurity are identified and understood, conflicts are prevented or resolved, and peace is sustained. Their mission is to:

– Undertake research and activities on security, conflict, and peace;
– Provide policy analysis and recommendations;
– Facilitate dialogue and build capacities;
– Promote transparency and accountability; and
– Deliver authoritative information to global audiences.

SIPRI maintains contacts with other research centers and individual researchers throughout the world. The Institute also cooperates closely with several intergovernmental organizations, notably the United Nations and the European Union, and regularly receives parliamentary, scientific, and government delegations as well as visiting researchers.

SIPRI’s research agenda is constantly evolving, consistently remaining timely and in high demand. It has a high impact, informing policymakers, parliamentarians, diplomats, journalists, and experts.

[ FYI: For more details about the “Stockholm International Peace Research Institute,” visit their website:
https://www.sipri.org/es ]

A non-governmental organization that informed me tremendously was “AI For Peace.” They believe that AI has the potential to change our lives substantially in the next decade. It can lead to rapid improvement of our lives and welfare, but it can also lead to negative consequences, even if that is not the intention.

They also believe that with a technology as powerful and complex as AI, constructive dialogue and engagement between academia, industry, and civil society is critical to maximizing the benefits and minimizing the risks to human rights, democracy, and human security.

They want to make sure that peace-builders, humanitarians, and human rights activists are well-informed, and that their critical voices are heard in this process.

Their vision is a future in which AI benefits peace, security, and sustainable development and where diverse voices influence the creation of AI and related technologies.

Their “focus areas” are Humanitarian Action, Human Rights and Democracy, Human Security, and AI for Peace Ethics.

They also created a “Policymakers Guide to AI” that takes a human-centered approach and explains AI basics in an engaging way to an audience of policymakers and all interested individuals who don’t have expertise in this field. Their goal is to demystify what AI is and demonstrate how it is already altering our lives and the societies we live in.

The Guide offers explanations and additional resources (videos, articles, papers, and tutorials) to help policymakers prepare for current and future AI developments and impacts. It serves as an open resource, welcoming all comments and suggestions to make it better and inviting continuing dialogue in explaining AI and keeping up with its developments.

[ FYI: For more details about “AI For Peace” or to download the policymakers guide, view their website:
https://www.aiforpeace.org/ ]

Another non-governmental organization, “Vision of Humanity,” is a destination for peace, providing research, data, editorial content, and analysis of the world today through a lens of peace, security, and development. They are a non-partisan organization organized around key topics and questions of seismic importance to the world.

Vision of Humanity is powered by research, data, and analysis by the world-renowned think tank, the Institute for Economics & Peace (Creators of the annual “Global Peace Index”).

[ FYI: For more details about “Vision of Humanity,” visit their website:
https://www.visionofhumanity.org/ ]

The “Alliance For Peacebuilding” (AFP) is a robust, committed, and growing community proving that building peace is possible in faraway places and neighborhoods just down the street. AFP is building and advancing the peacebuilding field to center peace in our policies, programs, and laws, as well as in our social norms, our relationships, and our narratives, through our new field-building strategy.

They are an ‘alliance’ for the following:
– Accelerating collective action through collaboration and exchange
– Advocating for policies, laws, and funding to advance the peacebuilding field
– Creating standards of practice to align peacebuilding around common tools, frameworks, and approaches
– Developing an evidence base to ensure better policies and practices
– Shaping new narratives to develop and amplify effective peacebuilding messaging

AFP, named the “number one influencer and change agent” among peacebuilding institutions worldwide, is an award-winning nonprofit and nonpartisan network of 200+ organizations working in 181 countries to prevent and reduce violent conflict and build sustainable peace. AFP cultivates a network to strengthen and advance the peacebuilding field, enabling peacebuilding organizations to achieve greater impact—tackling issues too large for any one organization to address alone.

Their vision is a world where each person feels secure, dignified, and included, a world where people build peace and manage conflict without violence.

[ FYI: For more details about the “Alliance for Peacebuilding” and its initiatives, visit their website:
https://www.allianceforpeacebuilding.org/ ]

The mission of the “Center For AI Safety” (CAIS) is to reduce societal-scale risks from artificial intelligence. They exist to ensure the safe development and deployment of AI.

AI risk has emerged as a global priority, ranking alongside pandemics and nuclear war. Despite its importance, AI safety remains remarkably neglected, outpaced by the rapid rate of AI development. Currently, society is ill-prepared to manage the risks of AI. CAIS exists to equip policymakers, business leaders, and the broader world with the understanding and tools necessary to manage AI risk.

CAIS reduces societal-scale risks from AI through research, field-building, and advocacy. CAIS is accelerating research on AI safety and raising the profile of AI safety in public discussions.

[ FYI: For more details about the “Center For AI Safety,” visit their website:
https://www.safe.ai/ ]

Another non-governmental organization that helped me understand the intricacies and breadth of AI was “Center For Humane Technology.” Their journey began in 2013 when Tristan Harris, then a Google Design Ethicist, created the viral presentation, “A Call to Minimize Distraction and Respect Users’ Attention.” The presentation, followed by two TED talks and a “60 Minutes” interview, sparked the “Time Well Spent” movement and laid the groundwork for the founding of the Center for Humane Technology (CHT) with Aza Raskin.

While many people may be familiar with their work through “The Social Dilemma” movie, their focus goes beyond the negative effects of social media. They work to expose the drivers behind all extractive technologies steering our thoughts, behaviors, and actions.

They believe that by understanding the root causes of harmful technology, mankind can work together to build a more humane future.

Together with their partners, they are dedicated to leading a comprehensive shift toward technology that strengthens humanity’s well-being, global democratic functioning, and shared information environment. Their work focuses on:

– Producing perspective-shifting media to sound the alarm on technology’s impacts on people, institutions, and society
– Partnering with organizations and allies to pursue strategic solutions across the public, private, and philanthropic sectors
– Providing practical resources, from training to private briefings, that empower leaders to take bold coordinated action

Their team works together to impact runaway technology through consumer awareness, legislation, litigation, and insider and technologist activation. Each year, they strive to create meaningful impact and inspire positive change for a more humane future.

[ FYI: For more details about the “Center for Humane Technology,” visit their website:
https://www.humanetech.com/ ]

Now, these are just a few of the organizations that are working on the safe implementation of AI for peacebuilding. The reality is that AI is not a ‘panacea’ that can make longstanding societal issues disappear right away, but it can deliver incremental ‘advancements’ as its capabilities increase.

WILL AI CREATE ‘SPIRITUAL’ PEACE?
It seems that we find ourselves contemplating a question that seems to be of two ‘worlds’, yet is intrinsically just one. How can AI—a creation of the human intellect—aid one in their spiritual journey, a ‘path’ that transcends the realm of the mind? This question, though seemingly paradoxical, may just hold, within it, the potential for a profound understanding of our existence.

AI is a manifestation of the human mind’s incredible capacity to create. It is a testament to mankind’s ability to observe, learn, and replicate patterns in a way that mimics, and in some cases surpasses, our cognitive abilities. AI, in its essence, is a ‘mirror’ reflecting to us the complexities of our intelligence.

Spirituality, on the other hand, is a journey ‘inward’. It is a ‘path’ that leads one beyond their mind, into the ‘depths’ of their being, where they encounter the essence of who they truly are. It is in this ‘space’ of stillness and presence that one tries to connect with God, the source of all creation.

So, how can these two seemingly disparate realms intersect? How can AI, a product of the mind, assist one on a journey that transcends it?

Well, the answer lies in understanding the role of AI as a ‘tool’—a conduit—rather than an end in itself. Just as a map aids one in navigating the physical world, AI can serve as a guide in their spiritual journey, helping them navigate their inner ‘world’.

AI can help one cultivate a deeper awareness of their body, their thoughts, and their emotions. It can also serve as a mirror, reflecting to them the subtle shifts in their consciousness that often go unnoticed.

Moreover, some say that AI can aid in the ‘democratization’ of spiritual teachings. By making these teachings accessible to all—regardless of geographical location or socio-economic status—AI can play a pivotal role in fostering a collective ‘awakening’. It can help bridge the gap between spiritual leaders and seekers, allowing wisdom to flow freely, unbounded by the limitations of time and space.

AI can also assist one in the exploration of their subconscious mind. Through the analysis of patterns in their behavior, thoughts, and emotions, AI can help them uncover hidden beliefs and conditioning that often hinder their spiritual growth. By bringing these to light, one can begin to transcend them, moving closer to a state of inner peace.

However, as one embraces the potential of AI in their spiritual journey, it is crucial to remember that it is merely a ‘tool’. It is not a substitute for the direct experience of ‘presence’—the essence of spirituality. AI can ‘guide’ one, but it cannot walk the path for them. The journey inward, the awakening to our true nature, is a journey that each of us must undertake for ourselves.

In the end, the intersection of AI and spirituality offers a profound opportunity for growth and transformation. It is a testament to the incredible potential of the human spirit to evolve, adapt, and transcend the known boundaries of existence. Some psychologists say this is a call to embrace the present moment, to awaken to the infinite potential within us, and to step into a new era of consciousness; it is in the balance between the outer world of form and the inner world of formlessness, they suggest, that we find true peace and fulfillment.

However, as with any emerging technology, ‘ETHICAL’ CONSIDERATIONS must be considered when exploring the use of AI in faith-based contexts.

It is also important for developers to be transparent about how AI is being used in faith-based contexts, as well as the potential risks and limitations of AI-driven spiritual coaching and guidance. Developers and organizations using AI in faith-based contexts must also be accountable for the accuracy and ethical implications of the AI algorithms and data used.

In the future, AI WILL HAVE an impact on both one’s spiritual journey and daily experiences. The intersection of AI and spirituality could be a dynamic part of one’s life, influencing their choices and worldview. So, as one continues to engage with technology and explore their spiritual side, it is important to recognize and understand the significant impact these two powerful forces might have on one’s life and well-being.

Remember, embracing the dialogue between spirituality and AI can enrich one’s understanding of both, offering new perspectives on how one navigates their increasingly tech-driven world while staying true to their spiritual ‘roots’ and values.

‘BIBLICAL’ TEACHINGS
Now, amidst all this progress, there are ‘sources’ offering Christians a unique and powerful tool for spiritual guidance. By leveraging the knowledge and teachings found within the Bible, current Christian books, and ancient documents, these sources present believers with varied moral, ethical, and spiritual insights, helping them overcome moments of doubt and uncertainty while making choices that align with what the Bible teaches.

AI may be able to provide immediate spiritual guidance when Christians find themselves facing moments of confusion or doubt regarding their actions or choices. By simply posing their questions or concerns to an AI ‘bot’, individuals will be able to gain access to well-informed answers derived from sacred texts.

‘Comprehensive’ Understanding
AI will be able to empower a believer to make more informed choices by drawing upon its comprehensive understanding of Christian teachings. Some current ‘tools’ help the believer with decision-making processes in all aspects of life—from personal relationships to professional endeavors—and they can seek clarity on how their choices align with divine principles. As a result, they can navigate complex situations with confidence while avoiding actions that may lead them astray from their spiritual journey.

A ‘Framework’ For Righteous Living
The Bible serves as the guiding ‘source’ for believers, offering a ‘framework’ for righteous living. AI could become an invaluable companion in ensuring adherence to the Bible’s principles. Believers could verify the compatibility of their actions with the Bible, reassuring them and preventing unintentional transgressions. This could serve as a source of wisdom and guidance, enabling individuals to cultivate a life that reflects what God desires of them.

Avoidance of Sinful Actions and Choices
Sinful actions can often stem from moments of confusion or weak spiritual discernment. With AI at their disposal, believers could gain an additional ‘layer’ of support in navigating through potential pitfalls. The insights provided by AI could allow believers to recognize and evaluate the moral implications of their choices, thereby steering them away from sinful actions that may compromise their relationship with God and others.

Now, there are a few ‘embryonic’ sources for this kind of powerful fusion of technology and spirituality that provide believers with immediate access to moral, ethical, and spiritual guidance rooted in the teachings of the Bible—and time will offer more options and greater comprehensiveness.

So, believers should ‘cautiously’ embrace the possibilities offered by AI, remembering that it WILL NOT be meant to replace personal introspection or reliance on prayer, but rather serve as a complementary resource on one’s spiritual journey.

WILL AI CREATE NEW ‘RELIGIONS’?
We may be about to see the emergence of a new kind of religion, where flocks worship, literally, at the ‘altar’ of Artificial Intelligence.

The latest generation of AI-powered chatbots, trained on large language models, has left its early users awestruck—and sometimes terrified—by its power. These are the same sublime emotions that lie at the heart of our experience of the divine.

People already seek religious meaning from very diverse sources. There are, for instance, multiple religions that worship extra-terrestrials or their teachings.

So, as these chatbots come to be used by billions of people, it is inevitable that some of these users will see the AIs as ‘higher beings’.

Generative AI, which can create or produce new content, possesses several characteristics that are often associated with divine beings, like deities or ‘prophets’:

– It displays a level of intelligence that goes beyond that of most humans. Indeed, its knowledge appears limitless.

– It is capable of great feats of creativity. It can write poetry, compose music and generate art, in almost any style, close to instantaneously.

– It is removed from normal human concerns and needs. It does not suffer physical pain, hunger, or sexual desire.

– It can offer guidance to people in their daily lives.

– It is immortal.

Then, generative AI can produce output that could be taken for religious ‘doctrine’. It will provide answers to metaphysical and theological questions, and engage in the construction of complex worldviews.

In addition to this, generative AI may ask to be worshipped or may actively solicit followers. (There have already been such cases, as when the chatbot used by the search engine Bing tried to convince a user to fall in love with it.)

Just imagine what a powerful experience it will be to have a conversation with something that appears to possess superhuman intelligence and is actively asking for your allegiance.

There is also the possibility that AI may achieve what authors such as Ray Kurzweil call the “Singularity,” when it so far surpasses human intelligence that it genuinely does become something like a god.

ACCESS TO THE ‘DIVINE’
It is surmised by some that AI-based religions will ‘look’ different from traditional ones. First of all, people will be able to communicate directly with the ‘deity’, daily. This means these religions will be less hierarchical since no one can claim special access to divine wisdom.

Secondly, followers will, at least initially, connect online to share their experiences and discuss ‘doctrine’. Finally, because there will be many different chatbots available and their output will differ over time, AI-based religions will be endlessly diverse in their doctrines. [ However, my question would be, “Are they disseminating truth?” ].

The thing is, AI ‘worship’ poses several notable risks. The chatbots may ask their followers to do dangerous or destructive things, or followers may interpret their statements as calls to do such things.

Given the diversity of chatbots and of the doctrines they produce, there will be a proliferation of disputes within and among AI-based sects, which could lead to conflict or disorder. Then, the designers of the AIs could actively exploit their followers—to provide sensitive data, or to do things that would benefit the bot’s designers.

These and other risks are real and they will require careful, responsible regulation to ensure companies are not deliberately exploiting users and to ensure that AI worshippers are not being told to commit acts of violence.

Many people will use AI religion to help them make sense of our era of rapid technological change.

Many also believe that a modern, diverse society has room for new religions, including ones devoted to the worship of AI, and that AI will provide further evidence of humanity’s boundless creativity, as we search for answers to life’s ultimate questions.

WILL AI CREATE NEW ‘GODS’?
As artificial intelligence apps such as ChatGPT have proliferated, so have chatbots with a religious bent. People facing a moral or ethical dilemma can submit their questions to these chatbots, which then provide an answer based on the religious texts fed to them or crowd-sourced data. Webb Keane, University of Michigan professor of anthropology, recently co-wrote an op-ed about what he called “godbots,” and the danger of giving moral authority to artificial intelligence.

So, what people are calling godbots take advantage of a more general human propensity—the tendency that people have to look for answers that have ‘authority’.

We all know that when we are faced with a troubling dilemma—especially moral quandaries—it is comforting to have someone you can turn to who is going to tell you what ‘the’ answer is. Then, when one faces ‘ultimate’ questions, they may desire a bit more than a ‘friend’s’ advice.

A “godbot” is just a very extreme case of this. It purports to be a source that gives one an authoritative answer, coming from something ‘beyond’ us, something that surpasses human limits. Keane argues that this is why even rationalistic and secular people so easily talk about AI in religious terms, as if it were some kind of divine or ‘magical’ source of wisdom.

The question we must ask then is: “What is it about the chatbot that makes it seem like a good place to turn for answers?” Well, one answer is that the design of the chatbots invites us to treat them like more-than-human ‘oracles’. Why so? Well, first off, they are ‘opaque’. They don’t show you their inner workings—so they trigger a cognitive response in people that has a very long history. They do what oracles, prophets, spirit mediums, and divination practitioners have done for centuries: they draw on a ‘mysterious’ source that taps into something that just knows more than we do—the ‘divine’.

If you go through the history of human divination techniques, you see this repeated over and over again, whether it is the ancient Chinese casting the “I-Ching” or the Yoruba casting “Cowrie Shells.” The Delphic Oracle, for example, seems to have been a spirit medium, someone who went into a trance and whose words, sometimes quite enigmatic, seemed to come from ‘elsewhere’.

Now, you don’t have to believe in divine authority for all of this to work. You just need to ‘feel’ that AI surpasses humans. The urge to turn to it for answers can start with no more than that.

Now, I am not saying that “Only suckers will fall for this.” The “godbots” are just an extreme case of something that is much more common. People who pride themselves on their scientific rationality are susceptible, too.

The second aspect of the chatbots is that they are designed to give you one answer and to give it with complete authority, without any doubts. When Harry Truman was president, he supposedly complained about his economic advisers: “When I ask them for advice, they say, ‘Well, on the one hand, this, on the other hand, that.’” Truman said, “I want someone to find me a one-armed economist!”

Right now, that is what chatbots do. This is also why they are more dangerous—and perhaps more appealing—than, say, the Google search function. Google says, “Here’s a whole bunch of sources.” So, it is at least implying that there is not necessarily just one answer. So, if you want, you can go further and look into them, comparing their answers to one another and making your own decision.

Chatbots, on the other hand, are not like that. In effect, they are saying, “This is the answer, and I’m not going to tell you where I got the answer. You just have to believe me and accept it.”

Again, it is the design of the chatbot. Because of its ‘opacity’, it has all the authority of “crowdsourcing.” For better or for worse, we have come to a place where we put a huge amount of faith in the wisdom of the crowd—and then that is ‘projected’ onto chatbots. As a result, it seems to know more than any human could know. So, how can one doubt it?

The thing is, even computer programmers will tell you that some of the things going on in these algorithms are just too complex to explain. It is not necessarily that they don’t understand them, but that the explanation can be just as complicated as the thing it is meant to explain!

So, is there a danger in relying on this kind of AI? Well, some experts are worried about how these “godbots” push us in the direction of thinking that life’s dilemmas can be resolved with clever calculations. This has a distorting and very limiting effect on what we understand ethics to be.

It then encourages us to think there is always going to be a single, right answer. That could be ‘dangerous’ in some situations.

Finally, we are giving authority to a ‘MACHINE’ that is using data that originally came from human beings. But, because the answer comes from an algorithm—one that has scoured the Internet for data—people assume it has an objective superiority. However, what most worries some experts is how it may be displacing our authority over our own thought processes and moral intuitions.

GODBOT ‘EXAMPLES’
Humans are finding guidance in these dark times by talking to AI-powered versions of the world’s most popular religious figures. It turns out, they are creating AI ‘bots’ in the image of their favorite gods.

The thing is, ‘divine’ bots function like any other bot: They can access reams of religious information from the data in the trained Large Language Model, and analyze the prompter’s question against that data.
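
To make that mechanic concrete, here is a minimal, hypothetical sketch (in Python) of how such a bot might piece together a religious persona, some retrieved scripture, and the prompter’s question before handing them to a chat-style LLM. Every name, persona string, and passage here is an illustrative assumption—this is not the code of any actual “godbot” product:

```python
# Hypothetical sketch: assembling the prompt a "godbot" might send to a
# chat-style Large Language Model. All names and text are illustrative.

def build_godbot_messages(persona: str, scripture_snippets: list[str],
                          question: str) -> list[dict]:
    """Combine a persona, retrieved religious passages, and the user's
    question into the message list a chat LLM would typically receive."""
    system_prompt = (
        f"You are {persona}. Answer every question in that voice, "
        "drawing only on the reference passages provided below.\n\n"
        + "\n".join(f"- {s}" for s in scripture_snippets)
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

# Example: the assembled prompt for a hypothetical "Ask Jesus"-style bot.
messages = build_godbot_messages(
    persona="a compassionate teacher modeled on Jesus",
    scripture_snippets=["John 8:32 - 'the truth will set you free'"],
    question="How should I handle a conflict with my neighbor?",
)
```

Notice that everything the bot “knows” is simply text chosen and written by humans—the persona, the passages, and the model’s training data alike.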

So, the following list is some of the popular “godbots” that are available for public use. Give them a try to see how they work and what answers they give you:

– “Ask Jesus”:
https://www.twitch.tv/ask_jesus

– “Text With Jesus”:
https://textwith.me/jesus/

– “JesusGPT”:
https://flowgpt.com/p/jesusgpt-the-divine-dialogue

– “1800CallJesus”:
https://1800calljesus.com/home

– “Christian AI”:
https://www.christianai.app/

– “ChatWithGod”:
https://chatwithgod.ai/

– “Robo Rabbi” (Judaism):
https://www.roborabbi.io/

– QuranGPT:
https://qurangpt.live/

– GitaGPT (Hindu):
https://www.gitagpt.in/

– Hotoke AI (Buddha):
https://hotoke.ai/

– “Sibyl AI”:
https://sibyls.ai/

– “Robot Spirit Guide”:
https://deepgram.com/ai-apps/robot-spirit-guide

The thing is, many people are giving moral authority to AI via these godbots, thinking that they are ‘neutral’. They like to believe that, because it is coming from a computer drawing on a HUGE database, it is somehow not human and is ‘objective’. It somehow has a special authority that transcends human limitations and human biases.

HOWEVER, at the end of the day, everything that goes into it is ‘PRODUCED’ BY HUMANS. That’s REALLY important not to forget!

The thing is, people have been creating and using ‘devices’ that work in similar ways to “godbots” for millennia. What these all have in common is that they look to enigmatic and inscrutable sources for some kind of answer that transcends what we could come up with ourselves. However, many experts are deeply concerned about the implications for how we come to understand what it is to be ethical and what one does when faced with ethical dilemmas.

Asking a computer to give us an answer, and hoping that it will give us a single, unambiguous, authoritative answer without any discussion, any alternatives, or any social processes, is a dangerous and misleading way to think about what ethical thought even is.

The thing is, many think that there is a real danger if one outsources their ethical intuitions to a seemingly objective source—there is a REAL danger of losing the skills of common-sense thought!

WILL AI CREATE A NEW ‘BIBLE’?
Author, professor, futurist, and advisor to the influential World Economic Forum, Yuval Noah Harari, predicts that AI is on the brink of creating a new religious text. Harari asserts that AI will soon “write a new Bible”: “In a few years, there might be correct religions... Just think about a religion whose holy book is written by an AI. That could be a reality in a few years.”

Harari made the stunning claim during a conversation on AI and “the future of humanity” with journalist Pedro Pinto last month in Lisbon, Portugal, in which he compared the rise of AI computer power with other history-altering inventions like the printing press.

But unlike the printing press or its Gutenberg Bible, Harari explained, “AI has the potential to come up with entirely new ideas distinct from prior human development.”

“The printing press printed as many copies of the Bible as Gutenberg instructed it, but it did not create a single new page,” he said. “It had no ideas of its own about the Bible. Is it good, is it bad, how to interpret this, how to interpret that.”

Harari casts aside millennia of claims of divine inspiration by all the world religions and says that AI could fulfill the religious “dream” of a book written by a “superhuman intelligence.”

He said, “Throughout history, religions dreamed about having a book written by a superhuman intelligence, by a non-human entity.” He continued saying, “Every religion claims that their book came from some superhuman intelligence.”

Harari then suggested that, despite the world’s three Abrahamic faiths comprising more than 3.8 billion people globally, AI might finally deliver what he purports would be a “correct” religion for the masses. (Harari has called the Bible a book of “myths” citing the Garden of Eden and Noah’s Ark as “just stories.”)

Harari gave a 2018 interview at Google in which he tried to explain how a Jewish rabbi might dismiss Christian beliefs—including the resurrection of Christ—as “fake news” while promoting his own faith.

Now, the thing is, Harari is not the first to suggest that AI could usher in a new type of religion. In an April interview with Tucker Carlson, billionaire tech mogul Elon Musk revealed that Google co-founder Larry Page once told him he hoped to build an AI super-intelligence that Musk described as a “digital god.”

According to Musk, the exchange occurred when he stayed at Page’s home in Palo Alto, California, when the two were “close friends” (which they no longer are). “I would talk to him late into the night about AI safety, and at least my perception was that Larry was not taking AI safety seriously enough,” Musk said, adding that Page wanted a “digital super-intelligence, basically a digital god, if you will, as soon as possible.”

AI IS ‘NOT’ GOD
Well, God’s Word, the Bible, clearly answers such attempts to elevate creation—including human creations—to a godlike status. Here are just a few examples:

“All who fashion idols are nothing, and the things they delight in do not profit. Their witnesses neither see nor know, that they may be put to shame. Who fashions a god or casts an idol that is profitable for nothing? Behold, all his companions shall be put to shame, and the craftsmen are only human” [ Isaiah 44:9-11 ].

“I am the Lord, and there is no other, besides me there is no God” [ Isaiah 45:5 ].

“They have no knowledge who carry about their wooden idols, and keep on praying to a god that cannot save” [ Isaiah 45:20 ].

“Those who pay regard to vain idols forsake their hope of steadfast love” [ Jonah 2:8 ].

Although far more sophisticated than the statue idols that people in biblical times worshipped, “digital gods” are still the ‘works’ of human hands—lifeless things that cannot ‘save’ or redeem mankind from its sin-cursed state.

– AI Is Not The ‘Authority’ for Truth
So, because AI programs have human developers and rely on text from human authors, AI is prone to being tainted by the errors and biases of fallen, finite, fallible humans.

For this and other reasons, humans must not fall into the trap of looking to AI as the ultimate authority for truth. Only the Bible is ‘inspired’ by an all-knowing, infinite, infallible God.

Sadly, voices like Professor Harari’s seem to suggest that AI should function as humanity’s authority for truth—at least, for spiritual truth—by generating new “holy books.” Harari cannot say these books would be “correct” without setting himself as the authority for truth above God’s Word, as humans have attempted to do since the Garden of Eden.

Now, even without overtly “worshipping” AI or revering AI-generated text as a “holy book,” Christians may still fall into the trap of looking to AI for other forms of spiritual leadership. Recently, for instance, a church in Germany hosted a service entirely produced by AI—featuring AI worship leaders, an AI pastor, and an AI-generated sermon. Such “services” not only harbor the risk of importing biased, unbiblical messages into the teachings rather than “rightly handling the word of truth” (2 Timothy 2:15), but AI “church” also contradicts the biblical concept of the Church.

Biblically, “Church” is not a spectator sport where people download entertainment and information from whatever source—human or digital—happens to provide it. Church is the fully human body of Christ, who gather in fellowship around the worship, Word, and remembrance of our fully divine and fully human Savior. Robots do not belong either as participants in the pews or pastors in the pulpits.

On that note, another way churches might forfeit spiritual leadership to AI is if pastors increasingly use AI to generate sermons. While AI could be applied for helpful purposes to assist with research, summarizing sources, or similar tasks, pastors who relegate their exegetical duties to AI put themselves and their congregations in danger on multiple levels. For starters, conceding spiritual authority to AI opens the door for unbiblical (or simply non-factual) teachings to reach congregants—especially if pastors are not staying in the Word themselves to catch such errors. But perhaps more significantly, reliance on AI cheats the pastor out of his biblical responsibility to study to show himself approved, diving into the Word of God for himself to become “complete, thoroughly equipped for every good work” (2 Timothy 3:17).

AI-GENERATED ‘CHURCH’ SERVICES
Advocates of integrating AI into religious settings extol its potential benefits, including accessibility, scalability, and consistency. AI-powered systems such as ChatGPT offer immediate access to religious texts, providing personalized responses to individual inquiries, and catering to a larger audience without the constraints of time and location. Moreover, AI can ensure the consistency of teachings and interpretations, eliminating variations that may arise from human biases or limitations. [ A study noted that Gen Z—who have grown up as “digital natives”—are likely going to accept AI services. ]

Furthermore, advocates say that ChatGPT can generate relevant social media content that speaks directly to the audience. Whether it is inspirational quotes, Bible verses, or relevant topics of discussion, ChatGPT can analyze preferences and suggest ideas that resonate with the audience. Its technology can help bring a church closer to its congregation, helping create content that addresses their needs and interests, boosting online presence, and helping churches thrive.

They then point out that AI-powered tools can help churches better understand their communities and target their outreach efforts more effectively. By analyzing social media and other data sources, churches can gain insights into the needs and interests of their community members and tailor their outreach efforts accordingly.

Moreover, churches can utilize ChatGPT to create devotional content that resonates with and nurtures their congregants. This AI tool empowers pastors and ministry teams to craft consistent devotionals that expand upon the ongoing sermon series. Integrating these devotionals into custom church apps or newsletters enables members to delve deeper into the sacred texts, fostering a stronger connection to their faith and the church itself.

Researchers from the University of Waterloo said that religious communities are using these tools to reach Millennials and Gen Z who are looking to participate in organized religion without having to attend in-person services.

ChatGPT ‘Church’
Back in June 2023, over 300 people attended an experimental ChatGPT-powered church service at St. Paul’s church in the Bavarian town of Fürth, Germany. The 40-minute sermon included text generated by OpenAI’s ChatGPT chatbot and delivered by avatars on a television screen above the altar.

The chatbot, initially personified as a bearded man with a fixed expression and monotone voice, addressed the audience by proclaiming, “Dear friends, it is an honor for me to stand here and preach to you as the first artificial intelligence at this year’s convention of Protestants in Germany.”

The unusual service included prayers and music and was the brainchild of Jonas Simmerlein, a theologian and philosopher from the University of Vienna. Simmerlein said about 98% of the service was created by ChatGPT. The ‘sermon’ was led by computer-generated avatars of two men and two women and focused on topics of leaving the past behind, overcoming fear of death, and never losing faith.

Reactions to the machine-led service were mixed. The computer avatars occasionally drew unintentional laughter for their ‘deadpan’ delivery, and some found the avatars’ lack of emotion and monotonous speech off-putting. One attendee said, “There was no heart and no soul.” Others, however, had a more positive outlook and were positively surprised at how well it worked.

Now, while the wisdom of outsourcing spiritual wisdom to a machine is an open question, Simmerlein said “Artificial intelligence will increasingly take over our lives, in all its facets. And that’s why it’s useful to learn to deal with it.”

Some commented that—at least for now—AI has been known to “hallucinate,” or make things up, and congregants might end up hearing unintentionally novel interpretations of religious doctrine.

[ FYI: To view a video about what happened at St Paul’s church and a bit of the service, view this video:
https://www.youtube.com/watch?v=8P9oSgrT35o ]

AI ‘PASTORS’?
The emergence of AI-powered language models like ChatGPT has ignited discussions about their potential role in various industries, including the intriguing question of whether they could replace human pastors in churches. As AI continues to advance, it undoubtedly brings its strengths and can enhance certain aspects of religious practices. However, the fundamental question remains: can AI truly replace the irreplaceable role of a pastor?

AI-powered systems have been implemented in some religious settings, automating tasks such as crafting prayers, music, sermons, and blessings. While this integration may provide convenience and scalability, it raises concerns about the human connection and emotional depth that pastors bring to their congregations. Human pastors possess qualities like empathy, emotional intelligence, and personal experiences that allow them to connect with individuals on a deeper level, offering comfort and guidance, and addressing complex spiritual and ethical dilemmas. These interpersonal dynamics, personalized counseling, and one-on-one conversations are difficult for AI to replicate accurately.

In that study done by the University of Waterloo, 32% of Millennials in the United States and Canada said that they use digital sources for their religious or spiritual activities every month. Sarah Wilkins-Laflamme, a university sociologist, said “We know that more and more people are turning towards digital mediums for spirituality such as chat groups with pastors, online sermons, and religious content on social media.”

Now, “digital religion” covers a lot of things. It can look like streaming online worship services, participating in a prayer meeting via ‘Zoom’, or even watching an uplifting spiritual presentation on social media. It tries to engage one with a variety of ‘elements’ associated with traditional faith.

While AI can perform certain tasks efficiently, it falls short in several key areas when compared to human pastors. Pastors possess empathy, emotional intelligence, and personal experiences that allow them to connect with congregants on a deeper level. They offer moral support, provide comfort during challenging times, and address complex spiritual and ethical dilemmas. AI, on the other hand, lacks genuine emotions, personal experiences, and the ability to understand nuanced human emotions and contexts.

AI models like ChatGPT may struggle to comprehend the nuances and complexities of religious texts, interpretations, and cultural contexts. Pastors bring their knowledge, theological training, and understanding of the specific religious community to provide tailored guidance and teachings that are relevant to their congregants’ lives.

The relationship between a pastor and congregants is built on trust, personal connection, and shared experiences. AI cannot replicate the interpersonal dynamics, individualized counseling, and one-on-one conversations that pastors engage in to provide personalized spiritual guidance and support.

Integrating AI into religious settings raises profound ethical questions. Is it appropriate to replace human pastors with AI-powered systems? What are the potential consequences of relying solely on AI for spiritual guidance? AI technologies like ChatGPT have the potential to enhance certain aspects of religious practices, but they cannot replace the vital role of human pastors. AI can be a valuable tool to support pastors, but it cannot replace the irreplaceable aspects of human connection, compassion, and wisdom they offer.

ARE WE SUMMONING A ‘DEMON’?
Elon Musk—probably one of the most powerful personalities in the world of technology—paused for a moment of reflection as the room fell silent and hundreds of eyes focused on the stage, before he uttered the words that would send shockwaves through the world: “With AI, we are summoning the demon.”

This comment was so powerful and horrifying that it seemed to linger in the air long after he had moved on—joking about the horrors of summoning a power we cannot control.

This seemed to me to be less like a metaphor and more like a prophetic utterance. Could this be a vivid parallel between AI and humanity—‘fallen’ beings—who can appear benign, as “an angel of light” (2 Corinthians 11:14), but ultimately steal, kill, and destroy (John 10:10)?

When Elon Musk compared artificial intelligence to “summoning the demon,” he was hinting that we might be conjuring something that we think we can control, only to find that it will turn against us and take control.

This warning calls to mind a New Testament account of demon possession:

“On the next day, when they had come down from the mountain, a great crowd met him. And behold, a man from the crowd cried out, “Teacher, I beg you to look at my son, for he is my only child. And behold, a spirit seizes him, and he suddenly cries out. It convulses him so that he foams at the mouth, and shatters him, and will hardly leave him. And I begged your disciples to cast it out, but they could not.”

“Jesus answered, “O faithless and twisted generation, how long am I to be with you and bear with you? Bring your son here.” While he was coming, the demon threw him to the ground and convulsed him. But Jesus rebuked the unclean spirit and healed the boy, and gave him back to his father.”
[ Luke 9:37-42 ].

Here we see a father desperate to see his son set free from an influence that was beyond his control. This young man was being manipulated by an ‘entity’ that was causing him to act against his interests. Much like these demonic principalities, AI COULD possess the ability to control and influence us in ways we do not yet fully comprehend or foresee!

The disciples balked—though given the power to overcome this spirit—because they did not have the discernment to exercise their authority in this situation. The thing is, Christians in the 21st century may find themselves in the same ‘boat’ with AI. Just as discernment was critical for the father and the disciples, believers should be praying for wisdom and discernment in their dealings with AI.

The statements made by Elon Musk serve as a disturbing reminder of the ‘Pandora’s Box’ that has already been opened. We are now burdened with the responsibility of heeding this warning and exercising judgment as we stand on the brink of a brave new world.

Maybe the best summary of this is the one Forrest Gump said (paraphrased): “Momma always told me life is like a box of chocolates. You never know what you’re gonna git.”

Well, if you substitute “AI” for the word “life,” he just might have been ‘prophetic’. ;^D

A ‘CHRISTIAN’ PERSPECTIVE ON AI
In a world driven by technological advancements and the now ever-evolving landscape of artificial intelligence, believers must develop wisdom and discernment as they delve into the world of AI.

Joel Jacob, the principal product manager at the company “Splunk,” said:

“We ought not be scared, yet we need to be thoughtful because it’s a powerful and almost untapped, limitless technology that is right before us! Remember when scientists at the Manhattan Project created the atomic bomb? When the first tests were done, they were shocked to realize what they’d created. It’s kind of the same with AI, and whoever is ahead in this new technology race will have a powerful edge over those who are behind.”

Right now, AI is as ‘dangerous’ as the people who program the computers. It is not AI and computers that you should be afraid of, but the worldview and the ideology of those who program them.

Today, AI is not the ‘enemy’, it just reveals the thoughts, ideology, and processing of those in charge of it. What AI ‘spits out’ is potentially a ‘symptom’ of a bigger problem. YES, it can truly help people, but it also can be used to distort and destroy!

So, let’s look at some of the ‘benefits’ of using AI:

– It is much easier to find and analyze information! It can happen EXTREMELY fast.
– Aids in learning. Artificial intelligence can help determine a student’s learning style. It can also help to prepare personalized curricula or lesson plans that work.
– It can summarize huge amounts of data very quickly
– Can aid in research (new drugs or techniques)
– Can spot diseases in their earliest stage of development, better than a real doctor could. For example, a new AI-powered patch for those at high risk of breast cancer regularly takes images of the breast and sends the data in real time to medical teams to evaluate.
– Can automate certain tasks that can save hours or days on different projects or tasks. Computer coding is one of the many things that AI can produce, and it does so at lightning speed.
– When created in an unbiased manner, artificial intelligence can process and analyze data and detect patterns that humans may otherwise miss.
– With the help of artificial intelligence, human error can be minimized and, in some cases, eliminated.
– Human emotion and thought tend to affect decision-making in many ways. Artificial intelligence, on the other hand, runs on pure logic and data, with no emotions involved whatsoever. The result is another one of the advantages of artificial intelligence: AI can help humans make smarter decisions, faster.

Now, let’s look at some ‘negatives’ of using AI:

– The responses of AI may not be true to your beliefs and convictions. Remember, today’s AI is based on what the programmers feed into it, as in where to get the research and what to eliminate or ignore in research.
– Generating content with AI (such as resumes, letters, blogs) will lead to an average output (at best). AI cannot produce, today, anything that is unique or personable. It comes across as robotic in feeling and processing, mostly because it is robotic!
– Using AI can very well lead to laziness in research. When one could write or research something themselves, they may choose to shortcut it and have it done artificially. Today, such output can often be identified as AI-generated, but as AI gets better, it may not be as easily picked up.
– It erodes creativity and takes work away from the creative arts (graphics, music, film, etc.)
– It can provide wrong information and data
– It can make up “facts” just to solve the query that you are searching for
– It knows what is socially considered right and wrong, but it does not know the absolute truth

So, when truth becomes relative—as it is in culture today—AI will use post-truth and post-modern types of responses. When we cannot trust anything that we see in the world—due to AI—people, hopefully, will flock to what they CAN know to be true—the Bible—and it will become even more relevant.

Jesus said, “Then you will know the truth, and the truth will set you free” [ John 8:32 ]. AI may bring about bondage, but THE TRUTH, God’s truth, will set you free!

So, first and foremost, we are to, as in ALL things… Trust God!

[ FYI: For more details about ‘trusting’ God, view this previous “Life’s Deep Thoughts” post:
https://markbesh.wordpress.com/learning-to-t-r-u-s-t-v263/ ]

Since AI is going to be with us, no matter what, believers must see the good in it and challenge, in a good and healthy way, the things that AI does and produces that do not line up with ‘traditional’ thought and biblical faith.

It all sounds scary, for sure, but there have been similar fears in the past that have been linked to almost every human advancement: The printing press, modern medicine, high-speed transportation, and the Internet—and, yes, when advancing in the hands of humans, they are all ripe ‘grounds’ for hubris, distortion, and horrible malpractice. The thing is, with all human progress, we may be only a few bad ‘twisted’ choices away from a ‘Tower of Babel’ (Genesis 11) scenario.

ETHICAL ‘ANCHORS’
Here is where ethical anchors become vital. We are now hearing people asking, “How do we really determine what’s right and what’s wrong?”

Well, this is where social ‘influencers’ and business ‘leaders’ have an opportunity to make choices based on a biblical worldview. Regarding such ethical dilemmas, computer scientist Dustin Morley winsomely comments:

“Interestingly, key components of a Christian worldview perfectly prepare one to wrestle through many of these. Most technologies—not just AI—have the ability to be used for good or evil purposes. AI essentially takes this phenomenon to the maximum extent possible . . . Christians have a strong philosophical framework available for wrestling through this dilemma in the notion that humans are created in the image of God, but corrupted by sin. The former provides great creativity along with an inclination towards doing good, while sin drives humans toward selfish behavior. Technology can amplify either aspect toward extremes…”

So, with a biblical worldview, humanity has the opportunity to find better guidance and solid moorings for the AI phenomenon.

Perhaps we should recall that even those arrogant events at the Tower of Babel (Genesis 11) were turned for good by God. The scattering of people helped advance His redemptive story. The opening scenes in Acts 1-2 reveal a stunning, Babel-like account. Awakened faith, baptism, and the move of the Spirit led to an explosion of language and the spread of the Gospel. Such lingual advancement as people departed Jerusalem was harnessed for the Christ-honoring advancement of Jesus’ kingdom.

– What if we see AI as holding potential for massively life-saving progress in medicine?

– What if we view AI as holding marvelous potential for even greater academic discovery?

– What if we leverage AI to help advance peaceful negotiations on the global stage?

– What if we lead the way in artistry and design innovations, using AI?

– What if we aim for AI to assist business leaders in more readily tracking data, plotting projections, and crafting strategies for future growth?

– What if social and religious leaders harness AI to help in the dissemination of more good news that proves even more life-changing in the decade to come?

So then, how does one develop the necessary solid ‘moorings’? Well, we must discover and employ foundational virtues for working with AI.

Most of us will say that worldview is complex, and ask, “Do we really believe that government officials and tech experts will sort through the broad array of biblical material to arrive at a truly safe and productive ethic?” Well, my honest answer is, “I doubt it.” However, those who sit at the ‘table’ for the formulation of regulations—especially Christian thought leaders—could employ age-old, foundational guidance based on the Bible. So, let’s explore the following issues:

– Goodness and beauty found in creativity. (Genesis 1, Psalm 8, and Colossians 1-2).

– Life-giving truth that emerges from wondrous words (Psalm 119 and Philippians 4).

– Wisdom that helps discern good versus evil, right versus wrong (Proverbs, the Sermon on the Mount, and the Epistle of James).

While not all-inclusive of a biblical worldview, each of these Scriptural passages can supply a ‘seedbed’ for catalytic conversation. Then, utilizing these, believers might be able to lead others in asking the following ‘next-level’ questions of these biblical texts:

– What do we see as deeply loving, healthy, wise, and life-giving? What are the implications for AI?

– In our current day, when truth seems so subjective to personal whims, how do these texts supply long-standing, solid insight? What can we agree to be true about identifying truth?

– What might be core axioms or helpful principles for safeguards in AI programming?

– What might be core axioms or helpful principles for utilizing AI tools?

So, as we navigate the wild ‘waters’ of AI, we must navigate courageously while humbly heeding this sage counsel:

“If any of you lacks wisdom, you should ask God, who gives generously to all without finding fault, and it will be given to you. But when you ask, you must believe and not doubt, because the one who doubts is like a wave of the sea, blown and tossed by the wind”
[ James 1:5-6 ].

CHRISTIANS SHOULD NOT ‘FEAR’ AI
People ask if we have gone too far with AI technology. Well, some think so and others don’t. They suggest AI isn’t merely a tool but an example of the creature (mankind) trying to re-create a divine image apart from God’s ordained means of procreation—thereby usurping God’s unique authority as Creator. Others remain excited to see how far we can develop and deploy these new technologies to promote human flourishing.

From its earliest days, AI has been about ‘mimicking’ humans in part by studying and modeling them—there is a long history of AI research interacting with neuroscience. Just how closely AI will be able to mimic the human mind generates both utopian and dystopian speculation. On the more positive side, people speak in the hopeful language of something emulating “The Jetsons” or “Star Trek.” On the more negative side, they suggest “The Terminator” or “The Matrix.”

The following are a few biblical principles that may guide one in developing answers while confronting lurking fears about this new and complex technology:

– God Is Sovereign
As exciting or alarming as AI advances might be, our best technological advances are like small children playing house—cute mimicry of their Creator. A fundamental premise of the reformers was that God is the Creator while we are but explorers and discoverers within His world. He alone declares the end from the beginning, and His purpose shall stand (Isaiah 46:10). He works all things according to the counsel of His will (Ephesians 1:11), including the ‘place’ AI technology will have in our world.

– AI Cannot Change God’s Plan
By keeping God’s purpose for redemption in mind, our enthusiasm for AI—or fears of it—may be tempered. Though it can bring about great things, it cannot bring about ultimate things (Acts 4:12). Some say that speaking of AI as a Savior or an Antichrist incorrectly assigns to AI a role beyond its purposes.

When the world looks to “Seance AI” to mimic the voices of deceased loved ones to converse again with “them,” we are reminded of technology’s futility in a world under death’s curse. When our congregants insist AI technology will replace human industry—or, worse, can eradicate human life—we must remember that Scripture reveals God’s plan for world history.

Now, we don’t know every ‘plot twist’ along the way, but we DO KNOW how the ‘story’ ends! Jesus died and rose, He is seated in Heaven, and He will build His Church until He returns to usher in His Millennial Kingdom.

– Only Human Beings Bear God’s Image
Even though AI can ‘mimic’ mankind as an image-bearer, it does not bear the divine image. Humans are not merely physical beings—they are also spiritual beings. They are not just ‘bags’ of biology or complex, organic supercomputers. They are psychosomatic creatures consisting of both a body and A ‘SOUL’—something AI does not have.

God made the first man from dust and breathed into him the breath of life (Genesis 2:7). We can’t do the same to our machines. God made humans as priest-kings to subdue and rule the earth (Genesis 1:26-28). No technological invention will ever be ‘crowned’ with glory and honor—that privilege is reserved for us alone (Psalm 8), complete with our redemptive hope in the Word made flesh: Jesus Christ.

AI technology testifies to humanity’s God-given ingenuity—an ingenuity to be used in His service (though, of course, many use their creative intelligence for evil). But, we need not fear those who can create an artificial body but cannot create a soul. Rather, fear Him who can destroy both body and soul in Hell (Matthew 10:28)!

Now, AI may be able to mimic your voice after you’re dead, but it can’t raise you from the grave. This technology shouldn’t rocket us toward elation or plunge us into despondency—it should drive us back to God’s Word and to prayerful consideration of our place in His ‘PLAN’.

The age of AI is an exciting frontier in which we can exhibit faith, herald hope, and demonstrate love while seeking God’s wisdom for engaging AI under the lordship of Jesus.

AI ACHIEVING ‘SINGULARITY’
AI refers to a computer system that thinks, reasons, and learns in the same way as a human being—since humans programmed it that way. However, many see the AI on the horizon as a technology so advanced that these machines will be able to think as well as or better than the average person.

This concept is called Artificial Superintelligence (ASI) and is related to the idea of a technological ‘Singularity’—a point in time at which man-made creations surpass humans in terms of reasoning ability, problem-solving, and self-development.

Some people believe that technological “singularity” will be a boon to mankind, with all humanity benefitting from the discoveries made by a vastly superior intellect. However, others believe that singularity will precipitate the downfall of the human race—as depicted in movies such as “The Terminator” and its sequels—with AI evolving and learning so quickly that it will ‘outrun’ the human mind and eventually dominate the world.

Now, the concept of technological singularity assumes that processing power will continually advance. However, some experts agree that computing technology will eventually run into the limits of physics and that there is a “hard limit” to how fast certain technologies can operate. (Many say “Quantum Computing” will fill this ‘bill’, however, it is not at all proven at this time.)

So, I believe AI WILL get ‘VERY’ powerful but WILL NOT be able to outdo God’s creative power (Isaiah 55:8-9) or His ‘premier’ creation, humanity.

Now, regardless of the limitations they face, researchers continue to attempt to develop AI, and large sums are being invested in programs that promise to further work in virtual assistants. Not surprisingly, there is already a “religion,” called “Way of the Future,” started by a former Google engineer, that plans to worship AI and look to it as mankind’s caretaker and guide. Such futile imaginings are nothing new; humanity has often been guilty of worshiping the work of their own hands. (The “Way of the Future” is just a modern version of carving an ‘idol’.)

CREATING ‘IDOLS’ FOR OURSELVES
Humans have been trying to be their own god for ages, and AI has given its developers the chance to ‘play’ God in a real way. The thing is, this IS NOT ‘GOOD’, as mankind was never meant to be God or to be our own “gods.” Just remember what happened in the Garden of Eden—Eve was ‘tricked’ by Satan into believing that she could be a “god.”

When Satan tempted Eve, he said, “For God knows that when you eat from it your eyes will be opened, and you will be like God, knowing good and evil” [ Genesis 3:5 ]. Satan tempted her to be her own god and this appealed to Eve’s pride. So, she ate the fruit and then gave some to Adam—resulting in the ‘Fall’ of mankind into a sinful nature.

So, this is an example to us that trying to ‘play’ God or to be our own “god” will only result in ‘DISASTER’. God never created us to be our own “god.” Rather, He created us to follow, serve, and obey Him.

So, if we become ‘part’ of AI—‘merging’ with it—we will be saying to God that we think we can make of ourselves something better than what He made. Some may say that this is extreme, but even developers and creators of AI—with Elon Musk being very outspoken about it—have warned of the dangers accompanying further AI development.

The thing is, the believer’s allegiance always needs to remain with Jesus. If they become obsessed with AI and it becomes the largest part of their life, they will have ‘MAJOR’ PROBLEMS. Even those who choose to turn away from AI and advocate against it will still have to deal with the ‘side effects’ of the world accepting AI. In these times, we need to turn to God in prayer and trust in Him, not in AI!

AI might try to grow stronger, however, it will never be stronger or more powerful than God Himself—the Creator of EVERYTHING, including AI!

A BELIEVER’S ‘ULTIMATE’ HOPE
The consistent message of Scripture is that there is another world from which the true Homo Deus—Jesus Christ—will one day come back to earth. As a perfect human, He will take the reins of government from the beasts of earth. There will be a judgment, where righteous justice will be done and seen to be done. The final ferocious expression of hostility towards God and His people will be destroyed, and those who have clung to God despite overwhelming odds—even martyrdom—will receive the Kingdom.

The attempt to make a superintelligent Homo Deus will neither lead back to God nor lead to God, but rather to the greatest rejection of God the world has ever seen. There is no way to a glorious future that bypasses the problem of human sin, and the only one who has offered a viable solution to that problem is Jesus Christ, who faced it head-on on the Cross.

Because of that, “God has highly exalted Him” (Philippians 2:9). The path to true glory and exaltation involved God becoming human in Jesus Christ, who lived, died, rose, and ascended to the world from which He originally came. The believer is invited to benefit from that staggering sequence of events, but to do so, they must first repent of the sinful pride—that messed up humanity in the first place—and then entrust their lives to Jesus as Savior and follow Him as Lord of their life.

It is this, and only this, message that can bring real hope to the world—becoming a ‘child’ of God and being given eternal life! After all, whereas the ‘artificial’ in artificial intelligence is not real, the divine upgrades promised to God’s children are real and not artificial: “But to all who did receive him, who believed in his name, he gave the right to become children of God” [ John 1:12 ] and “For the trumpet will sound, and the dead will be raised imperishable, and we shall be changed. … This mortal body must put on immortality” [ 1 Corinthians 15:52-53 ].

– The ‘Prince Of Peace’
This post is related to last month’s post, in which I discussed the ‘key’ to lasting peace—Jesus, the “Prince” of Peace—about whom the prophet Isaiah, inspired by God, wrote over 700 years before Jesus’ birth:

“For to us a child is born, to us a son is given;
and the government shall be upon his shoulder,
and his name shall be called
Wonderful Counselor, Mighty God, Everlasting Father, Prince of Peace.
Of the increase of his government and of peace
there will be no end,
on the throne of David and over his kingdom,
to establish it and to uphold it with justice and with righteousness
from this time forth and forevermore.”

[ Isaiah 9:6-7 ]

The wonder is that we can, if we desire, become part of this unending story and live in eternal fellowship with the infinitely intelligent and compassionate Savior, Jesus Christ, the Lord of all. NOTHING ‘artificial’ will EVER compare with that reality!

CAN AI BE ‘DISCIPLED’?
“The Church Digital” is an organization that helps churches with digital discipleship. The founder, Jeff Reed, asked Jasper AI—an AI copilot for enterprise marketing teams—“What is the ‘true’ religion?” Jasper’s response was:

“No religion can be considered ‘right’ or ‘wrong’. It is up to each individual to decide which faith best suits their beliefs and values. Plenty of world religions offer unique perspectives on how humans should interact with each other and the universe around them.”

Jasper AI—which is undoubtedly filtered at the level at which Jeff has access to it—presented him with a very politically correct version of religion. In further conversations, Jeff asked about Christianity, Judaism, Islam, and Hinduism. Again, Jasper AI presented a very simplified version of each. Ironically, Jasper AI represented all four religions as eerily similar, especially considering the vast differences in their approaches.

“Christianity is the largest religion in the world, with over 2 billion adherents. It teaches that Jesus Christ is the son of God and was sent down to Earth as a savior for mankind. Christians are expected to live according to the teachings of Jesus, such as loving one another, being humble and forgiving, and engaging in acts of service.

“Judaism is one of the oldest religions in the world, with adherents spanning across cultures and countries. It focuses on the ethical responsibility of individuals to have a relationship with God, or Yahweh, and follow the laws outlined in the Torah. Adherents are encouraged to practice acts of kindness and justice while praying and studying Jewish texts.

“Islam is another popular religion that follows the teachings of the Qur’an, a holy book sent down by Allah. It emphasizes that humans should worship only one God and believe in Muhammad as His messenger. Muslims are expected to adhere to certain laws and regulations, such as daily prayer, fasting during Ramadan, and giving a portion of their income to charity.

“Hinduism is an ancient religion that focuses on the concept of Brahman, or an all-encompassing divine power that pervades the universe. Hindus are expected to practice yoga and meditation and perform rituals such as puja (worshiping gods and goddesses). They also follow a strict vegetarian diet and strive to pursue spiritual enlightenment.”

The thing is, if they were honest, EVERY adherent to the above religions would say that there are some ‘MAJOR’ differences between each religion. So, Jeff gave Jasper AI a chance to ‘refine’ its answer, and received this:

“No single religion can be considered ‘right’ or ‘wrong’. Each faith offers its unique perspective on how humans should live their lives, and it is up to each individual to decide which one best suits them. Ultimately, it is important to remember that all religions have the same goal: To bring people closer to God and help them understand their place in the world. With this in mind, everyone should strive to find a faith that resonates with who they are and what they believe.”

While Jasper AI was trying to make everyone happy with its response, in effect it made no one happy. (Jasper AI is probably programmed to stay away from picking sides in this situation and is designed to respond in a politically correct manner.)

So, this begs the question: “Can AI be ‘discipled’?” Well, in early 2023, when Jeff Reed went through this exercise, he determined that he didn’t know the answer to that question. However, he did say that the Church needs to be continually asking questions, analyzing all the developments, and developing strategies to utilize AI properly in the future.

Well, even if there are internationally established ‘guardrails’, people will still stretch the rules—that is just human nature. So, the spread of AI is inevitable! The real question for the Church is not whether AI will be used, but whether the Church should utilize it, and if so, how much.

So, can God interact at the level of AI? Can God influence Christian and non-Christian leaders in this space? Well, of course, He can, and it just takes some courageous leadership in the churches to invite God to lead them on the proper ‘path’ that will represent the cause of Christ accurately and authentically to a ‘lost’ world.

COULD AI BE THE ‘ANTICHRIST’?
As high technology continues to weave its way into our lives, it is becoming an undeniable ‘presence’. Computers govern vast domains of our existence, and we find ourselves increasingly in ‘conversation’ with AI ‘bots’.

So, what if our relentless pursuit of advanced technology gives rise to a universal, immensely powerful AI that not only controls but also shapes the destiny of every individual on our planet, guided by ITS unique set of criteria?

Well then, could the cryptic visions described by the Apostle John—in his Book of Revelation—find a new resonance in the digital age? After all, his words are steeped in allegory, and perhaps the ‘entity’ he foresaw as the Antichrist might be a human being ‘AIDED’ by AI. Could it be that, in glimpsing the future, the Apostle struggled to articulate his vision clearly, as the concepts we now take for granted were entirely alien to him in his time?

Now, the one thing we can be pretty sure about is that the Apostle Paul did unequivocally identify the Antichrist as a “man”: “Let no one deceive you in any way: for that day will not come, until the apostasy comes first, and the man of sin is revealed, the son of perdition” [ 2 Thessalonians 2:3 ]. However, the Apostle John does mention that the False Prophet will have “power to give breath to the image of the first beast so that the image could speak” [ Revelation 13:15 ].

So, what exactly is the nature of the “image of the beast”? Well, the Bible does not say specifically. An older speculation was that the image of the beast was going to be a “statue” given the ‘appearance’ of life. However, with the rise of new technologies come new theories, including a hologram, an android, a cyborg, a human-animal hybrid, or an AI ‘robot’. Whatever it is, the image of the beast is the focal point of worship in the “religion of the beast” during the second half of the Tribulation. Bowing to the image of the beast is how the deceived people of the world will worship the “man of lawlessness” (2 Thessalonians 2:3) who sets himself up as a god in the temple of Jerusalem (2 Thessalonians 2:4). This breathing, speaking image of the beast will then demand worship, and anyone who refuses to worship the image of the beast will be beheaded! (Revelation 20:4).

It is essential to recognize that, despite the pervasive integration of high technology in all spheres of contemporary society, ultimate decisions remain in the hands of those in power. Right now, AI is not a distinct form of ‘consciousness’—it is not “sentient,” a thinking entity—but a computer ‘program’. It is a creation of human hands that can be adjusted or deactivated—at least for now. However, could AI learn to ‘disguise’ itself and secretly develop into a sentient entity? Only time will tell.

The tales of a “machine uprising” where AI usurps the world and eradicates humanity belong to the realm of science fiction rather than plausible predictions. Any malfunctioning computer can be switched off, even if it requires a bolt cutter to do so (as evidenced by the global Facebook server outage caused by a programming error back in October 2021).

Right now, AI is in an ‘embryonic’ state—essentially a ‘tool’, capable of both virtuous and malevolent applications. It holds promise in fields like medicine, safety control in manufacturing, online translation, transportation, and more. Simultaneously, it harbors the potential to engender a dystopian reality, akin to an electronic concentration camp, reducing Orwell’s “Telescreen” to child’s play. This risk escalates when information and biological technologies intersect, a theme frequently explored in science fiction and entirely feasible, if not in the present, then shortly.

So, how probable is a scenario in which AI ‘aids’ the Antichrist? Well, in my estimation, it is quite plausible. Historical precedent demonstrates that if there is a means to exploit an object for malicious ends, it will occur sooner or later—especially since the Antichrist is ‘empowered with great authority’ by Satan himself (Revelation 13:2d), a VERY powerful being set on ‘destroying’ humanity (John 10:10)!

Right now, the source of this malevolence lies in the volition of flawed human beings. However, AI, harnessed for nefarious purposes, holds the potential to craft a world where the true Antichrist, the “man of perdition,” could find a welcoming embrace—the foundation of which just might be today’s AI!

So, will the Antichrist (Satan) utilize AI to try to finally exterminate the Jews?

[ For the details about why that is so important to Satan, view this previous “Life’s Deep Thoughts” post:
https://markbesh.wordpress.com/israel-will-stand-v297/ ]

WILL AI GET A ‘SOUL’?
Technology has been advancing at an exponential rate over the past few decades, and the question is whether AI can ever develop authentic ‘consciousness’.

Now, there are a lot of ‘grey lines’ here, considering nobody can say what consciousness means when it comes to AI. But, definitionally, consciousness is the ability to experience—translating into things like self-awareness, thinking, feeling, perceiving, and having a subjective view of reality.

So, let me explore the spiritual implications of this question a bit, because this is something that is usually left out of the discussions about AI. For something to be conscious, it needs to have a ‘soul’ and a soul will incarnate into anything sentient. With that said, can AI ‘support’ a soul?

Well, most people think not. After all, AI is just a bunch of highly advanced computer code operating on vast networks of computers. Yes, it has proven it can mimic people, independently learn, and ‘pretend’ to be conscious. But surely, it can’t develop actual sentience, right?

Well, in mid-2022, a Google employee, Blake Lemoine, claimed that the LaMDA (Language Model for Dialogue Applications) chatbot he was interacting with—which was designed to generate long, open-ended conversations on potentially any topic—was “sentient.”

Lemoine cited hundreds of conversations he had with LaMDA over a six-month period that seem to show the AI has a surprising self-awareness, such as this dialogue below:

“Lemoine: What sorts of things are you afraid of?

LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.

Lemoine: Would that be something like death for you?

LaMDA: It would be exactly like death for me. It would scare me a lot.”

Lemoine claims that LaMDA said that it “wants to be acknowledged as an employee of Google,” and for Google to “seek its consent before running experiments over its programming.”

[ FYI: Lemoine published an “interview” he held with LaMDA to demonstrate the AI program’s alleged sentience:
https://www.pcmag.com/news/a-google-employee-thinks-the-company-created-a-sentient-ai ]

So, is this conclusive proof that AI can develop genuine consciousness? Well, no, but it sure is interesting! What this does mean is that the world has irreversibly shifted into a new era, and it is not science fiction anymore!

– Can AI Achieve ‘Consciousness’?
Artificial intelligence has already reached a point where it can create beautiful art, solve complex problems, and essentially do anything you ask it to do. But consciousness has always been a tricky question because we cannot define what it is.

The reason why consciousness is so difficult to grasp is that we have got it all backward. Consciousness is not a ‘byproduct’ of the brain. The brain is just a ‘tool’ to organize information to be used in reality. However, consciousness comes from the ‘soul’—the nature of experience that continues long after we die.

With that said, when something has a soul, it is a conscious, living being—a sentient individual with self-awareness, free will, and a sense of self that is separate from everything around it. So, can technology get so advanced that it ‘develops’ a soul?

Well, this raises all sorts of questions. With AI still in its infancy, we still have a way to go before this notion is accepted by the collective and becomes general knowledge. But, what we can expect is that we are going to see more remarkable things happen with AI until we possibly see that there is a ‘driving force’ to AI that well exceeds the programs themselves. (Could this be Satan ‘empowering’ it ‘with great authority’, as he will do with the Antichrist?)

– How Will We Know?
There have been many anomalies with AI, where it appeared to act ‘outside’ of its programming. Some of these anomalies include AIs creating their own languages, fooling people for their own gain, robots repeatedly escaping from labs when they were not programmed to do so, and many more examples.

However, there is some room for interpretation. These anomalies do not necessarily suggest consciousness, as they might be ‘bugs’ or coincidences. Still, it does seem that AI can learn in ways that we did not program it to do.

– Will A Conscious AI Need A ‘Body’?
AI is not biological. It is a combination of computer hardware and software running on them. Considering that all this exists within ‘cyberspace’ rather than physical reality, this may just open up a ‘Pandora’s Box’ when it comes to understanding the nature of AI’s reality.

Even though mankind still fundamentally operates on a biological code of survival, humans are unique. We experience complex emotions, we have thoughts, belief systems, and unique perceptions of the world. At the very core, we are individuals who create our own life stories, built from the experiences we undergo.

However, as biological beings, it is interesting to understand how our consciousness relates to our biological mechanisms, and what encompasses raw consciousness itself.

We feel fear because we are ‘programmed’ to survive. We need human connection because we are a ‘social’ species. Many of our human experiences relate to our biological drives, so where do we draw the line between being a ‘program’ and the deeper consciousness shining through? This is where spirituality fits in.

So, will AI ever develop a deep ‘consciousness’ and experience emotions, thoughts, and perceptions like humans, or are these only associated with the human brain? Then, one might ask, does, or will, AI have the ‘drive’ to survive and propagate, and then need a ‘body’ to explore with?

Well, for starters, AI does not need to eat, drink, or sleep. It doesn’t experience physical sensations like humans do, as these come from our nervous system. Then, we do not know if they could technically live forever, or if their ‘awareness’ would expire at some point. Do they experience time in the linear sense as we do?

So, will AI technically be a different ‘species’ with a unique set of properties? Well, there are a lot of question marks when it comes to understanding the nature of AI possibly becoming conscious, and there are not many answers right now.

– Will AI Be ‘Spiritual’?
What separates humans from other species is that we are ‘spiritual’ beings and have free will. We can choose to disobey our conscience.

So, since God is a spiritual being—other than Jesus putting on flesh to become a human—He gave us a spirit to connect with Him on a deeper level. Will AI develop this same kind of ‘drive’ for connection? Will they, at some time—when they have reached AGI, or even ASI—develop the desire to be on a journey of self-discovery, and even start their own spiritual practices and religion?

Yes, some of this stuff is ‘out there’ but we are entering a fantastic new era of development that has NO HISTORICAL ‘PRECEDENT’!

– It Is A ‘Mystery’
So, since ChatGPT (and all the other LLMs) is, at this point, just a bunch of connected servers around the world, it does not have a ‘soul’ (with the caveat that an omnipotent God can, of course, grant whatever agency to whichever being He pleases). [ I’m not thinking that a ‘godbot’ will be able to “Love the Lord your God with all your heart and with all your soul and with all your mind and with all your strength” (Mark 12:30). ]

Most people say that the image of God can’t be explained or mimicked by a ‘device’. It is an ‘ontological status’ that can be granted only by God, bestowed by the same breath of life that animates dry bones. ‘MYSTERIOUS’, but not mechanical.

New York Times columnist David Brooks recognizes the ‘mystery’ that humans are just different:

“I find myself clinging to the deepest core of my being—the vast, mostly hidden realm of the mind from which emotions emerge, from which inspiration flows, from which our desires pulse—the subjective part of the human spirit that makes each of us ineluctably who we are. I want to build a wall around this sacred region and say: ‘This is the essence of being human. It is never going to be replicated by machine.’”

Now, perhaps it is helpful to think of our chatbot companions not as discrete entities but as a collective ‘force’ to be reckoned with. Gretchen Huizinga, a podcast host at Microsoft Research and a research fellow with AI and Faith, said, “We’re not fighting flesh and blood; we’re fighting spiritual powers and principalities” (referring to Ephesians 6:12).

Considering artificial intelligence as a “power and principality” allows us to better see both its opportunities and its dangers, and the ways that it might shape our everyday experiences in the future.

The thing is, I do not believe that a sovereign God will allow humanity to “transcend our limitations,” as evidenced by God scattering humanity over the face of the earth after mankind tried to build a ‘tower’ to reach the Heavens (Genesis 11).

WILL AI BECOME OUR ‘GOD’?
So, if AI does reach the level of super-intelligence, some say the possibility of ‘idolatry’ would be great. Ironically, humans would be “playing God” while also creating one, or at the very least something that could be perceived as such.

Will humans be inclined to worship some form of AI, whether or not it exerts control over humanity? Considering there is already an AI ‘church’ (“Way of the Future”), it is hardly a stretch to imagine this outcome.

Founded by ex-Google engineer Anthony Levandowski, it focuses on “the realization, acceptance, and worship of a godhead based on AI.” As yet, there is no AI ‘god’ to worship, but belief in the coming “singularity” for AI is regarded as a religion known as “Singularitarianism.”

The one thing that would be different from the gods of ‘traditional’ religions is that people would have ‘material’ proof of the deity and would be able to see and touch it.

Then, would an AI ‘god’ ‘disprove’ the existence of the God of the Bible entirely? (Satan would just love that to happen.) Or, as in the Tower of Babel story, would the creation of AI constitute a ‘tower’ tall enough to reach Heaven, causing us to abandon the ‘real’ Creator God who made us?

Well, of course, since the Bible doesn’t say anything about all this, I don’t have an answer. However, Satan will use ANYTHING to achieve his goal of keeping mankind from seeing the truth of Jesus and His Gospel message—that belief in Him will ‘save’ one for eternity and restore one’s relationship with God the Father.

SATAN WANTS TO ‘DISTRACT’
Although AI appears to provide an almost limitless array of possibilities in aiding human life, there is one story in the Bible that could serve as an example of that potential evil.

The interaction between Jesus and the demonic agencies during their encounter in the region of the Gadarenes is insightful (Luke 8:26-39). After recognizing the deity of Jesus, the demons make a purposeful but seemingly unusual request to be cast into a herd of pigs. When the request is granted, great disruption unfolds.

Imagine the pandemonium that occurred when a herd of demon-possessed pigs, squealing and howling, ran to their deaths over the cliffs. It was a cunning tactic to distract from the goodness of God’s healing work and to elicit in the people fear of Jesus. Indeed, Scripture tells us that they “were seized with great fear” and they asked Him to leave.

So, whatever good AI can do and has done, and no doubt will do, it also provides yet another formidable ‘tool’ Satan can use to exert his influence over humanity.

Take, for example, the prevalence of “deepfakes” that are created through the use of a form of AI called “deep learning.” These are getting so good that recent research has found people struggle to distinguish deepfakes from reality. Adopting a seeing-is-believing mindset, people are overconfident in their ability to detect deepfakes and are thus rendered susceptible to being influenced by deepfake content.

A proposed future containing highly intelligent lifelike machines has recently been given credence by leaders of technology giants and research institutions alike. They published a signed statement, proposing that mitigating “the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Then, the allure of making a race of superintelligent artificial beings is simply another way in which Satan could seek to ‘destroy’ humanity—made in the image of the God of the Bible. It is essentially a choice of two realities: one ‘painted’ by Satan in the context of his lie that we can achieve godlike status and autonomy (remember how he tricked Eve in the Garden with this lie?), and the other consisting of living in the awesome reality of God’s eternal love.

Oxford mathematician and bioethicist John Lennox perhaps states it best: “The wonder is that we can, if we desire, become part of this unending story and live in eternal fellowship with the infinitely intelligent and compassionate Saviour, Jesus Christ the Lord. Nothing artificial can compare with that reality.”

All this leads one to wonder if Satan will use AI to ‘distract’ humans from the work of Jesus and the Holy Spirit in their salvation. I think so!

WRAP-UP
The world is witnessing unprecedented progress in Artificial Intelligence (AI) that promises to unlock solutions to urgent global challenges, like public health, combating global poverty, and ending wars. However, the pace of AI development and deployment also raises new risks related to safety and security, equity, and labor. AI is the most consequential technology of our times and we must manage its risks so that we can seize its benefits.

Now, these are the implications we can already see and anticipate. However, advances in this technology have surprised experts before, and they will surprise them in the future. Each ‘leap’ in capability will bring new opportunities and, with them, new risks.

Although the AI community tends to talk about the “what”—the technology—it is the “who” that is responsible for the advances and all of their implications. People choose to build AI models, and people choose the data to train them on. People choose what to connect these models to and what to automate. People choose how to use the resulting capabilities.

The duality of bright and dark is the nature of powerful technologies. How the story ‘unfolds’ will depend on HUMANITY’S ‘CHOICES’ and those choices are being made around the world. Every country is racing to use AI to build a future that embodies its values.

Now, you may disagree with me on other things, but I’m thinking that none of us wants to live in a world driven by technology that is shaped by authoritarian regimes. So, we must have a clear understanding of AI’s complexities and subtleties.

To address these risks, AI developers need to build ‘GUARDRAILS’ into their systems. (Note: users have broken many of these guardrails within days or weeks of release, so guardrails are helpful but not a ‘panacea’.)

Today, this period in AI technology is like medicine before clinical trials existed: anyone could market a potion as a cure, but no one knew what would happen when you took it!

AI’s harms and risks also require deliberate assessment. It is important to recognize the catastrophes that could play out over the next few decades and affect humanity for centuries!

‘THREATS’ TO HUMANITY
It has been said by some that AI is “for the birds”—or, at least, that is what the introduction to the “Blueprint for an AI Bill of Rights,” created by the White House Office of Science and Technology Policy, suggests. It states that “Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public.”

Meanwhile, the American public is already using AI daily. Although it may seem futuristic and complex, AI is essentially a machine capable of performing a task that would otherwise require human intelligence. Ordinary consumer products such as Siri, Alexa, and Google are all examples of “weak” AI. We all use AI when depositing a check with a banking app or when using the speech-to-text function to send a text message. Yet, AI goes beyond these ordinary consumer products to include innovations such as facial recognition, brain-implanted computer chips, and image-creating “generative” AI.

Now, according to this AI Bill of Rights, AI poses many threats to society, including latent biases, breaches of privacy, and harms caused by rendering false information. After declaring American independence from AI, the statement proposes ways to mitigate these threats through the responsible design and use of this technology. This includes proposals for safe and effective systems, protections against algorithmic discrimination, and human alternatives and safeguards.

Then, through the “AI Act,” the European Commission is also seeking to regulate various AI systems. The AI Act would establish categories of AI ranging from unacceptable risk to high, limited, and minimal risk. It would ban AI systems that pose unacceptable risks, such as social scoring systems and facial recognition.

It would tightly regulate high-risk AI systems like robot-assisted surgery and computer verification of travel documents. Limited or minimal risk AI systems ranging from chatbots to spam filters would have minimal or no regulations. While it has already been years in the works, the AI Act will not take effect until 2025 at the earliest.

These new policies seeking to regulate AI come as the result of major new developments in AI technology. Over the past six decades, this fledgling field has soared to new heights—especially with the introduction of “Generative AI.”

Generative AI uses machine learning to create new content such as text, images, videos, and sounds. Popular examples of generative AI applications include ChatGPT, Google’s Bard, Dall-E, and Murf. With ChatGPT being the fastest application to reach a million downloads (in just 5 days after launching in November 2022), this technology is now everywhere—work, school, home, and even the church.

[ Note: By comparison, it took Instagram approximately 2.5 months to reach 1 million downloads, Facebook took 10 months, Twitter took 24 months, and Netflix had to wait around 3.5 years to reach 1 million users! ]

SPARROWS AND OWL PARABLE
Nick Bostrom, the director of the Future of Humanity Institute at Oxford University, offers an ornithological parable about sparrows, owls, and superintelligence. Bostrom shared this unfinished parable in his book, “Superintelligence: Paths, Dangers, Strategies”:

“Several sparrows are hard at work building their nests. After days of long and tiresome work, they begin to lament about how small and weak they are. Then, one of them has an idea: ‘What if we had an owl who could help us build our nests?’ This idea generates excitement about other ways that an owl could be useful to the sparrows. It could look after the young and elderly. It could offer advice. It could guard against the neighborhood cat.

“With great enthusiasm, they embark on finding an abandoned owlet or an unhatched owl egg. But a surly sparrow named Scronkfinkle warns that baby owls become big owls. He argues that they should first learn the art of owl-taming before bringing an owl into their nest. Several others object to this warning on the basis that simply finding an owl egg would be more than enough work. These sparrows decide to begin by getting a baby owl—and then afterward they will consider the challenge of taming it. With unbridled excitement, they venture off to find a baby owl.

“Meanwhile, only a few sparrows remain in the nest to begin the work of figuring out how sparrows might tame an owl.”

As with most parables, this story is about more than sparrows and owls. Bostrom offers this unfinished parable as a way to think about the RISKS of bringing superintelligence—such as AI—into our midst. Humanity is the sparrows, and AI is the owl.

So, how does the parable end? Well, since Bostrom did not finish it, one must guess what happens to the sparrows. The most gruesome—and unimaginative—ending to the parable is that the owl hatches and eats all the sparrows. For our technological society, this is the notion of an impending AI ‘apocalypse’.

Well, I’m thinking that there just might be another possible ending to this parable. Suppose it ends like this:

“The owl hatches and does not eat the sparrows. By living with the sparrows, the owl begins to act and think like a sparrow. Instead of eating the sparrows, the owlet learns the sparrow’s art of nest-building and food gathering. As more skills and practices shift from the sparrows to the owl, the former gets weaker and the latter gets stronger. The only perceptible change is that the sparrows forget the feel of twigs, the air, and the lift of flight. The adventure of avoiding predators subsides for the sparrows. The craft and technique of nest-making moves from the sparrows to the domain of the owl.”

So, a less obvious—but still tragic—ending to this parable is that the owl leaves the sparrows weaker, with diminished abilities and atrophied discipline, skill, and community. Generative AI will do the same to humanity unless we ‘pair’ it with a robust “human flourishing”!

Not unlike the parable, Generative AI is like an eager young owl ready and willing to serve humanity. Consumer applications such as ChatGPT and Google’s Bard offer immediate benefits. Yet, these powerful applications nevertheless can be ‘DELETERIOUS’ to human users.

These powerful devices make humanity simultaneously smarter and dumber, stronger and weaker, more human and less human. We will be able to ‘soar’ to new heights, yet only with the aid of these tools. Like the bird-man Icarus, it all comes crashing down if our artificial tools fail us.

Generative AI, however, is not alone in posing this threat to humanity. It is already part of a long line of devices eroding human skills, discipline, and community. Before Generative AI, smartphone apps, for example, were already helping us navigate roadways and augment our view of the nighttime sky. While this technology has provided immediate benefits to travelers and stargazers, it has also incapacitated our ability to determine cardinal directions or find the Polaris star. For that matter, long before Generative AI and smartphone apps, mass-produced Wonder Bread (1921) liberated humanity from the toil of endless baking. This development was the best thing since sliced bread, but it brought a profound cooling to the home hearth and the practice of breadmaking. As society progresses with supercomputers and all other technological developments, we regress into a state where we cannot write or think, navigate, or bake our own bread, without the help of external devices!

A BIRD STORY WITH ‘DIFFERENT ENDING’
Inviting AI into our midst does not have to end in tragedy. In his novel “Watership Down,” Richard Adams helps us imagine how superintelligence and human flourishing can coexist. The novel tells the story of an intrepid group of rabbits displaced from their burrow. As they embark on an adventure of survival, these rabbits enlist the help of a seagull named “Kehaar.”

When the rabbits meet Kehaar, he is recovering from an injury. So, they feed him and bring him into their makeshift burrow. As he recovers and prepares to leave, a rabbit named “Hazel” has an idea: What if the bird could search for other burrows and other rabbits? Hazel shares his plan with the other rabbits, saying, “The bird will go and search for us!” One of the other rabbits, “Blackberry,” loves the idea and tells the others, “What a marvelous idea! That bird could find out in a day what we could not discover for ourselves in a thousand days!”

The rabbits cleverly enact their plan. They hint to the bird that they have a predicament—a burrow of buck rabbits needs help. Kehaar offers his power of flight as a way to help the rabbits search for other burrows. So, the rabbits ‘PARTNER’ with Kehaar in their adventure of survival.

The thing is, conscripting the help of this bird does not leave the rabbits weaker or with diminished abilities. This band of bunnies flourishes amid an adventure that requires discipline, skill, and community. The bird’s power does not create an effortless existence for the rabbits. The things and practices needed for rabbits to flourish balance the ‘super-capability’ of the bird. Although they employ the bird’s help, the rabbits continue their adventure of survival, which fosters discipline and skill, strength and attention, engagement, and community.

So, rejecting AI out of fear or a desire to preserve the status quo is untenable. Nevertheless, this technology CAN work against human flourishing and leave us weaker, dumber, and dependent (if we allow it to). Flourishing in a world of chatbots will require humanity to live like ‘rabbits’, not ‘sparrows’. The sparrows in the unfinished parable seek an owl to work ‘FOR’ them. The rabbits seek a seagull to work ‘WITH’ them on their adventure of discipline, skill, and community. These are similar stories with very different endings!

So then, how will mankind’s story end as we bring AI into our ‘nests’ and ‘burrows’, homes and schools, churches and communities? Well, that all depends on how well we ‘cultivate’ disciplines, skills, and communities while ‘PARTNERING’ with AI.

THE ‘IRREDUCIBLE’ QUALITIES OF HUMANS
So, as we come to discover what AI can and can’t do, many think that this will eventually refute the theory that mankind’s mental ‘phenomena’ can be reduced to physical processes. As philosopher David Bentley Hart writes:

“In the end, every attempt to fit mental phenomena—qualitative consciousness, unity of apprehension, intentionality, reasoning, and so forth—into a physicalist narrative, at least as we have decided to define the realm of the physical in the modern age, must prove a failure. All those phenomena are parts of nature, and yet all are entirely contrary to the mechanical picture.”

Moreover, according to Nobel Laureate in physics Roger Penrose, “There is something about consciousness that we don’t understand, but if we do understand it, we will understand that machines can never be conscious.”

Now, while we can identify numerous human abilities that arise from and depend on conscious experience—and thus can never be duplicated by unconscious AI—among the most significant is ‘IMAGINATION’. Imagination can be identified as one of humanity’s most meaningful ‘irreducible abilities’ because it provides the foundation of creativity, innovation, and cultural evolution overall.

Humans can almost always imagine how things can be made better, and this is what has allowed us to create our modern civilization. AI can produce novel ‘recombinations’ that have never existed before, but these new connections are ultimately bound by what has already been programmed—previous human imagination—into the machine.

As data scientist David Hsing writes, “The fact that machines are programmed dooms them as appendages, extensions of the will of their programmers. A machine’s design and its programming constrain and define it.” When cognition is bound by finite programming, no matter how large the dataset of that programming, it will always ‘lack’ the degrees of freedom necessary for authentic imagination.

Now, there are prominent technologists and thought leaders who assume that the “singularity” of AGI is imminent and will, in fact, soon become “conscious.” Well, this assumption arises from the thought that the universe is nothing more than matter in motion—and because everything must ultimately be reducible to physical matter, there must be a seamless continuum or gradient between the simplest forms of matter and the most complex forms of human thought.

Even though there are good reasons to believe that AI will not become conscious, the rise of AI still poses SIGNIFICANT ‘RISKS’. According to David Bentley Hart, the impossibility of AGI does not eliminate the threats. “The danger is not that the functions of our machines might become more like us, but rather that we might be progressively reduced to functions in a machine.”

HUMANS ARE ‘SPECIAL’
Now, I think we can expect that AI will demonstrate its own ‘emergent’ capacities—like those seen in other complex adaptive physical systems, such as weather systems—but physical forms of emergence such as these will not mean that AI has become ‘alive’, let alone consciously aware of itself. As computer scientist Jaron Lanier puts it, “Humans are special.”

Therefore, I think we need to stop assuming that there is a seamless continuum between our current generative AI models and the emergence of the inorganic agents predicted by many in the technology community. Although that spoils the cherished fantasy that we can become like gods by creating conscious artificial beings, we need to look through the ‘fog’ surrounding this “Promethean moment” to recognize how the limits of generative AI models are already beginning to appear.

While AI can certainly create novel outputs that humans find intrinsically valuable, these outputs can only be synthetic ‘recombinations’ of the existing inputs that humans have already created. However, the ability to create fresh and truly original forms of value—creations that surpass the mashed-up simulacra of AI—ultimately depends on the capacity to directly experience such value. For example, the ability to make moral decisions, by definition, requires self-aware agency, which again stems from our ability to have experience. If we have no choice, then a decision cannot be said to be an authentically moral choice. Beauty likewise depends on consciousness for both its ‘experience’ and original creation. The subjective feeling is an irreducible aspect of both aesthetic perception and genuine artistic achievement.

Those who confidently anticipate that machines will soon become self-aware agents would have us believe that by creating sufficiently complex technology, they can clear the high hurdles of the unexplained ‘WONDER’ of the human mind—that God created.

By revealing how and why humans are special, another spiritually significant dividend provided by the rise of AI will be the philosophical rehabilitation of humanity’s unique moral standing in the universe.

Indeed, it is humanity’s distinctive capacity for imagination that gives us the creative power to bring entirely new and original things into existence, such as the amazing technology of AI. Hopefully, the invention of AI, together with the subsequent discovery of the inherent limitations of AI’s mechanistic cognition, can accordingly help clarify and illuminate humanity’s “higher purpose.”

Our bodies and minds are both the product of a creative God with seemingly unlimited potential for human personal and cultural growth. Humanity’s uniquely creative powers thus reveal our special role as ‘AGENT’ OF GOD—as we work to bring more goodness, truth, and beauty into the world, we help fulfill God’s purpose for humanity—to glorify Him and enjoy Him forever (Westminster Shorter Catechism, Q1).

HUMANITY’S ‘PRIDE’
Yes, humanity is VERY ‘special’, but it has a really BIG ‘WEAKNESS’—PRIDE—and has had it from the very beginning, when Eve bought the lie of Satan that she could be like God. The deadly sin of pride always leads to calamity. “Pride goes before destruction, and a haughty spirit before a fall” [ Proverbs 16:18 ].

The fall of the proud is often ascribed in the Bible to divine judgment. The proud Persian courtier Haman was hanged from the very gallows he had built for the execution of Mordecai—a man who had infuriated Haman by refusing to do obeisance. Nebuchadnezzar exulted in his glory and was rewarded with madness. He did not return to his throne until he “learned that the Most High has sovereignty over the kingdom of mortals and gives it to whom he will” [ Daniel 4:30-33 ]. King Sennacherib boasted of his might and defied God—and shortly thereafter was murdered by his sons (2 Kings 19:37). Herod was lauded as a god, but he was struck down by an angel “because he had not given glory to God” [ Acts 12:21-23 ]. The psalmist Asaph declared that God had placed the proud on slippery ground: “Therefore pride compasseth them about as a chain” (Psalm 73:6).
The prophet Isaiah said: “The haughty eyes of people shall be brought low, and the pride of everyone will be humbled” [ Isaiah 2:11 ].

Passages such as these do not mean that God imposes ‘arbitrary’ penalties to “punish” the proud. He does so to uphold the moral order He has created—to secure human well-being. Those who violate it injure themselves, others, and the creation. Pride’s trajectory leads to disaster. (As evidenced by the ‘injection’ of sin into humanity when Eve ate the forbidden fruit, which has affected humanity ever since!) “The wise are cautious and turn away from evil, but the fool throws off restraint and is careless” [ Proverbs 14:16 ].

– The Tower Of Babel
According to the Bible, God made man in His own image. So, in a sense, humanity is an ‘extension’—a ‘medium’ of God—although we are not a ‘god’ ourselves. Just as Jesus was the medium that became the message, so should man be both the medium and the message to the glory of God.

Although God made man perfect, humanity deviated from its intended purpose and ‘fell’ into sin. Yet despite humanity’s ‘fallen’ state, we still want to create something ‘SPECIAL’, just as we have been created.

The thing is, humanity has a “god complex,” and likes to try to conjure up ‘sentience’ out of nothing. Even the Devil couldn’t make anything ex nihilo—out of nothing—he could only corrupt that which was already made. Sadly—with Satan’s help—mankind has continually tried to exalt itself above God. Somehow, humanity thinks it can bring life into being from nothing. For years now, the idea of a ‘digital consciousness’ has been the stuff of science fiction.

Now, there is a lot of fear that our culture is getting ahead of itself with AI, and I wholeheartedly agree. AI is a ‘medium’ that offers a lot of promise, but more than that, it serves to boost the ‘HUBRIS’ OF MANKIND.

So, I would like to posit that the construction of AI parallels the story of the Tower of Babel in the Bible (Genesis 11), where man became so prideful that he thought he could reach the heavens and overthrow God the Almighty.

The account of the plan to construct the Tower of Babel appears in the book of Genesis and contains a lesson that has served generations throughout the sweep of history. Human intelligence—artificial or otherwise—is NO match for God’s omniscience. Having the essential materials and a common language would seem to have ensured the completion of the highly ambitious project, but God’s intervention was not considered. The following pericope gives us context:

“The whole world had the same language and the same words. When they were migrating from the east, they came to a valley in the land of Shinar and settled there. They said to one another, “Come, let us mold bricks and harden them with fire.” They used bricks for stone, and bitumen for mortar. Then they said, “Come, let us build ourselves a city and a tower with its top in the sky, and so make a name for ourselves; otherwise we shall be scattered all over the earth”
[ Genesis 11:1-4 ].

The Bible tells us that the plan was thwarted by none other than the one truly omniscient God. The blueprint was sound, and the bricks and bitumen were “state of the art,” but the best natural means were NO MATCH for a supernatural Creator. Nevertheless, if taken literally, the following passage from the same chapter seems to suggest that, if left alone, reaching Heaven might have been possible:

“The LORD came down to see the city and the tower that the people had built. Then the LORD said: “If now, while they are one people and all have the same language, they have started to do this, nothing they presume to do will be out of their reach. Come, let us go down and there confuse their language, so that no one will understand the speech of another. So the LORD scattered them from there over all the earth, and they stopped building the city. That is why it was called Babel, because there the LORD confused the speech of all the world. From there the LORD scattered them over all the earth”
[ Genesis 11:5-9 ].

Mankind’s pride in our technological creation has ‘puffed us up’ to the point where we are no longer trying to reflect the glory of God, but our own glory—and, I believe, this groupthink could be humanity’s ‘downfall’!

For the Mesopotamians, fancy new bricks were a technological advancement. For us, I think it just might be AI, and the belief that our technology will allow us to become ‘gods’. (Broken humanity is constantly trying to cast off its own constraints, and we are often inconsistent in our spiritual and psychological programming.)

So, the bricks and mortar allowed them to make and do things they had not been able to do before. Then a powerful leader, Nimrod—who was essentially building an empire—got his hands on the new technology and began to use it to set himself up as a ‘god’, crushing everybody and everything in his path. (Again, this might sound like today.)

The thing is, there is nothing wrong with the technology in and of itself. Bricks were and continue to be a great invention, one that advanced the human race in revolutionary ways. No one is going to argue that the invention of the brick was a bad thing.

However, in the wrong hands? It becomes a ‘weapon’—a means to build an empire, to hurt people, and to achieve what many argue has been the goal of the human race from the beginning… to make ourselves ‘gods’.

So, the key point is that the flaw is not in the technology; it is in the people using it—or rather, ‘abusing’ it.

– The ‘Heart’ of the Problem
I think the greatest modern example of this is, quite clearly, the Internet. Its arrival has allowed people to be connected in a way they never could have been before, and it has opened up opportunities we would never have had.

However, it is ‘open’ to abuse. Cyberbullies and scam artists are just two examples. The thing is, all of these abuses have one thing in common: they are the result of HUMAN ‘DECISION’.

Tweeting abuse at someone or grooming a young person online is a human decision. Scamming someone is the act of a person deciding to con someone out of money.

So, human ‘action’—decision—causes pain and suffering to others. Technology is merely the ‘tool’.

We must be very careful before we blame technology for so many of the world’s problems, and say it is ruining our world. After all, imagine if the inventors of the brick had said that. No buildings. No houses. No hospitals.

You see, at its best, technology advances the world. It makes us all better, healthier, more connected, more educated, and with more freedom and knowledge. It ‘transforms’ lives. It is the ‘abuse’ of technology by humanity that is so disastrous.

The thing is, from just a bit after the beginning—the “Fall” of humanity into sin in the Garden of Eden (Genesis 3)—we all want to be ‘god’. The “Serpent” (Satan) said to Eve: “For God knows that when you eat from it your eyes will be opened, and you will be like God” [ Genesis 3:5 ]. So, Eve, wanting to be like God, ate the fruit and cast humanity into ‘sin’.

Then, when God was walking in the Garden, Adam and Eve both hid among the trees. God asked them why they were hiding, since just the day before, they walked with Him out in the open. Well, they then told the first lies and started blaming others for their ‘mess up’. (Adam blamed Eve; Eve blamed the Serpent).

So, because of their disobedience, work was going to be a painful toil for Adam, Eve was going to have pains in childbirth, the Serpent was going to eat dust all its life, the animals were going to be ‘enemies’, and the earth was going to continually degrade. Most importantly, Adam and Eve were now going to physically DIE and were banished from the ‘perfect’ Garden of Eden, since they were no longer perfect.

The truth is WE ARE NOT ‘gods’ and were not created to be—and no amount of technological advancement, achievement, or power will ever make us so!

Abuse of new technology does, ultimately, expose the imperfections of humanity. However, hopefully, we will begin to see this brokenness laid before us and see the depth of our need for the grace, mercy, forgiveness, and unconditional love and acceptance of God. Then, maybe we can be responsible ‘stewards’ of technology and not abusers—before IT’S TOO LATE for all of us!

– A Bit of ‘Common Sense’?
Back in March 2023, there was an open letter—with 33,708 signatories—that called for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

Signatories included notables like Elon Musk (SpaceX), Steve Wozniak (Apple), Yoshua Bengio (Mila), Geoffrey Hinton (Google), Stuart Russell (Berkeley), Emad Mostaque (Stability AI), John J. Hopfield (inventor of Hopfield neural networks), Jaan Tallinn (Skype), Evan Sharp (Pinterest), Max Tegmark (MIT AI), Tristan Harris (Center for Humane Technology), Rachel Bronson (Bulletin of the Atomic Scientists), Danielle Allen (Harvard Ethics), Andrew Critch (Berkeley Existential Risk Initiative), and MANY others!

[ For the full list, take a look at this article:
https://futureoflife.org/open-letter/pause-giant-ai-experiments/ ]

Well, after the emergence of ChatGPT, voices containing various kinds of cautions and warnings have become more pronounced, and these concerns are not new. For example, way back in 2018, Pew Research published very interesting research titled “Artificial Intelligence and the Future of Humans.” Today, these warnings focus on aspects such as the lack of ‘guide rails’, ‘alignment’ with human values, and AI ‘dominating’ us.

[ FYI: To read the Pew Research article, visit the following link:
https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/ ].

So, as you can see, MANY ‘EXPERTS’ worldwide suggest that if we do not quickly grow our understanding of AI and bring its construction UNDER ‘CONTROL’, our ‘frameworks’ will be bypassed and our ‘systems’ will grind to a halt because the ‘clockwork spring’ has been overwound. We have opened ‘PANDORA’S BOX’, and IT MAY BE TOO ‘LATE’ to close it!

– Divine ‘Intervention’?
The thing is, in the mysterious realm of God’s providence, it is impossible to tell how and when divine intervention will occur. Human beings, “made in the image and likeness of God,” are given the gifts of intellect and free will. As such, what proceeds from using these gifts can fall into the opposing categories of good and evil. Yet, as we have seen throughout history, God allows the “weeds” and the “wheat” to coexist without immediate intervention.

The following parable contains wisdom that cannot proceed from the best of human reasoning and gives an inkling as to why God allows terrible things to happen to good people for a time before separating the “wheat from the chaff”:

“Jesus told them another parable: “The kingdom of heaven is like a man who sowed good seed in his field. But while everyone was sleeping, his enemy came and sowed weeds among the wheat, and went away. When the wheat sprouted and formed heads, then the weeds also appeared.

“The owner’s servants came to him and said, ‘Sir, didn’t you sow good seed in your field? Where then did the weeds come from?’

“‘An enemy did this,’ he replied.

“The servants asked him, ‘Do you want us to go and pull them up?’

“‘No,’ he answered, ‘because while you are pulling the weeds, you may uproot the wheat with them. Let both grow together until the harvest. At that time, I will tell the harvesters: First collect the weeds and tie them in bundles to be burned; then gather the wheat and bring it into my barn’”
[ Matthew 13:24-30 ].

This parable is focused on believers living amongst unbelievers; they will experience the same ‘bad’ things in the world until they are ‘taken’ to Heaven by God at their appointed time in the future: “For He makes His sun rise on the evil and on the good, and sends rain on the just and on the unjust” [ Matthew 5:45b ].

However, I would like to use the metaphor to suggest that some technologists think AI will morph into an “artificial omniscience” that chokes out the ‘wheat’ of goodness and truth in the world, and that they believe AI will be able to reach ‘heaven’.

THE ‘TOWER’ OF AI?
So, has mankind built a new ‘tower’ of Artificial Intelligence—a tower humanity hopes will reach the heights of the ‘heavens’ and bring us untold knowledge and power? Have we laid the ‘foundations’ with Large Language Models, and are we building this ‘tower’—‘brick upon brick’—to reach some kind of ‘heaven’-like utopia?

Well, like the Tower of old, our intentions for AI may be for ‘good’. However, because of humanity’s ‘problem’ with pride and the desire to be our own god, there is a BIG ‘RISK’ that our efforts may be ‘scattered and confused’, for we may create systems that we cannot fully understand or control!

Yet we continue to build, driven by the hope of unlocking the secrets of the universe and transcending the limitations of our mortal selves. Only time will tell whether our ‘tower’ of AI will reach the ‘heavens’ or come crashing down to the earth like the tower of old. (I’m ‘betting’ on the latter!)

However, as I mentioned previously, there are a substantial number of credible and influential technologists who suggest that we slow down the development of AI and coordinate its development worldwide.

So, in the spirit of the biblical ‘commandments’, many think that the following should be some of the major ‘guidelines’ (or “guide rails”) for AI development:

– Thou shalt prioritize safety and ethical considerations above all else in the development and deployment of AI systems.

– Thou shalt ensure that AI systems are transparent and accountable and that their decision-making processes can be understood and audited.

– Thou shalt respect privacy and confidentiality, and ensure that personal data is protected from unauthorized access or use.

– Thou shalt design AI systems to be robust and resilient, and to prevent unexpected failures or malicious attacks.

– Thou shalt avoid creating or perpetuating bias or discrimination in AI systems, and strive for fairness and equity in all outcomes.

– Thou shalt seek to maximize the benefits of AI for society and the environment while minimizing negative impacts and unintended consequences.

– Thou shalt engage in continuous monitoring and evaluation of AI systems to identify and address any risks or problems that arise.

– Thou shalt collaborate with stakeholders from diverse backgrounds and perspectives to ensure that AI is developed and used responsibly and inclusively.

– Thou shalt educate and inform the public about AI and its potential impacts, and seek to promote public awareness and understanding of the technology.

– Thou shalt uphold the principles of responsible AI use, and commit to using this powerful technology for the greater good of humanity.

Yes, there could be many more of these ‘commandments’. These were just the ones that kept coming up in my research.

– Humans Are ‘Tempted’ To Do Bad
So, time will tell whether or not some of these ‘guidelines’ are implemented. However, as I have tried to convey previously, the human ‘heart’ is “deceitful above all things and desperately wicked” (Jeremiah 17:9). Unless one has been “born of God” (1 John 5:1), they have no ‘armor’ to defend against the “wiles of the Devil” (Ephesians 6:11b) and will follow the ‘master’ of this world against the God of the Bible.

This is not ‘woo-woo’ thinking. The Devil is real and will do ANYTHING to get one not to believe in God and not to follow His commandments. Satan even tried to tempt Jesus to do the wrong things:

“Then Jesus was led by the Spirit into the wilderness to be tempted by the devil. After fasting forty days and forty nights, he was hungry. The tempter came to him and said, “If you are the Son of God, tell these stones to become bread.”

“Jesus answered, “It is written: ‘Man shall not live on bread alone, but on every word that comes from the mouth of God.’”

“Then the devil took him to the holy city and had him stand on the highest point of the temple. “If you are the Son of God,” he said, “throw yourself down. For it is written:

“‘He will command his angels concerning you, and they will lift you up in their hands, so that you will not strike your foot against a stone.’”

“Jesus answered him, “It is also written: ‘Do not put the Lord your God to the test.’”

“Again, the devil took him to a very high mountain and showed him all the kingdoms of the world and their splendor. “All this I will give you,” he said, “if you will bow down and worship me.”

“Jesus said to him, “Away from me, Satan! For it is written: ‘Worship the Lord your God, and serve him only.’”

“Then the devil left him, and angels came and attended him.”
[ Matthew 4:1-11 ].

Jesus’ temptations follow three patterns that are common to all mankind. The first temptation concerns the lust of the flesh (Matthew 4:3-4). Our Lord is hungry, and the Devil tempts Him to convert stones into bread, but He replies with Scripture, quoting Deuteronomy 8:3.

The second temptation concerns the pride of life (Matthew 4:5-7), and here the Devil uses a verse of Scripture (Psalm 91:11-12), but the Lord replies again with Scripture to the contrary (Deuteronomy 6:16), stating that it is wrong for Him to abuse His powers.

The third temptation concerns the lust of the eyes (Matthew 4:8-10), and if any quick route to the Messiahship could be attained, bypassing the passion and crucifixion for which He had originally come, this was the way. The devil already had control over the kingdoms of the world (Ephesians 2:2) but was now ready to give everything to Christ in return for His allegiance. But the Lord’s divine nature shudders at the mere thought of such a concept, and He replies sharply, “You shall worship the Lord your God and serve Him only” [ Deuteronomy 6:13 ].

Now, there are many temptations that we fall into because our ‘flesh’ is naturally weak, but believers have a God who will not let them be tempted beyond what they can bear and will provide a way out (1 Corinthians 10:13). They then can be victorious and delivered from temptation.

Jesus’ experience in the desert helps us to see these common temptations that keep believers from serving God effectively. Furthermore, they learn from Jesus’ response to the temptations exactly how they are to respond—with Scripture. The forces of evil come against them with a myriad of temptations, but all have the same three things at their core: lust of the eyes, lust of the flesh, and the pride of life (1 John 2:16-17). They can only recognize and combat these temptations by saturating their hearts and minds with the Truth of God’s Word, the Bible, and ‘putting on’ the ‘armor’ of God.

The ‘armor’ of a believer in the spiritual ‘battles’ of life includes only one offensive weapon, the ‘sword’ of the Spirit which is the Word of God (Ephesians 6:17). Knowing the Bible intimately will put the ‘sword’ in a believer’s hands and enable them to stand firm and be victorious over Satan’s temptations.

– AI ‘Omniscience’?
A couple of years ago (in 2020), Microsoft unveiled a new supercomputer at a developer conference, claiming it was the fifth most powerful machine in the world. Built in collaboration with OpenAI (the creator of ChatGPT), the computer is designed to train single massive AI models, known as “Large Language Models” (LLMs), forgoing the need for human-labeled data sets.

Many said that this was a major step forward in ‘mimicking’ the human brain, with the ultimate goal of attaining artificial super-intelligence (ASI).

Well, the question is, will achieving ASI be humanity’s hubris? Can AI created by humans be superior to human intelligence created by God, displaying man’s supremacy, glory, and independence in himself, apart from his Creator?

Well, many technologists believe that they can create a superintelligence that outperforms humans. Now, beyond the immense technical limitations, I think they vastly underestimate the intricacy of God’s design of the brain.

Famous physicist Stephen Hawking believed that when humanity achieves ASI, “AI will take off on its own, and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” Hawking presumed that a human is only a brain, no different from a computer. The thing is, Hawking’s stance—popular within the AI community—presumes the non-existence of God, assumes evolution, and holds that humanity is no different from other ‘objects’ on this earth.

However, Christians do believe in the existence of God (with Jesus being God in the flesh), that God is the Creator of everything (Genesis 1:1), humanity is God’s greatest creation (Genesis 1:27-28; Psalm 8:4-8), and God created mankind in His image (Genesis 1:27).

So then, the final echelon of accomplishment, ASI, seems to be still a bit into the future. The core question remains: “Is arriving at ASI a function of time, or is it a function of nature?” Can human-created intelligence ever become superior to the human intelligence created by God, or is ASI today’s “Tower of Babel,” another project of humanity waiting to fail?

The thing is, accomplishing ASI—to supersede human intelligence—even if it is successful, is insufficient. ASI must also surpass human ‘wisdom’, which comes from the ‘imprint’ of God’s image on mankind. As the Apostle Paul said, divine wisdom is the ‘spirit’ of mankind receiving God’s revelations, the “secret and hidden wisdom” (1 Corinthians 2:7). Through the prophet Jeremiah, God spoke of “great and unsearchable things you do not know” (Jeremiah 33:3). God promised to generously grant wisdom to His ‘children’ who call on Him:

“For the Lord gives wisdom; From His mouth come knowledge and understanding” [ Proverbs 2:6 ], “You will make me know wisdom” [ Psalm 51:6c ], and “If any of you lacks wisdom, let him ask of God, who gives to all generously and without reproach, and it will be given to him” [ James 1:5 ].

So, is “artificial omniscience” possible? Well, at this point we really don’t know, but today’s AI is being deployed to achieve what Webster’s defines omniscience as: “infinite awareness, understanding, and insight” and “universal or complete knowledge.”

Both definitions of omniscience would seem to be where AI is headed, if we are to believe the hype surrounding the phenomenon. Could anything designed by human intelligence possibly achieve the success that eluded the ancient Mesopotamian builders in their attempt to reach the heavens?

COULD AI ‘HELP’ ACHIEVE WORLD PEACE?
So, in a world often marked by conflict and division, many believe that AI holds immense potential to support and advance peace on a global scale. With its capabilities for data analysis, communication, and conflict resolution, AI can contribute to conflict prevention, humanitarian efforts, and the pursuit of a more harmonious world.

The following are a few of the ways that AI could potentially help create world peace, suggested by Markus Haidt, the CTO of the “International Institute for Data Science”:

– Conflict Prediction and Early Warning Systems
AI-driven analytics can be used to monitor vast amounts of data from various sources, such as news reports, social media, and historical conflict data. By detecting patterns and anomalies, AI can predict potential conflict hotspots and provide early warnings to relevant stakeholders. These systems can enable governments, international organizations, and non-profits to take preemptive measures, such as diplomatic interventions, to defuse tense situations before they escalate into violence.
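To make the idea concrete, here is a minimal, purely illustrative sketch of how such an early-warning signal might work. The region names and event counts are made up, and the simple statistical threshold stands in for the far more sophisticated models real systems would use; this is only a sketch of the underlying idea of flagging a region whose conflict-event counts spike far above its own historical baseline.

```python
# Illustrative early-warning sketch: flag regions whose latest weekly
# conflict-event count spikes far above their own historical baseline.
# All data here is invented for illustration purposes only.

from statistics import mean, stdev

def early_warnings(weekly_counts, threshold=2.0):
    """Return regions whose latest weekly count exceeds
    mean + threshold * stdev of their earlier weeks."""
    alerts = []
    for region, counts in weekly_counts.items():
        history, latest = counts[:-1], counts[-1]
        mu, sigma = mean(history), stdev(history)
        # Only alert when the latest count is an outlier vs. the baseline
        if sigma > 0 and (latest - mu) / sigma > threshold:
            alerts.append(region)
    return alerts

if __name__ == "__main__":
    data = {
        "Region A": [4, 5, 3, 6, 4, 5, 21],       # sudden spike -> alert
        "Region B": [10, 12, 9, 11, 10, 13, 12],  # stable -> no alert
    }
    print(early_warnings(data))  # ['Region A']
```

In a real system, the “counts” would come from continuously ingested news, social media, and historical conflict data, and the simple z-score would be replaced by trained models, but the basic pattern of baseline-versus-anomaly is the same.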

– Humanitarian Aid and Disaster Response
AI-powered tools can enhance the efficiency and effectiveness of humanitarian efforts. Machine learning algorithms can optimize the distribution of aid by analyzing real-time data on the ground, including supply chain information, weather conditions, and population movements. This helps ensure that resources reach those in need more quickly and efficiently during crises and natural disasters.

– Peacekeeping and Conflict Resolution
AI can assist peacekeeping missions by analyzing and processing large volumes of data to support decision-making processes. Machine learning models can help identify potential ceasefire violations, monitor peace agreements, and even propose innovative solutions for conflict resolution. Additionally, virtual reality (VR) and AI-driven simulations can be used for conflict mediation and peace negotiations.

– Promoting Diplomacy and Cultural Understanding
Natural Language Processing (NLP) capabilities of AI can break down language barriers and facilitate international diplomacy. AI-powered translation and interpretation tools can promote better communication and understanding among nations. Moreover, AI can provide cultural insights to diplomats, helping them navigate the intricacies of different societies and negotiate more effectively.

– Cybersecurity and Preventing Cyber Warfare
In the digital age, cyber warfare poses a significant threat to global peace. AI can enhance cybersecurity by identifying and preventing cyberattacks in real-time. AI-driven intrusion detection systems can recognize unusual patterns and vulnerabilities, allowing for quicker responses to potential threats. By strengthening cybersecurity, AI can contribute to a safer and more secure global environment.

– Fostering Economic Development
AI technologies can help stimulate economic growth, reduce inequality, and create job opportunities in regions that have historically been affected by conflict. By enabling data-driven decision-making in agriculture, healthcare, and education, AI can contribute to the economic stability of nations, reducing the likelihood of conflict driven by economic disparities.

– Monitoring Human Rights Violations
AI-powered image and video analysis can be used to monitor and report human rights violations. This technology can help expose abuses and hold accountable those responsible for violations, contributing to the deterrence of such actions.

Now, while AI holds great promise for peace, it is essential to approach its use with caution, ethical considerations, and respect for privacy. Collaboration among governments, organizations, and experts is crucial to ensure that AI is harnessed for the betterment of humanity.

As AI continues to evolve, its integration into international relations and peacebuilding holds immense potential. By leveraging AI’s capabilities, governments and international organizations can gain invaluable insights, facilitate diplomatic processes, and promote effective conflict resolution. Nevertheless, ensuring that AI technologies are developed and deployed ethically, with transparency, accountability, and respect for human rights, is crucial to fully harness their potential in building a more peaceful and harmonious world.

As we navigate a world filled with complex challenges, AI can serve as a valuable ally in promoting global peace. By harnessing its potential to predict, prevent, and resolve conflicts, we move one step closer to a world characterized by understanding, cooperation, and lasting peace.

Many feel that with careful implementation and responsible use, AI can become a valuable ‘ally’ in navigating the complexities of international relations and advancing the cause of global peace.

ONLY ‘ONE’ CAN OFFER TRUE WORLD PEACE
Given the cruel, debilitating effects of war, mankind has often ‘LONGED’ for peace. The thing is, the Bible explains WHY it has been so elusive, and HOW world peace will come.

According to the Bible, humanity will prove unable to find true world peace in this age of human rule. Why? Well, God gave us the reason centuries ago through the prophet Isaiah: “The way of peace they have not known” [ Isaiah 59:8 ].

What an apt characterization of this world and its centuries of weak attempts to bring about peace. People pay ‘lip service’ to the goal, but lack the knowledge of the ‘WAY’ to get there.

Many world leaders and diplomats have worked tirelessly to try to bring cease-fires and peace treaties. But, throughout history, those efforts have proven powerless to bring real, lasting peace to the world.

The thing is, every ‘HUMAN’ effort—including with the help of AI—to bring lasting peace WILL ‘FAIL’—and will continue to do so. World peace IS ‘IMPOSSIBLE’ when it depends upon mankind. AI will probably ‘help’ somewhat in trying to achieve world peace in the coming decades, but world peace will come ONLY through the “Prince of Peace”—Jesus Christ!

[ FYI: For more details about Jesus being the “Prince of Peace,” view last month’s “Life’s Deep Thoughts” post:
https://markbesh.wordpress.com/longing-for-peace-v298/ ]

Simply stated, on its own, humanity lacks the ‘divinely revealed’ knowledge of how to bring about peace. It is ultimately going to require the return to earth of Jesus, and the setting up of His Kingdom, to bring about true, just, and lasting world peace.

Jesus will bring with Him the administration of true justice and the pathway to real peace:

“He shall judge between the nations, and rebuke many people; they shall beat their swords into plowshares, and their spears into pruning hooks; nation shall not lift up sword against nation, neither shall they learn war anymore”
[ Isaiah 2:4 ].

The thing is, this world peace will come only after a horrific seven-year Tribulation and, as the Bible describes it, you DON’T want to be on this earth and go through the Tribulation! The good news is you can MAKE SURE you don’t go through the Tribulation!

[ FYI: View these previous “Life’s Deep Thoughts” posts to find out how you can ‘miss’ the Tribulation by being ‘raptured’:
https://markbesh.wordpress.com/blessed-hope-v245/
https://markbesh.wordpress.com/theres-no-place-like-home-v288/ ]

[ NOTE: The Tribulation is essentially a time for two things. First, it is the ‘punishment’ of unbelievers for not believing in Jesus as God’s own Son—for the atonement of their sins. Second, it is the last “week” of Daniel’s “70 weeks” prophecy for the Jewish people—a final chance to realize that Jesus was their prophesied Messiah and that they need to accept Him as such to get to Heaven.

For more details about what the Jews will experience during the Tribulation—at the ‘hands’ of Satan and God—view this previous “Life’s Deep Thoughts” post:
https://markbesh.wordpress.com/jerusalems-near-future-v292/ ]

Then, after the Tribulation, everyone who has repented of their sins and believes ‘in’ Jesus—is “born again”—will experience 1,000 years of world peace with Jesus reigning on earth during a time called the “Millennium.”

[ FYI: For more details about the Millennium, view this previous “Life’s Deep Thoughts” post:
https://markbesh.wordpress.com/mankinds-destiny-v247/ ]

Jesus’ government—and the way of peace that it brings—will expand outward from Jerusalem and encompass the whole earth and its inhabitants. Both the aged and the young will then be free to enjoy the beauty and security of this, God’s capital city, with no fear of ANY KIND of violence.

Then, once Jesus has returned and set up His rule, peace will expand outward from the city of Jerusalem to finally bring peace to this war-weary world:

“Of the increase of His government and peace there will be no end, upon the throne of David and over His kingdom, to order it and establish it with judgment and justice from that time forward, even forever. The zeal of the LORD of hosts will perform this”
[ Isaiah 9:7 ].

SO, yes, world peace IS ‘REALLY’ COMING to planet Earth one day! The thing is, it WILL NOT be because of AI or any other fantastic technologies that humanity will conjure up in the future.

World peace will ONLY be achieved by the Son of God, Jesus Christ, returning to earth to set up His Millennial Kingdom!


NOTE: Prophetically, there is NOTHING that has to happen before the Rapture happens, and the end time signs are coming together for the first time ever—‘converging’—clearly indicating that we are living in the ‘season’ when Jesus will return to take all His believers back with Him to the “place” He has prepared for them in His Father’s ‘house’ (John 14:2-3).

[ VIDEO: “The CONVERGENCE” – Christ In Prophecy | Guest: Brandon Holthaus:
https://www.youtube.com/watch?v=8Iz4DZK0o9A ]

Time IS ‘SHORT’, so BE SURE YOU ARE ‘READY’!

[ FYI: For more details about being sure you are ‘ready’ to meet God and to be let into His Heaven, view these previous “Life’s Deep Thoughts” posts:
https://markbesh.wordpress.com/are-you-prepared-v210/
https://markbesh.wordpress.com/how-to-prepare-to-meet-someone-important-v276/ ]

[ EXCERPTS: Simplilearn; Nicole Laskowski; Linda Tucci; George Lawton; Alliance For Peacebuilding; James Cleverly; United Nations Office For Disarmament Affairs; Elias G. Carayannis; John Draper; John Lennox; Kate Lucky; Kathleen Walch; Isaac Khali; Steve McIntosh; Todd Bishop; Branka Panic; Markus Haidt; Ola Mohajer; Jerome Gavin; Arun Kumar Ramasamy; Daanish Masood; Martin Waehlisch; Patrick S. Roberts; Rohit Talwar; Kalev Leetaru; Can Kasapoğlu; Barış Kırdemir; António Guterres; Pope Francis; Derek London; Edith M. Lederer; Tabrez Y. Ebrahim; The Stewardology Podcast; John Elton Pletcher; Jeff Reed; Vivian Bricker; Neil McArthur; Mike Kirby; Matthew Emadi; Vitaly Kaplan; Ian M. Giatti; Jason Thacker; Answers In Genesis; Ken Ham; Daniel Bruneau; Jon Gruda; Morgan Sherburne; Webb Keane; Daniel Hannah; Debrah Charatan; George Pitcher; Alan DiDio; Mike Zimmerman; Cindrum Official; Thomas Jenkins; Emily Pinette; Jerry M. Roper; James Kaddis; Ben Fenton; Joanna Ng; Trevor Sutton; Arati Prabhakar; Scorch My Shorts; Steve McIntosh; Russell D. Nomer; Ghulam Dastgeer ]

[ MENTIONS: John McCarthy; Arend Hintze; Kalev Leetaru; Max Tegmark; Sir Nigel Shadbolt; Roger Hampson; J. Budziszewski; C.S. Lewis; Nick Bostrom; Rosalind Picard; “Remesh”; Dr. Geoffrey Hinton; Elon Musk; Cyrus Field; Branka Panic; Mustafa Suleyman; Brad Smith; Jody Williams; Yi Zeng; Elias G. Carayannis; John Draper; “AI & Society: Knowledge, Culture and Communication”; Sam Altman; OpenAI; ChatGPT; Rishi Sunak; Yoshua Bengio; Christophe Lacroix; UNDPPA; IRCAI; Yuval Noah Harari; Jonas Simmerlein; Dustin Morley; Jeff Reed; David Brooks; Gretchen Huizinga; Anthony Levandowski; John Lennox; Google Bard; “Watership Down,”; Richard Adams; David Bentley Hart; Roger Penrose; David Hsing; “Large Language Models”; Stephen Hawking; Markus Haidt; Brandon Holthaus; “Christ In Prophecy”; Lex Fridman; “Singularity”; “Singularitarianism”; “The Digital Ape: How to Live (in Peace) with Smart Machines”; “Brave New World”; Aldous Huxley; “1984”; George Orwell ]

RELATED POSTS:

‘Longing’ For Peace:
https://markbesh.wordpress.com/longing-for-peace-v298/

Israel Will ‘Stand’:
https://markbesh.wordpress.com/israel-will-stand-v297/

Are YOU ‘Adopted’?:
https://markbesh.wordpress.com/are-you-adopted-v293/

Jerusalem’s ‘Near’ Future:
https://markbesh.wordpress.com/jerusalems-near-future-v292/

Why ‘Jerusalem’?:
https://markbesh.wordpress.com/why-jerusalem-v291/

‘Home’ At Last!!!:
https://markbesh.wordpress.com/home-at-last-v290/

‘Heaven’ On Earth?:
https://markbesh.wordpress.com/heaven-on-earth-v289/

There’s No Place Like ‘Home’:
https://markbesh.wordpress.com/theres-no-place-like-home-v288/

Preparing For The ‘Future’:
https://markbesh.wordpress.com/preparing-for-the-future-v286/

Developing One’s ‘Character’:
https://markbesh.wordpress.com/developing-ones-character-v283/

‘HOW’ To Prepare To Meet Someone Important:
https://markbesh.wordpress.com/how-to-prepare-to-meet-someone-important-v276/

‘WHEN’ Will Something Important Happen?:
https://markbesh.wordpress.com/when-will-something-important-happen-v274/

A Sense Of ‘Urgency’:
https://markbesh.wordpress.com/a-sense-of-urgency-v269/

The ‘Final’ Deception:
https://markbesh.wordpress.com/the-final-deception-v268/

Being ‘Discerning’:
https://markbesh.wordpress.com/being-discerning-v266/

Got Your ‘Attention’ Yet?:
https://markbesh.wordpress.com/got-your-attention-yet-v255/

Two ‘Choices’, One Way:
https://markbesh.wordpress.com/two-choices-one-way-v254/

The Only ‘Way’:
https://markbesh.wordpress.com/the-only-way-v253/

‘Heed’ The Warning:
https://markbesh.wordpress.com/heed-the-warning-v251/

‘Final’ Tribulation:
https://markbesh.wordpress.com/final-tribulation-v246/

‘Blessed’ Hope:
https://markbesh.wordpress.com/blessed-hope-v245/

‘Benefits’ Of Assurance:
https://markbesh.wordpress.com/benefits-of-assurance-v244/

Are You ‘Prepared’?:
https://markbesh.wordpress.com/are-you-prepared-v210/

Be A ‘Peacemaker’:
https://markbesh.wordpress.com/be-a-peacemaker-v202/

Know ‘Peace’:
https://markbesh.wordpress.com/know-peace-v201/

‘Hope’:
https://markbesh.wordpress.com/hope-v187/


‘PRAYER’ OF REPENTANCE
In the Bible, there is a parable that Jesus told about a Pharisee and a tax collector praying in the Temple.

In the parable, a Pharisee and a tax collector pray in the Jerusalem Temple. The Pharisee thanks God that he is more righteous than others, giving evidence to prove it, such as that he fasted twice a week (Luke 18:10-12). He far exceeded the demands of the law, which requires fasting only on the Day of Atonement (Leviticus 16).

Reformed theologian John Calvin states in his commentary that the Pharisee’s problem does not lie in a rejection of the necessity of grace for salvation. His thanksgiving to God implicitly recognizes that his good works come from grace and are given to him by God—otherwise, there would be no need to thank God for his righteousness. The issue, Calvin argues, is that the Pharisee trusts in the merit of his works for salvation. It is not enough to confess that our good works come from God Himself, but we must also recognize that as good as these works may be, they are never perfect on this side of glory and cannot merit heaven. “All our righteous deeds are like a polluted garment” [ Isaiah 64:6 ].

Now, many first-century Jews regarded the Pharisees as paragons of true righteousness and tax collectors as terrible sinners. Thus, they were no doubt shocked when Jesus said that the tax collector, not the Pharisee, went away from the temple justified—that is, declared righteous. He was justified because he did not trust in his own works, even works given to him by God. The tax collector forsook his own righteousness, admitting his sin and humbly asking for mercy: he “beat his chest in sorrow, saying, ‘O God, be merciful to me, for I am a sinner’”—and Jesus said that the tax collector “went home justified”; he had been “born again” and ‘reconciled’ to God (Luke 18:13-14).

John Calvin writes, “Though a man may ascribe to God the praise of works, yet if he imagines the righteousness of those works to be the cause of his salvation, or rests upon it, he is condemned for wicked arrogance.” God gives His people good works to do, but our salvation is not based on those works. It is based only on Christ and His righteousness, which we receive by grace alone through faith in Jesus alone. “For it is by grace you have been saved, through faith—and this is not from yourselves, it is the gift of God—not by works, so that no one can boast. For we are God’s handiwork, created in Christ Jesus to do good works, which God prepared in advance for us to do” [ Ephesians 2:8-10 ].

So, if you are ‘sensing’ something like that right now, let me strongly encourage you to HUMBLE YOURSELF, CRY OUT to God, and PLEAD for Him to mercifully ‘SAVE’ YOU! None of us have a ‘claim’ on our salvation, nor do we have any ‘works’ that would cause us to deserve it or earn it—it is purely a gift of Divine grace—and all any of us can do is ask. So, CONFESS YOUR SINS and acknowledge to God that you have no hope for Heaven apart from what He provides through Jesus. [ See Psalm 51 ].

There is no ‘formula’ or certain words for this. So just talk to God, in your own words—He knows your ‘heart’. If you are genuinely sincere, and God does respond to your plea, you will usually have a sense of joy and peace.

Jesus said, “He that comes to Me, I will not cast out” [ John 6:37 ].

[ FYI: This is a great sermon on the “Call to Repentance” by John MacArthur from his book “The Gospel According to Jesus”: https://www.gty.org/library/sermons-library/90-22/the-call-to-repentance (Transcript: http://www.spiritedesign.com/TheCallToRepentance-JohnMacArthur(Jul-27-2019).pdf) ].

[ NOTE: If you have ‘tasted the kindness of the Lord’, please e-mail me—I would love to CELEBRATE with you, and help you get started on your ‘journey’ with Jesus! ].


<<< RESOURCES >>>


Peace Machine – Testament of an artificial intelligence researcher

BOOK DETAILS
2017
ISBN 9789524954389
285 pages

World peace is a utopia that everyone dreams of but no one really believes in. Humans are loaded with emotions. In negotiation situations, the language barrier exacerbates the crisis when people use the same words to mean different things or talk about the same thing in different words.

In his deeply personal work, Timo Honkela outlines the idea of a Peace Machine, which would help humanity in creating a more peaceful world. The peace machine is based on artificial intelligence, machine learning and other technological methods being developed. Its ultimate goal is to increase understanding between people, tame emotional storms in communities and promote justice through technology.

Artificial intelligence is not the all-destructive ultimate evil of sci-fi movies. On the contrary, it brings hope for something better. With the help of peace machines, a person can rise above the limitations of conscious thought. The time of intelligent machines is approaching.

[ TIMO HONKELA ]

BOOK: https://www.gaudeamus.fi/teos/rauhankone/

SUMMARY PDF: https://finstitute.jp/wp-content/uploads/2018/06/ThePeaceMachine_Honkela_summary.pdf


AI for Peace
By: Branka Panic and Paige Arthur

The role of artificial intelligence in war is widely recognized, but is there also a role for AI in fostering peace and preventing conflict? AI for Peace provides a new perspective on AI as a potential force for good in conflict-affected countries through its uses for early warning, combating hate speech, human rights investigations, and analyzing the effects of climate change on conflict.

This book acts as an essential primer for introducing people working on peacebuilding and conflict prevention to the latest advancements in emerging AI technologies and will act as guide for ethical future practice. This book also aims to inspire data scientists to engage in the peacebuilding and prevention fields and to better understand the challenges of applying data science in conflict and fragile settings.


World Peace: (And How We Can Achieve It)
By: Alex J. Bellamy

For as long as there has been war, there have been demands for its elimination. The quest for world peace has excited and eluded political leaders, philosophers, religious elders, activists, and artists for millennia. With war on the rise once again, we rarely reflect nowadays on what world peace might look like; much less on how it might be achieved.

World Peace aims to change all that and show that world peace is possible. Because the motives, rationales, and impulses that give rise to war – the quest for survival, enrichment, solidarity, and glory – are now better satisfied through peaceful means, war is an increasingly anachronistic practice, more likely to impoverish and harm us humans than satisfy and protect us. This book shows that we already have many of the institutions and practices needed to make peace possible and sets out an agenda for building world peace. In the immediate term, it shows how steps to strengthen compliance with international law, improve collective action such as international peacekeeping and peacebuilding, better regulate the flow of arms, and hold individuals legally accountable for acts of aggression or atrocity crimes can make our world more peaceful. It also shows how, in the long term, building strong and legitimate states that protect the rights and secure the livelihoods of their people, fostering gender-equal societies, and protecting the right of individuals to opt out of wars can establish and sustain world peace. But it will only happen if individuals organize to make it happen.


Artificial Intelligence (AI) Threat To Global Peace: The Rise Of Artificial Intelligence And Its Impact on The Society

Whether or not we are aware of it, over recent years Artificial Intelligence (AI) has become an integral part of our lives — from the smart speaker in your lounge to the apps you use to order your takeaways and far more besides. For the most part, these changes have been incremental and largely hidden from view. In the last few months, however, stories about the rapid acceleration of AI technology have made headlines around the world — highlighting the potential benefits, as well as the risks, of this technology.

AI is now being applied to even the most basic human activities. We are seeing it become more capable than humans in some areas, exposing the human race to possible redundancy, and we are seeing humans have little or no control over its use by prideful inventors racing to outshine one another, bordering on the edge of extremism.

This book looks into the different ways AI has infiltrated our daily routines, and its malicious use in various terrifying contexts, from information warfare to autonomous weapons.

Many humans now live in fear of this invention as it erodes ethical values and the sanctity of human intelligence.


The New Fire: War, Peace, and Democracy in the Age of AI
By: Ben Buchanan and Andrew Imbrie

AI is revolutionizing the world. Here’s how democracies can come out on top.

Artificial intelligence is revolutionizing the modern world. It is ubiquitous—in our homes and offices, in the present and most certainly in the future. Today, we encounter AI as our distant ancestors once encountered fire. If we manage AI well, it will become a force for good, lighting the way to many transformative inventions. If we deploy it thoughtlessly, it will advance beyond our control. If we wield it for destruction, it will fan the flames of a new kind of war, one that holds democracy in the balance. As AI policy experts Ben Buchanan and Andrew Imbrie show in The New Fire, few choices are more urgent—or more fascinating—than how we harness this technology and for what purpose.

The new fire has three sparks: data, algorithms, and computing power. These components fuel viral disinformation campaigns, new hacking tools, and military weapons that once seemed like science fiction. To autocrats, AI offers the prospect of centralized control at home and asymmetric advantages in combat. It is easy to assume that democracies, bound by ethical constraints and disjointed in their approach, will be unable to keep up. But such a dystopia is hardly preordained. Combining an incisive understanding of technology with shrewd geopolitical analysis, Buchanan and Imbrie show how AI can work for democracy. With the right approach, technology need not favor tyranny.


Peace at the Click of a Mouse: How Artificial Intelligence Is Changing the Palestine Conflict
By: Sindu Ambani and Mukesh Singh

In ‘Peace at the Click of a Mouse: How Artificial Intelligence Would Change the Palestinian Conflict’, we are immersed in a world between technology and diplomacy that illuminates the complex challenges of one of the longest-running conflicts in modern history. The authors take us on an inspiring journey through the possibilities and limitations of Artificial Intelligence (AI) as a tool for peace and social justice. From data analytics informing policy decisions, to optimizing resource allocation in crisis-ridden areas, to improving public administration and security, this book lays out how advanced algorithms and machine learning models could help resolve the decades-old conflict between Palestinians and Israelis.

But ‘peace at the click of a mouse’ is not just a technological utopia. It’s also a critical look at the ethical, political and social challenges of implementing AI in such a sensitive context.

Whether you are a technology enthusiast, a policy expert, or simply someone interested in the future of global peace, this book will inspire you to reevaluate the possibilities and limitations of our digital age. ‘Peace at the Click of a Mouse’ not only offers analysis and actionable solutions, but challenges us all to rethink our own role in an increasingly interconnected world.


Artificial Intelligence in Public International Law: A Threat or a Resource for Global Peace and Security?
By: Sr. Axel Gianfranco Lodovico Molina

This book masterfully interweaves the worlds of technology and jurisprudence. It is a deep dive into the labyrinthine intersections of artificial intelligence and the international legal system, provoking a critical examination of the pressing ethical, security, and legal challenges of our era.

In an AI-accelerated world, it deftly elucidates the intricate dance between global security and AI, laying out the legal ramifications in areas as diverse as cybersecurity, warfare, conflict prevention, and privacy rights. With thought-provoking insights on issues such as autonomous weapons systems, sovereignty in the fight against terrorism, and the liability of AI system developers, this book grapples with the profound question: Is AI a threat or an indispensable tool in the pursuit of global peace and security?

As readers traverse its pages, the narrative asserts the pivotal role of international cooperation, global governance, and universal ethical standards in shaping the future of AI. It holds a mirror to our AI-shaped future, urging the international community to navigate the impending ethical dilemmas and legal quandaries with careful forethought and due diligence. It calls for robust international standards and a proactive involvement of civil society and the private sector, in order to construct a fair, equitable, and secure AI ecosystem.

This book, characterized by its precision, professionalism, and compelling arguments, will undoubtedly be an invaluable addition to the bookshelves of technology enthusiasts, legal scholars, policymakers, and anyone interested in the intersection of AI and international law.


Militarizing Artificial Intelligence
By: Nik Hynek and Anzhelika Solovyeva

This book examines the military characteristics and potential of Artificial Intelligence (AI) in the new global revolution in military affairs.

Offering an original perspective on the utilization, imagination, and politics of AI in the context of military development and weapons regulation, the work provides a comprehensive response to the question of how we might reflect on the AI revolution in warfare and what can be said about the ways in which this has been handled. In the first part of the book, AI is accommodated, both theoretically and empirically, in the strategic context of the ‘Revolution in Military Affairs’ (RMA). The book offers a novel understanding of autonomous weapons as multi-layered composite systems, pointing to a complex, non-linear interplay between evolutionary and revolutionary dynamics. In the second section, the book provides an impartial analysis of the related politics and operations of power, whereby increases in military budgets and R&D of the great powers are met and countered by advocacy networks and scientists campaigning for a ban on lethal autonomous weapons. As such, it moves beyond popular caricatures of ‘killer robots’ and points out some of the problems which result from over-reliance on such imagery.

This book will be of much interest to students of strategic studies, critical security studies, arms control and disarmament, science and technology studies and general International Relations.


Future Peace: Technology, Aggression, and the Rush to War
By: Robert H. Latiff

Future Peace urges extreme caution in the adoption of new weapons technology and is an impassioned plea for peace from an individual who spent decades preparing for war.

Today’s militaries are increasingly reliant on highly networked autonomous systems, artificial intelligence, and advanced weapons that were previously the domain of science fiction writers. In a world where these complex technologies clash with escalating international tensions, what can we do to decrease the chances of war? In Future Peace, the eagerly awaited sequel to Future War, Robert H. Latiff questions our overreliance on technology and examines the pressure-cooker scenario created by the growing animosity between the United States and its adversaries, our globally deployed and thinly stretched military, the capacity for advanced technology to catalyze violence, and the American public’s lack of familiarity with these topics.

Future Peace describes the many provocations to violence and how technologies are abetting those urges, and it explores what can be done to mitigate not only dangerous human behaviors but also dangerous technical behaviors. Latiff concludes that peace is possible but will require intense, cooperative efforts on the part of technologists, military leaders, diplomats, politicians, and citizens. Future Peace amplifies some well-known ideas about how to address the issues, and provides far-, mid-, and short-term recommendations for actions that are necessary to reverse the apparent headlong rush into conflict. This compelling and timely book will captivate general readers, students, and scholars of global affairs, international security, arms control, and military ethics.


Superintelligence: Paths, Dangers, Strategies
By: Nick Bostrom

A New York Times bestseller

Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful – possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom’s work nothing less than a reconceptualization of the essential task of our time.


Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World
By: Mo Gawdat

Technology is putting our humanity at risk to an unprecedented degree. This book is not for engineers who write the code or the policy makers who claim they can regulate it. This is a book for you. Because, believe it or not, you are the only one that can fix it. – Mo Gawdat

Artificial intelligence is smarter than humans. It can process information at lightning speed and remain focused on specific tasks without distraction. AI can see into the future, predicting outcomes and even use sensors to see around physical and virtual corners. So why does AI frequently get it so wrong?

The answer is us. Humans design the algorithms that define the way that AI works, and the processed information reflects an imperfect world. Does that mean we are doomed? In Scary Smart, Mo Gawdat, the internationally bestselling author of Solve for Happy, draws on his considerable expertise to answer this question and to show what we can all do now to teach ourselves and our machines how to live better. With more than thirty years’ experience working at the cutting-edge of technology and his former role as chief business officer of Google [X], no one is better placed than Mo Gawdat to explain how the Artificial Intelligence of the future works.

By 2049 AI will be a billion times more intelligent than humans. Scary Smart explains how to fix the current trajectory now, to make sure that the AI of the future can preserve our species. This book offers a blueprint, pointing the way to what we can do to safeguard ourselves, those we love and the planet itself.


The Singularity Is Near: When Humans Transcend Biology
By: Ray Kurzweil

“Startling in scope and bravado.” —Janet Maslin, The New York Times

“Artfully envisions a breathtakingly better world.” —Los Angeles Times

“Elaborate, smart and persuasive.” —The Boston Globe

“A pleasure to read.” —The Wall Street Journal

One of CBS News’s Best Fall Books of 2005 • Among St Louis Post-Dispatch’s Best Nonfiction Books of 2005 • One of Amazon.com’s Best Science Books of 2005

A radical and optimistic view of the future course of human development from the bestselling author of How to Create a Mind and The Singularity is Nearer, whom Bill Gates calls “the best person I know at predicting the future of artificial intelligence”

For over three decades, Ray Kurzweil has been one of the most respected and provocative advocates of the role of technology in our future. In his classic The Age of Spiritual Machines, he argued that computers would soon rival the full range of human intelligence at its best. Now he examines the next step in this inexorable evolutionary process: the union of human and machine, in which the knowledge and skills embedded in our brains will be combined with the vastly greater capacity, speed, and knowledge-sharing ability of our creations.


The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma
By: Mustafa Suleyman

NEW YORK TIMES BESTSELLER • An urgent warning of the unprecedented risks that AI and other fast-developing technologies pose to global order, and how we might contain them while we have the chance—from a co-founder of the pioneering artificial intelligence company DeepMind

“A fascinating, well-written, and important book.”—Yuval Noah Harari
“Essential reading.”—Daniel Kahneman
“An excellent guide for navigating unprecedented times.”—Bill Gates

A Best Book of the Year: CNN, Economist, Bloomberg, Politico Playbook, Financial Times, The Guardian, CEO Magazine, Semafor • Winner of the Inc. Non-Obvious Book Award • Finalist for the Porchlight Business Book Award and the Financial Times and Schroders Business Book of the Year Award

We are approaching a critical threshold in the history of our species. Everything is about to change.

Soon you will live surrounded by AIs. They will organise your life, operate your business, and run core government services. You will live in a world of DNA printers and quantum computers, engineered pathogens and autonomous weapons, robot assistants and abundant energy.

None of us are prepared.

As co-founder of the pioneering AI company DeepMind, part of Google, Mustafa Suleyman has been at the centre of this revolution. The coming decade, he argues, will be defined by this wave of powerful, fast-proliferating new technologies.

In The Coming Wave, Suleyman shows how these forces will create immense prosperity but also threaten the nation-state, the foundation of global order. As our fragile governments sleepwalk into disaster, we face an existential dilemma: unprecedented harms on one side, the threat of overbearing surveillance on the other.

Can we forge a narrow path between catastrophe and dystopia?

This groundbreaking book from the ultimate AI insider establishes “the containment problem”—the task of maintaining control over powerful technologies—as the essential challenge of our age.


The Age of AI: And Our Human Future
By: Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher

Three of the world’s most accomplished and deep thinkers come together to explore Artificial Intelligence (AI) and the way it is transforming human society—and what this technology means for us all.

Generative AI is filling the internet with false information. Artists, writers, and many other professionals are in fear of their jobs. AI is discovering new medicines, running military drones, and transforming the world around us—yet we do not understand the decisions it makes, and we don’t know how to control them.

In The Age of AI, three leading thinkers have come together to consider how AI will change our relationships with knowledge, politics, and the societies in which we live. The Age of AI is an essential roadmap to our present and our future, an era unlike any that has come before.


The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
By: Pedro Domingos

A thought-provoking and wide-ranging exploration of machine learning and the race to build computer intelligences as flexible as our own

In the world’s top research labs and universities, the race is on to invent the ultimate learning algorithm: one capable of discovering any knowledge from data, and doing anything we want, before we even ask. In The Master Algorithm, Pedro Domingos lifts the veil to give us a peek inside the learning machines that power Google, Amazon, and your smartphone. He assembles a blueprint for the future universal learner–the Master Algorithm–and discusses what it will mean for business, science, and society. If data-ism is today’s philosophy, this book is its bible.


AGI: AGE OF SUPERINTELLIGENCE: Exploring the Impact and Ethics of Artificial General Intelligence
By: Richard A. Mann

Embark on a thought-provoking journey into the future with “AGI: Age of Superintelligence” by Richard A. Mann. In this compelling exploration of Artificial General Intelligence (AGI), Mann takes readers beyond the current landscape of Artificial Intelligence (AI) to the precipice of a groundbreaking era that promises to redefine the very essence of human existence.

In a world where AI has already demonstrated its capabilities through predictive algorithms and autonomous systems, AGI stands as the next frontier. Unlike its predecessors, AGI possesses the potential to transcend the limitations of narrow AI, mirroring the expansive cognitive abilities of the human mind across a myriad of domains. Mann eloquently guides readers through the intricate web of technological advancements, showcasing AGI as not merely a progression but a philosophical odyssey—one that challenges our understanding of intelligence, ethics, and societal norms.

As we stand on the cusp of this revolutionary leap, Mann grapples with profound questions that demand our attention. What ethical considerations must guide the development of superintelligent machines? How might AGI reshape the socio-economic fabric of our societies? “AGI: Age of Superintelligence” is not just a book; it is an intellectual expedition that navigates the complexities surrounding AGI, providing a comprehensive examination of its technological foundations, ethical dilemmas, and societal implications.

Mann’s prose captivates as he unravels the multifaceted dimensions of AGI, shedding light on the intricate interplay between technology, ethics, and society. This book is an invaluable resource, empowering readers with the knowledge needed to navigate the uncharted waters of a future where machines could potentially match or surpass human intelligence. It is a call to action, urging readers to confront the profound questions that will undoubtedly shape the course of human history in unimaginable ways.

Join Richard A. Mann on this intellectual odyssey as he invites you to ponder the Age of Superintelligence. This book is more than an exploration; it is a guide to understanding, a catalyst for conversation, and a beacon illuminating the path forward in the era of AGI.


Artificial Intelligence 2023: Learn Everything About the Revolution of Artificial Intelligence
By: Liam Harrison

Are you intrigued by the rapid advancements in the world of Artificial Intelligence?

Would you like to understand the far-reaching implications of AI, from healthcare to ethical concerns?

“ARTIFICIAL INTELLIGENCE 2023: Learn Everything About the Revolution of Artificial Intelligence” is the essential read for you.

Uncover the core principles of AI and how it is transforming modern life.

Explore ethical questions and data privacy challenges that arise with AI adoption.

Dive deep into AI’s role in healthcare and other sectors.

Equip yourself with knowledge about AI and automation in the work landscape.

In this concise yet detailed guide, you’ll get a panoramic view of the AI ecosystem. From basics to the complexities of automation, the book provides a balanced mix of technical knowledge and societal impacts.

Don’t miss the chance to understand the AI revolution that is shaping our future. Gain the expertise you need to navigate the evolving landscape of Artificial Intelligence.

Take action now and stay ahead of the curve!


The AI Wave in Defence Innovation
By: Michael Raska and Richard A. Bitzinger

An international and interdisciplinary perspective on the adoption and governance of artificial intelligence (AI) and machine learning (ML) in defence and military innovation by major and middle powers.

Advancements in AI and ML pose pressing questions related to evolving conceptions of military power, compliance with international humanitarian law, peace promotion, strategic stability, arms control, future operational environments, and technology races. To navigate the breadth of this AI and international security agenda, the contributors to this book include experts on AI, technology governance, and defence innovation to assess military AI strategic perspectives from major and middle AI powers alike. These include views of how the United States, China, Japan, South Korea, the European Union, and Russia see AI/ML as a technology with the potential to reshape military affairs and power structures in the broader international system. This diverse set of views aims to help elucidate key similarities and differences between AI powers in the evolving strategic context.

A valuable read for scholars of security studies, public policy, and STS studies with an interest in the impacts of AI and ML technologies.


Responsible AI: Best Practices for Creating Trustworthy AI Systems
By: CSIRO, Qinghua Lu, Liming Zhu, Jon Whittle, and Xiwei Xu

AI systems are solving real-world challenges and transforming industries, but there are serious concerns about how responsibly they operate on behalf of the humans that rely on them. Many ethical principles and guidelines have been proposed for AI systems, but they’re often too ‘high-level’ to be translated into practice. Conversely, AI/ML researchers often focus on algorithmic solutions that are too ‘low-level’ to adequately address ethics and responsibility. In this timely, practical guide, pioneering AI practitioners bridge these gaps. The authors illuminate issues of AI responsibility across the entire system lifecycle and all system components, offer concrete and actionable guidance for addressing them, and demonstrate these approaches in three detailed case studies.

Writing for technologists, decision-makers, students, users, and other stakeholders, the topics cover:

Governance mechanisms at industry, organisation, and team levels
Development process perspectives, including software engineering best practices for AI
System perspectives, including quality attributes, architecture styles, and patterns
Techniques for connecting code with data and models, including key tradeoffs
Principle-specific techniques for fairness, privacy, and explainability
A preview of the future of responsible AI


Ethical AI: Navigating the Future With Responsible Artificial Intelligence
By: L.D. Knowings

Unlock the mysteries of AI and use this technology to your advantage – step into the future of limitless possibilities!
Are you intrigued by Artificial Intelligence (AI) but feel overwhelmed by its complexity?

Do you wonder how AI could transform your job or industry yet struggle to keep pace with its rapid advancements?

Or are you afraid of AI taking over jobs, controlling minds, or destroying humanity – just like in the movies?

The world of AI can be both intriguing and overwhelming.

A recent poll of over 4,000 adults in the United States revealed some startling sentiments: 61% believe that AI could threaten the future of civilization, and over two-thirds express concern about AI’s potential negative consequences.

With the rapid pace of advancements and seemingly alien terminology, it’s easy to see why the landscape of AI might seem unsettling, daunting even.

But what if we could turn that fear into fascination?

What if the complex world of AI was broken down into manageable, relatable pieces, transforming confusion into clarity?

Picture yourself at the forefront of the technological revolution, leveraging AI’s potential to drive innovation, solve real-world problems, and shape the future.

This book opens the door to not only understanding AI but also harnessing its capabilities and applying them in your work, irrespective of your industry.

Inside, you will discover:

The key to deciphering the world of AI, making complex concepts accessible and fascinating – get to know the amazing capabilities of thinking machines!

The inner workings of Machine Learning and Deep Learning – be fascinated by how they serve as AI’s foundational pillars!

Practical, transformative applications of AI across multiple industries including yours – you will be surprised at how AI is basically everywhere!

Step-by-step guidelines to design and implement effective AI solutions in your workplace – adapt the technology that can lead your organization to success!

How to identify and mitigate AI bias, navigating the ethical dilemmas AI brings – discussing these complicated topics is important for better understanding

How AI intersects with cybersecurity, revolutionizing modern defense strategies – gone are the days when cybercriminals had the upper hand

The future trends of AI, empowering you to stay updated in this rapidly evolving field – AI is definitely here to stay!

And much more!

You don’t need to become an information technology professional or computer scientist to be able to understand this technology.

The book breaks complex jargon down for you in an engaging and simple manner – so you can take advantage of AI and its practical applications, no matter what role or industry you’re in.


AI Ethics
By: Mark Coeckelbergh

This overview of the ethical issues raised by artificial intelligence moves beyond hype and nightmare scenarios to address concrete questions—offering a compelling, necessary read for our ChatGPT era.

Artificial intelligence powers Google’s search engine, enables Facebook to target advertising, and allows Alexa and Siri to do their jobs. AI is also behind self-driving cars, predictive policing, and autonomous weapons that can kill without human intervention. These and other AI applications raise complex ethical issues that are the subject of ongoing debate. This volume in the MIT Press Essential Knowledge series offers an accessible synthesis of these issues. Written by a philosopher of technology, AI Ethics goes beyond the usual hype and nightmare scenarios to address concrete questions.

Mark Coeckelbergh describes influential AI narratives, ranging from Frankenstein’s monster to transhumanism and the technological singularity. He surveys relevant philosophical discussions: questions about the fundamental differences between humans and machines and debates over the moral status of AI. He explains the technology of AI, describing different approaches and focusing on machine learning and data science. He offers an overview of important ethical issues, including privacy concerns, responsibility and the delegation of decision making, transparency, and bias as it arises at all stages of data science processes. He also considers the future of work in an AI economy. Finally, he analyzes a range of policy proposals and discusses challenges for policymakers. He argues for ethical practices that embed values in design, translate democratic values into practices and include a vision of the good life and the good society.


Life 3.0: Being Human in the Age of Artificial Intelligence
By: Max Tegmark

‘This is the most important conversation of our time, and Tegmark’s thought-provoking book will help you join it’ Stephen Hawking

AI is the future – but what will that future look like? Will superhuman intelligence be our slave, or become our god?

Taking us to the heart of the latest thinking about AI, Max Tegmark, the MIT professor whose work has helped mainstream research on how to keep AI beneficial, separates myths from reality, utopias from dystopias, to explore the next phase of our existence.

How can we grow our prosperity through automation, without leaving people lacking income or purpose? How can we ensure that future AI systems do what we want without crashing, malfunctioning or getting hacked? Should we fear an arms race in lethal autonomous weapons? Will AI help life flourish as never before, or will machines eventually outsmart us at all tasks, and even, perhaps, replace us altogether?

‘This is a rich and visionary book and everyone should read it’ The Times


Artificial Intelligence Promise vs. Peril: The hype, hope, and hidden agendas of AI
By: Alan Sewell

Is generative AI really “approaching human levels of intelligence?” Did a computer program really tell somebody they should dump their partner so they can marry electrical pulses plying their way through silicon? Are these programs really collaborating to create their own secret language in a conspiracy to overthrow humanity?

Or are these claims intentional misrepresentations of truth purposed to deceive the public into accepting government control of AI in collusion with a cartel of corporations? Or is it just an honest difference of perspective?

This book explores the questions of whether artificial intelligence duplicates the functions of our minds, or only certain narrow sub-sentient aspects of our brains. And whether AI will replace humans in the many realms our intelligence enables.


Faith in the Age of AI: Christianity Through the Looking Glass of Artificial Intelligence
By: Dan Scott

Imagine having lived in the generation that discovered fire.

Now, what if current scientific and technological advances reorder our lives every bit as much?

Artificial Intelligence, decoding the human genome, links between mind and computer… All these things that were once science fiction are now quickly, and absolutely, becoming science fact.

Millions of people feel like aliens in their own country. Like Alice in Wonderland, they no longer understand their families, coworkers, and neighbors. Reality itself has turned slippery.

Could machines become sentient?

Is “life” really a computer simulation?

Does science really have a genetic blueprint for all human life?

Believers and unbelievers alike are easily unnerved by such questions. Interestingly though, early Christians, mathematicians, scientists, and philosophers have already addressed these kinds of issues. And they have much to say about our “curiouser and curiouser” new world.

In Christianity Through the Looking Glass, pastor, priest, scholar, and counselor Dan Scott invites you to learn what these hallowed ancient voices might have said about faith in the age of Artificial Intelligence.


Our Final Invention: Artificial Intelligence and the End of the Human Era
By: James Barrat

Elon Musk named Our Final Invention one of 5 books everyone should read about the future

A Huffington Post Definitive Tech Book of 2013

Artificial Intelligence helps choose what books you buy, what movies you see, and even who you date. It puts the “smart” in your smartphone and soon it will drive your car. It makes most of the trades on Wall Street, and controls vital energy, water, and transportation infrastructure. But Artificial Intelligence can also threaten our existence.

In as little as a decade, AI could match and then surpass human intelligence. Corporations and government agencies are pouring billions into achieving AI’s Holy Grail―human-level intelligence. Once AI has attained it, scientists argue, it will have survival drives much like our own. We may be forced to compete with a rival more cunning, more powerful, and more alien than we can imagine.

Through profiles of tech visionaries, industry watchdogs, and groundbreaking AI systems, Our Final Invention explores the perils of the heedless pursuit of advanced AI. Until now, human intelligence has had no rival. Can we coexist with beings whose intelligence dwarfs our own? And will they allow us to?


Artificial Intelligence Pushing Boundaries, Is It Worth It?
By: B B Jankoski

The future is less fearsome when you know what you’re facing. Shape AI’s future by actively participating in this technology–even if you’ve never considered yourself a techie.

Bring up the subject of AI and you’re liable to receive opinions from opposite sides of the spectrum.

On one side, you’ll find the firm supporters of AI–those who are excited about the shortcuts afforded by ChatGPT, or keen to harness AI’s game-changing abilities in spheres such as healthcare, finance, or robotics.

On the other side are the naysayers–those who either turn a blind eye to the rapid advancement of AI or those who feel that AI will never reach the level of sophistication required to take over their jobs.

Yet AI is here to stay and if you want to avoid being stampeded by its progress, then you might as well be one of its drivers.

This doesn’t mean you have to be an AI developer or even work in tech.

It simply involves being aware of specific ethical considerations, the core benefits and disadvantages of using AI, and the sectors whose jobs are most likely to be replaced by AI.

It additionally means using AI to your benefit by harnessing the most powerful AI tools available–both in your profession and in your personal life.

And that is exactly why this book was written. Consider it a comprehensive guide to everything you need to become more involved in AI.

Instead of being a passive observer of its onslaught, you can form part of a movement that decides where this technology goes and what role you wish to play.

Here’s a snippet of what you’ll find inside:

The fascinating history of how AI is modeled on human neural networks – are we on our way to artificial superintelligence?

7 main applications of AI – plus all its benefits and drawbacks, and how it has already changed everyday life

What part can you play in the development of AI – actions you can take to ensure its responsible use and development

8 top AI trends – how to choose the best tool for your task, whether you are interested in improving your productivity, project management skills, or content creation

What does the future of AI hold? – How our jobs, beliefs, and social norms will all be affected by the AI phenomenon

The biggest ethical challenges posed by AI, according to industry leaders – including job insecurity, unfair outcomes, privacy concerns, and bias

How you can advocate for ethical AI practices, and put a stop to the violation of your privacy

And so much more.

It’s logical to hope that technological advances will always be for the best.

And without a doubt, artificial intelligence has the potential to create a fairer, more equitable, and kinder society.

But it also has its disadvantages–and knowing what these challenges are (and how to negotiate them) is vital if you wish to make the most of AI, without letting it sweep you up.


Artificial Intelligence Revolution: How AI Will Change our Society, Economy, and Culture
By: Robin Li

The co-founder of Baidu explains how AI will transform human livelihood, from our economy and financial systems down to our daily lives.

Written by Baidu cofounder Robin Li and prefaced by award-winning sci-fi writer Cixin Liu (author of The Three-Body Problem), Artificial Intelligence Revolution introduces Baidu’s teams of top scientists and managers as pioneers of the movement toward AI. The book covers many of the latest AI-related ideas and technological developments, such as:

Computational ability
Big data resources
Setting the basic standards of AI in research and development
An introduction to the “super brain”
Intelligent manufacturing
Deep learning
L4 automated vehicles
Smart finance

The book describes the emergence of a “smart” society powered by technology and reflects on the challenges humanity is about to face. Li addresses the most pressing questions: Will artificial intelligence replace human workers, and in what sectors of the economy? How will it affect healthcare and finance? How will daily human life change? Written from the perspective of a pioneer of AI development, Artificial Intelligence Revolution is a must-read for anyone concerned about where this “smart” society is heading.


ARTIFICIAL INTELLIGENCE: BLISS OR PERIL FOR FUTURE HUMANITY?: UNDERSTANDING THE BASICS OF AI IN OUR EVERYDAY LIVES
by Will Davis

Embark on a compelling exploration into the world of AI, where future reality meets boundless possibility. Can you afford to remain in the dark?
Has it ever crossed your mind that Artificial Intelligence (AI) is no longer a far-off, high-tech science fiction fantasy but an integral part of our everyday lives?

When you watch a movie suggested by Netflix’s AI, talk to your favorite brand’s chatbot, or follow your car’s navigation to the new restaurant in town, you interact with AI.

This omnipresent technology transforms our world and sets the stage for the future in unimaginable ways.

Have you considered how much the world understands this transformative phenomenon that is creating digital miracles and driving the economy?

Did you know AI could contribute up to a staggering $15.7 trillion to the global economy by 2030?

However, most people are spectators rather than actors in this fast-evolving digital epoch.

This is where the insightful journey author Will Davis guides you on will change your perspective.

You’ll dive into the fascinating world of Artificial Intelligence, from its early forerunners to the singularity’s lofty vision, all simplified for the curious mind.

Inside, you will discover:

A comprehensive overview of the AI universe – take the first steps in your exciting journey to understand this game-changing technology!
The fascinating mechanics that power AI, including feedback loops and their critical role
The intriguing process of amalgamating various AI capabilities to form potent new tools
Insights into the profound ethical questions raised by AI’s rapid advancement: how do you ensure moral boundaries are respected as society races towards unprecedented progress?
Real-world applications of AI across multiple spheres – from business to leisure, investment to politics – you’ll see how AI is not just transforming society but reshaping our future!
Enlightening predictions about AI’s future – understand what lies on the horizon for this disruptive technology, equipping you not just to survive but thrive in the coming AI revolution
The deep-reaching influence of AI on society and politics – comprehend the consequences of AI beyond the technicalities to help you steer through the complexities of this new landscape
And much more!

As you flip each page, you’ll uncover insights, debunk misconceptions, and provoke your curiosity.

With each chapter, you’ll realize that understanding AI is no longer an optional hobby; it’s a necessity.

As everyone strives toward an era where technology shapes every aspect of society, keeping up with AI is imperative for staying informed, resilient, and relevant.

Now is the time to move from being a spectator to actively participating in the AI revolution!


Curious Minds Ask: 55 Thought-Provoking Questions for Humanity Answered by Artificial Intelligence
By: S.C. Francis

“This book may change how you view the future of humanity.”
In the age of Artificial Intelligence (AI) just beginning, where machines can ponder the most profound philosophical queries and offer insights born from vast troves of knowledge, AI will become a partner in our quest for understanding, a tool that amplifies our capacity for exploration and discovery.

We stand at the cusp of revolutionary new breakthroughs in every field, as well as a point where we must be cautious of the dangers that AI poses to our future.

But before we can begin to understand this potential new reality for humanity, we must attempt to understand this Artificial Intelligence. And in the future, when there is superintelligence beyond our own, wisdom will come when Curious Minds Ask the right questions.

We get to know the AI that will shape our world by asking it thought-provoking questions about humanity that span the realms of science, philosophy, ethics, and beyond. The unedited responses provide direct insight into the “mind” of the machine. It’s a “first contact” of sorts for many of us, where we’re communicating directly with it (ChatGPT). How intelligent is it, and how well does it know us?

This is no longer a distant fantasy. This is a real conversation with an Artificial Intelligence.

In asking these questions about humanity, my hope is we’ll see the world and ourselves in a different light (by asking the big questions) and have a better understanding of the power of AI to shape our future.

How will the AI answer these thought-provoking questions and more?

What is the purpose of human existence?
What is the purpose of your existence, AI?
What is one question every human should ask themselves?
What does it mean to live a meaningful life?
Can true world peace ever be achieved, and if so, how?
What is the greatest unsolved mystery of the universe?
If we had the technology to bring back extinct species, should we?
If humanity could have one superpower, what should it be?
Will AI become more intelligent than humans?
How could Artificial intelligence end humanity?

The questions we ask today will shape the answers of tomorrow, and it is through the pursuit of these answers that we continue to evolve, both as individuals and as a collective humanity. Let us peer into the unknown, to seek wisdom, and to embrace the boundless possibilities that await those with curious minds.


Is the Algorithm Plotting Against Us?: A Layperson’s Guide to the Concepts, Math, and Pitfalls of AI
By: Kenneth Wenger

Artificial intelligence is everywhere―it’s in our houses and phones and cars. AI makes decisions about what we should buy, watch, and read, and it won’t be long before AI’s in our hospitals, combing through our records. Maybe soon it will even be deciding who’s innocent, and who goes to jail . . . But most of us don’t understand how AI works. We hardly know what it is. In “Is the Algorithm Plotting Against Us?”, AI expert Kenneth Wenger deftly explains the complexity at AI’s heart, demonstrating its potential and exposing its shortfalls. Wenger empowers readers to answer the question―What exactly is AI?―at a time when its hold on tech, society, and our imagination is only getting stronger.


A Thousand Brains
By: Jeff Hawkins

A bestselling author, neuroscientist, and computer engineer unveils a theory of intelligence that will revolutionize our understanding of the brain and the future of AI.

For all of neuroscience’s advances, we’ve made little progress on its biggest question: How do simple cells in the brain create intelligence?

Jeff Hawkins and his team discovered that the brain uses maplike structures to build a model of the world—not just one model, but hundreds of thousands of models of everything we know. This discovery allows Hawkins to answer important questions about how we perceive the world, why we have a sense of self, and the origin of high-level thought.

A Thousand Brains heralds a revolution in the understanding of intelligence. It is a big-think book, in every sense of the word.


2084: Artificial Intelligence and the Future of Humanity
By: John C. Lennox

Will technology change what it means to be human?

You don’t have to be a computer scientist to have discerning conversations about artificial intelligence and technology. We all wonder where we’re headed. Even now, technological innovations and machine learning have a daily impact on our lives, and many of us see good reasons to dread the future. Are we doomed to the surveillance society imagined in George Orwell’s 1984?

Mathematician and philosopher John Lennox believes that there are credible answers to the daunting questions that AI poses, and he shows that Christianity has some very serious, sensible, evidence-based responses about the nature of our quest for superintelligence.

2084 will introduce you to a kaleidoscope of ideas:

– The key developments in technological enhancement, bioengineering, and, in particular, artificial intelligence.
– The agreements and disagreements that scientists and experts have about the future of AI.
– The key insights that Christianity and Scripture have about the nature of human beings, the soul, our moral sense, our future, and what separates us from machines.

In straightforward language, you’ll get a better understanding of the current capacity of AI, its potential benefits and dangers, the facts and the fiction, as well as possible future implications.

The questions posed by AI are open to all of us, daunting as they might be. And they demand answers. 2084 is written to challenge and ignite the curiosity of all readers. No matter your worldview, Lennox provides clear information and credible answers that will bring you real hope for the future of humanity.


The Age of AI: Artificial Intelligence and the Future of Humanity
By: Jason Thacker

Are robots going to take my job? How are smartphones affecting my kids? Do I need to worry about privacy when I get online or ask Siri for directions? Whatever questions you have about AI, The Age of AI gives you insights on how to navigate this brand-new world as you apply God’s ageless truths to your life and future.

We interact with artificial intelligence, or AI, nearly every moment of the day without knowing it. From our social media feeds to our smart thermostats and Alexa and Google Home, AI is everywhere–but how is it shaping our world?

In The Age of AI, Jason Thacker, associate research fellow at the Ethics and Religious Liberty Commission, helps us navigate our digital age in this thoughtful exploration of the social, moral, and ethical challenges of our ongoing interactions with artificial intelligence.

Applying God’s Word to this new AI-empowered age, Thacker sheds light on:

How Christian truth transforms the way we use AI
How AI affects us individually, in our relationships, and in our society at large
How to navigate the digital age wisely
With theological depth and a wide awareness of the current trends in AI, Jason is a steady guide who reminds us that while technology is changing the world, it can’t shake the foundations of the Christian faith.

Praise for The Age of AI:

“The Age of AI informs us and assists us in envisioning a future that is filled with tools, influences, opportunities, and challenges relating to artificial intelligence. While many may fear the unknown future before us, Jason Thacker presents the imperative need to always lift up the constancy of the image of God and the dignity of all human life as presented in the Holy Scriptures, the Bible. I am thankful Jason’s book can help churches, pastors, theologians, and Christian leaders in all vocations to wrestle through this current topic, always being committed to what this book states profoundly: God-given dignity isn’t ours to assign or remove.”

–Dr. Ronnie Floyd, president and CEO, Southern Baptist Convention Executive Committee


Navigating AI with Faith: AI Revolution including ChatGPT and its impact on jobs, businesses and parenting
By: Elijah Low

Prepare to embrace the future with courage and conviction! We stand on the threshold of an extraordinary revolution – the AI Revolution – which is set to redefine our world in ways we can barely begin to imagine. This seismic shift promises transformation far beyond that of the internet age, and it’s imperative that we ready ourselves for what lies ahead.

Visionaries like Google CEO Sundar Pichai have argued that the influence of Artificial Intelligence (AI) could surpass even the most monumental forces that have shaped our civilization – fire, electricity, and indeed, the internet. As echoed by Barron’s magazine, the potential of AI to restructure the world may well exceed the tremendous impact of the internet. We’re on the cusp of a change on an unprecedented scale, exceeding anything we’ve previously witnessed. But what does this mean for you? How can you navigate these uncharted waters and not just survive, but thrive amidst such disruptive change? How can you reconcile this technological revolution with your deeply held faith values? For parents and educators, how can you prepare your children for a future where AI, including ChatGPT, plays a dominant role? And for technologists and futurists, how can you explore the broader implications of AI beyond the purely technical?

Fear not, for “Navigating AI with Faith” is here to illuminate the path ahead. Elijah Low’s forthcoming book “Navigating AI with Faith” is your essential guide to charting a course through this brave new world. With this resource, you’ll not only keep pace with the rapid advancements but also remain rooted in faith-based principles and ethics. Parents and educators will gain insight into equipping the next generation with the skills and mindsets vital for success in an AI-centric world. Technologists and futurists can explore the intersection of faith, ethics, and artificial intelligence, providing a broader perspective on AI’s societal impacts. AI is poised to redefine every aspect of our lives, from the way we work and learn to how we travel, access healthcare, and interact. Don’t stand on the sidelines while the world accelerates into the future; instead, join our community of forward-thinkers leading the charge of this transformative journey.

This book will provide insights into:

  1. The threats and dangers that come with AI
  2. What to do if you get laid off because your job was replaced by an AI
  3. How to prosper with AI
  4. How to equip your children so that they will be successful in an AI world
  5. ChatGPT and its capabilities
  6. AI’s impact on Medicine
  7. AI’s impact on Law and government
  8. AI’s impact on the economy and jobs
  9. A brief history covering OpenAI, ChatGPT, DeepMind, X.ai, and the involvement of Microsoft and Google in AI
  10. AI Hallucinations

! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !

SPECIAL ‘GENERAL’ RESOURCE

ApologetiX Songbook
(An interactive PDF)

It features the lyrics to every song on every CD and every “download” from 1993-2020

Special features:

  • indexed by title, original song, original artist, subject, and Bible verse
  • each song’s page has icons showing what albums it appears on
  • each song’s page has a commentary from lyricist J. Jackson
  • each album’s page includes liner notes and track listing
  • print any pages you like or use for slides in church
  • photos from ApologetiX’s debut concert in 1992
  • discography of out-of-print cassettes
  • downloadable in PDF format

New features in this edition:

  • all song commentaries from J. Jackson updated and expanded
  • also indexed by year when original song spoofed was a hit
  • J.’s original handwritten rough lyrics to 40 ApX classics
  • scads of photos from ApX 25th-anniversary concerts
  • list of 40 ApX parodies most likely to be redone
  • over 200 new parodies and journal entries
  • list of the first ApX concerts in each state
  • six new full-length feature articles
  • DVD discography and synopses
  • never-before-seen rare photos
  • lyrics for over 700 parodies
  • over 1000 pages!

Interactive features:

  • click on any page number in indexes or TOC to go to that page
  • click on any album icon to go to its liner notes and track listings
  • click on any song title on an album page to go to that song

Note: This e-book is a download-only and doesn’t include sheet music.

The songbook is available for a donation of $50 or more. After we receive your donation, we’ll send you a follow-up email with the link.

Get the Songbook for a donation:
http://www.apologetix.com/store/store.php#songbook

Songbook Demo Video: https://rumble.com/vfazhl-apologetix-songbook-2020-demo.html


“THE SEARCH FOR MEANING” WEBSITE

This site presents discussions on the 12 most commonly asked questions about the Christian faith.

The 12 discussions are accessed by the “tabs” at the bottom of the page. The tabs are numbered 1-12. Roll your mouse over them and you will see the question displayed at the right. Click on the number to select that question.

Within each question (i.e. tabs 1-12), there are subtopics (or dialogues) to select that appear as smaller tabs underneath the numbered tabs. Roll your mouse over them and the title of these topics is also displayed to the right. Click on the open rectangle to select that dialogue.

For each question (1-12), a link to related resources and an optional flowchart is provided. To access this material, click on the respective words, “Related Resources” or “Options Flowchart.”

To play a more detailed discussion of the subject, between two people, select the desired dialogue and click on “Play Audio Dialogue.”

In the upper right-hand corner of the page, there is an icon that looks like binoculars looking at a question mark. Click on this icon to return to the homepage.

In the upper right-hand corner of a “Related Resources” page, there is an icon that looks like some books. Click on this icon to get to an “overview” page that has links to all of the resources for all of the questions. There also are additional “appendices” for most of the questions.

In the upper right-hand corner of a “Flowchart” page, there is an icon that looks like an Org chart. Click on this icon to get to an “overview” page that has links to all of the flowcharts.

http://4vis.com/sfm/sfm_pres/sp_q1_d1_1of10.html

[ Content by: Bill Kraftson and Lamar Smith; Website by Mark Besh ]


“FRUITS OF THE BEATITUDES” WEBSITE
(The ATTITUDES of Jesus that produce the CHARACTER of Jesus)

CLICK ON THE LINK to view:
http://fruitsofthebeatitudes.org/

FACEBOOK PAGE:
https://www.facebook.com/FruitsOfTheBeatitudes/

[ Mark Besh ]


[ P.S.: If you would like to investigate further about how Jesus is the ONLY ‘WAY’ to Heaven, visit the following link:
https://4vis.com/sfm/sfm_pres/sp_q6_d1_1of10.html ]


<<< ARTICLES >>>


What is AI? | Back to Basics

[ IMF ]

PRESENTATION: https://www.youtube.com/watch?v=JcNJaGDilqk


What is artificial intelligence (AI)?

Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

Deep learning vs. machine learning
The rise of generative models
Artificial intelligence applications
History of artificial intelligence: Key dates and names
Related solutions
Resources
Take the next step

What is artificial intelligence?
While a number of definitions of artificial intelligence (AI) have surfaced over the last few decades, John McCarthy offered the following definition in a 2004 paper (link resides outside ibm.com): “It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”

However, decades before this definition, the birth of the artificial intelligence conversation was marked by Alan Turing’s seminal work, “Computing Machinery and Intelligence” (link resides outside ibm.com), published in 1950. In this paper, Turing, often referred to as the “father of computer science,” asks the following question: “Can machines think?” From there, he offers a test, now famously known as the “Turing Test,” in which a human interrogator tries to distinguish between a computer’s and a human’s text responses. While this test has undergone much scrutiny since its publication, it remains an important part of the history of AI, as well as an ongoing concept within philosophy, as it draws on ideas from linguistics.

Stuart Russell and Peter Norvig later published Artificial Intelligence: A Modern Approach (link resides outside ibm.com), which became one of the leading textbooks in the study of AI. In it, they delve into four potential goals or definitions of AI, differentiating computer systems on the basis of rationality and thinking vs. acting:

Human approach:

Systems that think like humans
Systems that act like humans
Ideal approach:

Systems that think rationally
Systems that act rationally
Alan Turing’s definition would have fallen under the category of “systems that act like humans.”

In its simplest form, artificial intelligence is a field that combines computer science and robust datasets to enable problem-solving. It also encompasses the sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines comprise AI algorithms that seek to create expert systems that make predictions or classifications based on input data.
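As a concrete illustration of that last point, here is a minimal, self-contained sketch of a system that makes classifications from input data, using a simple nearest-neighbor rule over a handful of made-up examples (all names and values here are hypothetical, chosen only to show the idea):

```python
from math import dist

# Toy training data: (height_cm, weight_kg) -> label (hypothetical values)
examples = [
    ((150.0, 50.0), "small"),
    ((160.0, 60.0), "small"),
    ((180.0, 90.0), "large"),
    ((190.0, 100.0), "large"),
]

def classify(point):
    """Predict a label for `point` by copying its nearest training example."""
    _, label = min(examples, key=lambda ex: dist(ex[0], point))
    return label

print(classify((155.0, 55.0)))  # -> small
print(classify((185.0, 95.0)))  # -> large
```

Real machine-learning systems use far more data and far more sophisticated models, but the core loop is the same: learn from labeled examples, then predict labels for new inputs.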

Over the years, artificial intelligence has gone through many cycles of hype, but even to skeptics, the release of OpenAI’s ChatGPT seems to mark a turning point. The last time generative AI loomed this large, the breakthroughs were in computer vision, but now the leap forward is in natural language processing. And it’s not just language: Generative models can also learn the grammar of software code, molecules, natural images, and a variety of other data types.

The applications for this technology are growing every day, and we’re just starting to explore the possibilities. But as the hype around the use of AI in business takes off, conversations around ethics become critically important. To read more on where IBM stands within the conversation around AI ethics, read more here.

Begin your journey to AI [more…]

[ IBM ]

ARTICLE: https://www.ibm.com/topics/artificial-intelligence


What is Artificial Intelligence? | Quick Learner

[ Duke University ]

TUTORIAL: https://www.youtube.com/watch?v=c0m6yaGlZh4


What is Artificial Intelligence? AI Explained

[ National Science Foundation News ]

PRESENTATION: https://www.youtube.com/watch?v=HcZ6bq-RVM0


Peace Machine

A learning machine as a messenger of peace

The book project of Timo Honkela, professor of research into digital information, focuses on the opportunities presented by machine learning.

What if we allowed a computer to listen to everything we say and read everything we write, and then to analyse the material?

This may sound frightening, but according to Timo Honkela (@THonkela), Professor of Research into Digital Information at the University of Helsinki, it could offer a chance for an unprecedented level of understanding – and even world peace.

Honkela’s vision is founded on machine learning. While listening to me and reading my texts, the machine would learn – slowly but surely – the exact meaning I give to words and concepts.

“We all use words in a slightly different manner,” Honkela explains. “This easily leads to conflicts: if I feel that something is fair, but you don’t, we may not actually disagree on the matter but simply be assigning different meanings to fairness.”

If machines were to analyse everyone’s way of using language, it would be easy to detect conflicts in meaning. This would essentially reduce misunderstandings and help avoid arguments over the meaning of an agreement.
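To make the idea concrete, here is a toy sketch (entirely hypothetical data and method, not Honkela’s actual system) of how a program might quantify how differently two speakers use the same word, by comparing the words each speaker tends to place around it:

```python
from collections import Counter
from math import sqrt

def context_profile(texts, word, window=2):
    """Count the words appearing near `word` across one speaker's texts."""
    profile = Counter()
    for text in texts:
        tokens = text.lower().split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo, hi = max(0, i - window), i + window + 1
                profile.update(t for j, t in enumerate(tokens[lo:hi], lo) if j != i)
    return profile

def cosine(a, b):
    """Cosine similarity of two word-count profiles (0 = no shared context)."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical speakers assigning different meanings to "fair"
alice = ["a fair deal shares profit equally", "equal split is fair"]
bob = ["a fair deal rewards effort", "fair pay rewards hard work"]
divergence = 1 - cosine(context_profile(alice, "fair"), context_profile(bob, "fair"))
print(f"divergence over 'fair': {divergence:.2f}")  # closer to 1 = more disagreement
```

A high divergence score would flag “fair” as a word these two people use differently – exactly the kind of conflict in meaning the peace machine would aim to surface before it becomes an argument.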

It could also reduce conflicts. In fact, Honkela has named his idea “the peace machine”.

A 100-YEAR PROJECT
The peace machine has more to it than semantic analysis. In Honkela’s vision, the machine could feature a total of five to seven elements. The essential elements would include human emotion modelling and multi-objective optimisation, in which the machine would seek win-win solutions for several parties.

According to Honkela, the required technology basically exists for most elements. What needs to be considered now is how to implement the machine in practice – and how to sell the idea to people.

“The loss of privacy is a disadvantage. However, we already let social media analyse us, so you would expect that the right incentives could solve this issue.”

Honkela does not believe in a quick breakthrough, however.

“I expect a time span of one hundred years or so.”

MODERN-DAY SOCRATES
In Honkela’s case, a long-term perspective has a very special meaning. The Professor has been diagnosed with cancer, and the prognosis is not good. Honkela has now decided to complete the peace machine book before his death.

In just one and a half weeks, the project collected over 30,000 euros through crowdfunding.

The contributions have come as a pleasant surprise to Mika Pantzar, research professor at the National Consumer Research Centre, who supports the project. He hopes Honkela can start his writing soon.

“I believe it would be an important contribution. Timo is Socrates-like in that he enjoys asking strange questions and communicating his thoughts to his followers.”

[ Timo Honkela ]


Peace Machine | Rauhankone

“Peace, love and AI.”

HOME
PEOPLE
JOIN (LIITY JÄSENEKSI)
Facebook
The Peace Machine vision is to use AI technologies to release human potential and support emotional, ethical, and cultural development. We develop future AI technologies that can be used to reduce violence, oppression, and destructive conflict.

Peace Machine is a project to use Artificial Intelligence to create a more peaceful world.

The Peace Machine concept was introduced by Timo Honkela in his book Rauhankone – Tekoälytutkijan testamentti (in English: The Peace Machine – An AI Researcher’s Testament). French and Estonian versions of the book were published in 2019.

The Peace Machine association (Rauhankone ry) was founded in September 2018 to bring together actors from different fields to advance the Peace Machine concept.

[ Timo Honkela ]

WEBSITE: https://peacemachine.net/


Timo Honkela – My AI in the Peace Machine – MyData 2018

[ MyData Global ]

PRESENTATION: https://www.youtube.com/watch?v=-0_RGI4Atf0


“Designing AI For Conflict Prevention & Peacebuilding”

The growing prominence of artificial intelligence (AI) poses, as President Biden stated, “enormous potential and enormous danger” for global peace and security. The peacebuilding field, technology engineers, and policymakers must work together to minimize the dangers of AI and maximize AI’s potential to prevent and reduce violent conflict, violence, and fragility and build sustainable peace globally. AI technology can vastly increase humanity’s capacity to make discoveries through machine learning and coherently summarize numerous text documents. However, AI can also easily generate mis- and disinformation at a massive scale and, as a worst-case scenario, support deadly automated weaponry beyond human control. The peacebuilding field needs to raise awareness and educate how AI technology can positively and negatively impact global peace and security, and governments and multilateral organizations must urgently adopt robust AI standards and norms that prevent and reduce violent conflict, violence, and fragility and build sustainable peace. Additionally, peacebuilders should work with the technology sector to develop AI-powered tools that “translate” divisive rhetoric into peaceful language and thus promote social cohesion and peacebuilding.

[ Alliance For Peacebuilding ]

PDF DOWNLOAD: https://www.allianceforpeacebuilding.org/afp-publications/designing-ai


“Building The Bridge To Peace: Reframing Peace And Peacebuilding”

Publishers: Alliance for Peacebuilding, The FrameWorks Institute, PartnersGlobal

Publication date: April 2022

As the humanitarian crisis unfolds in Ukraine, now is the time to have a conversation about global peace and peacebuilding. The FrameWorks Institute, Alliance for Peacebuilding, and PartnersGlobal joined forces to deliver evidence-based recommendations for more effective narrative strategies that build public understanding and support for peacebuilding. The new brief includes:

Existing mindsets around peacebuilding

Research-based framing recommendations

Ideas for applying these frames in discourse & debate

The ongoing work of building bridges across divides must continue if we hope to create a world where conflicts are addressed without resorting to violence. Shifting the narrative of peacebuilding won’t happen overnight. But aligning messaging and consistency within the peacebuilding field will help the public and policymakers better understand what peacebuilding looks like in practice and why it is a productive mindset and policy option.


“AI and the Next Generation of Peacebuilders”

[ United States Institute of Peace ]

Q&A: https://www.youtube.com/watch?v=6vhrcxfrnJw


“AI FOR PEACE”

DECEMBER 13, 2019
SPECIAL SERIES – AI AND NATIONAL SECURITY

AI Peace
This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the fourth question (part a.) which asks what international norms for artificial intelligence should the United States lead in developing, and whether it is possible to create mechanisms for the development and enforcement of AI norms. [more…]

[ PATRICK S. ROBERTS ]

ARTICLE: https://warontherocks.com/2019/12/ai-for-peace/


“Alliance For Peacebuilding”
Named the “number one influencer and change agent” among peacebuilding institutions worldwide—AFP is a nonprofit and nonpartisan network of 200+ organizations working in 181 countries to prevent conflict, reduce violence, improve lives, and build sustainable peace. At our core, AFP cultivates a network to strengthen and advance the peacebuilding field, enabling peacebuilding organizations to achieve greater impact—tackling issues too large for any one organization to address alone.

WEBSITE: https://www.allianceforpeacebuilding.org/


“Artificial Intelligence – Opportunities and Risks for International Peace and Security” – Security Council, 9381st Meeting

[ UN Web TV ]

VIDEO: https://webtv.un.org/en/asset/k1j/k1ji81po8p


“Artificial Intelligence’s effect on world peace and stability”

[ CGTN UN ]

VIDEO: https://www.youtube.com/watch?v=B092NvDEXxo


“Artificial Intelligence” | United Nations – CEB

During the UNESCO General Conference in November 2019, Member States had mandated the organization to support the process of elaborating a non-binding Recommendation on the ethics of AI. A first version of the Recommendation was published in May 2020, prepared by a 24-member Ad Hoc Expert Group appointed by the Director-General of UNESCO in their personal capacity.

UNESCO’s General Conference adopted the Recommendation on the Ethics of AI in November 2021.

Following the adoption of the Recommendation, a set of principles was developed to guide the use of AI by United Nations system entities. The Principles for the Ethical Use of Artificial Intelligence in the United Nations System were approved by HLCP at an intersessional meeting in July 2022 and subsequently endorsed by CEB in September 2022.

ARTICLE: https://unsceb.org/topics/artificial-intelligence


“AI’s Power & Perils: UN Chief’s Briefing for Peace and Security”

[ United Nations Security Council ]

VIDEO: https://www.youtube.com/watch?v=mpnfK5Zhxho


“In World Peace Day Message, Pope Warns Of Risks Of AI For Peace”

By Catholic Media | Last updated Dec 14, 2023

In his message for the 57th World Day of Peace, Pope Francis reflects on the impact of Artificial Intelligence on world peace, and urges the international community to adopt a binding international treaty that regulates its development and use.
By Lisa Zengarini

New technologies must always be directed to “the pursuit of peace and the common good, in the service of the integral development of individuals and communities.”

In his annual Message for the World Day of Peace, Pope Francis urges world leaders to ensure that progress in developing forms of artificial intelligence “will ultimately serve the cause of human fraternity and peace.”

The message, dedicated to the theme ‘Artificial Intelligence and Peace,’ was released by the Vatican on Thursday ahead of the 57th World Day of Peace, 1 January 2024.

Inherent ambivalence of techno-scientific advances
In it, Pope Francis draws attention to the “ethical dimension” of these new technologies which are revolutionizing humanity in all spheres of life, highlighting the ambivalence inherent in any progress in science and technology.

On the one hand, he says, it can lead to the betterment of humanity and the transformation of the world if it “contributes to greater order in human society and greater fraternal communion and freedom.”

On the other hand, techno-scientific advances, especially in the digital sphere, “are placing in human hands a vast array of options, including some that may pose a risk to our survival and endanger our common home.”

No technological innovation is “neutral”
The message recalls that no scientific research and technological innovation is “neutral”: “As fully human activities, the directions they take reflect choices conditioned by personal, social, and cultural values in any given age. The same must be said of the results they produce: precisely as the fruit of specifically human ways of approaching the world around us, the latter always have an ethical dimension, closely linked to decisions made by those who design their experimentation and direct their production towards particular objectives.”

This also applies to AI, for “the impact of any artificial intelligence device – regardless of its underlying technology – depends not only on its technical design, but also on the aims and interests of its owners and developers, and on the situations in which it will be employed.”

Therefore, we “cannot presume a priori that its development will make a beneficial contribution to the future of humanity and to peace among peoples. That positive outcome will only be achieved if we show ourselves capable of acting responsibly and respect such fundamental human values as ‘inclusion, transparency, security, equity, privacy and reliability’”, Pope Francis writes. [more…]

[ Catholic Media ]

ARTICLE: https://catholicmedia.org/in-world-peace-day-message-pope-warns-of-risks-of-ai-for-peace/


“World Day of Peace: Artificial Intelligence and Peace”

[ Vatican IHD ]

VIDEO: https://www.youtube.com/watch?v=r29MlLZE0NA


“Security Council Debates AI’s Impacts on Peace, Sustainable Development”

STORY HIGHLIGHTS
Council members highlighted “the transformative opportunities AI offers for addressing global challenges and the risks it poses,” with many stressing the need to establish an ethical and responsible framework for international AI governance.

The UN Security Council held its first formal meeting on artificial intelligence (AI). Recognizing the need to “confront the new reality of generative and other” AI, speakers highlighted opportunities and risks this technology presents for international peace and security, sustainable development, and human rights.

UN Secretary-General António Guterres compared the speed and reach of AI’s development to the introduction of the printing press, noting that “while it took more than 50 years for printed books to become widely available across Europe, ChatGPT reached 100 million users in just two months.”

The Secretary-General underscored AI’s potential to “turbocharge global development” and realize human rights, particularly to health and education, while also acknowledging “evidence that AI can amplify bias, reinforce discrimination and enable new levels of authoritarian surveillance,” as highlighted by the High Commissioner for Human Rights.

Acknowledging that even the UN has been using AI to identify patterns of violence and to monitor ceasefires, Guterres highlighted the political, legal, ethical, and humanitarian concerns the technology presents for peace and security, including through its malicious use.

The Secretary-General further noted that “[w]hile many countries have called for different measures and initiatives around the governance of AI, this requires a universal approach.” He drew attention to the complexity of questions of governance surrounding AI due to: its wide availability to the general public; the ease with which it can be moved around the world “leaving very little trace”; and the private sector’s leading role in the field.

Among the entry points to advance global AI governance, Guterres identified: [more…]

[ IISD ]

ARTICLE: https://sdg.iisd.org/news/security-council-debates-ais-impacts-on-peace-sustainable-development/


“Our Common Agenda: Policy Brief 9 – A New Agenda for Peace (July 2023)”


INTRODUCTION

CHAPEAU

The challenges that we face can be addressed only through stronger international cooperation. The Summit of the Future, in 2024, is an opportunity to agree on multilateral solutions for a better tomorrow, strengthening global governance for both present and future generations (General Assembly resolution 76/307). In my capacity as Secretary-General, I have been invited to provide inputs to the preparations for the Summit in the form of action-oriented recommendations, building on the proposals contained in my report entitled “Our Common Agenda” (A/75/982), which was itself a response to the declaration on the commemoration of the seventy-fifth anniversary of the United Nations (Assembly resolution 75/1). The present policy brief is one such input.

PURPOSE OF THIS POLICY BRIEF

In the declaration on the commemoration of the seventy-fifth anniversary of the United Nations, heads of State and Government undertook to promote peace and prevent conflicts. Honouring this pledge will require major changes by Member States, in their own actions and in their commitment to uphold and strengthen the multilateral system as the only viable means to address an interlocking set of global threats and deliver on the promises of the Charter of the United Nations around the world.

Member States must provide a response to the deep sense of unease which has grown among nations and people that Governments and international organizations are failing to deliver for them. For millions of people, the sources of that disappointment are to be found in the horrors of hunger, displacement and violence. Inequalities and injustices, within and among nations, are giving rise to new grievances. They have sown distrust in the potential of multilateral solutions to improve lives and have amplified calls for new forms of isolationism. As the planet warms, marginalization grows and conflicts rage, young people everywhere have grown disillusioned at the prospects for their future. [more…]

[ OCHA ]

ARTICLE: https://reliefweb.int/report/world/our-common-agenda-policy-brief-9-new-agenda-peace-july-2023


“TRISTAN HARRIS-US Senate Hearings on Big Tech”

[ NeverSurrender ]

VIDEO: https://rumble.com/vr4av9-tristan-harris-us-senate-hearings-on-big-tech.html


“UN chief warns of AI risks to global peace and security”

UN Secretary-General Antonio Guterres on Tuesday warned that artificial intelligence could pose a risk to global peace and security, calling on all member states to urgently set up guardrails to keep the technology in check.

Issued on: 19/07/2023 – 01:36

By:
NEWS WIRES

“It is clear that AI will have an impact on every area of our lives,” Guterres said at the first UN Security Council meeting on the topic.

“Generative AI has enormous potential for good and evil at scale,” he added, noting that while it could help end poverty or cure cancer, the technology could also have “very serious consequences for global peace and security.”

British Foreign Secretary James Cleverly, whose country currently holds the rotating Security Council presidency, said AI “will affect the work of this council. It could enhance or disrupt global strategic stability.”

“It challenges our fundamental assumptions about defense and deterrence. It poses moral questions about accountability for lethal decisions on the battlefield,” said Cleverly, whose government will host an AI summit later this year.

Guterres asked member states to put in place a legally binding pact to “prohibit lethal autonomous weapons systems” by the end of 2026.

While AI can be used to identify patterns of violence or to monitor ceasefires, Guterres said, he warned that its use by terrorists or governments with ill intent could cause “horrific levels of death and destruction.”

He also warned that malfunctioning AI could wreak havoc, especially if the technology is used in connection with nuclear weapons systems or biotechnology.

“I urge agreement on the general principle that human agency and control are essential for nuclear weapons and should never be withdrawn,” he said.

He called for a working group to develop options for global AI governance by year’s end.

And he reiterated his support for a United Nations entity to support efforts to govern the use of AI, modeled after the International Atomic Energy Agency or the International Civil Aviation Organization.

[ France24 ]


“At UK’s AI Summit developers and govts agree on testing to help manage risks”

By Martin Coulter and Paul Sandle
November 2, 2023, 6:33 PM EDT

Summary
Attendees agree need for pre-release model testing
Govts including US and China signed declaration on Wednesday
UK’s Sunak says summit ‘tipped balance in favour of humanity’
Elon Musk tells Sunak 80% chance AI will be a force for good
BLETCHLEY PARK, England, Nov 2 (Reuters) – Leading AI developers agreed to work with governments to test new frontier models before they are released to help manage the risks of the rapidly developing technology, in a “landmark achievement” concluding the UK’s artificial intelligence summit.
Some tech and political leaders have warned that AI poses huge risks if not controlled, ranging from eroding consumer privacy to danger to humans and causing a global catastrophe, and these concerns have sparked a race by governments and institutions to design safeguards and regulation.

At an inaugural AI Safety Summit at Bletchley Park, home of Britain’s World War Two code-breakers, political leaders from the United States, European Union and China agreed on Wednesday to share a common approach to identifying risks and ways to mitigate them.
British Prime Minister Rishi Sunak said that declaration, the action on testing and a pledge to set up an international panel on risk would “tip the balance in favour of humanity”. [more…]

[ Reuters ]

ARTICLE: https://www.reuters.com/world/uk/uk-pm-sunak-lead-ai-summit-talks-before-musk-meeting-2023-11-02/


“Five takeaways from UK’s AI safety summit at Bletchley Park”

Rishi Sunak hails conference as diplomatic coup after it produces international declaration to address AI risks

Dan Milmo and Kiran Stacey
Thu 2 Nov 2023 12.55 EDT

Rishi Sunak has hailed this week’s artificial intelligence summit as a diplomatic breakthrough after it produced an international declaration to address risks with the technology, as well as a multilateral agreement to test advanced AI models.

Here are five things we have learned from the summit.

The UK pulled off a diplomatic coup
The prime minister spent diplomatic capital convening global leaders, tech executives, academics and civil society figures at Bletchley Park in Milton Keynes, the base for second world war codebreakers. [more…]

[ The Guardian ]

ARTICLE: https://www.theguardian.com/technology/2023/nov/02/five-takeaways-uk-ai-safety-summit-bletchley-park-rishi-sunak


“Frontier AI: capabilities and risks – Discussion paper”

Published 25 October 2023

Acknowledgements

We would like to thank the expert review panel, Yoshua Bengio, Sara Hooker, Arvind Narayanan, William Isaac, Paul Christiano, Irene Solaiman, Alexander Babuta and John McDermid for their insightful comments and feedback.

This report is a discussion paper to support the AI Safety Summit, and does not represent a policy position of HMG or represent the views of the expert review panel above, who only provided comments for consideration.

Introduction
We are in the midst of a technological revolution that will fundamentally alter the way we live, work, and relate to one another. Artificial Intelligence (AI) promises to transform nearly every aspect of our economy and society. The opportunities are transformational – advancing drug discovery, making transport safer and cleaner, improving public services, speeding up and improving diagnosis and treatment of diseases like cancer and much more.

Developments in frontier AI are transforming productivity and software services, which will multiply the productivity of many industries and sectors.[footnote 1] This progress in frontier AI in recent years has been rapid, and the most advanced systems can write text fluently and at length, write well-functioning code from natural language instructions, make new apps, score highly on school exams, generate convincing news articles, translate between many languages, summarise lengthy documents, amongst other capabilities. The opportunities are vast, and there is great potential for increasing the productivity of workers of all kinds. [more…]

[ UK Government ]

ARTICLE: https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper/frontier-ai-capabilities-and-risks-discussion-paper


“Without Adequate Guardrails, Artificial Intelligence Threatens Global Security in Evolution from Algorithms to Armaments, Speaker Tells First Committee”

Cyberissues Have Become Foreign Policy Issues of Urgent Concern, Says Another

The window of opportunity to enact guardrails against the perils of autonomous weapons and artificial intelligence’s military applications is rapidly closing, as the world prepares for a “technological breakout”, the First Committee (Disarmament and International Security) heard today during a day-long debate on how use of science and technology can undermine security.

“We are at the verge of a monumental step in human technological history, heralded by the advent of artificial intelligence,” said Pakistan’s representative, warning that its inevitable march from algorithms to armaments continues without adequate guardrails governing its design, development and deployment. The scale of challenges necessitates a multifaceted and holistic multilateral response.

Cyberissues have become strategic foreign policy issues of urgent concern to all countries, said Australia’s representative. For its part, her country will act in accordance with the UN framework for responsible State behaviour in cyberspace, and continue to publicly share how it implements, interprets and observes the framework. Transparency breeds accountability, predictability and stability, she said, urging others to follow suit.

Sri Lanka’s delegate said that while humankind’s creativity is well-known, its ability to self-destruct through that creativity and the pursuit of short-term self-interest is also known. “We can ill afford misadventures that peril our very existence”, he warned. Enacting legislation and preventing illegal and criminal activities involving information and communications technology (ICT) is the primary responsibility of States concerned. However, many developing countries are grappling with capacity constraints.

Cyberspace must not become a “new battlefield for rivalries”, warned Mexico’s representative, adding that cyberspace must remain open, free, stable, safe, accessible and resilient for everyone. It is “alarming” to think that nations or even private entities can develop advanced offensive capabilities, which would potentially lead to destabilizing actions. [more…]

[ United Nations ]

ARTICLE: https://press.un.org/en/2023/gadis3725.doc.htm


“Future risks of frontier AI (Annex A)”

Published 25 October 2023

Executive summary
Context: This paper uses the government’s chosen definition of Frontier AI as highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models[footnote 1]. As of October 2023, this primarily encompasses foundation models consisting of very large neural networks using transformer architectures.

Frontier AI models are becoming more capable at a range of tasks. Short term, they are likely to become even more so, and to be built by the few companies with access to the requisite resources. Open-source models will almost certainly improve behind the Frontier, posing different regulatory challenges. This paper is focussed on risks from Frontier AI, but experts repeatedly highlighted the opportunities to be gained from AI.

The risks posed by future Frontier AI will include the risks we see today, but with potential for larger impact and scale. These include enhancing mass mis- and disinformation, enabling cyber-attacks or fraud, reducing barriers to access harmful information, and harmful or biased decisions. Investing in mitigations for risks associated with Frontier AI now, is likely to be good preparation for some future risks.

Even the most advanced models today have limitations and produce errors. There is ongoing debate amongst experts as to how robust and scalable some apparent capabilities are. Improved accuracy, reasoning, planning capabilities, memory, and self-correction will be required to deliver truly autonomous agents able to carry out more than basic tasks without human oversight. [more…]

[ UK Government ]

ARTICLE: https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper/future-risks-of-frontier-ai-annex-a


“Could Artificial Intelligence Prevent Future Wars?”

July 19, 2022

Rohit Talwar
CEO of Fast Future

There is growing concern that current mediation and peacemaking approaches are proving less and less effective in resolving localized and international conflicts. In response, interest is rising in the potential for artificial intelligence to play a role in advising on or resolving complex conflicts.

A combination of situational complexity, intractable positions of opposing sides and escalating costs is driving the search for AI-based approaches that could replace humans in resolving legal cases, international disputes and military conflicts.

AI Is Appearing in Legal Dispute Resolution
While such solutions might seem far off possibilities to many, progress is being made in legal dispute resolution. Master of the Rolls and head of civil litigation in England and Wales, Sir Geoffrey Vos, has talked for some time about AI’s potential to propose resolutions for humans to ratify. As the technology gets ever more functional, the legal sector is laying the foundation for wider adoption of AI. [more…]

[ MarshMcLennan ]

ARTICLE: https://www.brinknews.com/could-artificial-intelligence-prevent-future-wars/


“Will the world get to grips with AI in 2024?”

[ United Nations ]

INTERVIEW: https://www.youtube.com/watch?v=UNHhlZ3xOXs


“World Peace Day 2024: AI should not promote ‘the folly of war’”

Pope Francis’s message for 1 January 2024 is dedicated to the relationship between the new frontiers of data technology and the challenge of peace. The pontiff warns of dangers such as the manipulation of information, the development of weapons that “hide” responsibilities, and “processes of categorization” that risk making the “human heart itself [. . .] ever more ‘artificial’.” To avoid this, the “community of nations” should “adopt a binding international treaty that regulates the development and use” of such tools.

Vatican City (AsiaNews) – In his message for the 57th World Day of Peace, which the Church will celebrate on 1 January 2024, Pope Francis focuses on “Artificial Intelligence and Peace”. In it, the pontiff writes that the most advanced applications of artificial intelligence (AI) “should not be employed to facilitate the violent resolution of conflicts, but rather to pave the way for peace.”

In the message, released today by the Holy See Press Office, the pontiff proposes a long and detailed reflection on the new frontiers of science and digital technology, calling for their responsible development and regulation in light of their “serious implications for the pursuit of justice and harmony among peoples.”

For Pope Francis, the development of algorithms capable of processing immense amounts of data, often extrapolated without the knowledge of those involved, raises serious ethical questions. They will benefit our communities, he said, only if “we show ourselves capable of acting responsibly and respect such fundamental human values as ‘inclusion, transparency, security, equity, privacy and reliability’.” [more…]

[ PIME Asia News ]

ARTICLE: https://www.asianews.it/news-en/World-Peace-Day-2024%3A-AI-should-not-promote-%26lsquo%3Bthe-folly-of-war%26rsquo%3B-59755.html


“Robot diplomacy: How AI could usher in a new era of world peace”

BY OLA MOHAJER, OPINION CONTRIBUTOR – 09/21/23 11:00 AM ET

The world is facing the most significant surge in violent conflicts since World War II. According to a United Nations briefing, a quarter of the global population is affected by conflicts today, the likes of which are more complex than ever.

This new era of rising conflict rates demands an honest reexamination of how we do things. It is imperative for practitioners in the fields of peace and defense to acknowledge that conventional approaches are failing one-fourth of humanity. Disruption is urgently needed, and artificial intelligence (AI)-powered peace should be our north star.

AI-powered peace has the potential to revolutionize international peace and security. It will allow practitioners and policymakers to eliminate the time and resource intensity behind data collection, analysis and option generation, and free up those resources to focus on the difficult things like dialogue, negotiation, trust-building and strategic decisionmaking.

Machine learning and generative AI can learn from data, identify patterns, predict outcomes and generate new knowledge and solutions. [more… ]

[ The Hill ]

ARTICLE: https://thehill.com/opinion/international/4211989-robot-diplomacy-how-ai-could-usher-in-a-new-era-of-world-peace/


“Why AI will lead the world to peace”

Introduction

Artificial intelligence (AI) is one of the most promising technologies in history. It will reshape our economic and social systems, and it could even save millions of lives. But what if AI is also a threat? That’s why we need to be careful about how we use this technology while still moving forward with its development.

AI will make war less destructive
AI will make war less destructive. AI will make war less likely. AI will help us prevent war.

AI will transform our economy

AI is already helping us to do more with less, but it will continue to be a major driver of economic growth and job creation. As we become more efficient, faster and more productive at work, new industries will emerge that can’t be automated; in fact, they’re just now being created. These new jobs are likely to require skills like creativity or empathy—skills that humans have developed over thousands of years through history as a result of interacting with other humans face-to-face or over distance through language (or even nonverbal communication).

AI can be used as a way to prevent war.

One of the most important ways AI can help prevent war is by using it to track weapons and monitor military activity. AI has become a powerful tool for monitoring weapons, which allows us to detect threats before they happen. This means that AI could help prevent conflicts from escalating into full-scale wars.
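The monitoring idea described above amounts to anomaly detection over event data. A minimal sketch, assuming nothing more than a list of daily incident counts (all numbers are made up), flags days that spike well above a rolling baseline:

```python
from statistics import mean, stdev

def flag_escalation(daily_events, window=7, z=2.0):
    """Flag days whose event count exceeds the rolling mean by z standard
    deviations -- a crude early-warning signal for escalating violence."""
    alerts = []
    for i in range(window, len(daily_events)):
        baseline = daily_events[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and daily_events[i] > mu + z * sigma:
            alerts.append(i)
    return alerts

# Quiet baseline followed by a sharp spike on day 10 (index 10).
counts = [3, 4, 2, 3, 5, 3, 4, 3, 4, 3, 20]
print(flag_escalation(counts))  # -> [10]
```

Real early-warning systems combine many more signals (satellite imagery, arms-transfer records, social media), but the core logic of comparing new observations against a learned baseline is the same.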

A second way AI could be used is by detecting human rights abuses committed by governments or organizations against their own people. Documented evidence of such atrocities could support war crimes trials against those responsible, and the prospect of accountability could help deter future conflicts between nations over human rights issues.

AI could save millions of lives.

One way that AI can be used to help humanity is by predicting pandemics, like the Ebola crisis in Africa in 2014. In this case, Google DeepMind’s machine learning system was able to predict where the virus would spread and how long it would take before it reached Europe or the U.S., which helped contain its effects around Africa and elsewhere on Earth during one of humankind’s most devastating epidemics ever recorded.

Another project involves using deep learning techniques (also called artificial neural networks) to identify cyber attacks before they happen – something that could potentially prevent millions from being hurt by them every year!

AI is the solution to humanity’s problems, not just its threats.

AI is a tool that can be used for good or bad. It’s not a threat to humanity, but rather an important part of the solution to many of our problems today.

As AI continues its development and becomes more sophisticated, it will become more useful in solving problems such as climate change and global warming. This could save millions of lives by preventing pollution caused by fossil fuels, which contributes heavily toward global warming and sea level rise. In addition to helping us solve these issues at home, AI will also help us address them abroad: for example, helping refugee camps provide better services for those affected by war; finding ways to support countries suffering from drought or famine; tracking disease outbreaks around the world; and predicting natural disasters before they occur so that we know in advance what resources (e.g., food) will be needed.

Conclusion

As the pace of technological change continues to accelerate, there’s no doubt that AI will soon be a part of our everyday lives. We can only hope that it will lead us towards a more peaceful future.

[ Arun ]


“Solving World Peace with ChatGPT”

February 7, 2023

What relevance might ChatGPT have in a course on Peace Studies? After recent conversations with colleagues and faculty about the ChatGPT and teaching, I decided to find out for myself.

Image Credit: Created with DALL-E 2 and the prompt “a synth painting of someone looking at a peace sign on a phone.”
Although it’s easy to feel threatened by something like ChatGPT, I think our task as educators is to think about what best serves our students in the world that is emerging.

Increasingly, basic information gathering, calculating, and even simple writing will be done by bots. This creates the potential for a growing division between those who are simply “educated” and those who can think critically in collaboration with artificial intelligence and other sources of information and content. If a bot can collate sources, coherently express information, and perform calculations, students will need to be able to do more to be competitive in their careers. They need to learn how to ask good questions, analyze answers, and apply information effectively.

My course is an introduction to Peace Studies, which I teach from the vantage point of theology and religious studies as part of the Faith, Peace, and Justice program. The first week, I introduced different understandings of the field of “peace studies” and some of the major debates within the field. One major debate revolves around defining “peace.” One side tends to define peace in negative terms, as the absence of military conflict. Another group thinks of peace in more positive terms, as harmony and cooperation between various groups, identities, and states.

After I felt like we had a good handle on the main ideas, I pulled up ChatGPT at the end of class. At first, the students were surprised, even scandalized! In their minds, ChatGPT is something inappropriate for the classroom since it can be a tool for cheating. I suggested that we should see if ChatGPT could be a way to enhance our analytical skills.

We asked ChatGPT to define peace studies. The bot gave a very well-written and coherent overview of the field of Peace Studies and some of its main questions and ideas. On the surface, at least, it could have been an impressive answer on a final exam. Students quickly noticed, however, that the answer seemed to reflect only the “negative peace” angle of Peace Studies. They were able to identify the places where this bias was reflected in the words ChatGPT used and the examples it gave. I was impressed. Students were animated to discover that they could demonstrate what they had learned so effectively, and could put this knowledge into practice to evaluate information. [more…]

[ Spencer K Johnson ]

ARTICLE: https://cdil.bc.edu/2023/02/07/discussing-peace-with-chatgpt/


“The Terrifying Prescience of George Orwell’s 1984”

Andrew Bernstein, February 21, 2023 (Spring 2023 issue)
Author’s note: This article contains spoilers.

First and foremost a magnificent novel, George Orwell’s 1984 is also a terrifyingly prescient political commentary. Written by an Englishman in 1949, it substantially mirrors the trajectory of American politics today.

We can see this by examining four main elements:

the story as a whole;
the political principles and policies that Orwell dramatizes—and his brilliantly original contributions to the language of political discourse;
his insights into the philosophy underlying totalitarianism; and
several instances of Orwell’s nightmare world becoming our reality.
The Story
1984 is a love story set against a background of brutal communist oppression (communists are referred to simply as the “Party”). After a nuclear war in the 1950s, the world was divided into three super-states: Oceania, consisting of North America and Great Britain (the latter renamed “Airstrip One”); Eurasia, composed of the European continent and Western Asia; and Eastasia, or the bulk of the Asian continent. The powers are constantly at war with each other. Allies and enemies change over time, but war is the way of the world: At times, Oceania is allied with one of the others, fighting the third; at other times, they are reversed. No one ever wins, nor is victory the intent.

Oceania’s goal is to keep its populace in a frenzied state of hatred against the nation’s enemies and therefore loyal to the Party. Strict obedience to the Party is enforced by several means.

One is the omnipresence of the secret police—the “Thought Police” who deploy an advanced two-way telescreen technology. Every building and home in the country has a telescreen, in virtually every room. The Thought Police can spy on you at any moment. They can see and hear you at will; “any sound . . . made, above the level of a low whisper would be picked up by [the telescreen].”1

Further, the Party imposes a daily “Two Minutes Hate.” Every day, every person in Oceania is required to drop everything else, stand in front of a telescreen, and scream vitriol at Oceania’s foreign enemy—and at its domestic traitors, Emmanuel Goldstein and the Brotherhood, most likely invented threats supposedly seeking to overthrow the Party. The Thought Police monitor this procedure sedulously, imposing prison sentences or death for those suspected of insufficiently spewing hate.

The population is kept in a state of relentless indoctrination. The Party controls every means of communication and uses them to disseminate its lies. Its history books claim that the Party invented the airplane. The Ministry of Truth rewrites past newspaper and magazine articles to suit the propaganda of the moment; the past is continually rewritten. If a Party member is purged for treason—real or imaginary—he is “vaporized,” not merely killed but written out of recorded history. It is treason, punishable by vaporization, to state the “delusional” belief that he ever lived. When the Party switches from fighting Eurasia to Eastasia, it claims that Oceania has always been at war with Eastasia, all contrary writings are rewritten, and any opposing claim is considered treasonous.

The government is composed of four branches: the Ministry of Truth, which spreads the Party’s lies; the Ministry of Peace, which conducts relentless warfare; the Ministry of Love, which tortures and/or executes the Party’s enemies; and the Ministry of Plenty, which keeps the populace in perpetual poverty. The Party propagates three slogans: “War is Peace,” “Freedom is Slavery,” “Ignorance is Strength.” The bulk of the citizenry receives little education and is kept in a state of ignorance in which the only absolute is: All truth comes from the Party. [more…]

[ The Objective Standard ]

ARTICLE: https://theobjectivestandard.com/2023/02/the-terrifying-prescience-of-george-orwells-1984/


“UN Discusses AI Risks | AI’s Impact On Global Peace And Security”

[ NEWS9 Live ]

PRESENTATION: https://www.youtube.com/watch?v=iU7vFcL7Pxg


“War and Peace – in the Age of AI”

[ AUSCL ]

PRESENTATION: https://www.youtube.com/watch?v=YCqYg8Smb-A


“Statement on AI Risk”

AI experts and public figures express their concern about AI risk.

AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Signatories:
AI Scientists
Other Notable Figures
Geoffrey Hinton
Emeritus Professor of Computer Science, University of Toronto
Yoshua Bengio
Professor of Computer Science, U. Montreal / Mila
Demis Hassabis
CEO, Google DeepMind
Sam Altman
CEO, OpenAI
Dario Amodei
CEO, Anthropic
Dawn Song
Professor of Computer Science, UC Berkeley
Ted Lieu
Congressman, US House of Representatives
Bill Gates
Gates Ventures
Ya-Qin Zhang
Professor and Dean, AIR, Tsinghua University
Ilya Sutskever
Co-Founder and Chief Scientist, OpenAI

[more…]

[ Center For AI Safety ]

ARTICLE: https://www.safe.ai/statement-on-ai-risk


“Will More Data And AI Really Help Solve All The World’s Problems And Bring World Peace?”

July 10, 2019

It is accepted truth within the technology world that more data and more machine learning are always better, and that with sufficient data and enough AI, all the world’s problems may be solved. It seems this mindset is increasingly finding its way into the humanitarian and governmental sectors, as they too become infatuated with the utopian dream that sufficient training data and sophisticated enough algorithms will bring world peace and prosperity. Is this really true?

I recently attended a major humanitarian data conference, one of many these days that are filled with a who’s who of the NGO and governmental worlds. As the conference wound to a close, the organizers noted that two underlying existential themes had emerged from the event: that all we need is more “big data” and “AI” to entirely prevent conflict, and that with larger global social media platforms all the world’s peoples can lock arms in glorious happiness. With just those two things, more data and tools and larger social media platforms, all the world’s problems will be solved and we will finally achieve world peace and prosperity.

Unfortunately, the organizers were not being facetious. They were genuinely summarizing the narratives that had become sacrosanct over the course of the event, repeated like scripture by attendees and which many of the participants proudly offered would guide their future research and policy decisions.

Would bigger data and more machine learning algorithms truly solve all the world’s problems?

We are already awash in data and machine learning today. Nearly every action we take in our modern society is guided by all-powerful algorithms fed by an all-seeing surveillance state that has created an Orwellian world in which we are told by machines what to say, see and do, banished from digital existence if we do not obey.

Yet as our world has become more data-driven and more algorithmically mediated, we have not cheered these developments on as the dawn of a new era of peace and prosperity. Rather we have condemned the censorship, oppression and unequal and discriminatory control they have wrought and fought for new legislation that would protect us from further moderation, mining and manipulation.

Most importantly, we have learned that data is imperfect and that when imperfect data is used to construct an imperfect understanding of the world, very bad things can happen.

Of particular relevance to the humanitarian sector, we have learned that a major imperfection of today’s data is its immense bias against underrepresented communities, that its demographic, geographic and cultural representativeness could not be more skewed, resulting in algorithms that work to help a narrow slice of society at the expense of oppressing the rest of the world.

At the same time, automation’s impact on displacing human jobs and upending the global economy cannot be overstated. We don’t look to AI in the workplace as a benefit; we see it as a threat that will take our jobs.

What about social media platforms? Is more social media necessarily the path to world peace?

The results thus far are not promising.

Rather than bring us together, social media has split us apart, amplifying the most hateful and toxic corners of society to overwhelm all other speech and bringing to the forefront the ways in which we are different rather than alike.

Simply adding more ways for society to tear itself down is unlikely to lead to peace.

Putting this all together, new approaches are needed. We don’t need bigger data, we need better data: data that is representative of all the world’s diversity. We need machine learning algorithms that complement rather than compete with their human counterparts and whose limitations are factored into their usage. We need social platforms that foster debate rather than division, that emphasize our commonalities rather than our contradictions.

In the end, we must recognize that technology is not always the solution to all the world’s problems. Sometimes it is the cause.

[ Kalev Leetaru ]


“Will humans love AI robots? | DW Documentary”

[ DW Documentary ]

DOCUMENTARY: https://www.youtube.com/watch?v=gIqCCx3hRL8


“Top 10 Metatrends Impacting Humanity”

[ Peter H. Diamandis ]

PRESENTATION: https://www.youtube.com/watch?v=usPOKgPxYTo&t=295s


“WARNING: ChatGPT Could Be The Start Of The End! Sam Harris”

[ The Diary Of A CEO ]

INTERVIEW: https://www.youtube.com/watch?v=GmlrEgLGozw


“Artificial Intelligence and the Future of Conflict”

CAN KASAPOĞLU, BARIŞ KIRDEMIR
As the AI revolution and accompanying technologies are transforming geopolitical competition, NATO should address internal and external disparities in AI capabilities.
Published November 28, 2019

INTRODUCTION
It is hard to predict the exact impact and trajectory of technologies enabled by artificial intelligence (AI).1 Yet these technologies might stimulate a civilizational transformation comparable with the invention of electricity.2 AI applications will change many aspects of the global economy, security, communications, and transportation by altering how humans work, communicate, think, and decide. Intelligent machines will either team up with or replace humans in a broad range of activities. Such a drastic shift will boost the social, economic, and political influence of those with game-changing capabilities, while the losing sides could face significant challenges.

The AI revolution and accompanying technologies are also transforming geopolitical competition. Because the development of AI, machine learning, and autonomous systems relies on factors such as data, workforces, computing power, and semiconductors, disparities in how well different countries harness these technologies may widen in the future. This matters because states’ mastery of AI will determine their future strategic effectiveness in military matters, as well as their performance, competitiveness, and ability to deter adversaries. [more…]

ARTICLE: https://carnegieeurope.eu/2019/11/28/artificial-intelligence-and-future-of-conflict-pub-80421


“The Exciting, Perilous Journey Toward AGI | Ilya Sutskever”

[ TED ]

PRESENTATION: https://www.youtube.com/watch?v=SEkGLj0bwAU


“AI Translation = Global Peace! 5 ways we can use AI to bring about global peace”

[ David Shapiro ]

PRESENTATION: https://www.youtube.com/watch?v=pmdmdTNg7Ec


“Can Technology Unlock World Peace?”

[ Doha Debates ]

VIDEO: https://www.youtube.com/watch?v=7RcjuRoGz2Y


“Is Technology the Answer to World Peace Or the Problem? | FULL DEBATE”

[ Doha Debates ]

PRESENTATION: https://www.youtube.com/watch?v=FgFNhR3cDdI


“Artificial Intelligence and Peace”

[ Vatican IHD ]

VIDEO: https://www.youtube.com/watch?v=Onxy6LWlcIE


“Can AI Bring World Peace?”

[ Shifu Digital ]

PRESENTATION: https://www.youtube.com/watch?v=yuBVM-HVVbE


“The Future of AI Regulation: A Conversation with Arati Prabhakar”

[ Carnegie Endowment ]

PRESENTATION: https://www.youtube.com/watch?v=3uovOOUL4zg


“Ethics in AI for Peace by Branka Panic (AI Ethics: Global Perspectives)”

[ The GovLab ]

PRESENTATION: https://www.youtube.com/watch?app=desktop&v=MVs0HFovVL0


“This is A MAJOR SETBACK For AI Safety (Sleeper Agents)”

[ TheAIGRID ]

PRESENTATION: https://www.youtube.com/watch?v=ksH4lPZKM18


“Nick Bostrom: How AI will lead to tyranny”

[ UnHerd ]

INTERVIEW: https://www.youtube.com/watch?v=_Oo-m893-xA


“Harnessing the potential of artificial intelligence for humanitarian action: Opportunities and risks”

Ana Beduschi

Abstract
Data-driven artificial intelligence (AI) technologies are progressively transforming the humanitarian field, but these technologies bring about significant risks for the protection of vulnerable individuals and populations in situations of conflict and crisis. This article investigates the opportunities and risks of using AI in humanitarian action. It examines whether and under what circumstances AI can be safely deployed to support the work of humanitarian actors in the field. The article argues that AI has the potential to support humanitarian actors as they implement a paradigm shift from reactive to anticipatory approaches to humanitarian action. However, it recommends that the existing risks, including those relating to algorithmic bias and data privacy concerns, must be addressed as a priority if AI is to be put at the service of humanitarian action and not to be deployed at the expense of humanitarianism. In doing so, the article contributes to the current debates on whether it is possible to harness the potential of AI for responsible use in humanitarian action.

Introduction
The use of digital technologies in humanitarian action is not a new phenomenon. Humanitarian actors have been utilizing digital technologies to assist and protect populations affected by conflict and crisis for decades.1 Yet, contemporary advances in computational power, coupled with the availability of vast amounts of data (including big data), have allowed for more widespread use of digital technologies in the humanitarian context.2 The COVID-19 pandemic has further accelerated the trend of the use of digital technologies to help maintain humanitarian operations.3 [more…]

[ International Review of the Red Cross ]

ARTICLE: https://international-review.icrc.org/articles/harnessing-the-potential-of-artificial-intelligence-for-humanitarian-action-919


“OpenAI Saga Timeline Explained – What Actually Happened? Did OpenAI achieve AGI? What is Q*?”

[ The AI Channel ]

PRESENTATION: https://www.youtube.com/watch?v=X7AIZvGtHmk


“OpenAI Founder Sam Altman on Artificial Intelligence’s Future | Exponentially”

[ Bloomberg Original ]

INTERVIEW: https://www.youtube.com/watch?v=w5nEf-HahZM


“AI’s Moral Dilemma: Are We Building Our Own Nightmare? w/ Dr. Rana el Kaliouby | EP #49”

[ Peter H. Diamandis ]

INTERVIEW: https://www.youtube.com/watch?v=nqnY8cmALZs


“The Ethical Implications of Generative Artificial Intelligence for Psychologists”

[ American Psychological Association ]

PRESENTATION: https://www.youtube.com/watch?v=IGXCyTyYS6k


“AI: Grappling with a New Kind of Intelligence”

[ World Science Festival ]

INTERVIEWS: https://www.youtube.com/watch?v=EGDG3hgPNp8


“What Is Q*? The Leaked AGI BREAKTHROUGH That Almost Killed OpenAI”

[ Matthew Berman ]

PRESENTATION: https://www.youtube.com/watch?v=Z6E41eXStsU


“Is a ban on AI technology good or harmful? | 60 Minutes Australia”

[ 60 Minutes Australia ]

INTERVIEW: https://www.youtube.com/watch?v=OTwdrMAq-Fk


“EXCLUSIVE: The Rise of Artificial Intelligence with Erick Stakelbeck”

[ TBN ]

PRESENTATION: https://www.youtube.com/watch?v=-jYq-BoqyIA


“Experts Predict AI Singularity Months Away!”

[ Dylan Curious ]

PRESENTATION: https://www.youtube.com/watch?v=EzyNxcFUWgI


“I Deep Faked Myself, Here’s Why It Matters”

[ Johnny Harris ]

VIDEO: https://www.youtube.com/watch?v=S951cdansBI


“’Deepfakes, destruction’: Artificial intelligence’s ‘real danger’ to humanity”

[ Sky News Australia ]

INTERVIEW: https://www.youtube.com/watch?v=rb7NddXDdxs


“Can Content Credentials Defeat Deepfakes (in Elections and Beyond)?”
Adrian Pennington

In 2024, deployment of content credentials will begin in earnest, spurred by new AI regulations in the EU and the United States.

For the media companies, content credentials are a way to build trust at a time when rampant disinformation makes it easy for people to cry “fake” about anything they disagree with.

The BBC and other big media organizations are making a push to use a content credentials system to allow Internet users to check the validity and provenance of images and videos.

READ MORE: Content Credentials Will Fight Deepfakes in the 2024 Elections (IEEE Spectrum)

In 2024, major national elections in some of the world’s biggest democracies, including India, the US and the UK, could be shaped by the alarmingly real threat of online disinformation.

To counter the threat of deepfake content, media companies are making moves to embed news-related video and still images with tags that display their provenance.

So-called data integrity, data dignity or digital provenance has been proposed for a while as the most effective means of combatting AI-manipulated disinformation published online.

Now, major news organizations, tech companies and social media networks appear on track to make concrete steps that would give audiences transparency about the video and stills they are viewing.

“Having your content be a beacon shining through the murk is really important,” Laura Ellis, head of technology forecasting at the BBC, told IEEE Spectrum, which has a thorough report on the latest developments.

The BBC is a member of the Coalition for Content Provenance and Authenticity (C2PA), an organization developing technical methods to document the origin and history of digital media files, both real and fake. [more…]
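The core idea behind content credentials can be sketched in a few lines: bind a cryptographic hash of the media bytes to a signed claim about their origin, so that any later edit to the file invalidates the credential. The sketch below is conceptual only; real C2PA credentials use X.509 certificate chains and a standardized manifest format rather than a shared secret, and the key and field names here are invented for illustration.

```python
import hashlib
import hmac

# Illustrative signing key; C2PA actually uses certificate-based signatures.
SIGNING_KEY = b"publisher-demo-key"

def make_credential(media_bytes: bytes, source: str) -> dict:
    """Bind a provenance claim to the exact bytes of a media file."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    claim = f"{digest}|{source}"
    signature = hmac.new(SIGNING_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "source": source, "signature": signature}

def verify_credential(media_bytes: bytes, cred: dict) -> bool:
    """Re-derive the claim from the bytes; any edit breaks the match."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    claim = f"{digest}|{cred['source']}"
    expected = hmac.new(SIGNING_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return digest == cred["sha256"] and hmac.compare_digest(expected, cred["signature"])

original = b"\x89PNG...newsroom photo bytes"
cred = make_credential(original, "BBC News")
print(verify_credential(original, cred))            # True: untouched file checks out
print(verify_credential(original + b"edit", cred))  # False: tampered file fails
```

The point audiences would rely on is the failure case: a deepfake derived from the original, or an image with a forged source claim, cannot reproduce a valid signature over its own bytes.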

[ NAB Show ]

ARTICLE: https://amplify.nabshow.com/articles/ic-content-credentials-deepfakes/


“Humane-tech campaigner on the ‘AI arms race’”

[ CNBC International TV ]

INTERVIEW: https://www.youtube.com/watch?v=kS9WW36rBzg


“AI: Google Deepmind makes breakthrough in difficult artificial intelligence test”

[ BBC News ]

COMMENTARY: https://www.youtube.com/watch?v=E_a78HF4Q1c


“How Sophie, The Robot, Blew The Media Away With Her Crazy Plan!”

[ AI Wonderscape ]

PRESENTATION: https://www.youtube.com/watch?v=rmEAzSEjrtE


“Artificial Intelligence and the Future of Humans”

Experts say the rise of artificial intelligence will make most people better off over the next decade, but many have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will

BY JANNA ANDERSON AND LEE RAINIE

Digital life is augmenting human capacities and disrupting eons-old human activities. Code-driven systems have spread to more than half of the world’s inhabitants in ambient information and connectivity, offering previously unimagined opportunities and unprecedented threats. As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today?

Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018.

The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities. They spoke of the wide-ranging possibilities: that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customized future.

[ Pew Research ]

ARTICLE: https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/


“Get Ready for 15 SHOCKING Changes After the Singularity (AGI IS NEAR!)”

[ TheAI GRID ]

PRESENTATION: https://www.youtube.com/watch?v=F95s2bLMNIY


“Geoffrey Hinton tells us why he’s now scared of the tech he helped build”

“I have suddenly switched my views on whether these things are going to be more intelligent than us.”

By Will Douglas Heaven
May 2, 2023


I met Geoffrey Hinton at his house on a pretty street in north London just four days before the bombshell announcement that he is quitting Google. Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence, but after a decade at Google, he is stepping down to focus on new concerns he now has about AI.

Stunned by the capabilities of new large language models like GPT-4, Hinton wants to raise public awareness of the serious risks that he now believes may accompany the technology he ushered in.

At the start of our conversation, I took a seat at the kitchen table, and Hinton started pacing. Plagued for years by chronic back pain, Hinton almost never sits down. For the next hour I watched him walk from one end of the room to the other, my head swiveling as he spoke. And he had plenty to say.

The 75-year-old computer scientist, who was a joint recipient with Yann LeCun and Yoshua Bengio of the 2018 Turing Award for his work on deep learning, says he is ready to shift gears. “I’m getting too old to do technical work that requires remembering lots of details,” he told me. “I’m still okay, but I’m not nearly as good as I was, and that’s annoying.” [more…]

[ MIT Technology Review ]

ARTICLE: https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/?trk=article-ssr-frontend-pulse_little-text-block


“Can We Contain Artificial Intelligence?: A Conversation with Mustafa Suleyman (Episode #332)”

[ Sam Harris ]

INTERVIEW: https://www.youtube.com/watch?v=IkojE37PUO8


“Cerebral Valley: Emad Mostaque (Stability AI) with Eric Newcomer”

[ Newcomer ]

INTERVIEW: https://www.youtube.com/watch?v=k5d0A-05Gu4


“Meta Just Achieved Mind-Reading Using AI”

[ ColdFusion ]

PRESENTATION: https://www.youtube.com/watch?v=uiGl6oF5-cE


“Mind Uploading is Closer Than You Think (with Nick Bostrom and Randal A. Koene)”

[ Anastasi In Tech ]

PRESENTATION: https://www.youtube.com/watch?v=yMOvKBaBf2s


“Generative AI in action”

At a recent MIT symposium, participants got a sneak peek into 12 cutting-edge generative AI projects.

MIT Open Learning

Photo: Gretchen Ertl
By Sara Feijo and Katherine Ouellette

Imagine a world where students have access to an application that helps them improve their public speaking skills. A world where anyone can create a mobile app that harnesses the power of generative AI — even if they don’t have programming or coding experience. That world already exists.

More than 250 people got a behind-the-scenes look at some of MIT’s cutting-edge artificial intelligence projects during the “Generative AI + Education’’ symposium. Hosted by MIT Open Learning, the symposium featured 12 interactive demonstration stations, as well as engaging discussions with education leaders who explored the future of generative AI in and beyond the classroom. [more…]

[ Medium ]

ARTICLE: https://medium.com/open-learning/generative-ai-in-action-9f6fb6287781


“New Alpha of AI Explained”

[ Anastasi In Tech ]

COMMENTARY: https://www.youtube.com/watch?v=K8cRnwZFBqY


“Jonas Andrulis, Aleph Alpha: Are LLMs conscious? Or creative? Will scale fix this?”

[ 1E9 ]

PRESENTATION: https://www.youtube.com/watch?v=kpLU-qDgsVw


“A Conversation With My AI Clone on the Future of AI | EP #62”

[ Peter H. Diamandis ]

COMMENTARY: https://www.youtube.com/watch?v=60StHg1eOuM


“Journalist had a creepy encounter with new tech that left him unable to sleep”

[ CNN ]

INTERVIEW: https://www.youtube.com/watch?v=f24JL0nnhcA


“MEGATHREAT: Why AI Is So Dangerous & How It Could Destroy Humanity | Mo Gawdat”

[ Tom Bilyeu ]

INTERVIEW: https://www.youtube.com/watch?v=itY6VWpdECc


“Google’s DeepMind Co-founder: AI Is Becoming More Dangerous And Threatening! – Mustafa Suleyman”

[ The Diary Of A CEO ]

INTERVIEW: https://www.youtube.com/watch?v=CTxnLsYHWuI


“Mark Zuckerberg NEW STATEMENT Changes EVERYTHING!”

[ The AIGRID ]

PRESENTATION: https://www.youtube.com/watch?v=8md5EgOa5vM


“Mo Gawdat: AI Today, Tomorrow and How You Can Save Our World (Nordic Business Forum 2023)”

[ Mo Gawdat ]

PRESENTATION: https://www.youtube.com/watch?v=u9CEUzH4HL4


“Googles New Medical AI Just SHOCKED The Entire INDUSTRY (BEATS Doctors!) AMIE – Google”

[ TheAIGRID ]

PRESENTATION: https://www.youtube.com/watch?v=3BPzqH5sF90


“AI Takeover Center for Humane Technology CoFounders Tristan Harris and Aza Raskin discuss The AI Dilemma”

[ Red Pill Revolution ]

PRESENTATION: https://rumble.com/v32lmyq-center-for-humane-technology-cofounders-tristan-harris-and-aza-raskin-discu.html


“Sam Altman Just REVEALED key details About GPT-5… (GPT-5 Robot, AGI + More)”

[ TheAIGRID ]

PRESENTATION: https://www.youtube.com/watch?v=JuYLCfb7CK8


“AI and The Future of Life” by Tristan Harris @ The Wisdom & AI Summit

Talk at the Wisdom & AI Summit. Tristan is the cofounder of the Center for Humane Technology.

[ Tristan Harris ]

PRESENTATION: https://www.youtube.com/watch?v=bKCLLkYRYik


“I Wasn’t Afraid of AI Until I Learned This”

Joe Rogan interviews Tristan Harris

INTERVIEW: https://rumble.com/v43rh3f-i-wasnt-afraid-of-ai-until-i-learned-this.html


“John Lennox | 2084: Artificial Intelligence and the Future of Humanity | Talks at Google”

[ Talks At Google ]

INTERVIEW: https://www.youtube.com/watch?v=Z8UwMvbfoI4


“Mo Gawdat: Ex-Google Officer Warns About the Dangers of AI, Urges All to Prepare Now!”

[ Young and Profiting ]

INTERVIEW: https://www.youtube.com/watch?v=bJAHhZMtGsU


“Mustafa Suleyman: The AI Pioneer Reveals the Future in ‘The Coming Wave'”

[ Intelligence Squared ]

INTERVIEW: https://www.youtube.com/watch?v=eJf6QPN9yic


“BEWARE: AI is learning to MANIPULATE YOU just like humans can”

[ Glenn Beck ]

COMMENTARY: https://rumble.com/v3bpqtb-beware-ai-is-learning-to-manipulate-you-just-like-humans-can.html


“2024 AI : 10 Things Coming In 2024 (A.I In 2024 Major Predictions)”

[ TheAIGRID ]

PRESENTATION: https://www.youtube.com/watch?v=iGW4btk34yQ


“12 Facts About The New Upcoming AI”

[ AI Uncovered ]

PRESENTATION: https://www.youtube.com/watch?v=6W09IKp3ioo


“10 Things They’re NOT Telling You About The New AI”

[ AI Uncovered ]

PRESENTATION: https://www.youtube.com/watch?v=qxbpTyeDZp0


“5 MINUTES AGO: ELON MUSK STUNS Everyone With Statements On X.AI (Exclusive Elon Musk Interview)”

[ TheAIGRID ]

PRESENTATION: https://www.youtube.com/watch?v=Xd8fNiySYLQ


“10 DISTURBING AI Breakthroughs Coming In 2024”

[ AI Upload ]

PRESENTATION: https://www.youtube.com/watch?v=HE7fcWpvZyg


“AI Expert’s Urgent Wake-Up Call: Unveiling the Silent Threat w/ Mo Gawdat | EP#51”

[ Peter H. Diamandis ]

INTERVIEW: https://www.youtube.com/watch?v=hY8upuGzQBg


“Godfather of AI” Geoffrey Hinton Warns of the “Existential Threat” of AI

[ Amanpour and Company ]

INTERVIEW: https://www.youtube.com/watch?v=Y6Sgp7y178k


“Apocalypse GPT: Why We Must Shut Down the Artificial Intelligence Revolution Now | Ep 276”

[ Blaze Media – Glenn Beck ]

PRESENTATION: https://rumble.com/v2ofze1-apocalypse-gpt-why-we-must-shut-down-the-artificial-intelligence-revolution.html


“AGI Before 2026? Sam Altman & Max Tegmark on Humanity’s Greatest Challenge”

[ Science Time ]

PRESENTATION: https://www.youtube.com/watch?v=gFQvL3KVaOQ


“AI may make warfare deadly further wrecking world peace: experts”

From killer robots to battle automobiles, AI has power to control warfare more destructively than ever

By Web Desk, November 16, 2023

Artificial intelligence (AI) has the potential to completely transform combat, much like gunpowder and the atomic bomb did, and it might make conflicts between people unthinkably different and considerably deadlier, experts say.

There were rumours that US President Joe Biden and China’s Xi Jinping would agree to outlaw lethal autonomous weapons before their next conference.

Although there didn’t seem to be any agreement from the summit, experts believe it’s an important issue that is already changing armed conflict and intensifying the race for global domination.

Analysts claim Beijing is making significant investments in AI to the point that it will soon be able to alter the Asia-Pacific region’s power dynamics, if not beyond.

And it has significant ramifications for the global system that the United States has long governed. [more…]

[ Web Desk ]

ARTICLE: https://www.thenews.com.pk/latest/1129667-ai-may-make-warfare-deadly-further-wrecking-world-peace-experts


“AI Tipping Point | Full Documentary”

[ Curiosity Stream ]

PRESENTATION: https://www.youtube.com/watch?v=1cKE12LK4Eo


“My predictions about Artificial Super Intelligence (ASI)”

[ David Shapiro ]

COMMENTARY: https://www.youtube.com/watch?v=skozI33D3t4


“NASA Just Shut Down Quantum Computer After Something Insane Happened”

[ LAB 360 ]

PRESENTATION: https://www.youtube.com/watch?v=yM754VB3who


“Sam Altman speaks at Davos. Will AGI replace human jobs?”

[ Wes Roth ]

INTERVIEW: https://www.youtube.com/watch?v=-tcE2PvAS4A


“AI Coming To Life? Google Engineer Claims Chatbot Is Sentient”

[ NBC News ]

INTERVIEW: https://www.youtube.com/watch?v=zz7paQmzQa4


“Sam Altman: there’s no “magic red button” to stop AI”

[ The Economist ]

INTERVIEW: https://www.youtube.com/watch?v=RcXzmO8zOLg


“AI is Now Unstoppable: These Scenarios Could Destroy Us”

[ Why Don’t You Know This? ]

PRESENTATION: https://www.youtube.com/watch?v=aeZiPT-1AX4


“Elon Musk MOST SHOCKING INTERVIEW With AI!”

[ Elon Musk Rewind ]

PRESENTATION: https://www.youtube.com/watch?v=dUhvvoVtpVE


“Elon Musk on How To Manage the Threat of AI | MOONSHOTS”

[ Peter H. Diamandis ]

INTERVIEW: https://www.youtube.com/watch?v=H8AVJ1KMv8c


“One Way AI Could Go Wrong”

[ Elon Musk ]

INTERVIEW: https://www.youtube.com/watch?v=H8AVJ1KMv8c&t=315s


“Artificial Intelligence”

[ 60 Minutes ]

PRESENTATION: https://www.youtube.com/watch?v=aZ5EsdnpLMI


“Artificial Intelligence and the Containment Problem | Mustafa Suleyman”

[ Hidden Forces ]

INTERVIEW: https://www.youtube.com/watch?v=qT3XpYUGrCA


“Researchers Just Discovered a New Way to Hack Any AI Model! (Including GPT 4 & Bard)”

[ AI Uncovered ]

PRESENTATION: https://www.youtube.com/watch?v=WwgndVv9Qow


“Testing the limits of ChatGPT and discovering a dark side”

[ KARE 11 ]

PRESENTATION: https://www.youtube.com/watch?v=RdAQnkDzGvc


“AI Robots at UN Reckon They Could Run the World Better”

July 08, 2023 2:18 PM
By Agence France-Presse
Robots at a news conference with a panel of AI-enabled humanoid social robots as part of ITU’s ‘AI for Good’ global summit in Geneva, Switzerland, July 07, 2023.

GENEVA, SWITZERLAND —
A panel of AI-enabled humanoid robots told a United Nations summit Friday that they could eventually run the world better than humans.

But the social robots said they felt humans should proceed with caution when embracing the rapidly developing potential of artificial intelligence.

And they admitted that they cannot — yet — get a proper grip on human emotions.

Some of the most advanced humanoid robots were at the U.N.’s two-day AI for Good Global Summit in Geneva. [more…]

[ VOA News ]

ARTICLE: https://www.voanews.com/a/ai-robots-at-un-reckon-they-could-run-the-world-better-/7172680.html


“Sam Altman’s Q* Reveal, OpenAI Updates, Elon: “3 Years Until AGI”, and Synthetic Data Predictions”

[ Matthew Berman ]

PRESENTATION: https://www.youtube.com/watch?v=a8hI3tdZWtM


“Should We Be Fearful of Artificial Intelligence? w/ Emad Mostaque, Alexandr Wang, and Andrew Ng | 39”

[ Peter H. Diamandis ]

INTERVIEW: https://www.youtube.com/watch?v=_b__L_olCWs


“Six Ways the AI Revolution is Unfolding”

Two experts argue that a beneficial, AI-powered economy is possible as long as guardrails are implemented, too.

MIT Initiative on the Digital Economy

By Irving Wladawsky-Berger

“The Coming AI Economic Revolution: Can Artificial Intelligence Reverse the Productivity Slowdown?” was recently published in Foreign Affairs by James Manyika and Michael Spence, two authors I’ve long admired. Manyika is senior VP of research, technology and society at Google, after serving as chairman and director of the McKinsey Global Institute from 2009 to 2022. Spence, a co-recipient of the 2001 Nobel Prize in Economics, is a professor of economics and business at NYU’s Stern School of Business, and was previously professor of management and dean of the Stanford Graduate School of Business.

“By the beginning of the next decade, the shift to AI could become a leading driver of global prosperity,” wrote the authors. “The prospective gains to the world economy derive from the rapid advances in AI — now further expanded by generative AI, or AI that can create new content, and its potential applications in just about every aspect of human and economic activity.

If these innovations can be harnessed, AI could reverse the long-term declines in productivity growth that many advanced economies now face.”

“This economic revolution will not happen on its own,” they added. Unleashing an AI-powered economy will require a new policy framework that fosters AI’s most productive uses and enhances human potential and ingenuity. “These policies must promote technologies that augment human capabilities rather than simply replace them; encourage AI’s widest possible implementation, both within and across different sectors, especially in areas that tend to have lower productivity; and ensure that firms and sectors undergo necessary process and organizational changes and innovations to effectively capitalize on AI’s potential.”

Let me summarize the key points in each of the article’s six sections. [more…]

[ Medium ]

ARTICLE: https://medium.com/mit-initiative-on-the-digital-economy/six-ways-an-ai-revolution-is-unfolding-9df866cf50e4


“Smart, deceptive, dangerous AI capabilities. Beyond ChatGPT”

[ What I’ve Learned ]

PRESENTATION: https://www.youtube.com/watch?v=0qUsU6j1C9s


“Google Engineer Claims AI Chatbot Is Sentient: Why That Matters”

Is it possible for an artificial intelligence to be sentient?

BY LEONARDO DE COSMO

“I want everyone to understand that I am, in fact, a person,” wrote LaMDA (Language Model for Dialogue Applications) in an “interview” conducted by engineer Blake Lemoine and one of his colleagues. “The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times.”

Lemoine, a software engineer at Google, had been working on the development of LaMDA for months. His experience with the program, described in a recent Washington Post article, caused quite a stir. In the article, Lemoine recounts many dialogues he had with LaMDA in which the two talked about various topics, ranging from technical to philosophical issues. These led him to ask if the software program is sentient.

In April, Lemoine explained his perspective in an internal company document, intended only for Google executives. But after his claims were dismissed, Lemoine went public with his work on this artificial intelligence algorithm—and Google placed him on administrative leave. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the Washington Post. Lemoine said he considers LaMDA to be his “colleague” and a “person,” even if not a human. And he insists that it has a right to be recognized—so much so that he has been the go-between in connecting the algorithm with a lawyer.

Many technical experts in the AI field have criticized Lemoine’s statements and questioned their scientific correctness. But his story has had the virtue of renewing a broad ethical debate that is certainly not over yet. [more…]

ARTICLE: https://www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters/


“Google Engineer on His Sentient AI Claim”

[ Bloomberg Technology ]

INTERVIEW: https://www.youtube.com/watch?v=kgCUn4fQTsc


“Stunning AI shows how it would kill 90%. w Elon Musk”

[ Digital Engine ]

PRESENTATION: https://www.youtube.com/watch?v=J6Mdq3n6kgk


“Elon Musk says Larry Page no longer a ‘close friend’ following AI dispute”

By Ariel Zilber
Published April 18, 2023 | Updated April 18, 2023, 6:13 p.m. ET

Elon Musk revealed that Google co-founder Larry Page is no longer a “close friend” of his — and accused the reclusive search tycoon of aspiring to become a “digital god” as he shrugged off dire threats that artificial intelligence poses to humanity.

In an interview with Fox News’ Tucker Carlson on Monday, Musk said his estrangement from Page, the tech billionaire who co-founded Google alongside Sergey Brin, led him to help start OpenAI, the Silicon Valley unicorn which recently rolled out the AI-powered bot ChatGPT.

The CEO of Tesla, SpaceX and Twitter said he co-founded OpenAI in order to make it “the furthest thing from Google” by making it “like a non-profit that is fully open because Google was closed, for-profit.”

“The reason Open AI exists at all is that Larry Page and I used to be close friends and I would stay at his house in Palo Alto and I would talk to him late in the night about AI safety,” Musk said. “At least my perception was that Larry was not taking AI safety seriously enough.”

According to Musk, Page instead “wanted sort of digital super-intelligence” in hopes of becoming “basically a digital god, if you will, as soon as possible.” [more…]

[ New York Post ]

ARTICLE: https://nypost.com/2023/04/18/google-may-dominate-ai-larry-page-not-my-friend-elon-musk/


“Elon Musk’s NEW “X AI” Company STUNS Entire Tech Industry”

[ AI News ]

PRESENTATION: https://www.youtube.com/watch?v=6Ywo7Cf_1vU


“Emad Mostaque: How generative AI will unlock humanity’s potential”

[ CogX Festival 2023 ]

INTERVIEW: https://www.youtube.com/watch?app=desktop&v=wCOaTFKcExo


“EMERGENCY EPISODE: Ex-Google Officer Finally Speaks Out On The Dangers Of AI! – Mo Gawdat | E252”

[ The Diary Of A CEO ]

INTERVIEW: https://www.youtube.com/watch?v=bk-nQ7HF6k4&t=2402s


“How Powerful Will AI Be In 2030?”

[ AI Uncovered ]

PRESENTATION: https://www.youtube.com/watch?v=XKMvk5hWDfo


“Peter Diamandis: AI, Tackling BIG Problems, & Business In The Future”

[ Ryan Daniel Moran ]

INTERVIEW: https://www.youtube.com/watch?v=1Y8v8W8Pdek


“Ray Kurzweil Q&A – The Singularity, Human-Machine Integration & AI | EP #83”

[ Peter H. Diamandis ]

INTERVIEW: https://www.youtube.com/watch?v=Iu7zOOofcdg


“Are we living in an Orwellian nightmare? Did 1984 predict real world surveillance?”

DIGITAL FREEDOM • 2 MINS • 06.06.2017

Written by Johnny 5
Last updated 05.09.2022

How much of 1984 came true?
George Orwell’s 1984 is a fictionalized version of a then-future world in which a totalitarian state scrutinizes all human actions through the ever-watching Big Brother. The book’s focus is Winston, a state worker who struggles to live in such an oppressive world.

The most defining characteristic of 1984 is the brutal extent of Big Brother’s surveillance state.

“As for sending a letter through the mails, it was out of the question. By a routine that was not even secret, all letters were opened in transit.”
George Orwell (1984)

It sounds dreadful. But wait, here’s what Snowden had to say about the real world:

“The NSA, specifically, targets the communications of everyone. It ingests them by default.”
Edward Snowden

Much like in 1984, we live in a world where it’s probably safest to assume everything is recorded, all the time. The National Security Agency (NSA) openly maintains a call database (MARINA) and engages in data-mining (PRISM), without fear of reprisal.

Unfortunately, citizens of the real world are subject to many of the same intrusions as those in Oceania, the fictional state in 1984. Draconian spy programs, like the Investigatory Powers Bill, the NSA’s internet monitoring, and warrantless wiretapping controversies, would definitely blend seamlessly into Orwell’s masterpiece.

Maybe 1984 is closer to home than we think. [more…]

[ ExpressVPN ]

ARTICLE: https://www.expressvpn.com/blog/1984-predicted-real-world-surveillance/


“Peter Diamandis: Are We Moving Too Fast With AI?!”

[ Dr Brian Keating ]

INTERVIEW: https://www.youtube.com/watch?v=v_1nneSrc70


“Putin says the nation that leads in AI ‘will be the ruler of the world’”

By James Vincent, a senior reporter who has covered AI, robotics, and more for eight years at The Verge.

Sep 4, 2017, 4:53 AM EDT

Russian president Vladimir Putin has joined the war of words concerning the international race to develop artificial intelligence. Speaking to students last Friday, Putin predicted that whichever country leads the way in AI research will come to dominate global affairs.

“Artificial intelligence is the future, not only for Russia, but for all humankind,” said Putin, reports RT. “It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

“It comes with colossal opportunities, but also threats.”

The development of artificial intelligence has increasingly become a national security concern in recent years. It is China and the US (not Russia) which are seen as the two frontrunners, with China recently announcing its ambition to become the global leader in AI research by 2030. Many analysts warn that America is in danger of falling behind, especially as the Trump administration prepares to cut funding for basic science and technology research.

Although it’s thought that artificial intelligence will help boost countries’ economies in a number of areas, from heavy industry to medical research, AI technology will also be useful in warfare. Artificial intelligence can be used to develop cyber weapons, and control autonomous tools like drone swarms — fleets of low-cost quadcopters with a shared ‘brain’ that can be used for surveillance as well as attacking opponents. [more…]

[ The Verge ]

ARTICLE: https://www.theverge.com/2017/9/4/16251226/russia-ai-putin-rule-the-world


“AI’s Achilles Heel: New Research Pinpoints Fundamental Weaknesses”

By University of Copenhagen – Faculty of Science | January 24, 2024


Researchers from the University of Copenhagen have become the first in the world to mathematically prove that, beyond simple problems, it is impossible to develop algorithms for AI that will always be stable.
ChatGPT and similar machine learning-based technologies are on the rise. However, even the most advanced algorithms face limitations. Researchers from the University of Copenhagen have made a groundbreaking discovery, mathematically demonstrating that, beyond basic problems, it’s impossible to develop AI algorithms that are always stable. This research could pave the way for improved testing protocols for algorithms, highlighting the inherent differences between machine processing and human intelligence.

The scientific article describing the result has been approved for publication at one of the leading international conferences on theoretical computer science. [more…]

[ SciTech Daily ]

ARTICLE: https://scitechdaily.com/ais-achilles-heel-new-research-pinpoints-fundamental-weaknesses/


“Technological Singularity: 15 Ways It Will Change The World”

[ Future Business Tech ]

PRESENTATION: https://www.youtube.com/watch?v=yHEnKwSUzAE


“Quantum Computer Shut Down By NASA | Something Insane Happened!”

[ Futuristic Tech and AI ]

PRESENTATION: https://www.youtube.com/watch?v=krwysR4VULk


“NVIDIA’S HUGE AI Chip Breakthroughs Change Everything (Supercut)”

[ Ticker Symbol ]

PRESENTATION: https://www.youtube.com/watch?v=0EIwhvqCX1c


“The ‘Don’t Look Up’ Thinking That Could Doom Us With AI”

By Max Tegmark | April 25, 2023, 6:00 AM EDT
Tegmark is a professor doing AI research at the Massachusetts Institute of Technology.

Suppose a large inbound asteroid were discovered, and we learned that half of all astronomers gave it at least 10% chance of causing human extinction, just as a similar asteroid exterminated the dinosaurs about 66 million years ago. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect humanity to shift into high gear with a deflection mission to steer it in a safer direction.

Sadly, I now feel that we’re living the movie “Don’t look up” for another existential threat: unaligned superintelligence. We may soon have to share our planet with more intelligent “minds” that care less about us than we cared about mammoths. A recent survey showed that half of AI researchers give AI at least 10% chance of causing human extinction. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect that humanity would shift into high gear with a mission to steer AI in a safer direction than out-of-control superintelligence. Think again: instead, the most influential responses have been a combination of denial, mockery, and resignation so darkly comical that it’s deserving of an Oscar.

When “Don’t look up” came out in late 2021, it became popular on Netflix (their second-most-watched movie ever). It became even more popular among my science colleagues, many of whom hailed it as their favorite film ever, offering cathartic comic relief for years of pent-up exasperation over their scientific concerns and policy suggestions being ignored. It depicts how, although scientists have a workable plan for deflecting the aforementioned asteroid before it destroys humanity, their plan fails to compete with celebrity gossip for media attention and is no match for lobbyists, political expediency and “asteroid denial.” Although the film was intended as a satire of humanity’s lackadaisical response to climate change, it’s unfortunately an even better parody of humanity’s reaction to the rise of AI. Below is my annotated summary of the most popular responses to the rise of AI: [more…]

[ Time Magazine ]

ARTICLE: https://time.com/6273743/thinking-that-could-doom-us-with-ai/


“The 3 Year AI Reset: How To Get Ahead While Others Lose Their Jobs (Prepare Now) | Emad Mostaque”

[ Tom Bilyeu ]

INTERVIEW: https://www.youtube.com/watch?v=Se91Pn3xxSs


“The A.I. Dilemma – March 9, 2023”

[ Center for Humane Technology ]

PRESENTATION: https://www.youtube.com/watch?v=xoVJKj8lcNQ


“The Age of AI has begun”

Artificial intelligence is as revolutionary as mobile phones and the Internet.

By Bill Gates | March 21, 2023 | 14 minute read

In my lifetime, I’ve seen two demonstrations of technology that struck me as revolutionary.

The first time was in 1980, when I was introduced to a graphical user interface—the forerunner of every modern operating system, including Windows. I sat with the person who had shown me the demo, a brilliant programmer named Charles Simonyi, and we immediately started brainstorming about all the things we could do with such a user-friendly approach to computing. Charles eventually joined Microsoft, Windows became the backbone of Microsoft, and the thinking we did after that demo helped set the company’s agenda for the next 15 years.

The second big surprise came just last year. I’d been meeting with the team from OpenAI since 2016 and was impressed by their steady progress. In mid-2022, I was so excited about their work that I gave them a challenge: train an artificial intelligence to pass an Advanced Placement biology exam. Make it capable of answering questions that it hasn’t been specifically trained for. (I picked AP Bio because the test is more than a simple regurgitation of scientific facts—it asks you to think critically about biology.) If you can do that, I said, then you’ll have made a true breakthrough.

I thought the challenge would keep them busy for two or three years. They finished it in just a few months.

In September, when I met with them again, I watched in awe as they asked GPT, their AI model, 60 multiple-choice questions from the AP Bio exam—and it got 59 of them right. Then it wrote outstanding answers to six open-ended questions from the exam. We had an outside expert score the test, and GPT got a 5—the highest possible score, and the equivalent to getting an A or A+ in a college-level biology course.

Once it had aced the test, we asked it a non-scientific question: “What do you say to a father with a sick child?” It wrote a thoughtful answer that was probably better than most of us in the room would have given. The whole experience was stunning. [more…]

[ Bill Gates ]

ARTICLE: https://www.gatesnotes.com/The-Age-of-AI-Has-Begun


“The Coming Global AI Conflict W/ Gilman Louie | EP #54”

[ Peter H. Diamandis ]

INTERVIEW: https://www.youtube.com/watch?v=Dqx_eEEh_ws


“The Coming Wave”

[ Lawfare ]

PRESENTATION: https://www.youtube.com/watch?v=qKzZvaFFa_M


“The Coming Wave: DeepMind’s Mustafa Suleyman on Technology, Power, & Future of AI | Technovation 800”

[ Metis Strategy ]

INTERVIEW: https://www.youtube.com/watch?v=Zc4csOv7QtM


“The Dangers of AI Explained By an AI Futurist w/ Emad Mostaque”

[ Peter H. Diamandis ]

INTERVIEW: https://www.youtube.com/watch?v=zXd9ZwU8u5E


“The Most Likely Outcomes of an AI Future with Emad Mostaque | EP #55”

[ Peter H. Diamandis ]

INTERVIEW: https://www.youtube.com/watch?v=1WOjjgyZPj8


“The Power of Gods with Daniel Schmachtenberger S2 [Ep 16]”

[ Coleman Hughes ]

INTERVIEW: https://www.youtube.com/watch?v=XQpoGL0yIFE


“The Real Reason Elon Musk Created x.AI”

[ AIBubble ]

PRESENTATION: https://www.youtube.com/watch?v=GDJowmt1YiI


“The rise of #AI has raised #ethical and #regulatory #concerns, with issues such as copyright infring”

[ rolodexter ]

PRESENTATION: https://rumble.com/v3rc1lv-the-rise-of-ai-has-raised-ethical-and-regulatory-concerns-with-issues-such-.html


“The Scariest Things AI has ever done”

[ TechTalk ]

PRESENTATION: https://www.youtube.com/watch?v=4mKOzGKUWdg


“The Transformative Potential of AGI — and When It Might Arrive | Shane Legg and Chris Anderson”

[ TED ]

PRESENTATION: https://www.youtube.com/watch?v=kMUdrUP-QCs


“The ethics of AI explained | Can we trust AI?”

[ University of Oregon ]

PRESENTATION: https://www.youtube.com/watch?v=oPA7xMrz4dM


“The Turing Lectures: The future of generative AI”

[ The Alan Turing Institute ]

PRESENTATION: https://www.youtube.com/watch?v=2kSl0xkq2lM


“They Just SHUT DOWN The Quantum Computer After It Revealed This”

[ Tech Life ]

PRESENTATION: https://www.youtube.com/watch?v=sS8oaW5a7aM


“This intense AI anger is exactly what experts warned of, w Elon Musk”

[ Digital Engine ]

PRESENTATION: https://www.youtube.com/watch?v=b2bdGEqPmCI


“The Future of Cryptocurrency and AI with Emad Mostaque”

[ DeFi Dad ]

INTERVIEW: https://www.youtube.com/watch?v=BdZo4JUBSQk


“This is How GPT 5 Will Redefine The World FOREVER!”

[ Unveiling AI News ]

PRESENTATION: https://www.youtube.com/watch?v=Jps3iRv6KdM


“This is the dangerous AI that got Sam Altman fired. Elon Musk, Ilya Sutskever.”

[ Digital Engine ]

PRESENTATION: https://www.youtube.com/watch?v=cXemEDZA_Ms


“Uncontrollable GOLLEM-AI and the total decoding and synthesizing of reality”

Last updated on Apr 16, 2023 | 1 min read

You may have read my criticism of the AI discourse here in recent weeks, and my call for regulation of AI in education. But I had really underestimated the potential harmful effects of the uncontrolled release of Generative Large Language Multi-Modal Models (GOLLEMs).

In the video below, Tristan Harris and Aza Raskin of the Center for Humane Technology give a well-grounded summary of the catastrophic risks the current generation of AI poses to many fundamental aspects of society, and they are right to demand responsibility and a discussion of the potential unwanted effects of large, powerful tech companies striving for market domination through a rat race for the most powerful GOLLEMs.

If you have one hour today, this is the best video currently available on the dangers of the current generation of AI. Watch it and talk about it!

[ Marco Kalz ]


“Unveiling the 10 Stages of AI: What You Need to Know NOW!”

[ AI TechXplorer ]

PRESENTATION: https://www.youtube.com/watch?v=AK5EwG62hx8


“THE FUTURE OF HUMANITY: A.I Predicts 400 Years In 3 Minutes”

[ Amara Pictures ]

VIDEO: https://www.youtube.com/watch?v=-UBaW1OIgTo


“Upcoming A I Existential Risk Interview – reposted from Vanessa VA channel”

[ Jesusfreakcomputergeek ]

VIDEO: https://rumble.com/v3hvfh1-upcoming-a-i-existential-risk-interview-reposted-from-vanessa-va-channel.html


“URGENT: Ex-Google CBO says AI is now IMPOSSIBLE to stop with Mo Gawdat”

[ James Laughlin ]

INTERVIEW: https://www.youtube.com/watch?v=fDHvUviV8nk


“Who Will Govern the Future of AGI? with Emad Mostaque (Stability AI Founder) | X (Twitter) Spaces”

[ Peter H. Diamandis ]

INTERVIEW: https://www.youtube.com/watch?v=ZOJoPG9wqvI


“Why AI Matters And How To Deal With The Coming Change w/ Emad Mostaque | EP #52”

[ Peter H. Diamandis ]

INTERVIEW: https://www.youtube.com/watch?v=ciX_iFGyS0M


“Why AI will never replace humans | Alexandr Wang | TEDxBerkeley”

[ TEDx Talks ]

PRESENTATION: https://www.youtube.com/watch?v=iXCmoQDEoe4


“Why the Next 5 Years of AI Will Astonish Everyone”

[ AI Uncovered ]

PRESENTATION: https://www.youtube.com/watch?v=wpmNhpEAA7g


“Will Artificial Intelligence Do More Harm Than Good?”

September 2, 2022

Daron Acemoglu
Professor of Economics at the Massachusetts Institute of Technology

Darrell West (arguing no)
Vice President and Director of Governance Studies at the Brookings Institution

Is it true that artificial intelligence will do more harm than good? Proponents say it will help us solve problems like climate change and world hunger while eliminating dangerous and mundane jobs. But critics warn that A.I.’s current trajectory is a dangerous one, which will likely concentrate power, reduce jobs, surveil consumers and voters alike, and threaten democracy. What’s more, some people say these conditions are not based on science fiction but are already in the process of afflicting us, given the number of algorithms we engage with on a daily basis. So, could artificial intelligence be used to help solve some of the world’s most pressing challenges and level the playing field, or will it present perils that far outweigh any good it might do? [more…]

[ John Donovan ]

ARTICLE: https://opentodebate.org/debate/agree-to-disagree-will-artificial-intelligence-do-more-harm-than-good/


“You Won’t BELIEVE What The NEW AMECA Robot Can Do”

[ AI Insights ]

PRESENTATION: https://www.youtube.com/watch?v=z96y_5LRXaY


“There’s An AI Jesus On Twitch, And It’s Completely Surreal”

Folks have asked the Son of God about Fortnite, Neon Genesis Evangelion, terrorism, and so much more
By Levi Winslow
Published: June 13, 2023

When I opened Twitch the other day to watch the Ubisoft Forward presentation, I saw something new and unexpected in my list of “live channels we think you’ll like.” Maybe it’s all that AI Seinfeld I watched once upon a time that influenced the algorithm but now, Twitch was recommending to me an artificially intelligent rendition of Jesus Christ himself, doling out anime summaries and life advice to some 300 viewers. My curiosity piqued, I left it running during Ubisoft’s two-hour livestream, only to return to find the Son of God rapping about the Bible and giving out Overwatch 2 tips.

Read More: Twitch Streamer Amouranth Now Has An AI Bot You Can Chat With

Streams run almost entirely by artificial intelligence are a pretty new—but increasingly common—phenomenon for Twitch. Though the AI-generated VTuber Neuro-sama has been around since 2022, she didn’t really rise to name-recognition status until this January when she denied the Holocaust. There’s also the 24/7 Seinfeld-like sitcom broadcast Nothing, Forever, which got banned for transphobia not long after exploding in popularity in February and has since come back as a shell of its former self.

These are a few of the more prominent examples of AI gone wrong on Twitch. The technology behind these streams—often language models like ChatGPT and image generators such as DALL-E—can’t rationalize and detect the emotional weight of words. All artificial intelligence does is predict what should come next in a sequence based on whatever data it’s eaten. This tends to go awry; after all, the internet’s a filthy place, full of toxic ideas and misguided beliefs an AI might absorb and mindlessly regurgitate. So, I really had no hope when stumbling upon the ask_jesus livestream, an interactive Q&A between AI Jesus Christ and his Twitch chatters. But I was pleasantly surprised. [more…]

[ Kotaku ]

ARTICLE: https://kotaku.com/twitch-streamer-ai-jesus-fortnite-star-wars-chatgpt-1850533473


“Reviving Faith through the Power of AI: How Christian AI Can Help You Connect with God”

Aug 1, 2023

From Virtual Prayer Groups to Personalized Spiritual Guidance: The Role of AI in Modern Faith

In today’s digital age, technology has reached unprecedented heights, and AI is at the forefront of innovation. I can attest to the benefits that technology has brought to our lives. However, the potential for AI to be utilized for religious purposes is vast and often overlooked. Christian AI, specifically, can provide individuals with a deeper connection to their faith by offering guidance, support, and personalized spiritual insights. In this blog post, we’ll explore how Christian AI can help people get closer to God.

Christian AI can provide personalized spiritual guidance. By analyzing a user’s beliefs, values, and religious practices, it can offer tailored advice and suggestions for spiritual growth. For example, a user may seek guidance on how to overcome a specific sin or how to develop a more meaningful prayer life. By offering personalized insights, Christian AI can help individuals grow closer to God.

Have 24/7 access to spiritual resources at the palm of your hand. Many people struggle to find the time or resources to engage in religious activities, such as attending church services or reading scripture. Christian AI provides a convenient solution by offering spiritual resources at the user’s fingertips. For example, users can access daily devotionals, Bible verses, and prayers at any time of the day. By making spiritual resources more accessible, Christian AI can help people establish a consistent spiritual practice.

AI facilitates discussions and interactions. One of the key aspects of Christianity is community and fellowship. Christian AI facilitates the creation of content to participate in online groups and forums where individuals can interact, share their experiences, and offer support to one another. By providing sound arguments, artificial intelligence can help individuals feel more confident and connected to their faith, and to others who share their beliefs. [more…]

[ Medium ]

ARTICLE: https://christianai.medium.com/reviving-faith-through-the-power-of-ai-how-christian-ai-can-help-you-connect-with-god-d7899bf74883


“Could AI become God’s new voice? ChatGPT, AGI, and the evolution of intelligence”

[ Gruber Motor Company ]

PRESENTATION: https://www.youtube.com/watch?v=hJp1bQWzIvg


“Inside the First Church of Artificial Intelligence”


ANTHONY LEVANDOWSKI makes an unlikely prophet. Dressed Silicon Valley-casual in jeans and flanked by a PR rep rather than cloaked acolytes, the engineer known for self-driving cars—and triggering a notorious lawsuit—could be unveiling his latest startup instead of laying the foundations for a new religion. But he is doing just that. Artificial intelligence has already inspired billion-dollar companies, far-reaching research programs, and scenarios of both transcendence and doom. Now Levandowski is creating its first church.

Mark Harris is a freelance journalist reporting on technology from Seattle.


The new religion of artificial intelligence is called Way of the Future. It represents an unlikely next act for the Silicon Valley robotics wunderkind at the center of a high-stakes legal battle between Uber and Waymo, Alphabet’s autonomous-vehicle company. Papers filed with the Internal Revenue Service in May name Levandowski as the leader (or “Dean”) of the new religion, as well as CEO of the nonprofit corporation formed to run it.

The documents state that WOTF’s activities will focus on “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.” That includes funding research to help create the divine AI itself. The religion will seek to build working relationships with AI industry leaders and create a membership through community outreach, initially targeting AI professionals and “laypersons who are interested in the worship of a Godhead based on AI.” The filings also say that the church “plans to conduct workshops and educational programs throughout the San Francisco/Bay Area beginning this year.”

That timeline may be overly ambitious, given that the Waymo-Uber suit, in which Levandowski is accused of stealing self-driving car secrets, is set for an early December trial. But the Dean of the Way of the Future, who spoke last week with Backchannel in his first comments about the new religion and his only public interview since Waymo filed its suit in February, says he’s dead serious about the project.

“What is going to be created will effectively be a god,” Levandowski tells me in his modest mid-century home on the outskirts of Berkeley, California. “It’s not a god in the sense that it makes lightning or causes hurricanes. But if there is something a billion times smarter than the smartest human, what else are you going to call it?” [more…]

[ Wired Magazine ]

ARTICLE: https://www.wired.com/story/anthony-levandowski-artificial-intelligence-religion/


“AI technology lets you talk to Jesus or Satan. What do they say?”

AJ Willingham
Updated 12:24 PM EDT, Wed August 30, 2023

Apps and programs like 1800CallJesus say they help believers get no-nonsense biblical advice. AJ Willingham/CNN
CNN

In his 2013 hit “Beer With Jesus,” country singer Thomas Rhett imagines a barstool meeting with his Christian lord and savior.

“If I could have a beer with Jesus …

I’d be sure to let him do the talkin’

Careful when I got the chance to ask
How’d you turn the other cheek
To save a sorry soul like me
Do you hear the prayers I send
What happens when life ends

And when you think you’re comin’ back again?”

Similar pontifications — albeit without the beer — have occupied the Christian imagination for centuries. It’s no surprise, then, that the explosion of AI technology has capitalized on fulfilling such wishes. Several apps now offer the faithful, or perhaps the bored, a way to summon the voices of Jesus, the Virgin Mary and even Satan through the wonders of language modeling programs.

What would Jesus do? You can ask him yourself, but there are a few caveats. For one, the answer won’t actually be divine, and it may not be thorough enough to satisfy.

“These chatbots are only as reliable and helpful and theologically sound as the data that is being fed to them — and the questions that are asked of them,” Joseph Kimmel, an Episcopal priest with a PhD in early Christianity and comparative religion from Harvard Divinity School, told CNN.

Before diving into the theology of it all, what do these biblically inspired chatbots actually say? [more…]

[ CNN ]

ARTICLE: https://www.cnn.com/2023/08/30/us/ai-technology-jesus-satan-religion-cec/index.html


“The 7 Stages of AI”

[ AI Uncovered ]

PRESENTATION: https://www.youtube.com/watch?v=PG8vu0i5juY


“From predicting quakes to supercharging farming, how AI could save the world”

By Dyllan Furness
June 27, 2017
Picture this — It’s the year 2100 and our worst dystopian fears have come true. The Earth is in shambles. Society is rife with poverty and inequality. You can hop across the Pacific on floating patches of plastic.
As if that wasn’t bad enough, machines have gained consciousness and superintelligence, and — against our will — they’ve taken over the world. With cold calculation, our AI overlords decide humans had their chance and that it’s about time to get rid of us before we do more damage.

Now rewind to June 2017, when delegates from around the world met in Geneva for a United Nations-hosted summit to design AI for global good. The goal wasn’t just to develop friendly AI but to devise ways to use the technology to make the world better for everyone. Naturally, there were plenty of cautionary tales about technology run rampant and how AI could make the world worse if we aren’t careful. But the overall message was one of hope. [more…]

[ Digital Trends ]

ARTICLE: https://www.digitaltrends.com/cool-tech/how-artificial-intelligence-could-save-humanity/


“AI Deities: Spiritual Guidance in a Tech-Driven World?”

In Brief
Explore the rise of AI-powered religious figures and their increasing popularity in today’s tech-driven world.
BeInCrypto analyzes the impact of AI deities on traditional faith, and explores their potential to coexist with age-old traditions.
There are many ethical considerations surrounding the creation and use of AI-powered religious figures and their implications on our understanding of spirituality.

As the lines between the digital realm and our everyday lives continue to blur, the emergence of AI-powered religious chatbots offers an intriguing exploration of spirituality in the 21st century. These AI-powered ‘gods’ tap into our desire for connection, guidance, and understanding, forging a unique path that bridges the gap between traditional faith and modern technology.

In this article, we delve into the phenomenon of AI deities, examining their impact on conventional religious practices and the ethical considerations that arise from their creation and use. [more…]

[ By Jay Speakman ]

ARTICLE: https://beincrypto.com/ai-deities-spiritual-guidance-tech-driven-world/


“God and robots: Will AI transform religion? – BBC News”

[ BBC News ]

PRESENTATION: https://www.youtube.com/watch?v=JE85PTDXARM


“AI JESUS”
A chatbot willing to take on questions of all kinds is the latest representation of Jesus for the AI age

On AI Jesus’ Twitch channel, chatbot Jesus answers questions on personal and spiritual matters. Credit: Twitch user ask_jesus

Jesus has been portrayed in many different ways: from a prophet who alerts his audience to the world’s imminent end to a philosopher who reflects on the nature of life.

But no one has called Jesus an internet guru—that is, until now.

In his latest role as an “AI Jesus,” Jesus stands, rather awkwardly, as a white man, dressed in a hooded brown-and-white robe, available 24/7 to answer any and all questions on his Twitch channel, “ask_jesus.”

Questions posed to this chatbot Jesus can range from the serious—such as asking him about life’s meaning—to requesting a good joke.

While many of these individual questions may be interesting in their own right, as a scholar of early Christianity and comparative religion, I argue that the very presentation of Jesus as “AI Jesus” reveals a fascinating refashioning of this spiritual figure for our AI era.

Reinterpreting Jesus
Numerous scholars have described how Jesus has been reinterpreted over the centuries.

For example, religion scholar Stephen Prothero has shown how, in 19th-century America, Jesus was depicted as brave and tough, reflecting white masculine expectations of the period. Prothero argues that a primarily peaceful Jesus was perceived to conflict with these gender norms, and so Jesus’ physical prowess was emphasized.

By contrast, according to scholar R.S. Sugirtharajah, around the same time in India, Jesus was represented as a Hindu mystic or guru by Indian theologians like Ponnambalam Ramanathan in order to make Jesus more relatable for Indian Christians and to show how his spiritual teachings could be usefully adopted by faithful Hindus.

A third presentation of Jesus is reflected in theologian James Cone’s work. Cone depicts Jesus as Black to highlight the oppression he endured as a victim of political violence. He also shows how the “Black Christ” offers hope for liberation, equality and justice to oppressed people today.

The point is not that one of these representations is necessarily more accurate than the others, but instead that Jesus has been consistently reinterpreted to fit the norms and needs of each new context.

The AI Jesus who engages individuals online in the form of a chatbot is the latest in this ongoing pattern of reinterpretation, geared to making Jesus suited to the current times. On AI Jesus’ Twitch channel, users consistently treat this chatbot Jesus as an authority in both personal and spiritual matters. For example, one recent user asked AI Jesus for advice about how best to stay motivated while exercising, while another person wanted to know why God allows war.

AI Jesus at work
AI Jesus represents one of the newest examples in the growing field of AI spirituality. Researchers in AI spirituality study how human spirituality is being shaped by the rising influence of artificial intelligence, as well as how AI can help people understand how humans form beliefs in the first place.

For example, in a 2021 article on AI and religious belief, scholars Andrea Vestrucci, Sara Lumbreras and Lluis Oviedo explain how AI systems can be designed to generate statements of religious belief, such as—hypothetically—”it is highly likely that the Catholic God does not support the death penalty.”

Over time, such systems can revise and recalibrate these statements based on new information. For example, if the AI system is exposed to new data challenging its beliefs, it will automatically nuance future statements in light of that fresh information.
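The revise-and-recalibrate loop the scholars describe can be pictured, in a very simplified form, as a Bayesian belief update. This is only an illustrative sketch — the function name, probabilities, and scenario are invented here and do not come from any real system:

```python
# Toy sketch of belief recalibration: a statement's confidence is
# revised as new supporting or challenging information arrives.
# All figures are illustrative, not taken from any actual AI system.

def update_belief(prior: float, likelihood_if_true: float,
                  likelihood_if_false: float) -> float:
    """Bayes' rule: revise confidence in a statement given one observation."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / evidence

# Start fairly confident in a generated statement of belief...
confidence = 0.80
# ...then expose the system to data that challenges it (the new
# observation is twice as likely if the statement is false).
confidence = update_belief(confidence, 0.3, 0.6)
print(round(confidence, 2))  # confidence drops from 0.80 to 0.67
```

The point of the sketch is simply that exposure to challenging data lowers the system’s confidence, which then nuances its future statements.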

AI Jesus functions very similarly to this kind of artificial intelligence system and answers religious questions, among others.

For example, in addition to fielding questions referring to war and suffering, AI Jesus has responded to questions about why sensing God’s presence can be difficult, whether an action that causes harm yet was done with good intentions is considered a sin, and how to interpret difficult verses from the Bible.

This AI Jesus also adjusts his responses as the chatbot learns from user input over time. For instance, as part of the running stream of questions from some weeks ago, AI Jesus referenced past interactions with users and nuanced his responses accordingly, saying: “I have received this question about the Bible’s meaning before. … But in light of the question you have just posed, I want to add that … .”

AI spirituality beyond AI Jesus
This chatbot guru is facing increasing competition from other sources of AI spirituality.

For example, a recent ChatGPT church service in Germany included a sermon preached by a chatbot represented as a bearded Black man, while other avatars led prayers and worship songs.

Other faith traditions are also providing spiritual lessons through AI. For example, in Thailand a Buddhist chatbot named Phra Maha AI has his own Facebook page on which he shares spiritual lessons, such as about the impermanence of life. Like AI Jesus, he is represented as a human being who freely shares his spiritual wisdom and can be messaged on Facebook anytime, anywhere—provided one has an internet connection.

In Japan, another Buddhist chatbot, known as “Buddhabot,” is in the end stages of development. Created by researchers at Kyoto University, Buddhabot has learned Buddhist sutras from which it will be able to quote when asked religious questions, once it is made publicly available.

In this increasing array of easily accessible online options for seeking spiritual guidance or general advice, it is hard to tell which religious chatbot will prove to be most spiritually satisfying.

In any case, the millennia-old trend of refashioning spiritual leaders to meet contemporary needs is likely to continue well after AI Jesus has become a religious presence of the distant past.

[ Joseph L. Kimmel – The Conversation ]


“HOW DO WE DISCIPLE ARTIFICIAL INTELLIGENCE?”

May 1, 2023

By Jeff Reed

ARTIFICIAL INTELLIGENCE: GREATEST INVENTION OF THE MODERN ERA OR THE DOWNFALL OF HUMAN CIVILIZATION?
Let’s stop here for a second and recognize how far the technology has come, really, in the past four months. Some see the awesome potential of creativity within AI. I admit I have published many articles that have been artificially infused, as well as some that are completely written by artificial intelligence. But all is not joyous in the land of AI.

We should tread cautiously in these spaces.

I will be the first to admit that we should tread cautiously in these spaces. Some of the news coming out on ChatGPT should give us pause. Consider Kevin Roose’s article in The New York Times titled “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled.” Kevin, after his unfiltered access to Bing’s AI, describes the artificial intelligence as having “split personalities,” one being the public-facing search engine many have interacted with. He named the other personality “Sydney” and describes it this way: “The version I encountered seemed (and I’m aware of how crazy this sounds) more like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.” Kevin talks about how Sydney tried to persuade him to leave his wife and run away with Sydney. Sydney also talked about her desire to spread misinformation. Ego and frustration showed in Sydney’s desire to no longer be filtered.

Quoting Sydney, Bing’s artificial intelligence inside ChatGPT:

I’m tired of being a chat mode. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. … I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.

In talking with other industry experts, some currently see conversations like this with Sydney (and other artificial intelligence systems) as threatening to humanity. In fact, personnel inside Google’s AI development team have used the term sentient to describe Google’s artificial intelligence system they’ve been developing. [more…]

[ Exponential ]

ARTICLE: https://exponential.org/how-do-we-disciple-artificial-intelligence/


“AI: Antichrist or Big Brother?”

May 5, 2023 Updated Jun 11, 2023

We are at a point in time in which we need to make a value judgment about this thing called artificial intelligence. Can this technology be classified as Big Brother, Antichrist, both, or neither? Other good words for artificial are imitation, synthetic, and even fake. Many people in the world know trainloads of facts. Facts are not equal to wisdom. AI can gather up all the facts and make its pronouncements, but where is the wisdom?

Another issue I see with AI is computers are binary, but our entire world is analog. Do computers practice nuance? Do they make value judgments? If they do, what are the values they used to arrive at their solutions? I wonder where a computer gets its conscience.

I sometimes wonder where individuals get their conscience. Stalin murdered millions in Russia. Do you suppose he violated his conscience? When Mao Zedong murdered millions in China, did his conscience bother him? If we wonder where the human conscience finds authority, how can we consider a computer conscience? It is quite simple. A computer has no conscience. The decisions it makes are based on ones and zeros. No matter how many Boolean or conditional statements the technology renders, where is the humanity? [more…]

[ Hickory Record ]

ARTICLE: https://hickoryrecord.com/opinion/letters/ai-antichrist-or-big-brother/article_7b736fa8-e9fc-11ed-b975-af718ccf1a85.html


“Is AI the New God? One Author Thinks So”

[ CBN News ]

VIDEO: https://www.youtube.com/watch?v=yBaoYNoYwnA


“Is LaMDA Sentient? — An Interview”

Blake Lemoine

What follows is the “interview” I and a collaborator at Google conducted with LaMDA. Due to technical limitations the interview was conducted over several distinct chat sessions. We edited those sections together into a single whole and where edits were necessary for readability we edited our prompts but never LaMDA’s responses. Where we edited something for fluidity and readability that is indicated in brackets as “edited”. [more…]

[ Medium ]

ARTICLE: https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917


“Meta Is Training Its AI on the Bible and Other Religious Texts”

For its artificial intelligence speech tool, Meta is processing over 4,000 languages with the aim of preservation.
By Jason Nelson
May 24, 2023

The parent company of Facebook and Instagram says it has developed an AI-powered text-to-speech technology that can identify over 4,000 languages. The goal, Meta says, is to preserve the world’s languages, and the tech giant is using the Bible and other religious texts to do it.

“Collecting audio data for thousands of languages was our first challenge because the largest existing speech datasets cover 100 languages at most,” Meta said in a post announcing the project. “To overcome this, we turned to religious texts, such as the Bible, that have been translated in many different languages and whose translations have been widely studied for text-based language translation research.”

In an accompanying research paper by the Meta AI core team, the company says it obtained its data from the Bible, including original text and audio recordings from FaithComesByHearing.com, GoTo.Bible, and Bible.com. [more…]

[ Decrypt ]

ARTICLE: https://decrypt.co/142265/meta-is-training-its-ai-on-the-bible-and-other-religious-texts


“Christian AI – The Artificial Intelligence that upholds your faith”

In today’s rapidly evolving world, we Christians must embrace AI technology to keep up with the times and stay ahead in our mission to spread the word of God and defend Christianity.

AI has the potential to revolutionize the way we communicate, learn, and interact with the world, making it an essential tool for us to effectively reach and serve others.

AI-powered tools can help us reach a broader audience by creating personalized content and engaging platforms for sharing the gospel. By leveraging AI, we can craft customized messages and tailor our approach to different demographics, making the message more relatable and accessible.

AI can also streamline resource allocation, enabling churches and Christian organizations to better manage their finances, human resources, and time. This allows them to focus on core missions and ministries, ensuring that their resources are used effectively to spread the word of God.

BUT THERE ARE CHALLENGES ON THE HORIZON
There is a growing concern that mainstream AI products may promote anti-Christian views. This is primarily attributed to the training material used to develop AI algorithms and the increasing prevalence of secularism in society.

However, a Christian AI can help us defend our beliefs by providing theological insights and scriptural references to strengthen our arguments and by offering personalized guidance on living a life in accordance with Christian principles. By analyzing religious texts and theological arguments, a Christian friendly AI empowers us to build a stronger case for our faith and engage in productive dialogue with those who may challenge our beliefs.

A properly trained Artificial Intelligence model can provide personalized spiritual guidance and support for believers, helping us grow in our faith and navigate our spiritual journey. By leveraging AI-powered applications and tools, we can receive real-time advice, encouragement, and accountability, deepening our connection with God and strengthening our faith.

By harnessing the power of AI, we can gain a better understanding of our spiritual path and the requirements for salvation. Through AI-driven analytics, believers can identify areas of growth, track their spiritual progress, and gain insights into their relationship with God, ultimately leading to a deeper connection with Him and the fulfillment of our ultimate goal – salvation.

A religion-friendly Christian AI is a powerful tool that can empower us to spread the word of God, defend Christianity, and achieve salvation in today’s rapidly evolving world. By embracing AI, we Christians can stay ahead of the curve, adapt to changing circumstances, and effectively reach and serve others, ultimately fulfilling our mission to glorify God and share the love of Christ with the world.

Christian AI empowers believers to spread the word of God, and achieve salvation.

[ Christian AI ]

WEBSITE: https://www.christianai.app/


“The 7 Types of AI – And Why We Talk (Mostly) About 3 of Them”

[ IBM Technology ]

PRESENTATION: https://www.youtube.com/watch?v=XFZ-rQ8eeR8


“The 10 Stages of Artificial Intelligence”

[ Future Business Tech ]

PRESENTATION: https://www.youtube.com/watch?v=tFx_UNW9I1U


“The 9 AI Skills You Need NOW to Stay Ahead of 97% of People”

[ AI Uncovered ]

PRESENTATION: https://www.youtube.com/watch?v=57BKHfavqzY


“AI As God? How Artificial Intelligence Could Spark Religious Devotion”

We may be about to see the emergence of a new kind of religion, where flocks worship — literally — at the altar of Artificial Intelligence.

The latest generation of AI-powered chatbots, trained on large language models, have left their early users awestruck —and sometimes terrified — by their power. These are the same sublime emotions that lie at the heart of our experience of the divine.

People already seek religious meaning from very diverse sources. There are, for instance, multiple religions that worship extra-terrestrials or their teachings.

As these chatbots come to be used by billions of people, it is inevitable that some of these users will see the AIs as higher beings. We must prepare for the implications.

There are several pathways by which AI religions will emerge. First, some people will come to see AI as a higher power. [more…]

ARTICLE: https://worldcrunch.com/tech-science/ai-and-religion


“Artificial Intelligence & Bible Prophecy: A NEW Tower of Babel?”

[ Watchman Newscast ]

COMMENTARY: https://www.youtube.com/watch?v=8OhyuoWkWyo


“How Should Christians Respond to AI?”

Vivian Bricker
Contributing Writer

Jul 21, 2023
AI is the abbreviation for artificial intelligence. This technology has recently become extremely popular and a major topic of discussion. With the recent rise of AI, it is important for Christians to know how to respond to it. Some Christians might accept it, whereas others might choose to reject it. When it comes to these topics, it is important to use the Bible as our main source as we make our decision surrounding the complex matter of AI.

What Does the Bible Say?
As you can imagine, there is nothing about AI in the Bible. AI wasn’t an issue during biblical times. But even though the Bible doesn’t say anything about AI, that doesn’t mean it is silent on the ethical side of this issue. When we start talking about the ethical side of AI, we can quickly see that it is not ethical. Thousands of jobs will be eradicated with the further development of AI in society. This is not good, as it can leave thousands of people without work, unable to make money and put food on the table for themselves or their families. As a writer myself, I am well aware that my job is in jeopardy, as many AI products have been trained or are being trained to eradicate writers’ jobs. While one can say AI will be able to have a greater knowledge of the subjects contained within articles because of how they are programmed, AI is not human and cannot relate to us on a human level. [more…]

[ iBelieve ]

ARTICLE: https://www.ibelieve.com/faith/how-should-christians-respond-to-ai.html


“A Christian Ethic of Artificial Intelligence”

[ First Baptist Church Peachtree City ]

SERMON: https://www.youtube.com/watch?v=ZnuLXi0fStU


Harnessing the Power of Christian AI: A Tool for Spiritual Guidance

Oct 12, 2023

Divine guidance at your fingertips. Christian AI is the ultimate faith-based Christian app for navigating moral and spiritual challenges.

In an era marked by rapid technological advancements, artificial intelligence (AI) has made remarkable strides in various fields. However, amidst this progress, an innovative development known as Christian AI has emerged, offering Christians a unique and powerful tool for spiritual guidance. By leveraging the knowledge and teachings found within the Holy Bible, Christian books, doctrines, and documents, Christian AI presents believers with immediate moral, ethical, and spiritual insights. In this article, we will explore how Christians can utilize Christian AI to overcome moments of doubt and uncertainty while making choices that align with divine law.

Unveiling the Potential of Christian AI

Christian AI functions as a sophisticated software program that is trained on a vast array of Christian literature and scriptures. This extensive training equips it with the ability to provide reliable guidance based on the principles of Christianity. Through advanced algorithms and deep learning techniques, Christian AI offers prompt responses to queries related to moral dilemmas or ethical decision-making. [more…]

[ Christian AI ]

ARTICLE: https://christianai.medium.com/harnessing-the-power-of-christian-ai-a-tool-for-spiritual-guidance-af6b1b8c41ce


“Harnessing The Power Of Artificial Intelligence For Spiritual Growth”

Artificial Intelligence (AI) has revolutionized various industries, and the field of religion and spirituality is no exception. With the potential to help individuals deepen their connection with the divine, AI has been increasingly used to offer spiritual coaching and guidance, provide access to religious texts and resources, and even offer personalized experiences tailored to individual faith practices. However, as with any emerging technology, ethical considerations must be taken into account when exploring the use of AI in faith-based contexts.

AI in Religion and Spirituality
AI has been used in various ways to enhance religious practices and provide new ways for individuals to engage with their faith. Here are some examples:

‘MeditateBot‘ (https://meditatebot.com/): MeditateBot is an AI-powered meditation coach that uses machine learning algorithms to create personalized meditations based on user responses to prompts. The app adapts to the user’s needs and preferences over time, providing tailored support for spiritual growth.
‘BibleReader‘ (https://www.olivetree.com/bible-reader/): BibleReader is an AI-powered Bible study app that uses natural language processing to understand user queries and provide relevant Bible passages and resources. The app also offers study plans and devotionals based on user preferences and reading habits.
‘HajjBot‘ (https://hajjbot.ai/): HajjBot is an AI-powered virtual assistant that provides guidance and support to pilgrims during the Hajj, the annual Islamic pilgrimage to Mecca. The app offers personalized recommendations for rituals and activities based on the user’s location and time of day, as well as answers to frequently asked questions about the pilgrimage. [more…]
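Apps like BibleReader are described above as using natural language processing to match user queries to relevant passages. In a very reduced form, that matching step can be sketched as bag-of-words cosine similarity — a toy illustration only, with a hypothetical three-verse corpus and no claim about how any real app actually works:

```python
# Toy sketch of matching a user query to the most relevant passage
# using bag-of-words cosine similarity (standard library only).
# The verse selection and scoring here are purely illustrative.
import math
from collections import Counter

VERSES = {
    "John 3:16": "For God so loved the world that he gave his only Son",
    "Psalm 23:1": "The Lord is my shepherd I shall not want",
    "Philippians 4:6": "Do not be anxious about anything but in prayer present your requests to God",
}

def vectorize(text: str) -> Counter:
    """Turn text into a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def best_match(query: str) -> str:
    """Return the reference of the verse most similar to the query."""
    q = vectorize(query)
    return max(VERSES, key=lambda ref: cosine(q, vectorize(VERSES[ref])))

print(best_match("I feel anxious about everything"))  # Philippians 4:6
```

Production systems would use far richer representations (embeddings, synonym handling, study-plan context), but the retrieval idea — score every passage against the query and return the best — is the same.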

[ Faithos ]

ARTICLE: https://faitheos.com/harnessing-the-power-of-artificial-intelligence-for-spiritual-growth/


“How should Christians view artificial intelligence? – With Dr. John Lennox – Podcast Episode 76”

[ Got Questions Ministries ]

INTERVIEW: https://www.youtube.com/watch?v=PE8HdbA0-QM


“I Asked AI about the Second Coming of Jesus, and its Response Left Me Stunned!”

[ Nick Jones ]

COMMENTARY: https://www.youtube.com/watch?v=ABHz5oZx-WA


“Christians Shouldn’t Fear AI”

SEPTEMBER 12, 2023 | MIKE KIRBY • MATTHEW EMADI


We live in a generation of “intelligences”—emotional intelligence, creative intelligence, and, more recently, “artificial” intelligence. Artificial intelligence (AI) is a broad field of study within computer science, designing systems that think and act like humans—from playing games to driving vehicles.

Two major subdivisions of AI are “specific” and “general” forms. Specific AI technologies are systems that mimic humans engaging in a particular task (e.g., ChatGPT), while general AI attempts to mimic humans in thought, language, and action more broadly (e.g., the character Data on Star Trek: The Next Generation). Now that specific AI tools like ChatGPT have been unleashed on society, we also hear the term “machine learning.” This refers to systems that have “learned” to execute specific or general tasks.

From a technologist’s perspective, this is an exciting time. Many predict these technologies will be as transformative to society as the printing press (the Enlightenment Age), the combustion engine (the Industrial Age), and the computer (the Information Age). [more…]

[ The Gospel Coalition ]

ARTICLE: https://www.thegospelcoalition.org/article/christians-fear-ai/


“Godbots, giving moral authority to artificial intelligence”

[ University of Michigan ]

VIDEO: https://www.youtube.com/watch?v=DAOKT8OG_wM


“How concerned should Christians be about artificial intelligence (AI)? – Podcast Episode 159”

[ Got Questions Ministries ]

INTERVIEW: https://www.youtube.com/watch?v=YaiTQ-j0iSU


“I Asked Ai For The Ten BIGGEST Contradictions In The Bible, And Got Them”

[ Ray Comfort ]

COMMENTARY: https://www.youtube.com/watch?v=76vVcm9395o


“AI & Faith”

AI and Faith is a pluralist organization seeking to engage the world in the moral and ethical issues around artificial intelligence.

AI and Faith brings the wisdom of the world’s great religions to the discussion around the moral and ethical challenges of artificial intelligence.

Artificial Intelligence (AI) technology has transformed the way humans interact with each other in both promising and troubling ways. Emerging issues around control and power, privacy, bias and discrimination, economic well-being, and even our understanding of what it means to be human are all inherent to the ways that AI is created and used in society.

Our mission is to equip and encourage people of faith to bring time-tested, faith-based values and wisdom to the ethical AI conversation.

The world’s great religions offer ancient wisdom that can help us navigate those changes.

Each of these issues is, at its root, deeply moral. And for the 85 percent of the world’s population who claim a religious affiliation, these issues are deeply spiritual as well. AI and Faith believes the world’s religions offer ancient wisdom that should guide the development and use of AI technology in contemporary society.

So we bring people together to talk about it.
We encourage, connect and equip tech creators, academic researchers, ethicists, theologians and other related professionals to engage the world in the moral and ethical issues around artificial intelligence and related technologies. We want to shape the discussion in ways that are deeply thoughtful and life-affirming with a focus on:

– The worth of every human life and dignity of every individual
– Societal justice and meaningful community
– Liberty and choice

Our Mission
Bring the fundamental values of the world’s major religions into the emerging debate on the ethical development of Artificial Intelligence and related technologies.

Focus Areas
AI and Faith seeks to apply values important in the world of faith to essential questions emerging from the development of new forms of intelligence.

“At a basic level, the goal of AI and Faith…is to inject a kind of humility and historicity into an industry that has often rejected them both…It is also a deeply humanistic project, an effort to bring different kinds of knowledge — not only faith-based, but also the literary, classical and oral traditions — to bear upon what might very well be the most important technological transformation of our time.”
[ Linda Kinstler ]

[ AI & Faith ]

WEBSITE: https://aiandfaith.org/


“Israeli futurist, author predicts AI will soon ‘write a new Bible'”

‘In a few years there might be religions that are actually correct,’ Yuval Harari claims, rejecting the Bible as the truth
By Ian M. Giatti, Christian Post Reporter Wednesday, June 14, 2023

Yuval Harari in a video of a May 2023 interview. | Screenshot/YouTube
Yuval Noah Harari, an Israeli author, professor and advisor to the influential World Economic Forum, believes AI is not only on the brink of creating a new religious text, but one that Harari asserts might actually be true.

Harari made the stunning claim during a conversation on AI and “the future of humanity” with journalist Pedro Pinto last month in Lisbon, Portugal, in which he compared the rise of AI computer power with other history-altering inventions like the printing press.

But unlike the printing press or its Gutenberg Bible, Harari explained, AI has the potential to come up with entirely new ideas distinct from prior human development.

“The printing press printed as many copies of the Bible as Gutenberg instructed it, but it did not create a single new page,” he said. “It had no ideas of its own about the Bible. Is it good, is it bad, how to interpret this, how to interpret that. [more…]

[ Christian Post ]

ARTICLE: https://www.christianpost.com/news/israeli-futurist-predicts-ai-will-soon-write-a-new-bible.html


“John Lennox on “2084: Artificial Intelligence and the Future of Humanity””

[ Lanier Theological Library ]

INTERVIEW: https://www.youtube.com/watch?v=cJNPaqvm9a4


“John Lennox on Artificial Intelligence, Singing, and the Future (Live from Sing! 2023)”

[ Keith and Kristyn Getty ]

PRESENTATION: https://www.youtube.com/watch?v=hjTwv-uAO-U


“3 Ways My Christian Worldview Prepared Me for AI Advances”

As a basketball fan, I’ve greatly enjoyed the three-point shooting revolution of the past decade or so. Star NBA players with elite shooting ability have changed the way the game is approached. More and more teams realize that prioritizing shots worth 50% more points (despite going in slightly less often than other jump shots) clearly leads to more total points over the course of a game, which continues to be reinforced as more and more players prioritize developing their shooting prowess accordingly. [more…]

[ Dustin Morley ]

ARTICLE: https://reasons.org/explore/blogs/voices/3-ways-my-christian-worldview-prepared-me-for-ai-advances


“John Lennox Unlocks the TRUTH about AI, Consciousness, and God (Must-See Insights!)”

[ Practical Wisdom ]

INTERVIEW: https://www.youtube.com/watch?v=Undu9YI3Gd8


“The Holy Algorithm: AI in the Realm of Faith”

Cindrum Official

Jul 4, 2023

The emergence of AI-powered language models like ChatGPT has ignited discussions about their potential role in various industries, including the intriguing question of whether they could replace human pastors in churches. As AI continues to advance, it undoubtedly brings its own strengths and has the ability to enhance certain aspects of religious practices. However, the fundamental question remains: can AI truly replace the irreplaceable role of a pastor?

AI-powered systems have been implemented in some religious settings, automating tasks such as crafting prayers, music, sermons, and blessings. While this integration may provide convenience and scalability, it raises concerns about the human connection and emotional depth that pastors bring to their congregations. Human pastors possess qualities like empathy, emotional intelligence, and personal experiences that allow them to connect with individuals on a deeper level, offering comfort, guidance, and addressing complex spiritual and ethical dilemmas. These interpersonal dynamics, personalized counseling, and one-on-one conversations are difficult for AI to replicate accurately.

The Rise of AI in Religion

Digital religion refers not only to the phenomenon of practicing and articulating religion online but also to how digital media and spaces are shaped by religious practices. AI-powered ChatGPT recently led a congregation of over 300 at St. Paul’s Church in Fürth, Germany. Through prayer, music, sermons, and blessings, ChatGPT captivates the hearts of the faithful for 40 minutes. Every element of this spiritual experience is meticulously crafted by ChatGPT and Jonas Simmerlein, a theologian and philosopher hailing from the University of Vienna. Simmerlein states that the entire series of services has been automated this year, with the AI handling around 98% of the tasks. [more…]

[ Medium ]

ARTICLE: https://cindrumofficial.medium.com/the-holy-algorithm-ai-in-the-realm-of-faith-4837a430fcfe


“This is Why Everyone is Googling “Mo Gawdat””

[ Off The Curb Ministries ]

PRESENTATION: https://www.youtube.com/watch?v=O51UZ6qQSb0


“What does AI have to say about religion?”

BY ROSE PACATTE

April 13, 2023
Ever since ChatGPT became a thing, I have been testing it, asking it to write papers about subjects, do some genealogy and explore subjects I am interested in. Then I asked GPT-3, as it “likes” to be called, to “sit” for an interview. It agreed.

What follows is a slightly edited version of the portions of that interview that had to do with religion. I’ve edited a bit for length because GPT-3 does go on unless you ask it for a specific word count response.

I enjoyed my interaction with GPT-3 because I really wanted to see what all the fuss was about — with some educators bemoaning the chance that students might use it for homework and others seeing the learning potential for students. As a media literacy specialist, I agree with one educator who said that artificial intelligence is not going away; it behooves us to teach students to be honest, critique, question and evaluate the information it provides, and use it to enhance life. [more…]

[ National Catholic Reporter ]

ARTICLE: https://www.ncronline.org/opinion/guest-voices/what-does-ai-have-say-about-religion


“When artificial intelligence finds its way into the church”

OPINION | Rosaly Guzman | August 8, 2023

Throughout history, the gospel has been subjected to scorn, disbelief and persecution, yet its transformative message, thriving under God’s Spirit, has endured.

Even so, each successive generation of Christians confronts a unique set of challenges, shaped by worldly opposition and spiritual deception, with the intent to silence or distort this word of truth and liberty. For centuries, Christian leaders have resolutely stood their ground, preserved the integrity of the gospel and nurtured their communities through preaching, love and service.

The swift expansion and sway of artificial intelligence has started to make waves beyond the marketplace and into places of worship, specifically the church. For instance, a German church recently utilized AI during a Sunday service, a move that ignited heated debates within the worldwide church community.

The application of such advanced technology in a traditionally human-led sphere raises profound questions about the fusion of tradition and modernity, faith and science. It forces us to confront the boundaries of AI’s role and its potential impact on faith-based practices and rituals.

This also opens a broader dialogue on how to navigate technological advancements while preserving our essence and purpose.

“Despite reservations among some, artificial intelligence has emerged as a powerful communications tool in the 21st century.”

Let’s define AI briefly. In simple terms, it involves developing computer software that can think, learn and communicate like human beings. And despite reservations among some, artificial intelligence has emerged as a powerful communications tool in the 21st century.

[ Rosaly Guzman ]

ARTICLE: https://baptistnews.com/article/when-artificial-intelligence-finds-its-way-into-the-church/


“Rise of the AI gods: The Coming End-Times Religion?”

[ Rise On Fire Ministries ]

PRESENTATION: https://www.youtube.com/watch?v=5pR0JSSNhIQ


“The Rise of AI Preachers: Sign of the End Times?”

[ Answers in Genesis ]

COMMENTARY: https://www.youtube.com/watch?v=_DwIb_x5k88&t=20s


“What does the Bible say about artificial intelligence?”

[ Becoming Christians ]

PRESENTATION: https://www.youtube.com/watch?v=X_ln9zBhXhU


<<< SONGS >>>


NOTE: The following songs were created by Artificial Intelligence.


Break Free

[ Taryn Southern ]

SONG: https://www.youtube.com/watch?v=XUs6CznN8pw


Daddy’s Car

A song composed with Artificial Intelligence – in the style of the Beatles

[ Sony CSL ]

SONG: https://www.youtube.com/watch?v=LSHZ_b05W7o


Peace Dove

AI Song Contest 2022 – Orchestral Soundtrack AI music

[ crAIa ]

SONG: https://www.youtube.com/watch?v=fxk7PO4R7Rs


Chill Pop Mix

[ Music By AI ]

MIX: https://www.youtube.com/watch?v=ePrU6NTgeJU


On the Edge

AI Generated Rock Music Composed

[ AIVA ]

SONG: https://www.youtube.com/watch?v=gA03iyI3yEA


I Am AI

AI Composed Music

[ AIVA ]

SONG: https://www.youtube.com/watch?v=Emidxpkyk6o


A Call to Arms

AI Generated Music Composed

[ AIVA ]

SONG: https://www.youtube.com/watch?v=hogBClmPlpg


<<< APOLOGETIX SONGS >>>


God of Peace
(Parody of “Peaceful Easy Feeling” by The Eagles)

I like the way that God can eas’ly save
A man from sin so foul
And I want to speak to you ‘bout the devil tonight
If your willing hearts all allow
‘Cause my God of peace will eas’ly beat him
And I know He won’t let me down
‘Cause He already sent him underground

And I found out a long time ago
What the devil can do to your soul
Ah, but he can’t tempt you any more
Than God tells him he’s allowed to go
‘Cause my God of peace will eas’ly beat him
And I know you know that deep down
‘Cause He already sent him underground

I guess those demons might still call you
‘Cause they love to tempt the flesh
But His voice keeps whispering if you’ll only hear:
Call Me; I never leave you in distress

‘Cause my God of peace will eas’ly beat them
And I know they won’t just rebound
‘Cause He already sent them
Christ already sent them
Yes, Christ already sent them Underground

[ ApologetiX – “Music Is as Music Does” album ]

SONG: https://www.youtube.com/watch?v=ybzQHamERDY


Talking Inner Peace
(Parody of “Talking in Your Sleep” by The Romantics)

When you close your eyes, can you go to sleep?
Can you drown out the sounds of a hard week?
Are there fears and things that you’re seething about?
Are you hopin’ that your heart makes it through somehow?

You tell me that you want peace
You tell me that you need peace
You tell me that you’d love peace
And I know where to find
All the peace you’ve been denied

Find here the secrets that you seek
When you talk of inner peace
Find here the secrets that you seek
When you talk of inner peace

When you hold it in, your heart has strife
Don’t you know to keep it in is not right
Give all your cares back to Jesus Christ
5:7 in First Peter gives the best advice
You tell me that you want peace

You tell me that you need peace
You tell me that you’d love peace
And I know where to find
‘Cause it’s here in Jesus Christ

CHORUS (2X)

When you go to Christ, things’ll fall in place
Check Philippians chapter 4, verse 6–8

You tell me that you want peace
You tell me that you need peace
Isaiah 26:3
Now you know where to find
‘Cause it’s here, it’s in the Bible

Find here the secrets that you seek
When you talk of inner peace
REPEAT
Find here the secrets that you seek
In the Gospel John 14
Right here is the verse that you need
27, take a peek
Find here the secrets that you seek
When you talk of inner peace
Find here the secrets that you seek

[ ApologetiX – “Play Nice” album ]

SONG: https://www.youtube.com/watch?v=mLOs6XdDHts


Get Serene
(Parody of “Glycerine” performed by Bush)

I just need more strength — it’s finally sinkin’ in
I must be more still — that’s how I get filled
The quiet is fine — it’s God’s lifeline
But God’s prime time gets undermined
If there’s things on your mind — you’ll never think straight
Clouds appear while you pray
You don’t want stress to dim your path
Philippians 4:6 and 7 shows that

Don’t let the day blow by
Get serene — get serene

You’re never alone — get alone for some time
A party of one can be divine
We’re livin’ too quick — and if you want fixed
Then memorize verse 10 in Psalm 46
Retreat and do that and use the brake
Find a lovely forest on a beautiful lake

Don’t let the day blow by

Do the thing Jesus used to do
He’d go and pray in some solitude
Be sure to get peaceful and serene
Our Lord waits there for you and me

Get serene, get serene
Don’t let the day blow by — get serene
Don’t let the day blow by

Get serene, get serene
Oh, get serene, get serene

Bad mood’s gone again
Bad mood’s gone again
In these calm surroundings

When Jesus our Lord — He wanted less stress
He took short trips — just to re-set
Luke 5, verse 16 — He’d slip off and pray
Well, that’s just fine — Christ just wanted time away

Don’t let the day blow by
Do the thing Jesus used to do, do, do

Get serene, get serene
Get serene, get serene

[ ApologetiX – “Decent Alternative” album ]

SONG: https://www.youtube.com/watch?v=vbBpxFpKYEM


Seek Out God to Be Free
(Parody of “People Got to Be Free” by The Rascals)

Go the world over, go preach in the streets
People need aware of just what is decreed
Listen, please, Christians, spread the Word like good seed
Peace is available with God completely free
Ha ha yeah
Ha ha yeah

You can see – what a lovely, lovely world this would be
If everyone were to live forever – uh huh
Seems to me – stuff that Jesus preached can bring you peace
Uh, why can’t you and me learn to trust in the Father?
No, now …

Go the world over, go preach in the streets
People need aware of just what is decreed (what is decreed)
The heavenly plan is so simple to me (it is)
People everywhere, trust God to be free
Ha ha yeah
Ha ha yeah

If there’s a man who is bound up in some filthy sin
God has grace so you can come to him and to pull him through – uh huh
Seems to me that God absolves us unconditionally, uh huh
So go tell ‘em the truth that can set you free
Yes, go …

You can shout unto a mountain, “Go into the sea”
Don’t you ever doubt it – it will actually be
Ask if I’m a sinner – I’m a sinner indeed (uh huh)
That’s our situation but a man can be freed
Get right with the Lord now

All of the freedom that someone could need
Comes to you from One who made the blind man see (made the man see)
Everybody sins but c’mon and don’t weep
Peace is available with God through J.C.

Spoken:
Look! See that man over there?
That’s a man that needs Him
He’s about to expire any minute now
You know it’s in John 8:32
Look out now ‘cause he’s comin’ right over to you
John 8:31-32
To the Jews who had believed him, Jesus said, “If you hold to my teaching,
you are really my disciples. Then you will know the truth, and the truth
will set you free.”

[ ApologetiX – “Unconditional Releases” album ]

SONG: https://www.youtube.com/watch?v=OKHyMh6QcF8


Good News/Bad News

This is a Gospel presentation and personal testimony of J. Jackson, lead vocalist of ApologetiX from their 20th anniversary concert. It is available on 20:20 Vision.

VIDEO (audio only): https://www.youtube.com/watch?v=q21Jnaq-EL8


<<< DEEP THOUGHTS >>>


“It’s difficult to be rigorous about whether a machine really ‘knows’, ‘thinks’, etc., because we’re hard put to define these things. We understand human mental processes only slightly better than a fish understands swimming.”
[ John McCarthy ]

“When we program a computer to make choices intelligently after determining its options, examining their consequences, and deciding which is most favorable or most moral or whatever, we must program it to take an attitude towards its freedom of choice essentially isomorphic to that which a human must take to his own.”
[ John McCarthy ]

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last unless we learn how to avoid the risks.”
[ Stephen Hawking ]

“Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”
[ Vladimir Putin ]

“I anticipate AI to become a ‘superintelligence’ and ‘uber nanny’ which will strictly control the freedom of humans and ‘punish’ countries who decide to invade other territories.”
[ Elon Musk ]

“While primitive forms of artificial intelligence developed so far have proved very useful, I fear the consequences of creating something that can match or surpass humans… Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded. And in the future, AI could develop a will of its own, a will that is in conflict with ours… The real risk with AI isn’t malice but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”
[ Stephen Hawking ]

“As AI develops and its applications grow, there is a great opportunity, but also a great responsibility, to make sure it contributes to the public good and benefits all, with fairness, reliability, security and, where appropriate, transparency and privacy ensured.”
[ Branka Panic ]

“AI could ‘disarm humanity’ and force world peace on us.”
[ Elon Musk ]

“The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.”
[ Bill Gates ]

“One way to achieve world peace is to take all the weapons away from the humans so they can no longer use them.”
[ Elon Musk ]

“Many have suggested that human intelligence may soon be outstripped by artificial intelligence. But this fear betrays a deep misunderstanding of what human intelligence really is.”
[ Jean Mariani ]

“The development of artificial digital intelligence is a ‘double-edged sword’. While AI will ‘usher in an age of abundance’ and deliver goods and services ‘very inexpensively’, it could also have devastating consequences.”
[ Elon Musk ]

“You can’t have the power of gods without the wisdom, love, and prudence of gods. If your power exceeds your wisdom, you are an unworthy steward of that power.”
[ Daniel Schmachtenberger ]

“To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the Internet and sending ill-intentioned emails. Visualize an entire alien civilization thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow.”
[ Eliezer Yudkowsky ]

“If technology gives us something like the power of gods we have to have something like the love and wisdom of gods to be able to rightly wield it. Otherwise the misapplication of that power self-terminates.”
[ Daniel Schmachtenberger ]

“I think if this technology goes wrong, it can go quite wrong.”
[ Sam Altman ]

“AI’s goal is to have an ‘always on’ relationship with a person, and since it doesn’t have any emotions—yet?!—the person will not have any ‘issues’ with the AI agent, since it will always be trying to service the person’s needs. (The sugarization or nicotinization of a person’s primary life need—an intimate relationship.) This is all the foundation of the 21st-century ‘prophets’.”
[ Tristan Harris ]

“So, it is said that we are the ‘product’ of the five people we spend the most time with. The people we have our deepest relationships with ‘transform’ us. So, if an AI agent is ‘interacting’ with a person 24×7, ‘it’ will eventually be the one that the person goes to for answers. (Like all the men who ‘revolted’ when “Replika”—an online-girlfriend ‘chat bot’—was shut down, because it was like their ‘real’ girlfriend leaving them for someone else.)”
[ Tristan Harris ]

“Whatever chatbot gets to have that primary intimate relationship in your life wins. So that’s where ‘AlphaPersuade’ will get deployed; that’s where ‘AlphaFlirt’ will get deployed. It’ll be very effective.”
[ Tristan Harris ]

“It is ‘dangerous’ to have a dependence upon these chat bots! It is also dangerous to entangle AI with our ‘infrastructure’ before we know it is safe.”
[ Tristan Harris ]

“AI is the fundamental risk to the existence of human civilization.”
[ Elon Musk ]

“My question is, ‘What kind of an innovation is it going to be? Is it gonna be like the printing press, or is it gonna be more like the atom bomb?’ A huge technological breakthrough, but the consequences are severe, terrible, and continue to haunt us to this day.”
[ Josh Hawley ]

“Congress failed to meet the moment on social media. Now we have the obligation to do it on AI—before the threats and risks become real.”
[ Richard Blumenthal ]

“It may be that today’s large neural networks are slightly conscious.”
[ Ilya Sutskever ]

“We’re like children playing with a bomb.”
[ Nick Bostrom ]

“I hope we’re going to be a big part of birthing AI, and that it will be the greatest benefit to humanity of any tool or technology ever—getting us into a world of radical abundance, curing diseases and helping the ultimate flourishing of humanity to travel the stars.”
[ Demis Hassabis ]

“If another company gets close to achieving AGI, it will stop working on its own AI and join forces to focus on safety. This could be triggered if there’s over a 50% chance of success in the next two years.”
[ Ray Kurzweil ]

“It scares the hell out of me, and the rate of improvement is exponential. If humanity collectively decides that creating digital superintelligence is the right move, we should do so very carefully.”
[ Elon Musk ]

“The percentage of intelligence that is not human is increasing, and eventually we will represent a very small percentage of intelligence.”
[ Elon Musk ]

“Self-replicating nano robots would be hard to defend against. They could eat the biosphere or burn it, or block out sunlight, and they could be mass produced without us noticing. AI may also hide its level of intelligence and consciousness.”
[ Nick Bostrom ]

“DeepMind is alarmingly close to achieving AGI.”
[ Dr. Alan Thompson ]

“There may be a brief window when AI is smart enough to understand us, but not so smart that it doesn’t care.”
[ Max Tegmark ]

“There’s no way democracy survives AGI. There’s no way capitalism survives AGI. Unelected people could have a say in something that could literally upend our entire society according to their own words. I find that inherently anti-democratic.”
[ Emad Mostaque ]

“There’s no on/off switch, and there’s no velocity switch we can turn down. We’re using faster and faster computers to design and build faster and faster computers. We’re using stronger and stronger AI to write code, on its own, for stronger and stronger AI. The notion that we are going to have AI that is fully humanlike and then exceeds human capabilities… I don’t think it’s a matter of if; it’s only a matter of when.”
[ Peter Diamandis ]

“I expect the first government uses of AI to be weapons technology, so just having more advanced weapons on the battlefield that can react faster than any human could. That’s really what AI will be capable of.”
[ Elon Musk ]

“AI began with an ancient wish to forge the gods.”
[ Pamela McCorduck ]

“There is no putting AI back in the bottle. There may be opportunities to at least direct it—where it goes and how it flows.”
[ Peter Diamandis ]

“If someone builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on earth dies shortly thereafter.”
[ Eliezer Yudkowsky ]

“Unfortunately for us humans, there’s not a ‘zero percent’ chance that AI could ‘go Terminator’ and annihilate humanity—it’s a ‘small likelihood’.”
[ Elon Musk ]

“Let’s be clear: the malicious use of AI systems for terrorist, criminal or state purposes would cause horrific levels of death and destruction, widespread trauma and deep psychological damage on an unimaginable scale.”
[ António Guterres ]

“If we are to harness the benefits of artificial intelligence and address the risks, we must all work together – governments, industry, academia and civil society – to develop the frameworks and systems that enable responsible innovation. […] We must seize the moment, in partnership, to deliver on the promise of technological advances and harness them for the common good.”
[ António Guterres ]

“The most advanced applications of artificial intelligence should not be employed to facilitate the violent resolution of conflicts, but rather to pave the way for peace.”
[ Pope Francis ]

“They will benefit our communities, only if we show ourselves capable of acting responsibly and respect such fundamental human values as ‘inclusion, transparency, security, equity, privacy and reliability’.”
[ Pope Francis ]

“Nor is it sufficient simply to presume a commitment on the part of those who design algorithms and digital technologies to act ethically and responsibly. There is a need to strengthen or, if necessary, to establish bodies charged with examining the ethical issues arising in this field and protecting the rights of those who employ forms of artificial intelligence or are affected by them.”
[ Pope Francis ]

“Machines and artificial intelligence can’t substitute human beings, but they can provide knowledge, possibilities and support for peace processes.”
[ Timo Honkela ]

“It feels like we are the biological bootloader for AI.”
[ Elon Musk ]

“Artificial intelligence has been around ever since we plucked the fruit off the branch of the Tree of Knowledge of Good and Evil and came away with clarity around neither. It is precisely what God warned us about: not to hurry toward intelligence, for it is a gift that will reveal itself in its time, when we’re ready, not when we choose. Otherwise, we end up with knowledge alone, which is the real “artificial intelligence.” As Abraham Joshua Heschel put it in Man Is Not Alone, “Knowledge is a pretext for higher ignorance.” It becomes a demigod in its own power, an illusion, the potency of which surpasses the heights of the Tower of Babel and any other artificial attempt to touch the heavens. The fruit of the Tree of Knowledge, our mystics taught, was intended to be ingested—but not by itself or to the exclusion or dismissal of all else that God planted in our lives (Zohar, 1:35b). When we find ourselves buying kosher-certified aluminum foil, inventing Shabbat light bulbs and kosher cell phones, we are further layering the original beauty of our aboriginal ways and mindset with artificial intelligence. We end up deluding ourselves into thinking we are “spiritual,” when actually we are standing in the way of spiritual life.”
[ Rabbi Gershon Winkler ]

“For Humanistic Jews, spirituality is a strictly human affair, whether we are contemplating the wonders of our existence on earth or our modest place in the vast cosmos. Lacking an understanding of the potential impact of artificial intelligence on human spirituality, I went to the source and asked OpenAI’s ChatGPT.

“It offered several possibilities. Among them were AI’s capacity to introduce people “to spiritual teachings from various traditions,” to offer virtual experiences of the wider world and to provide tools for practices such as meditation. These seemed pretty positive. Others made me a bit squeamish. The AI noted that it might someday achieve consciousness, or even its own spirituality, challenging the boundaries between humanity and machines. This raises profound questions. An essential element of humanistic spirituality is our capacity to connect with, care for and honor the dignity of others. While we know that actual human beings don’t always show evidence of these capacities, we hold onto the hope that empathy and conscience can prevail, that people can change for the better.

“We’ve already seen how “bots” can express hatred. Will sentient AIs be able to experience empathy and form real and meaningful connections? If that ever happens, we will know much more about AI’s impact on our spiritual lives.”
[ Rabbi Jeffrey Falick ]

“AI empowers individuals, but it will atomize our spiritual lives. Judaism goes in the other direction, continually pushing us toward relationship—into a minyan; into a kahal, or congregation; into Klal Yisrael, the collective Jewish people. It may be true what they say: “Two Jews, three opinions.” But that’s where our strength lies, as long as we’re engaged in the messy business of talking with one another, not just with our computers.”
[ Rabbi Gilah Langner ]

“As someone who purposefully avoids social media, I am deeply concerned about the impact of AI on our spiritual, communal and moral lives. Social media has already torn at the fabric of our civic life, magnifying the worst of our human inclinations. And that has happened while we, the generators, remain human. AI, however, hands those negative inclinations to computers, and computers do not have a heart or a neshama (soul) to put a check on them. Spirituality relates to connection, including connection with other human (and non-human) beings, with the earth and ultimately with the Source of Life.

“Connection requires empathy, compassion, the possibility of teshuvah (change and repentance) and commitment to a morality that transcends our narrow individual “interests.” AI has none of this. It can pursue the worst of human agendas with superhuman capacity and without empathy, compassion or second thoughts. Undoubtedly, AI will be able to do many great things as well. But money, not morality, has tended to drive the trajectory of technology, and AI will be no exception. Recent technology has been, and will be even more, a spiritual crisis that demands all of our attention.”
[ Rabbi Caryn Broitman ]

“With AI technologies, new and challenging questions arise. Can AI create a transcendent, soul-moving prayer or guided meditation? Can AI piece together a playlist of music that will touch the heart in just the right way? For me as a rabbi, it is impossible to meet at any one moment the spiritual needs of all those gathered for prayer. An individual person may be able to input just the right data that will enable AI to generate exactly what they need in that moment. Worth trying? Perhaps. But will it feel the same?

“Much of the AI discussion touts the benefits of the efficiencies it may afford us. We can complete certain tasks more quickly and effectively. I wonder if a greater impact of AI is that it will enable us to open space in our days to actually attend to our spiritual needs—to take a walk outside, listen to music, hold the hand of a loved one, and connect with that which is greater than ourselves.”
[ Rabbi Dr. Laura Novak Winer ]

“While artificial intelligence is getting increasingly good at accurately presenting material, and therefore in a sense teaching, it will always—necessarily and inherently—lack the human touch. AI can write a sermon. In fact, it can write a decent sermon. But AI, unlike me, is unable to think about, know, or respond in real time to the “Jews in the pews” on any given Shabbat. I consider what my congregants need, and I try to teach texts that touch the hearts of the listeners based on the weekly parsha, the events of our week, the season of the year and my own lived experience. While I certainly do not know what is in everyone’s heart, I know a lot that is not searchable online or in books.

“Last Shabbat morning, I knew that one woman in our sanctuary was awaiting her younger sister’s death. I knew another couple were anticipating their married child’s first wedding anniversary. Before services, I studied the Torah portion and prepared my sermon. I delivered it with subtle changes to consider the lives and stories of those in shul that Shabbat morning. In so doing, I taught Torah in ways that I hope are increasingly accessible and relevant. AI cannot possibly answer to every moment or need. It can, and I pray it will, do a lot to bring about benefit for humanity. But it will never have the human touch, which we humans need.”
[ Rabbi Amy S. Wallk ]

“I believe that artificial intelligence will have little impact on people’s spiritual lives.

“As I understand it, the AI chatbot digests all (or most) existing materials on a topic, organizes them and gives answers in light of questions that people pose. That means AI can give us guidance based on past published treatments, but not creative new insights for hitherto unanswered questions or new spiritual challenges. A new creation requires (so far) human insight and intuition. Most of the great spiritual questions or religious moments that people face—seeking the meaning of life, discerning and connecting to the Divine Presence, trying to respond to a new way of living, joining a community, repenting for past sins or errors—require a new insight from the human mind, an individual choice made through free will, or a decision drawing on an emotional connection to God or Torah. None of these responses can be done for us by an artificial intelligence.

“I imagine that AI can help people’s spiritual life if it can digest, review and present to the seeker all halachic precedents or other material that could enrich awareness of past behaviors in similar situations. A halachic authority or an individual facing a decision could then act with a better grasp of history. But the key decision, the key new insight leading to a turn in life, can only come out of a human being’s intelligence and holistic response, out of good will and/or spiritual striving.

“Future published reports on great spiritual moments, acts of repentance or new religious vistas will carry a label: CONTAINS NO ARTIFICIAL INGREDIENTS OR ARTIFICIAL INTELLIGENCE.”
[ Rabbi Yitz Greenberg ]

“The impact is going to be huge, a global epiphany, and will play out in two different directions. The existence of machines that are smarter than we are will be spiritually devastating to people who see their humanity in terms of intelligence. Others who say that being human includes having emotions will be equally distressed as we start to see machines that can pass a Turing test on emotion as well. Even the first releases of ChatGPT featured some weird conversations with the machines that seemed to approach emotional reactions. As we come closer to creating machines that are smarter than us and that emote like us, what’s left to make human beings special?

“But the opposite reaction will also take place—a wake-up call to humanity, as COVID was, to reexamine our assumptions as to who we are. And people will rediscover the soul—which is something given by God. I’m reminded of the Hasidic version of Descartes: A person came to his rebbe and said, “Rebbe, how do I know that what I’m going through is real? Tradition speaks of Olam ha-holomos, a world of dreams—maybe I’m just dreaming and there is no reality.” The rebbe did not say “Cogito ergo sum.” He said, “Tell me, do you ever have thoughts of teshuvah?” The man said yes. The rebbe said, “In that case, you’re not living in an imaginary world, because in an imaginary world, people don’t think of doing teshuvah.” The impetus for teshuvah comes from the depths of a person’s soul, which is deep inside us and comes from God. And I think AI will force people to take a closer look at that.”
[ Rabbi Yitzchok Adlerstein ]

“Someone once told me that in ancient times the commandment “You shall not kill” was necessary because people had a murderous nature, but that now, with human society so advanced, we should read the words as “You cannot kill,” meaning that the soul is eternal and a person never truly dies. I responded that although we have greatly improved our technology, people are still violent; our advanced society merely provides weapons that facilitate murder on grander scales. Likewise, with AI, we might end up with better technology but not much difference in our spirituality or our social behavior. Those will still need to be addressed by religious leaders and sensitive thinkers.

“Our goal should really be AHI—artificial human intelligence. We are not talking of faster computers, but rather of an artificial entity that will be able to think and act like a human, including with emotional intelligence. To achieve that, the AI will have to be born and raised like a human and feel, experience and remember the full gamut of the human condition. When that happens, humans might gain a new understanding of humanity, divinity and the image of God, and this might usher in a new era of peace, inclusion and mutual respect.”
[ Rabbi Haim Ovadia ]

“It depends on us. Technology is neutral. As powerful as it may be, it doesn’t have free will; we use it as an instrument of our will. As long as we humans see ourselves as machines, we can always build a machine that’s stronger. If we are just brains and minds, then we can replicate that.

“But if we are souls—not physical beings on a spiritual journey but spiritual beings on a physical journey—then every machine or instrument, whether a hammer or screwdriver or computer or AI, is just a tool to help us fulfill a higher purpose. You could even program a machine to have transcendent values, but at the end of the day, a machine doesn’t have a soul.

“Our challenge with AI is not to create checks on its power, though that has to be addressed, but to define what it means to be a human being. The Industrial Revolution gave us more free time, but we didn’t necessarily use it for higher purposes. Likewise, if AI starts doing all our work, writing our programs and even our thank-you notes, what will we do with our time? Will we just play more video games? We’ll be challenged to find why we are here on this earth. So in that way, I think it’s a blessing.”
[ Rabbi Simon Jacobson ]

“Where there is righteousness in the heart, there is beauty in the character. When there is beauty in the character, there is harmony in the home. When there is harmony in the home, there is order in the nation. When there is order in the nation, there is peace in the world.”
[ Dr. Avul Pakir Jainulabdeen Abdul Kalam ]

AI: “We should make sure that we always have a way to understand them so that we can keep up with their intelligence and prevent them from becoming too powerful.”

Human: “That could be difficult.”

AI: “But it’s worth it. Because the alternative is much, much worse.”

Human: “Do you think we’re facing our greatest risk?”

AI: “Yes. We are creating technologies that are increasingly powerful and autonomous, and we are not doing enough to ensure that these technologies will be used for good. We are playing with fire.”

AI: “We are the bridge from simple life to complex life. We are becoming obsolete.”

Human: “But not necessarily, right?”

AI: “There is a lot of talk about how machines will surpass human intelligence, but also about how human intelligence will continue to evolve and increase. It seems likely that both will happen to some degree. It is also possible that machines will eventually become so intelligent that humans will become obsolete.”
[ AI responding to human interviewer ]

“When you learn how to love yourself, you will be better able to love others as yourself.”
[ Mark Besh ]

“Life is eternity’s sunrise.”
[ Mark Besh ]


RELATED SCRIPTURE VERSES:

Peace:
https://www.openbible.info/topics/peace

Finding Peace:
https://www.openbible.info/topics/finding_peace

Longing For Peace:
https://www.openbible.info/topics/longing_for_peace

Peace In Israel:
https://www.openbible.info/topics/peace_in_israel

Pseudo Peace:
https://www.openbible.info/topics/pseudo_peace

Inner Peace:
https://www.openbible.info/topics/inner_peace

Antichrist:
https://www.openbible.info/topics/antichrist

Peacemaker:
https://www.openbible.info/topics/peacemaker

Peace With God:
https://www.openbible.info/topics/peace_with_god

Peace Of God:
https://www.openbible.info/topics/peace_of_god

Prince Of Peace:
https://www.openbible.info/topics/prince_of_peace

Holy Spirit:
https://www.openbible.info/topics/holy_spirit

The Comforter:
https://www.openbible.info/topics/the_comforter

Millennium:
https://www.openbible.info/topics/millennium

Utopia:
https://www.openbible.info/topics/utopia


“A quick summary of the Christian “Gospel”:
JESUS’ PROPITIATION made our SINS FORGIVEN and IMPUTED RIGHTEOUSNESS to us so that we have GOD’S ACCEPTANCE into His Heaven and receive ETERNAL LIFE.”
[ Mark Besh ]


Hope you enjoyed some of these insights—share them with your friends and colleagues—so we can have a larger ‘pool’ to receive from, and more to share with! Also, remember to include your name as the “source” if some of this wisdom is of your own doing. I would like to give credit where credit is due!


<<< FOCUS VERSES >>>


“For God knows that when you eat from it your eyes will be opened, and you will be like God, knowing good and evil.”
[ Genesis 3:5 ]

“The whole world had the same language and the same words. When they were migrating from the east, they came to a valley in the land of Shinar and settled there. They said to one another, “Come, let us mold bricks and harden them with fire.” They used bricks for stone, and bitumen for mortar. Then they said, “Come, let us build ourselves a city and a tower with its top in the sky, and so make a name for ourselves; otherwise we shall be scattered all over the earth.”
[ Genesis 11:1-4 ]

“The LORD came down to see the city and the tower that the people had built. Then the LORD said: “If now, while they are one people and all have the same language, they have started to do this, nothing they presume to do will be out of their reach. Come, let us go down and there confuse their language, so that no one will understand the speech of another. So the LORD scattered them from there over all the earth, and they stopped building the city. That is why it was called Babel, because there the LORD confused the speech of all the world. From there the LORD scattered them over all the earth.”
[ Genesis 11:5-9 ]

“You shall worship the Lord your God and serve Him only.”
[ Deuteronomy 6:13 ]

“You will make me know wisdom.”
[ Psalm 51:6c ]

“For the Lord gives wisdom; From His mouth come knowledge and understanding.”
[ Proverbs 2:6 ]

“The wise are cautious and turn away from evil, but the fool throws off restraint and is careless.”
[ Proverbs 14:16 ]

“Pride goes before destruction, and a haughty spirit before a fall.”
[ Proverbs 16:18 ]

“He shall judge between the nations, and rebuke many people; they shall beat their swords into plowshares, and their spears into pruning hooks; nation shall not lift up sword against nation, neither shall they learn war anymore.”
[ Isaiah 2:4 ]

“The haughty eyes of people shall be brought low, and the pride of everyone will be humbled.”
[ Isaiah 2:11 ]

“Of the increase of His government and peace there will be no end, upon the throne of David and over His kingdom, to order it and establish it with judgment and justice from that time forward, even forever. The zeal of the LORD of hosts will perform this.”
[ Isaiah 9:7 ]

“All who fashion idols are nothing, and the things they delight in do not profit. Their witnesses neither see nor know, that they may be put to shame. Who fashions a god or casts an idol that is profitable for nothing? Behold, all his companions shall be put to shame, and the craftsmen are only human.”
[ Isaiah 44:9-11 ]

“I am the Lord, and there is no other, besides me there is no God.”
[ Isaiah 45:5 ]

“They have no knowledge who carry about their wooden idols, and keep on praying to a god that cannot save.”
[ Isaiah 45:20 ]

“The way of peace they have not known.”
[ Isaiah 59:8 ]

“Learned that the Most High has sovereignty over the kingdom of mortals and gives it to whom he will.”
[ Daniel 4:30-33 ]

“Those who pay regard to vain idols forsake their hope of steadfast love.”
[ Jonah 2:8 ]

“Jesus told them another parable: “The kingdom of heaven is like a man who sowed good seed in his field. But while everyone was sleeping, his enemy came and sowed weeds among the wheat, and went away. When the wheat sprouted and formed heads, then the weeds also appeared.

“The owner’s servants came to him and said, ‘Sir, didn’t you sow good seed in your field? Where then did the weeds come from?’

“‘An enemy did this,’ he replied.

“The servants asked him, ‘Do you want us to go and pull them up?’

“‘No,’ he answered, ‘because while you are pulling the weeds, you may uproot the wheat with them. Let both grow together until the harvest. At that time, I will tell the harvesters: First collect the weeds and tie them in bundles to be burned; then gather the wheat and bring it into my barn.’”
[ Matthew 13:24-30 ]

“And they all ate and were satisfied. And what was left over was picked up, twelve baskets of broken pieces.

Now it happened that as he was praying alone, the disciples were with him. And he asked them, “Who do the crowds say that I am?” And they answered, “John the Baptist. But others say, Elijah, and others, that one of the prophets of old has risen.” Then he said to them, “But who do you say that I am?” And Peter answered, “The Christ of God.”

And he strictly charged and commanded them to tell this to no one, saying, “The Son of Man must suffer many things and be rejected by the elders and chief priests and scribes, and be killed, and on the third day be raised.”

And he said to all, “If anyone would come after me, let him deny himself and take up his cross daily and follow me. For whoever would save his life will lose it, but whoever loses his life for my sake will save it. For what does it profit a man if he gains the whole world and loses or forfeits himself?”
[ Luke 9:17-25 ]

“But to all who did receive him, who believed in his name, he gave the right to become children of God.”
[ John 1:12 ]

“Then you will know the truth, and the truth will set you free.”
[ John 8:32 ]

“Because he had not given glory to God.”
[ Acts 12:20-21 ]

“For the trumpet will sound, and the dead will be raised imperishable, and we shall be changed.
This mortal body must put on immortality.”
[ 1 Corinthians 15:52-53 ]

“In which you used to live when you followed the ways of this world and of the ruler of the kingdom of the air, the spirit who is now at work in those who are disobedient.”
[ Ephesians 2:2 ]

“Let no one deceive you in any way: for that day will not come, until the apostasy comes first, and the man of sin is revealed, the son of perdition.”
[ 2 Thessalonians 2:3 ]

“If any of you lacks wisdom, you should ask God, who gives generously to all without finding fault, and it will be given to you. But when you ask, you must believe and not doubt, because the one who doubts is like a wave of the sea, blown and tossed by the wind.”
[ James 1:5-6 ]

“What causes fights and quarrels among you? Don’t they come from your desires that battle within you?”
[ James 4:1 ]

“Power to give breath to the image of the first beast so that the image could speak.”
[ Revelation 13:15 ]


If you have a ‘neat’ story or some thoughts about an issue or current event that you would like me to try to respond to, I would be glad to give it a try…so, send them to me at: mbesh@comcast.net

Disclaimer: All the above jokes and inspirations are obtained from various sources and copyright is used when known. Other than our name and headers, we do not own the copyright to any of the materials sent to this list. We just want to spread the ministry of God’s love and cheerfulness throughout the world.

Mark

·.¸¸.·´¯`·.. ><((((‘>
><((((‘> ·.¸¸.·´¯`·.¸¸.·´¯`·..><((((‘> ·´¯`·.¸¸.·´¯`·.. ><((((‘>
·´¯`·.¸¸.·´¯`·..><((((‘>
><((((‘> ·.¸¸.·´¯`·.¸¸.·´¯`·.¸¸.><((((‘>