The Thing We Should Fear Most About AI Is Not What You Think — But It’s Already Happening

It’s not too late for humanity to protect itself by rediscovering its superpowers


“Is artificial intelligence going to replace humans?”  

I have been asked this question on many occasions in the past year. 

My answer: “No, but … we should be far more concerned about humanity becoming more like AI.”   

In many of these dinner conversations, I sense that people recognize the potential scale of AI's impact and want to participate in the discussions. Still, they don’t feel they know enough to join in.

And they are afraid. 

One of their biggest worries about AI is the possibility of human replacement. We have accepted that a sewing machine can sew clothes better and a calculator can do math faster than we can, but AI has somehow stirred up a much deeper fear. Not only is it apparent that humans cannot compete with AI in some cognitive capacities, but the latest generative AI agents that mimic complex human interactions and decisions are especially chilling to some. It may be the success of this human-like simulation that startles us. But should it?

This AI blog series attempts to democratize understanding of AI and break down these barriers. Many of my wonderful friends in the tech industry are eager to contribute because we all see the need to invite more constituents into the discussion. AI technology cannot live up to its full potential if users do not feel safe. Besides, when it comes to AI’s role in society, it is far better to base decisions on the views of many rather than the views of a few. If AI is going to shift the world so fundamentally, we need a diversity of backgrounds in the discussion, because we will all be affected.

💡
“I believe AI is going to change the world more than anything in the history of humanity. More than electricity.” — Kai-Fu Lee, Chairman and CEO of Sinovation Ventures and President of Sinovation Ventures’ Artificial Intelligence Institute

AI is not the first technology we’ve tried to integrate into our society that challenges humanity. I was a Chemical Engineering student in the late '90s working for biotechnology professors as a research assistant when the debate about the ethics of gene technology application was scorching. Fast-forward 25 years, and genetic engineering has made a giant leap forward in medicine. It contributes to vaccine and medicine production, diagnostic tests, gene therapy, etc. Sensitive topics like embryo selection continue to spark debate, but for the most part, genetic engineering has not become the enormous catastrophe that some worried it would. There are certainly differences between AI and gene technology. Still, there are enough similarities to give us confidence that we can manage AI as long as we are mindful of its design, application, and containment.  

The anxiety I hear in conversations with others suggests that they don’t trust humanity’s ability to harness AI’s power and potential—that it will become a modern Frankenstein’s monster, eventually supplanting its creators. That concern is misguided, because our own behaviors and beliefs, arising from the very existence of AI, put humanity at a far greater risk. But before we discuss that lesser-understood risk, we must understand why AI cannot replace humans by first addressing two fundamental questions: 1) What is AI, and 2) What is human intelligence?

  1. What is AI?

Artificial intelligence isn’t a boundless intelligence that magically emerges from a black box. It is intelligence built by learning from a large set of training data. From the patterns it recognizes, AI can then make predictions. The more data it learns from, the better its predictions become. Its intelligence has clear boundaries—it is limited by the data we can supply. However, its computational power vastly exceeds that of humans.
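That pattern-learning loop can be sketched in a few lines. The example below is a toy illustration, not how any real AI system is built: a one-nearest-neighbor "model" that simply memorizes labeled examples and predicts by similarity. It demonstrates the two points above: predictions come from patterns in the training data, and more data yields better predictions.

```python
import random

random.seed(0)  # fixed seed so the toy experiment is repeatable

def true_label(x):
    # The hidden rule the model must infer from examples: "spam" if x > 0.5.
    return "spam" if x > 0.5 else "ham"

def train(n_examples):
    # "Learning" here is just memorizing labeled examples (1-nearest neighbor).
    xs = [random.random() for _ in range(n_examples)]
    return [(x, true_label(x)) for x in xs]

def predict(model, x):
    # Predict using the label of the closest training example.
    nearest_x, nearest_label = min(model, key=lambda ex: abs(ex[0] - x))
    return nearest_label

def accuracy(model, n_tests=1000):
    hits = sum(predict(model, random.random()) == true_label(random.random() if False else (x := random.random())) for _ in range(n_tests))
    return hits / n_tests

def accuracy(model, n_tests=1000):
    hits = 0
    for _ in range(n_tests):
        x = random.random()
        if predict(model, x) == true_label(x):
            hits += 1
    return hits / n_tests

small = accuracy(train(5))    # model trained on 5 examples
large = accuracy(train(500))  # model trained on 500 examples

# More data -> a better approximation of the hidden rule, but the model
# can never know anything the data doesn't contain.
print(f"accuracy with 5 examples:   {small:.3f}")
print(f"accuracy with 500 examples: {large:.3f}")
```

The model's "intelligence" here is bounded exactly as the paragraph describes: it can only interpolate between the examples it was given, however fast it does so.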

  2. What is human intelligence?

As you contemplate this question, you are experiencing human intelligence. The brain and its billions of neurons are vital to the human intelligence system and comprise one of the world's most magnificent and complex organs. But that does not make our brains our entire humanity. Humans have a broad range of capabilities, such as seeing, sensing, and feeling. We also adapt quickly as we learn from experiences. We are capable of original ideas and innovation. We also have consciousness, awareness of our existence, and internal thoughts. 

In principle, there isn’t anything AI can do that our human intelligence cannot, since humans inspire it. That said, AI can be more focused and process information far faster. Single out a very narrow task, e.g., analyzing hundreds of millions of protein molecules to predict the specific three-dimensional “folded” shape of any given protein from its amino-acid sequence, and AI excels: it recently solved a problem that had baffled human researchers for nearly three decades by learning to predict the shape of proteins to the nearest atom.

However, while AI can sort through data to diagnose diseases well, it cannot deliver the news to a patient’s family with the grace and compassion of a good doctor. AI will also be far less effective at navigating the complex emotional state of family members while working out what actions they need to take given the family’s feelings, finances, and interpersonal dynamics.

A scenario in which AI outperforms humans at most of the cognitive tasks we do is likely, and it is coming sooner than we anticipated. So instead of asking whether humans will be replaced by AI, we should ask how humanity will treat knowledge when it is no longer a tool we must wield by and for ourselves.

Should we still learn chess if we know we cannot beat AI in the game? Should everyone just learn from playing with AI, or is there still value in learning from another human? Will we still have the motivation to learn a skill if we have no hope of winning? In this new era, we ask ourselves a long list of questions about activities that AI could soon master. 

Ultimately, we must decide the purpose of learning, of engaging the human intellect, and what role we want AI to play in those endeavors. This is what has prompted me to write this piece with urgency. There will be a point when we cannot turn the clock back; we are not there yet. This is why your participation—your individual decision on the ultimate purpose of the human mind in the face of an intelligence far greater than ours—is essential, before those decisions are no longer ours to make.  

What makes AI potentially more damaging to our social fabric than any other technology is that it is designed to be an agent, one that can pretend to be our equal, manipulate our thinking, and make decisions for us based on algorithms, all in the blink of an eye. But it cannot factor the subjective nuances of moral, ethical, and empathetic considerations into its decision-making. This is why many of the warnings about AI concern human overreliance.

A recent report described an HR team that could not find any candidates when it relied solely on AI for initial screening of applicants. To probe, the hiring manager submitted his own CV under a different name, and he too received an automated rejection email. After some investigation, the team discovered a typo in the AI prompt specifying a technical skill they were looking for. This example shows both the danger of relying solely on AI and how simply the risk could be reduced by having a human assess and adjust the conclusions AI proposes.
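The report gives no technical details, so the sketch below is purely hypothetical: all names, skills, and the typo itself ("kubernates" for "kubernetes") are invented to illustrate the failure mode. A single misspelled required skill rejects every applicant, and a simple human-in-the-loop guard flags the empty shortlist instead of silently sending rejections.

```python
# Hypothetical applicant pool; each candidate maps to a set of skills.
candidates = {
    "Alice": {"python", "kubernetes", "sql"},
    "Bob":   {"kubernetes", "go"},
    "Carol": {"python", "terraform", "kubernetes"},
}

def screen(required_skills, applicants):
    # Keep only applicants whose skills include every required skill.
    return [name for name, skills in applicants.items()
            if required_skills <= skills]

# The criterion the team meant to write ...
print(screen({"kubernetes"}, candidates))   # ['Alice', 'Bob', 'Carol']

# ... and the one they actually wrote, with a typo.
print(screen({"kubernates"}, candidates))   # []

# Human-in-the-loop check: an empty shortlist is suspicious, so flag it
# for review rather than auto-rejecting everyone.
shortlist = screen({"kubernates"}, candidates)
if not shortlist:
    print("WARNING: zero matches - have a human verify the criteria")
```

The guard at the end is the whole point: the AI's output is treated as a proposal to be reviewed, not a final decision.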

💡
“By allowing algorithms to control a great deal of what we see and do online, such designers have allowed technology to become a kind of ‘digital Frankenstein,’ steering billions of people’s attitudes, beliefs, and behaviors.” — Tristan Harris, Co-founder & Executive Director of the Center for Humane Technology

So, some of the fears of AI replacing human roles are valid—such as in fields where high volumes of data must be processed rapidly—but not all of them are. As a society, we need to clear our minds of the panic that fear elicits and direct that energy into creating solutions that prevent the worst from happening. How would you develop a strategy if you were the CEO of a company facing new competition? Some obvious steps would be understanding your new competitor, revising your priorities, and strengthening your organization's core competencies.

Instead of dwelling on the risks alone, we should focus on the opportunity by working with AI to augment our intelligence. To do this, we need to ask ourselves, What makes us human? What should we focus on if AI can do some of the cognitive tasks we no longer need to do? And how should we strengthen and enhance our uniquely human superpowers? 

AI is here to stay, so we need to revise our strategy around this new reality and act now to secure a better future for ourselves and the next generations. The truth is, we did not need AI to diminish our humanity—we have been doing that to ourselves. Since the digital explosion of the past decade, we have voluntarily given up many of our human superpowers. Nowadays, artists are pressured to make their work fit the algorithm, targeting the largest audience cluster with comparable behavioral data, which ultimately leads to derivative art. Are we allowing the data trail of our on-screen habits to lock in the music, movies, food, and news we consume?

If we allow our screens and social media algorithms to dictate how we live and what we believe in, are we losing our agency?

If we allow algorithms to dictate our human experience, are we really that different from AI?  

💡
"Man is a man because he is free to operate within the framework of his destiny. He is free to deliberate, to make decisions, and to choose between alternatives." — Martin Luther King, Jr. (1929 - 1968), American Civil Rights Activist

Now it is more important than ever to be the CEO of your human experience and exercise your free will to cultivate your human superpowers. We are so much more than just some algorithms. We are creative, collaborative, and compassionate. We are capable of giving and receiving love. We can tell right from wrong. We have humility. We have our own set of values and we do not have to be in agreement to learn from each other and solve problems together. We can feel that joy in our hearts when we are serving a purpose and making an impact (I can feel that as I write this piece). We are resilient and know how to adapt to new environments. We have intuition—isn’t it extraordinary how sometimes we just know? These magnificent qualities have helped propel us forward as a species for millennia.  

Miraculous human experience includes moments of bliss.

To be a human on earth is a miraculous experience that AI will never taste. AI can only learn from data. But if we limit our experience by living through screens, we limit our learning in much the same way AI's learning is limited. Humans learn best from experience, and to maximize that learning we need to leave room and time for surprises. We need to shrink our ego, set aside our judgment, and allow life to unfold. When was the last time you experienced pure joy, with no agenda, just playing? Thinking this through has implications for how we spend our free time and how we educate the next generation. I started Finding Ananda as a platform to discuss precisely this.

Regarding all the concerns around AI, exposing the monster in the box is the best way to combat fear. The second-best is ensuring we don’t trap ourselves in a similar box, with screens facing us on all six sides. In upcoming blog posts, we will discuss the latest developments in AI, the opportunities, the risks, and what we can do to ensure AI can safely cooperate and coexist with humans.

💡
“I often tell my students not to be misled by the name ‘artificial intelligence’ — there is nothing artificial about it. AI is made by humans, intended to behave by humans, and, ultimately, to impact humans’ lives in human society.” — Dr. Fei-Fei Li, Co-Director of Stanford’s Human-Centered AI Institute

How do you think humanity could benefit the most from AI? What do you think we can do to collectively strengthen our uniquely human qualities? Subscribe to share your thoughts in the comments.

