December 27, 2024. Professor Liang Zheng of Tsinghua University explores why AI cannot replace humans.
**I. The Impact of AI on Society**
*1. Short-Term Impact*
– AI amplifies pre-existing social issues such as privacy, fairness, and discrimination. These are not problems with AI itself; AI merely brings existing social issues into sharper relief, and they cannot be fundamentally resolved through value alignment alone. Instead, problems should be avoided at the design stage, with training and development conducted under clear rules from the outset.
*2. Long-Term Impact*
– The human-machine relationship. AI need not be designed to replace humans. We should approach it from the standpoint of human-machine cooperation, letting AI handle what humans are not good at, cannot do, or do poorly. In scientific research, for example, AI is powerful but cannot replace humans, because breakthroughs often come from diverse exploration and interdisciplinary inspiration, whereas AI may push everyone's work toward convergence. In the future, every field will need to address the human-machine relationship, consider the boundaries of cooperation, and avoid losing other avenues of human creation, exploration, and discovery through excessive reliance on tools.
**II. The Dilemmas Brought by AI and Governance Measures**
*1. Thoughts Triggered by Cases*
– The lawsuit against the chatbot startup Character.AI shows that chat tools can deepen information cocoons and erode people's own richness and autonomy. It is therefore necessary to limit AI's capabilities in certain scenarios and set clear boundaries.
*2. Key Issues in Governance*
– Asynchrony: Technology moves faster than governance, so existing rules may not fit future technological change. A flexible "agile governance" approach is needed.
– Dispersion: The risks of AI arise from how humans apply it, and different fields have different governance needs. Governance should therefore be scenario-based, drawing on existing regulatory tools, setting bottom-line principles, and adjusting flexibly by context.
*3. Inspiration from Governance Measures in Different Countries*
– The United States: It does not impose strict restrictions on AI; instead it promotes standards-setting and improvements in technical quality. The California bill, for example, was vetoed over concerns it would make developers afraid to experiment.
– Europe: It implements horizontal supervision, classifying and identifying high-risk scenarios and, on human-rights grounds, focusing on AI's impact on people. Its current system, however, remains imperfect in terms of remedies and compensation.
**III. Balanced Strategies for AI Governance**
*1. Balancing Development and Governance*
– Clearly distinguish whether a problem stems from insufficient development or from excessive development. Technology should not be restricted solely on the basis of social reaction; technological iteration may itself solve the problem. Self-driving taxis, for instance, are on track to replace traditional taxis; when problems arise, the question is how to keep advancing the technology while achieving a smooth transition.
*2. Balancing with Social Acceptance*
– New technologies trigger social resistance; ride-hailing drivers, for example, worry about autonomous driving. The essential task is to create more high-quality jobs through technological innovation, adjust production relations, and unlock the new economy's potential to generate incremental employment.
**IV. Promoting Good AI Innovation in the Technology Industry**
*1. The Industry Needs to Objectively Reflect Technological Development*
– Avoid being led by public opinion; communicate honestly about both the capabilities and the limitations of large-model technology.
*2. Attach Importance to Ethical Issues*
– Draw on the evolution of corporate governance: moving from meeting user needs to considering industry partners and assuming social responsibility, and conducting responsible innovation throughout.
*3. Enterprises Improve Technology*
– Enterprises hold the decision-making power and should improve their technology on the basis of ethics and social responsibility. Anthropic, for example, deliberately imposes limits on its own technology for safety reasons.
**V. Promotion of the Concept of AI for Good**
*1. Good AI and AI for Good*
– Good AI should be safe and inclusive, as seen in domestic enterprises' efforts toward inclusiveness in digital technology. AI for Good extends to solving social problems such as climate change and poverty reduction, and is ultimately realized as AI for Human.
*2. Pay Attention to the Actual Impact of AI Applications*
– Study how intelligent government-affairs tools affect grassroots civil servants and the public. Foreign companies' basic-income experiments also merit consideration by domestic enterprises. European scholars likewise emphasize that Ethical AI is the product of social interaction and must incorporate user experience.
**VI. The Relationship between AI Development, Cultural Diversity, and Education**
*1