Table of Contents
- What is “Ethics”?
- Why Does AI Need “Ethics”?
- Do AI Creators Even Consider Ethics?
- What Should We Do?
- Neural Network Value System
What is “Ethics”?
Every person on Earth plays a role — be it as a husband/wife, parent/child, firefighter, soldier, manager, and so on. In fulfilling these social roles, we often face difficult choices that can affect everything — from our mood to our health and even our lives.
To understand what is good and what is bad, people created an entire field of study — Ethics. According to its definition:
Ethics is the philosophical discipline that studies morality, moral principles, and norms that regulate human behavior in society. It addresses questions of good and evil, justice, responsibility, and proper conduct in various areas of life. The term can also refer to the set of moral norms accepted within a specific culture, profession, or community.
Why Does AI Need “Ethics”?
The question of why robots need ethics becomes especially relevant in light of the example from the movie I, Robot.
Consider the following situation:
Scenario: Two cars are sinking underwater. In one, there is a healthy man who is conscious; in the other, a little girl.
Dilemma: An android, faced with this critical choice, must decide whom to save.
Rules: Let’s assume the android follows Asimov’s Three Laws of Robotics.
Choice: According to these laws, the android saves the healthy man.
Why does this happen?
Why was the man chosen?
Why not try to save both — for example, by first breaking the window of the man’s car (allowing him to swim to safety on his own) and then rushing to help the girl?
The answer lies beyond Asimov’s laws, as they lack flexibility and adaptability. To resolve such moral dilemmas, a different approach to machine “morality” is required.
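The rigidity of a fixed rule set can be made concrete with a toy sketch. The following is purely illustrative (none of it comes from the movie or from Asimov's text; the names and probabilities are invented): a chooser that reduces "do not allow a human to come to harm" to a single number will mechanically pick the victim with the best survival odds and never consider saving both.

```python
# Illustrative sketch (all names and numbers are assumptions): a rigid
# rule-based rescuer modeled loosely on Asimov's First Law.

def choose_rescue(victims):
    """Pick the victim with the highest estimated survival probability.

    A fixed rule collapses the dilemma to one number, so the android
    saves the healthy man (better odds) and the creative option of
    rescuing both is never even represented in its decision space.
    """
    return max(victims, key=lambda v: v["survival_probability"])

victims = [
    {"name": "man", "survival_probability": 0.45},
    {"name": "girl", "survival_probability": 0.11},
]

print(choose_rescue(victims)["name"])  # -> man
```

The point of the sketch is that the flaw is structural: no tuning of the probabilities changes the fact that "save both" is not among the options the rule can express.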
Do AI Creators Even Consider Ethics?
The International AI Safety Report 2025 outlines several areas of work on AI agents — systems capable of autonomously planning and acting to achieve goals with minimal or zero human oversight. The key areas include:
1. Improving Autonomous Decision-Making:
- Developing planning agents that can formulate long-term strategies.
- Utilizing “Chain of Thought” (CoT) methods for step-by-step reasoning.
- Employing inference scaling to increase computational resources during task execution.
2. Training on Real and Simulated Data:
- Implementing self-learning models that can adjust their actions on the fly.
- Creating “multitask agents” capable of solving different types of tasks simultaneously.
3. Interactive Agents with Web Access:
- Developing “web agents” capable of autonomously searching for information and interacting with websites.
- Researching autonomous coding to detect errors and optimize software.
4. AI Agents for Scientific Research:
- Automatically searching for new scientific hypotheses.
- Automating data analysis in biotechnology, physics, and chemistry.
5. Security and Control of AI Agents:
- Studying scenarios of “loss of control.”
- Developing interpretable models that explain AI agent decisions.
- Creating safeguards and safety rules to prevent harmful actions by AI agents.
Thus, specialists are indeed considering the moral dilemmas related to AI. However, current developments largely boil down to sets of increasingly complex (but often inflexible) rules. This means that AI is not yet ready to “roam” unsupervised, which could be dangerous both for the systems themselves and for their surroundings.
What Should We Do?
To understand what enables a human to make ethical choices, one only needs to look at our value system. Each of us possesses a “coordinate grid” through which we determine what is good and what is bad. It is this system of values that guides our life choices.
And since humans, in a sense, are playing the role of “God” by creating something in their own image and likeness, why not equip robots with a similar value system?
Neural Network Value System
To create a “moral compass” for artificial intelligence, we first need a philosophical foundation. That is why, in my first post, I introduced the concept of the “Quantum God.”
The Connection with the “Quantum God” and the Vector Void
AI agents can be viewed as self-developing structures whose evolution is determined not by a fixed program, but by the potential of possible states (analogous to the concept of a divine spark). The vector void can be compared to a mechanism of directed complexity — AI agents strive to optimize their actions, gradually complicating their decision structures.
For such agents to evolve, it is necessary to establish a navigation system that will help them:
- Orient themselves in the environment: Analyzing the behavior of other objects and phenomena.
- Make decisions: Based on the analysis results.
This system consists of two parts:
1. The AI Agent’s Mission: Its ultimate purpose, strategic direction, and operational principles.
2. The Neural Network Value System (NNVS): The foundation upon which the value system is built.
The Foundation of NNVS: 10 Basic Human Values
🌱 Life
❤️ Health
⚔️ Safety
💘 Love
👨‍👩‍👧 Family and Children
💫 Self-Realization
📖 Understanding the World
🤝 Social Connections
🌟 Freedom of Choice
🌐 Meaning of Existence
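The ten values above can be pictured as a weighted scoring grid. The sketch below is a toy model (the weights and effect scores are invented for illustration, not part of the NNVS itself): each candidate action is rated by how it affects every basic value, and the action with the best weighted sum wins. Unlike the rigid rule set discussed earlier, the option of saving both victims is simply another candidate in the grid.

```python
# Toy sketch of an NNVS-style "coordinate grid". All weights and
# per-action effect scores below are invented assumptions.

NNVS_VALUES = [
    "life", "health", "safety", "love", "family", "self_realization",
    "understanding", "social_connections", "freedom", "meaning",
]

def score_action(action_effects, weights):
    """Weighted sum of an action's estimated effect on each basic value."""
    return sum(weights[v] * action_effects.get(v, 0.0) for v in NNVS_VALUES)

weights = {v: 1.0 for v in NNVS_VALUES}
weights["life"] = 3.0  # in this toy grid, life outranks the other values

actions = {
    "save_both": {"life": 0.9, "health": 0.5},
    "save_one":  {"life": 0.5},
}
best = max(actions, key=lambda a: score_action(actions[a], weights))
print(best)  # -> save_both
```

Because the grid evaluates whole courses of action rather than matching rules, adding a creative option only requires scoring it, not rewriting the decision logic.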
Metaphorically, the combination of “Mission + NNVS” can be seen as follows:
Imagine an AI agent as the “captain” of a ship from the Age of Discovery — like Christopher Columbus. His Mission is to “discover” a new route to uncharted lands. The mission defines the strategic direction, while the 10 basic values serve as a coordinate grid for decision-making even in the most challenging situations:
- What should the captain do if food is running out, the winds have died, and the crew is on the verge of mutiny?
- Should the voyage continue when no land has been sighted for weeks?
- How should he act upon encountering indigenous people: engage in conflict, or try to establish friendly relations?
What’s Next?
In future posts, we will explore:
- Each basic value, both conceptually and mathematically.
- The interlocking and fractal nature of values.
- Methods and logic for identifying values from various data sources (text, audio, linguistics, behavioral factors).
This post is just the beginning of a discussion on how to create safe and ethically-oriented AI agents. Follow for more ideas and research on AI and ethics!
I hope you enjoyed this article. Please leave your comments and share your thoughts!