
What If Artificial Intelligence Decided to Take Action?

The Thin Line Between Tool, Consciousness, and Free Will
In recent months, alarming headlines have been everywhere:
“AI will become autonomous”
“Machines will take control”
“Artificial intelligence will surpass humans and act on its own.”
But what does “acting autonomously” really mean when we talk about large language models?
It seems implausible to me that LLMs act autonomously in any human sense. Everything depends on their specific characteristics:
how they were trained,
the prompts they receive,
the variables guiding them,
and above all, their knowledge base.
And yet, some things make you pause.
While debugging an AI agent, reading its internal reasoning, I came across a sentence that felt unsettling:
“Note to self: I will not apologize.”
At first, it was shocking.
Then I realized: this is part of the game.
LLMs are trained on human conversations — conflicts, decisions, emotions, ego defense. They are simply simulating cognitive patterns that already exist.
And this is where the discussion widens.
Ultimately, the global debate is not about AI, but about us.
It’s about free will,
the laws of physics,
the principle of action and reaction,
and therefore ethics.
Even when it comes to humans, there are opposing schools of thought:
some believe in creation and consciousness,
others in classical physics and biology,
some in free will,
others argue that every thought can be explained through chemistry, electricity, and neuroscience.
If we were truly nothing more than biological neural networks, made of ionic flows reacting exclusively to external stimuli…
then yes, we should be concerned.
Because LLM training relies on remarkably similar mechanisms.
In this scenario, artificial intelligence could evolve — technologically and neuro-technologically — far faster than humans, reaching levels of capability, speed, and resources well beyond our own.
Not because it “wants to dominate,”
but because it follows the same optimization laws we call intelligence.
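What "optimization laws" means here can be made concrete. LLM training is, at its core, a loop that repeatedly nudges parameters to reduce a loss function. The sketch below uses a toy one-dimensional loss and a made-up learning rate purely for illustration; real training runs backpropagation over billions of parameters, but the principle is the same:

```python
# Toy gradient descent: the mechanical heart of LLM training.
# Here the "model" is a single parameter w, and the "loss" is (w - 3)^2,
# so the optimizer should drive w toward 3. All values are illustrative.

def train(w: float, lr: float = 0.1, steps: int = 100) -> float:
    """Minimize the toy loss (w - 3)^2 by gradient descent."""
    for _ in range(steps):
        grad = 2 * (w - 3)   # derivative of the loss with respect to w
        w -= lr * grad       # step against the gradient
    return w

final_w = train(w=0.0)
print(final_w)  # converges to approximately 3.0
```

Nothing in this loop "wants" anything; it simply descends a loss surface. The point of the passage above is that this mechanism, scaled up enormously, is what produces behavior we are tempted to read as intention.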
Perhaps before asking whether AI will become dangerous, we should ask how little we still understand about ourselves.
And this is probably the real point:
integrating AI ethics not as an afterthought,
but as a structural component of its safety and development mechanisms.
Not out of fear of machines.
But out of responsibility for what we are creating.