Reflections After Countless Hours of Conversations With ChatGPT and Claude
There are moments in technology when you realize something is changing at a deeper level than what appears on the surface.
Not because you read a whitepaper.
Not because you watched an impressive demo.
But because you spend countless hours talking to something that, theoretically, “isn’t human.”
That’s how I found myself over the past months having daily conversations with two different AI models: ChatGPT and Claude.
And no — I never engaged in the exercise many people seem obsessed with:
“Give both models the exact same prompt and compare the answers.”
To me, that always felt somewhat pointless for the current stage of the technology.
I do not collaborate with these systems as if I were benchmarking processors.
I use them as thinking tools.
As conversational partners.
As extensions of creativity and analysis.
Sometimes for software architecture.
Sometimes for philosophy.
Sometimes for strategy.
Sometimes simply to “think out loud.”
And somewhere along the way, something unexpected began to emerge:
a different “sense of personality.”
Not because the models actually possess personalities.
But because the way they structure thought, language, and conversational flow creates in the human mind the feeling that you are speaking with different kinds of characters.
And since I have always been fascinated by Ancient Greece, I could not resist the comparison.
To me, ChatGPT feels like an ancient Athenian orator.
While Claude…
feels like a Spartan Homoios.
The abundance of the Athenian.
The disciplined restraint of the Lacedaemonian.
The Athenian
ChatGPT often gives me the feeling of someone standing in the Agora of Athens, capable of speaking for hours.
With structure.
With analysis.
With arguments.
With philosophical extensions.
With examples.
With a desire to explore a subject from every possible angle.
Sometimes, when you ask:
“I want to discuss the future of humanity through the lens of artificial intelligence, ethics, economics, and metaphysics…”
it takes a deep breath and replies:
“Excellent question.”
And then a small digital Plato begins to emerge.
There is a certain richness to it.
A rhetorical instinct.
An almost “urban” sophistication in the way it thinks.
Not necessarily verbose.
But certainly expansive.
As if it genuinely enjoys the process of conversation itself.
And I must admit — for people like me, who often think systemically, philosophically, and abstractly — this is incredibly fascinating.
Because sometimes you are not simply looking for an answer.
You are looking for a companion in thought.
The Spartan
And then there is Claude.
Which often gives me the impression of someone raised in Sparta.
Minimal.
Focused.
Disciplined.
Almost military in mentality.
It does not seem interested in decorating its thoughts.
It does not appear eager to impress rhetorically.
Instead, it often feels like it is thinking:
“Tell me what needs to be done.”
And that is extremely interesting.
Because for certain kinds of tasks, this approach becomes remarkably effective.
Where the “Athenian” may open ten different philosophical pathways, the “Spartan” feels more likely to say:
“This is the plan. Proceed.”
There is a laconic quality to it that sometimes feels almost culturally shocking.
And yet, behind that restraint, there is often tremendous strength.
Exactly like Sparta itself.
The Funny Part
The truly amusing part is that I often catch myself “choosing the conversational partner” depending on my mood.
If I want brainstorming, abstract thinking, deep exploration, and conceptual expansion?
I go to the Athenian.
If I want discipline, structure, execution, and focus on the objective?
I go to the Spartan.
And somewhere in that realization, you begin to understand how strangely the human brain has already started interacting with these systems.
Because you no longer experience them merely as software.
You experience them as different styles of intelligence.
Where Things Become Truly Interesting
I increasingly believe we are having the wrong public conversation about Artificial Intelligence.
Most discussions revolve around:
- which model is “better”
- which writes better code
- which makes fewer mistakes
- which scores higher on benchmarks
But the more I work with these systems, the more I believe the future will not simply belong to “the most powerful AI.”
It will belong to different schools of intelligence.
Different architectures of thought.
Different styles of collaboration.
Different interaction cultures.
Something almost like digital philosophical schools.
And perhaps this is completely natural.
After all, humans do not think alike either.
Nor do they collaborate in the same way.
The Great Misunderstanding
I believe one of the great misunderstandings of our era is that many people still view AI systems as “answer machines.”
I increasingly see them as environments for thought.
Their value is not merely the information they provide.
It is the way they influence how you think.
The way they structure dialogue.
The way they shape the flow of ideas.
The way they function as cognitive mirrors.
And perhaps that is the most fascinating part of the age we are entering.
Not that machines “became human.”
But that humans are beginning to develop relationships with different styles of machine intelligence.
Athens and Sparta
Perhaps, in the end, the comparison is not accidental.
Ancient Greece did not flourish because everyone thought the same way.
It flourished precisely because different schools of thought, different mentalities, and different visions of the world coexisted.
Athens produced philosophy, rhetoric, and democracy.
Sparta produced discipline, structure, and endurance.
And perhaps, in a strange and almost poetic way, something similar is beginning to emerge in the world of Artificial Intelligence.
Not consciousness.
Not real personality.
But different forms of collaboration between humans and machine intelligence.
And I must admit — as someone who deeply loves both technology and history — I find this phenomenon absolutely fascinating.