Joseph Plazo Didn’t Come to Preach Automation. He Came to Warn Us.
The scene was set for celebration. But what happened instead left the audience reeling.
MANILA — At the heart of the Philippines’ premier university, Asia’s brightest students—engineers, economists, AI researchers—converged to see the future of trading laid bare by machines.
They expected Plazo to preach automation, unveil breakthroughs, and fan their enthusiasm.
Instead, they got silence, contradiction, and truth.
---
### The Opening That Made Them Stop Breathing
Some call him the architect of near-perfect trading machines.
So when he took the stage, the room went still.
“AI can beat the market. But only if you teach it when not to try.”
The note-taking paused.
That sentence wasn’t just provocative—it was prophetic.
---
### Dismantling the Myth of Machine Supremacy
Plazo didn’t pitch software.
He projected mistakes: neural nets falling apart under real-world pressure.
“Most models,” he said, “are just statistical mirrors of the past.”
Then, with a pause that felt like a punch, he asked:
“Can it compute the panic of dominoes falling on Wall Street? Not the charts. The *emotion*.”
You could have heard a pin drop.
---
### The Smartest Students in Asia Push Back
Of course, the audience pushed back.
A PhD student from Kyoto noted how large language models now detect emotion in text.
Plazo nodded. “Knowing someone’s angry doesn’t tell you why—or what comes next.”
A data scientist from HKUST proposed that probabilistic models could one day simulate conviction.
Plazo’s reply was metaphorical:
“You can simulate weather. But conviction? That’s lightning. You can’t forecast where it’ll strike. Only feel when it does.”
---
### The Real Problem Isn’t AI. It’s Us.
Plazo’s core argument wasn’t that AI is broken; it was that humans are outsourcing responsibility to it.
“People are worshipping outputs like oracles.”
Yet his own firm does use AI, just not blindly.
His company’s systems scan sentiment, order flow, and liquidity.
“But every output is double-checked by human eyes.”
And with grave calm, he said:
“‘The model told me to do it.’ That’s what we’ll hear after every disaster in the next decade.”
---
### The Warning That Cut Through the Code
Across Asian tech hubs, AI is gospel.
Dr. Anton Leung, a Singapore-based ethicist, whispered after the talk:
“He reminded us: tools without ethics are just sharp objects.”
In a private dialogue among professors, Plazo pressed the point:
“Don’t just teach students to *code* AI. Teach them to *think* with it.”
---
### No Clapping. Just Standing.
The crowd expected a crescendo. They got a challenge.
“The market isn’t math,” he said. “It’s human, messy, unpredictable. And if your AI can’t read character, it’ll miss the plot.”
And then, slowly, they stood.
Some compared it to hearing Taleb for the first time. Others said it reminded them that sometimes, in the age of machines, the most human thing is to *say no to the model*.