Neurosymbolic AI Agents
or Contemporary Classic AI
Extended from my original LinkedIn post, a quoted repost of Kerry C.’s post on Neurosymbolic AI.
I’ve been advocating for hybrid approaches for a long time now.
What we call AI agents is basically just the tip of the potential iceberg, and unfortunately we’ve seen too many attempts at coining fancy terms by people who know too little about Classic AI (or GOFAI, Good Old-Fashioned AI).
I frequently criticise connectionist approaches on the grounds that they make great operators but don’t work as a full program.
I’ve raised the question of how an LLM would do multi-tasking, and it’s currently architecturally infeasible.
Yes, you can run two LLMs in parallel - but that’s two separate calls, not one system doing actual multitasking.
So it’s an operator.
A “method”.
A “nucleus” in neurological terms.
Not “a brain”.
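To make that concrete, here’s a minimal sketch in Python (the async call_llm stub is hypothetical, standing in for whatever provider client you actually use): the “parallelism” lives entirely in the orchestration layer, and each call carries its own isolated context - nothing inside either model is juggling both tasks.

```python
import asyncio

async def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; replace with your provider's client."""
    await asyncio.sleep(0.1)  # simulate network/inference latency
    return f"[completion for: {prompt!r}]"

async def main():
    # Two tasks "in parallel" -- but each is a separate call with its own
    # isolated context window. The concurrency lives in the event loop,
    # not inside the model: no shared working memory, no interleaving.
    summary, translation = await asyncio.gather(
        call_llm("Summarise the quarterly report."),
        call_llm("Translate the press release into French."),
    )
    print(summary)
    print(translation)

asyncio.run(main())
```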
With some wise prompting you can get it to run two tasks sequentially, but expect a lot of memory bleeding, especially given the growing indications that accuracy degrades as context length increases.
This means we’re technically stuck at a “single-thread” generation of AI models, where the solution to program scale and complexity is adding more computers to the local network.
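Here’s a minimal sketch of that single-thread workaround, again assuming a hypothetical call_llm stub: tasks are chained one after another in a single conversational thread, with a crude compaction step between them to limit how much the growing context bleeds into later answers.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a synchronous LLM call; swap in your provider's client."""
    return f"[completion for: {prompt[:40]!r}...]"

def run_tasks_sequentially(tasks: list[str]) -> list[str]:
    """Chain tasks in one conversational thread, compacting the running context
    between steps so earlier turns don't bleed into (or crowd out) later ones."""
    context = ""
    results = []
    for task in tasks:
        result = call_llm(f"Context so far:\n{context}\n\nTask: {task}")
        results.append(result)
        # Compact instead of appending raw transcripts: a crude guard against
        # the accuracy drop that tends to come with ever-longer contexts.
        context = call_llm(f"Summarise the work so far in 3 bullet points:\n{context}\n{result}")
    return results

print(run_tasks_sequentially([
    "Extract the key figures from the report.",
    "Draft a one-paragraph memo from those figures.",
]))
```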
The Neurosymbolic approach doesn’t suddenly take us into a full “multi-task” model, but paves the way to it.
I’d actually challenge Gary Marcus’s notes by adding that current approaches that bolt a symbolic subsystem onto the LLM pipeline are still at the stage of Linear Programming, while our next step will be Dynamic Programming.
We’re pretty much knocking on its door already - take the Neurosymbolic AI approach along with Context Engineering and it becomes a Dynamic Neurosymbolic AI.
It means the LLM digests its given input neurosymbolically and dynamically generates new in-context artifacts with symbolic grounding.
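As a toy illustration of the dynamic part (llm_emit_equations is a hypothetical stand-in, and algebra via SymPy is just one possible symbolic grounding): the LLM’s job is to emit a symbolic artifact for the given input, and a deterministic engine - not the model - performs the exact reasoning step.

```python
import json
import sympy as sp

def llm_emit_equations(problem: str) -> str:
    """Hypothetical stand-in: an LLM prompted to translate a word problem into
    symbolic equations (as JSON) instead of guessing the numeric answer."""
    # For: "Ann has 3 times as many apples as Bob; together they have 24."
    return json.dumps(["a + b - 24", "a - 3*b"])

def solve_symbolically(equations_json: str) -> dict:
    # The deterministic symbolic engine, not the LLM, performs the exact step:
    # the generated artifact is grounded in algebra rather than token statistics.
    exprs = [sp.sympify(e) for e in json.loads(equations_json)]
    solutions = sp.solve(exprs, dict=True)
    return solutions[0] if solutions else {}

problem = "Ann has 3 times as many apples as Bob; together they have 24."
print(solve_symbolically(llm_emit_equations(problem)))  # expected: {a: 18, b: 6}
```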
Instead of scaling to trillion-parameter models for uber in-context symbolic generation, training smaller models may take us to a dynamic neurosymbolic mixture-of-experts.
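A hand-wavy sketch of what that routing could look like (every name here is hypothetical): a symbolic planner decomposes the request into typed sub-goals, and each sub-goal is dispatched to a small specialised expert rather than one giant generalist.

```python
from typing import Callable

def math_expert(subtask: str) -> str:
    return f"[small math-tuned model answers: {subtask!r}]"

def code_expert(subtask: str) -> str:
    return f"[small code-tuned model answers: {subtask!r}]"

EXPERTS: dict[str, Callable[[str], str]] = {
    "math": math_expert,
    "code": code_expert,
}

def plan(request: str) -> list[tuple[str, str]]:
    """Hypothetical symbolic planner: in practice an LLM or rule system that
    emits (expert_type, subtask) pairs as a structured, inspectable artifact."""
    return [
        ("math", "Compute the amortisation schedule."),
        ("code", "Generate a CSV export of the schedule."),
    ]

def dispatch(request: str) -> list[str]:
    # The routing is explicit symbolic glue between small experts,
    # rather than an opaque monolith doing everything in one pass.
    return [EXPERTS[kind](subtask) for kind, subtask in plan(request)]

print(dispatch("Build a loan calculator report."))
```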
Where does it end?
The real question is how do we start making AI useful and valuable for real tasks.
Deploying it to the real world, to be used by real people on real problems.
We’re doing quite well at advancing the SOTA of AI models, but we’ve been falling behind on understanding how AI fundamentally changes many paradigms of User Experience and Usability, of Apps and the Web (4.0) itself, and how those become significant factors in product adoption, retention and monetization.
PS.: I think Symbolic AI as a term is way better than goofy GOFAI or Classic AI, because it reminds people that symbolic methods can be modern and push things forward - just like people call Classical music Serious music, though I’m fine with just contemporary classical music. Maybe I’ll be fine with Contemporary Classic AI too.
PPS.: Here are some Contemporary Classical Music suggestions: