At least the authors acknowledge it for what it is: a tiny model on a tiny corpus that is worse than comparable transformers in terms of accuracy. I like the experimentation with new designs, and one doesn't always need to show near-SOTA results. From a brief inspection, however, I think it will be hard for the work to become a high-profile conference acceptance without significant additional work.
I would really like to see more testing with a deeper hierarchy and with nonzero alpha and beta.
Skimming it, I get this incredible sci-fi feeling of AI being the thing that solves P vs. NP (the diagrams are reminiscent of boolean/arithmetic circuits, which have produced some results in the computational complexity space).