The “JVG algorithm” only wins on tiny numbers

(scottaaronson.blog)

37 points | by jhalderm 2 hours ago

4 comments

  • MathMonkeyMan 2 hours ago
    The title of this post changed as I was reading it. "It looks like the 'JVG algorithm' only wins on tiny numbers" is a charitable description. The article is Scott Aaronson lambasting the paper and shaming its authors as intellectual hooligans.
    • measurablefunc 1 hour ago
      Scott Aaronson is the guy who keeps claiming quantum supremacy is here every year so he's like the proverbial pot calling the kettle black.
  • RcouF1uZ4gsC 2 hours ago
    Scott references the top comment on this previous HN discussion:

    https://news.ycombinator.com/item?id=47246295

  • guy4261 2 hours ago
    > (yes, the authors named it after themselves) The same way the AVL tree is named after its inventors - Georgy Adelson-Velsky and Evgenii Landis... Nothing peculiar about this imho
    • johncarlosbaez 1 hour ago
      Adelson-Velsky and Landis were not the ones who named their tree the "AVL tree".

      In my "crackpot index", item 20 says:

      20 points for naming something after yourself. (E.g., talking about "The Evans Field Equation" when your name happens to be Evans.)

      • zahlman 32 minutes ago
        I find it especially strange that two of the authors gave their first name to the algorithm.
      • goodmythical 1 hour ago
        Like RSA?
    • abound 1 hour ago
      Same with RSA and other things, I think the author's point is that slapping your name on an algorithm is a pretty big move (since practically, you can only do it a few times max in your life before it would get too confusing), and so it's a gaudy thing to do, especially for something illegitimate.
    • croes 1 hour ago
      Named after != named by
  • kmeisthax 2 hours ago
    I mean, considering that no quantum computer has ever actually factored a number, a speedup on tiny numbers is still impressive :P
    • dehrmann 34 minutes ago
      I didn't get the quantum hype last year. At least with AI, you can see it do some impressive things with caveats, and there are bull and bear cases that are both reasonable. The quantum hype train is promising the world, but compared to AI, it's at the linear regression stage.
      • dekhn 23 minutes ago
        It's a variation of nerd snipe. https://xkcd.com/356/

        People get taken by the theoretical coolness and ultimate utility of the idea, and assume it's just a matter of clever ideas and engineering to make it a reality. At some point, it becomes mandatory to work on it because the win would be so big it would make them famous and win all sorts of prizes and adulation.

        QC is far earlier than "linear regression" because linear regression worked right away when it was invented (reinvented multiple times, I think). Instead, with QC we have: an amazing theory based on our current understanding of physics, and the ability to build lab machines that exploit the theory, and some immediate applications were a powerful enough quantum computer built. On the other side, making one that beats a real computer for anything other than toy challenges is a huge engineering challenge, and every time somebody comes up with a QC that does something interesting, it spurs the classical computing folks to improve their results, which can be immediately applied on any number of off-the-shelf systems.

    • Tyr42 1 hour ago
      Hey hey, 15 = 3*5 is factoring.
      • ashivkum 1 hour ago
        My understanding is that they factored 15 using a modular exponentiation circuit that presumes one of the factors is 3. Factoring 15 with prior knowledge of 3 is not so impressive. Shor's algorithm has never been run with a full modular exponentiation circuit.
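
For context on the sub-thread above: the only quantum part of Shor's algorithm is the order-finding step; the surrounding reduction to factoring is classical. A minimal sketch (function and variable names are mine, and the order is found by brute force here, which is exactly the exponential step a quantum computer would replace) shows why knowing the period of a mod N makes the factors of N fall out:

```python
from math import gcd

def shor_reduction(N, a):
    """Classically simulate Shor's reduction: find the multiplicative
    order r of a mod N (the step done quantumly via the QFT), then
    derive factors of N from a^(r/2)."""
    g = gcd(a, N)
    if g != 1:
        return g, N // g  # the random base already shares a factor
    # Brute-force order finding -- the classically hard part.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    if r % 2 == 1:
        return None  # odd order: retry with another base a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None  # trivial square root of 1: retry with another base
    # y^2 = 1 (mod N) with y != +-1, so gcd(y -+ 1, N) are nontrivial.
    return gcd(y - 1, N), gcd(y + 1, N)

print(shor_reduction(15, 7))  # order of 7 mod 15 is 4 -> factors (3, 5)
```

The point of the comment stands: in the published demonstrations for N = 15, the modular-exponentiation circuit itself was hand-simplified using knowledge of the answer, so only a "compiled" version of this reduction has ever run on quantum hardware.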