How are people treating/deploying on-device AI?

(minimal-light-theme.yliu.me)

1 point | by romerocruzsa 6 hours ago

1 comment

  • romerocruzsa 6 hours ago
    So, I've been working on Edge AI/TinyML for over a year now and have been amazed at how much the field has matured toward deploying intelligence that runs on low-power hardware... or so I thought.

    On-device or edge computing has (for me at least) split into two camps: those with ML expertise who obsess over algorithms and optimization methods, and hobbyists whose otherwise useful case studies stall out at LLMs on a Raspberry Pi 5 with dozens of attachments. Being guilty of waddling between both camps myself, I started to wonder how many other people out there are exploring local AI and how they go about it.

    PS: I've seen the work from the MIT HAN Lab (it motivated me to go deeper into quantization and distributed edge computing) and community-led contributions like https://canirun.ai and more.
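Since quantization comes up as one of the optimization methods that camp obsesses over, here is a minimal sketch of the core idea behind symmetric int8 post-training quantization. The function names are mine and it is illustrative only, not taken from any particular framework:

```python
# Illustrative sketch of symmetric int8 quantization (hypothetical
# helper names, not a real library API). Real toolchains add
# per-channel scales, calibration, and quantization-aware training.

def quantize_int8(weights):
    """Map float weights to int8 range [-127, 127] with one scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from quantized values."""
    return [v * scale for v in q]

weights = [0.82, -0.41, 0.03, -1.27]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

The point is the trade: each weight shrinks from 32 bits to 8, at the cost of a small reconstruction error bounded by half the scale, which is why quantization is such a staple for fitting models onto low-power edge hardware.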