I started tinkering with Frigate and saw the option to use a Coral AI device to process the video feeds for object recognition.

So I started looking into what else the device could do, and everything listed on the site is related to human recognition (poses, faces, body parts) or voice recognition.

Somewhere I read that Stable Diffusion and LLMs are not an option, since they require a lot of RAM, which these kinds of devices lack.

What other good or interesting uses do these devices have? What are you using them for in your own deployed services?

  • Sims@lemmy.ml
    6 months ago

    Image recognition, speech-to-text, text-to-speech, classification, and other smaller models. They are fast but have almost no onboard memory and are heavily dependent on data access speed. AFAIK, transformer-based models are hugely memory-bound and may not be a good match if run externally over USB 3.
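
    To give a sense of the kind of small-model workload these devices handle well, here is a minimal image-classification sketch using Google's pycoral library. The model, label file, and image paths are hypothetical placeholders; any Edge TPU-compiled .tflite classifier and matching labels file should work.

    ```python
    # Minimal sketch: image classification on a Coral Edge TPU via pycoral.
    from PIL import Image
    from pycoral.adapters import classify, common
    from pycoral.utils.dataset import read_label_file
    from pycoral.utils.edgetpu import make_interpreter

    # Hypothetical paths; substitute your own Edge TPU-compiled model and labels.
    MODEL = "mobilenet_v2_1.0_224_quant_edgetpu.tflite"
    LABELS = "imagenet_labels.txt"

    interpreter = make_interpreter(MODEL)   # loads the model onto the Edge TPU
    interpreter.allocate_tensors()

    # Resize the input image to whatever size the model expects (e.g. 224x224).
    image = Image.open("photo.jpg").convert("RGB").resize(common.input_size(interpreter))
    common.set_input(interpreter, image)

    interpreter.invoke()                    # inference runs on the accelerator
    labels = read_label_file(LABELS)
    for c in classify.get_classes(interpreter, top_k=3):
        print(labels.get(c.id, c.id), f"{c.score:.2f}")
    ```

    The model weights stay small and quantized so they fit the accelerator, which is why this style of workload suits the Coral far better than RAM-hungry transformer models.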