

I was able to make an Nvidia GPU work for detection by modifying this (#2548). I started with the watsor dockerfile ( ), updating it for Ubuntu 20.04, and then the watsor GPU dockerfile ( ), building it from my new local image. Then I modified the yolo4 converter files for amd64 instead of arm, pointing them from yolo4/plugin to a tensor_assets folder I made in my build folder.
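Roughly, the images stack on top of each other. Here's a sketch of the build order; the dockerfile names and image tags are placeholders, not the actual names from my setup:

```bash
# Sketch of the image layering -- dockerfile names and tags are
# placeholders; substitute whatever you name yours locally.

# 1. Watsor base image, updated for Ubuntu 20.04
docker build -t watsor-base:local -f Dockerfile.base .

# 2. Watsor GPU image, built FROM the local base image above
docker build -t watsor-gpu:local -f Dockerfile.gpu .

# 3. Frigate amd64 image on top of the GPU image, with the yolo4
#    converter assets copied in from the tensor_assets folder
docker build -t frigate-tensorrt:local -f Dockerfile.amd64 .
```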

Then I updated the Dockerfile for amd64 instead of arm. Once I had that image, I was able to have my docker-compose file use it instead of the standard frigate image. I added

```yaml
model:
  path: /yolo4/t
  labelmap_path: /labelmap.txt
  width: 416
  height: 416
```

to my config.yml file and started it up. Using a 2GB Quadro P400 card, my inference speed is around 16-17ms (9-10ms if using t).
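For anyone wanting a starting point, the compose service ends up looking something like this. The image tag and host paths are just placeholders, and it assumes the nvidia container runtime is installed on the host:

```yaml
# Sketch of the compose service -- image tag and paths are placeholders.
version: "2.3"
services:
  frigate:
    image: frigate-tensorrt:local   # whatever you tagged the final build with
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      # "video" is what lets ffmpeg use the nvidia hardware decoder
      - NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
    volumes:
      - ./config.yml:/config/config.yml:ro
    shm_size: "64mb"
```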

For context, I run frigate on a Proxmox LXC container with 3 CPU cores assigned and 2GB of RAM, using the nvidia hardware decoder. Previously my inference times were between 80-100ms with 75-80% CPU usage on that container. I'm sure the docker files can be cleaned up and the process streamlined; there is probably some overlap between what gets built and installed across the different docker images, since I was combining multiple other docker files. If anyone is able to test this out and see if it works for them as well, that would be awesome. Let me know if anything I posted doesn't work, in case I forgot to include any changes I had to make. If others can get it working, hopefully it can get cleaned up and added at some point.
