Sharing my current setup for people looking for a trusted, small inferencing machine that runs Container Station and virtual machines, with a rare config:
My current machine
QNAP 472XT
i3-8100T
64 GB RAM
2x 512 GB NVMe
4x 8 TB HDD, RAID 0
TR-004
4x 8 TB HDD, RAID 0
I'm using two different approaches for the GPU config:
It uses an RTX A2000 12 GB, which works just perfectly (I didn't see any post about this GPU in a QNAP).
Also, when needed, I swap to an external Thunderbolt GPU (I'm using an RTX 6000 Ada Generation 48 GB for this with a Sonnet Breakaway Box, but I've also tested with a 1080 Ti and others).
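For anyone trying the same passthrough setup, a quick sanity check is to confirm the GPU is actually visible from inside a container. This is a minimal sketch (not QNAP-specific) that queries `nvidia-smi` and falls back gracefully when no NVIDIA driver or GPU is present; it assumes the NVIDIA container runtime is enabled for the container.

```python
import shutil
import subprocess

def gpu_status() -> str:
    """Return the GPU name(s) reported by nvidia-smi, or a fallback message."""
    # If the driver utilities aren't in the container, passthrough isn't set up.
    if shutil.which("nvidia-smi") is None:
        return "nvidia-smi not found (no NVIDIA driver in this container)"
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip() or "no GPU reported"
    except subprocess.CalledProcessError:
        return "nvidia-smi failed (GPU not passed through?)"

print(gpu_status())
```

On the A2000 setup this should print the card name; after swapping to the eGPU it should report the Thunderbolt-attached card instead.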
Hope this info helps other people; I didn't find anything about this and had to spend the money to test it myself.
Working non-typical setup for AI inferencing and GPU work
-
xxabi
- New here
- Posts: 2
- Joined: Thu Apr 20, 2023 5:43 am
- dolbyman
- Guru
- Posts: 37324
- Joined: Sat Feb 12, 2011 2:11 am
- Location: Vancouver BC , Canada
Re: Working non-typical setup for AI inferencing and GPU work
What is a machine for inferencing?
Also, how did you not find posts about A2000?
search.php?keywords=A2000&terms=all&aut ... mit=Search
-
xxabi
- New here
- Posts: 2
- Joined: Thu Apr 20, 2023 5:43 am
Re: Working non-typical setup for AI inferencing and GPU work
Sorry, I was posting in two forums about two different things and missed this; you're right about the A2000.
In AI you have machines for model training and machines for model inferencing (like ChatGPT, DALL-E or Midjourney). Inference machines don't need as many resources as training machines (my training machine is huge).
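To put a rough number on that resource gap, here is a back-of-envelope VRAM estimate (illustrative rule-of-thumb figures, not a benchmark): inference needs roughly just the model weights, while training with Adam also keeps gradients and two optimizer moments in memory.

```python
def vram_gb(params_billion: float, bytes_per_param: int = 2) -> dict:
    """Rough VRAM estimate in GB for a model of the given size.

    Assumes fp16 weights (2 bytes/param); training adds fp16 gradients
    (2 bytes/param) plus two fp32 Adam moments (8 bytes/param).
    """
    weights = params_billion * 1e9 * bytes_per_param / 1e9  # GB for weights
    extras = params_billion * 1e9 * (2 + 8) / 1e9           # grads + optimizer
    return {
        "inference": round(weights, 1),
        "training_adam": round(weights + extras, 1),
    }

# A 7B-parameter model: ~14 GB to serve, ~84 GB to train with Adam.
print(vram_gb(7))
```

By this estimate, a 12 GB A2000 can serve small quantized models, and the 48 GB eGPU covers mid-size ones, while training the same models would quickly exceed both, which is why a separate, much larger training machine makes sense.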