A software developer and Linux nerd, living in Germany. I’m usually a chill dude, but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt; I usually try to be nice and give good advice, though.

I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.

  • 0 Posts
  • 8 Comments
Joined 8 months ago
Cake day: June 25th, 2024

  • I don’t think the internet gave particularly good advice here. Sure, there are use-cases for both, and that’s why we have both approaches available. But you can’t say VMs are better than containers; they’re a different thing, and they might even be worse in your case. In the end, all “simple truths” are wrong.


  • Thanks, and I happen to already be aware of it. It doesn’t have any of that. And it’s more complicated to hook it into other things, since good old Postfix is the default case and the well-trodden path. I think I’ll try Stalwart anyway. It’s a bit of a risk, though, since it’s a small project with few developers and the future isn’t 100% certain. And I have to learn all the glue between the mailserver components myself, since there aren’t any tutorials out there. But the frontend, the configuration, and the setup all seem to make sense.
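
    Once it’s up, I’ll probably sanity-check the submission path with a few lines of Python before pointing anything real at it. A minimal sketch, assuming Stalwart has a submission listener on port 587 with STARTTLS; the hostname and credentials are placeholders:

```python
import smtplib
from email.message import EmailMessage

HOST = "mail.example.org"  # placeholder hostname for the Stalwart instance
PORT = 587                 # standard submission port, assumed to be enabled

msg = EmailMessage()
msg["From"] = "test@example.org"
msg["To"] = "test@example.org"
msg["Subject"] = "Stalwart smoke test"
msg.set_content("If this arrives, submission and delivery work.")

with smtplib.SMTP(HOST, PORT) as s:
    s.starttls()                           # upgrade the connection to TLS
    s.login("test@example.org", "secret")  # placeholder test credentials
    s.send_message(msg)
```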





  • Yes, Deepseek V3 is a model. But what I was trying to say is: you download the file, but then what? Just having the file stored on your harddisk doesn’t do much. You need to run it; that’s called “inference” in machine learning/AI terms. The repository you linked contains some example code showing how to do it with Hugging Face’s Transformers library. But there are quite a few frameworks out there for running AI models. Ollama would be another one. And that’s not just some example code to start your own Python program from, but a ready-made project/framework with tools and frontends available, and an interface for other software to hook into.
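
    To make “hook into” concrete: Ollama exposes a local REST API, so other software can just send it an HTTP request. A minimal sketch, assuming Ollama is running on its default port 11434 and a model has already been pulled (the model tag here is just an example):

```python
import json
import urllib.request

# Ollama listens on localhost:11434 by default
url = "http://localhost:11434/api/generate"
payload = {
    "model": "deepseek-r1:8b",  # example tag, assumes this model was pulled
    "prompt": "Explain what inference means in one sentence.",
    "stream": False,  # ask for one complete JSON response instead of a stream
}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```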

    And generally, you need some software to actually do something. How fast it is depends on the software used, the hardware it’s executed on, and in this case also on the size of the AI model and its architecture. But yeah, Deepseek V3 has some tricks up its sleeves to make it very efficient. Though it is really big for home use; I think we’re looking at a mid six-figure price for the hardware to run it. Usually, people use the Deepseek R1 models, or other smaller AI models, if they run them themselves.
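
    As a rough back-of-the-envelope check (671B is DeepSeek’s published total parameter count; the bytes-per-parameter values are just the usual rules of thumb, and this ignores KV cache and other runtime overhead):

```python
# Rough estimate of the memory needed just to hold Deepseek V3's weights
params = 671e9  # published total parameter count (MoE, ~37B active per token)

for precision, bytes_per_param in [("FP16", 2), ("FP8", 1), ("4-bit", 0.5)]:
    gib = params * bytes_per_param / 2**30
    print(f"{precision}: ~{gib:,.0f} GiB of weights")

# FP16: ~1,250 GiB; FP8: ~625 GiB; 4-bit: ~312 GiB
# Far beyond any single consumer GPU, hence the six-figure hardware estimate.
```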