Most of the threads I’ve found on other sites (both Reddit and the Synology forums) have basically said “go with Docker”. But what do you actually gain from this?

People suggest it’s more up-to-date, and maybe for some packages that’s true? But for Nextcloud specifically the Package Center looks pretty current: 32.0.3 came out a day ago and isn’t yet supported, but the version immediately preceding it, from 3 weeks ago, is.

I’ve never run Nextcloud before, but I would assume the Package Center version would be way easier to install and keep up-to-date than Docker. So what’s the reason everyone recommends Docker? Is it easier to extend?

  • atzanteol@sh.itjust.works · 2 days ago

    > But what do you actually gain from this?

    Isolation. The number one reason to use docker is isolation. If you’ve not tried to run half a dozen services on a single server then this may not mean much to you, but it’s a “pretty big deal.”

    I have no idea how the synology app store works from this pov - maybe it’s docker under the covers. But in general I despise the idea of a NAS being anything other than a storage server. So running Nextcloud, Immich, etc. on a NAS is pretty much anathema to me either way.

    • sem@lemmy.blahaj.zone · 2 days ago

      How isolated could it really be as a docker container vs a separate machine or proxmox? You will still have to make sure that port numbers don’t conflict, etc., but now there’s an added layer of complexity (docker).

      I’m not saying it is bad, I just don’t understand the benefits vs costs.

      • atzanteol@sh.itjust.works · 2 days ago

        > How isolated could it really be as a docker container vs a separate machine or proxmox?

        You can get much better isolation with separate machines but that gets very expensive very fast.

        It’s not that it provides complete isolation - but it provides enough isolation very cheaply. You still compete with other applications for compute resources but you run in your own little filesystem jail and can run that janky python version that your application needs and not worry about breaking yum. Or you can bundle that old out-of-support version of libaio that your application requires. All of your dependencies are bundled with your application so you don’t affect the rest of the system.
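
        As a sketch, a hypothetical Dockerfile for that situation might look like this (the image tag and file names are made up):

            # everything below exists only inside this image, not on the host
            FROM python:3.8-slim                  # pinned, app-specific interpreter
            WORKDIR /app
            COPY requirements.txt .
            RUN pip install -r requirements.txt   # old/out-of-support libs stay in here
            COPY . .
            CMD ["python", "main.py"]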

        And since containers are standardized it allows you to move between physical computers without any modification or server setup other than installing docker or podman. You can run on Amazon Linux, RedHat, Ubuntu, etc. If it can run containers it can run your application. Containers can also be multi-platform so you can run on both ARM64 and AMD64 seamlessly.
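
        If you build your own images, one buildx command can produce both architectures (a sketch; the image name is made up):

            docker buildx build --platform linux/amd64,linux/arm64 -t registry.example.com/myapp:latest --push .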

        And given that isolation you can run on a kubernetes cluster, or Amazon ECS with Fargate instances, etc.

        But that starts to get very enterprisey. For the home-gamer there is still a ton of benefit to just having file-system isolation and an easy way to run an application regardless of the local system version and installed packages. It’s a bit of an “experience” thing to truly appreciate it I suppose. Like I said - if you’ve tried running a lot of services on a system in the past without containers it gets kinda complicated rather fast. Especially if they all need databases (with containers you can spin up one db for each application easily).
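
        That “one db for each application” bit really is one command per app - something like this (container names and passwords are made up):

            # two completely separate Postgres servers, one per application
            docker run -d --name wiki-db   -e POSTGRES_PASSWORD=secret1 postgres:16
            docker run -d --name photos-db -e POSTGRES_PASSWORD=secret2 postgres:16
            # both listen on 5432 inside their own container; neither touches the host's ports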

        • sem@lemmy.blahaj.zone · 1 day ago

          I still feel like I’m missing something. Flatpaks help you sidestep dependency hell, so what is docker for? What advantages does further containerization give you if you aren’t going as far as proxmox VMs?

          I guess I’ve only tried running one service at a time that needed a database, so I get it if a Docker container can include a database and a flatpak cannot.

          • atzanteol@sh.itjust.works · 1 day ago

            Flatpaks are similar, but more aimed at desktop applications. Docker containers are made for services and give more isolation on the network.

            Docker containers get their own IP addresses, they can discover each other internally, you get port forwarding, etc. Additionally you get volume mounts for persistent storage and other features.

            Docker compose allows you to bring up multiple dependent containers as a group and manage connections between them, with persistent volumes. It’ll handle lifecycle issues (restarting crashed containers) and health checks.

            An example - say you want a Nextcloud service and an Immich service running on the same host. You can create two docker-compose files, one for each, that launch the application together with its own supporting database, and give each db and application persistent volumes for storage. Your applications can be exposed to the network while the databases are visible only to other containers. You don’t need to worry about port conflicts internally, since each container gets its own IP address - so those two MySQL DBs won’t conflict with each other. All you need to do is ensure that publicly available services have a unique port forwarded to them. So less to keep track of.
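
            A rough sketch of one of those compose files (service names, credentials, and the host port are illustrative, not the official Nextcloud recipe):

                services:
                  nextcloud:
                    image: nextcloud
                    ports:
                      - "8080:80"             # the only thing published on the host
                    environment:
                      MYSQL_HOST: db
                      MYSQL_DATABASE: nextcloud
                      MYSQL_USER: nextcloud
                      MYSQL_PASSWORD: example
                    volumes:
                      - nextcloud-data:/var/www/html
                  db:
                    image: mariadb
                    environment:
                      MYSQL_ROOT_PASSWORD: example
                      MYSQL_DATABASE: nextcloud
                      MYSQL_USER: nextcloud
                      MYSQL_PASSWORD: example
                    volumes:
                      - db-data:/var/lib/mysql
                    # no "ports:" here - the db is reachable only from other containers

                volumes:
                  nextcloud-data:
                  db-data:

            The Immich file would follow the same shape, and its database can listen on its default port internally without clashing with this one.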

          • boonhet@sopuli.xyz · 1 day ago

            Docker will let you run as many database containers as you want and route things such that each service only sees its own database and none of the others; even processes on your host machine can’t connect unless you’ve published ports for that.
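
            You can see that directly: each compose project gets its own network, so the two dbs never even see each other (project names here are made up):

                docker compose -p nextcloud up -d
                docker compose -p immich up -d
                docker network ls   # shows separate "nextcloud_default" and "immich_default" networks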

      • non_burglar@lemmy.world · 2 days ago

        > You will still have to make sure that port numbers don’t conflict

        I’m sure I read your comment wrong, but you are aware that each docker container has its own TCP stack, right?

        • sem@lemmy.blahaj.zone · 1 day ago

          I don’t really understand what a TCP stack is, but my question is: if your IP address is 192.168.1.2 and you want to run two different services that both have a web interface, you still have to configure them to use different port numbers.

          If you don’t think to do that, and they both default to 8000 for example, and you try to run them both at the same time, I imagine you would get a conflict when you try to go to 192.168.1.2:8000 or even localhost:8000.

          • Zagorath@aussie.zone (OP) · 1 day ago

            @non_burglar@lemmy.world is correct, but is perhaps not explaining it perfectly for the practical questions you seem to be asking.

            If you have, say, two Docker containers for two different web servers (maybe one’s for your Wiki, and the other is for your portfolio site), you can have both listening on ports 80 and 443 of their own containers, plus a third Docker container running a reverse proxy that has access to your machine’s ports 80 and 443. The proxy looks at each incoming request and decides which container to route it to (e.g., requests to http://192.168.1.2/wiki/... go to the Wiki container, and everything else goes to the portfolio site).

            Now, reverse proxies can be run without Docker, but the isolation Docker adds makes it all a lot easier to manage, in part because you don’t need to configure loads of different ports.
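
            As a sketch, the proxy’s config could look something like this (nginx syntax; the container names “wiki” and “portfolio” are made up, and on a shared compose network Docker’s internal DNS resolves them to container IPs):

                server {
                    listen 80;

                    # requests under /wiki/ go to the wiki container...
                    location /wiki/ {
                        proxy_pass http://wiki:80/;
                    }

                    # ...and everything else goes to the portfolio container
                    location / {
                        proxy_pass http://portfolio:80/;
                    }
                }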

            • sem@lemmy.blahaj.zone · 15 hours ago

              Ok, thanks, I was wondering how a container would get its own IP address. A reverse proxy makes way more sense.

          • non_burglar@lemmy.world · 1 day ago

            Sorry, that was presumptuous of me. ‘TCP stack’ just means each container can have its own IP and services. Each docker container, and in fact each Linux host, can have as many interfaces as you like.

            > I imagine you would get a conflict when you try to go to 192.168.1.2:8000 or even localhost:8000.

            You’re free to run a service on port 8000 on one IP and still run the same port 8000 on another IP on the same subnet. However, two services can’t listen on the same port at the same IP address.
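
            You can watch that happen with two containers that both listen on port 80 internally (using docker’s default bridge network):

                docker run -d --name site1 nginx
                docker run -d --name site2 nginx
                docker inspect -f '{{.NetworkSettings.IPAddress}}' site1   # e.g. 172.17.0.2
                docker inspect -f '{{.NetworkSettings.IPAddress}}' site2   # e.g. 172.17.0.3
                curl http://172.17.0.2/   # each answers on its own IP, same port, no conflict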

            • sem@lemmy.blahaj.zone · 1 day ago

              The only way I know of to give one computer multiple IP addresses is proxmox, but can you do that with docker also?

              • non_burglar@lemmy.world · 1 day ago

                Yes. Proxmox isn’t doing anything magic that another Linux machine (or Windows, for that matter) can’t do. A router is a good example of this.
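
                On a plain Linux box you can just add a second address to an interface, and docker can hand containers their own LAN addresses with a macvlan network (the addresses and interface name here are examples):

                    # plain Linux: a second IP on one NIC
                    ip addr add 192.168.1.3/24 dev eth0

                    # docker: containers get real addresses on the LAN via macvlan
                    docker network create -d macvlan \
                      --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
                      -o parent=eth0 lan
                    docker run -d --network lan --ip 192.168.1.50 nginx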