Docker and Container

ptitSeb
I've been very happy recently running Manjaro Linux 64-bit on my Pinebook Pro (much happier than I was when it ran Debian), which sports a Rockchip RK3399 SoC with a Mali T860 MP4 GPU. Now I'm no expert on SoCs, and the PBP is surely not a gaming device, but Manjaro seems to run quite decently with the latest kernel. I even have 32-bit Chromium running in a Docker container now, so I can use Widevine for watching and listening to DRM content like Spotify and Netflix, and it works great. I feel it will only get better, as it did with the Pandora.

So maybe that SoC, or the RK3399Pro, is worth considering, as it would be easy to ride along on the efforts already made on the software side?
Now build box86 and run it on your 32bit docker container!
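For reference, a minimal sketch of running a 32-bit ARM userland in a container on a 64-bit ARM host (the image tag is illustrative, not necessarily what was used here):

# An arm64 kernel can execute armv7 binaries natively, so no emulation is involved;
# on recent Docker versions you can request the 32-bit variant explicitly.
docker run -it --rm --platform linux/arm/v7 arm32v7/debian uname -m
# prints "armv7l" even though the host kernel reports "aarch64"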
 
Yes, it looks that way. I'm unfamiliar to date with www.docker.com, but it seems to be describing what I know of as a docker container.
 
Yes, it is.
Docker is very handy for many things.
I'm using it for my Bachelor thesis.
When I revive my server at some point, everything will run in Docker containers as well.

They have the whole environment built in, so an application will run regardless of the base system.
Thanks to docker-compose you can define and start as many containers as you like with a single command.
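A minimal sketch of what that looks like (the services and images below are just examples, not anything from this thread):

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example
EOF
# one command brings up both containers, networked together
docker-compose up -d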

If you work in a modern cloud/server environment, there is no way around containers with Docker and Kubernetes.
 
I don't understand the "limits" of this stuff well, so I'll give an example:

Let's say I have a Windows 98 game that runs only at a 320x200 resolution, and actual real hardware isn't capable of that. Is it possible to create a "docker container" that can run the game everywhere?
 
I don't understand the "limits" of this stuff well, so I'll give an example:

Let's say I have a Windows 98 game that runs only at a 320x200 resolution, and actual real hardware isn't capable of that. Is it possible to create a "docker container" that can run the game everywhere?

AFAIK Docker does not handle the resolution.
It contains a whole Linux OS.
But in contrast to a VM it shares the kernel with the host.
To separate the host and the container it uses Linux namespaces.
You can map things like volumes and ports from inside the container to a directory or port on the host.
With volumes you can store persistent data (containers are stateless).
Ports are for network services and communication via WebSockets or REST, for example.
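As a rough sketch of that mapping (the paths, port and image are made up for illustration):

# Map a host directory to a path inside the container (persistent data)
# and a host port to a container port (network service):
docker run -d -e MYSQL_ROOT_PASSWORD=example \
  -v /srv/mydata:/var/lib/mysql -p 3306:3306 mysql:8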

But Docker does not have its own screen like a VM has, AFAIK.

One of its benefits is that you can deploy an application and include anything it needs to run.
You can also build dev environments inside Docker containers.
So you have a separate dev environment for each application/library you build.
It contains the correct libraries and avoids conflicts.
 
It contains a whole Linux OS.
Not really. AFAIK it basically overlays the host filesystem with the filesystem from the image for the applications running within the container, using stuff from the host system whenever something is not part of the image. This is actually important, as using e.g. a proprietary graphics driver on the host system can break the whole "run it everywhere" promise into tiny pieces if you include your own OpenGL library in the image. Every library that interfaces directly with the kernel can cause this kind of compatibility issue, and these libraries' own dependencies add a cherry on top. Many early users of the Steam Runtime got some painful experience with that.

But Docker does not have its own screen like a VM has, AFAIK.
A VM does not need to have its own screen; a virtual GPU is optional. There are all sorts of other things you can use to interface with a VM; you can even pass through file descriptors.
 
It contains a whole Linux OS.

I agree with Letalis Sonus. If we talk about a whole Linux sharing the kernel with the host, that would be an LXC container. So if we sort from most to least "virtualized":

VM -> LXC -> docker

Not trying to be pedantic, I just found the conversation interesting and wanted to add something.
 
I agree with Letalis Sonus. If we talk about a whole Linux sharing the kernel with the host, that would be an LXC container. So if we sort from most to least "virtualized":

VM -> LXC -> docker

Not trying to be pedantic, I just found the conversation interesting and wanted to add something.

So what does an LXC container have that a Docker container is missing?
AFAIK you use Debian or the minimalistic Alpine Linux as a base for your containers.

I started a thread in the development section here.
Let's not be like the Keyboard and Case Color guys and take over other people's threads.
We don't want to be like them.
No one wants to.
They have such a sad life.

Maybe a moderator could move our last post to the Docker topic.
 
I'm not experienced enough with Docker to clearly explain the difference. My understanding is that Docker only virtualizes the service, but it does not have an independent file system or network stack etc.; everything is managed by the host. It is just the service, sandboxed.

In LXC (or a jail on BSD) the host just bridges network interfaces and the needed hardware, and from there it is like a whole virtual machine: everything is managed from inside the LXC container and you control a whole OS.
 
Yeah, as I read it, Docker actively makes use of more recent Linux kernels' kernel-level virtualisation features, which isolate file trees and process lists. The containers still run on the same kernel, so CPU power or IO used in one container will reduce what's available to the others or to the host OS.
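A quick way to see that isolation (using the stock busybox image):

# Inside the container the process namespace starts fresh;
# "ps" shows up as PID 1 and the host's processes are invisible.
docker run --rm busybox ps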

From the way I read it, not having actually used either, the difference between a BSD jail and Docker is that Docker provides a full OS with all the features, while a jail disables a handful of the more powerful networking features, since the intention differs between the two: Docker is more about extracting every bit of power from significant machines, while a BSD jail is intended to sandbox untrustworthy processes so you can evaluate them.
 
I'm not experienced enough with Docker to clearly explain the difference. My understanding is that Docker only virtualizes the service, but it does not have an independent file system or network stack etc.; everything is managed by the host. It is just the service, sandboxed.

In LXC (or a jail on BSD) the host just bridges network interfaces and the needed hardware, and from there it is like a whole virtual machine: everything is managed from inside the LXC container and you control a whole OS.

At least the network is isolated and a container has its own network stack.
You can map any port of the Docker container to any port of the host.
Or you use "host" mode to remove the isolation between the Docker and host network stacks: https://docs.docker.com/network/host/

That gives me the impression that it has its own networking.
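For example (nginx is just a stand-in for any network service here):

# Default bridge network: isolated stack, explicit port forwarding required.
docker run -d -p 8080:80 nginx
# Host mode: no network isolation, the container binds the host's ports directly.
docker run -d --network host nginx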

For the file system:
Docker is using layers and thus a layered file system: https://medium.com/@BeNitinAgarwal/docker-containers-filesystem-demystified-b6ed8112a04a
When you build a container you use another image as a base (let's say plain Debian).
Then you put another non-writeable layer with your stuff (let's say an Apache server with a web page) on top.
Above that there is one layer you can write to, but it exists only while the container runs and is deleted when the container is destroyed.
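That layering is exactly what a Dockerfile describes; a minimal sketch (the base image and package are only examples):

cat > Dockerfile <<'EOF'
# read-only base layer
FROM debian:buster
# each instruction adds another read-only layer on top
RUN apt-get update && apt-get install -y apache2
EOF
docker build -t my-webserver .
# at runtime Docker adds one thin writeable layer on top of the image,
# and that layer is thrown away when the container is removed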

Maybe I got something wrong, but to me it looks like it has its own stuff, and not just a sandboxed service.
 
From what I've read, Docker is meant to run one application.
And it supports things like Docker Swarm to scale network services.
A container itself can not hold persistent data; that is what volumes are for.
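A named volume makes that concrete (the volume name is just an example):

# Data written to a volume survives the container that wrote it:
docker volume create appdata
docker run --rm -v appdata:/data busybox sh -c 'echo hello > /data/hello.txt'
docker run --rm -v appdata:/data busybox cat /data/hello.txt
# prints "hello" even though the first container is gone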

LXC containers on the other hand are more like a tiny VM OS.
They can hold persistent data.
When I finish my Bachelor thesis I'll dig a little deeper into it.

But because of its popularity and ease of use I'm using Docker, and I would recommend it to anyone building a server or similar.
 
Oh right, I kind of connected Docker to chroot but with more separation. But if it mounts a file as a loopback filesystem (a bit like the way PNDs work) and only provides a tmpfs for workspace (unlike the PND system, where your appdata was your scratch space), then it doesn't work from a real filesystem, so it won't save state after running. And yes, my understanding is you run one user app per Docker space.
 
Yes, you create a Docker volume, install a single app with all its dependencies in the container, and that's it.
Docker does some xhost forwarding and starts the app like it's running natively.
It works great but is very storage-hungry though; my Chromium-Armv7 Docker folder is 3 GiB. And yes, it saves the settings and such when exiting.
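The usual shape of that xhost/X11 trick looks roughly like this (the image name here is hypothetical):

# Allow local containers to talk to the X server, then hand the container
# the display variable and the X11 socket:
xhost +local:
docker run -it --rm \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  some-armv7-chromium-image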
 
I have used Docker and virtual machines extensively for years. Both have their use cases.

Docker is essentially an improved version of LXC. Both use the Linux kernel's cgroup and namespace functionality to perform the abstraction. Either approach is faster than a VM, but containers are less secure since they share the same kernel as the host (either directly or indirectly).
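Those cgroup controls surface directly in the CLI; for instance:

# The kernel, not a hypervisor, enforces these resource caps via cgroups:
docker run -d --memory 256m --cpus 0.5 nginx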

Compared to LXC, Docker has some advantages:
  • Docker has built-in versioning support and can be set up for multi-user interaction.
  • Docker runs on multiple operating systems (Linux, Mac, Windows, formerly FreeBSD).
  • Docker has more support from downstream projects and vendors than LXC (there are way more pre-built images out there for Docker than for any VM or other container software).
  • More management software is available.
  • Better support for overlaying filesystem components: Docker images can reuse parts of other images to save disk space, and the overlay can be made read-only for security if needed (see the sketch after this list).
  • Docker has some specialized filesystem features for containers run on ZFS or Btrfs volumes.
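A quick illustration of that read-only option (busybox as a stand-in image):

# With --read-only, the container's filesystem rejects all writes:
docker run --rm --read-only busybox touch /tmp/x
# fails with "Read-only file system"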

That said, VMs are better from a security standpoint, and if you are careful about what distro you install, they can be super lean as well. I have a VirtualBox VM that regularly serves hundreds of gigs on a PHP intranet site and uses less than 500 MB of memory.
 