Install cockpit on NethServer 7b2

Test on a NethServer 7b2 VM:

  • Update the server. I just did a yum update from the command line; it brings the server up to date with the latest patches
    (if this isn’t the NethServer way, slap me, I am new to using the server)

  • Install git
    yum install git

  • git clone the sig-atomic-buildscripts
    git clone https://github.com/baude/sig-atomic-buildscripts
    This is what a successful clone process looks like.

  • Once the clone process is complete, go into the newly created directory.
    cd sig-atomic-buildscripts

  • Edit the “virt7-testing.repo” file with your favorite editor (mine is nano, but somehow I couldn’t install nano, so I stuck with vi):
    vi virt7-testing.repo
    Change the repository url to: http://cbs.centos.org/repos/atomic7-cockpit-preview-release/x86_64/os/

  • Copy virt-7-testing.repo to yum’s repository directory:
    cp virt7-testing.repo /etc/yum.repos.d/

  • Install cockpit:
    yum install cockpit
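
As an aside, the “virt7-testing.repo” edit above can also be scripted instead of done in vi. A sketch — the stand-in file contents below are invented for the demo, only the final baseurl comes from the steps above:

```shell
# Recreate a minimal stand-in for virt7-testing.repo so the edit can be
# demonstrated end-to-end (the real file ships with different contents).
cat > virt7-testing.repo <<'EOF'
[virt7-testing]
name=virt7-testing
baseurl=http://example.invalid/old
gpgcheck=0
EOF

# Point baseurl at the cockpit preview repo, as in the step above.
sed -i 's|^baseurl=.*|baseurl=http://cbs.centos.org/repos/atomic7-cockpit-preview-release/x86_64/os/|' virt7-testing.repo
grep '^baseurl=' virt7-testing.repo
```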

This is the summary after install:
Installed:
cockpit.x86_64 0:0.114-2.el7.centos

Dependency Installed:
audit-libs-python.x86_64 0:2.4.1-5.el7 checkpolicy.x86_64 0:2.1.12-6.el7
cockpit-bridge.x86_64 0:0.114-2.el7.centos cockpit-docker.x86_64 0:0.114-2.el7.centos
cockpit-shell.noarch 0:0.114-2.el7.centos cockpit-storaged.noarch 0:0.114-2.el7.centos
cockpit-ws.x86_64 0:0.114-2.el7.centos cryptsetup.x86_64 0:1.6.7-1.el7
device-mapper-multipath.x86_64 0:0.4.9-85.el7_2.6 device-mapper-multipath-libs.x86_64 0:0.4.9-85.el7_2.6
docker.x86_64 0:1.10.3-46.el7.centos.14 docker-common.x86_64 0:1.10.3-46.el7.centos.14
docker-selinux.x86_64 0:1.10.3-46.el7.centos.14 dosfstools.x86_64 0:3.0.20-9.el7
gdisk.x86_64 0:0.8.6-5.el7 iscsi-initiator-utils.x86_64 0:6.2.0.873-33.el7_2.2
iscsi-initiator-utils-iscsiuio.x86_64 0:6.2.0.873-33.el7_2.2 json-glib.x86_64 0:1.0.2-1.el7
libatasmart.x86_64 0:0.19-6.el7 libcgroup.x86_64 0:0.41-8.el7
libnl.x86_64 0:1.1.4-3.el7 libreport-filesystem.x86_64 0:2.1.11-32.el7.centos
libseccomp.x86_64 0:2.2.1-1.el7 libselinux-python.x86_64 0:2.2.2-6.el7
libsemanage-python.x86_64 0:2.1.10-18.el7 libssh.x86_64 0:0.7.1-2.el7
libstoraged.x86_64 0:2.5.2-2.el7 libxml2-python.x86_64 0:2.9.1-6.el7_2.3
m2crypto.x86_64 0:0.21.1-17.el7 mdadm.x86_64 0:3.3.2-7.el7_2.1
oci-register-machine.x86_64 1:0-1.8.gitaf6c129.el7 oci-systemd-hook.x86_64 1:0.1.4-4.git41491a3.el7
policycoreutils-python.x86_64 0:2.2.5-20.el7 python-IPy.noarch 0:0.75-6.el7
python-dmidecode.x86_64 0:3.10.13-11.el7 python-ethtool.x86_64 0:0.8-5.el7
python-rhsm.x86_64 0:1.15.4-5.el7 setools-libs.x86_64 0:3.3.7-46.el7
storaged.x86_64 0:2.5.2-2.el7 storaged-iscsi.x86_64 0:2.5.2-2.el7
storaged-lvm2.x86_64 0:2.5.2-2.el7 subscription-manager.x86_64 0:1.15.9-15.el7.centos.0.1
usermode.x86_64 0:1.111-5.el7 yajl.x86_64 0:2.0.4-4.el7

Complete!
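
For completeness: the yum transaction alone does not start the web UI. On a stock CentOS 7 system the socket unit still has to be enabled (this may well be exactly what @dz00te’s instructions cover):

```shell
# Enable and start the Cockpit socket; the UI then listens on port 9090.
systemctl enable cockpit.socket
systemctl start cockpit.socket
```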

  • Now follow the instructions from @dz00te’s post:

The result:

Now let’s take it one step further and install Wordpress on NethServer:
Log into cockpit with a (domain) admin account.
When you are logged in to cockpit you see the system overview with some tickers:


To install Wordpress in a docker container, we head over to the containers page:

On the right you see a “get images” button. Click that button and search for wordpress:

Select the Wordpress image you want to use.
After downloading the docker image, start the image by clicking the “play” triangle on the right.

A popup appears with all the preset options of the image

As you can see, a name for your WP container is created automatically. You can tweak the other settings if you like. If you need more memory, you can override the default by ticking the box in front of “memory limit”.
If you want to change ports, you can do that here too.
Finally, click the blue RUN button.

I was still struggling with docker images using port 80, while the NS start page is running on 80 and 443 too. I probably need to change something there; a comment on that is appreciated.
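
One way around the clash is to publish the container’s web port on a free host port instead. A sketch — the container name and the choice of 8080 are assumptions, and the command is only echoed here rather than run:

```shell
# NethServer's httpd already owns 80/443, so map WordPress's internal port 80
# to a free host port instead (8080 is an arbitrary choice).
HOST_PORT=8080
CMD="docker run -d --name wordpress -p ${HOST_PORT}:80 wordpress"
echo "$CMD"
# → docker run -d --name wordpress -p 8080:80 wordpress
```

Cockpit’s “run image” popup exposes the same host-port/container-port mapping in its ports section.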


Great shot @robb!

BTW did you try the cockpit RPM from centos-extras?

I bet this command is enough :wink:

yum install cockpit

I will give that a try too… :slight_smile:
I wasn’t aware that was available and didn’t find any references to the centos repositories.

But building using the buildscript was a piece of cake.
I agree, installing directly from repo is easier… :slight_smile:

Any idea how to overcome the port problem I encountered?


I think we’re lacking a feature for a complete docker integration: let’s call it “virtual host reverse proxy”.

This has been discussed in another thread and in this one too, but I recall the current reverse proxy UI supports only URL paths, i.e. http://mysite/urlpath

We cannot mask a root path, like http://mysite/ and pass it to our docker container or whatever else.

On the firewall side, our latest Shorewall should come with improved Docker support and I hope this will help us a lot. But if we talk about ports 80 and 443, the httpd reverse proxy is the mandatory choice to keep the all-in-one design.
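
In raw httpd terms, that missing “virtual host reverse proxy” would boil down to something like this. A sketch only — the ServerName and the container IP are assumptions, not NethServer configuration:

```apache
# Hypothetical vhost: hand the whole root path of wp.example.com over to a
# WordPress container on the docker bridge (172.17.0.2 is an assumed address).
<VirtualHost *:80>
    ServerName wp.example.com
    ProxyPreserveHost On
    ProxyPass        / http://172.17.0.2:80/
    ProxyPassReverse / http://172.17.0.2:80/
</VirtualHost>
```

ProxyPreserveHost keeps the original Host header, so the web app inside the container generates links for the public hostname rather than the bridge IP.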

I also installed cockpit on my laptop running ubuntu 16.04
Here I have absolutely no problems, because each container gets its own IP address in another subnet that is somehow bridged to the cockpit instance on my laptop.
My local IP subnet is 192.168.10.1/24 and the ip addresses for each container are in the 172.17.0.1/24 subnet.

Well that went smoothly! How easy can it be… LOL… at least it kept me busy this evening…
It still amazes me how easily and quickly a service can be up and running if you can pick an image from the cockpit repository.

Now digging into creating Docker images from scratch, but that is for another time…


There are still some challenges. For instance a web app may require a DB, or access to LDAP. Additional containers could come into play, or access to services running on the host machine could be required.

How to configure them during the beloved point&click installation? :thinking:

That’s what I meant with how it runs on my Ubuntu laptop: Cockpit is being used as a bridge between the docker containers and the rest of the world. Each container gets its own IP address.
This is cockpit from my laptop: reachable on localhost:9090

On the “containers” page I have 1 image available from the cockpit repository: nextcloud.

When I start that image a container is created with the following properties:

When I point my browser to the IP address of the container I get the login page for nextcloud:

And when logging in with my credentials, the familiar nextcloud user interface appears:

This method of giving each container an IP address in a separate subnet would solve the port problem. I don’t know yet whether the docker containers will be reachable from another machine or even externally.

The containers’ IP addresses are assigned by the docker daemon. Cockpit is only a (graphical) shell for the docker API.

TCP port forwarding and IP configuration of containers is an admin’s job. It depends on what his goals are.


So if I understand this correctly, the host is acting as a router here, and Docker created a virtual interface called docker0. During install the routes were already created, but if you want to reach the docker containers, you must add DNS entries and open ports on your host so the outside world can find and reach them?

Is it this DNS option that is currently lacking in NS? If the Docker containers only need to be available to the domain clients, then adding DNS entries pointing to the host should be enough?

Again, do you have any idea why the Ubuntu version of Cockpit/Docker uses a unique IP for each container in another subnet, while on CentOS7 (NethServer 7b2) this is not the case?

Yes!

If NethServer is acting as DNS on the LAN network this would help. As said above, for web app containers we need to configure the httpd reverse proxy, too.

This sounds strange to me. Maybe I’m wrong, but I was sure it works the same on CentOS7: each container is assigned an IP from the default docker internal network…


Thanks for clarifying. I found an insightful page on the docker website: https://docs.docker.com/engine/installation/linux/centos/

It not only goes through all the networking, but also explains what is happening.


installing cockpit on NS or SME is trivial…

unfortunately cockpit’s concept runs “against” a distro like NS, meaning that cockpit just reflects and configures the underlying system via dbus and so on…

I had a chat with Stef (cockpit’s creator) some time ago… he said that if I want to create an alternative to the NS webgui (or SME’s server-manager), cockpit is not the kind of product I need, because there’s no way to insert a “logical layer” (the e-smith one) between the GUI and the OS…

this will work only if:

  • you rewrite a big part of the code (and I’m not referring to the web part, which is quite complex but workable, but to the bridge/ws side, which is in C)
  • you use cockpit only as a part of your project, writing your own modules

in any case, ATM there’s no plan (on cockpit’s side) to create a user delegation module, meaning that all users who can auth via console can access cockpit… it’s true that a non-privileged user can then edit/modify only the parts on which he has privileges, but he can still see almost everything (all users etc.)

there are some workarounds (sudoers, use of local links etc.) but, for example, there’s no way to let a single user see only his own account just to change his password.

finally, even if creating custom modules is quite easy (I did it on SME) using the cockpit.spawn method, the right/secure way is to create a custom web service listening on localhost which gets data from the GUI and executes some kind of logic

my 2 cents
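
as a footnote for readers who don’t know e-smith: the “logical layer” above is basically a small key/value configuration database plus templates that regenerate the real config files from it. A rough sketch — the commands are standard e-smith tools, but the sshd example and the property value are purely illustrative:

```shell
# Inspect a service record in the e-smith configuration database
config show sshd
# Change a property; this only updates the database, not the real file
config setprop sshd access public
# Re-render the real config file from its template fragments
expand-template /etc/ssh/sshd_config
```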


Thank you for your input @Stefano_Zamboni
I do see your reasoning, but isn’t it so that each docker container is given a separate IP address? So each available service will be separated from the others. If you use cockpit/docker just as a gateway between your user LAN and the services, there is no need for anyone to have access to something like the cockpit interface, except (domain) admins.

I am still trying to get to know the basics, but I don’t see why adding a dockerized service would be a security threat. The service is running on a separate network and you have NethServer in between.
What risks should be identified apart from docker’s own security aspects?

my reasoning isn’t related to docker in any way… maybe this is the wrong thread, but the idea of using cockpit as a new NS webgui has the problems I described above… sorry for the misunderstanding

I’m not a docker user

Then if docker isn’t the problem, why should cockpit be? Cockpit is just a nice webgui for docker (plus a management console for the server running docker).
Maybe I am too ignorant here, since I still have no clue what the e-smith template layer actually does. I will have to dive into that to fully understand, I guess.

I’m sorry but you’re wrong :slight_smile:
Docker is just a module, an applet in cockpit
Defining cockpit as you did above means you should carefully read the “concepts behind cockpit” :wink:

Sometimes I’m like Stefano about Docker;
I’m not always sure it is a good thing;
I also attribute that to being too old for new stuff :stuck_out_tongue:

So, I’m going deeper into it to learn and understand the beast…
One good thing about Cockpit I could say is that it uses devicemapper,
which is mostly the only storage type that can be considered production-ready.
It’s also developed by RedHat.
https://docs.docker.com/engine/userguide/storagedriver/selectadriver/

So thanks robb for this discovery.
Now I need to restructure my LVM

In case someone wants my recipe for making a thin pool for Docker:

##VG name is docker

VG=docker


lvcreate --wipesignatures y -n data $VG -l 95%VG
lvcreate --wipesignatures y -n meta $VG -l 1%VG
lvconvert -y --zero n -c 512K --thinpool $VG/data --poolmetadata $VG/meta

echo "activation {
    thin_pool_autoextend_threshold=80
    thin_pool_autoextend_percent=20
}" > /etc/lvm/profile/docker-thinpool.profile

lvchange --metadataprofile docker-thinpool $VG/data
lvs -o+seg_monitor

If you run Docker 1.12 (because you installed docker with the instructions above):

mkdir /etc/systemd/system/docker.service.d 
echo "[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --storage-driver=devicemapper --storage-opt=dm.thinpooldev=/dev/mapper/$VG-data --storage-opt dm.use_deferred_removal=true --storage-opt dm.fs=xfs" > /etc/systemd/system/docker.service.d/docker.conf

If you run Docker 1.10, change your thinpool in /etc/sysconfig/docker-storage:

[…] --storage-opt dm.thinpooldev=/dev/mapper/$VG-data […]

systemctl daemon-reload
rm -Rf /var/lib/docker/*
systemctl start docker
docker info|grep Storage

ref: https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/


Oh, and obviously, adjusting your /usr/lib/sysctl.d/00-system.conf will help!

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 0

Save and apply those new rules with

$ sysctl --system

(plain “sysctl -p” alone only reads /etc/sysctl.conf, so it would miss the file under sysctl.d)