Option to remove old kernels

Just identified my problem, I think: VirtualBox.

-------- Uninstall Beginning --------
Module: vboxhost
Version: 5.0.20
Kernel: 2.6.32-573.26.1.el6.x86_64 (x86_64)

Status: Before uninstall, this module version was ACTIVE on this kernel.
Removing any linked weak-modules

It’s certainly not VirtualBox…
My Microserver NethServer instance has 7 old kernels :grin:

And my installonly_limit in /etc/yum.conf is set to 2 :scream:

And it’s not the presence of weak-updates either: I have some systems with weak-updates modules that keep 5 kernels.

I’ve discovered the problem: that lone server had kernel-debug installed, doubling the count of kernels. So it really has 5 kernels installed.
We’re back to square one: I can’t reproduce the problem, all servers have 5 kernels installed.
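That doubling is easy to spot: installonly_limit is enforced per package name, so kernel and kernel-debug each get their own allowance of installed versions. A quick sketch to see the per-name counts (assuming rpm and standard coreutils are available):

```shell
# Count installed packages per name; kernel and kernel-debug are
# limited separately by yum's installonly_limit.
rpm -qa --qf '%{NAME}\n' 'kernel*' | sort | uniq -c | sort -rn
```

A server with kernel-debug installed will show two lines here, which explains a kernel count that looks twice as high as expected.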

Look at this:

Who has got more kernels?

Is it a contest? :money_mouth:


Hi,

I’m now at 11 kernels…

What’s up, docs?


Let’s resurrect this zombie thread.

Now running NS7 and this is in /etc/yum.conf:

installonly_limit=5

But here’s the relevant part of an ls of /boot after the last couple of updates:

[root@Nethserver ~]# cd /boot
[root@Nethserver boot]# ls -lrt vmlinuz-*
-rwxr-xr-x. 1 root root 5392080 Nov 22  2016 vmlinuz-3.10.0-514.el7.x86_64
-rwxr-xr-x  1 root root 5392752 Feb 22 19:16 vmlinuz-3.10.0-514.6.2.el7.x86_64
-rwxr-xr-x. 1 root root 5392080 Mar  1 16:56 vmlinuz-0-rescue-13ad7b333d3e4171a17342c53d60c4c0
-rwxr-xr-x  1 root root 5393008 Mar  2 16:15 vmlinuz-3.10.0-514.10.2.el7.x86_64
-rwxr-xr-x  1 root root 5396240 Apr 12 08:15 vmlinuz-3.10.0-514.16.1.el7.x86_64
-rwxr-xr-x  1 root root 5397552 May 25 10:16 vmlinuz-3.10.0-514.21.1.el7.x86_64
-rwxr-xr-x  1 root root 5397520 Jun 20 05:36 vmlinuz-3.10.0-514.21.2.el7.x86_64
-rwxr-xr-x  1 root root 5397328 Jun 29 09:16 vmlinuz-3.10.0-514.26.1.el7.x86_64
-rwxr-xr-x  1 root root 5397008 Jul  4 08:15 vmlinuz-3.10.0-514.26.2.el7.x86_64
[root@Nethserver boot]#

Guess it still isn’t working as designed.

Cheers.
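For anyone who wants to compare the two numbers directly, here is a small sketch (assuming a stock /etc/yum.conf and versioned vmlinuz files in /boot; the rescue image is deliberately excluded from the count):

```shell
#!/bin/sh
# Compare yum's installonly_limit with the number of kernels in /boot.
limit=$(awk -F= '/^installonly_limit/ {gsub(/ /,"",$2); print $2}' /etc/yum.conf)
count=$(ls /boot/vmlinuz-* 2>/dev/null | grep -cv rescue)
echo "installonly_limit=$limit, kernels in /boot=$count"
```

If the second number is larger than the first, the automatic cleanup isn’t keeping up, as in the listing above.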

[root@server9b ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                   22G   13G  7.8G  62% /
tmpfs                 939M     0  939M   0% /dev/shm
/dev/sda1             504M  465M   14M  98% /boot

I have 18 kernels on this bone-stock production mail server, and now any update attempt from the GUI fails.

Removed:
  kernel.x86_64 0:2.6.32-504.el6                kernel.x86_64 0:2.6.32-504.23.4.el6           kernel.x86_64 0:2.6.32-504.30.3.el6
  kernel.x86_64 0:2.6.32-573.7.1.el6            kernel.x86_64 0:2.6.32-573.8.1.el6            kernel.x86_64 0:2.6.32-573.12.1.el6
  kernel.x86_64 0:2.6.32-573.18.1.el6           kernel.x86_64 0:2.6.32-573.22.1.el6           kernel.x86_64 0:2.6.32-573.26.1.el6
  kernel.x86_64 0:2.6.32-642.1.1.el6            kernel.x86_64 0:2.6.32-642.3.1.el6            kernel.x86_64 0:2.6.32-642.4.2.el6
  kernel.x86_64 0:2.6.32-642.6.2.el6            kernel.x86_64 0:2.6.32-642.11.1.el6

Complete!
[root@server9b boot]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                       22G   11G  9.3G  54% /
tmpfs                 939M     0  939M   0% /dev/shm
/dev/sda1             504M  115M  364M  25% /boot

I think you deserve a new badge for this :rofl::rofl::rofl:


@Jim my next goal is 24, maybe with compression or something.


Hello,
some years ago I found this for CentOS:

root#> yum install yum-utils

root#> package-cleanup --oldkernels --count=2

--count=[number of kernels to keep]
I created a 2kernel.sh containing the last line.
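A minimal 2kernel.sh along those lines might look like this (package-cleanup comes from yum-utils; --count is the number of kernels to keep, and -y skips the confirmation prompt):

```shell
#!/bin/sh
# 2kernel.sh - remove all but the two newest installed kernels.
# Requires yum-utils: yum install -y yum-utils
package-cleanup --oldkernels --count=2 -y
```

Run it as root after an update; the running kernel is never removed, so it is safe to use before a reboot.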


Today I installed kernel 3.10.0-957.10.1; after the update…

rpm -qa kernel
kernel-3.10.0-862.11.6.el7.x86_64
kernel-3.10.0-957.5.1.el7.x86_64
kernel-3.10.0-957.1.3.el7.x86_64
kernel-3.10.0-862.14.4.el7.x86_64
kernel-3.10.0-957.10.1.el7.x86_64
now installed on my ritual-murder box

Current versions of Neth should limit themselves to five installed kernels. It’s probably more than are needed, but there still should be cleanup going on.


I can confirm 5 kernels on my test install. No issues so far…

There are plenty of systems with long uptimes: at reboot they can jump many kernel releases… Perhaps 5 is not so many for them!

…but they really should be rebooted when a new kernel is installed. Otherwise, what’s the purpose in installing it? SME went overboard in requiring reboots, but this is one place where I think Neth should at least prompt for one.
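Such a prompt could be driven by a simple check. A sketch, assuming versioned vmlinuz files in /boot and GNU sort with version-sort support (the rescue image is excluded):

```shell
#!/bin/sh
# Warn if the running kernel is not the newest one installed in /boot.
running=$(uname -r)
newest=$(ls /boot/vmlinuz-* | grep -v rescue | sed 's#.*/vmlinuz-##' | sort -V | tail -1)
if [ "$running" != "$newest" ]; then
    echo "Reboot recommended: running $running, newest installed $newest"
fi
```

Dropped into cron or a login script, this would at least make the pending reboot visible.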

Let me say that anyone installing NethServer for the first time should know that Linux needs a reboot, at least for kernels…

You’re both right, however I believe the Red Hat support team considered the default value carefully and I wouldn’t change it.

I agree a reboot warning could be a nice thing!

I think the more likely case was, “eh, five looks about right, doesn’t it?” “Sure.” It’s probably more than are really needed, but I don’t see that it does any real harm, and if you have a really space-constrained /boot device you can always change the default. As long as the number isn’t too excessive, and the automatic cleanup is working (which it seems to be), I don’t see a problem.
