Using CPU hotplug.

The kernel option CONFIG_HOTPLUG_CPU needs to be enabled. It is currently available on multiple architectures including ARM, MIPS, PowerPC and X86.
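Whether the option is enabled can be checked on a running system; assuming the distribution ships the kernel configuration in /boot (as most do), a quick way is:

grep CONFIG_HOTPLUG_CPU /boot/config-$(uname -r)
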
List all current CPUs and cores in the system:

server124:~ # ls -lh /sys/devices/system/cpu
total 0
drwxr-xr-x 5 root root 0 Jul 20 09:46 cpu0
drwxr-xr-x 5 root root 0 Jul 20 09:51 cpu1
drwxr-xr-x 2 root root 0 Jul 20 09:54 cpu2
drwxr-xr-x 2 root root 0 Jul 20 09:54 cpu3
drwxr-xr-x 2 root root 0 Jul 20 10:18 cpufreq
drwxr-xr-x 2 root root 0 Jul 20 10:18 cpuidle
-r--r--r-- 1 root root 4.0K Jul 20 09:46 kernel_max
-r--r--r-- 1 root root 4.0K Jul 20 10:12 offline
-r--r--r-- 1 root root 4.0K Jul 20 09:46 online
-r--r--r-- 1 root root 4.0K Jul 20 10:18 possible
-r--r--r-- 1 root root 4.0K Jul 20 10:18 present
--w------- 1 root root 4.0K Jul 20 10:18 probe
--w------- 1 root root 4.0K Jul 20 10:18 release

Each CPU folder contains an online file which controls the logical on (1) and off (0) state.
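The current logical state of a CPU can be checked by simply reading that file:

cat /sys/devices/system/cpu/cpu3/online
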
To logically shut down cpu3:

server124:~ # echo 0 > /sys/devices/system/cpu/cpu3/online

and in the log file you can find something like this:

Jul 20 10:52:38 server124 kernel: [ 3969.489290] CPU 2 is now offline
Jul 20 10:52:38 server124 kernel: [ 3969.492336] CPU 3 is now offline

You can also verify the state by executing the lscpu command:

server124:~ # lscpu |grep line
On-line CPU(s) list: 0-2
Off-line CPU(s) list: 3
server124:~ #

Once the CPU is shut down, it is removed from /proc/interrupts and /proc/cpuinfo, and it will no longer be visible in the output of the top command.
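A quick sanity check is to count the processor entries left in /proc/cpuinfo, which should now show one less than before:

grep -c ^processor /proc/cpuinfo
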
To bring cpu3 back online:

server124:~ # echo 1 > /sys/devices/system/cpu/cpu3/online

and in the log file:

Jul 20 11:00:01 server124 kernel: [ 4412.323732] Booting Node 0 Processor 3 APIC 0x3
Jul 20 11:00:01 server124 kernel: [ 4053.024204] mce: CPU supports 0 MCE banks

and by executing the lscpu command:

server124:~ # lscpu |grep line
On-line CPU(s) list: 0-3
server124:~ #

The CPU is usable again.

chcpu can also be used. It can modify the state of CPUs: enable or disable them, scan for new CPUs, change the CPU dispatching mode of the underlying hypervisor, and request CPUs from the hypervisor (configure) or return CPUs to the hypervisor (deconfigure).

To disable CPUs 2 and 3:

server124:~ # chcpu -d 2,3
CPU 2 disabled
CPU 3 disabled

To enable CPUs 2 and 3:

server124:~ # chcpu -e 2,3
CPU 2 enabled
CPU 3 enabled
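
chcpu can also trigger a rescan for new CPUs (useful, for example, after hot-adding a CPU to a virtual machine):

chcpu -r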

Find out whether a filesystem check is scheduled for the next boot.

To find out whether a filesystem check is scheduled for the next boot, use the command “dumpe2fs -h /dev/disk”.
fsck will run if the mount count is equal to or greater than the maximum mount count, or if the “Next check after” date has passed.
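
For example, to pull just the relevant counters out of the dumpe2fs header (the device name is only a placeholder):

dumpe2fs -h /dev/disk | grep -iE 'mount count|next check'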


Creating virtual disks using dd and losetup.

To create an image file, in this case a “virtual disk”, use the dd command. The command below writes zeros to a file of the specified size.

dd if=/dev/zero of=1GB_disk.img bs=1M count=1024
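
As a side note, if the file does not need to be filled with zeros, the same size can be preallocated almost instantly with fallocate (a different approach from the dd command above):

fallocate -l 1G 1GB_disk.img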

Once the file is created, a partition can be created in it using the cfdisk or fdisk command, and the filesystem can then be created with mkfs.ext4.

cfdisk 1GB_disk.img

Now you can proceed to set up a loop device for your image using “losetup”. The following command assigns an available loop device (the -f option finds a free one) to the image, scans its partition table (-P), and shows the name of the assigned loop device (--show):

losetup -Pf --show 1GB_disk.img
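
The mkfs.ext4 step mentioned earlier still needs to be run. A minimal sketch, assuming the filesystem goes on the whole loop device as in the mount example below (if you instead formatted a partition created with cfdisk, the -P option should expose it as a device such as /dev/loop0p1, and mkfs.ext4 would be run against that):

mkfs.ext4 /dev/loop0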

If successful, you should be able to access the filesystem by mounting the loop device.

[root@s1 disk]# lsblk|grep loop
loop0 7:0 0 1G 0 loop /mnt/disk
[root@s1 disk]#

mount /dev/loop0 /mnt/disk

[root@s1 disk]# df -hP /mnt/disk/
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 976M 46M 863M 6% /mnt/disk
[root@s1 disk]#

To remove a loop device, unmount the filesystem first and then run:

losetup -d /dev/loop0

XFS (dm-0): unknown mount option [acl].

System goes into read-only mode after upgrading to CentOS 7.4.

"kernel: XFS (dm-0): unknown mount option [acl]"

The fix is to remove the acl option for the xfs filesystem from /etc/fstab. The offending entry looks like this:

/dev/mapper/centos-root / xfs defaults,acl 0 0

On an XFS filesystem, ACLs are enabled by default, so there is no need to specify the acl option explicitly in /etc/fstab. Prior to CentOS 7.4, the acl option was simply ignored by systemd even if it was added to /etc/fstab for an xfs filesystem.
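
To confirm that ACLs still work after the option has been removed, a quick test on a throwaway file (paths here are only examples) can be run:

touch /tmp/acl_test
setfacl -m u:nobody:r /tmp/acl_test
getfacl /tmp/acl_test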

Alternatively (not recommended, but it works), change “ro” (read-only) to “rw” (read-write) on the kernel line in the /etc/grub2.cfg file:

linux16 /vmlinuz-3.10.0-693.11.6.el7.x86_64 root=/dev/mapper/centos-root rw rd.lvm.lv=centos/root rd.lvm.lv=centos/swap crashkernel=auto rhgb quiet

SNMP request timeouts when NFS share on remote server is hanging

SNMP requests time out when an NFS share mounted from a remote server is hanging:

root# snmpwalk -v2c -cpublic localhost
Timeout: No Response from localhost
root#

A configuration option called skipNFSInHostResources was added to omit NFS mounts from the filesystem lookup and prevent issues when the remote resource is not available. From the snmpd.conf man page:

skipNFSInHostResources true
controls whether NFS and NFS-like file systems should be omitted from the hrStorageTable (true or 1) or not (false or 0, which is the default).
If the Net-SNMP agent gets hung on NFS-mounted filesystems, you can try setting this to ‘1’.

The solution is to add the entry “skipNFSInHostResources true” to /etc/snmp/snmpd.conf and restart the snmpd service.
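
For example, assuming a systemd-based system and that the option is not already present in the file:

echo 'skipNFSInHostResources true' >> /etc/snmp/snmpd.conf
systemctl restart snmpd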

Add optional channels via mgr-sync in SUSE Manager

I have found no way to add an optional channel via the web interface of SUMA 2.1. I needed to add the Debuginfo-Pool channel for kdump analysis with crash. The crash utility is used to analyze the core file captured by kdump; it can also analyze core files created by other dump utilities such as netdump, diskdump and xendump. You need to ensure that the “kernel-debuginfo” package is present and at the same level as the kernel. So I had to use the SUMA command line.
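
As a quick check that the debuginfo package matches the running kernel (package names are distribution-specific, so treat this only as an illustration):

uname -r
rpm -qa | grep -i kernel | grep -i debuginfo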

suma:~ # mgr-sync list channels

--cut--
[I] SLES12-Pool for x86_64 SUSE Linux Enterprise Server 12 x86_64 [sles12-pool-x86_64]
[ ] SLE-Manager-Tools12-Debuginfo-Pool x86_64 SUSE Manager Tools [sle-manager-tools12-debuginfo-pool-x86_64]
[ ] SLE-Manager-Tools12-Debuginfo-Updates x86_64 SUSE Manager Tools [sle-manager-tools12-debuginfo-updates-x86_64]
[I] SLE-Manager-Tools12-Pool x86_64 SUSE Manager Tools [sle-manager-tools12-pool-x86_64]
[I] SLE-Manager-Tools12-Updates x86_64 SUSE Manager Tools [sle-manager-tools12-updates-x86_64]
--cut--

suma:~ # mgr-sync add channel sle-manager-tools12-debuginfo-pool-x86_64
Adding 'sle-manager-tools12-debuginfo-pool-x86_64' channel
Scheduling reposync for 'sle-manager-tools12-debuginfo-pool-x86_64' channel

suma:~ # mgr-sync add channel sle-manager-tools12-debuginfo-updates-x86_64
Adding 'sle-manager-tools12-debuginfo-updates-x86_64' channel
Scheduling reposync for 'sle-manager-tools12-debuginfo-updates-x86_64' channel
suma:~ #

suma:~ # mgr-sync refresh --refresh-channels
Refreshing Channels [DONE]
Refreshing Channel families [DONE]
Refreshing SUSE products [DONE]
Refreshing SUSE Product channels [DONE]
Refreshing Subscriptions [DONE]

Scheduling refresh of all the available channels
Scheduling reposync for 'sles11-sp3-pool-x86_64' channel
Scheduling reposync for 'sle11-sdk-sp3-pool-x86_64' channel
Scheduling reposync for 'sle11-sdk-sp3-updates-x86_64' channel
--cut--