Run a cron job every other week.

Run a cron job every other week, say on Friday at 10 AM. Add this to your crontab, replacing /path/to/command with the job you want to run:

0 10 * * 5 [ `expr \`date +\%s\` / 86400 \% 2` -eq 1 ] && /path/to/command

Jan 1, 1970 (the epoch) was a Thursday, so Fridays alternate between an odd and an even number of days since the epoch; dividing date +%s (seconds since the epoch) by 86400, the number of seconds in a day, gives the current day number.
With -eq 1 the job runs only on the odd-numbered Fridays; with -eq 0 it runs on the alternate, even-numbered set of Fridays instead.
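
You can check by hand which set of Fridays today falls into; a minimal sketch of the same parity test run in an ordinary shell (no % escaping is needed outside the crontab):

days=$(expr $(date +%s) / 86400)
if [ $(expr $days % 2) -eq 1 ]; then
    echo "day $days is odd: an -eq 1 job would run today"
else
    echo "day $days is even: an -eq 0 job would run today"
fi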

Using CPU hotplug.

The kernel option CONFIG_HOTPLUG_CPU needs to be enabled. It is currently available on multiple architectures, including ARM, MIPS, PowerPC and x86.
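You can check whether the running kernel was built with this option; a quick check, assuming your distribution ships the kernel config in /boot:

server124:~ # grep CONFIG_HOTPLUG_CPU /boot/config-$(uname -r)
CONFIG_HOTPLUG_CPU=y
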
List all current CPUs and cores in the system:

server124:~ # ls -lh /sys/devices/system/cpu
total 0
drwxr-xr-x 5 root root 0 Jul 20 09:46 cpu0
drwxr-xr-x 5 root root 0 Jul 20 09:51 cpu1
drwxr-xr-x 2 root root 0 Jul 20 09:54 cpu2
drwxr-xr-x 2 root root 0 Jul 20 09:54 cpu3
drwxr-xr-x 2 root root 0 Jul 20 10:18 cpufreq
drwxr-xr-x 2 root root 0 Jul 20 10:18 cpuidle
-r--r--r-- 1 root root 4.0K Jul 20 09:46 kernel_max
-r--r--r-- 1 root root 4.0K Jul 20 10:12 offline
-r--r--r-- 1 root root 4.0K Jul 20 09:46 online
-r--r--r-- 1 root root 4.0K Jul 20 10:18 possible
-r--r--r-- 1 root root 4.0K Jul 20 10:18 present
--w------- 1 root root 4.0K Jul 20 10:18 probe
--w------- 1 root root 4.0K Jul 20 10:18 release

Each CPU folder contains an online file which controls the logical on (1) and off (0) state.
To logically shut down cpu3:

server124:~ # echo 0 > /sys/devices/system/cpu/cpu3/online

and in the log file you can find something like this:

Jul 20 10:52:38 server124 kernel: [ 3969.492336] CPU 3 is now offline

and by executing the lscpu command:

server124:~ # lscpu |grep line
On-line CPU(s) list: 0-2
Off-line CPU(s) list: 3
server124:~ #

Once the CPU is shut down, it is removed from /proc/interrupts and /proc/cpuinfo, and it should no longer be visible in the output of the top command.
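A quick way to confirm this is to count the remaining processor entries; with the four-CPU example above and cpu3 offline, the count drops to 3:

server124:~ # grep -c ^processor /proc/cpuinfo
3
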
To bring cpu3 back online:

server124:~ # echo 1 > /sys/devices/system/cpu/cpu3/online

and in the log file:

Jul 20 11:00:01 server124 kernel: [ 4412.323732] Booting Node 0 Processor 3 APIC 0x3
Jul 20 11:00:01 server124 kernel: [ 4053.024204] mce: CPU supports 0 MCE banks

and by executing the lscpu command again:

server124:~ # lscpu |grep line
On-line CPU(s) list: 0-3
server124:~ #

The CPU is usable again.

Alternatively, the chcpu command can be used. chcpu can modify the state of CPUs: it can enable or disable CPUs, scan for new CPUs, change the CPU dispatching mode of the underlying hypervisor, and request CPUs from the hypervisor (configure) or return CPUs to the hypervisor (deconfigure).

To disable CPUs 2 and 3:

server124:~ # chcpu -d 2,3
CPU 2 disabled
CPU 3 disabled

To enable CPUs 2 and 3:

server124:~ # chcpu -e 2,3
CPU 2 enabled
CPU 3 enabled
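
The sysfs interface also lends itself to scripting; a minimal sketch that toggles every secondary CPU (cpu0 often cannot be taken offline and may have no online file):

for f in /sys/devices/system/cpu/cpu[1-9]*/online; do
    echo 0 > "$f"    # take the CPU offline
done
cat /sys/devices/system/cpu/offline
for f in /sys/devices/system/cpu/cpu[1-9]*/online; do
    echo 1 > "$f"    # bring the CPU back online
done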

Removing a volume group and logical volume after the physical drive has been removed

root:/ # lvs
/dev/5gbdisk_vg/5gbdisk: read failed after 0 of 4096 at 1073676288: Input/output error
/dev/5gbdisk_vg/5gbdisk: read failed after 0 of 4096 at 1073733632: Input/output error
/dev/5gbdisk_vg/5gbdisk: read failed after 0 of 4096 at 0: Input/output error
/dev/5gbdisk_vg/5gbdisk: read failed after 0 of 4096 at 4096: Input/output error
/dev/sdc: read failed after 0 of 4096 at 0: Input/output error
/dev/sdc: read failed after 0 of 4096 at 10737352704: Input/output error
/dev/sdc: read failed after 0 of 4096 at 10737410048: Input/output error
/dev/sdc: read failed after 0 of 4096 at 4096: Input/output error
  LV   VG       Attr      LSize Pool Origin Data%  Move Log Copy%  Convert
  home sp3tosp4 -wi-ao--- 4.00g
  var  sp3tosp4 -wi-ao--- 8.00g
root:/ #

When the disk was physically removed, the /dev/sdc device node and its device-mapper nodes were not automatically removed. The errors above clearly indicate that /dev/sdc and /dev/5gbdisk_vg/5gbdisk can no longer be read because the disk is gone.
Remove the stale /dev/sdc device node and clean up the stale device-mapper nodes. In the above example, this can be accomplished either by a simple reboot or by running the following:

root:/ # dmsetup remove --force /dev/5gbdisk_vg/5gbdisk
root:/ # echo 1 > /sys/block/sdc/device/delete

root:/ # pvs
  PV       VG       Fmt  Attr PSize  PFree
  /dev/sdb sp3tosp4 lvm2 a--  16.00g 4.00g
root:/ # lvs
  LV   VG       Attr      LSize Pool Origin Data%  Move Log Copy%  Convert
  home sp3tosp4 -wi-ao--- 4.00g
  var  sp3tosp4 -wi-ao--- 8.00g
root:/ #
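
If the LVM tools still listed the stale volume group at this point, its metadata could be cleaned up as well; a minimal sketch, assuming the volume group name from the example above and that none of its data needs to be preserved:

root:/ # vgreduce --removemissing 5gbdisk_vg
root:/ # vgremove 5gbdisk_vg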

How to install the RHEL EPEL repository on CentOS 7.x or RHEL 7.x

The following instructions assume that you are running commands as the root user on a CentOS/RHEL 7.x system and that you would like to use the Fedora EPEL repositories.

Install the Extra Packages for Enterprise Linux repository configuration (recommended). Just type the following command on a CentOS 7 or RHEL 7 system:

root# yum install epel-release

or

Install the EPEL repository package directly from dl.fedoraproject.org:


root# cd /tmp
root# wget https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
root# ls *.rpm
root# rpm -i epel-release-7-5.noarch.rpm
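
Either way, you can confirm that the repository is now visible to yum (the exact output will vary):

root# yum repolist | grep epel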

Install Open VMware Tools on CentOS 7

To install CentOS 7 in a virtual machine you can use either the standard CentOS distribution CD or the boot floppy/network method. The following installation instructions are for the standard distribution CD. For Minimal Install and Virtualization Host environments, Open VMware Tools is not available during installation. After the CentOS 7 installation, to install Open VMware Tools, run the following command with root privileges:

root# yum install open-vm-tools
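
After the package is installed, the vmtoolsd service should be running; a quick check, assuming systemd as used on CentOS 7:

root# systemctl status vmtoolsd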

Exporting an NSS Volume for NFS Access – OES11 to Linux.

Let's say that you have an OES11 machine with an NSS volume and you would like to export this volume to a Linux machine to share data.
To export an NSS volume from an OES11 machine to a Linux machine, add the following to /etc/exports on the OES11 machine:

root# cat /etc/exports
/media/nss/DATA/Shared/XXHR/BNMDEV1 Linux(fsid=1,rw,no_root_squash,sync,anonuid=1000,all_squash)
root#
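
After editing /etc/exports, reload the export table on the OES11 machine (assuming the NFS server is already running):

root# exportfs -ra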

To mount the NSS volume on the Linux machine, add the following to /etc/fstab on the Linux machine:

root# cat /etc/fstab
oes11:/media/nss/DATA/Shared/XXHR/BNMDEV1 /share/Shared/XXHR/BNMDEV1 nfs defaults 0 0
root#

To verify that everything is okay, run the mount command on the Linux machine; you should see something like this:

root# mount
OES11:/media/nss/DATA/Shared/XXHR/BNMDEV1 on /share/Shared/XXHR/BNMDEV1 type nfs (rw,addr=172.16.12.12)
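
The exports can also be listed from the Linux client before mounting, assuming the showmount utility is installed:

root# showmount -e oes11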