Everything posted by glaukos

  1. Use the following script to find the top processes sorted by memory usage, in megabytes (MB):
#!/bin/bash
ps axo rss,comm,pid | awk '{ proc_list[$2]++; proc_list[$2 "," 1] += $1; }
  END { for (proc in proc_list) { printf("%d\t%s\n", proc_list[proc "," 1], proc); } }' \
  | sort -n | tail -n 10 | sort -rn \
  | awk '{ $1/=1024; printf "%.0fMB\t", $1 } { print $2 }'
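If your ps supports sorting (procps-ng, standard on most modern Linux distributions — an assumption worth checking on older systems), a shorter variant is possible. Note that it lists individual processes rather than summing RSS per command name as the script above does:

```shell
# Sketch: per-process view, assuming procps-ng ps (--sort, --no-headers).
# RSS is reported in KiB, so divide by 1024 for MiB.
top_mem=$(ps axo rss,comm --sort=-rss --no-headers | head -n 10 \
  | awk '{ printf "%.0fMB\t%s\n", $1/1024, $2 }')
echo "$top_mem"
```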
  2. 1. Run the following command to find the five largest directories under the /home partition:
# du -a /home | sort -n -r | head -n 5
2. To display the largest directories in the current working directory, run:
# du -a | sort -n -r | head -n 5
3. To display the result in human-readable format (KB, MB, or GB):
# du -hs * | sort -rh | head -5
4. To display the largest folders/files including sub-directories, run:
# du -Sh | sort -rh | head -5
5. To find the 10 largest files (Linux/bash):
find . -type f -print0 | xargs -0 du | sort -n | tail -10 | cut -f2 | xargs -I{} du -sh {}
6. To find the 10 largest directories:
find . -type d -print0 | xargs -0 du | sort -n | tail -10 | cut -f2 | xargs -I{} du -sh {}
The only difference is the -type option: d = directories, f = files.
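With GNU find, the double du pass in (5) can be avoided by printing each file's size directly with -printf (a GNU extension, so treat portability as an assumption). A self-contained demonstration on a throwaway directory, safe to run anywhere:

```shell
# Create a scratch directory with two files of known size, then list the
# largest file first using find's %s (size in bytes) format directive.
dir=$(mktemp -d)
head -c 4096 /dev/zero > "$dir/big.bin"
head -c 64   /dev/zero > "$dir/small.bin"
largest=$(find "$dir" -type f -printf '%s\t%p\n' | sort -rn | head -n 1)
echo "$largest"
rm -rf "$dir"
```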
  3. Background Information
A group of security researchers discovered that SSLv2 (Secure Sockets Layer protocol version 2.0) is vulnerable to the Bleichenbacher RSA padding oracle attack, which can be used to decrypt RSA ciphertext without knowledge of the matching private RSA key. This is done by observing responses from a server that holds the private key and performs decryption of attacker-provided ciphertexts using that key. The researchers also demonstrated a new cross-protocol attack which allows decryption of SSL/TLS sessions using newer protocol versions - SSLv3 or any current TLS (Transport Layer Security) version (1.0 - 1.2) - by exploiting this SSLv2 weakness. This flaw is an SSLv2 protocol issue and affects all implementations of the protocol. Researchers refer to this attack as general DROWN. Additionally, flaws were found in the SSLv2 protocol implementation in the OpenSSL cryptography and SSL/TLS library which make it possible to perform a more efficient variant of the DROWN attack, referred to as special DROWN. These issues were assigned CVE-2016-0703 and CVE-2016-0704, and were already corrected as part of the fix for CVE-2015-0293. Further details of this attack can be found in the researchers' paper titled "DROWN: Breaking TLS using SSLv2" at https://drownattack.com/.
Impact to systems
A server is vulnerable to the DROWN attack if it enables the SSLv2 protocol in addition to SSLv3 or TLSv1.x, and if it uses RSA key exchange cipher suites. A server that does not enable SSLv2 can also be vulnerable if it shares its private RSA key with another server that does. For example, the DROWN attack can still be used to decrypt HTTPS sessions to a web server that does not enable SSLv2 if it shares its RSA key with, e.g., an IMAP server, possibly running on the same host, that does enable SSLv2. Use of weak or export SSLv2 ciphers is required to perform the attack efficiently.
SSL/TLS connections using non-RSA key exchange, such as Diffie-Hellman or Elliptic Curve Diffie-Hellman, cannot be decrypted using the DROWN attack.
What configurations are vulnerable to this attack?
TLS servers which support SSLv2 are vulnerable. Most daemons capable of communication over TLS in Red Hat Enterprise Linux are linked against the OpenSSL cryptographic library. OpenSSL removed support for SSLv2 cipher suites in openssl-1.0.0; Red Hat Enterprise Linux 6 and 7 ship openssl-1.0.1e, which contains this change. However, it was found that OpenSSL servers do not respect the cipher suites advertised in the ServerHello handshake message, which could result in export SSLv2 cipher suites being used even if they were explicitly disabled; the SSLv2 protocol itself could also remain enabled. This flaw was assigned CVE-2015-3197 and is fixed by the security advisories listed in the Resolution section below. The NSS (Network Security Services) library available in Red Hat Enterprise Linux does not enable SSLv2 by default. Services which do not use SSLv2, but share their RSA keys with services which have SSLv2 support, are also vulnerable. This attack can only be conducted if RSA cipher suites for key exchange are enabled on the server; other key exchange methods, such as Diffie-Hellman and Elliptic Curve Diffie-Hellman, are not vulnerable. Lastly, the attack can be conducted in real time against OpenSSL servers which are vulnerable to CVE-2015-0293. That flaw was previously addressed by RHSA-2015:0800 for Red Hat Enterprise Linux 5, RHSA-2015:0715 for Red Hat Enterprise Linux 6 and RHSA-2015:0716 for Red Hat Enterprise Linux 7. This attack, which combines the SSLv2 protocol flaw and the OpenSSL implementation flaw, is dubbed Special DROWN.
Take Action
Immediately apply available patches to remediate the issue. Rebooting the system after updating is the safest way to ensure all affected services use the updated SSL library.
If a reboot is not possible, restarting all network services that depend on OpenSSL after applying the patches is required.
Diagnose
To manually detect whether a system is vulnerable, use the script below: DROWN-test.sh (script version 1.0)
Usage:
$ chmod +x DROWN-test.sh
$ ./DROWN-test.sh
If the installed version of openssl is vulnerable, you will see output similar to:
WARNING: The installed version of openssl (openssl-1.0.1e-42.el7) is vulnerable to both general and special DROWN attack and should be upgraded! See https://access.redhat.com/security/vulnerabilities/drown for more information.
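The advisory's decision logic can be sketched as a small shell helper: given the list of protocols a server accepts (gathered with any scanner, e.g. nmap's ssl-enum-ciphers script), report the server's DROWN exposure. The function name and output strings are illustrative, not part of Red Hat's script:

```shell
# Sketch: classify DROWN exposure from a space-separated protocol list.
drown_exposure() {
  case " $1 " in
    *" SSLv2 "*) echo "vulnerable: SSLv2 is enabled" ;;
    *) echo "not directly vulnerable (still check for RSA keys shared with SSLv2 hosts)" ;;
  esac
}
v1=$(drown_exposure "SSLv2 SSLv3 TLSv1")
v2=$(drown_exposure "TLSv1 TLSv1.1 TLSv1.2")
echo "$v1"; echo "$v2"
```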
  4. As Linux system administrators we regularly face low disk space issues. By implementing user and group disk quotas on the file system we can manage this: quotas restrict users to an allotted amount of disk space and number of inodes on a particular file system. In this post we will discuss how to enable user & group disk quotas on the /home file system on CentOS 7 & RHEL 7.
Step:1 Add the usrquota & grpquota options on /home in the /etc/fstab file.
[root@linuxtechi ~]# vi /etc/fstab
Append usrquota,grpquota to the mount options of /home, for example:
/dev/mapper/centos-home /home ext4 defaults,usrquota,grpquota 0 0
Save & exit the file.
Step:2 Remount the /home file system via the mount command:
[root@linuxtechi ~]# mount -o remount /home
Now recheck whether quota is enabled on the /home file system:
[root@linuxtechi ~]# mount | grep /home
/dev/mapper/centos-home on /home type ext4 (rw,relatime,seclabel,quota,usrquota,grpquota,data=ordered)
Step:3 Create the quota database files using quotacheck:
[root@linuxtechi home]# quotacheck -cugv /home
Where:
-c : create the quota files and don't use the existing ones
-v : verbose output
-u : user disk quota
-g : group disk quota
The above command will create aquota.user & aquota.group files under /home. Turn on quota on /home using the command below:
[root@linuxtechi ~]# quotaon /home/
Step:4 Assign user & group disk quotas via the edquota command.
Syntax:
# edquota -u <user>
# edquota -g <group>
[root@linuxtechi ~]# edquota -u jack
[root@linuxtechi ~]# edquota -g sys_admin
As shown above we have two kinds of disk quota limits:
soft : warns the user when the soft limit is reached (size is in KB); in the example above the soft limit for user jack is 5500 KB (approx 5.5 MB)
hard : does not allow the user to create new files once the hard limit is reached (size in KB); in the example above the hard limit for user jack is 6000 KB (approx 6 MB)
Note : We can also set quotas based on inodes (i.e. the number of files a user can create on a particular file system).
Let's take an example: log in as user jack and try to create an 8 MB file.
[root@linuxtechi ~]# su - jack
[jack@linuxtechi ~]$ dd if=/dev/zero of=bgfile bs=1M count=8
dm-2: warning, user block quota exceeded.
dm-2: write failed, user block limit reached.
dd: error writing 'bgfile': Disk quota exceeded
6+0 records in
5+0 records out
6144000 bytes (6.1 MB) copied, 0.00711317 s, 864 MB/s
As we can see, the soft & hard limits were exceeded for user jack; from now on jack can't create new files.
Step:5 Display a quota report for users in human-readable form:
[root@linuxtechi ~]# repquota -as
Step:6 Configure the grace period for the soft limit. The grace period is the amount of time during which the soft limit may be exceeded; once the grace period expires, the soft limit is enforced as a hard limit. Use the edquota command to set the grace period:
[root@linuxtechi ~]# edquota -t
Source: http://www.linuxtechi.com/enable-user-group-disk-quota-on-centos-7-rhel-7/
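For scripted setups, the interactive edquota step can be replaced by setquota from the same quota package. A sketch using the limits from the example above (requires root and the quota-enabled file system, so it is shown for reference only):

```shell
# setquota -u <user> <block-soft> <block-hard> <inode-soft> <inode-hard> <fs>
setquota -u jack 5500 6000 0 0 /home       # 5.5 MB soft, 6 MB hard, no inode limits
setquota -g sys_admin 5500 6000 0 0 /home  # same limits for the group
```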
  5. NIC (Network Interface Card) bonding, also known as network bonding, can be defined as the aggregation of multiple NICs into a single bond interface. Its main purpose is to provide high availability and redundancy. In this article we will learn how to configure NIC or network bonding on CentOS 7 & RHEL 7. In my case I have two interface cards (enp0s3 & enp0s8) which will form a bond interface (bond0).
Prerequisite
If the bonding module is not loaded on your Linux box, use the command below to load it:
[root@openstack ~]# modprobe bonding
To list the bonding module info, use the following command:
[root@openstack ~]# modinfo bonding
Step:1 Create the Bond Interface File
Create a bond interface file (ifcfg-bond0) under the folder /etc/sysconfig/network-scripts/:
[root@openstack network-scripts]# vi ifcfg-bond0
DEVICE=bond0
TYPE=Bond
NAME=bond0
BONDING_MASTER=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=
NETMASK=
GATEWAY=
BONDING_OPTS="mode=5 miimon=100"
Save & exit the file. Specify the IP address, netmask & bonding mode as per your requirements. In this example I am using 'mode=5' (balance-tlb), which provides fault tolerance and load balancing.
Step:2 Edit the NIC interface files
For ifcfg-enp0s3:
[root@openstack ~]# vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
TYPE=Ethernet
BOOTPROTO=none
DEVICE=enp0s3
ONBOOT=yes
HWADDR="08:00:27:69:60:c9"
MASTER=bond0
SLAVE=yes
For ifcfg-enp0s8:
[root@openstack ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp0s8
TYPE=Ethernet
BOOTPROTO=none
DEVICE=enp0s8
ONBOOT=yes
HWADDR="08:00:27:ea:71:8d"
MASTER=bond0
SLAVE=yes
Step:3 Restart the Network Service
The command below will restart the network service and bring the above changes into effect.
[root@openstack ~]# systemctl restart network.service
Step:4 Test & Verify the Bond Interface
Use the 'ifconfig' & 'ip addr' commands to check the bond interface along with its slave interfaces. Use the following command to view bond interface settings such as the bonding mode & slave interfaces:
[root@openstack ~]# cat /proc/net/bonding/bond0
Step:5 Fault Tolerance Testing
To test fault tolerance we can take down one interface and check whether we can still access the server:
[root@openstack ~]# ifdown enp0s8
Device 'enp0s8' successfully disconnected.
[root@openstack ~]#
Source: http://www.linuxtechi.com/configure-nic-bonding-in-centos-7-rhel-7/
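The slave health check in Step 5 can be scripted by parsing /proc/net/bonding/bond0. A sketch (the sample status text is inlined here so it can run anywhere; on a real system pipe the file itself through the same awk):

```shell
# Print each slave interface with its MII link status. The first "MII
# Status" line belongs to bond0 itself and is deliberately skipped.
status='Bonding Mode: transmit load balancing
MII Status: up
Slave Interface: enp0s3
MII Status: up
Slave Interface: enp0s8
MII Status: down'
slaves=$(printf '%s\n' "$status" \
  | awk -F': ' '/^Slave Interface/ { s=$2 } /^MII Status/ && s { print s, $2; s="" }')
echo "$slaves"
```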
  6. I have had many people come to my blog looking for how to install oVirt 3.5 on the new CentOS 7. Much of this content is the same as my 3.4 post, but I thought I would keep it separate. Below are simple step-by-step instructions for installing the node and getting it configured for the hosted engine.
Installation Requirements
Please Note: Installing Hosted Engine on CentOS 7 requires oVirt 3.5.1
Please Note: Both the node and engine will be running CentOS 7 (Minimal Installation)
Please Note: Ensure the host is fully updated via "yum update" and rebooted before proceeding
Prerequisites
DNS
Ensure you have set up hostnames for the host and engine. If you do not have a DNS server configured and you are only testing oVirt on a single server, you can use /etc/hosts instead. I have the following:
Engine: Hostname: engine.xrsa.net, IP Address:
Host: Hostname: ovirt01.xrsa.net, IP Address:
NFS
Ensure you have set up NFS mount points for the engine and virtual machines. If you do not have a shared NFS server and you are only testing oVirt, you can configure NFS locally on the host instead.
------------
-[ (Host) ]-
------------
# yum install -y nfs-utils
# mkdir /home/{engineha,vms} && chown 36:36 /home/{engineha,vms}
# cat > /etc/exports << EOF
/home/engineha <network>(rw,anonuid=36,anongid=36,all_squash)
/home/vms <network>(rw,anonuid=36,anongid=36,all_squash)
EOF
# systemctl start rpcbind.service && systemctl enable rpcbind.service
# systemctl start nfs-lock.service && systemctl enable nfs-lock.service
# systemctl start nfs-server.service && systemctl enable nfs-server.service
Verify you can see the correct mount points.
----------
-[ Host ]-
----------
# showmount -e ovirt01.xrsa.net
Export list for ovirt01.xrsa.net:
/home/engineha
/home/vms
# mount ovirt01.xrsa.net:/home/engineha /mnt && umount /mnt
If you get access denied, run the command below and rerun the mount test.
# systemctl restart nfs-server.service
Installation
NTP
This is not a requirement, but it is recommended that you keep your servers' time in sync:
------------
-[ (Host) ]-
------------
# yum install -y ntp
# systemctl start ntpd && systemctl enable ntpd
# Verify you can reach the NTP servers:
# ntpq -p
You may put your own NTP servers in /etc/ntp.conf if required. Once you have verified DNS and NFS, install the required repositories and packages.
------------
-[ (Host) ]-
------------
# yum localinstall -y https://anorien.csc.warwick.ac.uk/mirrors/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
# yum localinstall -y http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
# yum install -y ovirt-hosted-engine-setup bind-utils screen
We will need an ISO for the hosted engine installation.
-----------
-[ Host ]-
-----------
# mkdir /home/tmpengineiso && cd /home/tmpengineiso
# curl -O http://mirror.ukhost4u.com/centos/7.0.1406/isos/x86_64/CentOS-7.0-1406-x86_64-Minimal.iso
# chown -R 36:36 /home/tmpengineiso
Now all the prerequisites are in place. Verify DNS and then go through the hosted-engine wizard.
-----------
-[ Host ]-
-----------
# host engine.xrsa.net; host ovirt01.xrsa.net
# screen
# hosted-engine --deploy
...
--== CONFIGURATION PREVIEW ==--
Bridge interface : eth0
Engine FQDN : engine.xrsa.net
Bridge name : ovirtmgmt
SSH daemon port : 22
Firewall manager : iptables
Gateway address :
Host name for web application : ovirt01.xrsa.net
Host ID : 1
Image alias : hosted_engine
Image size GB : 25
Storage connection : nfs01.xrsa.net:/home/engineha
Console type : vnc
Memory size MB : 4096
MAC address : 00:16:3e:71:de:6d
Boot type : cdrom
Number of CPUs : 2
ISO image (for cdrom boot) : /home/tmpengineiso/CentOS-7.0-1406-x86_64-Minimal.iso
CPU Type : model_Westmere
Please confirm installation settings (Yes, No)[Yes]:
The hosted-engine wizard will give you VNC details so you can connect to the hosted engine virtual machine and install CentOS 7.
$ vncviewer -quality 2 ovirt01.xrsa.net:5900
Once installed, choose option (1) in the hosted-engine wizard; it will wait until you have rebooted the hosted engine virtual machine. The wizard will give you another set of VNC details to connect to if you need them. However, if you configured networking during the install you should be able to SSH instead. Once you have connected to the hosted engine, download the repositories, configure NTP and run through the ovirt-engine wizard. Please make sure the admin password matches the password set in the hosted-engine wizard.
Please note: Ensure DNS is configured properly so the engine can resolve itself and the host, else the installation will fail!
Please note: While in the engine setup wizard below, ensure you put a proper ACL in the "NFS export ACL" option. If you do not, you will not be able to activate the ISO_DOMAIN later.
--------------
-[ (Engine) ]-
--------------
# yum -y update
# yum localinstall -y https://anorien.csc.warwick.ac.uk/mirrors/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
# yum localinstall -y http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
# yum install -y ovirt-engine bind-utils screen ntp
# host engine.xrsa.net; host ovirt01.xrsa.net
# systemctl start ntpd && systemctl enable ntpd
# ntpq -p
# screen
# engine-setup
...
--== CONFIGURATION PREVIEW ==--
Application mode : both
Firewall manager : firewalld
Update Firewall : True
Host FQDN : engine.xrsa.net
Engine database name : engine
Engine database secured connection : False
Engine database host : localhost
Engine database user name : engine
Engine database host name validation : False
Engine database port : 5432
Engine installation : True
NFS setup : True
PKI organization : xrsa.net
NFS mount point : /var/lib/exports/iso
NFS export ACL :
Configure local Engine database : True
Set application as default page : True
Configure Apache SSL : True
Configure WebSocket Proxy : True
Engine Host FQDN : engine.xrsa.net
Please confirm installation settings (OK, Cancel) [OK]:
Once finished, go back to the hosted-engine wizard and finish off the installation by choosing option (1). It will ask you one final time for the hosted engine to be shut down; wait a few minutes and it will come back up automatically. After around a minute you can verify the state of the hosted engine virtual machine using the following command:
-----------
-[ Host ]-
-----------
# hosted-engine --vm-status
--== Host 1 status ==--
Status up-to-date : True
Hostname : ovirt01.xrsa.net
Host ID : 1
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "powering up"}
Score : 2400
Local maintenance : False
...
host-id=1
score=2400
maintenance=False
state=EngineStarting
Wait around five to ten minutes for the hosted engine virtual machine to come back up properly.
# hosted-engine --vm-status
--== Host 1 status ==--
Status up-to-date : True
Hostname : ovirt01.xrsa.net
Host ID : 1
Engine status : {"health": "good", "vm": "up", "detail": "up"}
Score : 2400
Local maintenance : False
...
host-id=1
score=2400
maintenance=False
state=EngineUp
Data Domain and ISO_DOMAIN Setup
Before you can create virtual machines in oVirt you need to create a Data Domain and ensure the ISO_DOMAIN is attached to the Default cluster. Navigate to https://engine.xrsa.net and log in as admin. Create a new Data / NFS Domain by going to "System -> Storage -> New Domain". You must wait until the NFS01 Data Domain is in an active state. Once activated, attach the ISO_DOMAIN to the Default Data Center.
Please note: If you are having issues attaching the ISO_DOMAIN to the cluster you might have forgotten to add a proper ACL in the "NFS export ACL" option during the engine wizard. You can check this as follows:
--------------
-[ (Engine) ]-
--------------
# cat /etc/exports.d/ovirt-engine-iso-domain.exports
/var/lib/exports/iso engine.xrsa.net(rw)
This is incorrect, as it is the hosts that mount this NFS share, not the engine. You can fix it by changing the entry to the subnet the hosts are using.
# sed -i "s#engine.xrsa.net#<subnet>#" /etc/exports.d/ovirt-engine-iso-domain.exports
# cat /etc/exports.d/ovirt-engine-iso-domain.exports
/var/lib/exports/iso <subnet>(rw)
# systemctl restart nfs-server
If everything went as expected you should see both the NFS01 and ISO_DOMAIN storage domains in an up and active state.
Uploading ISO Images
There is no GUI-based ISO upload tool at this time, so to upload ISO images you must log in to the engine first and run the following commands:
--------------
-[ (Engine) ]-
--------------
# curl -O http://mirror.ukhost4u.com/centos/7.0.1406/isos/x86_64/CentOS-7.0-1406-x86_64-Minimal.iso
# ovirt-iso-uploader upload -i ISO_DOMAIN CentOS-7.0-1406-x86_64-Minimal.iso
# rm CentOS-7.0-1406-x86_64-Minimal.iso
Using oVirt
At this point everything should be up and running for you to start creating virtual machines. For more information please read the oVirt Documentation: http://www.ovirt.org/Documentation
Source: https://xrsa.net/2015/02/04/installing-ovirt-3-5-on-centos-7-hosted-engine/
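The NFS export ACL mistake described in this post lends itself to a quick scripted check: if the exports entry names the engine host rather than the hosts' network, the domain attach will fail. A sketch (the function name and messages are illustrative):

```shell
# Flag an exports entry that grants access to the engine instead of the
# hypervisor hosts' subnet (the misconfiguration described above).
check_iso_export() {
  case "$1" in
    *engine.xrsa.net*) echo "wrong: engine host listed, but the hosts mount this share" ;;
    *) echo "ok" ;;
  esac
}
verdict=$(check_iso_export "/var/lib/exports/iso engine.xrsa.net(rw)")
echo "$verdict"
```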
  7. The funkily-named bug of the week is GHOST. Its official moniker is the less catchy CVE-2015-0235, and it's a vulnerability caused by a buffer overflow in a system library that is used in many, if not most, Linux distributions.
A buffer overflow is where you assume, for example, that when you handle a four-byte network number written out as decimal digits, you will never get anything longer than 255.255.255.255. That takes up 15 characters, so you may decide that you'll never need more than 15 bytes of memory. So, if you add a spare byte for luck and allocate 16 bytes, you're bound to have enough space. And then, one day, a malicious user decides to see what happens if he ignores the rules, and uses a network number like, say, 1024.10224.102224.1022224. That network number is nonsense, of course, but your program might not hold out long enough to reject it. Your code will probably crash right away, because the attacker's 25 bytes will overflow your 16 bytes of available memory.
GHOST explained
As it happens, the GHOST vulnerability is connected with network names and numbers. The spooky name comes from the system functions where the vulnerable code was found. The functions are called gethostbyname() and gethostbyname2(), and they do what the names suggest: they find the computer-friendly network number of a host (e.g. 93.184.216.34) from its human-friendly name (e.g. example.com). In other words, these functions do a DNS (domain name system) lookup for you, so your program doesn't need to deal with the intricacies of the DNS protocol. By the way, even if your program doesn't directly call gethostbyname(), you may end up calling it indirectly as a side-effect of doing something, anything, involving a computer name.
For example, if your software looks up email addresses, calls home for updates, retrieves postings from online forums, plays podcasts, or does any of a number of perfectly unexceptionable network-related activities, it almost certainly triggers name-to-number lookups at some point. And if those lookups are based on data received from outside, such as a sender's email address in received email headers, then attackers may very well get to choose what data gets passed to your Linux computer's gethostbyname() function.
The bug
It turns out that gethostbyname() has a clever feature: it works out whether you called it with a name that is already a network number (digits-dot-digits-dot-digits-dot-digits). In that case, it would be a waste of time to do a DNS lookup, so it doesn't bother. Unfortunately, the code that runs through the name to see if it's really a network number has a buffer overflow, and if you deliberately send a super-long number laid out just right...
...poof – the GHOST strikes!
So an attacker may be able to rig up messages or network requests that crash your program; and with a bit (or, more likely, a lot) of trial and error, they might be able to trigger that crash in a way that gives them control over your computer. That's known as a Remote Code Execution (RCE) exploit, similar to the bug recently found in the super-secure Blackphone, though in that case it was a text message that caused the phone's software to trip over itself.
What to do?
The good news is that this bug doesn't exist on every computer. It exists only in some versions of a software module called glibc, short for GNU C library. In fact, most computers in the world don't have glibc installed, because it's not used by default on Windows, OS X, iOS or Android. The bad news is that many, if not most, computers running Linux do use glibc, and may be at risk.
In short, therefore, if you have any Linux-based systems, including home firewalls and routers:
Check with your vendor, or the maker of your distribution, to see if you need a patch.
If you do, make plans to apply the patch as soon as you can.
Oh, and if you are a programmer, you shouldn't really be using the gethostbyname functions anyway. They were superseded many years ago by the much more flexible and useful getaddrinfo() function, which you should use instead.
Source: https://nakedsecurity.sophos.com/2015/01/29/the-ghost-vulnerability-what-you-need-to-know/
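On a glibc system you can watch the same name-to-number lookup from the shell: getent resolves through the NSS machinery that gethostbyname() and getaddrinfo() use (exact output depends on your resolver configuration, so treat the result as illustrative):

```shell
# Resolve a name via glibc's NSS lookup path; getent prints "address  name",
# so take the first field of the first line.
addr=$(getent hosts localhost | awk '{ print $1; exit }')
echo "$addr"
```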
  8. Standard procedure
With KVM, to access the virtual machine's console under X Window, type:
# virt-manager
If you aren't under X Window, there is another way to access a virtual machine's console: you can go through a serial console. On the virtual machine, add 'console=ttyS0' at the end of the kernel lines in the /boot/grub2/grub.cfg file:
# grubby --update-kernel=ALL --args="console=ttyS0"
Now, reboot the virtual machine:
# reboot
With KVM, connect to the virtual machine's console (here vm.example.com):
# virsh console vm.example.com
Connected to domain vm.example.com
Escape character is ^]
Red Hat Enterprise Linux Server 7.0 (Maipo)
Kernel 3.10.0-121.el7.x86_64 on an x86_64
vm login:
Emergency procedure
Sometimes you have lost all links to your virtual machine (an error in the /etc/fstab file, ssh configuration, etc.) and, as you didn't set up any virtual console, you are in real trouble. There is still a solution!
Connect to the physical host and shut down your virtual machine (here called vm.example.com):
# virsh destroy vm.example.com
Determine where the virtual machine image file is located (by default in the /var/lib/libvirt/images directory, with a name like vm.example.com.img):
# virsh dumpxml vm.example.com | grep "source file="
Map your virtual machine image file into the host environment (-a for add and -v for verbose):
# kpartx -av /var/lib/libvirt/images/vm.example.com.img
add map loop0p1 (253:2): 0 1024000 linear /dev/loop0 2048
add map loop0p2 (253:3): 0 10240000 linear /dev/loop0 1026048
From the previous display, you know that you've got two partitions (in fact /boot and /, distinguishable by their respective sizes). You need to mount the /boot partition to be able to change the grub configuration:
# mount /dev/mapper/loop0p1 /mnt
Then edit the /mnt/grub2/grub.cfg file and add console=ttyS0 at the end of every line containing /vmlinuz (the Linux kernel).
Unmount the partition:
# umount /mnt
Unmap the virtual machine image file (-d for delete and -v for verbose):
# kpartx -dv /var/lib/libvirt/images/vm.example.com.img
del devmap : loop0p2
del devmap : loop0p1
loop deleted : /dev/loop0
Restart your virtual machine:
# virsh start vm.example.com
Domain vm.example.com started
Connect to your virtual machine console:
# virsh console vm.example.com
Connected to domain vm.example.com
Escape character is ^]
CentOS Linux 7 (Core)
Kernel 3.10.0-123.el7.x86_64 on an x86_64
vm login:
This procedure works for RHEL 6/CentOS 6 and RHEL 7/CentOS 7.
Source: Deepal Jayasinghe's blog.
  9. Although differentiation is tough among Linux distributions today, CentOS 7 has carved out a niche as the free and open alter ego to Red Hat Enterprise Linux (RHEL). We found that CentOS, which is mandated to be binary-compatible with Red Hat 7, shares about 95% of the features of its commercial enterprise-class sibling. There's no IBM System z port, and the special variants for cloud and virtualization are more limited than with Red Hat 7. But for many common and generic applications, it's a drop-in replacement. The primary differences for most IT admins, systems people, and engineers/developers will be a (purchased) relationship with Red Hat and Red Hat's army of supported applications, working partnership projects and management skills. CentOS 7 is perhaps most aptly described as the DIY twin of RHEL 7, living in the distro ghetto across the railroad tracks. Still, in testing different deployment payloads, and also where we chose GUIs, the look and feel between Red Hat Enterprise Linux and CentOS 7 instances were essentially the same. And the same mistakes made to either usually cause the same explosions; you might or might not find the fix faster in a Red Hat resource. The Centos.org website doesn't have Red Hat's polish, and has many odd links to nowhere or nothing: not 404s, but unfinished projects. The CentOS site seemed comparatively untended to us.
  10. UPDATE
Resolution for POODLE SSLv3.0 vulnerability (CVE-2014-3566) in httpd
Resolution for POODLE SSLv3.0 vulnerability (CVE-2014-3566) in vsftpd
Resolution for POODLE SSLv3.0 vulnerability (CVE-2014-3566) for components that do not allow SSLv3 to be disabled via configuration settings
Resolution for POODLE SSLv3.0 vulnerability (CVE-2014-3566) in Firefox and Chromium
What Is POODLE?
POODLE stands for Padding Oracle On Downgraded Legacy Encryption. This vulnerability allows a man-in-the-middle attacker to decrypt ciphertext using a padding oracle side-channel attack. More details are available in the upstream OpenSSL advisory. POODLE affects older standards of encryption, specifically Secure Socket Layer (SSL) version 3.0. It does not affect the newer encryption mechanism known as Transport Layer Security (TLS).
Recommendations
To mitigate this vulnerability, SSL 3.0 should be disabled in all affected packages. Red Hat is continuously working at this time to provide additional use cases and guides to disable SSL 3.0.
Determining Vulnerability
If you are not a subscriber, the following script can be run against the server in question. It will print 'SSL 3.0 enabled' if vulnerable and 'SSL 3.0 disabled' if not.
#!/bin/bash
ulimit -t 5
ret=$(echo Q | timeout 5 openssl s_client -connect "${1-`hostname`}:${2-443}" -ssl3 2> /dev/null)
if echo "${ret}" | grep -q 'Protocol.*SSLv3'; then
  if echo "${ret}" | grep -q 'Cipher.*0000'; then
    echo "SSL 3.0 disabled"
  else
    echo "SSL 3.0 enabled"
  fi
else
  echo "SSL disabled or other error"
fi
NOTE: This script takes the hostname of the server to check as the first argument and an optional port as the second. By default it will check the local system, port 443.
More simply, your servers are vulnerable if they support SSLv3 at all. Several options here:
With OpenSSL s_client:
openssl s_client -connect <host>:<port> -ssl3
If the connection succeeds, SSLv3 is enabled. If it fails, it is disabled.
When it fails you should see something like:
error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
Using nmap:
nmap --script ssl-enum-ciphers -p 443 myhostname.tld
It should output 'SSLv3: No supported ciphers found'. Adjust for your hostname/port.
Using cipherscan. Clone/download the binary and execute it:
./cipherscan myhostname.tld
It should not list anything with SSLv3 under the 'protocols' column.
How to fix
Firefox browser
Open about:config, find security.tls.version.min and set the value to 1. Then restart your browser to drop any open SSL connections. Firefox from version 34 onwards will disable SSLv3 by default and thus require no action (source). However, at the moment of writing, 33 has just been released and 34 is set for November 25.
Google Chrome (Linux)
Edit the /usr/share/applications/google-chrome.desktop file, e.g.
sudo nano /usr/share/applications/google-chrome.desktop
Edit all lines starting with Exec= to include --ssl-version-min=tls1. E.g. a line like
Exec=/usr/bin/google-chrome-stable %U
becomes
Exec=/usr/bin/google-chrome-stable --ssl-version-min=tls1 %U
Then make sure to fully close the browser (Chrome apps may be keeping your browser active in the background!).
Note: You may need to repeat this after every google-chrome package update, which overwrites this .desktop launcher file. A Google Chrome or Chromium browser with SSLv3 disabled by default has not yet been announced at the time of writing.
Apache HTTPD Server
If you're running Apache, just include the following line in your configuration among the other SSL directives:
SSLProtocol All -SSLv2 -SSLv3
Then check that the new configuration is correct (no typos etc.):
apachectl configtest
And restart the server, e.g.
sudo service apache2 restart
More info: Apache documentation
Now test it: if your site is publicly available, test it using Qualys' SSL Labs tool.
Nginx server

If you're running Nginx, just include the following line in your configuration among the other SSL directives:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

And restart the server, e.g.

sudo service nginx restart

Reference: Nginx documentation

Now test it: If your site is publicly available, test it using Qualys' SSL Labs tool.

Lighttpd webserver

Lighttpd versions >1.4.28 support a configuration option to disable SSLv2 and v3. Lighttpd releases before 1.4.28 allow you to disable SSLv2 ONLY. Please note that Ubuntu 12.04 LTS and earlier install at best lighttpd v1.4.28 and therefore a simple fix is not available for those distributions. Therefore this fix should only be used for Ubuntu versions greater than 12.04. If someone knows a fix that's good for 12.04 and earlier, please edit this.

Edit your /etc/lighttpd/lighttpd.conf to add the following lines after the ssl.engine = "enable" directive:

ssl.use-sslv2 = "disable"
ssl.use-sslv3 = "disable"

Then you should restart the lighttpd service with

sudo service lighttpd restart

and perform an SSLv3 handshake test as described in earlier sections to make sure that the change was implemented successfully. Taken from http://redmine.lighttpd.net/projects/lighttpd/wiki/Docs_SSL.

Postfix SMTP

For 'opportunistic SSL' (encryption policy not enforced and plain is acceptable too), you don't need to change anything. Even SSLv2 is better than plain, so if you need to secure your server you should be using 'mandatory SSL' mode anyway. For 'mandatory SSL' mode being configured already, just add/change the smtpd_tls_mandatory_protocols setting:

smtpd_tls_mandatory_protocols=!SSLv2,!SSLv3

and restart Postfix:

sudo service postfix restart

Sendmail

(Unverified edit by anonymous user, I'm not comfortable with Sendmail, please verify.) 
These options are configured in the LOCAL_CONFIG section of your sendmail.mc:

LOCAL_CONFIG
CipherList=HIGH
ServerSSLOptions=+SSL_OP_NO_SSLv2 +SSL_OP_NO_SSLv3 +SSL_OP_CIPHER_SERVER_PREFERENCE
ClientSSLOptions=+SSL_OP_NO_SSLv2 +SSL_OP_NO_SSLv3

Dovecot

In Dovecot v2.1+, add the following to your /etc/dovecot/local.conf (or a new file in /etc/dovecot/conf.d):

ssl_protocols = !SSLv2 !SSLv3

and restart Dovecot:

sudo service dovecot restart

For older versions you will have to patch the source code.

Courier-imap (imapd-ssl)

Courier-imap allows SSLv3 by default on Ubuntu 12.04 and others. You should disable it and use STARTTLS instead to force TLS. Edit your /etc/courier/imapd-ssl configuration file to reflect the following changes:

IMAPDSSLSTART=NO
IMAPDSTARTTLS=YES
IMAP_TLS_REQUIRED=1
TLS_PROTOCOL=TLS1
TLS_STARTTLS_PROTOCOL=TLS1
TLS_CIPHER_LIST=""

HAProxy Server

SSL is supported in HAProxy >= 1.5. Edit the /etc/haproxy.cfg file and find your bind line. Append no-sslv3. For example:

bind :443 ssl crt ciphers no-sslv3

Reference: HAProxy Documentation

OpenVPN

Appears to be unaffected (source). OpenVPN uses TLSv1.0, or (with >=2.3.3) optionally TLSv1.2, and is thus not impacted by POODLE.

Tomcat

When using Tomcat with the JSSE connectors, the SSL protocol to be used can be configured via $TOMCAT_HOME/conf/server.xml. The following example shows how the sslProtocol in an https connector is configured.

Tomcat 5 and 6:

<Connector maxThreads="150" SSLEnabled="true" scheme="https" secure="true" clientAuth="false" sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2" />

Tomcat >= 7:

<Connector maxThreads="150" SSLEnabled="true" scheme="https" secure="true" clientAuth="false" sslProtocols="TLSv1,TLSv1.1,TLSv1.2" />

If the sslEnabledProtocols or sslProtocols attributes are specified, only protocols that are listed and supported by the SSL implementation will be enabled. If not specified, the JVM default is used. 
The permitted values may be obtained from the JVM documentation for the allowed values for algorithm when creating an SSLContext instance, e.g. Oracle Java 6 and Oracle Java 7.

Tomcat APR

When using Tomcat with the APR/Native connectors, the SSL protocol to be used can be configured in $TOMCAT_HOME/conf/server.xml. The following example shows how the SSLProtocol in an https connector is configured.

<Connector maxThreads="150" enableLookups="false" disableUploadTimeout="true" acceptCount="100" scheme="https" secure="true" SSLEnabled="true" SSLProtocol="TLSv1" SSLCertificateFile="${catalina.base}/conf/localhost.crt" SSLCertificateKeyFile="${catalina.base}/conf/localhost.key" />

Configuration parameters are documented here. The default is for the SSLProtocol attribute to be set to ALL, with other acceptable values being SSLv2, SSLv3, TLSv1 and SSLv2+SSLv3. Starting with version 1.1.21 of the Tomcat native library any combination of the three protocols concatenated with a plus sign will be supported. Note that the protocol SSLv2 is inherently unsafe. View the full article
  11. Network Time Protocol (NTP) is a protocol which runs over UDP port 123 at the transport layer and allows computers to synchronize time over networks. As time passes, computers' internal clocks tend to drift, which can lead to inconsistent time issues, especially in server and client log files, or if you want to replicate server resources or databases.

This tutorial will demonstrate how you can install and configure an NTP server on CentOS/RHEL 7 and automatically synchronize time with the geographically closest peers available for your server's location by using the NTP Public Pool Time Servers list.

Step 1: Install and configure NTP daemon

1. The NTP server package is provided by default from the official CentOS/RHEL 7 repositories and can be installed by issuing the following command.

# yum install ntp

Install NTP Server

2. After the server is installed, first go to the official NTP Public Pool Time Servers site, choose the Continent where the server is physically located, then search for your Country, and a list of NTP servers should appear.

NTP Pool Server

3. Then open the NTP daemon's main configuration file for editing, comment out the default list of public servers from the pool.ntp.org project, and replace it with the list provided for your country, as in the screenshot below.

Configure NTP Server

4. Further, you need to allow clients from your networks to synchronize time with this server. 
To accomplish this, add the following line to the NTP configuration file, where the restrict statement controls what network is allowed to query and sync time; replace the network IPs accordingly.

restrict netmask nomodify notrap

The nomodify notrap statements mean that your clients are not allowed to configure the server or be used as peers for time sync.

5. If you need additional information for troubleshooting, in case there are problems with your NTP daemon, add a logfile statement which will record all NTP server issues into one dedicated log file.

logfile /var/log/ntp.log

Enable NTP Logs

6. After you have edited the file with all the configuration explained above, save and close the ntp.conf file. Your final configuration should look like the screenshot below.

NTP Server Configuration

Step 2: Add Firewall Rules and Start NTP Daemon

7. The NTP service uses UDP port 123 on the OSI transport layer (layer 4). It is designed particularly to resist the effects of variable latency (jitter). To open this port on RHEL/CentOS 7, run the following commands against the Firewalld service.

# firewall-cmd --add-service=ntp --permanent
# firewall-cmd --reload

Open NTP Port in Firewall

8. After you have opened Firewall port 123, start the NTP server and make sure you enable it system-wide. Use the following commands to manage the service. 
# systemctl start ntpd
# systemctl enable ntpd
# systemctl status ntpd

Start NTP Service

Step 3: Verify Server Time Sync

9. After the NTP daemon has been started, wait a few minutes for the server to synchronize time with its pool list servers, then run the following commands to verify NTP peers' synchronization status and your system time.

# ntpq -p
# date -R

Verify NTP Time Sync

10. If you want to query and synchronize against a pool of your choice, use the ntpdate command, followed by the server or servers' addresses, as suggested in the following command line example.

# ntpdate -q 0.ro.pool.ntp.org 1.ro.pool.ntp.org

Synchronize NTP Time

Step 4: Setup Windows NTP Client

11. If your Windows machine is not part of a domain, you can configure Windows to synchronize time with your NTP server by going to Time on the right side of the Taskbar -> Change Date and Time Settings -> Internet Time tab -> Change Settings -> check Synchronize with an Internet time server -> put your server's IP or FQDN in the Server field -> Update now -> OK.

Synchronize Windows Time with NTP

That's all! 
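In the `ntpq -p` output from step 9, the peer the daemon has actually selected for synchronization is marked with a leading '*'. As a small sketch for scripting that check (the function name and the sample hostnames are my own):

```shell
# Print the currently selected peer (the line ntpq marks with '*')
# from `ntpq -p` output read on stdin; prints nothing if no peer is selected.
sync_peer() {
  awk '$1 ~ /^\*/ { sub(/^\*/, "", $1); print $1 }'
}
```

Usage: `ntpq -p | sync_peer` prints the selected peer's name; empty output means the daemon has not settled on a peer yet.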
Setting up a local NTP server on your network ensures that all your servers and clients keep the same time, even during an Internet connectivity failure, and that they all stay synchronized with each other. Source: http://www.tecmint.com/install-ntp-server-in-centos/ View the full article
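Pulling steps 3 to 5 of the tutorial together, the relevant part of an edited /etc/ntp.conf could look like the following sketch. The Romanian pool hosts match the ones queried in step 10; the 192.168.1.0/24 network and the iburst option (which speeds up initial synchronization) are example values of my own, to be replaced with your country's pool and your own network:

```
# Public pool peers (step 3) -- substitute your country's pool
server 0.ro.pool.ntp.org iburst
server 1.ro.pool.ntp.org iburst
server 2.ro.pool.ntp.org iburst
# Allow a client LAN to query time, but not to reconfigure the server (step 4)
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
# Dedicated log file (step 5)
logfile /var/log/ntp.log
```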
  12. 1. Open /etc/default/grub with an editor:

vi /etc/default/grub

2. Find "GRUB_CMDLINE_LINUX=" and add ipv6.disable=1:

GRUB_CMDLINE_LINUX="ipv6.disable=1 rd.lvm.lv=fedora/swap rd.md=0.....

3. Afterwards, recreate the GRUB config:

grub2-mkconfig -o /boot/grub2/grub.cfg

4. Reboot the system.

reboot

View the full article
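After the reboot, you can confirm that the parameter actually reached the kernel by inspecting /proc/cmdline. A minimal sketch (the helper name is my own; with no argument it reads /proc/cmdline, and an argument lets you point it at any saved copy):

```shell
# Succeed (exit 0) if ipv6.disable=1 appears among the kernel boot parameters.
has_ipv6_disabled() {
  tr ' ' '\n' < "${1:-/proc/cmdline}" | grep -qx 'ipv6.disable=1'
}
```

Usage: `has_ipv6_disabled && echo "IPv6 is disabled at boot"`.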
  13. Shellshock, also known as Bashdoor, is a family of security bugs in the widely used Unix Bash shell, the first of which was disclosed on 24 September 2014. Many Internet daemons, such as web servers, use Bash to process certain commands, allowing an attacker to cause vulnerable versions of Bash to execute arbitrary commands. This can allow an attacker to gain unauthorized access to a computer system.

The vulnerability can be tested with the following command:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

If your system is vulnerable to the Bash "Shellshock" bug, the above command will produce the following output:

vulnerable
this is a test

otherwise you will see the following message:

this is a test

View the full article
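The one-liner above can be wrapped so that scripts get a single-word verdict instead of having to compare output by eye. A small sketch (the helper name is my own; a patched bash simply ignores the injected function body, so only "this is a test" appears):

```shell
# Run the Shellshock probe and reduce the result to VULNERABLE / PATCHED.
check_shellshock() {
  out=$(env x='() { :;}; echo vulnerable' bash -c 'echo this is a test' 2>/dev/null)
  case "$out" in
    *vulnerable*) echo "VULNERABLE" ;;
    *)            echo "PATCHED" ;;
  esac
}
```

Usage: `[ "$(check_shellshock)" = "PATCHED" ] || echo "update bash now"`.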
  14. A RAID 10 (aka RAID 1+0 or stripe of mirrors) array provides high-performance and fault-tolerant disk I/O operations by combining features of RAID 0 (where read/write operations are performed in parallel across multiple drives) and RAID 1 (where data is written identically to two or more drives).

In this tutorial, I'll show you how to set up a software RAID 10 array using five identical 8 GiB disks. While the minimum number of disks for setting up a RAID 10 array is four (e.g., a striped set of two mirrors), we will add an extra spare drive should one of the main drives become faulty. We will also share some tools that you can later use to analyze the performance of your RAID array. Please note that going through all the pros and cons of RAID 10 and other partitioning schemes (with different-sized drives and filesystems) is beyond the scope of this post.

How Does a RAID 10 Array Work?

If you need to implement a storage solution that supports I/O-intensive operations (such as database, email, and web servers), RAID 10 is the way to go. Let me show you why. Let's refer to the image below.

Imagine a file that is composed of blocks A, B, C, D, E, and F in the above diagram. Each RAID 1 mirror set (e.g., Mirror 1 or 2) replicates blocks on each of its two devices. Because of this configuration, write performance is reduced because every block has to be written twice, once for each disk, whereas read performance remains unchanged compared to reading from single disks. The bright side is that this setup provides redundancy: unless more than one of the disks in each mirror fails, normal disk I/O operations can be maintained.

The RAID 0 stripe works by dividing data into blocks and writing block A to Mirror 1, block B to Mirror 2 (and so on) simultaneously, thereby improving the overall read and write performance. On the other hand, none of the mirrors contains the entire information for any piece of data committed to the main set. 
This means that if one of the mirrors fails, the entire RAID 0 component (and therefore the RAID 10 set) is rendered inoperable, with unrecoverable loss of data.

Setting up a RAID 10 Array

There are two possible setups for a RAID 10 array: complex (built in one step) or nested (built by creating two or more RAID 1 arrays, and then using them as component devices in a RAID 0). In this tutorial, we will cover the creation of a complex RAID 10 array because it allows us to create an array using either an even or odd number of disks, and it can be managed as a single RAID device, as opposed to the nested setup (which only permits an even number of drives, and must be managed as a nested device, dealing with RAID 1 and RAID 0 separately).

It is assumed that you have mdadm installed, and the daemon running on your system. Refer to this tutorial for details. It is also assumed that a primary partition sd[bcdef]1 has been created on each disk. Thus, the output of:

ls -l /dev | grep 'sd[bcdef]'

should be like:

Let's go ahead and create a RAID 10 array with the following command:

# mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sd[bcde]1 --spare-devices=1 /dev/sdf1

When the array has been created (it should not take more than a few minutes), the output of:

# mdadm --detail /dev/md0

should look like:

A couple of things to note before we proceed further.

1. Used Dev Space indicates the capacity of each member device used by the array.
2. Array Size is the total size of the array. For a RAID 10 array, this is equal to (N*C)/M, where N: number of active devices, C: capacity of active devices, M: number of devices in each mirror. So in this case, (N*C)/M equals (4*8GiB)/2 = 16GiB.
3. Layout refers to the fine details of data layout. The possible layout values are as follows.

n (default option): means near copies. Multiple copies of one data block are at similar offsets in different devices. 
This layout yields read and write performance similar to that of a RAID 0 array.

o indicates offset copies. Rather than the chunks being duplicated within a stripe, whole stripes are duplicated, but are rotated by one device so duplicate blocks are on different devices. Thus subsequent copies of a block are in the next drive, one chunk further down. To use this layout for your RAID 10 array, add --layout=o2 to the command that is used to create the array.

f represents far copies (multiple copies with very different offsets). This layout provides better read performance but worse write performance. Thus, it is the best option for systems that will need to support far more reads than writes. To use this layout for your RAID 10 array, add --layout=f2 to the command that is used to create the array.

The number that follows the n, f, and o in the --layout option indicates the number of replicas of each data block that are required. The default value is 2, but it can be 2 to the number of devices in the array. By providing an adequate number of replicas, you can minimize the I/O impact on individual drives.

4. Chunk Size, as per the Linux RAID wiki, is the smallest unit of data that can be written to the devices. The optimal chunk size depends on the rate of I/O operations and the size of the files involved. For large writes, you may see lower overhead by having fairly large chunks, whereas arrays that primarily hold small files may benefit more from a smaller chunk size. To specify a certain chunk size for your RAID 10 array, add --chunk=desired_chunk_size to the command that is used to create the array.

Unfortunately, there is no one-size-fits-all formula to improve performance. Here are a few guidelines to consider.

Filesystem: overall, XFS is said to be the best, while EXT4 remains a good choice.
Optimal layout: far layout improves read performance, but worsens write performance. 
Number of replicas: more replicas minimize I/O impact, but increase costs as more disks will be needed.
Hardware: SSDs are more likely to show increased performance (under the same context) than traditional (spinning) disks.

RAID Performance Tests using DD

The following benchmarking tests can be used to check on the performance of our RAID 10 array (/dev/md0).

1. Write operation

A single file of 256MB is written to the device:

# dd if=/dev/zero of=/dev/md0 bs=256M count=1 oflag=dsync

512 bytes are written 1000 times:

# dd if=/dev/zero of=/dev/md0 bs=512 count=1000 oflag=dsync

With the dsync flag, dd bypasses the filesystem cache and performs synchronized writes to the RAID array. This option is used to eliminate the caching effect during RAID performance tests.

2. Read operation

256KiB*15000 (3.9 GB) are copied from the array to /dev/null:

# dd if=/dev/md0 of=/dev/null bs=256K count=15000

RAID Performance Tests Using Iozone

Iozone is a filesystem benchmark tool that allows us to measure a variety of disk I/O operations, including random read/write, sequential read/write, and re-read/re-write. It can export the results to a Microsoft Excel or LibreOffice Calc file.

Installing Iozone on CentOS/RHEL 7: enable Repoforge. Then:

# yum install iozone

Installing Iozone on Debian 7:

# aptitude install iozone3

The iozone command below will perform all tests in the RAID-10 array:

# iozone -Ra /dev/md0 -b /tmp/md0.xls

-R: generates an Excel-compatible report to standard out.
-a: runs iozone in full automatic mode with all tests and possible record/file sizes. Record sizes: 4k-16M and file sizes: 64k-512M.
-b /tmp/md0.xls: stores test results in the specified file.

Hope this helps. Feel free to add your thoughts or tips on how to improve the performance of RAID 10. Source: http://xmodulo.com/setup-raid10-linux.html View the full article
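Both the Array Size formula and the dd write test above can be rehearsed safely in the shell before touching a real device. This sketch uses function names of my own and a scratch file instead of /dev/md0 (pointing write_test at /dev/md0 overwrites the array, so do that only deliberately):

```shell
# 1) The capacity formula: (N * C) / M, sizes in GiB.
raid10_size() {   # $1 = active devices, $2 = GiB per device, $3 = devices per mirror
  echo $(( $1 * $2 / $3 ))
}

# 2) The synchronous dd write test against an arbitrary target;
#    prints dd's final summary line (bytes copied, elapsed time, throughput).
write_test() {   # $1 = target path, $2 = MiB to write
  dd if=/dev/zero of="$1" bs=1M count="$2" oflag=dsync 2>&1 | tail -n 1
}
```

`raid10_size 4 8 2` reproduces the 16 GiB Array Size computed in the tutorial, and `write_test /tmp/scratch.img 16` gives a feel for the reporting format before you run the real benchmark.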
  15. Log files contain useful information about what is going on within the system. They are often inspected during troubleshooting processes or server performance analysis. For a busy server, log files may grow quickly into very large sizes. This becomes a problem as the server will soon run out of space. Besides, working with a single large log file can often be tricky.

logrotate is a very useful tool that can automate the process of breaking up (or rotating), compressing, and deleting old log files. For example, you can set up logrotate such that the log file /var/log/foo is rotated every 30 days, and logs older than 6 months are deleted. Once configured, the process is fully automated using logrotate without any further need for human interaction. Optionally, old logs can be emailed as well, but that option is beyond the scope of this tutorial.

The logrotate package is typically installed by default on major Linux distros. If, for some reason, logrotate is not present, you can install it using the apt-get or yum command.

On Debian or Ubuntu:

# apt-get install logrotate cron

On Fedora, CentOS or RHEL:

# yum install logrotate crontabs

The configuration file for logrotate is /etc/logrotate.conf. Generally no modification is needed here. The log files to be rotated are defined in separate configuration file(s) placed under the /etc/logrotate.d/ directory.

Example One

In the first example, we will create a 10 MB log file /var/log/log-file. We will see how we can use logrotate to manage this log file. We start by creating a log file, and populating it with 10 MB worth of random data.

# touch /var/log/log-file
# head -c 10M < /dev/urandom > /var/log/log-file

Now that the log file is ready, we will configure logrotate to rotate this log file. Let's create a configuration file for this. 
# vim /etc/logrotate.d/log-file

/var/log/log-file {
    monthly
    rotate 5
    compress
    delaycompress
    missingok
    notifempty
    create 644 root root
    postrotate
        /usr/bin/killall -HUP rsyslogd
    endscript
}

Where:

monthly: The log file will now be rotated monthly. Other possible values are 'daily', 'weekly' or 'yearly'.
rotate 5: A total of 5 archived logs will be stored at a time. For the 6th archive, the oldest stored archive will be deleted.
compress: The rotated archive will be compressed using gzip, after the rotation task is complete.
delaycompress: Always used together with the compress option, the delaycompress parameter instructs logrotate to not run compression on the most recent archive. Compression will be performed during the next rotation cycle. This is useful if you or any software still needs to access the fresh archive.
missingok: During log rotation, any errors will be ignored, e.g., "file not found".
notifempty: Rotation will not be performed if the log file is empty.
create 644 root root: A fresh log file will be created with the specified permissions as logrotate may rename the original log file.
postrotate/endscript: The command(s) specified between postrotate and endscript will be carried out after all other instructions are completed. In this case, the process rsyslogd will re-read its configuration on the fly and continue running.

The above template is generic, and the configuration parameters may vary based on your requirements. Not all the parameters may be necessary.

Example Two

In this example, we want to rotate a log file only when the size of the log file grows over 50 MB.

# vim /etc/logrotate.d/log-file

/var/log/log-file {
    size=50M
    rotate 5
    create 644 root root
    postrotate
        /usr/bin/killall -HUP rsyslogd
    endscript
}

Example Three

We want old log files to be named with the date of creation. This can be achieved by adding the dateext parameter. 
# vim /etc/logrotate.d/log-file

/var/log/log-file {
    monthly
    rotate 5
    dateext
    create 644 root root
    postrotate
        /usr/bin/killall -HUP rsyslogd
    endscript
}

This will cause the archived files to contain the date in their name.

Troubleshooting

Here are a few troubleshooting tips for logrotate setup.

1. Running logrotate manually

logrotate can be invoked manually from the command line at any time. To invoke logrotate on all logs as configured in /etc/logrotate.d/*:

# logrotate /etc/logrotate.conf

To invoke logrotate for a particular configuration:

# logrotate /etc/logrotate.d/log-file

2. Dry run

The best option during troubleshooting is to run logrotate as a dry run using the '-d' option. For verification, a dry run simulates log rotation and displays its output without actually rotating any log files.

# logrotate -d /etc/logrotate.d/log-file

As we can see from the above output, logrotate decided that rotation is not necessary. This can happen if the age of the file is less than one day.

3. Force run

We can force logrotate to rotate log files even when rotation conditions are not met, by using the '-f' option. The '-v' parameter provides verbose output.

# logrotate -vf /etc/logrotate.d/log-file

reading config file /etc/logrotate.d/log-file
reading config info for /var/log/log-file
Handling 1 logs
rotating pattern: /var/log/log-file forced from command line (5 rotations)
empty log files are rotated, old logs are removed
considering log /var/log/log-file
log needs rotating
rotating log /var/log/log-file, log->rotateCount is 5
dateext suffix '-20140916'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
renaming /var/log/log-file.5.gz to /var/log/log-file.6.gz (rotatecount 5, logstart 1, i 5), old log /var/log/log-file.5.gz does not exist
renaming /var/log/log-file.4.gz to /var/log/log-file.5.gz (rotatecount 5, logstart 1, i 4), old log /var/log/log-file.4.gz does not exist
. . . 
renaming /var/log/log-file.0.gz to /var/log/log-file.1.gz (rotatecount 5, logstart 1, i 0), old log /var/log/log-file.0.gz does not exist
log /var/log/log-file.6.gz doesn't exist -- won't try to dispose of it
renaming /var/log/log-file to /var/log/log-file.1
creating new /var/log/log-file mode = 0644 uid = 0 gid = 0
running postrotate script
compressing log with: /bin/gzip

4. Logrotate logging

Logrotate's own state is usually stored in the file /var/lib/logrotate/status. If we want logrotate to log to a specific file for troubleshooting purposes, we can specify that from the command line as follows.

# logrotate -vf -s /var/log/logrotate-status /etc/logrotate.d/log-file

5. Logrotate cron job

The cron jobs needed for logrotate should automatically be created during installation. I am posting the contents of the cron file for reference.

# cat /etc/cron.daily/logrotate

#!/bin/sh
# Clean non existent log file entries from status file
cd /var/lib/logrotate
test -e status || touch status
head -1 status > status.clean
sed 's/"//g' status | while read logfile date
do
    [ -e "$logfile" ] && echo "\"$logfile\" $date"
done >> status.clean
mv status.clean status
test -x /usr/sbin/logrotate || exit 0
/usr/sbin/logrotate /etc/logrotate.conf

To sum up, logrotate is a very useful tool for preventing gigantic log files from using up storage space. Once configured, the process is fully automated, and can run without human intervention for a long time. This tutorial focused on several basic examples of how to use logrotate. You can customize it even further to match your requirements. View the full article
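The state file parsed by the cron script above records, for each log, the date it was last rotated, one `"<logfile>" <date>` pair per line after a version header. A small sketch for listing those entries (the function name is my own; the parsing mirrors the `sed 's/"//g'` trick the cron script itself uses):

```shell
# Print "<logfile> <date>" for every entry in a logrotate state file,
# skipping the version header on the first line.
last_rotations() {
  sed 's/"//g' "$1" | awk 'NR > 1 { print $1, $2 }'
}
```

Usage: `last_rotations /var/lib/logrotate/status` is a quick way to see which logs logrotate is tracking and when each was last rotated.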
  16. Keeping, maintaining and analyzing logs (i.e., accounts of events that have happened during a certain period of time or are currently happening) are among the most basic and essential tasks of a Linux system administrator. In case of user management, examining user logon and logout logs (both failed and successful) can alert us about any potential security breaches or unauthorized use of our system. For example, remote logins from unknown IP addresses or accounts being used outside working hours or during vacation leave should raise a red flag. On a CentOS system, user login history is stored in the following binary files: /var/run/utmp (which logs currently open sessions) is used by who and w tools to show who is logged on and what they are doing, and also by uptime to display system up time. /var/log/wtmp (which stores the history of connections to the system) is used by last tool to show the listing of last logged-in users. /var/log/btmp (which logs failed login attempts) is used by lastb utility to show the listing of last failed login attempts. In this post I'll show you how to use utmpdump, a simple program from the sysvinit-tools package that can be used to dump these binary log files in text format for inspection. This tool is available by default on stock CentOS 6 and 7. The information gleaned from utmpdump is more comprehensive than the output of the tools mentioned earlier, and that's what makes it a nice utility for the job. Besides, utmpdump can be used to modify utmp or wtmp, which can be useful if you want to fix any corrupted entries in the binary logs. How to Use Utmpdump and Interpret its Output As we mentioned earlier, these log files, as opposed to other logs most of us are familiar with (e.g., /var/log/messages, /var/log/cron, /var/log/maillog), are saved in binary file format, and thus we cannot use pagers such as less or more to view their contents. That is where utmpdump saves the day. 
In order to display the contents of /var/run/utmp, run the following command:

# utmpdump /var/run/utmp

To do the same with /var/log/wtmp:

# utmpdump /var/log/wtmp

and finally with /var/log/btmp:

# utmpdump /var/log/btmp

As you can see, the output formats of the three cases are identical, except for the fact that the records in the utmp and btmp are arranged chronologically, while in the wtmp, the order is reversed.

Each log line is formatted in multiple columns described as follows. The first field shows a session identifier, while the second holds the PID. The third field can hold one of the following values: ~~ (indicating a runlevel change or a system reboot), bw (meaning a bootwait process), a digit (indicating a TTY number), or a character and a digit (meaning a pseudo-terminal). The fourth field can be either empty or hold the user name, reboot, or runlevel. The fifth field holds the main TTY or PTY (pseudo-terminal), if that information is available. The sixth field holds the name of the remote host (if the login is performed from the local host, this field is blank, except for run-level messages, which will return the kernel version). The seventh field holds the IP address of the remote system (if the login is performed from the local host, this field shows the local host's address). If DNS resolution is not provided, the sixth and seventh fields will show identical information (the IP address of the remote system). The last (eighth) field indicates the date and time when the record was created.

Usage Examples of Utmpdump

Here are a few simple use cases of utmpdump.

1. Check how many times (and at what times) a particular user (e.g., gacanepa) logged on to the system between August 18 and September 17.

# utmpdump /var/log/wtmp | grep gacanepa

If you need to review login information from prior dates, you can check the wtmp-YYYYMMDD (or wtmp.[1...N]) and btmp-YYYYMMDD (or btmp.[1...N]) files in /var/log, which are the old archives of wtmp and btmp files, generated by logrotate. 2. 
Count the number of logins from IP address

# utmpdump /var/log/wtmp | grep

3. Display failed login attempts.

# utmpdump /var/log/btmp

In the output of /var/log/btmp, every log line corresponds to a failed login attempt (e.g., using an incorrect password or a non-existing user ID). Logons using non-existing user IDs are highlighted in the above image, which can alert you that someone is attempting to break into your system by guessing commonly-used account names. This is particularly serious in the cases when tty1 was used, since it means that someone had access to a terminal on your machine (time to check who has keys to your datacenter, maybe?).

4. Display login and logout information per user session.

# utmpdump /var/run/utmp

In /var/run/utmp, a new login event is characterized by '7' in the first field, a terminal number (or pseudo-terminal id) in the third field, and a username in the fourth. The corresponding logout event will be represented by '8' in the first field, the same PID as the login in the second field, and a blank terminal number field. For example, take a close look at PID 1463 in the above image.

On [Fri Sep 19 11:57:40 2014 ART] the login prompt appeared in tty1.
On [Fri Sep 19 12:04:21 2014 ART], user root logged on.
On [Fri Sep 19 12:07:24 2014 ART], root logged out.

On a side note, the word LOGIN in the fourth field means that a login prompt is present in the terminal specified in the fifth field.

So far I covered somewhat trivial examples. You can combine utmpdump with other text sculpting tools such as awk, sed, grep or cut to produce filtered and enhanced output. For example, you can use the following command to list all login events of a particular user (e.g., gacanepa) and send the output to a .csv file that can be viewed with a pager or a workbook application, such as LibreOffice's Calc or Microsoft Excel. 
Let's display PID, username, IP address and timestamp only: # utmpdump /var/log/wtmp | grep -E "\[7].*gacanepa" | awk -v OFS="," 'BEGIN {FS="] "}; {print $2,$4,$7,$8}' | sed -e 's/\[//g' -e 's/\]//g' As represented with three blocks in the image, the filtering logic is composed of three pipelined steps. The first step is used to look for login events ([7]) triggered by user gacanepa. The second and third steps are used to select the desired fields, remove the square brackets in the output of utmpdump, and set the output field separator to a comma. Of course, you need to redirect the output of the above command to a file if you want to open it later (append "> [name_of_file].csv" to the command). In more complex examples, if you want to know what users (as listed in /etc/passwd) have not logged on during a given period of time, you could extract the user names from /etc/passwd, and then grep the utmpdump output of /var/log/wtmp against that user list. As you can see, the possibilities are limitless. Before concluding, let's briefly show yet another use case of utmpdump: modifying utmp or wtmp. As these are binary log files, you cannot edit them as is. Instead, you can export their content to text format, modify the text output, and then import the modified content back to the binary logs. That is: # utmpdump /var/run/utmp > tmp_output < modify tmp_output using a text editor > # utmpdump -r tmp_output > /var/run/utmp This can be useful when you want to remove or fix any bogus entry in the binary logs. To sum up, utmpdump complements standard utilities such as who, w, uptime, last and lastb by dumping detailed login events stored in the utmp, wtmp and btmp log files, as well as in their rotated old archives, and that certainly makes it a great utility. *Source: http://xmodulo.com/2014/09/monitor-user-login-history-centos-utmpdump.html View the full article
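Since real wtmp data differs on every machine, the pipeline above can be rehearsed against a hand-written sample line that imitates utmpdump output (the PID, user and timestamp below are made up for illustration; on a real system you would feed `utmpdump /var/log/wtmp` into the same pipeline):

```shell
# Hypothetical line imitating `utmpdump /var/log/wtmp` output for user gacanepa
# (empty [] fields are the remote host and IP, blank for a local login)
sample='[7] [01463] [ts/0] [gacanepa] [pts/0       ] [] [] [Fri Sep 19 12:04:21 2014 ART]'

# Same three pipelined steps as above: filter login events, pick fields, strip brackets
csv=$(printf '%s\n' "$sample" \
  | grep -E '\[7].*gacanepa' \
  | awk -v OFS="," 'BEGIN {FS="] "}; {print $2,$4,$7,$8}' \
  | sed -e 's/\[//g' -e 's/\]//g')

echo "$csv"   # -> 01463,gacanepa,,Fri Sep 19 12:04:21 2014 ART
```

The empty third CSV column is the IP field, which stays blank for local logins, exactly as described in the field breakdown above.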
  17. Redundant Array of Independent Disks (RAID) is a storage technology that combines multiple hard disks into a single logical unit to provide fault-tolerance and/or improve disk I/O performance. Depending on how data is stored in an array of disks (e.g., with striping, mirroring, parity, or any combination thereof), different RAID levels are defined (e.g., RAID-0, RAID-1, RAID-5, etc). RAID can be implemented either in software or with a hardware RAID card. On modern Linux, basic software RAID functionality is available by default. In this post, we'll discuss the software setup of a RAID-1 array (also known as a "mirroring" array), where identical data is written to the two devices that form the array. While it is possible to implement RAID-1 with partitions on a single physical hard drive (as with other RAID levels), it won't be of much use if that single hard drive fails. In fact, that's why most RAID levels normally use multiple physical drives to provide redundancy. In the event of any single drive failure, the virtual RAID block device should continue functioning without issues, allowing us to replace the faulty drive without significant production downtime and, more importantly, with no data loss. However, it does not replace the need for periodic system backups on external storage. Since the actual storage capacity (size) of a RAID-1 array is the size of the smallest drive, you will normally (if not always) find two identical physical drives in a RAID-1 setup.
Installing mdadm on Linux
The tool that we are going to use to create, assemble, manage, and monitor our software RAID-1 is called mdadm (short for multiple disks admin). On Linux distros such as Fedora, CentOS, RHEL or Arch Linux, mdadm is available by default. On Debian-based distros, mdadm can be installed with aptitude or apt-get.
Fedora, CentOS or RHEL
As mdadm comes pre-installed, all you have to do is start the RAID monitoring service and configure it to auto-start upon boot: # systemctl start mdmonitor # systemctl enable mdmonitor For CentOS/RHEL 6, use these commands instead: # service mdmonitor start # chkconfig mdmonitor on
Debian, Ubuntu or Linux Mint
On Debian and its derivatives, mdadm can be installed with aptitude or apt-get: # aptitude install mdadm On Ubuntu, you will be asked to configure the postfix MTA for sending out email notifications (as part of RAID monitoring). You can skip it for now. On Debian, the installation will start with the following explanatory message to help us decide whether or not we are going to install the root filesystem on a RAID array. What we need to enter on the next screen will depend on this decision. Read it carefully: Since we will not use our RAID-1 for the root filesystem, we will leave the answer blank: When asked whether we want to start (reassemble) our array automatically during each boot, choose "Yes". Note that we will need to add an entry to the /etc/fstab file later in order for the array to be properly mounted during the boot process as well.
Partitioning Hard Drives
Now it's time to prepare the physical devices that will be used in our array. For this setup, I have plugged in two 8 GB USB drives that have been identified as /dev/sdb and /dev/sdc from dmesg output: # dmesg | less [ 60.014863] sd 3:0:0:0: [sdb] 15826944 512-byte logical blocks: (8.10 GB/7.54 GiB) [ 75.066466] sd 4:0:0:0: [sdc] 15826944 512-byte logical blocks: (8.10 GB/7.54 GiB) We will use fdisk to create a primary partition on each disk that will occupy its entire size.
The following steps show how to perform this task on /dev/sdb, and assume that this drive hasn't been partitioned yet (otherwise, we can delete the existing partition(s) to start off with a clean disk): # fdisk /dev/sdb Press 'p' to print the current partition table: (if one or more partitions are found, they can be deleted with the 'd' option, and the 'w' option is then used to apply the changes). Since no partitions are found, we will create a new partition ['n'] of type primary ['p'], assign partition number ['1'] to it, and then indicate its size. You can press the Enter key to accept the proposed default values, or enter a value of your choosing, as shown in the image below. Now repeat the same process for /dev/sdc. If we have two drives of different sizes, say 750 GB and 1 TB for example, we should create a primary partition of 750 GB on each of them, and use the remaining space on the bigger drive for another purpose, independent of the RAID array.
Create a RAID-1 Array
Once you are done with creating the primary partition on each drive, use the following command to create a RAID-1 array: # mdadm -Cv /dev/md0 -l1 -n2 /dev/sdb1 /dev/sdc1 Where: -Cv: creates an array and produces verbose output. /dev/md0: is the name of the array. -l1 (l as in "level"): indicates that this will be a RAID-1 array. -n2: indicates that we will add two partitions to the array, namely /dev/sdb1 and /dev/sdc1. The above command is equivalent to: # mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1 If you alternatively want to add a spare device in order to replace a faulty disk in the future, you can add '--spare-devices=1 /dev/sdd1' to the above command.
Answer "y" when prompted if you want to continue creating an array, then press Enter: You can check the progress with the following command: # cat /proc/mdstat Another way to obtain more information about a RAID array (both while it's being assembled and after the process is finished) is: # mdadm --query /dev/md0 # mdadm --detail /dev/md0 (or mdadm -D /dev/md0) Of the information provided by 'mdadm -D', perhaps the most useful is the part that shows the state of the array. The active state means that there is currently I/O activity happening. Other possible states are clean (all I/O activity has been completed), degraded (one of the devices is faulty or missing), resyncing (the system is recovering from an unclean shutdown such as a power outage), or recovering (a new drive has been added to the array, and data is being copied from the other drive onto it), to name the most common states.
Formatting and Mounting a RAID Array
The next step is formatting the array (with ext4 in this example): # mkfs.ext4 /dev/md0 Now let's mount the array, and verify that it was mounted correctly: # mount /dev/md0 /mnt # mount
Monitor a RAID Array
The mdadm tool comes with RAID monitoring capability built in. When mdadm is set to run as a daemon (which is the case with our RAID setup), it periodically polls existing RAID arrays, and reports on any detected events via email notification or syslog logging. Optionally, it can also be configured to invoke contingency commands (e.g., retrying or removing a disk) upon detecting any critical errors. By default, mdadm scans all existing partitions and MD arrays, and logs any detected event to /var/log/syslog. Alternatively, you can specify devices and RAID arrays to scan in mdadm.conf, located at /etc/mdadm/mdadm.conf (Debian-based) or /etc/mdadm.conf (Red Hat-based), in the following format. If mdadm.conf does not exist, create one.
DEVICE /dev/sd[bcde]1 /dev/sd[ab]1 ARRAY /dev/md0 devices=/dev/sdb1,/dev/sdc1 ARRAY /dev/md1 devices=/dev/sdd1,/dev/sde1 ..... # optional email address to notify events MAILADDR your@email.com After modifying the mdadm configuration, restart the mdadm daemon: On Debian, Ubuntu or Linux Mint: # service mdadm restart On Fedora, CentOS/RHEL 7: # systemctl restart mdmonitor On CentOS/RHEL 6: # service mdmonitor restart
Auto-mount a RAID Array
Now we will add an entry in /etc/fstab to mount the array in /mnt automatically during boot (you can specify any other mount point): # echo "/dev/md0 /mnt ext4 defaults 0 2" >> /etc/fstab To verify that the mount works okay, we now unmount the array, restart mdadm, and remount. We can see that /dev/md0 has been mounted as per the entry we just added to /etc/fstab: # umount /mnt # service mdadm restart (on Debian, Ubuntu or Linux Mint) or systemctl restart mdmonitor (on Fedora, CentOS/RHEL7) or service mdmonitor restart (on CentOS/RHEL6) # mount -a Now we are ready to access the RAID array via the /mnt mount point. To test the array, we'll copy the /etc/passwd file (any other file will do) into /mnt: On Debian, we need to tell the mdadm daemon to automatically start the RAID array during boot by setting the AUTOSTART variable to true in the /etc/default/mdadm file: AUTOSTART=true
Simulating Drive Failures
We will simulate a faulty drive and remove it with the following commands. Note that in a real life scenario, it is not necessary to mark a device as faulty first, as it will already be in that state in case of a failure. First, unmount the array: # umount /mnt Now, notice how the output of 'mdadm -D /dev/md0' indicates the changes after performing each command below.
# mdadm /dev/md0 --fail /dev/sdb1 #Marks /dev/sdb1 as faulty # mdadm --remove /dev/md0 /dev/sdb1 #Removes /dev/sdb1 from the array Afterwards, when you have a new drive for replacement, re-add the drive: # mdadm /dev/md0 --add /dev/sdb1 Rebuilding of the data onto /dev/sdb1 then starts immediately: Note that the steps detailed above apply to systems with hot-swappable disks. If you do not have such technology, you will also have to stop the current array and shut down your system first in order to replace the faulty drive: # mdadm --stop /dev/md0 # shutdown -h now Then add the new drive and re-assemble the array: # mdadm /dev/md0 --add /dev/sdb1 # mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 Source: http://xmodulo.com/2014/09/create-software-raid1-array-mdadm-linux.html View the full article
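A quick array-health check can also be scripted around /proc/mdstat, as mentioned above. The sketch below runs the same test against a captured sample of that file (the block count is illustrative), so it can be tried even on a machine with no md devices; on the real system you would grep /proc/mdstat itself:

```shell
# Sample /proc/mdstat content for the two-member RAID-1 built above (values illustrative)
mdstat='md0 : active raid1 sdc1[1] sdb1[0]
      7905728 blocks super 1.2 [2/2] [UU]'

# [UU] means both mirrors are in sync; an underscore (e.g. [U_]) marks a failed member
if printf '%s\n' "$mdstat" | grep -q '\[UU\]'; then
  state=healthy
else
  state=degraded
fi
echo "md0 is $state"
```

Swapping the here-string for `cat /proc/mdstat` turns this into a minimal cron-friendly health probe.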
  18. Bash shell script to take a backup to a tape device:
#!/bin/sh
#
# Set the tape device
TAPE=/dev/rmt/0
# Set the source path to back up
DIR=/files
# Set the log directory
LOGDIR=/var/log/backup
# Set the log file name
LOGFILE=$LOGDIR/backup_`date '+%Y%m%d'`
#=====================================
# Please do not remove anything from
# the lines below!!!!!
#=====================================
#
echo "`date '+%Y%m%d-%H%M'` Starting backup to tape ..." >> $LOGFILE 2>&1
# Remove backup logs older than 7 days
find $LOGDIR -name backup_\* -mtime +7 -exec rm {} \;
# Rewind the tape; a non-zero exit status indicates a drive problem
mt -f $TAPE rewind > /dev/null 2>&1
tape_status=$?
if [ $tape_status -ne 0 ]; then
echo "`date '+%Y%m%d-%H%M'` Tape drive problem ..." >> $LOGFILE 2>&1
exit 1
fi
mt -f $TAPE rewind > /dev/null 2>&1
echo "`date '+%Y%m%d-%H%M'` Starting tar ..." >> $LOGFILE 2>&1
tar cvf $TAPE $DIR >> $LOGFILE 2>&1
echo "`date '+%Y%m%d-%H%M'` Finished tar and ejecting tape ..." >> $LOGFILE 2>&1
mt -f $TAPE offline
View the full article
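Since tar treats the tape device as just another archive target, the backup/restore pair can be rehearsed against a regular file before trusting it with /dev/rmt/0. The paths below are scratch locations chosen only for this demo:

```shell
# Stand-ins for the tape device and the source directory used by the script above
TAPE=/tmp/faketape.tar
DIR=/tmp/files

mkdir -p "$DIR"
echo "important data" > "$DIR/doc.txt"

tar cf "$TAPE" -C /tmp files    # back up (the same tar step, minus the mt commands)
rm -rf "$DIR"                   # simulate data loss
tar xf "$TAPE" -C /tmp          # restore from the "tape"

cat /tmp/files/doc.txt          # -> important data
```

On the real device, the restore side is the same idea: `mt -f /dev/rmt/0 rewind` followed by `tar xvf /dev/rmt/0`.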
  19. We have seen how we can set up Nagios Remote Plugin Executor (NRPE) in an existing Nagios setup. However, the scripts and plugins needed to monitor memory usage do not come with stock Nagios. In this tutorial, we will see how we can configure NRPE to monitor RAM usage of a remote server. The script that we will use for monitoring RAM is available at Nagios Exchange, as well as in the creators' Github repository. Assuming that NRPE has already been set up, we start the process by downloading the script on the server that we want to monitor.
Preparing Remote Servers
On Debian/Ubuntu: # cd /usr/lib/nagios/plugins/ # wget https://raw.githubusercontent.com/justintime/nagios-plugins/master/check_mem/check_mem.pl # mv check_mem.pl check_mem # chmod +x check_mem On RHEL/CentOS: # cd /usr/lib64/nagios/plugins/ (or /usr/lib/nagios/plugins/ for 32-bit) # wget https://raw.githubusercontent.com/justintime/nagios-plugins/master/check_mem/check_mem.pl # mv check_mem.pl check_mem # chmod +x check_mem You can check whether the script generates output properly by manually running the following command on localhost. When used with NRPE, this command is supposed to check free memory, warn when free memory is less than 20%, and generate a critical alarm when free memory is less than 10%. # ./check_mem -f -w 20 -c 10 OK - 34.0% (2735744 kB) free.|TOTAL=8035340KB;;;; USED=5299596KB;6428272;7231806;; FREE=2735744KB;;;; CACHES=2703504KB;;;; If you see something like the above as an output, that means the command is working okay. Now that the script is ready, we define the command to check RAM usage for NRPE. As mentioned before, the command will check free memory, warn when free memory is less than 20%, and generate a critical alarm when free memory is less than 10%.
# vim /etc/nagios/nrpe.cfg For Debian/Ubuntu: command[check_mem]=/usr/lib/nagios/plugins/check_mem -f -w 20 -c 10 For RHEL/CentOS 32 bit: command[check_mem]=/usr/lib/nagios/plugins/check_mem -f -w 20 -c 10 For RHEL/CentOS 64 bit: command[check_mem]=/usr/lib64/nagios/plugins/check_mem -f -w 20 -c 10
Preparing Nagios Server
In the Nagios server, we define a custom command for NRPE. The command can be stored in any directory within Nagios. To keep the tutorial simple, we will put the command definition in the /etc/nagios directory. For Debian/Ubuntu: # vim /etc/nagios3/conf.d/nrpe_command.cfg define command{ command_name check_nrpe command_line /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -c '$ARG1$' } For RHEL/CentOS 32 bit: # vim /etc/nagios/objects/nrpe_command.cfg define command{ command_name check_nrpe command_line /usr/lib/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ } For RHEL/CentOS 64 bit: # vim /etc/nagios/objects/nrpe_command.cfg define command{ command_name check_nrpe command_line /usr/lib64/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ } Now we define the service check in Nagios. On Debian/Ubuntu: # vim /etc/nagios3/conf.d/nrpe_service_check.cfg define service{ use local-service host_name remote-server service_description Check RAM check_command check_nrpe!check_mem } On RHEL/CentOS: # vim /etc/nagios/objects/nrpe_service_check.cfg define service{ use local-service host_name remote-server service_description Check RAM check_command check_nrpe!check_mem } Finally, we restart the Nagios service. On Debian/Ubuntu: # service nagios3 restart On RHEL/CentOS 6: # service nagios restart On RHEL/CentOS 7: # systemctl restart nagios.service
Troubleshooting
Nagios should start checking RAM usage of the remote server using NRPE. If you are having any problem, you could check the following. Make sure that the NRPE port is allowed all the way to the remote host. The default NRPE port is TCP 5666.
You could try manually checking NRPE operation by executing the check_nrpe command: /usr/lib/nagios/plugins/check_nrpe -H remote-server You could also try to run the check_mem command manually: /usr/lib/nagios/plugins/check_nrpe -H remote-server -c check_mem On the remote server, set debug=1 in /etc/nagios/nrpe.cfg. Restart the NRPE service and check the log file /var/log/messages (RHEL/CentOS) or /var/log/syslog (Debian/Ubuntu). The log files should contain relevant information if there are any configuration or permission errors. If there are no hits in the log, it is very likely that the requests are not reaching the remote server due to port filtering at some point. To sum up, this tutorial demonstrated how we can easily tune NRPE to monitor RAM usage of remote servers. The process is as simple as downloading the script, defining the commands, and restarting the services. Hope this helps. *Source: http://xmodulo.com/2014/09/monitor-server-memory-usage-nagios-remote-plugin-executor.html View the full article
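The warn/critical threshold logic that `check_mem -f -w 20 -c 10` applies can be sketched in a few lines of shell. The memory figures below are taken from the sample output earlier in the post (8035340 kB total, 2735744 kB free); this is an integer-math approximation of the plugin's behavior, not the plugin itself:

```shell
# Threshold logic comparable to `check_mem -f -w 20 -c 10` (figures from the sample above)
total_kb=8035340
free_kb=2735744

pct=$(( free_kb * 100 / total_kb ))   # percentage of memory still free

if [ "$pct" -lt 10 ]; then
  status=CRITICAL
elif [ "$pct" -lt 20 ]; then
  status=WARNING
else
  status=OK
fi
echo "$status - ${pct}% free"         # -> OK - 34% free
```

This matches the "OK - 34.0% free" line the real plugin printed for the same numbers.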
  20. As far as network management is concerned, Nagios is one of the most powerful tools. Nagios can monitor the reachability of remote hosts, as well as the state of services running on them. However, what if we want to monitor something other than network services for a remote host? For example, we may want to monitor the disk utilization or CPU processor load of a remote host. Nagios Remote Plugin Executor (NRPE) is a tool that can help with doing that. NRPE allows one to execute Nagios plugins installed on remote hosts, and integrate them with an existing Nagios server. This tutorial will cover how to set up NRPE on an existing Nagios deployment. The tutorial is primarily divided into two parts: Configure remote hosts. Configure a Nagios monitoring server. We will then finish off by defining some custom commands that can be used with NRPE.
Configure Remote Hosts for NRPE
Step One: Installing NRPE Service
You need to install the NRPE service on every remote host that you want to monitor using NRPE. The NRPE service daemon on each remote host will then communicate with a Nagios monitoring server. The necessary packages for the NRPE service can easily be installed using apt-get or yum, depending on the platform. In the case of CentOS, we will need to add the Repoforge repository, as NRPE is not available in the CentOS repositories. On Debian, Ubuntu or Linux Mint: # apt-get install nagios-nrpe-server On CentOS, Fedora or RHEL: # yum install nagios-nrpe
Step Two: Preparing Configuration File
The configuration file /etc/nagios/nrpe.cfg is similar for Debian-based and RedHat-based systems. The configuration file is backed up, and then updated as follows. # vim /etc/nagios/nrpe.cfg ## NRPE service port can be customized ## server_port=5666 ## the nagios monitoring server is permitted ## ## NOTE: There is no space after the comma ## allowed_hosts=,X.X.X.X-IP_v4_of_Nagios_server ## The following examples use hard-coded command arguments. ## These parameters can be modified as needed.
## NOTE: For CentOS 64 bit, use /usr/lib64 instead of /usr/lib ## command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10 command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20 command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1 command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200 Now that the configuration file is ready, the NRPE service is ready to be fired up.
Step Three: Initiating NRPE Service
For RedHat-based systems, the NRPE service needs to be added as a startup service. On Debian, Ubuntu, Linux Mint: # service nagios-nrpe-server restart On CentOS, Fedora or RHEL: # service nrpe restart # chkconfig nrpe on
Step Four: Verifying NRPE Service Status
Information about the NRPE daemon status can be found in the system log. For a Debian-based system, the log file will be /var/log/syslog. The log file for a RedHat-based system will be /var/log/messages. A sample log is provided below for reference. nrpe[19723]: Starting up daemon nrpe[19723]: Listening for connections on port 5666 nrpe[19723]: Allowing connections from:,X.X.X.X In case a firewall is running, TCP port 5666, which is used by the NRPE daemon, should be open. # netstat -tpln | grep 5666 tcp 0 0* LISTEN 19885/nrpe
Configure Nagios Monitoring Server for NRPE
The first step in configuring an existing Nagios monitoring server for NRPE is to install the NRPE plugin on the server.
Step One: Installing NRPE Plugin
In case the Nagios server is running on a Debian-based system (Debian, Ubuntu or Linux Mint), the necessary package can be installed using apt-get. # apt-get install nagios-nrpe-plugin After the plugin is installed, the check_nrpe command, which comes with the plugin, is modified a bit.
# vim /etc/nagios-plugins/config/check_nrpe.cfg ## the default command is overwritten ## define command{ command_name check_nrpe command_line /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -c '$ARG1$' } In case the Nagios server is running on a RedHat-based system (CentOS, Fedora or RHEL), you can install the NRPE plugin using yum. On CentOS, adding the Repoforge repository is necessary. # yum install nagios-plugins-nrpe Now that the NRPE plugin is installed, proceed to configure the Nagios server following the rest of the steps.
Step Two: Defining Nagios Command for NRPE Plugin
First, we need to define a command in Nagios for using NRPE. # vim /etc/nagios/objects/commands.cfg ## NOTE: For CentOS 64 bit, use /usr/lib64 instead of /usr/lib ## define command{ command_name check_nrpe command_line /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -c '$ARG1$' }
Step Three: Adding Host and Command Definition
Next, define the remote host(s) and the commands to execute remotely on them. The following shows sample definitions of a remote host and a command to execute on it. Naturally, your configuration will be adjusted based on your requirements. The path to the file is slightly different for Debian-based and RedHat-based systems, but the content of the files is identical. On Debian, Ubuntu or Linux Mint: # vim /etc/nagios3/conf.d/nrpe.cfg On CentOS, Fedora or RHEL: # vim /etc/nagios/objects/nrpe.cfg define host{ use linux-server host_name server-1 alias server-1 address X.X.X.X-IPv4_address_of_remote_host } define service { host_name server-1 service_description Check Load check_command check_nrpe!check_load check_interval 1 use generic-service }
Step Four: Restarting Nagios Service
Before restarting Nagios, the updated configuration is verified with a dry run. On Ubuntu, Debian, or Linux Mint: # nagios3 -v /etc/nagios3/nagios.cfg On CentOS, Fedora or RHEL: # nagios -v /etc/nagios/nagios.cfg If everything goes well, the Nagios service can be restarted.
# service nagios restart
Configuring Custom Commands with NRPE
Setup on Remote Servers
The following is a list of custom commands that can be used with NRPE. These commands are defined in the file /etc/nagios/nrpe.cfg located on the remote servers. ## Warning status when load average exceeds 1, 2 and 1 for the 1, 5 and 15 minute intervals, respectively. ## Critical status when load average exceeds 3, 5 and 3 for the 1, 5 and 15 minute intervals, respectively. command[check_load]=/usr/lib/nagios/plugins/check_load -w 1,2,1 -c 3,5,3 ## Warning level 25% and critical level 10% for free space of /home. ## Could be customized to monitor any partition (e.g. /dev/sdb1, /, /var, /home) command[check_disk]=/usr/lib/nagios/plugins/check_disk -w 25% -c 10% -p /home ## Warn if the number of instances of process_ABC exceeds 10. Critical for 20 ## command[check_process_ABC]=/usr/lib/nagios/plugins/check_procs -w 1:10 -c 1:20 -C process_ABC ## Critical if the number of instances of process_XYZ drops below 1 ## command[check_process_XYZ]=/usr/lib/nagios/plugins/check_procs -w 1: -c 1: -C process_XYZ
Setup on Nagios Monitoring Server
To apply the custom commands defined above, we modify the service definitions on the Nagios monitoring server as follows. The service definitions could go into the file where all the services are defined (e.g., /etc/nagios/objects/nrpe.cfg or /etc/nagios3/conf.d/nrpe.cfg) ## example 1: check process XYZ ## define service { host_name server-1 service_description Check Process XYZ check_command check_nrpe!check_process_XYZ check_interval 1 use generic-service } ## example 2: check disk state ## define service { host_name server-1 service_description Check Disk check_command check_nrpe!check_disk check_interval 1 use generic-service } To sum up, NRPE is a powerful add-on to Nagios, as it makes it possible to monitor a remote server in a highly configurable fashion.
Using NRPE, we can monitor server load, running processes, logged-in users, disk states and other parameters. Hope this helps. *Source: http://xmodulo.com/2014/03/nagios-remote-plugin-executor-nrpe-linux.html View the full article
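A quick way to double-check which commands a remote host actually exposes is to pull the command names out of nrpe.cfg. The sketch below runs against an inline copy of the fragment shown earlier, so it is safe to try anywhere; on a real host you would point the same sed expression at /etc/nagios/nrpe.cfg:

```shell
# nrpe.cfg fragment (copied from the configuration shown above)
cfg='command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20'

# Extract the text between `command[` and `]` on each definition line
names=$(printf '%s\n' "$cfg" | sed -n 's/^command\[\([^]]*\)\].*/\1/p')
echo "$names"
```

Each printed name is exactly what the Nagios server passes as `$ARG1$` to check_nrpe, so this list is what you can reference after the `!` in check_command.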
  21. If you are a command-line junkie, you have most likely spent hours troubleshooting broken scripts or writing your own. This process can be exhausting for one person, and you have probably wished for a simple way to share your terminal session with friends so they could assist you. Linux/Unix make it extremely easy with a simple terminal program called Byobu. Byobu is an enhancement of the terminal multiplexers GNU Screen and tmux, and allows you to share a terminal session with as many people as you want.
Installation
Installing Byobu on most Linux distributions is incredibly easy. Open your terminal and execute the appropriate command for your distribution. Debian/Ubuntu/Linux Mint (and other Debian-based or Ubuntu-based distros): sudo apt-get install byobu CentOS/Red Hat Linux/Fedora: sudo yum install byobu ArchLinux: sudo pacman -Sy byobu Gentoo: sudo emerge byobu
Mac OS X
Installation on Mac OS X is almost as simple. The only requirement is that you have Homebrew installed. Homebrew is an unofficial package manager for Mac OS X. It makes installing command-line utilities extremely easy. To install Homebrew, open your terminal by navigating to "Finder -> Utilities" and locating the terminal icon. Paste the following command to begin the installation, and type your password when prompted. ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)" Once installation is complete, you are ready to install Byobu. sudo brew install byobu
Get Sharing
Byobu's default configuration options will let you share your screen directly after install. Once you have installed Byobu, you can create a session instantly by typing byobu. byobu Now open up a second terminal and do the same thing. You will instantly be dropped into the same Byobu session as the first. Obviously, connecting locally to a Byobu session on the same computer is fairly useless, but Byobu works remotely as well over SSH.
If you are running a remote server, first connect to it over SSH like normal. ssh user@somehost Once again, type byobu. Repeat this process with as many people as you want. Instantly, they will attach to your Byobu session. Each of them will be able to view and control the session. Byobu remote connect to server from first client: Byobu remote connect to server from second client:
Extra Features
Byobu has many additional features that can be accessed via keyboard shortcuts.
Create a new terminal session inside Byobu: F2
Switch between terminal sessions inside Byobu: F4
Detach from the Byobu session and log out: F6
Split the Byobu screen vertically: Ctrl + F2
Split the Byobu screen horizontally: Shift + F2
Launch the Byobu configuration menu: F9
From the configuration menu, you can change the display notifications at the bottom of the terminal as well as launch Byobu at startup. More advanced users can also set an alternate escape sequence, which by default is "Ctrl + A". For more cool tips and tricks such as custom background colors, check out Ubuntu's Byobu documentation. *Source: http://www.maketecheasier.com/share-terminal-session-with-friends/ View the full article
  22. Today I was trying to learn more about systemd, and I came across a great article about firewalld. I'm sharing it with you because it will help you understand this major change in RHEL and CentOS 7. This article is not mine; I found it on the internet and felt it was wonderful, so I'm sharing it with you all. Thanks to the original author, who is credited at the end of the article.
Presentation
Firewalld is the new userland firewall interface in RHEL 7. It replaces the iptables interface and connects to the netfilter kernel code. It mainly improves security rule management by allowing configuration changes without stopping current connections. To know if Firewalld is running, type: # systemctl status firewalld firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled) Active: active (running) since Tue 2014-06-17 11:14:49 CEST; 5 days ago ...or alternatively: # firewall-cmd --state running Note: If Firewalld is not running, the command displays not running. If you've got several network interfaces in IPv4, you will have to activate ip_forwarding. To do that, paste the following line into the /etc/sysctl.conf file: net.ipv4.ip_forward=1 Then, activate the configuration: # sysctl -p Although Firewalld is the RHEL 7 way to deal with firewalls and provides many improvements, iptables can still be used.
Zone management
Also, a new concept of zones appears: all network interfaces can be located in the same default zone or divided into different ones according to the levels of trust defined.
To get the default zone, type: # firewall-cmd --get-default-zone public To get the list of zones where you've got network interfaces assigned, type: # firewall-cmd --get-active-zones public interfaces: eth0 To get the list of all the available zones, type: # firewall-cmd --get-zones block dmz drop external home internal public trusted work To get all the details about the public zone, type: # firewall-cmd --zone=public --list-all public (default, active) interfaces: eth0 sources: services: dhcpv6-client ssh ports: masquerade: no forward-ports: icmp-blocks: rich rules: To change the default zone to home permanently, type: # firewall-cmd --set-default-zone=home success Network interfaces can be assigned to a zone in a temporary (until the next reboot or reload) or permanent way. To assign the eth0 network interface temporarily to the internal zone, type: # firewall-cmd --zone=internal --change-interface=eth0 success To assign the eth0 network interface permanently to the internal zone (a file called internal.xml is created in the /etc/firewalld/zones directory), type: # firewall-cmd --permanent --zone=internal --change-interface=eth0 success To know which zone is associated with the eth0 interface, type: # firewall-cmd --get-zone-of-interface=eth0 internal
Service management
After assigning each network interface to a zone, it is now possible to add services to each zone. To allow the http service permanently in the internal zone, type: # firewall-cmd --permanent --zone=internal --add-service=http success # firewall-cmd --reload Note1: Type --remove-service=http to deny the http service. Note2: The firewall-cmd --reload command is necessary to activate the change. Contrary to the --complete-reload option, current connections are not stopped. To get the list of services in the default zone, type: # firewall-cmd --list-services dhcpv6-client ssh Note: To get the list of the services in a particular zone, add the --zone= option.
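If you need the services of a zone as a plain list for scripting, the relevant line of the `--list-all` output can be extracted with sed. The sketch below works on a captured sample of that output (taken from the public-zone example above), so it does not require a running firewalld; on a live system you would pipe `firewall-cmd --zone=public --list-all` into the same sed command:

```shell
# Captured output of `firewall-cmd --zone=public --list-all` (from the example above)
listall='public (default, active)
  interfaces: eth0
  sources:
  services: dhcpv6-client ssh
  ports:'

# Keep only the "services:" line and strip the label
services=$(printf '%s\n' "$listall" | sed -n 's/^ *services: *//p')
echo "$services"   # -> dhcpv6-client ssh
```

The resulting space-separated list can then be looped over with a plain `for` to audit each allowed service.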
Service firewall configuration
With the Firewalld package, the firewall configuration of the main services (ftp, httpd, etc.) comes in the /usr/lib/firewalld/services directory. But it is still possible to add new ones in the /etc/firewalld/services directory. Also, if files exist at both locations for the same service, the file in the /etc/firewalld/services directory takes precedence. For example, this is the case for the HAProxy service: there is no firewall configuration associated with it. Create the /etc/firewalld/services/haproxy.xml file and paste the following lines (the markup below is reconstructed from the standard firewalld service file format; the port is an assumption, so adjust it to the port your HAProxy instance actually listens on):
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>HAProxy</short>
  <description>HAProxy load-balancer</description>
  <port protocol="tcp" port="80"/>
</service>
Assign the correct SELinux context and file permissions to the haproxy.xml file: # cd /etc/firewalld/services # restorecon haproxy.xml # chmod 640 haproxy.xml Add the HAProxy service to the default zone permanently and reload the firewall configuration: # firewall-cmd --permanent --add-service=haproxy # firewall-cmd --reload
Port management
Port management follows the same model as service management. To allow the 443/tcp port temporarily in the internal zone, type: # firewall-cmd --zone=internal --add-port=443/tcp success # firewall-cmd --reload Note: type --remove-port=443/tcp to deny the port. To get the list of ports open in the internal zone, type: # firewall-cmd --zone=internal --list-ports 443/tcp
Masquerading
If your firewall is your network gateway and you don't want everybody to know your internal addresses, you can set up two zones, one called internal, the other external, and configure masquerading on the external zone. This way, all packets will get your firewall IP address as their source address. To set up masquerading on the external zone, type: # firewall-cmd --zone=external --add-masquerade Note1: To remove masquerading, use the --remove-masquerade option. Note2: To know if masquerading is active in a zone, use the --query-masquerade option.
Port forwarding
In addition to masquerading, you may want to use port forwarding.
If you want all packets intended for port 22 to be forwarded to port 3753, type:

# firewall-cmd --zone=external --add-forward-port=port=22:proto=tcp:toport=3753

Note 1: To remove port forwarding, use the --remove-forward-port option.
Note 2: To know if port forwarding is active in a zone, use the --query-forward-port option.

If you also want to define the destination IP address, append it with the toaddr= parameter:

# firewall-cmd --zone=external --add-forward-port=port=22:proto=tcp:toport=3753:toaddr=

Direct rules

It is still possible to set specific rules by using the direct mode (here to open the tcp port 9000), which bypasses the Firewalld interface:

# firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp --dport 9000 -j ACCEPT
success
# firewall-cmd --reload

Note: This last example has been borrowed from Khosro Taraghi's blog.

To display all the direct rules added, type:

# firewall-cmd --direct --get-all-rules

In addition, you can read this very good article about Firewalld by Sander van Vugt. Thanks to the original author for explaining it very nicely.

Source: http://www.certdepot.net/rhel7-get-started-firewalld/

View the full article
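Service definition files like the HAProxy example above follow a fixed layout, so generating them from a script avoids hand-editing XML. This is a sketch; the make_service function and the SERVICE_DIR override are my own conveniences, and the ports passed in are whatever your service actually listens on:

```shell
#!/bin/bash
# Generate a firewalld service definition file in the layout used by
# the HAProxy example above. SERVICE_DIR defaults to the real location;
# override it to try the script in a scratch directory first.
SERVICE_DIR="${SERVICE_DIR:-/etc/firewalld/services}"

make_service() {
    local name="$1" short="$2" desc="$3"; shift 3
    local file="$SERVICE_DIR/$name.xml"
    {
        echo '<?xml version="1.0" encoding="utf-8"?>'
        echo '<service>'
        echo "  <short>$short</short>"
        echo "  <description>$desc</description>"
        local p
        # Remaining arguments are port/protocol pairs, e.g. 80/tcp.
        for p in "$@"; do
            echo "  <port protocol=\"${p#*/}\" port=\"${p%/*}\"/>"
        done
        echo '</service>'
    } > "$file"
    chmod 640 "$file"
}
```

For example, make_service haproxy HAProxy "HAProxy load-balancer" 80/tcp 443/tcp writes haproxy.xml; you would still run restorecon on the file and firewall-cmd --permanent --add-service=haproxy afterwards, as shown above.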
23. The default timeout for the GRUB2 boot menu selection during system boot is 5 seconds. To change this value, open the /etc/default/grub default config file. The content of the file looks similar to the one shown below:

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=rhel/root crashkernel=auto \
rd.lvm.lv=rhel/swap vconsole.font=latarcyrheb-sun16 vconsole.keymap=us rhgb quiet"
GRUB_DISABLE_RECOVERY="true"

To change the timeout, simply update the GRUB_TIMEOUT directive to the number of seconds you wish GRUB2 to wait for your boot menu selection. For example, to make GRUB2 wait 20 seconds, change the value to:

GRUB_TIMEOUT=20

If you do not wish the GRUB2 menu to appear at all, disable it by changing the value to 0 seconds:

GRUB_TIMEOUT=0

Once you have updated your settings, use the grub2-mkconfig command to generate a new GRUB2 config file:

[root@rhel7 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-123.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-123.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-75387b56d72b48b380810499805ec28a
Found initrd image: /boot/initramfs-0-rescue-75387b56d72b48b380810499805ec28a.img
done

*Source: http://linuxconfig.org/update-grub2-menu-timeout-on-rhel-7-linux

View the full article
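The edit described above can also be done non-interactively, which is handy when configuring many machines. A minimal sketch (the set_grub_timeout function name is mine; the file path is parameterized so you can try it on a copy first):

```shell
#!/bin/bash
# Set GRUB_TIMEOUT in a grub default config file in place.
# Usage: set_grub_timeout <seconds> [file]
set_grub_timeout() {
    local seconds="$1" file="${2:-/etc/default/grub}"
    # Replace the existing GRUB_TIMEOUT line, whatever its current value.
    sed -i "s/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=$seconds/" "$file"
}
```

After running it against the real file, regenerate the config with grub2-mkconfig -o /boot/grub2/grub.cfg as shown above, or the new timeout will not take effect.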
24. Docker is a container-based software framework for automating deployment of applications. "Containers" are encapsulated, lightweight, and portable application modules. The major (intended) benefit of using a container is that your application will run consistently on and between any server, be it cloud or dedicated, and across varying operating systems.

Pre-Flight Check

As of June 2014, Docker has officially released v1.0.0. These instructions are intended for installing Docker. I'll be working from a Liquid Web Self Managed CentOS 7 server, and I'll be logged in as root.

Step 1: Installation of Docker

As a matter of best practice, we'll update our packages:

yum -y update

Now let's install Docker by installing the docker and docker-registry packages:

yum -y install docker docker-registry

Step 2: Start Docker and Make Sure Docker Starts on Boot

Set the Docker service to start at boot:

systemctl enable docker.service

Then start the Docker service:

systemctl start docker.service

And verify your work by checking the status of the Docker service:

systemctl status docker.service

Step 3: Download a Docker Container

Let's begin using Docker! Download the fedora Docker image:

docker pull fedora

Step 4: Run a Docker Container

Now, to set up a basic fedora container with a bash shell, we just run one command. docker run will run a command in a new container, -i attaches stdin and stdout, -t allocates a tty, and we're using the standard fedora container.

docker run -i -t fedora /bin/bash

That's it! You're now using a bash shell inside of a fedora docker container. To disconnect, or detach, from the shell without exiting, use the escape sequence Ctrl-p + Ctrl-q.

There are many community containers already available, which can be found through a search. In the command below I am searching for the keyword centos:

docker search centos

View the full article
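In a script, the human-oriented output of systemctl status is awkward to check; systemctl is-active gives a single machine-readable word. A small polling helper along those lines, as a sketch (the wait_for_unit function name and the CHECK_CMD override used for testing are my own):

```shell
#!/bin/bash
# Wait until a systemd unit reports "active", polling once per second.
# CHECK_CMD defaults to systemctl and can be overridden for testing.
CHECK_CMD="${CHECK_CMD:-systemctl}"

wait_for_unit() {
    local unit="$1" tries="${2:-30}"
    local i
    for ((i = 0; i < tries; i++)); do
        # "systemctl is-active <unit>" prints exactly "active" when up.
        if [ "$("$CHECK_CMD" is-active "$unit" 2>/dev/null)" = "active" ]; then
            return 0
        fi
        sleep 1
    done
    return 1
}
```

For example, wait_for_unit docker.service 30 after systemctl start docker.service blocks until Docker is actually up (or 30 seconds pass), which is more reliable than assuming the start command returned only once the daemon was ready.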
25. Question: What is a proper way to change the hostname on CentOS / RHEL 7 (permanently or temporarily)?

In CentOS or RHEL, there are three kinds of hostnames defined: (1) static, (2) transient, and (3) pretty. The "static" hostname is also known as the kernel hostname, which is initialized from /etc/hostname automatically at boot time. The "transient" hostname is a temporary hostname assigned at run time, for example, by a DHCP or mDNS server. Both static and transient hostnames follow the same character restriction rules as Internet domain names. The "pretty" hostname is allowed to be free-form (including special/whitespace characters) and is presented to end users (e.g., Dan's Computer).

In CentOS/RHEL 7, there is a command line utility called hostnamectl, which allows you to view or modify hostname related configurations.

To view hostname related settings:

$ hostnamectl status

To view the static, transient or pretty hostname only, use the "--static", "--transient" or "--pretty" option, respectively.

$ hostnamectl status [--static|--transient|--pretty]

To change all three hostnames (static, transient, and pretty) simultaneously:

$ sudo hostnamectl set-hostname <host-name>

As noted above, in the case of static/transient hostnames, any special or whitespace character will be removed, and any uppercase letter will automatically be converted to lowercase in the supplied argument. Once the static hostname is changed, /etc/hostname will automatically be updated accordingly. However, /etc/hosts will not be updated to reflect the change, so you need to update /etc/hosts manually.

If you want to change a particular hostname only (static, transient or pretty), you can use the "--static", "--transient" or "--pretty" option. For example, to change the hostname permanently, you can change the static hostname:

$ sudo hostnamectl --static set-hostname <host-name>

Note that you do not have to reboot the machine to activate a permanent hostname change. The above command will change the kernel hostname immediately.
Log out, and log back in to see the new static hostname in the command-line prompt.

*Source: http://ask.xmodulo.com/change-hostname-centos-rhel-7.html

View the full article
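Because hostnamectl silently normalizes the static hostname (lowercasing letters, stripping disallowed characters), it can be worth validating the name yourself before applying it. A sketch of such a pre-check; the valid_static_hostname function is my own, approximating the domain-name character rules the text mentions:

```shell
#!/bin/bash
# Check that a name is usable as a static hostname: dot-separated
# labels of lowercase letters, digits and hyphens, where no label
# starts or ends with a hyphen, and at most 64 characters overall.
valid_static_hostname() {
    local name="$1"
    [ -n "$name" ] && [ "${#name}" -le 64 ] || return 1
    # One or more labels; hyphens allowed only in the middle of a label.
    local re='^([a-z0-9]([a-z0-9-]*[a-z0-9])?)(\.([a-z0-9]([a-z0-9-]*[a-z0-9])?))*$'
    [[ "$name" =~ $re ]]
}
```

Typical use: valid_static_hostname web01.example.com && sudo hostnamectl set-hostname web01.example.com, so a name that would be silently rewritten is rejected up front instead.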