
glaukos

Members
  • Content Count

    27
  • Joined

  • Last visited

  • Feedback

    N/A

Community Reputation

4 Neutral

About glaukos

  • Rank
    Member

  1. Use the following script to find the top processes, sorted by memory usage in megabytes (MB):
     #!/bin/bash
     ps axo rss,comm,pid | awk '{ proc_list[$2]++; proc_list[$2 "," 1] += $1; } \
     END { for (proc in proc_list) { printf("%d\t%s\n", \
     proc_list[proc "," 1],proc); }}' | sort -n | tail -n 10 | sort -rn \
     | awk '{$1/=1024;printf "%.0fMB\t",$1}{print $2}'
     A simpler per-process variant is sketched below. View the full article
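     If you only need the top individual processes rather than totals aggregated per command name, a single ps invocation can do the job. This is a minimal sketch, assuming a procps-ng ps that supports --sort:
     # Top 10 individual processes by resident memory, converted to MB
     ps axo pid,comm,rss --sort=-rss | head -n 11 | \
       awk 'NR>1 {printf "%.0fMB\t%s (pid %s)\n", $3/1024, $2, $1}'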
  2. 1. Run the following command to find the largest directories under the /home partition:
     # du -a /home | sort -n -r | head -n 5
     2. To display the largest directories in the current working directory, run:
     # du -a | sort -n -r | head -n 5
     3. You may want the result in human-readable format, i.e. the largest files shown in KB, MB, or GB:
     # du -hs * | sort -rh | head -5
     4. To display the largest folders/files including the sub-directories, run:
     # du -Sh | sort -rh | head -5
     5. To find the 10 largest files (linux/bash):
     find . -type f -print0 | xargs -0 du | sort -n | tail -10 | cut -f2 | xargs -I{} du -sh {}
     6. To find the 10 largest directories:
     find . -type d -print0 | xargs -0 du | sort -n | tail -10 | cut -f2 | xargs -I{} du -sh {}
     The only difference between the two is -type {d|f}: d = directories, f = files. A variant that stays on one filesystem is sketched below. View the full article
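     One pitfall with running du against /home or / is that it will happily descend into other mounted filesystems and inflate the numbers. A small hedged variant, assuming GNU du and sort:
     # -x keeps du on the filesystem containing /home; --max-depth=1
     # summarizes each immediate subdirectory instead of listing every file
     du -xh --max-depth=1 /home | sort -rh | head -n 5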
  3. Background Information
     A group of security researchers discovered that SSLv2 (Secure Sockets Layer protocol version 2.0) is vulnerable to the Bleichenbacher RSA padding oracle attack, which can be used to decrypt RSA ciphertext without knowledge of the matching private RSA key. This is done by observing responses from a server that has the private key and performs decryption of attacker-provided ciphertexts using that key. The researchers also demonstrated a new cross-protocol attack which allows decryption of SSL/TLS sessions using newer protocol versions - SSLv3 or any current TLS (Transport Layer Security) version (1.0 - 1.2) - via this SSLv2 weakness. This flaw is an SSLv2 protocol issue and affects all implementations of the protocol. Researchers refer to this attack as general DROWN.
     Additionally, flaws were found in the SSLv2 protocol implementation in the OpenSSL cryptography and SSL/TLS library which make it possible to perform a more efficient variant of the DROWN attack, referred to as special DROWN. These issues were assigned CVE-2016-0703 and CVE-2016-0704, and were already corrected as part of the fix for CVE-2015-0293. Further details of this attack can be found in the researchers' paper, titled DROWN: Breaking TLS using SSLv2, at https://drownattack.com/
     Impact to systems
     A server is vulnerable to the DROWN attack if it enables the SSLv2 protocol in addition to SSLv3 or TLSv1.x, and if it uses RSA key exchange cipher suites. A server that does not enable SSLv2 can also be vulnerable if it shares its private RSA key with another server that does. For example, the DROWN attack can still be used to decrypt HTTPS sessions to a web server that does not enable SSLv2 if it shares its RSA key with, e.g., an IMAP server, possibly running on the same host, that does enable SSLv2. Use of weak or export SSLv2 ciphers is required to perform the attack efficiently. SSL/TLS connections using non-RSA key exchange, such as Diffie-Hellman or Elliptic Curve Diffie-Hellman, cannot be decrypted using the DROWN attack.
     What configurations are vulnerable to this attack?
     TLS servers which support SSLv2 are vulnerable. Most daemons capable of communication over TLS in Red Hat Enterprise Linux are linked against the OpenSSL cryptographic library. OpenSSL removed support for SSLv2 cipher suites in openssl-1.0.0. Red Hat Enterprise Linux 6 and 7 ship openssl-1.0.1e and contain this patch. However, it was found that OpenSSL servers do not respect the cipher suites advertised in the ServerHello handshake message, which could result in export SSLv2 cipher suites being used even if they were explicitly disabled; the SSLv2 protocol itself was also enabled. This flaw was assigned CVE-2015-3197 and is fixed by the security advisories listed in the Resolution section below. The NSS (Network Security Services) library available in Red Hat Enterprise Linux does not enable SSLv2 by default.
     Services which do not use SSLv2, but share their RSA keys with services which have SSLv2 support, are also vulnerable. This attack can only be conducted if RSA cipher suites for key exchange are enabled on the server; other key exchange methods, such as Diffie-Hellman and elliptic-curve Diffie-Hellman, are not vulnerable. Lastly, the attack can be conducted in real time against OpenSSL servers which are vulnerable to CVE-2015-0293. A quick manual check for SSLv2 support is sketched below.
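     To manually check whether a given service still accepts SSLv2 handshakes, openssl's s_client can be pointed at it. This is a hedged sketch: the -ssl2 option only exists in openssl builds compiled with SSLv2 support, and mail.example.com is a placeholder hostname:
     # Attempt an SSLv2-only handshake; a completed handshake means SSLv2 is enabled
     openssl s_client -connect mail.example.com:443 -ssl2 < /dev/null
     # A handshake failure (or an unknown-option error) suggests SSLv2 is unavailable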
     This security flaw was previously addressed by RHSA-2015:0800 for Red Hat Enterprise Linux 5, RHSA-2015:0715 for Red Hat Enterprise Linux 6 and RHSA-2015:0716 for Red Hat Enterprise Linux 7. The attack which combines the SSLv2 protocol flaw with the OpenSSL implementation flaw is dubbed special DROWN.
     Take Action
     Immediately apply available patches to remediate the issue. Rebooting the system after updating is the safest way to ensure all affected services use the updated SSL library. If a reboot is not possible, restarting all network services that depend on openssl after applying the patches is required.
     Diagnose
     To manually detect if a system is vulnerable, please use the script below: DROWN-test.sh (script version 1.0)
     Usage
     $ chmod +x DROWN-test.sh
     $ ./DROWN-test.sh
     If the installed version of openssl is vulnerable, you will see output similar to:
     WARNING: The installed version of openssl (openssl-1.0.1e-42.el7) is vulnerable to both general and special DROWN attack and should be upgraded! See https://access.redhat.com/security/vulnerabilities/drown for more information. View the full article
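     On RPM-based systems the patch-and-verify cycle is short. The following is a sketch, not Red Hat's official procedure; the lsof trick for spotting services still mapped to a deleted (pre-update) library is a common admin heuristic:
     # Check the installed openssl build, then pull in the advisory update
     rpm -q openssl
     yum update openssl
     # Find processes still using the old, now-deleted library (they need a restart)
     lsof -n 2>/dev/null | grep -E 'libssl|libcrypto' | grep DEL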
  4. As Linux system admins we regularly face low disk space issues. By implementing user and group disk quotas on the file system we can address them: quotas restrict users to an allotted number of disk blocks and inodes on a particular file system. In this post we will discuss how to enable user & group disk quotas on the /home file system on CentOS 7 & RHEL 7.
     Step:1 Add the usrquota & grpquota options for /home in the /etc/fstab file (an example entry is sketched after this article).
     [root@linuxtechi ~]# vi /etc/fstab
     Save & exit the file. In this example I have added the user and group quota options on /home.
     Step:2 Remount the /home file system via the mount command:
     [root@linuxtechi ~]# mount -o remount /home
     Now recheck the /home file system to confirm the quota options are enabled:
     [root@linuxtechi ~]# mount | grep /home
     /dev/mapper/centos-home on /home type ext4 (rw,relatime,seclabel,quota,usrquota,grpquota,data=ordered)
     Step:3 Create the quota database files using quotacheck:
     [root@linuxtechi home]# quotacheck -cugv /home
     Where:
     -c : create the quota files rather than using existing ones
     -v : verbose output
     -u : user disk quota
     -g : group disk quota
     The above command will create the aquota.user & aquota.group files under /home. Turn on quota for /home using the command below:
     [root@linuxtechi ~]# quotaon /home/
     Step:4 Assign user & group disk quotas via the edquota command.
     Syntax: # edquota -u <user>  or  # edquota -g <group>
     [root@linuxtechi ~]# edquota -u jack
     [root@linuxtechi ~]# edquota -g sys_admin
     As shown above, there are two kinds of disk quota limits:
     soft : warns the user when the soft limit is reached (size is in KB); in the example above the soft limit for user jack is 5500 KB (approx 5.5 MB)
     hard : prevents the user from creating new files once the hard limit is reached (size is in KB); in the example above the hard limit for user jack is 6000 KB (approx 6 MB)
     Note: quotas can also be set on the basis of inodes (i.e. the number of files a user can create on a particular file system).
     Let's take an example: log in as user jack and try to create an 8 MB file.
     [root@linuxtechi ~]# su - jack
     [jack@linuxtechi ~]$ dd if=/dev/zero of=bgfile bs=1M count=8
     dm-2: warning, user block quota exceeded.
     dm-2: write failed, user block limit reached.
     dd: error writing ‘bgfile’: Disk quota exceeded
     6+0 records in
     5+0 records out
     6144000 bytes (6.1 MB) copied, 0.00711317 s, 864 MB/s
     As we can see above, the soft & hard limits were exceeded for user jack; from now on he cannot create new files.
     Step:5 Display the quota report for users in human-readable form:
     [root@linuxtechi ~]# repquota -as
     Step:6 Configure the grace period for soft limits. The grace period is the amount of time during which a soft limit may be exceeded; once the grace period expires, the soft limit is enforced as a hard limit. Use the edquota command to set it:
     [root@linuxtechi ~]# edquota -t
     Source: http://www.linuxtechi.com/enable-user-group-disk-quota-on-centos-7-rhel-7/ View the full article
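     The original post showed the fstab edit only as a screenshot. As a hedged reconstruction, assuming the LVM device name from the mount output above, the /home line would look something like this:
     # /etc/fstab -- usrquota and grpquota added to the mount options for /home
     /dev/mapper/centos-home  /home  ext4  defaults,usrquota,grpquota  0 2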
  5. NIC (Network Interface Card) bonding, also known as network bonding, can be defined as the aggregation of multiple NICs into a single bond interface. Its main purpose is to provide high availability and redundancy. In this article we will learn how to configure NIC or network bonding in CentOS 7 & RHEL 7. In my case I have two interface cards (enp0s3 & enp0s8) which will form a bond interface (bond0).
     Prerequisite
     If the bonding module is not loaded on your Linux box, use the command below to load it:
     [root@openstack ~]# modprobe bonding
     To list the bonding module info, use the following command:
     [root@openstack ~]# modinfo bonding
     Step:1 Create the bond interface file (ifcfg-bond0) under the folder “/etc/sysconfig/network-scripts/”:
     [root@openstack network-scripts]# vi ifcfg-bond0
     DEVICE=bond0
     TYPE=Bond
     NAME=bond0
     BONDING_MASTER=yes
     BOOTPROTO=none
     ONBOOT=yes
     IPADDR=192.168.1.70
     NETMASK=255.255.255.0
     GATEWAY=192.168.1.1
     BONDING_OPTS="mode=5 miimon=100"
     Save & exit the file. Specify the IP address, netmask & bonding mode as per your requirements. In this example I am using 'mode=5', which provides fault tolerance and load balancing.
     Step:2 Edit the NIC interface files.
     For ifcfg-enp0s3:
     [root@openstack ~]# vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
     TYPE=Ethernet
     BOOTPROTO=none
     DEVICE=enp0s3
     ONBOOT=yes
     HWADDR="08:00:27:69:60:c9"
     MASTER=bond0
     SLAVE=yes
     For ifcfg-enp0s8:
     [root@openstack ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp0s8
     TYPE=Ethernet
     BOOTPROTO=none
     DEVICE=enp0s8
     ONBOOT=yes
     HWADDR="08:00:27:ea:71:8d"
     MASTER=bond0
     SLAVE=yes
     Step:3 Restart the network service. The command below will restart the network service and bring the above changes into effect:
     [root@openstack ~]# systemctl restart network.service
     Step:4 Test & verify the bond interface. Use the 'ifconfig' & 'ip addr' commands to check the bond interface along with its slave interfaces. Use the following command to view bond interface settings such as the bonding mode & slave interfaces:
     [root@openstack ~]# cat /proc/net/bonding/bond0
     Step:5 Fault tolerance testing. To test fault tolerance we can take down one interface and check whether we can still reach the server:
     [root@openstack ~]# ifdown enp0s8
     Device 'enp0s8' successfully disconnected.
     [root@openstack ~]#
     Source: http://www.linuxtechi.com/configure-nic-bonding-in-centos-7-rhel-7/ View the full article
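     On RHEL 7 / CentOS 7 the same bond can also be built with NetworkManager instead of hand-edited ifcfg files. The sketch below assumes RHEL 7's nmcli and uses balance-tlb, which is what mode=5 means; treat the exact option spellings as assumptions and check nmcli's documentation on your build:
     # Create the bond and enslave both NICs (balance-tlb == mode 5)
     nmcli con add type bond con-name bond0 ifname bond0 mode balance-tlb
     nmcli con add type bond-slave con-name bond0-port1 ifname enp0s3 master bond0
     nmcli con add type bond-slave con-name bond0-port2 ifname enp0s8 master bond0
     nmcli con up bond0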
  6. I have had many people come to my blog looking for how to install oVirt 3.5 software on the new CentOS 7. Much of this content is the same as my 3.4 post, but I thought I would keep it separate. Below are simple step-by-step instructions for installing the node and getting it configured for the hosted engine.
     Installation Requirements
     Please Note: Installing Hosted Engine on CentOS 7 requires oVirt 3.5.1
     Please Note: Both the node and engine will be running CentOS 7 (Minimal Installation)
     Please Note: Ensure the host is fully updated via “yum update” and rebooted before proceeding
     Prerequisites
     DNS
     Ensure you have set up hostnames for the host and engine. If you do not have a DNS server configured and you are only testing oVirt on a single server, you can use /etc/hosts instead. I have the following:
     Engine: Hostname: engine.xrsa.net, IP Address: 192.168.122.101/24
     Host: Hostname: ovirt01.xrsa.net, IP Address: 192.168.122.100/24
     NFS
     Ensure you have set up NFS mount points for the engine and virtual machines. If you do not have a shared NFS server and you are only testing oVirt, you can configure NFS locally on the host instead.
     -[ (Host) ]-
     # yum install -y nfs-utils
     # mkdir /home/{engineha,vms} && chown 36:36 /home/{engineha,vms}
     # cat > /etc/exports << EOF
     /home/engineha 192.168.122.0/24(rw,anonuid=36,anongid=36,all_squash)
     /home/vms 192.168.122.0/24(rw,anonuid=36,anongid=36,all_squash)
     EOF
     # systemctl start rpcbind.service && systemctl enable rpcbind.service
     # systemctl start nfs-lock.service && systemctl enable nfs-lock.service
     # systemctl start nfs-server.service && systemctl enable nfs-server.service
     Verify you can see the correct mount points:
     # showmount -e ovirt01.xrsa.net
     Export list for ovirt01.xrsa.net:
     /home/engineha 192.168.122.0/24
     /home/vms 192.168.122.0/24
     # mount ovirt01.xrsa.net:/home/engineha /mnt && umount /mnt
     If you get access denied, run the command below and rerun the mount test:
     # systemctl restart nfs-server.service
     NTP
     This is not a requirement, but it is recommended that you keep your servers' time in sync:
     -[ (Host) ]-
     # yum install -y ntp
     # systemctl start ntpd && systemctl enable ntpd
     # Verify you can reach the NTP servers:
     # ntpq -p
     You may put your own NTP servers in /etc/ntp.conf if required.
     Installation
     Once you have verified DNS and NFS, install the required repositories and packages:
     # yum localinstall -y https://anorien.csc.warwick.ac.uk/mirrors/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
     # yum localinstall -y http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
     # yum install -y ovirt-hosted-engine-setup bind-utils screen
     We will need an ISO for the hosted engine installation:
     # mkdir /home/tmpengineiso && cd /home/tmpengineiso
     # curl -O http://mirror.ukhost4u.com/centos/7.0.1406/isos/x86_64/CentOS-7.0-1406-x86_64-Minimal.iso
     # chown -R 36:36 /home/tmpengineiso
     Now all the prerequisites are in place; verify DNS and then go through the hosted-engine wizard:
     # host engine.xrsa.net; host ovirt01.xrsa.net
     # screen
     # hosted-engine --deploy
     ...
     --== CONFIGURATION PREVIEW ==--
     Bridge interface : eth0
     Engine FQDN : engine.xrsa.net
     Bridge name : ovirtmgmt
     SSH daemon port : 22
     Firewall manager : iptables
     Gateway address : 192.168.122.1
     Host name for web application : ovirt01.xrsa.net
     Host ID : 1
     Image alias : hosted_engine
     Image size GB : 25
     Storage connection : nfs01.xrsa.net:/home/engineha
     Console type : vnc
     Memory size MB : 4096
     MAC address : 00:16:3e:71:de:6d
     Boot type : cdrom
     Number of CPUs : 2
     ISO image (for cdrom boot) : /home/tmpengineiso/CentOS-7.0-1406-x86_64-Minimal.iso
     CPU Type : model_Westmere
     Please confirm installation settings (Yes, No)[Yes]:
     The hosted-engine wizard will give you VNC details so you can connect to the hosted engine virtual machine and install CentOS 7:
     $ vncviewer -quality 2 ovirt01.xrsa.net:5900
     Once installed, choose option (1) on the hosted-engine wizard; it will wait until you have rebooted the hosted engine virtual machine. The wizard will give you another set of VNC details to connect to if you need it. However, if you configured networking during the install you should be able to SSH instead. Once you have connected to the hosted engine, download the repositories, configure NTP and run through the ovirt-engine wizard. Please make sure the admin password matches the password set in the hosted-engine wizard.
     Please note: Ensure DNS is configured properly so the engine can resolve itself and the host, else the installation will fail!
     Please note: While in the engine setup wizard below, ensure you put a proper ACL in the “NFS export ACL” option. If you do not, you will not be able to activate the ISO_DOMAIN later.
     -[ (Engine) ]-
     # yum -y update
     # yum localinstall -y https://anorien.csc.warwick.ac.uk/mirrors/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
     # yum localinstall -y http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
     # yum install -y ovirt-engine bind-utils screen ntp
     # host engine.xrsa.net; host ovirt01.xrsa.net
     # systemctl start ntpd && systemctl enable ntpd
     # ntpq -p
     # screen
     # engine-setup
     ...
     --== CONFIGURATION PREVIEW ==--
     Application mode : both
     Firewall manager : firewalld
     Update Firewall : True
     Host FQDN : engine.xrsa.net
     Engine database name : engine
     Engine database secured connection : False
     Engine database host : localhost
     Engine database user name : engine
     Engine database host name validation : False
     Engine database port : 5432
     Engine installation : True
     NFS setup : True
     PKI organization : xrsa.net
     NFS mount point : /var/lib/exports/iso
     NFS export ACL : 192.168.122.0/24(rw)
     Configure local Engine database : True
     Set application as default page : True
     Configure Apache SSL : True
     Configure WebSocket Proxy : True
     Engine Host FQDN : engine.xrsa.net
     Please confirm installation settings (OK, Cancel) [OK]:
     Once finished, go back to the hosted-engine wizard and finish off the installation by choosing option (1). It will ask you one final time for the hosted engine to be shut down; wait a few minutes and it will come back up automatically.
     After around a minute you can verify the state of the hosted engine virtual machine by using the following command:
     # hosted-engine --vm-status
     --== Host 1 status ==--
     Status up-to-date : True
     Hostname : ovirt01.xrsa.net
     Host ID : 1
     Engine status : {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "powering up"}
     Score : 2400
     Local maintenance : False
     ...
     host-id=1 score=2400 maintenance=False state=EngineStarting
     # Please wait around five to ten minutes for the hosted engine virtual machine to come back up properly.
     # hosted-engine --vm-status
     --== Host 1 status ==--
     Status up-to-date : True
     Hostname : ovirt01.xrsa.net
     Host ID : 1
     Engine status : {"health": "good", "vm": "up", "detail": "up"}
     Score : 2400
     Local maintenance : False
     ...
     host-id=1 score=2400 maintenance=False state=EngineUp
     Data Domain and ISO_DOMAIN Setup
     Before you can create virtual machines in oVirt you need to create a Data Domain and ensure the ISO_DOMAIN is attached to the Default cluster. Navigate to https://engine.xrsa.net and log in as admin. Create a new Data / NFS Domain by going to “System -> Storage -> New Domain”. You must wait until the NFS01 Data Domain is in an active state. Once activated, attach the ISO_DOMAIN to the Default Data Center.
     Please note: If you are having issues attaching ISO_DOMAIN to the cluster, you might have forgotten to add a proper ACL in the “NFS export ACL” option during the engine wizard. You can check this as follows:
     -[ (Engine) ]-
     # cat /etc/exports.d/ovirt-engine-iso-domain.exports
     /var/lib/exports/iso engine.xrsa.net(rw)
     # This is incorrect, as it is the hosts that mount this NFS share, not the engine. Fix it by changing the ACL to the subnet the hosts are using:
     # sed -i "s#engine.xrsa.net#192.168.122.0/24#" /etc/exports.d/ovirt-engine-iso-domain.exports
     # cat /etc/exports.d/ovirt-engine-iso-domain.exports
     /var/lib/exports/iso 192.168.122.0/24(rw)
     # systemctl restart nfs-server
     If everything went as expected you should see both the NFS01 and ISO_DOMAIN storage domains in an up and active state.
     Uploading ISO Images
     There is no GUI-based ISO upload tool at this time, so to upload ISO images you must log in to the engine first and run the following commands:
     -[ (Engine) ]-
     # curl -O http://mirror.ukhost4u.com/centos/7.0.1406/isos/x86_64/CentOS-7.0-1406-x86_64-Minimal.iso
     # ovirt-iso-uploader upload -i ISO_DOMAIN CentOS-7.0-1406-x86_64-Minimal.iso
     # rm CentOS-7.0-1406-x86_64-Minimal.iso
     Using oVirt
     At this point everything should be up and running for you to start creating virtual machines. For more information please read the oVirt Documentation: http://www.ovirt.org/Documentation
     Source: https://xrsa.net/2015/02/04/installing-ovirt-3-5-on-centos-7-hosted-engine/ View the full article
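     One follow-up worth knowing once the hosted engine is up: before patching or rebooting the engine VM, the HA agents should be told not to fight you. This is a hedged sketch based on the hosted-engine tool shipped with oVirt 3.5; verify the exact flags with hosted-engine --help on your install:
     # Put hosted-engine HA into global maintenance before engine maintenance work
     hosted-engine --set-maintenance --mode=global
     # ... patch / reboot the engine VM ...
     # Return the HA agents to normal operation afterwards
     hosted-engine --set-maintenance --mode=none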
  7. The funkily-named bug of the week is GHOST. Its official moniker is the less catchy CVE-2015-0235, and it's a vulnerability caused by a buffer overflow in a system library that is used in many, if not most, Linux distributions. A buffer overflow is where you assume, for example, that when you handle a four-byte network number written out as decimal digits, you will never get anything longer than 255.255.255.255. That takes up 15 characters, so you may decide that you'll never need more than 15 bytes of memory. So, if you add a spare byte for luck and allocate 16 bytes, you're bound to have enough space. And then, one day, a malicious user decides to see what happens if he ignores the rules, and uses a network number like, say, 1024.10224.102224.1022224. That network number is nonsense, of course, but your program might not hold out long enough to reject it. Your code will probably crash right away, because the attacker's 25 bytes will overflow your 16 bytes of available memory.
     GHOST explained
     As it happens, the GHOST vulnerability is connected with network names and numbers. The spooky name comes from the system functions where the vulnerable code was found. The functions are called gethostbyname() and gethostbyname2(), and they do what the names suggest: they find the computer-friendly network number of a host (e.g. 93.184.216.34) from its human-friendly name (e.g. example.com). In other words, these functions do a DNS (domain name system) lookup for you, so your program doesn't need to deal with the intricacies of the DNS protocol. For example, ignoring any error checking, you might call gethostbyname() with a hostname and print the network number it returns. By the way, even if your program doesn't directly call gethostbyname(), you may end up calling it indirectly as a side-effect of doing something, anything, involving a computer name. For example, if your software looks up email addresses, calls home for updates, retrieves postings from online forums, plays podcasts, or any of a number of perfectly unexceptionable network-related activities, it almost certainly triggers name-to-number lookups at some point. And if those lookups are based on data received from outside, such as a sender's email address in received email headers, then attackers may very well get to choose what data gets passed to your Linux computer's gethostbyname() function.
     The bug
     It turns out that gethostbyname() has a clever feature, where it works out whether you called it with a name that is already a network number (digits-dot-digits-dot-digits-dot-digits). In that case, it would be a waste of time to do a DNS lookup, so it doesn't bother. Unfortunately, the code that runs through the name to see if it's really a network number has a buffer overflow, and if you deliberately send a super-long number laid out just right... ...poof, the GHOST strikes! So an attacker may be able to rig up messages or network requests that crash your program; and with a bit (or, more likely, a lot) of trial and error, they might be able to trigger that crash in a way that gives them control over your computer. That's known as a Remote Code Execution (RCE) exploit, similar to the bug recently found in the super-secure Blackphone, though in that case it was a text message that caused the phone's software to trip over itself.
     What to do?
     The good news is that this bug doesn't exist on every computer. It actually exists only in some versions of a software module called glibc, short for GNU C library.
     In fact, most computers in the world don't have glibc installed, because it's not used by default on Windows, OS X, iOS or Android. The bad news is that many, if not most, computers running Linux do use glibc, and may be at risk. In short, therefore, if you have any Linux-based systems, including home firewalls and routers: check with your vendor, or the maker of your distribution, to see if you need a patch. If you do, make plans to apply the patch as soon as you can. (A quick way to see which glibc build you are running is sketched below.) Oh, and if you are a programmer, you shouldn't really be using the gethostbyname functions anyway. They were superseded many years ago by the much more flexible and useful function getaddrinfo(), which you should use instead.
     Source: https://nakedsecurity.sophos.com/2015/01/29/the-ghost-vulnerability-what-you-need-to-know/ View the full article
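     Since the fix is distribution-specific, the practical first step on an RPM-based box is simply to see which glibc you have and compare it against your vendor's advisory. A minimal sketch, assuming an RPM-based system:
     # Show the installed glibc package and the library's own version banner
     rpm -q glibc
     ldd --version | head -n 1
     # Then apply the vendor fix and reboot (or restart every glibc-linked service)
     yum update glibc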
  8. Standard procedure
     With KVM, to access the virtual machine's console under X Window, type:
     # virt-manager
     If you aren't under X Window, there is another way to access a virtual machine's console: you can go through a serial console. On the virtual machine, add 'console=ttyS0' at the end of the kernel lines in the /boot/grub2/grub.cfg file:
     # grubby --update-kernel=ALL --args="console=ttyS0"
     Now, reboot the virtual machine:
     # reboot
     With KVM, connect to the virtual machine's console (here vm.example.com):
     # virsh console vm.example.com
     Connected to domain vm.example.com
     Escape character is ^]
     Red Hat Enterprise Linux Server 7.0 (Maipo)
     Kernel 3.10.0-121.el7.x86_64 on an x86_64
     vm login:
     Emergency procedure
     Sometimes you have lost all links to your virtual machine (error in the /etc/fstab file, ssh configuration, etc.) and, as you didn't set up any virtual console, you are in real trouble. There is still a solution! Connect to the physical host and shut down your virtual machine (here called vm.example.com):
     # virsh destroy vm.example.com
     Find out where the virtual machine image file is located (by default in the /var/lib/libvirt/images directory with a name like vm.example.com.img):
     # virsh dumpxml vm.example.com | grep "source file="
     Map your virtual machine image file into the host environment (-a for add and -v for verbose):
     # kpartx -av /var/lib/libvirt/images/vm.example.com.img
     add map loop0p1 (253:2): 0 1024000 linear /dev/loop0 2048
     add map loop0p2 (253:3): 0 10240000 linear /dev/loop0 1026048
     From the previous display, you know that you've got two partitions (in fact /boot and /, distinguishable by their respective sizes). You need to mount the /boot partition to be able to change the grub configuration:
     # mount /dev/mapper/loop0p1 /mnt
     Then, edit the /mnt/grub2/grub.cfg file and add console=ttyS0 at the end of every line containing /vmlinuz (the Linux kernel). Unmount the partition:
     # umount /mnt
     Unmap the virtual machine image file (-d for delete and -v for verbose):
     # kpartx -dv /var/lib/libvirt/images/vm.example.com.img
     del devmap : loop0p2
     del devmap : loop0p1
     loop deleted : /dev/loop0
     Restart your virtual machine:
     # virsh start vm.example.com
     Domain vm.example.com started
     Connect to your virtual machine console:
     # virsh console vm.example.com
     Connected to domain vm.example.com
     Escape character is ^]
     CentOS Linux 7 (Core)
     Kernel 3.10.0-123.el7.x86_64 on an x86_64
     vm login:
     This procedure works for RHEL 6/CentOS 6 and RHEL 7/CentOS 7. Source: Deepal Jayasinghe's blog. View the full article
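     A note on the login prompt: console=ttyS0 routes kernel messages to the serial line, and on systemd-based guests (RHEL/CentOS 7) a getty is normally spawned on that console automatically. If no login prompt appears, enabling the unit by hand is a reasonable fallback; a sketch, assuming systemd's standard serial-getty template:
     # Inside the guest: make sure a login prompt runs on the serial console
     systemctl enable serial-getty@ttyS0.service
     systemctl start serial-getty@ttyS0.service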
  9. Although differentiation is tough among Linux distributions today, CentOS 7 has carved out a niche as the free and open alter ego to Red Hat Enterprise Linux (RHEL). We found that CentOS, which is mandated to be binary-compatible with Red Hat 7, shares about 95% of the features of its commercial enterprise-class sibling. There's no IBM System z port, and the special variants for cloud and virtualization are more limited than with Red Hat 7. But for many common and generic applications, it's a drop-in replacement. The primary differences for most IT admins, systems people, and engineers/developers will be a (purchased) relationship with Red Hat and Red Hat's army of supported applications, working partnership projects and management skills. CentOS 7 is perhaps most aptly described as the DIY twin of RHEL 7 that lives in the distro ghetto across the railroad tracks. Still, in testing different deployment payloads, and also where we chose GUIs, the look and feel between Red Hat Enterprise Linux and CentOS 7 instances were essentially the same. And the same mistakes made to either usually cause the same explosions. You might or might not find the fix faster in a Red Hat resource. The Centos.org website doesn't have Red Hat's polish, and has many odd links to nowhere or nothing: not 404s, but unfinished projects. The CentOS site seemed comparatively untended to us. View the full article
  10. UPDATE
     Resolution for POODLE SSLv3.0 vulnerability (CVE-2014-3566) in httpd
     Resolution for POODLE SSLv3.0 vulnerability (CVE-2014-3566) in vsftpd
     Resolution for POODLE SSLv3.0 vulnerability (CVE-2014-3566) for components that do not allow SSLv3 to be disabled via configuration settings
     Resolution for POODLE SSLv3.0 vulnerability (CVE-2014-3566) in Firefox and Chromium
     What Is POODLE?
     POODLE stands for Padding Oracle On Downgraded Legacy Encryption. This vulnerability allows a man-in-the-middle attacker to decrypt ciphertext using a padding oracle side-channel attack. More details are available in the upstream OpenSSL advisory. POODLE affects older standards of encryption, specifically Secure Socket Layer (SSL) version 3.0. It does not affect the newer encryption mechanism known as Transport Layer Security (TLS).
     Recommendations
     To mitigate this vulnerability, SSL 3.0 should be disabled in all affected packages. Red Hat is continuously working at this time to provide additional use cases and guides to disable SSL 3.0.
     Determining Vulnerability
     If you are not a subscriber, the following script can be run against the server in question. The command will return 'SSL 3.0 enabled' if vulnerable and 'SSL 3.0 disabled' if not.
     #!/bin/bash
     ulimit -t 5
     ret=$(echo Q | timeout 5 openssl s_client -connect "${1-`hostname`}:${2-443}" -ssl3 2> /dev/null)
     if echo "${ret}" | grep -q 'Protocol.*SSLv3'; then
       if echo "${ret}" | grep -q 'Cipher.*0000'; then
         echo "SSL 3.0 disabled"
       else
         echo "SSL 3.0 enabled"
       fi
     else
       echo "SSL disabled or other error"
     fi
     NOTE: This script takes the hostname of the server to check as the first argument and an optional port as the second. By default it will check the local system, port 443.
     Alternatively, you can test manually: your servers are vulnerable simply if they support SSLv3. Several options here:
     With OpenSSL s_client:
     openssl s_client -connect <host>:<port> -ssl3
     If the connection succeeds, SSLv3 is enabled. If it fails, it is disabled. When it fails you should see something like:
     error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
     Using nmap:
     nmap --script ssl-enum-ciphers -p 443 myhostname.tld
     It should output 'SSLv3: No supported ciphers found'. Adjust for your hostname/port.
     Using cipherscan. Clone/download the binary and execute it:
     ./cipherscan myhostname.tld
     It should not list anything with SSLv3 under the 'protocols' column.
     How to fix
     Firefox browser
     Open about:config, find security.tls.version.min and set the value to 1. Then restart your browser to drop any open SSL connections. Firefox from version 34 onwards will disable SSLv3 by default and thus require no action (source). However, at the moment of writing, 33 has just been released and 34 is set for November 25.
     Google Chrome (Linux)
     Edit the /usr/share/applications/google-chrome.desktop file, e.g.
     sudo nano /usr/share/applications/google-chrome.desktop
     Edit all lines starting with Exec= to include --ssl-version-min=tls1. E.g. a line like
     Exec=/usr/bin/google-chrome-stable %U
     becomes
     Exec=/usr/bin/google-chrome-stable --ssl-version-min=tls1 %U
     Then make sure to fully close the browser (Chrome apps may be keeping your browser active in the background!). Note: you may need to repeat this on every google-chrome package update, as it overwrites this .desktop launcher file. A Google Chrome or Chromium browser with SSLv3 disabled by default has not yet been announced at the time of writing.
     Apache HTTPD Server
     If you're running Apache, just include the following line in your configuration among the other SSL directives:
     SSLProtocol All -SSLv2 -SSLv3
     Then check that the new configuration is correct (no typos etc.):
     apachectl configtest
     And restart the server, e.g.
     sudo service apache2 restart
     More info: Apache documentation
     Now test it: if your site is publicly available, test it using Qualys' SSL Labs tool.
     Nginx server
     If you're running Nginx, just include the following line in your configuration among the other SSL directives:
     ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
     And restart the server, e.g.
     sudo service nginx restart
     Reference: Nginx documentation
     Now test it: if your site is publicly available, test it using Qualys' SSL Labs tool.
     Lighttpd webserver
     Lighttpd versions >1.4.28 support a configuration option to disable SSLv2 and v3. Lighttpd releases before 1.4.28 allow you to disable SSLv2 only. Please note that Ubuntu 12.04 LTS and earlier install at best lighttpd v1.4.28, and therefore a simple fix is not available for those distributions. This fix should only be used for Ubuntu versions greater than 12.04. If someone knows a fix that's good for 12.04 and earlier, please edit this.
     Edit your /etc/lighttpd/lighttpd.conf to add the following lines after the ssl.engine = "enable" directive:
     ssl.use-sslv2 = "disable"
     ssl.use-sslv3 = "disable"
     Then restart the lighttpd service with sudo service lighttpd restart and perform an SSLv3 handshake test as described in the earlier sections to make sure the change was implemented successfully. Taken from http://redmine.lighttpd.net/projects/lighttpd/wiki/Docs_SSL.
     Postfix SMTP
     For 'opportunistic SSL' (encryption policy not enforced and plain is acceptable too), you don't need to change anything. Even SSLv2 is better than plain, so if you need to secure your server you should be using 'mandatory SSL' mode anyway. If 'mandatory SSL' mode is configured already, just add/change the smtpd_tls_mandatory_protocols setting:
     smtpd_tls_mandatory_protocols=!SSLv2,!SSLv3
     and restart Postfix:
     sudo service postfix restart
     Sendmail
     (Unverified edit by an anonymous user, I'm not comfortable with Sendmail, please verify.) These options are configured in the LOCAL_CONFIG section of your sendmail.mc:
     LOCAL_CONFIG
     CipherList=HIGH
     ServerSSLOptions=+SSL_OP_NO_SSLv2 +SSL_OP_NO_SSLv3 +SSL_OP_CIPHER_SERVER_PREFERENCE
     ClientSSLOptions=+SSL_OP_NO_SSLv2 +SSL_OP_NO_SSLv3
     Dovecot
     In Dovecot v2.1+, add the following to your /etc/dovecot/local.conf (or a new file in /etc/dovecot/conf.d):
     ssl_protocols = !SSLv2 !SSLv3
     and restart Dovecot:
     sudo service dovecot restart
     For older versions you will have to patch the source code.
     Courier-imap (imapd-ssl)
     Courier-imap allows SSLv3 by default on Ubuntu 12.04 and others. You should disable it and use STARTTLS instead to force TLS. Edit your /etc/courier/imapd-ssl configuration file to reflect the following changes:
     IMAPDSSLSTART=NO
     IMAPDSTARTTLS=YES
     IMAP_TLS_REQUIRED=1
     TLS_PROTOCOL=TLS1
     TLS_STARTTLS_PROTOCOL=TLS1
     TLS_CIPHER_LIST=""
     HAProxy Server
     SSL is supported in HAProxy >= 1.5. Edit the /etc/haproxy.cfg file and find your bind line. Append no-sslv3. For example:
     bind :443 ssl crt <crt> ciphers <ciphers> no-sslv3
     Reference: HAProxy Documentation
     OpenVPN
     Appears to be unaffected (source). OpenVPN uses TLSv1.0, or (with >=2.3.3) optionally TLSv1.2, and is thus not impacted by POODLE.
     Tomcat
     When using Tomcat with the JSSE connectors, the SSL protocol to be used can be configured via $TOMCAT_HOME/conf/server.xml.
     The following example shows how the protocols of an https connector are configured.
     Tomcat 5 and 6:
     <Connector port="8443" maxThreads="150" SSLEnabled="true" scheme="https" secure="true" clientAuth="false" sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2" />
     Tomcat >= 7:
     <Connector port="8443" maxThreads="150" SSLEnabled="true" scheme="https" secure="true" clientAuth="false" sslProtocols="TLSv1,TLSv1.1,TLSv1.2" />
     If the sslEnabledProtocols or sslProtocols attributes are specified, only protocols that are listed and supported by the SSL implementation will be enabled. If not specified, the JVM default is used. The permitted values may be obtained from the JVM documentation for the allowed values for algorithm when creating an SSLContext instance, e.g. Oracle Java 6 and Oracle Java 7.
     Tomcat APR
     When using Tomcat with the APR/Native connectors, the SSL protocol to be used can be configured in $TOMCAT_HOME/conf/server.xml. The following example shows how the SSLProtocol attribute in an https connector is configured:
     <Connector port="8443" maxThreads="150" enableLookups="false" disableUploadTimeout="true" acceptCount="100" scheme="https" secure="true" SSLEnabled="true" SSLProtocol="TLSv1" SSLCertificateFile="${catalina.base}/conf/localhost.crt" SSLCertificateKeyFile="${catalina.base}/conf/localhost.key" />
     Configuration parameters are documented here. The default is for the SSLProtocol attribute to be set to ALL, with other acceptable values being SSLv2, SSLv3, TLSv1 and SSLv2+SSLv3. Starting with version 1.1.21 of the Tomcat native library, any combination of the three protocols concatenated with a plus sign will be supported. Note that the protocol SSLv2 is inherently unsafe. A STARTTLS variant of the s_client test for the mail services above is sketched below. View the full article
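     The s_client test shown earlier for HTTPS also works for the mail services covered above, via the -starttls switch. A hedged sketch with placeholder hostnames; a completed handshake means SSLv3 is still accepted:
     # SMTP (Postfix/Sendmail) on port 25
     openssl s_client -connect mail.example.com:25 -starttls smtp -ssl3
     # IMAP (Dovecot/Courier) on port 143
     openssl s_client -connect mail.example.com:143 -starttls imap -ssl3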
  11. Network Time Protocol (NTP) is a protocol which runs over port 123 UDP at the transport layer and allows computers to synchronize time over networks. As time passes, computers' internal clocks tend to drift, which can lead to inconsistent time issues, especially in server and client log files, or when you want to replicate server resources or databases. This tutorial will demonstrate how you can install and configure an NTP server on CentOS/RHEL 7 and automatically synchronize time with the geographically closest peers available for your server's location, using the NTP Public Pool Time Servers list.
     Step 1: Install and Configure the NTP Daemon
     1. The NTP server package is provided by default from the official CentOS/RHEL 7 repositories and can be installed by issuing the following command:
     # yum install ntp
     2. After the package is installed, go to the official NTP Public Pool Time Servers site, choose the continent where the server is physically located, then search for your country; a list of NTP servers will appear.
     3. Open the NTP daemon's main configuration file for editing, comment out the default list of public servers from the pool.ntp.org project, and replace it with the list provided for your country.
     4. Further, you need to allow clients from your network to synchronize time with this server. To accomplish this, add the following line to the NTP configuration file, where the restrict statement controls which network is allowed to query and sync time (replace the network IPs accordingly):
     restrict 192.168.1.0 netmask 255.255.255.0 nomodify notrap
     The nomodify and notrap statements mean that your clients are not allowed to configure the server or be used as peers for time sync.
     5. If you need additional information for troubleshooting, in case there are problems with your NTP daemon, add a logfile statement, which will record all NTP server issues to a dedicated log file:
     logfile /var/log/ntp.log
     6. After you have edited the file with the configuration explained above, save and close ntp.conf.
     Step 2: Add Firewall Rules and Start the NTP Daemon
     7. The NTP service uses UDP port 123 at the OSI transport layer (layer 4) and is designed particularly to resist the effects of variable latency (jitter). To open this port on RHEL/CentOS 7, run the following commands against the firewalld service:
     # firewall-cmd --add-service=ntp --permanent
     # firewall-cmd --reload
     8. After you have opened firewall port 123, start the NTP server and make sure you enable it system-wide. Use the following commands to manage the service:
     # systemctl start ntpd
     # systemctl enable ntpd
     # systemctl status ntpd
     Step 3: Verify Server Time Sync
     9. After the NTP daemon has been started, wait a few minutes for the server to synchronize time with its pool list servers, then run the following commands to verify NTP peer synchronization status and your system time:
     # ntpq -p
     # date -R
     10. If you want to query and synchronize against a pool of your choice, use the ntpdate command followed by the server address(es), as in the following example:
     # ntpdate -q 0.ro.pool.ntp.org 1.ro.pool.ntp.org
     Step 4: Set Up a Windows NTP Client
     11. If your Windows machine is not part of a Domain Controller, you can configure Windows to synchronize time with your NTP server by going to Time on the right side of the taskbar -> Change Date and Time Settings -> Internet Time tab -> Change Settings -> check Synchronize with an Internet time server -> put your server's IP or FQDN in the Server field -> Update now -> OK.
     That's all! Setting up a local NTP server on your network ensures that all your servers and clients have the same time set in case of an Internet connectivity failure, and that they are all synchronized with each other. Linux clients can be pointed at it as sketched below.
     Source: http://www.tecmint.com/install-ntp-server-in-centos/ View the full article
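     For completeness, a Linux client can be pointed at the new server with a single ntp.conf line. A minimal sketch, assuming the server sits at the hypothetical address 192.168.1.20:
     # /etc/ntp.conf on a Linux client: use the local NTP server instead of the pool
     server 192.168.1.20 iburst
     # Then restart the daemon and watch the peer appear:
     #   systemctl restart ntpd && ntpq -p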
  12. 1. Open /etc/default/grub in an editor:
     vi /etc/default/grub
     2. Find the line starting with "GRUB_CMDLINE_LINUX=" and add ipv6.disable=1:
     GRUB_CMDLINE_LINUX="ipv6.disable=1 rd.lvm.lv=fedora/swap rd.md=0.....
     3. Afterwards, recreate the grub config:
     grub2-mkconfig -o /boot/grub2/grub.cfg
     4. Reboot the system:
     reboot
     A runtime alternative that avoids the reboot is sketched below. View the full article
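     If you only need IPv6 off until the next boot, the same effect (minus the reboot) can usually be had through sysctl. A hedged sketch using the standard kernel knobs:
     # Disable IPv6 on all current and future interfaces for the running kernel
     sysctl -w net.ipv6.conf.all.disable_ipv6=1
     sysctl -w net.ipv6.conf.default.disable_ipv6=1
     # Add the same settings to /etc/sysctl.conf to persist them across reboots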
  13. Shellshock, also known as Bashdoor, is a family of security bugs in the widely used Unix Bash shell, the first of which was disclosed on 24 September 2014. Many Internet daemons, such as web servers, use Bash to process certain commands, allowing an attacker to cause vulnerable versions of Bash to execute arbitrary commands. This can allow an attacker to gain unauthorized access to a computer system. The vulnerability can be tested with the following command:
     env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
     If your system is vulnerable to the Bash "Shellshock" bug, the above command will produce the following output:
     vulnerable
     this is a test
     Otherwise you will see the following message:
     this is a test
     View the full article
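     If the test prints "vulnerable", the fix is a routine package update followed by a re-run of the test. A sketch for the common package managers:
     # RHEL / CentOS / Fedora
     yum update bash
     # Debian / Ubuntu
     apt-get update && apt-get install --only-upgrade bash
     # Then re-run the env test above; "vulnerable" should no longer be printed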
  14. A RAID 10 (aka RAID 1+0, or stripe of mirrors) array provides high performance and fault-tolerant disk I/O operations by combining features of RAID 0 (where read/write operations are performed in parallel across multiple drives) and RAID 1 (where data is written identically to two or more drives). In this tutorial, I'll show you how to set up a software RAID 10 array using five identical 8 GiB disks. While the minimum number of disks for setting up a RAID 10 array is four (e.g., a striped set of two mirrors), we will add an extra spare drive in case one of the main drives becomes faulty. We will also share some tools that you can later use to analyze the performance of your RAID array. Please note that going through all the pros and cons of RAID 10 and other partitioning schemes (with different-sized drives and filesystems) is beyond the scope of this post.
     How Does a RAID 10 Array Work?
     If you need to implement a storage solution that supports I/O-intensive operations (such as database, email, and web servers), RAID 10 is the way to go. Let me show you why. Imagine a file composed of blocks A, B, C, D, E, and F. Each RAID 1 mirror set (e.g., Mirror 1 or 2) replicates blocks on each of its two devices. Because of this configuration, write performance is reduced, because every block has to be written twice, once to each disk, whereas read performance remains unchanged compared to reading from single disks. The bright side is that this setup provides redundancy: unless more than one of the disks in each mirror fails, normal disk I/O operations can be maintained. The RAID 0 stripe works by dividing data into blocks and writing block A to Mirror 1, block B to Mirror 2 (and so on) simultaneously, thereby improving overall read and write performance. On the other hand, none of the mirrors contains the entire information for any piece of data committed to the main set. This means that if one of the mirrors fails, the entire RAID 0 component (and therefore the RAID 10 set) is rendered inoperable, with unrecoverable loss of data.
     Setting up a RAID 10 Array
     There are two possible setups for a RAID 10 array: complex (built in one step) or nested (built by creating two or more RAID 1 arrays, and then using them as component devices in a RAID 0). In this tutorial, we will cover the creation of a complex RAID 10 array, because it allows us to create an array using either an even or odd number of disks, and it can be managed as a single RAID device, as opposed to the nested setup (which only permits an even number of drives, and must be managed as a nested device, dealing with RAID 1 and RAID 0 separately). It is assumed that you have mdadm installed, and the daemon running on your system; refer to this tutorial for details. It is also assumed that a primary partition sd[bcdef]1 has been created on each disk. Thus, the output of:
     ls -l /dev | grep sd[bcdef]
     should show the five disks and their first partitions. Let's go ahead and create a RAID 10 array with the following command:
     # mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sd[bcde]1 --spare-devices=1 /dev/sdf1
     When the array has been created (it should not take more than a few minutes), inspect it with:
     # mdadm --detail /dev/md0
     A couple of things to note before we proceed further.
     1. Used Dev Space indicates the capacity of each member device used by the array.
     2. Array Size is the total size of the array.
     For a RAID 10 array, this is equal to (N*C)/M, where N is the number of active devices, C the capacity of the active devices, and M the number of devices in each mirror. So in this case, (N*C)/M equals (4*8GiB)/2 = 16GiB.
     3. Layout refers to the fine details of data layout. The possible layout values are as follows:
     n (default option): means near copies. Multiple copies of one data block are at similar offsets in different devices. This layout yields read and write performance similar to that of a RAID 0 array.
     o indicates offset copies. Rather than the chunks being duplicated within a stripe, whole stripes are duplicated, but rotated by one device so that duplicate blocks are on different devices. Thus, subsequent copies of a block are in the next drive, one chunk further down. To use this layout for your RAID 10 array, add --layout=o2 to the command used to create the array.
     f represents far copies (multiple copies with very different offsets). This layout provides better read performance but worse write performance, so it is the best option for systems that need to support far more reads than writes. To use this layout for your RAID 10 array, add --layout=f2 to the command used to create the array.
     The number that follows the n, f, or o in the --layout option indicates the number of replicas of each data block that are required. The default value is 2, but it can range from 2 up to the number of devices in the array. By providing an adequate number of replicas, you can minimize I/O impact on individual drives.
     4. Chunk Size, as per the Linux RAID wiki, is the smallest unit of data that can be written to the devices. The optimal chunk size depends on the rate of I/O operations and the size of the files involved. For large writes, you may see lower overhead with fairly large chunks, whereas arrays that primarily hold small files may benefit more from a smaller chunk size. To specify a certain chunk size for your RAID 10 array, add --chunk=desired_chunk_size to the command used to create the array.
     Unfortunately, there is no one-size-fits-all formula for improving performance. Here are a few guidelines to consider:
     Filesystem: overall, XFS is said to be the best, while EXT4 remains a good choice.
     Optimal layout: far layout improves read performance, but worsens write performance.
     Number of replicas: more replicas minimize I/O impact, but increase costs as more disks are needed.
     Hardware: SSDs are more likely to show increased performance (under the same conditions) than traditional (spinning) disks.
     RAID Performance Tests Using dd
     The following benchmarking tests can be used to check the performance of our RAID 10 array (/dev/md0).
     1. Write operation: a single 256 MB file is written to the device:
     # dd if=/dev/zero of=/dev/md0 bs=256M count=1 oflag=dsync
     512 bytes are written 1000 times:
     # dd if=/dev/zero of=/dev/md0 bs=512 count=1000 oflag=dsync
     With the dsync flag, dd bypasses the filesystem cache and performs synchronized writes to the RAID array. This option is used to eliminate caching effects during RAID performance tests.
     2. Read operation: 256KiB*15000 (3.9 GB) are copied from the array to /dev/null:
     # dd if=/dev/md0 of=/dev/null bs=256K count=15000
     RAID Performance Tests Using Iozone
     Iozone is a filesystem benchmark tool that allows us to measure a variety of disk I/O operations, including random read/write, sequential read/write, and re-read/re-write. It can export the results to a Microsoft Excel or LibreOffice Calc file.
     Installing Iozone on CentOS/RHEL 7: enable Repoforge.
     Then:
     # yum install iozone
     Installing Iozone on Debian 7:
     # aptitude install iozone3
     The iozone command below will perform all tests on the RAID 10 array:
     # iozone -Ra /dev/md0 -b /tmp/md0.xls
     -R: generates an Excel-compatible report to standard out.
     -a: runs iozone in full automatic mode with all tests and possible record/file sizes (record sizes: 4k-16M; file sizes: 64k-512M).
     -b /tmp/md0.xls: stores test results in the specified file.
     Hope this helps. Feel free to add your thoughts or tips on how to improve the performance of RAID 10. Two common follow-up steps are sketched below.
     Source: http://xmodulo.com/setup-raid10-linux.html View the full article
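     Two steps the tutorial stops short of, which most setups will want next: persisting the array so it reassembles at boot, and putting a filesystem on it. A hedged sketch for CentOS/RHEL (Debian keeps the config at /etc/mdadm/mdadm.conf instead, and /mnt/raid10 is a placeholder mount point):
     # Record the array so it is reassembled automatically at boot
     mdadm --detail --scan >> /etc/mdadm.conf
     # Create a filesystem (XFS, per the guideline above) and mount it
     mkfs.xfs /dev/md0
     mkdir -p /mnt/raid10 && mount /dev/md0 /mnt/raid10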
  15. Log files contain useful information about what is going on within the system. They are often inspected during troubleshooting or server performance analysis. On a busy server, log files may quickly grow very large, and the server will soon run out of space. Besides, working with a single large log file can often be tricky. logrotate is a very useful tool that can automate the process of breaking up (or rotating), compressing, and deleting old log files. For example, you can set up logrotate such that the log file /var/log/foo is rotated every 30 days, and logs older than 6 months are deleted. Once configured, the process is fully automated using logrotate without any further need for human interaction. Optionally, old logs can be emailed as well, but that option is beyond the scope of this tutorial. The logrotate package is typically installed by default on major Linux distros. If, for some reason, logrotate is not present, you can install it using the apt-get or yum command.
     On Debian or Ubuntu:
     # apt-get install logrotate cron
     On Fedora, CentOS or RHEL:
     # yum install logrotate crontabs
     The configuration file for logrotate is /etc/logrotate.conf. Generally no modification is needed there. The log files to be rotated are defined in separate configuration file(s) placed under the /etc/logrotate.d/ directory.
     Example One
     In the first example, we will create a 10 MB log file, /var/log/log-file, and see how we can use logrotate to manage it. We start by creating the log file and populating it with 10 MB worth of random data:
     # touch /var/log/log-file
     # head -c 10M < /dev/urandom > /var/log/log-file
     Now that the log file is ready, we will configure logrotate to rotate it. Let's create a configuration file for this:
     # vim /etc/logrotate.d/log-file
     /var/log/log-file {
         monthly
         rotate 5
         compress
         delaycompress
         missingok
         notifempty
         create 644 root root
         postrotate
             /usr/bin/killall -HUP rsyslogd
         endscript
     }
     Where:
     monthly: The log file will be rotated monthly. Other possible values are 'daily', 'weekly' or 'yearly'.
     rotate 5: A total of 5 archived logs will be stored at a time. For the 6th archive, the oldest stored archive will be deleted.
     compress: The rotated archive will be compressed using gzip after the rotation task is complete.
     delaycompress: Always used together with the compress option, the delaycompress parameter instructs logrotate not to compress the most recent archive. Compression will be performed during the next rotation cycle. This is useful if you, or any software, still need to access the fresh archive.
     missingok: During log rotation, any errors such as "file not found" will be ignored.
     notifempty: Rotation will not be performed if the log file is empty.
     create 644 root root: A fresh log file will be created with the specified permissions, as logrotate may rename the original log file.
     postrotate/endscript: The command(s) specified between postrotate and endscript will be carried out after all other instructions are completed. In this case, the rsyslogd process will re-read its configuration on the fly and continue running.
     The above template is generic, and the configuration parameters may vary based on your requirements. Not all the parameters may be necessary.
     Example Two
     In this example, we want to rotate a log file only when its size grows over 50 MB:
     # vim /etc/logrotate.d/log-file
     /var/log/log-file {
         size=50M
         rotate 5
         create 644 root root
         postrotate
             /usr/bin/killall -HUP rsyslogd
         endscript
     }
     Example Three
     We want old log files to be named with their date of creation. This can be achieved by adding the dateext parameter:
     # vim /etc/logrotate.d/log-file
     /var/log/log-file {
         monthly
         rotate 5
         dateext
         create 644 root root
         postrotate
             /usr/bin/killall -HUP rsyslogd
         endscript
     }
     This will cause the archived files to contain the date in their names.
     Troubleshooting
     Here are a few troubleshooting tips for a logrotate setup.
     1. Running logrotate manually
     logrotate can be invoked manually from the command line at any time. To invoke logrotate on all logs as configured in /etc/logrotate.d/*:
     # logrotate /etc/logrotate.conf
     To invoke logrotate for a particular configuration:
     # logrotate /etc/logrotate.d/log-file
     2. Dry run
     The best option during troubleshooting is to run logrotate as a dry run using the '-d' option. For verification, a dry run simulates log rotation and displays its output without actually rotating any log files:
     # logrotate -d /etc/logrotate.d/log-file
     If logrotate decides that rotation is not necessary, nothing is rotated; this can happen, for instance, if the age of the file is less than one day.
     3. Force run
     We can force logrotate to rotate log files even when rotation conditions are not met, by using the '-f' option. The '-v' parameter provides verbose output:
     # logrotate -vf /etc/logrotate.d/log-file
     reading config file /etc/logrotate.d/log-file
     reading config info for /var/log/log-file
     Handling 1 logs
     rotating pattern: /var/log/log-file forced from command line (5 rotations)
     empty log files are rotated, old logs are removed
     considering log /var/log/log-file
     log needs rotating
     rotating log /var/log/log-file, log->rotateCount is 5
     dateext suffix '-20140916'
     glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
     renaming /var/log/log-file.5.gz to /var/log/log-file.6.gz (rotatecount 5, logstart 1, i 5), old log /var/log/log-file.5.gz does not exist
     renaming /var/log/log-file.4.gz to /var/log/log-file.5.gz (rotatecount 5, logstart 1, i 4), old log /var/log/log-file.4.gz does not exist
     . . .
     renaming /var/log/log-file.0.gz to /var/log/log-file.1.gz (rotatecount 5, logstart 1, i 0), old log /var/log/log-file.0.gz does not exist
     log /var/log/log-file.6.gz doesn't exist -- won't try to dispose of it
     renaming /var/log/log-file to /var/log/log-file.1
     creating new /var/log/log-file mode = 0644 uid = 0 gid = 0
     running postrotate script
     compressing log with: /bin/gzip
     4. logrotate logging
     logrotate's own state is usually kept in /var/lib/logrotate/status. If we want logrotate to log to a specific file for troubleshooting purposes, we can specify that from the command line as follows:
     # logrotate -vf -s /var/log/logrotate-status /etc/logrotate.d/log-file
     5. logrotate cron job
     The cron job needed by logrotate should automatically be created during installation. I am posting the contents of the cron file for reference:
     # cat /etc/cron.daily/logrotate
     #!/bin/sh
     # Clean non existent log file entries from status file
     cd /var/lib/logrotate
     test -e status || touch status
     head -1 status > status.clean
     sed 's/"//g' status | while read logfile date
     do
         [ -e "$logfile" ] && echo "\"$logfile\" $date"
     done >> status.clean
     mv status.clean status
     test -x /usr/sbin/logrotate || exit 0
     /usr/sbin/logrotate /etc/logrotate.conf
     To sum up, logrotate is a very useful tool for preventing gigantic log files from using up storage space.
Once configured, the process is fully automated, and can run without human intervention for a long time. This tutorial focused on several basic examples of how to use logrotate. You can customize it even further to match your requirements. View the full article