Everything posted by NickTheGreek

  1. ConfigServer Outgoing Spam Monitor (osm) has been designed to use multiple methods to monitor outgoing email and SMTP connections for activity that could indicate a spammer is active on a server. With the proliferation of web scripts in shared hosting environments that are often poorly maintained or badly written, the chance of a hacker exploiting vulnerabilities in scripts is at an all-time high. Additionally, end-user PCs and other devices that send email through a server (relay) have always been a problem when compromised and used as a spam source. These issues, along with spammers deliberately targeting hosting providers by purchasing accounts simply to send out spam, have made the diligence required to prevent spam from being sent from servers all the more difficult.

osm is for any server owner using cPanel who is concerned about future or active attempts to send out spam email through the server. It targets all the methods available to keep track of outgoing email and SMTP connections. It is designed to be used entirely from the cPanel WHM UI, which provides both configuration and viewing of reports generated by a daemon process running continuously on the server.

Features

- Outgoing email sent via exim is tracked by cPanel account
- Matching Subject headers for outgoing email sent via exim are tracked by cPanel account
- Script path location (cwd) is tracked by cPanel account
- Matching script path locations (cwd) are tracked by cPanel account
- Outgoing SMTP connections to remote servers (that bypass exim) are tracked by cPanel account
- Matching script path locations for outgoing SMTP connections to remote servers (that bypass exim) are tracked
- Authenticated outgoing email is tracked by email account and connecting IP address
- osm uses real-time packet inspection to track SMTP connections; this is primarily useful if you cannot use the csf SMTP_BLOCK or cPanel-provided equivalent feature
- Configurable trigger levels for each type of tracking by cPanel account, on a per email/connection per second basis
- Apache Status information is used to link outgoing email with the actual scripts being used
- Multiple actions can be performed once a report is raised after a trigger level is reached:
  - Send an email report of the events
  - Store the report of events to view in the WHM UI
  - Hold outgoing email from the cPanel/email account in the exim queue
  - Discard outgoing email from the cPanel/email account
  - Suspend the whole cPanel account
  - Prevent the email account from logging in
  - Rename the reported path
  - Run the custom script configured in the WHM UI
  - Rename the file determined from the Apache Status
  - Block the IP address (AUTHRELAY, ALWAYSRELAY, POPRELAY, Apache Status) in csf
- The custom action script is configurable and can be sent JSON, YAML, XML and PERL data structures to allow for client-specific actions
- Inheritance rules are used to configure all trigger levels for each cPanel account, plus the default settings

Frequently Asked Questions

Please read the osm FAQ before ordering osm.
Product Requirements

- cPanel/WHM (supported versions)
- Server with static IPv4 address (for licensing)
- RedHat/CentOS/CloudLinux v6/7
- Apache with mod_status, required for the Apache Status feature
- Pcap kernel access via libpcap, required for SMTP Packet Interception
- csf for IP address blocking

Product Limitations

- Without mod_status configured via EasyApache, the Apache Status feature cannot be used
- mod_rewrite rules in local .htaccess files may break Apache Status functionality
- IP address triggers are controlled by the "Default" settings in Event Configuration
- Duplication of reports will occur between the logline and cwdcheck report types, as they often refer to the same email event. However, each event type offers different triggers to detect outgoing spam patterns
- The SMTP Packet Interception feature will not function on Virtuozzo/OpenVZ servers (and other types of custom kernel), as those kernels do not support Pcap access
- See the osm FAQ for additional information

Note: Support is not guaranteed for servers running services from 1h.com, ASL, Imunify360 or BitNinja. We only provide support for supported versions of the OS and cPanel. EOL versions are not supported.

https://www.configserver.com/cp/osm.html
  2. Error:

Starting container ...
vzquota : (warning) Incorrect quota shutdown for id 261, recalculating disk usage
vzquota : (error) quota check : lstat `9e44f226674e6588a4bd5a28bb20c7f8.dat': No such file or directory
vzquota on failed [1]

Solution:

vzquota off 261
vzquota on 261

Error: Disabling Container

# vzctl start 304
Container start disabled
# vzctl start 304 --force
Starting container ...
Container is mounted
Adding IP address(es):

https://jomin.wordpress.com/2012/08/17/vzquota-warning-incorrect-quota-shutdown-for-id/
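Put together, a minimal recovery sketch for the quota error above, reusing container ID 261 from the log (substitute your own CTID):

   vzquota off 261    # drop the stale quota state
   vzquota on 261     # force a recalculation of disk usage
   vzctl start 261    # then start the container normally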
  3. This is a common problem and may have several causes. Sometimes when we simply want to restart the MySQL server, we get an error like this:

ERROR! MySQL server PID file could not be found!

First of all, always check whether the /tmp partition is full. This can happen when MySQL can't write to /tmp to create its socket/lock file:

df -h

It may also be that the /tmp partition has been cleared and the MySQL server is still looking for its socket file there. In that case, simply recreate the file and restart the server:

touch /tmp/mysql.sock
service mysqld restart

It can also help to check the status. For example, sometimes you get an error like this:

$ service mysqld status
ERROR! MySQL is not running, but lock file (/var/lock/subsys/mysql) exists

Well, it's kind of obvious: just remove the stale lock file and restart the server.

rm /var/lock/subsys/mysql
service mysqld restart

If this does not work, you need to kill the current MySQL processes on the server:

ps aufx | grep mysql

This lists every MySQL process with its process ID. Kill each PID:

kill -9 PID1 PID2 ...

then restart the MySQL service:

service mysql restart

That's it.
http://www.webhostingtalk.com/showthread.php?t=1424779
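If you are unsure where mysqld actually expects its socket and PID file (the paths vary by distribution; /etc/my.cnf is the stock RedHat/CentOS location), a quick way to check:

   grep -E 'pid-file|socket' /etc/my.cnf
   mysqld --verbose --help 2>/dev/null | grep -E '^(pid-file|socket)'   # prints the effective defaults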
  4. Hi,

To fix this issue:

[ vzquota : (error) can't lock quota file, some quota operations are performing for id 101 ]

First, open an SSH session and run this command:

ps ax | grep vzquota

Then kill the PID, for example:

kill -9 1234

And then try to start the VZ container:

vzquota off 101
vzctl start 101

After that your OpenVZ container should work fine. Some OpenVZ nodes need to fix the quota, and this takes some time depending on your quota size. If your container doesn't start after a while, check the available RAM inside it to see whether there is enough, by running:

free -m

Have a nice day.
https://www.lowendtalk.com/discussion/134404/hot-to-fix-vzquota-error-cant-lock-quota-file
  5. Now this may be very simple for some; others might not know what to do about database issues, so I'll explain in a quick post.

I logged in this morning and noticed my RAM usage was very high (91% on the resource monitor, compared to the normal 71%). I hit Ctrl-Shift-Esc, went to Processes, and my top two memory-usage processes were SQL Server. The top one was using 1,540,736 K, with the second around 600,000 K. That is a lot. Right click on the top one and select Go To Service. This one is MICROSOFT##SSEE and the second is SBSMONITORING. I know from past experience that SBSMONITORING can get out of control, but in my opinion 600,000 K is not bad. There is a good post on running a script that will clean up and compact the SBSMONITORING database on Smallbizserver.net (one of my favorite sites, but you will have to find the post yourself). In this instance, I do not really care about SBSMONITORING. But the MICROSOFT##SSEE is really high, and I have never seen that before.

Now I am no DBA, but I DO know that limiting a database will affect performance. I also know that if the database was using that much memory, it probably had reason. It could be a memory leak, but I do not think so in this instance, because it is just running all of the default services. Do a Google search on SQL Server memory leaks for more information. So in this case, I do not want to limit the database. I'll restart it and see what happens. Start > Administrative Tools > Services. Right click and restart Windows Internal Database. Voila! It is now hovering around 158,000 K. That's a lot better than 10x that. And by only restarting it, I did not limit the database should it NEED much more RAM, perhaps when it is synchronizing WSUS or something. [Screenshot: SQL Process]

Might as well restart SBSMONITORING as well. Yep, that knocked the RAM usage of that one down a few notches, though not as dramatically as the first.

So, I want to do this regularly, but I do not want to remember to restart these manually. They get restarted when the server reboots, but I TRY to minimize those as well. We can write it into a VERY simple batch script. Open up Notepad and enter the following:

net stop mssql$sbsmonitoring
net start mssql$sbsmonitoring
net stop mssql$microsoft##ssee
net start mssql$microsoft##ssee

Save the file as a text file on the root of some drive, or in a folder for scripts if you have one. I keep mine in D:\Scripts\. I go to the new text file location and change the .txt to .bat. Now we have a file that, when run, stops both SQL database services and starts them one at a time. This will not cause system damage, nor damage the databases, as limiting the RAM might have. Let's give the script a test run to make sure it works: navigate to the file location from a command prompt, and run the batch file. If the results look right, then you are good. [Screenshot: Restart Script]

Now we need to automate this script, so we will use Task Scheduler to run it once a week. It could be twice a week, but I think running this Monday morning will be nice. Start > Administrative Tools > Task Scheduler. Right click Task Scheduler Library, and click New Task. Give the task a name and description; in my case I named it Restart SQL. Select the radio button to run the task if no one is logged on. [Screenshot: General Tab] You have the option to run this under any account you wish. If you have an account you use for DBA work (or even a power user account), then select this account. On the Triggers tab, we will select "On a schedule".
I set it to occur weekly, every Monday at 5 AM. I know I have no other processes, such as backups, running at that time. I also put a random delay of 30 minutes on the task; this is not necessary in most cases. [Screenshot: Trigger Tab]

On the Actions tab, we will select New. Leave it on "Start a program". In the Program/Script box under Settings, select the script that we made, restart_sql.bat. [Screenshot: Action Tab]

On the Conditions tab, pretty much leave everything alone: run if the computer is idle for 10 minutes, wait for idle for 1 hour. Uncheck "Stop if the computer ceases to be idle". Uncheck "Wake the computer to run this task" (why would a server be in sleep mode?). [Screenshot: Conditions Tab]

On the Settings tab, pretty much leave everything alone. It's all self-explanatory; if you need to change something to suit your needs, then do so. [Screenshot: Settings Tab]

Now let the task run its course. You can check Task Scheduler after Monday to see if the task ran (which it will). This will keep my database memory usage down without me having to worry about it or limiting the natural functions of said databases.

https://chrisdill.wordpress.com/2010/10/22/restart_microsoftssee/
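As a sketch of an alternative to clicking through the UI, the same weekly task can be registered from an elevated command prompt with schtasks, assuming the script was saved as D:\Scripts\restart_sql.bat as above:

   schtasks /Create /TN "Restart SQL" /TR "D:\Scripts\restart_sql.bat" /SC WEEKLY /D MON /ST 05:00 /RU SYSTEM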
  6. DNS spoofing

    DNS spoofing, also referred to as DNS cache poisoning, is a form of computer security hacking in which corrupt Domain Name System data is introduced into the DNS resolver's cache, causing the name server to return an incorrect result record, e.g. an IP address. This results in traffic being diverted to the attacker's computer (or any other computer). https://en.wikipedia.org/wiki/DNS_spoofing#Cache_poisoning_attacks
  7. The general attack outline is as follows:

- The attacker initiates a connection to a target.
- The target attempts to authenticate the attacker by sending it a challenge.
- The attacker opens another connection to the target, and sends the target this challenge as its own.
- The target responds to the challenge.
- The attacker sends that response back to the target on the original connection.

If the authentication protocol is not carefully designed, the target will accept that response as valid, thereby leaving the attacker with one fully authenticated channel connection (the other one is simply abandoned).
https://en.wikipedia.org/wiki/Reflection_attack
  8. Network Working Group                                          D. Barr
     Request for Comments: 1912           The Pennsylvania State University
     Obsoletes: 1537                                           February 1996
     Category: Informational

Common DNS Operational and Configuration Errors

Status of this Memo

This memo provides information for the Internet community. This memo does not specify an Internet standard of any kind. Distribution of this memo is unlimited.

Abstract

This memo describes errors often found in both the operation of Domain Name System (DNS) servers, and in the data that these DNS servers contain. This memo tries to summarize current Internet requirements as well as common practice in the operation and configuration of the DNS. This memo also tries to summarize or expand upon issues raised in [RFC 1537].

1. Introduction

Running a nameserver is not a trivial task. There are many things that can go wrong, and many decisions have to be made about what data to put in the DNS and how to set up servers. This memo attempts to address many of the common mistakes and pitfalls that are made in DNS data as well as in the operation of nameservers. Discussions are also made regarding some other relevant issues such as server or resolver bugs, and a few political issues with respect to the operation of DNS on the Internet.

2. DNS Data

This section discusses problems people typically have with the DNS data in their nameserver, as found in the zone data files that the nameserver loads into memory.

2.1 Inconsistent, Missing, or Bad Data

Every Internet-reachable host should have a name. The consequences of this are becoming more and more obvious. Many services available on the Internet will not talk to you if you aren't correctly registered in the DNS.

Make sure your PTR and A records match. For every IP address, there should be a matching PTR record in the in-addr.arpa domain. If a host is multi-homed (more than one IP address), make sure that all IP addresses have a corresponding PTR record (not just the first one). Failure to have matching PTR and A records can cause loss of Internet services similar to not being registered in the DNS at all. Also, PTR records must point back to a valid A record, not an alias defined by a CNAME. It is highly recommended that you use some software which automates this checking, or generate your DNS data from a database which automatically creates consistent data.

DNS domain names consist of "labels" separated by single dots. The DNS is very liberal in its rules for the allowable characters in a domain name. However, if a domain name is used to name a host, it should follow rules restricting host names. Further, if a name is used for mail, it must follow the naming rules for names in mail addresses.

Allowable characters in a label for a host name are only ASCII letters, digits, and the `-' character. Labels may not be all numbers, but may have a leading digit (e.g., 3com.com). Labels must end and begin only with a letter or digit. See [RFC 1035] and [RFC 1123]. (Labels were initially restricted in [RFC 1035] to start with a letter, and some older hosts still reportedly have problems with the relaxation in [RFC 1123].) Note there are some Internet hostnames which violate this rule (411.org, 1776.com). The presence of underscores in a label is allowed in [RFC 1033], except [RFC 1033] is informational only and was not defining a standard. There is at least one popular TCP/IP implementation which currently refuses to talk to hosts named with underscores in them.
It must be noted that the language in [RFC 1035] is such that these rules are voluntary -- they are there for those who wish to minimize problems. Note that the rules for Internet host names also apply to hosts and addresses used in SMTP (see RFC 821).

If a domain name is to be used for mail (not involving SMTP), it must follow the rules for mail in [RFC 822], which is actually more liberal than the above rules. Labels for mail can be any ASCII character except "specials", control characters, and whitespace characters. "Specials" are specific symbols used in the parsing of addresses. They are the characters "()<>@,;:\".[]". (The "!" character wasn't in [RFC 822]; however, it also shouldn't be used due to the conflict with UUCP mail as defined in RFC 976.) However, since today almost all names which are used for mail on the Internet are also names used for hostnames, one rarely sees addresses using these relaxed standards, but mail software should be made liberal and robust enough to accept them.

You should also be careful to not have addresses which are valid alternate syntaxes to the inet_ntoa() library call. For example, 0xe is a valid name, but if you were to type "telnet 0xe", it would try to connect to IP address 0.0.0.14. It is also rumored that there exist some broken inet_ntoa() routines that treat an address like x400 as an IP address.

Certain operating systems have limitations on the length of their own hostname. While not strictly of issue to the DNS, you should be aware of your operating system's length limits before choosing the name of a host.

Remember that many resource records (abbreviated RR) take on more than one argument. HINFO requires two arguments, as does RP. If you don't supply enough arguments, servers sometimes return garbage for the missing fields. If you need to include whitespace within any data, you must put the string in quotes.

2.2 SOA records

In the SOA record of every zone, remember to fill in the e-mail address that will get to the person who maintains the DNS at your site (commonly referred to as "hostmaster"). The `@' in the e-mail address must be replaced by a `.' first. Do not try to put an `@' sign in this address. If the local part of the address already contains a `.' (e.g., John.Smith@widget.xx), then you need to quote the `.' by preceding it with a `\' character (e.g., to become John\.Smith.widget.xx). Alternately (and preferred), you can just use the generic name `hostmaster', and use a mail alias to redirect it to the appropriate persons. There exists software which uses this field to automatically generate the e-mail address for the zone contact. This software will break if this field is improperly formatted. It is imperative that this address get to one or more real persons, because it is often used for everything from reporting bad DNS data to reporting security incidents.

Even though some BIND versions allow you to use a decimal in a serial number, don't. A decimal serial number is converted to an unsigned 32-bit integer internally anyway. The formula for an n.m serial number is n*10^(3+int(0.9+log10(m))) + m, which translates to something rather unexpected. For example, it's routinely possible with a decimal serial number (perhaps automatically generated by SCCS) to be incremented such that it is numerically larger, but after the above conversion yields a serial number which is LOWER than before. Decimal serial numbers have been officially deprecated in recent BIND versions.
The recommended syntax is YYYYMMDDnn (YYYY=year, MM=month, DD=day, nn=revision number). This won't overflow until the year 4294.

Choose logical values for the timer values in the SOA record (note: values below must be expressed as seconds in the zone data):

Refresh: How often a secondary will poll the primary server to see if the serial number for the zone has increased (so it knows to request a new copy of the data for the zone). Set this to how long your secondaries can comfortably contain out-of-date data. You can keep it short (20 mins to 2 hours) if you aren't worried about a small increase in bandwidth used, or longer (2-12 hours) if your Internet connection is slow or is started on demand. Recent BIND versions (4.9.3) have optional code to automatically notify secondaries that data has changed, allowing you to set this TTL to a long value (one day, or more).

Retry: If a secondary was unable to contact the primary at the last refresh, wait the retry value before trying again. This value isn't as important as others, unless the secondary is on a distant network from the primary or the primary is more prone to outages. It's typically some fraction of the refresh interval.

Expire: How long a secondary will still treat its copy of the zone data as valid if it can't contact the primary. This value should be greater than how long a major outage would typically last, and must be greater than the minimum and retry intervals, to avoid having a secondary expire the data before it gets a chance to get a new copy. After a zone is expired a secondary will still continue to try to contact the primary, but it will no longer provide nameservice for the zone. 2-4 weeks are suggested values.

Minimum: The default TTL (time-to-live) for resource records -- how long data will remain in other nameservers' caches. ([RFC 1035] defines this to be the minimum value, but servers seem to always implement this as the default value.) This is by far the most important timer. Set this as large as is comfortable given how often you update your nameserver. If you plan to make major changes, it's a good idea to turn this value down temporarily beforehand. Then wait the previous minimum value, make your changes, verify their correctness, and turn this value back up. 1-5 days are typical values. Remember this value can be overridden on individual resource records.

As you can see, the typical values above for the timers vary widely. Popular documentation like [RFC 1033] recommended a day for the minimum TTL, which is now considered too low except for zones with data that vary regularly. Once a DNS stabilizes, values on the order of 3 or more days are recommended. It is also recommended that you individually override the TTL on certain RRs which are often referenced and don't often change, to have very large values (1-2 weeks). Good examples of this are the MX, A, and PTR records of your mail host(s), the NS records of your zone, and the A records of your nameservers.
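To make the timer recommendations above concrete, here is a sketch of a complete SOA record using the RFC's podunk.xx example; the exact values are illustrative choices inside the suggested ranges, not numbers prescribed by the RFC:

   podunk.xx.  IN SOA  ns1.podunk.xx. hostmaster.podunk.xx. (
                       1996021501  ; serial, YYYYMMDDnn
                       10800       ; refresh, 3 hours
                       3600        ; retry, 1 hour
                       1209600     ; expire, 2 weeks
                       259200 )    ; minimum TTL, 3 days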
2.3 Glue A Records

Glue records are A records that are associated with NS records to provide "bootstrapping" information to the nameserver. For example:

   podunk.xx.      IN NS  ns1.podunk.xx.
                   IN NS  ns2.podunk.xx.
   ns1.podunk.xx.  IN A   1.2.3.4
   ns2.podunk.xx.  IN A   1.2.3.5

Here, the A records are referred to as "glue records".

Glue records are required only in forward zone files for nameservers that are located in the subdomain of the current zone that is being delegated. You shouldn't have any A records in an in-addr.arpa zone file (unless you're using RFC 1101-style encoding of subnet masks).

If your nameserver is multi-homed (has more than one IP address), you must list all of its addresses in the glue to avoid cache inconsistency due to differing TTL values, causing some lookups to not find all addresses for your nameserver.

Some people get in the bad habit of putting in a glue record whenever they add an NS record, "just to make sure". Having duplicate glue records in your zone files just makes it harder when a nameserver moves to a new IP address, or is removed. You'll spend hours trying to figure out why random people still see the old IP address for some host, because someone forgot to change or remove a glue record in some other file. Newer BIND versions will ignore these extra glue records in local zone files.

Older BIND versions (4.8.3 and previous) have a problem where they insert these extra glue records in the zone transfer data to secondaries. If one of these glues is wrong, the error can be propagated to other nameservers. If two nameservers are secondaries for other zones of each other, it's possible for one to continually pass old glue records back to the other. The only way to get rid of the old data is to kill both of them, remove the saved backup files, and restart them. Combined with that, those same versions also tend to become infected more easily with bogus data found in other non-secondary nameservers (like the root zone data).

2.4 CNAME records

A CNAME record is not allowed to coexist with any other data. In other words, if suzy.podunk.xx is an alias for sue.podunk.xx, you can't also have an MX record for suzy.podunk.xx, or an A record, or even a TXT record. Especially do not try to combine CNAMEs and NS records like this!:

   podunk.xx.  IN NS     ns1
               IN NS     ns2
               IN CNAME  mary
   mary        IN A      1.2.3.4

This is often attempted by inexperienced administrators as an obvious way to allow your domain name to also be a host. However, DNS servers like BIND will see the CNAME and refuse to add any other resources for that name. Since no other records are allowed to coexist with a CNAME, the NS entries are ignored. Therefore all the hosts in the podunk.xx domain are ignored as well! If you want to have your domain also be a host, do the following:

   podunk.xx.  IN NS  ns1
               IN NS  ns2
               IN A   1.2.3.4
   mary        IN A   1.2.3.4

Don't go overboard with CNAMEs. Use them when renaming hosts, but plan to get rid of them (and inform your users). However, CNAMEs are useful (and encouraged) for generalized names for servers -- `ftp' for your ftp server, `www' for your Web server, `gopher' for your Gopher server, `news' for your Usenet news server, etc.

Don't forget to delete the CNAMEs associated with a host if you delete the host they are an alias for. Such "stale CNAMEs" are a waste of resources.

Don't use CNAMEs in combination with RRs which point to other names, like MX, CNAME, PTR and NS. (PTR is an exception if you want to implement classless in-addr delegation.)
For example, this is strongly discouraged:

   podunk.xx.  IN MX     mailhost
   mailhost    IN CNAME  mary
   mary        IN A      1.2.3.4

[RFC 1034] in section 3.6.2 says this should not be done, and [RFC 974] explicitly states that MX records shall not point to an alias defined by a CNAME. This results in unnecessary indirection in accessing the data, and DNS resolvers and servers need to work more to get the answer. If you really want to do this, you can accomplish the same thing by using a preprocessor such as m4 on your host files.

Also, having chained records such as CNAMEs pointing to CNAMEs may make administration issues easier, but is known to tickle bugs in some resolvers that fail to check loops correctly. As a result some hosts may not be able to resolve such names.

Having NS records pointing to a CNAME is bad and may conflict badly with current BIND servers. In fact, current BIND implementations will ignore such records, possibly leading to a lame delegation. There is a certain amount of security checking done in BIND to prevent spoofing DNS NS records. Also, older BIND servers reportedly will get caught in an infinite query loop trying to figure out the address for the aliased nameserver, causing a continuous stream of DNS requests to be sent.

2.5 MX records

It is a good idea to give every host an MX record, even if it points to itself! Some mailers will cache MX records, but will always need to check for an MX before sending mail. If a site does not have an MX, then every piece of mail may result in one more resolver query, since the answer to the MX query often also contains the IP addresses of the MX hosts. Internet SMTP mailers are required by [RFC 1123] to support the MX mechanism.

Put MX records even on hosts that aren't intended to send or receive e-mail. If there is a security problem involving one of these hosts, some people will mistakenly send mail to postmaster or root at the site without checking first to see if it is a "real" host or just a terminal or personal computer that's not set up to accept e-mail. If you give it an MX record, then the e-mail can be redirected to a real person. Otherwise mail can just sit in a queue for hours or days until the mailer gives up trying to send it.

Don't forget that whenever you add an MX record, you need to inform the target mailer if it is to treat the first host as "local". (The "Cw" flag in sendmail, for example.)

If you add an MX record which points to an external host (e.g., for the purposes of backup mail routing), be sure to ask permission from that site first. Otherwise that site could get rather upset and take action (like throw your mail away, or appeal to higher authorities like your parent DNS administrator or network provider).

2.6 Other Resource Records

2.6.1 WKS

WKS records are deprecated in [RFC 1123]. They serve no known useful function, except internally among LISP machines. Don't use them.

2.6.2 HINFO

On the issue of HINFO records, some will argue that they are a security problem (by broadcasting what vendor hardware and operating system you run, so people can mount systematic attacks on known vendor security holes). If you do use them, you should keep up to date with known vendor security problems. However, they serve a useful purpose. Don't forget that HINFO requires two arguments: the hardware type, and the operating system.

HINFO is sometimes abused to provide other information. The record is meant to provide specific information about the machine itself.
If you need to express other information about the host in the DNS, use TXT.

2.6.3 TXT

TXT records have no specific definition. You can put most anything in them. Some use them for a generic description of the host; some put specific information like its location, primary user, or maybe even a phone number.

2.6.4 RP

RP records are relatively new. They are used to specify an e-mail address (see the first paragraph of section 2.2) of the "Responsible Person" for the host, and the name of a TXT record where you can get more information. See [RFC 1183].

2.7 Wildcard records

Wildcard MXs are useful mostly for non-IP-connected sites. A common mistake is thinking that a wildcard MX for a zone will apply to all hosts in the zone. A wildcard MX will apply only to names in the zone which aren't listed in the DNS at all. e.g.,

   podunk.xx.    IN NS    ns1
                 IN NS    ns2
   mary          IN A     1.2.3.4
   *.podunk.xx.  IN MX 5  sue

Mail for mary.podunk.xx will be sent to itself for delivery. Only mail for jane.podunk.xx or any hosts you don't see above will be sent to the MX. For most Internet sites, wildcard MX records are not useful. You need to put explicit MX records on every host.

Wildcard MXs can be bad, because they make some operations succeed when they should fail instead. Consider the case where someone in the domain "widget.com" tries to send mail to "joe@larry". If the host "larry" doesn't actually exist, the mail should in fact bounce immediately. But because of domain searching the address gets resolved to "larry.widget.com", and because of the wildcard MX this is a valid address according to DNS. Or perhaps someone simply made a typo in the hostname portion of the address. The mail message then gets routed to the mail host, which then rejects the mail with strange error messages like "I refuse to talk to myself" or "Local configuration error".

Wildcard MX records are good for when you have a large number of hosts which are not directly Internet-connected (for example, behind a firewall) and for administrative or political reasons it is too difficult to have individual MX records for every host, or to force all e-mail addresses to be "hidden" behind one or more domain names. In that case, you must divide your DNS into two parts: an internal DNS, and an external DNS. The external DNS will have only a few hosts and explicit MX records, and one or more wildcard MXs for each internal domain. Internally the DNS will be complete, with all explicit MX records and no wildcards.

Wildcard As and CNAMEs are possible too, and are really confusing to users, and a potential nightmare if used without thinking first. It could result (due again to domain searching) in any telnet/ftp attempts from within the domain to unknown hosts being directed to one address. One such wildcard CNAME (in *.edu.com) caused Internet-wide loss of services and potential security nightmares due to unexpected interactions with domain searching. It resulted in swift fixes, and even an RFC ([RFC 1535]) documenting the problem.

2.8 Authority and Delegation Errors (NS records)

You are required to have at least two nameservers for every domain, though more is preferred. Have secondaries outside your network. If a secondary isn't under your control, periodically check up on it and make sure it's getting current zone data from you. Queries to their nameserver about your hosts should always result in an "authoritative" response.
If not, this is called a "lame delegation". A lame delegation exists when a nameserver is delegated responsibility for providing nameservice for a zone (via NS records) but is not performing nameservice for that zone (usually because it is not set up as a primary or secondary for the zone).

The "classic" lame delegation can be illustrated by this example:

   podunk.xx.  IN NS  ns1.podunk.xx.
               IN NS  ns0.widget.com.

"podunk.xx" is a new domain which has recently been created, and "ns1.podunk.xx" has been set up to perform nameservice for the zone. They haven't quite finished everything yet and haven't made sure that the hostmaster at "ns0.widget.com" has set up to be a proper secondary; it thus has no information about the podunk.xx domain, even though the DNS says it is supposed to. Various things can happen depending on which nameserver is used. At best, extra DNS traffic will result from a lame delegation. At worst, you can get unresolved hosts and bounced e-mail.

Also, sometimes a nameserver is moved to another host or removed from the list of secondaries. Unfortunately, due to caching of NS records, many sites will still think that a host is a secondary after that host has stopped providing nameservice. In order to prevent lame delegations while the cache is being aged, continue to provide nameservice on the old nameserver for the length of the maximum of the minimum plus refresh times for the zone and the parent zone. (See section 2.2.)

Whenever a primary or secondary is removed or changed, it takes a fair amount of human coordination among the parties involved (the site itself, its parent, and the site hosting the secondary). When a primary moves, make sure all secondaries have their named.boot files updated and their servers reloaded. When a secondary moves, make sure the address records at both the primary and parent level are changed.

It's also been reported that some distant sites like to pick popular nameservers like "ns.uu.net" and just add them to their list of NS records in hopes that they will magically perform additional nameservice for them. This is an even worse form of lame delegation, since this adds traffic to an already busy nameserver. Please contact the hostmasters of sites which have lame delegations. Various tools can be used to detect or actively find lame delegations. See the list of contributed software in the BIND distribution.

Make sure your parent domain has the same NS records for your zone as you do. (Don't forget your in-addr.arpa zones too!) Do not list too many nameservers (7 is the recommended maximum), as this just makes things harder to manage and is only really necessary for very popular top-level or root zones. You also run the risk of overflowing the 512-byte limit of a UDP packet in the response to an NS query. If this happens, resolvers will "fall back" to using TCP requests, resulting in increased load on your nameserver.

It's important when picking geographic locations for secondary nameservers to minimize latency as well as increase reliability. Keep in mind network topologies. For example, if your site is on the other end of a slow local or international link, consider a secondary on the other side of the link to decrease average latency. Contact your Internet service provider or parent domain contact for more information about secondaries which may be available to you.
3. BIND operation

This section discusses common problems people have in the actual operation of the nameserver (specifically, BIND). Not only must the data be correct as explained above, but the nameserver must be operated correctly for the data to be made available.

3.1 Serial numbers

Each zone has a serial number associated with it. Its use is for keeping track of who has the most current data. If and only if the primary's serial number of the zone is greater will the secondary ask the primary for a copy of the new zone data (see special case below).

Don't forget to change the serial number when you change data! If you don't, your secondaries will not transfer the new zone information. Automating the incrementing of the serial number with software is also a good idea.

If you make a mistake and increment the serial number too high, and you want to reset the serial number to a lower value, use the following procedure:

- Take the `incorrect' serial number and add 2147483647 to it.
- If the number exceeds 4294967296, subtract 4294967296.
- Load the resulting number.
- Then wait 2 refresh periods to allow the zone to propagate to all servers.
- Repeat the above until the resulting serial number is less than the target serial number.
- Up the serial number to the target serial number.

This procedure won't work if one of your secondaries is running an old version of BIND (4.8.3 or earlier). In this case you'll have to contact the hostmaster for that secondary and have them kill the secondary servers, remove the saved backup file, and restart the server. Be careful when editing the serial number -- DNS admins don't like to kill and restart nameservers, because you lose all that cached data.
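As a worked illustration of the reset procedure above (the numbers are made up): suppose a zone was accidentally loaded with serial 3000000000 and the target serial is 1996021501. Adding 2147483647 gives 5147483647, which exceeds 4294967296, so subtracting 4294967296 leaves 852516351. Load 852516351 and wait two refresh periods. Since 852516351 is already below the target, one pass suffices: the serial can now be set to 1996021501.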
3.2 Zone file style guide

Here are some useful tips for structuring your zone files. Following these will help you spot mistakes, and avoid making more.

Be consistent with the style of entries in your DNS files. If your $ORIGIN is podunk.xx., try not to write entries like:

   mary            IN A  1.2.3.1
   sue.podunk.xx.  IN A  1.2.3.2

or:

   bobbi           IN A   1.2.3.2
                   IN MX  mary.podunk.xx.

Either use all FQDNs (Fully Qualified Domain Names) everywhere or use unqualified names everywhere. Or have FQDNs all on the right-hand side but unqualified names on the left. Above all, be consistent.

Use tabs between fields, and try to keep columns lined up. It makes it easier to spot missing fields (note some fields such as "IN" are inherited from the previous record and may be left out in certain circumstances).

Remember you don't need to repeat the name of the host when you are defining multiple records for one host. Be sure also to keep all records associated with a host together in the file. It will make things more straightforward when it comes time to remove or rename a host.

Always remember your $ORIGIN. If you don't put a `.' at the end of an FQDN, it's not recognized as an FQDN. If it is not an FQDN, then the nameserver will append $ORIGIN to the name. Double check, triple check, those trailing dots, especially in in-addr.arpa zone files, where they are needed the most.

Be careful with the syntax of the SOA and WKS records (the records which use parentheses). BIND is not very flexible in how it parses these records. See the documentation for BIND.

3.3 Verifying data

Verify the data you just entered or changed by querying the resolver with dig (or your favorite DNS tool; many are included in the BIND distribution) after a change. A few seconds spent double checking can save hours of trouble, lost mail, and general headaches. Also be sure to check syslog output when you reload the nameserver. If you have grievous errors in your DNS data or boot file, named will report them via syslog.

It is also highly recommended that you automate this checking, either with software which runs sanity checks on the data files before they are loaded into the nameserver, or with software which checks the data already loaded in the nameserver. Some contributed software to do this is included in the BIND distribution.
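As a quick sketch of the kind of spot check suggested above, using dig and the RFC's own example names:

   dig mary.podunk.xx. A     # verify the forward record just changed
   dig -x 1.2.3.4            # verify the matching PTR record
   dig podunk.xx. SOA        # confirm the serial number was incremented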
4. Miscellaneous Topics

4.1 Boot file setup

Certain zones should always be present in nameserver configurations:

   primary  localhost             localhost
   primary  0.0.127.in-addr.arpa  127.0
   primary  255.in-addr.arpa      255
   primary  0.in-addr.arpa        0

These are set up to either provide nameservice for "special" addresses, or to help eliminate accidental queries for broadcast or local addresses being sent off to the root nameservers. All of these files will contain NS and SOA records just like the other zone files you maintain, the exception being that you can probably make the SOA timers very long, since this data will never change.

The "localhost" address is a "special" address which always refers to the local host. Its file should contain the following line:

   localhost.  IN A  127.0.0.1

The "127.0" file should contain the line:

   1  PTR  localhost.

There has been some extensive discussion about whether or not to append the local domain to it. The conclusion is that "localhost." would be the best solution. The reasons given include:

- "localhost" by itself is used and expected to work in some systems.
- Translating 127.0.0.1 into "localhost.dom.ain" can cause some software to connect back to the loopback interface when it didn't want to, because "localhost" is not equal to "localhost.dom.ain".

The "255" and "0" files should not contain any additional data beyond the NS and SOA records. Note that future BIND versions may include all or some of this data automatically without additional configuration.

4.2 Other Resolver and Server bugs

Very old versions of the DNS resolver have a bug that causes queries for names that look like IP addresses to go out, because the user supplied an IP address and the software didn't realize that it didn't need to be resolved. This has been fixed, but occasionally it still pops up. It's important because this bug means that these queries will be sent directly to the root nameservers, adding to an already heavy DNS load.

While running a secondary nameserver off another secondary nameserver is possible, it is not recommended unless necessary due to network topologies. There are known cases where it has led to problems like bogus TTL values. While this may be caused by older or flawed DNS implementations, you should not chain secondaries off of one another, since this builds up additional reliability dependencies as well as adds additional delays in updates of new zone data.

4.3 Server issues

DNS operates primarily via UDP (User Datagram Protocol) messages. Some UNIX operating systems, in an effort to save CPU cycles, run with UDP checksums turned off. The relative merits of this have long been debated. However, with the increase in CPU speeds, the performance considerations become less and less important. It is strongly encouraged that you turn on UDP checksumming to avoid corrupted data, not only with DNS but with other services that use UDP (like NFS). Check with your operating system documentation to verify that UDP checksumming is enabled.

References

[RFC 974] Partridge, C., "Mail routing and the domain system", STD 14, RFC 974, CSNET CIC BBN Laboratories Inc, January 1986.
[RFC 1033] Lottor, M., "Domain Administrators Operations Guide", RFC 1033, USC/Information Sciences Institute, November 1987.
[RFC 1034] Mockapetris, P., "Domain Names - Concepts and Facilities", STD 13, RFC 1034, USC/Information Sciences Institute, November 1987.
[RFC 1035] Mockapetris, P., "Domain Names - Implementation and Specification", STD 13, RFC 1035, USC/Information Sciences Institute, November 1987.
[RFC 1123] Braden, R., "Requirements for Internet Hosts -- Application and Support", STD 3, RFC 1123, IETF, October 1989.
[RFC 1178] Libes, D., "Choosing a Name for Your Computer", FYI 5, RFC 1178, Integrated Systems Group/NIST, August 1990.
[RFC 1183] Ullman, R., Mockapetris, P., Mamakos, L., and C. Everhart, "New DNS RR Definitions", RFC 1183, October 1990.
[RFC 1535] Gavron, E., "A Security Problem and Proposed Correction With Widely Deployed DNS Software", RFC 1535, ACES Research Inc., October 1993.
[RFC 1536] Kumar, A., Postel, J., Neuman, C., Danzig, P., and S. Miller, "Common DNS Implementation Errors and Suggested Fixes", RFC 1536, USC/Information Sciences Institute, USC, October 1993.
[RFC 1537] Beertema, P., "Common DNS Data File Configuration Errors", RFC 1537, CWI, October 1993.
[RFC 1713] Romao, A., "Tools for DNS debugging", RFC 1713, FCCN, November 1994.
[BOG] Vixie, P., et al., "Name Server Operations Guide for BIND", Vixie Enterprises, July 1994.

5. Security Considerations

Security issues are not discussed in this memo.

6. Author's Address

David Barr
The Pennsylvania State University
Department of Mathematics
334 Whitmore Building
University Park, PA 16802

Voice: +1 814 863 7374
Fax: +1 814 863 8311
EMail: barr@math.psu.edu

https://www.ietf.org/rfc/rfc1912.txt
  9. DNS BIND9 logging Clause

This section describes the logging clause, which prior to BIND 9 needed to appear first in the named.conf file. This is no longer the case and it may appear anywhere convenient. BIND uses syslogd before a valid logging clause is available, so named.conf parse errors and other information will appear in /var/log/messages (depending on syslog.conf) prior to, or in the absence of, a valid logging clause. On Windows, parse errors are written to the Event Log. Only one logging clause can be defined, but multiple channels may be defined to stream logs.

logging Clause Syntax

BIND provides comprehensive logging features. In the grammar below, keywords are literal:

   logging {
      [ channel channel_name {
        ( file path_name
            [ versions ( number | unlimited ) ]
            [ size size_spec ]
          | syslog syslog_facility
          | stderr
          | null );
        [ severity (critical | error | warning | notice | info | debug [ level ] | dynamic ); ]
        [ print-category yes | no; ]
        [ print-severity yes | no; ]
        [ print-time yes | no; ]
      }; ]
      [ category category_name {
        channel_name ; [ channel_name ; ... ]
      }; ]
      ...
   };

The following notes describe the various fields and values:

channel channel_name
BIND will accept multiple channel definitions in a single logging statement. 'channel_name' is normally written as a non-space name, for instance my_channel, but it can be written as a quoted string, for instance "my channel". It is an arbitrary but unique name used to associate the category statement with this channel definition, or it may take one of the standard (pre-defined) values below:

   "default_syslog"  log everything to syslog (default logging destination)
   "default_debug"
   "default_stderr"  output to stderr (normally the console)
   "null"            discard all log entries (write to /dev/null)

file
'path_name' is a quoted string defining the absolute path to the logging file, for example "/var/log/named/namedlog.log". From the grammar above, 'file', 'syslog', 'stderr' and 'null' are mutually exclusive for a 'channel'.

versions
'versions' may take the parameter 'number' or 'unlimited' and defines the number of file versions that should be kept by BIND. Version files are created by BIND by appending .0, .1 etc. to the file name defined by the file parameter. Files are 'rolled' (renamed or overwritten), so .0 will always contain the last log information prior to commencing the new log, .1 the next, and so on. 'unlimited' currently implies 'versions 99'. Unless a size parameter is used, new log versions will only be 'rolled' when BIND is restarted. If no versions statement is defined, a single log file of unlimited size is used, and on restart new data is appended to the defined file. This can get to be a very big file.

size size_spec
'size' allows you to define a limit to the file size created. A numeric-only size_spec value is assumed to be the size in bytes; you may use the short forms k or K, m or M, g or G, e.g. 25m = 25000000. size and versions are related in the following way:

- If you specify a size value and NO versions parameter, when the size limit is reached BIND will stop logging until the file size is reduced to below the defined threshold, i.e. by deleting or truncating the file.
- If you specify a size AND a versions parameter, the log files will be 'rolled' (renamed and overwritten as defined in the versions section above) when the size limit is reached.
- If you specify NO size AND a versions parameter, the log files will be 'rolled' (renamed and overwritten as defined in the versions section above) only when BIND is restarted.
syslog syslog_facility
'syslog' indicates that this channel will use syslogd logging features (as defined in syslog.conf). The syslog_facility is the facility definition for 'syslog' and may be found in syslog's man pages. From the grammar above, 'file', 'syslog', 'stderr' and 'null' are mutually exclusive for a 'channel'.

stderr
'stderr' writes to the current standard output and would typically be used only for debug purposes. From the grammar above, 'file', 'syslog', 'stderr' and 'null' are mutually exclusive for a 'channel'.

null
'null' writes to /dev/null - the bit bucket, nowhere. It does not produce a log. From the grammar above, 'file', 'syslog', 'stderr' and 'null' are mutually exclusive for a 'channel'.

severity
Controls the logging levels and may take the values defined below. Logging will occur for any message equal to or higher than the level specified (=>); lower levels will not be logged.

   critical  only critical errors.
   error     error and above.
   warning   warning and above.
   notice    notice and above.
   info      info and above - log starting to get chatty.
   debug     debug and above. Various debug levels can be defined, with 'debug 0' meaning no debugging.
   dynamic   debug and above. Assumes the global debug level defined by either the command line parameter -d or by running rndc trace.

print-time yes | no
Controls whether the date and time are written to the output channel (yes) or not (no). The default is 'no'.

print-severity yes | no
Controls whether the severity level is written to the output channel (yes) or not (no). The default is 'no'.

print-category yes | no
Controls whether the category is written to the output channel (yes) or not (no). The default is 'no'.

category category_name
Controls what categories are logged to the various defined or default 'channel_names'. The category_name (a quoted string, for example "default") may take one of the following values:

client - Processing of client requests.
config - Configuration file parsing and processing.
database - Messages relating to the databases used internally by the name server to store zone and cache data.
default - Logs all values which are not explicitly defined in category statements, i.e. if this is the only category defined it will log all categories listed in this table with the exception of queries, which are not turned on by default.
delegation-only - Logs queries that have returned NXDOMAIN as the result of a delegation-only zone or a delegation-only statement in a hint or stub zone declaration.
dispatch - Dispatching of incoming packets to the server modules where they are to be processed.
dnssec - DNSSEC and TSIG protocol processing.
general - Anything that is not classified as any other item in this list defaults to this category.
lame-servers - Lame servers: mis-configuration in the delegation of domains discovered by BIND 9 when trying to get authoritative answers. If the volume of these messages is high, many users elect to send them to the null channel, e.g. with a category lame-servers {null;}; statement.
network - Logs all network operations.
notify - Logs all NOTIFY operations.
queries - Logs all query transactions. The querylog statement may be used to override this category statement. This entry can generate a substantial volume of data very quickly. This category is not turned on by default, and hence the default category above will not log this information.
resolver - Name resolution, including recursive lookups performed on behalf of clients by a caching name server.
rpz - All operations related to Response Policy Zone (RPZ) processing. Even when RPZ zones are disabled (using the policy disabled parameter in the response-policy statement) the operation is completed, logged, then discarded (the real response is returned to the user).
rate-limit - All operations related to one or more rate-limit statements in the options or view clauses.
security - Approval and denial of requests.
unmatched - No matching view clause or unrecognized class value. A one-line summary is also logged to the client category. By default this category is sent to the null channel.
update - Logging of all dynamic update (DDNS) transactions.
update-security - Approval and denial of update requests used with DDNS.
xfer-in - Details of zone transfers the server is receiving.
xfer-out - Details of zone transfers the server is sending.

Examples

The first example shows a minimal logging configuration that will work and generate modest log volumes.

   logging {
      channel simple_log {
         file "/var/log/named/bind.log" versions 3 size 5m;
         severity warning;
         print-time yes;
         print-severity yes;
         print-category yes;
      };
      category default {
         simple_log;
      };
   };

Problems, comments, suggestions, corrections (including broken links) or something to add? Please take the time from a busy life to 'mail us' (at top of screen), the webmaster (below) or info-support at zytrax. You will have a warm inner glow for the rest of the day.
http://www.zytrax.com/books/dns/ch7/logging.html
  10. Why does rndc log "warning key file ... exists, but using default configuration file (rndc.conf)"?

Author: Cathy Almond | Reference Number: AA-00722 | Created: 2012-07-18 | Last Updated: 2017-09-18

After upgrading BIND to a current version, you might be surprised to see this warning when using rndc commands (although the command should still work as before, unless you've made other configuration changes):

WARNING: key file (rndc.key) exists, but using default configuration file (rndc.conf)

Both named and rndc can operate with explicit or automatic control configuration. They do this by looking for the file rndc.key in the default configuration files directory. If there is no explicit configuration (the controls statement in named.conf for named, or the existence of the file rndc.conf for rndc), then the key in the rndc.key file will be used instead (if it exists).

The rndc.key file isn't created automatically on installation; use "rndc-confgen -a" to create it.

Unfortunately, in the situation where there is both an explicit configuration and the file rndc.key exists, it can sometimes be confusing for troubleshooting to know which configuration option is in use, particularly if there are problems with issuing rndc commands. So from BIND 9.7.0, a warning was added so that the choice made by rndc was clearly indicated to the operator. Administrators who have made use of the include functionality of named.conf and rndc.conf to import an independently-generated rndc.key file will see this new warning, but can safely ignore it.

Getting rid of the warning message

There is no need to make any configuration changes if rndc commands are not failing, but administrators might prefer to ensure that any ambiguity is removed. Options include:

- Removing the rndc.key file
- Keeping rndc.key, but removing the controls statements from named.conf and deleting rndc.conf
- If using include for rndc.key, you could put the file elsewhere and import it from there

© 2001-2018 Internet Systems Consortium

For assistance with problems and questions for which you have not been able to find an answer in our Knowledge Base, we recommend searching our community mailing list archives and/or posting your question there (you will need to register there first for your posts to be accepted). The bind-users and the dhcp-users lists particularly have a long-standing and active membership. ISC relies on the financial support of the community to fund the development of its open source software products. If you would like to support future product evolution and maintenance, as well as having peace of mind knowing that our team of experts is poised to provide you with individual technical assistance whenever you call upon them, then please consider our Professional Subscription Support services - details can be found on our main website.
https://deepthought.isc.org/article/AA-00722/0/Why-does-rndc-log-warning-key-file-...-exists-but-using-default-configuration-file-rndc.conf.html
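For reference, a minimal sketch of the two commands involved (both ship with BIND):

   rndc-confgen -a    # writes rndc.key into the default configuration directory
   rndc status        # quick check that rndc can reach and control named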
  11. Red Hat Linux 7.2 - The Official Red Hat Linux Reference Guide
Copyright © 2001 by Red Hat, Inc.
http://www-uxsup.csx.cam.ac.uk/pub/doc/redhat/AS2.1/rhl-rg-en-7.2/
  12. What is PECL? PECL is a repository for PHP extensions, providing a directory of all known extensions and hosting facilities for downloading and developing PHP extensions. The packaging and distribution system used by PECL is shared with its sister project, PEAR. https://pecl.php.net/
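For context (my addition, not from the PECL page): packages hosted there are normally installed with the pecl command-line tool that ships with PEAR, then enabled in php.ini; the extension name below is just an example:

pecl install imagick
# then enable it, e.g. in php.ini:
# extension=imagick.so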
  13. The Experimental Repository https://documentation.cpanel.net/display/EA4/The+Experimental+Repository
  14. Welcome to Regular-Expressions.info - The Premier website about Regular Expressions

A regular expression (regex or regexp for short) is a special text string for describing a search pattern. You can think of regular expressions as wildcards on steroids. You are probably familiar with wildcard notations such as *.txt to find all text files in a file manager. The regex equivalent is ^.*\.txt$.

But you can do much more with regular expressions. In a text editor like EditPad Pro or a specialized text processing tool like PowerGREP, you could use the regular expression \b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b to search for an email address. Any email address, to be exact. A very similar regular expression (replace the first \b with ^ and the last one with $) can be used by a programmer to check whether the user entered a properly formatted email address. In just one line of code, whether that code is written in Perl, PHP, Java, a .NET language, or a multitude of other languages.

https://www.regular-expressions.info/
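To try the anchored variant from a shell (my example, not from the site; grep's -E enables extended regex syntax and -i makes the character classes case-insensitive):

echo "joe@example.com" | grep -Ei '^[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}$' \
  && echo "looks like a valid address"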
  15. Introduction

Logrotate is a utility designed for administrators who manage servers producing a high volume of log files, helping them save disk space and avoid the risk of a system becoming unresponsive due to a full disk. Normally, a solution to this kind of problem is to set up a separate partition or logical volume for the /var mount point. However, logrotate may also be a viable solution, especially if it is too late to move the logs to a different partition. In this article we will talk about the usage and configuration of logrotate on a RedHat / CentOS Linux server.

What is Logrotate

Logrotate gives a system administrator the ability to systematically rotate and archive any log files produced by the system, thus reducing the operating system's disk space requirements. By default logrotate is invoked once a day by the cron scheduler from /etc/cron.daily/:

# ls /etc/cron.daily/
cups logrotate makewhatis.cron mlocate.cron prelink readahead.cron rhsmd tmpwatch

Configuring Logrotate

Logrotate is configured by editing two separate sets of configuration files:

/etc/logrotate.conf
service-specific configuration files stored in /etc/logrotate.d/

The main logrotate.conf file contains the generic configuration. Here is a default logrotate configuration file logrotate.conf:

1 weekly
2 rotate 4
3 create
4 dateext
5 include /etc/logrotate.d
6 /var/log/wtmp {
7     monthly
8     create 0664 root utmp
9     minsize 1M
10     rotate 1
11 }

Line 1 - the weekly option ensures a weekly rotation of all log files defined in the main configuration file and in the /etc/logrotate.d/ directory.
Line 2 - rotate 4 ensures that logrotate keeps a 4-week backup of all log files.
Line 3 - the create option instructs logrotate to create new empty log files after each rotation.
Line 4 - dateext appends an extension to all rotated log files in the form of the date on which each particular log file was processed by logrotate.
Line 5 - include all other configuration from the directory /etc/logrotate.d.
Lines 6 - 11 contain a specific service's log rotation configuration.

As opposed to logrotate.conf, the directory /etc/logrotate.d/ contains service-specific configuration files used by logrotate. In the next section we will create a sample skeleton logrotate configuration.

Including new service logs to logrotate

In this section we will add a new log file to the logrotate configuration. Let's say that we have a log file called /var/log/linuxcareer.log sitting in our /var/log directory that needs to be rotated on a daily basis. First we need to create a new logrotate configuration file to accommodate our new log file:

$ vi /etc/logrotate.d/linuxcareer

Insert the following text into /etc/logrotate.d/linuxcareer:

/var/log/linuxcareer.log {
    missingok
    notifempty
    compress
    size 20k
    daily
    create 0600 root root
}

TIP: If you wish to include multiple log files in a single configuration file, use a wildcard. For example, /var/log/mylogs/*.log will instruct logrotate to rotate all log files located in /var/log/mylogs/ with the extension .log.

Here is a line-by-line explanation of the above logrotate configuration file:
missingok - do not output an error if the logfile is missing
notifempty - do not rotate the log file if it is empty
compress - old versions of log files are compressed with gzip(1) by default
size - the log file is rotated only if it grows bigger than 20k
daily - ensures daily rotation
create - creates a new log file with permissions 600 where owner and group is the root user

The logrotate utility is quite versatile and provides many more configuration options. Below I will list a few other configuration options for logrotate. To get a complete list, consult logrotate's manual page:

$ man logrotate

copy - make a copy of the log file, but don't change the original at all.
mail <email@address> - when a log is rotated out of existence, it is mailed to the address.
olddir <directory> - logs are moved into <directory> for rotation.
postrotate/endscript - the lines between postrotate and endscript are executed after the log file is rotated.

Testing a new Logrotate configuration

Once you have created a new logrotate configuration file within /etc/logrotate.d:

# cat /etc/logrotate.d/linuxcareer
/var/log/linuxcareer.log {
    missingok
    notifempty
    compress
    size 20k
    daily
    create 0600 root root
}

create a sample log file (if one does not already exist):

# echo "rotate my log file" > /var/log/linuxcareer.log

Once your log file is in place, force logrotate to rotate all logs with the -f option:

# logrotate -f /etc/logrotate.conf

Warning: the above command will rotate all your logs defined in the /etc/logrotate.d directory. Now visit your /var/log directory again and confirm that your log file was rotated and a new log file was created:

# cat /var/log/linuxcareer.log
rotate my log file
# logrotate -f /etc/logrotate.conf
# cat /var/log/linuxcareer.log
# file /var/log/linuxcareer.log-20130409.gz
/var/log/linuxcareer.log-20130409.gz: gzip compressed data, from Unix, last modified: Tue Apr 9 12:43:50 2013
# zcat /var/log/linuxcareer.log-20130409.gz
rotate my log file

As you can see, a new empty log file linuxcareer.log was created, and the old linuxcareer.log file was compressed with gzip and renamed with a date extension.

TIP: In order to see the content of your compressed log file you do not need to decompress it first. Use the zcat or zless commands, which will decompress your log file on the fly.

Conclusion

As already mentioned, the best way to avoid your system being clogged by log files is to create a separate partition/logical volume for your /var or, even better, /var/log directory. Even then, however, logrotate can help you save disk space by compressing your log files. Logrotate may also help you archive your log files for future reference by creating an extra copy or by emailing you any newly rotated log files. For more information see logrotate's manual page:

$ man logrotate

https://linuxconfig.org/setting-up-logrotate-on-redhat-linux
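The postrotate/endscript option listed above deserves a quick illustration (my sketch; the daemon name and pid file are hypothetical). It is typically used to tell a service to reopen its log file after rotation:

/var/log/myapp.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    postrotate
        # tell the (hypothetical) daemon to reopen its log file
        [ -f /var/run/myapp.pid ] && kill -HUP "$(cat /var/run/myapp.pid)"
    endscript
}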
  16. The Awful Truth about sed

Sed is the ultimate stream editor. If that sounds strange, picture a stream flowing through a pipe. Okay, you can't see a stream if it's inside a pipe. That's what I get for attempting a flowing analogy. You want literature, read James Joyce. Anyhow, sed is a marvelous utility. Unfortunately, most people never learn its real power. The language is very simple, but the documentation is terrible. The Solaris on-line manual pages for sed are five pages long, and two of those pages describe the 34 different errors you can get. A program that spends as much space documenting the errors as it does documenting the language has a serious learning curve. Do not fret! It is not your fault you don't understand sed. I will cover sed completely. But I will describe the features in the order that I learned them. I didn't learn everything at once. You don't need to either.

http://www.grymoire.com/Unix/Sed.html
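For a first taste before reading the tutorial (my example, not Grymoire's): sed's most-used command is s (substitute), applied here to a stream coming through a pipe:

echo "the awful truth" | sed 's/awful/plain/'
# prints: the plain truth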
  17. What is grep, and how do I use it?

The grep utilities are a family of Unix tools, including grep, egrep, and fgrep, that perform repetitive searching tasks. The tools in the grep family are very similar, and all are used for searching the contents of files for information that matches particular criteria. For most purposes, you'll want to use fgrep, since it's generally the fastest.

The general syntax of the grep commands is:

grep [-options] pattern [filename]

You can use fgrep to find all the lines of a file that contain a particular word. For example, to list all the lines of a file named myfile in the current directory that contain the word "dog", enter at the Unix prompt:

fgrep dog myfile

This will also return lines where "dog" is embedded in larger words, such as "dogma" or "dogged". You can use the -w option with the grep command to return only lines where "dog" is included as a separate word:

grep -w dog myfile

To search for several words separated by spaces, enclose the whole search string in quotes, for example:

fgrep "dog named Checkers" myfile

The fgrep command is case sensitive; specifying "dog" will not match "Dog" or "DOG". You can use the -i option with the grep command to match both upper- and lowercase letters:

grep -i dog myfile

To list the lines of myfile that do not contain "dog", use the -v option:

fgrep -v dog myfile

If you want to search for lines that contain any of several different words, you can create a second file (named secondfile in the following example) that contains those words, and then use the -f option:

fgrep -f secondfile myfile

You can also use wildcards to instruct fgrep to search any files that match a particular pattern. For example, if you wanted to find lines containing "dog" in any of the files in your directory with names beginning with "my", you could enter:

fgrep dog my*

This command would search files with names such as myfile, my.hw1, and mystuff in the current directory. Each line returned will be prefaced with the name of the file where the match was found.

By using pipes and/or redirection, you can use the output from any of these commands with other Unix tools, such as more, sort, and cut. For example, to print the fifth word of every line of myfile containing "dog", sort the words alphabetically, and then filter the output through the more command for easy reading, you would enter at the Unix prompt:

fgrep dog myfile | cut -f5 -d" " | sort | more

If you want to save the output in a file in the current directory named newfile, enter:

fgrep dog myfile | cut -f5 -d" " | sort > newfile

For more information about grep, egrep, and fgrep, enter:

man grep

https://kb.iu.edu/d/afiy
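The options above combine as you would expect (my example, consistent with the flags described; -c, which counts matching lines instead of printing them, is one more standard grep flag not covered in the article):

grep -iw dog myfile     # "dog" as a separate word, any case
grep -iwc dog myfile    # just the number of such lines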
  18. Exim Cheatsheet

Here are some useful things to know for managing an Exim 4 server. This assumes a prior working knowledge of SMTP, MTAs, and a UNIX shell prompt.

Message-IDs and spool files

The message-IDs that Exim uses to refer to messages in its queue are mixed-case alpha-numeric, and take the form of: XXXXXX-YYYYYY-ZZ. Most commands related to managing the queue and logging use these message-ids.

There are three -- count 'em, THREE -- files for each message in the spool directory. If you're dealing with these files by hand, instead of using the appropriate exim commands as detailed below, make sure you get them all, and don't leave Exim with remnants of messages in the queue. I used to mess directly with these files when I first started running Exim machines, but thanks to the utilities described below, I haven't needed to do that in many months.

Files in /var/spool/exim/msglog contain logging information for each message and are named the same as the message-id. Files in /var/spool/exim/input are named after the message-id, plus a suffix denoting whether it is the envelope header (-H) or message data (-D). These directories may contain further hashed subdirectories to deal with larger mail queues, so don't expect everything to always appear directly in the top /var/spool/exim/input or /var/spool/exim/msglog directories; any searches or greps will need to be recursive. See if there is a proper way to do what you're doing before working directly on the spool files.

Basic information

Print a count of the messages in the queue:
root@localhost# exim -bpc

Print a listing of the messages in the queue (time queued, size, message-id, sender, recipient):
root@localhost# exim -bp

Print a summary of messages in the queue (count, volume, oldest, newest, domain, and totals):
root@localhost# exim -bp | exiqsumm

Print what Exim is doing right now:
root@localhost# exiwhat

Test how exim will route a given address:
root@localhost# exim -bt alias@localdomain.com
user@thishost.com
    <-- alias@localdomain.com
  router = localuser, transport = local_delivery
root@localhost# exim -bt user@thishost.com
user@thishost.com
  router = localuser, transport = local_delivery
root@localhost# exim -bt user@remotehost.com
  router = lookuphost, transport = remote_smtp
  host mail.remotehost.com [1.2.3.4] MX=0

Run a pretend SMTP transaction from the command line, as if it were coming from the given IP address. This will display Exim's checks, ACLs, and filters as they are applied. The message will NOT actually be delivered.
root@localhost# exim -bh 192.168.11.22

Display all of Exim's configuration settings:
root@localhost# exim -bP

Searching the queue with exiqgrep

Exim includes a utility that is quite nice for grepping through the queue, called exiqgrep. Learn it. Know it. Live it. If you're not using this, and if you're not familiar with the various flags it uses, you're probably doing things the hard way, like piping `exim -bp` into awk, grep, cut, or `wc -l`. Don't make life harder than it already is.

First, various flags that control what messages are matched. These can be combined to come up with a very particular search.

Use -f to search the queue for messages from a specific sender:
root@localhost# exiqgrep -f [luser]@domain

Use -r to search the queue for messages for a specific recipient/domain:
root@localhost# exiqgrep -r [luser]@domain

Use -o to print messages older than the specified number of seconds.
For example, messages older than 1 day:
root@localhost# exiqgrep -o 86400 [...]

Use -y to print messages that are younger than the specified number of seconds. For example, messages less than an hour old:
root@localhost# exiqgrep -y 3600 [...]

Use -s to match the size of a message with a regex. For example, 700-799 bytes:
root@localhost# exiqgrep -s '^7..$' [...]

Use -z to match only frozen messages, or -x to match only unfrozen messages.

There are also a few flags that control the display of the output.

Use -i to print just the message-id as a result of one of the above two searches:
root@localhost# exiqgrep -i [ -r | -f ] ...

Use -c to print a count of messages matching one of the above searches:
root@localhost# exiqgrep -c ...

Print just the message-id of the entire queue:
root@localhost# exiqgrep -i

Managing the queue

The main exim binary (/usr/sbin/exim) is used with various flags to make things happen to messages in the queue. Most of these require one or more message-IDs to be specified in the command line, which is where `exiqgrep -i` as described above really comes in handy.

Start a queue run:
root@localhost# exim -q -v

Start a queue run for just local deliveries:
root@localhost# exim -ql -v

Remove a message from the queue:
root@localhost# exim -Mrm <message-id> [ <message-id> ... ]

Freeze a message:
root@localhost# exim -Mf <message-id> [ <message-id> ... ]

Thaw a message:
root@localhost# exim -Mt <message-id> [ <message-id> ... ]

Deliver a message, whether it's frozen or not, whether the retry time has been reached or not:
root@localhost# exim -M <message-id> [ <message-id> ... ]

Deliver a message, but only if the retry time has been reached:
root@localhost# exim -Mc <message-id> [ <message-id> ... ]

Force a message to fail and bounce as "cancelled by administrator":
root@localhost# exim -Mg <message-id> [ <message-id> ... ]

Remove all frozen messages:
root@localhost# exiqgrep -z -i | xargs exim -Mrm

Remove all messages older than five days (86400 * 5 = 432000 seconds):
root@localhost# exiqgrep -o 432000 -i | xargs exim -Mrm

Freeze all queued mail from a given sender:
root@localhost# exiqgrep -i -f luser@example.tld | xargs exim -Mf

View a message's headers:
root@localhost# exim -Mvh <message-id>

View a message's body:
root@localhost# exim -Mvb <message-id>

View a message's logs:
root@localhost# exim -Mvl <message-id>

Add a recipient to a message:
root@localhost# exim -Mar <message-id> <address> [ <address> ... ]

Edit the sender of a message:
root@localhost# exim -Mes <message-id> <address>

Access control

Exim allows you to apply access control lists at various points of the SMTP transaction by specifying an ACL to use and defining its conditions in exim.conf. You could start with the HELO string.

# Specify the ACL to use after HELO
acl_smtp_helo = check_helo

# Conditions for the check_helo ACL:
check_helo:

  deny message = Gave HELO/EHLO as "friend"
       log_message = HELO/EHLO friend
       condition = ${if eq {$sender_helo_name}{friend} {yes}{no}}

  deny message = Gave HELO/EHLO as our IP address
       log_message = HELO/EHLO our IP address
       condition = ${if eq {$sender_helo_name}{$interface_address} {yes}{no}}

  accept

NOTE: Pursue HELO checking at your own peril. The HELO is fairly unimportant in the grand scheme of SMTP these days, so don't put too much faith in whatever it contains. Some spam might seem to use a telltale HELO string, but you might be surprised at how many legitimate messages start off with a questionable HELO as well.
Anyway, it's just as easy for a spammer to send a proper HELO as it is to send HELO im.a.spammer, so consider yourself lucky if you're able to stop much spam this way.

Next, you can perform a check on the sender address or remote host. This shows how to do that after the RCPT TO command; if you reject here, as opposed to rejecting after the MAIL FROM, you'll have better data to log, such as who the message was intended for.

# Specify the ACL to use after RCPT TO
acl_smtp_rcpt = check_recipient

# Conditions for the check_recipient ACL
check_recipient:

  # [...]

  drop hosts = /etc/exim_reject_hosts
  drop senders = /etc/exim_reject_senders

  # [ Probably a whole lot more... ]

This example uses two plain text files as blacklists. Add appropriate entries to these files - hostnames/IP addresses to /etc/exim_reject_hosts, addresses to /etc/exim_reject_senders, one entry per line.

It is also possible to perform content scanning using a regex against the body of a message, though obviously this can cause Exim to use more CPU than it otherwise would need to, especially on large messages.

# Specify the ACL to use after DATA
acl_smtp_data = check_message

# Conditions for the check_message ACL
check_message:

  deny message = "Sorry, Charlie: $regex_match_string"
       regex = ^Subject:: .*Lower your self-esteem by becoming a sysadmin

  accept

Fix SMTP-Auth for Pine

If pine can't use SMTP authentication on an Exim host and just returns an "unable to authenticate" message without even asking for a password, add the following to exim.conf:

begin authenticators

fixed_plain:
  driver = plaintext
  public_name = PLAIN
  server_condition = "${perl{checkuserpass}{$1}{$2}{$3}}"
  server_set_id = $2
  server_prompts = :

This was a problem on CPanel Exim builds a while ago, but they seem to have added this line to their current stock configuration.

Log the subject line

This is one of the most useful configuration tweaks I've ever found for Exim. Add this to exim.conf, and you can log the subject lines of messages that pass through your server. This is great for troubleshooting, and for getting a very rough idea of what messages may be spam.

log_selector = +subject

(See the Exim documentation on log_selector for reducing or increasing what is logged.)

Disable identd lookups

Frankly, I don't think identd has been useful for a long time, if ever. Identd relies on the connecting host to confirm the identity (system UID) of the remote user who owns the process that is making the network connection. This may be of some use in the world of shell accounts and IRC users, but it really has no place on a high-volume SMTP server, where the UID is often simply "mail" or whatever the remote MTA runs as, which is useless to know. It's overhead, and results in nothing but delays while the identd query is refused or times out. You can stop your Exim server from making these queries by setting the timeout to zero seconds in exim.conf:

rfc1413_query_timeout = 0s

Disable Attachment Blocking

To disable the executable-attachment blocking that many Cpanel servers do by default but don't provide any controls for on a per-domain basis, add the following block to the beginning of the /etc/antivirus.exim file:

if $header_to: matches "example\.com|example2\.com"
then
  finish
endif

It is probably possible to use a separate file to list these domains, but I haven't had to do this enough times to warrant setting such a thing up.

Searching the logs with exigrep

The exigrep utility (not to be confused with exiqgrep) is used to search an exim log for a string or pattern.
It will print all log entries with the same internal message-id as those that matched the pattern, which is very handy since any message will take up at least three lines in the log. exigrep will search the entire content of a log entry, not just particular fields.

One can search for messages sent from a particular IP address:
root@localhost# exigrep '<= .* \[12.34.56.78\] ' /path/to/exim_log

Search for messages sent to a particular IP address:
root@localhost# exigrep '=> .* \[12.34.56.78\]' /path/to/exim_log

This example searches for outgoing messages, which have the "=>" symbol, sent to "user@domain.tld". The pipe to grep for the "<=" symbol will match only the lines with information on the sender - the From address, the sender's IP address, the message size, the message ID, and the subject line if you have enabled logging the subject. The purpose of doing such a search is that the desired information is not on the same log line as the string being searched for.
root@localhost# exigrep '=> .*user@domain.tld' /path/to/exim_log | fgrep '<='

Generate and display Exim stats from a logfile:
root@localhost# eximstats /path/to/exim_mainlog

Same as above, with less verbose output:
root@localhost# eximstats -ne -nr -nt /path/to/exim_mainlog

Same as above, for one particular day:
root@localhost# fgrep YYYY-MM-DD /path/to/exim_mainlog | eximstats

Bonus!

To delete all queued messages containing a certain string in the body:
root@localhost# grep -lr 'a certain string' /var/spool/exim/input/ | \
    sed -e 's/^.*\/\([a-zA-Z0-9-]*\)-[DH]$/\1/g' | xargs exim -Mrm

Note that the above only delves into /var/spool/exim in order to grep for queue files with the given string, and that's just because exiqgrep doesn't have a feature to grep the actual bodies of messages. If you are deleting these files directly, YOU ARE DOING IT WRONG! Use the appropriate exim command to properly deal with the queue.

If you have to feed many, many message-ids (such as the output of an `exiqgrep -i` command that returns a lot of matches) to an exim command, you may exhaust the limit of your shell's command line arguments. In that case, pipe the listing of message-ids into xargs to run only a limited number of them at once. For example, to remove thousands of messages sent from joe@example.com:
root@localhost# exiqgrep -i -f '<joe@example.com>' | xargs exim -Mrm

Speaking of "DOING IT WRONG" -- Attention, CPanel forum readers

I get a number of hits to this page from a link in this post at the CPanel forums. The question is:

Due to spamming, spoofing from fields, etc., etc., etc., I am finding it necessary to spend more time to clear the exim queue from time to time. [...] what command would I use to delete the queue

The answer is: Just turn exim off, because your customers are better off knowing that email simply isn't running on your server, than having their queued messages deleted without notice. Or, figure out what is happening. The examples given in that post pay no regard to the legitimacy of any message, they simply delete everything, making the presumption that if a message is in the queue, it's junk. That is total fallacy. There are a number of reasons legitimate mail can end up in the queue. Maybe your backups or CPanel's "upcp" process are running, and your load average is high -- exim goes into a queue-only mode at a certain threshold, where it stops trying to deliver messages as they come in and just queues them until the load goes back down.
Or, maybe it's an outgoing message, and the DNS lookup failed, or the connection to the domain's MX failed, or maybe the remote MX is busy or greylisting you with a 4xx deferral. These are all temporary failures, not permanent ones, and the whole point of having temporary failures in SMTP and a mail queue in your MTA is to be able to try again after a while.

Exim already purges messages from the queue after the period of time specified in exim.conf. If you have this value set appropriately, there is absolutely no point in removing everything from your queue every day with a cron job. You will lose legitimate mail, and the sender and recipient will never know if or why it happened. Do not do this!

If you regularly have a large number of messages in your queue, find out why they are there. If they are outbound messages, see who is sending them, where they're addressed to, and why they aren't getting there. If they are inbound messages, find out why they aren't getting delivered to your user's account. If you need to delete some, use exiqgrep to pick out just the ones that should be deleted.

Reload the configuration

After making changes to exim.conf, you need to give the main exim pid a SIGHUP to re-exec it and have the configuration re-read. Sure, you could stop and start the service, but that's overkill and causes a few seconds of unnecessary downtime. Just do this:
root@localhost# kill -HUP `cat /var/spool/exim/exim-daemon.pid`

You should then see something resembling the following in exim_mainlog:
pid 1079: SIGHUP received: re-exec daemon
exim 4.52 daemon started: pid=1079, -q1h, listening for SMTP on port 25 (IPv4)

Read The Fucking Manual

The Exim Home Page
Documentation For Exim
The Exim Specification - Version 4.5x
Exim command line arguments

Any questions? Well, don't ask me! I'm one guy, with just enough time and Exim skills to keep my own stuff running okay. There are several (perhaps even dozens) of people on the Internet who know way more than me, and who are willing to help out random strangers. Check into the Exim users mailing list, or one of the many web-based gateways to that list. And good luck.

http://bradthemad.org/tech/notes/exim_cheatsheet.php
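One more combination worth noting (my example, built only from flags documented above): the exiqgrep search flags stack, so you can remove every frozen message that has also been sitting in the queue for more than a day in one line:

root@localhost# exiqgrep -z -o 86400 -i | xargs exim -Mrm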
  19. xargs is a command on Unix and most Unix-like operating systems used to build and execute commands from standard input. It converts input from standard input into arguments to a command. Some commands such as grep and awk can take input either as command-line arguments or from the standard input. However, others such as cp and echo can only take input as arguments, which is why xargs is necessary.

Examples

One use case of the xargs command is to remove a list of files using the rm command. Under the Linux kernel before version 2.6.23, and under many other Unix-like systems, arbitrarily long lists of parameters cannot be passed to a command,[1] so the command may fail with an error message of "Argument list too long" (meaning that the exec system call's limit on the length of a command line was exceeded):

rm /path/*

or

rm $(find /path -type f)

This can be rewritten using the xargs command to break the list of arguments into sublists small enough to be acceptable:

find /path -type f -print | xargs rm

In the above example, the find utility feeds the input of xargs with a long list of file names. xargs then splits this list into sublists and calls rm once for every sublist.

xargs can also be used to parallelize operations with the -P maxprocs argument to specify how many parallel processes should be used to execute the commands over the input argument lists. However, the output streams may not be synchronized. This can be overcome by using an --output file argument where possible, and then combining the results after processing. The following example runs up to 24 processes at a time, launching a new one whenever one finishes:

find /path -name '*.foo' | xargs -P 24 -I '{}' /cpu/bound/process '{}' -o '{}'.out

xargs often covers the same functionality as the backquote (`) feature of many shells, but is more flexible and often also safer, especially if there are blanks or special characters in the input. It is a good companion for commands that output long lists of files such as find, locate and grep, but only if you use -0, since xargs without -0 deals badly with file names containing ', " and space. GNU Parallel is a similar tool that offers better compatibility with find, locate and grep when file names may contain ', ", and space (newline still requires -0).

Placement of arguments

-I option

The xargs command offers options to insert the listed arguments at some position other than the end of the command line. The -I option to xargs takes a string that will be replaced with the supplied input before the command is executed. A common choice is %.

$ mkdir ~/backups
$ find /path -type f -name '*~' -print0 | xargs -0 -I % cp -a % ~/backups

Shell trick

Another way to achieve a similar effect is to use a shell as the launched command, and deal with the complexity in that shell, for example:

$ mkdir ~/backups
$ find /path -type f -name '*~' -print0 | xargs -0 bash -c 'for filename; do cp -a "$filename" ~/backups; done' bash

The word bash at the end of the line is interpreted by bash -c as special parameter $0. If the word bash weren't present, the name of the first matched file would be assigned to $0 and the file wouldn't be copied to ~/backups.
Any word can be used instead of bash, but since $0 usually expands to the name of the shell or shell script being executed, bash is a good choice.

Separator problem

Many Unix utilities are line-oriented. These may work with xargs as long as the lines do not contain ', ", or a space. Some of the Unix utilities can use NUL as record separator (e.g. Perl (requires -0 and \0 instead of \n), locate (requires using -0), find (requires using -print0), grep (requires -z or -Z), sort (requires using -z)). Using -0 for xargs deals with the problem, but many Unix utilities cannot use NUL as separator (e.g. head, tail, ls, echo, sed, tar -v, wc, which). But often people forget this and assume xargs is also line-oriented, which is not the case (per default xargs separates on newlines and blanks within lines; substrings with blanks must be single- or double-quoted).

The separator problem is illustrated here:

touch important_file
touch 'not important_file'
find . -name not\* | tail -1 | xargs rm
mkdir -p '12" records'
find \! -name . -type d | tail -1 | xargs rmdir

Running the above will cause important_file to be removed but will remove neither the directory called 12" records, nor the file called not important_file.

The proper fix is to use the -print0 option, but tail (and other tools) do not support NUL-terminated strings:

touch important_file
touch 'not important_file'
find . -name not\* -print0 | xargs -0 rm
mkdir -p '12" records'
find \! -name . -type d -print0 | xargs -0 rmdir

When using the -print0 option, entries are separated by a null character instead of an end-of-line. This is equivalent to the more verbose command:

find . -name not\* | tr \\n \\0 | xargs -0 rm

or shorter, by switching xargs to line-oriented mode with the -d (delimiter) option:

find . -name not\* | xargs -d '\n' rm

but in general using the -0 option should be preferred, since newlines in filenames are still a problem.

GNU parallel is an alternative to xargs that is designed to have the same options, but be line-oriented. Thus, using GNU Parallel instead, the above would work as expected.[2]

For Unix environments where xargs does not support the -0 option (e.g. Solaris, AIX), the following can not be used as it does not deal with ' and " (GNU parallel would work on Solaris, though):

find . -name not\* | sed 's/ /\\ /g' | xargs rm

For Solaris, do not use these examples to fix file perms as they do not deal correctly with names such as 12" records (GNU parallel instead of xargs would work, though):

find . -type d -print | sed -e 's/^/"/' -e 's/$/"/' | xargs chmod 755
find . -type f -print | sed -e 's/^/"/' -e 's/$/"/' | xargs chmod 644

Operating on a subset of arguments at a time

One might be dealing with commands that can only accept one or maybe two arguments at a time. For example, the diff command operates on two files at a time. The -n option to xargs specifies how many arguments at a time to supply to the given command. The command will be invoked repeatedly until all input is exhausted. Note that on the last invocation one might get fewer than the desired number of arguments if there is insufficient input.

Use xargs to break up the input into two arguments per line:

$ echo {0..9} | xargs -n 2
0 1
2 3
4 5
6 7
8 9

In addition to running based on a specified number of arguments at a time, one can also invoke a command for each line of input with the -L 1 option. One can use an arbitrary number of lines at a time, but one is most common.
Here is how one might diff every git commit against its parent:[3]

$ git log --format="%H %P" | xargs -L 1 git diff

Encoding problem

The argument separator processing of xargs is not the only problem with using the xargs program in its default mode. Most Unix tools which are often used to manipulate filenames (for example sed, basename, sort, etc.) are text processing tools. However, Unix path names are not really text. Consider a path name /aaa/bbb/ccc. The /aaa directory and its bbb subdirectory can in general be created by different users with different environments. That means these users could have a different locale setup, and that means that aaa and bbb do not even necessarily have to have the same character encoding. For example, aaa could be in UTF-8 and bbb in Shift JIS. As a result, an absolute path name in a Unix system may not be correctly processable as text under a single character encoding. Tools which rely on their input being text may fail on such strings.

One workaround for this problem is to run such tools in the C locale, which essentially processes the bytes of the input as-is. However, this will change the behavior of the tools in ways the user may not expect (for example, some of the user's expectations about case-folding behavior may not be met).

https://en.wikipedia.org/wiki/Xargs
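Tying the -print0/-0 and -P options together (my example, not from the article): compress every .log file under a directory with up to four gzip processes in parallel, while staying safe against odd filenames:

find /var/tmp -name '*.log' -print0 | xargs -0 -n 1 -P 4 gzip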
  20. Pipes: A Brief Introduction

A pipe is a form of redirection that is used in Linux and other Unix-like operating systems to send the output of one program to another program for further processing. Redirection is the transferring of standard output to some other destination, such as another program, a file or a printer, instead of the display monitor (which is its default destination). Standard output, sometimes abbreviated stdout, is the destination of the output from command line (i.e., all-text mode) programs in Unix-like operating systems.

Pipes are used to create what can be visualized as a pipeline of commands, which is a temporary direct connection between two or more simple programs. This connection makes possible the performance of some highly specialized task that none of the constituent programs could perform by themselves. A command is merely an instruction provided by a user telling a computer to do something, such as launch a program. The command line programs that do the further processing are referred to as filters.

This direct connection between programs allows them to operate simultaneously and permits data to be transferred between them continuously rather than having to pass it through temporary text files or through the display screen and having to wait for one program to be completed before the next program begins.

History

Pipes rank alongside the hierarchical file system and regular expressions as one of the most powerful yet elegant features of Unix-like operating systems. The hierarchical file system is the organization of directories in a tree-like structure which has a single root directory (i.e., a directory that contains all other directories). Regular expressions are a pattern matching system that uses strings (i.e., sequences of characters) constructed according to pre-defined syntax rules to find desired patterns in text.

Pipes were first suggested by M. Doug McIlroy, when he was a department head in the Computing Science Research Center at Bell Labs, the research arm of AT&T (American Telephone and Telegraph Company), the former U.S. telecommunications monopoly. McIlroy had been working on macros since the latter part of the 1950s, and he was a ceaseless advocate of linking macros together as a more efficient alternative to series of discrete commands. A macro is a series of commands (or keyboard and mouse actions) that is performed automatically when a certain command is entered or key(s) pressed.

McIlroy's persistence led Ken Thompson, who developed the original UNIX at Bell Labs in 1969, to rewrite portions of his operating system in 1973 to include pipes. This implementation of pipes was not only extremely useful in itself, but it also made possible a central part of the Unix philosophy, the most basic concept of which is modularity (i.e., a whole that is created from independent, replaceable parts that work together efficiently).

Examples

A pipe is designated in commands by the vertical bar character, which is located on the same key as the backslash on U.S. keyboards. The general syntax for pipes is:

command_1 | command_2 [| command_3 . . . ]

This chain can continue for any number of commands or programs.

A very simple example of the benefits of piping is provided by the dmesg command, which repeats the startup messages that scroll through the console (i.e., the all-text, full-screen display) while Linux is booting (i.e., starting up).
dmesg by itself produces far too many lines of output to fit into a single screen; thus, its output scrolls down the screen at high speed and only the final screenful of messages is easily readable. However, by piping the output of dmesg to the filter less, the startup messages can conveniently be viewed one screenful at a time, i.e.,

dmesg | less

less allows the output of dmesg to be moved forward one screenful at a time by pressing the SPACE bar and back one screenful at a time by pressing the b key. The command can be terminated by pressing the q key. (The more command could have been used here instead of less; however, less is newer than more and has additional functions, including the ability to return to previous pages of the output.)

The same result could be achieved by first redirecting the output of dmesg to a temporary file and then displaying the contents of that file on the monitor. For example, the following set of two commands uses the output redirection operator (designated by a rightward facing angle bracket) to first send the output of dmesg to a text file called tempfile1 (which will be created by the output redirection operator if it does not already exist), and then it uses the input redirection operator (a leftward facing angle bracket) to read tempfile1 into less for display on the screen:

dmesg > tempfile1
less < tempfile1

However, redirection to a file as an intermediate step is clearly less efficient, both because two separate commands are required and because the second command must await the completion of the first command before it can begin.

The use of two pipes to chain three commands together could make the above example even more convenient for some situations. For example, the output of dmesg could first be piped to the sort filter to arrange it into alphabetic order before piping it to less:

dmesg | sort -f | less

The -f option tells sort to disregard case (i.e., whether letters are lower case or upper case) while sorting. Likewise, the output of the ls command (which is used to list the contents of a directory) is commonly piped to the less (or more) command to make the output easier to read, i.e.,

ls -al | less

or

ls -al | more

ls reports the contents of the current directory (i.e., the directory in which the user is currently working) in the absence of any arguments (i.e., input data in the form of the names of files or directories). The -l option tells ls to provide detailed information about each item, and the -a option tells ls to include all files, including hidden files (i.e., files that are normally not visible to users). Because ls returns its output in alphabetic order by default, it is not necessary to pipe its output to the sort command (unless it is desired to perform a different type of sorting, such as reverse sorting, in which case sort's -r option would be used).

This could just as easily be done for any other directory. For example, the following would list the contents of the /bin directory (which contains user commands) in a convenient paged format:

ls -al /bin | less

The following example employs a pipe to combine the ls and the wc (i.e., word count) commands in order to show how many filesystem objects (i.e., files, directories and links) are in the current directory:

ls | wc -l

ls lists each object, one per line, and this list is then piped to wc, which, when used with its -l option, counts the number of lines and writes the result to standard output (which, as usual, is by default the display screen).
The output from a pipeline of commands can be just as easily redirected to a file (where it is written to that file) or a printer (where it is printed on paper). In the case of the above example, the output could be redirected to a file named, for instance, count.txt:

ls | wc -l > count.txt

The output redirection operator will create count.txt if it does not exist or overwrite it if it already exists. (The file does not, of course, require the .txt extension, and it could have just as easily been named count, lines or anything else.)

The following is a slightly more complex example of combining a pipe with redirection to a file:

echo -e "orange \npeach \ncherry" | sort > fruit

The echo command tells the computer to send the text that follows it to standard output, and its -e option tells the computer to interpret each \n as the newline symbol (which is used to start a new line in the output). The pipe redirects the output from echo -e to the sort command, which arranges it alphabetically, after which it is redirected by the output redirection operator to the file fruit.

As a final example, and to further illustrate the great power and flexibility that pipes can provide, the following uses three pipes to search the contents of all of the files in the current directory and display the total number of lines in them that contain the string Linux but not the string UNIX:

cat * | grep "Linux" | grep -v "UNIX" | wc -l

In the first of the four segments of this pipeline, the cat command, which is used to read and concatenate (i.e., string together) the contents of files, concatenates the contents of all of the files in the current directory. The asterisk is a wildcard that represents all items in a specified directory, and in this case it serves as an argument to cat to represent all objects in the current directory. The first pipe sends the output of cat to the grep command, which is used to search text. The "Linux" argument tells grep to return only those lines that contain the string Linux. The second pipe sends these lines to another instance of grep, which, in turn, with its -v option, eliminates those lines that contain the string UNIX. Finally, the third pipe sends this output to wc -l, which counts the number of lines and writes the result to the display screen.

"Fake Pipes"

A notation similar to the pipes of Unix-like operating systems is used in Microsoft's MS-DOS operating system. However, the method of implementation is completely different. Sometimes the pipe-like mechanism used in MS-DOS is referred to as fake pipes because, instead of running two or more programs simultaneously and channeling the output data from one continuously to the next, MS-DOS uses a temporary buffer file (i.e., section of memory) that first accumulates the entire output from the first program and only then feeds its contents to the next program. This more closely resembles redirection through a file than it does the Unix concept of pipes. It takes more time because the second program cannot begin until the first has been completed, and it also consumes more system resources (i.e., memory and processor time). This approach could be particularly disadvantageous if the first command produces a very large amount of output and/or does not terminate.

http://www.linfo.org/pipes.html
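In the same spirit (my addition, not from the article), a classic pipeline that counts how often each word appears in a file, from most to least frequent:

tr -s ' ' '\n' < myfile | sort | uniq -c | sort -rn | head

Each stage is a filter: tr splits the words onto separate lines, sort groups duplicates together, uniq -c counts each run, sort -rn orders by count, and head keeps the top of the list.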
  21. Just as when you report spam manually, SpamCop requires the full header information from your email software. It also requires the unmodified body including HTML codes if any and/or MIME information. It depends on your email software. Here are instructions for some of the more popular programs:

Normal email software: Microsoft products, Mac OS X, Netscape, Mozilla and Thunderbird, Eudora, AOL, Pine, Lotus Notes (v.4.x and v.5.x), Lotus Notes (v.6.x), Pegasus Mail, WebTV, Claris Emailer, kmail (KDE Desktop), GNU/Emacs integrated email, Mail Warrior, Juno Version 4+, Mutt, The Bat!, Pronto mail (GTK/unix), StarOffice, Novell Groupwise, Blitzmail, Forté Agent, Ximian Evolution, Sylpheed

Web-based email software: Hotmail and Windows Live Hotmail, Yahoo Mail, Excite web-mail, Netscape Webmail, Blitzmail, Operamail, Lycos Mail (mailcity.com), Onebox.com, Outlook Web Access, Shawcable Webmail, MSN Premium, GMail

https://www.spamcop.net/fom-serve/cache/19.html
  22. https://www.sslshopper.com/ssl-checker.html
  23. PowerShell Scripts for day to day tasks!
Monday, January 16, 2012
Outlook 2010: forward emails after a specific time.
http://danfolkes.com/2011/02/18/outlook-rule-only-at-specific-times/

I wanted to create a forward so that helpdesk email gets forwarded to my phone on my personal email. Forwarding all helpdesk email would have been the easy way, but it gets 20+ emails every hour during work time. After work hours it's pretty much quiet. So for those times I wanted the rule.

Rule which needs to be created (quoted from the post below):

"The Rule that will check the email header* for UTC times. Make sure it's assigned to the FWD category. And then FWD it: Apply this rule after the message arrives with '2011 23:' or '2012 18:, 2012 19:, ... 2012 11:' in the message header and assigned to 'FWD' category forward it to 'email@example.com'"

Search for this string in the header (note that the midnight UTC hour is written 00, not 24):

2012 18:,2012 19:,2012 20:,2012 21:,2012 22:,2012 23:,2012 00:,2012 01:,2012 02:,2012 03:,2012 04:,2012 05:,2012 06:,2012 07:,2012 08:,2012 09:,2012 10:,2012 11:

https://usyse.blogspot.com/2012/01/outlook-2010-forward-emails-after.html
  24. Outlook Rule : Only at Specific Times

Let's say you want to have a rule in Outlook fire only between specific times of the day:

Only after 6pm and before 8am
Only on your lunch hour
When you are not at work

I will explain this by having emails forward to my cell phone only when I am normally not at the office (from 6pm-8am). This way, I will be able to receive important emails that may require special outside assistance.

What I do is:
Create a special category called FWD
Use other rules to set messages into the FWD category if I want them forwarded. (Explained below)
Then, create a rule to run last in the rules list called FWD Rule. *Important part* This will check the time on the messages; if it's within the specified hours, it will forward the email. (Explained below)

Creating a Rule to set the FWD Category:
Your rule description should look something like this. The important part is that it assigns the message to the FWD category:

Apply this rule after the message arrives with 'Emergency from client' in the subject and marked as 'high importance', assign it to the 'FWD' category

The Rule that will check the email header* for UTC times. Make sure it's assigned to the FWD category. And then FWD it:

Apply this rule after the message arrives with '2011 23:' or '2011 02:' or ... '2011 10:' in the message header and assigned to 'FWD' category forward it to 'email@example.com'

* This should work on most emails, but if you want to look at the email header, right-click on the message in the Inbox and select Message Options.
* I included the 2011 and the colon to make it more specific.

UTC Time for 6pm - 8am: the email header contained:

X-OriginalArrivalTime: 18 Feb 2011 03:23:52.0368 (UTC)

So I searched for:

2011 23:,2011 01:,2011 02:,2011 03:,2011 04:,2011 05:,2011 06:,2011 07:,2011 08:,2011 09:,2011 10:,2011 11:

(Note that this list omits '2011 00:' for the hour just after midnight UTC; you would want to include it to cover the whole window.)

http://dan.folkes.me/2011/02/18/outlook-rule-only-at-specific-times/