
NickTheGreek

Administrators
  • Content Count

    454
  • Joined

  • Last visited

  • Days Won

    76
  • Feedback

    N/A

Everything posted by NickTheGreek

  1. NickTheGreek

    IW10

CentOS 6.4 finally supports IW10 for TCP/IP tuning. Google and others have published several whitepapers with benchmarks showing that tuning TCP/IP on servers can really help web browsing performance. One of the important tweaks is known as IW10: it improves the initial TCP congestion window by reducing ACK round trips (initially sending 10 packets instead of only 3) and is enabled by increasing initcwnd and initrwnd.

    Unfortunately, those on CentOS (one of the most popular Linux production distributions) were not able to take advantage of this until this weekend, when CentOS 6.4 shipped. CentOS 6.3 could change initcwnd but NOT initrwnd (RWIN), which requires a kernel > 2.6.33. CentOS 6.4 runs 2.6.32-358.0.1.el6 (Red Hat 4.4.7-3), so the feature has been backported.

    Here's how to take advantage of IW10 on CentOS 6.4. This is obviously for power users with root access to their VPS or dedicated servers (it will NOT work on a Virtuozzo VZ VPS!). First, of course, do a yum update to get CentOS 6.4. Then take a look at your current settings via:

    ip route show

    You are looking for the last line, which will say something like:

    default via 4.3.2.1 dev eth0

    Then just take that line and add two parts to it:

    ip route change default via 4.3.2.1 dev eth0 initcwnd 10 initrwnd 10

    And bingo, you now have IW10 support. If something goes wrong, the command will just complain and not make the change. Note these changes aren't permanent; you will have to add the line to /etc/rc.local or similar so it happens on each reboot.

    There are other important things to tune in TCP/IP, but one of the most important is to disable "slow start after idle":

    sysctl -a | grep tcp_slow_start_after_idle
    sysctl -w net.ipv4.tcp_slow_start_after_idle=0

    It is possible to examine the performance improvement of this change at the packet level, but that requires Wireshark and counting ACKs, which is a bit tedious, so I will skip it for now.
Unfortunately, it might be YEARS before CentOS gets things like TCP Fast Open, which appeared only a few months ago in Linux kernels > 3.5, but I am still investigating whether it is possible on CentOS 6.4 to easily reduce the initial retransmission timeout from 3 seconds to 1 second, which is another recommended TCP/IP tuning improvement. https://ckon.wordpress.com/2013/03/11/centos-6-4-supports-iw10-tcpip-tuning/
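The steps above can be wrapped in a small helper; a minimal sketch (the route line and interface are illustrative, and on a live system the resulting `ip route change` must be run as root):

```shell
#!/bin/sh
# Sketch: build the IW10 "ip route change" command from the default route line.
# Assumes the "ip route show" line format "default via <gateway> dev <iface>".

build_iw10_cmd() {
  # $1: the default route line, e.g. "default via 4.3.2.1 dev eth0"
  printf 'ip route change %s initcwnd 10 initrwnd 10\n' "$1"
}

# On a live CentOS 6.4 box you might run (as root):
#   route_line=$(ip route show | grep '^default')
#   $(build_iw10_cmd "$route_line")
# and append the same command to /etc/rc.local so it survives reboots.
```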
  2. NickTheGreek

    IW10

So what finally happened with this? I never hear about it anymore.

    Increasing TCP's Initial Congestion Window

    Increasing the initial cwnd size on the server to the new RFC 6928 value of 10 segments (IW10) is one of the simplest ways to improve performance for all users and all applications running over TCP. And the good news is that many operating systems have already updated their latest kernels to use the increased value; check the appropriate documentation and release notes. For Linux, IW10 is the new default for all kernels above 2.6.39. However, don't stop there: upgrade to 3.2+ to also get the benefit of other important updates; see Proportional Rate Reduction for TCP. https://hpbn.co/building-blocks-of-tcp/
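Since the quoted text says IW10 became the Linux default above 2.6.39, a quick version comparison tells you whether a given kernel needs the manual tweak at all; a sketch relying on GNU `sort -V` (the version strings are illustrative):

```shell
# Succeeds if kernel version $1 is >= 2.6.39, i.e. IW10 is already the default
# per the text above. Uses GNU coreutils "sort -V" for version ordering.
iw10_is_default() {
  newest=$(printf '%s\n%s\n' "$1" 2.6.39 | sort -V | tail -n 1)
  [ "$newest" = "$1" ]
}

# Live check against the running kernel:
#   iw10_is_default "$(uname -r | cut -d- -f1)" && echo "IW10 is the default"
```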
  3. ReputationAuthority

    WatchGuard's ReputationAuthority helps protect business and government organizations from unwanted email and web traffic that contains spam, malware, spyware, malicious code, and phishing attacks. You can look up your IP address or domain, receive a reputation score from 0-100, and get the percentage of emails that were good versus bad.

    You Can Always Fix Your Reputation

    Another way to check your reputation is to find out if you are on any blacklists (a.k.a. blocklists). Blacklists contain lists of IPs or domains that pose a threat to consumer inboxes. Your email service provider may automatically alert you if you're added to one, but it's good to check for yourself. If you are on a blacklist, act quickly. Just a few spam complaints can add a legitimate sender to a blacklist.

    Blacklist Resources

    There are a lot of blacklists, but a good starting point is checking to see if your IPs or domains are on any of these popular lists:

    - Barracuda Reputation Block List: BRBL is a free DNS blacklist (DNSBL) of IP addresses known to send spam.
    - Invaluement: The Invaluement anti-spam DNSBL blocks elusive types of spam where the sender is sending unsolicited bulk email and escaping traditional detection methods.
    - MXToolBox: MXToolbox shows you whether or not your domain or IP address is blacklisted and can perform checks on your DNS to see how it is configured.
    - MultiRBL: This free multiple DNS blacklist service cross-references other blacklists by IPv4, IPv6, or domain.
    - Spamcop: The SpamCop Blocking List (SCBL) lists IP addresses that had mail reported as spam by SpamCop users.
    - Spamhaus: The Spamhaus Project maintains a number of DNSBLs as part of their effort to identify and track spam sources and provide anti-spam protection. To be removed from this list, visit their blocklist removal center.
    - SURBL: Unlike most lists, SURBLs are not lists of message senders. SURBLs are lists of websites that have appeared in unsolicited messages.
How Blacklists Know You're Sending Unwanted Mail

    I recently asked one of our email deliverability consultants, Luke Martinez, for some more context on how senders get added to blacklists and what you can do to protect your reputation. Here's what he had to say:

    "All blacklists have different ways of determining whether or not a sender should be listed. But almost all of them use some combination of spam traps and recipient feedback. Many blacklist operators manage large networks of spam traps (email addresses that are valid but have never signed up to receive mail, or addresses that have been inactive for an extended period of time and should no longer be receiving email). The blacklist operators will monitor these addresses and blacklist any IPs or domains that send excessive amounts of mail to them. Blacklist operators also work off of user feedback. A blacklist operator can trigger a listing if they receive an excessive amount of direct abuse complaints about mail coming from your IP or your domain. The key to avoiding blacklists is to make sure you are sending mail people want, removing non-engaged users from your mailing lists, not purchasing, renting, or sharing address lists, and using a confirmed opt-in strategy for collecting addresses."

    Staying on top of your reputation by regularly monitoring your presence on blacklists and analyzing your engagement metrics can help clear your path to the inbox. SendGrid knows how important your reputation is, so our technical account managers and delivery team help monitor blacklists on behalf of our customers, and our 24/7 global support team is always ready and willing to help with questions.

    https://sendgrid.com/blog/blacklisted-check-7-popular-blacklists-keep-reputation-intact/
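Most of the DNSBLs listed above can be queried straight from the command line by reversing the IP's octets and appending the list's zone; a minimal sketch (the zone and IP are illustrative, and the live `dig` needs network access):

```shell
# Reverse an IPv4 address's octets, as DNSBL lookups require
# (e.g. 203.0.113.7 becomes 7.113.0.203).
reverse_ip() {
  echo "$1" | awk -F. '{ print $4 "." $3 "." $2 "." $1 }'
}

# Live lookup against a DNSBL zone (zen.spamhaus.org as an example):
#   dig +short "$(reverse_ip 203.0.113.7).zen.spamhaus.org"
# An answer in 127.0.0.0/8 means the IP is listed; no answer means it is not.
```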
  4. Complaint Feedback Loop program

    Yahoo Mail offers a free Complaint Feedback Loop program to help email senders minimize complaint rates. If you participate, Yahoo Mail forwards complaints from our users about emails sent from your organization.

    Program goals

    This program aims to help senders:
    - Understand what Yahoo Mail users think of their messages.
    - Take corrective action to minimize the complaints generated by their mailings.
    - Improve their sending reputation.

    Participation eligibility

    To be eligible, email senders must:
    - Sign their outbound emails with DomainKeys and/or DomainKeys Identified Mail (DKIM).
    - Have a Yahoo ID. Learn how to sign up for a Yahoo Account.

    Sign up or update your CFL

    Fill out a new CFL application to enroll in the program, or to update an existing CFL. When you sign up for or update an existing Complaint Feedback Loop, you'll need to enter a verification code. This code can only be sent to: postmaster@(your domain). This verification process is necessary to activate any domain for the feedback loop.

    Turn off CFL reports

    You can turn off the service if you no longer want to receive reports. Fill out the CFL application and under "Request type," select Delete. Once your submission is processed, you'll stop receiving them.

    https://help.yahoo.com/kb/yahoo-complaint-feedback-loop-program-sln3438.html
  5. Postmaster Tools

    If you send a large volume of emails to Gmail users, you can use Postmaster Tools to see:
    - If users are marking your emails as spam
    - Whether you're following Gmail's best practices
    - Why your emails might not be delivered
    - If your emails are being sent securely

    https://support.google.com/mail/answer/6227174
    https://postmaster.google.com
  6. Outlook.com Smart Network Data Services

    Deliverability to Outlook.com is based on your reputation. The Outlook.com Smart Network Data Services (SNDS) gives you the data you need to understand and improve your reputation at Outlook.com. But just looking at the data isn't enough! Maintaining a good reputation is a lot of work. You should use this data to keep your mailing lists clean and to monitor the IPs you control for unusual behavior. Reputation is always the responsibility of the sender. SNDS gives senders access to detailed data about individual IPs, and it also includes our Junk Email Reporting Program, which lets you receive reports when users junk your messages. Now you can view IP data and manage feedback loop settings from one convenient website.

    Getting started

    To access SNDS, please log in with a Microsoft Account and then request access to the IPs for which you are responsible. You'll be taken through a simple authorization process, and then you'll soon have access to a wealth of information about those IPs.

    Help! I have a problem sending mail to Outlook.com

    Building and maintaining a good reputation is a long-term proposition. The data on this site can help you do that, but if you have an urgent deliverability issue, please have the person most familiar with the issue and your email infrastructure contact sender support.

    Other benefits

    SNDS is useful for far more than just monitoring email reputation. It can help IP owners detect compromised servers, malware, viruses, and botnets. We help network administrators detect these problems so that they can clean them up and make the internet a safer place.

    https://postmaster.live.com/snds
  7. We always stress the importance of a strong sending reputation. By keeping an eye on your engagement metrics (opens, clicks, etc.) and reputation metrics (spam complaints, spam traps, unknown users, etc.), you'll get a good picture of how your emails are being received by subscribers. But if you're looking for another measure of your reputation, you can take advantage of a handful of resources that will let you know where you stand. Here are 5 sites that will help you check your sending reputation and keep you on track:

    SenderScore.org

    Like a credit score, a Sender Score is a measure of your reputation. Scores are calculated from 0 to 100. The higher your score, the better your reputation and the higher your email deliverability rate. Numbers are calculated on a rolling 30-day average and illustrate where your IP address ranks against other IP addresses. This service is provided by Return Path.

    Senderbase.org

    Senderbase is a product of Cisco and provides you with the tools to check your reputation by ranking you as Good, Neutral, or Poor. Good means there is little or no threat activity. Neutral means your IP address or domain is within acceptable parameters, but may still be filtered or blocked. Poor means there is a problematic level of threat activity and you are likely to be filtered or blocked.

    ReputationAuthority

    WatchGuard's ReputationAuthority helps protect business and government organizations from unwanted email and web traffic that contains spam, malware, spyware, malicious code, and phishing attacks. You can look up your IP address or domain, receive a reputation score from 0-100, and get the percentage of emails that were good versus bad. IP lookup for ReputationAuthority.

    BarracudaCentral

    Barracuda Networks provides both an IP and domain reputation lookup via their Barracuda Reputation System, a real-time database of IP addresses with "poor" or "good" reputations.

    TrustedSource

    TrustedSource is a site very similar to senderbase.org, but run by McAfee. It provides information on both your domain's email and web reputations as well as affiliations, domain name system (DNS), and mail server information. It also provides details on the history, activation, and associations of your domain.

    Takeaways

    Your email reputation is always in your control. By taking advantage of these resources, you can ensure that you're being proactive about your email deliverability. But our delivery team likes to emphasize that while these reputation monitoring systems are a great help, they are not the authoritative metric by which you should determine the health of your entire email program. They are best used as one data point (an important one) in an overarching, holistic assessment of your email program. It is not uncommon to see senders with Sender Scores around 98-99 that still have inboxing problems, so it's important to know that just because you get a high score on one of these sites, it doesn't mean you won't be getting throttled, sent to the spam folder, or blocked outright. So the more eyes and ears you can have on your IP and domain reputation, the better.

    https://sendgrid.com/blog/5-ways-check-sending-reputation/
  8. Changing DNS records can result in your website being unreachable for a while. This article explains how you can minimize downtime while changing domain name records.

    NS records

    When changing nameserver records, first ensure that your new nameserver(s) define the same records as your old nameservers. That is to say, your new nameservers must be in a ready-to-use state. Now you can change your NS records so that they point to the new nameserver(s). But pay attention to the fact that the NS records on your parent DNS servers are usually cached for 48 hours. Thus you should keep your old nameservers online for at least 48 hours after making the changes to your NS records.

    Other records

    For A records, MX records, PTR records, and the like, there is a nice way to update a record without serving inconsistent data. What I mean by "inconsistent" is the following scenario: suppose you have an A record for www.dnswatch.info pointing to the IP address 193.111.199.111 with a "Time To Live" (TTL) value of 3600 (1 hour), and let's further assume that you now want to update this A record so that it points to the IP address 193.111.199.214. If you just changed the record now, DNS resolvers all over the world that don't have the old data cached would instantly see the new IP address (193.111.199.214). But DNS resolvers that do have this record cached (e.g. a resolver that already queried your nameserver 8 minutes ago) would still see the old IP address (193.111.199.111). A resolver that queried your nameserver 8 minutes ago would see the old data for the next 52 minutes, because the TTL of 1 hour means the record may be cached for that long. If, for example, there were webservers behind those IPs, some browsers would now be accessing your old webserver (on the old IP) and some would request data from your new webserver (on the new IP).
The easy solution to this inconsistent state is as follows: first reduce the TTL of the record that you want to change to a minimal value, e.g. 30 seconds. Then wait "old TTL value" seconds; in our example we would have to wait 1 hour after reducing the TTL to 30, because the old TTL was 1 hour. After that period you can change your data, or you can first reduce the TTL even further, to 5 seconds, wait another 30 seconds, and then do the actual update to the record. This results in your DNS data being inconsistent for only 5 seconds instead of an hour, as in the initial example. Don't forget, however, to increase the TTL again after changing the record and verifying that your change was successful. If you leave the TTL at 5 seconds, your DNS servers could get overwhelmed by lookup requests. In addition, a DNS lookup can take quite some time (sometimes even half a second), so end users would need many coffee breaks.

    written by Can Özdemir, September 30th, 2005
    https://www.dnswatch.info/articles/dns-update
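Before starting the TTL-lowering procedure above, you need to know the record's current TTL, and afterwards you will want to watch it drop; a small sketch for pulling it out of `dig` output (the record values are illustrative):

```shell
# Extract the TTL field from a dig answer line, whose format is
# "name TTL class type rdata".
record_ttl() {
  echo "$1" | awk '{ print $2 }'
}

# Live (requires dig and network access):
#   record_ttl "$(dig +noall +answer www.example.com A | head -n 1)"
# Watch this fall to your lowered value before making the actual change.
```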
  9. Last week, WordPress patched three security flaws, but just yesterday the company disclosed a nasty, until-then-secret zero-day vulnerability that let remote unauthorized hackers modify the content of any post or page within a WordPress site. The nasty bug resides in the WordPress REST API and gives rise to two vulnerabilities: remote privilege escalation and content injection. WordPress is the world's most popular content management system (CMS), used on millions of websites. The CMS added the REST API and enabled it by default in WordPress 4.7.0.

    Flaw Lets Unauthorized Hackers Redirect Visitors to Malicious Exploits

    The vulnerability is easy to exploit and affects versions 4.7 and 4.7.1 of the WordPress content management system, allowing an unauthenticated attacker to modify all pages on unpatched sites, redirect visitors to malicious exploits, and launch a large number of attacks. The vulnerability was discovered and reported by Marc-Alexandre Montpas of Sucuri to the WordPress security team, who handled the matter very well by releasing a patch but not disclosing details about the flaw, in an effort to keep hackers from exploiting the bug before millions of websites could apply the patch. "This privilege escalation vulnerability affects the WordPress REST API," Montpas writes in a blog post. "One of these REST endpoints allows access (via the API) to view, edit, delete and create posts. Within this particular endpoint, a subtle bug allows visitors to edit any post on the site."

    Why WordPress Delayed the Vulnerability Disclosure

    The issue was discovered on January 22nd, patched on January 26th, and the fix was made available in release 4.7.2 to websites using the popular CMS. Sucuri, security providers, and hosts worked closely with the WordPress security team for over a week to install the patch, ensuring that the issue was dealt with in short order before it became public.
The company also tipped off security companies, including SiteLock, Cloudflare, and Incapsula, during the nine days between the patch and the public disclosure. Here's what WordPress core contributor Aaron Campbell says about the delay in the vulnerability disclosure: "We believe transparency is in the public's best interest...[and]... in this case, we intentionally delayed disclosing the issue by one week to ensure the safety of millions of additional WordPress sites." "Data from all four WAFs and WordPress hosts showed no indication that the vulnerability had been exploited in the wild. As a result, we made the decision to delay disclosure of this particular issue to give time for automatic updates to run and ensure as many users as possible were protected before the issue was made public."

    Patch Your CMS Now!

    The flaw has been rated critical, although the fix was automatically deployed on millions of WordPress installations within a few hours of the patch being issued. For a more technical explanation of the vulnerability, you can head over to Sucuri's official blog post. WordPress admins who have not yet applied the patch against this nasty vulnerability are strongly advised to update their CMS to WordPress version 4.7.2. http://thehackernews.com/2017/02/wordpress-exploit-patch.html
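To check whether a given site has picked up the fix, you can look at the version WordPress reports in its generator meta tag; a sketch (default themes emit this tag, but hardened sites often strip it, and the HTML and URL here are illustrative):

```shell
# Pull the WordPress version out of a page's generator meta tag, if present.
wp_version() {
  echo "$1" | grep -o 'WordPress [0-9][0-9.]*' | head -n 1 | awk '{ print $2 }'
}

# Live (requires curl and network access):
#   wp_version "$(curl -s https://example.com/)"
# Anything on the 4.7 branch below 4.7.2 is exposed to this REST API bug.
```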
  10. Slashdot has a link to a tribute video to a Sun server that was up continuously for 3737 days. That's 10.23 years. It's like a sequoia tree seeing the passage of civilization around it. My thoughts on this:

    The data center and infrastructure powering this machine were built in such a way as to keep it powered continuously for 10 years. Whoever built and ran that infrastructure was doing a good job. It's a generalization, but I bet there are very few cloud providers that can boast anything like that.

    That version of Sun Solaris is reliable enough to keep operating for years without disruption. Most OSes are, by the way, even Microsoft Windows.

    That particular hardware is reliable enough to keep operating for years. Factors that influence this include enough hot-serviceable redundancy built in, a stable environment for the server to run in, clean power, etc.

    Given that something like 85% of downtime is caused by human error, the admins of this host were competent enough to operate it without disrupting service, or didn't touch it much.

    The workloads for this host were sized appropriately for 10 years, and any errors in this regard were resolvable without a restart.

    This host probably has 10+ years of security holes on it. I'm not super familiar with patching Solaris hosts but, generally, unless you restart the software running on a host it doesn't pick up library updates. So even if the host is patched, you'd have to restart everything running on it to guarantee that library security updates take effect. Possible, just not likely. The kernel itself likely has not been patched, unless there is a mechanism to load new code in on the fly (like Ksplice on Linux). The comments on the video indicate there hasn't been much patching.

    We cannot infer anything about service availability from the little we know about this system design.
There are many services that do not require high availability or continuous uptime and a vendor warranty with a certain level of response might be just fine. We can speculate that the service contract on this hardware is probably expensive, though the particular economics of replacing the system vs. maintaining a service contract are unknown to us. The people who built this system may be gone, retired, perhaps even dead. Hopefully the builders left good documentation about it so that current admins understood its role and configuration. To me, security is the biggest problem here, because patching is a big part of defense-in-depth. Firewalls are neat but you have to punch holes in the firewall to let people use applications, right? If an application running on a host like this gets compromised it may be very easy for the attacker to compromise the rest of the system by exploiting 10+ years of kernel vulnerabilities. Game over. In the face of threats like APT1 where attackers are coming from inside your network, or even just a firewall rule misconfiguration that isn’t caught, the kernel & system software is effectively the last good line of defense on most systems. It limits a compromise to the application and not the whole host and prevents the attackers from establishing a beachhead inside your security perimeter where they can compromise other hosts from the inside. As such, it is very important for system software to stay current. Especially in an era of virtualization, where physical hardware issues are lessened and worked around with live migration and fault tolerance features. Seeing OSes outlive their vendor support is unfortunately becoming pretty common, as hardware lifespans just don’t provide a natural OS upgrade point anymore. These guys seem pretty aware that this wasn’t an ideal situation, and I’m not picking on them. In fact, I rather enjoyed the video, because how often do you really see something like this? 
This is just my occasional opportunity to reiterate that this isn’t how we, as system administrators and IT staff, should be regularly doing business. We should not be encouraging our customers and employers to do business this way, either. High-uptime systems like this become serious liabilities in so many ways, from security to lack of understanding and documentation, that when we discover them we should do what these guys did: shut it down. https://lonesysadmin.net/2013/03/14/uptime-is-not-something-to-be-revered/
  11. In previous posts on vm.swappiness and using RAM disks, we talked about how the memory on a Linux guest is used for the OS itself (the kernel, buffers, etc.), for applications, and also for file cache. File caching is an important performance improvement, and read caching is a clear win in most cases, balanced against applications using the RAM directly. Write caching is trickier. The Linux kernel stages disk writes in cache and, over time, asynchronously flushes them to disk. This speeds up disk I/O, but it is risky: when data isn't written to disk, there is an increased chance of losing it. There is also the chance that a lot of I/O will overwhelm the cache. Ever written a lot of data to disk all at once and seen large pauses on the system while it tries to deal with it all? Those pauses happen when the cache decides there's too much data to be written asynchronously (as a non-blocking background operation, letting the application process continue) and switches to writing synchronously (blocking and making the process wait until the I/O is committed to disk). Of course, a filesystem also has to preserve write order, so when it starts writing synchronously it first has to destage the cache. Hence the long pause.

    The nice thing is that these are controllable options, and based on your workloads and data you can decide how you want to set them up. Let's take a look:

    $ sysctl -a | grep dirty
    vm.dirty_background_ratio = 10
    vm.dirty_background_bytes = 0
    vm.dirty_ratio = 20
    vm.dirty_bytes = 0
    vm.dirty_writeback_centisecs = 500
    vm.dirty_expire_centisecs = 3000

    vm.dirty_background_ratio is the percentage of system memory that can be filled with "dirty" pages (memory pages that still need to be written to disk) before the pdflush/flush/kdmflush background processes kick in to write it to disk. My example is 10%, so if my virtual server has 32 GB of memory, that's 3.2 GB of data that can be sitting in RAM before something is done.
vm.dirty_ratio is the absolute maximum amount of system memory that can be filled with dirty pages before everything must get committed to disk. When the system gets to this point, all new I/O blocks until dirty pages have been written to disk. This is often the source of long I/O pauses, but it is a safeguard against too much data being cached unsafely in memory.

    vm.dirty_background_bytes and vm.dirty_bytes are another way to specify these parameters. If you set the _bytes version, the _ratio version becomes 0, and vice versa.

    vm.dirty_expire_centisecs is how long something can be in cache before it needs to be written. In this case it's 30 seconds. When the pdflush/flush/kdmflush processes kick in, they check how old a dirty page is, and if it's older than this value it's written asynchronously to disk. Since holding a dirty page in memory is unsafe, this is also a safeguard against data loss.

    vm.dirty_writeback_centisecs is how often the pdflush/flush/kdmflush processes wake up and check whether work needs to be done.

    You can also see statistics on the page cache in /proc/vmstat:

    $ cat /proc/vmstat | egrep "dirty|writeback"
    nr_dirty 878
    nr_writeback 0
    nr_writeback_temp 0

    In my case I have 878 dirty pages waiting to be written to disk.

    Approach 1: Decreasing the Cache

    As with most things in the computer world, how you adjust these depends on what you're trying to do. In many cases we have fast disk subsystems with their own big, battery-backed NVRAM caches, so keeping things in the OS page cache is risky. Let's try to send I/O to the array in a more timely fashion and reduce the chance our local OS will, to borrow a phrase from the service industry, be "in the weeds." To do this we lower vm.dirty_background_ratio and vm.dirty_ratio by adding the new values to /etc/sysctl.conf and reloading with "sysctl -p":

    vm.dirty_background_ratio = 5
    vm.dirty_ratio = 10

    This is a typical approach on virtual machines, as well as Linux-based hypervisors.
I wouldn't suggest setting these parameters to zero, as some background I/O is nice for decoupling application performance from short periods of higher latency on your disk array and SAN ("spikes").

    Approach 2: Increasing the Cache

    There are scenarios where raising the cache dramatically has positive effects on performance. These are situations where the data contained on a Linux guest isn't critical and can be lost, and usually where an application is writing to the same files repeatedly or in repeatable bursts. In theory, by allowing more dirty pages to exist in memory you'll rewrite the same blocks over and over in cache and just need one write every so often to the actual disk. To do this we raise the parameters:

    vm.dirty_background_ratio = 50
    vm.dirty_ratio = 80

    Sometimes folks also increase the vm.dirty_expire_centisecs parameter to allow more time in cache. Beyond the increased risk of data loss, you also run the risk of long I/O pauses if that cache gets full and needs to destage, because on large VMs there will be a lot of data in cache.

    Approach 3: Both Ways

    There are also scenarios where a system has to deal with infrequent, bursty traffic to slow disk (batch jobs at the top of the hour, at midnight, writing to an SD card on a Raspberry Pi, etc.). In that case an approach might be to allow all that write I/O to be deposited in the cache so that the background flush operations can deal with it asynchronously over time:

    vm.dirty_background_ratio = 5
    vm.dirty_ratio = 80

    Here the background processes start writing right away when the cache hits the 5% ceiling, but the system won't force synchronous I/O until it gets to 80% full. From there you just size your system RAM and vm.dirty_ratio to be able to consume all the written data. Again, there are tradeoffs with data consistency on disk, which translates into risk to data. Buy a UPS and make sure you can destage the cache before the UPS runs out of power.
No matter the route you choose you should always be gathering hard data to support your changes and help you determine if you are improving things or making them worse. In this case you can get data from many different places, including the application itself, /proc/vmstat, /proc/meminfo, iostat, vmstat, and many of the things in /proc/sys/vm. Good luck! https://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/
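To gather that hard data, you can watch the dirty-page counters directly while you exercise a workload; a minimal sketch (the sample /proc/vmstat lines are illustrative):

```shell
# Print the nr_dirty and nr_writeback counters from /proc/vmstat content.
dirty_counters() {
  echo "$1" | awk '$1 == "nr_dirty" || $1 == "nr_writeback" { print $1 "=" $2 }'
}

# Live, refreshing each second while you test a workload:
#   watch -n1 'grep -E "^nr_(dirty|writeback) " /proc/vmstat'
```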
  12. How do I change the I/O scheduler for a particular hard disk without rebooting my Linux server?

    - CFQ (cfq, Completely Fair Queuing) is an I/O scheduler for the Linux kernel and the default in many Linux distributions.
    - The noop scheduler (noop) is the simplest I/O scheduler for the Linux kernel, based on the FIFO queue concept.
    - The anticipatory scheduler (anticipatory) is an algorithm for scheduling hard disk input/output; it is an older scheduler that has been replaced by CFQ.
    - The deadline scheduler (deadline) attempts to guarantee a start service time for each request.

    Task: View the current disk scheduler

    Assuming that your disk is named /dev/sda, type:

    # cat /sys/block/{DEVICE-NAME}/queue/scheduler
    # cat /sys/block/sda/queue/scheduler

    Sample output:

    noop anticipatory deadline [cfq]

    Task: Set the I/O scheduler for a hard disk

    To set a specific scheduler, simply type the command as follows:

    # echo {SCHEDULER-NAME} > /sys/block/{DEVICE-NAME}/queue/scheduler

    For example, to set the noop scheduler, enter:

    # echo noop > /sys/block/hda/queue/scheduler
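The active scheduler is the bracketed entry in that sysfs file; a sketch for picking it out programmatically (the sample line is illustrative, and the live commands need root):

```shell
# Return the active scheduler, i.e. the bracketed name in
# /sys/block/<dev>/queue/scheduler.
active_scheduler() {
  echo "$1" | grep -o '\[[a-z]*\]' | tr -d '[]'
}

# Live: active_scheduler "$(cat /sys/block/sda/queue/scheduler)"
# To change it without a reboot (as root):
#   echo noop > /sys/block/sda/queue/scheduler
# To persist, add that echo to /etc/rc.local or boot with elevator=noop.
```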
Well, in practice it does update, but you don't see what it brings in, what it changes, and so on. In older versions of IPB the procedure was admittedly manual, but you had much more information at every stage of the process. And if there was a problem somewhere, you got an error; here I don't even know what you would see in a comparable situation.
  14. If you are a skilled blogger or article/tutorial author with vast expertise in Hosting, Development, or System Management, please contact me via PM or email at info [ a t ] designhost [ dot ] gr. We would like to recruit such talent for our own forum and community content development. Management
  15. We need developers and system admins to work on new development and on upgrading the existing CWP components. We are also ready to pay for any suggestions and solutions that make CWP better.
    Needed:
    - PHP and JavaScript developers
    - Linux system admins
    Tasks:
    - Improve the existing PHP modules of CWP
    - Improve the installer and bash scripts
    - Improve the existing design
    - Add new features and scripts to CWP
    ...and many others
    Your assistance will help us be even better, and we are ready to pay you for it. For any info, contact us via the following link: http://centos-webpanel.com/contact
    To unsubscribe from these announcements, log in to the forum and uncheck "Receive forum announcements and important notifications by email." in your profile. You can view the full announcement by following this link: http://forum.centos-webpanel.com/informations/developers-needed/
    Regards, The CentOS Web Panel Team.
  16. I just don't like that the process isn't all that transparent.
  17. All fine; the upgrade went smoothly and downtime was minor.
  18. The funny thing is that the Google+ login handling had already been fixed since this morning, even before this upgrade; I also spoke with their support. Nonetheless, we are now going offline and upgrading as necessary.
  19. Thank you very much, great work.
  20. Hello alexitiris,

    Welcome to designhost.gr.

    Feel free to browse our community, access all sorts of information, and get to know our members.

    Do not hesitate to ask anything in our forums.

    designhost.gr

  21. The Google+ API login is ready as well.
  22. Little tip: also take a look at /home/user/public_html — if the PHP handler requires the nobody group, adjust it accordingly. The same goes for permissions; for public_html in particular, 750 is better. https://documentation.cpanel.net/display/ALD/Configure+PHP+and+suEXEC https://forums.cpanel.net/threads/switching-to-suphp.160421/ The following is from an offline website; I pulled it from Google Cache:
    SysAdmin Tip - recursive chown can open your system for exploit
    03 Jun 2010
    A scenario system admins often encounter on shared hosting servers is the permission and ownership issue caused by using mod_php (otherwise known as PHP DSO). Files and directories created by mod_php will be owned by the user that Apache runs as, usually nobody or www-data. This presents a problem in at least two scenarios:
    - The user needs to perform some kind of management on the files; or
    - The sysadmin wants to convert to a more secure method of serving PHP content, usually mod_suphp, where such an ownership mismatch will cause 500 Internal Server errors.
    One method often espoused when discussing how to address the ownership issue is to merely execute a recursive chown of the user's home directory, like so:
    root@localhost # chown -R user:user /home/user
    The recursive chown can open your system to exploitation. Let me demonstrate with a simple example.
    Simple Example
    Note: I strongly urge you not to perform this test on a production system. Use a throwaway system, such as a VPS test system.
    Note: the following will not work if the home directory for the user, or /etc, is on a separate file system. Hard links may not cross device boundaries.
[root@squash ~]# ls -l /etc/shadow
-rw------- 1 root root 905 Jun 3 07:30 /etc/shadow
[whoanel@squash ~]$ ln /etc/shadow public_html/favorite_book_list.txt
[whoanel@squash ~]$ ls -l public_html/favorite_book_list.txt
-rw------- 2 root root 905 Jun 3 07:30 public_html/favorite_book_list.txt
[root@squash ~]# chown -R whoanel:whoanel /home/whoanel
[root@squash ~]# ls -l /etc/shadow
-rw------- 2 whoanel whoanel 905 Jun 3 07:30 /etc/shadow
    The point to derive from the above is that the recursive chown will grant ownership of hard-linked files to the malicious user. As noted before the example, the hard-link vector will only work on files existing on the same partition/file system as the malicious account. Rather than targeting /etc/shadow, such a user may try to link to other sensitive files, such as .my.cnf, wp-config.php and the like.
    For this attack to work properly, the malicious user must accompany it with a social engineering vector. One can imagine a support request like: "Hey, I'm trying to manage some files in my account but I keep getting errors. Could you help me?" By sprinkling files throughout the directory via mod_php (thus increasing the cost of examining each file before changing ownership), the malicious user can mask his intent.
    "Can't I simply block the ability to create hard links?" If the user can run code at all on the system, then he can create a hard link.
    The point I want to leave you with is this: don't blindly perform a recursive chown. Examine and perform a specific chown.
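One way to sketch the "specific chown" the author recommends is to use find's -links test so that any regular file with more than one link (a potential planted hard link) is skipped. The scratch directory and current-user chown below are stand-ins so this can be tried unprivileged; on a real server you would target the account's home directory and its username instead.

```shell
#!/bin/sh
# Demonstrate a safer recursive ownership fix: only chown directories and
# regular files whose link count is 1, so a planted hard link (like the
# /etc/shadow trick above) is never touched.
me=$(id -un)
home=$(mktemp -d)                       # stand-in for /home/user
touch "$home/normal.txt"                # ordinary file: link count 1
touch "$home/secret"
ln "$home/secret" "$home/planted.txt"   # hard link: link count 2
# chown only directories and single-link regular files
find "$home" \( -type d -o \( -type f -links 1 \) \) \
    -exec chown "$me" {} +
# multi-link files were excluded by "-links 1"; this lists only normal.txt
find "$home" -type f -links 1 -print
```

This is not a complete audit (a careful admin should still eyeball what mod_php left behind), but it closes the hard-link vector the quoted post describes.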
  23. Welcome to designhost.gr.

    Feel free to browse our community, where you can find all sorts of information and get to know the other members.

    Do not hesitate to ask us about any question or clarification you may have.

  24. Based on this logo, though, it would be interesting to see what other ideas you have. Even without basing them on it, for that matter. If you have the time and mood for it ...