
NickTheGreek

Administrators

Everything posted by NickTheGreek

  1. Yes, in that same section anyone is always free to express their own personal opinion. Obviously, for example, my experiences as a customer do not necessarily amount to a reliable review of a company, even if my motives are sincere and disinterested.
  2. In such cases, though, it is important to have a fairly long-term relationship with the provider, and in particular experience with situations where their support was actually needed.
  3. Introduction

     If you've never heard of HHVM, it's an open-source virtual machine designed for executing programs written in Hack and PHP. For performance reasons it uses just-in-time compilation. Like other similar projects, HHVM executes in two phases: first it compiles PHP and Hack into an intermediate bytecode, then that bytecode is translated into AMD64 machine code at runtime by a JIT (just-in-time) compiler. This tutorial demonstrates how to install WordPress with MariaDB, Nginx and, of course, HHVM on Ubuntu 16.04.

     Prerequisites

     As stated on the official page, HHVM supports only 64-bit architectures, so you need Ubuntu 16.04 Server 64-bit.

     Install Nginx

     First, we install Nginx, which is available in the Ubuntu repositories. Execute the following command:

       # apt install nginx

     The installation process is very quick. When it is complete, start Nginx:

       # systemctl start nginx

     Install and configure MariaDB

     MariaDB is also available in the repository, so just use apt:

       # apt-get install mariadb-client mariadb-server

     MariaDB is a MySQL fork, and it uses that name for its systemd service:

       # systemctl start mysql

     Set a MariaDB root password to secure your database:

       # mysql_secure_installation

     You will be asked for the following configuration parameters:

       Enter current password for root (enter for none): PRESS ENTER
       Set root password? [Y/n] Y
       ENTER YOUR PASSWORD
       Remove anonymous users? [Y/n] Y
       Disallow root login remotely? [Y/n] Y
       Remove test database and access to it? [Y/n] Y
       Reload privilege tables now? [Y/n] Y

     Once that step is complete you can access the MariaDB database with your password:

       $ mysql -u root -p

     Use the MariaDB prompt to create a new database for WordPress. In this tutorial we use mywordpressdb as the database name and wordpressuser as the username for the WP installation.
     So our code looks like this:

       mysql> CREATE DATABASE mywordpressdb;
       mysql> CREATE USER wordpressuser@localhost IDENTIFIED BY 'my_strong_password';
       mysql> GRANT ALL PRIVILEGES ON mywordpressdb.* TO wordpressuser@localhost IDENTIFIED BY 'my_strong_password';

     Next, you can flush privileges and exit:

       mysql> FLUSH PRIVILEGES;
       mysql> EXIT;

     Install HHVM

     HHVM is not available in the Ubuntu repository, so it's first necessary to add an external one. This requires adding a new entry under /etc/apt/sources.list.d and updating the repos. Just execute the following commands:

       $ wget -O - http://dl.hhvm.com/conf/hhvm.gpg.key | sudo apt-key add -
       $ echo deb http://dl.hhvm.com/ubuntu xenial main | sudo tee /etc/apt/sources.list.d/hhvm.list
       # apt update

     Now, install HHVM with apt:

       # apt install -y hhvm

     Configure and test HHVM

     After installation, /usr/share/hhvm contains a script for configuring the Nginx web server to use HHVM. Just execute the following:

       # /usr/share/hhvm/install_fastcgi.sh

     This is a quick process, at the end of which you can start HHVM:

       # systemctl start hhvm.service

     If you need to run web scripts and want HHVM to start at boot, execute the command:

       # update-rc.d hhvm defaults

     You can decide to use HHVM for /usr/bin/php even if you have a php-cli already installed:

       # /usr/bin/update-alternatives --install /usr/bin/php php /usr/bin/hhvm 60

     Next, you can test HHVM in different ways (for example, you can call it on existing PHP scripts in your filesystem, just like php-cli). In this case, we use it on the web server. Create a file called info.php in /var/www/html containing the following lines:

       <?php
       phpinfo();
       ?>

     Edit the Nginx virtual host file, adding index.php to it:

       # $EDITOR /etc/nginx/sites-available/default

     Here:

       index index.php index.html index.htm index.nginx-debian.html;

     After saving, exit and test. With your browser, go to http://localhost/info.php. You should see HHVM at the top of the page: this means it is working as expected.
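     The database and user creation above can also be scripted non-interactively. A minimal sketch, using the tutorial's example names and password (substitute your own); it only builds and prints the SQL, with the live application step shown as a comment:

```shell
# Example values from the tutorial -- replace with your own.
DB=mywordpressdb
DBUSER=wordpressuser
DBPASS='my_strong_password'

SQL="CREATE DATABASE ${DB};
CREATE USER ${DBUSER}@localhost IDENTIFIED BY '${DBPASS}';
GRANT ALL PRIVILEGES ON ${DB}.* TO ${DBUSER}@localhost;
FLUSH PRIVILEGES;"

# Show the statements that would be run.
printf '%s\n' "$SQL"
# To apply them for real: printf '%s\n' "$SQL" | mysql -u root -p
```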
     Install WordPress

     Now you must install WordPress. This is quite easy, just execute the commands:

       # cd /var/www/html
       # wget wordpress.org/latest.zip
       # unzip latest.zip

     The unzip command creates a new folder, wordpress. Move all of its content into /var/www/html:

       # mv wordpress/* .
       # rm -rf wordpress/

     Change the owner of the WordPress files:

       # find . -type d -exec chown www-data:www-data {} \;
       # find . -type f -exec chown www-data:www-data {} \;

     Rename wp-config-sample.php to wp-config.php, then edit it:

       # mv wp-config-sample.php wp-config.php
       # $EDITOR wp-config.php

     Here, change the database information to the values you specified during the MariaDB configuration process:

       DB_NAME = mywordpressdb
       DB_USER = wordpressuser
       DB_PASSWORD = my_strong_password

     Restart the server:

       # systemctl restart nginx

     After that, go to your server's IP address and you will be redirected to the WordPress installation, which is completed entirely in your web browser. After filling out all of the required forms, WordPress will be ready for you! And that's all you need to create your website with WP on Ubuntu 16.04 running Nginx, with HHVM.

     https://www.unixmen.com/install-wordpress-nginx-hhvm-mariadb-ubuntu-16-04/
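     The wp-config.php edit can be done with sed instead of an editor. A sketch, run here against a local stand-in for the three define() lines that wp-config-sample.php ships with (values are the tutorial's examples):

```shell
# Demo on a local copy; on the server the file is /var/www/html/wp-config.php.
cat > wp-config.php <<'EOF'
define( 'DB_NAME', 'database_name_here' );
define( 'DB_USER', 'username_here' );
define( 'DB_PASSWORD', 'password_here' );
EOF

# Swap in the values created during the MariaDB step (GNU sed, as on Ubuntu).
sed -i -e "s/database_name_here/mywordpressdb/" \
       -e "s/username_here/wordpressuser/" \
       -e "s/password_here/my_strong_password/" wp-config.php

grep "DB_" wp-config.php
```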
  4. Γιάννη, I have no idea. Do you have a specific offer in mind that we could look at?
  5. Good morning!!! How does the whole effort look to you?
  6. Welcome to designhost.gr.

     Feel free to browse our community, where you can find all sorts of information and get to know the other members.

     Do not hesitate to ask us about any question or clarification you may need.

  7. This video walks you through the SSL/TLS Wizard to obtain a certificate for a combination of addon domains, subdomains, and parked (aliased) domains. Music by bensound.com. We filmed this tutorial with cPanel & WHM version 60 and the Paper Lantern theme, so the interface and features may differ from the current version. We'll produce newer tutorials as the feature evolves and improves. You can find documentation about the main cPanel interface at https://documentation.cpanel.net/disp... Please let us know if there are additional features or procedures that you would like us to demonstrate with a walkthrough in video format.
  8. benny Vasquez / Apr 21st, 2016 / Products

     March 31st, 2017: The Day the Sun Sets on CentOS 5

     CentOS 5 will officially reach end of life on March 31st, 2017, but as an industry we are already feeling the pain of people still using the out-of-date OS. Trying to support a system on an aging operating system is not far from trying to live in an aging apartment building. Ill-fitting fixtures, flickering electricity, and malfunctioning plumbing are just some of the daily struggles in those old buildings. Similarly, when the basic requirements of modern hosting are a struggle to maintain, it has a severe and direct impact on every webhost's quality of life.

     cPanel & WHM version 56 (in the CURRENT tier as of April 11th, 2016, with an expected end of life in May 2017) will be the last version of cPanel to support CentOS 5 and 32-bit architecture. We have considered this heavily, and have decided to drop support for those things for two primary reasons:

     As a software provider, the opportunity cost of continuing to support the oldest infrastructure is too high, and it severely limits the amount of new things we can deliver to our users.

     As an industry, cPanel believes that making it easier for people to continue to host on old, insecure, out-of-date software ultimately hurts us all. Deprecating support for older systems and older software is necessary for the overall health of the hosting industry.

     cPanel is keeping its eye on the future, and letting the old software go. But we know how hard it traditionally is to migrate from an old server to a new one. To help with that we developed the Server Configuration addition to the Transfer Tool.

     The Server Configuration Additions to the Transfer Tool

     In version 11.44 we introduced the Transfer Tool, and it has proved a reliable way to migrate accounts and packages between servers. As of v56 you can use it to transfer server configurations between servers as well.
     Note: This is intended to be used to transfer configurations from an existing v56 server to a new server. No attempt is made to back up configurations on the destination server. Additionally, this tool is not intended to be used to migrate configurations from compromised servers.

     If you're familiar with the Transfer Tool, you will find that the additions we've made are incredibly simple to use. Once you authenticate to the origin server, you'll see an expandable section called "Service Configurations". Click the 'Show' button, choose the configurations you want to transfer, and go through the Transfer Tool like you normally would. The configurations are moved to the new server, any relevant services are restarted, and you're all set!

     The same team that built this interface also expanded the functionality of cpconftool, to help you complete a similar migration on the command line. cpconftool is a command line utility that operates in a very similar way to pkgacct. You can read more about it on our documentation site. You can find the full documentation about the additions to the Transfer Tool in the Transfer Tool documentation for v56.

     Note: Due to the potential complications involved, MySQL is currently not included in the WHM Transfer Tool interface. You can still copy your MySQL configuration using the command line tool cpconftool, or back it up and copy it manually.

     v56 will be the last version of cPanel to support CentOS 5, and the last version to support 32-bit architecture. Just in case it wasn't obvious yet: we want you to upgrade, to migrate, and to keep using and providing new and better software. If you still have any hesitation about the Transfer Tool's additions, I'd love to hear about it! Email me or tweet me, and let me know!

     https://blog.cpanel.com/end-of-life-for-centos5/
  9. The evolution of extortionware https://techtalk.gfi.com/the-evolution-of-extortionware/ Debra Littlejohn Shinder on February 7, 2017 We’ve all heard a lot about ransomware that makes data inaccessible and/or locks up the computer or device until you pay for the encryption key. Now we’re seeing a variant on that theme that some call doxware, or extortionware, that goes further and threatens to release copies of the private documents, photos, and email messages on your computer to the public if you don’t pay up. It’s just one example of how malware has evolved over the past few years and is becoming more and more aggressive. Ransomware gets its name from the fact that it holds some or all of your files hostage and demands payment to release them. This particularly heinous type of malware has been around at least since the 1980s, when the incarnation known as AIDS appeared. Various ransomware Trojans have cropped up over the years, but really ramped up a few years ago as Bitcoin offered the opportunity for attackers to easily collect the money without going through traditional channels. CryptoLocker is one of the best-known examples of ransomware. The original version sprang to life in 2013, distributed via a botnet and malicious email attachments. The next year, the botnet was taken down by a consortium of law enforcement agencies, software vendors, commercial security research companies and academic security research departments at several prominent universities, in Operation Tovar. Not only was the botnet dismantled, but the private keys used to encrypt victims’ files were seized and made available, so that particular story had a happy ending. Unfortunately, it was far from the end of ransomware, and new extortionist malware programs have “borrowed” the CryptoLocker name even though they aren’t technically variants of it. Other well-known ransomware attacks include Cryptowall and Reveton for Windows, and KeeRanger on Mac OS X. 
     Then there's a whole other category of ransomware that targets smartphone and tablet operating systems.

     Extortionware goes mobile

     Many people today, especially young people, rarely use desktop or even laptop computers. That doesn't mean they've sworn off the Internet; on the contrary, they're online all the time, but they do it using mobile devices. So it's no wonder ransomware authors have turned their attention to creating malicious code that targets those devices. As mentioned in the introduction, there are basically two types of ransomware:

     - Those that encrypt your data so you can't view or use it, called (appropriately enough) encrypting ransomware
     - Those that prevent you from logging onto the computer or device, or from using one or more applications (such as your web browser), called blocking ransomware or just blockers.

     Note: There is also a variety of scamware called "scareware" that doesn't really do anything but only threatens to; this includes those email messages that tell you the FBI has found something illegal on your computer and will arrest you if you don't send money for the "fine."

     Encrypting ransomware is the most popular on desktop operating systems, but it doesn't work as well on mobile devices, because the data is usually stored in the cloud instead of (or in addition to) on the device where the malware can hijack it. Thus the majority of mobile ransomware programs are blockers. Popular mobile ransomware blockers include Small, Fusob, Pletor and Svpeng. Small and Fusob combine the "threatware" idea with screen overlays that prevent you from using the apps on your device. Mobile ransomware is often disguised as a legitimate third-party app or game, installed as a "drive-by download" from questionable websites, or through links in text messages.

     Doxware takes electronic extortion to a new level

     Doxing is a relatively new term that refers to the public release of private information about an individual or organization.
     Doxware does (or threatens to do) exactly the opposite of what traditional ransomware does; instead of locking up your sensitive "docs" and making them inaccessible to you, it makes them accessible to everybody, unless you pay up. If encrypting and blocking extortionware is analogous to hostage-taking for ransom, doxware is comparable to the blackmailer who demands money to keep your secrets quiet. For this reason, doxware is sometimes also called leakware.

     Doxware often targets email and word processing files. Mobile variants could release private messages, pictures or contact lists from users' phones. Doxware can be more effective than ransomware at invoking a positive (from the attacker's point of view) response, because victims can circumvent regular ransomware encryption by maintaining separate backups of data, or get past blockers by formatting the hard drive and doing a clean restore. However, once an attacker has information that you don't want made public, there is little you can do to prevent its release other than pay up.

     The scope of the problem

     Even though it's been with us for a long time, the proliferation of extortionware has exploded over the last few years, and some have dubbed 2016 "the year of ransomware" as both the distribution and the code itself became more sophisticated and touched more and more computer and device users. I've never had to deal with extortionware on my own systems (thanks in part to careful security practices and in part to good luck), but I have a number of friends and relatives from all walks of life, including a few in the IT industry, who have fallen victim to it. Both individual users and businesses are vulnerable. An Osterman Research survey in 2016 showed that nearly half of respondents said their organizations had been the victim of a ransomware attack during the previous 12 months.
     CNN statistics showed $209 million paid to ransomware attackers in the first quarter of 2016, and FBI estimates indicated a likely ransomware loss of $1 billion or more for the year. In addition, the average ransomware demand increased from $294 in 2015 to $679 in 2016. While that $679 average might not seem like much, smaller amounts extorted from individuals are balanced by much larger ransoms targeting organizations, such as the $3.4 million that cybercriminals demanded from Hollywood Presbyterian hospital in early 2016.

     Protecting against extortionware

     Obviously extortionware is a big threat and should be on your security radar. So how do you protect your organization from it? As you might guess, many of the recommended measures are the same used to protect against other types of malware:

     - Keep your OS and applications updated and install all security patches
     - Back up important files regularly to a site that is disconnected when not backing up (to prevent ransomware from also encrypting your backup files)
     - Authenticate inbound email
     - Implement ad blocking on web browsers, since malware is often delivered through "malvertisements," even on legitimate sites
     - Disable unneeded services (CryptoLocker, for example, often targeted machines using Remote Desktop Protocol, so disabling RDP if you don't use it helps protect against it)
     - Educate users in safe computing practices and reinforce the importance on a regular basis:
       - Don't click on links in email, texts, and on web sites you don't trust
       - Show hidden file extensions so executables can't be disguised as PDFs or JPGs or other more innocuous file types
       - Don't visit questionable web sites (porn, warez, music piracy sites, hacker forums, etc.)
       - Don't download mobile apps from untrusted sources

     Several of the popular security vendors offer security suites to protect against malware, including extortionware.
They also offer mobile security solutions that are designed to scan your phone or tablet and check apps against their databases of known threats, and can also warn you of websites that are infected with ransomware (or other malware). Most experts recommend that you not pay the ransom as this encourages the criminals – and in many cases victims have paid and never received the keys, or received keys that didn’t work. Remember that criminals, by definition, are not trustworthy and are prone to not keeping their promises. Summary Extortionware is big business and becoming more so. Companies and other large organizations are prime targets because they’re seen as having deep pockets.
  10. Many people know MX Toolbox ( http://mxtoolbox.com/ ), but far fewer have noticed that it has its own deliverability tool: https://mxtoolbox.com/deliverability

     How it works:
     Step 1: Send an email to ping@tools.mxtoolbox.com
     Step 2: Once you have sent that message it will reply. Click the link: "View your full Deliverability Report"

     ABOUT EMAIL DELIVERABILITY
     This tool requires you to send a test message to MxToolbox. Our systems analyze the headers, the blacklist reputation of your outbound IP address, and your SPF records to generate a comprehensive deliverability report.
  11. I have seen a password change in Active Directory ( and therefore the MS Exchange password too ) while the ActiveSync handheld kept syncing the mailbox with the old password for hours afterwards.... The two articles below explain it in practice, especially the first: here is how it is supposed to work, and here is how it actually behaves.

     http://www.techrepublic.com/article/why-does-my-old-password-work-via-activesync/
     https://social.technet.microsoft.com/Forums/office/en-US/e792fd57-84b2-4e48-8281-99b062b8a014/activesync-can-sync-6-hours-after-changing-password-in-ad-before-client-asks-for-new-credentials?forum=exchangesvrclientslegacy
  12. Sometimes I have to re-import SQL data into a database when the dump contains no DROP statements. In that case the process naturally stops early with something like this:

       ERROR 1050 (42S01) at line xx: Table 'db_table' already exists

     Instead of dropping the database, recreating it and recreating the user connections, there is also the following query, which generates a DROP statement for every table in the database:

       SELECT CONCAT('DROP TABLE ', TABLE_NAME, ';')
       FROM INFORMATION_SCHEMA.tables
       WHERE TABLE_SCHEMA = '{database name}';

     You will see something like this ( example from WP ):

       +--------------------------------------------+
       | CONCAT('DROP TABLE ', TABLE_NAME, ';')     |
       +--------------------------------------------+
       | DROP TABLE wpdbase_commentmeta;            |
       | DROP TABLE wpdbase_comments;               |
       | DROP TABLE wpdbase_links;                  |
       | DROP TABLE wpdbase_options;                |
       | DROP TABLE wpdbase_pmxi_history;           |
       | DROP TABLE wpdbase_postmeta;               |
       | DROP TABLE wpdbase_posts;                  |
       | DROP TABLE wpdbase_term_relationships;     |
       | DROP TABLE wpdbase_term_taxonomy;          |
       | DROP TABLE wpdbase_termmeta;               |
       | DROP TABLE wpdbase_terms;                  |
       | DROP TABLE wpdbase_usermeta;               |
       | DROP TABLE wpdbase_users;                  |
       | DROP TABLE wpdbase_wysija_campaign;        |
       | DROP TABLE wpdbase_wysija_campaign_list;   |
       | DROP TABLE wpdbase_wysija_custom_field;    |
       | DROP TABLE wpdbase_wysija_email;           |
       | DROP TABLE wpdbase_wysija_email_user_stat; |
       | DROP TABLE wpdbase_wysija_email_user_url;  |
       | DROP TABLE wpdbase_wysija_form;            |
       | DROP TABLE wpdbase_wysija_list;            |
       | DROP TABLE wpdbase_wysija_queue;           |
       | DROP TABLE wpdbase_wysija_url;             |
       | DROP TABLE wpdbase_wysija_url_mail;        |
       | DROP TABLE wpdbase_wysija_user;            |
       | DROP TABLE wpdbase_wysija_user_field;      |
       | DROP TABLE wpdbase_wysija_user_history;    |
       | DROP TABLE wpdbase_wysija_user_list;       |
       +--------------------------------------------+
       28 rows in set (0.00 sec)

     You can then easily drop them all with a simple copy-paste. I have found other solutions too, and there is surely a better one.
http://stackoverflow.com/questions/27606518/how-to-drop-all-tables-from-a-database-with-one-sql-query http://stackoverflow.com/questions/536350/drop-all-the-tables-stored-procedures-triggers-constraints-and-all-the-depend http://stackoverflow.com/questions/11053116/mysql-bulk-drop-table-where-table-like
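     The same idea works from the shell without any copy-pasting: dump the table names, build the DROP statements with awk, then pipe them back into mysql. A sketch with made-up example table names; the steps that need a live server are shown as comments:

```shell
# Stand-in for:
#   mysql -N -B -e "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.tables
#                   WHERE TABLE_SCHEMA='mywordpressdb';" > tables.txt
printf 'wpdbase_posts\nwpdbase_users\nwpdbase_options\n' > tables.txt

# Backquote each name so tables named after reserved words still drop cleanly.
awk '{ print "DROP TABLE `" $1 "`;" }' tables.txt > drop.sql
cat drop.sql
# To apply: mysql -u root -p mywordpressdb < drop.sql
```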
  13. Current Release: http://www.rfxn.com/downloads/apf-current.tar.gz
     http://www.rfxn.com/appdocs/README.apf
     http://www.rfxn.com/appdocs/CHANGELOG.apf

     Description: Advanced Policy Firewall (APF) is an iptables (netfilter) based firewall system designed around the essential needs of today's Internet-deployed servers and the unique needs of custom-deployed Linux installations. The configuration of APF is designed to be very informative and to present the user with an easy-to-follow process, from top to bottom of the configuration file. The technical side of APF is such that it utilizes the latest stable features from the iptables (netfilter) project to provide a very robust and powerful firewall.

     The filtering performed by APF is threefold:
     1) Static rule based policies (not to be confused with a "static firewall")
     2) Connection based stateful policies
     3) Sanity based policies

     The first, static rule based policies, is the most traditional method of firewalling. This is when the firewall has an unchanging set of instructions (rules) on how traffic should be handled in certain conditions. An example of a static rule based policy would be when you allow/deny an address access to the server with the trust system, or open a new port with conf.apf. In short, these are rules that change infrequently or never while the firewall is running.

     The second, connection based stateful policies, is a means to distinguish legitimate packets for different types of connections. Only packets matching a known connection will be allowed by the firewall; others will be rejected. An example of this would be FTP data transfers; in an older era of firewalling you would have to define a complex set of static policies to allow FTP data transfers to flow without a problem.
     That is not so with stateful policies: the firewall can see that an address has established a connection to port 21, then "relate" that address to the data transfer portion of the connection and dynamically alter the firewall to allow the traffic.

     The third, sanity based policies, is the ability of the firewall to match various traffic patterns to known attack methods, or to scrutinize traffic so that it conforms to Internet standards. An example of this would be when a would-be attacker attempts to forge the source IP address of data they are sending to you; APF can simply discard this traffic, or optionally log it and then discard it. To the same extent, another example would be when a broken router on the Internet begins to relay malformed packets to you; APF can simply discard them, or in other situations reply to the router and have it stop sending you new packets (TCP Reset).

     Features:
     – detailed and well commented configuration file
     – granular inbound and outbound network filtering
     – user id based outbound network filtering
     – application based network filtering
     – trust based rule files with an optional advanced syntax
     – global trust system where rules can be downloaded from a central management server
     – reactive address blocking (RAB), next generation in-line intrusion prevention
     – debug mode provided for testing new features and configuration setups
     – fast load feature that allows for 1000+ rules to load in under 1 second
     – inbound and outbound network interfaces can be independently configured
     – global tcp/udp port & icmp filtering with multiple filters (drop, reject, prohibit)
     – configurable policies for each ip on the system with convenience variables to import settings
     – packet flow rate limiting that prevents abuse on the most widely abused protocol, icmp
     – prerouting and postrouting rules for optimal network performance
     – dshield.org block list support to ban networks exhibiting suspicious activity
     – spamhaus Don't Route Or Peer List support to ban known "hijacked zombie" IP blocks
     – any number of additional interfaces may be configured as trusted or untrusted
     – additional firewalled interfaces can have their own unique firewall policies applied
     – intelligent route verification to prevent embarrassing configuration errors
     – advanced packet sanity checks to make sure traffic coming and going meets the strictest of standards
     – filter attacks such as fragmented UDP, port zero floods, stuffed routing, arp poisoning and more
     – configurable type of service options to dictate the priority of different types of network traffic
     – intelligent default settings to meet every day server setups
     – dynamic configuration of your server's local DNS resolvers into the firewall
     – optional filtering of common p2p applications
     – optional filtering of private & reserved IP address space
     – optional implicit blocks of the ident service
     – configurable connection tracking settings to scale the firewall to the size of your network
     – configurable kernel hooks (ties) to harden the system further against syn-flood attacks & routing abuses
     – advanced network control such as explicit congestion notification and overflow control
     – helper chains for FTP DATA and SSH connections to prevent client side issues
     – optional rate limited event logging
     – logging subsystem that allows for logging data to user space programs or standard syslog files
     – comprehensive logging of every rule added
     – detailed startup error checking
     – if you are familiar with netfilter you can create your own rules in any of the policy files
     – pluggable and ready advanced use of QoS algorithms provided by the Linux kernel
     – 3rd party add-on projects that complement APF features

     Funding: Funding for the continued development and research into this and other projects is solely dependent on public contributions and donations.
If this is your first time using this software we ask that you evaluate it and consider a small donation; for those who frequent and are continued users of this and other projects we also ask that you make an occasional donation to help ensure the future of our public projects. https://www.rfxn.com/projects/advanced-policy-firewall/
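     For readers unfamiliar with netfilter, the three policy types above can be pictured as raw iptables rules. This is an illustrative sketch only, not APF's actual generated rules (APF builds its rule set for you from the config file); the trusted address 203.0.113.10 is a documentation example, and the commands need root on a Linux box:

```shell
# 1) Static rule based policy: always allow SSH from one trusted address.
iptables -A INPUT -p tcp -s 203.0.113.10 --dport 22 -j ACCEPT

# 2) Connection based stateful policy: only packets belonging to an
#    already-known connection (e.g. the FTP data channel related to an
#    established port-21 session) are let in.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# 3) Sanity based policy: drop packets with an impossible TCP flag
#    combination, a classic scan/attack signature.
iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
```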
  14. Some members may curse me out for this, but I'll take the risk!
  15. Hello akimo,

    Welcome to designhost.gr.

    Feel free to browse our community accessing all sorts of information and getting to know our members.

    Do not hesitate to ask anything in our forums.

    designhost.gr

  16. Important to do before sealing up VPS templates:

     https://www.cyberciti.biz/faq/howto-display-clear-last-login-information/

     The /var/log/lastlog file stores user last login information. It is a binary file that acts as a database of the times of previous user logins. You need to use the lastlog command to format and print the contents of the /var/log/lastlog last login log file.

     lastlog command

     The lastlog command shows the most recent login of all users or of a given user. The following information is printed by lastlog:
     => The login name
     => Port
     => Last login time

     Task: Display last login information

     Simply type the lastlog command:

       $ lastlog

     Sample outputs:

       Username         Port     From             Latest
       root             tty1                      Thu Jan 25 15:23:50 +0530 2007
       daemon                                     **Never logged in**
       bin                                        **Never logged in**
       sys                                        **Never logged in**
       sync                                       **Never logged in**
       vivek            tty1                      Sat Jan 27 22:10:36 +0530 2007
       pdnsd                                      **Never logged in**
       sshd                                       **Never logged in**
       messagebus                                 **Never logged in**
       bind                                       **Never logged in**
       sweta            tty1                      Sat Jan 27 19:55:22 +0530 2007

     Note: If the user has never logged in, the message "**Never logged in**" is displayed instead of the port and time.

     Task: Clear last login information by deleting /var/log/lastlog

     Simply overwrite the /var/log/lastlog file. You must be the root user. First make a backup of /var/log/lastlog:

       # cp /var/log/lastlog /root

     Now overwrite the file using any one of the following commands:

       # >/var/log/lastlog

     OR

       # cat > /var/log/lastlog

     Press CTRL+D to save the changes.
     last and lastb commands

     Use the last or lastb command to display a listing of last logged in users:

       $ last

     OR

       $ lastb

     Sample outputs:

       root     pts/1        10.1.6.120       Tue Jan  7 16:43   still logged in
       root     pts/0        10.1.6.120       Tue Jan  7 15:52   still logged in
       root     pts/0        10.1.6.120       Tue Jan  7 11:20 - 15:07  (03:47)
       root     pts/1        10.1.6.120       Tue Jan  7 07:07 - 09:50  (02:43)
       root     pts/0        10.1.6.120       Tue Jan  7 05:00 - 07:22  (02:21)
       root     pts/0        10.1.6.120       Mon Jan  6 14:16 - 16:36  (02:20)
       root     pts/0        10.1.6.120       Sun Jan  5 16:37 - 17:01  (00:23)
       root     pts/0        10.1.6.120       Sun Jan  5 15:12 - 15:39  (00:26)
       root     pts/0        10.1.6.120       Sun Jan  5 14:45 - 15:05  (00:20)
       root     pts/2        10.1.6.120       Sun Jan  5 12:53 - 15:46  (02:53)
       root     pts/0        10.1.6.120       Sun Jan  5 12:52 - 12:53  (00:00)
       root     pts/1        10.1.6.120       Sun Jan  5 11:09 - 14:29  (03:20)
       root     pts/0        10.1.6.120       Sun Jan  5 10:05 - 12:19  (02:14)
       reboot   system boot  2.6.32-431.3.1.e Sun Jan  5 10:02 - 16:48 (2+06:46)
       root     pts/0        10.1.6.120       Sun Jan  5 09:58 - down   (00:00)
       root     pts/0        10.1.6.120       Sun Jan  5 03:33 - 05:45  (02:12)
       root     pts/1        10.1.6.120       Sat Jan  4 15:06 - 17:28  (02:21)
       root     pts/0        10.1.6.120       Sat Jan  4 13:46 - 15:58  (02:11)
       root     pts/0        10.1.6.120       Sat Jan  4 05:05 - 07:16  (02:11)
       root     pts/1        10.1.6.120       Fri Jan  3 14:29 - 15:44  (01:15)
       root     pts/0        10.1.6.120       Fri Jan  3 13:20 - 15:32  (02:11)
       root     pts/0        10.1.6.120       Thu Jan  2 05:19 - 05:32  (00:13)
       root     pts/0        10.1.6.120       Tue Dec 31 13:57 - 16:06  (02:09)

       wtmp begins Tue Dec 31 13:57:23 2013

     last and lastb use the /var/log/wtmp and /var/log/btmp files to log information. You can use the following commands to clear wtmp/btmp:

       # >/var/log/wtmp
       # >/var/log/btmp

     For more information see the man pages: lastlog(8), last(1), login(1), wtmp(5)
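     The `> file` truncation above can be rehearsed safely on a scratch file before touching the real logs. A small sketch (the file name wtmp.demo is made up for the demo; on a real system you would operate on /var/log/wtmp as root):

```shell
f=wtmp.demo                      # stand-in for /var/log/wtmp
printf 'old login records' > "$f"

cp "$f" "$f.bak"                 # back up first, as the article advises
: > "$f"                         # truncate to zero bytes in place, keeping
                                 # the inode, owner and permissions intact
wc -c < "$f"
```

In a script, `: > file` is the conventional spelling of the bare `> file` redirection shown in the article; both leave an empty file behind rather than deleting it, which matters because logging daemons keep the file open.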
  17. Specifically for WHM / cPanel:

     https://forums.cpanel.net/threads/xcache-in-easyapache-does-not-enable-opcode-caching.444552/

     And here is the official documentation:

     https://documentation.cpanel.net/display/EA/PHP+Options
  18. In many cases slow PHP execution drags down the performance of websites. To optimize and accelerate website performance you need to improve PHP performance, and for that purpose you can use opcode cachers such as eAccelerator, APC, Memcached, XCache, etc. Personally, my favourite choice is XCache.

     XCache is a free, open source operation code (opcode) cacher designed to enhance the performance of PHP script execution on servers. It eliminates the compilation time of PHP code by caching the compiled version of the code in memory, so the PHP script is served directly from memory in compiled form. This can accelerate page generation time by up to five times, and it also optimizes many other aspects of PHP scripts while reducing website/server load. It may not always be five times faster, but it will definitely improve a standard PHP installation.

     This article explains how to set up and integrate XCache into a PHP installation on RHEL, CentOS, Fedora and Ubuntu, Linux Mint and Debian systems.

     Step 1: Installation of XCache for PHP

     Users running a Red Hat based distribution can install XCache through the package manager by enabling the epel repository. Once you've enabled the epel repository, you can use the following yum command to install it.

     RHEL/CentOS/Fedora

       # yum install php-xcache xcache-admin

     XCache is available for Debian based distributions from the default package manager, so you can install the XCache package with the following apt-get command.

     Debian/Ubuntu/Linux Mint

       # apt-get install php5-xcache

     Step 2: Configuring XCache for PHP

     The xcache.ini configuration file has a couple of settings that I recommend you understand, as they are vital to using this plugin. Detailed information on the XCache configuration settings can be found at XcacheIni.
If you don’t want to change any settings, the defaults are good enough to use with XCache.
RHEL/CentOS/Fedora
# vi /etc/php.d/xcache.ini
Debian/Ubuntu/Linux Mint
# vi /etc/php5/conf.d/xcache.ini
OR
# vi /etc/php5/mods-available/xcache.ini
Step 3: Restarting Apache for XCache
Once you are done with the configuration settings, restart your Apache web server.
# /etc/init.d/httpd restart
# /etc/init.d/apache2 restart
Step 4: Verifying XCache for PHP
Once you have restarted the web service, run the following command to verify XCache. You should see the XCache lines as shown below.
# php -v
Sample Output
PHP 5.3.3 (cli) (built: Jul 3 2012 16:40:30)
Copyright (c) 1997-2010 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies
with XCache v3.0.1, Copyright (c) 2005-2013, by mOo
with XCache Optimizer v3.0.1, Copyright (c) 2005-2013, by mOo
with XCache Cacher v3.0.1, Copyright (c) 2005-2013, by mOo
with XCache Coverager v3.0.1, Copyright (c) 2005-2013, by mOo
Alternatively, you can verify XCache by creating a ‘phpinfo.php‘ file under your document root directory (i.e. /var/www/html or /var/www).
# vi /var/www/phpinfo.php
Next, add the following PHP lines to it and save the file.
<?php phpinfo(); ?>
Open a web browser and request the file at “http://your-ip-address/phpinfo.php“. You should see an XCache section in the output.
Step 5: Enabling the XCache Admin Panel for PHP
By default the admin panel is protected with http-auth and is disabled if you have not set a password. To set the user/password, open the xcache.ini file. First, though, you have to create the md5 password using the following command.
# echo -n "typeyourpassword" | md5sum
Sample Output
e10adc3949ba59abbe56e057f20f883e
Now open the xcache.ini file and add the generated md5 password. See the following example and add your own password’s md5 string.
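For reference, a minimal xcache.ini sketch covering the settings most often tuned in Step 2; the values below are illustrative assumptions, not tuned recommendations (the shipped defaults are fine for most setups):

```ini
[xcache]
xcache.size     = 64M   ; opcode cache size; 0 disables the cacher
xcache.count    = 2     ; number of cache chunks, usually = number of CPU cores
xcache.ttl      = 3600  ; seconds a cached entry may live; 0 = forever
xcache.var_size = 16M   ; cache for variable data (xcache_get/xcache_set)
```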
[xcache.admin]
xcache.admin.enable_auth = On
; Configure this to use admin pages
xcache.admin.user = "mOo"
; xcache.admin.pass = md5($your_password)
xcache.admin.pass = "e10adc3949ba59abbe56e057f20f883e"
To expose the admin pages, the simplest way is to copy the whole xcache directory (admin in older releases) to your web root directory (i.e. /var/www/html or /var/www).
# cp -a /usr/share/xcache/ /var/www/html/
OR
# cp -a /usr/share/xcache/htdocs /var/www/xcache
OR
# cp -a /usr/share/xcache/admin/ /var/www/ (older release)
Now open it in your browser; an http-auth login prompt window will pop up. Enter your user/pass, and you are done.
http://localhost/xcache
OR
http://localhost/admin (older release)
Reference Links: XCache Homepage
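The password-hash step from Step 5 can be scripted end-to-end; a small sketch ("123456" is a throwaway example password, use your own):

```shell
# Produce the md5 string for xcache.admin.pass.
# printf '%s' avoids the trailing newline (same effect as echo -n).
pass="123456"   # throwaway example password -- use your own
hash=$(printf '%s' "$pass" | md5sum | cut -d' ' -f1)
echo "xcache.admin.pass = \"$hash\""
```

With this example password the hash happens to match the sample string shown above.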
  19. A fairly comprehensive one-liner for WHM / cPanel server environment details:
grep '' /etc/redhat-release /usr/local/cpanel/version /var/cpanel/envtype ; grep CPANEL= /etc/cpupdate.conf ; httpd -v ; php -v ; mysql -V
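The first part of the one-liner can be written as a small readable function; a sketch (server_env_report is a name of mine, the cPanel paths are the ones used above):

```shell
# Print "<file>:<line>" for every readable file given, mirroring grep '' file...
server_env_report() {
    for f in "$@"; do
        if [ -r "$f" ]; then
            while IFS= read -r line; do
                printf '%s:%s\n' "$f" "$line"
            done < "$f"
        else
            printf '%s: (not present)\n' "$f"
        fi
    done
}

# On a WHM/cPanel server you would call:
# server_env_report /etc/redhat-release /usr/local/cpanel/version /var/cpanel/envtype
```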
  20. Introduction
For those who don’t know, SSHFS stands for Secure SHell FileSystem. It’s a client that enables us to interact with remote directories through SFTP (SSH File Transfer Protocol), a secure file transfer protocol that provides file access, file transfer and file management features over SSH. We’ll install it on a Red Hat based distro.
Install SSHFS Client
First, enable the epel repository. Then, just use yum:
# yum install sshfs
Create a mount point and mount a remote file system
You must create a mount point directory where the remote file system will be mounted, for example /mnt/sshfstest. Of course, use whatever mount point you want or need. Now you can mount a remote directory under /mnt/sshfstest:
# sshfs -o IdentityFile=~/.ssh/id_rsa user@X.X.X.X:/remote/directory /mnt/sshfstest
Check the mount point
To verify that the commands above did their job, you can use df, for instance:
# df -hT
Conclusion
That’s all; the remote directory is now mounted. If you want, you can make this permanent by editing the /etc/fstab file.
https://www.unixmen.com/mount-remote-directories-with-sshfs/
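For the permanent /etc/fstab approach mentioned in the conclusion, an entry could look like this (the user, host, remote path and mount point are placeholders; fstab cannot expand ~, so the identity file path must be absolute):

```
# /etc/fstab -- sshfs mount (illustrative sketch)
user@X.X.X.X:/remote/directory  /mnt/sshfstest  fuse.sshfs  defaults,_netdev,IdentityFile=/root/.ssh/id_rsa  0  0
```

The _netdev option tells the system to wait for the network before attempting the mount.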
  21. There is an enormous amount of information on creating, maintaining, managing, promoting and commercially exploiting websites, but very little on buying and selling them. The website below is perhaps an exceptionally polished piece of work in this space of website real estate: https://empireflippers.com/ For automated pricing by a bot (rather than valuation by experts) there is also this: http://www.worthofweb.com/
  22. What is TCP Fast Open?
The TCP protocol underpins most application-layer protocols like HTTP, SSH, FTP and NFS. TCP sits between the IP layer (IP address routing) and the application layer (user data), and is responsible for guaranteed, ordered byte stream delivery. TCP is also the layer at which source and destination ports are indicated. One of the reasons applications are so sensitive to the distance between sender and receiver is that TCP requires a 3-way handshake before any user data is sent:
1. The sender sends a TCP-synchronize (SYN) packet to the receiver, indicating its desire to transmit;
2. The receiver responds with a TCP-SYN/ACK packet, simultaneously acknowledging the sender and opening up its own TX pipe (TCP is bidirectional);
3. Finally, the sender sends a TCP-ACK packet to acknowledge the receiver’s transmission intentions.
Only after step 3 can the sender actually start sending data. In fact, if you look at a Wireshark trace, what you’ll typically see is the sender’s TCP-ACK packet followed immediately by a bunch of data packets. The problem with distance between sender and receiver is that it creates a meaningful delay between step 1 and step 2. This delay is called the round-trip time (RTT, aka ping time), because the sender must wait for its packet to travel all the way to the receiver and then wait for a reply to come back. That’s where TCP Fast Open (TFO) comes in. TFO is an extension to the TCP protocol which allows data transfer to start during the handshake, by allowing data in the payload of a TCP-SYN packet and hence triggering an immediate response from the server. However, TFO is only possible after a normal initial handshake has been performed. In other words, the TFO extension provides a means by which sender and receiver can save some data about each other, and recognize each other on later connections based on a TFO cookie.
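The handshake cost described above can be put into rough numbers; a back-of-envelope sketch where all figures are assumptions for illustration, not measurements:

```shell
# Each brand-new TCP connection pays one extra RTT for the 3-way handshake
# before any user data flows.
rtt_ms=100     # assumed round-trip time between client and server
conns=20       # assumed number of short-lived connections (e.g. small fetches)

overhead_ms=$(( rtt_ms * conns ))
echo "handshake overhead without TFO: ${overhead_ms} ms"
```

With these assumed numbers the page pays two full seconds just in handshakes, which is the delay TFO aims to remove on repeat connections.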
TFO is quite useful because:
1. TFO is a kernel setting, thus available to all applications that want to benefit from it;
2. TFO can meaningfully accelerate applications that open, use and close many connections during the lifetime of the app.
How meaningful is the acceleration? First, it is meaningful in terms of reduced response time, especially if the sender and receiver are far apart from each other. For example, you may want your e-commerce site to load individual catalogue items faster, because every delay is an opportunity for the customer to think twice or go away. As another example, reducing the time between a user hitting the play button and the video actually starting can significantly improve user experience. The response-time saving is a function of the RTT. Secondly, it can be very meaningful in terms of turnaround time. For smaller files, the initial handshake delay is typically one or more orders of magnitude larger than the actual data transfer time. For example, if an application is synchronizing many small or medium files, eliminating the handshake delay can significantly improve the total transfer time.
Enabling TFO for NGINX
OK, let’s get to work. There are 3 tasks to complete:
1. Update the kernel settings to support TFO;
2. Compile NGINX from source with TFO support;
3. Modify the NGINX configuration to accept TFO connections.
Kernel support for TFO
Client and server support for IPv4 TFO was merged into the Linux kernel mainline as of 3.7 – you can check your kernel version with uname -r. If you’re running 3.13, chances are that TFO is already enabled by default. Otherwise, follow this procedure to turn it on.
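Before applying the procedure, the kernel-version check can be done programmatically; a sketch (tfo_kernel_ok is a helper name of mine, not part of the article):

```shell
# Version-aware check that a kernel version string is >= 3.7, the first
# mainline kernel with both client and server IPv4 TFO support.
tfo_kernel_ok() {
    # sort -V orders version strings; if "3.7" sorts first (or ties),
    # the tested version is >= 3.7.
    [ "$(printf '%s\n' "$1" 3.7 | sort -V | head -n1)" = "3.7" ]
}

if tfo_kernel_ok "$(uname -r)"; then
    echo "kernel can support TFO"
else
    echo "kernel too old for TFO"
fi
```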
As root, create the file /etc/sysctl.d/tcp-fast-open.conf with the following content:
net.ipv4.tcp_fastopen = 3
Restart sysctl:
# systemctl restart systemd-sysctl
Check the current setting:
# cat /proc/sys/net/ipv4/tcp_fastopen
3
Compiling NGINX with TFO support
Most NGINX packages do not currently include TFO support. The minimum NGINX version required for TFO is 1.5.8. However, that’s a pretty old version, as NGINX is now at 1.9.7. The procedure that follows will use 1.9.7 but will likely work with future NGINX versions. Check the NGINX News page to get the latest version. As a normal user (not root), download the NGINX source nginx-1.9.7.tar.gz, extract it and move into the nginx-1.9.7 directory.
sudo yum install wget -y
wget http://nginx.org/download/nginx-1.9.7.tar.gz
tar -xvf nginx-1.9.7.tar.gz
cd nginx-1.9.7
Install the Fedora EPEL repository (this must be done prior to the next yum install command):
sudo yum install -y epel-release
Install prerequisite packages:
sudo yum install -y gcc zlib-devel libatomic_ops-devel pcre-devel openssl-devel libxml2-devel libxslt-devel gd-devel GeoIP-devel gperftools-devel
Configure the build specifying the -DTCP_FASTOPEN=23 compiler flag. Also note that the --prefix=/usr/share/nginx configuration option specifies the installation root, and a few other directories need to be manually set as well.
If you’re not worried about crushing an existing NGINX installation and/or you want to build a more standard installation, change the prefix option to /usr and remove the /usr/share/nginx prefix from the rest of the path specifications.
$ ./configure \
    --prefix=/usr/share/nginx \
    --conf-path=/usr/share/nginx/etc/nginx/nginx.conf \
    --error-log-path=/usr/share/nginx/var/log/nginx/error.log \
    --http-log-path=/usr/share/nginx/var/log/nginx/access.log \
    --http-client-body-temp-path=/usr/share/nginx/var/lib/nginx/tmp/client_body \
    --http-proxy-temp-path=/usr/share/nginx/var/lib/nginx/tmp/proxy \
    --http-fastcgi-temp-path=/usr/share/nginx/var/lib/nginx/tmp/fastcgi \
    --http-uwsgi-temp-path=/usr/share/nginx/var/lib/nginx/tmp/uwsgi \
    --http-scgi-temp-path=/usr/share/nginx/var/lib/nginx/tmp/scgi \
    --user=nginx \
    --group=nginx \
    --build="TFO custom build" \
    --with-threads \
    --with-file-aio \
    --with-ipv6 \
    --with-http_ssl_module \
    --with-http_v2_module \
    --with-http_realip_module \
    --with-http_addition_module \
    --with-http_xslt_module \
    --with-http_image_filter_module \
    --with-http_geoip_module \
    --with-http_sub_module \
    --with-http_dav_module \
    --with-http_flv_module \
    --with-http_mp4_module \
    --with-http_gunzip_module \
    --with-http_gzip_static_module \
    --with-http_auth_request_module \
    --with-http_random_index_module \
    --with-http_secure_link_module \
    --with-http_degradation_module \
    --with-http_stub_status_module \
    --with-mail \
    --with-mail_ssl_module \
    --with-stream \
    --with-stream_ssl_module \
    --with-google_perftools_module \
    --with-pcre \
    --with-pcre-jit \
    --with-debug \
    --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -DTCP_FASTOPEN=23' \
    --with-ld-opt='-Wl,-z,relro -Wl,-E'
Compile NGINX:
make -j4
Check that NGINX was built correctly:
$ ./objs/nginx -V
nginx version: nginx/1.9.7 (TFO custom build)
built by gcc 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx --error-log-path=/usr/share/nginx/var/log/nginx/error.log --http-log-path=/usr/share/nginx/var/log/nginx/access.log --http-client-body-temp-path=/usr/share/nginx/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/usr/share/nginx/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/usr/share/nginx/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/usr/share/nginx/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/usr/share/nginx/var/lib/nginx/tmp/scgi --user=nginx --group=nginx --build='TFO custom build' --with-threads --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module --with-http_image_filter_module --with-http_geoip_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_auth_request_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-stream --with-stream_ssl_module --with-google_perftools_module --with-pcre --with-pcre-jit --with-debug --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -DTCP_FASTOPEN=23' --with-ld-opt='-Wl,-z,relro -Wl,-E'
Install NGINX to the prefix base directory:
sudo make install
Create the nginx user/group along with the temporary file directory:
$ sudo groupadd -r nginx
$ sudo useradd -r -d /usr/share/nginx/var/lib/nginx -g nginx -s /sbin/nologin -c "Nginx web server" nginx
$ sudo mkdir -p /usr/share/nginx/var/lib/nginx/tmp
$ sudo chown -R nginx.wheel /usr/share/nginx/var/{log,lib}/nginx
NGINX configuration for TFO
Using TFO is as simple as adding the fastopen option to a server’s listen directive.
From the NGINX docs:
fastopen=number
Enables “TCP Fast Open” for the listening socket (1.5.8) and limits the maximum length for the queue of connections that have not yet completed the three-way handshake.
Edit the /usr/share/nginx/etc/nginx/nginx.conf file and modify your listen directive as follows:
listen 80 fastopen=256;
Feel free to leave a comment to let me know how this played out for you – thanks and good luck.
http://www.masv.io/enabling-tcp-fast-open-nginx-centos-7/
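For context, the listen directive sits inside a server block; a minimal sketch (server name and root are placeholders, not from the article):

```nginx
server {
    # fastopen=256 caps the queue of TFO connections that have not yet
    # completed the three-way handshake
    listen 80 fastopen=256;
    server_name example.com;
    root /usr/share/nginx/html;
}
```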
  23. IW10: has it already become established? Was it simply forgotten? Objections and reservations existed from the very start: https://tools.ietf.org/html/draft-gettys-iw10-considered-harmful-00
Internet Engineering Task Force, Jim Gettys, Alcatel-Lucent Bell Labs
Internet-Draft, Intended status: Informational, August 26, 2011, Expires: February 27, 2012
IW10 Considered Harmful
draft-gettys-iw10-considered-harmful-00
Abstract
The proposed change of the initial window to 10 in draft-ietf-tcpm-initcwnd must be considered deeply harmful; not because the proposed change is evil taken in isolation, but because, in conjunction with other changes in web browsers and web sites that have occurred over the last decade, it makes the problem of transient congestion at a user's broadband connection two and a half times worse. This result has been hidden by the already widespread bufferbloat present in broadband connections. Packet loss in isolation is no longer a useful metric of a path's quality. The very drive to improve latency of web page rendering is already destroying other low-latency applications, such as VOIP and gaming, and will prevent reliable rich real-time web applications such as those contemplated by the IETF rtcweb working group.
  24. IW10: CentOS 6.4 update finally supports IW10 for TCP/IP tuning
Google and others have published several whitepapers with benchmarks showing that tuning TCP/IP on servers can really help web browsing performance. One of these important tweaks is known as IW10, which enlarges the initial TCP congestion window (initially sending 10 packets instead of only 3) and is enabled by increasing initcwnd and initrwnd. Unfortunately those on CentOS (one of the most popular Linux production distributions) were not able to take advantage of this – until this weekend, when CentOS 6.4 shipped. CentOS 6.3 could change initcwnd but NOT initrwnd (RWIN), which requires a kernel > 2.6.33; CentOS 6.4 is 2.6.32-358.0.1.el6 (Red Hat 4.4.7-3), so the feature has been backported. Here’s how to take advantage of IW10 on CentOS 6.4. This is obviously for power users with root access to their VPS or dedicated servers (it will NOT work on Virtuozzo VZ VPS!). First, do a yum update to get CentOS 6.4. Then take a look at your current settings via:
ip route show
You are looking for the last line, which will say something like:
default via 4.3.2.1 dev eth0
Then just take that line and add two parts to it:
ip route change default via 4.3.2.1 dev eth0 initcwnd 10 initrwnd 10
And bingo, you now have IW10 support. If something goes wrong, it will just complain and not make the change. Note that these changes are not permanent; you will have to add the line to /etc/rc.local or similar to apply it on each reboot. There are other important TCP/IP settings to tune, but one of the most important is to disable “slow start after idle”:
sysctl -a | grep tcp_slow_start_after_idle
sysctl -w net.ipv4.tcp_slow_start_after_idle=0
It is possible to examine the performance improvement of this change at the packet level, but it requires using wireshark and counting ACKs, which is a bit tedious, so I will skip that for now.
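The route-change step above can be made reboot-safe by rebuilding the command from the live routing table instead of hard-coding the gateway; a sketch (build_iw10_cmd is a helper name of mine, and the sample route line is the one from the article):

```shell
# Given an "ip route show" default-route line, emit the matching
# "ip route change" command with the IW10 parameters appended.
build_iw10_cmd() {
    printf 'ip route change %s initcwnd 10 initrwnd 10\n' "$1"
}

# With the article's sample route line:
build_iw10_cmd "default via 4.3.2.1 dev eth0"

# In /etc/rc.local you could then (as root) run something like:
#   eval "$(build_iw10_cmd "$(ip route show | grep ^default | head -n1)")"
```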
Unfortunately it might be YEARS before CentOS gets things like TCP Fast Open, which only appeared a few months ago in Linux kernels > 3.5, but I am still investigating whether it is possible in CentOS 6.4 to easily reduce the initial RTT timeout from 3 seconds to 1 second, which is another recommended TCP/IP tuning improvement.
https://ckon.wordpress.com/2013/03/11/centos-6-4-supports-iw10-tcpip-tuning/