NickTheGreek
Administrators

  • Content Count: 454
  • Joined
  • Last visited
  • Days Won: 76
  • Feedback: N/A

Everything posted by NickTheGreek

  1. 31st of March is World Backup Day, a global initiative that aims to get organisations and individuals to pledge to back up their data on that day. As a web host, it’s something we fully support, and we’ve written this article to explain why you should join in. To give you a little more incentive, we’ve also got a couple of superb backup deals on offer, which we’ll tell you about at the end of the post.

    If you’re one of our regular readers, you’ll know that this is not the first post we’ve written about backups – and it’s unlikely to be the last. Backing up is one of the things that people need to be constantly reminded to do. Most of us know why we should but, for some reason, we don’t always see it as a priority. Data loss is what happens to others – isn’t it? If you aren’t overly concerned about the need to back up your data, here are a few points to get you thinking.

    Malware vulnerability

    According to Sophos, over 300 million new pieces of malware are created to infect websites and computers every year. This results in 10% of all computers being infected every month and 30,000 websites being infected every single day. When viruses infect, data is lost and software is corrupted. And these infections can spread beyond just computers and websites. Viruses can be transmitted to phones, tablets, pen drives, camera cards and other storage devices – and done so unwittingly by employees and customers. Whilst antivirus software can usually protect you, the companies that make the software have to detect the malware first and then discover a way to eradicate it. Unfortunately, malware can’t be detected until it is already out there doing damage. Having a clean backup means your data isn’t lost and your business recovers far quicker and with significantly less expense.

    Hacking

    Surprisingly, most website owners think hacking is a conscious undertaking carried out by an individual who chooses a company to target. This leads them to believe that no-one would be interested in hacking their organisation and so they have little to fear. “We’re a plumbers’ merchant in Hampstead Heath, nobody would want to hack us.” It doesn’t quite work like that. Hacking is very much an automated process where computer programs, not too dissimilar to Google’s search bots, scour the entire internet looking for sites which have vulnerabilities. It is these sites which are then targeted; and the actual break-in is much more likely to be done using highly sophisticated software than by some hooded character, wearing a Salvador Dali mask, beavering away at a keyboard. Anyone who runs any kind of hacking detection software, such as the WordPress Wordfence plugin, will know that even small websites see multiple attempted break-ins on a daily basis. Indeed, 66% of all attacks are on SMEs, and in 2016 a 37% increase in the number of attacks is expected. Whilst one of the biggest threats of hacking is that data will be stolen, there is also the risk of data being lost. Once an intruder has access to your admin panel, there is nothing they cannot do. If they wish, they can delete everything and take down your entire system.

    I put it somewhere….

    Human error is one of the most significant causes of data loss for businesses. Hundreds of thousands of computers and phones go missing every year. Even back in 2008, a study found that 12,000 laptops were lost every week in US airports alone. If you are a small business and you keep your entire business records on your laptop, losing it can have enormous consequences: invoices, clients’ work, contact details, emails, website content, account details, logins, portfolios… all of it important information. It’s not just a matter of it potentially being stolen or accessed; if it is not backed up, it’s gone. And it’s not just misplaced hardware where human error causes problems; we erroneously delete data and carelessly break our devices. There are 84,900,000 results on Google for the term ‘phone down the toilet’ and over 25 million for ‘spill water on computer’, which indicates how many people have put their data in jeopardy just by accident. And, of course, devices are prone to breaking down and hard drives to failing. A regular backup would ensure that when your device dies, the data will live on and your organisation can make a full recovery.

    Make your World Backup Day pledge

    We’d like to think that we’ve done our bit here to remind you just how important backing up your data can be, and hopefully this will encourage you to support World Backup Day on 31st March by pledging to back up on that day. You can find out more about World Backup Day by watching the video below.

    Need an incentive? Here are our offers

    Hopefully, we’ve convinced you that regularly backing up your data is absolutely essential. To make things easier, we have two offers which you might be interested in. Firstly, if you buy any of our VPS, Cloud or Dedicated server packages on World Backup Day, 31st March, we’ll give you 25GB of backup storage, FREE for 12 months. If you don’t need a VPS, Cloud or Dedicated server package but still want a backup facility, we’re offering 25% off any backup purchased on 29th, 30th or 31st March.
  2. I am using rsync to recursively sync a remote folder tree that looks something like the following:

    /folderA/a1/cache
    /folderA/a1/cache/A1
    /folderA/a1/cache/A2
    /folderA/a1/somefolder
    /folderA/a1/someotherfolder
    /folderA/a2/somefolder/cache
    /folderB/cache/
    /folderB/b1/somefolder/cache
    /folderB/b1/somefolder/yetanotherfolder/cache
    /folderB/b1/somefolder/yetanotherfolder/cache/B1
    /folderB/b1/somefolder/yetanotherfolder/cache/B2

    I don't know what the folder tree will look like and it will change over time. So what I want to be able to do is recursively rsync the above but exclude the folder "cache" and any subfolders it contains, so that I ultimately end up syncing:

    /folderA/a1
    /folderA/a1/somefolder
    /folderA/a1/someotherfolder
    /folderA/a2/somefolder
    /folderB/
    /folderB/b1/somefolder
    /folderB/b1/somefolder/yetanotherfolder/

    Any suggestions?

    >> You want the --exclude flag. For example, a local rsync:

    rsync -a --exclude cache/ src_folder/ target_folder/

    https://unix.stackexchange.com/questions/5774/rsync-excluding-a-particular-subdirectory-and-its-children-where-the-subdirect
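
    A minimal sketch of the same idea against a remote source (the hostname and paths are placeholders):

    # exclude every directory named "cache", and everything inside it, at any depth
    rsync -a --exclude 'cache/' user@remote:/folderA/ /local/folderA/

    The trailing slash in the pattern restricts the match to directories, so a plain file that happens to be named "cache" would still be copied.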
  3. Hello bbestcallcenter,

    Welcome to designhost.gr.

    Feel free to browse our community, access all sorts of information and get to know our members.

    Do not hesitate to ask anything in our forums.

    designhost.gr

  4. Many, many solutions! https://stackoverflow.com/questions/42385099/1273-unknown-collation-utf8mb4-unicode-520-ci
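
    One common workaround (a sketch, assuming the error comes from importing a dump made by a newer MySQL/MariaDB into an older server; dump.sql is a placeholder) is to downgrade the collation in the dump before importing:

    # replace the unsupported collation with one the older server understands
    sed -i 's/utf8mb4_unicode_520_ci/utf8mb4_unicode_ci/g' dump.sql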
  5. https://krikienoid.github.io/flagwaver/
  6. It is very easy. The solution is on nixCraft, but in the comments.

    ## 64-bit Linux ##
    wget https://www.rarlab.com/rar/rarlinux-x64-5.5.0.tar.gz
    tar -zxvf rarlinux-x64-5.5.0.tar.gz
    cd rar
    sudo cp -v rar unrar /usr/local/bin/

    There’s no need to compile anything. The binary in the tar file works out of the box. In the example above, we copy it to /usr/local/bin so it is found by default after logging in to the system. Other “exotic” solutions are possible, but I don’t want to make suggestions.

    https://bitsanddragons.wordpress.com/2018/01/09/install-rar-unrar-on-centos-7/
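
    A quick sanity check once the binaries are in place (the file names here are placeholders):

    rar a test.rar somefile.txt   # create an archive
    unrar l test.rar              # list its contents
    unrar x test.rar              # extract with full paths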
  7. This one is ready: “Your Invision Community 4 is ready. The upgrade process is now complete and your Invision Community is now ready!”
  8. Underway. I also have to cover a glitch with their system while trying to pay for the license: “Sorry, there is a problem. The page you are trying to access is not available for your account. Error code: 2X196/1”. Opened a ticket with them.
  9. Changing the Postfix maximum email size

    I've recently had some of my customers emailing me large image attachments and my Postfix mail server has been rejecting them. A quick look at the Postfix configuration showed the message size limit was the default, so I needed to increase it. This post looks at how to see what the current email message size limit is with Postfix and how to change it.

    The configuration option is "message_size_limit" and by default it is set to 10240000 bytes, which is roughly 10MB. You can see what setting your Postfix install is currently using by issuing the following command:

    postconf | grep message_size_limit

    Just doing "postconf" on its own will show all the options, so using grep on "message_size_limit" filters the output to just the setting we want. For the default setting, you'll see this:

    message_size_limit = 10240000

    To change the setting to some other value, open the main.cf file in your favourite text editor (it's often at a location like /etc/postfix/main.cf) and either edit or add the following line, depending on whether it's already in the file:

    message_size_limit = 20480000

    In the above example I've changed it to ~20MB. Then reload Postfix like so:

    service postfix reload

    and your setting will take effect. You can query postconf again if you want to check all is as expected.

    https://www.electrictoolbox.com/postfix-email-size-limit/
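
    Alternatively (a sketch of the same ~20MB change), postconf can edit main.cf for you instead of a text editor:

    postconf -e 'message_size_limit = 20480000'   # write the new value to main.cf
    service postfix reload
    postconf message_size_limit                   # verify the running value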
  10. $ echo -en "\x00\x00\x00\x00\x00\x01\x00\x00stats\r\n" | nc -q1 -u 127.0.0.1 11211

    If you see a non-empty response, your server is vulnerable.
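
    If it does respond, a minimal hardening sketch (assuming a sysconfig-style memcached install; the options file path varies by distro) is to disable UDP and bind to localhost:

    # /etc/sysconfig/memcached: add to OPTIONS
    OPTIONS="-l 127.0.0.1 -U 0"   # -l binds to loopback, -U 0 disables the UDP port

    systemctl restart memcached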
  11. Hello Bestcall Center,

    Welcome to designhost.gr.

    Feel free to browse our community, access all sorts of information and get to know our members.

    Do not hesitate to ask anything in our forums.

    designhost.gr

  12. The Israeli security researcher Barak Tawily disclosed a vulnerability, tracked as CVE-2018-6389, that could be exploited to trigger a DoS condition on WordPress websites. The expert explained that the CVE-2018-6389 flaw is an application-level DoS issue that affects the WordPress CMS and that could be exploited by an attacker even without a massive amount of malicious traffic.

    “In this article I am going to explain how Denial of Service can easily be caused to almost any WordPress website online, and how you can patch your WordPress website in order to avoid this vulnerability being exploited.” reads the analysis of the expert.

    Tawily revealed that the flaw exists in almost all versions of WordPress released in the last nine years, including the latest one (version 4.9.2). The flaw affects the “load-scripts.php” WordPress script: it receives a parameter called load[] whose value is, for example, ‘jquery-ui-core’, and in the response the CMS provides the JS module ‘jQuery UI Core’ that was requested. As WordPress is an open-source project, it was easy for the expert to perform a code review and analyse the feature in detail.

    The load-scripts.php file was designed for WordPress admins and allows loading multiple JavaScript files into a single request, but the researcher noticed that it is possible to call the function before login, allowing anyone to invoke it. The response provided by the WordPress CMS depends upon the installed plugins and modules. It is possible to load them by simply passing the module and plugin names, separated by a comma, to the load-scripts.php file through the “load” parameter.

    https://your-wordpress-site.com/wp-admin/load-scripts.php?c=1&load[]=eutil,common,wp-a11y,sack,quicktag,colorpicker,editor,wp-fullscreen-stu,wp-ajax-response,wp-api-request,wp-pointer,autosave,heartbeat,wp-auth-check,wp-lists,prototype,scriptaculous-root,scriptaculous-builder,scriptaculous-dragdrop,scriptaculous-effects,scriptaculous-slider,scriptaculous-sound,scriptaculous-controls,scriptaculous

    load-scripts.php finds the JavaScript files included in the URL, appends their content into a single file and then sends it back to the user’s web browser. The researcher highlighted that the wp_scripts list is hard-coded and is defined in the script-loader.php file, so he decided to send a request that in response would fetch every JS module of the WordPress instance.

    “There is a well-defined list ($wp_scripts), that can be requested by users as part of the load[] parameter. If the requested value exists, the server will perform an I/O read action for a well-defined path associated with the supplied value from the user.”

    “I wondered what would happen if I sent the server a request to supply me every JS module that it stored? A single request would cause the server to perform 181 I/O actions and provide the file contents in the response.”

    Tawily developed a proof-of-concept (PoC) Python script called doser.py that he used to make large numbers of concurrent requests to the same URL to saturate the resources of the server. An attacker with good bandwidth or a limited number of bots can trigger the CVE-2018-6389 vulnerability to target popular WordPress websites.

    Tawily reported this DoS vulnerability to the WordPress team through the HackerOne platform, but the company refused to acknowledge the flaw.

    “After going back and forth about it a few times and my trying to explain and provide a PoC, they refused to acknowledge it and claimed that: “This kind of thing should really be mitigated at the server or network level rather than the application level, which is outside of WordPress’s control.“” Tawily wrote.

    The expert has implemented a mitigation against this vulnerability in a forked version of WordPress, and he has also released a bash script that addresses the issue.

    https://securityaffairs.co/wordpress/68709/hacking/cve-2018-6389-wordpress-dos-flaw.html
  13. Yesterday (Monday, February 5, 2018), a zero-day vulnerability in WordPress core was disclosed which allows an attacker to perform a denial of service (DoS) attack against a vulnerable application. The vulnerability exists in the modules used to load JS and CSS files. These modules were designed to decrease page-loading time, but have effectively rendered the WordPress core susceptible to DoS attacks. WordPress holds a market share of more than 29 percent of internet websites and 60 percent of content management systems (CMS) worldwide, turning any vulnerability in the WordPress core into a potentially large-scale exploit.

    The vulnerability exists due to a flaw in the server-side static file loading mechanism. The parameter “load” in the vulnerable modules “load-styles.php” and “load-scripts.php”, which reside under the “/wp-admin/” path, accepts an array of JS/CSS files to fetch while the page is loading. The vulnerable modules are usually used only in pages accessible by authenticated users, with an exception being the login page, which exposes said modules to unauthenticated users as well. Thus, a malicious user can repeatedly request an excessive list of JS/CSS files, causing the server to retrieve vast amounts of data and, in doing so, render it unresponsive.

    Although the load parameter can accept all 181 JS files that appear in the “script-loader.php” module, our analysis has shown that, in fact, the server doesn’t retrieve any data when calling JS files that are pulled from an external source such as https://ajax.googleapis.com. Hence, appending these JS files to the requested list is useless from an attacker’s perspective.

    Due to the simplicity of this attack, a low-skill attacker can use the existing public exploit to take down virtually any unprotected WordPress site. Because the vulnerable modules are essential, a blacklist isn’t recommended, and a simple authentication-based whitelist will not work either because it may break the login page. WordPress did not patch this vulnerability because they view it as an extensive resource exhaustion attack, and as such one that should be mitigated by a network firewall / web application firewall. This may, of course, change, and WordPress webmasters should (as always) stay tuned for new patches and versions.

    Until today (February 6, 2018), we have only seen a few dozen exploit attempts using this vulnerability, but we might see a steep rise in attacks using this exploit due to the popularity of the platform, unless a mitigation is applied in the near future. It is advised to set access restrictions on the “load-styles.php” and “load-scripts.php” modules by only allowing trusted IPs to access these resources, or by enabling two-factor authentication on the wp-admin directory. Another solution is to set rate limits on these resources.

    Vulnerability discoverer Barak Tawily released a Bash script that patches the exploit by essentially allowing only admins to send requests to the vulnerable modules, and removes the requests to the modules from the login page. The script also removes most of the code from the “noop.php” module due to “re-declaration errors”. In a testing environment in our lab we didn’t encounter any such errors, and thus we assume that altering “noop.php” is not obligatory for all WordPress users. We did not extensively test this patch in our lab, and as of today it hasn’t been integrated into the main WordPress repository. Thus, we cannot recommend this solution, as it deviates from the WordPress core source and may cause compatibility issues.

    After analyzing the data on our CDN, we have deployed mitigations against this vulnerability, which are enabled by default for all Incapsula sites with protection against “Illegal Resource Access”.

    https://www.incapsula.com/blog/cve-2018-6389-wordpress-parameter-resource-consumption-remote-dos.html?mkt_tok=eyJpIjoiWkdVNE5UUXhaalUxTUdZMSIsInQiOiJzSEl2RzJvaDZXM0FuWWJ6R1Z2SlNzdUU4OEh0Q0JZcXlhcTYzc0FUbW9hOUFwaFwvMmlmN0Y0bitEREtNUzlwYjk3SWdsOEVtZ2pkQlZ2TFZYTWlwUG1ndWhnaW42K1N6ODdSRFdMZ0YzSzI2eWpEbkhZaVYreWdxMUhKZ0NTakwifQ%3D%3D
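
    One way to apply the trusted-IP advice above is a per-file rule in the wp-admin directory (a sketch, assuming Apache 2.4 with .htaccess enabled; the IP is a placeholder):

    # /wp-admin/.htaccess
    <Files "load-scripts.php">
        Require ip 203.0.113.10
    </Files>
    <Files "load-styles.php">
        Require ip 203.0.113.10
    </Files>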
  14. I also see similar projects for Windows 7 - 8.1: https://gist.github.com/xvitaly/eafa75ed2cb79b3bd4e9
  15. I’ve been experimenting with MySQL Fabric – the official tool in development to provide HA and sharding. Part of this experimentation has meant that I have been running 7 copies of the MySQL Server on my laptop at once, which can consume a lot of memory 🙂 So today I wanted to write about how to configure MySQL to use less memory, which may also be beneficial in situations where peak performance is not important. For example: a developer VM environment.

    Previous configuration guides

    Before I get started, let me point to some of my previous configuration guides:

    - Reducing MySQL durability for testing environments – losing the ability to recover all changes in exchange for a performance gain.
    - Improving MySQL’s default configuration – explaining the motivation of changing the defaults for MySQL 5.6.
    - What to tune in MySQL 5.6 after installation – explaining what changes to make in production for MySQL 5.6.

    Configuring MySQL to use minimal memory

    The MySQL defaults have to balance performance with what is considered reasonable for what may be a development system with other applications needing to run alongside MySQL. In many cases, this will mean 4-8GB, but on virtual machines (or in my case with 7 copies of mysqld running), there is a lot less available. Obligatory warning: if you are running these settings on a machine with 1GB+ RAM, you should expect worse performance when compared to the defaults.

    Setting                     Default      Minimum
    innodb_buffer_pool_size     128M         5M
    innodb_log_buffer_size      1M           256K
    query_cache_size            1M           0
    max_connections             151          1 (although 10 might be more reasonable)
    key_buffer_size             8388608      8
    thread_cache_size           (autosized)  0
    host_cache_size             (autosized)  0
    innodb_ft_cache_size        8000000      1600000
    innodb_ft_total_cache_size  640000000    32000000
    thread_stack                262144       131072
    sort_buffer_size            262144       32K
    read_buffer_size            131072       8200
    read_rnd_buffer_size        262144       8200
    max_heap_table_size         16777216     16K
    tmp_table_size              16777216     1K
    bulk_insert_buffer_size     8388608      0
    join_buffer_size            262144       128
    net_buffer_length           16384        1K
    innodb_sort_buffer_size     1M           64K
    binlog_cache_size           32K          4K
    binlog_stmt_cache_size      32K          4K

    (Note: you may be able to save a little more memory by disabling InnoDB or the performance schema, but I don’t recommend these two steps.)

    To summarize these changes:

    # /etc/my.cnf:
    innodb_buffer_pool_size=5M
    innodb_log_buffer_size=256K
    query_cache_size=0
    max_connections=10
    key_buffer_size=8
    thread_cache_size=0
    host_cache_size=0
    innodb_ft_cache_size=1600000
    innodb_ft_total_cache_size=32000000

    # per thread or per operation settings
    thread_stack=131072
    sort_buffer_size=32K
    read_buffer_size=8200
    read_rnd_buffer_size=8200
    max_heap_table_size=16K
    tmp_table_size=1K
    bulk_insert_buffer_size=0
    join_buffer_size=128
    net_buffer_length=1K
    innodb_sort_buffer_size=64K

    # settings that relate to the binary log (if enabled)
    binlog_cache_size=4K
    binlog_stmt_cache_size=4K

    http://www.tocker.ca/2014/03/10/configuring-mysql-to-use-minimal-memory.html
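
    To confirm the running server picked up the values (a sketch, assuming a local server and client access):

    # query a single variable from the shell
    mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
    # or list everything the running server is using
    mysqladmin variables | grep buffer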
  16. You can check hard disk read/write performance from the command line. To list all disks installed on your server, use this command:

    fdisk -l

    Sample output:

    Disk /dev/sda: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot      Start      End       Blocks     Id  System
    /dev/sda1   *        1       25       200781     83  Linux
    /dev/sda2           26     1300     10241437+    83  Linux
    /dev/sda3         1301     6399     40957717+    83  Linux
    /dev/sda4         6400    60801    436984065      5  Extended
    /dev/sda5         6400    10223     30716248+    83  Linux
    /dev/sda6        10224    14047     30716248+    83  Linux
    /dev/sda7        14048    16087     16386268+    82  Linux swap / Solaris
    /dev/sda8        16088    16724      5116671     83  Linux
    /dev/sda9        16725    60801    354048471     83  Linux

    Disk /dev/sdb: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot      Start      End       Blocks     Id  System
    /dev/sdb1   *        1    60801    488384001     83  Linux

    In this case we want to check the first hard disk, /dev/sda, so use this command:

    hdparm -t /dev/sda

    Sample output:

    /dev/sda:
    Timing buffered disk reads: 310 MB in 3.05 seconds = 101.48 MB/sec

    Or you can use the dd command:

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync

    Sample output:

    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 22.6081 seconds, 47.5 MB/s

    http://ssh9.com/tips-and-tricks/check-hard-disk-speed-in-centos-from-command-line
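
    hdparm can also report cached reads alongside buffered disk reads (a quick sketch; run as root):

    hdparm -Tt /dev/sda   # -T: cached reads, -t: buffered disk reads

    And to measure read speed with dd, read the test file back after dropping the page cache so the result isn't served from RAM:

    echo 3 > /proc/sys/vm/drop_caches
    dd if=test of=/dev/null bs=64k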
  17. CloudFlair is a tool to find origin servers of websites protected by CloudFlare which are publicly exposed and don't restrict network access to the CloudFlare IP ranges as they should. The tool uses Internet-wide scan data from Censys to find exposed IPv4 hosts presenting an SSL certificate associated with the target's domain name. For more detail about this common misconfiguration and how CloudFlair works, refer to the companion blog post at https://blog.christophetd.fr/bypassing-cloudflare-using-internet-wide-scan-data/.

    Here's what CloudFlair looks like in action:

    $ python cloudflair.py myvulnerable.site

    [*] The target appears to be behind CloudFlare.
    [*] Looking for certificates matching "myvulnerable.site" using Censys
    [*] 75 certificates matching "myvulnerable.site" found.
    [*] Looking for IPv4 hosts presenting these certificates...
    [*] 10 IPv4 hosts presenting a certificate issued to "myvulnerable.site" were found.
      - 51.194.77.1
      - 223.172.21.75
      - 18.136.111.24
      - 127.200.220.231
      - 177.67.208.72
      - 137.67.239.174
      - 182.102.141.194
      - 8.154.231.164
      - 37.184.84.44
      - 78.25.205.83

    [*] Retrieving target homepage at https://myvulnerable.site
    [*] Testing candidate origin servers
      - 51.194.77.1
      - 223.172.21.75
      - 18.136.111.24 responded with an unexpected HTTP status code 404
      - 127.200.220.231 timed out after 3 seconds
      - 177.67.208.72
      - 137.67.239.174
      - 182.102.141.194
      - 8.154.231.164
      - 37.184.84.44
      - 78.25.205.83
    [*] Found 2 likely origin servers of myvulnerable.site!
      - 177.67.208.72 (HTML content identical to myvulnerable.site)
      - 182.102.141.194 (HTML content identical to myvulnerable.site)

    (The IP addresses in this example have been obfuscated and replaced by randomly generated IPs.)

    Setup

    1. Register an account (free) on https://censys.io/register
    2. Browse to https://censys.io/account/api, and set two environment variables with your API ID and API secret:

    $ export CENSYS_API_ID=...
    $ export CENSYS_API_SECRET=...

    3. Clone the repository:

    $ git clone https://github.com/christophetd/cloudflair.git

    4. Install the dependencies:

    $ cd cloudflair
    $ pip install -r requirements.txt

    5. Run CloudFlair (see Usage below for more detail):

    $ python cloudflair.py myvulnerable.site

    Usage

    $ python cloudflair.py --help

    usage: cloudflair.py [-h] [-o OUTPUT_FILE] [--censys-api-id CENSYS_API_ID]
                         [--censys-api-secret CENSYS_API_SECRET]
                         domain

    positional arguments:
      domain                The domain to scan

    optional arguments:
      -h, --help            show this help message and exit
      -o OUTPUT_FILE, --output OUTPUT_FILE
                            A file to output likely origin servers to (default: None)
      --censys-api-id CENSYS_API_ID
                            Censys API ID. Can also be defined using the CENSYS_API_ID
                            environment variable (default: None)
      --censys-api-secret CENSYS_API_SECRET
                            Censys API secret. Can also be defined using the
                            CENSYS_API_SECRET environment variable (default: None)

    Compatibility

    Tested on Python 2.7 and 3.5. Feel free to open an issue if you have bug reports or questions.

    https://github.com/christophetd/CloudFlair
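
    For example, to save the likely origin servers to a file (straight from the usage above; the domain is a placeholder):

    $ python cloudflair.py -o origins.txt myvulnerable.site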
  18. This tutorial REPLACES PHP 5.x COMPLETELY. Do NOT use it if you want multiple PHP versions.

    If you already have the REMI repository you can skip this step:

    wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    wget http://rpms.remirepo.net/enterprise/remi-release-7.rpm
    rpm -Uvh remi-release-7.rpm epel-release-latest-7.noarch.rpm

    Then enable Remi:

    yum --enablerepo=remi update remi-release

    Now remove old PHP and install PHP 7:

    service php-fpm stop
    yum -y --enablerepo=remi install php70-php php70-php-pear php70-php-bcmath php70-php-pecl-jsond-devel php70-php-mysqlnd php70-php-gd php70-php-common php70-php-fpm php70-php-intl php70-php-cli php70-php php70-php-xml php70-php-opcache php70-php-pecl-apcu php70-php-pecl-jsond php70-php-pdo php70-php-gmp php70-php-process php70-php-pecl-imagick php70-php-devel php70-php-mbstring

    Remove the old binary and link the new PHP 7 one:

    rm -f /usr/bin/php
    ln -s /usr/bin/php70 /usr/bin/php

    Test that we have PHP 7:

    php -v

    Output should be (take note of PHP 7!!):

    [root@tiny system]# php -v
    PHP 7.0.9 (cli) (built: Jul 20 2016 16:58:48) ( NTS )
    Copyright (c) 1997-2016 The PHP Group
    Zend Engine v3.0.0, Copyright (c) 1998-2016 Zend Technologies
        with Zend OPcache v7.0.9, Copyright (c) 1999-2016, by Zend Technologies

    Now open up the new PHP-FPM config file (/etc/opt/remi/php70/php-fpm.conf) and change line 11 to:

    include=/etc/php-fpm.d/*.conf

    Start PHP-FPM:

    service php70-php-fpm start

    Remove the old systemctl php-fpm unit and link to the new one:

    rm -f /usr/lib/systemd/system/php-fpm.service
    ln -s /usr/lib/systemd/system/php70-php-fpm.service /usr/lib/systemd/system/php-fpm.service

    Reload the systemctl daemon:

    systemctl daemon-reload

    Restart the web service (I use nginx):

    service nginx restart

    https://forum.vestacp.com/viewtopic.php?t=12319
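
    To confirm the web stack (not just the CLI) is now serving PHP 7, a quick sketch (the web root is an assumption; adjust for your setup, and remove the file afterwards):

    echo '<?php phpinfo();' > /usr/share/nginx/html/info.php   # assumed nginx web root
    curl -s http://localhost/info.php | grep 'PHP Version'
    rm /usr/share/nginx/html/info.php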