
NickTheGreek

Administrators
  • Content Count

    452
  • Joined

  • Last visited

  • Days Won

    76
  • Feedback

    N/A

Everything posted by NickTheGreek

  1. I am using rsync to recursively sync a remote folder tree that looks something like the following:

     /folderA/a1/cache
     /folderA/a1/cache/A1
     /folderA/a1/cache/A2
     /folderA/a1/somefolder
     /folderA/a1/someotherfolder
     /folderA/a2/somefolder/cache
     /folderB/cache/
     /folderB/b1/somefolder/cache
     /folderB/b1/somefolder/yetanotherfolder/cache
     /folderB/b1/somefolder/yetanotherfolder/cache/B1
     /folderB/b1/somefolder/yetanotherfolder/cache/B2

     I don't know what the folder tree will look like and it will change over time. So what I want to be able to do is recursively rsync the above but exclude the folder "cache" and any subfolders it contains, so that I ultimately end up syncing:

     /folderA/a1
     /folderA/a1/somefolder
     /folderA/a1/someotherfolder
     /folderA/a2/somefolder
     /folderB/
     /folderB/b1/somefolder
     /folderB/b1/somefolder/yetanotherfolder/

     Any suggestions?

     >> You want the --exclude flag. For example, a local rsync:

     rsync -a --exclude cache/ src_folder/ target_folder/

     https://unix.stackexchange.com/questions/5774/rsync-excluding-a-particular-subdirectory-and-its-children-where-the-subdirect
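A quick way to convince yourself the exclusion behaves as described; a throwaway miniature of the tree above (the directory names are just examples):

```shell
# Recreate a miniature version of the tree above
mkdir -p src/folderA/a1/cache/A1 src/folderA/a1/somefolder src/folderB/b1/somefolder/cache
# --exclude 'cache/' matches a directory named "cache" at any depth,
# so none of the cache folders (or their contents) reach dst/
rsync -a --exclude 'cache/' src/ dst/
find dst -type d -name cache   # prints nothing
```

The trailing slash in the pattern restricts the match to directories, so a plain file that happens to be named "cache" would still be copied.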
  2. Hello bbestcallcenter,

    Welcome to designhost.gr.

    Feel free to browse our community, access all sorts of information, and get to know our members.

    Do not hesitate to ask anything in our forums.

    designhost.gr

  3. Many, many solutions! https://stackoverflow.com/questions/42385099/1273-unknown-collation-utf8mb4-unicode-520-ci
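One of the recurring answers in that thread is to rewrite the collation inside the SQL dump before importing it into a server that predates utf8mb4_unicode_520_ci; a minimal sketch (dump.sql is a stand-in filename, the echoed line stands in for real mysqldump output):

```shell
# Toy line standing in for real mysqldump output using the newer collation
echo 'DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_520_ci;' > dump.sql
# Downgrade to a collation every utf8mb4-capable server understands
sed -i 's/utf8mb4_unicode_520_ci/utf8mb4_unicode_ci/g' dump.sql
cat dump.sql
```

The trade-off is slightly different sort ordering for some characters, which is usually acceptable when the alternative is an import that fails outright.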
  4. https://krikienoid.github.io/flagwaver/
  5. It is very easy. The solution is on nixCraft, but in the comments.

     ## 64 bit linux ##
     wget https://www.rarlab.com/rar/rarlinux-x64-5.5.0.tar.gz
     tar -zxvf rarlinux-x64-5.5.0.tar.gz
     cd rar
     sudo cp -v rar unrar /usr/local/bin/

     There's no need to compile or anything. The binary in the tar file works out of the box. In the above example, we copy it to /usr/local/bin so it is found by default after login on our system. Other "exotic" solutions are possible, but I don't want to make suggestions.

     https://bitsanddragons.wordpress.com/2018/01/09/install-rar-unrar-on-centos-7/
  6. This one is ready:

     "Your Invision Community 4 is ready. The upgrade process is now complete and your Invision Community is now ready!"
  7. Underway. I also have to cover a glitch with their system while trying to pay for the license:

     "Sorry, there is a problem. The page you are trying to access is not available for your account. Error code: 2X196/1"

     Opened a ticket with them.
  8. Changing the Postfix maximum email size

     I've recently had some of my customers emailing me large image attachments and my Postfix mail server has been rejecting them. A quick look at the Postfix configuration showed the message size limit was the default, so I needed to increase it. This post looks at how to see what the current email message size limit is with Postfix and how to change it.

     The configuration option is "message_size_limit" and by default is set to 10240000 bytes, which is roughly 10MB. You can see what setting your Postfix install is currently using by issuing the following command:

     postconf | grep message_size_limit

     Just doing "postconf" on its own will show all the options, so using grep on "message_size_limit" filters the output to just the setting we want. For the default setting, you'll see this:

     message_size_limit = 10240000

     To change the setting to some other value, open the main.cf file in your favourite text editor (it's often at a location like /etc/postfix/main.cf) and either edit or add the following line, depending on whether it's already in the file or not:

     message_size_limit = 20480000

     In the above example I've changed it to ~20MB. Then reload Postfix like so:

     service postfix reload

     and your setting will take effect. You can query postconf again if you want to check all is as expected.

     https://www.electrictoolbox.com/postfix-email-size-limit/
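The edit can also be done non-interactively. On a live box, `postconf -e 'message_size_limit = 20480000'` writes main.cf for you; the sketch below does the equivalent with sed against a local stand-in file (main.cf.example is made up for illustration, not the real /etc/postfix/main.cf):

```shell
cf=main.cf.example
echo 'message_size_limit = 10240000' > "$cf"   # stand-in for the real main.cf
# Bump the limit from ~10MB to ~20MB in place
sed -i 's/^message_size_limit = .*/message_size_limit = 20480000/' "$cf"
grep message_size_limit "$cf"
```

Remember that the effective limit also has to fit within mailbox_size_limit, which Postfix requires to be at least as large as message_size_limit.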
  9. $ echo -en "\x00\x00\x00\x00\x00\x01\x00\x00stats\r\n" | nc -q1 -u 127.0.0.1 11211

     If you see a non-empty response, your server is vulnerable.
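If that probe returns stats over UDP, a common hardening step is to disable memcached's UDP listener and bind it to localhost; a sketch assuming a sysconfig-style distro (the path /etc/sysconfig/memcached and the OPTIONS variable name are distro-specific assumptions):

```shell
# /etc/sysconfig/memcached (path varies by distro)
# -U 0 disables the UDP listener abused for amplification attacks;
# -l 127.0.0.1 keeps the TCP listener off the public interface
OPTIONS="-U 0 -l 127.0.0.1"
```

Restart memcached afterwards (e.g. service memcached restart) and re-run the nc probe to confirm it no longer answers.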
  10. Hello Bestcall Center,

    Welcome to designhost.gr.

    Feel free to browse our community, access all sorts of information, and get to know our members.

    Do not hesitate to ask anything in our forums.

    designhost.gr

  11. The Israeli security researcher Barak Tawily disclosed a vulnerability, tracked as CVE-2018-6389, that could be exploited to trigger a DoS condition on WordPress websites. The expert explained that the CVE-2018-6389 flaw is an application-level DoS issue that affects the WordPress CMS and could be exploited by an attacker even without a massive amount of malicious traffic.

      "In this article I am going to explain how Denial of Service can easily be caused to almost any WordPress website online, and how you can patch your WordPress website in order to avoid this vulnerability being exploited." reads the analysis of the expert.

      Tawily revealed that the flaw exists in almost all versions of WordPress released in the last nine years, including the latest one (version 4.9.2). The flaw affects the "load-scripts.php" WordPress script: it receives a parameter called load[] with the value 'jquery-ui-core', and in the response the CMS provides the JS module 'jQuery UI Core' that was requested. As WordPress is an open-source project, it was easy for the expert to perform a code review and analyze the feature in detail.

      The load-scripts.php file was designed for WordPress admins and allows loading multiple JavaScript files in a single request, but the researcher noticed that it is possible to call the function before login, allowing anyone to invoke it. The response provided by the WordPress CMS depends on the installed plugins and modules. It is possible to load them by simply passing the module and plugin names, separated by commas, to the load-scripts.php file through the "load" parameter.

      https://your-wordpress-site.com/wp-admin/load-scripts.php?c=1&load[]=eutil,common,wp-a11y,sack,quicktag,colorpicker,editor,wp-fullscreen-stu,wp-ajax-response,wp-api-request,wp-pointer,autosave,heartbeat,wp-auth-check,wp-lists,prototype,scriptaculous-root,scriptaculous-builder,scriptaculous-dragdrop,scriptaculous-effects,scriptaculous-slider,scriptaculous-sound,scriptaculous-controls,scriptaculous

      load-scripts.php finds the JavaScript files included in the URL, appends their content into a single file, and sends it back to the user's web browser. The researcher highlighted that the wp_scripts list is hard-coded and defined in the script-loader.php file, so he decided to send a request whose response would contain every JS module of the WordPress instance.

      "There is a well-defined list ($wp_scripts), that can be requested by users as part of the load[] parameter. If the requested value exists, the server will perform an I/O read action for a well-defined path associated with the supplied value from the user."

      "I wondered what would happen if I sent the server a request to supply me every JS module that it stored? A single request would cause the server to perform 181 I/O actions and provide the file contents in the response."

      Tawily developed a proof-of-concept (PoC) Python script called doser.py that he used to make large numbers of concurrent requests to the same URL to saturate the server's resources. An attacker with good bandwidth or a limited number of bots can trigger the CVE-2018-6389 vulnerability to target popular WordPress websites. Tawily reported this DoS vulnerability to the WordPress team through the HackerOne platform, but the company refused to acknowledge the flaw.

      "After going back and forth about it a few times and my trying to explain and provide a PoC, they refused to acknowledge it and claimed that: "This kind of thing should really be mitigated at the server or network level rather than the application level, which is outside of WordPress's control."" Tawily wrote.

      The expert has implemented a mitigation against this vulnerability in a forked version of WordPress, and he has also released a bash script that addresses the issue.

      https://securityaffairs.co/wordpress/68709/hacking/cve-2018-6389-wordpress-dos-flaw.html
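A server-level mitigation along the lines WordPress suggested is to allow-list who may hit the vulnerable endpoints; a hedged sketch for nginx (203.0.113.10 is a placeholder admin IP, and the fastcgi_pass socket path is an assumption you must adjust to your own PHP-FPM setup):

```nginx
# Restrict the endpoints abused by CVE-2018-6389 to trusted IPs only
location ~ ^/wp-admin/load-(scripts|styles)\.php$ {
    allow 203.0.113.10;   # placeholder: your admin IP
    deny  all;
    # hand allowed requests to the site's existing PHP-FPM upstream
    include fastcgi_params;
    fastcgi_pass unix:/run/php-fpm.sock;   # assumption: adjust to your setup
}
```

Blocking these scripts outright can break styling on the wp-login.php page for anonymous visitors, so test the login flow after applying a rule like this.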
  12. Yesterday (Monday, February 5, 2018), a zero-day vulnerability in WordPress core was disclosed, which allows an attacker to perform a denial of service (DoS) attack against a vulnerable application. The vulnerability exists in the modules used to load JS and CSS files. These modules were designed to decrease page-loading time, but have effectively rendered the WordPress core susceptible to DoS attacks. WordPress holds a market share of more than 29 percent of internet websites and 60 percent of content management systems (CMS) worldwide, turning any vulnerability in the WordPress core into a potentially large-scale exploit.

      The vulnerability exists due to a flaw in the server-side static file loading mechanism. The parameter "load" in the vulnerable modules "load-styles.php" and "load-scripts.php", which reside under the "/wp-admin/" path, accepts an array of JS/CSS files to fetch while the page is loading. The vulnerable modules are usually used only in pages accessible by authenticated users, with an exception being the login page, which exposes said modules to unauthenticated users as well. Thus, a malicious user can repeatedly request an excessive list of JS/CSS files, causing the server to retrieve vast amounts of data and, in doing so, render it unresponsive.

      Although the load parameter can accept all 181 JS files that appear in the "script-loader.php" module, our analysis has shown that the server doesn't in fact retrieve any data when calling JS files that are pulled from an external source such as https://ajax.googleapis.com. Hence, appending these JS files to the requested list is useless from an attacker's perspective.

      Due to the simplicity of this attack, a low-skill attacker can use the existing public exploit to take down virtually any unprotected WordPress site. Because the vulnerable modules are essential, a blacklist isn't recommended, and a simple authentication-based whitelist will not work either because it may break the login page.

      WordPress did not patch this vulnerability because they view it as an extensive resource exhaustion attack, and as such it should be mitigated by a network firewall / web application firewall. This may, of course, change, and WordPress webmasters should (as always) stay tuned for new patches and versions. Until today (February 6, 2018), we have only seen a few dozen exploit attempts using this vulnerability, but we might see a steep rise in attacks using this exploit due to the popularity of the platform, unless a mitigation is applied in the near future.

      It is advised to restrict access to the "load-styles.php" and "load-scripts.php" modules by only allowing trusted IPs to access these resources, or by enabling two-factor authentication on the wp-admin directory. Another solution is to set rate limits on these resources.

      Vulnerability discoverer Barak Tawily released a Bash script that patches the exploit by essentially allowing only admins to send requests to the vulnerable modules, and removes the requests to the modules from the login page. The script also removes most of the code from the "noop.php" module due to "re-declaration errors". In a testing environment in our lab we didn't encounter any such errors, and thus we assume that altering "noop.php" is not obligatory for all WordPress users. We did not extensively test this patch in our lab, and as of today it hasn't been integrated into the main WordPress repository. Thus, we cannot recommend this solution, as it deviates from the WordPress core source and may cause compatibility issues.

      After analyzing the data on our CDN, we have deployed mitigations against this vulnerability, which are enabled by default for all Incapsula sites with protection against "Illegal Resource Access".
https://www.incapsula.com/blog/cve-2018-6389-wordpress-parameter-resource-consumption-remote-dos.html?mkt_tok=eyJpIjoiWkdVNE5UUXhaalUxTUdZMSIsInQiOiJzSEl2RzJvaDZXM0FuWWJ6R1Z2SlNzdUU4OEh0Q0JZcXlhcTYzc0FUbW9hOUFwaFwvMmlmN0Y0bitEREtNUzlwYjk3SWdsOEVtZ2pkQlZ2TFZYTWlwUG1ndWhnaW42K1N6ODdSRFdMZ0YzSzI2eWpEbkhZaVYreWdxMUhKZ0NTakwifQ%3D%3D
  13. I also see similar projects for Windows 7 / 8.1: https://gist.github.com/xvitaly/eafa75ed2cb79b3bd4e9
  14. I’ve been experimenting with MySQL Fabric, the official tool in development to provide HA and sharding. Part of this experimentation has meant that I have been running 7 copies of the MySQL Server on my laptop at once, which can consume a lot of memory 🙂 So today I wanted to write about how to configure MySQL to use less memory, which may also be beneficial in situations where peak performance is not important. For example: a developer VM environment.

      Previous configuration guides

      Before I get started, let me point to some of my previous configuration guides:

      • Reducing MySQL durability for testing environments: losing the ability to recover all changes in exchange for a performance gain.
      • Improving MySQL's default configuration: explaining the motivation of changing the defaults for MySQL 5.6.
      • What to tune in MySQL 5.6 after installation: explaining what changes to make in production for MySQL 5.6.

      Configuring MySQL to use minimal memory

      The MySQL defaults have to balance performance with what is considered reasonable for what may be a development system with other applications needing to run alongside MySQL. In many cases, this will mean 4-8GB, but on virtual machines (or in my case with 7 copies of mysqld running), there is a lot less available.

      Obligatory warning: if you are running these settings on a machine with 1GB+ RAM, you should expect worse performance when compared to the defaults.

      Setting                      Default      Minimum
      innodb_buffer_pool_size      128M         5M
      innodb_log_buffer_size       1M           256K
      query_cache_size             1M           0
      max_connections              151          1 (although 10 might be more reasonable)
      key_buffer_size              8388608      8
      thread_cache_size            (autosized)  0
      host_cache_size              (autosized)  0
      innodb_ft_cache_size         8000000      1600000
      innodb_ft_total_cache_size   640000000    32000000
      thread_stack                 262144       131072
      sort_buffer_size             262144       32K
      read_buffer_size             131072       8200
      read_rnd_buffer_size         262144       8200
      max_heap_table_size          16777216     16K
      tmp_table_size               16777216     1K
      bulk_insert_buffer_size      8388608      0
      join_buffer_size             262144       128
      net_buffer_length            16384        1K
      innodb_sort_buffer_size      1M           64K
      binlog_cache_size            32K          4K
      binlog_stmt_cache_size       32K          4K

      (Note: you may be able to save a little more memory by disabling InnoDB or performance schema, but I don't recommend these two steps.)

      To summarize these changes:

      # /etc/my.cnf:
      innodb_buffer_pool_size=5M
      innodb_log_buffer_size=256K
      query_cache_size=0
      max_connections=10
      key_buffer_size=8
      thread_cache_size=0
      host_cache_size=0
      innodb_ft_cache_size=1600000
      innodb_ft_total_cache_size=32000000

      # per thread or per operation settings
      thread_stack=131072
      sort_buffer_size=32K
      read_buffer_size=8200
      read_rnd_buffer_size=8200
      max_heap_table_size=16K
      tmp_table_size=1K
      bulk_insert_buffer_size=0
      join_buffer_size=128
      net_buffer_length=1K
      innodb_sort_buffer_size=64K

      # settings that relate to the binary log (if enabled)
      binlog_cache_size=4K
      binlog_stmt_cache_size=4K

      http://www.tocker.ca/2014/03/10/configuring-mysql-to-use-minimal-memory.html
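As a rough sanity check on those numbers, mysqld's worst-case footprint is approximately the global buffers plus max_connections times the per-connection buffers. A back-of-the-envelope sketch using the minimal values above (illustrative only: it ignores mysqld's fixed overhead, the performance schema, and the InnoDB FT caches):

```shell
# Global buffers: innodb_buffer_pool_size + innodb_log_buffer_size + key_buffer_size
global=$((5*1024*1024 + 256*1024 + 8))
# Per-connection buffers: thread_stack + sort + read + read_rnd + join + net
per_conn=$((131072 + 32*1024 + 8200 + 8200 + 128 + 1024))
# With max_connections=10, as suggested in the table above
total=$((global + 10*per_conn))
echo "$total bytes (~$((total/1024/1024)) MB)"   # prints: 7318952 bytes (~6 MB)
```

In other words, with these settings the buffer budget fits in well under 10MB, which is why seven instances can share one laptop.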
  15. You can check hard disk write/read performance from the command line. To list all disks installed on your server, use this command:

      fdisk -l

      Sample output:

      Disk /dev/sda: 500.1 GB, 500107862016 bytes
      255 heads, 63 sectors/track, 60801 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes

      Device Boot      Start       End      Blocks   Id  System
      /dev/sda1   *        1        25      200781   83  Linux
      /dev/sda2           26      1300    10241437+  83  Linux
      /dev/sda3         1301      6399    40957717+  83  Linux
      /dev/sda4         6400     60801   436984065    5  Extended
      /dev/sda5         6400     10223    30716248+  83  Linux
      /dev/sda6        10224     14047    30716248+  83  Linux
      /dev/sda7        14048     16087    16386268+  82  Linux swap / Solaris
      /dev/sda8        16088     16724     5116671   83  Linux
      /dev/sda9        16725     60801   354048471   83  Linux

      Disk /dev/sdb: 500.1 GB, 500107862016 bytes
      255 heads, 63 sectors/track, 60801 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes

      Device Boot      Start       End      Blocks   Id  System
      /dev/sdb1   *        1     60801   488384001   83  Linux

      In this case we want to check the first hard disk, /dev/sda. Use this command:

      hdparm -t /dev/sda

      Sample output:

      /dev/sda:
      Timing buffered disk reads: 310 MB in 3.05 seconds = 101.48 MB/sec

      Or you can use the dd command:

      dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync

      Sample output:

      16384+0 records in
      16384+0 records out
      1073741824 bytes (1.1 GB) copied, 22.6081 seconds, 47.5 MB/s

      http://ssh9.com/tips-and-tricks/check-hard-disk-speed-in-centos-from-command-line
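The dd example above only measures write speed; reading the file back gives a rough read figure too. A down-scaled sketch (1 MiB instead of 1 GiB so it finishes instantly; scale count back up to 16k for a real benchmark):

```shell
# Write 16 x 64 KiB = 1 MiB; conv=fdatasync forces a flush to disk,
# so the reported timing includes the physical write, not just the page cache
dd if=/dev/zero of=ddtest.bin bs=64k count=16 conv=fdatasync
# Read it back; note this may largely hit the page cache and so look very fast
dd if=ddtest.bin of=/dev/null bs=64k
```

For an honest cold-read number, drop the caches first (echo 3 > /proc/sys/vm/drop_caches as root) or read a file larger than RAM.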
  16. CloudFlair is a tool to find origin servers of websites protected by CloudFlare which are publicly exposed and don't restrict network access to the CloudFlare IP ranges as they should. The tool uses Internet-wide scan data from Censys to find exposed IPv4 hosts presenting an SSL certificate associated with the target's domain name. For more detail about this common misconfiguration and how CloudFlair works, refer to the companion blog post at https://blog.christophetd.fr/bypassing-cloudflare-using-internet-wide-scan-data/.

      Here's what CloudFlair looks like in action:

      $ python cloudflair.py myvulnerable.site
      [*] The target appears to be behind CloudFlare.
      [*] Looking for certificates matching "myvulnerable.site" using Censys
      [*] 75 certificates matching "myvulnerable.site" found.
      [*] Looking for IPv4 hosts presenting these certificates...
      [*] 10 IPv4 hosts presenting a certificate issued to "myvulnerable.site" were found.
        - 51.194.77.1
        - 223.172.21.75
        - 18.136.111.24
        - 127.200.220.231
        - 177.67.208.72
        - 137.67.239.174
        - 182.102.141.194
        - 8.154.231.164
        - 37.184.84.44
        - 78.25.205.83
      [*] Retrieving target homepage at https://myvulnerable.site
      [*] Testing candidate origin servers
        - 51.194.77.1
        - 223.172.21.75
        - 18.136.111.24 responded with an unexpected HTTP status code 404
        - 127.200.220.231 timed out after 3 seconds
        - 177.67.208.72
        - 137.67.239.174
        - 182.102.141.194
        - 8.154.231.164
        - 37.184.84.44
        - 78.25.205.83
      [*] Found 2 likely origin servers of myvulnerable.site!
        - 177.67.208.72 (HTML content identical to myvulnerable.site)
        - 182.102.141.194 (HTML content identical to myvulnerable.site)

      (The IP addresses in this example have been obfuscated and replaced by randomly generated IPs.)

      Setup

      Register an account (free) on https://censys.io/register. Browse to https://censys.io/account/api, and set two environment variables with your API ID and API secret:

      $ export CENSYS_API_ID=...
      $ export CENSYS_API_SECRET=...

      Clone the repository:

      $ git clone https://github.com/christophetd/cloudflair.git

      Install the dependencies:

      $ cd cloudflair
      $ pip install -r requirements.txt

      Run CloudFlair (see Usage below for more detail):

      $ python cloudflair.py myvulnerable.site

      Usage

      $ python cloudflair.py --help
      usage: cloudflair.py [-h] [-o OUTPUT_FILE] [--censys-api-id CENSYS_API_ID]
                           [--censys-api-secret CENSYS_API_SECRET]
                           domain

      positional arguments:
        domain                The domain to scan

      optional arguments:
        -h, --help            show this help message and exit
        -o OUTPUT_FILE, --output OUTPUT_FILE
                              A file to output likely origin servers to (default: None)
        --censys-api-id CENSYS_API_ID
                              Censys API ID. Can also be defined using the CENSYS_API_ID
                              environment variable (default: None)
        --censys-api-secret CENSYS_API_SECRET
                              Censys API secret. Can also be defined using the
                              CENSYS_API_SECRET environment variable (default: None)

      Compatibility

      Tested on Python 2.7 and 3.5. Feel free to open an issue if you have bug reports or questions.

      https://github.com/christophetd/CloudFlair
  17. This tutorial REPLACES PHP 5.X COMPLETELY. Do NOT use it if you want multiple PHP versions.

      If you have the REMI repository you can skip this step:

      wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
      wget http://rpms.remirepo.net/enterprise/remi-release-7.rpm
      rpm -Uvh remi-release-7.rpm epel-release-latest-7.noarch.rpm

      Then enable Remi:

      yum --enablerepo=remi update remi-release

      Now remove old PHP and install PHP 7:

      service php-fpm stop
      yum -y --enablerepo=remi install php70-php php70-php-pear php70-php-bcmath php70-php-pecl-jsond-devel php70-php-mysqlnd php70-php-gd php70-php-common php70-php-fpm php70-php-intl php70-php-cli php70-php php70-php-xml php70-php-opcache php70-php-pecl-apcu php70-php-pecl-jsond php70-php-pdo php70-php-gmp php70-php-process php70-php-pecl-imagick php70-php-devel php70-php-mbstring

      Remove and re-link the new PHP 7 binary:

      rm -f /usr/bin/php
      ln -s /usr/bin/php70 /usr/bin/php

      Test that we have PHP 7:

      php -v

      Output should be (take note of PHP 7!!):

      [root@tiny system]# php -v
      PHP 7.0.9 (cli) (built: Jul 20 2016 16:58:48) ( NTS )
      Copyright (c) 1997-2016 The PHP Group
      Zend Engine v3.0.0, Copyright (c) 1998-2016 Zend Technologies
          with Zend OPcache v7.0.9, Copyright (c) 1999-2016, by Zend Technologies

      Now open up the new php-fpm.conf file ( /etc/opt/remi/php70/php-fpm.conf ) and change line 11 to:

      include=/etc/php-fpm.d/*.conf

      Start PHP-FPM:

      service php70-php-fpm start

      Remove the old php-fpm systemd unit and link to the new one:

      rm -f /usr/lib/systemd/system/php-fpm.service
      ln -s /usr/lib/systemd/system/php70-php-fpm.service /usr/lib/systemd/system/php-fpm.service

      Reload the systemd daemon:

      systemctl daemon-reload

      Restart the web service (I use nginx):

      service nginx restart

      https://forum.vestacp.com/viewtopic.php?t=12319
  18. Loaded plugins: fastestmirror
      C7.0.1406-base                          | 3.6 kB  00:00:00
      C7.0.1406-updates                       | 2.9 kB  00:00:00
      C7.1.1503-base                          | 3.6 kB  00:00:00
      C7.1.1503-updates                       | 3.4 kB  00:00:00
      C7.2.1511-base                          | 3.6 kB  00:00:00
      C7.2.1511-updates                       | 3.4 kB  00:00:00
      http://vault.centos.org/7.3.1611/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
      Trying other mirror.
      To address this issue please refer to the below knowledge base article
      https://access.redhat.com/articles/1320623
      If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/

      One of the configured repositories failed (CentOS-7.3.1611 - Base), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this:

      1. Contact the upstream for the repository and get them to fix the problem.
      2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work).
      3. Run the command with the repository temporarily disabled:
         yum --disablerepo=C7.3.1611-base ...
      4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage:
         yum-config-manager --disable C7.3.1611-base
         or
         subscription-manager repos --disable=C7.3.1611-base
      5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo when it runs most commands, so it will have to try and fail each time (and thus yum will be much slower). If it is a very temporary problem though, this is often a nice compromise:
         yum-config-manager --save --setopt=C7.3.1611-base.skip_if_unavailable=true

      failure: repodata/repomd.xml from C7.3.1611-base: [Errno 256] No more mirrors to try.
      http://vault.centos.org/7.3.1611/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
      Error: yum install failed

      ----------

      Fix: add the lines

      skip_if_unavailable = 1
      keepcache = 0

      to the file /etc/yum.repos.d/CentOS-Vault.repo

      Example, was:

      [C7.3.1611-extras]
      name=CentOS-7.3.1611 - Extras
      baseurl=http://vault.centos.org/7.3.1611/extras/$basearch/
      gpgcheck=1
      gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
      enabled=0

      now:

      [C7.3.1611-extras]
      name=CentOS-7.3.1611 - Extras
      baseurl=http://vault.centos.org/7.3.1611/extras/$basearch/
      gpgcheck=1
      gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
      enabled=0
      skip_if_unavailable = 1
      keepcache = 0
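The fix above can be scripted rather than edited by hand; a sketch against a throwaway copy of the repo file (CentOS-Vault.repo.example is a stand-in; the real file is /etc/yum.repos.d/CentOS-Vault.repo and contains several sections, each of which needs the two lines):

```shell
repo=CentOS-Vault.repo.example
# Miniature single-section stand-in for the real repo file
printf '[C7.3.1611-extras]\nname=CentOS-7.3.1611 - Extras\nenabled=0\n' > "$repo"
# Append the two fix lines to the (only) section
printf 'skip_if_unavailable = 1\nkeepcache = 0\n' >> "$repo"
cat "$repo"
```

On a live system, `yum-config-manager --save --setopt='C7.3.1611-*.skip_if_unavailable=true'` achieves the same per-section result without hand-editing.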