Tuesday, 24 June 2014

Fuzzing Tools - Making Sense Out of Nonsense

A burglar faced with a locked house uses guile to force an entry. Locksmiths produce tumbler locks that can only be opened with the correct key, but the burglar often ignores the complexities of lock-picking and will try to slide a flexible plastic sheet through the gap between the door and the door jamb to push the catch back, after which the door sometimes opens with ease. In other words, a burglar attacks a door in a way that was unforeseen. If this method of entry does not work, the burglar may look elsewhere and smash a window to gain entry.

Similarly, server attackers work on accepted entry points by treating them in ways they were never intended to be used in order to force an entry. The more complex the program, the more likely there will be a flaw or a bug that can be worked on. Access to annotated source code can reveal possible areas for attackers to work on, but applications have thousands of lines of code that need to be sifted through. It is even worse if all they have is the compiled binary, which has to be disassembled first; in that case the hacker has to sift through the instructions without any annotations to guide them through the logic. These two methods are the equivalent of picking locks: using source code is akin to accessing the locksmith's original designs or an impression of the actual key, while working from the binary is like using picks and experience to force the lock open. With so much code to sift through, both methods are time-consuming and require the knowledge and patience of a specialist. They are the preserve of the dedicated professional. Often the code is unavailable in any format and the average hacker has to stand back and look at the bigger picture.

Applications process data, and that information is supplied externally via keyboard input or from strings provided by ancillary applications. These use specific formats, called protocols. A protocol may dictate that the information is a field of characters or digits of a specific maximum length, such as a name or a telephone number. The protocol may be more complex and recognize only Adobe Acrobat PDF files or JPEG image files, or, if the input comes from another application, it might be proprietary.

Subverting the Input

The question is how to subvert these official entry points and use them to crash the application or, even better, to open up a way to inject new code that lets the hacker take control of the server. The incoming data needs to be stored in a buffer so that it can be processed by the application, and this is the key to opening up an entry point.

In November 1988, the Morris worm gave the world a reality check on how hackers can disrupt computer systems and inject disruptive code using weaknesses in software design. The worm exploited flaws in BSD Unix running on DEC VAX and Sun servers and succeeded in bringing down roughly 10% of the internet's servers. This alerted the world to the dangers of buffer overflows. Buffer overflows occur when malformed or oversized data fields are fed into an application. The program is expecting input that complies with a specific protocol, but what happens if the input does not comply? In many cases the answer is that it will disrupt the execution of the application in some way.
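To make that concrete, here is a minimal sketch in C of the kind of flaw being described: a hypothetical handler that assumes its input honours the protocol's length limit and copies it into a fixed-size buffer without checking. The function and field names are purely illustrative.

#include <stdio.h>
#include <string.h>

/* Deliberately unsafe: the protocol says names are short, but nothing
 * here enforces it. Input longer than 32 bytes overruns the buffer and
 * corrupts whatever the compiler placed next to it on the stack. */
void handle_name_field(const char *input)
{
    char name[32];            /* buffer sized for "legal" input only */
    strcpy(name, input);      /* unbounded copy of attacker-supplied data */
    printf("hello, %s\n", name);
}

int main(int argc, char **argv)
{
    if (argc > 1)
        handle_name_field(argv[1]);   /* the attacker controls the length */
    return 0;
}

Fed a short argument it behaves; fed a few hundred bytes it typically crashes, which is exactly the kind of misbehaviour a fuzzer is hunting for.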
This brute-force technique has proved to be a rich source for code injection on many computer applications and operating systems, and 20 years on from the Morris exploit it still figures highly in the list of common attack methods. It may seem strange that after so many years there are still loopholes that can be exploited, but this has a lot to do with the way applications are tested before finally being released to the users. Pre-launch quality assurance (QA) checking looks for obvious problems by testing that the protocols do work, initially by doing everything in the way the developer intended. The problem is that the developer should also have protected the code against people using the application in ways the developer did not intend. Even the best QA department cannot test for everything, but more importantly, the QA department is in charge of making sure the application works as intended, so it does not check what happens when the application is not used as intended. This becomes obvious when we see Microsoft, Oracle and other software specialists rushing out security fixes after an application has been released for sale. There are just too many options available, and hackers always seem to find new ways to exploit code that could never have been dreamed of by the developers or checked by the QA team.

The process of feeding in false inputs is known as fuzzing, and it has become a small industry of its own. A wide range of fuzzing tools has been developed by the elite hacker community to enable the rank and file to execute exploits beyond their own natural abilities. These tools are also adopted or adapted in the QA world to test applications before they are released. Buffer overflow attacks are well known, and a number of tools, or fuzzers, are openly available on the internet. Some of these are used by QA, but new tools using sophisticated techniques are appearing all the time, and many target specific applications. Fuzzing techniques are used to find all manner of security vulnerabilities: apart from highly publicised buffer overflows, there are related integer overflows, race condition flaws, SQL injection, and cross-site scripting. In fact, the majority of vulnerabilities can be exploited or detected using fuzzing techniques. When the applications for exploiting this range of possible vulnerabilities are added to the buffer overflow fuzzing tools, the list is long and daunting.

QA Headache

The QA department faces a huge problem. Hackers outnumber QA staff and are able to specialize in particular forms of exploit. By contrast, a QA expert has to be a jack-of-all-trades, and it is a constant battle to keep up with the latest exploits and hacks. Attackers are always finding new techniques, which take time to surface. For this reason, partnerships between security-focused companies are important.

With direct access to the server at the focus of the fuzzing attack, it is easy to monitor the effects on the host. Valuable information can be gained by using a suitable debugger, such as the free OllyDbg for Windows-based systems or the GDB debugger that comes with most Unix systems. Specific parameters can also be revealed, such as memory usage, network activity, file system actions and, for Windows, registry file access; tools for these purposes can be found in the Sysinternals Suite, now owned by Microsoft.
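As a rough illustration of this kind of local monitoring, the sketch below runs each fuzz case against a hypothetical target binary (here called ./target, taking a test-case file as its argument) in a child process and reports whether it exited normally or was killed by a signal such as SIGSEGV. A real setup would go further and attach a debugger or the tracing tools mentioned above.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run one fuzz case and report how the target terminated. */
static void run_case(const char *target, const char *input_file)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: execute the target on the fuzzed input file. */
        execl(target, target, input_file, (char *)NULL);
        _exit(127);                                  /* exec failed */
    }

    int status;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status))
        /* A fatal signal (e.g. SIGSEGV) suggests memory corruption
         * worth re-running under a debugger. */
        printf("%s: crashed with signal %d\n", input_file, WTERMSIG(status));
    else if (WIFEXITED(status))
        printf("%s: exited normally (code %d)\n", input_file, WEXITSTATUS(status));
}

int main(int argc, char **argv)
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s ./target case1 [case2 ...]\n", argv[0]);
        return 1;
    }
    for (int i = 2; i < argc; i++)
        run_case(argv[1], argv[i]);
    return 0;
}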
Remote hacking lacks this refined option. Instead, monitoring network traffic may provide clues as to whether a system has become unstable or crashed: the absence of reply packets, the presence of unusual packets, or the absence of a service for long periods may indicate a crash. Applications like Autodafe are examining the possibility of analyzing program reactions using tracers in an attempt to improve detection of the server's status.

Fuzzing tools are useful because they automate the drudgery of the task. For example, transmitting data fields of various sizes by manually incrementing field lengths is boring, and the task can easily be handled in code. Practice has shown that buffer lengths often follow a power-of-two sequence, so test data tends to increase in size to just over the normal sizes: the sequence 16, 32, 64, 128 would be matched by data lengths of 20, 40, 70, 130. Similarly, after trying packets with malformed headers, specific file formats should be correctly packaged, allowing the data payload to be manipulated without affecting the apparent validity of the packet. Test data should also reflect the kind of data the application may be looking for, using @ signs, full stops and commas within email applications, or typical URL symbols for HTTP servers.

Fuzzing techniques fall into three basic types: session data, specialized, and generic. Session data fuzzing is the simplest because it transforms legal data incrementally. For example, the starting point could be the SMTP command:

mail from: sender @ testhost

which would then be sent in the following forms to see what effect they have:

mailmailmailmail from: sender @ testhost
mail fromfromfromfrom: sender @ testhost
mail from:::: sender @ testhost
mail from: sendersendersendersender @ testhost
mail from: sender @@@@ testhost
mail from: sender @ testhosttesthosttesthosttesthost

Specialized fuzzers are the ones that target specific protocols. Typically these would be network protocols such as SMTP, FTP, SSH, and SIP, but they have now expanded to include file types such as documents, image files, video formats, and Flash animations. The most flexible type is the generic fuzzer, which allows the user to define the packet type, the protocol, and the elements within it to be fuzzed. Its flexibility is balanced by the fact that users have to be aware of the vulnerabilities to be tested and may overlook some. It is crucial that every element in the protocol is tested, no matter how unimportant it may seem. In the example above, it may seem pointless to repeat the colon, but this could be the very flaw the hacker is looking for. The lesson is that nothing should be taken for granted.

Buffer Overflows

Developers are not infallible. When buffer overflows started to hit the headlines, many C programmers switched to using bounded string operations as a cure-all. Unfortunately, the strncpy() function was often used incorrectly, resulting in the off-by-one error. This is caused by setting a buffer size to, say, 32 characters. It sounds logical enough, but the input field needs a null terminator, which has to be allowed for in the character count and added by the application. The null marks the buffer's edge, but it is overwritten by an apparently legal 32-character input. The boundary between neighbouring buffers then disappears: later accesses may treat the two strings as one larger buffer, opening up the possibility of a buffer overflow exploit where one may not have existed before.
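A minimal sketch of that off-by-one, using a hypothetical record with two adjacent 32-byte fields; the field names and values are illustrative.

#include <stdio.h>
#include <string.h>

struct record {
    char name[32];   /* the developer "bounds" the copy to 32 bytes...  */
    char role[32];   /* ...but forgets to reserve room for the '\0'     */
};

int main(void)
{
    struct record r;
    const char *input = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA";  /* 32 chars */

    strncpy(r.name, input, sizeof(r.name));  /* copies 32 bytes, no '\0' */
    strcpy(r.role, "admin");

    /* With no terminator inside name, string functions run straight on
     * into the neighbouring field: the two buffers behave as one. */
    printf("apparent length of name: %zu\n", strlen(r.name)); /* 37, not 31 */
    printf("name now reads as: %s\n", r.name);  /* "AAAA...AAAAadmin"      */

    /* The safe pattern reserves the last byte for the terminator:
     *   strncpy(r.name, input, sizeof(r.name) - 1);
     *   r.name[sizeof(r.name) - 1] = '\0';
     */
    return 0;
}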
"Developers are not infallible" Once a weakness has been found the QA process may be almost over, barring a fix being devised and issued. For the hacker, however, the real task is just beginning. A successful fuzz attack typically ends with an application crash - not a clever trick unless disruption is the aim. What it does indicate is that some executable bytes have been overwritten with nonsense. The chances are that this is probably a stack and a return address has been corrupted causing the application to jump to some arbitrary memory location. Before being overwritten with nonsensical input, this location would be a pointer to the continuation of the legally running application code. Once the buffer is lying beside the stack, the hacker carefully crafts an oversized buffer input to overwrite the jump address at the top of the stack with a pointer to executable code stored elsewhere in RAM instead of just arbitrary bytes as before. Usually the pointer is set to the beginning of the buffer. When writing the new input, the hacker uses padding to ensure the four bytes carrying the jump location is correctly placed on the stack. Rather than just any kind of padding, the bytes used form a shellcode routine in assembly code. When the pointer redirects program execution to the buffered code, the attacker has taken control of program execution and can take control of the server, assuming the interrupted application had suid root, or administrator, rights. Obviously, the larger the buffer, the larger the chunk of code that can be inserted. The growth of fuzzing has been remarkable. From the QA perspective it offers a very effective way to discover flaws early. For attackers it presents a way to penetrate black box servers that would otherwise be difficult to penetrate. Reports of fuzzing exploits are vague and merely say that a specific program crashes when it opens a file containing a particular malformed file. There is no clue as to why or how this happens, leaving the security experts to recreate the conditions in order to find out the mechanics of the exploits. The number of fuzzer programs is increasing in both specialisms and subtlety. As tools become more sophisticated developers become bogged down with patch requests. This results in rising maintenance costs and the point is reached where a trade-off between increasing the security and financial considerations may start to affect the reliability of software. There is a danger that vulnerability detection will become far more reactive than proactive. Vulnerability testing is more important now than ever before as financial gains from professional hacking become more attractive because finance is increasingly directed through the internet. The current pressures on in-house departmental QA to keep up with faster moving changes in the breadth and scope of exploits is now making outsourcing of the responsibility more attractive than it was previously.
