Improving Open Source Software Security Using Fuzzing

Abstract:

Background: Fuzzing is an automated technique for identifying software vulnerabilities by supplying unexpected and malformed inputs to a program. Its main aim is to find the critical edge cases where the software fails, so fuzzing provides crucial insight into the stability and security of the software. The process of fuzzing can be divided into the following broad steps:
1. Identifying target function(s) – Target functions are typically the functions that act as entry points for processing input data; they invoke various APIs to operate on that data.
2. Developing a harness – A harness is a small code stub whose sole purpose is to invoke the target function with mutated input data. It bridges the gap between how the fuzzer generates input and how the target application receives and processes that input.
3. Fuzzing – A fuzzer generates numerous inputs and passes them to the target function through the harness. The fuzzer checks whether the application crashes while processing a given input; if a crash occurs, the input and the memory state at the time of the crash are saved to a file for later analysis.

Description: Fuzzing has proven effective in discovering thousands of vulnerabilities in file-processing and stateless applications. In fuzzing, and in automated testing in general, designing test oracles is crucial. In this challenge the team is expected to fuzz an open source application, namely the Windows variant of the Sumatra PDF Reader (version 3.5.2 or later). Sumatra PDF Reader is a popular, widely used open source PDF viewer. Teams are required to develop a working harness for fuzzing the latest version (3.5.2 or later) of the Windows Sumatra PDF Reader, using any fuzzer of their choice. Submissions will be evaluated on the following criteria:
1. Target functions identified
2. Live demonstration of the fuzzing harness developed
3. Code coverage achieved
4. Technical report submitted by the team

Expected Solution: Each team must provide a fuzzing harness capable of fuzzing the Windows build of the Sumatra PDF Reader (version 3.5.2 or later). The harness must identify target functions and supply appropriate arguments to invoke them (a minimal, hypothetical harness sketch follows below). The harness will be run under a fuzzer (preferably WinAFL). Each team must submit a working harness along with a technical report stating:
1. Reversing steps undertaken
2. Target functions identified
3. Dependencies identified
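The sketch below illustrates what such a harness could look like. It is a minimal, hypothetical example only: the module name libmupdf.dll and the exported function LoadPdfDocument are placeholders used for illustration, and the real SumatraPDF entry points, their signatures, and any required setup must be identified by reversing the binary.

```cpp
// Minimal WinAFL-style harness sketch.
// Assumptions: "libmupdf.dll" and the export "LoadPdfDocument" are
// hypothetical placeholders; the actual target function must be found
// by reversing the SumatraPDF binaries.
#include <windows.h>
#include <cstdio>

// Assumed signature of the target parsing routine: takes a file path and
// returns an opaque document handle (NULL on failure).
typedef void* (__cdecl *load_pdf_fn)(const char* path);

static load_pdf_fn g_load_pdf = nullptr;

// WinAFL repeatedly calls this function in persistent mode, so it must
// fully process one input file per call and then return.
extern "C" __declspec(dllexport) int fuzz_one_input(const char* path)
{
    void* doc = g_load_pdf(path);   // parse the mutated PDF
    // If the target API exposes a matching close/free routine, call it here
    // to avoid leaking state across fuzzing iterations.
    return doc != nullptr ? 0 : 1;
}

int main(int argc, char** argv)
{
    if (argc < 2) {
        std::printf("usage: %s <input.pdf>\n", argv[0]);
        return 1;
    }

    // Load the library that contains the target function and resolve it.
    HMODULE mod = LoadLibraryA("libmupdf.dll");               // assumed module name
    if (!mod) return 1;
    g_load_pdf = reinterpret_cast<load_pdf_fn>(
        GetProcAddress(mod, "LoadPdfDocument"));              // assumed export name
    if (!g_load_pdf) return 1;

    return fuzz_one_input(argv[1]);
}
```

Under WinAFL, a harness like this would typically be run with DynamoRIO instrumentation, with options such as -target_module and -target_method pointing at the harness executable and fuzz_one_input; the exact command line depends on the local WinAFL and DynamoRIO setup.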
EXISTING SYSTEM:
When evaluating existing solutions for enhancing smart contract security, it becomes evident that a multifaceted approach is essential. Traditional tools such as symbolic execution, static analysis, and formal verification provide a solid foundation for identifying vulnerabilities, as shown in Table II. However, integrating multi-agent deep reinforcement learning (DRL) solutions offers a more dynamic and adaptive strategy.
A. Effectiveness in Detecting Vulnerabilities
1) Symbolic execution tools (Oyente, Maian, Manticore, Mythril, Solythesis, SymbolicExec): These tools are effective in detecting vulnerabilities related to control flow, arithmetic issues, and reentrancy attacks. They use symbolic execution to explore different execution paths and identify potential security flaws. However, their effectiveness may be limited by path explosion and false positives.
2) Static analysis tools (Solgraph, Osiris, Securify, SmartCheck, Vandal, Slither, SolidityCheck, Solstice, Securify v2, SIF, SmartAnvil, SolCheck, SCAnalysisTools): These tools analyze the source code without executing it and are effective in identifying common vulnerabilities such as reentrancy, integer overflow, and unchecked calls. They are generally faster than symbolic execution tools but may suffer from false positives and negatives.
DISADVANTAGES:
Limited Scope of Detection: Fuzzing may not cover all possible execution paths or input scenarios, particularly those that are rare or complex, so it can miss vulnerabilities that only manifest under very specific conditions.
Resource Intensive: Effective fuzzing can require significant computational resources, including powerful hardware and substantial time. This can be a limitation for projects with limited budgets or for large-scale software systems.
False Positives and Negatives: Fuzzing can generate a lot of noise in the form of false positives (incorrectly flagging a vulnerability) and false negatives (failing to identify a real vulnerability), which leads to additional work in analyzing and verifying the results.
Complex Configuration: Setting up a fuzzing environment can be complex and requires a deep understanding of the software being tested. Misconfiguration can lead to incomplete testing or inefficient use of resources.
PROPOSED SYSTEM:
Today researchers often use several basic criteria for effectiveness evaluation: the number of errors found; the number of executed instructions, basic blocks, or syscalls; and measures such as cyclomatic complexity or attack surface exposure [6–9]. Over the last several decades, the theory of software reliability has proposed a wide range of metrics for assessing source code complexity and the probability of errors. The general idea behind this assessment is that more complex code contains more bugs. In this paper, our hypothesis is that source code complexity metrics can be adapted for binary code analysis. This would allow analysis based on the semantics of executed instructions as well as their interaction with input data. We provide an overview of the technique, architecture, implementation, and effectiveness evaluation of our approach. We carry out separate tests to compare the effectiveness of 25 complexity metrics on 104 widespread applications with known vulnerabilities. Moreover, we assess the ability of our approach to reduce the time costs of fuzzing campaigns for 5 different well-known fuzzers.
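As a concrete illustration of one metric mentioned above, the sketch below computes McCabe's cyclomatic complexity, M = E - N + 2P, from a per-function control-flow-graph summary and ranks functions by it so that more complex code can be prioritized as a fuzzing target. The CfgSummary structure and the sample data are assumptions for illustration; a real implementation would derive these counts from disassembled basic blocks.

```cpp
// Sketch: ranking functions by McCabe cyclomatic complexity (M = E - N + 2P),
// computed from a simplified control-flow-graph summary per function.
// CfgSummary is an assumed structure; real tooling would fill it in from
// disassembled basic blocks produced by a binary analysis framework.
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

struct CfgSummary {
    std::string name;        // function name (or address) in the binary
    unsigned    nodes;       // N: number of basic blocks
    unsigned    edges;       // E: number of CFG edges
    unsigned    components;  // P: connected components (usually 1 per function)
};

// McCabe cyclomatic complexity: M = E - N + 2P.
static int cyclomatic_complexity(const CfgSummary& f)
{
    return static_cast<int>(f.edges) - static_cast<int>(f.nodes)
         + 2 * static_cast<int>(f.components);
}

int main()
{
    // Toy data standing in for CFGs recovered from a target binary.
    std::vector<CfgSummary> funcs = {
        {"parse_header",  12, 15, 1},   // M = 15 - 12 + 2 = 5
        {"decode_stream", 40, 57, 1},   // M = 57 - 40 + 2 = 19
        {"print_usage",    3,  2, 1},   // M =  2 -  3 + 2 = 1
    };

    // Sort by descending complexity to prioritize likely-buggy functions.
    std::sort(funcs.begin(), funcs.end(),
              [](const CfgSummary& a, const CfgSummary& b) {
                  return cyclomatic_complexity(a) > cyclomatic_complexity(b);
              });

    for (const auto& f : funcs)
        std::printf("%-14s M=%d\n", f.name.c_str(), cyclomatic_complexity(f));
    return 0;
}
```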
ADVANTAGES:
Automated Vulnerability Detection: Fuzzing can automatically generate a wide range of test inputs to discover security vulnerabilities that might be missed by manual testing or traditional code review. This automation helps uncover issues quickly and efficiently.
Scalability: Once set up, fuzzing can be scaled to test large codebases or multiple components simultaneously. This scalability is particularly useful for open source projects with extensive and evolving codebases.
Finding Complex Bugs: Fuzzing can uncover complex and obscure vulnerabilities that are difficult to detect through manual testing. It often reveals edge cases and unexpected conditions that may not be covered by standard test cases.
Low Cost: For many open source projects, fuzzing is a cost-effective way to improve security. Many fuzzing tools are themselves open source or available at low cost, making them accessible even to projects with limited budgets.