5 research outputs found
DETECTING APPLICATION ANOMALIES: MACHINE LEARNING APPROACH
In the modern era, the world relies heavily on software technology. As software applications have become more widely used, security concerns have grown with them, and application security has become one of the chief concerns: companies must protect their systems from vulnerabilities. Other security categories include mobile or end-point security, operating system security and network security; all are intended to protect users and clients from malicious actors and hackers. Application security has therefore become a prime requirement. Security risks in applications accumulate and pose a direct threat to the business, and application vulnerabilities can be abused to compromise software security. Once a flaw is found and access to private data is determined, an attacker can exploit the vulnerability to facilitate cyber crimes, which target the confidentiality of data and the availability and integrity of resources ("What is Application Security?" 2019). Overall, more than 13% of the reviewed sites were compromised by web application security vulnerabilities, and these are not completely eliminated even with traditional security methodologies (Application Security Vulnerability, 2014). To address these common security issues, several detection, remediation and prevention techniques can be used, including defensive programming, sophisticated input validation, dynamic checks, and static source code analysis. In this paper, a runtime-environment framework is introduced. This research study extracted several publications, each of which considered a different approach to the problem. In the proposed framework, machine learning is used to train a model and predict the output. First, a sample Java program is executed on various CPU cores and the generated output files are collected.
These output files are then used to train the machine learning model, whose predictions are compared with the actual output to reach a decision.
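The abstract does not specify the features or model used, but the train-then-compare idea it describes can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the feature extraction, the z-score detector, and all names here are assumptions introduced for the example.

```python
# Hedged sketch: profile "normal" output files collected from runs on
# different CPU cores, then flag a new run whose output deviates from
# the learned profile. Features and thresholds are invented.

from statistics import mean, stdev

def extract_features(output_text):
    """Hypothetical features from one run's output file."""
    lines = output_text.splitlines()
    return [len(lines), sum(len(line) for line in lines)]

class OutputAnomalyDetector:
    def fit(self, normal_outputs):
        # Learn per-feature mean and spread from the collected files.
        feats = [extract_features(t) for t in normal_outputs]
        cols = list(zip(*feats))
        self.mu = [mean(c) for c in cols]
        self.sigma = [stdev(c) or 1.0 for c in cols]  # avoid zero spread
        return self

    def is_anomalous(self, output_text, z_threshold=3.0):
        # Compare the new output against the trained profile.
        f = extract_features(output_text)
        z = [abs(x - m) / s for x, m, s in zip(f, self.mu, self.sigma)]
        return max(z) > z_threshold

# Outputs gathered from repeated runs (e.g. on several CPU cores).
normal = ["result: 42\nok\n"] * 8 + ["result: 41\nok\n", "result: 43\nok\n"]
detector = OutputAnomalyDetector().fit(normal)

print(detector.is_anomalous("result: 42\nok\n"))  # matches the profile
print(detector.is_anomalous("ERROR\n" * 50))      # far from the profile
```

A real system would use richer features of the program's runtime behaviour and a trained classifier rather than a simple z-score, but the decision step is the same: the model's expectation is compared with the actual output.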
Recommended from our members
Cyber security information sharing in the United States: an empirical study including risk management and control implications, 2000-2003
A tremendous amount of change in traditional business paradigms has occurred over the past decade through the development of Electronic Commerce and advancements in the field of Information Technology. As lesser-developed countries progress and become more prosperous, traditional 'first world' countries have migrated to become strong service-oriented economies (Asch, 2001). Supporting technologies have developed over the past decade which have exploited the benefits of the Internet and other information technologies. While Electronic Commerce continues to grow, there is a corresponding impact on computer software and individual privacy (Ghosh and Swaminatha, 2001). Recently, the U.S. National Institute of Standards and Technology (NIST) found that software bugs cost the U.S. economy approximately $59.5 billion, or 0.6% of the annual Gross Domestic Product (U.S. Department of Commerce, 2003). In addition, we have witnessed a rise in the strength and impact of Denial of Service and other types of computer attacks such as viruses, trojans, exploit scripts and probes/scans. Popular industry surveys such as the annual Federal Bureau of Investigation/Computer Security Institute survey (Gordon et al., 2006) confirm the growing threats in the Information Assurance field. In addition to these concerns, our increased reliance on Internet-enabled systems (Laudon and Laudon, 2000), E-Commerce systems and Information Technologies presents an integrated suite of risks which must be managed effectively across the public and private sectors (Backhouse et al., 2005; Ghosh and Swaminatha, 2001; Parker, 2001; Graf, 1995; Greenberg and Goldman, 1995). Previous research (Rumizen, 1998; Haver, 1998; Roulier, 1998) examined Inter-Organisational, Web Information Systems and Government Information Systems in order to assess how companies and other organisations can effectively design these information systems such that maximum benefits can be achieved for all participating organisations.
Furthermore, Davenport, Harris and Delong (2001) and Davenport (1999) explained that collaboration is central to the results of a knowledge management system in which open, non-political, non-competitive entities are involved in environments to achieve optimal individual and collective results. Before this memorable event, some related programmatic initiatives were already in process. The United States government built upon its active leadership in the areas of computer security and information assurance when it launched a number of important efforts to manage information security threats. This was clearly evident when President Clinton made the U.S. National Information Infrastructure (NII) a major national priority in the 1990s. One critical development occurred in 1998 when the National Infrastructure Protection Centre was established to be the central point for gathering, analysing and disseminating critical cyber security information, building upon the previous success of the national Computer Emergency Response Team (CERT). Earlier research (Rich, 2001; Soo Hoo, 2000; Howard, 1997; Landwehr, 1994) addressed various aspects of information security information and incident reporting. Vatis (2001) also addressed some research considerations in this area while investigating foreign network-centric and traditional warfare events, primarily through Denial of Service and Web Site Defacement attacks. However, areas for new exploration existed, especially as they related to U.S. critical infrastructure protection (Karestand, 2003; Vatis, 2001; U.S. General Accounting Office, 2000; Alexander and Swetham, 1999). Finally, Information and Network Centric Warfare (Arens and Rosenbloom, 2003; Davies, 2000; Denning and Baugh, 2000; Schwartau, 1997) are increasing national security issues in the War on Terrorism and Homeland Security in general.
Methods for the prevention, detection and removal of software security vulnerabilities
Over the past decade, the need to build secure software has become a dominant goal in software development. Consequently, software researchers and practitioners have identified ways that malicious users can exploit software and how developers can fix the vulnerabilities. They have also built a variety of source code security checking applications to partially automate the task of performing a security analysis of a program. Although great advances have been made in this area, the core problem of how security vulnerabilities occur still exists. An answer to this problem could be a paradigm shift from imperative to functional programming techniques, which may hold the key to removing software vulnerabilities altogether.
Explainable, Security-Aware and Dependency-Aware Framework for Intelligent Software Refactoring
As software systems continue to grow in size and complexity, their maintenance becomes more challenging and costly. Even for the most technologically sophisticated and competent organizations, building and maintaining high-performing software applications with high-quality code is an extremely challenging and expensive endeavor. Software refactoring is widely recognized as the key component for maintaining high-quality software by restructuring existing code and reducing technical debt. However, refactoring is difficult to achieve and often neglected due to several limitations in the existing refactoring techniques that reduce their effectiveness. These limitations include, but are not limited to, detecting refactoring opportunities, recommending specific refactoring activities, and explaining the recommended changes. Existing techniques are mainly focused on the use of quality metrics such as coupling, cohesion, and the Quality Metrics for Object Oriented Design (QMOOD). However, there are many other factors identified in this work to assist and facilitate different maintenance activities for developers:
1. To structure the refactoring field and existing research results, this dissertation provides the most scalable and comprehensive systematic literature review analyzing the results of 3183 research papers on refactoring covering the last three decades. Based on this survey, we created a taxonomy to classify the existing research, identified research trends and highlighted gaps in the literature for further research.
2. To draw attention to what should be the current refactoring research focus from the developers’ perspective, we carried out the first large scale refactoring study on the most popular online Q&A forum for developers, Stack Overflow. We collected and analyzed posts to identify what developers ask about refactoring, the challenges that practitioners face when refactoring software systems, and what should be the current refactoring research focus from the developers’ perspective.
3. To improve the detection of refactoring opportunities in terms of quality and security in the context of mobile apps, we designed a framework that recommends the files to be refactored based on user reviews. We also considered the detection of refactoring opportunities in the context of web services. We proposed a machine learning-based approach that helps service providers and subscribers predict the quality of service with the least costs. Furthermore, to help developers make an accurate assessment of the quality of their software systems and decide if the code should be refactored, we propose a clustering-based approach to automatically identify the preferred benchmark to use for the quality assessment of a project.
4. Regarding the refactoring generation process, we proposed different techniques to enhance the change operators and seeding mechanism by using the history of applied refactorings and incorporating refactoring dependencies in order to improve the quality of the refactoring solutions. We also introduced the security aspect when generating refactoring recommendations, by investigating the possible impact of improving different quality attributes on a set of security metrics and finding the best trade-off between them. In another approach, we recommend refactorings to prioritize fixing quality issues in security-critical files, improve quality attributes and remove code smells.
All the above contributions were validated at large scale on thousands of open source and industry projects, in collaboration with industry partners and the open source community. The contributions of this dissertation are integrated in a cloud-based refactoring framework which is currently used by practitioners.
Ph.D. dissertation, College of Engineering & Computer Science, University of Michigan-Dearborn.
http://deepblue.lib.umich.edu/bitstream/2027.42/171082/1/Chaima Abid Final Dissertation.pdf