
    Stable localized modes in asymmetric waveguides with gain and loss

    It is shown that asymmetric waveguides with gain and loss can support stable propagation of optical beams, meaning that the propagation constants of the modes of the corresponding complex optical potential are real. A class of such waveguides is found from a relation between two spectral problems. A particular example of an asymmetric waveguide, described by hyperbolic functions, is analyzed. The existence and stability of linear modes and of continuous families of nonlinear modes are demonstrated. Comment: 10 pages, 4 figures. Accepted in Optics Letters, 201

    Approaches for Testing and Evaluation of XACML Policies

    Security services are provided through applications, operating systems, databases, and the network. There are many proposals to use policies to define, implement, and evaluate security services. We discuss a full test automation framework for testing XACML-based policies. Using policies as input, the developed tool generates test cases based on the policy and the general XACML model. We evaluated a large dataset of policy implementations; the collection includes more than 200 test cases representing instances of policies. Policies are executed and verified using requests and responses generated for each policy instance. The WSO2 platform is used to perform the different testing activities on the evaluated policies.
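    The request-driven evaluation loop the abstract describes can be illustrated with a minimal sketch. This is not the paper's tool: the toy attribute-based policy, the attribute domains, and all names below are illustrative assumptions mimicking the XACML permit/deny decision flow.

```python
# Minimal sketch of policy-driven test-case generation and evaluation
# (illustrative; not the framework described in the paper).
from itertools import product

# Hypothetical policy: (attribute constraints, effect) pairs; first match wins.
POLICY = [
    ({"role": "admin"}, "Permit"),
    ({"role": "user", "action": "read"}, "Permit"),
]
DEFAULT_DECISION = "Deny"  # XACML-style fallback when no rule applies


def evaluate(request):
    """Return the decision for a request (dict of attribute -> value)."""
    for constraints, effect in POLICY:
        if all(request.get(k) == v for k, v in constraints.items()):
            return effect
    return DEFAULT_DECISION


def generate_requests():
    """Enumerate test requests over assumed attribute domains."""
    roles = ["admin", "user", "guest"]
    actions = ["read", "write"]
    return [{"role": r, "action": a} for r, a in product(roles, actions)]


# Execute every generated request and record the decision for verification.
results = {(req["role"], req["action"]): evaluate(req)
           for req in generate_requests()}
```

    Verification then amounts to comparing each recorded decision against the expected response for that request.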

    Measuring Defect Datasets Sensitivity to Attributes Variation

    The correlation between software project and product attributes and their modules' quality status (faulty or not) is the subject of several research papers in the software testing and maintenance fields. In this paper, a tool is built to change the values of software datasets' attributes and study the impact of this change on the modules' defect status. The goal is to find those specific attributes that highly correlate with the module defect attribute. An algorithm is developed to automatically predict the module defect status based on the values of the module attributes and on their change from reference or initial values. For each attribute of those software projects, results can show when such an attribute is, if at all, a major player in deciding the defect status of the project or of a specific module. Results were consistent with, and in some cases better than, most surveyed defect prediction algorithms. Results also showed that this can be a very powerful method to understand each attribute's individual impact, if any, on the module quality status and how it can be improved.
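    The core idea, perturbing one attribute across a dataset and counting how many predicted defect labels flip, can be sketched as follows. The predictor, the metrics (loc, complexity), the thresholds, and the sample data are all illustrative assumptions, not the paper's tool or dataset.

```python
# Minimal sketch of attribute-sensitivity measurement for a defect predictor
# (illustrative assumptions throughout; not the paper's implementation).

def predict(module):
    """Toy threshold predictor: a module is faulty if it is large and complex."""
    return module["loc"] > 100 and module["complexity"] > 10


def sensitivity(dataset, attribute, delta):
    """Fraction of modules whose predicted defect status flips after perturbing
    one attribute by delta."""
    flips = 0
    for module in dataset:
        before = predict(module)
        perturbed = dict(module, **{attribute: module[attribute] + delta})
        if predict(perturbed) != before:
            flips += 1
    return flips / len(dataset)


dataset = [
    {"loc": 120, "complexity": 12},  # faulty; stays faulty under small deltas
    {"loc": 120, "complexity": 10},  # not faulty; flips if complexity rises
    {"loc": 50,  "complexity": 15},  # not faulty; loc dominates either way
]
cx_sensitivity = sensitivity(dataset, "complexity", 2)  # flips the second module
loc_sensitivity = sensitivity(dataset, "loc", 2)        # flips nothing here
```

    An attribute with consistently high flip rates is a strong candidate for the "major player" role the abstract describes.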

    Activities and Trends in Testing Graphical User Interfaces Automatically

    This study introduced some new approaches for software test automation in general and for testing graphical user interfaces in particular, presenting ideas across the different stages of the test automation framework. The framework's main activities include test case generation, execution, and verification; other umbrella activities include modeling, critical path selection, and some others. In modeling, a methodology is presented to transform the user interface of applications into XML (eXtensible Markup Language) files. The purpose of this intermediate transformation is to produce test automation components in a format that is easier to deal with from a testing standpoint. Test cases are generated from this model, then executed and verified on the actual implementation. The transformation of products' Graphical User Interfaces (GUIs) into XML files also enables the documentation and storage of the interface description. There are several cases where a stored, documented format of the GUI is needed; having it in the universal XML format allows it to be retrieved and reused elsewhere. XML files, with their hierarchical structure, make it possible and easy to preserve the hierarchical structure of the user interface. Several GUI structural metrics are also introduced to evaluate the user interface from a testing perspective. Those metrics can be collected automatically using the developed tool, with no need for user intervention.
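    The GUI-to-XML transformation can be sketched with the standard library: a widget tree is serialized into an XML document whose nesting mirrors the interface hierarchy. The widget-dict representation and all names below are illustrative assumptions, not the study's actual tooling.

```python
# Minimal sketch of serializing a GUI widget tree to XML so that the element
# nesting preserves the interface hierarchy (illustrative structure).
import xml.etree.ElementTree as ET


def widget_to_xml(widget):
    """Recursively convert a widget dict into an XML element."""
    elem = ET.Element(widget["type"], name=widget.get("name", ""))
    for child in widget.get("children", []):
        elem.append(widget_to_xml(child))
    return elem


gui = {
    "type": "Window", "name": "main",
    "children": [
        {"type": "Button", "name": "ok"},
        {"type": "Panel", "name": "body",
         "children": [{"type": "TextBox", "name": "input"}]},
    ],
}
xml_text = ET.tostring(widget_to_xml(gui), encoding="unicode")
```

    Structural metrics such as tree depth or widgets per container can then be computed directly on the XML, which is one reason the intermediate format is convenient for testing.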

    Evaluating Network Test Scenarios for Network Simulators Systems

    Networks continue to grow as industries use both wired and wireless networks. Creating experiments to test those networks can be very expensive if conducted on production networks; therefore, the evaluation of networks and their performance is usually conducted using simulation. This growing reliance on simulation raises the risk of incorrect or invalid results. Today, many network simulators have widely varying focuses and are employed in different fields of research, so the trustworthiness of results produced from simulation models must be investigated. The goal of this work is, first, to compare and assess the performance of three prominent network simulators (NS-2, NS-3, and OMNeT++) by considering the following qualitative characteristics: architectural design, correctness, performance, usability, features, and trends. Second, it introduces the concept of mutation testing to design appropriate network scenarios for protocol evaluation, since many works still question whether the scenarios used are adequate to support conclusions about protocol performance and effectiveness. A large-scale simulation model was implemented using the ad hoc on-demand distance vector and destination-sequenced distance vector routing protocols to compare performance, correctness, and usability. This study addresses an interesting question about the validation process: "Are you building the right simulation model in the right environment?" In conclusion, network simulation alone cannot determine the correctness and usefulness of the implemented protocol; software testing approaches should be considered to validate the quality of the network model and the test scenarios being used.
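    The mutation-testing idea applied to scenarios can be sketched abstractly: each mutant changes one scenario parameter, and if the protocol evaluation yields identical results on every mutant, the scenario set is too weak to discriminate. The parameters, the doubling operator, and the stand-in evaluation function are all illustrative assumptions, not the paper's simulation model.

```python
# Minimal sketch of mutation testing for network test scenarios
# (illustrative parameters and evaluation function; not a real simulator run).

BASE_SCENARIO = {"nodes": 50, "mobility_speed": 5, "packet_rate": 10}


def mutate(scenario):
    """Yield one mutant per numeric parameter by doubling its value."""
    for key, value in scenario.items():
        mutant = dict(scenario)
        mutant[key] = value * 2
        yield key, mutant


def evaluate_protocol(scenario):
    """Toy stand-in for a simulator run: delivery ratio drops with traffic
    load and node mobility speed."""
    load = scenario["packet_rate"] * scenario["nodes"]
    return round(max(0.0, 1.0 - load / 2000 - scenario["mobility_speed"] / 100), 3)


baseline = evaluate_protocol(BASE_SCENARIO)
# A mutant is "killed" when the evaluation result differs from the baseline;
# unkilled mutants point at parameters the scenario cannot discriminate.
killed = {key for key, m in mutate(BASE_SCENARIO)
          if evaluate_protocol(m) != baseline}
```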

    Enhance Rule Based Detection for Software Fault Prone Modules

    Software quality assurance is necessary to increase the level of confidence in the developed software and to reduce the overall cost of developing software projects. The problem addressed in this research is the prediction of fault-prone modules using data mining techniques. Predicting fault-prone modules allows software managers to allocate more testing and resources to such modules, and can also motivate investment in better design in future systems to avoid building error-prone modules. Software quality models based on data mining from previous projects can identify fault-prone modules in a current, similar development project, once similarity between the projects is established. In this paper, we applied different rule-based data mining classification techniques to several publicly available datasets from the NASA software repository (e.g., PC1, PC2, etc.). The goal was to classify the software modules as either fault prone or not fault prone. The paper proposes a modification to the RIDOR algorithm; the results show that the enhanced RIDOR algorithm is better than the other classification techniques in terms of the number of extracted rules and accuracy. The implemented algorithm learns defect prediction by mining static code attributes, which are then used to build a new defect predictor with high accuracy and a low error rate.
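    RIDOR belongs to the ripple-down-rule family: a default class is assumed, exception rules carve out the fault-prone cases, and exceptions may themselves have exceptions. The sketch below shows that classification structure only; the rules, thresholds, and metric names are illustrative assumptions, not the paper's enhanced algorithm or learned model.

```python
# Minimal sketch of ripple-down-rule classification (the structure behind
# RIDOR); the rules and thresholds here are illustrative, not learned.

# Each rule: (condition, class label, nested exceptions). Attributes mimic
# NASA static-code metrics such as lines of code (loc) and cyclomatic
# complexity.
DEFAULT_CLASS = "not_faulty"
EXCEPTIONS = [
    (lambda m: m["complexity"] > 10, "faulty", [
        # Nested exception: tiny modules are excepted back to not faulty.
        (lambda m: m["loc"] < 20, "not_faulty", []),
    ]),
]


def classify(module, rules=EXCEPTIONS, label=DEFAULT_CLASS):
    """Walk the exception tree; the deepest matching rule decides the label."""
    for condition, rule_label, nested in rules:
        if condition(module):
            return classify(module, nested, rule_label)
    return label


pred_big_complex = classify({"loc": 200, "complexity": 15})  # exception fires
pred_tiny_complex = classify({"loc": 10, "complexity": 15})  # nested exception
pred_simple = classify({"loc": 200, "complexity": 3})        # default holds
```

    The appeal of this representation, reflected in the paper's rule-count metric, is that a small exception tree can cover a dataset that a flat rule list would need many more rules to describe.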

    Ensemble Models for Intrusion Detection System Classification

    Using data analytics in the problem of Intrusion Detection and Prevention Systems (IDS/IPS) is a continuous research problem due to the evolutionary nature of the problem and changes in its major influencing factors. The main challenges in this area are designing rules that can predict malware in unknown territories, and dealing with the complexity of the problem and the conflicting requirements of high detection accuracy and high efficiency. In this scope, we evaluated the use of state-of-the-art ensemble learning models for improving the performance and efficiency of IDS/IPS, and compared our approaches with existing ones using popular open-source datasets available in this area.
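    The ensemble idea at its simplest is a majority vote across weak detectors. The sketch below shows that voting structure only; the detectors, features, and thresholds are illustrative assumptions and not the models or datasets evaluated in the paper.

```python
# Minimal sketch of a majority-vote ensemble for intrusion detection
# (illustrative base detectors; not the paper's models).

def rate_detector(record):
    """Flag unusually high packet rates."""
    return record["packets_per_sec"] > 1000


def port_detector(record):
    """Flag traffic to commonly attacked ports (telnet, SMB)."""
    return record["dst_port"] in {23, 445}


def payload_detector(record):
    """Flag high-entropy payloads that hint at encryption or packing."""
    return record["payload_entropy"] > 7.5


DETECTORS = [rate_detector, port_detector, payload_detector]


def ensemble_predict(record):
    """Majority vote across the base detectors."""
    votes = sum(d(record) for d in DETECTORS)
    return votes > len(DETECTORS) / 2


attack = {"packets_per_sec": 5000, "dst_port": 445, "payload_entropy": 3.0}
benign = {"packets_per_sec": 10, "dst_port": 443, "payload_entropy": 7.9}
```

    The efficiency/accuracy trade-off the abstract mentions shows up even here: adding detectors raises accuracy potential but multiplies per-record cost.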

    Call Graph Based Metrics to Evaluate Software Design Quality

    Software defect prediction was introduced to support development and maintenance activities such as improving software quality by finding errors or patterns of errors early in the software development process. Defect prediction facilitates maintenance in terms of effort and time and, more importantly, supports cost prediction for software maintenance and evolution activities. In this research, the software call graph model is evaluated for its ability to predict quality-related attributes in developed software products. As a case study, the call graph model is generated for several applications in order to represent and reflect the degree of their complexity, especially in terms of understandability, testability, and maintenance effort. This call graph model is then used to collect software product attributes and to formulate several call graph based metrics. The extracted metrics are investigated for correlation with bugs collected from customer bug reports for the evaluated applications. Those software bugs are compiled into dataset files used as input to a data miner for classification, prediction, and association analysis. Finally, the results of the analysis are evaluated in terms of the correlation between call graph based metrics and software products' bugs. We assert that call graph based metrics are appropriate for detecting and predicting software defects, making post-delivery maintenance and testing activities easier to estimate and assess.
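    Collecting metrics from a call graph can be sketched with a plain adjacency mapping: each function maps to the functions it calls, and per-node fan-in/fan-out fall out directly. The example graph is illustrative, and the Henry-Kafura-style formula is a standard information-flow metric offered here as an assumption about the kind of call graph based metric the paper formulates, not its exact definition.

```python
# Minimal sketch of call graph based metrics over an illustrative graph:
# each key maps a function to the list of functions it calls.

CALL_GRAPH = {
    "main": ["parse", "process"],
    "parse": ["tokenize"],
    "process": ["tokenize", "report"],
    "tokenize": [],
    "report": [],
}


def fan_out(graph, func):
    """Number of distinct functions called by func."""
    return len(set(graph.get(func, [])))


def fan_in(graph, func):
    """Number of distinct functions that call func."""
    return sum(1 for callees in graph.values() if func in callees)


def hk_complexity(graph, func):
    """Henry-Kafura-style information-flow score: (fan_in * fan_out) ** 2."""
    return (fan_in(graph, func) * fan_out(graph, func)) ** 2
```

    Per-function rows of such metrics, joined with bug counts from customer reports, form exactly the kind of dataset file the abstract feeds to a data miner.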