17 research outputs found

    Establishing a Search String to Detect Secondary Studies in Software Engineering

    Searching for secondary studies is essential to establish whether a review on the intended topic has already been done, avoiding wasted effort. In addition, secondary studies are the inputs of a tertiary study. However, one critical step in searching for secondary studies is elaborating a search string. The main goal of this work is to analyze search strings to establish directions for better detecting secondary studies in Software Engineering (SE). We analyzed seven tertiary studies under two perspectives: (1) structure - the strings' terms used to detect secondary studies; and (2) field - where to search: titles alone, abstracts alone, or titles and abstracts together, among others. We also validated the results found. A suitable search string for finding secondary studies in SE contains the terms "systematic review", "literature review", "systematic mapping", "mapping study", "systematic map", "meta-analysis", "survey" and "literature analysis". Furthermore, we recommend that (1) researchers use the title, abstract and keywords search fields in their searches to increase recall; and (2) researchers choose their paper title, abstract and keyword terms carefully to increase the chance of such studies being found in digital libraries.
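The recommended terms above can be assembled mechanically. A minimal sketch, assuming a generic quoted-term/OR boolean syntax (each digital library has its own query dialect, so the exact quoting is an assumption, not part of the study):

```python
# Term list taken from the abstract above; the OR-joined quoted form is
# an illustrative assumption about the target library's query syntax.
TERMS = [
    "systematic review", "literature review", "systematic mapping",
    "mapping study", "systematic map", "meta-analysis", "survey",
    "literature analysis",
]

def build_search_string(terms):
    """Join quoted terms with OR, a common boolean form in digital libraries."""
    return " OR ".join(f'"{t}"' for t in terms)

print(build_search_string(TERMS))
```

Applying the string to the title, abstract, and keywords fields together, as the study recommends, is then a matter of the library's field-restriction syntax.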

    Analysis of handover based on the use of femtocells in LTE networks

    One of the key elements in LTE (Long Term Evolution) networks is the possibility of deploying multiple femtocells to improve coverage and data rate. However, the arbitrarily overlapping coverage of these cells makes the handover mechanism complex and challenging. In this paper, simulations of deploying LTE femtocells in a scenario were evaluated. With this objective, we measured the impact of femtocell use on QoS (Quality of Service) and handover indicators, and the correlation between them. Possible limitations of this integration are discussed. Will the integration of LTE femtocells be a panacea? Despite this promising alternative, estimates are fraught with uncertainty. The results show that the use of femtocells worsened handover indicators, impacting QoS indicators.
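For readers unfamiliar with the mechanism being stressed here, a toy sketch of the A3-style handover trigger commonly used in LTE: hand over when a neighbor cell's signal exceeds the serving cell's by a hysteresis margin. This only illustrates the general mechanism; it is not the algorithm simulated in the paper.

```python
# Illustrative A3-style trigger: neighbor RSRP must beat serving RSRP by a
# hysteresis margin. Values and the 3 dB default are assumptions for the toy.
def should_handover(serving_rsrp_dbm, neighbor_rsrp_dbm, hysteresis_db=3.0):
    """Return True when the neighbor cell is strong enough to trigger handover."""
    return neighbor_rsrp_dbm > serving_rsrp_dbm + hysteresis_db
```

With many overlapping femtocells, such comparisons fire frequently, which is exactly why the abstract reports handover indicators degrading.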

    On Calculating and Visualizing Rocket Trajectory Reconstitution

    In rocket launching, it is fundamental to determine the correct tracking of the vehicle in flight. Usually, real-time information is received from more than one tracking system. Such information is useful while the vehicle is in flight as well as for future analysis. However, sometimes the vehicle may not be tracked by any system, so no registered data covering the complete flight is available. One possible solution is to calculate the missing information to cover the "gaps" in the trajectory path. This reconstitution is based on the vehicle's position and velocity, in reference to any instant of the flight. This article describes an application that calculates a trajectory reconstitution in order to obtain data that is missing. The trajectory path is plotted on a cartographic map projection in a browser. The resources used in this application are based on web standards.
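One way to fill such a gap from position and velocity at its endpoints is cubic Hermite interpolation. The article does not specify its interpolation scheme, so the following is an illustrative sketch, not the application's method:

```python
# Fill a tracking gap on one axis by cubic Hermite interpolation between the
# last sample before the gap (p0, v0 at t0) and the first after it (p1, v1
# at t1). The scheme is an assumption for illustration.
def hermite_fill(p0, v0, p1, v1, t0, t1, t):
    """Interpolated position at time t in [t0, t1], matching both endpoints'
    position and velocity."""
    h = t1 - t0
    s = (t - t0) / h                     # normalized time in [0, 1]
    h00 = 2*s**3 - 3*s**2 + 1            # Hermite basis functions
    h10 = s**3 - 2*s**2 + s
    h01 = -2*s**3 + 3*s**2
    h11 = s**3 - s**2
    return h00*p0 + h10*h*v0 + h01*p1 + h11*h*v1
```

Running it per axis yields a smooth 3-D path segment that can be projected onto the cartographic map.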

    Systematic Generation of Test and Fault Cases for Space Application Validation

    The most critical activity of a V&V process is test design, as it should be systematic and less dependent on expert inspiration, mainly for companies developing software whose failures place millions of dollars at risk. Additionally, the trend towards service standardization for the most common space applications motivated us to define a conformance testing methodology extended with fault injection concepts. The methodology, named CoFI, defines steps to generate tests that cover the conformance of an implementation with respect to a standard specification. It allows the generation of repeatable and controllable test cases and is conceived to be automated and easy to learn. The idea is to translate the service behavior, written in natural language, into an FSM-based notation, and then automatically generate test cases. Its main characteristic is to separate into distinct diagrams the normal behavior, the exceptional behavior, that which is explicitly specified, and that based on mapping an external fault model. The paper describes the CoFI testing methodology and its use in two real case studies. In the first, the TC Verification service specified in ECSS-E-70-41A was used to show the feasibility of generating test cases from a publicly recognized standard specification. In the second, an OBDH-Scientific_Experiment protocol, developed at INPE, was used to evaluate the set of test and fault cases created by the CoFI methodology. Preliminary results pointed out that: (i) the methodology is simple and effective, especially for creating fault cases based on a fault model; (ii) the current standard service specification requires extra information to achieve an applicable set of test cases. Effectiveness metrics of the tests were obtained with specification-based mutant analysis.
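The core generation step (FSM in, test cases out) can be sketched as transition coverage: one input sequence per transition, found by breadth-first search from the initial state. The FSM below is a toy telecommand-like example, not the paper's ECSS model, and the transition-tour strategy is an illustrative assumption about how CoFI-style generation works:

```python
from collections import deque

# Toy FSM: (state, input) -> next state. Includes one exceptional branch,
# in the spirit of CoFI's separate normal/exceptional diagrams.
FSM = {
    ("idle", "tc_received"): "verifying",
    ("verifying", "tc_ok"): "accepted",
    ("verifying", "tc_bad"): "rejected",   # exceptional behavior
    ("accepted", "report_sent"): "idle",
    ("rejected", "report_sent"): "idle",
}

def path_to(state, start="idle"):
    """Shortest input sequence driving the FSM from start to the given state."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        s, inputs = queue.popleft()
        if s == state:
            return inputs
        for (src, inp), dst in FSM.items():
            if src == s and dst not in seen:
                seen.add(dst)
                queue.append((dst, inputs + [inp]))
    return None

def transition_tests(start="idle"):
    """One test case per transition: reach its source state, then fire it."""
    return [path_to(src, start) + [inp] for (src, inp) in FSM]
```

Each returned sequence is repeatable and controllable in the sense the abstract describes: replaying it always exercises the same transition.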

    A Conformance Testing Process for Space Applications Software Services

    Comprehensive tests for space application software are costly but extremely necessary. This software must be reliable and produced within schedule and budget. In an attempt to make space mission software development more cost-effective for space agencies and industries, the European Committee for Space Standardization (ECSS) has been compiling a set of standards that specify the common core of some space application systems. Once the set of services is standardized, the conformance problem is raised. In this paper we present a testing process for standardized services, based on the ISO IS-9646 standard for protocol conformance testing. The process includes an approach to derive test and fault cases by combining conformance testing concepts with the software-implemented fault injection (SWIFI) technique. One advantage of this process is the generation of a reusable abstract test suite, which improves testing effectiveness. Reliability and convergence of the test cases increase the more the tests are applied. Additionally, the evaluation of the software behavior under external faults may be performed with the repeatable set of fault cases. The approach is illustrated with abstract test and fault cases derived for the telecommand verification service stated in the ECSS-E-70-41A standard.
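The SWIFI idea mentioned above can be sketched as a wrapper that corrupts a service's response on selected calls, so the same repeatable test suite can be replayed under fault cases. All names here are hypothetical; the paper's tooling is not specified at this level of detail:

```python
# Illustrative SWIFI wrapper: replays of the same test suite hit the same
# injected faults, giving the repeatable fault cases the abstract describes.
def with_fault_injection(service, faulty_calls, corrupt):
    """Wrap `service` so `corrupt` is applied to the results of the call
    numbers listed in `faulty_calls` (1-based)."""
    state = {"calls": 0}
    def wrapped(*args, **kwargs):
        state["calls"] += 1
        result = service(*args, **kwargs)
        if state["calls"] in faulty_calls:
            return corrupt(result)     # fault case: corrupted response
        return result                  # conformance case: normal response
    return wrapped
```

Running the abstract test suite once against the plain service and once against the wrapped one separates pure conformance failures from failures under injected external faults.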

    A Multi-Resolution Multi-Temporal Technique for Detecting and Mapping Deforestation in the Brazilian Amazon Rainforest

    The analysis of rapid environmental changes requires orbital sensors with a high frequency of data acquisition to minimize cloud interference in the study of dynamic processes such as Amazon tropical deforestation. Moreover, medium to high spatial resolution data is required due to the nature and complexity of the variables involved in the process. In this paper we describe a multiresolution, multitemporal technique to simulate Landsat 7 Enhanced Thematic Mapper Plus (ETM+) images using Terra Moderate Resolution Imaging Spectroradiometer (MODIS) data. The proposed method preserves the spectral resolution and increases the spatial resolution for mapping Amazon Rainforest deforestation using low computational resources. To evaluate this technique, sample images were acquired on the Amazon rainforest border (MODIS tile H12-V10 and ETM+/Landsat 7 path 227 row 68) for 17 July 2002 and 05 October 2002. The MODIS-based simulated ETM+ and the corresponding original ETM+ images were compared through a linear regression method. Additionally, the bootstrap technique was used to calculate the confidence interval for the model estimates and to perform a sensitivity analysis. Moreover, a Linear Spectral Mixing Model, the technique used for deforestation mapping in the Program for Deforestation Assessment in the Brazilian Legal Amazonia (PRODES) developed by the National Institute for Space Research (INPE), was applied to analyze the differences in deforestation estimates. The results showed high correlations, with values between 0.70 and 0.94 (p < 0.05, Student's t-test) for all ETM+ bands, indicating good agreement between simulated and observed data (p < 0.05, Z-test). Moreover, the simulated image showed good agreement with a reference image, yielding commission errors of 1% of the total area estimated as deforestation in a sample test area. Furthermore, approximately 6%, or 70 km², of deforestation areas were missing in the simulated image classification. Therefore, the use of the Landsat simulated image provides better deforestation estimation than MODIS alone.
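The per-band comparison reported above boils down to a correlation between simulated and observed pixel values. A minimal sketch of that assessment step, with toy data rather than actual ETM+/MODIS samples:

```python
import math

# Pearson correlation between simulated and observed pixel values for one
# band, the quantity whose 0.70-0.94 range is reported in the abstract.
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

In the study this is computed per ETM+ band over the paired images, with bootstrap resampling supplying the confidence intervals.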