8 research outputs found

    History Based Multi Objective Test Suite Prioritization in Regression Testing Using Genetic Algorithm

    Regression testing is an essential but expensive activity that recurs throughout the software development life cycle. Because it requires executing many test cases, test case prioritization is needed to cope with resource constraints. Prioritization techniques schedule test cases in an order that increases the chance of early fault detection. In this paper we propose a genetic-algorithm-based prioritization technique that uses the historical information of system-level test cases so that the most severe faults are detected early. In addition, the proposed approach calculates a weight factor for each requirement to achieve customer satisfaction and to improve the rate of severe-fault detection. To validate the approach, we performed controlled experiments on industry projects, which demonstrated its effectiveness in terms of the average percentage of faults detected (APFD).
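A minimal sketch of the idea, assuming per-test fault-detection histories: fitness is plain APFD over those histories (the paper additionally weights faults by severity and requirements by customer-assigned weight factors, which this sketch omits; all test and fault names are illustrative).

```python
import random

def apfd(order, faults_by_test, n_faults):
    """Average Percentage of Faults Detected: higher means faults are
    found earlier in the test order."""
    first_pos = {}
    for pos, test in enumerate(order, start=1):
        for fault in faults_by_test[test]:
            first_pos.setdefault(fault, pos)
    n = len(order)
    return 1 - sum(first_pos.values()) / (n * n_faults) + 1 / (2 * n)

def prioritize(faults_by_test, n_faults, generations=50, pop_size=20, seed=0):
    """Tiny GA over test-order permutations with APFD as fitness: keep
    the best half each generation, mutate it by swapping two positions."""
    rng = random.Random(seed)
    tests = list(faults_by_test)
    pop = [rng.sample(tests, len(tests)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: apfd(o, faults_by_test, n_faults), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(len(child)), 2)  # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda o: apfd(o, faults_by_test, n_faults))
```

A real implementation would also use crossover and a severity-weighted APFD variant; the permutation encoding and elitist selection shown here are the core of any GA-based prioritizer.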

    Towards surveillance of zoonoses associated with rats (Rattus norvegicus)

    Rats (Rattus spp.) are a source of numerous zoonotic pathogens responsible for morbidity and mortality worldwide. These species pose a particular public-health problem because they live in close contact with people, favouring potential transmission of rat-borne diseases. Following the "One Health" approach, surveillance of zoonotic pathogens in rats and other susceptible hosts should help improve animal and human health. Our aim was to develop surveillance of rat-associated zoonoses in a source species (Rattus norvegicus) and in target populations (cattle, dogs and pigs) acting as sentinels of human exposure. The screening methods, including a DNA microarray developed for the "WildTech" project, and the spatial distribution of risk were the major themes of this work. They were documented using 181 rats captured in the "département du Rhône" between 2010 and 2013, and diagnostic data on leptospirosis in cattle, dogs and pigs recorded at the "Laboratoire des Leptospires - Lyon" between 2008 and 2012. The various direct and indirect screening methods proved relevant for surveillance and detected four potentially zoonotic pathogens circulating in rats (Seoul hantavirus, hepatitis E virus, Leptospira spp. and Toxoplasma gondii). Although the locations of infected rats varied over short geographic distances, Leptospira spp. and Seoul hantavirus were the predominant hazards, infecting respectively 26% (95% CI: 20%-33%) and 14% (95% CI: 8%-20%) of rats. Their spatial distribution could be characterized with socio-economic indices and, for Leptospira-infected rats, a further study showed that the maintenance of strains depended on local factors intrinsic to the colony.
The study of leptospirosis in dogs and cattle revealed their increased exposure to the serogroup Australis, a heterogeneous spatial distribution and a significant increase in annual incidence in dogs. The same trends have been reported in humans, which underlines the relevance of surveying animal leptospirosis as a sentinel of human exposure. Altogether, the information obtained contributes to a better understanding of the epidemiology of rat-associated zoonoses, and of leptospirosis in particular, to support the implementation of surveillance and public-health decisions in the future.
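The reported prevalences carry binomial confidence intervals; as a quick consistency check, the 26% (95% CI 20%-33%) Leptospira figure matches roughly 47 positives among the 181 trapped rats under a normal-approximation interval. The count of 47 is inferred here, not stated in the abstract.

```python
from math import sqrt

def wald_ci(positives, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a
    binomial proportion, clipped to [0, 1]."""
    p = positives / n
    half = z * sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)
```

`wald_ci(47, 181)` gives about (0.26, 0.20, 0.32), consistent with the interval reported for Leptospira-infected rats.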

    A Method for Analyzing Field Reliability Lifetime Data Accounting for Discard Rates

    For manufacturers of automobiles and their components, understanding the lifetime characteristics of products in the field is extremely important. To this end, it is useful to collect and analyze the failure data readily obtained when users request repairs for failures occurring within the warranty period. However, as warranty periods grow longer, existing estimation methods yield biased results when reliability characteristics are estimated from long-term failure records for products that may be discarded: the failure probability should be computed against the number of units still surviving in the field after discards, but ignoring discards makes the estimated failure probability smaller than it actually is. To estimate the parameters of the lifetime distribution more accurately for products subject to discard, such as automobiles, this work builds models that incorporate a discard distribution obtained from warranty-period data and proposes lifetime-distribution estimation methods that account for scrapped vehicles. Two models are examined. Model 1 assumes that the field reliability data obtained during the warranty period comprise failure data together with discard and censoring information (e.g., from follow-up surveys); since the form of the failure lifetime distribution is assumed unknown, a nonparametric estimation method is used. This model combines the discard distribution with the probability of discard upon failure in the failure mode of interest, and the properties of the proposed estimator are evaluated. Model 2 considers lifetime estimation when only failure data and discard information are available; the distributional forms of the failure lifetime and of the discard and censoring information are assumed known, with the failure lifetime assumed Weibull. As prior work, Alam and Suzuki (2009) proposed a lifetime estimation method using only failure data under the condition that the form of the product's usage distribution is known, and showed that product lifetime characteristics can be estimated this way. Building on that method, the present work adds a discard distribution to the conventional failure and usage distributions so that lifetime parameters can be estimated accurately for products subject to discard. The thesis consists of five chapters. Chapter 1 describes the characteristics of warranty-period data used for lifetime-distribution estimation and the need to account for discard information, then states the purpose and organization of the thesis. Chapter 2 explains the two estimation methods for products with long warranty periods during which discards can occur: a nonparametric method for the case with failure data plus discard and censoring information, built on the discard distribution and the probability of discard upon failure, and a parametric method for the case with failure data and discard information only, assuming failure and discard are independent. Chapter 3 evaluates, through simulation, the performance of the estimator for the first case. The probability of discard upon failure, i.e., the probability that the whole system is discarded when a failure of the focal mode occurs, can vary with elapsed operating time and is hard to collect data on, so the chapter examines how the lifetime estimator is affected by this probability being known versus unknown. Chapter 4 examines, for the second case, maximum likelihood estimation of the Weibull failure parameters when the failure, discard, and censoring distributions are all Weibull and the discard distribution is known, demonstrates estimator performance through simulation experiments, and compares against the prior method of Alam and Suzuki (2009), which does not account for the discard rate. Finally, Chapter 5 summarizes the thesis: warranty-period data are valuable and easily obtained field failure data, and applying the proposed models that add a discard distribution to them enables effective lifetime evaluation for products subject to discard. University of Electro-Communications

    Microarchitectural Low-Power Design Techniques for Embedded Microprocessors

    With the omnipresence of embedded processing in all forms of electronics today, there is a strong trend towards wireless, battery-powered, portable embedded systems which have to operate under stringent energy constraints. Consequently, low power consumption and high energy efficiency have emerged as the two key criteria for embedded microprocessor design. In this thesis we present a range of microarchitectural low-power design techniques which increase the performance of embedded microprocessors and/or reduce their energy consumption, e.g., through voltage scaling. In the context of cryptographic applications, we explore the effectiveness of instruction set extensions (ISEs) for a range of different cryptographic hash functions (SHA-3 candidates) on a 16-bit microcontroller architecture (PIC24). Specifically, we demonstrate the effectiveness of lightweight ISEs based on lookup table integration and microcoded instructions using finite state machines for operand and address generation. On-node processing in autonomous wireless sensor node devices requires deeply embedded cores with extremely low power consumption. To address this need, we present TamaRISC, a custom-designed ISA with a corresponding ultra-low-power microarchitecture implementation. The TamaRISC architecture is employed in conjunction with an ISE and standard cell memories to design a sub-threshold capable processor system targeted at compressed sensing applications. We furthermore employ TamaRISC in a hybrid SIMD/MIMD multi-core architecture targeted at moderate to high processing requirements (> 1 MOPS). A range of different microarchitectural techniques for efficient memory organization are presented. Specifically, we introduce a configurable data memory mapping technique for private and shared access, as well as instruction broadcast together with synchronized code execution based on checkpointing.
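The configurable private/shared data memory mapping can be illustrated with a toy address resolver; the bank sizes and split point below are illustrative, not TamaRISC's actual parameters.

```python
class DataMemoryMap:
    """Toy model of a configurable data memory map for a multi-core:
    low addresses resolve to a per-core private bank, the remainder
    to a single shared bank visible to all cores."""

    def __init__(self, n_cores, private_words, shared_words):
        self.private = [[0] * private_words for _ in range(n_cores)]
        self.shared = [0] * shared_words
        self.private_words = private_words

    def resolve(self, core_id, addr):
        if addr < self.private_words:
            return self.private[core_id], addr          # private region
        return self.shared, addr - self.private_words   # shared region

    def store(self, core_id, addr, value):
        bank, offset = self.resolve(core_id, addr)
        bank[offset] = value

    def load(self, core_id, addr):
        bank, offset = self.resolve(core_id, addr)
        return bank[offset]
```

The same address can thus name different physical words on different cores in the private region, while the shared region provides a common communication window, which is the essence of a configurable private/shared split.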
We then study an inherent suboptimality due to the worst-case design principle in synchronous circuits, and introduce the concept of dynamic timing margins. We show that dynamic timing margins exist in microprocessor circuits, and that these margins are to a large extent state-dependent and correlated with the sequences of instruction types executed within the processor pipeline. To perform this analysis we propose a circuit/processor characterization flow and tool called dynamic timing analysis. This flow is then employed to devise a high-level instruction set simulation environment for evaluating the impact of timing errors on application performance. The presented approach improves the state of the art significantly in terms of simulation accuracy through the use of statistical fault injection. The dynamic timing margins in microprocessors are then systematically exploited for throughput improvements or energy reductions via our proposed instruction-based dynamic clock adjustment (DCA) technique. To this end, we introduce a 6-stage 32-bit microprocessor with cycle-by-cycle DCA. Besides a comprehensive design flow and simulation environment for evaluation of the DCA approach, we additionally present a silicon prototype of a DCA-enabled OpenRISC microarchitecture fabricated in 28 nm FD-SOI CMOS. The test chip includes a suitable clock generation unit which allows for cycle-by-cycle DCA over a wide range with fine granularity at frequencies exceeding 1 GHz. Measurement results of speedups and power reductions are provided.
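Instruction-based DCA can be sketched with a toy pipeline model: each cycle's clock period stretches only as far as the slowest instruction currently in flight, instead of always paying the global worst case. The per-instruction delays below are made up; in the thesis they would come from the dynamic timing analysis flow.

```python
# Hypothetical per-instruction-type critical-path delays in ns.
DELAY_NS = {"add": 0.9, "branch": 1.1, "load": 1.4, "mul": 1.6}

def run_time(program, stages=6, dca=True):
    """Total runtime of an in-order pipeline. With DCA the period of
    each cycle is set by the slowest instruction currently in flight;
    without it, every cycle pays the global worst-case delay."""
    worst = max(DELAY_NS.values())
    pipeline = [None] * stages
    total = 0.0
    for instr in list(program) + [None] * stages:  # feed, then drain
        pipeline = [instr] + pipeline[:-1]
        in_flight = [DELAY_NS[i] for i in pipeline if i is not None]
        if in_flight:
            total += max(in_flight) if dca else worst
    return total
```

For a stream of ten `add` instructions the clock runs at 0.9 ns per cycle instead of the 1.6 ns worst case, roughly a 1.8x speedup in this toy model; a single slow instruction only slows the cycles during which it occupies the pipeline.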

    Towards a standardised attack graph visual syntax

    More research needs to focus on developing effective methods of aiding the understanding and perception of cyber-attacks. Attack modelling techniques (AMTs), such as attack graphs, attack trees and fault trees, are popular methods of mathematically and visually representing the sequence of events that lead to a successful cyber-attack. Although useful in aiding cyber-attack perception, there is little empirical or comparative research which evaluates the effectiveness of these methods. Furthermore, there is no standardised attack graph visual syntax configuration; more than seventy-five self-nominated attack graph and twenty attack tree configurations have so far been described in the literature, each of which presents attributes such as preconditions and exploits in a different way. This research analyses methods of presenting cyber-attacks and reveals that attack graphs and attack trees are the dominant methods. The research proposes an attack graph visual syntax designed using evidence-based principles. The proposed attack graph is compared with the fault tree, a standard method of representing events such as cyber-attacks. This comparison shows that the proposed attack graph visual syntax is more effective than the fault tree method at aiding cyber-attack perception, particularly in educational contexts. Although the proposed attack graph visual syntax is shown to be cognitively effective, this is no indication of practitioner acceptance. The research proceeds to identify a preferred attack graph visual syntax from a range of visual syntaxes, one of which is the proposed attack graph visual syntax. The method used to perform the comparison is conjoint analysis, which is innovative for this field. The results of the second study reveal that the proposed attack graph visual syntax is one of the preferred configurations.
This attack graph has the following attributes: the flow of events is represented top-down, preconditions are represented as rectangles, and exploits as ellipses. The key contribution of this research is the development of an attack graph visual syntax which is effective in aiding the understanding of cyber-attacks, particularly in educational contexts. The proposed method is a significant step towards standardising the attack graph visual syntax.
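The preferred syntax maps naturally onto Graphviz: top-down rank direction, box shapes for preconditions, ellipses for exploits. A minimal generator (node names are illustrative, not taken from the study):

```python
def attack_graph_dot(preconditions, exploits, edges):
    """Emit a Graphviz DOT string following the preferred visual syntax:
    top-down flow, preconditions as rectangles, exploits as ellipses."""
    lines = ["digraph attack {", "  rankdir=TB;"]
    for p in preconditions:
        lines.append(f'  "{p}" [shape=box];')
    for e in exploits:
        lines.append(f'  "{e}" [shape=ellipse];')
    for src, dst in edges:
        lines.append(f'  "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)
```

Feeding the result to `dot -Tpng` renders the graph with the evidence-based shape assignments, so different tools can share one syntax.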