
    Feasibility study of an Integrated Program for Aerospace vehicle Design (IPAD). Volume 2: The design process

    The extent to which IPAD is to support the design process is identified. Case studies of representative aerospace products were developed as models to characterize the design process and to provide design requirements for the IPAD computing system.

    Test analysis & fault simulation of microfluidic systems

    This work presents a design, simulation and test methodology for microfluidic systems, with particular focus on simulation for test. A Microfluidic Fault Simulator (MFS) has been created, based around COMSOL, which allows a fault-free system model to undergo fault injection and provide test measurements. A post-MFS test analysis procedure is also described.

    A range of fault-free system simulations have been cross-validated against experimental work to gauge the accuracy of the fundamental simulation approach prior to further investigation and development of the simulation and test procedure.

    A generic mechanism, termed a fault block, has been developed to provide fault injection and a method of describing a low-abstraction behavioural fault model within the system. This technique has allowed the creation of a fault library containing a range of different microfluidic fault conditions. Each of the fault models has been cross-validated against experimental conditions or published results to determine its accuracy.

    Two test methods, namely impedance spectroscopy and Levich electro-chemical sensors, have been investigated as general methods of microfluidic test, each of which has been shown to be sensitive to a multitude of faults. Each method has been successfully implemented within the simulation environment and cross-validated by first-hand experimentation or published work.

    A test analysis procedure based around the Neyman-Pearson criterion has been developed to provide a probabilistic metric for each test applied under a given fault condition, giving a quantitative assessment of each test. These metrics are used to analyse the sensitivity of each test method, which is useful when determining which tests to employ in the final system. Furthermore, these probabilistic metrics may be combined to provide a fault coverage metric for the complete system.

    The complete MFS method has been applied to two system case studies: a hydrodynamic "Y" channel and a flow cytometry system for prognosing head and neck cancer. Decision trees are trained on the test measurement data and fault conditions as a means of classifying the system's fault condition. The classification rules created by the decision trees may be displayed graphically or as a set of rules which can be loaded into test instrumentation. During the course of this research a high-voltage power supply instrument has been developed to aid electro-osmotic experimentation, along with an impedance spectrometer to provide embedded test
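
    The decision-tree classification step described in this abstract lends itself to a brief illustration. The sketch below is not the author's MFS pipeline: it trains a decision tree on synthetic test measurements (hypothetical impedance magnitudes at two frequencies) to classify hypothetical fault conditions, and all feature names, fault labels and data are invented for the example.

```python
# Minimal sketch: classify microfluidic fault conditions from test measurements
# using a decision tree, in the spirit of the MFS analysis described above.
# All data, feature names and fault labels below are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

def simulate_measurement(fault, n):
    """Toy stand-in for simulator output: impedance magnitude at two test frequencies."""
    base = {"fault_free": (1.00, 1.00), "blockage": (1.60, 1.10), "leak": (0.70, 0.95)}[fault]
    return rng.normal(loc=base, scale=0.05, size=(n, 2))

faults = ["fault_free", "blockage", "leak"]
X = np.vstack([simulate_measurement(f, 200) for f in faults])
y = np.repeat(faults, 200)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The learned rules can be exported as text and, in principle, loaded into
# test instrumentation as simple threshold checks.
print(export_text(clf, feature_names=["Z_1kHz", "Z_100kHz"]))
print(clf.predict([[1.55, 1.12]]))  # should classify as 'blockage'
```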

    Development of Secure Software: Rationale, Standards and Practices

    Society is run by software. Electronic processing of personal and financial data forms the core of nearly all societal and economic activities, and concerns every aspect of life. Software systems are used to store, transfer and process this vital data, and these systems are in turn interfaced by other systems, forming complex networks of data stores and processing entities.

    This data requires protection from misuse, whether accidental or intentional. Elaborate and extensive security mechanisms are built around the protected information assets. These mechanisms cover every aspect of security, from physical surroundings and people to data classification schemes, access control, identity management, and various forms of encryption. Despite the extensive information security effort, repeated security incidents keep compromising our financial assets, intellectual property, and privacy. In addition to the direct and indirect costs, they erode trust in the very foundation of information security: the availability, integrity, and confidentiality of our data. Lawmakers at various national and international levels have reacted by creating a growing body of regulation to establish a baseline for information security. Increased awareness of information security issues has led to this regulation being extended to one of the core issues in secure data processing: the security of the software itself.

    Information security has many aspects. It is generally divided into organizational security, infrastructure security, and application security. Within application security, the various security engineering processes and techniques used at development time form the discipline of software security engineering. The aim of these security activities is to address the software-induced risk to the organization, reduce security incidents, and thereby lower the lifetime cost of the software. Software security engineering manages software risk by implementing security controls directly in the software, and by providing security assurance for the existence of these controls through verification and validation. A software development process typically has several objectives, of which security may form only a part. When security is not expressly prioritized, development organizations tend to direct their resources to the primary requirements. While this produces short-term cost and time savings, the increased software risk induced by a lack of security and assurance engineering has to be mitigated by other means. In addition to increasing the lifetime cost of software, unmitigated or even unidentified risk has an increased chance of being exploited and of causing other software issues.

    This dissertation concerns security engineering in agile software development. The aim of the research is to find ways to produce secure software through the introduction of security engineering into agile software development processes. Security engineering processes are derived from the extant literature, industry practices, and several national and international standards. The standardized requirements for software security are traced to their origins in the late 1960s, and the alignment of software engineering and security engineering objectives is followed from their original challenges to the current agile software development methods. The research provides direct solutions to the formation of security objectives in software development, and to the methods used to achieve them.
    It also identifies and addresses several issues and challenges found in the integration of these activities into the development processes, providing directly applicable and clearly stated solutions for practical security engineering problems. The research found the practices and principles promoted by agile and lean software development methods to be compatible with many security engineering activities. Automated, tool-based processes and the drive for efficiency and improved software quality were found to directly support security engineering techniques and objectives. Several new ways to integrate security engineering into agile software development processes were identified. Ways to integrate security assurance into the development process were also found, in the form of security documentation, analyses, and reviews. Assurance artifacts can be used to improve software design and enhance quality assurance; in contrast, detached security engineering processes may create security assurance that serves only purposes external to the software processes.

    The results provide direct benefits to all software stakeholders, from the developers and customers to the end users. Security awareness is the key to more secure software. Awareness creates a demand for security, and the demand gives software developers concrete objectives and a rationale for the security work. This also creates a demand for new security tools, processes and controls to improve the efficiency and effectiveness of software security engineering. At first this demand is created by increased security regulation, but the main pressure for change will emanate from the people and organizations using the software: security is a mandatory requirement, and software must provide it. This dissertation addresses these new challenges. Software security continues to gain importance, prompting new solutions and research.

    Software is a central part of the basic infrastructure of our society. A significant share of our social and economic activity is based on the electronic processing, storage and transfer of data, and a substantial body of software has been developed to carry out these tasks, forming complex networks that enable the shared use of data. To protect this data, numerous protection mechanisms have been built around it, intended to prevent its misuse, whether accidental or intentional. These mechanisms concern not only the software but also its operating environments and users, as well as the data itself: examples include data classification schemes, access restrictions, identity management and encryption techniques. Despite these protective measures, security breaches endanger both the strategic information assets of business and society and our personal data. In addition to the financial losses, attacks erode trust in the cornerstones of information security: the confidentiality, integrity and availability of data. To protect these foundations, a growing body of security regulation has been created to define a baseline for information security. Thanks to increased security awareness, this new regulation has been extended to cover the core of secure data processing: software development. Information security consists of several areas: organizational security practices, the security of the data processing infrastructure and, central to this research, the security of the software itself.

    This area comprises the security techniques and processes used during software development. Their purpose is to reduce, or remove altogether, the risks that software poses to organizations. Software development security aims to lower the lifetime cost of software by specifying and implementing security controls directly in the software itself, and the effectiveness of these controls is demonstrated with separate verification and validation methods. This dissertation focuses on security work as part of iterative and incremental, so-called agile, software development. The aim of the research is to find new ways to produce secure software by making security work an integral part of the software development processes. The security engineering processes are derived from the scientific and technical literature of the field, from prevailing software development practices, and from national and international security standards. The development of standardized security requirements is followed from its beginnings in the 1960s, relating it to the evolution of software development objectives and challenges up to the present day and the era of agile methods. The research presents concrete solutions for setting the security objectives of software development and for achieving them. It also identifies problems and challenges in combining security work with software development methods, and offers guidelines and alternatives for resolving them. The research shows that aligning the practices and principles of iterative and incremental software development with security engineering activities improves software quality and security, thereby lowering costs over the software's entire maintenance lifecycle. The automation of development work, tool-based processes and the drive for efficiency and high quality are directly aligned with the methods and objectives of security engineering. The research identified several new ways to combine software development and security work, and found ways to use the security assurance material produced through documentation, analyses and reviews as part of software design and quality assurance; kept separate, these processes lead to a situation in which the security material serves only needs external to software development. The results benefit all stakeholders, from software developers to customers and end users. Software security work is founded on knowledge and education. Knowledge in turn increases demand, which gives security work concrete objectives and a rationale already during software development. The focus of security work shifts from reactive defence and damage repair towards the structural prevention of harm. Demand also creates a need for new tools, processes and techniques that increase the efficiency and effectiveness of security engineering. At present this demand is created mainly by increased security regulation, but most of the pressure for change will come from the requirements of software customers and users: the economic significance of software security capability is growing. The importance of security will be emphasized further, increasing the need for security work and research in the future.

    Software reliability through fault-avoidance and fault-tolerance

    Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk-, coverage- and time-based models are being developed to provide an additional theoretical and empirical basis for estimating the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.
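
    The BRO (Boolean and Relational Operator) strategy mentioned above derives test requirements from the structure of a condition: each relational sub-expression is driven to the outcomes <, =, >, and each boolean operand to true and false. The sketch below is a simplified illustration of that idea; it is not the BGG tool or the full BRO constraint-reduction procedure, and the condition and input values are hypothetical.

```python
# Simplified illustration of condition-based test generation in the spirit of
# BRO testing: each relational operand is forced to <, =, > and each boolean
# operand to true/false. Toy example only, with a hypothetical condition.
from itertools import product

# Condition under test (hypothetical): (a < b) and flag
relational_outcomes = ["<", "=", ">"]   # outcomes for the relational operand a ? b
boolean_outcomes = [True, False]        # outcomes for the boolean operand flag

def inputs_for(rel_outcome, flag):
    """Pick concrete inputs that force the requested operand outcomes."""
    a, b = {"<": (1, 2), "=": (2, 2), ">": (3, 2)}[rel_outcome]
    return a, b, flag

for rel, flag in product(relational_outcomes, boolean_outcomes):
    a, b, f = inputs_for(rel, flag)
    outcome = (a < b) and f
    print(f"a={a}, b={b}, flag={f} -> condition={outcome}  (constraint {rel!r}, {flag})")
```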

    Novel, Integrated and Revolutionary Well Test Interpretation and Analysis

    Well test interpretation is an important tool for reservoir characterization. Four methods exist to achieve this goal: type-curve matching, the conventional straight-line method, non-linear regression analysis, and the TDS technique. The first is basically a trial-and-error procedure, in which a match deviation of a millimeter can translate into differences of up to 200 psi, with the added difficulty of the large number of matching charts. The second, although very important, requires a plot for every flow regime and offers no way to verify the calculated parameters. The third suffers from non-uniqueness of solutions, but is the most widely used by engineers since it is performed automatically by a computer program. This book focuses on the fourth method, which uses a single plot of pressure and the pressure derivative to identify characteristic lines and features for parameter estimation. It can be used alone and applies to practically all existing flow regime cases. In several cases the same parameter can be estimated from different sources, providing a useful means of verification. Combining this method with the second and third is recommended and widely used by the author.
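
    A central element of the TDS technique mentioned above is the log-log plot of the pressure change and its logarithmic (Bourdet-type) derivative, t·dΔp/dt. The following is a minimal sketch, using synthetic drawdown data, of how such a derivative curve might be computed; the reservoir parameters and the absence of derivative smoothing are assumptions for the illustration, not material from the book.

```python
# Minimal sketch: pressure-derivative curve for a well test, of the kind used
# on the log-log diagnostic plot (delta-p and t*d(delta-p)/dt vs. time).
# Synthetic drawdown data; real analyses use measured pressure vs. time.
import numpy as np

t = np.logspace(-2, 2, 200)                   # elapsed time, hours (synthetic)
k, h, mu, q, B = 50.0, 30.0, 1.0, 300.0, 1.2  # hypothetical permeability, thickness, viscosity, rate, FVF
m = 70.6 * q * mu * B / (k * h)               # semilog slope coefficient in field units
dp = m * (np.log(t) + 7.0)                    # toy infinite-acting radial-flow response, psi

# Logarithmic (Bourdet-type) derivative: t * d(dp)/dt = d(dp)/d(ln t)
dp_deriv = np.gradient(dp, np.log(t))

# During infinite-acting radial flow the derivative is flat; its level reflects kh.
print("derivative plateau ≈", round(dp_deriv[len(t) // 2], 2), "psi (equals m here)")
```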

    European Atlas of Natural Radiation

    Natural ionizing radiation is considered the largest contributor to the collective effective dose received by the world population. The human population is continuously exposed to ionizing radiation from several natural sources, which can be classified into two broad categories: high-energy cosmic rays incident on the Earth's atmosphere and releasing secondary radiation (the cosmic contribution); and radioactive nuclides generated during the formation of the Earth and still present in the Earth's crust (the terrestrial contribution). Terrestrial radioactivity is mostly produced by the uranium and thorium radioactive families together with potassium. In most circumstances, radon, a noble gas produced in the radioactive decay of uranium, is the most important contributor to the total dose. This Atlas aims to present the current state of knowledge of natural radioactivity, by giving general background information and describing its various sources. This reference material is complemented by a collection of maps of Europe displaying the levels of natural radioactivity caused by different sources. It is a compilation of contributions and reviews received from more than 80 experts in their fields, from universities, research centres, national and European authorities and international organizations. The Atlas provides reference material and makes harmonized datasets available to the scientific community and national competent authorities. In parallel, this Atlas may serve as a tool for the public to:
    • familiarize itself with natural radioactivity;
    • be informed about the levels of natural radioactivity caused by different sources;
    • have a more balanced view of the annual dose received by the world population, to which natural radioactivity is the largest contributor;
    • make direct comparisons between doses from natural sources of ionizing radiation and those from man-made (artificial) ones, and hence better understand the latter.
    JRC.G.10 - Knowledge for Nuclear Security and Safety
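
    As a worked example of the dose figures discussed above, the annual effective dose from indoor radon is commonly estimated from the radon concentration, an equilibrium factor, indoor occupancy time and a dose conversion coefficient. The short calculation below uses commonly quoted UNSCEAR-style values; these coefficients and the example concentration are assumptions for the illustration and are not taken from the Atlas itself.

```python
# Illustrative estimate of the annual effective dose from indoor radon,
# the largest single natural contributor mentioned above. Coefficients follow
# a commonly used UNSCEAR-style convention and are assumptions for this sketch.
radon_concentration = 100.0   # indoor radon activity concentration, Bq/m^3 (example value)
equilibrium_factor = 0.4      # radon/progeny equilibrium factor (typical indoor assumption)
occupancy_hours = 7000.0      # hours spent indoors per year (typical assumption)
dose_coefficient = 9.0e-6     # mSv per (Bq h m^-3) of equilibrium-equivalent concentration

annual_dose_mSv = (radon_concentration * equilibrium_factor
                   * occupancy_hours * dose_coefficient)
print(f"Estimated annual effective dose from radon: {annual_dose_mSv:.1f} mSv")  # ~2.5 mSv here
```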

    Certification of open-source software: a role for formal methods?

    Despite its huge success and increasing incorporation in complex, industrial-strength applications, open source software, by the very nature of its open, unconventional, distributed development model, is hard to assess and certify in an effective, sound and independent way. This makes its use and integration within safety- or security-critical systems a risk, and at the same time an opportunity and a challenge for rigorous, mathematically based methods which aim at pushing software analysis and development to the level of a mature engineering discipline. This paper discusses this challenge and proposes a number of ways in which open source development may benefit from the whole patrimony of formal methods. L. S. Barbosa's research was partially supported by the CROSS project, under contract PTDC/EIA-CCO/108995/2008.

    Mathematical Modeling and Simulation in Mechanics and Dynamic Systems

    The present book contains the 16 papers accepted and published in the Special Issue "Mathematical Modeling and Simulation in Mechanics and Dynamic Systems" of the MDPI journal Mathematics, which cover a wide range of topics connected to the theory and applications of modeling and simulation of dynamic systems in different fields. These topics include, among others, methods to model and simulate mechanical systems in real engineering applications. It is hoped that the book will be of interest and useful to those working in the area of modeling and simulation of dynamic systems, as well as to those with the proper mathematical background who wish to become familiar with recent advances in dynamic systems, which have nowadays entered almost all sectors of human life and activity.
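
    As a small illustration of the kind of dynamic-system simulation this collection is concerned with, the sketch below integrates a damped mass-spring oscillator with SciPy. It is not taken from any of the Special Issue papers, and the parameter values are arbitrary example values.

```python
# Minimal sketch: simulating a damped mass-spring oscillator, a prototypical
# mechanical dynamic system. Parameters and initial conditions are arbitrary.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 1.0, 0.3, 4.0   # mass [kg], damping [N s/m], stiffness [N/m]

def rhs(t, y):
    """State y = [position, velocity]; returns dy/dt from m*x'' + c*x' + k*x = 0."""
    x, v = y
    return [v, -(c * v + k * x) / m]

sol = solve_ivp(rhs, t_span=(0.0, 20.0), y0=[1.0, 0.0], dense_output=True)
t = np.linspace(0.0, 20.0, 5)
print(np.round(sol.sol(t)[0], 3))   # sampled displacement of the decaying oscillation
```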

    45-nm Radiation Hardened Cache Design

    Circuits on smaller technology nodes become more vulnerable to radiation-induced upsets. Since this is a major problem for electronic circuits used in space applications, designers have a variety of solutions in hand. Radiation hardening by design (RHBD) is an approach in which electronic components are designed to work properly in certain radiation environments without the use of special fabrication processes. This work focuses on the cache design for a high-performance microprocessor. The design mitigates radiation effects such as single-event effects (SEE) on a commercial-foundry 45 nm SOI process, and has been ported from an earlier cache design at the 90 nm process node. The cache is a 16 KB, 4-way set-associative, write-through design that uses a no-write-allocate policy. The cache has been tested to write and read above 2 GHz at VDD = 0.9 V. Interleaved layout, parity protection, dual redundancy, and checking circuits are used in the design to achieve radiation hardness. High speed is accomplished through the use of dynamic circuits and short wiring routes wherever possible. Gated clocks and optimized wire connections are used to reduce power. A structured methodology is used to build up the entire cache.
    Dissertation/Thesis, M.S. Electrical Engineering 201
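
    As a rough illustration of the organization described above: a 16 KB, 4-way set-associative cache with, for example, 32-byte lines has 16384 / (4 × 32) = 128 sets. The behavioral sketch below models that indexing together with the write-through, no-write-allocate policy; the 32-byte line size and LRU replacement are assumptions for the illustration, and the parity, interleaving and redundancy protections of the actual design are hardware-level features not modeled here.

```python
# Behavioral sketch of a 16 KB, 4-way set-associative, write-through,
# no-write-allocate cache. Line size and LRU replacement are assumptions
# for illustration; the real design's radiation protections are not modeled.
LINE_BYTES = 32
WAYS = 4
SETS = 16 * 1024 // (WAYS * LINE_BYTES)   # = 128 sets

class Cache:
    def __init__(self):
        # Each set holds up to WAYS tags, ordered most-recently-used first.
        self.sets = [[] for _ in range(SETS)]

    def _index_tag(self, addr):
        line = addr // LINE_BYTES
        return line % SETS, line // SETS

    def read(self, addr):
        idx, tag = self._index_tag(addr)
        ways = self.sets[idx]
        if tag in ways:                     # hit: move to MRU position
            ways.remove(tag)
            ways.insert(0, tag)
            return "hit"
        ways.insert(0, tag)                 # miss: allocate, evict LRU if set is full
        if len(ways) > WAYS:
            ways.pop()
        return "miss (fill from memory)"

    def write(self, addr):
        idx, tag = self._index_tag(addr)
        ways = self.sets[idx]
        if tag in ways:                     # write-through: memory is always updated
            return "hit (update line and memory)"
        return "miss (no-write-allocate: write memory only)"

c = Cache()
print(c.read(0x1000), c.read(0x1000), c.write(0x2000))
```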