
    Sampling Plan Using Process Loss Index Using Multiple Dependent State Sampling Under Neutrosophic Statistics

    This paper presents the design of a sampling plan using process loss consideration for multiple dependent state sampling under neutrosophic statistics. The operating characteristics under the neutrosophic statistical interval method (NSIM) are developed to find the neutrosophic plan parameters of the proposed sampling plan. Non-linear optimization under NSIM is used to find the optimal neutrosophic plan parameters under the given conditions. The advantages of the proposed sampling plan over existing sampling plans are discussed. A real example with uncertain observations is given for illustration.
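    The paper's interval-valued (neutrosophic) optimization is not reproduced here, but the underlying idea of searching for plan parameters that satisfy operating-characteristic constraints can be sketched with the classical crisp single-sampling analogue. The AQL/LQL values and risk levels below are illustrative assumptions, not taken from the paper:

    ```python
    from math import comb

    def oc(n: int, c: int, p: float) -> float:
        """Probability of lot acceptance: P(X <= c) for X ~ Binomial(n, p)."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

    def design_plan(aql: float, lql: float, alpha: float = 0.05,
                    beta: float = 0.10, n_max: int = 2000):
        """Smallest (n, c) meeting the producer's-risk constraint at the AQL
        and the consumer's-risk constraint at the LQL."""
        for n in range(1, n_max + 1):
            for c in range(n + 1):
                if oc(n, c, aql) >= 1 - alpha and oc(n, c, lql) <= beta:
                    return n, c
        return None

    # Hypothetical quality levels: AQL = 1%, LQL = 5%
    print(design_plan(0.01, 0.05))
    ```

    In the neutrosophic setting each parameter becomes an interval rather than a single number, and the same constraint search runs over interval-valued OC functions; this crisp version is the special case where the interval collapses to a point.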

    Studies on chain sampling schemes in quality and reliability engineering

    Ph.D. thesis (Doctor of Philosophy)

    Nursing Students' Experiences Using High-Fidelity Cardiovascular Simulation: A Descriptive Study

    In recent years, high-fidelity simulation in nursing has become an increasingly popular education tool (Sanford, 2010). Many nursing programs throughout the United States and abroad have incorporated simulation into their curricula. In 2003, the National League for Nursing (NLN) endorsed the use of simulation to prepare students for critical thinking, self-reflection, and the complex clinical environment (Jeffries, 2007). Simulation was defined as the creation of an event, situation, or environment that closely mirrors what one would encounter in the "real world" (Cioffi, 2001; Rauen, 2001). Simulations were designed to motivate students to actively participate in the learning process by constructing knowledge, exploring assumptions, and developing psychomotor skills in a safe environment (Tomey, 2003). High Fidelity Human Simulation (HFHS) was an experiential action assessment method using a lifelike computerized mannequin that can be programmed to respond to real-world inputs (Fero et al., 2010). Commonly identified benefits of simulation include improved skill performance, teamwork, effective communication, and the opportunity to observe the consequences of incorrect decisions, as well as the achievement of competencies and the effects of medication administration (Todd, Manz, Hawkins, Parsons, & Hercinger, 2008). Another identified outcome of simulation was building self-confidence in the nursing student. Simulation experiences were effective in increasing students' self-efficacy in their ability to perform clinical skills (Bambini, Washburn, & Perkins, 2009). The level of self-efficacy was dependent on student performance during the simulation scenario. The goal of simulation in relation to self-efficacy was to improve student confidence when transferring learning to nursing practice.

    The 100 ampere-hour nickel cadmium battery development program, volume 1

    A program was conducted to develop a long-life, reliable, and safe 100 ampere-hour sealed nickel-cadmium cell and battery module, with ancillary charge control and automated test equipment, to fulfill the requirements of a large Manned Orbital Space Station that uses solar arrays as its prime source of 25 kW of electrical power. A sealed 100 ampere-hour cell with long-life potential and a replaceable, space-maintainable battery module have been developed for Manned Space Station applications. The 100 ampere-hour cell has been characterized for initial (early-life) anticipated conditions.

    Estonia. Geographical studies. 10

    http://www.ester.ee/record=b2399309*es

    Contribution to the infrastructure convergence of high-performance computing and large-scale data processing

    The amount of data produced, in both the scientific community and the commercial world, is constantly growing. The field of Big Data has emerged to handle large amounts of data on distributed computing infrastructures. High-Performance Computing (HPC) infrastructures are traditionally used for the execution of compute-intensive workloads. However, the HPC community is also facing an increasing need to process large amounts of data derived from high-definition sensors and large physics apparatus. The convergence of the two fields, HPC and Big Data, is currently taking place. In fact, the HPC community already uses Big Data tools, which are not always integrated correctly, especially at the level of the file system and the Resource and Job Management System (RJMS). In order to understand how we can leverage HPC clusters for Big Data usage, and what the challenges for HPC infrastructures are, we have studied multiple aspects of the convergence. We first provide a survey of software provisioning methods, with a focus on data-intensive applications. We contribute a new RJMS collaboration technique called BeBiDa, which is based on 50 lines of code whereas similar solutions use at least 1000 times more. We evaluate this mechanism in real conditions and in a simulated environment with our simulator Batsim. Furthermore, we provide extensions to Batsim to support I/O, and showcase the development of a generic file system model along with a Big Data application model. This allows us to complement the BeBiDa real-conditions experiments with simulations while enabling us to study file system dimensioning and trade-offs. All the experiments and analysis in this work have been done with reproducibility in mind.
Based on this experience, we propose to integrate the development workflow and data analysis into the reproducibility mindset, and give feedback on our experiences with a list of best practices.

    Web Search Engines and the Need for Complex Information

    The electronic version of this dissertation does not include the publications. Web search engines have become the primary means of finding information on the Internet. Along with their increasing popularity, the areas of their application have grown from simple look-up to rather complex information needs. Academic interest in search has also started to shift from analyzing simple query and response patterns to examining more sophisticated activities covering longer time spans. Current search tools do not support those activities as well as they do simple look-up tasks. In particular, support for aggregating search results from multiple queries, taking discoveries into account and synthesizing them into a newly compiled document, is only at its beginning, and this motivates researchers to develop new tools for supporting such information-seeking tasks. In this dissertation I present the results of empirical research focused on evaluating search engines and developing a theoretical model of the complex search process that can be used to better support this special kind of search with existing search tools. The sub-goals were to: (a) develop a model of complex search; (b) create metrics for that model; (c) distinguish complex search tasks from simple look-ups and determine whether their complexity can be captured with simple measures; (d) analyze how differently users behave when performing complex search tasks with web search engines; (e) study the correlation between people's everyday web-use habits and their search performance; (f) examine how well people pre-assess the difficulty and effort of a search task; and (g) assess the effect of gender and age on search performance. It is not the goal of the thesis to implement a new search technology; therefore performance benchmarks against established systems such as question-answering systems are not part of this thesis. I present a model that decomposes complex Web search tasks into a measurable, three-step process.
I show the innate characteristics of complex search tasks that make them distinguishable from their less complex counterparts and showcase an experimentation method for carrying out complex-search-related user studies. I demonstrate the main steps taken during the development and implementation of the Search-Logger study framework (the technical manifestation of the aforementioned method) to carry out search user studies. I present the results of user studies carried out with this approach. Finally, I present the development and application of the ATMS (awareness-task-monitor-share) model to improve the support for complex search needs in current Web search engines.

    Public Acceptance of Medical Screening Recommendations, Safety Risks, and Implied Liabilities Requirements for Space Flight Participation

    The space tourism industry is preparing to send space flight participants on orbital and suborbital flights. Space flight participants are not professional astronauts and are not subject to the rules and guidelines covering space flight crewmembers. This research addresses public acceptance of current Federal Aviation Administration guidance and regulations for civil participation in human space flight. The research used an ordinal linear regression analysis of survey data to explore public acceptance of the currently recommended medical screening guidance and the regulations for safety risk and implied liability for space flight participation. Independent variables were participant demographics, while dependent variables represented current Federal Aviation Administration guidance and regulations for space flight participation. The analysis produced descriptive statistics and polytomous universal and general linear models for the ordinal regression of the data. Odds ratios were derived for the demographic categories to interpret the likelihood of acceptance of the criteria. Various ordinal regression modeling techniques were employed to ascertain significant likelihood of acceptance of the guidance and regulation dependent variables as predicted by the demographic independent variables. Five of the twelve demographic variables significantly influenced public acceptance of one or more areas of the Federal Aviation Administration guidance and regulations: age, household size, marital status, employment status, and employment class. Specifically, increases in age and household size, as well as those never married, those employed full-time, and the self-employed, showed a significantly increased likelihood of acceptance of one or more areas of the guidance and regulations for space flight participation.
    The findings are intended to inform government regulators and commercial space industries about which guidance and regulations different demographic groups of the public are willing to accept.
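    The odds-ratio interpretation in this abstract comes from cumulative-logit (proportional-odds) ordinal regression, where each coefficient exponentiates to an odds ratio for moving to a higher response category. A minimal sketch of that model, using hypothetical threshold and coefficient values rather than the study's fitted estimates:

    ```python
    from math import exp

    def logistic(z: float) -> float:
        return 1.0 / (1.0 + exp(-z))

    def category_probs(thresholds, beta, x):
        """Proportional-odds model: P(Y <= j | x) = logistic(theta_j - beta * x).
        Returns the probability of each ordered acceptance category."""
        cum = [logistic(t - beta * x) for t in thresholds] + [1.0]
        return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

    # Hypothetical fitted values: three acceptance levels, one standardized
    # predictor (e.g. age). A positive beta shifts probability mass toward
    # higher acceptance categories as x increases.
    thresholds = [-0.5, 1.2]
    beta = 0.4
    odds_ratio = exp(beta)  # cumulative odds of higher acceptance per unit of x

    print(category_probs(thresholds, beta, 1.0), odds_ratio)
    ```

    An odds ratio above 1 (here exp(0.4) ≈ 1.49) is read exactly as in the abstract: each unit increase in the predictor multiplies the odds of falling in a higher acceptance category by that factor, under the proportional-odds assumption.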