
    Study of Radiographic Linear Indications and Subsequent Microstructural Features in Gas Tungsten Arc Welds of Inconel 718

    This study presents examples and considerations for differentiating linear radiographic indications produced by gas tungsten arc welds in a 0.05-in-thick sheet of Inconel 718. A series of welds with different structural features, including the enigma indications and other defect indications such as lack of fusion and penetration, were produced, radiographed, and examined metallographically. The enigma indications were produced by a large columnar grain running along the center of the weld nugget, which occurred when the weld speed was reduced sufficiently below nominal. Examples of the respective indications, including the effect of changing the x-ray source location, are presented as an aid to differentiation. Enigma, nominal, and hot-weld specimens were tensile tested to demonstrate the harmlessness of the enigma indication. Statistical analysis showed that there is no difference between the strengths of these three weld conditions.
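
    The abstract does not name the statistical test used to compare the three weld conditions; the following is a minimal sketch, assuming a one-way ANOVA on ultimate tensile strengths. The arrays enigma, nominal, and hot_weld are hypothetical placeholder data, not results from the study.

```python
# Hypothetical comparison of tensile strengths (ksi) for three weld conditions.
# The data values below are illustrative placeholders, not the study's results.
import numpy as np
from scipy import stats

enigma   = np.array([178.2, 180.5, 179.1, 181.0, 177.8])
nominal  = np.array([179.0, 180.1, 178.5, 181.4, 179.9])
hot_weld = np.array([177.5, 179.8, 180.2, 178.9, 180.7])

# One-way ANOVA: tests the null hypothesis that all three weld conditions
# share the same mean tensile strength.
f_stat, p_value = stats.f_oneway(enigma, nominal, hot_weld)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

# A large p-value (e.g. p > 0.05) would be consistent with the study's
# conclusion that the enigma indication does not degrade weld strength.
```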

    Computable randomness is about more than probabilities

    We introduce a notion of computable randomness for infinite sequences that generalises the classical version in two important ways. First, our definition of computable randomness is associated with imprecise probability models, in the sense that we consider lower expectations (or sets of probabilities) instead of classical 'precise' probabilities. Second, instead of binary sequences, we consider sequences whose elements take values in some finite sample space. Interestingly, we find that every sequence is computably random with respect to at least one lower expectation, and that lower expectations that are more informative have fewer computably random sequences. This leads to the intriguing question of whether every sequence is computably random with respect to a unique most informative lower expectation. We study this question in some detail and provide a partial answer.
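
    As a point of reference (notation assumed here, not taken from the paper), a lower expectation can be written as the lower envelope of a set of probability models, which is the sense in which it generalises a single 'precise' probability:

```latex
% Lower and upper expectation as lower/upper envelopes of a credal set
% \mathcal{M} of probability measures on a finite sample space; E_P is the
% ordinary expectation under a single measure P.
\[
\underline{E}(f) \;=\; \inf_{P \in \mathcal{M}} E_P(f),
\qquad
\overline{E}(f) \;=\; -\underline{E}(-f) \;=\; \sup_{P \in \mathcal{M}} E_P(f).
\]
```

    On this reading, a more informative lower expectation corresponds to a smaller set of probability measures, which fits the abstract's observation that more informative lower expectations admit fewer computably random sequences.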

    Adapting and implementing training, guidelines and treatment cards to improve primary care-based hypertension and diabetes management in a fragile context: results of a feasibility study in Sierra Leone

    Background Sierra Leone, a fragile country, is facing an increasingly significant burden of non-communicable diseases (NCDs). Facilitated by an international partnership, a project was developed to adapt and pilot desktop guidelines and other clinical support tools to strengthen primary care-based hypertension and diabetes diagnosis and management in Bombali district, Sierra Leone, between 2018 and 2019. This study assesses the feasibility of the project through analysis of the processes of intervention adaptation and development, delivery of training and implementation of a care improvement package, and preliminary outcomes of the intervention. Methods A mixed-method approach was used for the assessment, including 51 semi-structured interviews, review of routine treatment cards (retrieved for newly registered hypertensive and diabetic patients from June 2018 to March 2019 and followed up for three months) and mentoring data, and observation of training. Thematic analysis was used for qualitative data, and descriptive trend analysis and t-tests were used for quantitative data, wherever appropriate. Results A Technical Working Group, established at district and national level, helped to adapt and develop the context-specific desktop guidelines for clinical management and lifestyle interventions and the associated training curriculum and modules for community health officers (CHOs). Following a four-day training of CHOs focusing on communication skills and the diagnosis and management of hypertension and diabetes, and supported by a CHO-based mentorship strategy, improvements were observed in NCD knowledge and in care processes for diagnosis, treatment, lifestyle education and follow-up. The intervention significantly improved the average diastolic blood pressure of hypertensive patients (n = 50) three months into treatment (98 mmHg at baseline vs. 86 mmHg in Month 3, P = 0.001). However, health systems barriers typical of fragile settings, such as the cost of transport and medication for patients and the lack of supply of medications and treatment equipment in facilities, hindered the optimal delivery of care for hypertensive and diabetic patients. Conclusion Our study suggests the potential feasibility of this approach to strengthening primary care delivery for NCDs in fragile contexts. However, the approach needs to be built into routine supervision and pre-service training to be sustained. Key barriers in the health system and at community level also need to be addressed.
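
    The abstract reports a t-test on diastolic blood pressure at baseline versus month 3 (98 vs. 86 mmHg, n = 50, P = 0.001) without specifying the exact form of the test; the sketch below assumes a paired t-test on per-patient readings, with simulated arrays standing in for the real treatment-card data.

```python
# Minimal sketch of a paired t-test on diastolic blood pressure (mmHg),
# baseline vs. three months into treatment. The data are simulated stand-ins,
# not the study's treatment-card records.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_patients = 50
baseline = rng.normal(loc=98, scale=10, size=n_patients)
month_3  = baseline - rng.normal(loc=12, scale=8, size=n_patients)

t_stat, p_value = stats.ttest_rel(baseline, month_3)
print(f"mean baseline = {baseline.mean():.1f} mmHg, "
      f"mean month 3 = {month_3.mean():.1f} mmHg, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```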

    Coherent frequentism

    By representing the range of fair betting odds according to a pair of confidence set estimators, dual probability measures on parameter space called frequentist posteriors secure the coherence of subjective inference without any prior distribution. The closure of the set of expected losses corresponding to the dual frequentist posteriors constrains decisions without arbitrarily forcing optimization under all circumstances. This decision theory reduces to those that maximize expected utility when the pair of frequentist posteriors is induced by an exact or approximate confidence set estimator or when an automatic reduction rule is applied to the pair. In such cases, the resulting frequentist posterior is coherent in the sense that, as a probability distribution of the parameter of interest, it satisfies the axioms of the decision-theoretic and logic-theoretic systems typically cited in support of the Bayesian posterior. Unlike the p-value, the confidence level of an interval hypothesis derived from such a measure is suitable as an estimator of the indicator of hypothesis truth, since it converges in sample-space probability to 1 if the hypothesis is true or to 0 otherwise under general conditions. Comment: The confidence-measure theory of inference and decision is explicitly extended to vector parameters of interest. The derivation of upper and lower confidence levels from valid and nonconservative set estimators is formalized.
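
    To illustrate the convergence claim in a toy setting not taken from the paper: for a normal mean with known variance, a natural confidence measure over the parameter is N(xbar, sigma^2/n), and the confidence level it assigns to an interval hypothesis tends to 1 or 0 as the sample size grows, according to whether the hypothesis is true.

```python
# Toy illustration (not the paper's construction): confidence level assigned to
# the interval hypothesis theta in [a, b] by the confidence measure N(xbar, 1/n)
# for a normal mean with unit variance, as the sample size grows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
theta_true = 0.3          # hypothesis [0, 1] is true; try 1.5 to see decay to 0
a, b = 0.0, 1.0

for n in (10, 100, 1000, 10000):
    x = rng.normal(theta_true, 1.0, size=n)
    xbar, se = x.mean(), 1.0 / np.sqrt(n)
    conf_level = stats.norm.cdf(b, xbar, se) - stats.norm.cdf(a, xbar, se)
    print(f"n = {n:5d}: confidence level of [0, 1] = {conf_level:.3f}")
```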

    Low-Cycle Fatigue of Ultra-Fine-Grained Cryomilled 5083 Aluminum Alloy

    The cyclic deformation behavior of cryomilled (CM) AA5083 alloys was compared to that of conventional AA5083-H131. The materials studied were a 100 pct CM alloy with a Gaussian grain size distribution averaging 315 nm and an alloy created by mixing 85 pct CM powder with 15 pct unmilled powder before consolidation to fabricate a plate with a bimodal grain size distribution with peak averages at 240 nm and 1.8 μm. Although the ultra-fine-grain (UFG) alloys exhibited considerably higher tensile strengths than those of the conventional material, the results from plastic-strain-controlled low-cycle fatigue tests demonstrate that all three materials exhibit identical fatigue lives across a range of plastic strain amplitudes. The CM materials exhibited softening during the first cycle, similar to other alloys produced by conventional powder metallurgy, followed by continual hardening to saturation before failure. The results reported in this study show that fatigue deformation in the CM material is accompanied by slight grain growth, pinning of dislocations at the grain boundaries, and grain rotation to produce macroscopic slip bands that localize strain, creating a single dominant fatigue crack. In contrast, the conventional alloy exhibits a cell structure and more diffuse fatigue damage accumulation.
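
    For context (a standard relation in low-cycle fatigue analysis, not a result quoted from this study), plastic-strain-controlled fatigue lives are commonly summarised by the Coffin-Manson relation:

```latex
% Coffin-Manson relation: plastic strain amplitude versus reversals to failure,
% with fatigue ductility coefficient \varepsilon_f' and fatigue ductility
% exponent c (fitted material constants).
\[
\frac{\Delta\varepsilon_p}{2} \;=\; \varepsilon_f' \,(2N_f)^{c}
\]
```

    On this view, the reported identical lives at matched plastic strain amplitudes would correspond to the three materials falling on essentially the same Coffin-Manson curve.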

    Does the process map influence the outcome of quality improvement work? A comparison of a sequential flow diagram and a hierarchical task analysis diagram

    Background: Many quality and safety improvement methods in healthcare rely on a complete and accurate map of the process. Process mapping in healthcare is often achieved using a sequential flow diagram, but there is little guidance available in the literature about the most effective type of process map to use. Moreover, there is evidence that the organisation of information in an external representation affects reasoning and decision making. This exploratory study examined whether the type of process map - sequential or hierarchical - affects healthcare practitioners' judgments. Methods: A sequential and a hierarchical process map of a community-based anticoagulation clinic were produced based on data obtained from interviews, talk-throughs, attendance at a training session and examination of protocols and policies. Clinic practitioners were asked to specify the parts of the process that they judged to contain quality and safety concerns. The process maps were then shown to them in counter-balanced order and they were asked to circle on the diagrams the parts of the process where they had the greatest quality and safety concerns. A structured interview was then conducted, in which they were asked about various aspects of the diagrams. Results: Quality and safety concerns cited by practitioners differed depending on whether they were or were not looking at a process map, and whether they were looking at a sequential diagram or a hierarchical diagram. More concerns were identified using the hierarchical diagram compared with the sequential diagram, and more concerns were identified in relation to clinical work than administrative work. Participants' preference for the sequential or hierarchical diagram depended on the context in which they would be using it. The difficulties of determining the boundaries for the analysis and the granularity required were highlighted. Conclusions: The results indicated that the layout of a process map does influence perceptions of quality and safety problems in a process. In quality improvement work it is important to carefully consider the type of process map to be used, and to consider using more than one map to ensure that different aspects of the process are captured.

    Probabilistic Algorithmic Knowledge

    The framework of algorithmic knowledge assumes that agents use deterministic knowledge algorithms to compute the facts they explicitly know. We extend the framework to allow for randomized knowledge algorithms. We then characterize the information provided by a randomized knowledge algorithm when its answers have some probability of being incorrect. We formalize this information in terms of evidence; a randomized knowledge algorithm returning "Yes" to a query about a fact \phi provides evidence for \phi being true. Finally, we discuss the extent to which this evidence can be used as a basis for decisions. Comment: 26 pages. A preliminary version appeared in Proc. 9th Conference on Theoretical Aspects of Rationality and Knowledge (TARK'03).
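
    As an informal illustration (the specific weight function is an assumption here, not a definition quoted from the paper), one way to quantify the evidence a noisy "Yes" answer provides is to compare how likely that answer is when \phi holds versus when it does not:

```python
# Sketch: evidence carried by a randomized knowledge algorithm's "Yes" answer,
# measured as a likelihood ratio. The error rates below are illustrative
# assumptions, not parameters from the paper.
def yes_likelihood_ratio(p_yes_given_phi: float, p_yes_given_not_phi: float) -> float:
    """Ratio of the probability of answering "Yes" when phi is true
    to the probability of answering "Yes" when phi is false."""
    return p_yes_given_phi / p_yes_given_not_phi

# An algorithm that says "Yes" 95% of the time when phi is true but only
# 10% of the time when phi is false: a "Yes" is strong evidence for phi.
print(yes_likelihood_ratio(0.95, 0.10))   # 9.5

# An algorithm whose answers are independent of phi provides no evidence.
print(yes_likelihood_ratio(0.50, 0.50))   # 1.0
```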

    Strong laws of large numbers for sub-linear expectations

    We investigate three kinds of strong laws of large numbers for capacities with a new notion of independently and identically distributed (IID) random variables for sub-linear expectations initiated by Peng. It turns out that these theorems are natural and fairly neat extensions of the classical Kolmogorov strong law of large numbers to the case where probability measures are no longer additive. An important feature of these strong laws of large numbers is to provide a frequentist perspective on capacities. Comment: 10 pages.
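
    One representative form of such a result (written here for orientation, with notation that may differ from the paper's) bounds the sample averages between the lower and upper means quasi-surely:

```latex
% S_n = X_1 + \dots + X_n for IID X_i under a sub-linear expectation \hat{E},
% with upper mean \overline{\mu} = \hat{E}[X_1] and
% lower mean \underline{\mu} = -\hat{E}[-X_1].
\[
\underline{\mu} \;\le\; \liminf_{n\to\infty} \frac{S_n}{n}
\;\le\; \limsup_{n\to\infty} \frac{S_n}{n} \;\le\; \overline{\mu}
\quad \text{quasi-surely.}
\]
```

    When the probability measure is additive, the upper and lower means coincide and the statement collapses to the classical Kolmogorov strong law.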

    Inferential models: A framework for prior-free posterior probabilistic inference

    Posterior probabilistic statistical inference without priors is an important but so far elusive goal. Fisher's fiducial inference, the Dempster-Shafer theory of belief functions, and Bayesian inference with default priors are attempts to achieve this goal but, to date, none has given a completely satisfactory picture. This paper presents a new framework for probabilistic inference, based on inferential models (IMs), which not only provides data-dependent probabilistic measures of uncertainty about the unknown parameter, but does so with an automatic long-run frequency-calibration property. The key to this new approach is the identification of an unobservable auxiliary variable associated with the observable data and unknown parameter, and the prediction of this auxiliary variable with a random set before conditioning on data. Here we present a three-step IM construction, and prove a frequency-calibration property of the IM's belief function under mild conditions. A corresponding optimality theory is developed, which helps to resolve the non-uniqueness issue. Several examples are presented to illustrate this new approach. Comment: 29 pages with 3 figures. Main text is the same as the published version. Appendix B is an addition, not in the published version, that contains some corrections and extensions of two of the main theorems.
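
    The abstract does not spell out the three-step construction; as a hedged illustration for a textbook-style toy problem (a single observation X ~ N(theta, 1), with a symmetric predictive random set for the auxiliary N(0, 1) variable), the plausibility that such an IM assigns to a point assertion about theta can be computed as follows. The setup and function name are illustrative assumptions, not taken verbatim from the paper.

```python
# Toy IM sketch for X ~ N(theta, 1): association X = theta + Z with Z ~ N(0, 1),
# the auxiliary variable Z predicted by the random set {z : |z| <= |U|},
# U ~ N(0, 1). This is an illustrative assumption, not the paper's example
# quoted verbatim.
from scipy.stats import norm

def plausibility_point(x: float, theta0: float) -> float:
    """Plausibility of the assertion {theta = theta0} given the observation x:
    the probability that the random interval [x - |U|, x + |U|] covers theta0,
    i.e. P(|U| >= |x - theta0|) for U ~ N(0, 1)."""
    return 2.0 * (1.0 - norm.cdf(abs(x - theta0)))

# With x = 1.2, values of theta near the observation remain highly plausible,
# while values far from it are effectively ruled out.
for theta0 in (1.2, 0.0, 3.5):
    print(theta0, round(plausibility_point(1.2, theta0), 3))
```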