291 research outputs found

    Deltoid, triceps, or both responses improve the success rate of the interscalene catheter surgical block compared with the biceps response

    Background: The influence of the muscular response elicited by neurostimulation on the success rate of interscalene block using a catheter (ISC) is unknown. In this investigation, we compared the success rate of ISC placement as indicated by biceps or by deltoid, triceps, or both twitches.
    Methods: Three hundred (ASA I-II) patients presenting for elective arthroscopic rotator cuff repair were prospectively randomized to assessment by biceps (Group B) or deltoid, triceps, or both twitches (Group DT). All ISCs were placed with the aid of neurostimulation. The tip of the stimulating needle was placed after disappearance of either biceps or deltoid, triceps, or both twitches at 0.3 mA. The catheter was advanced 2-3 cm past the tip of the needle and the block was performed using 40 ml of ropivacaine 0.5%. Successful block was defined as sensory block of the supraclavicular nerve and sensory and motor block involving the axillary, radial, median, and musculocutaneous nerves within 30 min.
    Results: The success rate was 98.6% in Group DT compared with 92.5% in Group B (95% confidence interval 0.01-0.11; P<0.02). Supplemental analgesics during handling of the posterior part of the shoulder capsule were needed in two patients in Group DT and seven patients in Group B. Three patients in Group B had incomplete anaesthesia in the radial nerve distribution, necessitating general anaesthesia. One patient in Group B had an incomplete posterior block extension of the supraclavicular nerve. No acute or late complications were observed.
    Conclusions: Eliciting deltoid, triceps, or both twitches was associated with a higher success rate compared with eliciting biceps twitches during continuous interscalene block.
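
    As a rough check on the reported statistics, the difference between the two success rates (98.6% vs. 92.5%) together with a standard Wald confidence interval for a difference of proportions reproduces the reported 0.01-0.11 interval. The sketch below assumes roughly 150 patients per arm, which the abstract does not state explicitly; it is an illustration of the arithmetic, not the authors' analysis.

        from math import sqrt

        # Assumed ~150 patients per arm (300 patients randomized 1:1); the abstract
        # does not report the exact per-group denominators.
        n_dt, n_b = 150, 150
        p_dt, p_b = 0.986, 0.925                     # reported success rates

        diff = p_dt - p_b                            # observed difference in proportions
        se = sqrt(p_dt * (1 - p_dt) / n_dt + p_b * (1 - p_b) / n_b)
        low, high = diff - 1.96 * se, diff + 1.96 * se

        print(f"difference = {diff:.3f}, 95% CI = ({low:.2f}, {high:.2f})")
        # prints: difference = 0.061, 95% CI = (0.01, 0.11)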

    Understanding software development: Processes, organisations and technologies

    Our primary goal is to understand what people do when they develop software and how long it takes them to do it. To get a proper perspective on software development processes we must study them in their context — that is, in their organizational and technological context. An extremely important means of gaining the needed understanding and perspective is to measure what goes on. Time and motion studies constitute a proven approach to understanding and improving any engineering process. We believe software processes are no different in this respect; however, the fact that software development yields a collaborative intellectual, as opposed to physical, output calls for careful and creative measurement techniques. In attempting to answer the question "what do people do in software development?" we have experimented with two novel forms of data collection in the software development field: time diaries and direct observation. We found both methods to be feasible and to yield useful information about time utilization. Among the insights gained from our time diary experiment are (1) developers switch between developments to minimize blocking and maximize overall throughput, and (2) there is a high degree of dynamic reassignment in response to changing project and organizational priorities. Among the insights gained from our direct observation experiment are (1) time diaries are a valid and accurate instrument with respect to their level of resolution, (2) unplanned interruptions constitute a significant time factor, and (3) the amount and kinds of communication are significant time and social factors. In effect, we have quantified the effect of these social processes using the observational data.

    Case report: Personalized transcatheter approach to mid-aortic syndrome by in vitro simulation on a 3-dimensional printed model

    An 8-year-old girl, diagnosed with mid-aortic syndrome (MAS) at the age of 2 months and under antihypertensive therapy, presented with severe systemic hypertension (>200/120 mmHg). Computed tomography (CT) examination revealed an aortic aneurysm between severe stenoses at the pre- and infra-renal segments, and occlusion of the principal splanchnic arteries with peripheral collateral revascularization. Based on CT imaging, the preoperative three-dimensional (3D) anatomy was reconstructed to assess aortic dimensions, and a dedicated in vitro planning platform was designed to investigate the feasibility of a stenting procedure under fluoroscopic guidance. The in vitro system was designed to incorporate a translucent, flexible, 3D-printed patient-specific model filled with saline. A covered 8-zig, 45-mm-long Cheatham-Platinum (CP) stent and a bare 8-zig, 34-mm-long CP stent were implanted with partial overlap to treat the stenoses (global peak-to-peak pressure gradient > 60 mmHg), excluding the aneurysm and avoiding the risk of renal artery occlusion. The percutaneous procedure was successfully performed with no residual pressure gradient, exactly replicating the strategy tested in vitro. Also, as investigated on the 3D-printed model, additional angioplasty across the frames of the stent was feasible to improve bilateral renal flow. Postoperative systemic pressure was significantly reduced (130/70 mmHg), as was the dosage of antihypertensive therapy. This is the first report demonstrating the use of a 3D-printed model to effectively plan percutaneous intervention in a complex pediatric MAS case: by taking full advantage of the combined use of a patient-specific 3D model and a dedicated in vitro platform, the feasibility of the stenting procedure was successfully tested during pre-procedural assessment. Hence, the use of patient-specific 3D-printed models and dedicated in vitro platforms is encouraged to assist pre-procedural planning and personalize treatment, thus enhancing intervention success.

    A Review of Software Inspections

    For two decades, software inspections have proven effective for detecting defects in software. We have reviewed the different ways software inspections are done, created a taxonomy of inspection methods, and examined claims about the cost-effectiveness of different methods. We detect a disturbing pattern in the evaluation of inspection methods. Although there is universal agreement on the effectiveness of software inspections, their economics remain uncertain. Our examination of several empirical studies leads us to conclude that the benefits of inspections are often overstated and the costs (especially for large software developments) are understated. Furthermore, some of the most influential studies establishing these costs and benefits are now 20 years old, which leads us to question their relevance to today's software development processes. Extensive work is needed to determine exactly how, why, and when software inspections work, and whether some defect detection techniques might be more cost-effective than others. In this article we ask some questions about measuring the effectiveness of software inspections and determining how much they really cost when their effect on the rest of the development process is considered. Finding answers to these questions will enable us to improve the efficiency of software development. (Also cross-referenced as UMIACS-TR-95-104.)

    An Experiment to Assess Cost-Benefits of Inspection Meetings and their Alternatives

    We hypothesize that inspection meetings are far less effective than many people believe and that meetingless inspections are equally effective. However, two of our previous industrial case studies contradict each other on this issue. Therefore, we are conducting a multi-trial, controlled experiment to assess the benefits of inspection meetings and to evaluate alternative procedures. The experiment manipulates four independent variables: (1) the inspection method used (two methods involve meetings, one method does not), (2) the requirements specification to be inspected (there are two), (3) the inspection round (each team participates in two inspections), and (4) the presentation order (either specification can be inspected first). For each experiment we measure three dependent variables: (1) the individual fault detection rate, (2) the team fault detection rate, and (3) the percentage of faults originally discovered after the initial inspection phase (during which reviewers individually analyze the document). So far we have completed one run of the experiment with 21 graduate students in computer science at the University of Maryland as subjects, but we do not yet have enough data points to draw definite conclusions. Rather than presenting preliminary conclusions, this article (1) describes the experiment's design and the provocative hypotheses we are evaluating, (2) summarizes our observations from the experiment's initial run, and (3) discusses how we are using these observations to verify our data collection instruments and to refine future experimental runs. (Also cross-referenced as UMIACS-TR-95-89.)
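
    To make the size of the design space concrete, the sketch below simply enumerates the full cross of the four independent variables described in this abstract. The identifiers and level labels are assumptions made for illustration only, not the authors' instrument.

        from itertools import product

        # Illustrative labels; the abstract names the factors but not these identifiers.
        methods = ["meeting_method_1", "meeting_method_2", "meetingless"]
        specifications = ["spec_A", "spec_B"]              # two requirements specifications
        inspection_rounds = [1, 2]                         # each team inspects twice
        presentation_orders = ["spec_A_first", "spec_B_first"]

        cells = list(product(methods, specifications, inspection_rounds, presentation_orders))
        print(len(cells), "combinations of the four independent variables")   # 24

        # For each inspection the study records three dependent variables: the individual
        # fault detection rate, the team fault detection rate, and the share of faults
        # first discovered after the individual-analysis phase.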

    Understanding the Sources of Variation in Software Inspections

    In a previous experiment, we determined how various changes in three structural elements of the software inspection process (team size, and the number and sequencing of sessions) altered effectiveness and interval. Our results showed that such changes did not significantly influence the defect detection rate, but that certain combinations of changes dramatically increased the inspection interval. We also observed a large amount of unexplained variance in the data, indicating that other factors must be affecting inspection performance. The nature and extent of these other factors now have to be determined to ensure that they had not biased our earlier results. Also, identifying these other factors might suggest additional ways to improve the efficiency of inspections. Acting on the hypothesis that the "inputs" into the inspection process (reviewers, authors, and code units) were significant sources of variation, we modeled their effects on inspection performance. We found that they were responsible for much more variation in defect detection than was process structure. This leads us to conclude that better defect detection techniques, not better process structures, are the key to improving inspection effectiveness. The combined effects of process inputs and process structure accounted for only a small percentage of the variance in inspection interval. Therefore, there still remain other factors which need to be identified. (Also cross-referenced as UMIACS-TR-97-22.)
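
    One simple way to picture a "sources of variation" analysis of this kind is to compute, for each candidate factor, the share of total variance in defect detection explained by grouping on that factor (an eta-squared-style decomposition). The sketch below is a minimal illustration of that idea with hypothetical column names and made-up data; it is not the statistical model used in the study.

        import pandas as pd

        def eta_squared(df: pd.DataFrame, factor: str, outcome: str = "detection_rate") -> float:
            """Share of total variance in `outcome` explained by grouping on `factor`."""
            grand_mean = df[outcome].mean()
            ss_total = ((df[outcome] - grand_mean) ** 2).sum()
            group_means = df.groupby(factor)[outcome].transform("mean")
            ss_between = ((group_means - grand_mean) ** 2).sum()
            return ss_between / ss_total

        # Hypothetical inspection records; all column names and values are made up.
        inspections = pd.DataFrame({
            "reviewer":       ["r1", "r2", "r1", "r3", "r2", "r3"],
            "code_unit":      ["u1", "u1", "u2", "u2", "u3", "u3"],
            "team_size":      [1, 2, 4, 1, 2, 4],
            "detection_rate": [0.30, 0.55, 0.40, 0.25, 0.60, 0.45],
        })

        for factor in ["reviewer", "code_unit", "team_size"]:
            print(factor, round(eta_squared(inspections, factor), 2))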

    An Experiment to Assess the Cost-Benefits of Code Inspections in Large Scale Software Development

    We are conducting a long-term experiment (in progress) to compare the costs and benefits of several different software inspection methods. These methods are being applied by professional developers to a commercial software product they are currently writing. Because the laboratory for this experiment is a live development effort, we took special care to minimize cost and risk to the project, while maximizing our ability to gather useful data. This article has several goals: (1) to describe the experiment's design and show how we used simulation techniques to optimize it, (2) to present our preliminary results and discuss their implications for both software practitioners and researchers, and (3) to discuss how we expect to modify the experiment in order to reduce potential risks to the project. For each inspection we randomly assign three independent variables: (1) the number of reviewers on each inspection team (1, 2, or 4), (2) the number of teams inspecting the code unit (1 or 2), and (3) the requirement that defects be repaired between the first and second team's inspections. The reviewers for each inspection are randomly selected without replacement from a pool of 11 experienced software developers. The dependent variables for each inspection include the inspection interval (elapsed time), total effort, and the defect detection rate. To date we have completed 34 of the planned 64 inspections. Our preliminary results challenge certain long-held beliefs about the most cost-effective ways to conduct inspections and raise some questions about the feasibility of recently proposed methods. (Also cross-referenced as UMIACS-TR-95-14.)
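
    The random assignment described above can be sketched as follows. The developer names and the helper function are hypothetical, and the rule that the repair requirement applies only when two teams inspect a unit is an assumption made for illustration.

        import random

        REVIEWER_POOL = [f"dev_{i}" for i in range(1, 12)]    # pool of 11 experienced developers

        def plan_inspection(rng: random.Random) -> dict:
            """Randomly assign the three independent variables for one code-unit
            inspection and draw reviewers without replacement from the pool."""
            team_size = rng.choice([1, 2, 4])                 # reviewers per team
            n_teams = rng.choice([1, 2])                      # teams inspecting the unit
            # Repairing defects between inspections only arises with two teams (assumption).
            repair_between = rng.choice([True, False]) if n_teams == 2 else False
            reviewers = rng.sample(REVIEWER_POOL, team_size * n_teams)
            teams = [reviewers[i * team_size:(i + 1) * team_size] for i in range(n_teams)]
            return {"teams": teams, "repair_between_teams": repair_between}

        rng = random.Random(0)
        for unit in range(3):                                 # e.g. the first few code units
            print(f"code unit {unit}:", plan_inspection(rng))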

    Consolidation and fragmentation of communication research in Mexico, 1987-1997

    This article presents, briefly and in general terms, the conclusions of a research project on the processes through which the field of academic communication research in Mexico became structured between 1987 and 1997. The exploratory empirical approach of this work involved gathering and systematizing data on Mexican production of knowledge about communication and its contextual conditions; on its producers, both individual and institutional; and on its objective products, especially academic publications. From the results of the analysis of all this information, a heuristic model of the sociocultural determinants of the structuring of the field from the 1960s to the 1990s was constructed, which makes it possible to formulate the "double dilemma" ("doble disyuntiva") that had to be confronted in the 1990s in order to achieve academic and social legitimation.