
    Higher levels of process synchronisation

    Four new synchronisation primitives (SEMAPHOREs, RESOURCEs, EVENTs and BUCKETs) were introduced in the KRoC 0.8beta release of occam for SPARC (SunOS/Solaris) and Alpha (OSF/1) UNIX workstations [1][2][3]. This paper reports on the rationale, application and implementation of two of these (SEMAPHOREs and EVENTs). Details on the other two may be found on the web [4]. The new primitives are designed to support higher-level mechanisms of SHARING between parallel processes and give us greater powers of expression. They will also let greater levels of concurrency be safely exploited from future parallel architectures, such as those providing (virtual) shared-memory. They demonstrate that occam is neutral in any debate between the merits of message-passing versus shared-memory parallelism, enabling applications to take advantage of whichever paradigm (or mixture of paradigms) is the most appropriate. The new primitives could be (but are not) implemented in terms of traditional channels, but only at the expense of increased complexity and computational overhead. The primitives are immediately useful even for uni-processors - for example, the cost of a fair ALT can be reduced from O(n) to O(1). In fact, all the operations associated with new primitives have constant space and time complexities; and the constants are very low. The KRoC release provides an Abstract Data Type interface to the primitives. However, direct use of such mechanisms still allows the user to misuse them. They must be used in the ways prescribed (in this paper and in [4]) else their semantics become unpredictable. No tool is provided to check correct usage at this level. The intention is to bind those primitives found to be useful into higher level versions of occam. Some of the primitives (e.g. SEMAPHOREs) may never themselves be made visible in the language, but may be used to implement bindings of higher-level paradigms (such as SHARED channels and BLACKBOARDs). 
The compiler will perform the relevant usage checking on all new language bindings, closing the security loopholes opened by raw use of the primitives. The paper closes by relating this work to the notions of virtual transputers, microcoded schedulers, object orientation and Java threads.
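The constant-cost claim above can be illustrated with a small model. The sketch below is Python, not occam, and the `FifoSemaphore` class and its method names are invented for this example; it shows only why a FIFO-queued semaphore makes every claim/release operation O(1) in time and space, the property that lets a fair ALT drop from O(n) to O(1).

```python
from collections import deque

class FifoSemaphore:
    """Illustrative model of a FIFO semaphore: every operation is O(1).

    This is a sketch of the idea, not the KRoC implementation; the class
    and method names are invented for this example.
    """

    def __init__(self, count=1):
        self.count = count
        self.queue = deque()  # blocked "processes", oldest first

    def claim(self, proc):
        """Try to claim the semaphore; returns True if acquired at once."""
        if self.count > 0:
            self.count -= 1
            return True
        self.queue.append(proc)  # O(1) enqueue; proc would now block
        return False

    def release(self):
        """Release; wakes exactly the longest-waiting process, if any."""
        if self.queue:
            return self.queue.popleft()  # O(1) dequeue: FIFO fairness
        self.count += 1
        return None
```

Because the wait queue is a double-ended queue, no operation ever scans the set of waiters, which is where the constant time and space complexities come from.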

    Integrating methods for determining length-at-age to improve growth estimates for two large scombrids

    Fish growth is commonly estimated from length-at-age data obtained from otoliths. There are several techniques for estimating length-at-age from otoliths including 1) direct observed counts of annual increments; 2) age adjustment based on a categorization of otolith margins; 3) age adjustment based on known periods of spawning and annuli formation; 4) back-calculation to all annuli; and 5) back-calculation to the last annulus only. In this study we compared growth estimates (von Bertalanffy growth functions) obtained from the above five methods for estimating length-at-age from otoliths for two large scombrids: narrow-barred Spanish mackerel (Scomberomorus commerson) and broad-barred king mackerel (Scomberomorus semifasciatus). Likelihood ratio tests revealed that the largest differences in growth occurred between the back-calculation methods and the observed and adjusted methods for both species of mackerel. The pattern, however, was more pronounced for S. commerson than for S. semifasciatus, because of the pronounced effect of gear selectivity demonstrated for S. commerson. We propose a method of substituting length-at-age data from observed or adjusted methods with back-calculated length-at-age data to provide more appropriate estimates of population growth than those obtained with the individual methods alone, particularly when faster-growing young fish are disproportionately selected for. Substitution of observed or adjusted length-at-age data with back-calculated length-at-age data provided more realistic estimates of length for younger ages than observed or adjusted methods, as well as more realistic estimates of mean maximum length than those derived from back-calculation methods alone.
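The growth model being compared here is the von Bertalanffy growth function, L(t) = L∞(1 − e^(−K(t − t₀))). A minimal sketch of fitting it to length-at-age data follows; all numbers are synthetic and invented for illustration, and the function and variable names are not from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def vbgf(t, Linf, K, t0):
    """von Bertalanffy growth function: predicted length at age t."""
    return Linf * (1.0 - np.exp(-K * (t - t0)))

# Synthetic length-at-age data (invented numbers, for illustration only)
rng = np.random.default_rng(0)
ages = np.arange(1.0, 11.0)                       # ages 1..10 years
lengths = vbgf(ages, Linf=120.0, K=0.35, t0=-0.5) # "true" growth curve
lengths = lengths + rng.normal(0.0, 2.0, ages.size)  # observation noise

# Nonlinear least squares recovers the three growth parameters
params, _ = curve_fit(vbgf, ages, lengths, p0=(100.0, 0.3, 0.0))
Linf_hat, K_hat, t0_hat = params
```

Each of the five length-at-age methods in the abstract would supply a different `(ages, lengths)` data set to the same fit, and the resulting parameter sets are what the likelihood ratio tests compare.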

    Solid friction between soft filaments

    Any macroscopic deformation of a filamentous bundle is necessarily accompanied by local sliding and/or stretching of the constituent filaments. Yet the nature of the sliding friction between two aligned filaments interacting through multiple contacts remains largely unexplored. Here, by directly measuring the sliding forces between two bundled F-actin filaments, we show that these frictional forces are unexpectedly large, scale logarithmically with sliding velocity as in solid-like friction, and exhibit complex dependence on the filaments' overlap length. We also show that a reduction of the frictional force by orders of magnitude, associated with a transition from solid-like friction to Stokes' drag, can be induced by coating F-actin with polymeric brushes. Furthermore, we observe similar transitions in filamentous microtubules and bacterial flagella. Our findings demonstrate how altering a filament's elasticity, structure and interactions can be used to engineer interfilament friction and thus tune the properties of fibrous composite materials.
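The logarithmic velocity dependence reported here is the classic signature of solid-like friction, F = F₀ + a ln(v). A minimal numpy sketch, with all numbers invented for illustration, shows how the two parameters of such a law are recovered from force-velocity data:

```python
import numpy as np

# Synthetic sliding-friction data obeying the solid-like law F = F0 + a*ln(v)
# (all values invented for illustration; not the paper's measurements)
rng = np.random.default_rng(3)
v = np.logspace(-2, 2, 40)                 # sliding velocities, arbitrary units
F0_true, a_true = 5.0, 0.8
F = F0_true + a_true * np.log(v) + rng.normal(0.0, 0.05, v.size)

# The law is linear in ln(v), so ordinary least squares recovers both
# parameters; polyfit returns (slope, intercept) for degree 1.
a_hat, F0_hat = np.polyfit(np.log(v), F, 1)
```

A good fit of this form over several decades of velocity, rather than the linear F ∝ v of Stokes' drag, is how solid-like friction is identified; the brush-coated filaments in the abstract would instead follow the viscous law.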

    Real-Time Compressive Sensing MRI Reconstruction Using GPU Computing and Split Bregman Methods

    Compressive sensing (CS) has been shown to enable dramatic acceleration of MRI acquisition in some applications. Being an iterative reconstruction technique, CS MRI reconstructions can be more time-consuming than traditional inverse Fourier reconstruction. We have accelerated our CS MRI reconstruction by factors of up to 27 by using a split Bregman solver combined with a graphics processing unit (GPU) computing platform. The increases in speed we find are similar to those we measure for matrix multiplication on this platform, suggesting that the split Bregman methods parallelize efficiently. We demonstrate that the combination of the rapid convergence of the split Bregman algorithm and the massively parallel strategy of GPU computing can enable real-time CS reconstruction of even acquisition data matrices of dimension 4096² or more, depending on available GPU VRAM. Reconstruction of two-dimensional data matrices of dimension 1024² and smaller took ~0.3 s or less, showing that this platform also provides very fast iterative reconstruction for small-to-moderate size images.
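The split Bregman iteration alternates a cheap linear solve, an elementwise soft-threshold, and a Bregman variable update; each step maps well onto a GPU. The toy sketch below applies it to 1-D total-variation denoising, which is not the paper's CS MRI solver (that operates on undersampled k-space data); all function and parameter names are invented for this example.

```python
import numpy as np

def shrink(x, gamma):
    """Soft-thresholding: the closed-form proximal step for the l1 term."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def split_bregman_tv1d(f, mu=10.0, lam=1.0, n_iter=100):
    """Split Bregman for 1-D total-variation denoising:
        min_u  mu/2 * ||u - f||^2 + ||D u||_1
    A toy stand-in for the paper's CS MRI solver; names are illustrative."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)        # forward-difference operator
    A = mu * np.eye(n) + lam * D.T @ D    # constant system matrix
    u = f.copy()
    d = np.zeros(n - 1)                   # splitting variable d ~ D u
    b = np.zeros(n - 1)                   # Bregman variable
    for _ in range(n_iter):
        # u-subproblem: a linear solve (FFT-diagonalizable in the MRI case)
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        # d-subproblem: elementwise shrinkage, trivially parallel
        d = shrink(D @ u + b, 1.0 / lam)
        # Bregman update enforces d = D u at convergence
        b = b + D @ u - d
    return u
```

In the MRI setting the linear solve becomes a pair of FFTs and the shrinkage runs once per pixel, which is why the method parallelizes as well as matrix multiplication on a GPU.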

    Spatio-Temporal Migration Patterns of Pacific Salmon Smolts in Rivers and Coastal Marine Waters

    BACKGROUND: Migrations allow animals to find food resources, rearing habitats, or mates, but often impose considerable predation risk. Several behavioural strategies may reduce this risk, including faster travel speed and taking routes with shorter total distance. Descriptions of the natural range of variation in migration strategies among individuals and populations are necessary before the ecological consequences of such variation can be established. METHODOLOGY/PRINCIPAL FINDINGS: Movements of tagged juvenile coho, steelhead, sockeye, and Chinook salmon were quantified using a large-scale acoustic tracking array in southern British Columbia, Canada. Smolts from 13 watersheds (49 watershed/species/year combinations) were tagged between 2004 and 2008 and combined into a mixed-effects model analysis of travel speed. During the downstream migration, steelhead were slower on average than other species, possibly related to freshwater residualization. During the migration through the Strait of Georgia, coho were slower than steelhead and sockeye, likely related to some degree of inshore summer residency. Hatchery-reared smolts were slower than wild smolts during the downstream migration, but after ocean entry, average speeds were similar. In small rivers, downstream travel speed increased with body length, but in the larger Fraser River and during the coastal migration, average speed was independent of body length. Smolts leaving rivers located towards the northern end of the Strait of Georgia ecosystem migrated strictly northwards after ocean entry, but those from rivers towards the southern end displayed split-route migration patterns within populations, with some moving southward. CONCLUSIONS/SIGNIFICANCE: Our results reveal a tremendous diversity of behavioural migration strategies used by juvenile salmon, across species, rearing histories, and habitats, as well as within individual populations. 
During the downstream migration, factors that had strong effects on travel speeds included species, wild or hatchery-rearing history, watershed size and, in smaller rivers, body length. During the coastal migration, travel speeds were only strongly affected by species differences.

    Program Assessment the Easy Way: Using Embedded Indicators to Assess Program Outcomes

    The culminating design experience for civil engineering majors at the United States Military Academy (USMA) is CE492, Design of Structural Systems. CE492 serves as a “capstone” experience, one in which students are faced with a multi-disciplinary design project incorporating facets from all previous civil engineering courses. Previous capstone experiences have required students to design structures planned for construction or currently under construction at the Academy, thus providing an opportunity for site visits and active participation with key players in the project development process. Since CE492 provides a multi-disciplinary experience, it also provides an ideal opportunity for the application of embedded assessment indicators. The purpose of this paper is to describe the use of an embedded assessment technique which has been used successfully for two semesters in CE450, Infrastructure Development and Construction Management, to assess accomplishment of the Academy’s Engineering and Technology Goal. By merging the student evaluation and assessment processes in CE492, instructor workload was reduced, student evaluation was tied more closely to the relevant academic program and the ASCE Body of Knowledge (BOK) outcomes, and a systematic method was created for identifying shortcomings and areas of excellence in the program.

    Efficiency of spinal anesthesia versus general anesthesia for lumbar spinal surgery: a retrospective analysis of 544 patients.

    BACKGROUND: Previous studies have shown varying results in selected outcomes when directly comparing spinal anesthesia to general anesthesia in lumbar surgery. Some studies have shown reduced surgical time, postoperative pain, time in the postanesthesia care unit (PACU), incidence of urinary retention, postoperative nausea, and more favorable cost-effectiveness with spinal anesthesia. Despite these results, the current literature has also shown contradictory results in between-group comparisons. MATERIALS AND METHODS: A retrospective analysis was performed by querying the electronic medical record database for surgeries performed by a single surgeon between 2007 and 2011 using procedural codes 63030 for diskectomy and 63047 for laminectomy: 544 lumbar laminectomy and diskectomy surgeries were identified, with 183 undergoing general anesthesia and 361 undergoing spinal anesthesia (SA). Linear and multivariate regression analyses were performed to identify differences in blood loss, operative time, time from entering the operating room (OR) until incision, time from bandage placement to exiting the OR, total anesthesia time, PACU time, and total hospital stay. Secondary outcomes of interest included the incidence of postoperative spinal hematoma and death and, among the SA patients, the incidence of paraparesis, plegia, post-dural puncture headache, and paresthesia. RESULTS: SA was associated with significantly lower operative time, blood loss, total anesthesia time, time from entering the OR until incision, time from bandage placement until exiting the OR, and total duration of hospital stay, but a longer stay in the PACU. The SA group experienced one spinal hematoma, which was evacuated without any long-term neurological deficits, and neither group experienced a death. The SA group had no episodes of paraparesis or plegia, post-dural puncture headaches, or episodes of persistent postoperative paresthesia or weakness. 
CONCLUSION: SA is effective for use in patients undergoing elective lumbar laminectomy and/or diskectomy spinal surgery, and was shown to be the more expedient anesthetic choice in the perioperative setting.

    Primary Beam Shape Calibration from Mosaicked, Interferometric Observations

    Image quality in mosaicked observations from interferometric radio telescopes is strongly dependent on the accuracy with which the antenna primary beam is calibrated. The next generation of radio telescope arrays such as the Allen Telescope Array (ATA) and the Square Kilometer Array (SKA) have key science goals that involve making large mosaicked observations filled with bright point sources. We present a new method for calibrating the shape of the telescope's mean primary beam that uses the multiple redundant observations of these bright sources in the mosaic. The method has an analytical solution for simple Gaussian beam shapes but can also be applied to more complex beam shapes through χ² minimization. One major benefit of this simple, conceptually clean method is that it makes use of the science data for calibration purposes, thus saving telescope time and improving accuracy through simultaneous calibration and observation. We apply the method both to 1.43 GHz data taken during the ATA Twenty Centimeter Survey (ATATS) and to 3.14 GHz data taken during the ATA's Pi Gigahertz Sky Survey (PiGSS). We find that the beam's calculated full width at half maximum (FWHM) values are consistent with the theoretical values, the values measured by several independent methods, and the values from the simulation we use to demonstrate the effectiveness of our method on data from future telescopes such as the expanded ATA and the SKA. These results are preliminary, and can be expanded upon by fitting more complex beam shapes. We also investigate, by way of a simulation, the dependence of the accuracy of the telescope's FWHM on antenna number. We find that the uncertainty returned by our fitting method is inversely proportional to the number of antennas in the array. Comment: Accepted by PASP. 8 pages, 8 figures
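For the simple Gaussian case the calibration reduces to a one-parameter χ² minimization: a source's apparent flux falls off with its offset from each pointing centre according to the beam shape, so redundant sightings of the same source constrain the beam width. The sketch below uses synthetic numbers invented for illustration; `gaussian_beam` and the offsets are not the authors' code or data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_beam(r, sigma):
    """Radially symmetric Gaussian primary-beam attenuation at offset r."""
    return np.exp(-r**2 / (2.0 * sigma**2))

# Synthetic redundancy: one source of unit true flux seen at many offsets
# from different pointing centres (all values invented for illustration)
rng = np.random.default_rng(4)
offsets = np.linspace(0.0, 1.5, 30)          # degrees from pointing centre
sigma_true = 0.6
apparent = gaussian_beam(offsets, sigma_true) * (1.0 + rng.normal(0.0, 0.02, 30))

# Least-squares (chi-squared) fit of the single beam-width parameter
(sigma_hat,), _ = curve_fit(gaussian_beam, offsets, apparent, p0=(0.5,))
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma_hat  # FWHM of fitted beam
```

More complex beam shapes replace `gaussian_beam` with a multi-parameter model and minimize the same χ²; with many antennas contributing independent sightings, the fitted width's uncertainty shrinks roughly as 1/N, consistent with the scaling the authors report.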

    The 27-28 October 1986 FIRE IFO cirrus case study: Comparison of satellite and aircraft derived particle size

    Theoretical calculations predict that cloud reflectance in near-infrared windows such as those at 1.6 and 2.2 microns should give lower reflectances than at visible wavelengths. The reason for this difference is that ice and liquid water show significant absorption at those wavelengths, in contrast to the nearly conservative scattering at wavelengths shorter than 1 micron. In addition, because the amount of absorption scales with the path length of radiation through the particle, increasing cloud particle size should lead to decreasing reflectances at 1.6 and 2.2 microns. Measurements at these wavelengths to date, however, have often given unexpected results. Twomey and Cocks found unexpectedly high absorption (factors of 3 to 5) in optically thick liquid water clouds. Curran and Wu found unexpectedly low absorption in optically thick high clouds, and postulated the existence of supercooled small water droplets in place of the expected large ice particles. The implications of the FIRE data for optically thin cirrus are examined.