2,034 research outputs found

    Early stage fatigue damage occurs in bovine tendon fascicles in the absence of changes in mechanics at either the gross or micro-structural level

    Many tendon injuries are believed to result from repetitive motion or overuse, leading to the accumulation of micro-damage over time. In vitro fatigue loading can be used to characterise damage during repeated use and investigate how this may relate to the aetiology of tendinopathy. This study considered the effect of fatigue loading on fascicles from two functionally distinct bovine tendons: the digital extensor and the deep digital flexor. Micro-scale extension mechanisms were investigated in fascicles before or after a period of cyclic creep loading, comparing two different measurement techniques - the displacement of a photo-bleached grid and the use of nuclei as fiducial markers. Whilst visual damage was clearly identified after only 300 cycles of creep loading, these visual changes did not affect either gross fascicle mechanics or fascicle microstructural extension mechanisms over the 900 fatigue cycles investigated. However, significantly greater fibre sliding was measured when observing grid deformation than when analysing nuclei movement. Measurement of microstructural extension with both techniques was localised, which may explain the absence of change in microstructural deformation in response to fatigue loading. Alternatively, the data may demonstrate that fascicles can withstand a degree of matrix disruption with no impact on mechanics. Whilst use of a photo-bleached grid to directly measure the collagen is the best indicator of matrix deformation, nuclei tracking may provide a better measure of the strain perceived directly by the cells.
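    As a rough illustration of the marker-tracking idea described above, the Python sketch below estimates segment strain along a fibre and relative sliding between adjacent fibres from tracked marker coordinates (grid nodes or nuclei). The array layout, function names and the sliding definition are illustrative assumptions, not the authors' analysis pipeline.

    # Hypothetical sketch: estimating fibre extension and inter-fibre sliding
    # from tracked marker positions (photo-bleached grid nodes or nuclei).
    # The layout (rows = fibres, columns = markers along each fibre) and the
    # sliding definition are illustrative assumptions only.
    import numpy as np

    def local_strain(x_before, x_after):
        """Axial strain of each marker-to-marker segment along one fibre."""
        l0 = np.diff(x_before)        # undeformed segment lengths
        l1 = np.diff(x_after)         # deformed segment lengths
        return (l1 - l0) / l0

    def fibre_sliding(x_before, x_after):
        """Relative axial displacement between adjacent fibres (rows)."""
        disp = x_after - x_before     # axial displacement of every marker
        return np.diff(disp, axis=0)  # row-to-row difference = sliding

    # Axial marker coordinates, shape (n_fibres, n_markers_per_fibre)
    before = np.array([[0.0, 10.0, 20.0], [0.0, 10.0, 20.0]])
    after = np.array([[0.0, 10.4, 20.8], [0.2, 10.8, 21.4]])
    print(local_strain(before[0], after[0]))  # strain along the first fibre
    print(fibre_sliding(before, after))       # sliding between the two fibres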

    A “stick to beat you with”?: Advocating for a critical close reading of ‘vocation’ among evangelical medics in England

    Funder: Arts and Humanities Research Council AH/L503927/1. Peer reviewed. Publisher PDF.

    The Utility of Measures of Attention and Situation Awareness for Quantifying Telepresence

    Telepresence is defined as the sensation of being present at a remote robot task site while physically present at a local control station. This concept has received substantial attention in the recent past as a result of hypothesized benefits of presence experiences on human task performance with teleoperation systems. Human factors research, however, has made little progress in establishing a relationship between the concept of telepresence and teleoperator performance. This has been attributed to the multidimensional nature of telepresence, the lack of appropriate studies to elucidate this relationship, and the lack of a valid and reliable, objective measure of telepresence. Subjective measures (e.g., questionnaires, rating scales) are most commonly used to measure telepresence. Objective measures have been proposed, including behavioral responses to stimuli presented in virtual worlds (e.g., ducking virtual objects). Other research has suggested the use of physiological measures, such as cardiovascular responses, to indicate the extent of telepresence experiences in teleoperation tasks. The objective of the present study was to assess the utility of using measures of attention allocation and situation awareness (SA) to objectively describe telepresence. Attention and SA have been identified as cognitive constructs potentially underlying telepresence experiences. Participants in this study performed a virtual mine neutralization task involving remote control of a simulated robotic rover and integrated tools to locate, uncover, and dispose of mines. Subjects simultaneously completed two secondary tasks that required them to monitor for low battery signals associated with operation of the vehicle and controls. Subjects were divided into three groups of eight according to task difficulty, which was manipulated by varying the number and spacing of mines in the task environment. Performance was measured as average time to neutralize four mines. Telepresence was assessed using a Presence questionnaire. Situation awareness was measured using the Situation Awareness Global Assessment Technique. Attention was measured as the ratio of the number of 'low battery' signal detections to the total number of signals presented through the secondary task displays. Analysis of variance revealed that level of difficulty significantly affected performance time and telepresence. Regression analysis revealed that level of difficulty, immersive tendencies, and attention explained significant portions of the variance in telepresence.
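    To make the measures above concrete, the hedged Python sketch below computes the attention ratio and fits a regression of telepresence on difficulty, immersive tendencies and attention, mirroring the analyses reported. The variable names and toy values are assumptions for illustration, not the study's data or code.

    # Illustrative sketch only: attention ratio and a telepresence regression of
    # the kind described above. All numbers are made up for demonstration.
    import numpy as np
    import statsmodels.api as sm

    def attention_ratio(detections, signals_presented):
        """Attention = detected low-battery signals / total signals presented."""
        return detections / signals_presented

    # Hypothetical per-participant records: difficulty level, immersive-tendencies
    # score, attention ratio and telepresence (Presence questionnaire) score.
    difficulty = np.array([1, 1, 2, 2, 3, 3, 1, 2, 3])
    immersive = np.array([62, 70, 55, 66, 48, 59, 73, 61, 52])
    attention = attention_ratio(np.array([9, 10, 7, 8, 5, 6, 10, 8, 6]), 12)
    telepresence = np.array([5.1, 5.6, 4.4, 4.9, 3.6, 4.1, 5.8, 4.7, 3.9])

    # Regression of telepresence on the three predictors; rsquared gives the
    # proportion of variance explained.
    X = sm.add_constant(np.column_stack([difficulty, immersive, attention]))
    fit = sm.OLS(telepresence, X).fit()
    print(fit.rsquared)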

    Hans-Ulrich Treichel's Der Verlorene: Trans-Generational Trauma, Guilt and Shame

    The public and private discourse about Germany's past under Hitler has recently undergone a significant shift. Instead of focusing on Germans as perpetrators, the last two decades have been dominated by discussions about Germans as innocent civilians victimized by the victorious Allies during the last years of WWII. Against the backdrop of this shift in German memory politics, this thesis examines literary negotiations of the current 'victim debate', using Hans-Ulrich Treichel's prose text Der Verlorene (1998) as a primary example. Der Verlorene dramatizes a childhood dominated by an irrevocable loss. The parents of the child narrator have been traumatized by the loss of their home and their first-born son in 1945. Treichel's text documents how the trauma, guilt and shame experienced by the first generation have deeply affected the post-war identity of the second generation. The author articulates the legacy of war-time trauma as a series of psychosomatic symptoms afflicting the second generation, offering a glimpse of a schizophrenic Adenauer generation caught between guilt and victimhood. While Der Verlorene could be read as symptomatic of a broader change in contemporary discourses about WWII, it is foremost a personalized attempt to re-address the past. Treichel's text challenges a trend of the current memory culture in Germany, which is marked by the desire to generalize and sentimentalize the suffering of ethnic Germans driven from the eastern territories. Ultimately, this text refuses any notion of closure and releases the reader into an ongoing struggle with Germany's catastrophic past and its legacy.

    Closing the Wealth Gap: Promoting Change by Working Together


    Non-immunological hydrops fetalis

    A case of hydrops fetalis which was not due to isoimmunization is presented. The condition was diagnosed antenatally by means of ultrasonography and the infant was delivered at 32 weeks' gestation. He required intensive care, but survived and is well at 18 months of age. The causation, diagnosis and management of this problem are discussed.

    Vegetative and Climatic Controls on Holocene Wildfire and Erosion Recorded in Alluvial Fans of the Middle Fork Salmon River, Idaho

    The Middle Fork Salmon River watershed spans high-elevation mixed-conifer forests to lower-elevation shrub-steppe. In recent decades, runoff from severely burned hillslopes has generated large debris flows in steep tributary drainages. These flows incised alluvial fans along the mainstem river, where charcoal-rich debris-flow and sheetflood deposits preserve a record of latest Pleistocene to Holocene fires and geomorphic response. Through deposit sedimentology and 14C dating of charcoal, we evaluate the processes and timing of fire-related sedimentation and the role of climate and vegetation change. Fire-related deposits compose ~66% of the total measured fan deposit thickness in more densely forested upper basins versus ~33% in shrub-steppe-dominated lower basins. Fires during the middle Holocene (~8000-5000 cal yr BP) mostly resulted in sheetflood deposition, similar to modern events in lower basins. Decreased vegetation density during this generally warmer and drier period likely resulted in lower-severity fires and more frequent but smaller fire-related sedimentation events. In contrast, thick fire-related debris-flow deposits of latest Pleistocene-early Holocene (~13,500-8000 cal yr BP) and late Holocene (<4000 cal yr BP) age are inferred to represent higher-severity fires, though data in the former period are limited. Widespread fires occurred in both upper and lower basins within the Medieval Climatic Anomaly (1050-650 cal yr BP) and the early Little Ice Age ca. 550 cal yr BP. We conclude that a generally cooler late Holocene climate and a shift to denser lodgepole pine forests in upper basins by ~2500 cal yr BP provided fuel for severe fires during episodic droughts.

    Evaluation of ‘Super Bright’ polymer dyes in 13-16-color human immunophenotyping panels

    Sirigen Group Limited developed unique polymer 'Brilliant' dyes that have become a staple of modern multicolor panel design. Polymer-based conjugates are often 4-10 times brighter than conventional fluorochromes with similar excitation/emission parameters. A new group of polymer fluorochromes, the 'Super Bright' dyes, was recently launched by eBioscience. The performance of these new dyes in large polychromatic panels is unclear to date. Therefore, we tested several preparations of the Super Bright dyes (such as Super Bright 436 and Super Bright 600) in two polychromatic fluorescent panels (one 13- and one 16-color). Specifically, we evaluated the spillover spread matrices of both panels to assess the compatibility of Super Bright dyes with other fluorochromes in a setup with tight placement of fluorochrome emissions over the spectrum. We also matched Super Bright conjugates with comparable Brilliant Violet-labeled antibodies of the same specificity in an existing 13-color panel where those conjugates stain relatively dim targets, such as CCR6 and CD25, on resting human PBMCs. Our results show that Super Bright dyes inflict a modest spillover spread into neighboring channels. In a 16x16 spillover spread matrix (3-UV, 5-VIOLET, 5-BLUE, 3-RED), Super Bright dyes demonstrate low to moderate spillover that is quantitatively very close to that of the Brilliant Violet dyes. In a 13-color human immunophenotyping panel that we previously developed to quantify T cell subsets, the "brightness" (i.e., the staining index of the Super Bright-conjugated antibodies) appears to be lower than that of comparable Brilliant Violet dyes when titrated, although stained populations in a full panel are still well separated. As the use of up to nine Brilliant polymer dyes simultaneously in large panels is not uncommon, we also tested the performance of Super Bright dyes in staining protocols that include Brilliant Buffer (BD Biosciences) to prevent polymer dye interactions, and found them compatible. Overall, we found Super Bright dyes to perform well in large polychromatic panels. This expansion of the commercially available conjugated antibody repertoire with the addition of Super Brights is timely and will greatly facilitate the success of larger (13+ color) fluorescent panel design.
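    Because the comparison above hinges on the staining index, the short Python sketch below shows one common way such an index is computed from stained (positive) and unstained (negative) populations: the positive-negative separation scaled by the spread of the negatives. The exact formula and the simulated data are assumptions for illustration, not necessarily the metric used in the study.

    # Sketch of a staining-index calculation; the robust-SD-based formula is one
    # common convention, and the populations are simulated for demonstration.
    import numpy as np

    def staining_index(positive, negative):
        """(median(pos) - median(neg)) / (2 * robust SD of the negatives)."""
        rsd_neg = np.percentile(negative, 84) - np.median(negative)
        return (np.median(positive) - np.median(negative)) / (2.0 * rsd_neg)

    rng = np.random.default_rng(0)
    neg = rng.normal(100, 30, 5000)    # unstained / negative population
    pos = rng.normal(1500, 300, 5000)  # stained / positive population
    print(staining_index(pos, neg))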

    A jet-dominated model for a broad-band spectral energy distribution of the nearby low-luminosity active galactic nucleus in M94

    We have compiled a new multiwavelength spectral energy distribution (SED) for the closest obscured low-ionization emission-line region active galactic nucleus (AGN), NGC 4736, also known as M94. The SED comprises mainly high-resolution (mostly sub-arcsecond, or, at the distance to M94, <23 pc from the nucleus) observations from the literature, archival data, as well as previously unpublished sub-millimetre data from the Plateau de Bure Interferometer (PdBI) and the Combined Array for Research in Millimeter-wave Astronomy, in conjunction with new enhanced Multi-Element Radio Linked Interferometer Network (e-MERLIN) L-band (1.5 GHz) observations. Thanks to the e-MERLIN resolution and sensitivity, we resolve for the first time a double structure composed of two radio sources separated by ~1 arcsec, previously observed only at higher frequency. We explore this data set, which further includes non-simultaneous data from the Very Large Array, the Gemini telescope, the Hubble Space Telescope and the Chandra X-ray observatory, in terms of an outflow-dominated model. We compare our results with previous trends found for other AGN using the same model (NGC 4051, M81*, M87 and Sgr A*), as well as hard- and quiescent-state X-ray binaries. We find that the nuclear broad-band spectrum of M94 is consistent with a relativistic outflow of low inclination. The findings in this work add to the growing body of evidence that the physics of weakly accreting black holes scales with mass in a rather straightforward fashion.

    Spatializing Data to Optimize Pricing and Improve User Experience: A Case Study Analyzing Trends in London, Paris and St. Petersburg

    Technology continues to rapidly change the ways in which we live. The ground transportation industry is one of the most recent to be in the midst of this fast-paced change. In this thesis, I analyze data from _________, one of the actors within the ground transportation industry. In a three-city (London, Paris and St. Petersburg) case study, I spatially analyze the point density distribution of two separate groups of data – booked rides and searched (but not booked) rides. I hypothesized that each city would show clear differences in the booked versus searched ride data, and that these differences would spell out specific transfer routes that need to be enhanced in order to increase booking volumes. According to my results, only one city behaved in the above-mentioned fashion. Instead of all three cities behaving the same way, a model of city behavior emerged with three distinct city types. At one end of the scale is city type number one, where the booked ride patterns closely match the searched ride patterns. There are no specific calls to action here; instead, city type number one is the goal for all cities within the _______ network. With little mismatch between the searched and booked ride data, there are no factors preventing or discouraging customers from converting searches into real bookings. St. Petersburg exhibited the characteristics of city type one. In the middle of the scale is city type number two, where the booked ride patterns differ in a number of fundamental ways from the searched ride patterns. In this city type, a number of specific mismatch locations can be identified for further investigation and pricing enhancements. The majority of these enhancements should be achievable through the current supply base for the given city. Paris exhibited the characteristics of city type two. Finally, at the far end of the scale is city type number three, where the booked ride patterns significantly differ from the searched ride patterns. In this city type, there are many broad-area mismatch locations. To enhance the situation here, it is necessary to enroll more suppliers to achieve the pricing and availability that is required. London exhibited the characteristics of city type three. Overall, a model of the city types within the ________ network was created and key recommendations were made to help decrease the mismatch exhibited in Paris and London. The next steps from here include the possibility of rolling out such an analysis throughout the entire network to understand the specific pattern in all Cabforce cities. To achieve this, focused planning will be required and the data must be more finely processed from the beginning to move towards greater efficiencies. Spatializing data and analyzing geographic trends can be an enormously powerful tool for many businesses looking to gain a competitive edge.
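    As a sketch of how such a booked-versus-searched comparison might be implemented, the Python example below builds kernel density surfaces for the two point sets and flags the grid cell where searches are dense but bookings are not. The coordinates, bandwidth defaults and mismatch measure are illustrative assumptions, not the thesis's actual workflow or data.

    # Hedged sketch: compare point-density surfaces of booked vs. searched rides
    # with a Gaussian KDE and locate the largest search-heavy mismatch.
    import numpy as np
    from scipy.stats import gaussian_kde

    def density_surface(lon, lat, grid_lon, grid_lat):
        """Evaluate a 2-D Gaussian KDE of the points on a regular grid."""
        kde = gaussian_kde(np.vstack([lon, lat]))
        gx, gy = np.meshgrid(grid_lon, grid_lat)
        return kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

    rng = np.random.default_rng(1)
    booked = rng.normal([-0.12, 51.50], 0.02, size=(500, 2))   # toy London points
    searched = rng.normal([-0.10, 51.52], 0.03, size=(800, 2))

    grid_lon = np.linspace(-0.2, 0.0, 100)
    grid_lat = np.linspace(51.45, 51.58, 100)
    d_booked = density_surface(booked[:, 0], booked[:, 1], grid_lon, grid_lat)
    d_searched = density_surface(searched[:, 0], searched[:, 1], grid_lon, grid_lat)

    # Normalise each surface and flag where searches outweigh bookings: these are
    # the candidate areas for pricing or supply attention.
    mismatch = d_searched / d_searched.max() - d_booked / d_booked.max()
    print(np.unravel_index(mismatch.argmax(), mismatch.shape))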