225 research outputs found

    The Golden Retriever Rule: Alaska’s Identity Privilege for Animal Adoption Agencies and for Adoptive Animal Owners

    Get PDF
    Since the mid-twentieth century, the use of load-bearing brick construction in housing has declined sharply; during the modernist era the material came to be seen as dated and the building method as inefficient. The cavity-wall technique (kanalmur), a load-bearing brick construction with a high insulation value, was developed in the 1930s to meet stricter energy-conservation requirements, yet lightweight timber-frame structures have dominated Swedish single-family house construction ever since. Energy-conservation requirements have tightened steadily over the years, and Life Cycle Assessment (LCA), a methodology that analyses the climate impact of products and services from a life cycle perspective, has developed alongside them. Applying the methodology to components larger than individual materials has proved difficult, however, so European standards have been produced specifically to systematize life cycle assessments of entire buildings; those standards are followed in this study. The purpose of this thesis is to compare the environmental impact, during production and operation over a life cycle set to 100 years, of a standard house with a brick structural frame against a corresponding timber standard house. The brick house is penalized by the high energy consumption of material production, while timber structures run a considerable risk of a shortened life cycle owing to moisture damage.
To examine the differences between timber and brick structures, a standard house drawing in cavity-wall construction was analysed against a corresponding timber construction; the two building types have the same floor area and their wall constructions the same thermal resistance. Energy demand calculations were performed for both buildings to establish the difference in energy use during the operational phase. The life cycle assessment was carried out in the software Anavitor, using 3D models with building information matched against a database of material life cycle data, and the comparison shows which construction type places the smaller burden on the environment over the life cycle, measured as climate impact in carbon dioxide equivalents.
Results show that the brick house burdens the environment twice as much as the timber house in the production phase, while the brick house is the better alternative with respect to operation and maintenance. After 100 years the difference is 7.3 tonnes of carbon dioxide equivalents in the timber house's favour; according to the LCA and the assumptions made, the two buildings have burdened the environment equally after 168 years. In the brick house's favour stand its reliable service life, durability, moisture resistance and good potential for reuse of the structural brick.
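The reported figures (a 7.3-tonne gap at year 100 and parity at year 168) imply a simple crossover between two cumulative-emission curves. The sketch below illustrates that arithmetic; the production and annual figures are invented assumptions chosen only to be consistent with the abstract's two reported numbers, not values from the thesis, which obtains its results from the Anavitor LCA software.

```python
# Hedged sketch: break-even year for two cumulative-emission curves,
# cumulative(t) = production + annual * t. All tonnage values below are
# ILLUSTRATIVE assumptions, not the thesis's measured data.

def break_even_year(prod_a, annual_a, prod_b, annual_b):
    """Year at which cumulative emissions of A and B are equal:
    t = (prod_a - prod_b) / (annual_b - annual_a)."""
    return (prod_a - prod_b) / (annual_b - annual_a)

# Assumed split (tonnes CO2e): brick front-loads ~18 t more in production
# but saves ~0.107 t/year in operation and maintenance.
brick_prod, brick_annual = 38.0, 0.60
wood_prod, wood_annual = 19.96, 0.7074

t = break_even_year(brick_prod, brick_annual, wood_prod, wood_annual)
diff_100 = (brick_prod + 100 * brick_annual) - (wood_prod + 100 * wood_annual)
print(f"break-even after {t:.0f} years; gap at year 100 = {diff_100:.1f} t")
```

With these assumed inputs the sketch reproduces the abstract's 168-year break-even and 7.3-tonne gap at year 100, which is the whole point of the crossover argument: a larger production burden can be repaid by a lower annual burden, given enough service life.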

    An operative approach to address severe genu valgum deformity in the Ellis-van Creveld syndrome

    Get PDF
    BACKGROUND: The genu valgum deformity seen in the Ellis-van Creveld syndrome is one of the most severe angular deformities seen in any orthopaedic condition. It is likely a combination of a primary genetic-based dysplasia of the lateral portion of the tibial plateau combined with severe soft-tissue contractures that tether the tibia into valgus deformation. Progressive weight-bearing induces changes that accumulate with growth, acting on the initially distorted, valgus-angulated proximal tibia and worsening the deformity as the skeleton matures. The purpose of this study is to present a relatively large case series of this very rare condition and to describe a surgical technique that corrects the severe valgus deformity in the Ellis-van Creveld syndrome by combining extensive soft-tissue release with bony realignment. METHODS: 1. Complete proximal to distal surgical decompression of the peroneal nerve. 2. Radical release and mobilization of the severe quadriceps contracture and iliotibial band contracture. 3. Distal lateral hamstring lengthening/tenotomy and lateral collateral ligament release. 4. Proximal and distal realignment of the subluxed/dislocated patella, medial and lateral retinacular release, vastus medialis advancement, patellar chondroplasty, medial patellofemoral ligament plication, and distal patellar realignment by Roux-Goldthwait technique or patellar tendon transfer with tibial tubercle relocation. 5. Proximal tibial varus osteotomy with partial fibulectomy and anterior compartment release. 6. Occasionally, distal femoral osteotomy. RESULTS: In all cases, the combination of radical soft-tissue release, patellar realignment and bony osteotomy resulted in 10° or less of genu valgum at the time of surgical correction.
Complications of surgery included three patients (five limbs) with knee stiffness that was successfully treated with manipulation, one peroneal nerve palsy, one wound slough and hematoma requiring a skin graft, and one pseudoarthrosis requiring removal of hardware and repeat fixation. At last follow-up, radiographic correction of no more than 20° of genu valgum was maintained in all but four patients (four limbs). Two patients (three limbs) have had or currently require revision surgery due to recurrence of the deformity. CONCLUSION: The operative approach presented in this study has resulted in correction of the severe genu valgum deformity in Ellis-van Creveld syndrome to 10° or less of genu valgum at the time of surgery. Although this is not an outcomes study, a correction of no more than 20° of genu valgum has been maintained in many of the cases included in the study. Further clinical follow-up is still warranted. LEVEL OF EVIDENCE: IV

    The DEEP Groth Strip Galaxy Redshift Survey. III. Redshift Catalog and Properties of Galaxies

    Full text link
    The Deep Extragalactic Evolutionary Probe (DEEP) is a series of spectroscopic surveys of faint galaxies, targeted at the properties and clustering of galaxies at redshifts z ~ 1. We present the redshift catalog of the DEEP 1 GSS pilot phase of this project, a Keck/LRIS survey in the HST/WFPC2 Groth Survey Strip. The redshift catalog and data, including reduced spectra, are publicly available through a Web-accessible database. The catalog contains 658 secure galaxy redshifts with a median z=0.65, and shows large-scale structure walls to z = 1. We find a bimodal distribution in the galaxy color-magnitude diagram which persists to z = 1. A similar color division has been seen locally by the SDSS and to z ~ 1 by COMBO-17. For red galaxies, we find a reddening of only 0.11 mag from z ~ 0.8 to now, about half the color evolution measured by COMBO-17. We measure structural properties of the galaxies from the HST imaging, and find that the color division corresponds generally to a structural division. Most red galaxies, ~ 75%, are centrally concentrated, with a red bulge or spheroid, while blue galaxies usually have exponential profiles. However, there are two subclasses of red galaxies that are not bulge-dominated: edge-on disks and a second category which we term diffuse red galaxies (DIFRGs). The distant edge-on disks are similar in appearance and frequency to those at low redshift, but analogs of DIFRGs are rare among local red galaxies. DIFRGs have significant emission lines, indicating that they are reddened mainly by dust rather than age. The DIFRGs in our sample are all at z>0.64, suggesting that DIFRGs are more prevalent at high redshifts; they may be related to the dusty or irregular extremely red objects (EROs) beyond z>1.2 that have been found in deep K-selected surveys. (abridged)
Comment: ApJ in press. 24 pages, 17 figures (12 color). The DEEP public database is available at http://saci.ucolick.org

    Applying human factors principles to alert design increases efficiency and reduces prescribing errors in a scenario-based simulation

    Get PDF
    OBJECTIVE: To apply human factors engineering principles to improve alert interface design. We hypothesized that incorporating human factors principles into alerts would improve usability, reduce workload for prescribers, and reduce prescribing errors. MATERIALS AND METHODS: We performed a scenario-based simulation study using a counterbalanced, crossover design with 20 Veterans Affairs prescribers to compare original versus redesigned alerts. We redesigned drug-allergy, drug-drug interaction, and drug-disease alerts based upon human factors principles. We assessed usability (learnability of redesign, efficiency, satisfaction, and usability errors), perceived workload, and prescribing errors. RESULTS: Although prescribers received no training on the design changes, prescribers were able to resolve redesigned alerts more efficiently (median (IQR): 56 (47) s) compared to the original alerts (85 (71) s; p=0.015). In addition, prescribers rated redesigned alerts significantly higher than original alerts across several dimensions of satisfaction. Redesigned alerts led to a modest but significant reduction in workload (p=0.042) and significantly reduced the number of prescribing errors per prescriber (median (range): 2 (1-5) compared to original alerts: 4 (1-7); p=0.024). DISCUSSION: Aspects of the redesigned alerts that likely contributed to better prescribing include design modifications that reduced usability-related errors, providing clinical data closer to the point of decision, and displaying alert text in a tabular format. Displaying alert text in a tabular format may help prescribers extract information quickly and thereby increase responsiveness to alerts. CONCLUSIONS: This simulation study provides evidence that applying human factors design principles to medication alerts can improve usability and prescribing outcomes
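The study compares paired, skewed timing data from the same 20 prescribers under both alert designs and reports medians with IQRs and p-values. The abstract does not name the statistical test used; the sketch below illustrates one standard stdlib-only approach for such paired data, a one-sided sign test, on fabricated placeholder times (NOT the study's data).

```python
# Hedged sketch: paired comparison of alert-resolution times in a
# crossover design, via a sign test. The timing data are fabricated
# placeholders loosely matching the reported medians (~85 s original,
# ~56 s redesigned); the paper may well have used a different test.
import math
import random

random.seed(0)
n = 20  # 20 prescribers, each resolving alerts under both designs
original_s = [random.gammavariate(4.0, 22.0) for _ in range(n)]
redesigned_s = [random.gammavariate(4.0, 15.0) for _ in range(n)]

# Sign test: under H0 each prescriber is equally likely to be faster
# with either design, so the number of "redesign faster" pairs is
# Binomial(n, 0.5); the one-sided p-value is P(X >= k).
k = sum(r < o for o, r in zip(original_s, redesigned_s))
p = sum(math.comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"redesign faster in {k}/{n} pairs, one-sided sign-test p = {p:.4f}")
```

A sign test throws away magnitude information and is therefore conservative; a Wilcoxon signed-rank test is a common stronger alternative for this kind of paired, non-normal data.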

    Ubiquitous outflows in DEEP2 spectra of star-forming galaxies at z=1.4

    Full text link
    Galactic winds are a prime suspect for the metal enrichment of the intergalactic medium and may have a strong influence on the chemical evolution of galaxies and the nature of QSO absorption line systems. We use a sample of 1406 galaxy spectra at z~1.4 from the DEEP2 redshift survey to show that blueshifted Mg II 2796, 2803 A absorption is ubiquitous in star-forming galaxies at this epoch. This is the first detection of frequent outflowing galactic winds at z~1. The presence and depth of absorption are independent of AGN spectral signatures or galaxy morphology; major mergers are not a prerequisite for driving a galactic wind from massive galaxies. Outflows are found in coadded spectra of galaxies spanning a range of 30x in stellar mass and 10x in star formation rate (SFR), calibrated from K-band and from MIPS IR fluxes. The outflows have column densities of order N_H ~ 10^20 cm^-2 and characteristic velocities of ~ 300-500 km/sec, with absorption seen out to 1000 km/sec in the most massive, highest SFR galaxies. The velocities suggest that the outflowing gas can escape into the IGM and that massive galaxies can produce cosmologically and chemically significant outflows. Both the Mg II equivalent width and the outflow velocity are larger for galaxies of higher stellar mass and SFR, with V_wind ~ SFR^0.3, similar to the scaling in low redshift IR-luminous galaxies. The high frequency of outflows in the star-forming galaxy population at z~1 indicates that galactic winds occur in the progenitors of massive spirals as well as those of ellipticals. The increase of outflow velocity with mass and SFR constrains theoretical models of galaxy evolution that include feedback from galactic winds, and may favor momentum-driven models for the wind physics.
Comment: Accepted by ApJ. 25 pages, 17 figures. Revised to add discussions of intervening absorbers and AGN-driven outflows; conclusions unchanged
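Scaling relations such as V_wind ~ SFR^0.3 are typically estimated by fitting a straight line in log-log space, since a power law V = A * SFR^alpha becomes log V = log A + alpha * log SFR. The sketch below illustrates that fit on synthetic data generated with alpha = 0.3 plus scatter; these are NOT the DEEP2 measurements, and the paper's actual fitting procedure may differ.

```python
# Hedged sketch: recovering a power-law exponent V_wind ~ SFR^alpha by
# least squares in log-log space. Data are synthetic (true alpha = 0.3,
# normalization 150 km/s, 0.05 dex scatter), purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
sfr = 10 ** rng.uniform(0.0, 2.0, size=50)                     # SFR, arbitrary units
v_wind = 150 * sfr**0.3 * 10 ** rng.normal(0.0, 0.05, size=50)  # km/s, with scatter

# log10 V = log10 A + alpha * log10 SFR  ->  straight-line fit
alpha, log_a = np.polyfit(np.log10(sfr), np.log10(v_wind), 1)
print(f"fitted alpha = {alpha:.2f}, normalization = {10**log_a:.0f} km/s")
```

With 50 points over two decades of SFR and only 0.05 dex of scatter the exponent is recovered tightly; real samples, with coadded spectra in coarse mass/SFR bins, carry much larger uncertainties on the exponent.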

    A Survey of Galaxy Kinematics to z ~ 1 in the TKRS/GOODS-N Field. I. Rotation and Dispersion Properties

    Full text link
    We present kinematic measurements of a large sample of galaxies from the TKRS Survey in the GOODS-N field. We measure line-of-sight velocity dispersions from integrated emission for 1089 galaxies with median z=0.637, and spatially resolved kinematics for a subsample of 380 galaxies. This is the largest sample of galaxies to z ~ 1 with kinematics to date, and allows us to measure kinematic properties without morphological pre-selection. Emission linewidths provide kinematics for the bulk of blue galaxies. To fit the spatially resolved kinematics, we fit models with both line-of-sight rotation amplitude and velocity dispersion. Integrated linewidth correlates well with a combination of the rotation gradient and dispersion, and is a robust measure of galaxy kinematics. The spatial extents of emission and continuum are similar and there is no evidence that linewidths are affected by nuclear or clumpy emission. The measured rotation gradient depends strongly on slit PA alignment with galaxy major axis, but integrated linewidth does not. Even for galaxies with well-aligned slits, some have kinematics dominated by dispersion (V/sigma<1) rather than rotation. These are probably objects with disordered velocity fields, not dynamically hot stellar systems. About 35% of the resolved sample are dispersion dominated; galaxies that are both dispersion dominated and bright exist at high redshift but appear rare at low redshift. This kinematic morphology is linked to photometric morphology in HST/ACS images: dispersion dominated galaxies include a higher fraction of irregulars and chain galaxies, while rotation dominated galaxies are mostly disks and irregulars. Only one-third of chain/hyphen galaxies are dominated by rotation; high-z elongated objects cannot be assumed to be inclined disks. (Abridged)
Comment: ApJ in press. 23 pages, 23 figures. Full data tables available from http://www.astro.umd.edu/~bjw/tkrs_kinematics
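The dispersion-dominated vs rotation-dominated split described above reduces to a single threshold on the ratio of line-of-sight rotation amplitude to velocity dispersion. A minimal sketch of that classification, using invented example values rather than TKRS measurements:

```python
# Hedged sketch: the V/sigma classification from the abstract. A galaxy
# with rotation amplitude V and velocity dispersion sigma is called
# dispersion dominated when V/sigma < 1. The (V, sigma) pairs below are
# invented examples, not survey data.

def kinematic_class(v_rot, sigma):
    """Return 'dispersion' if V/sigma < 1, else 'rotation'."""
    return "dispersion" if v_rot / sigma < 1.0 else "rotation"

sample = [(40.0, 90.0), (180.0, 60.0), (55.0, 70.0)]  # (V, sigma) in km/s
classes = [kinematic_class(v, s) for v, s in sample]
frac_disp = classes.count("dispersion") / len(classes)
print(classes, f"dispersion-dominated fraction = {frac_disp:.2f}")
```

In practice the measured V depends on slit alignment with the galaxy major axis, as the abstract notes, so a misaligned slit can push a true rotator below the V/sigma = 1 threshold; the paper restricts part of its analysis to well-aligned slits for this reason.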