
    Risk Factor Analysis for 30-Day Readmission Rates of Newly Tracheostomized Children

    Objectives: Pediatric patients undergo tracheostomy for a variety of reasons; however, medical complexity is common among these patients. Although tracheostomy may help to facilitate discharge, these patients may be at increased risk for hospital readmission. The purpose of this study was to evaluate our institutional rate of 30-day readmission for patients discharged with new tracheostomies and to identify risk factors associated with readmission. Study Design: A retrospective cohort study was conducted for all pediatric patients ages 0-18 years with new tracheostomies at our institution over a 36-month period. Methods: A chart review was performed for all newly tracheostomized children from 2013 to 2016. We investigated documented readmissions within 30 days of discharge, reasons for readmission, demographic variables including age and ethnicity, initial discharge disposition, co-morbidities, and socioeconomic status estimated by mean household income for the parental zip code. Results: 45 patients were discharged during the study period. A total of 13 (28.9%) required readmission within 30 days of discharge. Among these 13 patients, the majority (61.5%) were readmitted for lower airway concerns, many (30.8%) were admitted for reasons unrelated to tracheostomy or respiratory concerns, and only one patient (7.7%) was readmitted for a reason related to the tracheostomy itself (tracheostomal breakdown). Age, ethnicity, discharge disposition, co-morbidities, and socioeconomic status were not associated with differences in readmission rates. Patients readmitted within 30 days had a higher number of admissions within the first year. Conclusion: Pediatric patients with new tracheostomies are at high risk for readmission after discharge from the initial hospitalization. The readmissions are most likely secondary to underlying medical complexity rather than to issues related specifically to the tracheostomy procedure.
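
    The headline figures are simple proportions (13/45 ≈ 28.9%; 8/13 ≈ 61.5%), and risk-factor comparisons in a cohort this small are typically checked with exact tests. A minimal Python sketch of that arithmetic follows; only the 13-of-45 count comes from the abstract, and the 2×2 risk-factor table is hypothetical.

        # Readmission rate reported in the study: 13 of 45 patients.
        from scipy.stats import fisher_exact

        readmitted, cohort = 13, 45
        print(f"30-day readmission rate: {readmitted / cohort:.1%}")  # 28.9%

        # Hypothetical 2x2 table for one candidate risk factor
        # (rows: factor present/absent; columns: readmitted / not readmitted).
        table = [[7, 12], [6, 20]]
        odds_ratio, p_value = fisher_exact(table)  # exact test suits small cohorts
        print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")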

    Algorithms that "Don't See Color": Comparing Biases in Lookalike and Special Ad Audiences

    Today, algorithmic models shape important decisions in domains such as credit, employment, and criminal justice. At the same time, these algorithms have been shown to have discriminatory effects. Some organizations have tried to mitigate these effects by removing demographic features from an algorithm's inputs: if an algorithm is not provided with a feature, one might think, then its outputs should not discriminate with respect to that feature. This may not be true, however, when there are other correlated features. In this paper, we explore the limits of this approach using a unique opportunity created by a lawsuit settlement concerning discrimination on Facebook's advertising platform. Facebook agreed to modify its Lookalike Audiences tool - which creates target sets of users for ads by identifying users who share "common qualities" with users in a source audience provided by an advertiser - by removing certain demographic features as inputs to its algorithm. The modified tool, Special Ad Audiences, is intended to reduce the potential for discrimination in target audiences. We create a series of Lookalike and Special Ad audiences based on biased source audiences - i.e., source audiences with known skew along the lines of gender, age, race, and political leanings. We show that the resulting Lookalike and Special Ad audiences both reflect these biases, even though the Special Ad Audiences algorithm is not provided with the features along which our source audiences are skewed. More broadly, we provide experimental proof that removing demographic features from a real-world algorithmic system's inputs can fail to prevent biased outputs. Organizations using algorithms to mediate access to life opportunities should consider other approaches to mitigating discriminatory effects.
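
    The paper's core measurement is a comparison of demographic composition between an algorithmically built audience and a reference population. A hedged Python sketch of that comparison on synthetic data (none of these numbers come from the paper):

        # Representation ratio: share of a group in the built audience divided
        # by its share in a reference population. A value of 1.0 means no skew.
        def representation_ratio(audience, reference, group):
            a = sum(1 for u in audience if u == group) / len(audience)
            r = sum(1 for u in reference if u == group) / len(reference)
            return a / r

        reference = ["men"] * 500 + ["women"] * 500  # balanced baseline (synthetic)
        lookalike = ["men"] * 720 + ["women"] * 280  # audience grown from a skewed seed
        print(representation_ratio(lookalike, reference, "men"))  # 1.44 -> over-representation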

    Tracking Flanker Task Dynamics: Evidence for Continuous Attentional Selectivity

    A central research goal in the cognitive sciences has been to understand the processes that underlie selective attention, or the ability to focus on goal-relevant information. Two opposing theories have been proposed to explain how selective attention emerges: one suggests that attention improves continuously over time, whereas the other proposes that attention improves at a discrete time point. While outcome-based data (e.g., reaction time) have provided evidence for both accounts, there has been no empirical evidence to differentiate them. In this study, we used mouse-tracking in a flanker task to obtain time-sensitive measures associated with selective attention. Specifically, we recorded real-time movement trajectories to assess characteristics of continuous and discrete shifts in selective attention. Our results strongly suggest that selective attention increased gradually over time rather than at a discrete point, providing support for a continuous account of selective attention.
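
    The distinguishing prediction concerns the within-trial time course: a continuous account implies that the pull of the flankers fades smoothly, while a discrete account implies a step. A small Python sketch of how such a time course can be read off time-normalized trajectories (synthetic data; the study's actual analysis pipeline may differ):

        import numpy as np

        rng = np.random.default_rng(0)
        n_trials, n_steps = 100, 101
        t = np.linspace(0, 1, n_steps)               # normalized trial time

        # Toy trajectories whose sideways pull toward the flankers decays
        # smoothly, mimicking a continuous gain in selectivity.
        pull = np.exp(-4 * t)
        x = (pull * rng.normal(0.5, 0.1, (n_trials, 1))
             + rng.normal(0, 0.02, (n_trials, n_steps)))

        attraction = x.mean(axis=0)                  # mean deviation per time bin
        print(attraction[::25])                      # falls gradually, not in a jump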

    Prospectus, March 18, 2009


    A biological and chemical approach to restoring water quality: A case study in an urban eutrophic pond

    Efforts to improve the water quality of eutrophic ponds often involve changes to watershed management practices that reduce external nutrient loads. While this is required for long-term recovery and prevention, eutrophic conditions are often sustained by the recycling of internal nutrients already present within the waterbody. In particular, internal phosphorus bound to organic material and adsorbed to sediment can delay lake recovery for decades. Thus, pond and watershed management techniques are needed that not only reduce external nutrient loading but also mitigate the effects of internal nutrients already present. Our objective was therefore to demonstrate a biological and chemical approach to remove and sequester nutrients present in, and entering, an urban retention pond. A novel biological and chemical management technique was designed by constructing a 37 m² (6.1 m × 6.1 m) floating treatment wetland coupled with a slow-release lanthanum composite inserted inside an airlift pump. The floating treatment wetland promoted microbial denitrification and plant uptake of nitrogen and phosphorus, while the airlift pump slowly released lanthanum into the water column over the growing season to reduce soluble reactive phosphorus. The design was tested at the microcosm and field scales, where nitrate-N and phosphate-P removal from the water column was significant (α = 0.05) at the microcosm scale and observed at the field scale. Two seasons of field sampling showed that both nitrate-N and phosphate-P concentrations were reduced from 50 μg L⁻¹ in 2020 to <10 μg L⁻¹ in 2021. Load calculations of incoming nitrate-N and phosphate-P entering the retention pond from the surrounding watershed indicate that the presented biological-chemical treatment is sustainable and will minimize the effects of nutrient loading from nonpoint source pollution.
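
    The load bookkeeping behind the sustainability claim is concentration times water volume. A back-of-the-envelope Python sketch; the annual inflow is hypothetical, and only the 50 and <10 μg L⁻¹ concentrations come from the text:

        # load (kg) = concentration (ug/L) * volume (m^3) * 1000 L/m^3 / 1e9 ug/kg
        def load_kg(conc_ug_per_L, volume_m3):
            return conc_ug_per_L * volume_m3 * 1000 / 1e9

        inflow_m3_per_yr = 50_000                # hypothetical annual inflow
        print(load_kg(50, inflow_m3_per_yr))     # 2020 phosphate-P load: 2.5 kg/yr
        print(load_kg(10, inflow_m3_per_yr))     # 2021 upper bound: 0.5 kg/yr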

    Quartet S Wave Neutron Deuteron Scattering in Effective Field Theory

    The real and imaginary parts of the quartet S-wave phase shift in nd scattering (⁴S₃/₂) for centre-of-mass momenta of up to 300 MeV (E_cm ≈ 70 MeV) are presented in effective field theory, using both perturbative pions and a theory in which pions are integrated out. Where data are available, the calculation agrees with both experiment and potential-model calculations, but it extends to a higher, so far untested momentum régime above the deuteron breakup point. A Lagrangian better suited to numerical computations is derived. Comment: 27 pages LaTeX2e with 11 figures, uses packages includegraphicx (6 .eps files), color and feynmp (necessary Metapost files included). Corrections in bibliography and NNLO results added above breakup.
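
    The quoted kinematics are mutually consistent under non-relativistic two-body kinematics with the nd reduced mass; as a quick check (not part of the paper's text):

        \mu_{nd} = \frac{m_N m_d}{m_N + m_d}
                 \approx \frac{939 \times 1876}{939 + 1876}\,\mathrm{MeV}
                 \approx 626\,\mathrm{MeV},
        \qquad
        E_{\mathrm{cm}} = \frac{k^2}{2\mu_{nd}}
                        \approx \frac{(300\,\mathrm{MeV})^2}{2 \times 626\,\mathrm{MeV}}
                        \approx 72\,\mathrm{MeV}.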

    Equilibrium Real Exchange Rate in Colombia: A Thousands-of-VEC-Models Approach

    Behavioral Equilibrium Exchange Rate (BEER) models suggest many variables as potential drivers of the equilibrium real exchange rate (ERER). This gives rise to model uncertainty, as the ERER depends on, and often varies drastically with, the particular set of variables chosen. We address this issue by estimating thousands of Vector Error Correction (VEC) specifications on Colombian data for 2000Q1-2019Q4. Following an extensive literature review, we employ thirty-five proxies categorized into five fixed groups of economic fundamentals that underlie the ERER: indebtedness, fiscal sector, productivity, terms of trade, and interest rate differentials. Our approach derives an empirical distribution of the ERER that allows us to state with greater certainty, across hundreds of plausible economic specifications, whether the real exchange rate is misaligned or in equilibrium.
    This paper estimates the equilibrium real exchange rate in Colombia using a methodology that explains its behavior through a set of fundamental macroeconomic variables. Given the uncertainty about which variables count as fundamentals, more than thirty proxies are used, classified into five groups of fundamentals (indebtedness, fiscal sector, productivity, terms of trade, and interest rate differentials). Thousands of cointegrated vector error correction models (VECMs) are then estimated over all possible combinations of variables within each fundamental group. From the estimated cointegration relations, an empirical distribution of the equilibrium real exchange rate is constructed that makes it possible to identify, with greater certainty, periods in which the observed real exchange rate was above or below its equilibrium level.
    Contribution: The misalignment of the real exchange rate has long been of interest because of its usefulness for assessing a country's macroeconomic imbalances. The indicator helps monetary and exchange rate authorities determine whether movements in the nominal exchange rate stem from temporary shocks that are likely to reverse in the short run, or from more permanent changes in fundamental macroeconomic variables. However, the literature has noted that, even within a single methodology for estimating the equilibrium real exchange rate, results can vary depending on which variables are included as fundamentals. This study addresses the problem by estimating thousands of models that combine multiple ways of measuring the fundamentals, lending greater robustness to the results.
    Results: Most of the estimated coefficients carry the expected sign: their direction and relationship with the real exchange rate are consistent, robust, and in line with the international literature. The empirical distribution of the equilibrium real exchange rate shows a decline (appreciation) between 2004 and 2013, consistent with improved terms of trade, some gains in relative productivity, higher government spending, and reduced external indebtedness. Over 2014-2019 the indicator rose (depreciation), consistent with the international fall in oil prices, which deteriorated the terms of trade and raised external debt indicators. The misalignment, computed as the difference between the observed real exchange rate and this equilibrium, exhibits periods of large positive misalignment, i.e., periods in which the observed real exchange rate exceeded its equilibrium; these were associated with the global financial crisis (2008-2009) and the oil shock (2014-2015). Moreover, under the flexible exchange rate regime adopted in Colombia, the real exchange rate has tended to oscillate around its equilibrium.
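
    The many-models idea reduces to a loop: pick one proxy per fundamental group, fit a VECM, read off the long-run relation, repeat over all combinations, and pool the implied equilibrium paths. A hedged Python sketch using statsmodels on synthetic data (column names, proxy lists, and lag choices are hypothetical, not the paper's specification):

        from itertools import product
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.vector_ar.vecm import VECM

        rng = np.random.default_rng(1)
        cols = ["rer", "ext_debt_gdp", "nfa_gdp", "gov_spend_gdp",
                "rel_prod", "terms_of_trade", "rate_diff"]
        # 80 quarters (2000Q1-2019Q4) of synthetic random-walk data.
        df = pd.DataFrame(rng.normal(size=(80, len(cols))).cumsum(axis=0),
                          columns=cols)

        groups = {                         # one proxy per group enters each model
            "indebtedness": ["ext_debt_gdp", "nfa_gdp"],
            "fiscal":       ["gov_spend_gdp"],
            "productivity": ["rel_prod"],
            "tot":          ["terms_of_trade"],
            "rates":        ["rate_diff"],
        }

        equilibria = []
        for combo in product(*groups.values()):
            endog = df[["rer", *combo]]
            res = VECM(endog, k_ar_diff=1, coint_rank=1).fit()
            beta = res.beta[:, 0]          # cointegrating vector
            # Equilibrium RER implied by the long-run relation, normalized on rer.
            eq = -(endog.iloc[:, 1:] @ (beta[1:] / beta[0]))
            equilibria.append(eq)

        dist = pd.concat(equilibria, axis=1)   # one column per model specification
        print(dist.quantile([0.05, 0.5, 0.95], axis=1).T.head())

    Pooling the per-model equilibrium paths into quantiles is what turns hundreds of point estimates into the empirical distribution the abstract describes.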