1,443 research outputs found

    Breaking Down Barriers: An Evaluation of Parent Engagement In School Closures and Co-Locations

    The Department of Education's ("Department") decisions to close or co-locate schools frequently involve the loss of critical space and programs, which can have serious impacts on students' education. Historically, the Department has had a poor track record of soliciting and incorporating parental and community input when making these decisions. Despite new parental engagement procedures added to the law in 2009 to facilitate greater parental consultation in major school change decisions, this year's story does not seem to be markedly different. The Department treated these hearings as procedural hurdles to satisfy the letter of the law, rather than as an opportunity to engage in a productive dialogue about the impacts of proposed school closures and co-locations on students and what is in the best interests of affected students. By examining the New York State Education Law, Educational Impact Statements (EIS), and transcripts from public hearings, and by surveying 873 parents at 34 schools affected by co-locations, the report concludes that the Department's parental engagement process provided insufficient information and left too many questions unanswered about how students and the school community will be affected by these major school decisions. The report's key finding is that the EIS -- the official document assessing the impact that a proposed change will have on school services -- does not provide adequate information for members of the school community to understand and comment on how students will be affected by these decisions. This finding is consistent with the courts' recent decision that the school closure process is flawed. Further, if not well planned and coordinated, closures and co-locations can disrupt students' education and decrease their access to school facilities such as classrooms, gymnasiums and cafeterias.

    Combination of GNSS and SLR observations using satellite co-locations

    Satellite Laser Ranging (SLR) observations to Global Navigation Satellite System (GNSS) satellites may be used for several purposes. On one hand, the range measurements may be used as an independent validation of satellite orbits derived solely from GNSS microwave observations. On the other hand, both observation types may be analyzed together to generate a combined orbit. The latter procedure implies that one common set of orbit parameters is estimated from GNSS and SLR data. We performed such a combined processing of GNSS and SLR using data from the year 2008. During this period, two GPS and four GLONASS satellites could be used as satellite co-locations. We focus on the general procedure for this type of combined processing and the impact on the terrestrial reference frame (including scale and geocenter), the GNSS satellite antenna offsets (SAO) and the SLR range biases. We show that the combination using only satellite co-locations as the connection between GNSS and SLR is possible and allows the estimation of SLR station coordinates at the level of 1-2 cm. The SLR observations to GNSS satellites provide the scale, allowing the estimation of GNSS SAO without relying on the scale of any a priori terrestrial reference frame. We show that the necessity to estimate SLR range biases does not prohibit the estimation of GNSS SAO. A good distribution of SLR observations allows a common estimation of the two parameter types. The estimated corrections for the GNSS SAO are 119 mm and −13 mm on average for the GPS and GLONASS satellites, respectively. The resulting SLR range biases suggest that it might be sufficient to estimate one parameter per station representing a range bias common to all GNSS satellites. The estimated biases range from a few centimeters up to 5 cm. Scale differences of 0.9 ppb are seen between GNSS and SLR.
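
    At its core, the combined processing is a least-squares adjustment in which SLR ranges to GNSS satellites are reduced to residuals against the microwave-derived orbit and station-wise range biases are estimated. The sketch below is a toy illustration of that single step, not the authors' processing chain; the station coordinates, the satellite positions and the injected 31 mm bias are invented, and the orbit is held fixed.

        import numpy as np

        # Toy illustration (not the paper's software): observed-minus-computed SLR
        # ranges to a GNSS satellite whose orbit is held fixed, used to estimate a
        # constant range bias for one station. All coordinates are in metres (ECEF).

        def range_residual(station_xyz, satellite_xyz, observed_range):
            """Observed-minus-computed range for a single SLR normal point."""
            computed = np.linalg.norm(satellite_xyz - station_xyz)
            return observed_range - computed

        station = np.array([4_075_580.0, 931_855.0, 4_801_568.0])       # invented station position
        sat_positions = np.array([
            [15_600_000.0, 12_400_000.0, 18_900_000.0],
            [15_650_000.0, 12_350_000.0, 18_880_000.0],
            [15_700_000.0, 12_300_000.0, 18_860_000.0],
        ])
        observed = np.array([np.linalg.norm(p - station) + 0.031 for p in sat_positions])  # 31 mm bias injected

        residuals = np.array([range_residual(station, p, o)
                              for p, o in zip(sat_positions, observed)])

        # With the orbit held fixed, the least-squares estimate of a constant
        # station range bias is simply the mean residual.
        print(f"estimated range bias: {residuals.mean() * 1000:.1f} mm")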

    Interplay between telecommunications and face-to-face interactions - a study using mobile phone data

    In this study we analyze one year of anonymized telecommunications data for over one million customers of a large European cellphone operator, and we investigate the relationship between people's calls and their physical location. We discover that more than 90% of users who have called each other have also shared the same space (cell tower), even if they live far apart. Moreover, we find that close to 70% of users who call each other frequently (at least once per month on average) have shared the same space at the same time - an instance that we call co-location. Co-locations appear indicative of coordination calls, which occur just before face-to-face meetings. Their number is highly predictable based on the number of calls between two users and the distance between their home locations - suggesting a new way to quantify the interplay between telecommunications and face-to-face interactions.
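
    For concreteness, the sketch below shows one way such co-locations could be counted from call detail records; the record layout and the one-hour time bin are assumptions for illustration, not the paper's exact pipeline.

        from collections import defaultdict
        from itertools import combinations

        # Illustrative sketch: two users are counted as co-located when they appear
        # at the same cell tower within the same time bin. The record format and
        # the one-hour bin are assumptions.

        def co_location_counts(records, bin_seconds=3600):
            """records: iterable of (user_id, unix_timestamp, tower_id).
            Returns {(user_a, user_b): number of shared (tower, time-bin) events}."""
            bins = defaultdict(set)                      # (tower, time bin) -> users seen there
            for user, ts, tower in records:
                bins[(tower, ts // bin_seconds)].add(user)

            counts = defaultdict(int)
            for users in bins.values():
                for a, b in combinations(sorted(users), 2):
                    counts[(a, b)] += 1
            return counts

        # u1 and u2 visit tower T7 within the same hour; u3 is elsewhere.
        sample = [("u1", 1000, "T7"), ("u2", 1500, "T7"), ("u3", 1200, "T9")]
        print(dict(co_location_counts(sample)))          # {('u1', 'u2'): 1}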

    SeqX: a tool to detect, analyze and visualize residue co-locations in protein and nucleic acid structures

    BACKGROUND: The interacting residues of protein and nucleic acid sequences are close to each other – they are co-located. Structure databases (like the Protein Data Bank, PDB, and the Nucleic Acid Data Bank, NDB) contain all the information about these co-locations; however, it is not an easy task to penetrate this complex information. We developed a Java tool, called SeqX, for this purpose. RESULTS: The SeqX tool is useful to detect, analyze and visualize residue co-locations in protein and nucleic acid structures. The user (a) selects a structure from the PDB; (b) chooses an atom that is commonly present in every residue of the nucleic acid and/or protein structure(s); and (c) defines a distance from these atoms (3–15 Å). The SeqX tool detects every residue that is located within the defined distance from the defined "backbone" atom(s), provides a DotPlot-like visualization (Residue Contact Map), and calculates the frequency of every possible residue pair (Residue Contact Table) in the observed structure. It is possible to exclude +/- 1 to 10 neighbor residues in the same polymeric chain from detection, which greatly improves the specificity of detection (up to 60% when tested on dsDNA). Results obtained on protein structures showed highly significant correlations with results obtained from the literature (p < 0.0001, n = 210, four different subsets). The co-location frequency of physico-chemically compatible amino acids is significantly higher than calculated and expected in random protein sequences (p < 0.0001, n = 80). CONCLUSION: The tool is simple and easy to use and provides quick and reliable visualization and analysis of residue co-locations in protein and nucleic acid structures. AVAILABILITY AND REQUIREMENTS: SeqX; Java J2SE Runtime Environment 5.0 (available from [see Additional file 1]); and at least a 1 GHz processor with a minimum of 256 MB RAM. Source codes are available from the authors.
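
    SeqX itself is a Java tool; the short Python sketch below (using Biopython, with an assumed file name, atom choice and cutoff) only illustrates the detection step described above: flag residue pairs whose chosen atoms fall within the distance threshold, while skipping near neighbors in the same chain.

        from itertools import combinations
        from Bio.PDB import PDBParser    # Biopython; illustrative only, not part of SeqX

        def residue_contacts(pdb_file, atom_name="CA", cutoff=7.0, exclude_neighbors=3):
            """Residue pairs whose chosen atoms lie within `cutoff` angstroms, excluding
            pairs separated by <= `exclude_neighbors` positions in the same chain."""
            structure = PDBParser(QUIET=True).get_structure("model", pdb_file)
            picked = [r for r in structure.get_residues() if atom_name in r]

            contacts = []
            for r1, r2 in combinations(picked, 2):
                same_chain = r1.get_parent().id == r2.get_parent().id
                seq_sep = abs(r1.get_id()[1] - r2.get_id()[1])
                if same_chain and seq_sep <= exclude_neighbors:
                    continue                                   # neighbor exclusion improves specificity
                if r1[atom_name] - r2[atom_name] <= cutoff:    # Biopython: atom subtraction gives the distance
                    contacts.append((r1, r2))
            return contacts

        # contacts = residue_contacts("1abc.pdb")   # hypothetical file; a contact map or
        #                                           # residue-pair frequency table follows from this list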

    Comparison of next-generation portable pollution monitors to measure exposure to PM2.5 from household air pollution in Puno, Peru.

    Assessment of personal exposure to PM2.5 is critical for understanding intervention effectiveness and exposure-response relationships in household air pollution studies. In this pilot study, we compared PM2.5 concentrations obtained from two next-generation personal exposure monitors (the Enhanced Children MicroPEM, or ECM, and the Ultrasonic Personal Air Sampler, or UPAS) to those obtained with a traditional Triplex Cyclone and SKC Air Pump (a gravimetric cyclone/pump sampler). We co-located cyclone/pumps with an ECM and UPAS to obtain 24-hour kitchen concentrations and personal exposure measurements. We measured Spearman correlations and evaluated agreement using the Bland-Altman method. We obtained 215 filters from 72 ECM and 71 UPAS co-locations. Overall, the ECM and the UPAS had similar correlation (ECM ρ = 0.91 vs UPAS ρ = 0.88) and agreement (ECM mean difference of 121.7 ”g/m3 vs UPAS mean difference of 93.9 ”g/m3) with overlapping confidence intervals when compared against the cyclone/pump. When adjusted for the limit of detection, agreement between the devices and the cyclone/pump was also similar for all samples (ECM mean difference of 68.8 ”g/m3 vs UPAS mean difference of 65.4 ”g/m3) and personal exposure samples (ECM mean difference of -3.8 ”g/m3 vs UPAS mean difference of -12.9 ”g/m3). Both the ECM and the UPAS produced comparable measurements when compared against the cyclone/pump setup.
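
    The two comparison metrics used above are straightforward to reproduce; the sketch below applies a Spearman correlation and a Bland-Altman bias/limits-of-agreement calculation to made-up paired 24-hour PM2.5 values, not the study's data.

        import numpy as np
        from scipy.stats import spearmanr

        # Made-up paired 24-hour PM2.5 concentrations (ug/m3): candidate monitor vs
        # gravimetric cyclone/pump reference.
        monitor   = np.array([120.0, 310.0, 85.0, 560.0, 47.0])
        reference = np.array([105.0, 290.0, 60.0, 480.0, 35.0])

        rho, p_value = spearmanr(monitor, reference)

        # Bland-Altman: mean difference (bias) and 95% limits of agreement.
        diff = monitor - reference
        bias = diff.mean()
        spread = 1.96 * diff.std(ddof=1)
        print(f"Spearman rho = {rho:.2f}, bias = {bias:.1f} ug/m3, "
              f"limits of agreement = ({bias - spread:.1f}, {bias + spread:.1f})")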

    Architecting Data Centers for High Efficiency and Low Latency

    Modern data centers, housing remarkably powerful computational capacity, are built at massive scale and consume a huge amount of energy. The energy consumption of data centers has mushroomed from virtually nothing to about three percent of the global electricity supply in the last decade, and will continue to grow. Unfortunately, a significant fraction of this energy consumption is wasted due to the inefficiency of current data center architectures, and one of the key reasons behind this inefficiency is the stringent response latency requirements of the user-facing services hosted in these data centers, such as web search and social networks. To deliver such low response latency, data center operators often have to overprovision resources to handle high peaks in user load and unexpected load spikes, resulting in low efficiency. This dissertation investigates data center architecture designs that reconcile high system efficiency and low response latency. To increase efficiency, we propose techniques that understand both microarchitectural-level resource sharing and system-level resource usage dynamics to enable highly efficient co-locations of latency-critical services and low-priority batch workloads. We investigate resource sharing on real-system simultaneous multithreading (SMT) processors to enable SMT co-locations by precisely predicting the performance interference. We then leverage historical resource usage patterns to further optimize the task scheduling algorithm and data placement policy to improve the efficiency of workload co-locations. Moreover, we introduce methodologies to better manage response latency by automatically attributing the source of tail latency to low-level architectural and system configurations in both offline load testing and online production environments. We design and develop a response latency evaluation framework at microsecond-level precision for data center applications, with which we construct statistical inference procedures to attribute the source of tail latency. Finally, we present an approach that proactively enacts carefully designed causal inference micro-experiments to diagnose the root causes of response latency anomalies, and automatically corrects them to reduce the response latency.
    PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/144144/1/yunqi_1.pd
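
    As a small, hedged illustration of two building blocks mentioned in the abstract (fine-grained tail-latency measurement and an interference-aware co-location check), the sketch below uses synthetic request latencies and an assumed predicted-slowdown factor; it is not the dissertation's framework.

        import numpy as np

        # Hedged sketch, not the dissertation's framework: measure the 99th-percentile
        # latency from per-request samples (microseconds) and admit a batch job for
        # co-location only if the predicted tail latency stays within the SLO.

        def p99_us(latencies_us):
            """99th-percentile latency in microseconds."""
            return float(np.percentile(latencies_us, 99))

        def admit_batch_job(baseline_latencies_us, predicted_slowdown, slo_us):
            """Co-locate the batch job only if the predicted tail latency meets the SLO."""
            return p99_us(baseline_latencies_us) * predicted_slowdown <= slo_us

        # Synthetic request latencies; the 1.15 slowdown factor and 1500 us SLO are assumptions.
        baseline = np.random.default_rng(0).gamma(shape=2.0, scale=150.0, size=10_000)
        print(p99_us(baseline), admit_batch_job(baseline, predicted_slowdown=1.15, slo_us=1500.0))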
    • 

    corecore