    Integration of Geophysical Parameters for Electrodynamic Tether Propulsion Modeling Environment

    Electrodynamic tether propulsion enables very small satellites to operate over theoretically unlimited orbital lifetimes, subject only to the lifetime of the tether materials in the space environment. Because electrodynamic tethers draw their propulsive current from in-situ collection of ionospheric electrons, they are an attractive alternative to consumable propulsion systems. However, the extremely complex electrodynamics and mechanical dynamics of operating this system in the space environment require a robust modeling environment. This report explores recent developments to integrate updated geophysical parameters into the TEMPEST modeling software in support of this goal. The discussion is introduced by a detailed exploration of the fundamental tradeoffs between a CubeSat and a traditional satellite system and of how electrodynamic tethers can bridge this gap, and the report concludes with a summary of the motivations of the MiTEE CubeSat Program and the progress of this modeling endeavor.
    http://deepblue.lib.umich.edu/bitstream/2027.42/169563/1/Honors_Capstone_Miller_Mitchel.pdf
    http://deepblue.lib.umich.edu/bitstream/2027.42/169563/2/Honors_Capstone_Miller_Mitchel_slides.pd
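
    The propulsion principle the abstract refers to is the Lorentz force on a current-carrying tether in the geomagnetic field. A minimal sketch of that force, with every parameter value hypothetical rather than taken from the report:

        import numpy as np

        # Sketch (not from the report): the thrust on a straight tether
        # carrying current I is the Lorentz force F = I * (L x B), where
        # L is the tether length vector and B the local geomagnetic field.
        # All values below are hypothetical.
        I = 0.5                              # collected tether current [A]
        L = np.array([0.0, 0.0, 10.0])       # 10 m tether along z [m]
        B = np.array([2.0e-5, 0.0, 3.0e-5])  # local geomagnetic field [T]

        F = I * np.cross(L, B)               # force on the tether [N]
        print(F, np.linalg.norm(F))          # [0. 1.e-04 0.], |F| = 1e-4 N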

    Meta-analysis of the efficacy of a single stage laparoscopic management versus two-stage endoscopic management of symptomatic gallstones with common bile duct stones.

    Background. The optimal treatment of gallstones with associated common bile duct stones in the laparoscopic era is controversial. Various reviews and decision-based algorithms have been published, but the superior treatment modality is unclear. Therefore, a meta-analysis was conducted to compare the two most commonly used treatment strategies. Methods. A systematic review was conducted to compare single-stage laparoscopic cholecystectomy with common bile duct exploration versus a combined endoscopic and laparoscopic treatment. Eligible studies were identified using a search of the Medline, Embase, Cochrane and Science Citation Index Expanded databases. Appropriately selected articles were independently reviewed and data were extracted and cross-referenced. A meta-analysis of the pooled trial data was conducted to determine differences in outcomes. Results. A total of seven randomized trials were identified with 746 patients, 366 in the laparoscopic-only treatment group and 380 in the combined endoscopic and laparoscopic treatment arms. There was no significant difference in successful bile duct clearance between the two groups (OR 1.23; 95% CI 0.55 to 2.75, P = 0.61). There was no statistical difference in morbidity (RR 1.23; 95% CI 0.92 to 1.66; P = 0.17), mortality (RD -0.00; 95% CI -0.02 to 0.01, P = 0.59) or length of hospital stay (MD -0.31; 95% CI -1.68 to 1.06, P = 0.66). However, there was a statistically significant difference in the duration of the procedure in favour of the single-stage laparoscopic treatment (MD -6.83; 95% CI -9.59 to -4.07, P < 0.00001). Conclusion. Both the laparoscopic-only and the combined endoscopic and laparoscopic treatment approaches show comparable efficacy in the management of symptomatic gallstones with associated choledocholithiasis.
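
    The pooled estimates quoted above come from standard meta-analytic pooling of study-level effect sizes. A minimal sketch of inverse-variance, fixed-effect pooling of log odds ratios; the 2x2 counts are invented for illustration and are not the trial data analysed in the review:

        import math

        # Hypothetical per-study counts: (events_a, total_a, events_b, total_b).
        studies = [(50, 55, 52, 58), (40, 48, 41, 50), (60, 70, 63, 72)]

        log_ors, weights = [], []
        for ea, na, eb, nb in studies:
            a, b = ea, na - ea              # group A: events / non-events
            c, d = eb, nb - eb              # group B: events / non-events
            log_ors.append(math.log((a * d) / (b * c)))
            weights.append(1 / (1/a + 1/b + 1/c + 1/d))  # inverse variance of log OR

        pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
        se = math.sqrt(1 / sum(weights))
        print(f"pooled OR = {math.exp(pooled):.2f} (95% CI "
              f"{math.exp(pooled - 1.96*se):.2f} to {math.exp(pooled + 1.96*se):.2f})")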

    On computational irreducibility and the predictability of complex physical systems

    Using elementary cellular automata (CA) as an example, we show how to coarse-grain CA in all classes of Wolfram's classification. We find that computationally irreducible (CIR) physical processes can be predictable and even computationally reducible at a coarse-grained level of description. The resulting coarse-grained CA which we construct emulate the large-scale behavior of the original systems without accounting for small-scale details. At least one of the CA that can be coarse-grained is irreducible and known to be a universal Turing machine.
    Comment: 4 pages, 2 figures, to be published in PR
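
    For readers unfamiliar with elementary CA, a minimal sketch of how an 8-bit Wolfram rule number defines the local update; the code is illustrative and not taken from the paper:

        import numpy as np

        def step(state: np.ndarray, rule: int) -> np.ndarray:
            """One synchronous update of an elementary CA on a ring: each
            cell reads its (left, self, right) neighborhood as a 3-bit
            index into the 8-entry lookup table encoded by `rule`."""
            table = (rule >> np.arange(8)) & 1
            idx = (np.roll(state, 1) << 2) | (state << 1) | np.roll(state, -1)
            return table[idx]

        state = np.zeros(31, dtype=np.int64)
        state[15] = 1                        # single seed cell
        for _ in range(8):                   # print a short space-time diagram
            print("".join(".#"[c] for c in state))
            state = step(state, 90)          # rule 90 grows a Sierpinski triangle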

    Dephasing due to Intermode Coupling in Superconducting Stripline Resonators

    The nonlinearity exhibited by the kinetic inductance of a superconducting stripline couples stripline resonator modes together in a manner suitable for quantum non-demolition measurement of the number of photons in a given resonator mode. Quantum non-demolition measurement is accomplished by coherently driving another resonator mode, referred to as the detector mode, and measuring its response. We show that the sensitivity of such a detection scheme is directly related to the dephasing rate induced by this intermode coupling, and that high sensitivity is expected when the detector mode is driven into the nonlinear regime and operated close to a point where critical slowing down occurs.

    Influence of Strouhal number on pulsating methane–air coflow jet diffusion flames

    Four periodically time-varying methane–air laminar coflow jet diffusion flames, each forced by pulsating the fuel jet's exit velocity U_j sinusoidally with a different modulation frequency ω_j and with a 50% amplitude variation, have been computed. Combustion of methane has been modeled using a chemical mechanism with 15 species and 42 reactions, and the solution of the unsteady Navier–Stokes equations has been obtained numerically using a modified vorticity-velocity formulation in the limit of low Mach number. The effect of ω_j on temperature and chemistry has been studied in detail. Three different regimes are found depending on the flame's Strouhal number S = a·ω_j/U_j, with a denoting the fuel jet radius. For a small Strouhal number (S = 0.1), the modulation introduces a perturbation that travels very far downstream, and certain variables oscillate at the frequency imposed by the fuel jet modulation. As the Strouhal number grows, the nondimensional frequency approaches the natural frequency of oscillation of the flickering flame (S ≃ 0.2). Coupling with the pulsation frequency enhances the effect of the imposed modulation, and a vigorous pinch-off is observed for S = 0.25 and S = 0.5. Larger values of S confine the oscillation to the jet's near-exit region, and the effects of the pulsation are reduced to small wiggles in the temperature and concentration values. Temperature and species mass fractions change appreciably near the jet centerline, where variations of over 2% for the temperature and of 15% and 40% for the CO and OH mass fractions, respectively, are found. Transverse to the jet movement, however, the variations almost disappear at radial distances on the order of the fuel jet radius, indicating fast damping of the oscillation in the spanwise direction.
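
    The regime classification above hinges on a single dimensionless group, the Strouhal number. A minimal sketch of computing it and mapping it to the regimes reported; all parameter values are hypothetical and the regime boundaries are approximate:

        def strouhal(radius_m: float, omega_rad_s: float, u_jet_m_s: float) -> float:
            """S = a * omega_j / U_j: fuel jet radius a, modulation
            frequency omega_j, jet exit velocity U_j."""
            return radius_m * omega_rad_s / u_jet_m_s

        def regime(s: float) -> str:
            """Qualitative regimes from the abstract (boundaries approximate)."""
            if s <= 0.1:
                return "perturbation travels far downstream"
            if s <= 0.5:
                return "coupling with the flicker frequency; vigorous pinch-off"
            return "oscillation confined to the jet's near-exit region"

        s = strouhal(radius_m=0.005, omega_rad_s=200.0, u_jet_m_s=4.0)
        print(f"S = {s:.2f}: {regime(s)}")   # S = 0.25: pinch-off regime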

    Cost and Time Control Using the Earned Value Method with Microsoft Project 2010 (Case Study: Mantos Building, Phase III)

    Control is the most fundamental function in the execution of a construction project: it serves as a tool to keep the project on course and to support its execution and completion. In practice, deviations often occur in which the costs incurred and the schedule exceed what was planned, so project control aims to keep cost and time in line with the planned budget and schedule. The earned value method extends the S-curve control technique so that cost variances can be analyzed simultaneously, allowing project progress to be assessed against both the schedule and the allocated budget. The method processes the budget plan (RAB), unit price lists for labor and materials, unit price analyses, and project progress reports to obtain the BCWS (Budgeted Cost of Work Scheduled), ACWP (Actual Cost of Work Performed), and BCWP (Budgeted Cost of Work Performed). Applying the earned value method up to the review in week 6 gave BCWS = Rp 46,932,747,947.29; ACWP = Rp 45,928,815,000.00; and BCWP = Rp 47,633,716,500.77. The cost variance (CV) was negative (-) in months one through three and positive (+) from month four to the end of the project, as was the schedule variance. The forecast cost at completion, EAC (Estimate At Completion), is Rp 70,829,440,000.00, against a planned budget of Rp 72,391,666,414.54. The Estimated Completion Date (ECD) shows the project slightly ahead of the planned schedule, by 2 days.
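
    The variance and forecast quantities above follow the standard earned value formulas. A minimal sketch using the week-6 figures reported in the abstract; the CPI-based EAC formula shown is the common textbook one and an assumption here, so its output is not the study's own forecast:

        # Standard earned value metrics (week-6 figures from the abstract, in Rp).
        BCWS = 46_932_747_947.29   # planned value of work scheduled
        ACWP = 45_928_815_000.00   # actual cost of work performed
        BCWP = 47_633_716_500.77   # earned value of work performed
        BAC  = 72_391_666_414.54   # budget at completion (planned budget)

        CV  = BCWP - ACWP          # cost variance: positive means under budget
        SV  = BCWP - BCWS          # schedule variance: positive means ahead
        CPI = BCWP / ACWP          # cost performance index
        EAC = BAC / CPI            # CPI-based forecast of cost at completion

        print(f"CV  = {CV:,.2f}")   # 1,704,901,500.77 -> under budget at week 6
        print(f"SV  = {SV:,.2f}")   # 700,968,553.48   -> ahead of schedule
        print(f"EAC = {EAC:,.2f}")  # CPI-based forecast, not the study's figure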

    A Five-Year Retroactive Analysis of Cut Score Impact: California’s Proposed Supervised Provisional License Program

    A five-year cohort of 39,737 examinees who sat for the California Bar Exam (“CBX”) between 2014-18 was analyzed using a simulation model based on actual exam results to evaluate how the minimum passing scores (“cut scores”) of 1440, 1390, 1350, 1330, and 1300, if used as qualifying scores for a provisional licensing program, would affect the number of previous examinees, by race and ethnicity, who would qualify to participate within retroactive groupings of five-year, four-year, three-year, two-year, and one-year examinee cohorts. The results of the simulation models indicate that selecting a qualifying score lower than the current California cut score of 1390 would significantly increase both the overall number of eligible participants and the diversity of the group eligible to participate in the proposed alternate licensing program. This study follows an initial study of 85,727 examinees of the CBX from 2009-18, titled Examining the California Cut Score: An Empirical Analysis of Minimum Competency, Public Protection, Disparate Impact, and National Standards, which determined that maintaining a high cut score does not result in greater public protection as measured by disciplinary statistics, but does result in excluding minorities from admission to the bar and the practice of law at rates disproportionately higher than Whites.
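
    The simulation described amounts to re-thresholding stored examinee scores at each candidate cut score and tallying eligibility by group and cohort window. A minimal sketch with invented records; the field layout and all values are hypothetical:

        from collections import Counter

        # Hypothetical examinee records: (score, group, exam_year).
        examinees = [
            (1415, "White", 2017), (1362, "Black", 2016), (1388, "Hispanic", 2018),
            (1301, "Asian", 2015), (1445, "Black", 2014), (1334, "White", 2018),
        ]

        def eligible_by_group(records, cut_score, min_year):
            """Count examinees per group scoring at or above `cut_score`
            in `min_year` or later (the retroactive cohort window)."""
            return Counter(group for score, group, year in records
                           if score >= cut_score and year >= min_year)

        for cut in (1440, 1390, 1350, 1330, 1300):
            print(cut, dict(eligible_by_group(examinees, cut, min_year=2014)))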

    Examining the California Cut Score: An Empirical Analysis of Minimum Competency, Public Protection, Disparate Impact, and National Standards

    The selection of a minimum bar exam passing score (“cut score”) shapes the representation of racial and ethnic minorities in the legal profession and the quality of access to justice in the state. California and national policy makers have not had the benefit of detailed exam performance data that analyzes the effect of the cut score on race and ethnicity. Because policy makers consider the cut score an important public protection mechanism, this study also explored whether the selection of higher cut scores better protected the public from attorneys who do not have the minimum competence to practice law. To conduct the analysis, the study used two data sets. The first data set included 85,727 examinees who sat for 21 administrations of the CBX from 2009-18 and the race and ethnicity of each examinee. The second data set included the ABA discipline data from up to 48 U.S. jurisdictions from 2013-18 and the cut scores in each jurisdiction. Using the first data set, the study determined how the selection of a minimum cut score (1) widens or narrows the racial and ethnic impacts of the bar exam and/or (2) alters the racial and ethnic composition of new attorneys joining the legal profession. Both historical actual and simulated cut scores were analyzed. Using the second data set, this study examined a third factor: the relationship, if any, between minimum cut scores and rates of attorney discipline. This analysis determined that initial and eventual passing rates differed significantly between racial and ethnic groups, and this gap was wider at higher simulated cut scores. A simulation analysis using actual examinee scores confirmed that selecting a lower cut score would have significantly narrowed the achievement gap between Whites and racial and ethnic minorities and would have increased the number of newly admitted minority attorneys in California. For example, at 1440, the achievement gap between Whites and Blacks was 27.4 percentage points. But at a simulated cut score of 1300, the achievement gap between these two groups would have been only 14.5 percentage points. This 12.9 percentage point difference in the achievement gaps at 1440 and 1300 demonstrates the disparate effect of the higher cut score. Using the second data set on disciplinary statistics, the study determined that no relationship exists between the selection of a cut score and the number of complaints, formal charges, or disciplinary actions taken against attorneys in the jurisdictions studied. California’s recent decision to lower the cut score from 1440 to 1390 moved California from having the second-highest cut score to the fourth-highest cut score in the country. However, the report data established that at 1390 California will continue to produce significantly disparate pass rates on the basis of race and ethnicity when compared to the national norm of 1350, the New York standard of 1330, and the simulated model of 1300. This study establishes that maintaining a high cut score does not result in greater public protection as measured by disciplinary statistics, but does result in excluding minorities from admission to the bar and the practice of law at rates disproportionately higher than Whites.
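
    The achievement-gap arithmetic above is straightforward to reproduce: compute each group's pass rate at a simulated cut score and take the difference. A minimal sketch with invented scores (the study used 85,727 real records):

        # Hypothetical (score, group) pairs, for illustration only.
        scores = [(1452, "White"), (1388, "White"), (1367, "White"),
                  (1421, "Black"), (1299, "Black"), (1310, "Black")]

        def pass_rate(records, group, cut):
            in_group = [s for s, g in records if g == group]
            return sum(s >= cut for s in in_group) / len(in_group)

        for cut in (1440, 1390, 1300):
            gap = pass_rate(scores, "White", cut) - pass_rate(scores, "Black", cut)
            print(f"cut {cut}: White-Black pass-rate gap = {gap * 100:+.1f} pp")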

    Coarse-graining of cellular automata, emergence, and the predictability of complex systems

    We study the predictability of emergent phenomena in complex systems. Using nearest-neighbor, one-dimensional cellular automata (CA) as an example, we show how to construct local coarse-grained descriptions of CA in all classes of Wolfram's classification. The resulting coarse-grained CA that we construct are capable of emulating the large-scale behavior of the original systems without accounting for small-scale details. Several CA that can be coarse-grained by this construction are known to be universal Turing machines; they can emulate any CA or other computing device and are therefore undecidable. We thus show that because in practice one only seeks coarse-grained information, complex physical systems can be predictable and even decidable at some level of description. The renormalization group flows that we construct induce a hierarchy of CA rules. This hierarchy agrees well with apparent rule complexity and is therefore a good candidate for a complexity measure and a classification method. Finally, we argue that the large-scale dynamics of CA can be very simple, at least when measured by the Kolmogorov complexity of the large-scale update rule, and moreover exhibits a novel scaling law. We show that because of this large-scale simplicity, the probability of finding a coarse-grained description of CA approaches unity as one goes to increasingly coarser scales. We interpret this large-scale simplicity as a pattern-formation mechanism in which large-scale patterns are forced upon the system by the simplicity of the rules that govern the large-scale dynamics.
    Comment: 18 pages, 9 figures
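
    To make the emulation condition concrete: a block projection P is a valid coarse-graining of rule T when applying P after N fine steps equals one step of some coarse rule T', i.e. P(T^N(x)) = T'(P(x)). A brute-force sketch of that consistency check, restricted to nearest-neighbor coarse rules; it is illustrative rather than the paper's construction. For the additive rule 150 with a two-cell XOR projection, the check succeeds and recovers rule 150 at the coarse scale (derivable by hand from x_i(t+2) = x_{i-2} XOR x_i XOR x_{i+2}):

        import random

        def step(state, rule):
            """One synchronous update of an elementary CA on a ring."""
            n = len(state)
            table = [(rule >> i) & 1 for i in range(8)]
            return [table[4*state[i-1] + 2*state[i] + state[(i+1) % n]]
                    for i in range(n)]

        def find_coarse_rule(rule, proj, block=2, trials=200, width=32):
            """Build the coarse lookup table implied by random states; return
            None if P(T^block(x)) == T'(P(x)) cannot hold for any radius-1 T'."""
            coarse_table = {}
            for _ in range(trials):
                x = [random.randint(0, 1) for _ in range(width)]
                y = x
                for _ in range(block):       # advance the fine CA `block` steps
                    y = step(y, rule)
                px = [proj(x[i:i+block]) for i in range(0, width, block)]
                py = [proj(y[i:i+block]) for i in range(0, width, block)]
                m = len(px)
                for i in range(m):
                    key = (px[i-1], px[i], px[(i+1) % m])
                    if coarse_table.setdefault(key, py[i]) != py[i]:
                        return None          # projection fails to commute
            return coarse_table

        xor_block = lambda b: b[0] ^ b[1]    # projection: XOR of each 2-cell block
        print(find_coarse_rule(150, xor_block))  # consistent table: rule 150 again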