Episodic starbursts in dwarf spheroidal galaxies: a simple model
Dwarf galaxies in the Local Group appear to be stripped of their gas within
270 kpc of the host galaxy. Color-magnitude diagrams of these dwarfs, however,
show clear evidence of episodic star formation (\Delta{}t ~ a few Gyr) over
cosmic time. We present a simple model to account for this behaviour. Residual
gas within the weak gravity field of the dwarf experiences dramatic variations
in the gas cooling time around the eccentric orbit. This variation is due to
two main effects. The azimuthal compression along the orbit leads to an
increase in the gas cooling rate of order ([1+\epsilon]/[1-\epsilon])^2, where
\epsilon is the orbital eccentricity. The Galaxy's ionizing field declines as
1/R^2 for R > R_disk, although it reaches a floor at R ~ 150 kpc set by the
ionizing intensity of the extragalactic UV field. We
predict that episodic SF is mostly characteristic of dwarfs on moderately
eccentric orbits (\epsilon>0.2) that do not come too close to the centre
(R>R_disk) and do not spend their entire orbit far away from the centre (R>200
kpc). Up to 40% of early-infall dwarf spheroidals can be expected to have
already had at least one burst since the initial epoch of star formation, and
10% of these to have experienced a second burst. Such a model can
explain the timing of bursts in the Carina dwarf spheroidal and restrict the
orbit of the Fornax dwarf spheroidal. However, this model fails to explain why
some dwarfs, such as Ursa Minor, experience no burst post-infall. Comment: 8 pages, 8 figures. Accepted by ApJ.
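As a rough numerical illustration of the two effects quoted in this abstract, the sketch below evaluates the cooling-rate boost ([1+\epsilon]/[1-\epsilon])^2 and a 1/R^2 ionizing field with a floor near 150 kpc. It is a minimal Python example; the function names, the normalisation U_disk and the choice of R_disk = 15 kpc are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch of the two scalings quoted in the abstract above.
# R_disk, the normalisation U_disk and the function names are assumptions
# made for this example, not values taken from the paper.

def cooling_enhancement(eccentricity):
    """Approximate boost in the gas cooling rate from azimuthal compression,
    ~([1 + e] / [1 - e])^2 for an orbit of eccentricity e (0 <= e < 1)."""
    return ((1.0 + eccentricity) / (1.0 - eccentricity)) ** 2

def ionizing_field(R_kpc, R_disk_kpc=15.0, U_disk=1.0, R_floor_kpc=150.0):
    """Galactic ionizing intensity falling as 1/R^2 beyond R_disk, with a
    floor near 150 kpc where the extragalactic UV background takes over."""
    galactic = U_disk * (R_disk_kpc / max(R_kpc, R_disk_kpc)) ** 2
    floor = U_disk * (R_disk_kpc / R_floor_kpc) ** 2
    return max(galactic, floor)

for e in (0.2, 0.5, 0.8):
    print(f"eccentricity {e}: cooling-rate boost ~{cooling_enhancement(e):.2g}x")
for R in (20.0, 100.0, 200.0, 300.0):
    print(f"R = {R:.0f} kpc: relative ionizing intensity {ionizing_field(R):.2e}")
```

For \epsilon = 0.2, 0.5 and 0.8 the boost factor is roughly 2.3, 9 and 81, which is why only orbits of appreciable eccentricity produce the episodic behaviour described above.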
The Smith Cloud and its dark matter halo: Survival of a Galactic disc passage
The current velocity of the Smith Cloud indicates that it has undergone at
least one passage of the Galactic disc. Using hydrodynamic simulations we
examine the present day structure of the Smith Cloud. We find that a dark
matter supported cloud is able to reproduce the observed present day neutral
hydrogen mass, column density distribution and morphology. In this case the
dark matter halo becomes elongated, owing to the tidal interaction with the
Galactic disc. Clouds in models neglecting dark matter confinement are
destroyed upon disc passage, unless the initial cloud mass is well in excess of
what is observed today. We then determine integrated flux upper limits to the
gamma-ray emission around such a hypothesised dark matter core in the Smith
Cloud. No statistically significant core or extended gamma-ray emission is
detected, down to a 95% confidence level upper limit on the photon flux
(ph cm^-2 s^-1) in the 1-300 GeV energy range. For the derived distance of
12.4 kpc, the Fermi upper limits set the first tentative constraints on the
dark matter annihilation cross sections for a high-velocity cloud. Comment: 10 pages, 8 figures. Submitted to MNRAS.
Accretion of the Magellanic system onto the Galaxy
Our Galaxy is surrounded by a large family of dwarf galaxies, of which the most massive are the Large and Small Magellanic Clouds (LMC and SMC). Recent evidence suggests that systems with the mass of the Local Group accrete galaxies in smaller groups rather than individually. If so, at least some of the Galaxy's dwarfs may have fallen in with the LMC and SMC, having formed as part of the Magellanic system in the nearby universe. We use the latest measurements of the proper motions of the LMC and SMC and a multicomponent model of the Galactic potential to explore the evolution of these galaxy configurations under the assumption that the Magellanic system may once have contained a number of bound dwarf galaxies. We compare our results to the available kinematic data for the local dwarf galaxies, and examine whether this model can account for recently discovered stellar streams and the planar distribution of Milky Way satellites. We find that in situations where the LMC and SMC are bound to the Milky Way, the kinematics of Draco, Sculptor, Sextans, Ursa Minor, and the Sagittarius Stream are consistent with having fallen in along with the Magellanic system. These dwarfs, if so associated, will likely have been close to the tidal radius of the LMC originally and are unlikely to have affected each other throughout the orbit. However, there are clear cases, such as Carina and Leo I, that cannot be explained this way.
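The orbit modelling summarised above can be illustrated with a heavily simplified sketch: a single test particle integrated backwards in a spherical logarithmic halo, rather than the multicomponent Galactic potential and measured proper motions used in the paper. The halo model, circular speed and present-day coordinates below are toy assumptions.

```python
# Toy version of the kind of orbit calculation described above: one test
# particle integrated backwards in a spherical logarithmic halo. The halo
# model, circular speed and initial conditions are illustrative assumptions,
# not the paper's multicomponent potential or the measured LMC/SMC motions.
import numpy as np

KMS_TO_KPC_PER_GYR = 1.0227  # 1 km/s expressed in kpc/Gyr

def log_halo_accel(pos, v_c=220.0 * KMS_TO_KPC_PER_GYR):
    """Acceleration (kpc/Gyr^2) in a spherical halo with a flat rotation
    curve of circular speed v_c: a = -v_c^2 * r_vec / r^2."""
    return -v_c**2 * pos / np.dot(pos, pos)

def integrate_orbit(pos0, vel0, t_end_gyr=-6.0, n_steps=6000):
    """Leapfrog (kick-drift-kick) integration; a negative t_end integrates
    the orbit backwards in time."""
    dt = t_end_gyr / n_steps
    pos = np.asarray(pos0, dtype=float).copy()
    vel = np.asarray(vel0, dtype=float).copy()
    trajectory = [pos.copy()]
    acc = log_halo_accel(pos)
    for _ in range(n_steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = log_halo_accel(pos)
        vel += 0.5 * dt * acc
        trajectory.append(pos.copy())
    return np.array(trajectory)

# Round, made-up present-day position (kpc) and velocity (km/s).
pos_now = [0.0, -40.0, -30.0]
vel_now = np.array([-60.0, -220.0, 220.0]) * KMS_TO_KPC_PER_GYR
orbit = integrate_orbit(pos_now, vel_now)
print("Galactocentric distance now / 6 Gyr ago (kpc):",
      round(np.linalg.norm(orbit[0]), 1), "/", round(np.linalg.norm(orbit[-1]), 1))
```

A realistic treatment would add disc and bulge components, dynamical friction and the mutual LMC-SMC interaction; the sketch only shows the basic backward-integration machinery.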
The Trick Doesn't Work if You've Already Seen the Gorilla: How Anticipatory Effects Contaminate Pre-treatment Measures in Field Experiments
Objectives: When participants in experiments can anticipate the intervention, the study outcomes are said to be confounded. Ample evidence of intentional and unintentional interference with the stimuli suggests that participants tend to alter their response to the intervention prior to exposure to it; consequently, the measurement of post-treatment effects has been shown to be contaminated because there is a systematic interaction between measurement and treatment. However, an often-ignored consequence of such anticipatory effects is their impact on the baseline measure. If participants can anticipate the intervention early enough, the pretreatment scores become conditional on that anticipation, producing a biased estimate of the measure. We explore recent evidence on this bias and present a practical option for mitigating anticipatory effects.
Methods: A review of the literature across multiple disciplines addressing concerns regarding anticipatory effects.
Results: Pretreatment measures, especially of the dependent variables at their baseline values, can be contaminated by anticipatory effects. We show that the major concerns experimenters should consider in this context are: (1) When can we say that the treatment effect "commenced"? (2) What forms the pretesting measure? (3) Are anticipatory effects case-specific, or are there industry-wide, global anticipatory effects? (4) What can we conclude from studies whose pretest measures are affected by the anticipated treatment effect? and (5) What solutions are there for anticipatory effects?
Conclusions: We outline arguments against the fundamental hypothesis that pre-treatment measurements of baseline measures are unaffected by the study conditions. The implications of anticipatory effects for both research and policy are often ignored, which may lead to erroneous conclusions regarding the treatment's effectiveness, its benefits being underestimated, or both. The bias can be resolved by collecting "clean" baseline measures prior to the commencement of the anticipatory effects, but the first step is to be aware of their potential
Tracking Kidnappings in London: Offenders, Victims and Motives
Funder: University of Cambridge. Abstract: Research Question: What was the nature of kidnappings in London during a fairly recent 5-year period, in terms of the kinds of victims, offenders, motives, types of violence used and levels of injury? Data: We analyse 924 reports of kidnap crimes recorded by the Metropolitan Police Service between 1 April 2006 and 31 March 2011. These data included free text information drawn from case notes. Methods: We establish mutually exclusive categories of kidnappings by codifying all crime records, after examining case notes and populated fields from the Metropolitan Police's crime recording system. Descriptive statistics are used to portray the patterns and nature of these crimes. Findings: The application of a typology of mutually exclusive categories for these kidnappings shows that gangland/criminal/drugs-related cases comprised 40.5% of all kidnappings. Another 21% of all kidnaps were domestic or familial, including honour killings. Just over 10% were incidental to 'acquisitive' crimes such as car-jacking, whilst 8% were sexually motivated. Only 6% were categorised as traditional ransom kidnappings. About 4% were categorised into a purely violent category, whilst 3% were categorised as international/political. Conclusions: The investigative and preventive implications of these many social worlds mapped out by this typology are substantial. Each social context may require investigators to possess expertise in the specific social world of kidnapping, as distinct from what might be called expertise in 'kidnaps' per se. Investigations and prevention might be re-engineered around targeted intelligence from these diverse social contexts.
'Lowering the threshold of effective deterrence' - Testing the effect of private security agents in public spaces on crime: A randomized controlled trial in a mass transit system.
Supplementing local police forces is a burgeoning multibillion-dollar private security industry. Millions of formal surveillance agents in public settings are tasked to act as preventative guardians, as their high-visibility presence is hypothesized to create a deterrent threat to potential offenders. Yet rigorous evidence is lacking. We randomly assigned all train stations in the South West of England that experienced crime into treatment and control conditions over a six-month period. Treatment consisted of directed patrol by uniformed, unarmed security agents. Hand-held trackers on every agent yielded precise measurements of all patrol time in the stations. Count-based regression models, estimated marginal means and odds ratios are used to assess the effect of these patrols on crimes reported to the police by victims, as well as new crimes detected by police officers. Outcomes are measured both at specified target locations that security guards were instructed to attend and at the entire station complexes. Analyses show that 41% more patrol visits and 29% more minutes spent by security agents at treatment compared to control stations led to a significant 16% reduction in victim-generated crimes across the entirety of the stations' complexes, with a 49% increase in police-generated detections at the target locations. The findings illustrate the efficacy of private policing for crime prevention theory
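The abstract above refers to count-based regression models for station-level crime counts. The following is a minimal sketch, assuming a toy data frame with hypothetical column names (`crimes`, `treatment`); it is not the authors' analysis code, and the paper additionally reports estimated marginal means and odds ratios.

```python
# Minimal sketch of a count-based regression for station-level crime counts.
# The data frame, column names and toy numbers are hypothetical; the paper's
# actual model specification and covariates may differ.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy data: one row per station, crime count and a treatment indicator.
df = pd.DataFrame({
    "crimes":    [12, 8, 15, 6, 9, 14, 7, 11],
    "treatment": [0, 1, 0, 1, 1, 0, 1, 0],
})

# Poisson GLM with a log link: exp(coef) is the rate ratio of crimes at
# treatment versus control stations, so (1 - rate ratio) approximates the
# proportional reduction in crime associated with the patrols.
fit = smf.glm("crimes ~ treatment", data=df, family=sm.families.Poisson()).fit()
print(fit.summary())
print("Treatment rate ratio:", np.exp(fit.params["treatment"]))
```

Overdispersed counts would usually call for a negative binomial family (sm.families.NegativeBinomial()) in place of the Poisson shown here.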
Targeting the Most Harmful Offenders for an English Police Agency: Continuity and Change of Membership in the 'Felonious Few'
Funder: University of Cambridge
Abstract
Research Question
How concentrated is the total harm of offences with detected offenders (identified suspects) among the complete list of all detected offenders in a given year in an English police agency, and how consistent is the list of highest-harm 'felonious few' offenders from one year to the next?
Data
Characteristics of 327,566 crimes and 39,545 unique offenders as recorded by Northamptonshire Police in 7 years from 2010 to 2016 provide the basis for this analysis.
Methods
Crime and offender records were matched to harm weightings derived from the Cambridge Crime Harm Index (Sherman et al. 2016a; Sherman et al., Policing, 10(3), 171–183, 2016b). Descriptive statistics summarize the concentration of harm identifying the felonious few, changes over time in membership of the 'few', offender typologies, and tests for escalation of severity, frequency and intermittency across repeated offences.
Findings
Crime harm is much more concentrated among offenders than crime volume: 80% of crime harm that is identified to an offender is linked to a felonious few of just 7% of all detected offenders. While chronic repeat offenders are the majority contributors to harm totals of this group, those with the most general range of offence types contribute the most harm. Individual members of the felonious few rarely maintain that position year on year; over 95% of each year's list is composed of individuals not present in previous years. Within individual crime histories, we observe a pattern of de-escalation in crime harm per offence over time. 'One-time' offenders, those with just one crime record, typically made up a third of the felonious few in both number and harm contribution.
Conclusions
These findings demonstrate the potential to target a small number of repeat offenders for harm reduction strategies using a metric of total crime severity, not just volume, despite a substantial portion of crime harm being caused by one-time offenders, which may be largely unpredictable.
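The concentration finding above (80% of detected crime harm attributable to about 7% of detected offenders) is, in essence, a cumulative-share calculation over harm-weighted offender records. A minimal sketch with toy harm values (not Cambridge Crime Harm Index weights from the paper) follows.

```python
# Sketch of the harm-concentration calculation implied by the findings above.
# The per-offender harm values are toy numbers, not Cambridge Crime Harm
# Index weights from the paper.
import numpy as np

# Total crime harm attributed to each detected offender (toy data).
offender_harm = np.array([500, 300, 120, 60, 40, 20, 10, 10, 5, 5, 3, 2])

sorted_harm = np.sort(offender_harm)[::-1]            # highest-harm first
cum_share = np.cumsum(sorted_harm) / sorted_harm.sum()

# Smallest fraction of offenders who together account for >= 80% of all harm.
n_top = int(np.searchsorted(cum_share, 0.80)) + 1
print(f"{n_top / len(sorted_harm):.0%} of offenders account for "
      f"{cum_share[n_top - 1]:.0%} of total harm")
```

With real harm-weighted data, the same cumulative-share logic identifies the 'felonious few' for any chosen harm threshold.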
Failures in the "Laboratories of Democracy" and Democratic Due Process as Constitutional Guardrails
In Dobbs v. Jackson Women's Health Organization, the Supreme Court reversed decades of precedent supporting a substantive due process right to abortion under the Fourteenth Amendment, and purported to return the question of reproductive autonomy to the "democratic process" in the states. Justice Thomas, writing in concurrence, militated for reconsidering all of the Court's substantive due process precedents. In today's era of democratic backsliding, these are dangerous pronouncements with grave, if not existential, implications for democracy in the United States. The Dobbs majority hazardously asserted that state-level abortion legislation would, in fact, be the result of a democratic process. Further, because the law of democracy draws extensively from substantive due process, including where the Fourteenth Amendment "incorporates" textually enumerated constitutional rights against the states, the broader threat to substantive due process in the Dobbs majority opinion and Justice Thomas's concurrence is also a direct threat to democracy itself.
Although the literature on democracy and the literature on substantive due process are both individually voluminous, there is surprisingly limited scholarship focused specifically on both as interrelated topics. Building from democratic theory and John Hart Ely's political process theory of judicial review, this Article seeks to begin elaborating on the important connections between democracy and substantive due process that help explain the legal and practical importance of each to the other. In doing so, it also attempts to lay the groundwork for an approach to substantive due process rooted in the Federal Constitution's vision of democratic self-government for a diverse society.