
    On solving trust-region and other regularised subproblems in optimization

    The solution of trust-region and regularisation subproblems which arise in unconstrained optimization is considered. Building on the pioneering work of Gay, Moré and Sorensen, methods which obtain the solution of a sequence of parametrized linear systems by factorization are used. Enhancements using high-order polynomial approximation and inverse iteration ensure that the resulting method is both globally and asymptotically at least superlinearly convergent in all cases, including in the notorious hard case. Numerical experiments validate the effectiveness of our approach. The resulting software is available as packages TRS and RQS as part of the GALAHAD optimization library, and is especially designed for large-scale problems.
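
    As a rough illustration of the factorization-based approach described here (a minimal sketch of the classical Moré-Sorensen iteration, not the paper's TRS/RQS implementation), the code below solves a sequence of shifted systems (H + λI)p = -g by Cholesky factorization and updates the shift with a Newton step on the secular equation. It omits the safeguarding, high-order polynomial acceleration and hard-case handling that the abstract highlights.

        import numpy as np

        def trust_region_subproblem(H, g, delta, tol=1e-8, max_iter=100):
            # Minimise g.T p + 0.5 p.T H p subject to ||p|| <= delta by
            # solving (H + lam*I) p = -g for a sequence of shifts lam >= 0.
            n = len(g)
            # Start with a shift that makes H + lam*I safely positive definite.
            lam = max(0.0, -np.linalg.eigvalsh(H)[0] + 1e-8)
            for _ in range(max_iter):
                L = np.linalg.cholesky(H + lam * np.eye(n))
                p = np.linalg.solve(L.T, np.linalg.solve(L, -g))
                norm_p = np.linalg.norm(p)
                if lam == 0.0 and norm_p <= delta:
                    return p  # unconstrained minimiser lies inside the region
                if abs(norm_p - delta) <= tol * delta:
                    return p  # boundary solution found
                # Newton step on the secular equation 1/delta - 1/||p|| = 0
                # (the hard case, where this stalls, is not handled here).
                w = np.linalg.solve(L, p)
                lam = max(0.0, lam + (norm_p - delta) / delta * norm_p**2 / np.dot(w, w))
            return p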

    The Trouble With Interpreting Statistically Nonsignificant Effect Sizes in Single-Study Investigations

    In this commentary, we offer a perspective on the problem of authors reporting and interpreting effect sizes in the absence of formal statistical tests of their “chanceness.” The perspective reinforces our previous distinction between single-study investigations and multiple-study syntheses.

    Consistent Federal Education Reform Failure: A Case Study of Nebraska from 2010-2019

    Beginning with the Elementary and Secondary Education Act (ESEA) of 1965, the federal government has been trying unsuccessfully to fix education and raise the performance of poor students. The more recent reauthorizations of ESEA, No Child Left Behind (2001) and the Every Student Succeeds Act (ESSA) (2015), hold teachers and administrators accountable for student performance and simply shame the underperforming schools. These continued unsuccessful efforts have ignored the warning in the 1966 Coleman Report that schooling has relatively little influence on student achievement compared to factors outside of school. In this paper, we focus on the state of Nebraska’s responses to the latest federal initiatives designed to improve the “lowest performing” schools. Data provided by the Nebraska Department of Education for 2010-2019 revealed no improvements whatsoever. It is time taxpayers demanded accountability for insidious decisions that have cost billions of dollars, cost too many teachers and administrators their jobs, shamed high-poverty schools, and produced nothing to show for it.

    A comparison of the stability and performance of depth-integrated ice-dynamics solvers

    In the last decade, the number of ice-sheet models has increased substantially, in line with the growth of the glaciological community. These models use solvers based on different approximations of ice dynamics. In particular, several depth-integrated dynamics solvers have emerged as fast solvers capable of resolving the relevant physics of ice sheets at the continental scale. However, the numerical stability of these schemes has not been studied systematically to evaluate their effectiveness in practice. Here we focus on three such solvers, the so-called Hybrid, L1L2-SIA and DIVA solvers, as well as the well-known SIA and SSA solvers as boundary cases. We investigate the numerical stability of these solvers as a function of grid resolution and the state of the ice sheet for an explicit time discretization scheme of the mass conservation step. Under simplified conditions with constant viscosity, the maximum stable time step of the Hybrid solver, like the SIA solver, has a quadratic dependence on grid resolution. In contrast, the DIVA solver has a maximum time step that is independent of resolution as the grid becomes increasingly refined, like the SSA solver. A simple 1D implementation of the L1L2-SIA solver indicates that it should behave similarly, but in practice, the complexity of its implementation appears to restrict its stability. In realistic simulations of the Greenland Ice Sheet with a nonlinear rheology, the DIVA and SSA solvers maintain superior numerical stability, while the SIA, Hybrid and L1L2-SIA solvers show markedly poorer performance. At a grid resolution of Δx = 4 km, the DIVA solver runs approximately 20 times faster than the Hybrid and L1L2-SIA solvers as a result of a larger stable time step. Our analysis shows that as resolution increases, the ice-dynamics solver can act as a bottleneck to model performance. The DIVA solver thus stands out in terms of both model performance and its representation of the ice-flow physics.
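
    To make the scaling argument concrete, the toy Python comparison below contrasts the two stability regimes described above: a diffusive, SIA-type limit on the explicit time step shrinks quadratically with Δx, while a membrane-stress, SSA/DIVA-type limit stays roughly constant under refinement. The diffusivity and the constant cap are assumed values chosen for illustration, not numbers from the paper.

        # Illustrative stability limits; D and DT_MEMBRANE are assumed values.
        D = 1.0e7            # effective ice diffusivity, m^2/yr (assumed)
        DT_MEMBRANE = 10.0   # resolution-independent limit, yr (assumed)

        for dx in (16e3, 8e3, 4e3, 2e3, 1e3):   # grid spacing in metres
            dt_sia = dx**2 / (2.0 * D)          # explicit-diffusion limit: ~ dx^2
            dt_diva = DT_MEMBRANE               # roughly constant as dx shrinks
            print(f"dx = {dx/1e3:4.1f} km: dt_SIA ~ {dt_sia:7.2f} yr, "
                  f"dt_DIVA ~ {dt_diva:4.1f} yr")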

    Oxygen minimum zone: An important oceanographic habitat for deep-diving northern elephant seals, Mirounga angustirostris.

    Little is known about the foraging behavior of top predators in the deep mesopelagic ocean. Elephant seals dive to the deep, biota-poor oxygen minimum zone (OMZ) (>800 m depth) despite high diving costs in terms of energy and time, but how they successfully forage in the OMZ remains largely unknown. Assessment of their feeding rate is the key to understanding their foraging behavior, but this has been challenging. Here, we assessed the feeding rate of 14 female northern elephant seals, determined from jaw motion events (JME) and dive cycle time, to examine how feeding rates varied with dive depth, particularly in the OMZ. We also obtained footage from seal-mounted video cameras to understand their feeding in the OMZ. While a diel vertical migration pattern was apparent in the depths of most JME, some very deep dives, beyond the normal diel depth ranges, occurred episodically during daylight hours. The midmesopelagic zone was the main foraging zone for all seals. Larger seals tended to show smaller numbers of JME and lower feeding rates than smaller seals during migration, suggesting that larger seals tended to feed on larger prey to satisfy their metabolic needs. Larger seals also dived frequently to the deep OMZ, possibly because of a greater diving ability than smaller seals, suggesting their dependency on food in the deeper depth zones. Video observations showed that seals encountered the rarely reported ragfish (Icosteus aenigmaticus) in the depths of the OMZ, which failed to show an escape response from the seals, suggesting that low oxygen concentrations might reduce prey mobility. Less mobile prey in the OMZ would enhance the efficiency of foraging in this zone, especially for large seals that can dive deeper and longer. We suggest that the OMZ plays an important role in structuring the mesopelagic ecosystem and for the survival and evolution of elephant seals.
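
    The feeding-rate metric used above (jaw motion events per unit of dive cycle time) reduces to simple arithmetic; the sketch below uses invented numbers, not the study's data, purely to show the computation.

        # Feeding rate as jaw motion events (JME) per dive cycle.
        # All numbers below are invented for illustration.
        dives = [
            {"jme": 25, "dive_s": 1200, "surface_s": 150},  # mesopelagic dive
            {"jme": 8,  "dive_s": 1800, "surface_s": 180},  # deep OMZ dive
        ]

        for d in dives:
            cycle_h = (d["dive_s"] + d["surface_s"]) / 3600.0  # dive cycle, hours
            rate = d["jme"] / cycle_h                          # JME per hour
            print(f"{d['jme']:3d} JME over {cycle_h:.2f} h -> {rate:5.1f} JME/h")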

    Extralegal Punishment Factors: A Study of Forgiveness, Hardship, Good-Deeds, Apology, Remorse, and Other Such Discretionary Factors in Assessing Criminal Punishment

    The criminal law’s formal criteria for assessing punishment are typically contained in criminal codes, the rules of which fix an offender’s liability and the grade of the offense. A look at how the punishment decision-making process actually works, however, suggests that courts and other decisionmakers frequently go beyond the formal legal factors and take account of what might be called extralegal punishment factors (XPFs). XPFs, the subject of this Article, include matters as diverse as an offender’s apology, remorse, history of good or bad deeds, public acknowledgment of guilt, special talents, old age, extralegal suffering from the offense, as well as forgiveness or outrage by the victim, and special hardship of the punishment for the offender or his family. Such XPFs can make a difference at any point in the criminal justice process at which decisionmakers exercise discretion, such as when prosecutors decide what charge to press, when judges decide which sentence to impose, when parole boards decide when to release a prisoner, and when executive officials decide whether to grant clemency, as well as in less-visible exercises of discretion, such as in decisions by police officers and trial jurors. After a review of the current use and rationales behind eighteen common XPFs in Part I, the Article reports in Part II the results of an empirical study of lay intuitions regarding the propriety of taking such factors into account in adjusting the punishment that otherwise would be imposed, the extent of any adjustment to be made, and an assessment of how these views might change with different kinds of offenses and how they might vary with demographic factors. Part III examines the implications of the study findings for current law and practice, with special attention to the problem of disparity in application that is invited by the high levels of disagreement on the proper role of some XPFs, and the problem of conflicts between lay intuitions and current law and practice. It is not uncommon to find strong support for reliance upon XPFs that current practice ignores, and little support for reliance upon XPFs upon which current practice commonly relies.

    Competing Theories of Blackmail: An Empirical Research Critique of Criminal Law Theory

    Blackmail, a wonderfully curious offense, is the favorite of clever criminal law theorists. It criminalizes the threat to do something that would not be criminal if one did it. There exists a rich literature on the issue, with many prominent legal scholars offering their accounts. Each theorist has his own explanation as to why the blackmail offense exists. Most theories seek to justify the position that blackmail is a moral wrong and claim to offer an account that reflects widely shared moral intuitions. But the theories make widely varying assertions about what those shared intuitions are, while also lacking any evidence to support the assertions. This Article summarizes the results of an empirical study designed to test the competing theories of blackmail to see which best accords with prevailing sentiment. Using a variety of scenarios designed to isolate and test the various criteria different theorists have put forth as “the” key to blackmail, this study reveals which (if any) of the various theories of blackmail proposed to date truly reflects laypeople’s moral judgment. Blackmail is not only a common subject of scholarly theorizing, but also a common object of criminal prohibition. Every American jurisdiction criminalizes blackmail, although there is considerable variation in its formulation. The Article reviews the American statutes and describes the three general approaches these provisions reflect. The empirical study of lay intuitions also allows an assessment of which of these statutory approaches (if any) captures the community’s views, thereby illuminating the extent to which existing law generates results that resonate with, or deviate from, popular moral sentiment. The analyses provide an opportunity to critique the existing theories of blackmail and to suggest a refined theory that best expresses lay intuitions. The present project also reveals the substantial conflict between community views and much existing legislation, pointing toward recommendations for legislative reform. Finally, the Article suggests lessons that such studies and their analyses offer for criminal law and theory.

    Cellular expression and crystal structure of the murine cytomegalovirus MHC-Iv glycoprotein, m153

    Mouse cytomegalovirus (MCMV), a β-herpesvirus that establishes latent and persistent infections in mice, is a valuable model for studying complex virus-host interactions. MCMV encodes the m145 family of putative immunoevasins with predicted MHC-I structure. Functions attributed to some family members include downregulation of host MHC-I (m152) and NKG2D ligands (m145, m152, m155) and interaction with inhibitory or activating NK receptors (m157). We present the cellular, biochemical and structural characterization of m153, a heavily glycosylated homodimer that requires neither β2m nor peptide and is expressed at the surface of MCMV-infected cells. Its 2.4 Å crystal structure confirms that this compact molecule preserves an MHC-I-like fold and reveals a novel mode of dimerization, confirmed by site-directed mutagenesis, and a distinctive disulfide-stabilized extended amino terminus. The structure provides a useful framework for comparative analysis of the divergent members of the m145 family.

    Shuttle Shortfalls and Lessons Learned for the Sustainment of Human Space Exploration

    Much debate and national soul searching have taken place over the value of the Space Shuttle, which first flew in 1981 and which is currently scheduled to be retired in 2010. Originally developed after Saturn Apollo to emphasize affordability and safety, the reusable Space Shuttle instead came to be perceived as economically unsustainable and lacking the technology maturity to assure safe, routine access to low earth orbit (LEO). After the loss of two crews, aboard Challenger and Columbia, followed by the decision to retire the system in 2010, it is critical that these three decades' worth of human space flight experience be well understood. Understanding of the past is imperative to further those goals for which the Space Shuttle was a stepping-stone in the advancement of knowledge. There was a significant reduction in life cycle costs between Saturn Apollo and the Space Shuttle; however, that reduction fell far short of its goal. This paper will explore the reasons for this shortfall. Shortfalls and lessons learned can be categorized as related to design factors, at the architecture, element and sub-system levels, as well as to programmatic factors, in terms of goals, requirements, management and organization. Additionally, no review of the Space Shuttle program and attempt to take away key lessons would be complete without a strategic review. That is, how do national space goals drive future space transportation development strategies? The lessons of the Space Shuttle are invaluable in all respects - technical, as in design; programmatic, as in organizational approach and goal setting; and strategic, within the context of the generational march toward an expanded human presence in space. Beyond lessons, though (and the innumerable papers, anecdotes and opinions published on this topic), this paper traces tangible, achievable steps, derived from the Space Shuttle program experience, that must be a part of any 21st century initiatives furthering a growing human presence beyond earth.

    The Need for Technology Maturity of Any Advanced Capability to Achieve Better Life Cycle Cost (LCC)

    Programs such as space transportation systems are developed and deployed only rarely, and they have long development schedules and large development and life cycle costs (LCC). Historically, their LCC has not been predicted well, and cost control has been attempted only for the DDT&E phase. One of the factors driving the predictability, and thus control, of the LCC of a program is the maturity of the technologies incorporated in the program. If the technologies incorporated are less mature (as measured by their Technology Readiness Level - TRL), then the LCC not only increases but the degree of increase is difficult to predict. Consequently, new programs avoid incorporating technologies unless they are quite mature, generally TRL greater than or equal to 7 (system prototype demonstrated in a space environment), to allow better predictability of the DDT&E phase costs, unless there is no alternative. On the other hand, technology development programs rarely develop technologies beyond TRL 6 (system/subsystem model or prototype demonstrated in a relevant environment). Currently, the lack of development funds beyond TRL 6 and the major funding required for full-scale development leave little or no funding available to prototype TRL 6 concepts so that hardware would be ready for safe, reliable and cost-effective incorporation. The net effect is that each new program either incorporates little new technology or has longer development schedules and costs, and higher LCC, than planned. This paper presents methods to ensure that advanced technologies are incorporated into future programs while providing greater accuracy in predicting their LCC. One method is having a dedicated organization to develop X-series vehicles or separate prototypes carried on other vehicles. The question of whether such an organization should be independent of NASA and/or have an independent funding source is discussed. Other methods are also discussed, as is how to choose which technologies to pursue to the prototype level, since achieving better LCC depends first on selecting the appropriate technologies.
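
    The funding-gap logic above reduces to a simple threshold rule; the sketch below restates it in code as a hypothetical illustration: the thresholds follow the TRL definitions quoted in the abstract, while the constant and function names are invented.

        # TRL 6: system/subsystem model or prototype demonstrated in a
        #        relevant environment (where technology programs typically stop).
        # TRL 7: system prototype demonstrated in a space environment
        #        (the maturity flight programs generally require).
        TECH_PROGRAM_CEILING = 6
        FLIGHT_PROGRAM_FLOOR = 7

        def assess_maturity(trl: int) -> str:
            # Hypothetical helper mirroring the abstract's argument.
            if trl >= FLIGHT_PROGRAM_FLOOR:
                return "mature enough for predictable incorporation"
            if trl == TECH_PROGRAM_CEILING:
                return "in the funding gap: needs prototyping to reach TRL 7"
            return "immature: incorporation risks unpredictable LCC growth"

        for trl in (5, 6, 7):
            print(f"TRL {trl}: {assess_maturity(trl)}")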