How Satellites are Moving Beyond the Class System: Class Agnostic Development and Operations Approaches for Constraints-Driven Missions
Should we abolish the Class System? The Class A/B/C/D mission assurance and risk posture designations familiar to most satellite developers were established in 1986. They are used by both the Department of Defense (DoD) and National Aeronautics and Space Administration (NASA) to define risk and risk mitigation requirements for flight missions. However, many of today’s satellites are different – smaller, digitally engineered, designed for production, and increasingly destined for proliferated architectures. The rate of development is increasing while the uniqueness of the systems being built is decreasing.
The need to move faster, and the ability to utilize real product-line components in space for the first time, challenge the premise and assumptions behind the Class A through D designations. The traditional “Class System” is not as applicable to most small satellite developments, which instead focus on prioritizing the key, high-impact, agile processes in an effort to cut costs and timelines. Operating within this environment requires satellite developers to apply practices that are agnostic to class definition (i.e., the practices that are most fundamental to ensuring the mission meets its needs).
This paper outlines the Class Agnostic approach and constraints-based mission implementation practices. It will describe several real-life examples from Air Force Research Laboratory, Space and Missile Systems Center, and Space Rapid Capabilities Office missions that are applying a “class agnostic” approach. It will include lessons learned ranging from missions that failed critical Do No Harm requirements and lost a flight, to missions that have fully utilized the class agnostic approach. It will also discuss how several of these missions used class-agnostic techniques to balance requirements of scope, risk, cost, and schedule to maximize the chances of mission success within hard constraints. The approaches used in these missions are applicable not only to small satellites, but also to any mission intending to move beyond the “Class System” to a more agile and flexible mindset for risk mitigation and mission assurance.
Restricted Dynamic Programming Heuristic for Precedence Constrained Bottleneck Generalized TSP
We develop a restricted dynamic programming heuristic for a complicated traveling salesman problem: a) cities are grouped into clusters, resp. Generalized TSP; b) precedence constraints are imposed on the order of visiting the clusters, resp. Precedence Constrained TSP; c) the costs of moving to the next cluster and doing the required job inside one are aggregated in a minimax manner, resp. Bottleneck TSP; d) all the costs may depend on the sequence of previously visited clusters, resp. Sequence-Dependent TSP or Time-Dependent TSP. Such a multiplicity of constraints complicates the use of mixed integer linear programming, while dynamic programming (DP) benefits from them; the latter may be supplemented with a branch-and-bound strategy, which necessitates a “DP-compliant” heuristic. The proposed heuristic always yields a feasible solution, which is not always the case with heuristics, and its precision may be tuned until it coincides with the exact DP.
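A minimal sketch may make the restricted-DP idea concrete: partial tours are extended cluster by cluster exactly as in the full DP, but each layer is truncated to a fixed number of the most promising states. All names and the cost-function signature below are illustrative assumptions rather than the paper's notation.

```python
def restricted_dp(clusters, cost, predecessors, beam_width):
    """Beam-restricted DP sketch for a precedence-constrained bottleneck
    generalized TSP (illustrative, not the paper's algorithm).

    clusters     : list of cluster ids; the tour starts at clusters[0]
    cost(seq, c) : aggregated (move + in-cluster job) cost of appending
                   cluster c to the partial sequence seq
    predecessors : dict mapping a cluster to the clusters that must precede it
    beam_width   : number of partial tours kept per layer
    """
    start = clusters[0]
    # a DP state: (bottleneck value so far, visited clusters, sequence)
    layer = [(0, frozenset([start]), (start,))]
    for _ in range(len(clusters) - 1):
        candidates = []
        for bottleneck, visited, seq in layer:
            for c in clusters:
                if c in visited or not predecessors.get(c, set()) <= visited:
                    continue  # already visited, or precedence not yet satisfied
                new_b = max(bottleneck, cost(seq, c))  # minimax aggregation
                candidates.append((new_b, visited | {c}, seq + (c,)))
        if not candidates:
            raise ValueError("beam ran into an infeasible corner")
        # restriction step: keep only the most promising partial tours
        candidates.sort(key=lambda state: state[0])
        layer = candidates[:beam_width]
    return min(layer, key=lambda state: state[0])  # (value, visited, tour)
```

Choosing beam_width large enough to keep every state recovers the exact DP, which is the sense in which the heuristic's precision can be tuned.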
Defining Strong State Accountability Systems: How Can Better Standards Gain Greater Traction?
This report is a pilot study intended to inform a larger analysis of the accountability systems in every state (and the District of Columbia) during the early years of Common Core implementation. We ask that the reader treat it as such and provide us with feedback on the accountability principles contained herein. We plan to apply these principles, once revised, to all fifty state accountability systems in order to appraise their quality. Our first national report is slated for early 2013, with follow-up studies two and four years later. Tracking systems in this manner will prove beneficial because many states will be in "flux" over the next several years as they refine and adapt their systems based on the demands of the Common Core and on the plans and promises outlined in their recently approved waivers (and/or those provisions detailed by ESEA reauthorization legislation—assuming Congress one day gets its act together).

Fordham is also conducting three other studies pertinent to CCSS implementation. The first is an analysis of Common Core implementation costs; the second, an in-depth study of district-level implementation of CCSS; and the third, a nationally representative survey of English language arts teachers that assesses the rigor of their reading assignments both before and after implementation of CCSS (summer 2012 and spring 2015).
Law’s Entities: Complexity, Plasticity and Justice
In the early twenty-first century, and looking beyond it, the landscapes of law’s operation are characterised by a growing degree of complexity and pressure. Law is called upon to coordinate relations in a world facing significant complexities produced by a convergence of bio-technological developments capable of transforming the very conditions of life itself, climate-change pressures and the threat of the collapse of bio-diversity and eco-systems, and intensifying global inter-dependencies that deepen vulnerability across a whole set of scales and measures.
A Framework for Finding Anomalous Objects at the LHC
Searches for new physics at the LHC mostly rely on the assumption that events can be characterized in terms of standard-reconstructed objects such as isolated photons, leptons, and jets initiated by QCD partons. While such a strategy works for the vast majority of beyond-the-standard-model scenarios, there are examples aplenty where new physics gives rise to anomalous objects in the detectors (such as collimated and equally energetic particles, or decays due to long-lived particles) which cannot be classified as any of the standard-objects. Varied methods and search strategies have been proposed, each of which is trained and optimized for specific models, topologies, and model parameters. Further, as the LHC keeps excluding all expected candidates for new physics, the need for a generic method or tool capable of finding the unexpected cannot be overstated. In this paper, we propose one such method, which relies on the philosophy that an anomalous object is simply any object that cannot be classified as a standard-object. The anomaly finder we suggest is a collection of vetoes that eliminate all standard-objects up to a pre-determined acceptance rate. Any event containing at least one anomalous object (one that passes all these vetoes) can be identified as a candidate for new physics. Subsequent offline analyses can determine the nature of the anomalous object as well as of the event, paving a robust way to search for these new physics scenarios in a model-independent fashion. Further, since the method relies on learning only the standard-objects, for which control samples are readily available from data, the analysis can be built in an entirely data-driven way.

Comment: 32 pages, 5 tables, and 12 figures; accepted for publication in Nuclear Physics B.
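As a rough illustration of the veto-based construction (a sketch under simplifying assumptions: each veto is a simple cut on a per-object score measuring compatibility with one standard-object hypothesis, with the cut tuned on a data control sample; none of the names below come from the paper):

```python
import numpy as np

def build_veto(standard_control_sample, score_fn, acceptance_rate):
    """Build one veto from a control sample of standard-objects taken from data.

    score_fn maps an object to a standard-likeness score; the threshold is
    placed so that only a fraction `acceptance_rate` of genuine
    standard-objects survives (i.e. is not eliminated by) this veto.
    """
    scores = np.array([score_fn(obj) for obj in standard_control_sample])
    threshold = np.quantile(scores, acceptance_rate)
    # veto(obj) is True when the object looks standard and is eliminated
    return lambda obj: score_fn(obj) >= threshold

def is_anomalous(obj, vetoes):
    """An object is flagged anomalous only if it escapes every veto."""
    return all(not veto(obj) for veto in vetoes)

def candidate_events(events, vetoes):
    """Events containing at least one anomalous object are new-physics candidates."""
    return [event for event in events if any(is_anomalous(obj, vetoes) for obj in event)]
```

The anomalous objects selected this way would then be handed to the offline analyses mentioned above to characterize the object and the event.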
Interpretable Machine Learning Model for Clinical Decision Making
Despite machine learning models being increasingly used in medical decision-making and meeting classification predictive accuracy standards, they remain untrusted black boxes due to decision-makers' lack of insight into their complex logic. Therefore, it is necessary to develop interpretable machine learning models that will engender trust in the knowledge they generate and contribute to clinical decision-makers' intention to adopt them in the field.
The goal of this dissertation was to systematically investigate the applicability of interpretable model-agnostic methods to explain predictions of black-box machine learning models for medical decision-making. As proof of concept, this study addressed the problem of predicting the risk of emergency readmission within 30 days of discharge for heart failure patients. Using a benchmark data set, supervised classification models of differing complexity were trained to perform the prediction task. More specifically, Logistic Regression (LR), Random Forest (RF), Decision Tree (DT), and Gradient Boosting Machine (GBM) models were constructed using the Healthcare Cost and Utilization Project (HCUP) Nationwide Readmissions Database (NRD). The precision, recall, and area under the ROC curve for each model were used to measure predictive accuracy. Local Interpretable Model-Agnostic Explanations (LIME) was used to generate explanations from the underlying trained models. LIME explanations were empirically evaluated using explanation stability and local fit (R²).
The results demonstrated that the local explanations generated by LIME provided better estimates for the Decision Tree (DT) classifiers.
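A minimal, self-contained sketch of the LIME step described above is given below; the synthetic features stand in for the HCUP NRD variables, and all names are illustrative rather than the dissertation's code or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for the readmission data set (illustrative only).
rng = np.random.default_rng(0)
feature_names = ["age", "length_of_stay", "num_diagnoses", "num_procedures"]
X_train = rng.normal(size=(500, len(feature_names)))
y_train = (X_train[:, 1] + 0.5 * X_train[:, 2] + rng.normal(size=500) > 0).astype(int)

# One of the black-box models under evaluation (a Random Forest here).
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    training_data=X_train,
    feature_names=feature_names,
    class_names=["no readmission", "readmitted <30 days"],
    mode="classification",
)

# Explain one patient's predicted readmission risk with a local surrogate model.
exp = explainer.explain_instance(X_train[0], model.predict_proba, num_features=4)
print(exp.as_list())  # per-feature contributions to this local prediction
print(exp.score)      # local fit (R^2) of the surrogate, one of the metrics above
```

Explanation stability can then be assessed by repeating explain_instance on the same row and comparing the returned feature weights across runs.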
This article presents a novel centrality-driven gateway designation framework for improving the real-time performance of low-power wireless sensor networks (WSNs) at system design time. We target time-synchronized channel hopping (TSCH) WSNs with centralized network management and multiple gateways, with the objective of enhancing traffic schedulability by design. To this aim, we propose a novel network centrality metric, termed minimal-overlap centrality, that characterizes the overall number of path overlaps between all the active flows in the network when a given node is selected as gateway. The metric is used as a gateway designation criterion: the node leading to the minimal number of overlaps is elected as gateway. The method is then extended to multiple gateways with the aid of the unsupervised learning method of spectral clustering. Concretely, after a given number of clusters is identified, we use the new metric within each cluster to designate as cluster gateway the node with the least overall number of overlaps. Extensive simulations with random topologies under centralized earliest-deadline-first (EDF) scheduling and shortest-path routing suggest that our approach dominates traditional centrality metrics from social network analysis, namely eigenvector, closeness, betweenness, and degree centrality. Notably, our approach reduces by up to 40% the worst-case end-to-end deadline misses achieved by classical centrality-driven gateway designation methods.

This work was partially supported by National Funds through FCT/MCTES (Portuguese Foundation for Science and Technology), within the CISTER Research Unit (UIDB/04234/2020); by the Operational Competitiveness Programme and Internationalization (COMPETE 2020) under the PT2020 Agreement, through the European Regional Development Fund (ERDF); and by FCT and the ESF (European Social Fund) through the Regional Operational Programme (ROP) Norte 2020, under PhD grant 2020.06685.BD.
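A small, hedged sketch of the core selection rule is given below, under simplifying assumptions: shortest-path routing, one flow per source node toward the candidate gateway, a single gateway, and overlap counted as undirected links shared by pairs of flow paths. Function names and the topology generator are illustrative, not the article's implementation.

```python
from itertools import combinations
import networkx as nx

def path_overlaps(graph, gateway, sources):
    """Total pairwise link overlaps among the flows routed towards `gateway`."""
    paths = [nx.shortest_path(graph, src, gateway) for src in sources]
    link_sets = [{frozenset(link) for link in zip(p, p[1:])} for p in paths]
    return sum(len(a & b) for a, b in combinations(link_sets, 2))

def minimal_overlap_gateway(graph, sources):
    """Designate as gateway the node that yields the fewest path overlaps."""
    return min(
        graph.nodes,
        key=lambda g: path_overlaps(graph, g, [s for s in sources if s != g]),
    )

# Toy usage on a random (connected) topology.
G = nx.connected_watts_strogatz_graph(30, 4, 0.3, seed=1)
flow_sources = list(G.nodes)[:10]
print("designated gateway:", minimal_overlap_gateway(G, flow_sources))
```

The multi-gateway extension described above would first partition the nodes (e.g. via spectral clustering) and then apply the same rule within each cluster.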