Predictive and Prescriptive Analytics for Multi-Site Modeling of Frail and Elderly Patient Services
Recent research has highlighted the potential of linking predictive and
prescriptive analytics. However, it remains largely unexplored how the two
paradigms can benefit from one another to address today's major challenges in
healthcare. One of these is smarter planning of resource capacities for frail
and elderly inpatient wards, addressing the societal challenge of an aging
population. Frail and elderly patients typically suffer from multimorbidity and
require more care while receiving medical treatment. The aim of this research
is to assess how various predictive and prescriptive analytical methods, both
individually and in tandem, contribute to addressing the operational challenges
within an area of healthcare that is growing in demand. Clinical and
demographic patient attributes are gathered from more than 165,000 patient
records and used to explain and predict length of stay. To that end, we
employ Classification and Regression Trees (CART) analysis to establish this
relationship. On the prescriptive side, deterministic and two-stage stochastic
programs are developed to determine how to optimally plan for beds and ward
staff with the objective of minimizing cost. Furthermore, the two analytical
methodologies are linked by generating demand for the prescriptive models using
the CART groupings. The results show that the linked methodologies produced
results broadly similar to those based on averages, yet different in ways that
better captured the real-world variation in patient length of stay. Our
research suggests that healthcare managers should consider using predictive
and prescriptive models to make more informed decisions. By combining
predictive and prescriptive analytics, healthcare managers can move away from
relying on averages and incorporate the unique characteristics of their
patients to create more robust planning decisions, mitigating risks caused by
variations in demand.
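To make the linkage concrete, the sketch below shows one way such a pipeline could be wired together: a CART model groups patients into leaves with similar length of stay, leaf-level resampling generates demand scenarios, and a scenario-based two-stage capacity decision is solved by enumeration. This is a minimal illustration with synthetic data and invented cost figures, not the authors' actual model; all names and parameters are hypothetical.

```python
# Minimal sketch: link a CART length-of-stay model to scenario-based
# capacity planning. All data and cost figures are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Synthetic patient records: age, number of comorbidities -> length of stay (days)
X = np.column_stack([rng.integers(65, 95, 1000), rng.integers(0, 8, 1000)])
los = 2 + 0.15 * X[:, 0] + 1.5 * X[:, 1] + rng.exponential(3, 1000)

# Predictive step: CART groups patients into leaves with similar LOS.
cart = DecisionTreeRegressor(max_depth=3, min_samples_leaf=50).fit(X, los)
leaf_of = cart.apply(X)                      # leaf index per patient

# Prescriptive step: generate demand scenarios by resampling LOS from
# each CART leaf's empirical distribution (rather than one global mean).
def demand_scenario(n_arrivals=40):
    idx = rng.integers(0, len(X), n_arrivals)            # arriving patient mix
    sampled = [rng.choice(los[leaf_of == leaf_of[i]]) for i in idx]
    return sum(sampled)                                  # total bed-days needed

scenarios = np.array([demand_scenario() for _ in range(200)])

# Two-stage structure solved by enumeration: choose capacity now (stage 1),
# pay shortage/idle recourse costs once demand is revealed (stage 2).
c_bed, c_short, c_idle = 100.0, 400.0, 30.0
def expected_cost(cap):
    short = np.maximum(scenarios - cap, 0)
    idle = np.maximum(cap - scenarios, 0)
    return c_bed * cap + (c_short * short + c_idle * idle).mean()

caps = np.arange(scenarios.min(), scenarios.max() + 1)
best = caps[np.argmin([expected_cost(c) for c in caps])]
print(f"planned bed-day capacity: {best:.0f}")
```

Because the scenarios are drawn from leaf-level distributions rather than a single average, the chosen capacity hedges against the heavier tail of long-stay patient groups.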
Learning nonlinear monotone classifiers using the Choquet Integral
In recent years, learning predictive models that guarantee a monotone relationship between input and output variables has attracted growing attention in machine learning. Ensuring monotonicity is a major implementation challenge, especially for flexible nonlinear models. This thesis uses the Choquet integral as a mathematical foundation for developing new models for nonlinear classification tasks. Beyond its established role as a flexible aggregation function in multi-criteria decision making, the formalism thereby enters machine learning as an important modeling tool. In addition to reconciling monotonicity and flexibility in a mathematically elegant way, the Choquet integral offers means of quantifying interactions between groups of input attributes, which yields interpretable models. The thesis develops concrete methods for learning with the Choquet integral based on two different approaches, maximum-likelihood estimation and structural risk minimization. While the first approach leads to a generalization of logistic regression, the second is realized using support vector machines. In both cases, the learning problem essentially reduces to identifying the parameters of a fuzzy measure for the Choquet integral. The exponential number of degrees of freedom required to model all subsets of attributes poses particular challenges for runtime complexity and generalization performance. Against this background, both approaches are evaluated empirically and analyzed theoretically. Suitable methods for complexity reduction and model regularization are also proposed and investigated. The experimental results compare very well with state-of-the-art methods, even on demanding benchmark problems, and highlight the usefulness of combining the monotonicity and flexibility of the Choquet integral in different machine learning approaches
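As a concrete illustration of the central building block, the sketch below computes the discrete Choquet integral of a feature vector with respect to a hand-specified fuzzy measure (capacity). In the thesis the capacity is learned from data (by maximum likelihood or via support vector machines); here it is fixed by hand purely to show the monotone, nonlinear aggregation and how a super-additive capacity encodes attribute interaction. The example values are assumptions, not taken from the thesis.

```python
# Minimal sketch of the discrete Choquet integral with respect to a
# fuzzy measure (capacity) mu, stored as a dict from frozensets of
# attribute indices to values in [0, 1].

def choquet(x, mu):
    """Choquet integral of a nonnegative vector x w.r.t. capacity mu."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])   # indices in ascending value
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = frozenset(order[k:])           # attributes with value >= x[i]
        total += (x[i] - prev) * mu[coalition]
        prev = x[i]
    return total

# A capacity on 2 attributes with positive interaction: mu is monotone
# (A subset of B implies mu[A] <= mu[B]) and mu[full set] = 1.
mu = {
    frozenset(): 0.0,
    frozenset({0}): 0.2,
    frozenset({1}): 0.3,
    frozenset({0, 1}): 1.0,   # 1.0 > 0.2 + 0.3: the attributes reinforce each other
}

print(choquet([0.8, 0.4], mu))  # 0.48: monotone, nonlinear aggregation
```

In the maximum-likelihood approach described in the abstract, passing this aggregate through a logistic link yields the stated generalization of logistic regression.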
Density Preserving Sampling: Robust and Efficient Alternative to Cross-validation for Error Estimation
Estimation of the generalization ability of a classification or regression
model is an important issue, as it indicates the expected performance on
previously unseen data and is also used for model selection. Currently used
generalization error estimation procedures, such as cross-validation (CV) or
bootstrap, are stochastic and thus require multiple repetitions to produce
reliable results, which can be computationally expensive, if not prohibitive.
The correntropy-inspired density-preserving sampling (DPS) procedure proposed
in this paper eliminates the need to repeat the error estimation procedure by
dividing the available data into subsets that are guaranteed to be
representative of the input dataset. This allows the production of low-variance
error estimates with an accuracy comparable to 10-times-repeated CV at a
fraction of the computations required by CV. The method can also be used for
model ranking and selection. This paper derives the DPS procedure and
investigates its usability and performance using a set of public benchmark
datasets and standard classifiers
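The paper's exact splitting procedure is more involved, but the following much-simplified sketch conveys the density-preserving idea: pair each point with its nearest still-unassigned neighbour and send the pair to different halves, so both halves cover the same regions of input space; recursing on each half yields 2^k representative folds. This is an illustrative approximation of the idea only, not the published DPS algorithm.

```python
# Much-simplified illustration of density-preserving splitting: nearest
# still-unassigned neighbours are split across the two halves, so each
# half remains representative of the full data density.
import numpy as np
from scipy.spatial.distance import cdist

def density_preserving_split(X, rng):
    d = cdist(X, X)
    np.fill_diagonal(d, np.inf)
    unassigned = set(range(len(X)))
    half_a, half_b = [], []
    while len(unassigned) > 1:
        i = min(unassigned)                          # pick any unassigned point
        unassigned.remove(i)
        j = min(unassigned, key=lambda k: d[i, k])   # its nearest free neighbour
        unassigned.remove(j)
        half_a.append(i)
        half_b.append(j)
    if unassigned:                                   # odd leftover point
        (half_a if rng.random() < 0.5 else half_b).append(unassigned.pop())
    return np.array(half_a), np.array(half_b)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
a, b = density_preserving_split(X, rng)
# Recursing twice more on each half would give 2**3 = 8 representative
# folds: a deterministic, single-pass alternative to repeated CV splits.
print(len(a), len(b))
```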
Entropy Measures in Machine Fault Diagnosis: Insights and Applications
Entropy, as a complexity measure, has been widely applied for time series analysis. One preeminent example is the design of machine condition monitoring and industrial fault diagnostic systems.
The occurrence of failures in a machine typically leads to non-linear characteristics in the measurements, caused by instantaneous variations, which can increase the complexity of the system response. Entropy measures are well suited to quantifying such dynamic changes in the underlying process and to distinguishing between different system conditions.
However, notions of entropy are defined differently in various contexts (e.g., information theory and dynamical systems theory), which may confound researchers in the applied sciences. In this paper, we systematically review the theoretical development of some fundamental entropy measures and clarify the relations among them. Typical entropy-based applications in machine fault diagnostic systems are then summarized, and we explain where and how these measures can be useful in future data-driven fault diagnosis methodologies. Finally, potential research trends in this area are discussed, with the intent of improving online entropy estimation and expanding its applicability to a wider range of intelligent fault diagnostic systems
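As a small, self-contained example of the kind of measure surveyed here, the sketch below computes normalized permutation entropy, one common entropy measure in fault diagnosis, on a synthetic "healthy" (regular) and "faulty" (impulsive, noisy) vibration signal. The signals and the parameter choices (order m, delay tau) are illustrative assumptions, not data from the paper.

```python
# Minimal sketch: normalized permutation entropy of a 1-D signal.
# A faulty machine response (mimicked here by added noise) typically
# scores higher than a healthy, more regular one.
import math
from collections import Counter
import numpy as np

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy (0 = fully regular, 1 = fully random)."""
    patterns = Counter(
        tuple(np.argsort(x[i : i + m * tau : tau]))      # ordinal pattern
        for i in range(len(x) - (m - 1) * tau)
    )
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(m))               # normalize by log(m!)

rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 2000)
healthy = np.sin(t)                                      # regular vibration
faulty = np.sin(t) + 0.8 * rng.standard_normal(t.size)   # impulsive/noisy response

print(f"healthy: {permutation_entropy(healthy):.3f}")    # low complexity
print(f"faulty:  {permutation_entropy(faulty):.3f}")     # higher complexity
```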