
    A comparison of reimbursement recommendations by European HTA agencies: Is there opportunity for further alignment?

    Introduction: In Europe and beyond, the rising costs of healthcare and limited healthcare resources have resulted in the implementation of health technology assessment (HTA) to inform health policy and reimbursement decision-making. European legislation has provided a harmonized route for the regulatory process through the European Medicines Agency (EMA), but reimbursement decision-making remains the responsibility of each country. There is a recognized need to move toward a more objective and collaborative reimbursement environment for new medicines in Europe. Therefore, the aim of this study was to objectively assess and compare the national reimbursement recommendations of 9 European jurisdictions following EMA recommendation for centralized marketing authorization. Methods: Using publicly available data and newly developed classification tools, this study appraised 9 European reimbursement systems by assessing HTA processes and the relationships between the regulatory, HTA, and decision-making organizations. Each national HTA agency was classified according to two novel taxonomies. The System taxonomy focuses on the position of the HTA agency within the national reimbursement system, according to the relationship between the regulator, the HTA-performing agency, and the coverage decision-making body. The HTA Process taxonomy distinguishes between each HTA agency's approach to economic and therapeutic evaluation and the inclusion of an independent appraisal step. The taxonomic groups were subsequently compared with national HTA recommendations. Results: This study identified European national reimbursement recommendations for 102 new active substances (NASs) approved by the EMA from 2008 to 2012. These reimbursement recommendations were compared using a novel classification tool, which identified alignment between the organizational structure of reimbursement systems (the System taxonomy) and HTA recommendations.
However, there was less alignment between the HTA processes and recommendations. Conclusions: To move toward a more harmonized HTA environment within Europe, it is first necessary to understand the variation in HTA practices across Europe. This study identified alignment between HTA recommendations and the System taxonomy; a major implication is that such alignment could support a more collaborative HTA environment in Europe.

    User-centered visual analysis using a hybrid reasoning architecture for intensive care units

    One problem pertaining to intensive care unit information systems is that, in some cases, a very dense display of data can result. To ensure the overview and readability of the increasing volumes of data, special features are required (e.g., data prioritization, clustering, and selection mechanisms) alongside analytical methods (e.g., temporal data abstraction, principal component analysis, and detection of events). This paper addresses the problem of improving the integration of the visual and analytical methods applied to medical monitoring systems. We present a knowledge- and machine learning-based approach to support the knowledge discovery process with appropriate analytical and visual methods, and discuss its potential benefit to the development of user interfaces for intelligent monitors that can assist with the detection and explanation of new, potentially threatening medical events. The proposed hybrid reasoning architecture provides an interactive graphical user interface for adjusting the parameters of the analytical methods according to the user's task at hand. The action sequences performed by the user on the graphical user interface are consolidated in a dynamic knowledge base through a specific hybrid reasoning scheme that integrates symbolic and connectionist approaches. Capturing these sequences of expert knowledge acquisition can make knowledge emergence easier in similar future situations and positively impact the monitoring of critical situations. The resulting graphical user interface, incorporating user-centered visual analysis, facilitates the natural and effective representation of clinical information for patient care.
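
    As a rough illustration of one analytical method the abstract mentions (principal component analysis for event detection), the sketch below flags monitoring samples with a high PCA reconstruction error. It is not the paper's architecture; the data, channel layout, and threshold are invented for the example.

```python
import numpy as np

# Hedged sketch: detect unusual monitoring samples via PCA reconstruction
# error. Everything here (channels, noise model, threshold) is an assumption.
rng = np.random.default_rng(0)

# Simulated vital-sign matrix: 200 time points x 4 channels, with the first
# two channels correlated (as vital signs often are).
normal = rng.normal(0.0, 1.0, size=(200, 4))
normal[:, 1] = 0.8 * normal[:, 0] + 0.2 * normal[:, 1]

# "Fit" PCA on the normal data: centre it, then keep the top components.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]  # keep 2 of 4 principal directions

def reconstruction_error(x):
    """Distance between a sample and its projection onto the PCA subspace."""
    z = (x - mean) @ components.T  # project onto the retained components
    x_hat = z @ components + mean  # reconstruct in the original space
    return float(np.linalg.norm(x - x_hat))

# Threshold: 99th percentile of errors seen on normal data.
threshold = np.quantile([reconstruction_error(row) for row in normal], 0.99)

# A sample that breaks the learned channel correlation scores high.
event = np.array([4.0, -4.0, 0.0, 0.0])
print(reconstruction_error(event) > threshold)
```

    In a monitoring interface of the kind described, such a score could drive prioritization of which channels to surface, with the threshold adjustable from the GUI.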

    A probabilistic interpretation of set-membership filtering: application to polynomial systems through polytopic bounding

    Set-membership estimation is usually formulated in the context of set-valued calculus, and no probabilistic calculations are necessary. In this paper, we show that set-membership estimation can be equivalently formulated in the probabilistic setting by employing sets of probability measures. Inference in set-membership estimation is thus carried out by computing expectations with respect to the updated set of probability measures P, as in the probabilistic case. In particular, it is shown that inference can be performed by solving a particular semi-infinite linear programming problem, which is a special case of the truncated moment problem in which only the zero-th order moment is known (i.e., the support). By writing the dual of this semi-infinite linear programming problem, it is shown that, if the nonlinearities in the measurement and process equations are polynomial, and if the bounding sets for the initial state and the process and measurement noises are described by polynomial inequalities, then an approximation of the semi-infinite linear programming problem can be obtained efficiently using the theory of sum-of-squares polynomial optimization. We then derive a smart greedy procedure to compute a polytopic outer-approximation of the true membership set, by computing the minimum-volume polytope that outer-bounds the set that includes all the means computed with respect to P.
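
    The predict/intersect cycle at the heart of set-membership filtering can be illustrated with the simplest polytope, an interval. This is only a hedged sketch: the scalar system, noise bounds, and measurements below are assumptions, and none of the paper's semi-infinite programming or sum-of-squares machinery is reproduced.

```python
# Minimal interval sketch of set-membership filtering (a box is the simplest
# polytopic outer-approximation). All numbers are illustrative assumptions.

def predict(interval, a=0.9, w_bound=0.1):
    """Propagate x' = a*x + w, with |w| <= w_bound, through the interval."""
    lo, hi = interval
    lo, hi = min(a * lo, a * hi), max(a * lo, a * hi)
    return lo - w_bound, hi + w_bound  # inflate by the process-noise bound

def update(interval, y, v_bound=0.2):
    """Intersect with the set consistent with y = x + v, |v| <= v_bound."""
    lo, hi = interval
    return max(lo, y - v_bound), min(hi, y + v_bound)

box = (-1.0, 1.0)           # initial membership set for the state
for y in [0.5, 0.4, 0.35]:  # illustrative measurements
    box = update(predict(box), y)
print(box)
```

    Each cycle inflates the set by the process-noise bound and then cuts it back with the measurement-consistent set, so the interval can only shrink relative to what the bounds allow; the paper's contribution is, in effect, doing this with general polytopes for polynomial systems.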

    Improved decision support for engine-in-the-loop experimental design optimization

    Experimental optimization with hardware in the loop is a common procedure in engineering and has been the subject of intense development, particularly when applied to relatively complex combinatorial systems that are not completely understood, or where accurate modelling is not possible owing to the dimensions of the search space. A common source of difficulty is the level of noise associated with experimental measurements, a combination of limited instrument precision and extraneous factors. When a series of experiments is conducted to search for a combination of input parameters that results in a minimum or maximum response, the underlying shape of the function being optimized can, under the imposition of noise, become very difficult to discern or even be lost. A common methodology to support the experimental search for optimal or suboptimal values is to use one of the many gradient descent methods. However, even sophisticated and proven methodologies, such as simulated annealing, can be significantly challenged in the presence of noise, since approximating the gradient at any point becomes highly unreliable: experiments are often accepted as a result of random noise when they should be rejected, and vice versa. This is also true for other sampling techniques, including tabu search and evolutionary algorithms. After the general introduction, this paper is divided into two main sections (sections 2 and 3), which are followed by the conclusion. Section 2 introduces a decision support methodology based upon response surfaces, which supplements experimental management based on a variable neighbourhood search and is shown to be highly effective in directing experiments in the presence of significant noise and complex combinatorial functions. The methodology is developed on a three-dimensional surface with multiple local minima, a large basin of attraction, and a low signal-to-noise ratio.
In section 3, the methodology is applied to an automotive combinatorial search in the laboratory, on a real-time engine-in-the-loop application. In this application, it is desired to find the maximum power output of an experimental single-cylinder spark ignition engine operating under a quasi-constant-volume operating regime. Under this regime, the piston is slowed at top dead centre to achieve combustion in close to constant-volume conditions. As part of the further development of the engine to incorporate a linear generator for the investigation of free-piston operation, it is necessary to perform a series of experiments with combinatorial parameters. The objective is to identify the maximum power point in the smallest number of experiments in order to minimize costs. This test programme provides peak power data in order to achieve an optimal electrical machine design. The decision support methodology is combined with standard optimization and search methods, namely gradient descent and simulated annealing, in order to study the possible reductions in experimental iterations. It is shown that the decision support methodology significantly reduces the number of experiments necessary to find the maximum power solution and thus offers a potentially significant cost saving to hardware-in-the-loop experimentation.
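
    To illustrate why noise complicates Metropolis-style acceptance decisions, and how averaging repeated measurements (a crude stand-in for the paper's response-surface smoothing) helps, here is a hedged sketch of simulated annealing on a noisy one-dimensional objective; the objective, noise level, and cooling schedule are all invented.

```python
import math
import random

# Hedged sketch: simulated annealing on a noisy objective. Not the paper's
# methodology; the quadratic objective and all parameters are assumptions.
random.seed(1)

def measure(x, repeats=5):
    """Noisy 'experiment': true cost (x - 2)^2 plus measurement noise,
    averaged over several repeats to reduce the noise on each estimate."""
    return sum((x - 2.0) ** 2 + random.gauss(0.0, 0.5)
               for _ in range(repeats)) / repeats

x = 0.0                 # starting input parameter
best = (x - 2.0) ** 2   # best *true* cost visited (for illustration only)
temp = 1.0
for step in range(200):
    candidate = x + random.uniform(-0.5, 0.5)
    delta = measure(candidate) - measure(x)  # noisy cost difference
    # Metropolis acceptance: always take apparent improvements, sometimes
    # take worse moves so the search can escape local minima. Under noise,
    # a "worse" move can look like an improvement, and vice versa.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate
    temp *= 0.98  # geometric cooling
    best = min(best, (x - 2.0) ** 2)
print(round(x, 2))  # final iterate, expected to settle near x = 2
```

    Without the averaging in `measure`, the sign of `delta` is frequently wrong near the optimum, which is exactly the failure mode the abstract describes; a fitted response surface attacks the same problem more efficiently by pooling information across all experiments.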

    Predicting Risk for Deer-Vehicle Collisions Using a Social Media Based Geographic Information System

    As an experiment investigating social media as a data source for making management decisions, photo sharing websites were searched for data on deer sightings. Data about deer density and location are important factors in decisions related to herd management and transportation safety, but such data are often limited or not available. Results indicate that, when combined with simple rules, data from photo sharing websites reliably predicted the location of road segments with high risk for deer-vehicle collisions, as reported by volunteers to an internet site tracking roadkill. Use of Google Maps as the GIS platform was helpful in plotting and sharing data, measuring road segments and other distances, and overlaying geographical data. The ability to view satellite images and panoramic street views proved to be particularly useful. As a general conclusion, the two independently collected sets of data from social media provided consistent information, suggesting the investigative value of this data source. Overlaying two independently collected data sets can be a useful step in evaluating or mitigating reporting bias and human error in data taken from social media.
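
    A hedged sketch of the kind of "simple rule" the abstract alludes to: count geotagged sightings near each road segment and flag segments whose count exceeds a threshold. The coordinates, radius, and threshold below are invented for illustration and are not the study's actual rule.

```python
import math

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

# Hypothetical road-segment midpoints and geotagged photo sightings.
segments = {"A": (44.50, -89.50), "B": (44.60, -89.40)}
sightings = [(44.501, -89.499), (44.502, -89.501), (44.499, -89.502),
             (44.601, -89.401)]

def high_risk(segments, sightings, radius_km=1.0, threshold=3):
    """Flag segments with at least `threshold` sightings within `radius_km`."""
    counts = {name: sum(haversine_km(mid, s) <= radius_km for s in sightings)
              for name, mid in segments.items()}
    return [name for name, n in counts.items() if n >= threshold]

print(high_risk(segments, sightings))
```

    Segment A has three nearby sightings and is flagged; segment B, with one, is not. A validation of the sort the study performed would compare the flagged segments against independently reported roadkill locations.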