
    Imperfect Information and the Business Cycle

    Imperfect information has played a prominent role in modern business cycle theory. We assess its importance by estimating the New Keynesian (NK) model under alternative informational assumptions. One version focuses on confusion between temporary and persistent disturbances; another on unobserved variation in the inflation target of the central bank; a third on persistent misperceptions of the state of the economy (measurement error); and a fourth assumes perfect information (the standard NK-DSGE version). We find that imperfect information has considerable explanatory power for business fluctuations. Signal extraction appears to provide a conceptually satisfactory, empirically plausible, and quantitatively important business cycle mechanism.
    Keywords: New Keynesian model, imperfect information, signal extraction, Bayesian estimation
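
    The signal-extraction mechanism the abstract refers to can be illustrated with a textbook permanent/transitory decomposition: an agent observes only the sum of a persistent and a temporary shock and infers the persistent component with a Kalman filter. A minimal sketch follows; the state-space parameters are hypothetical illustrations, not the paper's estimated model.

```python
import numpy as np

# Observed y_t = x_t + e_t, where x_t = rho * x_{t-1} + u_t.
# x_t is the persistent disturbance, e_t the temporary one; the agent
# sees only y_t and must infer x_t (signal extraction).
rho, sigma_u, sigma_e = 0.95, 0.5, 1.0   # hypothetical parameters

rng = np.random.default_rng(0)
T = 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = rho * x[t - 1] + sigma_u * rng.standard_normal()
y = x + sigma_e * rng.standard_normal(T)

# Scalar Kalman filter: recursively update the estimate of x_t.
x_hat, P = 0.0, 1.0
estimates = []
for obs in y:
    # Predict one step ahead
    x_pred = rho * x_hat
    P_pred = rho**2 * P + sigma_u**2
    # Update with the noisy observation
    K = P_pred / (P_pred + sigma_e**2)   # Kalman gain
    x_hat = x_pred + K * (obs - x_pred)
    P = (1 - K) * P_pred
    estimates.append(x_hat)

print("RMSE of filtered persistent component:",
      np.sqrt(np.mean((np.array(estimates) - x) ** 2)))
```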

    Reliability approach in spacecraft structures

    This paper presents an application of the probabilistic approach, with reliability assessment, to a spacecraft structure. The adopted strategy uses meta-modeling with first- and second-order polynomial functions. The method aims to minimize computational time while giving relevant results. The first part focuses on the computational tools employed in developing the strategy. The second part presents a spacecraft application. The purpose is to highlight the benefits of the probabilistic approach compared with the current deterministic one. Using examples of reliability assessment, we show some of the advantages that could be obtained in industrial applications.
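
    The meta-modeling strategy can be sketched as follows: fit a second-order polynomial response surface to a small number of expensive limit-state evaluations, then run Monte Carlo on the cheap surrogate to estimate the failure probability. The limit-state function and distributions below are hypothetical stand-ins, not the paper's spacecraft model.

```python
import numpy as np

# Hypothetical limit-state function g(x): failure when g(x) < 0.
# In practice this would be an expensive finite-element run.
def g(x):
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

rng = np.random.default_rng(1)

# 1. Design of experiments: a handful of "expensive" model evaluations.
X = rng.normal(size=(30, 2))
y = g(X)

# 2. Second-order polynomial response surface fitted by least squares:
#    g_hat(x) = a0 + a1*x1 + a2*x2 + a3*x1^2 + a4*x2^2 + a5*x1*x2
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# 3. Monte Carlo on the cheap surrogate to estimate failure probability.
X_mc = rng.normal(size=(200_000, 2))
pf = np.mean(features(X_mc) @ coef < 0.0)
print(f"Estimated failure probability: {pf:.4f}")
```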

    Application of response surfaces to the reliability analysis of a space structure

    This paper presents an application of response surfaces to the reliability analysis of a satellite structure. The meta-models are built by iterative regression in which only the significant terms are selected from a list of candidate regressors determined beforehand by a sensitivity analysis. The meta-models are then verified by a bootstrap method in which the variability observed in the predictions is taken into account in the computation of the failure probabilities, in order to validate the result.
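
    The bootstrap verification step can be sketched in the same spirit as the previous example: resample the design points, refit the response surface, and propagate the refit-to-refit variability into the failure-probability estimate. Again, the limit-state function is a hypothetical stand-in.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for an expensive limit-state function.
def g(x):
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

X = rng.normal(size=(30, 2))
y = g(X)
X_mc = rng.normal(size=(100_000, 2))
F_mc = features(X_mc)

# Bootstrap: refit the response surface on resampled designs and
# carry the resulting variability into the failure probability.
pf_samples = []
for _ in range(200):
    idx = rng.integers(0, len(X), size=len(X))   # resample with replacement
    coef, *_ = np.linalg.lstsq(features(X[idx]), y[idx], rcond=None)
    pf_samples.append(np.mean(F_mc @ coef < 0.0))

pf_samples = np.array(pf_samples)
print(f"Pf: {pf_samples.mean():.4f} "
      f"(bootstrap 90% interval: {np.percentile(pf_samples, 5):.4f}"
      f"-{np.percentile(pf_samples, 95):.4f})")
```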

    Classification of Melanoma Lesions Using Sparse Coded Features and Random Forests

    Malignant melanoma is the most dangerous type of skin cancer, yet it is also the most treatable, provided it is diagnosed early, which remains a challenging task for clinicians and dermatologists. In this regard, computer-aided diagnosis (CAD) systems based on machine learning and image processing techniques have been developed to differentiate melanoma lesions from benign and dysplastic nevi using dermoscopic images. Generally, these frameworks are composed of sequential processes: pre-processing, segmentation, and classification. This architecture faces two main challenges: (i) each process is complex, requires a set of parameters to be tuned, and is specific to a given dataset; (ii) the performance of each process depends on the previous one, so errors accumulate throughout the framework. In this paper, we propose a framework for melanoma classification based on sparse coding that does not rely on any pre-processing or lesion segmentation. Our framework uses a Random Forests classifier and sparse representations of three features: SIFT, Hue and Opponent angle histograms, and RGB intensities. The experiments are carried out on the public PH² dataset using 10-fold cross-validation. The results show that the sparse-coded SIFT feature achieves the highest performance, with sensitivity and specificity of 100% and 90.3%, respectively, with a dictionary size of 800 atoms and a sparsity level of 2. Furthermore, the descriptor based on RGB intensities achieves similar results, with sensitivity and specificity of 100% and 71.3%, respectively, for a smaller dictionary size of 100 atoms. In conclusion, dictionary learning techniques encode strong structures of dermoscopic images and provide discriminant descriptors.
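
    The sparse-coding-plus-Random-Forests pipeline can be sketched with scikit-learn. The sketch below uses random arrays as stand-ins for SIFT descriptors (real SIFT extraction and PH² loading are omitted), a reduced dictionary size for speed, and a max-pooling step whose exact form in the paper is an assumption on our part; the sparsity level of 2 mirrors the abstract.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: each image yields a set of local descriptors
# (e.g., 128-D SIFT); here we fake 60 images x 50 descriptors each.
n_images, n_desc, desc_dim = 60, 50, 128
descriptors = rng.normal(size=(n_images, n_desc, desc_dim))
labels = rng.integers(0, 2, size=n_images)   # melanoma vs. non-melanoma

# Learn a dictionary over all local descriptors; OMP with at most
# 2 active atoms per code matches the abstract's sparsity level
# (the paper's best SIFT run used 800 atoms; 100 here for speed).
dico = MiniBatchDictionaryLearning(
    n_components=100,
    transform_algorithm="omp",
    transform_n_nonzero_coefs=2,
    random_state=0,
)
dico.fit(descriptors.reshape(-1, desc_dim))

# Encode each image's descriptors and max-pool the sparse codes into
# one fixed-length image descriptor (a common pooling choice).
codes = dico.transform(descriptors.reshape(-1, desc_dim))
image_features = np.abs(codes).reshape(n_images, n_desc, -1).max(axis=1)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, image_features, labels, cv=10))
```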

    Evaluation of an Approach Stabilization Advisory system in a B737 full flight simulator

    Unstabilized approaches have been identified as a major causal factor in approach and landing accidents (e.g., off-runway touchdowns, tail strikes). In the D3CoS project, we conducted experiments to analyze pilots' workload during approaches: 15 type-rated commercial pilots each flew four different approaches in a B737 full flight simulator. Approach geometry, wind, and weather conditions were manipulated in order to induce unstabilized approaches. The pilot flying's eye gaze, heart rate, and subjective workload ratings (NASA TLX) were collected. Flight data were also recorded and aggregated with an algorithm to provide a stabilization performance indicator. Flight data analysis suggests that the scenarios were able to induce unstabilized approaches. Moreover, our results showed that only half of the unstabilized approaches were subjectively perceived as critical by the participants. Interestingly, a scenario at Dalaman airport was particularly effective at inducing unstabilized approaches and elicited higher physiological responses as well as higher NASA TLX scores. The next step is to implement an Approach Stabilization Advisory System (AStA) that monitors aircraft performance/configuration and the pilot's behavior/cognitive state. When AStA detects a potential occurrence of an unstabilized approach, it suggests corrective actions to restabilize the approach or to go around. AStA will be tested in the next experimental campaign of D3CoS.
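
    The study's exact aggregation algorithm is not given in the abstract, but a stabilization performance indicator of this kind can be sketched from the widely cited stabilized-approach criteria (on path, on speed, acceptable sink rate, landing configuration below a stabilization gate). All thresholds and the aggregation below are illustrative assumptions, not the study's algorithm.

```python
from dataclasses import dataclass

@dataclass
class ApproachSample:
    altitude_ft: float          # height above airport elevation
    glideslope_dev_dots: float
    localizer_dev_dots: float
    speed_dev_kt: float         # deviation from target approach speed
    sink_rate_fpm: float
    gear_down: bool
    flaps_landing: bool

def violations(s: ApproachSample) -> list[str]:
    """Which stabilization criteria this flight-data sample violates."""
    v = []
    if abs(s.glideslope_dev_dots) > 1.0:
        v.append("glideslope")
    if abs(s.localizer_dev_dots) > 1.0:
        v.append("localizer")
    if not (-5.0 <= s.speed_dev_kt <= 20.0):
        v.append("speed")
    if s.sink_rate_fpm > 1000.0:
        v.append("sink rate")
    if not (s.gear_down and s.flaps_landing):
        v.append("configuration")
    return v

def stabilization_indicator(samples: list[ApproachSample],
                            gate_ft: float = 1000.0) -> float:
    """Fraction of below-gate samples meeting all criteria (1.0 = stable)."""
    below = [s for s in samples if s.altitude_ft <= gate_ft]
    if not below:
        return 1.0
    return sum(not violations(s) for s in below) / len(below)
```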

    Support vector machine for functional data classification

    In many applications, input data are sampled functions taking their values in infinite-dimensional spaces rather than standard vectors. This has consequences for data analysis algorithms and motivates their modification. Indeed, most traditional tools for regression, classification, and clustering have been adapted to functional inputs under the general name of Functional Data Analysis (FDA). In this paper, we investigate the use of Support Vector Machines (SVMs) for functional data analysis, focusing on the problem of curve discrimination. SVMs are large-margin classifiers based on implicit nonlinear mappings of the data into high-dimensional spaces via kernels. We show how to define simple kernels that take into account the functional nature of the data and lead to consistent classification. Experiments conducted on real-world data emphasize the benefit of taking into account some functional aspects of the problems.
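
    One simple instance of a functional kernel is to apply a standard kernel to a functional transformation of the curves, such as their derivatives. The sketch below uses a precomputed RBF Gram matrix on numerically differentiated synthetic curves; it illustrates the general recipe rather than the specific kernels and consistency results of the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic curves sampled on a common grid: two classes that differ
# in shape (frequency) rather than level, so derivatives are informative.
t = np.linspace(0, 1, 100)
def make_class(freq, n):
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=(n, t.size))
X = np.vstack([make_class(1.0, 50), make_class(2.0, 50)])
y = np.repeat([0, 1], 50)

# "Functional" kernel: a standard RBF kernel applied to the
# numerically differentiated curves.
def functional_gram(A, B):
    dA = np.gradient(A, t, axis=1)
    dB = np.gradient(B, t, axis=1)
    return rbf_kernel(dA, dB, gamma=0.1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="precomputed").fit(functional_gram(X_tr, X_tr), y_tr)
print("Test accuracy:", clf.score(functional_gram(X_te, X_tr), y_te))
```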